On a standard UNIX database, data is read from disk into the UNIX buffer cache and then transferred into the Oracle buffer cache.
The size of the buffer cache can have a huge impact on Oracle system performance.
The larger the buffer cache, the greater the likelihood that data from a prior transaction will reside in the buffer, thereby avoiding a physical disk I/O.
When a request is made to Oracle to retrieve data, Oracle will first check the internal memory structures to see if the data is already in the buffer. In this fashion, Oracle avoids doing unnecessary I/O. It would be ideal if you could create one buffer for each database page, ensuring that Oracle would read each block only once. However, the cost of memory in the real world makes this prohibitive.
At best, you can only allocate a small number of real-memory buffers, and Oracle will manage this memory for you by using a least-recently-used algorithm to determine which database blocks are to be flushed from memory.
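The least-recently-used behavior described above can be sketched with a small simulation. This is a minimal illustration, not Oracle's actual implementation; the cache capacity and block numbers are hypothetical:

```python
from collections import OrderedDict

# Minimal sketch of least-recently-used buffer management,
# analogous to (but far simpler than) Oracle's buffer cache.
class LRUBufferCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()  # block_id -> block contents
        self.physical_reads = 0       # simulated disk I/Os

    def get_block(self, block_id):
        if block_id in self.buffers:
            # Cache hit: mark the block as most recently used.
            self.buffers.move_to_end(block_id)
            return self.buffers[block_id]
        # Cache miss: simulate a physical disk read.
        self.physical_reads += 1
        if len(self.buffers) >= self.capacity:
            # Flush the least-recently-used block to make room.
            self.buffers.popitem(last=False)
        self.buffers[block_id] = f"data for block {block_id}"
        return self.buffers[block_id]

cache = LRUBufferCache(capacity=2)
cache.get_block(1)   # miss -> physical read
cache.get_block(2)   # miss -> physical read
cache.get_block(1)   # hit  -> no disk I/O
cache.get_block(3)   # miss -> block 2 (least recently used) is flushed
print(cache.physical_reads)   # 3
print(list(cache.buffers))    # [1, 3]
```

Note that re-reading block 1 costs nothing once it is in the cache, while block 2 is flushed as soon as space is needed, exactly the trade-off a larger buffer cache mitigates.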
Poor data buffer performance is generally the result of needed data being flushed from the buffer by high I/O activity.
The best remedy is to increase the db_block_buffers init.ora parameter. In this course, you will learn how to calculate the performance improvements that you would get from increasing data buffers and database block size.
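A common way to gauge how well the data buffer is performing is the buffer cache hit ratio, computed from the db block gets, consistent gets, and physical reads statistics. The sketch below shows the arithmetic; the sampled statistic values are hypothetical:

```python
def buffer_hit_ratio(db_block_gets, consistent_gets, physical_reads):
    """Buffer cache hit ratio: the fraction of logical reads
    satisfied from the buffer without a physical disk I/O."""
    logical_reads = db_block_gets + consistent_gets
    return 1 - physical_reads / logical_reads

# Hypothetical statistics sampled from the instance:
ratio = buffer_hit_ratio(db_block_gets=150_000,
                         consistent_gets=850_000,
                         physical_reads=120_000)
print(f"{ratio:.2%}")  # 88.00%
```

If the ratio rises after adding buffers, the extra memory is paying for itself in avoided disk I/O.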
Another related memory issue is the size of the database blocks. In most UNIX environments, database blocks are sized at only 2K. The cost of reading an 8K block is not significantly higher than the cost of reading a 2K block, although the larger block is of no benefit if you only want a single small row from one table. However, if tables are commonly read front-to-back, or if you make appropriate use of Oracle clusters, you can reap dramatic performance improvements by switching to larger block sizes.
Remember, the marginal cost of reading a large block is quite small, and the more data that can be accessed in a single I/O, the more data will be placed into your data buffer. The only exception to this rule would be an OLTP database that always fetches a very small row, and does not use any adjacent row data in a block. Even in these cases, index node rows commonly reside on the same block.
For batch-oriented reporting databases, very large block sizes are always recommended.
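The benefit for front-to-back reads follows from simple arithmetic: a full scan must issue one read per block, so quadrupling the block size cuts the read count by a factor of four. A sketch, using an illustrative 100 MB table and ignoring multiblock read settings:

```python
# Illustrative arithmetic: reads needed for a full scan of a
# 100 MB table at two block sizes (one I/O per block assumed).
table_bytes = 100 * 1024 * 1024

for block_size in (2 * 1024, 8 * 1024):
    ios = table_bytes // block_size
    print(f"{block_size // 1024}K blocks: {ios} reads")
# 2K blocks: 51200 reads
# 8K blocks: 12800 reads
```

Since the marginal cost of the larger read is small, the 8K configuration does roughly a quarter of the I/O for the same scan.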
The db_block_size parameter controls the block size. Unlike some other relational databases, Oracle allocates the data files on your behalf when you issue the CREATE TABLESPACE command.
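For example, a tablespace creation might look like the following; the tablespace name, file path, and size are hypothetical:

```sql
-- Oracle allocates and formats the data file itself;
-- you only name the file and give its size.
CREATE TABLESPACE ts_report
  DATAFILE '/u01/oradata/PROD/ts_report01.dbf' SIZE 500M;
```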
We will investigate the data buffer in great detail in a later module.
In the next lesson, we will examine tuning considerations of the third important area of the SGA, the redo log buffer.