Oracle Instance

Large UNIX blocks

Unlike their mainframe ancestors, which allowed blocks of up to 16,000 bytes, large UNIX blocks are not always desirable for Oracle because of the way UNIX handles its page I/O. Remember, disk I/O is the single largest source of slowdown in a client/server system, and the more relevant data that can be fetched in a single I/O, the better the performance. As UNIX systems became increasingly powerful, a number of system vendors and UNIX independent software vendors developed a requirement to access files that contain more information than can be addressed using a signed long integer (2 GB on typical 32-bit systems). Several major system vendors and users met at the "Large File Summit" (LFS) to develop a set of changes to the existing Single UNIX Specification (SUS) that allow both new and converted programs to address files of arbitrary sizes.

This set of changes was submitted to X/Open for inclusion in the next version of the SUS. In addition, a set of transitional extensions was proposed to permit users to implement large-file support immediately on typical 32-bit UNIX operating systems.

Buffer Cache and SGA

The buffer cache is the in-memory area of the SGA where incoming Oracle data blocks are kept. On standard UNIX databases, data is read from disk into the UNIX buffer cache and then transferred into the Oracle buffer cache. The size of the buffer cache has a direct impact on Oracle system performance: the larger the buffer cache, the greater the likelihood that data from a prior transaction will still reside in a buffer, thereby avoiding a physical disk I/O. When a request is made to retrieve data, Oracle first checks its internal memory structures to see whether the data is already in a buffer; this is how Oracle avoids performing unnecessary I/O. Ideally, you could create one buffer for each database page, ensuring that Oracle would read each block only once.
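A quick way to see how well the buffer cache is performing is to compute the buffer cache hit ratio from the v$sysstat view. The sketch below assumes the classic statistic names ('db block gets', 'consistent gets', and 'physical reads'); the exact names can vary between Oracle releases.

    -- Buffer cache hit ratio = 1 - (physical reads / logical reads).
    -- A value close to 1 means most requests were satisfied from memory.
    SELECT 1 - (phys.value / (gets.value + cons.value)) AS buffer_hit_ratio
      FROM v$sysstat phys,
           v$sysstat gets,
           v$sysstat cons
     WHERE phys.name = 'physical reads'
       AND gets.name = 'db block gets'
       AND cons.name = 'consistent gets';

A ratio that remains low after the instance has warmed up suggests the buffer cache may be too small for the working set.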

Real-memory Buffers

Only a limited number of real-memory buffers can be allocated, and Oracle manages this memory for you. Oracle uses a least recently used (LRU) algorithm to determine which database pages are flushed from memory when the cache is full.
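In Oracle releases of this vintage, the number of buffers is set with the db_block_buffers initialization parameter in the init.ora file and takes effect at instance startup; the sketch below shows one way to check the current setting through v$parameter.

    -- Show how many database buffers the instance has allocated.
    -- db_block_buffers is set in init.ora and is read at startup.
    SELECT name, value
      FROM v$parameter
     WHERE name = 'db_block_buffers';

The total memory consumed by the cache is roughly db_block_buffers multiplied by db_block_size.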

Data Block Size

Another related memory issue deals with the size of the database blocks. In most UNIX environments, database blocks are sized to only 4K. The cost of reading an 8K block is not significantly higher than the cost of reading a 2K block; however, the 8K read is of no benefit if you only want a small row in a single table. On the other hand, if tables are commonly read front-to-back, or if you make appropriate use of Oracle clusters, you can reap performance improvements by switching to larger block sizes. For batch-oriented reporting databases, very large block sizes are always recommended. However, many databases are used for online transaction processing during the day, while batch reports are run in the evenings. As a general rule, 8K block sizes will benefit most systems.

Fortunately, Oracle allows for large block sizes, and the db_block_size parameter controls the physical block size of the data files. Unlike other relational databases, Oracle allocates the data files on your behalf when the CREATE TABLESPACE command is issued. Avoid running a full-table scan on a large table: it is one of the worst things that can happen to a buffer cache, because the scanned blocks can flush out blocks that other transactions still need.
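To make this concrete, the sketch below checks the instance's block size and then creates a tablespace. The tablespace name and data file path are hypothetical, and in releases of this era db_block_size is fixed when the database is created.

    -- db_block_size is set at database creation and cannot be changed
    -- without rebuilding the database; check the current value:
    SELECT value
      FROM v$parameter
     WHERE name = 'db_block_size';

    -- Oracle allocates the data file itself when the tablespace is created.
    -- (The tablespace name and file path are illustrative only.)
    CREATE TABLESPACE report_data
        DATAFILE '/u02/oradata/PROD/report_data01.dbf' SIZE 500M;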