Maximize Performance by Setting log_checkpoint_interval
Checkpoint pros and cons
Checkpoints write the dirty (modified) blocks in the Oracle buffer cache to the datafiles. A checkpoint occurs at every redo log switch, and if a previous checkpoint is still in progress, the checkpoint forced by the log switch overrides it.
Redo logs must therefore be well sized, so that frequent log switches do not trigger unnecessary checkpoints.
Frequent checkpoints enable faster recovery after a crash, but they can degrade performance during normal processing. A checkpoint can be a highly resource-intensive operation because every datafile header is frozen while the checkpoint runs.
For a large database with hundreds of datafiles, frequent checkpointing therefore means the entire database pauses each time a checkpoint is issued.
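To make the tradeoff concrete, here is a back-of-the-envelope sketch; the redo log size and redo generation rate are hypothetical numbers, not values from the lesson:

```python
# Hypothetical sketch: how often a log switch -- and therefore a
# checkpoint -- fires for a given redo log size and redo generation rate.

def minutes_between_switches(redo_log_mb: float, redo_rate_mb_per_min: float) -> float:
    """A log fills (and switches) every redo_log_mb / redo_rate minutes."""
    return redo_log_mb / redo_rate_mb_per_min

# A 50MB redo log filling at 10MB/min switches every 5 minutes;
# a 5MB log at the same rate switches every 30 seconds, checkpointing
# the database ten times as often.
print(minutes_between_switches(50, 10))  # 5.0
print(minutes_between_switches(5, 10))   # 0.5
```

The point of the sketch is simply that halving the redo log size doubles the checkpoint frequency, all else being equal.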
The init.ora parameter log_checkpoint_interval controls how often a checkpoint is performed. The interval is measured in operating-system blocks written to the redo log. If log_checkpoint_interval is larger than the size of the redo log, a checkpoint occurs only when Oracle performs a log switch from one group to another, which is ideal.

For example, on UNIX systems the OS block size is 512 bytes. If log_checkpoint_interval is set to 100,000, then a checkpoint occurs after roughly 50MB (512 x 100,000 = 51,200,000 bytes) has been written to the redo log. If your redo log is 50MB, you are taking one checkpoint per log.

The next lesson wraps up this module.
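The arithmetic above can be sketched directly; the 50MB redo log size is the lesson's example, and the block count is derived from it:

```python
# Sketch of the sizing arithmetic: choose log_checkpoint_interval so a
# checkpoint fires no more often than once per log switch.

OS_BLOCK_SIZE = 512                # bytes; typical UNIX OS block size
REDO_LOG_BYTES = 50 * 1024 * 1024  # a 50MB redo log, per the example

# Checkpoint interval, in OS blocks, that exactly spans one redo log.
# Any value at or above this means checkpoints occur only at log switches.
interval = REDO_LOG_BYTES // OS_BLOCK_SIZE
print(interval)  # 102400

# The lesson's round figure of 100,000 blocks is just under this:
# 512 * 100,000 = 51,200,000 bytes, or roughly 50MB.
print(512 * 100_000)  # 51200000
```

In practice you would set log_checkpoint_interval in init.ora to a value at least as large as the computed block count for your redo log size.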