Instance Architecture

Lesson 11: Advancing the checkpoint
Objective: Explain why checkpointing causes data to be written from the buffer cache to disk.

Minimizing gaps between redo logs and datafiles

Advancing the checkpoint can also trigger the database writer to write dirty blocks back to disk. This minimizes the number of changes that must be redone if the system crashes or the Oracle instance fails. Recall that each change to the database is assigned a system change number (SCN), and that this number corresponds to the redo log record written for that change. To prevent inordinately large gaps between the content of the redo logs and the content of the datafiles, Oracle periodically writes the changed blocks back to the datafiles. This is referred to as checkpointing.
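As a rough illustration, you can observe the gap between the redo stream and the datafiles, and close it on demand, from a privileged SQL*Plus session. The following is a minimal sketch assuming access to the V$DATABASE view; the commands are standard Oracle SQL, but the output depends on your instance.

    -- Compare the current SCN with the SCN up to which the datafiles are
    -- checkpointed; the difference is the redo that would have to be
    -- reapplied after a crash.
    SELECT current_scn, checkpoint_change#
      FROM v$database;

    -- Request an immediate checkpoint: the database writer flushes all dirty
    -- buffers to disk, and the checkpoint SCN recorded in the control file
    -- and datafile headers advances.
    ALTER SYSTEM CHECKPOINT;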

Writing frequently-changed blocks to disk

The database buffer cache is part of the reason that checkpointing needs to occur. The buffer cache exists to keep the most frequently accessed data blocks in memory. Blocks that are changed a lot will tend to stay at the most recently used end of the LRU list. Because of this, they might never be forced back to disk by incoming data, as the previous lesson describes. Instead, they might sit in memory forever. You could make a million changes to them, and none of those changes would be written to disk. Obviously, that is not a good situation. Enter checkpointing. Oracle periodically takes a system change number, and ensures that all changes up through and including that change are written to disk. By default, a checkpoint occurs every time a redo log switch occurs.
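To make this concrete, here is a hedged sketch of how you might look at the dirty buffers currently in the cache and then trigger a checkpoint through a log switch. It assumes SYSDBA access to the V$BH view; the column names are from the standard dynamic performance views.

    -- Count buffers in the cache that have been modified but not yet written
    -- back to the datafiles (DIRTY = 'Y' in V$BH).
    SELECT COUNT(*) AS dirty_buffers
      FROM v$bh
     WHERE dirty = 'Y';

    -- Forcing a redo log switch triggers a checkpoint by default, which in
    -- turn causes the database writer to write those dirty buffers to disk.
    ALTER SYSTEM SWITCH LOGFILE;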

Overview of Checkpoints

A checkpoint is a crucial mechanism in consistent database shutdowns, instance recovery, and Oracle Database operation generally. The term checkpoint has the following related meanings:
  1. A data structure that indicates the checkpoint position, which is the SCN in the redo stream where instance recovery must begin. The checkpoint position is determined by the oldest dirty buffer in the database buffer cache. The checkpoint position acts as a pointer to the redo stream and is stored in the control file and in each data file header (see the query sketch after this list).
  2. The writing of modified database buffers in the database buffer cache to disk.
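The checkpoint position recorded in the control file and in each datafile header can be inspected directly. The following is a minimal sketch, assuming access to V$DATABASE and V$DATAFILE_HEADER:

    -- Checkpoint SCN as recorded in the control file
    SELECT checkpoint_change#
      FROM v$database;

    -- Checkpoint SCN as recorded in each datafile header; after a clean
    -- checkpoint these values match the control file value.
    SELECT file#, checkpoint_change#
      FROM v$datafile_header;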

Redo Log Files

Every Oracle database has a set of two or more redo log files. The set of redo log files is collectively known as the redo log for the database. A redo log is made up of redo entries (also called redo records). The primary function of the redo log is to record all changes made to data. If a failure prevents modified data from being permanently written to the datafiles, then the changes can be obtained from the redo log, so work is never lost. To protect against a failure involving the redo log itself, Oracle allows a multiplexed redo log so that two or more copies of the redo log can be maintained on different disks.
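For illustration, you can list the redo log groups and their members, and add an extra member to multiplex a group. This is a sketch only; the group number and file path below are hypothetical and must match your own configuration.

    -- List redo log groups and how many members (copies) each one has
    SELECT group#, sequence#, members, status
      FROM v$log;

    -- List the individual member files of each group
    SELECT group#, member
      FROM v$logfile;

    -- Multiplex group 1 by adding a second member on a different disk
    -- (the path and group number are hypothetical)
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/redo01b.log' TO GROUP 1;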
The information in a redo log file is used only to recover the database from a system or media failure that prevents database data from being written to the datafiles. For example, if an unexpected power outage terminates database operation, then data in memory cannot be written to the datafiles, and the data is lost. However, lost data can be recovered when the database is opened, after power is restored. By applying the information in the most recent redo log files to the database datafiles, Oracle restores the database to the time at which the power failure occurred. The process of applying the redo log during a recovery operation is called rolling forward.
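As a rough outline of rolling forward, the following SQL*Plus sequence applies redo to restored datafiles and then opens the database. It is a sketch of a simple complete recovery, not a full procedure; restoring the damaged files from backup and any ARCHIVELOG considerations come first.

    -- After the lost or damaged datafiles have been restored from backup,
    -- mount the database without opening it:
    STARTUP MOUNT
    -- Apply redo log entries to roll the datafiles forward:
    RECOVER DATABASE
    -- Open the database once recovery is complete:
    ALTER DATABASE OPEN;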