RelationalDBDesign


Archiving Redo Log

Monitoring the Rate of Redo Log Generation

If you are running in ARCHIVELOG mode, it is good to know how much redo your database generates each day. This information helps you determine whether your archive log destination is large enough to hold all the log files that must be archived. If, for example, your database generates 500 megabytes of redo per day and your archive log destination can hold only 2 gigabytes of data, you need to clear out the archive log destination directory at least every four days.
One way to monitor the rate of redo log generation is to use operating system commands to list all the redo log files generated each day, count them, and multiply the count by the log file size. Another approach is to query the v$log_history view. The query in the following example returns the number of redo log files started per day:

SQL> SELECT TRUNC(first_time), COUNT(*)
  2  FROM v$log_history
  3  GROUP BY TRUNC(first_time)
  4  ORDER BY TRUNC(first_time);

TRUNC(FIRST_TIME)     COUNT(*)
-------------------- ---------
08-Nov-1999 12:00:00         2
22-Nov-1999 12:00:00         7
23-Nov-1999 12:00:00         1
26-Nov-1999 12:00:00         3
28-Nov-1999 12:00:00         4
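If all your online redo log files are the same size, the count can be converted to megabytes directly in the query. The following is a sketch, not a definitive implementation: it assumes that every log group is the same size, so MAX(bytes) from v$log stands in for the common file size, and the mb_per_day column alias is illustrative.

SQL> SELECT TRUNC(h.first_time), COUNT(*),
  2         COUNT(*) * (SELECT MAX(bytes) FROM v$log) / 1024 / 1024 mb_per_day
  3  FROM v$log_history h
  4  GROUP BY TRUNC(h.first_time)
  5  ORDER BY TRUNC(h.first_time);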

Useful Only with Uniform Log Sizes

This information is useful only if all your log files are the same size. In this example, if the log files are each 10 megabytes, you can see that the rate of redo generation ranges from 20 megabytes (2 * 10 megabytes) to 70 megabytes (7 * 10 megabytes) per day. One caveat: the date in the FIRST_TIME column is not the date on which the log file was filled. Rather, it is the date on which Oracle started writing to the log file. Still, the results are generally close enough for the purpose of determining the rate of log file generation.
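If you need a measurement that does not depend on uniform log sizes, releases that provide the v$archived_log view let you total the actual archived redo rather than estimate it from file counts. The following is a hedged sketch assuming that view is available on your release; its blocks and block_size columns together give each archived file's size, and completion_time is when archiving finished.

SQL> SELECT TRUNC(completion_time),
  2         SUM(blocks * block_size) / 1024 / 1024 archived_mb
  3  FROM v$archived_log
  4  GROUP BY TRUNC(completion_time)
  5  ORDER BY TRUNC(completion_time);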