One of the most important areas of Oracle tuning is the management of the RAM data buffers. Oracle performance improves dramatically when a physical disk I/O is avoided because the requested data block is already stored in the RAM of the data buffer caches. Historically, an Oracle professional would allocate an area of RAM cache with a size defined by the db_block_buffers parameter (db_cache_size in Oracle 9i) and leave it to Oracle to manage the data blocks within this RAM area. But with the advent of support for multiple block sizes in Oracle 9i, you can manage each individual data buffer pool and segregate tables and indexes into separate data buffers.
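
To make this concrete, here is a minimal Oracle 9i sketch, assuming an spfile, an 8K default block size, and illustrative sizes, names, and file paths of my own: a separate cache is defined for 16K blocks, and a tablespace is created to use it, so its blocks are cached apart from the default pool.

-- Illustrative sizes only; assumes an spfile and an 8K default block size.
ALTER SYSTEM SET db_cache_size     = 256M SCOPE=BOTH;   -- buffer for default (8K) blocks
ALTER SYSTEM SET db_16k_cache_size = 64M  SCOPE=BOTH;   -- separate buffer for 16K blocks

-- Tablespace name and datafile path are hypothetical.
CREATE TABLESPACE idx_16k
  DATAFILE '/u01/oradata/mydb/idx_16k_01.dbf' SIZE 500M
  BLOCKSIZE 16K;   -- blocks from this tablespace are cached in db_16k_cache_size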

Oracle caching in a nutshell
When a SQL statement requests a row from a table, Oracle first checks its internal memory structures to see if the data is already in a data buffer. If the requested data is there, it is returned from RAM, saving a physical I/O operation. With the very large SGAs in some 64-bit releases of Oracle, small databases can be entirely cached, as discussed in “Tweak Oracle data buffer parameters to cache entire databases.” For very large databases, however, the RAM data buffers cannot hold all of the database blocks.
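
One rough way to see how often requests are being satisfied from the buffers rather than from disk is the classic hit ratio calculation against v$sysstat. The sketch below is indicative only; a high ratio by itself does not prove good performance, but it shows the share of logical reads that avoided a physical read.

-- Classic buffer cache hit ratio: 1 - physical reads / (db block gets + consistent gets)
SELECT 1 - (phys.value / (db.value + cons.value)) AS buffer_hit_ratio
  FROM v$sysstat phys,
       v$sysstat db,
       v$sysstat cons
 WHERE phys.name = 'physical reads'
   AND db.name   = 'db block gets'
   AND cons.name = 'consistent gets';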

Oracle has a scheme for keeping frequently used blocks in RAM. When there isn’t enough room in the data buffer for the whole database, Oracle uses a least-recently-used (LRU) algorithm to determine which data blocks are aged out of memory. Oracle keeps an in-memory control structure for each block in the data buffer: new data blocks are inserted at the middle of the data buffer, and every time a block is requested, it moves toward the front of the list. Data blocks that aren’t frequently referenced wind up at the end of the data buffer, where they are eventually overwritten to make room for new data blocks.

Starting with Oracle 8, there are three separate pools of RAM to hold incoming data blocks:

  •  The KEEP pool holds tables that are frequently referenced by the application, such as small tables that undergo frequent full table scans and the application’s reference tables.
  •  The RECYCLE pool is reserved for large tables whose full-table-scan blocks are unlikely to be reread. Directing these scans here keeps the incoming blocks from flushing out blocks of more frequently used tables and indexes.
  •  The DEFAULT pool is used for all table and index accesses that aren’t appropriate for the KEEP or RECYCLE pools (an example of assigning segments to these pools follows this list).

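Assigning a segment to one of these pools is done with the BUFFER_POOL storage clause. The sketch below uses hypothetical table names and assumes the KEEP and RECYCLE pools have already been sized in your initialization parameters.

-- Table names are hypothetical; the pools must already be configured.
ALTER TABLE app.state_codes   STORAGE (BUFFER_POOL KEEP);     -- small, hot reference table
ALTER TABLE app.audit_history STORAGE (BUFFER_POOL RECYCLE);  -- large table scanned and rarely reread
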
Remember, the KEEP and RECYCLE pools are subsets of the DEFAULT pool. Now that I’ve explained the basic mechanisms of the Oracle data buffers, let’s look at how data dictionary queries can help you view the internal contents of the buffers.

Dictionary queries for data buffers
Oracle provides the v$bh view to allow you to view the contents of the data buffers, along with the number of blocks for each segment type in the buffer. This view is especially useful when you are using multiple data buffers and you want to know the amount of caching used for tables and indexes. Joining the v$bh view with dba_objects gives you a block-by-block listing of your data buffer contents and shows you how well your data buffers are caching table and index contents.
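
The full report script appears in Listing A; for orientation, here is a minimal sketch of the kind of join it relies on. Run it from a DBA account, since v$bh and dba_objects require elevated privileges; it simply counts the cached blocks per object.

-- Count cached blocks per object (excluding free buffers).
SELECT o.owner,
       o.object_name,
       o.object_type,
       COUNT(*) AS cached_blocks
  FROM v$bh        b,
       dba_objects o
 WHERE b.objd   = o.data_object_id
   AND b.status != 'free'
 GROUP BY o.owner, o.object_name, o.object_type
 ORDER BY cached_blocks DESC;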

The script in Listing A formats the buffer information into a report you can use to monitor data buffer activity in your database. Figure A shows the output from this report.

Figure A
Data buffer activity report

As you can see from Figure A, this report gives you valuable insight into the tables and indexes that reside inside the data buffer. If you have limited RAM resources for the data buffer caches, the report shows the number of blocks that currently reside in the buffer for each object.

Oracle’s midpoint insertion algorithm tends to segregate each buffer into “hot” and “cold” areas, depending on the frequency with which each data block is read. Each time a data block is re-referenced, it moves to the head of the data block chain on the “hot” side of the data buffer. I find it interesting to run the report repeatedly to watch the data blocks move from cold to hot and back again.