
Chapter 19

Tuning Disk Utilization

Database Configuration

Use of SYSTEM, RBS, and TEMP Tablespaces

Exercises

Configuring Rollback Segments

Exercises

Configuring the Database for Specific Backup Needs

Exercises

Distributing Files to Reduce I/O Contention

Exercises

Using Disk Striping

Exercises

Tuning Rollback Segments

Using V$ Views to Monitor Rollback Segment Performance

Exercises

Modifying Rollback Segment Configuration

Exercises

Allocating Rollback Segments to Transactions

Exercises

Using Oracle Blocks Efficiently

Determining Block Size

Exercises

Setting PCTFREE and PCTUSED

Examples of PCTFREE and PCTUSED Usage

Exercises

Detecting and Resolving Row Migration

Exercises

Detecting and Resolving Freelist Contention

Exercises

Chapter Summary

Two-Minute Drill

 

Chapter 19

Tuning Disk Utilization

In this chapter, you will understand and demonstrate knowledge in the following areas:

Database configuration
Tuning rollback segments
Tuning Oracle block usage

Disk utilization is one of the strongest features Oracle has developed into its database product. By using background processes and memory areas for the movement of data from memory to disk, Oracle prevents users from having to worry about disk usage during the execution of their processes on the database. This configuration frees user processes from being bound by I/O constraints on all but the most transaction-intensive OLTP systems with large numbers of users. This chapter will focus on several areas of Oracle disk usage tuning, including database configuration, tuning rollback segments, and Oracle block usage. These areas comprise 25 percent of the material tested by OCP Exam 4.

Database Configuration

In this section, you will cover the following areas of database configuration:

Identifying inappropriate use of SYSTEM, RBS, and TEMP tablespaces
Configuring rollback segments
Designing for specific backup needs
Distributing files to reduce I/O contention
Using disk striping

The foundation of a well-tuned database is a well-configured database. Since many performance issues stem from improper configuration, this section is designed to explain Oracle’s recommendations on database configuration. The OCP examination for performance tuning also focuses on this area. Good database configuration starts with the effective use of the tablespaces that are part of the instance.

Use of SYSTEM, RBS, and TEMP Tablespaces

A typical database might have five different types of tablespaces in use to store its objects. Those tablespaces might include SYSTEM, RBS, DATA, INDEX, and TEMP tablespaces. Some of these tablespaces must be created by the DBA before they are available for use. One tablespace, SYSTEM, is automatically generated at database creation. It contains items vital to the Oracle instance, such as the data dictionary. Without exception, the instance must have one rollback segment allocated to it from the SYSTEM tablespace. Consider first the use of the SYSTEM tablespace. SYSTEM contains dictionary objects and SYSTEM rollback segments. This tablespace should not be used for storing other database objects such as tables. As soon as other objects are placed in the SYSTEM tablespace, there can be problems. For example, placing a frequently used table in the SYSTEM tablespace opens up the database to I/O contention every time a user or DBA accesses the data dictionary.
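To spot objects that have strayed into SYSTEM, the DBA can query the DBA_SEGMENTS dictionary view. This is a sketch; the list of owners to exclude may vary by installation:

SELECT owner, segment_name, segment_type
FROM dba_segments
WHERE tablespace_name = 'SYSTEM'
AND owner NOT IN ('SYS', 'SYSTEM');

Any rows returned are candidates for relocation to a DATA or INDEX tablespace.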

Next, consider the RBS tablespace. RBS stands for rollback segment. As discussed in Unit II, rollback segments contain the original versions of data changed by uncommitted transactions. Since rollback segments frequently acquire and relinquish additional extents, they have a tendency to fragment a tablespace. They can be disruptive to other objects in the tablespace, like tables and indexes, which also require contiguous blocks of free space for extents. Placing rollback segments in their own tablespace can alleviate some of the disruptions they create for other objects.

Two other important types of tablespaces are the DATA and INDEX tablespaces. The first type of tablespace can be used to store table data. Typically, the DBA creates several different DATA tablespaces, each containing database tables. If the database contains objects supporting multiple applications, the database tables for each of those applications may be placed in different tablespaces. The other tablespace discussed is the INDEX tablespace. This tablespace contains indexes that correspond to the tables stored in the DATA tablespaces. There are benefits to having separate tablespaces for data objects and indexes. Separate DATA and INDEX tablespaces on different disks can speed retrieval of information.

The final tablespace considered in this discussion is the TEMP tablespace. This tablespace is used for temporary storage of sort information being manipulated by a user process. A user process trying to manage a large sort or a select statement containing the order by clause might utilize temporary storage. Since this type of data is very dynamic, the DBA is again confronted with the issue of a fragmented tablespace, which can be disruptive to other objects as they attempt to allocate additional extents. By default, a user’s temporary storage tablespace is the SYSTEM tablespace. Here, the DBA can avert a performance issue simply by creating all users with a named temporary tablespace. Additionally, the DBA can create users with a named tablespace as the default tablespace. Since the default tablespace for any user when one isn’t explicitly defined by the create user statement is the SYSTEM tablespace, this act reduces the chance of a data object finding its way into a location that may cause problems later. Figure 19-1 indicates proper protocol for user creation.
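For example, the create user statement might name both tablespaces explicitly; the username, password, and tablespace names here are hypothetical:

CREATE USER stacy IDENTIFIED BY stacy_password
DEFAULT TABLESPACE users_01
TEMPORARY TABLESPACE temp_01;

With this statement, none of the user's tables or sorts will land in the SYSTEM tablespace by default.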

Figure 19-1: Protocol for user creation

When specifying object placement in a tablespace, there are several factors to consider to find the appropriate choice. Placing an object in an appropriate tablespace can improve performance on the database and reduce the amount of maintenance required to keep the database optimally tuned. There is an entire school of thought around the proper placement of tablespaces on different disks to avoid disk contention. The highlights of these theories will be covered in a discussion of minimizing I/O contention that will appear later in this section.

Exercises

  1. Identify the contents of the RBS and TEMP tablespaces. Why is it inappropriate to place tables and indexes in RBS and TEMP tablespaces? Why is it inappropriate to place the contents of RBS and TEMP tablespaces in other tablespaces?
  2. What are some reasons not to store data objects like tables in the SYSTEM tablespace? A tablespace earmarked for rollback segments?
  3. Identify inappropriate use of the SYSTEM tablespace related to default and temporary tablespace assignment, as related to userid creation.

Configuring Rollback Segments

Rollback segments have many configurable features that determine how well they perform. At instance startup, if only the SYSTEM tablespace exists for the database, Oracle will attempt to acquire at least the one rollback segment created automatically by Oracle when the database is created. If there are more tablespaces on the database, and the DBA attempts to place objects in those other tablespaces, Oracle will acquire at least one rollback segment other than the SYSTEM rollback segment. There are two types of database rollback segments, public and private. On databases that do not use the Parallel Server option, public and private rollback segments are the same. In databases that use Oracle's Parallel Server, public rollback segments may be acquired by any instance in the parallel configuration, while private segments are acquired only by the instance that names the rollback segment explicitly. Public rollback segments will be acquired based on values specified for two parameters, called TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT. The TRANSACTIONS parameter can be set by the DBA to be an estimated number of concurrent transactions that will occur against the database. TRANSACTIONS_PER_ROLLBACK_SEGMENT is the number of transactions that should be allocated to each rollback segment that is acquired by the instance. Together, they are evaluated by Oracle to determine the number of rollback segments acquired at instance startup: TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT. If Oracle cannot acquire this number of rollback segments at instance startup, then the instance cannot start.
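As an illustration, with the following hypothetical init.ora settings, Oracle would try to acquire 100 / 25 = 4 public rollback segments at startup:

# init.ora (values are illustrative only)
TRANSACTIONS = 100
TRANSACTIONS_PER_ROLLBACK_SEGMENT = 25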

In addition to determining how many rollback segments are acquired at instance startup, the DBA can specify exactly which segments Oracle should acquire. This task is accomplished by using ROLLBACK_SEGMENTS. Private rollback segments are acquired using the ROLLBACK_SEGMENTS parameter. The specific rollback segments Oracle will acquire are listed as values for this parameter, such as ROLLBACK_SEGMENTS=(rbs_01,rbs_02). The DBA may want to specify the rollback segments acquired by the database if there is a rollback segment designed to handle large queries, or if there are rollback segments in a specific instance that must be acquired when using the Parallel Server option.

An important item about rollback segment configuration is their size. There are many different ways to calculate the appropriate sizing of rollback segments, which are based on the number and size of the different transactions that will take place on the database. However, many times it is hard to determine for sure how many different transactions are going to happen on the database at any one time, particularly if there is a large number of users out there, or if there are users with ad hoc query capability. Generally, it is wise to create mostly rollback segments that fit several different sizes and types of queries nicely, while having a few on hand to handle long-running queries and the updates that are inherent in batch processing.

Another important feature to remember about rollback segments is to keep all extents the same size. Though configuring rollback segment extents to be the same size is not required, Oracle began partial enforcement of this recommendation in Oracle7 by eliminating the pctincrease clause from the syntax of the create rollback segment statement. To determine the extent size for a rollback segment, divide the total initial size of the rollback segment by the number of extents it will have initially.

An optimal size for each rollback segment is specified as well. The optimal size is specified by the optimal storage option at rollback segment creation. The value for this parameter should depend on the overall size of the transactions using the rollback segment. Some transactions require large rollback segments to store a great deal of transaction information as part of the execution of their queries. Such queries include long-running batch processes as well as some reports, particularly those selecting large amounts of data for monthly or yearly activity. Other processes require only a small amount of rollback segment space in order to complete their transactions. Some examples of this type are short-running ad hoc queries, reports of daily transaction activity, as well as OLTP update, insert, and delete activity where a record is being manipulated in the database. For queries that take long periods of time to execute, it is advisable to set the optimal storage parameter to a high value. For those queries that take shorter periods of time to execute, it is advisable to set the optimal storage parameter to a lower value. These are some of the main areas configured by the DBA for rollback segments.
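Putting these guidelines together, a 10MB rollback segment with ten equally sized extents and an optimal size equal to its initial allocation might be created as follows; the segment name, tablespace, and sizes are illustrative only:

CREATE ROLLBACK SEGMENT rbs_01
TABLESPACE rbs
STORAGE (INITIAL 1M
NEXT 1M
MINEXTENTS 10
MAXEXTENTS 100
OPTIMAL 10M);

If the segment later extends beyond 10MB to service a long transaction, Oracle will shrink it back toward the optimal size once the extra extents are no longer needed.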

Exercises

  1. How does Oracle determine the number of rollback segments to acquire at startup? How can the DBA specify certain rollback segments that must be acquired at startup?
  2. How does Oracle enforce the configuration of equally sized extents for rollback segments?

Configuring the Database for Specific Backup Needs

There are certain methods DBAs can use to configure the database to facilitate the backup needs of the Oracle database. This discussion will present the highlights of these methods, which can improve the backup processing of the Oracle database. First and foremost, always run the database in archivelog mode. Also, the DBA should follow these steps when archiving redo logs. Redo information should be sent to a disk location specified by LOG_ARCHIVE_DEST. The DBA can place the archived redo logs onto offline storage such as tape, optical disks, or other storage media later, perhaps on a once- or twice-daily schedule. Also, if it is discovered that an archived redo log is lost, then the DBA should execute a full backup of the database as soon as possible.
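A minimal sketch of this configuration follows; the archive destination path and filename format are assumptions for illustration:

# init.ora (paths are illustrative)
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST = /DISK08/oracle/arch/
LOG_ARCHIVE_FORMAT = arch_%s.arc

The database itself is placed in archivelog mode with the alter database archivelog statement, issued while the database is mounted but not open.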

When creating backup copies of datafiles, redo logs, password files, parameter files, or control files, it is important to place those backups onto a disk other than the one containing the live production version of the file. If both the backup and the live copy exist on the same disk, then the database is not adequately protected against media failure. As with redo logs, it is usually a good idea to copy your backups to tape or other offline media, although database recovery runs faster when all backup and archive information is readily available on disk.

Another important database configuration strategy designed to improve the recoverability of the database is to create and maintain multiple copies of the control file. Oracle can maintain multiple copies of the control file as long as the names of all control files appear in the CONTROL_FILES parameter of the init.ora file. These control files are ideally placed on separate disks to avoid the situation where a media failure destroys the only copy of a database’s control file. In addition, the control file should always be backed up when the DBA changes the physical structure of the database, either by adding datafiles or redo logs or removing them.
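For example, three copies of the control file might be maintained on three different disks; the paths shown are illustrative:

# init.ora (paths are illustrative)
CONTROL_FILES = (/DISK01/oracle/ctl/ctl_01.ctl,
                 /DISK02/oracle/ctl/ctl_02.ctl,
                 /DISK03/oracle/ctl/ctl_03.ctl)

After a structural change, the control file can be backed up with alter database backup controlfile to 'filename', or a re-creation script can be written to a trace file with alter database backup controlfile to trace.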

Oracle is particularly vulnerable to disk failures that damage online redo logs. If a disk fails that contains the only copy of an online redo log currently being written by LGWR, the Oracle instance will not be able to continue to run. To combat this vulnerability, Oracle allows the DBA to maintain multiple copies or members of each redo log group. Creating online redo logs with multiple members and placing those members on separate disks is known as multiplexing online redo logs, and is highly recommended by Oracle. In addition, it is a good idea for the DBA to make multiple copies of archived redo logs both on disk and tape, and perhaps even on two different tapes. In an ideal world, at least one copy of the archived redo logs could be kept at a remote yet quickly accessible site, preventing excessive dependence on location in the event of severe disasters such as an earthquake, a tornado, or a hurricane. Keeping archived redo logs on disk has the added advantage of speeding recovery during the roll-forward process of instance or media recovery, since restoring data from tape is often the most time-consuming process of database recovery.
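For example, a second member can be added to each existing log group on a different disk; the group numbers and filenames here are assumptions:

ALTER DATABASE ADD LOGFILE MEMBER
'/DISK02/oracle/log/log_1b.rdo' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER
'/DISK02/oracle/log/log_2b.rdo' TO GROUP 2;

LGWR then writes every redo entry to both members of the current group, so the loss of one disk no longer halts the instance.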

Finally, many DBAs keep some disk space free by using an extra disk that is completely or partially empty. This extra space can be used during DBA maintenance when reorganizing tablespaces, or during recovery as an area to which to restore backup copies of datafiles, redo log files, and control files. In addition, the extra space can be used as a temporary place for a tablespace when a disk fails and the DBA must minimize downtime by bringing the database online as soon as possible.

Exercises

  1. What are some strategies for database configuration that maximize the backup of control files? Archive logs? Datafiles?
  2. What is multiplexing online redo logs? Why is it so important?

Distributing Files to Reduce I/O Contention

There are many techniques and options available for the DBA to distribute files to reduce or eliminate I/O contention. Each tablespace in the Oracle database contains resources that are used by user processes and Oracle processes to store data in the database. Many of these objects are utilized quite frequently. In addition to tablespaces, there are other resources that are used quite frequently, such as online redo logs and rollback segments. The underlying principle of distributing files to reduce I/O contention is as follows. Several Oracle database resources are used at the same time to support database activity, and to eliminate contention for those resources at a hardware level, the DBA should determine which resources contend and then place those resources on different disks.

Oracle prevents its database from being I/O bound by allowing user resources to treat certain areas of memory as repositories for their disk storage, while also having several background processes write the data from memory to disk when the time is right. Unfortunately, in some cases the background processes themselves contend for I/O resources because two different processes may conflict. Take, for example, the LGWR and server processes. LGWR writes redo information from memory to disk, while the server process reads data blocks from disk into the buffer cache in support of user processes. For this example, assume the online redo logs are on the same disk as datafiles supporting a tablespace containing tables in the database. In this situation, there is a chance for LGWR and server process contention because LGWR’s frequent writes to disk may interfere with server processes trying to read blocks from tablespaces.

The most effective way for DBAs and organizations to reduce the frequency of Oracle background processes contending for I/O resources is to place each resource on a separate disk. However, there are often financial constraints for obtaining contention-free and recoverable Oracle databases, and often trade-offs have to be made. Though the prices of disk hardware have come down greatly in terms of cost per megabyte of disk space, it is not the space itself that reduces contention. Rather, it is the presence of multiple instruments to actually write information to storage. Another factor is that some of these resources do not take up much room on a disk drive. A control file, for instance, may take up only 250 kilobytes of disk space, while the typical disk drive often can store gigabytes of data.

Ideally, a good balance can be found between the number of disk drives the database hardware will have and the number of valuable database resources each drive will store. That balance will be based on pairing appropriate noncontending resources on the same disk drives, and placing potential I/O contenders on their own drives. What are some pairings the DBA can make in order to reduce the necessary number of disks for the hardware working in support of Oracle, yet still maintain a minimal amount of I/O contention? One initial pairing that is often used is placing online redo logs on the same drives as control files. To improve recoverability, it helps to have multiple copies of the control file or a control file creation script backed up to a trace file. The reason for this pairing is that control files are updated mainly by LGWR (alternately, CKPT) once the instance is started, and since LGWR also maintains the online redo logs, LGWR has almost exclusive access to the disk containing both control files and redo logs. As mentioned in the previous discussion, the DBA should also multiplex members of redo logs on different disks for maximum recoverability, implying there is usually another disk available for control file storage as well. As long as the DBA multiplexes online redo logs, there should be no contention between LGWR and ARCH as well.

Another placement strategy used by many DBAs is to put all rollback segment tablespaces on the same disk. There should be little contention in this arrangement if the rollback tablespaces are created such that smaller rollback segments for typical user processes are placed in one tablespace while larger rollback segments for use with long-running batch processes that run outside of normal business hours are placed in another. In general, all temporary tablespaces can be placed on one disk as well. Other combinations abound. TEMP tablespaces can be placed on the same disks as DATA tablespaces, particularly if the DATA tablespaces are read only or contain tables that have static data. This is an acceptable match because static data in a database tends to be for validation purposes. Other recommended combinations appear in Figure 19-2.

Figure 19-2: Appropriate combinations for resources on the same disk

However, there are several "DON’Ts" the DBA should bear in mind when distributing disk resources to minimize I/O contention. The recovery-minded DBA should be careful to ensure that exports, backup files, and archived redo logs don’t appear on the same disks as their live production counterparts, if only for minimizing dependence on any one disk in the host machine running Oracle. These backup files should be archived to tape or some other storage media before too much time passes in order to minimize harm done by a disk crash.

Typically, it is not a good idea to put two items on the same disk if one of the items might be accessed at the same time as the other. Some "DON’Ts" include placing rollback segments on the same disks as redo logs, DATA tablespaces and INDEX tablespaces together, or the SYSTEM tablespace and DATA tablespaces together. Since rollback entries and redo log entries store almost the same thing and are created at almost the same time, having these two items on the same disk can be a major headache, both for performance and recovery. Regarding the placement of DATA tablespaces on the same disk as INDEX tablespaces, the database can run into I/O contention when a query is issued against an indexed table. Having data and indexes on two separate disks allows the database to search the index and retrieve data from the table almost in parallel, whereas having both tablespaces on the same disk can create some friction. Finally, the case for not placing DATA tablespaces on the same disk as the SYSTEM tablespace is this: the SYSTEM tablespace contains the data dictionary. When a select statement is executed against the database that contains a wildcard (*) or the count(*) operation, the database must perform a lookup of that table against the data dictionary to find out what the table’s columns are. If that table is on the same disk as the data dictionary, there could be some I/O contention as the data dictionary and the tables are accessed. Combinations of disk resources that are not recommended appear in Figure 19-3.

Figure 19-3: Poor combinations for resources on the same disk

Exercises

  1. Describe the overall concept of using resource placement to reduce I/O contention. Why is placing every Oracle database resource on a separate disk not a good choice for real-world database design?
  2. What are some good combinations of resources on disks that will not produce I/O contention? Why are they good combinations?
  3. What are some bad combinations of resources on disks that will produce I/O contention? Why are they bad combinations?

Using Disk Striping

Sometimes distributing file I/O is not enough to reduce bottlenecks on systems with particularly heavy usage. Databases that can experience high levels of query and data change activity can be of all the types previously discussed. All databases in this situation may have one thing in common—large tables. A table’s size is determined by the number of rows it has and the number and size of each column in each row. Large tables are those with hundreds of thousands or millions of rows. Access to these tables via nonindexed columns requires full table scans that can turn seemingly innocent SQL queries into database performance nightmares.

One method many DBAs use to maximize table access speed is to separate large tables into extents that span datafiles spread over several disks. In Oracle7, this task is called disk striping. A similar feature called partitioning is available in Oracle8, and will be discussed in Unit V. Striping is accomplished by means of several steps. First, create a tablespace to store data using datafiles of equal size placed on several disks where they will encounter little contention.

CREATE TABLESPACE data01
DATAFILE '/DISK04/oracle/data/data01_f1.dbf' SIZE 90M,
'/DISK05/oracle/data/data01_f2.dbf' SIZE 90M,
'/DISK06/oracle/data/data01_f3.dbf' SIZE 90M,
'/DISK07/oracle/data/data01_f4.dbf' SIZE 90M;

At this point, the large table can be configured to distribute over several different datafiles. This task can be done in two ways. The first way is to create a new table with data from an existing one in the tablespace just created, using extents sized in such a way as to force Oracle to stripe automatically. This method cleanly creates data striping across several disks for the entire table. Assuming that the table being striped is 150 megabytes, the table storage parameters can be set such that the extents are all 80 megabytes, allowing Oracle to place only one extent in each datafile. A public synonym can be used to disguise the new large table as the old large table in order to avoid referencing problems for existing applications and SQL statements.

CREATE TABLE new_large_table
TABLESPACE data01
STORAGE (INITIAL 80M
NEXT 80M
PCTINCREASE 0
MINEXTENTS 2)
AS SELECT * FROM old_large_table;

CREATE OR REPLACE PUBLIC SYNONYM large_table
FOR new_large_table;

After this table is created and the data loaded, the table will have two extents in two datafiles striped over two disks. Additionally, there is room for table growth that is properly configured to continue data striping. Generally, this approach is designed with large table storage in mind and in a situation where the DBA can have the database available in restricted mode, ideally during a DBA maintenance period. Unlike the usual approach DBAs take with tablespace creation, which is to create large tablespaces designed to accommodate storage of multiple objects, the striping approach generally means creating tablespaces designed to store only a large table. Trying to use striping for small tables is an approach that will lead to a messy tablespace layout, and is generally not advised. Instead, the DBA should simply try indexing the smaller table and placing the index on another disk to alleviate I/O contention.

The other approach to striping tables is to alter an existing table to allocate an extent in a datafile on another disk. This approach can be executed at any point in database operation, since no extensive data manipulation is required to allocate an extent. Though haphazard, this approach may work in a tight situation; however, striping works best when tablespace layout is considered carefully. One drawback to altering an existing table to allocate an extent in a datafile on another disk is that I/O contention in older extents remains unresolved.

ALTER TABLE large_table ALLOCATE EXTENT
(DATAFILE '/u06/oracle/data/data01_f3.dbf');

The largest performance advantage gained by tuning I/O contention with table striping is when the DBA has set Oracle to use the Parallel Query Option. In fact, to even use parallel query, the machine running Oracle must have enough CPUs and I/O distribution at its disposal. If the machine is already CPU- or I/O-bound, then using parallel query will not help, and may even hurt the situation. But if the hardware resources of the Oracle database are not overburdened by the system, using parallel query will speed query access to nonindexed columns of large striped tables that previously ran long due to full table scans, as demonstrated in Figure 19-4.
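Under those conditions, a full scan of the striped table can be spread across query server processes with a hint; the degree of 4 below matches the four datafiles in the earlier example, and the column name is hypothetical:

SELECT /*+ FULL(lt) PARALLEL(lt, 4) */ *
FROM large_table lt
WHERE nonindexed_column = 'SOME_VALUE';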

Figure 19-4: Distributing I/O using parallel queries

This approach to data striping is the one method available from Oracle for distributing data across disks. Other methods are operating system dependent. It will benefit the DBA greatly in planning maintenance activities to have a strong understanding of the options provided by the system on which Oracle runs. For example, many hardware manufacturers now provide data striping products that utilize RAID technology. RAID allows both for file I/O distribution and for data mirroring. RAID and other similar hardware options have a distinct advantage over tablespace striping in that RAID is somewhat easier to manage and provides cleaner I/O distribution, and also allows for the disk mirroring capabilities that are not possible using the Oracle striping method. Still, table striping in Oracle does have the advantage of Oracle being the tool used to manage placement and distribution of data. Oracle’s striping methods may be used over RAID in many situations because of the following reasons:

RAID hardware is often more expensive than the administrative cost of table striping in Oracle.
Only a few tables in the Oracle database are large enough to require striping.

Exercises

  1. Explain the concept of striping. How does it reduce I/O contention?
  2. What Oracle option should be used in conjunction with striping to better utilize multiple disk controllers? On what type of query will striping produce the greatest performance improvement?
  3. Compare striping in the Oracle database to hardware options for data striping. What things can using hardware solutions for data striping accomplish that striping in Oracle cannot? When might it still be a good idea to use table striping in Oracle?

Tuning Rollback Segments

In this section, you will cover the following topics related to tuning rollback segments:

Using V$ views to monitor rollback segment performance
Modifying rollback segment configuration
Allocating rollback segments to transactions

Rollback segments store original and change information for uncommitted user transactions. This information is helpful in the event that the process needs to eliminate an update it made against the database. Important to remember about rollback segments is the fact that every update, insert, or delete statement executed on the database tables will produce a rollback segment entry. Rollback segments, then, are an important and heavily used resource. Proper configuration of rollback segments will ensure that user processes attempting to create rollback entries will operate smoothly. If these resources are not configured to run at an optimum level, the backlog of processes needing to write rollback entries will grow quickly, causing a problem for all users on the database.

Using V$ Views to Monitor Rollback Segment Performance

Contention for the rollback segment resource is indicated by contention in memory for the buffers containing blocks of rollback segment information. The dynamic performance view V$WAITSTAT can be used to detect this type of contention. This view is available to users with select any table privileges, such as SYS and SYSTEM. To let another user access this view, the DBA can connect to the database as SYS and grant select privileges on this view to a named user or role.
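Because V$WAITSTAT is a public synonym for the SYS-owned view V_$WAITSTAT, the grant is made on the underlying view while connected as SYS; the grantee name here is hypothetical:

GRANT SELECT ON v_$waitstat TO monitor_role;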

There are four types of blocks associated with rollback segments that are important to monitor in conjunction with detecting contention on rollback segments. They are the system undo header, system undo block, undo header, and undo block. The first type or class of block associated with rollback segments that needs to be considered is the system undo header. Oracle maintains a statistic on the V$WAITSTAT view that corresponds to this type of block. The value for this statistic corresponds to the number of times any process waited for buffers containing header blocks from the rollback segment in the SYSTEM tablespace. There will always be one rollback segment for the database instance contained in the SYSTEM tablespace.

The next block class associated with rollback segments that needs to be considered is the system undo block. The statistic represented by this type of block corresponds with the system undo header insofar as both numbers represent waits on rollback buffers in the SYSTEM tablespace. However, the system undo block represents the number of times a process waited for buffers containing blocks other than the header from the rollback segment in the SYSTEM tablespace for that instance.

The next two classes of blocks correspond to rollback segments in tablespaces other than SYSTEM. Any instance whose database contains tablespaces other than SYSTEM must have two or more rollback segments allocated to it at startup, only one of which may be the rollback segment in the SYSTEM tablespace. The first class of block we will examine of this type is the undo header. Oracle collects statistics in V$WAITSTAT corresponding to the number of times a process waits for a buffer containing header blocks from rollback segments in tablespaces other than SYSTEM. The second class of block is the undo block. The database also collects this statistic, which corresponds to the number of times a process waits for a buffer containing blocks other than the header from rollback segments in a tablespace other than SYSTEM. The following list shows the types of blocks associated with rollback segments whose usage should be monitored by the DBA:

SYSTEM undo header: Number of times user processes waited for buffers containing SYSTEM rollback segment header blocks
SYSTEM undo block: Number of times user processes waited for buffers containing SYSTEM rollback segment nonheader blocks
Undo header: Number of times user processes waited for buffers containing non-SYSTEM rollback segment header blocks
Undo block: Number of times user processes waited for buffers containing non-SYSTEM rollback segment nonheader blocks

These statistics should be monitored periodically to verify that no processes are contending for rollback segments excessively. Oracle defines excessive rollback contention as a ratio of 1 percent or greater between the number of waits for any rollback block class and the total number of data requests for the system. The total number of requests for data from the database is defined as total data requests = db block gets + consistent gets. The information required for this calculation can be gathered from the V$SYSSTAT dynamic performance view. To calculate the ratio of rollback buffer waits to total number of data requests, the DBA can use the following SQL statement:

SELECT w.class, (w.count/SUM(s.value))*100 ratio
FROM v$waitstat w, v$sysstat s
WHERE w.class IN ('system undo header','system undo block',
'undo header','undo block')
AND s.name IN ('db block gets','consistent gets')
GROUP BY w.class, w.count;

If the value in the RATIO column for any block class in the output of this query is greater than 1, waits for that block class exceed 1 percent of total data requests and there is an issue with contention for that block class. The DBA can reduce contention for rollback segments by adding to the number of rollback segments available to the Oracle instance. The appropriate number of rollback segments for an instance corresponds to the average number of concurrent transactions against the database; to determine it, the DBA can apply the rule of four. This rule is demonstrated in Figure 19-5.


Figure 19-5: Typical transaction to rollback segment ratios

TIP: Use the rule of four to determine the appropriate number of rollback segments for your Oracle instance: divide the average number of concurrent transactions by 4. If the result is less than 8 (4 + 4), round up to the nearest multiple of 4. Don't use more than 50 rollback segments.

Here are some examples of the rule of four in action. A database handles 133 transactions concurrently, on average. Applying the first part of the rule, 133 / 4 = 33.25, which rounds up to 34. Since this result is greater than 8 and less than 50, the DBA knows that 34 rollback segments are appropriate for the instance. If the number of concurrent transactions were only 10, however, 10 / 4 = 2.5, which rounds up to 3; because 3 is less than 8, it is rounded up to the nearest multiple of 4, giving 4 rollback segments.
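As a rough sketch of gathering the input for the rule, the number of transactions active right now can be sampled from V$TRANSACTION, which holds one row per active transaction. Sampling during a representative busy period, rather than a true average, is the assumption here:

```sql
-- CEIL(count/4) is the first step of the rule of four; the rounding
-- to a multiple of 4 (for small results) and the cap of 50 still
-- need to be applied by hand.
SELECT CEIL(COUNT(*)/4) rough_rbs_count
FROM v$transaction;
```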

Exercises

  1. Describe how to identify contention for rollback segments. What are the dynamic performance views involved in discovering contention for rollback segments?
  2. Name the four classes of rollback segment blocks.
  3. At what block waits to gets ratio is the database considered to be experiencing contention for rollback segments?
  4. You are trying to determine the appropriate number of rollback segments to put on a database with an average of 97 concurrent users. Use the rule of four to calculate the number of rollback segments your database needs.

Modifying Rollback Segment Configuration

Rollback segments are configured to be a certain size during database creation. Other features of rollback segments handled during configuration are the number and size of extents and the optimal size of the rollback segment. The number of active transactions using a particular rollback segment will determine that rollback segment's size. The current space allocation for any rollback segment can be determined from the V$ROLLSTAT dynamic performance view. Access to this view is granted as part of the select any table privilege or by executing the utlmontr.sql script found in the rdbms/admin directory under the Oracle software home directory. The V$ROLLSTAT view can be queried for the number of extents, optimal size setting in bytes, and current size in bytes of a rollback segment. The V$ROLLNAME view is used to derive the associated undo (rollback) segment number, or USN, for a named rollback segment; this number is required to retrieve any rollback segment information from V$ROLLSTAT.

SELECT rn.name, rs.extents, rs.optsize, rs.rssize
FROM v$rollname rn, v$rollstat rs
WHERE rn.name = 'rbs_name'
AND rn.usn = rs.usn;

Important to notice is whether or not the rollback segment has extended past its optimal size. For example, say a rollback segment's optimal size is 5M and each extent comprises 1M of the segment. If the value in the RSSIZE column of the V$ROLLSTAT view for this rollback segment is 6M, the DBA knows that the rollback segment has extended beyond its optimal size. The rollback segment will try to shrink itself back to the optimal size when a transaction requires it to extend a second time. Rather than waiting for a transaction that forces the rollback segment to extend again, the DBA may want to force the rollback segment to reduce to optimal size. This task is accomplished with the alter rollback segment shrink statement. If the DBA does not state the size to which the rollback segment should shrink, Oracle shrinks the rollback segment to the size specified by optimal.
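A sketch of both forms of the statement (the segment name and the 10M figure are illustrative):

```sql
ALTER ROLLBACK SEGMENT rbs_02 SHRINK;          -- shrink back to the optimal size
ALTER ROLLBACK SEGMENT rbs_02 SHRINK TO 10M;   -- or shrink to a stated size
```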

Excessive rollback segment shrinkage is an indication of a larger problem—that the rollback segment is improperly sized. If a small rollback segment routinely handles large transactions, that rollback segment will extend to the size required by the query. But if the rollback segment needs two more extents than its optimal size, Oracle will incur extra processing to shrink the rollback segment. The more often this happens, the worse off database performance will be. The DBA can determine if rollback segment shrinkage is causing a performance problem on the database. The information required is stored in V$ROLLSTAT. Again, since V$ROLLSTAT keeps only the rollback segment number, not the name, it is appropriate to join the two views on the USN column to obtain the necessary statistics for the following columns:

EXTENDS: The number of times the rollback segment allocated a new extent.
SHRINKS: The number of times the rollback segment deallocated extents to return to its optimal size.
AVESHRINK: The average shrink size; the amount of space shed by the rollback segment each time it reduced itself to the optimal size.
AVEACTIVE: The average active extent size in bytes for the rollback segment. This statistic represents the number of bytes in the rollback segment belonging to transactions that had not yet committed.

In order to determine if there is a problem with rollback segment optimal sizing, the statistics named above can be compared with one another. Most relevant is the comparison between SHRINKS and AVESHRINK. If the number of shrinks is high and the average shrink size is also high, there is definitely a problem with rollback shrinkage: the rollback segment shrank often, and shed a lot of extents each time it did so. There is also a problem if the number of shrinks is high but the average shrink size is low; the key point is that the act of shrinking is happening too often. In both these cases, the optimal parameter should be set higher to reduce the processing overhead. In contrast, the setting for optimal may be too high if there are few shrinks occurring on the database and the average shrink size is low. The final determinant in this situation is the value for AVEACTIVE. If the average size in bytes of active transactions is much lower than the optimal size indicated by the OPTSIZE column of the V$ROLLSTAT view, then optimal is set too high. Table 19-1 indicates the relationships between the various columns of the V$ROLLSTAT view and their meanings.

SHRINKS AVESHRINK OPTIMAL setting
High High Too low; raise optimal
High Low Too low; raise optimal (shrinks occur too often)
Low Low Possibly too high; lower optimal if AVEACTIVE is well below OPTSIZE
Low High OK

Table 19-1: V$ROLLSTAT Settings and Their Meaning
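The statistics discussed above can be pulled for every online rollback segment with a join like the following, using the USN column to connect segment names to their statistics:

```sql
-- One row per online rollback segment, with the extension and
-- shrinkage statistics described in the list above.
SELECT rn.name, rs.extends, rs.shrinks, rs.aveshrink,
       rs.aveactive, rs.optsize
FROM v$rollname rn, v$rollstat rs
WHERE rn.usn = rs.usn;
```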

The DBA should monitor the number of times transaction information wraps around in a particular rollback segment. Wrapping is an important aspect of rollback segment usage and performance. When a rollback segment wraps, the current extent handling transactions is not large enough to hold those transactions in their entirety. Reducing wrapping requires increasing the size of each extent of the rollback segment. The number of wraps occurring on the rollback segment can be identified by querying the WRAPS column of the V$ROLLSTAT dynamic performance view, and can have multiple meanings. If both WRAPS and EXTENDS are high, the database is experiencing many active transactions that do not fit into one extent; the rollback segment is extending, and there may be an issue with shrinks and dynamic extension.

However, if an active transaction requires more room than the current rollback extent can offer, it is possible that a wrap may occur, placing information for that active transaction onto the rollback segment's initial extent. Let's reexamine the nature of rollback segments for a moment. The entire segment is designed for reusability. Ideally, when the active transactions in the last extent currently allocated to the rollback segment need more space, the transaction information in the initial extent will already be committed to the database, allowing the rollback segment to reuse that extent's space for new active transactions. In this case, a high number of wraps that is not coupled with a high number of extends is good, because it demonstrates that the rollback segment is properly sized for the transactions currently allocated to it.

However, even then there can be problems. In some cases, long-running queries will produce error ORA-01555, "snapshot too old (rollback segment too small)." This problem arises when too much transaction data is stored in the rollback segment as a result of a bad combination of long- and short-running updates and queries. The rollback segment has obtained and filled the maximum number of extents it is permitted to obtain, yet the transaction needs more. While the short-term solution is to increase the number of extents the rollback segment can obtain until the long-running query has enough rollback space to complete, the true solution is to schedule the long-running batch job at a time when rollback segment activity is relatively quiet.

To alter the rollback segment as described, use the alter rollback segment statement; one or more storage options may be changed, including next and optimal.

ALTER ROLLBACK SEGMENT rbs_02
STORAGE (NEXT 20M OPTIMAL 100M);

Exercises

  1. What dynamic performance view carries rollback segment performance information?
  2. How does the DBA identify if dynamic extension and shrinks are causing a performance problem on the rollback segments? What storage parameter is changed if the DBA determines there are too many shrinks occurring on the rollback segment?
  3. What are wraps? When might they indicate a problem on the rollback segment? Why aren’t they the best indicator of poor rollback segment performance?
  4. What problem does the ORA-01555, snapshot too old (rollback segment too small) error indicate?

Allocating Rollback Segments to Transactions

The DBA should make every attempt to create rollback segments such that their usage is transparent to the applications and users of the database system. However, it is not always possible to size every rollback segment in the database to fit every query a user can execute. When a long-running transaction exhausts the available space in a rollback segment and receives the "snapshot too old" error, increasing the size of that rollback segment is not necessarily a good solution. Most likely, several other queries running in the same period were able to use the very rollback segment that the marathon process exhausted. In some cases, there may be only a few marathon batch jobs running weekly that cannot fit into the average rollback segment available on the database, amidst a sea of small queries that use those rollback segments without any problems.

It is useful to create a certain number of rollback segments that are much larger than the standard rollback segment that currently exists on the database. The larger segment can exist for the use of those marathon processes. In order to force those longer-running queries to use the large rollback segments, the user executing the process can include a statement assigning that query to the larger rollback segment. That assignment statement is the set transaction use rollback segment statement. Given the existence of larger rollback segments to handle the large transaction activity, this statement can eliminate cases of marathon processes receiving the dreaded "snapshot too old" error. At the beginning of the transaction, the process would issue the set transaction statement and specify the large rollback segment in the use rollback segment clause.
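A sketch of the assignment follows; the segment name rbs_large and the update are hypothetical. Note that set transaction must be the first statement of the transaction, and the assignment lasts only until that transaction commits or rolls back:

```sql
COMMIT;  -- end any prior transaction first; set transaction must come first
SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;
UPDATE account_hist SET archived = 'Y';  -- hypothetical marathon DML
COMMIT;  -- later transactions revert to normal rollback segment assignment
```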

Although it is a good idea to assign marathon processes to their own rollback segments, it is not a good idea to assign every transaction explicitly to a rollback segment. To assign transactions to rollback segments en masse throughout the database, the DBA would need a complete picture of the processes running at any given time, as well as knowledge of which rollback segments are online. If too many transactions request the same rollback segment, that segment may extend and shrink unnecessarily while other rollback segments remain inactive. Oracle itself does an adequate job of finding rollback segments for most short- and medium-duration transactions.

Exercises

  1. What statement is used to assign transactions to a rollback segment?
  2. When is it a good idea to assign transactions to rollback segments? When is it not a good idea to do it?

Using Oracle Blocks Efficiently

In this section, you will cover the following topics related to using Oracle blocks:

Determining block size
Setting pctfree and pctused
Detecting and resolving row migration
Detecting and resolving freelist contention

The foundation of all I/O activity in the Oracle database is the Oracle block. Row data for tables and indexes, rollback segment information, data dictionary information, and every other database component stored in a tablespace is stored in Oracle blocks. Proper use of Oracle at the block level goes a long way toward enhancing the performance of the database.

Determining Block Size

The size of Oracle blocks is determined by the DBA and should be based on a few different factors. First and foremost, Oracle block size should be some multiple of the operating system block size on the machine hosting the database. This approach allows the operating system to handle I/O for Oracle processes in a manner consistent with its own methods for reading blocks from the filesystem. Most operating systems use a block size of 512 or 1,024 bytes. Oracle block size is usually a multiple of that: often 2K, or a multiple of 2K such as 4K or 8K. On certain large systems, the Oracle block size can be 16K. Oracle's default block size depends on the operating system hosting Oracle, and should always be set higher than the operating system block size in order to reduce the number of physical reads the host machine must perform as part of I/O activity.

Oracle block size is determined at database creation using the initialization parameter DB_BLOCK_SIZE, which is expressed in bytes. Most of the time, the default setting for Oracle blocks as provided by the operating-system-specific installation is fine. However, there are some situations where an alternate block size is worth consideration. These situations have everything to do with the operating system’s ability to handle I/O and on the size of its own blocks. For example, some large machines, such as massively parallel servers or mainframes, use larger operating system blocks; hence, it is possible to set DB_BLOCK_SIZE to a higher value in Oracle at database creation time.

There is a very important fact to bear in mind about DB_BLOCK_SIZE: once it is specified and the database is created, it cannot be altered without re-creating the database entirely. Therefore, it is important to get the block size correct the first time, in order to prevent issues such as row chaining and migration later. DB_BLOCK_SIZE also determines the size of each buffer in the buffer cache; thus, the size of the buffer cache in memory is DB_BLOCK_SIZE times DB_BLOCK_BUFFERS. Figure 19-6 gives a pictorial representation of the relationship between DB_BLOCK_SIZE and other parameters.
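An illustrative init.ora fragment showing the relationship (the values here are assumptions for the example, not recommendations):

```
DB_BLOCK_SIZE = 8192       # 8K Oracle blocks, fixed at database creation
DB_BLOCK_BUFFERS = 4000    # buffer cache = 4000 * 8192 bytes, about 31M
```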


Figure 19-6: Effect of block size on other parameters

Exercises

  1. What is the name of the initialization parameter that determines the block size of the Oracle database?
  2. Identify some factors on which the database block size depends.
  3. Name another parameter whose value depends on the size of the database blocks.

Setting PCTFREE and PCTUSED

Key to using Oracle database blocks well is setting the block storage options pctfree and pctused so that the space within each block is used in a manner consistent with the needs of the Oracle object being stored. For example, the needs of a high-performance, "high response time required" OLTP application with thousands of users entering new information daily are not the same as those of a mostly static, complex query-intensive data warehouse with few users, and the block storage options used within each system will differ accordingly. The two options, pctfree and pctused, determine how Oracle fills the space in each Oracle block with table or index data. They can be configured in two ways:

Within create table or alter table and create index or alter index statements
Within create tablespace or alter tablespace statements as default storage clause values

Storage options specified at table or index creation or alteration take precedence over the default values specified for the tablespace. For this reason, there is no strict database-wide configuration of Oracle data blocks, short of specifying default values for these options in every tablespace and omitting them from table and index creation, or creating all tables and indexes with the same storage options set explicitly. Nor is such an expansive approach to configuring pctfree and pctused really advisable. It is generally best to fit these storage options to the needs of each database object created.

The definition of pctfree is the percentage of each data block that Oracle leaves empty, or free, for existing rows in the data block to expand as may be required by updates. When a process calls for row insertions on that object, Oracle will insert rows into the block until the pctfree value is reached. After a data block’s pctfree value is reached, no more rows go into that block. At this point, the pctused value comes into play. The definition of pctused is the usage threshold that any block’s current capacity must fall under before Oracle considers it a "free" block again. A free block is one that is available for insertion of new rows.

The pctfree and pctused options are configured in relation to one another to manage space utilization in data blocks effectively. When set up properly, pctfree and pctused can have many positive effects on the I/O usage on the Oracle database. However, the key to configuring pctfree and pctused properly is knowledge of a few different aspects of how the application intends to use the object whose rows are in the block. Some of the important questions that need to be answered by the DBA before proceeding with pctfree and pctused configuration are as follows:

What kind of data object is using these blocks?
How often will the data in this object be updated?
Will updates to each row in the object increase the size of the row in bytes?

The values allowed for pctfree and pctused each range from 0 to 99, and their sum should not exceed 100. Consider the effects of various values for pctfree. A pctfree value of 80 leaves 80 percent of each data block free, while a pctfree of 5 leaves only 5 percent of each block free. Plenty of free space in data blocks is good when the application updates row data for a table or index frequently, and the updates produce significant increases in the size of each row being changed. For example, setting pctfree to 40 leaves ample room for changes to each row in the block, because once the threshold for adding new rows is reached, the block still has 40 percent of its space available for growth of existing rows. There is one downside to setting pctfree high to preserve free space for row growth: more blocks are required to store the row data of a table with a high pctfree value than of one with a lower value. Refer to Figure 19-7 for better understanding.


Figure 19-7: Block example with pctfree specified

Once block capacity reaches the limit set by pctfree, Oracle will not put new rows into the block until enough data is deleted from it for its usage to dip below pctused. When the percentage of space used in a data block falls below the value specified by pctused, Oracle places the block back on the freelist, the list of data blocks with free space available for row insertions. Specifying a high value for pctused keeps space usage as high as possible within the data blocks, forcing Oracle to manage data block storage allocation actively. Active storage management translates into higher resource costs for insert, update, and delete statements. For OLTP systems, the resource cost of a high pctused setting can therefore be high as well; tight disk space management is not effective for systems with highly volatile space requirements. Usually, it is better to set pctused lower on these systems, maximizing the time a block spends on the freelist. On data warehouses containing static data, however, it may be wise to maximize disk capacity by setting pctused high.

Examples of PCTFREE and PCTUSED Usage

It is usually not wise to set pctfree and pctused to values that add up to 100 exactly. When these two options add up to 100, Oracle will work very hard to enforce that no data block keeps more free space than specified by pctfree. This additional work keeps Oracle’s processing costs unnecessarily high. A better approach is to set the values for pctfree and pctused to add up to a value close to 100, perhaps 90–95. This approach represents a desirable balance between higher processing costs and efficient storage management. Consider some examples of pctfree and pctused settings and what they mean.

PCTFREE=25, PCTUSED=50

This combination might be used on a high-transaction-volume OLTP system with some anticipated growth in row size as a result of updates to existing rows. The value for pctfree should accommodate the increase in row size, although it is important to estimate the anticipated growth of each row as closely as possible in order to maximize data storage. The value for pctused means a block does not return to the freelist until its usage falls below 50 percent, so that once it does return, many rows can be inserted into the block before it comes off the freelist again.
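These settings could be applied at table creation; the table, its columns, and the tablespace name here are hypothetical:

```sql
CREATE TABLE order_entry
( order_id  NUMBER,
  comments  VARCHAR2(500) )   -- column expected to grow on update
PCTFREE 25     -- leave 25 percent of each block free for row growth
PCTUSED 50     -- block rejoins the freelist below 50 percent usage
TABLESPACE data01;
```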

PCTFREE=5, PCTUSED=85

This combination of values may be useful for systems such as data warehouses. The setting for pctfree leaves a small amount of room for each row size to increase. The pctused value is high in order to maximize data storage within each block. Since data warehouses typically store mass amounts of data for query access only, these settings should manage storage well.

PCTFREE=10, PCTUSED=40

Oracle assigns a default value to each option if one is not specified either in tablespace default settings or in the table and index creation statements. For pctfree, that value is 10. For pctused, that value is 40.

Exercises

  1. What is the meaning of the pctfree storage option? What is the meaning of the pctused option? How are they specified? What are the ranges for these values? What should the sum of these values add up to?
  2. The DBA is considering a change to the values for pctfree and pctused for a table for an OLTP application that experiences high insert activity. Existing records in the database are updated frequently, and the size of the row is rarely affected by those updates. Should the value for pctfree be high or low? Should the value for pctused be high or low?

Detecting and Resolving Row Migration

The DBA should also weigh the "soft" performance gain of higher pctfree values: they are a proactive solution to row chaining and migration. Row migration occurs when a user process updates a row in an already crowded data block, forcing Oracle to move the row out of that block and into another one that can accommodate it. Chaining occurs when Oracle attempts to migrate the row but cannot find a block large enough to fit the entire row, so it breaks the row into several pieces and stores them separately. The DBA should prevent Oracle from migrating or chaining rows wherever possible, because performance can drop significantly when many rows in a table are chained or migrated. The importance of avoiding row migration and chaining is demonstrated in Figure 19-8.


Figure 19-8: Avoid row migration and chaining

There are tools available with the Oracle database that detect migrated database rows. The analyze command offers several different parameters for use in determining everything from the validity of a table or index structure to collecting statistics for table usage that are incorporated into cost-based query optimization. It also provides the functionality required for discovering if there are chained rows in the database. The syntax for this statement is listed below:

ANALYZE { TABLE | CLUSTER } name
LIST CHAINED ROWS
INTO chained_rows;

The name in this context is the name of the table or cluster being analyzed for chained rows. CHAINED_ROWS is the name of the table into which analyze places its results. The DBA can then query the CHAINED_ROWS table to determine if there is an issue with row chaining on the database. This table is not created automatically as part of database creation; instead, the DBA must run the utlchain.sql script included with the distribution, usually found in the rdbms/admin directory under the Oracle software home directory. CHAINED_ROWS is owned by the user that creates it. After running analyze, the ROWID of each chained or migrated row appears in the HEAD_ROWID column of the CHAINED_ROWS table. To determine the extent of chaining or row migration in a database table or cluster, the DBA can execute the following statement:

SELECT COUNT(*)
FROM chained_rows
WHERE table_name = UPPER('name');

Once the CHAINED_ROWS table is populated and the DBA determines that row migration for the table in question is severe enough to merit resizing pctfree and pctused, the DBA need not drop and re-create the table with appropriate values for those two options. Instead, Oracle allows pctfree and pctused to be adjusted with the alter table statement. The new settings apply to all blocks newly allocated by table inserts, although they do not apply to blocks the table has already allocated. A temporary table can be created to hold the rows whose head ROWIDs appear in the CHAINED_ROWS table; after creating this temporary table and altering the original table's block usage settings, the migrated rows can be deleted from the original table and inserted back into it from the temporary copy. The DBA should use discretion when employing this option. Depending on the number of rows in the original table versus the number of rows that have chained, it may be worthwhile to drop and re-create the table, preserving the original data with the EXPORT utility or by spooling the output of select * to a flat file and using SQL*Loader to repopulate the rows.
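One hedged sketch of the in-place repair, assuming the affected table is named EMP, its block settings have already been altered, and CHAINED_ROWS has been populated by analyze:

```sql
-- Copy the migrated rows aside, delete the originals, and reinsert
-- them so they land in blocks governed by the new pctfree/pctused.
CREATE TABLE emp_migrated AS
  SELECT * FROM emp
  WHERE ROWID IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'EMP');
DELETE FROM emp
  WHERE ROWID IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'EMP');
INSERT INTO emp SELECT * FROM emp_migrated;
DROP TABLE emp_migrated;
```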

Finally, it is important to understand that it is usually not possible to eliminate every instance of chaining or migration from a database. In some cases, a single row of table data exceeds the size of a data block. For example, a table with several columns declared as VARCHAR2(2000) may easily exceed the capacity of a single data block. The LONG datatype is another "chainer," as it can contain up to 2 gigabytes of data.

Exercises

  1. Define row migration and chaining. How are these two things similar? How are they different? Describe the performance implications of row migration and chaining.
  2. What command is used to identify row migration and chaining? What is the name of the table that stores information about chaining and migration?

Detecting and Resolving Freelist Contention

When a data block’s row storage capacity hits the limit as expressed by pctfree, no more rows can be inserted into it. When the capacity for that block falls below pctused, the block is again considered available for row insertion. Oracle maintains records of blocks for the particular tables that have space available for data insertion. These records are called freelists. When Oracle needs to insert a new row into a block, it looks at the freelist for that table in memory to find some blocks in which to put the new record.

However, sometimes there is contention in memory for getting to those lists of free blocks. The DBA can identify freelist contention in the Oracle database by looking to see if there is contention for free data blocks within the database buffer cache of the SGA. This information is contained in the dynamic performance view that Oracle maintains called V$WAITSTAT.

Within the V$WAITSTAT view there are columns called CLASS and COUNT. The CLASS column contains the names of the various classes of statistics Oracle maintains in this view; the class of interest here is called 'free list'. The value in the COUNT column identifies freelist contention: the number of times a process had to wait for free blocks in the database buffer cache since the instance was started.

The next piece of the puzzle is finding out the number of times processes issued requests for data in the same period of time. This statistic is equal to the sum of database block gets and consistent gets. The data for this statistic is gathered from another dynamic performance view, V$SYSSTAT. By issuing a select against this performance view for the sum of the VALUE column where the value of the NAME column is either 'db block gets' or 'consistent gets', the DBA can obtain the second piece of data for calculating the wait ratio for freelists. That ratio can also be obtained by issuing the following statement against the database. The value produced for FL_WAIT_RATIO by the query should be less than 1. If it is 1 or greater, then there is freelist contention that needs to be resolved on the database.

SELECT (w.count / s.gets) * 100 AS fl_wait_ratio
FROM v$waitstat w,
     (SELECT SUM(value) AS gets
        FROM v$sysstat
       WHERE name IN ('db block gets', 'consistent gets')) s
WHERE w.class = 'free list';

The resolution for freelist contention on a table is to add more freelists. Unfortunately, changing the number of freelists for a table is easier said than done. The only way to add freelists to a table is to re-create the table with a higher value specified for the freelists storage clause in the table creation statement. Depending on the number of rows in the table, it may be wise either to use EXPORT and IMPORT to store a copy of the data while creating the new table, or to offload the table data into a flat file using select * and then repopulate the data once the table is re-created with a higher value specified for the freelists clause.

To determine an appropriate value for the freelists storage clause after detecting freelist contention on a table, find out how many processes concurrently insert rows into that table, and set the freelists clause to that number. With the number of freelists equal to the number of processes adding rows to the table, there should be little if any contention for that table's freelists.
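The rebuild described above might look like the following sketch. The table EMP, the tablespace name, and the choice of four freelists (for an assumed four concurrent inserting processes) are hypothetical.

```sql
-- Hypothetical: four processes insert into EMP concurrently,
-- so the table is rebuilt with FREELISTS 4.
CREATE TABLE emp_new
  TABLESPACE data01
  STORAGE (FREELISTS 4)
  AS SELECT * FROM emp;

DROP TABLE emp;
RENAME emp_new TO emp;
-- Indexes, grants, and constraints on EMP must be re-created as well.
```

The create table as select approach avoids a separate unload/reload step, though EXPORT/IMPORT or SQL*Loader work equally well for larger tables.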

Exercises

  1. What are the two performance views used to determine the wait ratio for freelist contention?
  2. How is freelist contention resolved?

Chapter Summary

This chapter discussed the many facets of tuning how Oracle utilizes its disks. The three topics in this section—database configuration, tuning rollback segments and using Oracle blocks efficiently—comprise nearly 25 percent of OCP Exam 4. The first portion of this chapter was a discussion of database configuration. Several types of database tablespaces were identified, along with the ideal contents of each. The five different types of tablespaces discussed were RBS, DATA, SYSTEM, INDEX, and TEMP. Database objects are meant to be placed into these types of tablespaces according to the following breakdown: rollback segments in RBS, tables in DATA, indexes in INDEX, temporary segments required by user processes in TEMP, and the data dictionary tables and initial rollback segment in SYSTEM. Since rollback segments and temporary segments have a tendency to fragment, it is generally wise to keep them out of the tablespaces used by tables and indexes. Particular importance is placed on the SYSTEM tablespace. Since this tablespace contains very important objects such as the data dictionary and initial rollback segment, it is unwise to place any other types of objects in this tablespace. Placing many objects like tables and indexes in SYSTEM can cause a problem. If the SYSTEM tablespace should fill, the only way to add space to it is to drop and re-create it. However, the only way to drop and re-create the SYSTEM tablespace is to drop and re-create the entire database. This act requires a full restore of all data, and generally creates huge problems. Therefore, it is inappropriate to place anything other than the data dictionary and the initial rollback segment in the SYSTEM tablespace.

The topic of rollback segment configuration was discussed. There are two types of rollback segments, public and private. In databases that do not use the Parallel Server Option, public and private rollback segments are the same. In databases that use the Parallel Server Option, public rollback segments are a pool of rollback segments that can be acquired by any instance in the parallel configuration, while private rollback segments are acquired only by the instance that names the rollback segment explicitly. The number of public rollback segments acquired at startup is the quotient of two initialization parameters, TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT. This value produces the number of rollback segments the Oracle instance will acquire at startup. The DBA can guarantee that certain private rollback segments are acquired as part of that number by naming them in the ROLLBACK_SEGMENTS initialization parameter as well. The appropriate size for rollback segments was also covered. In order to determine that size, the DBA should attempt to find out as much as possible regarding the size of the transactions that will take place in the database.
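The startup arithmetic can be illustrated with a small configuration fragment. The parameter values and rollback segment names below are hypothetical.

```
# Hypothetical init.ora fragment. The instance acquires
# TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT = 100 / 25 = 4
# rollback segments at startup; the private rollback segments named
# in ROLLBACK_SEGMENTS are guaranteed to be among those brought online.
TRANSACTIONS = 100
TRANSACTIONS_PER_ROLLBACK_SEGMENT = 25
ROLLBACK_SEGMENTS = (rbs01, rbs02, rbs03, rbs04)
```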

Part of database rollback segment configuration involves choosing an optimal size for the rollback segment. This size is specified with the optimal storage clause as part of rollback segment creation. Another important feature to remember about rollback segment creation is that all extents of the rollback segment will be the same size. This design choice reduces the possibility that a long-running transaction will force a rollback segment to fill the associated tablespace with its extents, making it impossible for any other object in that tablespace to obtain an extent. Oracle enforces this design by removing the pctincrease storage clause from the syntax of the create rollback segment statement.

File distribution to minimize contention was also covered. There are specific means to evaluating which Oracle resources are good to place together on the disks of the machine hosting Oracle. The most important feature of this discussion is to recall what the different components are and how they might interact (and more importantly, interfere) with one another. Some resources are best placed on separate disks to minimize I/O contention. They are DATA tablespaces and INDEX tablespaces, RBS tablespaces and redo logs, DATA tablespaces and the SYSTEM tablespace, DATA tablespaces and RBS tablespaces, and DATA tablespaces and TEMP tablespaces. Some acceptable combinations of resources on the same disk are redo logs and control files, all RBS tablespaces, and others.

For additional reduction of I/O contention in the DATA tablespaces, the option of table striping was explored. Table striping is the practice of placing different extents of a large table in datafiles on separate disks. This method has excellent benefits for SQL queries running with parallel query when searching on nonindexed columns, which results in full table scans. Parallel query makes better use of multiple CPUs and disk controllers that are available with disk striping.

This chapter also covered material regarding the tuning of rollback segments. Tuning rollback segments begins with identifying how to detect contention for rollback segments—the detection of contention in memory for buffers containing rollback segment data. The V$WAITSTAT dynamic performance view is used to determine whether this contention exists. There are four different classes of rollback segment blocks in use in the Oracle instance: the system undo header block, the system undo block, the undo header, and the undo block. The difference between header blocks and other rollback blocks is that header blocks contain rollback segment header information. The difference between system blocks and the other types is that system blocks belong to the SYSTEM rollback segment in the SYSTEM tablespace, while the other blocks belong to rollback segments in other tablespaces. Whether there is contention for these blocks is determined by the wait ratio, which is derived by (WAITS / GETS) * 100, where WAITS is the sum of block waits for the classes listed above taken from the V$WAITSTAT performance view, and GETS is the total number of data requests as represented by the sum of database block gets and consistent gets from the V$SYSSTAT performance view.
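The rollback wait ratio just described might be computed with a query along these lines. This is a sketch applying the (WAITS / GETS) * 100 formula from the text, one ratio per block class:

```sql
-- One wait ratio per rollback block class; values well under 1%
-- suggest rollback segment contention is not a problem.
SELECT w.class,
       (w.count / s.gets) * 100 AS wait_ratio
  FROM v$waitstat w,
       (SELECT SUM(value) AS gets
          FROM v$sysstat
         WHERE name IN ('db block gets', 'consistent gets')) s
 WHERE w.class IN ('system undo header', 'system undo block',
                   'undo header', 'undo block');
```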

Following the discussion of how to monitor contention for rollback blocks was the discussion of how to determine the number of rollback segments required for the instance. The text documented the rule of four—divide the total number of concurrent transactions by 4. If the number of concurrent transactions is under 32, round the quotient of the previous equation up to the nearest 4. And finally, the total number of rollback segments used in any database instance should not exceed 50.

Dynamic extension of rollback segments should be avoided. The current space allocation of any rollback segment can be determined by querying either the DBA_SEGMENTS view or the V$ROLLSTAT view. Preference is given to the V$ROLLSTAT view, as it serves as the basis for more user-friendly monitoring interfaces such as Server Manager, although a join with V$ROLLNAME must be performed in order to pull the statistics for a rollback segment by name. In order to keep the rollback segment at the optimal size specified by the optimal clause at rollback segment creation, the instance will perform shrinks on the rollback segment if too many extents are acquired for it. A high number of shrinks, as reflected in the SHRINKS column of the V$ROLLSTAT performance view, indicates that the optimal clause set for the rollback segment is too low. Since allocating and deallocating extents for rollback segments creates additional processing overhead for the Oracle instance, the DBA should carefully monitor the database rollback segment statistics and resize the optimal clause as necessary.
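The V$ROLLSTAT and V$ROLLNAME join mentioned above might be written as follows; the column list is the set of statistics this chapter discusses:

```sql
-- Pull per-rollback-segment statistics by name.
-- USN (undo segment number) is the join column between the two views.
SELECT n.name, s.extents, s.rssize, s.optsize,
       s.shrinks, s.extends, s.wraps
  FROM v$rollstat s, v$rollname n
 WHERE s.usn = n.usn;
```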

Shrinks occur in a rollback segment after a transaction commits that required the rollback segment to grow more than one extent beyond the size specified by its optimal storage clause. The DBA can also shrink a rollback segment manually by executing the alter rollback segment shrink statement. If no size in bytes is specified for the rollback segment to shrink to, the rollback segment will shrink to the size specified by the optimal storage parameter.
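The two forms of the manual shrink might look like this; the rollback segment name rbs01 and the 4M target are hypothetical:

```sql
ALTER ROLLBACK SEGMENT rbs01 SHRINK;          -- shrink back toward OPTIMAL
ALTER ROLLBACK SEGMENT rbs01 SHRINK TO 4M;    -- shrink toward a stated size
```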

The WRAPS statistic that is also maintained in the database can be of some limited value. The number of wraps in an instance’s rollback segments indicates that active transactions could not fit into the current extent, and the rollback data had to wrap across to a new extent. When a high WRAPS statistic appears in conjunction with a high value for the EXTENDS column on the V$ROLLSTAT dynamic performance view, then there is ample evidence to confirm that the rollback segments are extending often (and later shrinking) and that there could be a performance problem with excessive SHRINKS and EXTENDS occurring. However, a high number of wraps by itself indicates simply that transactions cannot fit entirely into one extent. A high number for WRAPS in conjunction with a low number of EXTENDS could indicate that the rollback segment is reusing currently allocated extents, which is a sign that the rollback segment is properly sized to handle the number of transactions assigned to it.

A problem can occur in the database rollback segments when a long-running query attempts to access data that is volatile as a result of many smaller transactions changing the database, or when a long-running transaction itself makes many data changes. If a query requires so many rollback entries to stay active in order to have a read-consistent view that the rollback segment allocates as many extents as it can, and the query still cannot finish, then ORA-01555—snapshot too old (rollback segment too small) will appear. Although this error can be corrected by adding more space to a rollback segment, the more ideal solution is to schedule the long-running job at off-peak times to lessen the burden on rollback segments. Alternately, this problem can be solved by assigning the transaction to a rollback segment specifically designed to accommodate larger transactions. This task is accomplished by using the set transaction use rollback segment statement.
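The explicit assignment might be used as follows. The rollback segment rbs_large and the table big_table are hypothetical, and note that set transaction must be the first statement of the transaction:

```sql
COMMIT;  -- end any prior transaction first
SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;

-- hypothetical long-running bulk change
UPDATE big_table SET status = 'PROCESSED';

COMMIT;  -- the assignment lasts only until the transaction ends
```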

Several points about tuning Oracle data blocks to perform well were also covered. The first part of this discussion focused on the size of database blocks, which is specified by the DB_BLOCK_SIZE initialization parameter at database creation time. Typically, database block size is a multiple of operating system block size to minimize the number of physical I/O reads it takes for Oracle to retrieve data. DB_BLOCK_SIZE is stated in bytes and determines the size of each block buffer in the database buffer cache of the SGA; the number of those buffers is set by the DB_BLOCK_BUFFERS initialization parameter. Once the database is created, block size cannot be changed except by dropping and re-creating the database.

In this portion, the topic of pctfree and pctused was also discussed. These two storage options determine how Oracle inserts new rows into a database object. We learned that pctfree represents the portion of each data block that Oracle leaves free for existing rows in the block to grow as a result of updates to those rows. When the block fills to that limit, Oracle takes the block off the freelist for the table. The pctused option is the threshold below which a block's used space must fall, through deletions, before Oracle considers placing the block back on the freelist. The range for each value is 0–99, but the sum of pctfree and pctused cannot exceed 100. For performance reasons, pctfree and pctused should be set such that their total is close to, but less than, 100. The impact of setting pctfree is as follows:

High pctfree leaves more space free in the data block for each row to expand during updates. However, it will take more blocks to store the same number of rows than it takes if pctfree is set low.
Low pctfree maximizes block space usage by leaving less space free for existing rows to grow during updates. But, there is an increased chance of row migration and chaining if the block becomes overcrowded and the row needs to expand.

Setting pctused has many implications on the database. The implications of setting pctused are as follows:

High pctused means that Oracle will try to keep the data block filled as high as pctused at all times. This means an additional processing overhead if the database experiences heavy data change activity.
Low pctused means that Oracle will not add rows to a data block until much of the block space is freed by data deletion. The data block will have unused room for a while before being placed onto a freelist, but once it is on the freelist, the block will be available for row inserts for a while as well.

Changing the settings for pctused and pctfree is executed by using the alter table statement.

Row migration and chaining were also discussed. When a row grows too large to fit into its data block, Oracle finds another data block to place it into. This process is called row migration. If no block is available that can fit the entire row, then Oracle breaks the row into several pieces and stores the pieces in different blocks. This process is called chaining. Both processes are detrimental to the performance of the database. Table rows that have been chained or migrated can be identified using the analyze command with the list chained rows option. The output from this command is placed in a special table called CHAINED_ROWS, which must be created by running the utlchain.sql utility script before executing the analyze command. The DBA can then copy the chained rows into a temporary table, delete the rows from the original table, change the value for pctfree on the table, and insert the rows from the temporary table. Alternately, the DBA can store the row data using EXPORT or a flat file, drop and re-create the table with a new pctfree setting, and repopulate the table using IMPORT or SQL*Loader.
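The detection step might be run as follows; the table name EMP is hypothetical, and utlchain.sql must have been run first so that CHAINED_ROWS exists:

```sql
-- Populate CHAINED_ROWS with the head ROWIDs of EMP's problem rows
ANALYZE TABLE emp LIST CHAINED ROWS;

-- How many rows of EMP have migrated or chained?
SELECT COUNT(*)
  FROM chained_rows
 WHERE table_name = 'EMP';
```

Comparing that count against the total row count of the table tells the DBA whether the migration is severe enough to merit the repair procedures described above.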

The topic of freelist contention was also discussed. Freelists are lists that Oracle maintains of blocks with space for row insertion for a table. A freelist is experiencing contention if processes are contending for the free data blocks of that table in memory. To calculate the wait ratio for freelists, the V$WAITSTAT and V$SYSSTAT views are used: the ratio is (WAITS / GETS) * 100, where WAITS is V$WAITSTAT.COUNT for the row where V$WAITSTAT.CLASS = 'free list', and GETS is the sum of V$SYSSTAT.VALUE where V$SYSSTAT.NAME is either 'db block gets' or 'consistent gets'. Resolving contention for freelists is accomplished by dropping and re-creating the table with the freelists storage parameter set to the number of concurrent processes trying to insert new rows into the table. The table data can be stored and reloaded using EXPORT/IMPORT or SQL*Loader.

Two-Minute Drill

Five types of tablespaces commonly found on the Oracle database: SYSTEM, DATA, INDEX, RBS, and TEMP.
The SYSTEM tablespace should contain data dictionary tables and initial rollback segments only. It is inappropriate to place any other objects in it, as they may fill the SYSTEM tablespace, causing maintenance problems.
The DATA tablespaces should contain table data only. Other types of segments, such as rollback segments or temporary segments, could cause tablespace fragmentation, making it hard for the tables to acquire extents.
The INDEX tablespaces should contain indexes to table data only.
The RBS tablespaces should contain rollback segments only.
The TEMP tablespaces should be available for creation of temporary segments for user queries. No other objects should be placed in this tablespace.
Under the Parallel Server Option, an instance acquires TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT public rollback segments at instance startup. Both components of this equation are initialization parameters.
The DBA can specify the instance to acquire certain private rollback segments at startup by using the ROLLBACK_SEGMENTS initialization parameter.
On databases that don’t use the Parallel Server Option, public and private rollback segments are the same.
Rollback segments should be sized according to the size and number of transactions occurring on the database.
All extents of a rollback segment are the same size. This is enforced by Oracle with the removal of the pctincrease storage clause in the create rollback segment syntax.
Rollback segments generally have an optimal size specified by the optimal storage clause.
If a data transaction forces the rollback segment to grow more than one extent past its optimal setting, the rollback segment will shrink after the transaction commits.
Shrinks and extends cause additional processing overhead on the Oracle instance.
The DBA can query the V$ROLLSTAT dynamic performance view to determine if a high number of extends and shrinks are happening to the rollback segment.
If a high number of shrinks are occurring as reflected by the SHRINKS column of V$ROLLSTAT, the DBA should increase the optimal storage clause for that rollback segment.
If a transaction exhausts the space allowed for a rollback segment either through data change or through requiring a read-consistent data view in the case of a long-running query, ORA-01555—snapshot too old (rollback segment too small) error will occur.
To fix ORA-01555, increase the number of extents allowed on the rollback segment the transaction uses, explicitly assign it to a larger rollback segment using set transaction use rollback segment, or schedule the transaction to occur during off-peak hours.
Database resources should be distributed across multiple disks to avoid I/O contention.
When considering which resources to place on the same disk, the DBA should evaluate what the utilization of each resource is and when it will be accessed.
Striping is the placement of a table’s extents across multiple disks to reduce I/O contention. Striping works well in situations where the parallel query option is in use.
Block size is determined by the DB_BLOCK_SIZE initialization parameter.
Block size cannot be changed once the database is created.
The number of block buffers in the SGA buffer cache is set by the initialization parameter DB_BLOCK_BUFFERS; each buffer is DB_BLOCK_SIZE bytes in size.
Oracle block size should be a multiple of operating system block size.
pctfree and pctused are the data block space usage options.
pctfree is the amount of space Oracle leaves free in each block for row growth.
pctused is the amount of space that must be freed after the block initially fills in order for Oracle to add that block to the freelist.
A freelist is a list of blocks associated with a database object that currently have room available for rows to be added.
A high pctfree means the block leaves a lot of room for rows to grow. This is good for high-volume transaction systems with row growth, but has the potential to waste disk space.
A low pctfree maximizes disk space by leaving little room for rows to grow. Space is well utilized but potential is there for chaining and row migration.
Row migration is where a row has grown too large for the block it is currently in, so Oracle moves it to another block.
Chaining is where Oracle tries to migrate a row, but no block in the freelist can fit the entire row, so Oracle breaks it up and stores the pieces where it can.
pctfree + pctused <= 100.
Row migration and chaining can be detected by using the analyze command with list chained rows clause.
The analyze command places ROWIDs for chained rows in the CHAINED_ROWS table created by utlchain.sql. This table must be present for analyze to work.
Freelist contention is when processes are contending with one another to get free data blocks when trying to insert new rows on the table.
Freelist contention is determined by selecting wait information from V$WAITSTAT and get information from V$SYSSTAT, dividing waits by the total of db block gets and consistent gets, and multiplying the quotient by 100. If the result is 1 or greater, there is freelist contention.
Resolving freelist contention is done by increasing the freelists storage option in the table creation statement to the number of concurrent processes at any time that will insert rows to that table. This step usually requires dropping and re-creating the table.