Performance improvement is an iterative process. Removing the first bottleneck might not lead to performance improvement immediately, because another bottleneck might be revealed that has an even greater performance impact on the system. For this reason, the Oracle performance method is iterative. Accurately diagnosing the performance problem is the first step towards ensuring that the changes you make to the system will result in improved performance.
Performance problems generally result from a lack of throughput, unacceptable user or job response time, or both. The problem might be localized to specific application modules, or it might span the entire system. Before looking at any database or operating system statistics, it is crucial to get feedback from the most important components of the system: the users of the system and the people ultimately paying for the application. Getting feedback from users makes determining the performance goal easier, and improved performance can be measured in terms of real business goals, rather than system statistics.
The Oracle performance method can be applied until performance goals are met or deemed impractical. This process is iterative, and it is likely that some investigations will be made that have little impact on the performance of the system. It takes time and experience to accurately pinpoint critical bottlenecks in a timely manner. The Automatic Database Diagnostic Monitor (ADDM) implements the Oracle performance method and analyzes statistics to provide automatic diagnosis of major performance problems. Using ADDM can significantly shorten the time required to improve the performance of a system.
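ADDM can be run on demand between any two AWR snapshots. The following SQL*Plus sketch is a minimal example, assuming AWR snapshots are being collected and the session has the ADVISOR privilege; the snapshot IDs 100 and 101 and the task name addm_demo are placeholders you would replace with values from DBA_HIST_SNAPSHOT.

-- Create and run an ADDM task over an AWR snapshot range, then print its
-- findings. DBMS_ADDM.ANALYZE_DB takes the task name as an IN OUT bind.
VARIABLE task_name VARCHAR2(100)
BEGIN
  :task_name := 'addm_demo';
  DBMS_ADDM.ANALYZE_DB(
    task_name      => :task_name,
    begin_snapshot => 100,    -- placeholder: pick from DBA_HIST_SNAPSHOT
    end_snapshot   => 101);   -- placeholder
END;
/
SET LONG 1000000 PAGESIZE 0
SELECT DBMS_ADDM.GET_REPORT(:task_name) FROM dual;

The addmrpt.sql script under $ORACLE_HOME/rdbms/admin prompts for the same inputs interactively if you prefer not to call the package directly.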
Limit the Scope
Does the performance issue originate in the operating system (OS), the instance, or the application SQL? This question is not always easy to answer. Poorly performing SQL can cause excessive physical reads and writes, making the problem look like an I/O issue. Improperly sized memory components (an instance configuration issue) can lead to excessive swapping in the OS. Poor disk configuration can appear to be an instance configuration problem, causing large redo log file waits, commit waits, and other symptoms.
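One way to start narrowing down the layer is to look at where the instance itself reports spending its wait time. The following query is a minimal sketch against the standard V$SYSTEM_EVENT view; heavy time in events such as db file sequential read points toward I/O or SQL access paths, while heavy log file sync time points toward commit or redo configuration. The 25-row cutoff is arbitrary.

-- Top non-idle wait events since instance startup (times taken from
-- TIME_WAITED_MICRO, reported here as total seconds and average ms).
SELECT *
FROM  (SELECT event,
              total_waits,
              ROUND(time_waited_micro / 1e6, 1) AS seconds_waited,
              ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1e3, 2) AS avg_wait_ms
       FROM   v$system_event
       WHERE  wait_class <> 'Idle'
       ORDER  BY time_waited_micro DESC)
WHERE  ROWNUM <= 25;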
Eliminate possibilities. When the instance appears to have I/O problems, compare the instance file I/O statistics to OS-level statistics; the differences can guide you to the actual problem (a sample query appears after the list below). For example, a higher than normal average wait time on a particular tablespace could be due to:
Hardware: A file is on a slow drive or in an improper RAID configuration.
OS: The operating system is busy with other files on the same drive or partition.
Instance: The tablespace was created with different properties than other tablespaces, other busy database files are on the same disk or partition (the database I/O is not balanced across all the drives), or the objects being accessed are mostly in the same tablespace, file, or disk.
Application: The application is doing excessive I/O because the optimizer chose a poor access path due to out-of-date statistics, inefficient indexes, or other reasons.
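The instance-side half of that comparison comes from the per-file read and write counters the database already keeps. The query below is a minimal sketch, assuming TIMED_STATISTICS is enabled (READTIM and WRITETIM are in hundredths of a second, so the results are converted to milliseconds); compare the slow files it surfaces with OS-level numbers from tools such as iostat or sar for the same devices.

-- Average read/write latency per data file, from V$FILESTAT joined to
-- DBA_DATA_FILES. High averages concentrated in one tablespace or disk
-- suggest an instance or hardware cause; high averages everywhere
-- suggest the OS or storage layer.
SELECT d.tablespace_name,
       d.file_name,
       f.phyrds,
       f.phywrts,
       ROUND(f.readtim  / NULLIF(f.phyrds,  0) * 10, 2) AS avg_read_ms,
       ROUND(f.writetim / NULLIF(f.phywrts, 0) * 10, 2) AS avg_write_ms
FROM   v$filestat f
       JOIN dba_data_files d ON d.file_id = f.file#
ORDER  BY avg_read_ms DESC NULLS LAST;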
Determine the scope of the problem so you can focus your efforts on the solutions that provide the most benefit.