Monday, February 24, 2014

Performance Versus Business Requirements

Factors that affect performance:
  • Frequent checkpointing
  • Redo log archiving
  • Block checksums
  • Redundancy
    • Frequent backups of data files
    • Multiple control files
    • Multiple redo log members in a group
  • Security
    • Auditing
    • Encryption
    • Virtual Private Database/Fine Grained Access Control

There is always a cost of doing business in certain ways, and business requirements often affect performance. The impact of downtime, crash recovery, and even the unlikely event of block corruption must be weighed against the overhead of protecting against these events. Redundancy improves availability, but requires more I/O.

The question is not whether to use the security features, but which of them the business actually requires; use only those.

Often the business can only tolerate minimal downtime, so you configure the database for the business, and then tune your database accordingly. The uptime requirement, the mean time to recovery, and the amount of data that could be lost in a disk or system crash are all business issues.

ADDM Tuning Session


ADDM condenses a manual tuning session, quickly identifying the areas that yield the greatest benefit. ADDM runs automatically each time an AWR snapshot is taken, and the findings of the latest ADDM run are displayed on the Database Home page of Enterprise Manager.
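
If you want to run ADDM manually for a specific pair of snapshots (for example, to re-examine a past period), a minimal sketch using the DBMS_ADDM package is shown below. The snapshot IDs 101 and 102 are placeholders; pick real values from DBA_HIST_SNAPSHOT.

VARIABLE task_name VARCHAR2(60)

BEGIN
  :task_name := 'manual_addm_101_102';
  -- Analyze the whole instance between two existing AWR snapshots.
  DBMS_ADDM.ANALYZE_DB(:task_name, 101, 102);
END;
/

-- Print the text report for the task just created.
SET LONG 1000000 PAGESIZE 0
SELECT DBMS_ADDM.GET_REPORT(:task_name) FROM dual;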

Tuning During the Life Cycle



An application’s life cycle can be divided into different phases:
  • Application design and development
  • Testing: Database configuration
  • Deployment: Adding a new application to an existing database
  • Production: Troubleshooting and tuning
  • Migration, upgrade, and environment changes


Tuning can be divided into two classes:
  • Proactive (make it better, so it will not break) 
    • Test scenarios. 
    • Find the problem areas. 
    • Resolve the problem. 
  • Reactive (wait until it breaks, then fix it) 
    • Monitor active instance. 
    • Tune issues as needed.


Tuning during the life cycle involves two courses of action: proactive and reactive tuning. During the design, development, and testing phases, tuning is mostly proactive; that is, scenarios and test cases are designed and tested, and the results are measured and compared against other configurations. In the deployment and production environments, the tuning is mostly reactive. The need for hypothetical loads disappears because real users and workloads exist, but the ability to anticipate problems also diminishes. You can monitor the database instance to observe changes and trends in performance metrics. From the information you gather by monitoring, you may be able to mitigate performance issues before they are noticed by users.

The DBA may be involved in tuning from the earliest stages of design and development. It is less expensive to correct bugs and performance problems early in the life cycle than later. The differences in tuning the later phases of the life cycle are primarily in what is allowed. Many DBAs in a production environment are not allowed to change the SQL or data structures. However, a design change to improve performance may warrant a change request to the application vendor or development team.

Common Tuning Problems



The most common tuning problems:
  1. Inefficient or high-load SQL statements
  2. Suboptimal use of Oracle Database by the application
  3. Undersized memory structures
  4. Concurrency issues
  5. I/O issues 
  6. Database configuration issues
  7. Short-lived performance problems
  8. Degradation of database performance over time 
  9. Unexpected performance regression after environment changes 
  10. Locking issues

Tuning inefficient or high-load SQL statements has a wide impact that can reduce memory, CPU, and I/O consumption. SQL tuning issues include poorly written SQL, ineffective use of indexes, access path costs, and sorting. In this course, we assume that the DBA has little or no opportunity to change the SQL statements.

Problems such as repeatedly establishing new database connections, excessive SQL parsing, and high levels of contention for a small amount of data (also known as application-level block contention) can degrade application performance significantly. All of these are forms of suboptimal use of the database by the application.

Memory issues are high on the list of instance tuning problems. Proper sizing of the System Global Area (SGA), including the shared pool and buffer cache, and of the Program Global Area (PGA) reduces contention for memory resources and indirectly reduces I/O and CPU consumption.

A high degree of concurrent activity, with multiple processes or users, can result in contention for shared resources that manifests as various types of waits. Many resources can be accessed by only one process at a time, so several processes attempting to access the same resource creates contention.


In any database, I/O issues, such as database file layout on disk or RAID devices, can be a source of performance problems. In OLTP applications, the amount of redo and undo generated can create bottlenecks in memory or I/O.

Some problems are reported by users but may not be apparent from reports that span intervals of 30 minutes or longer. Oracle Database has additional tools, such as Active Session History (ASH), that allow the DBA to view statistics and metrics over small segments of time in the recent past.

Many databases change gradually: the number of users, the amount of data, the number of reports, and the modules in use all grow over time, and these changes may lead to a degradation in performance. The proactive DBA captures and saves sets of statistics from periods when the database is performing acceptably, so they can be compared with statistics from periods of poor performance to identify the differences.

The environment of the database is seldom static. Patches, upgrades, new hardware, or changes to the instance parameters can change the performance of the database. Sometimes a change improves performance in one area and causes another area to degrade. 

Locking issues are not common, but when they do occur they become very important.

Top SQL Reports


The following Top SQL sections sort the SQL statements with the top resource usage in multiple ways, as indicated by their titles:
  • SQL ordered by Elapsed Time
  • SQL ordered by CPU Time
  • SQL ordered by Gets
  • SQL ordered by Reads
  • SQL ordered by Executions
  • SQL ordered by Parse Calls
  • SQL ordered by Sharable Memory
  • SQL ordered by Version Count
These sorts allow you to find the troublesome SQL statement. You can also view the Complete List of SQL Text section to view the entire SQL text. In this example, a single SQL statement is responsible for almost all of the instance activity.
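
Outside the AWR report, a quick way to get a similar ranking from the shared SQL area is to query V$SQLSTATS directly. The sketch below (times are in microseconds, hence the division) lists the current top five statements by elapsed time.

SELECT * FROM (
  SELECT sql_id, executions,
         ROUND(elapsed_time/1e6, 1) AS elapsed_secs,
         ROUND(cpu_time/1e6, 1)     AS cpu_secs,
         buffer_gets, disk_reads
  FROM   v$sqlstats
  ORDER  BY elapsed_time DESC)
WHERE  rownum <= 5;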

Setting the Priority

Determine which problem to tune first. In the performance reports, you see many statistics; even a well-tuned database shows a set of top wait events. The Oracle server provides a set of wait event statistics for processes that are idle or waiting. The Oracle server also records CPU utilization for processes that are running. To determine the impact of a particular event, it must be compared with the overall time spent. 


Each request to the database server has a response time consisting of a wait time and a service time. The service time is the time spent actively working on the request (CPU time). The wait time is by definition the time waiting for any reason. Both service time and wait time may be tuned. 

To tune the service time, something has to change: the processing, the SQL, the access path, or the data storage structure. Wait times can be tuned by reducing contention for the resource where the wait occurs.

How to determine the priority 

AWR reports can help here.

The top wait events always show some values. In the example shown in the slide, the users are complaining of slow response time, yet the Top 5 Timed Foreground Events section only shows that the instance is using CPU.

The Instance CPU section shows that the instance is using 65% of the OS CPU; this information is inconclusive, because the instance is supposed to use CPU rather than wait. This set of diagnostics may mean that the instance is CPU bound. As pointed out earlier, performance tuning can reduce either wait time or service time. In this case, the service time needs to be reduced, and SQL is the usual area to examine.
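
One way to confirm the host-level picture from inside the database is V$OSSTAT. The sketch below computes average host CPU utilization since instance startup (BUSY_TIME and IDLE_TIME are cumulative and not exposed on every platform), so treat it as a rough indicator rather than a point-in-time value.

SELECT ROUND(100 * busy / (busy + idle), 1) AS host_cpu_pct_since_startup
FROM  (SELECT MAX(DECODE(stat_name, 'BUSY_TIME', value)) AS busy,
              MAX(DECODE(stat_name, 'IDLE_TIME', value)) AS idle
       FROM   v$osstat);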


Setting the Priority: Example
The top wait events did not give a clear direction. So you continue with the time model to find which areas are consuming the DB time. You can determine the top-priority tuning tasks by comparing the time spent in various waits and tasks with the overall wait time and service time. Both major tools report the Time Model Statistics to guide your tuning efforts. For example, the AWR report excerpt in the slide shows that the database CPU time (time in user calls) is 474.04 seconds. The time spent in user calls is 44.85% of the total DB time. The “sql execute elapsed time” shows 1050.06 seconds; this time includes wait times. Just from this limited view, the wait times for the SQL execution are significant, and would lead you to examine the wait statistics related to the SQL execution and the SQL reports to identify individual SQL statements for tuning.
The “% of DB Time” values indicate the relative impact that tuning each area could have. If the “sql execute elapsed time” could be reduced, the maximum possible improvement is 1050 seconds, or 99%. SQL will always take some time to execute, so the actual improvement may be much less, depending on how much improvement you can get from that area.
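
The same percentages can be pulled live from the time model views. This sketch expresses a few V$SYS_TIME_MODEL statistics (values are in microseconds) as a percentage of DB time; the list of statistic names is just an illustrative subset.

SELECT stat_name,
       ROUND(value/1e6, 2) AS seconds,
       ROUND(100 * value /
             MAX(DECODE(stat_name, 'DB time', value)) OVER (), 2) AS pct_of_db_time
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU',
                     'sql execute elapsed time', 'parse time elapsed')
ORDER  BY value DESC;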


Defining the Problem


Problems can arise at any time. A proactive DBA watches for problems and corrects them before they are noticed by users. In the past, the discovery and definition step has been tedious and frequently dependent on listening to user feedback. User feedback is important, but is often subjective and not reproducible. In Oracle Database 11g, many of the following information sources can be viewed from the Enterprise Manager interface:
  • Monitor the current state of the database instance and compare it to a previous state.
    • Use Statspack or AWR to collect performance metrics regularly. Changes can point to issues before they become noticeable to users. 
    • Use OS or EM tools to check for CPU and disk queuing, disk utilization, and memory swapping. These are the signs of an overloaded system. 
  • Examine AWR or Statspack reports and instance files carefully.
    • Use the available tools, such as Statspack or AWR reports, to identify SQL statements in the applications that are consuming the most resources. Have these changed?
    • Check the alert log and trace files for error messages that might give a quick clue to the nature of the problem. Do not overlook system- and application-specific logs.
    • Ensure that the initialization parameter settings make sense for the system. 
    • Collect instance and OS statistics. Statspack reports point to the components where the greatest waits and the greatest use of resources occur. ADDM goes further by focusing on the components with the greatest potential benefit.

Sunday, February 23, 2014

Comparative Performance Analysis with AWR Baselines

What is the proper threshold to set on a performance metric? What is it that you want to detect? If you want to know that the performance metric value indicates that the server is nearing capacity, an absolute value is correct. But if you want to know that the performance is different today than it was at this time last week, or last month, the current performance must be compared to a baseline.

A baseline is a set of snapshots taken over a period of time. These snapshots are grouped statistically to yield a set of baseline values that vary over time. For example, the number of transactions per second in a certain database varies depending on the time of the day. The values for transactions per second are higher during working hours and lower during nonworking hours. The baseline records this variation and can be set to alert you if the current number of transactions per second is significantly different from the baseline values.

Oracle Database 11g baselines provide the data required to calculate time-varying thresholds based on the baseline data. The baseline allows a real-time comparison of performance metrics with baseline data and can be used to produce AWR reports that compare two periods.




Using EM to Configure and Change Adaptive Threshold Settings

Steps to configure and change adaptive threshold settings:
  • Create the baseline
  • Schedule the computation of statistics 
  • After AWR baseline statistics are computed for a particular baseline, you can set metric thresholds specific to your baseline.

Compute baseline statistics directly from the Baselines page (as previously discussed). Then go to the AWR Baseline Metric Thresholds page and select the type of metrics that you want to set. When done, select a specific metric and click Edit Thresholds.

On the corresponding Edit AWR Baseline Metric Thresholds page, specify your thresholds in the Thresholds Settings section, and then click Apply Thresholds.

You can specify thresholds based on the statistics computed for your baseline. This is illustrated in the slide. In addition to “Significance Level,” the other possibilities are “Percentage of Maximum” and “Fixed Values.”


Note: After a threshold is set by using Baseline Metric Thresholds, the previous threshold values are discarded, and the statistics from the associated baseline are used to determine the threshold values until they are cleared (by using the Baseline Metric Thresholds page or the PL/SQL interface).


Managing Baselines with PL/SQL


DBMS_WORKLOAD_REPOSITORY Package



Static, Single Baseline Creation
----------------------------------

BEGIN
  -- Create a static baseline from an existing snapshot ID range; it expires after 60 days.
  DBMS_WORKLOAD_REPOSITORY.create_baseline(
    start_snap_id => 56,
    end_snap_id   => 57,
    baseline_name => 'test1_bl',
    expiration    => 60);
END;
/


BEGIN
  -- Create a static baseline from a time range (the covering snapshots must still exist in AWR).
  DBMS_WORKLOAD_REPOSITORY.create_baseline(
    start_time    => TO_DATE('09-JUL-2008 17:00', 'DD-MON-YYYY HH24:MI'),
    end_time      => TO_DATE('09-JUL-2008 18:00', 'DD-MON-YYYY HH24:MI'),
    baseline_name => 'test2_bl',
    expiration    => NULL);
END;
/


COLUMN baseline_name FORMAT A15

SELECT baseline_id, baseline_name, start_snap_id,
       TO_CHAR(start_snap_time, 'DD-MON-YYYY HH24:MI') AS start_snap_time,
       end_snap_id,
       TO_CHAR(end_snap_time, 'DD-MON-YYYY HH24:MI') AS end_snap_time
FROM   dba_hist_baseline
WHERE  baseline_type = 'STATIC'
ORDER BY baseline_id;


Information about a specific baseline can be displayed by using the BASELINE_ID with the SELECT_BASELINE_DETAILS pipelined table function, or the BASELINE_NAME with the SELECT_BASELINE_METRIC pipelined table function.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------

SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.select_baseline_details(7));   -- 7 is the BASELINE_ID

SELECT * FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.select_baseline_metric('SYSTEM_MOVING_WINDOW'));

Baselines are renamed using the RENAME_BASELINE procedure.
----------------------------------------------------------
BEGIN
  DBMS_WORKLOAD_REPOSITORY.rename_baseline(
    old_baseline_name => 'test1_bl',
    new_baseline_name => 'sun1_bl');
END;
/

Baselines are dropped using the DROP_BASELINE procedure.
--------------------------------------------------------
BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_baseline(baseline_name => 'test1_bl');
  DBMS_WORKLOAD_REPOSITORY.drop_baseline(baseline_name => 'test2_bl');
  DBMS_WORKLOAD_REPOSITORY.drop_baseline(baseline_name => 'test3_bl');
  DBMS_WORKLOAD_REPOSITORY.drop_baseline(baseline_name => 'test5_bl');
END;
/

or

BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_baseline(baseline_name => 'sun1_bl');
 
END;
/

Retention Period Details
---------------------------

SELECT retention FROM dba_hist_wr_control;


BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200);  -- Minutes (= 30 Days).
END;
/

Current Moving Window
------------------------

SELECT moving_window_size
FROM   dba_hist_baseline
WHERE  baseline_type = 'MOVING_WINDOW';

How to change the size of the moving window
-------------------------------------

BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_baseline_window_size(
    window_size => 25);  -- Days; must not exceed the AWR retention period.
END;
/

Baseline Templates
------------------
Baseline templates allow you to define baselines you would like to capture in the future.
Creating a single baseline template is similar to creating a time-based baseline, except the time is in the future.


BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline_template(
    start_time    => TO_DATE('20-DEC-2013 00:00', 'DD-MON-YYYY HH24:MI'),
    end_time      => TO_DATE('20-DEC-2013 05:00', 'DD-MON-YYYY HH24:MI'),
    baseline_name => '01_dec_013_00_05_bl',
    template_name => '01_dec_013_00_05_tp',
    expiration    => 100);
END;
/

Templates for repeating baselines are a little different as they require some basic scheduling information.
-----------------------------------------------------------------------------------------------------------
The following example creates a template that will run for the next six months, gathering a baseline every Monday between 00:00 and 05:00.


BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline_template(
   day_of_week          => 'MONDAY',
   hour_in_day          => 0,
   duration             => 5,
   start_time           => SYSDATE,
   end_time             => ADD_MONTHS(SYSDATE, 6),
   baseline_name_prefix => 'monday_morning_bl_',
   template_name        => 'monday_morning_tp',
   expiration           => NULL);
END;
/

Information about baseline templates is displayed using the DBA_HIST_BASELINE_TEMPLATE view.
--------------------------------------------------------------------------------------------
SELECT template_name,
       template_type,
       baseline_name_prefix,
       start_time,
       end_time,
       day_of_week,
       hour_in_day,
       duration,
       expiration
FROM   dba_hist_baseline_template;

Baseline templates are dropped using the DROP_BASELINE_TEMPLATE procedure.
----------------------------------------------------------------------------

BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_baseline_template (template_name => '01_dec_013_00_05_tp');
  DBMS_WORKLOAD_REPOSITORY.drop_baseline_template (template_name => 'monday_morning_tp');
END;
/

Baselines in Performance Page Settings

The data for any baseline defined in the past remains available in Oracle Database 11g. The baseline data can be displayed on the Performance page of Enterprise Manager. You have three display options:
    • Do not show baseline information.
    • Show the information from a specified static baseline.
    • Show the information from the system moving baseline.

Note: The system moving window baseline becomes valid after sufficient data has been collected and the statistics calculation occurs. By default, the statistics calculation is scheduled for every Saturday at midnight.


Automatic Workload Repository Baselines

In Oracle Database 11g, Automatic Workload Repository (AWR) baselines provide powerful capabilities for defining dynamic and future baselines, and considerably simplify the process of creating and managing performance data for comparison purposes.

Oracle Database 11g has, by default, a system-defined moving window baseline that corresponds to all of the AWR data within the AWR retention period. There can only be one moving window baseline.

Oracle Database 11g provides the ability to collect two kinds of baselines: static baselines and the moving window baseline.

Static baselines can be either single or repeating. A single AWR baseline is collected over a single time period. A repeating baseline is collected over a repeating time period (for example, every Monday in June).

In Oracle Database 11g, baselines are enabled by default if STATISTICS_LEVEL=TYPICAL or ALL.

Monitoring Tools Overview


The following sections give an overview of the main monitoring tools.

Enterprise Manager



Statistics and Wait Events

Statistics are counters of events that happen in the database. These are indicators of the amount of useful work that is being completed. Statistics include counts such as the number of user commits or db file sequential reads. Many statistics have corresponding wait events. There are several statistics that can indicate performance problems that do not have corresponding wait events. The full list of statistics can be found in the V$STATNAME view.

Wait events are statistics that are incremented by a server process or thread to indicate that it had to wait for an event to complete before being able to continue processing. Wait event data reveals various symptoms of problems that might be affecting performance, such as latch contention, buffer contention, and I/O contention.

Remember that these are only symptoms of problems, not the actual causes. The full list of wait events can be found in the V$EVENT_NAME view.


The CLASS column for each statistic contains a number representing one or more statistic classes. The following class numbers are additive:

  • 1: User
  • 2: Redo
  • 4: Enqueue
  • 8: Cache
  • 16: OS
  • 32: Real Application Clusters
  • 64: SQL
  • 128: Debug

For example, a class value of 72 represents a statistic that relates to SQL statements and caching.
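
Because the class values are additive bit flags, you can decode them with BITAND. The sketch below lists statistics whose class includes both the SQL (64) and Cache (8) bits, which covers class 72 and any other combination containing those bits.

SELECT name, class
FROM   v$statname
WHERE  BITAND(class, 64) = 64   -- SQL-related
AND    BITAND(class, 8)  = 8    -- and cache-related
ORDER  BY name;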


Note: Some statistics are populated only if the TIMED_STATISTICS initialization parameter is set to true.

Wait Classes
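
Every wait event belongs to a wait class (Concurrency, User I/O, System I/O, Idle, and so on). A quick way to see the classes and how many events fall into each is to group V$EVENT_NAME by its WAIT_CLASS column, as in this sketch:

SELECT wait_class, COUNT(*) AS event_count
FROM   v$event_name
GROUP  BY wait_class
ORDER  BY event_count DESC;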



Statistic Levels

You determine the level of statistic collection on the database by setting the value of the STATISTICS_LEVEL parameter. The values for this parameter are:
  • BASIC: No advisory or other statistical data is collected. You can manually set other statistic collection parameters such as TIMED_STATISTICS and DB_CACHE_ADVICE. Many of the statistics required for a performance baseline are not collected. Oracle strongly recommends that you do not disable statistic gathering. 
  • TYPICAL: This is the default value. Data is collected for segment-level statistics, timed statistics, and all advisories. The value of other statistic collection parameters is overridden.
  • ALL: Collection is made of all the TYPICAL level data, the timed operating system statistics, and the row source execution statistics. The value of other statistic collection parameters is overridden.

Query V$STATISTICS_LEVEL to determine which other parameters are affected by the STATISTICS_LEVEL parameter.

SELECT statistics_name, activation_level
FROM   v$statistics_level
ORDER  BY 2;

Dynamic Performance Views


  • These views are owned by SYS.
  • Different views are available at different times:
    • The instance has been started.
    • The database is mounted.
    • The database is open.
  • You can query V$FIXED_TABLE to see all the view names.
  • These views are often referred to as “v-dollar views.”
  • Read consistency is not guaranteed on these views because the data is dynamic; there is no locking mechanism on them.
  • The SELECT_CATALOG_ROLE role is required to allow a user to select from the V$ views.


Dynamic Performance Views Usage Examples

a.  What are the SQL statements and their associated number of executions where the CPU time consumed is greater than 200,000 microseconds?

    SELECT sql_text, executions
    FROM v$sqlstats
    WHERE cpu_time > 200000;

b.  What sessions logged in from the EDRSR9P1 computer within the last day?

    SELECT * FROM v$session
    WHERE machine = 'EDRSR9P1'
    AND logon_time > SYSDATE - 1;

c.  What are the session IDs of any sessions that are currently holding a lock that is blocking another user, and how long has that lock been held? (block may be 1 or 0; 1 indicates that this session is the blocker.)

    SELECT sid, ctime
    FROM v$lock WHERE block > 0;
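
To go one step further and see which sessions are being blocked, and by whom, a common approach is a self-join of V$LOCK on the lock identifiers. This is only a sketch; V$SESSION.BLOCKING_SESSION offers a simpler alternative.

    SELECT b.sid   AS blocker_sid,
           w.sid   AS waiter_sid,
           b.ctime AS seconds_lock_held
    FROM   v$lock b
    JOIN   v$lock w
      ON   b.id1 = w.id1 AND b.id2 = w.id2 AND b.type = w.type
    WHERE  b.block = 1
    AND    w.request > 0;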

Time Model Overview


There are many components involved in tuning an Oracle database system, and each has its own set of statistics. How can you measure the expected benefit of a tuning action on the overall system? For example, would overall performance improve if you moved memory from the buffer cache to the shared pool? When you look at the system as a whole, time is the only common ruler for comparison across components. In the Oracle database server, most of the advisories report their findings in units of time. In addition, “time model statistics” are exposed through the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL performance views. This instrumentation helps quantify the effect of each component on overall database operation.

The most important of the time model statistics is DB time. This statistic represents the total time spent in database calls and indicates the total instance workload. It is the sum of the CPU and wait times of all sessions not waiting on idle wait events (non-idle user sessions).

The objective for tuning an Oracle database system could be stated as reducing the time that users spend in performing some action on the database, or simply reducing DB time.

Other time model statistics provide quantitative effects (in time) on specific actions, such as logon operations, hard and soft parses, PL/SQL execution, and Java execution.
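
As an illustration, the following sketch drills from the instance level down to the session level, showing which currently connected user sessions have accumulated the most DB time (V$SESS_TIME_MODEL values are in microseconds):

SELECT * FROM (
  SELECT s.sid, s.username,
         ROUND(t.value/1e6, 1) AS db_time_secs
  FROM   v$sess_time_model t
  JOIN   v$session s ON s.sid = t.sid
  WHERE  t.stat_name = 'DB time'
  AND    s.username IS NOT NULL
  ORDER  BY t.value DESC)
WHERE  rownum <= 10;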




DB Time

DB Time = DB Wait Time + DB CPU Time

Tuning is not just about reducing waits. It aims at improving end-user response time and/or minimizing the average resources used by each request.

Sometimes these go together, but in other cases there is a trade-off (for example, with a parallel query). In general, you can say that tuning is the avoidance of consuming or holding resources in a wasteful manner.

Any request to the database is composed of two distinct segments: a wait time (DB wait time) and a service time (DB CPU time). The wait time is the sum of all the waits for various database instance resources. The CPU time is the sum of the time that is spent actually working on the request. These times are not necessarily composed of one wait and one block of CPU time. Often processes will wait a short time for a DB resource and then run briefly on the CPU, and do this repeatedly.

Tuning consists of reducing or eliminating the wait time and reducing the CPU time. This definition applies to any application type, online transaction processing (OLTP) or data warehouse (DW).

Note: A very busy system shows longer DB CPU times and this can inflate other times.

CPU and Wait Time Tuning Dimensions


Top Wait Events



The Top 5 Timed Foreground Events section of the AWR report is a good place to start when tuning.
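
Between AWR snapshots you can get a similar view (cumulative since instance startup) straight from V$SYSTEM_EVENT. A sketch that excludes the Idle wait class:

SELECT * FROM (
  SELECT event, wait_class, total_waits,
         ROUND(time_waited_micro/1e6) AS seconds_waited
  FROM   v$system_event
  WHERE  wait_class <> 'Idle'
  ORDER  BY time_waited_micro DESC)
WHERE  rownum <= 5;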

Tuning Objectives

The objectives of tuning are:

  • Minimizing response time
  • Increasing throughput (doing more work with fewer resources)
  • Increasing load capabilities
  • Decreasing recovery time


Business requirements affect tuning goals. Performance may be limited by safety concerns, as in the “Decreasing recovery time” goal.

In a business environment where downtime may be measured in hundreds or thousands of dollars per minute, the overhead of protecting the instance from failure and reducing recovery time is more important than user response time.

Performance Tuning Tools

Available tools:

  • Basic: 
    • Time model
    • Top wait events
    • Dynamic performance views and tables
    • Alert log
    • Trace files
    • Enterprise Manager pages
  • Add-in: Statspack --- collects snapshots of statistics (see the sketch after this list)
  • Options:
    • Diagnostics pack
    • Tuning pack (requires Diagnostics Pack)
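
As a minimal Statspack sketch, assuming the PERFSTAT schema has already been installed with spcreate.sql: take a snapshot now, and later generate a report between two snapshot IDs.

-- Record the current statistics as a Statspack snapshot
EXEC perfstat.statspack.snap;

-- Later, produce a report; the script prompts for the begin and end snapshot IDs
-- @?/rdbms/admin/spreport.sql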


Performance Tuning Diagnostics

Diagnostic tools gather and format the following types of performance data:
  • Cumulative statistics:
    • Wait events with time information
      • The raw counts have little meaning until the counts are compared over time. The events that collect the most time tend to be the most important. 
    • Time model
      • The statistics in Oracle Database 11g are correlated by the use of a time model. The time model statistics are based on a percentage of DB time, giving them a common basis for comparison. 
  • Metrics: Statistic rates (counts per unit, such as per second or per transaction)
    • Metrics provide a basis to proactively monitor performance.
    • You can set thresholds on a metric causing an alert to be generated. For example, you can set thresholds for when the reads per millisecond exceed a previously recorded peak value or when the archive log area is 95% full.  
  • Sampled statistics: Active Session History (allows you to look back in time; see the query sketch after this list)
    • Statistics by session
    • Statistics by SQL
    • Statistics by service
    • Other dimensions


General Tuning Session

Tuning sessions follow the same general procedure:

  1. Define the problem and state the goal.
  2. Collect current performance statistics.
  3. Consider some common performance errors.
  4. Build a trial solution.
  5. Implement and measure the change.
  6. Decide: “Did the solution meet the goal?”


    • No? Then go to step 3 and repeat.
    • Yes? Then create a new baseline. 


Effective Tuning Goals


Both tuning goals and SLAs must have three characteristics to be effective. They are:
  • Specific
  • Measurable 
  • Achievable

“Make the instance run as fast as possible” is not specific. A specific goal would be “The month end report suite must complete in less than 4 hours.”

  • A goal that is specific is easily made measurable as well. 
The goal of “user response time to a request is 10 seconds” is easily stated, but
        • Is this for all user requests? 
        • Is it the average response time? 
        • How do you measure average response time? 
Having specific definitions for the words of your goal is essential. By restating the goal as “Average user response time to a particular request is 10 seconds or less,” you can objectively determine when the goal has been met.
  • Achievable goals are possible and within the control of the persons responsible for tuning.

The following are examples of unachievable goals for a typical DBA: 
  • When the goal is to tune the instance to create a high-performance application, but you are not allowed to change the SQL or the data structures, there is a limited amount of tuning that is possible. 

  • When the goal is a response time of 1 second but the network latency between the server and the client is 2 seconds, the goal cannot be met without a change to the network.

You should always establish measurable tuning goals. Without a tuning goal, it is difficult to determine when you have performed enough tuning.

Tuning Methodology

Tuning steps: 

  • Tune the following from the top down:
    • The design before tuning the application code
    • The code before tuning the instance
  • Tune the area with the greatest potential benefit—identify:
    • The longest waits
    • The largest service times
  • Stop tuning when the goal is met.


Check the OS statistics and the general machine health before tuning the instance to be sure that the problem is in the database instance.

How to Tune

The procedures used to tune depend on the tool.
  • Basic tools:
    • Dynamic performance views
    • Statistics
    • Metrics
    • Enterprise Manager pages
  • AWR or Statspack
  • Automatic Database Diagnostic Monitor (ADDM)
  • DBA scripts

Saturday, February 22, 2014

SYSTEM and SYSAUX Tablespaces

  • The SYSTEM and SYSAUX tablespaces are mandatory tablespaces that are created at the time of database creation. They must be online.
  • The SYSTEM tablespace is used for core functionality (for example, data dictionary tables).
  • The auxiliary SYSAUX tablespace is used for additional database components (such as the Enterprise Manager repository and AWR); see the query sketch below.
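
To see which components actually occupy SYSAUX and how much space each uses, you can query V$SYSAUX_OCCUPANTS, for example:

SELECT occupant_name, schema_name,
       ROUND(space_usage_kbytes/1024) AS size_mb
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;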



Logical and Physical Database Structures

  • Segments exist in a tablespace.
  • Segments are collections of extents.
  • Extents are collections of data blocks.
  • Data blocks are mapped to disk blocks.
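
This hierarchy can be inspected in the data dictionary. The sketch below lists the extents and blocks of a single segment; SCOTT.EMP is only a placeholder name.

SELECT segment_name, extent_id, blocks, bytes
FROM   dba_extents
WHERE  owner = 'SCOTT'          -- hypothetical owner
AND    segment_name = 'EMP'     -- hypothetical segment
ORDER  BY extent_id;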