Tuning the Oracle server
This topic explains performance tuning options for the Oracle database.
Although the commands and syntax differ, similar methodologies can also be effective for other databases.
Initial database configuration
The basic memory structures associated with Oracle include:
- System Global Area (SGA): Shared by all server and background processes. The SGA holds:
- Database buffer cache
- Redo log buffer
- Shared pool
- Large pool (if configured)
- Program Global Area (PGA): Private to each server and background process. Each process has one PGA. The PGA holds:
- Stack areas
- Data areas
In Oracle 11g, the following parameters enable automatic memory management to control the SGA and the PGA:
- memory_max_target: Governs the total maximum memory for the SGA and the PGA.
- memory_target: Governs the current sizes of the SGA and the PGA.
In Oracle 10g, there are separate automatic memory management parameters for the SGA and the PGA:
- sga_max_size: Governs the total maximum memory for the SGA.
- sga_target: Governs the current memory for the SGA.
- pga_aggregate_target: Defines the memory size for the PGA.
For Oracle 10g and 11g, the regular memory parameters, such as shared_pool_size, define the minimum size that the database maintains for each subarea of the SGA.
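As an illustration, enabling 11g automatic memory management might look like the following; the sizes shown are hypothetical, so choose values that fit your server's available memory:

```sql
-- Illustrative sizes only; adjust to your hardware
Alter system set memory_max_target=8G scope=spfile;
Alter system set memory_target=6G scope=spfile;
-- A restart is required for memory_max_target to take effect
```

With memory_target set, Oracle redistributes memory between the SGA and the PGA automatically, and parameters such as shared_pool_size act as minimums rather than fixed sizes.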
With BMC Digital Workplace, BMC recommends setting the Oracle parameter cursor_sharing to FORCE if you are using a case-insensitive database and to EXACT if you are using a case-sensitive database.
Alter system set cursor_sharing=FORCE scope=spfile;
BMC Digital Workplace does not provide any bitmap indexes out of the box. However, the optimizer can choose a bitmap access path even without bitmap indexes in the database, by using the CPU-intensive BITMAP CONVERSION FROM/TO ROWIDS operation. To avoid this issue, set _b_tree_bitmap_plans to FALSE:
Alter system set "_b_tree_bitmap_plans"=FALSE scope=spfile;
When used with Oracle 11g or later, BMC Digital Workplace depends on Oracle’s Cost-Based Optimizer (CBO) for performance. When the BMC Digital Workplace server sends an SQL statement, the CBO uses database statistics to determine an optimal execution plan to fetch the required data. By default, Oracle automatically gathers statistics by using a scheduled job (GATHER_STATS_JOB), which runs locally within a maintenance window between 10:00 P.M. and 6:00 A.M. weeknights and all day on weekends.
Query the database to determine when statistics were last gathered. If statistics were not gathered recently, make sure that automatic statistics gathering is enabled. Lack of statistics can lead to poor SQL execution plans and inefficient database processing.
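One way to check when statistics were last gathered is to query the data dictionary. This sketch assumes the BMC Digital Workplace tables live in the ARADMIN schema:

```sql
-- Show the least recently analyzed ARADMIN tables first
SELECT table_name, last_analyzed
  FROM dba_tables
 WHERE owner = 'ARADMIN'
 ORDER BY last_analyzed NULLS FIRST;
```

A NULL value in last_analyzed means statistics have never been gathered for that table.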
In general, the database statistics should reflect the current data volume, data distribution, and other data characteristics. Occasionally, you might need to gather statistics manually. You can follow Oracle's recommendations for gathering database statistics. BMC Digital Workplace does not have any special requirements; however, it depends heavily on database statistics for SQL query performance.
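If a manual gather is needed, DBMS_STATS is the standard interface. For example, assuming the ARADMIN schema:

```sql
-- Gather statistics for the whole schema, letting Oracle choose sample sizes
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'ARADMIN');
```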
Oracle CLOB storage
The default for character large object (CLOB) column storage during a BMC Digital Workplace installation is In Row. That is, the data is stored inline with the row that contains the CLOB column.
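You can confirm the current CLOB storage setting from the data dictionary; the schema name below is an assumption:

```sql
-- IN_ROW = 'YES' means small CLOB values are stored inline with the row
SELECT table_name, column_name, in_row
  FROM dba_lobs
 WHERE owner = 'ARADMIN';
```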
If you need to make the Oracle database case insensitive, set the following parameters in Oracle 11g:
- NLS_COMP: Specifies how the predicates in an SQL statement are compared. Valid values are BINARY, LINGUISTIC, and ANSI.
- NLS_SORT: Specifies the collating sequence for ORDER BY queries. Valid values are BINARY or a linguistic sort name, such as BINARY_CI.
To make Oracle use an existing index on a column that is present in one or more queries:
- Set NLS_COMP and NLS_SORT at the session level or the database level.
- Drop and re-create all ARADMIN indexes as function-based indexes.
Setting NLS_SORT in a session:
SQL> alter session set NLS_SORT=BINARY_CI;
Dropping and re-creating an index as a function-based index:
SQL> drop index <index_name>;
SQL> create index <index_name> on <table_name>(NLSSORT(<column_name>, 'NLS_SORT=BINARY_CI'));
If you still have a Full Table Scan against the table after following this procedure, force the Oracle CBO to pick up the index.
To do this, set the Oracle parameter optimizer_index_cost_adj at the session level:
SQL> alter session set optimizer_index_cost_adj=1;
The default value for optimizer_index_cost_adj is 100.
This setting forces the Oracle CBO to choose index lookups over Full Table Scans. BMC recommends thoroughly assessing the impact of these settings in your development and quality assurance environments before implementing this setting in production.
Oracle database diagnostics
Typically, Oracle 11g diagnostics come in the form of a report called Automatic Workload Repository (AWR). To create an AWR report, a snapshot is taken before and after the period of interest. The report is then generated to show how the system counters (V$ views) changed between the two snapshots. Oracle AWR snapshots are automatically taken every hour unless the DBA changes the interval. Reports are most valuable when they focus on a period of high activity.
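To generate an AWR report between two snapshots, you can run the awrrpt.sql script shipped with the database; the interval change shown is optional:

```sql
-- Run as a DBA user; the script prompts for the begin and end snapshot IDs
@?/rdbms/admin/awrrpt.sql

-- Optionally shorten the snapshot interval (in minutes) around a test window
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30);
```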
The AWR reports include a high-level summary of system usage, specific observed wait events, and a list of high-load SQL statements. Oracle documentation explains how to interpret the reports. The following points provide additional guidelines.
Make sure the Buffer Cache and Shared Pool are well used.
Instance efficiency percentages (target 100%):
- Buffer NoWait %
- Redo NoWait %
- Buffer Hit %
- In-memory Sort %
- Library Hit %
- Soft Parse %
- Execute to Parse %
- Latch Hit %
- Parse CPU to Parse Elapsed %
- % Non-Parse CPU
If the Buffer Cache and Shared Pool are well used, check for blocking wait events.
In the wait event summary at the top of the AWR report, the CPU time event should be near or at the top of the list (ideally 70% or more). CPU time might be low if you are I/O bound. If the top wait events are related to I/O, you might be getting poor SQL execution plans.
The high-load SQL statements, ordered by Buffer Gets, are listed further down the report. If you see statements with very high Buffer Gets per Execute, an index might be missing on a table that the statement accesses. The db file scattered read wait event is associated with Full Table Scans or Index Fast Full Scans and can indicate a need for additional indexes (or up-to-date statistics).
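Outside of AWR, a quick way to spot such statements is to query V$SQLAREA directly. This is a sketch, and the row-limiting clause requires Oracle 12c or later:

```sql
-- Top statements by logical reads, with gets per execution
SELECT sql_id, buffer_gets, executions,
       ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
  FROM v$sqlarea
 ORDER BY buffer_gets DESC
 FETCH FIRST 10 ROWS ONLY;
```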
Typical top timed events to review include LNS wait on SENDREQ, log file parallel write, log file sync, and control file sequential read. For each event, note the average wait (msec) and the percentage of total call time.
To learn about executed SQL statements and their execution plans, use Oracle SQL trace.
When SQL tracing is on, raw trace files are generated in an Oracle dump directory. The tkprof utility can use these trace files to create reports on executed SQL statements. Multithreaded applications might produce multiple trace files at the same time, and the trace files can grow large quickly, so managing the generated trace files can be challenging.
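A typical trace session might look like the following; the file names are placeholders:

```sql
-- Enable tracing for the current session, including wait events
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
-- ... run the statements of interest ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
-- Then, on the database host, format the trace file:
--   tkprof <trace_file>.trc report.txt sort=prsela,exeela,fchela
```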
Other ways to evaluate SQL execution plans include querying the SQL plan views directly, such as V$SQLAREA, V$SQLTEXT, and V$SQL_PLAN. If the execution plan has aged out of V$SQL_PLAN, use the EXPLAIN PLAN and AUTOTRACE commands.
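For example, EXPLAIN PLAN combined with DBMS_XPLAN gives a readable estimated plan; the query shown is a placeholder:

```sql
EXPLAIN PLAN FOR
  SELECT * FROM <table_name> WHERE <column_name> = :value;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```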
When tuning SQL statements, getting the runtime execution plan for the SQL statements is important. Oracle supplies an awrsqrpt.sql script (available in $ORACLE_HOME/rdbms/admin). This script takes the Start and End AWR Snapshot IDs and the SQL ID of the statement to be examined as arguments. If a runtime plan is not available, then use the EXPLAIN PLAN command to get an execution plan for the statement.
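If the cursor is still in the shared pool, DBMS_XPLAN.DISPLAY_CURSOR is another way to see the runtime plan for a given SQL ID; the SQL ID shown is a placeholder:

```sql
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('<sql_id>'));
```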
When two plans differ in the Cost column, the plan with the lower cost is not necessarily better; the SQL with the higher-cost plan might execute faster. To verify this, run the SQL in a tool, such as SQL*Plus, with timing turned on.
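In SQL*Plus, timing can be enabled for the session before running each candidate statement:

```sql
SQL> SET TIMING ON
SQL> -- run the statement; the elapsed time is printed after execution
```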
Adaptive Query Optimization and Dynamic Sampling
One of the major changes to the optimizer in Oracle Database 12c is Adaptive Query Optimization—a set of capabilities that enables the optimizer to make runtime adjustments to execution plans and discover additional information that can lead to better statistics. This new approach is extremely helpful when existing statistics are not sufficient to generate an optimal plan.
During the compilation of an SQL statement, the optimizer decides if the available statistics are sufficient to generate a good execution plan or if it should consider using dynamic sampling. Dynamic sampling is used to compensate for missing or insufficient statistics that would otherwise lead to a very bad plan.
Dynamic statistics have been enhanced in Oracle 12c, but this can lead to longer parse times. Automatic dynamic statistics is used more often in 12c, which can increase parse time in more cases than before.
If you see intermittently slow queries for the Remedy workload on an Oracle 12c database, review these new features and disable them if needed.
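If you decide to disable these features, the relevant parameter in 12c Release 1 is shown below; assess the impact in non-production environments first. (In 12.2, optimizer_adaptive_features was split into optimizer_adaptive_plans and optimizer_adaptive_statistics.)

```sql
-- Oracle 12.1: turn off adaptive query optimization features
Alter system set optimizer_adaptive_features=FALSE scope=both;
-- Keep dynamic sampling at its default level
Alter system set optimizer_dynamic_sampling=2 scope=both;
```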