This documentation supports the 18.08 version of Remedy Deployment.


Tuning the Oracle database server


This topic explains performance tuning options for the Oracle database.

Note

Although the commands and syntax differ, similar methodologies can also be effective for other databases.

Initial database configuration

The basic memory structures associated with Oracle include System Global Area (SGA) and Program Global Area (PGA). The details about SGA and PGA are as follows:

  • SGA: Shared by all server and background processes. The SGA holds the following:
    • Database buffer cache
    • Redo log buffer
    • Shared pool
    • Large pool (if configured)
  • PGA: Private to each server and background process. Each process has one PGA. The PGA holds the following:
    • Stack areas
    • Data areas

In Oracle 11g, the following parameters enable automatic memory management to control both the SGA and the PGA:

  • memory_max_target: Governs the total maximum memory for the SGA and the PGA
  • memory_target: Governs the current total memory for the SGA and the PGA

In Oracle 10g, there are separate automatic memory management parameters for the SGA and the PGA:

  • sga_max_size: Governs the total maximum memory for the SGA
  • sga_target: Governs the current memory for the SGA
  • pga_aggregate_target: Defines the target memory size for the PGA

For Oracle 10g and 11g, the regular memory parameters, such as db_cache_size and shared_pool_size, define the minimum size that the database maintains for each subarea of the SGA.
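As an illustration, these parameters could be set as follows. The values shown are placeholders for a medium-sized system; take actual figures from the sizing table in this topic, and note that changing memory_max_target or sga_max_size requires an instance restart.

```sql
-- Oracle 11g: automatic memory management for the SGA and PGA combined
ALTER SYSTEM SET memory_max_target = 16G SCOPE=SPFILE;
ALTER SYSTEM SET memory_target     = 16G SCOPE=SPFILE;

-- Oracle 10g: separate automatic memory management for the SGA and PGA
ALTER SYSTEM SET sga_max_size         = 10G SCOPE=SPFILE;
ALTER SYSTEM SET sga_target           = 10G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 6G  SCOPE=SPFILE;
```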

The following table lists the capacity recommendations for small, medium, and large Oracle databases.

Note:

The following size estimates should be used only as a starting point. Conduct benchmarks in your environment, because sizing varies with your workload mix, data volumes, and data composition.

Oracle database recommendations

Component | Small | Medium | Large
--------- | ----- | ------ | -----
BMC Remedy AR Database Server | 8 CPU cores; 16 GB RAM; 200 GB disk space (10 GB used by RSSO) | 16 CPU cores; 32 GB RAM; 200 GB disk space (10 GB used by RSSO) | 32 CPU cores; 64 GB RAM; 200 GB disk space (10 GB used by RSSO)
memory_target in GB (11g and later) | 8 | 16 | 32
sga_target in GB (10g) | 5 | 10 | 20
db_cache_size in GB | 3 | 7 | 15
shared_pool_size in GB | 2 | 3 | 5
pga_aggregate_target in GB | 3 | 6 | 12
cursor_sharing (case-sensitive database) | EXACT | EXACT | EXACT
cursor_sharing (case-insensitive database, Remedy 9.1.04 and before) | EXACT | EXACT | EXACT
cursor_sharing (case-insensitive database, Remedy 9.1.4.002 and later) | EXACT | EXACT | EXACT
_b_tree_bitmap_plans | False | False | False

For more information, see https://communities.bmc.com/docs/DOC-48031.

Bitmap Indexes

BMC Remedy AR System applications do not provide any bitmap indexes out of the box. However, the optimizer can choose a bitmap access path even without bitmap indexes in the database, using the CPU-intensive BITMAP CONVERSION FROM/TO ROWIDS operations. Set _b_tree_bitmap_plans to False to avoid this issue.

ALTER SYSTEM SET "_b_tree_bitmap_plans"=false SCOPE=SPFILE;

Cost-Based Optimizer

When used with Oracle 11g or later, BMC Remedy AR System applications depend on Oracle’s Cost-Based Optimizer (CBO) for performance. When the AR System server sends an SQL statement, the CBO uses database statistics to determine an optimal execution plan to fetch the required data. By default, Oracle automatically gathers statistics using a scheduled job (GATHER_STATS_JOB), which runs locally within a maintenance window between 10:00 P.M. and 6:00 A.M. weeknights and all day on weekends.

Query the database to determine when statistics were last gathered. If statistics were not gathered recently, make sure that automatic statistics gathering is enabled. Lack of statistics can lead to poor SQL execution plans and inefficient database processing.
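As a sketch, the checks above could look like the following from SQL*Plus. ARADMIN is assumed here as the AR System schema owner; substitute your own.

```sql
-- When were optimizer statistics last gathered for the AR System schema?
SELECT table_name, last_analyzed
  FROM dba_tables
 WHERE owner = 'ARADMIN'
 ORDER BY last_analyzed NULLS FIRST;

-- Is the automatic statistics-gathering task enabled? (11g and later)
SELECT client_name, status
  FROM dba_autotask_client
 WHERE client_name = 'auto optimizer stats collection';

-- Gather statistics manually if they are stale or missing
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'ARADMIN');
```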

In general, the database statistics should reflect the current data volume, data distribution, and other data characteristics. There may be times when you need to gather statistics manually; follow Oracle's recommendations for gathering database statistics. BMC Remedy AR System applications have no special requirements; however, they depend heavily on database statistics for SQL query performance.

Oracle CLOB storage

The default for character large object (CLOB) column storage during a BMC Remedy AR System server installation is In Row; that is, the data is stored inline with the row that contains the CLOB column.

If high database space growth is a concern and your CLOBs are stored Out of Row, convert CLOB storage to In Row.
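A sketch of how to check and convert CLOB storage. The schema owner (ARADMIN) and the table and column names are illustrative; note that MOVE rebuilds the segment, and the table's indexes must be rebuilt afterwards.

```sql
-- List CLOB columns in the AR System schema and how they are stored
SELECT table_name, column_name, in_row
  FROM dba_lobs
 WHERE owner = 'ARADMIN';

-- Convert a LOB column to In Row storage
ALTER TABLE ARADMIN.T1253 MOVE LOB (C456) STORE AS (ENABLE STORAGE IN ROW);
```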

Oracle case-insensitivity

By default, the Oracle database is case-sensitive. This section discusses how you can enable case-insensitive searching for fixed-length text fields in BMC Remedy AR System using Oracle.

To enable case-insensitive searching for fixed-length text fields in BMC Remedy AR System:

  1. Configure the AR System server.
    1. Add the following configuration to the AR System server configuration file (ar.cfg or ar.conf):
      Db-Case-Insensitive: T
      For details, see the ar.cfg or ar.conf options C-D page in the AR System online documentation.
      This setting causes the special session parameters NLS_SORT and NLS_COMP to be set to support case-insensitive searches:

      ALTER SESSION SET NLS_COMP=LINGUISTIC
      ALTER SESSION SET NLS_SORT=BINARY_CI
    2. Add the following configuration to the AR System server configuration file (ar.cfg or ar.conf):
      Db-Functional-Index: T
      For details, see the ar.cfg or ar.conf options C-D page in the AR System online documentation.

    By default, the AR System server creates regular indexes when you add an index to a form. To support case-insensitive searches on Oracle databases, all indexes based on character fields must be functional indexes instead of regular indexes. Setting this parameter causes the AR System server to create functional indexes when a new index is added to a form, which helps avoid the performance degradation that can result from not using database indexes.
  2. Restart the AR System server to ensure that the configuration changes take effect.
    These configuration changes cause the AR System server to configure Oracle to support case-insensitive searching. This is applicable to all searches, including those generated by workflow (active links, filters, escalations, and so on).
  3. Recreate indexes in the Oracle database as functional indexes.
    This step requires the expertise of a DBA to rebuild all character indexes as linguistic functional indexes. Refer to knowledge article KA 000102992 for details. The article includes a PL/SQL script that can help the DBA automate the process.
  4. Test.
    It is important to test these steps in a non-production environment before implementing them in the live database. Changing to Oracle case-insensitivity causes a different set of execution plans to be used, and this may have a noticeable effect on performance. This also produces changes in behavior because upper-case and lower-case letters are no longer distinguished.
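As a sketch of step 3, a regular character-field index can be rebuilt as a linguistic functional index as follows. The schema, table, column, and index names are illustrative; the PL/SQL script in the knowledge article automates this across the schema.

```sql
-- Rebuild a regular index on a character field as a linguistic functional
-- index that matches the NLS_SORT=BINARY_CI session setting
DROP INDEX ARADMIN.T1253_C8_IDX;
CREATE INDEX ARADMIN.T1253_C8_IDX
  ON ARADMIN.T1253 (NLSSORT(C8, 'NLS_SORT=BINARY_CI'));
```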

For more information about the Db-Case-Insensitive setting, see the Configuration settings C-D page in the AR System online documentation. Also refer to knowledge article KA 000102992.

Oracle database diagnostics

Oracle 11g diagnostics typically come in the form of an Automatic Workload Repository (AWR) report. To create an AWR report, a snapshot is taken before and after the period of interest; the report is then generated to show how the system counters (V$ views) changed between the two snapshots. Oracle AWR snapshots are taken automatically every hour unless the DBA changes the interval. Reports are most valuable when they focus on a period of high activity.
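A snapshot pair and report can also be produced manually from SQL*Plus; the following sketch assumes DBA privileges:

```sql
-- Take a snapshot immediately before and after the period of interest
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ... run the workload of interest ...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Generate the report; the script prompts for the begin and end snapshot IDs
@?/rdbms/admin/awrrpt.sql
```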

The AWR reports include a high-level summary of system usage, specific observed wait events, and a list of high-load SQL statements. Oracle documentation explains how to interpret the reports. The following points provide additional guidelines.

  • Make sure the Buffer Cache and Shared Pool are well used. 

    Instance efficiency percentages (target 100%)

    Metric                       | Value
    -----------------------------|-------
    Buffer NoWait %              | 100.00
    Redo NoWait %                | 100.00
    Buffer Hit %                 | 99.99
    In-memory Sort %             | 100.00
    Library Hit %                | 98.99
    Soft Parse %                 | 99.51
    Execute to Parse %           | 3.51
    Latch Hit %                  | 100.00
    Parse CPU to Parse Elapsed % | 86.68
    % Non-Parse CPU              | 93.57

  • If the Buffer Cache and Shared Pool are well used, check for blocking wait events. 

    In the wait event summary at the top of the AWR report, the CPU time event should be near or at the top of the list (ideally 70% or more). CPU time is typically low if you are I/O bound. If the top wait events relate to I/O, you might be getting poor SQL execution plans.

    The high-load SQL statements, ordered by Buffer Gets, are listed further down the report. If you see statements with very high Buffer Gets per Execute, an index might be missing on a table that the statement accesses. 'db file scattered read' wait events are associated with Full Table Scans or Index Fast Full Scans and can indicate a need for additional indexes or up-to-date statistics.

    The following table lists the top timed events: 

    Event                        | Waits  | Time (sec) | Average wait (msec) | % total call time | Wait class
    -----------------------------|--------|------------|---------------------|-------------------|-----------
    CPU time                     | n/a    | 94         | n/a                 | 95.7              | n/a
    LNS wait on SENDREQ          | 713    | 51         | 71                  | 51.8              | Network
    Log file parallel write      | 15,869 | 10         | 1                   | 10.6              | System I/O
    Log file sync                | 14,965 | 10         | 1                   | 10.4              | Commit
    Control file sequential read | 21,781 | 10         | 0                   | 10.0              | System I/O

  • Use Oracle SQL tracing and the tkprof utility.

    To learn which SQL statements were executed and which execution plans they used, enable SQL tracing and format the resulting trace files with the tkprof utility.

When SQL tracing is on, raw trace files are generated in an Oracle dump directory. The tkprof utility can use these trace files to create reports on executed SQL statements. Multi-threaded applications might produce multiple trace files at the same time, and the trace files might get large quickly, so managing the generated trace files can be challenging.
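For example, tracing could be enabled for a single test session and the resulting file formatted with tkprof. The trace file name and sort options below are illustrative.

```sql
-- Enable SQL trace for the current session only
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statements of interest ...
ALTER SESSION SET sql_trace = FALSE;

-- Then, from the operating system shell, format the raw trace file:
--   tkprof <instance>_ora_<pid>.trc report.txt sort=exeela,fchela
```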

Other ways to evaluate SQL execution plans include directly querying the SQL-related dynamic performance views, such as V$SQLAREA, V$SQLTEXT, and V$SQL_PLAN. If the execution plan has been aged out of V$SQL_PLAN, use the EXPLAIN PLAN and AUTOTRACE commands.
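A minimal sketch, using a statement pattern from the AR System logs shown later in this topic (the literal values are illustrative):

```sql
-- Produce an estimated plan when the runtime plan is no longer in V$SQL_PLAN
EXPLAIN PLAN FOR
  SELECT C456 FROM T1253 WHERE C1 = '000000000004902';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Or, in SQL*Plus, display plans and statistics automatically:
SET AUTOTRACE ON
```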

When tuning SQL statements, it is important to get the runtime execution plan for the statements in question. Oracle supplies an awrsqrpt.sql script (available in $ORACLE_HOME/rdbms/admin) that takes the start and end AWR snapshot IDs and the SQL ID of the statement to be examined as arguments. If a runtime plan is not available, use the EXPLAIN PLAN command to get an execution plan for the statement.

When two plans differ in the Cost column, the plan with the lower cost is not necessarily better; the SQL with the higher cost might actually execute faster. To verify this, run the SQL in a tool such as SQL*Plus with timing turned on.

Adaptive Query Optimization and Dynamic Sampling


One of the major changes to the optimizer in Oracle Database 12c is Adaptive Query Optimization. Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics. This new approach is extremely helpful when existing statistics are not sufficient to generate an optimal plan.

During the compilation of a SQL statement, the optimizer decides if the available statistics are sufficient to generate a good execution plan or if it should consider using dynamic sampling. Dynamic sampling is used to compensate for missing or insufficient statistics that would otherwise lead to a very bad plan.

Dynamic statistics have been enhanced in Oracle 12c, but this can lead to longer parse times. Automatic dynamic statistics are used more often in 12c, which can increase parse time in more cases than before.

Note

If you see intermittently slow queries for a Remedy workload on an Oracle 12c database, review these new Oracle 12c features and disable them if needed.
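If testing shows that these features hurt your workload, they can be disabled at the instance level. A sketch follows; the parameter names changed between releases, so verify them against your exact Oracle version.

```sql
-- Oracle 12.1: one parameter controls all adaptive features
ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;

-- Oracle 12.2 and later: the control is split into two parameters
ALTER SYSTEM SET optimizer_adaptive_plans      = FALSE SCOPE=BOTH;
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE=BOTH;
```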

Oracle performance case study

Issue

A single user submitted an incident ticket, and it took 18 seconds to process.

Symptoms

  • The database trace accounted for less than 2 seconds of time.
  • The arserverd process accounted for 2 to 3 seconds of CPU time.
  • The SQL log from AR System shows SQL time at approximately 1 second.

Diagnostic steps

  1. Reduce the number of connections to the database.
  2. Measure the CPU time of all the relevant processes on the AR System server and Oracle database as follows:
    1. Bring up the environment (with limited connections from the AR System server to Oracle).
    2. Execute the transaction once to warm up the environment.
    3. Start the transaction again up to the point before submission.
    4. Record the process status (ps) time for arserverd and Oracle processes.
    5. Click SUBMIT.
    6. Run the top command and verify which Oracle process is consuming time.
    7. Record the process status time for arserverd and Oracle processes.

Observations and analysis

  • The arserverd process took little time; the Oracle process consumed most of the CPU time (close to 18 seconds).
  • A combined log for API calls, SQL, and filter processing was informative. There was a gap of 0.8 seconds on every LOB write. Adding up all the LOB writes at different places in the log file accounted for over 15 seconds.

    * Thu Dec 07 2010 08:13:54.2535 */OK 
    * Tue Dec 07 2010 08:13:54.2535 */SELECT C456 FROM T1253 WHERE C1 = '000000000004902' FOR UPDATE 
    * Tue Dec 07 2010 08:13:54.2536 */Set LOB into the above row... 
    * Tue Dec 07 2010 08:13:54.2550 */OK 
    * Tue Dec 07 2010 08:13:55.0727 */UPDATE T1253 SET C459 = EMPTY_CLOB() WHERE C1 = '000000000004902' 
    * Tue Dec 07 2010 08:13:55.0759 */OK 
  • The Oracle raw trace file showed a similar pattern during the direct write operation.

    WAIT #4: nam='SQL*Net message from client' ela= 155 driver id=1650815232 #bytes=1 p3=0 obj#=-1 
    tim=295710595994 
    WAIT #9: nam='direct path write' ela= 0 file number=7 first dba=396004 block cnt=1 obj#=-1  
    tim=295710596586 
    WAIT #9: nam='direct path write' ela= 1 file number=7 first dba=396004 block cnt=1 obj#=-1 
    tim=295710596738 
    WAIT #9: nam='direct path write' ela= 52 file number=7 first dba=396004 block cnt=1 obj#=-1 
    tim=295710596787WAIT #0: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 
    p3=0 obj#=-1 tim=295711477135 
    WAIT #0: nam='SQL*Net message from client' ela= 146 driver id=1650815232 #bytes=1 p3=0 obj#=-1 
    tim=295711477427 
    STAT #4 id=1 cnt=1 pid=0 pos=1 obj=0 op='FOR UPDATE (cr=6 pr=0 pw=0 time=85 us)' 
  • Oracle Support had a note with a similar issue: 

    Note 393780.1, Poor Performance Writing Blobs. Symptoms: a job that writes BLOBs takes a long time and consumes a lot of CPU.

    The 'pstack' output of the spinning process shows: 
    __fdsync ksfdsyncdata kcflsync kcblsy kcblcn kcblrr kcbldrcls kdlpdba ktsplbfmb ktsplbrecl ktspgsp_cbk1 kdlgsp kdlsfb kdl_write1 kokliccb koklcre kokleva evaopn2 insolev insbrp insrow insdrv inscovexe insExecStmtExecIniEngine insexe opiexe kpoal8 opiodr ttcpip opitsk opiino opiodr opidrv sou2o opimai_real main _start 

    Cause: The __fdsync() system calls in particular indicate that the problem is related to the operating system.
  • Technical note from Veritas: 

    Oracle Import takes longer when using buffered VxFS than using unbuffered VxFS. 

    Loading data into a database using the import utility may be slower with buffered VxFS. Double buffering can be avoided by enabling Quick I/O for the VERITAS File System (VxFS).

    Quick I/O allows regular files built on VxFS to be accessed as a raw device, bypassing normal file system buffering and allowing direct I/O. Double buffering does not occur when Quick I/O is used.

Recommendation

Use Quick I/O or raw devices.
