Cleaning up the BMC Server Automation database


Performing cleanup on your BMC Server Automation database ensures that your database and file server contain only the information you need, and that information that is no longer needed or useful is purged.

What is BMC Server Automation database cleanup?

As you use the various functions available in BMC Server Automation (compliance, patch, provisioning, deploy, snapshot, and so on), data is generated for the corresponding jobs and objects and stored in the BMC Server Automation database and file server. This data grows over time, and it is a good practice to regularly delete it if you do not need it for historical or reporting purposes. Database cleanup involves removing BMC Server Automation database data that is no longer in use and will not be used in the future.

Not running DB Cleanup regularly can result in the following issues:

  • Database size increasing with old data and quickly using up your allocated DB disk space
  • Degradation in the performance of the BMC Server Automation user interface
  • DB Cleanup attempted after a long gap can take a long time, sometimes requiring extended downtime of your BMC Server Automation environment, which in many cases is not affordable

If you remove unused data periodically, the database size stays under control, BMC Server Automation performs better, and successive cleanups complete in shorter durations.

How to set up BMC Server Automation DB cleanup?

To ensure that your BMC Server Automation database does not grow to an unmanageable size, you must proactively set up DB Cleanup and run the out-of-the-box scripts and jobs at regular intervals.

The first step in DB Cleanup setup is to configure a data retention policy for Job Run data and 'Auto-Generated' objects.

What is a DB retention policy?

A retention policy is the period of time for which you need to keep data in the BMC Server Automation database.

For example, suppose your organization has a policy to keep the compliance data from the last 60 days. In this case, set the retention policy to 60 days so that any data older than 60 days is deleted by the cleanup operation.

How to set a retention policy in BMC Server Automation?

The retention policy is set in two places.

  1. The default time (in days) is set in the RCP console, in the Property Dictionary, on the Job class. The property can also be set at the various Job subtype (class) levels if, for example, you want a different retention policy for NSH Script Jobs than for Patching Jobs. Note that this affects new jobs only; you must update any existing job objects with the new default settings. The RESULT_RETENTION_TIME property defines how long job runs are kept for the job. This property also effectively limits how long the Job Run log and Job Result data are kept, because those are children of the job run. (More information about this property is provided later on this page.)
  2. For auto-generated depot objects and jobs, the retention time is defined by the Cleanup:AutoGeneratedRetentionTime setting in the blasadmin utility. To set this value (in days), run blasadmin set Cleanup AutoGeneratedRetentionTime <number of days> on the Application Server. Auto-generated objects are any objects with the AUTO_GENERATED property set to true; one example is the remediation artifacts associated with Patching Jobs. If the AUTO_GENERATED property is set to true on a BLPackage or other depot object, any associated jobs are soft-deleted when the blcli Delete executeRetentionPolicy command is executed, due to the dependency relationship, so the AUTO_GENERATED property need not be set at the Job level.

You must also run blasadmin set Cleanup EnableRetentionPolicy true and restart the Application Server service for these options to take effect.
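
For example, to keep auto-generated objects for 30 days (an illustrative value) and enable the retention policy, run the following on the Application Server and then restart the service:

blasadmin set Cleanup AutoGeneratedRetentionTime 30
blasadmin set Cleanup EnableRetentionPolicy true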

How to apply a retention policy?

After you define the retention policy, you need to apply it to the BMC Server Automation DB. Applying the policy marks all the necessary objects in the DB for deletion, so that they are deleted when the DB Cleanup operation runs. This step is mandatory before you run DB Cleanup. Apply the retention policy to the BMC Server Automation DB by running the following BLCLI command:

blcli Delete executeRetentionPolicy

This command is run as part of the built-in cleanup script discussed later, but this is the command that is actually executed.

What does the Retention policy blcli command do?

This command finds the records in the database that are to be deleted and marks them. This process is called soft deletion: the data is marked for deletion but not yet removed.

In the BMC Server Automation DB, historical data is the data related to job run logs, results, and schedules, which is stored in the job result, job event, compliance result, audit result, snapshot result, and job schedule tables.

These types of data are typically responsible for the bulk of the database growth on a daily basis, with the Job Run Event Log data (JobRunEvent) and the Audit Trail data (AuditTrail) usually consuming the most space.

What are Job Run, Job Result, and Job Run Event?

Job Run and Job Run Event (Job Run Log) data are two different things and can be retained differently, as noted. The implication is that it is possible to delete Job Run Event (Job Run Log) data while still leaving the record of the Job Run in the GUI and database.

For example, Job Result data can still be present even after the Job Run Event (Job Run Log) data has been cleaned up.

It is entirely acceptable to remove Job Run Event/Log data and leave the Job Run; it depends on the requirements of your environment.

What is the difference between Cleanup and Object Deletion?

When you delete an object in the BMC Server Automation GUI, you are soft-deleting the object. This sets the is_deleted column to 1 for the row in the database table for the latest version of the object, and possibly for other dependent objects. The object still exists in the database, but it is not accessible in the GUI. Cleanup finds the is_deleted = 1 objects and removes their entries from the database altogether.

Can I un-delete soft deleted objects?

If it is just a matter of flipping is_deleted from 1 to 0, can you recover a BLPackage that you did not mean to delete? The answer is that this is not a supported feature. While you may be able to do it, even under the guidance of BMC Support, it is not supported. In some cases it is not enough to flip the value on one row, because additional associated rows must also be updated.

After soft deleting an object, when will the hard delete take place?

Soft deletion takes place when you delete through the GUI, or when executeRetentionPolicy runs as noted above (for example, to delete a job run that is older than your retention policy). Hard deletion of soft-deleted objects takes place when cleanupHistoricalData (for the historical data types), cleanupDatabase (for workspace objects), and cleanupFileServer (for file payloads on the file server and corresponding database entries) run.

Is all this true for a depot object or a snapshot, or something that has a corresponding object on the file server or the application server?

The soft delete will happen but the hard delete will not. For example, if you take a snapshot, that snapshot serves as the baseline, and you run your snapshot job every day, resulting in deltas based on the baseline. The job run for the baseline gets soft deleted (is_deleted = 1).

After cleanupHistoricalData SnapshotResult runs, cleanupDatabase runs and sees that the baseline is deleted, but then sees that it is still used by the delta snapshots, and perhaps an audit job or two, so it does not delete it. In addition, some snapshot-related data, such as file contents (bl_file), was not handled at all in 7.x cleanup, so while some snapshot-related data was touched by cleanupDatabase, not everything would be.

In 8.2 there is a new command, hardDeleteAllSharedObjects, which handles the bl_file objects and other 'shared' objects and flags them so that cleanupFileServer eventually purges any objects on the file server file system associated with them.

Additional Cleanup Commands

Cleanup can be broken down into a few different areas:

  • Historical Data — Job Run Logs, Job Results (Snapshot, Audit, Compliance), prior Job Schedules and Object Audit Trail objects. These types of data typically consume the bulk of the space in the database and are responsible for the bulk of the database growth on a daily basis, with the Job Run Event Log data (JobRunEvent) and the Audit Trail data (AuditTrail) typically consuming the most space.
  • Soft-deleted Data — Any time an object is deleted in the GUI or by the retention policy, that data must eventually be hard-deleted from the database by the cleanupDatabase command.
  • File Server Data — After objects are removed from the database, the actual underlying file system objects need to be removed from the file server. Note that the cleanupDatabase step must complete before the cleanupFileServer is run.
  • Shared Object Data — This data is typically related to Snapshot and Audit data, and can consume a large amount of database space, especially if Inventory Snapshots (or any other environment-wide Snapshot Jobs or Audit Jobs) are run on a frequent basis.

Cleanup JobRunEvent

Command used: blcli Delete cleanupHistoricalData JobRunEvent

This command deletes the data from the JOB_RUN_EVENT table. JOB_RUN_EVENT holds all the job run event log data (right-click a Job Run and select Show Log). There have been instances of 2-4 million entries per day in this table, so it is highly recommended to run this command periodically.

This command does not honor the RESULT_RETENTION_TIME set on the Job; instead it uses the default retention time set in the HK_CLEANUP_PROCEDURE if no retention time is passed to the command. If RESULT_RETENTION_TIME is set to 30 days and you run blcli Delete cleanupHistoricalData JobRunEvent, the default 14-day retention value is used and you can end up with Job Run entries in the GUI that lack any Job Run Log data.
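
The retention time can also be passed explicitly. For example, to match a 30-day job retention setting (a sketch; the value is illustrative, and it assumes the retention time, in days, is passed as the next argument):

blcli Delete cleanupHistoricalData JobRunEvent 30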

Cleanup AuditTrail

Command used: blcli Delete cleanupHistoricalData AuditTrail

This command deletes the data from the AUDIT_TRAIL table. This table stores all the Authorization audit entries and is known to grow large unless cleaned up regularly.

Cleanup JobSchedule

Command used: blcli Delete cleanupHistoricalData JobSchedule

This command deletes the data from the SCHEDULE and related tables.

Cleanup AuditResult

Command used: blcli Delete cleanupHistoricalData AuditResult

This command cleans up the audit results from the database. This command honors the RESULT_RETENTION_TIME setting on the job and ignores the retention time passed as an argument.

Cleanup SnapshotResult

Command used: blcli Delete cleanupHistoricalData SnapshotResult

This command deletes the snapshot related data from the database. This command honors the RESULT_RETENTION_TIME setting on the job and ignores the retention time passed as an argument.

Cleanup ComplianceResult

Command used: blcli Delete cleanupHistoricalData ComplianceResult

This command deletes the data from compliance results tables. This command honors the RESULT_RETENTION_TIME setting on the job and ignores the retention time passed as an argument.

Cleanup database

Command used: blcli Delete cleanupDatabase

cleanupDatabase cleans up the snapshot, audit, and compliance job results. It also cleans up other BMC Server Automation objects and jobs that are marked for soft delete in the BMC Server Automation DB. cleanupDatabase uses the default retention policy set in the UI, as enforced by the Delete executeRetentionPolicy command.

Note that certain objects are not deleted due to potential object dependencies. The cleanup process deletes all top-level model objects of the classes listed in the hk_allowed_classes database table, as well as all their child classes (which are not listed in this table). BMC recommends that you do NOT modify the contents of the hk_allowed_classes table.

Cleanup fileserver

Command used: blcli Delete cleanupFileServer

This command removes data from the file server that is no longer required. It is highly recommended that you run this command after all of the above commands have executed, to free up space on your file server. When all references to a file have been successfully deleted, the file is deleted from the file server.

Cleanup SharedObjects

Command used: blcli Delete hardDeleteAllSharedObjects

This command deletes shared objects from the database. Shared object data is data that is shared across multiple modules, such as ACL policies, config objects, and file contents for Snapshot and Audit Jobs. Shared objects are deleted only if all of the referring objects were successfully deleted by cleanupDatabase.

As with cleanupDatabase, certain objects are not deleted due to potential object dependencies; see the note about the hk_allowed_classes table above.

Cleanup App Server Cache

Command used: blcli Delete cleanupAllAppServerCaches <retention period>

This command deletes old temporary files on all the Application Servers that are currently up and running (and accessible).
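
For example, using the 14-day retention value from the recommended schedule later on this page:

blcli Delete cleanupAllAppServerCaches 14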

Cleanup Agent

Command used: blcli Delete cleanupAgent <retention period>

This command deletes files related to old deployments. This is not directly related to database cleanup.
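
For example, assuming the retention period is given in days (30 is an illustrative value):

blcli Delete cleanupAgent 30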

Cleanup Repeater Server

Command used: blcli Delete cleanupRepeater <repeater server name> <retention time in days> <maximum cache size in MB>

This command cleans up files from the staging directory of a repeater server. It removes old files until the desired maximum cache size is reached or until only files remain that match the retention time criteria. This command does not delete the lost+found directory; it preserves that directory and any files it contains. For other directories, the command deletes the directory and any files it contains.
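
For example, to keep 14 days of files and cap the staging cache at 2048 MB on a hypothetical repeater named repeater01:

blcli Delete cleanupRepeater repeater01 14 2048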

Running Cleanup

After you set the retention policy and execute it, you need to regularly run the DB Cleanup provided out of the box in BMC Server Automation: the BSA Recommended Database Cleanup Job (an NSH Script Job).

Your DB Cleanup schedule depends on your reporting needs. Customers usually run the ETL job that populates the BDSSA data warehouse from the BMC Server Automation DB before they run DB Cleanup, to ensure their old data is in the BDSSA warehouse DB for reporting.

Assuming the ETL operation is done, you run the BSA Recommended Database Cleanup Job from the BMC Server Automation console to perform DB Cleanup. This Job runs the cleanup commands described earlier on this page.

Prior to product version 8.2, no built-in object was available for performing cleanup, and the user had to create NSH Scripts and Jobs to call the underlying BLCLI commands that run cleanup. As of version 8.2, a built-in NSH Script Object is available. This object contains the required BLCLI commands that were discussed above and takes a number of arguments to determine what to run. For more information about using this out-of-the-box BMC Server Automation job and its parameters, see Changing Database Cleanup script options and commands.

Recommendations for Running Cleanup in a large Operational Environment

Recommended Schedule

Based on experience in customer environments, we have created some recommendations on how and when to run a database cleanup in a BMC Server Automation environment. The general recommendation is to run the historical cleanup on a daily basis (for example, every day of the week except Sunday) and then run the hard delete cleanups (hardDeleteAllSharedObjects, cleanupDatabase, cleanupFileServer) on a weekly basis (for example, every Sunday). Also, if you are using the BMC BladeLogic Decision Support for Server Automation product (the reporting application for BMC Server Automation), ensure that the ETL does not run during the 'hard delete' cleanup run.

Create the objects noted in the table below. For 8.0/8.1, you create NSH Scripts and Jobs that contain the command noted. For 8.2 you create NSH Script Jobs based on the built-in NSH Script Job for cleanup (labeled BSA Recommended Database Cleanup Job in the BMC Maintenance folder in the BSA Jobs workspace), and set the Execution Mode as noted (see Changing Database Cleanup script options and commands for detailed instructions on this process).

Job Name | 8.0/8.1 Script Content | 8.2 Job ExecutionMode Option
Cleanup - Retention | blcli Delete executeRetentionPolicy | RETENTION
Cleanup - JobRunEvent | blcli Delete cleanupHistoricalData JobRunEvent | HISTORY_O JobRunEvent
Cleanup - AuditTrail | blcli Delete cleanupHistoricalData AuditTrail | HISTORY_O AuditTrail
Cleanup - SnapshotResult | blcli Delete cleanupHistoricalData SnapshotResult | HISTORY_O SnapshotResult
Cleanup - AuditResult | blcli Delete cleanupHistoricalData AuditResult | HISTORY_O AuditResult
Cleanup - ComplianceResult | blcli Delete cleanupHistoricalData ComplianceResult | HISTORY_O ComplianceResult
Cleanup - JobSchedule | blcli Delete cleanupHistoricalData JobSchedule | HISTORY_O JobSchedule
Cleanup - Database | blcli Delete cleanupDatabase 1 null <duration> | CLEAN_DB
Cleanup - SharedObjects | blcli Delete hardDeleteAllSharedObjects (8.1 SP4+ only) | CLEAN_OBJECTS or CLEAN_SHARED_OBJECTS
Cleanup - FileServer | blcli Delete cleanupFileServer | CLEAN_FS
Cleanup - AppServerCaches | blcli Delete cleanupAllAppServerCaches 14 | CLEAN_ALL_AS

Set up the following Jobs and execution schedules:

  • Daily 'Historical' Cleanup:
    A Batch Job named "Daily Cleanup". This job contains the following jobs, set to run sequentially:
    • Cleanup Retention
    • Batch Job "Cleanup Historical" - with the following jobs set to run in parallel:
      • Batch Job "Cleanup Result" - with the following cleanups set to run sequentially:
        • Cleanup AuditResult
        • Cleanup SnapshotResult
        • Cleanup ComplianceResult
      • Cleanup JobRunEvent
      • Cleanup AuditTrail
      • Cleanup JobSchedule
    • Cleanup AppServerCaches
  • Weekly 'Hard Delete' Cleanup:
    A Batch Job named "Weekly Cleanup". This job contains the following jobs, set to run sequentially:
    • Cleanup Retention
    • Cleanup Database
    • Cleanup Shared Objects (8.1 SP4+ only)
    • Cleanup FileServer
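
For 8.0/8.1 environments, the weekly 'hard delete' sequence above can also be captured in a single NSH script along the following lines (a sketch; keep the <duration> placeholder until you choose a value, and note that hardDeleteAllSharedObjects requires 8.1 SP4 or later):

#!/bin/nsh
# Weekly 'hard delete' cleanup sketch for 8.0/8.1, run sequentially.
blcli Delete executeRetentionPolicy
blcli Delete cleanupDatabase 1 null <duration>
blcli Delete hardDeleteAllSharedObjects
blcli Delete cleanupFileServer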

Note

For BMC Server Automation 8.1 with an MS SQL database back end, run the JobRunEvent cleanup as follows:

blcli Delete cleanupHistoricalData JobRunEvent 1> blcli.out

This lets the cleanup complete without resource contention in the DB, so the Java process does not run into a timeout issue. This defect has been encountered only with MS SQL Server, not with an Oracle database.

Note

In the script provided in BMC Server Automation 8.2, modify how Delete cleanupDatabase is called: it should be called with a lower logging level to reduce the possibility of causing blocking in the database. Around line 698 in the file, change

blcli -n false Delete cleanupDatabase $CONTINUE_ON_ERR $dbDuration

to

blcli -n false Delete cleanupDatabase 1 $CONTINUE_ON_ERR $dbDuration

This option is available in 8.1 SP5 as well. In BMC Server Automation 8.1 SP4 or earlier, change all occurrences of

Delete cleanupDatabase 4

to

Delete cleanupDatabase 1

in the <install dir>/NSH/br/xml/cli/Delete-Additional.xml on your application servers.
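
On a Unix Application Server, one quick way to apply this change is a sed one-liner (a sketch; back up the file first, and set INSTALL_DIR to your install location, for example /opt/bmc/bladelogic):

sed -i.bak 's/Delete cleanupDatabase 4/Delete cleanupDatabase 1/g' "$INSTALL_DIR/NSH/br/xml/cli/Delete-Additional.xml"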

Setting the MaxObjectsPurgedPerTransaction

The default value of this blasadmin setting is 1000. It determines how many top-level objects are deleted in one batch; if there are more objects than this, cleanup runs another batch. This prevents the cleanup from spending so much time in deletes and updates that it blocks other transactions (jobs) in the database.

This number can be misunderstood. One batch of 1000 objects also deletes the child objects of the objects in the batch. For example, if the batch is processing Jobs, cleanup deletes each job, its job runs, and the job run event messages associated with those runs (assuming JobRunEvent cleanup has not deleted them already). A single job can have thousands of job run event entries, and if you have a thousand similar jobs in the batch, you are now deleting a million rows for this batch of 1000 objects.

In larger, busy environments, it is recommended to reduce the default setting; some customers run their cleanup with the value set to 100, with good results.
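
For example, to lower the batch size to 100 (this assumes the setting lives under the Cleanup module in blasadmin; restart the Application Server afterward):

blasadmin set Cleanup MaxObjectsPurgedPerTransaction 100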

Discussion of Duration

Most of the cleanup commands include a duration parameter, which tells the cleanup how long it can run before exiting cleanly even if the cleanup is not complete. This is particularly important for the blcli Delete cleanupDatabase command. As noted above in the discussion of the MaxObjectsPurgedPerTransaction setting, objects are deleted in batches, and the duration is checked only after each batch completes. As a result, the duration can expire well into one of the batch deletes (for example, job runs with millions of rows), so the parameter can be misleading. By reducing the transaction size you can get closer to the actual duration window, because less time is spent in each delete with fewer objects per batch.

JOB_PART_TIMEOUT and JOB_TIMEOUT

These two property values on the NSH Script Job(s) for cleanup should always be set to 0, and if the NSH Jobs are run as a Batch Job as suggested above, the Batch Job should have these properties set to zero as well. If the Job-level timeout is hit, the script job stops running, but this does not cleanly stop the blcli Delete commands and will likely leave sessions open in the database that continue to execute. The only correct way to limit the run time of cleanup is to use the duration parameter of the commands.

Improving JobRunEvent cleanup

BMC has provided updated stored procedures for the blcli Delete cleanupHistoricalData JobRunEvent cleanup that increase the amount of data these procedures remove, which in turn improves the performance of cleanupDatabase. The updated stored procedure is provided in a SQL script that can be run as the bladelogic database user and replaces the existing stored procedure.

When the historical JobRunEvent cleanup runs with the new procedure, rows in job_run_event that are associated with soft-deleted job_run rows are also hard deleted. When cleanupDatabase subsequently runs, it has fewer rows to delete when hard-deleting job_run rows, because there are fewer or no child rows in job_run_event associated with the job_run rows being deleted. This speeds up cleanupDatabase and reduces the likelihood that it impacts the operational environment. This change is included in the product in 8.3.03 and later, and is available from BMC Support for earlier versions.
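
Applying such a script on Oracle typically looks like the following (a sketch; the connection details are placeholders and the actual script file comes from BMC Support):

sqlplus bladelogic/<password>@<tns_alias> @<script_from_support>.sql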

Additional Cleanups

Several targeted SQL-based cleanup scripts are available for particular tables or data domains. They were created based on experiences at customer sites where certain table groupings grew significantly and normal cleanups did not run successfully, or where the installed version of BMC Server Automation could not handle those tables. To see what may be useful in your environment, run the information-gathering scripts noted in this article and open a ticket with BMC Support.

Additional Indexes

Much work was done between versions 8.0 and 8.2 of BMC Server Automation to improve database referential integrity; as a result, a number of new foreign key constraints were added. BMC has since produced a script that adds indexes on these foreign keys to improve general database performance, including cleanup.

Scripts that address this issue are available in the Server Automation community.

Catching up on cleanup

In many environments, cleanup may not have been run properly for some time, and there is such a large amount of data that the blcli-based cleanup has a difficult time catching up. Depending on which tables are involved, you can use several alternative methods for catching up.

The first step in determining the state of your database is to run a few diagnostic queries.

  • Run the assessment utility with the 'size' argument to output the appropriate result files.
    • Review the results. Here is a sample from the Oracle script, from the BMC-Bladelogic_DCT_output-table_rows_<datetime>.csv file.

      Table Name | Row Count
      JOB_RUN_EVENT | 1107304220
      CONFIG_FILE_OBJECT_VALUE | 52987184
      AUDIT_TRAIL | 47247168
      ASSET_ATTRIBUTE_VALUE | 38265288
      PROP_SET_INSTANCE_PROP_VAL | 26528806
      JOB_RESULT_DEVICE | 26487184
      LIST_BL_VALUE_ELEMENT | 25886376
      BL_VALUE | 24838668
      DEVICE_AUDIT_TRAIL | 20979556
      BL_ACL_AGG_AUTH | 16089460
      PROP_SET_INSTANCE_AUDIT_TRAIL | 14771248

  • These are some of the most problematic tables:
    • JOB_RUN_EVENT: Typically this table is handled by running blcli Delete cleanupHistoricalData JobRunEvent, but with a billion rows it can take a while to work through the data. JOB_RUN_EVENT holds all the job run event log data (right-click a Job Run and select Show Log). This table can also be handled by running offline database cleanup (available for BMC Server Automation 8.3.02 or later).
    • AUDIT_TRAIL: No 'catchup' script is available for this table, but this table is typically a candidate for growing quite large as it stores all the Authorization audit entries. This table is also handled by running offline database cleanup.
    • CONFIG_FILE_OBJECT_VALUE: This table typically grows large when the customer is using pre-8.2 BMC BladeLogic Decision Support for Server Automation Inventory Templates, which take snapshots of the os_config.nsh Extended Object. If this is done daily, this table can grow quite large. After upgrading to BMC BladeLogic Decision Support for Server Automation 8.2 or later, the data contained in this table is mostly irrelevant and can be deleted. There are a couple of ways to handle this:
      • In BMC Server Automation 8.1.04+ and 8.2+, you can use the blcli Delete hardDeleteAllSharedObjects command to clean this table.
      • For very large CONFIG_FILE_OBJECT_VALUE tables, running offline database cleanup will also take care of this data.
    • RESULT: These tables can be handled by the corresponding blcli Delete cleanupHistoricalData <type>Result commands. Running offline database cleanup also touches some of the related tables.
  • In general, the ideal course of action for a catch-up attempt is to get all of the cleanupHistoricalData commands completing in less than one day on a consistent basis. Then run hardDeleteAllSharedObjects (if it applies), followed by cleanupDatabase and cleanupFileServer, in an ad hoc fashion until they too complete in a reasonable amount of time. Then you can switch to the schedule mentioned above.
  • After any large change in table size, it is worthwhile to shrink the indexes (for Oracle) or perform an online index rebuild (for MS SQL), and then run the gather-stats routine (Oracle).

Monitoring cleanup runs

BSA Health and Value Dashboard

The BSA Health and Value Dashboard utility gathers information about the database, such as when statistics were last computed, table sizes, and when each cleanup ran and how much it cleaned.

Checking on cleanup

To check how many compliance results are left to be deleted you can issue the following query:

select count(*),trunc(date_created,'mon') from job_result where object_type_id = 5107 and is_deleted = 1 group by trunc(date_created,'mon') order by trunc(date_created,'mon');

To check how many audit results are left you can issue this query instead:

select count(*),trunc(date_created,'mon') from job_result where object_type_id = 11 and is_deleted = 1 group by trunc(date_created,'mon') order by trunc(date_created,'mon');

To see how many snapshot results remain to be deleted, use the following query:

select count(*), trunc(date_created,'mon') from job_result where object_type_id = 9 and is_deleted = 1 group by trunc(date_created,'mon') order by trunc(date_created,'mon');

The previous queries give you a count of deleted objects by month, so as cleanup runs you should see the count of deleted objects in past months decrease. The overall count of deleted objects can still increase if you are creating and soft-deleting objects faster than you are hard-deleting them. The queries above are examples based on the job_result table; you can use similar queries to check job_run, job_run_event, and so on.

To find out whether any long-running cleanup task is still in the processing stage, execute the following query:

set lines 132
col current_action format a80
col duration format a15
select
    task_id,
    cast(
        (cast(updated_at as timestamp) - cast(started_at as timestamp))
        as interval day(0) to second(0)
    ) as duration,
    current_action,
    to_char(deleted_rows,'9G999G999G999') as deleted_rows
from
    delete_tasks
where
    ended_at is null;

Miscellaneous guidelines

The following sections discuss miscellaneous guidelines that are related to the cleanup process.

Monitoring Oracle tablespace growth

It can be worthwhile to monitor database growth over time with the attached Daily_Table_Report.sql.txt and Daily_Table_Report.ksh.txt scripts. Modify them to match your environment and schedule them to run as sysdba. The report emails you every day, showing how much tablespace the tables consumed compared to the day before. This method is also useful for spotting problematic trends (for example, someone enabled Audit Trail logging on Server.Read).
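
For example, a crontab entry on the database server to produce the report each morning might look like this (a sketch; the path and schedule are illustrative):

0 6 * * * /home/oracle/scripts/Daily_Table_Report.ksh > /tmp/daily_table_report.log 2>&1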

Assessing the Retention Policy

A few queries can be useful to see how much data you have, broken down by month (or another interval); some useful queries are noted below. These can be extended to other tables as long as the table includes a date column you can group by. The idea is to see whether your retention policy is aggressive enough.

select trunc(event_date,'mon'),count(*) from job_run_event group by trunc(event_date,'mon') order by trunc(event_date,'mon');

select trunc(log_date,'mon'),count(*) from audit_trail group by trunc(log_date,'mon') order by trunc(log_date,'mon');

select trunc(start_time,'mon'),count(*) from job_result_device group by trunc(start_time,'mon') order by trunc(start_time,'mon');

select trunc(start_time,'mon'),count(*) from job_run group by trunc(start_time,'mon') order by trunc(start_time,'mon');

Gather statistics

Oracle provides a default routine to calculate statistics on tables, and the DBA usually has it enabled by default. BMC has worked with Oracle to create an improved procedure; it ships in the <version>-external-files.zip for each version and is available from the EPD. The Oracle default should be disabled, and the DBA should install and enable the BMC procedure to run on at least a weekly basis. This improves both cleanup performance and database/user interface performance in general.

To see if stats are current, run the Health and Value Dashboard utility and review the results. Stats dates should be fairly current. If a few tables have current dates while many others have fairly old dates, this may indicate that the default Oracle stats job is still in place.

As an alternative to running these processes on the database server, you can create an NSH Script Job that runs the DBManager blGatherSchemaStats BLCLI command on a scheduled basis.
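
The script content for such a job would be a single command (a sketch; check the BLCLI documentation for your version for any arguments):

blcli DBManager blGatherSchemaStats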

Alternative Gather Stats check

As an alternative to the script mentioned above, you can use the dbdiagnostics command to determine whether the stats are up to date. The <install_dir>/NSH/br directory contains a dbdiagnostics binary, which you can run as follows:

# ./dbdiagnostics runDiag diagId=1000006

The results can be viewed by:

# ./dbdiagnostics getResLastExec diagId=1000006

diagId=1000006

execDiagId=2000040

execStartTime=2012-09-21 10:00:30.0

messageLevel=INFO

message=DBMS_STATS_CHK: DBMS_STATS on the Database ran 221 days ago, which is NOT OK. The Expected running of DBMS_STATS is once in 15 days. Please run BL_GATHER_SCHEMA_STATS PROC for this schema.

messageTime=2012-09-21 10:00:31.0

Attached Scripts

Here's a list of the attached files that can be used with this process.

Name Version Published
Daily_Table_Report.ksh.txt 2 2013-10-01 02:46
Daily_Table_Report.sql.txt 2 2013-10-01 02:46
arsCleanup1.4.zip 1 2015-02-03 02:15
bl_tbl_size_oracle.sql 1 2012-09-21 09:43
calc_tbl_size_sqlserver.sql 1 2012-09-21 09:50
check_del_task.sql 1 2013-10-01 02:46
delete_tasks_oracle.sql 1 2012-09-21 09:42
delete_tasks_sqlserver.sql 1 2012-09-21 09:50
gather_schema_stats.zip 2 2012-09-21 09:35
getLastRun_GatherStat_oracle.sql 2 2012-09-21 09:43
