Job types
The following sections provide details about the available types of jobs that you can define and the parameters that you use to define each job type.
Job:Command
The following example shows how to use the Job:Command to run operating system commands.
"Type" : "Job:Command",
"Command" : "echo hello",
"PreCommand": "echo before running main command",
"PostCommand": "echo after running main command",
"Host" : "myhost.mycomp.com",
"RunAs" : "user1"
}
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
RunAs | Identifies the operating system user that will run the job. |
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
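In the other examples in this section, jobs are wrapped in a Folder object for deployment. The following is a minimal sketch of the same command job wrapped in a folder; the folder and job names (CommandFolder, EchoJob) are illustrative.
"CommandFolder" :
{
"Type" : "Folder",
"EchoJob" :
{
"Type" : "Job:Command",
"Command" : "echo hello",
"Host" : "myhost.mycomp.com",
"RunAs" : "user1"
}
}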
Job:Script
The following example shows how to use Job:Script to run a script from a specified script file.
"Type" : "Job:Script",
"FileName" : "task1123.sh",
"FilePath" : "/home/user1/scripts",
"PreCommand": "echo before running script",
"PostCommand": "echo after running script",
"Host" : "myhost.mycomp.com",
"RunAs" : "user1"
}
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
RunAs | Identifies the operating system user that will run the job. |
FileName together with FilePath | Indicates the location of the script. NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts". |
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
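As noted for FileName and FilePath, each backslash in a Windows path must be doubled. A sketch of the same job type pointing at a script on a Windows Agent host (the host name, path, and file name are illustrative):
{
"Type" : "Job:Script",
"FileName" : "task1123.bat",
"FilePath" : "c:\\tmp\\scripts",
"Host" : "mywinhost.mycomp.com",
"RunAs" : "user1"
}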
Job:EmbeddedScript
The following example shows how to use Job:EmbeddedScript to run a script that you include in the JSON code. Control-M deploys this script to the Agent host during job submission. This eliminates the need to deploy scripts as part of your application stack.
"Type":"Job:EmbeddedScript",
"Script":"#!/bin/bash\\necho \"Hello world\"",
"Host":"myhost.mycomp.com",
"RunAs":"user1",
"FileName":"myscript.sh",
"PreCommand": "echo before running script",
"PostCommand": "echo after running script"
}
Script | Full content of the script, up to 64 kilobytes. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
RunAs | Identifies the operating system user that will run the job. |
FileName | Name of a script file. This property is used for the following purposes:
|
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
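Because the Script value is a single JSON string, line breaks are written as \\n and embedded double quotes are escaped, as shown in the example above. The same pattern applies to other script types; here is a sketch for an embedded Windows batch script, in which the host, user, and file name are illustrative:
{
"Type":"Job:EmbeddedScript",
"Script":"@echo off\\necho \"Hello world\"",
"Host":"mywinhost.mycomp.com",
"RunAs":"user1",
"FileName":"myscript.bat"
}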
Job:FileTransfer
The following example shows a Job:FileTransfer for a file transfer from a local filesystem to an SFTP server:
"FileTransferFolder" :
{
"Type" : "Folder",
"Application" : "aft",
"TransferFromLocalToSFTP" :
{
"Type" : "Job:FileTransfer",
"ConnectionProfileSrc" : "LocalConn",
"ConnectionProfileDest" : "SftpConn",
"NumberOfRetries": "3",
"Host": "AgentHost",
"FileTransfers" :
[
{
"Src" : "/home/controlm/file1",
"Dest" : "/home/controlm/file2",
"TransferType": "Binary",
"TransferOption": "SrcToDest"
},
{
"Src" : "/home/controlm/otherFile1",
"Dest" : "/home/controlm/otherFile2",
"TransferOption": "DestToSrc"
}
]
}
}
}
Here is another example for a file transfer from an S3 storage service to a local filesystem:
"MyS3AftFolder": {
"Type": "Folder",
"Application": "aft",
"TransferFromS3toLocal":
{
"Type": "Job:FileTransfer",
"ConnectionProfileSrc": "amazonConn",
"ConnectionProfileDest": "LocalConn",
"NumberOfRetries": "4",
"S3BucketName": "bucket1",
"Host": "agentHost",
"FileTransfers": [
{
"Src" : "folder/sub_folder/file1",
"Dest" : "folder/sub_folder/file2"
}
]
}
}
}
Here is another example for a file transfer from an S3 storage service to another S3 storage service:
"MyS3AftFolder": {
"Type": "Folder",
"Application": "aft",
"TransferFromS3toS3":
{
"Type": "Job:FileTransfer",
"ConnectionProfileSrc": "amazonConn",
"ConnectionProfileDest": "amazon2Conn",
"NumberOfRetries": "6",
"S3BucketNameSrc": "bucket1",
"S3BucketNameDest": "bucket2",
"Host": "agentHost",
"FileTransfers": [
{
"Src" : "folder/sub_folder/file1",
"Dest" : "folder/sub_folder/file2"
}
]
}
}
}
And here is another example for a file transfer from a local filesystem to an AS2 server.
Note: File transfers that use the AS2 protocol are supported only in one direction — from a local filesystem to an AS2 server.
"MyAs2AftFolder": {
"Type": "Folder",
"Application": "AFT",
"MyAftJob_AS2":
{
"Type": "Job:FileTransfer",
"ConnectionProfileSrc": "localAConn",
"ConnectionProfileDest": "as2Conn",
"NumberOfRetries": "Default",
"Host": "agentHost",
"FileTransfers": [
{
"Src": "/dev",
"Dest": "/home/controlm/",
"As2Subject": "Override subject",
"As2Message": "Override conntent type"
}
]
}
}
}
The following parameters were used in the examples above:
Parameter | Description |
---|---|
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. |
ConnectionProfileSrc | The connection profile to use as the source |
ConnectionProfileDest | The connection profile to use as the destination |
ConnectionProfileDualEndpoint | If you need to use a dual-endpoint connection profile that you have set up, specify the name of the dual-endpoint connection profile instead of ConnectionProfileSrc and ConnectionProfileDest. A dual-endpoint connection profile can be used for FTP, SFTP, and Local filesystem transfers. For more information about dual-endpoint connection profiles, see ConnectionProfile:FileTransfer:DualEndPoint. |
NumberOfRetries | Number of connection attempts after a connection failure Range of values: 0–99 or "Default" (to inherit the default) Default: 5 attempts |
S3BucketName | For file transfers between a local filesystem and an Amazon S3 or S3-compatible storage service: The name of the S3 bucket |
S3BucketNameSrc | For file transfers between two Amazon S3 or S3-compatible storage services: The name of the S3 bucket at the source |
S3BucketNameDest | For file transfers between two Amazon S3 or S3-compatible storage services: The name of the S3 bucket at the destination |
FileTransfers | A list of file transfers to perform during job execution, each with the following properties: |
Src | Full path to the source file |
Dest | Full path to the destination file |
TransferType | (Optional) FTP transfer mode, either Ascii (for a text file) or Binary (non-textual data file). Ascii mode handles the conversion of line endings within the file, while Binary mode creates a byte-for-byte identical copy of the transferred file. Default: "Binary" |
TransferOption | (Optional) The following is a list of the transfer options:
Default: "SrcToDest" |
As2Subject | Optional for AS2 file transfer: A text to use to override the subject of the AS2 message. |
As2Message | Optional for AS2 file transfer: A text to use to override the content type in the AS2 message. |
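If you have defined a dual-endpoint connection profile (see ConnectionProfileDualEndpoint above), it replaces the pair of source and destination connection profiles. A sketch of such a job, assuming a hypothetical dual-endpoint profile named LocalToSftpDualConn that pairs a local filesystem with an SFTP server:
"TransferWithDualEndpoint" :
{
"Type" : "Job:FileTransfer",
"ConnectionProfileDualEndpoint" : "LocalToSftpDualConn",
"Host" : "AgentHost",
"FileTransfers" :
[
{
"Src" : "/home/controlm/file1",
"Dest" : "/home/controlm/file2",
"TransferOption" : "SrcToDest"
}
]
}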
The following example presents a File Transfer job in which the transferred file is watched using the File Watcher utility:
"FileTransferFolder" :
{
"Type" : "Folder",
"Application" : "aft",
"TransferFromLocalToSFTPBasedOnEvent" :
{
"Type" : "Job:FileTransfer",
"Host" : "AgentHost",
"ConnectionProfileSrc" : "LocalConn",
"ConnectionProfileDest" : "SftpConn",
"NumberOfRetries": "3",
"FileTransfers" :
[
{
"Src" : "/home/sftp/file1",
"Dest" : "/home/sftp/file2",
"TransferType": "Binary",
"TransferOption" : "SrcToDestFileWatcher",
"PreCommandDest" :
{
"action" : "rm",
"arg1" : "/home/sftp/file2"
},
"PostCommandDest" :
{
"action" : "chmod",
"arg1" : "700",
"arg2" : "/home/sftp/file2"
},
"FileWatcherOptions":
{
"MinDetectedSizeInBytes" : "200",
"TimeLimitPolicy" : "WaitUntil",
"TimeLimitValue" : "2000",
"MinFileAge" : "3Min",
"MaxFileAge" : "10Min",
"AssignFileNameToVariable" : "FileNameEvent",
"TransferAllMatchingFiles" : true
}
}
]
}
}
}
This example contains the following additional optional parameters:
PreCommandSrc / PreCommandDest / PostCommandSrc / PostCommandDest | Defines commands that run on the source or destination host before and after execution of the file transfer (for example, the rm and chmod actions shown above). |
FileWatcherOptions | Additional options for watching the transferred file using the File Watcher utility: |
MinDetectedSizeInBytes | Defines the minimum number of bytes transferred before checking whether the file size is static |
TimeLimitPolicy / TimeLimitValue | Defines the time limit to watch a file. If TimeLimitPolicy is WaitUntil, TimeLimitValue is the specific time to wait until; for example, 04:22 would be 4:22 AM. |
MinFileAge | Defines the minimum number of years, months, days, hours, and/or minutes that must have passed since the watched file was last modified Valid values: 9999Y9999M9999d9999h9999Min For example: 2y3d7h |
MaxFileAge | Defines the maximum number of years, months, days, hours, and/or minutes that can pass since the watched file was last modified Valid values: 9999Y9999M9999d9999h9999Min For example: 2y3d7h |
AssignFileNameToVariable | Defines the variable name that contains the detected file name |
TransferAllMatchingFiles | Whether to transfer all matching files (value of true) or only the first matching file (value of false) after waiting until the watching criteria is met. Valid values: true | false |
Job:FileWatcher
A File Watcher job enables you to detect the successful completion of a file transfer activity that creates or deletes a file. The following example shows how to manage File Watcher jobs using Job:FileWatcher:Create and Job:FileWatcher:Delete.
"Type" : "Job:FileWatcher:Create",
"RunAs":"controlm",
"Path" : "C:/path*.txt",
"SearchInterval" : "45",
"TimeLimit" : "22",
"StartTime" : "201705041535",
"StopTime" : "201805041535",
"MinimumSize" : "10B",
"WildCard" : true,
"MinimalAge" : "1Y",
"MaximalAge" : "1D2H4MIN"
},
"FWJobDelete" : {
"Type" : "Job:FileWatcher:Delete",
"RunAs":"controlm",
"Path" : "C:/path.txt",
"SearchInterval" : "45",
"TimeLimit" : "22",
"StartTime" : "201805041535",
"StopTime" : "201905041535"
}
This example contains the following parameters:
Path | Path of the file to be detected by the File Watcher You can include wildcards in the path — * for any number of characters, and ? for any single character. |
SearchInterval | Interval (in seconds) between successive attempts to detect the creation/deletion of a file |
TimeLimit | Maximum time (in minutes) to run the process without detecting the file at its minimum size (for Job:FileWatcher:Create) or detecting its deletion (for Job:FileWatcher:Delete). If the file is not detected/deleted in this specified time frame, the process terminates with an error return code. Default: 0 (no time limit) |
StartTime | The time at which to start watching the file The format is yyyymmddHHMM. For example, 201805041535 means that the File Watcher will start watching the file on May 4, 2018 at 3:35 PM. |
StopTime | The time at which to stop watching the file. Format: yyyymmddHHMM or HHMM (for the current date) |
MinimumSize | Minimum file size to monitor for, when watching a created file Follow the specified number with the unit: B for bytes, KB for kilobytes, MB for megabytes, or GB for gigabytes. If the file name (specified by the Path parameter) contains wildcards, minimum file size is monitored only if you set the Wildcard parameter to true. |
Wildcard | Whether to monitor minimum file size of a created file if the file name (specified by the Path parameter) contains wildcards Values: true | false |
MinimalAge | (Optional) The minimum number of years, months, days, hours, and/or minutes that must have passed since the created file was last modified. For example: 2Y10M3D5H means that 2 years, 10 months, 3 days, and 5 hours must pass before the file will be watched. 2H10Min means that 2 hours and 10 minutes must pass before the file will be watched. |
MaximalAge | (Optional) The maximum number of years, months, days, hours, and/or minutes that can pass since the created file was last modified. For example: 2Y10M3D5H means that after 2 years, 10 months, 3 days, and 5 hours have passed, the file will no longer be watched. 2H10Min means that after 2 hours and 10 minutes have passed, the file will no longer be watched. |
Job:Database
The following types of database jobs are available:
- Embedded Query job, using Job:Database:EmbeddedQuery
- SQL Script job, using Job:Database:SQLScript
- Stored Procedure job, using Job:Database:StoredProcedure
- MSSQL Agent job, using Job:Database:MSSQL:AgentJob
- SSIS Package job, using Job:Database:MSSQL:SSIS
Job:Database:EmbeddedQuery
The following example shows how to create a database job that runs an embedded query.
"PostgresDBFolder": {
"Type": "Folder",
"EmbeddedQueryJobName": {
"Type": "Job:Database:EmbeddedQuery",
"ConnectionProfile": "POSTGRESQL_CONNECTION_PROFILE",
"Query": "SELECT %%firstParamName AS VAR1 \\n FROM DUMMY \\n ORDER BY \\t VAR1 DESC",
"Host": "${agentName}",
"RunAs": "PostgressCP",
"Variables": [
{
"firstParamName": "firstParamValue"
}
],
"Autocommit": "N",
"OutputExecutionLog": "Y",
"OutputSQLOutput": "Y",
"SQLOutputFormat": "XML"
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Query | The embedded SQL query that you want to run. The SQL query can contain auto edit variables. During the job run, these variables are replaced by the values that you specify in the Variables parameter (next row). For long queries, you can specify delimiters using \\n (new line) and \\t (tab). |
Variables | Variables are pairs of name and value. Every name that appears in the embedded script will be replaced by its value pair. The maximum length of a variable name is 38 alphanumeric characters and it is case-sensitive. |
The following optional parameters are also available for all types of database jobs:
Autocommit | (Optional) Whether to automatically commit statements to the database as they complete successfully Default: N |
OutputExecutionLog | (Optional) Shows the execution log in the job output Default: Y |
OutputSQLOutput | (Optional) Shows the SQL sysout in the job output Default: N |
SQLOutputFormat | (Optional) Defines the output format as either Text, XML, CSV, or HTML Default: Text |
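For example, here is a sketch of an embedded query job that writes its SQL output to the job output in CSV format; the connection profile, host, and query are illustrative:
"CsvQueryJob": {
"Type": "Job:Database:EmbeddedQuery",
"ConnectionProfile": "POSTGRESQL_CONNECTION_PROFILE",
"Host": "agentHost",
"Query": "SELECT 1 AS VAR1",
"Autocommit": "N",
"OutputExecutionLog": "Y",
"OutputSQLOutput": "Y",
"SQLOutputFormat": "CSV"
}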
Job:Database:SQLScript
The following example shows how to create a database job that runs a SQL script from a file system.
"OracleDBFolder": {
"Type": "Folder",
"testOracle": {
"Type": "Job:Database:SQLScript",
"Host": "AgentHost",
"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
"ConnectionProfile": "ORACLE_CONNECTION_PROFILE",
"Parameters": [
{"firstParamName": "firstParamValue"},
{"secondParamName": "secondParamValue"}
]
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Parameters | Parameters are pairs of name and value. Every name that appears in the SQL script will be replaced by its value pair. |
For additional optional parameters, see above.
Another example:
"OracleDBFolder": {
"Type": "Folder",
"testOracle": {
"Type": "Job:Database:SQLScript",
"Host": "app-redhat",
"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
"ConnectionProfile": "ORACLE_CONNECTION_PROFILE"
}
}
}
Job:Database:StoredProcedure
The following example shows how to create a database job that runs a program that is stored on the database.
"storeFolder": {
"Type": "Folder",
"jobStoredProcedure": {
"Type": "Job:Database:StoredProcedure",
"Host": "myhost.mycomp.com",
"StoredProcedure": "myProcedure",
"Parameters": [ "value1","variable1",["value2","variable2"]],
"ReturnValue":"RV",
"Schema": "public",
"ConnectionProfile": "DB-PG-CON"
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
StoredProcedure | Name of stored procedure that the job runs |
Parameters | A comma-separated list of values and variables for all parameters in the procedure, in the order of their appearance in the procedure. The value that you specify for any specific parameter depends on the type of parameter: for an In parameter, specify an input value; for an Out parameter, specify a variable to receive the output; for an Inout parameter, specify both as a pair: ["value","variable"]. In the example above, three parameters are listed, in the following order: [In,Out,Inout] |
ReturnValue | A variable for the Return parameter (if the procedure contains such a parameter) |
Schema | The database schema where the stored procedure resides |
Package | (Oracle only) Name of a package in the database where the stored procedure resides The default is "*", that is, any package in the database. |
ConnectionProfile | Name of a connection profile that contains the details of the connection to the database |
For additional optional parameters, see above.
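For an Oracle stored procedure that resides in a package, add the Package parameter described above. A sketch, in which the connection profile, schema, package, and procedure names are illustrative:
"jobOraclePackagedProcedure": {
"Type": "Job:Database:StoredProcedure",
"Host": "myhost.mycomp.com",
"ConnectionProfile": "ORACLE_CONNECTION_PROFILE",
"Schema": "HR",
"Package": "PAYROLL_PKG",
"StoredProcedure": "CALC_SALARY",
"Parameters": ["2024","result1"]
}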
Job:Database:MSSQL:AgentJob
The following example (introduced in version 9.0.19.210) shows how to create an MSSQL Agent job, for management of a job defined in the SQL server.
"MSSQLFolder": {
"Type": "Folder",
"ControlmServer": "LocalControlM",
"MSSQLAgentJob": {
"Type": "Job:Database:MSSQL:AgentJob",
"ConnectionProfile": "MSSQL-WE-EXAMPLE",
"Host": "agentHost",
"JobName": "get_version",
"Category": "Data Collector"
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
JobName | The name of the job defined in the SQL server |
Category | The category of the job, as defined in the SQL server |
For additional optional parameters, see above.
Job:Database:MSSQL:SSIS
The following example (introduced in version 9.0.19.220) shows how to create SSIS Package jobs for execution of SQL Server Integration Services (SSIS) packages:
"MSSQLFolder": {
"Type": "Folder",
"ControlmServer": "LocalControlM",
"SSISCatalog": {
"Type": "Job:Database:MSSQL:SSIS",
"ConnectionProfile": "MSSQL-CP-NAME",
"Host": "agentHost",
"PackageSource": "SSIS Catalog",
"PackageName": "\\Data Collector\\SqlTraceCollect",
"CatalogEnv": "ENV_NAME",
"ConfigFiles": [
"C:\\Users\\dbauser\\Desktop\\test.dtsConfig",
"C:\\Users\\dbauser\\Desktop\\test2.dtsConfig"
],
"Properties": [
{
"PropertyName": "PropertyValue"
},
{
"PropertyName2": "PropertyValue2"
}
]
},
"SSISPackageStore": {
"Type": "Job:Database:MSSQL:SSIS",
"ConnectionProfile": "MSSQL-CP-NAME",
"Host": "agentHost",
"PackageSource": "SSIS Package Store",
"PackageName": "\\Data Collector\\SqlTraceCollect",
"ConfigFiles": [
"C:\\Users\\dbauser\\Desktop\\test.dtsConfig",
"C:\\Users\\dbauser\\Desktop\\test2.dtsConfig"
],
"Properties": [
{
"PropertyName": "PropertyValue"
},
{
"PropertyName2": "PropertyValue2"
}
]
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
PackageSource | The source of the SSIS package, one of the following:
|
PackageName | The name of the SSIS package. |
CatalogEnv | If PackageSource is 'SSIS Catalog': The name of the environment on which to run the package. Use this optional parameter if you want to run the package on a different environment from the one that you are currently using. |
ConfigFiles | (Optional) Names of configuration files that contain specific data that you want to apply to the SSIS package |
Properties | (Optional) Pairs of names and values for properties defined in the SSIS package. Each property name is replaced by its defined value during SSIS package execution. |
For additional optional parameters, see above.
Job:Hadoop
Various types of Hadoop jobs are available for you to define using the Job:Hadoop objects:
- Spark Python
- Spark Scala or Java
- Pig
- Sqoop
- Hive
- DistCp (distributed copy)
- HDFS commands
- HDFS File Watcher
- Oozie
- MapReduce
- MapReduce Streaming
- Tajo Input File
- Tajo Query
Job:Hadoop:Spark:Python
The following example shows how to use Job:Hadoop:Spark:Python to run a Spark Python program.
"Type": "Job:Hadoop:Spark:Python",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"SparkScript": "/home/user/processData.py"
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
"Type": "Job:Hadoop:Spark:Python",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"SparkScript": "/home/user/processData.py",
"Arguments": [
"1000",
"120"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
},
"SparkOptions": [
{"--master": "yarn"},
{"--num":"-executors 50"}
]
}
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
Job:Hadoop:Spark:ScalaJava
The following example shows how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.
"Type": "Job:Hadoop:Spark:ScalaJava",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar": "/home/user/ScalaProgram.jar",
"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName"
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
"Type": "Job:Hadoop:Spark:ScalaJava",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar": "/home/user/ScalaProgram.jar"
"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",
"Arguments": [
"1000",
"120"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
},
"SparkOptions": [
{"--master": "yarn"},
{"--num":"-executors 50"}
]
}
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
Job:Hadoop:Pig
The following example shows how to use Job:Hadoop:Pig to run a Pig script.
"Type" : "Job:Hadoop:Pig",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"PigScript" : "/home/user/script.pig"
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
"Type" : "Job:Hadoop:Pig",
"ConnectionProfile": "DEV_CLUSTER",
"PigScript" : "/home/user/script.pig",
"Host" : "edgenode",
"Parameters" : [
{"amount":"1000"},
{"volume":"120"}
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
Job:Hadoop:Sqoop
The following example shows how to use Job:Hadoop:Sqoop to run a Sqoop job.
{
"Type" : "Job:Hadoop:Sqoop",
"Host" : "edgenode",
"ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",
"SqoopCommand" : "import --table foo --target-dir /dest_dir"
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
{
"Type" : "Job:Hadoop:Sqoop",
"Host" : "edgenode",
"ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",
"SqoopCommand" : "import --table foo",
"SqoopOptions" : [
{"--warehouse-dir":"/shared"},
{"--default-character-set":"latin1"}
],
"SqoopArchives" : "",
"SqoopFiles": "",
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
SqoopOptions | Additional options that are passed as arguments to the specific Sqoop tool |
SqoopArchives | Indicates the location of the Hadoop archives. |
SqoopFiles | Indicates the location of the Sqoop files. |
Job:Hadoop:Hive
The following example shows how to use Job:Hadoop:Hive to run a Hive beeline job.
{
"Type" : "Job:Hadoop:Hive",
"Host" : "edgenode",
"ConnectionProfile" : "HIVE_CONNECTION_PROFILE",
"HiveScript" : "/home/user1/hive.script"
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
{
"Type" : "Job:Hadoop:Hive",
"Host" : "edgenode",
"ConnectionProfile" : "HIVE_CONNECTION_PROFILE",
"HiveScript" : "/home/user1/hive.script",
"Parameters" : [
{"ammount": "1000"},
{"topic": "food"}
],
"HiveArchives" : "",
"HiveFiles": "",
"HiveOptions" : [
{"hive.root.logger": "INFO,console"}
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
HiveSciptParameters | Passed to beeline as --hivevar "name"="value". |
HiveProperties | Passed to beeline as --hiveconf "key"="value". |
HiveArchives | Passed to beeline as --hiveconf mapred.cache.archives="value". |
HiveFiles | Passed to beeline as --hiveconf mapred.cache.files="value". |
Job:Hadoop:DistCp
The following example shows how to use Job:Hadoop:DistCp to run a DistCp job. DistCp (distributed copy) is a tool used for large inter/intra-cluster copying.
{
"Type" : "Job:Hadoop:DistCp",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"TargetPath" : "hdfs://nns2:8020/foo/bar",
"SourcePaths" :
[
"hdfs://nn1:8020/foo/a"
]
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
{
"Type" : "Job:Hadoop:DistCp",
"Host" : "edgenode",
"ConnectionProfile" : "HADOOP_CONNECTION_PROFILE",
"TargetPath" : "hdfs://nns2:8020/foo/bar",
"SourcePaths" :
[
"hdfs://nn1:8020/foo/a",
"hdfs://nn1:8020/foo/b"
],
"DistcpOptions" : [
{"-m":"3"},
{"-filelimit ":"100"}
]
}
TargetPath, SourcePaths and DistcpOptions | Passed to the distcp tool in the following way: distcp <Options> <TargetPath> <SourcePaths>. |
Job:Hadoop:HDFSCommands
The following example shows how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.
{
"Type" : "Job:Hadoop:HDFSCommands",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"Commands": [
{"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Job:Hadoop:HDFSFileWatcher
The following example shows how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.
{
"Type" : "Job:Hadoop:HDFSFileWatcher",
"Host" : "edgenode",
"ConnectionProfile" : "DEV_CLUSTER",
"HdfsFilePath" : "/inputs/filename",
"MinDetecedSize" : "1",
"MaxWaitTime" : "2"
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
HdfsFilePath | Specifies the full path of the file being watched. |
MinDetecedSize | Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives, but the size is not met, the job continues to watch the file. |
MaxWaitTime | Defines the maximum number of minutes to wait for the file to meet the watching criteria. If criteria are not met (file did not arrive, or minimum size was not reached) the job fails after this maximum number of minutes. |
Job:Hadoop:Oozie
The following example shows how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.
"Type" : "Job:Hadoop:Oozie",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"JobPropertiesFile" : "/home/user/job.properties",
"OozieOptions" : [
{"inputDir":"/usr/tucu/inputdir"},
{"outputDir":"/usr/tucu/outputdir"}
]
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
JobPropertiesFile | The path to the job properties file. |
Optional parameters:
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false , that is, the job will complete successfully even if any post-command fails. |
OozieOptions | Set or override values for the given job properties. |
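For example, here is a sketch of the same Oozie job with a pre-command that removes a previous output directory before the workflow is submitted; the PreCommands structure follows the same pattern as in the other Hadoop job types above, and the paths are illustrative:
{
"Type" : "Job:Hadoop:Oozie",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"JobPropertiesFile" : "/home/user/job.properties",
"OozieOptions" : [
{"outputDir":"/usr/tucu/outputdir"}
],
"PreCommands": {
"FailJobOnCommandFailure" : false,
"Commands" : [
{"rm" : "/usr/tucu/outputdir"}
]
}
}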
Job:Hadoop:MapReduce
The following example shows how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.
{
"Type" : "Job:Hadoop:MapReduce",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
"MainClass" : "pi",
"Arguments" :[
"1",
"2"]
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
{
"Type" : "Job:Hadoop:MapReduce",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
"MainClass" : "pi",
"Arguments" :[
"1",
"2"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
Job:Hadoop:MapredStreaming
The following example shows how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.
"Type": "Job:Hadoop:MapredStreaming",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"InputPath": "/user/robot/input/*",
"OutputPath": "/tmp/output",
"MapperCommand": "mapper.py",
"ReducerCommand": "reducer.py",
"GeneralOptions": [
{"-D": "fs.permissions.umask-mode=000"},
{"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}
]
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
GeneralOptions | Additional Hadoop command options passed to the hadoop-streaming.jar, including generic options and streaming options. |
Job:Hadoop:Tajo:InputFile
The following example shows how to execute a Hadoop Tajo job based on an input file.
{
"Type" : "Job:Hadoop:Tajo:InputFile",
"ConnectionProfile" : "TAJO_CONNECTION_PROFILE",
"Host" : "edgenode",
"FullFilePath" : "/home/user/tajo_command.sh",
"Parameters" : [
{"amount":"1000"},
{"volume":"120"}
]
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
FullFilePath | The full path to the input file used as the Tajo command source |
Parameters | Optional parameters for the script, expressed as name:value pairs |
Additional optional parameters:
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
Job:Hadoop:Tajo:Query
The following example shows how to execute a Hadoop Tajo job based on a query.
{
"Type" : "Job:Hadoop:Tajo:Query",
"ConnectionProfile" : "TAJO_CONNECTION_PROFILE",
"Host" : "edgenode",
"OpenQuery" : "SELECT %%firstParamName AS VAR1 \\n FROM DUMMY \\n ORDER BY \\t VAR1 DESC",
}
ConnectionProfile | |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
OpenQuery | An ad-hoc query to the Apache Tajo warehouse system |
Additional optional parameters:
PreCommands and PostCommands | Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | This parameter is used to ignore failure in the pre- or post- commands. The default for PreCommands is true , that is, the job will fail if any pre-command fails. The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails. |
Job:SAP
SAP-type jobs enable you to manage SAP processes through the Control-M environment. To manage SAP-type jobs, you must have the Control-M for SAP plugin installed in your Control-M environment.
The following JSON objects are available for creating SAP-type jobs:
- Job:SAP:R3:CREATE — Creates a new SAP R3 job
- Job:SAP:R3:COPY — Creates an SAP R3 job by copying an existing job
- Job:SAP:BW:ProcessChain — Defines a job to run and monitor a Process Chain in SAP Business Warehouse (SAP BW)
Job:SAP:R3:CREATE
This job type enables you to create a new SAP R3 job.
The following example is a simple job that relies mostly on default settings and contains one step that executes an external command.
"Type": "Job:SAP:R3:CREATE",
"ConnectionProfile": "SAPCP",
"SapJobName": "SAP_job",
"CreatedBy": "user1",
"Steps": [
{
"StepType": "ExternalCommand",
"UserName": "user01",
"TargetHost": "host01",
"ProgramName": "PING"
}
],
"SpoolListRecipient": {
"ReciptNoForwarding": false
}
}
The following example is a more complex job that contains two steps that run ABAP programs. Each of the ABAP steps has an associated variant that contains variable definitions.
"Type": "Job:SAP:R3:CREATE",
"ConnectionProfile": "SAPCP",
"SapJobName": "SAP_job2",
"StartCondition": "Immediate",
"RerunFromStep": "3",
"Target": "controlmserver",
"CreatedBy": "user1",
"Steps": [
{
"StepType": "ABAP",
"TimeToPrint": "PrintLater",
"CoverPrintPage": true,
"OutputDevice": "prt",
"UserName": "user",
"SpoolAuthorization": "Auth",
"CoverDepartment": "dpt",
"SpoolListName": "spoolname",
"OutputNumberRows": "62",
"NumberOfCopies": "5",
"NewSpoolRequest": false,
"PrintArchiveMode": "PrintAndArchive",
"CoverPage": "Print",
"ArchiveObjectType": "objtype",
"SpoolListTitles": "titles",
"OutputLayout": "layout",
"CoverSheet": "Print",
"ProgramName": "ABAP_PROGRAM",
"Language": "e",
"ArchiveInformationField": "inf",
"DeleteAfterPrint": true,
"PrintExpiration": "3",
"OutputNumberColumns": "88",
"ArchiveDocumentType": "doctype",
"CoverRecipient": "recipient",
"VariantName": "NameOfVariant",
"VariantParameters": [
{
"Type": "Range",
"High": "2",
"Sign": "I",
"Option": "BT",
"Low": "1",
"Name": "var1",
"Modify": false
},
{
"Low": "5",
"Type": "Range",
"Option": "BT",
"Sign": "I",
"Modify": true,
"High": "6",
"Name": "var3"
}
]
},
{
"StepType": "ABAP",
"PrintArchiveMode": "Print",
"ProgramName": "ABAP_PROGRAM2",
"VariantName": "Myvar_with_temp",
"TemporaryVariantParameters": [
{
"Type": "Simple",
"Name": "var",
"Value": "P11"
},
{
"Type": "Simple",
"Name": "var2",
"Value": "P11"
}
]
}
],
"PostJobAction": {
"JobLog": "CopyToFile",
"JobCompletionStatusWillDependOnApplicationStatus": true,
"SpoolSaveToPDF": true,
"JobLogFile": "fileToCopy.txt"
},
"SpoolListRecipient": {
"ReciptNoForwarding": false
}
}
The following table lists the parameters that can be used in SAP jobs of this type:
ConnectionProfile | Name of the SAP connection profile to use for the connection |
SapJobName | Name of SAP job to be monitored or submitted |
Exec | Type of execution target where the SAP job will run, one of the following:
|
Target | The name of the SAP application server or SAP group (depending on the value specified in the previous parameter) |
JobClass | Job submission priority in SAP, one of the following options:
|
StartCondition | Specifies when the job should run, one of the following:
|
AfterEvent | The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent) |
AfterEventParameters | Parameters in the SAP event to watch for. Use space characters to separate multiple parameters. |
RerunFromPointOfFailure | Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default) Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun. |
CopyFromStep | The number of a specific step in the SAP R/3 job from which to rerun The default is step 1 (that is, the beginning of the job). Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored. |
Steps | An object that groups together the definitions of SAP R/3 job steps |
StepType | The type of program to execute in this step, one of the following options:
|
ProgramName | The name of the program or command |
UserName | The authorized owner of the step |
Description | A textual description or comment for the step |
Further parameters for each individual step depend on the type of program that is executed in the step. These parameters are listed in the separate tables below, for ABAP program steps and for external program or external command steps. |
PostJobAction | This object groups together several parameters that control post-job actions for the SAP R/3 job. |
Spool | How to manage spool output, one of the following options:
|
SpoolFile | The file to which to copy the job's spool output (if Spool is set to CopyToFile) |
SpoolSaveToPDF | Whether to save the job's spool output in PDF format (if Spool is set to CopyToFile) |
JobLog | How to manage job log output, one of the following options:
|
JobLogFile | The file to which to copy the job's log output (if JobLog is set to CopyToFile) |
JobCompletionStatusWillDependOnApplicationStatus | Whether job completion status depends on SAP application status, either true or false (the default) |
DetectSpawnedJob | This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job |
DetectAndCreate | How to determine the properties of detected spawned jobs:
|
JobName | Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition) Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values. |
StartSpawnedJob | Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default) |
JobEndInControlMOnlyAftreChildJobsCompleteOnSap | Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default) |
JobCompletionStatusDependsOnChildJobsStatus | Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default) When set to true, the parent job does not end OK if any child job fails. This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true. |
SpoolListRecipient | This object groups together several parameters that define recipients of Print jobs |
RecipientType | Type of recipient of the print job, one of the following:
|
RecipientName | Recipient of the print job (of the type defined by the previous parameter) |
RecipientCopy | Whether this recipient is a copied (CC) recipient, either true or false (the default) |
RecipientBlindCopy | Whether this recipient is a blind copied (BCC) recipient, either true or false (the default) |
RecipientExpress | For a CC or BCC recipient: Whether to send in express mode, either true or false (the default) |
ReciptNoForwarding | For a CC or BCC recipient: Whether to set the recipient to "No Forwarding", either true or false (the default) |
The following additional parameters are available for steps that involve the execution of an ABAP program. Most of these parameters are optional.
Language | SAP internal one-character language code for the ABAP step For example, German is D and Serbian (using the Latin alphabet) is d. For the full list of available language codes, see SAP Knowledge Base Article 2633548. |
VariantName | The name of a variant for the specified ABAP program or Archiving Object |
VariantDescription | A textual description or comment for the variant |
VariantParameters | This object groups together the variables defined in the variant. For each variable, you can set the following parameters:
|
TemporaryVariantParameters | This object groups together the variables defined in a temporary variant. For each variable, you can set the same parameters listed above, except for Modify (which is not supported by a temporary variant). |
OutputDevice | The logical name of the designated printer |
NumberOfCopies | Number of copies to be printed The default is 1. |
PrintArchiveMode | Whether the spool of the step is printed to an output device, to the archive, or both. Choose from the following available values:
|
TimeToPrint | When to print the job output, one of the following options:
|
PrintExpiration | Number of days until a print job expires Valid values are single-digit numbers:
The default is 8 days. |
NewSpoolRequest | Whether to request a new spool, either true (the default) or false |
DeleteAfterPrint | Whether to delete the report after printing, either true or false (the default) |
OutputLayout | Print layout format |
OutputNumberRows | (Mandatory) Maximum number of rows per page Valid values:
|
OutputNumberColumns | (Mandatory) Maximum number of characters in an output line Valid values:
|
CoverRecipient | Name of the recipient of the job output on the cover sheet The name can be up to 12 characters. |
CoverDepartment | Name of the spool department on the cover sheet The department name can be up to 12 characters. |
CoverPage | Type of cover page for output, one of the following options:
|
CoverSheet | Type of cover sheet for output, one of the following options:
|
CoverPrintPage | Whether to use a cover page, either true or false The default is false. |
SpoolListName | Name of the spool list The name can be up to 12 characters. |
SpoolListTitles | The spool list titles |
SpoolAuthorization | Name of a user with print authorization The name can be up to 12 characters. |
ArchiveId | SAP ArchiveLink Storage system ID Values are two characters long. The default is ZZ. Note that Archive parameters are relevant only when you set PrintArchiveMode to Archive or PrintAndArchive. |
ArchiveText | Free text description of the archive location, up to 40 characters |
ArchiveObjectType | Archive object type Valid values are up to 10 characters. |
ArchiveDocumentType | Archive object document type Valid values are up to 10 characters. |
ArchiveInformationField | Archive information Values can be 1–3 characters. |
The following additional parameters are available for steps that involve the execution of an external program or an external command:
TargetHost | Host computer on which the program or command runs |
OperatingSystem | Operating system on which the external command runs The default is ANYOS. |
WaitExternalTermination | Whether SAP waits for the external program or external command to end before starting the next step, or before exiting. Values are either true (the default) or false. |
LogExternalOutput | Whether SAP logs external output in the joblog Values are either true (the default) or false. |
LogExternalErrors | Whether SAP logs external errors in the joblog Values are either true (the default) or false. |
ActiveTrace | Whether SAP activates traces for the external program or external command Values are either true or false (the default). |
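Putting these together, here is a sketch of a single step definition (an element of the Steps array) that runs an external command and waits for it to end; the host and command are illustrative:
{
"StepType": "ExternalCommand",
"UserName": "user01",
"TargetHost": "host01",
"ProgramName": "PING",
"OperatingSystem": "ANYOS",
"WaitExternalTermination": true,
"LogExternalOutput": true,
"LogExternalErrors": true,
"ActiveTrace": false
}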
Job:SAP:R3:COPY
This job type enables you to create a new SAP R3 job by duplicating an existing job. The following example shows how to use Job:SAP:R3:COPY:
"Type" : "Job:SAP:R3:COPY",
"ConnectionProfile":"SAP-CON",
"SapJobName" : "CHILD_1",
"Exec": "Server",
"Target" : "Server-name",
"JobCount" : "SpecificJob",
"JobCountSpecificName" : "sap-job-1234",
"NewJobName" : "My-New-Sap-Job",
"StartCondition" : "AfterEvent",
"AfterEvent" : "HOLA",
"AfterEventParameters" : "parm1 parm2",
"RerunFromPointOfFailure": true,
"CopyFromStep" : "4",
"PostJobAction" : {
"Spool" : "CopyToFile",
"SpoolFile": "spoolfile.log",
"SpoolSaveToPDF" : true,
"JobLog" : "CopyToFile",
"JobLogFile": "Log.txt",
"JobCompletionStatusWillDependOnApplicationStatus" : true
},
"DetectSpawnedJob" : {
"DetectAndCreate": "SpecificJobDefinition",
"JobName" : "Specific-Job-123",
"StartSpawnedJob" : true,
"JobEndInControlMOnlyAftreChildJobsCompleteOnSap" : true,
"JobCompletionStatusDependsOnChildJobsStatus" : true
}
}
This SAP job object uses the following parameters:
ConnectionProfile | Name of the SAP connection profile to use for the connection |
SapJobName | Name of SAP job to copy |
Exec | Type of execution target where the SAP job will run, one of the following:
|
Target | The name of the SAP application server or SAP group (depending on the value specified in the previous parameter) |
JobCount | How to define a unique ID number for the SAP job, one of the following options:
If you specify SpecificJob, you must provide the next parameter. |
JobCountSpecificName | A unique SAP job ID number for a specific job (that is, for when JobCount is set to SpecificJob) |
NewJobName | Name of the newly created job |
StartCondition | Specifies when the job should run, one of the following:
|
AfterEvent | The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent) |
AfterEventParameters | Parameters in the SAP event to watch for. Use space characters to separate multiple parameters. |
RerunFromPointOfFailure | Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default) Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun. |
CopyFromStep | The number of a specific step in the SAP R/3 job from which to rerun or copy The default is step 1 (that is, the beginning of the job). Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored. |
PostJobAction | This object groups together several parameters that control post-job actions for the SAP R/3 job. |
Spool | How to manage spool output, one of the following options:
|
SpoolFile | The file to which to copy the job's spool output (if Spool is set to CopyToFile) |
SpoolSaveToPDF | Whether to save the job's spool output in PDF format (if Spool is set to CopyToFile) |
JobLog | How to manage job log output, one of the following options:
|
JobLogFile | The file to which to copy the job's log output (if JobLog is set to CopyToFile) |
JobCompletionStatusWillDependOnApplicationStatus | Whether job completion status depends on SAP application status, either true or false (the default) |
DetectSpawnedJob | This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job |
DetectAndCreate | How to determine the properties of detected spawned jobs:
|
JobName | Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition) Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values. |
StartSpawnedJob | Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default) |
JobEndInControlMOnlyAftreChildJobsCompleteOnSap | Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default) |
JobCompletionStatusDependsOnChildJobsStatus | Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default) When set to true, the parent job does not end OK if any child job fails. This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true. |
Job:SAP:BW:ProcessChain
This job type runs and monitors a Process Chain in SAP Business Warehouse (SAP BW).
NOTE: For the job that you define through Control-M Automation API to work properly, ensure that the Process Chain defined in the SAP BW system has Start Using Meta Chain or API as the start condition for the trigger process (Start Process) of the Process Chain. To configure this parameter, from the SAP transaction RSPC, right-click the trigger process and select Maintain Variant.
The following example shows how to use Job:SAP:BW:ProcessChain:
"Type": "Job:SAP:BW:ProcessChain",
"ConnectionProfile": "PI4-BW",
"ProcessChainDescription": "SAP BW Process Chain",
"Id": "123456",
"RerunOption": "RestartFromFailiurePoint",
"EnablePeridoicJob": true,
"ConsiderOnlyOverallChainStatus": true,
"RetrieveLog": false,
"DetectSpawnedJob": {
"DetectAndCreate": "SpecificJobDefinition",
"JobName": "ChildJob",
"StartSpawnedJob": false,
"JobEndInControlMOnlyAftreChildJobsCompleteOnSap": false,
"JobCompletionStatusDependsOnChildJobsStatus": false
}
}
This SAP job object uses the following parameters:
ConnectionProfile | Name of the SAP connection profile to use for the connection. |
ProcessChainDescription | The description of the Process Chain that you want to run and monitor, as defined in SAP BW. Maximum length of the textual description: 60 characters |
Id | ID of the Process Chain that you want to run and monitor. |
RerunOption | The rerun policy to apply to the job after job failure, one of the following values:
|
EnablePeridoicJob | Whether the first run of the Process Chain prepares for the next run. This is useful for reruns when big Process Chains are scheduled. Values are either true (the default) or false. |
ConsiderOnlyOverallChainStatus | Whether to view only the status of the overall Process Chain. Values are either true or false (the default). |
RetrieveLog | Whether to add the Process Chain logs to the job output. Values are either true (the default) or false. |
DetectSpawnedJob | This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job |
DetectAndCreate | How to determine the properties of detected spawned jobs:
|
JobName | Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition). Note: The specified job must exist in the same folder as the current job, and it should use the same connection profile. BMC recommends that it also have the same Application and Sub Application values. |
StartSpawnedJob | Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default) |
JobEndInControlMOnlyAftreChildJobsCompleteOnSap | Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default) |
JobCompletionStatusDependsOnChildJobsStatus | Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default). When set to true, the parent job does not end OK if any child job fails. This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true. |
Job:PeopleSoft
PeopleSoft-type jobs enable you to manage PeopleSoft jobs and processes through the Control-M environment. To manage PeopleSoft-type jobs, you must have the Control-M for PeopleSoft plugin installed in your Control-M environment.
The following example shows the JSON code used to define a PeopleSoft job.
"Type": "Job:PeopleSoft",
"ConnectionProfile": "PS_CONNECT",
"User": "PS_User3",
"ControlId": "ControlId",
"ServerName": "ServerName",
"ProcessType": "ProcessType",
"ProcessName": "ProcessName",
"AppendToOutput": false,
"BindVariables": ["value1","value2"],
"RunAs": "controlm"
}
This PeopleSoft job object uses the following parameters:
ConnectionProfile | Name of the PeopleSoft connection profile to use for the connection |
User | A PeopleSoft user ID that exists in the PeopleSoft Environment |
ControlId | Run Control ID for access to run controls at runtime |
ServerName | The name of the server on which to run the PeopleSoft job or process |
ProcessType | A PeopleSoft process type that the user is authorized to perform |
ProcessName | The name of the PeopleSoft process to run |
AppendToOutput | Whether to include PeopleSoft job output in the Control-M job output, either true or false. The default is false. |
BindVariables | Values of up to 20 USERDEF variables for sharing data between Control-M and the PeopleSoft job or process |
Job:ApplicationIntegrator
Use Job:ApplicationIntegrator:<JobType> to define a job of a custom type using the Control-M Application Integrator designer. For information about designing job types, see the Control-M Application Integrator Help.
The following example shows the JSON code used to define a job type named AI Monitor Remote Job:
"Type": "Job:ApplicationIntegrator:AI Monitor Remote Job",
"ConnectionProfile": "AI_CONNECTION_PROFILE",
"AI-Host": "Host1",
"AI-Port": "5180",
"AI-User Name": "admin",
"AI-Password": "*******",
"AI-Remote Job to Monitor": "remoteJob5",
"RunAs": "controlm"
}
In this example, the ConnectionProfile and RunAs properties are standard job properties used in Control-M Automation API jobs. The other job properties are defined in the Control-M Application Integrator and must be prefixed with "AI-" in the JSON code.
The corresponding settings can be viewed in the Control-M Application Integrator, for reference purposes.
Job:Informatica
Informatica-type jobs enable you to automate Informatica workflows through the Control-M environment. To manage Informatica-type jobs, you must have the Control-M for Informatica plugin installed in your Control-M environment.
The following example shows the JSON code used to define an Informatica job.
"Type": "Job:Informatica",
"ConnectionProfile": "INFORMATICA_CONNECTION",
"RepositoryFolder": "POC",
"Workflow": "WF_Test",
"InstanceName": "MyInstamce",
"OsProfile": "MyOSProfile",
"WorkflowExecutionMode": "RunSingleTask",
"RunSingleTask": "s_MapTest_Success",
"WorkflowRestartMode": "ForceRestartFromSpecificTask",
"RestartFromTask": "s_MapTest_Success",
"WorkflowParametersFile": "/opt/wf1.prop",
}
This Informatica job object uses the following parameters:
ConnectionProfile | Name of the Informatica connection profile to use for the connection |
RepositoryFolder | The Repository folder that contains the workflow that you want to run |
Workflow | The workflow that you want to run in Control-M for Informatica |
InstanceName | (Optional) The specific instance of the workflow that you want to run |
OsProfile | (Optional) The operating system profile in Informatica |
WorkflowExecutionMode | The mode for executing the workflow, one of the following:
|
StartFromTask | The task from which to start running the workflow. This parameter is required only if you set WorkflowExecutionMode to StartFromTask. |
RunSingleTask | The workflow task that you want to run. This parameter is required only if you set WorkflowExecutionMode to RunSingleTask. |
Depth | The number of levels within the workflow task hierarchy for the selection of workflow tasks Default: 10 levels |
EnableOutput | Whether to include the workflow events log in the job output (either true or false) Default: true |
EnableErrorDetails | Whether to include a detailed error log for a workflow that failed (either true or false) Default: true |
WorkflowRestartMode | The operation to execute when the workflow is in a suspended status, one of the following:
|
RestartFromTask | The task from which to restart a suspended workflow. This parameter is required only if you set WorkflowRestartMode to ForceRestartFromSpecificTask. |
WorkflowParametersFile | (Optional) The path and name of the workflow parameters file. This enables you to use the same workflow for different actions. |
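For comparison with the RunSingleTask example above, the following minimal sketch (reusing the same placeholder connection profile, repository folder, workflow, and task names) shows a workflow that is started from a specific task; StartFromTask is included because WorkflowExecutionMode is set to StartFromTask:
{
    "Type": "Job:Informatica",
    "ConnectionProfile": "INFORMATICA_CONNECTION",
    "RepositoryFolder": "POC",
    "Workflow": "WF_Test",
    "WorkflowExecutionMode": "StartFromTask",
    "StartFromTask": "s_MapTest_Success"
}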
Job:AWS
AWS-type jobs enable you to automate a select list of AWS services through Control-M Automation API. To manage AWS-type jobs, you must have the Control-M for AWS plugin installed in your Control-M environment.
The following JSON objects are available for creating AWS-type jobs:
An additional Job:AWS Glue job type is provided for the AWS Glue service. To support this job type, you must have the Control-M Application Integrator plugin installed and you must deploy the AWS Glue integration using the deploy jobtype command.
Job:AWS:Lambda
The following example shows how to define a job that executes an AWS Lambda service on an AWS server.
"Type": "Job:AWS:Lambda",
"ConnectionProfile": "AWS_CONNECTION",
"FunctionName": "LambdaFunction",
"Version": "1",
"Payload" : "{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}"
"AppendLog": true
}
This AWS job object uses the following parameters :
FunctionName | The Lambda function to execute |
Version | (Optional) The Lambda function version. The default is $LATEST (the latest version). |
Payload | (Optional) The Lambda function payload, in JSON format. Escape all special characters. |
AppendLog | Whether to add the log to the job’s output, either true (the default) or false |
Job:AWS:StepFunction
The following example shows how to define a job that executes an AWS Step Function service on an AWS server.
"Type": "Job:AWS:StepFunction",
"ConnectionProfile": "AWS_CONNECTION",
"StateMachine": "StateMachine1",
"ExecutionName": "Execution1",
"Input": ""{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}" ",
"AppendLog": true
}
This AWS job object uses the following parameters :
StateMachine | The State Machine to use |
ExecutionName | A name for the execution |
Input | The Step Function input, in JSON format. Escape all special characters. |
AppendLog | Whether to add the log to the job’s output, either true (the default) or false |
Job:AWS:Batch
The following example shows how to define a job that executes an AWS Batch service on an AWS server.
"Type": "Job:AWS:Batch",
"ConnectionProfile": "AWS_CONNECTION",
"JobName": "batchjob1",
"JobDefinition": "jobDef1",
"JobDefinitionRevision": "3",
"JobQueue": "queue1",
"AWSJobType": "Array",
"ArraySize": "100",
"DependsOn": {
"DependencyType": "Standard",
"JobDependsOn": "job5"
},
"Command": [ "ffmpeg", "-i" ],
"Memory": "10",
"vCPUs": "2",
"JobAttempts": "5",
"ExecutionTimeout": "60",
"AppendLog": false
}
This AWS job object uses the following parameters :
JobName | The name of the batch job |
JobDefinition | The job definition to use |
JobDefinitionRevision | The job definition revision |
JobQueue | The queue to which the job is submitted |
AWSJobType | The type of job, either Array or Single |
ArraySize | (For a job of type Array) The size of the array (that is, the number of items in the array) Valid values: 2–10000 |
DependsOn | Parameters that determine a job dependency |
DependencyType | (For a job of type Array) Type of dependency, one of the following values: Standard, Sequential, or N-to-N |
JobDependsOn | The JobID upon which the Batch job depends. This parameter is mandatory for a Standard or N-to-N dependency, and optional for a Sequential dependency. |
Command | A command to send to the container that overrides the default command from the Docker image or the job definition |
Memory | The number of megabytes of memory reserved for the job Minimum value: 4 megabytes |
vCPUs | The number of vCPUs to reserve for the container |
JobAttempts | The number of retry attempts Valid values: 1–10 |
ExecutionTimeout | The timeout duration in seconds |
AppendLog | Whether to add the log to the job’s output, either true (the default) or false |
Job:AWS Glue
9.0.20.100 The following example shows how to define a job that executes Amazon Web Services (AWS) Glue, a serverless data integration service.
To deploy and run an AWS Glue job, ensure that you have the Control-M Application Integrator plugin installed and have deployed the AWS Glue integration using the deploy jobtype command.
"Type": "Job:AWS Glue",
"ConnectionProfile": "GLUECONNECTION",
"AI-Glue Job Name": "AwsGlueJobName",
"AI-Glue Job Arguments": "checked",
"AI-Arguments": "{\"--myArg1\": \"myVal1\", \"--myArg2\": \"myVal2\"}",
"AI-Status Polling Frequency": "20"
}
The AWS Glue job object uses the following parameters :
ConnectionProfile | Name of a connection profile to use to connect to the AWS Glue service |
AI-Glue Job Name | The name of the AWS Glue job that you want to execute. |
AI-Glue Job Arguments | Whether to enable specification of arguments to be passed when running the AWS Glue job (see next property). Values are checked or unchecked. The default is unchecked. |
AI-Arguments | (Optional) Specific arguments to pass when running the AWS Glue job Format: {\"--myArg1\": \"myVal1\", \"--myArg2\": \"myVal2\"} For more information about the available arguments, see Special Parameters Used by AWS Glue in the AWS documentation. |
AI-Status Polling Frequency | (Optional) Number of seconds to wait before checking the status of the job. Default: 30 |
Job:Azure
9.0.19.220 The Azure job type enables you to automate workflows that include a select list of Azure services. To manage Azure-type jobs, you must have the Control-M for Azure plugin installed in your Control-M environment.
The following JSON objects are available for creating Azure-type jobs:
An additional Job:ADF job type is provided for the Azure Data Factory service. To support this job type, you must have the Control-M Application Integrator plugin installed and you must deploy the ADF integration using the deploy jobtype command.
Job:Azure:Function
The following example shows how to define a job that executes an Azure function service.
"Type": "Job:Azure:Function",
"ConnectionProfile": "AZURE_CONNECTION",
"AppendLog": false,
"Function": "AzureFunction",
"FunctionApp": "AzureFunctionApp",
"Parameters": [
{"firstParamName": "firstParamValue"},
{"secondParamName": "secondParamValue"}
]
}
This Azure job object uses the following parameters :
Function | The name of the Azure function to execute |
FunctionApp | The name of the Azure function app |
Parameters | (Optional) Function parameters defined as pairs of name and value. |
AppendLog | (Optional) Whether to add the log to the job’s output, either true (the default) or false |
Job:Azure:LogicApps
The following example shows how to define a job that executes an Azure Logic App service.
"Type": "Job:Azure:LogicApps",
"ConnectionProfile": "AZURE_CONNECTION",
"LogicAppName": "MyLogicApp",
"RequestBody": "{\\n \"name\": \"BMC\"\\n}",
"AppendLog": false
}
This Azure job object uses the following parameters :
LogicAppName | The name of the Azure Logic App |
RequestBody | (Optional) The JSON for the expected payload |
AppendLog | (Optional) Whether to add the log to the job’s output, either true (the default) or false |
Job:Azure:BatchAccount
The following example shows how to define a job that executes an Azure Batch Account service.
"Type": "Job:Azure:BatchAccount",
"ConnectionProfile": "AZURE_CONNECTION",
"JobId": "AzureJob1",
"CommandLine": "echo \"Hello\"",
"AppendLog": false,
"Wallclock": {
"Time": "770",
"Unit": "Minutes"
},
"MaxTries": {
"Count": "6",
"Option": "Custom"
},
"Retention": {
"Time": "1",
"Unit": "Hours"
}
}
This Azure job object uses the following parameters :
JobId | The ID of the batch job |
CommandLine | A command line that the batch job runs |
AppendLog | (Optional) Whether to add the log to the job’s output, either true (the default) or false |
Wallclock | (Optional) Maximum limit for the job's run time. If you do not include this parameter, the default is unlimited run time. Use this parameter to set a custom time limit. Include the following next-level parameters: Time and Unit (as shown in the example above). |
MaxTries | (Optional) The number of times to retry running a failed task. If you do not include this parameter, the default is none (no retries). Use this parameter to choose between the following options:
|
Retention | (Optional) File retention period for the batch job. If you do not include this parameter, the default is an unlimited retention period. Use this parameter to set a custom time limit for retention. Include the following next-level parameters: Time and Unit (as shown in the example above). |
Job:ADF
9.0.20.100 The following example shows how to define a job that executes an Azure Data Factory (ADF) service, a cloud-based ETL and data integration service that allows you to create data-driven workflows to automate the movement and transformation of data.
To deploy and run an ADF job, ensure that you have the Control-M Application Integrator plugin installed and have deployed the ADF integration using the deploy jobtype command.
"Type": "Job:ADF",
"ConnectionProfile": "DataFactoryConnection",
"AI-Resource Group Name": "AzureResourceGroupName",
"AI-Data Factory Name": "AzureDataFactoryName",
"AI-Pipeline Name": "AzureDataFactoryPipelineName",
"AI-Parameters": "{\"myVar\":\"value1\", \"myOtherVar\": \"value2\"}",
"AI-Status Polling Frequency": "20"
}
The ADF job object uses the following parameters :
ConnectionProfile | Name of a connection profile to use to connect to Azure Data Factory |
AI-Resource Group Name | The Azure Resource Group that is associated with a specific data factory pipeline. A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. |
AI-Data Factory Name | The Azure Data Factory Resource to use to execute the pipeline |
AI-Pipeline Name | The data pipeline to run when the job is executed |
AI-Parameters | Specific parameters to pass when the Data Pipeline runs, defined as pairs of name and value Format: {\"var1\":\"value1\", \"var2\":\"value2\"} |
AI-Status Polling Frequency | (Optional) Number of seconds to wait before checking the status of the job. Default: 30 |
Job:SLAManagement
SLA Management jobs enable you to identify a chain of jobs that comprise a critical service and must complete by a certain time. The SLA Management job is always defined as the last job in the chain of jobs.
To manage SLA Management jobs, you must have the SLA Management add-on (previously known as Control-M Batch Impact Manager) installed in your Control-M environment.
The following example shows the JSON code of a simple chain of jobs that ends with an SLA Management job. In this chain of jobs:
- The first job is a Command job that prints Hello and then adds an event named Hello-TO-SLA_Job_for_SLA-GOOD.
- The second (and last) job is an SLA Management job for a critical service named SLA-GOOD. This job waits for the event added by the first job and then deletes it.
"SLARobotTestFolder_Good": {
"Type": "SimpleFolder",
"ControlmServer": "LocalControlM",
"Hello": {
"Type": "Job:Command",
"CreatedBy": "emuser",
"RunAs": "controlm",
"Command": "echo \"Hello\"",
"eventsToAdd": {
"Type": "AddEvents",
"Events": [
{
"Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
}
]
}
},
"SLA": {
"Type": "Job:SLAManagement",
"ServiceName": "SLA-GOOD",
"ServicePriority": "1",
"CreatedBy": "emuser",
"RunAs": "DUMMYUSR",
"JobRunsDeviationsTolerance": "2",
"CompleteIn": {
"Time": "00:01"
},
"eventsToWaitFor": {
"Type": "WaitForEvents",
"Events": [
{
"Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
}
]
},
"eventsToDelete": {
"Type": "DeleteEvents",
"Events": [
{
"Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
}
]
}
}
}
}
The following table lists the parameters that can be included in an SLA Management job:
Parameter | Description |
---|---|
ServiceName | A logical name, from a user or business perspective, for the critical service. BMC recommends that the service name be unique. Names can contain up to 64 alphanumeric characters. |
ServicePriority | The priority level of this service, from a user or business perspective. Values range from 1 (highest priority) to 5 (lowest priority). Default: 3 |
CreatedBy | The Control-M/EM user who defined the job. |
RunAs | The operating system user that will run the job. |
JobRunsDeviationsTolerance | Extent of tolerated deviation from the average completion time for a job in the service, expressed as a number of standard deviations based on percentile ranges. If the run time falls within the set tolerance, the job is considered on time; otherwise, it has run too long or ended too early. Select one of the following values:
Note: The JobRunsDeviationsTolerance parameter and the AverageRunTimeTolerance parameter are mutually exclusive. Specify only one of these two parameters. |
AverageRunTimeTolerance | Extent of tolerated deviation from the average completion time for a job in the service, expressed as a percentage of the average time or as the number of minutes that the job can be early or late. If the run time falls within the set tolerance, the job is considered on time; otherwise, it has run too long or ended too early. The following example demonstrates how to set this parameter based on a percentage of the average run time: "AverageRunTimeTolerance": { "Units": "Percentage", "AverageRunTime": "94" } The following example demonstrates how to set this parameter based on a number of minutes: "AverageRunTimeTolerance": { "Units": "Minutes", "AverageRunTime": "10" } Note: The AverageRunTimeTolerance parameter and the JobRunsDeviationsTolerance parameter are mutually exclusive. Specify only one of these two parameters. |
CompleteBy | Defines by what time (in HH:MM) and within how many days the critical service must complete to be considered on time. In the following example, the critical service must complete by 11:51 PM, within 3 days of when it began running. "CompleteBy": { "Time": "23:51", "Days": "3" } The default number of days is 0 (that is, on the same day). Note: The CompleteBy parameter and the CompleteIn parameter are mutually exclusive. Specify only one of these two parameters. |
CompleteIn | Defines the number of hours and minutes for the critical service to complete and be considered on time, as in the following example: "CompleteIn": { "Time": "15:21" } Note: The CompleteIn parameter and the CompleteBy parameter are mutually exclusive. Specify only one of these two parameters. |
ServiceActions | Defines automatic interventions (actions, such as rerunning a job or extending the service due time) in response to specific occurrences (If statements, such as a job finished too quickly or a service finished late). For more information, see Service Actions. |
Service Actions
The following example demonstrates a series of Service Actions that are triggered in response to specific occurrences (If statements). Note that this example includes only a select group of If statements and a select group of actions; for the full list, see the tables that follow.
"If:SLA:ServiceIsLate_0": {
"Type": "If:SLA:ServiceIsLate",
"Action:SLA:Notify_0": {
"Type": "Action:SLA:Notify",
"Severity": "Regular",
"Message": "this is a message"
},
"Action:SLA:Mail_1": {
"Type": "Action:SLA:Mail",
"Email": "email@okmail.com",
"Subject": "this is a subject",
"Message": "this is a message"
}
},
"If:SLA:JobFailureOnServicePath_1": {
"Type": "If:SLA:JobFailureOnServicePath",
"Action:SLA:Order_0": {
"Type": "Action:SLA:Order",
"Server": "LocalControlM",
"Folder": "folder",
"Job": "job",
"Date": "OrderDate",
"Library": "library"
}
},
"If:SLA:ServiceEndedNotOK_5": {
"Type": "If:SLA:ServiceEndedNotOK",
"Action:SLA:Set_0": {
"Type": "Action:SLA:Set",
"Variable": "varname",
"Value": "varvalue"
},
"Action:SLA:Increase_2": {
"Type": "Action:SLA:Increase",
"Time": "04:03"
}
},
"If:SLA:ServiceLatePastDeadline_6": {
"Type": "If:SLA:ServiceLatePastDeadline",
"Action:SLA:Event:Add_0": {
"Type": "Action:SLA:Event:Add",
"Server": "LocalControlM",
"Name": "addddd",
"Date": "AnyDate"
}
}
The following If statements can be used to define occurrences for which you want to take action:
If statement | Description |
---|---|
If:SLA:ServiceIsLate | The service will be late according to SLA Management calculations. |
If:SLA:JobFailureOnServicePath | One or more of the jobs in the service failed and caused a delay in the service. An SLA Management service is considered OK even if one of its jobs fails, provided that another job, with an Or relationship to the failed job, runs successfully. |
If:SLA:JobRanTooLong | One of the jobs in the critical service is late. Lateness is calculated according to the average run time and Job Runtime Tolerance settings. A service is considered on time even if one of its jobs is late, provided that the service itself is not late. |
If:SLA:JobFinishedTooQuickly | One of the jobs in the critical service is early. The end time is calculated according to the average run time and Job Runtime Tolerance settings. A service is considered on time even if one of its jobs is early. |
If:SLA:ServiceEndedOK | The service ended OK. |
If:SLA:ServiceEndedNotOK | The service ended with a status of Not OK. |
If:SLA:ServiceLatePastDeadline | The service is late, and passed its deadline. |
For each If statement, you define one or more actions to be triggered. The following table lists the available Service Actions:
Action | Description | Sub-parameters |
---|---|---|
Action:SLA:Notify | Send notification to the Alerts Window |
|
Action:SLA:Mail | Send an email to a specific email recipient. |
|
Action:SLA:Remedy | Open a ticket in the Remedy Help Desk. |
|
Action:SLA:Order | Run a job, regardless of its scheduling criteria. |
|
Action:SLA:SetToOK | Set the job's completion status to OK, regardless of its actual completion status. |
|
Action:SLA:SetToOK:ProblematicJob | Set the completion status to OK for a job that is not running on time and will impact the service. | No parameters |
Action:SLA:Rerun | Rerun the job, regardless of its scheduling criteria |
|
Action:SLA:Rerun:ProblematicJob | Rerun a job that is not running on time and will impact the service. | No parameters |
Action:SLA:Kill | Kill a job while it is still executing. |
|
Action:SLA:Kill:ProblematicJob | Kill a problematic job (a job that is not running on time in the service) while it is still executing. | No parameters |
Action:SLA:Set | Assign a value to a variable for use in a rerun of the job. |
|
Action:SLA:SIM | Send early warning notification regarding the critical service to BMC Service Impact Manager. |
|
Action:SLA:Increase | Allow the job or critical service to continue running by extending (by hours and/or minutes) the deadline until which the job or service can run and still be considered on time. |
|
Action:SLA:Event:Add | Add an event. |
|
Action:SLA:Event:Delete | Delete an event. |
|
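Putting the pieces together, the following minimal sketch extends the SLA job from the earlier example with a single Service Action. It assumes that the If blocks shown above are nested under the job's ServiceActions parameter; the message text is an illustrative placeholder:
"SLA": {
    "Type": "Job:SLAManagement",
    "ServiceName": "SLA-GOOD",
    "ServicePriority": "1",
    "CreatedBy": "emuser",
    "RunAs": "DUMMYUSR",
    "CompleteIn": {
        "Time": "00:01"
    },
    "ServiceActions": {
        "If:SLA:ServiceIsLate_0": {
            "Type": "If:SLA:ServiceIsLate",
            "Action:SLA:Notify_0": {
                "Type": "Action:SLA:Notify",
                "Severity": "Regular",
                "Message": "Service SLA-GOOD is predicted to be late"
            }
        }
    }
}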
Job:Dummy
The following example shows how to use Job:Dummy to define a job that always ends successfully without running any commands.
"Type" : "Job:Dummy"
}