Job types
The following sections provide details about the available types of jobs that you can define and the parameters that you use to define each type of job.
Job:Command
The following example shows how to use Job:Command to run operating system commands.
"JobName": {
"Type" : "Job:Command",
"Command" : "echo hello",
"PreCommand": "echo before running main command",
"PostCommand": "echo after running main command",
"Host" : "myhost.mycomp.com",
"RunAs" : "user1"
}
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. |
RunAs | Identifies the operating system user that will run the job. |
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
Job:Script
The following example shows how to use Job:Script to run a script from a specified script file.
"JobWithPreAndPost": {
"Type" : "Job:Script",
"FileName" : "task1123.sh",
"FilePath" : "/home/user1/scripts",
"PreCommand": "echo before running script",
"PostCommand": "echo after running script",
"Host" : "myhost.mycomp.com",
"RunAs" : "user1"
"Arguments":[
"arg1",
"arg2"
]
}
FileName together with FilePath | Indicates the location of the script. NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts". |
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
RunAs | Identifies the operating system user that will run the job. |
Arguments | (Optional) An array of strings that are passed to the script. |
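For example, a Windows variant of the same job might look like the following sketch (the file name, path, and host name here are hypothetical); note the doubled backslashes in the FilePath value:
"WindowsScriptJob": {
    "Type" : "Job:Script",
    "FileName" : "task1123.bat",
    "FilePath" : "c:\\tmp\\scripts",
    "Host" : "mywinhost.mycomp.com",
    "RunAs" : "user1"
}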
Job:EmbeddedScript
The following example shows how to use Job:EmbeddedScript to run a script that you include in the JSON code. Control-M deploys this script to the Agent host during job submission. This eliminates the need to deploy scripts as part of your application stack.
"EmbeddedScriptJob":{
"Type":"Job:EmbeddedScript",
"Script":"#!/bin/bash\\necho \"Hello world\"",
"Host":"myhost.mycomp.com",
"RunAs":"user1",
"FileName":"myscript.sh",
"PreCommand": "echo before running script",
"PostCommand": "echo after running script"
}
Script | Full content of the script, up to 64 kilobytes. |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
RunAs | Identifies the operating system user that will run the job. |
FileName | Name of a script file. This property is used for the following purposes:
|
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
Job:FileTransfer
The following example shows a Job:FileTransfer for a file transfer from a local filesystem to an SFTP server:
{
"FileTransferFolder" :
{
"Type" : "Folder",
"Application" : "aft",
"TransferFromLocalToSFTP" :
{
"Type" : "Job:FileTransfer",
"ConnectionProfileSrc" : "LocalConn",
"ConnectionProfileDest" : "SftpConn",
"NumberOfRetries": "3",
"Host": "AgentHost",
"FileTransfers" :
[
{
"Src" : "/home/controlm/file1",
"Dest" : "/home/controlm/file2",
"TransferType": "Binary",
"TransferOption": "SrcToDest"
},
{
"Src" : "/home/controlm/otherFile1",
"Dest" : "/home/controlm/otherFile2",
"TransferOption": "DestToSrc"
}
]
}
}
}
Here is another example for a file transfer from an S3 storage service to a local filesystem:
{
"MyS3AftFolder": {
"Type": "Folder",
"Application": "aft",
"TransferFromS3toLocal":
{
"Type": "Job:FileTransfer",
"ConnectionProfileSrc": "amazonConn",
"ConnectionProfileDest": "LocalConn",
"NumberOfRetries": "4",
"S3BucketName": "bucket1",
"Host": "agentHost",
"FileTransfers": [
{
"Src" : "folder/sub_folder/file1",
"Dest" : "folder/sub_folder/file2"
}
]
}
}
}
Here is another example for a file transfer from an S3 storage service to another S3 storage service:
{
"MyS3AftFolder": {
"Type": "Folder",
"Application": "aft",
"TransferFromS3toS3":
{
"Type": "Job:FileTransfer",
"ConnectionProfileSrc": "amazonConn",
"ConnectionProfileDest": "amazon2Conn",
"NumberOfRetries": "6",
"S3BucketNameSrc": "bucket1",
"S3BucketNameDest": "bucket2",
"Host": "agentHost",
"FileTransfers": [
{
"Src" : "folder/sub_folder/file1",
"Dest" : "folder/sub_folder/file2"
}
]
}
}
}
And here is another example for a file transfer from a local filesystem to an AS2 server.
Note: File transfers that use the AS2 protocol are supported only in one direction — from a local filesystem to an AS2 server.
{
"MyAs2AftFolder": {
"Type": "Folder",
"Application": "AFT",
"MyAftJob_AS2":
{
"Type": "Job:FileTransfer",
"ConnectionProfileSrc": "localAConn",
"ConnectionProfileDest": "as2Conn",
"NumberOfRetries": "Default",
"Host": "agentHost",
"FileTransfers": [
{
"Src": "/dev",
"Dest": "/home/controlm/",
"As2Subject": "Override subject",
"As2Message": "Override conntent type"
}
]
}
}
}
The following parameters were used in the examples above:
Parameter | Description |
---|---|
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. |
ConnectionProfileSrc | The connection profile to use as the source |
ConnectionProfileDest | The connection profile to use as the destination |
ConnectionProfileDualEndpoint | If you need to use a dual-endpoint connection profile that you have set up, specify the name of the dual-endpoint connection profile instead of ConnectionProfileSrc and ConnectionProfileDest. A dual-endpoint connection profile can be used for FTP, SFTP, and Local filesystem transfers. For more information about dual-endpoint connection profiles, see ConnectionProfile:FileTransfer:DualEndPoint. |
NumberOfRetries | Number of connection attempts after a connection failure Range of values: 0–99 or "Default" (to inherit the default) Default: 5 attempts |
S3BucketName | For file transfers between a local filesystem and an Amazon S3 or S3-compatible storage service: The name of the S3 bucket |
S3BucketNameSrc | For file transfers between two Amazon S3 or S3-compatible storage services: The name of the S3 bucket at the source |
S3BucketNameDest | For file transfers between two Amazon S3 or S3-compatible storage services: The name of the S3 bucket at the destination |
FileTransfers | A list of file transfers to perform during job execution, each with the following properties: |
Src | Full path to the source file |
Dest | Full path to the destination file |
TransferType | (Optional) FTP transfer mode, either Ascii (for a text file) or Binary (non-textual data file). Ascii mode handles the conversion of line endings within the file, while Binary mode creates a byte-for-byte identical copy of the transferred file. Default: "Binary" |
TransferOption | (Optional) The transfer option to use. Options shown in the examples in this section include SrcToDest, DestToSrc, and SrcToDestFileWatcher. Default: "SrcToDest" |
As2Subject | Optional for AS2 file transfer: A text to use to override the subject of the AS2 message. |
As2Message | Optional for AS2 file transfer: A text to use to override the content type in the AS2 message. |
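For illustration, the following sketch shows how a job might reference a dual-endpoint connection profile instead of the ConnectionProfileSrc/ConnectionProfileDest pair. The profile name DualConn is an assumption; the job is otherwise defined inside a folder in the same way as the examples above:
"TransferWithDualEndpoint" :
{
    "Type" : "Job:FileTransfer",
    "ConnectionProfileDualEndpoint" : "DualConn",
    "Host" : "AgentHost",
    "FileTransfers" :
    [
        {
            "Src" : "/home/controlm/file1",
            "Dest" : "/home/controlm/file2",
            "TransferOption" : "SrcToDest"
        }
    ]
}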
The following example presents a File Transfer job in which the transferred file is watched using the File Watcher utility:
{
"FileTransferFolder" :
{
"Type" : "Folder",
"Application" : "aft",
"TransferFromLocalToSFTPBasedOnEvent" :
{
"Type" : "Job:FileTransfer",
"Host" : "AgentHost",
"ConnectionProfileSrc" : "LocalConn",
"ConnectionProfileDest" : "SftpConn",
"NumberOfRetries": "3",
"FileTransfers" :
[
{
"Src" : "/home/sftp/file1",
"Dest" : "/home/sftp/file2",
"TransferType": "Binary",
"TransferOption" : "SrcToDestFileWatcher",
"PreCommandDest" :
{
"action" : "rm",
"arg1" : "/home/sftp/file2"
},
"PostCommandDest" :
{
"action" : "chmod",
"arg1" : "700",
"arg2" : "/home/sftp/file2"
},
"FileWatcherOptions":
{
"MinDetectedSizeInBytes" : "200",
"TimeLimitPolicy" : "WaitUntil",
"TimeLimitValue" : "2000",
"MinFileAge" : "3Min",
"MaxFileAge" : "10Min",
"AssignFileNameToVariable" : "FileNameEvent",
"TransferAllMatchingFiles" : true
}
}
]
}
}
}
This example contains the following additional optional parameters:
PreCommandSrc, PreCommandDest, PostCommandSrc, PostCommandDest | Define commands that occur before and after job execution, on the source host or on the destination host. In the example above, the rm action removes the destination file before the transfer and the chmod action changes its permissions afterward. |
FileWatcherOptions | Additional options for watching the transferred file using the File Watcher utility: |
MinDetectedSizeInBytes | Defines the minimum number of bytes transferred before checking if the file size is static |
TimeLimitPolicy / TimeLimitValue | Defines the time limit to watch a file. If TimeLimitPolicy is set to WaitUntil, TimeLimitValue is the specific time to wait until; for example, 04:22 means 4:22 AM. |
MinFileAge | Defines the minimum number of years, months, days, hours, and/or minutes that must have passed since the watched file was last modified Valid values: 9999Y9999M9999d9999h9999Min For example: 2y3d7h |
MaxFileAge | Defines the maximum number of years, months, days, hours, and/or minutes that can pass since the watched file was last modified Valid values: 9999Y9999M9999d9999h9999Min For example: 2y3d7h |
AssignFileNameToVariable | Defines the variable name that contains the detected file name |
TransferAllMatchingFiles | Whether to transfer all matching files (value of true) or only the first matching file (value of false) after waiting until the watching criteria are met. Valid values: true | false |
Job:FileWatcher
A File Watcher job enables you to detect the successful completion of a file transfer activity that creates or deletes a file. The following example shows how to manage File Watcher jobs using Job:FileWatcher:Create and Job:FileWatcher:Delete.
"FWJobCreate" : {
"Type" : "Job:FileWatcher:Create",
"RunAs":"controlm",
"Path" : "C:/path*.txt",
"SearchInterval" : "45",
"TimeLimit" : "22",
"StartTime" : "201705041535",
"StopTime" : "201805041535",
"MinimumSize" : "10B",
"WildCard" : true,
"MinimalAge" : "1Y",
"MaximalAge" : "1D2H4MIN"
},
"FWJobDelete" : {
"Type" : "Job:FileWatcher:Delete",
"RunAs":"controlm",
"Path" : "C:/path.txt",
"SearchInterval" : "45",
"TimeLimit" : "22",
"StartTime" : "201805041535",
"StopTime" : "201905041535"
}
This example contains the following parameters:
Path | Path of the file to be detected by the File Watcher You can include wildcards in the path — * for any number of characters, and ? for any single character. |
SearchInterval | Interval (in seconds) between successive attempts to detect the creation/deletion of a file |
TimeLimit | Maximum time (in minutes) to run the process without detecting the file at its minimum size (for Job:FileWatcher:Create) or detecting its deletion (for Job:FileWatcher:Delete). If the file is not detected/deleted in this specified time frame, the process terminates with an error return code. Default: 0 (no time limit) |
StartTime | The time at which to start watching the file The format is yyyymmddHHMM. For example, 201805041535 means that the File Watcher will start watching the file on May 4, 2018 at 3:35 PM. |
StopTime | The time at which to stop watching the file. Format: yyyymmddHHMM or HHMM (for the current date) |
MinimumSize | Minimum file size to monitor for, when watching a created file Follow the specified number with the unit: B for bytes, KB for kilobytes, MB for megabytes, or GB for gigabytes. If the file name (specified by the Path parameter) contains wildcards, minimum file size is monitored only if you set the Wildcard parameter to true. |
Wildcard | Whether to monitor minimum file size of a created file if the file name (specified by the Path parameter) contains wildcards Values: true | false |
MinimalAge | (Optional) The minimum number of years, months, days, hours, and/or minutes that must have passed since the created file was last modified. For example: 2Y10M3D5H means that 2 years, 10 months, 3 days, and 5 hours must pass before the file will be watched. 2H10Min means that 2 hours and 10 minutes must pass before the file will be watched. |
MaximalAge | (Optional) The maximum number of years, months, days, hours, and/or minutes that can pass since the created file was last modified. For example: 2Y10M3D5H means that after 2 years, 10 months, 3 days, and 5 hours have passed, the file will no longer be watched. 2H10Min means that after 2 hours and 10 minutes have passed, the file will no longer be watched. |
Job:Database
The following types of database jobs are available:
- Embedded Query job, using Job:Database:EmbeddedQuery
- SQL Script job, using Job:Database:SQLScript
- Stored Procedure job, using Job:Database:StoredProcedure
- MSSQL Agent job, using Job:Database:MSSQL:AgentJob
- SSIS Package job, using Job:Database:MSSQL:SSIS
Job:Database:EmbeddedQuery
The following example shows how to create a database job that runs an embedded query.
{
"PostgresDBFolder": {
"Type": "Folder",
"EmbeddedQueryJobName": {
"Type": "Job:Database:EmbeddedQuery",
"ConnectionProfile": "POSTGRESQL_CONNECTION_PROFILE",
"Query": "SELECT %%firstParamName AS VAR1 \\n FROM DUMMY \\n ORDER BY \\t VAR1 DESC",
"Host": "${agentName}",
"RunAs": "PostgressCP",
"Variables": [
{
"firstParamName": "firstParamValue"
}
],
"Autocommit": "N",
"OutputExecutionLog": "Y",
"OutputSQLOutput": "Y",
"SQLOutputFormat": "XML"
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host, as well as the Databases plug-in. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Query | The embedded SQL query that you want to run. The SQL query can contain auto edit variables. During the job run, these variables are replaced by the values that you specify in the Variables parameter (next row). For long queries, you can specify delimiters using \\n (new line) and \\t (tab). |
Variables | Variables are pairs of name and value. Every name that appears in the embedded script will be replaced by its value pair. The maximum length of a variable name is 38 alphanumeric characters and it is case-sensitive. |
The following optional parameters are also available for all types of database jobs:
Autocommit | (Optional) Determines whether to commit statements to the database upon successful completion Default: N |
OutputExecutionLog | (Optional) Shows the execution log in the job output Default: Y |
OutputSQLOutput | (Optional) Shows the SQL sysout in the job output Default: N |
SQLOutputFormat | (Optional) Defines the output format as either Text, XML, CSV, or HTML Default: Text |
Job:Database:SQLScript
The following example shows how to create a database job that runs a SQL script from a file system.
{
"OracleDBFolder": {
"Type": "Folder",
"testOracle": {
"Type": "Job:Database:SQLScript",
"Host": "AgentHost",
"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
"ConnectionProfile": "ORACLE_CONNECTION_PROFILE",
"Parameters": [
{"firstParamName": "firstParamValue"},
{"secondParamName": "secondParamValue"}
]
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host, as well as the Databases plug-in. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Parameters | Parameters are pairs of name and value. Every name that appears in the SQL script will be replaced by its value pair. |
For additional optional parameters, see above.
Another example:
{
"OracleDBFolder": {
"Type": "Folder",
"testOracle": {
"Type": "Job:Database:SQLScript",
"Host": "app-redhat",
"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
"ConnectionProfile": "ORACLE_CONNECTION_PROFILE"
}
}
}
Job:Database:StoredProcedure
The following example shows how to create a database job that runs a program that is stored on the database.
{
"storeFolder": {
"Type": "Folder",
"jobStoredProcedure": {
"Type": "Job:Database:StoredProcedure",
"Host": "myhost.mycomp.com",
"StoredProcedure": "myProcedure",
"Parameters": [ "value1","variable1",["value2","variable2"]],
"ReturnValue":"RV",
"Schema": "public",
"ConnectionProfile": "DB-PG-CON"
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host, as well as the Databases plug-in. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
StoredProcedure | Name of stored procedure that the job runs |
Parameters | A comma-separated list of values and variables for all parameters in the procedure, in the order of their appearance in the procedure. The value that you specify for any specific parameter depends on the type of parameter (In, Out, or Inout). In the example above, three parameters are listed, in the following order: [In,Out,Inout] |
ReturnValue | A variable for the Return parameter (if the procedure contains such a parameter) |
Schema | The database schema where the stored procedure resides |
Package | (Oracle only) Name of a package in the database where the stored procedure resides The default is "*", that is, any package in the database. |
ConnectionProfile | Name of a connection profile that contains the details of the connection to the database |
For additional optional parameters, see above.
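For illustration, the following sketch shows how the Package and Schema parameters might be combined in an Oracle stored procedure job; the procedure, package, schema, and connection profile names are placeholders:
"jobOraclePackagedProcedure": {
    "Type": "Job:Database:StoredProcedure",
    "Host": "myhost.mycomp.com",
    "StoredProcedure": "myProcedure",
    "Package": "myPackage",
    "Schema": "myschema",
    "ConnectionProfile": "ORACLE_CONNECTION_PROFILE"
}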
Job:Database:MSSQL:AgentJob
The following example shows how to create an MSSQL Agent job, for management of a job defined in the SQL server.
{
"MSSQLFolder": {
"Type": "Folder",
"ControlmServer": "IN01",
"MSSQLAgentJob": {
"Type": "Job:Database:MSSQL:AgentJob",
"ConnectionProfile": "MSSQL-WE-EXAMPLE",
"Host": "agentHost",
"JobName": "get_version",
"Category": "Data Collector"
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host, as well as the Databases plug-in. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
JobName | The name of the job defined in the SQL server |
Category | The category of the job, as defined in the SQL server |
For additional optional parameters, see above.
Job:Database:MSSQL:SSIS
The following example shows how to create SSIS Package jobs for execution of SQL Server Integration Services (SSIS) packages:
{
"MSSQLFolder": {
"Type": "Folder",
"ControlmServer": "IN01",
"SSISCatalog": {
"Type": "Job:Database:MSSQL:SSIS",
"ConnectionProfile": "MSSQL-CP-NAME",
"Host": "agentHost",
"PackageSource": "SSIS Catalog",
"PackageName": "\\Data Collector\\SqlTraceCollect",
"CatalogEnv": "ENV_NAME",
"ConfigFiles": [
"C:\\Users\\dbauser\\Desktop\\test.dtsConfig",
"C:\\Users\\dbauser\\Desktop\\test2.dtsConfig"
],
"Properties": [
{
"PropertyName": "PropertyValue"
},
{
"PropertyName2": "PropertyValue2"
}
]
},
"SSISPackageStore": {
"Type": "Job:Database:MSSQL:SSIS",
"ConnectionProfile": "MSSQL-CP-NAME",
"Host": "agentHost",
"PackageSource": "SSIS Package Store",
"PackageName": "\\Data Collector\\SqlTraceCollect",
"ConfigFiles": [
"C:\\Users\\dbauser\\Desktop\\test.dtsConfig",
"C:\\Users\\dbauser\\Desktop\\test2.dtsConfig"
],
"Properties": [
{
"PropertyName": "PropertyValue"
},
{
"PropertyName2": "PropertyValue2"
}
]
}
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host, as well as the Databases plug-in. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
PackageSource | The source of the SSIS package. The examples above use SSIS Catalog and SSIS Package Store. |
PackageName | The name of the SSIS package. |
CatalogEnv | If PackageSource is 'SSIS Catalog': The name of the environment on which to run the package. Use this optional parameter if you want to run the package on a different environment from the one that you are currently using. |
ConfigFiles | (Optional) Names of configuration files that contain specific data that you want to apply to the SSIS package |
Properties | (Optional) Pairs of names and values for properties defined in the SSIS package. Each property name is replaced by its defined value during SSIS package execution. |
For additional optional parameters, see above.
Job:Hadoop
Various types of Hadoop jobs are available for you to define using the Job:Hadoop objects:
- Spark Python
- Spark Scala or Java
- Pig
- Sqoop
- Hive
- DistCp (distributed copy)
- HDFS commands
- HDFS File Watcher
- Oozie
- MapReduce
- MapReduce Streaming
Job:Hadoop:Spark:Python
The following example shows how to use Job:Hadoop:Spark:Python to run a Spark Python program.
"ProcessData": {
"Type": "Job:Hadoop:Spark:Python",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"SparkScript": "/home/user/processData.py"
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Optional parameters:
"ProcessData1": {
"Type": "Job:Hadoop:Spark:Python",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"SparkScript": "/home/user/processData.py",
"Arguments": [
"1000",
"120"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
},
"SparkOptions": [
{"--master": "yarn"},
{"--num":"-executors 50"}
]
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Job:Hadoop:Spark:ScalaJava
The following example shows how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.
"ProcessData": {
"Type": "Job:Hadoop:Spark:ScalaJava",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar": "/home/user/ScalaProgram.jar",
"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName"
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Optional parameters:
"ProcessData1": {
"Type": "Job:Hadoop:Spark:ScalaJava",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar": "/home/user/ScalaProgram.jar"
"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",
"Arguments": [
"1000",
"120"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
},
"SparkOptions": [
{"--master": "yarn"},
{"--num":"-executors 50"}
]
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Job:Hadoop:Pig
The following example shows how to use Job:Hadoop:Pig to run a Pig script.
"ProcessDataPig": {
"Type" : "Job:Hadoop:Pig",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"PigScript" : "/home/user/script.pig"
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Optional parameters:
"ProcessDataPig1": {
"Type" : "Job:Hadoop:Pig",
"ConnectionProfile": "DEV_CLUSTER",
"PigScript" : "/home/user/script.pig",
"Host" : "edgenode",
"Parameters" : [
{"amount":"1000"},
{"volume":"120"}
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Job:Hadoop:Sqoop
The following example shows how to use Job:Hadoop:Sqoop to run a Sqoop job.
"LoadDataSqoop":
{
"Type" : "Job:Hadoop:Sqoop",
"Host" : "edgenode",
"ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",
"SqoopCommand" : "import --table foo --target-dir /dest_dir"
}
ConnectionProfile | See Sqoop ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Optional parameters:
"LoadDataSqoop1" :
{
"Type" : "Job:Hadoop:Sqoop",
"Host" : "edgenode",
"ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",
"SqoopCommand" : "import --table foo",
"SqoopOptions" : [
{"--warehouse-dir":"/shared"},
{"--default-character-set":"latin1"}
],
"SqoopArchives" : "",
"SqoopFiles": "",
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
SqoopOptions | Options that are passed as arguments to the specific Sqoop tool. |
SqoopArchives | Indicates the location of the Hadoop archives. |
SqoopFiles | Indicates the location of the Sqoop files. |
Job:Hadoop:Hive
The following example shows how to use Job:Hadoop:Hive to run a Hive beeline job.
"ProcessHive":
{
"Type" : "Job:Hadoop:Hive",
"Host" : "edgenode",
"ConnectionProfile" : "HIVE_CONNECTION_PROFILE",
"HiveScript" : "/home/user1/hive.script"
}
ConnectionProfile | See Hive ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Optional parameters:
"ProcessHive1" :
{
"Type" : "Job:Hadoop:Hive",
"Host" : "edgenode",
"ConnectionProfile" : "HIVE_CONNECTION_PROFILE",
"HiveScript" : "/home/user1/hive.script",
"Parameters" : [
{"ammount": "1000"},
{"topic": "food"}
],
"HiveArchives" : "",
"HiveFiles": "",
"HiveOptions" : [
{"hive.root.logger": "INFO,console"}
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
HiveScriptParameters | Passed to beeline as --hivevar "name"="value". |
HiveProperties | Passed to beeline as --hiveconf "key"="value". |
HiveArchives | Passed to beeline as --hiveconf mapred.cache.archives="value". |
HiveFiles | Passed to beeline as --hiveconf mapred.cache.files="value". |
Job:Hadoop:DistCp
The following example shows how to use Job:Hadoop:DistCp to run a DistCp job. DistCp (distributed copy) is a tool used for large inter/intra-cluster copying.
"DistCpJob" :
{
"Type" : "Job:Hadoop:DistCp",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"TargetPath" : "hdfs://nns2:8020/foo/bar",
"SourcePaths" :
[
"hdfs://nn1:8020/foo/a"
]
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Optional parameters:
"DistCpJob" :
{
"Type" : "Job:Hadoop:DistCp",
"Host" : "edgenode",
"ConnectionProfile" : "HADOOP_CONNECTION_PROFILE",
"TargetPath" : "hdfs://nns2:8020/foo/bar",
"SourcePaths" :
[
"hdfs://nn1:8020/foo/a",
"hdfs://nn1:8020/foo/b"
],
"DistcpOptions" : [
{"-m":"3"},
{"-filelimit ":"100"}
]
}
TargetPath, SourcePaths and DistcpOptions | Passed to the distcp tool in the following way: distcp <Options> <TargetPath> <SourcePaths>. |
Job:Hadoop:HDFSCommands
The following example shows how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.
"HdfsJob":
{
"Type" : "Job:Hadoop:HDFSCommands",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"Commands": [
{"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Job:Hadoop:HDFSFileWatcher
The following example shows how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.
"HdfsFileWatcherJob" :
{
"Type" : "Job:Hadoop:HDFSFileWatcher",
"Host" : "edgenode",
"ConnectionProfile" : "DEV_CLUSTER",
"HdfsFilePath" : "/inputs/filename",
"MinDetecedSize" : "1",
"MaxWaitTime" : "2"
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
HdfsFilePath | Specifies the full path of the file being watched. |
MinDetecedSize | Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives, but the size is not met, the job continues to watch the file. |
MaxWaitTime | Defines the maximum number of minutes to wait for the file to meet the watching criteria. If criteria are not met (file did not arrive, or minimum size was not reached) the job fails after this maximum number of minutes. |
Job:Hadoop:Oozie
The following example shows how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.
"OozieJob": {
"Type" : "Job:Hadoop:Oozie",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"JobPropertiesFile" : "/home/user/job.properties",
"OozieOptions" : [
{"inputDir":"/usr/tucu/inputdir"},
{"outputDir":"/usr/tucu/outputdir"}
]
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
JobPropertiesFile | The path to the job properties file. |
Optional parameters:
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
OozieOptions | Set or override values for the given job properties. |
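The following sketch shows how these optional parameters might be combined with the Oozie job definition from the example above; it reuses the PreCommands/PostCommands structure shown for the other Hadoop job types, with placeholder HDFS paths:
"OozieJob1": {
    "Type" : "Job:Hadoop:Oozie",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",
    "JobPropertiesFile" : "/home/user/job.properties",
    "OozieOptions" : [
        {"inputDir":"/usr/tucu/inputdir"}
    ],
    "PreCommands": {
        "FailJobOnCommandFailure" : false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"}
        ]
    },
    "PostCommands": {
        "FailJobOnCommandFailure" : true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    }
}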
Job:Hadoop:MapReduce
The following example shows how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.
"MapReduceJob" :
{
"Type" : "Job:Hadoop:MapReduce",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
"MainClass" : "pi",
"Arguments" :[
"1",
"2"]
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Optional parameters:
"MapReduceJob1" :
{
"Type" : "Job:Hadoop:MapReduce",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
"MainClass" : "pi",
"Arguments" :[
"1",
"2"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Job:Hadoop:MapredStreaming
The following example shows how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.
"MapredStreamingJob1": {
"Type": "Job:Hadoop:MapredStreaming",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"InputPath": "/user/robot/input/*",
"OutputPath": "/tmp/output",
"MapperCommand": "mapper.py",
"ReducerCommand": "reducer.py",
"GeneralOptions": [
{"-D": "fs.permissions.umask-mode=000"},
{"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}
]
}
ConnectionProfile | See ConnectionProfile:Hadoop |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. |
Optional parameters:
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
GeneralOptions | Additional Hadoop command options passed to the hadoop-streaming.jar, including generic options and streaming options. |
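The following sketch shows how the same PreCommands structure might be added to the streaming job from the example above; the HDFS path in the rm command is a placeholder:
"MapredStreamingJob2": {
    "Type": "Job:Hadoop:MapredStreaming",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",
    "InputPath": "/user/robot/input/*",
    "OutputPath": "/tmp/output",
    "MapperCommand": "mapper.py",
    "ReducerCommand": "reducer.py",
    "PreCommands": {
        "FailJobOnCommandFailure" : false,
        "Commands" : [
            {"rm" : "hdfs://nn.example.com/tmp/output"}
        ]
    }
}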
Job:SAP
SAP-type jobs enable you to manage SAP processes through the Control-M environment. To manage SAP-type jobs, you must have the SAP plug-in installed.
The following JSON objects are available for creating SAP-type jobs:
- Job:SAP:R3:CREATE — Creates a new SAP R3 job
- Job:SAP:R3:COPY — Creates an SAP R3 job by copying an existing job
- Job:SAP:BW:ProcessChain — Defines a job to run and monitor a Process Chain in SAP Business Warehouse (SAP BW)
- Job:SAP:BW:InfoPackage — Defines a job to run and monitor an SAP InfoPackage that is pre-defined in SAP Business Warehouse (SAP BW)
Job:SAP:R3:CREATE
This job type enables you to create a new SAP R3 job.
The following example is a simple job that relies mostly on default settings and contains one step that executes an external command.
"SAPR3_external_command": {
"Type": "Job:SAP:R3:CREATE",
"ConnectionProfile": "SAPCP",
"SapJobName": "SAP_job",
"CreatedBy": "user1",
"Steps": [
{
"StepType": "ExternalCommand",
"UserName": "user01",
"TargetHost": "host01",
"ProgramName": "PING"
}
],
"SpoolListRecipient": {
"ReciptNoForwarding": false
}
}
The following example is a more complex job that contains two steps that run ABAP programs. Each of the ABAP steps has an associated variant that contains variable definitions.
"SapR3CreateComplete": {
"Type": "Job:SAP:R3:CREATE",
"ConnectionProfile": "SAPCP",
"SapJobName": "SAP_job2",
"StartCondition": "Immediate",
"RerunFromStep": "3",
"Target": "controlmserver",
"CreatedBy": "user1",
"Steps": [
{
"StepType": "ABAP",
"TimeToPrint": "PrintLater",
"CoverPrintPage": true,
"OutputDevice": "prt",
"UserName": "user",
"SpoolAuthorization": "Auth",
"CoverDepartment": "dpt",
"SpoolListName": "spoolname",
"OutputNumberRows": "62",
"NumberOfCopies": "5",
"NewSpoolRequest": false,
"PrintArchiveMode": "PrintAndArchive",
"CoverPage": "Print",
"ArchiveObjectType": "objtype",
"SpoolListTitles": "titles",
"OutputLayout": "layout",
"CoverSheet": "Print",
"ProgramName": "ABAP_PROGRAM",
"Language": "e",
"ArchiveInformationField": "inf",
"DeleteAfterPrint": true,
"PrintExpiration": "3",
"OutputNumberColumns": "88",
"ArchiveDocumentType": "doctype",
"CoverRecipient": "recipient",
"VariantName": "NameOfVariant",
"VariantParameters": [
{
"Type": "Range",
"High": "2",
"Sign": "I",
"Option": "BT",
"Low": "1",
"Name": "var1",
"Modify": false
},
{
"Low": "5",
"Type": "Range",
"Option": "BT",
"Sign": "I",
"Modify": true,
"High": "6",
"Name": "var3"
}
]
},
{
"StepType": "ABAP",
"PrintArchiveMode": "Print",
"ProgramName": "ABAP_PROGRAM2",
"VariantName": "Myvar_with_temp",
"TemporaryVariantParameters": [
{
"Type": "Simple",
"Name": "var",
"Value": "P11"
},
{
"Type": "Simple",
"Name": "var2",
"Value": "P11"
}
]
}
],
"PostJobAction": {
"JobLog": "CopyToFile",
"JobCompletionStatusWillDependOnApplicationStatus": true,
"SpoolSaveToPDF": true,
"JobLogFile": "fileToCopy.txt"
},
"SpoolListRecipient": {
"ReciptNoForwarding": false
}
}
The following table lists the parameters that can be used in SAP jobs of this type:
ConnectionProfile | Name of the SAP connection profile to use for the connection |
SapJobName | Name of SAP job to be monitored or submitted |
Exec | Type of execution target where the SAP job will run, either an SAP application server or an SAP group |
Target | The name of the SAP application server or SAP group (depending on the value specified in the previous parameter) |
JobClass | Job submission priority in SAP, one of the following options:
|
StartCondition | Specifies when the job should run, one of the following:
|
AfterEvent | The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent) |
AfterEventParameters | Parameters in the SAP event to watch for. Use space characters to separate multiple parameters. |
RerunFromPointOfFailure | Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default) Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun. |
CopyFromStep | The number of a specific step in the SAP R/3 job from which to rerun The default is step 1 (that is, the beginning of the job). Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored. |
Steps | An object that groups together the definitions of SAP R/3 job steps |
StepType | The type of program to execute in this step, one of the following options:
|
ProgramName | The name of the program or command |
UserName | The authorized owner of the step |
Description | A textual description or comment for the step |
Further parameters for each individual step depend on the type of program that is executed in the step. These parameters are listed in separate tables below. |
PostJobAction | This object groups together several parameters that control post-job actions for the SAP R/3 job. |
Spool | How to manage spool output, one of the following options:
|
SpoolFile | The file to which to copy the job's spool output (if Spool is set to CopyToFile) |
SpoolSaveToPDF | Whether to save the job's spool output in PDF format (if Spool is set to CopyToFile) |
JobLog | How to manage job log output, one of the following options:
|
JobLogFile | The file to which to copy the job's log output (if JobLog is set to CopyToFile) |
JobCompletionStatusWillDependOnApplicationStatus | Whether job completion status depends on SAP application status, either true or false (the default) |
DetectSpawnedJob | This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job |
DetectAndCreate | How to determine the properties of detected spawned jobs:
|
JobName | Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition) Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values. |
StartSpawnedJob | Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default) |
JobEndInControlMOnlyAftreChildJobsCompleteOnSap | Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default) |
JobCompletionStatusDependsOnChildJobsStatus | Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default) When set to true, the parent job does not end OK if any child job fails. This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true. |
SpoolListRecipient | This object groups together several parameters that define recipients of Print jobs |
RecipientType | Type of recipient of the print job, one of the following:
|
RecipientName | Recipient of the print job (of the type defined by the previous parameter) |
RecipientCopy | Whether this recipient is a copied (CC) recipient, either true or false (the default) |
RecipientBlindCopy | Whether this recipient is a blind copied (BCC) recipient, either true or false (the default) |
RecipientExpress | For a CC or BCC recipient: Whether to send in express mode, either true or false (the default) |
ReciptNoForwarding | For a CC or BCC recipient: Whether to set the recipient to "No Forwarding", either true or false (the default) |
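For illustration, a fuller SpoolListRecipient object than the one in the examples above might set several of these parameters, as in the following partial sketch (the recipient name is a placeholder, and RecipientType is omitted):
"SpoolListRecipient": {
    "RecipientName": "user01",
    "RecipientCopy": true,
    "RecipientExpress": false,
    "ReciptNoForwarding": false
}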
The following additional parameters are available for steps that involve the execution of an ABAP program. Most of these parameters are optional.
Language | SAP internal one-character language code for the ABAP step For example, German is D and Serbian (using the Latin alphabet) is d. For the full list of available language codes, see SAP Knowledge Base Article 2633548. |
VariantName | The name of a variant for the specified ABAP program or Archiving Object |
VariantDescription | A textual description or comment for the variant |
VariantParameters | This object groups together the variables defined in the variant. For each variable, you can set the parameters shown in the examples above: Type, Name, Value, Low, High, Sign, Option, and Modify. |
TemporaryVariantParameters | This object groups together the variables defined in a temporary variant. For each variable, you can set the same parameters listed above, except for Modify (which is not supported by a temporary variant). |
OutputDevice | The logical name of the designated printer |
NumberOfCopies | Number of copies to be printed The default is 1. |
PrintArchiveMode | Whether the spool of the step is printed to an output device, to the archive, or both: Print, Archive, or PrintAndArchive |
TimeToPrint | When to print the job output, one of the following options:
|
PrintExpiration | Number of days until a print job expires Valid values are single-digit numbers:
The default is 8 days. |
NewSpoolRequest | Whether to request a new spool, either true (the default) or false |
DeleteAfterPrint | Whether to delete the report after printing, either true or false (the default) |
OutputLayout | Print layout format |
OutputNumberRows | (Mandatory) Maximum number of rows per page Valid values:
|
OutputNumberColumns | (Mandatory) Maximum number of characters in an output line Valid values:
|
CoverRecipient | Name of the recipient of the job output on the cover sheet The name can be up to 12 characters. |
CoverDepartment | Name of the spool department on the cover sheet The department name can be up to 12 characters. |
CoverPage | Type of cover page for output, one of the following options:
|
CoverSheet | Type of cover sheet for output, one of the following options:
|
CoverPrintPage | Whether to use a cover page, either true or false The default is false. |
SpoolListName | Name of the spool list The name can be up to 12 characters. |
SpoolListTitles | The spool list titles |
SpoolAuthorization | Name of a user with print authorization The name can be up to 12 characters. |
ArchiveId | SAP ArchiveLink Storage system ID Values are two characters long. The default is ZZ. Note that Archive parameters are relevant only when you set PrintArchiveMode to Archive or PrintAndArchive. |
ArchiveText | Free text description of the archive location, up to 40 characters |
ArchiveObjectType | Archive object type Valid values are up to 10 characters. |
ArchiveDocumentType | Archive object document type Valid values are up to 10 characters. |
ArchiveInformationField | Archive information Values can be 1–3 characters. |
The following additional parameters are available for steps that involve the execution of an external program or an external command:
TargetHost | Host computer on which the program or command runs |
OperatingSystem | Operating system on which the external command runs The default is ANYOS. |
WaitExternalTermination | Whether SAP waits for the external program or external command to end before starting the next step, or before exiting. Values are either true (the default) or false. |
LogExternalOutput | Whether SAP logs external output in the joblog Values are either true (the default) or false. |
LogExternalErrors | Whether SAP logs external errors in the joblog Values are either true (the default) or false. |
ActiveTrace | Whether SAP activates traces for the external program or external command Values are either true or false (the default). |
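For illustration, an external command step that uses these parameters might look like the following sketch within the Steps array of a Job:SAP:R3:CREATE definition; the user and host names are placeholders, and the boolean values shown simply restate the documented defaults:
"Steps": [
    {
        "StepType": "ExternalCommand",
        "UserName": "user01",
        "TargetHost": "host01",
        "OperatingSystem": "ANYOS",
        "ProgramName": "PING",
        "WaitExternalTermination": true,
        "LogExternalOutput": true,
        "LogExternalErrors": true,
        "ActiveTrace": false
    }
]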
Job:SAP:R3:COPY
This job type enables you to create a new SAP R3 job by duplicating an existing job. The following example shows how to use Job:SAP:R3:COPY:
"JobSapR3Copy" : {
"Type" : "Job:SAP:R3:COPY",
"ConnectionProfile":"SAP-CON",
"SapJobName" : "CHILD_1",
"Exec": "Server",
"Target" : "Server-name",
"JobCount" : "SpecificJob",
"JobCountSpecificName" : "sap-job-1234",
"NewJobName" : "My-New-Sap-Job",
"StartCondition" : "AfterEvent",
"AfterEvent" : "HOLA",
"AfterEventParameters" : "parm1 parm2",
"RerunFromPointOfFailure": true,
"CopyFromStep" : "4",
"PostJobAction" : {
"Spool" : "CopyToFile",
"SpoolFile": "spoolfile.log",
"SpoolSaveToPDF" : true,
"JobLog" : "CopyToFile",
"JobLogFile": "Log.txt",
"JobCompletionStatusWillDependOnApplicationStatus" : true
},
"DetectSpawnedJob" : {
"DetectAndCreate": "SpecificJobDefinition",
"JobName" : "Specific-Job-123",
"StartSpawnedJob" : true,
"JobEndInControlMOnlyAftreChildJobsCompleteOnSap" : true,
"JobCompletionStatusDependsOnChildJobsStatus" : true
}
}
This SAP job object uses the following parameters:
ConnectionProfile | Name of the SAP connection profile to use for the connection |
SapJobName | Name of SAP job to copy |
Exec | Type of execution target where the SAP job will run, either an SAP application server or an SAP group |
Target | The name of the SAP application server or SAP group (depending on the value specified in the previous parameter) |
JobCount | How to define a unique ID number for the SAP job, one of the following options:
If you specify SpecificJob, you must provide the next parameter. |
JobCountSpecificName | A unique SAP job ID number for a specific job (that is, for when JobCount is set to SpecificJob) |
NewJobName | Name of the newly created job |
StartCondition | Specifies when the job should run, one of the following:
|
AfterEvent | The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent) |
AfterEventParameters | Parameters in the SAP event to watch for. Use space characters to separate multiple parameters. |
RerunFromPointOfFailure | Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default) Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun. |
CopyFromStep | The number of a specific step in the SAP R/3 job from which to rerun or copy The default is step 1 (that is, the beginning of the job). Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored. |
PostJobAction | This object groups together several parameters that control post-job actions for the SAP R/3 job. |
Spool | How to manage spool output, one of the following options:
|
SpoolFile | The file to which to copy the job's spool output (if Spool is set to CopyToFile) |
SpoolSaveToPDF | Whether to save the job's spool output in PDF format (if Spool is set to CopyToFile) |
JobLog | How to manage job log output, one of the following options:
|
JobLogFile | The file to which to copy the job's log output (if JobLog is set to CopyToFile) |
JobCompletionStatusWillDependOnApplicationStatus | Whether job completion status depends on SAP application status, either true or false (the default) |
DetectSpawnedJob | This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job |
DetectAndCreate | How to determine the properties of detected spawned jobs:
|
JobName | Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition) Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values. |
StartSpawnedJob | Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default) |
JobEndInControlMOnlyAftreChildJobsCompleteOnSap | Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default) |
JobCompletionStatusDependsOnChildJobsStatus | Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default) When set to true, the parent job does not end OK if any child job fails. This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true. |
Job:SAP:BW:ProcessChain
This job type runs and monitors a Process Chain in SAP Business Warehouse (SAP BW).
NOTE: For the job that you define through Control-M Automation API to work properly, ensure that the Process Chain defined in the SAP BW system has Start Using Meta Chain or API as the start condition for the trigger process (Start Process) of the Process Chain. To configure this parameter, from the SAP transaction RSPC, right-click the trigger process and select Maintain Variant.
The following example shows how to use Job:SAP:BW:ProcessChain:
"JobSapBW": {
"Type": "Job:SAP:BW:ProcessChain",
"ConnectionProfile": "PI4-BW",
"ProcessChainDescription": "SAP BW Process Chain",
"Id": "123456",
"RerunOption": "RestartFromFailiurePoint",
"EnablePeridoicJob": true,
"ConsiderOnlyOverallChainStatus": true,
"RetrieveLog": false,
"DetectSpawnedJob": {
"DetectAndCreate": "SpecificJobDefinition",
"JobName": "ChildJob",
"StartSpawnedJob": false,
"JobEndInControlMOnlyAftreChildJobsCompleteOnSap": false,
"JobCompletionStatusDependsOnChildJobsStatus": false
}
}
This SAP job object uses the following parameters:
ConnectionProfile | Name of the SAP connection profile to use for the connection. |
ProcessChainDescription | The description of the Process Chain that you want to run and monitor, as defined in SAP BW. Maximum length of the textual description: 60 characters |
Id | ID of the Process Chain that you want to run and monitor. |
RerunOption | The rerun policy to apply to the job after job failure, one of the following values:
|
EnablePeridoicJob | Whether the first run of the Process Chain prepares for the next run, which is useful for reruns when large Process Chains are scheduled. Values are either true (the default) or false. |
ConsiderOnlyOverallChainStatus | Whether to view only the status of the overall Process Chain. Values are either true or false (the default). |
RetrieveLog | Whether to add the Process Chain logs to the job output. Values are either true (the default) or false. |
DetectSpawnedJob | This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job |
DetectAndCreate | How to determine the properties of detected spawned jobs:
|
JobName | Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition) Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values. |
StartSpawnedJob | Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default) |
JobEndInControlMOnlyAftreChildJobsCompleteOnSap | Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default) |
JobCompletionStatusDependsOnChildJobsStatus | Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default) When set to true, the parent job does not end OK if any child job fails. This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true. |
Job:SAP:BW:InfoPackage
This job type runs and monitors an InfoPackage that is pre-defined in SAP Business Warehouse (SAP BW).
The following example shows how to use Job:SAP:BW:InfoPackage
"JobSapBW": {
"Type": "Job:SAP:BW:InfoPackage",
"ConnectionProfile": "PI4-BW",
"CreatedBy": "emuser1",
"Description": "description of the job",
"RunAs": "ProductionUser",
"InfoPackage": {
"BackgroundJobName": "Background job name",
"Description": "description of the InfoPackage",
"TechName": "LGXT565_TGHBNS453BGHJ784"
}
}
This SAP job object uses the following parameters:
ConnectionProfile | Defines the name of the SAP connection profile to use for the job. 1-30 characters. Case sensitive. No blanks. |
CreatedBy | Defines the name of the user that creates the job. |
Description | (Optional) Describes the job. |
RunAs | (Optional) Defines a "Run as" user—an account that is used to log in to the host. |
InfoPackage | An object that groups together the parameters that describe the InfoPackage. |
BackgroundJobName | Defines the InfoPackage background job name. 1-25 characters. |
Description | (Optional) Describes the InfoPackage. |
TechName | Defines a unique SAP BW generated InfoPackage ID. |
Job:PeopleSoft
PeopleSoft-type jobs enable you to manage PeopleSoft jobs and processes through the Control-M environment. To manage PeopleSoft-type jobs, you must have the Control-M for PeopleSoft plug-in installed.
The following example shows the JSON code used to define a PeopleSoft job.
"PeopleSoft_job": {
"Type": "Job:PeopleSoft",
"ConnectionProfile": "PS_CONNECT",
"User": "PS_User3",
"ControlId": "ControlId",
"ServerName": "ServerName",
"ProcessType": "ProcessType",
"ProcessName": "ProcessName",
"AppendToOutput": false,
"BindVariables": ["value1","value2"],
"RunAs": "controlm"
}
This PeopleSoft job object uses the following parameters:
ConnectionProfile | Name of the PeopleSoft connection profile to use for the connection |
User | A PeopleSoft user ID that exists in the PeopleSoft Environment |
ControlId | Run Control ID for access to run controls at runtime |
ServerName | The name of the server on which to run the PeopleSoft job or process |
ProcessType | A PeopleSoft process type that the user is authorized to perform |
ProcessName | The name of the PeopleSoft process to run |
AppendToOutput | Whether to include PeopleSoft job output in the Control-M job output, either true or false. The default is false. |
BindVariables | Values of up to 20 USERDEF variables for sharing data between Control-M and the PeopleSoft job or process |
Job:Airflow
The Airflow job enables you to monitor and manage DAG workflows. To manage Airflow-type jobs, you must have the Control-M for Airflow plug-in installed in your Control-M environment.
The following example shows the JSON code used to define an Airflow job.
"AirflowJob": {
"Type": "Job:Airflow",
"Host": "AgentHost",
"ConnectionProfile": "AIRFLOW_CONNECTION_PROFILE",
"DagId": "example_bash_operator",
"ConfigurationJson": "\{\"key1\":1, \"key2\":2, \"key3\":\"value3\"\}",
"OutputDetails": "FAILED_TASKS"
}
The Airflow job object uses the following parameters:
ConnectionProfile | Name of the Airflow connection profile to use for the connection |
DagId | Defines the unique identifier of a DAG. |
ConfigurationJson | (Optional) Defines a JSON object that describes additional configuration parameters (key:value pairs). |
OutputDetails | Determines whether to include Airflow DAG task logs in the Control-M job output, as follows:
Default: FAILED_TASKS |
Job:ApplicationIntegrator
Use Job:ApplicationIntegrator:<JobType> to define a job of a custom type using the Control-M Application Integrator designer. For information about designing job types, see Job Type Definition in the section about Application Integrator in the Helix Control-M Online Help.
The following example shows the JSON code used to define a job type named AI Monitor Remote Job:
"JobFromAI" : {
"Type": "Job:ApplicationIntegrator:AI Monitor Remote Job",
"ConnectionProfile": "AI_CONNECTION_PROFILE",
"AI-Host": "Host1",
"AI-Port": "5180",
"AI-User Name": "admin",
"AI-Password": "*******",
"AI-Remote Job to Monitor": "remoteJob5",
"RunAs": "controlm"
}
In this example, the ConnectionProfile and RunAs properties are standard job properties used in Control-M Automation API jobs. The other job properties are defined in the Control-M Application Integrator and must be prefixed with "AI-" in the .json code.
These properties correspond to the job type settings defined in the Control-M Application Integrator designer.
Job:Informatica
Informatica-type jobs enable you to automate Informatica workflows through the Control-M environment. To manage Informatica-type jobs, you must have the Informatica plug-in installed.
The following example shows the JSON code used to define an Informatica job.
"InformaticaApiJob": {
"Type": "Job:Informatica",
"ConnectionProfile": "INFORMATICA_CONNECTION",
"RepositoryFolder": "POC",
"Workflow": "WF_Test",
"InstanceName": "MyInstamce",
"OsProfile": "MyOSProfile",
"WorkflowExecutionMode": "RunSingleTask",
"RunSingleTask": "s_MapTest_Success",
"WorkflowRestartMode": "ForceRestartFromSpecificTask",
"RestartFromTask": "s_MapTest_Success",
"WorkflowParametersFile": "/opt/wf1.prop",
}
This Informatica job object uses the following parameters:
ConnectionProfile | Name of the Informatica connection profile to use for the connection |
RepositoryFolder | The Repository folder that contains the workflow that you want to run |
Workflow | The workflow that you want to run in Control-M for Informatica |
InstanceName | (Optional) The specific instance of the workflow that you want to run |
OsProfile | (Optional) The operating system profile in Informatica |
WorkflowExecutionMode | The mode for executing the workflow, one of the following:
|
StartFromTask | The task from which to start running the workflow. This parameter is required only if you set WorkflowExecutionMode to StartFromTask. |
RunSingleTask | The workflow task that you want to run. This parameter is required only if you set WorkflowExecutionMode to RunSingleTask. |
Depth | The number of levels within the workflow task hierarchy for the selection of workflow tasks Default: 10 levels |
EnableOutput | Whether to include the workflow events log in the job output (either true or false) Default: true |
EnableErrorDetails | Whether to include a detailed error log for a workflow that failed (either true or false) Default: true |
WorkflowRestartMode | The operation to execute when the workflow is in a suspended status, one of the following:
|
RestartFromTask | The task from which to restart a suspended workflow. This parameter is required only if you set WorkflowRestartMode to ForceRestartFromSpecificTask. |
WorkflowParametersFile | (Optional) The path and name of the workflow parameters file. This enables you to use the same workflow for different actions. |
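For comparison, the following sketch defines a workflow using the StartFromTask execution mode described above. It is illustrative only; the folder, workflow, and task names are placeholders.
"InformaticaStartFromTaskJob": {
"Type": "Job:Informatica",
"ConnectionProfile": "INFORMATICA_CONNECTION",
"RepositoryFolder": "POC",
"Workflow": "WF_Test",
"WorkflowExecutionMode": "StartFromTask",
"StartFromTask": "s_MapTest_Success"
}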
Job:Informatica CS
Informatica Cloud Services (CS) jobs enable you to automate your Informatica workflows for multi-cloud and on-premises data integration through the Control-M environment.
To deploy and run an Informatica CS job, ensure that you have installed the Informatica CS plug-in using the provision image command or the provision agent::update command.
The following example shows the JSON code used to define an Informatica CS job:
"InformaticaCloudCSJob": {
"Type": "Job:Informatica CS",
"ConnectionProfile": "INFORMATICA_CS_CONNECTION",
"Task Type": "Synchronization task",
"Use Federation ID": "checked",
"Task Name": "",
"Folder Path": "Default/defualt-MappingTask1",
"Call Back URL": "",
"Status Polling Frequency": "10"
}
The following example shows the JSON code used to define an Informatica CS job for a taskflow:
"InformaticaCloudCSJob": {
"Type": "Job:Informatica CS",
"ConnectionProfile": "INFORMATICA_CS_CONNECTION",
"Task Type": "Taskflow",
"TaskFlow URL": "https://xxx.dm-xx.informaticacloud.com/active-bpel/rt/xyz",
"Input Fields": "input1=val1&input2=val2&input3=val3",
"Call Back URL": "",
"Rerun suspended Taskflow": "checked",
"Rerun Run ID": "RUN-UCM-RUNID",
"Status Polling Frequency": "10"
}
The Informatica CS job object uses the following parameters:
ConnectionProfile | Defines the name of the Informatica CS connection profile to use for the connection to Informatica Cloud |
Task Type | Determines one of the following task types to run on Informatica Cloud:
|
Use Federation ID | Determines whether to identify the task using a Federated Task ID, which is a unique identifier that is used to track and manage tasks across distributed environments in a federated environment. This ID is generated by the Informatica domain and is important for monitoring and troubleshooting tasks. This parameter is not required when you run a taskflow. Valid values: checked | unchecked Default: unchecked |
Task Name | Defines the name of the task that executes on Informatica Cloud. This parameter is not required when you run a taskflow or use a Federated Task ID. |
Folder Path | Defines the folder path of the task that executes on Informatica Cloud. This parameter is required if you are using a Federated Task ID. |
TaskFlow URL | Defines the service URL of the taskflow that executes on Informatica Cloud. You can find this URL by clicking in the top-right corner of the TaskFlow main page of Informatica Data Integrator and clicking Properties Detail.... |
Input Fields | Defines input fields for a taskflow, expressed as input=value pairs separated by the & character |
Call Back URL | (Optional) Defines a publicly available URL where the job status is posted. |
Rerun suspended Taskflow | Determines whether to rerun a suspended taskflow. Valid values: checked | unchecked Default: unchecked |
Rerun Run ID | Defines the Run ID to rerun a suspended taskflow. The Run ID is unique to each job run and is available in the job output, next to the variable name RUN-UCM-RUNID. |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the Informatica Cloud Services job. |
Job:AWS
AWS-type jobs enable you to automate a select list of AWS services through Control-M Automation API. To manage AWS-type jobs, you must have the AWS plug-in installed.
The following JSON objects are available for creating AWS-type jobs:
- Job:AWS:Lambda
- Job:AWS:StepFunction
- Job:AWS:Batch
For the following additional job types, you must have installed the relevant plug-ins using the provision image command or the provision agent::update command.
- Job:AWS Batch
- Job:AWS Glue
- Job:AWS Glue DataBrew
- Job:AWS EMR
- Job:AWSEC2
- Job:AWS ECS
- Job:AWS Step Functions
- Job:AWS QuickSight
- Job:AWS Sagemaker
- Job:AWS Athena
- Job:AWS Mainframe Modernization
- Job:AWS CloudFormation
Job:AWS:Lambda
The following example shows how to define a job that executes an AWS Lambda service on an AWS server.
"AwsLambdaJob": {
"Type": "Job:AWS:Lambda",
"ConnectionProfile": "AWS_CONNECTION",
"FunctionName": "LambdaFunction",
"Version": "1",
"Payload" : "{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}"
"AppendLog": true
}
This AWS job object uses the following parameters:
FunctionName | The Lambda function to execute |
Version | (Optional) The Lambda function version |
Payload | (Optional) The Lambda function payload, in JSON format Escape all special characters. |
AppendLog | Whether to add the log to the job’s output, either true (the default) or false |
Job:AWS:StepFunction
The following example shows how to define a job that executes an AWS Step Function service on an AWS server.
"AwsStepFunctionJob": {
"Type": "Job:AWS:StepFunction",
"ConnectionProfile": "AWS_CONNECTION",
"StateMachine": "StateMachine1",
"ExecutionName": "Execution1",
"Input": ""{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}" ",
"AppendLog": true
}
This AWS job object uses the following parameters:
StateMachine | The State Machine to use |
ExecutionName | A name for the execution |
Input | The Step Function input in JSON format Escape all special characters. |
AppendLog | Whether to add the log to the job’s output, either true (the default) or false |
Job:AWS:Batch
The following example shows how to define a job that executes an AWS Batch service on an AWS server.
"AwsBatchJob": {
"Type": "Job:AWS:Batch",
"ConnectionProfile": "AWS_CONNECTION",
"JobName": "batchjob1",
"JobDefinition": "jobDef1",
"JobDefinitionRevision": "3",
"JobQueue": "queue1",
"AWSJobType": "Array",
"ArraySize": "100",
"DependsOn": {
"DependencyType": "Standard",
"JobDependsOn": "job5"
},
"Command": [ "ffmpeg", "-i" ],
"Memory": "10",
"vCPUs": "2",
"JobAttempts": "5",
"ExecutionTimeout": "60",
"AppendLog": false
}
This AWS job object uses the following parameters:
JobName | The name of the batch job |
JobDefinition | The job definition to use |
JobDefinitionRevision | The job definition revision |
JobQueue | The queue to which the job is submitted |
AWSJobType | The type of job, either Array or Single |
ArraySize | (For a job of type Array) The size of the array (that is, the number of items in the array) Valid values: 2–10000 |
DependsOn | Parameters that determine a job dependency |
DependencyType | (For a job of type Array) Type of dependency, one of the following values:
|
JobDependsOn | The JobID upon which the Batch job depends. This parameter is mandatory for a Standard or N-to-N dependency, and optional for a Sequential dependency. |
Command | A command to send to the container that overrides the default command from the Docker image or the job definition |
Memory | The number of megabytes of memory reserved for the job Minimum value: 4 megabytes |
vCPUs | The number of vCPUs to reserve for the container |
JobAttempts | The number of retry attempts Valid values: 1–10 |
ExecutionTimeout | The timeout duration in seconds |
AppendLog | Whether to add the log to the job’s output, either true (the default) or false |
Job:AWS Batch
The following examples show how to define an AWS Batch job, which enables you to manage and run batch computing workloads in AWS.
To deploy and run an AWS Batch job, ensure that you have installed the AWS Batch plug-in using the provision image command or the provision agent::update command.
The following example defines a Batch job with basic parameters:
"AWS_Batch_Job_basic": {
"Type": "Job:AWS Batch",
"ConnectionProfile": "AWS_BATCH",
"Use Advanced JSON Format": "unchecked",
"Job Name": "job1",
"Job Definition and Revision": "ctm-batch-job-definition:1",
"Job Queue": "ctm-batch-job-queue",
"Container Overrides Command": "[\"echo\", \"hello from control-m\"]",
"Job Attempts": "2",
"Execution Timeout": "65",
"Status Polling Frequency": "20"
}
In the following example, various job parameters are provided through a submitted JSON body:
"AWS_Batch_Job_advanced": {
"Type": "Job:AWS Batch",
"ConnectionProfile": "AWS_BATCH",
"Use Advanced JSON Format": "checked",
"JSON Format": "{\"containerOverrides\":{\"command\":[\"echo\",\"Hello, from Control-M\"],\"resourceRequirements\":[{\"type\":\"VCPU\",\"value\":\"2\"}]},\"jobDefinition\": \"ctm-batch-jobdefinition:1\",\"jobName\": \"job1\",\"jobQueue\": \"ctm-batch-job-queue\",\"timeout\": {\"attemptDurationSeconds\": 70}}",
"Status Polling Frequency": "20"
}
The AWS Batch job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the AWS Batch service |
Use Advanced JSON Format | Determines whether further batch job parameters are defined through a submitted JSON body (represented by the JSON Format parameter below). Values: checked | unchecked Default: unchecked |
JSON Format | Defines the parameters for the batch job, in JSON format, that enable you to control how the job runs. For a description of the syntax of this JSON, see the description of SubmitJob in the AWS Batch API Reference. JSON Format is relevant only if you set Use Advanced JSON Format to checked. This JSON body replaces the use of all other parameters described below, except for Status Polling Frequency, and enables you to include additional parameters. |
Job Name | Defines the name of the batch job. |
Job Definition and Revision | Determines which predefined job definition and version number (revision) is applied to the job, as follows:
|
Job Queue | Determines the job queue, which stores your batch job. |
Container Overrides Command | (Optional) Defines a command, in JSON format, that overrides the specified command in the job definition. |
Job Attempts | (Optional) Determines the number of times to retry a job run, which overrides the number of retry attempts determined in the job definition. Valid Values: 1–10 |
Execution Timeout | (Optional) Determines the number of seconds to wait before a timeout occurs in a batch job, which overrides the timeout defined in the job definition. |
Status Polling Frequency | (Optional) Determines the number of seconds to wait before checking the job status. Default: 20 |
Job:AWS Glue
The following example shows how to define a job that executes Amazon Web Services (AWS) Glue, a serverless data integration service.
To deploy and run an AWS Glue job, ensure that you have installed the AWS Glue plug-in using the provision image command or the provision agent::update command.
"AwsGlueJob": {
"Type": "Job:AWS Glue",
"ConnectionProfile": "GLUECONNECTION",
"Glue Job Name": "AwsGlueJobName",
"Glue Job Arguments": "checked",
"Arguments": "{\"--myArg1\": \"myVal1\", \"--myArg2\": \"myVal2\"}",
"Status Polling Frequency": "20",
"Failure Tolerance": "2"
}
The AWS Glue job object uses the following parameters:
ConnectionProfile | Name of a connection profile to use to connect to the AWS Glue service |
Glue Job Name | The name of the AWS Glue job that you want to execute. |
Glue Job Arguments | Whether to enable specification of arguments to be passed when running the AWS Glue job (see next property). Values are checked or unchecked. The default is unchecked. |
Arguments | (Optional) Specific arguments to pass when running the AWS Glue job Format: {\"--myArg1\": \"myVal1\", \"--myArg2\": \"myVal2\"} For more information about the available arguments, see Special Parameters Used by AWS Glue in the AWS documentation. |
Status Polling Frequency | (Optional) Number of seconds to wait before checking the status of the job. Default: 30 |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:AWS Glue DataBrew
The following example shows how to define an AWS Glue DataBrew job that you can use to visualize your data and publish it to the Amazon S3 Data Lake.
To deploy and run an AWS Glue DataBrew job, ensure that you have installed the AWS Glue DataBrew plug-in using the provision image command or the provision agent::update command.
"AWS Glue DataBrew_Job": {
"Type": "Job:AWS Glue DataBrew",
"ConnectionProfile": "AWSDATABREW",
"Job Name": "databrew-job",
"Output Job Logs": "checked",
"Status Polling Frequency": "10",
"Failure Tolerance": "2"
}
The AWS Glue DataBrew job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to AWS Glue DataBrew. |
Job Name | Defines the AWS Glue DataBrew job name. |
Output Job Logs | Determines whether the DataBrew job logs are included in the Control-M output. Values: checked | unchecked Default: unchecked |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the DataBrew job. Default: 10 seconds |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:AWS Data Pipeline
The following examples show how to define an AWS Data Pipeline job that you can use to automate the transfer, processing, and storage of your data.
To deploy and run an AWS Data Pipeline job, ensure that you have installed the AWS Data Pipeline plug-in using the provision image command or the provision agent::update command.
The following example shows a job for the creation of a pipeline:
"AWS Data Pipeline_Job": {
"Type": "Job:AWS Data Pipeline",
"ConnectionProfile": "AWSDATAPIPELINE",
"Action": "Create Pipeline",
"Pipeline Name": "demo-pipeline",
"Pipeline Unique Id": "235136145",
"Parameters": {
"parameterObjects": [
{
"attributes": [
{
"key": "description",
"stringValue": "S3outputfolder"
}
],
"id": "myS3OutputLoc"
}
],
"parameterValues": [
{
"id": "myShellCmd",
"stringValue": "grep -rc \"GET\" ${INPUT1_STAGING_DIR}/* > ${OUTPUT1_STAGING_DIR}/output.txt"
}
],
"pipelineObjects": [
{
"fields": [
{
"key":"input",
"refValue":"S3InputLocation"
},
{
"key":"stage",
"stringValue":"true"
}
],
"id": "ShellCommandActivityObj",
"name": "ShellCommandActivityObj"
}
]
},
"Trigger Created Pipeline": "checked",
"Status Polling Frequency": "20",
"Failure Tolerance": "3"
}
The following example shows a job for triggering an existing pipeline:
"AWS Data Pipeline_Job": {
"Type": "Job:AWS Data Pipeline",
"ConnectionProfile": "AWSDATAPIPELINE",
"Action": "Trigger Pipeline",
"Pipeline ID": "df-020488024DNBVFN1S2U",
"Trigger Created Pipeline": "unchecked",
"Status Polling Frequency": "20",
"Failure Tolerance": "3"
}
The AWS Data Pipeline job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to AWS Data Pipeline. |
Action | Determines one of the following AWS Data Pipeline actions:
|
Pipeline Name | For a creation action: Defines the name of the new AWS Data Pipeline. |
Pipeline Unique ID | For a creation action: Defines the unique ID (idempotency key) that guarantees the pipeline is created only once. After successful execution, this ID cannot be used again. Valid characters: Any alphanumeric characters |
Parameters | For a creation action: Defines the parameter objects, which define the variables, for your AWS Data Pipeline in JSON format. For more information about the available parameter objects, see the descriptions of the PutPipelineDefinition and GetPipelineDefinition actions in the AWS Data Pipeline API Reference. |
Trigger Created Pipeline | Determines whether to run, or trigger, the newly created AWS Data Pipeline. Valid values: checked|unchecked This parameter is relevant only for a creation action. For a trigger action, set it to unchecked. |
Pipeline ID | For a trigger action: Determines which pipeline to run, or trigger. |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the Data Pipeline job. Default: 20 seconds |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:AWS EMR
The following example shows how to define a job that executes Amazon Web Services (AWS) EMR to run big data frameworks.
To deploy and run an AWS EMR job, ensure that you have installed the AWS EMR plug-in using the provision image command or the provision agent::update command.
"AWS EMR_Job_2": {
"Type": "Job:AWS EMR",
"ConnectionProfile": "AWS_EMR",
"Cluster ID": "j-21PO60WBW77GX",
"Notebook ID": "e-DJJ0HFJKU71I9DWX8GJAOH734",
"Relative Path": "ShowWaitingAndRunningClusters.ipynb",
"Notebook Execution Name": "TestExec",
"Service Role": "EMR_Notebooks_DefaultRole",
"Use Advanced JSON Format": "unchecked",
}
The AWS EMR job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the AWS EMR service. |
Cluster ID | Defines the name of the AWS EMR cluster to connect to the Notebook. Also known as the Execution Engine ID (in the EMR API). |
Notebook ID | Determines which Notebook ID executes the script. Also known as the Editor ID (in the EMR API). |
Relative Path | Defines the full path and name of the script file in the Notebook. |
Notebook Execution Name | Defines the job execution name. |
Service Role | Defines the service role to connect to the Notebook. |
Use Advanced JSON Format | Enables you to provide Notebook execution information through JSON code. Values: checked or unchecked (the default) When you set this parameter to checked, the JSON Body parameter (see below) replaces several other parameters discussed above (Cluster ID, Notebook ID, Relative Path, Notebook Execution Name, and Service Role). |
JSON Body | Defines Notebook execution settings in JSON format. For a description of the syntax of this JSON, see the description of StartNotebookExecution in the Amazon EMR API Reference. JSON Body is relevant only if you set Use Advanced JSON Format to checked. Example:
|
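As a rough illustration of the JSON Body parameter, the following sketch passes the same Notebook execution settings as the example above in JSON format. The field names are assumed to follow the StartNotebookExecution request syntax in the Amazon EMR API Reference; verify them against the AWS documentation before use.
"AWS EMR_Job_advanced": {
"Type": "Job:AWS EMR",
"ConnectionProfile": "AWS_EMR",
"Use Advanced JSON Format": "checked",
"JSON Body": "{\"NotebookExecutionName\":\"TestExec\",\"EditorId\":\"e-DJJ0HFJKU71I9DWX8GJAOH734\",\"RelativePath\":\"ShowWaitingAndRunningClusters.ipynb\",\"ExecutionEngine\":{\"Id\":\"j-21PO60WBW77GX\",\"Type\":\"EMR\"},\"ServiceRole\":\"EMR_Notebooks_DefaultRole\"}"
}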
Job:AWSEC2
The following example shows how to define a job that performs operations on an AWS EC2 Virtual Machine (VM).
To deploy and run an AWS EC2 job, ensure that you have installed the AWS EC2 plug-in using the provision image command or the provision agent::update command.
"AWSEC2_create": {
"Type": "Job:AWSEC2",
"ConnectionProfile": "AWSEC2",
"Operations": "Create",
"Placement Availability Zone": "us-west-2c",
"Instance Type": "m1.small",
"Subnet ID": "subnet-00aa899a7db25494d",
"Key Name": "ctm-aws-ec2-key-pair",
"Get Instances logs": "unchecked"
}
The AWS EC2 job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the AWS EC2 Virtual Machine. |
Operations | Determines one of the following operations to perform on the AWS EC2 Virtual Machine:
|
Launch Template ID | Defines the template to use to create a VM from a template. |
Instance ID | Defines the name of the VM instance where you want to run the operation. This parameter is available for all operations except for the Create operations. |
Instance Name | Defines the name of a new VM instance for Create operations. |
Placement Availability Zone | Determines which AWS EC2 zone to use for a Create operation. |
Instance Type | Determines the software requirements of the host computer when you create a new AWS EC2 Virtual Machine. |
Subnet ID | Defines the Subnet ID that is required to launch the instance in a Create operation. |
Key Name | Defines the security credential key set for a Create operation. |
Image ID | Defines the ID of the Amazon Machine Image (AMI) that is required to launch the instance in a Create operation. |
Number of copies | Number of copies of the VM to create in a Create operation. Default: 1 |
Get Instance logs | Determines whether to display logs from the AWS EC2 instance at the end of the job output. This parameter is available for all operations except for the Terminate operation. Values: checked|unchecked Default: unchecked |
Verification Poll Interval | Determines the number of seconds to wait before job status verification. Default: 15 seconds |
Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 times |
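For reference, the following sketch terminates an existing instance using the parameters described above. The instance ID is a placeholder, and the Operations value is assumed to match the Terminate operation mentioned in the table.
"AWSEC2_terminate": {
"Type": "Job:AWSEC2",
"ConnectionProfile": "AWSEC2",
"Operations": "Terminate",
"Instance ID": "i-0abcd1234efgh5678",
"Verification Poll Interval": "15",
"Tolerance": "2"
}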
Job:AWS ECS
The following example shows how to define an AWS ECS job. AWS Elastic Container Service (ECS) is a container management service that enables you to run, stop, manage, and monitor containerized applications in a cluster.
To deploy and run an AWS ECS job, ensure that you have installed the AWS ECS plug-in using the provision image command or the provision agent::update command.
The following example defines an AWS ECS job using parameters that are preset in Control-M:
"AWS ECS_Job_1": {
"Type": "Job:AWS ECS",
"ConnectionProfile": "ECS",
"Action": "Preset Json",
"ECS Cluster Name": "ECSIntegrationCluster",
"ECS Task Definition": "ECSIntegrationTask",
"Launch Type": "FARGATE",
"Assign Public IP": "True",
"Network Security Groups": "\"sg-01e4a5bfac4189d10\"",
"Network Subnets": "\"subnet-045ddaf41d4852fd7\", \"subnet-0b574cca721d462dc\", \"subnet-0e108b6ba4fc0c4d7\"",
"Override Container": "IntegrationURI",
"Override Command": "\"/bin/sh -c 'whoami'\"",
"Environment Variables": "{\"name\": \"var1\", \"value\": \"1\"}",
"Logs": "Get Logs",
"Status Polling Frequency":"10",
"Failure Tolerance":"5"
}
In the following example, various job parameters are provided through a submitted JSON body:
"AWS ECS_Job_2": {
"Type": "Job:AWS ECS",
"ConnectionProfile": "ECS",
"Action": "Manual Json",
"Parameters":{"cluster":"ECSIntegrationCluster","launchType":"FARGATE","networkConfiguration":{"awsvpcConfiguration":{"assignPublicIp":"ENABLED","securityGroups":["sg-01e4a5bfac4189d10"],"subnets":["subnet-045ddaf41d4852fd7","subnet-0b574cca721d462dc","subnet-0e108b6ba4fc0c4d7"]}},"overrides":{"containerOverrides":[{"command":["/bin/sh -c 'whoami'"],"environment":[{"name":"var1","value":"hello"}],"name":"IntegrationURI"}]},"taskDefinition":"ECSIntegrationTask"},
"Logs": "Get Logs",
"Status Polling Frequency":"10",
"Failure Tolerance":"5"
}
The AWS ECS job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to AWS ECS. |
Action | Determines one of the following actions to perform on AWS ECS:
|
Parameters | Defines parameters to submit as a JSON body, to control how the AWS ECS task runs. The JSON body replaces various other parameters described below, provided that you set the Action parameter to Manual Json. For a description of this JSON syntax, see RunTask in the AWS ECS API Reference. |
ECS Cluster Name | Defines the ECS cluster on the AWS ECS platform where the job runs. An ECS cluster is a logical group of tasks and services. |
ECS Task Definition | Defines the task definition on the AWS ECS platform. The task definition describes the container image, command, environment variables, and other parameters that run your application. |
Launch Type | Determines the type of infrastructure on which your tasks and services run:
|
Assign Public IP | Determines whether the job has a public internet protocol (IP). Values: True|False |
Network Security Groups | Defines which network security group your task is connected to, through the elastic network interface, which is a virtual network card that controls inbound and outbound traffic. |
Network Subnets | Defines the virtual subnet, which determines the IP addresses for the task. |
Override Container | Defines which override container to use that overrides the default container image, command, or other settings specified in the task definition. |
Override Command | Defines the command to run in the container that overrides any command specified in the task definition. |
Environment Variables | Defines the environment variables for the container, which are used to manage the container and pass information to the application that runs inside it. |
Logs | Determines whether the logs from the AWS ECS platform appear at the end of the Control-M job output.
|
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the job. Default: 10 |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 5 |
Job:AWS Step Functions
The following example shows how to define an AWS Step Functions job that you can use to create visual workflows that can integrate other AWS services.
To deploy and run an AWS Step Functions job, ensure that you have installed the AWS Step Functions plug-in using the provision image command or the provision agent::update command.
"AWS Step Functions_Job_2": {
"Type": "Job:AWS Step Functions",
"ConnectionProfile": "STEPFUNCTIONSCCP",
"Execution Name": "Step Functions Exec",
"State Machine ARN": "arn:aws:states:us-east-1:155535555553:stateMachine:MyStateMachine",
"Parameters": "{\\\"parameter1\\\":\\\"value1\\\"}",
"Show Execution Logs": "checked",
"Status Polling Frequency":"10",
"Failure Tolerance":"2"
}
The AWS Step Functions job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to AWS Step Functions. |
Execution Name | Defines the name of the Step Function execution. An execution runs a state machine, which is a workflow. |
State Machine ARN | Determines the Step Function state machine to use. A state machine is a workflow, and an Amazon Resource Name (ARN) is a standardized AWS resource address. |
Parameters | Defines the parameters for the Step Function job, in JSON format, which enables you to control how the job runs. |
Show Execution Logs | Determines whether job logs from AWS Step Functions are included in the Control-M output. Values: checked | unchecked Default: unchecked |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the job in AWS Step Functions. Default: 20 seconds |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:AWS QuickSight
The following example shows how to define an AWS QuickSight job that you can use to visualize, analyze, and share large workloads of data.
To deploy and run an AWS QuickSight job, ensure that you have installed the AWS QuickSight plug-in using the provision image command or the provision agent::update command.
"AWS QuickSight_Job_2": {
"Type": "Job:AWS QuickSight",
"ConnectionProfile": "QUICKSIGHT",
"AWS Dataset ID": "f351ce9e-1500-4291-b0e1-78b2d6f48861",
"Refresh Type": "Full Refresh",
"Status Polling Frequency": "30",
"Failure Tolerance": "2"
}
The AWS QuickSight job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to AWS QuickSight. |
AWS Dataset ID | Determines the ID of the AWS QuickSight job that is created in an AWS QuickSight workspace. |
Refresh Type | Determines which of the following refresh functions to perform:
|
Status Polling Frequency | (Optional) Determines the number of seconds to wait before checking the status of the QuickSight job. Default: 30 seconds |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:AWS Sagemaker
The following example shows how to define an AWS SageMaker job, which enables you to create and manage machine learning models.
To deploy and run an AWS SageMaker job, ensure that you have installed the AWS SageMaker plug-in using the provision image command or the provision agent::update command.
"AWS Sagemaker_Job": {
"Type": "Job:AWS Sagemaker",
"ConnectionProfile": "AWSSAGEMAKER",
"Pipeline Name": "SageMaker_Pipeline",
"Idempotency Token": "Token_Control-M_for_SageMaker%%ORDERID",
"Add Parameters": "checked",
"Parameters": "{"Name":"string1", "value":"string2"}",
"Retry Pipeline Execution": "checked",
"Pipeline Execution ARN": "arn:aws:sagemaker:us-east-1:122343283363:pipeline/test-123-p-ixxyfil39d9o/execution/4tl5r9q0ywpw",
"Status Polling Frequency": "30",
"Failure Tolerance": "2"
}
The AWS Sagemaker job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to AWS SageMaker. |
Pipeline Name | Determines the name of the preexisting AWS SageMaker pipeline used in this job. |
Idempotency Token | (Optional) Defines a unique ID (idempotency token) that guarantees that the job is executed only once. After successful execution, this ID cannot be used again. To allow a rerun of the job with a new token, replace the default value with a unique ID that has not been used before. Use the RUN_ID, which can be retrieved from the job output. Default: Token_Control-M_for_SageMaker%%ORDERID (with the default token, the job run cannot be executed again). |
Add Parameters | Determines whether to add or change default parameters in the execution of the pipeline. Values: checked | unchecked Default: unchecked |
Parameters | Defines the parameters to add or change (if you have set Add Parameters to checked), according to the AWS SageMaker convention, in JSON format. The list of parameters must begin with the name of the parameter type. |
Retry Pipeline Execution | Determines whether to retry the execution of a pipeline, which you might want to do if a previous execution fails or stops. Values: checked | unchecked Default: unchecked |
Pipeline Execution ARN | Defines the Amazon Resource Name (ARN) of the pipeline, which is required to retry the execution of the pipeline. An ARN is a standardized AWS resource address. |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the SageMaker job. Default: 30 seconds |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:AWS Athena
The following example shows how to define an AWS Athena job, which enables you to process, analyze, and store your data in the cloud.
To deploy and run an AWS Athena job, ensure that you have installed the AWS Athena plug-in using the provision image command or the provision agent::update command.
The following example shows a job that executes a SQL-based query:
"AWS Athena_Job_2": {
"Type": "Job:AWS Athena",
"ConnectionProfile": "AWSATHENA",
"Athena Client Request Token": "aws-athena-client-request-token-%%ORDERID-%%TIME",
"DB Catalog Name": "DB_Catalog_Athena",
"Database Name": "DB_Athena",
"Action": "Query",
"Query": "Select * from Athena_Table",
"Output Location": "s3://{BucketPath}",
"Workgroup": "Primary",
"Add Configurations": "checked",
"S3 ACL Option": "BUCKET_OWNER_FULL_CONTROL",
"Encryption Options": "SSE_KMS",
"KMS Key": "arn:aws:kms:us-west-2:123456789012:key/abcd1234-5678-9012-efgh-ijklmnopqrst",
"Bucket Owner": "Account_ID",
"Show JSON Output": "unchecked",
"Status Polling Frequency": "10",
"Tolerance": "2"
}
The AWS Athena job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to AWS Athena. |
Athena Client Request Token | Defines a unique ID (idempotency token), which guarantees that the job executes only once. Default: aws-athena-client-request-token-%%ORDERID-%%TIME |
DB Catalog Name | Defines the name of the group of databases (catalog) that the query references. |
Database Name | Defines the name of the database that the query references. |
Action | Determines which of the following queries executes:
|
Query | Defines the SQL-based query that executes. |
Prepared Query Name | Defines the name of the predefined query that is stored in the AWS Athena platform. |
Table Name | Defines the name of the table that is created, which is populated by the results of a query in AWS Athena. |
Unload File Type | Determines the file format that the query results are saved in, as follows:
|
Output Location | Defines the AWS S3 bucket path where the file is saved. Format: s3://<path> Note: AWS Athena automatically generates a filename that incorporates the Query Execution ID, which is a unique ID applied to each query that is executed. |
Workgroup | Defines the workgroup for this job. Workgroups can consist of users, teams, applications, or workloads, and they can set limits on the data that each query or group processes. |
Add Configurations | Determines whether to add additional job definitions. Valid Values: checked | unchecked Default: unchecked |
S3 ACL Option | Defines the Amazon S3 canned access control list (ACL), which is a predefined set of grantees and permissions assigned to your stored query results. BUCKET_OWNER_FULL_CONTROL is the only canned ACL that is currently supported in AWS Athena. This setting gives you and the bucket owner full control of the query results. |
Encryption Options | Determines one of the following ways to encrypt the query results:
|
KMS Key | (SSE_KMS and CSE_KMS Only) Defines the Amazon Resource Name (ARN) of the KMS key. An ARN is a standardized AWS resource address. |
Bucket Owner | Defines the AWS account ID of the Amazon S3 bucket owner. |
Show JSON Output | Determines whether to show the full JSON API response in the job output. Valid Values: checked | unchecked Default: unchecked |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the job. Default: 10 |
Tolerance | Determines the number of times to check the job status before ending Not OK. Default: 2 |
Job:AWS Mainframe Modernization
The following examples show how to define an AWS Mainframe Modernization job, which enables you to migrate, manage, and run mainframe applications in the AWS cloud.
To deploy and run an AWS Mainframe Modernization job, ensure that you have installed the AWS Mainframe Modernization plug-in using the provision image command or the provision agent::update command.
The following example shows a job that executes a batch job for an application:
"AWS Mainframe Modernization_batch_job": {
"Type": "Job:AWS Mainframe Modernization",
"ConnectionProfile": "AWS_MAINFRAME",
"Application Name": "Demo",
"Action": "Start Batch Job",
"JCL Name": "DEMO.JCL",
"Retrieve CloudWatch Logs": "checked",
"Status Polling Frequency": "15",
"Tolerance": "3"
}
The following example shows a job that deploys the newest version of an application:
"AWS Mainframe Modernization_deploy_app": {
"Type": "Job:AWS Mainframe Modernization",
"ConnectionProfile": "AWS_MAINFRAME",
"Application Name": "Demo",
"Action": "Application Management",
"Application Action": "Deploy Application",
"Client Token": "Token_Mainframe_%%ORDERID",
"Application Version": "1",
"Environment ID": "sdfywxdeg4wer238634",
"Status Polling Frequency": "15",
"Tolerance": "3"
}
Parameter | Action | Description |
---|---|---|
ConnectionProfile | N/A | Defines the name of a connection profile to use to connect to AWS Mainframe Modernization. |
Application Name | N/A | Defines the name of the predefined application, on the AWS Mainframe Modernization service, that executes. |
Action | N/A | Determines one of the following actions to perform:
|
JCL Name | Start Batch Job | Defines the JCL job stream filename to execute. A job stream is a sequence of JCL statements and data that form a single unit of work for an operating system. |
Retrieve CloudWatch Logs | Start Batch Job | Determines whether to append the CloudWatch JCL job stream log to the output. |
Application Action | Application Management | Determines one of the following actions to perform on the defined application:
|
Latest Application Version | Update | Defines the current application version on the AWS Mainframe Modernization service. |
Definition S3 Location | Update | Defines the pathname of the Amazon S3 bucket that holds the application definition. Example: s3://Mainframe/definition.json |
Client Token | Deploy | Defines a unique ID (idempotency token) that guarantees that the job executes only once. Tokens expire one hour after the job is executed. Default: Token_Mainframe_%%ORDERID |
Application Version | Deploy | Determines which application version to use for succeeding jobs. |
Environment ID | Deploy | Defines the environment ID where the application is executed. |
Status Polling Frequency | All Actions | Determines the number of seconds to wait before Control-M checks the status of the job. Default: 15 |
Tolerance | All Actions | Determines the number of times to check the job status before the job ends Not OK. Default: 3 |
Job:AWS CloudFormation
The following example shows how to define an AWS CloudFormation job, which enables you to create, configure, test, and manage your AWS infrastructure (a collection of AWS services and resources).
To deploy and run an AWS CloudFormation job, ensure that you have installed the AWS CloudFormation plug-in using the provision image command or the provision agent::update command.
"AWS CloudFormation_Job": {
"Type": "Job:AWS CloudFormation",
"ConnectionProfile": "CLOUDFORMATION",
"Action": "Update Stack",
"Stack Name": "Demo",
"Stack Parameters": "Template URL",
"Template URL": "https://ayatest.s3.amazonaws.com/dynamodbDemo.yml",
"Template Body": "",
"Role ARN": "arn:aws:iam::12343567890:role/AWS-QuickSetup-StackSet-Local-AdministrationRole",
"Capabilities Type": "Capability Named IAM",
"Enable Termination Protection": "unchecked",
"On Failure": "Delete",
"Status Polling Frequency": "15",
"Failure Tolerance": "2",
}
Parameter | Description |
---|---|
ConnectionProfile | Defines the name of a connection profile to use to connect to AWS CloudFormation. |
Action | Determines one of the following CloudFormation actions to perform:
|
Stack Name | Defines a unique stack name. A stack is a collection of AWS resources, such as a web server or database. |
Stack Parameters | Determines one of the following templates to create or update:
A template defines the properties of your AWS infrastructure. |
Template URL | Defines the URL for a preexisting template. Rules:
Examples:
|
Template Body | Defines the template in JSON or YAML format. Example in YAML:
|
Role ARN | Defines the Amazon Resource Name (ARN) of the AWS IAM Role that CloudFormation runs as to create or update a stack. An ARN is a standardized AWS resource address. The AWS IAM role must be granted read and write privileges to create or update any of the AWS resources that are in the stack. |
Capabilities Type | Defines the capabilities of your template and stack.
Default: Capability IAM |
Enable Termination Protection | Determines whether to prevent deletion of this stack by other users. Valid Values: checked | unchecked Default: unchecked |
On Failure | Determines one of the following actions to take when the job ends Not OK:
Default: Do Nothing |
Status Polling Frequency | Determines the number of seconds to wait before Control-M checks the status of the job. Default: 15 |
Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 2 |
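The example above references a preexisting template through Template URL. The following sketch shows the alternative approach with an inline Template Body; the action name, the Stack Parameters value, and the one-resource template are assumptions for illustration only and should be verified against the options available in your plug-in version.
"AWS CloudFormation_Job_inline": {
"Type": "Job:AWS CloudFormation",
"ConnectionProfile": "CLOUDFORMATION",
"Action": "Create Stack",
"Stack Name": "DemoInline",
"Stack Parameters": "Template Body",
"Template Body": "{\"Resources\":{\"DemoTopic\":{\"Type\":\"AWS::SNS::Topic\"}}}",
"Capabilities Type": "Capability IAM",
"Enable Termination Protection": "unchecked",
"On Failure": "Do Nothing",
"Status Polling Frequency": "15",
"Failure Tolerance": "2"
}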
Job:Azure
The Azure job type enables you to automate workflows that include a select list of Azure services. To manage Azure-type jobs, you must have the Azure plug-in installed.
The following JSON objects are available for creating Azure-type jobs:
- Job:Azure:Function
- Job:Azure:LogicApps
- Job:Azure:BatchAccount
Additional job types are provided for the following Azure services. To support these job types, you must have installed the relevant plug-ins using the provision image command or the provision agent::update command.
- Job:ADF (Azure Data Factory)
- Job:Azure Databricks
- Job:AzureFunctions
- Job:Azure Batch Accounts
- Job:Azure Machine Learning
- Job:Azure Synapse
- Job:Azure HDInsight
- Job:Azure Backup
- Job:Azure Resource Manager
Job:Azure:Function
The following example shows how to define a job that executes an Azure function service.
"AzureFunctionJob": {
"Type": "Job:Azure:Function",
"ConnectionProfile": "AZURE_CONNECTION",
"AppendLog": false,
"Function": "AzureFunction",
"FunctionApp": "AzureFunctionApp",
"Parameters": [
{"firstParamName": "firstParamValue"},
{"secondParamName": "secondParamValue"}
]
}
This Azure job object uses the following parameters:
Function | The name of the Azure function to execute |
FunctionApp | The name of the Azure function app |
Parameters | (Optional) Function parameters defined as pairs of name and value. |
AppendLog | (Optional) Whether to add the log to the job’s output, either true (the default) or false |
Job:Azure:LogicApps
The following example shows how to define a job that executes an Azure Logic App service.
"AzureLogicAppJob": {
"Type": "Job:Azure:LogicApps",
"ConnectionProfile": "AZURE_CONNECTION",
"LogicAppName": "MyLogicApp",
"RequestBody": "{\\n \"name\": \"BMC\"\\n}",
"AppendLog": false
}
This Azure job object uses the following parameters:
LogicAppName | The name of the Azure Logic App |
RequestBody | (Optional) The JSON for the expected payload |
AppendLog | (Optional) Whether to add the log to the job’s output, either true (the default) or false |
Job:Azure:BatchAccount
The following example shows how to define a job that executes an Azure Batch Account service.
"AzureBatchJob": {
"Type": "Job:Azure:BatchAccount",
"ConnectionProfile": "AZURE_CONNECTION",
"JobId": "AzureJob1",
"CommandLine": "echo \"Hello\"",
"AppendLog": false,
"Wallclock": {
"Time": "770",
"Unit": "Minutes"
},
"MaxTries": {
"Count": "6",
"Option": "Custom"
},
"Retention": {
"Time": "1",
"Unit": "Hours"
}
}
This Azure job object uses the following parameters:
JobId | The ID of the batch job |
CommandLine | A command line that the batch job runs |
AppendLog | (Optional) Whether to add the log to the job’s output, either true (the default) or false |
Wallclock | (Optional) Maximum limit for the job's run time If you do not include this parameter, the default is unlimited run time. Use this parameter to set a custom time limit. Include the following next-level parameters:
|
MaxTries | (Optional) The number of times to retry running a failed task If you do not include this parameter, the default is none (no retries). Use this parameter to choose between the following options:
|
Retention | (Optional) File retention period for the batch job If you do not include this parameter, the default is an unlimited retention period. Use this parameter to set a custom time limit for retention. Include the following next-level parameters:
|
Job:ADF
The following example shows how to define a job that executes an Azure Data Factory (ADF) service, a cloud-based ETL and data integration service that allows you to create data-driven workflows to automate the movement and transformation of data.
To deploy and run an ADF job, ensure that you have installed the ADF plug-in using the provision image command or the provision agent::update command.
"AzureDataFactoryJob": {
"Type": "Job:ADF",
"ConnectionProfile": "DataFactoryConnection",
"Resource Group Name": "AzureResourceGroupName",
"Data Factory Name": "AzureDataFactoryName",
"Pipeline Name": "AzureDataFactoryPipelineName",
"Parameters": "{\"myVar\":\"value1\", \"myOtherVar\": \"value2\"}",
"Status Polling Frequency": "20",
"Failure Tolerance": "3"
}
The ADF job object uses the following parameters:
ConnectionProfile | Name of a connection profile to use to connect to Azure Data Factory |
Resource Group Name | The Azure Resource Group that is associated with a specific data factory pipeline. A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. |
Data Factory Name | The Azure Data Factory Resource to use to execute the pipeline |
Pipeline Name | The data pipeline to run when the job is executed |
Parameters | Specific parameters to pass when the Data Pipeline runs, defined as pairs of name and value Format: {\"var1\":\"value1\", \"var2\":\"value2\"} |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the Data Factory job. Default: 45 |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 3 |
Job:Azure Databricks
The following example shows how to define a job that executes the Azure Databricks service, a cloud-based data analytics platform that enables you to process large workloads of data.
To deploy and run an Azure Databricks job, ensure that you have installed the Azure Databricks plug-in using the provision image command or the provision agent::update command.
"Azure Databricks notebook": {
"Type": "Job:Azure Databricks",
"ConnectionProfile": "AZURE_DATABRICKS",
"Databricks Job ID: "65",
"Parameters": "\"notebook_params\":{\"param1\":\"val1\", \"param2\":\"val2\"}",
"Idempotency Token": "Control-M-Idem_%%ORDERID",
"Status Polling Frequency": "30"
}
The Azure Databricks job object uses the following parameters:
ConnectionProfile | Name of a connection profile to use to connect to the Azure Databricks workspace |
Databricks Job ID | The job ID created in your Databricks workspace |
Parameters | Task parameters to override when the job runs, according to the Databricks convention. The list of parameters must begin with the name of the parameter type. For example:
For more information about the parameter types, review the properties of RunParameters in the OpenAPI specification provided through the Azure Databricks documentation. For no parameters, specify a value of "params": {}. For example: |
Idempotency Token | (Optional) A token to use to rerun job runs that timed out in Databricks Values:
|
Status Polling Frequency | (Optional) Number of seconds to wait before checking the status of the job. Default: 30 |
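For a Databricks job that requires no parameters, the Parameters value can specify an empty params object, as noted in the table above. The following minimal sketch illustrates this; the job ID is a placeholder and the exact escaping should be verified.
"Azure Databricks no params": {
"Type": "Job:Azure Databricks",
"ConnectionProfile": "AZURE_DATABRICKS",
"Databricks Job ID": "65",
"Parameters": "\"params\": {}",
"Status Polling Frequency": "30"
}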
Job:AzureFunctions
The following example shows how to define a job that executes a cloud-based Azure Function for serverless application development.
To deploy and run an Azure Functions job, ensure that you have installed the Azure Functions plug-in using the provision image command or the provision agent::update command.
"AzureFunction": {
"Type": "Job:AzureFunctions",
"ConnectionProfile": "AZUREFUNCTIONS",
"Function App": "new-function",
"Function Name": "Hello",
"Optional Input Parameters": "\"{\"param1\":\"val1\", \"param2\":\"val2\"}\"",
"Function Type":"activity",
"Status Polling Frequency": "20",
"Failure Tolerance": "2"
}
The Azure Functions job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Azure Functions workspace |
Function App | Defines the name of the Azure function application that you want to run. |
Function Name | Defines the name of the function that you want to run. |
Optional Input Parameters | Defines the function parameters, in JSON format, that enable you to control the presentation of data. Format: {\"param1\":\"val1\", \"param2\":\"val2\"} For no parameters, specify {}. |
Function Type | Determines which of the following types of Azure functions to run:
|
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the Azure Functions job. Default: 20 |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:Azure Batch Accounts
The following example shows how to define a job that executes cloud-based Azure Batch Accounts for large-scale compute-intensive tasks.
To deploy and run an Azure Batch Accounts job, ensure that you have installed the Azure Batch Accounts plug-in using the provision image command or the provision agent::update command.
"Azure Batch Accounts_Job_2": {
"Type": "Job:Azure Batch Accounts",
"ConnectionProfile": "AZURE_BATCH",
"Batch Job ID": "abc-jobid",
"Task ID Prefix": "ctm",
"Task Command Line": "cmd /c echo hello from Control-M",
"Max Wall Clock Time": "Custom",
"Max Wall Time Digits": "3",
"Max Wall Time Unit": "Minutes",
"Max Task Retry Count": "Custom",
"Retry Number": "3",
"Retention Time": "Custom",
"Retention Time Digits": "4",
"Retention Time Unit": "Days",
"Append Log to Output": "checked",
"Status Polling Interval": "20"
}
The Azure Batch Accounts job object uses the following parameters:
ConnectionProfile | Determines which connection profile to use to connect to Azure Batch. |
Batch Job ID | Defines the name of the Batch Account Job created in Azure Portal. |
Task ID Prefix | Defines a prefix string to append to the task ID. |
Task Command Line | Defines the command line that runs your application or script on the compute node. The task is added to the job at runtime. |
Max Wall Clock Time | Defines a maximum time limit for the job run, with the following possible values:
Default: Unlimited |
Max Wall Time Digits | Defines the number (of the specified time unit) for a custom maximum time limit. Default: 1 |
Max Wall Time Unit | Defines one of the following time units for a custom maximum time limit:
Default: Minutes |
Max Task Retry Count | Defines a maximum number of times to retry running a failed task, with the following possible values:
Default: None |
Retry Number | Defines the number of retries for a custom task retry count. Default: 1 |
Retention Time | Defines a minimum period of time for retention of the Task directory of the batch job, with the following possible values:
Default: Unlimited |
Retention Time Digits | Defines the number (of the specified time unit) for a custom retention period. Default: 1 |
Retention Time Unit | Defines one of the following time units for a custom retention period:
Default: Hours |
Append Log to Output | Whether to add task stdout.txt content to the plugin job output. Values: checked|unchecked Default: checked |
Status Polling Interval | Number of seconds to wait before checking the status of the job. Default: 20 |
Job:Azure Logic Apps
The following example shows how to define a job that executes an Azure Logic Apps service, which enables you to design and automate cloud-based workflows and integrations.
To deploy and run an Azure Logic Apps job, ensure that you have installed the Azure Logic Apps plug-in using the provision image command or the provision agent::update command.
"Azure Logic Apps Job": {
"Type": "Job:Azure Logic Apps",
"ConnectionProfile": "AZURE_LOGIC_APPS",
"Workflow": "tb-logic",
"Parameters": "{\"bodyinfo\":\"hello from CM\",\"param2\":\"value2\"}",
"Get Logs": "unchecked",
"Status Polling Frequency": "20",
"Failure Tolerance": "2"
}
This Azure job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to Azure. |
Workflow | Determines which of the Consumption logic app workflows to run from your predefined set of workflows. Note: This job does not run Standard logic app workflows. |
Parameters | Defines parameters that enable you to control the presentation of data. Rules:
|
Get Logs | Determines whether to display the job output when the job ends. |
Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 20 |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:Azure Machine Learning
The following example shows how to define an Azure Machine Learning job, which enables you to create and manage machine learning models.
To deploy and run an Azure Machine Learning job, ensure that you have installed the Azure Machine Learning plug-in using the provision image command or the provision agent::update command.
The following example shows a job for a Compute Management action:
"Azure Machine Learning_Job": {
"Type": "Job:Azure Machine Learning",
"ConnectionProfile": "AZURE_ML",
"Workspace Name": "testML2",
"Resource Group Name": "My_Resource_Group",
"Action": "Compute Management",
"Compute Name": "Compute_Name",
"Compute Action": "Stop",
"Status polling interval": "20",
"Failure Tolerance": "2"
}
The following example shows a job for triggering an endpoint pipeline:
"Azure Machine Learning_Job_2": {
"Type": "Job:Azure Machine Learning",
"ConnectionProfile": "AZURE_ML",
"Workspace Name": "testML2",
"Resource Group Name": "My_Resource_Group",
"Action": "Trigger Endpoint Pipeline",
"Pipeline Endpoint Id": "353c4707-fd23-40f6-91e2-83bf7cba764c",
"Parameters": "{"ExperimentName":"test", "DisplayName":"test1123"}",
"Status polling interval": "20",
"Failure Tolerance": "2"
}
The Azure Machine Learning job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Azure Machine Learning workspace. |
Workspace Name | Determines the name of the Azure Machine Learning workspace for the job. |
Resource Group Name | Determines the Azure resource group that is associated with a specific Azure Machine Learning workspace. A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. |
Action | Determines one of the following Azure Machine Learning actions to perform:
|
Pipeline Endpoint Id | Determines the pipeline endpoint ID, which points to a published pipeline in Azure Machine Learning. |
Parameters | Defines additional parameters for the pipeline, in JSON format. For no parameters, specify {}. |
Compute Name | Defines the name of the compute function. |
Compute Action | Determines one of the following compute actions to perform:
|
Status Polling Interval | Determines the number of seconds to wait before checking the status of the job. Default: 15 seconds |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
Job:Azure Synapse
The following example shows how to define a job that performs data integration and analytics using the Azure Synapse Analytics service.
To deploy and run an Azure Synapse job, ensure that you have installed the Azure Synapse plug-in using the provision image command or the provision agent::update command.
"Azure Synapse_Job": {
"Type": "Job:Azure Synapse",
"ConnectionProfile": "AZURE_SYNAPSE",
"Pipeline Name": "ncu_synapse_pipeline",
"Parameters": "{\"periodinseconds\":\"40\", \"param2\":\"val2\"}",
"Status Polling Interval": "20"
}
The Azure Synapse job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Azure Synapse workspace. |
Pipeline Name | Defines the name of a pipeline that you defined in your Azure Synapse workspace. |
Parameters | Defines pipeline parameters to override when the job runs, defined in JSON format as pairs of name and value. Format: {\"param1\":\"val1\", \"param2\":\"val2\"} For no parameters, specify {}. |
Status Polling Interval | (Optional) Defines the number of seconds to wait before checking the status of the job. Default: 20 seconds |
Job:Azure HDInsight
The following example shows how to define a job that collaborates with Azure HDInsight to run an Apache Spark batch job for big data analytics.
To deploy and run an Azure HDInsight job, ensure that you have installed the Azure HDInsight plug-in using the provision image command or the provision agent::update command.
"Azure HDInsight_Job": {
"Type": "Job:Azure HDInsight",
"ConnectionProfile": "AZUREHDINSIGHT",
"Parameters": "{
"file" : "wasb://<BlobStorageContainerName>@<StorageAccountName>.blob.core.windows.net/sample.jar",
"args" : ["arg0", "arg1"],
"className" : "com.sample.Job1",
"driverMemory" : "1G",
"driverCores" : 2,
"executorMemory" : "1G",
"executorCores" : 10,
"numExecutors" : 10
}",
"Status Polling Interval": "10",
"Bring job logs to output": "checked"
}
The Azure HDInsight job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Azure HDInsight workspace. |
Parameters | Defines parameters to be passed on the Apache Spark Application during job execution, in JSON format (name:value pairs). This JSON must include the file and className elements. For more information about common parameters, see Batch Job in the Azure HDInsight documentation. |
Status Polling Interval | Defines the number of seconds to wait before verification of the Apache Spark batch job. Default: 10 seconds |
Bring job logs to output | Determines whether logs from Apache Spark are shown in the job output. Values: checked | unchecked Default: unchecked |
Job:Azure VM
The following example shows how to define a job that performs operations on an Azure Virtual Machine (VM).
To deploy and run an Azure VM job, ensure that you have installed the Azure VM plug-in using the provision image command or the provision agent::update command.
"Azure VM_update": {
"Type": "Job:Azure VM",
"ConnectionProfile": "AZUREVM",
"VM Name": "tb-vm1",
"Operation": "Create\\Update",
"Input Parameters": "{\"key\": \"val\"}",
"Get Logs": "checked",
"Verification Poll Interval": "10",
"Tolerance": "3"
}
The Azure VM job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Azure Virtual Machine. |
VM Name | Defines the name of the Azure Virtual Machine to run the operation. |
Operation | Determines one of the following operations to perform on the Azure Virtual Machine:
|
Input Parameters | Defines the input parameters in JSON format for a Create operation. Format: {\"param1\":\"val1\", \"param2\":\"val2\"} |
Get Logs | Determines whether to display logs from Azure VM at the end of the job output. This parameter is available for all operations except for the Delete operation. Values: checked|unchecked Default: unchecked |
Delete VM disk | Determines whether to delete the Azure Virtual Machine disk when you delete an Azure Virtual Machine. Values: checked|unchecked Default: unchecked |
Verification Poll Interval | Determines the number of seconds to wait before job status verification. Default: 15 seconds |
Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 times |
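The following additional example is a minimal sketch of a Delete operation that also removes the VM disk. The exact Operation value ("Delete") is assumed from the parameter descriptions above, and the VM name and connection profile are placeholders.
"Azure VM_delete": {
"Type": "Job:Azure VM",
"ConnectionProfile": "AZUREVM",
"VM Name": "tb-vm1",
"Operation": "Delete",
"Delete VM disk": "checked",
"Verification Poll Interval": "10",
"Tolerance": "3"
}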
Job:Azure Backup
The following examples show how to define an Azure Backup job, which enables you to back up and restore your data in the Microsoft Azure cloud.
To deploy and run an Azure Backup job, ensure that you have installed the Azure Backup plug-in using the provision image command or the provision agent::update command.
The following example shows a job for a Backup action:
"Azure Backup_Job": {
"Type": "Job:Azure Backup",
"ConnectionProfile": " ABK_CCP_SERVICE_PRINCIPAL",
"Action": "Backup",
"Vault Resource Group": "ncu-if",
"Vault Name": "Test",
"VM Resource Group": "ncu-if",
"VM Name": "ncu-if-squid-proxy",
"Policy Name": "DefaultPolicy",
"Include Or Exclude Disks": "Include",
"Disk List": "0,1,2",
"Status Polling Frequency": "300",
"Failure Tolerance": "2"
}
The following example shows a job for a Restore action:
"Azure Backup_restore": {
"Type": "Job:Azure Backup",
"ConnectionProfile": " ABK_CCP_SERVICE_PRINCIPAL ",
"Action": "Restore From Backup",
"Vault Resource Group": "ncu-if",
"Vault Name": "Test",
"VM Resource Group": "ncu-if",
"VM Name": "ncu-if-squid-proxy",
"Restore to Latest Recovery Point": "checked",
"Recovery Point Name": "123245428486171",
"Storage Account Name": "stasaccount",
"Restore Region": "UK South",
"Disk List": "0,1,2",
"Status Polling Frequency": "300",
"Failure Tolerance": "2"
}
The Azure Backup job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Azure Backup workspace. |
Action | Determines one of the following Azure Backup actions to perform:
|
Vault Resource Group | Defines the name of the resource group for the storage vault in the Backup Center, which is the management platform in Azure Backup. The name is not case-sensitive. |
Vault Name | Defines the name of the storage vault in the Backup Center. The name is not case-sensitive. |
VM Resource Group | Defines the name of the resource group where the virtual machine with your data is located. |
VM Name | Defines the name of the virtual machine with your data that you want to back up. |
Policy Name | Defines the Azure policy that is enforced on your virtual machine and backup job. Default: DefaultPolicy |
Include Or Exclude Disks | Determines one of the following actions to perform when you back up your data:
Default: Include |
Disk List | Defines the list of logical unit numbers (LUN) to include or exclude in your backup or to include in your restore. A LUN is an address that points to an area of storage on a logical or virtual disk. Valid Values: 0–9, separated by commas. |
Restore to Latest Recovery Point | Determines whether to restore a backup from the latest recovery point. |
Recovery Point Name | Defines the name of the recovery point, which is a copy of the original data from a specific time. This name is found in the Backup Job or Restore Point Collection areas in the Backup Center. |
Storage Account Name | Defines the name of the storage account that is associated with the recovery point. |
Restore Region | Determines the region of the virtual machine where the data is restored. |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the Azure Backup job. Default: 150 seconds |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 0 |
Job:Azure Resource Manager
The following example shows how to define an Azure Resource Manager job, which enables you to create, configure, test, and manage your Azure resources infrastructure.
To deploy and run an Azure Resource Manager job, ensure that you have installed the Azure Resource Manager plug-in using the provision image command or the provision agent::update command.
"Azure Resource Manager_Job_2": {
"Type": "Job:Azure Resource Manager",
"ConnectionProfile": "AZURE_RESOURCE_MANAGER",
"Action": "Create Deployment",
"Resource Group Name": "my_Resource_Group",
"Deployment Name": "Demo",
"Deployment Properties": "{
"properties": {
"templateLink": {
"uri": "https://123.blob.core.windows.net/test123/123.json?sp=r&st=2023-05-23T08:39:09Z&se=2023-06-10T16:39:09Z&sv=2022-11-02&sr=b&sig=RqrATxi4Sic2UwQKFu%2FlwaQS7fg5uPZyJCQiWX2D%2FCc%3D",
"queryString": "sp=r&st=2023-05-23T08:39:09Z&se=2023-06-10T16:39:09Z&sv=2022-11-02&sr=b&sig=RqrATxi4Sic2UwQKFu%2FlwaQS7fg5uPZyJCQiWX2D%1234"
},
"parameters": {},
"mode": "Incremental"
}
}",
"Failure Tolerance": "2",
"Status Polling Frequency": "15"
}
The Azure Resource Manager job object uses the following parameters:
Parameter | Description |
---|---|
ConnectionProfile | Defines the name of a connection profile to use to connect Control-M to Azure Resource Manager. |
Resource Group Name | Defines a unique resource group name. A resource group is a collection of Azure resources, such as a virtual machine or database, that share the same permissions. |
Action | Determines one of the following actions to perform:
|
Deployment Name | Defines the deployment name. |
Deployment Properties | Defines an API request, in JSON format, that enables you to add or update resources in a resource group. |
Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 2 |
Status Polling Frequency | Determines the number of seconds to wait before Control-M checks the status of the job. Default: 15 |
Job:Web Services REST
The following example shows how to define a Web Services REST job, which enables you to design and execute a single REST API call.
To deploy and run a Web Services REST job, ensure that you have installed the Web Services REST plug-in using the provision image command or the provision agent::update command.
"Web Services REST_output_parameters": {
"Type": "Job:Web Services REST",
"ConnectionProfile": "REST_OAUTH2",
"Method": "POST",
"Append Request": "checked",
"Append Response": "checked",
"Endpoint URL": "https://6943019930999707.net",
"URL Request Path": "/api/2.1/jobs/run-now",
"Variables": [
{
"UCM-BODY_REQ_TYPE": "text"
},
{
"UCM-BODY_BODY_REQUEST": "{%4E \"job_id\": 298,%4E \"params\":{}%4E}"
},
{
"UCM-HEADERS_KEY_001": "Authorization"
}
],
"OutputHandling": [
{
"HttpCode": "*",
"Parameter": "$.run_id",
"Variable": "file:/home/dbauser/temp.txt"
},
{
"HttpCode": "*",
"Parameter": "$.run_id",
"Variable": "LOCAL_CTM_VAR"
},
{
"HttpCode": "*",
"Parameter": "$.run_id",
"Variable": "\\GLOBAL_CTM_VAR"
}
]
}
Parameter | Description |
---|---|
ConnectionProfile | Determines which connection profile to connect Control-M to Web Services REST. |
Endpoint URL | Defines the endpoint base URL, which is the common resource prefix that the API uses to navigate. |
Method | Determines one of the following HTTP methods that is used to execute the REST job:
|
URL Request Path | Defines the URL request path. |
Request Definition | Determines one of the following requests to perform:
|
Body | Defines the request body, which is the data in the resource that the API creates or edits, in JSON format. For no body, type {}. |
URL Parameters | (Optional) Defines URL parameter key names and values. |
HTTP Headers | (Optional) Defines HTTP header key names and values. |
OutputHandling | (Optional) Defines the following output parameters:
Rules:
|
Connection Timeout | Determines the number of seconds to wait after Control-M initiates a connection request to Web Services before a timeout occurs. Default: 50 |
Append Request | Determines whether to append the API request to the output. Values: checked|unchecked Default: unchecked |
Append Response | Determines whether to append the API response to the output. Values: checked|unchecked Default: unchecked |
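For comparison, the following is a minimal sketch of a simple GET call that appends only the response to the output. The endpoint URL and request path are hypothetical placeholders, and GET is assumed to be among the supported HTTP methods; the remaining optional parameters fall back to their defaults.
"Web Services REST_get_status": {
"Type": "Job:Web Services REST",
"ConnectionProfile": "REST_OAUTH2",
"Method": "GET",
"Endpoint URL": "https://api.example.com",
"URL Request Path": "/status",
"Append Response": "checked"
}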
Job:Web Services SOAP
The following example shows how to define a Web Services SOAP job, which enables you to design and execute a single SOAP API call.
To deploy and run a Web Services SOAP job, ensure that you have installed the Web Services SOAP plug-in using the provision image command or the provision agent::update command.
"Web Services SOAP_Job_2": {
"Type": "Job:Web Services SOAP",
"ConnectionProfile": "SOAP_BASIC_AUTH",
"Endpoint URL": "http://vw-usr1:8091/ws/register",
"Append Request": "unchecked",
"Append Response": "unchecked",
"SOAP Action": "http://www.bmc.com/ctmem/#register",
"Variables": [
{
"UCM-SOAP_REQUEST_REQ_TYPE": "xml"
},
{
"UCM-SOAP_REQUEST_SOAP_REQUEST": "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:sch=\"http://www.bmc.com/ctmem\">%4E <soapenv:Header/>%4E <soapenv:Body>%4E <sch:request_register>%4E <!--Optional:-->%4E <sch:component>?</sch:component>%4E <sch:user_name>?</sch:user_name>%4E <sch:password>?</sch:password>%4E <sch:timeout>?</sch:timeout>%4E <!--Optional:-->%4E <sch:hostname>?</sch:hostname>%4E </sch:request_register>%4E </soapenv:Body>%4E</soapenv:Envelope>"
}
],
"OutputHandling": [
{
"HttpCode": "200",
"Parameter": "//sch:error_message",
"Variable": "CTM_VAR_RESULT"
},
{
"HttpCode": "200",
"Parameter": "//sch:error_message",
"Variable": "file:/home/dbauser/soap_result.txt"
}
]
}
Parameter | Description |
---|---|
ConnectionProfile | Determines which connection profile to connect Control-M to Web Services SOAP. |
Endpoint URL | Defines the endpoint base URL, which is the common resource prefix that the API uses to navigate. |
SOAP Action | Defines a single SOAP action (operation), which you must take from the WSDL file. Example: In the WSDL file, copy the SOAP action http://tempuri.org/SOAP.Demo.AddInteger from the body of the following operation:
|
Request Definition | Determines one of the following requests to perform:
|
SOAP Request | Defines the SOAP request. Example:
|
HTTP Headers | (Optional) Defines HTTP header key names and values. |
OutputHandling | (Optional) Defines the following output parameters:
Rules:
|
Connection Timeout | Determines the number of seconds to wait after Control-M initiates a connection request to Web Services before a timeout occurs. Default: 50 |
Append Request | Determines whether to append the API request to the output. Values: checked|unchecked Default: unchecked |
Append Response | Determines whether to append the API response to the output. Values: checked|unchecked Default: unchecked |
Job:SLAManagement
SLA Management jobs enable you to identify a chain of jobs that comprise a critical service and must complete by a certain time. The SLA Management job is always defined as the last job in the chain of jobs.
To manage SLA Management jobs, you must have the SLA Management add-on (previously known as Control-M Batch Impact Manager) installed in your Control-M environment.
The following example shows the JSON code of a simple chain of jobs that ends with an SLA Management job. In this chain of jobs:
- The first job is a Command job that prints Hello and then adds an event named Hello-TO-SLA_Job_for_SLA-GOOD.
- The second (and last) job is an SLA Management job for a critical service named SLA-GOOD. This job waits for the event added by the first job and then deletes it.
{
"SLARobotTestFolder_Good": {
"Type": "SimpleFolder",
"ControlmServer": "IN01",
"Hello": {
"Type": "Job:Command",
"CreatedBy": "emuser",
"RunAs": "controlm",
"Command": "echo \"Hello\"",
"eventsToAdd": {
"Type": "AddEvents",
"Events": [
{
"Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
}
]
}
},
"SLA": {
"Type": "Job:SLAManagement",
"ServiceName": "SLA-GOOD",
"ServicePriority": "1",
"CreatedBy": "emuser",
"RunAs": "DUMMYUSR",
"JobRunsDeviationsTolerance": "2",
"CompleteIn": {
"Time": "00:01"
},
"eventsToWaitFor": {
"Type": "WaitForEvents",
"Events": [
{
"Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
}
]
},
"eventsToDelete": {
"Type": "DeleteEvents",
"Events": [
{
"Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
}
]
}
}
}
}
The following table lists the parameters that can be included in an SLA Management job:
Parameter | Description |
---|---|
ServiceName | A logical name, from a user or business perspective, for the critical service. BMC recommends that the service name be unique. Names can contain up to 64 alphanumeric characters. |
ServicePriority | The priority level of this service, from a user or business perspective. Values range from 1 (highest priority) to 5 (lowest priority). Default: 3 |
CreatedBy | The Control‑M/EM user who defined the job. |
RunAs | The operating system user that will run the job. |
JobRunsDeviationsTolerance | Extent of tolerated deviation from the average completion time for a job in the service, expressed as a number of standard deviations based on percentile ranges. If the run time falls within the tolerance set, it is considered on time, otherwise it has run too long or ended too early. Select one of the following values:
Note: The JobRunsDeviationsTolerance parameter and the AverageRunTimeTolerance parameter are mutually exclusive. Specify only one of these two parameters. |
AverageRunTimeTolerance | Extent of tolerated deviation from the average completion time for a job in the service, expressed as a percentage of the average time or as the number of minutes that the job can be early or late. If the run time falls within the tolerance that you set, the job is considered on time; otherwise it has run too long or ended too early. You can set this parameter based on either a percentage of the average run time or a number of minutes (see the sketch after this table).
Note: The AverageRunTimeTolerance parameter and the JobRunsDeviationsTolerance parameter are mutually exclusive. Specify only one of these two parameters. |
CompleteBy | Defines by what time (in HH:MM) and within how many days the critical service must complete to be considered on time. In the sketch after this table, the critical service must complete by 11:51 PM, 3 days after it begins running.
The default number of days is 0 (that is, the same day). Note: The CompleteBy parameter and the CompleteIn parameter are mutually exclusive. Specify only one of these two parameters. |
CompleteIn | Defines the number of hours and minutes for the critical service to complete and be considered on time, as in the following example:
Note: The CompleteIn parameter and the CompleteBy parameter are mutually exclusive. Specify only one of these two parameters. |
ServiceActions | Defines automatic interventions (actions, such as rerunning a job or extending the service due time) in response to specific occurrences (If statements, such as a job finished too quickly or a service finished late). For more information, see Service Actions. |
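The following fragment is a minimal sketch of the deadline and tolerance parameters described above, shown in the context of an SLA Management job. The CompleteBy values reflect the 11:51 PM and 3-day example from the table. The property names inside AverageRunTimeTolerance (Units and AverageRunTime) and the Days property of CompleteBy are assumptions based on the values these parameters accept; verify them against your Control-M/EM version before use.
"SLA_Deadline_Example": {
"Type": "Job:SLAManagement",
"ServiceName": "SLA-DEADLINE-EXAMPLE",
"ServicePriority": "3",
"CreatedBy": "emuser",
"RunAs": "DUMMYUSR",
"AverageRunTimeTolerance": {
"Units": "Percentage",
"AverageRunTime": "10"
},
"CompleteBy": {
"Time": "23:51",
"Days": "3"
}
}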
Service Actions
The following example demonstrates a series of Service Actions that are triggered in response to specific occurrences (If statements). Note that this example includes only a select group of If statements and a select group of actions; for the full list, see the tables that follow.
"ServiceActions": {
"If:SLA:ServiceIsLate_0": {
"Type": "If:SLA:ServiceIsLate",
"Action:SLA:Notify_0": {
"Type": "Action:SLA:Notify",
"Severity": "Regular",
"Message": "this is a message"
},
"Action:SLA:Mail_1": {
"Type": "Action:SLA:Mail",
"Email": "email@okmail.com",
"Subject": "this is a subject",
"Message": "this is a message"
},
"If:SLA:JobFailureOnServicePath_1": {
"Type": "If:SLA:JobFailureOnServicePath",
"Action:SLA:Order_0": {
"Type": "Action:SLA:Order",
"Server": "IN01",
"Folder": "folder",
"Job": "job",
"Date": "OrderDate",
"Library": "library"
}
},
"If:SLA:ServiceEndedNotOK_5": {
"Type": "If:SLA:ServiceEndedNotOK",
"Action:SLA:Set_0": {
"Type": "Action:SLA:Set",
"Variable": "varname",
"Value": "varvalue"
},
"Action:SLA:Increase_2": {
"Type": "Action:SLA:Increase",
"Time": "04:03"
}
},
"If:SLA:ServiceLatePastDeadline_6": {
"Type": "If:SLA:ServiceLatePastDeadline",
"Action:SLA:Event:Add_0": {
"Type": "Action:SLA:Event:Add",
"Server": "IN01",
"Name": "addddd",
"Date": "AnyDate"
}
}
}
}
The following If statements can be used to define occurrences for which you want to take action:
If statement | Description |
---|---|
If:SLA:ServiceIsLate | The service will be late according to SLA Management calculations. |
If:SLA:JobFailureOnServicePath | One or more of the jobs in the service failed and caused a delay in the service. An SLA Management service is considered OK even if one of its jobs fails, provided that another job, with an Or relationship to the failed job, runs successfully. |
If:SLA:JobRanTooLong | One of the jobs in the critical service is late. Lateness is calculated according to the average run time and Job Runtime Tolerance settings. A service is considered on time even if one of its jobs is late, provided that the service itself is not late. |
If:SLA:JobFinishedTooQuickly | One of the jobs in the critical service is early. The end time is calculated according to the average run time and Job Runtime Tolerance settings. A service is considered on time even if one of its jobs is early. |
If:SLA:ServiceEndedOK | The service ended OK. |
If:SLA:ServiceEndedNotOK | The service ended late, after the deadline. |
If:SLA:ServiceLatePastDeadline | The service is late, and passed its deadline. |
For each If statement, you define one or more actions to be triggered. The following table lists the available Service Actions:
Action | Description | Sub-parameters |
---|---|---|
Action:SLA:Notify | Send notification to the Alerts Window |
|
Action:SLA:Mail | Send an email to a specific email recipient. |
|
Action:SLA:Order | Run a job, regardless of its scheduling criteria. |
|
Action:SLA:SetToOK | Set the job's completion status to OK, regardless of its actual completion status. |
|
Action:SLA:SetToOK:ProblematicJob | Set the completion status to OK for a job that is not running on time and will impact the service. | No parameters |
Action:SLA:Rerun | Rerun the job, regardless of its scheduling criteria |
|
Action:SLA:Rerun:ProblematicJob | Rerun a job that is not running on time and will impact the service. | No parameters |
Action:SLA:Kill | Kill a job while it is still executing. |
|
Action:SLA:Kill:ProblematicJob | Kill a problematic job (a job that is not running on time in the service) while it is still executing. | No parameters |
Action:SLA:Set | Assign a value to a variable for use in a rerun of the job. |
|
Action:SLA:SIM | Send early warning notification regarding the critical service to BMC Service Impact Manager. |
|
Action:SLA:Increase | Allow the job or critical service to continue running by extending (by hours and/or minutes) the deadline until which the job or service can run and still be considered on time. |
|
Action:SLA:Event:Add | Add an event. |
|
Action:SLA:Event:Delete | Delete an event. |
|
Job:UI Path
The following example shows how to define a UiPath job, which performs robotic process automation (RPA).
To deploy and run a UiPath job, ensure that you have installed the UiPath plug-in using the provision image command or the provision agent::update command.
"UI Path_Job": {
"Type": "Job:UI Path",
"ConnectionProfile": "UIPATH_Connect",
"Folder Name": "Default",
"Folder Id": "374999",
"Process Name": "control-m-process",
"packagekey": "209c467e-1704-4b6y-b613-6c5a2c9acbea",
"Robot Name": "abc-ctm-bot",
"Robot Id": "153999",
"Optional Input Parameters": "{
"parm1": "Value1",
"parm2": "Value2",
"parm3": "Value3"
}",
"Status Polling Frequency": "30",
"Host": "host1"
}
The UiPath job object uses the following parameters:
ConnectionProfile | Name of a connection profile to use to connect to the UiPath Robot service |
Folder Name | Name of the UiPath folder where UiPath projects are stored |
Folder Id | Identification number for the UiPath folder |
Process Name | Name of a UiPath process associated with the UiPath folder |
packagekey | UiPath package published from the UiPath Studio to the UiPath Orchestrator |
Robot Name | UiPath Robot name |
Robot Id | UiPath Robot identification number |
Optional Input Parameters | (Optional) Input parameters to be passed on to job execution, in JSON format. Format: {"parm1":"Value1", "parm2":"Value2"} |
Status Polling Frequency | (Optional) Number of seconds to wait before checking the status of the job. Default: 15 |
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. |
Job:Automation Anywhere
The following example shows how to define an Automation Anywhere job, which performs robotic process automation (RPA).
To deploy and run an Automation Anywhere job, ensure that you have installed the Automation Anywhere plug-in using the provision image command or the provision agent::update command.
"Automation Anywhere_Job_2": {
"Type": "Job:Automation Anywhere",
"ConnectionProfile": "AACONN",
"Automation Type": "Bot",
"Bot to run": "bot123",
"Bot Input Parameters": "{
"Param1":{
"type": "STRING",
"string": "Hello world"
},
"NumParam":{
"type": "NUMBER",
"integer": 11
}
}",
"Connection timeout": 10,
"Status Polling Frequency": 5
}
The Automation Anywhere job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to Automation Anywhere. |
Automation Type | Determines the type of automation to run, either a Bot or a Process. |
Bot to run | (For Bot automation) Defines the Bot name. |
Process to run | (For Process automation) Defines the Process name. |
Process URI Path | (For Process automation) Defines the URI path of the folder that contains the process to run. Use the slash character (/) as the separator in this path (not the backslash). Example: Bots/TEST/Folder1 |
Connection timeout | Defines the maximum number of seconds to wait for REST API requests to respond, before disconnecting. Default: 10 seconds |
Status Polling Frequency | (Optional) Defines the number of seconds to wait before checking the status of the job. Default: 5 seconds |
Bot Input Parameters | (Optional, for Bot automation) Defines optional input parameters to use during bot execution, defined in JSON format. You can define a variety of types of parameters (STRING, NUMBER, BOOLEAN, LIST, DICTIONARY, or DATETIME). For more information about the syntax of the JSON-format input parameters, see the description of the botInput element in a Bot Deploy request in the Automation Anywhere API documentation. For no parameters, specify {}. |
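The following is a minimal sketch of a Process automation variant based on the parameters described above. The process name is a hypothetical placeholder, and the URI path reuses the example from the table.
"Automation Anywhere_Process_Job": {
"Type": "Job:Automation Anywhere",
"ConnectionProfile": "AACONN",
"Automation Type": "Process",
"Process to run": "process123",
"Process URI Path": "Bots/TEST/Folder1",
"Connection timeout": 10,
"Status Polling Frequency": 5
}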
Job:DBT
The following example shows how to define a DBT job. DBT (Data Build Tool) is a cloud-based computing platform that enables you to develop, test, schedule, document, and analyze data models.
To deploy and run a DBT job, ensure that you have installed the DBT plug-in using the provision image command or the provision agent::update command.
"DBT_Job_2": {
"Type": "Job:DBT",
"ConnectionProfile": "DBT_CP",
"DBT Job Id": "12345",
"Run Comment": "A DBT job",
"Override Job Commands": "checked",
"Variables": [
{
"UCM-DefineCommands-N001-element": "dbt test"
},
{
"UCM-DefineCommands-N002-element": "dbt run"
}
],
"Status Polling Frequency": "10",
"Failure Tolerance": "2"
}
The DBT job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to DBT. |
DBT Job ID | Defines the ID of the preexisting job in the DBT platform that you want to run. |
Run Comment | Defines a free-text description of the job. |
Override Job Commands | Determines whether to override the predefined DBT job commands. Values: checked|unchecked Default: unchecked |
Variables | Defines the new DBT job commands, as variable pairs in the following format: "UCM-DefineCommands-Nnnn-element": "command string" where nnn is a counter for the sequential position of each command. |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the job. Default: 10 |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 |
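The following is a minimal sketch of a DBT job that runs the commands predefined in the DBT platform without overriding them; the job ID and comment are placeholders.
"DBT_Job_Simple": {
"Type": "Job:DBT",
"ConnectionProfile": "DBT_CP",
"DBT Job Id": "12345",
"Run Comment": "Run the predefined job commands",
"Override Job Commands": "unchecked",
"Status Polling Frequency": "10",
"Failure Tolerance": "2"
}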
Job:GCP DataFlow
The following example shows how to define a Google Cloud Platform (GCP) Dataflow job, which performs cloud-based data processing for batch and real-time data streaming applications.
To deploy and run a GCP Dataflow job, ensure that you have installed the GCP Dataflow plug-in using the provision image command or the provision agent::update command.
"GCP DataFlow_Job_1": {
"Type": "Job:GCP DataFlow",
"ConnectionProfile": "GCPDATAFLOW",
"Project ID": "applied-lattice-11111",
"Location": "us-central1",
"Template Type": "Classic Template",
"Template Location (gs://)": "gs://dataflow-templates-us-central1/latest/Word_Count",
"Parameters (JSON Format)": {
"jobName": "wordcount",
"parameters": {
"inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt",
"output": "gs://controlmbucket/counts"
}
},
"Verification Poll Interval (in seconds)": "10",
"output Level": "INFO",
"Host": "host1"
}
The GCP Dataflow job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to Google Cloud Platform. |
Project ID | Defines the project ID for your Google Cloud project. |
Location | Defines the Google Compute Engine region to create the job. |
Template Type | Defines one of the following types of GCP Dataflow templates:
|
Template Location (gs://) | Defines the path for temporary files. This must be a valid Google Cloud Storage URL that begins with gs://. The pipeline option tempLocation is used as the default value, if it has been set. |
Parameters (JSON Format) | Defines input parameters to be passed on to job execution, in JSON format (name:value pairs). This JSON must include the jobName and parameters elements. |
Verification Poll Interval (in seconds) | (Optional) Determines the number of seconds to wait before checking the status of the job. Default: 10 |
output Level | Determines one of the following levels of details to retrieve from the GCP outputs in the case of job failure:
|
Host | Defines the name of the host machine where the job runs. An agent must be installed on this host. Optionally, you can define a host group instead of a host machine. |
Job:GCP Dataproc
The following examples show how to define a Google Cloud Platform (GCP) Dataproc job, which performs cloud-based big data processing and machine learning.
To deploy and run a GCP Dataproc job, ensure that you have installed the GCP Dataproc plug-in using the provision image command or the provision agent::update command.
The following example shows a job for a Dataproc task of type Workflow Template:
"GCP Dataproc_Job": {
"Type": "Job:GCP Dataproc",
"ConnectionProfile": "GCPDATAPROC",
"Project ID": "gcp_projectID",
"Account Region": "us-central1",
"Dataproc task type": "Workflow Template",
"Workflow Template": "Template2",
"Verification Poll Interval (in seconds)": "20",
"Tolerance": "2"
}
The following example shows a job for a Dataproc task of type Job:
"GCP Dataproc_Job": {
"Type": "Job:GCP Dataproc",
"ConnectionProfile": "GCPDATAPROC",
"Project ID": "gcp_projectID",
"Account Region": "us-central1",
"Dataproc task type": "Job",
"Parameters (JSON Format)": {
"job": {
"placement": {},
"statusHistory": [],
"reference": {
"jobId": "job-e241f6be",
"projectId": "gcp_projectID"
},
"labels": {
"goog-dataproc-workflow-instance-id": "44f2b59b-a303-4e57-82e5-e1838019a812",
"goog-dataproc-workflow-template-id": "template-d0a7c"
},
"sparkJob": {
"mainClass": "org.apache.spark.examples.SparkPi",
"properties": {},
"jarFileUris": [
"file:///usr/lib/spark/examples/jars/spark-examples.jar"
],
"args": [
"1000"
]
}
}
},
"Verification Poll Interval (in seconds)": "20",
"Tolerance": "2"
}
The GCP Dataproc job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to Google Cloud Platform. |
Project ID | Defines the project ID for your Google Cloud project. |
Account Region | Defines the Google Compute Engine region to create the job. |
Dataproc task type | Defines one of the following Dataproc task types to execute:
|
Workflow Template | (For a Workflow Template task type) Defines the ID of a Workflow Template. |
Parameters (JSON Format) | (For a Job task type) Defines input parameters to be passed on to job execution, in JSON format. You retrieve this JSON content from the GCP Dataproc UI, using the EQUIVALENT REST option in job settings. |
Verification Poll Interval (in seconds) | (Optional) Determines the number of seconds to wait before checking the status of the job. Default: 20 |
Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 times |
Job:GCP BigQuery
The following example shows how to define a GCP BigQuery job. GCP BigQuery is a Google Cloud Platform computing service that you can use for data storage, processing, and analysis.
To deploy and run a GCP BigQuery job, ensure that you have installed the GCP BigQuery plug-in using the provision image command or the provision agent::update command.
The following example shows a job for a Query action in GCP BigQuery:
"GCP BigQuery_query": {
"Type": "Job:GCP BigQuery",
"ConnectionProfile": "BIGQSA",
"Action": "Query",
"Project Name": "proj",
"Dataset Name": "Test",
"Run Select Query and Copy to Table": "checked",
"Table Name": "IFTEAM",
"SQL Statement": "select user from IFTEAM2",
"Query Parameters": {
"name": "IFteam",
"paramterType": {
"type": "STRING"
},
"parameterValue": {
"value": "BMC"
}
},
"Job Timeout": "30000",
"Connection Timeout": "10",
"Status Polling Frequency": "5"
}
The GCP BigQuery job object uses the following parameters:
Parameter | Actions | Description |
---|---|---|
ConnectionProfile | All Actions | Determines which connection profile to use to connect to GCP BigQuery. |
Action | N/A | Determines one of the following GCP BigQuery actions to perform:
|
Project Name | All Actions | Determines the project that the job uses. |
Dataset Name |
| Determines the database that the job uses. |
Run Select Query and Copy to Table | Query | (Optional) Determines whether to paste the results of a SELECT statement into a new table. |
Table Name |
| Defines the new table name. |
SQL Statement | Query | Defines one or more SQL statements supported by GCP BigQuery. Rule: It must be written in a single line, with character strings separated by one space only. |
Query Parameters | Query | Defines the query parameters, which enables you to control the presentation of the data.
Example
|
Copy Operation Type | Copy | Determines one of the following copy operations:
|
Source Table Properties | Copy | Defines the properties of the table that is cloned, backed up, or copied, in JSON format. You can copy or back up one or more tables at a time.
Example
|
Destination Table Properties |
| Defines the properties of a new table, in JSON format.
Example
|
Destination/Source Bucket URIs |
| Defines the source or destination data URI for the table that you are loading or extracting. You can load or extract multiple tables. Rule: Use commas to distinguish elements from each other. Example: "gs://source1_site1/source1.json" |
Show Load Options | Load | Determines whether to add more fields to a table that you are loading. |
Load Options | Load | Defines additional fields for the table that you are loading.
Example
|
Extract As | Extract | Determines one of the following file formats to export the data to:
|
Routine | Routine | Defines a routine and the values that it must run.
Example
|
Job Timeout | All Actions | Determines the maximum number of milliseconds to run the GCP BigQuery job. |
Connection Timeout | All Actions | Determines the number of seconds to wait before the job ends NOT OK. Default: 10 |
Status Polling Frequency | All Actions | Determines the number of seconds to wait before checking the status of the job. Default: 5 seconds |
Job:GCP VM
The following example shows how to define a job that performs operations on a Google Virtual Machine (VM).
To deploy and run a Google VM job, ensure that you have installed the Google VM plug-in using the provision image command or the provision agent::update command.
"GCP VM_create": {
"Type": "Job:GCP VM",
"ConnectionProfile": "GCPVM",
"Project ID": "applied-lattice",
"Zone": "us-central1-f",
"Operation": "Create",
"Parameters": "{ \"key\": \"value\"}",
"Instance Name": "tb-mastercluster-m",
"Get Logs": "checked",
"Verification Poll Interval": "20",
"Tolerance": "3"
}
The Google VM job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to Google VM. |
Project ID | Defines the project ID of the Google Cloud Project Virtual Machine. |
Zone | Defines the name of the zone for the request. |
Operation | Determines one of the following operations to perform on the Google Virtual Machine:
|
Template Name | Defines the name of a template for creation of a new Google Virtual Machine from a template. |
Instance Name | Defines the name of the VM instance where you want to run the operation. This parameter is available for all operations except for the Create operations. |
Parameters | Defines the input parameters in JSON format for a Create operation. Format: {"param1":"value1", "param2":"value2", …} |
Get Logs | Determines whether to display logs from Google VM at the end of the job output. This parameter is available for all operations except for the Delete operation. Values: checked|unchecked Default: unchecked |
Verification Poll Interval | Determines the number of seconds to wait before job status verification. Default: 15 seconds |
Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 times |
Job:GCP Batch
The following example shows how to define a GCP Batch job. Google Cloud Platform (GCP) Batch enables you to manage, schedule, and run batch computing workloads on a virtual machine that is provisioned to accommodate your resource and capacity needs.
To deploy and run a Google Batch job, ensure that you have installed the Google Batch plug-in using the provision image command or the provision agent::update command.
"GCP Batch_Job_2": {
"Type": "Job:GCP Batch",
"ConnectionProfile": "GCP_BATCH",
"Project ID": "gcp_projectID",
"Region": "us-central1",
"Override Region": "Yes",
"Allowed Locations": ["zones/us-central1-a", "zones/us-central1-c"],
"Job Name": "unique",
"Priority": "99",
"Runnable Type": "Container",
"Task Script Text": "echo hello world",
"Override Commands": "Yes",
"Commands": "\"echo\",\"hello world\"",
"CPU": "1500",
"Memory": "1500",
"Instance Policy": "Machine Template",
"Machine Type": "e2-medium",
"Machine Template": "template-name",
"Provisioning Model": "Spot",
"Logs Policy": "Cloud Logging",
"Use Advanced JSON Format": "unchecked",
"Status Polling Frequency": "10"
}
The Google Batch job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to Google Batch. |
Project ID | Defines the GCP project ID where the batch job runs. A project is a set of configuration settings that define the resources your GCP Batch jobs use and how they interact with GCP. |
Region | Defines the region that is predefined in the GCP Batch platform where the virtual machine resources are located. |
Override Region | Determines whether to override the predefined region in the GCP Batch platform, as follows:
Default: No |
Allowed Locations | Defines the new region, or zones in a region, where the virtual machine resources that are used to run the job are located. |
Job Name | Defines a unique name for the batch job. |
Priority | Determines the run priority of the batch job. Larger numbers indicate higher priority. Valid values: Any number between 1 and 99. Default: 99 |
Runnable Type | Determines one of the following types of batch jobs:
|
Task Script Text | Defines the shell script that the batch job runs. |
Container Image URI | Defines the Uniform Resource Identifier (URI) that points to the container image. |
Entry Point | Defines the entry point (ENTRYPOINT) for the Docker container, which overrides the entry point defined in the original image. An entry point is the location in the container where the program begins its execution. Example: "/bin/project" |
Override Commands | Determines whether to override the Docker command (CMD), which is the executable container application code that is defined in the original image, as follows.
Default: No |
Commands | Defines the Docker command (CMD) that executes when a Docker container runs. If the container image contains an entry point or if an entry point is defined by the Entry Point parameter, the command is appended as an argument. |
Container Volumes | Defines the file or directory to copy (mount) onto a container volume, which is a virtual hard drive, inside the Docker container. Example: A value of "/home/usr:/app/" copies the /home/user directory on the host machine and pastes it into the /app directory on a volume in the Docker container. |
CPU | Determines the number of millicores of virtual CPU resources that are reserved for the batch job. Virtual machines measure CPU resources in millicores (m), which are thousandths of a CPU core. For example, 2000m equals 2 cores. |
Memory | Determines the number of mebibytes (MiB; mega binary bytes) of virtual memory resources that are reserved for the batch job. |
Maximum Retry Count | Determines the number of times to retry a job run when it fails. Valid values: Any number between 0 and 10. Default: 0 |
Instance Policy | Determines which kind of virtual machine (instance) runs the job.
|
Machine Type | Defines the virtual machine type that runs the job. |
Machine Template | Defines the virtual machine template that runs the job. |
Provisioning Model | Determines the price and availability of virtual machine resources, as follows:
Default: Standard |
Logs Policy | Determines whether to save the batch job logs and where they appear.
|
Use Advanced JSON Format | Determines whether you supply your own JSON parameters, instead of several of the parameters described above. |
JSON Format | Defines the parameters for the batch job, in JSON format, that enable you to control how the job runs. For a description of this JSON syntax, see the description of Resource:Job in the GCP Batch Job API Reference. |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the GCP Batch job. Default: 10 |
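The following is a minimal sketch of a script-based variant of the GCP Batch job. It assumes that "Script" is the Runnable Type value that corresponds to the Task Script Text parameter and that "Machine Type" is the Instance Policy value that corresponds to the Machine Type parameter; the job name is a placeholder. Verify these values against the plug-in before use.
"GCP Batch_Script_Job": {
"Type": "Job:GCP Batch",
"ConnectionProfile": "GCP_BATCH",
"Project ID": "gcp_projectID",
"Region": "us-central1",
"Override Region": "No",
"Job Name": "script-job-example",
"Priority": "50",
"Runnable Type": "Script",
"Task Script Text": "echo hello world",
"CPU": "1000",
"Memory": "1024",
"Instance Policy": "Machine Type",
"Machine Type": "e2-medium",
"Provisioning Model": "Standard",
"Logs Policy": "Cloud Logging",
"Use Advanced JSON Format": "unchecked",
"Status Polling Frequency": "10"
}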
Job:GCP Functions
The following example shows how to define a GCP Functions job, which enables you to develop, test, and run applications in the cloud.
To deploy and run a Google Functions job, ensure that you have installed the Google Functions plug-in using the provision image command or the provision agent::update command.
"GCP Functions_Job": {
"Type": "Job:GCP Functions",
"ConnectionProfile": "GCPFUNCTIONS",
"Project ID": "myProject",
"Location": "us-central1",
"Function Name": "myFunction",
"Function Parameters": "Body",
"Body": "{\\\"message\\\":\\\"controlm-body-%%ORDERID\\\"}",
"Status Polling Frequency": "20",
"Failure Tolerance": "2",
"Get Logs": "unchecked",
}
The Google Functions job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect Control-M to GCP Functions. |
Project ID | Defines the GCP project ID where the GCP Functions job executes. A project is a set of configuration settings that define the resources your GCP Batch jobs use and how they interact with GCP. |
Location | Defines where the functions job executes. |
Function Name | Defines the name of the function that you want to execute. |
Function Parameters | Determines one of the following types of parameters to pass to the function:
|
URL Parameters | Defines the URL parameters that are passed to the function, in the following format: <Parameter_Name_1>=<Value_1> |
Body | Defines the JSON-based body parameters that are passed to the function, in the following format: {\\\"parameter_1\\\":\\\"value_1\\\", \\\"parameter_2\\\":\\\"value_2\\\"} |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the GCP Functions job. Default: 20 |
Failure Tolerance | Determines the number of times to check the job status before ending Not OK. Default: 2 |
Get Logs | Determines whether to append the GCP Functions logs to the output. Valid Values: checked | unchecked Default: unchecked |
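The following is a minimal sketch of the URL Parameters variant of the GCP Functions job. It assumes that "URL Parameters" is the Function Parameters value that corresponds to the URL Parameters field; the project, location, function name, and parameter value are placeholders.
"GCP Functions_URL_Params_Job": {
"Type": "Job:GCP Functions",
"ConnectionProfile": "GCPFUNCTIONS",
"Project ID": "myProject",
"Location": "us-central1",
"Function Name": "myFunction",
"Function Parameters": "URL Parameters",
"URL Parameters": "message=controlm-%%ORDERID",
"Status Polling Frequency": "20",
"Failure Tolerance": "2",
"Get Logs": "unchecked"
}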
Job:GCP Dataprep
The following example shows how to define a GCP Dataprep job, which enables you to visualize, format, and prepare your data for analysis.
To deploy and run a Google Dataprep job, ensure that you have installed the Google Dataprep plug-in using the provision image command or the provision agent::update command.
"GCP Dataprep_Job": {
"Type": "Job:GCP Dataprep",
"ConnectionProfile": "GCP_DATAPREP",
"Flow Name": "data_manipulation",
"Parameters": "{schemaDriftOptions":{"schemaValidation": "true","stopJobOnErrorsFound": "true" }}",
"Execute Job With Idempotency Token": "checked",
"Idempotency Token": "Control-M-Token-%%ORDERID",
"Status Polling Frequency": "10",
"Failure Tolerance": "2"
}
The Google Dataprep job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to Google Dataprep. |
Flow Name | Defines the name of the flow, which is the workspace where you format and prepare your data. |
Parameters | Defines parameters that override the flow or its data sets when the job executes. For more information on parameter types, see the properties of runFlow service in the GCP Dataprep API documentation. |
Execute Job with Idempotency Token | Determines whether to execute the job with an idempotency token. Valid Values: checked | unchecked Default: unchecked |
Idempotency Token | Defines a unique ID (idempotency token), which guarantees that the job executes only once. Default: Control-M-Idem-%%ORDERID |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the job. Default: 10 |
Failure Tolerance | Determines the number of times to check the job status before ending Not OK. Default: 2 |
Job:GCP Deployment Manager
The following example shows how to define a GCP Deployment Manager job, which enables you to create, configure, test, and manage your GCP resources infrastructure.
To deploy and run a Google Deployment Manager job, ensure that you have installed the Google Deployment Manager plug-in using the provision image command or the provision agent::update command.
"GCP Deployment Manager_job": {
"Type": "Job:GCP Deployment Manager",
"ConnectionProfile": "DEPLOY_MANAGEMENT",
"Project ID": "applied-lattice-333111",
"Action": "Create Deployment",
"Deployment Name": "demo_deployment",
"Yaml Config Content": "{resources: [{type: compute.v1.instance, name: quickstart-deployment-vm, properties: {zone: us-central1-f, machineType: 'https://www.googleapis.com/compute/v1/projects/applied-lattice-333108/zones/us-central1-f/machineTypes/e2-micro', disks: [{deviceName: boot, type: PERSISTENT, boot: true, autoDelete: true, initializeParams: {sourceImage: 'https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11'}}], networkInterfaces: [{network: 'https://www.googleapis.com/compute/v1/projects/applied-lattice-333108/global/networks/default', accessConfigs: [{name: External NAT, type: ONE_TO_ONE_NAT}]}]}}]}",
"Failure Tolerance": "2",
"Status Polling Frequency": "10"
}
The Google Deployment Manager job object uses the following parameters:
Parameter | Description |
---|---|
ConnectionProfile | Defines the name of a connection profile to use to connect Control-M to GCP Deployment Manager. |
Project ID | Defines a unique GCP project ID for this job. |
Action | Determines one of the following actions to perform:
A deployment is a collection of API resources, such as a Google Compute Engine or GCP Cloud SQL instance. |
Deployment Name | Defines a unique deployment name. |
YAML Config Content | Defines a configuration, in YAML format, which enables you to add or update resources in a deployment. You must use the YAML Minifier Tool to remove all unnecessary characters from your configuration code. Example YAML:
|
Status Polling Frequency | Determines the number of seconds to wait before Control-M checks the status of the job. Default: 10 |
Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 3 |
Job:Boomi
The following example shows how to define a Boomi job, which enables the integration of Boomi processes with your existing Control-M workflows.
To deploy and run a Boomi job, ensure that you have installed the Boomi plug-in using the provision image command or the provision agent::update command.
"Boomi_Job_2": {
"Type": "Job:Boomi",
"ConnectionProfile": "BOOMI_CCP",
"Atom Name": "Atom1",
"Process Name": "New Process",
"Polling Intervals": "20",
"Tolerance": "3"
}
The Boomi job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Boomi endpoint. |
Atom Name | Defines the name of a Boomi Atom associated with the Boomi process. |
Process Name | Defines the name of a Boomi process associated with the Boomi Atom. |
Polling Intervals | (Optional) Number of seconds to wait before checking the status of the job. Default: 20 seconds |
Tolerance | Defines the number of API call retries during the status check phase. If the API call that checks the status fails due to the Boomi limitation of a maximum of 5 calls per second, it will retry again according to the number in the Tolerance field. Default: 3 times |
Job:Databricks
The following example shows how to define a Databricks job, which enables the integration of jobs created in the Databricks environment with your existing Control-M workflows.
To deploy and run a Databricks job, ensure that you have installed the Databricks plug-in using the provision image command or the provision agent::update command.
"Databricks_Job": {
"Type": "Job:Databricks",
"ConnectionProfile": "DATABRICKS",
"Databricks Job ID": "91",
"Parameters": "\"notebook_params\":{\"param1\":\"val1\", \"param2\":\"val2\"}",
"Idempotency Token": "Control-M-Idem_%%ORDERID",
"Status Polling Frequency": "30"
}
The Databricks job object uses the following parameters:
ConnectionProfile | Determines which connection profile to use to connect to the Databricks workspace. |
Databricks Job ID | Determines the job ID created in your Databricks workspace. |
Parameters | Defines task parameters to override when the job runs, according to the Databricks convention. The list of parameters must begin with the name of the parameter type, as in the "notebook_params" example in the job definition above.
For more information about the parameter types, review the properties of RunParameters in the OpenAPI specification provided through the Azure Databricks documentation. For no parameters, specify a value of "params": {}, as shown in the sketch after this table. |
Idempotency Token | (Optional) Defines a token to use to rerun job runs that timed out in Databricks. Values:
|
Status Polling Frequency | (Optional) Determines the number of seconds to wait before checking the status of the job. Default: 30 |
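As referenced in the table above, the following is a minimal sketch of a Databricks run with no parameters; the job ID is the same placeholder used in the example above.
"Databricks_Job_No_Parameters": {
"Type": "Job:Databricks",
"ConnectionProfile": "DATABRICKS",
"Databricks Job ID": "91",
"Parameters": "\"params\": {}",
"Status Polling Frequency": "30"
}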
Job:Microsoft Power BI
The following examples show how to define a Power BI job, which enables integration of Power BI workflows with your existing Control-M workflows.
To deploy and run a Power BI job, ensure that you have installed the Power BI plug-in using the provision image command or the provision agent::update command.
The following example shows a job for refreshing a dataset in Power BI:
"Microsoft Power BI_Job_2": {
"Type": "Job:Microsoft Power BI",
"ConnectionProfile": "POWERBI",
"Dataset Refresh/ Pipeline Deployment": "Dataset Refresh",
"Workspace Name": "Demo",
"Workspace ID": "a7989345-8cfe-44e7-851d-81560e67973f",
"Dataset ID": "9976ce6c-e21a-4c33-9b8c-37c8303231cf",
"Parameters": "{\"type\":\"Full\",\"commitMode\":\"transactional\",\"maxParallelism\":20,\"retryCount\":2}",
"Connection Timeout": "10",
"Status Polling Frequency": "10"
}
The following example shows a job for deploying a Power BI Pipeline from dev to test to production:
"Microsoft Power BI_Job_2": {
"Type": "Job:Microsoft Power BI",
"ConnectionProfile": "POWERBI",
"Dataset Refresh/ Pipeline Deployment": "Pipeline Deployment",
"Pipeline ID": "83f36385-4e38-43g4-8263-10aa12e3175c",
"Connection Timeout": "10",
"Status Polling Frequency": "10"
}
The Power BI job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Power BI endpoint. |
Dataset Refresh/ Pipeline Deployment | Determines one of the following options for execution in Power BI:
|
Workspace Name | (For Dataset) Defines a Power BI workspace where you want to refresh data. |
Workspace ID | (For Dataset) Defines the ID for the specified Power BI workspace (defined in Workspace Name). |
Dataset ID | Defines a Power BI data set that you want to refresh under the specified workspace. |
Parameters | (For Dataset) Defines specific parameters to pass when the job runs, defined as JSON pairs of parameter name and value. For more information about available parameters, see Datasets - Refresh Dataset in the Microsoft Power BI documentation. To specify parameters, the dataset must be in Premium group. Format: {"param1":"value1", "param2":"value2"} For no parameters, specify {}. Example:
|
Connection Timeout | (Optional) Determines the maximum number of seconds to wait for REST API requests to respond, before disconnecting. Default: 10 seconds |
Status Polling Frequency | (Optional) Determines the number of seconds to wait before checking the status of the job. Default: 10 seconds |
Pipeline ID | Defines the ID of a Power BI pipeline that you want to deploy from dev to test and then to production. |
Job:Qlik Cloud
The following example shows how to define a Qlik Cloud job, which enables integration with Qlik Cloud Data Services for data visualization through Qlik Sense.
To deploy and run a Qlik Cloud job, ensure that you have installed the Qlik Cloud plug-in using the provision image command or the provision agent::update command.
"Qlik Cloud_Job": {
"Type": "Job:Qlik Cloud",
"ConnectionProfile": "QLIK-TEST",
"Reload Type": "Full",
"App Name": "Demo1",
"Print Log to Output": "Yes",
"Status Polling Frequency": "10",
"Tolerance": "2"
}
The Qlik Cloud job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect to the Qlik endpoint. |
Reload Type | Determines one of the following options to load data into the environment:
|
App Name | Defines the Qlik Sense app name, which contains one or more workspaces, called sheets. |
Print Log to Output | Determines whether the job logs are included in the Control-M output. Values: Yes|No Default: Yes |
Status Polling Frequency | (Optional) Determines the number of seconds to wait before checking the status of the job. Default: 10 seconds |
Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 2 times |
Job:Tableau
The following example shows how to define a Tableau job, which enables you to visualize, analyze, and share large workloads of data.
To deploy and run a Tableau job, ensure that you have installed the Tableau plug-in using the provision image command or the provision agent::update command.
"Tableau_Refresh_Datasource": {
"Type": "Job:Tableau",
"ConnectionProfile": "TABLEAU_CP",
"Action": "Refresh Datasource",
"Datasource Name": "BQ_Dataset",
"Status Polling Frequency": "10",
"Failure Tolerance": "2"
}
The Tableau job object uses the following parameters:
ConnectionProfile | Defines the name of a connection profile to use to connect Control-M to Tableau. |
Action | Determines one of the following Tableau actions to perform:
|
Datasource Name | Defines the name of the data source that is refreshed. Tableau can connect to the following types of data sources:
|
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the job. Default: 30 seconds |
Failure Tolerance | Determines the number of times to check the job status before ending NOT OK. Default: 1 |
Job:Snowflake
The following example shows how to define a Snowflake job, which enables integration with Snowflake, a cloud computing platform that you can use for data storage, processing, and analysis.
To deploy and run a Snowflake job, ensure that you have installed the Snowflake plug-in using the provision image command or the provision agent::update command.
The following example shows a job for a SQL Statement action in Snowflake:
"Snowflake_Job": {
"Type": "Job:Snowflake",
"ConnectionProfile": "SNOWFLAKE_CONNECTION_PROFILE",
"Database": "FactoryDB",
"Schema": "Public",
"Action": "SQL Statement",
"Snowflake SQL Statement": "Select * From Table1",
"Statement Timeout": "60",
"Show More Options": "unchecked",
"Show Output": "unchecked",
"Polling Interval": "20"
}
The Snowflake job object uses the following parameters:
Parameter | Actions | Description |
---|---|---|
ConnectionProfile | All Actions | Determines one of the following types of connection profiles to use to connect to Snowflake: |
Database | All Actions | Determines the database that the job uses. |
Schema | All Actions | Determines the schema that the job uses. A schema is an organizational model that describes layout and definition of the fields and tables, and their relationships to each other, in a database. |
Action | N/A | Determines one of the following Snowflake actions to perform:
SQL Statement, Copy from Query, Copy from Table, Create Table and Query, Copy into Table, Start or Pause Snowpipe, Stored Procedure, or Snowpipe Load Status (see the Actions column in the rows below). |
Snowflake SQL Statement | SQL Statement | Determines one or more Snowflake-supported SQL commands. Rule: Must be written in a single line, with strings separated by one space only. |
Query to Location | Copy from Query | Defines the cloud storage location. |
Query Input | Copy from Query | Defines the query used for copying the data. |
Storage Integration |
| Defines the storage integration object, which stores an Identity and Access Management (IAM) entity and an optional set of blocked cloud storage locations. |
Overwrite |
| Determines whether to overwrite an existing file in the cloud storage, as follows:
|
File Format |
| Determines one of the following file formats for the saved file:
|
Copy Destination | Copy from Table | Defines where the JSON or CSV file is saved. You can save to Amazon Web Services, Google Cloud Platform, or Microsoft Azure. Example: s3://<bucket name>/ |
From Table | Copy from Table | Defines the name of the copied table. |
Create Table Name | Create Table and Query | Defines the name of the new or existing table where the data is queried. |
Query | Create Table and Query | Defines the query used for the copied data. |
Snowpipe Name |
| Defines the name of the Snowpipe. A Snowpipe loads data from files when they are ready, or staged. |
Table Name | Copy into Table | Defines the name of the table that the data is copied into. |
From Location | Copy into Table | Defines the cloud storage location from where the data is copied, in CSV or JSON format. Example: s3://location-path/FileName.csv |
Start or Pause Snowpipe | Start or Pause Snowpipe | Determines whether to start or pause the Snowpipe, as follows:
|
Stored Procedure Name | Stored Procedure | Defines the name of the stored procedure. |
Procedure Argument | Stored Procedure | Defines the value of the argument in the stored procedure. |
Table Name | Snowpipe Load Status | Defines the table that is monitored when loaded by the Snowpipe. |
Stage Location | Snowpipe Load Status | Defines the cloud storage location. A stage is a pointer that indicates where data is stored, or staged. Example: s3://CloudStorageLocation/ |
Days Back | Snowpipe Load Status | Determines the number of days to monitor the Snowpipe load status. |
Status File Cloud Location Path | Snowpipe Load Status | Defines the cloud storage location where a CSV file log is created. The CSV file log details the load status for each Snowpipe. |
Storage Integration | Snowpipe Load Status | Defines the Snowflake configuration for the cloud storage location (as defined in the previous parameter, Status File Cloud Location Path). Example: S3_INT |
Statement Timeout | All Actions | Determines the maximum number of seconds to run the job in Snowflake. |
Show More Options | All Actions | Determines whether the following parameters are included in the job definitions:
Parameters, Role, Bindings, and Warehouse (described in the following rows) |
Parameters | All Actions | Defines Snowflake-provided parameters that let you control how data is presented. Format: |
Role | All Actions | Determines the Snowflake role used for this Snowflake job. A role is an entity that can be assigned privileges on secure objects. You can be assigned one or more roles from a limited selection. |
Bindings | All Actions | Defines the values to bind to the variables used in the Snowflake job, in JSON format. For more information about bindings, see the Snowflake documentation. Example: The following JSON defines two binding variables:
|
Warehouse | All Actions | Determines the warehouse used in the Snowflake job. A warehouse is a cluster of virtual machines that processes a Snowflake job. |
Show Output | All Actions | Determines whether to show a full JSON response in the log output. Values: checked|unchecked Default: unchecked |
Status Polling Frequency | All Actions | Determines the number of seconds to wait before checking the status of the job. Default: 20 seconds |
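The following minimal sketch illustrates a Copy into Table action, assembled from the parameter names in the table above. The connection profile, database, schema, table name, and storage location are placeholder values, and the exact set of fields that each action requires may differ:
"Snowflake_Copy_Into_Table_Job": {
"Type": "Job:Snowflake",
"ConnectionProfile": "SNOWFLAKE_CONNECTION_PROFILE",
"Database": "FactoryDB",
"Schema": "Public",
"Action": "Copy into Table",
"Table Name": "Table1",
"From Location": "s3://location-path/FileName.csv",
"Statement Timeout": "60",
"Show Output": "unchecked",
"Polling Interval": "20"
}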
Job:Talend Data Management
The following examples show how to define a Talend Data Management job, which enables the integration of data management and data integration tasks or plans from Talend with your existing Control-M workflows.
To deploy and run a Talend Data Management job, ensure that you have installed the Talend Data Management plug-in using the provision image command or the provision agent::update command.
The following example shows a job for a Talend task:
"Talend Data Management": {
"Type": "Job: Talend Data Management",
"ConnectionProfile": "TALENDDATAM",
"Task/Plan Execution": "Execute Task",
"Task Name": "GetWeather job",
"Parameters": "{"parameter_city":"London","parameter_appid":"43be3fea88g092d9226eb7ca"}"
"Log Level": "Information",
"Bring logs to output": "checked",
"Task Polling Intervals" : "10"
}
The following example shows a job for a Talend plan:
"Talend Data Management": {
"Type": "Job: Talend Data Management",
"ConnectionProfile": "TALENDDATAM",
"Task/Plan Execution": "Execute Plan",
"Plan Name": "Sales Operation Plan",
"Plan Polling Intervals" : "10"
}
The Talend Data Management job object uses the following parameters:
ConnectionProfile | Determines which connection profile to use to connect to the Talend Data Management Platform. |
Task/Plan Execution | Determines one of the following options for execution in Talend:
Execute Task: Runs the specified Talend task. Execute Plan: Runs the specified Talend plan. |
Task Name / Plan Name | Defines the name of the Talend task or plan to execute, as defined in the Tasks and Plans page in the Talend Management Console. |
Parameters | (For a task) Defines specific parameters to pass when the Talend job runs, defined as JSON pairs of parameter name and value. All parameter names must contain the parameter_ prefix. Format: {"parameter_param1":"value1", "parameter_param2":"value2"} For no parameters, specify {}. |
Log Level | (For a task) Determines one of the following levels of detail in log messages for the triggered task in the Talend Management Console:
|
Bring logs to output | (For a task) Determines whether to show Talend log messages in the job output. Values: checked|unchecked Default: unchecked |
Task Polling Intervals / Plan Polling Intervals | Determines the number of seconds to wait before checking the status of the triggered task or plan. Default: 10 seconds |
Job:Trifacta
The following example shows how to define a Trifacta job. Trifacta is a data-wrangling platform that allows you to discover, organize, edit, add to, and publish data in different formats and to multiple clouds, including AWS, Azure, Google, Snowflake, and Databricks.
To deploy and run a Trifacta job, ensure that you have installed the Trifacta plug-in using the provision image command or the provision agent::update command.
"Trifacta_Job_2": {
"Type": "Job:Trifacta",
"ConnectionProfile": "TRIFACTA",
"Flow Name": "Flow",
"Rerun with New Idempotency Token": "checked",
"Idempotent Token": "Control-M-Idem_%%ORDERID'",
"Retrack Job Status": "checked",
"Run ID": "Run_ID",
"Status Polling Frequency": "15"
}
The Trifacta job object uses the following parameters:
ConnectionProfile | Determines which connection profile to use to connect to the Trifacta platform. |
Flow Name | Determines which Trifacta flow the job runs. |
Rerun with New Idempotency Token | Determines whether to allow rerun of the job in Trifacta with a new idempotency token (for example, when the job run times out). Values: checked|unchecked Default: unchecked |
Idempotent Token | Defines the idempotency token that guarantees that the job run is executed only once. To allow a rerun of the job with a new token, replace the default value with a unique ID that has not been used before, such as the RUN_ID retrieved from the job output. Default: Control-M-Idem_%%ORDERID (with this default value, the job run cannot be executed again). |
Retrack Job Status | Determines whether to track job run status as the job run progresses and the status changes (for example, from in-progress to failed or to completed). Values: checked|unchecked Default: unchecked |
Run ID | Defines the RUN_ID number for the job run to be tracked. The RUN_ID is unique to each job run and it can be found in the job output. |
Status Polling Frequency | Determines the number of seconds to wait before checking the status of the Trifacta job. Default: 10 seconds |
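For a first run that does not reuse an idempotency token or retrack an existing run, the job definition can be reduced to the following minimal sketch. The connection profile and flow names are placeholders, and the sketch assumes that the rerun-related parameters can simply be left unchecked:
"Trifacta_Job_1": {
"Type": "Job:Trifacta",
"ConnectionProfile": "TRIFACTA",
"Flow Name": "Flow",
"Rerun with New Idempotency Token": "unchecked",
"Retrack Job Status": "unchecked",
"Status Polling Frequency": "15"
}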
Job:Micro Focus Windows and Job:Micro Focus Linux
The following example shows how to define a Micro Focus job, which enables you to run Job Control Language (JCL) files on mainframe environments.
Micro Focus jobs are supported on UNIX/Linux (Job:Micro Focus Linux) and on Windows (Job:Micro Focus Windows).
To deploy and run a Micro Focus job, ensure that you have installed the Micro Focus plug-in using the provision image command or the provision agent::update command.
"MicroFocusJob": {
"Type": "Job:Micro Focus Windows",
"ConnectionProfile": "MICROFWINDOWS",
"JCL Filename": "JCL14",
"PDS": "PDSLIBRARY",
"Enable JCL Variables": "checked",
"Additional Variables": [
{
"UCM-exports-N001-element": "MYVAR1=MYVAL1"
},
{
"UCM-exports-N002-element": "MYVAR2=MYVAL2"
}
],
"Restart on Rerun": "checked",
"Rerun Job ID": "J0002571",
"From Step/Proc": "/jrestart:1006#fSTEP2(3):PSTEP(1)",
"To Step/Proc": "#tSTEP4(1):PSTEP(2)",
"Recapture ABEND Codes": "Ignore",
"Recapture COND Codes": "Ignore",
"Auto Adjust Restart": "Ignore",
"Set MF_UCC11": "Ignore",
"Restart With Modified JCL": "No"
}
The Micro Focus job object uses the following parameters:
ConnectionProfile | Determines which connection profile to use to connect to the Micro Focus platform. |
JCL Filename | Defines the JCL job stream filename to execute. A job stream is a sequence of JCL statements and data that form a single unit of work for an operating system. |
PDS | Defines the Partitioned Data Set (PDS) and its members for Mainframe Subsystem Support (MSS). A PDS is a computer file that contains multiple data sets, which are called members. MSS is a program that enables JCL applications to be migrated from a mainframe and maintained, developed, and run on Windows or UNIX/Linux platforms. |
Enable JCL Variables | Determines whether to enable JCL variables. This option must be checked if the JCL job stream needs JCL variables, or if it needs additional variables that are defined by the Additional Variables parameter. Valid values: checked | unchecked Default: unchecked |
Additional Variables | Defines the environment variables that the Micro Focus Batch Scheduler Integration JCL (MFBSIJCL) needs to submit the JCL job stream. |
Restart on Rerun | Determines whether to rerun a JCL job stream from (and to) a specific step that you define. A JCL job stream contains one or more programs. Each program execution is called a job step, or step. You can initiate a rerun from the Monitoring domain as long as a job stream has not been hard-killed. Valid values: checked | unchecked Default: unchecked |
Rerun Job ID | Defines the Job ID for the JCL job stream to rerun.
|
From Step/Proc | Determines which step the JCL job stream rerun starts from, which enables you to define the range of steps in the JCL job stream that is rerun. |
To Step/Proc | Defines which step the JCL job stream rerun ends on, which enables you to define the range of steps in the JCL job stream that is rerun. |
Recapture ABEND Codes | Determines how to handle abnormal end (ABEND) codes, which call attention to a software or hardware error, from a previous JCL job stream run, as follows:
This parameter is relevant only when Restart on Rerun is checked. |
Recapture COND Codes | Determines how to handle condition (COND) codes from a previous JCL job stream run, as follows:
This parameter is relevant only when Restart on Rerun is checked. |
Auto Adjust Restart | Determines whether to automatically adjust the specified From Step/Proc step if an earlier step is bypassed and must be run to successfully rerun a later step in the JCL job streams, as follows:
This parameter is relevant only when Restart on Rerun is checked. |
Step-Specific Condition Codes | Defines changes to step condition codes and their values in the JCL job stream rerun. Example: #cSTEP20(3):PSTEP10(2):1 This parameter is relevant only when Restart on Rerun is checked. |
Set MF_UCC11 | Determines how to enable the UCC11 environment variable, which affects JCL job stream restart functionality, as follows:
This parameter is relevant only when Restart on Rerun is checked. |
Advanced Restart Parameters | Defines the advanced restart parameters to add to the rerun JCL job stream. This parameter is relevant only when Restart on Rerun is checked. |
Restart With Modified JCL | Determines whether to rerun the JCL job stream with the modified JCL job stream file, as follows:
|
Modified JCL Path and Filename | Defines the full path and filename of the modified JCL job stream file. |
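The example above uses the Windows variant. A Micro Focus Linux job follows the same structure, with the Job:Micro Focus Linux type, as in the following minimal sketch. The connection profile name and JCL values are placeholders, and the sketch assumes that the same parameter names apply on UNIX/Linux:
"MicroFocusLinuxJob": {
"Type": "Job:Micro Focus Linux",
"ConnectionProfile": "MICROFLINUX",
"JCL Filename": "JCL14",
"PDS": "PDSLIBRARY",
"Enable JCL Variables": "unchecked",
"Restart on Rerun": "unchecked",
"Restart With Modified JCL": "No"
}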
Job:Communication Suite
The following examples show how to define Communication Suite jobs, which enable you to automate business messaging and communication over Microsoft Teams, Slack, Telegram, and WhatsApp.
To deploy and run a Communication Suite job, ensure that you have installed the Communication Suite plug-in using the provision image command or the provision agent::update command.
"Communication Suite_Job_Teams": {
"Type": "Job:Communication Suite",
"ConnectionProfile": "COMM_SUITE",
"Application Name": "Microsoft Teams",
"Teams Parameters":"{ "type":"message", "attachments":[ {"contentType":"application/vnd.microsoft.card.adaptive",
"contentUrl":null, "content":{ "$schema":"http://adaptivecards.io/schemas/adaptive-card.json","type":"AdaptiveCard","version":"1.2",
"body":[{"type": "TextBlock","text": "For Samples and Templates, see [https://adaptivecards.io/samples](https://adaptivecards.io/samples)" } ] } }]}",
"Silent Message": "unchecked",
"Protect Content": "unchecked"
}
"Communication Suite_Job_Slack": {
"Type": "Job:Communication Suite",
"ConnectionProfile": "COMM_SUITE",
"Application Name": "Slack",
"Slack Parameters":"{"blocks": [{"type": "section","text": {"type": "mrkdwn","text": "The job finished successfully orderid: %%ORDERID, and <https://google.com|this is a link>"}}]}",
"Silent Message": "unchecked",
"Protect Content": "unchecked"
}
"Communication Suite_Job_Telegram": {
"Type": "Job:Communication Suite",
"ConnectionProfile": "COMM_SUITE",
"Application Name": "Telegram",
"Telegram Parameters":"The job finished successfully orderid: %%ORDERID",
"Silent Message": "unchecked",
"Protect Content": "unchecked"
}
"Communication Suite_Job_WhatsApp": {
"Type": "Job:Communication Suite",
"ConnectionProfile": "COMM_SUITE",
"Application Name": "WhatsApp",
"WhatsApp Parameters":" { "messaging_product": "whatsapp", "to": "17181231234", "type": "template", "template": { "name": "control_m", "language": { "code": "en" } }}",
"Silent Message": "unchecked",
"Protect Content": "unchecked"
}
Communication Suite jobs use the following parameters:
ConnectionProfile | Determines which connection profile to use to connect to the Communication Suite. |
Application Name | Determines one of the following communications platforms to use:
Microsoft Teams, Slack, Telegram, or WhatsApp |
Teams Parameters | Defines the parameters, in JSON format, that instruct Teams to perform multiple actions. For more information about the supported parameters, see the Microsoft Teams documentation. |
Slack Parameters | Defines the parameters, in JSON format, that instruct Slack to perform multiple actions. For more information about the supported parameters, see the Slack documentation. |
Telegram Parameters | Defines the parameters, in simple text format, that instruct Telegram to perform multiple actions. Rule: 1–4096 characters You can add Control-M variables to the text. |
Silent Message | Determines whether to send your Telegram message without a notification, which is useful for after-hours or non-urgent messages. |
Protect Content | Determines whether to prevent your Telegram message from being saved or forwarded. |
WhatsApp Parameters | Defines the parameters, in JSON format, that instruct WhatsApp to perform multiple actions. For more information about the supported parameters, see the WhatsApp documentation. |
Job:Kubernetes
The following example shows how to define a Kubernetes job, which enables you to run a pod to completion in a Kubernetes-based cluster.
To deploy and run a Kubernetes job, ensure that you have set up Managing Kubernetes Workloads with Helix Control-M, as described in Setting Up Control-M for Kubernetes.
"Kubernetes_Job": {
"Type": "Job:Kubernetes",
"ConnectionProfile": "KBN_CCP",
"Description": "Containerized Hello World",
"Job Spec Yaml" : "apiVersion: batch/v1\nkind: Job\nmetadata:\n name: {{job_yaml_file_params:jobname}}\nspec:\n template:\n spec:\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: busybox0\n image: busybox\n command: [\"echo\", \"Hello {{job_yaml_file_params:subject}}\"]\n restartPolicy: Never\n backoffLimit: 4\n",
"Job Spec Parameters" : "{\"jobname\":\"ctmjob-%%ORDERID\",\"subject\":\"Sweden\"}",
"Get Pod Logs": "Get Logs",
"Job Cleanup": "Keep",
"Job Status Polling Interval": "20"
}
Kubernetes jobs use the following parameters:
ConnectionProfile | Determines which connection profile to use to connect to Kubernetes. |
Job Spec Yaml | Defines the settings of the job in Kubernetes. Convert the contents of the yaml file to JSON and include them in the JSON code. Tip: You can use the JQ command on Linux to convert the yaml file to JSON. |
Job Spec Parameters | Defines input parameters required by the Kubernetes job, as pairs of name and value. Example: {\"jobname\":\"ctmjob-%%ORDERID\",\"iterations\":\"10\",\"delay\":\"20\"} |
Get Pod Logs | Determines whether to fetch logs of the pods of the Kubernetes job upon completion and append to the Control-M job output. The maximum output size is 10 megabytes.
Default: Get Logs |
Job Cleanup | Determines whether to delete Kubernetes resources that were created for the job.
Values: Delete Job|Keep Default: Delete Job |
Job Status Polling Interval | Determines the number of seconds between status checks of the Kubernetes job. Default: 20 seconds |
Job:Dummy
The following example shows how to use Job:Dummy to define a job that always ends successfully without running any commands.
"DummyJob" : {
"Type" : "Job:Dummy"
}
Job:zOS:Member
The following example shows how to use Job:zOS:Member to run jobs on a z/OS system:
{
"ZF_DOCS" : {
"Type" : "Folder",
"ControlmServer" : "M2MTROLM",
"FolderLibrary" : "IOAA.CCIDM2.CTM.OPR.SCHEDULE",
"RunAs" : "emuser",
"CreatedBy" : "emuser",
"When" : {
"RuleBasedCalendars" : {
"Included" : [ "EVERYDAY" ],
"EVERYDAY" : {
"Type" : "Calendar:RuleBased",
"When" : {
"DaysRelation" : "OR",
"WeekDays" : [ "NONE" ],
"MonthDays" : [ "ALL" ]
}
}
}
},
"ZJ_DATA" : {
"Type" : "Job:zOS:Member",
"SystemAffinity" : "ABCD",
"SchedulingEnvironment" : "PLEX8ALL",
"ControlDCategory" : "SEQL_FILE",
"PreventNCT2" : "Yes",
"MemberLibrary" : "IOA.WORK.JCL",
"SAC" : "Prev",
"CreatedBy" : "emuser",
"RequestNJENode" : "NODE3",
"RunAs" : "emuser",
"StatisticsCalendar": "CALPERIO",
"TaskInformation" : {
"EmergencyJob" : true,
"RunAsStartedTask" : true,
},
"OutputHandling" : {
"Operation" : "Copy"
"FromClass" : "X",
"Destination" : "NODE3",
},
"History" : {
"RetentionDays": "05",
"RetentionGenerations" : "07"
},
"Archiving" : {
"JobRunsToRetainData" : "4",
"DaysToRetainData" : "1",
"ArchiveSysData" : true
},
"Scheduling" : {
"MinimumNumberOfTracks" : "5",
"PartitionDataSet" : "fgf"
},
"RerunLimit" : {
"RerunMember" : "JOBRETRY",
"Units" : "Minutes",
"Every" : "7"
},
"MustEnd" : {
"Minutes" : "16",
"Hours" : "17",
"Days" : "0"
},
"When" : {
"WeekDays" : [ "NONE" ],
"Months" : [ "NONE" ],
"MonthDays" : [ "NONE" ],
"DaysRelation" : "OR"
},
"CRS" : {
"Type" : "Resource:Lock",
"IfFail" : "Keep",
"LockType" : "Shared"
},
"QRS" : {
"Type" : "Resource:Pool",
"IfFail" : "Keep",
"IfOk" : "Discard",
"Quantity" : "1"
},
"Demo" : {
"Type" : "StepRange",
"FromProgram" : "STEP1",
"FromProcedure" : "SMPIOA",
"ToProgram" : "STEP8",
"ToProcedure" : "CTBTROLB"
},
"IfCollection:zOS_0" : {
"Type" : "IfCollection:zOS",
"Ifs" : [ {
"Type" : "If:zOS:AnyProgramStep",
"ReturnCodes" : [ "OK" ],
"Procedure" : "SMPIOA"
}, "OR", {
"Type" : "If:zOS:EveryProgramStep",
"ReturnCodes" : [ "*$EJ", ">S002" ],
"Procedure" : "SMPIOA"
} ],
"CtbRuleData_2" : {
"Type" : "Action:ControlMAnalyzerRule",
"Name" : "RULEDEMO",
"Arg" : "3"
}
},
"IfCollection:zOS_1" : {
"Type" : "IfCollection:zOS",
"Ifs" : [ {
"Type" : "If:zOS:SpecificProgramStep",
"Program" : "Demo",
"ReturnCodes" : [ "*****" ],
"Procedure" : "SMPIOA"
}, "OR", {
"Type" : "If:zOS:SpecificProgramStep",
"Program" : "STEP5",
"ReturnCodes" : [ ">U0002" ],
"Procedure" : "SMPIOA"
} ],
"IfRerun_2" : {
"Type" : "Action:Restart",
"FromProgram" : "STEP1",
"FromProcedure" : "SMPIOA",
"ToProgram" : "STEP5",
"ToProcedure" : "CTBTROLB"
"Confirm" : false,
}
}
}
}
}
FolderLibrary | Defines the location of the Member that contains the job folder. Rules:
|
SystemAffinity | Defines the identity of the system in which the Job must be initiated and executed (in JES2). Rules:
|
SchedulingEnvironment | Defines the JES2 workload management scheduling environment that is to be associated with the Job. Rules:
|
ControlDCategory | Defines the name of the Control-D Report Decollating Mission Category. If specified, the report decollating mission is scheduled whenever the Job is scheduled under Control-M. Rules:
|
PreventNCT2 | Determines whether to perform data set cleanup before the original job runs. Values: yes | no |
MemberLibrary | Defines the location of the Member that contains the JCL, started task procedure, or warning message. Rules:
|
SAC | (Optional) Determines whether to adjust the logical date for a job converted from a scheduling product other than Control‑M. Valid values:
|
RequestNJENode | Defines the node in the JES network where the Job executes. Rules:
|
StatisticsCalendar | (Optional) Defines the Control-M periodic calendar used to collect statistics relating to the job. This provides more precise statistical information about the job execution. If the StatisticsCalendar parameter is not defined, the statistics are based on all run times of the job. Rules:
|
TaskInformation | Defines additional optional settings for the job. |
EmergencyJob | Determines whether to run the job as an emergency job. Values: true | false |
RunAsStartedTask | Determines whether to run the job as a started task. Values: true | false |
OutputHandling | Defines how the job output is handled. |
Operation | Defines the output handling action. Valid values:
|
FromClass | Defines the previous class name. |
Destination | Defines the output name and full path to move the output. Note: Do not use an internal Control-M directory or subdirectory. Mandatory if the value for Operation is Copy, Move, or ChangeJobClass. An asterisk (*) indicates the original MSGCLASS for the job output. |
History | (Optional) Determines how long to retain the job in the History Jobs file. Note: Retention Days and Retention Generations are mutually exclusive. A value can be specified for either, but not both. |
RetentionDays | Number of days Valid values: 001 - 999 |
RetentionGenerations | Number of generations Valid values: 000 - 999 |
Archiving | Determines how long Control-M Workload Archiving retains the job output |
JobRunsToRetainData | Determines the number of times the job run data is retained in the job output |
DaysToRetainData | Determines the number of days the job run data is retained in the job output |
ArchiveSysData | Determines whether to archive the job SYSDATA. Values: true | false |
Scheduling | Defines the scheduling parameters that determine when or how often the job is scheduled for submission. |
MinimumNumberOfTracks | Determines the minimum number of free partitioned data set tracks required by the library specified for the PartitionDataSet parameter. |
PartitionDataSet | Defines the name of a partitioned data set to check for free space. If PartitionDataSet has fewer than the minimum number of required free tracks specified in the MinimumNumberOfTracks parameter, the job executes. |
RerunLimit | Determines the maximum number of reruns that can be performed for the job. When a z/OS job reruns, the job status is set to NOTOK, even if it was previously specified as OK. |
RerunMember | Defines the name of the JCL member to use when the job automatically reruns. Rules:
|
Units | Defines the unit of measurement to wait between reruns. Valid values:
|
Every | Determines the number of Units to wait between reruns. |
MustEnd | Defines the time of day and days offset when the folder must finish executing. |
Hours | Hour of the day Format: HH Valid values: 00 - 23 |
Minutes | Minutes of the hour Format: MM Valid values: 00 - 59 |
Days | Number of days Format: DDD Valid values: 000 - 120 |
LockType | Determines whether a lock resource is shared or exclusive. For more information, see Resources. |
IfFail | Determines what happens to the lock or pool resource if the job fails to use the resource. For more information, see Resources. Valid values: Release | Keep |
IfOk | Determines what happens to a pool resource if the job successfully uses the resource. For more information, see Resources. Valid values: Discard | Release |
Quantity | Determines the number of lock or pool resources to allocate to the job. For more information, see Resources. |
StepRange | Determines the job steps to execute during restart of a job. Parameters:
Parameter rules:
|
IfCollection:zOS - Ifs | The following unique If objects apply to a z/OS job:
|
IfCollection:zOS - Actions | The following unique Action objects apply to a z/OS job:
|
Job:zOS:InStreamJCL
The following example shows how to create an in-stream JCL job which runs an embedded script on a z/OS system:
{
"ZF_ROBOT" : {
"Type" : "SimpleFolder",
"ControlmServer" : "R2MTROLM",
"FolderLibrary" : "CTMP.V900.SCHEDULE",
"OrderMethod" : "Manual",
"Z_R1" : {
"Type" : "Job:zOS:InStreamJCL",
"JCL" : "0036//ROASMCL JOB ,ASM,CLASS=A,REGION=0M0033// JCLLIB ORDER=IOAP.V900.PROCLIB0024// INCLUDE MEMBER=IOASET0035//S1 EXEC IOATEST,PARM='TERM=C0000'0035//S2 EXEC IOATEST,PARM='TERM=C0000'",
"CreatedBy" : "emuser",
"RunAs" : "emuser",
"When" : {
"WeekDays" : [ "NONE" ],
"MonthDays" : [ "ALL" ],
"DaysRelation" : "OR"
},
"Archiving" : {
"ArchiveSysData" : true
}
}
}
}
For descriptions of parameters, see the list of parameters available for Job:zOS:Member (except for MemberLibrary, which is not relevant here).
The following additional parameter is required for an in-stream JCL job:
JCL | Defines a script as it would be specified in a terminal for the specific computer and is part of the job definition. Each line begins with // and ends with \\n |