Job types


The following sections provide details about the available types of jobs that you can define and the parameters that you use to define each type of job.

Job:Command

The following example shows how to use the Job:Command to run operating system commands.

"JobName": {
"Type" : "Job:Command",
   "Command" : "echo hello",
       "PreCommand": "echo before running main command",
       "PostCommand": "echo after running main command",
   "Host" : "myhost.mycomp.com",
   "RunAs" : "user1"  
}

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.

PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.


Job:Script

 The following example shows how to use Job:Script to run a script from a specified script file.

    "JobWithPreAndPost": {
       "Type" : "Job:Script",
       "FileName" : "task1123.sh",
       "FilePath" : "/home/user1/scripts",
       "PreCommand": "echo before running script",
       "PostCommand": "echo after running script",
       "Host" : "myhost.mycomp.com",
       "RunAs" : "user1"   
   }

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.

FileName together with FilePath

Indicates the location of the script. 

NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts".
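
For example, a Job:Script definition that runs a script from a Windows file system might look like the following minimal sketch; the host, folder, and script names are illustrative:

    "JobOnWindows": {
        "Type" : "Job:Script",
        "FileName" : "task1123.bat",
        "FilePath" : "c:\\tmp\\scripts",
        "Host" : "mywinhost.mycomp.com",
        "RunAs" : "user1"
    }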

PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.


Job:EmbeddedScript

The following example shows how to use Job:EmbeddedScript to run a script that you include in the JSON code. Control-M deploys this script to the Agent host during job submission. This eliminates the need to deploy scripts as part of your application stack.

    "EmbeddedScriptJob":{
       "Type":"Job:EmbeddedScript",
       "Script":"#!/bin/bash\\necho \"Hello world\"",
       "Host":"myhost.mycomp.com",
       "RunAs":"user1",
       "FileName":"myscript.sh",
       "PreCommand": "echo before running script",
       "PostCommand": "echo after running script"
   }

Script

Full content of the script, up to 64 kilobytes.

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.

FileName

Name of a script file. This property is used for the following purposes:

  • The file extension provides an indication of how to interpret the script. If this is the only purpose of this property, the file does not have to exist.
  • If you specify an alternative script override using the OverridePath job property, the FileName property indicates the name of the alternative script file.

PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.


Job:FileTransfer

The following example shows a Job:FileTransfer for a file transfer from a local filesystem to an SFTP server:

{
  "FileTransferFolder" :
  {
    "Type" : "Folder",
    "Application" : "aft",
    "TransferFromLocalToSFTP" :
    {
      "Type" : "Job:FileTransfer",
      "ConnectionProfileSrc" : "LocalConn",
      "ConnectionProfileDest" : "SftpConn",
      "NumberOfRetries": "3",
      "Host": "AgentHost",
      "FileTransfers" :
      [
        {
          "Src" : "/home/controlm/file1",
          "Dest" : "/home/controlm/file2",
          "TransferType": "Binary",
          "TransferOption": "SrcToDest"
        },
        {
          "Src" : "/home/controlm/otherFile1",
          "Dest" : "/home/controlm/otherFile2",
          "TransferOption": "DestToSrc"
        }
      ]
    }
  }
}

Here is another example for a file transfer from an S3 storage service to a local filesystem:

{
"MyS3AftFolder": {
  "Type": "Folder",
  "Application": "aft",
  "TransferFromS3toLocal":
  {
     "Type": "Job:FileTransfer",
     "ConnectionProfileSrc": "amazonConn",
     "ConnectionProfileDest": "LocalConn",
       "NumberOfRetries": "4",
     "S3BucketName": "bucket1",
     "Host": "agentHost",
     "FileTransfers": [
        {
"Src" : "folder/sub_folder/file1",
"Dest" : "folder/sub_folder/file2"
        }
      ]
  }
}
}

And here is another example for a file transfer from a local filesystem to an AS2 server.

Note: File transfers that use the AS2 protocol are supported only in one direction — from a local filesystem to an AS2 server.

{
 "MyAs2AftFolder": {
   "Type": "Folder",
   "Application": "AFT",
   "MyAftJob_AS2":
   {
     "Type": "Job:FileTransfer",
     "ConnectionProfileSrc": "localAConn",
     "ConnectionProfileDest": "as2Conn",
       "NumberOfRetries": "Default",
     "Host": "agentHost",
     "FileTransfers": [
       {
         "Src": "/dev",
         "Dest": "/home/controlm/",
         "As2Subject": "Override subject",
         "As2Message": "Override conntent type"
       }
      ]
   }
 }
}

The following parameters were used in the examples above:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host.
In addition, Control-M File Transfer plugin version 8.0.00 or later must be installed.

ConnectionProfileSrc

The connection profile to use as the source

ConnectionProfileDest

The connection profile to use as the destination

ConnectionProfileDualEndpoint

If you need to use a dual-endpoint connection profile that you have set up, specify the name of the dual-endpoint connection profile instead of ConnectionProfileSrc and ConnectionProfileDest.

A dual-endpoint connection profile can be used for FTP, SFTP, and Local filesystem transfers. For more information about dual-endpoint connection profiles, see ConnectionProfile:FileTransfer:DualEndPoint.
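
For example, a transfer job that uses a dual-endpoint connection profile might look like the following minimal sketch, assuming a dual-endpoint connection profile named MyDualEndpointConn has already been defined; the file paths are illustrative:

    "TransferWithDualEndpoint" :
    {
        "Type" : "Job:FileTransfer",
        "ConnectionProfileDualEndpoint" : "MyDualEndpointConn",
        "Host" : "AgentHost",
        "FileTransfers" :
        [
            {
                "Src" : "/home/controlm/file1",
                "Dest" : "/home/controlm/file2",
                "TransferOption" : "SrcToDest"
            }
        ]
    }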

NumberOfRetries

Number of connection attempts after a connection failure

Range of values: 0–99 or "Default" (to inherit the default)

Default: 5 attempts

S3BucketName

For file transfers between a local filesystem and an Amazon S3 or S3-compatible storage service: The name of the S3 bucket

FileTransfers

A list of file transfers to perform during job execution, each with the following properties:

   Src

Full path to the source file

   Dest

Full path to the destination file

   TransferType

(Optional) FTP transfer mode, either Ascii (for a text file) or Binary (non-textual data file).

Ascii mode handles the conversion of line endings within the file, while Binary mode creates a byte-for-byte identical copy of the transferred file.

Default: "Binary"

   TransferOption

(Optional) The following is a list of the transfer options:

  • SrcToDest - transfer file from source to destination
  • DestToSrc - transfer file from destination to source
  • SrcToDestFileWatcher - watch the file on the source and transfer it to the destination only when all criteria are met
  • DestToSrcFileWatcher - watch the file on the destination and transfer it to the source only when all criteria are met
  • FileWatcher - watch a file, and if successful, the succeeding job will run
  • DirectoryListing (relevant from version 9.0.20.000) - list the source and destination files
  • SyncSrcToDest (relevant from version 9.0.20.000) - scan source and destination, transfer only new or modified files from source to destination, and delete destination files that do not exist on the source (see the sketch after this list of transfer properties)
  • SyncDestToSrc (relevant from version 9.0.20.000) - scan source and destination, transfer only new or modified files from destination to source, and delete source files that do not exist on the destination

Default: "SrcToDest"

   As2Subject

Optional for AS2 file transfer: A text to use to override the subject of the AS2 message.

   As2Message

Optional for AS2 file transfer: A text to use to override the content type in the AS2 message.
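
For example, a single entry in the FileTransfers list that uses one of the synchronization options might look like the following minimal sketch; the directory paths are illustrative:

    {
        "Src" : "/home/controlm/reports/",
        "Dest" : "/backup/reports/",
        "TransferOption" : "SyncSrcToDest"
    }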

The following example presents a File Transfer job in which the transferred file is watched using the File Watcher utility:

{
  "FileTransferFolder" :
  {
    "Type" : "Folder",
    "Application" : "aft",
    "TransferFromLocalToSFTPBasedOnEvent" :
    {
      "Type" : "Job:FileTransfer",
      "Host" : "AgentHost",
      "ConnectionProfileSrc" : "LocalConn",
      "ConnectionProfileDest" : "SftpConn",
      "NumberOfRetries": "3",
      "FileTransfers" :
      [
        {
          "Src" : "/home/sftp/file1",
          "Dest" : "/home/sftp/file2",
          "TransferType": "Binary",
          "TransferOption" : "SrcToDestFileWatcher",
          "PreCommandDest" :
          {
            "action" : "rm",
            "arg1" : "/home/sftp/file2"
          },
          "PostCommandDest" :
          {
            "action" : "chmod",
            "arg1" : "700",
            "arg2" : "/home/sftp/file2"
          },
          "FileWatcherOptions":
          {
            "MinDetectedSizeInBytes" : "200",
            "TimeLimitPolicy" : "WaitUntil",
            "TimeLimitValue" : "2000",
            "MinFileAge" : "3Min",
            "MaxFileAge" : "10Min",
            "AssignFileNameToVariable" : "FileNameEvent",
            "TransferAllMatchingFiles" : true
          }
        }
      ]
    }
  }
}

This example contains the following additional optional parameters: 

PreCommandSrc

PreCommandDest

PostCommandSrc

PostCommandDest

Defines commands that occur before and after job execution.
Each command can run only one action at a time. The available actions and their arguments are the following (see the sketch after this list):

chmod

Change file access permission:

arg1: mode

arg2: file name

mkdir

Create a new directory

arg1: directory name

rename

Rename a file/directory

arg1: current file name

arg2: new file name

rm

Delete a file

arg1: file name

rmdir

Delete a directory

arg1: directory name
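
For example, a transfer entry could create a destination directory before the transfer and rename the transferred file afterward. The following fragment is a minimal sketch; the directory and file names are illustrative:

    {
        "Src" : "/home/controlm/file1",
        "Dest" : "/staging/inbox/file1.tmp",
        "PreCommandDest" :
        {
            "action" : "mkdir",
            "arg1" : "/staging/inbox"
        },
        "PostCommandDest" :
        {
            "action" : "rename",
            "arg1" : "/staging/inbox/file1.tmp",
            "arg2" : "/staging/inbox/file1"
        }
    }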

FileWatcherOptions

Additional options for watching the transferred file using the File Watcher utility:

    MinDetectedSizeInBytes

Defines the minimum number of bytes transferred before checking if the file size is static

    TimeLimitPolicy/
    TimeLimitValue

Defines the time limit to watch a file:
TimeLimitPolicy options: "WaitUntil", "MinutesToWait"

TimeLimitValue: If TimeLimitPolicy is WaitUntil, TimeLimitValue is the specific time to wait until, for example, 04:22 means 4:22 AM.
If TimeLimitPolicy is MinutesToWait, TimeLimitValue is the number of minutes to wait.

    MinFileAge

Defines the minimum number of years, months, days, hours, and/or minutes that must have passed since the watched file was last modified

Valid values: 9999Y9999M9999d9999h9999Min

For example: 2y3d7h

    MaxFileAge

Defines the maximum number of years, months, days, hours, and/or minutes that can pass since the watched file was last modified

Valid values: 9999Y9999M9999d9999h9999Min

For example: 2y3d7h

    AssignFileNameToVariable

Defines the variable name that contains the detected file name

    TransferAllMatchingFiles

Whether to transfer all matching files (value of true) or only the first matching file (value of false) after waiting until the watching criteria are met.

Valid values: true | false
Default value: false


Job:FileWatcher

A File Watcher job enables you to detect the successful completion of a file transfer activity that creates or deletes a file. The following example shows how to manage File Watcher jobs using Job:FileWatcher:Create and Job:FileWatcher:Delete:

    "FWJobCreate" : {
   "Type" : "Job:FileWatcher:Create",
"RunAs":"controlm",
    "Path" : "C:/path*.txt",
   "SearchInterval" : "45",
   "TimeLimit" : "22",
   "StartTime" : "201705041535",
   "StopTime" : "201805041535",
   "MinimumSize" : "10B",
   "WildCard" : true,
   "MinimalAge" : "1Y",
   "MaximalAge" : "1D2H4MIN"
   },
   "FWJobDelete" : {
       "Type" : "Job:FileWatcher:Delete",
       "RunAs":"controlm",
       "Path" : "C:/path.txt",
       "SearchInterval" : "45",
       "TimeLimit" : "22",
       "StartTime" : "201805041535",
       "StopTime" : "201905041535"
   }

This example contains the following parameters:

Path

Path of the file to be detected by the File Watcher

You can include wildcards in the path — * for any number of characters, and ? for any single character.

SearchInterval

Interval (in seconds) between successive attempts to detect the creation/deletion of a file

TimeLimit

Maximum time (in minutes) to run the process without detecting the file at its minimum size (for Job:FileWatcher:Create) or detecting its deletion (for Job:FileWatcher:Delete). If the file is not detected/deleted in this specified time frame, the process terminates with an error return code.

Default: 0 (no time limit)

StartTime

The time at which to start watching the file

The format is yyyymmddHHMM. For example, 201805041535 means that the File Watcher will start watching the file on May 4, 2018 at 3:35 PM.
Alternatively, to specify a time on the current date, use the HHMM format.

StopTime

The time at which to stop watching the file.

Format: yyyymmddHHMM or HHMM (for the current date)

MinimumSize

Minimum file size to monitor for, when watching a created file

Follow the specified number with the unit: B for bytes, KB for kilobytes, MB for megabytes, or GB for gigabytes.

If the file name (specified by the Path parameter) contains wildcards, minimum file size is monitored only if you set the Wildcard parameter to true.

Wildcard

Whether to monitor minimum file size of a created file if the file name (specified by the Path parameter) contains wildcards

Values: true | false
Default: false

MinimalAge

(Optional) The minimum number of years, months, days, hours, and/or minutes that must have passed since the created file was last modified. 

For example: 2Y10M3D5H means that 2 years, 10 months, 3 days, and 5 hours must pass before the file will be watched. 2H10Min means that 2 hours and 10 minutes must pass before the file will be watched.

MaximalAge

(Optional) The maximum number of years, months, days, hours, and/or minutes that can pass since the created file was last modified.

For example: 2Y10M3D5H means that after 2 years, 10 months, 3 days, and 5 hours have passed, the file will no longer be watched. 2H10Min means that after 2 hours and 10 minutes have passed, the file will no longer be watched.


Job:Database

The following types of database jobs are available:

Job:Database:EmbeddedQuery

The following example shows how to create a database job that runs an embedded query.

{
  "PostgresDBFolder": {
    "Type": "Folder",
    "EmbeddedQueryJobName": {
      "Type": "Job:Database:EmbeddedQuery",
      "ConnectionProfile": "POSTGRESQL_CONNECTION_PROFILE",
      "Query": "SELECT %%firstParamName AS VAR1 \\n FROM DUMMY \\n ORDER BY \\t VAR1 DESC",
      "Host": "${agentName}",
      "RunAs": "PostgressCP",
      "Variables": [
        {
          "firstParamName": "firstParamValue"
        }
      ],
      "Autocommit": "N",
      "OutputExcecutionLog": "Y",
      "OutputSQLOutput": "Y",
      "SQLOutputFormat": "XML"
    }
  }
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Query

The embedded SQL query that you want to run.

The SQL query can contain auto edit variables. During the job run, these variables are replaced by the values that you specify in the Variables parameter (described below).

For long queries, you can specify delimiters using \\n (new line) and \\t (tab).

Variables

Variables are pairs of name and value. Every name that appears in the embedded script will be replaced by its value pair.

The maximum length of a variable name is 38 alphanumeric characters and it is case-sensitive.
The variable name cannot begin with a number and must not contain blank spaces or any of the following special characters:
< > [ ] { } ( ) = ; ` ~ | : ? . + - * / & ^ # @ ! , " '

The following optional parameters are also available for all types of database jobs:

Autocommit

(Optional) Commits statements that complete successfully to the database

Default: N

OutputExcecutionLog

(Optional) Shows the execution log in the job output

Default: Y

OutputSQLOutput

(Optional) Shows the SQL sysout in the job output

Default: N

SQLOutputFormat

(Optional) Defines the output format as either Text, XML, CSV, or HTML

Default: Text

Job:Database:SQLScript

The following example shows how to create a database job that runs a SQL script from a file system.

{
  "OracleDBFolder": {
    "Type": "Folder",
    "testOracle": {
      "Type": "Job:Database:SQLScript",
      "Host": "AgentHost",
      "SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
      "ConnectionProfile": "ORACLE_CONNECTION_PROFILE",
      "Parameters": [
        {"firstParamName": "firstParamValue"},
        {"secondParamName": "secondParamValue"}
      ]
    }
  }
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Parameters

Parameters are pairs of name and value. Every name that appears in the SQL script will be replaced by its value pair.

For additional optional parameters, see above.

Another example:

{
  "OracleDBFolder": {
    "Type": "Folder",
    "testOracle": {
      "Type": "Job:Database:SQLScript",
      "Host": "app-redhat",
      "SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
      "ConnectionProfile": "ORACLE_CONNECTION_PROFILE"
    }
  }
}

Job:Database:StoredProcedure

The following example shows how to create a database job that runs a program that is stored on the database.

{
  "storeFolder": {
    "Type": "Folder",
    "jobStoredProcedure": {
      "Type": "Job:Database:StoredProcedure",
      "Host": "myhost.mycomp.com",
      "StoredProcedure": "myProcedure",
      "Parameters": [ "value1","variable1",["value2","variable2"]],
      "ReturnValue":"RV",
      "Schema": "public",
      "ConnectionProfile": "DB-PG-CON"
    }
  }
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

StoredProcedure

Name of stored procedure that the job runs

Parameters

A comma-separated list of values and variables for all parameters in the procedure, in the order of their appearance in the procedure.

The value that you specify for any specific parameter in the procedure depends on the type of parameter:

  • For an In parameter, specify an input value.
  • For an Out parameter, specify an output variable.
  • For an In/Out parameter, specify a pair of input value + output variable, enclosed in brackets: [value,variable]

In the example above, three parameters are listed, in the following order: [In,Out,Inout]

ReturnValue

A variable for the Return parameter (if the procedure contains such a parameter)

Schema

The database schema where the stored procedure resides

Package

(Oracle only) Name of a package in the database where the stored procedure resides

The default is "*", that is, any package in the database. See the sketch after this list of parameters.

ConnectionProfile

Name of a connection profile that contains the details of the connection to the database

For additional optional parameters, see above.
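
For example, for an Oracle database, a job that runs a procedure stored in a specific package might look like the following minimal sketch; the connection profile, schema, package, and procedure names are illustrative:

    "jobOraclePackageProcedure": {
        "Type": "Job:Database:StoredProcedure",
        "Host": "myhost.mycomp.com",
        "StoredProcedure": "myProcedure",
        "Package": "myPackage",
        "Schema": "mySchema",
        "ConnectionProfile": "DB-ORACLE-CON"
    }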

Job:Database:MSSQL:AgentJob

The following example (relevant from version 9.0.19.210) shows how to create an MSSQL Agent job, for management of a job defined in the SQL server.

{
   "MSSQLFolder": {
       "Type": "Folder",
       "ControlmServer": "LocalControlM",
       "MSSQLAgentJob": {
           "Type": "Job:Database:MSSQL:AgentJob",
           "ConnectionProfile": "MSSQL-WE-EXAMPLE",
           "Host": "agentHost",
           "JobName": "get_version",
           "Category": "Data Collector"
       }
   }
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

JobName

The name of the job defined in the SQL server

Category

The category of the job, as defined in the SQL server

For additional optional parameters, see above.

Job:Database:MSSQL:SSIS

The following example (relevant from version 9.0.19.220) shows how to create SSIS Package jobs for execution of SQL Server Integration Services (SSIS) packages:

{
  "MSSQLFolder": {
      "Type": "Folder",
      "ControlmServer": "LocalControlM",
      "SSISCatalog": {
           "Type": "Job:Database:MSSQL:SSIS",
           "ConnectionProfile": "MSSQL-CP-NAME",
           "Host": "agentHost",
           "PackageSource": "SSIS Catalog",
           "PackageName": "\\Data Collector\\SqlTraceCollect",
           "CatalogEnv": "ENV_NAME",
           "ConfigFiles": [
               "C:\\Users\\dbauser\\Desktop\\test.dtsConfig",
               "C:\\Users\\dbauser\\Desktop\\test2.dtsConfig"
            ],
           "Properties": [
               {
                   "PropertyName": "PropertyValue"
               },
               {
                   "PropertyName2": "PropertyValue2"
               }
            ]
       },
       "SSISPackageStore": {
           "Type": "Job:Database:MSSQL:SSIS",
           "ConnectionProfile": "MSSQL-CP-NAME",
           "Host": "agentHost",
           "PackageSource": "SSIS Package Store",
           "PackageName": "\\Data Collector\\SqlTraceCollect",
           "ConfigFiles": [
               "C:\\Users\\dbauser\\Desktop\\test.dtsConfig",
               "C:\\Users\\dbauser\\Desktop\\test2.dtsConfig"
            ],
           "Properties": [
               {
                   "PropertyName": "PropertyValue"
               },
               {
                   "PropertyName2": "PropertyValue2"
               }
            ]
       }    
   }
} 

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

PackageSource

The source of the SSIS package, one of the following:

  • SQL Server — Package stored on an MSSQL database.
  • File System — Package stored on the Control-M/Agent's local file system (see the sketch after this list of parameters).
  • SSIS Package Store — Package stored on a file system that is managed by an SSIS service.
  • SSIS Catalog — Package stored on a file system that is managed by an SSIS Catalog service.

PackageName

The name of the SSIS package.

CatalogEnv

If PackageSource is 'SSIS Catalog': The name of the environment on which to run the package.

Use this optional parameter if you want to run the package on a different environment from the one that you are currently using.

ConfigFiles

(Optional) Names of configuration files that contain specific data that you want to apply to the SSIS package

Properties

(Optional) Pairs of names and values for properties defined in the SSIS package.

Each property name is replaced by its defined value during SSIS package execution.

For additional optional parameters, see above.
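
For example, a minimal sketch of a job that runs a package stored on the Agent's local file system might look like the following; the connection profile name and package path are illustrative:

    "SSISFileSystem": {
        "Type": "Job:Database:MSSQL:SSIS",
        "ConnectionProfile": "MSSQL-CP-NAME",
        "Host": "agentHost",
        "PackageSource": "File System",
        "PackageName": "C:\\Users\\dbauser\\Packages\\MyPackage.dtsx"
    }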


Job:Hadoop

Various types of Hadoop jobs are available for you to define using the Job:Hadoop objects:

Job:Hadoop:Spark:Python

The following example shows how to use Job:Hadoop:Spark:Python to run a Spark Python program.

    "ProcessData": {
       "Type": "Job:Hadoop:Spark:Python",
       "Host" : "edgenode",
       "ConnectionProfile": "DEV_CLUSTER",

       "SparkScript": "/home/user/processData.py"
   }

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
   "Type": "Job:Hadoop:Spark:Python",
   "Host" : "edgenode",
   "ConnectionProfile": "DEV_CLUSTER",
   "SparkScript": "/home/user/processData.py",            
   "Arguments": [
       "1000",
       "120"
    ],            

   "PreCommands": {
       "FailJobOnCommandFailure" :false,
       "Commands" : [
           {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
           {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
   },

   "PostCommands": {
       "FailJobOnCommandFailure" :true,
       "Commands" : [
           {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
   },

   "SparkOptions": [
       {"--master": "yarn"},
       {"--num":"-executors 50"}
    ]
}

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Spark:ScalaJava

The following example shows how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.

 "ProcessData": {
 "Type": "Job:Hadoop:Spark:ScalaJava",
   "Host" : "edgenode",
   "ConnectionProfile": "DEV_CLUSTER",

   "ProgramJar": "/home/user/ScalaProgram.jar",
"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName" 
}

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
 "Type": "Job:Hadoop:Spark:ScalaJava",
   "Host" : "edgenode",
   "ConnectionProfile": "DEV_CLUSTER",

   "ProgramJar": "/home/user/ScalaProgram.jar"
"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",

   "Arguments": [
       "1000",
       "120"
    ],            

   "PreCommands": {
       "FailJobOnCommandFailure" :false,
       "Commands" : [
           {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
           {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
   },

   "PostCommands": {
       "FailJobOnCommandFailure" :true,
       "Commands" : [
           {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
   },

   "SparkOptions": [
       {"--master": "yarn"},
       {"--num":"-executors 50"}
    ]
}

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Pig

The following example shows how to use Job:Hadoop:Pig to run a Pig script.

"ProcessDataPig": {
   "Type" : "Job:Hadoop:Pig",
   "Host" : "edgenode",
   "ConnectionProfile": "DEV_CLUSTER",

   "PigScript" : "/home/user/script.pig" 
}

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "ProcessDataPig1": {
       "Type" : "Job:Hadoop:Pig",
       "ConnectionProfile": "DEV_CLUSTER",
       "PigScript" : "/home/user/script.pig",            
       "Host" : "edgenode",
       "Parameters" : [
           {"amount":"1000"},
           {"volume":"120"}
        ],            
       "PreCommands": {
           "FailJobOnCommandFailure" :false,
           "Commands" : [
               {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
               {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
       },
       "PostCommands": {
           "FailJobOnCommandFailure" :true,
           "Commands" : [
               {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
       }
   }

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Sqoop

The following example shows how to use Job:Hadoop:Sqoop to run a Sqoop job.

    "LoadDataSqoop":
    {
     "Type" : "Job:Hadoop:Sqoop",
 "Host" : "edgenode",
     "ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",

     "SqoopCommand" : "import --table foo --target-dir /dest_dir" 
    }

 

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "LoadDataSqoop1" :
    {
       "Type" : "Job:Hadoop:Sqoop",
       "Host" : "edgenode",
       "ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",

       "SqoopCommand" : "import --table foo",
"SqoopOptions" : [
{"--warehouse-dir":"/shared"},
{"--default-character-set":"latin1"}
],
 
       "SqoopArchives" : "",
       
       "SqoopFiles": "",
       
       "PreCommands": {
           "FailJobOnCommandFailure" :false,
           "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
       "PostCommands": {
           "FailJobOnCommandFailure" :true,
           "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

SqoopOptions

These options are passed as arguments to the specific Sqoop tool.

SqoopArchives

Indicates the location of the Hadoop archives.

SqoopFiles

Indicates the location of the Sqoop files.


Job:Hadoop:Hive

The following example shows how to use Job:Hadoop:Hive to run a Hive beeline job.

    "ProcessHive":
   {
     "Type" : "Job:Hadoop:Hive",
     "Host" : "edgenode",
     "ConnectionProfile" : "HIVE_CONNECTION_PROFILE",

     "HiveScript" : "/home/user1/hive.script"
   }

 

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "ProcessHive1" :
   {
       "Type" : "Job:Hadoop:Hive",
       "Host" : "edgenode",
       "ConnectionProfile" : "HIVE_CONNECTION_PROFILE",


       "HiveScript" : "/home/user1/hive.script", 
       "Parameters" : [
           {"ammount": "1000"},
           {"topic": "food"}
        ],

       "HiveArchives" : "",
       
       "HiveFiles": "",
       
       "HiveOptions" : [
           {"hive.root.logger": "INFO,console"}
        ],

       "PreCommands": {
           "FailJobOnCommandFailure" :false,
           "Commands" : [
               {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
               {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
       },
       "PostCommands": {
           "FailJobOnCommandFailure" :true,
           "Commands" : [
               {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
       }
   }

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

HiveSciptParameters

Passed to beeline as --hivevar "name"="value".

HiveProperties

Passed to beeline as --hiveconf "key"="value".

HiveArchives

Passed to beeline as --hiveconf mapred.cache.archives="value".

HiveFiles

Passed to beeline as --hiveconf mapred.cache.files="value".


Job:Hadoop:DistCp

The following example shows how to use Job:Hadoop:DistCp to run a DistCp job.  DistCp (distributed copy) is a tool used for large inter/intra-cluster copying.

        "DistCpJob" :
       {
           "Type" : "Job:Hadoop:DistCp",
           "Host" : "edgenode",
           "ConnectionProfile": "DEV_CLUSTER",
        
           "TargetPath" : "hdfs://nns2:8020/foo/bar",
           "SourcePaths" :
            [
               "hdfs://nn1:8020/foo/a"
            ]
       }  

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.  See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "DistCpJob" :
   {
       "Type" : "Job:Hadoop:DistCp",
       "Host" : "edgenode",
       "ConnectionProfile" : "HADOOP_CONNECTION_PROFILE",
       "TargetPath" : "hdfs://nns2:8020/foo/bar",
       "SourcePaths" :
        [
           "hdfs://nn1:8020/foo/a",
           "hdfs://nn1:8020/foo/b"
        ],
       "DistcpOptions" : [
           {"-m":"3"},
           {"-filelimit ":"100"}
        ]
   }

TargetPath, SourcePaths and DistcpOptions

Passed to the distcp tool in the following way: distcp <Options> <TargetPath> <SourcePaths>.


Job:Hadoop:HDFSCommands

The following example shows how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.

        "HdfsJob":
       {
           "Type" : "Job:Hadoop:HDFSCommands",
           "Host" : "edgenode",
           "ConnectionProfile": "DEV_CLUSTER",

           "Commands": [
               {"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
               {"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
       }

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.


Job:Hadoop:HDFSFileWatcher

The following example shows how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.

    "HdfsFileWatcherJob" :
   {
       "Type" : "Job:Hadoop:HDFSFileWatcher",
       "Host" : "edgenode",
       "ConnectionProfile" : "DEV_CLUSTER",

       "HdfsFilePath" : "/inputs/filename",
       "MinDetecedSize" : "1",
       "MaxWaitTime" : "2"
   }

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

HdfsFilePath

Specifies the full path of the file being watched.

MinDetecedSize

Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives, but the size is not met, the job continues to watch the file.

MaxWaitTime

Defines the maximum number of minutes to wait for the file to meet the watching criteria. If criteria are not met (file did not arrive, or minimum size was not reached) the job fails after this maximum number of minutes.


Job:Hadoop:Oozie

The following example shows how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.

    "OozieJob": {
       "Type" : "Job:Hadoop:Oozie",
       "Host" : "edgenode",
       "ConnectionProfile": "DEV_CLUSTER",


       "JobPropertiesFile" : "/home/user/job.properties",
       "OozieOptions" : [
         {"inputDir":"/usr/tucu/inputdir"},
         {"outputDir":"/usr/tucu/outputdir"}
        ]
   }

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

JobPropertiesFile

The path to the job properties file.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

OozieOptions

Set or override values for the given job properties.


Job:Hadoop:MapReduce

 The following example shows how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.

    "MapReduceJob" :
   {
      "Type" : "Job:Hadoop:MapReduce",
       "Host" : "edgenode",
       "ConnectionProfile": "DEV_CLUSTER",


       "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
       "MainClass" : "pi",
       "Arguments" :[
           "1",
           "2"]
   }

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "MapReduceJob1" :
   {
       "Type" : "Job:Hadoop:MapReduce",
       "Host" : "edgenode",
       "ConnectionProfile": "DEV_CLUSTER",


       "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
       "MainClass" : "pi",
       "Arguments" :[
           "1",
           "2"
        ],
       "PreCommands": {
           "FailJobOnCommandFailure" :false,
           "Commands" : [
               {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
               {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
       },
       "PostCommands": {
           "FailJobOnCommandFailure" :true,
           "Commands" : [
               {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
       }    
   }

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:MapredStreaming

The following example shows how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.

     "MapredStreamingJob1": {
       "Type": "Job:Hadoop:MapredStreaming",
       "Host" : "edgenode",
       "ConnectionProfile": "DEV_CLUSTER",


       "InputPath": "/user/robot/input/*",
       "OutputPath": "/tmp/output",
       "MapperCommand": "mapper.py",
       "ReducerCommand": "reducer.py",
       "GeneralOptions": [
           {"-D": "fs.permissions.umask-mode=000"},
           {"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}              
        ]
   }

ConnectionProfile

Name of the Hadoop connection profile to use for the connection

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

GeneralOptions

Additional Hadoop command options passed to the hadoop-streaming.jar, including generic options and streaming options.


Job:SAP

SAP-type jobs enable you to manage SAP processes through the Control-M environment. To manage SAP-type jobs, you must have the Control-M for SAP plugin installed in your Control-M environment.

The following JSON objects are available for creating SAP-type jobs:

Job:SAP:R3:COPY

This job type enables you to create a new SAP R3 job by duplicating an existing job. The following example shows how to use Job:SAP:R3:COPY:

"JobSapR3Copy" : {
   "Type" : "Job:SAP:R3:COPY",
   "ConnectionProfile":"SAP-CON",
   "SapJobName" : "CHILD_1",
   "Exec": "Server",
   "Target" : "Server-name",
   "JobCount" : "SpecificJob",
   "JobCountSpecificName" : "sap-job-1234",
   "NewJobName" : "My-New-Sap-Job",
   "StartCondition" : "AfterEvent",
   "AfterEvent" : "HOLA",
   "AfterEventParameters" : "parm1 parm2",
   "RerunFromPointOfFailure": true,
   "CopyFromStep" : "4",
   "PostJobAction" : {
       "Spool" : "CopyToFile",
       "SpoolFile": "spoolfile.log",
       "SpoolSaveToPDF" : true,
       "JobLog" : "CopyToFile",
       "JobLogFile": "Log.txt",
       "JobCompletionStatusWillDependOnApplicationStatus" : true
    },
   "DetectSpawnedJob" : {
       "DetectAndCreate": "SpecificJobDefinition",
       "JobName" : "Specific-Job-123",
       "StartSpawnedJob" : true,
       "JobEndInControlMOnlyAftreChildJobsCompleteOnSap" : true,
       "JobCompletionStatusDependsOnChildJobsStatus" : true
    }
}

This SAP job object uses the following parameters:

ConnectionProfile

Name of the SAP connection profile to use for the connection

SapJobName

Name of SAP job to copy

Exec

Type of execution target where the SAP job will run, one of the following:

  • Server — an SAP application server
  • Group — an SAP group

Target

The name of the SAP application server or SAP group (depending on the value specified in the previous parameter)

JobCount

How to define a unique ID number for the SAP job, one of the following options:

  • FirstScheduled (the default)
  • LastScheduled
  • First
  • Last
  • SpecificJob

If you specify SpecificJob, you must provide the next parameter.

JobCountSpecificName

A unique SAP job ID number for a specific job (that is, for when JobCount is set to SpecificJob)

NewJobName

Name of the newly created job

StartCondition

Specifies when the job should run, one of the following:

  • ASAP — Job runs as soon as a background work process is available for it in SAP (the default). If the job cannot start immediately, it is transformed in SAP into a time-based job.
  • Immediate — Job runs immediately. If there are no work processes available to run it, the job fails.
  • AfterEvent — Job waits for an event that you specify (in the next two parameters) to be triggered.

AfterEvent

The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent)

AfterEventParameters

Parameters in the SAP event to watch for.

Use space characters to separate multiple parameters.

RerunFromPointOfFailure

Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default)

Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun.

CopyFromStep

The number of a specific step in the SAP R/3 job from which to rerun or copy

The default is step 1 (that is, the beginning of the job).

Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored.

PostJobAction

This object groups together several parameters that control post-job actions for the SAP R/3 job.

Spool

How to manage spool output, one of the following options:

  • DoNotCopy (the default)
  • CopyToOutput
  • CopyToFile

SpoolFile

The file to which to copy the job's spool output (if Spool is set to CopyToFile)

SpoolSaveToPDF

Whether to save the job's spool output in PDF format (if Spool is set to CopyToFile)

JobLog

How to manage job log output, one of the following options:

  • DoNotCopy
  • CopyToOutput (the default)
  • CopyToFile

JobLogFile

The file to which to copy the job's log output (if JobLog is set to CopyToFile)

JobCompletionStatusWillDependOnApplicationStatus

Whether job completion status depends on SAP application status, either true or false (the default)

DetectSpawnedJob

This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job

DetectAndCreate

How to determine the properties of detected spawned jobs:

  • CurrentJobDefinition — Properties of detected spawned jobs are identical to the current (parent) job properties (the default)
  • SpecificJobDefinition — Properties of detected spawned jobs are derived from a different job that you specify

JobName

Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition)

Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values.

StartSpawnedJob

Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default)

JobEndInControlMOnlyAftreChildJobsCompleteOnSap

Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default)

JobCompletionStatusDependsOnChildJobsStatus

Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default)

When set to true, the parent job does not end OK if any child job fails.

This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true.

Job:SAP:BW:ProcessChain

This job type runs and monitors a Process Chain in SAP Business Warehouse (SAP BW).

NOTE: For the job that you define through Control-M Automation API to work properly, ensure that the Process Chain defined in the SAP BW system has Start Using Meta Chain or API as the start condition for the trigger process (Start Process) of the Process Chain. To configure this parameter, from the SAP transaction RSPC, right-click the trigger process and select Maintain Variant.

The following example shows how to use Job:SAP:BW:ProcessChain:

"JobSapBW": {
   "Type": "Job:SAP:BW:ProcessChain",
   "ConnectionProfile": "PI4-BW",
   "ProcessChainDescription": "SAP BW Process Chain",
   "Id": "123456",
   "RerunOption": "RestartFromFailiurePoint",
   "EnablePeridoicJob": true,
   "ConsiderOnlyOverallChainStatus": true,
   "RetrieveLog": false,
   "DetectSpawnedJob": {
       "DetectAndCreate": "SpecificJobDefinition",
       "JobName": "ChildJob",
       "StartSpawnedJob": false,
       "JobEndInControlMOnlyAftreChildJobsCompleteOnSap": false,
       "JobCompletionStatusDependsOnChildJobsStatus": false
       }
   }

This SAP job object uses the following parameters:

ConnectionProfile

Name of the SAP connection profile to use for the connection.

ProcessChainDescription

The description of the Process Chain that you want to run and monitor, as defined in SAP BW.

Maximum length of the textual description: 60 characters

Id

ID of the Process Chain that you want to run and monitor.

RerunOption

The rerun policy to apply to the job after job failure, one of the following values:

  • RestartFromFailiurePoint — Restart the job from the point of failure (the default)
  • RerunFromStart — Rerun the job from the beginning

EnablePeridoicJob

Whether the first run of the Process Chain prepares for the next run and is useful for reruns when big Process Chains are scheduled.

Values are either true (the default) or false.

ConsiderOnlyOverallChainStatus

Whether to view only the status of the overall Process Chain.

Values are either true or false (the default).

RetrieveLog

Whether to add the Process Chain logs to the job output.

Values are either true (the default) or false.

DetectSpawnedJob

This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job

    DetectAndCreate

How to determine the properties of detected spawned jobs:

  • CurrentJobDefinition — Properties of detected spawned jobs are identical to the current (parent) job properties (the default).
  • SpecificJobDefinition — Properties of detected spawned jobs are derived from a different job that you specify.

    JobName

Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition)

Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values.

    StartSpawnedJob

Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default)

    JobEndInControlMOnlyAftreChildJobsCompleteOnSap

Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default)

    JobCompletionStatusDependsOnChildJobsStatus

Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default)

When set to true, the parent job does not end OK if any child job fails.

This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true.


Job:ApplicationIntegrator

Use Job:ApplicationIntegrator:<JobType> to define a job of a custom type using the Control-M Application Integrator designer. For information about designing job types, see the Control-M Application Integrator Help.

The following example shows the JSON code used to define a job type named AI Monitor Remote Job:

"JobFromAI" : {
   "Type": "Job:ApplicationIntegrator:AI Monitor Remote Job",
   "ConnectionProfile": "AI_CONNECTION_PROFILE",
   "AI-Host": "Host1",
   "AI-Port": "5180",
   "AI-User Name": "admin",
   "AI-Password": "*******",
   "AI-Remote Job to Monitor": "remoteJob5",
   "RunAs": "controlm"
}

In this example, the ConnectionProfile and RunAs properties are standard job properties used in Control-M Automation API jobs. The other job properties will be created in the Control-M Application Integrator, and they must be prefixed with "AI-" in the .json code.

For reference, the corresponding settings in the Control-M Application Integrator are the following:

  • The name of the job type appears in the Name field in the job type details.
  • Job properties appear in the Job Type Designer, in the Connection Profile View and the Job Properties View.
    When defining these properties through the .json code, you prefix them with "AI-", except for the property that specifies the name of the connection profile.



Job:Informatica

Informatica-type jobs enable you to automate Informatica workflows through the Control-M environment. To manage Informatica-type jobs, you must have the Control-M for Informatica plugin installed in your Control-M environment.

The following example shows the JSON code used to define an Informatica job.

"InformaticaApiJob": {
   "Type": "Job:Informatica",
   "ConnectionProfile": "INFORMATICA_CONNECTION",
   "RepositoryFolder": "POC",
   "Workflow": "WF_Test",
   "InstanceName": "MyInstamce",
   "OsProfile": "MyOSProfile",
   "WorkflowExecutionMode": "RunSingleTask",
   "RunSingleTask": "s_MapTest_Success",
   "WorkflowRestartMode": "ForceRestartFromSpecificTask",
   "RestartFromTask": "s_MapTest_Success",
   "WorkflowParametersFile": "/opt/wf1.prop",
}

This Informatica job object uses the following parameters:

ConnectionProfile

Name of the Informatica connection profile to use for the connection

RepositoryFolder

The Repository folder that contains the workflow that you want to run

Workflow

The workflow that you want to run in Control-M for Informatica

InstanceName

(Optional) The specific instance of the workflow that you want to run

OsProfile

(Optional) The operating system profile in Informatica

WorkflowExecutionMode

The mode for executing the workflow, one of the following:

  • RunWholeWorkflow — run the whole workflow
  • StartFromTask — start running the workflow from a specific task, as specified by the StartFromTask parameter
  • RunSingleTask — run a single task in the workflow, as specified by the RunSingleTask parameter

  StartFromTask

The task from which to start running the workflow

This parameter is required only if you set WorkflowExecutionMode to StartFromTask. See the sketch after this list of parameters.

  RunSingleTask

The workflow task that you want to run

This parameter is required only if you set WorkflowExecutionMode to RunSingleTask.

Depth

The number of levels within the workflow task hierarchy for the selection of workflow tasks

Default: 10 levels

EnableOutput

Whether to include the workflow events log in the job output (either true or false)

Default: true

EnableErrorDetails

Whether to include a detailed error log for a workflow that failed (either true or false)

Default: true

WorkflowRestartMode

The operation to execute when the workflow is in a suspended status, one of the following:

  • Recover — recover the suspended workflow
  • ForceRestart — force a restart of the suspended workflow
  • ForceRestartFromSpecificTask — force a restart of the suspended workflow from a specific task, as specified by the RestartFromTask parameter

  RestartFromTask

The task from which to restart a suspended workflow

This parameter is required only if you set WorkflowRestartMode to ForceRestartFromSpecificTask.

WorkflowParametersFile

(Optional) The path and name of the workflow parameters file

This enables you to use the same workflow for different actions.
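
For reference, the following is a hedged variation of the example above that starts the workflow from a specific task and recovers a suspended workflow. The connection profile, folder, workflow, and task names are illustrative and reuse the values from the example.

"InformaticaStartFromTaskJob": {
   "Type": "Job:Informatica",
   "ConnectionProfile": "INFORMATICA_CONNECTION",
   "RepositoryFolder": "POC",
   "Workflow": "WF_Test",
   "WorkflowExecutionMode": "StartFromTask",
   "StartFromTask": "s_MapTest_Success",
   "WorkflowRestartMode": "Recover"
}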

Back to top

Job:AWS

AWS-type jobs enable you to automate a select list of AWS services through Control-M Automation API. To manage AWS-type jobs, you must have the Control-M for AWS plugin installed in your Control-M environment.

The following JSON objects are available for creating AWS-type jobs:

Job:AWS:Lambda

The following example shows how to define a job that executes an AWS Lambda service on an AWS server.

"AwsLambdaJob": {
   "Type": "Job:AWS:Lambda",
   "ConnectionProfile": "AWS_CONNECTION",
   "FunctionName": "LambdaFunction",
   "Version": "1",
   "Payload" : "{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}"
   "AppendLog": true
}

This AWS job object uses the following parameters:

FunctionName

The Lambda function to execute

Version

(Optional) The Lambda function version

The default is $LATEST (the latest version).

Payload

(Optional) The Lambda function payload, in JSON format

Escape all special characters.
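
For instance, the JSON payload {"myVar": "value1"} would be passed as the following escaped string (the variable name and value are illustrative):

"Payload": "{\"myVar\": \"value1\"}"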

AppendLog

Whether to add the log to the job’s output, either true (the default) or false
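
The following is a minimal sketch that relies on the defaults described above (latest function version, no payload, log appended to the output); the connection profile and function names are illustrative.

"AwsLambdaMinimalJob": {
   "Type": "Job:AWS:Lambda",
   "ConnectionProfile": "AWS_CONNECTION",
   "FunctionName": "LambdaFunction"
}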

Job:AWS:StepFunction

The following example shows how to define a job that executes an AWS Step Function service on an AWS server.

"AwsLambdaJob": {
   "Type": "Job:AWS:StepFunction",
   "ConnectionProfile": "AWS_CONNECTION",
   "StateMachine": "StateMachine1",
   "ExecutionName": "Execution1",
   "Input": ""{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}" ",
   "AppendLog": true
}

This AWS job object uses the following parameters:

StateMachine

The State Machine to use

ExecutionName

A name for the execution

Input

The Step Function input in JSON format

Escape all special characters.

AppendLog

Whether to add the log to the job’s output, either true (the default) or false

Job:AWS:Batch

The following example shows how to define a job that executes an AWS Batch service on an AWS server.

"AwsLambdaJob": {
   "Type": "Job:AWS:Batch",
   "ConnectionProfile": "AWS_CONNECTION",
   "JobName": "batchjob1",
   "JobDefinition": "jobDef1",
   "JobDefinitionRevision": "3",
   "JobQueue": "queue1",
   "AWSJobType": "Array",
   "ArraySize": "100",
   "DependsOn": {
       "DependencyType": "Standard",
       "JobDependsOn": "job5"
       },
   "Command": [ "ffmpeg", "-i" ],
   "Memory": "10",
   "vCPUs": "2",
   "JobAttempts": "5",
   "ExecutionTimeout": "60",
   "AppendLog": false
}

This AWS job object uses the following parameters:

JobName

The name of the batch job

JobDefinition

The job definition to use

JobDefinitionRevision

The job definition revision

JobQueue

The queue to which the job is submitted

AWSJobType

The type of job, either Array or Single

ArraySize

(For a job of type Array) The size of the array (that is, the number of items in the array)

Valid values: 2–10000

DependsOn

Parameters that determine a job dependency

    DependencyType

(For a job of type Array) The type of dependency, one of the following values:

  • Standard
  • Sequential
  • N-to-N

   JobDependsOn

The JobID upon which the Batch job depends

This parameter is mandatory for a Standard or N-to-N dependency, and optional for a Sequential dependency.
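
For example, the following is a sketch of the DependsOn block for a Sequential dependency, which can omit JobDependsOn; the value name follows the list above.

"DependsOn": {
    "DependencyType": "Sequential"
}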

Command

A command to send to the container that overrides the default command from the Docker image or the job definition

Memory

The number of megabytes of memory reserved for the job

Minimum value: 4 megabytes

vCPUs

The number of vCPUs to reserve for the container

JobAttempts

The number of retry attempts

Valid values: 1–10

ExecutionTimeout

The timeout duration in seconds

AppendLog

Whether to add the log to the job’s output, either true (the default) or false

Back to top

Job:Azure

The Azure job type (introduced in version 9.0.19.220) enables you to automate workflows that include a select list of Azure services. To manage Azure-type jobs, you must have the Control-M for Azure plugin installed in your Control-M environment.

The following JSON objects are available for creating Azure-type jobs:

Job:Azure:Function

The following example shows how to define a job that executes an Azure function service.

"AzureFunctionJob": {
 "Type": "Job:Azure:Function",
 "ConnectionProfile": "AZURE_CONNECTION",
 "AppendLog": false,
 "Function": "AzureFunction",
 "FunctionApp": "AzureFunctionApp",
 "Parameters": [
     {"firstParamName": "firstParamValue"},
 {"secondParamName": "secondParamValue"}
  ]
}

This Azure job object uses the following parameters:

Function

The name of the Azure function to execute

FunctionApp

The name of the Azure function app

Parameters

(Optional) Function parameters defined as pairs of name and value. 

AppendLog

(Optional) Whether to add the log to the job’s output, either true (the default) or false

Job:Azure:LogicApps

The following example shows how to define a job that executes an Azure Logic App service.

"AzureLogicAppJob": {
 "Type": "Job:Azure:LogicApps",
 "ConnectionProfile": "AZURE_CONNECTION",
  "LogicAppName": "MyLogicApp",
 "RequestBody": "{\\n  \"name\": \"BMC\"\\n}",
 "AppendLog": false
}

This Azure job object uses the following parameters:

LogicAppName

The name of the Azure Logic App

RequestBody

(Optional) The JSON for the expected payload

AppendLog

(Optional) Whether to add the log to the job’s output, either true (the default) or false

Job:Azure:BatchAccount

The following example shows how to define a job that executes an Azure Batch Account service.

"AzureBatchJob": {
 "Type": "Job:Azure:BatchAccount",
 "ConnectionProfile": "AZURE_CONNECTION",
 "JobId": "AzureJob1",
 "CommandLine": "echo \"Hello\"",
 "AppendLog": false,
 "Wallclock": {
   "Time": "770",
   "Unit": "Minutes"
 },
 "MaxTries": {
   "Count": "6",
   "Option": "Custom"
 },
 "Retention": {
   "Time": "1",
   "Unit": "Hours"
 }
}

This Azure job object uses the following parameters:

JobId

The ID of the batch job

CommandLine

A command line that the batch job runs

AppendLog

(Optional) Whether to add the log to the job’s output, either true (the default) or false

Wallclock

(Optional) Maximum limit for the job's run time

If you do not include this parameter, the default is unlimited run time.

Use this parameter to set a custom time limit. Include the following next-level parameters:

  • Time — number (of the specified time unit), 0 or higher
  • Unit — time unit, one of the following: Seconds, Minutes, Hours, Days

MaxTries

(Optional) The number of times to retry running a failed task

If you do not include this parameter, the default is none (no retries).

Use this parameter to choose between the following options:

  • Unlimited number of retries (see the sketch after this list). For this option, include the following next-level parameter:
    • "Option": "Unlimited"
  • Custom number of retries, 1 or higher. For this option, include the following next-level parameters:
    • "Count": number
    • "Option": "Custom"

Retention

(Optional) File retention period for the batch job

If you do not include this parameter, the default is an unlimited retention period.

Use this parameter to set a custom time limit for retention. Include the following next-level parameters:

  • Time — number (of the specified time unit), 0 or higher
  • Unit — time unit, one of the following: Seconds, Minutes, Hours, Days

Back to top

Job:Dummy

The following example shows how to use Job:Dummy to define a job that always ends successfully without running any commands. 

"DummyJob" : {
  "Type" : "Job:Dummy"
}

Back to top

 
