Code Reference
Introduction
The code samples below describe how to define Control-M objects using JSON notation.
Each Control-M object begins with a "Name" and then a "Type" specifier as the first property. All object names are written in PascalCase, with the first letter of each word capitalized. In the examples below, the "Name" of the object is "ObjectName" and the "Type" is "ObjectType".
"Type" : "ObjectType"
}
The following object types are the most basic objects in Control-M job management:
Object type | Description |
---|---|
Folder | A container of jobs. The following types of folders are available: Folder, SubFolder, and Simple Folder. You can use a Flow to define order dependency between jobs in a folder. |
Job | A business process that you schedule and run in your enterprise environment |
Connection Profile | Access methods and security credentials for a specific application that runs jobs |
Defaults | Definitions of default parameter values that you can apply to multiple objects, all at once |
Folder
A folder is a container of jobs and subfolders. The default type of folder (as opposed to a simple folder) enables you to configure various settings such as scheduling, event management, adding resources, or adding notifications on the folder level. Folder-level definitions are inherited by the jobs or subfolders within the folder.
For example, you can specify scheduling criteria on the folder level instead of defining the criteria per job in the folder. All jobs in the folder take on the rules of the folder, which reduces repetition in the job definitions.
"Type": "Folder",
"Job1": {
"Type": "Job:Command",
"Command": "echo I am a Job",
"RunAs": "controlm"
},
"Job2": {
"Type": "Job:Command",
"Command": "echo I am a Job",
"RunAs": "controlm"
}
}
Optional parameters:
"Type": "Folder",
"AdjustEvents": true,
"ControlmServer": "controlm",
"SiteStandard": "myStandards",
"OrderMethod": "Manual",
"Application": "ApplicationName",
"SubApplication" : "SubApplicationName",
"RunAs" : "controlm",
"When" : {
"WeekDays": ["SUN"]
},
"ActiveRetentionPolicy": "KeepAll",
"DaysKeepActiveIfNotOk" : "41",
"mut1" : {
"Type": "Resource:Mutex",
"MutexType": "Exclusive"
},
"Notify1": {
"Type": "Notify:ExecutionTime",
"Criteria": "LessThan",
"Value": "3",
"Message": "Less than expected"
}
}
Parameter | Description |
---|---|
AdjustEvents | Whether a job in a folder or subfolder should start running and not wait for an event from a predecessor job that was not scheduled. Values: true | false |
ControlmServer | Specifies a Control-M Scheduling Server. If more than one Control-M Scheduling Server is configured in the system, you must define the server that the folder belongs to. |
SiteStandard | Enforces the defined Site Standard to the folder and all jobs contained within the folder. See Control-M in a nutshell. |
OrderMethod | Options are:
|
RunAs | The name of the user responsible for running the jobs in the folder or subfolder. For more details, see RunAs. |
When | Defines scheduling for all jobs in the folder or subfolder, using various scheduling parameters or rule-based calendars. For more details, see When. |
ActiveRetentionPolicy | The retention policy for jobs in the folder or subfolder, one of the following options:
|
DaysKeepActiveIfNotOk | Defines the number of days to keep all jobs in the folder active after the folder is set to NOT OK. This parameter is relevant only when ActiveRetentionPolicy=KeepAll. Valid values are 0-99 (where 99 is forever). The default is 0. |
Resource:Mutex | See Resource:Mutex for detailed information about the resource. |
Notification | Issues notifications for various scenarios that occur before, during, or after the execution of jobs within the folder or subfolder. For more details, see Notification. |
If | See If and If Actions for detailed information |
Job | See Job for detailed information |
Events | See Events for detailed information |
Flow | See Flow for detailed information |
Folders support the use of the following additional properties and parameters:
- Application, SubApplication
- CreatedBy
- DaysKeepActive
- Description
- Documentation
- EndFolder
- Priority
- Rerun
- RerunLimit
- Time Zone
- Variables
The following example shows a description and a time zone defined for the object Folder8.
"Type": "Folder",
"Description": "folder desc",
"Application" : "Billing",
"SubApplication" : "Payable",
"TimeZone":"HAW",
"SimpleCommandJob": {
"Type": "Job:Command",
"Description": "job desc",
"Application" : "BillingJobs",
"SubApplication" : "PayableJobs",
"TimeZone":"MST",
"Host":"agent8",
"RunAs":"owner8",
"Command":"ls"
}
}
SubFolder
A subfolder is a folder contained within another (parent) folder or subfolder. A subfolder can contain a group of jobs or a next-level subfolder, and it can also contain a flow. Subfolders offer many (but not all) of the capabilities that are offered by regular folders.
The following example shows a folder that contains two subfolders with the most basic definitions:
"Type":"Folder",
"SubFolder1":{
"Type":"SubFolder",
"job1": {
"Type": "Job:Script",
"FileName": "scriptname.sh",
"FilePath": "/home/user/scripts",
"RunAs": "em900cob"
}
},
"SubFolder2":{
"Type":"SubFolder",
"job1": {
"Type": "Job:Script",
"FileName": "scriptname2.sh",
"FilePath": "/home/user/scripts",
"RunAs": "em900cob"
}
}
}
The following example shows a more complex hierarchy of subfolders, with scheduling properties:
"Type" : "Folder",
"ControlmServer" : "LocalControlM",
"When" : {
"RuleBasedCalendars" : {
"Included" : [ "glob1", "cal2","cal1" ],
"Excluded" : [ "glob2" ],
"cal1" : {
"Type" : "Calendar:RuleBased",
"When" : {
"WeekDays" : [ "MON" ],
"MonthDays" : [ "NONE" ]
}
}
}
},
"subF1" : {
"Type" : "SubFolder",
"Application" : "application",
"When" : {
"RuleBasedCalendars" : {
"Included" : [ "USE PARENT" ]
}
}
},
"subF2" : {
"Type" : "SubFolder",
"Application" : "application",
"When" : {
"FromTime":"1211",
"ToTime":"2211",
"RuleBasedCalendars" : {
"Included" : [ "cal1" ]
}
},
"subF2a" : {
"Type" : "SubFolder",
"Application" : "application",
"job3" : {
"Type" : "Job:Script",
"FileName" : "scriptname.sh",
"FilePath" : "/home/user/scripts",
"RunAs" : "em900cob",
"Application" : "application"
}
}
}
}
Subfolders support the use of the following properties and parameters (a combined sketch follows the list):
- Application, SubApplication
- When (only the FromTime and ToTime parameters, as well as rule-based calendars)
- Events
- If statements with If Actions
- AdjustEvents (a Folder property)
- Confirm
- DaysKeepActive
- Description
- Documentation
- Mutex resources
- Notification
- Priority
- RunAs
- Time Zone
- Variables
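The following sketch combines several of these properties in a single subfolder. It is an illustrative example rather than one taken from the product documentation; the folder, subfolder, and job names are arbitrary.
"FolderWithConfiguredSubFolder": {
"Type": "Folder",
"SubFolderSample": {
"Type": "SubFolder",
"Application": "Billing",
"SubApplication": "Payable",
"Description": "subfolder with several optional properties",
"Priority": "High",
"DaysKeepActive": "3",
"RunAs": "controlm",
"TimeZone": "MST",
"When": {
"FromTime": "0800",
"ToTime": "1700"
},
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "controlm"
}
}
}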
Simple Folder
Simple Folder is a container of jobs. A Simple Folder does not enable configuration of job definitions at the folder level. The following example shows how to use a simple folder.
"SimpleFolderName": {
"Type": "SimpleFolder",
"ControlmServer": "ec2-54-191-85-182",
"job1": {
"Type": "Job:Command",
"Command": "echo 123",
"RunAs": "controlm"
},
"job2": {
"Type": "Job:Command",
"Command": "echo 123",
"RunAs": "controlm"
},
"Flow": {
"Type": "Flow",
"Sequence": ["job1", "job2"]
}
}
The following example shows optional parameters for SimpleFolder:
"FolderSampleAll": {
"Type": "SimpleFolder",
"ControlmServer": "controlm",
"SiteStandard": "myStandards",
"OrderMethod": "Manual"
}
ControlmServer | Specifies a Control-M Scheduling Server. If more than one Control-M Scheduling Server is configured in the system, you must define the server that the folder belongs to. |
SiteStandard | Enforces the defined Site Standard to the folder and all jobs contained within the folder. See Control-M in a nutshell. |
OrderMethod | Options are:
|
Flow
The Flow object type allows you to define order dependency between jobs in folders and subfolders. A job must end successfully for the next job in the flow to run.
"Type":"Flow",
"Sequence":["job1", "job2", "job3"]
}
The following example shows how one job can be part of multiple flows. Job3 will execute if either Job1 or Job2 ends successfully.
{
"Type" : "Folder",
"Job1": {
"Type" : "Job:Command",
"Command" : "echo hello",
"RunAs" : "user1"
},
"Job2": {
"Type" : "Job:Command",
"Command" : "echo hello",
"RunAs" : "user1"
},
"Job3": {
"Type" : "Job:Command",
"Command" : "echo hello",
"RunAs" : "user1"
},
"flow1": {
"Type":"Flow",
"Sequence":["Job1", "Job3"]
},
"flow2": {
"Type":"Flow",
"Sequence":["Job2", "Job3"]
}
}
The following example shows how to create flow sequences with jobs contained within different folders and subfolders.
"FolderA" :
{
"Type" : "Folder",
"Job1": {
"Type" : "Job:Command",
"Command" : "echo hello",
"RunAs" : "user1"
}
},
"FolderB" :
{
"Type" : "Folder",
"Job1": {
"Type" : "Job:Command",
"Command" : "echo hello",
"RunAs" : "user1"
},
"SubFolderB1" : {
"Type" : "SubFolder",
"Job2": {
"Type" : "Job:Command",
"Command" : "echo hello again from subjob",
"RunAs" : "user1"
}
}
},
"CrossFoldersFlowSample": {
"Type":"Flow",
"Sequence":["FolderA:Job1", "FolderB:Job1", "FolderB:SubFolderB1:Job2]
}
The following example shows a flow defined within a subfolder and referencing jobs within next-level subfolders.
"Type":"Folder",
"SubFolderA":{
"Type":"SubFolder",
"SubFolder1":{
"Type":"SubFolder",
"job1": {
"Type": "Job:Script",
"FileName": "scriptname.sh",
"FilePath": "/home/user/scripts",
"RunAs": "em900cob"
}
},
"SubFolder2":{
"Type":"SubFolder",
"job1": {
"Type": "Job:Script",
"FileName": "scriptname2.sh",
"FilePath": "/home/user/scripts",
"RunAs": "em900cob"
}
},
"flowInSubFolderA": {
"Type": "Flow",
"Sequence": [
"SubFolder1:job1",
"SubFolder2:job1"
]
}
}
}
Job Properties
Below is a list of job properties for Control-M objects.
Type
Defines the type of job. For example:
"Type" : "Job:Command",
"Command" : "echo hello",
"RunAs" : "user1"
}
Many of the other properties that you include in the job's definitions depend on the type of job that you are running. For a list of supported job types and more information about the parameters that you use in each type of job, see Job types.
Application, SubApplication
Supplies a common descriptive name to a set of related Jobs, Folders, or SubFolders. The jobs do not necessarily have to run at the same time.
"Type": "Job:Command",
"Application": "ApplicationName",
"SubApplication": "SubApplicationName",
"Command": "echo I am a Job",
"RunAs": "controlm"
}
Comment
Allows you to write a comment on an object. Comments are not uploaded to Control-M.
"Type" : "Job:Command",
"Comment" : "code reviewed by tom",
"Command" : "echo hello",
"RunAs" : "user1"
}
When
Enables you to define scheduling parameters for Jobs, Folders and SubFolders, including the option of using calendars. If When is used in a Folder or SubFolder, those parameters apply to all Jobs in the Folder or Subfolder.
When working in a Control-M Workbench environment, jobs do not wait for time constraints and run in an ad-hoc manner. Once deployed to a Control-M instance, all time constraints are obeyed.
The following example defines scheduling based on a combination of date and time constraints:
"Schedule":"Never",
"Months": ["JAN", "OCT", "DEC"],
"MonthDays":["22","1","11"],
"WeekDays":["MON","TUE"],
"FromTime":"1500",
"ToTime":"1800"
}
The following example defines scheduling based on specific dates that you specify explicitly:
"WeekDays" : [ "NONE" ],
"Months" : [ "NONE" ],
"MonthDays" : [ "NONE" ],
"SpecificDates" : [ "03/01", "03/10" ],
"FromTime":"1500",
"ToTime":"1800"
}
The following date/time constraints are available for use in the definitions of a Job, Folder, or SubFolder:
Option | Description | Job | Folder | SubFolder |
---|---|---|---|---|
WeekDays | One or more of the following: "SUN","MON","TUE","WED","THU","FRI","SAT". For all days of the week, use "ALL" (the default value). | ✅️ | ✅️ | ❌️ |
Months | One or more of the following: "JAN", "FEB", "MAR", "APR", "MAY", "JUN", "JUL", "AUG", "SEP", "OCT", "NOV", "DEC". For all months of the year, use "ALL" (the default value). | ✅️ | ✅️ | ❌️ |
MonthDays | One or more days in the range of 1 to 31. For all days of the month, use "ALL" (the default value). | ✅️ | ✅️ | ❌️ |
FromTime | FromTime specifies that a job will not start before this time. Format: HHMM | ✅️ | ✅️ | ✅️ |
ToTime | ToTime specifies that a job will not start after this time. Format: HHMM. To allow the job to be submitted even after its original scheduling date (if it was not submitted on the original date), specify a value of ">". | ✅️ | ✅️ | ✅️ |
Schedule | One of the following options:
| ✅️ | ✅️ | ❌️ |
SpecificDates | Specific dates for running jobs. For each date, use the format "MM/DD" (enclosed in quotes). Separate multiple dates with commas. Note: The SpecificDates option cannot be used in combination with options WeekDays, Months, or MonthDays. | ✅️ | ✅️ | ❌️ |
MonthDays additional parameters
You can specify the days of the month the job will run by referencing a predefined calendar set up in Control-M.
"MonthDaysCalendar": “Summer2017”
}
You can specify the days of the month the job will run by referencing a predefined calendar set up in Control-M, and also using advanced rules specified in the MonthDays parameter:
"MonthDaysCalendar": “Summer2017”,
“MonthDays":[“1”,”+2”,”-3”,”>4”,”<5”,”D6”,”L7”]
}
Where:
MonthDays syntax | Description |
---|---|
1 | Day 1 included only if defined in the calendar |
+2 | Day 2 included regardless of calendar |
-3 | Day 3 excluded regardless of calendar |
>4 | Day 4 or next closest calendar working day |
<5 | Day 5 or previous closest calendar working day |
D6 | The 6th calendar working day |
L7 | The 7th from the last calendar working day |
D6PA or D6P* | If MonthDaysCalendar is of type periodical, you can use PA or P* to specify a calendar period name such as A,B,C, or you can use * for any period. |
-D6 or -L6P* | D and L can also be combined with the exclude specifier (-) |
WeekDays additional parameters
You can specify the days of the week that the job will run by referencing a predefined calendar set up in Control-M, and also using advanced rules specified in the WeekDays parameter:
"WeekDaysCalendar" : "Summer2017",
"WeekDays" : ["SUN","+MON","-TUE",">WED","<THU"]
}
Where:
WeekDays syntax | Description |
---|---|
SUN | Sunday included only if defined in the calendar |
+MON | Monday included regardless of calendar |
-TUE | Tuesday excluded regardless of calendar |
>WED | Wednesday or next closest calendar working day |
<THU | Thursday or previous closest calendar working day |
Specifying start/end dates of a job run
You can add start and end dates for a job run in addition to the other date/time elements.
"StartDate":"20160322",
"EndDate":"20160325"
}
Where:
StartDate | First date that a job can run |
EndDate | Last date that a job can run |
Relationship between MonthDays and WeekDays
"Months": ["JAN", "OCT", "DEC"],
"MonthDays":["22","1","11"],
“DaysRelation” : “OR”,
"WeekDays":["MON","TUE"]
}
Parameter | Description |
---|---|
DaysRelation | Defines the relation between the WeekDays and MonthDays constraints. AND - the job will run only if both the WeekDays and MonthDays constraints are met. OR - the job will run if either the WeekDays or MonthDays constraints are met. Default: AND |
Using rule-based calendars in job scheduling
You can base scheduling on predefined rule-based calendars (RBC). Under the When parameter for a specific Job, Folder, or SubFolder, you can specify the RBCs to include and the RBCs to exclude from scheduling. For more information, see Rule-based Calendar and Excluded Rule-based Calendar lists in the Control-M Online Help.
"Type" : "Job:Command",
"RunAs" : "controlm",
"Command" : "ls -l",
"When" : {
"RuleBasedCalendars":{
"Included": ["calendar1"],
"Excluded": ["calendar2"]
}
}
}
You can combine the use of rule-based calendars with standard scheduling parameters. In such a case, the Relationship parameter enables you to define the logical relationship (AND/OR) between the criteria defined by the calendars and all other basic scheduling criteria. In other words, you can decide whether either set of criteria, or both sets of criteria, must be satisfied. The default relationship is OR.
"Type" : "Job:Command",
"RunAs" : "controlm",
"Command" : "ls -l",
"When" : {
"RuleBasedCalendars":{
"Included": ["weekdays"],
"Excluded": ["endOfQuarter"],
"Relationship": "AND"
},
"Months":["JAN","FEB","MAR"],
"WeekDays":["TUE","WED"]
}
}
Rule-based calendars can also be used with SubFolders. In the following example, two different SubFolders are defined in the parent folder. For one of these SubFolders, RBCs to include and exclude are explicitly specified. These RBCs are either global or from the parent folder. For the other SubFolder, the "USE PARENT" value is specified instead of the name of an actual RBC, so that all scheduling is inherited from the parent folder (as defined in the next example).
"Type" : "SubFolder",
"When" : {
"FromTime":"1211",
"ToTime":"2211",
"RuleBasedCalendars" : {
"Included" : [ "calendar1" ],
"Excluded" : [ "calendar2" ]
}
}
},
"subF2" : {
"Type" : "SubFolder",
"When" : {
"RuleBasedCalendars" : {
"Included" : [ "USE PARENT" ]
}
}
}
For folders, you can define Folder RBCs in addition to using predefined RBCs or other basic scheduling criteria. Folder RBCs are specific to a single folder and are applied to all jobs within the folder. The following example shows how to define RBCs under the folder's When parameter:
"Type": "Folder",
"ControlmServer": "LocalControlM",
"When": {
"RuleBasedCalendars": {
"endOfQ":{
"Type": "Calendar:RuleBased",
"When": {
"Months": ["MAR","JUN","SEP","DEC"],
"MonthDays": ["29","30","31"]
}
},
"winterWeekendDays": {
"Type": "Calendar:RuleBased",
"When": {
"Months": ["DEC","JAN","FEB"],
"WeekDays": ["SAT","SUN"]
}
},
"Included": ["winterWeekendDays","calendar1"],
"Excluded": ["endOfQ"]
},
"Months": ["APR"],
"WeekDays": ["FRI","MON"]
},
"TestJobInTestFolder": {
"Type": "Job:Command",
"RunAs": "controlm",
"Command": "ls -l",
"When": {
"RuleBasedCalendars": {
"Included": ["calendar3"],
"Excluded": ["calendar2"]
}
}
}
}
Note the following guidelines:
- Each Folder RBC that you define has its own When parameter, with scheduling parameters under it. The scheduling parameters under this When parameter are similar to the scheduling parameters under a When parameter of a job or folder, with the following exceptions:
- The When parameter of an RBC does not support the FromTime and ToTime parameters.
- For the When parameter of an RBC, you can also use the DaysKeepActive parameter (see the sketch after this list).
- You cannot nest another RuleBasedCalendars parameter under the When parameter of an RBC.
- To apply the defined Folder RBCs to the jobs within the folder, you must list each of the defined Folder RBCs in either the Included parameter or the Excluded parameter.
- You can list additional predefined Control-M RBCs in the Included and Excluded parameters. In the example above, a predefined RBC named calendar1 is listed along with the Folder RBC winterWeekendDays.
- You can combine the use of RBCs with other scheduling parameters. In the example above, the additional Months and WeekDays settings were added after the RuleBasedCalendars definitions. These further scheduling parameters are combined with the RBCs based on a logical AND.
- The folder's scheduling parameters are inherited by each of the jobs in the folder. In addition, for any specific job in the folder, you can add further scheduling parameters or RBCs. In the example above, further RBCs (calendar3 and calendar2) are associated with a job named TestJobInTestFolder.
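For example, a minimal sketch (not taken from the product documentation) of a Folder RBC that uses DaysKeepActive under its When parameter; the calendar name endOfMonth is arbitrary:
"endOfMonth": {
"Type": "Calendar:RuleBased",
"When": {
"MonthDays": ["31"],
"DaysKeepActive": "2"
}
}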
Events
Events can be generated by Control-M or can trigger jobs. Events are defined by a name and a date.
Events can be used in the following ways:
- A job can wait for events before running, add events after running, or delete events after running. See WaitForEvents, AddEvents, and DeleteEvents.
- Jobs can add or remove events from Control-M. See Event:Add or Event:Delete.
- You can add or remove events from Control-M through an API call. See Event Management.
You can set events for a Job, Folder, or SubFolder.
For "OrderDate", you can use the following values:
Date Type | Description |
---|---|
AnyDate | Any scheduled date |
NoDate | Not date-specific |
OrderDate | Control-M scheduled date. If you do not specify an OrderDate value, this is the default. |
PreviousOrderDate | Previous Control-M scheduled date |
NextOrderDate | Next Control-M scheduled date |
MMDD | Specific date Example: "0511" |
WaitForEvents
The following example shows how to define events that the job must wait for before running:
"Type": "WaitForEvents",
"Events": [
{"Event":"e1"},
{"Event":"e2"},
{"Event":"e3", "OrderDate":"AnyDate"}
]
}
You can specify the logical relationship between events, using logical operators (AND/OR) and parentheses. The default relationship is AND. Note that nesting of parentheses within parentheses is not supported.
"Type": "WaitForEvents",
"Events": [
"(",
{"Event":"ev1"},
"OR",
{"Event":"ev2"},
")",
"OR",
"(",
{"Event":"ev3"},
{"Event":"ev4"},
")"
]
}
AddEvents
The following example shows how to specify events for the job to add after running:
"eventsToAdd": {
"Type": "AddEvents",
"Events": [
{"Event":"a1"},
{"Event":"a2"},
{"Event":"a3", "OrderDate":"1112"}
]
}
DeleteEvents
The following example shows how to specify events for the job to remove after running:
"eventsToDelete": {
"Type": "DeleteEvents",
"Events": [
{"Event":"d1"},
{"Event":"d2", "OrderDate":"1111"},
{"Event":"d3"}
]
}
If
If statements trigger one or more actions when job-related criteria are fulfilled (for example, the job ended with a specific status or the job failed several times).
The following If statements are available for specifying job-related criteria that must occur for action to be taken:
- If:CompletionStatus
- If:NumberOfReruns
- If:NumberOfFailures
- If:JobNotSubmitted
- If:JobOutputNotFound
- If:NumberOfExecutions
- If:Output
For descriptions of the various actions that can be triggered in response to an If statement that is fulfilled, see If Actions.
If:CompletionStatus
The following example shows an If statement that triggers actions based on job completion status. In this example, if the job runs unsuccessfully, it sends an email and runs another job. You can set this property for a Job, Folder, or SubFolder.
"Type" : "Job:Command",
"Command" : "echo hello",
"Host" : "myhost.mycomp.com",
"RunAs" : "user1",
"ActionIfFailure" : {
"Type": "If",
"CompletionStatus": "NOTOK",
"mailToTeam": {
"Type": "Action:Mail",
"Message": "Job %%JOBNAME failed",
"To": "team@mycomp.com"
},
"CorrectiveJob": {
"Type": "Action:Run",
"Folder": "FolderName",
"Job": "JobName"
}
}
}
If can be triggered based on one of the following CompletionStatus values:
Value | Action |
---|---|
NOTOK | When the job fails |
OK | When the job completes successfully |
ANY | When the job completes, regardless of success or failure |
value | When completion status = value. Example: value=10 |
Even | When completion status is an even number |
Odd | When completion status is an odd number |
">=5", "<=5", "<5", ">5", "!=5" | When the completion status comparison operator is true |
If:NumberOfReruns
The following example shows how to trigger an action based on number of job reruns. You can set this property for a Job, Folder, or SubFolder.
"Type": "If:NumberOfReruns",
"NumberOfReruns": ">=4",
"RunJob": {
"Type": "Action:Run",
"Folder": "Folder1",
"Job": "job1"
}
}
Where:
Parameter | Description | Possible values |
---|---|---|
NumberOfReruns | Performs an action if the condition of number of job reruns is met. | "Even" "Odd" "!=value" ">=value" "<=value" ">value" "<value" "value" |
If:NumberOfFailures
The following example shows how to trigger an action based on number of job failures. You can set this property for a Job, Folder, or SubFolder.
"Type": "If:NumberOfFailures",
"NumberOfFailures": "1",
"RunJob": {
"Type": "Action:Run",
"Folder": "Folder1",
"Job": "job1"
}
}
Where:
Parameter | Description | Possible values |
---|---|---|
NumberOfFailures | Performs an action if the condition of number of job failures is met | "value" |
If:JobNotSubmitted
The following example shows how to trigger an action based on whether the job is not submitted.
"Type": "If:JobNotSubmitted"
"RunJob": {
"Type": "Action:Run",
"Folder": "Folder1",
"Job": "job1"
}
}
If:JobOutputNotFound
The following example shows how to trigger an action based on whether the job output is not found.
"Type": "If:JobOutputNotFound"
"RunJob": {
"Type": "Action:Run",
"Folder": "Folder1",
"Job": "job1"
}
}
If:NumberOfExecutions
The following example shows how to trigger an action based on number of job executions. You can set this property for a Job, Folder, or SubFolder.
"Type": "If:NumberOfExecutions",
"NumberOfExecutions": ">=5"
"RunJob": {
"Type": "Action:Run",
"Folder": "Folder1",
"Job": "job1"
}
}
Where:
Parameter | Description | Possible values |
---|---|---|
NumberOfExecutions | Performs an action if the condition of number of job executions is met | "Even" "Odd" "!=value" ">=value" "<=value" ">value" "<value" "value" |
If:Output
The following example shows how to trigger an action based on whether a specified string is found within the job output. You can set this property for a Job, Folder, or SubFolder.
"Type": "If:Output",
"Code": "myfile.sh",
"Statement": "ls -l",
"RunJob":{
"Type":"Action:Run",
"Folder":"Folder1",
"Job":"job1"
}
}
Where:
Parameter | Description |
---|---|
Code | The string to search for in the output. You can include wildcards in the code — * for any number of characters, and $ or ? for any single character. |
Statement | (Optional) Limits the search to a specific statement within the output. If no statement is specified, all statements in the output are searched. You can include wildcards in the statement — * for any number of characters, and $ or ? for any single character. |
If Actions
The following actions can be triggered in response to an If statement that is fulfilled:
- Action:Mail
- Action:Rerun
- Action:Set
- Action:SetToOK
- Action:SetToNotOK
- Action:StopCyclicRun
- Action:Run
- Action:Notify
- Event:Add
- Event:Delete
- Action:Output
You can set any of these properties for a Job, Folder, or SubFolder.
For descriptions of the various If statements that you can specify to trigger these actions, see If.
Action:Mail
The following example shows an action that sends an e-mail.
"Type": "Action:Mail",
"Message": "%%JOBNAME failed",
"To": "team@mycomp.com"
}
The following example shows that you can add optional parameters to the email action.
"Type": "Action:Mail",
"Urgency": "Urgent",
"Subject" : "Completion Email",
"Message": "%%JOBNAME just completed",
"To": "team@mycomp.com",
"CC": "other@mycomp.com",
"AttachOutput": true
}
The following table describes the parameters of the email action:
Parameter | Description |
---|---|
Urgency | Level of urgency of the message — Regular, Urgent, or VeryUrgent. The default is Regular. |
Subject | A subject line for the message. |
Message | The message text. |
To | A list of email recipients to whom the message is directed. Use the semicolon (;) to separate multiple email addresses. |
CC | A list of email recipients who receive a copy of the message. Use the semicolon (;) to separate multiple email addresses. |
AttachOutput | Whether to include the job output as an email attachment, either true or false. If no value is specified, the default follows the configuration of the Control-M/Server. |
Action:Rerun
The following example shows an action that reruns the job.
"Type": "Action:Rerun"
}
Action:Set
The following example shows an action that sets a variable.
"Type": "Action:Set",
"Variable": "var1",
"Value": "1"
}
Action:SetToOK
The following example shows an action that sets the job status to OK.
"Type": "Action:SetToOK"
}
Action:SetToNotOK
The following example shows an action that sets the job status to not OK.
"Type": "Action:SetToNotOK"
}
Action:StopCyclicRun
The following example shows an action that disables the cyclic attribute of the job.
"Type": "Action:StopCyclicRun"
}
Action:Run
The following example shows an action that runs another job.
"Type": "Action:Run",
"Folder": "FolderName",
"Job": "JobName",
"ControlmServer":"RemoteControlM",
"Date":"010218",
"Variables":[{"Cvar1":"val1"}, {"Cvar2":"val2"}]
}
The run action has the following optional properties:
Property | Description |
---|---|
ControlmServer | The Control-M Scheduling Server for the run action. By default, the Control-M Scheduling Server for the run action is the same as defined for the folder. You can use this property to specify a different, remote server. |
Date | Value to be used as the original scheduling date for the job. The default is OrderDate (that is, the Control-M scheduled date). For any other date, specify the date in the relevant 4-character or 6-character format — mmdd, ddmm, yymmdd, or yyddmm, depending on the site standard. |
Variables | If the run action is defined for a remote Control-M Scheduling server, you can define variables for the run action. |
Action:Notify
The following example shows an action that sends a notification.
"Type": "Action:Notify",
"Message": "job1 just ran",
"Destination": "JobLog",
"Urgency": "VeryUrgent"
}
Event:Add
The following example shows an action that adds an event for the current date.
"Type": "Event:Add",
"Event": "e1"
}
Optional parameters:
"Type": "Event:Add",
"Event": "e1",
"OrderDate": "1010"
}
Date Type | Description |
---|---|
AnyDate | Any scheduled date |
NoDate | Not date-specific |
OrderDate | Control-M scheduled date. |
PreviousOrderDate | Previous Control-M scheduled date |
NextOrderDate | Next Control-M scheduled date |
MMDD | Specific date Example: "0511" |
Event:Delete
The following example shows an action that deletes an event.
OrderDate possible values:
- "AnyDate"
- "NoDate"
- "OrderDate"
- "PreviousOrderDate"
- "NextOrderDate"
- "0511" - (MMDD)
"Type": "Event:Delete",
"Event": "e2",
"OrderDate": "PreviousOrderDate"
}
Action:Output
The Output action supports the following operations:
- Copy
- Move
- Delete
The following example shows an action that copies the output to the specified destination.
"Type": "Action:Output",
"Operation": "Copy",
"Destination": "/home/copyHere"
}
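A Move operation can presumably be defined in the same way; the following sketch assumes that Move takes the same Destination parameter as Copy (the action name and destination path are arbitrary):
"moveOutputAction": {
"Type": "Action:Output",
"Operation": "Move",
"Destination": "/home/moveHere"
}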
Confirm
Allows you to define a job or subfolder that requires user confirmation before it runs. Confirmation is given by running the run confirm command.
"Type" : "Job:Command",
"Comment" : "this job needs user confirmation to start execution",
"Command" : "echo hello",
"RunAs" : "user1",
"Confirm" : true
}
CreatedBy
Allows you to specify the Control-M user responsible for job definitions. You can define this property for a Job object or a Folder object.
"Type": "Job:Command",
"Command": "echo I am a Job.",
"RunAs": "controlm",
"CreatedBy":"username"
}
The behavior of this property depends on the security policy defined in the Control-M environment, as controlled by the AuthorSecurity parameter in Control-M/Enterprise Manager:
Security level | Allowed values | Default value |
---|---|---|
Permissive | Any user name | ctmdk (an internal user) |
Restrictive | User currently logged in | User currently logged in |
Critical
Allows you to set a critical job. A critical job has a higher priority when reserving the resources that it needs in order to run.
Default: false
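A minimal sketch of a critical job (the job name is arbitrary; the Critical property also appears in the Resources examples later in this reference):
"CriticalJob": {
"Type": "Job:Command",
"Command": "echo I am a critical Job",
"RunAs": "controlm",
"Critical": true
}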
DaysKeepActive
Allows you to define the number of days to keep a job if the job did not run at its scheduled date. You can set this property for a Job, Folder, or SubFolder.
Jobs in a folder are kept until the maximum DaysKeepActive value for any of the jobs in the folder has passed. This enables you to retrieve job status of all the jobs in the folder.
"Type" : "Folder",
"Defaults": {
"RunAs":"owner8"
},
"keepForeverIfDidNotRun": {
"Type": "Job:Command",
"Command":"ls",
"DaysKeepActive": "Forever"
},
"keepForThreeDaysIfDidNotRun": {
"Type": "Job:Command",
"Command":"ls",
"DaysKeepActive": "3"
}
}
Where:
DaysKeepActive | Valid values: a number of days, or "Forever" to keep the job indefinitely Default: 0 |
Description
Enables you to add a description to a Job, Folder, or SubFolder.
{
"Type" : "Folder",
"Description":"folder description",
"SimpleCommandJob": {
"Type": "Job:Command",
"Description":"job description",
"RunAs":"owner8",
"Command":"ls"
}
}
Documentation
Allows you to add the location and name of a file that contains the documentation for the Job, Folder, or SubFolder.
"Path": "C://temp",
"FileName": "job.txt"
}
}
Allows you to add the URL location of a file that contains the documentation for the Job, Folder, or SubFolder.
"Url": "http://bmc.com"
}
}
EndFolder
Enables you to specify which job is the end point in a folder. After this job completes, no additional jobs in the folder will run, unless they have already started running. The folder is complete once all jobs that are still running complete. Remaining jobs that have not yet started running change to status WAIT SCHEDULE.
Values: true | false
Default: false
"Type": "Folder",
"EndFolderJob": {
"Type": "Job:Command",
"Command": "echo When this job ends, the folder is complete",
"RunAs": "controlm",
"EndFolder": true
}
}
Notification
Allows you to create a notification for certain scenarios before, during and after job execution. You can set notifications for a Job, Folder, or SubFolder.
The following example shows a notification sent to the JobLog upon critical job failure.
"Type":"Notify:NotOK",
"Message": "Critical job failed, details in job output",
"Urgency": "Urgent",
"Destination": "JobLog"
}
The following parameters are relevant to all notification types.
Parameters | Description |
---|---|
Message | The message to display |
Destination | The message is sent to one of the following destinations:
or to a predefined destination value (for example, FinanceGroup) |
Urgency | The message urgency is logged as one of the following:
|
Notify:OK
When setting the notification type to OK, if the job executes with no errors, the notification "Job run OK" is sent to the JobLog.
"Type": "Folder",
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "user1",
"Notify1": {
"Type": "Notify:OK",
"Message": "Job run OK",
"Destination": "JobLog"
}
}
}
Notify:NotOK
When setting the notification type to NotOK, if the job executes with errors, the notification "Job run not OK" is sent to the JobLog.
"Type": "Folder",
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "user1",
"Notify1": {
"Type": "Notify:NotOK",
"Message": "Job run not OK",
"Destination": "JobLog"
}
}
}
Notify:DoesNotStart
If the job has not started by 15:10, a notification is immediately sent to the email defined in the job with the message that the job has not started.
"Folder1": {
"Type": "Folder",
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "user1",
"Notify3": {
"Type": "Notify:DoesNotStart",
"By": "1510",
"Message": "Job has not started",
"Destination": "mail",
"Urgency": "VeryUrgent"
}
}
}
Parameters | Description |
---|---|
By | Format: HHMM. A notification is sent if the job does not start by the specified time. |
Notify:ExecutionTime
When setting the notification type ExecutionTime to LessThan, if the job completes in less than 3 minutes, the notification "Less than expected" is sent to the Alert destination.
"Folder1": {
"Type": "Folder",
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "user1",
"Notify1": {
"Type": "Notify:ExecutionTime",
"Criteria": "LessThan",
"Value": "3",
"Message": "Less than expected"
}
}
}
Criteria | Value | Description |
---|---|---|
LessThan | Value in minutes Example: 3 | If the job runs less than the defined value, a notification is sent to the defined destination |
GreaterThan | Value in minutes Example: 5 | If the job runs longer than the defined value, a notification is sent to the defined destination |
LessThanAverage | Value in minutes or percentage Example: 10% | If the job runs less than the defined value of the average execution time of the job, a notification is sent to the defined destination |
GreaterThanAverage | Value in minutes or percentage Example: 10% | If the job runs longer than the defined value of the average execution time of the job, a notification is sent to the defined destination |
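For example, the following sketch (illustrative; the notification name is arbitrary) sends a notification if the job runs more than 10% longer than its average execution time:
"Notify2": {
"Type": "Notify:ExecutionTime",
"Criteria": "GreaterThanAverage",
"Value": "10%",
"Message": "Job ran longer than its average execution time"
}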
Notify:DoesNotEnd
When setting the notification type DoesNotEnd, if the job does not end by the specified time, a message is sent to the JobLog.
"Folder1": {
"Type": "Folder",
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "user1",
"Notify1": {
"Type": "Notify:DoesNotEnd",
"By": "1212",
"Message": "Job does not end",
"Destination": "JobLog"
}
}
}
Parameters | Description |
---|---|
By | Format: HHMM. A notification is sent if the job does not end by the specified time. |
Notify:ReRun
When setting the notification type ReRun, a message is sent to the Console whenever the job reruns.
"Folder1": {
"Type": "Folder",
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "user1",
"Notify1": {
"Type": "Notify:ReRun",
"Message": "Job5 ReRun",
"Destination": "Console"
}
}
}
OverridePath
Enables you to specify an alternative location for a script file without changing the original script file that you specified for a job of type Job:Script using the FileName and FilePath properties. The alternative script file in the alternative location must have the same name as the original job script file, in order for it to run instead of the original job script file. This is especially useful for troubleshooting purposes or during development of a new release.
The following example demonstrates the use of the OverridePath property:
"Type": "Job:Script",
"FileName": "task1123.sh",
"FilePath": "/home/user1/scripts",
"Host": "myhost.mycomp.com",
"RunAs": "user1",
"Priority": "XB",
"OverridePath":"/usr/lib/overridepath" ,
"RunAsDummy": true
}
Priority
Allows you to define the priority that a job has over other jobs. You can set this property for a Job, Folder, or SubFolder.
The following options are supported:
- Very High
- High
- Medium
- Low
- Very Low (default)
"Folder8": {
"Type": "Folder",
"Description": "folder desc",
"Application": "Billing",
"SubApplication": "Payable",
"SimpleCommandJob": {
"Type": "Job:Command",
"Description": "job desc",
"Application": "BillingJobs",
"Priority": "High",
"SubApplication": "PayableJobs",
"TimeZone": "MST",
"Host": "agent8",
"RunAs": "owner8",
"Command": "ls"
}
}
Rerun
Allows you to define cyclic jobs.
The following example shows how to define a cyclic job that runs every 2 minutes indefinitely.
"Every": "2"
}
The following example shows how to run a job four times where each run starts three days after the previous run ended.
"Every": "3",
"Units": "Days",
"From": "End",
"Times": "4"
}
Units | One of the following: "Minutes", "Hours", or "Days". The default is "Minutes". |
From | One of the following values: "Start", "End", or "Target". The default is "Start". |
Times | Number of cycles to run. To run forever, define 0. The default is 0 (run forever). |
RerunIntervals
Allows you to define a set of time intervals for the job to rerun.
The following example shows how to define RerunIntervals so that the job runs again 12 months, 12 days, 11 hours, and 1 minute from the end of the last job run.
"Intervals" : ["12m","11h","12d","1m"],
"From": "End"
}
Intervals | The time intervals for the job to run again, in months, days, hours, and minutes |
From | One of the following values: "Start", "End", or "Target". The default is "Start". |
RerunSpecificTimes
Allows to rerun a job at specific times.
The following example shows how to define RerunSpecificTimes for jobs to run at those specific times.
"Type": "Job:Command",
"Command":"ls",
"RunAs": "user1",
"RerunSpecificTimes": {
"At" : ["0900","1100","1230","1710"],
"Tolerance": "20"
}
}
At | One or more times of day, in the format HHMM |
Tolerance | Maximum delay in minutes permitted for a late submission at each specific time |
RerunLimit
Allows you to set a limit to the number of times a non-cyclic job can rerun.
"Type":"Job:Command",
"Command":"ls",
"RunAs":"user1",
"RerunLimit": {
"Times":"5"
}
}
Times | Maximum number of times a non-cyclic job can rerun. Default: 0 (no limit to the number of reruns). |
Resources
Resource:Semaphore
Allows you to set the quantity of a Semaphore (also known as a quantitative resource) that the job requires. Semaphores are used to control access to a resource that is concurrently shared by other jobs. For API command information on resources, see Resource Management.
The following example shows how to add a semaphore parameter to a job.
"FolderRes": {
"Type": "Folder",
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "pok",
"Critical": true,
"sem1": {
"Type": "Resource:Semaphore",
"Quantity": "3"
}
}
}
Resource:Mutex
Allows you to set a Mutex (also known as control resource) as shared or exclusive. If the resource is shared, other jobs can use the resource concurrently. If set to exclusive, the job has to wait until the resource is available before it can run. You can set a Mutex for a Job, Folder, or SubFolder.
The following example shows how to add a Mutex parameter to a job.
"FolderRes": {
"Type": "Folder",
"job1": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "pok",
"Critical": true,
"mut1": {
"Type": "Resource:Mutex",
"MutexType": "Exclusive"
}
}
}
RetroactiveOrder
Enables you to order a job retroactively to make up for days on which the job did not run. For example, Control-M was down for two days due to a hardware issue; as soon as jobs can run again, this job is scheduled retroactively to run an additional two times, to make up for the days that Control-M was inactive.
Values: true | false
Default: false
"Type": "Job:Command",
"Command": "echo I am a retroactive order Job, I will be ordered even after my date",
"RunAs": "controlm",
"RetroactiveOrder": true
}
RunAs
Enables you to define the OS user responsible for running a job (or all jobs in a folder or subfolder).
By default, jobs are run by the user account where the Control-M/Agent is installed. To specify a different user, the agent must be running as root.
"Type": "Job:Command",
"Command": "echo I am a Job",
"RunAs": "controlm",
}
RunAsDummy
Enables you to run a job of any type (other than Dummy) as a dummy job.
This is useful, for example, when a job is temporarily not in use but is still included in a flow. You can temporarily set this job to run as a dummy job, so that there is no impact to the flow.
Values: true | false
Default: false
"Type": "Job:Command",
"Command": "echo I am a Job",
"RunAs": "controlm",
"RunAsDummy": true
}
RunOnAllAgentsInGroup
Allows you to set jobs to run on all agents in the group.
The following example shows how to define RunOnAllAgentsInGroup:
"Type": "Job:Dummy",
"RunOnAllAgentsInGroup" : true,
"Host" : "dummyHost"
}
RunOnAllAgentsInGroup | Values: true | false Default: false |
Time Zone
Allows you to add a time zone to a job, folder, or subfolder. Time zones should be defined at least 48 hours before the intended execution date. We recommend defining the same time zone for all jobs in a folder.
Time zone possible values: |
---|
HNL (GMT-10:00) |
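For example, a minimal sketch of a job that uses the HNL time zone from the table above (the job name is arbitrary; the Folder8 example earlier uses the HAW and MST time zones):
"JobInHonolulu": {
"Type": "Job:Command",
"Command": "ls",
"RunAs": "controlm",
"TimeZone": "HNL"
}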
Variables
Allows you to use job-level variables with %% notation in job fields. You can set this property for a Job, Folder, or SubFolder.
"Type": "Job:Script",
"FileName": "scriptname.sh",
"FilePath":"%%ScriptsPath",
"RunAs": "em900cob",
"Arguments":["--date", "%%TodayDate" ],
"Variables": [
{"TodayDate": "%%$DATE"},
{"ScriptsPath": "/home/em900cob"}
]
}
For specifications of system defined variables such as %%$DATE see Control-M system variables in the Control-M Online Help.
Named pools of variables can be used to share data between jobs, using the syntax "\\poolname\variable". Note that due to JSON character escaping, each backslash in the pool name must be doubled. For example, "\\\\pool1\\date".
"Type": "Job:Dummy",
"Variables": [
{"\\\\pool1\\date": "%%$DATE"}
]
},
"job2": {
"Type": "Job:Script",
"FileName": "scriptname.sh",
"FilePath":"/home/user/scripts",
"RunAs": "em900cob",
"Arguments":["--date", "%%\\\\pool1\\date" ]
}
Jobs in a folder can share variables at the folder level, using the syntax "\\variableName" to set a variable and %%variableName to use it:
"Type" : "Folder",
"Variables": [
{"TodayDate": "%%$DATE"}
],
"job1": {
"Type": "Job:Dummy",
"Variables": [
{"\\\\CompanyName": "compName"}
]
},
"job2": {
"Type": "Job:Script",
"FileName": "scriptname.sh",
"FilePath":"/home/user/scripts",
"RunAs": "em900cob",
"Arguments":["--date", "%%TodayDate", "--comp", "%%CompanyName" ]
}
}
Job types
The following series of sections provide details about the available types of jobs that you can define and the parameters that you use to define each type of job.
- Command job
- Script job
- Embedded Script job
- File Transfer job
- File Watcher job
- Database job
- Hadoop jobs (various types)
- SAP job
- Application Integrator job
- Dummy job
Job:Command
The following example shows how to use Job:Command to run operating system commands.
"Type" : "Job:Command",
"Command" : "echo hello",
"PreCommand": "echo before running main command",
"PostCommand": "echo after running main command",
"Host" : "myhost.mycomp.com",
"RunAs" : "user1"
}
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
RunAs | Identifies the operating system user that will run the job. |
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
Job:Script
The following example shows how to use Job:Script to run a script from a specified script file.
"Type" : "Job:Script",
"FileName" : "task1123.sh",
"FilePath" : "/home/user1/scripts",
"PreCommand": "echo before running script",
"PostCommand": "echo after running script",
"Host" : "myhost.mycomp.com",
"RunAs" : "user1"
}
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
RunAs | Identifies the operating system user that will run the job. |
FileName together with FilePath | Indicates the location of the script. NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts". A sketch follows this table. |
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
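The following sketch shows the doubled backslashes in the context of a job on a Windows host; the host, path, and file names are illustrative assumptions:
"WindowsScriptJob" : {
"Type" : "Job:Script",
"FileName" : "task1123.bat",
"FilePath" : "c:\\tmp\\scripts",
"Host" : "winhost.mycomp.com",
"RunAs" : "user1"
}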
Job:EmbeddedScript
The following example shows how to use Job:EmbeddedScript to run a script that you include in the JSON code. Control-M deploys this script to the Agent host during job submission. This eliminates the need to deploy scripts as part of your application stack.
"Type":"Job:EmbeddedScript",
"Script":"#!/bin/bash\\necho \"Hello world\"",
"Host":"myhost.mycomp.com",
"RunAs":"user1",
"FileName":"myscript.sh",
"PreCommand": "echo before running script",
"PostCommand": "echo after running script"
}
Script | Full content of the script, up to 64 kilobytes. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
RunAs | Identifies the operating system user that will run the job. |
FileName | Name of a script file. This property is used for the following purposes:
|
PreCommand | (Optional) A command to execute before the job is executed. |
PostCommand | (Optional) A command to execute after the job is executed. |
Job:FileTransfer
The following example shows a Job:FileTransfer.
{
"Type" : "Folder",
"Application" : "aft",
"TransferFromLocalToSFTP" :
{
"Type" : "Job:FileTransfer",
"ConnectionProfileSrc" : "LocalConn",
"ConnectionProfileDest" : "SftpConn",
"Host": "AgentHost",
"FileTransfers" :
[
{
"Src" : "/home/controlm/file1",
"Dest" : "/home/controlm/file2",
"TransferType": "Binary",
"TransferOption": "SrcToDest"
},
{
"Src" : "/home/controlm/otherFile1",
"Dest" : "/home/controlm/otherFile2",
"TransferOption": "DestToSrc"
}
]
}
}
Where:
Parameter | Description |
---|---|
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. |
ConnectionProfileSrc | The connection profile to use as the source |
ConnectionProfileDest | The connection profile to use as the destination |
ConnectionProfileDualEndpoint | If you need to use a dual-endpoint connection profile that you have set up, specify the name of the dual-endpoint connection profile instead of ConnectionProfileSrc and ConnectionProfileDest. For more information about dual-endpoint connection profiles, see ConnectionProfile:FileTransfer:DualEndPoint. |
FileTransfers | A list of file transfers to perform during job execution, each with the following properties: |
Src | Full path to the source file |
Dest | Full path to the destination file |
TransferType | (Optional) FTP transfer mode, either Ascii (for a text file) or Binary (non-textual data file). Ascii mode handles the conversion of line endings within the file, while Binary mode creates a byte-for-byte identical copy of the transferred file. Default: "Binary" |
TransferOption | (Optional) The following is a list of the transfer options:
Default: "SrcToDest" |
The following example presents a File Transfer job in which the transferred file is watched using the File Watcher utility:
{
"Type" : "Folder",
"Application" : "aft",
"TransferFromLocalToSFTPBasedOnEvent" :
{
"Type" : "Job:FileTransfer",
"Host" : "AgentHost",
"ConnectionProfileSrc" : "LocalConn",
"ConnectionProfileDest" : "SftpConn",
"FileTransfers" :
[
{
"Src" : "/home/sftp/file1",
"Dest" : "/home/sftp/file2",
"TransferType": "Binary",
"TransferOption" : "SrcToDestFileWatcher",
"PreCommandDest" :
{
"action" : "rm",
"arg1" : "/home/sftp/file2"
},
"PostCommandDest" :
{
"action" : "chmod",
"arg1" : "700",
"arg2" : "/home/sftp/file2"
},
"FileWatcherOptions":
{
"MinDetectedSizeInBytes" : "200",
"TimeLimitPolicy" : "WaitUntil",
"TimeLimitValue" : "2000",
"MinFileAge" : "3Min",
"MaxFileAge" : "10Min",
"AssignFileNameToVariable" : "FileNameEvent",
"TransferAllMatchingFiles" : true
}
}
]
}
}
This example contains the following additional optional parameters:
PreCommandSrc PreCommandDest PostCommandSrc PostCommandDest | Defines commands that occur before and after job execution. |
FileWatcherOptions | Additional options for watching the transferred file using the File Watcher utility: |
MinDetectedSizeInBytes | Defines the minimum number of bytes transferred before checking if the file size is static |
TimeLimitPolicy / TimeLimitValue | Defines the time limit to watch a file. If TimeLimitPolicy is WaitUntil, TimeLimitValue is the specific time to wait, for example 04:22 would be 4:22 AM. |
MinFileAge | Defines the minimum number of years, months, days, hours, and/or minutes that must have passed since the watched file was last modified Valid values: 9999Y9999M9999d9999h9999Min For example: 2y3d7h |
MaxFileAge | Defines the maximum number of years, months, days, hours, and/or minutes that can pass since the watched file was last modified Valid values: 9999Y9999M9999d9999h9999Min For example: 2y3d7h |
AssignFileNameToVariable | Defines the variable name that contains the detected file name |
TransferAllMatchingFiles | Whether to transfer all matching files (value of True) or only the first matching file (value of False) after waiting until the watching criteria is met. Valid values: True | False |
Job:FileWatcher
A File Watcher job enables you to detect the successful completion of a file transfer activity that creates or deletes a file. The following example shows how to manage File Watcher jobs using Job:FileWatcher:Create and Job:FileWatcher:Delete.
"Type" : "Job:FileWatcher:Create",
"RunAs":"controlm",
"Path" : "C:/path*.txt",
"SearchInterval" : "45",
"TimeLimit" : "22",
"StartTime" : "201705041535",
"StopTime" : "201805041535",
"MinimumSize" : "10B",
"WildCard" : true,
"MinimalAge" : "1Y",
"MaximalAge" : "1D2H4MIN"
},
"FWJobDelete" : {
"Type" : "Job:FileWatcher:Delete",
"RunAs":"controlm",
"Path" : "C:/path.txt",
"SearchInterval" : "45",
"TimeLimit" : "22",
"StartTime" : "201805041535",
"StopTime" : "201905041535"
}
This example contains the following parameters:
Path | Path of the file to be detected by the File Watcher You can include wildcards in the path — * for any number of characters, and ? for any single character. |
SearchInterval | Interval (in seconds) between successive attempts to detect the creation/deletion of a file |
TimeLimit | Maximum time (in minutes) to run the process without detecting the file at its minimum size (for Job:FileWatcher:Create) or detecting its deletion (for Job:FileWatcher:Delete). If the file is not detected/deleted in this specified time frame, the process terminates with an error return code. Default: 0 (no time limit) |
StartTime | The time at which to start watching the file The format is yyyymmddHHMM. For example, 201805041535 means that the File Watcher will start watching the file on May 4, 2018 at 3:35 PM. |
StopTime | The time at which to stop watching the file. Format: yyyymmddHHMM or HHMM (for the current date) |
MinimumSize | Minimum file size to monitor for, when watching a created file. Follow the specified number with the unit: B for bytes, KB for kilobytes, MB for megabytes, or GB for gigabytes. If the file name (specified by the Path parameter) contains wildcards, minimum file size is monitored only if you set the Wildcard parameter to true. |
Wildcard | Whether to monitor minimum file size of a created file if the file name (specified by the Path parameter) contains wildcards Values: true | false |
MinimalAge | (Optional) The minimum number of years, months, days, hours, and/or minutes that must have passed since the created file was last modified. For example: 2Y10M3D5H means that 2 years, 10 months, 3 days, and 5 hours must pass before the file will be watched. 2H10Min means that 2 hours and 10 minutes must pass before the file will be watched. |
MaximalAge | (Optional) The maximum number of years, months, days, hours, and/or minutes that can pass since the created file was last modified. For example: 2Y10M3D5H means that after 2 years, 10 months, 3 days, and 5 hours have passed, the file will no longer be watched. 2H10Min means that after 2 hours and 10 minutes have passed, the file will no longer be watched. |
Job:Database
The following types of database jobs are available:
- SQL Script job, using Job:Database:SQLScript
- Stored Procedure job, using Job:Database:StoredProcedure
Job:Database:SQLScript
The following example shows how to create a database job that runs a SQL script from a file system.
"OracleDBFolder": {
"Type": "Folder",
"testOracle": {
"Type": "Job:Database:SQLScript",
"Host": "AgentHost",
"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
"ConnectionProfile": "ORACLE_CONNECTION_PROFILE",
"Parameters": [
{"firstParamName": "firstParamValue"},
{"secondParamName": "secondParamValue"}
],
"Autocommit": "N",
"OutputExcecutionLog": "Y",
"OutputSQLOutput": "Y",
"SQLOutputFormat": "XML"
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Parameters | Parameters are pairs of name and value. Every name that appears in the SQL script will be replaced by its value pair. |
Autocommit | (Optional) Commits statements to the database upon successful completion Default: N |
OutputExcecutionLog | (Optional) Shows the execution log in the job output Default: Y |
OutputSQLOutput | (Optional) Shows the SQL sysout in the job output Default: N |
SQLOutputFormat | (Optional) Defines the output format as either Text, XML, CSV, or HTML Default: Text |
The following example shows the same type of job, with only the required parameters:
"OracleDBFolder": {
"Type": "Folder",
"testOracle": {
"Type": "Job:Database:SQLScript",
"Host": "app-redhat",
"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
"ConnectionProfile": "ORACLE_CONNECTION_PROFILE"
}
}
Job:Database:StoredProcedure
The following example shows how to create a database job that runs a program that is stored on the database.
"storeFolder": {
"Type": "Folder",
"jobStoredProcedure": {
"Type": "Job:Database:StoredProcedure",
"Host": "myhost.mycomp.com",
"StoredProcedure": "myProcedure",
"Parameters": [ "value1","variable1",["value2","variable2"]],
"ReturnValue":"RV",
"Schema": "public",
"ConnectionProfile": "DB-PG-CON",
"Autocommit": "N",
"OutputExcecutionLog": "Y",
"OutputSQLOutput": "Y",
"SQLOutputFormat": "XML"
}
}
This example contains the following parameters:
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
StoredProcedure | Name of stored procedure that the job runs |
Parameters | A comma-separated list of values and variables for all parameters in the procedure, in the order of their appearance in the procedure. The value that you specify for each parameter depends on the parameter's type:
- For an In parameter, specify an input value.
- For an Out parameter, specify a variable to receive the output.
- For an Inout parameter, specify a pair of [value, variable].
In the example above, three parameters are listed, in the following order: [In,Out,Inout] |
ReturnValue | A variable for the Return parameter (if the procedure contains such a parameter) |
Schema | The database schema where the stored procedure resides |
Package | (Oracle only) Name of a package in the database where the stored procedure resides The default is "*", that is, any package in the database. |
ConnectionProfile | Name of a connection profile that contains the details of the connection to the database |
Autocommit | (Optional) Whether statements are committed to the database upon successful completion Default: N |
OutputExcecutionLog | (Optional) Shows the execution log in the job output Default: Y |
OutputSQLOutput | (Optional) Shows the SQL sysout in the job output Default: N |
SQLOutputFormat | (Optional) Defines the output format as either Text, XML, CSV, or HTML Default: Text |
Job:Hadoop
Various types of Hadoop jobs are available for you to define using the Job:Hadoop objects:
- Spark Python
- Spark Scala or Java
- Pig
- Sqoop
- Hive
- DistCp (distributed copy)
- HDFS commands
- HDFS File Watcher
- Oozie
- MapReduce
- MapReduce Streaming
Job:Hadoop:Spark:Python
The following example shows how to use Job:Hadoop:Spark:Python to run a Spark Python program.
"Type": "Job:Hadoop:Spark:Python",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"SparkScript": "/home/user/processData.py"
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
"Type": "Job:Hadoop:Spark:Python",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"SparkScript": "/home/user/processData.py",
"Arguments": [
"1000",
"120"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
},
"SparkOptions": [
{"--master": "yarn"},
{"--num":"-executors 50"}
]
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Job:Hadoop:Spark:ScalaJava
The following example shows how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.
"Type": "Job:Hadoop:Spark:ScalaJava",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar": "/home/user/ScalaProgram.jar",
"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName"
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
"Type": "Job:Hadoop:Spark:ScalaJava",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar": "/home/user/ScalaProgram.jar"
"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",
"Arguments": [
"1000",
"120"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
},
"SparkOptions": [
{"--master": "yarn"},
{"--num":"-executors 50"}
]
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Job:Hadoop:Pig
The following example shows how to use Job:Hadoop:Pig to run a Pig script.
"Type" : "Job:Hadoop:Pig",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"PigScript" : "/home/user/script.pig"
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
"Type" : "Job:Hadoop:Pig",
"ConnectionProfile": "DEV_CLUSTER",
"PigScript" : "/home/user/script.pig",
"Host" : "edgenode",
"Parameters" : [
{"amount":"1000"},
{"volume":"120"}
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Job:Hadoop:Sqoop
The following example shows how to use Job:Hadoop:Sqoop to run a Sqoop job.
{
"Type" : "Job:Hadoop:Sqoop",
"Host" : "edgenode",
"ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",
"SqoopCommand" : "import --table foo --target-dir /dest_dir"
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
{
"Type" : "Job:Hadoop:Sqoop",
"Host" : "edgenode",
"ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",
"SqoopCommand" : "import --table foo",
"SqoopOptions" : [
{"--warehouse-dir":"/shared"},
{"--default-character-set":"latin1"}
],
"SqoopArchives" : "",
"SqoopFiles": "",
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
SqoopOptions | Additional options, passed as arguments to the specific Sqoop tool. |
SqoopArchives | Indicates the location of the Hadoop archives. |
SqoopFiles | Indicates the location of the Sqoop files. |
Job:Hadoop:Hive
The following example shows how to use Job:Hadoop:Hive to run a Hive beeline job.
{
"Type" : "Job:Hadoop:Hive",
"Host" : "edgenode",
"ConnectionProfile" : "HIVE_CONNECTION_PROFILE",
"HiveScript" : "/home/user1/hive.script"
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
{
"Type" : "Job:Hadoop:Hive",
"Host" : "edgenode",
"ConnectionProfile" : "HIVE_CONNECTION_PROFILE",
"HiveScript" : "/home/user1/hive.script",
"Parameters" : [
{"ammount": "1000"},
{"topic": "food"}
],
"HiveArchives" : "",
"HiveFiles": "",
"HiveOptions" : [
{"hive.root.logger": "INFO,console"}
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Parameters | Passed to beeline as --hivevar "name"="value". |
HiveOptions | Passed to beeline as --hiveconf "key"="value". |
HiveArchives | Passed to beeline as --hiveconf mapred.cache.archives="value". |
HiveFiles | Passed to beeline as --hiveconf mapred.cache.files="value". |
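Taken together, the optional parameters in the example above translate roughly to a beeline invocation like the following (illustrative only; the exact flags depend on your beeline version and connection profile):
beeline -u jdbc:hive2://<Host>:<Port>/<DatabaseName> -f /home/user1/hive.script --hivevar "amount"="1000" --hivevar "topic"="food" --hiveconf hive.root.logger=INFO,console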
Job:Hadoop:DistCp
The following example shows how to use Job:Hadoop:DistCp to run a DistCp job. DistCp (distributed copy) is a tool used for large inter/intra-cluster copying.
{
"Type" : "Job:Hadoop:DistCp",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"TargetPath" : "hdfs://nns2:8020/foo/bar",
"SourcePaths" :
[
"hdfs://nn1:8020/foo/a"
]
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
{
"Type" : "Job:Hadoop:DistCp",
"Host" : "edgenode",
"ConnectionProfile" : "HADOOP_CONNECTION_PROFILE",
"TargetPath" : "hdfs://nns2:8020/foo/bar",
"SourcePaths" :
[
"hdfs://nn1:8020/foo/a",
"hdfs://nn1:8020/foo/b"
],
"DistcpOptions" : [
{"-m":"3"},
{"-filelimit ":"100"}
]
}
TargetPath, SourcePaths and DistcpOptions | Passed to the distcp tool in the following way: distcp <Options> <TargetPath> <SourcePaths>. |
Job:Hadoop:HDFSCommands
The following example shows how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.
{
"Type" : "Job:Hadoop:HDFSCommands",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"Commands": [
{"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Job:Hadoop:HDFSFileWatcher
The following example shows how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.
{
"Type" : "Job:Hadoop:HDFSFileWatcher",
"Host" : "edgenode",
"ConnectionProfile" : "DEV_CLUSTER",
"HdfsFilePath" : "/inputs/filename",
"MinDetecedSize" : "1",
"MaxWaitTime" : "2"
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
HdfsFilePath | Specifies the full path of the file being watched. |
MinDetecedSize | Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives, but the size is not met, the job continues to watch the file. |
MaxWaitTime | Defines the maximum number of minutes to wait for the file to meet the watching criteria. If criteria are not met (file did not arrive, or minimum size was not reached) the job fails after this maximum number of minutes. |
Job:Hadoop:Oozie
The following example shows how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.
"Type" : "Job:Hadoop:Oozie",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"JobPropertiesFile" : "/home/user/job.properties",
"OozieOptions" : [
{"inputDir":"/usr/tucu/inputdir"},
{"outputDir":"/usr/tucu/outputdir"}
]
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
JobPropertiesFile | The path to the job properties file. |
Optional parameters:
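For example, the following sketch adds pre- and post-commands to the Oozie job, following the same PreCommands/PostCommands pattern used by the other Hadoop job types:
{
"Type" : "Job:Hadoop:Oozie",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"JobPropertiesFile" : "/home/user/job.properties",
"PreCommands": {
"FailJobOnCommandFailure" : false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" : true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}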
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
OozieOptions | Set or override values for the given job properties. |
Job:Hadoop:MapReduce
The following example shows how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.
{
"Type" : "Job:Hadoop:MapReduce",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
"MainClass" : "pi",
"Arguments" :[
"1",
"2"]
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
{
"Type" : "Job:Hadoop:MapReduce",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
"MainClass" : "pi",
"Arguments" :[
"1",
"2"
],
"PreCommands": {
"FailJobOnCommandFailure" :false,
"Commands" : [
{"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
{"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
]
},
"PostCommands": {
"FailJobOnCommandFailure" :true,
"Commands" : [
{"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
]
}
}
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
Job:Hadoop:MapredStreaming
The following example shows how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.
"Type": "Job:Hadoop:MapredStreaming",
"Host" : "edgenode",
"ConnectionProfile": "DEV_CLUSTER",
"InputPath": "/user/robot/input/*",
"OutputPath": "/tmp/output",
"MapperCommand": "mapper.py",
"ReducerCommand": "reducer.py",
"GeneralOptions": [
{"-D": "fs.permissions.umask-mode=000"},
{"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}
]
}
ConnectionProfile | Name of the Hadoop connection profile to use for the connection. See ConnectionProfile:Hadoop. |
Host | Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell. NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host. |
Optional parameters:
PreCommands and PostCommands | Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. |
FailJobOnCommandFailure | Determines whether the job fails if a pre- or post-command fails. The default for PreCommands is true, that is, the job fails if any pre-command fails. The default for PostCommands is false, that is, the job completes successfully even if a post-command fails. |
GeneralOptions | Additional Hadoop command options passed to the hadoop-streaming.jar, including generic options and streaming options. |
Job:SAP
SAP-type jobs enable you to manage SAP processes through the Control-M environment. To manage SAP-type jobs, you must have the Control-M for SAP plugin installed in your Control-M environment.
Job:SAP:R3:COPY
This job type enables you to create a new SAP R3 job by duplicating an existing job. The following example shows how to use Job:SAP:R3:COPY:
"Type" : "Job:SAP:R3:COPY",
"ConnectionProfile":"SAP_CON",
"SapJobName" : "CHILD_1",
"Exec": "Server",
"Target" : "Server-name",
"JobCount" : "SpecificJob",
"JobCountSpecificName" : "sap-job-1234",
"NewJobName" : "My-New-Sap-Job",
"StartCondition" : "AfterEvent",
"AfterEvent" : "HOLA",
"AfterEventParameters" : "parm1 parm2",
"RerunFromPointOfFailure": true,
"CopyFromStep" : "4",
"PostJobAction" : {
"Spool" : "CopyToFile",
"SpoolFile": "spoolfile.log",
"SpoolSaveToPDF" : true,
"JobLog" : "CopyToFile",
"JobLogFile": "Log.txt",
"JobCompletionStatusWillDependOnApplicationStatus" : true
},
"DetectSpawnedJob" : {
"DetectAndCreate": "SpecificJobDefinition",
"JobName" : "Specific-Job-123",
"StartSpawnedJob" : true,
"JobEndInControlMOnlyAftreChildJobsCompleteOnSap" : true,
"JobCompletionStatusDependsOnChildJobsStatus" : true
}
}
This SAP job object uses the following parameters:
ConnectionProfile | Name of the SAP connection profile to use for the connection |
SapJobName | Name of SAP job to copy |
Exec | Type of execution target where the SAP job will run, either Server (a specific SAP Application Server) or Group (an SAP logon group). The target itself is named in the next parameter. |
Target | The name of the SAP application server or SAP group (depending on the value specified in the previous parameter) |
JobCount | How to define a unique ID number for the SAP job, one of several predefined options. If you specify SpecificJob, you must provide the next parameter. |
JobCountSpecificName | A unique SAP job ID number for a specific job (that is, for when JobCount is set to SpecificJob) |
NewJobName | Name of the newly created job |
StartCondition | Specifies when the job should run, one of several options. In the example above, AfterEvent makes the job wait for a specified SAP event. |
AfterEvent | The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent) |
AfterEventParameters | Parameters in the SAP event to watch for. Use space characters to separate multiple parameters. |
RerunFromPointOfFailure | Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default) Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun. |
CopyFromStep | The number of a specific step in the SAP R/3 job from which to rerun or copy The default is step 1 (that is, the beginning of the job). Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored. |
PostJobAction | This object groups together several parameters that control post-job actions for the SAP R/3 job. |
Spool | How to manage spool output, one of several options. In the example above, CopyToFile copies the spool output to a file. |
SpoolFile | The file to which to copy the job's spool output (if Spool is set to CopyToFile) |
SpoolSaveToPDF | Whether to save the job's spool output in PDF format (if Spool is set to CopyToFile) |
JobLog | How to manage job log output, one of several options. In the example above, CopyToFile copies the log output to a file. |
JobLogFile | The file to which to copy the job's log output (if JobLog is set to CopyToFile) |
JobCompletionStatusWillDependOnApplicationStatus | Whether job completion status depends on SAP application status, either true or false (the default) |
DetectSpawnedJob | This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job |
DetectAndCreate | How to determine the properties of detected spawned jobs. If set to SpecificJobDefinition, properties are taken from a specific job that you name in the next parameter. |
JobName | Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition) Note: The specified job must exist in the same folder as the current job and use the same connection profile. BMC recommends that it also have the same Application and SubApplication values. |
StartSpawnedJob | Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default) |
JobEndInControlMOnlyAftreChildJobsCompleteOnSap | Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default) |
JobCompletionStatusDependsOnChildJobsStatus | Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default) When set to true, the parent job does not end OK if any child job fails. |
Job:ApplicationIntegrator
Use Job:ApplicationIntegrator:<JobType> to define a job of a custom type using the Control-M Application Integrator designer. For information about designing job types, see the Control-M Application Integrator Help.
The following example shows the JSON code used to define a job type named Monitor Remote Job:
"Type": "Job:ApplicationIntegrator:Monitor Remote Job",
"ConnectionProfile": "AI_CONNECTION_PROFILE",
"AI-Host": "Host1",
"AI-Port": "5180",
"AI-User Name": "admin",
"AI-Password": "*******",
"AI-Remote Job to Monitor": "remoteJob5",
"RunAs": "controlm"
}
In this example, the ConnectionProfile and RunAs properties are standard job properties used in Control-M Automation API jobs. The other job properties will be created in the Control-M Application Integrator, and they must be prefixed with "AI-" in the .json code.
Job:Dummy
The following example shows how to use Job:Dummy to define a job that always ends successfully without running any commands.
"Type" : "Job:Dummy"
}
Connection Profile
Connection profiles define access methods and security credentials for a specific application, and they can be referenced by multiple jobs. Before running jobs that reference a connection profile, you must deploy the connection profile definition.
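For example, assuming the Control-M Automation API command-line interface is installed and a connection profile is saved in a file (the file name here is hypothetical), it could be deployed with the ctm deploy command:
ctm deploy connectionprofile.json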
ConnectionProfile:Hadoop
These examples show how to use connection profiles for the various types of Hadoop jobs.
Hadoop (all types)
These are the required parameters for all Hadoop job types.
{
"Type" : "ConnectionProfile:Hadoop",
"TargetAgent" : "edgenode",
"TargetCTM" : "CTMHost"
}
Parameter | Description |
TargetAgent | The Control-M/Agent to which to deploy the connection profile. |
TargetCTM | The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default. |
These are the optional parameters for defining the user running the Hadoop job types.
{
"Type" : "ConnectionProfile:Hadoop",
"TargetAgent" : "edgenode",
"TargetCTM" : "CTMHost",
"RunAs": "",
"KeyTabPath":""
}
RunAs | Defines the user of the account on which to run Hadoop jobs. Leave this field empty to run Hadoop jobs using the user account where the agent was installed. If you define a specific RunAs user, the Control-M/Agent must run as root. |
In the case of Kerberos security, use the following parameters:
RunAs | Principal name of the user |
KeyTabPath | Keytab file path for the target user |
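For example, here is a minimal sketch of a Kerberos-enabled connection profile (the principal name and keytab path shown are hypothetical):
{
"Type" : "ConnectionProfile:Hadoop",
"TargetAgent" : "edgenode",
"TargetCTM" : "CTMHost",
"RunAs": "hdfsuser@EXAMPLE.COM",
"KeyTabPath": "/home/hdfsuser/hdfsuser.keytab"
}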
Apache Oozie
The following example shows a connection profile that defines access to an Oozie server.
{
"Type" : "ConnectionProfile:Hadoop",
"TargetAgent" : "hdp-ubuntu",
"Oozie" :
{
"SslEnabled" : false,
"Host" : "hdp-centos",
"Port" : "11000"
}
}
Parameter | Description |
---|---|
Host | Oozie server host |
Port | Oozie server port Default: 11000 |
SslEnabled | true | false Default: false |
Apache Sqoop
The following example shows a connection profile that defines a Sqoop data source and access credentials.
{
"Type" : "ConnectionProfile:Hadoop",
"TargetAgent" : "edgenode",
"TargetCTM" : "CTMHost",
"Sqoop" :
{
"User" : "username",
"Password" : "userpassword",
"ConnectionString" : "jdbc:mysql://mysql.server/database",
"DriverClass" : "com.mysql.jdbc.Driver"
}
}
The following table describes the parameters in the example above, as well as several additional optional parameters:
Parameter | Description |
---|---|
User | The database user connected to the Sqoop server |
Password | A password for the specified user If you are updating an existing connection profile and do not want to change the existing password, enter a string of 5 asterisk characters, "*****". |
ConnectionString | JDBC-compliant database: The connection string used to connect to the database |
DriverClass | JDBC-compliant database: The driver class for the driver .jar file, which indicates the entry-point to the driver |
PasswordFile | (Optional) The full path to a file located on the HDFS that contains the password to the database Note: To use a JCEKS file, include the .jceks file extension. |
DatabaseVendor | (Optional) The database vendor of an automatically supported database used with Sqoop, for example, MySQL (as used in the connection string above). |
DatabaseName | (Optional) Name of an automatically supported database used with Sqoop |
DatabaseHost | (Optional) The host server of an automatically supported database used with Sqoop |
DatabasePort | (Optional) The port number for an automatically supported database used with Sqoop |
Apache Tajo
The following example shows a connection profile that defines access to a Tajo server. Tajo is an advanced data warehousing system on top of HDFS.
"TAJO_CP" :
{
"Type" : "ConnectionProfile:Hadoop",
"TargetAgent" : "edgenode",
"Tajo":
{
"BinaryPath": "$TAJO_HOME/bin/",
"DatabaseName": "myTajoDB",
"MasterServerName" : "myTajoServer",
"MasterServerPort": "26001"
}
}
}
Parameter | Description |
---|---|
BinaryPath | Path to the bin directory where the tsql utility is located |
DatabaseName | Name of the Tajo database |
MasterServerName | Host name of the server where the Tajo master is running |
MasterServerPort | Tajo master port number |
Apache Hive
The following example shows a connection profile that defines a Hive beeline endpoint and access credentials. The parameters in the example translate to this beeline command:
beeline -u jdbc:hive2://<Host>:<Port>/<DatabaseName>
{
"Type" : "ConnectionProfile:Hadoop",
"TargetAgent" : "edgenode",
"TargetCTM" : "CTMHost",
"Hive" :
{
"Host" : "hive_host_name",
"Port" : "10000",
"DatabaseName" : "hive_database",
}
}
The following shows how to use optional parameters for a Hadoop Hive job type connection profile.
The parameters in the example translate to this beeline command:
beeline -u jdbc:hive2://<Host>:<Port>/<DatabaseName>;principal=<Principal> -n <User> -p <Password>
{
"Type" : "ConnectionProfile:Hadoop",
"TargetAgent" : "edgenode",
"TargetCTM" : "CTMHost",
"Hive" :
{
"Host" : "hive_host_name",
"Port" : "10000",
"DatabaseName" : "hive_database",
"User" : "user_name",
"Password" : "user_password",
"Principal" : "Server_Principal_of_HiveServer2@Realm"
}
}
If you are updating an existing connection profile and do not want to change the existing password, enter a string of 5 asterisk characters, "*****".
ConnectionProfile:FileTransfer
The following examples show you how to define a connection profile for File Transfers. File Transfer connection profiles are divided into two types, depending on the number of hosts for which they contain connection details:
- Single endpoint: Each connection profile contains the connection details of a single host. Such a connection profile can be used for either the source host or the destination host in a file transfer.
- Dual endpoint: The connection profile contains connection details of two hosts, both the source host and the destination host, in a file transfer.
Connection details can be based on the FTP or SFTP communication protocols or can be to a local file system.
ConnectionProfile:FileTransfer:FTP
The following examples show a connection profile for a file transfer to a single endpoint using the FTP communication protocol.
Simple ConnectionProfile:FileTransfer:FTP
"Type" : "ConnectionProfile:FileTransfer:FTP",
"TargetAgent" : "AgentHost",
"TargetCTM" : "CTMHost",
"HostName": "FTPServer",
"User" : "FTPUser",
"Password" : "ftp password"
}
ConnectionProfile:FileTransfer:FTP with optional parameters
"Type" : "ConnectionProfile:FileTransfer:FTP",
"TargetAgent" : "AgentHost",
"TargetCTM" : "CTMHost",
"WorkloadAutomationUsers":["john","bob"],
"HostName": "FTPServer",
"Port": "21",
"User" : "FTPUser",
"Password" : "ftp password",
"HomeDirectory": "/home/FTPUser",
"OsType": "Unix"
}
TargetAgent | The Control-M/Agent to which to deploy the connection profile. |
TargetCTM | The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default. |
WorkloadAutomationUsers | (Optional) Users that are allowed to access the connection profile. NOTE: You can use "*" as a wildcard. For example, "e*" Default: * (all users) |
VerifyChecksum | (Optional) Enable or disable error detection on file transfer True | False Default: False |
OsType | (Optional) FTP server operating system type Default: Unix Types: Unix, Windows |
Password | (Optional) Password for FTP server account. Use Secrets in code to not expose the password in the code. If you are updating an existing connection profile and do not want to change the existing password, enter a string of 5 asterisk characters, "*****". |
HomeDirectory | (Optional) User home directory |
Passive | (Optional) Sets the FTP client mode, either passive (True) or active (False). Passive mode is recommended for servers behind a firewall. True | False Default: False |
ConnectionProfile:FileTransfer:SFTP
The following examples show a connection profile for a file transfer to a single endpoint using the SFTP communication protocol.
Simple ConnectionProfile:FileTransfer:SFTP
"Type": "ConnectionProfile:FileTransfer:SFTP",
"TargetAgent": "AgentHost",
"TargetCTM" : "CTMHost",
"HostName": "SFTPServer",
"Port": "22",
"User" : "SFTPUser",
"Password" : "sftp password"
}
ConnectionProfile:FileTransfer:SFTP with optional parameters
"Type": "ConnectionProfile:FileTransfer:SFTP",
"TargetAgent": "AgentHost",
"TargetCTM" : "CTMHost",
"HostName": "SFTPServer",
"Port": "22",
"User" : "SFTPUser",
"HomeDirectory": "/home/SFTPUser",
"PrivateKeyName": "/home/controlm/ctm_agent/ctm/cm/AFT/data/Keys/sFTPkey",
"Passphrase": "passphrase"
}
TargetAgent | The Control-M/Agent to which to deploy the connection profile. |
TargetCTM | The Control-M/Server to which to deploy the connection profile. When there is only one Control-M/Server, this will be the default. |
WorkloadAutomationUsers | (Optional) Users that are allowed to access the connection profile. NOTE : You can use "*" as a wildcard. For example, "e*" Default: * (all users) |
VerifyChecksum | (Optional) Enable or disable error detection on file transfer True | False Default: False |
PrivateKeyName | (Optional) Private key full file path |
Passphrase | (Optional) Password for the private key. Use Secrets in code to not expose the password in the code. If you are updating an existing connection profile and do not want to change the existing password, enter a string of 5 asterisk characters, "*****". |
Password | (Optional) Password for SFTP Server account. Use Secrets in code to not expose the password in the code. If you are updating an existing connection profile and do not want to change the existing password, enter a string of 5 asterisk characters, "*****". |
HomeDirectory | (Optional) User home directory |
ConnectionProfile:FileTransfer:Local
The following example shows a connection profile for a file transfer to a single endpoint on a Local File System.
"Type" : "ConnectionProfile:FileTransfer:Local",
"TargetAgent" : "AgentHost",
"TargetCTM" : "CTMHost",
"User" : "controlm",
"Password" : "local password"
}
TargetAgent | The Control-M/Agent to which to deploy the connection profile. |
TargetCTM | The Control-M/Server to which to deploy the connection profile. When there is only one Control-M/Server, this will be the default. |
WorkloadAutomationUsers | (Optional) Users that are allowed to access the connection profile. NOTE : You can use "*" as a wildcard. For example, "e*" Default: * (all users) |
VerifyChecksum | (Optional) Enable or disable error detection on file transfer True | False Default: False |
OsType | (Optional) Local server operating system type Default: Unix Types: Unix, Windows |
Password | (Optional) Password for local account. Use Secrets in code to not expose the password in the code. If you are updating an existing connection profile and do not want to change the existing password, enter a string of 5 asterisk characters, "*****". |
ConnectionProfile:FileTransfer:DualEndPoint
In a dual-endpoint connection profile, you specify connection details both for the source host and for the destination host of the file transfer. Connection details can be based on the FTP or SFTP communication protocols or can refer to a local file system.
The following example shows a dual-endpoint connection profile. One endpoint uses the FTP communication protocol and the other endpoint uses the SFTP communication protocol.
"Type" : "ConnectionProfile:FileTransfer:DualEndPoint",
"WorkloadAutomationUsers" : [ "emuser1" ],
"TargetAgent" : "AgentHost",
"src_endpoint" : {
"Type" : "Endpoint:Src:FTP",
"User" : "controlm",
"Port" : "10023",
"HostName" : "localhost",
"Password" : "password",
"HomeDirectory" : "/home/controlm/"
},
"dest_endpoint" : {
"Type" : "Endpoint:Dest:SFTP",
"User" : "controlm",
"Port" : "10023",
"HostName" : "host2",
"Password" : "password",
"HomeDirectory" : "/home/controlm/"
}
}
The dual-endpoint connection profile can have the following parameters:
TargetAgent | The Control-M/Agent to which to deploy the connection profile. |
TargetCTM | The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default. |
WorkloadAutomationUsers | (Optional) Users that are allowed to access the connection profile. NOTE: You can use "*" as a wildcard. For example, "e*" Default: * (all users) |
VerifyChecksum | (Optional) Enable or disable error detection on file transfer True | False Default: False |
Endpoint | Two endpoint objects, one for the source host and one for the destination host. Each endpoint can be based on FTP, SFTP, or a local file system, resulting in the following possible types of Endpoint objects:
- Endpoint:Src:FTP
- Endpoint:Src:SFTP
- Endpoint:Src:Local
- Endpoint:Dest:FTP
- Endpoint:Dest:SFTP
- Endpoint:Dest:Local
Parameters under the Endpoint object are the same as the remaining parameters for a single-endpoint connection profile, depending on the type of connection. |
ConnectionProfile:Database
The connection profile for database allows you to connect to the following database types:
- ConnectionProfile:Database:DB2
- ConnectionProfile:Database:Sybase
- ConnectionProfile:Database:PostgreSQL
- ConnectionProfile:Database:MSSQL
- ConnectionProfile:Database:Oracle
- ConnectionProfile:Database:JDBC
The following example shows how to define an MSSQL database connection profile.
"MSSQL_CONNECTION_PROFILE": {
"Type": "ConnectionProfile:Database:MSSQL",
"TargetCTM":"CTMHost",
"TargetAgent": "AgentHost",
"Host": "MSSQLHost",
"User": "db user",
"Port":"1433",
"Password": "db password",
"DatabaseName": "master",
"DatabaseVersion": "2005",
"MaxConcurrentConnections": "9",
"ConnectionRetryTimeOut": "34",
"ConnectionIdleTime": "45"
},
"MSsqlDBFolder": {
"Type": "Folder",
"testMSSQL": {
"Type": "Job:Database:SQLScript",
"Host": "app-redhat",
"SQLScript": "/home/controlm/sqlscripts/selectArgs.sql",
"ConnectionProfile": "MSSQL_CONNECTION_PROFILE",
"Parameters": [
{ "firstParamName": "firstParamValue" },
{ "second": "secondParamValue" }
],
"Autocommit": "N",
"OutputExcecutionLog": "Y",
"OutputSQLOutput": "Y",
"SQLOutputFormat": "XML"
}
}
}
This example contains the following parameters:
Parameter | Description |
---|---|
Port | The database port number. If the port is not specified, the following default values are used for each database type (matching the examples below):
- DB2: 50000
- MSSQL: 1433
- Oracle: 1521
- PostgreSQL: 5432
- Sybase: 4100 |
Password | Password to the database account. Use Secrets in code to not expose the password in the code. If you are updating an existing connection profile and do not want to change the existing password, enter a string of 5 asterisk characters, "*****". |
DatabaseName | The name of the database |
DatabaseVersion | The version of the database. The default is the earliest database version supported by Control-M for database V9 for the selected database type. |
MaxConcurrentConnections | The maximum number of connections that the database can process at the same time. Allowed values: 1–512 |
ConnectionRetryTimeOut | The number of seconds to wait before attempting to connect again. Allowed values: 1–300 |
ConnectionIdleTime | The number of seconds that the database connection profile can remain idle before disconnecting. Default value: 300 seconds |
ConnectionRetryNum | The number of times to attempt to reconnect after a connection failure. Allowed values: 1–24 |
AuthenticationType | (MSSQL only) The SQL Server authentication mode, either SQL Server authentication (the user and password defined in the connection profile) or Windows authentication. |
ConnectionProfile:Database:DB2
The following example shows how to define a connection profile for DB2. Additional available parameters are described in the table above.
"DB2_CONNECTION_PROFILE": {
"Type": "ConnectionProfile:Database:DB2",
"TargetCTM":"CTMHost",
"TargetAgent": "AgentHost",
"Host": "DB2Host",
"Port":"50000",
"User": "db user",
"Password": "db password",
"DatabaseName": "db2"
}
}
ConnectionProfile:Database:Sybase
The following example shows how to define a connection profile for Sybase. Additional available parameters are described in the table above.
"SYBASE_CONNECTION_PROFILE": {
"Type": "ConnectionProfile:Database:Sybase",
"TargetCTM":"CTMHost",
"TargetAgent": "AgentHost",
"Host": "SybaseHost",
"Port":"4100",
"User": "db user",
"Password": "db password",
"DatabaseName": "Master"
}
}
ConnectionProfile:Database:PostgreSQL
The following example shows how to define a connection profile for PostgreSQL. Additional available parameters are described in the table above.
"POSTGRESQL_CONNECTION_PROFILE": {
"Type": "ConnectionProfile:Database:PostgreSQL",
"TargetCTM":"CTMHost",
"TargetAgent": "AgentHost",
"Host": "PostgreSQLHost",
"Port":"5432",
"User": "db user",
"Password": "db password",
"DatabaseName": "postgres"
}
}
ConnectionProfile:Database:Oracle
You can define an Oracle database connection profile in one of three ways:
- ConnectionProfile:Database:Oracle:SID
- ConnectionProfile:Database:Oracle:ServiceName
- ConnectionProfile:Database:Oracle:ConnectionString
ConnectionProfile:Database:Oracle:SID
The following example shows how to define a connection profile for an Oracle database using the SID identifier. Additional available parameters are described in the table above.
"Type": "ConnectionProfile:Database:Oracle:SID",
"TargetCTM": "controlm",
"Port": "1521",
"TargetAgent": "AgentHost",
"Host": "OracleHost",
"User": "db user",
"Password": "db password",
"SID": "ORCL"
}
ConnectionProfile:Database:Oracle:ServiceName
The following example shows how to define a connection profile for an Oracle database using a single service name. Additional available parameters are described in the table above.
"Type": "ConnectionProfile:Database:Oracle:ServiceName",
"TargetCTM": "controlm",
"Port": "1521",
"TargetAgent": "AgentHost",
"Host": "OracleHost",
"User": "db user",
"Password": "db password",
"ServiceName": "ORCL"
}
ConnectionProfile:Database:Oracle:ConnectionString
The following example shows how to define a connection profile for an Oracle database using a connection string that contains text from your tnsnames.ora file. Additional available parameters are described in the table above.
"ORACLE_CONNECTION_PROFILE": {
"Type": "ConnectionProfile:Database:Oracle:ConnectionString",
"TargetCTM":"CTMHost",
"ConnectionString":"OracleHost:1521:ORCL",
"TargetAgent": "AgentHost",
"User": "db user",
"Password": "db password"
}
}
ConnectionProfile:Database:JDBC
The following example shows how to define a connection profile for a custom database type defined using JDBC. Additional available parameters are described in the table above.
"JDBC_CONNECTION_PROFILE": {
"Type": "ConnectionProfile:Database:JDBC",
"User":"db user",
"TargetCTM":"CTMHost",
"Host": "PGSQLHost",
"Driver":"PGDRV",
"Port":"5432",
"TargetAgent": "AgentHost",
"Password": "db password",
"DatabaseName":"dbname"
}
}
Parameter | Description |
---|---|
Driver | JDBC driver name as defined in Control-M or as defined using the Driver object |
Driver:JDBC:Database
You can define a driver to be used by a connection profile. The following example shows the parameters that you use to define a driver:
"MyDriver": {
"Type": "Driver:Jdbc:Database",
"TargetAgent":"app-redhat",
"StringTemplate":"jdbc:sqlserver://<HOST>:<PORT>/<DATABASE>",
"DriverJarsFolder":"/home/controlm/ctm/cm/DB/JDBCDrivers/PostgreSQL/9.4/",
"ClassName":"org.postgresql.Driver",
"LineComment" : "--",
"StatementSeparator" : ";"
}
}
Parameter | Description |
---|---|
TargetAgent | The Control-M/Agent to which to deploy the driver. |
StringTemplate | The structure according to which a connection profile string is created. |
DriverJarsFolder | The path to the folder where the database driver jars are located. |
ClassName | Name of driver class |
LineComment | The syntax used for line comments in the scripts that run on the database. |
StatementSeparator | The syntax used for statement separator in the scripts that run on the database. |
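A JDBC connection profile can then reference this driver by name. The following sketch reuses the MyDriver object defined above (the host, credentials, and database name are placeholders):
{
"JDBC_MYDRIVER_PROFILE": {
"Type": "ConnectionProfile:Database:JDBC",
"TargetCTM": "CTMHost",
"TargetAgent": "app-redhat",
"Host": "PGSQLHost",
"Port": "5432",
"User": "db user",
"Password": "db password",
"DatabaseName": "dbname",
"Driver": "MyDriver"
}
}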
ConnectionProfile:SAP
SAP connection profiles that you create for use by your SAP jobs can be defined for logon to a specific SAP Application Server or for an SAP logon group.
The following example is for a connection profile for a specific SAP Application Server:
"SAP-SERVER-CONN" : {
"Type" : "ConnectionProfile:SAP",
"User" : "my-user",
"Password" : "123456",
"SapClient" : "100",
"Language" : "",
"XBPVersion": "XBP3.0",
"AppVersion": "R3",
"ApplicationServerLogon" : {
"Host" : "localhost",
"SystemNumber" : "12"
},
"SecuredNetworkConnection": {
"SncLib": " " ,
"SncPartnerName": "",
"QualityOfProtection": "2",
"SncMyName":""
},
"SapResponseTimeOut" : "180",
"UseExtended": yes,
"SolutionManagerConnectionProfile" : "IP4-GROUP",
"IsSolutionManagerConnectionProfile": true
}
}
The following example is for a connection profile for an SAP logon group:
"SAP-GROUP-CONN" : {
"Type" : "ConnectionProfile:SAP",
"User" : "my-user",
"Password" : "123456",
"SapClient" : "100",
"Language" : "",
"XBPVersion": "XBP3.0",
"AppVersion": "R3",
"GroupLogon": {
"SystemID": "123",
"MessageServerHostName": "msgsv",
"GroupName": "group1"
},
"SecuredNetworkConnection": {
"SncLib": " " ,
"SncPartnerName": "",
"QualityOfProtection": "2",
"SncMyName":""
},
"SapResponseTimeOut" : "180",
"UseExtended": yes,
"SolutionManagerConnectionProfile" : "IP4-GROUP",
"IsSolutionManagerConnectionProfile": true
}
}
Parameter | Description |
---|---|
User | SAP user name |
Password | Password for the SAP user If you are updating an existing connection profile and do not want to change the existing password, enter a string of 5 asterisk characters, "*****". |
SapClient | SAP client number, a three-digit number |
Language | The SAP language. The default is English. |
XBPVersion | SAP XBP interface version installed on the SAP server, for example, XBP3.0 (as in the examples above). |
AppVersion | Version of the SAP application. Currently, only R3 is supported. |
GroupLogon | Parameters for defining an SAP logon group |
SystemID | SAP system ID, three alpha-numeric characters |
MessageServerHostName | Host name of the computer on which the SAP System Message Server is running |
GroupName | SAP logon group name, as defined in the SAP SMLG transaction. |
ApplicationServerLogon | Parameters for defining a specific SAP Application Server |
Host | The host name of the computer that runs the SAP Application Server |
SystemNumber | SAP instance number, a two-digit number |
SecuredNetworkConnection | Parameters that you can use to activate a secured network connection |
SncLib | Full path and file name of the SAP cryptographic library on the client For example: /home1/agsapfp/SNC/libsapcrypto.sl |
SncPartnerName | SNC name of the application server For example: p:CN=LE1, OU=BPM, O=BMC, C=US |
QualityOfProtection | Quality of the protection Valid values: 1, 2, 3, 8 (default), 9 |
SncMyName | SNC name of the user sending the RFC Default: The name provided by the security product for the user who is logged on. |
SapResponseTimeOut | Number of seconds that the program waits for an RFC request to be handled by the SAP system before returning a timeout error Default: 180 |
UseExtended | Whether the extended functionality of Control-M for SAP should be used, either No (default) or Yes NOTE: If you select this option, the Control-M Function Modules must be installed on your SAP server, as described in Control-M for SAP XBP interface and Control-M Function Modules in the Control-M for SAP online help. |
SolutionManagerConnectionProfile | Solution Manager connection profile for retrieval of SAP job Documentation This parameter is relevant only if the current connection profile is not a Solution Manager connection profile, as discussed in the next parameter. |
IsSolutionManagerConnectionProfile | Whether the current connection profile is a Solution Manager connection profile, either true or false (the default) |
ConnectionProfile:ApplicationIntegrator
The following example shows how to define a connection profile for a job type defined in the Control-M Application Integrator. For information about the Control-M Application Integrator, see the Control-M Application Integrator Help.
Properties defined for the connection profile in Control-M Application Integrator are all prefixed with "AI-" in the .json code, as shown in the following example.
"Type": "ConnectionProfile:ApplicationIntegrator:<JobType>",
"TargetAgent": "AgentHost",
"TargetCTM":"CTMHost",
"AI-Param03": "45",
"AI-Param04": "group"
}
Secrets in Code
You can use the Secret object in your JSON code when you do not want to expose confidential information in the source (for example, the password field in a Connection Profile). The syntax below enables you to reference a named secret as defined in the Control-M vault. To learn how to manage secrets, see section Config Secrets. The value of the secret is resolved during deployment.
The following syntax is used to reference a secret.
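"<ParameterName>" : {"Secret": "<secret name>"}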
The following example shows how to use secrets in code:
"Type": "ConnectionProfile:Hadoop",
"Hive": {
"Host": "hiveServer",
"Principal": "a@bc",
"Port": "1024",
"User": "emuser",
"Password": {"Secret": "hive_dev_secret"}
}
}
Defaults
Allows you to define default parameter values for all objects at once.
The following example shows how to define scheduling criteria using the When parameter. This configures all jobs to run according to the same scheduling criteria. Note that if you also set a specific value at the job level, the job-level value overrides the value in the global-level Defaults section.
"Defaults" : {
"Host" : "HOST",
"When" : {
"WeekDays":["MON","TUE"],
"FromTime":"1500",
"ToTime":"1800"
}
}
}
The following example shows how to define defaults for all objects of type Job:*.
"Defaults" : {
"Job": {
"Host" : "HOST",
"When" : {
"WeekDays":["MON","TUE"],
"FromTime":"1500",
"ToTime":"1800"
}
}
}
}
The following example shows how to define defaults at the folder level that override defaults at the global level.
"Folder1": {
"Type": "Folder",
"Defaults" : {
"Job:Hadoop": {
"Host" : "HOST1",
"When" : {
"WeekDays":["MON","TUE"],
"FromTime":"1500",
"ToTime":"1800"
}
}
}
}
}
The following example shows how to define defaults that are user-defined objects, that is, objects with a Type property and a user-defined name. In this example, defaults are defined for an object named actionIfSuccess of type If. For each job that succeeds, an email is sent.
"Defaults" : {
"Job": {
"Host" : "HOST",
"actionIfSuccess" : {
"Type": "If",
"CompletionStatus":"OK",
"mailTeam": {
"Type": "Mail",
"Message": "Job %%JOBNAME succeeded",
"Subject": "Success",
"To": "team@mycomp.com"
}
}
}
}
}