Code Reference

Introduction

The code samples below describe how to define Control-M objects using JSON notation. 

Each Control-M object begins with a "Name", followed by a "Type" specifier as its first property. Object names are written in PascalCase notation, with each word starting with a capital letter. In the examples below, the "Name" of the object is "ObjectName" and the "Type" is "ObjectType".

	"ObjectName" : {
		"Type" : "ObjectType"
	}

The following object types are the most basic objects in Control-M job management:

Folder

A container of jobs. Two kinds of folders are available:

  • Regular folder — Groups together a collection of jobs and enables you to configure definitions at the folder level and have these definitions inherited by the jobs within the folder. For example, you can set schedules and manage events, resources, and notifications at the folder level, to be applied to all jobs in the folder.
  • Simple folder — Groups together a collection of jobs. Folder definitions are not inherited by jobs within the folder.

You can use a Flow to define order dependency between jobs in a folder.

Job

A business process that you schedule and run in your enterprise environment.

Connection Profile

Access methods and security credentials for a specific application that runs jobs.

Defaults

Definitions of default parameter values that you can apply to multiple objects, all at once.

Folder

A folder is a container of jobs. The default type of folder (as opposed to a simple folder) enables you to configure various settings such as scheduling, event management, adding resources, or adding notifications on the folder level. Folder-level definitions are inherited by the jobs within the folder.

For example, you can specify scheduling criteria at the folder level instead of defining the criteria per job in the folder. All jobs in the folder take on the rules of the folder, which reduces repetition in the job definitions.

    "FolderSample": {
        "Type": "Folder",

         "Job1": {
           "Type": "Job:Command",
                "Command": "echo I am a Job",
                "RunAs": "controlm"
         },
         "Job2": {
           "Type": "Job:Command",
                "Command": "echo I am a Job",
                "RunAs": "controlm"
         }
    }

Optional parameters:

     "FolderSampleAll": {
        "Type": "Folder",
        "ControlmServer": "controlm",
        "SiteStandard": "",
        "OrderMethod": "Manual",
        "Application": "ApplicationName",
        "SubApplication" : "SubApplicationName",
		"RunAs" : "controlm",
		"When" : {
            "WeekDays": ["SUN"]
        },
		"mut1" : {
			 "Type": "Resource:Mutex",
			 "MutexType": "Exclusive" 
		},
		"Notify1": {
        	"Type": "Notify:ExecutionTime",
        	"Criteria": "LessThan",
        	"Value": "3",
        	"Message": "Less than expected"
		}
    }

ControlmServer

Specifies a Control-M Scheduling Server. If more than one Control-M Scheduling Server is configured in the system, you must define the server that the folder belongs to.

SiteStandard

Enforces the defined Site Standard on the folder and all jobs contained within the folder. See Control-M in a nutshell.

OrderMethod

Options are:

  • (Default) Automatic: The folder and its jobs are automatically ordered on days specified in the 'When' property
  • Manual: The 'When' property is ignored. To order such a folder, use the "ctm run order" API or Action Type:Run
  • Any other value: See the description of the <user daily> method and other methods in Order Method in the Control-M Online Help

RunAs

The Control-M security mechanism uses this parameter for deployment authorization of the folder.

When

Defines scheduling for all jobs in the folder, using various scheduling parameters or rule-based calendars. For more details, see When.

Resource:Mutex

See Resource:Mutex for detailed information about the resource.

Notification

The notification types described under job properties are also relevant to folders.

If: See If for detailed information.

Job: See Job for detailed information.

Events: See Events for detailed information.

Flow: See Flow for detailed information.

The following example shows a description, application and sub-application names, and a time zone for the object Folder8.

"Folder8": {
	"Type": "Folder",
	"Description": "folder desc",
	"Application" : "Billing",
	"SubApplication" : "Payable",
	"TimeZone":"HAW",
	"SimpleCommandJob":
		{ "Type": "Job:Command", "Description": "job desc", "Application" : "BillingJobs", "SubApplication" : "PayableJobs", "TimeZone":"MST", "Host":"agent8", "RunAs":"owner8", "Command":"ls" }
}


Simple Folder

A Simple Folder is a container of jobs. A Simple Folder does not enable configuration of job definitions at the folder level. The following example shows how to use a simple folder.

{
  "SimpleFolderName": {
    "Type": "SimpleFolder",
    "ControlmServer": "ec2-54-191-85-182",
    "job1": {
      "Type": "Job:Command",
      "Command": "echo 123",
      "RunAs": "controlm"
    },
    "job2": {
      "Type": "Job:Command",
      "Command": "echo 123",
      "RunAs": "controlm"
    },
    "Flow": {
      "Type": "Flow",
      "Sequence": ["job1", "job2"]
    }
  }
}

The following example shows optional parameters for SimpleFolder:

{
    "FolderSampleAll": {
        "Type": "SimpleFolder",
        "ControlmServer": "controlm",
        "SiteStandard": "myStandards",
        "OrderMethod": "Manual"
    }
}


ControlmServer

Specifies a Control-M Scheduling Server. If more than one Control-M Scheduling Server is configured in the system, you must define the server that the folder belongs to.

SiteStandard

Enforces the defined Site Standard on the folder and all jobs contained within the folder. See Control-M in a nutshell.

OrderMethod

Options are:

  • (Default) Automatic: The folder and its jobs are automatically ordered on days specified in the 'When' property
  • Manual: The 'When' property is ignored. To order such a folder, use the "ctm run order" API or Action Type:Run
  • Any other value: See the description of the <user daily> method and other methods in Order Method in the Control-M Online Help


Flow

The Flow object allows you to define order dependencies between jobs. A job must end successfully for the next job in the flow to run.

    "flowName": {
      "Type":"Flow",
      "Sequence":["job1", "job2", "job3"]
    }

The following example shows how one job can be part of multiple flows. Job3 executes if either Job1 or Job2 ends successfully.

    "FlowSamples" :
    {
        "Type" : "Folder",

        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        },    
        "Job2": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        },    
        "Job3": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        }, 
        "flow1": {
          "Type":"Flow",
          "Sequence":["Job1", "Job3"]
        },
        "flow2": {
          "Type":"Flow",
          "Sequence":["Job2", "Job3"]
        }

    }

The following example shows how to create flow sequences with jobs contained within different folders.

    "FolderA" :
    {
        "Type" : "Folder",
        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        }
    },    
    "FolderB" :
    {
        "Type" : "Folder",
        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        }
    },    
    "CrossFoldersFlowSample": {
        "Type":"Flow",
          "Sequence":["FolderA:Job1", "FolderB:Job1"]
    }



Job Properties

Below is a list of job properties for Control-M objects.

Type

Defines the type of job. For example:

    "CommandJob1": {
        "Type" : "Job:Command",
        "Command" : "echo hello",
        "RunAs" : "user1"
        }

Many of the other properties that you include in the job's definitions depend on the type of job that you are running. For a list of supported job types and more information about the parameters that you use in each type of job, see Job types.

Application, SubApplication 

Supplies a common descriptive name to a set of related jobs. The jobs do not necessarily have to run at the same time.

    "Job1": {
	    "Type": "Job:Command",
 		"Application": "ApplicationName",
		"SubApplication": "SubApplicationName",
        "Command": "echo I am a Job",
        "RunAs": "controlm"
    }


Comment

Allows you to write a comment on an object. Comments are not uploaded to Control-M.

    "JobName": {
        "Type" : "Job:Command",
        "Comment" : "code reviewed by tom",
        "Command" : "echo hello",
        "RunAs" : "user1"
        }


When

Allows you to define scheduling parameters for jobs and folders. If When is used in a folder, those parameters apply to all jobs in the folder.

When working in a Control-M Workbench environment, jobs do not wait for time constraints and run in an ad-hoc manner. Once deployed to a Control-M instance, all time constraints are obeyed.

      "When" : {
                "Schedule":"Never",
				"Months": ["JAN", "OCT", "DEC"],
                "MonthDays":["22","1","11"],
                "WeekDays":["MON","TUE"],
                "FromTime":"1500",
                "ToTime":"1800"      
            }

One or more of the date/time constraints can be defined.

WeekDays

One or more of the following:

"SUN","MON","TUE","WED","THU","FRI","SAT"

For all days of the week, use "ALL" (the default value).

Months

One or more of the following:

"JAN", "FEB", "MAR", "APR","MAY","JUN", "JUL", "AUG",

"SEP", "OCT", "NOV", "DEC"

For all months of the year, use "ALL" (the default value).

MonthDays

One or more days in the range of 1 to 31

For all days of the month, use "ALL" (the default value).

FromTime

FromTime specifies that a job will not start before this time

Format: HHMM

ToTime

ToTime specifies that a job will not start after this time

Format: HHMM

To allow the job to be submitted even after its original scheduling date (if it was not submitted on the original date), specify a value of ">".
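
For example, a minimal sketch of a When block that keeps the job eligible for submission after its original scheduling date:

    "When" : {
        "FromTime": "1500",
        "ToTime": ">"
    }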

Schedule

One of the following options:

"Everyday", "Never"

MonthDays additional parameters

You can specify the days of the month the job will run by referencing a predefined calendar set up in Control-M. 

"When": {
	"MonthDaysCalendar": “Summer2017”
}

You can specify the days of the month the job will run by referencing a predefined calendar set up in Control-M, and also using advanced rules specified in the MonthDays parameter:

"When": {
	"MonthDaysCalendar": “Summer2017”,
    “MonthDays":[“1”,”+2”,”-3”,”>4”,”<5”,”D6”,”L7”]
}

Where:

MonthDays syntax and its description:

  • "1": Day 1 is included only if defined in the calendar
  • "+2": Day 2 is included regardless of the calendar
  • "-3": Day 3 is excluded regardless of the calendar
  • ">4": Day 4 or the next closest calendar working day
  • "<5": Day 5 or the previous closest calendar working day
  • "D6": The 6th calendar working day
  • "L7": The 7th from the last calendar working day
  • "D6PA" or "D6P*": If MonthDaysCalendar is of type periodical, you can use PA or P* to specify a calendar period name such as A, B, or C, or use * for any period
  • "-D6" or "-L6P*": D and L can also take an exclude specifier
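
For example, a minimal sketch that combines the periodic-calendar rules above (the calendar name "PeriodicDays" is illustrative and assumed to be defined in Control-M as a periodical calendar):

"When": {
    "MonthDaysCalendar": "PeriodicDays",
    "MonthDays": ["D6PA","-L6P*"]
}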

WeekDays additional parameters 

You can specify the days of the week that the job will run by referencing a predefined calendar set up in Control-M, and also using advanced rules specified in the WeekDays parameter:

"When" : {
	"WeekDaysCalendar" : "Summer2017",
	"WeekDays" : ["SUN","+MON","-TUE",">WED","<THU"]
}

Where:

WeekDays syntax and its description:

  • "SUN": Sunday is included only if defined in the calendar
  • "+MON": Monday is included regardless of the calendar
  • "-TUE": Tuesday is excluded regardless of the calendar
  • ">WED": Wednesday or the next closest calendar working day
  • "<THU": Thursday or the previous closest calendar working day

Specifying start/end dates of a job run

You can add start and end dates for a job run in addition to the other date/time elements. 

 "When": { 
            "StartDate":"20160322", 
            "EndDate":"20160325" 
         }

Where:

StartDate

First date that a job can run

EndDate

Last date that a job can run

Relationship between MonthDays and WeekDays

"When" : {
    "Months": ["JAN", "OCT", "DEC"],
 
	"MonthDays":["22","1","11"],
	“DaysRelation” : “OR”,
    "WeekDays":["MON","TUE"]
}
DaysRelation

Defines the relationship between the MonthDays and WeekDays constraints:

  • AND: the job runs only if both the WeekDays and MonthDays constraints are met
  • OR: the job runs if either the WeekDays or MonthDays constraints are met

Default: AND


Events

Events can be generated by Control-M or can trigger jobs. Events are defined by a name and a date.

Events can be used in the following ways:

  1. A job can wait for events before running, add events after running, or delete events after running. See WaitForEvents, AddEvents, and DeleteEvents.
  2. Jobs can add or remove events from Control-M. See Event:Add or Event:Delete.
  3. You can add or remove events from Control-M by an API call. See Event Management.


For "OrderDate", you can use the following values:

Date Type: Description

  • AnyDate: Any scheduled date
  • OrderDate: Control-M scheduled date. If you do not specify an OrderDate value, this is the default.
  • PreviousOrderDate: Previous Control-M scheduled date
  • NextOrderDate: Next Control-M scheduled date
  • MMDD: Specific date. Example: "0511"

WaitForEvents

The following example shows how to define events that the job must wait for before running:

"Wait1":
{
          "Type": "WaitForEvents",
          "Events": [
              {"Event":"e1"}, 
              {"Event":"e2"}, 
              {"Event":"e3", "OrderDate":"AnyDate"}
          ]
}

AddEvents

The following example shows how to specify events for the job to add after running:

"add1" :
{
          "Type": "AddEvents",
          "Events": [
              {"Event":"a1"}, 
              {"Event":"a2"}, 
              {"Event":"a3", "OrderDate":"1112"}
          ]
}

DeleteEvents

The following example shows how to specify events for the job to remove after running:

"del1" :
{
          "Type": "DeleteEvents",
          "Events": [
              {"Event":"d1"},
              {"Event":"d2", "OrderDate":"1111"},
              {"Event":"d3"}
          ]
}


If

If can trigger one or more actions conditional on the job's completion status. In the following example, if the job runs unsuccessfully, it sends an email and runs another job.

    "JobName": {
        "Type" : "Job:Command",
        "Command" : "echo hello",
        "Host" : "myhost.mycomp.com",
        "RunAs" : "user1",  
        
        "ActionIfFailure" : {
            "Type": "If",        
            "CompletionStatus": "NOTOK",
            
            "mailToTeam": {
              "Type": "Action:Mail",
              "Message": "Job %%JOBNAME failed",
              "To": "team@mycomp.com"
            },
            "CorrectiveJob": {
              "Type": "Action:Run",
              "Folder": "FolderName",
              "Job": "JobName"
            }
        }
    }

If can be triggered based on one of the following CompletionStatus values:

Value: Action

  • NOTOK: When the job fails
  • OK: When the job completes successfully
  • ANY: When the job completes, regardless of success or failure
  • value: When the completion status equals value. Example: value=10
  • Even: When the completion status is an even number
  • Odd: When the completion status is an odd number
  • ">=5", "<=5", "<5", ">5", "!=5": When the completion status comparison is true

If:NumberOfReruns

The following example shows the criteria for number of job reruns to trigger an action.

"ActionByNumberOfReruns" : {
	"Type": "If:NumberOfReruns", 
	"NumberOfReruns": ">=4",
    "RunJob": {
    	 "Type": "Action:Run",
   		 "Folder": "Folder1",
   		 "Job": "job1"
	}
}

Where:

NumberOfReruns

Performs an action if the condition on the number of job reruns is met.

Possible values: "Even", "Odd", "!=value", ">=value", "<=value", ">value", "<value", "value"

If:NumberOfFailures

The following example shows the criteria for number of job failures to trigger an action.

"ActionByNumberOfFailures" : {
	"Type": "If:NumberOfFailures", 
	"NumberOfFailures": "1",
    "RunJob": {
          "Type": "Action:Run",
          "Folder": "Folder1",
          "Job": "job1"   
    }
}

Where:

NumberOfFailures

Performs an action if the condition on the number of job failures is met.

Possible values: "value"

If:JobNotSubmitted

The following example shows how to trigger an action based on whether the job is not submitted.

"ActionByJobNotSubmitted" : {
	"Type": "If:JobNotSubmitted"
    "RunJob": {
          "Type": "Action:Run",
          "Folder": "Folder1",
          "Job": "job1"   
    }
}

If:JobOutputNotFound

The following example shows how to trigger an action based on whether the job output is not found.

"ActionByOutputNotFound" : {
	"Type": "If:JobOutputNotFound"
    "RunJob": {
          "Type": "Action:Run",
          "Folder": "Folder1",
          "Job": "job1"   
    }
}

If:JobNumberOfExecutions

The following example shows the criteria for number of job executions to trigger an action.

"ActionByNumberExecutions" : {
	"Type": "If:JobNumberOfExecutions", 
	"NumberOfExecutions": ">=5"
    "RunJob": {
          "Type": "Action:Run",
          "Folder": "Folder1",
          "Job": "job1"   
    }
}

Where:

NumberOfExecutions

Performs an action if the condition on the number of job executions is met.

Possible values: "Even", "Odd", "!=value", ">=value", "<=value", ">value", "<value", "value"


If Actions

Action:Mail 

 The following example shows an action that sends an e-mail.

    "mailToTeam": {
      "Type": "Action:Mail",
      "Message": "%%JOBNAME failed",
      "To": "team@mycomp.com"
    }

The following example shows that you can add optional parameters to the email action.

    "mailToTeam": {
      "Type": "Action:Mail",
      "Urgency": "Urgent", 
      "Subject" : "Completion Email",
      "Message": "%%JOBNAME just completed", 
      "To": "team@mycomp.com",
      "CC": "other@mycomp.com"
 }

The following table describes the parameters of the email action:

  • Urgency: Level of urgency of the message: Regular, Urgent, or VeryUrgent. The default is Regular.
  • Subject: A subject line for the message.
  • Message: The message text.
  • To: A list of email recipients to whom the message is directed. Use a semicolon (;) to separate multiple email addresses.
  • CC: A list of email recipients who receive a copy of the message. Use a semicolon (;) to separate multiple email addresses.
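
For example, a minimal sketch of the To field with multiple recipients (the addresses are illustrative):

"To": "team@mycomp.com;oncall@mycomp.com"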

Action:Rerun

The following example shows an action that reruns the job.

"RerunActionName": {
      "Type": "Action:Rerun"  
}

Action:Set

The following example shows an action that sets a variable.

"SetVariable": {
   "Type": "Action:Set",
    "Variable": "var1",
    "Value": "1"
}

Action:SetToOK

The following example shows an action that sets the job status to OK.

"SetToOKActionName": {
      "Type": "Action:SetToOK"
}

Action:SetToNotOK

The following example shows an action that sets the job status to not OK.

"SetToNotOKActionName": {
      "Type": "Action:SetToNotOK"
}

Action:StopCyclicRun

The following example shows an action that disables the cyclic attribute of the job.

"CyclicRunActionName": {
      "Type": "Action:StopCyclicRun"
}

Action:Run

The following example shows an action that runs another job.

"CorrectiveJob": {
      "Type": "Action:Run",
      "Folder": "FolderName",
      "Job": "JobName"
}

Action:Notify

The following example shows an action that sends a notification.

"Notifying": {
   "Type": "Action:Notify",
   "Message": "job1 just ran",
   "Destination": "JobLog",
   "Urgency": "VeryUrgent"
}

Event:Add

The following example shows an action that adds an event for the current date.

"setEvent1": {
    "Type": "Event:Add",
    "Event": "e1"
}

Optional parameters:

"setEvent1": {
    "Type": "Event:Add",
    "Event": "e1",
    "OrderDate": "1010"
}
Date Type: Description

  • AnyDate: Any scheduled date
  • NoDate: Not date-specific
  • OrderDate: Control-M scheduled date
  • PreviousOrderDate: Previous Control-M scheduled date
  • NextOrderDate: Next Control-M scheduled date
  • MMDD: Specific date. Example: "0511"

Event:Delete

The following example shows an action that deletes an event.

"unsetEvent2": {
    "Type": "Event:Delete",
    "Event": "e2",
    "OrderDate": "PreviousOrderDate"
}

OrderDate possible values:

  • "AnyDate"
  • "OrderDate"
  • "PreviousOrderDate"
  • "NextOrderDate"
  • "0511" (MMDD)

Action:Output

The Output action supports the following operations:

  • Copy
  • Move
  • Delete
  • Print

The following example shows an action that copies the output to the specified destination.

"CopyOutput": {
         "Type": "Action:Output",
         "Operation": "Copy",
         "Destination": "/home/copyHere"
}


Confirm

Allows you to define a job that requires user confirmation. This can be done by running the run confirm command.

 "JobName": {
        "Type" : "Job:Command",
        "Comment" : "this job needs user confirmation to start execution",
        "Command" : "echo hello",
        "RunAs" : "user1",
        "Confirm" : true
 }


Critical

Allows you to set a critical job. A critical job gets higher priority when reserving the resources it needs to run.

Default: false

"Critical": true


DaysKeepActive

Allows you to define the number of days to keep a job if it did not run at its scheduled date. 

Jobs in a folder are kept until the maximum DaysKeepActive value for any of the jobs in the folder has passed. This enables you to retrieve job status of all the jobs in the folder. 

"DaysKeepActiveFolder": {
       "Type" : "Folder",
       "Defaults": {
         "RunAs":"owner8" 
       },
       "keepForeverIfDidNotRun": { 
         "Type": "Job:Command", 
         "Command":"ls",
         "DaysKeepActive": "Forever" 
       },
       "keepForThreeDaysIfDidNotRun": { 
         "Type": "Job:Command", 
         "Command":"ls",
         "DaysKeepActive": "3" 
       }
}

Where:

DaysKeepActive

Valid values:

  • 0-98
  • Forever

Default: 0


Description

Allows you to add a description to jobs and folders.

 "DescriptionFolder":
    {
       "Type" : "Folder",
       "Description":"folder description",
       "SimpleCommandJob": { 
         "Type": "Job:Command", 
         "Description":"job description",
         "RunAs":"owner8", 
         "Command":"ls"
       }
       
    }


Documentation

Allows you to add the location and name of a file that contains the job documentation.

"DocumentationFile":{
	"Path": "C://temp",
	"FileName": "job.txt"
	}
}

Alternatively, allows you to add the URL location of the documentation file.

"DocumentationUrl":{
	"Url": "http://bmc.com"
	}
}


Notification

Allows you to create a notification for certain scenarios before, during, and after job execution.

The following example shows a notification sent to the JobLog upon critical job failure.

"NotifyCriticalJobFailure": {
   "Type":"Notify:NotOK",
   "Message": "Critical job failed, details in job output",
   "Urgency": "Urgent",
   "Destination": "JobLog"
}

Where:

The following parameters are relevant to all notification types.

Message

The message to display.

Destination

The message is sent to one of the following:

  • (Default) Alerts: the Control-M Alerts window
  • JobLog: writes to the Control-M job log; to get the job log, use run job:log::get
  • Console: the operating system console
  • A predefined destination value (for example, FinanceGroup)

Urgency

The message urgency is logged as one of the following:

  • (Default) Regular
  • Urgent
  • VeryUrgent

Notify:OK

When setting the notification type to OK, if the job executes with no errors, the notification "Job run OK" is sent to the JobLog.

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:OK",
        "Message": "Job run OK",
        "Destination": "JobLog"
      }
    }
  }
}

Notify:NotOK

When setting the notification type to NotOK, if the job executes with errors, the notification "Job run not OK" is sent to the JobLog.

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:NotOK",
        "Message": "Job run not OK",
        "Destination": "JobLog"
      }
    }
  }
}

Notify:DoesNotStart

If the job has not started by 15:10, a notification is immediately sent to the email defined in the job with the message that the job has not started.

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify3": {
        "Type": "Notify:DoesNotStart",
        "By": "1510",
        "Message": "Job has not started",
        "Destination": "mail",
        "Urgency": "VeryUrgent"
      }
    }
  }
}

By

Format: HHMM

A notification is sent when the job does not start by the specified time.

Notify:ExecutionTime

When setting the ExecutionTime notification criteria to LessThan, if the job completes in less than 3 minutes, the notification "Less than expected" is sent to the default Alerts destination.

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:ExecutionTime",
        "Criteria": "LessThan",
        "Value": "3",
        "Message": "Less than expected"
      }
    }
  }
}

Criteria

Value: Description

  • LessThan: Value in minutes (example: 3). If the job runs less than the defined value, a notification is sent to the defined destination.
  • GreaterThan: Value in minutes (example: 5). If the job runs longer than the defined value, a notification is sent to the defined destination.
  • LessThanAverage: Value in minutes or percentage (example: 10%). If the job runs less than the defined value relative to the average execution time of the job, a notification is sent to the defined destination.
  • GreaterThanAverage: Value in minutes or percentage (example: 10%). If the job runs longer than the defined value relative to the average execution time of the job, a notification is sent to the defined destination.

Notify:DoesNotEnd

When setting the notification type to DoesNotEnd, if the job does not end by the specified time, a message is sent to the JobLog.

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:DoesNotEnd",
        "By": "1212",
        "Message": "Job does not end",
        "Destination": "JobLog"
      }
    }
  }
}

By

Format: HHMM

A notification is sent when the job does not end by the specified time.

Notify:ReRun 

When setting the notification type to ReRun, a message is sent to the Console whenever the job reruns.

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:ReRun",
        "Message": "Job5 ReRun",
        "Destination": "Console"
      }
    }
  }
}


Priority

Allows you to define the priority a job has over other jobs.

The following options are supported:

  • Very High
  • High
  • Medium
  • Low
  • Very Low (default)

{
	"Folder8": {
		"Type": "Folder",
		"Description": "folder desc",
		"Application": "Billing",
		"SubApplication": "Payable",
		"SimpleCommandJob": {
			"Type": "Job:Command",
			"Description": "job desc",
			"Application": "BillingJobs",
			"Priority": "High",
			"SubApplication": "PayableJobs",
			"TimeZone": "MST",
			"Host": "agent8",
			"RunAs": "owner8",
			"Command": "ls"
		}
	}
}


Rerun

Allows you to define cyclic jobs.

The following example shows how to define a cyclic job that runs every 2 minutes indefinitely.

    "Rerun" : {
        "Every": "2"
    }

The following example shows how to run a job four times where each run starts three days after the previous run ended.

    "Rerun" : {
        "Every": "3",
        "Units":  "Days",                     
        "From": "End",                               
        "Times": "4"
    }

Units

One of the following: "Minutes", "Hours", or "Days". The default is "Minutes".

From

One of the following values:

  • Start: the next run time is calculated as N Units from the start time of the current run
  • End: the next run time is calculated as N Units from the end time of the current run
  • Target: runs start every N Units

The default is "Start".

Times

Number of cycles to run. To run forever, define 0.

The default is to run forever.
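
For example, a minimal sketch of a cyclic job definition that starts a run every 4 hours, indefinitely:

    "Rerun" : {
        "Every": "4",
        "Units": "Hours",
        "From": "Target",
        "Times": "0"
    }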


RerunIntervals

Allows you to define a set of time intervals for the job to rerun.

The following example shows how to define RerunIntervals for the job to run every 12 months, 12 days, 11 hours, and 1 minute from the end of the last job run.

"RerunIntervals": {
           "Intervals" : ["12m","11h","12d","1m"],
           "From": "End"
       }

Intervals

The time intervals after which the job runs again, in months, days, hours, and minutes.

From

One of the following values:

  • Start: the next run time is calculated as N Units from the start time of the current run
  • End: the next run time is calculated as N Units from the end time of the current run
  • Target: runs start every N Units

The default is "Start".


RerunSpecificTimes

Allows you to rerun a job at specific times.

The following example shows how to define RerunSpecificTimes so that the job reruns at the specified times.

"CyclicExactTimesJob": {
    "Type": "Job:Command",
    "Command":"ls",
    "RunAs": "user1",
    "RerunSpecificTimes": {
        "At" : ["0900","1100","1230","1710"],
        "Tolerance": "20"
   }
}

At

One or more times of day, in the format HHMM.

Tolerance

Maximum delay in minutes permitted for a late submission at each specific time.


RerunLimit

Allows you to set a limit on the number of times a non-cyclic job can rerun.

"jobWithRerunLimit": {
        "Type":"Job:Command",
		"Command":"ls",
		"RunAs":"user1",
		"RerunLimit": {
			"Times":"5"
		}
     }

Times

Maximum number of times a non-cyclic job can rerun.

Default: 0 (no limit to the number of reruns).


Resources 

Resource:Semaphore

Allows you to set a Semaphore (also known as a quantitative resource) quantity for the job, used to control access to a resource that is concurrently shared by other jobs. For API command information on resources, see Resource Management.

The following example shows how to add a semaphore parameter to a job. 

{
  "FolderRes": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "pok",
      "Critical": true,
      "sem1": {
        "Type": "Resource:Semaphore",
        "Quantity": "3"
      }      
    }
  }
}

Resource:Mutex

Allows you to set a Mutex (also known as control resource) as shared or exclusive. If the resource is shared, other jobs can use the resource concurrently. If set to exclusive, the job has to wait until the resource is available before it can run.

The following example shows how to add a Mutex parameter to a job.

{
  "FolderRes": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls", 
      "RunAs": "pok",
      "Critical": true,
      "mut1": {
        "Type": "Resource:Mutex",
        "MutexType": "Exclusive"
      }
    }
  }
}


RunOnAllAgentsInGroup

Allows you to set jobs to run on all agents in the group. 

The following example shows how to define RunOnAllAgentsInGroup:

"jobOnAllAgents": {
    "Type": "Job:Dummy",
    "RunOnAllAgentsInGroup" : true,
    "Host" : "dummyHost"
}

RunOnAllAgentsInGroup

true | false

Default: false


Time Zone

Allows you to add a time zone to jobs and folders. Time zones should be defined at least 48 hours before the intended execution date. We recommend defining the same time zone for all jobs in a folder.

"TimeZone":"MST"

Time zone possible values:

HNL (GMT-10:00)
HAW (GMT-10:00)
ANC (GMT-09:00)
PST (GMT-08:00)
MST (GMT-07:00)
CST (GMT-06:00)
EST (GMT-05:00)
ATL (GMT-04:00)
RIO (GMT-03:00)
GMT (GMT+00:00)
WET (GMT+01:00)
CET (GMT+02:00)
EET (GMT+03:00)
DXB (GMT+04:00)
KHI (GMT+05:00)
DAC (GMT+06:00)
BKK (GMT+07:00)
HKG (GMT+08:00)
TYO (GMT+09:00)
TOK (GMT+09:00)
SYD (GMT+10:00)
MEL (GMT+10:00)
NOU (GMT+11:00)
AKL (GMT+12:00)


Variables

Allows you to use job level variables with %% notation in job fields.

"job1": {
     "Type": "Job:Script",

     "FileName": "scriptname.sh",
     "FilePath":"%%ScriptsPath",
     "RunAs": "em900cob",

     "Arguments":["--date", "%%TodayDate" ],

     "Variables": [
       {"TodayDate": "%%$DATE"},
       {"ScriptsPath": "/home/em900cob"}
     ]
 }

For specifications of system defined variables such as %%$DATE see Control-M system variables in the Control-M Online Help.

Named pools of variables can share data between jobs, using the syntax "\\poolname\variable". Note that due to JSON character escaping, each backslash in the pool name must be doubled. For example, "\\\\pool1\\date".

        "job1": {
           "Type": "Job:Dummy",
	       "Variables": [

	         {"\\\\pool1\\date": "%%$DATE"}
	       ]
	    },
 
        "job2": {
           "Type": "Job:Script",

           "FileName": "scriptname.sh",
	       "FilePath":"/home/user/scripts",
	       "RunAs": "em900cob",

	       "Arguments":["--date", "%%\\\\pool1\\date" ]
	    }

Jobs in a folder can share variables at the folder level, using the syntax "\\variableName" to set a variable and %%variableName to use it.

"Folder1"   : {
     "Type" : "Folder", 
 
     "Variables": [
	    {"TodayDate": "%%$DATE"}
	 ],
 
     "job1": {
           "Type": "Job:Dummy",

           "Variables": [
              {"\\\\CompanyName": "compName"}
           ]
	  },
	  
      "job2": {
           "Type": "Job:Script",

           "FileName": "scriptname.sh",
	       "FilePath":"/home/user/scripts",
	       "RunAs": "em900cob",

	       "Arguments":["--date", "%%TodayDate", "--comp", "%%CompanyName" ]
	    }
}


Job types

The following series of sections provide details about the available types of jobs that you can define and the parameters that you use to define each type of job.

Job:Command

The following example shows how to use the Job:Command to run operating system commands.

	"JobName": {
		"Type" : "Job:Command",
    	"Command" : "echo hello",
    	"Host" : "myhost.mycomp.com",
    	"RunAs" : "user1"  
	}
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.


Job:Script

 The following example shows how to use Job:Script to run a script.

    "JobName": {
        "Type" : "Job:Script",
        "FileName" : "task1123.sh",
        "FilePath" : "/home/user1/scripts",
        "Host" : "myhost.mycomp.com",
        "RunAs" : "user1"   
    }
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.

FileName together with FilePath

Indicates the location of the script.

NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts".
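
For example, a minimal sketch of a script job on a Windows agent (the host name and script location are illustrative):

    "WinScriptJob": {
        "Type" : "Job:Script",
        "FileName" : "task1123.bat",
        "FilePath" : "c:\\tmp\\scripts",
        "Host" : "winhost.mycomp.com",
        "RunAs" : "user1"
    }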


Job:FileTransfer

The following example shows a Job:FileTransfer.

"FileTransferFolder" :
{
	"Type" : "Folder",
	"Application" : "aft",
	"TransferFromLocalToSFTP" :
	{
		"Type" : "Job:FileTransfer",
		"ConnectionProfileSrc" : "LocalConn",
		"ConnectionProfileDest" : "SftpConn",
		"Host": "AgentHost",
		"FileTransfers" :
		[
			{
				"Src" : "/home/controlm/file1",
				"Dest" : "/home/controlm/file2",
				"TransferType": "Binary",
				"TransferOption": "SrcToDest"
			},
			{
				"Src" : "/home/controlm/otherFile1",
				"Dest" : "/home/controlm/otherFile2",
				"TransferOption": "DestToSrc"
			}
		]
	}
}

Where:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. In addition, Control-M File Transfer plugin version 8.0.00 or later must be installed.

ConnectionProfileSrc

The connection profile to use as the source.

ConnectionProfileDest

The connection profile to use as the destination.

FileTransfers

A list of file transfers to perform during job execution, each with the following properties:

  • Src: Full path to the source file
  • Dest: Full path to the destination file
  • TransferType: (Optional) FTP transfer mode, either Ascii (for a text file) or Binary (a non-textual data file). Ascii mode handles the conversion of line endings within the file, while Binary mode creates a byte-for-byte identical copy of the transferred file. Default: "Binary"
  • TransferOption: (Optional) One of the following transfer options. Default: "SrcToDest"
      • SrcToDest: transfer the file from source to destination
      • DestToSrc: transfer the file from destination to source
      • SrcToDestFileWatcher: watch the file on the source and transfer to the destination only when all criteria are met
      • DestToSrcFileWatcher: watch the file on the destination and transfer to the source only when all criteria are met
      • FileWatcher: watch a file; if successful, the succeeding job will run

The following example presents a File Transfer job in which the transferred file is watched using the File Watcher utility:

"FileTransferFolder" :
{
	"Type" : "Folder",
	"Application" : "aft",
	"TransferFromLocalToSFTPBasedOnEvent" :
	{
		"Type" : "Job:FileTransfer",
		"Host" : "AgentHost",
		"ConnectionProfileSrc" : "LocalConn",
		"ConnectionProfileDest" : "SftpConn",
		"FileTransfers" :
		[
			{
				"Src" : "/home/sftp/file1",
				"Dest" : "/home/sftp/file2",
				"TransferType": "Binary",
				"TransferOption" : "SrcToDestFileWatcher",
				"PreCommandDest" :
				{
					"action" : "rm",
					"arg1" : "/home/sftp/file2"
				},
				"PostCommandDest" :
				{
					"action" : "chmod",
					"arg1" : "700",
					"arg2" : "/home/sftp/file2"
				},
				"FileWatcherOptions":
				{
					"MinDetectedSizeInBytes" : "200",
					"TimeLimitPolicy" : "WaitUntil",
					"TimeLimitValue" : "2000",
					"MinFileAge" : "3Min",
					"MaxFileAge" : "10Min",
					"AssignFileNameToVariable" : "FileNameEvent",
					"TransferAllMatchingFiles" : true
				}
			}
		]
	}
}

This example contains the following additional optional parameters: 

PreCommandSrc, PreCommandDest, PostCommandSrc, PostCommandDest

Define commands that occur before and after job execution. Each command can run only one action at a time.

Available actions:

  • chmod: Change file access permissions (arg1: mode, arg2: file name)
  • mkdir: Create a new directory (arg1: directory name)
  • rename: Rename a file or directory (arg1: current file name, arg2: new file name)
  • rm: Delete a file (arg1: file name)
  • rmdir: Delete a directory (arg1: directory name)

FileWatcherOptions

Additional options for watching the transferred file using the File Watcher utility:

  • MinDetectedSizeInBytes: Defines the minimum number of bytes transferred before checking whether the file size is static
  • TimeLimitPolicy / TimeLimitValue: Defines the time limit to watch a file. TimeLimitPolicy options: "WaitUntil", "MinutesToWait". If TimeLimitPolicy is WaitUntil, TimeLimitValue is the specific time to wait, for example 04:22 would be 4:22 AM. If TimeLimitPolicy is MinutesToWait, TimeLimitValue is the number of minutes to wait.
  • MinFileAge: Defines the minimum number of years, months, days, hours, and/or minutes that must have passed since the watched file was last modified. Valid values: 9999Y9999M9999d9999h9999Min. For example: 2y3d7h
  • MaxFileAge: Defines the maximum number of years, months, days, hours, and/or minutes that can pass since the watched file was last modified. Valid values: 9999Y9999M9999d9999h9999Min. For example: 2y3d7h
  • AssignFileNameToVariable: Defines the variable name that contains the detected file name
  • TransferAllMatchingFiles: Whether to transfer all matching files (True) or only the first matching file (False) after waiting until the watching criteria are met. Valid values: True | False. Default: False


Job:Database

Job:Database:SQLScript

The following example shows how to create a database job that runs a SQL script.

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOracle": {
			"Type": "Job:Database:SQLScript",
			"Host": "AgentHost",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "OracleConnectionProfileSample",
			"Parameters": [
				{"firstParamName": "firstParamValue"},
				{"secondParamName": "secondParamValue"}
			],
			"Autocommit": "N",
			"OutputExcecutionLog": "Y",
			"OutputSQLOutput": "Y",
			"SQLOutputFormat": "XML"
		}
	}
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Parameters

Parameters are pairs of name and value. Every name that appears in the SQL script is replaced by its paired value.

Autocommit

(Optional) Commits statements to the database as they complete successfully.

Default: N

OutputExcecutionLog

(Optional) Shows the execution log in the job output

Default: Y

OutputSQLOutput

(Optional) Shows the SQL sysout in the job output

Default: N

SQLOutputFormat

(Optional) Defines the output format as either Text, XML, CSV, or HTML

Default: Text

Another example:

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOracle": {
			"Type": "Job:Database:SQLScript",
			"Host": "app-redhat",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "OracleConnectionProfileSample"
		}
	}
}


Job:Hadoop

Various types of Hadoop jobs are available for you to define using the Job:Hadoop objects:

Job:Hadoop:Spark:Python

The following example shows how to use Job:Hadoop:Spark:Python to run a Spark Python program.

    "ProcessData": {
        "Type": "Job:Hadoop:Spark:Python",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",

        "SparkScript": "/home/user/processData.py"
    }
ConnectionProfile

See ConnectionProfile:Hadoop.

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
    "Type": "Job:Hadoop:Spark:Python",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",
    "SparkScript": "/home/user/processData.py",            
    "Arguments": [
        "1000",
        "120"
    ],            

    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },

    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },

    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Spark:ScalaJava

The following example shows how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.

 "ProcessData": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "ProgramJar": "/home/user/ScalaProgram.jar",
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName" 
}
ConnectionProfile

See ConnectionProfile:Hadoop.

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "ProgramJar": "/home/user/ScalaProgram.jar"
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",

    "Arguments": [
        "1000",
        "120"
    ],            

    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },

    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },

    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Pig

The following example shows how to use Job:Hadoop:Pig to run a Pig script.

"ProcessDataPig": {
    "Type" : "Job:Hadoop:Pig",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "PigScript" : "/home/user/script.pig" 
}
ConnectionProfile

See ConnectionProfile:Hadoop.

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "ProcessDataPig1": {
        "Type" : "Job:Hadoop:Pig",
        "ConnectionProfile": "DevCluster",
        "PigScript" : "/home/user/script.pig",            
        "Host" : "edgenode",
        "Parameters" : [
            {"amount":"1000"},
            {"volume":"120"}
        ],            
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
 
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Sqoop

The following example shows how to use Job:Hadoop:Sqoop to run a Sqoop job.

    "LoadDataSqoop":
    {
      "Type" : "Job:Hadoop:Sqoop",
	  "Host" : "edgenode",
      "ConnectionProfile" : "SqoopConnectionProfileSample",

      "SqoopCommand" : "import --table foo --target-dir /dest_dir" 
    }

 

ConnectionProfile

See Sqoop ConnectionProfile:Hadoop.

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "LoadDataSqoop1" :
    {
        "Type" : "Job:Hadoop:Sqoop",
        "Host" : "edgenode",
        "ConnectionProfile" : "SqoopConnectionProfileSample",

        "SqoopCommand" : "import --table foo",
		"SqoopOptions" : [
			{"--warehouse-dir":"/shared"},
			{"--default-character-set":"latin1"}
		],
 
        "SqoopArchives" : "",
        
        "SqoopFiles": "",
        
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

SqoopOptions

These are passed as arguments to the specific Sqoop tool.

SqoopArchives

Indicates the location of the Hadoop archives.

SqoopFiles

Indicates the location of the Sqoop files.


Job:Hadoop:Hive

The following example shows how to use Job:Hadoop:Hive to run a Hive beeline job.

    "ProcessHive":
    {
      "Type" : "Job:Hadoop:Hive",
      "Host" : "edgenode",
      "ConnectionProfile" : "HiveConnectionProfileSample",

      "HiveScript" : "/home/user1/hive.script"
    }

 

ConnectionProfile

See Hive ConnectionProfile:Hadoop.

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "ProcessHive1" :
    {
        "Type" : "Job:Hadoop:Hive",
        "Host" : "edgenode",
        "ConnectionProfile" : "HiveConnectionProfileSample",


        "HiveScript" : "/home/user1/hive.script", 
        "Parameters" : [
            {"ammount": "1000"},
            {"topic": "food"}
        ],

        "HiveArchives" : "",
        
        "HiveFiles": "",
        
        "HiveOptions" : [
            {"hive.root.logger": "INFO,console"}
        ],

        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

HiveScriptParameters

Passed to beeline as --hivevar "name"="value".

HiveProperties

Passed to beeline as --hiveconf "key"="value".

HiveArchives

Passed to beeline as --hiveconf mapred.cache.archives="value".

HiveFiles

Passed to beeline as --hiveconf mapred.cache.files="value".


Job:Hadoop:DistCp

The following example shows how to use Job:Hadoop:DistCp to run a DistCp job. DistCp (distributed copy) is a tool used for large inter/intra-cluster copying.

        "DistCpJob" :
        {
            "Type" : "Job:Hadoop:DistCp",
            "Host" : "edgenode",
            "ConnectionProfile": "DevCluster",
         
            "TargetPath" : "hdfs://nns2:8020/foo/bar",
            "SourcePaths" :
            [
                "hdfs://nn1:8020/foo/a"
            ]
        }  
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "DistCpJob" :
    {
        "Type" : "Job:Hadoop:DistCp",
        "Host" : "edgenode",
        "ConnectionProfile" : "ConnectionProfileSample",
        "TargetPath" : "hdfs://nns2:8020/foo/bar",
        "SourcePaths" :
        [
            "hdfs://nn1:8020/foo/a",
            "hdfs://nn1:8020/foo/b"
        ],
        "DistcpOptions" : [
            {"-m":"3"},
            {"-filelimit ":"100"}
        ]
    }

TargetPath, SourcePaths, and DistcpOptions

Passed to the distcp tool in the following way: distcp <Options> <SourcePaths> <TargetPath>.


Job:Hadoop:HDFSCommands

The following example shows how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.

        "HdfsJob":
        {
            "Type" : "Job:Hadoop:HDFSCommands",
            "Host" : "edgenode",
            "ConnectionProfile": "DevCluster",

            "Commands": [
                {"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.


Job:Hadoop:HDFSFileWatcher

The following example shows how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.

    "HdfsFileWatcherJob" :
    {
        "Type" : "Job:Hadoop:HDFSFileWatcher",
        "Host" : "edgenode",
        "ConnectionProfile" : "DevCluster",

        "HdfsFilePath" : "/inputs/filename",
        "MinDetecedSize" : "1",
        "MaxWaitTime" : "2"
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

HdfsFilePath

Specifies the full path of the file being watched.

MinDetecedSize

Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives but its size is not met, the job continues to watch the file.

MaxWaitTime

Defines the maximum number of minutes to wait for the file to meet the watching criteria. If the criteria are not met (the file did not arrive, or the minimum size was not reached), the job fails after this maximum number of minutes.


Job:Hadoop:Oozie

The following example shows how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.

    "OozieJob": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "JobPropertiesFile" : "/home/user/job.properties",
        "OozieOptions" : [
          {"inputDir":"/usr/tucu/inputdir"},
          {"outputDir":"/usr/tucu/outputdir"}
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

JobPropertiesFile

The path to the job properties file.

Optional parameters:

PreCommands and PostCommands

Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup. See the sketch after this parameter list.

FailJobOnCommandFailure

Determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true; that is, the job fails if any pre-command fails.

The default for PostCommands is false; that is, the job completes successfully even if a post-command fails.

OozieOptions

Set or override values for the given job properties.
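
A minimal sketch of an Oozie job that combines these optional parameters, following the PreCommands/PostCommands pattern shown for Job:Hadoop:MapReduce below (the HDFS commands and paths are illustrative only):

    "OozieJobWithCommands": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",
        "JobPropertiesFile" : "/home/user/job.properties",
        "PreCommands": {
            "FailJobOnCommandFailure" : false,
            "Commands" : [
                {"rm" : "hdfs://nn.example.com/user/hadoop/old-output"}
            ]
        },
        "PostCommands": {
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/output localcopy"}
            ]
        }
    }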


Job:Hadoop:MapReduce

 The following example shows how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.

    "MapReduceJob" :
    {
       "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"]
    }
ConnectionProfile

See ConnectionProfile:Hadoop  

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "MapReduceJob1" :
    {
        "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"
        ],
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }    
    }
PreCommands and PostCommands

Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true; that is, the job fails if any pre-command fails.

The default for PostCommands is false; that is, the job completes successfully even if a post-command fails.


Job:Hadoop:MapredStreaming

The following example shows how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.

     "MapredStreamingJob1": {
        "Type": "Job:Hadoop:MapredStreaming",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "InputPath": "/user/robot/input/*",
        "OutputPath": "/tmp/output",
        "MapperCommand": "mapper.py",
        "ReducerCommand": "reducer.py",
        "GeneralOptions": [
            {"-D": "fs.permissions.umask-mode=000"},
            {"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}              
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

PreCommands and PostCommands

Allow you to define HDFS commands to perform before and after running the job, for example, for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails when a pre- or post-command fails.

The default for PreCommands is true; that is, the job fails if any pre-command fails.

The default for PostCommands is false; that is, the job completes successfully even if a post-command fails.

GeneralOptions

Additional Hadoop command options passed to hadoop-streaming.jar, including generic options and streaming options.


Job:Dummy

The following example shows how to use Job:Dummy to define a job that always ends successfully without running any commands. 

"DummyJob" : {
   "Type" : "Job:Dummy"
}
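
Although a Dummy job runs no commands, it can serve as a placeholder, for example to occupy a scheduling slot. The following is a minimal sketch using the When parameter (also shown in the Defaults section below):

"PlaceholderJob" : {
   "Type" : "Job:Dummy",
   "When" : {
      "WeekDays" : ["SUN"]
   }
}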


Connection Profile

Connection profiles define access methods and security credentials for a specific application, and they can be referenced by multiple jobs. Before running jobs that reference a connection profile, you must deploy the connection profile definition.
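
With the Control-M Automation API CLI, this is typically done with the ctm deploy command; for example (the file name is illustrative):

ctm deploy connection-profiles.json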

ConnectionProfile:Hadoop

These examples show how to use connection profiles for the various types of Hadoop jobs.

Job:Hadoop

These are the required parameters for all Hadoop job types.

"HadoopConnectionProfileSample":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost"
}
TargetAgent

The Control-M/Agent to which to deploy the connection profile.

TargetCTM

The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.

The following optional parameters define the user that runs the Hadoop jobs.

"HadoopConnectionProfileSample":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
	"TargetCTM" : "CTMHost",
    "RunAs": "",
    "KeyTabPath":""
}
RunAs

Defines the user account under which Hadoop jobs run.

Leave this field empty to run Hadoop jobs under the user account where the agent was installed.

If you define a specific RunAs user, the Control-M/Agent must run as root.

When Kerberos security is used:

RunAs

Principal name of the user

KeyTabPath

Keytab file path for the target user
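
The following sketch shows a connection profile for a Kerberos-secured cluster (the principal and keytab path are illustrative):

"HadoopKerberosConnectionProfileSample":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "RunAs": "hdfsuser@MYREALM.COM",
    "KeyTabPath": "/home/hdfsuser/hdfsuser.keytab"
}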

Job:Hadoop:Oozie

 The following example shows a connection profile that defines access to an Oozie server.

 "OozieConnectionProfileSample" :
 {
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "hdp-ubuntu",
    "Oozie" :
    {
      "SslEnabled"     : false,
      "Host" : "hdp-centos",
      "Port" : "11000"
    }
  }
}
Host

Oozie server host
Port

Oozie server port

Default: 11000

SslEnabled

true | false

Default: false

Job:Hadoop:Sqoop

The following example shows a connection profile that defines a Sqoop data source and access credentials.

 "SqoopConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Sqoop" :
    {
      "User"     : "username",
      "Password" : "userpassword",
      "ConnectionString" : "jdbc:mysql://mysql.server/database",
      "DriverClass" : "com.mysql.jdbc.Driver"
    }
}
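
A job that references this profile might look as follows. This is a sketch; the SqoopCommand property is assumed to be as described for the Job:Hadoop:Sqoop job type earlier in this guide:

"SqoopJob" :
{
    "Type" : "Job:Hadoop:Sqoop",
    "Host" : "edgenode",
    "ConnectionProfile" : "SqoopConnectionProfileSample",
    "SqoopCommand" : "import --table foo --target-dir /dest_dir"
}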

Job:Hadoop:Hive

The following example shows a connection profile that defines a Hive beeline endpoint and access credentials. The parameters in the example translate to this beeline command: 

beeline -u jdbc:hive2://<Host>:<Port>/<DatabaseName>

 "HiveConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Hive" :
    {
       "Host" : "hive_host_name",
       "Port" : "10000",
       "DatabaseName" : "hive_database",
    }
}

The following shows how to use optional parameters for a Hadoop Hive job type connection profile. 

The parameters in the example translate to this beeline command:  

beeline -u jdbc:hive2://<Host>:<Port>/<DatabaseName>;principal=<Principal> -n <User> -p <Password>

 "HiveConnectionProfileSample1":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Hive" :
    {
       "Host" : "hive_host_name",
       "Port" : "10000",
       "DatabaseName" : "hive_database",
       "User" : "user_name",
       "Password" : "user_password",
       "Principal" : "Server_Principal_of_HiveServer2@Realm"
    }
}


ConnectionProfile:FileTransfer 

The following examples show you how to define a connection profile for the different File Transfer types.

ConnectionProfile:FileTransfer:FTP

Simple ConnectionProfile:FileTransfer:FTP

"FTPConn" : {
   "Type" : "ConnectionProfile:FileTransfer:FTP",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "FTPServer",
   "User" : "FTPUser",
   "Password" : "ftp password"
}

ConnectionProfile:FileTransfer:FTP with optional parameters

"FTPConn" : {
   "Type" : "ConnectionProfile:FileTransfer:FTP",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "FTPServer",
   "Port": "21",
   "User" : "FTPUser",
   "Password" : "ftp password",
   "HomeDirectory": "/home/FTPUser",
   "OsType": "Unix",   
   "WorkloadAutomationUsers":["john","bob"]
}
TargetAgent

The Control-M/Agent to which to deploy the connection profile.

TargetCTM

The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.
OsType

(Optional) FTP server operating system type

Default: Unix

Types: Unix, Windows

Password

(Optional) Password for the FTP server account. Use Secrets in code to avoid exposing the password in the code.

HomeDirectory

(Optional) User home directory
WorkloadAutomationUsers

(Optional) Users that are allowed to access the connection profile.

NOTE: You can use "*" as a wildcard. For example, "e*".

Default: * (all users)

Checksum

(Optional) Enable or disable error detection on file transfer

True | False

Default: False

Passive

(Optional) Sets the FTP client mode; False means active mode.

True | False

Default: False

True is recommended for servers behind a firewall.

ConnectionProfile:FileTransfer:SFTP

The following examples show a connection profile for SFTP communication protocol. 

Simple ConnectionProfile:FileTransfer:SFTP

"sFTPconn": {
   "Type": "ConnectionProfile:FileTransfer:SFTP",
   "TargetAgent": "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "SFTPServer",
   "Port": "22",
   "User" : "SFTPUser",
   "Password" : "sftp password"
}

ConnectionProfile:FileTransfer:SFTP with optional parameters

"sFTPconn": {
   "Type": "ConnectionProfile:FileTransfer:SFTP",
   "TargetAgent": "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "SFTPServer",
   "Port": "22",
   "User" : "SFTPUser",
   "HomeDirectory": "/home/SFTPUser",  
   "PrivateKeyName": "/home/controlm/ctm_agent/ctm/cm/AFT/data/Keys/sFTPkey",
   "Passphrase": "passphrase"
}
TargetAgent

The Control-M/Agent to which to deploy the connection profile.

TargetCTM

The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.
PrivateKeyName

(Optional) Private key full file path

Passphrase

(Optional) Password for the private key. Use Secrets in code to not expose the password in the code.

Password

(Optional) Password for the SFTP server account. Use Secrets in code to avoid exposing the password in the code.

HomeDirectory

(Optional) User home directory
WorkloadAutomationUsers

(Optional) Users that are allowed to access the connection profile.

NOTE: You can use "*" as a wildcard. For example, "e*".

Default: * (all users)

Checksum

(Optional) Enable or disable error detection on file transfer

True | False

Default: False

ConnectionProfile:FileTransfer:Local

The following example shows a connection profile for Local File System. 

"LocalConn" : {
   "Type" : "ConnectionProfile:FileTransfer:Local",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "User" : "controlm",
   "Password" : "local password"
}
TargetAgent

The Control-M/Agent to which to deploy the connection profile.

TargetCTM

The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.

OsType

(Optional) Operating system type of the local host

Default: Unix

Types: Unix, Windows

Password

(Optional) Password for the local account. Use Secrets in code to avoid exposing the password in the code.
WorkloadAutomationUsers

(Optional) Users that are allowed to access the connection profile.

NOTE: You can use "*" as a wildcard. For example, "e*".

Default: * (all users)

Checksum

(Optional) Enable or disable error detection on file transfer

True | False

Default: False
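
As a usage sketch, a file transfer job can reference two of these connection profiles as source and destination. The parameter names below are assumed to follow the Job:FileTransfer job type described earlier in this guide:

"TransferFromFTPToLocal" :
{
   "Type" : "Job:FileTransfer",
   "Host" : "AgentHost",
   "ConnectionProfileSrc" : "FTPConn",
   "ConnectionProfileDest" : "LocalConn",
   "FileTransfers" :
   [
      {
         "Src" : "/home/FTPUser/file1",
         "Dest" : "/home/controlm/file1"
      }
   ]
}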


ConnectionProfile:Database

The database connection profile allows you to connect to the following database types: MSSQL, Oracle, DB2, Sybase, and PostgreSQL, as well as custom database types that you define using JDBC.

The following example shows how to define an MSSQL database connection profile. 

 {
	"MSSqlConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:MSSQL",
		"TargetAgent": "AgentHost",
		"Host": "MSSQLHost",
		"User": "db user",
		"Port":"1433",
		"Password": "db password",
		"DatabaseName": "master",
		"DatabaseVersion": "2005",
		"MaxConcurrentConnections": "9",
		"ConnectionRetryTimeOut": "34",
		"ConnectionIdleTime": "45"
	},
	"MSsqlDBFolder": {
		"Type": "Folder",
		"testMSSQL": {
			"Type": "Job:Database:SQLScript",
			"Host": "app-redhat",
			"SQLScript": "/home/controlm/sqlscripts/selectArgs.sql",
			"ConnectionProfile": "MSSqlConnectionProfileSample",
			"Parameters": [ 
				{ "firstParamName": "firstParamValue" }, 
				{ "second": "secondParamValue" } 
			],
			"Autocommit": "N",
			"OutputExcecutionLog": "Y",
			"OutputSQLOutput": "Y",
			"SQLOutputFormat": "XML"
		}
	}
}

This example contains the following parameters:

Port

The database port number.

If the port is not specified, the following default values are used for each database type:

  • MSSQL - 1433
  • Oracle - 1521
  • DB2 - 50000
  • Sybase - 4100
  • PostgreSQL - 5432
Password

Password for the database account. Use Secrets in code to avoid exposing the password in the code.

DatabaseName

The name of the database
DatabaseVersion

The version of the database. The following database drivers are supported in Control-M for database V9:

  • MSSQL - 2005, 2008, 2012, 2014
  • Oracle - 9i, 10g, 11g, 12c
  • DB2 - 9, 10
  • Sybase - 12, 15
  • PostgreSQL - 8, 9

The default version for each database is the earliest version listed above.

MaxConcurrentConnections

The maximum number of connections that the database can process at the same time.

Allowed values: 1–512
Default value: 100

ConnectionRetryTimeOut

The number of seconds to wait before attempting to connect again.

Allowed values: 1–300
Default value: 5 seconds

ConnectionIdleTime

The number of seconds that the database connection profile can remain idle before disconnecting.

Default value: 300 seconds

ConnectionRetryNum

The number of times to attempt to reconnect after a connection failure.

Allowed values: 1–24
Default value: 5

AuthenticationType

The authentication method used to connect to the database.

Possible values are:

  • NTLM2 Windows Authentication
  • Windows Authentication
  • SQL Server Authentication

Default: SQL Server Authentication
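
For example, to use Windows authentication, the MSSQL connection profile above might include the following line (a sketch based on the values listed above):

"AuthenticationType": "Windows Authentication"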

ConnectionProfile:Database:DB2

The following example shows how to define a connection profile for DB2. Additional available parameters are described in the table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:DB2",
    "TargetAgent": "AgentHost",
    "Host": "DB2Host",
    "Port":"50000",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "db2"
  }
} 

ConnectionProfile:Database:Sybase

The following example shows how to define a connection profile for Sybase. Additional available parameters are described in the table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:Sybase",
    "TargetAgent": "AgentHost",
    "Host": "SybaseHost",
    "Port":"4100",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "Master"
  }
} 

ConnectionProfile:Database:PostgreSQL

The following example shows how to define a connection profile for PostgreSQL. Additional available parameters are described in the table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:PostgreSQL",
    "TargetAgent": "AgentHost",
    "Host": "PostgreSQLHost",
    "Port":"5432",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "postgres"
  }
} 

ConnectionProfile:Database:Oracle

Oracle database connection profiles can be defined in three ways:

ConnectionProfile:Database:Oracle:SID

The following example shows how to define a connection profile for an Oracle database using the SID identifier. Additional available parameters are described in the table above.

"OracleConnectionProfileSample": {   
	"Type": "ConnectionProfile:Database:Oracle:SID",   
	"TargetCTM": "controlm",   
	"Port": "1521",   
	"TargetAgent": "AgentHost",
	"Host": "OracleHost",
	"User": "db user",   
	"Password": "db password",
	"SID": "ORCL" 
}
ConnectionProfile:Database:Oracle:ServiceName

The following example shows how to define a connection profile for an Oracle database using a single service name. Additional available parameters are described in the table above.

"OracleConnectionProfileSample": {   
	"Type": "ConnectionProfile:Database:Oracle:ServiceName",   
	"TargetCTM": "controlm",   
	"Port": "1521",   
	"TargetAgent": "AgentHost",
	"Host": "OracleHost",
	"User": "db user",   
	"Password": "db password",
	"ServiceName": "ORCL" 
}
ConnectionProfile:Database:Oracle:ConnectionString

The following example shows how to define a connection profile for an Oracle database using a connection string that contains text from your tnsnames.ora file. Additional available parameters are described in the table above.

{
	"OracleConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:Oracle:ConnectionString",
		"TargetCTM":"CTMHost",
		"ConnectionString":"OracleHost:1521:ORCL",
		"TargetAgent": "AgentHost",
		"User": "db user",
		"Password": "db password"
	}
}

ConnectionProfile:Database:JDBC

The following example shows how to define a connection profile using a custom defined database type created using JDBC. Additional available parameters are described in the table above.

{
	"OracleConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:JDBC",
		"User":"db user",
		"TargetCTM":"CTMHost",
		"Host": "PGSQLHost",
		"Driver":"PGDRV",
		"Port":"5432",
		"TargetAgent": "AgentHost",
		"Password": "db password",
		"DatabaseName":"dbname"
	}
}
Driver

JDBC driver name as defined in Control-M or as defined using the Driver object

Driver:JDBC:Database

You can define a driver to be used by a connection profile. The following example shows the parameters that you use to define a driver:

{
  "MyDriver": {
    "Type": "Driver:Jdbc:Database",
    "TargetAgent":"app-redhat",
    "StringTemplate":"jdbc:sqlserver://<HOST>:<PORT>/<DATABASE>",
    "DriverJarsFolder":"/home/controlm/ctm/cm/DB/JDBCDrivers/PostgreSQL/9.4/",
    "ClassName":"org.postgresql.Driver",
    "LineComment" : "--",
    "StatementSeparator" : ";"
 }
}
TargetAgent

The Control-M/Agent to which to deploy the driver.

StringTemplate

The structure according to which a connection profile string is created.

DriverJarsFolder

The path to the folder where the database driver jars are located.

ClassName

The name of the driver class

LineComment

The syntax used for line comments in the scripts that run on the database.

StatementSeparator

The syntax used for statement separators in the scripts that run on the database.
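
A connection profile can then reference this driver by name. The following is a minimal sketch; the profile name is illustrative:

{
	"MyJdbcConnectionProfile": {
		"Type": "ConnectionProfile:Database:JDBC",
		"TargetAgent": "app-redhat",
		"TargetCTM": "CTMHost",
		"Host": "PGSQLHost",
		"Port": "5432",
		"User": "db user",
		"Password": "db password",
		"DatabaseName": "dbname",
		"Driver": "MyDriver"
	}
}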


Secrets in Code

You can use the Secret object in your JSON code when you do not want to expose confidential information in the source (for example, the password field in a Connection Profile). The syntax below enables you to reference a named secret as defined in the Control-M vault. To learn how to manage secrets, see section Config Secrets. The value of the secret is resolved during deployment.

The following syntax is used to reference a secret. 

"<parameter>" :  {"Secret": "<secret name>"}

The following example shows how to use secrets in code:

{
    "Type": "ConnectionProfile:Hadoop",
    "Hive": {
        "Host": "hiveServer",
        "Principal": "a@bc",
        "Port": "1024",
        "User": "emuser",
        "Password": {"Secret": "hive_dev_secret"}
    }
}


Defaults

Allows you to define default parameter values for all objects at once.

The following example shows how to define scheduling criteria using the When parameter. This configures all jobs to run according to the same scheduling criteria. Note that if you also set a specific value at the job level, the job-level value overrides the value in the global-level Defaults section.

{
    "Defaults" : {
        "Host" : "HOST",
        "When" : {
            "WeekDays":["MON","TUE"],
            "FromTime":"1500",
            "ToTime":"1800"       
        }
    }
}

The following example shows how to define defaults for all objects of type Job:*.

{
    "Defaults" : {
        "Job": {
            "Host" : "HOST",
            "When" : {
                "WeekDays":["MON","TUE"],
                "FromTime":"1500",
                "ToTime":"1800"       
            }
        }
    }
 
}

The following example shows how to define defaults at the folder level that override defaults at the global level. 

{
	"Folder1": {
         "Type": "Folder",
         "Defaults" : {
          "Job:Hadoop": {
              "Host" : "HOST1",
              "When" : {
                "WeekDays":["MON","TUE"],
                "FromTime":"1500",
                "ToTime":"1800"       
             }
           }
         }
	}
}

The following example shows how to define defaults that are user-defined objects such as actionIfSuccess. For each job that succeeds, an email is sent.

{
	"Defaults" : {
        "Job": {
            "Host" : "HOST",
            "actionIfSuccess" : {
                "Type": "If",
                "CompletionStatus":"OK",
                "mailTeam": {
                  "Type": "Mail",
                  "Message": "Job %%JOBNAME succeeded",
                  "Subject": "Success",
                  "To": "team@mycomp.com"
                }
            }
        }
    }
}

