Installation Documentation

  • Installation Requirements
  • Installation Instructions for Windows, Linux, Mac OS, and other platforms
  • Control-M Automation API Videos

Where to find more information

For more information about Control-M, use the following resources:

  • Support videos
  • Online help

Introduction

Control-M integrates the management of batch workload processes to a single point of control. With cross-application and cross-platform scheduling capabilities, this powerful solution accelerates delivery of digital services and increases the quality of service delivered to users and the business.

With Control-M you:

  • Gain a faster, more cost-effective way to manage workload with a unique architecture that supports growth and provides unmatched integration
  • Reduce the number of failure points and delays caused by manual processes with a single, unified scheduling interface — regardless of platform or application
  • Eliminate your reliance on multiple toolsets and staff resources with automated scheduling processes that help you manage priorities according to business needs
  • Accelerate delivery of digital services by connecting Applications Development and IT Operations using a collaborative web application

Control-M Automation API is a set of programmatic interfaces that give developers and DevOps engineers access to the capabilities of Control-M within the modern application release process. Job flows and related configuration objects are built in JSON and managed together with other application artifacts in any source code management solution, such as Git. This approach enables capabilities like sophisticated scheduling, flow control, and SLA management to be built in from inception and used when batch applications run, as they are automatically deployed in dev, test, and production environments.

Control-M Automation API provides an automated interface to Control-M in addition to graphical user interfaces that have been used traditionally. With Control-M Automation API, you manage job workflows as code, similar to the way modern configuration management solutions manage infrastructure as code. The actual execution of batch jobs is managed by Control‑M. 

You can use a Command Line Interface or the REST API to build, run and test jobs against an existing Control-M. A local workbench provides initial job flow unit testing functions which are a subset of the complete set of Control-M capabilities. Once your flows are ready, you can check them into your source control system with the rest of your project files. The Automation API provides services to package and deploy job flows to the Control-M environment. Control-M schedules and operates on the most recently deployed flows.

Additionally, Control-M Automation API provides services to provision Control-M/Agent machines, configure host groups, and more.
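
For example, the tutorials below use the provision service to list the available agent images and install one, with commands such as:

>ctm provision images Linux
>ctm provision install Agent.Linux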

No prior experience with Control-M is required to write jobs. Work through this guide and its tutorials to get started. The guide introduces you to the concepts of Control-M Automation API with some examples of flows. You can learn more about creating jobs by following the steps in the Creating Your First Job Flow tutorial. You can find more information about accessing Control-M Automation API services to build, test, and deploy your jobs on the Installation page.

Control-M in a Nutshell

In order to make use of Control-M Automation API effectively, there are some concepts of Control-M you need to be familiar with:

  • Jobs
  • Folders
  • Scheduling
  • Control-M/Server
  • Control-M/Agent
  • Host groups
  • Site standards

A Job is a basic execution unit of Control-M which has several attributes, such as:

  • What is the job type: script, command, Hadoop/Spark, file transfer, etc. 
  • Where does this job run: host
  • Who runs the job: connection profile or run as
  • When should it run: scheduling criteria
  • What are the job dependencies: a specific job must complete, a file must arrive, etc.

Jobs are contained in folders, which can also have scheduling criteria, dependencies, and other instructions that apply to all jobs in the folder. You define these entities in JSON format. 
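
For example, here is a minimal sketch of a folder containing a single command job, using the same JSON structure as the samples later in this guide (the names and values are illustrative only):

{
    "MyFirstFolder": {
        "Type": "Folder",
        "HelloJob": {
            "Type": "Job:Command",
            "Command": "echo hello",
            "Host": "<HOST>",
            "RunAs": "<USERNAME>",
            "When": {
                "WeekDays": ["MON", "TUE", "WED", "THU", "FRI"],
                "FromTime": "0300",
                "ToTime": "2100"
            }
        }
    }
}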

The scheduling engine of Control-M is the Control-M/Server. Once a day it reads job definitions and starts monitoring all jobs scheduled to start on that day. These are then submitted to run on Control-M/Agents when scheduling criteria and dependencies are met.

Control-M/Agents reside on the application hosts. You can group them into a host group. Specifying a job to run on a host group causes Control-M/Server to balance the load by directing jobs to the various hosts in the host group.
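
For example, a job that targets a host group simply names the host group in its Host attribute (the host group name below is hypothetical):

"LoadBalancedJob": {
    "Type": "Job:Command",
    "Command": "echo running on any available host",
    "Host": "application_hostgroup",
    "RunAs": "user1"
}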

Site Standards are a set of naming rules for the different job fields. When using them, only jobs that match the relevant site standard can be built and deployed to that Control-M. Site standards are available as part of the Control-M Workload Change Manager add-on.

For more information on Control-M solutions, refer to the Control-M documentation.

Control-M Automation API decentralizes access to Control-M entities and provides them directly to developers. When using the Control-M Automation API, you access Control-M via a REST API HTTP endpoint.

 

Installation

You can install Control-M Workbench or access Control-M REST API from a Control-M instance.

Control-M Workbench provides a development environment that enables you to build, run, and test your job flows. You can make and validate changes without affecting the work of other stakeholders. To install Control-M Workbench, follow the steps described in Installation of Control-M Workbench.

For information on Control-M instance, refer to the Control-M documentation.

You can run the Control-M API commands via CLI or REST API to build and manage your Control-M workflows. To install the CLI, follow the steps described in Installation of Control-M Automation CLI.

 

The following tutorials are designed to introduce you to common best practices of using Control-M Automation API.

Tutorials

Following are the tutorials that explain common usage of the Automation API. 

Before you start with the tutorials, you must follow the steps in Setting up the prerequisites to set up your preferred environment.

  • Creating Your First Job Flow - Explains how to write jobs that execute OS commands and scripts
  • Running your applications and programs in your environment - Explains how to run Hadoop and Spark jobs, scripts and commands, and file transfer and database query jobs
  • How to Automate Code Deployment - Explains how a DevOps engineer can automate code deployment
  • Building a docker container for batch applications - Explains how to build a docker container for Control-M/Agent and run some basic API commands

 

 

 

Setting up the prerequisites

The following steps describe how to set up your environment to run each of the tutorials.

Step 1 - Access to Control-M Automation API

Control-M Automation API is a REST API interface over HTTPS. You can either install Control-M Workbench or access a Control-M instance where you must have Control-M credentials.

Step 2 - Install Control-M Automation CLI

To run the tutorials you need a working Control-M CLI. Refer to the Control-M Automation CLI installation guide. Select the relevant platform and follow the steps. 

Step 3 - Clone GitHub repository

Fetch the tutorial samples from GitHub by running git clone to create a local copy of the source code:

> git clone https://github.com/controlm/automation-api-quickstart.git

Step 4 - Setting up a Control-M Environment

The Control-M Automation CLI requires an environment to run the commands. An Environment is a combination of an endPoint, username and password.

An endPoint is the URI for Control-M Automation API.

Here is an example of the default URI for Control-M Automation API:

https://<controlmEndPoint>:8443/automation-api


If you installed Control-M Workbench, you do not need to add an environment. By default, an environment named "workbench" is created after installing the Control-M Automation CLI. Continue to the next step.

If you accessed a Control-M instance, use the following command to add an environment:

>ctm environment add devEnvironment "https://<controlmEndPoint>:8443/automation-api" "[ControlmUser]" "[ControlmPassword]"

Step 5 - Make the Environment Default

All invoked commands use the default environment.

For Control-M Workbench:

 >ctm environment set workbench

For Control-M instance:

The following command shows you how to set devEnvironment and the response returned.

>ctm environment set devEnvironment
info:    current environment: devEnvironment
info:    environment: {
  "local": {
    "endPoint": "http://localhost:48080/",
    "user": ""
  },
  "devEnvironment": {
    "endPoint": "https://<controlmEndPoint>:8443/automation-api",
    "user": "<controlmUser>"
  }
}

Step 6 - Accessing documentation

You can access the REST API documentation for this environment:

>ctm documentation restApi

To access this Getting Started Guide:

>ctm doc gettingStarted

Tutorial - Creating Your First Job Flow

This example shows you how to write command and script jobs that run in sequence. 

Step 1 - Access the tutorial samples

Go to the directory where the tutorial sample is located:

>cd automation-api-quickstart/101-create-first-job-flow

Step 2 - Verify the Code is Valid for Control-M

Let's take AutomationAPISampleFlow.json, which contains the job definitions, and verify that the code is valid. To do so, use the build command. The following shows the command and a typical successful response.

>ctm build AutomationAPISampleFlow.json
[
  {
    "deploymentFile": "AutomationAPISampleFlow.json",
    "successfulFoldersCount": 0,
    "successfulSmartFoldersCount": 1,
    "successfulSubFoldersCount": 0,
    "successfulJobsCount": 2,
    "successfulConnectionProfilesCount": 0
  }
]

If the code is not valid, an error is returned.

Step 3 - Run the Source Code

Use the run command to run the jobs on the Control-M environment. The returned runId is used to check the job status. The following shows the command and a typical successful response.

>ctm run AutomationAPISampleFlow.json
{

  "runId": "7cba67de-9e0d-409d-8d93-1b8229432eee",
  "statusURI": "https://localhost:8443/automation-api/run/status/7cba67de-9e0d-409d-8d93-1b82294e?token=4f8684ec6754e08cc70f95b5f09d3a47_A1FD0E65",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=7cba67de-9e0d-409d-8d93-29432eee&title=AutomationAPISampleFlow.json"
}

This code ran successfully and returned the runId of "7cba67de-9e0d-409d-8d93-1b8229432eee".

Step 4 - Check the Job Status Using the runId

The following command shows how to check job status using the runId. Note that when there is more than one job in the flow, the statuses of all the jobs are checked and returned.

>ctm run status "7cba67de-9e0d-409d-8d93-1b8229432eee"
{
  "statuses": [
    {
      "jobId": "workbench:00007",
      "folderId": "workbench:00000",
      "numberOfRuns": 1,
      "name": "AutomationAPISampleFlow",
      "type": "Folder",
      "status": "Executing",
      "startTime": "Apr 26, 2017 10:43:47 AM",
      "endTime": "",
      "outputURI": "Folder has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:00007/log?token=01ab65917bc71dbef610806dd9cb3f94_0007C46B"
    },
    {
      "jobId": "workbench:00008",
      "folderId": "workbench:00007",
      "numberOfRuns": 0,
      "name": "CommandJob",
      "folder": "AutomationAPISampleFlow",
      "type": "Command",
      "status": "Wait Host",
      "startTime": "",
      "endTime": "",
      "outputURI": "Job did not run, it has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:00008/log?token=01ab65917bc71dbef610806dd9cb3f94_0007C46B"
    },
    {
      "jobId": "workbench:00009",
      "folderId": "workbench:00007",
      "numberOfRuns": 0,
      "name": "ScriptJob",
      "folder": "AutomationAPISampleFlow",
      "type": "Job",
      "status": "Wait Condition",
      "startTime": "",
      "endTime": "",
      "outputURI": "Job did not run, it has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:00009/log?token=01ab65917bc71dbef610806dd9cb3f94_0007C46B"
    }
  ],
  "startIndex": 0,
  "itemsPerPage": 25,
  "total": 3,
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=7cba67de-9e0d-409d-8d93-1b8229432eee&title=Status_7cba67de-9e0d-409d-8d93-1b8229432eee"

Step 5 - Examine the Source Code

Let's look inside the following source code. You'll learn about the structure of the job flow and what it should contain.

{
    "Defaults" : {
        "Application" : "SampleApp",
        "SubApplication" : "SampleSubApp",
        "RunAs" : "<USERNAME>",
        "Host" : "<HOST>",
        "Job": {
            "When" : {
                "Months": ["JAN", "OCT", "DEC"],
                "MonthDays":["22","1","11"],
                "WeekDays":["MON","TUE", "WED", "THU", "FRI"],
                "FromTime":"0300",
                "ToTime":"2100"
            },
            "ActionIfFailure" : {
                "Type": "If",
                "CompletionStatus": "NOTOK",
                "mailToTeam": {
                    "Type": "Mail",
                    "Message": "%%JOBNAME failed",
                    "To": "team@mycomp.com"
                }
            }
        }
    },
    "AutomationAPISampleFlow": {
        "Type": "Folder",
        "Comment" : "Code reviewed by John",
        "CommandJob": {
            "Type": "Job:Command",
            "Command": "echo my 1st job"
        },
        "ScriptJob": {
            "Type": "Job:Script",
                "FilePath":"<SCRIPT_PATH>",
                "FileName":"<SCRIPT_NAME>"
        },
        "Flow": {
            "Type": "Flow",
            "Sequence": ["CommandJob", "ScriptJob"]
        }
    }
}

The first object is called "Defaults". It allows you to define a parameter once for all objects. For example, it includes scheduling using the When parameter which configures all jobs to run according to the same scheduling criteria. The "ActionIfFailure" object determines what action is taken if a job ends unsuccessfully.

This example contains two jobs: CommandJob and ScriptJob. These jobs are contained within a folder named AutomationAPISampleFlow.  To define the sequence of job execution, use the Flow object.  
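
For instance, a Flow that chains three jobs (the job names here are hypothetical) would be written as:

"Flow": {
    "Type": "Flow",
    "Sequence": ["GetDataJob", "ProcessDataJob", "ReportJob"]
}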

Step 6 - Modify the Code to Run in Your Environment

In the code above, the following parameters need to be set to run the jobs in your environment: 

"RunAs" : "<USERNAME>" 
"Host" : "<HOST>"
 
"FilePath":"<SCRIPT_PATH>"
"FileName":"<SCRIPT_NAME>"

RunAs identifies the operating system user that will execute the jobs.

Host defines the machine where the jobs will run. This machine should have a Control-M/Agent installed. 

FilePath and FileName define the location and name of the file that contains the script to run.

Change the values to suit your Control-M environment. In a Control-M Workbench environment, change the values of the following parameters as shown in the example below. This script returns a communication report.

"RunAs" : "workbench" 
"Host" : "workbench"
 
"FilePath":"/home/workbench/ctm/scripts"
"FileName":"ag_diag_comm"


Step 7 - Rerun the Code Sample

Now that we've modified the source code in the previous step, let's rerun the sample:

>ctm run AutomationAPISampleFlow.json
{
  "runId": "ed40f73e-fb7a-4f07-a71c-bc2dfbc48494",
  "statusURI": "https://localhost:8443/automation-api/run/status/ed40f73e-fb7a-4f07-a71c-bc2dfbc48494?token=460e0106b369a0d155bb0e7cbb44f8eb_7E6C03FA",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=ed40f73e-fb7a-4f07-a71c-bc2dfbc48494&title=AutomationAPISampleFlow.json"
}

 

Each time you run the code, a new runId is generated. Let's take the new runId, and check the jobs statuses again:

>ctm run status "ed40f73e-fb7a-4f07-a71c-bc2dfbc48494"
{
  "statuses": [
    {
      "jobId": "workbench:0000p",
      "folderId": "workbench:00000",
      "numberOfRuns": 1,
      "name": "AutomationAPISampleFlow",
      "type": "Folder",
      "status": "Ended OK",
      "startTime": "May 3, 2017 4:57:25 PM",
      "endTime": "May 3, 2017 4:57:28 PM",
      "outputURI": "Folder has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:0000p/log?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9"
    },
    {
      "jobId": "workbench:0000q",
      "folderId": "workbench:0000p",
      "numberOfRuns": 1,
      "name": "CommandJob",
      "folder": "AutomationAPISampleFlow",
      "type": "Command",
      "status": "Ended OK",
      "startTime": "May 3, 2017 4:57:26 PM",
      "endTime": "May 3, 2017 4:57:26 PM",
      "outputURI": "https://localhost:8443/automation-api/run/job/workbench:0000q/output?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:0000q/log?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9"
    },
    {
      "jobId": "workbench:0000r",
      "folderId": "workbench:0000p",
      "numberOfRuns": 1,
      "name": "ScriptJob",
      "folder": "AutomationAPISampleFlow",
      "type": "Job",
      "status": "Ended OK",
      "startTime": "May 3, 2017 4:57:27 PM",
      "endTime": "May 3, 2017 4:57:27 PM",
      "outputURI": "https://localhost:8443/automation-api/run/job/workbench:0000r/output?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:0000r/log?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9"
    }
  ],
  "startIndex": 0,
  "itemsPerPage": 25,
  "total": 3,
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=ed40f73e-fb7a-4f07-a71c-bc2dfbc48494&title=Status_ed40f73e-fb7a-4f07-a71c-bc2dfbc48494"
}

You can now see that both jobs Ended OK.

Let's view the output of CommandJob. You need to use the jobId to get this information.

>ctm run job:output::get "workbench:0000q"
+ echo my 1st job
my 1st job

Step 8 - Using the Workbench Debugger

When using Control-M Workbench, you can view the run commands in an interactive user interface by adding "--interactive" (or -i for short) at the end of the command.

>ctm run AutomationAPISampleFlow.json --interactive
{
  "runId": "40586805-60b5-4acb-9f21-a0cf048f1051",
  "statusURI": "https://ec2-54-187-1-168.us-west-2.compute.amazonaws.com:8443/run/status/40586805-60b5-4acb-9f21-a0cf048f1051",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=40586805-60b5-4acb-9f21-a0cf048f1051&title=AutomationAPISampleFlow.json
}

A browser window opens where you can view and manage your jobs.


To learn more about what you can do with the Control-M Automation API, read through the Code Reference and Automation API Services.


In the next tutorial you will learn how to run your applications and programs in your development environment.

 

 

Tutorial - Running your applications and programs in your environment

Jobs run where the Control-M/Agent resides. You need a Control-M/Agent on your application host. The provision service enables you to install and set up a Control-M/Agent.

Select the relevant application:

 

Running Hadoop Spark Job Flow

This example walks you through writing Hadoop and Spark jobs that run in sequence. To complete this tutorial, you need a Hadoop edge node where the Hadoop client software is installed.

Let's verify that Hadoop and HDFS are operational using the following commands:

>hadoop version
Hadoop 2.6.0-cdh5.4.2
Subversion http://github.com/cloudera/hadoop -r 15b703c8725733b7b2813d2325659eb7d57e7a3f
Compiled by jenkins on 2015-05-20T00:03Z
Compiled with protoc 2.5.0
From source with checksum de74f1adb3744f8ee85d9a5b98f90d
This command was run using /usr/jars/hadoop-common-2.6.0-cdh5.4.2.jar
 
> hadoop fs -ls /
Found 5 items
drwxr-xr-x   - hbase supergroup          0 2015-12-13 02:32 /hbase
drwxr-xr-x   - solr  solr                0 2015-06-09 03:38 /solr
drwxrwxrwx   - hdfs  supergroup          0 2016-03-20 07:11 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2016-03-29 06:51 /user
drwxr-xr-x   - hdfs  supergroup          0 2015-06-09 03:36 /var

 

Step 1 - Finding the right image to provision

The provision images command lists the images available to install.

>ctm provision images Linux
[
  "Agent.Linux",
  "ApplicationsAgent.Linux",
  "BigDataAgent.Linux"
]

As you can see, there are three available Linux images:

  • Agent.Linux - provides the ability to run scripts, programs and commands. 
  • ApplicationsAgent.Linux - in addition to Agent.Linux, it adds plugins to run file transfer jobs and database SQL scripts.
  • BigDataAgent.Linux - in addition to Agent.Linux, it adds a plugin to run Hadoop and Spark jobs.

In this example, we will provision the BigDataAgent.Linux image.

Step 2 - Provision the BigDataAgent Image 

>ctm provision install BigDataAgent.Linux


After provisioning the BigDataAgent successfully, you now have a running instance of Control-M/Agent on your Hadoop edge node.

Now let's access the tutorial samples code.

Step 3 - Access the tutorial samples

Go to the directory where the tutorial sample is located:

>cd automation-api-quickstart/101-running-hadoop-spark-job-flow

Step 4 - Verify the Code is Valid for Control-M

Let's take AutomationAPISampleHadoopFlow.json, which contains the job definitions, and verify that the code is valid. To do so, use the build command. The following shows the command and a typical successful response.

>ctm build AutomationAPISampleHadoopFlow.json
[
  {
    "deploymentFile": "AutomationAPISampleHadoopFlow.json",
    "successfulFoldersCount": 0,
    "successfulSmartFoldersCount": 1,
    "successfulSubFoldersCount": 0,
    "successfulJobsCount": 2,
    "successfulConnectionProfilesCount": 0
  }
]

If the code is not valid, an error is returned.


Step 5 - Examine the Source Code

Let's look inside the following source code. You'll learn about the structure of the job flow and what it should contain.

    {
    "Defaults" : {
        "Application": "SampleApp",
        "SubApplication": "SampleSubApp",
        "Host" : "<HOST>",
        "When" : {
            "FromTime":"0300",
            "ToTime":"2100"
        },
        "Job:Hadoop" : {
            "ConnectionProfile": "SampleConnectionProfile"
        }
    },
    "SampleConnectionProfile" :
    {
        "Type" : "ConnectionProfile:Hadoop",
        "TargetAgent" : "<HOST>"
    },
    "AutomationAPIHadoopSampleFlow": {
        "Type": "Folder",
        "Comment" : "Code reviewed by John",
        "ProcessData": {
            "Type": "Job:Hadoop:Spark:Python",
            "SparkScript": "file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/processData.py",
            
            "Arguments": [
                "file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/processData.py",
                "file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/processDataOutDir"
            ],
            "PreCommands" : {
                "Commands" : [
                    { "rm":"-R -f file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/processDataOutDir" }
                ]                   
            }
        },
        "CopyOutputData" :
        {
            "Type" : "Job:Hadoop:HDFSCommands",
            "Commands" : [
                {"rm"    : "-R -f samplesOut" },
                {"mkdir" : "samplesOut" },
                {"cp"   : "file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/* samplesOut" }
            ]
        },
        "DataProcessingFlow": {
            "Type": "Flow",
            "Sequence": ["ProcessData","CopyOutputData"]
        }
    }
}

This example contains two jobs: ProcessData, a Spark job, followed by CopyOutputData, an HDFS Commands job. These jobs are contained within a folder named AutomationAPIHadoopSampleFlow. To define the sequence in which the jobs run, use the Flow object.

Notice that in the Spark job we use the "PreCommands" object to clean up output from any previous Spark job runs.

The "SampleConnectionProfile" object is used to define the connection parameters to the Hadoop cluster. Note that for Sqoop and Hive, it is used to set data sources and credentials.

Here is the code of processData.py:

from __future__ import print_function
 
import sys
from pyspark import SparkContext
 
# The input file and output directory are passed as job arguments
inputFile  = sys.argv[1]
outputDir = sys.argv[2]
 
# Count word occurrences in the input file and save the result
sc = SparkContext(appName="processDataSample")
text_file = sc.textFile(inputFile)
counts = text_file.flatMap(lambda line: line.split(" ")) \
             .map(lambda word: (word, 1)) \
             .reduceByKey(lambda a, b: a + b)
 
counts.saveAsTextFile(outputDir)

Step 6 - Modify the Code to Run in Your Environment

You need to modify the path to the samples directory for them to run successfully in your environment. Replace the URI file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/ with the location of the samples you installed on your machine.

"SparkScript": "file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/processData.py",
"Arguments": [
    "file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/processData.py",
    "file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/processDataOutDir"
], 
{ "rm":"-R -f file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/processDataOutDir" }
{"cp" : "file:///home/<USER>/automation-api-quickstart/101-running-hadoop-spark-job-flow/* samplesOut" }

For example: file:///home/user1/automation-api-quickstart/101-running-hadoop-spark-job-flow/

 

In the code sample, replace the value for "TargetAgent" and "Host" with the host name of the machine where you provisioned the Control-M/Agent.

"TargetAgent" : "<HOST>"
"Host" : "<HOST>"

 

Step 7 - Rerun the Sample

Now that we've modified the source code in the previous step, let's rerun the sample:

>ctm run AutomationAPISampleHadoopFlow.json
{
  "runId": "6aef1ce1-3c57-4866-bf45-3a6afc33e27c",
  "statusURI": "https://10.64.107.21:8443/automation-api/run/status/6aef1ce1-3c57-4866-bf45-3a6afc33e27c?token=224926c45a2815504ff12cb119ed4356_93C1D4CC",
  "monitorPageURI": "https://10.64.107.21:8443/SelfService#Workbench:runid=6aef1ce1-3c57-4866-bf45-3a6afc33e27c&title=AutomationAPISampleHadoopFlow.json"
}

Each time the code runs, a new runId is generated. So, let's take the new runId, and check the jobs status again:

>ctm run status "6aef1ce1-3c57-4866-bf45-3a6afc33e27c"
{
  "statuses": [
    {
      "jobId": "workbench:000ca",
      "folderId": "workbench:00000",
      "numberOfRuns": 1,
      "name": "AutomationAPIHadoopSampleFlow",
      "type": "Folder",
      "status": "Ended OK",
      "startTime": "May 24, 2017 1:03:18 PM",
      "endTime": "May 24, 2017 1:03:45 PM",
      "outputURI": "Folder has no output",
      "logURI": "https://10.64.107.21:8443/automation-api/run/job/workbench:000ca/log?token=9d333198364e10b3b2090290c797e1f4_E9F8C76B"
    },
    {
      "jobId": "workbench:000cb",
      "folderId": "workbench:000ca",
      "numberOfRuns": 1,
      "name": "ProcessData",
      "folder": "AutomationAPIHadoopSampleFlow",
      "type": "Job",
      "status": "Ended OK",
      "startTime": "May 24, 2017 1:03:18 PM",
      "endTime": "May 24, 2017 1:03:32 PM",
      "outputURI": "https://10.64.107.21:8443/automation-api/run/job/workbench:000cb/output?token=9d333198364e10b3b2090290c797e1f4_E9F8C76B",
      "logURI": "https://10.64.107.21:8443/automation-api/run/job/workbench:000cb/log?token=9d333198364e10b3b2090290c797e1f4_E9F8C76B"
    },
    {
      "jobId": "workbench:000cc",
      "folderId": "workbench:000ca",
      "numberOfRuns": 1,
      "name": "CopyOutputData",
      "folder": "AutomationAPIHadoopSampleFlow",
      "type": "Job",
      "status": "Ended OK",
      "startTime": "May 24, 2017 1:03:33 PM",
      "endTime": "May 24, 2017 1:03:44 PM",
      "outputURI": "https://10.64.107.21:8443/automation-api/run/job/workbench:000cc/output?token=9d333198364e10b3b2090290c797e1f4_E9F8C76B",
      "logURI": "https://10.64.107.21:8443/automation-api/run/job/workbench:000cc/log?token=9d333198364e10b3b2090290c797e1f4_E9F8C76B"
    }
  ],
  "startIndex": 0,
  "itemsPerPage": 25,
  "total": 3,
  "monitorPageURI": "https://10.64.107.21:8443/SelfService#Workbench:runid=6aef1ce1-3c57-4866-bf45-3a6afc33e27c&title=Status_6aef1ce1-3c57-4866-bf45-3a6afc33e27c"
} 

You can see that the status of both jobs is now "Ended OK".

Let's view the output of "CopyOutputData". We need to use the jobId to get this information.

>ctm run job:output::get workbench:000cc
Environment information:
+--------------------+--------------------------------------------------+
|Account Name        |SampleConnectionProfile                           |
+--------------------+--------------------------------------------------+

Job is running as user: cloudera
-----------------------
Running the following HDFS command:
-----------------------------------
hadoop fs -rm -R -f samplesOut

HDFS command output:
-------------------
Deleted samplesOut
script return value 0
-----------------------------------------------------------
-----------------------------------------------------------

Job is running as user: cloudera
-----------------------
Running the following HDFS command:
-----------------------------------
hadoop fs -mkdir samplesOut

HDFS command output:
-------------------
script return value 0
-----------------------------------------------------------
-----------------------------------------------------------

Job is running as user: cloudera
-----------------------
Running the following HDFS command:
-----------------------------------
hadoop fs -cp file:///home/cloudera/automation-api-quickstart/101-running-hadoop-spark-job-flow/* samplesOut

HDFS command output:
-------------------
script return value 0
-----------------------------------------------------------
-----------------------------------------------------------

Application reports:
--------------------
-> no hadoop application reports were created for the job execution.

Job statistics:
--------------
+-------------------------+-------------------------+
|Start Time               |20170524030335           |
+-------------------------+-------------------------+
|End Time                 |20170524030346           |
+-------------------------+-------------------------+
|Elapsed Time             |1065                     |
+-------------------------+-------------------------+
Exit Message = Normal completion 

Step 8 - Using the Workbench Debugger

When using Control-M Workbench, you can view the run commands in an interactive user interface by adding "--interactive" (or -i for short) at the end of the command.

>ctm run AutomationAPISampleHadoopFlow.json --interactive
{
  "runId": "40586805-60b5-4acb-9f21-a0cf048f1051",
  "statusURI": "https://ec2-54-187-1-168.us-west-2.compute.amazonaws.com:8443/run/status/40586805-60b5-4acb-9f21-a0cf048f1051",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=40586805-60b5-4acb-9f21-a0cf048f1051&title=AutomationAPISampleHadoopFlow.json
}

A browser window opens where you can view and manage your jobs.

 

To learn more about what you can do with the Control-M Automation API, read through the Code Reference and Automation API Services.


In the next tutorial you will learn how to automate code deployments.

Running Script and Command Job Flow

This example walks you through running scripts and commands that run in sequence. You need a Windows 64-bit machine or Linux 64-bit machine that has access to the scripts and programs that you would like to run.

Step 1 - Finding the right image to provision

The provision images command lists the images available to install.

>ctm provision images Linux
[
  "Agent.Linux",
  "ApplicationsAgent.Linux",
  "BigDataAgent.Linux"
]
 
OR
 
>ctm provision images Windows
[
  "Agent.Windows",
  "ApplicationsAgent.Windows"
]

As you can see, there are three available Linux images and two Windows images:

  • Agent.Linux/Agent.Windows - provides the ability to run scripts and commands. 
  • ApplicationsAgent.Linux/ApplicationsAgent.Windows - in addition to Agent.Linux/Agent.Windows, it adds plugins to run file transfer jobs and database SQL scripts.
  • BigDataAgent.Linux - in addition to Agent.Linux, it adds a plugin to run Hadoop and Spark jobs.

In this example, you will provision Agent.Windows or Agent.Linux according to the jobs that you would like to run. 

Step 2 - Provision the image 

>ctm provision install Agent.Windows

OR
 
>ctm provision install Agent.Linux


After provisioning the Agent successfully, you now have a running instance of Control-M/Agent on your host.

Step 3 - Access the tutorial code

Go to the directory where the tutorial sample is located:

>cd automation-api-quickstart/101-running-script-command-job-flow

Step 4 - Verify the Code is Valid for Control-M

Let's take AutomationAPISampleFlow.json, which contains the job definitions, and verify that the code is valid. To do so, use the build command. The following shows the command and a typical successful response.

>ctm build AutomationAPISampleFlow.json
[
  {
    "deploymentFile": "AutomationAPISampleFlow.json",
    "successfulFoldersCount": 0,
    "successfulSmartFoldersCount": 1,
    "successfulSubFoldersCount": 0,
    "successfulJobsCount": 2,
    "successfulConnectionProfilesCount": 0
  }
]

If the code is not valid, an error is returned.

Step 5 - Run the Source Code

Use the run command to run the jobs on the Control-M environment. The returned runId is used to check the job status. The following shows the command and a typical successful response.

>ctm run AutomationAPISampleFlow.json
{

  "runId": "7cba67de-9e0d-409d-8d93-1b8229432eee",
  "statusURI": "https://localhost:8443/automation-api/run/status/7cba67de-9e0d-409d-8d93-1b82294e?token=4f8684ec6754e08cc70f95b5f09d3a47_A1FD0E65",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=7cba67de-9e0d-409d-8d93-29432eee&title=AutomationAPISampleHadoopFlow.json"
}

This code ran successfully and returned the runId of "7cba67de-9e0d-409d-8d93-1b8229432eee".

Step 6 - Check the Job Status Using the runId

The following command shows how to check job status using the runId. Note that when there is more than one job in the flow, the status of all the jobs are checked and returned.

>ctm run status "7cba67de-9e0d-409d-8d93-1b8229432eee"
{
  "statuses": [
    {
      "jobId": "workbench:00007",
      "folderId": "workbench:00000",
      "numberOfRuns": 1,
      "name": "AutomationAPISampleFlow",
      "type": "Folder",
      "status": "Executing",
      "startTime": "Apr 26, 2017 10:43:47 AM",
      "endTime": "",
      "outputURI": "Folder has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:00007/log?token=01ab65917bc71dbef610806dd9cb3f94_0007C46B"
    },
    {
      "jobId": "workbench:00008",
      "folderId": "workbench:00007",
      "numberOfRuns": 0,
      "name": "CommandJob",
      "folder": "AutomationAPISampleFlow",
      "type": "Command",
      "status": "Wait Host",
      "startTime": "",
      "endTime": "",
      "outputURI": "Job did not run, it has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:00008/log?token=01ab65917bc71dbef610806dd9cb3f94_0007C46B"
    },
    {
      "jobId": "workbench:00009",
      "folderId": "workbench:00007",
      "numberOfRuns": 0,
      "name": "ScriptJob",
      "folder": "AutomationAPISampleFlow",
      "type": "Job",
      "status": "Wait Condition",
      "startTime": "",
      "endTime": "",
      "outputURI": "Job did not run, it has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:00009/log?token=01ab65917bc71dbef610806dd9cb3f94_0007C46B"
    }
  ],
  "startIndex": 0,
  "itemsPerPage": 25,
  "total": 3,
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=7cba67de-9e0d-409d-8d93-1b8229432eee&title=Status_7cba67de-9e0d-409d-8d93-1b8229432eee"

Step 7 - Examine the Source Code

Let's look inside the following source code. You'll learn about the structure of the job flow and what it should contain.

{
    "Defaults" : {
        "Application" : "SampleApp",
        "SubApplication" : "SampleSubApp",
        "RunAs" : "<USERNAME>",
        "Host" : "<HOST>",
        "Job": {
            "When" : {
                "Months": ["JAN", "OCT", "DEC"],
                "MonthDays":["22","1","11"],
                "WeekDays":["MON","TUE", "WED", "THU", "FRI"],
                "FromTime":"0300",
                "ToTime":"2100"
            },
            "ActionIfFailure" : {
                "Type": "If",       
                "CompletionStatus": "NOTOK",
                
                "mailToTeam": {
                    "Type": "Mail",
                    "Message": "%%JOBNAME failed",
                    "To": "team@mycomp.com"
                }
            }
        }
    },
    "AutomationAPISampleFlow": {
        "Type": "Folder",
        "Comment" : "Code reviewed by John",
        "CommandJob": {
            "Type": "Job:Command",
            "Command": "<COMMAND>"
        },
        "ScriptJob": {
            "Type": "Job:Script",
          	"FilePath":"<SCRIPT_PATH>",
          	"FileName":"<SCRIPT_NAME>"
        },
        "Flow": {
            "Type": "Flow",
            "Sequence": ["CommandJob", "ScriptJob"]
        }
    }
}

The first object is called "Defaults". It allows you to define a parameter once for all objects. For example, it includes scheduling using the When parameter which configures all jobs to run according to the same scheduling criteria. The "ActionIfFailure" object determines what action is taken if a job ends unsuccessfully.

This example contains two jobs: CommandJob and ScriptJob. These jobs are contained within a folder named AutomationAPISampleFlow. To define the sequence of job execution, use the Flow object.

Step 8 - Modify the Code to Run in Your Environment

In the code above, the following parameters need to be set to run the jobs in your environment: 

"RunAs" : "<USERNAME>" 
"Host" : "<HOST>"

"Command": "<COMMAND>"
"FilePath":"<SCRIPT_PATH>"
"FileName":"<SCRIPT_NAME>"

RunAs identifies the operating system user that will execute the jobs.

Host defines the machine where you provisioned the Control-M/Agent. 

Command defines the command to run according to your operating system.

FilePath and FileName define the location and name of the file that contains the script to run.

NOTE: In JSON, the backslash character must be doubled (\\) when used in a Windows file path.
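
For example, a hypothetical Windows script location would be written with doubled backslashes:

"FilePath": "C:\\Users\\user1\\scripts",
"FileName": "myscript.bat"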

Step 9 - Rerun the Code Sample

Now that we've modified the source code in the previous step, let's rerun the sample:

>ctm run AutomationAPISampleFlow.json
{
  "runId": "ed40f73e-fb7a-4f07-a71c-bc2dfbc48494",
  "statusURI": "https://localhost:8443/automation-api/run/status/ed40f73e-fb7a-4f07-a71c-bc2dfbc48494?token=460e0106b369a0d155bb0e7cbb44f8eb_7E6C03FA",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=ed40f73e-fb7a-4f07-a71c-bc2dfbc48494&title=AutomationAPISampleFlow.json"
}

 

Each time you run the code, a new runId is generated. Let's take the new runId, and check the jobs statuses again:

>ctm run status "ed40f73e-fb7a-4f07-a71c-bc2dfbc48494"
{
  "statuses": [
    {
      "jobId": "workbench:0000p",
      "folderId": "workbench:00000",
      "numberOfRuns": 1,
      "name": "AutomationAPISampleFlow",
      "type": "Folder",
      "status": "Ended OK",
      "startTime": "May 3, 2017 4:57:25 PM",
      "endTime": "May 3, 2017 4:57:28 PM",
      "outputURI": "Folder has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:0000p/log?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9"
    },
    {
      "jobId": "workbench:0000q",
      "folderId": "workbench:0000p",
      "numberOfRuns": 1,
      "name": "CommandJob",
      "folder": "AutomationAPISampleFlow",
      "type": "Command",
      "status": "Ended OK",
      "startTime": "May 3, 2017 4:57:26 PM",
      "endTime": "May 3, 2017 4:57:26 PM",
      "outputURI": "https://localhost:8443/automation-api/run/job/workbench:0000q/output?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:0000q/log?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9"
    },
    {
      "jobId": "workbench:0000r",
      "folderId": "workbench:0000p",
      "numberOfRuns": 1,
      "name": "ScriptJob",
      "folder": "AutomationAPISampleFlow",
      "type": "Job",
      "status": "Ended OK",
      "startTime": "May 3, 2017 4:57:27 PM",
      "endTime": "May 3, 2017 4:57:27 PM",
      "outputURI": "https://localhost:8443/automation-api/run/job/workbench:0000r/output?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:0000r/log?token=a8d74f5914dc6decdfd8b2ec833d54cc_3E30FFC9"
    }
  ],
  "startIndex": 0,
  "itemsPerPage": 25,
  "total": 3,
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=ed40f73e-fb7a-4f07-a71c-bc2dfbc48494&title=Status_ed40f73e-fb7a-4f07-a71c-bc2dfbc48494"
}

You can now see that both jobs Ended OK.

Let's view the output of CommandJob. We need to use the jobId to get this information.

>ctm run job:output::get "workbench:0000q"

Verify the output contains your script or command details.

Step 10 - Using the Workbench Debugger

When using Control-M Workbench, you can view the run commands in an interactive user interface by adding "--interactive" (or -i for short) at the end of the command.

>ctm run AutomationAPISampleFlow.json --interactive
{
  "runId": "40586805-60b5-4acb-9f21-a0cf048f1051",
  "statusURI": "https://ec2-54-187-1-168.us-west-2.compute.amazonaws.com:8443/run/status/40586805-60b5-4acb-9f21-a0cf048f1051",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=40586805-60b5-4acb-9f21-a0cf048f1051&title=AutomationAPISampleFlow.json
}

A browser window opens where you can view and manage your jobs.

 

To learn more about what you can do with the Control-M Automation API, read through the Code Reference and Automation API Services.

 

In the next tutorial you will learn how to automate code deployments.

Running File Transfer and Database Queries Job Flow

This example walks you through running file transfer and database query jobs that run in sequence. To complete this tutorial, you need a PostgreSQL database (or another database) and an SFTP server. For this example, you need to install the Agent on a machine that has a network connection to these servers.

Step 1 - Finding the right image to provision

The provision images command lists the images available to install.

>ctm provision images Linux
[
  "Agent.Linux",
  "ApplicationsAgent.Linux",
  "BigDataAgent.Linux"
]
 
OR
 
>ctm provision images Windows
[
  "Agent.Windows",
  "ApplicationsAgent.Windows"
]

 

As you can see, there are three available Linux images and two Windows images:

  • Agent.Linux/Agent.Windows - provides the ability to run scripts and commands. 
  • ApplicationsAgent.Linux/ApplicationsAgent.Windows - in addition to Agent.Linux/Agent.Windows, it adds plugins to run file transfer jobs and database SQL scripts.
  • BigDataAgent.Linux - in addition to Agent.Linux, it adds a plugin to run Hadoop and Spark jobs.

In this example, you will provision ApplicationsAgent.Windows or ApplicationsAgent.Linux according to the machine that you will use to run the jobs. 

Step 2 - Provision the image 

>ctm provision install ApplicationsAgent.Windows
 
OR
 
>ctm provision install ApplicationsAgent.Linux


After provisioning the Agent successfully, you now have a running instance of Control-M/Agent on your host.

Step 3 - Access the tutorial samples

Go to the directory where the tutorial sample is located:

>cd automation-api-quickstart/101-running-file-transfer-and-database-query-job-flow

Step 4 - Verify the Code is Valid for Control-M

Let's take AutomationAPIFileTransferDatabaseSampleFlow.json, which contains the job definitions, and verify that the code is valid. To do so, use the build command. The following shows the command and a typical successful response.

>ctm build AutomationAPIFileTransferDatabaseSampleFlow.json
[
  {
    "deploymentFile": "AutomationAPIFileTransferDatabaseSampleFlow.json",
    "successfulFoldersCount": 0,
    "successfulSmartFoldersCount": 1,
    "successfulSubFoldersCount": 0,
    "successfulJobsCount": 2,
    "successfulConnectionProfilesCount": 3,
    "successfulDriversCount": 0
  }
]

If the code is not valid, an error is returned.

Step 5 - Examine the Source Code

Let's look inside the following source code. You'll learn about the structure of the job flow and what it should contain.

{
    "Defaults" : {
        "Application" : "SampleApp",
        "SubApplication" : "SampleSubApp",
        "Host" : "<HOST>",
        "TargetAgent" : "<HOST>",
                                
        "Variables": [
           {"DestDataFile": "<DESTINATION_FILE>"},
           {"SrcDataFile":  "<SOURCE_FILE>"}
        ],
                                
        "When" : {
            "FromTime":"0300",
            "ToTime":"2100"
        }
    },
    "SFTP-CP": {
        "Type": "ConnectionProfile:FileTransfer:SFTP",
        "HostName": "<SFTP_SERVER>",
        "Port": "22",
        "User" : "<SFTP_USER>",
        "Password" : "<SFTP_PASSWORD>"
    },
    "Local-CP" : {
        "Type" : "ConnectionProfile:FileTransfer:Local",
        "User" : "<USER>",
        "Password" : "<PASSWORD>"
    },
    "DB-CP": {
        "Type": "ConnectionProfile:Database:PostgreSQL",
        "Host": "<DATABASE_SERVER>",
        "Port":"5432",
        "User": "<DATABASE_USER>",
        "Password": "<DATABASE_PASSWORD>",
        "DatabaseName": "postgres"
    },
    "AutomationAPIFileTransferDatabaseSampleFlow": {
        "Type": "Folder",
        "Comment" : "Code reviewed by John",
        "GetData": {
            "Type" : "Job:FileTransfer",
            "ConnectionProfileSrc" : "SFTP-CP",
            "ConnectionProfileDest" : "Local-CP",
                                
            "FileTransfers" :
            [
                {
                    "Src" : "%%SrcDataFile",
                    "Dest": "%%DestDataFile",
                    "TransferOption": "SrcToDest",
                    "TransferType": "Binary",
                    "PreCommandDest": {
                        "action": "rm",
                        "arg1": "%%DestDataFile"
                    },
                    "PostCommandDest": {
                        "action": "chmod",
                        "arg1": "700",
                        "arg2": "%%DestDataFile"
                    }
                }
            ]
        },
        "UpdateRecords": {
            "Type": "Job:Database:SQLScript",
            "SQLScript": "/home/<USER>/automation-api-quickstart/101-running-file-transfer-and-database-query-job-flow/processRecords.sql",
            "ConnectionProfile": "DB-CP"
        },
        "Flow": {
            "Type": "Flow",
            "Sequence": ["GetData", "UpdateRecords"]
        }
    }
}

The first object is called "Defaults". It allows you to define a parameter once for all objects. For example, it includes scheduling using the When parameter which configures all jobs to run according to the same scheduling criteria. The Defaults also includes Variables that are referenced several times in the jobs. 

The sample contains two jobs: GetData and UpdateRecords. GetData transfers files from the SFTP server to the host machine. UpdateRecords performs a SQL query on the database. They are contained within a folder named AutomationAPIFileTransferDatabaseSampleFlow. To define the sequence of job execution, use the Flow object.

The sample also includes the following three connection profiles:

  • SFTP-CP is used to define access and security credentials for the SFTP server
  • DB-CP is used to define access and security credentials for the database
  • Local-CP is used to define access and security credentials for files that are transferred to the local machine

Step 6 - Modify the Code to Run in Your Environment

In the code sample, replace the value for "TargetAgent" and "Host" with the host name of the machine where you provisioned the Control-M/Agent.

"TargetAgent" : "<HOST>"
"Host" : "<HOST>"

 

Replace the value of "SrcDataFile" with the file that is transferred from the SFTP server and the value of "DestDataFile" with the path of the transferred file on the host machine.

{"DestDataFile": "<DESTINATION_FILE>"},
{"SrcDataFile":  "<SOURCE_FILE>"}

 

You need to modify the path to the samples directory for them to run successfully in your environment. Replace the path /home/<USER>/automation-api-quickstart/101-running-file-transfer-and-database-query-job-flow with the location of the samples you installed on your machine.

"SQLScript": "/home/<USER>/automation-api-quickstart/101-running-file-transfer-and-database-query-job-flow/processRecords.sql"

 

Replace these parameters with the credentials used to log in to the SFTP server.

        "HostName": "<SFTP_SERVER>",
        "User" : "<SFTP_USER>",
        "Password" : "<SFTP_PASSWORD>"

 

Replace these parameters with the credentials used to access the database server. 

        "Host": "<DATABASE_SERVER>",
        "Port":"5432",
        "User": "<DATABASE_USER>",
        "Password": "<DATABASE_PASSWORD>",

Replace these parameters with the credentials used to read and write files on the host machine.

   "Local-CP" : {
        "Type" : "ConnectionProfile:FileTransfer:Local",
        "User" : "<USER>",
        "Password" : ""
    }


Step 7 - Rerun the Code Sample

Now that we've modified the source code in the previous step, let's rerun the sample:

>ctm run AutomationAPIFileTransferDatabaseSampleFlow.json
{
  "runId": "ce62ace0-4a6e-4b17-afdd-35335cbf179e",
  "statusURI": "https://localhost:8443/automation-api/run/status/ce62ace0-4a6e-4b17-afdd-35335cbf179e?token=737a87efc43805ecf30263fb2863bea5_2E8C3C6C",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=ce62ace0-4a6e-4b17-afdd-35335cbf179e&title= AutomationAPIFileTransferDatabaseSampleFlow.json"
}

 

Each time you run the code, a new runId is generated. Let's take the new runId, and check the jobs statuses again:

>ctm run status "ce62ace0-4a6e-4b17-afdd-35335cbf179e"
{
  "statuses": [
    {
      "jobId": "workbench:000c1",
      "folderId": "workbench:00000",
      "numberOfRuns": 1,
      "name": "AutomationAPIFileTransferDatabaseSampleFlow",
      "type": "Folder",
      "status": "Ended OK",
      "startTime": "May 23, 2017 4:25:10 PM",
      "endTime": "May 23, 2017 4:25:26 PM",
      "outputURI": "Folder has no output",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:000c1/log?token=aacbcfb1d694a81d63405646b5790532_DFB83CA2"
    },
    {
      "jobId": "workbench:000c2",
      "folderId": "workbench:000c1",
      "numberOfRuns": 1,
      "name": "GetData",
      "folder": "AutomationAPIFileTransferDatabaseSampleFlow",
      "type": "Job",
      "status": "Ended OK",
      "startTime": "May 23, 2017 4:25:10 PM",
      "endTime": "May 23, 2017 4:25:17 PM",
      "outputURI": "https://localhost:8443/automation-api/run/job/workbench:000c2/output?token=aacbcfb1d694a81d63405646b5790532_DFB83CA2",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:000c2/log?token=aacbcfb1d694a81d63405646b5790532_DFB83CA2"
    },
    {
      "jobId": "workbench:000c3",
      "folderId": "workbench:000c1",
      "numberOfRuns": 1,
      "name": "UpdateRecords",
      "folder": "AutomationAPIFileTransferDatabaseSampleFlow",
      "type": "Job",
      "status": "Ended OK",
      "startTime": "May 23, 2017 4:25:18 PM",
      "endTime": "May 23, 2017 4:25:25 PM",
      "outputURI": "https://localhost:8443/automation-api/run/job/workbench:000c3/output?token=aacbcfb1d694a81d63405646b5790532_DFB83CA2",
      "logURI": "https://localhost:8443/automation-api/run/job/workbench:000c3/log?token=aacbcfb1d694a81d63405646b5790532_DFB83CA2"
    }
  ],
  "startIndex": 0,
  "itemsPerPage": 25,
  "total": 3,
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=ce62ace0-4a6e-4b17-afdd-35335cbf179e&title=Status_ce62ace0-4a6e-4b17-afdd-35335cbf179e"
}

You can now see that both jobs Ended OK.

Let's view the output of GetData. We need to use the jobId to get this information.

>ctm run job:output::get "workbench:000c2"
+ Job started at '0523 16:25:15:884' orderno - '000c2' runno - '00001' Number of transfers - 1
+ Host1 XXXXX' username XXXX - Host2 'localhost' username XXXX
Local host is XXX
Connection to SFTP server on host XXX was established
Connection to Local server on host localhost was established
+********** Starting transfer #1 out of 1**********
* Executing pre-commands on host localhost
rm c:\temp\XXXX
File 'c:\temp\XXX removed successfully
Transfer type: BINARY
Open data connection to retrieve file /home/user/XXX
Open data connection to store file c:\temp\XXX
Transfer #1 transferring
Src file: '/ home/user/XXX ' on host 'XXXX'
Dst file: 'c:\temp\XXX on host 'localhost'
Transferred:          628       Elapsed:    0 sec       Percent: 100    Status: In Progress
File transfer status: Ended OK
Destination file size vs. source file size validation passed
* Executing post-commands on host localhost
chmod 700 c:\temp\XXX
Transfer #1 completed successfully
Job executed successfully. exiting.
Job ended at '0523 16:25:16:837'
Elapsed time [0 sec]

 

Let's view the output of UpdateRecords. We need to use the jobId to get this information.

ctm run job:output::get "workbench:000c6"
Environment information:
+--------------------+--------------------------------------------------+
|Account Name        |DB-CP                                             |
+--------------------+--------------------------------------------------+
|Database Vendor     |PostgreSQL                                        |
+--------------------+--------------------------------------------------+
|Database Version    |9.2.8                                             |
+--------------------+--------------------------------------------------+

Request statement:
------------------
select 'Parameter';

Job statistics:
+-------------------------+-------------------------+
|Start Time               |20170523163619           |
+-------------------------+-------------------------+
|End Time                 |20170523163619           |
+-------------------------+-------------------------+
|Elapsed Time             |13                       |
+-------------------------+-------------------------+
|Number Of Affected Rows  |1                        |
+-------------------------+-------------------------+
Exit Code    = 0
Exit Message = Normal completion


Step 8 - Using the Workbench Debugger

When using Control-M Workbench, you can view the run commands in an interactive user interface by adding "--interactive" (or -i for short) at the end of the command.

>ctm run AutomationAPIFileTransferDatabaseSampleFlow.json --interactive
{
  "runId": "ce62ace0-4a6e-4b17-afdd-35335cbf179e",
  "statusURI": "https://localhost:8443/automation-api/run/status/ce62ace0-4a6e-4b17-afdd-35335cbf179e?token=737a87efc43805ecf30263fb2863bea5_2E8C3C6C",
  "monitorPageURI": "https://localhost:8443/SelfService#Workbench:runid=ce62ace0-4a6e-4b17-afdd-35335cbf179e&title= AutomationAPIFileTransferDatabaseSampleFlow.json"
}

A browser window opens where you can view and manage your jobs.


To learn more about what you can do with the Control-M Automation API, read through the Code Reference and Automation API Services.


In the next tutorial you will learn how to automate code deployments.

 

Tutorial - How to Automate Code Deployment

This tutorial teaches you how to use a script to automate DevOps tasks. In order to complete this tutorial, you need a valid Control-M endPoint, username and password.

Step 1 - Setting the Control-M Environment

The first task when starting to work with Control-M Automation API is to configure the Control-M environment you are going to use. An environment is a combination of an endPoint, username, and password.

An endPoint looks like the following: 

https://<controlmEndPoint>:8443/automation-api

Let's add an environment and name it ciEnvironment.

The command below shows you how to do this and the response:

>ctm environment add ciEnvironment "https://<controlmEndPoint>:8443/automation-api" "[ControlmUser]" "[ControlmPassword]"
 
info:    Environment 'ciEnvironment' was created
info:    ciEnvironment:
{"endPoint":"https://<controlmEndPoint>:8443/automation-api","user":"[ControlmUser]"}

You can also deploy to a Workbench environment. The endPoint, user, and password can also point to the Workbench, using localhost as the host and workbench as both the username and the password.
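
For example, assuming a locally installed Workbench listening on the default port, the environment could be added as follows:

>ctm environment add ciEnvironment "https://localhost:8443/automation-api" "workbench" "workbench"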

Step 2 - Access the tutorial samples

Go to the directory where the tutorial sample is located:

>cd automation-api-quickstart/101-create-first-job-flow

Step 3 - Deploy to the ciEnvironment

You can use the following command to deploy the code to a specific environment. The "-e" is used to specify the destination environment.

>ctm deploy AutomationAPISampleFlow.json -e ciEnvironment
[
  {
    "deploymentFile": "AutomationAPISampleFlow.json",
    "successfulFoldersCount": 0,
    "successfulSmartFoldersCount": 1,
    "successfulSubFoldersCount": 0,
    "successfulJobsCount": 2,
    "successfulConnectionProfilesCount": 0,
    "successfulDriversCount": 0,
    "deployedFolders": [
      "AutomationAPISampleFlow"
    ]
  }
]

Step 4 - Automate Deploy

Let's automate the deployment of Control-M object definitions from the source directory to the ciEnvironment.

 

#!/bin/bash
# Deploy every JSON definition file in the current directory to ciEnvironment
for f in *.json; do
  echo "Deploying file $f"
  ctm deploy "$f" -e ciEnvironment
done

 

This code can be used in Jenkins to push Git changes to Control-M.

Step 5 - Automate Deploy with Python Script

You can automate the deployment of Control-M object definitions from the source directory to the ciEnvironment with a Python script using the REST API.

import sys

import requests  # pip install requests if you don't have it already
  
endPoint = 'https://<controlmEndPoint>:8443/automation-api'
  
user = '<ControlMUser>'
passwd = '<ControlMPassword>'
  
# -----------------
# Log in and obtain an API token
r_login = requests.post(endPoint + '/session/login', json={"username": user, "password": passwd})
print(r_login.content)
print(r_login.status_code)
if r_login.status_code != requests.codes.ok:
    sys.exit(1)
  
token = r_login.json()['token']
  
# -----------------
# Deploy the job definitions file (note the doubled backslashes in the Windows path)
uploaded_files = [
        ('definitionsFile', ('Jobs.json', open('c:\\src\\ctmdk\\Jobs.json', 'rb'), 'application/json'))
]
  
r = requests.post(endPoint + '/deploy', files=uploaded_files, headers={'Authorization': 'Bearer ' + token})
  
print(r.content)
print(r.status_code)
  
# Exit with 0 on success, 1 on failure
sys.exit(0 if r.status_code == requests.codes.ok else 1)

 

Step 6 - Learn More about Writing Code

The next step is to read through the Code Reference or about the available Automation API Services. You'll learn much more about what you can do with the Control-M Automation API.


 

Tutorial -  Building a docker container for batch applications

Jobs in Control-M have a host attribute, either a host where the application and Control-M/Agent reside or a host group, a logical name for a collection of hosts.

Control-M/Agents reside on the application hosts. You can group them into a host group. Specifying a job to run on a host group, causes Control-M/Server to balance the load by directing jobs to the various hosts in the host group. 

"job1": {
    "Type": "Job:Command",
    "Host": "application_hostgroup",
    "Command": "/home/user1/scripts/my_program.py",
    "RunAs": "user1"
}

The container provided includes a Control-M/Agent. When running the container instance, the Control-M/Agent self-registers and is added to the host group. 

When a job is defined to run on the host group, it will wait for at least one registered container in the host group before it starts to run inside the container instance. 

Stopping the container unregisters the Control-M/Agent and removes it from the host group. 

To start the tutorial, you need a Linux account where docker is installed. You also need a valid Control-M endpoint, username and password.

Step 1 - Building the container image

Type the following command to fetch the docker example from the github repository: 

 

> git clone https://github.com/controlm/automation-api-quickstart.git
> cd automation-api-quickstart/102-build-docker-containers-for-batch-application/centos7-agent/

Review the Dockerfile to see how the provisioning of a Control-M/Agent within a docker container is set up.

Type the following commands on the machine to build a docker container image of Control-M, filling in a valid Control-M endpoint, username, and password:

SRC_DIR=.
CONTROLM_HOST=<controlmEndPoint>
CTM_USER=<ControlmUser>
CTM_PASSWORD=<ControlmPassword>
sudo docker build --tag=controlm \
--build-arg CTMHOST=$CONTROLM_HOST \
--build-arg USER=$CTM_USER \
--build-arg PASSWORD=$CTM_PASSWORD $SRC_DIR

Step 2 - Running a container instance 

Type the following commands where:

CTM_SERVER - Control-M/Server name. You can use ctm conf servers::get to find the accurate name.

CTM_HOSTGROUP - A host group name. The container adds itself to this host group. The name should be without spaces.

CTM_AGENT_PORT - Control-M/Agent port number. This port should be free on the docker host.

CTM_SERVER=<control-m server>
CTM_HOSTGROUP=<application_hostgroup>
CTM_AGENT_PORT=<port number>
sudo docker run --net host \
  -e CTM_SERVER=$CTM_SERVER \
  -e CTM_HOSTGROUP=$CTM_HOSTGROUP \
  -e CTM_AGENT_PORT=$CTM_AGENT_PORT -dt controlm

Review the run_register_controlm.sh script to see how the Control-M/Agent is registered to the host group. 

Step 3 - Viewing self-registered container instances in Control-M

This command allows you to see that an agent with the name <docker host>:AGENT_PORT was added to the list of agents of the Control-M/Server.

sudo docker run -i -t controlm ctm config server:agents::get $CTM_SERVER

This command allows you to see that an agent with the name <docker host>:AGENT_PORT was added to the host group.

sudo docker run -i -t controlm ctm config server:hostgroup:agents::get $CTM_SERVER $CTM_HOSTGROUP

Step 4 - Running a simple job flow in the docker container  

View the JobsRunOnDockerSample.json file to see the job definitions that will run inside the container.

Edit the file ../JobsRunOnDockerSample.json and replace "Host": "<HOSTGROUP>" with the host group name created in Step 2.

Run the jobs in the docker container:

> sudo docker run -v $PWD:/src -i -t controlm ctm run /src/JobsRunOnDockerSample.json

This command enables you to inspect how the jobs run inside the container:

> sudo docker run -i -t controlm ctm run status <runid>

Step 5 - Decommissioning the container instance

To unregister the container's Control-M/Agent from the host group and Control-M and remove the container, type the following commands:

sudo docker exec -i -t <docker id> /home/controlm/decommission_controlm.sh
sudo docker stop <docker id>

© Copyright 1999-2017 BMC Software, Inc.

Legal notices
