Modifying a Hadoop environment

This topic provides instructions to modify a configured Hadoop environment.

To edit a Hadoop environment

  1. From the PATROL Console work area, right-click the Hadoop Environment application instance that you want to edit, and select the KM Commands > Edit Hadoop Environment menu command. This command opens the Register Hadoop environment dialog box, prefilled with the environment name.
  2. Enter the information as follows: 
    1. Environment name - Enter a logical, unique name for the environment. A container is created with the environment name, and it holds the Hadoop components. For example, if you add Hadoop NameNode details, all the Hadoop NameNode components and their related data appear under this environment. BMC recommends that you use only alphanumeric characters in the environment name. 

      Valid values: a-z, A-Z, 0-9

      Invalid values: # ' . | ? \ " [ ] + = &
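
      For example, a minimal standalone check of an environment name against this character set (a hypothetical helper, not part of the KM) could look like the following Java sketch:

        // Hypothetical helper: accepts only the characters recommended for an
        // environment name (a-z, A-Z, 0-9).
        public class EnvironmentNameCheck {
            static boolean isValidEnvironmentName(String name) {
                return name != null && !name.isEmpty() && name.matches("[A-Za-z0-9]+");
            }

            public static void main(String[] args) {
                System.out.println(isValidEnvironmentName("HadoopProd01")); // true
                System.out.println(isValidEnvironmentName("hadoop#prod"));  // false - contains '#'
            }
        }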

    2. Hadoop connection details:
      1. Hadoop Vendor - Select the monitored Hadoop vendor. The supported vendors are:
        • Apache Hadoop
        • IBM BigInsights
        • MapR
        • Cloudera
        • Hortonworks
      2. Hadoop Host - Enter a unique, user-defined host name or an IP address.
      3. Hadoop Port - Enter the Hadoop port number. This port number depends on the component that you want to monitor; for example, to monitor the ResourceManager, enter the ResourceManager port number. The default port number is the NameNode port, 50070.
      4. Hadoop Authentication - Select one of the following authentication types:
          1. None
          2. Basic
          3. Kerberos
      5. Hadoop User - Enter the user name to connect to the Hadoop server.
      6. Hadoop Password - Enter the password required to connect to the Hadoop server.
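
      Before you save the configuration, you can optionally confirm that the host, port, and credentials entered above are reachable from the machine that runs the collector. The following Java sketch is only an illustration: the host, port, user, and password are placeholders, and it assumes that the monitored Hadoop daemon exposes the standard /jmx HTTP servlet and that Basic authentication is selected.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.Base64;

        // Illustrative sketch only: checks that a Hadoop daemon answers on the
        // configured host and port through its standard /jmx servlet.
        public class HadoopConnectionCheck {
            public static void main(String[] args) throws Exception {
                String host = "namenode.example.com";   // placeholder for Hadoop Host
                int port = 50070;                       // placeholder for Hadoop Port (NameNode default)
                String user = "hadoopmonitor";          // placeholder for Hadoop User
                String password = "secret";             // placeholder for Hadoop Password

                URL url = new URL("http://" + host + ":" + port + "/jmx");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                // Send this header only when Hadoop Authentication is set to Basic.
                // Note: java.util.Base64 requires Java 8 or later.
                String token = Base64.getEncoder()
                        .encodeToString((user + ":" + password).getBytes("UTF-8"));
                conn.setRequestProperty("Authorization", "Basic " + token);
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);

                System.out.println("HTTP response code: " + conn.getResponseCode());
                try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                    System.out.println("First line of /jmx output: " + in.readLine());
                }
            }
        }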
    3. Kerberos authentication details
      1. krb5 file path - Enter the Kerberos file path.
        The krb5.conf file contains Kerberos configuration information, including the default Kerberos realm, the locations of the Kerberos key distribution centers (KDCs) for known realms, and mappings of host names to Kerberos realms. 

        Note

        The following are the steps required to monitor Hadoop with Kerberos authentication:

        1. Select Kerberos in the Hadoop Authentication section.
        2. Enter the Kerberos user name and password; these fields are required.
        3. Enter the full path of your Hadoop krb5.conf file.

        From PATROL Configuration Manager (PCM), you need to push the following pconfig variables:

        "/BTQ/HADOOP/<<environment name>>/urlAuthTypeNum" = { REPLACE = "3" },
        "/BTQ/HADOOP/<<environment name>>/krb5File" = { REPLACE = "<<path to the Hadoop URL Kerberos authentication file (krb5.conf), for example: c:\\test\\krb5.conf>>" }

    4. HTTPS secure details
      1. Secure (HTTPS) - Select this check box to use the HTTPS protocol for the connection. By default, the HTTP protocol is used.
      2. Trust Store - If you choose to use the HTTPS protocol, you must provide the location of the Java KeyStore (JKS) file for a successful connection to the URL in secure mode.
      3. Trust Password - Enter the password for the JKS file.
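
      If you are unsure whether the JKS file and its password are valid, you can test them outside the KM with a small standalone program. The following Java sketch is only an illustration; the truststore path, password, host, and port are placeholders, and it simply points the standard JSSE system properties at the truststore before opening the secure URL.

        import java.net.URL;
        import javax.net.ssl.HttpsURLConnection;

        // Illustrative sketch only: confirms that the Java KeyStore (JKS) file and
        // its password allow an HTTPS connection to the monitored Hadoop URL.
        public class TrustStoreCheck {
            public static void main(String[] args) throws Exception {
                // Placeholders for the Trust Store and Trust Password fields.
                System.setProperty("javax.net.ssl.trustStore", "C:\\security\\hadoop-truststore.jks");
                System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

                // Placeholder host and HTTPS port (50470 is the NameNode HTTPS default).
                URL url = new URL("https://namenode.example.com:50470/jmx");
                HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
                conn.setConnectTimeout(5000);
                System.out.println("HTTPS response code: " + conn.getResponseCode());
            }
        }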
    5. Enter the Java collector settings:
      1. JAVA_HOME - Enter the directory path where Java version 1.7 or later is installed. 
      2. Executing user - Enter the local administrator user name. The executing user name and password must have permissions to run Java.
      3. Executing password - Enter the local administrator password.
    6. Enter the Collection details:
      1. Availability - Enables availability monitoring (whether the instance is running or down) for the following instances, if they exist in the environment:
          1. NameNode
          2. SecondaryNameNode
          3. ResourceManager
          4. DataNode
          5. NodeManager
          6. JobHistory
          7. JobTracker
          8. TaskTracker
          9. JournalNode
      2. Full - Enables monitoring of all Hadoop monitored groups, such as UGIMetrics and StartUpProgress.
      3. Custom - Manually select the Hadoop monitored groups to monitor. The monitored groups and the metrics available for each group are listed below; a sketch of querying one of these groups over HTTP follows the table.

        Data Node - Select the list of metrics for monitoring:
        • Metrics System Stats
        • DataNode Activity
        • FS Dataset State
        • JVM Metrics
        • RPC Activity
        • DataNode Info
        • UGI Metrics
        Job Tracker - Select the list of metrics for monitoring:
        • RPC Activity
        • RPC Detailed Activity
        • Metrics System Stats
        • Queue Metrics Default
        • UGI Metrics
        • JVM Metrics
        • Startup Progress
        • JobTracker Metrics
        • JobTracker Info
        Name Node - Select the list of metrics for monitoring:
        • Metrics System Stats
        • NameNode Activity
        • RPC Detailed Activity
        • NameNode Retry Cache
        • FS Namesystem
        • JVM Metrics
        • NameNode Info
        • UGI Metrics
        • RPC Activity
        • FS Namesystem State
        • Startup Progress
        Node Manager - Select the list of metrics for monitoring:
        • RPC Activity
        • JVM Metrics
        • NodeManager Metrics
        • Shuffle Metrics
        • RPC Detailed Activity
        • UGI Metrics
        • Metrics System Stats
        Resource Manager - Select the list of metrics for monitoring:
        • RPC Detailed Activity
        • JVM Metrics
        • Metrics System Stats
        • Cluster Metrics
        • RPC Activity
        • UGI Metrics
        • Queue Metrics
        Secondary Name Node - Select the list of metrics for monitoring:
        • UGI Metrics
        • SecondaryNameNode Info
        • Metrics System Stats
        • JVM Metrics
        • Startup Progress
        Task Tracker - Select the list of metrics for monitoring:
        • Shuffle Server Metrics
        • TaskTracker Metrics
        • Metrics System Stats
        • Startup Progress
        • JVM Metrics
        • TaskTracker Info
        • RPC Activity
        • UGI Metrics

        Note

        Due to PATROL Console limitations, if the Collection details option is set to Custom monitoring, you cannot select the JobHistory and JournalNode groups for monitoring.

        JobHistory - Select the list of metrics for monitoring:
        • Metrics System Stats
        • RPC Activity
        • JVM Metrics
        • UGI Metrics
        JournalNode - Select the list of metrics for monitoring:
        • UGI Metrics
        • Journal
        • Metrics System Stats
        • JVM Metrics
        • RPC Activity
        • RPC Detailed Activity
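
        Each monitored group in the table above generally corresponds to a metrics bean that the Hadoop daemon publishes through its /jmx servlet. The following Java sketch is only an illustration; the host, port, and bean name are placeholders that assume the usual Hadoop 2.x bean naming (for example, NameNode Activity as Hadoop:service=NameNode,name=NameNodeActivity).

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        // Illustrative sketch only: reads one metrics group from the NameNode's
        // /jmx servlet, assuming the usual Hadoop 2.x bean name.
        public class MonitoredGroupQuery {
            public static void main(String[] args) throws Exception {
                String host = "namenode.example.com"; // placeholder for Hadoop Host
                int port = 50070;                     // placeholder (NameNode default)
                String bean = "Hadoop:service=NameNode,name=NameNodeActivity";

                URL url = new URL("http://" + host + ":" + port + "/jmx?qry="
                        + URLEncoder.encode(bean, "UTF-8"));
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line); // JSON document with the group's counters
                    }
                }
            }
        }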
  3. Click OK to save the configuration.
  4. Click Cancel to exit without saving.

Related topics

Register Hadoop environment dialog box

Configuring in Central Monitoring Administration console
