Managing the platform in the BMC AMI AI Manager console

This topic describes how to use the BMC AMI AI Manager console to manage your services. By using the console, you can configure integration and large language model (LLM) settings, download service log files, and check the deployed status of BMC AMI AI Services and LLMs in one place.

 

Before you begin

Before you sign in to the BMC AMI AI Manager console, you must have installed BMC AMI Services. For more information, see Installing.

You can sign in by using the administrator and z/OS PTP credentials, which are defined in the BMC-AMI-AI-Platform-Services.yml file.

For information about the credentials, contact your application developer.

To sign in to the BMC AMI AI Manager console

  1. Access the BMC AMI AI Manager console as follows:
    • If a load balancer or Application Gateway already exists, create a URL in the following format:
      https://domainName/admin, where domainName resolves to the load balancer or Application Gateway.
      For example, if the domain name is amiai.bmc.com, the URL is https://amiai.bmc.com/admin.
    • If you installed BMC AMI AI Services on a stand-alone instance or on zCX/zLinux, create a URL in the following format: http://instanceIPaddress:8000/admin. For example, if the IP address is 192.168.1.10, the URL is http://192.168.1.10:8000/admin.
  2. Enter your admin user name and password specified in the playbook file.
  3. Click Sign in.
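If the console page does not load, you can first confirm that the endpoint is reachable from your workstation. The following is a minimal Python sketch, assuming the stand-alone URL format from step 1; the URL shown is hypothetical, and the relaxed certificate check is only for a quick probe:

    # check_console.py: quick reachability probe for the BMC AMI AI Manager console
    import ssl
    import urllib.request

    CONSOLE_URL = "http://192.168.1.10:8000/admin"  # hypothetical; use your URL from step 1

    context = None
    if CONSOLE_URL.startswith("https"):
        # Tolerate a self-signed certificate for this probe only;
        # keep certificate verification enabled in production.
        context = ssl.create_default_context()
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE

    try:
        with urllib.request.urlopen(CONSOLE_URL, timeout=10, context=context) as response:
            print(f"Console reachable, HTTP status {response.status}")
    except OSError as exc:
        print(f"Console not reachable: {exc}")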

To select and prepare the integrations settings

  1. In a browser, sign in to BMC AMI AI Manager with your credentials. 
  2. From the menu in the left pane, select AI Services Settings > Integrations Settings.
    image-2025-1-28_13-57-14.png
    The Integrations Settings window lists all the integrations available in the deployed BMC AMI AI Services.
  3. From the list in the integration card, select your preferred LLM. 
  4. Click Provision.
    After successful provisioning, an Integration provisioned successfully message is displayed, and the Provision button on the card becomes an Update button.
  5. Click Update to change the LLM to any other recommended LLM in the list.
  6. To export and save the integration’s connection details, click Download Connection Details.
    The Download Connection Details link is displayed only when CES is not required for the integration.

     Clicking this link downloads a file that contains the following connection details (a usage sketch follows this list):

    • integration_name—Name of the integration
    • integration_path—API path used to connect to the integration
    • integration_key—Integration key used for authentication
    • integration_id—Unique ID of the integration
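The documentation lists the fields but not the file format. The following is a minimal Python sketch of how the downloaded details might be used to call the integration; the JSON file format, the file name, the gateway host, and the Bearer-token header are all assumptions, not confirmed product behavior:

    # use_connection_details.py: hypothetical use of the downloaded connection details
    import json
    import urllib.request

    with open("connection_details.json") as f:  # hypothetical file name
        details = json.load(f)

    # Assumptions: the integration is served behind the same domain as the console,
    # and integration_key is accepted as a Bearer token.
    url = "https://amiai.bmc.com" + details["integration_path"]
    request = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {details['integration_key']}"}
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        print(details["integration_name"], response.status)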

If this procedure fails, the user interface displays an error message with the reason for the failure. For more information, see Troubleshooting.

To add new integrations

  1. On the upper left of the Integrations Settings window, click image-2025-1-28_14-39-21.png.
  2. To create an integration, complete the following fields (a field-validation sketch follows this procedure):

    • Integration name—Integration name not exceeding 46 characters. Numbers, spaces, and hyphens are allowed.
    • Product family—Product family not exceeding 50 characters. Numbers, spaces, and hyphens are allowed.
      Important: You can select the Other option from the menu and then enter your own product family.
    • Connected product—Connected product name not exceeding 58 characters. Numbers, spaces, and hyphens are allowed.
    • Description—Description not exceeding 255 characters. Numbers, spaces, and the special characters . , ' " are allowed.
    • Supported LLM—List of LLMs that are available for integration

  3. Click Save.

A success confirmation message is displayed.

If this procedure fails, the user interface displays an error message with the reason for the failure. For more information, see Troubleshooting.
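The length and character limits in the field list lend themselves to a quick pre-check before you type values into the console. The following is an illustrative Python sketch based only on the limits stated above; the helper itself is not part of the product, and it assumes letters are allowed in addition to the characters listed:

    # validate_integration_fields.py: illustrative pre-check of the documented limits
    import re

    LIMITS = {
        "Integration name": 46,
        "Product family": 50,
        "Connected product": 58,
        "Description": 255,
    }

    def validate(field, value):
        """Return a list of problems; an empty list means the value passes."""
        problems = []
        if len(value) > LIMITS[field]:
            problems.append(f"{field} exceeds {LIMITS[field]} characters")
        if field == "Description":
            pattern = "[A-Za-z0-9 .,'\"]*"  # numbers, spaces, and . , ' "
        else:
            pattern = "[A-Za-z0-9 -]*"      # numbers, spaces, and hyphens
        if not re.fullmatch(pattern, value):
            problems.append(f"{field} contains a disallowed character")
        return problems

    print(validate("Integration name", "My Integration-01"))  # -> []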

To view the available LLMs

  1. Sign in to the BMC AMI AI Management Console User Interface with your credentials.
  2. From the menu in the left pane, select AI Services Settings > LLM Settings.
    image-2025-1-28_14-3-35.png

To add a local model

  1. On the upper left of the LLM Settings window, click Add local model.
  2. In the Add local large language model dialog box, select your model type.
  3. Perform the steps for the model type that you selected:

    For the Public API model type, follow these steps:

    1. Select the hosted platform. 
      image-2025-1-28_11-44-34.png
    2. Enter an OpenAI API access key.
    3. Click Test connection.
    4. If a success confirmation message is displayed, click Next>>. (If a failure message is displayed, verify that the provided API access key is valid.)

    For the Self-hosted model type, follow these steps:

    1. Complete the following fields:

      • Base URL—URL of the on-premises LLM server, in the format protocol://hostName:port/v1 (a connection-test sketch follows these steps). The placeholders are defined as follows:
        • protocol—http or https
        • hostName—IP address or host name of the machine to access the on-premises LLM server
        • port—Port where the LLM server is exposed
        • v1—Version of the OpenAI API specification

        Important: For self-hosted setups, supported vLLM versions range from 0.5.5 to 0.6.4.post1.

      • API access key—(Optional) Key to access the LLM server API, if configured in the LLM server

      image-2025-1-28_11-45-52.png

    2. Click Test connection.
    3. If a success confirmation message is displayed, click Next>>. (If a failure message is displayed, verify that the provided details are valid.)
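    You can also verify the endpoint outside the console. vLLM exposes an OpenAI-compatible API that includes a GET /v1/models route, so a minimal Python sketch such as the following lists the models the server reports; the base URL and key are hypothetical (for the Public API model type, the base URL would be https://api.openai.com/v1 instead):

      # test_llm_connection.py: list models on an OpenAI-compatible endpoint
      import json
      import urllib.request

      BASE_URL = "http://llm-server.example.com:8080/v1"  # protocol://hostName:port/v1
      API_KEY = ""  # optional; set only if a key is configured on the LLM server

      request = urllib.request.Request(BASE_URL + "/models")
      if API_KEY:
          request.add_header("Authorization", f"Bearer {API_KEY}")

      with urllib.request.urlopen(request, timeout=30) as response:
          for model in json.load(response).get("data", []):
              print(model["id"])  # model names reported by the server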

  4. On the Step 2: Add details and save tab, complete the following fields with the on-premises LLM server details:

    • Maximum number of tokens—Non-zero positive integer representing the maximum number of tokens configured for the model on the on-premises LLM server
    • Model—Model that you select from the Model details list
      Important: The following models are supported by OpenAI when using the Public API model type:
        • o3-mini-2025-01-31 and later
        • o1-2024-12-17 and later
        • gpt-4o-mini-2024-07-18 and later
        • gpt-4o-2024-08-06 and later
    • Version—None
    • Supported integrations—None
    • Display name—(Optional) Name displayed for the model in BMC AMI AI Management Console
    • Description—Description of the LLM

    image-2025-1-28_12-6-0.png

  5. Click Save.

A success confirmation message is displayed.

If this procedure fails, the user interface displays an error message with the reason for the failure. For more information, see Troubleshooting.

To download the AI Services log files

You can download log files for active services only.

  1. Sign in to the BMC AMI AI Management Console User Interface with your credentials.
  2. From the menu in the left pane, select AI Services Settings > AI Services Log Files.
  3. Select a service in the list to download its log file. Each log file contains 24 hours of log records.
    image-2025-1-28_12-8-31.png
  4. In the Log file date range dialog box, enter the date and click Download.
    image-2025-1-28_12-9-32.png

Log files are generated in the UTC time zone.
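Because the files are generated in UTC, an event that occurs late in the evening in your local time zone can land in the next day's log file. The following is a minimal Python sketch, assuming you want to determine which UTC date to request for a locally observed timestamp:

    # log_date_utc.py: convert a local timestamp to the UTC date used by log files
    from datetime import datetime, timezone

    # Example: an event observed locally at 23:30 on 2025-01-28
    local_event = datetime(2025, 1, 28, 23, 30).astimezone()  # attach local time zone

    utc_date = local_event.astimezone(timezone.utc).date()
    print(f"Request the log file for: {utc_date}")  # can differ from the local date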

To view the status of BMC AMI AI Services

  1. Sign in to the BMC AMI AI Management Console User Interface with your credentials.
  2. From the menu in the left pane, click AI Service Health.
  3. Click a service category name to view information about its status.

The AI Service Health window is displayed. It shows the health status of each service category.
image-2025-1-28_12-11-14.png

The following table describes the service categories: 

  • BMC AMI AI Platform Services—Status of the discovery service, gateway service, and security service
  • BMC AMI AI Integration Services—Status of the integration services available
  • Large Language Models—Status of the deployed LLMs

Each service category displays a summary of its services.

A green check mark confirms that all the services in that category are up and running.
image-2024-9-10_10-7-9.png

To view the following service details, expand a service menu:

  • Name of the service
  • Status of the service (Active/Inactive)
  • Host name of the container running the service
  • Port number of the service

If a service is inactive, the service category status icon displays the number of services that are inactive:
image-2024-9-10_10-34-13.png
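Because the expanded details include each service's host name and port number, you can cross-check an Inactive status from outside the console. The following is a minimal Python sketch, assuming the host and port are reachable from your workstation; the values shown are hypothetical placeholders for what the console displays:

    # check_service_port.py: quick TCP probe of a service's host and port
    import socket

    SERVICES = [
        ("gateway-service-host", 8000),  # hypothetical host/port from the console
    ]

    for host, port in SERVICES:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} is accepting connections")
        except OSError as exc:
            print(f"{host}:{port} is not reachable: {exc}")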

 

 
