Deploying BMC AMI Platform


This topic describes how to download and deploy BMC AMI Platform.

Before you begin

Make sure you've reviewed the System requirements topic before you begin this process.

Downloading BMC AMI Platform from EPD

  1. Log in to Electronic Product Distribution - BMC Mainframe.
    Important

    Access to the EPD site depends on your company’s license entitlements.

  2. Select All/Trial Products.
  3. Search for and select BMC AMI Platform, and then download the BMC-AMI-PLATFORM-2.0.00.zip file.

To deploy BMC AMI Platform on Kubernetes

  1. Log in to the primary manager node of the Kubernetes cluster as a user who has sufficient permissions to run kubectl commands, and copy the downloaded zip file to your preferred directory, such as /home.
  2. Extract all deployment files into this directory by using the following command:
    unzip BMC-AMI-PLATFORM-2.0.00.zip -d BMC-AMI-PLATFORM-2.0.00
    The path should be /<extracted_dir>/BMC-AMI-PLATFORM-2.0.00.
  3. Provide permissions to the folder by using the following command:
    chmod -R 777 /<extracted_dir>/BMC-AMI-PLATFORM-2.0.00
  4. Verify that the following files and directories are present and correctly structured:
    Root structure
        ansible-playbook/
            site.yml

        config/
            nfs_volume_paths.yaml

        helm_charts/
            01-helm-service-registry/
            02-helm-data-service/
            03-helm-milvusdb-service/
            04-helm-security-service/
            05-helm-zosconnector-service/
            06-helm-core-common/
            07-helm-amiai-chart/
            08-helm-api-gateway/
            09-helm-swarm-redis-cluster/
            10-datastore-init-scripts/
            11-helm-elasticsearch/
            12-helm-core-notification/

        static/
            namespaces.yaml
            secrets/
        setup-script.sh  
        scripts/
           code-conversion.sh
           code-explain.sh
           granite.sh
           llama.sh
           mixtral.sh
           oi-rc.sh
           undeploy_granite.sh
           undeploy_llama.sh
           undeploy_mixtral.sh
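The structure check above can also be scripted. The following is a minimal sketch in shell that verifies the key paths from the listing are present under the extracted directory; the path list is taken from the structure shown above:

```shell
#!/bin/sh
# Verify that the key files and directories from the extracted
# BMC-AMI-PLATFORM-2.0.00 archive exist under the given base directory.
check_layout() {
  base="$1"
  missing=0
  for path in \
      ansible-playbook/site.yml \
      config/nfs_volume_paths.yaml \
      helm_charts \
      static/namespaces.yaml \
      static/secrets \
      setup-script.sh \
      scripts; do
    if [ ! -e "$base/$path" ]; then
      echo "MISSING: $path"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "Layout OK"
  return "$missing"
}

# Example:
#   check_layout /<extracted_dir>/BMC-AMI-PLATFORM-2.0.00
```

If any line starting with MISSING is printed, re-extract the zip file before continuing.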

To run the setup and deployment script

  1. Run the following commands:

    Important

    You must have a user who has sufficient permissions to run kubectl commands.

    cd /<extracted_dir>/BMC-AMI-PLATFORM-2.0.00/

    ./setup-script.sh
     
  2. Use the following field descriptions to assist you during the configuration process:

    Kubeconfig file

    Select how to configure the Kubernetes access:

    • Use the current Kubernetes context (default: ~/.kube/config)
    • Provide the path to a kubeconfig file.
    • Paste kubeconfig YAML (end input with a single line containing: EOF)

    To configure the kubeconfig file, you have the following three options:

    • (Recommended) To use your current kubectl context, enter 1.
    • If you’re using a different kubeconfig file, enter 2 and provide its full path.
    • To paste the kubeconfig content manually, enter 3.

    Registry and project

    Based on where you plan to download the BMC AMI Platform images, select one of the following values:

    • Y—Use the default values: the BMC container registry (containers.bmc.com) and the BMC project name.
    • N—Provide your own values for the following fields:
      • Docker registry URL
      • Project name
    Docker image registry user name

    Enter the Docker image registry user name. This value is visible in the UI.

    Docker image registry password

    Enter the Docker image registry password. This value is visible in the UI.
    Docker Hub registry for the Uptrace image

    To use the Uptrace image in BMC AMI Platform, you must provide the following Docker Hub credentials to pull it from the docker.io registry:

    Provide Docker Hub registry credentials (required for pulling the uptrace/uptrace:2.0.1 image). The script displays the following prompts:

    1. Configure Docker Hub credentials for Uptrace? [Y/N]:
    2. Docker Hub Registry URL [docker.io]:
    3. Docker Hub Username:
    4. Docker Hub Password:
    TLS certificate path

    Enter the TLS certificate path.  

    For more information, see SSL and TLS requirements.

    TLS key path

    Enter the TLS key path.  

    For more information, see SSL and TLS requirements.

    Password for default users

    Enter the password for all the default users in the system.

    A summary table of the relevant fields is displayed for your confirmation.

    NFS server (IP or host name)

    Enter your NFS server host name or IP address.

    NFS server path

    Enter your NFS server path details.

    Local mounted path

    Enter the local mounted path (such as /mnt/data).

    BMC AMI Platform machine SSH key

    Enter your SSH key path (such as /root/.ssh/id_rsa).

    BMC AMI Platform application host machine (IP or host name)

    Enter the IP or host name of the machine where you run the deployment script.
    BMC AMI Platform machine user

    Enter the machine user name (such as root or ubuntu).

    A summary table of the relevant fields is displayed for your confirmation.

    GPU node names

    Enter the names of your GPU nodes as displayed in the available nodes list. If you have more than one GPU node, separate the node names with commas. To skip GPU node labeling if it is not required, press Enter.

    A confirmation message [Y/N] appears.

    NVIDIA GPU operator

    To deploy the NVIDIA GPU operator with the current script, enter Y. To deploy it on your own, enter N.

    A summary table of the relevant fields is displayed for your confirmation.

    Cloud or on-premises deployment

    For a cloud deployment, enter 1. For an on-premises deployment, enter 2.

    If you enter 2, the following three prompts appear:

    1. Enter the address (DNS name, host name, or IP) for Cloud UI access.
      Example

      example.bmc-ami-platform.com or alb-12345.elb.amazonaws.com

    2. Use a custom port instead of the default 443? [Y/N]
    3. Proceed with this Platform UI URL? [Y/N]

    After you select your deployment type, the following question appears: Run Ansible playbook to deploy Helm releases? To complete the process, enter Y. To cancel the entire process, enter N.

After the script finishes running, the following message confirms that the deployment completed successfully:

BMC AMI Platform deployment completed                                      
Elapsed: <1050s>
Ansible log: /<extracted_dir>/BMC-AMI-PLATFORM-2.0.00/logs/ansible_deploy_20251026_150503.log

To verify the deployment

  1. Verify that the pods are running successfully under the namespaces by using the commands in the following examples:
kubectl get pods -n bmcami-prod-data-service

NAME                                        READY   STATUS      RESTARTS   AGE
elasticsearch-784d987cd5-7nxcx              1/1     Running     0          22h
eurekaserver-7c9574b865-bx88m               1/1     Running     0          22h
eurekaserverpeer-5cd95cf4dc-qvbll           1/1     Running     0          22h
milvus-stack-0                              3/3     Running     0          22h
pgadmin-7cd876fcdb-c7kpw                    1/1     Running     0          22h
postgres-script-runner-data-service-pfz9c   0/1     Completed   0          22h
postgresql-69f96fff68-llgdn                 1/1     Running     0          22h
redis-master-0                              1/1     Running     0          22h
redis-replica-0                             1/1     Running     0          22h
redis-replica-1                             1/1     Running     0          22h
redis-sentinel-0                            1/1     Running     0          22h
redis-sentinel-1                            1/1     Running     0          22h
redis-sentinel-2                            1/1     Running     0          22h
kubectl get pods -n bmcami-prod-user-management

NAME                                             READY   STATUS    RESTARTS   AGE
ami-core-api-gateway-878859655-9pkxr             1/1     Running   0          22h
ami-core-api-gateway-878859655-rxxfs             1/1     Running   0          22h
ami-core-api-gateway-878859655-t68gh             1/1     Running   0          22h
ami-core-notification-service-7cdf57f85b-lnw59   1/1     Running   0          22h
ami-core-security-service-868ff49674-77jdx       1/1     Running   0          22h
ami-core-zosconnector-service-d5c55487d-dptmp    1/1     Running   0          22h
amiai-autocomplete-service-67b689cd75-2p5qb      1/1     Running   0          22h
nginx-8599b64b94-6zrc9                           1/1     Running   0          22h
nginx-8599b64b94-n55w7                           1/1     Running   0          22h
nginx-8599b64b94-r8thh                           1/1     Running   0          22h
kubectl get pods -n bmcami-prod-observability

NAME                                   READY   STATUS    RESTARTS      AGE
clickhouse-6f769646d8-pn25t            1/1     Running   0             15m
jaeger-789c7b9647-tdng5                1/1     Running   0             15m
kibana-cc95b8798-45fh9                 1/1     Running   0             15m
observability-dashboard-loader-hp9mr   1/1     Running   0             9m46s
observability-dashboard-loader-q2vbg   1/1     Running   0             4m24s
observability-dashboard-loader-vvrgt   1/1     Running   0             15m
otel-collector-7558fd9dcd-9nztf        1/1     Running   0             15m
prometheus-5df9ccf5c6-pbzbb            1/1     Running   0             15m
uptrace-7dd46fcd6b-8ngxb               1/1     Running   1 (14m ago)   15m
kubectl get pods -n bmcami-prod-amiai-services

NAME                           READY   STATUS      RESTARTS      AGE
assistant-c6dd4bb6b-4xbkc      1/1     Running     0             22h
discovery-7c68bcd776-chnz6     1/1     Running     1 (22h ago)   22h
docs-expert-5957c5d845-cn2hg   1/1     Running     0             22h
download-embeddings-qfpgl      0/1     Completed   0             22h
download-expert-model-nw9km    0/1     Completed   0             22h
download-llama-model-582bh     0/1     Completed   0             4h53m
gateway-775f4476d9-h6jk6       1/1     Running     0             5h10m
llama-gpu-6f8c675c4b-j7vhm     1/1     Running     0             4h53m
load-embeddings-4pg6r          0/1     Completed   0             22h
platform-75c4997dc5-fk8fq      1/1     Running     0             22h
security-65c8c568db-gqsks      1/1     Running     0             22h
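Rather than scanning each listing by eye, you can pipe the kubectl output through a small filter that flags any pod whose status is neither Running nor Completed. This is a sketch, not part of the shipped scripts:

```shell
# Flag pods that are neither Running nor Completed in the output of
# `kubectl get pods -n <namespace>`. Prints each problem pod and
# returns a nonzero exit status if any are found.
check_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" {
         print "NOT READY: " $1 " (" $3 ")"; bad = 1
       }
       END { exit bad }'
}

# Example:
#   kubectl get pods -n bmcami-prod-data-service | check_pods
```

Run it once per namespace; no output and a zero exit status means every pod is healthy.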

Before you proceed, complete the following checks:

  1. Download Jobs—Confirm that all three download jobs are completed.
  2. Load Embedding Job—Verify that the load embedding job has finished successfully.
Important

This process can take several hours.

  1. Check the logs by running the following command:
    kubectl logs load-embeddings-4pg6r --namespace bmcami-prod-amiai-services
  2. At the end of the log, look for the following indicators:
    INFO - Started processing files 255 of 255
    .
    .
    .
    .
    INFO - Completed processing file
    /opt/bmc/app/src/resources/embeddings/DevX_BMC_AMI_DevX_File-AID_Common_Components_v23.01_bmc_docs_df.csv
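The check for those indicators can be automated by grepping the log for the completion message. A minimal sketch, assuming the log text is piped in from kubectl logs (use the load-embeddings pod name from your own cluster):

```shell
# Report whether the load-embeddings job has finished, based on the
# completion indicator shown in the log excerpt above. Reads log text
# from stdin.
embeddings_done() {
  if grep -q 'INFO - Completed processing file'; then
    echo "Embedding load complete"
  else
    echo "Embedding load still in progress"
    return 1
  fi
}

# Example:
#   kubectl logs load-embeddings-4pg6r --namespace bmcami-prod-amiai-services | embeddings_done
```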

To configure the LLM for BMC AMI Assistant

You can configure the LLM for the BMC AMI Assistant chat from the UI or by running the script manually.

To configure the LLM from the UI, follow the procedure in Deployed Large Language model (LLM).

Important

If you're deploying BMC AMI Platform in an on-premises environment and using a remote GPU (not part of your Kubernetes cluster), you must deploy your LLM by using the UI only.

To run the script manually

  1. Go to the model-specific deployment scripts at /<extracted_dir>/BMC-AMI-PLATFORM-2.0.00/helm_charts/07-helm-amiai-chart/.
  2. Select one of the supported scripts from the list based on the LLM model you plan to deploy:

    • llama.sh (llm-llama): Llama 3.1 LLM service and model download job
    • granite.sh (llm-granite): Granite 3.1 LLM service and model download job
    • mixtral.sh (llm-mixtral): Mixtral LLM service and model download job
  3. Verify that the LLM model service has successfully completed its setup. For more information, see Deploying LLM.

For example, if you run the llama.sh script and then run the kubectl get pods -n bmcami-prod-amiai-services command, the following two pods are displayed:

  • download-llama-model-582bh 0/1 Completed
  • llama-gpu-6f8c675c4b-j7vhm 1/1 Running

After the download job is complete and the GPU service is running, the chat becomes available in the platform UI.
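That two-pod condition can be checked programmatically as well. The following sketch parses the kubectl output shown above; the pod name prefixes (download-llama-model, llama-gpu) are assumptions based on the example listing:

```shell
# Given `kubectl get pods -n bmcami-prod-amiai-services` output on stdin,
# succeed only when the llama model download job has Completed and the
# llama GPU service pod is Running.
llama_ready() {
  awk '$1 ~ /^download-llama-model/ && $3 == "Completed" { dl = 1 }
       $1 ~ /^llama-gpu/            && $3 == "Running"   { gpu = 1 }
       END { exit !(dl && gpu) }'
}

# Example:
#   if kubectl get pods -n bmcami-prod-amiai-services | llama_ready; then
#     echo "Chat backend is ready"
#   fi
```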

Important

Do not delete the installation file or the following directory:
BMC-AMI-PLATFORM-2.0.00
These are required for future processes and must remain intact.

Where to go from here

After you complete deploying BMC AMI Platform, proceed to Deploying LLM.

 
