Deploying BMC AMI AI Services on your on-premises x86 Linux distribution (Ubuntu)


This topic describes how to prepare the infrastructure required to deploy BMC AMI AI Services on your x86 Linux distribution (Ubuntu). You prepare your Linux distribution in accordance with your organization's processes and then deploy BMC AMI AI Services.

The infrastructure required for optimal performance with BMC AMI AI Services is as follows:

Configuration type | LLM | GPU or CPU | GPU memory | CPU memory
Recommended | Mixtral8x7B-instruct Quantized | 4 GPUs (A10/A100/V40) | 36 GB | None
Mid-level | Meta-Llama-3-8B-instruct 4K Quantized (GPU) | 2 GPUs (A10/V40) | 24 GB | None
Entry-level | Meta-Llama-3-8B-instruct 4K Quantized (CPU) | 32 vCPUs | None | 64 GB

If you temporarily cannot procure GPU-enabled machines and decide to proceed with the entry-level configuration, be aware that the performance of BMC AMI AI Services will be significantly slower and some features might be unavailable.

Task 1: To acquire the required infrastructure

Acquire the infrastructure from the recommended table and make sure you have installed the Ubuntu 24.04 LTS operating system. We recommend creating infrastructure for the Mixtral8x7B configuration type, as it provides optimal system performance and accuracy.

You can acquire any one of the configurations from the table. After you acquire the required configuration, note the machine's IP address or domain name; you need it during the application installation.
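After provisioning, you can sanity-check the machine against the configuration table. The following commands are a minimal sketch; nvidia-smi is only present on GPU machines with NVIDIA drivers installed, and the memory and vCPU thresholds shown in the comments are the entry-level values:

```shell
# Quick sanity check of the acquired Ubuntu machine.
grep PRETTY_NAME /etc/os-release      # expect Ubuntu 24.04 LTS
nproc                                 # vCPU count (entry-level needs 32)
grep MemTotal /proc/meminfo           # memory (entry-level needs 64 GB)
nvidia-smi -L 2>/dev/null || echo "No NVIDIA GPUs detected"
hostname -I 2>/dev/null || hostname   # note this address for the installation
```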

Task 2: To download the required scripts from EPD

  1. Before you begin, make sure that you complete the steps outlined in the Downloading-the-installation-files topic and obtain the BMC-AMI-AI-x86.zip file.
  2. Extract the contents from the BMC-AMI-AI-x86.zip file and save the extracted files in a folder of your choice for future use. 
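For example, extraction with the unzip utility might look as follows; the destination folder name bmc-ami-ai is a placeholder, not a requirement:

```shell
# Extract the downloaded archive into a working folder (name is an example).
ZIP=BMC-AMI-AI-x86.zip
DEST=bmc-ami-ai
if [ -f "$ZIP" ]; then
  unzip -o "$ZIP" -d "$DEST"
  ls "$DEST"    # the .yml Ansible files and .sh scripts should be listed here
else
  echo "Place $ZIP in the current directory first"
fi
```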

Task 3: To configure BMC AMI AI Services for installation

Update the configuration in the Ansible scripts. There are two types of Ansible scripts, based on the service:

  • BMC AMI AI Platform Service Configuration
  • BMC AMI AI Model Configuration

BMC AMI AI Platform Service Configuration

You can find the BMC-AMI-AI-Platform-Services.yml Ansible file in the extracted folder. This script installs BMC AMI AI Platform services.

The following table includes the required fields you must configure and modify before you run the Ansible script.

Field | Description

gateway_host | Add the IP address or domain name of the acquired x86 machine. BMC AMI AI Platform services communicate internally by using the gateway_host field.

Important: For licensing purposes, you must provide the CES configuration. During feature provisioning, the CES details are provided to store BMC AMI AI Services access information. The CES details are also used with BMC AMI DevX Workbench applications to access BMC AMI AI Services.

ces_scheme | Protocol of the CES instance
ces_host | Host name or IP address of the CES instance
ces_port | Port of the CES instance
(Optional) ces_username | If the CES instance requires a user name, provide it.
(Optional) ces_password | If the CES instance requires a password, provide it.

Important: User credentials are required to access the BMC AMI AI Manager console. You can configure console users in one of two ways: through zos_host or through admin_user_id. All configured users are admin users.

zos_host | Host name or IP address of the z/OS machine

Important: Set the mainframe machine's host name or IP address in zos_host. zos_host is used to authenticate users; all authenticated users can access the BMC AMI AI Manager console.

admin_user_id | Set the administrator user name. The default is admin.

Important: A default admin user is provided to access the BMC AMI AI Manager console. Use it if you cannot provide zos_host. If you already have zos_host, reset the values set for the admin user.

admin_password | Set the admin user password. Before setting a password, encode it in base64. The default value is YW1pYWlhZG1pbg==, which is the base64 encoding of amiaiadmin.
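You can produce the base64-encoded password with standard tools. For example, encoding the default password amiaiadmin:

```shell
# Encode an admin password in base64 before placing it in the Ansible file.
# -n matters: without it, echo appends a newline that changes the encoding.
echo -n 'amiaiadmin' | base64
# YW1pYWlhZG1pbg==
```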

To configure other properties, see Configuring-the-BMC-AMI-Platform-service.
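As an illustration only, the fields above might appear in BMC-AMI-AI-Platform-Services.yml as variables similar to the following. The field names come from the table above, but the surrounding structure and all values are placeholders, not the shipped file:

```yaml
# Hypothetical sketch of the variables section; all values are examples.
gateway_host: 10.0.0.25           # IP or domain name of the acquired x86 machine
ces_scheme: https                 # protocol of the CES instance
ces_host: ces.example.com         # host name or IP of the CES instance
ces_port: 48226                   # port of the CES instance
ces_username: cesuser             # optional
ces_password: cespass             # optional
zos_host: zos.example.com         # used to authenticate console users
admin_user_id: admin              # default admin user
admin_password: YW1pYWlhZG1pbg==  # base64 of amiaiadmin (the default)
```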

BMC AMI AI Model Configuration

Update the Ansible script file based on the selected server configuration for the model you plan to deploy.

Model type | Configuration

Mixtral8x7B-instruct Quantized | The extracted folder contains the BMC-AMI-AI-Mixtral-Service.yml Ansible file, which installs the BMC AMI AI Mixtral model. All required configurations are already handled. For more information, see Configuring-the-BMC-AMI-Platform-service.

Meta-Llama-3-8B-instruct 4K Quantized (GPU) | The extracted folder contains the BMC-AMI-AI-Llama3-GPU.yml Ansible file, which installs the BMC AMI AI Llama3 GPU model. All required configurations are already handled. For more information, see Configuring-the-BMC-AMI-Platform-service.

Meta-Llama-3-8B-instruct 4K Quantized (CPU) | The extracted folder contains the BMC-AMI-AI-Llama3-Service.yml Ansible file, which installs the BMC AMI AI Llama3 CPU model. All required configurations are already handled. For more information, see Configuring-the-BMC-AMI-Platform-service.

The following table includes the required field that you must configure and modify before you run the Ansible script for the BMC AMI AI Llama3 CPU model.

Field | Description

no_of_threads | Set no_of_threads to 24. This configuration gives optimal performance.

To configure other properties, see Configuring-the-BMC-AMI-Platform-service.
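As an illustration (the surrounding structure of BMC-AMI-AI-Llama3-Service.yml is an assumption, not the shipped file), the setting might look like:

```yaml
# Hypothetical fragment of BMC-AMI-AI-Llama3-Service.yml
no_of_threads: 24   # 24 threads gives optimal CPU performance
```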

Task 4: To copy the scripts and files to the machine

After you complete the configuration changes, copy the folder to the acquired instance.

  1. Locate the folder where you stored the following files:
    • BMC-AMI-AI-Platform-Services.yml
    • BMC-AMI-AI-Llama3-Service.yml
    • BMC-AMI-AI-Llama3-GPU.yml
    • BMC-AMI-AI-Mixtral-Service.yml
    • BMC-AMI-AI-Platform.sh
    • BMC-AMI-AI-Llama.sh
    • BMC-AMI-AI-Llama-GPU.sh
    • BMC-AMI-AI-Mixtral.sh
  2. Move all the listed files to the x86 Linux distribution. You can use any SCP client to copy the files.
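For example, with the OpenSSH scp client; the host, user, and key path are placeholders for your environment, and the leading echo only previews the command (remove it to perform the copy):

```shell
# Sketch of an scp copy to the acquired machine (placeholders throughout).
HOST=203.0.113.10
SSH_USER=ubuntu
KEY=$HOME/.ssh/my-key.pem
echo scp -i "$KEY" BMC-AMI-AI-*.yml BMC-AMI-AI-*.sh "$SSH_USER@$HOST:~/bmc-ami-ai/"
```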

Task 5: To deploy BMC AMI AI Services

  1. Connect to the newly acquired x86 Linux distribution instance via SSH. Make sure that the login credentials have admin access.
  2. Install Python and its dependencies on the on-premises x86 machine.
    1. Verify whether Python is already installed by entering the python3 --version command. If Python is not installed, run the following commands: 

      sudo apt update
      sudo apt install python3
    2. Install pip by using the following commands: 

      sudo apt update
      sudo apt-get -y install python3-pip
  3. Deploy the application. BMC AMI AI Services deployment is divided into these steps:
    1. Step 1 – BMC AMI AI Platform services deployment
      1. Run the following commands to deploy the BMC AMI AI Platform services: 

        chmod +x BMC-AMI-AI-Platform.sh
        sed -i -e 's/\r$//' BMC-AMI-AI-Platform.sh
        ./BMC-AMI-AI-Platform.sh
      2. When prompted, enter your EPD portal's username and password.
    2. Step 2 – Model Deployment
      1. Run only the command that corresponds to the configuration you selected; do not run all model commands. 

        Mixtral8x7B-instruct Quantized:

        chmod +x BMC-AMI-AI-Mixtral.sh
        sed -i -e 's/\r$//' BMC-AMI-AI-Mixtral.sh
        ./BMC-AMI-AI-Mixtral.sh

        Meta-Llama-3-8B-instruct 4K Quantized (GPU):

        chmod +x BMC-AMI-AI-Llama-GPU.sh
        sed -i -e 's/\r$//' BMC-AMI-AI-Llama-GPU.sh
        ./BMC-AMI-AI-Llama-GPU.sh

        Meta-Llama-3-8B-instruct 4K Quantized (CPU):

        chmod +x BMC-AMI-AI-Llama.sh
        sed -i -e 's/\r$//' BMC-AMI-AI-Llama.sh
        ./BMC-AMI-AI-Llama.sh
      2. When prompted, enter your EPD portal's username and password.
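The Python prerequisite check from step 2 above can be scripted. A minimal sketch:

```shell
# Confirm the Python prerequisites before running the deployment scripts.
if command -v python3 >/dev/null 2>&1; then
  python3 --version
else
  echo "python3 is missing: run 'sudo apt update && sudo apt install python3'"
fi
if command -v pip3 >/dev/null 2>&1; then
  pip3 --version
else
  echo "pip3 is missing: run 'sudo apt-get -y install python3-pip'"
fi
```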

Task 6: To verify the deployment of BMC AMI AI Services

For more information about how to verify the deployment of BMC AMI AI Services, see Verifying-the-installation.

 
