Deploying BMC AMI AI Services on your on-premises x86 Linux distribution (Ubuntu)
The infrastructure required for optimal performance with BMC AMI AI Services is as follows:
Configuration type | LLM | GPU or CPU | GPU memory | CPU memory |
---|---|---|---|---|
Recommended | Mixtral8x7B-instruct Quantized | 4 GPUs (A10/A100/V40) | 36 GB | None |
Mid-level | Meta-Llama-3-8B-instruct 4K Quantized (GPU) | 2 GPUs (A10/V40) | 24 GB | None |
Entry-level | Meta-Llama-3-8B-instruct 4K Quantized (CPU) | 32 VCPUs | None | 64 GB |
If you cannot procure GPU-enabled machines at this time and decide to proceed with the entry-level configuration, be aware that BMC AMI AI Services will perform significantly slower, and some features might be unavailable.
Task 1: To acquire the required infrastructure
Acquire one of the configurations listed in the table above and make sure the Ubuntu 24.04 LTS operating system is installed. We recommend the Mixtral8x7B configuration type, as it provides optimal system performance and accuracy.
After you acquire the required configuration, note the machine's IP address or domain name for use during the application installation.
Task 2: To download the required scripts from EPD
- Before you begin, make sure you complete the steps outlined in the Downloading-the-installation-files topic and obtain the BMC-AMI-AI-x86.zip file.
- Extract the contents from the BMC-AMI-AI-x86.zip file and save the extracted files in a folder of your choice for future use.
Task 3: To configure BMC AMI AI Services for installation
Update the configuration in the Ansible scripts. The following types of Ansible scripts are available, based on the service:
- BMC AMI AI Platform Service Configuration
- BMC AMI AI Model Configuration
BMC AMI AI Platform Service Configuration
You can find the BMC-AMI-AI-Platform-Services.yml Ansible file in the extracted folder. This script installs BMC AMI AI Platform services.
The following table includes the required fields you must configure and modify before you run the Ansible script.
Field | Description |
---|---|
gateway_host | Add the IP address or domain name of the acquired x86 machine. BMC AMI AI Platform services communicate internally by using the gateway_host value. |
ces_scheme | Protocol of the CES instance |
ces_host | Host name or IP address of the CES instance |
ces_port | Port of the CES instance |
(Optional) ces_username | If the CES instance requires a user name, then provide it. |
(Optional) ces_password | If the CES instance requires a password, then provide it. |
zos_host | Host name or IP address of the z/OS machine |
admin_user_id | Set the administrator user name. The default is admin. |
admin_password | Set the administrator password. The default value is the Base64 encoding of amiaiadmin (YW1pYWlhZG1pbg==). To configure other properties, see Configuring-the-BMC-AMI-Platform-service. |
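The default admin_password value can be reproduced, and a replacement value generated, with a standard Base64 encoding step; for example:

```shell
# Reproduce the default: Base64-encode the string 'amiaiadmin'.
echo -n 'amiaiadmin' | base64
# -> YW1pYWlhZG1pbg==

# Encode your own replacement password the same way before placing it
# in the admin_password field (the password shown here is hypothetical):
echo -n 'MyNewPassword' | base64
```

Note the `-n` flag, which prevents a trailing newline from being included in the encoded value.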
BMC AMI AI Model Configuration
Update the Ansible script file based on the selected server configuration for the model you plan to deploy.
Model type | Configuration |
---|---|
Mixtral8x7B-instruct Quantized | The extracted folder contains the BMC-AMI-AI-Mixtral-Service.yml Ansible file. This Ansible script installs the BMC AMI AI Mixtral model. All required configurations are already handled. For more information, see Configuring-the-BMC-AMI-Platform-service. |
Meta-Llama-3-8B-instruct 4K Quantized GPU | The extracted folder contains the BMC-AMI-AI-Llama3-GPU.yml Ansible file. This Ansible script installs the BMC AMI AI Llama3 GPU model. All required configurations are already handled. For more information, see Configuring-the-BMC-AMI-Platform-service. |
Meta-Llama-3-8B-instruct 4K Quantized CPU | The extracted folder contains the BMC-AMI-AI-Llama3-Service.yml Ansible file. This Ansible script installs the BMC AMI AI Llama3 CPU model. All required configurations are already handled. For more information, see Configuring-the-BMC-AMI-Platform-service. |
The following table includes the required field you must configure and modify before you run the Ansible script for the BMC AMI AI Llama3 CPU model.
Field | Description |
---|---|
no_of_threads | Set no_of_threads to 24 for optimal performance. To configure other properties, see Configuring-the-BMC-AMI-Platform-service. |
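As a sketch, the corresponding entry in BMC-AMI-AI-Llama3-Service.yml might look like the following variable assignment; only the field name and value come from the table above, and the exact surrounding structure depends on the shipped script:

```yaml
# Hypothetical fragment of the Llama3 CPU Ansible variables.
no_of_threads: 24   # 24 threads gives optimal performance on a 32-vCPU machine
```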
Task 4: To copy the script and files to the machine
After the configuration changes are complete, copy the folder to the acquired instance.
- Locate the folder where you stored the following files:
- BMC-AMI-AI-Platform-Services.yml
- BMC-AMI-AI-Llama3-Service.yml
- BMC-AMI-AI-Llama3-GPU.yml
- BMC-AMI-AI-Mixtral-Service.yml
- BMC-AMI-AI-Platform.sh
- BMC-AMI-AI-Llama.sh
- BMC-AMI-AI-Llama-GPU.sh
- BMC-AMI-AI-Mixtral.sh
- Move all the listed files to the x86 Linux instance. You can use any SCP client to copy the files.
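From the extraction folder, the copy can be done with a single scp call. The scratch-directory sketch below confirms that one glob pair matches all eight files; the login, host, and destination path in the commented line are placeholders to replace with your own values:

```shell
# Scratch demo: create stand-ins for the eight deployment files,
# then confirm one glob pair selects all of them.
mkdir -p /tmp/ami-copy-demo && cd /tmp/ami-copy-demo
touch BMC-AMI-AI-Platform-Services.yml BMC-AMI-AI-Llama3-Service.yml \
      BMC-AMI-AI-Llama3-GPU.yml BMC-AMI-AI-Mixtral-Service.yml \
      BMC-AMI-AI-Platform.sh BMC-AMI-AI-Llama.sh \
      BMC-AMI-AI-Llama-GPU.sh BMC-AMI-AI-Mixtral.sh
ls BMC-AMI-AI-*.yml BMC-AMI-AI-*.sh | wc -l
# -> 8

# The actual copy (replace user and host with your instance details):
# scp BMC-AMI-AI-*.yml BMC-AMI-AI-*.sh user@<instance-ip-or-domain>:~/
```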
Task 5: To deploy BMC AMI AI Services
- Connect to the newly acquired x86 Linux instance via SSH. Make sure that the login credentials have admin access.
- Install Python and its dependencies on the on-premises x86 machine.
Verify whether Python is already installed by entering the python3 command. If Python is not installed, run the following commands:
sudo apt update
sudo apt install python3
Install pip by using the following commands:
sudo apt update
sudo apt-get -y install python3-pip
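The check described above can be scripted so that the apt steps run only when needed; a minimal sketch:

```shell
# Install python3 and pip only if python3 is not already present.
if command -v python3 >/dev/null 2>&1; then
    echo "python3 already installed: $(python3 --version 2>&1)"
else
    sudo apt update
    sudo apt install -y python3 python3-pip
fi
```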
- Deploy the application. BMC AMI AI Services deployment is divided into these steps:
- Step 1 – BMC AMI Platform services deployment
Run the following commands to deploy AMI AI Platform Services:
chmod +x BMC-AMI-AI-Platform.sh
sed -i -e 's/\r$//' BMC-AMI-AI-Platform.sh
./BMC-AMI-AI-Platform.sh
- When prompted, enter your EPD portal's username and password.
- Step 2 – Model Deployment
Run only the command for your selected configuration. Do not run all model commands.
Mixtral8x7B-instruct Quantized:
chmod +x BMC-AMI-AI-Mixtral.sh
sed -i -e 's/\r$//' BMC-AMI-AI-Mixtral.sh
./BMC-AMI-AI-Mixtral.sh

Meta-Llama-3-8B-instruct 4K Quantized (GPU):
chmod +x BMC-AMI-AI-Llama-GPU.sh
sed -i -e 's/\r$//' BMC-AMI-AI-Llama-GPU.sh
./BMC-AMI-AI-Llama-GPU.sh

Meta-Llama-3-8B-instruct 4K Quantized (CPU):
chmod +x BMC-AMI-AI-Llama.sh
sed -i -e 's/\r$//' BMC-AMI-AI-Llama.sh
./BMC-AMI-AI-Llama.sh

- When prompted, enter your EPD portal's username and password.
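Each deployment script is prepared the same way: chmod makes it executable, and sed strips Windows-style carriage returns that can appear if the files were edited or transferred from a Windows machine. A scratch-file demonstration of the sed step:

```shell
# Simulate a script saved with Windows (CRLF) line endings.
printf '#!/bin/sh\r\necho ok\r\n' > /tmp/crlf-demo.sh
sed -i -e 's/\r$//' /tmp/crlf-demo.sh   # strip the trailing \r from each line
chmod +x /tmp/crlf-demo.sh
/tmp/crlf-demo.sh
# -> ok
```

Without the sed step, the stray \r in the shebang line would cause a "bad interpreter" error when the script runs.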
Task 6: To verify the deployment of BMC AMI AI Services
For more information about how to verify the deployment of BMC AMI AI Services, see Verifying-the-installation.