Models in BMC HelixGPT


 

A model is a program or algorithm that relies on training data to recognize patterns and make predictions. BMC HelixGPT supports different vendors and models. For more information about vendors and models, see Supported models in BMC HelixGPT.

BMC HelixGPT uses Llama 3, a self-hosted solution containing the weights and configuration files required to run inference.

Inferencing and training

Inferencing is the main task of a model; it produces outputs such as chat responses and summaries. Inferencing occurs in the following way:

[Diagram: Model inferencing]

Data fed to the model includes example outputs, and the model is adjusted or fine-tuned regularly.

Model training is accomplished in the following ways:

  • BMC trains the model on generic, non-sensitive data and fine-tunes it for application use cases that are not specific to any customer. For example, BMC provides a global prompt that establishes the tone and expectations for all responses.
  • Customers train the model by using their private data; the output is a model that is specific to that customer.
  • AISM use cases depend on ready-made models, which you can also train according to your requirements.

Inference service

The inference service deploys the model and exposes an API through a network endpoint. BMC Helix applications use this service at runtime during user interaction. The inference service is available in the following categories:

  • Subscribed service: A subscribed service is a generalized, large model service for which you purchase access and get an API key from a third-party provider, such as Microsoft Azure OpenAI. The vendor hosts and runs the service, and BMC integrates with it through REST APIs. BMC supports the Azure OpenAI provider.
  • Self-hosted service: A self-hosted service runs on one of the AI/ML platforms offered by the three major cloud vendors: Google Cloud Platform Vertex AI, Amazon Web Services Bedrock, and Azure ML. These platforms require you to be a Google Cloud Platform (GCP), Amazon Web Services (AWS), or Microsoft Azure customer. BMC supports Vertex AI on Google Cloud Platform.
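As a concrete illustration of the subscribed-service pattern, the sketch below assembles an Azure OpenAI-style REST request. This is a minimal sketch, not HelixGPT's actual client code: the resource endpoint and API key are placeholders, and the deployment name reuses the `se-gpt-4-turbo` example from the configuration table later in this topic.

```python
import json

# Hypothetical values -- substitute your own Azure OpenAI resource,
# deployment, and API key obtained from the provider.
API_BASE = "https://example-resource.openai.azure.com"
DEPLOYMENT = "se-gpt-4-turbo"
API_VERSION = "2024-10-21"
API_KEY = "<your-api-key>"

def build_chat_request(user_message: str) -> dict:
    """Assemble the URL, headers, and body for a chat completion call."""
    url = (f"{API_BASE}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    headers = {"api-key": API_KEY, "Content-Type": "application/json"}
    body = {
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.0,  # deterministic output (the documented default)
    }
    return {"url": url, "headers": headers, "body": json.dumps(body)}

request = build_chat_request("Summarize incident INC000123.")
print(request["url"])
```

At runtime the application would POST `request["body"]` to `request["url"]` with the given headers; the same request shape applies regardless of which BMC Helix application invokes the inference service.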

Supported models

The following providers and their models are supported out of the box for creating skills and prompts.

All supported models are authenticated by using an API key. You must specify the API key when provisioning the AI provider in HelixGPT Agent Studio. Learn more about configuring the AI provider in Provisioning-and-setting-up-the-generative-AI-provider-for-your-application.

| LLM provider | LLM host | Model name | API version | Description |
|---|---|---|---|---|
| OpenAI | MS Azure | GPT-4.1 | 2024-10-21 | gpt-4.1 (2025-04-14) |
| OpenAI | MS Azure | GPT-4.1 mini | 2024-10-21 | gpt-4.1-mini (2025-04-14) |
| OpenAI | OpenAI | GPT-4.1 | 2024-10-21 | gpt-4.1-2025-04-14 |
| Google | Google Vertex AI | Gemini 2.5 Flash | | gemini-2.5-flash |
| Meta | Google Vertex AI | Llama 4 | | publishers/meta/models/llama-4-maverick-17b-128e-instruct-maas? |
| Meta | Amazon Bedrock | Llama 4 | Converse API | Llama 4 Maverick |

Important

  • When creating a service request, if you observe issues with the GPT-4o (Omni) (2024-08-06) model, use the GPT-4o (Omni) (2024-11-20) model.
  • Turkish is not supported by the models mentioned in this topic.

Details about the default configuration parameters

The following table describes the parameters used in the model configuration. For more information about the parameters, see Azure OpenAI Service REST API reference and Reproducible output support.

| Parameter | Description |
|---|---|
| temperature | Controls the randomness of the text generated by the model. A lower value generates more deterministic output; a higher value generates more randomized output. The default value is 0.0. |
| apiType | Specifies the API type of the AI provider; for example, azure_ad. |
| deploymentName | Specifies the deployment name of the model; for example, se-gpt-4-turbo. |
| top_p | An alternative to sampling with temperature; the model considers only the tokens in the top_p probability mass. For example, a value of 0.1 means only the tokens comprising the top 10% probability mass are considered. The default value is 0.1. |
| supportsJsonResponse | When set to true, the model returns a valid JSON response as output. |
| read_timeout_in_seconds | Specifies the time in seconds after which the model request times out. The default value is 300 seconds. |
| question_max_retry | Specifies the number of times the model reattempts to respond. The default value is 3. |
| seed | (Optional) Controls the reproducibility of the response, such that repeated requests with the same seed value return the same result. For example, seed = 1. |
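Putting these parameters together, a model configuration might look like the following fragment. This is an illustrative sketch only: the parameter names come from the table above, but the overall file structure and the specific values (other than the documented defaults) are assumptions, not a configuration shipped with BMC HelixGPT.

```json
{
  "temperature": 0.0,
  "apiType": "azure_ad",
  "deploymentName": "se-gpt-4-turbo",
  "top_p": 0.1,
  "supportsJsonResponse": true,
  "read_timeout_in_seconds": 300,
  "question_max_retry": 3,
  "seed": 1
}
```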

Models supported by BMC Helix Digital Workplace

BMC Helix Digital Workplace supports the following models:

| AI agent | Model name | Provider | Host |
|---|---|---|---|
| Service Catalog Curator | GPT-4o | OpenAI | Microsoft Azure |
| Service Catalog Curator | GPT-4o mini | OpenAI | Microsoft Azure |
| Service Catalog Curator | GPT-4.1 | OpenAI | Microsoft Azure |
| Service Catalog Curator | GPT-4.1 mini | OpenAI | Microsoft Azure |
| Service Catalog Curator | Google Vertex 2.5 Flash | Google | Google Cloud Platform Vertex AI |
| Service Catalog Curator | Google Vertex 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| Employee Navigator | GPT-4.1 | OpenAI | Microsoft Azure AI, OpenAI |
| Employee Navigator | GPT-4.1 mini | OpenAI | Microsoft Azure AI |
| Employee Navigator | Gemini 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| Employee Navigator | Llama 4 | Meta | Google Cloud Platform Vertex AI, Amazon Bedrock, Oracle Cloud |

Models supported by BMC Helix Innovation Suite

BMC Helix Innovation Suite supports the following models:

| AI agent | Model name | Provider | Host |
|---|---|---|---|
| IS RD Agent | GPT-4.1 | OpenAI | Microsoft Azure AI |
| IS RD Agent | GPT-4.1 mini | OpenAI | Microsoft Azure AI |
| IS RD Agent | Gemini 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| IS RD Agent | Llama 4 | Meta | Google Cloud Platform Vertex AI, Oracle Cloud, Amazon Bedrock |

Models supported by BMC Helix Business Workflows

BMC Helix Business Workflows supports the following models:

| AI agent | Model name | Provider | Host |
|---|---|---|---|
| Service Collaborator | GPT-4.1 | OpenAI | Microsoft Azure |
| Service Collaborator | Gemini 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| Service Collaborator | Llama 4 | Meta | Oracle Cloud Infrastructure, Google Cloud Platform |
| Knowledge Curator | GPT-4.1 | OpenAI | Microsoft Azure |
| Knowledge Curator | Gemini 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| Knowledge Curator | Llama 4 | Meta | Oracle Cloud Infrastructure, Google Cloud Platform |

 

Models supported by BMC Helix ITSM

BMC Helix ITSM supports the following models:

| AI agent | Model name | Provider | Host |
|---|---|---|---|
| Service Collaborator | GPT-4.1 | OpenAI | Microsoft Azure AI |
| Service Collaborator | GPT-4.1 mini | OpenAI | Microsoft Azure AI |
| Service Collaborator | Gemini 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| Service Collaborator | Llama 4 | Meta | Oracle Cloud |
| Change Risk Advisor | GPT-4.1 | OpenAI | Microsoft Azure AI |
| Change Risk Advisor | GPT-4.1 mini | OpenAI | Microsoft Azure AI |
| Change Risk Advisor | Gemini 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| Change Risk Advisor | Llama 4 | Meta | Oracle Cloud |

 

Models supported by BMC Helix ITSM: Service Desk

BMC Helix ITSM: Service Desk supports the following models:

| AI agent | Model name | Provider | Host |
|---|---|---|---|
| Service Collaborator | GPT-4.1 | OpenAI | Microsoft Azure AI |
| BMC Helix Ops Swarmer | GPT-4.1 | OpenAI | Microsoft Azure AI |
| BMC Helix Ops Swarmer | GPT-4.1 mini | OpenAI | Microsoft Azure AI |

 

Models supported by BMC Helix Knowledge Management

BMC Helix Knowledge Management supports the following models:

| AI agent | Model name | Provider | Host |
|---|---|---|---|
| Knowledge Curator | GPT-4.1 | OpenAI | Microsoft Azure AI |
| Knowledge Curator | GPT-4.1 mini | OpenAI | Microsoft Azure AI |
| Knowledge Curator | Gemini 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| Knowledge Curator | Llama 4 | Meta | Oracle Cloud |

 

Models supported by BMC Helix Dashboards

BMC Helix Dashboards supports the following models:

| AI agent | Model name | Provider | Host |
|---|---|---|---|
| Insight Finder | Gemini 2.0 Flash | Google | Google Cloud Platform Vertex AI |
| Insight Finder | GPT-4o | OpenAI | Azure OpenAI |
| Insight Finder | GPT-4o mini | OpenAI | Azure OpenAI |
| Insight Finder | GPT-4.1 | OpenAI | Azure OpenAI |
| Insight Finder | GPT-4.1 mini | OpenAI | Azure OpenAI |

 

Models supported by BMC Helix AIOps

BMC Helix AIOps supports the following models:

| AI agent | Model name | Provider | Host |
|---|---|---|---|
| AIOps Ask HelixGPT Agent | HelixGPT-v7 | BMC Helix | Azure ML |
| Best Action Recommender | HelixGPT-v7 | BMC Helix | Azure ML |
| AIOps Inference Agent | HelixGPT-v7 | BMC Helix | Azure ML |
| Vulnerability Resolver | HelixGPT-v7 | BMC Helix | Azure ML |
| AIOps Situation Summary Agent | HelixGPT-v7 | BMC Helix | Azure ML |
| Change Agent | HelixGPT-v7 | BMC Helix | Azure ML |
| Chat Agent | HelixGPT-v7 | BMC Helix | Azure ML |

 

Related topics

Skills

Prompts

 
