BMC HelixGPT architecture
The following diagram shows the BMC HelixGPT architecture:
The BMC Helix applications interact with the Assistant, a common interface to the BMC HelixGPT components. The Assistant communicates with these components, relaying prompts, API calls, user queries, and responses between them. Finally, it conveys the responses generated by BMC HelixGPT back to the user to help resolve the query.
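Conceptually, the Assistant plays the role of an orchestrator that relays data between the application, the retrieval components, and the LLM. The following Python sketch illustrates that relay pattern only; every class and method name in it (Assistant, Retriever, Tool, LLM, handle_query) is hypothetical and does not reflect BMC HelixGPT's internal APIs.

```python
# Minimal, illustrative sketch of the relay role described above.
# All names are hypothetical stand-ins, not BMC HelixGPT internals.

class Retriever:
    def search(self, query: str) -> list[str]:
        return ["<contextual data matching the query>"]   # stand-in for Vector DB lookups

class Tool:
    def call(self, query: str) -> str:
        return "<application API result>"                 # stand-in for an API plug-in

class LLM:
    def generate(self, prompt: str) -> str:
        return "<generated answer>"                       # stand-in for the model call

class Assistant:
    """Relays prompts, API calls, user queries, and responses between
    the BMC Helix application, the retrieval components, and the LLM."""

    def __init__(self, retriever: Retriever, tools: list[Tool], llm: LLM):
        self.retriever, self.tools, self.llm = retriever, tools, llm

    def handle_query(self, user_query: str) -> str:
        context = self.retriever.search(user_query)               # gather contextual data
        tool_results = [t.call(user_query) for t in self.tools]   # call application APIs
        prompt = (f"Context: {context}\nTool data: {tool_results}\n"
                  f"Question: {user_query}")
        return self.llm.generate(prompt)                          # relay the response back

print(Assistant(Retriever(), [Tool()], LLM()).handle_query("Why is the payment service degraded?"))
```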
To view and understand the detailed architecture, refer to the following sections:
BMC HelixGPT architecture for Best Action Recommendations, Insights, and Ask HelixGPT use cases
BMC Helix AIOps uses a self-hosted model for BMC HelixGPT on Google Cloud Platform Vertex AI.
The data connectors read from the following data sources:
- BMC Helix AIOps
- BMC Helix ITSM
The following image shows the architecture of BMC HelixGPT for the Best Action Recommendations, Insights, and Ask HelixGPT use cases:
Components in the architecture diagram
The following table explains the components in the architecture image:
Component | Description |
---|---|
User | A user views a Situation in BMC Helix AIOps. |
BMC Helix applications | BMC Helix AIOps is the application. |
Assistant | Assistant is a generative AI orchestrator. Assistant hosts APIs and business logic to interact with BMC Helix applications and LLMs. |
API Plug-ins | The plug-ins retrieve data from data sources, Vector DB, application APIs, and LLMs. |
BMC HelixGPT configurations | Configurations are required for BMC HelixGPT in BMC Helix AIOps. |
Contextual data | The contextual data is the customer business data that is stored in an embedded format. |
Tools | APIs, such as the BMC Helix AIOps and BMC Helix ITSM APIs, are required for certain retrieval queries and service catalog fulfillment requests. |
Large Language Models | BMC HelixGPT is the self-hosted, fine-tuned LLM on Google Cloud Platform Vertex AI. It generates responses to user queries based on contextual data. |
Data Connectors | Data connectors ingest and train data from knowledge data sources into the embedding model. |
Embedding Model | An embedding model converts text data into a machine-readable representation, which is an array of floating-point numbers. |
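To make the embedding step concrete, the short example below shows what converting text into an array of floating-point numbers looks like. It uses an open-source embedding model purely as a stand-in; it is not the embedding model that BMC HelixGPT ships with.

```python
# Illustrative only: an open-source model stands in for the embedding model
# described above; it is not the model BMC HelixGPT uses.
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")
vector = model.encode("Situation: CPU saturation on payment-service nodes")

# The text is now a dense array of floating-point numbers (384 values for
# this model). The vector, not the raw text, is what gets stored in the
# Vector DB so that semantically similar queries can be matched later.
print(len(vector), vector[:5])
```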
Workflow of the BMC HelixGPT architecture
The following table lists the steps in the BMC HelixGPT workflow:
Step | Description |
---|---|
1 | A user views a Situation in BMC Helix AIOps. |
2 | A REST API registers the endpoint in Vertex AI. |
3a, 3b | The Assistant sends the prompt to the Vector DB that stores the contextual data and to the different APIs, such as the BMC Helix ITSM API and the BMC Helix AIOps API. Based on the prompt, the relevant contextual data is returned to the Assistant. |
4 | The Assistant sends the contextual data to the LLM through the API plug-ins. The LLM processes the contextual data and generates relevant and detailed responses to user queries. |
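The retrieval-and-generation flow in steps 3a, 3b, and 4 can be pictured with the sketch below. Every helper function in it is a hypothetical stand-in for the Vector DB lookup, the application API plug-ins, and the LLM call; none of them are BMC HelixGPT APIs.

```python
# Hedged sketch of workflow steps 3a, 3b, and 4. All functions are
# hypothetical stand-ins, not BMC HelixGPT APIs.

def search_vector_db(prompt: str, top_k: int = 5) -> list[str]:
    return ["<embedded contextual data>"]           # stand-in for step 3a

def fetch_aiops_data(prompt: str) -> str:
    return "<BMC Helix AIOps API data>"             # stand-in for step 3b

def fetch_itsm_data(prompt: str) -> str:
    return "<BMC Helix ITSM API data>"              # stand-in for step 3b

def call_llm(prompt: str, context: list[str]) -> str:
    return "<relevant, detailed response>"          # stand-in for step 4

def answer_situation_prompt(prompt: str) -> str:
    context = search_vector_db(prompt)                               # step 3a
    context += [fetch_aiops_data(prompt), fetch_itsm_data(prompt)]   # step 3b
    return call_llm(prompt, context)                                 # step 4

print(answer_situation_prompt("Summarize the probable cause of this Situation"))
```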
BMC HelixGPT architecture for summarization, chatbot, and knowledge article search use cases
The data connectors read from the following data sources:
- BMC Helix Business Workflows
- BMC Helix Knowledge Management by ComAround
- BMC Helix ITSM: Knowledge Management
- Confluence (Controlled Availability Customers Only)
- Microsoft SharePoint Online (Controlled Availability Customers Only)
- Salesforce Knowledge (Controlled Availability Customers Only)
- BMC Helix Customer Service Management (Controlled Availability Customers Only)
- Web (Controlled Availability Customers Only)
The embedding model creates embeddings for the data and ingests them into the Vector DB. You can schedule data ingestion into the Vector DB or ingest data on demand.
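The two ingestion modes can be pictured with the short sketch below. The ingest_source function and the daily schedule are hypothetical stand-ins; in the product, the data connectors and BMC HelixGPT configuration perform this work, not user-written code.

```python
# Hedged sketch of on-demand vs. scheduled ingestion into the Vector DB.
# ingest_source is a hypothetical stand-in for a data connector run.
import sched
import time

def ingest_source(source_name: str) -> None:
    # A real data connector would read articles from the source, create
    # embeddings with the embedding model, and write them to the Vector DB.
    print(f"Ingested knowledge data from {source_name}")

# On-demand ingestion: run the connector once, immediately.
ingest_source("BMC Helix Knowledge Management by ComAround")

# Scheduled ingestion: run the same job on a fixed schedule (here, in 24 hours).
scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(24 * 60 * 60, 1, ingest_source,
                argument=("Microsoft SharePoint Online",))
# scheduler.run()  # would block until the scheduled job fires
```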
The following image shows the architecture of BMC HelixGPT for the summarization, chatbot, and knowledge article search use cases:
Components in the architecture diagram
The following table explains the components in the architecture image:
Component | Description |
---|---|
User | A user sends a query through BMC Helix applications, such as BMC Helix Digital Workplace, BMC Helix ITSM, BMC Helix Virtual Agent, or Live Chat. Users may want to summarize data, ask questions via chat, or extract data from data sources. |
BMC Helix applications | The applications include BMC Helix Digital Workplace, BMC Helix ITSM, BMC Helix Virtual Agent, and Live Chat. |
Assistant | Assistant is a generative AI orchestrator. Assistant hosts APIs and business logic to interact with BMC Helix applications and LLMs. |
API Plug-ins | The plug-ins retrieve data from data sources, Vector DB, application APIs, and LLMs. |
BMC HelixGPT Manager | BMC HelixGPT Manager is the user interface where you define skills, prompts, and configurations (see the prompt library sketch after this table). |
Contextual data | The contextual data is the customer business data that is stored in an embedded format. |
Tools | APIs, such as the BMC Helix Digital Workplace and BMC Helix ITSM APIs, are required for certain retrieval queries and service catalog fulfillment requests. |
Large Language Models | The LLM generates responses to user queries based on contextual data. For the list of supported models, see Models supported by BMC HelixGPT. |
Data Connectors | The data connectors ingest and train data from knowledge data sources into the embedding model. |
Embedding Model | The embedding model converts text data into a machine-readable representation, which is an array of floating-point numbers. |
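As an illustration of the skills and prompts that BMC HelixGPT Manager maintains, the sketch below models a prompt library as a simple mapping from skill names to prompt templates. The skill names, templates, and get_prompt helper are hypothetical; they are not the actual BMC HelixGPT Manager data model.

```python
# Illustrative prompt library: skills map to prompt templates that the
# Assistant retrieves at run time. Names and templates are hypothetical.

PROMPT_LIBRARY = {
    "summarize_ticket": (
        "Summarize the following incident for a service desk agent:\n{context}"
    ),
    "answer_from_knowledge": (
        "Answer the user's question using only the knowledge articles below.\n"
        "Articles:\n{context}\nQuestion: {question}"
    ),
}

def get_prompt(skill: str, **fields: str) -> str:
    # The Assistant asks BMC HelixGPT Manager for the prompt that matches
    # the skill, then fills in the contextual data before calling the LLM.
    return PROMPT_LIBRARY[skill].format(**fields)

print(get_prompt("answer_from_knowledge",
                 context="<retrieved articles>",
                 question="How do I reset my VPN token?"))
```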
Workflow of the BMC HelixGPT architecture
The following table lists the steps in the BMC HelixGPT workflow:
Step | Description |
---|---|
1 | A user sends a query through one of the BMC Helix applications, such as BMC Helix Digital Workplace, BMC Helix ITSM, BMC Helix Virtual Agent, or Live Chat. The Conversation APIs send the query to the Assistant. |
2 | The Assistant sends the user query to BMC HelixGPT Manager through the API plug-ins. BMC HelixGPT Manager sends the relevant prompt to the Assistant. Prompts are defined in BMC HelixGPT Manager and stored in the prompt library. |
3a, 3b | The Assistant sends the prompt to the Vector DB that stores the contextual data and to the different APIs, such as the BMC Helix Digital Workplace API and the BMC Helix ITSM API. Based on the prompt, the relevant contextual data is returned to the Assistant. |
4 | The Assistant sends the contextual data to the LLM via API plug-ins. The LLM processes the contextual data and generates relevant and detailed responses to user queries. |
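Putting steps 1 through 4 together, the final sketch below walks through the chatbot flow end to end. As before, every function is a hypothetical stand-in for the Conversation APIs, BMC HelixGPT Manager, the Vector DB, the application API plug-ins, and the LLM.

```python
# Hedged sketch of workflow steps 1-4 for the chatbot use case.
# All functions are hypothetical stand-ins, not BMC HelixGPT APIs.

def get_prompt_from_manager(query: str) -> str:                         # step 2
    return ("Answer using the context below.\n"
            "Context: {context}\nQuestion: {question}")

def search_vector_db(prompt: str) -> list[str]:                         # step 3a
    return ["<embedded knowledge articles>"]

def fetch_dwp_and_itsm_data(prompt: str) -> list[str]:                  # step 3b
    return ["<BMC Helix Digital Workplace API data>",
            "<BMC Helix ITSM API data>"]

def call_llm(prompt: str) -> str:                                       # step 4
    return "<relevant, detailed response>"

def handle_conversation(query: str) -> str:   # step 1: query arrives via the Conversation APIs
    template = get_prompt_from_manager(query)                            # step 2
    context = search_vector_db(query) + fetch_dwp_and_itsm_data(query)   # steps 3a, 3b
    return call_llm(template.format(context="\n".join(context),
                                    question=query))                     # step 4

print(handle_conversation("How do I reset my VPN token?"))
```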