Best practices for BMC HelixGPT
We recommend that you review the following best practices before implementing BMC HelixGPT or any of its features.
Best practices for generating prompts
Use the following best practices for generating prompts in BMC HelixGPT:
- Limit the topics for your domain. For example, create prompts specific to the IT or HR domain.
- Use a friendly, helpful, and formal tone in responses.
- In long prompts, instruct BMC HelixGPT to ask for confirmation when it collects information for entities such as name, date, and location.
- Display the knowledge article summaries as paragraphs, bullet points, or numbered lists.
- Use headlines or sections for long answers.
- Do not create jokes.
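The practices above can be combined into a single system prompt. The following is a minimal, hypothetical example for an IT-domain skill; the wording and the `SYSTEM_PROMPT` name are illustrative assumptions, not shipped BMC HelixGPT defaults.

```python
# Hypothetical system prompt applying the best practices above.
# The exact wording is illustrative, not a BMC-published default.
SYSTEM_PROMPT = (
    "You are an IT service desk assistant. Answer only questions about "
    "IT topics such as incidents, requests, and knowledge articles.\n"
    "Use a friendly, helpful, and formal tone.\n"
    "Before acting on entities such as name, date, and location, "
    "ask the user to confirm them.\n"
    "Summarize knowledge articles as bullet points or numbered lists, "
    "and use section headings for long answers.\n"
    "Do not create jokes."
)
```

Keeping the prompt scoped to one domain (IT here) and stating the output format explicitly tends to produce more consistent responses than a broad, unscoped prompt.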
Best practices for configuring LLMs to minimize hallucinations in BMC HelixGPT
BMC HelixGPT leverages the generative AI capabilities offered by various vendors and associated models. For more information about supported vendors and models, see Supported models in BMC HelixGPT.
While BMC HelixGPT is provided with out-of-the-box configurations for client-provisioned LLMs, these LLMs might lack the contextual grounding and precision required for specific business needs, resulting in hallucinations and degraded BMC HelixGPT performance.
LLMs are inherently flexible and can be fine-tuned and optimized for specific use cases or organizational requirements. By adjusting model parameters, refining prompts, incorporating retrieval-augmented generation (RAG), and applying vendor-recommended settings, organizations can significantly reduce hallucinations and improve the accuracy and consistency of model output, and therefore the performance of BMC HelixGPT.
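One common parameter adjustment is lowering the sampling temperature, which makes output more deterministic and less prone to creative drift. The sketch below builds a generic parameter dictionary; the specific values and the `build_generation_params` helper are illustrative assumptions, not BMC-published defaults, and vendor-recommended settings should always take precedence.

```python
def build_generation_params(vendor_overrides=None):
    """Return hypothetical sampling parameters tuned to reduce hallucinations.

    The values below are illustrative assumptions, not BMC-published
    defaults; vendor-recommended settings should take precedence.
    """
    params = {
        "temperature": 0.0,  # deterministic output: less creative drift
        "top_p": 0.1,        # narrow nucleus sampling
        "max_tokens": 1024,  # bound answer length
    }
    if vendor_overrides:
        params.update(vendor_overrides)  # vendor guidance wins
    return params

# Example: apply a vendor-recommended temperature on top of the base values.
params = build_generation_params({"temperature": 0.2})
```

Keeping temperature and top_p low constrains the model to high-probability tokens, which generally favors grounded answers over speculative ones.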
Vendor-recommended configurations
The following table lists the references to the recommended configurations for each supported model.
| LLM provider | LLM host | Model name | Reference links |
|---|---|---|---|
| OpenAI | Microsoft Azure | GPT-4.1 | |
| OpenAI | Microsoft Azure | GPT-4.1 mini | |
| OpenAI | OpenAI | gpt-4o | |
| Google | Google Vertex AI | Gemini 2.0 Flash | |
| Meta | Google Vertex AI | Llama 4 | |
| Meta | Amazon Bedrock | Llama 4 | |
| Meta | Oracle Cloud | Llama 4 | |
Best practices for using the prompts automatically generated from catalog services
Best practices for exporting and importing skill settings
- When importing a service through the Publish Chat-enabled Service wizard, make sure that you use clear, concise, and unambiguous labels.
- To achieve the best response from the LLM, use a meaningful service name and description.
Related topics