Best practices for designing prompts in BMC HelixGPT
Prompting is the method by which administrators communicate goals, instructions, and context to AI agents. For more information about creating and managing prompts, see Creating and managing prompts.
BMC HelixGPT supports two prompting paradigms—chained and agentic:
Chained prompting uses a predefined, linear sequence of prompts. Each step provides specific instructions and expected outcomes, making this approach predictable and well-suited for scripted workflows or decision trees.
Agentic prompting follows the Reasoning and Action (ReAct) paradigm. A single prompt defines the goal and offers high-level guidance. The agent then autonomously determines how to achieve the goal—choosing tools, retrieving data, and asking clarifying questions as needed. This paradigm enables dynamic, context-aware interactions that adapt to the user's needs and environment.
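For illustration, here is a minimal sketch of an agentic (ReAct-style) goal prompt. The goal statement is taken from the example later in this topic; the section headings, guidance, and tool names are hypothetical and not part of any shipped BMC HelixGPT agent.

```markdown
# Goal
Guide employees through resolving IT issues by using available knowledge,
automation, and escalation tools.

# Guidance
- Search the knowledge base before recommending an action.
- Ask a clarifying question when the issue description is ambiguous.
- Escalate to a human agent if no resolution is found after two attempts.

# Tools (illustrative names)
- knowledge_search: retrieves relevant knowledge articles.
- create_incident: opens an incident on behalf of the user.
```

In a chained design, each of these steps would instead be a separate prompt executed in a fixed order; the agentic prompt leaves the sequencing to the agent.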
We recommend that you review the following best practices for designing prompts in BMC HelixGPT, with a primary focus on the agentic approach based on the ReAct paradigm.
Fundamental best practices
The following foundational best practices focus on building clear, reusable, and goal-driven prompts that align with enterprise tools and standards.
| Principle | Why it matters | Best practice |
|---|---|---|
| Understand skills, agents, tools, and versioning. | BMC HelixGPT architecture is modular and versioned, offering precise control and traceability across applications. | Understand the key principles of working with BMC HelixGPT. |
| Design prompts around agent goals, not just tasks. | BMC HelixGPT agents are autonomous. They need a clear mission, not just a list of actions. | For example: Guide employees through resolving IT issues by using available knowledge, automation, and escalation tools. |
| Use modular prompt templates for reusability. | Reusable prompt templates reduce duplication and improve consistency across agents. | |
| Use markdown to structure prompts. | The prompt is not just a command; it’s a mission briefing. Markdown formatting improves LLM comprehension and parsing, especially when the LLM must autonomously plan actions by using the ReAct framework. | See the structured prompt example after this table. |
| Align prompts with available tools and capabilities. | Agents must know what tools they can use and how to use them. | |
| Embed examples to guide agent behavior. | Examples help the large language model (LLM) interpret ambiguous inputs and align with enterprise expectations. | |
| Teach the agent enterprise-specific language. | LLMs might not understand internal acronyms, jargon, or naming conventions. | |
| Encourage clarification and interaction. | Not all user inputs will be clear. Agents must ask clarifying questions. | |
| Use prompt chaining only when necessary. | BMC HelixGPT supports both chained and agentic prompting. Chaining is useful for linear workflows but limits autonomy. | |
| Validate prompt behavior by using built-in tools before you deploy. | Testing ensures reliability and prevents unintended behavior in production. Even small prompt changes can lead to unexpected behavior. | |
| Validate prompt behavior across LLMs. | BMC HelixGPT supports multiple LLMs, and prompt behavior might vary between them. | |
| Maintain prompt governance and versioning. | Prompts evolve with business needs and environment updates. Prompt drift can lead to inconsistent agent behavior. | |
| Align prompts with business KPIs. | Prompts must drive measurable outcomes. | |
| Customize agents for personas and use cases. | Different roles require different agent behaviors. | |
Advanced prompting techniques
Learn advanced prompting strategies and architectural considerations for BMC HelixGPT. These techniques are designed to help maximize value, scalability, and alignment with enterprise goals.
| Principle | Why it matters | Best practice |
|---|---|---|
| Build a library of well-working prompt examples. | Having a curated set of proven prompt examples helps teams accelerate deployment, maintain consistency, and reduce trial and error during agent design. | |
| Leverage metadata for contextual prompting. | Agents can use metadata (for example, user role, location, or department) to tailor responses and improve relevance. | See the metadata-driven prompt sketch after this table. |
| Enable dynamic prompt composition. | Dynamic prompts adapt to user input, system state, and available tools. | |
| (Optional) Integrate with LangSmith for external testing. | LangSmith provides advanced observability and debugging for LLM-based agents. | Warning: When you use LangSmith, test data is sent externally and might leave the BMC HelixGPT environment. Use it only with non-sensitive data and in accordance with your organization’s data policies. |
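As an example of metadata-driven, dynamically composed prompting, the following fragment shows context values injected into a prompt at runtime. The placeholder syntax and field names are illustrative assumptions, not a documented BMC HelixGPT templating feature.

```markdown
# Context (values injected at runtime; placeholder syntax is illustrative)
- User role: {{user.role}}
- Department: {{user.department}}
- Location: {{user.location}}

# Instruction
Tailor responses to the user's role, department, and location. For example,
for a user in Finance, prefer knowledge articles tagged "finance-apps".
```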
Governance and maintenance techniques
As BMC HelixGPT agents become embedded in enterprise workflows, maintaining prompt quality, agent behavior, and skill configurations becomes critical. Learn the following best practices for governing the lifecycle of agentic AI components in a secure, scalable, and auditable way.
| Principle | Why it matters | Best practice |
|---|---|---|
| Establish prompt lifecycle management. | Prompts evolve as business needs, tools, and LLM capabilities change. | |
| Monitor agent behavior and feedback loops. | Continuous feedback ensures agents remain effective and aligned with user expectations. | |
| Maintain a centralized prompt and skill registry. | A shared registry prevents duplication, promotes reuse, and supports compliance. | See the registry record sketch after this table. |
| Govern skills, agents, and tool linkages. | Skills are the entry point for applications and must be tightly controlled. | |
| Enforce access control and change management. | Prompt and agent changes can affect thousands of users. | |
| Align governance with enterprise standards. | AI governance must align with broader IT and compliance frameworks. | |
| Manage agent upgrades across releases. | BMC Helix regularly ships updated agents and skills with each BMC HelixGPT version. Customers who have customized agents in previous versions must make sure those changes are preserved and compatible after an upgrade. | |
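A centralized registry can be as simple as one structured record per prompt. The following record format is a hypothetical sketch of the fields worth tracking; it is not a BMC HelixGPT artifact.

```markdown
## Prompt record: IT self-service agent, goal prompt
- Version: 3.2 (previous: 3.1; rollback available through Agent Studio versioning)
- Owner: ITSM platform team
- Linked skill and agent: example names from your environment
- LLMs validated: list the models tested before release
- Change summary: added escalation guidance and two clarification examples
- Approval: reviewer name and date
```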
Troubleshooting
Consult the following table to quickly diagnose common issues with agent behavior and apply targeted corrective actions:
| Issue | Action |
|---|---|
| The agent gives vague or irrelevant answers. | Review prompt clarity and goal definition. Add examples and terminology. |
| The agent fails to use available tools. | Check the tool list in the prompt. Make sure tool names match the system configuration. |
| Prompt changes cause regressions. | Use BMC HelixGPT Agent Studio versioning to compare and roll back. |
| The agent behaves unexpectedly after an upgrade. | Manually compare the new out-of-the-box agents with your customized versions and document the differences. |
| Functionality is missing after an upgrade. | Verify that the new versions of the prompts are being used, and review the documentation for configuring and enabling new functionality. New functionality is typically opt-in and must be enabled by administrators. |
| The LangSmith integration exposes sensitive data. | Use LangSmith only with non-sensitive test cases. Refer to Tracing LLM calls with LangSmith for setup guidance. |
Prompt review and deployment checklist
Use this checklist to validate prompts before deploying them into production environments or assigning them to BMC HelixGPT skills.
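As a starting point, the following items are distilled from the practices described earlier in this topic; extend the list to match your organization's standards.

```markdown
- [ ] The agent goal is stated as a mission, not just a list of tasks.
- [ ] The prompt is structured with markdown and is easy to parse.
- [ ] Every referenced tool exists, and tool names match the system configuration.
- [ ] Enterprise terminology and acronyms are defined in the prompt.
- [ ] Examples of expected behavior, including clarifying questions, are embedded.
- [ ] The prompt was validated with built-in tools and across the target LLMs.
- [ ] The prompt version is recorded, and a rollback path exists.
- [ ] The prompt maps to a measurable business outcome or KPI.
```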
Prompting guides and training resources
Consult the following prompting guides from the leading AI providers:
OpenAI GPT-4.1 Prompting Guide—A comprehensive guide with examples and best practices for agentic workflows, tool usage, and prompt tuning.
OpenAI API Prompt Engineering Best Practices—Tips for structuring prompts, using examples, and formatting for clarity and precision.
Google Prompting Essentials—A five-step course on writing effective prompts, tuning LLMs, and integrating AI into workflows.
Review the following training:
LinkedIn Learning: prompting and AI skills—Courses on upskilling, prompt design, and building a learning culture. Explore LinkedIn Learning resources and free courses.
Medium blogs on prompt engineering—Community-driven insights and tutorials on prompt design and agentic workflows.
Where to go from here