FAQ



General frequently asked questions


What is BMC HelixGPT?

BMC HelixGPT is a critical capability in the BMC AI strategy. We believe that generative AI will empower enterprises to capitalize on new AI-powered use cases to resolve problems faster, improve collaboration, and increase productivity.

BMC combines large language models (LLMs) with domain-specific product models built from proprietary and licensed enterprise data, designed to address today's business challenges. ServiceOps, ITOps, AIOps, enterprise knowledge management, and enterprise virtual agents are all powered by BMC HelixGPT, driving employee productivity with actionable insights and automated resolutions throughout the BMC Helix platform.

BMC HelixGPT is not just any AI—it's an advanced neural network trained on a rich, diverse dataset drawn from our extensive suite of products. This means BMC HelixGPT isn't learning from scratch; it's infused with IT wisdom, best practices, and insights from thousands of successful implementations. With BMC HelixGPT, you're not just getting an AI; you're leveraging a virtual expert that is tailored to understand and navigate the complex landscape of IT operations and service management like no other. It's designed to deliver precise, context-aware solutions across our product range, enhancing efficiency, reducing downtime, and providing the kind of customer experience that only comes from a leader in the industry.

How does BMC HelixGPT work?

BMC HelixGPT integrates open-source Large Language Models (LLMs) and fine-tunes them based on the domain expertise of our team and the knowledge we get from our own internal SaaSOps and BMC IT teams. LLMs are deep learning algorithms that can perform various natural language processing (NLP) tasks. LLMs can ingest, process, recognize, translate, predict, or generate content across all enterprise data lakes and activities – whether they occur online or offline. 

Is BMC Helix AIOps a prerequisite for BMC HelixGPT?

To use the following use cases with BMC HelixGPT, you must use BMC Helix AIOps:

  • Best Action Recommendations for remediating situations and code generation
  • Log insights for situations
  • Ask HelixGPT virtual agent

What are the use cases BMC HelixGPT supports for Service Management?
  • BMC HelixGPT-enabled Chatbot for meaningful conversations providing relevant answers (based on various knowledge sources) and requesting services without extensive training needs (replacing IBM Watson)
  • BMC HelixGPT in BMC Helix Digital Workplace for meaningful conversations providing relevant answers (based on various knowledge sources) and requesting services without extensive training needs (replacing search)
  • BMC HelixGPT summarization of Live Chat sessions to save agent time and accelerate ticket reviews (improving agent productivity)
  • BMC HelixGPT-enabled resolution summarization to accelerate root cause analysis (Resolution Insights)

Learn more about the use cases in Use-cases.

What are the use cases BMC HelixGPT supports for BMC Helix ITOM?
  • BMC HelixGPT-enabled Situation Summaries – Takes the knowledge-graph, causal AI-based situation summaries and translates them into human-understandable natural-language summaries. The benefit is that lower-skilled, Level 1 NOC personnel can now perform tasks that previously required much more highly skilled Level 3 personnel.
  • BMC HelixGPT-enabled best action recommendations – Provides recommendations to resolve issues based on past tickets and generates code in Ansible, Python, and Bash for quick resolution.
  • Log insights – Provides natural-language, summary-based insights from raw logs based on the situation.

Learn more about the use cases in Use-cases.

Do you plan to support BMC HelixGPT with a customer-specific LLM or with an embedded third-party provider?

Currently, there's no plan to support any customer-specific LLM or a 3rd-party provider embedded into BMC HelixGPT. 

Can BMC give me an LLM model instead of going to Azure or Google?

Currently, there's no plan to provide an out-of-the-box LLM model.

What type of foundational model (LLM) does BMC HelixGPT use?

Currently, BMC HelixGPT uses models such as GPT-4 on the Azure OpenAI platform or the OpenAI platform. Learn more about models in Models-in-BMC-HelixGPT.

Does BMC HelixGPT use historic data?

BMC HelixGPT uses historical ticket, incident, and event data, knowledge articles, incident summaries, and other data sources available within the BMC Helix environment. It can use the existing history of records created before BMC HelixGPT is implemented.

What is the architecture of BMC HelixGPT based on?

The architecture of BMC HelixGPT is based on Retrieval Augmented Generation (RAG): it generates answers by retrieving relevant enterprise content and passing it to the LLM together with the user's question.
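
The following minimal sketch illustrates the general RAG pattern. It is a conceptual illustration only; the class and function names are hypothetical and are not the BMC HelixGPT API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Chunk:
    text: str
    score: float

class VectorStore:
    """Stand-in for an indexed document store (for example, OpenSearch)."""
    def __init__(self, documents: List[str]):
        self.documents = documents

    def search(self, query: str, top_k: int = 3) -> List[Chunk]:
        # Toy relevance: count shared words. A real store uses embeddings.
        def score(doc: str) -> float:
            return len(set(query.lower().split()) & set(doc.lower().split()))
        ranked = sorted(self.documents, key=score, reverse=True)
        return [Chunk(text=d, score=score(d)) for d in ranked[:top_k]]

def answer_with_rag(question: str, store: VectorStore,
                    llm_complete: Callable[[str], str]) -> str:
    # 1. Retrieve the most relevant knowledge chunks for the question.
    sources = "\n\n".join(c.text for c in store.search(question))
    # 2. Augment: ground the prompt in the retrieved sources.
    prompt = (f"Answer the user's question from the given sources.\n\n"
              f"QUESTION: {question}\n\nSOURCES: {sources}\n\nFINAL ANSWER:")
    # 3. Generate: send the augmented prompt to the LLM.
    return llm_complete(prompt)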

Can US Government entities use BMC HelixGPT?

Contact your BMC account manager to discuss details for a FedRAMP deployment of BMC HelixGPT.

What is the main data source for BMC HelixGPT?

BMC HelixGPT uses the following data sources:

  • BMC Helix AIOps
  • BMC Helix Business Workflows
  • BMC Helix Knowledge Management
  • BMC Helix ITSM: Knowledge Management
  • Confluence
  • SharePoint

BMC HelixGPT uses data from these sources, such as tickets, incidents, events, logs, and knowledge articles.

What data sources are used for Best Action recommendation?

Best Action Recommendation leverages BMC Helix Incident Management, logs, past action executions, knowledge articles, and data from other ticket management solutions, such as JIRA, to suggest the best available actions to resolve an issue.

Does BMC HelixGPT support languages other than English? Is automatic translation supported?

BMC HelixGPT supports the languages of the underlying models. For example, if a model supports Chinese, English, Spanish, or Hindi, BMC HelixGPT supports these languages. LLMs and prompt configurations also support automatic translation. 

For the languages supported by Azure OpenAI, see Languages supported by Azure OpenAI.

Does BMC HelixGPT support resources like data warehouses and data lakes?

Currently, BMC HelixGPT supports the data warehouse of events, incidents, logs, and other data points within the BMC Helix applications.

Will I need special training to use the features powered by LLM models in this enterprise software?

We strive to make our software intuitive and user-friendly. While no special training is typically required, we provide comprehensive documentation and support resources to help you make the most of the advanced features enabled by LLM models. Learn more about BMC HelixGPT in Product-overview.

What is the maximum number of concurrent users that can place requests in the BMC HelixGPT-powered BMC Helix Virtual Agent without decreasing performance?

Concurrent users are limited by the tokens-per-minute (TPM) quota on your Azure OpenAI environment. Review the Azure documentation and contact your Azure representatives to understand sizing for concurrent users.
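
As a rough illustration of how a TPM quota translates into concurrency, consider the back-of-the-envelope calculation below. All numbers are assumptions for the example; use your actual Azure OpenAI quota and observed token usage.

# Back-of-the-envelope concurrency sizing. All values are assumptions.
tokens_per_minute_quota = 240_000    # assumed Azure OpenAI TPM quota
tokens_per_request = 2_000           # assumed prompt + completion size
requests_per_user_per_minute = 1     # assumed user activity rate

max_requests_per_minute = tokens_per_minute_quota // tokens_per_request
max_concurrent_users = max_requests_per_minute // requests_per_user_per_minute
print(max_requests_per_minute, max_concurrent_users)   # 120 120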

What’s the uptime of BMC HelixGPT? Is it available 24/7 or based on how it is configured?

BMC HelixGPT is subject to the same uptime service level agreement (SLA) as the rest of the BMC Helix services.

Is there a limit to the number of characters per message that a user can enter?

Yes. Messages are limited to 4,000 characters so that conversations held with BMC HelixGPT can be retrieved easily.

What is the average time that BMC HelixGPT takes to retrieve information from a source?

Around 3-5 seconds.

This response time applies to data sources within BMC's control, that is, data that is already indexed in the BMC HelixGPT embeddings model: BMC Helix Knowledge Management, BMC Helix ITSM: Knowledge Management, BMC Helix Business Workflows, SharePoint, and Confluence. Retrieval within this time frame requires that the data has been indexed.

Can BMC HelixGPT fail to recognize a phrase?

BMC HelixGPT can handle a variety of input formats and can recognize different phrasings of the same request.

Does BMC HelixGPT recognize the emotional tone of a query and respond empathetically?

Yes, emotional responses are driven by prompt configurations.

For how long are conversation logs maintained? What is the purpose of these logs, and who can access them?

BMC Software records all transactions (conversations) for administrative review. The logs are maintained for 30 days by default. While administrators can access the logs anytime, specific team members from BMC Software may also access the data for troubleshooting purposes.

However, access to logs is subject to the same security policies and role-based access as other Helix applications.

How does BMC HelixGPT process, share, and control data?

When a user asks a question, such as "How do I apply for a leave?" BMC HelixGPT collects the user query, uses an existing prompt, such as "Answer the user's question from given sources," and searches for the source knowledge articles.

BMC HelixGPT sends these inputs to the Azure OpenAI API. It receives the output from Azure OpenAI and displays it to the user.

However, BMC HelixGPT does not control the data.

For more information, see BMC-HelixGPT-architecture.

Does data hosting and processing occur in the Azure OpenAI instance?

BMC HelixGPT processes the data outside the Azure OpenAI instance before sending it to the Azure instance. The output received from Azure OpenAI is then displayed to the user as the answer.

For more information about data processing, see BMC-HelixGPT-architecture.


Security-related frequently asked questions


How secure is BMC HelixGPT?

BMC HelixGPT is a platform service inside the Helix tenant. It is subject to the same security policies and role-based access as other Helix applications.

BMC HelixGPT connects to the GPT platform, such as Azure OpenAI, and all transactions occur through the platform. The prompts and contextual data that you create are sent to the third-party Large Language Model (LLM), and you are responsible for determining the data retention policy with the third party (OpenAI, Microsoft, Google, and so on).

Learn more about security in Security-and-privacy-for-BMC-HelixGPT.

How is user information protected in BMC HelixGPT?

Customers bring their own Azure OpenAI key under direct agreement with Microsoft to protect data. The rest of the data processing occurs under existing agreements and security protocols within the BMC Helix platform. 

I use BMC Helix Virtual Agent. Is there a different security model for BMC HelixGPT?

BMC HelixGPT is a platform service inside your same Helix tenant. It is subject to the same security policies and role-based access as other Helix applications. The third-party product involved with BMC HelixGPT is Azure OpenAI; calls to it are secured under your private agreement with Microsoft. Learn more about the security model in Security-and-privacy-for-BMC-HelixGPT.

Are there any privacy concerns associated with the use of LLM models in this software?

Privacy is a top priority for BMC HelixGPT. Our fine-tuned LLM models make sure that sensitive data is handled securely and in compliance with relevant regulations. Additionally, the software is designed to minimize the risk of data breaches or unauthorized access. BMC HelixGPT follows all security protocols that are followed for other BMC Helix applications and services. Learn more about security in Security-and-privacy-for-BMC-HelixGPT.

Does BMC HelixGPT encrypt data in transit and at rest?

BMC HelixGPT leverages contextual data stored in a vector database (OpenSearch), which is securely hosted within BMC data centers worldwide. Data at rest is protected through full-disk encryption (AES-256). BMC manages all encryption keys. Learn more about data encryption in Security-and-privacy-for-BMC-HelixGPT.

How is inappropriate or unsafe content blocked?

BMC HelixGPT includes out-of-the-box guardrail examples to refuse out-of-topic requests. Customers can train and tune the LLM further to strengthen the guardrails.

Can BMC HelixGPT access data?

BMC HelixGPT can access only the tenant data that your company administrators configure.

How do you handle adversarial attacks that attempt to reproduce personal data or make the LLM behave inappropriately?

Administrators can set guardrail prompts and also set content filters in Azure OpenAI controls. The data shared is restricted based on the user's access permissions.

For example, an agent will not get an answer to the following query:
Show the tickets that Mary Mann is working on.
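
For illustration, a guardrail instruction along the following lines could be added to a prompt. This is a hypothetical example written in the style of the sample prompts later in this FAQ, not a shipped BMC HelixGPT prompt:

You are an enterprise IT service assistant. Answer only questions about the user's own requests, services, and knowledge sources. If a question is off-topic, or asks about another person's tickets or personal data, politely refuse and explain what you can help with. {global_prompt}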


Pricing and packaging-related frequently asked questions


How does the licensing for BMC HelixGPT work?

Currently, BMC HelixGPT is not licensed as a separate SKU; it is part of the existing application or suite SKUs.

Customers on the following SKUs are entitled to use HelixGPT capabilities:

  • BMC Helix Service Management Standard
  • BMC Helix Service Management Advanced

The following SKUs are not entitled to use HelixGPT capabilities:

  • BMC Helix ITSM SaaS or OnPrem
  • BMC Helix Digital Workplace Basic or Advanced

Learn more about pricing in Requirements-for-BMC-HelixGPT.

Does BMC HelixGPT replace BMC Helix Virtual Agent?

BMC HelixGPT is not a replacement for Live Chat or BMC Helix Virtual Agent; it is an alternative to the IBM Watson generative AI natural language engine. Learn more about chatbots in Chatbots-and-BMC-HelixGPT.

How are we pricing the ‘Bring Your Own Model/API Key and Compute’ option?

Customers on the following SKUs are entitled to use HelixGPT capabilities:

  • BMC Helix Service Management Advanced
  • BMC Helix Virtual Agent Basic or Advanced (grandfathered in for the conversational chatbot capability)
  • BMC Helix ITOM Advance

The following SKUs are not entitled to use HelixGPT capabilities:

  • BMC Helix ITSM SaaS or OnPrem
  • BMC Helix Digital Workplace Basic or Advanced

Is it mandatory to have a GenAI account and API key for Azure OpenAI?

BMC HelixGPT supports Azure OpenAI for the BMC Helix ITSM use cases and Google Cloud Vertex AI for the BMC Helix AIOps use cases. You must have a GenAI account with Azure OpenAI or GCP Vertex AI, and an API key to use it.

What additional costs do customers need to bear to use BMC HelixGPT?

The BMC Helix Service Management Advanced SKU includes the BMC HelixGPT entitlement. Customers using this SKU must bear the additional cost of using the Azure OpenAI platform or the OpenAI platform. These costs are based on the number of tokens consumed by models such as Llama 2, GPT-3.5 Turbo, or GPT-4, and are separate from and additional to the costs of BMC licensing.

Learn more about the additional costs in Azure OpenAI Service pricing.

Can customers bring their own Generative AI account for the BMC Helix AIOps use cases?

Yes, customers can bring their own generative AI account, and BMC can deploy its fine-tuned model into that account. Currently, we support only Google Cloud Platform Vertex AI; support for Azure OpenAI is coming soon.

How are the models maintained for the BMC Helix AIOps use cases?

Every BMC Helix release contains a BMC Helix ITOM GPT model, which is updated with regular releases. Tenant-specific training requires models to be rebuilt periodically, depending on freshness requirements.

How does the ‘Bring Your Own Model/API Key and Compute’ work?

Bringing your own GenAI environment and account allows you to keep your data within your own GenAI deployment (control/VPC) boundaries.

Depending on the GenAI use case, this can mean using standard foundation models, such as those on Microsoft Azure OpenAI, or deploying the BMC fine-tuned model into the customer's GenAI environment.

If a customer doesn't have an existing LLM, do they need to acquire their own LLM?

The customer must bring their own GPT services (not an LLM). We will train on the data in their GPT services.

Why would a customer prefer to use their own OpenAI account and compute?

Customers might prefer to use their OpenAI account and compute with HelixGPT for several reasons:

  • Customization and Flexibility: Many companies are experimenting with GenAI, and may want to incorporate learnings from their experiments. With their own GenAI resources, customers have more control over the customization and configuration of their environment. This flexibility allows for optimizations based on specific needs or requirements, which can be especially important for businesses with unique or resource-intensive applications.
  • Financial Incentives: Hyperscalers such as AWS, Google Cloud, and Microsoft Azure often provide credits as incentives to encourage usage of their platforms. Customers can use these credits to offset the costs associated with running compute-intensive applications like LLMs.
  • Data Privacy and Security: Using their own compute can offer enhanced data privacy and security. Customers might have specific security policies or compliance requirements that are easier to maintain within their own infrastructure.
  • Performance Optimization: Customers can optimize performance based on their specific use case, potentially achieving better results by tailoring their compute resources to the demands of their applications.
  • Integration and Control: Direct control over the compute environment can simplify integration with existing systems and processes, offering better management and oversight of the deployment and usage of HelixGPT and other AI tools.
  • Reduced Latency: For applications where latency is a critical factor, using local compute resources can reduce the time it takes for data to travel between the user and the server, thereby improving response times for AI-driven applications.

What if a customer does not want to use their own Model/API key and compute?

Currently, in version 23.3.01, we support Azure OpenAI. We are exploring other GenAI providers and might support them in the future. For more information, contact your account manager.

How does Google Cloud Platform licensing work with Vertex LLM?

Contact BMC Customer Support to learn more about licensing for using Google Cloud Platform Vertex AI for BMC HelixGPT. 

Can customers bring their own LLM model for BMC Helix ITOM use cases?

No. We rely on a BMC fine-tuned model.

Can the model learn from data coming in from multiple customers?

No, we do not support model learning from data sourced from different customers for PII, security, and compliance reasons.

How do I deploy the BMC fine-tuned model into a customer's generative AI account?

Learn how to set up and configure BMC HelixGPT in Setting-up-and-going-live.


AI-related frequently asked questions


What is a Large Language Model (LLM)?

An LLM is a machine-learning model designed to process natural-language data and generate responses based on user input. It uses deep neural networks to analyze patterns in text and produce high-quality output, such as translations and summaries.

What’s the difference between an LLM and GPT?

An LLM (Large Language Model) is a general term for AI models trained to understand and generate human language. GPT (Generative Pre-trained Transformer) is a specific example of an LLM, developed by OpenAI, known for its ability to generate text based on its training on a vast dataset. So, while all GPTs are LLMs, not all LLMs are GPTs.

What’s the difference between Microsoft Azure and OpenAI?

Microsoft Azure is a cloud computing service by Microsoft offering various cloud solutions, such as hosting and storage. OpenAI focuses on AI research, creating advanced AI models like GPT. Azure provides the infrastructure, while OpenAI develops the AI technologies; the two companies collaborate, with Azure supporting OpenAI's computing needs.

What role do Hyperscalers play here?

Hyperscalers such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure provide the necessary cloud infrastructure to support GPT. They offer the computing power required for GPT to process and generate text quickly and efficiently.

What’s the connection between GPT and Hyperscalers?

GPT relies on the cloud infrastructure provided by Hyperscalers to function. This includes data storage, computing power for processing large datasets, and the ability to scale up resources as demand increases.

How do LLM models differ from traditional rule-based systems in this enterprise software?

Unlike traditional rule-based systems that rely on predefined instructions, LLM models learn from vast amounts of data and can adapt to new contexts. This allows for more flexibility and accuracy in handling complex tasks, because the software can recognize patterns and make informed decisions based on the data it is trained on.
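
To make the contrast concrete, here is a toy rule-based intent handler. Every phrasing it should understand must be enumerated by hand, whereas an LLM can generalize to phrasings it was never explicitly configured for. The rules shown are illustrative assumptions, not product configuration.

def rule_based_intent(message: str) -> str:
    # Every supported phrasing must be anticipated in advance.
    rules = {
        "reset my password": "PASSWORD_RESET",
        "password reset": "PASSWORD_RESET",
        "order a laptop": "HARDWARE_REQUEST",
    }
    text = message.lower()
    for phrase, intent in rules.items():
        if phrase in text:
            return intent
    # "I'm locked out of my account" matches no rule and falls through,
    # although an LLM would recognize it as a password-related issue.
    return "UNKNOWN"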

How can I monitor the LLM's operation to make sure that each conversation and data usage is appropriate for each user and aligns with overall operational standards?

Administrators can review conversation logs as required. Additionally, end users can provide feedback by using the like option or by adding notes.

BMC HelixGPT responses are limited to your enterprise context by default. Therefore, a query such as “Write a poem about stars” would be considered out of context and denied. However, you can further refine these constraints if needed.


Data ingestion-related frequently asked questions


Are there any prerequisites for ingesting data?

Before you ingest data, you must configure the data sources. Learn how to configure data sources in Adding-data-sources-in-BMC-HelixGPT.

How do I configure data ingestion?

Configure the data ingestion by creating data connection jobs. Before users start querying in your chatbot, you must verify whether data ingestion is completed successfully. Learn more about data ingestion in Ingesting-data-into-BMC-HelixGPT.

After the connection is configured, when is data ingested?

Data ingestion starts as soon as the data connection job is saved successfully. Make sure that the details that you specify for your data source are correct. Learn more about configuring a data source in Adding-data-sources-in-BMC-HelixGPT.

Can I schedule the ingestion of data?

When you create a data connection job, data ingestion starts immediately. If you want to schedule the ingestion of data at a particular time, use the out-of-the-box rules that are provided for the following data sources:

  • BMC Helix Business Workflows
  • BMC Helix Knowledge Management by ComAround
  • BMC Helix ITSM: Knowledge Management

Learn more about scheduling data ingestion in Ingesting-data-into-BMC-HelixGPT.

How do I know that my data is being ingested?

Data ingestion takes place one item or document at a time, and the time required for ingestion to complete depends on the number of items or documents and the volume of data. If a user submits queries during data ingestion, the responses might be incorrect or incomplete. It is therefore important to verify that data ingestion is completed successfully before users start using your chatbot.

Verify whether data ingestion is complete by viewing the DataConnectionJobStep record definition in BMC Helix Innovation Studio. Learn more about verifying data ingestion in Ingesting-data-into-BMC-HelixGPT.

How do I update the ingested data?

You can update ingested data by running the data connection job. While creating the job, you must specify the date and time when the data was last ingested. The delta of the data since the last ingestion is updated when the job is run, as in the sketch below. Learn more about running the data connection job in Ingesting-data-into-BMC-HelixGPT.
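
Conceptually, a delta run filters source records by their last-modified time. The following sketch is illustrative only; the field names and the ingest callback are assumptions, not the actual job API.

from datetime import datetime, timezone
from typing import Callable, Dict, List

def run_delta_ingestion(source_records: List[Dict],
                        last_ingested_at: datetime,
                        ingest: Callable[[Dict], None]) -> datetime:
    # Re-ingest only the records modified after the last ingestion.
    delta = [r for r in source_records
             if r["modified_date"] > last_ingested_at]
    for record in delta:
        ingest(record)   # for example, index into the embeddings store
    # The current time becomes the watermark for the next run.
    return datetime.now(timezone.utc)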

How can I delete the data that I don't want BMC HelixGPT to use for conversations?

To delete existing data from BMC HelixGPT, create a new data connection job and enable the Sync deletions option. Data that is deleted from the source is removed from BMC HelixGPT. Learn more about data ingestion in Ingesting-data-into-BMC-HelixGPT.

Where do I create the chatbot to use the BMC HelixGPT I have configured?

You can create chatbots in BMC Helix Virtual Agent. For more information, see Setting up chatbots for your line of business.

What data does the LLM have access to?

Administrators configure the resources that the LLM can access. They also determine the scope of the use cases. For example, an administrator can make only certain knowledge articles accessible to BMC Helix Virtual Agent, or make only HR-related catalog services available through BMC HelixGPT.


Prompts-related frequently asked questions

How do I add JSON to a prompt?

Use double brackets ({{ }}) instead of single opening or closing brackets. For example, to include a JSON snippet whose values, such as the guest's name or company name, are replaced at run time, use the brackets in the following manner:

{{
    "assistantResponse": #Question to get Guest(s) name or company name#,
    "id" : "29bcde5d-e886-6e8d-b931-52d13cf21521"
}}

How do I add variables in a prompt?

You must make sure that the Assistant recognizes the variable that you want to use in a prompt. Use the {VARIABLE_NAME} format to add variables to a prompt.
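
For example, a prompt line such as the following (the variable name SERVICE_NAME is hypothetical) is filled in at run time with the value that the Assistant holds for that variable:

Summarize the open incidents for the service {SERVICE_NAME}.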

When do I use the 'no_rag' prompt variable in the Knowledge prompt?

The {no_rag} variable should be used in the Knowledge prompt when you don't want to make an OpenSearch API call but instead want to use the prompt text and the {variables.context} information to summarize the data.

When creating the Knowledge prompt, make sure that it contains the following prompt variables:

  • {no_rag}—When a prompt contains this variable, it prevents the Assistant from making an OpenSearch API call to retrieve the data.
  • {variables.context}—The information sent by the client application in this variable is used before the LLM call is made. In some cases, the variable must be placed after the {summaries} variable, and in other cases before it. Test your use case and place {variables.context} in the appropriate position.

Here's a sample Knowledge prompt:

Answer the following question based on the provided information. Ensure the response is comprehensive, prioritizing clarity over brevity. Do not generate incomplete answers. You must return complete answers or else bad things will happen. If the solution involves instructional steps, present them in a numbered list. {global_prompt} If instructional steps are required use a numbered list. Create a final answer using the references ("SOURCES").

Don't try to make up an answer. ALWAYS return a "SOURCES" part in your answer.



QUESTION: {input}



SOURCES: {summaries}

{variables.context}

{no_rag}

FINAL ANSWER:

How do I add data to the "variables.context" parameter?

The {variables.context} parameter can be used in the following ways:

  1. Pass a flat JSON structure as follows:

    { "variables":
        "context":{"incident_title":"", "incident_summary":""....}
    }
  2. Pass as an array data structure as follows:

    { "variables":
        "context":[
          {
               "context_label":"Locale",
               "context_value":"Answer in the user's locale language which is ja",
          },
         {
               "context_label":"Greet",
               "context_value":"Greet the user by their First Name 'James'",
          },
            ...
            ...
            ...
        ]
    }

BMC recommends that you use the second method to define the variables.context parameter.

How do I use the Summarization prompt?

Use the Summarization prompt to summarize any conversation between BMC HelixGPT and the user. You cannot use the {variables.context} parameter in this prompt.

Mandatory prompt variable: {conversation}

Sample Summarization Prompt

You are an intelligent bot that generates a SUMMARY for a given chat conversation. Do not include text like "Sure, here is a concise summary of the chat conversation:"
Do not hallucinate.
{global_prompt}
Your task is to generate a two-to-three-line SUMMARY only from the given chat conversation.

{conversation}

SUMMARY:

How do I use the Router prompt?

This prompt routes the Assistant's execution logic. Set the Starter Prompt flag to True in the prompt creation popup.

Mandatory prompt variable: {input}

How do I use the Request and DWP Retriever prompts?

You must use the Request and DWP Retriever prompts in conjunction with each other.

Request prompt

Mandatory prompt variables: {input} and {summaries}

This prompt defines how the request data is displayed: which columns to use and what format to apply.

DWP Retriever prompt

Mandatory prompt variable: {input}

This prompt converts the natural-language query into a JSON qualification.

Example

"Give me my closed requests from May 2024."

The above query is converted into the following qualification:

'{STATUS:'Closed', START_DATE:'05/01/2024', END_DATE:'05/31/2024'}'

This qualification is passed to the DWP API to get the results; the Request prompt is then used to make the LLM call and format the answer.

How do I use the Ticket and ITSM Retriever prompts?

You must use the Ticket and ITSM Retriever prompts in conjunction with each other.

Ticket prompt

Mandatory prompt variables: {input} and {summaries}

This prompt defines how the incident data is displayed: which columns to use and what format to apply.

ITSM Retriever prompt

Mandatory prompt variable: {input}

This prompt converts the natural-language query into a JSON qualification.

Example:

"Give me my closed incidents from May 2024."

The above query is converted into the following qualification:

'{STATUS:'Closed', START_DATE:'05/01/2024', END_DATE:'05/31/2024'}'

This qualification is passed to the ITSM API to get the results; the Ticket prompt is then used to make the LLM call and format the answer.

How do I use the Service prompt?

Mandatory prompt variables: {history} and {input}

This is the DWP catalog service prompt. It is auto-generated by the auto-prompt wizard and added to the Skill.


 
