Prompt Engineering: Unleashing the Power of LLMs in Conversational AI Solutions

Jul 5, 2023 | Blog


Conversational AI, an integral part of the modern digital landscape, has transformed the way businesses engage with their customers. Whether it’s in customer support or personal assistants, these AI-driven solutions bring about significant efficiency and cost-effectiveness by automating routine tasks. This automation extends to summarizing conversations, identifying conversational topics, selecting the most relevant FAQs, and determining the subsequent steps in a customer’s journey.

Large Language Models (LLMs), when combined with intelligent ‘prompt engineering’, become a vital asset in enhancing the efficacy of conversational AI solutions. This article aims to provide insights into how to leverage LLMs and prompt engineering in these contexts.

Summarizing the Conversation

In the realm of conversational AI, the ability to summarize conversations is a key attribute. Instead of storing exhaustive transcripts, it’s far more efficient to use AI to generate a concise summary of the interactions. In cases of bot-to-human agent handovers, the agent can then quickly get up to speed by reviewing this summary rather than a lengthy conversation history.

LLMs handle this task well when given the right prompt for generating these summaries:

# Example code for generating a conversation summary with an LLM
import openai

conversation = '''User: I want to file a claim
Agent: Sure, I can help you with filing a claim. Can you please provide a brief description of the incident?
User: Yes, of course. Yesterday, while driving to work, another car hit me from behind at a red light. My car sustained significant damage, and I also experienced some neck and back pain.
Agent: Sorry to hear that. Can you please provide the date and time of the incident?
User: Yesterday around 8am
Agent: Can you please provide the policy number?
User: My policy number is 1232323
Agent: Thank you for providing the details. I will now proceed to create a claim file for you. Our claims department will review the information provided and may need additional documentation or evidence related to the accident. We will contact you with further instructions and keep you updated on the progress of your claim.'''

prompt = f"Summarize the following conversation:\n{conversation}"
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    temperature=0,
    request_timeout=9,
)
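Once the call returns, the generated summary sits in the first choice of the completion response. A minimal sketch of extracting it, using a plain dict to stand in for the real response object (the summary text shown is illustrative):

```python
# The Completion API returns the generated text in choices[0].text.
# A plain dict stands in for the real response object here, so the
# extraction logic can be shown without making an API call.
mock_response = {
    "choices": [
        {"text": "\nThe user reported a rear-end collision and filed a claim under policy 1232323."}
    ]
}

# Strip leading/trailing whitespace the model often includes.
summary = mock_response["choices"][0]["text"].strip()
print(summary)
```

This summary string is what a bot-to-agent handover would carry in place of the full transcript.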

Identifying Conversation Topics

One of the essential tasks in many conversational AI systems is topic identification. For instance, when a user communicates a problem about their bill, the system can direct the conversation to the billing department, enhancing the efficiency of the support process.

We propose two approaches for topic identification using LLMs and prompt engineering:

Approach 1: Formulate a single prompt with all potential topics. This approach is suitable when the number of topics is fairly small (3 to 5), to avoid potential runtime latency issues.

prompt = '''Follow the instructions below to pick the most appropriate topic from the list of topics, based on the user input.
Instructions:
1. You can use the topic description and sample utterances to identify the right topic. Don't add more text to the response.
2. If you don't know the topic, return the topic name as "" and the response as "I did not get that. I can help you with topics like {topics}". Enhance the response if need be.
3. You are strictly required to provide the topic in JSON as given below: {"topic": topic, "response": response}
4. When responding, simply acknowledge the user without requesting any additional information. Avoid asking additional questions or providing prompts in the response.
Topics:
[
  {
    "topic": "Check payment status",
    "description": "Helps customers check the status of their insurance payments.",
    "sample utterances": ["Can you please check the status of my payment?", "I'd like to know the status of my payment, can you help?", "Could you provide me with an update on the payment?", "i want to check my payment status"]
  },
  {
    "topic": "Claim status",
    "description": "Helps policyholders check the status of their claims.",
    "sample utterances": ["Track a claim", "What is the status of my claim?", "Can I check the status of my claim?", "How can I check the status of my claim?", "What is the current status of my claim?"]
  },
  {
    "topic": "File a claim",
    "description": "Helps file a new insurance claim.",
    "sample utterances": ["How do I file a new insurance claim?", "I need to file a new insurance claim", "I want to file a new insurance claim", "Can I file a new insurance claim?", "Help me file a new insurance claim", "File A Claim"]
  }
]
User input: i want to file a claim'''
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    temperature=0,
    request_timeout=9,
)
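The prompt instructs the model to return `{"topic": ..., "response": ...}`, but LLM output is not guaranteed to be valid JSON, so a defensive parse is worth the few extra lines. A sketch (the fallback message is an assumption, not from the prompt above):

```python
# Parse the topic-classification output. The model is asked for
# {"topic": ..., "response": ...}; guard against malformed JSON and
# treat an empty topic as "no topic identified".
import json

def parse_topic(raw):
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Fallback message is illustrative; use your own error handling.
        return None, "Sorry, I did not get that."
    return data.get("topic") or None, data.get("response", "")

topic, reply = parse_topic('{"topic": "File a claim", "response": "ok"}')
print(topic)  # -> File a claim
```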

Approach 2: Create a batch process that evaluates each topic in its own prompt, then determine the top topics from the combined results. This scales better when the topic list is larger.

batch_prompt = [
    '''Here are a few details of a topic:
name: Check payment status
description: Helps customers check the status of their insurance payments.
sample utterances: ["Can you please check the status of my payment?", "I'd like to know the status of my payment, can you help?", "Could you provide me with an update on the payment?", "i want to check my payment status"]
Identify whether the user input given below maps to the topic name given above.
user input: i want to file a claim
Give the response in the following format:
topic: topic_name
If the user input does not match, respond with topic: null''',
    '''Here are a few details of a topic:
name: Claim status
description: Helps policyholders check the status of their claims.
sample utterances: ["Track a claim", "What is the status of my claim?", "Can I check the status of my claim?", "How can I check the status of my claim?", "What is the current status of my claim?"]
Identify whether the user input given below maps to the topic name given above.
user input: i want to file a claim
Give the response in the following format:
topic: topic_name
If the user input does not match, respond with topic: null''',
    '''Here are a few details of a topic:
name: File a claim
description: Helps file a new insurance claim.
sample utterances: ["How do I file a new insurance claim?", "I need to file a new insurance claim", "I want to file a new insurance claim", "Can I file a new insurance claim?", "Help me file a new insurance claim", "File A Claim"]
Identify whether the user input given below maps to the topic name given above.
user input: i want to file a claim
Give the response in the following format:
topic: topic_name
If the user input does not match, respond with topic: null''',
]

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=batch_prompt,
    max_tokens=300,
    temperature=0,
    request_timeout=9,
)

In the batch process, more than one topic may be selected. When multiple topics match, we can apply Approach 1 to just the matched topics to find a unique winner.
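The two-stage flow can be sketched as follows. `resolve_topic` collects the non-null topics from the per-topic batch responses and reports whether a unique winner exists; when it returns None with multiple matches, those topics would be fed back into an Approach 1 prompt (the wrapper for that call is not shown):

```python
# Two-stage topic resolution: run the per-topic batch prompts first; if
# exactly one topic matches, use it, otherwise fall back to a
# single-prompt tie-break (Approach 1) over the matched topics.

def resolve_topic(batch_results):
    """batch_results: one 'topic: ...' value per batch prompt, with
    "null" meaning no match. Returns the unique match, or None when a
    tie-break (or a reprompt, for zero matches) is still needed."""
    matches = [t for t in batch_results if t and t != "null"]
    if len(matches) == 1:
        return matches[0]
    return None  # 0 or >1 matches: no unique winner yet

# Example: only the "File a claim" prompt matched.
print(resolve_topic(["null", "null", "File a claim"]))  # -> File a claim
```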

Selecting Matching FAQs

Choosing the right FAQ response is a standard feature of conversational AI. The traditional method is to create embeddings of all FAQs and of the user's input, then compute cosine similarity between them. With prompt engineering, however, LLMs offer a simpler alternative for a small set of FAQs: list all FAQs in a single prompt and guide the LLM to select the most relevant one. Note that this method may not scale well to a large number of FAQs.
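For reference, the traditional embedding approach looks like this. The sketch uses toy 3-dimensional vectors purely for illustration; in practice each vector would come from an embedding model, and the highest-cosine-similarity FAQ wins:

```python
# Traditional FAQ matching: embed each FAQ and the user's input, then
# pick the FAQ whose embedding is closest by cosine similarity.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_faq(user_vec, faq_vecs):
    """Return the index of the FAQ embedding closest to the user input."""
    return max(range(len(faq_vecs)),
               key=lambda i: cosine_similarity(user_vec, faq_vecs[i]))

# Toy 3-dimensional "embeddings" standing in for real model output.
faqs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
print(best_faq([0.9, 0.1, 0.0], faqs))  # -> 0
```

The prompt-based alternative below trades this vector machinery for a single LLM call.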

# Code example for choosing a matching FAQ with an LLM
user_input = "How long does it take to get my settlement check?"  # example customer message

prompt = f'''Pick the most appropriate FAQ from the below list of FAQs based on the user input. You are strictly required to provide the answer in JSON as given below.
If you don't find a matching FAQ, return faq_question as ""
{{"faq_question": question}}
FAQs:
1. How long does it take to get an insurance settlement check?
2. Can you explain the deductible amount and how it affects my claim?
3. What's needed for an auto quote?
4. How can my credit score benefit me?
5. What types of insurance coverage does your company offer?
6. Are there any specific eligibility criteria for purchasing insurance from your company?
7. What factors are considered when determining insurance premiums?
8. Do you offer discounts or bundling options for multiple policies?
9. How can I contact your company for customer support or inquiries?
10. What is the process for renewing or canceling insurance policies?
11. Are there any additional benefits or services provided by your company?
12. How financially stable is your company? Is it rated by any credit agencies?
User input: {user_input}'''

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    temperature=0,
    request_timeout=9,
)

Determining the Next State as a Decision Engine

Conversational flows often follow a structured process, akin to a flow chart with conditional branching. This could traditionally require coding a complex sequence of conditional statements. LLMs, however, allow us to use them as a decision engine, thereby replacing convoluted code with plain-text prompt instructions.

# Code example for using an LLM to decide the next state
prompt = '''Assistant is an intelligent chatbot designed to help users with queries related to filing a claim. Check the conversation below and follow the instructions below to answer the user's queries.
Instructions:
1. Ask for the user's policy number. Give this response only if the user's response in the earlier conversation turn is not a deviation. Don't add any extra statement or word to this response.
2. If the user's query deviates, search through the list of relevant FAQs to determine if there is a matching response. If a match is found, show the matching FAQ in the faq_question field.
3. If the user's query deviates and there is no matching FAQ, give an appropriate answer or guide the user to provide the expected response.
4. Evaluate steps 2 and 3 to determine the most appropriate course of action. Include an action tag indicating whether the response was a deviation, and whether the information provided came from an FAQ. Format the response as JSON.
You are strictly required to provide the response in JSON. The response format should be:
{
  "response": {answer the agent provides},
  "response_type": "progress" or "faq" or "deviated",
  "faq_question": {question} - if response_type is "faq", else null
}
FAQs:
1. How long does it take to get an insurance settlement check?
2. Can you explain the deductible amount and how it affects my claim?
3. Do you offer discounts or bundling options for multiple policies?
4. How can I contact your company for customer support or inquiries?
5. What is the process for renewing or canceling insurance policies?

Conversation:
User: i want to file a claim
Agent:'''

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    temperature=0,
    request_timeout=9,
)

This approach allows for the use of straightforward, human-readable conditions for branching, simplifying development efforts and increasing maintainability.
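To act on the model's decision, the application parses the returned JSON and branches on response_type. A sketch of that routing layer; the state names ("next_state", "reprompt") and return values are hypothetical placeholders for whatever your dialog framework uses:

```python
# Route the conversation based on the JSON the decision-engine prompt
# returns. The response_type values ("progress", "faq", "deviated")
# come from the prompt's schema; the returned state names are
# illustrative placeholders.
import json

def route(llm_output):
    decision = json.loads(llm_output)
    rtype = decision["response_type"]
    if rtype == "progress":
        return "next_state"               # continue the claim-filing flow
    if rtype == "faq":
        return f"faq:{decision['faq_question']}"  # answer from the FAQ, then resume
    return "reprompt"                     # deviated with no matching FAQ

print(route('{"response": "Sure, what is your policy number?", '
            '"response_type": "progress", "faq_question": null}'))  # -> next_state
```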

In conclusion, prompt engineering with LLMs presents an effective way to enhance conversational AI solutions, boosting their efficiency, scalability, and ease of maintenance. From summarizing conversations, identifying topics, selecting FAQs, to deciding the next steps, the combination of LLMs and prompt engineering is set to play a transformative role in the evolution of conversational AI.
