Conversational AI, an integral part of the modern digital landscape, has transformed how businesses engage with their customers. Whether in customer support or personal assistants, these AI-driven solutions deliver significant efficiency and cost savings by automating routine tasks: summarizing conversations, identifying conversational topics, selecting the most relevant FAQs, and determining the next steps in a customer's journey.
Large Language Models (LLMs), when combined with intelligent ‘prompt engineering’, become a vital asset in enhancing the efficacy of conversational AI solutions. This article aims to provide insights into how to leverage LLMs and prompt engineering in these contexts.
Summarizing the Conversation
In the realm of conversational AI, the ability to summarize conversations is a key attribute. Instead of storing exhaustive transcripts, it’s far more efficient to use AI to generate a concise summary of the interactions. In cases of bot-to-human agent handovers, the agent can then quickly get up to speed by reviewing this summary rather than a lengthy conversation history.
LLMs handle this task well when given a carefully engineered prompt:
# Example prompt for generating a conversation summary with an LLM
Prompt:
Summarize the conversation below, capturing the customer's issue, any
resolution offered, and remaining action items.
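
In practice, the prompt is sent to a model through an API call. Below is a minimal sketch assuming the OpenAI Python SDK; the model name and prompt wording are illustrative, and any chat-capable LLM endpoint would work similarly.

# Minimal sketch: conversation summarization via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_conversation(transcript: str) -> str:
    prompt = (
        "Summarize the conversation below, capturing the customer's issue, "
        "any resolution offered, and remaining action items.\n\n"
        "Conversation:\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

The returned summary can be stored in place of the full transcript and surfaced to a human agent at handover.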

Identifying Conversation Topics
One of the essential tasks in many conversational AI systems is topic identification. For instance, when a user communicates a problem about their bill, the system can direct the conversation to the billing department, enhancing the efficiency of the support process.
We propose two approaches to topic identification using LLMs and prompt engineering (a sketch of both follows the list):
Approach 1: Formulate a single prompt that lists all potential topics. This approach suits a fairly small topic set (roughly 3 to 5), keeping the prompt compact and avoiding runtime latency issues.

Approach 2: Create a batch process that evaluates topics in smaller groups and shortlists the top candidates.

In the batch process, more than one topic may be selected. When multiple topics are shortlisted, we can apply Approach 1 to just those topics to find a unique winner.
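
Here is a minimal sketch of both approaches, again assuming the OpenAI Python SDK; the topic list, model name, and prompt wording are illustrative assumptions.

# Minimal sketch: topic identification with a single prompt (Approach 1)
# and a batched shortlist (Approach 2). Topic names are illustrative.
from openai import OpenAI

client = OpenAI()

TOPICS = ["Billing", "Technical Support", "Account Management", "Sales", "Cancellations"]

def identify_topic(message: str, topics: list[str]) -> str:
    # Approach 1: one prompt listing every candidate topic.
    prompt = (
        "Classify the user's message into exactly one of these topics: "
        + ", ".join(topics)
        + ". Reply with the topic name only.\n\nMessage: " + message
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

def shortlist_topics(message: str, topics: list[str], batch_size: int = 5) -> list[str]:
    # Approach 2: evaluate topics in batches; each batch may select several.
    shortlisted = []
    for i in range(0, len(topics), batch_size):
        batch = topics[i:i + batch_size]
        prompt = (
            "Which of these topics apply to the user's message? Reply with a "
            "comma-separated list of topic names, or 'none'.\n\nTopics: "
            + ", ".join(batch) + "\n\nMessage: " + message
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content.strip()
        shortlisted += [t.strip() for t in answer.split(",") if t.strip() in batch]
    return shortlisted

When shortlist_topics returns more than one candidate, passing the shortlist to identify_topic resolves it to a single winner, exactly as described above.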
Selecting Matching FAQs
Choosing the right FAQ response is a standard feature of conversational AI. Traditional methods involve creating embeddings of all FAQs and the user’s input, then computing cosine similarity. However, with prompt engineering, LLMs can provide a simpler solution for a smaller set of FAQs.
By listing all FAQs in a single prompt, you can guide the LLM to select the most relevant response, as sketched below. However, this method may not scale well to a large number of FAQs.
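
A minimal sketch of this prompt-based selection, assuming the OpenAI Python SDK; the FAQ list and model name are illustrative.

# Minimal sketch: selecting the best-matching FAQ with a single prompt.
from openai import OpenAI

client = OpenAI()

FAQS = [
    "How do I reset my password?",
    "How do I update my billing details?",
    "How do I cancel my subscription?",
]

def select_faq(user_question: str) -> str:
    numbered = "\n".join(f"{i + 1}. {faq}" for i, faq in enumerate(FAQS))
    prompt = (
        "Reply with the number of the single FAQ below that best matches "
        "the user's question, or 0 if none match.\n\n"
        "FAQs:\n" + numbered + "\n\nQuestion: " + user_question
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    choice = int(response.choices[0].message.content.strip())
    return FAQS[choice - 1] if choice > 0 else "No matching FAQ found"

Asking for a number rather than the FAQ text keeps the response easy to parse; for a large FAQ set, the embedding-and-cosine-similarity route remains the better fit.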

Determining the Next State as a Decision Engine
Conversational flows often follow a structured process, akin to a flow chart with conditional branching. Traditionally, this requires coding a complex sequence of conditional statements. LLMs, however, can serve as a decision engine, replacing convoluted code with plain-text prompt instructions.
This approach lets us express branching conditions in straightforward, human-readable language, simplifying development and improving maintainability, as the sketch below illustrates.
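
A minimal sketch of an LLM-backed decision engine, assuming the OpenAI Python SDK; the rules and state names are illustrative assumptions.

# Minimal sketch: plain-text branching rules in place of nested conditionals.
from openai import OpenAI

client = OpenAI()

RULES = """You decide the next state of a support conversation.
- If the user mentions a payment or invoice problem, the next state is BILLING.
- If the user reports an error or outage, the next state is TECH_SUPPORT.
- If the user asks to speak to a person, the next state is AGENT_HANDOVER.
- Otherwise, the next state is CLARIFY.
Reply with the state name only."""

def next_state(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": RULES},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content.strip()

Adding or changing a branch is now a one-line edit to the rules text rather than a code change.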
In conclusion, prompt engineering with LLMs offers an effective way to enhance conversational AI solutions, boosting their efficiency, scalability, and ease of maintenance. From summarizing conversations and identifying topics to selecting FAQs and deciding next steps, the combination of LLMs and prompt engineering is set to play a transformative role in the evolution of conversational AI.