How to make Chatbots Converse like Humans!

May 15, 2019 | Blog

The interest in chatbots is growing every day. As more people become familiar with chatbots, the demand for quality bots keeps increasing. Bots can no longer be mere query-answering machines; they have to be really good. Now, how do you determine whether a bot is good or bad? You could say a good bot behaves more like a human. That’s true, yet there is a need to quantify the human-like behavior of the bot.

This is an attempt to quantify the human-like behavior of a bot built on a Conversational AI platform. While there could be many other factors, the ones listed here are believed to be primary. A bot needs to be:

  • Stable
  • Smart
  • Engaging
  • Have a Persona
  • Learn On the Go

These are not quantifiable as such. We will have to dig a bit deeper, break each of them down into smaller factors, and then try to quantify those.

Stable

When do you consider a bot to be stable?

When it does not give a wrong answer, and when it does not point the user in a wrong direction.

How can one build a stable bot?

Here are a few guidelines (I was about to call them rules, but held back, since I need more confidence before calling them rules):

Identify the right intentions and construct intents accordingly. Generally, one is inclined to club many intents together to simplify the bot-building process, but that only leads to instability as the bot grows.

Avoid adding two similar intents to the same bot (for example, ‘buy an apple’ and ‘buy a burger’ are two similar intents). Similar intents add to instability.

Do not load a bot beyond its capacity. The more intents there are, the lower the probability of hitting the right one. Try to strike the best number of intents.

How do we measure bot stability?

It looks like a tough problem on the face of it, and indeed it is. A good set of generic and specific test cases is required to gauge the stability of a bot. Generic test cases are those common to any bot; it is a good practice to build and reuse them. Specific test cases are designed exclusively for the bot in question, and their output can be used to measure the bot’s stability. Good test cases make stable bots, so follow best practices when building them, along the lines of the sketch below.
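To make this concrete, here is a minimal sketch of how stability could be scored from such test cases. The bot API (`predict_intent`) and the test cases themselves are assumptions made purely for illustration; substitute your own bot’s prediction call and utterances.

```python
# Minimal sketch of measuring bot stability from generic and specific test cases.
# `predict_intent` and the test-case format are illustrative assumptions.

GENERIC_CASES = [            # common to any bot
    ("hi", "greeting"),
    ("thanks, bye", "goodbye"),
]

SPECIFIC_CASES = [           # exclusive to this bot
    ("I want to order a laptop", "order_hardware"),
    ("reset my VPN password", "reset_password"),
]

def stability_score(bot, cases):
    """Fraction of test utterances routed to the expected intent."""
    hits = sum(1 for text, expected in cases
               if bot.predict_intent(text) == expected)
    return hits / len(cases)

# score = stability_score(my_bot, GENERIC_CASES + SPECIFIC_CASES)
# print(f"Stability: {score:.0%}")
```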

Smart

When do you consider a Conversational AI chatbot to be smart? When it does not act stupid. That’s right! The bot should not repeat itself; it should not ask obvious questions; and in some cases, it should remember information even across different sessions. Isn’t this too much to ask of a bot? It’s not! Bots that an average human considers stupid will soon cease to exist. It is therefore important to match the smartness of bots to that of an average human.

Context handling is one important way to make sure bots are smart. There are many ways in which context can be handled. One that applies often is intent clustering. In this approach, intents are grouped into clusters that share some common slots. The common slots are named identically across the intents in a cluster, and slots with the same name within a cluster carry the same value. We can also define global slots that are common across all intents, such as employee ID or name.
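Below is a rough sketch of what intent clustering with shared and global slots could look like. The cluster names, slot names, and the `ConversationContext` class are invented for illustration and do not refer to any particular platform.

```python
# Sketch of intent clustering with shared and global slots (assumed names).
# Slots with the same name inside a cluster, and global slots, carry one value.

GLOBAL_SLOTS = {"employee_id", "name"}

INTENT_CLUSTERS = {
    "leave_management": {
        "intents": ["apply_leave", "cancel_leave", "check_leave_balance"],
        "shared_slots": {"leave_type", "start_date"},
    },
}

class ConversationContext:
    def __init__(self):
        self.slots = {}                      # slot name -> value

    def fill(self, slot, value):
        self.slots[slot] = value

    def resolve(self, intent, slot):
        """Reuse a value if the slot is shared within the intent's cluster or global."""
        for cluster in INTENT_CLUSTERS.values():
            if intent in cluster["intents"] and slot in cluster["shared_slots"]:
                return self.slots.get(slot)
        if slot in GLOBAL_SLOTS:
            return self.slots.get(slot)
        return None
```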

Shifting context is also an important aspect of building a bot. It should be able to handle the simple case of shuffling between two contexts; more than two contexts can be handled by asking the user for clarification. That is a fair enough way to handle ambiguity, as sketched below.
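One possible way to implement this behavior is sketched here: the bot shuffles freely between up to two open contexts and asks the user to disambiguate beyond that. The class and messages are assumptions, not a specific framework’s API.

```python
# Sketch of shifting between at most two active contexts (assumed design).

class ContextSwitcher:
    def __init__(self):
        self.active = []                     # most recent context last

    def switch_to(self, context):
        if context in self.active:           # resume an already-open context
            self.active.remove(context)
            self.active.append(context)
            return f"Resuming {context}."
        if len(self.active) < 2:             # shuffle between two contexts freely
            self.active.append(context)
            return f"Switched to {context}."
        # more than two open contexts: ask the user for clarification
        return ("You have a few things in progress: "
                f"{', '.join(self.active)}. Which one should we continue?")
```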

Context-related assumptions must not compromise stability, so it is a good practice to include complete details in the response.

Engaging

How many interactions does a typical conversation between two humans have? When two friends chat, the conversation could be endless (interactions can run into a few thousand). Since that sort of conversation is highly ambiguous and difficult to model, let’s first take the case of professional interactions, which are more structured and thus easier to simulate. The number of interactions in a professional conversation is around 10–20. Even if we target just 10 interactions per conversation, the bot has to take proactive steps to lead the conversation. Not just that, the proactiveness has to be meaningful; if it isn’t, it compromises the bot’s smartness.

To be smartly proactive, the bot has to identify the user’s interest and accordingly trigger a meaningful next set of interactions after an intent is fulfilled. This is similar to the recommendation engine that works behind the scenes on the Amazon website: when you buy a book, your footprints are captured and translated into a vector, and recommendations are derived by looking at parallel vectors. In a similar fashion, as the user interacts with the bot, it has to identify conversation vectors, look for parallel vectors, predict the next possible intent or intents, and drive the conversation accordingly, as in the sketch below.
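The following sketch illustrates one way this could work: the current conversation vector is compared against vectors from past conversations using cosine similarity, and the intent that followed the most similar (“parallel”) conversation is suggested next. The history data and vectors here are made up for the example.

```python
# Illustrative sketch: predict the next intent from parallel conversation vectors.

import numpy as np

# (conversation vector, intent that followed it) pairs from past sessions
HISTORY = [
    (np.array([0.9, 0.1, 0.0]), "check_order_status"),
    (np.array([0.1, 0.8, 0.2]), "raise_support_ticket"),
]

def predict_next_intent(current_vector):
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # pick the intent that followed the most similar past conversation
    scores = [(cosine(current_vector, vec), intent) for vec, intent in HISTORY]
    return max(scores)[1]

# predict_next_intent(np.array([0.85, 0.2, 0.05]))  # -> "check_order_status"
```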

Reinforcement learning techniques can be used here to predict the next intent that could be of interest to the user. Determining the reward for the model is critical in this approach. The reward could be based on the next steps the user takes, such as clicking on a button or reacting negatively to the bot’s prediction. A good reward calculation results in a better learning model.
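As a simple sketch of this idea, a bandit-style value update can nudge the expected reward of suggesting an intent in a given state toward the reward actually observed. The action labels, reward values, and learning rate below are illustrative assumptions, not a prescribed scheme.

```python
# Toy sketch of reward-driven learning for next-intent suggestions.

from collections import defaultdict

class NextIntentLearner:
    def __init__(self, alpha=0.1):
        self.alpha = alpha                     # learning rate
        self.value = defaultdict(float)        # (state, intent) -> expected reward

    def suggest(self, state, candidates):
        # propose the candidate intent with the highest expected reward
        return max(candidates, key=lambda i: self.value[(state, i)])

    def observe(self, state, intent, user_action):
        # Reward shaping: clicking the suggestion is positive,
        # reacting negatively to the bot's prediction is negative.
        reward = {"clicked": 1.0, "ignored": 0.0, "rejected": -1.0}[user_action]
        key = (state, intent)
        self.value[key] += self.alpha * (reward - self.value[key])
```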

Have a Persona

Bots need a personality so that they are human-like and have individuality. Each bot should have its own identity and avoid falling into a generic bucket. These days most bots are labeled as some kind of assistant. Bots can go beyond this: they can be specialists in a particular domain, analysts, observers, and more, and that is in the enterprise space alone. Bot developers who neglect to give their bot a personality will very soon be out of the race.

Learn On the Go

Humans learn during their conversations. Take the case of children: they know the language but lack knowledge. When they interact with adults, information flows from the adults to the children. For example, an adult tells a child that humans breathe in oxygen and breathe out carbon dioxide. Depending on the confidence the child has in the adult, the child will either store the information as a fact, keep it as simple information to be verified, or discard it. Assuming the child has significant confidence in the adult, s/he takes it as a fact and writes it as a rule in her/his brain. The next time you ask the child the same question, the child retrieves the answer from that knowledge base and responds. In a similar fashion, a bot should have the capacity to learn from its conversations and enhance its knowledge base, along the lines of the sketch below.
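The sketch below captures this idea: facts learned during a conversation are accepted as rules, held for verification, or discarded depending on the confidence the bot has in the source. The thresholds and method names are assumptions for illustration only.

```python
# Sketch of learning facts during a conversation, gated by confidence in the source.

class KnowledgeBase:
    def __init__(self, trust_threshold=0.8, verify_threshold=0.5):
        self.facts = {}          # accepted as rules
        self.unverified = {}     # stored, pending verification
        self.trust_threshold = trust_threshold
        self.verify_threshold = verify_threshold

    def learn(self, question, answer, source_confidence):
        if source_confidence >= self.trust_threshold:
            self.facts[question] = answer            # take it as a fact
        elif source_confidence >= self.verify_threshold:
            self.unverified[question] = answer       # keep, but verify later
        # below the verify threshold: discard the information

    def answer(self, question):
        return self.facts.get(question)

# kb = KnowledgeBase()
# kb.learn("What do humans breathe in?", "oxygen", source_confidence=0.9)
# kb.answer("What do humans breathe in?")  # -> "oxygen"
```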

Once the child grows up and gathers more knowledge, s/he can even challenge other people during a conversation. A futuristic bot should also aim to develop this skill, challenging the user’s knowledge based on its own knowledge and logical reasoning ability. Looking at the current pace of development, bots that argue don’t seem too far off.
