Here is an attempt to quantify the human-like behavior of a bot. While there could be many other factors, the ones listed here are believed to be primary. A bot needs to:
- Be Stable
- Be Smart
- Be Engaging
- Have a Persona
- Learn On the Go
These traits are not directly quantifiable as such. We will have to dig a little deeper, break each of them down into smaller factors, and then try to quantify those.
Stable
When do you consider a bot to be stable?
When it does not give a wrong answer, or when it does not point the user in the wrong direction?
How can one build a stable bot?
Here are a few guidelines (I was about to call them rules, but held back, as I need more confidence before calling them rules):
- Identify the right user intentions and construct intents accordingly. Generally, one is inclined to club many intentions into a single intent to simplify bot building, but that only leads to instability as the bot grows.
- Avoid adding two similar intents to the same bot (e.g., 'buy an apple' and 'buy a burger' are two similar intents). Similar intents add to instability; merging them into one intent with a slot, as in the sketch after this list, is a common fix.
- Do not load a bot beyond its capacity. The more intents a bot has, the lower the probability of hitting the right one. Try to strike the best number of intents.
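A minimal sketch of that merge, using a hypothetical intent schema (the Intent class and phrase format are illustrative, not any platform's API):

```python
# Instead of two near-identical intents ('buy an apple', 'buy a burger'), a
# single 'buy_item' intent with an 'item' slot keeps the intent space small.

from dataclasses import dataclass, field

@dataclass
class Intent:
    name: str
    training_phrases: list = field(default_factory=list)
    slots: dict = field(default_factory=dict)  # slot name -> allowed values

# Unstable design: two similar intents competing for the same utterances.
buy_apple = Intent("buy_apple", ["buy an apple", "I want an apple"])
buy_burger = Intent("buy_burger", ["buy a burger", "I want a burger"])

# Stable design: one intent, with the variation captured in a slot.
buy_item = Intent(
    "buy_item",
    ["buy an {item}", "I want a {item}"],
    slots={"item": ["apple", "burger"]},
)
```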
How do we measure bot stability?
It looks like a tough problem on the face of it, and indeed it is. A good set of generic and specific test cases is required to gauge the stability of a bot. Generic test cases are those common to any bot; it is good practice to build and reuse them. Specific test cases are designed exclusively for the bot in question, and their output can be used to measure the bot's stability. Good test cases make stable bots, so follow best practices in building them.
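As a minimal sketch of how such a measurement could look, assuming the bot exposes a classify(utterance) function returning an intent name (a hypothetical interface, not a specific platform's API):

```python
# Stability here is simply the fraction of test utterances routed to the
# expected intent, over both generic and bot-specific test cases.

def stability_score(classify, test_cases):
    """Fraction of test utterances routed to the expected intent."""
    passed = sum(
        1 for utterance, expected in test_cases
        if classify(utterance) == expected
    )
    return passed / len(test_cases)

# Generic cases apply to any bot; specific cases are written for this bot.
generic_cases = [("hi", "greeting"), ("bye", "goodbye")]
specific_cases = [("buy an apple", "buy_item"), ("track my order", "track_order")]

# score = stability_score(bot.classify, generic_cases + specific_cases)
```

Tracking this score per release makes regressions in stability visible as the bot grows.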
Smart
Context handling is one important way to make sure bots are smart. There are many ways in which context can be handled. One that often applies is intent clustering. In this approach, intents are grouped into clusters that share some common slots. The common slots are named the same across the intents in a cluster, and slots with the same name within a cluster carry the same value. We can also define global slots that are common across all intents; these could be slots like employee id, name, etc.
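A minimal sketch of intent clustering with an in-memory context store (the cluster names, slot names, and helper functions are all illustrative assumptions):

```python
# Slots with the same name within a cluster share a value; global slots
# (employee_id, name) are shared across every intent.

GLOBAL_SLOTS = {"employee_id", "name"}

CLUSTERS = {
    "leave": {"intents": ["apply_leave", "cancel_leave"], "slots": {"leave_date"}},
    "payroll": {"intents": ["view_payslip", "update_bank"], "slots": {"month"}},
}

context = {"global": {}, "leave": {}, "payroll": {}}

def cluster_of(intent):
    return next(c for c, spec in CLUSTERS.items() if intent in spec["intents"])

def set_slot(intent, slot, value):
    scope = "global" if slot in GLOBAL_SLOTS else cluster_of(intent)
    context[scope][slot] = value

def get_slot(intent, slot):
    scope = "global" if slot in GLOBAL_SLOTS else cluster_of(intent)
    return context[scope].get(slot)

# 'apply_leave' fills leave_date; 'cancel_leave' can reuse it without re-asking.
set_slot("apply_leave", "leave_date", "2024-07-01")
assert get_slot("cancel_leave", "leave_date") == "2024-07-01"
```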
Shifting context is also an important aspect of building a bot. It should at least handle the simple case of shuffling between two contexts; more than two contexts can be handled by asking the user for clarification. That is a fair enough way to handle ambiguity.
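A minimal sketch of that rule, tracking at most two active contexts and asking for clarification when a third appears (the deque-based store and message strings are illustrative assumptions):

```python
from collections import deque

active_contexts = deque(maxlen=2)  # at most two contexts tracked at once

def on_intent(intent, cluster):
    if cluster in active_contexts:
        active_contexts.remove(cluster)
        active_contexts.append(cluster)  # move it back to the top
        return f"Resuming {cluster}: handling {intent}."
    if len(active_contexts) == 2:
        # A third context is ambiguous; ask instead of guessing.
        return (f"You were in the middle of {active_contexts[0]} and "
                f"{active_contexts[1]}. Switch to {cluster}?")
    active_contexts.append(cluster)
    return f"Starting {cluster}: handling {intent}."

print(on_intent("apply_leave", "leave"))
print(on_intent("view_payslip", "payroll"))
print(on_intent("buy_item", "shopping"))  # third context triggers clarification
```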
Any assumptions the bot makes while carrying context must not compromise stability. It is thus a good practice to include complete details in the response, so the user can spot when the bot has carried over the wrong context.
Engaging
To be smartly proactive, the bot has to identify the user's interest and accordingly trigger a meaningful next set of interactions once an intent is fulfilled. This is similar to the recommendation engine that works behind the scenes on the Amazon website: when you buy a book, your footprints are captured and translated into a vector, and recommendations are derived by looking at parallel vectors. In a similar fashion, as the user interacts with the bot, it has to identify conversation vectors, look for parallel vectors, and accordingly predict the next possible intent or intents and drive the conversation.
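A minimal sketch of the parallel-vector idea, encoding a conversation as a bag-of-intents vector and suggesting the intent that followed the most similar past conversation (the intent list and history corpus are illustrative assumptions):

```python
import math

INTENTS = ["greeting", "apply_leave", "view_payslip", "goodbye"]

def to_vector(intent_history):
    """Encode a conversation as counts over the known intents."""
    return [intent_history.count(i) for i in INTENTS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Past conversations, paired with the intent the user invoked next.
history = [
    (["greeting", "apply_leave"], "view_payslip"),
    (["greeting", "view_payslip"], "goodbye"),
]

def predict_next(current):
    v = to_vector(current)
    best = max(history, key=lambda h: cosine(v, to_vector(h[0])))
    return best[1]

print(predict_next(["greeting", "apply_leave"]))  # -> "view_payslip"
```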
Reinforcement learning techniques can be used here to predict the next possible intent that could interest the user. Determining the reward for the model is critical in this approach. The reward could be derived from the next steps the user takes, such as clicking on a suggested button or reacting negatively to the bot's prediction. A good reward calculation results in a better learning model.
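A minimal sketch of the reward loop, framed as a simple multi-armed bandit over candidate next intents rather than a full reinforcement learning setup (the reward values and exploration rate are illustrative assumptions):

```python
import random

candidates = ["view_payslip", "apply_leave", "goodbye"]
value = {c: 0.0 for c in candidates}   # running value estimate per intent
count = {c: 0 for c in candidates}
EPSILON = 0.1                          # exploration rate

def suggest():
    if random.random() < EPSILON:
        return random.choice(candidates)   # explore a random suggestion
    return max(candidates, key=value.get)  # exploit the best estimate

def record_reward(intent, reward):
    """Incremental mean update: value += (reward - value) / n."""
    count[intent] += 1
    value[intent] += (reward - value[intent]) / count[intent]

suggestion = suggest()
record_reward(suggestion, +1.0)  # e.g., the user clicked the suggested button
# record_reward(suggestion, -1.0) would capture a negative reaction instead
```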
Have a Persona
Learn On the Go
Humans learn during their conversations. Take the case of children: they know the language but don't yet have knowledge. When they interact with adults, information flows from the adults to the children. For example, an adult tells a child that humans breathe in oxygen and breathe out carbon dioxide. Now, depending on the confidence the child has in the adult, the child would either store the information as a fact, hold it as simple information to be verified, or even discard it. Assuming the child has significant confidence in the adult, s/he takes it as a fact and writes it as a rule in her/his brain. The next time you ask the child the same question, the child extracts the information from this knowledge base and responds. In a similar fashion, a bot should have the capacity to learn from its conversations and enhance its knowledge base.
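A minimal sketch of that gating behavior, storing a statement as a fact, keeping it as unverified information, or discarding it based on confidence in the source (the thresholds and flat fact store are illustrative assumptions):

```python
FACT_THRESHOLD = 0.8    # trusted enough to store as a fact
VERIFY_THRESHOLD = 0.4  # worth keeping, but flagged for verification

knowledge_base = {}     # statement -> {"status": ..., "confidence": ...}

def learn(statement, source_confidence):
    if source_confidence >= FACT_THRESHOLD:
        knowledge_base[statement] = {"status": "fact", "confidence": source_confidence}
    elif source_confidence >= VERIFY_THRESHOLD:
        knowledge_base[statement] = {"status": "unverified", "confidence": source_confidence}
    # below VERIFY_THRESHOLD the statement is discarded, like the skeptical child

def answer(statement):
    entry = knowledge_base.get(statement)
    return entry["status"] if entry else "unknown"

learn("humans breathe in oxygen and breathe out carbon dioxide", 0.9)
learn("the office closes at noon on Fridays", 0.5)
print(answer("humans breathe in oxygen and breathe out carbon dioxide"))  # fact
```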
Once a child grows up and gathers more knowledge, s/he can even challenge other people during a conversation. A futuristic bot should likewise aim at developing the skill to challenge the user's knowledge, based on its own knowledge and logical reasoning ability. Looking at the pace of development, bots that argue don't seem too far in the future.