Someone Has to Feed the Chatbots


Despite incredible advances in machine learning and natural language processing (NLP), chatbots are still in their infancy in more ways than one. Like human babies, chatbots come into this world not fully ready to engage socially. To have a useful chatbot -- one that is a seamless representation of your brand -- you need to spend a great deal of time on training. Like all parenting, this is harder than you think.

When you just want yes or no

Straight out of the box (or out of code), a chatbot is incoherent; it needs thoughtful and intentional training to become a better bot. Sure, there may be shortcuts. You can develop your bot for a major AI platform like Amazon's Alexa or Microsoft's Cortana, and adapt your programming to their corresponding voices and personalities. But you'll still need to write responses that align with the bot's (aka your brand's) personality and test extensively. Even these major-league digital assistants may not understand language and syntax specific to your intended application.

Spend time with Alexa, for example, and you'll quickly realize she nails "what is" and "who is" questions. But if your question strays beyond familiar syntax or includes keywords not found in her assortment of databases, she stalls. She hasn't been trained to understand your question. In all likelihood, your question is specific to a sector of knowledge (a dataset) that she doesn't have access to yet.

Beyond a simple yes

While new datasets are being created each day for Cortana and Alexa, fully owning the conversational relationship with your customers requires building a brand-specific chatbot personality. Engaging with users over texting or a platform like Facebook Messenger is the way to go. To do this, you must both select a suitable off-the-shelf AI, like Facebook's or Microsoft's, and have the technical staff needed to code for the platform.

Here's where things get tricky. What you may not realize is that you'll also need word nerds on both the front and back end to feed and teach your new baby chatbot. Not only does someone need to craft chatbot responses with personality (brand-guided voice and tone), but a writer/strategist/UX expert will need to think through the customer journey and provide sample customer input. To build an effective bot, every use case needs to be considered, and a chatbot response written, for every type of interaction you can think of. All of this means that knowing what your customers are asking, and how they phrase their questions, is just as important as knowing how the bot will respond. And this is just the build phase. To get the bot to actually work, you'll need diligent word nerds to train it.

Nuances in conversation

Even with all the gains in NLP and machine-learning algorithms, current off-the-shelf AI is pretty primitive compared with a human child's brain. Although we've come a long way, AI-enabled bots are still confused by unusual syntax that even a 4-year-old human could parse. We forget that our idioms and colloquialisms are nonsense to many non-native speakers, and this includes most bots. Put another way, bots are built on expectations and probability. Through its AI's NLP, the bot weights words for importance and feeds them into its predictive model -- in essence guessing what information to serve up based on the likelihood of the intent. Training bots is basically building associations between strings of words and likely intents. When chatbots guess wrong and serve up content we didn't ask for, they are at best hilarious, but at worst offensive and embarrassing. Best practice for chatbot implementation is to avoid these misses by doing rigorous QA to determine where the holes are. Training then fills those gaps in your baby bot's dataset.
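The weighting-and-guessing described above can be sketched as a toy intent classifier. Everything here -- the intents, keywords, weights, and confidence threshold -- is invented for illustration; real NLP platforms learn these from data rather than relying on hand-set scores.

```python
# Toy sketch of intent guessing by weighted keyword matching.
# All intents, keywords, and weights are invented for illustration;
# commercial NLP engines use trained statistical models instead.

INTENT_KEYWORDS = {
    "find_recipe": {"spinach": 2.0, "cheese": 2.0, "cook": 1.5, "dinner": 1.0},
    "store_hours": {"open": 2.0, "hours": 2.0, "closed": 1.5},
}

def guess_intent(utterance, threshold=1.5):
    """Score each intent by summing the weights of matched keywords,
    then return the best guess -- or None when confidence is too low."""
    words = utterance.lower().split()
    scores = {
        intent: sum(weights.get(w, 0.0) for w in words)
        for intent, weights in INTENT_KEYWORDS.items()
    }
    best_intent = max(scores, key=scores.get)
    if scores[best_intent] < threshold:
        return None  # better to ask a clarifying question than to guess wrong
    return best_intent

print(guess_intent("what can i cook with spinach and cheese"))  # find_recipe
print(guess_intent("are you open on sunday"))                   # store_hours
print(guess_intent("tell me a joke"))                           # None
```

The `None` branch is the design point: an embarrassing wrong guess is worse than admitting the bot didn't understand, so low-confidence matches should fall through to a clarifying question.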

Validating a guess

Chatbot training is a little bit like pretending to be game-show host Alex Trebek. You feed the bot phrases and words, or even sample queries, and see if it recognizes the question being asked. For example, you may say to your bot, "I have spinach, cheese and tomatoes in the fridge." Your bot is going to look at each word to see if it can match it with an intent. If your bot has been trained to recognize and associate "spinach" with "ingredient," and ingredients with the find-recipe intent, then in training the bot will ask "[is the intent to] find recipe with 'ingredient'?" and the trainer will validate that assumption, thus sealing the association of "spinach" with the find-recipe intent. Validating the baby bot's guesswork is what training is all about. Depending on how complicated your interaction roadmap is, training can take time.
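The validate-the-guess loop described above might look something like the following sketch. The `TrainableBot` class, its phrases and intent names are all invented for illustration; actual chatbot platforms expose training through their own consoles and APIs.

```python
# Toy sketch of the trainer's validate-the-guess loop.
# The class, phrases, and intent names are invented for illustration.

class TrainableBot:
    def __init__(self):
        # word -> intent associations built up during training
        self.associations = {}

    def guess(self, phrase):
        """Return the intent most often associated with the phrase's words."""
        votes = {}
        for word in phrase.lower().split():
            intent = self.associations.get(word)
            if intent:
                votes[intent] = votes.get(intent, 0) + 1
        return max(votes, key=votes.get) if votes else None

    def train(self, phrase, confirmed_intent):
        """The trainer validates (or corrects) the bot's guess, sealing
        each word's association with the confirmed intent."""
        guessed = self.guess(phrase)
        for word in phrase.lower().split():
            self.associations[word] = confirmed_intent
        return guessed == confirmed_intent  # was the bot's guess right?

bot = TrainableBot()
bot.train("I have spinach cheese and tomatoes", "find_recipe")
print(bot.guess("what goes with spinach"))  # find_recipe
```

Even this toy version shows why training takes time: each phrase only seals associations for the exact words it contains, so covering the many ways customers phrase the same intent means feeding the bot many validated examples.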

Because a cache of machine learning exists, people often assume that chatbots can learn on the job. This is a perilous assumption, kind of like thinking you can get free babysitting by dropping your 3-year-old off at the mall.

In the end, if you take the time to correctly train your chatbot, you and your brand will reap the rewards. Chatbots are never going to replace human-to-human interaction, but if we can get them to pleasantly simulate human-like conversation, they'll be able to help us have meaningful relationships with our customers. And unlike human children, chatbots don't get tired or grumpy, and they will never, ever blame you for their miserable existence.
