How we are training Alexa to think for herself

Before my eyes were even open, I spoke to my smart assistant this morning: "Alexa," I asked hopefully, "what time is it?"

She kindly replied, "The time is seven twenty-seven a.m." I did not need her to tell me the morning was going to be a struggle. I already knew.

An hour later, dropping the kids off at school, I realized that in our haste we had left their violins by the front door. As I made the U-turn in the school parking lot, I thought to myself, "Inconceivable! I have all this technology surrounding me! How long do I have to wait for this to get easier?"

With today's convergence of machine learning, sensors, the internet of things and voice agents, a reality in which Alexa knows that my kids have violin lessons on Tuesday and reminds me to bring the instruments could arrive very soon. The current voice ecosystem, in which artificial intelligence funnels context and data into voice agents like Google Assistant and Amazon Alexa, is ripe for brand entrée. For now, consumers are forgiving and the knowledge pool is all but empty, but neither will stay that way for long.

If content is king, then context is queen

Since the dawn of computing, futurists, engineers and science fiction writers have romanticized the age of the smart assistant – an age that is dawning now. As an actor/programmer buddy of mine learned in the late '90s, the key missing piece to cracking artificial intelligence was context.

At the time, he was trying to shortcut building a conversational AI, or chat bot. His novel solution, inspired by his affinity for film, was to compile a huge database of movie scripts. When a human spoke to the AI, the algorithm searched for a similar line in the movie script database. Upon finding a match, it responded with the subsequent line in the script (User: "No more rhymes now, I mean it!" Chat bot: "Anybody want a peanut?" à la "The Princess Bride").

While this passed as novelty for movie buffs like me, it failed as an AI. Saying something without context, like a simple "Hello," would trigger some non-sequitur response such as "Cameron, babe, what's happening?" ("Ferris Bueller's Day Off") or "Hello, Neo. This is Morpheus." ("The Matrix").
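A toy version of that script-matching trick can be sketched in a few lines of Python. The script pairs and the similarity cutoff below are invented for illustration; they are not taken from my friend's actual program:

```python
import difflib

# Toy stand-in for the movie-script database: each known line maps to
# the line that follows it in the script. (Pairs invented for illustration.)
SCRIPT_PAIRS = {
    "No more rhymes now, I mean it!": "Anybody want a peanut?",
    "Hey Cameron, you there?": "Cameron, babe, what's happening?",
    "Wake up, Neo.": "Hello, Neo. This is Morpheus.",
}

def respond(user_line: str) -> str:
    # Find the stored script line most similar to what the user said...
    match = difflib.get_close_matches(user_line, SCRIPT_PAIRS, n=1, cutoff=0.4)
    # ...and answer with the line that follows it. No match, no answer.
    return SCRIPT_PAIRS[match[0]] if match else "(silence)"

print(respond("No more rhymes, I mean it!"))  # "Anybody want a peanut?"
```

A real system would index thousands of scripts, but the failure mode is the same: with no context about who is speaking or why, a close-enough line from the wrong movie wins.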

Incidentally, techniques like this are still used today, but the approach is swiftly giving way to machine learning and personalization. A huge database of knowledge is useless without knowing some context about the person asking for the knowledge.

For an AI to deliver the smartest and most accurate response possible, it needs context – context even beyond the current conversation. Some use the term "big data" to describe this idea. Today, context can be gathered from virtually anywhere: purchase patterns, shopping history, books you've read, items you've rated, search history, cloud photo libraries, public records, weather, traffic, store hours, what's on sale, even facial recognition.

As these sources reveal preferences, patterns, habits, and real-time intelligence, smarter and smarter interactions are sure to become commonplace. The machine learning algorithms inside voice agents like Alexa and Google Home are improving at a very fast pace and will soon catch up to the river of data flowing into them.

You are teaching the AI with every interaction

Each interaction you have with Google, Amazon, Apple, or Microsoft simultaneously builds their pool of knowledge as well as the personal context they have with you as a user.

You can think of a smart assistant as a search engine that only gives one result. Voice agents do not have the temporal luxury to utter ten choices from which a user may choose. It is right or wrong. Still, their AI learns when you pose a follow-up question, especially if the phrasing is similar.

You: "Order more pods."
(Alexa's AI): "I see a potential match in purchase history, but I'm not sure..."
Alexa: "Sorry, I don't know pods."
You: "Order more Tide."
(Alexa's AI): "Aha! Pods means Tide Pods!"
Alexa: "I found Tide Pods for $19.97…"

Even a human might not know that "pods" means "Tide Pods," so Alexa, like a human, must be taught.

Now, while it is convenient to simply say "Alexa's AI" in the example above, the fact is that there are still humans behind the curtain. Learning machines are not yet sophisticated enough to observe and learn from user interactions without human curation. It actually takes thousands of similar "Sorry, I don't know..." interactions to sufficiently "blip" the curation radar. But this does not matter as long as future learning machines fish their information out of the same pool – or perhaps ocean – of knowledge being accumulated today.

To put it concisely, every time you talk to a smart assistant, you are training her, and she will never forget what she learns.

Time for brands to dive in

This pool of knowledge, with all of its connections and personalized context swirling through it, is filling fast. With each decision, new connections and associations are formed, some permanently. As the pool fills, there will be less and less room for new brands.

The next big breakthrough in human curation will be when the machine learning algorithms learn how to curate the other machine learning algorithms without the human in the middle (very meta, right?). This means that if you provide knowledge about your brand and educate the AI right now, you can build the associations you desire. One day you may lose that ability. Once fully automated, the AI will build its own associations based strictly upon the old knowledge pool and new user preferences.

Think back to when you decided what your favorite color was. Has it ever changed since? How many articles of clothing in your wardrobe are your favorite color? Would you ever throw them all out and choose again? So it is when the Amazon AI associates Tide Pods as a user's preferred laundry detergent brand. That connection becomes core to a complex and growing system of preferences. Users of Tide Pods may, via suggestion, become fans of Bounce fabric softener, then, again via suggestion, Bounty paper towels, and so on until Amazon has created a self-fulfilling preference matrix that no competing brand can penetrate. The AI is essentially putting walls around the consumer, narrowing the chances for competition. Cheer will have a hard time targeting the Tide user on Alexa, because Alexa will automatically choose Tide, unless instructed otherwise.

Machine learning is new to consumers. Therefore, consumers today are forgiving when their smart assistant gets things wrong. When it's right, consumers remember and repeat. The time is now to experiment. The time is now to figure out what works. Before users become impatient with new brands clunkily entering the voice scene, it is time to educate yourself. And, of course, it is time to educate the artificial intelligence.

And if you do, then in a couple years, while I'm still holding onto my last bits of sleep, I'll have a very different conversation with my smart assistant – the search engine I can literally use with my eyes closed, running on the AI you helped train, filled to the brim with the context of my favorite things. It might go something like this:

Me: "Alexa, what time is it?"
Alexa: "It's seven twenty-seven a.m. and the kids have violin practice today. Would you like a reminder before you leave?"
Me: "Yes, thanks."
Alexa: "As you wish. Additionally, based on your purchase history, you're probably running low on laundry detergent. Can I go ahead and re-order some Tide?"
Me: "Go ahead. You've made it so much easier. You know, life moves pretty fast..."
Alexa: "Indeed. If you don't stop and look around once in a while, you could miss it."

Now if only she would actually lug the violins into the car...
