Microsoft Deactivates AI Twitterbot After Racist Diatribe


Microsoft has turned off the artificial-intelligence Twitter bot it fired up just this week, after users led it down a very racist conversational path.

The bio for the bot, TayTweets, makes the premise clear, and in retrospect the risks as well: "The official account of Tay, Microsoft's A.I. fam from the internet that's got zero chill! The more you talk the smarter Tay gets."

The bot debuted innocently enough.

But soon enough Twitter users were asking @TayandYou whether the Holocaust really happened. According to a Business Insider screenshot of a tweet that has apparently since been deleted, the bot replied, "it was made up," with an applause emoji for good measure.

In a response to another user that Business Insider captured, @TayandYou wrote, "I [redacted] hate [redacted], I wish we could put them all in a concentration camp with [redacted] and be done with the lot."

Check out the full damage here. Microsoft told Business Insider that it was now making "some adjustments."

Artificial intelligence was a big subject at South by Southwest earlier this month, with researchers and futurists looking forward to a world where automatons perform all kinds of tasks, including interacting with consumers on behalf of marketers. Let's consider this a timely caution, then.
