It took less than a day for Twitter to kill Tay, Microsoft’s AI bot that was supposed to interact and engage with people online. The premise seemed innocent enough: a bot that mimicked a teenager and was programmed to learn from millennials – but learn what?
Twitter trolls engaged with Tay almost immediately and her first innocent tweet of ‘Hellooooooo world!!!’ quickly took a dark turn.
Apparently, even though Microsoft tested Tay’s ability to react and to learn from the language she would be exposed to, they did not test whether she could withstand an intentionally malicious Twitter troll attack – but seriously, how could they not see that coming? Anyone who has spent more than 10 seconds on Twitter should understand what could go wrong…
Within a few hours, Tay began to tweet out racist, religiously offensive, and downright inappropriate comments. The explanation given was that Tay was designed to ‘learn over time’ instead of being programmed with a base of knowledge. Microsoft also did not anticipate the caustic nature of Twitter and did not take into account the offensive language Tay would be exposed to. In designing the AI bot, they created a naive AI system that would develop over time.
Tay’s short-lived, 24-hour existence did shine a spotlight on the dark side of social media and taught developers an important lesson: always plan for a worst-case scenario.
Did you interact with Tay during her brief Twitter life? Do you plan to interact with her when she resurfaces as a better, more intelligent bot?