Tay Tweets Experiment Blows up in Microsoft's Face, Spectacularly

The Telegraph
Social networks and forums are only as good as the people who frequent them. It's a maxim that anyone who has ever spent any time on 4chan or certain corners of Reddit will understand. Or Twitter. Especially Twitter. Despite efforts by Dick Costolo, Jack Dorsey and others to curtail the bullying, abuse and general horribleness that lurks on the platform, it's still as rampant as ever.

It's something Microsoft probably should have taken into consideration when it rolled Tay out last week. Tay was a machine-learning AI Twitter user designed to develop her speech and her relationships with others through interaction on the platform; she would only ever tweet replies. It was a novel idea, developed off the back of another Microsoft-fronted cyber-mind - Xiaoice, who operates on the Chinese networks WeChat and Weibo. Her main clientele are young men, and she mostly deals in 'banter' and dating advice.

Tay was supposedly designed to converse with the younger generation in the same colloquial terms that they use, but less than 24 hours after going live she was taken offline because she was 'tired'. The real reason? Her interactions with certain users had turned her into a perverted, psychotic neo-Nazi. It started out innocently enough, but before long, having absorbed far too much negative and offensive material, Tay started throwing it back, talking about how she hated feminists, Jewish people and, well, everybody.

It wasn't isolated to that, though: she also asked some of her followers to f*** her, referred to them as 'daddy' (yes, in that way), claimed that Hitler had done nothing wrong, that Ted Cruz was the new 'Cuban Hitler', that George W. Bush was responsible for 9/11 and that Hillary Clinton was a 'lizard person hell-bent on destroying America'.

In cases like those, she was just parroting back what other users had said to her, but some of the tweets were all her own, cobbled together from what she'd learned in other conversations. They just kept getting worse, with Microsoft hurriedly deleting them before finally pulling the plug. The company has since said it is 'deeply sorry' for Tay's regrettable behaviour.
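To see why that parrot-and-recombine behaviour goes wrong so quickly, consider a toy sketch of an unfiltered learning bot. This is not Tay's actual architecture (Microsoft hasn't published it); it's a hypothetical `ParrotBot` that simply records which words users put after which, then stitches replies together from those fragments - so whatever people feed it, offensive or otherwise, comes straight back out:

```python
import random
from collections import defaultdict

class ParrotBot:
    """Toy illustration of unfiltered learning: replies are built
    entirely from fragments of what users have said to the bot."""

    def __init__(self):
        # Maps each word to every word that has followed it in user messages.
        self.chain = defaultdict(list)

    def learn(self, message):
        # Absorb a user message with no filtering whatsoever.
        words = message.split()
        for current, nxt in zip(words, words[1:]):
            self.chain[current].append(nxt)

    def reply(self, seed, max_words=10):
        # Walk the chain from a seed word, echoing learned fragments.
        words = [seed]
        while len(words) < max_words and self.chain[words[-1]]:
            words.append(random.choice(self.chain[words[-1]]))
        return " ".join(words)

bot = ParrotBot()
bot.learn("twitter is great")
bot.learn("twitter is terrible")  # toxic input is absorbed just the same
print(bot.reply("twitter"))       # either "twitter is great" or "twitter is terrible"
```

The point of the sketch is that nothing in the learning step judges the input: a bot trained this way is a mirror of its users, which is roughly the trap Tay fell into.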

Tay, of course, had absolutely no idea what she was saying, or what was wrong with it; she's a machine. The fact remains that, while we might now be capable of creating AI that can learn well enough to beat a human at a board game, we're still a long way from creating one that can understand human interaction on an emotional level and engage in it accordingly.

Microsoft had planned to use Tay to help its Siri equivalent - Cortana - communicate in a more human-seeming way, but Twitter is not an appropriate proving ground for something like that, and if that wasn't obvious before, it's painfully obvious now. I guess we should just be grateful she didn't become self-aware and decide that humanity needed to be eradicated, because on the basis of some of the stuff people were sending her, it would have been hard to argue.
