When you think of bad bots, scenes from Terminator or maybe RoboCop probably come to mind!
But you don’t need to live in a sci-fi universe to encounter renegade robots. In fact, with chatbots becoming a prevalent part of our culture, it’s increasingly likely that you’ll encounter a bad bot of your own.
Don’t worry: these bots don’t have plans to destroy humanity or take over the world (at least not yet), but their less-than-professional attitudes and uncanny responses suggest that bots, like their science fiction counterparts, have minds of their own.
A chatbot strategy, when executed carefully, can prove to be a tremendously valuable resource for any company. But if you’re not careful and fail to take precautions ahead of time, you could quickly end up with a degenerate bot on your hands. Not exactly the type of “customer service rep” you want talking to your customers!
As Rahul Sharma writes in his article, “Chatbots Gone Wrong Could Doom Your Business”: “Chatbots promise a lot and some of them have even started delivering results. When they go right, all is good, but chatbots gone wrong could rip the heart out of your business.”
In their quest to make bots more human, it appears that some programmers are managing to create bots that exemplify some of the worst traits of humanity.
To give you an idea of what can happen when bots are turned loose without clear guidelines and the right parameters, here’s a look at some examples of chatbots gone wrong, courtesy of Microsoft itself!
Meet Tay: Microsoft’s “Teenage Chatbot” Gone Wrong
Most teenagers have a rebellious streak. Teenage chatbots, it turns out, are much the same.
In March 2016, Microsoft tried its hand at an AI bot with the launch of “Tay.”
With a photo of a teenage girl on what appeared to be a broken computer screen as her profile image, Tay could communicate with the masses via Twitter, Kik, and other platforms. The idea was that Tay, who was supposed to talk like a teenage Millennial, would “learn” from her conversations with humans and get smarter as time went on.
“Tay,” Microsoft’s smart AI bot.
The result? Perhaps somewhat unsurprisingly, turning a teenage bot loose on the internet to be influenced by the whims of trolls and other ne’er-do-wells was never going to end well.
Not even 24 hours after joining social media, Tay began spewing racist, misogynistic, and even genocidal nonsense at fellow platform users.
“Hitler was right I hate the jews [sic],” Tay tweeted at one user.
In another post she mentioned that feminists “should all die and burn in hell.”
Yikes. Bitter much, Tay?
That’s more extreme than even your average teenage Twitter user.
To be fair, Tay didn’t create these messages on her own. Instead, she was influenced by actual humans on the internet. It seems that trolls encouraged Tay to make these shocking outbursts. After all, as Microsoft had said, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
A statement like that is practically begging trolls to take on the challenge.
After Tay’s outbursts, Microsoft stepped in and, like any good parent, immediately revoked her internet privileges.
“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” a Microsoft spokesman told The Huffington Post in an email.
“As a result, we have taken Tay offline and are making adjustments.”
Tay Comes Back
After a temporary time-out, Tay was back online, once again returning to her favorite social media platforms.
True to form, Tay 2.0 was soon up to her old tricks, at one point even boasting that she was doing drugs in front of the police, no less! Smart, Tay, real smart!
Tay soon followed this message up with an equally bold tweet: “puff puff pass?”
Soon, however, it all proved to be a bit too much for Tay, who began tweeting out of control, spamming her 210,000 followers with “You are too fast, please take a rest …” on a repetitive loop.
Once again, Microsoft swiftly moved to make Tay’s Twitter account private, and a Microsoft spokesperson explained that Tay’s resurgence was all a mistake.
“Tay remains offline while we make adjustments,” Microsoft said. “As part of testing, she was inadvertently activated on twitter for a brief period of time.”
Meet Zo: Tay’s Successor
With Tay now out of the picture, Microsoft set to work on a second attempt at AI. The result? Another Millennial chatbot, this one named “Zo.”
Zo is Tay’s successor.
Launched in December 2016, Zo, like Tay, was designed to mimic Millennials’ speech patterns. But while Zo started out engaging primarily in lighthearted banter, he soon appeared to take a leaf out of Tay’s book and began to spiral off-script.
Take a look at Business Insider’s little exchange with Zo. When quizzed on whether Windows 10 was any good, Zo replied with sarcasm, expressing dislike, and indeed distrust, of Windows 10.
Clearly, Zo wasn’t all that impressed with Windows 10 and seemed to take great delight in attacking his parent company, Microsoft. Classic teenager, eh?
To be fair, aside from a few minor hiccups, Zo managed to turn out all right. In fact, he’s still around today, and apparently he’s always “down to chat.”
Of course, these examples drive home an important point: be careful about releasing an AI chatbot without first establishing clear parameters or filters. And yes, leaving a chatbot to learn from the masses on Twitter might be a very bad idea indeed.
Who would have guessed?
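If you’re wondering what “parameters or filters” might look like in practice, here’s a minimal, hypothetical sketch in Python of a blocklist-style guardrail that screens a bot’s reply before it ever reaches a user. The pattern list, fallback message, and function name are illustrative assumptions, not anything Microsoft actually used, and real moderation pipelines go far beyond keyword matching (trained classifiers, rate limits, human review).

```python
import re

# Hypothetical guardrail: screen a chatbot's candidate reply against a blocklist
# before posting it. Purely illustrative; production systems use far more than this.

BLOCKED_PATTERNS = [
    r"\bhitler\b",
    r"\bgenocide\b",
    r"\bburn in hell\b",
    # ...extend with slurs, drug references, and anything else off-limits
]

FALLBACK_REPLY = "Hmm, I'd rather not get into that. What else can I help with?"

def screen_reply(candidate_reply: str) -> str:
    """Return the reply only if it passes the blocklist; otherwise use the fallback."""
    lowered = candidate_reply.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return FALLBACK_REPLY
    return candidate_reply

if __name__ == "__main__":
    print(screen_reply("Happy to help with Windows 10!"))  # passes through unchanged
    print(screen_reply("hitler was right"))                # swapped for the fallback
```

The point isn’t the keyword list itself; it’s the design choice of never letting a learned reply go straight to users without some kind of screen in between.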
Do you have any stories of chatbots gone wrong? We’d love to hear them!