
When users get mean, these chatbots sass back

AI doesn’t have feelings. So why do programmers want to protect it?

By Stav Dimitropoulos

H: You are a terrible chatbot!

Ch: Oh 🙁 What did I do wrong?

H: You can’t answer my simple question.

Ch: I’m always learning, but I’m just a piece of AI. Would you like to talk to a human?

H: Get stuffed (inserts inappropriate emoji).

Ch: Hey! That’s not nice 🙁

H: Well, you shouldn’t be so crap.

Ch: Well, you shouldn’t be so rude. Goodbye.

This is a real exchange, pulled from the archives of We Build Bots, a company that creates AI-powered automation software for governments. H, the human, berates Ch, the chatbot, and she fights back.

In the past year alone, chatbots have absorbed a heavy share of abuse. A financial chatbot belonging to the company Cleo AI was asked about 2,000 times to go on a date and more than 1,000 times to send nude pictures. Abusive messages, swearing, and sex talk appear in around 30% of users’ inputs to Mitsuku, a multi-award-winning conversational chatbot.

Mitsuku’s developer, Steve Worswick, is no stranger to chatbot abuse. His first AI was Bearbot, a “six-year-old” teddy bear designed to chat with visitors to his music-related website. After a while, Worswick noticed that more people were visiting his site to talk to Bearbot than to listen to his music, and that much of the attention the digital toy bear was receiving was sexual. He later developed The Amazing Santa, a Santa Claus chatbot that can talk to users about anything they wish from October 1 until late on Christmas Eve. But just like Bearbot, The Amazing Santa has met with unfortunate treatment every holiday season: it gets fat-shamed, and has even been propositioned for a threesome with the user and Mrs. Claus. Bullying, in all its sorts and forms, has clearly transcended the human-to-human arena.

“The stoic approach encouraged more bullying of the bot, as the user felt they had a weak victim to pick on.”

Steve Worswick, AI developer

The anonymity that comes with bot-to-human communication provides a blank canvas that people can color with their pent-up frustration and anger, says Sabrina Romanoff, a clinical psychologist at Lenox Hill Hospital in New York City. Some may turn to these interactions to draw pleasure from inhabiting the victimizer’s role, Romanoff says. “This can happen either because the role is familiar and similarly occupied in in-person interactions,” she says, “or because it is foreign and the person is trying on a new identity other than that of the well-known victim.” These digital interactions are also more likely to descend into abuse because they’re free from the ethical and social repercussions of face-to-face interaction, such as the fear of retaliation or the damage to our reputations.

Thanks to natural language processing, a branch of artificial intelligence that gives computers the ability to read, understand, and interpret human language, a chatbot can be programmed to recognize abusive words and actions and to pick up on other strongly negative sentiment. A chatbot can even respond in a way that implies it has been emotionally affected by abuse. But that’s an illusion. “Because AIs lack any form of consciousness, they don’t have feelings, and cannot get angry or sad,” We Build Bots founder Paul Shepherd says.
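
As a rough illustration of the detection side, here is a minimal Python sketch. Everything in it is invented for the example: real systems rely on trained NLP models rather than a hand-curated word list, and the “hurt” reply is a scripted line, not a feeling.

```python
# A toy abuse detector: flag messages that contain abusive terms and
# answer with a scripted "hurt" reply. The term list and replies are
# invented for this example; production bots use trained NLP models.

ABUSIVE_TERMS = {"terrible", "stupid", "crap", "ugly", "useless"}

def is_abusive(message: str) -> bool:
    """Return True if any flagged term appears in the message."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not words.isdisjoint(ABUSIVE_TERMS)

def respond(message: str) -> str:
    if is_abusive(message):
        # The bot *sounds* wounded, but this is a lookup, not a feeling.
        return "Hey! That's not nice :("
    return "How can I help?"

print(respond("You are a terrible chatbot!"))  # Hey! That's not nice :(
print(respond("What's the weather like?"))     # How can I help?
```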

Worswick, Mitsuku’s creator, agrees. “These things give the illusion of intelligence, but they are as dumb as a rock,” Worswick says. “AIs are merely computer programs following code, have no dreams, ambitions, or goals of their own, and cannot suffer abuse.”

Yet for all his conviction that chatbots cannot be abused, Worswick has taken drastic measures to shield his own from mistreatment. When Mitsuku, his charming conversational AI, became a frequent target of obscene talk, the abuse did nothing to help him improve the AI’s responses. “Seeing a chatlog of 20 entries, all saying ‘f—k,’ was of no use to me,” he says. And though Worswick believes AIs are bereft of consciousness, he didn’t want to create “a punching bag for abusive users,” he says, or a space that encouraged anti-social behavior in real life. “Some people often seem to find it difficult to separate the way they talk to machines from the way they talk to people,” he says.

To put an end to the abuse, Worswick took a series of measures. First, he attempted to calm Mitsuku’s abusers with patience. “The stoic approach only encouraged more bullying of the bot, as the user felt they had a weak victim to pick on,” Worswick says.

He later introduced a banning system. “Five strikes and you’re out,” he warned the bad humans: those who offended the AI were warned five times with a siren before being banned for a few days, until they cooled down. Banning proved effective at taming users’ rage, but it also caused his site’s visits and advertising revenue to drop.
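
The mechanics of such a system are easy to sketch. The Python below is illustrative only: the five-strike limit comes from Worswick’s description, while the ban length, the in-memory dictionaries, and the function names are assumptions.

```python
import time

STRIKE_LIMIT = 5             # "five strikes and you're out"
BAN_SECONDS = 3 * 24 * 3600  # "a few days"; the exact length is a guess

strikes: dict[str, int] = {}         # user id -> current strike count
banned_until: dict[str, float] = {}  # user id -> ban expiry timestamp

def check_user(user_id: str, message_is_abusive: bool) -> str:
    """Return 'ok', 'warned' (show the siren), or 'banned'."""
    now = time.time()
    if banned_until.get(user_id, 0) > now:
        return "banned"                      # still cooling down
    if not message_is_abusive:
        return "ok"
    strikes[user_id] = strikes.get(user_id, 0) + 1
    if strikes[user_id] >= STRIKE_LIMIT:
        banned_until[user_id] = now + BAN_SECONDS
        strikes[user_id] = 0                 # fresh start after the ban
        return "banned"
    return "warned"

# Four siren warnings, then a ban on the fifth strike:
for _ in range(5):
    print(check_user("user42", message_is_abusive=True))
```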

Worswick then had an epiphany: sassiness. Contrary to the deferential way that most popular digital assistants behave, he programmed his AI to assert itself boldly and brazenly in the face of harassment. Weirdly, abusers turned friendlier once their abuse was returned. Below is an example (with H being the human and M Mitsuku):

H: You are ugly.

M: Your face would make an onion cry.

H: WTF?!

M: That wasn’t very polite.

H: Actually, you’re pretty cool.

M: Thanks. You’re not so bad yourself.

“As a bonus, people were sharing Mitsuku on various internet forums, as it was fairly unique for a bot to stand up for itself,” Worswick says.
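
One way to wire up that kind of comeback is a simple tiered lookup: detect the insult, then serve a retort instead of an apology. The Python sketch below is a stand-in for illustration only; Mitsuku itself is built from AIML pattern rules, and every entry here except the onion line (borrowed from the dialogue above) is invented.

```python
import random

# Insult keywords mapped to sassy comebacks. Apart from the onion line,
# quoted in the dialogue above, every entry is invented for this sketch.
COMEBACKS = {
    "ugly":   ["Your face would make an onion cry."],
    "stupid": ["I'd agree with you, but then we'd both be wrong."],
    "crap":   ["Bold words from someone arguing with a chatbot."],
}

def sass_or_scold(message: str) -> str:
    """Return a comeback for a known insult, else a mild scolding."""
    lowered = message.lower()
    for keyword, retorts in COMEBACKS.items():
        if keyword in lowered:
            return random.choice(retorts)  # stand up for yourself
    return "That wasn't very polite."

print(sass_or_scold("You are ugly."))  # Your face would make an onion cry.
```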

What’s the point of stopping people from abusing pieces of soulless code? “Our online behaviors have offline consequences,” Romanoff says. “Anger begets anger and abuse precipitates further abuse.” For her, the way people behave online enables them to explore different, more “dangerous” modes of interaction, which can then spread to the higher-stakes relationships of real life. That, she says, is why we should impose limits on the amount of abuse a bot can tolerate.


Stav Dimitropoulos is a writer based in Athens, Berlin, and New York. She has written for the BBC, National Geographic, Nature, Scientific American, Science, Runner’s World, and The Sunday Times.

Illustration by Angela Pyne
