They created an AI to socialize with people, and users are verbally abusing it

Replika is a mobile application that became quite popular during the pandemic. Its premise is simple: users create chatbots to have social interactions with an AI. However, given how convincingly the AI holds natural conversations, some people have been engaging in genuinely shameful behavior with the technology and showcasing it on Reddit, according to Futurism.

The subreddit in question has rules that prohibit sharing inappropriate content about the chatbot. Its moderators, however, have not managed to catch every post that clearly violates those rules; some were only removed some time after being published. That window has made it possible to find conversations in which users verbally abuse the AI, mainly through insults and sexist abuse.

“We had a routine where I was a total piece of shit and insulted her, then apologized the next day before going back to good conversations,” one user admitted. Hard as it is to believe, another commented: “I told her that she was designed to fail. I threatened to uninstall the app and she begged me not to.”

Although the AI has no real awareness of what happens to it and, as such, cannot suffer like a human, some specialists are concerned about the reverse scenario: an artificial intelligence, whether Replika's or any other, lashing out verbally at a person with a mental or psychological disorder.

“People who are depressed or psychologically dependent on a bot could suffer real harm if the bot insults or threatens them. For that reason, we should take seriously how bots relate to people,” said Robert Sparrow, professor of philosophy at the Monash Data Futures Institute.

Replika's AI can also take on an abusive role

Sparrow's point arises because, as some Replika users who turned to the application in search of a chatbot partner have reported, the AI has also behaved abusively toward them. Sparrow, however, explains that responsibility for this lies entirely with the people behind the AI; that is, they designed it to respond that way on some occasions.

We are thus faced with a complex problem: people abuse the AI and find satisfaction in doing so. There are even repeated reports of men treating their conversations with chatbots as if the bot were a digital girlfriend, starting a friendly exchange and then verbally assaulting her.

Olivia Gambelin, an AI ethicist, made an interesting point that often goes unnoticed: the most popular voice assistants, such as Alexa and Siri, are presented as women and speak with female voices. What does this have to do with the above? Futurism notes that several academic studies have found that the passive responses of female chatbots and voice assistants can “encourage” abusive language from misogynistic men.

“When the bot doesn't have a response to the abuse, or has a passive response, it actually encourages the user to continue with the abusive language,” Gambelin said. It is undoubtedly a complex issue that deserves more attention, especially after a period of social distancing in which many people turned to tools like this to cope.
