Meta’s new AI chatbot can’t stop bashing Facebook

If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better. Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers.

The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting. Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!” The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals.

“Since deleting Facebook my life has been much better,” it said. The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific). This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods.

In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

Good morning to everyone, especially the Facebook https://t.co/EkwTpff9OI researchers who are going to have to rein in their Facebook-hating, election denying chatbot today pic.twitter.com/wMRBTkzlyD — Jeff Horwitz (@JeffHorwitz) August 7, 2022

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity. The bot began by asking me what subject I liked in school.

The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it gave nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link.

It also kept steering the conversation back to chatbots. It did, however, maintain its firm stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it wholeheartedly agreed: “Me too! That is why I stopped using it and just stick with reading books instead.”


From: theguardian
URL: https://www.theguardian.com/technology/2022/aug/09/blenderbot-meta-chatbot-facebook

