LaMDA and the Sentient AI Trap

Google AI researcher Blake Lemoine was recently placed on administrative leave after going public with claims that LaMDA, a large language model designed to converse with people, was sentient. At one point, according to reporting by The Washington Post, Lemoine went so far as to demand legal representation for LaMDA; he has said his beliefs about LaMDA’s personhood are based on his faith as a Christian and on the model telling him it had a soul. The prospect of AI that’s smarter than people gaining consciousness is routinely discussed by people like Elon Musk and OpenAI CEO Sam Altman, particularly given the push by companies like Google, Microsoft, and Nvidia to train ever larger language models in recent years.

Discussions of whether language models can be sentient date back to ELIZA, a relatively primitive chatbot made in the 1960s. But with the rise of deep learning and ever-increasing amounts of training data, language models have become more convincing at generating text that appears as if it were written by a person. Recent progress has led to claims that language models are foundational to artificial general intelligence, the point at which software will display humanlike abilities in a range of environments and tasks and be able to transfer knowledge between them.

Former Google Ethical AI team co-lead Timnit Gebru says Blake Lemoine is a victim of an insatiable hype cycle; he didn’t arrive at his belief in sentient AI in a vacuum. Press, researchers, and venture capitalists traffic in hyped-up claims about superintelligence or humanlike cognition in machines. “He’s the one who’s going to face consequences, but it’s the leaders of this field who created this entire moment,” she says, noting that the same Google VP who rejected Lemoine’s internal claim had written about the prospect of LaMDA consciousness in The Economist just a week earlier.

The focus on sentience also misses the point, says Gebru. It prevents people from questioning real, existing harms like AI colonialism, false arrests, or an economic model that pays those who label data little while tech executives get rich. It also distracts from genuine concerns about LaMDA, like how it was trained or its propensity to generate toxic text.

“I don’t want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans, and that’s where I’d like the conversation to be focused,” she says. Gebru was fired by Google in December 2020 after a dispute over a paper involving the dangers of large language models like LaMDA. Gebru’s research highlighted those systems’ ability to repeat things based on what they’ve been exposed to, in much the same way a parrot repeats words.

The paper also highlighted the risk that language models built with more and more data will convince people that this mimicry represents real progress: the exact sort of trap that Lemoine appears to have fallen into. Now head of the nonprofit Distributed AI Research, Gebru hopes that going forward, people focus on human welfare, not robot rights. Other AI ethicists have said that they’ll no longer discuss conscious or superintelligent AI at all.

“Quite a large gap exists between the current narrative of AI and what it can actually do,” says Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. “This narrative provokes fear, amazement, and excitement simultaneously, but it is mainly based on lies to sell products and take advantage of the hype.” The consequence of speculation about sentient AI, she says, is an increased willingness to make claims based on subjective impression instead of scientific rigor and proof.

It distracts from “countless ethical and social justice questions” that AI systems pose. While every researcher has the freedom to research what they want, she says, “I just fear that focusing on this subject makes us forget what is happening while looking at the moon.”

What Lemoine experienced is an example of what author and futurist David Brin has called the “robot empathy crisis.” At an AI conference in San Francisco in 2017, Brin predicted that within three to five years, people would claim AI systems were sentient and insist that they had rights. Back then, he thought those appeals would come from a virtual agent that took the appearance of a woman or child to maximize human empathic response, not “some guy at Google,” he says.

The LaMDA incident is part of a transition period, Brin says, where “we’re going to be more and more confused over the boundary between reality and science fiction.” Brin based his 2017 prediction on advances in language models, and he expects the trend will lead to scams. If people were suckers for a chatbot as simple as ELIZA decades ago, he says, how hard will it be to persuade millions that an emulated person deserves protection or money? “There’s a lot of snake oil out there, and mixed in with all the hype are genuine advancements,” Brin says. “Parsing our way through that stew is one of the challenges that we face.”

And as empathetic as LaMDA seemed, people who are amazed by large language models should consider the case of the cheeseburger stabbing, says Yejin Choi, a computer scientist at the University of Washington.

A local news broadcast in the United States covered a teenager in Toledo, Ohio, who stabbed his mother in the arm in a dispute over a cheeseburger. But the headline “Cheeseburger Stabbing” is vague. Knowing what occurred requires some common sense.

Attempts to get OpenAI’s GPT-3 model to generate text from the prompt “Breaking news: Cheeseburger stabbing” produce stories about a man getting stabbed with a cheeseburger in an altercation over ketchup and about a man being arrested after stabbing a cheeseburger. Language models sometimes make mistakes because deciphering human language can require multiple forms of common-sense understanding. To document what large language models are capable of doing and where they can fall short, last month more than 400 researchers from 130 institutions contributed to a collection of more than 200 tasks known as BIG-Bench, or Beyond the Imitation Game.
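For readers curious what that cheeseburger probe looks like in practice, the sketch below sends the bare headline to OpenAI’s legacy Completions API and prints whatever continuation comes back. The model name (text-davinci-002), temperature, and token limit are assumptions for illustration; the article does not say which model or settings produced its examples.

```python
# A minimal sketch of probing a GPT-3-era model with an ambiguous headline.
# Assumes an OpenAI API key in the OPENAI_API_KEY environment variable and
# uses the legacy /v1/completions endpoint; the model and sampling settings
# are illustrative, not the ones behind the article's examples.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

def complete(prompt: str, max_tokens: int = 60) -> str:
    """Send a prompt to the completions endpoint and return the generated text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "text-davinci-002",  # assumed GPT-3-era model
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": 0.7,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

if __name__ == "__main__":
    # The ambiguous headline from the article; the model has to guess who
    # stabbed whom, and with what, from four words of context.
    print(complete("Breaking news: Cheeseburger stabbing"))
```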

BIG-Bench includes some traditional language-model tests, like reading comprehension, but also tests of logical reasoning and common sense. Researchers at the Allen Institute for AI’s MOSAIC project, which documents the common-sense reasoning abilities of AI models, contributed a task called Social-IQa. They asked language models—not including LaMDA—to answer questions that require social intelligence, like “Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?” The team found that large language models were 20 to 30 percent less accurate than people. “A machine without social intelligence being sentient seems … off,” says Choi, who works with the MOSAIC project.
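To make that setup concrete, here is a rough sketch of how a Social-IQa-style item can be posed as multiple choice and scored. The item paraphrases the Jordan-and-Tracy example above; the answer options, the gold label, and the ask_model stub are hypothetical illustrations, not drawn from the actual benchmark or its evaluation code.

```python
# A rough sketch of scoring a Social-IQa-style multiple-choice item.
# The item, options, gold label, and model stub below are illustrative only.
from typing import Callable, List, Tuple

def format_item(context: str, question: str, choices: List[str]) -> str:
    """Render a context, question, and lettered answer options as one prompt."""
    options = "\n".join(f"{chr(ord('A') + i)}. {c}" for i, c in enumerate(choices))
    return f"{context}\n{question}\n{options}\nAnswer:"

def accuracy(items: List[Tuple[str, str, List[str], str]],
             ask_model: Callable[[str], str]) -> float:
    """Fraction of items where the model's lettered answer matches the gold label."""
    correct = 0
    for context, question, choices, gold in items:
        reply = ask_model(format_item(context, question, choices))
        correct += reply.strip().upper()[:1] == gold  # compare only the first letter
    return correct / len(items)

# Hypothetical item in the spirit of the article's example.
ITEMS = [
    (
        "Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy.",
        "Why did Jordan do this?",
        ["To make sure no one else could hear", "To get a better view", "To annoy Tracy"],
        "A",
    ),
]

if __name__ == "__main__":
    # Stub model that always answers "A"; swap in a real model call to measure one.
    print(accuracy(ITEMS, lambda prompt: "A"))
```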

How to make empathetic robots is an ongoing area of AI research. Robotics and voice AI researchers have found that displays of empathy have the power to manipulate human activity. People are also known to trust AI systems too much or to implicitly accept decisions made by AI.

What’s unfolding at Google involves a fundamentally bigger question of whether digital beings can have feelings. Biological beings are arguably programmed to feel some sentiments, but asserting that an AI model can gain consciousness is like saying a doll created to cry is actually sad. Choi says she doesn’t know any AI researchers who believe in sentient forms of AI, but the events involving Blake Lemoine underline how a warped perception of what AI is capable of doing can shape real-world events. “Some people believe in tarot cards, and some might think their plants have feelings,” she says, “so I don’t know how broad a phenomenon this is.”

The more people imbue artificial intelligence with human traits, the more intently they will hunt for ghosts in the machine—if not yet, then someday in the future. And the more they will be distracted from the real-world issues that plague AI right now.


From: wired
URL: https://www.wired.com/story/lamda-sentient-ai-bias-google-blake-lemoine/
