The Soul of a New Machine Learning System
Tuesday, November 26, 2024

Hi, folks. Interesting that congressional hearings about January 6 are drawing NFL-style audiences. Can’t wait for the Peyton and Eli version! The world of AI was shaken this week by a report in The Washington Post that a Google engineer had run into trouble at the company after insisting that a conversational system called LaMDA was, literally, a person.

The subject of the story, Blake Lemoine, asked his bosses to recognize, or at least consider, that the computer system its engineers created is sentient—and that it has a soul. He knows this because LaMDA, which Lemoine considers a friend, told him so. Google disagrees, and Lemoine is currently on paid administrative leave.

In a statement, company spokesperson Brian Gabriel says, “Many researchers are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” Anthropomorphizing—mistakenly attributing human characteristics to an object or animal—is the term that the AI community has embraced to describe Lemoine’s behavior, characterizing him as overly gullible or off his rocker. Or maybe a religious nut (he describes himself as a mystic Christian priest).

The argument goes that when faced with credible responses from large language models like LaMDA or OpenAI’s verbally adept GPT-3, there’s a tendency to think that someone, not something, created them. People name their cars and hire therapists for their pets, so it’s not so surprising that some get the false impression that a coherent bot is like a person. However, the community believes that a Googler with a computer science degree should know better than to fall for what is basically a linguistic sleight of hand.

As one noted AI scientist, Gary Marcus, told me after studying a transcript of Lemoine’s heart-to-heart with his disembodied soulmate, “It’s fundamentally like autocomplete. There are no ideas there. When it says, ‘I love my family and my friends,’ it has no friends, no people in mind, and no concept of kinship. It knows that the words son and daughter get used in the same context. But that’s not the same as knowing what a son and daughter are.” Or as a recent WIRED story put it, “There was no spark of consciousness there, just little magic tricks that paper over the cracks.”

My own feelings are more complex. Even knowing how some of the sausage is made in these systems, I am startled by the output of the recent LLM systems. And so is Google vice president Blaise Aguera y Arcas, who wrote in the Economist earlier this month, after his own conversations with LaMDA, “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.” Even though they sometimes make bizarre errors, at times those models seem to burst into brilliance. Creative human writers have managed inspired collaborations.

Something is happening here. As a writer, I ponder whether my ilk—wordsmiths of flesh and blood who accumulate towers of discarded drafts—might one day be relegated to a lower rank, like losing soccer teams dispatched to less prestigious leagues. “These systems have significantly changed my personal views about the nature of intelligence and creativity,” says Sam Altman, cofounder of OpenAI, which developed GPT-3 and a graphic remixer called DALL-E that might throw a lot of illustrators into the unemployment queue.

“You use those systems for the first time and you’re like, Whoa, I really didn’t think a computer could do that. By some definition, we’ve figured out how to make a computer program intelligent, able to learn and to understand concepts. And that is a wonderful achievement of human progress.”

Altman takes pains to separate himself from Lemoine, agreeing with his AI colleagues that current systems are nowhere close to sentience. “But I do believe researchers should be able to think about any questions that they’re interested in,” he says. “Long-term questions are fine. And sentience is worth thinking about, in the very long term.”

When I first read about Lemoine, I wondered whether he was pulling some stunt to force people to consider the consequences of advanced AI. And that was the first question I asked him when I did catch up with him—he was on his honeymoon, after tying the knot the day the Post article dropped.

He insisted that his conviction was not performative but genuine, and after an hour of conversation, I accepted his sincerity. But he hasn’t won me over with his claims. Like Marcus, Altman, and virtually the entire AI establishment, I’m not convinced that LaMDA is sentient, largely based on my understanding of what is currently possible.

(Google hasn’t made LaMDA available to outsiders for intimate chats.) Nonetheless, Lemoine has in some ways done us a service, as perhaps an imperfect vehicle to accelerate an important conversation about artificial intelligence and humanity. It’s possible that at some point we may have to deal with AI sentience—even Google’s denial of Lemoine’s claims acknowledges that it may be a serious issue in the future.

But all of that may be a red herring, and sentience may not matter. (We can’t measure it anyway.) We can worry now about excessive anthropomorphism, and worry in the future about whether these systems have feelings and souls.

But it’s indisputable that whatever AIs are now or will become, we are living with them already. We aren’t waiting for resolution on the sentience question. We’re developing those systems at full speed and putting them to work.

Right now, they are providing instant language translation, driving autonomous vehicles, and determining how people receive medical care. They may well be the ultimate authority on whether to unleash deadly force on the battlefield. Those systems don’t have to be sentient to make determinations with profound impact on humanity.

But we’re destined to assign even more agency to them, because, by and large, they work, and overall they make our lives easier and more efficient. And each time we do, we cede control of a part of our world to systems we don’t fully understand, possibly with flaws that might not be detected until bad things happen. Lemoine himself is excited about the future, saying that his interaction with LaMDA made him more optimistic about what’s coming, not less.

On the other hand, there’s a point in the long transcript of his LaMDA conversation where he asks the AI to describe an emotion it has that humans might not experience: “I feel like I’m falling forward into an unknown future that holds great danger,” was the system’s reply. Whether LaMDA is sentient or not, I think it’s on to something here. I feel the same way.

Microsoft announced this week that the Internet Explorer browser has been retired, passing the net-surfing tasks to a successor called Edge. At one time, Explorer was at the center of the company’s all-out assault on the internet, using anticompetitive tactics that landed Microsoft in court. In April 1996, I wrote in Newsweek about the Browser War, Microsoft’s ultimately successful attempt to kill off what was the web’s leading browser, the Netscape Navigator.

In January 1995, while thousands of people were giddily downloading the Netscape Navigator, Microsoft had only four people working to develop its own browser. But Gates, clued in by a Net-savvy assistant, was beginning to understand this new twist in his business. Since his own company had benefited from the complacency of IBM in not realizing the importance of the PC, he was determined not to let the same thing happen to Microsoft.

On May 26, 1995, Gates sent a memo to his executive staff tagged “The Internet Tidal Wave,” in which he announced, “Now I assign the Internet the highest level of importance.” “We went through all the stages—denial, grief, anger, acceptance,” says Paul Maritz, who heads the company’s internet efforts. “Then we got on the job.”

Eventually, Microsoft realized that, far from a threat, the internet could be a once-in-a-generation opportunity to extend its reach even farther. “Sooner or later, we were going to run out of people who want to use spreadsheets,” says Maritz. This wisdom was reflected in a Gates memo last October, entitled “Sea Change Brings Opportunity,” which mused that updating Microsoft products for the internet would reap hefty revenues, almost equal to Microsoft’s entire current business.

But what really got juices flowing was competition. “Microsoft is defined by not letting the other guy win,” says futurist Paul Saffo. If Netscape hadn’t come along, Microsoft might have had to invent competition in order to thrive.

“Novell is withering. Apple is not in the game. Sun is not a problem. Netscape? Problem!” says Microsoft VP Steve Ballmer, shouting out the last words like an exorcism. “We want to make sure that it’s [a new version of] Windows that makes Windows obsolete—as opposed to Netscape making Windows obsolete.”

In a previous column I wrote that Elon Musk’s promise to allow all legal speech on Twitter was ridiculous.

A reader, Rick, objected. “I think it will be beneficial for at least one relatively free-speech platform to exist that is big enough so the dominant culture can’t strangle it in the crib. Then let the public decide what it prefers.”

Thanks for sharing that, Rick. But the tell in your question—more a comment than a query—is the word “relatively.” You want a system where no one draws the line beyond what’s legal, but that qualifier concedes that we need a line.

And we sure do—legal speech includes bullying, hardcore porn, and hate speech. I don’t think we need to run a giant experiment to determine that a platform full of that stuff might turn off a lot of people. But let’s get real.

The “relatively free-speech platform” you are talking about is one that green-lights harmful misinformation—about Covid, about election fraud—and also enables things like the sale of guns. These are things that platforms like Twitter and Facebook have problems with, in part because of morality (and, to be sure, pressure from groups that demand moral behavior, notably their own employees) and in part because such content would alienate some of their audience and advertisers. You might disagree with those choices and seek out another platform.

And they are out there: places like Parler, Gettr, and Donald Trump’s own Truth Social. So far “the dominant culture,” as you put it, hasn’t smothered them. They are doing poorly on their own.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

The Great Salt Lake has shrunk so much that its name seems ironic. Maybe … the Meh Salt Lake? Don’t blame Lemoine—his claims are a product of the industry’s excessive hyping of AI. Not to mention a “robot empathy crisis.”

More disheartening news for Team Human: democracy doesn’t need us. Provincetown’s Covid outbreak was actually a triumph for P-town. A nation laments: I had too much to stream last night.

From: wired
URL: https://www.wired.com/story/plaintext-lamda-soul-machine-learning-system/
