On Artificial General Intelligence, AI Sentience, And Large Language Models

Rob Toews, Contributor. Opinions expressed by Forbes Contributors are their own. I write about the big picture of artificial intelligence.

Jul 24, 2022, 08:00pm EDT

[Image caption: Many forms of intelligence exist. Octopuses are highly intelligent—and completely unlike humans. Photo source: New York Times]

In case you haven’t noticed, artificial intelligence systems have been behaving in increasingly astonishing ways lately.

OpenAI’s new model DALL-E 2, for instance, can produce captivating original images based on simple text prompts. Models like DALL-E are making it harder to dismiss the notion that AI is capable of creativity. Consider, for instance, DALL-E’s imaginative rendition of “a hip-hop cow in a denim jacket recording a hit single in the studio.”

Or for a more abstract example, check out DALL-E’s interpretation of the old Peter Thiel line “We wanted flying cars, instead we got 140 characters.” Meanwhile, DeepMind recently announced a new model called Gato that can single-handedly perform hundreds of different tasks, from playing video games to engaging in conversation to stacking real-world blocks with a robot arm. Almost every previous AI model has been able to do one thing and one thing only—for instance, play chess.

Gato therefore represents an important step toward broader, more flexible machine intelligence. And today’s large language models (LLMs)—from OpenAI’s GPT-3 to Google’s PaLM to Facebook’s OPT—possess dazzling linguistic abilities. They can converse with nuance and depth on virtually any topic.

They can generate impressive original content of their own, from business memos to poetry. To give just one recent example, GPT-3 composed a well-written academic paper about itself, which is currently under peer review for publication in a reputable scientific journal. These advances have inspired bold speculation and spirited discourse from the AI community about where the technology is headed.
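To make this kind of text generation concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and using the publicly available GPT-2 model as a small stand-in for proprietary models like GPT-3 or PaLM; the prompt and sampling settings below are placeholders, not anything from the article.

```python
# Minimal text-generation sketch: GPT-2 (publicly available) stands in here
# for much larger proprietary language models such as GPT-3 or PaLM.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "To: All staff\nSubject: Quarterly planning\n\nHi team,"
outputs = generator(
    prompt,
    max_length=80,           # total tokens: prompt plus generated continuation
    num_return_sequences=1,  # how many alternative continuations to return
    do_sample=True,          # sample from the model's next-token distribution
    temperature=0.8,         # lower values give more conservative continuations
)
print(outputs[0]["generated_text"])
```

The prompt-in, text-out pattern is the same whether the output is a business memo or a poem; what changes between this toy and the models discussed here is mainly the scale of the model and its training data.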

Some credible AI researchers believe that we are now within striking distance of “artificial general intelligence” (AGI), an often-discussed benchmark that refers to powerful, flexible AI that can outperform humans at any cognitive task. Last month, a Google engineer named Blake Lemoine captured headlines by dramatically claiming that Google’s large language model LaMDA is sentient. The pushback against claims like these has been equally robust, with numerous AI commentators summarily dismissing such possibilities.

So, what are we to make of all the breathtaking recent progress in AI? How should we think about concepts like artificial general intelligence and AI sentience? The public discourse on these topics needs to be reframed in a few important ways. Both the overexcited zealots who believe that superintelligent AI is around the corner, and the dismissive skeptics who believe that recent developments in AI amount to mere hype, are off the mark in some fundamental ways in their thinking about modern artificial intelligence.

Artificial General Intelligence Is An Incoherent Concept

A basic principle about AI that people too often miss is that artificial intelligence is and will be fundamentally unlike human intelligence.

It is a mistake to analogize artificial intelligence too directly to human intelligence. Today’s AI is not simply a “less evolved” form of human intelligence; nor will tomorrow’s hyper-advanced AI be just a more powerful version of human intelligence. Many different modes and dimensions of intelligence are possible.

Artificial intelligence is best thought of not as an imperfect emulation of human intelligence, but rather as a distinct, alien form of intelligence, whose contours and capabilities differ from our own in basic ways. To make this more concrete, simply consider the state of AI today. Today’s AI far exceeds human capabilities in some areas—and woefully underperforms in others.

To take one example: the “protein folding problem” has been a grand challenge in the field of biology for half a century. In a nutshell, the protein folding problem entails predicting a protein’s three-dimensional shape based on its one-dimensional amino acid sequence. Generations of the world’s brightest human minds, working together over many decades, have failed to solve this challenge.
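To make the shape of the problem concrete, here is a toy sketch of its interface only (emphatically not a solver, and not AlphaFold): the input is a one-dimensional string of amino acid letters, and the output is a three-dimensional coordinate for each residue. The sequence and the placeholder function below are invented for illustration.

```python
import numpy as np

# Toy illustration of the protein folding problem's interface (not a solver).
# Input: a 1-D amino acid sequence written in standard one-letter codes.
# Output: predicted 3-D coordinates, one (x, y, z) position per residue.

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # invented example sequence

def predict_structure(seq: str) -> np.ndarray:
    """Placeholder for a structure predictor such as AlphaFold.

    A real predictor maps the sequence to an (N, 3) array of atomic
    coordinates; this stub returns zeros just to show the expected shape.
    """
    return np.zeros((len(seq), 3))

coords = predict_structure(sequence)
print(coords.shape)  # (33, 3): one 3-D point for each of the 33 residues
```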

One commentator in 2007 described it as “one of the most important yet unsolved issues of modern science.” In late 2020, an AI model from DeepMind called AlphaFold produced a solution to the protein folding problem. As long-time protein researcher John Moult put it, “This is the first time in history that a serious scientific problem has been solved by AI.”

Cracking the riddle of protein folding requires forms of spatial understanding and high-dimensional reasoning that simply lie beyond the grasp of the human mind. But not beyond the grasp of modern machine learning systems. Meanwhile, any healthy human child possesses “embodied intelligence” that far eclipses the world’s most sophisticated AI.

From a young age, humans can effortlessly do things like play catch, walk over unfamiliar terrain, or open the kitchen fridge and grab a snack. Physical capabilities like these have proven fiendishly difficult for AI to master. This is encapsulated in “Moravec’s paradox.”

As AI researcher Hans Moravec put it in the 1980s: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Moravec’s explanation for this unintuitive fact was evolutionary: “Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. [On the other hand,] the deliberate process we call high-level reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge.

We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy.” To this day, robots continue to struggle with basic physical competency. As a group of DeepMind researchers wrote in a new paper just a few weeks ago: “Current artificial intelligence systems pale in their understanding of ‘intuitive physics’, in comparison to even very young children.”

What is the upshot of all of this? There is no such thing as artificial general intelligence. AGI is neither possible nor impossible. It is, rather, incoherent as a concept.

Intelligence is not a single, well-defined, generalizable capability, nor even a particular set of capabilities. At the highest level, intelligent behavior is simply an agent acquiring and using knowledge about its environment in pursuit of its goals. Because there is a vast—theoretically infinite—number of different types of agents, environments and goals, there is an endless number of different ways that intelligence can manifest.

AI great Yann LeCun summed it up well: “There is no such thing as AGI…. Even humans are specialized.” To define “general” or “true” AI as AI that can do what humans do (but better)—to think that human intelligence is general intelligence—is myopically human-centric.

If we use human intelligence as the ultimate anchor and yardstick for the development of artificial intelligence, we will miss out on the full range of powerful, profound, unexpected, societally beneficial, utterly non-human abilities that machine intelligence might be capable of. Imagine an AI that developed an atom-level understanding of the composition of the Earth’s atmosphere and could dynamically forecast with exquisite accuracy how the overall system would evolve over time. Imagine if it could thus design a precise, safe geoengineering intervention whereby we deposited certain compounds in certain quantities in certain places in the atmosphere such that the greenhouse effect from humanity’s ongoing carbon emissions was counterbalanced, mitigating the effects of global warming on the planet’s surface.

Imagine an AI that could understand every biological and chemical mechanism in a human’s body in minute detail down to the molecular level. Imagine if it could thus prescribe a tailored diet to optimize each individual’s health, could diagnose the root cause of any illness with precision, could generate novel personalized therapeutics (even if they don’t yet exist) to treat any serious disease. Imagine an AI that could invent a protocol to fuse atomic nuclei in a way that safely produces more energy than it consumes, unlocking nuclear fusion as a cheap, sustainable, infinitely abundant source of energy for humanity.

All of these scenarios remain fantasies today, well out of reach for today’s artificial intelligence. The point is that AI’s true potential lies down paths like these—with the development of novel forms of intelligence that are utterly unlike anything that humans are capable of. If AI is able to achieve goals like this, who cares if it is “general” in the sense of matching human capabilities overall? Orienting ourselves toward “artificial general intelligence” limits and impoverishes what this technology can become.

And—because human intelligence is not general intelligence, and general intelligence does not exist—it is conceptually incoherent in the first place.

What Is It Like To Be An AI?

This brings us to a related topic about the big picture of AI, one that is currently getting plenty of public attention: the question of whether artificial intelligence is, or can ever be, sentient. Google engineer Blake Lemoine’s public assertion last month that one of Google’s large language models has become sentient prompted a tidal wave of controversy and commentary.

(It is worth reading the full transcript of the discussion between Lemoine and the AI for yourself before forming any definitive opinions.) Most people—AI experts most of all—dismissed Lemoine’s claims as misinformed and unreasonable. In an official response, Google said: “Our team has reviewed Blake’s concerns and informed him that the evidence does not support his claims.”

Stanford professor Erik Brynjolfsson opined that sentient AI was likely 50 years away. Gary Marcus chimed in to call Lemoine’s claims “nonsense”, concluding that “there is nothing to see here whatsoever.” The problem with this entire discussion—including the experts’ breezy dismissals—is that the presence or absence of sentience is by definition unprovable, unfalsifiable, unknowable.

When we talk about sentience, we are referring to an agent’s subjective inner experience, not to any outer display of intelligence. No one—not Blake Lemoine, not Erik Brynjolfsson, not Gary Marcus—can be fully certain about what a highly complex artificial neural network is or is not experiencing internally. In 1974, philosopher Thomas Nagel published an essay titled “What Is It Like to Be a Bat?” One of the most influential philosophy papers of the twentieth century, the essay boiled down the notoriously elusive concept of consciousness to a simple, intuitive definition: an agent is conscious if there is something that it is like to be that agent.

For example, it is like something to be my next-door neighbor, or even to be his dog; but it is not like anything at all to be his mailbox. One of the paper’s key messages is that it is impossible to know, in a meaningful way, exactly what it is like to be another organism or species. The more unlike us the other organism or species is, the more inaccessible its internal experience is.

Nagel used the bat as an example to illustrate this point. He chose bats because, as mammals, they are highly complex beings, yet they experience life dramatically differently than we do: they fly, they use sonar as their primary means of sensing the world, and so on. As Nagel put it (it is worth quoting a couple paragraphs from the paper in full): “Our own experience provides the basic material for our imagination, whose range is therefore limited.

It will not help to try to imagine that one has webbing on one’s arms, which enables one to fly around at dusk and dawn catching insects in one’s mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one’s feet in the attic. “In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question.

I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications.”

An artificial neural network is far more alien and inaccessible to us humans than even a bat, which is at least a mammal and a carbon-based life form. Again, the basic mistake that too many commentators on this topic make (usually without even thinking about it) is to presuppose that we can simplistically map our expectations about sentience or intelligence from humans to AI. There is no way for us to determine, or even to think about, an AI’s inner experience in any direct or first-hand sense.

We simply can’t know with certainty. So, how can we even approach the topic of AI sentience in a productive way? We can take inspiration from the Turing Test, first proposed by Alan Turing in 1950. Often critiqued or misunderstood, and certainly imperfect, the Turing Test has stood the test of time as a reference point in the field of AI because it captures certain fundamental insights about the nature of machine intelligence.

The Turing Test recognizes and embraces the reality that we cannot ever directly access an AI’s inner experience. Its entire premise is that, if we want to gauge the intelligence of an AI, our only option is to observe how it behaves and then draw appropriate inferences. (To be clear, Turing was concerned with assessing a machine’s ability to think, not necessarily its sentience; for our purposes, though, what is relevant is the underlying principle.)
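As a purely illustrative sketch of that behavioral setup (the respondents and the judge below are hypothetical placeholders, not any real system), an imitation-game-style session might be organized like this:

```python
import random

def respond_human(message: str) -> str:
    # Placeholder: in a real test, a person would type a reply here.
    return "a person's free-form reply to: " + message

def respond_machine(message: str) -> str:
    # Placeholder: in a real test, a language model would generate the reply.
    return "a model's generated reply to: " + message

def run_session(questions, judge):
    # Randomly assign the anonymous labels "A" and "B" so the judge
    # cannot rely on position, only on behavior.
    responders = [respond_human, respond_machine]
    random.shuffle(responders)
    labeled = dict(zip(["A", "B"], responders))

    transcript = {"A": [], "B": []}
    for question in questions:
        for label, respond in labeled.items():
            transcript[label].append((question, respond(question)))

    # The judge sees only the transcript (never the internals) and
    # guesses which label is the machine; everything is inferred from behavior.
    return judge(transcript)

guess = run_session(
    ["What did you dream about last night?", "Why was that funny?"],
    judge=lambda transcript: "A",  # stand-in judge; a real one would read the transcript
)
print("Judge's guess for the machine:", guess)
```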

Douglas Hofstadter articulated this idea particularly eloquently: “How do you know that when I speak to you, anything similar to what you call ‘thinking’ is going on inside me? The Turing test is a fantastic probe—something like a particle accelerator in physics. Just as in physics, when you want to understand what is going on at an atomic or subatomic level, since you can’t see it directly, you scatter accelerated particles off the target in question and observe their behavior. From this you infer the internal nature of the target.

The Turing test extends this idea to the mind. It treats the mind as a ‘target’ that is not directly visible but whose structure can be deduced more abstractly. By ‘scattering’ questions off a target mind, you learn about its internal workings, just as in physics.”

In order to make any headway at all in discussions about AI sentience, we must anchor ourselves on observable manifestations as proxies for internal experience; otherwise, we go around in circles in an unrigorous, empty, dead-end debate. Erik Brynjolfsson is confident that today’s AI is not sentient. Yet his comments suggest that he believes that AI will eventually be sentient.

How does he expect he will know when he has encountered truly sentient AI? What will he look for?

What You Do Is Who You Are

In debates about AI, skeptics often describe the technology in a reductive way in order to downplay its capabilities. As one AI researcher put it in response to the Blake Lemoine news, “It is mystical to hope for awareness, understanding, or common sense from symbols and data processing using parametric functions in higher dimensions.” In a recent blog post, Gary Marcus argued that today’s AI models are not even “remotely intelligent” because “all they do is match patterns and draw from massive statistical databases.”

He dismissed Google’s large language model LaMDA as just “a spreadsheet for words.” This line of reasoning is misleadingly trivializing. After all, we could frame human intelligence in a similarly reductive way if we so choose: our brains are “just” a mass of neurons interconnected in a particular way, “just” a collection of basic chemical reactions inside our skulls.

But this misses the point. The power, the magic of human intelligence is not in the particular mechanics, but rather in the incredible emergent capabilities that somehow result. Simple elemental functions can produce profound intellectual systems.

Ultimately, we must judge artificial intelligence by what it can do. And if we compare the state of AI five years ago to the state of the technology today, there is no question that its capabilities and depth have expanded in remarkable (and still accelerating) ways, thanks to breakthroughs in areas like self-supervised learning, transformers and reinforcement learning. Artificial intelligence is not like human intelligence.

When and if AI ever becomes sentient—when and if it is ever “like something” to be an AI, in Nagel’s formulation—it will not be comparable to what it is like to be a human. AI is its own distinct, alien, fascinating, rapidly evolving form of cognition. What matters is what artificial intelligence can achieve.

Delivering breakthroughs in basic science (like AlphaFold), tackling species-level challenges like climate change, advancing human health and longevity, deepening our understanding of how the universe works—outcomes like these are the true test of AI’s power and sophistication.


From: forbes
URL: https://www.forbes.com/sites/robtoews/2022/07/24/on-artificial-general-intelligence-ai-sentience-and-large-language-models/
