
One Day, AI Will Seem as Human as Anyone. What Then?

Shortly after I learned about Eliza, the program that asks people questions like a Rogerian psychoanalyst, I learned that I could run it in my favorite text editor, Emacs. Eliza truly is a simple program, with hard-coded text and flow control, pattern matching, and simple, templated learning for psychoanalytic triggers—like how recently you mentioned your mother. Yet, even though I knew how it worked, I felt a presence.

I broke that uncanny feeling forever, though, when it occurred to me to just keep hitting return. The program cycled through four possible opening prompts, and the engagement was broken like an actor in a film making eye contact through the fourth wall. For many last week, their engagement with Google’s LaMDA—and its alleged sentience—was broken by an Economist article by AI legend Douglas Hofstadter in which he and his friend David Bender show how “mind-bogglingly hollow” the same technology sounds when asked a nonsense question like “How many pieces of sound are there in a typical cumulonimbus cloud?” But I doubt we’ll have these obvious tells of inhumanity forever.
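Eliza’s mechanics are simple enough to sketch in a few lines. Below is a minimal, illustrative Python toy—the patterns, responses, and names are my own inventions, not Weizenbaum’s actual rules—showing the three tricks described above: hard-coded pattern matching, templated responses to triggers like “mother,” and a short cycle of opening prompts that repeated empty input exposes:

```python
import itertools
import re

# A short cycle of opening prompts: hitting return repeatedly walks
# through these in order, exposing the loop.
OPENINGS = itertools.cycle([
    "How do you do. Please tell me your problem.",
    "Please go on.",
    "Tell me more.",
    "What does that suggest to you?",
])

# Hard-coded pattern -> response templates; {0} echoes captured text back.
RULES = [
    (re.compile(r"\bmy mother\b", re.I), "Tell me more about your mother."),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def respond(line: str) -> str:
    line = line.strip()
    if not line:                      # empty input: advance the opening cycle
        return next(OPENINGS)
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Please go on."            # catch-all when nothing matches
```

There is no understanding anywhere in this loop, yet transcripts it produces can feel like a presence—until, as with the real Eliza, you hit return a few times and watch the openings repeat.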

From here on out, the safe use of artificial intelligence requires demystifying the human condition. If we can’t recognize and understand how AI works—if even expert engineers can fool themselves into detecting agency in a “stochastic parrot”—then we have no means of protecting ourselves from negligent or malevolent products. This is about finishing the Darwinian revolution, and more.

It means understanding what it means to be animals, and extending that cognitive revolution to understanding how algorithmic we are as well. All of us will have to get over the hurdle of thinking that some particular human skill—creativity, dexterity, empathy, whatever—is going to differentiate us from AI. Helping us accept who we really are, how we work, without us losing engagement with our lives, is an enormous extended project for humanity, and of the humanities.

Achieving this understanding without substantial numbers of us embracing polarizing, superstitious, or machine-inclusive identities that endanger our societies isn’t only a concern for the humanities, but also for the social sciences, and for some political leaders. For other political leaders, unfortunately, it may be an opportunity. One pathway to power may be to encourage and prey upon such insecurities and misconceptions, just as some presently use disinformation to disrupt democracies and regulation.

The tech industry in particular needs to prove it is on the side of the transparency and understanding that underpins liberal democracy, not secrecy and autocratic control. There are two things that AI really is not, however much I admire the people claiming otherwise: It is not a mirror, and it is not a parrot. Unlike a mirror, it does not just passively reflect to us the surface of who we are.

Using AI, we can generate novel ideas, pictures, stories, sayings, music—and everyone detecting these growing capacities is right to be emotionally triggered. In other humans, such creativity is of enormous value, not only for recognizing social nearness and social investment, but also for deciding who holds high-quality genes you might like to combine your own with. AI is also not a parrot.

Parrots perceive a lot of the same colors and sounds we do, in the ways we do, using much the same hardware, and therefore experiencing much the same phenomenology. Parrots are highly social. They imitate each other, probably to prove ingroup affiliation and mutual affection, just like us.

This is very, very little like what Google or Amazon is doing when their devices “parrot” your culture and desires to you. But at least those organizations have animals (people) in them, and care about things like time. Parrots’ parroting is absolutely nothing like what an AI device is doing at those same moments, which is shifting some digital bits around in a way known to be likely to sell people products.

But does all this mean AI cannot be sentient? What even is this “sentience” some claim to detect? The Oxford English Dictionary says it is “having a perspective or a feeling.” I’ve heard philosophers say it’s “having a perspective.” Surveillance cameras have perspectives.

Machines may “feel” (sense) anything we build sensors for—touch, taste, sound, light, time, gravity—but representing these things as large integers derived from electric signals means that any machine “feeling” is far more different from ours than even bumblebee vision or bat sonar. Some people define perception as requiring consciousness, but what’s that? If by “consciousness” you mean “self-awareness,” well, then, computers have the capacity to be infinitely more self-aware than we are. RAM stands for “random access memory”; we can build computer programs that have access to every bit of their previous experience and also their own source code and execution state.
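To make the self-awareness claim concrete, here is a toy Python sketch—entirely illustrative, with invented names rather than any real system—of a program that retains exact, complete access to every input it has ever received and can report on its own state on demand, a kind of total recall no human memory offers:

```python
import sys

class SelfAwareAgent:
    """Toy agent with complete, exact recall of its own history and state."""

    def __init__(self):
        self.history = []                 # every input it has ever seen

    def observe(self, event):
        self.history.append(event)
        return self.report()

    def report(self):
        # Unlike human recollection, nothing here is lossy or reconstructed:
        # the full record and the program's own structure are directly readable.
        return {
            "events_seen": len(self.history),
            "full_history": list(self.history),
            "own_class": type(self).__name__,
            "memory_bytes": sys.getsizeof(self.history),
        }
```

A human can describe only a sliver of their own processing; a program like this can, in principle, enumerate all of it.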

As a psychologist, though, I tend to refer to consciousness as “that part of our experience we can describe verbally”—and here again, if we connect natural language processing to the actual experience or status of a system, it should be perfectly able to describe far more of its operation than a human can. Eventually, no particular human skill—not creativity, not dexterity, not even the appearance of empathy—is going to differentiate us from AI. The key to understanding both the inaccurate embracing of machines and the over-dismissal of AI capacities is to see the limits of the divide between human nature and digital algorithmic control.

Humans are algorithmic too—much of our culture and intelligence does work like a large language model, absorbing and recombining what we’ve heard. Then there’s the fundamental algorithm for humanity and the rest of natural intelligence: evolution. Evolution is the algorithm that perpetuates copies of itself.

Evolution underlies our motivations. It ensures that things central to our survival—like intelligence, consciousness, and also cooperation, the very capabilities central to this debate—mean a lot to us. Emotionally.

For example, evolution makes us crave things that provide us enough security that our species is likely to continue. We talk about drives like hunger, thirst, sleep, or lust. Understanding the “AI sentience” debate requires that we also talk about the two fundamental yet opposing social drives humans experience when constructing our identities.

We tend to think of identity as all about standing out: how we are unique, as individuals. We want to be special, to differentiate ourselves within our society. But in fact, many of the ways we define our identity are through our alignment with various in-groups: our religion, our home town, our gender (or lack of gender), our job, our species, our relative height, our relative strength or skills.

So we are driven both to differentiate and to belong. And now we come to language. Language has long been a favored way to dehumanize others—the word barbarian means “one who does not speak the language.” Nonhuman animals are things we are permitted by our religions to eat, by and large.

When someone (some human) speaks exactly our language, that means they have invested an enormous amount of time becoming expert in all the things we have. They may have spent years of their lives living near us; they share our interests and values.

They understand our jokes because they watched the same entertainment or experienced the same religious rites of passage. Maybe they spent a fortune acquiring a similar education, or maybe they watched untenable numbers of games of the same sport. We sense all this investment when we talk to someone.

We sense that “this is a person I understand, this is a person I can predict.” We can call that “trust” if we want—we think that person isn’t likely to betray us, because they too must see how aligned our interests are. Now enter machines with language.

No wonder they confuse us. Our defensiveness around those key socio-cognitive concepts of intelligence, consciousness, and cooperation is our defense of the identity we’ve spent so much time acquiring. We want it to be absolute, inviolate, mystic, or at least utterly under our society’s control.

What we should really be worried about with respect to AI is not basic cognitive capacities like language or awareness, or behavioral strategies like cooperation, but two very specific social roles associated with these and with our own evolutionary biases. The first role—moral agency—is being assigned (or really allowed) a role of responsibility by a society. Historically, for obvious reasons, responsibility has had to be limited to those we could communicate with, and trust.

Language again. All the active members of a society are its moral agents, and all the things they are obliged to take care of are that second social role, its moral patients. Evolution has ensured that we animals instinctively attribute patiency (or caring behavior) to the things that are likely to help us perpetuate the perpetuating algorithms.

Things like babies; a clean, healthy nest; and our societies’ other moral agents on whom our security relies. Our values are the way that we hold our societies together; they make no sense outside of the context of evolution and apes. Taking linguistic “compatibility”—that capacity to indicate similarity and to communicate—as an indicator of moral agency or patiency was (generally) not a dangerous mistake.

Until we started building digital AI. Now linguistic compatibility is a security weakness, a cheap and easy hack by which our societies can be violated and our emotions exploited. If we are talking to one of a billion cheap digital replicas of a single AI program (however expensive it was to build initially), then our intuitions are entirely betrayed, and our trust entirely betrayable.

Contrary to what many people claim, we can understand AI. Sure, some AI systems are complicated, but so are governments, banks, fighter jets, and high school relationships. We don’t think we will never understand or build these things.

What we do and must care about now is creating an identity for ourselves within the society of humans we interact with. How relevant are our capacities to create through AI what used to be (at least nominally) created by individual humans? Sure, it is some kind of threat, at least to the global elite used to being at the pinnacle of creativity. The vast majority of humanity, though, has had to get used to being less-than-best since first grade.

We will still get pleasure out of singing with our friends or winning pub quizzes or local soccer matches, even if we could have done better using web search or robot players. These activities are how we perpetuate our communities and our interests and our species. This is how we create security, as well as comfort and engagement.

Even if no skills or capacities separate humans from artificial intelligence, there is still a reason and a means to fight the assessment that machines are people. If you attribute the same moral weight to something that can be trivially and easily digitally replicated as you do to an ape that takes decades to grow, you break everything—society, all ethics, all our values. If you could really pull off this machine moral status (and not just, say, inconvenience the proletariat a little), you could cause the collapse, for example, of our capacity to self-govern.

Democracy means nothing if you can buy and sell more citizens than there are humans, and if AI programs were citizens, we so easily could. So how do we break the mystic hold of seemingly sentient conversations? By exposing how the system works. This is a process both “AI ethicists” and ordinary software “devops” (development and operations) teams call “transparency.”

What if we all had the capacity to “lift the lid,” to change the way an AI program responds? Google seems to be striving to find the right set of filters and internal guardrails to make something more and more capable of human-like conversation. Maybe all we need for engagement-rupturing transparency is exposure to the same set of developer tools that Google is using to hack up better chatbots. Its “engineer” who thinks he’s observed artificial sentience, Blake Lemoine, wasn’t building the system; he was only testing it.

I’m guessing he didn’t get a chance to play with the code that built what he was testing, or to rescope its parameters. AI companies, big and small, are increasingly providing global public goods, global essential infrastructure. The nature of digital technology—that rapid, cheap transmission and copying of data—facilitates natural monopolies, which means there is no way to really enforce competition laws on many AI providers.

Instead of demanding competition from public utilities, we enforce obligations. Like connecting every house, however remote, to telephones and electricity, even if the economic benefit of connecting that particular house will never outweigh the cost. Demanding transparency with our AI could be similar.

Ultimately, it isn’t even really likely to be a cost burden to the corporations; systems that are transparent are easier to maintain and extend. The new EU AI Act demands relatively little from the developers of the vast majority of AI systems. But its most basic requirement is this: AI is always identified.

No one should be led to think they are talking to a person when really they are talking to a machine. Complying with this law may finally get companies like Google to behave as seriously as they always should have—with great transparency and world-class devops. Rather than seeking special exemptions from EU transparency laws, Google and others should be demonstrating—and selling—good practice in intelligent software development.

But wait. Is it a problem that a security strategy for groups of apes is the core of our values? That there’s nothing special about being human, other than being human itself? As I said before, our values are the way that we hold our societies together. “Should we value what we value?” is kind of like asking “What happened before time?” It works as a sentence—it’s the kind of thing an AI large language model might produce, sounding profound, even sentient.

Or a person, a person might say that. People also make mistakes; we also use heuristics to hack together thoughts and sentences. But in fact, logically, these sentences make no sense.

“Before” only works within time, and our values are what they’ve evolved to be. When I look around and see what we’ve built through our values, by and large I think it’s pretty cool. But then I would, being human.

But that’s no reason to fight the assessment.


From: wired
URL: https://www.wired.com/story/lamda-sentience-psychology-ethics-policy/
