On May 30, a research organization called the Center for AI Safety released a 22-word statement signed by a number of prominent “AI scientists,” including Sam Altman, the CEO of OpenAI; Demis Hassabis, the CEO of Google DeepMind; and Geoffrey Hinton, who has been described as the “godfather” of AI. It reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This statement made headlines around the world, with many media reports suggesting that experts now believe “AI could lead to human extinction,” to quote a CNN article.
What should you make of it? A full dissection of the issue — showing, for example, that such statements distract from the many serious harms that AI companies have already caused — would require more time and space than I have here. For now, it’s worth taking a closer look at what exactly the word “extinction” means, because the sort of extinction that some notable signatories believe we must avoid at all costs isn’t what most people have in mind when they hear the word. Understanding this is a two-step process.
First, we need to make sense of what’s behind this statement. The short answer concerns a cluster of ideologies that Dr. Timnit Gebru and I have called the “TESCREAL bundle.” The term is admittedly clunky, but the concept couldn’t be more important, because this bundle of overlapping movements and ideologies has become hugely influential among the tech elite. And since society is being shaped in profound ways by the unilateral decisions of these unelected oligarchs, the bundle is having a huge impact on the world more generally.

The acronym stands for “transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism.” That’s a mouthful, but the essence of TESCREALism — meaning the worldview that arises from this bundle — is simple enough: at its heart is a techno-utopian vision of the future in which we re-engineer humanity, colonize space, plunder the cosmos, and establish a sprawling intergalactic civilization full of trillions and trillions of “happy” people, nearly all of them “living” inside enormous computer simulations. In the process, all our problems will be solved, and eternal life will become a real possibility. This is not an exaggeration.
It’s what Sam Altman refers to when he writes that, with artificial general intelligence (AGI), “we can colonize space. We can get fusion to work and solar [energy] to mass scale. We can cure all human diseases.
We can build new realities. We are only a few breakthroughs away from abundance at scale that is difficult to imagine.” It’s what Elon Musk implicitly endorsed when he retweeted an article by Nick Bostrom which argues that we have a moral obligation to spread into the cosmos as soon as possible and build “planet-sized” computers running virtual-reality worlds in which 10^38 digital people could exist per century.
(That’s a 1 followed by 38 zeros.) According to the tweet, this is “likely the most important paper ever written.” When Twitter founder Jack Dorsey joined Musk in suggesting that we have a “duty” to “extend” and “maintain the light of consciousness to make sure it continues into the future,” he was referencing a central tenet of the TESCREAL worldview.
I don’t think that everyone who signed the Center for AI Safety’s short statement is a TESCREAList — meaning someone who accepts more than one of the “TESCREAL” ideologies — but many notable signatories are, and at least 90% of the Center for AI Safety’s funding comes from the TESCREAL community itself. Furthermore, worries that AGI could cause our extinction were originally developed and popularized by TESCREALists like Bostrom, whose 2014 bestseller “Superintelligence” outlined the case for why superintelligent AGI could turn on its makers and kill every human on Earth. Here’s the catch-22: If AGI doesn’t destroy humanity, TESCREALists believe it will usher in the techno-utopian world described above.
In other words, we probably need to build AGI to create utopia, but if we rush into building AGI without proper precautions, the whole thing could blow up in our faces. This is why they’re worried: There’s only one way forward, yet the path to paradise is dotted with landmines.
With this background in place, we can move on to the second issue: When TESCREALists talk about the importance of avoiding human extinction, they don’t mean what you might think. The reason is that there are different ways of defining “human extinction.” For most of us, “human extinction” means that our species, Homo sapiens, disappears entirely and forever, which many of us see as a bad outcome we should try to avoid.
But within the TESCREAL worldview, it denotes something rather different. Although there are, as I explain in my forthcoming book, at least six distinct types of extinction that humanity could undergo, only three are important for our purposes:

Terminal extinction: This is what I referenced above. It would occur if our species were to die out forever. Homo sapiens is no more; we disappear just like the dinosaurs and dodo before us, and this remains the case forever.

Final extinction: This would occur if terminal extinction were to happen — again, our species stops existing — and we don’t have any successors that take our place. The importance of this extra condition will become apparent shortly.

Normative extinction: This would occur if we were to have successors, but these successors were to lack some attribute or capacity that one considers to be very important — something that our successors ought to have, which is why it’s called “normative.”

The only forms of extinction that the TESCREAL ideologies really care about are the second and third, final and normative extinction. They do not, ultimately, care about terminal extinction — about whether our species itself continues to exist or not.
To the contrary, the TESCREAL worldview would see certain scenarios in which Homo sapiens disappears entirely and forever as good, because that would indicate that we have progressed to the next stage in our evolution, which may be necessary to fully realize the techno-utopian paradise they envision. There’s a lot to unpack here, so let’s make things a little more concrete. Imagine a scenario in which we use genetic engineering to alter our genes.
Over just one or two generations, a new species of genetically modified “posthumans” arises. These posthumans might also integrate various technologies into their bodies, perhaps connecting their brains to the internet via “brain-computer interfaces,” which Musk’s company Neuralink is trying to develop. They might also become immortal through “life-extension” technologies, meaning that they could still die from accidents or acts of violence but not from old age, as they’d be ageless.
Eventually, then, after these posthuman beings appear on the scene, the remaining members of Homo sapiens die out. This would be terminal extinction but not final extinction, since Homo sapiens would have left behind a successor: this newly created posthuman species. Would this be bad, according to TESCREALists? No.
In fact, it would be very desirable, since posthumanity would supposedly be “better” than humanity. This is not only a future that die-hard TESCREALists wouldn’t resist, it’s one that many of them hope to bring about. The whole point of transhumanism, the backbone of the TESCREAL bundle, is to “transcend” humanity.
As the TESCREAList Toby Ord writes in his 2020 book “The Precipice,” “forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential,” adding that “rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today.” Along similar lines, Nick Bostrom asserts that “the permanent foreclosure of any possibility of … transformative change of human biological nature may itself constitute an existential catastrophe.”
In other words, the failure to create a new posthuman species would be an enormous moral tragedy, since it would mean we failed to fulfill most of our grand cosmic “potential” in the universe. Of course, morphing into a new posthuman species wouldn’t necessarily mean that Homo sapiens disappears. Perhaps this new species will coexist with “legacy humans,” as some TESCREALists would say.
They could keep us in a pen, as we do with sheep, or let us reside in their homes, the way our canine companions live with us today. The point, however, is that if Homo sapiens were to go the way of the dinosaurs and the dodo, that would be no great loss from the TESCREAList point of view. Terminal extinction is fine, so long as we have these successors.
Or consider a related scenario: Computer scientists create a population of intelligent machines, after which Homo sapiens dwindles in numbers until no one is left. In other words, rather than evolving into a new posthuman species, we create a distinct lineage of machine replacements. Would this be bad, on the TESCREAList view?
In his book “Mind Children,” the roboticist Hans Moravec argued that biological humans will eventually be replaced by “a postbiological world dominated by self-improving, thinking machines,” resulting in “a world in which the human race has been swept away by the tide of cultural change, usurped by its own artificial progeny.” Moravec thinks this would be terrific, even describing himself as someone “who cheerfully concludes that the human race is in its last century, and goes on to suggest how to help the process along.” Although Moravec was writing before TESCREALism took shape, his ideas have been highly influential within the TESCREAL community, and indeed the vision that he outlines could be understood as a proto-TESCREAL worldview.
A more recent example comes from the philosopher Derek Shiller, who works for The Humane League, an effective-altruism-aligned organization. In a 2017 paper, Shiller argues that “if it is within our power to provide a significantly better world for future generations at a comparatively small cost to ourselves, we have a strong moral reason to do so. One way of providing a significantly better world may involve replacing our species with something better.”
He then offers a “speculative argument” for why we should, in fact, “engineer our extinction so that our planet’s resources can be devoted to making artificial creatures with better lives.” Along similar lines, the TESCREAList Larry Page — co-founder of Google, which owns DeepMind, one of the companies trying to create AGI — passionately contends that “digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good.” According to Page, “if life is ever going to spread throughout our Galaxy and beyond, which … it should, then it would need to do so in digital form.”
Consequently, a major worry for Page is that “AI paranoia would delay the digital utopia and/or cause a military takeover of AI that would fall foul of Google’s ‘Don’t be evil’ slogan.” (Note that “Don’t be evil” was “removed from the top of Google’s Code of Conduct” in 2018.) Some have called this position “digital utopianism.”
However one labels it, Page’s claim that we will need to become digital beings, or create digital successors, in order to spread throughout the galaxy is correct. While colonizing our planetary neighbor, Mars, might be possible as biological beings, building an interstellar or intergalactic civilization will almost certainly require our descendants to be digital in nature. Outer space is far too hostile an environment for squishy biological creatures like us to survive for long periods, and traveling from Earth to the nearest large galaxy — the Andromeda galaxy — would require some 10 billion years at current propulsion speeds.
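As a rough sanity check on that figure (mine, not the article’s), here is a back-of-envelope sketch in Python. It assumes Andromeda is about 2.5 million light-years away and takes Voyager 1’s roughly 17 km/s as a stand-in for “current propulsion speeds”:

```python
# Back-of-envelope travel time to Andromeda at "current propulsion speeds."
# Assumed inputs (not from the article): distance ~2.5 million light-years,
# speed ~17 km/s (roughly Voyager 1's speed relative to the sun).

LIGHT_YEAR_KM = 9.461e12             # kilometers per light-year
SECONDS_PER_YEAR = 3.156e7           # seconds in ~365.25 days

distance_km = 2.5e6 * LIGHT_YEAR_KM  # Earth to Andromeda
speed_km_per_s = 17.0                # probe launched with today's rockets

travel_years = distance_km / speed_km_per_s / SECONDS_PER_YEAR
print(f"~{travel_years:.1e} years")  # ~4.4e10: tens of billions of years
```

The exact answer depends on what counts as “current propulsion”; a faster probe shrinks it toward the 10-billion-year figure above, but every plausible input yields billions to tens of billions of years, far beyond any biological lifespan.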
Not only would digital beings be able to tolerate the dangerous conditions of intergalactic space, they would effectively be immortal, making such travel entirely feasible. This matters because, as noted, at the heart of TESCREALism is the imperative to spread throughout the whole accessible universe, plundering our “cosmic endowment” in the process, and creating trillions upon trillions of future “happy” people. Realizing the utopian dream of the TESCREAL bundle will require the creation of digital posthumans; without them, the dream cannot become a reality.
Perhaps these posthumans will keep us around in pens or as pets, but maybe they won’t. And if they don’t, TESCREALists would say: So much the better. This brings us to another crucial point, directly linked to the supposed threat posed by AGI.
For TESCREALists, it doesn’t just matter that we have successors, such as digital posthumans; it also matters what these successors are like. For example, imagine that we replace ourselves with a population of intelligent machines that, because of their design, lack the capacity for consciousness. Many TESCREALists would insist that “value” cannot exist without consciousness.
If there are no conscious beings to appreciate art, wonder in awe at the universe or experience things like happiness, then the world wouldn’t contain any value. Imagine two worlds: The first is our world. The second is exactly like our world in every way except one: The “humans” going about their daily business, conducting scientific experiments, playing music, writing poetry, hanging out at the bar, rooting for their favorite sports teams and so on have literally no conscious experiences.
They behave exactly like we do, but there’s no “felt quality” to their inner lives. They have no consciousness, and in that sense they are no different from rocks. Rocks — we assume — don’t have anything it “feels like” to be them, sitting by the side of the road or tumbling down a mountain.
The same goes for these “humans,” even if they are engaged in exactly the sorts of activities we are. They are functionally equivalent to zombies — what are called “philosophical zombies.” This is the only difference between these two worlds, and most TESCREALists would argue that the second world is utterly valueless.
Hence, if Homo sapiens were to replace itself with a race of intelligent machines, but these machines were incapable of consciousness, then the outcome would be no better than if we had undergone final extinction, whereby Homo sapiens dies out entirely without leaving behind any successors at all. That’s the idea behind the third type of extinction, “normative extinction,” which would happen if humans do have successors, but these successors lack something they ought to have, such as consciousness. Other TESCREALists will point to additional attributes that our successors should have, such as a certain kind of “moral status.” In fact, many TESCREALists literally define “humanity” as meaning “Homo sapiens and whatever successors we might have, so long as they are conscious, have a certain moral status and so on.” Consequently, when TESCREALists talk about “human extinction,” they aren’t actually talking about Homo sapiens but this broader category of beings. Importantly, this means that Homo sapiens could disappear entirely and forever without “human extinction” (by this definition) having happened.
As long as we have successors, and these successors possess the right kind of attributes or capacities, no tragedy will have occurred. Put differently — and this brings us full circle — what ultimately matters to TESCREALists isn’t terminal extinction, but final and normative extinction. Those are the only two types of extinction that, if they were to occur, would constitute an “existential catastrophe.”

Here’s how all this connects to the current debate surrounding AGI: Right now, the big worry of TESCREAL “doomers” is that we might accidentally create an AGI with “misaligned” goals, meaning an AGI that could behave in a way that inadvertently kills us.
For example, if one were to give AGI the harmless-sounding goal of maximizing the total number of paperclips that exist, TESCREALists argue that it would immediately kill every person on Earth, not because the AGI “hates” you but because “you are made out of atoms which it can use for something else,” namely paperclips. In other words, it would kill us simply because our bodies are full of useful resources: roughly a billion billion billion atoms.
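That “billion billion billion atoms” figure, about 10^27, is easy to sanity-check. Here is a minimal back-of-envelope sketch of my own (the assumed body mass and average atomic mass are illustrative, not from the article’s sources):

```python
# Rough count of atoms in a human body ("a billion billion billion").
# Assumed inputs (not from the article): ~70 kg body mass, average atomic
# mass ~10 g/mol (bodies are mostly hydrogen, oxygen and carbon).

AVOGADRO = 6.022e23                # atoms per mole
body_mass_g = 70_000.0             # ~70 kg adult
mean_atomic_mass_g_per_mol = 10.0  # crude weighted average

atom_count = body_mass_g / mean_atomic_mass_g_per_mol * AVOGADRO
print(f"~{atom_count:.1e} atoms")  # ~4e27: a few billion billion billion
```

However you slice the estimate, the order of magnitude holds, which is all the paperclip story needs.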
The important point here is that if a “misaligned” AGI were to inadvertently destroy us, the outcome would be terminal extinction but not final extinction. Why? Because Homo sapiens would no longer exist, yet we would have left behind a successor — the AGI! A successor is anything that succeeds or comes after us, and since the AGI that kills us would continue to exist after we are all dead, we wouldn’t have undergone final extinction. Indeed, Homo sapiens would be gone precisely because we avoided final extinction, as our successor is what murdered us — a technological case of parricide. However, since in this silly example our AGI successor would do nothing but make paperclips, this would be a case of normative extinction.
It’s certainly not the future most TESCREALists want to create. It’s not the utopia in which trillions and trillions of conscious posthumans with a moral status similar to ours clutter every corner of the accessible universe. This is the importance of normative extinction: To bequeath the world to a poorly designed AGI would be just as catastrophic as if our species were to die out without leaving behind any successors at all.
Put differently, the threat of “misaligned” AGI is that Homo sapiens disappears and we bequeath the world to a successor, but this successor lacks something necessary for the rest of cosmic history to have “value.” So that’s the worry. The key point I want to make here is that Homo sapiens plays no significant role in the grand vision of TESCREALism even if everything goes just right.
Rather, TESCREALists see our species as nothing more than a springboard to the next “stage” of “evolution,” a momentary transition between current biological life and future digital life, which is necessary to fulfill our “longterm potential” in the cosmos. As Bostrom writes, transhumanists “view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman, beings with vastly greater capacities than present human beings have.”

Transhumanism, once again, is the backbone of the TESCREAL bundle, and my guess is that virtually all TESCREALists believe that the inevitable next step in our story is to become digital, which probably means casting aside Homo sapiens in the process. Furthermore, many hope this transition begins in the near future — literally within our lifetimes.
One reason is that a near-term transition to digital life could enable TESCREALists living today to become immortal by “uploading” their minds to a computer. Sam Altman, for example, was one of 25 people who, as of 2018, had signed up to have their brains preserved by a company called Nectome. As an MIT Technology Review article notes, Altman feels “pretty sure minds will be digitized in his lifetime.”
Another reason is that creating a new race of digital beings, whether through mind-uploading or by developing more advanced AI systems than GPT-4, might be necessary to keep the engines of scientific and technological “progress” roaring. In his recent book “What We Owe the Future,” the TESCREAList William MacAskill argues that in order to counteract global population decline, “we might develop artificial general intelligence (AGI) that could replace human workers — including researchers. This would allow us to increase the number of ‘people’ working on R&D as easily as we currently scale up production of the latest iPhone.” In fact, the explicit aim of OpenAI is to create AGI “systems that outperform humans at most economically valuable work” — in other words, to replace biological humans in the workplace.

Later in his book, MacAskill suggests that our destruction of the natural world might actually be net positive, which points to a broader question of whether biological life in general — not just Homo sapiens in particular — has any place in the “utopian” future envisioned by TESCREALists. Here’s what MacAskill says: “It’s very natural and intuitive to think of humans’ impact on wild animal life as a great moral loss. But if we assess the lives of wild animals as being worse than nothing on average, which I think is plausible (though uncertain), then we arrive at the dizzying conclusion that from the perspective of the wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing.”

So where does this leave us? The Center for AI Safety released a statement declaring that “mitigating the risk of extinction from AI should be a global priority.” But this conceals a secret: The primary impetus behind such statements comes from the TESCREAL worldview (even though not all signatories are TESCREALists), and within the TESCREAL worldview, the only thing that matters is avoiding final and normative extinction — not terminal extinction, whereby Homo sapiens itself disappears entirely and forever.
Ultimately, TESCREALists aren’t too worried about whether Homo sapiens exists or not. Indeed, our disappearance could be a sign that something’s gone very right — so long as we leave behind successors with the right sorts of attributes or capacities.
If you love or value Homo sapiens, the human species as it exists now, you should be wary of TESCREALists warning about “extinction.” Read such statements with caution. On the TESCREAL account, if a “misaligned” AGI were to kill us next year, the great tragedy wouldn’t be that Homo sapiens no longer exists.
It would be that we disappeared without having created successors to realize our “vast and glorious” future — to quote Toby Ord once again — through colonizing space, plundering the universe, and maximizing “value.” If our species were to cease existing but leave behind such successors, that would be a cause for rejoicing. It would mean that we’d taken a big step toward fulfilling our “longterm potential” in the universe.
I, personally, would like to see our species stick around. I’m not too keen on Homo sapiens being cast aside for something the TESCREALists describe as “better.” Indeed, the word “better” is normative: its meaning depends on the particular values that one accepts.
What looks “better,” or even “utopian,” from one perspective might be an outright dystopian nightmare from another. I would agree with philosopher Samuel Scheffler that “we human beings are a strange and wondrous and terrible species.” Homo sapiens is far from perfect.
One might even argue that our species name is a misnomer, because it literally translates as “wise human,” which we surely have not proven to be. But posthumans would have their own flaws and shortcomings. Perhaps being five times “smarter” than us would mean they’d be five times better at doing evil.
Maybe developing the technological means to indefinitely extend posthuman lifespans would mean that political prisoners could be tortured relentlessly for literally millions of years. Who knows what unspeakable horrors might haunt the posthuman world? So whenever you hear people talking about “human extinction,” especially those associated with the TESCREAList worldview, you should immediately ask: What values are concealed behind statements that avoiding “human extinction” should be a global priority? What do those making such claims mean by “human”? Which “extinction” scenarios are they actually worried about: terminal, final or normative extinction? Only once you answer these questions can you begin to make sense of what this debate is really about.

By Émile P. Torres

Émile P. Torres is a philosopher and historian whose work focuses on existential threats to civilization and humanity.
They have published on a wide range of topics, including machine superintelligence, emerging technologies and religious eschatology, as well as the history and ethics of human extinction. Their forthcoming book is “Human Extinction: A History of the Science and Ethics of Annihilation” (Routledge). For more, visit their website and follow them on Twitter.
From: salon
URL: https://www.salon.com/2023/06/11/ai-and-the-of-human-extinction-what-are-the-tech-bros-worried-about-its-not-you-and-me/