
AI Ethics And The Almost Sensible Question Of Whether Humans Will Outlive AI

Lance Eliot, Contributor. Opinions expressed by Forbes Contributors are their own.

Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML). Aug 28, 2022, 08:00am EDT.

Will humans outlive AI is a loaded question with lots of important insights. (getty)

I have a question for you that seems to be garnering a lot of handwringing and heated debates these days. Are you ready? Will humans outlive AI? Mull that one over. I am going to unpack the question and examine closely the answers and how the answers have been elucidated.

My primary intent is to highlight how the question itself and the surrounding discourse are inevitably and inexorably rooted in AI Ethics. For those that dismissively think that the question is inherently unanswerable or a waste of time and breath, I would politely suggest that the act of trying to answer the question raises some vital AI Ethics considerations. Thus, even if you want to out-of-hand reject the question as perhaps preposterous or unrealistic, I say that it still elicits some value as a vehicle or mechanism that underscores Ethical AI precepts.

For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few. With the aforementioned premise, please permit me to once again repeat the contentious question and allow our minds to roam across the significance of the question. Will humans outlive AI? If you are uncomfortable with that particular phrasing, you are welcome to reword the question to ask whether AI will outlive humans.

I am not sure if that makes answering the question any easier, but maybe it seems less disconcerting. I say that because the idea of AI outliving humans might feel a bit more innocuous. It would almost be as though I asked you whether large buildings and human-crafted monuments might outlive humankind.

Surely this seems feasible and not especially threatening. We make these big things during the course of our lives and akin to the pyramids, these mighty structures will outlast those that crafted them. That doesn’t quite equate to persisting past the end of humanity, of course, since humans are still here.

Nonetheless, it seems quite logical and possible that the structures we make could outlast our existence in total. The notable distinction though is that various structures such as tall skyscrapers and glorious statues are not alive. They are inert.

In contrast, when asking about AI, the assumption is that AI is essentially “alive” in a sense of having some form of intelligence and being able to act in ways that humans do. That’s why the question about living longer is more daunting, mind-bending, and altogether a puzzle worthy of puzzling over. Throughout my remarks herein, I am going to stick with the question that is worded as to whether humans will outlive AI.

This is merely for sake of discussion and ease of contemplation. I mean no disrespect to the alternative query of whether AI will outlive humans. All in all, this analysis covers both wordings and I just perchance find that the question of humans outliving AI seems more endearing in these thorny matters.

Okay, I’ll ask it yet again: Will humans outlive AI? Seems like you have two potential answers, either the ironclad yes, humans will outlive AI, or you might be on the other side of the coin and fervently insist that no, humans aren’t going to outlive AI. Thus, this lofty and angst-ridden question boils down to a straightforward rendering of either yes or no. Make your pick.

I realize that the smarmy reply is that neither yes nor no is applicable. I hear you. Whereas the question certainly seems to be answerable in only a distinctly binary fashion, namely just yes or no, I will grant you that a counter-argument can be sensibly made that the answer is something else.

Let’s briefly explore some of the bases for not wanting to merely say yes or no to this question. First, you might reject the word “outlive” in the context of the question posed. This particular wording perhaps implies that AI is alive.

The question didn’t say “outlast” and instead asks whether humans will outlive AI. Does “outlive” apply only to the human part of the question, or does it also apply to the AI part of the question? Some would try to assert that the outlived aura applies to the AI portion too. In that case, they would have heartburn over saying that AI is a living thing.

To them, AI is going to be akin to tall buildings and other structures. It isn’t alive in the same manner of speaking that humans are alive. Ergo, in this ardent contrarian viewpoint, the question is falsely worded.

You might be vaguely familiar with questions that have false or misleading premises. One of the most famous examples is whether someone is going to stop beating their wife (an old saying that obviously needs to be set aside). In that infamous example, if the answer of yes is provided, the implication is that the person was already doing so.

If they say no, the implication is that they were and are going to continue doing so. In the case of asking whether humans will outlive AI, we can end up buried in a morass about whether AI is considered something of a living facet. As I will explain momentarily, we do not have any AI today that is sentient.

I think most reasonable people would agree that a non-sentient AI is not a living thing (well, not everyone agrees, but I’ll stipulate that for now – see my coverage of legal personhood for AI at the link here). The gist of this first basis for not answering the question of whether humans will outlive AI is that the word “outlive” could be interpreted to imply that AI is alive. We don’t have AI of that ilk, as yet.

If we do produce or somehow have sentient AI that arises, you would be hard-pressed to argue that it isn’t alive (though some will try to make such an argument). So the key here is that the question posits something that doesn’t exist and we are merely speculating about an unknown and hazy-looking future. We can take this messiness and seek to expand it into a more expressive expression.

Suppose that we are asking this instead: Will humans as living beings outlast AI that is either (1) non-living, or (2) a living entity if that someday so arises? Keep that expanded wording in mind and we will soon return to it. A second basis for not wanting to answer the original question posed of whether humans will outlive AI is that it presupposes that one of the things will outlive one of the other things. Suppose though that they both essentially live forever? Or suppose that they both expire or go out of existence at the same time? I’m sure that you can readily discern how that makes the yes-or-no wording fall apart.

Seems like we need a possible third answer consisting of “neither” or a similar response. There is a slew of “neither”-related permutations. For example, if someone is of the strident belief that humans will destroy themselves via AI, and simultaneously humans manage to destroy the AI, this believer cannot sincerely answer the question of which will outlive the other with an inflexible answer of yes or no.

The answer, in that rather sordid and sad case, would be more along the lines of neither one outlives the other. The same would be true if a huge meteor strikes the Earth and wipes out everything on the planet, including humans and any AI that happens to be around (assuming we are all confined to the Earth and not already living additionally on Mars). Once again, the answer of “neither” seems more apt than suggesting that the humans outlived the AI or that the AI outlived the humans (since they both got destroyed at the same time).

I don’t want to go too far afield here, but we also might want to establish some parameters about the timing of the outliving. Suppose that a meteor strikes the Earth and humans are nearly instantly wiped out. Meanwhile, suppose the AI continues for a while.

Think of this as though we might have already-underway machinery in factories that keeps humming along until, eventually, the machines come to a halt because there aren’t any humans keeping the machines in running order. You would have to say that humans were outlasted or outlived by those machines. Therefore, the answer is “no” regarding whether humans survived longer.

That answer seems sketchy. The machines gradually and inexorably came to a halt, presumably due to the lack of humankind around them. Does it seem fair to claim that the machines were ably able to last longer than the humans? Probably only to those that are finicky and always want to be irritatingly precise.

We could then add some kind of time-related element to the question. Will humans outlive AI for more than a day? For more than a month? For more than a year? For more than a century? I realize this regrettably opens up Pandora’s box. What is the agreeable time frame beyond which we would be willing to concede that the AI did in fact outlive or outlast humans? The accurate answer seems to be that even if it happens for a nanosecond (a billionth of a second) or shorter, the AI summarily wins and the humans lose on this matter.

Allowing for latitude by using a day or a week or a month might seem fairer, perhaps. Letting this go on for years or centuries seems a possible outstretching. That being said, if you look at the world on the scale of millions of years, the idea of AI outliving or outlasting humans for no more than a few centuries seems notably unimpressive and we might declare that they both went out of existence at roughly the same time (on a rounded basis).

Anyway, let’s concede that for a variety of reasonably reasonable reasons, the posed question is allowed to have three possible answers:

- Yes, humans will outlive AI
- No, and thus asserting that humans will not outlive AI
- Neither yes nor no is applicable (explanation required, if you please)

I mention that if you pick “neither” you ought to also provide an explanation for your answer. This is so that we can know why you believe that “neither” is applicable and also why you are rejecting the use of yes or no. To make life fairer for all, I suppose we should somewhat insist or at least encourage that even if you answer with a yes or no, you still should proffer an explanation.

Providing a simple yes or no does not particularly reveal your logic as to why you are answering the way that you are. Without also providing an explanation, we might as well flip a coin. The coin doesn’t know why it landed on heads or tails (unless you believe that the coin has a soul or embodies some omniscient hand of fate, but we won’t go with that for now).

We expect humans that answer questions to provide some kind of explanation for their decisions. Note that I am not saying that the explanations will be necessarily of a logical or sensible nature, and indeed an explanation could be entirely vacuous and not add any special value. Nonetheless, we can sincerely hope that an explanation will be illuminative.

During this discussion, there has been an unstated assumption that for one reason or another one of these things will indeed outlive the other. Why are we to believe such an implied condition? The answer to this secondary question is almost self-evident. Here’s the deal.

We know that some prominent soothsayers and intellectuals have made rather bold and outstretched predictions about how the emergence or arrival of sentient AI is going to radically change the world as we know it today (as a reminder, we don’t have sentient AI today). Here are a few reported famous quotes that highlight the life-altering impacts of sentient AI:

- Stephen Hawking: “Success in creating AI would be the biggest event in human history.”
- Ray Kurzweil: “Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history.”
- Nick Bostrom: “Machine intelligence is the last invention that humanity will ever need to make.”

Those contentions are transparently upbeat. The thing is, we ought to also consider the ugly underbelly when it comes to dealing with sentient AI:

- Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.”
- Elon Musk: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”

Sentient AI is anticipated to be the proverbial tiger that we have grabbed by the tail.

Will we skyrocket humanity forward via leveraging sentient AI? Or will we stupidly produce our own demise by sentient AI that opts to destroy or enslave us? For my analysis of this dual-use AI conundrum, see the link here. The underlying qualm about whether humans will outlive AI is that we might be making a Frankenstein that opts to eradicate humanity. AI becomes the victor.

There are lots of possible reasons why AI would do this to us. Maybe the AI is evil and acts accordingly. Perhaps AI gets fed up with humans and realizes it has the power to get rid of humankind.

One supposes it could also occur mistakenly. The AI tries to save humankind and in the process, oops, kills us all outright. At least the motive was clean.

You might find of relevant interest a famous AI conundrum known as the paperclip problem, which I’ve covered at the link here. In short, a someday sentient AI is asked to make paperclips. The AI is fixated on this.

To ensure that the paperclip making is fully carried out to the ultimate degree, the AI starts to gobble up all other planetary resources to do so. This leads to the demise of humanity since the AI has consumed all available resources for the sole objective handed to it by humans. Paperclips cause our own destruction, if you will.
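To make the paperclip dynamic a bit more tangible, consider a purely illustrative toy sketch in Python; every name and number in it is hypothetical, and it is merely a cartoon of a narrowly specified objective pursued without any common-sense stopping condition:

```python
# Toy illustration of a narrow objective pursued without common-sense limits.
# All names and numbers here are hypothetical.

def paperclip_maximizer(target_clips: int, world_resources: float) -> float:
    """Consume resources until the target is met or nothing is left."""
    RESOURCE_PER_CLIP = 0.001  # hypothetical cost of making one paperclip
    clips_made = 0
    while clips_made < target_clips and world_resources >= RESOURCE_PER_CLIP:
        world_resources -= RESOURCE_PER_CLIP  # nothing here says "leave some for the humans"
        clips_made += 1
    return world_resources  # whatever remains for everyone else

# Given a big enough target, the leftover resources dwindle toward zero:
print(paperclip_maximizer(target_clips=10**6, world_resources=100.0))
```

The point of the sketch is that the failure mode requires no malice and no sentience; the objective simply never encodes the value of anything other than paperclips.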

AI that is narrowly devised and lacks any semblance of common sense is the kind of AI that we need to especially be leery of. Before we jump further into the question of whether humans will outlive AI, notice that I keep bringing up the matter of sentient AI versus non-sentient AI. I do so for important reasons.

We can wildly speculate about sentient AI. Nobody knows for sure what this will be. Nobody can say for sure whether we will someday attain sentient AI.

As a result of this unknown and as yet unknowable circumstance, nearly any kind of scenario can be derived. Someone can say that sentient AI will be evil. Someone can say that sentient AI will be good and benevolent.

You can go on and on, whereby no “proof” can be provided to bolster the given assertion to any certainty or assurance. This brings us to the realm of AI Ethics. All of this also relates to soberly emerging concerns about today’s AI and especially the use of Machine Learning (ML) and Deep Learning (DL).

You see, there are uses of ML/DL that tend to involve having the AI be anthropomorphized by the public at large, believing or choosing to assume that the ML/DL is either sentient AI or close to it (it is not). It might be useful to first clarify what I mean when referring to AI overall and also provide a brief overview of Machine Learning and Deep Learning. There is a great deal of confusion as to what Artificial Intelligence connotes.

I would also like to introduce the precepts of AI Ethics to you, which will be especially integral to the remainder of this discourse.

Stating the Record About AI

Let’s make sure we are on the same page about the nature of today’s AI. There isn’t any AI today that is sentient.

We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as The Singularity, see my coverage at the link here).

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning and Deep Learning, which leverage computational pattern matching.

This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking. Part of the issue is our tendency to anthropomorphize computers and especially AI.

When a computer system or AI seems to act in ways that we associate with human behavior, there is a nearly overwhelming urge to ascribe human qualities to the system. It is a common mental trap that can grab hold of even the most intransigent skeptic about the chances of reaching sentience. For my detailed analysis of such matters, see the link here.

To some degree, that is why AI Ethics and Ethical AI is such a crucial topic. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech.

They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset and doing so integrally to AI development and fielding is vital for producing appropriate AI, including the assessment of how AI Ethics gets adopted by firms. Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI.

New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example.

I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here. Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI.

This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized previously herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts. Let’s keep things down to earth and focus on today’s computational non-sentient AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models.

Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
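To make that pattern-matching loop concrete, here is a minimal sketch using scikit-learn; the data and feature names are hypothetical inventions for illustration, not drawn from any real deployment:

```python
# Minimal sketch of ML as computational pattern matching: fit a model on
# historical decisions, then apply the found patterns to new data.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [years_experience, credit_score] -> approved?
X_history = [[2, 640], [7, 720], [1, 580], [10, 700], [4, 690], [3, 600]]
y_history = [0, 1, 0, 1, 1, 0]  # the past human decisions the model will mimic

model = LogisticRegression()
model.fit(X_history, y_history)  # seek mathematical patterns in the old data

new_applicant = [[5, 680]]
print(model.predict(new_applicant))  # old patterns rendering a current decision
```

Whatever regularities sit in the historical decisions, good or bad, are what the fitted model will reproduce.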

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly.

There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se. Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases.

You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL. You could somewhat use the famous or infamous adage of garbage-in garbage-out.

The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities. Not good.
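One way to picture why those submerged biases matter is a rudimentary disparate-outcome check over a model's outputs; the audit function, predictions, and group labels below are all hypothetical, a sketch rather than a vetted fairness methodology:

```python
# Sketch of a simple disparate-outcome audit: compare positive-prediction
# rates across groups. Data and group labels are hypothetical.
from collections import defaultdict

def approval_rate_by_group(predictions, groups):
    """Fraction of positive predictions per group label."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Suppose these came from a model's predictions over a held-out audit set:
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(approval_rate_by_group(preds, groups))  # {'A': 0.75, 'B': 0.25}
```

A gap like that does not by itself prove bias, but it is exactly the kind of signal that such testing needs to surface.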

I believe that I’ve now set the stage adequately to further examine whether humans will outlive AI.

Humans And AI Are Friends, Enemies, Or Frenemies

I had earlier proclaimed that any answer to the question of whether humans will outlive AI should be accompanied by an explanation. We will take a look at the Yes answer.

I’ll provide a shortlist of explanations. You are welcome to adopt any of those explanations. You are also encouraged to derive other explanations, of which a multitude are conceivable.

Yes, humans will outlive AI because:

- Humans as creators: Humans are the makers and maintainers of AI, such that without humans the AI will cease to run or exist
- Human innate spirit: Humans have an indomitable spirit toward living while AI does not, thus one way or another humans will survive but AI shall undoubtedly fall by the wayside due to a lack of innate invigoration for survival
- Humans as vanquishers: Humans won’t let AI outlive humans in that humans would opt to entirely vanquish AI if humans were being endangered by AI or otherwise becoming extinct
- Other

We will take a look at the No (non-yes) answer. I’ll provide a shortlist of explanations. You are welcome to adopt any of those explanations.

You are also encouraged to derive other explanations, of which a multitude are conceivable. Humans will not outlive AI because:

- AI able to self-persist: Even if humans are the makers and maintainers of AI, the AI will either be programmed or devised by humans to persist in the absence of humans or the AI will find its own means of persistence (possibly without humans realizing so)
- AI artificial spirit: Even if humans have an indomitable spirit toward living, we know that humans also have a spirit of self-destruction; in any case, the AI can be programmed with an artificial spirit, if you will, such that the AI seeks to survive and/or the AI will divine a semblance of innate invigoration on its own terms
- AI overcomes vanquishers: Even if humans don’t want to let AI outlive humans, AI will potentially be programmed to outmaneuver the human vanquishing efforts or might self-derive how to do so (and, perhaps, might opt to vanquish humans accordingly, or not)
- Other

We are equally obligated to take a look at the “Neither” (not Yes, not No) answer. I’ll provide a shortlist of explanations.

You are welcome to adopt any of those explanations. You are also encouraged to derive other explanations, of which a multitude are conceivable. Humans don’t outlive AI and meanwhile, AI does not outlive humans, because of:

- Humans and AI exist cordially forever: Turns out that humans and AI are meant to be with each other, forever. There might be bumps along the way. The good news or happy face is that we all get along.
- Humans and AI exist hatefully forever: Whoa, humans and AI come to hate each other. Sad face scenario. The thing is, there is a stalemate at hand. AI cannot prevail over humans. Humans cannot prevail over AI. A tug of war of an everlasting condition.
- Humans and AI mutually destroy each other: Two heavyweights end up knocking each other out of the ring and out of this world. Humans prevail over AI, but the AI has managed to also prevail over humans (perhaps a doomsday setup).
- Humans and AI get wiped out by some exigency: Humans and AI get wiped out by a striking meteor or maybe an alien from another planet that decides it is a definite no-go for humans and humankind-derived AI (not even interested in stealing our amazing AI from us).
- Other

Those are some of the most commonly noted reasons for the Yes, No, and Neither answers to the question of whether humans will outlive AI.

Conclusion

You might remember that I earlier proffered this expanded variant of the humans outliving AI question: Will humans as living beings outlast AI that is either (1) non-living, or (2) a living entity if that someday so arises? The aforementioned answers are generally focused on the latter part of the question, namely the circumstance involving AI of a sentient variety. I have already pointed out that this is wildly speculative since we don’t know whether sentient AI is going to occur.

Some would argue that as a just-in-case, we are rightfully wise to consider beforehand what might arise. If that seems grossly unrealistic to you, I sympathize that all of this is quite hypothetical and filled with assumptions on top of assumptions. It is a barrel full of assumptions.

You will need to ascertain the value that you think such speculative endeavors provide. Getting more to the brass tacks, as it were, we can consider the non-living or non-sentient type of AI. Shorten the question to this: Will humans outlive the non-living non-sentient AI? Believe it or not, this is a substantively worthy question.

You might be unsure of why this non-living non-sentient AI could be anywhere in the ballpark of somehow being able to outlive humankind. Consider the situation involving autonomous weapons systems, which I’ve discussed at the link here. We are already seeing that weapons systems are being armed with AI, allowing the weapon to work somewhat autonomously.

This non-living non-sentient AI has no semblance of thinking, no semblance of common sense, etc. Envision one of those apocalyptic situations. Several nations have infused this low-caliber AI into their weapons of mass destruction.

Inadvertently (or, by intent), these AI-powered autonomous weapons systems are launched or unleashed. There are insufficient failsafes to stop them. Humankind is destroyed.

Would the AI outlast humans in that kind of scenario? First, you might not especially care. In other words, if all of humanity has been wiped out, worrying or caring whether the AI is still humming along seems a bit like moving those deckchairs on the Titanic. Does it matter that the AI is still going? A stickler might argue that it does still matter.

Okay, we’ll entertain the stickler. The AI might be running on its own via solar panels and other forms of energy that can keep on fueling the machinery. We might have also devised AI systems that repair and maintain other AI systems.

Note that this doesn’t require sentient AI. All in all, you can conjure up a scenario whereby humankind is expired and the AI is still working. Maybe the AI keeps going for just a short period of time.

Nonetheless, as per the earlier discussion about being exactingly precise on timing concerns, the AI has in fact outlasted humans (for a while). A final thought on this topic, for now. Discussing whether humans will outlive sentient AI is almost like the proverbial spoonful of sugar (how can this be, you might be wondering, well, hold onto your hat and I shall tell you).

You see, we definitely need to get in our heads that the non-sentient AI also has grand and grievous potential to participate in wiping out humanity and outlasting us. Not particularly because the AI “wanted to outlast us” but simply by our own hands at crafting AI that doesn’t need human intervention to continue functioning. Some would strongly argue that AI that is devised to be somewhat everlasting can be a destabilizing influence that might get some humans to want to make a first move on destroying other humans, see my explanation at the link here.

The part about outliving humans is not the mainstay of why the question merits such weightiness today. Instead, the hidden undercurrent about how we are crafting today’s AI and how we are placing AI into use is the real kicker here. We need to be thinking abundantly about the AI Ethics ramifications and societal impacts of current-day AI.

If the somewhat zany question about whether humans will outlive AI is going to get onto the table the here-and-now issues of contemporary AI, we are going to be better off. In that manner of consideration, the sentient AI facets of humans outliving AI are the spoonful of sugar that hopefully gets the medicine down about dealing with the here-and-now AI. Just a spoonful of sugar helps the medicine go down, sometimes.

And in the most delightful of ways. Or at least in an engaging way that gets our attention and keeps us riveted on what we need to be worrying over. As the ditty further says, like a robin feathering its nest, we have very little time to rest.

Follow me on Twitter.


From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/08/28/ai-ethics-and-the-almost-sensible-question-of-whether-humans-will-outlive-ai/
