AI Ethics And AI Law Wrestling With AI Longtermism Versus The Here And Now Of AI
By Lance Eliot, Contributor. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).
Oct 25, 2022
Image caption: AI longtermism and the future of humanity confront the AI Ethics conundrum entailing short-term AI versus long-term AI.
There are some people that cannot seem to see beyond their noses. I’m sure you know what I am referring to. You undoubtedly are familiar with friends or acquaintances that focus entirely and exclusively on the here and now.
All they care about is what they see and hear and feel in the moment. The notion of thinking about the future is a foreign concept and wholly outside their mental grasp. Let’s refer to this distinctive category of people as short-termers or the here-and-now crowd in terms of their day-to-day perspectives about the world.
Meanwhile, I am equally confident that you know other people that seem to be devoted devoutly to the future. Those outside-the-box thinkers are willing to make great sacrifices at this particular time to try and ensure that things will be better off further down the road. It is almost as though they are mentally imbued time-travelers.
To them, the present is merely a means of reaching a bright and sunny future. This second category will be considered long-termers or sometimes known as longtermists. These longtermists adopt a time-expanded viewpoint to look way out there, far beyond their noses.
Far beyond the furthest reaches of any of our noses. They do not confine their outlook to the boundaries of their specific lifespan. You see, they fervently care about upcoming generations.
And they care vigorously about the many generations that will come long after those upcoming generations. In fact, they might be so visionary that they care ardently about generations that will arise thousands or possibly millions of years from now. Time is boundless to them.
Some longtermists even speculate that we might eventually reach a stage of existence where there are no longer “generations” per se. People will be able to live forever. We will have attained completely unlimited lifespans.
An altogether dizzying and dazzling conception. It kind of makes you feel a bit narrow-minded if you are worried right now about what you’ll have for lunch today and pondering whether you’ll have a chance to watch some of those fascinating cat videos before you get some needed shuteye tonight. You might have figured out that I am dragging you into the arena of longtermism.
Some are surprised to discover that there is an entire field of inquiry that focuses on longtermism and which thoughtfully examines and assesses a wide array of longtermistic considerations (well, I said “thoughtfully” but some would claim otherwise, as I’ll be illuminating shortly). This rising philosophical bent entails the how, why, what, and other notable questions underlying a willingness, or shall we say a burning desire, to take a long-term perspective on things, especially the big things such as the future of humanity. Yes, I said it, the future of humanity.
Seems like you can’t get much bigger than that. As an aside, some get irked by discussing the future of humanity, as it were, in the sense of “humanity” proffering an implied sense of haughtiness or narrowness pertaining to humans alone. Allow me to elaborate.
If the word “humanity” suggests that we are leaving out animals and all other living creatures as part of this inquiry, this seems insidiously narrow and presumably over-prioritizes humans in relation to the rest of all living organisms. Thus, this seems to be a heartbreaking case of preoccupied self-serving humans caring only about their fellow preoccupied self-serving humans, which appears to be callous and myopically focused. One supposes that we could recast the matter by referring to the future of all living things rather than stating that the attention is on the future of humanity.
Admittedly, that doesn’t quite roll off the tongue as does the classic line involving focusing on the future of humanity. Others would argue that it is indubitably preferable to use the catchphrase of the future of humanity since the future of humanity will of necessity also depend upon all other living things. You are ergo scooping all living things into the bucket of “the future of humanity” as a kind of de facto collection.
Balderdash, comes the rapid retort. If you stick with the narrow-minded moniker or phrasing of the future of humanity, you are bound to find yourself willing to wipe out all other living things if that is the calculated “best way” to ensure the future of humanity. By placing humanity at the core and shoving all other living things outside of the core, you can convince yourself to do whatever you want to the part that is outside of the core.
The only prudent means to try and ensure that you preserve all other living things would be to firmly place them into the core. Wait a second, goes the counterclaim to that claim: if we openly insert all living things into the scope of the future of humanity, we could end up undermining our human existence. Suppose for example that we come up with a way to preserve all of these other living things but this comes at the cost of losing humans.
In essence, we collectively decide that to keep all other living things going, we need to rid ourselves of humanity or some portion thereof. We could logically box ourselves into a messy dilemma. To prevent this conundrum, it is said that we adamantly need to keep humanity front-and-center as the revered core.
Round and round this goes. I don’t want to go much further down that rabbit hole herein. There is a bit of a twist that might be of special interest.
Envision that we eventually come across other sentient life. I dare not refer to them as space aliens since that would seem to immediately garner a knee-jerk reaction by some that scoff at the possibility of us encountering alien sentient life, see my coverage at the link here. Anyway, the point in this particular context is that the far future might have humanity and might also have some other form of sentient beings.
We then have these potential memberships in our longtermism discourse:
- Humanity
- All other everyday living things
- Other sentient beings (as yet unknown)
I wanted to get that notable point onto the table for another reason. This topic about longtermism is one that nearly always dovetails into the auspices of Artificial Intelligence (AI). AI is at times considered a form of machine that could potentially one day attain sentience, a topic I’ll be further addressing momentarily.
I will also be sharing with you some of the primary insights about how AI and longtermism go conspicuously hand-in-hand. That being said, you don’t have to discuss longtermism and AI at the same time. It is fine to chatter about longtermism and not bring up AI.
You can also bring up AI and not in the same breath bring up longtermism. By and large, though, most would tend to prudently agree that AI and longtermism are intertwined and we do need to be joining them at the hip. This vital combo is typically mentioned as AI longtermism, plus the notion is that anyone versed or engaged in this topic is labeled an AI longtermist.
There is one surefire guarantee of what happens when you mix together these two rather contentious subjects of AI and longtermism, namely that out of this explosive combination arises extraordinarily vexing AI Ethics and Ethical AI ramifications. For my ongoing and extensive coverage of AI Ethics and Ethical AI topics, see the link here and the link here, just to name a few. I will now amend the membership list about longtermism:
- Humanity
- All other everyday living things
- AI (non-sentient of today, future sentient if occurs)
- Other sentient beings (as yet unknown)
Back to the informational briefing about longtermism overall.
The most common undercurrent about longtermism entails the weighty ethical issues that arise. You could assert that this is a fully ethics-immersed endeavor. Predictions over extremely long time horizons are bandied around and we need to ponder morality aspects of today, morality aspects of tomorrow, and morality aspects of the far future.
Tough questions are asked about how humanity will fare in the long term. This then forces a sometimes harsh look in the mirror as to what humanity is doing now and how our actions today are helping or hurting the vaunted humanity of the future. Here are some key ethical longtermism conundrums to ruminate on:
- What moral obligations do we have regarding far future people that we are presumably not going to ever see or know?
- To what degree do our moral obligations of today exceed or match the moral obligations that we hold for those far future people?
- Can we justify moral choices made today as based on speculative notions of what far future people will be like?
- Do our morally stoked actions of today really have a substantive impact on those far future people, or are we simply deluding ourselves, such that what we do right now is nothing more than a mere pebble inconsequentially tossed into a flowing massive river of time?
Longtermists are construed as people of today that wish to shoulder a serious and sobering concern about far future people.
This is not necessarily as easy as it might seem. For one thing, the odds are pretty high that the actions of today’s true longtermists will inevitably be long forgotten and not especially known or remembered by these far distant (in time) people of the far future. Mull this over.
If the things you do today are devised by you to help future people, and yet the awareness of your conceived efforts might be lost in the vast sea of time, would you still proceed to carry out those actions today? I’m suggesting that though you might have indeed done things that ultimately improved or benefited those far future people, you’ll get little or no recognition from them. They won’t realize you even existed. History will not particularly note your efforts.
You have to be a quite sturdy believer in aiding those far future people. No future glory will come to you because of it. During your lifetime, you will not likely get much acknowledgment of what you are doing for those far future people.
You cannot prove that your actions of today are necessarily benefiting those far future people. Maybe your actions are doing so, or maybe they are not. Okay, so others around you today will question what in the heck you are doing.
You will tell them that you are making efforts to secure a better future for far future people. Let’s assume that those benefits aren’t going to become at all apparent until thousands or millions of years from now. People that live in the here and now might think that you’ve gone completely bonkers.
Unless of course, there are other people of today that likewise share your vision of the far future. In that case, you would potentially witness the reward or admiration from those around you now. Whether or not the far future ever knows of your efforts can be somewhat discounted since you at least gleaned recognition today.
Some longtermists would be spitting fire at the professed implication that there is any need or urge to attain recognition by those that are trying to do what they think is proper for the far future. The reward, if any is to be had, would be of their own accord. They would know that, based on their deeply held beliefs, they have done what they envisioned is necessary for the aid of those far future people.
You could claim that this is entirely altruistic and fits within the societal practice of what is sometimes referred to as effective altruism (EA). We need to dig into a few more foundational aspects about longtermism and we’ll then be ready to dive deeply into the AI intertwining aspects. Here’s one interesting reason to care about those far future people.
There might be lots and lots of them. Follow me on this. Suppose that the population keeps increasing.
Maybe we are able to do this while on planet Earth. It could be that we opt to live on other planets too. We might create artificial planets or large-scale space-traveling living quarters.
One way or another, pretend that our population grows sizably over the upcoming thousands or millions of years. We have nearly 8 billion people alive today (in 1800, there were only around 1 billion, so we’ve been progressing at quite a frenetic pace in terms of population growth). Imagine that, either on Earth or via the use of other planets, we expand to 80 billion people.
If that number isn’t impressive to you, I’ll up the ante and say that we could have 800 billion people or maybe 8 trillion people, and so on. Those are big numbers. From a sheer numeric perspective, some longtermists suggest that we of the 8 billion need to be doing today whatever we can to ultimately support those 800 billion or 8 trillion people.
Our current-day efforts are going to have a kind of vast multiplier effect. Without seeming to be insensitive, the argument is that if you could save ten people of tomorrow rather than just one person of today, would you be more motivated to do so? Suppose you could save hundreds, thousands, or millions of people, versus saving just one? It admittedly seems shamefully dreadful to suggest that any individual life is not worth as much as several or many other lives, but it is said that we make those types of decisions all the time. For my coverage of these ethically difficult matters, such as the famous or infamous Trolley Problem, see the link here.
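For those of you that like to see the arithmetic laid bare, here’s a minimal back-of-the-envelope sketch in Python. To be clear, this is merely illustrative: the current population figure is the rough one noted above, the speculative future populations echo the 80 billion, 800 billion, and 8 trillion figures I mentioned, and the fraction of people helped by any given effort is an invented assumption, not a real estimate.

```python
# Illustrative sketch only: the longtermist "multiplier" arithmetic.
# HELP_FRACTION is an invented assumption about how many people a given
# effort might benefit; the future populations are purely speculative.

CURRENT_POPULATION = 8_000_000_000
SPECULATIVE_FUTURE_POPULATIONS = [80_000_000_000, 800_000_000_000, 8_000_000_000_000]
HELP_FRACTION = 0.001  # assume an effort benefits 0.1% of whichever population exists

print(f"People helped today: {CURRENT_POPULATION * HELP_FRACTION:,.0f}")
for future_pop in SPECULATIVE_FUTURE_POPULATIONS:
    helped = future_pop * HELP_FRACTION
    multiplier = future_pop / CURRENT_POPULATION
    print(f"Future population of {future_pop:,}: {helped:,.0f} helped "
          f"(a {multiplier:,.0f}x multiplier over today)")
```

Those eye-popping multipliers are the crux of the numeric argument, though as we’ll see in a moment, they rest entirely on populations that might never come to exist.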
The gist is that a worthy basis for adopting a longtermism perspective is that you are going to be able to aid many more humans than you could otherwise do today. Rather than being only concerned about the 8 billion of the here and now, you should perhaps more broadly be worrying about the 800 billion or 8 trillion of the far future. Not everyone applauds that stance.
Critics emphasize that under that guise, you could potentially concoct some nutty schemes that sacrifice or distress in dire ways those of today for these completely unknown and wild-guess vast populations of the future. Suppose you become convinced that to aid those far future people, the ones that we don’t even know will ever arise, the proper thing to do today is something that severely shortchanges the people of today. Maybe you made a wrong guess. We all take things on the chin today, but whoops, this turns out to have nothing to do with those far future populations.
Worse still, it could be that the efforts to do some speculated “right thing to do” are inadvertently the absolutely wrong things to do, even if you buy into the far future populace precept. Perhaps a short-term effort that forces tremendous distress on today’s population is going to entirely undermine the far future population. You didn’t plan it that way.
You didn’t envision that result. Unfortunately, it nonetheless turns out that the short-term distortions end up reducing those far future populations or preventing them from arising at all. Seems like a dicey roll of the dice, some would say with raised eyebrows and doubting expressions.
A related aspect entails two diametrically contrasting potential discontinuities that are often voiced when wrangling with longtermism:
1) Existential risks (x-risks)
2) Theoretical trajectory upsides (t-ups)
Let’s take a look at these two discontinuities that tend to come up in longtermism. I’m sure you’ve heard about the possibilities of existential risks (insiders refer to these as x-risks). One of the most common examples consists of an all-out nuclear war that wipes out all of humanity.
That unquestionably is an existential risk. Going back to the early days of nuclear weaponry, we have been watching a doomsday clock and fretting over mutually assured destruction (MAD) as a result of a widespread nuclear-armed battle. The catchphrase “existential risk” suggests that we might do something that could lead to the destruction of all of humanity.
Or we might fail to do something that would have prevented the destruction of all of humanity. We don’t necessarily have to only be dealing with the entirety of destruction. There are lots of other frightening outcomes, such as that we are still alive but become infected like those zombies in the movies and TV shows.
Or perhaps we wipe out half of the population or basically any sizable chunk of humanity. You name it, the sky is the limit on those nightmarish existential risks and their appalling outcomes. That is the sad face side of things.
We need to make sure that we have a happy face side in the mix too. We could also have some theoretically plausible trajectory upsides when it comes to our future (I call them t-ups). Imagine that we discover new pharmaceuticals that can guarantee us a life without ill health.
Perhaps we create a fountain of youth that ensures we will never grow old. And so on. Longtermism is controversial.
You can readily find experts that tout it as crucial to our future. You can also encounter opposing experts that say it is mushy, vague, unsupported, and dangerous because it can provide false guidance. There are ardently raised concerns that we will today take potentially hurtful actions imposing immediate distress that are based on sketchy and unverifiable claims about far future outcomes.
To recap, we have covered these key overarching facets about longtermism:
- Longtermism is about taking an exceedingly long-term view, especially regarding the future of humanity
- This is a decidedly ethically imbued stance and surfaces thorny moral and ethical considerations
- Proponents emphasize that we should be taking into account our efforts today as to future populations
- Critics express that this can be a guise for short-term distress that is based on speculative predictions
- There are two major forms of discontinuities that typically come up in longtermism
- One discontinuity consists of the possibility of existential risks that lead to catastrophic results
- Another discontinuity is the possibility of trajectory upsides that lead to demonstrative positives
- AI and longtermism tend to be tightly wrapped or intertwined with each other
We can take a moment now to examine the intertwining of AI and longtermism. What’s the deal with AI longtermism? The idea is that when we are thinking about where AI is heading, we need to be mindful of both the near-term and the long-term implications. Nowadays, it is said that the bulk of our attention on AI is woefully concentrated only in the near term.
We are not seemingly devoting sufficient attention to the long-term. As such, we might get blindsided by AI. This could happen due to our veritable heads-in-the-sand posture of heralding or at times carping about the AI of today.
We aren’t able to see the forest for the trees, and thus fail to be mindful of the far future. We aren’t allowing ourselves to step out of the weeds. Somebody somewhere has to be standing tall and looking out beyond the nearest horizon of AI.
That is the calling card or raison d’être of AI longtermism and those determined AI longtermists. The most obvious and headline-grabbing example of AI and longtermism consists of the qualms that AI is going to become sentient and opt to wipe humanity out. There are plenty of those existential risk AI-related variants.
Try this on for size. AI doesn’t wipe us out and instead decides to enslave us, see my analysis at the link here. This is not what we probably were envisioning for the future of humanity.
Will we enslave AI, or will AI enslave us? Some say that if we make the first move and enslave AI, the odds of AI later on deciding to enslave humanity go up immensely. Revenge is sweet, the logic goes. If we do not enslave AI, there is an argument that says AI will figure out we are patsies and enslave us.
This is one of those brutal cases of the stronger of the species (or whatever) showcasing that it will prevail. That AI-spawned evil future certainly seems disturbingly disheartening. If we can do something today about AI that would prevent or minimize those zany AI apocalyptic scenarios, you would seem to have a strong moral or ethical case about why we should be strenuously and relentlessly doing whatever the designated something is.
Right now. Do not wait. Time might be running out.
We can somewhat turn our gaze away from the downsides and instead look toward the potential trajectory upsides that longtermism also cares about. Suppose that AI is able to discover a cure for cancer. Instead of dreading AI, we would be joyous that AI has come to be.
You can even add various intriguing nuances to the AI upsides. For example, suppose that AI somehow stops us from waging an all-out nuclear war that would have been self-destructive to humanity. I guess we would be pinning a hero’s medal on that kind of AI.
Before getting into some more meat and potatoes about the wild and woolly considerations underlying AI longtermism, let’s establish some additional fundamentals on profoundly essential topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL). You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI.
Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning. One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities.
You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to rectify the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.
On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor.
The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here). In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there.
You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar.
All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of. First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI. For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
- Reliability: AI systems must be able to work reliably
- Security and privacy: AI systems must work securely and respect the privacy of users.
As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack.
It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road. The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI.
This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts. Let’s also make sure we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible.
Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here). The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction.
A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
Let’s keep things more down to earth and consider today’s computational non-sentient AI. Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition.
The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models.
Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly.
There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se. Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases.
You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL. You could somewhat use the famous or infamous adage of garbage-in garbage-out.
The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities. Not good.
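To make that pattern-matching mechanism a bit more tangible, here’s a minimal sketch in Python. It is not any particular ML/DL library or fielded system; the tiny “historical decisions” dataset and the crude per-group approval-rate “model” are invented stand-ins, meant only to show how biases baked into prior human decisions get mathematically mimicked and then applied to brand-new cases.

```python
# Illustrative toy only: a crude "pattern matcher" that learns approval rates
# from invented historical decisions and then applies those learned patterns
# to new applicants, thereby reproducing the bias hidden in the old data.

from collections import defaultdict

# Hypothetical past human decisions: (group, years_employed, approved)
historical = [
    ("group_a", 5, True), ("group_a", 2, True), ("group_a", 1, True),
    ("group_b", 5, False), ("group_b", 6, False), ("group_b", 2, False),
]

# "Training": tally the historical approval rate per group, a stand-in for the
# mathematical patterns an ML/DL model would extract from the data.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, _, approved in historical:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group: str, years_employed: int) -> bool:
    approvals, total = counts[group]
    return (approvals / total) >= 0.5  # the old pattern drives the new decision

# Two otherwise-identical new applicants get different outcomes purely because
# of the group-based pattern inherited from the historical decisions.
print(predict("group_a", 3))  # True
print(predict("group_b", 3))  # False
```

Notice that nothing in the code mentions bias explicitly, which is precisely why ferreting out these buried inequities is trickier than it might seem.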
Let’s return to our focus on AI longtermism. We shall closely explore the meaty matter of timeframe alignment that permeates most of the AI longtermism debates. In brief, the alignment issue deals with the sometimes tension-filled short-term versus long-term tradeoffs that arise.
Imagine that AI longtermism is advocating that the long-term outlook related to some element of AI would be better enabled via taking a recommended short-term “right now” action related to the AI of today. Does the AI longtermism recommendation raise concerns or is it able to be readily accepted, all else being equal? If the short-term and long-term seem to align satisfactorily, the short-term action would seem to be acceptable for the undertaking. On the other hand, if the short-term and long-term do not seem to align, the controversy of what to do gets stoked into existence.
The greater the gap or misalignment, the bigger the controversy and therefore heated debate that will likely ensue. Let’s walk through how this might work via using a handy scenario. Suppose that a particular AI longtermism viewpoint expresses that we ought to be doing action “Y” today regarding AI.
Assume that the argument is being made by AI longtermism that by doing the prescribed action “Y” today about AI we will enable a crucial benefit for envisioned future populations of humanity. For sake of discussion, imagine that an AI longtermism contention could be that to summarily curtail or substantively reduce the chances of existential risks about AI that might, later on, wipe out humanity or enslave humanity, we need to do “Y” today. Imagine that the “Y” action of today involves establishing very strict laws governing how AI systems are to be devised and imposes draconian penalties on AI developers or AI promulgators that violate said laws.
Note that I am not suggesting via this scenario that this is in fact a specific on-the-books proposed AI longtermism contention. I am just proffering a strawman for sake of discussion. Please also be aware that there is a wide range of views within the realm of AI longtermism.
There is no single unified AI longtermism perspective. Indeed, anyone alluding to AI longtermism as though it is a homogenous set of completely parsimonious views is sputtering rubbish. We have this so far in our scenario:
- Long-term: Seek to reduce the existential risk of AI wiping out or enslaving humanity
- Short-term: Strive toward the long-term goal via enacting strict laws today about AI
The first thing to do is of course examine the long-term claim involved.
Do we agree that the long-term goal is something that we as a society are desirous of achieving? In this instance, the answer would reasonably seem to be that yes, we would like to reduce the chances of AI wiping out or enslaving humanity. Had the goal been something else, we might not have proceeded to consider whether the short-term action is worthwhile (if the goal seemed unsuitable at face value, we could probably opt not to consider the short-term recommendation at all). The second thing to do would be to examine the short-term action being recommended.
This is the “Y” action of today that the AI longtermist is saying we need to do for purposes of attaining the stated long-term goal. Is there anything about the recommended short-term action that would be costly or distressing for us in today’s world? If the short-term recommended “Y” action is essentially cost-free or otherwise seen to be advantageous for us today, we could likely find ourselves having few if any qualms about adopting it. Our perspective of today would be that the short-term and the long-term appear to be in alignment.
In this instance, we favor the long-term contention about AI. We also favor the short-term contention about AI. Since we favor both, we can pretty much go along and attempt to implement the short-term contention.
That’s the easy-peasy instance. Not all such combinations will be that easy. A straightforward way to arrange this consists of a classic four-square arrangement entailing short-term and long-term being on one axis, while aligned and misaligned being on the other axis.
I am going to use the words “favorable” and “unfavorable” to denote that society today is either in favor of or is in opposition to whatever the contended matter is. This gives us these four possibilities:
1) Short-term AI action is perceived as favorable, and Long-term AI is perceived as favorable
2) Short-term AI action is perceived as unfavorable, and Long-term AI is perceived as unfavorable
3) Short-term AI action is perceived as unfavorable, and Long-term AI is perceived as favorable
4) Short-term AI action is perceived as favorable, and Long-term AI is perceived as unfavorable
I’ll showcase a small illustrative sketch of this four-square arrangement after walking through the possibilities. Our AI longtermism scenario so far is the first of the four possibilities. We favor the long-term AI contention (don’t get wiped out by madcap AI), and we favor the short-term AI contention (put in place strict AI laws right away).
They align. We proceed. Revisit the short-term action that the scenario is postulating.
On the surface, it might seem obvious that we should enact strict laws about AI. Furthermore, it might seem equally obvious that we ought to make sure that those strict laws have harsh penalties, else AI developers and AI promulgators might ignore or brazenly flout the strict laws. A societal counterclaim to the whole notion of enacting strict laws about AI is that this will allegedly kill the golden goose.
By having laws that handcuff innovators and the rising AI innovations, the argument goes that you are going to disrupt and hamstring progress on AI. All of those AI advances that we are perhaps hoping will allow us to, say, cure cancer or otherwise benefit humanity are either going to be delayed or might never come to fruition, all due to those darned new laws about AI that we might enact. For my coverage on AI legal issues, see the link here and the link here, among many others of my postings.
This is the proverbial shooting-yourself-in-the-foot mantra. Our willingness to undertake the recommended AI short-term action is now out of alignment with the long-term AI action in the sense that the cost or distress of the short-term is making us shaky about the long-term AI considerations. We begin to ask tough questions.
How does AI longtermism know with any certainty that strict AI laws passed today will have any particular or direct impact on a future involving AI that supposedly will wipe out or enslave humanity? The connective relationship between the short-term and the long-term gets closely scrutinized now that we realize there is a potentially heavy cost to bearing the short-term action. This would be an example of the third posture listed in my above four-item listing, namely that the short-term AI action is now being construed as unfavorable, despite that the long-term AI action is agreeably being construed as favorable. Loud and cantankerous debate ensues accordingly.
For the second posture that involves both the short-term AI action being unfavorable and the long-term AI action being unfavorable, we probably would not get mired in any demonstrative debate. Naturally so since they are aligned with respect to both being unfavorable at the get-go. A tricky posture is the fourth one that I’ve listed in my four-item listing.
What are we to do when the short-term AI is seen as favorable in today’s eyes but AI longtermism is warning us that in the long-term this is going to be unfavorable to future populations? We can adjust our scenario to showcase this. The short-term AI action was stated as putting in place strict laws about AI. Suppose that we take the nearly opposite stance.
The short-term AI action is recalibrated so that we are going to explicitly ban any laws about AI from coming into existence. No laws governing AI are to be established. The basis for such a posture could be that we need to let AI developers and AI promulgators work freely and without hesitation.
Any laws of any kind that might govern or oversee or monitor the advancement of AI are said to be dampeners toward garnering the hoped-for upcoming benefits of AI. Let the AI horses run wild and free, you might say. What would AI longtermism potentially postulate about this short-term AI posture? Ruinous.
Scandalous. Short-sighted. Myopic.
Those are possible replies by AI longtermism. Letting the horse out of the barn could be portrayed as letting AI get underway in an unbridled fashion. You will never be able to put the genie back into the bottle, they might exhort.
Note that I am not suggesting that all of those in AI longtermism would proffer such a response, and again I remind you that there is no universal parsimonious set of AI longtermism views. It is quite possible that some AI longtermist pundits would have no qualms at all about the notion of banning AI-related laws, perhaps arguing that doing so today will have little or no impact on the AI of the future and the populations of the future. Your mileage might vary, so it goes.
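As promised, here’s a small illustrative sketch in Python of that four-square arrangement. The favorable-versus-unfavorable framing on the short-term and long-term axes is the one described above; the function name, labels, and printed commentary are simply my own shorthand.

```python
# Illustrative sketch of the four-square arrangement: short-term and long-term
# AI contentions are each judged "favorable" or "unfavorable" by today's
# society, and the misaligned combinations are the ones that stoke debate.

def alignment(short_term: str, long_term: str) -> str:
    """Both judged the same way means aligned; otherwise misaligned."""
    return "aligned" if short_term == long_term else "misaligned"

# The four possibilities, in the same order as listed earlier.
combos = [
    ("favorable", "favorable"),      # 1) the easy-peasy instance
    ("unfavorable", "unfavorable"),  # 2) aligned in rejection at the get-go
    ("unfavorable", "favorable"),    # 3) misaligned, debate ensues
    ("favorable", "unfavorable"),    # 4) the tricky posture
]

for idx, (short_term, long_term) in enumerate(combos, start=1):
    verdict = alignment(short_term, long_term)
    debate = "little debate" if verdict == "aligned" else "heated debate likely"
    print(f"{idx}) short-term {short_term:11} | long-term {long_term:11} "
          f"-> {verdict} ({debate})")
```

In those terms, our strict-AI-laws scenario started out as the first posture (both favorable) and drifted toward the third posture once the golden-goose worries made the short-term action look unfavorable.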
Part of the reason I came up with a scenario that encompassed the governing of AI is that one of the most fertile and expressive areas of AI longtermism consists of focusing on the governance of AI. Among the multitude of camps within AI longtermism, it would seem that the long-term AI governance segment is especially active, vocal, and moving ahead at full steam. At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic.
There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars.
This will serve as a handy use case or exemplar for ample discussion on the topic. Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI longtermism, and if so, what does this showcase? Allow me a moment to unpack the question. First, note that there isn’t a human driver involved in a true self-driving car.
Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3.
The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here ).
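For those that like the taxonomy laid out plainly, here’s a minimal sketch in Python of the level distinctions just described. The level numbers reflect the standard scheme discussed above; the one-line descriptions and the little helper function are merely my own shorthand, not any official definition.

```python
# Illustrative shorthand only: which levels count as "true self-driving" per
# the framing above (Levels 4 and 5), versus semi-autonomous (Levels 2 and 3).

AUTONOMY_LEVELS = {
    2: "semi-autonomous (ADAS add-ons; human co-shares driving and stays responsible)",
    3: "semi-autonomous (human driver must remain ready to take over)",
    4: "true self-driving within narrow, selective operational settings",
    5: "true self-driving essentially anywhere (not yet attained)",
}

def is_true_self_driving(level: int) -> bool:
    """Per the framing above, Level 4 and Level 5 count as true self-driving."""
    return level >= 4

for level, description in AUTONOMY_LEVELS.items():
    print(f"Level {level}: {description} | true self-driving: {is_true_self_driving(level)}")
```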
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And AI Longtermism
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. Why is this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI.
Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving.
Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. Let’s dive into the myriad of aspects that come to play on this topic. First, it is important to realize that not all AI self-driving cars are the same.
Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do. Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing.
Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system. I hope that provides a sufficient litany of caveats to underlie what I am about to relate.
Let’s sketch out a scenario that might leverage AI longtermism. I earlier indicated that when it comes to using our present-day roadways to allow tryouts of AI-based self-driving cars not everyone agrees with that approach. Some ardently believe that we should be doing computer-based simulations and using special closed tracks to experiment with self-driving cars.
Once self-driving cars have essentially been shown to be safe and sound in those enclaves, only then ought we to allow driverless vehicles onto our public roadways. This debate can be shifted into a type of AI longtermism perspective. Assume that a long-term goal underlying AI-based self-driving cars is that we will be able to radically reduce the number of human fatalities that occur due to car crashes.
In the United States, we currently experience about 40,000 fatalities per year as a result of car crashes and incur an estimated 2.5 million associated injuries, see my collection of related stats at the link here. I think we can all profusely agree that reducing the number of fatalities resulting from car crashes is a laudable and inarguably commendable long-term goal.
How are we going to achieve that long-term goal? One means would be to put in place AI-based self-driving cars. AI self-driving cars will presumably have many fewer car crashes than are brought forth by human drivers in human-driven automobiles. Humans drive while drunk.
Humans fall asleep at the wheel. Humans get easily distracted and mentally drift from the driving task. All of those kinds of human foibles will no longer be a factor if we are using fully autonomous AI-driven self-driving cars.
The AI won’t drink and drive, the AI won’t fall asleep, etc. Some wacky pundits keep proclaiming we will soon have AI self-driving cars that are uncrashable. I have debunked this preposterous claim, see my explanation at the link here.
Just want to mention that even though we might dramatically reduce the number of car crashes, this does not mean that AI self-driving cars will lead to zero car crashes all told. Where I am heading on this AI longtermism example is that we might be willing in the short-term to tolerate some number of AI self-driving car crashes if we societally believe that in the end, this will radically reduce the number of car crashes in total. This takes us back to the discussion about saving future lives but at the cost of a lesser number of lives in the nearer term.
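To see why the tradeoff gets framed in such stark numeric terms, here’s a minimal back-of-the-envelope sketch in Python. Only the roughly 40,000 annual U.S. fatalities figure comes from the stats cited above; the trial duration, trial-related fatality count, eventual reduction rate, and deployment horizon are all invented assumptions purely for illustration, not predictions.

```python
# Illustrative arithmetic only: weigh hypothetical near-term roadway-trial
# fatalities against hypothetical long-term reductions once the technology
# matures. Every number other than the 40,000 baseline is an invented assumption.

ANNUAL_FATALITIES_TODAY = 40_000     # cited U.S. car crash deaths per year
TRIAL_YEARS = 10                     # assumed duration of public roadway tryouts
TRIAL_FATALITIES_PER_YEAR = 50       # assumed deaths attributable to trial mishaps
MATURE_REDUCTION_RATE = 0.80         # assumed eventual reduction in crash deaths
HORIZON_YEARS = 30                   # assumed years of mature deployment considered

near_term_cost = TRIAL_YEARS * TRIAL_FATALITIES_PER_YEAR
long_term_benefit = HORIZON_YEARS * ANNUAL_FATALITIES_TODAY * MATURE_REDUCTION_RATE

print(f"Assumed near-term trial fatalities: {near_term_cost:,}")
print(f"Assumed long-term fatalities averted: {long_term_benefit:,.0f}")
```

The lopsided ratio under these assumptions is exactly why the debate is so heated: the near-term harms are real and certain, while the long-term benefits hinge on numbers that nobody can verify today.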
Some believe that any lives lost at this stage of the self-driving car development era will cause a societal uprising against the existing advent of self-driving cars, see my analysis at this link here. Though this might force AI self-driving car development to get off the public roadways and focus solely on the simulations and closed tracks, there is also a concern that the backlash will be so overwhelming that all self-driving car efforts are summarily stopped and shut down. From an AI longtermism viewpoint, you could suggest that we are aiming in the long-term to achieve a vast reduction in car crashes via the eventual maturation of AI self-driving cars and that in the short-term we are willing to accept that reaching that lofty goal will be via the use of self-driving cars on our public roadways.
The controversy about the short-term action is that it is said to be putting us all at risk due to the possibility of those tryouts going amiss. You are welcome to ponder this conundrum.
Conclusion
Abraham Lincoln famously said that we cannot escape the responsibility of tomorrow by evading it today.
Those into AI longtermism would seemingly concur with that assertion. We have to be thinking about tomorrow all of the time, including today. When considering the timeframe of AI longtermism, there is a wide range of time spans that we could contemplate regarding the future of humanity.
Some prefer to be looking extremely far ahead, envisioning humanity thousands or millions of years from now. Others are focused on hundreds of years rather than thousands of years into the future. I suppose you could stratify the AI longtermism arena into:
- Short long-term viewpoint
- Medium long-term viewpoint
- Long long-term viewpoint
- Extraordinarily long long-term viewpoint
That being said, we can mull over the quote by Albert Einstein about the nature of time: “The distinction between the past, present, and future is only a stubbornly persistent illusion.”
Where we are going to end up with AI is something that assuredly only time will tell.