AI Ethics Confronting The Insidious One-Like-Mind Of AI Algorithmic Monoculture, Including For Autonomous Self-Driving Cars

Transportation

By Lance Eliot, Contributor. Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).

Jun 19, 2022, 11:00am EDT

[Image caption: AI algorithmic monoculture is rising due to AI pervasiveness and like-minded decision-making algorithms. Credit: Getty]

Let's take a look at a bit of a puzzling conundrum. It is said that great minds think alike. You've undoubtedly heard that handy catchphrase many times.

During some idle conversations, if you and a colleague or friend manage to land on the same idea at the same precise moment, one of you is bound to exclaim with glee that great minds think alike. This is indubitably a flattering statement about you and your fellow human being. There is another piece of sage wisdom that we can add to this mix.

It is said that fools seldom differ. I’d like to slightly reword that adage. The seeming reasonably equivalent saying would be that fools tend to think alike.

I realize that you might quibble somewhat with the recasting of the famous line. Nonetheless, it seems relatively accurate that if fools seldom differ you can infer that fools predominantly tend to veer toward like thinking. I hope that doesn’t cause too much heartburn or consternation as to the deviating alteration of a hallowed piece of wisdom.

We are now at the perilous moment of this dilemma. Suppose that we openly grant the notion that great minds think alike is generally true, and we in the same breath grant the assertion that fools tend to think alike is true too. When you come upon a group of people that are all thinking alike, I have to ask you a simple question.

Are they all great minds or are they all fools? Yikes! According to the rule about great minds, they are presumably greatly minded. On the other hand, according to the rule about fools, they are apparently all fools. We seem to have a problem.

You might stammer that maybe these like-minded thinkers are both greatly minded and fools. Can you be both at the same time? Seems like you are trying to beg the question. You might fervently argue that thinking alike conveys nothing whatsoever about whether the gathering is thinking greatly or foolishly.

We have perhaps inadvertently turned the logic upside down. Any set of people that are thinking alike is merely thinking alike. You cannot try to overlay their likeness of thinking with being labeled as either a set of great minds or foolish minds.

They might be muddled minds. They might be persuaded minds. In essence, the characterization might not necessarily fall into the somewhat false dichotomy of solely being great or being foolish.

There are all kinds of insights that get attached to settings involving people possessing a like mind. Mahatma Gandhi reportedly said that a small group of determined and like-minded people can change the course of history. This surely showcases the immense potency of having like minds.

Plato cautioned that when it comes to minds that are closed, which you might suggest an unyielding set of like-minded people to be, you can get this: “This alone is to be feared: the closed mind, the sleeping imagination, the death of spirit.” Where am I going with this litany of curiosities about like minds? Well, it turns out that there are worries that AI is gradually taking us down an inevitable and undesirable path of having like-minded AI algorithms ruling our everyday activities. This is summarily referred to as AI algorithmic monoculture.

We are heading toward a circumstance whereby society relies upon pervasive AI systems that might have the same or nearly the same underlying algorithmic capacities. In that sense, we are vulnerable to like-mindedness on a massive scale that will exist across the globe. Before I get further into this topic, I want to right away clarify that I am not alluding to AI that is sentient.

As I will be explaining in a moment, we do not have sentient AI today. Despite those wild and wide-eyed headlines that proclaim we have sentient AI, this is absolutely not the case and should be utterly disregarded. The reason I emphasize this important point is that when I depict AI as being “like-minded” I do not want you to leap to the conclusion that today’s AI is somehow equivalent to the human mind.

It is assuredly not. Please do not make that kind of anthropomorphic association. My use of the like-minded phrasing is only intended to highlight that the AI algorithms might be composed in such a fashion that they work in the same way.

They are though not “thinking” in any semblance of what we would construe as human quality of thinking. I’ll say more about this shortly herein. AI that is “like-minded” in terms of having an algorithmic monocultural construct is something that we can assess as being simultaneously bad and good.

The bad side of things is that if these commonly utilized sameness AI systems are rife with biases and discriminatory inclusions, the AI is likely to insidiously be used on a widespread basis and promulgate those unsavory practices everywhere. The good side of things is that if the AI is appropriately devised and done without biases and discriminatory inclusions, we are hopefully infusing fairness widely. All of this has demonstrative AI Ethics and Ethical AI implications.

For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here.

Here are my handy-dandy seven rules of thumb about AI algorithmic monoculture:

1) AI algorithmic monoculture consists of employing the same or nearly the same underlying algorithms that are then widely utilized for making decisions that impact humans
2) Such AI can provide consistency and reliability, though this is a dual-edged sword
3) One side is that AI conveying adverse biases is readily spread and used over and over again in untoward ways (that's bad)
4) The other side is that AI embodying fairness and other justly desirable properties could thankfully be spread widely (that's good)
5) There is a certain kind of systemwide vulnerability in having AI homogeneity of this caliber, one that can be grandly undercut by disruptive shocks
6) AI heterogeneity could sometimes be preferred, though this raises the concern of vast incongruities that might arise
7) We all need to be thinking about, watching for, and contending with AI algorithmic monoculture

Before getting into some more meat and potatoes about the wild and woolly considerations underlying AI algorithmic monoculture, let's establish some additional fundamentals on profoundly integral topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad.

For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I've discussed at the link here. Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness.

The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good. On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here.

We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here). In a moment, I'll share with you some overarching principles underlying AI Ethics.

There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news.

The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts, we are finding our way toward a general commonality of what AI Ethics consists of. First, let's cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I've covered in-depth at the link here, these are their identified six primary AI ethics principles:

Transparency: In principle, AI systems must be explainable
Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
Reliability: AI systems must be able to work reliably
Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I've covered in-depth at the link here, these are their six primary AI ethics principles:

Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
Traceable: The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
Reliable: The Department's AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I've also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled "The Global Landscape Of AI Ethics Guidelines" (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

Transparency
Justice & Fairness
Non-Maleficence
Responsibility
Privacy
Beneficence
Freedom & Autonomy
Trust
Sustainability
Dignity
Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do.

Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road. The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI. There isn’t any AI today that is sentient. We don’t have this.

We don't know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here). The type of AI that I am focusing on consists of the non-sentient AI that we have today.

If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human.

More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here). Let's keep things more down to earth and consider today's computational non-sentient AI. Realize that today's AI is not able to “think” in any fashion on par with human thinking.

When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities.

Meanwhile, there isn't any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking. ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task.

You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data.

Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision. I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways.

Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in AI-crafted modeling per se. Furthermore, the AI developers might not realize what is going on either.

The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
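To make this concrete, here is a minimal sketch in Python (entirely synthetic data; the feature names, weights, and the zip-code-style “proxy” variable are my illustrative assumptions, not anything from a real lending system) of how ML/DL pattern matching can silently absorb a bias from historical decisions:

```python
# Minimal sketch: pattern matching on biased historical decisions.
# All data is synthetic; features and weights are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50, 15, n)      # a legitimate signal
group = rng.integers(0, 2, n)       # a protected attribute (0 or 1)

# Historical human decisions: driven by income, but with an untoward
# penalty quietly applied to group 1 -- the bias lurking in the data.
approved = (income - 8 * group + rng.normal(0, 5, n)) > 45

# The model never sees the protected attribute itself, yet a correlated
# proxy (think zip code) lets the pattern matching pick the bias up anyway.
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([income, proxy])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Otherwise identical applicants now get different approval rates by group.
test_income = np.full(1000, 50.0)
for g in (0, 1):
    Xg = np.column_stack([test_income, g + rng.normal(0, 0.3, 1000)])
    print(f"group {g}: approval rate {model.predict(Xg).mean():.2f}")
```

Nothing in that sketch requires malice; the math faithfully mimics the historical data, bias and all.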

Not good. Let's return to our focus on AI algorithmic monoculture. We all seem to realize that in today's interconnected digitally-based world, we can be blackballed by having even a teeny-tiny bit of data that sits in a database and seems to go wherever we go.

For example, suppose a database exists that contains a piece of data containing an indicator that you are not creditworthy. This might be true about you or might be completely false. You might be aware that the database contains this piece of information or you might be entirely unaware of it.

This is one large crapshoot of a data-glutted universe that we are all immersed in. The database that contains this indicator can easily share that vital aspect about you with other databases elsewhere. In a scant blink of an eye, electronically connected databases throughout the world could have the now-transmitted flag that you are not creditworthy.

If you opt to apply for a loan, the odds are that some loan approving app will reach out to one of those databases and retrieve the flag that says you are to be snubbed. You might try to get a loan while in one part of the world and get summarily turned down. Traveling to another area might do little good.

The interconnectedness of the databases will hound you no matter how far you travel. Fortunately, there are various laws about data and privacy that have been gradually enacted. The laws differ markedly from country to country.

They can also differ from state to state. But at least there is an awareness of the dangers associated with having data in databases that are able to rapidly spread information about you. The hope is that you will have legal recourse to try to prevent the spread of false information, or at least to be aware that it exists about you.

See my coverage at the link here on privacy intrusions and AI. I guess you could say that the data about you is a proverbial kind of “tag, you are it” game (in which we sometimes want to be the tagged person, and other times we wish to not be so tagged). Take a deep breath.

Suppose we wave a magic wand and can miraculously ensure that this homogeneity of data about you is not going to happen. We are able to get all of society to band together and stop these kinds of acts. You might ergo assume that you are no longer in peril of such concerns.

Sorry to say, you would be missing the dangers imposed by AI algorithmic monoculture. Here’s why. We shall return to the example of trying to get a loan.

Envision that you go to a lender and they are using an AI system that has a particular algorithm that we will refer to as algorithm Y. In your case, when you apply and provide your details, the algorithm Y is written in such a fashion that it will on-the-fly determine mathematically whether or not you should be turned down for the loan. Essentially, this algorithm can “decide” that you are not credit-worthy.

Notice that we are pretending in this case that the AI didn't reach out to a database to try and garner your creditworthiness. Thus, there is no chance that the AI made a turndown based on some bit of data that was sitting in a database here or there. The entire choice was made via the computations of algorithm Y.
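As a mental model, imagine that algorithm Y boils down to a deterministic scoring function along these lines (a hypothetical sketch with made-up weights and cutoffs, not any actual lender's code):

```python
# Hypothetical sketch of "algorithm Y": a pure, deterministic scoring
# function -- no database lookup, the verdict is entirely computed.
def algorithm_y(income: float, debt: float, years_employed: float) -> bool:
    """Return True to approve the loan, False to turn it down."""
    score = 0.4 * income - 0.6 * debt + 2.0 * years_employed  # illustrative weights
    return score > 30.0  # illustrative cutoff

print(algorithm_y(income=48.0, debt=35.0, years_employed=2.0))  # False: turned down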

The AI indicates you are turned down for the loan. I’m sure you would be disappointed with this outcome. You might though shrug your shoulders and opt to go to a different lender.

Again, you know for sure that there isn’t a database that is knocking you out of contention. In your mind, all you need to do is keep trying at different lenders and you’ll eventually get the green light. Upon going to another lender, once again you get turned down.

This is disconcerting. You try another lender, but get quickly turned down. One after another, each attempt leads to the same dismaying result.

You are exasperated. You are irked to no end. What in the heck is going on? Have all of these lenders secretly conspired to make sure you don’t get a loan? The short answer is “No” and we are going to say that they did not conspire per se.

Instead, they all happened to make use of the algorithm Y. They didn’t “conspire” in the sense of gathering in a backroom and agreeing to use algorithm Y in their AI. There wasn’t a mafia-style get-together that said they would all use the algorithm Y.

As a side note, one supposes that could indeed happen, but for the sake of discussion we are going to put those alternatives to the side for now. There is a perfectly sensible reason that algorithm Y might be used by all of these separate and distinct lenders. It could be that the algorithm Y is available as open source.

The AI developers at each of those differing lenders might in each case have merely reached out to an open-source library and copied that piece of code into their AI system. This was likely the easiest and fastest means of getting the job done. No need to try to devise that algorithm Y from scratch.

In a few minutes of online access, the coding is already done for you and directly ready for use. Copy and paste. Furthermore, you might be able to avoid having to do any debugging of the code.

Your assumption might be that the code is already well-tested and there is no need for you to reinvent the wheel. Okay, so lender upon lender all innocently opt to use the algorithm Y. There is a solid chance that the algorithm Y becomes known as the “gold standard” to be used for ascertaining credit worthiness.
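To see why that matters, consider this tiny self-contained sketch (the lender names and the copied-in scoring function are hypothetical illustrations) of what happens when every lender pastes in the same open-source algorithm:

```python
# Hypothetical: independent lenders each copy the same open-source scoring
# function, so shopping around cannot change the verdict for an applicant.
def algorithm_y(income: float, debt: float, years_employed: float) -> bool:
    return 0.4 * income - 0.6 * debt + 2.0 * years_employed > 30.0  # same code everywhere

applicant = dict(income=48.0, debt=35.0, years_employed=2.0)
for lender in ("Lender A", "Lender B", "Lender C", "Lender D"):
    verdict = "approved" if algorithm_y(**applicant) else "turned down"
    print(f"{lender}: {verdict}")  # prints "turned down" four times over
```

Four lenders, zero collusion, and yet the decisions are perfectly correlated because the code is identical.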

And this will make the adoption of that particular algorithm even more popular. AI developers aren’t just saving time by using it, they are also playing things safe. Everyone else swears that the algorithm is viable for use.

Why should you fight against the wisdom of the crowd? It would seem imprudent to do so. Welcome to the era of AI algorithmic monoculture. We have uncovered herein via this example that the same algorithm can be readily used over and over again in a multitude of AI systems.

There isn’t especially a conspiracy about it. No super-duper masterminded evil plot at hand. In lieu of those malevolent schemes, a particular algorithm becomes dominant due to what could be depicted as virtuous and beneficial reasons.

In years past, the possibility of widely using the same algorithms has existed, though more hurdles needed to be surmounted. Today, the use of algorithm storing hubs is almost effortlessly accessed. Open source is more accepted than it was perhaps in prior generations.

And so on. With the one example of the lender that we've been exploring, we might end up with two lenders, twenty lenders, two hundred lenders, two thousand lenders, or maybe hundreds of thousands of lenders all opting to use that same algorithm Y in their AI. The algorithm Y is seemingly surefire.

It is being selected and ingrained into AI all across the globe. Nobody is raising any red flags. There is no apparent reason to do so.

If anything, the red flag might be raised when some lender opts to not use algorithm Y. Hey, the question might be exhorted, you aren’t using algorithm Y. What gives? Are you purposely trying to do something underhanded or dirty? Get your act together and come on board with everyone else.

Extrapolate this same conceptual monoculture to all manner of AI systems and all manner of algorithms. A research study described the phenomenon this way: “The rise of algorithms used to shape societal choices has been accompanied by concerns over monoculture—the notion that choices and preferences will become homogeneous in the face of algorithmic curation” (Jon Kleinberg and Manish Raghavan, “Algorithmic Monoculture And Social Welfare”, PNAS, 2021). They further point out: “Even if algorithms are more accurate on a case-by-case basis, a world in which everyone uses the same algorithm is susceptible to correlated failures when the algorithm finds itself in adverse conditions.”

We can now usefully revisit my earlier set of seven rules about AI algorithmic monoculture:

1) AI algorithmic monoculture consists of employing the same or nearly the same underlying algorithms that are then widely utilized for making decisions that impact humans
2) Such AI can provide consistency and reliability, though this is a dual-edged sword
3) One side is that AI conveying adverse biases is readily spread and used over and over again in untoward ways (that's bad)
4) The other side is that AI embodying fairness and other justly desirable properties could thankfully be spread widely (that's good)
5) There is a certain kind of systemwide vulnerability in having AI homogeneity of this caliber, one that can be grandly undercut by disruptive shocks
6) AI heterogeneity could sometimes be preferred, though this raises the concern of vast incongruities that might arise
7) We all need to be thinking about, watching for, and contending with AI algorithmic monoculture

As noted in my rule #2, there is a decidedly dual-edged sword about AI algorithmic monoculture. Per my rule #3, you could end up on the short end of the stick. If you are getting turned down by lender after lender, wherever you go, and if algorithm Y is doing this based on a bias or other inappropriate basis, you are regrettably cursed.

You will have a much harder time trying to get this overturned. In the case of data about you in a database, the odds are that you would have some legal recourse and also general recognition of what bad data can do. Few people would comprehend that a bad algorithm is following you to the ends of the earth.

Per my rule #4, there is a potential upside to the AI algorithmic monoculture. Assume that the algorithm Y is rightfully precluding you from getting a loan. You might have tried sneakily and perniciously to trick things by shopping around.

Since the same algorithm Y is being used widely, your shopping is unlikely to strike gold. Though we might not like the idea of a persistent and in-common possibility of algorithm fairness (if there is such a thing, see my analysis at the link here), we can possibly rejoice when a good thing is spread widely. Let's next discuss shocks.

In my rule #5, I indicate that there is an underlying qualm that an AI algorithmic monoculture could be subject to massive disruption. This is easily explained. Imagine that there is a software bug in the algorithm Y.

No one noticed it. For eons, it has been hiding there in plain sight. In case you doubt that this could ever happen, namely that a bug could sit in open-source code and yet not have been found earlier, see my coverage at the link here of such instances.

The bug surfaces and causes the algorithm Y to no longer be the glorified piece of code that everyone thought it was. Realize that this bug is sitting in those thousands upon thousands of AI systems. In short order, the bug might be encountered across the planet, and we quickly find ourselves facing a horrendous mess.

Since everyone put their eggs into one basket, and since the basket is now totally awry, the same is taking place worldwide. A disaster of epic proportions. In theory, this would not have readily happened if the lenders had each been devising their own proprietary algorithms.

The chances are that if one of them had a bug, the others would not. In the case of all of them using the same base code, they all have the same bug. Darned if you do, darned if you don’t.
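A toy simulation can illustrate the all-or-nothing character of such correlated failures (the system counts and bug probability below are assumptions chosen purely for illustration):

```python
# Toy simulation of rule #5: monoculture turns a rare latent bug into an
# all-or-nothing, systemwide failure; heterogeneity keeps failures isolated.
import random

random.seed(42)
N_SYSTEMS = 1000
P_BUG = 0.01  # assumed chance that any given algorithm harbors a latent bug

# Monoculture: one shared algorithm, so a single draw decides everyone's fate.
mono_failures = N_SYSTEMS if random.random() < P_BUG else 0

# Heterogeneity: each system ships its own independently written algorithm.
hetero_failures = sum(random.random() < P_BUG for _ in range(N_SYSTEMS))

print("monoculture failures:", mono_failures)      # either 0 or 1000
print("heterogeneous failures:", hetero_failures)  # roughly 10, isolated cases
```

The expected number of failures is the same in both worlds; what differs is the variance, and it is the everything-breaks-at-once tail that makes monoculture so unnerving.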

I am sure that some of you are hollering that the good news about the bug in a monoculture setting is that if there is a fix available, everyone can simply put in place the same fix. This would seem to be a bright and sunshiny way of looking at the matter. Yes, that might work.

The gist here though is that there is a heightened chance of an across-the-board disruption. Even if the resolution might be easier to cope with, you still are faced with the massiveness of disruption due to the monoculture facets. Besides the instance of a bug that could cause a shock, we can abundantly come up with lots of other unnerving scenarios.

One would be a cyber crook that devises an evil way to usurp a popularly used algorithm. The evildoer could have a bonanza in their hands. They can go from AI to AI, getting the AI to do something dastardly.

All because the same algorithm was used over and over again. The massive scale can be leveraged for goodness and, lamentably, can be potentially exploited for badness. At this juncture of this weighty discussion, I'd bet that you are desirous of some illustrative examples that might showcase this topic.

There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars.

This will serve as a handy use case or exemplar for ample discussion on the topic. Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI algorithmic monoculture, and if so, what does this showcase? Allow me a moment to unpack the question. First, note that there isn’t a human driver involved in a true self-driving car.

Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I'd like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3.

The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). There is not yet a true self-driving car at Level 5, and we don't yet even know if this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend; see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable). For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI Algorithmic Monoculture

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. Why is this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI.

Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving.

Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. Let’s dive into the myriad of aspects that come to play on this topic. First, it is important to realize that not all AI self-driving cars are the same.

Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do. Furthermore, whenever stating that an AI driving system doesn't do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing.

Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system. I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

We shall begin by stating some crucial groundings. AI driving systems are being devised to try and safely operate self-driving cars. Some automakers and self-driving tech firms are doing their coding in proprietary ways.

Others are relying upon open-source code. Envision that some algorithm Z is available in open-source repositories and handy for use in AI driving systems. An automaker or self-driving tech firm incorporates the algorithm Z into their AI driving system.

This will be integrally woven into their AI driving system. If they put say a dozen self-driving cars on the roadways, all of those autonomous vehicles will contain the algorithm Z as part of the onboard software of the AI driving system. Gradually, assuming that the self-driving cars are driving safely, the fleet increases in size to twenty self-driving cars on the roadways.

A decision is made to ramp up further. Soon, two thousand self-driving cars of that fleet are now on the streets and highways. And so on.

A different automaker is also using the algorithm Z in their driving system. They too are deploying their self-driving cars. Their fleet is increased in size.

Soon they have thousands of their self-driving cars wandering to and fro. I trust that you can see where this is heading. We can find ourselves in an AI algorithmic monoculture amidst the advent of AI-based self-driving cars.

A multitude of brands and models of autonomous vehicles might all have a particular algorithm being used somewhere in their AI driving system. There wasn’t any collusion about this. No grand conspiracies at play.

In terms of how many self-driving cars we might someday have on our roads, there is a heated debate on that topic. We know that in the United States alone there are about 250 million human-driven cars today. Some suggest that we will ergo need about 250 million self-driving cars, assuming that we eventually do away with human-driven cars or that they are naturally being ditched and replaced by self-driving cars.

Not so fast, some exhort. Human-driven cars spend about 90% or more of their time not being used. By and large, human-driven cars sit parked and await a human driver to drive them.

AI-based self-driving cars can be driving all of the time, just about. You could presumably have an AI self-driving car going 24×7, other than during maintenance or other required downtime. In that case, you won’t seemingly need 250 million self-driving cars to replace one-for-one the 250 million human-driven cars.

Perhaps 200 million self-driving cars will suffice. Maybe 100 million. No one can say for sure.
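The back-of-the-envelope arithmetic driving those wildly divergent estimates is simple; the utilization percentages below are assumptions for illustration only:

```python
# Fleet sizing sketch: the needed count hinges entirely on assumed utilization.
human_cars = 250_000_000
human_utilization = 0.10  # human-driven cars sit parked ~90% of the time

for av_utilization in (0.125, 0.25, 0.50):  # assumed self-driving duty cycles
    needed = human_cars * human_utilization / av_utilization
    print(f"AV utilization {av_utilization:.1%}: ~{needed / 1e6:.0f} million cars")
```

Under those assumptions the answer swings from roughly 200 million down to 50 million, which is exactly why nobody can say for sure.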

For my assessment of this issue, see the link here. For the moment, I just want to point out that we could have many millions upon millions of self-driving cars ultimately roaming around on our highways and byways. Precisely how many of them we will ultimately have on the roads is not a crucial concern for the sake of this discourse.

There will unquestionably be many millions. This sizing in general is important due to AI algorithmic monoculture and the keystone property of encountering both advantages and disadvantages at a massive scale. Here’s the twist.

By atrociously awful dumb luck there is a severe problem within the algorithm Z that nobody previously managed to notice. There is a bug that will cause the rest of the AI driving system to go awry. Bad news.

For those of you that are in the throes of devising AI driving systems, I realize that you don’t usually like these kinds of worst-case scenarios, and though the chances are perhaps slim, they nonetheless are worthwhile to discuss. We cannot keep our heads in the sand. Better to be eyes wide open and seek to prevent or at least mitigate these types of calamities.

In theory, an AI self-driving car containing this bug might attempt to ram into and crash with nearly anything and everything within its grasp to do so. The AI is merely doing as it was “devised” to do in this setting. This would be disastrous.

Some of you might be thinking that one AI self-driving car perchance encountering the bug would not seem like much of a problem per se. I say that because once the AI self-driving car smashes into something such as a truck or whatever, the vehicle itself is likely to be so damaged that it can no longer actively be directed by the AI to carry out any further chaos and destruction. It is dead in the water, so to speak.

Well, consider the scaling factor involved. If there are millions and millions of self-driving cars and they are all relying on that same embedded algorithm Z, they might ruefully execute that same bug. I know and acknowledge that this bug might be fixed or overcome via the use of an OTA (Over-The-Air) electronically distributed software update.

As a quick background, many have gushed about the advantages of using OTA. When a software update is needed, you won’t have to take an AI self-driving car into a car repair shop or dealership. The OTA can pretty much be done wherever the self-driving car happens to be (within limitations).

Meanwhile, until we figure out the bug and the fix, and before sending it out via OTA, the self-driving cars on the roadways are still going to be in a precarious posture. Some will have encountered the bug and gone awry. Others are on the verge of doing so.

We might opt to insist that all self-driving cars should for the moment be stopped in place and not further used until the OTA fix is beamed into the AI driving systems. Imagine the disruption. Suppose we have very few human-driven cars left.

The odds too are that self-driving cars will not be outfitted with human driving controls. In essence, you could end up with a grounding of 200 million (or whatever number) self-driving cars while we get the bug fixed. If society has become dependent upon self-driving cars, you pretty much have shut down society from a mobility perspective, at least until the bug fix is pushed out.

Now that would be a detrimental and shocking shock to the system, as it were. I suspect that the idea of a lurking bug that does abysmal actions would seem nearly impossible to imagine, though at the same time we cannot rule out the possibility in its entirety. There are other possibilities, such as cyber criminal vulnerabilities that might exist.

I have discussed for example how a rogue nation-state could try to carry out a heinous act by exploiting a weakness in an AI driving system, see my discussion at the link here. In addition, for my details about how a malicious takeover of AI self-driving cars could be performed, see my coverage at the link here.

Conclusion

Like-mindedness is both a blessing and a curse.

We earlier noted that Gandhi said that those of a like-mind can achieve great things. AI that is “like-minded” can potentially achieve great things. Plato warned us that closed minds can be a grave danger.

If we have like-minded systems all around us, we are potentially faced with the lurking perils of inadvertent (or intentional) subverting elements that can harm some of us or maybe all of us. We need to have an open mind about AI algorithmic monoculture. If we do things right, we might be able to harness the goodness and avert the badness.

But only if we are right in our minds about it all.

Follow me on Twitter.


From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/06/19/ai-ethics-confronting-the-insidious-one-like-mind-of-ai-algorithmic-monoculture-including-for-autonomous-self-driving-cars/
