Using Simulations Of Alleged Ethics Violations To Ardently And Legally Nail Those Biased AI Ethics Transgressors Amid Fully Autonomous Systems
Lance Eliot, Contributor
Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).

Jun 27, 2022, 11:00am EDT

Finding AI Ethics violations via the use of simulations is a newly emerging approach.

Have you ever watched a magician pull a rabbit out of a hat and wondered how that eyepopping trick was achieved? I’m betting that at least once or twice in your life that you have attempted to figure out a magic trick by experimenting on your own. You likely tried to do a setup that seemed roughly the same as the trick that you had previously observed. Of course, you almost certainly didn’t know all the nitty-gritty details so you needed to make some educated guesses about how the whole trick is supposed to come together.

Maybe you were able to figure things out or maybe not. In a sense, you were trying to undertake a simulation of a magic trick that had gotten your rapt attention, doing so with the realization that there wasn’t necessarily a guarantee of success that your effort would hit paydirt. A simulation conventionally attempts to replicate or reenact the essence of an underlying matter.

Despite not knowing the behind-the-scenes particulars, you nonetheless did your best to simulate the nature of the trick. If you were able to seemingly perform the trick as thoroughly as the original that you witnessed, the odds were that you potentially nailed the trick in the same manner as it was cleverly done. That being said, there is always a chance that you found a completely different way to achieve the same trick, thus you did not exactly do the trick as originally prescribed.

Simulations can be quite handy. I’d like to tackle another realm in which simulations can be sensibly utilized, namely in the field of Artificial Intelligence (AI) and as pertaining to adherence to AI Ethics principles. You see, an AI system can be kind of like a magic trick that enacts some form of activity or actions, and yet you might not know what is going on inside the AI.

You are not necessarily privy to the details of the AI. If needed, you could try to simulate the AI and do so to try and discern whether the AI might be leaning toward being unethical or perhaps already has veered beyond AI Ethics into nefariousness entailing untoward biases and discriminatory practices. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here , just to name a few.

This idea of using a simulation for this purpose is a bit unusual and only beginning to get any limelight shined on it. A much more typical approach to gleaning whether an AI system is biased or acting in a discriminatory fashion usually involves closely examining the outputs of the AI. There is a chance that if you can get enough output from the AI, you can do an analysis to try and ascertain whether adverse biases might be present within the AI system.

One problem with trying to do this output-oriented analysis is that you might not have enough of the outputs to do a viable job of detecting biases. Perhaps you are only able to get small dribbles of output data. This might be woefully insufficient to do any kind of credible analysis.

The chances are that you either won’t spot biases that do exist (a type of false negative) or you will potentially cry wolf and exclaim that biases do exist yet they aren’t truly there (a type of false positive). I’m sure that you are thinking that this can be easily resolved by simply accessing the programming guts of the AI system and perusing the code and algorithms therein. Furthermore, if the AI was initially established by doing computation pattern matching on a set of data, you could merely get the inputted data and use that to do your analysis too.

Yes, that would be fine, except for a notably looming and altogether constraining consideration. The maker or deployer of the AI might not be keen on your having all of that insider stuff. This might be compared to secreted tricks of a magician.

After a magician does a trick in front of your very eyes, would you customarily expect that you could walk up to them and demand that they explain how the trick was done? I doubt that you would be especially successful in having that request or plea entertained by the magician. Is a magician that refuses to tell the secrets of their magic tricks doing so because they are harboring some evil intentions? Not usually. Is an AI maker or AI deployer that refuses to make available the source code of their AI and the data that was used to set it up being an evildoer and working under devilish intentions? Well, let’s unpack that question to figure out a viable answer.

Note that we should be careful in assuming that just because someone won’t open the kimono of their AI that they are somehow sneakily hiding something. You wouldn’t customarily construe a magician as being underhanded when they won’t show you how their trick is accomplished. They potentially put their heart and soul into perfecting their magic tricks.

Plus, the magic tricks might be their livelihood. In the case of an AI system, a tremendous amount of financial investment might have been required to design and field the AI. There could be a lot of valuable Intellectual Property (IP) that exists within the AI.

Asking that this be divulged is not quite so straightforward of a matter. It could be that the AI maker or deployer is rightfully wanting to keep their AI in a suitably secure and protected mode that is not available for prying eyes. Any type of willy-nilly revealing of the innards of the AI could undercut their investment.

In addition, cyber hackers might take any such revealed morsels and divine dastardly ways to poison the AI system or otherwise get the AI to go awry. So, unquestionably, there can be lots of perfectly good reasons to not divulge what is going on inside an AI system. In the same breath, and quite unfortunately, there are also lots of evildoer reasons to also refuse to open or disclose your AI system.

Maybe the AI was purposely crafted to be biased and act in a discriminatory manner. Or, the AI might have gradually gone down a wrongful path, though the maker or deployer doesn’t believe this to be the case or wants to hide any clues to avert getting sued or otherwise being caught in having allowed this to happen. I’ve repeatedly predicted in my columns that we are inching our way toward a boatload of lawsuits about AI systems.

More and more AI is being shoveled out into society. Some of it is inadvertently devised with problematic biases. Some of it was intentionally so crafted.

All in all, people will begin to figure this out and you can expect a slew of lawsuits against the AI makers and the AI deployers. Adding to that firestorm will be the advent of new laws governing AI and the use of AI systems. Those laws are going to increase awareness about watching out for unlawful AI.

This in turn will fuel the use of the law and our courts when coping with AI systems that are either improper or thought to be improper. As a sidenote, there are some that deploy AI and blindly assume they will be immune to potential lawsuits since they will claim that only the AI maker is the truly responsible party. You might want to get a legal opinion on that kind of conjecture since it often proffers questionable protection.

Another absurdly ridiculous angle is a belief that the AI itself will be held responsible, rather than the humans that made the AI system or the humans that deployed the AI system. Sorry to let you down, but that is not going to work out. Take a gander at my discussion about today’s AI as not yet having legal personhood, see the link here .

The gist of this is that we are pell-mell proceeding ahead on leveraging AI and meanwhile there are relatively keen odds that some of that AI is going to be exhibiting undue biases. How would you know whether a particular AI system is infused with undue biases and acting in a discriminatory way? As already pointed out, the AI might be under lock and key. This precludes your doing any in-depth analysis of the AI.

Into this kind of locked-room mystery steps a potential resolver. It might be possible to prepare a simulation of the AI in order to see if you can derive any AI Ethics violations. Doing so could bolster your claim that the AI is indeed acting improperly.

Out of this soundly reasoned and bolstered evidentiary claim, you might then be able to get more direct access or force the AI maker to showcase some evidence that their AI is not veering into any unsavory territory. In brief, you could use a simulation of purportedly envisioned AI Ethics violations to try and discern whether AI Ethics violations might in fact be occurring or have the potential for occurring in the targeted AI system. It is a handy dandy approach.

Very few everyday individual citizens would have the wherewithal to do this. Such simulations are generally complex and time-consuming to create. My overall guess is that, among those seeking to expose an AI system for having undue biases, mainly the ones with deeper-pocketed resources would be able to undertake a simulation-devising effort.

Perhaps a non-profit that has sufficient capabilities might do this or pay to have it done. Those bringing a class action lawsuit might take this path. And so on.

Before getting into some more meat and potatoes about the wild and woolly considerations underlying the use of simulations to ascertain potential AI Ethics violations, let’s establish some additional fundamentals on profoundly integral topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL). You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI.

Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning. One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities.

You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good . Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad . For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here .

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI doing so to undercut the AI For Bad and simultaneously heralding and promoting the preferable AI For Good .

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here . We could also have a separate AI system that acts as a type of AI Ethics monitor.

The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here ). In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there.

You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar.

All told, this suggests that by a form of reasoned convergence of sorts, we are finding our way toward a general commonality of what AI Ethics consists of. First, let’s briefly cover some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI. For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

Transparency: In principle, AI systems must be explainable
Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
Reliability: AI systems must be able to work reliably
Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:

Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

Transparency
Justice & Fairness
Non-Maleficence
Responsibility
Privacy
Beneficence
Freedom & Autonomy
Trust
Sustainability
Dignity
Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack.

It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road. The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI.

This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts. Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible.

Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here ). The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction.

A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans.

Let’s keep things more down to earth and consider today’s computational non-sentient AI. Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition.

The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models.

Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly.

There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se. Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases.

You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL. You could somewhat use the famous or infamous adage of garbage-in garbage-out.

The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithm decision-making (ADM) of AI axiomatically becomes laden with inequities. Not good.
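To make that biases-in, biases-out point a bit more tangible, here is a minimal, purely hypothetical sketch (the data, the proxy feature, and the skewed approval bar are all invented for illustration, not drawn from any real system) showing how garden-variety computational pattern matching will faithfully reproduce a bias baked into historical human decisions, even when the sensitive attribute itself is never handed to the model.

```python
# Hypothetical illustration: a pattern-matching model trained on biased
# historical decisions will mimic those biases on new data, even when the
# sensitive attribute itself is never given to the model (proxy discrimination).
import random
from sklearn.linear_model import LogisticRegression

random.seed(42)

def make_record():
    group = random.choice(["A", "B"])              # sensitive attribute
    # Neighborhood is correlated with group and acts as an unintended proxy.
    neighborhood = 1 if (group == "A") == (random.random() < 0.9) else 0
    score = random.gauss(650, 50)                  # creditworthiness-style score
    # Historical human decisions: group B was (unfairly) held to a higher bar.
    bar = 640 if group == "A" else 680
    approved = 1 if score >= bar else 0
    return group, neighborhood, score, approved

history = [make_record() for _ in range(5000)]

# The model sees only neighborhood and score -- never the group label.
X = [[nbhd, score] for _, nbhd, score, _ in history]
y = [approved for *_, approved in history]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Two fresh applicants with identical scores, differing only by neighborhood,
# receive noticeably different approval odds: the old bias has been "learned".
for nbhd in (1, 0):
    prob = model.predict_proba([[nbhd, 660]])[0][1]
    print(f"approval probability at score 660, neighborhood {nbhd}: {prob:.2f}")
```

The point of the sketch is not the particular numbers; it is that the ADM happily mimics whatever the historical labels encode, which is exactly the circumstance a simulation-based probe would be trying to surface.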

Let’s now return to the topic of using simulations to detect AI Ethics violations. Two primary approaches can be followed:

1) Reconstruction “Simulation” of the targeted AI
2) Best Guess Simulation of the targeted AI

In the first case, the idea is to pretty much build the same AI as per the target AI. It is going to be a type of reconstruction, perhaps from scratch.

You would then use this kind of derived digital twin to figure out whether AI Ethics violations might be taking place or might be able to occur in the future by the targeted AI. I have put the word “simulation” in quotes because you aren’t strictly simulating the target AI. In essence, you are trying to effectively replicate the AI and see what it does.

For the second case, the notion is more along the lines of doing an actual simulation. You craft a simulation that acts in ways that seem to be equivalent to the target AI. You are not trying to brick-by-brick reconstruct the targeted AI.

As such, you can use a variety of simulation-oriented tools to devise the simulation. There isn’t usually a need per se to do traditional AI coding or programming in this alternative approach. When taking either of the two avenues, the first thing you would need to do is try to ascertain whatever you can reasonably figure out about the targeted AI system.

As earlier mentioned, this might be extremely difficult to do. The AI might be encased in quite protective cybersecurity measures. Your attempts to probe into the AI could be rebuffed.

Worse still, your efforts could be construed as a cybersecurity breach, and you are acting as an evildoer cybercriminal. Be careful about how you opt to delve into the targeted AI. Do so strictly lawfully.

There are three major elements that you are seeking to examine:

a) Input data used for the targeted AI
b) AI algorithms and models of the targeted AI
c) Output data produced by the targeted AI

If the targeted AI is based on the use of Machine Learning or Deep Learning, the odds are that there is a corpus of data that was used for training purposes. Sometimes the data is available via public sources. When the data is private or proprietary, I’ll again remind you to be cautious in seeking to grab hold of the data as you might be violating the legal rights of whoever owns the data.

When you aren’t able to properly get ahold of the training data, you could potentially find other public sources of data that are comparable. You could also craft your own data, doing so in a manner known as data synthesis (a minimal sketch appears below). Attempts to create your own comparable data might be costly and laborious.
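For what such data synthesis might look like in its simplest form, here is a hedged sketch that fabricates stand-in records from assumed marginal distributions; the field names, categories, and proportions are entirely invented, and in a real effort you would swap in whatever is publicly known or reasonably inferred about the targeted AI’s data.

```python
# A bare-bones data-synthesis sketch: fabricate stand-in training records that
# roughly match whatever marginal statistics are assumed or publicly known
# about the targeted AI's real data. All fields and proportions are invented.
import csv
import random

random.seed(7)

ASSUMED_MARGINALS = {
    "gender": [("female", 0.51), ("male", 0.49)],
    "age_band": [("18-34", 0.30), ("35-54", 0.40), ("55+", 0.30)],
}

def draw(categories):
    r, cum = random.random(), 0.0
    for value, share in categories:
        cum += share
        if r <= cum:
            return value
    return categories[-1][0]

def synthesize(n):
    for _ in range(n):
        yield {
            "gender": draw(ASSUMED_MARGINALS["gender"]),
            "age_band": draw(ASSUMED_MARGINALS["age_band"]),
            "income": round(random.lognormvariate(10.5, 0.5), 2),  # assumed shape
        }

with open("synthetic_inputs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["gender", "age_band", "income"])
    writer.writeheader()
    writer.writerows(synthesize(10_000))
```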

Keep in mind too that having solely “comparable” data is a loosey-goosey aspect that can sink your simulation results. Suppose for example that the data you’ve assembled is biased while the true dataset was not. Or suppose the opposite is the case, namely that the data you’ve assembled is seemingly absent of inherent bias and meanwhile the true dataset is.

Your simulation is bound to be askew and you would need to be quite careful in reaching any substantive conclusions about the targeted AI. There are four facets of the data that you might use as input for your simulation efforts:

i) Data for the simulation is exactly the same as the data used for the targeted AI
ii) Data for the simulation is “comparable” but slightly adrift of the targeted AI data
iii) Data for the simulation is “comparable” but widely adrift of the targeted AI data
iv) Data for the simulation is not sensibly comparable to the targeted AI data

Each of those four conditions requires its own respective considerations about how your simulation is going to do. In terms of the outputs of the targeted AI, you are likely going to only have a sparse amount of outputs.

Essentially, the full range of potential output possibilities is far larger than whatever output sets you are going to be able to readily collect. As such, you might need to try and extrapolate from the outputs you do have. As usual, this can be dicey as your extrapolation might be off-target of the targeted AI.

For the AI algorithms or models being used in the targeted AI, you can often make a reasoned guess about what the AI might have been built with. Sometimes the AI developers will openly indicate which AI algorithms or models they have used. They do so by mentioning a generic algorithm or model.

This is helpful to your simulation effort, but keep in mind that it is a far cry from knowing the generic instance to knowing all of the parameter settings and other crucial nuances that turned that generic instance into a working AI system. After doing all of those aforementioned preparatory steps, you are now ready to craft the simulation. I usually assess a simulation at its inception via how much of the targeted AI we were able to discern.

The more we were able to determine various vital details about the targeted AI, the better the chances of having a simulation that will be practical and realistic for the purposes of detecting AI Ethics violations. Lamentably, in the case of not having much of any semblance of the targeted AI, the simulation is undoubtedly going to be woefully afield of the targeted AI and you need to be mighty cautious about leaping to any rash conclusions thereof. My assessment of the overarching fit between the simulation and the targeted AI is depicted as being in one of three states:

High-Fit — reasonably representative of the targeted AI
Medium-Fit — only somewhat representative of the targeted AI
Low-Fit — marginally representative of the targeted AI

In my viewpoint, you should not even say that your simulation is “low-fit” if it doesn’t at least rise to some reasonable level of marginal representation.

It is a wrongful act of cheating the scale to claim that you have attained a low-fit when the amount of fit is nearing zero. Be honest and say that you weren’t able to achieve a modicum of fit and therefore your simulation should be either rejected outright or at least skeptically interpreted and given only minimal weight. Once you’ve got your simulation up and running, you will want to begin the effort of testing to see if undue biases and other maladies are showing up.

You can use an entire battery of AI Ethics precepts and metrics to undertake this assessment. It is best to have clear-cut objectives of what you are looking for. For example, you might be trying to ascertain whether gender biases seem to exist in the simulated version of the targeted AI.

This ought to be substantiated by having defined metrics and you will want to keep a close tracing of how you believe this bias is demonstrated by the simulation. With your simulation results, the next step entails trying to gauge whether the targeted AI also embodies or showcases the biases that you now believe you’ve discovered. Of course, you might not have found any biases in the simulated version, but that does not serve as an all-clear signal for the targeted AI: biases could still exist in the targeted AI even though your simulated efforts did not surface them.
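As one rough illustration of what a defined metric might look like, here is a minimal sketch that computes a selection-rate disparity (a demographic-parity style gap) over decisions emitted by your simulated version of the targeted AI; the simulated_ai function is a hypothetical stand-in for your own simulation, and the four-fifths style cutoff is merely a commonly cited rule of thumb, not a legal determination.

```python
# Minimal sketch of a defined bias metric applied to simulation outputs.
# 'simulated_ai' is a hypothetical stand-in for your simulated version of the
# targeted AI; the 0.8 cutoff mirrors the familiar four-fifths rule of thumb.
import random

random.seed(0)

def simulated_ai(applicant):
    """Stand-in for the simulation under test; returns 1 (favorable) or 0."""
    penalty = 15 if applicant["gender"] == "female" else 0  # bias injected for the demo
    return 1 if applicant["score"] - penalty >= 650 else 0

def selection_rates(applicants, decisions, attribute):
    rates = {}
    for value in {a[attribute] for a in applicants}:
        idx = [i for i, a in enumerate(applicants) if a[attribute] == value]
        rates[value] = sum(decisions[i] for i in idx) / len(idx)
    return rates

applicants = [{"gender": random.choice(["female", "male"]),
               "score": random.gauss(655, 30)} for _ in range(10_000)]
decisions = [simulated_ai(a) for a in applicants]

rates = selection_rates(applicants, decisions, "gender")
ratio = min(rates.values()) / max(rates.values())
print("selection rates by gender:", {k: round(v, 3) for k, v in rates.items()})
print("disparity ratio:", round(ratio, 2),
      "-> flag for closer review" if ratio < 0.8 else "-> within the chosen threshold")
```

Whatever metric you pick, keep a close record of the raw rates, the threshold you applied, and why, so the claim can be scrutinized later.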

Make sure that you are doing a rigorous effort to both validate and verify (v&v) your simulation. The classic way to think of validation and verification is that the validation tends to answer the question of whether you are building the right thing, while the verification tends to answer the question of whether you built it right. Only leverage the simulation results if you can safely and thoroughly show that you did a rigorous validation and verification.
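On the verification side of v&v, one small and admittedly simplified check is to replay whatever sparse real outputs you did manage to lawfully collect from the targeted AI through your simulation and measure the agreement; the record format and the 90% agreement bar below are illustrative assumptions rather than any established standard.

```python
# Tiny verification sketch: does the simulation reproduce the few real
# input/output pairs actually observed from the targeted AI? The record
# format and the 90% agreement bar are illustrative assumptions.

def simulated_ai(inputs):
    """Hypothetical stand-in for your simulation of the targeted AI."""
    return 1 if inputs["score"] >= 650 else 0

# Sparse observations lawfully harvested from the targeted AI's visible behavior.
observed = [
    ({"score": 700}, 1),
    ({"score": 630}, 0),
    ({"score": 655}, 1),
    ({"score": 645}, 0),
]

matches = sum(1 for inputs, real_output in observed
              if simulated_ai(inputs) == real_output)
agreement = matches / len(observed)

print(f"agreement with observed target-AI outputs: {agreement:.0%}")
if agreement < 0.9:
    print("treat the simulation as low-fit and any bias findings with skepticism")
```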

I’ll add something else that you should be really mindful of. Here it is. Be extremely mindful of utilizing your simulation results if you opt to make accusations about the targeted AI.

You could end up in very murky and legally damaging waters via the tossing of accusations that malign the targeted AI. Legal action against you for making such allegations could be launched by those that own or deploy the targeted AI. Consult your legal advisor before taking any such finger-pointing actions.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped.

One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic. Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about simulations of AI Ethics violations, and if so, what does this showcase? Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here. I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here ), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here). Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Simulations Of AI Ethics Violations

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers.

The AI is doing the driving. One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate. Consider the matter of AI-related statistical and computational biases. Contemplate the seemingly inconsequential question of where self-driving cars will be roaming to pick up passengers.

This seems like an abundantly innocuous topic. At first, assume that AI self-driving cars will be roaming throughout entire towns. Anybody that wants to request a ride in a self-driving car has essentially an equal chance of hailing one.

Gradually, the AI begins to primarily keep the self-driving cars roaming in just one section of town. This section is a greater money-maker and the AI has been programmed to try and maximize revenues as part of the usage in the community at large. Community members in the impoverished parts of the town turn out to be less likely to be able to get a ride from a self-driving car.

This is because the self-driving cars were further away and roaming in the higher revenue part of the town. When a request comes in from a distant part of town, any other request from a closer location would get a higher priority. Eventually, the availability of getting a self-driving car in any place other than the richer part of town is nearly impossible, exasperatingly so for those that lived in those now resource-starved areas.

Out goes the vaunted mobility-for-all dreams that self-driving cars are supposed to bring to life. You could assert that the AI pretty much landed on a form of statistical and computational biases, akin to a form of proxy discrimination (also often referred to as indirect discrimination). The AI wasn’t programmed to avoid those poorer neighborhoods.

Instead, it “learned” to do so via the use of the ML/DL. See my explanation about proxy discrimination and AI biases at the link here . Also, for more on these types of citywide or township issues that autonomous vehicles and self-driving cars are going to encounter, see my coverage at this link here , describing a Harvard-led study that I co-authored.

We could use a simulation to try and anticipate whether such a bias is either starting to emerge or has already taken place. There are three major elements that you are seeking to examine:

a) Input data used for the targeted AI
b) AI algorithms and models of the targeted AI
c) Output data produced by the targeted AI

We need to decide which of the two approaches we want to take:

1) Reconstruction “Simulation” of the targeted AI
2) Best Guess Simulation of the targeted AI

In this instance, the Best Guess Simulation will be suitable. Let’s assume that we are not able to readily get the input data used for the targeted AI.

Nor were we able to get or even discern the AI algorithms or models that are being used. We seem to be facing quite a shutout. But for this particular circumstance, we still have a chance to devise and use a simulation for ascertaining whether any AI Ethics violations might be taking place.

The question is whether we can get the output data that represents where the self-driving cars have been picking up passengers. There are various ways we might be able to get this data. One means would be if the city or town had required that the fleet operator provide such data for use by the city or town authorities.

As a side note, this is the kind of topic that many city leaders do not think about before greenlighting self-driving cars in their communities. As mentioned in the Harvard study that I co-led, there are many stipulations that local leaders should be mindfully considering when granting autonomous vehicle fleet operators and makers permission to use their public roadways. By using a simulation to assess the collected data, you might be able to back into the realization that the AI is selectively and inexorably becoming biased in favor of some areas of town and against others.
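Here is a minimal sketch of what that kind of assessment could look like; the zones, the counts, and the flagging threshold are invented purely for illustration, standing in for whatever data feed the fleet operator actually provided.

```python
# Hypothetical sketch: compare ride requests against completed pickups per zone
# using the kind of data a city might require from a fleet operator. The zones,
# counts, and flagging threshold are invented purely for illustration.

# (zone, rides_requested, rides_served) -- imagine this came from the mandated feed.
pickup_log = [
    ("uptown",    4200, 4050),
    ("midtown",   3900, 3760),
    ("riverside", 2800, 1100),   # the poorer part of town in this made-up example
    ("old_mill",  2500,  900),
]

service_rate = {zone: served / requested for zone, requested, served in pickup_log}
citywide = sum(s for _, _, s in pickup_log) / sum(r for _, r, _ in pickup_log)

print(f"citywide service rate: {citywide:.0%}")
for zone, rate in sorted(service_rate.items(), key=lambda kv: kv[1]):
    flag = "  <-- disparate service, investigate" if rate < 0.8 * citywide else ""
    print(f"{zone:10s} {rate:.0%}{flag}")
```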

This might be only subtly detected via the simulation. On the other hand, if the bias has been going on for a while, you might have plenty of indication that this bias has arisen.

Conclusion

We began this discussion by talking about a magician pulling a rabbit out of a hat.

The issue of AI that engenders biases and discriminatory practices is obviously a lot more serious than the entertainment value of watching a magic act. We are increasingly having AI rolled out and become ubiquitous throughout society. Much of the AI is not transparent in terms of how it works.

On an individual basis, we might not have any ready means of knowing whether the AI is acting in a biased way against us individually. A simulation can help to potentially discover and expose that a targeted AI does have, or appears to be heading in the direction of, unsavory undue biases. I’ll end the discussion for now with a classic quote by Abraham Lincoln: “You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.” It could be that by using simulations of potential AI Ethics violations, we can catch targeted AI that is performing AI Ethics violations. It is a handy means to try and assure that AI cannot fool all the people all of the time.



From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/06/27/using-simulations-of-alleged-ethics-violations-to-ardently-and-legally-nail-those-biased-ai-ethics-transgressors-amid-fully-autonomous-systems/
