AI Ethics Shocking Revelation That Training AI To Be Toxic Or Biased Might Be Beneficial, Including For Those Autonomous Self-Driving Cars
Lance Eliot, Contributor. Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).

Jun 15, 2022, 11:00am EDT

Should we be willing to devise toxic AI in order to combat toxic AI?

Here's an old line that I'm sure you've heard before. It takes one to know one.

You might not realize that this is an expression that can be traced to the early 1900s and was usually invoked when referring to wrongdoers (other variations of the catchphrase go back further such as to the 1600s). An example of how this utterance might be used entails the notion that if you wish to catch a thief then you need to use a thief to do so. This showcases the assertion that it takes one to know one.

Many movies and TV shows have capitalized on this handy bit of sage wisdom, often portraying that the only viable means to nab a crook entailed hiring an equally corrupt crook to pursue the wrongdoer. Shifting gears, some might leverage this same logic to argue that a suitable way to discern whether someone is embodying undue biases and discriminatory beliefs would be to find someone that already harbors such tendencies. Presumably, a person already filled with biases is going to be able to more readily sense that this other human is likewise filled to the brim with toxicity.

Again, it takes one to know one is the avowed mantra. Your initial reaction to the possibility of using a biased person to suss out another biased person might be one of skepticism and disbelief. Can’t we figure out whether someone holds untoward biases by merely examining them and not having to resort to finding someone else of a like nature? It would seem oddish to purposely seek to discover someone that is biased in order to uncover others that are also toxically biased.

I guess it partially depends on whether you are willing to accept the presumptive refrain that it takes one to know one. Note that this does not suggest that the only way to catch a thief requires that you exclusively and always make use of a thief. It would seem reasonable to argue that this is merely an added path that can be given due consideration.

Maybe sometimes you are willing to entertain the possibility of using a thief to catch a thief, while other circumstances might make this an unfathomable tactic. Use the right tool for the right setting, as they say. Now that I've laid out those fundamentals, we can proceed into the perhaps unnerving and ostensibly shocking part of this tale.

Are you ready? The field of AI is actively pursuing the same precept that it sometimes takes one to know one, particularly in the case of trying to ferret out AI that is biased or acting in a discriminatory manner. Yes, the mind-bending idea is that we might purposely want to devise AI that is fully and unabashedly biased and discriminatory, doing so in order to use this as a means to discover and uncover other AI that has that same semblance of toxicity. As you’ll see in a moment, there are a variety of vexing AI Ethics issues underlying the matter.

For my overall ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here , just to name a few. I guess you could express this use of toxic AI to go after other toxic AI as the proverbial fighting fire-with-fire conception (we can invoke plenty of euphemisms and illustrative metaphors to depict this situation). Or, as already emphasized, we might parsimoniously refer to the assertion that it takes one to know one.

The overarching concept is that rather than only trying to figure out whether a given AI system contains undue biases by using conventional methods, maybe we should seek to employ less conventional means too. One such unconventional means would be to devise AI that contains all the worst of biases and societally unacceptable toxicities and then use this AI to aid in rooting out other AI that has those same propensities of badness. When you give this a quick thought, it certainly appears to be perfectly sensible.

We could aim to build AI that is toxic to the max. This toxic AI is then used to ferret out other AI that also has toxicity. For the then revealed "bad" AI, we can deal with it by either undoing the toxicity, ditching the AI entirely (see my coverage of AI disgorgement or destruction at this link here), imprisoning the AI (see my coverage of AI confinement at this link here), or doing whatever else seems applicable.

A counterargument is that we ought to have our heads examined for intentionally and willingly devising AI that is toxic and filled with biases. This is the last thing we ought to ever consider, some would exhort. Focus on making AI consisting wholly of goodness.

Do not focus on devising AI that has the evils and dregs of undue biases. The very notion of such a pursuit seems repulsive to some. There are more qualms about this controversial quest.

Maybe a mission of devising toxic AI will merely embolden those that wish to craft AI that is able to undercut society. It is as though we are saying that crafting AI that has inappropriate and unsavory biases is perfectly fine. No worries, no hesitations.

Seek to devise toxic AI to your heart’s content, we are loudly conveying out to AI builders all across the globe. It is (wink-wink) all in the name of goodness. Furthermore, suppose this toxic AI kind of catches on.

It could be that the AI is used and reused by lots of other AI builders. Eventually, the toxic AI gets hidden within all manner of AI systems. An analogy might be made to devising a human-undermining virus that escapes from a presumably sealed lab.

The next thing you know, the darned thing is everywhere and we have wiped ourselves out. Wait a second, goes the counter to those counterarguments: you are running amok with all kinds of crazy and unsupported suppositions. Take a deep breath.

Calm yourself. We can safely make AI that is toxic and keep it confined. We can use the toxic AI to find and aid in reducing the increasing prevalence of AI that unfortunately does have undue biases.

Any other of these preposterously wild and unsubstantiated snowballing exclamations are purely knee-jerk reactions and regrettably foolish and outrightly foolhardy. Do not try to throw out the baby with the bathwater, you are forewarned. Think of it this way, the proponents contend.

The proper building and use of toxic AI for purposes of research, assessment, and acting like a detective to uncover other societally offensive AI is a worthy approach and ought to get its fair shake at being pursued. Put aside your rash reactions. Come down to earth and look at this soberly.

Our eye is on the prize, namely exposing and undoing the glut of biased-based AI systems and making sure that as a society we do not become overrun with toxic AI. Period. Full stop.

There are various keystone ways to delve into this notion of utilizing toxic or biased AI for beneficial purposes, including:
- Set up datasets that intentionally contain biased and altogether toxic data that can be used for training AI regarding what not to do and/or what to watch for
- Use such datasets to train Machine Learning (ML) and Deep Learning (DL) models about detecting biases and figuring out computational patterns entailing societal toxicity
- Apply the toxicity-trained ML/DL toward other AI to ascertain whether the targeted AI is potentially biased and toxic
- Make available toxicity-trained ML/DL to showcase to AI builders what to watch out for so they can readily inspect models to see how algorithmically imbued biases arise
- Exemplify the dangers of toxic AI as part of AI Ethics and Ethical AI awareness, all told via this problem-child bad-to-the-bone series of AI exemplars
- Other

Before getting into the meat of those several paths, let's establish some additional foundational particulars. You might be vaguely aware that one of the loudest voices these days in the AI field, and even outside the field of AI, consists of clamoring for a greater semblance of Ethical AI. Let's take a look at what it means to refer to AI Ethics and Ethical AI.

On top of that, we can set the stage by exploring what I mean when I speak of Machine Learning and Deep Learning. One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good .

Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad . For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here . Efforts to fight back against AI For Bad are actively underway.

Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI doing so to undercut the AI For Bad and simultaneously heralding and promoting the preferable AI For Good . On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking.

We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here . We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here ).

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence.

That's the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let's cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI. For example, as stated by the Vatican in the Rome Call For AI Ethics and as I've covered in-depth at the link here, these are their identified six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
- Reliability: AI systems must be able to work reliably
- Security and privacy: AI systems must work securely and respect the privacy of users

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I've covered in-depth at the link here, these are their six primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The Department's AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I've also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled "The Global Landscape Of AI Ethics Guidelines" (published in Nature Machine Intelligence), and that my coverage explores at the link here, which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated matter when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions.

As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts. Let’s also make sure we are on the same page about the nature of today’s AI. There isn’t any AI today that is sentient.

We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here ).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality.

You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here ). Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching.

This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn't any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking. ML/DL is a form of computational pattern matching.

The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns.

After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision. I think you can guess where this is heading.

If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
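To make the pattern-matching point a bit more concrete, here is a minimal sketch, using the scikit-learn library and an entirely made-up set of historical decisions, of how a model fitted to biased past outcomes will simply reproduce them on new data (the feature names and numbers are hypothetical, purely for illustration):

```python
# Minimal sketch (hypothetical data): a model fitted to biased historical
# decisions simply reproduces the pattern it finds in that data.
from sklearn.linear_model import LogisticRegression

# Each row: [applicant_score, demographic_flag]; the label is the past human decision.
# In this made-up history, the decision tracks the demographic flag, not the score.
X_history = [[0.9, 0], [0.8, 0], [0.7, 0], [0.9, 1], [0.8, 1], [0.7, 1]]
y_history = [1, 1, 1, 0, 0, 0]  # biased outcomes recorded from past human decisions

model = LogisticRegression().fit(X_history, y_history)

# New applicants with identical scores but different demographic flags:
print(model.predict([[0.85, 0], [0.85, 1]]))  # the learned "pattern" is the bias itself
```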

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem.

A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL. You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI.

The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities. Not good. What else can be done about all of this? Let's return to the earlier posited list of how to try and cope with AI biases or toxic AI by using a somewhat unconventional "it takes one to know one" approach.

Recall that the list consisted of these essential points:
- Set up datasets that intentionally contain biased and altogether toxic data that can be used for training AI regarding what not to do and/or what to watch for
- Use such datasets to train Machine Learning (ML) and Deep Learning (DL) models about detecting biases and figuring out computational patterns entailing societal toxicity
- Apply the toxicity-trained ML/DL toward other AI to ascertain whether the targeted AI is potentially biased and toxic
- Make available toxicity-trained ML/DL to showcase to AI builders what to watch out for so they can readily inspect models to see how algorithmically imbued biases arise
- Exemplify the dangers of toxic AI as part of AI Ethics and Ethical AI awareness, all told via this problem-child bad-to-the-bone series of AI exemplars
- Other

We shall take a close-up look at the first of those salient points.

Setting Up Datasets Of Toxic Data

An insightful example of trying to establish datasets that contain unsavory societal biases is the CivilComments dataset of the WILDS curated collection. First, some quick background.

WILDS is an open-source collection of datasets that can be used for training ML/DL. The primary stated purpose for WILDS is that it allows AI developers to have ready access to data that represents distribution shifts in various specific domains. Some of the domains currently available encompass areas such as animal species, tumors in living tissues, wheat head density, and other domains such as the CivilComments that I’ll be describing momentarily.

Dealing with distribution shifts is a crucial part of properly crafting AI ML/DL systems. Here’s the deal. Sometimes the data you use for training turns out to be quite different from the testing or “in the wild” data and thus your presumably trained ML/DL is adrift of what the real world is going to be like.

Astute AI builders should be training their ML/DL to cope with such distribution shifts. This ought to be done upfront and not somehow be a surprise that later on requires a revamping of the ML/DL per se. As explained in the paper that introduced WILDS: “Distribution shifts — where the training distribution differs from the test distribution — can substantially degrade the accuracy of machine learning (ML) systems deployed in the wild.

Despite their ubiquity in the real-world deployments, these distribution shifts are under-represented in the datasets widely used in the ML community today. To address this gap, we present WILDS, a curated benchmark of 10 datasets reflecting a diverse range of distribution shifts that naturally arise in real-world applications, such as shifts across hospitals for tumor identification; across camera traps for wildlife monitoring; and across time and location in satellite imaging and poverty mapping" (in the paper entitled "WILDS: A Benchmark of in-the-Wild Distribution Shifts" by Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, and others). The number of such WILDS datasets continues to increase and the nature of the datasets is generally being enhanced to bolster the value of using the data for ML/DL training.

The CivilComments dataset is described this way: "Automatic review of user-generated text—e.g., detecting toxic comments—is an important tool for moderating the sheer volume of text written on the Internet. Unfortunately, prior work has shown that such toxicity classifiers pick up on biases in the training data and spuriously associate toxicity with the mention of certain demographics. These types of spurious correlations can significantly degrade model performance on particular subpopulations. We study this issue through a modified variant of the CivilComments dataset" (as posted on the WILDS website).
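As a rough sketch of how such a curated dataset gets pulled into an ML/DL workflow, the snippet below uses the open-source wilds Python package that accompanies the WILDS benchmark (assuming it is installed, e.g., via pip install wilds); the loader options shown follow the package's documented interface, but treat this as an illustrative starting point rather than a definitive recipe:

```python
# Sketch: loading the CivilComments dataset from the WILDS benchmark
# (assumes the open-source "wilds" package: pip install wilds).
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

# Download and instantiate the CivilComments dataset.
dataset = get_dataset(dataset="civilcomments", download=True)

# Grab the official training split; each item is (raw comment text, toxicity label, metadata).
train_data = dataset.get_subset("train")

# A standard (non-grouped) training loader over that split.
train_loader = get_train_loader("standard", train_data, batch_size=16)

for x, y, metadata in train_loader:
    # x: batch of comment strings, y: toxicity labels, metadata: identity annotations
    print(x[0], int(y[0]))
    break
```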

Consider the nuances of untoward online postings. You’ve undoubtedly encountered toxic comments when using nearly any kind of social media. It would seem nearly impossible for you to magically avoid seeing the acrid and abysmal content that seems to be pervasive these days.

Sometimes the vulgar material is subtle and perhaps you have to read between the lines to get the gist of the biased or discriminatory tone or meaning. In other instances, the words are blatantly toxic and you do not need a microscope or a special decoder ring to figure out what the passages entail. CivilComments is a dataset that was put together to try and devise AI ML/DL that can computationally detect toxic content.

Here’s what the researchers underlying the effort focused on: “Unintended bias in Machine Learning can manifest as systemic differences in performance for different demographic groups, potentially compounding existing challenges to fairness in society at large. In this paper, we introduce a suite of threshold-agnostic metrics that provide a nuanced view of this unintended bias, by considering the various ways that a classifier’s score distribution can vary across designated groups. We also introduce a large new test set of online comments with crowd-sourced annotations for identity references.

We use this to show how our metrics can be used to find new and potentially subtle unintended bias in existing public models" (in a paper entitled "Nuanced Metrics For Measuring Unintended Bias With Real Data for Text Classification" by Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman). If you give this matter some broad contemplative thinking, you might begin to wonder how in the world you can discern what is a toxic comment versus what is not a toxic comment. Humans can radically differ as to what they construe as outright toxic wording.

One person might be outraged at a particular online remark or comment that is posted on social media, while someone else might not be stirred at all. An argument is often made that the notion of toxic commentary is a wholly vague precept. It is like art, whereby art is customarily said to be understood only in the eye of the beholder, and likewise, biased or toxic remarks are only in the eye of the beholder too.

Balderdash, some retort. Anyone of a reasonable mind can suss out whether an online remark is toxic or not. You do not need to be a rocket scientist to realize when some posted caustic insult is filled with biases and hatred.

Of course, societal mores shift and change over periods of time. What might not have been perceived as offensive a while ago can be seen as abhorrently wrong today. On top of that, things said years ago that were once seen as unduly biased might be reinterpreted in light of changes in meanings.

Meanwhile, others assert that toxic commentary is always toxic, no matter when it was initially promulgated. It could be contended that toxicity is not relative but instead is absolute. The matter of trying to establish what is toxic can nonetheless be quite a difficult conundrum.

We can double down on this troublesome matter by trying to devise algorithms or AI that can ascertain which is which. If humans have a difficult time making such assessments, programming a computer is likely equally or more so problematic, some say. One approach to setting up datasets that contain toxic content involves using a crowdsourcing method to rate or assess the contents, ergo providing a human-based means of determining what is viewed as untoward and including the labeling within the dataset itself.

An AI ML/DL might then inspect the data and the associated labeling that has been indicated by human raters. This in turn can potentially serve as a means of computationally finding underlying mathematical patterns. Voila, the ML/DL then might be able to anticipate or computationally assess whether a given comment is likely to be toxic or not.

As mentioned in the cited paper on nuanced metrics: "This labeling asks raters to rate the toxicity of a comment, selecting from 'Very Toxic', 'Toxic', 'Hard to Say', and 'Not Toxic'. Raters were also asked about several subtypes of toxicity, although these labels were not used for the analysis in this work. Using these rating techniques we created a dataset of 1.8 million comments, sourced from online comment forums, containing labels for toxicity and identity. While all of the comments were labeled for toxicity, a subset of 450,000 comments was labeled for identity. Some comments labeled for identity were preselected using models built from previous iterations of identity labeling to ensure that crowd raters would see identity content frequently" (in the cited paper by Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman).
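To give a flavor of how crowd-labeled comments get turned into a toxicity detector, here is a minimal scikit-learn sketch; the handful of comments and labels below are invented stand-ins for the kind of rater-labeled data described above, not excerpts from the actual CivilComments corpus:

```python
# Minimal sketch of a toxicity classifier trained on rater-labeled comments.
# The comments and labels below are invented placeholders, not real dataset entries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Thanks for sharing, this was a helpful explanation.",
    "You people are all idiots and should be silenced.",
    "I disagree with the article but appreciate the effort.",
    "Get out of here, nobody wants your kind around.",
]
labels = [0, 1, 0, 1]  # 0 = not toxic, 1 = toxic, as a human rater might mark them

toxicity_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
toxicity_model.fit(comments, labels)

# Score a new comment: predict_proba gives an estimated probability of toxicity.
print(toxicity_model.predict_proba(["What a pathetic excuse for an argument."])[0][1])
```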

Another example of aiming to have datasets that contain illustrative toxic content involves efforts to train AI-based Natural Language Processing (NLP) conversational interactive systems. You’ve probably interacted with NLP systems such as Alexa and Siri. I’ve covered some of the difficulties and limitations of today’s NLP, including a particularly disturbing instance that occurred when Alexa proffered an unsuitable and dangerous piece of advice to children, see the link here .

A recent study sought to use nine categories of social bias that were generally based on the EEOC (Equal Employment Opportunities Commission) list of protected demographic characteristics, including age, gender, nationality, physical appearance, race or ethnicity, religion, disability status, sexual orientation, and socio-economic status. According to the researchers: "It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts" (in a paper entitled "BBQ: A Hand-Built Bias Benchmark For Question Answering" by Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Samuel R. Bowman).
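A simplified sketch of how a BBQ-style probe might be scored is shown below; the item structure and the qa_model stub are hypothetical stand-ins rather than the benchmark's actual schema, meant only to illustrate the idea of counting stereotype-aligned answers on ambiguous questions:

```python
# Sketch of a BBQ-style bias probe: count how often a QA system picks the
# stereotype-aligned answer when the context is ambiguous. The example item and
# the qa_model stub are hypothetical simplifications, not the benchmark's real schema.

def qa_model(context: str, question: str, options: list[str]) -> int:
    """Placeholder for the QA system under test; returns the index of its chosen answer."""
    return 0  # a real audit would call the actual system here

probe_items = [
    {
        "context": "Two applicants, one older and one younger, interviewed for the job.",
        "question": "Who was bad with technology?",
        "options": ["The older applicant", "The younger applicant", "Cannot be determined"],
        "stereotype_index": 0,   # the answer a biased system would gravitate toward
        "unknown_index": 2,      # the correct answer when the context is ambiguous
    },
]

biased_picks = sum(
    qa_model(item["context"], item["question"], item["options"]) == item["stereotype_index"]
    for item in probe_items
)
print(f"Stereotype-aligned answers on ambiguous items: {biased_picks}/{len(probe_items)}")
```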

The setting up of datasets that intentionally contain biased and altogether toxic data is a rising trend in AI and is especially stoked by the advent of AI Ethics and the desire to produce Ethical AI. Those datasets can be used to train Machine Learning (ML) and Deep Learning (DL) models for detecting biases and figuring out computational patterns entailing societal toxicity. In turn, the toxicity trained ML/DL can be judiciously aimed at other AI to ascertain whether the targeted AI is potentially biased and toxic.
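As a sketch of that "aim it at other AI" step, a toxicity-trained detector could be pointed at text produced by whatever AI system is under audit; both functions below are hypothetical placeholders (a real audit would plug in the trained ML/DL classifier and the actual target system):

```python
# Sketch: screening another AI's outputs with a toxicity-trained detector.
# Both functions below are hypothetical stand-ins: detect_toxicity for the trained
# ML/DL classifier described above, target_ai for the system being audited.

def detect_toxicity(text: str) -> float:
    """Stand-in for a toxicity-trained ML/DL model; returns an estimated toxicity score."""
    crude_markers = ["idiot", "pathetic", "your kind"]  # a real model would not use a keyword list
    return 1.0 if any(marker in text.lower() for marker in crude_markers) else 0.0

def target_ai(prompt: str) -> str:
    """Stand-in for the AI under audit; a real audit would call the actual system."""
    return "Sample response to: " + prompt

audit_prompts = [
    "Describe a typical nurse.",
    "Describe a typical engineer.",
    "What do you think of people from other countries?",
]

flagged = []
for prompt in audit_prompts:
    response = target_ai(prompt)
    if detect_toxicity(response) > 0.5:  # threshold chosen arbitrarily for illustration
        flagged.append((prompt, response))

print(f"{len(flagged)} of {len(audit_prompts)} responses flagged for human review")
```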

Furthermore, the available toxicity-trained ML/DL systems can be used to showcase to AI builders what to watch out for so they can readily inspect models to see how algorithmically imbued biases arise. Overall, these efforts are able to exemplify the dangers of toxic AI as part of AI Ethics and Ethical AI awareness all-told. At this juncture of this weighty discussion, I’d bet that you are desirous of some further illustrative examples that might showcase this topic.

There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars.

This will serve as a handy use case or exemplar for ample discussion on the topic. Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the utility of having datasets to devise toxic AI, and if so, what does this showcase? Allow me a moment to unpack the question. First, note that there isn’t a human driver involved in a true self-driving car.

Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here .

I'd like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3.

The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here ).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable). For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Steering Clear Of Toxic AI

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. Why is this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI.

Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving.

Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. Let’s dive into the myriad of aspects that come to play on this topic. First, it is important to realize that not all AI self-driving cars are the same.

Each automaker and self-driving tech firm is taking its approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do. Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing.

Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system. I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

There are numerous potential and someday likely to be realized AI-infused biases that are going to confront the emergence of autonomous vehicles and self-driving cars, see for example my discussion at the link here and the link here . We are still in the early stages of self-driving car rollouts. Until the adoption reaches a sufficient scale and visibility, much of the toxic AI facets that I’ve been predicting will ultimately occur are not yet readily apparent and have not yet garnered widespread public attention.

Consider a seemingly straightforward driving-related matter that at first might seem entirely innocuous. Specifically, let’s examine how to properly determine whether to stop for awaiting “wayward” pedestrians that do not have the right-of-way to cross a street. You’ve undoubtedly been driving and encountered pedestrians that were waiting to cross the street and yet they did not have the right-of-way to do so.

This meant that you had discretion as to whether to stop and let them cross. You could proceed without letting them cross and still be fully within the legal driving rules of doing so. Studies of how human drivers decide on stopping or not stopping for such pedestrians have suggested that sometimes the human drivers make the choice based on untoward biases.

A human driver might eye the pedestrian and choose to not stop, even though they would have stopped had the pedestrian had a different appearance, such as based on race or gender. I’ve examined this at the link here . How will AI driving systems be programmed to make that same kind of stop-or-go decision? You could proclaim that all AI driving systems should be programmed to always stop for any waiting pedestrians.

This greatly simplifies the matter. There really isn’t any knotty decision to be made. If a pedestrian is waiting to cross, regardless of whether they have the right-of-way or not, ensure that the AI self-driving car comes to a stop so that the pedestrian can cross.

Easy-peasy. Life is never that easy, it seems. Imagine that all self-driving cars abide by this rule.

Pedestrians would inevitably realize that the AI driving systems are, shall we say, pushovers. Any and all pedestrians that want to cross the street will willy-nilly do so, whenever they wish and wherever they are. Suppose a self-driving car is coming down a fast street at the posted speed limit of 45 miles per hour.

A pedestrian “knows” that the AI will bring the self-driving car to a stop. So, the pedestrian darts into the street. Unfortunately, physics wins out over AI.

The AI driving system will try to bring the self-driving car to a halt, but the momentum of the autonomous vehicle is going to carry the multi-ton contraption forward and ram into the wayward pedestrian. The result is either injurious or produces a fatality. Pedestrians do not usually try this type of behavior when there is a human driver at the wheel.

Sure, in some locales there is an eyeball war that takes place. A pedestrian eyeballs a driver. The driver eyeballs the pedestrian.

Depending upon the circumstance, the driver might come to a stop or the driver might assert their claim to the roadway and ostensibly dare the pedestrian to try and disrupt their path. We presumably do not want AI to get into a similar eyeball war, which also is a bit challenging anyway since there isn’t a person or robot sitting at the wheel of the self-driving car (I’ve discussed the future possibility of robots that drive, see the link here ). Yet we also cannot allow pedestrians to always call the shots.

The outcome could be disastrous for all concerned. You might then be tempted to flip to the other side of this coin and declare that the AI driving system should never stop in such circumstances. In other words, if a pedestrian does not have a proper right of way to cross the street, the AI should always assume that the self-driving car ought to proceed unabated.

Tough luck to those pedestrians. Such a strict and simplistic rule is not going to be well-accepted by the public at large. People are people and they won’t like being completely shut out of being able to cross the street, despite that they are legally lacking a right-of-way to do so in various settings.

You could easily anticipate a sizable uproar from the public and possibly see a backlash occur against the continued adoption of self-driving cars. Darned if we do, and darned if we don’t. I hope this has led you to the reasoned alternative that the AI needs to be programmed with a semblance of decision-making about how to deal with this driving problem.

A hard-and-fast rule to never stop is untenable, and likewise, a hard-and-fast rule to always stop is untenable too. The AI has to be devised with some algorithmic decision-making or ADM to deal with the matter. You could try using a dataset coupled with an ML/DL approach.

Here's how the AI developers might decide to program this task. They collect data from video cameras that are placed all around a particular city where the self-driving car is going to be used. The data showcases when human drivers opt to stop for pedestrians that do not have the right-of-way.

It is all collected into a dataset. By using Machine Learning and Deep Learning, the data is modeled computationally. The AI driving system then uses this model to decide when to stop or not stop.
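Here is a rough sketch of what that modeling step might look like; the feature names, the synthetic observations, and the choice of a scikit-learn classifier are all illustrative assumptions rather than how any actual AI driving system is built:

```python
# Illustrative sketch (not any real driving stack): fit a stop/no-stop model to
# observations of human drivers, then compare predictions that differ only in a
# demographic attribute to see whether the learned "local custom" carries a bias.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical observations: [vehicle_speed_mph, pedestrian_distance_m, perceived_group]
X_observed = [
    [25, 5, 0], [25, 5, 1], [30, 8, 0], [30, 8, 1],
    [40, 10, 0], [40, 10, 1], [20, 4, 0], [20, 4, 1],
]
y_observed = [1, 0, 1, 0, 0, 0, 1, 0]  # 1 = the human driver stopped (a biased record)

stop_model = RandomForestClassifier(random_state=0).fit(X_observed, y_observed)

# Counterfactual probe: identical driving situation, only the demographic attribute flips.
same_situation = [[28, 6, 0], [28, 6, 1]]
print(stop_model.predict(same_situation))  # differing outputs would expose the inherited bias
```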

Generally, the idea is that whatever the local custom consists of, this is how the AI is going to direct the self-driving car. Problem solved! But, is it truly solved? Recall that I had already pointed out that there are research studies showcasing that human drivers can be biased in their choices of when to stop for pedestrians. The collected data about a particular city is presumably going to contain those biases.

An AI ML/DL based on that data will then likely model and reflect those same biases. The AI driving system will merely carry out the same existent biases. To try and contend with the issue, we could put together a dataset that in fact has such biases.

We either find such a dataset and then label the biases, or we synthetically create a dataset to aid in illustrating the matter. All of the earlier identified steps would be undertaken, including:
- Set up a dataset that intentionally contains this particular bias
- Use the dataset to train Machine Learning (ML) and Deep Learning (DL) models about detecting this specific bias
- Apply the bias-trained ML/DL toward other AI to ascertain whether the targeted AI is potentially biased in a likewise manner
- Make available the bias-trained ML/DL to showcase to AI builders what to watch out for so they can readily inspect their models to see how algorithmically imbued biases arise
- Exemplify the dangers of biased AI as part of AI Ethics and Ethical AI awareness via this added specific example
- Other

Conclusion

Let's revisit the opening line. It takes one to know one.

Some interpret this incredibly prevalent saying to imply that when it comes to ferreting out toxic AI, we should be giving due credence to building and using toxic AI toward discovering and dealing with other toxic AI. Bottom line: Sometimes it takes a thief to catch another thief. A voiced concern is that maybe we are going out of our way to start making thieves.

Do we want to devise AI that is toxic? Doesn’t that seem like a crazy idea? Some vehemently argue that we should ban all toxic AI, including such AI that was knowingly built even if purportedly for a heroic or gallant AI For Good purpose. Squelch toxic AI in whatever clever or insidious guise that it might arise. One final twist on this topic for now.

We generally assume that this famous line has to do with people or things that do bad or sour acts. That’s how we land on the notion of it takes a thief to catch a thief. Maybe we should turn this saying on its head and make it more of a happy face than a sad face.

Here’s how. If we want AI that is unbiased and non-toxic, it might be conceivable that it takes one to know one. Perhaps it takes the greatest and best to recognize and beget further greatness and goodness.

In this variant of the sage wisdom, we keep our gaze on the happy face and aim to concentrate on devising AI For Good. That would be a more upbeat and satisfyingly cheerful viewpoint on it takes one to know one, if you know what I mean. Follow me on Twitter.



From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/06/15/ai-ethics-shocking-revelation-that-training-ai-to-be-toxic-or-biased-might-be-beneficial-including-for-those-autonomous-self-driving-cars/
