AI Ethics Saying That AI Should Be Especially Deployed When Human Biases Are Aplenty

By Lance Eliot, Contributor. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).

Sep 12, 2022

[Image caption: Trying to overcome human untoward biases by replacing them with AI is not as straightforward as it might seem.]
Humans have got to know their limitations. You might recall a similar famous line about knowing our limitations as grittily uttered by the character Dirty Harry in the 1973 movie Magnum Force (per the spoken words of actor Clint Eastwood in his memorable role as Inspector Harry Callahan). The overall notion is that sometimes we tend to overlook our own limits and get ourselves into hot water accordingly.
Whether due to hubris, egocentrism, or simply being blind to our own capabilities, the precept of being aware of and explicitly taking into account our proclivities and shortcomings is abundantly sensible and helpful. Let’s add a new twist to that sage piece of advice. Artificial Intelligence (AI) has got to know its limitations.
What do I mean by that variant of the venerated catchphrase? Turns out that the initial rush to get modern-day AI into use as a hopeful solver of the world’s problems has become sullied and altogether muddied by the realization that today’s AI does have some rather severe limitations. We went from the uplifting headlines of AI For Good and have increasingly found ourselves mired in AI For Bad . You see, many AI systems have been developed and fielded with all sorts of untoward racial and gender biases, and a myriad of other such appalling inequities.
For my extensive and ongoing coverage of AI Ethics and Ethical AI, see the link here and the link here , just to name a few. The biases being discovered in these AI systems are not of the shall we say “intentional” type that we would ascribe to human behavior. I mention this to emphasize that today’s AI is not sentient.
Despite those blaring headlines that suggest otherwise, there just isn’t any AI anywhere that even comes close to sentience. On top of that, we don’t know how to get AI into the sentience bracket, plus nobody can say for sure whether we will ever attain AI sentience. Maybe it will someday happen, or maybe not.
So, my point is that we cannot particularly assign intention to the kind of AI that we currently possess. That being said, we can abundantly assign intention to those that are crafting AI systems. Some AI developers are unaware of the fact that they have devised an AI system that contains unsavory and possibly illegal biases.
Meanwhile, other AI developers realize they are imbuing biases into their AI systems, potentially doing so in a purposeful wrongdoing manner. Either way, the outcome is nonetheless still unseemly and likely unlawful. Strident efforts are underway to promulgate AI Ethics principles that will enlighten AI developers and provide suitable guidance for steering clear of embedding biases into their AI systems.
This will help in a twofer fashion. First, those crafting AI will no longer have the ready excuse that they just weren’t cognizant of what precepts should be followed. Second, those that veer from the Ethical AI conditions are going to be more readily caught and shown as flouting what they were forewarned both to do and not to do.
Let’s take a moment to briefly consider some of the key Ethical AI precepts to illustrate what AI builders ought to be thinking about and rigorously undertaking from an AI Ethics stance.

As stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

- Transparency: In principle, AI systems must be explainable
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
- Reliability: AI systems must be able to work reliably
- Security and privacy: AI systems must work securely and respect the privacy of users

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated matter when the AI coding has to be the veritable rubber that meets the road.
The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life-cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions.
Please be aware that it takes a village to devise and field AI, and the entire village has to keep on its toes about AI Ethics. Anyway, now that I’ve put onto the table that AI can contain biases, we can perhaps all agree to these two apparent facts:

1. Humans can have numerous untoward biases and can act upon them
2. AI can have numerous untoward biases and can act upon those biases

I am somewhat loath to stack humans versus AI in that context since it might somehow imply that AI has sentient capacities on par with humans. This is assuredly not so.
I will return to the mounting concerns about the anthropomorphizing of AI a little later in this discussion. Which is worse, humans that exhibit untoward biases or AI that does so? I dare say that the question poses one of those dour choices. It is the proverbial lesser of two evils, one might contend.
We would wish that humans did not embody untoward biases. We would further wish that even if humans do have untoward biases they won’t act upon those biases. The same could be aptly said of AI.
We would wish that AI did not embed untoward biases, and that even if there are such internally coded biases, the AI would at least not act upon them. Wishes, though, do not necessarily run the world (for my analysis of the rising and disturbing semblance of so-called AI Wish Fulfillment by society at large, see the link here). Okay, we obviously want humans to know their limitations.
There is an importance to recognizing when you have untoward biases. There is equal importance in trying to prevent those untoward biases from being infused into your actions and decisions. Businesses today are trying all kinds of approaches to keep their employees from falling into the dire pitfalls of untoward biases.
Specialized training is being given to employees about how to perform their work in ethically sound ways. Processes are shaped around employees to alert them when they seem to be exhibiting unethical mores. And so on.
Another means of coping with humans and their untoward biases would be to automate human-based work. Yes, simply remove the human from the loop. Do not allow a human to perform a decision-making task and you presumably no longer have any lingering worries about that human harboring any untoward biases.
There isn’t a human involved and thus the problem of potential human biases seems to be solved. I bring this up because we are witnessing a gradual and massive-sized shift toward using AI in an algorithmic decision-making (ADM) manner. If you can replace a human worker with AI, the odds are that a lot of benefits will arise.
As already mentioned, you would no longer fret about the human biases of that human worker (the one that is no longer doing that job). The chances are that the AI will be less costly overall when compared over a long-term time horizon. You dispense with all the other assorted difficulties that come part-and-parcel with human workers.
Etc. A proposition that is gaining ground seems to be this: When trying to decide where to best place AI, look first toward settings that already entail untoward human biases by your workers and for which those biases are undercutting or otherwise excessively complicating particular decision-making tasks. The bottom line is that it would seem prudent to garner the most bang for your buck in terms of investing in AI by aiming squarely at highly exposed human decision-making tasks that are tough to control from an untoward biases infusion perspective.
Remove the human workers in that role. Replace them with AI. The assumption is that AI would not have such untoward biases.
Therefore, you can have your cake and eat it too, namely, get the decision tasks undertaken and do so minus the ethical and legal specter of untoward biases. When you pencil that out, the ROI (return on investment) would likely make the adoption of AI a no-brainer choice. Here’s how that usually plays out.
Look throughout your firm and try to identify the decision-making tasks that impact customers. Of those tasks, which ones are most likely to be inappropriately swayed if the workers are embodying untoward biases? If you’ve already tried to rein in those biases, maybe you let things stand as is. On the other hand, if the biases keep reappearing and the effort to stamp them out is onerous, consider dropping some pertinent AI into that role.
Don’t keep the workers in the mix since they might override the AI or push the AI right back into the abyss of untoward biases. Also, make sure that the AI can perform the task proficiently and you have sufficiently captured the decision-making facets required to perform the job. Rinse and repeat.
I realize that seems like a straightforward notion, though do realize that there are lots of ways that the replacing of human workers with AI can readily go awry. Many companies were eager to take such actions and did not mindfully consider how to do so. As a result, they often made a much worse mess than they had on their hands to start with.
I want to clarify and accentuate that AI is not a panacea. Speaking of which, there is one huge hitch about the cleanliness of seemingly tossing out the human-biased decision-makers in favor of allegedly unbiased AI. The hitch is that you might merely be substituting one set of untoward biases for another.
Per the earlier indication, AI can contain untoward biases and can act upon those biases. The brazen assumption that swapping out biased humans for unbiased AI solves everything is not all it is cracked up to be. In short, here’s the deal when viewing the matter strictly from the bias factors:

- The AI has no untoward biases and ergo the AI-based ADM is handy to deploy
- The AI has the same untoward biases as the being-replaced humans and thusly the AI-based ADM is troubling
- The AI introduces new untoward biases beyond those of the being-replaced humans and will likely worsen things accordingly
- The AI at first seems fine and then gradually wobbles into untoward biases
- Other

We can briefly unpack those possibilities.
The first one is the idealized version of what might happen. The AI has no untoward biases. You put the AI into place and it does the job superbly.
Good for you! Of course, one would hope that you have also in some adroit way handled the displacement of human workers due to the AI inclusion. In the second case, you put in place the AI and discover that the AI is exhibiting the same untoward biases that the human workers had. How can this be? A common means of falling into this trap is using Machine Learning (ML) and Deep Learning (DL) trained on collected data of how the humans in the role were previously making their decisions.
Allow me a moment to explain. ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task.
You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data.
Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision. I think you can guess where this is heading. If the humans that have been doing the work for years upon years have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways.
The Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of the modeling per se. Furthermore, the AI developers might not realize what is going on either.
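To make that concrete, here is a minimal sketch, using synthetic data and hypothetical feature names, of how a model trained on historical human decisions will faithfully reproduce whatever bias is baked into those decisions. This is an illustration of the general point, not anyone’s actual system.

```python
# Minimal illustrative sketch (synthetic data, hypothetical feature names):
# a model trained on biased historical human decisions learns to mimic them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

qualification = rng.normal(size=n)        # a legitimate decision factor
group = rng.integers(0, 2, size=n)        # a protected attribute (0 or 1)

# Suppose the historical human decisions penalized group 1 regardless of merit.
human_decision = (qualification - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Computational pattern matching on the historical decisions, as described above.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, human_decision)

# Two "applicants" with identical qualifications but different group membership:
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])   # noticeably lower approval odds for group 1
```

Notice that the bias never appears in the code itself; it rides in on the labels, which is exactly why the developers can miss it.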
The arcane mathematics might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL.
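As one hedged illustration of what such testing can look like (a first-pass heuristic, by no means a sufficient audit), a simple check compares favorable-outcome rates across groups, in the spirit of the four-fifths rule used in some disparate-impact analyses. The names, data, and threshold below are placeholders.

```python
# Rough sketch of a first-pass bias check: compare favorable-outcome rates
# across groups. A ratio well below 1.0 warrants deeper investigation.
import numpy as np

def disparate_impact_ratio(predictions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lower group's favorable rate to the higher group's favorable rate."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else 1.0

# Made-up model outputs for ten cases, split across two groups.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
grp = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(preds, grp)
if ratio < 0.8:  # a commonly cited, but not definitive, threshold for concern
    print(f"Potential disparate impact detected: ratio = {ratio:.2f}")
```

Even a passing ratio does not certify fairness, which is part of why the testing is trickier than it seems.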
All told, you might end up back to square one. The same untoward biases of humans are now computationally mirrored in the AI system. You have not eradicated the biases.
Worse still, you might be less likely to realize that the AI has biases. In the case of humans, you might normally be on your guard that humans have untoward biases. This is a base expectation.
The use of AI can lull leaders into believing that automation has completely removed any kind of human bias. They are thusly setting themselves up to shoot themselves in the foot. They got rid of humans with seemingly known untoward biases, replaced them with AI that was thought to have no such biases, and yet have now put into use AI replete with the same biases already known to exist.
This can get things really cross-eyed. You might have removed other guardrails being used with the human workers that were established to detect and prevent the emergence of those already anticipated human biases. The AI now has free rein.
Nothing is in place to catch it before it acts up. The AI then could start leading you down a dour path of the vast accumulation of biased actions. And you are in the awkward and perhaps liable posture of once having known about the biases and now having allowed those biases to wreak havoc.
It is perhaps one thing to not have ever encountered any such untoward biases and then suddenly out of the blue the AI springs them. You might try to excuse this with the “who would have guessed” kind of distractor (not very convincingly, perhaps). But to have now set up AI that is doing the very same untoward biased actions as before, well, your excuses are getting thinner and lamer.
A twist on this entails the AI exhibiting untoward biases that hadn’t previously been encountered when the humans were doing the task. You could say that this is perhaps harder to have prevented since it consists of “new” biases that the firm hadn’t previously been on the lookout for. In the end, though, excuses might not provide you with much relief.
If the AI system has ventured into both unethical and unlawful territory, your goose might be cooked. One other facet to keep in mind is that the AI might start out just fine and then inch its way into untoward biases. This is especially likely when the use of Machine Learning or Deep Learning takes place on an ongoing basis to keep the AI up-to-date.
Whether the ML/DL is working in real-time or periodically doing updates, the attention should be on whether the AI is possibly ingesting data that now contains biases that previously were not present. Leaders that think they are getting a free lunch by waving a magic wand to replace biased human workers with AI are in for a very rude awakening. See my discussion about the importance of empowering leaders with the precepts of AI Ethics at the link here.
At this juncture of this discussion, I’d bet that you are desirous of some real-world examples that might showcase the conundrum of replacing (or not) human untoward biases with AI-based untoward biases. I’m glad you asked. There is a special and assuredly popular set of examples that are close to my heart.
You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about untoward biases in AI, and if so, what does this showcase? Allow me a moment to unpack the question. First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system.
There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here . I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend; see my coverage at this link here). Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI With Untoward Biases

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers. The AI is doing the driving. One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient.
In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. Why is this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI.
In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic. First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars.
As such, it is difficult to make sweeping statements about what AI driving systems will do or not do. Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended.
An existing limitation today might no longer exist in a future iteration or version of the system. I trust that provides a sufficient litany of caveats to underlie what I am about to relate. We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the exploration of AI and untoward biases.
Let’s use a readily straightforward example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car.
The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.
Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to emitting an expansive yawn of boredom upon witnessing those meandering self-driving cars. Probably the main reason right now that they might notice the autonomous vehicles is because of the irritation and exasperation factor.
The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars. That’s something we might all need to get accustomed to, rightly or wrongly.
Back to our tale. Turns out that two unseemly concerns start to arise about the otherwise innocuous and generally welcomed AI-based self-driving cars, specifically:

a. Where the AI is roaming the self-driving cars for picking up rides has become an anxious concern in the community at large

b. How the AI is treating awaiting pedestrians that do not have the right-of-way is also a rising issue

At first, the AI was roaming the self-driving cars throughout the entire town. Anybody that wanted to request a ride in the self-driving car had essentially an equal chance of hailing one. Gradually, the AI began to primarily keep the self-driving cars roaming in just one section of town.
This section was a greater money-maker and the AI system had been programmed to try and maximize revenues as part of the usage in the community. Community members in the impoverished parts of the town were less likely to be able to get a ride from a self-driving car. This was because the self-driving cars were further away and roaming in the higher revenue part of the locale.
When a request came in from a distant part of town, any request from a closer location that was likely in the “esteemed” part of town would get a higher priority. Eventually, the availability of getting a self-driving car in any place other than the richer part of town was nearly impossible, exasperatingly so for those that lived in those now resource-starved areas. You could assert that the AI pretty much landed on a form of proxy discrimination (also often referred to as indirect discrimination).
The AI wasn’t programmed to avoid those poorer neighborhoods. Instead, it “learned” to do so via the use of the ML/DL. The thing is, ridesharing human drivers were known for doing the same thing, though not necessarily exclusively due to the money-making angle.
Some of the ridesharing human drivers had an untoward bias about picking up riders in certain parts of the town. This was a somewhat known phenomenon and the city had put in place a monitoring approach to catch human drivers doing this. Human drivers could get in trouble for carrying out unsavory selection practices.
It was assumed that the AI would never fall into that same kind of quicksand. No specialized monitoring was set up to keep track of where the AI-based self-driving cars were going. Only after community members began to complain did the city leaders realize what was happening.
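For readers who want to see the mechanics of the proxy discrimination just described, here is a hypothetical sketch (invented zone names and figures, not any operator’s actual dispatch logic) of how a revenue-maximizing roaming rule can starve a low-revenue neighborhood without the neighborhood ever being an explicit input.

```python
# Hypothetical sketch: revenue maximization turns historical fares per pickup
# zone into a proxy for neighborhood, even though no demographic data is used.
from dataclasses import dataclass

@dataclass
class RideRequest:
    zone: str
    wait_minutes: float  # roughly, how long until a car could reach the rider

# Average fare per pickup zone, tallied (or learned) from past trips.
avg_fare_by_zone = {"downtown": 24.0, "uptown": 19.0, "eastside": 8.0}

def dispatch_priority(req: RideRequest) -> float:
    # Pure revenue logic: expected fare discounted by the time cost of serving it.
    return avg_fare_by_zone.get(req.zone, 0.0) / (1.0 + req.wait_minutes)

requests = [
    RideRequest("eastside", wait_minutes=2.0),   # nearby rider in a low-fare zone
    RideRequest("downtown", wait_minutes=6.0),   # distant rider in a high-fare zone
]
print(max(requests, key=dispatch_priority).zone)  # prints "downtown"
```

Run over and over, a rule like this keeps pulling the fleet toward the high-fare zones, which is precisely the roaming pattern the community noticed.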
For more on these types of citywide issues that autonomous vehicles and self-driving cars are going to present, see my coverage at this link here and which describes a Harvard-led study that I co-authored on the topic. This example of the roaming aspects of the AI-based self-driving cars illustrates the earlier indication that there can be situations entailing humans with untoward biases, for which controls are put in place, and that the AI replacing those human drivers is left scot-free. Unfortunately, the AI can then incrementally become mired in akin biases and do so without sufficient guardrails in place.
A second example involves the AI determining whether to stop for awaiting pedestrians that do not have the right-of-way to cross a street. You’ve undoubtedly been driving and encountered pedestrians that were waiting to cross the street and yet they did not have the right-of-way to do so. This meant that you had discretion as to whether to stop and let them cross.
You could proceed without letting them cross and still be fully within the legal driving rules of doing so. Studies of how human drivers decide on stopping or not stopping for such pedestrians have suggested that sometimes the human drivers make the choice based on untoward biases. A human driver might eye the pedestrian and choose to not stop, even though they would have stopped had the pedestrian had a different appearance, such as based on race or gender.
I’ve examined this at the link here . Imagine that the AI-based self-driving cars are programmed to deal with the question of whether to stop or not stop for pedestrians that do not have the right-of-way. Here’s how the AI developers decided to program this task.
They collected data from the town’s video cameras that are placed all around the city. The data showcases human drivers that stop for pedestrians that do not have the right-of-way and human drivers that do not stop. It is all collected into a large dataset.
By using Machine Learning and Deep Learning, the data is modeled computationally. The AI driving system then uses this model to decide when to stop or not stop. Generally, the idea is that whatever the local custom consists of, this is how the AI is going to direct the self-driving car.
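To illustrate, here is a hedged, simplified sketch of the kind of feature pipeline that could sit behind such a stop-or-not decision; the field names are hypothetical and this is not any automaker’s actual code. The point is that if appearance-derived attributes flow from the perception stack into the learned model, the model is free to key on them just as the human drivers in the training footage did.

```python
# Illustrative sketch of a stop/no-stop feature pipeline (hypothetical fields).
from typing import Dict

def build_features(pedestrian: Dict, scene: Dict, include_appearance: bool) -> Dict:
    features = {
        "pedestrian_wait_s": pedestrian["wait_s"],
        "time_to_crossing_s": scene["time_to_crossing_s"],
        "vehicle_speed_mps": scene["vehicle_speed_mps"],
    }
    if include_appearance:
        # These are exactly the attributes a bias review would flag.
        features["apparent_demographics"] = pedestrian["apparent_demographics"]
    return features

# One mitigation is to exclude appearance attributes before training, though
# proxies (clothing, location, time of day) can still leak similar signals.
ped = {"wait_s": 4.2, "apparent_demographics": "redacted"}
scn = {"time_to_crossing_s": 3.1, "vehicle_speed_mps": 8.5}
print(build_features(ped, scn, include_appearance=False))
```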
To the surprise of the city leaders and the residents, the AI was evidently opting to stop or not stop based on the appearance of the pedestrian, including their race and gender. The sensors of the self-driving car would scan the awaiting pedestrian, feed this data into the ML/DL model, and the model would emit to the AI whether to stop or continue. Lamentably, the town already had a lot of human driver biases in this regard and the AI was now mimicking the same.
The good news is that this raises an issue that almost no one had previously known to exist. The bad news was that since the AI was caught doing this, it got most of the blame. This example illustrates that an AI system might merely duplicate the already preexisting untoward biases of humans.
Conclusion

There is a multitude of ways to try and avoid devising AI that either out of the gate has untoward biases or that over time gleans biases. One approach involves ensuring that AI developers are aware of this happening and thereby keeping them on their toes to program the AI to avert the matter. Another avenue consists of having the AI self-monitor itself for unethical behaviors (see my discussion at the link here) and/or having another piece of AI that monitors other AI systems for potentially unethical behaviors (I’ve covered this at the link here).
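As a rough sketch of what that second avenue could look like in practice, consider a watchdog distinct from the decision-making AI; the thresholds, group labels, and favorable/unfavorable framing below are placeholders, not a recommendation.

```python
# Sketch of an "AI monitoring AI" watchdog: tally deployed decisions by group
# and raise a flag when favorable-decision rates diverge beyond a tolerance.
from collections import defaultdict

class BiasWatchdog:
    def __init__(self, tolerance: float = 0.1, min_samples: int = 100):
        self.tolerance = tolerance
        self.min_samples = min_samples
        self.counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]

    def record(self, group: str, favorable: bool) -> None:
        self.counts[group][0] += int(favorable)
        self.counts[group][1] += 1

    def divergence_detected(self) -> bool:
        rates = [fav / total for fav, total in self.counts.values()
                 if total >= self.min_samples]
        return len(rates) >= 2 and (max(rates) - min(rates)) > self.tolerance

watchdog = BiasWatchdog()
# In production, this would be fed by the deployed AI's live decision stream.
for _ in range(150):
    watchdog.record("group_a", favorable=True)
    watchdog.record("group_b", favorable=False)
if watchdog.divergence_detected():
    print("Alert: favorable-decision rates diverge across groups; trigger a human review.")
```

A monitor like this does not fix the bias; it merely refuses to let the bias accumulate silently, which speaks to the missing-guardrails failure described earlier.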
To recap, we need to realize that humans can have untoward biases and that somehow they need to know their limitations. Likewise, AI can have untoward biases, and somehow we need to know their limitations. For those of you that are avidly embracing AI Ethics, I’d like to end right now with another famous line that everyone must already know.
Namely, please continue to use and share the importance of Ethical AI. And by doing so, I’d cheekily say this: “Go ahead, make my day.”