
AI Ethics And AI Law Clarifying What In Fact Is Trustworthy AI

Lance Eliot, Contributor. Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).

Oct 16, 2022, 08:00am EDT

Will we be able to achieve trustworthy AI, and if so, how?

Trust is everything, so they say. The noted philosopher Lao Tzu said that those who do not trust enough will not be trusted. Ernest Hemingway, an esteemed novelist, stated that the best way to find out if you can trust somebody is by trusting them.

Meanwhile, it seems that trust is both precious and brittle. The trust that one has can collapse like a house of cards or suddenly burst like a popped balloon. The ancient Greek tragedian Sophocles asserted that trust dies but mistrust blossoms.

French philosopher and mathematician Descartes contended that it is prudent never to trust wholly those who have deceived us even once. Billionaire business investor extraordinaire Warren Buffett exhorted that it takes twenty years to build a trustworthy reputation and five minutes to ruin it. You might be surprised to know that all of these varied views and provocative opinions about trust are crucial to the advent of Artificial Intelligence (AI).

Yes, there is something keenly referred to as trustworthy AI that keeps getting a heck of a lot of attention these days, including handwringing catcalls from within the field of AI and also boisterous outbursts by those outside of the AI realm. The overall notion entails whether or not society is going to be willing to place trust in the likes of AI systems. Presumably, if society won’t or can’t trust AI, the odds are that AI systems will fail to get traction.

AI as we know it currently will get pushed aside and merely collect dust. Shockingly, AI could end up on the junk heap, relegated historically to nothing more than a desperately tried but spectacularly failed high-tech experiment. Any efforts to reinvigorate AI would potentially face a tremendous uphill battle and be stopped by all manner of objections and outright protests.

Ostensibly, due to a lack of trust in AI. Which shall it be, are we to trust in AI, or are we not to trust in AI? In essence, are we going to truly have trustworthy AI? Those are longstanding and unresolved questions.

Let’s unpack it.

AI Ethics And The Struggle For Trustworthy AI

The belief by many within AI is that the developers of AI systems can garner trust in AI by appropriately devising AI that is trustworthy. The essence is that you cannot hope to gain trust if AI isn’t seemingly trustworthy at the get-go.

By crafting AI systems in a manner that is perceived to be trustworthy there is a solid chance that people will accept AI and adopt AI uses. One qualm already nagging at this trustworthy AI consideration is that we might already be in a public trust deficit when it comes to AI. You could say that the AI we’ve already seen has dug a hole and been tossing asunder trust in massive quantities.

Thus, rather than starting at some sufficient base of trustworthiness, AI is going to have to astoundingly climb out of the deficit, clawing for each desired ounce of added trust that will be needed to convince people that AI is in fact trustworthy. Into this challenge comes AI Ethics and AI Law. AI Ethics and AI Law are struggling mightily with trying to figure out what it will take to make AI trustworthy.

Some suggest that there is a formula or ironclad laws that will get AI into the trustworthy heavens. Others indicate that it will take hard work and consistent and unrelenting adherence to AI Ethics and AI Law principles to get the vaunted trust of society. The contemporary enigma about trust in AI is not especially new per se.

You can easily go back to the late 1990s and trace the emergence of the desire for “trusted computing” in those days. This was a large-scale tech-industry effort to discern if computers all told could be made in a manner that would be construed as trustworthy by society. Key questions consisted of: Could computer hardware be made such that it was trustworthy? Could software be crafted such that it was trustworthy? Could we put in place global networked computers that would be trustworthy? And so on.

The prevailing sentiment then, and one that continues to this day, is that trustworthy computing remains a type of holy grail that regrettably is still not quite within our reach (as noted in a paper entitled “Trustworthy AI” in the Communications of the ACM). You could convincingly argue that AI is yet another component of the trustworthy computing envelope, yet AI makes the trust pursuit even more challenging and uncertain. AI has become the potential spoiler in the fight to attain trustworthy computing.

Possibly the weakest link in the chain, as it were. Let’s take a quick look at why AI has gotten our dander up about being less than trustworthy. In addition, we will explore the tenets of AI Ethics that it is hoped will aid in propping up the already semi-underwater perceived trust (or bubbling distrust) of today’s AI.

For my ongoing and extensive coverage of AI Ethics, see the link here and the link here , just to name a few. One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good .

Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad . For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here . Efforts to fight back against AI For Bad are actively underway.

Besides vociferous legal pursuits aimed at reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good. On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking.

We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here . We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here ).

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence.

That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s briefly cover some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

Transparency: In principle, AI systems must be explainable.
Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop.
Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency.
Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity.
Reliability: AI systems must be able to work reliably.
Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:

Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedures and documentation.
Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

Transparency
Justice & Fairness
Non-Maleficence
Responsibility
Privacy
Beneficence
Freedom & Autonomy
Trust
Sustainability
Dignity
Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions.

As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts. Let’s also make sure we are on the same page about the nature of today’s AI. There isn’t any AI today that is sentient.

We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here ).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality.

You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here ). Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching.

This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking. ML/DL is a form of computational pattern matching.

The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns.

After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision. I think you can guess where this is heading.

If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
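To make the mechanics a bit more concrete, here is a minimal sketch, assuming entirely made-up data and a simple scikit-learn classifier standing in for far more elaborate ML/DL, of how patterns mined from historical human decisions get applied to new cases:

```python
# Minimal sketch: pattern matching learns from historical decisions and then
# applies those learned patterns to new cases. The data here is made up.
from sklearn.linear_model import LogisticRegression

# Historical records: [income, years_at_job] and the past human decision (1 = approve).
X_history = [[45, 1], [80, 5], [30, 2], [95, 10], [50, 3], [28, 1]]
y_history = [0, 1, 0, 1, 1, 0]

model = LogisticRegression().fit(X_history, y_history)  # find mathematical patterns

# New applicants are judged by the patterns mined from the "old" data, which
# quietly carries along any untoward biases baked into those past decisions.
print(model.predict([[52, 2], [90, 8]]))
```

If the historical decisions skewed against some group, nothing in that fitting process knows to do otherwise.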

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem.

A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL. You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI.
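As a concrete though narrow illustration of the kind of check an AI developer might run, here is a minimal sketch, assuming the AI’s logged decisions are tagged with a hypothetical grouping attribute and using the common four-fifths heuristic as an arbitrary threshold:

```python
# Minimal sketch: flag one statistical symptom of bias in an AI's decisions.
# `decisions` is a list of (group, approved) pairs logged from the AI under test;
# the grouping attribute and the 0.8 threshold are illustrative assumptions.

from collections import defaultdict

def approval_rates(decisions):
    """Share of favorable outcomes per group."""
    stats = defaultdict(lambda: [0, 0])            # group -> [approved, total]
    for group, approved in decisions:
        stats[group][0] += int(approved)
        stats[group][1] += 1
    return {g: a / t for g, (a, t) in stats.items()}

def flag_disparate_impact(decisions, threshold=0.8):
    """Return True if the lowest group's rate falls below threshold x the highest."""
    rates = approval_rates(decisions)
    return min(rates.values()) < threshold * max(rates.values())
```

Passing such a narrow statistical check is no guarantee of fairness overall, which is partly why buried biases can survive even relatively extensive testing.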

The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities. Not good. Let’s tie this to the question about trustworthy AI. We certainly would not seem to be willing to trust AI that showcases adverse biases and discriminatory actions.

Our belief, in that case, would be that such AI is decidedly not trustworthy, thus we would lean toward actively distrusting the AI. Without going overboard on an anthropomorphic comparison (I’ll say more about AI anthropomorphizing in a moment), a human that exhibited untoward biases would also be subject to rating as not being particularly trustworthy.

Digging Into Trust And Trustworthiness

Maybe we ought to take a look at what we mean when asserting that we do or do not trust someone or something.

First, consider several everyday dictionary definitions of trust. Examples of what trust definitionally means are:

Assured reliance on the character, ability, strength, or truth of someone or something (Merriam-Webster online dictionary).
Reliance on the integrity, strength, ability, surety, etc., of a person or thing (Dictionary.com).
Firm belief in the reliability, truth, ability, or strength of someone or something (Oxford Languages online dictionary).

I’d like to point out that all of those definitions refer to “someone” and likewise refer to “something” as being potentially trustworthy.

This is notable since some might insist that we only trust humans and that the act of trusting is reserved exclusively for humankind as our target of trustworthiness. Not so. You can have trust in your kitchen toaster.

If it seems to reliably make your toast and works routinely to do so, you can assuredly have a semblance of trust about whether the toaster is in fact trustworthy. In that same line of thinking, AI can also be the subject of our trust viewpoint. The odds are that trust associated with AI is going to be a lot more complicated than say a mundane toaster.

A toaster can only usually do a handful of actions. An AI system is likely to be much more complex and appear to operate less transparently. Our ability to assess and ascertain the trustworthiness of AI is bound to be a lot harder and proffer distinct challenges.

Besides just being more complex, a typical AI system is said to be non-deterministic and potentially self-regulating or self-adjusting. We can briefly explore that notion. A deterministic machine tends to do the same things over and over again, predictably and with a viably discernable pattern of how it is operating.

You might say that a common toaster toasts roughly the same way and has toasting controls that moderate the toasting, all of which are generally predictable by the person using the toaster. In contrast, complex AI systems are often devised to be non-deterministic, meaning that they might do quite different things beyond what you might have otherwise expected. This could partially also be further amplified if the AI is written to self-adjust itself, an aspect that can advantageously allow the AI to improve in the case of ML/DL, though can also disturbingly cause the AI to falter or enter into the ranks of AI badness.

You might not know what hit you, in a manner of speaking, as you were caught entirely off-guard by the AI’s actions. What might we do to try and bring AI closer to trustworthiness? One approach consists of trying to ensure that those building and fielding AI are abiding by a set of AI Ethics precepts. As mentioned by these AI researchers: “Trust is an attitude that an agent will behave as expected and can be relied upon to reach its goal.

Trust breaks down after an error or misunderstanding between the agent and the trusting individual. The psychological state of trust in AI is an emergent property of a complex system, usually involving many cycles of design, training, deployment, measurement of performance, regulation, redesign, and retraining” (indicated in the Communications of the ACM , “Trust, Regulation, and Human-in-the-Loop AI Within the European Region” by Stuart Middleton, Emmanuel Letouze, Ali Hossaini, and Adriane Chapman, April 2022). The gist is that if we can get AI developers to abide by Ethical AI, they hopefully will end up producing trustworthy AI.

This is all well and good, but it seems somewhat impractical on a real-world basis, though it is absolutely a path worth pursuing. Here’s what I mean. Suppose a diligent effort is undertaken by AI developers crafting an AI system for some purpose that we’ll generally call X.

They carefully make sure that the AI abides by the transparency precepts of AI Ethics. They keenly ensure that privacy is suitably built into the AI. For nearly all of the usual AI Ethics principles, the AI builders exhaustively ensure that the AI meets the given precept.

Should you now trust that AI? Allow me to help percolate your thoughts on that open-ended question. Turns out that cyber crooks managed to infiltrate the AI and sneakily get the AI to perform X and yet also feed the cyber hackers all of the data that the AI is collecting. By doing so, these evildoers are insidiously undercutting the privacy precept.

You are blissfully unaware that this is happening under the hood of AI. With that added piece of information, I’ll ask you the same question again. Do you trust that AI? I dare say that most people would right away declare that they assuredly do not trust this particular AI.

They might have trusted it earlier. They now opt to no longer consider the AI trustworthy. A few key insights based on this simple example are worthy of contemplation:

Dynamics of Trust. Even the best of intentions to cover all the bases of ensuring that AI Ethics is built into an AI system are no guarantee of what the AI might turn out to be or become. Once the AI is placed into use, outsiders can potentially undermine the Ethical AI accruements.

Undercutting Trust From Within. The act of undercutting the trustworthiness doesn’t necessarily have to be outsiders. An insider that is doing regular upkeep to the AI system might blunder and weaken the AI toward being less trustworthy. This AI developer might be clueless about what they have wrought.

Inadvertent Compromises of Trust. A self-adjusting or self-regulating AI might at some point adjust itself and veer into untrustworthy territory. Perhaps the AI attempts to bolster the transparency of the AI and yet simultaneously and inappropriately compromises the privacy facets.

Scattering Of Trust. Trying to achieve all of the AI Ethics tenets to the same utmost degree of trustworthiness is usually not readily viable as they are often at cross-purposes or have other inherent potential conflicts. It is a rather idealized perspective to believe that all of the Ethical AI precepts are dreamily aligned and all attainable to some equal maximizable degree.

Trust Can Be Costly To Attain. The cost to try and achieve a topnotch semblance of trustworthy AI via undertaking the various extensive and exhaustive steps and abiding by the litany of AI Ethics principles is going to be relatively high. You can easily argue that the cost would be prohibitive in terms of getting some AI systems into use that otherwise have important value to society, even if the AI was, shall we say, less than ideal from a trustworthiness desire.

And so on. Do not misinterpret the preceding remarks to suggest that we should somehow forgo the effort to thoroughly build and field trustworthy AI. You would be summarily tossing out the baby with the bathwater, as it were.

The proper interpretation is that we do need to do those trusting activities to get AI into a trustworthy consideration, and yet that alone is not a cure-all or a silver bullet.

Multi-Prong Paths To Trustworthy AI

There are important additional multi-pronged ways to strive toward trustworthy AI. For example, as I’ve previously covered in my columns, a myriad of newly emerging laws and regulations concerning AI aim to drive AI makers toward devising trustworthy AI, see the link here and the link here.

These legal guardrails are crucial as an overarching means of making sure that those devising AI are held fully accountable for their AI. Without such potential legal remedies and lawful penalties, those who pell-mell rush AI into the marketplace are likely to continue doing so with little if any serious regard for achieving trustworthy AI. I might notably add that if those laws and regulations are poorly devised or inadequately implemented, they could regrettably undercut the pursuit of trustworthy AI, perhaps ironically and oddly fostering untrustworthy AI over trustworthy AI (see my column discussions for further explanation).

I have also been a staunch advocate for what I’ve been ardently referring to as AI guardian angel bots (see my coverage at the link here ). This is an upcoming method or approach of trying to fight fire with fire, namely using AI to aid us in dealing with other AI that might or might not be trustworthy. First, some background context will be useful.

Suppose you are opting to rely upon an AI system whose trustworthiness you are unsure of. A key concern could be that you are alone in your attempts to ferret out whether the AI is to be trusted or not. The AI is potentially computationally faster than you and can take advantage of you.

You need someone or something on your side to help out. One perspective is that there should always be a human-in-the-loop that will serve to aid you as you are making use of an AI system. This though is a problematic solution.

If the AI is working in real-time, which we’ll be discussing momentarily when it comes to the advent of AI-based self-driving cars, having a human-in-the-loop might not be sufficient. The AI could be acting in real-time and by the time a designated human-in-the-loop enters the picture to figure out if the AI is operating properly, a catastrophic result might have already occurred. As an aside, this brings up another factor about trust.

We usually assign a trust level based on the context or circumstance that we are facing. You might fully trust your toddler son or daughter to be faithful toward you, but if you are out hiking and decide to rely upon the toddler to tell you whether it is safe to step on the edge of a cliff, I think you would be wise to consider whether the toddler can provide that kind of life-or-death advice. The child might do so earnestly and sincerely, and nonetheless, be unable to adequately render such advice.

The same notion is associated with trust when it comes to AI. An AI system that you are using to play checkers or chess is probably not involved in any life-or-death deliberations. You can be at more ease with your assignment of trust.

An AI-based self-driving car that is barreling down a highway at high speeds requires a much more strenuous level of trust. The slightest blip by the AI driving system could lead directly to your death and the deaths of others. In a published interview, Beena Ammanath, Executive Director of the Global Deloitte AI Institute and author of the book Trustworthy AI, placed a similar emphasis on considering the contextual facets of where AI trustworthiness comes into play: “If you’re building an AI solution that is doing patient diagnosis, fairness and bias are super important.

But if you’re building an algorithm that predicts jet engine failure, fairness and bias isn’t as important. Trustworthy AI is really a structure to get you started to think about the dimensions of trust within your organization” ( VentureBeat , March 22, 2022). When discussing trustworthy AI, you can construe this topic in a multitude of ways.

For example, trustworthy AI is something that we all view as a desirable and aspirational goal, namely that we should be desirous of devising and promulgating trustworthy AI. There is another usage of the catchphrase. A somewhat alternative usage is that trustworthy AI is a state of condition or measurement, such that someone might assert that they have crafted an AI system that is an instance of trustworthy AI.

You can also use the phrase trustworthy AI to suggest a method or approach that can be used to attain AI trustworthiness. Etc. On a related note, I trust that you realize that not all AI is the same and that we have to be mindful of not making blanket statements about all of AI.

A particular AI system is likely to be significantly different from another AI system. One of those AI systems might be highly trustworthy, while the other might be marginally trustworthy. Be cautious in somehow assuming that AI is a monolith that is either entirely trustworthy or entirely not trustworthy.

This is simply not the case. I’d like to next briefly cover some of my ongoing research about trustworthy AI that you might find of interest, covering the arising role of AI guardian angel bots . Here’s how it goes.

You would be armed with an AI system (an AI guardian angel bot) that is devised to gauge the trustworthiness of some other AI system. The AI guardian angel bot has as a paramount focus your safety. Think of this as though you have the means to monitor the AI you are relying upon by having a different AI system in your veritable pocket, perhaps running on your smartphone or other such devices.

Your proverbial AI guardian can compute on a basis that the AI you are relying upon also does, working at rapid speeds and calculating the situation at hand in real-time, far faster than a human-in-the-loop could do so. You might at an initial glance be thinking that the AI you are already relying upon ought to have some internal AI guardrails that do the same as this separately calculating AI guardian angel bot. Yes, that would certainly be desired.

One qualm is that the AI guardrails built into an AI system might be integrally and prejudicially aligned with the AI per se, thus the supposed AI guardrail no longer is able to in a sense independently verify or validate the AI. The contrasting idea is that your AI guardian angel bot is an independent or third-party AI mechanism that is distinct from the AI that you are relying upon. It sits outside of the other AI, remaining devoted to you and not devoted to the AI being monitored or assessed.
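To sketch the general shape of that arrangement, here is a minimal illustration in Python; the class names and interfaces are hypothetical stand-ins devised solely for this discussion, not a depiction of any actual AI guardian angel bot implementation:

```python
# Minimal sketch of a third-party "guardian" that independently reviews another
# AI's outputs before you act on them. All names and interfaces are hypothetical.

from dataclasses import dataclass

@dataclass
class Assessment:
    trustworthy: bool
    reason: str

class GuardianAngelBot:
    def __init__(self, checks):
        # checks: independent functions of (request, response) -> Assessment,
        # deliberately not supplied by the AI that is being monitored
        self.checks = checks

    def review(self, request, response):
        for check in self.checks:
            verdict = check(request, response)
            if not verdict.trustworthy:
                return verdict               # alert: trust in the other AI is unwarranted
        return Assessment(True, "No issues detected by the guardian's checks.")

def rely_with_guardian(other_ai, guardian, request):
    """You rely on another AI to do a task; the guardian reviews it in real time."""
    response = other_ai.perform(request)     # hypothetical interface of the relied-upon AI
    verdict = guardian.review(request, response)
    if not verdict.trustworthy:
        raise RuntimeError(f"Guardian alert: {verdict.reason}")
    return response
```

The key design point mirrors the prose above: the guardian’s checks are kept separate from, and not beholden to, the AI being assessed.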

A straightforward means of thinking about this can be expressed via the following simplified equation-like statements. We might say that “P” wishes to potentially trust “R” to do a particular task “X”:

P trusts R to do task X.

This would be the following when only people are involved:

Person P trusts person R to do task X.

When we opt to rely upon AI, the statement reshapes to this:

Person P trusts AI instance-R to do task X.

We can add the AI guardian angel bot by saying this:

Person P trusts AI instance-R to do task X as being monitored by AI guardian angel bot instance-Z.

The AI guardian angel bot is tirelessly and relentlessly assessing the AI that you are relying upon. As such, your handy AI guardian might alert you that the trust of this other AI is unwarranted.

Or, the AI guardian might electronically interact with the other AI to try and ensure that whatever variance away from being trustworthy is quickly righted, and so on (see my coverage on such details at the link here).

The Trusty Trust Reservoir Metaphor

Since we are discussing varying levels of trust, you might find of use a handy metaphor about trustworthiness by conceiving of trust as a type of reservoir. You have a certain amount of trust for a particular person or thing in a particular circumstance at a particular point in time.

The level of the trust will rise or fall, depending upon what else happens related to that particular person or thing. The trust could be at a zero level when you have no trust whatsoever for the person or thing. The trust could be negative when you venture into having distrust of that person or thing.
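As a playful, minimal sketch of the metaphor, trust can be pictured as a numeric level that accrues gradually and can drain away all at once; the scale and the adjustment amounts below are purely illustrative assumptions, not a proposed measurement of trust:

```python
# Illustrative trust "reservoir": a level that rises and falls with events.
# The scale (-1.0 to 1.0) and the adjustment sizes are arbitrary choices here.

class TrustReservoir:
    def __init__(self, level=0.0):
        self.level = level          # 0.0 = no trust at all, negative = distrust

    def deposit(self, amount):
        """Trust accrues slowly, e.g., from observed reliable behavior."""
        self.level = min(1.0, self.level + amount)

    def drain(self, amount):
        """Trust can dump out rapidly after a single alarming incident."""
        self.level = max(-1.0, self.level - amount)

reservoir = TrustReservoir()
reservoir.deposit(0.1)   # the AI behaves as expected, trust inches upward
reservoir.drain(0.9)     # one radical swerve and the built-up trust collapses
```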

In the case of AI systems, your trust reservoir for the particular AI that you are relying upon in a particular circumstance will rise or fall depending upon your gauging of the trustworthiness of the AI. At times, you might be well aware of this varying level of trust about the AI, while in other instances you might be less aware and instead be making hunch-based judgments about the trustworthiness. Ways that we’ve been discussing herein as the means to boost trust levels for AI include:

Adherence to AI Ethics. If the AI that you are relying upon was devised by trying to adhere to the proper AI Ethics precepts, you presumably would use this understanding to boost the level of your trust reservoir for that particular AI system. As a side note, it is also possible that you might generalize to other AI systems as to their trustworthiness, likewise, though this can be at times a misleading form of what I call AI trust aura spreading (be cautious in doing this!).

Use a Human-In-The-Loop. If the AI has a human-in-the-loop, you might positively add to your perceived trust in the AI.

Establish Laws and Regulations. If there are laws and regulations associated with this particular type of AI, you might likewise boost your trust level.

Employ an AI Guardian Angel Bot. If you have an AI guardian angel bot at the ready, this too will further raise your trust level.

As mentioned earlier, trust can be quite brittle and fall apart in an instant (i.e., the trust reservoir rapidly and suddenly dumps out all of the built-up trust). Imagine that you are inside an AI-based self-driving car and the AI driving system suddenly makes a radical right turn, causing the wheels to squeal and nearly forcing the autonomous vehicle into an endangering rollover.

What would happen to your level of trust? It would seem that even if you previously held the AI to a heightened level of trust, you would dramatically and abruptly drop your trust level, sensibly so. At this juncture of this weighty discussion, I’d bet that you are desirous of additional illustrative examples that might showcase the nature and scope of trustworthy AI. There is a special and assuredly popular set of examples that are close to my heart.

You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the pursuit of trustworthy AI, and if so, what does this showcase? Allow me a moment to unpack the question. First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system.

There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here . I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here ). Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Trustworthy AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers. The AI is doing the driving. One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient.

In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. Why is this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI.

In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come into play on this topic. First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars.

As such, it is difficult to make sweeping statements about what AI driving systems will do or not do. Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended.

An existing limitation today might no longer exist in a future iteration or version of the system. I trust that provides a sufficient litany of caveats to underlie what I am about to relate. We are primed now to do a deep dive into self-driving cars and trustworthy AI.

Trust is everything, especially in the case of AI-based self-driving cars. Society seems to be warily eyeing the emergence of self-driving cars. On the one hand, there is a grand hope that the advent of true self-driving cars will demonstrably reduce the number of annual car-related fatalities.

In the United States alone there are about 40,000 annual deaths and around 2.5 million injuries due to car crashes, see my collection of stats at the link here. Humans drink and drive.

Humans drive while distracted. The task of driving a car seems to consist of being able to repetitively and unerringly focus on driving and avoid getting into car crashes. As such, we might dreamily hope that AI driving systems will guide self-driving cars repetitively and unerringly.

You can construe self-driving cars as a twofer, consisting of reducing the volume of car crash deaths and injuries, along with potentially making mobility available on a much wider and accessible basis. But the concern meanwhile looms over societal perceptions as to whether self-driving cars are going to be safe enough to be on our public roadways at large. If even one self-driving car gets into a crash or collision that leads to a single death or severe injury, you can likely anticipate that today’s somewhat built-up trust toward those AI-based driverless cars is going to precipitously drop.

We saw this happen when the now-infamous incident occurred in Arizona that involved a somewhat (not really) self-driving car that ran into and killed a pedestrian (see my coverage at this link here). Some pundits point out that it is unfair and inappropriate to base the trust of AI self-driving cars on the notion that just one death-producing crash or collision could undermine the already relatively crash-free public roadway trials. In addition, on a further unfair basis, the odds are that no matter which particular AI self-driving car brand or model perchance gets embroiled in a sorrowful incident, society would indubitably blame all self-driving car brands.

The entirety of self-driving cars could be summarily smeared and the industry as a whole might suffer a huge backlash leading to a possible shutdown of all public roadway trials. A contributor to such a blowback is found in the nonsensical proclamations by outspoken self-driving car proponents that all driverless cars will be uncrashable. This idea of being uncrashable is not only outrightly wrong (see the link here ), it insidiously is setting up the self-driving car industry for a totally out-of-whack set of expectations.

These outlandish and unachievable pronouncements that there will be zero deaths due to self-driving cars are fueling the misconception that any driverless car crashes are a sure sign that the whole kit and kaboodle is for naught. There is a distinct sadness to realize that the progress toward self-driving cars and the inch-at-a-time accumulation of societal trust could be dashed away in an instant. That is going to be one heck of a showcase about the brittleness of trust.

Conclusion

Many automakers and self-driving tech firms are generally abiding by AI Ethics principles, doing so to try and build and field trustworthy AI in terms of safe and reliable AI-based self-driving cars. Please realize that some of those firms are stronger and more devoted to the Ethical AI precepts than others. There are also occasional fringe or newbie self-driving car-related startups that seem to cast aside much of the AI Ethics cornerstones (see my review at the link here).

On other fronts, new laws and regulations covering self-driving cars have gradually been getting placed on the legal books. Whether they have the needed teeth to back them up is a different matter, as is whether the enforcement of those laws is being taken seriously or overlooked (see my columns for analyses on this). There is also a high-tech angle to this.

I have predicted that we will gradually see variants of AI guardian angel bots that will come to the fore in the autonomous vehicle and self-driving cars arena. We aren’t there yet. This will become more prevalent once the popularity of self-driving cars becomes more widespread.

This last point brings up a famous line about trust that you undoubtedly already know by heart. Trust, but verify. We can allow ourselves to extend our trust, perhaps generously so.

Meanwhile, we should also be watching like a hawk to make sure that the trust we engender is verified by both words and deeds. Let’s put some trust into AI, but verify endlessly that we are placing our trust appropriately and with our eyes wide open. You can trust me on that.

Follow me on Twitter.


From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/10/16/ai-ethics-and-ai-law-clarifying-what-in-fact-is-trustworthy-ai/
