
AI Ethics And The Quest For Self-Awareness In AI

Lance Eliot, Contributor. Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML). Sep 18, 2022, 08:00am EDT

Are you self-aware? I’d bet that you believe you are. The thing is, supposedly, few of us are especially self-aware. There is a range or degree of self-awareness, and we all purportedly vary in how astutely self-aware we are.

You might think you are fully self-aware and only be marginally so. You might be thinly self-aware and realize that’s your mental state. Meanwhile, at the topmost part of the spectrum, you might believe you are fully self-aware and indeed are frankly about as self-aware as they come.

Good for you. Speaking of which, what good does it do to be exceedingly self-aware? According to research published in the Harvard Business Review (HBR) by Tasha Eurich, you reportedly are able to make better decisions, you are more confident in your decisions, you are stronger in your communication capacities, and you are more effective overall (per the article entitled “What Self-Awareness Really Is (and How to Cultivate It)”). The bonus factor is that those with strong self-awareness are said to be less inclined to cheat, steal, or lie.

In that sense, there is a twofer: averting being a scoundrel or a crook, along with striving to be a better human being and to uplift your fellow humankind. All of this talk about self-awareness brings up a somewhat obvious question, namely, what does the phrase self-awareness actually denote? You can readily find tons of various definitions and interpretations about the complex and, shall we say, mushy construct entailing being self-aware.

Some would simplify matters by suggesting that self-awareness consists of monitoring your own self, knowing what you yourself are up to. You are keenly aware of your own thoughts and actions. Presumably, when not being self-aware, a person would not realize what they are doing, nor why so, and also not be cognizant of what other people have to say about them.

I’m sure you’ve met people like this. Some people appear to walk this earth without a clue of what they themselves are doing, nor do they have a semblance of what others are saying about them. I guess you could contend that they are like a heads-down charging bull in a delicate breakables boutique.

We customarily tend to believe that the bull does not know what it is doing and remains oblivious to the viewpoints of others unless those others try to physically maneuver or corral the clueless creature. It is said that self-awareness can be somewhat recursive. Let me sketch an example to illustrate this recursion.

You are in the midst of watching a quite absorbing cat video on your smartphone (everyone does this, it seems). Some people would have no apparent thoughts other than the wondrous heartwarming antics of those darling cats. Meanwhile, anyone with a modicum of self-awareness is aware that they are watching a cat video.

They might also be aware that others around them are noting that they are watching a cat video. Notice that you can be self-aware and still be immersed in, shall we say, a particular primary activity. The primary activity in this instance is watching the cat video.

Secondarily, and simultaneously, you can carry the thought that you are in fact watching a cat video. You are also able to carry the thought that others are observing you as you are watching the altogether entertaining cat video. You don’t necessarily have to stop one activity, such as discontinue watching the cat video, in order to then separately contemplate that you are (or just were) watching a cat video.

Those thoughts can seemingly occur in parallel with each other. Sometimes, though, our self-awareness might kick us out of or at least interrupt a primary mental activity. Perhaps, while thinking about your watching of the cat video, your mind partially zones out, since it is overstretched trying to concentrate solely on the video itself.

You opt to then rewind the video to revisit the portion that you kind of saw but that you were mentally distracted from fully comprehending. Self-awareness disturbed your primary mental activity. Okay, we are now ready for the recursive aspects to arise.

Are you ready? You are watching a cat video. Your self-awareness is informing you that you are watching a cat video and that others are watching you as you are watching the video. That’s the status quo.

You next make an additional mental leap. You begin to think about your self-awareness. You are self-aware that you are engaging your self-awareness.

Here’s how that goes: Am I thinking too much about thinking about my watching of the cat video, you ask yourself despairingly? This is another layer of self-awareness. Self-awareness stacking on top of other self-awareness. There is an old saying that it is turtles all the way down.

For the self-awareness phenomenon, you could be:

- Not aware of yourself
- Self-aware of yourself
- Self-aware of your self-awareness of yourself
- Self-aware of your being self-aware of your self-awareness of yourself
- Ad infinitum (i.e., and so on)

You might have realized that I earlier was subtly pointing out that there seem to be two major categories of being self-aware.

One particular theory postulates that we have a kind of internal self-awareness that focuses on our internal states, and we also have an external self-awareness that aids in gauging the perceptions of us held by those around us who are viewing us. Per the HBR article, here’s a quick depiction of the theorized two types of self-awareness: “The first, which we dubbed internal self-awareness, represents how clearly we see our own values, passions, aspirations, fit with our environment, reactions (including thoughts, feelings, behaviors, strengths, and weaknesses), and impact on others.” And meanwhile the other is: “The second category, external self-awareness, means understanding how other people view us, in terms of those same factors listed above.

Our research shows that people who know how others see them are more skilled at showing empathy and taking others’ perspectives.” A handy two-by-two matrix or four-square can be derived by asserting that both the internal and external self-awareness range from high to low, and you can mate the two categories against each other. The HBR research indicates that you are said to be one of these four self-awareness archetypes:

- Introspector: Low External Self-Awareness + High Internal Self-Awareness
- Seeker: Low External Self-Awareness + Low Internal Self-Awareness
- Pleaser: High External Self-Awareness + Low Internal Self-Awareness
- Aware: High External Self-Awareness + High Internal Self-Awareness

The pinnacle would be the “Aware” archetype, which consists of being at the top rung of being externally self-aware and likewise at the top of being internally self-aware.
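
Since the archetypes are simply the four cells of that two-by-two matrix, the mapping can be made concrete in a few lines of code. Here is a minimal sketch in Python; the function name and the boolean encoding of high/low are my own illustrative choices, not anything prescribed by the HBR research:

```python
def self_awareness_archetype(internal_high: bool, external_high: bool) -> str:
    """Map high/low internal and external self-awareness to the four
    HBR archetypes described above."""
    if internal_high and external_high:
        return "Aware"         # high internal + high external
    if internal_high:
        return "Introspector"  # high internal + low external
    if external_high:
        return "Pleaser"       # low internal + high external
    return "Seeker"            # low internal + low external

print(self_awareness_archetype(internal_high=True, external_high=False))
# -> "Introspector"
```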

To clarify, you don’t attain this vaunted posture in a necessarily permanent way. You can slip back and forth between being high and low, amongst both the internal and external self-awareness realms. It can depend upon the time of day, the situation you find yourself in, and a slew of other salient factors.

Now that we’ve affably covered some foundational elements about self-awareness, we can seek to tie this to the topic of ethical behaviors. The usual claim about being self-aware is that you are more likely to be aboveboard when you are self-aware. This means, as indicated already, you are less prone to adverse ethical behaviors such as stealing, cheating, and lying.

The rationale for this tendency is that the activeness of your self-awareness would make you realize that your own behavior is unsavory or unethical. Not only do you catch yourself as you veer into muddy unethical waters, but you are also prone to steering yourself back out and onto dry land (the sanctity of ethical territory), as it were. Your self-awareness aids you in exercising self-control.

A contrast would presumably be when there is little or no self-awareness, which suggests that someone is perhaps oblivious to their leaning into unethical behaviors. You could contend that such an unaware person might not realize that they are performing adversely. Akin to the bull in the breakables shop, until something more overtly catches their attention, they are unlikely to self-regulate themselves.

Not everyone buys into this, by the way. Some would argue that self-awareness can be as readily applied to being unethical as to being ethical. For example, an evildoer might be fully self-aware and relish that they are carrying out the wrongdoing.

Their self-awareness even drives them more stridently toward larger and larger acts of nefarious misconduct. There is more cloudiness on this than might directly meet the eye. Suppose someone is keenly self-aware, but they are unaware of the ethical mores of a given society or culture.

In that manner, they do not have any ethical guidance, despite the fact that they are self-aware. Or, if you like, perhaps the person knows about the ethical precepts and doesn’t believe they apply to them. They consider themselves unique or outside the bounds of conventional ethical thinking.

Round and round it goes. Self-awareness could be construed as a double-edged ethics-oriented sword, some would fervently emphasize. For the moment, let’s go with the happy face version consisting of self-awareness by and large guiding or nudging us toward ethical behaviors.

Everything else being equal, we will make the brash assumption that the more self-awareness there is, the more ethically inclined you will be. It sure seems pleasing and inspirational to wish it so. Let’s shift gears and bring Artificial Intelligence (AI) into the picture.

We are at a juncture in this discussion to connect all of the preceding entanglements with the burgeoning realm of Ethical AI, also commonly known as the ethics of AI. For my ongoing and extensive coverage of AI ethics, see the link here and the link here, just to name a few. The notion of Ethical AI entails intertwining the field of ethics and ethical behavior with the advent of AI.

You’ve certainly seen headlines that have raised alarm bells about AI that is replete with inequities and various biases. For example, there are concerns that AI-based facial recognition systems can at times exhibit racial and gender discrimination, typically as a result of how the underlying Machine Learning (ML) and Deep Learning (DL) facilities were trained and fielded (see my analysis at this link here). To try and stop or at least mitigate the pell-mell rush toward AI For Bad, consisting of AI systems that are either inadvertently or at times intentionally shaped to act badly, there has been a recent urgency to apply ethics precepts to the development and use of AI.

The earnest goal is to provide ethical guidance to AI developers, plus firms that build or field AI, and those that are dependent upon AI applications. As an example of the Ethical AI principles being crafted and adopted, see my coverage at the link here. Give a reflective moment to consider these three highly crucial questions:

- Can we get AI developers to embrace Ethical AI principles and put those guidelines into actual use?
- Can we get firms that craft or field AI to do likewise?
- Can we get those that use AI to similarly be cognizant of Ethical AI facets?

I’d unabashedly say this: it is a tall order.

The thrill of making AI can overpower any inkling of attention toward the ethics of AI. Well, not just the thrill, but money-making is integral to that equation too. You might be surprised to know that some in the AI realm are apt to say that they’ll get around to dealing with the Ethical AI “stuff” once they’ve gotten their AI systems out the door.

This is the typical techie mantra of making sure to fail fast and fail often until you get it right (hopefully getting it right). Of course, those that are summarily shoving ethically dubious AI onto the public at large are letting the horse out of the barn. Their proclaimed idea is that the AI For Bad will get fixed after it is in daily use, which is detrimentally tardy since the horse is already wantonly galloping around.

Harms can be done. There is also the heightened chance that nothing will be fixed or adjusted while the AI is in use. A frequent excuse is that fiddling with the AI at that juncture might make it even worse in terms of already unethical algorithmic decision-making (ADM) going completely off the skids.

What can be done to get the utility and vitality of having Ethical AI as a brightly shining and guiding light in the minds of those that are building AI, fielding AI, and using AI? Answer: Self-awareness. Yes, the notion is that if people were more self-aware about how they use or interact with AI, it might increase their proclivity toward wanting Ethical AI to be the norm. The same could be said about the AI developers and the companies related to AI systems.

If they were more self-aware of what they are doing, perhaps they would embrace the ethics of AI more so. Part of the logic, as already stipulated, is that being self-aware proffers a tendency toward being an ethically better person and also averting being an ethically lousy person. If we can keep that premise going, it implies that AI developers who are more inclined toward self-awareness will ergo be inclined toward ethical behaviors and therefore toward producing ethically sound AI.

Is that a bridge too far for you? Some would say that the indirectness is a bit much. The exorbitant chain of linkages between being self-aware, being ethically virtuous, and applying ethical precepts to AI is maybe hard to swallow. A counterargument is that it couldn’t hurt to try.

Skeptics would say that an AI developer might be self-aware and possibly be more ethically minded, but they aren’t necessarily going to leap toward applying that mental encampment to the mores of Ethical AI. The reply to that qualm is that if we can publicize and popularize Ethical AI matters, the otherwise seemingly tenuous connection will become more obvious, expected, and possibly become the standard way of doing things when it comes to crafting AI. I am now going to add a twist to this saga.

The twist might make your head spin. Please make sure you are well seated and prepared for what I am about to indicate. Some point out that we ought to be building Ethical AI directly into the AI itself.

You might be nonplussed by that pronouncement. Let’s unpack it. A programmer might create an AI system and do so with their own programming self-awareness of trying to prevent the AI from embodying biases and inequities.

Rather than just plowing away at the programming, the developer is watching over their own shoulder to ask whether the approach they are undertaking is going to result in the requisite absence of adverse elements in the AI. Great, we’ve got an AI developer that appears to be sufficiently self-aware, has sought to embrace ethical behaviors, and has seen the light to include ethical precepts as they craft their AI system. Score a win for Ethical AI! All well and good, though here’s something that can later on transpire.

The AI is fielded and put into daily use. Part of the AI included a component for being able to “learn” on the fly. This means that the AI can adjust itself based on new data and other aspects of the original programming.

As a quick aside, this does not imply that the AI is sentient. We don’t have sentient AI. Ignore those dopey headlines saying that we do.

Nobody can say whether we will have sentient AI, and nor can anyone sufficiently predict when if ever it will happen. Returning to our tale, the AI was purposefully devised to improve itself while underway. Quite a handy notion.

Rather than programmers having to continually make improvements, they allow the AI program to do so by itself (whoa, is that working yourself out of a job?). During the time that the AI inch by inch adjusts itself, it turns out that various sordid inequities and biases are creeping into the AI system by its own acts of alteration. Whereas the programmer originally kept those seedy aspects out, they are now forming due to the AI adjusting on the fly.

Regrettably, this can happen in such a subtle behind-the-scenes way that nobody is the wiser. Those that earlier might have given a green light to the AI after the initial exhaustive testing are now blindly unaware that the AI has gone down the rotten path of AI For Bad. One means to either prevent or at least catch this undesirable emergence would be to build into the AI a kind of Ethical AI double-checker.

A component within the AI is programmed to watch the behavior of the AI and detect whether unethical ADM is beginning to emerge. If so, the component might send out an alert to the AI developers or do so to a firm that is running the AI system. A more advanced version of this component might try to repair the AI.

This would be an adjustment of the adjustments, turning the arising unethical aspects back into the proper ethical parameters. You can imagine that this type of programming is tricky. There is a chance that it might go astray, possibly turning unethical into deeply unethical.
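
To make the idea more tangible, here is a minimal sketch of such an Ethical AI double-checker in Python. Everything in it is an illustrative assumption on my part (the disparity metric, the threshold, and all of the names), not a depiction of any actual production system:

```python
from collections import defaultdict

class EthicalAIDoubleChecker:
    """Watches the decisions emitted by the rest of the AI and flags
    potentially unethical algorithmic decision-making (ADM)."""

    def __init__(self, disparity_threshold: float = 0.2):
        self.disparity_threshold = disparity_threshold
        self.approvals = defaultdict(int)
        self.totals = defaultdict(int)

    def observe(self, group: str, approved: bool) -> None:
        # Record each decision the main AI makes, keyed by a group attribute.
        self.totals[group] += 1
        self.approvals[group] += int(approved)

    def audit(self) -> bool:
        # Compare approval rates across groups; a wide gap is a red flag.
        rates = [self.approvals[g] / self.totals[g] for g in self.totals]
        if len(rates) < 2:
            return False
        return (max(rates) - min(rates)) > self.disparity_threshold

checker = EthicalAIDoubleChecker()
checker.observe("group_a", approved=True)
checker.observe("group_b", approved=False)
if checker.audit():
    print("Alert: possible unethical ADM emerging; notify the AI developers.")
```

A genuine repair mechanism would be vastly more involved; this merely shows the monitor-and-alert loop in its barest form.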

There is also the possibility of a false positive triggering the component into action and maybe messing things up accordingly. Anyway, without getting further bogged down in how this double-checker would function, we are going to make an audacious declaration about it. You could suggest that in some limited way, the AI is said to be self-aware.

Yikes, those are fighting words for many. The prevailing belief by nearly everyone is that today’s AI is not self-aware. Full stop, period.

Until we reach sentient AI, and we don’t know if or when that will occur, there isn’t any kind of AI that is self-aware. At least not in the meaning of human-oriented self-awareness. Don’t even suggest that it can happen.

I certainly agree that we need to be careful about anthropomorphizing AI. I’ll say more about that concern in a moment. Meanwhile, if you’ll, for the sake of discussion, be willing to use the phrasing “self-aware” in a loosey-goosey manner, I believe you can readily see why the AI might be said to be abiding by the overall notion of self-awareness.

We have a portion of the AI that is monitoring the rest of the AI, keeping tabs on what the rest of the AI is up to. When the rest of the AI starts to go overboard, the monitoring portion seeks to detect this. Furthermore, the AI monitoring portion or double-checker might steer the rest of the AI back into the proper lanes.

Doesn’t that seem a bit like the act of watching those cat videos and having the self-awareness that you were doing so? There is a familiar ring to it. We can extend this even further. The AI double-checker component is not only programmed to observe the behavior of the rest of the AI but also notes the behavior of those that are using the AI.

How are the users doing when utilizing the AI? Suppose that some users are expressing indignation that the AI seems to be discriminating against them. The AI double-checker might be able to pick up on this, using it as another red flag about the rest of the AI going astray. This brings up the internal self-awareness and the external self-awareness categorizations.

The AI double-checker is scanning internally and externally to figure out whether the rest of the AI has headed into troubling seas. The detection will raise a flag or cause a self-correction to be enacted, as sketched below.
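
Here is a companion sketch, again purely illustrative, that pairs an internal audit of the AI's own decisions with an external signal drawn from user feedback. The heuristics, thresholds, and names are my own assumptions:

```python
def scan_internal(decision_log: list) -> bool:
    """Internal 'self-awareness': audit the AI's own recent decisions
    for signs of drift (a placeholder heuristic)."""
    flagged = [d for d in decision_log if d.get("low_confidence_override")]
    return len(flagged) > 0.1 * len(decision_log)  # >10% is suspect

def scan_external(user_feedback: list, complaint_threshold: int = 5) -> bool:
    """External 'self-awareness': watch how users are reacting to the AI."""
    complaints = [f for f in user_feedback if f.get("alleges_discrimination")]
    return len(complaints) >= complaint_threshold

def double_check(decision_log: list, user_feedback: list) -> str:
    if scan_internal(decision_log) or scan_external(user_feedback):
        return "raise_flag_and_self_correct"
    return "all_clear"
```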

Let’s add another somewhat mind-boggling extension. We build another AI double-checker that is meant to double-check the core AI double-checker. Why so? Well, suppose the AI double-checker seems to be faltering or failing to do its job. The AI double-checker of the double-checker would seek to detect this malfunction and take needed action accordingly.
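
In code, that second-order checker can be as humble as a watchdog verifying that the first checker is alive and doing its scans. A minimal sketch, under the same illustrative assumptions as before:

```python
import time

class DoubleCheckerWatchdog:
    """A checker of the checker: monitors the core double-checker itself."""

    def __init__(self, heartbeat_timeout_sec: float = 30.0):
        self.heartbeat_timeout_sec = heartbeat_timeout_sec
        self.last_heartbeat = time.monotonic()

    def record_heartbeat(self) -> None:
        # The core double-checker calls this each time it completes a scan.
        self.last_heartbeat = time.monotonic()

    def check(self) -> bool:
        # If the core double-checker has gone silent, flag the malfunction.
        silent_for = time.monotonic() - self.last_heartbeat
        return silent_for > self.heartbeat_timeout_sec

watchdog = DoubleCheckerWatchdog()
# ... the core double-checker periodically calls watchdog.record_heartbeat() ...
if watchdog.check():
    print("Core double-checker appears to have failed; escalate to operators.")
```

You could, of course, stack yet another watchdog on top of this one.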

Welcome to the recursive nature of self-awareness, some might proudly declare, as exhibited in a computational AI system. For those of you that are already at the edge of your seat about this, the last comment, for now, is that we could try to suggest that if you make AI systems “self-aware” they will then potentially gravitate toward ethical behavior. Are they doing this on a sentient basis? Decidedly, no.

Are they doing this on a computational basis? Yes, though we have to be clear-cut that it is not of the same caliber as that of human behavior. If you are uncomfortable that the notion of self-awareness is being wrongly distorted to make it fit into a computational scheme, your misgivings on this are well-noted. Whether we should put a stop to the ongoing AI efforts that leverage the notion is another open question.

You could persuasively argue that at least it seems to head us toward a better outcome in terms of what AI is likely to do. We do, though, need to have our eyes wide open as this takes place. Guess we’ll need to see how this all plays out.

Come back to this in five years, ten years, and fifty years, and see if your thinking has changed on the controversial matter. I realize this has been a somewhat heady examination of the topic and you might be hankering for some day-to-day examples. There is a special and assuredly popular set of examples that are close to my heart.

You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI having a semblance of “self-awareness” and if so, what does this showcase? Allow me a moment to unpack the question. First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system.

There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here. I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend; see my coverage at this link here). Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI Having So-Called Self-Awareness

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers. The AI is doing the driving. One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient.

In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. Why is this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI.

In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come into play on this topic. First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars.

As such, it is difficult to make sweeping statements about what AI driving systems will do or not do. Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended.

An existing limitation today might no longer exist in a future iteration or version of the system. I trust that provides a sufficient litany of caveats to underlie what I am about to relate. We are primed now to do a deep dive into self-driving cars and ethical AI questions entailing the eyebrow-raising notion of AI having a kind of self-awareness if you will.

Let’s use a straightforward example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car.

The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.

Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles; see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom upon witnessing those meandering self-driving cars. Probably the main reason right now that they might notice the autonomous vehicles is the irritation and exasperation factor.

The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. For hectic human drivers in their traditional human-driven cars, you get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars. That’s something we might all need to get accustomed to, rightfully or wrongly.

Back to our tale. One day, suppose a self-driving car in your town or city approaches a Stop sign and does not appear to be slowing down. Heavens, it looks like the AI driving system is going to have the self-driving car plow right past the Stop sign.

Imagine if a pedestrian or bike rider was somewhere nearby and got caught off-guard that the self-driving car wasn’t going to come to a proper halt. Shameful. Dangerous! And, illegal.

Let’s now consider an AI component in the AI driving system that acts as a style of self-aware double-checker. We will take a moment to dig into the details of what is going on inside the AI driving system. Turns out that the video cameras mounted on the autonomous vehicle detected what seemed perhaps to be a Stop sign, though in this case an overgrown tree abundantly obscures the Stop sign.

The Machine Learning and Deep Learning system that was originally trained on Stop signs was devised on patterns of principally full Stop signs, generally unobstructed. Upon computationally examining the video imagery, a low probability was assigned that a Stop sign existed in that particular spot (as an added complication, and further explanation, this was a newly posted Stop sign that did not appear on the prior prepared digital maps that the AI driving system was relying on). All in all, the AI driving system computationally determined to proceed ahead as though the Stop sign was either nonexistent or possibly a sign of another sort that perchance resembled a Stop sign (this can happen and does happen with some frequency).

But, thankfully, the AI self-aware double-checker was monitoring the activities of the AI driving system. Upon computationally reviewing the data and the assessment by the rest of the AI, this component opted to override the normal course of proceeding and instead commanded the AI driving system to come to a suitable stop. No one was injured, and no illegal act arose.
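
Here is a minimal sketch of that override logic. The probabilities, thresholds, and names are all illustrative assumptions of mine, not how any actual AI driving system is built:

```python
def base_policy(vision_prob: float, on_map: bool) -> str:
    """Base driving policy: act on the perception and map evidence."""
    if vision_prob > 0.8 or on_map:
        return "stop"
    return "proceed"  # evidence deemed too weak; treat the sign as absent

def double_checker_override(vision_prob: float, on_map: bool,
                            base_decision: str) -> str:
    """Self-aware double-checker: be conservative when there is even weak
    evidence of a regulatory sign that the base policy is dismissing."""
    weak_evidence = vision_prob > 0.3  # e.g., a partially occluded Stop sign
    if base_decision == "proceed" and weak_evidence and not on_map:
        return "stop"  # override: better to halt than to run a Stop sign
    return base_decision

# The occluded, newly posted Stop sign from our tale:
p, mapped = 0.45, False
print(double_checker_override(p, mapped, base_policy(p, mapped)))  # -> "stop"
```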

You could say that the AI self-aware double-checker acted as though it was an embedded legal agent, trying to ensure that the AI driving system abided by the law (in this case, a Stop sign). Of course, safety was key too. That example hopefully showcases how the conceived AI self-aware double-checker might work.

We can next briefly consider a more prominently Ethical AI example that illustrates how the AI self-aware double-checker might provide an ethics-oriented embedded functionality. First, as background, one of the concerns that has been expressed about the advent of AI-based self-driving cars is that they might end up being used in a somewhat inadvertently discriminatory way. Here’s how.

Suppose those self-driving cars are set up to try and maximize their revenue potential, which decidedly makes sense for those operating a fleet of self-driving cars available for ride-sharing. The fleet owner would want to have a profitable operation. It could be that in a given town or city, the roaming self-driving cars gradually begin to serve some parts of the community and not other areas.

They do so for a money-making goal, in that poorer areas might not be as revenue-producing as the wealthier parts of the locale. This is not an explicit aspiration of serving some areas and not serving others. Instead, it organically arises as the AI of the self-driving cars “figures out” computationally that there is more money to be made by concentrating on the geographically higher-paying areas.

I’ve discussed this societal ethical concern in my column, such as at the link here. Assume that we have added an AI self-aware double-checker into the AI driving systems. After a while, the AI component computationally notices a pattern of where the self-driving cars are roaming.

Notably, in some areas but not in others. Based on having been coded with some Ethical AI precepts, the AI self-aware double-checker starts guiding the self-driving cars to other parts of town that were otherwise being neglected. This illustrates the AI self-aware notion and does so in combination with the Ethical AI element.
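
As a sketch of how such a coded precept might look, consider a toy dispatcher with a fairness floor bolted onto its revenue-seeking logic. The zone names, the floor value, and the overall structure are hypothetical illustrations of mine:

```python
def pick_next_zone(revenue_per_zone: dict, rides_served: dict,
                   fairness_floor: float = 0.15) -> str:
    """Revenue-maximizing dispatch, overridden by an Ethical AI precept:
    no zone's share of total service may drift below the fairness floor."""
    total = sum(rides_served.values()) or 1
    neglected = [z for z in rides_served
                 if rides_served[z] / total < fairness_floor]
    if neglected:
        # Double-checker override: serve the most neglected zone first.
        return min(neglected, key=lambda z: rides_served[z])
    # Otherwise, behave as the profit-seeking base policy would.
    return max(revenue_per_zone, key=revenue_per_zone.get)

revenue = {"uptown": 9.0, "midtown": 7.5, "south_side": 4.0}
served = {"uptown": 60, "midtown": 35, "south_side": 5}
print(pick_next_zone(revenue, served))  # -> "south_side" (5/100 < 0.15)
```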

One more such example might provide insight that this Ethical AI consideration can be a quite sobering and serious life-or-death ramification too. Consider a news report about a recent car crash. Reportedly, a human driver was coming up to a busy intersection and had a green light to proceed straight ahead.

Another driver came barreling past a red light and entered the intersection when they should not have been doing so. The driver with the green light realized at the last moment that this other car was going to severely ram into his car. According to this imperiled driver, he mindfully calculated that either he was going to get struck by the other car, or he could swerve to try and avoid the interloper.

The problem with swerving was that there were nearby pedestrians that would be endangered. Which choice would you make? You can prepare yourself to get hit and hope that the damage won’t maim or kill you. On the other hand, you can veer radically away, but grievously endanger and possibly harm or kill nearby pedestrians.

This is a hard problem, encompassing moral judgment, and it is fully steeped in ethical (and legal) implications. There is a general-purpose ethical quandary that covers this kind of dilemma, famously or perhaps infamously called the Trolley Problem; see my extensive coverage at this link here. Turns out that it is an ethically stimulating thought experiment that traces back to the early 1900s.

As such, the topic has been around for quite a while and more recently has become generally associated with the advent of AI and self-driving cars. Replace the human driver with an AI driving system embedded into a self-driving car. Imagine then that an AI self-driving car is entering into an intersection and the sensors of the autonomous vehicle suddenly detect a human-driven car perilously coming directly through a red light and aiming at the driverless car.

Assume that the self-driving car has some passengers inside the vehicle. What do you want the AI to do? Should the AI driving system opt to proceed ahead and get plowed into (likely harming or maybe killing the passengers inside the vehicle), or do you want the AI driving system to take a chance and veer away, though the veering action takes the autonomous vehicle perilously toward nearby pedestrians and might harm or kill them? Many of the AI makers of self-driving cars are taking a head-in-the-sand approach to these blistering ethical AI predicaments.

By and large, the AI as presently programmed would simply plow ahead and get violently rammed by the other car. The AI wasn’t programmed to look for any other evasive maneuvers. I’ve predicted repeatedly that this see-no-evil hear-no-evil stance of the AI self-driving car makers will eventually come around and bite them (my analysis is at the link here ).

You can expect lawsuits involving such car crashes that will seek to find out what the AI was programmed to do. Was the company or the AI developers or the fleet operator that developed and fielded the AI negligent or liable for what the AI did or didn’t do? You can also anticipate that a public-wide firestorm of ethical AI awareness will brew once these kinds of cases occur. Into this Ethical AI dilemma steps our vaunted AI self-aware double-checker that is ethics-oriented.

Perhaps this special AI component might engage in these kinds of circumstances. The portion is monitoring the rest of the AI driving system and the status of the self-driving car. When a dire moment like this arises, the AI component serves as a Trolley Problem solver and proffers what the AI driving system should do.
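
Nobody has a settled way to code such a component. Purely as a toy illustration, here is one conceivable shape for it; the harm estimates, the policy of minimizing expected total harm, and every name below are my own assumptions, emphatically not anyone's deployed ethics module:

```python
def choose_maneuver(maneuvers: dict) -> str:
    """Toy Trolley-Problem-style solver: pick the maneuver with the
    lowest expected total harm, summed over everyone affected."""
    def expected_harm(outcome: dict) -> float:
        # probability of harm * number of people exposed, per group
        return sum(p * n for p, n in outcome.values())
    return min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))

# Hypothetical estimates for the red-light-runner scenario:
options = {
    "brace_for_impact": {"passengers": (0.7, 2), "pedestrians": (0.0, 3)},
    "swerve_away":      {"passengers": (0.1, 2), "pedestrians": (0.6, 3)},
}
print(choose_maneuver(options))  # -> "brace_for_impact" (1.4 vs. 2.0)
```

Whether minimizing a summed harm estimate is even the right ethical policy is precisely the kind of question the Trolley Problem forces into the open.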

Not an easy thing to code, I assure you.

Conclusion

I’ll share with you the last thought for now on this topic. You are likely to find it intriguing.

Do you know about the mirror test? It is quite well known in the study of self-awareness. Other names for it are the mirror self-recognition test, the red spot test, the rouge test, and related phrasings. The technique and approach were initially crafted in the early 1970s for assessing the self-awareness of animals.

Animals that have reportedly successfully passed the test include apes, certain types of elephants, dolphins, magpies, and some others. Animals that were tested and reportedly did not pass the test include giant pandas, sea lions, etc. Here’s the deal.

When an animal sees itself in a mirror, does the animal realize that the image shown is of itself, or does the animal think it is another animal? Presumably, the animal visually recognizes its own species, having seen others of its own kind, and might therefore think that the animal shown in the mirror is a cousin or maybe a belligerent competitor (especially if the animal snarls at the mirror image, which in turn seems to be snarling back at it). Perhaps you’ve seen your house cat or beloved pet dog do the same when it sees itself in a household mirror for the first time. In any case, we would assume that a wild animal has never seen itself before.

Well, this is not necessarily true, since perhaps the animal caught a glimpse of itself in a calm pool of water or via a shiny rock formation. But those are considered less likely chances. Okay, we want to somehow assess whether an animal is able to figure out that it is in fact the animal shown in the mirror.

Ponder that seemingly simple act. Humans figure out at a young age that they exist and that their existence is demonstrated by seeing themselves in a mirror. They become self-aware of themselves.

In theory, you might not realize that you are you, until seeing yourself in a mirror. Maybe animals aren’t able to cognitively become self-aware in the same fashion. It could be that an animal would see itself in a mirror and perpetually believe that it is some other animal.

No matter how many times it sees itself, it would still think that this is a different animal than itself. The tricky part now comes into play. We make a mark on the animal.

This mark has to be viewable only when the animal sees itself in the mirror. If the animal can twist or turn and see the mark on itself (directly), that ruins the experiment. Furthermore, the animal cannot feel, smell, or in any other manner detect the mark.

Once again, if they did so it would ruin the experiment. The animal cannot know that we put the mark on it, since that would clue the beast that something is there. We want to narrow down things such that the only possible reason that the mark is discoverable would be via looking at itself in the mirror.

Aha, the test is now ready. The animal is placed in front of a mirror or wanders to it. If the animal tries to subsequently touch or dig at the mark, we would reasonably conclude that the only way this would occur is if the animal realized that the mark was on itself.

Very few animal species have been able to successfully pass this test. There is a slew of criticisms about the test. If a human tester is nearby, they might give away things by staring at the mark, which might cause the animal to brush or feel for it.

Another possibility is that the animal still believes that another animal is shown in the mirror, but it is of the same type, and thus the animal wonders if it too has a mark like the one on the other animal. On and on it goes. I’m sure that you are glad to know this and will henceforth understand why there is a dot or some oddish marking on an animal that otherwise would not have such a mark.

Who knows, it might have recently finished a mirror test experiment. Congratulate the animal, safely, for having been a generous participant. What does any of that have to do with AI-based self-driving cars? You’ll like this part.

A self-driving car is going along on a long highway. The AI driving system is using sensors to detect other traffic. This is a two-lane highway that has traffic going northbound in one lane and southbound in the other lane.

On occasion, cars and trucks will try to pass each other, doing so by entering into the opposing lane and then hopping back into their proper lane of travel. You’ve seen this, you’ve undoubtedly done this. I’ll bet this next aspect has happened to you too.

Up ahead of the self-driving car is one of those sizable tanker trucks. It is made of shiny metal. Polished and clean as a whistle.

When getting behind such a truck, you can see the mirror image of your car via the back portion of the tanker. If you’ve seen this, you know how mesmerizing it can be. There you are, you and your car, reflected in the mirror-like reflection of the back of the tanker truck.

Sit down for the crazy twist. A self-driving car comes up behind the tanker truck. The cameras detect the image of a car that is shown in the mirror-like reflection.

Whoa, the AI assesses, is that a car? Is it coming at the self-driving car? As the self-driving car gets closer and closer to the tanker truck, the car seems to be getting closer and closer. Yikes, the AI computationally might calculate that this is a dangerous situation and the AI ought to take evasive action from this crazy rogue vehicle. You see, the AI didn’t recognize itself in the mirror.

It failed the mirror test. What to do? Perhaps the AI self-aware double-checker jumps into the matter and reassures the rest of the AI driving system that it is only a harmless reflection. Danger averted.
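
How might that reassurance be computed? One naive heuristic: a reflection off the back of a truck appears dead ahead, visually matches your own vehicle, and closes at a speed tied to your own closing speed on the truck (a mirror image approaches at roughly twice your speed relative to the reflective surface). A minimal sketch, with every threshold and name being my own illustrative assumption:

```python
def looks_like_own_reflection(detected: dict, ego: dict) -> bool:
    """Heuristic self-recognition check for a mirror-like truck surface.

    detected: the apparent vehicle's bearing (deg), closing speed (m/s),
              and visual match score against the ego car's appearance.
    ego:      our own closing speed on the truck ahead (m/s).
    """
    dead_ahead = abs(detected["bearing_deg"]) < 2.0
    ratio = detected["closing_speed"] / max(ego["closing_speed"], 0.1)
    mirror_speed = 1.5 < ratio < 2.5  # roughly double our own closing speed
    looks_like_us = detected["visual_match_to_ego"] > 0.9
    return dead_ahead and mirror_speed and looks_like_us

detected = {"bearing_deg": 0.4, "closing_speed": 4.1, "visual_match_to_ego": 0.95}
ego = {"closing_speed": 2.0}
print(looks_like_own_reflection(detected, ego))  # -> True: merely a reflection
```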

The world is saved. Successful passage of the AI Mirror Test! With a tongue-in-cheek conclusion, we might even suggest that sometimes AI is smarter or at least more self-aware than an average bear (though, to the credit of bears, they usually do well in the mirror test, oftentimes having gotten used to their reflection in pools of water). Correction, maybe AI can be more self-aware than giant pandas and sea lions, but don’t tell the animals that, they might be tempted to smash or bash AI systems.

We don’t want that, do we?


From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/09/18/ai-ethics-and-the-quest-for-self-awareness-in-ai/
