AI Ethics And Autonomous Systems Lessons Gleaned From That Recent Alaska Airlines Flight Where The Pilot And Co-Pilot Disagreed Prior To Taking Off And Abruptly Opted To Taxi Back To The Terminal And Go Their Separate Ways

Lance Eliot, Contributor. Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).

Jul 23, 2022, 09:00am EDT

[Image caption: How are we going to handle professional disagreements between a human-in-the-loop and an AI autonomous system? getty]

Airlines have been in the news quite a bit lately.

We are in the summertime crunch of flights. Weary and frustrated passengers find themselves facing all sorts of flight disruptions and airline scheduling contortions. Flights get canceled unexpectedly.

Flights are delayed. Passengers fume. There have been unfortunately many instances of passengers that allow these vexations to erupt, and we’ve seen woefully way too many viral videos of head-to-head confrontations and sometimes pummeling fisticuffs.

More rarely do we learn about disputes between a pilot and copilot that might occur while in the cockpit. Indeed, we are naturally taken aback to think that a pilot and copilot would have any semblance of a serious disagreement during any stage of a flight.

If the disagreement relates to which brand of coffee is best, our assumption is that this would not intrude into the work effort entailing flying the plane. The two would simply shrug off their lack of eye-to-eye on a seemingly flight-irrelevant topic. Their professional demeanor and longstanding pilot training would kick in and they would rivet their focus back to the flight particulars.

Consider though when a professional disagreement intervenes. I am going to briefly share with you a news item widely published about a recent instance of something that happened during a flight in the US pertaining to a claimed professional disagreement in the cockpit. This is mainly being cited herein so that we can explore a related topic that is of great importance to the advent of Artificial Intelligence (AI).

You see, there can be a form of, shall we say, professional disagreement between not just humans in a human-to-human disagreement, but we can also have something similar occur amidst the adoption of AI, resulting in human-versus-AI professional disagreements. All sorts of AI Ethics considerations arise. For my extensive and ongoing coverage of AI Ethics and Ethical AI issues, see the link here and the link here, just to name a few.

Get yourself ready for a fascinating tale. As recently reported in the news, a case of “professional disagreement” apparently arose during an Alaska Airlines flight that was going from Washington to San Francisco. According to the news reports, the flight had moved away from the gate and was waiting on the tarmac for permission to taxi and take flight.

A storm was underway and led to a flight delay of more than an hour and a half. Turns out that the plane eventually turned around and headed back to the gate, which some of the passengers might normally have assumed was simply a storm-related safety precaution. Per various tweets, it seems that the pilot and copilot had some kind of out-of-view feud during their time there in the cockpit and somehow came to the conclusion that the most prudent approach would be to scrub the flight and return to the terminal.

Tweets suggested that the captain and first officer apparently couldn’t get along with each other. The airline later issued a statement that the situation was unfortunate (the situation was not explicitly stated or explained per se), the two flight officers were evaluated by management and deemed fit to fly, crews were swapped, and the flight did end up taking place and later on reached San Francisco. In one sense, if in fact the pilot and copilot did have a professional disagreement such as whether the plane was suitably ready for taking flight or whether the risk of flying through a storm was within a suitable safety range, those passengers ought to be relieved and thankful that the plane was returned to the gate.

Better to be safe than sorry. Having an additional delay is well worth the presumed reduction of risks associated with a flying journey considered rugged or adverse. Some people might be surprised that such a professional disagreement could arise.

We perhaps have a false impression that everything that happens in the cockpit is entirely precise and well-scripted. All forms of human discretion have seemingly been wrung out of the process. Based on exacting and thoroughly calculated charts, a flight is either okay to proceed or it is not.

There can’t be any disagreement when the whole kit and caboodle is thought to be based on an irrefutable calculus of facts and figures. That isn’t the full truth of the matter. Sure, there is a slew of protocols and all kinds of checks and balances, but this does not squeeze out every iota of human judgment.

Pilots and copilots still exercise human judgment. Fortunately, this human judgment is honed by years of flying. The odds are that a pilot and copilot in a commercial passenger plane have gobs of prior flight experience and readily leverage their many years of in-depth reasoning and judgment associated with being at the flight controls.

Given the notable role of human judgment, we might logically anticipate that a pilot and copilot are sometimes going to have professional disagreements. Most of the time there presumably is very little such disagreement. The pilot and copilot on everyday flights are likely to be well-aligned the preponderance of the time.

Only when a flight scenario might go outside of conventional bounds would we expect tenser friction to arise. If there is a strong difference of opinion between the two, I would dare say that we want them to hash it out. Imagine a situation whereby the pilot stridently wants to proceed but the copilot perceives that the risks are too high.

Merely having the copilot kowtow to the pilot would seem undesirable. The copilot is a check-and-balance to what a pilot might be thinking of doing. For those who want a copilot to shut up and mindlessly carry out whatever the pilot decrees, well, that isn’t much of a reassurance.

A copilot is not simply a spare “pilot” that enters into the picture only when the pilot is utterly incapacitated. That’s a misguided understanding of the value of having a pilot and copilot in the cockpit. There is the other angle to this.

Consider the case of a pilot that doesn’t believe a flight should proceed and meanwhile the copilot is gung-ho about getting up in the air. What then? By the expected hierarchy, the pilot is supposed to conventionally prevail over the copilot. The designated role of being the one primarily in charge makes the pilot the greater of two otherwise roughly equal parties.

Normally, the pilot has more overall flying time seasoning than the copilot and ergo the copilot is hierarchically supposed to defer to the pilot’s wishes (when within reason). In any case, I think we can all agree that opting to not fly is an assuredly less risky choice than deciding to fly. Once the plane is up in the air, the risk levels get enormous in comparison to being on any ordinary stable ground.

A customary commercial flight that simply taxis back to the terminal without having gotten into the air would be a pretty amicable resolution to any heated acrimonious debate about going into flight. Let’s shift gears and use this spunky news item for an altogether different but relatable purpose. AI-based autonomous systems are gradually becoming prevalent amongst us.

Sometimes the AI runs the show, as it were. The AI does everything from A to Z, and we might construe this as AI that is fully autonomous or nearly so. In other cases, we can have AI that interacts with and to some degree is programmed to be reliant upon having a human-in-the-loop.

I’d like to concentrate on the matter of an AI-based autonomous or semi-autonomous system that from the get-go has a human in the loop. The AI and the human are intentionally thrust together and supposed to be working in tandem with each other. They are cohorts in performing a particular task at hand.

The AI alone is not supposed to be acting on the task. The AI must interact with the designated human-in-the-loop. I bring up this characterization to differentiate from situations wherein the human-in-the-loop is considered an optional facet.

In essence, the AI is given free rein. If the AI opts to make use of the human, so be it to do so. There is no requirement that the AI has to touch base with or work hand-in-hand with the designated human.

The analyses that I am about to relate are certainly pertinent to that kind of optional interaction arrangement, but it isn’t what I am specifically driving at in this particular discussion. Okay, so we have some kind of task that a human and an AI are going to be working together on, inseparably from each other. In an abstract sense, we have a human sitting in one seat and an AI system sitting in the other accompanying seat.

I say this cheekily because we aren’t confining this discussion to a robot for example that actually might be sitting in a seat. I am metaphorically alluding to the notion that the AI is somewhere participating in the task and so is the human. Physically, their whereabouts are not especially vital to the discussion.

You might be unsure of when such a circumstance might arise. Easy-peasy. I will, later on, be discussing the advent of autonomous vehicles and self-driving cars.

At certain levels of autonomy, the AI and the human are supposed to work together. The AI might be driving the car and request that the human take over the driving controls. The human might be driving the car and activate the AI to take over the controls.

They are taking turns at the driving controls. In addition, some designs are having the AI be essentially active all of the time (or, unless turned off), such that the AI is always at the ready. Furthermore, the AI might directly intervene, even without the human asking, depending upon the situation that is unfolding.

Suppose for example that the human seems to have fallen asleep at the wheel. Since the human cannot seemingly activate the AI (because the person is sleeping), the AI might be programmed to take over the controls from the human. Some designs bring the AI and humans into a dual driving approach.

The AI is driving and the human is driving. Or, if you prefer, the human is driving and the AI is also driving. They are each driving the vehicle.

I liken this to those specially rigged cars that maybe you used when taking driver training and there were two sets of driving controls in the vehicle, one for the student driver and one for the driving instructor. That is but one example of a setting in which AI and humans might be working jointly on a task. All manner of possibilities exists.

Other kinds of autonomous vehicles might be similarly devised, such as airplanes, drones, submersibles, surface ships, trains, and so on. We don’t have to only consider vehicular and transportation settings. Envision the medical domain and surgeries that are being performed jointly by a medical doctor and an AI system.

The list is endless. I almost feel like referring to the classically uproarious joke about a human and an AI that walk into a bar together. It’s quite a laugher for those into AI.

Seriously, let’s return to the focus of a human and an AI system that is working together on a given task. First, I want to avoid anthropomorphizing AI, which is something I will emphasize throughout. The AI is not sentient.

Please keep that in mind. Here’s something to mull over: Will a designated human-in-the-loop always be in utter agreement with a co-teamed AI? For any complex task, it would seem unlikely that the human and the AI will entirely and always be fully in lockstep. The human is on some occasions possibly going to disagree with the AI.

We can take that assumption all the way to the bank. I’d like you to also consider this perhaps surprising possibility: Will the AI always be in utter agreement with a designated human-in-the-loop? Again, for any complex task, it would seem quite conceivable that AI will not be in agreement with humans on some occasions. If you are already leaning toward the idea that AI must always be wrong while humans must always be right, you would be wise to rethink that hasty conclusion.

Envision a car that has a human and AI jointly driving the semi-autonomous vehicle. The human steers toward a brick wall. Why? We don’t know, perhaps the human is intoxicated or has fallen asleep, but we do know that crashing into a brick wall is not a good idea, all else being equal.

The AI might detect the upcoming calamity and seek to steer away from the impending barrier. All told, we are going to have the distinct possibility of the AI and the human disagreeing with each other. The other way to say the same thing is that humans and AI are disagreeing with each other.

Note that I don’t want the sequencing of AI-and-human versus human-and-AI to suggest anything about the direction or plausibility of the disagreement. The two workers, one human and one that is AI, are disagreeing with each other. We could in advance declare that whenever a disagreement happens between a given AI and a given human, we beforehand proclaim that the human prevails over the AI.

That being said, my illustrative example about the car that is heading into a brick wall would seem to dissuade us from assuming that the human is always necessarily going to be right. We could, in contrast, opt to declare in advance that whenever a disagreement arises, we will have beforehand established that the AI is right and the human is wrong. This is not a sensibly generalizable provision either.

Imagine a car in which the AI has some embedded software error or bug, and the AI is trying to steer the vehicle off the road and into a ditch. Assuming that all else is equal, the human ought to be able to overcome this AI driving action and prevent the vehicle from landing in the gully. Let’s do a quick summary of this:
- Will a human-in-the-loop always be in utter agreement with AI? Answer: No.
- Will AI always be in utter agreement with a human-in-the-loop? Answer: No.
- Will a human-in-the-loop always be right in comparison to AI? Answer: Not necessarily.
- Will the AI always be right in comparison to the human-in-the-loop? Answer: Not necessarily.

You can certainly set up the AI to be considered by default as the “wrong” or weaker party and therefore always defer to the human whenever a disagreement appears. Likewise, you can set up the AI to assume that the AI is considered “right” whenever a human is in disagreement with the AI. I want to clarify that we can programmatically do that if we wish to do so.
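To make that concrete, here is a minimal sketch of how such a fixed default could be coded; the names and structure are my own illustrative assumptions rather than a depiction of any actual fielded system.

```python
from enum import Enum

class Prevails(Enum):
    HUMAN = "human"   # human-in-the-loop wins any disagreement
    AI = "ai"         # AI wins any disagreement

# Illustrative default policy; either choice is equally easy to code, which is the point.
DEFAULT_POLICY = Prevails.HUMAN

def resolve_command(human_command, ai_command, policy=DEFAULT_POLICY):
    """Return the command to act on when the two parties disagree."""
    if human_command == ai_command:
        return human_command        # no disagreement, nothing to resolve
    if policy is Prevails.HUMAN:
        return human_command        # human prevails, by default
    return ai_command               # AI prevails, by default

# Example: the AI wants to brake, the human wants to steer; the default decides.
print(resolve_command("steer_left", "brake"))   # -> "steer_left" under the HUMAN default
```

The sketch shows only that either default is trivially programmable; the hard part is deciding which default, if any, is appropriate for a given setting.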

I am claiming though that in general, this is not always going to be the case. There are assuredly settings in which we do not know in advance whether the AI is “right” or the human is “right” in terms of opting toward one or the other on a disagreement related to a given task. I’ve led you to a very important and highly complicated question.

What should we do when a professional disagreement occurs between the human-in-the-loop and AI (or, equivalently, we can phrase this as being between the AI and the human-in-the-loop)? Do not try to dodge the question. Some might argue that this would never happen, but as I’ve laid out in my example about the car, it surely could happen. Some might argue that a human is obviously superior and must be the winner of any disagreement.

My example of the car and the brick wall knocks that one down. There are AI proponents that might insist that AI must be the winner, due to ostensibly overcoming human emotions and wanton thinking by those haphazard fuzzy-thinking humans. Once again, my other example entailing the car heading into the ditch undercuts that assertion.

In the real world, AI and humans are going to disagree, even when the two are purposely brought into a teaming situation to perform a jointly undertaken task. It will happen. We cannot put our heads in the sand and pretend it won’t occur.

We saw that the humans piloting the plane apparently got into a disagreement. Thankfully, they agreed to disagree, so it seems. They brought the plane back to the terminal.

They found a means to deal with the disagreement. The resolution to their disagreement worked out well, in comparison to if perhaps they had gone to fisticuffs in the cockpit or perhaps flown into the air and continued to be combative with each other. That is a sorrowful scenario that is untenable, and we can be thankful it did not occur.

Allow me to provide my list of the various ways in which the AI and human-in-the-loop (or, human-in-the-loop and AI) disagreements might be resolved:
- AI and the teamed-up human work things out (amicably or not)
- Human prevails over the AI, by default
- AI prevails over the human, by default
- Some other predetermined fixed resolution prevails, by default
- Third-party human is looped-in and their indication prevails over the parties
- Third-party AI is looped-in and its indication prevails over the parties
- Third-party human replaces the existing human, things proceed anew
- Third-party AI replaces the existing AI, things proceed anew
- Third-party human replaces the existing AI, things proceed anew (now human-to-human)
- Third-party AI replaces the existing human, things proceed anew (now AI-to-AI)
- Other

Those are abundantly worthy of being unpacked. Before getting into some more meat and potatoes about the wild and woolly considerations underlying how to deal with AI and human disagreements, let’s lay out some additional fundamentals on profoundly essential topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good . Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad .

For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here . Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness.

The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously heralding and promoting the preferable AI For Good. On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here.

We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here ). In a moment, I’ll share with you some overarching principles underlying AI Ethics.

There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news.

The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of. First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable.
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop.
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency.
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity.
- Reliability: AI systems must be able to work reliably.
- Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedures and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do.

Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road. The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, for which the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI. There isn’t any AI today that is sentient. We don’t have this.

We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here ). The type of AI that I am focusing on consists of the non-sentient AI that we have today.

If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human.

More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here ). Let’s keep things more down to earth and consider today’s computational non-sentient AI. Realize that today’s AI is not able to “think” in any fashion on par with human thinking.

When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities.

Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking. ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task.

You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data.

Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision. I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways.

Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se. Furthermore, the AI developers might not realize what is going on either.
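To illustrate that mimicry in miniature, consider a toy sketch using scikit-learn; the dataset below is fabricated purely for illustration, and the only claim is that a model fit to biased historical decisions will tend to echo them when given new data.

```python
# A toy illustration: a model trained on biased historical decisions mimics them.
# Assumes scikit-learn is installed; the data is invented solely for this example.
from sklearn.tree import DecisionTreeClassifier

# Each row: [qualification_score, group]  (group 0 or 1 is a stand-in attribute)
X_hist = [[80, 0], [75, 0], [60, 0], [82, 1], [78, 1], [61, 1]]
# Historical human decisions: group 1 applicants were rejected despite similar scores.
y_hist = [1, 1, 0, 0, 0, 0]   # 1 = approved, 0 = rejected

model = DecisionTreeClassifier().fit(X_hist, y_hist)

# Two new applicants with identical scores, differing only in group membership.
print(model.predict([[79, 0], [79, 1]]))  # likely [1, 0]: the old bias is mirrored
```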

The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithm decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

Let’s return to our focus on disagreements between AI and a human. I’ve earlier indicated these are some of the disagreement-resolving strategies:
- AI and the teamed-up human work things out (amicably or not)
- Human prevails over the AI, by default
- AI prevails over the human, by default
- Some other predetermined fixed resolution prevails, by default
- Third-party human is looped-in and their indication prevails over the parties
- Third-party AI is looped-in and its indication prevails over the parties
- Third-party human replaces the existing human, things proceed anew
- Third-party AI replaces the existing AI, things proceed anew
- Third-party human replaces the existing AI, things proceed anew (now human-to-human)
- Third-party AI replaces the existing human, things proceed anew (now AI-to-AI)
- Other

Time to unpack these.
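Before walking through them one by one, here is a minimal sketch of how that menu of strategies might be represented in software; the enumeration names are illustrative assumptions of mine, not a standard or any vendor's design.

```python
from enum import Enum, auto

class Resolution(Enum):
    WORK_IT_OUT = auto()            # AI and teamed-up human negotiate a resolution
    HUMAN_PREVAILS = auto()         # human-in-the-loop wins, by default
    AI_PREVAILS = auto()            # AI wins, by default
    FIXED_RULE = auto()             # some other predetermined resolution, by default
    THIRD_PARTY_HUMAN = auto()      # loop in a human arbiter whose call prevails
    THIRD_PARTY_AI = auto()         # loop in another AI whose call prevails
    REPLACE_HUMAN_WITH_HUMAN = auto()
    REPLACE_AI_WITH_AI = auto()
    REPLACE_AI_WITH_HUMAN = auto()  # now human-to-human
    REPLACE_HUMAN_WITH_AI = auto()  # now AI-to-AI
    OTHER = auto()

# A system designer would choose one strategy (or an escalation ladder of several)
# in advance, rather than improvising while a live disagreement is already underway.
ESCALATION_LADDER = [
    Resolution.WORK_IT_OUT,
    Resolution.THIRD_PARTY_HUMAN,
    Resolution.HUMAN_PREVAILS,
]
```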

First, consider that this is all about professional disagreements. A professional disagreement is loosely defined as a disagreement associated with a work-related task. For example, a disagreement that arises between a pilot and copilot about whether to proceed with a flight that is facing a storm could reasonably be labeled as a professional disagreement.

In contrast, a vehement disagreement over which brand of coffee the pilot advocates for versus the brand that the copilot prefers is readily categorized as a non-professional disagreement in this particular context. Of course, if a non-professional disagreement worms its way into a professional disagreement, we might ultimately be interested in the non-professional disagreement as a presumed source or spark for the professional one. Imagine that a pilot and copilot argue bitterly over which brand of coffee is the best, which then regrettably spills over into flight-specific concerns (pun!), such as whether to take off or not.

Second, we need to keep in mind the magnitude of the professional disagreement. Perhaps the pilot and copilot are in mild disagreement over proceeding to fly. They are not at loggerheads and are merely contemplating the pros and cons of whether to take off.

This is not the caliber or magnitude of a professional disagreement that we are customarily considering herein. The thing is, it could be that the professional disagreement is transitory and the two parties work out a resolution cordially or at least on a timely basis. Generally, the professional disagreements within scope are those that are seemingly intractable, wherein the two parties remain steadfastly in disagreement.

Third, there usually has to be something seriously on the line for these guidelines to come into play. Opting to fly or not fly is a decidedly life-or-death kind of decision if the flight is at risk due to a storm or the airplane is considered not fully prepared for such a journey. This is serious business.

We can still apply the guidelines to the less impactful professional disagreements, though it might be more bother than it is worth. Okay, our considerations are that:
- The disagreement is principally professionally oriented rather than over something non-professional
- The disagreement is of a sustained nature and not merely transitory or otherwise readily resolved
- The disagreement foretells serious consequences and is usually of an impactful outcome
- The parties are at loggerheads and they seem intractable

Let’s now take a closer look at each of my suggested guidelines or approaches regarding how to cope with such professional disagreements.

AI and the teamed-up human work things out (amicably or not)

I begin the list with the straightforward possibility that the AI and the human-in-the-loop are able to resolve the professional disagreement amongst themselves.

It seems that perhaps the instance of the two humans, the pilot and copilot, illustrates this kind of circumstance. They somehow resolved to return to the terminal and go their separate ways. It could be that an AI system and a human are able to figure out a resolving approach that is generally satisfactory to both parties and the matter is thus satisfactorily concluded.

Human prevails over the AI, by default

When setting up the AI, we might program a rule that says the human-in-the-loop shall always prevail whenever a professional disagreement arises. This would be the explicitly coded default. We might also allow some form of override, just in case, though the standing rule will be that the human prevails.

AI prevails over the human, by default

When setting up the AI, we might program a rule that says the AI shall always prevail over the human-in-the-loop whenever a professional disagreement arises. This is the explicitly coded default. We might also allow some form of override, just in case, though the standing rule will be that the AI prevails.

Some other predetermined fixed resolution prevails, by default

When setting up the AI, we might program a rule that says some other predetermined fixed resolution will prevail whenever a professional disagreement arises with the human-in-the-loop. The human-in-the-loop does not by default prevail. The AI does not by default prevail.

There is some other preidentified resolution. For example, perhaps there is the tossing of a coin that will be used to decide which of the two parties is considered the right path to take. That would obviously seem rather arbitrary; thus another example approach would be that a specialized rule kicks in that calculates a value based on inputs from the two parties and arrives at a result as a tiebreaker.
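To picture such a computed tiebreaker, here is a minimal sketch in which each party submits a confidence and an assessed risk, and a predetermined weighting decides the outcome; the weights and field names are invented solely for illustration.

```python
def tiebreak(human_proposal, ai_proposal):
    """Pick between two proposals via a predetermined scoring rule (illustrative only).

    Each proposal is a dict with a 'confidence' in [0, 1] and an 'assessed_risk'
    in [0, 1]; higher confidence and lower risk score better under this made-up rule.
    """
    def score(p):
        return 0.6 * p["confidence"] + 0.4 * (1.0 - p["assessed_risk"])

    return human_proposal if score(human_proposal) >= score(ai_proposal) else ai_proposal

# Example: the human is moderately confident, the AI is very confident but riskier.
human = {"party": "human", "confidence": 0.7, "assessed_risk": 0.2}
ai = {"party": "ai", "confidence": 0.9, "assessed_risk": 0.6}
print(tiebreak(human, ai)["party"])   # -> "human" under these illustrative weights
```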

Third-party human is looped-in and their indication prevails over the parties

Upon a professional disagreement, a rule could be that a third party that is a human is invoked and looped into the setting to make a decision about resolving the disagreement. The AI is programmed to defer to whatever the third-party human decides. The human already in the human-in-the-loop has been instructed beforehand that if such a situation arises, they too are to defer to the third-party human.

As an aside, you can likely anticipate that the human-in-the-loop might have angst over acceding to whatever the third-party human decides if the decision disagrees with the human-in-the-loop posture.

Third-party AI is looped-in and its indication prevails over the parties

Upon a professional disagreement, a rule could be that a third party that is a different AI system is invoked and looped into the setting to make a decision about resolving the disagreement. The original AI is programmed to defer to whatever the third-party AI decides.

The human already in the human-in-the-loop has been instructed beforehand that if such a situation arises, they too are to defer to the third-party AI. As an aside, you can likely anticipate that the human-in-the-loop might have angst over acceding to whatever the third-party AI decides if the decision disagrees with the human-in-the-loop posture.

Third-party human replaces the existing human, things proceed anew

Upon a professional disagreement, the human-in-the-loop is replaced by a third party that is a human and that becomes the henceforth human-in-the-loop.

The human that was the original human-in-the-loop for the task is no longer considered part of the task at hand. It is an open aspect as to what otherwise transpires with the now replaced human-in-the-loop, but we are saying that for sure they no longer have any ongoing role in the work task.

Third-party AI replaces the existing AI, things proceed anew

Upon a professional disagreement, the AI is replaced by a third-party AI and that becomes the henceforth AI used for the work task at hand.

The AI that was originally being used for the task is no longer considered part of the task at hand. It is an open aspect as to what otherwise transpires with the now replaced AI, but we are saying that for sure the AI no longer has any ongoing role in the work task.

Third-party human replaces the existing AI, things proceed anew (now human-to-human)

Upon a professional disagreement, the AI is replaced by a third-party human, and that person now becomes the co-teamed party that will be used for the work task at hand.

The AI that was originally being used for the task is no longer considered part of the task at hand. It is an open aspect as to what otherwise transpires with the now replaced AI, but we are saying that for sure the AI no longer has any ongoing role in the work task. In short, this now becomes a two-party human-to-human performed task.

Third-party AI replaces the existing human, things proceed anew (now AI-to-AI)

Upon a professional disagreement, the human-in-the-loop is replaced by a third-party AI and this AI becomes the henceforth fill-in for the preceding human-in-the-loop. The human that was the original human-in-the-loop for the task is no longer considered part of the task at hand. It is an open aspect as to what otherwise transpires with the now replaced human-in-the-loop, but we are saying that for sure they no longer have any ongoing role in the work task.

In short, this now becomes a two-party AI-to-AI performed task.

Other

Other variations can be devised to cope with a professional disagreement, but we’ve covered herein some of the keystones. How are we to decide which of those approaches is going to be the right one for a given situation? A wide variety of issues go into making such a choice.

There are technological considerations. There are business considerations. There are legal and ethical considerations.

To some degree, that is why AI Ethics and Ethical AI is such a crucial topic. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech.

They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset and doing so integrally to AI development and fielding is vital for producing appropriate AI, including (perhaps surprisingly or ironically) the assessment of how AI Ethics gets adopted by firms. Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI.

New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic.

There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars.

This will serve as a handy use case or exemplar for ample discussion on the topic. Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI-and-human disagreement resolutions, and if so, what does this showcase? Allow me a moment to unpack the question. First, note that there isn’t a human driver involved in a true self-driving car.

Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here .

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3.

The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here ).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI-Versus-Human Disagreement

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. Why is this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI.

Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving.

Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. Let’s dive into the myriad of aspects that come to play on this topic. First, it is important to realize that not all AI self-driving cars are the same.

Each automaker and self-driving tech firm is taking its approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do. Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing.

Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system. I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

For fully autonomous vehicles there might not be any chance of a professional disagreement between a human and the AI due to the possibility that there isn’t any human-in-the-loop to start with. The aspiration for many of today’s self-driving car makers is to remove the human driver completely from the driving task. The vehicle will not even contain human-accessible driving controls.

In that case, a human driver, if present, won’t be able to partake in the driving task since they lack access to any driving controls. For some fully autonomous vehicles, some designs still allow for a human to be in-the-loop, though the human does not have to be available or partake in the driving process at all. Thus, a human can participate in driving, if the person wishes to do so.

At no point though is the AI reliant upon the human to perform any of the driving tasks. In the case of semi-autonomous vehicles, there is a hand-in-hand relationship between the human driver and the AI. The human driver can take over the driving controls entirely and essentially stop the AI from partaking in the driving.

If the human driver wishes to reinstate the AI into the driving role, they can do so, though this then sometimes forces the human to relinquish the driving controls. Another form of semi-autonomous operation would entail the human driver and the AI working together in a teaming manner. The AI is driving and the human is driving.

They are driving together. The AI might defer to the human. The human might defer to the AI.

At some point, the AI driving system and the human driver in the loop might reach a juncture of a “professional disagreement” as to the driving task at hand. To illustrate how some of the aforementioned rules of dealing with a professional disagreement can be challenging to implement, consider the instance of invoking a third-party human to enter into the matter and proffer a decision to resolve the unresolved issue. Suppose an automaker or self-driving tech firm has arranged for remote human operators to have access to the driving controls of vehicles within their fleet.

The human operator is sitting in some faraway office or akin setting. Via a computer system, they are able to view the driving scene via accessing the cameras and other sensor devices loaded onto the self-driving car. To them, this is almost like playing an online video game, though, of course, the real-life circumstances have potentially dire consequences.

An AI system and a human driver inside the car are driving a semi-autonomous vehicle down a long highway. All of a sudden, the AI wants to steer into a ditch. The human driver doesn’t want to do this.

The two are tussling over the driving controls. How will this be resolved? We could have perhaps instituted beforehand that the human always wins. Assume though that we opted not to do that.

We could have instituted beforehand that AI always wins. Assume that we opted to not do that. All in all, we didn’t adopt any of those rules, other than we did decide to allow for a third-party human to intervene and resolve a professional disagreement of any substantive nature.

In this use case, the AI and the human driver at the wheel are fighting for the driving controls. This is let’s say conveyed to the remote human operator (our third-party human). The remote human operator examines what is taking place and decides to steer away from the ditch, seemingly averting what the AI was trying to do.

At the same time, suppose the remote human operator steers into oncoming traffic, which perhaps neither the AI nor the human driver inside the car had wanted to do. The point is that the way in which this rule has been implemented is that the third-party human operator is able to completely override both the AI and the human-in-the-loop. Whether this is going to produce a good outcome is assuredly not assured.

I will use this example to highlight some added insights on these matters. You cannot make the brazen assumption that just because one of these rules is put into place the outcome of the resolved disagreement is necessarily a guaranteed good outcome. It might not be.

There isn’t any ironclad always-right kind of rule that can be selected. Next, some of these rules might not be viably implementable. Consider the example of the remote human operator intervening when the AI and the human driver are brawling over the driving controls.

It might take many seconds of time for the remote human operator to figure out what is going on. By then, the vehicle might already have ended up in the ditch or had some other adverse outcome. Also, suppose that the location of the vehicle precludes remote access such as being in some place where there isn’t any network electronic connectivity.

Or maybe the networking features of the vehicle aren’t working at that particular moment. As you can see, the rule might look dandy on paper, though putting the rule into actual use might be a very difficult or highly chancy approach. See my critical-eye coverage on the remote operator of autonomous vehicles and self-driving cars at the link here .
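Those practical snags suggest that any looped-in third-party rule needs a fallback baked into it. Here is a minimal sketch of that idea, assuming a remote operator who may be slow or unreachable; the timeout value, fallback choice, and function names are illustrative assumptions, not a description of any real fleet operation.

```python
import queue
import threading

def request_remote_decision(ask_operator, timeout_seconds=2.0, fallback="human_prevails"):
    """Ask a remote human operator to arbitrate, but fall back if no timely answer.

    ask_operator is any callable that returns the operator's decision; it may be
    slow or may raise if connectivity is unavailable (both are the risky cases).
    """
    answers = queue.Queue()

    def worker():
        try:
            answers.put(ask_operator())
        except Exception:
            pass  # lost connectivity or operator error: treat as no answer

    threading.Thread(target=worker, daemon=True).start()
    try:
        return answers.get(timeout=timeout_seconds)
    except queue.Empty:
        return fallback   # deadline passed: apply the predetermined default instead

# Example: an operator who never responds in time triggers the fallback.
print(request_remote_decision(lambda: threading.Event().wait(10) or "steer_left"))
```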

I’d like to briefly cover another related topic that I will be covering in greater depth in an upcoming analysis. One of the rising concerns about autonomous vehicles and self-driving cars that are semi-autonomous is the so-called Hot Potato Syndrome . Here’s the deal.

An AI driving system and a human are co-driving. A dire predicament arises. The AI has been programmed to drop out of the driving task and turn things over to the human when a dire moment occurs.

This seems perhaps “sensible” in that we seem to be invoking the rule about the human being the default “winner” in any potential professional disagreement. But the AI dropping out might be for more nefarious or considered insidious purposes. It could be that the automaker or self-driving tech firm doesn’t want their AI to be considered the “party at fault” when a car crash occurs.

To seemingly avoid getting pinned down like that, the AI abruptly hands over the controls to the human. Voila, the human is now presumably completely responsible for the vehicle. The kicker is that suppose the AI does this handoff with let’s say one second left to go before a crash occurs.

Would the human really have any available time to avert the crash? Likely not. Suppose the AI does the handoff with a few milliseconds or nanoseconds left to go. I dare say that human has essentially zero chance of doing anything to avert the crash.
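One conceivable guard against such a last-instant handoff is to refuse any transfer of control when the remaining time is below a plausible human reaction window. Here is a minimal sketch; the 1.5-second threshold is an illustrative assumption of mine, not an industry or regulatory figure.

```python
MIN_HUMAN_TAKEOVER_SECONDS = 1.5   # illustrative floor for a plausible human reaction

def may_hand_off_to_human(time_to_collision_seconds, driver_is_attentive):
    """Allow a handoff only if the human realistically has time (and attention) to act."""
    if not driver_is_attentive:
        return False
    return time_to_collision_seconds >= MIN_HUMAN_TAKEOVER_SECONDS

# A handoff one second (or a few milliseconds) before impact would be refused,
# leaving the AI responsible for attempting its own mitigation instead.
print(may_hand_off_to_human(1.0, driver_is_attentive=True))    # -> False
print(may_hand_off_to_human(0.005, driver_is_attentive=True))  # -> False
print(may_hand_off_to_human(4.0, driver_is_attentive=True))    # -> True
```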

From the perspective of the automaker or self-driving car firm, they can try to act as though their hands were clean when such a car crash occurs. The car was being driven by a human. The AI wasn’t driving the car.

The only “logical” conclusion would seem to be that the human must be at fault and the AI must be completely blameless. It’s a crock. I will be discussing this in more depth in an upcoming column.

Conclusion

Professional disagreements are going to occur. It is hard to imagine any complex task that has two parties co-performing the task and for which there would never ever be any professional disagreements that arise. This seems like a fantasyland or at least a grand rarity.

Today, we have lots and lots of human-to-human instances of professional disagreement, for which on a daily basis resolutions are peacefully and sensibly figured out one way or another. In fact, we oftentimes set up situations intentionally to foster and surface professional disagreements. You might argue that this showcases the famed wisdom that sometimes two heads are better than one.

As AI becomes more prevalent, we are going to have lots of AI-to-human or human-to-AI two-party task performers and there are going to be professional disagreements that will occur. The lazy approach is to always defer to the human. This might not be the most suitable approach.

AI might be the better choice. Or one of the other aforementioned rules might be a sounder approach. There is that sage line oft repeated that we all ought to generally be able to agree to disagree, though when it comes down to the wire, sometimes a disagreement has to be unequivocally resolved else the matter at hand will lead to untold calamity.

We can’t just let a disagreement languish on the vine. Time might be of the essence and lives might be at stake. There is a clear-cut requirement for some prudent means to resolve disagreements even if not necessarily agreeably so, including when AI and a human-in-the-loop aren’t seeing eye-to-eye nor byte-to-byte.

I trust that you won’t disagree with that altogether agreeable contention. Follow me on Twitter.


From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/07/23/ai-ethics-and-autonomous-systems-lessons-gleaned-from-that-recent-alaska-airlines-flight-where-the-pilot-and-co-pilot-disagreed-prior-to-taking-off-and-abruptly-opted-to-taxi-back-to-the-terminal-and-go-their-separate-ways/
