In your daily activities, you are undoubtedly bombarded with a thousand or more precautionary warnings of one kind or another, ranging from the label on a baby stroller that warns you not to fold it while the baby is inside, to the gag label that instructs you to ignore the warning itself. Most of those are handy and altogether thoughtful signs or labels that serve to keep us safe and secure. Please be aware that I snuck a few "outliers" into that mix to make some noteworthy points.
For example, some people believe it is nutty that baby strollers have an affixed label that warns you to not fold the stroller while the baby is still seated within the contraption. Though the sign is certainly appropriate and dutifully useful, it would seem that basic common sense would already be sufficient. What person would not of their own mindful volition realize that they first need to remove the baby? Well, others emphasize that such labels do serve an important purpose.
First, someone might truly be oblivious that they need to remove the baby before folding up the stroller. Perhaps the person assumed that the stroller was astutely designed to ensure that the folding operation would not harm the baby. Or possibly there are built-in safety features that prevent folding whenever there is a child within the stroller.
And so on. It could also be that the person was distracted and would have mindlessly folded up the stroller, baby and all, but the label luckily prompted the person not to do so (some would argue that it seems unlikely that such a person would notice the label anyway). There is also the fact that someone might sue the maker of the stroller if such a label weren't present.
You can imagine one of those hefty lawsuits whereby a baby was injured and the parent sought a million dollars because the stroller lacked a posted warning sign. The stroller company would regret not having spent the few cents needed to make and paste the warning labels onto their strollers. Take a close look too at the last of those "outlier" warnings that I mentioned above, the one telling you to ignore the warning itself.
I opted to include that smarmy message about making sure to ignore the message because there are many fake warning labels that people buy or make just for fun these days. This one is a real philosophical mind-bender. If you read a warning that says to ignore the warning, what exactly are you to do upon having read the warning? You already read it, so it is ostensibly in your mind.
Sure, you can ignore it, but then again, you aren’t at all advised as to what it is that you are supposed to be ignoring. Round and round, this joke goes. Of course, bona fide warning labels are not usually of a joking or humorous nature.
We are supposed to take warnings quite seriously. Usually, if you fail to abide by a noted warning, you do so at grievous personal risk. It could also be that if you don't observe and comply with the warning, you will potentially put others at undue risk too.
Consider the act of driving a car. The moment you are behind the wheel, your actions as the driver can harm yourself, harm your passengers, and harm others such as people in other cars or nearby pedestrians. In that sense, a warning is not solely for your benefit; it is likely for the benefit of others too.
Why am I covering these facets of warnings and cautionary labels? Because some are vehemently asserting that we need warning and cautionary signs on today's Artificial Intelligence (AI). Actually, the notion is to put such labels on today's AI and on future AI too. In short, all AI would end up having some form or variant of a warning or cautionary indication associated with it.
Good idea or bad idea? Practical or impractical? Let's go ahead and unpack the concept and see what we can make of it. I'd like to first lay some essential foundation about AI and particularly AI Ethics and AI Law, doing so to make sure that the topic of AI warnings will be contextually sensible. For those of you generally interested in Ethical AI and also AI Law, see my extensive and ongoing coverage at the link here and the link here, just to name a few.
The Rising Awareness Of Ethical AI And Also AI Law

The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and that makes computational choices imbued with undue biases.
Sometimes the AI is built that way, while in other instances it veers into that untoward territory. I want to make abundantly sure that we are on the same page about the nature of today’s AI. There isn’t any AI today that is sentient.
We don't have this. We don't know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality.
You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here). I'd strongly suggest that we keep things down to earth and consider today's computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching.
This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn't any AI today that has a semblance of common sense, nor does any AI have the cognitive wonderment of robust human thinking. Be very careful of anthropomorphizing today's AI.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models.
Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use them when encountering new data. Upon the presentation of new data, the patterns based on the "old" or historical data are applied to render a current decision.
I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly.
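To make that mimicry concrete, here is a minimal illustrative sketch using entirely made-up loan-approval data in which the historical human decisions carry an unfair penalty against one group; a simple pattern-matching model fitted on that data will tend to learn and reproduce the very same penalty. The data, feature names, and thresholds are my own illustrative assumptions, not anything drawn from a real system.

```python
# Sketch: a pattern-matching model fitted on biased historical decisions
# tends to reproduce the bias. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50, 15, n)      # applicant income (arbitrary units)
group = rng.integers(0, 2, n)       # 0 or 1, a protected attribute
# Historical human decisions: driven by income, but with an unfair penalty on group 1.
approved = (income - 10 * group + rng.normal(0, 5, n)) > 45

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The learned weights simply mimic the historical pattern, bias included.
print("income weight:", model.coef_[0][0])
print("group weight: ", model.coef_[0][1])  # negative: group 1 is penalized
```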
There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se. Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases.
You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will still be biases embedded within the pattern-matching models of the ML/DL. You could somewhat use the famous or infamous adage of garbage-in, garbage-out.
The thing is, this is more akin to biases-in: biases that insidiously get infused and remain submerged within the AI. The algorithmic decision-making (ADM) of the AI axiomatically becomes laden with inequities. Not good.
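As a hint of what such bias testing might involve, here is one small hedged sketch, continuing the made-up loan example above, of a rudimentary disparity check that compares predicted approval rates across groups. A gap like this is only a starting signal rather than proof of unfairness, and real AI audits use far richer methods.

```python
# Sketch of a simple disparity check: compare predicted approval rates by group.
# Continues the synthetic example above; a genuine bias audit would go much deeper.
import numpy as np

def approval_rate_gap(model, X, group_col=1):
    """Difference in predicted approval rates between group 0 and group 1."""
    preds = model.predict(X)
    g = X[:, group_col]
    rate0 = preds[g == 0].mean()
    rate1 = preds[g == 1].mean()
    return rate0 - rate1

# Example usage with the model and data from the prior sketch:
# X = np.column_stack([income, group])
# print("approval-rate gap:", approval_rate_gap(model, X))
```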
All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI. Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised.
The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws. Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient.
They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. In prior columns, I've covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
I've previously closely explored a helpful keystone list of Ethical AI criteria or characteristics for AI systems. Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the emerging norms of Ethical AI. This is an important highlight since the usual assumption is that "only coders" or those that program the AI are subject to adhering to the AI Ethics notions.
As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts. I also recently examined the AI Bill of Rights, the shorthand title of the official U.S. government document entitled "Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People," which was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House. In the AI Bill of Rights, there are five keystone categories: Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback. I've carefully reviewed those precepts, see the link here.
Now that I've laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of whether or not AI ought to have warning labels. Get yourself ready for an eye-opening and informative journey.

Putting Warning Labels Onto AI As A Means Of Protecting Humankind

There is growing buzz afoot that perhaps AI ought to come with a warning of some kind.
For example, in a recent article in MIT Technology Review that discussed the rising interest in doing audits of AI, the notion of providing warning signs arose: "The growth of these audits suggests that one day we might see cigarette-pack-style warnings that AI systems could harm your health and safety. Other sectors, such as chemicals and food, have regular audits to ensure that products are safe to use. Could something like this become the norm in AI?" (MIT Technology Review, Melissa Heikkilä, October 24, 2022).
Let’s chat a bit overall about warning and cautionary signs. We are all already generally familiar with warning and cautionary signs on numerous products and services of everyday nature. Sometimes the safety signs are stipulated by law as being required, while in other instances there is discretion as to utilizing such signs.
In addition, some safety signs are standardized, such as being specified per the ANSI Z535 standard and the OSHA 1910.145 standard. Those standards cover such aspects as the wording of the sign, the colors and fonts used, the shape of the sign, and various other particulars.
It makes sense to have standards for such warning signs. Whenever you see such a sign, if it meets the stated standards you are more likely to believe that the sign is legitimate, plus you require less cognitive processing to analyze and make sense of it. For signs that do not follow a standardized approach, you often have to mentally figure out what the sign is telling you, perhaps losing precious time needed to take action, or you might entirely misjudge the sign and not properly comprehend the warning at hand.
A range of such signs can exist, and they are often classified somewhat like this: Danger (an immediate hazard that will result in death or serious injury), Warning (a hazard that could result in death or serious injury), Caution (a hazard that could result in minor or moderate injury), and Notice (information not related to personal injury). Given the foregoing background about warnings and cautionary signs, we can go ahead and try applying this contextually to the use of AI. First, we could potentially agree that AI signage might span a similar range of severity as is customary with everyday signs, running from danger-level alerts down to mere notifications. The gist is that not all AI will be of equally disconcerting concern. It could be that a given AI poses tremendous risks or only marginal risks.
Accordingly, the signage should reflect which is which. If all AI warnings or cautionary signs were simply shown as being of the same caliber, we presumably would not know whether to be greatly or only mildly concerned about a given AI. Using a typified wording scheme might make life easier when seeing and interpreting AI-related warning signs.
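To make the tiered idea a bit more tangible, here is a minimal, purely hypothetical sketch of what a severity-graded AI signage scheme might look like in code. The level names, messages, and structure are my own illustrative assumptions; no such AI signage standard currently exists.

```python
# Hypothetical sketch of a severity-tiered AI warning scheme, loosely modeled
# on everyday safety-sign levels. Names and wording are illustrative only.
from dataclasses import dataclass
from enum import Enum

class AISignalLevel(Enum):
    DANGER = 1        # grave, immediate risk of serious harm
    WARNING = 2       # significant risk of harm
    CAUTION = 3       # moderate or situational risk
    NOTIFICATION = 4  # informational: AI is in use, minimal risk

@dataclass
class AIWarningLabel:
    level: AISignalLevel
    message: str

# Example usage with made-up messages:
labels = [
    AIWarningLabel(AISignalLevel.WARNING,
                   "This AI assists in decisions that can significantly affect you."),
    AIWarningLabel(AISignalLevel.NOTIFICATION,
                   "This app uses AI to personalize suggestions."),
]
for lbl in labels:
    print(f"[{lbl.level.name}] {lbl.message}")
```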
You might be wondering what else the signs would say in terms of the specifics of the message to be conveyed. That's certainly a harder matter to tackle. The range of things an AI might do that are considered unsafe or worrisome is enormously large, nearly endless.
Can we boil them down into some routinized set that is spelled out in just a handful of words each? Or are we going to have to allow for a more freeform approach to how the wording will go? I’ll let you ponder that open question. Moving on, we tend to see warning signs posted on objects. An AI system is not necessarily an “object” in the same sense as the customary affixing of signs.
This raises the question of where the AI warning signs are going to be presented. It could be that when you are running an app on, say, your smartphone, and the app is making use of AI, a message would be shown that contains the AI-related warning. The app would alert you about the underlying use of AI.
This could be a pop-up message or might be presented in a variety of visual and even auditory ways. There is also the situation of your using a service that is backed by AI, though you might not be directly interacting with the AI. For example, you could be having a conversation with a human agent about getting a home mortgage, for which the agent is silently behind-the-scenes using an AI system to guide their efforts.
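As a rough illustration of the direct-use case, here is a minimal hypothetical sketch of an app that surfaces an AI-use notice, and optionally asks for acknowledgment, before an AI-backed feature runs. The wording, function names, and console-based flow are illustrative assumptions only, not a prescribed design.

```python
# Hypothetical sketch of an in-app AI-use disclosure shown before an AI feature runs.
# Message text and acknowledgment step are illustrative, not from any standard.
AI_NOTICE = (
    "Notice: This feature uses an AI system to help generate its results. "
    "The AI's outputs may be imperfect; please review them before relying on them."
)

def show_ai_notice_and_confirm() -> bool:
    """Display the AI-use notice and ask the user to acknowledge it."""
    print(AI_NOTICE)
    answer = input("Type 'ok' to acknowledge and continue: ").strip().lower()
    return answer == "ok"

def run_ai_feature():
    if not show_ai_notice_and_confirm():
        print("AI feature not started; acknowledgment was not given.")
        return
    # ... invoke the AI-backed functionality here ...
    print("Running the AI-backed feature.")

if __name__ == "__main__":
    run_ai_feature()
```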
In both of those settings, then, we should anticipate that AI warning signs would be present: once when you are directly interacting with the AI, and also when the AI is being used behind the scenes on your behalf. Skeptics would certainly be quick to suggest that the AI warning sign might be deviously disguised by the app maker so that the alert isn't especially noticeable. Perhaps the warning appears for a brief instant and you really do not notice that it was shown. Another sneaky trick would be to make a kind of game out of it, seemingly undercutting the seriousness of the alert and lulling people into thinking that the warning is inconsequential.
Yes, you can pretty much bet that any AI-related warning signs are going to be tricked out and used in ways that aren’t within the spirit and intent of the cautionary messaging they are supposed to convey. All of those skepticisms dovetail into whether we would use AI Ethics as “soft laws” to try and drive forward on AI-related warnings and cautions, or whether we would make use of “hard laws” including enacted laws with court backings. In theory, the soft law push would allow for a greater semblance of toying with the matter, while laws on the books would come with the potential for government enforcement and bring forth greater strictness.
Another concern is whether the AI signage would make any difference anyway. Lots of research on consumer behavior tends to show muddled results when it comes to providing warning labels for consumers. Sometimes the warnings are effective, sometimes not.
Let's apply some of that research to the AI context and consider the relevant facets. You can anticipate that if momentum gains for having AI warning signs, researchers are going to closely study the matter. The idea would be to aid in clarifying under what circumstances the warnings are helpful and when they are not.
This in turn relates to the design and look and feel involved. We might also expect that the public at large would need to be educated or informed about the emergence of these AI warning and cautionary signs so that they will become accustomed to them. There are lots of twists and turns to be considered.
Should the AI warning or cautionary sign occur only at the start of the app being used? Perhaps the starting point isn’t sufficient by itself. A person might at first glance think the app is just playing it safe by using a warning, and thus not give much regard to the warning. Once the person is midway into using the app, perhaps an additional warning would be warranted.
At that juncture, the person is more aware of what the app is undertaking and they might have already forgotten about the earlier warning message. The AI warning or cautionary indication might therefore occur at various junctures, such as only at the start of use, again partway through, or repeatedly at especially consequential moments. Do you think that the person using the AI needs to confirm or acknowledge any such AI warnings or cautionary signs? Some would insist that the person should be required to acknowledge that they saw the warnings. This would potentially forestall those that might make a claim that there wasn't any warning message.
It could also cause people to take the messages more seriously, given that they are being asked to confirm that they observed the warning indication. Here’s a tough question for you. Should all AI have to provide a warning or safety indication? You might argue that yes, all AI must have this.
Even if the AI is only a marginal safety risk, it should still have some form of warning sign, possibly just a notification or informational indication. Others would dispute this contention. They would counterargue that you are going overboard.
Not all AI needs this. Furthermore, if you force all AI to have it, you will dilute the purpose. People will become numb to seeing the warning indications.
The entire approach will become a farce. On a related angle, consider what the AI makers are likely to say. One viewpoint would be that this is going to be a pain in the you-know-what to put in place.
Having to alter all of your AI systems and add this feature could be costly. Adding it on a go-forward basis might be less costly, though nonetheless there is a cost associated with including this provision. The haziness is another potential complaint.
What kind of message needs to be shown? If there aren't any standards, this means that the AI makers will need to craft something anew. Not only does this add cost, it possibly opens the door to getting sued. You can imagine that an AI maker might be taken to task because their warning wasn't as good as that of some other AI system.
Lawyers will end up making a bundle from the resulting confusion. There is also the concern that people will be unduly scared by these safety messages. An AI maker might see a precipitous drop in the number of users due to a safety message that is merely intended to provide an informational caution.
People won't know what to make of the message. If they haven't seen such messages before, they might get immediately freaked out and needlessly avoid using the AI. I mention those facets because some are already proclaiming that "it can't hurt" to start including AI warnings or safety messages.
To them, the idea is as ideal as apple pie. No one could seriously refute the benefits of always requiring AI-related safety or warnings. But of course, that’s not a real-world way of thinking about the matter.
You could even go further and claim that the AI warning messages could produce some form of mass hysteria. If we all of a sudden started forcing AI systems to have such messages, people could misconstrue the intent. You see, there are lots of exhortations that we are facing an AI existential crisis, whereby AI is going to take over the world and wipe out humankind, see the link here for my analysis of those kinds of claims.
The use of the AI warning message might "inspire" some people into believing that the end is near. Why else would the messages now be appearing? Certainly, it must be an indication that, finally, AI is getting ready to become our oppressive overlord. That would be a despairingly bleak outcome of trying to do the right thing by making the public aware of how AI can be infused with discriminatory capacities or other AI For Bad aspects.
Conclusion

James Russell Lowell, the famous American poet, once said this: "One thorn of experience is worth a whole wilderness of warning." I bring up the lofty remark to make a few concluding comments on the matter of AI warnings or safety messages. The chances are that until we have a somewhat widespread realization of AI doing bad deeds, there will not be much of a clamor for AI-related warning messages.
Those AI makers that opt to voluntarily do so will perhaps be lauded, though they could also inadvertently find themselves unfairly harassed. Some pundits might go after them for fielding AI that seemingly needs warnings to begin with, meanwhile neglecting to realize that the maker was trying to go first while all those other AI makers are lagging behind. Darned if you do, perhaps not darned if you don't.
Another consideration deals with turning what seems to be a nice conceptual idea into a practical and useful practice. It is easy to wave your hands and vaguely shout to the winds that AI ought to have warning and safety messages. The hard part comes when deciding on what those messages should be, when and where they should be presented, whether or to what degree their cost derives sufficient benefit, and so on.
We shall herein give James Russell Lowell the last word (poetically so) on this topic for now: "Creativity is not the finding of a thing, but the making something out of it after it is found." Guess it is time to roll up our sleeves and get to work on this. And that's not just a warning; it's an action.