
AI Ethics And AI Law Asking Hard Questions About That New Pledge By Dancing Robot Makers Saying They Will Avert AI Weaponization


By Lance Eliot, Contributor. Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).

Oct 9, 2022, 08:00am EDT

What will happen if those fun-looking dancing robots become weaponized? (Getty Images)

You might perchance have seen in the news last week, or noticed on social media, the announced pledge by some robot makers about their professed aim to avoid AI weaponization of general-purpose robots.

I’ll be walking you through the details in a moment, so don’t worry if you haven’t caught wind of the matter. The reaction to this proclamation has been swift and, perhaps as usual in our polarized society, both laudatory and at times mockingly critical or downright nastily skeptical. It is a tale of two worlds.

In one world, some say that this is exactly what we need for responsible AI robot developers to declare. Thank goodness for being on the right side of an issue that will gradually be getting more visible and more worrisome. Those cute dancing robots are troubling because it is pretty easy to rejigger them to carry weapons and be used in the worst of ways (you can check this out yourself on social media, where there are plenty of videos showcasing dancing robots armed with machine guns and other armaments).

The other side of this coin says that the so-called pledge is nothing more than a marketing or public relations ploy (as a side note, is anybody familiar with the difference between a pledge and a donation?). Anyway, the doubters exhort that this is unbridled virtue signaling in the context of dancing robots. You see, bemoaning the fact that general-purpose robots can be weaponized is certainly a worthwhile and earnestly sought consideration, though merely claiming that a maker won’t do so is likely a hollow promise, some insist.

All in all, the entire matter brings up quite a hefty set of AI Ethics and AI Law considerations. We will meticulously unpack the topic and see how this is a double-whammy of an ethical and legal AI morass. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

I will also be referring throughout this discussion to my prior analyses of the dangers of AI weaponization, such as my in-depth assessment at the link here. You might want to take a look at that discourse for additional behind-the-scenes details.

The Open Letter That Opens A Can Of Worms

Let’s begin this analysis by doing a careful step-by-step exploration of the Open Letter that was recently published by six relatively well-known advanced robot makers, namely Boston Dynamics, Clearpath Robotics, ANYbotics, Agility Robotics, Open Robotics, and Unitree. By and large, I am guessing that you have seen mainly the Boston Dynamics robots, such as the ones that prance around on all fours. They look as though they are dog-like and we relish seeing them scampering around.

As I’ve previously and repeatedly forewarned, the use of such “dancing” robots as a means of convincing the general public that these robots are cutesy and adorable is sadly misleading and veers into the abundant pitfalls of anthropomorphizing them. We begin to think of these hardened pieces of metal and plastic as though they are the equivalent of a cuddly loyal dog. Our willingness to accept these robots is predicated on a false sense of safety and assurance.

Sure, you’ve got to make a buck and the odds of doing so are enhanced by parading around dancing robots, but this regrettably omits or seemingly hides the real fact that these robots are robots and that the AI controlling the robots can be devised wrongfully or go awry. Consider these ramifications of AI (excerpted from my article on AI weaponization, found at the link here):

- AI might encounter an error that causes it to go astray
- AI might be overwhelmed and lock up unresponsively
- AI might contain developer bugs that cause erratic behavior
- AI might be corrupted with an implanted evildoer virus
- AI might be taken over by cyberhackers in real-time
- AI might be considered unpredictable due to complexities
- AI might computationally make the “wrong” decision (relatively)
- Etc.

Those are points regarding AI of the type that is genuinely devised at the get-go to do the right thing.

On top of those considerations, you have to include AI systems crafted from inception to do bad things. You can have AI that is made for beneficial purposes, often referred to as AI For Good. You can also have AI that is intentionally made for bad purposes, known as AI For Bad.

Furthermore, you can have AI For Good that is corrupted or rejiggered into becoming AI For Bad. By the way, none of this has anything to do with AI becoming sentient, which I mention because some keep exclaiming that today’s AI is either sentient or on the verge of being sentient. Not so.

I take apart those myths in my analysis at the link here . Let’s make sure then that we are on the same page about the nature of today’s AI. There isn’t any AI today that is sentient.

We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here ).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality.

You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here ). I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching.

This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking. Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models.

Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
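To make that pattern-matching cycle concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is an illustrative assumption on my part (a made-up loan-approval task, fabricated feature values, and an arbitrary model choice); it is not drawn from the Open Letter or from any particular vendor’s system.

# Minimal sketch of the ML/DL "computational pattern matching" cycle:
# assemble historical decision data, fit a model to find patterns,
# then apply those patterns to render a decision on new data.
# All numbers are fabricated purely for illustration.
from sklearn.linear_model import LogisticRegression

# Historical decisions: [income_in_thousands, years_employed] -> approved (1) or denied (0)
historical_features = [[45, 2], [90, 10], [30, 1], [120, 8], [55, 4], [25, 0]]
historical_decisions = [0, 1, 0, 1, 1, 0]

model = LogisticRegression()
model.fit(historical_features, historical_decisions)  # "find mathematical patterns"

# A new applicant gets scored using patterns learned from the old data.
new_applicant = [[60, 3]]
print(model.predict(new_applicant))        # current decision rendered from historical patterns
print(model.predict_proba(new_applicant))  # a probability, not any form of understanding

Nothing in that snippet comprehends loans or applicants; it is numbers being fit to numbers, which is exactly why whatever was in the historical data is what gets echoed back.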

I think you can guess where this is heading. If the humans who have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly.

There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se. Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases.

You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that, even with relatively extensive testing, there will still be biases embedded within the pattern-matching models of the ML/DL. You could somewhat use the famous or infamous adage of garbage-in, garbage-out.

The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities. Not good.
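Continuing that toy sketch, here is one minimal, hedged illustration of the kind of bias probe a developer might run: compare the model’s predicted approval rates across two hypothetical groups on held-out cases that are nearly identical. The group labels and figures are fabricated, and a real fairness audit would require far more rigor than this.

# Toy bias probe (illustrative only): compare predicted approval rates
# across two hypothetical groups, using the `model` fit in the sketch above.
held_out_by_group = {
    "group_a": [[48, 3], [75, 6], [52, 2]],
    "group_b": [[47, 3], [74, 6], [51, 2]],
}
for group, features in held_out_by_group.items():
    rate = sum(model.predict(features)) / len(features)
    print(f"{group}: predicted approval rate = {rate:.2f}")
# A persistent gap between near-identical applicants would hint that the
# historical decisions baked a bias into the learned patterns, even though
# no developer deliberately put it there.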

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI. Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised.

The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws. Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient.

They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here , for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here .

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions.

As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts. Now that I’ve laid a helpful foundation for getting into the Open Letter, we are ready to dive in. The official subject title of the Open Letter is this: “An Open Letter to the Robotics Industry and our Communities, General Purpose Robots Should Not Be Weaponized” (as per posted online).

So far, so good. The title almost seems like ice cream and apple pie. How could anyone dispute this as an earnest call to avoid AI robot weaponization? Read on to see.

First, as fodder for consideration, here’s the official opening paragraph of the Open Letter: “We are some of the world’s leading companies dedicated to introducing new generations of advanced mobile robotics to society. These new generations of robots are more accessible, easier to operate, more autonomous, affordable, and adaptable than previous generations, and capable of navigating into locations previously inaccessible to automated or remotely-controlled technologies. We believe that advanced mobile robots will provide great benefit to society as co-workers in industry and companions in our homes” (as per posted online).

The sunny side to the advent of these types of robots is that we can anticipate a lot of great benefits to emerge. No doubt about it. You might have a robot in your home that can do those Jetson-like activities such as cleaning your house, washing your dishes, and other chores around the household.

We will have advanced robots for use in factories and manufacturing facilities. Robots can potentially crawl or maneuver into tight spaces, such as when a building has collapsed and human lives are at stake. And so on.

As an aside, you might find of interest my recent critically-eyed coverage of the Tesla AI Day, at which some kind-of walking robots were portrayed by Elon Musk as the future for Tesla and society, see the link here. Back to the matter at hand. When seriously discussing dancing robots or walking robots, we need to mindfully take into account the tradeoffs or total ROI (Return on Investment) of this use of AI.

We should not allow ourselves to become overly enamored by the benefits when there are also costs to be considered. A shiny new toy can have rather sharp edges. All of this spurs an important but somewhat silent point: part of the reason that the AI weaponization issue arises now is the advancement of AI toward autonomous activity.

We have usually expected that weapons are generally human operated. A human makes the decision whether to fire or engage the weapon. We can presumably hold that human accountable for their actions.

AI that is devised to work autonomously or that can be tricked into doing so would seemingly remove the human from the loop. The AI is then algorithmically making computational decisions that can end up killing or harming humans. Besides the obvious concerns about lack of control over the AI, you also have the qualms that we might have an arduous time pinning responsibility as to the actions of the AI.

We don’t have a human that is our obvious instigator. I realize that some believe that we ought to simply and directly hold the AI responsible for its actions, as though AI has attained sentience or otherwise been granted legal personhood (see my coverage on the debates over AI garnering legal personhood at the link here ). That isn’t going to work for now.

We are going to have to trace the AI to the humans that either devised it or that fielded it. They will undoubtedly try to legally dodge responsibility by trying to contend that the AI went beyond what they had envisioned. This is a growing contention that we need to deal with (see my AI Law writings for insights on the contentious issues involved).

The United Nations (UN) via the Convention on Certain Conventional Weapons (CCW) in Geneva has established eleven non-binding Guiding Principles on Lethal Autonomous Weapons, as per the official report posted online (encompassing references to pertinent International Humanitarian Law or IHL provisos), including:

(a) International humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems;

(b) Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system;

(c) Human-machine interaction, which may take various forms and be implemented at various stages of the life cycle of a weapon, should ensure that the potential use of weapons systems based on emerging technologies in the area of lethal autonomous weapons systems is in compliance with applicable international law, in particular IHL. In determining the quality and extent of human-machine interaction, a range of factors should be considered including the operational context, and the characteristics and capabilities of the weapons system as a whole;

(d) Accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control;

(e) In accordance with States’ obligations under international law, in the study, development, acquisition, or adoption of a new weapon, means or method of warfare, determination must be made whether its employment would, in some or all circumstances, be prohibited by international law;

(f) When developing or acquiring new weapons systems based on emerging technologies in the area of lethal autonomous weapons systems, physical security, appropriate non-physical safeguards (including cyber-security against hacking or data spoofing), the risk of acquisition by terrorist groups and the risk of proliferation should be considered;

(g) Risk assessments and mitigation measures should be part of the design, development, testing and deployment cycle of emerging technologies in any weapons systems;

(h) Consideration should be given to the use of emerging technologies in the area of lethal autonomous weapons systems in upholding compliance with IHL and other applicable international legal obligations;

(i) In crafting potential policy measures, emerging technologies in the area of lethal autonomous weapons systems should not be anthropomorphized;

(j) Discussions and any potential policy measures taken within the context of the CCW should not hamper progress in or access to peaceful uses of intelligent autonomous technologies;

(k) The CCW offers an appropriate framework for dealing with the issue of emerging technologies in the area of lethal autonomous weapons systems within the context of the objectives and purposes of the Convention, which seeks to strike a balance between military necessity and humanitarian considerations.

These and other various laws of war and laws of armed conflict, or IHL (International Humanitarian Laws) serve as a vital and ever-promising guide to considering what we might try to do about the advent of autonomous systems that are weaponized, whether by keystone design or by after-the-fact methods. Some say we should outrightly ban those AI autonomous systems that are weaponizable. That’s right, the world should put its foot down and stridently demand that AI autonomous systems shall never be weaponized.

A total ban is to be imposed. End of story. Full stop, period.

Well, we can sincerely wish that a ban on lethal weaponized autonomous systems would be strictly and obediently observed. The problem is that a lot of wiggle room is bound to slyly be found within any of the sincerest of bans. As they say, rules are meant to be broken.

You can bet that where things are loosey-goosey, riffraff will ferret out gaps and try to wink-wink their way around the rules. Here are some potential loopholes worthy of consideration:

- Claims of Non-Lethal. Make non-lethal autonomous weapons systems (seemingly okay since it is outside of the ban boundary), which you can then on a dime shift into becoming lethal (you’ll only be beyond the ban at the last minute).

- Claims of Autonomous System Only. Uphold the ban by not making lethal-focused autonomous systems; meanwhile, make as much progress as possible on devising everyday autonomous systems that aren’t (yet) weaponized but that you can on a dime retrofit into being weaponized.

- Claims of Not Integrated As One. Craft autonomous systems that are not at all weaponized, and when the time comes, piggyback weaponization such that you can attempt to vehemently argue that they are two separate elements and therefore contend that they do not fall within the rubric of an all-in-one autonomous weapon system or its cousin.

- Claims That It Is Not Autonomous. Make a weapon system that does not seem to be of autonomous capacities. Leave room in this presumably non-autonomous system for the dropping in of AI-based autonomy. When needed, plug in the autonomy and you are ready to roll (until then, seemingly you were not violating the ban).

- Other.

There are plenty of other expressed difficulties with trying to outright ban lethal autonomous weapons systems.

I’ll cover a few more of them. Some pundits argue that a ban is not especially useful and instead there should be regulatory provisions. The idea is that these contraptions will be allowed but stridently policed.

A litany of lawful uses is laid out, along with lawful ways of targeting, lawful types of capabilities, lawful proportionality, and the like. In their view, a straight-out ban is like putting your head in the sand and pretending that the elephant in the room doesn’t exist. This contention though gets the blood boiling of those that counter with the argument that by instituting a ban you are able to dramatically reduce the temptation to pursue these kinds of systems in the first place.

Sure, some will flout the ban, but at least hopefully most will not. You can then focus your attention on the flouters and not have to splinter your attention across everyone. Round and round these debates go.

Another oft-noted concern is that even if the good abide by the ban, the bad will not. This puts the good in a lousy posture. The bad will have these kinds of weaponized autonomous systems and the good won’t.

Once it is revealed that the bad have them, it will be too late for the good to catch up. In short, the only astute thing to do is to prepare to fight fire with fire. There is also the classic deterrence contention.

If the good opt to make weaponized autonomous systems, this can be used to deter the bad from seeking to get into a tussle. Either the good will be better armed and thusly dissuade the bad, or the good will be ready when the bad perhaps unveils that they have surreptitiously been devising those systems all along. A counter to these counters is that by making weaponized autonomous systems, you are waging an arms race.

The other side will seek to have the same. Even if they are technologically unable to create such systems anew, they will now be able to steal the plans of the “good” ones, reverse engineer the high-tech guts, or mimic whatever they seem to see as a tried-and-true way to get the job done. Aha, some retort, all of this might lead to a reduction in conflicts by a semblance of mutual deterrence.

If side A knows that side B has those lethal autonomous weapons systems, and side B knows that side A has them, they might sit tight and not come to blows. This has the distinct aura of mutually assured destruction (MAD). And so on.

Looking Closely At The Second Paragraph

We have already covered a lot of ground herein and so far have only considered the first or opening paragraph of the Open Letter (there are four paragraphs in total). Time to take a look at the second paragraph, here you go: “As with any new technology offering new capabilities, the emergence of advanced mobile robots offers the possibility of misuse. Untrustworthy people could use them to invade civil rights or to threaten, harm, or intimidate others.

One area of particular concern is weaponization. We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues. Weaponized applications of these newly-capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society.

For these reasons, we do not support the weaponization of our advanced-mobility general-purpose robots. For those of us who have spoken on this issue in the past, and those engaging for the first time, we now feel renewed urgency in light of the increasing public concern in recent months caused by a small number of people who have visibly publicized their makeshift efforts to weaponize commercially available robots” (as per posted online). Upon reading that second paragraph, I hope you can see how my earlier discourse herein on AI weaponization comes to the fore.

Let’s examine a few additional points. One qualm about a particular wording aspect, which has gotten the dander up of some, is that the narrative seems to emphasize that “untrustworthy people” could misuse these AI robots. Yes, indeed, it could be bad people or evildoers that bring about dastardly acts that will “misuse” AI robots.

At the same time, as pointed out toward the start of this discussion, we need to also make clear that the AI itself could go awry, possibly due to embedded bugs or errors and other such complications. The expressed concern is that emphasizing only the chances of untrustworthy people seems to ignore other adverse possibilities. Though most AI companies and vendors are loath to admit it, there is a plethora of AI systems issues that can undercut the safety and reliability of autonomous systems.

For my coverage of AI safety and the need for rigorous and provable safeguards, see the link here, for example. Another notable point that has come up amongst those that have examined the Open Letter entails the included assertion that weaponization could end up undercutting the public trust associated with AI robots. On the one hand, this is a valid assertion.

If AI robots are used to do evil bidding, you can bet that the public will get quite steamed. When the public gets steamed, you can bet that lawmakers will jump into the fray and seek to enact laws that clamp down on AI robots and AI robot makers. This in turn could cripple the AI robotics industry if the laws are all-encompassing and shut down efforts involving AI robotic benefits.

In a sense, the baby could get thrown out with the bathwater (an old expression, probably deserving to be retired). The obvious question brought up too is whether this assertion about averting a reduction in public trust for AI robots is a somewhat self-serving credo or whether it is for the good of us all (can it be both?). You decide.

We now come to the especially meaty part of the Open Letter: “We pledge that we will not weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so. When possible, we will carefully review our customers’ intended applications to avoid potential weaponization. We also pledge to explore the development of technological features that could mitigate or reduce these risks.

To be clear, we are not taking issue with existing technologies that nations and their government agencies use to defend themselves and uphold their laws” (as per posted online). We can unpack this. Sit down and prepare yourself accordingly.

Are you ready for some fiery polarization? On the favorable side, some are vocally heralding that these AI robot makers would make such a pledge. It seems that these robot makers will thankfully seek to not weaponize their “advanced-mobility general-purpose” robots. In addition, the Open Letter says that they will not support others that do so.

Critics wonder whether there is some clever wordsmithing going on. For example, where does “advanced-mobility” start and end? If a robot maker is devising a simple-mobility AI robot rather than an advanced one (“advanced” being an undefined piece of techie jargon), does that get excluded from the scope of what will not be weaponized? Thus, apparently, it is okay to weaponize simple-mobility AI robots, as long as they aren’t so-called advanced. The same goes for the phrasing of general-purpose robots.

If an AI robot is devised specifically for weaponization and therefore is not shall we say a general-purpose robot, does that become a viable exclusion from the scope? You might quibble with these quibbles and fervently argue that this is just an Open Letter and not a fifty-page legal document that spells out every nook and cranny. This brings us to the seemingly more macro-level qualm expressed by some. In essence, what does a “pledge” denote? Some ask, where’s the beef? A company that makes a pledge like this is seemingly doing so without any true stake in the game.

If the top brass of any firm that signs up for this pledge decides to no longer honor the pledge, what happens to that firm? Will the executives get summarily canned? Will the company close down and profusely apologize for having violated the pledge? And so on. As far as can be inferred, there is no particular penalty or penalization for any violation of the pledge. You might argue that there is a possibility of reputational damage.

A pledging firm might be dinged in the marketplace for having made a pledge that it no longer observed. Of course, this also assumes that people will remember that the pledge was made. It also assumes that the violation of the pledge will be somehow detected (it distinctly seems unlikely a firm will tell all if it does so).

The pledge violator would have to be called out and yet such an issue might become mere noise in the ongoing tsunami of news about AI robotics makers. Consider another angle that has come up. A pledging firm gets bought up by some larger firm.

The larger firm opts to start turning the advanced-mobility general-purpose robots into AI weaponized versions. Is this a violation of the pledge? The larger firm might insist that it is not a violation since they (the larger firm) never made the pledge. Meanwhile, the innocuous AI robots that the smaller firm has put together and devised, doing so with seemingly the most altruistic of intentions, get nearly overnight revamped into being weaponized.

Kind of undermines the pledge, though you might say that the smaller firm didn’t know that this would someday happen. They were earnest in their desire. It was out of their control as to what the larger buying firm opted to do.

Some also ask whether there is any legal liability in this. A pledging firm decides a few months from now that it is not going to honor the pledge. They have had a change of heart.

Can the firm be sued for having abandoned the pledge that it made? Who would sue? What would be the basis for the lawsuit? A slew of legal issues arise. As they say, you can pretty much sue just about anybody, but whether you will prevail is a different matter altogether. Think of this another way.

A pledging firm gets an opportunity to make a really big deal to sell a whole bunch of its advanced-mobility general-purpose robots to a massive company that is willing to pay through the nose to get the robots. It is one of those once-in-a-lifetime zillion-dollar purchase deals. What should the AI robotics company do? If the AI robotics pledging firm is publicly traded, they would almost certainly aim to make the sale (the same could be said of a privately held firm, though not quite so).

Imagine that the pledging firm is worried that the buyer might try to weaponize the robots, though let’s say there isn’t any such discussion on the table. It is just rumored that the buyer might do so. Accordingly, the pledging firm puts into their licensing that the robots aren’t to be weaponized.

The buyer balks at this language and steps away from the purchase. How much profit did the pledging AI robotics firm just walk away from? Is there a point at which the in-hand profit outweighs the inclusion of the licensing restriction requirement (or, perhaps, legally wording the restriction to allow for wiggle room and still make the deal happen)? I think that you can see the quandary involved. Tons of such scenarios are easily conjured up.

The question is whether this pledge is going to have teeth. If so, what kind of teeth? In short, as mentioned at the start of this discussion, some are amped up that this type of pledge is being made, while others are taking a dimmer view of whether the pledge will hold water. We move on.

Getting A Pledge Going

The fourth and final paragraph of the Open Letter says this: “We understand that our commitment alone is not enough to fully address these risks, and therefore we call on policymakers to work with us to promote safe use of these robots and to prohibit their misuse. We also call on every organization, developer, researcher, and user in the robotics community to make similar pledges not to build, authorize, support, or enable the attachment of weaponry to such robots. We are convinced that the benefits for humanity of these technologies strongly outweigh the risk of misuse, and we are excited about a bright future in which humans and robots work side by side to tackle some of the world’s challenges” (as per posted online).

This last portion of the Open Letter has several additional elements that have raised ire. Calling upon policymakers can be well-advised or ill-advised, some assert. You might get policymakers that aren’t versed in these matters that then do the classic rush-to-judgment and craft laws and regulations that usurp the progress on AI robots.

Per the point earlier made, perhaps the innovation that is pushing forward on AI robotic advances will get disrupted or stomped on. Better be sure that you know what you are asking for, the critics say. Of course, the counter-argument is that the narrative clearly states that policymakers should be working with AI robotics firms to figure out how to presumably sensibly make such laws and regulations.

The counter to the counter-argument is that the policymakers might be seen as beholden to the AI robotics makers if they cater to their whims. The counter to the counter of the counter-argument is that it is naturally a necessity to work with those that know about the technology, or else the outcome is potentially going to be off-kilter. Etc.

On a perhaps quibbling basis, some have had heartburn over the line that calls upon everyone to make similar pledges as to not attaching weaponry to advanced-mobility general-purpose robots. The keyword there is the word attaching. If someone is making an AI robot that incorporates or seamlessly embeds weaponry, that seems to get around the wording of attaching something.

You can see it now, someone vehemently arguing that the weapon is not attached, it is completely part and parcel of the AI robot. Get over it, they exclaim, we aren’t within the scope of that pledge, and they could even have otherwise said that they were. This brings up another complaint about the lack of stickiness of the pledge.

Can a firm or anyone at all that opts to make this pledge declare themselves unpledged at any time that they wish to do so and for whatever reason they so desire? Apparently so. There is a lot of bandying around about making pledges and what traction they carry.

Conclusion

Yikes, you might say, these companies that are trying to do the right thing are getting drubbed for trying to do the right thing.

What has come of our world? Anyone that makes such a pledge ought to be given the benefit of the doubt, you might passionately maintain. They are stepping out into the public sphere to make a bold and vital contribution. If we start besmirching them for doing so, it will assuredly make matters worse.

No one will want to make such a pledge. Firms and others won’t even try. They will hide away and not forewarn society about what those darling dancing robots can be perilously turned into.

Skeptics proclaim that the way to get society to wise up entails other actions, such as dropping the fanciful act of showcasing the frolicking dancing AI robots. Or at least make it a more balanced act. For example, rather than solely mimicking beloved pet-loyal dogs, illustrate how the dancing robots can be more akin to wild unleashed angry wolves that can tear humans into shreds with nary a hesitation.

That will get more attention than pledges, they implore. Pledges can indubitably be quite a conundrum. As Mahatma Gandhi eloquently stated: “No matter how explicit the pledge, people will turn and twist the text to suit their own purpose.”

Perhaps to conclude herein on an uplifting note, Thomas Jefferson said this about pledges: “We mutually pledge to each other our lives, our fortunes, and our sacred honor.” When it comes to AI robots, their autonomy, their weaponization, and the like, we are all ultimately going to be in this together. Our mutual pledge needs at least to be that we will keep these matters at the forefront, we will strive to find ways to cope with these advances, and somehow find our way toward securing our honor, our fortunes, and our lives.

Can we pledge to that? I hope so.


From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/10/09/ai-ethics-and-ai-law-asking-hard-questions-about-that-new-pledge-by-dancing-robot-makers-saying-they-will-avert-ai-weaponization/
