What Harm Can AI Do? Plenty, But We Can Minimize It Together
Alex Polyakov, Forbes Councils Member, Forbes Technology Council
Council Post: Expertise from Forbes Councils members, operated under license. Opinions expressed are those of the author.
Jun 15, 2022, 09:00am EDT
Trusted AI researcher, serial entrepreneur. Founder of Adversa AI.
I’ve published several articles describing how vulnerabilities and errors in AI algorithms can lead to unpleasant consequences. This is especially critical for AI-driven companies, where mistakes can lead to security bypasses, for example in AI-driven malware detection engines or biometric systems.
AI errors can also lead to substantial financial losses, as with Zillow, which lost over $300 million in Q3 2021 because of a failed algorithm. AI errors can even enable financial market manipulation: fake content is produced and then ingested by sentiment-based stock prediction models. And this still isn’t the worst-case scenario.
Sometimes, algorithms can cause actual harm to people. Studies and real events demonstrate that AI algorithms have been responsible for causing everything from depression and trauma to jail time and even suicide. Let’s see how we got here.
AI can lead to business collapse. I don’t think it’s worth explaining how mentally devastating it is when your startup goes bust—especially if it’s valued at $100 million. It’s even sadder to realize the collapse would have been avoided if not for a Facebook algorithm tweak .
LittleThings.com was launched in 2014 as a women-centric digital media site with a variety of entertainment content. At the time of its collapse, it had an estimated 20 million social media followers, most of them on Facebook. However, traffic to the company’s pages plummeted after Facebook changed its content promotion algorithm.
As a result, the company lost 90% of its Facebook traffic and had to let go of 100 employees. This is a prime example of how an entire business can collapse because of a change to a single algorithm. And while what happened to LittleThings was a matter of chance, the same consequences can also be produced by deliberately targeted actions.
For example, if an attacker poisons an AI system’s training data with malicious examples, the results can be devastating.
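As a purely illustrative sketch (not a description of any specific incident), the Python snippet below shows one well-known form of data poisoning, label flipping: an attacker who can corrupt a slice of the training data flips its labels, and the resulting model measurably loses accuracy. All data here is synthetic and the model is a toy.

```python
# Illustrative sketch only: a toy label-flipping "poisoning" attack on a
# simple classifier. All data is synthetic; real attacks and defenses are
# far more involved.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: trained on unmodified data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: the attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```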
AI is putting people in jail. What could be worse than losing money? Losing time. In the American criminal justice system, an algorithm can literally decide your fate. For example, law enforcement agencies use facial recognition systems to identify suspects. Today, the most controversial such tool is the criminal risk assessment algorithm.
Such a tool estimates the risk of recidivism, the likelihood that a defendant will commit another crime. In theory, it should give the most unbiased and balanced assessment possible, with correspondingly serious consequences, but whether those assessments are actually correct remains an open question.
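To make the bias concern concrete, here is a deliberately simplified, hypothetical sketch (it does not represent any real risk assessment tool): a toy risk score is trained on synthetic "historical" labels that over-record one group as reoffenders, and the learned model then assigns that group a higher risk even when every other feature is identical.

```python
# Illustrative sketch only: a toy risk score trained on synthetic, biased
# history. It shows the mechanism by which biased data yields biased scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
prior_arrests = rng.poisson(1.5, n)      # hypothetical feature
age = rng.integers(18, 70, n)            # hypothetical feature
group = rng.integers(0, 2, n)            # a protected attribute (0 or 1)

# Synthetic "ground truth": reoffending depends only on priors and age.
p_true = 1 / (1 + np.exp(-(0.5 * prior_arrests - 0.05 * (age - 18))))
reoffend = (rng.random(n) < p_true).astype(int)

# ...but the recorded labels are biased: group 1 was policed more heavily,
# so its members were more often recorded as reoffenders.
recorded = reoffend | ((group == 1) & (rng.random(n) < 0.2)).astype(int)

X = np.column_stack([prior_arrests, age, group])
model = LogisticRegression(max_iter=1000).fit(X, recorded)

# The learned score now rates group 1 as riskier at identical features.
same_person = np.array([[2, 30, 0], [2, 30, 1]])
print(model.predict_proba(same_person)[:, 1])
```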
And if all of this sounds theoretical, here is a real-life example: a facial recognition mismatch led to the arrest of a Michigan man for a crime he didn’t commit. The man was called to the police station, where he was arrested on a felony warrant and charged with larceny. As it turned out, the facial recognition system had incorrectly matched him to a person who had committed a theft in an upscale store some time earlier.
Ultimately, the case demonstrates two problems at once: the racial bias of facial recognition systems and the flawed way these systems and police officers work together. A recognition error nearly turned into a disaster for an innocent man. AI impoverishes and ruins families.
A decade ago, a notorious Dutch tax scandal hit the headlines when an estimated 26,000 parents were accused of filing fraudulent benefits claims. Because of an algorithm failure, families were required to pay back the benefits they had received in full, in many cases tens of thousands of euros. It’s not hard to see that many of these families faced complete financial ruin.
In addition, over a thousand children were placed in foster care. Alongside reports of economic collapse, broken families and psychological damage, several suicides were attributed to what happened. AI can kill people.
But what’s worse than jail, money troubles and mental health issues? A future in which robots kill people was predicted long ago. Here we are. Many people are excited about driving a smart car, but we can’t call such vehicles completely safe.
To date, there have been about two dozen deaths associated with Tesla’s Autopilot feature specifically. Of course, not all of these accidents were caused directly by system errors, but some of them deserve special attention. For example, one fatal accident occurred in 2019 when a 27-year-old man got behind the wheel of a Tesla sedan and turned on Autopilot.
As a result, the car ran a red light and slammed into another car, killing the two people inside. What can we do to minimize the harm from AI and prevent the next AI winter? First, we must understand that we’re creating a new kind of creature, one that in some cases will have power far beyond our own. There is no doubt that it can help us solve many problems, but if we don’t teach and train it correctly from the very beginning, it can make things worse than they are now.
We already have experience doing this: We train our kids in school to understand how the world works and how to act in various environments, including extreme situations. To create AI we can trust, we must unite to build training environments in which AI is “taught” to be secure, private, safe, unbiased and responsible. Like parents, we are responsible for ensuring that our AI creations learn to behave well.
From: forbes
URL: https://www.forbes.com/sites/forbestechcouncil/2022/06/15/what-harm-can-ai-do-plenty-but-we-can-minimize-it-together/