How To Use AI To Eliminate Bias

By Glenn Gow, Contributor. Opinions expressed by Forbes Contributors are their own. Board Member (AI Specialty) and CEO Coach.

Jul 17, 2022

“We don’t see things as they are, we see them as we are.” So wrote Anaïs Nin, rather succinctly describing the unfortunate mélange of biases that accompany these otherwise perfectly well-functioning human brains of ours. In a business context, affinity bias, confirmation bias, attribution bias, and the halo effect, some of the better known of these errors of reasoning, really just scratch the surface.

In aggregate, they leave a trail of offenses and errors in their wake. Of course, the most pernicious of our human biases are those that prejudice us for or against our fellow humans on the basis of age, race, gender, religion, or physical appearance. Try as we might to purge ourselves, our work environments, and our society of these distortions, they still worm their way into—well, just about everything that we think and do—even modern technologies, like AI.

Critics say that AI makes bias worse

Since AI was first deployed in hiring, loan approvals, insurance premium modeling, facial recognition, law enforcement, and a constellation of other applications, critics have—with considerable justification—pointed out the technology’s propensity for bias. Google’s Bidirectional Encoder Representations from Transformers (BERT), for example, is a leading Natural Language Processing (NLP) model that developers can use to build their own AI. BERT was originally built using Wikipedia text as its principal source.
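One simple way to see how training data shapes a model like BERT is to probe its masked-word predictions directly. The sketch below assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint; the probe sentences are illustrative only, not a rigorous bias audit.

```python
from transformers import pipeline

# Load a fill-mask pipeline on the public BERT checkpoint (assumes the
# Hugging Face "transformers" package is installed).
fill = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in a pronoun for two professions and compare the
# completions it ranks highest. Skewed rankings hint at learned associations.
for sentence in [
    "The doctor said [MASK] would see the patient shortly.",
    "The nurse said [MASK] would see the patient shortly.",
]:
    print(sentence)
    for candidate in fill(sentence, top_k=3):
        print(f"  {candidate['token_str']!r}  score={candidate['score']:.3f}")
```

A probe like this is only a smoke test; systematic audits rely on curated template sets and statistical measures rather than a handful of sentences.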

What’s wrong with building on Wikipedia? The overwhelming majority of its contributors are white males from Europe and North America. As a result, one of the most important sources of language-based AI began its life with a biased perspective baked in. A similar problem was found in Computer Vision, another key area of AI development.

Facial recognition datasets comprising hundreds of thousands of annotated faces are critical to the development of facial recognition applications used for cybersecurity, law enforcement, and even customer service. It turned out, however, that the (presumably mostly white, middle-aged male) developers unconsciously did a better job achieving accuracy for people like themselves. Error rates for women, children, the elderly, and people of color were much higher than those for middle-aged white men.
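The underlying problem is straightforward to surface once evaluation results are broken out by group rather than averaged together. Here is a minimal, self-contained sketch of that kind of disaggregated evaluation; the records are invented purely for illustration.

```python
from collections import defaultdict

# Invented evaluation records: (demographic group, was the face match correct?).
results = [
    ("middle_aged_white_men", True), ("middle_aged_white_men", True),
    ("middle_aged_white_men", True), ("women_of_color", True),
    ("women_of_color", False),       ("elderly", False),
    ("elderly", True),               ("children", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += not correct  # counts 1 for each incorrect match

# Reporting a single aggregate accuracy number hides exactly this kind of gap.
for group, n in totals.items():
    print(f"{group:>24s}: error rate {errors[group] / n:.0%} (n={n})")
```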

As a result, IBM, Amazon, and Microsoft were forced to cease sales of their facial recognition technology to law enforcement in 2020, for fear that these biases would result in wrongful identification of suspects. For more on all of this, I encourage you to watch the important and sometimes-chilling documentary Coded Bias.

What if AI is actually part of the solution to bias?

A better understanding of the phenomenon of bias in AI reveals, however, that AI merely exposes and amplifies implicit biases that already existed but were overlooked or misunderstood.

AI itself is agnostic to color, gender, age, and other characteristics. It is not vulnerable to the logical fallacies and cognitive biases that trouble humans. The only reason we see bias in AI at all is that humans sometimes train it on flawed heuristics and biased data.

Since the discovery of the biases described above (a PR disaster, I assure you), all of the major technology companies have been working to improve datasets and eliminate bias. One way to eliminate bias in AI? By using AI itself. If that seems unlikely, read on.

Using AI to Eliminate Bias in Hiring

The classic example can be found in job opportunities.

Across the spectrum of the most-coveted employment opportunities, women and people of color are notoriously under-represented. The phenomenon is self-perpetuating, as new hires become senior leaders who are, in turn, responsible for hiring. Affinity bias ensures that “people like me” continue to get hired, while attribution bias justifies those choices on the basis of past hires’ performance.

But when AI is given a bigger role in recruiting, this can change. Tools like Textio, Gender Decoder, and Ongig use AI to scrutinize job descriptions for hidden biases around gender and other characteristics. Knockri, Ceridian, and Gapjumpers use AI to remove or ignore characteristics that identify gender, national origin, skin color, and age, so that hiring managers can focus purely on candidates’ qualifications and experience.
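As a rough illustration of what the job-description scanners in this category do, the sketch below flags words that research on gender-coded language associates with masculine- or feminine-coded postings. The word lists and function here are invented, simplified stand-ins; commercial tools such as Textio use far richer models.

```python
import re

# Tiny, ad-hoc word lists for illustration only; real tools use much larger,
# research-backed lexicons and contextual models.
MASCULINE_CODED = {"aggressive", "dominant", "fearless", "ninja", "rockstar"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "dependable"}

def flag_coded_language(job_description: str) -> dict:
    """Return any masculine- or feminine-coded words found in the posting."""
    words = set(re.findall(r"[a-z]+", job_description.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

print(flag_coded_language(
    "We want an aggressive, fearless sales ninja who dominates the market."
))
```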

Some of these solutions also remove recency bias, affinity bias, and gender bias from the interview process by evaluating candidates’ soft skills on an objective basis or altering a candidate’s phone voice to mask their gender.

Removing Bias in Venture Capital Decision Making with AI

A similar approach can be taken in the world of venture capital, where men comprise 80% of partners and women receive just 2.2% of all investment, despite founding 40% of new startups.

The UK accelerator Founders Factory, for instance, wrote software that short-lists program candidates on the basis of identifiable entrepreneurial success characteristics. Likewise, the female-run non-profit F4capital developed a “FICO score for Startups,” which profiles startups’ maturity, opportunity, and risk as a means to eliminate bias in the venture decision-making process. This approach should be adopted widely not just because it is the ethical thing to do, but because it delivers better returns—as much as 184% higher than investments made without the help of AI.
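F4capital’s actual scoring model is not public, so the following is only a hypothetical sketch of the general idea: score every startup on the same maturity, opportunity, and risk criteria so that funding decisions rest on a uniform, founder-blind profile. The fields and weights are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class StartupProfile:
    maturity: float     # 0-1: product stage, revenue traction
    opportunity: float  # 0-1: market size and growth
    risk: float         # 0-1: higher means riskier

def startup_score(p: StartupProfile) -> float:
    """Blend the three dimensions into a single 0-100 score (invented weights)."""
    return round(100 * (0.4 * p.maturity + 0.4 * p.opportunity + 0.2 * (1 - p.risk)), 1)

# Every candidate is scored on identical criteria, with no founder demographics.
print(startup_score(StartupProfile(maturity=0.6, opportunity=0.8, risk=0.3)))  # 70.0
```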

Reducing Cognitive Bias with AI in the Medical Field

AI can also help make better decisions in healthcare. Medical diagnostic company Flow Health, for instance, is committed to using AI to overcome the cognitive biases that, it says, often distort doctors’ diagnoses. The “availability heuristic,” for example, nudges physicians toward a diagnosis that is common but sometimes incorrect, while the “anchoring heuristic” leads them to stick to an incorrect initial diagnosis even when new information contradicts it.
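To see why a statistical model is less prone to anchoring, consider a toy Bayesian update: the model’s confidence in an initial diagnosis is revised mechanically whenever new evidence arrives, rather than defended. The numbers below are invented, not clinical data.

```python
def posterior(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: P(diagnosis | new evidence)."""
    evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / evidence

belief = 0.70  # initial (anchored) confidence in diagnosis A
# A follow-up test result arrives that is unlikely if diagnosis A is correct.
belief = posterior(belief, p_evidence_if_true=0.10, p_evidence_if_false=0.60)
print(f"Confidence in diagnosis A after the new result: {belief:.2f}")  # ~0.28
```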

I believe that AI will become an essential part of the fast-approaching world of data-driven, personalized medicine.

Other Areas Where AI Can Reduce Common Biases

AI can even help reduce less malignant but still very powerful biases that too often cloud our business judgment. Consider the bias (in English-speaking countries) toward information published in English over other languages, the bias in the startup world against older founders despite their greater knowledge and experience, and the bias in manufacturing toward familiar vendors and methods rather than new, potentially better ones.

Don’t forget the biases that lead executives in supply chain management and investors on Wall Street to make emotional, short-term decisions during hard economic times. Giving AI a role to play in all of these areas is a useful check against unrecognized biases in your decision-making process.

AI Can Even Be Used to Reduce Bias in AI

If to err is human, AI may be the solution we need to avoid the costly and unethical outcomes of our hidden biases.

But what about the intrusion of those biases into the AI itself? If AI misinterprets biased data and amplifies biased human heuristics, how can it ever be a useful solution? There are now tools designed to weed out the implicit human and data biases that surreptitiously make their way into artificial intelligence. The What-If Tool, developed by Google’s People and AI Research (PAIR) team, allows developers to probe the performance of AI using a broad library of “fairness indicators,” while PwC’s Bias Analyzer, IBM Research’s AI Fairness 360 toolkit, and the open-source LIME tool each help identify bias in your AI models.

If you are a senior executive or a board member thinking about ways that AI might be able to reduce bias in your organization (this is, after all, why I write this column), I urge you to think of AI as a promising new weapon in your arsenal, not as a silver bullet that will solve the problem in its entirety.
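To make the idea of a fairness indicator concrete, here is a minimal, hand-rolled version of one widely used metric, disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The records below are invented; toolkits such as AI Fairness 360 compute this and dozens of related metrics automatically.

```python
# Invented decision records for illustration.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

# The "four-fifths rule" treats a ratio below 0.8 as a common red flag.
disparate_impact = approval_rate("B") / approval_rate("A")
print(f"Disparate impact (group B vs. group A): {disparate_impact:.2f}")  # 0.50
```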

Holistically and practically speaking, you still need to create bias-reduction benchmarks, train your staff to recognize and avoid hidden biases, and collect outside feedback from customers, vendors, or consultants. Not only are bias audits a good idea; in some instances, they are the law. If you care about how AI is determining the winners and losers in business, and how you can leverage it for the benefit of your organization, I encourage you to stay tuned.

I write (almost) exclusively about how senior executives, board members, and other business leaders can use AI effectively.



From: forbes
URL: https://www.forbes.com/sites/glenngow/2022/07/17/how-to-use-ai-to-eliminate-bias/
