
Microsoft Is Scrapping Some Bad A.I. Facial Recognition Tools

Kori Hale, Contributor. Opinions expressed by Forbes Contributors are their own.

I’m the CEO of CultureBanx, redefining business news for minorities.

Jul 5, 2022, 06:19am EDT

[Facial recognition system, concept image: portrait of a young man. Getty]

As an outspoken proponent of properly regulating facial recognition technology, Microsoft MSFT announced it would get rid of its A.I. tools in this space. A.I. is still the most disputed part of technology and is becoming increasingly commonplace as companies look to incorporate it across their platforms. Now, Microsoft is finally putting an end to its role in the potential abuse of facial recognition technology, which could lead to incidents of racial profiling.

The Breakdown You Need To Know: Following a two-year review and a 27-page document, the tech giant wants tighter controls on its artificial intelligence products.

CultureBanx reported that in the past Microsoft has asked governments around the world to regulate the use of facial recognition technology. The software giant wants to ensure the technology, which has higher error rates for African Americans, does not invade personal privacy or become a tool for discrimination or surveillance. Some companies heavily rely on Microsoft’s facial recognition technology.

For example, Uber UBER uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. This seems like a meaningful way of using facial recognition tools.

A.I. Atrocities: There is a lot of harm that can come from this type of tech. MIT research shows commercial artificial intelligence systems tend to have higher error rates for women and Black people. Some facial recognition systems confused light-skinned men only 0.8% of the time but had an error rate of 34.7% for dark-skinned women.

Back in 2019, Microsoft quietly deleted its MS Celeb database, which contained more than 10 million images. The images compiled included journalists, artists, musicians, activists, policy makers, writers and researchers. The deletion came after the tech company called on U.S. politicians to do a better job of regulating recognition systems.

Additionally, in Microsoft’s 2018 SEC annual report, it noted that “A.I. algorithms may be flawed. Datasets may be insufficient or contain biased information. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

What’s Next: Remember, artificial intelligence systems inherently learn what they are being “taught”.

The use of facial recognition technology has a disparate impact on people of color, disenfranchising a group that already faces inequality. It says a lot about the harmful nature built into A.I. for a company like Microsoft to be throwing in the towel on the technology. The real question is: will the rest of the industry do the same?



From: forbes
URL: https://www.forbes.com/sites/korihale/2022/07/05/microsoft-is-scrapping-some-bad-ai-facial-recognition-tools/
