Explainable AI Is Trending And Here’s Why
By Jennifer Kite-Powell, Senior Contributor. Opinions expressed by Forbes Contributors are their own.
Jul 28, 2022, 03:04pm EDT

[Image: A visual representation of XAI: a clear white box containing a digitized brain, with the letters X, A & I etched on the top. Getty]

According to the 2022 IBM Institute for Business Value study on AI Ethics in Action, building trustworthy Artificial Intelligence (AI) is perceived as a strategic differentiator, and organizations are beginning to implement AI ethics mechanisms. Seventy-five percent of respondents believe that ethics is a source of competitive differentiation.

More than 67% of respondents who view AI and AI ethics as important indicate that their organizations outperform their peers in sustainability, social responsibility, and diversity and inclusion. The survey showed that 79% of CEOs are prepared to embed AI ethics into their AI practices, up from 20% in 2018, but less than a quarter of responding organizations have operationalized AI ethics. Less than 20% of respondents strongly agreed that their organization’s practices and actions match (or exceed) their stated principles and values.

Peter Bernard, CEO of Datagration, says that understanding AI gives companies an advantage, and that explainable AI allows businesses to optimize their data. “Not only are they able to explain and understand the AI/ML behind predictions, but when errors arise, they can understand where to go back and make improvements,” said Bernard. “A deeper understanding of AI/ML allows businesses to know whether their AI/ML is making valuable predictions or whether they should be improved.”

Bernard believes this can ensure incorrect data is spotted early on and stopped before decisions are made. Avivah Litan, vice president and distinguished analyst at Gartner, says that explainable AI also furthers scientific discovery, as scientists and other business users can explore what an AI model does in various circumstances. “They can work with the models directly instead of relying only on what predictions are generated given a certain set of inputs,” said Litan.

But John Thomas, Vice President and Distinguished Engineer at IBM Expert Labs, says that at its most basic level, explainable AI is the set of methods and processes that help us understand a model’s output. “In other words, it’s the effort to build AI that can explain to designers and users why it made the decision it did based on the data that was put into it,” said Thomas. Thomas says there are many reasons why explainable AI is urgently needed.

“One reason is model drift. Over time, as more and more data is fed into a given model, this new data can influence the model in ways you may not have intended,” said Thomas. “If we can understand why an AI is making certain decisions, we can do much more to keep its outputs consistent and trustworthy over its lifecycle.”
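To make the drift concern concrete, here is a minimal sketch of one common monitoring technique, the population stability index (PSI), which flags when a feature’s live distribution has shifted away from its training-time baseline. The thresholds and synthetic data are illustrative assumptions, not anything Thomas or IBM prescribes.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare a live feature distribution against its training baseline.
    Rough rule of thumb (varies by team): < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 significant drift."""
    # Bin edges come from the baseline so both samples share one grid.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    live = np.clip(live, edges[0], edges[-1])   # keep outliers in range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor the proportions so empty bins do not produce log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Synthetic example: the live scores have drifted upward and widened.
rng = np.random.default_rng(0)
train_scores = rng.normal(50, 10, 5_000)
live_scores = rng.normal(56, 13, 5_000)
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")
```

A check like this does not explain the model itself, but it tells you when the inputs no longer look like the data the model was trained on, which is usually the first symptom of the drift Thomas describes.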

Thomas adds that at a practical level, we can use explainable AI to make models more accurate and refined in the first place. “As AI becomes more embedded in our lives in more impactful ways, [...] we’re going to need not only governance and regulatory tools to protect consumers from adverse effects, we’re going to need technical solutions as well,” said Thomas.

“AI is becoming more pervasive, yet most organizations cannot interpret or explain what their models are doing,” said Litan. “And the increasing dependence on AI escalates the impact of mis-performing AI models, with severely negative consequences.”

Bernard takes it back to a practical level, saying that explainable AI [...] creates proof of what senior engineers and experts “know” intuitively, while simultaneously explaining the reasoning behind it.

“Explainable AI can also take commonly held beliefs and prove that the data does not back them up,” said Bernard. “Explainable AI lets us troubleshoot how an AI is making decisions and interpreting data, which is an extremely important tool in helping us ensure AI is helping everyone, not just a narrow few,” said Thomas. Hiring is an example of where explainable AI can help everyone.

Thomas says hiring managers deal with all kinds of hiring and talent shortages and usually get more applications than they can read thoroughly. This means there is a strong demand to be able to evaluate and screen applicants algorithmically. “Of course, we know this can introduce bias into hiring decisions, as well as overlook a lot of people who might be compelling candidates with unconventional backgrounds,” said Thomas.

“Explainable AI is an ideal solution for these sorts of problems because it would allow you to understand why a model rejected a certain applicant and accepted another. It helps you make your model better.”
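As a concrete illustration of the per-applicant explanation Thomas describes, here is a minimal sketch that attributes a screening decision to individual features by simple ablation. This is far cruder than production explainers such as SHAP, and the model, feature names and data are entirely hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical screening data; the feature names and label are synthetic.
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, 1_000),
    "num_certifications": rng.integers(0, 6, 1_000),
    "referral": rng.integers(0, 2, 1_000),
})
y = ((X["years_experience"] > 8) | (X["referral"] == 1)).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

def explain_decision(model, background, applicant):
    """Per-feature attribution by ablation: how much does the 'hire'
    probability move when each feature is replaced by its population
    average? Much simpler than SHAP, but it shows the idea."""
    base = model.predict_proba(applicant)[0, 1]
    deltas = {}
    for col in applicant.columns:
        perturbed = applicant.copy()
        perturbed[col] = background[col].mean()
        deltas[col] = model.predict_proba(perturbed)[0, 1] - base
    return base, deltas

applicant = X.iloc[[0]]   # explain one applicant's screening decision
score, deltas = explain_decision(model, X, applicant)
print(f"hire probability: {score:.2f}")
for feature, delta in sorted(deltas.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {delta:+.2f} if replaced by the average")
```

An audit along these lines makes it possible to ask, applicant by applicant, which inputs drove a rejection, which is exactly the question a hiring manager (or a regulator) would want answered.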

Making AI trustworthy

IBM’s AI Ethics survey showed that 85% of IT professionals agree that consumers are more likely to choose a company that’s transparent about how its AI models are built, managed and used. Thomas says explainable AI is absolutely a response to concerns about understanding and being able to trust AI’s results. “There’s a broad consensus among people using AI that you need to take steps to explain how you’re using it to customers and consumers,” said Thomas. “At the same time, the field of AI Ethics as a practice is relatively new, so most companies, even large ones, don’t have a Head of AI Ethics, and they don’t have the skills they need to build an ethics panel in-house.”

Thomas believes it’s essential that companies begin thinking about building those governance structures. “But there is also a need for technical solutions that can help companies manage their use of AI responsibly,” said Thomas.

Driven by industry, compliance or everything?

Bernard points to the oil and gas industry as an example of why explainable AI is necessary.

“Oil and gas have [...] a level of engineering complexity, and very few industries apply engineering and data at such a deep and constant level as this industry,” said Bernard.

“From the reservoir to the surface, every aspect is an engineering challenge with millions of data points and different approaches.” Bernard says that in this industry, operators and companies still rely on spreadsheets and other home-grown systems built decades ago. “Utilizing ML enables them to take siloed knowledge, improve it and create something transferable across the organization, allowing consistency in decision-making and process.”

“When oil and gas companies can perform more efficiently, it is a win for everyone,” said Bernard. “The companies see the impact in their bottom line by producing more from their existing assets, lowering environmental impact, and doing more with less manpower.” Bernard says this leads to more supply, helping ease the burden on demand.

“Even modest increases, like a 10% improvement in production, can have a massive impact on supply; the more production we have, [...] consumers will see relief at the pump.”

But Litan says the trend toward explainable AI is mainly driven by regulatory compliance. In Gartner’s 2021 AI in Organizations survey, respondents reported regulatory compliance as the top reason privacy, security and risk concerns are barriers to AI implementation. “Regulators are demanding AI model transparency and proof that models are not generating biased decisions and unfair ‘irresponsible’ policies,” said Litan.

“AI privacy, security and/or risk management starts with AI explainability, which is a required baseline.” Litan says Gartner sees the biggest uptake of explainable AI in regulated industries like healthcare and financial services. “But we also see it increasingly with technology service providers that use AI models, notably in security or other scenarios,” said Litan.

Litan adds that another reason explainable AI is trending is that organizations are unprepared to manage AI risks and often cut corners around model governance. “Organizations that adopt AI trust, risk and security management – which starts with inventorying AI models and explaining them – get better business results,” adds Litan. But IBM’s Thomas doesn’t think you can parse the uptake of explainable AI by industry.

“What makes a company interested in explainable AI isn’t necessarily the industry they’re in; it’s that they’re invested in AI in the first place,” said Thomas. “IT professionals at businesses deploying AI are 17% more likely to report that their business values AI explainability. Once you get beyond exploration and into the deployment phase, explaining what your models are doing and why quickly becomes very important to you.”

Thomas says that IBM sees some compelling use cases in specific industries, starting with medical research. “There is a lot of excitement about the potential for AI to accelerate the pace of discovery by making medical research easier,” said Thomas. “But, even if AI can do a lot of heavy lifting, there is still skepticism among doctors and researchers about the results.”

Thomas says explainable AI has been a powerful solution to that particular problem, allowing researchers to embrace AI modeling to help them solve healthcare-related challenges because they can refine their models, control for bias and monitor the results. “That trust makes it much easier for them to build models more quickly and feel comfortable using them to inform their care for patients,” said Thomas. IBM worked with Highmark Health to build a model using claims data to model sepsis and COVID-19 risk.

But again, Thomas adds that because it’s a tool for refining and monitoring how your AI models perform, explainable AI shouldn’t be restricted to any particular industry or use case. “We have airlines who use explainable AI to ensure their AI is doing a good job predicting plane departure times. In financial services and insurance, companies are using explainable AI to make sure they are making fair decisions about loan rates and premiums,” said Thomas.

“This is a technical component that will be critical for anyone getting serious about using AI at scale, regardless of what industry they are in.”

Guardrails for AI ethics

What does the future look like with AI ethics and explainable AI? Thomas says the hope is that explainable AI will spread and see adoption, because that will be a sign that companies take trustworthy AI, both the governance and the technical components, very seriously. He also sees explainable AI as an essential guardrail for AI ethics down the road.

“When we started putting seatbelts in cars, a lot more people started driving, but we also saw fewer and less severe accidents,” said Thomas. “That’s the obvious hope – that we can make the benefits of this new technology much more widely available while also taking the needed steps to ensure we are not introducing unanticipated consequences or harms. ” One of the most significant factors working against the adoption of AI and its productivity gains is the genuine need to address concerns about how AI is used, what types of data are being collected about people, and whether AI will put them out of a job.

But Thomas says that worry is contrary to what’s happening today. “AI is augmenting what humans can accomplish, from helping researchers conduct studies faster to assisting bankers in designing fairer and more efficient loans to helping technicians inspect and fix equipment more quickly,” said Thomas. “Explainable AI is one of the most important ways we are helping consumers understand that, so a user can say with a much greater degree of certainty that no, this AI isn’t introducing bias, and here’s exactly why and what this model is really doing.”

One tangible example IBM uses is AI FactSheets in IBM Cloud Pak for Data. IBM describes the factsheets as ‘nutrition labels’ for AI, which allow teams to list the types of data and algorithms that make up a particular model in the same way a food item lists its ingredients.
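To illustrate the idea, here is a minimal sketch of what such a machine-readable ‘nutrition label’ might contain. The field names and values are illustrative assumptions, not IBM’s actual AI FactSheets schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    """A hypothetical 'nutrition label' for a deployed model."""
    model_name: str
    intended_use: str
    algorithm: str
    training_data: list[str]                # datasets the model was trained on
    evaluation_metrics: dict[str, float]    # headline quality and fairness numbers
    known_limitations: list[str] = field(default_factory=list)

sheet = ModelFactSheet(
    model_name="loan-approval-v3",
    intended_use="Rank consumer loan applications for manual review",
    algorithm="Gradient-boosted decision trees",
    training_data=["applications_2018_2021", "credit_bureau_snapshot"],
    evaluation_metrics={"auc": 0.87, "demographic_parity_gap": 0.03},
    known_limitations=["Not validated for small-business loans"],
)
print(sheet)
```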

“To achieve trustworthy AI at scale, it takes more than one company or organization to lead the charge,” said Thomas. “AI should come from a diversity of datasets, diversity in practitioners, and a diverse partner ecosystem so that we have continuous feedback and improvement.”


From: forbes
URL: https://www.forbes.com/sites/jenniferhicks/2022/07/28/explainable-ai-is–trending-and-heres-why/
