
How To Prepare Faster For Looming AI Regulation: Turn Defense Into Offense

Council Post by Mark Palmer, Forbes Technology Council | Sep 1, 2022. Opinions expressed are those of the author.

Mark Palmer is the Senior Vice President and General Manager of Engineering at TIBCO, a global leader in enterprise data.

If you’re a business leader, you’re likely trying to get more from artificial intelligence (AI). You also might be wary of the inherent risk and bias that AI may introduce and even wonder if AI will become regulated. Indeed, employing AI is risky, regulation is coming and you may need to shift your way of thinking about technology to use it effectively.

As in sports, defense can be used to spark offense. With AI, the same elements of an effective culture of compliance can also spark innovation, collaboration and agility with data science.

Human-In-The-Loop For AI

In April 2021, the European Commission issued a proposal for AI regulation, the Artificial Intelligence Act (AIA).

It’s the first attempt to provide a legal framework for AI and regulate corporate responsibility and fairness for AI-infused systems. The EU’s ethics guidelines for trustworthy AI advise that “AI systems should empower human beings, allowing them to make informed decisions and foster fundamental rights. At the same time, proper oversight is needed through human-in-the-loop, human-on-the-loop, and human-in-command approaches.”

The first and most important player in the AI game is the human. Since humans create algorithms, and humans are biased, AI inherits that bias. The bad news, as Nobel Prize-winning psychologist Daniel Kahneman says in Noise, is that humans are unable to detect their own biases.

The good news, Kahneman suggests, is that there’s a simple way to identify and mitigate bias: have someone else identify it. He calls these people “decision observers.” The first step toward an effective AI culture is to make it a team sport, with data scientists and decision observers working together.

Building a culture of collaboration should make AI not only less biased but also more effective.

Hire Social Scientists, Not Only Data Scientists

With AI teams in mind, you might ask: “Who should be on my team? Whom should I hire?” The answer may seem counterintuitive: don’t load your data science teams with data scientists. Favor hybrid social scientists instead.

A hybrid social scientist is schooled primarily in philosophy, history, psychology and linguistics. For AI teams, they must also have foundational literacy in data, data storytelling and data science. They can provide an important counterweight to technical teams.

Where techies often start with the data they have, humanists generally try to understand what’s missing. Data scientists seek certainty; humanists embrace ambiguity. Scientists might think the data speaks for itself; humanists seek the story behind the numbers.

Numbers do lie. Hire hybrid social scientists to work with data science and decision-making teams. Effectiveness with AI will follow.

Governance And Transparency Set AI Free

The EU guidelines suggest data and systems should be explainable so anyone can understand them. This makes establishing habits of governance and transparency throughout the AI lifecycle a critical step toward AI effectiveness and compliance. Establish processes and procedures around AI as you create, assess, screen, deploy and evaluate algorithms used in production.

Good governance and transparency can make it easier to comply with regulations and create a culture of trust around AI. AI model governance, operationalization and management tools are essential for this task. Choose tools designed for team collaboration.
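To make that concrete, here is a minimal sketch of what a governance record for one model might look like, assuming a simple in-house registry rather than any particular vendor tool; the lifecycle stages, field names and example values are hypothetical illustrations of the create, assess, screen, deploy and evaluate steps described above.

```python
# Minimal sketch of a model governance record for an in-house registry.
# The lifecycle stages, fields and example values are hypothetical.
from dataclasses import dataclass, field
from datetime import date

LIFECYCLE_STAGES = ["created", "assessed", "screened", "deployed", "evaluated"]

@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str                # data scientist responsible for the model
    decision_observer: str    # reviewer tasked with spotting bias
    stage: str = "created"
    reviews: list = field(default_factory=list)

    def advance(self, reviewer: str, notes: str) -> None:
        """Move the model to the next lifecycle stage, logging who approved it and why."""
        next_index = LIFECYCLE_STAGES.index(self.stage) + 1
        if next_index >= len(LIFECYCLE_STAGES):
            raise ValueError("Model has already completed the lifecycle")
        self.reviews.append((date.today().isoformat(), reviewer, notes))
        self.stage = LIFECYCLE_STAGES[next_index]

# Example: a decision observer signs off on the bias assessment.
record = ModelRecord("appendicitis_risk", "v2.3",
                     owner="data_science_team", decision_observer="clinical_lead")
record.advance("clinical_lead", "Checked training data for demographic bias")
print(record.stage)  # "assessed"
```

The point of a record like this is the audit trail: every stage change names a human reviewer, which is what makes the process transparent to regulators and to the rest of the team.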

Don’t Forget The Data (Engineers)

Algorithms make decisions based on the data they’re given. They’re trained on historical data and then operationalized to production systems and attached to the “real” data that’s used to “score,” or evaluate, the algorithm. If there’s bias or private information in the data, it can introduce bias or breaches of trust into decisions.
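As a rough illustration of that train-then-score split, here is a minimal sketch; it assumes pandas DataFrames with hypothetical column names and uses scikit-learn’s logistic regression as a stand-in for whatever model a team actually deploys.

```python
# Minimal sketch of the train-then-score workflow described above.
# Column names and the choice of model are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

PII_COLUMNS = ["patient_name", "national_id"]   # private fields the model should never see
FEATURES = ["age", "visit_count", "lab_score"]  # hypothetical training features

def train(historical: pd.DataFrame) -> LogisticRegression:
    """Fit a model on historical data, dropping private information first."""
    clean = historical.drop(columns=PII_COLUMNS, errors="ignore")
    model = LogisticRegression(max_iter=1000)
    model.fit(clean[FEATURES], clean["outcome"])
    return model

def score(model: LogisticRegression, live: pd.DataFrame) -> pd.Series:
    """Attach the operationalized model to 'real' production data and score it."""
    clean = live.drop(columns=PII_COLUMNS, errors="ignore")
    probabilities = model.predict_proba(clean[FEATURES])[:, 1]
    return pd.Series(probabilities, index=live.index, name="predicted_probability")
```

Keeping the same feature and PII lists in both functions is the point: the data engineer’s job is to guarantee the scoring data has the same shape, and the same protections, as the data the model was trained on.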

The EU guidelines warn that “besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimized access to data.” Enterprises that use AI must ensure they’re using the right data in the right place at the right time. Data engineers are essential to ensuring the right data gets attached to the right algorithm.

AI should be trained with data that complies with data protection standards like the EU’s GDPR and the California Consumer Privacy Act (CCPA), and data science software should follow ISO standards.

Employ AI Analytics To Monitor, Manage And Refine AI-Based Decision Making

Once deployed, operationalization teams need to incorporate analytics. AI analytics can show which algorithms are deployed and the decisions they’re making and, when connected to the actions humans take as a result, can offer insight into their impact.

To explore the behavior of AI, attach AI metadata to business intelligence tools. For example, we worked with the National University Health System (NUHS) Singapore to help create a platform called Endeavor AI, through which health staff take digital notes on patients’ entire visits, and algorithms analyze those notes to make health predictions, such as a 95% probability that a patient has appendicitis. This insight is passed to the doctor in real time, and the doctor takes action based on their knowledge and experience.
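A minimal sketch of what attaching that metadata might look like follows; the decision-log format, field names and example values are assumptions for illustration, not the Endeavor AI implementation.

```python
# Minimal sketch: record each AI decision with enough metadata that a BI tool
# can later show which model made it, with what confidence, and what action a
# human took. The file and field names are hypothetical.
import csv
import os
from datetime import datetime, timezone

DECISION_LOG = "ai_decision_log.csv"
FIELDS = ["timestamp", "model_name", "model_version",
          "prediction", "confidence", "human_action"]

def log_decision(model_name: str, model_version: str, prediction: str,
                 confidence: float, human_action: str = "pending") -> None:
    """Append one scored decision to a CSV file that BI tools can read."""
    new_file = not os.path.exists(DECISION_LOG) or os.path.getsize(DECISION_LOG) == 0
    with open(DECISION_LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_name": model_name,
            "model_version": model_version,
            "prediction": prediction,
            "confidence": confidence,
            "human_action": human_action,
        })

# Example: an appendicitis prediction passed to the doctor in real time.
log_decision("appendicitis_risk", "v2.3", "appendicitis", 0.95)
```

Because each row carries the model name and version alongside the eventual human action, a dashboard can connect an algorithm’s recommendations to their real-world impact, which is the feedback loop described above.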

NUHS has built a process of collaboration around these models that begins with the university ecosystem, which conceptualizes and designs the algorithms. Once designed, they are carefully vetted, deployed to the running system and monitored by teams to ensure data privacy and effectiveness. When necessary, algorithms can be retrained and redeployed based on new data or techniques.

Exploratory analytics is important to include in your AI process. AI analytics can help expose the assumptions, behaviors and implications of AI recommendations and make them visible to all.

Turn Good Defense Into Good Offense

The implications of ignoring AI compliance can be harsh, so defense must be played.

In the EU, noncompliance may result in fines of up to 30 million euros or 6% of worldwide annual revenue, whichever is higher. An effective AI culture can turn good defense into offense, unlocking the opportunity and innovation potential of an AI-driven business, along with increased automation and more intelligent customer engagement. That, ultimately, is a better all-around approach to smarter business.



From: forbes
URL: https://www.forbes.com/sites/forbestechcouncil/2022/09/01/how-to-prepare-faster-for-looming-ai-regulation-turn-defense-into-offense/
