
Can Artificial Intelligence be regulated?


The year 2023 has seen increased scrutiny of AI technologies by governments around the world, creating regulatory momentum. AI-focused guidelines and regulations are on a path to maturity even as the pace of AI innovation continues to accelerate. A series of significant global announcements has set the tone here.

Take, for example, the G-7 countries’ statement on the Hiroshima AI Process regarding the adoption of a set of International Guiding Principles and a Code of Conduct for organisations developing advanced AI systems. Then came US President Biden’s Executive Order on “Safe, Secure and Trustworthy Development and Use of Artificial Intelligence” on October 30, 2023. That was followed by the UK’s Bletchley Declaration on AI Safety on November 2, 2023, and the UK Artificial Intelligence (Regulation) Bill, 2023, dated November 22, 2023, under the aegis of British Prime Minister Rishi Sunak.

Will these declarations help? They all sound grand, but in effect they signal that governments, with the exception of the EU and China, have taken a more hands-off approach while grappling with how to address the substantial risks AI poses, asking companies to self-regulate instead. What about the companies? Anticipating moves by governments to regulate the development and deployment of AI, four companies launched the Frontier Model Forum in July 2023. The White House secured voluntary commitments from 15 leading AI companies to manage the risks posed by AI.

What are the risks? While AI’s transformational potential is being acknowledged, legitimate questions are being asked about its significant risks: from the more alarmist claim that AI poses an existential threat to humanity, to real societal harms such as bias (computational and statistical source bias as well as human and systemic bias), data privacy violations, discrimination, disinformation, meddling in democratic processes such as elections, fraud, deepfakes, worker displacement, AI monopolies and threats to national security. But the big question is whether it is possible to regulate AI at all. To answer this, we need to understand what AI is. There is no single definition, but it is commonly understood that AI is the use of computer systems to perform tasks associated with intelligent beings.

Much of the talk about AI is really about the AI model: the data and the logic, an algorithm that can operate with varying levels of autonomy and produces probabilities, predictions or decisions as outputs. While all risks can emerge in a variety of ways and can be characterised and addressed, the risks posed by AI systems are unique. The algorithms may ‘think’ and improve through repeated exposure to massive amounts of data, and once that data is internalised, they are capable of making decisions autonomously.

The decision-making process can be opaque, even to those who create it. This is referred to as AI’s ‘black box’ problem (ref: Yavar Bathaee, Harvard Journal of Law & Technology, Vol. 31, No. 2, Spring 2018). However, from a legal and regulatory perspective, there is a need to focus on all the steps before and after the model, where just as many of the risks arise. It is also important to consider the human and social systems around the models, because how well those systems operate determines how well the models and the technology really work, and the impact they really have in applied settings.

The evolving and nascent AI regulatory framework distinguishes between responsible-use issues, which relate to humans and their roles, and trustworthy technology, which tends to be about the qualities and characteristics of the technology itself. It requires human-centric design processes, controls and risk management across the AI model lifecycle, with the objectives of fairness, enhanced transparency, identification and mitigation of risks in the design, development and use of AI, accountability in algorithmic decision-making (including for bias), and greater transparency in platform work. Laws built on legal doctrines, particularly intent and causation, are focused on human conduct and can therefore be applied to human-driven decision-making processes.

The White House AI Pledge underscores three principles that must be fundamental to the future of AI – safety, security and trust – and marks a critical step towards developing responsible AI. Companies have committed to advancing ongoing research in AI safety, including on the interpretability of AI systems’ decision-making processes, to increasing the robustness of AI systems against misuse, and to publicly disclosing their red-teaming and safety procedures in their transparency reports. Companies are now developing next-generation AI systems that are more powerful and complex than the large language models at the current industry frontier, such as Claude 2, PaLM 2 and Titan and, in the case of image generation, DALL-E 2. Many of these may have far-reaching consequences for national security and fundamental human rights.

The draconian EU AI Act, passed by the European Parliament in June 2023 (yet to become law), takes a risk-based approach, classifying AI technologies by the level of risk they pose. For AI with unacceptable levels of risk, the Act introduces a list of ‘prohibited AI practices’, which include, among others, the use of facial recognition technology in public places and AI that may influence political campaigns. Creators of ‘foundation models’ are required to register the product with an EU database before entering the market.

Creators of generative AI systems are required to provide transparency to end users and to ensure that details of the copyrighted data used to train their AI systems are publicly available. The transparency obligations include a requirement to disclose AI-generated content. Unlike the EU AI Act, President Biden’s Executive Order and the earlier “Blueprint for an AI Bill of Rights” take a rights-based regulatory approach and lack prohibitions on AI deployment as well as mechanisms for enforcement.

The recent Order, however, ‘requires’ developers of advanced AI systems to share their safety test results and other information with the US government before releasing them to the public. It also invokes the Defense Production Act and requires companies developing any foundation model that poses a serious risk to national security to notify the government when training the model and to share the results of safety tests. Further, the National Institute of Standards and Technology (NIST) has been tasked with establishing rigorous standards and tools for testing, evaluating, verifying and validating AI systems prior to public release.

Notably, the Order calls on Congress to pass a data privacy law, and directs agencies to combat algorithmic discrimination in the criminal justice system. What about India? India has no policy or law specifically regulating AI. Interestingly, the Delhi High Court recently stepped in, in Christian Louboutin SAS v. M/S The Shoe Boutique (CS (COMM) 583/2023), and held that AI chatbot responses cannot be the basis of legal or factual adjudication of cases in a court of law. India has a long way to go in AI regulation.



From: livemint
URL: https://www.livemint.com/ai/artificial-intelligence/can-artificial-intelligence-be-regulated-11701927112996.html

