Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the hcaptcha-for-forms-and-more domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114

Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the hcaptcha-for-forms-and-more domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114

Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the wordpress-seo domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114
Biden's Executive Order on AI Is a Good Start, Experts Say, but Not Enough
Sunday, December 22, 2024

Trending Topics

HomeInnovationBiden's Executive Order on AI Is a Good Start, Experts Say, but Not Enough

Biden’s Executive Order on AI Is a Good Start, Experts Say, but Not Enough

spot_img

The U. S. now has its farthest-reaching official policy on artificial intelligence to date.

President Joe Biden signed an executive order this week that urges new federal standards for AI safety, security and trustworthiness and addresses many other facets of AI risk and development. The broad order, nearly 20,000 words long, uses the term “artificial intelligence” to refer to automated predictive, perceptive or generative software that can mimic certain human abilities. The White House action came just two days before the start of an international organized and hosted by the U.

K. , during which world leaders will discuss global strategy on the rapidly advancing technology. “It’s kind of what we were hoping for,” says Duke University computer scientist Cynthia Rudin, who studies machine learning and for AI regulation.

Rudin doesn’t see Biden’s order as perfect, but she calls it “really, really big” in both literal size and likely impact: “It involves a huge number of government entities and starts new regulatory and safety boards that will be looking into AI as their primary task, not just a side task. ” “There is a lot that the White House is packing into this executive order,” agrees Daniel Ho, a professor of law and political science at Stanford University who studies AI governance. “I do think it’s a very important advance.

” (Ho serves on the National Artificial Intelligence Advisory Commission but spoke to in an individual capacity, not as a NAIAC member. ) The rapid rise of artificial intelligence—specifically, generative AI systems such as OpenAI’s ChatGPT—has spurred intense concern over the past year. There are some about a future robot takeover, but very are also unfolding in the present.

For example, AI models clearly exacerbate the through visual deepfakes and instantaneous text production. Machine learning algorithms have encoded bias that can magnify and automate existing patterns of discrimination, as with an algorithmic IRS tool that for audits. These biases can long-term, emerging research shows.

There are threats to privacy in the vast troves of data that are collected through AI systems—including facial recognition software—and used . Artificial intelligence could also become a major national security threat; for instance, AI models could be used to speed up the development of . “Artificial intelligence needs to be governed because of its power,” says Emory University School of Law professor Ifeoma Ajunwa, who researches ethical AI.

“AI tools,” she adds, “can be wielded in ways that can have disastrous consequences for society. ” The new order moves the U. S.

toward more comprehensive AI governance. It builds on prior Biden administration actions, such as the list of that multiple large tech companies agreed to in July and the released one year ago. Additionally, the policy follows two other previous AI-focused executive orders: one on the federal government’s own AI use and another aimed at boosting federal hiring in the AI sphere.

Unlike those previous actions, however, the newly signed order goes beyond general principles and guidelines; a few key sections actually specific action on the part of tech companies and federal agencies. For instance, the new order mandates that AI developers share safety data, training information and reports with the U. S.

government prior to publicly releasing future large AI models or updated versions of such models. Specifically, the requirement applies to models containing “tens of billions of parameters” that were trained on far-ranging data and could pose a risk to national security, the economy, public health or safety. This transparency rule will likely apply to the next version of OpenAI’s GPT, the large language model that powers its chatbot ChatGPT.

The Biden administration is imposing such a requirement under the Defense Production Act, a 1950 law most closely associated with wartime—and notably used early in the COVID pandemic to boost domestic supplies of N95 respirators. This mandate for companies to share information on their AI models with the federal government is a first, though limited, step toward mandated transparency from tech companies—which many AI experts have been advocating for in recent months. The White House policy also requires the creation of federal standards and tests that will be deployed by agencies such as the Department of Homeland Security and the Department of Energy to better ensure that artificial intelligence doesn’t threaten national security.

The standards in question will be developed in part by the National Institute of Standards and Technology, which released its own in January. The development process will involve “red-teaming,” when benevolent hackers work with the model’s creators to preemptively parse out vulnerabilities. Beyond these mandates, the executive order primarily creates task forces and advisory committees, prompts reporting initiatives and directs federal agencies to issue guidelines on AI within the next year.

The order covers eight realms that are outlined in : national security, individual privacy, equity and civil rights, consumer protections, labor issues, AI innovation and U. S. competitiveness, international cooperation on AI policy, and AI skill and expertise within the federal government.

Within these umbrella categories are sections on assessing and promoting ethical use of AI in education, health care and criminal justice. “It’s a lot of first steps in many directions,” Rudin says. Though the policy itself is not much of a regulation, it is a “big lead-in to regulation because it’s collecting a lot of data” through all of the AI-dedicated working groups and agency research and development, she notes.

Gathering such information is critical to the next steps, she explains: in order to regulate, you first need to understand what’s going on. By developing standards for AI within the federal government, the executive order might help create new AI norms that could ripple out into the private sector, says Arizona State University law professor Gary Marchant, who studies AI governance. The order “will have a trickle-down effect,” he says, because the government is likely to continue to be a major purchaser of AI technology.

“If it’s required for the government as a customer, it’s going to be implemented across the board in many cases. ” But just because the order aims to rapidly spur information-gathering and policymaking—and sets deadlines for each of these actions—that doesn’t mean that federal agencies will accomplish that ambitious list of tasks on time. “The one caution here is that if you don’t have the human capital and, particularly, forms of technical expertise, it may be difficult to get these kinds of requirements implemented consistently and expeditiously,” Ho says, alluding to the fact that less than one percent of people graduating with PhDs in AI enter government positions, according to a 2023 .

Ho has followed the outcome of the previous executive orders on AI and found that of the mandated actions were verifiably implemented. And as broad as the new policy is, there are still notable holes. Rudin notes the executive order says nothing about specifically protecting the privacy of biometric data, including facial scans and voice clones.

Ajunwa says she would’ve liked to see more enforcement requirements around evaluating and mitigating AI bias and discriminatory algorithms. There are gaps when it comes to addressing the government’s use of AI in defense and intelligence applications, says Jennifer King, a data privacy researcher at Stanford University. “I am concerned about the use of AI both in military contexts and also for surveillance.

” Even where the order appears to cover its bases, there might be “considerable mismatch between what policymakers expect and what is technically feasible,” Ho adds. He points to “watermarking” as a central example of that. The new policy orders the Department of Commerce to identify best practices for within the next eight months—but there is no established, robust technical method for doing so.

Finally, the executive order on its own is insufficient for tackling all the problems posed by advancing AI. Executive orders are inherently limited in their power and can be easily reversed. Even the order itself calls on Congress to pass data privacy legislation.

“There is a real importance for legislative action going down the road,” Ho says. King agrees. “We need specific private sector legislation for multiple facets of AI regulation,” she says.

Still, every expert spoke or corresponded with about the order described it as a meaningful step forward that fills a policy void. The European Union has been publicly working to develop the E. U.

AI Act, which is , for years now. But the U. S.

has failed to make similar strides. With this week’s executive order, there are efforts to follow and shifts on the horizon—just don’t expect them to come tomorrow. The policy, King says, “is not likely to change people’s everyday experiences with AI as of yet.

”.


From: scientificamerican
URL: https://www.scientificamerican.com/article/bidens-executive-order-on-ai-is-a-good-start-experts-say-but-not-enough/

DTN
DTN
Dubai Tech News is the leading source of information for people working in the technology industry. We provide daily news coverage, keeping you abreast of the latest trends and developments in this exciting and rapidly growing sector.

LEAVE A REPLY

Please enter your comment!
Please enter your name here

spot_img

Must Read

Related News