In a serious effort to catch up to the runaway train that is artificial intelligence, President Biden's comprehensive executive order seeks to set rules and regulations for AI safety, security and trust without sacrificing innovation. The executive order was, not coincidentally, released on the eve of the United Kingdom's major AI Safety Summit. It seeks to leapfrog the U.S. into the forefront of efforts to devise global rules for AI use and to quell fears about where the technology could lead. Additionally, in an unprecedented move today during the U.K.'s AI Safety Summit, the U.S. and 27 other nations issued the Bletchley Declaration, an agreement to cooperate so that AI develops in a way that is "human-centric, trustworthy and responsible."

Biden's executive order requires that those developing cutting-edge AI systems share their test results with the government before launching new products and conduct "red-teaming," which dedicates teams to testing technologies for flaws and vulnerabilities. It directs the National Institute of Standards and Technology to develop red-team standards.
This process is designed to prevent powerful new AI from helping non-experts design or acquire biological or nuclear weapons or build powerful offensive cyber capabilities, and to ensure AI does not evade human control. These are only a few of the nightmare scenarios this technology could produce.

The White House has increasingly stepped in as congressional dysfunction and inertia have left the U.S. behind much of the world in exerting control over Big Tech. The order covers any "foundation model that poses a risk to national security, national economic security, or national public health and safety." Last year's release and widespread use of ChatGPT, a large language model generative AI that can simulate human conversation, answer questions, produce images and write stories or papers, raised concerns about the future from leading technologists.
Rival firms have released their own versions in what appears to be an AI arms race among startups and tech giants. In addition to measures like mandating that federal agencies have a chief AI officer, the executive order seeks to protect against AI-generated false content by creating standards and requiring that AI-generated content be verified with watermarks. It also protects privacy by setting guidelines for how data is collected and shared.
Some of the executive order's provisions are requests or guidelines, leaving ample wiggle room for AI developers to evade them, though Commerce Department licensing rules may constrain them. Acknowledging that administrative steps are not enough, the executive order admonished Congress to pass needed legislation. As Senate Majority Leader Chuck Schumer (D-N.Y.) put it: "There's probably a limit to what you can do by executive order… everyone admits the only real answer is legislative."

The U.S. lags behind other major tech players such as the European Union, China and Japan. The EU has produced the most comprehensive AI legislation, on top of equally thorough data privacy rules, to protect the public from unwanted algorithms.
It also has competition legislation aimed at Big Tech. In July, China published rules governing generative AI, following earlier restrictive data security and algorithm laws. For its part, Japan also has existing laws that cover some AI services but is still in the process of devising comprehensive regulations.
While there is overlap among the AI and data governance laws of leading countries, a large global governance deficit remains. In sharp contrast, Congress has yet to pass any comprehensive data privacy protection or AI legislation. As power abhors a vacuum, Big Tech and its lobbyists have shaped the debate on both topics.
After meetings with seven Big Tech firms, the White House announced the companies' agreement to abide by voluntary safety commitments for AI. To be fair, the pace of technology is exponential, while governance tends to be incremental. The imperative to commercialize AI has led Big Tech to push for regulations so that customers and the public have confidence that their products are safe.
Major firms, for example, proposed ethical principles for AI interaction with humans in 2018 and 2019. The challenge for Big Tech is to balance innovation with safety and accountability. Current large language models like ChatGPT can misinterpret the data fed into them, sometimes yielding false or nonsensical answers, a phenomenon known as hallucination.
Yet OpenAI, Microsoft, Google and Meta rolled out these products despite reservations from safety experts. The challenge for government is to set rules and standards that safeguard the public interest while not unduly setting back innovation from which the public would benefit. The declaration issued at the U.K. summit is an encouraging sign. Also encouraging is Vice President Kamala Harris's participation in the AI conference, which underscores Biden's effort to play catch-up.
The White House effort is a belated but positive step. But Congress remains dangerously delinquent in legislating data governance in general and AI in particular, and without such legislation, U.S. leadership in managing the tech revolution faces a credibility problem. At stake is the larger risk of a race to the bottom if consensus on basic global rules and standards for AI use proves elusive.
From: thehill
URL: https://thehill.com/opinion/technology/4287525-biden-is-making-strides-in-ai-governance-but-still-playing-catch-up/