US will force Big Tech firms to disclose AI safety strategies
US president Joe Biden has said organisations whose artificial intelligence (AI) models jeopardise national security must report on what they are doing to ensure their AI tools are safe.
Biden will invoke the Defense Production Act to order businesses developing AI models that pose significant national security, economic security, or public health risks to share their safety test results.
White House deputy chief of staff Bruce Reed said today that it is the “strongest action” any government has taken on AI safety, advancing an “aggressive strategy” to reduce the risks of AI.
Various government agencies, including the departments of commerce, energy, and homeland security, have been enlisted to enforce Biden’s order, which arrives as countries worldwide grapple with how to regulate AI systems.
Earlier this year 15 major companies, including Amazon, Google, Meta, Microsoft, and OpenAI, voluntarily committed to managing AI risks.
The commerce department is set to develop guidelines for adding watermarks to AI-generated content, aiming to combat fraud and deepfakes and to bolster competition.
Commenting on the order, Kent Walker, president of global affairs at Google and Alphabet, said: “We are confident that our longstanding AI responsibility practices will align with its principles.
“We look forward to engaging constructively with government agencies to maximize AI’s potential — including by making government services better, faster, and more secure.”
It comes as the UK is two days away from hosting its highly anticipated AI safety summit, taking place on Wednesday and Thursday this week at Bletchley Park. Last week UK Prime Minister Rishi Sunak said he would not rush into regulating AI.
A source in the Department for Science, Innovation and Technology told City A.M. the government was aware of Biden’s move.
“I think it tees up the summit really nicely,” they said.
“This is a massive vindication for the PM and Michelle [Donelan]’s approach and justifies the many hours we’ve put into it.
“We’ve been the first government to talk meaningfully about the risks from frontier AI. And the first to spend any money on AI safety with the £100m for the frontier AI taskforce announced earlier this year, and obviously have the summit too.
“The US announcement is a good sign that other governments have recognised the need to act too,” they added.