UK government set to speed up regulation of AI models like ChatGPT and Google Gemini
The UK government is reportedly starting to speed up its approach to regulating artificial intelligence (AI) as it looks to create greater protections around the emerging technology.
New legislation would likely limit the production of the large language models that underpin products such as ChatGPT and Google’s Gemini, according to the Financial Times, which cited two people briefed on the plans.
While nothing is finalised yet, the law would require developers of the most advanced AI models to share their algorithms with the government and provide evidence that they have carried out safety testing.
“Officials are exploring moving on regulation for the most powerful AI models,” said one of the people briefed on the situation. The Department for Science, Innovation and Technology (DSIT) is “developing its thinking” on what AI legislation might look like, they added.
Another unnamed source said the rules would apply to the underlying technology that sits behind AI products rather than to the consumer-facing applications themselves. No law will be introduced imminently, the people said.
A DSIT spokesperson said: “As we’ve previously said, all countries will eventually need to introduce some form of AI legislation, but we will not rush to do so until there is a clear understanding of the risks.
“That’s because it would ultimately result in measures which would quickly become ineffective and outdated. However in the case of highly capable general-purpose AI systems, we set out our initial thinking on targeted binding measures for developers earlier this year.”
Following the government’s recent response to a consultation, key regulators are due to publish their AI plans, which will detail their strategic approach to managing the technology.
Last week, the Competition and Markets Authority (CMA) said it is concerned about the concentration of market power among a handful of technology giants that produce the largest AI models.
The government, which has previously described AI as a potential “existential threat”, has also been facing pressure from a growing number of MPs frustrated with Prime Minister Rishi Sunak’s slow approach.
He has previously said that “the UK’s answer is not to rush to regulate”.
Lord Chris Holmes, an advocate for the use of technology for public good, has been leading the push to encourage the government to take a more active approach to regulating AI.
“We’re building a raft of support and people behind that perspective at the moment,” he recently told City A.M.
It comes a month before South Korea is due to host the world’s second AI safety summit, following the first, which Britain held in November last year. The summit is set to take place in Seoul on 21 and 22 May.