New AI principles hint at pro-innovation future of UK regulation
Britain’s competition watchdog has set out guidelines to steer the development of artificial intelligence (AI), which lawyers suggest could be a blueprint for a pro-innovation future of AI regulation.
The Competition and Markets Authority (CMA) has proposed seven principles to guide the use and development of foundation models, the technology underpinning AI systems such as OpenAI’s GPT-4 and Meta’s Llama 2.
The report encourages a competitive ecosystem in AI development and stresses the importance of open access to chips, processors, and training data.
It also says developers and businesses must be held accountable for the outcomes generated by AI systems.
Gareth Mills, partner at law firm Charles Russell Speechlys, said that the principles are “necessarily broad” to create a low barrier to entry for the sector, meaning smaller companies can compete with bigger players.
“The CMA has shown a laudable willingness to engage proactively with the rapidly growing AI sector, to ensure that its competition and consumer protection agendas are engaged at as early a juncture as possible,” Mills said.
Prime Minister Rishi Sunak has been pushing his agenda for the UK to become a global science and tech superpower, including in AI.
Earlier this year, he stressed that a growth approach must be balanced with crucial safety regulations.
“The UK is expressly taking a ‘pro-innovation’ approach to regulating AI and is proceeding cautiously before imposing regulatory burdens on AI companies,” said Xuyang Zhu, senior counsel in the technology, intellectual property and information group at law firm Taylor Wessing.
This contrasts with the EU’s stricter regulatory approach to generative AI, which could be “technically challenging and resource-intensive to meet,” Zhu explained.
“The UK’s current approach of decentralised regulation leaves existing sector regulators to frame their own priorities, and the upcoming AI conference is unlikely to be a factor for them.”
In November the UK is set to host an AI safety summit at Bletchley Park in Milton Keynes, where world leaders will gather to discuss how to mitigate the risks of new technologies.
CMA chief executive Sarah Cardell said AI technology has “real potential” in terms of productivity benefits but “we can’t take a positive future for granted.”
“There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”
As well as focusing on business use and development of AI, the principles also look to protect consumers and ensure they have sufficient diversity of choice in the market.
John Buyers, head of AI at Barbican-based law firm Osborne Clarke, said this aligns with a broader trend among policymakers of promoting transparency, so that users know whether they are dealing with AI or a human.
“This is important to ensure trust which in turn powers adoption, productivity and growth,” Buyers explained.