AI: ‘Doom mongering’ Bletchley Declaration slammed by tech experts for limited scope
The Bletchley Declaration on artificial intelligence (AI) safety, first announced by UK technology secretary Michelle Donelan today, has been labelled as “doom mongering” and limited in scope by tech experts.
Twenty-eight governments, including the UK, US, EU and China, have agreed to the Bletchley Declaration, named after Bletchley Park, the site of the AI safety summit taking place today and tomorrow.
In a keynote speech at the summit, Donelan said the agreement represents a global commitment to mitigating the risks of AI and to “deepening our understanding of the emerging risks of frontier AI”.
She added that it is a “landmark achievement” that lays the foundations for the discussions being held at the summit.
Donelan also revealed that South Korea is set to hold the next AI safety summit in six months’ time, with France hosting the following one in a year. “This is no time to bury our heads in the sand,” said Donelan.
The declaration, which paves the way for future global collaboration on AI safety, states: “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”
But Lewis Liu, founder and chief executive of Eigen Technologies, said this description is “overly influenced by a deeply flawed analysis and an agenda set by those Big Tech companies seeking to dominate the policy-making process.
“This kind of doom mongering echoes the words of OpenAI and its peers, who have been among the most influential corporate lobbyists in the run-up to the Summit.”
Liu said startups fear Big Tech could control and stifle open-source AI, hindering AI’s true potential and neglecting key issues like IP, bias, and the need for “robust” governance. “This would be a disaster,” he added.
Open-source AI models are those whose source code is made publicly available, allowing anyone to view, use, modify, and distribute it freely. This makes it easier for startups with limited resources to develop their own models.
The UK’s AI minister told City A.M. he would “absolutely hate to see some sense of regulatory capture by the big labs.”
He stressed that the AI safety summit is designed to focus purely on frontier models, such as ChatGPT.
Elena Simperl, professor of computer science at King’s College London, said it is “encouraging” that the declaration has global reach and includes China.
“More worrying is the continued emphasis on frontier models rather than the whole range of very powerful AI systems that have been put to use in the last 10 years, which are already doing real harms, documented in the media or in AI incidents databases,” she said.
Civil society groups attending the AI safety summit have also signed a joint communique urging governments across the world to prioritise regulating well-established harms that impact people’s daily lives over frontier AI.
Signatories of the communique include the AI Now Institute, Responsible AI UK, and the Algorithmic Justice League.
Following her keynote speech, Donelan introduced the first speakers of the day: Gina Raimondo, US secretary of commerce; Wu Zhaohui, China’s vice minister of science and technology; Věra Jourová, vice-president of the European Commission; and Ian Hogarth, chair of the UK’s AI taskforce.
Raimondo confirmed the US is setting up its own AI Safety Institute, which will work in close partnership with the UK’s.
“We have to get to work”, she told the audience of technology leaders, politicians and companies from across the world.
Last week, Prime Minister Rishi Sunak announced he would establish a new AI safety institute.
He said: “It will advance the world’s knowledge of AI safety, and it will carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of, exploring all the risks from social harms like bias and misinformation through to the most extreme risks… we will make the work of our AI Safety Institute available to the world.”