AI governance – who’s in control?
Alan Turing, the scientist who laid the foundations of artificial intelligence, predicted in 1951: “At some stage…we should have to expect the machines to take control.”
Since the advent of ChatGPT, public awareness of AI has certainly rocketed, and governments around the world are turning their attention to a series of initiatives around its governance.
The G7 have already pledged to create an international (but voluntary) code of conduct around the most advanced uses of AI. The ambition is to find a way for countries to pursue their own forms of AI governance while also creating a network of international cooperation.
Within the G7 there is a divergence between the US, UK and Japan, which are more in favour of voluntary codes of conduct, and the EU, which has already drafted legislation, the AI Act, with much stricter rules such as a ban on the most harmful use cases. The Global Partnership on AI (GPAI), a wider group of member countries extending beyond the West, will be meeting in New Delhi in December to discuss the same themes and challenges.
Alongside this, the UK AI Safety Summit will be held at Bletchley Park on Nov 1 and 2. The focus of this summit is specifically the existential risk of advanced AI – termed by the tech companies ‘frontier models’ – rather than current societal risks such as bias and misinformation. Warnings from experts such as the AI pioneer Geoffrey Hinton about the threat to humanity echo Alan Turing’s prediction from the 1950s.
The summit will also be complemented by an associated ‘AI Fringe’, a series of events hosted across London and the UK, which is intended to bring a broad and diverse range of voices into the conversation. It will expand discussion around safe and responsible AI beyond the Summit’s focus on frontier safety. The Fringe intends to “provide a platform for all communities – including those historically underrepresented – to engage in the discussion and enhance understanding of AI and its impacts so organisations can harness its benefits.”
Events will run throughout October with the intention of bringing together the views of industry, civil society, and academia. The Alan Turing Institute kicks off with a roundtable focusing on existing UK strengths on AI safety and opportunities for international collaboration.
The British Academy will be looking at the possibilities of AI for the public good, while industry group techUK will be considering the opportunities AI presents, its potential risks, and the solutions that exist in the tech sector. Finally, the Royal Society will be horizon-scanning AI safety risks across scientific disciplines.
I welcome this programme of events and will report back on themes that arise from the discussions. Clearly, AI safety is an issue that impacts us all and is not just a question for academics and technical experts. We need far greater public involvement in this essential question of AI safety and, ultimately, of what it means to be human in an age of AI.
With thanks to Simon Kuper for this analysis: political (and public) responses to questions of public safety – think seat belts or smoking – can take a long time to get past the ‘under-informed noisemaker’ phase and reach a stage where expertise has ‘upskilled’ the debate, making a sophisticated majority agreement more likely. With many of the challenges of the digital age, including AI safety, we do not have the luxury of letting this play out over decades as it has in the past. Fortunately, in the digital age we have the tools to speed it up.
We need to foreground expertise and work harder to engage with the public. This can be achieved through digitally enabled, well-informed, direct democratic processes. One model, used effectively in Ireland for deliberations around the legalisation of abortion, is the citizens’ assembly. A representative group of 99 citizens was convened to consider expert testimony and recommend legislation, which was then subject to a referendum (which approved the recommendation).
Taiwan’s Digital Minister Audrey Tang is another trailblazer in this space, using what she calls alignment assemblies to reach consensus on controversial issues – for example, brokering a compromise between competing interests when Uber arrived in Taiwan and faced a backlash from taxi unions.
In the introduction to the AI Safety Summit, Ministers have set out plans to “allow members of the public from anywhere in the world… to ask questions and share their views directly with government” – and I sincerely urge everyone to get involved and do just that. You will be able to address your questions to the Secretary of State Michelle Donelan MP on LinkedIn on October 18.
AI governance must be everyone’s concern. We must be part of the discourse if we are to ensure that ‘safe’ systems can work for the benefit of us all.