Parliament debates the pressing case for AI regulation
On the penultimate day of Parliament before the summer recess we had the opportunity to lift our heads a little and consider perhaps the most significant technology we have ever held in our human hands: I speak, of course, of AI.
AI’s transformative potential can’t really be in doubt. Parliament’s task, though, is to plot the right course for the country, looking internationally as well as internally: weighing the risks and finding the ‘right’ regulatory framework, while demonstrating rational positivity towards innovation.
It is probably always worth reflecting on Lord King of Lothbury and John Kay’s coinage: we live in a radically uncertain world.
Having gone to the same Cambridge college as Alan Turing, I find it particularly pertinent that the large language models we are now ‘chatting’ with are, some say, passing the legendary ‘Turing test’.
Opening the debate, Lord Ravensdale put it perfectly, describing AI as “the most important technology of our generation”.
It is clear that, not just across Parliament but right across society, we all need to be talking about and discussing AI far more than we currently are.
It was a timely debate: although the Government only released their AI white paper earlier this spring, it has somewhat been overtaken by the technology’s development, and it would be fair to say they are now, rightly, having a bit of a rethink of their strategy.
Lord Ravensdale offered a solution, suggesting that, in this instance, research and regulation are different sides of the same coin:
“The first thing that we need to think about is how we can implement a sovereign research capability in AI which will develop regulation in parallel… We need to learn by doing, we need agencies that can attract top-class people and we need new models of governance that enable the agility and flexibility that will be required for public investment into AI research and regulation.”
It would be fair to say that, despite twenty years of concentrated effort, there has been neither significant progress nor consensus among AI researchers on credible proposals for the problems of alignment and control.
It is this that led many senior AI academics, as well as leaders from Microsoft, Google, OpenAI, DeepMind and Anthropic, among others, to sign a short public statement, hosted by the Centre for AI Safety, arguing that:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
If any further reason were needed, which it ought not to be, AI perhaps offers the most compelling case for us to act internationally, connectedly and collaboratively, right around the world.
As I put it in my speech during the Lords debate, so much rests on the AI developers’ ability to make themselves, and indeed the AI itself, trustworthy. It is not a question of whether we should trust these organisations; it is entirely a question of whether, through their efforts, their endeavours and their actions, they will prove themselves worthy of our trust.
I reminded the Minister of my amendments to what is now the Financial Services and Markets Act, suggesting an AI officer on the board of every business, and indeed a requirement for businesses to explain their ethical use of AI.
I also pushed the Government once again on what I believe is an urgent need to establish a commission to consider the complete transformation of our school curriculum, not least to fully explore areas of data literacy and competency, and similarly digital and financial literacy and competency. If we all accept the significance of AI, it must follow that our schools need to be equipped to educate for this new stage in our human story.
And that new stage is happening at speed. To make the point, it took Facebook four and a half years to get 100 million users; it took Google two and a half years to get 100 million users; it took ChatGPT just two months.
We need a coordinated approach across all government departments, and beyond, to achieve a coherent strategy on AI. To my mind, real cross-Whitehall working has only occurred twice: once for the Olympic and Paralympic Games, and a second time for the Covid pandemic. Only twice. It is not easy, but it is possible.
The nations that get this right and make the most of the technologies could experience as much as a tripling of economic growth.
Responding during the debate, Minister Camrose highlighted:
“As stated in the AI regulation White Paper, unless our regulatory approach addresses the significant risks caused or amplified by AI, the public will not trust the technology and we will fail to maximise the opportunities it presents. To drive trust in AI, it is critical to establish the right guardrails. The principles at the heart of our regulatory framework articulate what responsible, safe, and reliable AI innovation should look like.”
Answering my point on trustworthiness, the Minister said:
“[This] work is supported by the Government’s commitment to tools for trustworthy AI, including technical standards and assurance techniques. These important tools will ensure that safety, trust and security are at the heart of AI products and services, while boosting international interoperability on AI governance.”
The launch of the Foundation Model Taskforce must be seen as good news in this respect, and it should be able to provide further critical insights into this question. The Minister also confirmed initial funding of £100 million for the taskforce and, perhaps as importantly, that its chair, Ian Hogarth, will report directly to the Prime Minister.
Addressing my questions on ethics, the Minister said:
“Our approach is underpinned by a set of values-based principles, aligned with the OECD and reflecting the ethical use of AI through concepts such as fairness, transparency and accountability.”

It is clear that the AI summit, planned for the autumn, will mark a significant moment for the Government to set out their stall to the world.
The Minister noted that it will bring together key countries, as well as leading tech companies and researchers, to drive targeted, rapid international action to “guarantee safety and security at the frontier of this technology.”
In conclusion, while considering the risks, the regulation, the opportunities and the innovation, perhaps, for now, we should reflect on this: AI, like a pandemic, knows no boundaries.