We can’t base artificial intelligence laws on risks we only imagine to be real
We’ve heard endless calls to ‘do something’ on artificial intelligence, but politicians are still basing their thinking on the risks they assume to be real, not the ones we know about, writes James Boyd-Wallis.
Last week, Rishi Sunak said he wants to balance safety and innovation to realise the potential of artificial intelligence. But to sell his vision, he must do more to bring MPs with him.
As we now know, AI offers incredible potential but carries significant risks. Rishi Sunak mentioned how it is helping people walk again, discover new drugs and detect cancer earlier. For all these benefits, concerns remain.
The investment bank Goldman Sachs reported that generative AI could affect as many as 300 million jobs. Matt Clifford, who is helping the Prime Minister set up the AI taskforce, said the technology could “kill many humans” within two years.
Some 350 global AI experts have warned it could lead to the extinction of humanity.
As the doomsaying continues, the cries for “something to be done” intensify. Labour and SNP MPs have called for stronger regulation and a more interventionist approach.
Reflecting this concern, our new research shows that only 6 per cent of MPs believe existing regulators have the necessary skills or expertise to govern AI.
Conservative and Labour MPs share this lack of confidence, with just 7 per cent of Conservatives and 6 per cent of Labour MPs saying that existing regulators have the expertise needed. These findings call into question the AI whitepaper published in March, which recommended a sector-by-sector approach to governance, leaving it to individual regulators to decide what is best.
Our research also shows that only 14 per cent of MPs would prioritise growth and innovation over the safety of citizens and society. Just under a quarter (23 per cent) of MPs say they understand AI’s implications.
Rishi Sunak wants the UK to be home to a global AI safety watchdog akin to the International Atomic Energy Agency for nuclear power, among other measures. But he can’t do any of this if he can’t bring politicians on board.
It is a discussion worth having. The government’s approach sits between the EU’s stricter, more prescriptive AI Act and the lighter-touch approach taken in the US.
We need to work out what is safe and what is not, and put guardrails around the latter. The goal should be sensible rules that allow flexibility between different industries and uses. For instance, retailers who want AI to help recommend the best outfit need not be regulated like a healthcare provider. Such a middle ground could help maintain the UK’s position as a global AI leader, but only if the PM resists calls for greater regulation based on assumed risks.
If we rush to regulate, we may jeopardise innovation and the benefits AI can bring. Governments do not create thoughtful policies in a climate of fear.
So we need a broader discussion about how organisations use AI and the accompanying opportunities and challenges. What does the use of AI facial recognition technology by the police mean for bias and privacy? How can AI help put patients at the centre of the NHS and improve outcomes?
And that debate must involve all stakeholders: MPs, AI firms big and small, civil society, academics and those affected by AI. We must also ensure that regulators have the expertise they need and that there are mechanisms to reassess the risks as AI develops.
Only through dialogue and debate can Rishi Sunak sell his vision, guard against knee-jerk policymaking and create a vibrant AI sector in the UK.
In doing so, we can ensure AI helps us better diagnose and treat diseases, enhance productivity and, ultimately, improve our lives.