Reining in ChatGPT: AI tech titans must navigate a regulatory maze to thrive in the UK and beyond
As ChatGPT continues to amaze, and Alphabet and others try to follow suit, Jess Jones explores some of the legal and regulatory challenges facing AI tech, and what they mean for the UK’s ambition to become a science superpower.
“The genie is out of the bottle – and it’s hard to imagine how it goes back in”.
That’s what Alfonso Marone, partner and UK head of TMT Strategy & Deals at KPMG, said about ChatGPT, the catalyst behind the recent boom in artificial intelligence (AI).
When it was released to the world in November 2022, it opened the floodgates to a new era of AI innovation.
ChatGPT, which was developed by Microsoft-backed OpenAI, has since gained momentum at a dizzying rate.
It became the fastest-growing app of all time, reaching 100m monthly active users within two months of launch, according to investment bank UBS.
In January alone, SimilarWeb data showed 672m users took a virtual trip to the interactive bot, which has reeled in around 25m daily users in the past week.
Microsoft has since incorporated the latest in artificial intelligence into a new version of its search engine Bing.
Bard and Ernie
Other tech giants are throwing their hats into the ring.
Last week, Google revealed its chatbot ‘Bard’ – although an embarrassing gaffe during a promotional video wiped $100bn (£82bn) off Alphabet’s market value.
But shares in Baidu rallied after the Chinese company said it would launch an AI-powered chatbot called ‘ErnieBot’ in March.
Despite Google’s hiccup, these launches and the success of ChatGPT show that the tech titans are pouring money into AI innovation, and other companies, which see a huge number of potential applications, may want to follow suit.
However, AI tech, and the firms that back it, face a number of challenges.
Truth and morality
“Cheating on a test at school is already a material issue”, says Marone, “but there are far more sinister threats lurking”.
The line between what looks true and what is true is blurry, which means AI chatbots have the potential to be a “megaphone for disinformation and wrong decision-making”, he adds.
Charlotte Dunlap, research director at GlobalData, agrees: “There are concerns over whether it may propagate misinformation resulting from its tendency to not cite its sources”.
Other concerns include whether AI should be allowed to make moral judgments and whether large language models are reproducing discriminatory stereotypes and biases inherited from their human-generated training data and design.
Initial regulatory intervention on the latter issue has already appeared in the US, particularly around recruitment uses such as automated employment decision tools (AEDTs), which use AI to screen candidates in hiring processes.
Last year, a number of US jurisdictions, including New York City, introduced laws making it mandatory for AEDTs to undergo an annual ‘bias audit’ and for employers to be transparent about the results.
Similarly, in 2021 the US Equal Employment Opportunity Commission (EEOC) launched an initiative to ensure “algorithmic fairness” in line with federal civil rights laws.
See you in court
Litigation and regulatory issues also stalk this space.
Marone warned that intellectual property rights are likely to cause trouble for AI tech.
At present, ChatGPT is not inclined to cite its sources, leaving the door wide open to plagiarism, although detection tools are being developed.
Litigation has already begun on the image side: according to a lawsuit made public last Monday, Getty is asking the court to demand that StabilityAI cough up “statutory damages of up to $150,000 (£124,000)” per unauthorised image, plus other damages caused by the copyright violation.
“Regulatory issues will eventually play a role in this new technology, which is still untested and has the potential for abuse,” Dunlap said.
While the UK’s privacy and data protection laws (the UK GDPR), as well as its copyright laws, already apply to AI, the country’s push to become a “science superpower” and develop its own Silicon Valley puts calls for further regulation at odds with its tech growth ambitions.
Going for growth
The new Department for Science, Innovation and Technology (which has taken over the digital brief from Digital, Culture, Media and Sport) is already proposing an updated, explicitly pro-innovation version of the UK GDPR that may reduce protections for the personal data used to build AI.
A government policy paper published last summer states that any new regulation will focus on AI applications that pose “identifiable, unacceptable levels of risk” rather than “impose controls on uses of AI that pose low or hypothetical risk so we avoid stifling innovation”.
In its National AI Strategy, last updated in December, the UK government emphasises its commitment to becoming a “global science and innovation superpower”. It hopes a low-regulation agenda will encourage startups and SMEs to adopt AI and outlines a 10-year vision for them to “get ahead of the transformational change AI will bring”.
Yet “the idea that you can make money by dashing out quick but under-regulated AI is untrue,” said Lilian Edwards, professor of Technology Law at Newcastle Law School and fellow at the Turing Institute.
She warned that heavily prioritising innovation and growth might be incompatible with creating trustworthy AI.
What’s more, a regulatory framework that puts innovation at the forefront might also put the UK at odds with the EU and the US, which are leaning in the opposite direction, towards more stringent rules that give greater weight to privacy and ethical considerations.
The EU’s AI Act is not yet finalised and does not apply to the UK, but Edwards believes it is likely to become a global standard.
The European Commission’s proposal anticipates that it could enter into force as early as this year.
A European Commission spokesperson told City A.M. its regulatory framework will “enhance the uptake of AI by increasing users’ trust hence increasing the demand, and providing legal certainty for AI providers to access bigger markets”.
However, in its AI regulation policy paper, the UK government also expresses concern that the EU’s approach fails to capture “the full application of AI” and that “this lack of granularity could hinder innovation”.
While this divergence could pose a problem for UK-based AI firms that want to sell into the EU in future, the paper says the government recognises “the cross-border nature of the digital ecosystem and the importance of the international AI market” and “will support cooperation on key issues, including through the Council of Europe”.
If AI is to flourish, the UK needs to strike a fine balance, crafting regulation that is both effective and pro-growth.