OpenAI boss says AI regulation could go wrong
The chief executive of OpenAI, the organisation behind ChatGPT, has voiced concerns about the challenges of regulating artificial intelligence (AI).
During a visit to Taiwan, Sam Altman, who leads the Microsoft-backed startup, said that while he is not overly worried about government over-regulation, it remains a possibility.
He said: “It is possible to get regulation wrong, but I don’t think we sit around and fear it. In fact, we think some version of it is important.
“I also worry about under-regulation. People in our industry bash regulation a lot. We’ve been calling for regulation, but only of the most powerful systems.”
“Models that are like 10,000 times the power of GPT-4, models that are like as smart as human civilisation, whatever, those probably deserve some regulation,” he added.
Altman said there is often a “reflexive anti-regulation” sentiment within the tech industry.
But across the globe, countries are discussing AI regulation to mitigate the potential risks of a technology that is now widely available, including to those with bad intentions.
And tech leaders want a say in that regulation, too. In Washington last week, Altman, along with former Google chief executive Eric Schmidt and Anthropic’s Dario Amodei, told government officials that the most powerful AI should remain in the hands of the companies with the strongest technical expertise, so that global security is not threatened.
In the UK, a global AI safety summit is scheduled for November, with a focus on understanding the risks associated with the technology and establishing national and international frameworks for AI regulation.
The Department for Science, Innovation and Technology on Monday released an introduction to the summit, discussing its scope and objectives.
It said the summit will “bring together a small number of countries, academics, civil society representatives and companies who have already done thinking in this area to begin this critical global conversation”.
The conversation will focus on the technologies that pose the greatest potential dangers to public safety, known as “frontier AI”.
The UK’s Digital Markets, Competition and Consumers (DMCC) bill, currently making its way through parliament, aims to take a more pre-emptive approach to regulating powerful digital companies, a move that has drawn some controversy.