‘Godfather of AI’ quits Google over concerns with rapid advances in tech
A British scientist dubbed the “Godfather of AI” has quit his role at Google due to concerns about the dangers of artificial intelligence (AI), such as its potential ability to spread misinformation and overhaul the jobs market.
Geoffrey Hinton — who joined Google a decade ago to build on what was then the foundation of AI technology — quit over concerns that the technology could be used to create images and text so convincing that people will “not be able to know what is true anymore,” he told the New York Times.
Hinton also said that AI could be a risk to some jobs and could even become smarter than humans.
“It is hard to see how you can prevent the bad actors from using it for bad things,” he told the Times.
He told the newspaper that the global AI race would not stop without regulation, but that regulation may be impossible to achieve, leaving control of the technology in the hands of scientists.
“I don’t think they should scale this up more until they have understood whether they can control it,” he told the Times.
Hinton’s intervention is particularly striking given that he was an early pioneer of the technology.
Back in 2012, Hinton and two of his students created a neural network that became foundational to the AI used by Google today, work for which they later won the Turing Award.
The scientist told the New York Times that he sometimes regrets his work, but added: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
Hinton’s comments are the latest in a string of remarks from industry leaders expressing concerns about the rapid advancement of AI.
Last month, Google chief executive Sundar Pichai said AI could be harmful if deployed incorrectly as he admitted society is not fully prepared for its advancement and called for regulation.
In March, Elon Musk and over 1,000 others signed an open letter calling for a pause in AI research, citing safety risks.