Questioning AI ethics doesn’t make you a gloomy Luddite
Last month, Google-owned DeepMind introduced AlphaGo Zero, the latest evolution of AlphaGo, the first computer programme to defeat a world champion at the ancient Chinese game of Go.
Speaking at Web Summit earlier this month, Kevin Bandy, chief digital officer at Cisco, told the audience that “there are more moves in this one game than there are atoms in the universe”. A piece of software mastered it in 72 hours, with no human help.
This is narrow artificial intelligence (AI): a system focused on perfecting one specific task. It's extraordinary, but things get more interesting when we take the debate further. Once artificial general intelligence arrives, machines will be able to match (and probably surpass) human performance in any intellectual task. Then there's artificial superintelligence – something different altogether.
Nick Bostrom, a highly respected philosopher, describes this as “an intellect that is much smarter than the best human brains in practically every field.”
It’s a mind-bending concept, and much of the AI expert community believes it isn’t far away: surveys of researchers point to 2060 as a reasonable estimate for its arrival. That gives us 43 years – so we need to wrap our heads around it, and fast.
It comes as no surprise that AI is a hot topic among the tech crowd. There are some indisputably positive outcomes to its rise. It is relieving us of repetitive tasks, super-powering decision-making, reframing customer experience, enabling mass-personalisation of products and services, transforming healthcare in areas like image-based diagnostics, and making waves in caregiving and therapeutic settings.
However, the other side of the coin is that AI raises unprecedented ethical questions. Beyond the threat of autonomous weapons and cyber warfare, many of us share deep concerns about the ways machines might absorb and amplify human bias – and human imperfection in the broadest sense.
Right now, it’s hard to see a way through. Nobody knows whether AI will be the best or the worst thing ever to happen to humanity. What is clear is that we’re at the beginning of a radical period of uncertainty, and that AI’s impact on life as we know it will be immeasurable.
The state had a decent presence at Web Summit, but a chasm has opened between technological progress on the one hand, and those in charge of legislating in society’s interests on the other.
Earlier this year, US treasury secretary Steven Mnuchin said he wasn’t worried about the impact of automation on jobs because it was so far away that it wasn’t “even on his radar screen”. In the UK, home secretary Amber Rudd made headlines at a Conservative Party conference fringe event when she admitted she didn’t really understand how encryption worked.
So, if the establishment is slow off the mark, who is the guiding hand in all of this?
Big tech is taking steps through initiatives like the Partnership on AI, which counts Amazon, Apple, Google, Facebook, IBM and Microsoft among its founding partners.
Smaller players are pushing progress too. Hanson Robotics, for instance, just launched SingularityNET – a free and open market for AI technology. In theory, it will democratise AI development and scale it fast. Projects like this could have a colossal impact.
There’s a vital point to be made here about personal agency. Every sector will be – or already is – impacted by AI. As professionals and as individuals, we’re all invested in this.
As we make daily choices, we need to think them through carefully. Max Tegmark, an MIT professor and co-founder of the Future of Life Institute, has noted that in the past, trial and error worked for us as a species: we were happy to learn from mistakes and move on. But when we’re talking about such incredibly powerful technology, he says, this is “a dumb strategy”.
We need a thoughtful approach. Safety engineering, you might call it. This isn’t me being a pessimist, or a Luddite. I’m simply calling for us to use all the tools at our disposal to build a better digital future.
In practice, this means never forgetting what makes us human. It means raising awareness and entering into dialogue about the issue of ethics in AI. It means using our imaginations to articulate visions for a future that’s appealing to us.
If we can decide on the type of society we’d like to create, and the type of existence we’d like to have, we can begin to forge a path there.
All in all, it’s essential that we become knowledgeable, active, and influential on AI in every small way we can. This starts with getting to grips with the subject matter and seeing past extreme and sensationalised points of view. The decisions we collectively make today will influence many generations to come.
To paraphrase Tegmark: before we launch this rocket, we’d better know how to steer it.