Super-intelligent robots could soon fight our wars, but there’s a risk they could turn against us
A swarm of robots approaches the front line, each readying its inbuilt guns and explosives as it prepares to launch an attack on the enemy – another, almost identical swarm of robots.
But suddenly one of the robots turns on its comrades. It has been offered a better deal by the enemy, so long as it attempts to destroy its former friends. Since the machine has no sense of loyalty, it defects straight away.
This might sound like a scenario stuck firmly in the realms of science fiction, but the possibility that artificial intelligence could soon fight wars for us is not out of the question – nor is the risk that it could slip beyond human control.
Autonomous combat vehicles are already in action: drones scan and strike enemy territory, while the US Navy's Phalanx gun system detects and automatically engages incoming threats. In Israel, the Harpy "fire-and-forget" unmanned aerial vehicle seeks out and destroys radar installations. It earned that name because, once launched, it surveys the area and decides for itself whether to fire, allowing the people who launched it to forget about it.
Humans are becoming increasingly reliant on artificial intelligence for behind-the-scenes military purposes, too. “Many supercomputers are used in defence, since practices such as the testing of nuclear weapons have to be carried out by simulation rather than physical build and detonation,” Tim Stitt, head of scientific computing at The Genome Analysis Centre, tells City A.M. Stitt focuses on developing supercomputer technologies to condense and analyse huge quantities of information.
The benefits of giving artificial intelligence a bigger role in the military are obvious – on the front line, machines rather than people would be put at risk (on the attacking side, at least), and the damage a highly efficient and powerful machine can inflict far exceeds what a human can.
But with that impact comes greater risk. What if a robot is programmed incorrectly? A single error could end thousands of lives by accident, or destroy expensive and critical infrastructure.
And we have already seen examples of this – a recent study by the human rights group Reprieve into the accuracy of US drone strikes found that they often kill far more people than intended. As of 24 November, US attempts to kill 41 men had resulted in the deaths of around 1,147 people, according to a report in The Guardian.
There's also the risk of machinery being hijacked and manipulated – over the past few days, the US has accused North Korea of hacking into Sony Pictures to leak private emails relating to a controversial film depicting the assassination of the country's leader, Kim Jong-Un. Just as the supercomputers used to run businesses can be hacked by outsiders, so too can military machines.
“Supercomputers get attacked many times a day by people trying to illegally access information, and many supercomputers are used in defence – people try to access details such as those related to the testing of nuclear weapons,” says Stitt.
“One of the worst case scenarios of a hack would be of classified military data. We have a large supercomputer at the Atomic Weapons Establishment in the UK, which is used for the testing of atomic nuclear weapons. That contains highly sensitive and classified information, and could result in the security of the country becoming compromised if the information is accessed.”
A MOVE TOWARDS AUTONOMY
One way of dealing with the risk of hacking would be to make the robots autonomous: if they are completely in control of their own actions, they are almost immune to outside interference. Autonomous machines would also remove the need for human operators and so lower running costs – a clear commercial incentive for manufacturers to build such systems, and a financial one for governments to deploy them.
The point at which we send robots off to defend us with minimal human input seems to be drawing closer, with scientists working harder than ever to build intelligent technologies. Many of these model their reasoning on the human brain – an approach known as “cognitive computing”.
Earlier this year, Google purchased the British artificial intelligence startup DeepMind for £400m, with the hope of “solving intelligence” and understanding the human brain for the advancement of artificial intelligence.
Meanwhile, the European Union has invested around €1bn in the Human Brain Project – a 10-year attempt to map all the neurones in the human brain so that computers with equivalent abilities can be developed. On the weapons side, BAE Systems’ batwing-shaped Taranis in Britain and Northrop Grumman’s X-47B in the US show how self-direction is already creeping in.
MORE POWER MEANS MORE RISK
When machines are given the responsibility of defending a country without human intervention, they are also given an enormous amount of power. The question then becomes whether the robots will do as they are told, and whether they can be trusted to make the right decisions in combat.
Earlier this month, the physicist Professor Stephen Hawking warned that technological progress could lead to a new kind of existential threat – one rooted in the exponential trend exemplified by Moore's law, the observation that the number of transistors on a chip doubles roughly every two years. If machine intelligence improves in the same compounding way, each advance building on the last, its capabilities could grow exponentially – and unpredictably.
What this means, according to Hawking, is that robots could eventually engineer themselves, develop their own goals and surpass humans in terms of understanding and control. In an interview with the BBC, he said: "The development of full artificial intelligence could spell the end of the human race."
"Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
His concerns are also reflected in the comments of investor and Tesla founder Elon Musk, who recently described artificial intelligence as mankind's “biggest existential threat”, warning that “we need to be very careful”.
In terms of armed forces, the risks associated with self-directed robots are viewed as so significant that the US military currently prohibits lethal, fully autonomous robots. A 2012 Department of Defense policy directive also states that semi-autonomous robots cannot “select and engage individual targets or specific target groups that have not been previously selected by an authorised human operator”.
EMBEDDING EMOTIONS
So when robots do gain self-direction, as they inevitably will, how do we stop them from becoming brutal killing machines that are only out to secure the best possible deal for themselves?
According to Patrick Levy Rosenthal, founder and chief executive of Emoshape, a company that builds emotions into technology, the key lies in making robots feel like humans as well as think like them.
“Intelligent machines must have empathy,” he says. “Sooner or later robots will replicate themselves, so we need to implement emotions and empathy before that happens and we are dealing with a terminator scenario.”
“If we make sure our own positive emotions generate pleasure for them and our negative emotions generate pain, we can programme the robots to reproduce pleasure and avoid pain. Then they will always try to make us happy and will stop as soon as they make us unhappy.”
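In machine-learning terms, what Rosenthal describes resembles reward shaping: tying the machine's reward signal directly to the human's observed emotional state. The sketch below is purely illustrative – the function names, emotion scores and asymmetric penalty are assumptions made for this example, not details of Emoshape's technology.

def emotion_reward(human_emotion_score):
    """Map an observed human emotion score in [-1, 1] (negative = distress,
    positive = contentment) onto the robot's own reward signal."""
    if human_emotion_score >= 0:
        return human_emotion_score       # positive human emotion -> "pleasure"
    return 2.0 * human_emotion_score     # negative human emotion -> stronger "pain"


def choose_action(actions, predict_human_emotion):
    """Pick the action whose predicted effect on the human earns the most reward."""
    return max(actions, key=lambda action: emotion_reward(predict_human_emotion(action)))


# Toy usage: a predictor that says helping pleases the human and ignoring them does not.
predicted = {"help": 0.8, "ignore": -0.4}
print(choose_action(predicted, predicted.get))   # -> "help"

A real system would need a far richer model of human emotion than a single score, but the asymmetry – penalising distress more heavily than contentment is rewarded – mirrors the “avoid pain” emphasis in Rosenthal's description.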
And indeed, the US military has already cottoned on to this idea – in May 2014, the Office of Naval Research announced a $7.5m grant to help researchers find ways to teach robots “right from wrong” and develop machines that can understand moral consequence.
SMALL BUT LETHAL
But according to Rosenthal, we may all be missing the point when we imagine life-sized robots usurping humans and subjugating us in a terrifying dystopia. It’s what we can’t see that we should really be scared of.
“I think the real threat for humanity comes from weaponised nanotechnology that can be spread in the air and is invisible to humans,” he says. “It is able to sniff your DNA and penetrate your body. You can shoot a robot and unplug a computer, but you can't stop nano weaponised technology when it has been released.”