DEBATE: Given the ethical issues exposed in a recent MIT study, will we ever fully trust autonomous vehicles?
Daniel Pitchford, co-founder of AI Business, says YES.
There are approximately 1.3 million road deaths globally each year, and 95 per cent of them are caused by human error.
Deaths attributed to driverless cars currently stand at four. Autonomous vehicles therefore offer a significant safety advantage: they react faster than humans, are free of emotions such as road rage and thrill-seeking, and are never impaired by tiredness or alcohol.
There is still a way to go to ensure that their programming does not reproduce the misjudgements of their human creators – including the moral biases examined in the recent MIT study on who a self-driving car should prioritise in a crash scenario. But technology provides a vehicle with greater awareness of its environment, enabling it to make decisions that a human driver would not be capable of.
Carmakers and technology companies are working tirelessly to advance their systems and ensure the safety of these vehicles. Human perception of “robo cars” is improving too, and I believe widespread adoption is more a question of “when” than “will”.
Charles Towers-Clark, group chief executive of Pod Group, says NO.
Autonomous vehicles will drive more safely than humans ever could, but there will always be something unsettling about handing moral control to a computer.
And that’s why we will never fully trust self-driving cars – not only because we don’t understand their reasoning, but also because the painful decisions that drivers may have to make are perceived as beyond the remit of artificial intelligence (AI).
For example, could we accept the death of a child over that of five adults, especially if that decision were the result of a mathematical equation?
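To see why that feels cold, consider what such an equation might look like. One sketch – purely for illustration, not anything from the MIT study – is a utilitarian controller that picks the manoeuvre minimising weighted expected harm:

$$a^* = \arg\min_{a \in A} \sum_{i} p_i(a)\, w_i$$

Here $A$ is the set of available manoeuvres, $p_i(a)$ is the estimated probability that person $i$ dies if the car takes action $a$, and $w_i$ is the moral weight assigned to that person’s life. Every choice of $w_i$ – a child versus an adult, one life versus five – is an ethical judgement hidden inside the maths.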
Autonomous vehicles will be on our roads in the near future, but we will always need reassurance that the ethical decisions they make have some human reasoning behind them – even if we are less reliable than machines.
Perhaps users should express moral preferences about how AI should act before using a self-driving car, in order to programme a human element into the system? Either way, most people still won’t perceive AI as worthy of our trust in such critical life-or-death situations.
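As a purely hypothetical sketch of that suggestion – every name, weight and risk figure below is an illustrative assumption, not any real carmaker’s system – a user’s declared preferences could become the weights $w_i$ in the equation above:

```python
# Hypothetical sketch only: user-declared moral preferences acting as
# weights in a crash-scenario decision. All names, weights and risk
# figures are illustrative assumptions, not a real vehicle's API or data.
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible manoeuvre and the people it puts at risk."""
    label: str
    at_risk: list  # (person category, estimated probability of death) pairs


def expected_harm(outcome: Outcome, weights: dict) -> float:
    """Weighted expected harm: sum of P(death) x moral weight."""
    return sum(p * weights.get(category, 1.0) for category, p in outcome.at_risk)


def choose(outcomes: list, weights: dict) -> Outcome:
    """Pick the manoeuvre with the lowest weighted expected harm."""
    return min(outcomes, key=lambda o: expected_harm(o, weights))


# An MIT-style dilemma: swerve towards one child, or stay on course
# towards five adults.
outcomes = [
    Outcome("swerve", [("child", 0.9)]),
    Outcome("stay", [("adult", 0.5)] * 5),
]

# A user who weights a child's life three times an adult's.
user_weights = {"child": 3.0, "adult": 1.0}

print(choose(outcomes, {}).label)            # "swerve" (harm 0.9 < 2.5)
print(choose(outcomes, user_weights).label)  # "stay"   (harm 2.7 > 2.5)
```

The same scenario produces opposite decisions depending on the weights, which is precisely the point: programming in a “human element” doesn’t remove the moral judgement, it only makes explicit who made it.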