Alexa: Tell me why I can’t have a flying car like in Blade Runner
Bad news: it’s 2019, and that means we are now living through the year in which the original Blade Runner film was set.
Released in 1982, Blade Runner has some interesting ideas about 2019. Yes, it’s a post-apocalyptic dystopian hellscape where sentient bioengineered robots (“replicants”) must be hunted down against a backdrop of a dying planet.
But it also contains some very cool technology: flying cars, commercial space travel, not to mention artificial intelligence (AI) so advanced that machines are virtually indistinguishable from humans.
Audiences who saw Blade Runner on release would have grounds to be quite disappointed by the reality of 2019 – though not as disappointed as readers of the 1968 novel on which the film is based, Do Androids Dream of Electric Sheep?, set in the then-distant future of 1992.
While we can’t expect sci-fi writers to have a crystal ball, you can’t help feeling that the future is a bit less, well, futuristic. Instead of flying cars, we have a more efficient taxi service you can book with an app – hardly revolutionary.
I was thinking about this on New Year’s Eve while a friend told me excitedly that she’d received an Amazon Echo for Christmas. Alexa would now be listening to her every move. Happy 2019.
I decided not to scare her with the horror stories: private conversations being sent to work colleagues or indeed complete strangers (as happened in December) by accident when Alexa “mishears” commands; sensitive data, from financial information to children’s first words, used to build ever more accurate profiles of consumers; and contradictory statements from Amazon about exactly how personal data is gathered and stored.
Meanwhile, the new year began with an article in TechCrunch about a machine-learning agent that figured out how to cheat its human developers in order to score better against the metrics it knew it would be graded on. Essentially, the AI learned how to manipulate the system to its advantage.
The point is, while transport technology may have stalled, AI has come on frighteningly fast – and in some senses looks set to overtake the replicants of the Blade Runner universe. In the endless debate on whether the robots will steal our jobs or create millions more, we’ve been a little slow to think about the consequences.
Take flying cars’ self-aware cousin: autonomous vehicles. Potentially safer than human drivers, yes. But also more ethically thorny.
In a crash scenario, who decides which lives to prioritise? What if the car can save a crowd of people by killing its own passengers? Does it operate differently if it contains three small children, as opposed to a single adult?
And should these settings be universal, or is it up to each human “driver” to programme their own vehicle when they set off?
Recent research from the MIT Media Lab showed that our preferences are not universal – and that, for all the talk of self-driving cars making it onto our roads this year, we are a long way from resolving the ethical issues.
If ethics is playing catch-up to progress, then regulation is miles away. The chaos over the drone incidents at Gatwick airport just before Christmas showed how unprepared the government is to deal even with commonplace technology that has been around for a decade.
Which brings us back to flying cars. Because while life in 2019 features some technology that the Blade Runner team could only dream of, it has for the most part been brought to us by private companies rather than governments.
As well as Amazon’s Alexa, there’s the Facebook data-monster that seemingly has the power to influence elections, and Google, with its fingers in pies from facial recognition to personalised, AI-assisted healthcare.
And as for commercial space travel, yes it’s getting closer, but primarily thanks to private-sector pioneers like Elon Musk, Jeff Bezos, and Richard Branson.
The UK government, meanwhile, is still pondering the big questions like whether electric scooters should be allowed on roads or pavements.
Flying cars, and other futuristic transport systems like the Hyperloop or city-to-city rockets, require central government to take the lead, come up with a long-term plan, build a regulatory framework (do we have lanes or traffic lights or roundabouts in the sky?), decide on funding, and implement it.
Even without Brexit negotiations sucking the oxygen out of politics, that’s pie (or highway) in the sky stuff for any government with a five-year term.
And that’s why the world looks so mundane compared to the Blade Runner alternative. Technological progress has sped up to a rate that can seem alarming, but only in areas which the government hasn’t been able to touch yet.
If we don’t want to wake up and find that the replicant robots suddenly have all the power, we need smarter, forward-looking regulation. If we want to tap into the massive societal benefits that AI can bring, from education to healthcare to central planning, we need governments that can look beyond the next five years.
If not, get ready for a future dominated by AI that can teach itself everything about you and bring down major infrastructure with an aerial device the size of a seagull.
And you won’t even be able to flee in your flying car.