Facial recognition tech powered by AI poses a threat to society
Facial recognition software feels as though it has stepped out of the realm of science fiction and into reality.
Thanks to major advances in artificial intelligence (AI), cameras and surveillance systems can now scan and analyse someone’s facial features in real time.
This is a profound step forward for biometric security, and major tech companies such as Amazon, Microsoft, and IBM have started selling such systems.
The technology has huge potential to help with policing and crime prevention. Picking a face out of a crowd and checking it against a watchlist of suspects could prevent acts of terrorism or violence. It is already in use by police forces in some countries, and even in some US schools in the hope of preventing active shooter situations.
It could also bring efficiency gains in other sectors of the economy. Airports have been using facial recognition software for years to speed up passport checks, giving travellers a more convenient experience.
Similarly, the analyst firm CCS Insight predicts that, within a few years, organisations like football clubs will adopt facial recognition ticketing systems, allowing ticket-holders to gain entry to an event more quickly, while enabling staff to identify possible troublemakers.
And there are plenty more potential applications, from recruitment to retail, which sound fantastic on paper. But if we’re not careful, the technology poses a threat to our personal privacy and civil liberties – fundamental principles of our society.
For a start, facial recognition’s current ability to identify individuals is far from perfect, especially when it comes to people from minority communities.
Civil liberties groups in the US, concerned that the software will lead to false arrests, have highlighted how Amazon’s cloud-based Rekognition wrongly matched 28 members of Congress – a disproportionate number of them people of colour – and more than two dozen professional athletes against mugshots of people who had been arrested for crimes.
Meanwhile, a recent study from the University of Colorado Boulder found that several facial recognition platforms regularly misidentify trans and non-binary people.
Work is underway to address these inaccuracies. Nick McQuire, head of enterprise and AI research at CCS Insight, predicts that a new industry will emerge to tackle the biases that may be causing these mistakes.
“The need to resolve these data and compliance challenges will spark the emergence of firms such as algorithmic auditors and companies that help source high-quality, diverse and unbiased training data,” he says.
Algorithmic auditors may sound like characters from sci-fi, but these kinds of businesses are already emerging, according to McQuire, who cites O’Neil Risk Consulting and DefinedCrowd as examples.
However, even if the tech worked perfectly, the implications would still be immensely worrying.
Its use by government authorities, especially in mainland China, has raised fears that individuals – perhaps taking part in a peaceful demonstration – could be picked out of a crowd, followed, and punished at a later point. This threat of repercussions has forced protesters in Hong Kong to use face paint and masks to avoid being identified by AI-enabled cameras.
Fearing the impact on civil liberties, several US cities, including San Francisco, have placed bans on the tech. And earlier this year, Amazon had to face down a rebellion by shareholders who wanted to block it from selling Rekognition to US police forces.
Here in the UK, a developer faced a similar backlash after admitting to using facial recognition surveillance at the King’s Cross Estate, while the use of the technology by police in south Wales is being challenged in the courts.
Defenders of facial recognition technology point to the public acceptance of CCTV, which has been used for decades for security and crime prevention. But while CCTV simply records footage that a human must later review, facial recognition software can identify and track an individual’s face, analyse the data, and store it in a database – without the person’s consent or even knowledge.
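To make that distinction concrete, here is a minimal sketch of the automated matching step that sets facial recognition apart from passive CCTV. It is purely illustrative: the names are hypothetical, no real face-recognition library is assumed, and random vectors stand in for the numerical “embeddings” a neural network would extract from camera frames.

```python
import numpy as np

# Hypothetical example: a face-recognition model reduces each face to a
# fixed-length vector (an "embedding"). Here, random vectors stand in for
# those embeddings purely to illustrate the watchlist-matching step.
rng = np.random.default_rng(seed=0)
watchlist = {name: rng.normal(size=128) for name in ["suspect_a", "suspect_b"]}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings: values near 1.0 suggest the same face."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Return the best watchlist match above the threshold, if any."""
    best_name, best_score = None, threshold
    for name, stored in watchlist.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# A camera frame would yield one embedding per detected face; here we
# simulate a noisy capture of someone resembling "suspect_a".
probe = watchlist["suspect_a"] + rng.normal(scale=0.1, size=128)
print(match_against_watchlist(probe, watchlist))
```

The point of the sketch is that the comparison, the match threshold, and what gets logged are all decided in software by whoever operates the system – with no human review, and no knowledge on the part of the person being scanned.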
And it’s not just police and security forces who could exploit the ability to identify and track any individual caught on camera. Andrew Liles, chief technology officer at experience agency Tribal Worldwide London, gives me just one chilling example of how private companies could misuse this data.
“A social media platform has filed a patent to tell a retailer something about a customer who is in their shop,” he says.
“They do this by finding the person based on their social profile – perhaps they can guess your income bracket and what you have searched for beforehand. It could reveal that information to the retailer, and you’d be none the wiser. That’s scary.”
Clearly, facial recognition offers new potential for private organisations to collect huge amounts of sensitive personal data without individual consent.
No technology is inherently good or evil – the problem lies in how it is used. Rules and regulations are needed to prevent both governments and companies from abusing facial recognition software.
Indeed, amid fears of misuse, the European Commission is planning legislation that will extend the General Data Protection Regulation (GDPR) to cover AI and give EU citizens explicit rights over the use of their facial recognition data – a good start. The key will be ensuring consumers are protected without stifling innovation.
“Facial recognition is a powerful tool, transforming the way we live,” says Anton Christodoulou, group chief technology officer at experience agency Imagination.
“Legislation and regulation is required to ensure that it is implemented within the right privacy framework, and the boundaries of its use are clearly defined and enforced. Once regulated, we will all be able to enjoy the benefits to their full effect.”
Some companies are already trying to get ahead of any new regulations. Amazon’s Jeff Bezos revealed in September that the company has its own public policy team working on proposals which it hopes governments will adopt, although politicians and activists are likely to balk at the idea of trusting big tech to write its own rules. Expect more scrutiny on these companies and their efforts in the near future.
Facial recognition has developed incredibly fast – perhaps faster than regulators and governments can keep pace with. The big tech companies will need to work with them, as well as with the new breed of AI auditors, to address the risks to privacy, consent, and civil liberties.
Otherwise, we risk stumbling into a science-fiction dystopia, rather than a technological paradise.