A-levels are the tip of the iceberg — biased algorithms are running our lives
The A-levels debacle has laid bare the stark reality of relying on algorithms to make important decisions.
Between the results being announced and the government’s U-turn four days later, public outrage had reached a fever pitch at the idea that an individual could have their life chances limited because of the results other people had achieved in previous years. It was rightly considered scandalous.
Now the government has changed its mind on using an algorithm for A-levels and GCSEs, but that doesn’t mean the problem has gone away.
Algorithms are used across countless areas of life. Securing a dream job, being offered a certain insurance premium, receiving a prison sentence, even finding “the one”: every day, decisions that seriously affect the course of people’s lives are in part being written by algorithms, with all the hidden biases that entails.
As artificial intelligence (AI) assumes ever more authority in our society, it is crucial that those deploying these algorithms thoroughly analyse the data that goes into them, to reduce the risk of human bias creeping in. Otherwise, the consequences can be severe and unjust.
Take CV screening in recruitment. Technology intended to strip human prejudice out of hiring can have the opposite effect: there have been reports of screening algorithms that favour men because they were trained on skewed historical data in which most successful candidates were male. Such biased screening can even suggest lower starting salaries for female candidates, based on lower expectations of their value.
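To see how this happens mechanically, consider a minimal sketch in Python. The data is synthetic and every name is hypothetical; the point is only that a model trained on past hiring decisions that favoured men will score two equally qualified candidates differently, because the bias is baked into its training labels.

```python
# A minimal sketch (synthetic data, hypothetical feature names) of how a
# screening model trained on biased historical decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Candidates: a genuine qualification score, plus gender (1 = male, 0 = female).
qualification = rng.normal(0, 1, n)
is_male = rng.integers(0, 2, n)

# Historical hiring decisions: driven by qualification, but with an added
# boost for male candidates: the human bias baked into the training labels.
hired = (qualification + 1.0 * is_male + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([qualification, is_male]), hired)

# Two candidates with identical qualifications, differing only in gender.
equal_pair = np.array([[0.5, 1], [0.5, 0]])
p_male, p_female = model.predict_proba(equal_pair)[:, 1]
print(f"P(hire | male)   = {p_male:.2f}")
print(f"P(hire | female) = {p_female:.2f}")  # lower, purely from biased labels
```

Nothing in the code sets out to discriminate; the model simply learns the pattern it is shown.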
Or look at law and order. Predictive analytics in policing has recently been called into question amid concerns that it flags “likely” criminality in a racially biased way. These systems learn from past policing behaviour: if, historically, more young black men have been stopped by police, a model trained on those records will suggest that more should be stopped, searched and convicted in future, entrenching discrimination within the criminal justice system.
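The feedback loop is easy to simulate. Here is a minimal sketch, with invented numbers, of two districts that have identical real crime rates but historically skewed records: because patrols follow past records, and police presence inflates what gets recorded, the over-policed district stays the “hotspot” indefinitely.

```python
# A minimal sketch (invented numbers) of a predictive-policing feedback loop.
import numpy as np

true_crime_rate = np.array([0.1, 0.1])   # two districts, identical in reality
records = np.array([60.0, 40.0])         # historical records, already skewed

for year in range(10):
    hotspot = np.argmax(records)         # "prediction": patrol where records are highest
    discovered = 1000 * true_crime_rate.copy()
    discovered[hotspot] *= 2             # police presence doubles recorded incidents
    records += discovered

print(records)  # district 0 ends with roughly twice the record, despite equal true rates
```

The model’s output looks like evidence, but it is largely a reflection of where police were sent in the first place.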
Further down the line, this kind of racial profiling has serious consequences for the groups it targets. The algorithms themselves are hardly to blame for typecasting criminals by appearance: responsibility lies with the people who designed them and chose the data they learned from.
More consciously, we are increasingly relying on AI to form new social connections and even relationships. Most modern dating apps use algorithms to suggest matches with people they assume share our interests, theoretically making the search for love more personalised and efficient. This too has its drawbacks, driving us to interact with a narrower and more uniform pool of people overall.
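The narrowing effect follows directly from how similarity-based matching works. A minimal sketch, using toy interest profiles, shows that ranking by similarity surfaces near-identical people first and quietly buries everyone else.

```python
# A minimal sketch (toy profiles) of similarity-based matching: the most
# "similar" people are surfaced first, and the least similar drop out of view.
import numpy as np

# Hypothetical interest vectors: 1 = interested, 0 = not.
profiles = {
    "you":   np.array([1, 1, 0, 0, 1]),
    "alice": np.array([1, 1, 0, 0, 1]),   # near-identical interests
    "bob":   np.array([1, 0, 1, 0, 1]),
    "carol": np.array([0, 0, 1, 1, 0]),   # very different interests
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

ranked = sorted((name for name in profiles if name != "you"),
                key=lambda name: cosine(profiles["you"], profiles[name]),
                reverse=True)
print(ranked)  # ['alice', 'bob', 'carol']: the least similar are rarely suggested
```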
Cultural, class and racial prejudices already seep into these everyday decisions, from determining whose bank loan applications are accepted to driving up their insurance premiums. Far from eliminating bias, AI is perpetuating it.
Worse still, we tend to trust a decision more if it comes from a computer rather than a person, and are less likely to question how it was reached. But as the exam grading fiasco has revealed, blindly following algorithms — especially when it comes to government decision-making — is a recipe for discrimination.
That is not to say we should steer clear of this technology and its potential entirely. But if we don’t want AI to become another tool of inequality, historic bias must be counterbalanced in the data before it is fed into an algorithm.
On its own, AI cannot recognise biases rooted in social privilege, race, gender, educational advantage or regional difference. It falls to the institutions and organisations using the technology to ensure that algorithms are trained on well-balanced data from the outset, and that they are built and reviewed by diverse teams with varied perspectives.
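What might such checks look like in practice? A minimal sketch, with hypothetical names and synthetic data, of two basic steps an organisation could run before training: audit the historical outcome rate for each group, and reweight examples so no group dominates by sheer numbers. Reweighting addresses representation, not biased labels themselves, so it is one check among many rather than a fix.

```python
# A minimal sketch (hypothetical names, synthetic data) of pre-training checks.
import numpy as np
from sklearn.linear_model import LogisticRegression

def audit_by_group(labels, groups):
    """Report the historical positive-outcome rate for each group."""
    for g in np.unique(groups):
        print(f"group {g}: positive rate {labels[groups == g].mean():.2f}")

def balanced_weights(groups):
    """Weight each example inversely to its group's size, so no group dominates."""
    _, inverse, counts = np.unique(groups, return_inverse=True, return_counts=True)
    return len(groups) / (len(counts) * counts[inverse])

# Synthetic example: group 1 is under-represented and under-selected historically.
rng = np.random.default_rng(1)
groups = rng.choice([0, 1], size=2000, p=[0.8, 0.2])
scores = rng.normal(0, 1, 2000)
labels = (scores - 0.5 * groups + rng.normal(0, 1, 2000)) > 0

audit_by_group(labels, groups)  # the audit makes the skew visible before training
model = LogisticRegression()
model.fit(scores.reshape(-1, 1), labels, sample_weight=balanced_weights(groups))
```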
Rather than making far-reaching, final decisions about people’s lives (“computer says no”), AI should be used as an advisory tool, offering potential outcomes that are cross-referenced with social context and carefully weighed by people who can add a human perspective. A hybrid approach of AI and human judgement is imperative for a just society.
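One common pattern for this hybrid approach is triage. In the minimal sketch below (thresholds and names are hypothetical), the model decides only clear-cut, low-stakes cases; anything borderline or life-changing is routed to a human reviewer, with the model’s score attached as a suggestion rather than a verdict.

```python
# A minimal sketch (hypothetical thresholds and names) of a human-in-the-loop
# triage pattern: automate only confident, low-impact cases; escalate the rest.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # "approve", "reject", or "human_review"
    model_score: float  # kept for the reviewer's context, never hidden

def triage(model_score: float, high_impact: bool,
           approve_at: float = 0.9, reject_at: float = 0.1) -> Decision:
    """Auto-decide only confident, low-impact cases; escalate everything else."""
    if high_impact or reject_at < model_score < approve_at:
        return Decision("human_review", model_score)
    outcome = "approve" if model_score >= approve_at else "reject"
    return Decision(outcome, model_score)

print(triage(0.95, high_impact=False))  # clear-cut: automated
print(triage(0.55, high_impact=False))  # borderline: escalated
print(triage(0.95, high_impact=True))   # life-changing: always a human
```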
The lesson from the A-levels scandal is clear. If we don’t start rethinking our use of AI technology now, we will soon find increasingly crucial aspects of our lives decided by algorithms just as unfair as the one that gave a straight-A student a D.