Of course AI can be biased – and it’s up to businesses to fix that
When Alexandria Ocasio-Cortez, the recently elected US representative, suggested last week that algorithms could be racist, she received a lot of attention.
Naysayers derided her for suggesting that inanimate objects can display prejudice, but a number of experts backed her up. The central point of the debate is this: machines themselves cannot be biased, but the humans who programme them, and the data those machines learn from, certainly can be.
In high-profile applications like the justice system or policing, bias has immediate and shocking effects.
Compas, a risk-assessment tool used by some US courts to inform sentencing and parole decisions, came under fire after journalists found it wrongly flagged black defendants as likely re-offenders at far higher rates than white defendants, while the PredPol predictive-policing tool created feedback loops: areas flagged as crime hotspots, often ethnically diverse ones, attracted more patrols, which generated more recorded crime, which reinforced the flag. These examples confirm our worst fears about structural inequalities in society.
It’s not just an issue for these high-stakes situations either – we’ve seen public outrage at Google Photos labelling black people as gorillas, and at image searches for “CEO” returning almost exclusively pictures of men.
Ocasio-Cortez, then, is right: bias in algorithms can be a real problem, so it is vital that we understand how it occurs and what we can do to mitigate it.
Many businesses could be forgiven for thinking that these issues do not really apply to them. But as artificial intelligence (AI) takes over ever larger areas of commerce, biased outcomes in fields such as recruitment or customer targeting could do significant economic and reputational damage.
And by understanding how high-profile mistakes occurred, we can start to devise effective countermeasures.
Take Amazon’s controversial facial recognition programme Rekognition, which hit the headlines when the American Civil Liberties Union ran photos of members of the US Congress against a database of arrest mugshots and the software falsely matched 28 legislators with people who had been arrested.
Perhaps unsurprisingly to anyone following this topic, the members flagged were disproportionately non-white. This demonstrated the problem of skewed data. Thanks to deep and historical racial biases, Congress is disproportionately white, while arrest databases are disproportionately not.
A similar error occurred with another Amazon AI tool, this time for recruitment. Trained on a decade of past applications, most of them from men, the algorithm taught itself to penalise CVs containing the word “women’s” – as in “women’s chess club captain” – thereby reinforcing the gender bias baked into its training data.
It is easy to see how this sort of bias can be replicated across many fields. We can only train machines on the data we have, and if that data reflects existing inequalities, the machines will learn and reinforce them – unless we are mindful of this and take steps to counter it.
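To see the mechanism in miniature, here is a minimal sketch in Python – all names and numbers are invented for illustration, not drawn from any real system. A model trained on past hiring decisions that penalised one group will dutifully learn to reproduce that penalty, even when the two groups are identical in ability:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Two groups with identical underlying ability...
    group = rng.integers(0, 2, n)   # hypothetical protected attribute
    skill = rng.normal(0, 1, n)     # same distribution for both groups

    # ...but historical hiring decisions penalised group 1 regardless of skill.
    hired = skill - 1.0 * group + rng.normal(0, 0.5, n) > 0

    # Train a model on those biased outcomes.
    model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

    # The model faithfully reproduces the historical penalty.
    for g in (0, 1):
        X_test = np.column_stack([rng.normal(0, 1, n), np.full(n, g)])
        print(f"group {g}: predicted hire rate {model.predict(X_test).mean():.2f}")

Nothing in the code is malicious; the machine is simply doing what it was asked to do, which is to imitate the past.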
So what can be done?
First, stop viewing this as a purely technical problem, and understand it as a broader business issue. Statistical errors can be addressed by finding more representative data and correcting for known biases, but ultimately businesses need to be clear about what they are trying to achieve.
They need to understand that their systems may entrench existing biases, and they must actively make the trade-offs necessary to correct for this – a simple illustration of such a trade-off follows below.
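As one hedged illustration, again with invented numbers: if a model’s scores are skewed against one group, a business can choose to select the top slice of each group separately, giving up some raw score-ranking in exchange for equal selection rates.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, n)
    score = rng.normal(0, 1, n) - 0.8 * group  # scores skewed against group 1

    # A single global cut-off passes the skew straight through to decisions.
    for g in (0, 1):
        rate = (score[group == g] > 0.5).mean()
        print(f"global threshold, group {g}: selected {rate:.2f}")

    # One possible trade-off: a separate threshold per group, so that each
    # group is selected at the same rate (here, the top 20% of each group).
    for g in (0, 1):
        cut = np.quantile(score[group == g], 0.8)
        rate = (score[group == g] > cut).mean()
        print(f"per-group threshold, group {g}: selected {rate:.2f}")

Whether that particular correction is the right one is a business and ethical judgment, not a technical one – which is exactly the point.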
On the organisational side, greater diversity in workforces and on boards helps too, providing a wider range of viewpoints and of challenge grounded in different life experiences.
Second, move away from highly charged ethical debate and focus on real-world applications.
Hopefully we can all agree that fairness and non-discrimination are good things. But until we devise frameworks in which people are held accountable for developing and using these systems in a business environment, much as they would be for any other tool, we risk getting stuck on platitudes like “racism bad, equality good”.
AI systems are not modern magic; they need to be stress-tested against outcomes and refined.
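What might such a stress test look like? A minimal sketch, assuming Python and entirely invented decision data; the audit_selection_rates helper is hypothetical, though the four-fifths threshold it applies is a long-standing rule of thumb from US employment guidance:

    import numpy as np

    def audit_selection_rates(decisions, groups, threshold=0.8):
        # Flag any group selected at below `threshold` times the rate
        # of the best-off group.
        rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
        best = max(rates.values())
        return {g: (rate, rate >= threshold * best) for g, rate in rates.items()}

    # Entirely invented decisions from a hypothetical automated system.
    rng = np.random.default_rng(2)
    groups = rng.integers(0, 2, 1_000)
    decisions = rng.random(1_000) < np.where(groups == 1, 0.15, 0.30)

    for g, (rate, ok) in audit_selection_rates(decisions, groups).items():
        print(f"group {g}: selection rate {rate:.2f} -> {'pass' if ok else 'flag'}")

A check like this does not settle the ethics; it simply turns a vague worry into a number that someone can be asked to explain.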
Finally, we should harness the expertise of internal auditors. Finance professionals may not be IT experts, but they are used to asking difficult questions across the business and can bring their culture of measurement, professional scepticism, and ethical challenge to the table.
AI is a tool, and like any tool it can be used well or badly. If we are going to operate in a world where it is used more and more, we need to start thinking about what we are trying to achieve, and how.