With Amazon’s ‘sexist’ algorithm under fire, will AI ever be able to be an unbiased decision-maker?
Daniel Gilbert, founder and chief executive of Brainlabs, says YES.
Some artificial intelligence (AI) will continue to use biased data; there’s a lot of it out there. But both the innovators in this space and regulation of AI ethics will help push the technology towards impartiality.
AI researchers want to develop machines that people trust, and they will be held accountable for biases and a lack of transparency in their algorithms. That means that they will need to educate themselves about bias in order to train AI to address the issue – and avoid embarrassing situations like Amazon’s algorithm, which taught itself from poor data that men made better candidates than women.
Tech companies are aware of the problem and are working on ways to detect and reduce bias, whether that’s by eliminating it in datasets or equipping AI with the tools to recognise bias and deal with it the right way.
I have no doubt that the AI systems that will thrive in future are the ones which have been adapted and trained to prevent bias. AI is not human – it can be designed to do better than us.
Kasperi Lewis, an associate strategist at Cruxy & Company, says NO.
Machine learning will always be as biased as the data on which it is built.
Current mainstream AI products provide hard evidence of this. Amazon’s AI recruiting tool, in the news last week, showed extreme gender bias, penalising CVs that contained the word “women’s”. The US COMPAS recidivism algorithm was found to flag black defendants as likely reoffenders at nearly twice the rate of white defendants. And Microsoft’s “Tay” chatbot quickly descended into racist jokes.
In each case, the bias these algorithms displayed came from the data they were exposed to. Tay, for example, “learnt” from conversations with online bigots.
Unconscious bias remains prevalent in the majority of industries. The tech chief executives and founders we work with tend to be too close to their own product to see its issues.
AI does have one advantage: it would never replicate that subtler, emotion-driven form of bias. However, those very same emotions may be our advantage over AI. Our emotions shape how we interpret data – that’s what makes us human.