How can you stop your credit card being sexist?
Can a credit card be sexist?
It’s not a question most people would have thought about before this week, but on Monday, a US financial regulator announced an investigation into claims of gender discrimination by Apple Card.
The algorithms Apple Card uses to set credit limits are, it has been reported, biased against women. One tech entrepreneur claimed the card offered him 20 times the credit limit it offered his wife, even though she had the better credit score, while Apple’s own co-founder Steve Wozniak took to Twitter with a similar story, despite sharing bank accounts and assets with his wife.
We don’t know how Apple’s algorithm came to such seemingly sexist decisions, but the company isn’t alone in its use of this kind of technology. Banks and other lenders are increasingly using machine learning to cut costs and speed up loan decisions.
And these accusations are the tip of the iceberg of a much bigger problem facing artificial intelligence (AI), one that goes far beyond the financial services sector. As AI is used in more and more applications across a range of industries, examples of biased systems keep mounting.
Look at what happened when Amazon tried building an AI tool to help with recruiting, only to find that the algorithm discriminated against women because it had been trained on a decade of CVs submitted overwhelmingly by men.
The AI revolution that has swept through banks, call centres, retailers, insurers, and recruiters has brought bias with it, and the problem is getting worse: AI systems are increasingly able to “teach” themselves, reinforcing existing bias as their decision-making develops.
This problem is exacerbated by the investment in opaque “black box” AI systems, which cannot communicate to the operator, regulator, or customer how their decisions have been made. Because black box systems learn from each interaction, feeding them corrupted data can rapidly accelerate poor decision-making, without the operators understanding why, or even being aware of it.
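To see how quickly that can spiral, consider a deliberately simplified simulation. Everything in it is invented for illustration (the groups, the scoring rule, the retraining step); no real lender works like this, but the feedback dynamic is the point:

```python
import random

random.seed(0)

def simulate(rounds=5, applicants=10_000, penalty=0.05):
    """Toy feedback loop: a scorer that starts with a small penalty
    against group B retrains on its own outcomes each round, sees
    fewer group-B success stories, and widens the penalty."""
    for r in range(rounds):
        approved = {"A": 0, "B": 0}
        repaid = {"A": 0, "B": 0}
        for _ in range(applicants):
            group = random.choice("AB")
            worth = random.random()          # both groups equally creditworthy
            score = worth - (penalty if group == "B" else 0.0)
            if score > 0.5:                  # approval threshold
                approved[group] += 1
                if worth > 0.5:              # repayment depends only on worth
                    repaid[group] += 1
        # Naive retraining: the gap in observed repayments is folded
        # back into the penalty, so the initial skew compounds.
        penalty += 0.5 * (repaid["A"] - repaid["B"]) / applicants
        print(f"round {r}: approved A={approved['A']:>5} "
              f"B={approved['B']:>5} penalty={penalty:.3f}")

simulate()
```

Even though both groups are equally creditworthy by construction, the model only ever learns from the customers it approved, so the small starting penalty compounds by roughly a quarter each round. Nobody coded the sexism in; the loop manufactured it.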
The only solution to this is “white box” Explainable AI: systems that can explain, in comprehensible language, how the software operates and how decisions have been made.
This kind of transparency is key. By explaining how and why decisions are made, Explainable AI helps consumers and companies understand what they need to do to get a different outcome. In financial services, that might mean telling a customer how to turn a rejected mortgage application into an acceptance; a sketch of the idea follows below. With a recruitment tool, it could mean flagging to a human reviewer why a CV was rejected, so that they could adapt the algorithm if it was clearly biased.
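What might that look like in practice? One common technique behind Explainable AI is the “counterfactual explanation”: given a rejected application and a transparent scoring rule, report the smallest change to each feature that would flip the decision. The following is a minimal sketch; the features, weights, and threshold are entirely hypothetical:

```python
# A minimal, hypothetical counterfactual explanation. The scoring rule,
# feature weights, and approval threshold are invented for illustration;
# a real lender's model would be far richer.

WEIGHTS = {"income": 0.004, "credit_score": 0.10, "existing_debt": -0.002}
THRESHOLD = 160.0

def score(applicant: dict) -> float:
    """Transparent linear score: a weighted sum of the features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def counterfactuals(applicant: dict) -> list[str]:
    """For each feature, the single change that would lift the
    application over the approval threshold."""
    shortfall = THRESHOLD - score(applicant)
    if shortfall <= 0:
        return ["application already approved"]
    tips = []
    for feature, weight in WEIGHTS.items():
        change = shortfall / abs(weight)
        verb = "raise" if weight > 0 else "lower"
        tips.append(f"{verb} {feature} by about {change:,.0f}")
    return tips

applicant = {"income": 30_000, "credit_score": 620, "existing_debt": 15_000}
print(f"score = {score(applicant):.1f} (threshold {THRESHOLD})")
for tip in counterfactuals(applicant):
    print(" -", tip)
```

Run against the sample applicant, this prints suggestions such as “raise credit_score by about 80”: the explanation doubles as actionable advice, which is exactly what a black box cannot offer.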
The technology helps consumers take action on one side, while opening new business avenues on the other: banks and other institutions can use the same insights to offer more suitable products.
Today’s AI systems are already making crucial decisions on loans, medical diagnoses, and even criminal risk assessment. While there’s a lot of good that can come out of this, there has to be an element of transparency to instil accountability in the decision-making.
If we ignore this, instead of finding ourselves galloping towards a bright future, we risk sleepwalking into a tech-fuelled dystopia.