Artificial intelligence is the new frontier of ethical tests
WITHOUT a doubt, there is vast potential for advancement and benefit to society arising out of the application of artificial intelligence.
Around half of businesses plan to use AI or advanced machine learning in some capacity in the next three years. Transport Secretary Grant Shapps has said self-driving cars could be on our roads as early as next year. This has, predictably, put the debate over artificial intelligence centre stage. What circumstances are cars taught to anticipate? What happens when the unexpected occurs, as it so often does on our roads?
There are complex legal, societal and ethical questions to consider. These include the classic “trolley” dilemma of who to save when a choice must be made, and the question of how to stop a robot going “rogue”.
As AI becomes more commonplace, regulators have been seeking to grapple with these and other wide-reaching issues. A major focus of the impending EU Regulation on AI is the need for transparency, fairness and non-bias.
There will also be reporting requirements, with powers for people to be compensated for biased, unethical or incorrect outcomes, as well as for unfair treatment of data.
When you layer on privacy regulators’ requirements around how personal data is used, the compliance journey for suppliers, adopters and users of AI can be arduous, particularly as new laws emerge and because those laws differ, and will continue to differ, across the world.
Last month, the UK Government set out proposals on the future regulation of AI, calling for people to share their views on the suggested approach. The government’s approach is arguably lighter-touch than the EU Regulation, aiming to create proportionate and adaptable rules. Both Ofcom and the CMA would be empowered to interpret and implement the key principles.
An ethical approach to the use of AI is not just essential to ensure legal compliance. Potential fines in the EU of up to €30,000,000 or 6 per cent of global turnover, together with the threat of major reputational damage and the erosion of significant value, make this a business imperative.
But how can businesses ensure their AI isn’t artificially intolerant? Much of it will come down to using the right data – and processing it correctly. That is before we even reach the complicated question of whether it is right to use the data at all, which leads to a whole set of ethical concerns around fairness.
For example, if the technology learns from historical data about who has previously succeeded in a role, it could end up surfacing only white male candidates of a certain age, because historically those were the people given the most opportunities.
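To see that mechanism concretely, the following is a minimal, hypothetical sketch in Python (using the scikit-learn library; the data and every name in it are invented for illustration, not drawn from any real system). A model trained on past hiring decisions that favoured one group learns to favour that group even between equally skilled candidates:

    # Hypothetical sketch: a model trained on historically biased hiring
    # decisions learns to reproduce that bias. All data here is invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Feature 0: candidate skill score; feature 1: membership of the
    # historically favoured group (1) or not (0).
    skill = rng.normal(size=n)
    favoured = rng.integers(0, 2, size=n)

    # Historical "hired" labels reflect past opportunity, not just skill:
    # favoured candidates were hired far more often at the same skill level.
    hired = (skill + 2.0 * favoured + rng.normal(scale=0.5, size=n)) > 1.5

    model = LogisticRegression().fit(np.column_stack([skill, favoured]), hired)

    # Two equally skilled candidates, differing only in group membership:
    same_skill = np.array([[1.0, 1], [1.0, 0]])
    print(model.predict_proba(same_skill)[:, 1])  # favoured candidate scores far higher

The point is not the particular model: any system trained on records of past opportunity will reproduce the patterns in those records unless the data is rebalanced or the outputs are audited.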
There have already been a number of cases where bias has produced unfair results – for example in mortgage applications at non-UK banks. And, of course, the decisions an AI system is permitted to make in autonomous vehicles and weapons are yet another example of the need to ensure the right guardrails are in place.
At least where systems don’t completely “think” for themselves, some businesses are already grappling with these issues. If they don’t, there is the potential for reputational damage or even fines.
Boardrooms must understand what the plethora of regulations will expect of them and their businesses, and they must prepare a clear action plan, including their approach to ethics in technology or “corporate social responsibility”. Only then can the true potential of AI be unleashed for good – without the threat of artificial intolerance or incorrectness.