Opportunity and risk: why we need ethical accountability in the adoption of AI systems
We know AI is going to change our world. What we are less sure of is exactly how that change is going to manifest itself. Already, AI-infused innovation has markedly changed our daily lives; think of the facial recognition technology that allows us to unlock our phones, or the map apps that prevent us from getting lost.
From a business perspective, AI is having a profound impact in some influential sectors, including financial services, healthcare, media and retail. We may just be scratching the surface of AI’s potential when it comes to business applications. It’s extremely difficult for us to fully envision all the interesting, innovative, and surprising ways in which we are likely to use AI in the future.
Transforming finance
Taking the accountancy profession as an example, we need to be wary of the risks posed by AI and aware that these will evolve. However, I’m very excited about the potential for AI to help accountancy professionals steward and drive their organisations into a more sustainable future.
AI will undoubtedly influence the way that accountants work, changing their roles and enabling them to deliver even greater value to their organisations. Empowered by AI tools, accountants will be able to extract actionable insights from a wider array of data sources. This will support better decision-making, more efficient operations and improved customer experiences.
Significantly, 70 per cent of ACCA members surveyed for an upcoming ACCA report, Digital horizons, agreed that AI could help them to increase the amount of time they have to focus on business-critical tasks. This is good news for both businesses and the customers who depend on them.
Yet while the opportunities presented by AI are immense, so are the risks. Our recent research identifies the core risks relating to AI. These include:
- Explainability and transparency. AI systems are complex and difficult to interpret. This can make their outputs hard to understand and limit where they can responsibly be applied.
- Bias and discrimination. Biased training data can result in AI systems inadvertently perpetuating and amplifying societal prejudices.
- Privacy concerns and security risks. AI systems can collect and analyse large amounts of data – data that could be targeted and harnessed by cyber attackers unless enhanced security measures are in place.
- Legal and regulatory challenges. Legally, there is a huge question mark around liability in AI systems. Additionally, the uncertainty around future regulatory frameworks makes it hard for organisations to plan for the implementation of AI initiatives.
- Inaccuracy and misinformation. AI systems cannot necessarily discern the truth and can generate false information in the form of ‘hallucinations’.
- Unintended consequences. As the application of AI increases, unexpected issues may arise, requiring a swift and effective response. Organisations must constantly monitor and test.
Ethical accountability
The broad spectrum of risks associated with AI highlights why businesses must prioritise the responsible adoption of AI systems. In particular, they should have a process for ensuring ethical accountability around the conceptualisation, development and application of AI tools. I believe that the accountancy profession – which has a long-standing commitment to ethical standards – can play a critical role here. Finance professionals could be charged with ensuring that AI models are used in a manner that comports with compliance and ethical obligations.
In practice, this would involve assessing whether proposed AI tools are being used in line with the organisation’s strategic vision and purpose: will they help it to deliver its objectives, and will they bring genuine benefits to customers and employees? Ethical accountability also involves ensuring that the workforce uses and shares recognised best practices. In other words, are they using the right AI tools, for the right business reasons, in the right way?
Additionally, ethical accountability will require accountancy professionals to establish strong relationships with their organisation’s technology teams to effectively manage the associated risks, including those relating to data governance. The uncertain nature of AI initiatives presents its own challenges; from an investment perspective, this means continual oversight of related costs and benefits.
AI literacy
Going forward, a basic standard of AI literacy will be demanded of many business professionals, not least accountancy professionals. It is important that we all understand the capabilities, limitations and potential applications of AI. We will also need to understand the risks presented by AI systems, and how these risks can be regularly monitored and managed. Education is key: the best way for us all to seize the opportunities, while managing the risks, is to first improve our knowledge and skills.
AI is a potential game-changer for our world because it’s such an incredibly powerful technology. But, when using AI, we all have a moral responsibility to do what we should – not necessarily what we can.
Read the research highlighted above at accaglobal.com/insights