Forget the tech, if the AI industry wants to thrive it must win over public opinion
AI must win over a suspicious public if we want a safe future for the technology, writes Lord Mayor Michael Mainelli
Artificial intelligence (AI) will have a transformative effect on the way we all do things. Whether it is education or innovation, transport or technology, our world will change. The futurist philosopher Gray Scott asks: “The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”
The UK is in a prime position to lead this debate. We attract more AI-related venture capital investment than the rest of Europe combined, and the UK AI market could be worth $1 trillion by 2035 – significantly boosting our economic growth.
As the City of London Corporation’s report with EY shows, the financial and professional services sector is a leading adopter and investor in AI, with financial institutions expected to allocate an extra $31bn (£24.6bn) specifically for AI investments by 2025.
Yet public opinion about the new technology remains decidedly mixed. From threats to jobs to the dangers of deepfakes, social manipulation and algorithmic bias, people are understandably concerned about the changes AI may bring. In the absence of agreed standards and rules, those fears risk becoming ever more entrenched in the public psyche.
Like all technologies, and perhaps more than most, AI requires testing, inspection, certification and accreditation. As part of the Lord Mayor’s ethical AI initiative, we are promoting firms’ adoption of the ISO standard on AI management systems (ISO/IEC 42001), which puts in place policies and procedures for the sound governance of AI across an organisation and supports firm-wide certification.
Working with the Chartered Institute for Securities & Investment (CISI) and the British Computer Society, we have also launched courses in ethical AI for individual certification, focused on financial services professionals and the builders of AI systems. Already, more than 4,000 participants from 300 organisations across 45 countries have taken part.
Now we are taking this work a step further, towards an international agreement on quality assurance for AI. Just before Easter, we convened at Mansion House a gathering of AI experts and C-level executives from global testing, inspection and certification enterprises to discuss the risks and benefits of AI. Working with the Testing, Inspection and Certification Council and the United Kingdom Accreditation Service (UKAS), we examined the crucial role of regulation and of accredited commercial standards markets – all of which will help ensure the appropriate use of AI solutions. What was clear from the summit was that global quality infrastructures are needed to inspire the confidence required for AI products and solutions to be trusted and adopted.
Participants from the global testing, inspection and certification sector committed to establishing this global infrastructure through a new agreement, the Walbrook AI Accord. It represents a collective commitment to develop principles for the adoption, deployment and market assurance of AI technologies, paving the way for the safe, transparent and ethical use of AI. The Walbrook AI Accord will harmonise the adoption of quality infrastructures for AI assurance, the development of assurance standards and techniques, and the training and upskilling of practitioners.
By codifying this agreement, we can rise to one of the great challenges of our age, ensuring that we are not passive recipients of this technology but proactive pioneers shaping its trajectory for years to come. For the successful regulation and deployment of AI, we need the global infrastructure for its testing, inspection and certification.