Ministers don’t understand artificial intelligence, so how can they regulate it?
Regulators are scrambling to stay abreast of the rapid growth of artificial intelligence and machine learning. There are two familiar camps: one favouring strict regulation in the name of protecting consumers and ensuring the responsible use of technology, the other advocating a light-touch approach that allows innovation and competition to thrive.
This is a false dichotomy: as long as regulators do not fully understand AI, technology companies will continue to innovate around regulation.
Last week the Commission on Race and Ethnic Disparities report made recommendations to the Government on regulating AI. First, that mandatory transparency obligations be placed on all public sector organisations applying algorithms to make decisions that have a significant impact on individuals. Second, that the Equality and Human Rights Commission issue guidance clarifying how to apply the Equality Act to algorithmic decision-making, including guidance on the collection of data to measure bias and the lawfulness of bias mitigation techniques.
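To make the bias-measurement recommendation concrete, here is a minimal sketch, assuming an organisation holds decision outcomes alongside a protected characteristic, of how it might compare approval rates between groups. The data, group labels and threshold below are hypothetical illustrations, not the methodology the report prescribes.

```python
# Illustrative sketch only: one simple way an organisation might measure
# outcome disparity in algorithmic decisions. The records, group labels and
# review threshold are hypothetical, not the report's recommended method.
from collections import defaultdict

decisions = [
    # (protected_group, algorithm_approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

# Approval rate per group and the gap between the best- and worst-treated groups
rates = {g: approved / total for g, (approved, total) in counts.items()}
disparity = max(rates.values()) - min(rates.values())

print(f"Approval rates by group: {rates}")
print(f"Demographic parity difference: {disparity:.2f}")
if disparity > 0.2:  # arbitrary illustrative threshold
    print("Flag for review: approval rates differ substantially between groups.")
```

A gap like this does not prove discrimination on its own, but collecting and publishing such figures is the kind of transparency the recommendation points towards.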
The recommendation of transparency obligations is a good thing. Regulation of fast-growing, high-impact new technology is both inevitable and desirable. Nor is it necessarily to the detriment of innovation and positive outcomes; regulation and innovation can productively coexist. But we need to address a more basic problem: policymakers cannot devise an optimal regulatory framework until they understand the technology.
Beyond the drafting of regulation, we need to get to a point of transparency where not only the creators and users of AI understand its processes and the decisions it makes, but also the people whose lives are affected – sometimes profoundly – by those processes and decisions.
This is especially critical in the context of race, and in particular the highly-charged issue of racial inequity. If people don’t understand how and why decisions are being made by AI, how can they have confidence in the system, especially if it generates results that look on the surface a lot like discrimination?
Without radical transparency, Government and other organisations will become much more open to accusations of structural racism – not less. AI is not objective, and without that transparency it will be harder to demonstrate that any inequities are the result of correctable errors in the calibration and implementation of the AI rather than anything more nefarious.
There is no such thing as inherently ethical AI, only ethical or unethical applications, just as a hammer is not inherently ethical but can be used constructively or weaponised by its holder. Likewise, we know AI can positively impact people’s lives and society at scale when used responsibly, but it too can be weaponised. Any new framework must be rooted in insight and expertise, so that bad actors cannot obstruct positive outcomes.
There are plenty of examples where AI has seriously mischaracterised reality and acted on that mischaracterisation, from reinforcing sexism in recruitment screening to unintentionally denying critical care in medical settings.
Given the current capabilities of AI models and of AI practitioners worldwide, the industry cannot simply put these examples down to error or mistake, or claim that such unintended consequences could not have been foreseen.
If organisations themselves can’t understand the AI they use because of a lack of transparency, they can’t know for sure that they aren’t perpetuating existing discrimination or introducing new kinds.
At best, a lack of transparency in AI is symptomatic of the lazy assumption that impenetrable “black box thinking” is a marker of AI’s quality. In fact, the better the understanding, the better the results can be. At worst, it represents, or could be seen to represent, deliberate obfuscation or attempts to engineer unequal outcomes.
AI transparency is not a panacea – it does not fix systemic problems, nor does it stop bad actors from using the technology irresponsibly. However, for Government, companies and practitioners trying to do good with AI, a transparent approach can protect against the worst mischaracterisations and steer away from consequences and harm that, at this point in time, should be foreseeable.