What ethical guard-rails does generative AI need?
by Bryan Tan, Partner, Entertainment & Media Group, Reed Smith
AI has the potential to transform almost every aspect of society, from healthcare and transportation to education and entertainment. Recent developments in AI have created excitement about its potential and, understandably, generated commercial interest and massive capital inflows.
However, while AI has been portrayed as the saviour in movies such as ‘Wall-E’, other works highlight dangers and abuses of the technology that mirror growing public concerns. Will AI lead us into utopia or dystopia? As AI becomes increasingly integrated into our lives, it is crucial that we establish foundational ethical principles that guide its development and use.
While there is currently no definitive, universally agreed statement of ethical principles for AI, I believe the following should be considered when designing and implementing AI systems.
Transparency is essential because it allows users to understand how AI systems work and why and how they make certain decisions. Without transparency, AI systems can seem mysterious or even untrustworthy.
Accountability, meanwhile, calls for AI systems to be designed so that responsibility for their actions can be assigned. It also means that AI actors are responsible and answerable for the proper functioning of AI systems and for respect of AI ethics and principles, based on their roles, the context, and consistency with the state of the art.
We should also strive for AI systems that generate accurate results as the designers intended – without unintended consequences. AI systems should accordingly identify, log, and articulate sources of error and uncertainty throughout the algorithm and its data sources so that expected and worst-case implications can be understood and inform mitigation procedures.
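The logging of error and uncertainty sources described above can be sketched in code. The following is a minimal, illustrative example only; the class names, the confidence field, and the review threshold are all assumptions, not part of any standard.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

@dataclass
class Prediction:
    value: str
    confidence: float   # the model's own uncertainty estimate, 0.0-1.0
    data_source: str    # provenance of the inputs that drove the decision

def log_prediction(pred: Prediction) -> Prediction:
    """Record each decision with its uncertainty and data provenance,
    so errors can later be traced back to the algorithm or its data."""
    logger.info(
        "prediction=%s confidence=%.2f source=%s",
        pred.value, pred.confidence, pred.data_source,
    )
    if pred.confidence < 0.5:  # illustrative worst-case threshold
        logger.warning("low-confidence decision; flag for human review")
    return pred

log_prediction(Prediction("loan_approved", 0.42, "credit_bureau_feed_v2"))
```

A log of this kind gives later mitigation procedures something concrete to work from: every decision carries its confidence and its data lineage.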
To effect accountability and demonstrate transparency, AI systems should feature auditability to enable interested third parties to probe, understand, and review the behaviour of the algorithm through the disclosure of information that enables monitoring, checking or criticism.
Being able to explain how an AI system generates its results removes room for doubt. Developers of AI systems should ensure that automated and algorithmic decisions, and any associated data driving those decisions, can be explained to end users and other stakeholders in plain, everyday terms.
AI systems must be designed to respect the privacy of individuals and protect their personal data, as already required by law. As AI systems become more prevalent, it is essential to ensure that they do not infringe upon individuals’ privacy rights from the outset; once AI systems have accessed personal data, undoing this is extremely difficult, if not impossible.
Fairness is another essential principle for AI. AI must be designed to treat everyone equally and without bias. This means systems should be trained on diverse, high-quality data sets that represent different demographic groups and then monitored to ensure that they do not perpetuate discrimination or inequality.
Safety calls for designing AI systems that are secure and resistant to attack, and that do not cause harm to humans or the environment. The overriding principle must be that implementing the AI system creates value which is materially better than not engaging in that project.
Ultimately, AI systems must be human-centric. The design, development and implementation of these technologies must not infringe internationally recognised human rights and should be accessible to as wide a population as possible. To this end, we should aim for an equitable distribution of the benefits of data practices and avoid data practices that disproportionately disadvantage vulnerable groups, at the same time as creating the greatest possible benefit from the use of data and advanced modelling techniques.
Another significant ethical concern surrounding AI is bias. AI systems are only as objective as the data they are trained on, and if the data is biased, then the AI system will be biased. This is particularly problematic when AI systems are used to make decisions about people’s lives, such as hiring or loan approval. Researchers must be aware of bias and ensure AI systems are trained on diverse data sets accurately representing different demographics.
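The monitoring of training data for demographic balance can be sketched simply. This is an illustrative check, not a standard fairness metric; the field name `group` and the imbalance threshold are assumptions for the example.

```python
from collections import Counter

def representation_gap(records, group_key):
    """Measure how unevenly demographic groups appear in a training set.

    Returns the gap between the most and least represented groups,
    as a fraction of the whole data set (0.0 = perfectly even).
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    shares = [c / total for c in counts.values()]
    return max(shares) - min(shares)

# Toy training set: group A is heavily over-represented.
training_data = [
    {"group": "A"}, {"group": "A"}, {"group": "A"}, {"group": "B"},
]
gap = representation_gap(training_data, "group")
if gap > 0.25:  # illustrative threshold for flagging imbalance
    print(f"imbalanced training data: gap = {gap:.2f}")
```

Checks of this kind would run before training and again as data is refreshed, since a data set that was balanced at launch can drift over time.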
While over-regulation may stymie innovation, ethical guard-rails can ultimately assist and spur the development of this technology.