Back to school: We must teach AI to have a moral compass
In the UK, it’s mandated that 13 of the first 18 years of a young person’s life are spent in some sort of education.
We invest very heavily in teaching children what they need to know to be functioning and productive members of society: our doctors, our teachers, our data scientists of the future.
Artificial intelligence (AI) is not new. But its influence in our lives has ballooned in the last few years. Search engines, music streaming, and our morning commutes are all affected by an AI of some description, while Alexa, Google and Siri are in our living rooms.
Within two years, AIs will be our virtual colleagues too, according to 80 per cent of business leaders.
Is our perception of what makes a good AI really in sync with what is actually needed? It’s a smart technology, with the capacity to execute tasks and absorb huge amounts of information. But it doesn’t learn on its own.
As it becomes a bigger part of our lives and makes increasingly important decisions that affect society, we need to make sure it’s learning in the right way.
We need to teach it – not just information, but the principles of good citizenship: responsibility, fairness, and transparency.
Raising AI requires us to address many of the same challenges faced in human education: fostering an understanding of right and wrong, and what it means to behave responsibly.
Teaching presents its own challenge. How do you impart knowledge without bias and build self-reliance, while at the same time encouraging “playing nicely” with others?
When you look at the science of this process, first people learn how to learn, then they rationalise or explain their thoughts and actions, and eventually they accept responsibility for their decisions.
The process for teaching an AI is similar to teaching a person. Deep neural networks – which drive many AI systems – are inspired by the myriad neural connections of the brain.
Much like the brain, they learn continuously by forming new connections. A child in school is taught their times tables before trigonometry for a reason: it’s only once they understand how to multiply numbers together that Pythagoras’ theorem becomes something they can grasp.
It’s this broad principle that enabled DeepMind’s AlphaGo Zero AI to teach itself the game of Go from scratch, given nothing but the rules and no human games to learn from.
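For readers who want to see the idea in miniature, the sketch below is a toy illustration of this “easy lessons first” approach, sometimes called curriculum learning. Everything in it, the data, the simple model and the learning rate, is invented for the example; it is a sketch of the principle, not how AlphaGo Zero was built.

```python
# A toy sketch of "curriculum learning": easy examples first, harder ones later,
# much like times tables before trigonometry. All values here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: learn y = 3x + 2 from noisy samples.
# "Difficulty" is simply the amount of noise on each sample.
x = rng.uniform(-1, 1, size=200)
difficulty = rng.uniform(0, 1, size=200)                  # 0 = easy, 1 = hard
y = 3 * x + 2 + difficulty * rng.normal(0, 1, size=200)   # harder = noisier

w, b, lr = 0.0, 0.0, 0.1

# Present the material in stages, easiest first, as a teacher would.
for stage in (0.3, 0.6, 1.0):
    idx = difficulty <= stage
    for _ in range(200):
        err = (w * x[idx] + b) - y[idx]
        w -= lr * (err * x[idx]).mean()   # gradient step on squared error
        b -= lr * err.mean()

print(f"learned w = {w:.2f}, b = {b:.2f} (true values: 3 and 2)")
```

The point is not the arithmetic but the ordering: the system builds its later lessons on the ones it has already mastered.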
The difference from humans is that AIs don’t get tired, they don’t need sleep, and they can read thousands of lines of information in a second. They learn so quickly that we can get systems up and running in next to no time.
However, there is a risk here too. Just as kids can fall in with the “wrong crowd”, AIs can be led astray. Bad influences distort a child’s sense of right and wrong, just as biased or bad data will teach an AI the wrong things.
If you tell a child that 2×2=5, or that stealing is acceptable, they will start to learn this to be true and will base future learning upon this misconception.
Bad data makes a bad AI, and an AI cannot discern for itself what is good and what is bad. The people tasked with nurturing these young technologies need to take responsibility for their education.
They need to give them the best data that they can; they also need to try to correct, or “unlearn”, bad information. This process of correction is vital to ensuring that gender, racial or socio-economic bias does not underpin decision-making.
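What does checking for those biases look like in practice? The sketch below is a deliberately simplified, hypothetical audit: it compares approval rates between two groups and flags a large gap as a reason to go back to the training data. The decisions, group labels and threshold are all made up, and real fairness audits use far richer measures.

```python
# A simplified sketch of auditing automated decisions for group bias.
# The decisions, group labels and the 0.1 threshold are all hypothetical.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"approval-rate gap between groups: {gap:.2f}")

# A large gap is a prompt to re-examine, rebalance or reweight the training
# data and retrain, i.e. to help the system "unlearn" the bias.
if gap > 0.1:
    print("warning: outcomes differ sharply by group; audit the training data")
```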
Getting this right is crucial, as algorithms will become more and more responsible for everything from financial decisions to health to criminal justice, with less and less human supervision. With such power over the lives of everyday people, there is added pressure for AIs to be “raised right”.
We rightly expect that mortgage brokers, doctors and judges have a well-centred moral compass, in addition to knowing their stuff in their given field. It’s important to show that automated decisions are explainable and justifiable as part of an AI’s introduction to society – make an AI pass its exams before it gets a job.
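As a rough illustration of what that might look like, the sketch below assumes a made-up linear scoring model for a mortgage decision and prints how much each factor contributed to the outcome, so the decision can be justified in plain terms. The features, weights and threshold are invented for the example.

```python
# A hypothetical, deliberately simple "explainable decision": a linear score
# whose per-feature contributions can be read out and justified.
# The features, weights and threshold are invented for illustration.
weights = {"income": 0.5, "existing_debt": -0.7, "years_employed": 0.3}
applicant = {"income": 1.2, "existing_debt": 0.4, "years_employed": 0.8}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"decision: {'approve' if score > 0 else 'decline'} (score {score:.2f})")
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {value:+.2f}")
```

Real systems are far more complex, but the expectation is the same: the reasoning behind a decision should be open to inspection before that decision is trusted.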
The AIs that produce bad outcomes are the product of their environment. It’s not their nature to make biased decisions. They learn from their creators and from their teachers.
We will get the AI that we deserve – the investments, decisions and approach we take now will shape the AI of the future – and so it’s up to all of us to make sure our “children” get a proper education.