We should teach with AI tools in schools so students learn how to use them ethically
Artificial intelligence will soon become a staple of everyday life. So university students should learn how to use it, analyse it and critique it – exactly like they do with their peers, writes Amelia Peterson
There is no question that AI is now a feature of life. Not a day goes by without a major announcement of new capabilities. This is why at the London Interdisciplinary School we encourage our students to use generative AI – the range of tools and plug-ins that use large language models to ‘generate’ material mimicking human expression – even in their final assessments.
The reason we do this is simple. Our degrees train students to tackle the world’s most complex problems using a wide range of methods, including data analysis, political theory, visual arts and linguistics.
Just as we have always taught students to use professional software, generative AI should be a tool at every student’s disposal – so they can learn its strengths and weaknesses, when it’s useful, and when it’s not.
We’ve already heard employers say they will expect new recruits to have this understanding. But we don’t do this just to boost our students’ employment prospects. Allowing the use of AI tools raises the bar for what we can expect students to achieve. We can no longer give credit for a basic synthesis of existing sources, so we expect more of their analysis and interpretation.
This doesn’t mean it is a free-for-all; cheating is still cheating. Students must be transparent about their use of AI, just as they must be about any other source. ‘AI declarations’ have become a formal part of submissions, backed by the same range of guidance, detection and misconduct procedures already used to manage plagiarism. This will be an evolving space, but it’s important that students learn to navigate the grey area between acceptable and unacceptable use.
This year our students have used AI to help them identify obscure sources, create apps, generate narratives, make graphics, edit video, and more. None of these uses replaces the core knowledge of their degree; they simply help students develop their ideas to a more professional level, even in short projects. In one case, a student even created a working simulation of a new AI tool aimed at promoting critical thinking.
Universities, employers and students need to work much more openly on how we can make the most of generative AI. Some have compared it to the arrival of scientific calculators – a new tool that raised questions about what was worth teaching and learning, but was ultimately assimilated quite easily into traditional teaching methods. But AI tools pose more fundamental questions because their outputs are more creative: they are more like games than calculators, with fixed rules but unpredictable outcomes. We could all benefit from sharing more about how we are integrating this capability into teaching and learning.
A degree is about learning how to think critically, and AI only makes this more important. To be informed decision-makers on AI, our students get a grasp of ethics, politics and law, but also of natural language processing, the field underpinning tools like GPT. Just as they need to be able to write and think for themselves, students need to understand the fundamentals of coding and data science, so in this area they will continue to be assessed without access to AI tools.
AIs are like unruly teenagers – half of what they produce might be nonsense, but half is gloriously creative. Our students need to learn to work with them, challenge them and critique them – just as they would their peers. If we can help them foster that kind of relationship, they can lead a generation that uses these new powers for good.