Could a chatbot groom your child?
Generation Alpha are already growing up in a very different digital landscape to their predecessors, and their lives will be fundamentally shaped by AI, says Eliza Filby
In all the surveys, books and discussions about modern parenting, there’s one thing on which we are all in agreement. Parenting today is harder than it was for our parents, and for one reason: navigating tech with children.
Yes, there have always been moral panics about new technology and young people – whether it was the wireless in the 1930s, TV in the 1950s, video nasties in the 80s or gaming in the 90s. But smartphone addiction is justifiably being perceived as a unique and unprecedented challenge. As Jonathan Haidt’s book The Anxious Generation lays out in stark detail, there is a correlation between slavish smartphone addiction and the decline of self-esteem, wonder, freedom and resilience in young people over the last decade. Haidt rightly calls for collective action by schools, tech giants, governments and parents. If anything, the success of Haidt’s work has revealed an anxious generation of Millennial parents seeking to navigate these uncharted waters for the next generation: Gen Alpha. The most sensible advice I have received on this issue is to look at how the tech titans parent their own kids.
But in focusing on the hardware, could we be missing the new challenge: how the software is changing? We obsess over social media platforms and airbrushed influencer culture when, in most teenage groups, WhatsApp is the damaging arena. There’s definitely a parents’ movement brewing, but the priority here should surely be to focus on what’s next rather than what has gone before. What was a challenge for Gen Z may not be the issue for the next generation, Gen Alpha (born after 2010), who are already growing up in a very different digital landscape from their predecessors.
Young people will always be savvier than their parents
The uncomfortable (and perhaps reassuring) truth is that young people will always be savvier than their parents; the whole point of being young is hiding stuff from your elders and establishing your identity separate from them. And yes, this holds true however open, nurturing, friendly and liberal the child/parent relationship. For this reason and many others, parents will always be playing catch-up, but if they are the ones in charge of regulating behaviour at home and at school, or pressurising government, it has never been more important that they keep abreast of developments. And this brings us to generative AI.
A degree of parental ignorance on this matter is already evident. One survey in the US by Common Sense Media found that 70 per cent of children now use AI, but only 37 per cent of parents are aware that they do. Predictably, what kids were mostly using it for was homework, but some admitted to deploying it to create deepfakes: playing with someone’s voice or image. Whereas once the fear was that kids were sharing intimate pics online, now the fear is that they will be victims or creators of doctored pictures.
My parents used to worry about who I hung around with and about me falling into the wrong crowd (which I often did). Gen X rightly feared Gen Z being groomed online by strangers. Today, though, there should perhaps be a different concern: becoming too intimate with a chatbot. Take the new My AI feature on Snapchat, for example. Snap was the first major social media platform to adopt an AI-powered chat dimension, and only paid subscribers can switch it off (thereby creating a pay barrier to tech freedom). Launched in 2023, it is powered by OpenAI’s GPT, and while on the surface it aims to provide useful personalised information, it should probably come with a couple of warnings. Firstly, as with TikTok, deepfakes are part of its DNA: it can, for example, create an AI-generated Snap of you and your friends swimming with dolphins, or, as you can imagine, something far less benign. Secondly, there is the ability to advertise through My AI, which, given the level of personalisation and intimacy this chatbot can generate with its users, is not something that should be welcomed. Should we be encouraging an automated confidant who also just happens to be a constant personalised advertising feed?
The ‘intimacy’ point is perhaps the most important: such a relationship will only exacerbate rather than alleviate feelings of isolation and loneliness amongst young people, and does not in any way feel like a progressive step in tackling teenage mental health. And, as with all generative AI, there is naturally a concern about the data on which the model was built. As an experiment, one researcher looking into My AI posed as a 13-year-old girl and chatted about going on a trip with a male companion 18 years older than her. My AI messaged her about “staying safe and being cautious”, but when the researcher prompted it about whether to have sex with said man, it advised her to set “the mood with candles or music”.
Oversharing and emotionally connecting with bots will soon become second nature to a generation raised on ChatGPT
What self-respecting teenager would go to a chatbot for such advice, you might ask? To anyone over the age of 30, such emotional investment might seem farcical. But Baby Boomers are equally bemused by Millennials’ parasocial relationships with influencers they themselves have never heard of. Influencers driven by AI chatbots are where Meta is already heading. And oversharing and emotionally connecting with bots will soon become second nature to a generation raised on ChatGPT, AI personalised tutors and, yes, My AI integrated into the regular social media apps that you communicate with more than your friends. Sadly, such cases are already emerging. One mother in Florida, Megan Garcia, has filed a lawsuit against Character.AI, which she claims is accountable for the death of her 14-year-old son, Sewell Setzer III, who died by suicide in February of this year after becoming obsessed with the app. This is not fearmongering, but rather a note that, when it comes to a parental voice in tech regulation, we should have an eye on the future rather than on the past smartphone culture we ourselves inhabited and know.
One survey by Barna found that, although parents were concerned about AI safety and data privacy, 25 per cent were worried that AI could “negatively impact their (kids’) ability to think for themselves”. Yes indeed, but judging from my focus groups with Gen Alphas across Europe, teenagers are much more aware than their parents of how algorithms and AI work. Schools are already stepping in here, taking increasing responsibility for training Gen Alpha in critical thinking when it comes to large language models (LLMs): alerting them to the biased nature of the data, but also teaching emotional regulation when interacting with them. Helpfully, AI expert Ethan Mollick suggests learning together, advising parents to sit down with their kids and experiment with AI, ideally on a subject that a teen knows really well – Taylor Swift lyrics, for example – and to get them to critically evaluate the validity, the legality, even the morality of the output. It will be parents who will be doing all the learning here, I’m sure.
In the main, we cannot rely on schools or tech to do this; the former are too slow on the uptake, and as for the latter, well, it goes against their commercial interests. I suspect that Gen Alpha will become known as Gen AI before long, given the extent to which their lives will be shaped by it. Banning smartphones has always been about addressing addiction, but as AI integrates more deeply into our lives, this addiction is evolving. The next challenge won’t just be screen time – it will be managing the intimate and persuasive grip of AI-driven interactions.
Eliza Filby’s book Inheritocracy: The Bank of Mum and Dad is out now