Navigating the online safety dichotomy will resolve the uncertainty of tech regulation
Our daily, physical lives are increasingly interwoven with our online lives, especially for children and young people. This can be a strength – as we saw during the pandemic – but it can also be a weakness.
Just as we have a responsibility to keep people safe in real life, we have the same responsibility in online spaces. For all its benefits, the internet has a darker side. The same places where children watch cute cat videos can also expose them to harm. That harm takes many forms, from “hate-mobbing”, where hate-filled online pile-ons are used to abuse individuals, through to the active promotion of self-harm, encouraging young people to hurt themselves for “likes”. We wouldn’t find this acceptable in the real world, yet it happens online every day.
Today the Joint Committee on the Draft Online Safety Bill publishes its final report on the Government’s plans to make the UK the safest place in the world to be online. We’ve heard evidence from all sides: from industry as well as activists, from professors to politicians. This is not an anti-business, anti-social media report. But we heard again and again that the current approach of leaving social media companies to regulate themselves is not working. What we’re asking for is simple: that these companies apply the same standards online as any other business would offline.
Rio Ferdinand, the former England footballer, was one of the very first people we heard from. He described the horrifying racism he has been exposed to online, and the impact it has had on him and his family. Epilepsy Society chief executive Clare Pelham told us that people with photosensitive conditions, like epilepsy, have been deliberately targeted online with flashing images in an attempt to silence them. Zach Eagling is an inspiring ten-year-old who has been campaigning to make it a crime to deliberately try to trigger seizures in people with epilepsy. His is a brave campaign, but it shouldn’t fall to a child to keep himself and his peers safe.
None of this would be allowed to happen offline.
The Online Safety Bill will establish clear responsibilities for platforms, which will be expected to put safety first. The Government should draw on existing laws that are already enforced offline to make sure social media companies don’t allow their algorithmic systems to facilitate and promote what would never be allowed in person: racist, sexist or antisemitic discrimination, the targeting of children to encourage them to harm themselves, or the facilitation of human trafficking.
New offences will also need to be created. As well as Zach’s Law, which the Committee is endorsing, there should be a new offence of cyberflashing. As many as 76 per cent of girls aged between 12 and 18 have been sent obscene pictures.
Teens are being exposed repeatedly to self-harm content. Under the Online Safety Bill, the promotion of content that glorifies suicide will also be an offence.
For a long time a cloud of uncertainty has hung over tech companies’ duty to protect their users. Uncertainty is always bad for business, especially for smaller firms that don’t have the financial resources to chart a course through the grey areas. The Online Safety Bill aims to provide certainty. Once the law is in place, social media platforms will know exactly what their duties are. The approach is proactive rather than reactive, and businesses will be able to innovate in a way that builds in “safety by design”.