The labyrinthine Online Safety Bill could collapse under its own complexity before it's begun
The Digital, Culture, Media and Sport Committee came out this week to criticise the government's Draft Online Safety Bill, stating it is "neither clear nor robust enough to tackle certain types of illegal and harmful content".
Chair Julian Knight MP even labelled it a “missed opportunity”.
As it currently stands, the bill’s duty of care is split into three parts: preventing the proliferation of illegal content and activity such as child pornography, terrorist material and hate crimes; ensuring children are not exposed to harmful or inappropriate content; and, for tech giants, ensuring that adults are protected from legal but harmful content, a purposely catch-all term.
The committee has called on the government to address technically legal acts that manage to circumvent the law – including child abuse traps like "breadcrumbing" and deepfake pornography.
It demanded that such "legal but harmful" acts either be brought into the scope of the law or defined within the duties of care imposed on social media giants.
But there are ongoing concerns that the bill is attempting to legislate against something we cannot truly grasp, because we understand neither the extent nor the nature of the harm.
For Niamh Burns, a research analyst at Enders Analysis, the bill fails to adequately address transparency concerns. "In a lot of cases we don't really know what kind of content is actually harmful, how harmful it is, or how widespread it is," she told City A.M. Trying to legislate before we understand this is likely a fruitless task. Part of the problem is that the labyrinthine inner workings of tech giants like Meta and Twitter make their safety issues difficult to "fight" at all.
The revelations from Facebook whistleblower Frances Haugen laid bare the extent of the tech giant's dishonesty about its progress in tackling hate and misinformation on its site.
Without regulators working closely with the tech giants, any crackdown will inevitably be "heavy-handed and misguided".
The broad scope of the law makes it difficult to gain approval at either a parliamentary or an industry level. Part of the problem is that many of the definitions won't take shape until the watchdog Ofcom sets out the limits of the Code of Practice; this won't happen until the law itself is passed.
Those sceptical of regulation see the ambiguity as an open invitation to government overreach. Matthew Lesh, Head of Public Policy at the Institute of Economic Affairs, warned that the scope of the bill allows the government to make decisions on morality, especially as it pertains to the "legal but harmful" question.
"They want to force social media companies to uphold 'morals', but whose morals? They want to remove content that 'undermines, or risks undermining, public health', which could see the censoring of contrarian opinions. And they want to define content that 'risks the reputation of others' as harmful, which could encourage the removal of any negative comments," he told City A.M.
This has potentially dangerous repercussions. Just look at the uproar when YouTube removed the politics channel Novara Media from its platform last year, without explanation.
But it's not just the big dogs we need to worry about.
Alex Cadier, UK Managing Director for NewsGuard, a news reliability firm, said the bill "shoots itself in the foot" with its over-zealous focus on the industry giants. According to him, misinformation will simply be pushed onto smaller platforms, where it is harder to track – and fight. Again, we would be fighting something that doesn't yet exist: blogs that haven't been written and Reddit-style sites that haven't been built.
So it seems the Online Safety Bill is a beast that will never be robust until there is better communication and transparency between the key stakeholders: lawmakers, tech giants and the public.
It also needs to start providing clarity that doesn't hinge on the law passing first – an uphill battle, even in the eyes of its most generous commentators.