The Online Safety Bill’s noble objectives won’t save it from disastrous outcomes
For months, technology experts and digital rights advocates have voiced a central concern about the Online Safety Bill: that the bill in its current form will not curtail harmful content, a conclusion the Royal Society recently reached as well. The conversation is finally starting to shift in their favour. Last month, the House of Commons Digital, Culture, Media and Sport Committee reached the same verdict and called upon the government to address “urgent concerns” with the legislation.
It is becoming evident that the Online Safety Bill is an ill-fitting remedy for online harms. Though the reports support modified legislation, it is worth considering whether any iteration of the bill can rid social media of misinformation and discourage the bad actors who spread it.
The widely anticipated draft Online Safety Bill was published in May 2021, with the goal of creating a “new regulatory framework to tackle harmful content online.” The proposal would establish a legal duty of care for online platforms, meaning they must address both illegal and “legal but harmful” posts, lest they incur steep fines.
Oliver Dowden, Conservative party co-chair and former culture secretary, expressed hope that the bill would put the UK at the forefront of these conversations by showing “global leadership with our groundbreaking laws to usher in a new age of accountability to the online world.”
But the proposed solution is not “groundbreaking.” For a period in the mid-1990s, US courts held that internet service providers could be liable for defamatory content posted by their users. American companies were treated as responsible for user-generated speech, which created a “moderator’s dilemma” that stifled free expression online. Section 230 of the Communications Decency Act of 1996 remedied this by clarifying that providers were not responsible for third-party content.
The Online Safety Bill is unlikely to yield better results. Content moderation on social media has attracted criticism from all sides: those who think platforms are too strict and heavy-handed, and those who think their efforts are inadequate and lax. During the pandemic, many firms, including Twitter, YouTube, and Facebook, amplified their moderation efforts to combat misinformation, which prompted ire from many users who believed Big Tech giants removed their posts without sufficient justification or warning. For those determined to fall down the rabbit hole, the legislation will only be viewed as “proof” of a stitch-up.
The report from the DCMS committee warned that these conditions would only be exacerbated by the new laws. The committee said the bill fails to “adequately protect” free speech and would need several amendments to “bring [it] into line with the UK’s obligations to freedom of expression under international human rights law.”
At its core, the Online Safety Bill presumes that social media companies, not users, should be responsible for the content on their websites. This belief inherently ignores the role of bad actors, to the detriment of all other users. Its motives may be noble, but the result could be even more harmful: driving misinformation to corners of the internet that are harder to reach and fuelling distrust in the “mainstream”. Misinformation will not be conquered with one piece of legislation; to think otherwise is foolish.