Labour’s bid to ban anti-vax content from the internet is well-meaning — but misguided
How do you combat anti-vaxxers who spread misinformation online and hamper the government’s efforts to achieve UK-wide immunisation?
The answer, according to the Labour party, is emergency legislation introducing financial and criminal penalties for social media platforms that repeatedly fail to remove anti-vaccination content.
Shadow health secretary Jonathan Ashworth suggested the law last week while raising his concerns on Sky News, arguing that the government needed to take action and calling so-called anti-vax content “dangerous nonsense”.
The intention is obviously a worthy one, but the proposal draws attention to questions about both the government’s ability and its willingness to tackle online harms.
At present, social media platforms rely on notice and takedown requests to tackle problematic content — a whack-a-mole approach which often misses the mark for users affected by a range of issues, from fraud and impersonations to defamation.
Notably, however, this model hasn’t stopped platforms from taking proactive steps to clamp down on anti-vaccination content of their own accord. Facebook, for example, has banned anti-vax advertisements, while Instagram has been hiding search results relating to anti-vax hashtags since May 2019.
But Labour appears to want them to go further: it wants legislation imposing greater responsibility on social media platforms (as envisaged in the government’s own white paper on online harms).
As it stands, the E-Commerce Directive prevents governments from imposing a general obligation on social media platforms to monitor the content they host. So if social media companies can’t be forced to monitor everything posted to their platforms, how will they identify anti-vaccination posts?
One answer lies in last year’s ruling in Eva Glawischnig-Piesczek v Facebook Ireland. In that case, the Court of Justice of the European Union noted that the E-Commerce Directive bans only monitoring obligations of a general nature. Facebook could therefore be required to monitor for content containing specific information that had already been held to be unlawful: in this instance, an article and defamatory comment about the Austrian politician Eva Glawischnig-Piesczek, and any content conveying the same message.
That said, according to the Glawischnig-Piesczek judgment, such monitoring for specific information must be carried out by automated means, so that the platform does not have to make its own individual assessments of meaning. The effectiveness of these automated filters is variable: while they can block text-based data, such as keywords and hashtags linked to anti-vax sentiment, they cannot identify problematic content contained within image or video files. That is a significant shortcoming, given the visual nature of social media.
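To illustrate the limitation, here is a minimal sketch of the kind of text-based filter described above, written in Python. The blocked-term list, the sample posts, and the function name are hypothetical, and real platforms rely on far more sophisticated classifiers.

```python
# Minimal sketch of a text-based content filter.
# The blocked terms below are hypothetical illustrations.

BLOCKED_TERMS = {"#antivax", "#vaccineinjury", "vaccines cause autism"}

def should_flag(post_text: str) -> bool:
    """Return True if the post's text matches a blocked keyword or hashtag."""
    text = post_text.lower()
    return any(term in text for term in BLOCKED_TERMS)

# A text post is caught...
print(should_flag("Must read! #AntiVax"))             # True

# ...but a meme carrying the same message inside an image sails
# through, because the filter never sees the words rendered in the
# picture itself.
print(should_flag("shared a photo: meme_38291.jpg"))  # False
```

The sketch makes the gap concrete: unless a platform layers optical character recognition or image classification on top, the same message repackaged as a picture or video escapes a purely text-based match.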
Further to this, the specific monitoring obligation in the Glawischnig-Piesczek judgment was put in place because Facebook was continuing to host defamatory content, which is clearly unlawful. Though subject to criticism and much debate, the publication of anti-vaccination content mostly doesn’t break any laws — as objectionable and damaging as many may find it, those who share it are exercising their freedom of speech.
Unless the government were to tackle the principle of monitoring obligations head-on, the only alternative would be to leave responsibility for flagging content where it currently sits: in the hands of social media users. These are the very same users whom Labour politicians worry are being misled by anti-vax content, and who may therefore not be best placed to report it.
Social media companies have fought for years to define themselves as “platforms” with limited responsibility for the content they host, while the government has yet to take forward the proposals on tackling online harms that it published earlier this year. The tools to take action are there. Whether the political will exists to impose responsibilities on social media companies is far less clear.