Social media firms to crack down on Covid-19 vaccine disinformation
Major social media platforms have agreed a package of measures with the government in an effort to limit the spread of disinformation about any Covid-19 vaccine.
Following a meeting with health secretary Matt Hancock and digital secretary Oliver Dowden, Facebook, Twitter and Google endorsed the principle that no company should profit from false or misleading information relating to a vaccine.
They also pledged to respond more quickly to flagged content and to work with health authorities to promote scientifically accurate messages.
“Covid disinformation is dangerous and could cost lives,” Dowden said in a statement.
“While social media companies are taking steps to stop it spreading on their platforms, there is much more that can be done.”
Tech giants have come under increased scrutiny in recent months amid a surge in disinformation related to coronavirus, including viral hoaxes about how to treat the virus and baseless claims that the pandemic was linked to the rollout of 5G.
But the government is now concerned that inaccurate posts on social media could derail efforts to carry out mass immunisation once a vaccine is ready.
During the meeting, which also included fact checkers, academics and data experts, ministers raised concerns about the tech firms’ slow response time to false information.
The companies will also join new policy forums in the coming months to help improve responses to harmful material and prepare for future threats.
While the measures will come as a boost to anti-disinformation efforts, analysts have warned that tech firms face a huge challenge due to the sheer volume of posts that fall foul of the rules.
Iain Brown, head of data science at SAS UK and Ireland, said the tech giants may need to turn to AI-based systems alongside human moderators to keep pace with the volume of content.
“The implementation of these technologies in a time of crisis sets the precedent for the future of social media censoring, as user numbers continue to boom worldwide,” he said.
“This is a test for both AI and the workforce, as they work side by side to strike the balance between taking down problematic material and stifling truth. Ultimately, social media giants have a responsibility to use technology to protect the public during this crisis, by making changes which encourage fact-checking and truth.”