Coronavirus: Facebook blames pandemic for fall in child abuse moderation
Facebook has admitted that its efforts to remove child abuse and self-harm images from its platform have been hampered by the coronavirus outbreak.
The tech giant said it took action against considerably less material on Facebook and Instagram in the second quarter because it had fewer content moderators working during the pandemic.
Facebook removed just 911,000 pieces of suicide and self-harm content in the second quarter, down from 1.7m in the first three months of the year.
The social media platform took action against 9.5m posts related to child nudity and sexual exploitation on Facebook in the second quarter, up from 8.6m in the first.
However, its action against child abuse images on Instagram fell from 1m to just 479,400.
“While our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” the company said.
“For example, we rely heavily on people to review suicide and self-injury and child exploitative content, and help improve the technology that proactively finds and removes identical or near-identical content that violates these policies.
“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.”
Facebook added that it had prioritised finding and removing the most harmful posts in each category.
The Silicon Valley firm has roughly 15,000 content moderators, who are employed through third-party contractors.
But the company has come under fire amid reports that thousands of moderators have developed mental health conditions as a result of the job.
Facebook also said it was updating its hate speech policy to ban posts depicting blackface or anti-Semitic tropes.