Tech giants urged to pre-screen content to stop child sexual abuse
Tech giants should be forced to pre-screen all content uploaded to their platforms in a bid to stem the “explosion” of online child sexual abuse, an inquiry has found.
In a damning report published today, the Independent Inquiry into Child Sexual Abuse (IICSA) said tech giants had failed to do everything they could to prevent access to images of abuse.
The inquiry said the government should require social media companies to pre-screen all material against a database of known abuse imagery before it is uploaded, to help law enforcement. Current software used by tech firms only scans content once it has been posted.
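In practice, screening of this kind generally works by computing a digital fingerprint of each upload and checking it against a list of fingerprints of known illegal images, such as those maintained by the Internet Watch Foundation. The following is a minimal illustrative sketch in Python; the function names and the hash entry are hypothetical, and real systems use perceptual hashing (for example, PhotoDNA) rather than a plain cryptographic hash so that re-encoded copies still match:

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images, of the
# kind a body such as the Internet Watch Foundation might supply. The
# entry below is a made-up placeholder, not a real hash.
KNOWN_ABUSE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def screen_before_upload(file_bytes: bytes) -> bool:
    """Return True if the file may be published, or False if it matches
    the database and should be blocked and reported."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest not in KNOWN_ABUSE_HASHES

# Pre-screening runs this check before content is stored or shown to
# other users; the "reactive" approach the report criticises runs the
# same check only after the material has already been posted.
if __name__ == "__main__":
    upload = b"example image bytes"
    print("allow" if screen_before_upload(upload) else "block and report")
```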
It also called for tougher age verification controls to ensure the minimum age limit of 13 was properly enforced.
The inquiry, led by Professor Alexis Jay, launched a scathing attack on Silicon Valley tech giants for their failure to crack down on sexual abuse imagery.
The report accused social media firms of taking a “reactive” approach to the issue, adding that they were “seemingly motivated by the desire to avoid reputational damage caused by adverse media reporting”.
“Transparency reports published by the internet companies provide only part of the picture and there is a lack of guidance and regulation setting out the information that must be provided,” it said.
Platforms such as WhatsApp, iMessage and FaceTime were also singled out for their use of end-to-end encryption, which the report said hampered police efforts.
Andy Burrows, head of child safety online policy at the NSPCC, said: “This report is a damning indictment of Big Tech’s failure to take seriously their duty to protect young people from child abuse, which has been facilitated on their platforms on a massive scale.”
The inquiry was launched in the wake of a string of high-profile child sexual abuse scandals amid concerns some organisations were failing to protect children.
The NSPCC has estimated that roughly half a million men in the UK may have viewed indecent images of children, while the inquiry found that law enforcement was “struggling to keep pace” with an increase in cases.
It comes as the government prepares to impose a statutory duty of care on tech firms to ensure they protect their users from harmful content.
Ofcom is set to be appointed as the new internet regulator, tasked with deciding whether firms have breached that duty and whether to respond with fines or legal action.
David Miles, Facebook’s head of safety for Europe, the Middle East and Africa, said: “We are industry leaders in combating this grievous harm and have made huge investments in sophisticated solutions, including photo and video matching technology so we can remove harmful content as quickly as possible.
“As this is a global, industry-wide issue, we’ll continue to develop new technologies and work alongside law enforcement and specialist experts in child protection to keep children safe.”
Claire Lilley, principal policy specialist at Google Trust and Safety, said: “We’ve developed and made freely available cutting-edge technologies to detect, remove and report this material, and long invested in partnerships and prevention.
“As the threat continues to evolve, we’ll keep working closely with governments, industry and partners like the Internet Watch Foundation on new ways to tackle this evil crime.”
Apple has been contacted for comment.