TikTok expands hate speech policy as watchdog launches crackdown
TikTok has vowed to do more to remove harmful content as the media watchdog launches a crackdown on video-sharing platforms.
In a blog post published today, the viral video app said it was expanding its policy to take into account coded language and symbols used to spread hateful ideologies.
TikTok already removes content related to neo-Nazism and white supremacy, but will now also ban related ideologies such as white nationalism, white genocide theory, Identitarianism and male supremacy.
The platform also said it would do more to crack down on antisemitism.
While it already blocks Holocaust denial content, it vowed to take further action to remove misinformation and hurtful stereotypes about Jewish, Muslim and other communities. This includes misinformation about notable Jewish individuals and families.
TikTok, which is wildly popular among teenagers, has already faced accusations of being a hotbed for antisemitic content.
In July a BBC investigation found the platform’s algorithm had promoted a collection of videos that used a “sickening” antisemitic song and mocked Nazi death camps. The videos racked up 6.5m views before being removed by TikTok.
“TikTok has a large, and growing audience and an equally big responsibility that those using its platform not be served up hate materials,” said Danny Stone, chief executive of the Antisemitism Policy Trust.
“We are therefore pleased that the company is seeking to deepen its understanding and broaden its policies against antisemitism and other forms of racism and welcome the changes being announced today.”
Other additions to TikTok’s hate policy unveiled today include a ban on anti-LGBT material and content that promotes conversion therapy.
The move comes as media watchdog Ofcom unveiled its requirements for video-sharing platforms as it prepares to tighten regulation in the sector.
Under new rules coming into force next month, video apps will be required to take appropriate measures to protect children from “content that might impair their physical, mental or moral development”.
In addition, they must protect all users from violent and hateful content, terrorist material, child sexual exploitation and abuse, as well as racism and xenophobia.
Ofcom said it will set out guidance on appropriate measures next summer, and in the meantime will prioritise the most serious breaches for potential fines.