TikTok ups its child ad protections as the threat of EU sanctions looms
TikTok has agreed to better protect children who use its platform in a bid to head off possible sanctions from Brussels.
The European Consumer Organisation (BEUC) filed a complaint with the European Commission in 2021, alleging that the ByteDance-owned company failed to protect children from hidden advertising and inappropriate content.
Following more than a year of discussions with the commission, TikTok has now made a number of changes, including allowing users to more easily report ads that could encourage children to buy goods or services, and testing new label designs to make paid ads more visible on the platform.
While a TikTok spokesperson told City A.M. that it would “continue to look for how we can improve in order to provide the best possible Tiktok experience for our community”, BEUC Deputy Director General Ursula Pachl said in a statement that the impact of these feature changes “remains highly uncertain”.
However, the commission’s investigation is now closed.
Notably, this is the second time in a week that the social media giant has been forced to publicly address its policies.
A recent BuzzFeed News investigation found that China-based employees of ByteDance repeatedly accessed non-public data about US TikTok users.
The recordings seen by BuzzFeed News indicated that engineers in China had access to US data between September 2021 and January 2022.
In response to the list of examples and questions provided by BuzzFeed News to the platform, TikTok spokesperson Maureen Shanahan said: “We know we’re among the most scrutinized platforms from a security standpoint, and we aim to remove any doubt about the security of US user data.
That’s why we hire experts in their fields, continually work to validate our security standards, and bring in reputable, independent third parties to test our defenses.”
TikTok last week joined the likes of Meta and Twitter in signing up to the European Commission’s strengthened Code of Practice on Disinformation, exposing signatories to fines of up to six per cent of global turnover for non-compliance.
The new code will require tech firms to take a more rigorous approach to fake accounts and ‘deepfakes’ — images and videos that have been manipulated using software.