Zuckerberg’s Meta drops fact-checking filters in return to ‘free speech’
Meta, the parent company of Facebook and Instagram, announced that it will end its third-party fact-checking program, replacing it with a community-driven approach akin to X’s ‘community notes’.
The decision comes as chief executive Mark Zuckerberg seeks to repair ties with newly elected President Donald Trump, with the company also recently pledging $1m to Trump’s inauguration fund.
In a statement released on Tuesday, Zuckerberg framed the change as a return to Meta’s roots of promoting free speech.
“It’s time to go back to our roots about free expression on Facebook and Instagram”, he said.
He acknowledged, however, that the changes would involve trade-offs, including the potential for more harmful content to spread.
Free speech advocacy
Meta’s decision marks a dramatic departure from its previous approach to combating misinformation, which relied on partnerships with third-party fact-checking organisations.
The shift is widely seen as an effort to align with Trump, whose administration has championed free speech and criticised social media censorship.
“The recent elections also feel like a cultural tipping point towards once again prioritising speech”, Zuckerberg said in the video.
He also highlighted political bias in the fact-checking program, stating it had “destroyed more trust than it created”.
Joel Kaplan, who recently succeeded Nick Clegg as Meta’s head of global affairs, further acknowledged that the change comes at a pivotal time, saying, “We’ve got a new administration and a new president who are big defenders of free expression.”
Meta’s partnership with fact-checkers was “well-intentioned at the outset but there’s just been too much political bias in what they choose to fact check and how”, he told Fox News.
Zuckerberg also emphasised Meta’s intention to work with Trump to push back against censorship laws around the world that target US tech companies.
He cited Europe’s regulatory environment as an example of the negative effects of censorship, saying that the “increasing number of laws make it difficult to build anything innovative there”.
What is changing?
The fact-checking program will be replaced by a new ‘community notes’ system, which will allow users to flag and annotate misleading or inaccurate content.
The system, which will be similar to the approach used by X, will roll out in the US first, expanding globally in the coming months.
Meta also plans to simplify its content moderation policies, focusing on high-severity and illegal violations rather than implementing broad filters.
Zuckerberg said the previous system was “too sensitive”, which led to “too many mistakes and too much censorship”.
Content moderation teams will also relocate from California to Texas, in an effort to shift the company’s operational strategy.
He also said AI would play a larger role in reducing moderation mistakes while ensuring harmful content is handled responsibly.
Censorship criticisms
Critics of the move have suggested the changes could lead to an increase in harmful content on Meta platforms.
The Real Facebook Oversight Board, a watchdog group, said that the policy changes mark a “retreat from any sane and safe approach to content moderation”.
Campaigners have also raised concerns that the changes could open the door to hate speech and misinformation.
Meta first ramped up its fact-checking efforts after the 2016 US election, which saw the company face criticism for failing to address the spread of fake news.
Despite the backlash, Zuckerberg maintained that the move aligns with Meta’s long-term vision of free expression. “I started building social media to give people a voice,” he said.