Opinion: Meta Has Eliminated Its Fact-Checking Program—Are We Headed Toward Freedom of Speech Anarchy?
Understanding the reasoning behind Meta’s decision to eliminate its fact-checking program could offer valuable insight into the future of Instagram, Threads, and Facebook, particularly with X as a reference point and Zuckerberg’s strategic direction in mind
Last week, Mark Zuckerberg announced the end of Meta’s fact-checking program for Facebook, Instagram, and Threads. The news quickly went viral, and people started to wonder about the reasons and consequences of this major decision.
The move raised serious concerns about how users should respond and how misinformation could spiral globally.
Many users started deleting their accounts on Meta’s platforms and fleeing to other networks like Bluesky (which will soon launch its own Instagram-style app) or Mastodon, fearing we are headed toward a chaotic digital environment.
Meta is mirroring many of the strategies of X, formerly Twitter. Elon Musk dissolved the platform’s Trust and Safety Council in December 2022, just a few months after he purchased the social network. In its place, he developed the Community Notes program to rely on users as fact-checkers, and Zuckerberg will do the same.
But let’s break it down first:
Why Is Meta Doing This?
Well, there are many theories besides the official one: “freedom of expression.” Meta claimed that the systems for managing content were not working.
“As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users, and too often getting in the way of the free expression we set out to enable,” states the official announcement.
Yes, Meta’s Moderation System Was Already Broken
While many of us—especially journalists—are anxious about Zuckerberg’s “freedom of speech” decision, which could enable more accounts with malicious intent to spread fake news, deepfakes, and manipulative content, Meta’s moderation service was already a significant issue.
Meta has struggled to protect users from harmful content, and many users and accounts have been unfairly banned on social media platforms. We also experienced this annoying situation at Wizcase.
The company’s safety and moderation systems were far from perfect. But was scrapping them entirely the best option? Some of us think it was not the best solution, or even a solution at all.
However, Zuckerberg still decided to blame it on third parties.
Are Fact-Checkers The Problem?
Zuckerberg blamed third-party services as the main cause of poor content moderation and censorship. He claimed that “fact-checkers have been too politically biased” and have “destroyed more trust than they created,” but moderators and fact-checkers disagree.
I reached out to a former content moderator who worked for three years at a third-party moderation company serving Twitter (now X) and who preferred to remain anonymous.
My source said the accusation made no sense, since moderators always followed the guidelines provided by the client, in his case Twitter. “For me, it is hard to believe that fact-checkers were ‘too politically biased,’” he explained via email, insisting that content moderators simply follow the rules. “We did not have the authority to take action on high followers accounts or political or famous people accounts.”
Neil Brown, president of the Poynter Institute, one of Meta’s fact-checking partners, told the New York Times that Zuckerberg’s allegation was untrue as well. According to Brown, his team used Meta’s tools to submit fact-checks, and it was Meta that decided how to respond to them.
Lori Robertson, managing editor of FactCheck.org, another of Meta’s partners since 2016, said her organization has always maintained transparent standards and fact-checks content from both parties, Democrats and Republicans.
“Our work isn’t about censorship,” she wrote. “We provide accurate information to help social media users as they navigate their news feeds. We did not, and could not, remove content. Any decisions to do that were Meta’s.”
A Business Decision
Many consider Zuckerberg’s decision a deeply personal one: an opportunity to shed the responsibility and consequences of maintaining a safe environment on Meta’s platforms, please incoming president Donald Trump (who had previously threatened Zuckerberg with life imprisonment), and avoid jail, all while, of course, keeping his business afloat.
There are even theories saying that Zuckerberg’s move was a way to guarantee that TikTok gets banned and users favor Instagram Reels, giving Meta more power in the social media sphere.
“I think that people often overlook that social media platforms are companies, and as such, they make decisions for their benefit,” said the former content moderator I interviewed. And many see it this way too.
After seeing Zuckerberg’s move and Musk’s behavior, Mastodon CEO Eugen Rochko announced he will transfer the platform’s ownership and assets to a European non-profit organization in an attempt to keep it “free of the control of a single wealthy individual.”
We Are Headed Toward An Era Of “Masculine Energy,” Free Labor, And Disinformation
All signs point to 2025 being even tougher when it comes to distinguishing truth, facts, and real images from AI-generated content. To make matters worse, the largest social media platforms in the world are not only getting rid of fact-checkers and content moderators but also sharing a strong political stance that will directly shape the dynamics of their platforms.
More Aggressive And Political Content
In the process of aligning with Trump’s principles, Zuckerberg quickly began publicly sharing beliefs similar to those of Trump’s MAGA movement.
On Joe Rogan’s podcast, Meta’s CEO said the company could use more “masculine energy,” adding: “I think having a culture that, like, celebrates the aggression a bit more has its own merits that are really positive.”
Zuckerberg’s platforms are changing fast too. Adam Mosseri, head of Instagram and Threads, confirmed a few days ago that we will see more political content and ads on Threads.
“Starting this week in the US, and rolling out over the coming week to the rest of the world, we’re going to be adding political content to recommendations on Threads and adjusting the political content control to three options: less, standard (the default), and more.”
Brace yourselves for more intense political debates on social media this year!
More Unpaid Responsibilities For Users
The Community Notes system seems to be a way for large social media companies to deflect responsibility for the absurd stuff people post online. Users who join the program (almost anyone with more than a few months on the platform can participate) add context to potentially misleading posts to help others better understand them and their intent.
However, X’s program, at least, is a volunteer system: those who participate and interact receive no financial compensation for their work. Meta hasn’t disclosed further details of its program, but it doesn’t seem like it will be any different in that sense.
Is social media a scam? Users spend hours creating videos, stories, and comments, keeping algorithms active and fueling AI training. Now they’re also adding context and helping platforms build a healthier environment, for free, at least for the majority who don’t benefit financially from the platforms’ business model.
This solution is also just a band-aid covering a gaping wound. In the end, it’s “the algorithm” that makes the decisions, and organizations like the Poynter Institute’s MediaWise have said that, in many cases, fewer than 10% of the notes are actually posted.
A Recipe For More Misinformation
I have personally witnessed my X feed change drastically over the past few years. Friends and people I used to follow have left the platform, giving way to absurd “men vs. women” debates and staged viral videos designed to provoke anger and generate more of those furious shares.
Meanwhile, political content has increasingly dominated, almost pulling me into every controversy and polarizing idea.
Without fact-checkers and content moderators, polarized opinions, deepfakes, fake news, and harmful content on Meta’s platforms are only expected to surge.
The Community Notes strategy will not be sufficient to keep up with content moderation, and ultimately, decisions about what happens on the platform will always rely on the “algorithm,” adjusted to the platform owner’s preferences.
The future remains uncertain, and even the outcomes of X’s changes are still unfolding, but one thing is clear: as social media users, we are all participants in a profound and far-reaching global social experiment.