
Meta Discontinues Facebook Fact-Checking Program: Key Takeaways

The change marks the end of a yearslong shift toward heavier content moderation.

Meta has discontinued its fact-checking program and made several other significant changes.

Here’s what to know.

‘Perceived Political Bias’

Meta implemented the fact-checking program in 2016 following Donald Trump’s victory in that year’s presidential election.

In the aftermath of the election, CEO Mark Zuckerberg said he would work with “respected fact-checking organizations” to combat misinformation on Facebook. Shortly after, the company officially launched the fact-checking initiative, saying it believed that “offering additional context can aid individuals in determining what to trust and share.”

The organizations involved in the initiative, such as Snopes, could review posts. If they determined a post contained false information, it was either flagged with an accompanying fact-check note or removed altogether.

“After Trump’s election in 2016, the mainstream media incessantly reported on misinformation as a threat to democracy,” Zuckerberg remarked in a video on January 7. “We endeavored, in good faith, to address these concerns without becoming arbiters of truth, but the fact-checkers have simply proven too politically biased, destroying more trust than they created, particularly in the U.S.”

Joel Kaplan, Meta’s chief global affairs officer, explained in a statement that the aim of the program was to have independent experts give users more information about viral posts so they could judge what they read for themselves.

“That’s not how things unfolded, particularly in the U.S.,” he noted. “Experts, like everyone else, possess their own biases and viewpoints. This was evident in the selections made regarding what to fact-check and how. Over time, we found too much content being fact-checked that should have been considered legitimate political expression and debate. Our system then imposed real consequences through intrusive labels and decreased exposure. A program designed to educate frequently became a tool for censorship.”

Transitioning to an X Model

In recent years, Meta has been emulating various features of X, previously known as Twitter. Meta’s Threads, initially launched as a video messaging app, was later reworked into an X-style platform for concise thoughts. Zuckerberg also introduced a paid verification service, Meta Verified, following Elon Musk’s rollout of Twitter Blue.

Meta’s latest move is to replace the fact-checking program with an X-style, user-driven community notes feature.

“We’re eliminating fact-checkers and substituting them with community notes, akin to X,” Zuckerberg stated.

Kaplan asserted that X’s model has proven successful.

“They empower their community to discern when posts may be misleading and require additional context, allowing individuals from diverse perspectives to determine what context is beneficial for others to view,” he explained. “We believe this approach could better accomplish our original goal of informing users about what they encounter—and is less susceptible to bias.”

Musk expressed his support for the development. “This is fantastic,” he stated on X.

Requesting Notes

Community Notes on X relies on contributors who write and rate notes on posts they believe are misleading.

To be attached to a post, a proposed note must gain consensus from contributors “who may have previously disagreed in their ratings,” as stated by X. “This ensures that one-sided ratings are minimized.”

Even some of Musk’s posts have attracted Community Notes.

X explicitly states that it wants ordinary people as contributors, rather than relying solely on professionals.

Zuckerberg said in 2016, before the fact-checking program began, that Facebook had traditionally relied on users to help the company identify what was genuine and what was not, but that the problem had grown so complex that partnering with fact-checking organizations was necessary.

Executives are now turning back to users, emphasizing that the notes will be written and rated by contributors. Meta itself will not decide which notes are displayed, according to Kaplan; notes that achieve consensus “among individuals with varying perspectives” will be published.

Mixed Reactions

Some criticized the new direction.

“Meta’s decision to abolish fact-checking will exacerbate the spread of misinformation across our digital platforms, further skewing our reality and undermining our democracy,” remarked Sen. Michael Bennet (D-Colo.).

Angie Drobnic Holan, director of the International Fact-Checking Network, whose members were involved in Meta’s initiative, said the change “will harm social media users seeking accurate, reliable information about their daily interactions with friends and families.”

Conversely, Musk wasn’t the only one to laud the changes.

“The First Amendment protects social media companies’ editorial choices regarding the content on their platforms. Still, it’s commendable when platforms voluntarily attempt to minimize bias and unpredictability in their content hosting decisions—especially when they assure users of a free speech culture like Meta does,” stated Ari Cohn, lead counsel for tech policy for the free speech group FIRE, in an email to The Epoch Times.

“A remarkable day for free speech!” Rep. Randy Weber (R-Texas) wrote on X. “It seems that Meta is finally taking a page from Elon Musk’s playbook and empowering Americans to make their own decisions.”
