World News

Meta to replace ‘biased’ fact checkers with user moderation

Mark Zuckerberg, seen in September 2024 (Getty Images)

Meta is abandoning the use of independent fact-checkers on Facebook and Instagram, replacing them with X-style "community notes" in which comments on the accuracy of posts are left to users.

In a video posted alongside a blog post from the company on Tuesday, CEO Mark Zuckerberg said third-party moderators were "too politically biased" and that it was "time to get back to our roots around free expression".

Joel Kaplan, who is replacing Sir Nick Clegg as Meta’s head of global affairs, wrote that the company’s reliance on independent moderators was “well-intentioned” but had too often resulted in users being censored.

However, campaigners against online hate speech have responded with dismay – and suggested the change is really motivated by a desire to get on the right side of Donald Trump.

“Zuckerberg’s announcement is a blatant attempt to cosy up to the incoming Trump administration – with harmful implications,” said Ava Lee, from Global Witness, a campaign group that describes itself as seeking to hold big tech to account.

“Claiming to avoid ‘censorship’ is a political move to avoid taking responsibility for the hate and disinformation that platforms encourage and facilitate,” she added.

Imitating X

Meta’s current fact-checking system, introduced in 2016, refers posts that appear to be false or misleading to independent organizations to assess their accuracy.

Posts flagged as inaccurate may have labels attached to them offering viewers more information, and may be moved lower down users’ feeds.

That system will now be replaced, in the US first, with community notes.

Meta says it has “no immediate plans” to phase out its third-party fact-checkers in the UK or the EU.

The new community notes system is copied from X, which introduced it after being bought and rebranded by Elon Musk.

It involves people of differing viewpoints agreeing on notes that add context or clarification to contentious posts.

“This is cool,” Musk posted in response to Meta’s adoption of a similar system.

However, the UK’s Molly Rose Foundation described the announcement as a “major concern” for online safety.

“We are urgently seeking clarity on the scope of these measures, including whether this will apply to suicide, self-harm and depressive content,” said its chairman, Ian Russell.

“These moves could have dire consequences for many children and young adults.”

Meta told the BBC that content breaking its rules on suicide and self-harm would continue to be treated as “high severity” violations, and therefore remain subject to automated moderation systems.

Fact-checking organization Full Fact – which participates in Facebook’s program for verifying posts in Europe – said it “refutes allegations of bias” made against its work.

The organization’s chief executive, Chris Morris, described the change as a “disappointing and retrograde step that risks a chilling effect around the world.”

‘Radical swing’

Meta’s blog post said it would also “undo the mission creep” of its rules and policies – highlighting the removal of restrictions on topics including “immigration, gender and gender identity” – saying these had become the subject of frequent political discourse and debate.

“It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” it said.

The changes come as tech firms and their executives prepare for the inauguration of President-elect Donald Trump on January 20.

Trump has previously been a vocal critic of Meta and its approach to content moderation, branding Facebook “an enemy of the people” in March 2024.

But relations between the two men have since improved – Mr Zuckerberg dined at Trump’s Florida estate, Mar-a-Lago, in November. Meta has also donated $1m to Trump’s inauguration fund.

“The recent elections also feel like a cultural tipping point towards once again prioritizing speech,” Mr Zuckerberg said in Tuesday’s video.

Mr Kaplan’s replacement of Sir Nick Clegg – a former Liberal Democrat deputy prime minister – as the company’s president of global affairs has also been interpreted as a sign of a shift in the company’s approach to moderation and changing political priorities.

Kate Klonick, a law professor at St John’s University Law School, said the changes reflect a trend “that has seemed inevitable over the last few years, especially since Musk’s takeover of X”.

“The private governance of speech on these platforms has increasingly become a point of politics,” she told BBC News.

Where companies previously faced pressure to build trust and safety mechanisms to deal with issues such as harassment, hate speech and disinformation, a “radical swing back in the opposite direction” is now under way, she added.

