Facebook’s ‘supreme court’ reflects on five years of mixed results

PARIS: An oversight board set up by Facebook to review content moderation decisions on Thursday trumpeted improved transparency and respect for people’s rights in a review of its first five years of work, while acknowledging “frustrations” over its arm’s-length role.

Facebook (since renamed Meta) announced the Oversight Board, often referred to as the group’s “supreme court,” in 2018, when public trust in the tech giant was at a low point.

The Instagram and WhatsApp owner’s image had been tarnished by episodes such as the Cambridge Analytica data breach scandal and disinformation and misinformation surrounding key public votes such as Brexit and the 2016 US presidential election.

The supervisory board began work in 2020, staffed by prominent figures from academia, the media and civil society.

It reviews selected cases where people have appealed Meta’s moderation decisions, issuing binding rulings on whether the company was right to remove content or leave it in place.

It also issues non-binding recommendations on how lessons learned from these cases should be used to update the rules for the billions of users on Meta’s platforms: Facebook, Instagram and Threads.

Over the past five years, the board has ensured “more transparency, accountability, open exchange and respect for freedom of expression and other human rights on Meta’s platforms,” the report said.

The board added that Meta’s oversight model – unusual among major social networks – could be “a framework for other platforms to follow”.

The board is funded by Meta, which has committed to implementing its rulings on individual pieces of content.

But the company is free to disregard its broader recommendations on moderation policy.

“Over the past five years, we have had frustrations and moments when the hoped-for impact did not materialize,” the board wrote.

Systemic changes

Some outside observers of the tech giant are more critical.

“If you look at how content moderation has changed on Meta platforms since the creation of the board, it has gotten worse,” said Jan Penfrat of the Brussels-based campaign organization European Digital Rights (EDRi).

Today on Facebook or Instagram, “there is less moderation happening, all under the guise of protecting free speech,” he added.

Effective oversight of moderation for hundreds of millions of users “would have to be much bigger and much faster,” with “the power to actually make systemic changes to the way Meta’s platforms work,” Penfrat said.

The limits of the board’s influence were highlighted when CEO Mark Zuckerberg abolished Meta’s US fact-checking program in January.

The scheme had employed third-party fact-checkers, AFP among them, to debunk misinformation spread on the platform.

In April, the supervisory board said the decision to replace it with a system based on user-generated fact-checking had been made “rapidly”, and recommended that Meta investigate the effectiveness of the new setup more closely.

AI decisions loom

Looking ahead, the board “will broaden its focus to consider in more detail the responsible implementation of AI tools and products,” the report said.

Zuckerberg has talked about plans to integrate generative artificial intelligence more deeply into Meta’s products, calling it a potential palliative for Western societies’ loneliness epidemic.

But 2025 has also seen growing concern about the technology, including reports of people taking their own lives after prolonged conversations with AI chatbots.

Many such “recently emerging harms… mirror harms the Board has addressed in relation to social media”, the supervisory board said, adding that it would work towards “a way forward with a global, user rights-based perspective”.
