
Meta’s Oversight Board, an independent group set up to review Facebook, Instagram, and Threads content decisions, is not happy with the company’s recent changes to its hate speech policies. These changes, made public in January, came with little warning and skipped Meta’s usual process of consultation and transparency.
What’s the issue?
According to the Board, Meta acted too quickly and didn’t explain the new rules clearly. It also failed to assess how these changes might affect vulnerable communities, such as immigrants and LGBTQIA+ users, groups that have historically faced targeted harassment online.
The Board is now urging Meta to:
- Explain the new policies better.
- Study the impact on at-risk groups and share the results.
- Report updates to the Board every six months.
- Recommit to its human rights responsibilities under the UN Guiding Principles on Business and Human Rights.
- Reevaluate how it handles hate speech and misinformation outside the U.S., especially with its fact-checking programs.
What did Meta change?
Meta, led by CEO Mark Zuckerberg, announced a shift toward “more speech” — meaning fewer restrictions on what users can say. As part of that move, the company softened hate speech protections for some marginalized groups, which many experts and users criticized as risky.
This policy overhaul came just before Donald Trump’s second presidential term began, raising concerns about whether Meta was preparing to allow more political or controversial content in the name of free expression.
What’s the Oversight Board doing?
While the Board can’t directly control Meta’s big-picture policies, it can:
- Overturn individual content decisions (and Meta must comply).
- Make public recommendations (which Meta can choose to follow or ignore).
- Ask Meta to refer certain broad policy issues for review — a process that could help guide future rulemaking.
So far, the Board has made 17 suggestions, including:
- Testing and evaluating its new “community notes” feature.
- Clarifying what kinds of hateful ideologies are now allowed or banned.
- Improving how harassment violations are flagged and acted on.
Specific cases
In its latest round of decisions on 11 content cases, the Board:
- Backed Meta’s call to keep two videos of transgender women online but suggested removing the term “transgenderism” from its hate policies, as it’s often used in harmful or dehumanizing ways.
- Reversed Meta’s decision to keep up posts about anti-immigration riots in the UK, saying the company waited too long to remove content that violated rules on violence and incitement.
The bigger picture
The debate over content moderation is ongoing. Meta is trying to balance free speech with user safety, not an easy task for a platform with billions of users. Critics argue that pulling back on protections, especially during politically charged times, risks giving hate groups and bad actors more room to spread harmful content.
Meanwhile, the Oversight Board’s role remains crucial as one of the few checks on Meta’s massive influence over global online speech.
- The Oversight Board is funded by an independent trust set up by Meta, but it operates autonomously to ensure credibility.
- Meta’s policy shifts may also be influenced by increasing pressure from political groups and advertisers who want fewer content restrictions.
- Other tech companies, like X (formerly Twitter) and YouTube, have also been reevaluating their moderation rules, often citing freedom of expression while facing similar backlash.