
Manipulated videos, including ones related to politicians deemed “high-risk,” don’t have to be culled from Facebook feeds, according to Meta‘s Oversight Board — but they should at least be better labeled.
The Nov. 25 decision was spurred by a user who appealed for the removal of a viral video that appeared to show global demonstrations in favor of Philippine President Rodrigo Duterte. Despite the post's use of mislabeled, erroneous footage to suggest a widespread pro-Duterte movement, Meta did not remove it through its automated flagging process or subsequent human review.
While the Oversight Board agreed that the video should have been escalated further in the fact-checking process and labeled as particularly “high-risk” content, it still sided with Meta’s choice to keep the video online, because the post did not specifically violate the company’s political information guidelines, which prohibit misleading posts about voting locations, processes, and candidate eligibility.
The board encouraged the company to take concerted, viral misinformation campaigns seriously, writing that it is “imperative that Meta has robust processes to address viral misleading posts, including prioritizing identical or near-identical content for review, and applying all its relevant policies and related tools.” As in other recent decisions, the board recommended Meta apply a specific “High-Risk” label to the video and others like it, “because it contained a digitally altered, photorealistic video with a high risk of deceiving the public during a significant public event.”
The decision aligns with the tech giant’s shift away from stronger content moderation guidelines. The board has previously written in favor of social media companies using AI-powered, automated moderation, within reason, to better address an onslaught of misinformation, and has promoted more robust labeling of manipulated or AI-generated content as an additional guardrail. “Platforms should apply labels indicating to users when content is significantly altered and could mislead, while also dedicating sufficient resources to human review that supports this work,” the board wrote in a previous blog post.
Meanwhile, Meta has slimmed down its human fact-checking team in favor of a global community notes program.