Regulator calls for post labelling on Facebook after fake Biden video 

The Oversight Board, an independent body responsible for reviewing Meta's content moderation, has upheld Meta's decision not to remove a fake video of US President Joe Biden and is calling for fake posts to be labelled in future instead.

The regulator said that the post complied with Meta's manipulated media policy. However, it also described the policy as "incoherent" and urged Meta to expand it ahead of a contentious election year.

The Oversight Board advocated for increased labelling of fake content on Facebook, especially in cases where the content does not violate current policies and therefore cannot be removed.

This approach, the Board argued, could reduce reliance on third-party fact-checkers and provide a more scalable means of enforcing manipulated media policies. It could also be a better way to inform users about the presence of fake or altered content.

The Board also expressed concern that users may not be aware when content is demoted or removed, and that there are no clear avenues for appealing such decisions.

In its inaugural year of accepting appeals in 2021, Meta’s Oversight Board reviewed over a million appeals related to posts removed from Facebook and Instagram, highlighting the volume of content moderation issues faced by the platform.

The video is made from altered existing footage of the US President, depicting him with his granddaughter in a misleading way that suggests inappropriate behavior.

Despite its deceptive nature, the video did not contravene Meta's manipulated media policy because it was not manipulated using artificial intelligence and did not depict the President saying something he hadn't said. Consequently, Meta chose not to remove it.

Michael McConnell, co-chair of the Oversight Board, pointed out that while the current policy prohibits altered videos showing individuals saying things they didn't say, it fails to address posts depicting individuals performing actions they didn't actually perform.

Audio deepfakes are generated using advanced AI tools capable of mimicking or manipulating voices to fabricate statements, and they are reportedly becoming more common. McConnell highlighted the policy's narrow focus on AI-altered videos, which exempts other forms of fake content, particularly fake audio, which he identified as a source of electoral disinformation.
