Oversight Board Slams Meta for Failing to Tackle AI Deepfake Celebrity Scams

Meta is under fire once again—this time from its own Oversight Board—for not doing enough to stop the spread of AI-generated deepfake scams featuring celebrities on its platforms.

In a new ruling, the board criticized Meta for allowing scam content that uses deepfakes of public figures to flourish, accusing the company of failing to enforce its own policies. “Meta is likely allowing significant amounts of scam content on its platforms to avoid potentially overenforcing a small subset of genuine celebrity endorsements,” the decision states. According to the board, Meta’s large-scale content reviewers aren’t given the tools or authority to effectively block content that impersonates celebrities to defraud users.

The ruling stemmed from a case involving an ad for a casino-style game called Plinko, which used an AI-manipulated video of retired Brazilian soccer star Ronaldo Nazário. Despite being flagged as a scam more than 50 times, the ad stayed up before Meta eventually removed it, and the associated Facebook post remained live until the Oversight Board intervened. By then, the post had been viewed more than 600,000 times.

The board said the case underscores major weaknesses in how Meta handles scam reports involving celebrities. Meta reportedly told the board that it only escalates such cases when there is confirmation that the person featured did not actually endorse the product, and that regional differences in how moderators interpret “fake personas” lead to inconsistent enforcement.

As a result, the board concluded, a “significant” volume of scam content is slipping through the cracks.

The board issued a key recommendation: Meta should update its internal policies, give reviewers more authority to act against deepfake scams, and train them to recognize signs of AI-generated content. In response, Meta disputed many of the board's claims. A spokesperson said, “Many of the Board’s claims are simply inaccurate,” citing recent efforts like facial recognition trials aimed at detecting “celeb-bait” scams.

“Scams have become more sophisticated and widespread, driven by cross-border criminal networks,” Meta’s statement added. “We’re testing new technology, enforcing aggressively against scams, and expanding tools to help users protect themselves. We’ll respond formally to the Board’s recommendation within 60 days, as required.”

AI deepfake scams have become a growing issue for Meta as generative technology becomes cheaper and more accessible. Earlier this year, reports revealed numerous Facebook ads featuring deepfakes of Elon Musk and Fox News anchors promoting fraudulent supplements. Some scam pages ran hundreds of variations of these ads with little pushback. Meta did take down some of these pages after press coverage, but similar scams remain active.

Even celebrities themselves have begun speaking out. Actress Jamie Lee Curtis recently criticized Meta for allowing a deepfake ad featuring her to run on Facebook. Meta only removed the ad after her public complaint.

The Oversight Board found that the Plinko scam was part of a larger trend, noting thousands of ads for the app in Meta’s Ad Library. Several featured deepfakes of other public figures, including Cristiano Ronaldo and even Meta CEO Mark Zuckerberg.

Concerns about scams on Meta’s platforms aren’t limited to the Oversight Board. The Wall Street Journal reported that Meta accounted for nearly half of all Zelle-related scams reported to JPMorgan Chase between mid-2023 and mid-2024. Regulators in the UK and Australia have observed similar trends. According to the report, Meta has been reluctant to increase ad-buying friction or ban repeat scam advertisers.
