A CBS News investigation into advertising across Meta’s social media platforms found that the company was serving users ads promoting “nudify” apps – AI-powered tools that create sexually explicit imagery of real people from a reference image. The news outlet’s analysis found hundreds of these ads in Meta’s ad library, running on Instagram, Facebook, Threads (Meta’s Twitter-esque alternative), and Facebook Messenger. The ads were also part of the Meta Audience Network, which places advertisers’ content on partner websites outside of these platforms.
CBS News found that the ads were targeting men between the ages of 18 and 65 in the United States, the United Kingdom, and the European Union. Some led to websites offering “advanced” features at higher prices, like simulating sex acts with the newly “nudified” subject. Others redirected to Apple App Store pages to download the software.
When notified by CBS News of the ads, Meta immediately removed them, deleted their associated pages, and blocked their URLs. A spokesperson from the company affirmed in a statement to the news outlet that they “have strict rules against non-consensual intimate imagery.” The spokesperson also explained that the company is in a constant battle against the people generating this kind of content as they “constantly evolve their tactics to evade detection.” Indeed, CBS found more offending “nudify” ads available on Instagram after the company removed the ones initially flagged.
The “nudify” problem is the latest in a series of issues stemming from Meta’s lax control over advertisers on its platforms. Pornographic ads had already been flagged last year, and a Wall Street Journal exposé in May revealed that scam ads on Meta platforms were the source of half of all complaints received by JPMorgan Chase for fraudulent Zelle payments over the course of a year. The report also found that Facebook and Instagram staffers were instructed to allow up to 32 “strikes” against offending accounts before banning them.
The news of Meta’s lagging enforcement with advertisers comes as the company is facing increasing scrutiny over its moderation of content posted by its users. The company overhauled its content moderation system earlier this year with a suite of changes, reducing its automated processes for flagging content, narrowing its definition of hate speech, and replacing its third-party fact-checkers with a community notes system similar to the one used on X.
Meta’s own Q1 2025 Integrity Report found that bullying and violent content on its platforms increased by millions of posts since the changes took effect.