Meta Files Lawsuit Against AI App Developer Over Misuse of Platform for “Nudify” Image Ads

Social media giant Meta has initiated legal action against the developer behind a controversial AI-based app known for generating fake, explicit images. The company filed a lawsuit in Hong Kong against Joy Timeline HK, the firm responsible for an app called Crush AI, which allegedly misused Meta’s advertising ecosystem to promote unethical services.

According to Meta, the app ran over 8,000 ads promoting its so-called “AI undresser” features during the first two weeks of 2025. The ads promoted tools reportedly designed to create non-consensual, sexually explicit images of real people using generative AI. Despite repeated enforcement actions by Meta, the firm behind the app continued to evade its ad review protocols.

The platform’s internal investigation revealed that the developers used multiple advertiser accounts and frequently rotated domain names to escape detection. Many of these accounts carried names such as “Eraser Anyone’s Clothes,” followed by varying numerical suffixes. At one point, the app even maintained a Facebook page promoting its offerings, in clear violation of Meta’s policies.

Meta acknowledged that it had removed many of the ads for breaching content guidelines, but the offending company continued to flood the platform with new campaigns. According to external research cited in reporting on the case, a significant portion of Crush AI’s web traffic—up to 90%—originated from Facebook and Instagram.

New Measures to Tackle AI-Driven Harm

In response to the growing misuse of generative AI, Meta has intensified its detection efforts, introducing advanced ad screening technologies. These tools can flag suspicious ads even when they contain no overt nudity, by identifying specific phrases, patterns, and emoji combinations used to evade detection.

The company also said it is adapting its existing strategies to track and dismantle coordinated ad networks that promote such AI nudify services. Since January 2025, Meta has taken down four organized ad networks running these types of campaigns.

As part of a broader initiative, Meta will also begin sharing information through the Tech Coalition’s Lantern program, which includes major digital players like Google, Snap, and others. Meta claims to have submitted over 3,800 unique URLs related to these services since March to aid in combating online exploitation.

Industry-Wide Problem

This issue isn’t isolated to Meta. Other platforms such as YouTube, Reddit, and X were also found inadvertently hosting or promoting links to similar AI-powered apps throughout 2024. Despite keyword bans and moderation policies, enforcement remains a major challenge because app developers continually shift tactics.

Support for Stronger Online Safety Laws

On the regulatory front, Meta reaffirmed its backing of legislative measures aimed at protecting minors and giving parents greater control over app usage. The company has previously endorsed the US “Take It Down” Act and is reportedly working with global policymakers to support stronger child safety frameworks.

With India being a massive digital market, Meta’s efforts to curb AI misuse on its platforms could set the tone for responsible AI use across social media. The case also underscores the urgent need for tighter digital governance to rein in the growing abuse of generative AI tools.