Meta’s advertising policies are under scrutiny again as a watchdog group claims the company approved more than a dozen “highly inflammatory” ads that violated its rules. The ads targeted Indian audiences and contained disinformation, calls for violence, and conspiracy theories related to India’s upcoming elections.
According to a new report from Ekō, a nonprofit watchdog organization, the group submitted the ads itself as a “stress test” of Meta’s advertising systems. Ekō says the ads were modeled on actual hate speech and disinformation commonly found in India.
The group managed to get 14 of its 22 ads approved through Meta’s advertising tools, even though all of them violated the company’s policies. While the exact content of the ads was not disclosed, Ekō reported that they included calls for violent uprisings against Muslim minorities, disinformation built on communal or religious conspiracy theories, and incitement to violence through Hindu supremacist narratives. Ekō’s researchers removed the ads before they went live, ensuring they were never seen by Facebook users.
This isn’t the first time Ekō has exposed weaknesses in Meta’s ad approval process. The group previously got Meta to approve hate-filled ads targeting European users, though those ads also never ran.
In its latest findings, Ekō revealed that it used generative AI tools to create images for the ads. Despite Meta’s claims that it is developing systems to detect AI-generated content, the company flagged none of the ads as such.
Meta did not immediately respond to requests for comment. In its response to Ekō, however, the company highlighted its rules requiring political advertisers to disclose their use of AI and pointed to a blog post about its preparations for the Indian elections.
Source: Engadget