Exposed AI Database Reveals Alarming Misuse of Image Generator for Explicit Content

A serious data leak has revealed the dark side of generative AI technology, exposing how some users are exploiting it to create disturbing and potentially illegal content. Tens of thousands of explicit AI-generated images, including what appears to be child sexual abuse material (CSAM), were discovered in an unprotected database linked to a South Korean AI company, according to new research reviewed by WIRED.

The database, which contained more than 95,000 files and 45 GB of data, included AI-generated sexualized images of celebrities like Ariana Grande, the Kardashians, and Beyoncé, many appearing de-aged to look like children. The leak was discovered by security researcher Jeremiah Fowler, who immediately alerted the companies behind the database: AI-Nomis and its image-generation platform GenNomis.

While the database has since been taken offline, the exposure highlights the ease with which generative AI tools can be misused to create harmful, non-consensual, and illegal content. Fowler, who has analyzed countless data leaks in his career, called this one “terrifying,” particularly because of how easily accessible the material was.

“The most disturbing thing, obviously, was the child explicit images and seeing ones that were clearly celebrities reimagined as children,” Fowler said.

A Platform Built for “Unrestricted” Image Creation

Before it disappeared, GenNomis promoted itself as an AI platform that allowed users to create “uncensored” images and videos. The site featured multiple AI tools, including image and video generation, face swapping, and background removal, alongside a gallery of “NSFW” images. It also offered a marketplace where users could sell AI-generated albums.

Although its policies claimed to prohibit illegal content such as child pornography, the reality appeared starkly different. The platform’s branding emphasized a lack of restrictions, with an explicit “models” section and photorealistic sexualized imagery. There is no clear indication that GenNomis had effective moderation or filtering mechanisms in place to prevent the generation of illegal content.

Fowler found no evidence of user authentication or encryption protecting the database. The lack of safeguards allowed anyone with the URL to access sensitive content, including prompts and images involving sexual acts, incest, and minors. Some prompts featured phrases such as “tiny girl” and described sexual acts involving celebrities or family members.

“If I was able to see those images with nothing more than the URL, that shows me they’re not taking all the necessary steps to block that content,” Fowler noted.

No Comment, Sudden Shutdown

Neither GenNomis nor AI-Nomis responded to multiple inquiries from WIRED. However, shortly after WIRED made contact, both companies’ websites were shut down, with GenNomis now returning a 404 error.

Experts in online safety and image-based abuse say this incident is another stark example of how generative AI can be weaponized without adequate oversight.

“This shows—yet again—the disturbing extent to which there is a market for AI that enables such abusive images to be generated,” said Clare McGlynn, a UK law professor specializing in online abuse.

Henry Ajder, a deepfake expert and founder of Latent Space Advisory, said that even if illegal content wasn’t officially allowed, the site’s “unrestricted” branding clearly targeted users interested in intimate or explicit imagery. Ajder expressed surprise that a South Korean company would host such tools, given the country’s recent crackdown on deepfake abuse.

Growing Threat of AI-Generated CSAM

The exposure also underscores a growing global crisis: the rapid rise of AI-generated CSAM. According to the Internet Watch Foundation (IWF), webpages containing AI-generated CSAM have more than quadrupled since 2023, and the sophistication of these images is increasing.

“It’s currently just too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed,” said Derek Ray-Hill, interim CEO of the IWF.

While the GenNomis database did not appear to include usernames or login credentials, the scale and content of the files illustrate a broader issue: technology is advancing faster than regulation or ethical guardrails. And without effective oversight, the potential for misuse is staggering.

“From a legal standpoint, we all know that child explicit images are illegal, but that didn’t stop the technology from being able to generate those images,” Fowler concluded.

As generative AI becomes more powerful and accessible, experts are urging lawmakers, tech platforms, and infrastructure providers to take urgent action to prevent the spread of abusive content—and to hold those enabling it accountable.
