An AI image generator startup left a huge database of explicit content unsecured on the internet, exposing more than 1 million images and videos created with its systems. The "overwhelming majority" of the images depicted nudity and adult content, according to Jeremiah Fowler, the security researcher who uncovered the exposed trove of data.
The database, in which nearly all of the files were explicit, was linked to MagicEdit and DreamPal, two AI apps that allow users to generate nude images of adults by swapping their faces onto other, naked bodies. A third app, BoostInsider, owned by the startup DreamX, also appears to have been involved in creating or distributing child sexual abuse material.
Fowler discovered the database after realizing it was publicly accessible online, and he took screenshots to verify its contents. He reported the incident to the US National Center for Missing and Exploited Children and alerted other tech companies to the security flaw.
A DreamX spokesperson says the company had implemented multiple safeguards before receiving any external inquiry, including prompt regulation, input filtering, and mandatory review of all user prompts through OpenAI's Moderation API. Some experts argue, however, that these measures are not enough to prevent users from creating explicit content.
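For context on what that last safeguard entails: routing every user prompt through OpenAI's Moderation endpoint is a fairly standard, lightweight check. The sketch below shows roughly how such screening might work in Python; the function name, blocking rules, and category choices are illustrative assumptions, not DreamX's actual code.

```python
# A minimal sketch of prompt screening via OpenAI's Moderation endpoint.
# The gating logic here is an assumption for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes moderation, False if it should be blocked."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    result = response.results[0]
    # Block anything the endpoint flags; an image app would also plausibly
    # hard-block specific categories (e.g., sexual content involving minors)
    # regardless of the overall flag.
    if result.flagged or result.categories.sexual_minors:
        return False
    return True


if __name__ == "__main__":
    print(screen_prompt("a watercolor painting of a lighthouse"))
```

As critics quoted in this story point out, a check like this screens only the text of a prompt; it does nothing about what an image model actually renders, which is one reason experts call prompt filtering alone inadequate.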
"This is the continuation of an existing problem when it comes to startups feeling apathetic towards trust and safety and the protection of children," says Adam Dodge, founder of EndTAB. "The underlying drive is the sexualization and control of women's and girls' bodies."
The incident highlights concerns about AI-generated explicit content, particularly when it is used for malicious purposes such as blackmail or harassment. As AI image generators grow more popular, experts warn that generic pop-up warnings are not enough to prevent users from creating explicit content.
"The way these apps are designed is not adequate," Dodge says. "They have to have some form of moderation that even goes beyond AI."