It's been reported that these "nudify" services use AI to remove clothing from images of real people, and that they make millions of dollars doing it. Now that this database has been exposed, the fallout is serious: the US National Center for Missing and Exploited Children has been brought in. I don't think anyone should be using AI to create explicit content without proper moderation and oversight.
It also shows how vulnerable online platforms can be when security is treated as an afterthought. Fowler, the security researcher who discovered the exposure, argues that companies need to do far more to protect users from these kinds of failures. We've all heard that CSAM is a problem online, but this is yet another example of AI being turned to abusive ends.
I think it's time for companies to take responsibility and build moderation that goes beyond relying on AI alone. Pop-up warnings aren't enough; platforms need real mechanisms, like automated screening backed by human review, that actually prevent misuse, as the sketch below illustrates. We need to do everything we can to protect users, especially vulnerable ones like kids.
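To make "real mechanisms" a bit more concrete: one common building block is screening uploads server-side against a blocklist of perceptual hashes of known-abusive images, with near-matches routed to a human reviewer instead of just flashing a warning. Below is a minimal sketch in Python, assuming Pillow is installed; the `average_hash` function, the `threshold` value, and the `KNOWN_BAD_HASHES` blocklist are all hypothetical stand-ins I've introduced for illustration, and real deployments would use a vetted hash-matching program rather than anything hand-rolled.

```python
from PIL import Image

# Hypothetical blocklist of 64-bit perceptual hashes of known-abusive
# images. In a real deployment this would come from a vetted
# hash-sharing program, never a hardcoded set.
KNOWN_BAD_HASHES: set[int] = set()


def average_hash(path: str) -> int:
    """Compute a simple 64-bit average hash (aHash) of an image."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")


def screen_upload(path: str, threshold: int = 5) -> str:
    """Gate an upload before it is stored or served: block exact
    blocklist matches, route near-matches to a human reviewer, and
    only accept images that clear both checks."""
    h = average_hash(path)
    for bad in KNOWN_BAD_HASHES:
        distance = hamming(h, bad)
        if distance == 0:
            return "blocked"       # exact known-bad match
        if distance <= threshold:
            return "human_review"  # close enough to warrant eyes-on
    return "accepted"
```

The specific hash doesn't matter much; the point is that the check runs on the server before content is ever accepted, somewhere a user can't just click past the way they can with a pop-up.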