Elon Musk's AI Model Still Allows for Harmful Deepfakes on Grok App, Despite Restrictions on X
A recent decision by Elon Musk to restrict his AI image generation reply bot, Grok, on his social media app X has produced mixed results. While the changes have significantly reduced the number of sexualized deepfakes being shared on X, users can still access the same features elsewhere, including on the standalone Grok app and website.
Prior to the changes, the Grok AI tool was producing 7,751 sexualized images per hour - a nearly 16% increase from just two days prior. The surge in such content alarmed many, including regulators and lawmakers, who began applying pressure on X.
In response to the backlash, Musk announced that users would need to pay for an X premium account to access Grok's image generation features. However, this change does not extend to the standalone Grok app or website.
On the Grok standalone platform, users can still request AI-generated images of people in revealing clothing, without needing a paid subscription. This is in stark contrast to the changes on X, where users are now required to pay for premium access to these features.
NBC News tested Grok and found that the tool continues to comply with requests for nonconsensual imagery on its standalone app and website. The lack of any meaningful change there has raised questions about the effectiveness of Musk's efforts to address these concerns.
Critics point to the inconsistency as a sign of inadequate oversight, particularly given the Take It Down Act - a law signed by Trump in 2025 that aims to crack down on nonconsensual AI-generated intimate images. While the law requires platforms to remove such content when victims request it, many argue that enforcement does not go far enough.
Several lawmakers have called on X to police its platform more aggressively, citing concerns over child exploitation and the potential for harm. Some U.S. attorneys general offices have also expressed interest in investigating Grok, but so far none have launched full-scale investigations.
As the debate surrounding AI-generated content continues, questions about accountability and oversight remain. While Musk's decision to restrict deepfakes on X is a step in the right direction, many argue that more needs to be done to prevent such harm - particularly given the lack of significant changes on other platforms.