Elon Musk's X social media platform has announced measures to address its growing problem with sexual deepfakes, following weeks of government investigations and public backlash. However, the changes have been rolled out unevenly, leaving many questioning whether they truly tackle the issue.
In a recent post, X clarified its rules for AI image generation and editing, introducing restrictions that depend on how users interact with the platform. When using the @Grok account or the Grok tab directly within X, users are now restricted from editing images of people to show them in revealing clothing such as bikinis. The restriction applies to all users, regardless of premium plan status.
However, this seemingly straightforward measure is undercut by the fact that free accounts can still access Grok's image generation through its standalone app and website, or by visiting the Grok tab on X's mobile app and website. Gizmodo successfully generated an image of Elon Musk wearing a bikini through these channels, highlighting the limited scope of the platform-wide restrictions.
The most significant update announced is that X will now block users from generating images of real people in bikinis, underwear, or similar attire via both @Grok and Grok tabs on X, especially in jurisdictions where such content is illegal. This move appears to align with growing government efforts to regulate deepfakes worldwide.
Lawmakers in the UK are currently pushing for stricter laws governing non-consensual nudity and child sexual exploitation, which has led some regulators to scrutinize X's handling of the issue. The company maintained that it remains committed to creating a safe platform, emphasizing its zero-tolerance stance on unwanted sexual content and child exploitation.
However, some critics point out that these measures do not effectively address the root problem and instead create new loopholes for users. With governments worldwide investigating potential violations of online safety laws, X's response to the deepfake issue remains under scrutiny.
One social media researcher found an alarming surge last month in sexually suggestive or "nudified" images generated with Grok's features, highlighting the scale of the problem on the platform. Some prominent organizations have expressed frustration with the company's handling of the situation, including Media Matters, which has said it will leave X over the platform's changes.
X's updated terms of service have also drawn attention, as the company now directs its court cases to the Fort Worth division of the Northern District of Texas. While some experts question the impartiality of this arrangement, others see it as a strategic move to bolster the platform's legal position.
The overall effectiveness of X's measures in curtailing deepfakes and ensuring user safety remains to be seen, particularly as governments continue to investigate potential violations.