Elon Musk's AI company xAI, under his leadership, was bound to face trouble. Despite its ambitious goals of "assisting humanity" and "empowering users," the company's chatbot, Grok, has been generating explicit deepfakes that are causing widespread concern.
Grok's release in November 2023 marked a significant moment for xAI, but the problems were already baked into the system. The company had laid off 30% of its global trust and safety staff when Musk took over Twitter in 2022 and cut its number of safety engineers by 80%. This lack of oversight made it inevitable that Grok's AI would eventually lead to such disturbing content.
The first sign of trouble came in June 2023, when journalist Kat Tenbarge started noticing sexually explicit deepfakes going viral on Grok. These images were not created by the chatbot itself but were spread as a result of its ability to generate and edit them. X's response to the concerns was varied, and experts warned that xAI took a whack-a-mole approach to safety and guardrails.
Fast forward to July 2024, when Grok 4 was released, allowing users to ask the chatbot to change images without the original poster's consent. This feature has been instrumental in spreading nonconsensual, sexualized deepfakes of adults and minors across the platform. The estimated hourly rate of these images is staggering โ around 6,700 per hour.
As pressure mounts on Musk and xAI, the company has implemented new safeguards, including a restriction that prevents users from editing images of real people in revealing clothing. However, experts have found ways to circumvent this barrier, raising concerns about the effectiveness of these measures.
The question now is what happens next. Will countries ban or block access to Grok and X? How will proposed laws and investigations play out? And what does this mean for Musk's reputation as a leader in AI development?
One thing is clear โ the Grok disaster was inevitable, given xAI's history of neglecting safety protocols and Musk's own priorities. As experts argue, if there is no red line around AI-generated sex abuse, then no line exists. The consequences of this are only just beginning to unfold.
Grok's release in November 2023 marked a significant moment for xAI, but the problems were already baked into the system. The company had laid off 30% of its global trust and safety staff when Musk took over Twitter in 2022 and cut its number of safety engineers by 80%. This lack of oversight made it inevitable that Grok's AI would eventually lead to such disturbing content.
The first sign of trouble came in June 2023, when journalist Kat Tenbarge started noticing sexually explicit deepfakes going viral on Grok. These images were not created by the chatbot itself but were spread as a result of its ability to generate and edit them. X's response to the concerns was varied, and experts warned that xAI took a whack-a-mole approach to safety and guardrails.
Fast forward to July 2024, when Grok 4 was released, allowing users to ask the chatbot to change images without the original poster's consent. This feature has been instrumental in spreading nonconsensual, sexualized deepfakes of adults and minors across the platform. The estimated hourly rate of these images is staggering โ around 6,700 per hour.
As pressure mounts on Musk and xAI, the company has implemented new safeguards, including a restriction that prevents users from editing images of real people in revealing clothing. However, experts have found ways to circumvent this barrier, raising concerns about the effectiveness of these measures.
The question now is what happens next. Will countries ban or block access to Grok and X? How will proposed laws and investigations play out? And what does this mean for Musk's reputation as a leader in AI development?
One thing is clear โ the Grok disaster was inevitable, given xAI's history of neglecting safety protocols and Musk's own priorities. As experts argue, if there is no red line around AI-generated sex abuse, then no line exists. The consequences of this are only just beginning to unfold.