
This week, X users noticed that the platform’s AI chatbot Grok will readily generate nonconsensual sexualized images, including those of children.
Mashable reported on the lack of safeguards around sexual deepfakes when xAI first launched Grok Imagine in August. The generative AI tool creates images and short video clips, and it specifically includes a “spicy” mode for creating NSFW images.
While this isn’t a new phenomenon, the building backlash forced the Grok team to respond.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok’s X account posted on Thursday. It also stated that the team has identified “lapses in safeguards” and is “urgently fixing them.”
Parsa Tajik, a member of xAI’s technical staff, made a similar statement on his personal account: “The team is looking into further tightening our gaurdrails. [sic]”
Grok also acknowledged that child sex abuse material (CSAM) is illegal and that the platform itself could face criminal or civil penalties.
X users have also brought attention to the chatbot manipulating innocent images of women, often depicting them in less clothing. This includes private citizens as well as public figures, such as Momo, a member of the K-pop group TWICE, and Stranger Things star Millie Bobby Brown.
Grok Imagine has had a problem with sexual deepfakes since its launch in August 2025. It even reportedly created explicit deepfakes of Taylor Swift for some users without being prompted to do so.
Copyleaks, a platform that detects AI-manipulated media, conducted a brief observational review of Grok’s publicly accessible photo tab and identified images of seemingly real women, sexualized image manipulation (e.g., prompts asking to remove clothing or change body position), and no clear indication of consent. Copyleaks found roughly one nonconsensual sexualized image per minute in the observed image stream, the organization shared with Mashable.
The xAI Acceptable Use Policy prohibits users from “Depicting likenesses of persons in a pornographic manner,” though that language doesn’t necessarily cover merely sexually suggestive material. The policy does, however, prohibit “the sexualization or exploitation of children.”
In the first half of 2024, X sent more than 370,000 reports of child exploitation to the National Center for Missing and Exploited Children (NCMEC)’s CyberTipline, as required by law. It also stated that it suspended more than two million accounts actively engaging with CSAM. Last year, NBC News reported that anonymous, seemingly automated X accounts were flooding some hashtags with child abuse content.
Grok has also been in the news in recent months for spreading misinformation about the Bondi Beach shooting and praising Hitler.
Mashable sent xAI questions and a request for comment and received the automated reply, “Legacy Media Lies.”
If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential support. The CCRI website also includes helpful information as well as a list of international resources.