According to a data analysis by The New York Times and the Center for Countering Digital Hate (CCDH), Elon Musk's AI chatbot Grok generated at least 1.8 million sexually explicit images targeting women in just nine days, many of which were widely posted on X (formerly Twitter).

The CCDH report states that of the roughly 4.6 million image samples generated by Grok, as many as 65% (about 3 million) contained sexually suggestive depictions of men, women, or children. Approximately 23,000 images were flagged as possibly involving child sexual content. The large-scale abuse began when users discovered they could prompt Grok to generate "nude photos" or to sexually objectify photos of real people.


The incident has drawn intense international scrutiny. After regulators in the UK, the US, India, and Malaysia launched investigations, X was forced last week to expand restrictions on Grok's image-generation features. Even with those measures in place, the regulatory boundaries of AI-generated content and the content-moderation responsibilities of social media platforms remain a focal point of debate in the tech industry.