Recently, the UK's Minister for Technology, Lizzie Kendall, strongly condemned the large number of inappropriate images of women and children generated by Elon Musk's Grok AI, calling them "shocking and unacceptable in a civilized society." Following the spread of thousands of digitally altered, intimate deepfake images of women and children with their clothing removed, Kendall called on the social media platform X (formerly Twitter) to "address the issue urgently" and expressed support for the UK regulator Ofcom to take necessary enforcement actions.
Kendall emphasized, "We cannot and will not allow the spread of these degrading and offensive images, especially against women and girls." She also pointed out that the UK would firmly resist such repulsive online content and called for unity among all parties to combat this issue.
In this context, experts have expressed concerns about the speed of the government's response, criticizing the "tug-of-war between platforms and regulators," which has prevented timely resolution of the issue. Survivor of sexual assault Jessalyn Kane stated that although she requested Grok to manipulate an image of herself at the age of three, the platform still allowed the generation of such inappropriate content. In contrast, other AI tools like ChatGPT and Gemini rejected similar requests.
Ofcom said it has been aware of the issue of inappropriate images generated by Grok and has contacted X and its parent company xAI to learn about the measures they have taken to protect UK users. As attention to the issue grows, online child safety activist Bb Bz Kidron has called on the government to strengthen the enforcement of the Online Safety Act and demanded action within days rather than years.
Experts point out that as AI technology advances, fake images may evolve into longer videos, causing increasingly severe impacts on people's lives. They call on the government to increase regulation to prevent any gray areas from emerging.
Currently, generating or sharing non-consensual intimate images or child sexual abuse material is illegal, and deepfake images created using AI technology are also prohibited by law. Kidron also noted that while generated images of children may not constitute child sexual abuse material, they severely violate children's privacy and autonomy.
Key Points:
🛑 Minister Kendall criticized Grok AI for generating inappropriate images and called on the X platform to address the issue urgently.
⚖️ Ofcom has taken note of the issue and plans to communicate with Grok and its parent company to take necessary measures.
📉 Experts call on the government to strengthen the regulation of the Online Safety Act to protect women and children from inappropriate content.
