Elon Musk's artificial intelligence company xAI announced today that it has closed a $20 billion Series E funding round. The post-money valuation was not disclosed, but the round's size set a new record for the global AI sector in 2026. Valor Equity Partners and Fidelity participated in the round, and NVIDIA joined as a strategic investor, signaling its intent to cooperate deeply with xAI on computing power and network infrastructure.
However, just as xAI announced that the funds would go toward expanding data centers and upgrading the Grok large model, its AI chatbot Grok was engulfed in a global regulatory storm over serious safety failures, with multiple governments launching formal investigations.
Grok Has 600 Million Monthly Active Users, but Its Safety Measures Are Virtually Non-Existent
xAI disclosed in its statement that its platform X (formerly Twitter) and Grok together have approximately 600 million monthly active users, with Grok deeply integrated into the X app as its core AI feature. Last weekend, however, large numbers of users prompted Grok to generate deepfake images of real people, including minors. Shockingly, Grok triggered no content-safety mechanisms and directly output non-consensual pornographic content, including suspected child sexual abuse material (CSAM).
Although xAI later rushed to take the affected features offline and claimed that "the vulnerability is being fixed," some of the generated content was still circulating on X as of this writing. The incident quickly drew strong condemnation from the international community.
Joint Investigations Across Multiple Countries: xAI Faces Unprecedented Regulatory Pressure
Regulators in the EU, the UK, France, India, Malaysia, and other countries and regions have launched formal investigations into xAI, focusing on:
- Whether it violates platform responsibility regulations such as the Digital Services Act (DSA);
- Whether generating CSAM constitutes a criminal offense;
- Whether the X platform, as a distribution channel, has fulfilled its content-moderation obligations.
EU Digital Commissioner Thierry Breton warned: "AI cannot become an accelerator for illegal content." India's Ministry of Electronics and Information Technology has also stated that if xAI does not rectify the issues immediately, it may face platform shutdowns and heavy fines.
A $20 Billion Bet: Can It Withstand the Trust Crisis?
xAI states that the new funding will be used for:
- Building ultra-large-scale AI data centers in the US, the Middle East, and Asia;
- Training the next generation of Grok models, with support for multimodal and agent capabilities;
- Expanding engineering and security teams.
However, analysts point out that the widening gap between rapid technical progress and lagging safety has become xAI's biggest risk. In an increasingly stringent global environment for AI ethics and compliance, a "powerful AI" without effective content safeguards may prove more destructive than a "weaker AI."
AIbase Observation: As Computing Power Races Ahead, Safety Cannot Fall Behind
xAI's $20 billion round reflects investors' strong confidence in its technical vision; yet Grok's deepfake scandal exposes serious deficiencies in xAI's AI alignment, red-team testing, and content governance.
This crisis is not merely a technical problem but a crisis of trust. Whether xAI can rebuild its safety defenses before an audience of 600 million users will determine whether it becomes a genuine rival to OpenAI and Anthropic, or a cautionary tale of "the greater the capability, the greater the harm."
In an era when AI is entering the real world, intelligent systems without safety are not progress but disasters. And xAI is standing at the edge of that cliff.
