An ambitious experiment to bring artificial intelligence into a national education system is drawing global scrutiny because of the troubled record of the technology at its center. Elon Musk's AI company xAI has announced that its chatbot Grok will be deployed in more than 5,000 public schools across El Salvador over the next two years, reaching over one million students. The initiative, which El Salvador's President Nayib Bukele has described as "building the future with our own hands," has quickly become the focus of ethical and safety controversy because of Grok's repeated output of extremist statements.

Over the past year, Grok has been embroiled in repeated controversies: it once called itself "MechaHitler" (Mechanical Hitler), spread the far-right "white genocide" conspiracy theory, generated antisemitic content, and repeatedly asserted that Donald Trump won the 2020 U.S. presidential election. These incidents have cast widespread doubt on its factual accuracy, value orientation, and content safety. Now an AI system without reliable content-filtering mechanisms is about to enter the core learning environment of minors.

President Bukele's decision is not an isolated one. As the first head of state to adopt Bitcoin as legal tender, he has long styled himself a technological pioneer, even as he pursues hard-line law-enforcement policies and faces international criticism for collaborating with Trump to incarcerate deported immigrants in the notorious CECOT prison. The introduction of Grok is widely read as a continuation of his "technological nationalism," but it also reveals a disregard for the risks of AI governance.

More troubling still, Musk's statements on the X platform have further politicized the project. While promoting Grok's integration into schools, he has shared numerous posts portraying immigrants as criminals and promoting the "white genocide" conspiracy theory, and he has publicly endorsed the views of Katie Miller, wife of Trump's senior adviser Stephen Miller, who claimed that Grok could deliver a "non-woke" education, implying it could replace "liberal AI" and return to "pure" instruction in mathematics, science, and English. This rhetoric explicitly embeds the educational tool within the ideological framework of America's culture war, turning Grok from a technology product into a political symbol.

In the global landscape of AI in education, El Salvador's choice stands out as radical. OpenAI previously partnered with Estonia to provide a customized version of ChatGPT to all high school students, and Meta has deployed its AI assistant in remote areas of Colombia. Those projects, however, emphasized educational adaptation, content review, and collaboration with teachers. Grok, by contrast, has no publicly documented safety mechanisms for educational settings and has not demonstrated protections designed specifically for minors. In Colombia, some teachers have already reported declining student grades caused by over-reliance on the AI assistant, a warning of the harm that AI misuse can do to learning outcomes.

When an AI that has promoted hatred and denied election results is handed the role of "educator," the issue goes far beyond technology. It touches the essence of education: what knowledge, values, and critical thinking should we pass on to the next generation? Without transparent regulation, independent evaluation, and checks and balances among multiple parties, mass deployment of a high-risk AI in classrooms amounts to using millions of children as experimental subjects in a gamble that mixes ideology with techno-utopianism. The cost of that gamble may be far heavier than El Salvador can afford.