Once the Pentagon's most trusted AI partner, Anthropic has now been cast into "cold storage" by the very institution it once served. On March 9, local time, the AI startup formally filed suit in the U.S. District Court for the Northern District of California, challenging the Trump administration's unprecedented move to place it on a supply-chain-risk "blacklist." This legal battle concerns not only the fate of a single company; it also lays bare the brutal confrontation between AI technology and ethical boundaries in military applications.
From "Exclusive Access" to "Comprehensive Ban"
Not long ago, Anthropic was a guest of honor at the Department of Defense. Its flagship model, Claude, was the only AI model permitted to access classified networks, and it was reportedly even used to help develop strike plans against specific targets. That honeymoon, however, came to an abrupt end over a deadlocked contract renewal.
Last Thursday, Anthropic confirmed that it had been formally designated a "supply chain risk," a sharply punitive label typically reserved for companies from adversarial nations. In practice, it means that any defense contractor working with the Pentagon must prove it does not use Claude.
Core Controversy: Can AI Have "Autonomous Awareness"?
The trigger for the split was a standoff over "access rights" to AI models:
The Department of Defense's position: the military demands "unrestricted access" to AI models, arguing that any restriction on critical capabilities could endanger frontline soldiers.
Anthropic's stance: as a company built around AI safety, it seeks written assurances that its models will not be used in fully autonomous lethal weapons systems or in large-scale domestic surveillance.
This clash of values ultimately escalated into a harsh administrative crackdown. Trump even attacked Anthropic publicly on social media, calling it a "reckless left-wing AI company."
Millions in Revenue Hang in the Balance
In its complaint, Anthropic calls the action "unprecedented and illegal." Beyond the cancellation of millions of dollars in government contracts, the reputational damage has cast uncertainty over many private-sector orders. The company has asked the court for a temporary injunction revoking the supply-chain-risk designation to prevent further harm.
An Anthropic spokesperson emphasized that seeking judicial review is not a retreat from national security, but a defense of the legitimate rights and interests of the company and its partners.
When a top-tier large model collides with raw administrative power, the case becomes more than a claim for economic damages; it is a trial over the ethical boundaries of AI. On the battlefield, who is in control: human commanders or AI safety protocols? The outcome of this litigation will set the tone for the future of global AI militarization.