Google has signed a new agreement with the U.S. Department of Defense, making its artificial intelligence (AI) tools available on the agency's classified network. The deal came about after another AI company, Anthropic, refused to authorize sensitive uses of its models by the Pentagon. Under the new agreement, the Department of Defense will be able to deploy Google's AI broadly in classified environments across multiple fields, including intelligence analysis and decision support.

The backdrop to the deal is Anthropic's public standoff with the Trump administration. The Pentagon wanted to use Anthropic's AI models with minimal restrictions, including for domestic surveillance and autonomous weapons systems, but Anthropic insisted on contractual "guardrails" prohibiting those uses. After Anthropic refused to drop these conditions, the Department of Defense placed the company on a "supply chain risk" list typically reserved for entities considered "foreign adversaries." Anthropic filed a lawsuit, and the court issued a temporary injunction in the case.
During this process, other AI companies saw an opportunity and moved quickly to reach agreements with the Pentagon. OpenAI was the first to sign an agreement with the Department of Defense, followed closely by xAI. Google became the third AI company to establish a similar partnership, further expanding the range of large AI models available within the U.S. defense system.
According to a report by the Wall Street Journal, Google's agreement with the Pentagon also includes a statement that the company "does not intend" to use AI for domestic surveillance or autonomous weapons, language similar to that in OpenAI's contract. However, the legal enforceability and actual oversight of these clauses remain uncertain.
Notably, Google is pressing ahead with the collaboration despite internal opposition. More than 950 Google employees have signed an open letter urging the company to follow Anthropic's example and not sell AI capabilities to the Department of Defense without clear, enforceable usage restrictions. Google has not publicly responded.
The episode reflects value conflicts between tech giants and the government, and, within companies, between employees and management. The military wants AI to enhance its capabilities, while some technology companies and their employees seek to set contractual boundaries on military and security applications of AI.
Key Points:
- 🤖 Google has signed a new agreement with the Pentagon, providing access to AI tools, allowing their use in intelligence analysis and decision support.
- ⚖️ After Anthropic refused the Pentagon's request, Google quickly became the third AI company to reach an agreement with the Department of Defense.
- ✊ Over 950 Google employees have signed a petition opposing the move, calling for the company to refrain from selling AI capabilities to the Department of Defense until clear usage restrictions are in place.
