In a filing in San Francisco federal court, Microsoft asserted that Anthropic’s fight against the Pentagon is not one company’s battle but one in which every technology company has a stake, and urged the court to grant a temporary restraining order against the Defense Department’s unprecedented supply-chain risk designation. The brief argued that the designation threatens the technology ecosystems supporting both commercial and military AI applications. Amazon, Google, Apple, and OpenAI have also backed Anthropic in a joint filing, lending weight to Microsoft’s claim.
The dispute that gave rise to Anthropic’s lawsuits began during negotiations over a $200 million contract, in which the company refused to allow its Claude AI to be used for mass surveillance of American citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth responded by designating the company a supply-chain risk, and the Pentagon’s technology chief later stated publicly that renegotiation was not an option. Anthropic then filed simultaneous lawsuits in California and Washington, DC, challenging the designation.
Microsoft’s assertion that this is every technology company’s fight is backed by its own direct stake in the case: the company integrates Anthropic’s AI into military systems and participates in the Pentagon’s $9 billion cloud computing contract, while additional agreements with defense, intelligence, and civilian agencies deepen its involvement further. Microsoft has publicly argued that the government and the technology sector must work together to ensure AI advances national security responsibly.
Anthropic’s court filings argued that the supply-chain risk designation was unconstitutional retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not currently consider Claude safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands, and noted that no US company had ever before received this designation.
Congressional Democrats have separately asked the Pentagon whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school, raising questions about AI targeting and human oversight. Their formal inquiries underscore Microsoft’s assertion that the implications of this case extend far beyond Anthropic. Together, the legal and legislative pressure is creating an extraordinary moment of accountability for the Pentagon’s approach to AI governance.