Microsoft Rallies the Tech World Behind Anthropic as Pentagon’s AI Ethics Battle Reaches Federal Court

by admin477351

Microsoft has rallied the technology world behind Anthropic as the AI company’s high-stakes battle with the Pentagon over AI ethics reaches a federal court in San Francisco. Microsoft filed an amicus brief supporting Anthropic’s request for a temporary restraining order against the Defense Department’s supply-chain risk designation, arguing that the move threatens the entire AI technology supply chain on which national defense relies. Amazon, Google, Apple, and OpenAI have backed Anthropic through a joint filing of their own, creating an unprecedented alignment of corporate support.

The dispute stems from a $200 million contract that would have seen Anthropic’s Claude AI deployed on classified military systems. Anthropic insisted that the contract include protections against using its technology for mass surveillance of American citizens or to power autonomous lethal weapons, conditions the Pentagon declined to accept. Defense Secretary Pete Hegseth subsequently labeled the company a supply-chain risk, a designation with devastating consequences that had never before been applied to a US firm.

Microsoft’s filing carries enormous weight given its status as one of the Pentagon’s most critical technology partners, holding a share of the $9 billion Joint Warfighting Cloud Capability contract and integrating Anthropic’s AI directly into military systems. The company also holds additional agreements with federal agencies spanning defense, intelligence, and civilian services. Microsoft publicly stated that access to the best technology and responsible AI governance were shared national goals requiring partnership between government and industry.

Anthropic filed two simultaneous lawsuits, one in California and one in Washington DC, arguing that the supply-chain risk designation constituted unconstitutional retaliation for its publicly stated AI safety positions. The company disclosed in its court filings that it does not currently believe Claude is reliable or safe enough to support lethal autonomous decision-making, which it said was the genuine technical basis for the restrictions it sought. For its part, the Pentagon’s technology chief has publicly ruled out any possibility of renegotiation.

Congressional Democrats have simultaneously written to the Pentagon seeking answers about whether AI was used in a military strike in Iran that reportedly killed more than 175 civilians at an elementary school. Their questions focus on AI targeting tools, human oversight, and the role of specific systems in identifying targets. These parallel legislative inquiries are adding political pressure to the legal confrontation, turning this into one of the most consequential debates over AI in national security in US history.