DoW is not authorized to “designate a domestic vendor a supply chain risk simply because a vendor publicly criticized DoW’s views about the safe uses of its system,” Lin wrote. In fact, “that designation has never been applied to a domestic company and is directed principally at foreign intelligence agencies, terrorists, and other hostile actors,” she said.

“I don’t know”: Lawyer has no defense of Hegseth

The DoW began using Anthropic’s Claude in March 2025 and had been using it for the past year without ever raising any concerns that Anthropic’s terms limiting certain uses posed a national security risk, Lin said. Rather, the government thoroughly vetted Claude before implementing it, praised Anthropic publicly, and planned to expand the partnership.

The amicable nature of the partnership only changed, the judge said, after DoW sought to deploy Claude on a military platform and Anthropic ultimately agreed to do so with “two critical exceptions: mass surveillance of Americans and lethal autonomous warfare.” Based on its testing, Anthropic said it could not guarantee that Americans’ civil rights would not be infringed if Claude were used for those purposes.

If the government disliked the terms, Anthropic repeatedly said it would understand if another vendor was selected, simply bowing out to avoid compromising on AI safety principles in ways that might “undercut Anthropic’s core identity,” Lin wrote.

Calling out Anthropic for “utopian idealism,” DoW officials blasted the company for supposedly trying to get the government to let a private firm decide how military operations go down. “You can’t have an AI company sell AI to the Department of War and [then say] don’t let it do Department of War things,” Michael told the press.