Anthropic vs. US Department of Defense. The dispute goes to court – Bitcoin.pl

Claude’s creators have declared war on the Pentagon. Literally. On Monday, Anthropic filed two lawsuits against the US Department of Defense in federal courts in San Francisco and Washington. The filings are a direct response to the DOD’s decision last week to designate the company a supply chain risk.

The designation is no small matter: it is usually reserved for foreign adversaries of the state. Its implications are severe, because any company or agency that works with the Pentagon must now formally certify that it does not use Anthropic’s models.

Why has Anthropic become a supply chain threat?

The conflict had been building for weeks. Anthropic set firm conditions: it refuses to allow its technology to be used for mass surveillance of Americans, and it maintains that AI models are not yet ready to power fully autonomous weapons – systems in which no human makes the decision to select a target and open fire. Secretary of Defense Pete Hegseth retorted that the Pentagon should have access to AI systems for any lawful purpose and that a private contractor has no right to restrict it.

In the lawsuit filed in San Francisco, Anthropic calls the DOD’s actions “unprecedented and unlawful” and explicitly accuses the administration of retaliation.

“The Constitution prevents the government from using its enormous power to punish a company for free speech,” the lawsuit says.

The company defines the speech in question as its public positions on the limitations of its own AI systems and on questions of security and transparency in the technology.

Is Claude woke?

And this is where things get heated – especially politically. The Trump administration, including the president himself and Hegseth, has repeatedly criticized Anthropic and its CEO Dario Amodei, calling them “woke” and “radicals” for advocating stronger AI regulation. The company’s response: the government does not have to agree with its views or use its products, but it cannot use the state apparatus to suppress dissenting voices.

Several private companies continue to work with Anthropic, but the scale of potential losses in the government sector is serious and could shape the future of OpenAI’s largest competitor.

It is worth adding here that OpenAI quickly accepted the Pentagon’s proposal (the same one Anthropic had previously rejected), but that is not the end of the revelations. Catlin Kalinowski, who led robotics at Sam Altman’s company, resigned over what she called a reckless partnership with the Pentagon.

As Kalinowski herself emphasized in her post on X:

Artificial intelligence plays an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are frontiers that deserved more thought than they received.

The AI industry is watching this battle with bated breath, because its outcome may draw the line between the technological sovereignty of corporations and the state’s appetite for unconditional access to tools that are changing the rules of the game.