Altman himself admitted that OpenAI’s deal with the Department of Defense was “definitely rushed.” It’s hard to imagine a more eloquent summary of a week that shook the AI industry.
It all started with a spectacular split between the Pentagon and Anthropic. Secretary of Defense Pete Hegseth officially designated Anthropic a “national security supply chain threat” – a label usually reserved for foreign adversaries.
OpenAI wasted no time. That same evening, Altman announced his own deal with the Department of Defense. OpenAI’s models are to be deployed in classified environments – and this is exactly where the questions begin.
OpenAI and red lines – do they really exist?
OpenAI declares three hard prohibitions: no mass surveillance of citizens, no use in autonomous weapon systems, and no use in high-risk automated decision-making, such as “social credit” systems. Sounds reasonable. The problem is that the devil is in the details.
Techdirt’s Mike Masnick says the deal “absolutely allows for domestic surveillance,” because the collection of private data is to comply with Executive Order 12333 – an order he describes as the NSA’s tool for intercepting communications outside the U.S., even when they involve American citizens.
In response, Katrina Mulligan, OpenAI’s head of national security partnerships, argued that the deployment architecture matters more than the contract language: limiting deployment to the cloud API makes it impossible to integrate the models directly with weapon systems, sensors, or operational equipment.
AI in the spotlight and at the center of war
Drawing that line clearly is difficult, since most networks today already run on AWS. That does not change the fact that AI is becoming a strategic military technology, and the disagreement between the Pentagon and Anthropic only confirms it.
Why did OpenAI do it when Anthropic didn’t?
OpenAI itself admits it does not know why Anthropic could not reach a similar agreement, and expresses hope that other labs will consider a similar arrangement. Cynics would say Anthropic simply refused to sign something that ChatGPT’s developers accepted without blinking.
A side effect of this whole affair? Claude from Anthropic overtook ChatGPT in Apple’s App Store on Saturday – apparently some users vote with their wallets and ethics.
Altman summed it up in his own way: if the deal actually de-escalates tensions between the AI industry and the government, OpenAI will look like geniuses.
The stakes are high. The clock is ticking.