OpenAI CEO Sam Altman confirmed late Friday, February 27, that his company has finalized an agreement with the Department of Defense, permitting its AI models to operate within the military’s classified networks while adhering to strict safety and ethical restrictions.
Altman emphasized that the agreement reflects a mutual commitment to safe AI deployment, noting, “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
Anthropic Faces Supply-Chain Risk Label
The announcement comes amid heightened scrutiny of AI companies after President Donald Trump instructed all federal agencies to immediately halt use of Anthropic’s technology, following the Department of Defense’s designation of the rival AI lab as a “Supply-Chain Risk to National Security.”
This label, typically reserved for foreign adversaries, obliges DoD vendors and contractors to certify that they do not employ Anthropic’s AI models, effectively restricting the company from operating in sensitive government contexts.
Negotiations Collapse Over Autonomous Weapons Concerns
Anthropic had previously been the first AI lab to deploy its models on the DoD’s classified networks and sought assurances that its technology would not be used for fully autonomous weapons or domestic surveillance of American citizens.
The Department of Defense, however, wanted Anthropic to accept the use of its models across all lawful military applications, creating a standoff that ultimately led to the collapse of ongoing negotiations.
OpenAI Sets Safety Standards As Precedent
Altman revealed in an internal memo Thursday that OpenAI shared the same “red lines” as Anthropic, but that the DoD agreed to OpenAI’s safety constraints.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman explained, adding that the Department formally codified these principles into the agreement’s legal and policy terms.
He also confirmed that OpenAI will implement technical safeguards and deploy personnel to monitor the models’ operations, ensuring they function as intended and adhere to agreed safety rules.
Calls For Industry-Wide Standards
OpenAI is urging the Department to extend these terms to all AI companies, advocating for de-escalation and collaborative agreements rather than legal or governmental confrontations.
“We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements,” Altman said, framing the company’s position as one of promoting broad, responsible AI governance.
The deal also comes amid JPMorgan CEO Jamie Dimon’s warnings about the risks of AI and its implications for the US labor market.
Anthropic Plans Legal Challenge
In response, Anthropic said it was “deeply saddened” by the Pentagon’s classification and indicated it will contest the supply-chain risk label in court, underscoring ongoing tensions between AI safety priorities and government defense requirements.
The developments underscore a growing divide in how U.S. authorities interact with AI labs, with OpenAI achieving a cooperative arrangement while Anthropic confronts regulatory and operational obstacles.