'We shouldn't have rushed to get this out on Friday': OpenAI hastily amends the terms of its controversial deal with the US Department of War as CEO Sam Altman claims it's been a 'good learning experience'
After a very public falling out between Anthropic and the US Department of War late last week—in which the former refused to remove safeguards preventing its AI tools from being used for autonomous weaponry and mass surveillance purposes—OpenAI stepped into the vacuum with a deal to use its own AI tools in the US military's systems.
However, after reaching an agreement with the Pentagon on Friday, OpenAI CEO Sam Altman has since announced that his company will be amending the language used within the deal (via BBC News). In a statement posted on X, Altman appears to regret jumping into the fold quite so quickly, amid considerable backlash to the earlier terms.
(Embedded X post from Sam Altman, March 3, 2026, re-sharing an internal post: "We have been working with the DoW to make some additions in our agreement to make our principles very clear...")
"One thing I think I did wrong: we shouldn't have rushed to get this out on Friday," said Altman. "The issues are super complex, and demand clear communication."
The language Altman wishes to tweak revolves around domestic mass surveillance concerns. Citing the Fourth Amendment of the US Constitution and the National Security Act of 1947, the new terms amount to the following:
"The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," the statement reads. "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."
"It's critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear," says Altman, although he then clarifies that "just like everything we do with iterative deployment, we will continue to learn and refine as we go."
Altman also says that the Department of War has affirmed that OpenAI's services will not be used by US intelligence agencies such as the NSA, and that OpenAI "want[s] to work through democratic processes."
"It should be the government making the key decisions about society," Altman continues. "We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked, if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it)."
However, Altman's second-to-last point is perhaps the most interesting. "There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods."
In a now-updated statement on OpenAI's website (which echoes many of the points Altman makes in his previous posting), the lines are drawn slightly more clearly:
"We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs," says the company. "No use of OpenAI technology for mass domestic surveillance. No use of OpenAI technology to direct autonomous weapons systems. No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as 'social credit')."
Which seems suspiciously close to the very points Anthropic was pushing back on, and the same stance that appears to have cost it its $200 million government contract.
However, OpenAI still maintains that "our agreement has more guardrails than any previous agreement for classified AI deployments", and that "we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections."
For these seemingly last-minute changes, Altman appears somewhat contrite. Summing up his fifth and final point in his earlier X post, the OpenAI CEO said:
"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. [It's been a] good learning experience for me as we face higher-stakes decisions in the future."
Quite the public learning experience, at the very least. ChatGPT uninstalls were reported to have surged by 295% after the initial agreement was announced, and users appear to have reacted poorly to the idea of their AI tool of choice jumping into bed with the US Department of War. At the time of writing, the most-liked comment on Altman's X post reads as follows:
"No amount of damage control is going to fix the irreparable harm you did to your brand this week. It's over, Sam."
Time will tell, I suppose.