OpenAI has reached an agreement with the US Department of Defense to deploy its artificial intelligence models on classified military networks, just hours after the White House ordered federal agencies to stop using technology from rival firm Anthropic.
In a late Friday post on X, OpenAI CEO Sam Altman announced the deal, saying the company would provide its models inside the Pentagon's "classified network." He wrote that the department showed "deep respect for safety" and a willingness to work within the company's operating limits.
The announcement came amid a turbulent week for the AI sector. Earlier the same day, Defense Secretary Pete Hegseth labeled Anthropic a "Supply-Chain Risk to National Security," a designation typically applied to foreign adversaries. The ruling requires defense contractors to certify they are not using the company's models.
President Donald Trump simultaneously directed every US federal agency to immediately halt use of Anthropic technology, with a six-month transition period for agencies already relying on its systems.
Related: Crypto VC Paradigm expands into AI, robotics with $1.5B fund: WSJ
Anthropic Pentagon talks collapse over AI use limits
Anthropic was the first AI lab to deploy models within the Pentagon's classified environment under a $200 million contract signed in July. Negotiations collapsed after the company sought guarantees that its software would not be used for autonomous weapons or domestic mass surveillance. The Defense Department insisted the technology be available for all lawful military purposes.
In a statement, Anthropic said it was "deeply saddened" by the designation and intends to challenge the decision in court. The company warned the move could set a precedent affecting how American technology firms negotiate with government agencies, as political scrutiny of AI partnerships continues to intensify.
Altman said OpenAI maintains similar restrictions and that they were written into the new agreement. According to him, the company prohibits domestic mass surveillance and requires human responsibility in decisions involving the use of force, including automated weapons systems.
Related: Pantera, Franklin Templeton join Sentient Arena to test AI agents
OpenAI faces backlash after deal
Meanwhile, some users on X voiced skepticism. "I just canceled ChatGPT and bought Claude Pro Max," Christopher Hale, an American Democratic politician, wrote. "One stands up for the God-given rights of the American people. The other folds to tyrants," he added.
"2019 OpenAI: we'll never help build weapons or surveillance tools. 2026 OpenAI: Department of War, hold my classified cloud instance. Integrity arc go brrrrrrr," one crypto user wrote.
Magazine: Bitcoin may take 7 years to upgrade to post-quantum, BIP-360 co-author says