When Anthropic last year became the first major AI company cleared by the US government for classified use—including military applications—the news didn’t make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain lethal operations. The so-called Department of War may even designate Anthropic as a “supply chain risk,” a scarlet letter usually reserved for companies that do business with nations scrutinized by federal agencies, like China, which means the Pentagon wouldn’t do business with companies using Anthropic’s AI in their defense work. In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic was in the hot seat. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” he said. This is a message to other companies as well: OpenAI, xAI, and Google, which currently have Department of Defense contracts for unclassified work, are jumping through the requisite hoops to get their own high-level clearances.
There’s a lot to unpack here. For one thing, there’s the question of whether Anthropic is being punished for complaining that its AI model Claude was used as part of the raid to remove Venezuela’s president Nicolás Maduro (that’s what’s being reported; the company denies it). There’s also the fact that Anthropic publicly supports AI regulation—an outlier stance in the industry, and one that runs counter to the administration’s policies. But there’s a bigger, more disturbing issue at play: Will government demands for military use make AI itself less safe?
Researchers and executives believe AI is the most powerful technology ever invented. Almost all of the current AI companies were founded on the premise that it’s possible to achieve AGI, or superintelligence, in a way that prevents widespread harm. Elon Musk, the founder of xAI, was once the biggest proponent of reining in AI—he cofounded OpenAI because he feared the technology was too dangerous to be left in the hands of profit-seeking corporations.
Anthropic has carved out a space as the most safety-conscious of all. The company’s mission is to build guardrails so deeply integrated into its models that bad actors cannot exploit AI’s darkest potential. Isaac Asimov said it first and best in his laws of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Even when AI becomes smarter than any human on Earth—an eventuality that AI leaders fervently believe in—those guardrails must hold.
So it seems contradictory that major AI labs are scrambling to get their products into cutting-edge military and intelligence operations. As the first major lab with a classified contract, Anthropic provides the government a “custom set of Claude Gov models built exclusively for U.S. national security customers.” Still, Anthropic says it did so without violating its own safety standards, including a prohibition on using Claude to produce or design weapons. Anthropic CEO Dario Amodei has specifically said he doesn’t want Claude involved in autonomous weapons or government surveillance. But that might not work with the current administration. Department of Defense CTO Emil Michael (formerly the chief business officer of Uber) told reporters this week that the government won’t tolerate an AI company limiting how the military uses AI in its weapons. “If there’s a drone swarm coming out of a military base, what are your options to take it down? If the human response time is not fast enough … how are you going to?” he asked rhetorically. So much for the first law of robotics.
There’s a good argument to be made that effective national security requires the best tech from the most innovative companies. While even a few years ago some tech companies flinched at working with the Pentagon, in 2026 they’re often flag-waving would-be military contractors. I’ve yet to hear any AI executive discuss their models being associated with lethal force, but Palantir CEO Alex Karp isn’t shy about saying, with apparent delight, “Our product is used on occasion to kill people.”