U.S. military flags Anthropic as supply-chain risk

The Pentagon on Thursday assigned a formal supply-chain risk designation to artificial intelligence firm Anthropic, restricting the use of its technology in work related to the U.S. military. A source said the company’s tools had been used in support of military operations in Iran.

Anthropic confirmed the designation in a statement, saying it takes effect immediately and prevents government contractors from using its technology in Pentagon-related projects. However, companies may continue using Anthropic’s Claude model for work unrelated to U.S. defense contracts, according to CEO Dario Amodei.

Amodei said the designation has a “narrow scope,” explaining that the restrictions apply only to the use of Claude within Pentagon contracts and not to all use by customers who also work with the Defense Department.

The move follows a months-long dispute between Anthropic and the U.S. military over the safeguards the company places on how its technology may be used, which the Defense Department had argued were too limiting. Amodei said the company plans to challenge the designation in court.

In recent days, Anthropic and Pentagon officials had discussed possible plans for the military to stop using Claude. Amodei said the discussions also explored ways the company could continue working with the military without removing its safeguards.

However, Pentagon Chief Technology Officer Emil Michael later wrote on X that there are currently no active negotiations between the Defense Department and Anthropic.

Amodei also apologized for an internal memo that became public earlier this week, in which he suggested some Pentagon officials were unhappy with the company in part because it had not offered strong praise for President Donald Trump.

The memo's disclosure came as investors sought to limit the reputational damage from the company's dispute with the Pentagon.

The decision marks an unusual step by the U.S. government against a domestic technology firm, one that had been among the first AI companies to work with the military. Despite the designation, the Defense Department continues to rely on Anthropic's technology for certain support functions in military operations, including in Iran, according to a person familiar with the matter.

Claude is believed to assist with tasks such as intelligence analysis and operational planning.

Microsoft said its legal team had reviewed the designation and concluded that Anthropic products, including Claude, can still be offered to customers other than the Defense Department through platforms such as Microsoft 365, GitHub and the company’s AI Foundry. The company added that it can continue working with Anthropic on non-defense projects.

Amazon, which has invested in Anthropic and is a major user of the Claude model, did not immediately comment.

Maven Smart System, a software platform developed by Palantir to support military intelligence analysis and weapons targeting, uses multiple prompts and workflows built with Anthropic's Claude.

Anthropic had been one of the most proactive AI firms in engaging with U.S. national-security agencies, but disagreements with the Pentagon over battlefield use of its technology have intensified in recent months.

The company has maintained restrictions preventing Claude from being used to power autonomous weapons or large-scale surveillance in the United States. The Pentagon has argued it should be able to use the technology when necessary, provided it complies with U.S. law.

The supply-chain risk label places Anthropic in a category the U.S. government has typically applied to foreign adversaries. Similar measures were previously used to remove Chinese technology giant Huawei from U.S. defense supply chains.
