The Pentagon has formally designated Anthropic and its AI models as a supply-chain risk, marking the first time a U.S. company has received such a classification, according to U.S. media reports. The move requires defense contractors to certify that they do not use Anthropic’s Claude AI models in work with the military, potentially affecting the company’s broader business.
The dispute stems from Anthropic’s position that its technology should not be used for mass surveillance or fully autonomous weapons systems, a stance rejected by U.S. defense officials who say suppliers cannot dictate how their products are used. The company has vowed to challenge the designation in court.
The conflict intensified after reports that Anthropic’s AI had previously been used in a U.S. military operation against Iran and remains in use despite a government-wide ban. Meanwhile, CEO Dario Amodei reportedly told staff the action was politically motivated, pointing to donations by Greg Brockman of OpenAI to Donald Trump, whose administration ordered federal agencies to halt use of Anthropic’s technology with a six-month phaseout period.
Credit: CGTN