The US artificial intelligence company Anthropic filed a lawsuit Monday against the Trump administration, challenging the Pentagon's decision to classify the firm and its products as a "supply chain risk" following a breakdown in negotiations over safety restrictions on its AI technology.
The lawsuit, submitted to a federal court in California, argues that the designation and President Donald Trump's order directing federal agencies to stop using Anthropic's systems are "unprecedented and unlawful."
The complaint called the government's campaign a violation of the company's First and Fifth Amendment rights.
A parallel suit was filed in the DC Circuit Court of Appeals.
Anthropic, the company behind the Claude family of AI models, asked the court to overturn the decision, warning that the case carries significant implications for the company and the broader AI industry.
In the filing, the company said the government's move violates constitutional protections and lacks legal authority.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the lawsuit stated, adding that no federal statute authorizes such action.
The "supply chain risk" label is typically applied to entities linked to foreign adversaries and restricts defense contractors from using a company's products in government projects.
Anthropic's lawyers said the company pursued legal action only as a "last resort," alleging that the administration retaliated against the firm for its stance on AI safety and its warnings about the limitations of advanced AI models.
Founded with a focus on safe AI development, Anthropic has provided its technology to the Pentagon and US intelligence agencies since late 2024 through a partnership with data analytics firm Palantir.
The dispute stems from negotiations with the Department of Defense over how the company's AI models could be used. Anthropic had insisted on restrictions against domestic mass surveillance and the use of AI in fully autonomous weapons, arguing the technology is not reliable enough for life-and-death decisions.
According to the company, the Pentagon rejected those conditions and maintained that AI tools should be available for "all lawful purposes."
Negotiations collapsed late last month after Anthropic CEO Dario Amodei said the company could not agree to the terms "in good conscience."
Shortly afterward, Defense Secretary Pete Hegseth informed the firm of the supply chain risk designation, while Trump ordered federal agencies to halt the use of Anthropic products, including its flagship Claude AI models.
Despite the Pentagon's move, major technology companies including Google, Amazon and Apple said Anthropic's AI tools would remain available on their platforms for uses unrelated to US defense operations.
Amid the dispute, Anthropic said it had held "productive" discussions with Pentagon officials, though Emil Michael, the US undersecretary of defense for research and engineering, later said there were no active negotiations with the company.
The case could become a landmark legal battle over the relationship between AI developers and government agencies as Washington seeks to expand the use of artificial intelligence in defense and national security.