Anthropic Sues Federal Government Over "Supply Chain Risk" Designation and Six-Month Order to Phase Out Claude, Escalating Disputes Over AI Military Use and Safety Governance
Artificial intelligence company Anthropic has filed a lawsuit in U.S. federal court accusing the Trump administration of designating it a "supply chain risk" company and ordering federal agencies to stop using its AI systems, a measure the company characterizes as retaliatory. The suit was filed in the U.S. District Court for the Northern District of California.
In the complaint, Anthropic argues that the government's actions lack legal basis and have significantly harmed its operations and partnerships. The filing names several government agencies and officials as defendants, including the U.S. Department of Defense, Secretary of Defense Pete Hegseth, Treasury Secretary Scott Bessent, Secretary of State Marco Rubio, and Commerce Secretary Howard Lutnick.
Image source: Anthropic. Anthropic's lawsuit accuses the Trump administration of listing it as a "supply chain risk" company and ordering federal agencies to stop using its AI systems.
The "supply chain risk" label is typically applied to technology suppliers linked to hostile foreign governments, such as vendors suspected of shipping spyware or other malicious software. Anthropic argues that placing an American AI company in this category damages its reputation and business relationships, and says it is seeking a legal determination of whether the government's actions are lawful while protecting the interests of the company, its clients, and its partners.
The core of the conflict is a disagreement between Anthropic and the U.S. Department of Defense over how its AI technology may be used. Anthropic had been negotiating a high-value AI contract with the Pentagon worth approximately $200 million. During negotiations, the Defense Department required that the AI system be usable for all lawful purposes, including military scenarios, while Anthropic insisted on maintaining two safety restrictions: no use of Claude for mass surveillance, and no use in autonomous lethal weapons.
The company considers these restrictions to be fundamental principles of AI safety governance. The two sides could not reach an agreement. In February 2026, the U.S. government designated Anthropic as a supply chain risk company and ordered federal agencies to stop using the Claude system within six months.
In the lawsuit, Anthropic argues that the government is using administrative power to pressure the company into changing its AI safety policies. The company's lawyers contend that the U.S. Constitution does not permit the government to impose punitive measures on a company simply for publicly stating its position.
Further Reading
National Security vs. Ethics: Anthropic Refuses to Remove Claude Safety Guardrails, Clashes with U.S. Department of Defense
The conflict between Anthropic and the government quickly sparked discussion across the tech industry. Shortly after the lawsuit was filed, 37 AI researchers from companies including OpenAI and Google submitted amicus briefs to the court in support of Anthropic, arguing that punishing companies over AI safety policy disagreements could weaken the United States' position in the AI industry.
The briefs argue that if the government suppresses AI companies under the guise of national security, it could cause long-term damage to the tech industry and research environment. Several scholars also note that "supply chain risk" designations are typically reserved for foreign software security concerns, a category into which Anthropic's case does not fall.
Ben Goertzel, CEO of AI company SingularityNET, says that refusing to allow AI to be used for mass surveillance or autonomous lethal weapons does not constitute a security threat; if the military needs such applications, it can choose other AI systems.
Image source: DigFin. SingularityNET CEO Ben Goertzel.
In response to the lawsuit, the White House quickly stated that the government will not allow companies to restrict the military’s access to critical technologies. A White House spokesperson emphasized that the U.S. government must ensure the military can use necessary technologies within legal bounds to maintain national security.
This dispute also touches on broader tech policy issues. The Trump administration previously eased export restrictions on AI chips to China and criticized certain tech companies’ political stances, further straining relations between Silicon Valley and Washington.
Anthropic adds that, following the designation, companies working with the Department of Defense may need to prove their systems do not use Claude, a requirement that could have ripple effects on its business collaborations.
Several major tech firms have expressed support for Anthropic. Google said it will continue offering Anthropic's AI technology to its cloud customers, excluding military applications, while Microsoft and Amazon said they will continue working with the company in commercial sectors.
Anthropic is asking the court to declare the government's actions unlawful and to block enforcement of the "supply chain risk" designation. As the case moves through the courts, debate over AI safety governance, national security, and corporate autonomy continues to widen.
Further Reading
OpenAI's Partnership with the U.S. Military Sparks Boycott! Claude App Downloads Surpass Competitors', Revealing a Struggle Over Ethics and Political Power
Wall Street Journal: After Trump’s Ban on Anthropic, U.S. and Israel Still Rely on Claude for Iran Airstrikes