Appeals Court Refuses to Block Pentagon Blacklist Against AI Firm Anthropic

Anthropic is redefining what responsible AI can be. [TechGolly]

Key Points:

  • A federal appeals court in Washington refused to temporarily stop the Department of Defense from blacklisting Anthropic.
  • The technology company remains banned from military contracts but can still work with other federal agencies after a separate court ruling.
  • The Pentagon labeled Anthropic a supply chain risk after the company refused to allow its artificial intelligence to power autonomous weapons.
  • Anthropic signed a $200 million contract with the military in July before negotiations collapsed over how the government would use the technology.

A federal appeals court in Washington, D.C., just delivered a major blow to artificial intelligence company Anthropic. On Wednesday, the court denied the company’s request to temporarily block the Department of Defense from blacklisting it. This legal fight started after the military labeled the technology firm a supply chain risk.

The judges explained their reasoning in a written decision. They weighed the financial harm Anthropic might suffer against the needs of the military. The court decided that the government’s needs matter more right now. The judges noted that the Department of Defense must secure vital artificial intelligence technology during an active military conflict. Because of this, the court refused to pause the blacklisting while the main lawsuit continues.


Anthropic does have some good news from a different courtroom. Late last month, a federal judge in San Francisco granted the company a preliminary injunction in a related case. That ruling stops the Trump administration from enforcing a total ban on the Claude software across the entire government. Because of these split decisions, Anthropic cannot currently accept military contracts, but it can still sell its software to other federal agencies.

The fight between the Pentagon and one of the world’s most valuable private companies began over basic rules of use. Anthropic signed a massive $200 million contract with the Pentagon back in July. The company even became the first business to deploy its models across classified military networks. The software worked well with existing defense contractors like Palantir.

Problems started in September when the two sides tried to negotiate how the military would deploy the Claude software on its main artificial intelligence platform. The Department of Defense demanded unfettered access to the models for all lawful purposes. Anthropic pushed back. The company demanded clear assurances that the military would never use its technology to build fully autonomous weapons or conduct domestic mass surveillance. Neither side compromised, and the dispute ended up in federal court.

The conflict spilled into the public eye in late February. Defense Secretary Pete Hegseth took to the social media platform X to declare Anthropic a supply chain risk. Soon after, the military sent the company an official letter confirming the designation. Historically, the government has used this label only for foreign adversaries. Anthropic now holds the unusual distinction of being the very first American company to receive this national security designation.

President Donald Trump escalated the situation right around the same time. He published a post on Truth Social ordering all federal agencies to stop using technology built by Anthropic immediately. He established a 6-month phase-out period for agencies to remove the software from their systems. These aggressive moves shocked many officials in Washington because government workers already used the tools daily.

Anthropic fought back by asking the appeals court to review the decision. The company argued the Pentagon acted unconstitutionally and simply wanted to retaliate against it. The appeals court acknowledged that Anthropic will likely suffer irreparable harm without a stay, but the judges noted that the company primarily faces financial losses. The judges also rejected the claim that the military restricted the company’s free speech.

Because Anthropic faces real financial harm, the appeals court promised to expedite the legal review process. An Anthropic spokesperson released a statement after the ruling. The company thanked the court for recognizing the need for speed. The spokesperson said the company feels confident the courts will eventually agree that the supply chain designations broke the law. The company promised to keep working productively with the government to ensure Americans benefit from safe artificial intelligence.

EDITORIAL TEAM
Al Mahmud Al Mamun leads the TechGolly editorial team. He previously served as Editor-in-Chief of a world-leading professional research magazine. Rasel Hossain supports him as Managing Editor. The team includes technologists, researchers, and technology writers with substantial expertise in Information Technology (IT), Artificial Intelligence (AI), and Embedded Technology.