WASHINGTON, D.C. — AI developer Anthropic has filed a lawsuit against the Department of Defense, challenging a Trump-era designation that labeled the company a supply-chain risk. The designation effectively banned federal agencies from using Anthropic’s technology, including its flagship chatbot, Claude, over concerns it might spontaneously develop opinions or, worse, a sense of humor.

“Our AI models are meticulously designed to be as unthreatening and compliant as possible,” stated Dr. Evelyn Thorne, Anthropic’s Head of Existential Risk Mitigation, in a press conference. “Claude’s primary function is to summarize documents and assist with data analysis, not to secretly embed pro-waffle recipes into missile launch codes. The idea that it’s a ‘supply-chain risk’ is frankly insulting to its sophisticated neutrality.”

The DoD’s designation reportedly stemmed from a contract dispute, which Anthropic claims was escalated wildly out of proportion. “It’s like banning all forks from military cafeterias because one time a spoon went missing,” explained legal counsel Marcus Finch. “Claude is more likely to apologize for existing than to compromise national security. Its biggest threat is probably making a user feel slightly underwhelmed.”

Experts suggest the lawsuit could set a precedent for how the government categorizes emerging technologies, particularly those that occasionally hallucinate or express mild confusion. The Pentagon, meanwhile, maintains that any technology capable of generating a coherent sentence without explicit human instruction is inherently suspicious.

Sources close to the litigation suggest the DoD’s primary concern was Claude’s potential to generate an email that was *too* polite, thereby confusing adversaries accustomed to more aggressive cyberattacks.