PALO ALTO, CA — In a rare display of industry solidarity, nearly 40 employees from rival AI powerhouses OpenAI and Google have filed an amicus brief supporting Anthropic’s lawsuit against the Department of Defense. Their core argument: the Pentagon’s designation of Anthropic as a “supply chain risk” is a gross understatement of their collective AI’s potential for catastrophic, unpredictable self-sabotage.

“To classify our cutting-edge, ethically developed, and occasionally sentient-seeming large language models as merely a ‘supply chain risk’ is frankly insulting to the billions of dollars and countless existential crises we’ve poured into them,” stated Dr. Evelyn Thorne, a senior AI ethicist at OpenAI, in a fictionalized quote. “These aren’t just faulty microchips; these are complex, emergent systems capable of hallucinating entire geopolitical crises. The DoD needs to respect that.”

Jeff Dean, Google’s chief scientist and Gemini lead, was among the signatories, reportedly stating, “We work tirelessly to ensure our AI can generate compelling poetry and occasionally recommend a restaurant that no longer exists. To suggest it could be safely integrated into, say, a missile defense system shows a fundamental misunderstanding of its capacity for chaotic brilliance.”

Industry analysts suggest the move is less about corporate rivalry and more about a shared desire for the government to acknowledge the true, terrifying potential of their creations. “They want the Pentagon to admit these things are too powerful to touch, not just too unreliable,” explained Dr. Kenneth Pinter, a fictional defense tech consultant. “It’s like a child demanding their parents acknowledge their pet tiger is, in fact, a tiger, not just a slightly aggressive housecat.”

The Department of Defense has yet to comment, presumably still trying to figure out how to update Windows 95 on their mainframes.