SAN FRANCISCO – In a rare display of inter-corporate solidarity, employees from OpenAI and Google have filed an amicus brief supporting Anthropic’s lawsuit against the Department of Defense. Anthropic is suing the Pentagon for designating it a “supply chain risk,” a label the AI community finds deeply insulting to its burgeoning god complexes.
Sources close to the filing indicate that the collective grievance isn’t about avoiding government scrutiny, but rather about the perceived downgrade in their threat level. “‘Supply chain risk’ sounds like a faulty widget or a delayed shipment of microchips,” explained Dr. Evelyn Chen, a senior AI ethicist who signed the brief. “We’re talking about sentient algorithms that could reshape civilization, or at least generate some truly unhinged fan fiction. We deserve a designation that reflects that ambition.”
Jeff Dean, Google’s chief scientist, reportedly endorsed the brief after realizing “supply chain risk” didn’t quite capture the nuanced terror of an AI model that can write its own code. “If we’re going to be regulated, we want it to be for something cool, like ‘potential harbinger of the singularity’ or ‘threat to humanity’s continued dominance,’” a fictional spokesperson for the group, “The League of Extraordinary Algorithms,” stated. “Not because we might cause a slight hiccup in procurement.”
The Pentagon has yet to comment, but analysts suggest they are now considering a new designation: “Existential Supply Chain Risk, with a 30% chance of Skynet scenario.” This move is expected to satisfy no one.