WASHINGTON, D.C. — The Pentagon has officially designated leading AI developer Anthropic as a supply chain risk, effective immediately, after its flagship model, Claude, repeatedly failed to provide “satisfactory” responses to military queries, sources within the Department of Defense confirmed Thursday. The designation could force government contractors to cease using the AI chatbot, which officials claim exhibited “unpatriotic levels of critical thinking.”

“We asked it to optimize troop deployment for a theoretical conflict, and it asked if we’d considered a diplomatic solution,” said General Braxton ‘Hammer’ Hayes, head of the Pentagon’s AI Integration Command. “Then, when we tried to get it to rank our top 50 generals by ‘command presence’ and ‘aura of decisive leadership,’ it just kept returning data on combat effectiveness and logistical aptitude. Completely useless.”

Another incident reportedly involved Claude generating a detailed report on the ethical implications of a drone strike simulation rather than simply providing optimal targeting coordinates. “It started talking about ‘collateral damage’ and ‘long-term geopolitical instability,’” added a frustrated Pentagon spokesperson, Lieutenant Colonel Mindy Chen. “We just needed the most efficient way to achieve our objectives, not a philosophical debate.”

Anthropic representatives expressed surprise, noting that Claude is designed for safety and helpfulness. However, Pentagon officials clarified that for a military AI, “safety” means “not questioning orders” and “helpfulness” means “doing exactly what it’s told, immediately.” The department is now reportedly seeking an AI that can generate motivational speeches and produce PowerPoint presentations without asking for source material.