Pages from the Anthropic website and the company's logo are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (Patrick Sison - AP)
Anthropic’s moral stand on U.S. military use of artificial intelligence is reshaping the competition between leading AI companies, but it is also exposing a growing awareness that chatbots may simply not be capable enough for acts of war. Anthropic’s chatbot, Claude, for the first time outpaced its better-known rival ChatGPT in phone app downloads in the United States this week, a signal of growing interest from consumers siding with Anthropic in its standoff with the Pentagon. But while many military and human rights experts have applauded Anthropic CEO Dario Amodei for standing up for ethical principles, some are also frustrated by years of AI industry marketing that persuaded the government to apply AI to high-stakes tasks.
The Trump administration is following through on its threat to designate artificial intelligence company Anthropic as a supply chain risk, an unprecedented move that could force other government contractors to stop using the AI chatbot Claude. The Pentagon said in a statement Thursday that it has “officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately.” The decision appeared to close the door on further negotiation with Anthropic, nearly a week after President Donald Trump and Defense Secretary Pete Hegseth accused the company of endangering national security.