US Military Used Anthropic’s AI Model Claude in Venezuela Raid, Report Says

ARTIFICIAL INTELLIGENCE-AI, 2 Mar 2026

William Christou | The Guardian - TRANSCEND Media Service

A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the tool was required to comply with its policies. 
Photograph: GK Images/Alamy

Wall Street Journal says Claude used in operation via Anthropic’s partnership with Palantir Technologies.

14 Feb 2026 – Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed today, a high-profile example of how the US defence department is using artificial intelligence in its operations.

The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela’s defence ministry. Anthropic’s terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.

Anthropic is the first AI developer whose model is known to have been used in a classified operation by the US department of defence. It was unclear how the tool, whose capabilities range from processing PDFs to piloting autonomous drones, was deployed.

A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the AI tool was required to comply with its usage policies. The US defence department did not comment on the claims.

The WSJ cited anonymous sources who said Claude was used through Anthropic’s partnership with Palantir Technologies, a contractor with the US defence department and federal law enforcement agencies. Palantir refused to comment on the claims.

The US and other militaries increasingly deploy AI as part of their arsenals. Israel's military has used drones with autonomous capabilities in Gaza and has relied extensively on AI to generate targets there. The US military has used AI targeting for strikes in Iraq and Syria in recent years.

Critics have warned against the use of AI in weapons technologies and the deployment of autonomous weapons systems, pointing to targeting mistakes created by computers governing who should and should not be killed.

AI companies have grappled with how their technologies should engage with the defence sector, with Anthropic’s CEO, Dario Amodei, calling for regulation to prevent harms from the deployment of AI. Amodei has also expressed wariness over the use of AI in autonomous lethal operations and surveillance in the US.

This more cautious stance has apparently rankled the US defence department, with the secretary of war, Pete Hegseth, saying in January that the department wouldn’t “employ AI models that won’t allow you to fight wars”.

The Pentagon announced in January that it would work with xAI, owned by Elon Musk. The defence department also uses a custom version of Google’s Gemini and OpenAI systems to support research.

___________________________________________________

William Christou is a Beirut-based journalist, focusing on human rights investigations and migration issues.

Go to Original – theguardian.com



