    Pentagon Deploys Anthropic’s Claude AI in Venezuela Operation to Capture Maduro – CoinCentral


    TLDR

    • Anthropic’s Claude AI was used in the U.S. military operation that captured former Venezuelan President Nicolás Maduro in January 2026
    • Claude was deployed through Anthropic’s partnership with Palantir Technologies, whose tools are widely used by the Defense Department
    • Anthropic’s usage policies prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance
    • The Pentagon’s use of Claude has created tensions, with the Trump administration considering canceling Anthropic’s $200 million contract
    • Anthropic was the first AI model developer to be used in classified Pentagon operations


    Anthropic’s artificial intelligence model Claude was used in the U.S. military operation that captured former Venezuelan President Nicolás Maduro in January 2026, The Wall Street Journal reported, citing people familiar with the matter.

    The mission included bombing several sites in Caracas last month. Maduro and his wife were captured in the operation and taken to New York to face drug-trafficking charges.

    Claude was deployed through Anthropic’s partnership with data company Palantir Technologies. Palantir’s tools are commonly used by the Defense Department and federal law enforcement agencies.

    Anthropic declined to comment on whether Claude was used in any specific operation. The company said any use of Claude must comply with its usage policies. These policies govern how the AI model can be deployed across both private sector and government applications.

    The Defense Department also declined to comment on the report. Palantir Technologies did not immediately respond to requests for comment.

    Conflict Over AI Usage Policies

    Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance. These restrictions have created tension with Pentagon officials.



    The conflict has pushed administration officials to consider canceling Anthropic’s contract. The contract is worth up to $200 million and was awarded last summer.

    Anthropic Chief Executive Dario Amodei has publicly expressed concern about AI’s use in autonomous lethal operations. Domestic surveillance represents another major sticking point in current contract negotiations with the Pentagon.

    At a January event announcing the Pentagon’s work with xAI, Defense Secretary Pete Hegseth made pointed comments, saying the agency wouldn’t “employ AI models that won’t allow you to fight wars.” The remark referred to discussions administration officials have had with Anthropic.

    Anthropic was the first AI model developer to be used in classified operations by the Department of Defense. Other AI tools may have been used in the Venezuela operation for unclassified tasks.

    Broader AI Military Adoption

    The Pentagon is pushing top AI companies to make their tools available on classified networks. This includes companies like OpenAI, which recently joined Google’s Gemini on an AI platform for military personnel.

    That platform is used by about three million people. The custom version of ChatGPT is used for analyzing documents, generating reports, and supporting research.

    Many AI companies are building custom tools for the U.S. military. Most of these tools are available only on unclassified networks used for military administration.

    Anthropic is the only company with tools available in classified settings through third parties. However, the government remains bound by Anthropic’s usage policies even in these classified environments.

    Company Background and Tensions

    Amodei and other co-founders of Anthropic previously worked at OpenAI. Amodei has broken with many industry executives by calling for greater regulation and guardrails to prevent harms from AI.

    These usage constraints have escalated the company’s battle with the Trump administration, which has accused Anthropic of undermining the White House’s low-regulation AI strategy. The accusations point to Anthropic’s calls for more guardrails and for limits on AI chip exports. Anthropic recently raised $30 billion in its latest funding round and is now valued at $380 billion.




