    What If Elon Musk’s Grok Leaks Classified Info? Elizabeth Warren Is Worried—The Pentagon Isn’t – Decrypt



    In brief

    • Senator Warren sent a letter to Defense Secretary Pete Hegseth demanding answers over Grok’s Pentagon access.
    • Security agencies have warned about Grok’s risks, but the Pentagon appears to be pushing ahead anyway.
    • Grok’s history includes lurid deepfake images of minors, antisemitic outputs, and leaked conversations.

    Senator Elizabeth Warren wants to know how a chatbot that allegedly generated millions of deepfake images—including compromising images depicting minors—ended up with keys to the Pentagon’s most classified systems.

    On Sunday, Warren sent a four-page letter to Defense Secretary Pete Hegseth demanding answers about the Department of Defense’s decision to give Elon Musk’s xAI access to classified military networks, which she said was granted while multiple federal agencies were raising red flags.

    “I write regarding my concerns about the Department of Defense’s (DoD) reported decision to allow Elon Musk’s xAI to access classified systems despite concerns raised by multiple federal agencies, including the National Security Agency (NSA) and the General Services Administration (GSA),” Warren wrote.

    “I am concerned that Grok’s apparent lack of adequate guardrails could pose serious risks to the safety of U.S. military personnel and to the cybersecurity of classified systems,” she added, “especially if Grok is given sensitive military information and access to operational systems.”

    The National Security Agency, Warren’s letter notes, “conducted a classified review” and “determined Grok had particular security concerns that other models didn’t.” The General Services Administration raised similar alarms.

    “Were Grok to leak government information, this could reveal sensitive military plans, U.S. intelligence efforts, and potentially put service members in danger,” Warren wrote.

    Neither agency’s warning appears to have slowed anything down.

    “It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok’s security safeguards, data-handling practices, or safety controls, and whether DoD has evaluated those assurances before reportedly allowing Grok access to classified systems,” the letter reads.

    The timing couldn’t be harder to ignore. The same day Warren’s letter went out, three Tennessee minors filed a federal class action lawsuit against xAI, alleging Grok generated child sexual abuse material based on their real photographs. The complaint accuses xAI of deliberately releasing Grok without industry-standard safeguards, calling it “a business opportunity” to profit from the exploitation of real people, including children.

    Last week, the Washington Post reported that a Department of Government Efficiency (DOGE) employee working under Musk’s oversight copied a sensitive Social Security Administration database containing records on hundreds of millions of Americans, and intended to use that data at their new tech startup.

    Warren’s letter also cites Grok’s history of generating antisemitic content, giving users instructions on how to commit murders and terrorist attacks, and running wild with non-consensual deepfakes despite repeated promises of fixes. Hundreds of thousands of private Grok conversations were also found indexed on Google last August.

    Government testing showed Grok is more susceptible than competing models to “data poisoning” attacks—where manipulated data corrupts the system’s outputs—a serious vulnerability for a tool being considered for weapons development and battlefield intelligence. The Pentagon’s own Chief of Responsible AI circulated internal memos about these risks and stepped down shortly thereafter.

    The deal itself came together under unusual circumstances. xAI was reportedly a late addition to the Pentagon’s AI contract pool, awarded a deal worth up to $200 million last July. The classified access agreement followed in February, just as the DoD was publicly feuding with Anthropic over safety guardrails.

    When asked about it, a Pentagon spokesperson told the Wall Street Journal that the department was “excited to have xAI, one of America’s national champion frontier AI companies onboard and looks forward to deploying Grok to its official AI platform GenAI.mil in the very near future.”

    That context matters. Anthropic had been the only AI company with classified-ready systems, with Claude deployed in real military operations. After Anthropic refused the Pentagon’s demand to make Claude available for “all lawful purposes”—specifically pushing back on autonomous weapons and mass domestic surveillance—the DoD labeled the company a supply chain risk. xAI and OpenAI were announced as replacements.

    There is no record of xAI questioning the reach of the “all lawful purposes” standard. OpenAI was more diplomatic, establishing some boundaries at the server level.

    Warren is asking Hegseth to respond by March 30 with the full text of the xAI agreement, all internal communications about the deal, and answers on whether any testing or evaluation took place before access was granted. One of her 10 questions asks directly whether safeguards exist to ensure Grok does not cause “erroneous targeting decisions” if deployed in critical operational systems.
