The administration of Donald Trump appears to be softening its stance on the use of artificial intelligence tools developed by Anthropic, signaling a potential policy reversal after months of friction between the company and U.S. defense authorities. According to reports, officials are exploring ways to reintroduce Anthropic’s technology—particularly its Claude system—into federal operations.
From Rejection to Re-engagement
The shift comes after a contentious falling-out between Anthropic and the Pentagon. Earlier disputes centered on how the company’s AI systems could be deployed in classified military environments, with concerns raised about security risks and compliance with strict government requirements.
At the time, Trump publicly criticized Anthropic, accusing the firm of pushing ideological agendas and attempting to impose restrictive usage policies on government agencies. His administration had even outlined a phased withdrawal of Anthropic tools from federal use, citing supply chain vulnerabilities and national security concerns.
Now, however, internal discussions suggest a more pragmatic approach may be taking shape. A draft executive action is reportedly under consideration, aimed at reopening pathways for collaboration between the government and the AI developer.
High-Level Talks Signal a Reset
Senior administration figures, including Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent, are said to have recently met with Anthropic CEO Dario Amodei. Sources describe the meeting as constructive, with both sides exploring frameworks for future cooperation.
The White House has also initiated broader consultations with industry leaders to establish guidelines for deploying advanced AI systems responsibly. These discussions reportedly include simulated policy exercises—referred to as “table reads”—to refine best practices and ensure alignment across agencies.
While officials have not confirmed any final decisions, the administration emphasized its ongoing commitment to balancing innovation with national security. Any formal policy changes, they noted, would be announced directly by the president.
The ‘Mythos’ Factor
Driving much of this reconsideration is Anthropic’s latest AI model, known as Mythos. The system has generated significant interest due to its advanced capabilities, particularly in cybersecurity applications.
Although Anthropic previously flagged Mythos as too sensitive for public release—citing its potential misuse in automating cyberattacks—it is also seen as a valuable tool for defensive operations. This dual-use nature has made it especially appealing to government agencies tasked with protecting critical infrastructure.
Despite ongoing legal disputes between the Pentagon and Anthropic over supply chain classifications, some federal bodies are already moving ahead with adoption. Reports indicate that the National Security Agency (NSA) has begun using Mythos in certain capacities.
Internal Divisions and Strategic Pressures
Within the defense establishment, opinions remain divided. Some Pentagon officials continue to oppose reinstating Anthropic, maintaining that earlier concerns have not been fully resolved. Others argue that prolonging the standoff risks putting the U.S. at a technological disadvantage, especially as global competition in AI intensifies.
The growing demand for cutting-edge AI tools across federal agencies appears to be influencing the administration’s recalibration. As cybersecurity threats evolve, access to advanced systems like Mythos is increasingly viewed as a strategic necessity rather than a liability.
A Turning Point for Government-AI Relations
If finalized, the proposed policy shift would mark a significant turning point in the relationship between the U.S. government and frontier AI developers. It reflects a broader recognition that collaboration—despite its challenges—may be essential to maintaining both technological leadership and national security in the AI era.