
Anthropic works to repair relationship with Trump administration despite Pentagon supply-chain designation
April 17, 2026
Anthropic is working to repair its relationship with the Trump administration, even as the Pentagon has designated the AI company as a supply-chain risk. The company’s CEO recently met with top White House officials in what both sides called productive discussions about collaboration.
The talks signal a split within the administration over how to handle Anthropic. While the Defense Department has effectively blacklisted the company, other agencies appear eager to use its AI technology.
High-level White House meeting signals warmer relations
Treasury Secretary Scott Bessent and White House Chief of Staff Susie Wiles met with Anthropic CEO Dario Amodei on Friday, according to reports. The White House described this as an “introductory meeting” that was “productive and constructive.”
“We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology,” the White House said in a statement.
Anthropic confirmed the meeting in its own statement, saying Amodei had a “productive discussion on how Anthropic and the U.S. government can work together on key shared priorities such as cybersecurity, America’s lead in the AI race, and AI safety.” The company added that it’s “looking forward to continuing these discussions.”
Banks encouraged to test new AI model
Earlier signs pointed to a thaw in relations between Anthropic and parts of the administration. Reports emerged that Bessent and Federal Reserve Chair Jerome Powell were encouraging major bank executives to test Anthropic’s new Mythos model.
Anthropic co-founder Jack Clark seemed to confirm the improving relationship, calling the Pentagon’s supply-chain designation a “narrow contracting dispute” that wouldn’t stop the company from briefing the government on its latest AI models.
Pentagon dispute stems from military AI disagreement
The conflict between Anthropic and the Defense Department began after failed negotiations over military use of the company’s AI models. Anthropic wanted to maintain safeguards preventing the use of its technology for:
- Fully autonomous weapons systems
- Mass domestic surveillance operations
When talks broke down, the Pentagon designated Anthropic as a supply-chain risk — a label typically reserved for foreign adversaries that severely limits government agencies’ ability to use the company’s technology. Anthropic is now challenging the designation in court.
OpenAI quickly stepped in to announce its own military partnership, though this decision sparked some consumer backlash.
Administration split on Anthropic’s role
The White House meeting suggests the Pentagon’s hostility toward Anthropic isn’t shared across the administration. An administration source told reporters that “every agency” except the Department of Defense wants to use Anthropic’s technology.
This division highlights the broader challenge facing the government as it tries to balance national security concerns with the desire to use advanced AI capabilities. Anthropic’s Claude models are considered among the most capable AI systems available, making them attractive to government agencies seeking to improve their operations.
The outcome of this dispute could set important precedents for how the U.S. government works with AI companies that impose ethical restrictions on their technology’s use.