Key Points:
- The Trump administration plans to issue an executive order to create a new artificial intelligence working group.
- The White House wants to establish a formal government review process to test new artificial intelligence models before public release.
- The Pentagon recently gave staff a six-month deadline to stop using Anthropic software after contract talks broke down.
- National security agencies will soon receive a new memo forcing contractors to follow the military chain of command.
The Trump administration wants to tighten its control over artificial intelligence. According to a recent New York Times report, the White House plans to issue a new executive order creating a dedicated working group to monitor and regulate the rapidly growing technology sector. As part of this plan, the government wants to establish a mandatory review process for all new artificial intelligence models before companies release them to the public.
White House officials have already started discussing these plans with industry leaders. Last week, government representatives held closed-door meetings with top executives from major technology companies, speaking directly with leaders from Google parent Alphabet, OpenAI, and Anthropic. An anonymous White House official stated that any official announcement will come directly from the president. The official called the current discussions about the executive order pure speculation, but the details point to a serious policy shift.
A formal government review process marks a massive change in how President Donald Trump handles artificial intelligence. In the past, Trump focused heavily on supporting the technology industry. He championed efforts to build data centers across the country and worked to secure the massive amounts of electrical power these facilities need to operate. Now, the administration wants to balance that rapid growth with strict security measures and national oversight.
Right now, the exact details of the review process remain unclear. The New York Times did not specify which federal agencies will handle the testing or how the government will enforce the new rules. However, the report suggested the United States might copy a system currently used in the United Kingdom. The British Artificial Intelligence Security Institute actively researches leading models and makes strict recommendations to ensure companies use the technology safely.
In the United States, several technology labs already voluntarily test their cutting-edge models. They share their programs with the federal Center for Artificial Intelligence Standards and Innovation before launching them to the general public. The new executive order would likely change this voluntary system into a strict, mandatory requirement for all major developers.
The push for government oversight accelerated after a major announcement from Anthropic last month. The company revealed its breakthrough model, called Mythos. Anthropic warned that Mythos possesses a unique ability to find critical weaknesses in computer networks. Because of this ability, the software poses a massive global cybersecurity risk if it falls into the wrong hands. Hackers could potentially use Mythos to attack banks, power grids, and government databases.
To prevent a disaster, Anthropic severely limited access to the Mythos model. Right now, only a handful of trusted financial and technology companies can use the software to test their own computer systems for flaws. Meanwhile, federal officials are pressing to secure wider access to Mythos. They want federal agencies to test the model thoroughly so the government can understand the full scope of the cybersecurity threat.
At the same time, a bitter dispute has fractured the relationship between Anthropic and the Pentagon. The United States Department of Defense heavily relied on Anthropic and its Claude product. Until early this year, Claude stood as the only artificial intelligence tool officially cleared for use on classified military networks. However, contract negotiations between the two sides collapsed in February.
Anthropic demanded extra safety guardrails before allowing the military to use its technology in combat or defense scenarios. The Pentagon rejected these demands. Following the failed talks, the Defense Department declared Anthropic a severe threat to the military supply chain. The military gave its staff a strict six-month deadline to stop using Claude software completely across all secure networks.
The White House is actively trying to resolve these dangerous tensions. On April 17, White House Chief of Staff Susie Wiles met directly with Anthropic Chief Executive Officer Dario Amodei. They discussed several major topics, including the security risks surrounding the Mythos model. The administration wants to keep lines of communication open with top developers, even while the military cuts ties with them.
Federal officials are now drafting a comprehensive memo to guide national security agencies as they adopt artificial intelligence in the future. The new rules will require agencies to buy software from multiple vendors rather than relying on a single company, a strategy that helps minimize vulnerabilities in the supply chain. Furthermore, the memo will require all technology contractors to strictly follow the military chain of command in order to secure lucrative government contracts.