Key Points:
- OpenAI CEO Sam Altman announced updates to the recent military contract.
- The changes clarify the core ethical principles of the technology company.
- The deal blocks intelligence agencies like the NSA from using the software.
- The military must sign a new contract modification before intelligence agencies can gain access.
OpenAI is drawing a strict line on how the government can use its technology. Chief Executive Officer Sam Altman announced on Monday that the company is updating its new contract with the United States military.
Altman shared the news through a message on the social media platform X. He explained that the artificial intelligence company needed to make its ethical principles completely clear to the public. To achieve this goal, OpenAI and the Department of War worked together to add specific limits to their recent agreement.
The most important change concerns intelligence agencies. The new rules bar them from accessing OpenAI services, and Altman specifically named the National Security Agency as one organization that cannot use the current software under any circumstances.
This restriction draws a firm boundary between standard military operations and covert surveillance work. If the government later decides to hand these artificial intelligence tools over to the intelligence community, officials cannot simply share their existing access. Instead, they will need to negotiate and sign an entirely new modification to the contract.
This public update follows a major business announcement from just last week. OpenAI recently secured a massive deal to install its technology inside the military’s secure, classified networks. That initial project sparked immediate questions about how defense officials might use powerful tools like ChatGPT behind closed doors.
The military plans to use these advanced systems to process vast amounts of information quickly. Commanders want artificial intelligence to help organize data, draft reports, and streamline daily logistics. However, the prospect of mixing this technology with national security work has made many privacy advocates nervous.
By adding these new limits, OpenAI hopes to calm public fears about mass surveillance. The company clearly wants to sell its software to major government buyers while keeping tight control over exactly who gets to use it.