OpenAI Tech Joins Pentagon Push for Drone Swarms


Key Points:

  • OpenAI is partnering with defense firms for a military contest.
  • The technology translates voice commands for drone swarms.
  • The Pentagon is offering $100 million for the best prototype.
  • Critics worry about AI errors leading to battlefield mistakes.

OpenAI is taking another step into the military sector. The company is partnering with defense contractors competing to build software for the Pentagon that controls swarms of drones. According to sources familiar with the deal, the military wants a system where commanders can direct groups of drones using only their voice.

The role of OpenAI’s technology is specific but limited. Sources say the AI will act as a translator. When a commander gives a verbal order, such as “move five kilometers east,” the AI will convert those words into digital code that the machines understand. The sources emphasized that the AI will not fly the drones, choose targets, or fire weapons.
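The article does not describe the actual interface, but the translation step it outlines can be illustrated with a minimal sketch. Everything below is hypothetical: the command grammar, the `parse_command` function, and the output fields (`action`, `distance_km`, `heading_deg`) are illustrative assumptions, not the real Pentagon or OpenAI system, which would use a language model rather than a hand-written parser.

```python
import re

# Hypothetical sketch only: translates a simple verbal order such as
# "move five kilometers east" into a structured directive a drone
# controller could consume. The real system is not publicly specified.
WORD_NUMBERS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
                "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}
HEADINGS = {"north": 0, "east": 90, "south": 180, "west": 270}

def parse_command(text: str) -> dict:
    """Convert a spoken-style order into a structured command dict."""
    m = re.match(r"move (\w+) kilometers? (\w+)", text.strip().lower())
    if not m:
        raise ValueError(f"unrecognized command: {text!r}")
    qty, direction = m.groups()
    distance = WORD_NUMBERS.get(qty) or float(qty)  # accept "five" or "5"
    return {"action": "move",
            "distance_km": distance,
            "heading_deg": HEADINGS[direction]}

print(parse_command("Move five kilometers east"))
# {'action': 'move', 'distance_km': 5, 'heading_deg': 90}
```

The point of the sketch is the separation of concerns the sources describe: the language layer only produces a structured order, while flight control, targeting, and weapons remain outside its scope.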

This project is part of a $100 million challenge launched by the Pentagon in January. The goal is to create prototypes that allow a single human to command a mass of autonomous drones in the air or at sea. Documents show that OpenAI’s logo appeared on at least two successful submissions. One bid includes Applied Intuition, a defense contractor that already works closely with OpenAI.

OpenAI clarified that it did not enter the contest itself. A spokesperson said partners chose to use OpenAI’s open-source models for their bids. The company promised to ensure that any use of its tools follows safety policies. This news follows a recent announcement that the Pentagon will make ChatGPT available to 3 million military employees.

This move represents a shift for OpenAI. CEO Sam Altman previously downplayed the idea of his company helping build weapon systems. However, the startup changed its policies in 2024 to allow for more national security work.

The project carries risks. Large language models are known to “hallucinate,” sometimes fabricating information or misinterpreting input. Some experts worry that if the AI mistranslates a command during combat, the results could be disastrous. Despite these concerns, the Pentagon is moving fast to deploy AI agents intended to improve the speed and lethality of military operations.

EDITORIAL TEAM
Al Mahmud Al Mamun leads the TechGolly editorial team. He previously served as Editor-in-Chief of a leading professional research magazine. Rasel Hossain serves as Managing Editor. The team includes technologists, researchers, and technology writers with substantial expertise in Information Technology (IT), Artificial Intelligence (AI), and Embedded Technology.