Key Points
- South Korea hosts a global summit to draft guidelines for responsible AI use in the military, attended by over 90 countries.
- Discussions include legal reviews for compliance with international law and mechanisms to prevent autonomous weapons from making independent life-and-death decisions.
- The blueprint aims to establish minimum standards for AI in the military, reflecting principles from NATO, the U.S., and other nations, but lacking binding commitments.
- The summit aligns with other international efforts, including U.N. discussions on autonomous weapons under the CCW and a U.S.-led declaration on responsible AI use in the military.
South Korea convened an international summit on Monday in Seoul, bringing together over 90 countries, including the United States and China, to draft a framework for the responsible use of artificial intelligence (AI) in military operations. The two-day event aims to establish principles for the ethical application of AI technologies in the military, although the agreement is not expected to be legally binding.
The gathering is the second of its kind, following a summit in Amsterdam last year at which participating nations, including the U.S. and China, endorsed a modest “call to action” without legal obligations. In Seoul, South Korean Defence Minister Kim Yong-hyun emphasized the transformative potential of AI in the military, citing Ukraine’s use of AI-powered drones in its ongoing conflict with Russia. He described AI in defense as a “double-edged sword” that can significantly enhance military capabilities but also pose serious risks if misused.
South Korean Foreign Minister Cho Tae-yul stated that discussions would focus on ensuring AI applications in the military comply with international laws, including mechanisms to prevent autonomous weapons from making life-and-death decisions without human oversight. The summit aims to create a blueprint for action, reflecting guidelines from NATO, the U.S., and other nations to set minimum standards for the responsible use of AI in military contexts.
However, it remains uncertain how many participating nations will endorse the document, which seeks to establish clearer boundaries on military AI use without imposing binding legal commitments. The summit is part of broader international efforts to regulate military AI. Separately, U.N. member states under the 1983 Convention on Certain Conventional Weapons (CCW) are discussing potential restrictions on lethal autonomous weapons systems to ensure compliance with international humanitarian law.
Additionally, the U.S. government launched a declaration last year promoting responsible military AI use, which 55 countries had endorsed as of August. The Seoul summit, co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, underscores the need for continued multi-stakeholder dialogue in a rapidly evolving field driven by private-sector innovation, with governments playing a key regulatory role. Around 2,000 participants from around the world, including representatives of international organizations, academia, and the private sector, have registered to discuss critical issues such as civilian protection and AI’s role in nuclear weapons control.