US Plans to Host Global AI Safety Summit on November 20-21

Key Points

  • The U.S. will host a global AI safety summit in San Francisco on November 20-21.
  • The meeting will bring together members including the European Union, Japan, and the United Kingdom to discuss safe AI development.
  • The Commerce Department proposes detailed AI reporting requirements, while legislative action in Congress remains stalled.
  • President Biden’s October 2023 executive order mandates safety testing for high-risk AI systems.

The Biden administration announced plans for a global summit on artificial intelligence (AI) safety, set to take place on November 20-21 in San Francisco. Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host the summit, the first meeting of the International Network of AI Safety Institutes. The goal is to enhance global cooperation to ensure the safe and responsible development of AI technology.

The network includes members from Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States. These members have committed to working together on AI safety, innovation, and inclusivity, with a focus on generative AI technologies. Generative AI, which can create text, images, and videos from prompts, has raised concerns over its potential to disrupt industries, interfere with elections, and even pose existential risks.

This global summit follows Raimondo’s earlier announcement of the AI safety network during the AI Seoul Summit in May, where participating nations agreed to prioritize these issues. The San Francisco meeting aims to kickstart technical collaboration among countries before the AI Action Summit in Paris in February 2025.

Raimondo emphasized the importance of working closely with allies to develop shared rules for AI governance based on safety, security, and trust. The San Francisco summit will bring technical experts from each member country’s AI safety institute or scientific office to discuss work areas and foster collaboration on AI safety initiatives.

In addition to international collaboration, the U.S. Commerce Department recently proposed new reporting requirements for advanced AI developers and cloud computing providers to ensure the technology is resilient against cyberattacks. These efforts are part of a broader regulatory push as legislative action on AI in Congress continues to stall. In October 2023, President Biden signed an executive order requiring developers of AI systems that pose risks to U.S. national security, the economy, or public safety to share safety test results with the government.

