OpenAI Establishes Independent Safety Committee to Oversee AI Security

Key Points

  • OpenAI forms an independent body to oversee AI safety and security.
  • The committee proposed creating an ISAC to share AI threat intelligence.
  • Zico Kolter, a Carnegie Mellon professor, will lead the committee.
  • OpenAI commits to greater openness about the risks and capabilities of its AI models.

Microsoft-backed OpenAI has introduced an independent Safety and Security Committee to supervise the development and deployment of its AI models. The move follows the committee's recommendations, which were publicly shared for the first time, and aims to bolster safety practices as OpenAI expands its AI technologies.

OpenAI, known for developing the viral chatbot ChatGPT, created the Safety and Security Committee in May 2024 to evaluate and refine its safety protocols. The launch of ChatGPT in late 2022 generated significant global interest in AI, bringing both excitement and concerns over ethical usage, security, and potential biases in AI systems. In response to these concerns, the committee is tasked with ensuring that OpenAI's AI models are developed and deployed with rigorous security and ethical oversight.

Among the committee's key recommendations is the creation of an Information Sharing and Analysis Center (ISAC) for the AI industry. This initiative would promote collaboration and the sharing of threat intelligence and cybersecurity information among organizations working in the AI sector.

Zico Kolter, a professor and director of the machine learning department at Carnegie Mellon University and a member of OpenAI’s board, will chair the independent committee. OpenAI also announced plans to enhance internal security operations, improve information segmentation, and increase staffing to support around-the-clock security monitoring.

As part of its long-term goals, OpenAI has committed to becoming more transparent about the capabilities and risks of its AI models. The company recognizes the growing concerns around AI safety and aims to lead the industry in developing best practices for the ethical use of AI.

In August 2024, OpenAI partnered with the U.S. government to research, test, and evaluate its AI models, further underscoring its dedication to responsible AI development.

EDITORIAL TEAM

The TechGolly editorial team is led by Al Mahmud Al Mamun, who previously served as Editor-in-Chief at a world-leading professional research magazine. Rasel Hossain and Enamul Kabir serve as Managing Editors. Our team incorporates technologists, researchers, and technology writers with substantial knowledge and background in Information Technology (IT), Artificial Intelligence (AI), and Embedded Technology.

We are passionate about and dedicated to delivering our readers the latest information and insights into technology innovation and trends. Our mission is to help industry professionals and enthusiasts understand the complexities of technology and the latest advancements.
