Key Points
- OpenAI forms an independent body to oversee AI safety and security.
- The committee proposed creating an ISAC to share AI threat intelligence.
- Zico Kolter, a Carnegie Mellon professor, will lead the committee.
- OpenAI commits to greater openness about the risks and capabilities of its AI models.
Microsoft-backed OpenAI has established an independent Safety and Security Committee to oversee the development and deployment of its AI models. The move follows the committee's recommendations, which OpenAI has now shared publicly for the first time, and is intended to strengthen safety practices as the company scales its AI technologies.
OpenAI, known for developing the viral chatbot ChatGPT, created the Safety and Security Committee in May 2024 to evaluate and refine its safety protocols. The launch of ChatGPT in late 2022 generated significant global interest in AI, bringing both excitement and concern over ethical usage, security, and potential bias in AI systems. In response to these concerns, the committee was tasked with ensuring that OpenAI's models are developed and deployed with strong security and ethical oversight.
Among the committee's key recommendations is the creation of an Information Sharing and Analysis Center (ISAC) for the AI industry, which would enable organizations in the sector to share threat intelligence and collaborate on cybersecurity issues.
Zico Kolter, a professor and director of the machine learning department at Carnegie Mellon University and a member of OpenAI’s board, will chair the independent committee. OpenAI also announced plans to enhance internal security operations, improve information segmentation, and increase staffing to support around-the-clock security monitoring.
As part of its long-term goals, OpenAI has committed to becoming more transparent about the capabilities and risks of its AI models. The company recognizes the growing concerns around AI safety and aims to lead the industry in developing best practices for the ethical use of AI.
In August 2024, OpenAI reached an agreement with the U.S. AI Safety Institute to collaborate on research, testing, and evaluation of its AI models, further underscoring its commitment to responsible AI development.