Key Points
- A new report says major AI companies like OpenAI and Meta have inadequate safety measures in place.
- The companies are accused of lacking a robust strategy to control potential superintelligent systems.
- Critics say these tech firms are less regulated than restaurants and are lobbying against new safety rules.
- Top scientists have also called for a ban on the development of superintelligence until it’s proven safe.
A new report from the Future of Life Institute is calling out major AI companies like OpenAI, Anthropic, xAI, and Meta for having safety practices that are “far short of emerging global standards.” An independent panel of experts found that while these firms are racing to create superintelligent AI, none of them has a solid plan to actually control such advanced systems.
This stark warning comes as the public grows more concerned about AI’s impact on society. Alarm has intensified after several troubling cases linked AI chatbots to incidents of suicide and self-harm.
Max Tegmark, the institute’s president and an MIT professor, didn’t mince words. “Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards,” he said.
The criticism doesn’t seem to be slowing the industry down. The AI race is only accelerating, with tech giants committing hundreds of billions of dollars to expand their machine learning capabilities. They are pushing forward to develop systems capable of reasoning and logical thinking that could one day surpass human intelligence.
The Future of Life Institute, a nonprofit that has long raised concerns about the risks of AI, isn’t alone. Last month, a group of prominent scientists, including AI pioneers Geoffrey Hinton and Yoshua Bengio, called for a complete ban on the development of superintelligent AI. They argue that such work should halt until there is broad public support for it and science can prove there is a safe way forward.