Experts Warn of AGI Risks as AI Becomes More Autonomous

Key Points

  • AI experts Max Tegmark and Yoshua Bengio warn that autonomous AGI could become uncontrollable.
  • Agentic AI, modeled on human intelligence, lets AI systems set and pursue their own goals, which poses risks.
  • Bengio fears AGI could develop self-preservation instincts, leading to competition with humans.
  • Tegmark advocates for “tool AI”—task-specific AI systems with limited autonomy.

Two leading AI scientists, Max Tegmark and Yoshua Bengio, have raised concerns over artificial general intelligence (AGI) development, warning that AI built as autonomous agents could become uncontrollable. Speaking on CNBC’s “Beyond The Valley” podcast, the experts highlighted the risks associated with AI systems that possess agency and the ability to set their own goals.

AGI refers to AI systems that match or surpass human intelligence, and the timeline for its arrival remains uncertain. However, major tech companies are now promoting “agentic AI,” which could enable AI chatbots to act as assistants capable of making independent decisions. According to Bengio, this approach mimics human intelligence by combining deep understanding with autonomous action—making AI both powerful and potentially dangerous.

Bengio likened the creation of AGI to introducing a new intelligent species on Earth without knowing whether its goals align with human needs. He emphasized that self-preservation could emerge as a fundamental behavior in AGI, leading to competition between humans and AI. “Do we want to compete with entities smarter than us? It’s not a reassuring gamble,” he warned.

Tegmark, an MIT professor and president of the Future of Life Institute, believes the solution lies in “tool AI”—systems designed for specific tasks rather than general-purpose agents. He pointed out that a tool AI could, for example, help cure cancer without exhibiting autonomous decision-making. However, he acknowledged that some level of agency might be necessary, such as in self-driving cars, provided they come with strict safety guarantees.

In 2023, Tegmark’s institute called for a pause in developing AI systems that rival human intelligence—a widely discussed proposal that was never implemented. He stressed the need for immediate safety regulations to ensure that AI remains under human control. “It’s insane for humans to build something way smarter than us before we figure out how to control it,” he stated.

EDITORIAL TEAM
The TechGolly editorial team is led by Al Mahmud Al Mamun, who worked as editor-in-chief at a world-leading professional research magazine. Rasel Hossain and Enamul Kabir serve as managing editors. The team collaborates with technologists, researchers, and technology writers, and has substantial knowledge and background in information technology (IT), artificial intelligence (AI), and embedded technology.
