Key Points
- OpenAI is developing its own AI chip to reduce its reliance on Nvidia. TSMC will manufacture the first chip, and mass production is expected in 2026.
- The in-house AI chip is initially designed to run AI models (inference), not train them.
- Richard Ho leads OpenAI’s chip team, which has grown to 40 engineers.
- Meta, Microsoft, and OpenAI are all seeking alternatives to Nvidia's dominant position in AI hardware.
OpenAI is accelerating its efforts to develop its own artificial intelligence (AI) chips to lessen its reliance on Nvidia's dominant GPU technology. According to sources familiar with the matter, the company is finalizing the design for its first in-house AI chip and plans to send it to Taiwan Semiconductor Manufacturing Co. (TSMC) for fabrication in the coming months.
The process, known as “taping out,” marks the handoff of a finalized chip design to the foundry for manufacturing. OpenAI expects to mass-produce the chip at TSMC by 2026. However, the first version may not work perfectly and could require additional design iterations. Despite these challenges, OpenAI’s rapid progress suggests it could introduce an alternative to Nvidia’s chips as early as this year.
The company views this move as a way to strengthen its negotiating position with existing chip suppliers while preparing for future AI hardware advancements. In recent months, OpenAI’s chip team, led by Richard Ho, has doubled in size to 40 engineers and is working in collaboration with Broadcom. Ho, who previously played a key role in Google’s custom AI chip program, has led the project for over a year.
Building a custom AI chip is costly, with estimates reaching $500 million for a single version. This cost could double when factoring in software and supporting infrastructure. Even major tech firms like Microsoft and Meta have faced difficulties producing effective AI chips despite years of investment.
Meta plans to invest $60 billion in AI infrastructure this year, while Microsoft is allocating $80 billion for 2025. Nvidia currently controls around 80% of the AI chip market, making it the go-to provider for companies like OpenAI, Google, and Meta. However, the need for greater control and cost efficiency is driving these firms to seek alternatives.
OpenAI’s first-generation chip will focus on running AI models (inference) rather than training them. The company’s long-term goal is to expand its chip development efforts, though doing so would require hiring hundreds more engineers. The chip will be produced using TSMC’s advanced 3-nanometer process and will feature high-bandwidth memory, similar to Nvidia’s designs.
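The split between running a model and training one is, at the software level, the difference between a forward pass and a full forward-backward-update loop. The sketch below is a generic, hypothetical PyTorch illustration of that distinction; it does not describe OpenAI's chip or software stack, and the model and sizes shown are placeholders.

```python
# Illustrative only: contrasts inference (the workload the first-generation
# chip reportedly targets) with training (a heavier workload that today still
# runs predominantly on Nvidia GPUs). Not OpenAI's actual stack.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))
batch = torch.randn(8, 512)

# Inference ("running" a model): forward pass only, no gradients are stored.
model.eval()
with torch.no_grad():
    predictions = model(batch)

# Training: forward pass plus backward pass and weight updates, which require
# storing activations and gradients and demand far more memory bandwidth.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
targets = torch.randn(8, 512)
loss = nn.functional.mse_loss(model(batch), targets)
loss.backward()       # compute gradients
optimizer.step()      # update weights
optimizer.zero_grad()
```

Inference-only silicon can skip much of the machinery needed for the second half of this loop, which is one reason first-generation custom accelerators typically target serving models before attempting training.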