OpenAI is seeking alternatives to Nvidia accelerators
According to Reuters, OpenAI has since 2025 been actively exploring a partial move away from Nvidia accelerators for inference, the process in which an AI model generates responses to user queries.
According to eight sources, the reason is dissatisfaction with the performance of Nvidia chips in certain scenarios, particularly code generation and the integration of AI into third-party systems. OpenAI plans to replace up to 10% of its inference computing capacity with alternative solutions.
In its search for suitable technologies, the company held talks with several promising startups. Particular attention went to developers of accelerators with expanded on-chip memory: such chips can reduce data-access latency and deliver higher inference speeds.
One potential partner was Cerebras. In early 2026, the parties signed a multi-year agreement running through 2028. Under the partnership, OpenAI will gain access to up to 750 MW of computing capacity based on Cerebras chips. The deal is valued at over $10 billion.
However, Cerebras accelerators will not completely replace existing solutions but will complement OpenAI's infrastructure, which already uses Nvidia and AMD GPUs, Google Cloud TPUs, and processors developed jointly with Microsoft and Broadcom.
Simultaneously, OpenAI was in talks with another manufacturer of high-performance accelerators — Groq. However, these discussions were halted after Nvidia entered into a $20 billion licensing agreement with Groq in 2025. As part of the deal, Nvidia gained access to the startup's technologies and brought on board key developers. Consequently, Groq shifted its focus to developing cloud software.
Despite the search for alternatives, OpenAI still views Nvidia as its primary chip supplier for inference. In February 2026, OpenAI CEO Sam Altman emphasized that the company intends to remain a major Nvidia customer, noting that the manufacturer's chips remain the best in the world for AI tasks.
Meanwhile, talks about a potential Nvidia investment in OpenAI, announced in September 2025 (and potentially worth up to $100 billion), remained frozen as of January 2026, according to The Wall Street Journal.
The current situation clearly demonstrates intensifying competition in the AI accelerator market. Although Nvidia maintains a leading position in the segment of training large AI models, developer companies are increasingly seeking ways to optimize the inference process, which plays a key role in the commercialization of AI solutions.