OpenAI has secured a new deal with Broadcom to develop custom chips and systems designed to power its AI infrastructure. The collaboration aims to accelerate the expansion of OpenAI’s compute capacity, which has become crucial as the company’s AI workloads grow. These chips, known as AI accelerators, will be deployed in both OpenAI’s data centers and its partners’ facilities.
The two companies began working together 18 months ago. Broadcom will begin rolling out the custom AI racks in the second half of 2026, with the rollout expected to be complete by 2029. The deal is expected to deliver a staggering 10 gigawatts of AI compute power and is worth multiple billions of dollars, according to reports from The Wall Street Journal.
This new partnership follows OpenAI’s ongoing collaboration with other major players in the chipmaking industry. OpenAI has previously signed deals with NVIDIA and AMD, securing massive amounts of compute power from both companies. NVIDIA’s investment in OpenAI totals $100 billion, providing 10 gigawatts of AI infrastructure. Meanwhile, the deal with AMD covers 6 gigawatts, with OpenAI expected to pay tens of billions for that capacity. These partnerships, along with the Broadcom deal, highlight OpenAI’s aggressive expansion plans for AI compute infrastructure.
In addition to the custom chip agreements, OpenAI has also partnered with Oracle, which will provide 4.5 gigawatts of data center capacity as part of OpenAI’s Stargate Project, further boosting its computational power. All of these deployments are expected to start in 2026, marking a pivotal point in OpenAI’s pursuit of the vast compute capacity required for its ambitious AI models.
OpenAI’s Massive Compute Ambitions and Challenges Ahead
OpenAI’s compute expansion goals are set to skyrocket over the next several years. CEO Sam Altman recently shared with employees that the company aims to build out 250 gigawatts of compute power by 2033. This would represent a significant leap from the roughly 2 gigawatts OpenAI expects to have online by the end of 2025. To put this into perspective, 250 gigawatts would be roughly a fifth of the United States’ total energy generation capacity, which stands at about 1,200 gigawatts.
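As a quick sanity check on that comparison, here is a back-of-envelope sketch using only the figures quoted above (250 gigawatts of target compute against roughly 1,200 gigawatts of US generation capacity):

```python
# Back-of-envelope check using the figures quoted above (reported numbers, not independent data).
target_compute_gw = 250            # OpenAI's stated buildout goal for 2033
us_generation_capacity_gw = 1_200  # approximate total US generation capacity

share = target_compute_gw / us_generation_capacity_gw
print(f"Target compute is {share:.0%} of US generation capacity")  # ~21%, i.e. roughly a fifth
```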
Achieving such a monumental increase in compute capacity will require substantial financial investment. Based on current market estimates, acquiring 250 gigawatts could cost OpenAI around $10 trillion. Altman acknowledged the immense cost but mentioned that the company would explore new financing tools to make this possible. However, he did not provide further details on how OpenAI plans to raise this capital.
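For a sense of scale, the implied unit cost from those two figures works out to roughly $40 billion per gigawatt. The sketch below is an illustrative derivation from the article’s numbers, not a reported price:

```python
# Implied unit cost derived from the article's figures (illustrative, not a reported price).
estimated_total_cost_usd = 10e12   # ~$10 trillion estimated for the full buildout
target_capacity_gw = 250           # compute capacity targeted by 2033

cost_per_gw_usd = estimated_total_cost_usd / target_capacity_gw
print(f"Implied cost: ${cost_per_gw_usd / 1e9:.0f} billion per gigawatt")  # ~$40 billion per GW
```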
Despite OpenAI’s significant deals with chipmakers and cloud providers, raising $10 trillion is a massive challenge. While companies like NVIDIA and Microsoft have invested heavily in OpenAI, no entity currently possesses the resources to fund this level of compute infrastructure. OpenAI’s revenue, estimated to reach $13 billion this year, falls far short of covering the costs associated with these ambitious plans. The company will need to find innovative financial solutions to bridge the gap between its current earnings and its enormous compute ambitions.
As OpenAI expands its AI capabilities, it will face significant financial challenges. The company’s ongoing partnerships and investments in cutting-edge technologies demonstrate its commitment to leading the AI race. However, its success will depend not only on securing the necessary infrastructure but also on finding innovative ways to finance its rapidly growing vision.
