OpenAI builds first chip with Broadcom and TSMC, scales back foundry ambitions


OpenAI is working with Broadcom and TSMC to build its first in-house chip designed to support its artificial intelligence systems, while adding AMD chips alongside Nvidia chips to meet its growing infrastructure demands, sources told Reuters.

OpenAI, the fast-growing company behind ChatGPT, has examined several options to diversify chip supply and reduce costs. OpenAI considered building everything in-house and raising capital for an expensive plan to build a network of factories known as “foundries” for chip manufacturing.

The company has abandoned its ambitious foundry plans for now due to the costs and time required to build the network, and instead plans to focus on in-house chip design efforts, according to the sources, who requested anonymity because they were not authorized to discuss private matters.

The company’s strategy, detailed here for the first time, highlights how the Silicon Valley startup is looking to secure chip supply and manage costs against larger rivals like Amazon, Meta, Google and Microsoft through industry partnerships and a mix of internal and external approaches. As one of the largest buyers of chips, OpenAI’s decision to source from a variety of chipmakers while developing its own customized chip could have wide-ranging implications for the tech sector.

Broadcom stock surged after the report and ended Tuesday trading up more than 4.5 percent. AMD shares also extended their gains from the morning session and ended the day up 3.7 percent.

OpenAI, AMD and TSMC declined to comment. Broadcom did not immediately respond to a request for comment.

OpenAI, which helped commercialize generative AI that produces human-like responses to queries, relies on substantial computing power to train and run its systems. As one of the largest buyers of Nvidia’s graphics processing units (GPUs), OpenAI uses AI chips both to train models, where the AI learns from data, and for inference, applying AI to make predictions or decisions based on new information.

Reuters previously reported on OpenAI’s chip design efforts; earlier reports detailed talks with Broadcom and others.

According to the sources, OpenAI has been working with Broadcom for months to build its first AI chip focused on inference. Demand for training chips is higher right now, but analysts predict the need for inference chips could surpass it as more AI applications are deployed.

Broadcom helps companies, including Alphabet unit Google, fine-tune chip designs for manufacturing and also supplies parts of the design that help move information on and off chips quickly, which is important in AI systems where thousands of chips work together.

OpenAI is still determining whether to develop or acquire other elements for its chip design, and additional partners may be involved, the two sources said.

The company has assembled a chip team of about 20 people, led by top engineers who previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho.

Through Broadcom, OpenAI has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Co. to produce its first custom-designed chip in 2026, the sources said, adding that the timeline could change.

Currently, Nvidia’s GPUs have more than 80% market share. But shortages and rising costs have prompted major customers like Microsoft, Meta and now OpenAI to seek in-house or external alternatives.

OpenAI’s planned use of AMD chips through Microsoft’s Azure, first reported here, shows how AMD is trying to capture a share of the market dominated by Nvidia with its new MI300X chips. AMD forecasts $4.5 billion in AI chip sales for 2024, following the chip’s launch in the fourth quarter of 2023.

Training AI models and operating services like ChatGPT are expensive. According to sources, OpenAI expects a loss of about $5 billion this year on revenue of $3.7 billion. Compute costs, or the expenses for the hardware, electricity and cloud services required to process large datasets and develop models, are the company’s largest expense, driving efforts to optimize usage and diversify suppliers.

Sources said OpenAI is wary of poaching talent from Nvidia because it wants to maintain a good rapport with the chip maker, with which it remains committed to working, especially for access to its new generation of Blackwell chips.

Nvidia declined to comment.

© Thomson Reuters 2024


