Nvidia has been dominating the market for chips capable of training generative AI programs, but AMD is now trying to claim its share of the pie through a new enterprise-grade GPU.
The company today announced the AMD Instinct MI300X, a so-called “accelerator” chip designed to train large language models that can power programs such as OpenAI’s ChatGPT.
“AI is really the defining technology that’s shaping the next generation of computing, and frankly it’s AMD’s largest and most strategic long-term growth opportunity,” said AMD CEO Lisa Su during the product’s unveiling.
The MI300X tries to beat the competition by featuring an "industry-leading" 192GB of HBM3 memory and being built on AMD's data center-focused CDNA 3 architecture, which is designed for AI workloads. Customers will be able to pack eight MI300X accelerators into a single system, enabling the GPUs to train larger AI models than competing products.
“For the largest models, it actually reduces the number of GPUs you need, significantly speeding up the performance, especially for inference, as well as reducing total costs of ownership,” Su said.
The MI300X is also based on AMD's other AI-focused chip, the MI300A, which is slated to arrive in supercomputers. The difference is that the company swapped out the MI300A's Zen 4 CPU chiplets for additional GPU chiplets, turning the MI300X into a pure GPU processor.
“You might see it looks very, very similar to MI300A, cause basically we took three chiplets off and put two (GPU) chiplets on, and we stacked more HBM3 memory,” Su added. “We truly designed this product for generative AI.”
In a demo, Su also showed a single MI300X equipped with 192GB of memory running the open-source large language model Falcon-40B. The program was asked to write a poem about San Francisco, and it generated the text in several seconds.
“What’s special about this demo is it’s the first time a large language model of this size can be run entirely in memory on a single GPU,” she added.
The new hardware will arrive as Nvidia expects its sales to skyrocket in the coming quarters, thanks to demand for generative AI chatbots. To develop the technology, companies across the industry have been buying Nvidia's A100 GPU, which can cost around $10,000. Nvidia also sells the H100 GPU, which can now be configured with up to 188GB of HBM3 memory.
AMD says it’ll start sampling its own rival product, the MI300X, to key customers starting in Q3. The company adds it sees the market for data center chips designed for generative AI reaching $150 billion in 2027, up from a mere $30 billion this year.