
AMD (AMD.US) unveils two generations of flagship AI chips to challenge Nvidia's dominance: the MI400 could be a key inflection point

Zhitongcaijing·06/13/2025 12:57:01

Zhitong Finance App learned that on June 12, local time, AMD (AMD.US), the world's second-largest AI chip supplier, unveiled its strongest AI product lineup to date — flagship data center AI chips, an AI software stack, AI rack-level infrastructure, and AI network cards and DPUs — making plain its ambition to compete with Nvidia (NVDA.US).

Key products released or previewed by AMD include the AMD Instinct MI350 series of data center AI chips, the AMD Instinct MI400 series of data center AI chips (to launch next year), the new ROCm 7.0 AI software stack, and the next-generation “Helios” AI rack-level infrastructure (also due next year).

Regarding the launch, Morgan Stanley said that what could become AMD's real “long-term inflection point” is the upcoming MI400. The bank's analyst Joseph Moore wrote in a client note: “AMD released the MI350 as expected, but the focus remains on the rack-scale MI400/450 products launching next year. If that product is delivered on schedule, it could have an even greater impact.” The analyst maintained a hold rating on AMD, with a target price of $121.

The analyst added that his initial view is that AMD's MI400 series chips and rack architecture are comparable to Nvidia's Vera Rubin series. He said: “The long-term outlook for AI has considerable upside, but near-term products are not enough to give us high confidence in that outlook. The MI400 may change the situation, but for now it remains a story of letting the results speak.”

According to AMD, the Instinct MI400 series of data center AI chips, launching next year, is purpose-built for large-scale training and distributed inference. It doubles peak compute to 40 PFLOPS at FP4 precision and carries 432GB of HBM4 memory, with memory bandwidth reaching 19.6 TB/s and per-GPU scale-out bandwidth of 300 GB/s, enabling high-bandwidth interconnects across racks and clusters. It is designed to train and run large models with hundreds of billions to trillions of parameters. Compared with the MI355X, the MI400 series delivers up to 10 times the performance.

It is worth noting that OpenAI co-founder and CEO Sam Altman appeared as a surprise guest and revealed that the OpenAI team has already done some work on the MI300X and MI450. Altman commented that the MI450's memory architecture is ready for inference, and he believes it will also be an excellent choice for training.

Commenting on this, analyst Joseph Moore said Altman's statement was notable and saw it as confirmation of AMD's future opportunity. “Given OpenAI's role in the cloud ecosystem, Sam Altman's appearance may add credibility, in investors' eyes, to AMD's projected ‘tens of billions of dollars in annual AI revenue',” he said.

Furthermore, although the launch was largely in line with expectations (there was little new on the hardware side), AMD placed special emphasis on the 25 acquisitions and investments it has completed over the past 12 months. Analysts noted that this “shows the extent of AMD's ability to integrate resources, and execution will remain the key factor as it tries to win market share from competitors with trillion-dollar market capitalizations.”