Nvidia has been the biggest highlight of the artificial intelligence computing infrastructure field and of the unprecedented AI investment wave sweeping the world. But looking at the US stock market as a whole in 2025, five AI computing stocks that supply the most critical computing systems for the large-scale AI data centers under construction worldwide have posted even more astonishing gains — Lumentum (LITE.US), Western Digital (WDC.US), Micron Technology (MU.US), Seagate (STX.US), and Celestica (CLS.US).
In addition to the AI GPU and AI ASIC computing clusters led by the two AI chip leaders, Nvidia and Broadcom, large-scale AI data centers (such as the "Stargate" project) also urgently need other core AI computing components: memory chips and modules, fiber-optic cables, high-performance Ethernet switches, optical interconnect equipment, customized power-delivery and high-power chips, and data-center-grade high-performance CPUs.
Lumentum — a core participant in the "Google AI chain" and an indispensable optical module supplier to the "Nvidia AI computing chain" — is the biggest winner in the global AI computing supply chain in 2025. The company's stock has risen nearly 400% so far this year, while Celestica, Western Digital, Seagate, and Micron Technology have each gained more than 200% since the start of 2025.
Nvidia (NVDA.US) has been the biggest winner of recent years' global corporate superboom in AI computing infrastructure. Since the end of 2022, Nvidia's market value has increased roughly 13-fold; as of Wednesday it stood at about $4.6 trillion, and earlier this year it even briefly broke through the epic $5 trillion mark.
Although Nvidia's stock still rose 40% in 2025, and its core business of supplying H100/H200 and Blackwell Ultra AI GPUs maintained year-over-year growth of more than 60%, global investors are clearly betting aggressively that both the earnings growth and the valuation expansion of these five data-center-focused AI computing companies will far outpace Nvidia, whose market value has reached a phase peak.
The four largest US technology companies — Amazon, Microsoft, Google, and Facebook parent Meta — collectively spent about $380 billion on AI infrastructure this year, and based on guidance from their management teams, Wall Street analysts' combined models estimate that their overall AI infrastructure spending may increase by about 50% from 2025. This is why Wall Street giants including Goldman Sachs, J.P. Morgan, and Morgan Stanley are extremely bullish that core players in the AI computing supply chain — Nvidia, Broadcom, Micron, Seagate, Western Digital, Lumentum, and TSMC — will continue their "super bull market trajectory."
The chip-stock bull market narrative is far from over! Three main lines may run through 2026: AI chips, memory, and optical interconnects
The latest research report from Wall Street giant Morgan Stanley argues that as the unprecedented AI infrastructure boom continues, the "long-term bull market logic" for chip stocks remains intact. Next year, chip stocks centered on AI processors and memory may again be among the brightest sectors in the US market, and the AI data-center optical interconnect supply chain may grow into an even more powerful next-generation force.
The release of the Gemini 3 series immediately brought enormous AI token-processing volume, further confirming what Wall Street has been saying: the AI boom is still in an early, accelerating build-out stage in which computing infrastructure supply falls short of demand. Bank of America said in its research report that the global AI arms race is still in its "early-to-middle stage," and that despite recent sharp pullbacks in popular chip stocks such as Nvidia and Broadcom, investors should keep their attention on the industry leaders. According to the 2026 semiconductor industry outlooks recently released by these two Wall Street giants, AI chips, memory chips, and optical interconnect technology are the main investment lines in the chip sector that both banks favor.
"We see 2026 as the midpoint of an 8-10 year journey of upgrading traditional IT infrastructure to suit accelerated and AI workloads," Bank of America analysts wrote. "Greater scrutiny of AI return on investment and cash flow at hyperscale cloud providers may cause share-price swings, but this will be offset by faster-iterating LLM developers and by AI factories serving corporate and sovereign customers. We expect semiconductor sales to approach $1 trillion for the first time in 2026, growing around 30%, while fab equipment sales achieve nearly double-digit year-on-year growth."
According to Wall Street firms Morgan Stanley, Citi, Loop Capital, and Wedbush, the global AI infrastructure investment wave centered on AI chip computing hardware is far from over — it is only beginning. Driven by an unprecedented "AI inference demand storm," this round of AI infrastructure investment, expected to run through 2030, may reach $3 trillion to $4 trillion.
According to the latest industry outlook from World Semiconductor Trade Statistics (WSTS), global chip demand is expected to keep expanding strongly in 2026, and MCU and analog chips — where demand had weakened continuously since the end of 2022 — are also expected to enter a strong recovery.

WSTS expects that, after a strong rebound in 2024, the global semiconductor market will grow 22.5% in 2025 to $772.2 billion, above the forecast WSTS gave in the spring; in 2026 the market is expected to expand further to $975.5 billion — up about 26% year over year on top of 2025's strong growth, and closing in on the $1 trillion market size SEMI projects for 2030.

Taking stock of the top five winners of the 2025 global AI investment boom
As 2026 arrives, global investors focused on the AI computing supply chain will be watching these five leaders' results and valuations closely. On the one hand, their current share prices already embed high expectations; on the other, they represent the three new AI computing investment themes — memory chips, optical interconnects, and the Google TPU computing chain — which may also be the main investment lines of 2026. Optical interconnects and memory stand out in particular: both the "OpenAI chain" and the "Google AI chain," represented by OpenAI and Google respectively, are inseparable from these two core components of AI computing, so whether the two camps fight to a standoff or one takes an absolute lead, either outcome is a major positive for optical interconnect and memory vendors.
Lumentum
Headquartered in San Jose, California, Lumentum has long focused on manufacturing optical switches, optical transceivers, high-speed optical module components, and other laser-optic parts for fiber networks. Its customers have traditionally been telecom operators and consumer electronics makers such as Apple, which used Lumentum's optical components in its Face ID sensors.
However, as the unprecedented AI boom triggered by ChatGPT swept the world, AI server clusters began requiring optical connectivity on a massive scale. Every AI GPU/ASIC in a large AI rack system needs a high-speed connection to every other GPU/ASIC; future training/inference systems must scale out rapidly, requiring high-speed optical links between racks or rack groups; eventually, the entire AI data center must be connected by fiber. The sketch below illustrates why link counts explode with cluster size.
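To make the scaling intuition concrete, here is a minimal illustrative calculation (hypothetical cluster sizes, not any vendor's actual topology): a literal point-to-point full mesh grows quadratically, which is exactly why switched optical fabrics exist.

```python
# Illustrative only: pairwise link count if every accelerator had a direct
# high-speed link to every other one -- n * (n - 1) / 2 grows quadratically.
def full_mesh_links(n_accelerators: int) -> int:
    return n_accelerators * (n_accelerators - 1) // 2

for n in (8, 72, 1024):  # hypothetical cluster sizes
    print(f"{n:>5} accelerators -> {full_mesh_links(n):>8,} pairwise links")
# 8 -> 28; 72 -> 2,556; 1024 -> 523,776
# Real deployments use switched fabrics (Ethernet/InfiniBand or OCS)
# precisely because a literal full mesh is infeasible at this scale.
```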
More importantly, the high-speed equipment provided by Lumentum, which focuses on the optical interconnect route, is indispensable to both the "Google AI chain" and the "Nvidia AI chain." Optical interconnects and memory chips may be the two strongest beneficiary themes of a Google-OpenAI rivalry that could run for years.
In its Jupiter data-center network, Google has deployed OCS (optical circuit switch) clusters at scale to support its TPU systems and large-scale training/inference workloads. Lumentum's OCS products such as the R300/R64 target exactly this "hyperscale cloud + AI/ML data-center network" segment: MEMS mirrors establish direct optical paths between endpoints, bypassing intermediate electrical switching and optical-electrical-optical (OEO) conversion, with a focus on high port counts, low latency, and low power consumption. Lumentum is also an important supplier of 400G/800G high-speed digital optical modules and optical interconnect chips, which the company positions as core devices "providing scalable interconnect bandwidth for AI and hyperscale cloud data centers" — likewise indispensable to the Nvidia-led "InfiniBand + Spectrum-X/Ethernet" high-performance networks behind GPU clusters.
As of Wednesday's US close, Lumentum's stock had surged nearly 400% since the start of 2025, lifting the company's market capitalization above $28 billion. In the most recent quarter, sales rose 58% year over year to $533 million.
Michael Hurlston, Lumentum's CEO, said on the November earnings call: "Our growth is strongly driven by demand for AI computing, including the use of our laser chips, optical transceivers, and optical module components inside data centers, and in the long-haul networks connecting those data centers."
Lumentum's total revenue is expected to jump 58% in the fiscal year ending in June. Wall Street analysts expect growth to slow over the following two years — to 32% and 25%, respectively — which is still strong compared with other AI computing vendors.
Western Digital
Western Digital is one of the world's top three HDD makers, alongside Seagate and Toshiba. The 55-year-old company's stock has risen nearly 300% since the start of 2025.
In large-scale AI data centers such as "Stargate," beyond the computing supplied by Nvidia/Broadcom GPUs and ASICs, operators also need ever more space to store the astronomical volumes of data tied to large AI language models with enormous parameter counts. Simply put, AI data centers require far larger HDD and enterprise-SSD fleets than traditional CPU-centric data centers.
Judging from the roughly $1.4 trillion in cumulative AI infrastructure agreements OpenAI has signed and the progress of the "Stargate" build-out, these super-projects all urgently need large-scale, enterprise-grade, high-performance data-center storage (centered on HBM systems, enterprise SSDs/HDDs, and server-grade DDR5). That surge is driving rapid growth in both demand and selling prices, and lifting the shares of major enterprise-storage vendors such as Western Digital, Seagate, and Pure Storage.
Western Digital's capacity spans both hard disk drives (HDDs, which store terabytes or more on spinning platters) and enterprise-grade SSDs (which store data on NAND flash behind a controller chip), but the company has long been best known worldwide for HDDs, and its current capacity is concentrated there.
Western Digital's nearline/data-center HDDs follow the ePMR + UltraSMR route, with ultra-high-capacity drives such as 32TB SMR and 24TB CMR models (the Ultrastar DC series) in volume production. Serving object storage and data-lake scenarios, they form the cost-effective capacity layer for the huge volumes of data generated by AI training and inference.
Western Digital CEO Irving Tan said on the company's October earnings call: "Data is the core fuel that drives AI, and HDDs provide the most reliable, scalable, and cost-effective data storage solution."
In the most recent quarter, Western Digital's revenue rose a substantial 27% to $2.82 billion. The company said selling more data-center drives will significantly lift profitability, because AI data centers demand larger, pricier high-performance drives.
Wall Street analysts generally expect Western Digital's total revenue to grow by about 23% in the 2026 fiscal year, but growth will slow to 13% in 2027.
In February of this year, Western Digital spun off its NAND flash business as SanDisk. SanDisk, now an enterprise-SSD leader, currently carries a market value of about $35 billion — more than half of Western Digital's.
Micron Technology
Micron is one of the world's top three memory chip makers, alongside South Korea's Samsung Electronics and SK Hynix. As the only major US-based memory manufacturer, its fundamental outlook has clearly benefited from the Trump administration's "reshoring" push, and Micron accordingly enjoys a unique "US memory premium" over Samsung and Hynix in the stock market.
AI server clusters need vast amounts of DRAM to hold and process extremely large AI models: the AI GPU systems built by Nvidia or AMD carry tens of gigabytes of high-bandwidth memory per accelerator. Since the start of this year, AI data centers have absorbed the vast majority of DRAM/NAND production capacity, creating a severe global memory shortage and continuously driving up prices.
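A rough, purely illustrative sizing exercise shows why model scale translates directly into memory demand (hypothetical figures; a real deployment also needs memory for KV caches, activations, and redundancy):

```python
import math

# Memory needed just to hold the weights of a hypothetical 70B-parameter LLM.
params = 70e9                  # 70 billion parameters (illustrative)
bytes_per_param = 2            # BF16/FP16 = 2 bytes each
weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: {weights_gb:.0f} GB")      # ~140 GB

hbm_per_gpu_gb = 80            # e.g. an 80 GB HBM accelerator (illustrative)
print(f"Minimum accelerators just for weights: "
      f"{math.ceil(weights_gb / hbm_per_gpu_gb)}")  # 2
```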
Wells Fargo, a well-known Wall Street firm, recently wrote that whether it is Google's huge TPU clusters or massive Nvidia AI GPU clusters, neither can function without HBM systems tightly integrated with the AI chips — and that tech giants must also buy server-grade DDR5 and enterprise-class high-performance SSDs/HDDs at scale to build or expand AI data centers. Micron has positioned itself in all three core storage areas at once — HBM, server DRAM (including DDR5/LPDDR5X), and high-end data-center SSDs — making it one of the most direct beneficiaries of the "AI memory + storage stack" and a recipient of the AI infrastructure "super dividend."
Wells Fargo added in its latest report that with overall DRAM industry sales, including HBM, expected to grow more than 100% year over year in 2026, Micron — the US-based storage giant and the world's third-largest memory maker, with a revenue mix that skews heavily toward DRAM — will be among the biggest beneficiaries.
Micron even announced in early December that it would stop selling storage products to individual consumers in the PC/DIY market in order to devote its capacity to the large-scale AI data centers now under construction — underscoring how, with the global AI infrastructure boom in full swing, demand for high-performance data-center DRAM and NAND products continues to surge.
The guidance Micron issued last week blew past Wall Street's sales and profit expectations: management forecast fiscal 2026 second-quarter revenue of $18.3 billion to $19.1 billion, versus an average analyst estimate of $14.4 billion — and note that those analyst forecasts had already been raised repeatedly since Google, Amazon, Nvidia, and other giants posted strong results at the end of October. That Micron's official outlook still topped such repeatedly upgraded estimates shows just how explosively memory demand has grown amid the global AI infrastructure wave.
With large-scale AI data centers rising around the world, demand for the memory components indispensable to AI training/inference systems is outstripping supply, benefiting memory makers such as Micron. Even less-demanding storage products for consumer electronics such as PCs and smartphones are now in acute shortage, together driving sharp price increases across DRAM and NAND products. This stems largely from the industry shifting capacity toward the more advanced processes used for AI data centers.
The three dominant memory makers — SK Hynix, Samsung, and Micron — have concentrated much of their capacity on HBM, whose advanced-process requirements and manufacturing/test complexity far exceed those of DDR-series DRAM and the NAND behind SSDs. As the three leaders keep migrating capacity to HBM, conventional memory and storage products have been left in short supply.
Riding the fervent bullish narrative of a "memory supercycle," Micron's stock has climbed steeply since the second half of this year and is up about 240% since the start of 2025.
Sumit Sadana, Micron's chief business officer, said on the earnings call that the company has "sold out" its memory output. CEO Sanjay Mehrotra added that the shortage is likely to persist for a long time. "Continued and strong industry demand, combined with supply constraints, is contributing to a very tight market situation," he emphasized. "We expect these conditions to continue beyond the end of the 2026 calendar year."
Mehrotra also said Micron was disappointed that it could not fill all of its orders. "We can only meet about 50% to two-thirds of the demand from some key customers," he said. "As a result, we remain extremely focused on increasing supply and making the necessary investments."
In recent months, the top three memory giants — Samsung Electronics, SK Hynix, and Micron — along with Western Digital and Seagate have all reported strong results, prompting Wall Street banks such as Morgan Stanley to declare a "memory supercycle." The world's continuing blowout in AI training/inference computing demand, plus a consumer electronics recovery driven by on-device AI, has fueled exponential growth in demand for DRAM and NAND products. Micron's revenue mix skews most heavily toward HBM and high-performance server DDR5 on the DRAM side, while demand for its enterprise SSDs on the NAND side has also surged recently.
Morgan Stanley analysts said in a December report that Micron is showing "the best revenue and profit growth trajectory in the history of the US semiconductor industry other than Nvidia." The team expects Micron's revenue to nearly double in the fiscal year ending in August, then slow somewhat in fiscal 2027 and 2028.
Seagate
Seagate was founded nine years after Western Digital. Its stock has likewise benefited from the surge in storage demand unleashed by the global AI infrastructure frenzy, rising 231% since the start of 2025.
This year, not only have the three global memory leaders SK Hynix, Samsung Electronics, and Micron posted triple-digit share-price gains — the enterprise data-storage giants Seagate, SanDisk, and Western Digital have each risen more than 200% as well.
Why did Seagate and Western Digital, dormant for so long, suddenly surge? The core logic: the AI data-center build-out is not only driving HBM demand — the AI data center's three-tier storage stack (hot-tier NVMe SSDs, warm-tier/nearline HDDs, and cold-tier object storage and backup) is expanding simultaneously. Years of supply discipline in the HDD industry and a recovering NAND cycle, combined with multi-year commitments from cloud vendors, have lifted volumes, prices, and order visibility at all three storage leaders (Western Digital, Seagate, and the SanDisk spin-off) at once.
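As a purely illustrative sketch of how such a three-tier stack routes data (invented thresholds, not any vendor's actual policy):

```python
# Illustrative tiering policy: place data by age and access frequency.
def place(age_days: float, reads_per_day: float) -> str:
    if reads_per_day > 100:                    # active training shards, caches
        return "hot tier: NVMe SSD"
    if age_days < 90 or reads_per_day > 1:     # recent datasets, checkpoints
        return "warm/nearline tier: high-capacity HDD"
    return "cold tier: object storage / backup"  # archives, compliance copies

print(place(age_days=2, reads_per_day=500))      # hot tier: NVMe SSD
print(place(age_days=30, reads_per_day=5))       # warm/nearline tier
print(place(age_days=400, reads_per_day=0.01))   # cold tier
```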
Every AI "computing feast" ultimately comes down to data: collection, storage, accumulation, re-ingestion, and governance. Enterprise data-storage platforms and massive-capacity media are the foundation of the training-inference closed loop. This is why Wall Street giants such as J.P. Morgan, Citigroup, and Morgan Stanley have grown increasingly bullish on the share-price prospects of Western Digital, Seagate, and the newly spun-off SanDisk.
Seagate's HAMR platform (Mozaic 3+) has 30TB nearline drives in volume production and shipping, with higher-capacity (>30TB) nodes on the way. HAMR's lead in areal density directly addresses cloud vendors' pain points of rack power and cost per TB, making Seagate a core beneficiary of AI data lakes and cold-data pools.
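A back-of-the-envelope calculation (illustrative drive capacities, including a hypothetical >30TB node) shows why areal density matters so much at exabyte scale:

```python
# Drives needed per exabyte at different capacities (1 EB = 1,000,000 TB).
EB_IN_TB = 1_000_000
for capacity_tb in (24, 30, 36):   # 36 TB is a hypothetical future node
    print(f"{capacity_tb} TB drives: ~{EB_IN_TB / capacity_tb:,.0f} per exabyte")
# 24 TB -> ~41,667 drives; 30 TB -> ~33,333; 36 TB -> ~27,778
# Denser disks shrink the fleet, and with it slots, racks, and power per TB.
```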
Revenue for the quarter ended October 3 rose about 21% to $2.63 billion, and management said roughly 80% of sales now come from the AI data-center market.
Seagate CEO Dave Mosley told analysts on the earnings call: "There is no doubt that AI application leaders such as OpenAI are reshaping the demand pattern for hard disk storage products by increasing the economic value of data storage."
Bank of America analysts wrote that because any additional hard-drive shipments are quickly snapped up by large customers focused on data-center construction, Seagate may carry no excess drive inventory for the next two years.
"All the cutting-edge AI tools are triggering a surge in new content creation — text, images, audio, and video — accelerating the generation of huge volumes of unstructured data that gets stored, copied, and versioned across collaboration platforms. Seagate and Western Digital have repeatedly said they will keep capital spending on new capacity to a minimum, with exabyte (EB) growth instead driven by customers' accelerated shift to higher-capacity drives. The incremental takeaway from our recent research is that pricing dynamics will keep improving in 2026, and any additional supply will come from higher yields — sustaining premiums and pushing prices higher still," Citigroup analysts said.
Wall Street sees Seagate's growth trajectory as similar to Western Digital's: analysts expect the drive maker's revenue to grow 21% this fiscal year, then about 15% and 10% over the following two years.
Celestica
Celestica is a multinational electronics manufacturing services company headquartered in Canada. Founded in 1994, initially as an IBM subsidiary, it has long focused on building high-performance network switches and actively managing the data and traffic flowing through them. Its stock has risen more than 230% this year.
Celestica is an electronics manufacturing services (EMS/ODM) and infrastructure solutions provider: beyond traditional hardware assembly, it plays a key role in designing, manufacturing, and integrating AI data-center infrastructure products (network switches, servers, rack-level solutions, high-bandwidth network components, and so on). As cloud providers and large technology companies such as Google, Meta, and Amazon build AI data centers at scale, demand for high-speed networking, customized hardware, and rack-level integration has soared — and Celestica is a core supplier of high-performance switches, servers, ASIC/TPU-related hardware modules, and integration services to these facilities.
According to several research houses, Celestica is one of the key manufacturing partners for Google's TPU computing modules and data-center network switches, letting it benefit directly from the continued AI infrastructure capital spending of Google and other large technology companies.
Google's TPU (Tensor Processing Unit) clusters are self-developed AI accelerator chip clusters for large-scale machine learning and massive training and inference workloads. Within Google's AI supply chain, Celestica handles the manufacturing, integration, and assembly of TPU server hardware and Google network modules — turning the designed TPU silicon into deployable high-performance compute nodes.
Celestica's customers are not limited to Google; it also sells data-center hardware at scale to AI cloud leaders including Amazon and Microsoft. Third-quarter sales rose a sharp 28% to $3.19 billion, and Wall Street analysts generally expect growth to accelerate in 2026 and 2027 — to 33% and 34%, from 26% this year.
Robert Mionis, Celestica's CEO, said on the October earnings call that a hyperscale cloud company had recently approached Celestica to mass-produce high-performance liquid-cooled AI rack interconnect components, with volume production planned for next year. "Our largest and fastest-growing market is AI data centers, supporting high-performance networking and custom ASIC-driven hyperscale AI/ML compute platforms."
One of Celestica's key advantages is the flood of orders from the recent surge in demand for custom AI chips (AI ASICs; TPUs belong to this ASIC route). These chips are less flexible and general-purpose than Nvidia's AI GPUs, but they can be more cost-effective and energy-efficient in AI inference — a market potentially worth trillions of dollars.
Goldman Sachs analysts said Celestica is one of Google's core ASIC-cluster component suppliers and should keep benefiting from surging TPU demand in 2026, becoming the leading provider of Google's TPU rack-level solutions.
The "Google TPU computing cluster" that Wall Street analysts are now collectively watching is even expected to capture 30%-40% of the AI computing infrastructure market in the near future, eroding Nvidia's current near-monopoly share of roughly 90%. According to SemiAnalysis figures, Google's latest TPU v7 (Ironwood) represents a stunning generational leap: its BF16 compute reaches 4614 TFLOPS, versus just 459 TFLOPS for the widely deployed previous-generation TPU v5p — a full order of magnitude. Moreover, the TPU v7's onboard high-bandwidth memory is directly comparable to the B200 of Nvidia's Blackwell architecture. For targeted workloads, architecturally leaner, more cost- and energy-efficient AI ASICs can more easily absorb mainstream inference loads; Google's latest TPU clusters, for example, can deliver up to 1.4 times the performance per dollar of Nvidia Blackwell.
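A quick arithmetic check of the figures quoted above (TFLOPS values as attributed to SemiAnalysis in this article):

```python
# Generational leap: TPU v7 (Ironwood) vs TPU v5p, BF16 TFLOPS as cited above.
tpu_v7, tpu_v5p = 4614, 459
print(f"v7 / v5p = {tpu_v7 / tpu_v5p:.2f}x")   # ~10.05x -- one order of magnitude

# "1.4x performance per dollar vs Blackwell" restated: at equal spend,
# such a TPU cluster would deliver about 40% more throughput.
ratio = 1.4
print(f"Extra throughput at equal budget: +{(ratio - 1):.0%}")  # +40%
```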