The Zhitong Finance App learned that Nomura released a research report saying that in the field of AI servers, Nvidia (NVDA.US) has so far occupied more than 80% of the market value share, while ASIC AI servers account for roughly 8%-11%. Supply chain feedback also indicates that more cloud service providers (CSPs) are actively deploying their own AI ASIC solutions, such as Meta (META.US) starting in 2026 and Microsoft (MSFT.US) starting in 2027. As hyperscale CSPs' internal, domain-specific requirements mature, cloud ASICs still have room to grow, which will challenge Nvidia's dominance in AI computing clusters.
Meta's main MTIA projects (especially the V1.5 version) are still in the early stages of development, and many uncertainties remain. Supply chain checks indicate Meta plans to push large-scale deployment in 2026, but it is unclear whether it can overcome these potential challenges. From an investment perspective, however, given current AI industry trends (greater computing capacity, faster interconnects, higher power and thermal management requirements, etc.), Nomura recommends positioning in potential beneficiaries with diverse customer coverage across the Nvidia, AWS Trainium, Meta MTIA, and Google TPU supply chains.
Nomura's main views are as follows:
Nomura released a research report stating that Meta's MTIA AI server could be a milestone in 2026.
AI ASICs are booming: total shipments are expected to surpass Nvidia's AI GPUs between the second half of 2026 and 2027
In the field of AI servers, Nvidia (NVDA.US) has so far accounted for more than 80% of the market value share, while ASIC AI servers hold around 8%-11% (based on Bloomberg consensus estimates). However, comparing AI ASIC shipments directly with Nvidia's AI GPU shipments (which strips out the value distortion from Nvidia's 70%-80% gross margin), Google's TPU may reach 1.5 million to 2 million units in 2025, Amazon's Trainium2 ASIC around 1.4 million to 1.5 million units, while the AI GPU supply Nvidia has prepared for 2025 is roughly 5 million to over 6 million units (actual sales may be lower than supply).
In 2025, Google and Amazon's combined AI TPU/ASIC shipments have reached 40%-60% of Nvidia's AI GPU shipments.
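As a quick sanity check, the ratio cited above can be reproduced from the unit estimates quoted earlier; taking Nvidia's supply at roughly 6 million units (the upper end of the range) is an illustrative assumption, not a figure the report singles out:

```python
# Rough cross-check of the shipment ratio, using the report's 2025
# unit estimates (in millions). Nvidia supply assumed at ~6 million;
# actual sales may be lower.
google_tpu = (1.5, 2.0)    # Google TPU, low/high estimate
aws_trainium = (1.4, 1.5)  # Amazon Trainium2, low/high estimate
nvidia_gpu = 6.0           # Nvidia AI GPU supply (assumption: upper end)

low = (google_tpu[0] + aws_trainium[0]) / nvidia_gpu
high = (google_tpu[1] + aws_trainium[1]) / nvidia_gpu
print(f"combined TPU/ASIC vs Nvidia GPU shipments: {low:.0%} to {high:.0%}")
```

Under these assumptions the combined share comes out to roughly 48%-58%, consistent with the 40%-60% range cited.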
Nomura's supply chain feedback also shows that more cloud service providers (CSPs) are actively deploying their own AI ASIC solutions, such as Meta (META.US) starting in 2026 and Microsoft starting in 2027. Total AI ASIC shipments could surpass Nvidia's AI GPUs sometime in 2026.
On the other hand, Nomura notes that Nvidia is not sitting back and allowing custom chip projects to encroach on market share that would ultimately erode its dominant position in cloud AI computing.
At COMPUTEX 2025, Nvidia unveiled NVLink Fusion, a technology that opens up a proprietary interconnect protocol to support chip-to-chip connections between Nvidia AI GPUs and third-party CPUs, or between custom xPUs and Nvidia ARM-based CPUs (Figure 1).
Although this hybrid architecture is only semi-custom, Nvidia is trying to avoid losing part of the cloud computing market entirely, so final adoption and feedback from cloud customers are worth watching.
AI ASIC solutions could mean more opportunities for suppliers
Nvidia's lead in AI computing over other AI accelerators, including cloud ASICs, is expanding due in part to:
(1) Nvidia's products may be superior to most competitors' in computing power per unit of chip area (related to logic density; Figure 2). Judging from its product roadmap, the company is more aggressive in adopting new technologies from its logic foundry partner TSMC (TSM.US; 2330 TT, buy rating).
(2) Nvidia's interconnect technology (NVLink) for AI cluster expansion is almost unrivaled. Nomura believes that although the industry is seeking an open-specification UALink, its architectural performance will not match Nvidia's until 2026-2027 (Figure 3).
Nomura notes that, to compensate for the gap in technical performance, early ASIC solutions often use better materials and components, and less efficient architectures, than Nvidia's solutions to ensure system stability and performance, which raises suppliers' added value. However, despite the higher bill of materials (BOM) cost structure, ASIC solutions may still be more cost-effective for the CSPs themselves, given their customized nature and the elimination of Nvidia's high system margins.
Nomura believes that as hyperscale CSPs' internal, domain-specific requirements mature, cloud ASICs still have room to grow, which will challenge Nvidia's dominant position in AI computing clusters.
Nomura has long anticipated that within TSMC's AI logic semiconductor revenue pool in 2026, custom AI accelerators will grow faster than merchant GPUs.
To date, Nomura's supply chain feedback continues to support this view; moreover, AI ASIC development is embracing more complex specifications (such as larger interposers for subsystem integration) more aggressively than the market expected, a positive surprise.
Previously announced ASIC projects, such as Amazon's Trainium and Microsoft's Maia 100, were characterized by fewer HBM stacks and smaller interposers than Nvidia's AI GPUs (mostly below 2.5 times the reticle size). The exception is Google's TPU, whose design partner Broadcom has long been a key customer driving innovation on TSMC's technology roadmap.
Currently, Nomura observes that AI ASIC specifications are catching up with Nvidia's (Figures 4 and 5), but Nvidia still holds advantages in scale-up connectivity, scale-out networking, and its proprietary CUDA ecosystem (ideal for enterprise-specific AI solutions).
Nomura also notes that some ASIC solutions (such as Meta's) are trying to take advantage of Nvidia's system architecture, and some are even planning to launch more aggressive ASIC specifications in the next few years.
Meta's ASIC AI (MTIA) servers could be deployed at scale in a meaningful way in 2026-2027
Nomura has emphasized in the past that Meta will launch its first ASIC with significant shipments before the end of 2025. Now, Nomura has more details about its plans. Nomura has observed that Meta will launch several MTIA chips with different system architectures from the end of 2025 to 2027. The specifications for these ASICs and systems are summarized in the figure below.
The next AI ASIC (MTIA T-V1) will launch in the fourth quarter of 2025, designed by Broadcom, with an interposer similar to or slightly smaller than Amazon's Trainium2. Nomura observes that Meta is reusing the blade architecture of its general-purpose servers to design this AI ASIC system. Supply chain analysis shows the project is led by Celestica, with compute trays and CDUs manufactured by Quanta. The rack design is shown in Figure 6: 16 compute blades and 6 switch blades are inserted vertically into 2 cable backplanes and connected via DAC.
The main ASICs and switch ICs are liquid-cooled, while the remaining low-power ICs and modules are air-cooled. Although each compute tray contains only one ASIC and one CPU, the ASIC motherboard has a complex PCB design (albeit small in size), using high-grade materials (M8 hybrid CCL) and a high layer count (e.g., 36 layers). This may be because the ASIC integrates more functions and requires more signal layers in the PCB, or because power-delivery layers are separated to ensure lower noise. A BBU tray may also be used.
The MTIA T-V1.5 (V1.5) ASIC is likely to launch in mid-2026, with large-scale system deployment in the second half of 2026. The MTIA T-V1.5 chip is likely to be much more powerful than the V1: its interposer may be twice the size of the V1's, more than 5 times the reticle size, similar to or slightly larger than Nvidia's next-generation Rubin GPU. Although the MTIA T-V1.5 system is still in early sampling, its architecture is closer to Meta's GB200 (NVL36, using the Ariel module with one GPU and one CPU, rather than Bianca with two GPUs and one CPU). This explains why Meta insisted on Ariel modules in the GB200: the single-node, mutually redundant architecture resembles the blade concept discussed above. Channel checks indicate Meta may reuse GB200 components and design in the V1.5 ASIC system.
Nomura observes that Quanta, the main ODM for Meta's GB200, is leading this ASIC project and is responsible for compute trays, racks, and CDUs, while Celestica may be responsible for switches. Given the more complex ASIC design, the compute board may use up to 40-layer HLC (high layer count) PCBs with hybrid M8 CCL (versus 24-26-layer M8 HLC PCBs for Trainium2 motherboards, and Nvidia's M8+M4 hybrid HDI plus 5-layer structure). Because the V1.5 system's compute and switch trays are placed horizontally, its scale-up connectivity is achieved through DAC, much like NVLink spines, without the need for a cable backplane. Both the V1.5 and V1 racks use a liquid-to-air cooling design, and Nomura believes they share the same CDU design and infrastructure as the GB200 and GB300, giving Meta more flexibility to install different systems within the same infrastructure.
The MTIA T-V2 ASIC is likely to launch in 2027, with a CoWoS package size possibly larger than the MTIA T-V1.5's. Its rack system is likely to be much larger than the V1.5 rack system (170 kW), and Nomura believes such a high-power system may require liquid-to-liquid cooling.
Meta's MTIA production volume — ambitions and potential challenges
In its latest earnings call, Broadcom reiterated expectations that at least three hyperscale customers will each deploy clusters of 1 million xPUs by the end of 2027.
Downstream supply chain feedback also suggests Meta aims to produce 1.5 million MTIA V1 and V1.5 units from the end of 2025 through 2026, with V1.5 volume well above V1. Assuming a 1:2 production split between V1 and V1.5, i.e., 500,000 V1 units and 1 million V1.5 units (Nomura's simplifying assumption), MTIA's total output would require approximately 85,000 CoWoS wafers in 2026, roughly equivalent to Broadcom's visible CoWoS wafer demand in 2025 (mainly for TPUs).
However, judging from the current situation, Nomura expects MTIA to be allocated at most about 30,000 to 40,000 CoWoS wafers in 2026. To meet a production target of more than 1 million units, more CoWoS wafer bookings would need to appear in the downstream supply chain.
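The wafer arithmetic above can be sketched as follows; the packages-per-wafer figure is an illustrative assumption chosen to be consistent with the roughly 85,000-wafer estimate, not a number from the report:

```python
# Back-of-the-envelope CoWoS wafer math for Meta's MTIA targets.
v1_units = 500_000          # assumed V1 output (1:2 split of 1.5M total)
v15_units = 1_000_000       # assumed V1.5 output
packages_per_wafer = 17.6   # ASSUMPTION: good CoWoS packages per 300mm wafer

wafers_needed = (v1_units + v15_units) / packages_per_wafer
print(f"CoWoS wafers implied: ~{wafers_needed:,.0f}")

# Nomura currently sees only ~30k-40k wafers allocated to MTIA for 2026.
allocated = 40_000
print(f"gap vs upper allocation: ~{wafers_needed - allocated:,.0f} wafers")
```

Even against the upper end of the current allocation, the implied shortfall is on the order of 45,000 wafers, which is why additional CoWoS bookings are the key thing to watch.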
MTIA also faces other potential challenges. Given the V1.5 ASIC's aggressive specifications, its large CoWoS package size is probably one of the biggest mass-production challenges, and it is unclear whether the large package will cause problems with the CoWoS substrate. Moreover, the V1.5 system is comparable in compute density to the GB200 NVL36 while relying on Broadcom's connectivity chips and Meta's own architecture, and it is uncertain whether Meta will have enough time to bring the system to mass production (including at least 6 to 9 months of material ramp-up). Additionally, as emphasized above, ASICs tend to use better materials and components to bridge the performance gap; when MTIA ramps to mass production amid already strong AI demand, downstream suppliers note there could be shortages of materials and components that are common to both AI GPUs and ASICs.
Investment views:
Quanta, Unimicron, Elite Material, WUS Printed Circuit, and Bizlink are potential key beneficiaries of the Meta MTIA project
Nomura believes that Meta's main MTIA projects (especially the V1.5 version) are still in the early stages of development, and many uncertainties remain. Supply chain checks indicate Meta is ambitious and plans to scale up deployment in 2026, but it is unclear whether it can overcome these potential challenges. From an investment perspective, however, given current AI industry trends (greater computing capacity, faster interconnects, higher power and thermal management requirements, etc.), Nomura recommends positioning in potential beneficiaries with diverse customer coverage across the Nvidia, AWS Trainium, Meta MTIA, and Google TPU supply chains.
For Meta projects that have not received sufficient attention before, Quanta, Unimicron, Elite Material, WUS Printed Circuit, and Bizlink are key beneficiaries within Nomura's coverage:
Quanta: As the main original design manufacturer (ODM) for Amazon Web Services (AWS), Google, and Meta GB200/300 projects, Quanta is also responsible for manufacturing the Meta MTIA V1 and V1.5 compute trays and CDUs (cooling distribution units), and leads overall MTIA V1.5 rack assembly. Its diversified positioning across Nvidia and ASIC solutions should help drive further business growth.
Unimicron (3037 TT, buy rating): Unimicron has long been a key substrate partner of Broadcom, Marvell (MRVL.US, not rated), and Nvidia. Nomura believes Unimicron will also become a major ASIC substrate supplier for Meta, Google, and AWS. Furthermore, in Nvidia's Blackwell AI GPU business, its share of B200/300 products is expected to reach 30%-40%, so it benefits on both fronts.
Elite Material (EMC, 2383 TT, buy rating): A dominant copper-clad laminate (CCL) supplier for AWS and Meta ASICs. Meta's ASIC PCBs demand extremely high-spec CCL (36/40 layers, using M8 hybrid CCL), and demand from AWS ASICs (24-26-layer M8 CCL) is just as strong. Given its higher unit value and market share in AI ASICs, Elite Material should benefit significantly.
WUS Printed Circuit (002463.SZ, buy rating): Expected to become Meta's main ASIC printed circuit board (PCB) supplier. Although other PCB makers participate in the Meta project, the ultra-high layer-count specifications of Meta's PCBs play to WUS's strengths: the layer count and structure are similar to those of 800G switch boards, an area where WUS is strong in both technology and capacity.
Nomura also anticipates that baseboard management controller (BMC) vendors will benefit from Meta's ASIC AI server development: the number of BMCs per rack may increase, and, more importantly, an additional layer of mini BMCs will be used in MTIA blade servers. According to BMC vendor estimates, an MTIA server rack contains 23 BMCs (16 in the MTIA blades, 6 in the network blades, and 1 in the rack management blade), plus 16 mini BMCs in the MTIA blades.
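The per-rack tally cited above adds up as follows (figures restated from the vendor estimate, no new assumptions):

```python
# BMC count per MTIA server rack, per the BMC vendor estimate cited above.
standard_bmcs = {
    "MTIA compute blades": 16,
    "network blades": 6,
    "rack management blade": 1,
}
mini_bmcs_in_mtia_blades = 16  # additional mini BMCs inside the MTIA blades

total_standard = sum(standard_bmcs.values())
print(f"standard BMCs per rack: {total_standard}")  # 23
print(f"total incl. mini BMCs: {total_standard + mini_bmcs_in_mtia_blades}")
```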
Nomura believes that Bizlink (3665 TT, buy rating) will benefit significantly from scale-out/scale-up connectivity in MTIA servers. Bizlink is the world's leading active electrical cable (AEC) supplier, and its AEC products are widely used for scale-out in ASIC servers of many hyperscale cloud service providers (CSPs), such as AWS, Ultra Micro, and Quanta. Notably, the vendor plans to launch PCIe AEC in 2026 to tap the growing AI server scale-up market. In Meta's rack design, Bizlink's AEC is expected to be used in scale-out (rack-to-switch) and scale-up (in-rack) scenarios, as well as for power cables.