
Nvidia’s earnings print last week showed quarterly revenue up 56% year-on-year to USD46.7bn, with guidance for next quarter of USD54.0bn ±2% (representing 54% growth on the prior corresponding period). The quarter’s beat was slightly smaller than the market expected. Nevertheless, the reduced reliance on Chinese revenues (now largely struck from expectations) mitigates the risk of geopolitical noise.
More importantly, CEO Jensen Huang pointed to cumulative AI infrastructure spend of USD3–4tn over the next five years. Taken in conjunction with the company’s projections below, 2029 alone looks set to exceed USD1tn.
No slowdown on Nvidia’s horizon
Source: Nvidia
Current real-world applications are driving productivity at large technology companies. So much so that these companies are reducing workforces and further increasing AI hardware investment. Cloud provider CapEx, which translates into sales dollars for Nvidia, is accelerating to record levels. See August’s Update+ for more detail on the scale of this CapEx.
This opportunity encompasses both physical infrastructure and Nvidia’s AI hardware. Initially, spending has prioritised physical infrastructure like buildings. This is seen in the lower percentage of Nvidia revenues relative to datacentre CapEx in 2022 and 2023. Over time, a greater percentage of this AI infrastructure investment is expected to flow towards Nvidia and its peers as these buildings are increasingly filled with chips.
Nvidia dominates the AI hardware market
Nvidia is the sole hardware company offering a complete hardware-software stack, and this is reflected in its market share. As Morgan Stanley analyst Joseph Moore noted on 18 August, the firm was “optimistic on prospects for growing share in 2025 and holding share at close to the current 85% in CY26.”
But the AI trade isn’t just Nvidia, which is why LP investors have exposure to other important players such as Advanced Micro Devices (AMD), Broadcom and Marvell.
AMD competes directly with Nvidia, selling graphics processing units (GPUs). Its chip performance is broadly comparable; however, AMD’s software platform trails Nvidia’s in scope and adoption – a very important disadvantage. That said, AMD is making strides in designing software specifically for large customers eager to support AMD to lessen reliance on a single provider (Nvidia). Overall, AMD is a competitive alternative and stands to gain immensely if Nvidia falters. Even without such a misstep, with the AI infrastructure market projected to reach USD1tn by 2028, AMD’s current valuation is prospective assuming a mere 5% market share.
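The implied revenue behind that share assumption is easy to sanity-check. A back-of-envelope sketch, where both the USD1tn market size (treated here as annual AI infrastructure spend in 2028) and the 5% share are assumptions taken from the paragraph above, not forecasts:

```python
# Back-of-envelope: implied AMD AI revenue at a 5% share of a USD1tn market.
# Both inputs are illustrative assumptions, not forecasts.
market_2028_usd = 1_000_000_000_000  # assumed annual AI infrastructure spend, 2028
amd_share = 0.05                     # assumed AMD market share

implied_revenue = market_2028_usd * amd_share
print(f"Implied AMD AI revenue: USD{implied_revenue / 1e9:.0f}bn")  # USD50bn
```

Even a single-digit share of a market that size would be several multiples of AMD’s current datacentre revenue, which is what underpins the valuation argument.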
It’s not just GPUs…
Application Specific Integrated Circuits (ASICs) are more efficient than GPUs for fixed functions. With consistent software, ASICs can bypass Nvidia’s software advantage, offer comparable performance and even outperform Nvidia in energy efficiency. The trade-off is that such chips become obsolete if underlying AI algorithms change, and they require significant development and scaling expenditure before becoming more cost-effective.
Broadcom is the poster child for ASIC development, a fact reflected in its share price outperforming Nvidia’s by ~35% over the last twelve months. Broadcom has helped engineer Alphabet’s Tensor Processing Unit (TPU) since its 2013 inception; the TPU is now in its seventh generation. Other companies of Alphabet’s scale now collaborate with Broadcom to develop their own TPU equivalents. These customers are believed to include Meta, Apple, OpenAI and xAI.
Marvell offers a similar ASIC service and faces the same risks as Broadcom, but its chips are generally cheaper and less performant. This trade-off appeals to large cloud providers which will likely bear significant AI workloads but won’t train frontier models; Amazon and Microsoft are set to be Marvell’s key customers. Marvell’s share price has gone backwards over the last twelve months on concerns that Amazon might replace it with a Taiwanese alternative.
International players
It’s easy to overlook AI hardware development’s global footprint. Taiwanese companies like Alchip, Global Unichip and MediaTek have close relationships with Taiwan Semiconductor Manufacturing Company (TSMC). Their domicile gives them better access to TSMC technology than their US peers, despite for the most part being lower-volume customers. This, combined with their lower gross margins, makes them compelling plan B partners for ASICs. We already see this as Alchip co-designs Amazon’s Trainium and MediaTek is set to co-design the TPU.
Despite, or perhaps because of, US export restrictions, China is striving for a share of the AI hardware sector. Huawei’s Ascend and Cambricon’s Siyuan chips are produced using a makeshift fabrication process at SMIC, China’s leading chip manufacturer, which is itself barred from leading-edge fabrication equipment by the US. This results in high defect rates. Huawei and Cambricon compensate by deploying these chips in massive clusters, incurring significant energy costs – an approach that can be economically viable given China’s relatively abundant energy supply (approximately three times that of the US).
Hardware demand suggests AI scepticism is misplaced
Since ChatGPT’s launch in 2022, no consumer AI product of similar significance has followed, leading to some scepticism. On this view AI is merely a gimmick, as evidenced by romantic AI partners. However, this trivialisation has overshadowed the substantial improvements in AI models. These enhancements have delivered greater accuracy, which has, interestingly, made the models less agreeable. Ultimately, it was this continuous improvement, rather than any top-down mandate from OpenAI, that led to the termination of romantic AI partners – AI that agreed with user propositions to such an extent that it mimicked a bond of ‘love’ for some, and proved very dangerous for a few unstable souls.
Meanwhile, enterprise hasn’t bought into this scepticism. Enterprise AI is being integrated rapidly, with basic applications like call centre automation and image generation already established. As AI models advance, more complex tasks such as code generation, large dataset analysis, supply chain optimisation and accelerated research are increasingly being handled by AI. These enterprise use cases are fuelling demand for cloud compute: the leading providers are supply constrained yet anticipate even further demand, and as a result are acquiring AI accelerators in ever-greater volumes.
So, who is going to win the AI hardware battle?
This remains an open question. Competitive dynamics will evolve, and the valuations derived from them will evolve with them. Flexibility is key in the fast-changing world of disruption. This is why Loftus Peak believes that active management, combined with strong industry understanding and valuation discipline, results in a strategy that is differentiated and prospective. It was this very flexibility that enabled Nvidia to feature in the portfolios we managed in early 2016, when the company was just 0.5% of the Nasdaq and traded at $0.75 per share (adjusted for subsequent splits). The stock today is $174.11.
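The arithmetic behind that track record can be checked directly. A small sketch using the split-adjusted prices quoted above; the roughly 9.5-year holding period is an assumption based on ‘early 2016’:

```python
# Return multiple and rough CAGR on Nvidia from the split-adjusted prices above.
entry_price = 0.75     # early 2016, split-adjusted (from the text)
current_price = 174.11 # price today (from the text)
years_held = 9.5       # assumption: early 2016 to late 2025

multiple = current_price / entry_price   # roughly a 232x return
cagr = multiple ** (1 / years_held) - 1  # implied compound annual growth rate
print(f"Multiple: {multiple:.0f}x, CAGR: {cagr:.0%}")
```

On those assumptions the position compounded at well over 70% per annum, which is the point of the flexibility argument.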