
An Oversupply Of Compute Power Will See Margins Recede

Goran_Damchevski, Equity Analyst (Not Invested)

Published

August 03 2023

Updated

December 04 2024


Announcement on 04 December, 2024

Why NVIDIA’s Growth Curve May Flatten Out

  • NVDA continues to outperform while QoQ growth levels off.
  • Future growth rates may moderate given the already high quarterly revenue base of $35B.
  • The next catalysts to watch are capex guidance updates from customers, due mid-January to early February.
  • Maintaining AI customer demand and a value proposition of increased profitability may become more difficult.
  • Management is known to be “2 steps ahead”, and announcing a new chip may sustain enthusiasm.

Nvidia posted Q3'25 revenues of $35B, up 94% YoY. Sequential growth remained elevated, increasing to 16.8%. The last 4 quarters produced revenues of $113.3B, up 152% from the prior period. This is well on its way to reaching my 2028 estimate of $165B (derived as a 55% share of my estimated $300B SAM for NVDA).

NVDA: Quarterly revenue performance. QoQ & YoY emphasis from author

The company guided (p. 12) for QoQ growth of around 7%, to $37.5B, implying some 70% growth YoY.

Assuming a sustained QoQ growth rate of around 10%, NVDA would reach my $165B target for 2028 in about 1 year, indicating that NVDA is outperforming my expectations. However, despite investor enthusiasm, I think that extrapolating NVDA's QoQ growth from a $35B quarterly base may not be consistent with the data center capex renewal cycles of NVDA's largest customers. For example, in its last earnings call, TSLA pointed out that it will be more restrained on future capex spend (1, 2), including on GPUs.
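The "about 1 year" claim can be checked with a quick back-of-the-envelope projection (my own arithmetic, assuming only the figures stated above: a $35B starting quarter, 10% sustained QoQ growth, and a $165B trailing-revenue target):

```python
# Project quarterly revenue forward at a constant QoQ growth rate and count
# how many quarters it takes for the trailing-4-quarter total to exceed a target.
def quarters_until_trailing_target(last_quarter, qoq, target):
    future = []
    q = last_quarter
    while sum(future[-4:]) < target:
        q *= 1 + qoq            # grow the next quarter by the assumed QoQ rate
        future.append(q)
    return len(future)

# Starting from the $35B Q3'25 quarter at 10% QoQ growth:
print(quarters_until_trailing_target(35.0, 0.10, 165.0))  # → 4 quarters, i.e. ~1 year
```

Four quarters of 10% compounding ($38.5B, $42.4B, $46.6B, $51.2B) sum to roughly $179B, which is why the $165B target falls about a year out under this assumption.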

What I’m Factoring Into NVDA’s Growth

Most of NVDA’s revenue comes from cloud providers, which have already dedicated a substantial portion of their capex to updating their GPU infrastructure. This is why I expect them to be more conservative with spending going forward unless they see a marked improvement in the value proposition.

Generally, the value proposition from NVDA has 2 aspects.

  1. Profit via cost savings. Better performance per dollar creates savings, and this is likely to improve further with NVDA’s Grace Blackwell GPUs. The case is strongest for customers upgrading pre-H100 architecture to Blackwell; even incremental upgrades from H100 or H200 infrastructure likely still create cost savings.
  2. The other side of the equation is AI or general compute demand. My sense is that NVDA has a fall-back pitch to companies using its GPUs along the lines of: if AI demand goes down, you can still use the infrastructure for general compute. This is true, and compute inflation seems to be creating demand. However, over-provisioning on cutting-edge infrastructure may leave customers paying a premium for the hardware while experiencing lower demand, leading to price cuts on cloud infrastructure.

My current thinking is that some of these cloud providers may start postponing infrastructure upgrades as the benefits may not be as steep as the first introduction of new gen GPUs.

Consumer demand may also change in 2 ways:

  • General stagnation in AI applications. I’m not implying that interest falters, but that enough of the world is already using AI and is content with the current service offering.
  • Market consolidation around one provider, i.e. if ChatGPT becomes the clear winner in consumer AI, rivals like Amazon and Google will have a more difficult time pitching continued increases in GPU spend.

In contrast to these scenarios, we will take a look at how NVDA is working to maintain and grow its GPU demand.

In order to sustain growth, NVDA's management is looking to expand sales to governments with sovereign AI. They have noted that Japan, India, and Indonesia are building AI infrastructure, and Denmark has launched its largest AI supercomputer built with 1,528 NVIDIA H100 GPUs. I expect this direction to continue and even more countries to adopt the sovereign AI pitch.

Additionally, NVDA has been expanding its AI and GPU use-cases into sectors that can benefit from the technology but whose core business is not technology, such as healthcare, robotics, telecom, and automotive.

To summarize, a sizable portion of cloud infrastructure now enables AI services, and NVDA needs to show customers ways to benefit while expanding the number of large customers (such as governments) it sells its infrastructure to.

What Are Some Possible Catalysts On The Horizon

Since the beginning of my narrative, I have speculated that demand for AI would cycle down. However, NVDA’s exceptional performance and general value proposition to cloud service providers made this scenario difficult to time, and it seems that management has executed on selling AI as a global phenomenon.

This looks set to change as compute capacity grows and customers look to develop their own solutions in order to avoid NVDA’s large premiums.

I still expect NVDA to outperform its sales estimates, with quarterly revenue growth around 10% instead of the 7% guide. However, news of “high single-digit” growth may be a catalyst that shakes things up in the market, and investors may change their approach to the stock.

This will likely play out over the following 2-3 quarters, but its beginnings will show up in the quarterly results commentary from NVDA’s key customers such as MSFT, META, AMZN, and GOOGL, as well as downstream from TSM, all of which typically report their next results (mid-January to early February) sooner than NVDA. This is why I expect any repricing to start early in the next earnings season.

This is only my scenario; ultimately, there is no way to accurately predict markets, and NVDA may find growth avenues that lead to continued outperformance. One way management could retain investor enthusiasm would be to announce the development of a new chip, even if it's in its early stages.

Valuation

I am extending my forward estimates to 2029 with revenue growth of 14.4%, lifting my estimate from $165B in 2028 (previous) to $189B in 2029. I am retaining my expected net margin of 35%, and therefore estimate a net income of $66B.

Using a 35x PE for 2029, I’m upgrading my future value to $2.3T from $2T. 

Discounted back to today using a 7.5% rate, I get a present value of $1.6T, or $68 per share.
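The valuation steps above reduce to a few lines of arithmetic; this sketch simply reproduces my stated inputs ($189B revenue, 35% margin, 35x PE, 7.5% discount rate over roughly five years) so the reader can vary them:

```python
# Reproduce the valuation arithmetic from the narrative (all inputs as stated above).
revenue_2029 = 189.0      # $B, 2029 revenue estimate
net_margin = 0.35         # retained net margin assumption
pe_2029 = 35              # assumed 2029 earnings multiple
discount_rate = 0.075     # annual discount rate
years = 5                 # roughly today -> 2029

net_income = revenue_2029 * net_margin            # ≈ $66B
future_value = net_income * pe_2029 / 1000        # in $T, ≈ $2.3T
present_value = future_value / (1 + discount_rate) ** years  # ≈ $1.6T

print(f"Net income: ${net_income:.0f}B")
print(f"Future value: ${future_value:.2f}T")
print(f"Present value: ${present_value:.2f}T")
```

Dividing the ~$1.6T present value by shares outstanding is what yields the $68 per-share figure quoted above.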

In my view, NVIDIA continues to be an exceptional company, trading at an asymmetrically high premium.

Key Takeaways

  • NVIDIA facing challenges, 30% data center market share at risk from chiplets
  • AMD, Samsung, Intel entering GPU manufacturing race at cheaper price points
  • Largest customers producing own chips, limits market for NVIDIA
  • Generative AI cloud solutions introduce revenue cannibalization risk
  • Gaming market revenue share at risk of stagnating

Catalysts

Company Catalysts

Accelerating GPU Performance Will Reduce Profitability

There is an argument to be made that the rapidly accelerating performance of GPUs will unlock markets in the next 3 to 5 years, but the same technology can lead to oversupply of compute power in the long term.

As recently as 2021, there was a shortage of compute power for use cases like cryptocurrency mining, which was promoted as a key growth avenue. However, when crypto prices fell, GPU margins declined, swinging investor sentiment from bullish to bearish on industry stocks, including NVIDIA. In the short gap after the crypto downfall and before the appearance of the AI trend, investors stopped evaluating the future potential of NVIDIA and flipped to analyzing the company based on its current performance. This may be part of the reason why some portfolio managers, like Cathie Wood, sold their NVDA stock.

Now, NVIDIA is experiencing a “second coming” as investors price-in the potential of AI technology. This is backed by the accelerated computing innovation leading to an exponential increase in compute power that can be used in all kinds of AI and data center applications. 

However, extrapolating this factor a few years further into the future, I arrive at the assumption that as GPUs become more powerful, they will inevitably become more cost effective. This will lead to a runaway supply of compute power while businesses and consumers struggle to find use cases fast enough to absorb it. This is why we will increasingly see CEOs flip their rhetoric from the capability of their products to the possible use cases for the new technology.

NVIDIA’s H100 vs the A100 Series Performance on AI Tasks

The end result of this scenario is an expanding market but shrinking profit margins, which converge toward the margins of the cheapest peer. I say this because there are no luxury compute machines, just as there is no luxury electricity. While companies are attempting to shield their customer base with proprietary technology, these walls will erode over time, perhaps faster than we realize. One of these peers may be AMD, but even cheaper alternatives may be hyperscalers like Amazon’s AWS, which produce their own chips and can vertically integrate their hardware with software infrastructure offerings.

The Difference in Selling GPUs to Consumers and Hyperscalers

As a result of innovation, semiconductor designers are under pressure to differentiate their products. They need to find ways to make their chips more powerful, efficient, or cost-effective than the competition. Otherwise, they risk losing customers to hyperscalers who are willing to develop their own chips.

Hyperscalers are large cloud computing providers that offer a wide range of services, including computing, storage, and networking. Some of the most well-known hyperscalers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. 

Top Hyperscalers

As opposed to consumers who use a GPU for gaming or business, hyperscalers have more control over procurement pricing and can negotiate discounts from vendors like NVIDIA. Ultimately, these hyperscalers may choose to produce their own chips and close off a whole market for a vendor, as when Apple moved to produce its own M1 chips and cut Intel out of Apple products. This happened partly because Apple managed to develop superior chips to what Intel could offer, which is why it is important for NVIDIA to stay on the edge of innovation, or risk being shut out by customers.

Note that NVIDIA is currently the top player in the space; however, a few years from now, companies may have access to comparable technology at lower prices.

Currently, key customers are developing their own chips in order to vertically integrate their products. Hyperscalers like Google, Amazon, Microsoft, and Oracle know their environments better than pure semiconductor companies do. This gives them the ability to design chips that are optimized for their specific needs.

We already have examples of hyperscalers vertically integrating and developing their own chips, and it is not a long jump before more of them produce their own GPUs:

  • Amazon Web Services (AWS) uses its own Graviton chips, which are based on the ARM architecture.
  • Microsoft Azure uses its own custom-designed FPGAs (field-programmable gate arrays) for some of its workloads.
  • Google Cloud Platform is developing in-house data-center chips, and already uses its own custom-designed TPUs (Tensor Processing Units) for machine learning workloads.
  • Meta is developing its own chips for a variety of workloads, including machine learning and video streaming.

In summary, I view NVIDIA’s core market as driven by demand from hyperscalers, which are in the end stages of validating demand for the cloud business and will strengthen the vertical integration of their services in order to cut costs; making their own chips is the rational next step for large cloud companies.

Monolithic vs. Chiplet GPUs

Nvidia has been a pioneer in the Graphical Processing Unit (GPU) and now the AI space, but the company is facing some challenges that could threaten its dominance. One of the biggest is the rise of chiplets, which are smaller, modular components that can be manufactured separately and then interconnected to form a complete processor. 

This approach has several advantages over the monolithic chip design that Nvidia currently uses. Monolithic chips are large and complex, and they can be difficult to manufacture. As process nodes shrink, it becomes even harder to produce monolithic chips without introducing defects, and a single defect can break the whole chip, leading to lower yields and higher production costs.

STH: Difference Between Monolithic and Chiplet Design Approaches

We can think of monolithic chips vs. chiplets similarly to old TVs versus modern flat-screen OLED TVs. If one pixel of an OLED TV breaks, the rest of the TV still functions, while if something breaks in an old TV, the whole system needs repair. This is not a comparison of performance, but of the concept of connecting many small components versus dealing with one big device.

If Nvidia doesn’t adapt, it could be disrupted by competitors already using this technology. AMD is one company using chiplets, giving it a significant advantage over Nvidia in terms of manufacturing costs and efficiency.

A somewhat recent NVIDIA paper explored the benefits of many connected GPUs instead of the monolithic design and noted that:

“Most importantly, the optimized MCM-GPU design is 45.5% faster than the largest implementable monolithic GPU, and performs within 10% of a hypothetical (and unbuildable) monolithic GPU.” 

This indicates that the company is aware of the possible limitations of monolithic technology, but it seems to be betting that its technology will be more than enough to satisfy a large portion of industrial compute needs. I largely agree with this bet, but think that margins will suffer as competitors introduce cost-effective alternatives.

The AI Race Has Just Started

Another challenge facing Nvidia is the rapidly accelerating pace of AI development. As AI technology continues to improve, the demand for high-performance GPUs will increase. This could put pressure on Nvidia's margins, as the company will need to invest heavily in research and development to stay meaningfully ahead, while competitors bet on more cost-effective solutions.

NVIDIA is a leader in this field and there is a good reason why. The combination of NVIDIA's DGX systems and its powerful GPUs is making it the go-to solution for AI training and inference in the cloud. NVIDIA's DGX systems are so powerful because they are equipped with multiple GPUs that are designed for parallel processing, which means that they can perform many calculations at the same time. This makes them ideal for AI workloads, which can be very computationally intensive.

The DGX H100 is equipped with 8 H100 GPUs designed for parallel processing, while the DGX GH200 links 256 Grace Hopper superchips into a single system. This gives them supercomputer-class compute power, allowing them to handle even the most demanding AI workloads.

NVIDIA is a company that has been ahead of the curve in the AI space. However, the company is now facing some challenges that could threaten its dominance:

  • Peers like AMD, Samsung and Intel are increasingly entering the GPU manufacturing race at cheaper price points.
  • As mentioned above, larger customers are moving to produce their own chips, limiting the potential market of NVIDIA.
  • NVDA’s generative AI cloud solutions will allow smaller customers to use NVDA’s compute capabilities to satisfy their market needs, effectively cannibalizing NVDA’s hardware revenue. The company has to provide this offering in order to retain market share; otherwise, peers will offer a similar service.

Stagnant Gaming Market Share

Despite Nvidia’s dominant position in the gaming industry with its GeForce GPUs, it faces competition from other players like AMD and Intel. In terms of raw hardware performance, these companies lag, but with reliance on accelerated computing increasing, they may be able to bridge that performance gap through software integration that Nvidia can’t match, e.g. the interplay between AMD CPUs and GPUs (or Intel’s CPUs and Arc GPUs).

The gaming market is still projected to grow to about $546B in 2028, with most of the growth expected to come from mobile gaming, so it may not be equally reflected in NVIDIA’s graphics segment, and NVIDIA’s gaming revenues may stagnate. NVIDIA is already the market leader in gaming GPUs, and while gaming segment sales will grow, there isn't much room to improve market share, while peers are moving into the segment more aggressively.

NVIDIA Omniverse’s Adoption Limits

Nvidia's Omniverse platform is a powerful real-time simulation and collaboration platform for 3D design. Nvidia positions this as something to revolutionize the workplace, but people may be resistant to digitizing their workplace and the technology may be ahead of a viable business case.

There are a few other factors that could limit the adoption of Nvidia's Omniverse platform. These include the high cost of the platform, the complexity of the software, and the lack of user-friendly documentation.

Geopolitical Tensions Mean Manufacturing Risks Are High 

A steep rise in the stock price is a bet on a company’s future, but it is also a signal from investors saying “Hey, we think you are the right company for the job”. To that end, investors are electing the company to bring about innovation and growth. 

This is difficult to pull off with existing research and manufacturing capacity (NVIDIA designs chips, which are produced by Taiwan Semiconductor Manufacturing), and the company will need to ramp up R&D and secure more manufacturing capacity or additional foundry partners.

Given the geopolitical risks surrounding Taiwan, the company may reduce its production risk by either investing in its own onshore production capacities, finding local partners or looking for partners in other countries, like the example of Apple.

In the end, not making a move may be more expensive than the possible up-front investment, especially given NVIDIA’s current $1.15T market cap relies heavily on them not stumbling in this respect.

NVIDIA’s Entry Into the EV Market is Experiencing a Counter From Tesla

NVIDIA's autonomous DRIVE platform is a suite of software and hardware solutions for developing and deploying autonomous vehicles. The company has secured some customers that use its DRIVE platform such as Volvo, Nio, XPENG, Zoox, SAIC.

NVIDIA’s Growing Drive Pipeline

Tesla went for a lose-lose scenario against NVIDIA (and Apple) by testing the waters for licensing its self-driving software and hardware technology. I assume that part of the motivation is to retain market share, discourage R&D investment in the technology, and shrink the profitability projections of the hardware and software components in the segment, i.e. to retain profitability only for the fully vertically-integrated EV solution.

In its latest earnings press release, NVIDIA outlined that a $1T market in data centers and AI-related products will transition from general purpose to accelerated computing. Within this, the automotive opportunity is $300 billion, and Nvidia sees it as a valuable growth pathway. This goal is further out; for now, the company is targeting a $14B design pipeline from future customers.

There will be significant competition for the mentioned revenues, and it seems that Tesla wants to block NVIDIA before it can start establishing partnerships in the EV market.

Industry Catalysts

Data Center Demand Will Be A Core Revenue Driver

The growth of the data center market is a major driver of demand for GPUs. GPUs are used in data centers for a variety of computationally intensive tasks, including machine learning, artificial intelligence, and cloud computing.

The data center market is estimated (1, 2) to reach $565.5B by 2032, growing at a CAGR of 7.3% from a $280B base in 2022. About 35% of the spending is estimated to come from the IT infrastructure segment, which includes the compute, servers, storage, and networking equipment that powers applications like computing, analytics, IoT, machine learning & AI, etc. This means that NVIDIA’s data center revenues may be exposed to a $200B market by 2032, or $164B in 2028.
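As a sanity check of the cited figures (my own arithmetic; the 7.3% CAGR, $280B 2022 base, and ~35% IT-infrastructure share are taken from the estimates above):

```python
# Compound the 2022 data center market base at the cited CAGR for 10 years,
# then apply the ~35% IT-infrastructure share.
base_2022 = 280.0   # $B, 2022 data center market size
cagr = 0.073        # cited compound annual growth rate

market_2032 = base_2022 * (1 + cagr) ** 10
it_infra_2032 = market_2032 * 0.35   # portion attributed to IT infrastructure

print(f"2032 market: ${market_2032:.0f}B")               # ≈ $566B, close to the $565.5B estimate
print(f"2032 IT infrastructure: ${it_infra_2032:.0f}B")  # ≈ $198B, i.e. the ~$200B exposure
```

The compounding reproduces both the $565.5B headline number and the roughly $200B IT-infrastructure exposure quoted above.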

As the data center market grows, GPU manufacturers will need to invest in research and development to develop new and more powerful GPUs. They will also need to expand their manufacturing capacity to meet the growing demand. An exception to this is accelerated computing - which is the technology that NVIDIA is betting on. 

Should this become a reality, then most CapEx spending will shift away from expanding production capacity and into R&D, since accelerated compute engines produce exponentially better performance per chip but remain a cutting-edge technology that requires further development and optimization. This has the potential to increase the supply of global compute power ahead of demand, which may have negative impacts on NVIDIA’s revenue and margins, since customers will be able to do much more work with fewer machines.

Cyclicality Impact Will Lead To Lumpy Revenues

High quality hardware with higher sales will lead to an amplification of chip cyclicality as companies will satisfy their compute needs with a single purchase that may take years to become obsolete. However, if performance keeps accelerating, then customers will have an additional reason to purchase the newest performing hardware in order to keep up with the competition.

It may be a dangerous extrapolation to assume that demand for chips, backed by accelerated computing and new industry needs like AI, will continue to be as strong. To me, the more reasonable assumption is that once hyperscalers satisfy their demand for accelerated computing, they will reduce purchases to maintenance needs and start a new demand cycle only when next-gen technology is sufficiently differentiated in performance. This would mean that companies like NVDA get a large boost in sales, but sales will also quickly slump as demand is fulfilled.

The counterpoint to this is that there is demand for years to come, but I see that as a marketing move to retain high pricing; in reality, companies will positively surprise on their capacity to ship products, and foundries are already expanding production capacity (1, 2).

Hardware and Software Companies Will Benefit from AI Growth

The global AI market is projected to grow 39% annually and reach $357 billion in 2027. This growth is being driven by the increasing adoption of AI-powered solutions across a wide range of industries. The two key beneficiaries of this growth are likely to be hardware manufacturers and software companies. 

Hardware manufacturers, such as NVIDIA, are making processing units that are specifically designed for AI applications. Software companies, such as Google, are integrating AI functionality into their existing offerings and developing new innovative technologies.

A Mizuho analyst (1, 2) estimates that, as NVIDIA currently holds around 90% of the market share in the industry, it may be able to retain 75% of the market and reach $300B in AI-related revenue by 2027. A Citi analyst shares the sentiment, and is even more bullish on NVIDIA’s ability to dominate the competition.

There is no doubt that NVIDIA will benefit from the growth in the industry. My concern is that the projections are optimistic, customers will find cheaper solutions, and peers will be increasingly competitive for margins and market share.

Assumptions

TAM Is Probably Lower Than Forecasts Expect

I assume that both the company’s TAM projection of $1T and the sell side’s projections of a $300B opportunity are unrealistic revenue targets for NVIDIA, as competitors and customers find more cost-effective alternatives, eroding the company’s pricing power. I estimate that the 2028 total addressable market for NVIDIA is around $300B, comprising $164B in IT infrastructure, $100B for the EV market, and $20B for B2C (gaming and PC GPUs) customers.

My estimate is that NVIDIA can reach up to $85B in sales by 2028, comprising $15B in B2C (gaming, consumer) sales and $70B in data center, EV, and AI-related revenues. This means that I expect the company to grow revenue by 3.3x in 5 years and capture some 30% of its 2028 TAM.
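Summing the component estimates above shows how the headline figures are composed (my own tabulation of the numbers stated in this section; no new estimates are introduced):

```python
# My 2028 TAM and sales assumptions from the narrative, summed for reference.
tam_2028 = {
    "IT infrastructure": 164.0,      # $B
    "EV market": 100.0,
    "B2C (gaming, PC GPUs)": 20.0,
}
sales_2028 = {
    "Data center, EV, AI": 70.0,     # $B
    "B2C (gaming, consumer)": 15.0,
}

total_tam = sum(tam_2028.values())       # $284B, i.e. "around $300B"
total_sales = sum(sales_2028.values())   # $85B
share = total_sales / total_tam          # ≈ 30% TAM capture

print(f"TAM: ${total_tam:.0f}B, sales: ${total_sales:.0f}B, capture: {share:.0%}")
```

The components sum to $284B rather than exactly $300B, which is why I describe the TAM as "around $300B"; the ~30% capture rate follows directly.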

Profit Margins To Reach 20%

I assume that NVDA will initially increase its profit margins in the next few years, but an oversupply and competitive pressures will start eroding that pricing power and the company will converge on a 20% profit margin in 2028.

Earnings Multiple To Rerate Lower 

The reduction in 2028 margin estimates, as well as the anticipation of the next cyclical sales downtrend for chips, driven by oversupply, self-developed silicon and AI solutions from hyperscalers, and exponentially increased compute power, will lead to a re-rating of the stock's multiple.

I use the price-to-earnings ratio in pricing NVIDIA and assume that as future revenue growth expectations fall from more than 60% in the next 12 months to 15% in 2028, the PE ratio will re-rate from the current 228.5x to a ratio more in line with technology peers such as AMD, GOOGL, MSFT, AAPL, and AMZN. That’s why my estimate for NVIDIA’s 2028 PE is 45x. While I considered the historical averages for the stock’s PE, I believe NVIDIA will find itself in a more competitive environment, and the higher future revenue base will make high growth difficult to sustain. This is why I think the stock will converge to a lower multiple than seen in the past 5 years.

Share Count To Stay Roughly The Same

I assume that the share count will fluctuate and may increase due to the issuance of new shares as NVDA takes on higher CapEx spending to foster the expected revenue growth. That’s why I estimate that the share count will be volatile but will net out over five years.

Risks

Nvidia May Remain Leader in AI Hardware if Chiplet Latency Issues Not Addressed

Despite these challenges, Nvidia is still a well-positioned company. The global AI market is expected to grow significantly in the coming years, and Nvidia is one of the leading players in this market. Additionally, Nvidia has a strong software ecosystem that supports its hardware products. The drawback of the chiplet approach is higher latency, which matters for AI, so customers in this segment may prefer monolithic hardware. If chiplets in GPUs can’t be connected so that latency is acceptable or even negligible to customers, then NVIDIA will likely remain the leader in AI hardware. While branching off to chiplets may be difficult, as there are multiple points of friction, NVIDIA could move in via acquisitions.

Nvidia is a leading provider of hardware and software for artificial intelligence (AI)

The company's GPUs are used in a wide range of AI applications, including machine learning, natural language processing, and computer vision. Its proprietary edge is not going away, and performance per chip will keep increasing. The most insightful KPI for me is how peers react to innovation in this sector, and I monitor the adoption of chiplet vs. monolithic GPU technology. On the software side, I would keep an eye on open-source AI algorithms, as they may end up iterating faster on the technology and make the market less profitable in the future.

Mitigating the chiplet branch

Nvidia has already taken some steps to adapt to the chiplet trend, and its acquisition of Mellanox Technologies in 2020 gives the company a leading position in the high-performance networking market. This will be important for Nvidia as it seeks to expand its reach into the data center market. I would also look for signs to see if NVIDIA starts branching out to adopt some form of chiplet technology, or if their monoliths and connectors manage to significantly improve in both performance and affordability against chiplets.    

Nvidia has a strong software ecosystem that supports its hardware products

Nvidia's CUDA software platform is a key competitive advantage. CUDA makes it easy for developers to write code that runs on Nvidia GPUs. This has helped Nvidia to build a large and loyal customer base. Should it manage to enter another software vertical in the AI space it may unlock more revenue and mitigate the risk of chiplets.


Disclaimer

Simply Wall St analyst Goran_Damchevski holds no position in NasdaqGS:NVDA. Simply Wall St has no position in the company(s) mentioned. This narrative is general in nature and explores scenarios and estimates created by the author. The narrative does not reflect the opinions of Simply Wall St, and the views expressed are the opinion of the author alone, acting on their own behalf. These scenarios are not indicative of the company's future performance and are exploratory in the ideas they cover. The fair value estimates are estimations only, do not constitute a recommendation to buy or sell any stock, and do not take account of your objectives or your financial situation. Note that the author's analysis may not factor in the latest price-sensitive company announcements or qualitative material.


Fair Value: US$68.0 (106.4% overvalued relative to this intrinsic estimate; Goran_Damchevski's fair value)
2029 estimates: Revenue US$186.9b, Earnings US$65.4b
Current revenue growth rate: 25.46%
Semiconductors industry revenue growth rate: 0.95%