
An Oversupply Of Compute Power Will See Margins Recede


goran_damchevski

Not Invested

Equity Analyst

Published

August 03 2023

Updated

June 25 2024


Announcement on 27 May, 2024

NVIDIA Expects High Demand Through 2025, The Valuation Is Becoming Difficult To Justify
  • NVIDIA reported record revenue of $26B, up 262% from a year ago and up 18% sequentially, beating consensus forecasts. Growth was driven by data center revenue of $22.6B, up 427% YoY and 23% QoQ. The company guided Q2 revenue to $28B +/-2%.
  • The company reported net income of $15.2B and EPS of $6.12, up 461% YoY and 19% QoQ. Applying similar profitability to the midpoint guidance, net income for Q2 would be around $14.3B.
  • The current cutting-edge H200 chips, now available, roughly double the inference performance of the H100. Blackwell chips are in production, with shipments expected in Q2 and a ramp in Q3. Management noted that it expects demand to exceed supply for both Blackwell and the H200 through 2024 and 2025.
  • In the earnings call, the CEO noted that large cloud providers represent more than 40% of data-center revenue, with much of the rest coming from 15k to 20k startups developing AI technology. NVIDIA is used by every cloud provider, and while some of them are developing their own chips, the CEO posits that there are workloads that cannot run without NVIDIA architecture, arguing that NVIDIA builds AI factories, i.e. systems that work together rather than standalone chips. He further explained that NVIDIA chips can take in both structured and unstructured data to train models, and that the memory capacity of the H200 is proving useful in expanding the context available for inference. This suggests the company's GPUs have considerable staying power with cloud providers, which are a key customer group.
  • Asked what will drive sustained demand for new-generation chips like Blackwell and beyond, the CEO argued that peers need to stay at the cutting edge of AI training and inference if they want to retain the ability to capture value. He also noted a long line of startups building products across fields such as multimedia, productivity, healthcare, biology and EVs, as well as sovereign AI projects, i.e. countries training their own AI models.
  • Mr. Huang (the CEO) expects innovation cycles to last about a year, indicating that a better version of Blackwell is already being prepared.
  • NVIDIA's CEO envisions a shift from procedural computing, where computers execute instructions, to intention-driven machines that produce output by understanding user intent. This may prove a hard problem: even if computers can summarize intent, approximating output is not enough in technology, because computers still rely on exact code to run a process. The solution to a request must be 100% correct in order to execute as code; a 99.9% solution won't work. Humans, in contrast, can abstract away information, which is why producing visuals, sounds or text may be a better use case than intent-driven compute. An example is OpenAI's Sora, which produces output that is good enough for humans to interpret without difficulty.
  • NVIDIA bought back $7.7B of its own stock and distributed $98M in dividends. The total capital return is equivalent to about 0.33% of the market cap, indicating that while buybacks are present, they are not a key driver of the company's value.
  • The company announced a 10-for-1 stock split aimed at making stock ownership more accessible to investors and employees. A split has no effect on value, but it allows smaller portfolios to allocate shares at a cheaper absolute price.
  • Extrapolating NVIDIA's revenue guidance over the next three quarters suggests a total of roughly $108B for fiscal 2025. That would be a much faster scale-up than my 2028 estimate of $106.5B, and as discussed above, the CEO also remarked that there is demand for the next two years. For this to be sustained, end-user AI projects eventually need to yield a return on the invested GPU infrastructure. I believe demand will stay high for some time, but I don't see how it can be sustained to 2028. As I noted in my previous update, I see 2024 as the year when the practical use cases for AI applications and their market viability are tested. I am also not convinced that cloud giants will keep up their AI infrastructure spend at the pace of NVIDIA's one-year innovation cycles, but this will ultimately be determined by end-user demand.
  • For the reasons discussed, I expect the company to top my estimates in the short term, but then cycle down to a more sustainable level driven by the practical applications of AI. I am maintaining my forward value of $1.1T for NVIDIA.
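The extrapolation mentioned above can be sanity-checked with quick arithmetic. The quarterly figures are from the announcement; holding the remaining quarters flat at the Q2 guidance midpoint is my own simplifying assumption, and the actual quarters could ramp higher or lower.

```python
# Back-of-the-envelope FY2025 revenue extrapolation.
# Assumption (mine, not the company's): Q2-Q4 held flat at the
# $28B guidance midpoint; actual quarters may ramp differently.
q1_actual = 26.0          # reported Q1 revenue, $B
q2_guidance_mid = 28.0    # Q2 guidance midpoint, $B

fy_total = q1_actual + 3 * q2_guidance_mid
print(fy_total)  # 110.0, in the same ballpark as the ~$108B cited
```

A flat-guidance assumption lands slightly above the ~$108B figure, which suggests the cited extrapolation is, if anything, conservative relative to guidance.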

Key Takeaways

  • NVIDIA facing challenges, 30% data center market share at risk from chiplets
  • AMD, Samsung, Intel entering GPU manufacturing race at cheaper price points
  • Largest customers producing own chips, limits market for NVIDIA
  • Generative AI cloud solutions introduce revenue cannibalization risk
  • Gaming market revenue share at risk of stagnating

Catalysts

Company Catalysts

Accelerating GPU Performance Will Reduce Profitability

There is an argument to be made that the rapidly accelerating performance of GPUs will unlock markets in the next 3 to 5 years, but the same technology can lead to oversupply of compute power in the long term.

As recently as 2021, there was a shortage of compute power for use cases like cryptocurrency mining, which was promoted as a key growth avenue. However, when crypto prices fell, the market saw a decline in GPU margins, swinging investor sentiment from bullish to bearish on industry stocks including NVIDIA. In the short gap after the crypto downturn and before the emergence of the AI trend, investors stopped evaluating NVIDIA's future potential and flipped to analyzing the company on its current performance. This may be part of the reason why some portfolio managers, like Cathie Wood, sold their NVDA stock.

Now, NVIDIA is experiencing a “second coming” as investors price in the potential of AI technology. This is backed by accelerated computing innovation leading to an exponential increase in compute power that can be used in all kinds of AI and data center applications.

However, extrapolating this factor a few years further into the future, I arrive at the assumption that as GPUs become more powerful, they will inevitably become more cost effective. This will lead to a runaway supply of compute power while businesses and consumers struggle to find use cases that can absorb it. This is why we will increasingly see CEOs flip their rhetoric from the capability of their products to the possible use cases for the new technology.

NVIDIA’s H100 vs the A100 Series Performance on AI Tasks

The end result of this scenario is an expanding market but shrinking profit margins, which converge toward the margins of the cheapest peer. I say this because there are no luxury compute machines, just as there is no luxury electricity. While companies are attempting to shield their customer base with proprietary technology, these walls will erode over time, perhaps faster than we realize. One of these peers may be AMD, but even cheaper alternatives may come from hyperscalers like Amazon's AWS, which produce their own chips and can vertically integrate their hardware with software infrastructure offerings.

The Difference in Selling GPUs to Consumers and Hyperscalers

As a result of innovation, semiconductor designers are under pressure to differentiate their products. They need to find ways to make their chips more powerful, efficient, or cost-effective than the competition. Otherwise, they risk losing customers to hyperscalers who are willing to develop their own chips.

Hyperscalers are large cloud computing providers that offer a wide range of services, including computing, storage, and networking. Some of the most well-known hyperscalers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. 

Top Hyperscalers

As opposed to consumers who use a GPU for gaming or business, hyperscalers have more control over procurement pricing and can negotiate discounts from vendors like NVIDIA. Ultimately, these hyperscalers may choose to produce their own chips and close off a whole market for a vendor, as when Apple moved to produce its own M1 chips and cut Intel out of Apple products. This happened partly because Apple managed to develop chips superior to what Intel could offer, which is why it is important for NVIDIA to stay on the edge of innovation, or risk being shut out by customers.

Note that NVIDIA is currently the top player in the space; however, a few years from now, companies may have access to comparable technology at lower prices.

Currently, key customers are developing their own chips in order to vertically integrate their products. Hyperscalers like Google, Amazon, Microsoft, and Oracle know their environments better than pure semiconductor companies do. This gives them the ability to design chips that are optimized for their specific needs.

We already have examples of hyperscalers vertically integrating and developing their own chips, and it is not a long jump before more of them produce their own GPUs:

  • Amazon Web Services (AWS) uses its own Graviton chips, which are based on the ARM architecture.
  • Microsoft Azure uses its own custom-designed FPGAs (field-programmable gate arrays) for some of its workloads.
  • Google Cloud Platform is developing in-house data-center chips, and already uses its own custom-designed TPUs (Tensor Processing Units) for machine learning workloads.
  • Meta is developing its own chips for a variety of workloads, including machine learning and video streaming.

In summary, I view NVIDIA's core market as being shaped by demand from hyperscalers, which are in the late stages of validating demand for their cloud businesses and will deepen the vertical integration of their services in order to cut costs; making their own chips is the rational next step for large cloud companies.

Monolithic vs. Chiplet GPUs

Nvidia has been a pioneer in the Graphical Processing Unit (GPU) and now the AI space, but the company is facing some challenges that could threaten its dominance. One of the biggest is the rise of chiplets, which are smaller, modular components that can be manufactured separately and then interconnected to form a complete processor. 

This approach has several advantages over the monolithic chip design that Nvidia currently uses. Monolithic chips are large and complex, and they can be difficult to manufacture. As process nodes shrink, it becomes even more difficult to produce monolithic chips without introducing defects, and a single defect can ruin an entire die, lowering yields and raising production costs.

STH: Difference Between Monolithic and Chiplet Design Approaches

We can think of monolithic chips vs. chiplets a bit like old bulky TVs vs. modern flat-screen OLED TVs. If one pixel of the OLED TV breaks, the rest of the TV still functions, whereas if something breaks inside the older TV, the whole system needs repair. This is not a comparison of performance, but of the concept of connecting many small components versus relying on one big device.

If Nvidia doesn’t adapt, it could be disrupted by competitors who are already using this technology. AMD is one company that is using chiplets, giving them a significant advantage over Nvidia in terms of manufacturing costs and efficiency.

A somewhat recent NVIDIA paper explored the benefits of many connected GPUs instead of the monolithic design and noted that:

“Most importantly, the optimized MCM-GPU design is 45.5% faster than the largest implementable monolithic GPU, and performs within 10% of a hypothetical (and unbuildable) monolithic GPU.” 

This indicates that the company is aware of the possible limitations of monolithic technology, but it seems to be betting that its designs will be more than enough to satisfy a large portion of the industry's compute needs. I largely agree with this bet, but think margins will suffer as competitors introduce cost-effective alternatives.

The AI Race Has Just Started

Another challenge facing Nvidia is the rapidly accelerating pace of AI development. As AI technology continues to improve, the demand for high-performance GPUs will increase. This could put pressure on Nvidia's margins, as the company will need to invest heavily in research and development to stay meaningfully ahead, while competitors bet on more cost-effective solutions.

NVIDIA is a leader in this field and there is a good reason why. The combination of NVIDIA's DGX systems and its powerful GPUs is making it the go-to solution for AI training and inference in the cloud. NVIDIA's DGX systems are so powerful because they are equipped with multiple GPUs that are designed for parallel processing, which means that they can perform many calculations at the same time. This makes them ideal for AI workloads, which can be very computationally intensive.

The DGX H100 and DGX GH200 are the latest and most powerful DGX systems from NVIDIA. The DGX H100 is equipped with 8 H100 GPUs, giving it roughly 32 petaFLOPS of FP8 AI performance, while the DGX GH200 links as many as 256 Grace Hopper superchips into a single system. This is more AI compute than most supercomputers offer, and it allows these systems to handle even the most demanding AI workloads.

NVIDIA is a company that has been ahead of the curve in the AI space. However, the company is now facing some challenges that could threaten its dominance:

  • Peers like AMD, Samsung and Intel are increasingly entering the GPU manufacturing race at cheaper price points.
  • As mentioned above, larger customers are moving to produce their own chips, limiting the potential market of NVIDIA.
  • NVDA’s generative AI cloud solutions will allow smaller customers to utilize NVDA’s compute capabilities to satisfy their market needs, effectively cannibalizing some of NVDA’s hardware revenue. The company has to provide this offering in order to retain market share; otherwise, peers will offer a similar service.

Stagnant Gaming Market Share

Despite Nvidia’s dominant position in the gaming industry with its GeForce GPUs, it faces competition from other players like AMD and Intel. In terms of raw hardware performance these companies lag, but as reliance on accelerated computing increases, they have the ability to bridge that performance gap through software integration that Nvidia cannot match, e.g. the interplay between AMD CPUs and GPUs (or Intel’s CPUs and Arc GPUs).

The gaming market is still projected to grow to about $546B in 2028, with most of that growth expected to come from mobile gaming, so it may not be equally reflected in NVIDIA’s graphics segment, and NVIDIA’s gaming revenues may stagnate. NVIDIA is already the market leader in gaming GPUs, and while gaming segment sales will grow, it doesn’t have much room to improve market share, while peers are moving into the segment more aggressively.

NVIDIA Omniverse’s Adoption Limits

Nvidia's Omniverse platform is a powerful real-time simulation and collaboration platform for 3D design. Nvidia positions it as something that will revolutionize the workplace, but people may be resistant to digitizing their workflows, and the technology may be ahead of a viable business case.

There are a few other factors that could limit the adoption of Nvidia's Omniverse platform. These include the high cost of the platform, the complexity of the software, and the lack of user-friendly documentation.

Geopolitical Tensions Mean Manufacturing Risks Are High 

A steep rise in the stock price is a bet on a company’s future, but it is also a signal from investors saying “Hey, we think you are the right company for the job”. To that end, investors are electing the company to bring about innovation and growth. 

This is difficult to pull off with the existing research and manufacturing capacity (NVIDIA designs chips, which are produced by Taiwan Semiconductor Manufacturing Company), and the company will need to ramp up R&D and secure more manufacturing capacity or additional foundry partners.

Given the geopolitical risks surrounding Taiwan, the company may reduce its production risk by investing in its own onshore production capacity, finding local partners, or looking for partners in other countries, following the example of Apple.

In the end, not making a move may be more expensive than the possible up-front investment, especially given that NVIDIA’s current $1.15T market cap relies heavily on the company not stumbling in this respect.

NVIDIA’s Entry Into the EV Market is Experiencing a Counter From Tesla

NVIDIA's autonomous DRIVE platform is a suite of software and hardware solutions for developing and deploying autonomous vehicles. The company has secured some customers that use its DRIVE platform such as Volvo, Nio, XPENG, Zoox, SAIC.

NVIDIA’s Growing Drive Pipeline

Tesla has set up a lose-lose scenario against NVIDIA (and Apple) by testing the waters for licensing its self-driving software and hardware technology. I assume that part of the motivation is to retain market share, discourage R&D investment in the technology, and shrink the profitability projections for the hardware and software components of the segment, i.e. to retain profitability only for the fully vertically-integrated EV solution.

In its latest earnings press release, NVIDIA outlined that $1T of market value in data centers and AI-related products will transition from general-purpose to accelerated computing. Within this, the automotive opportunity is $300 billion, and Nvidia sees it as a valuable growth pathway. This goal is further away, and the company is targeting a $14B design pipeline from future customers.

There will be significant competition for the mentioned revenues, and it seems that Tesla wants to block NVIDIA before it can start establishing partnerships in the EV market.

Industry Catalysts

Data Center Demand Will Be A Core Revenue Driver

The growth of the data center market is a major driver of demand for GPUs. GPUs are used in data centers for a variety of computationally intensive tasks, including machine learning, artificial intelligence, and cloud computing.

The data center market is estimated (1, 2) to reach $565.5B in 2032, growing at a 7.3% CAGR from a $280B base in 2022. About 35% of the spending is estimated to come from the IT infrastructure segment, which includes the compute, server, storage, and networking equipment that powers applications like analytics, IoT, and machine learning & AI. This means that NVIDIA’s data center revenues may be exposed to a roughly $200B market by 2032, or $164B in 2028.
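The compound-growth arithmetic behind this forecast can be checked directly; the base, CAGR and segment-share figures below are the ones cited above.

```python
# Compound-growth check of the cited data-center forecast: a $280B market
# in 2022 growing at a 7.3% CAGR for ten years should land near $565.5B.
base_2022 = 280.0        # market size in 2022, $B
cagr = 0.073             # cited compound annual growth rate
years = 10               # 2022 -> 2032

market_2032 = base_2022 * (1 + cagr) ** years   # ~ $566B, close to $565.5B
it_infra_2032 = market_2032 * 0.35              # ~ $198B, the "~$200B" exposure
```

The small gap between ~$566B and the cited $565.5B comes down to rounding in the source forecast; the 35% IT-infrastructure share reproduces the roughly $200B exposure figure.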

As the data center market grows, GPU manufacturers will need to invest in research and development to develop new and more powerful GPUs. They will also need to expand their manufacturing capacity to meet the growing demand. An exception to this is accelerated computing - which is the technology that NVIDIA is betting on. 

Should this become a reality, most CapEx spending will shift away from expanding production capacity and into R&D, since accelerated compute engines deliver exponentially better performance per chip, but the technology is cutting-edge and requires further development and optimization. This has the potential to increase the supply of global compute power ahead of demand, and it may negatively impact NVIDIA’s revenue and margins, since customers will be able to do much more work with fewer machines.

Cyclicality Impact Will Lead To Lumpy Revenues

High quality hardware with higher sales will lead to an amplification of chip cyclicality as companies will satisfy their compute needs with a single purchase that may take years to become obsolete. However, if performance keeps accelerating, then customers will have an additional reason to purchase the newest performing hardware in order to keep up with the competition.

It may be a dangerous extrapolation to assume that demand for chips backed by accelerated computing and new industry needs like AI will continue to be as strong. To me, the more reasonable assumption is that once hyperscalers satisfy their demand for accelerated computing, they will reduce their demand to maintenance needs and start a new demand cycle only when the next gen technology is differentiated enough in performance. This would mean that companies like NVDA will get a large boost in sales, but it will also quickly slump as they fulfill demand. 

The counterpoint to this is that there is demand for years to come, but I see that as a marketing move to retain high pricing; in reality, companies will positively surprise on their capacity to ship products, and foundries are already expanding production capacity (1, 2).

Hardware and Software Companies Will Benefit from AI Growth

The global AI market is projected to grow 39% annually and reach $357 billion in 2027. This growth is being driven by the increasing adoption of AI-powered solutions across a wide range of industries. The two key beneficiaries of this growth are likely to be hardware manufacturers and software companies. 

Hardware manufacturers, such as NVIDIA, are making processing units that are specifically designed for AI applications. Software companies, such as Google, are integrating AI functionality into their existing offerings and developing new innovative technologies.

A Mizuho analyst (1, 2) estimates that the AI market could reach $300B by 2027 and that NVIDIA, which currently holds around 90% market share, may retain about 75% of it. A Citi analyst shares the sentiment and is even more bullish on NVIDIA’s ability to dominate the competition.

There is no doubt that NVIDIA will benefit from the growth in the industry. My concern is that the projections are optimistic, customers will find cheaper solutions, and peers will be increasingly competitive for margins and market share.

Assumptions

TAM Is Probably Lower Than Forecasts Expect

I assume that both the company’s TAM projection of $1T and the sell side’s projection of a $300B opportunity are unrealistic revenue targets for NVIDIA, as competitors and customers find more cost-effective alternatives and erode the company’s pricing power. I estimate that the 2028 total addressable market for NVIDIA is around $300B, comprising $164B in IT infrastructure, $100B for the EV market, and $20B for B2C (gaming and PC GPU) customers. My estimate is that NVIDIA can reach up to $85B in sales by 2028, comprising $15B in B2C (gaming, consumer) sales and $70B in data center, EV and AI-related revenues. This means I expect the company to grow revenue by 3.3x in 5 years and capture some 30% of its 2028 TAM.
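The TAM and capture assumptions above tie together with simple arithmetic; all inputs are the narrative's own figures in $B, and this is a sketch rather than a valuation model.

```python
# Consistency check of the 2028 TAM and revenue-capture assumptions.
tam_2028 = 164.0 + 100.0 + 20.0      # IT infra + EV + B2C = $284B ("around $300B")
revenue_2028 = 85.0                  # estimated 2028 sales, $B

capture = revenue_2028 / tam_2028    # ~ 0.30, the "some 30%" capture cited
implied_base = revenue_2028 / 3.3    # ~ $26B revenue base behind "3.3x in 5 years"
```

The ~$26B implied starting base is consistent with NVIDIA's trailing revenue at the time the narrative was originally published.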

Profit Margins To Reach 20%

I assume that NVDA will initially increase its profit margins in the next few years, but an oversupply and competitive pressures will start eroding that pricing power and the company will converge on a 20% profit margin in 2028.

Earnings Multiple To Rerate Lower 

The reduction in 2028 margin estimates, combined with the anticipation of the next cyclical sales downtrend for chips (driven by oversupply, self-developed silicon and AI solutions from hyperscalers, and exponentially increased compute power), will lead to a re-rating of the stock's multiple.

I use the price-to-earnings ratio to value NVIDIA and assume that as revenue growth expectations fall from more than 60% in the next 12 months to 15% in 2028, the PE ratio will re-rate from the current 228.5x to a ratio more in line with technology peers such as AMD, GOOGL, MSFT, AAPL and AMZN. That’s why my estimate for NVIDIA’s 2028 PE is 45x. While I considered the historical averages for the stock’s PE, I believe NVIDIA will find itself in a more competitive environment, and the higher future revenue base will make high growth difficult to sustain. This is why I think the stock will converge to a lower multiple than seen in the past 5 years.
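Combining the stated revenue, margin and multiple assumptions shows how they compound into a 2028 value. This is an undiscounted sketch using only the inputs stated above; it is not the author's full fair-value calculation, which may include other adjustments.

```python
# Undiscounted 2028 value implied by the narrative's stated assumptions.
revenue_2028 = 85.0    # $B, from the TAM section
net_margin = 0.20      # assumed terminal profit margin
pe_2028 = 45.0         # assumed re-rated earnings multiple

earnings_2028 = revenue_2028 * net_margin   # $17B net income
implied_value = earnings_2028 * pe_2028     # $765B implied market value
```

Because each factor multiplies through, a modest change in any one assumption (margin, multiple, or revenue) moves the implied value proportionally, which is why the margin and multiple estimates carry so much weight in this narrative.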

Share Count To Stay Roughly The Same

I assume that the share count will fluctuate and may increase as NVDA issues new shares to fund higher CapEx spending in order to foster the expected revenue growth. That’s why I estimate that the share count will be volatile but will net out over five years.

Risks

Nvidia May Remain the Leader in AI Hardware if Chiplet Latency Issues Are Not Addressed

Despite these challenges, Nvidia is still a well-positioned company. The global AI market is expected to grow significantly in the coming years, and Nvidia is one of the leading players in it. Additionally, Nvidia has a strong software ecosystem that supports its hardware products. The drawback of the chiplet approach is higher latency, which matters for AI workloads, so customers in this segment may prefer monolithic hardware. If chiplets in GPUs can’t be interconnected at latency levels acceptable or even negligible to customers, then NVIDIA will likely remain the leader in AI hardware. While branching off into chiplets may be difficult, as there are multiple points of friction, NVIDIA could move in via acquisitions.

Nvidia is a leading provider of hardware and software for artificial intelligence (AI)

The company's GPUs are used in a wide range of AI applications, including machine learning, natural language processing, and computer vision. Its proprietary edge is not going away, and performance per chip will keep increasing. The most insightful KPI for me is how peers react to innovation in this sector, and I will monitor the adoption of chiplet vs. monolithic GPU technology. On the software side, I would keep an eye on open-source AI algorithms, as they may end up iterating faster and making the market less profitable in the future.

Mitigating the chiplet branch

Nvidia has already taken some steps to adapt to the chiplet trend, and its acquisition of Mellanox Technologies in 2020 gives the company a leading position in the high-performance networking market. This will be important as Nvidia seeks to expand its reach into the data center market. I would also look for signs that NVIDIA is branching out to adopt some form of chiplet technology, or that its monoliths and interconnects manage to significantly improve in both performance and affordability against chiplets.

Nvidia has a strong software ecosystem that supports its hardware products

Nvidia's CUDA software platform is a key competitive advantage. CUDA makes it easy for developers to write code that runs on Nvidia GPUs, which has helped Nvidia build a large and loyal customer base. Should it manage to enter another software vertical in the AI space, it may unlock more revenue and mitigate the risk from chiplets.


Disclaimer

Simply Wall St analyst goran_damchevski holds no position in NasdaqGS:NVDA. Simply Wall St has no position in the company(s) mentioned. This narrative is general in nature and explores scenarios and estimates created by the author. These scenarios are not indicative of the company's future performance and are exploratory in the ideas they cover. The fair value estimates are estimations only, do not constitute a recommendation to buy or sell any stock, and do not take account of your objectives or your financial situation. Note that the author's analysis may not factor in the latest price-sensitive company announcements or qualitative material.
Fair Value

US$43.3

179.8% OVERVALUED

goran_damchevski's Fair Value


Current revenue growth rate

22.80%

Semiconductors revenue growth rate

0.97%

Simply Wall Street Pty Ltd (ACN 600 056 611), is a Corporate Authorised Representative (Authorised Representative Number: 467183) of Sanlam Private Wealth Pty Ltd (AFSL No. 337927). Any advice contained in this website is general advice only and has been prepared without considering your objectives, financial situation or needs. You should not rely on any advice and/or information contained in this website and before making any investment decision we recommend that you consider whether it is appropriate for your situation and seek appropriate financial, taxation and legal advice. Please read our Financial Services Guide before deciding whether to obtain financial services from us.