Announcement on 10 September, 2024
Q2 Earnings Analysis And Valuation Revision
- Nvidia reported Q2 revenue of $30B, up 15% from Q1 and 122% YoY, beating the $28B guidance. Earnings per share of $0.68 beat expectations of $0.64.
- Management guided Q3 revenue to $32.5B, implying 80% YoY growth over the $18.12B reported in Q3’23. This also means growth will decelerate from the latest 122% to 80% in Q3.
- The data center business, including AI processors, drove most of the sales, climbing 154% from last year and accounting for 88% of total sales at $26.3B. Nvidia’s Hopper and networking platforms were the main contributors to the segment. The largest customer group was cloud service providers, representing 45% of data center revenue.
- At a price point around $29,000 per H200 GPU, Nvidia would hypothetically need to sell around 907K units to achieve the latest Q2 data center revenue of $26.3B. Note that this is a hypothetical calculation and assumes that all of that revenue came from H200 GPUs.
- The CEO noted that Nvidia's next-generation AI chip, Blackwell, is highly anticipated, with samples shipped in the quarter and a production ramp-up expected in Q4. Supply of the current-generation Hopper H200 chip is ramping up and becoming more available.
- Nvidia's gaming revenue increased by 16% and the professional visualization business rose 20%, contributing to the overall growth.
- The company announced a $50B share buyback, reflecting its strong financial performance. At an average price of $117 per share, the company would be able to buy back 427.35M shares, amounting to 1.74% of the 24.587B shares outstanding. While the absolute value of the buyback is enormous, it provides only a relatively small return to shareholders around current price levels.
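As a quick sanity check, the two implied figures in the bullets above (the hypothetical H200 unit count and the buyback share count) follow from simple division. The $29,000 price point, $117 average price, and 24.587B shares outstanding are the assumptions stated in the text:

```python
# Back-of-envelope checks of two figures from the bullets above.
dc_revenue = 26.3e9      # Q2 data center revenue, USD
h200_price = 29_000      # assumed price per H200 GPU, USD
implied_units = dc_revenue / h200_price
print(f"Implied H200 units: {implied_units:,.0f}")       # ~907,000

buyback_usd = 50e9       # announced buyback, USD
avg_price = 117.0        # assumed average repurchase price, USD
shares_outstanding = 24.587e9
shares_repurchased = buyback_usd / avg_price             # ~427.35M shares
pct_of_shares = shares_repurchased / shares_outstanding  # ~1.74%
print(f"Buyback: {shares_repurchased / 1e6:,.2f}M shares, "
      f"{pct_of_shares:.2%} of shares outstanding")
```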
My Earnings Call Takeaways
During the prepared remarks, management noted that next-generation models will require 10x to 20x more compute to train and significantly more data as inputs. I view this statement as strategically inserted to maintain investor enthusiasm; conversely, future improvements in AI may be marginal even with larger models relying on more raw power. The remark that Hopper is now widely available may signal to the market that it is no longer supply constrained and that hardware prices may start coming down.
In the Q&A session, management was asked about the ROI of GPU infrastructure investments for customers. Essentially, the CEO was asked why it makes sense for companies to buy all of this new AI infrastructure.
The CEO answered with multiple points and came back to the question during a follow-up. At the risk of partly misunderstanding his answer, here is my interpretation:
New GPUs have among the highest ROI because they are much more compute- and power-efficient than CPUs. This creates an incentive for companies (especially cloud providers) to replace their $1T+ CPU infrastructure with brand-new GPUs. Conversely, doesn’t it follow that if new-generation infrastructure is hypothetically 5x more performant, cloud providers could produce the same compute with one-fifth of the hardware (though not one-fifth of the cost, because Nvidia GPUs are currently priced at a premium)?
The assumption here is that compute demand will persist and possibly accelerate on the back of a need for increased capacity in training new generation AI models. Additionally, GPUs are indifferent to their use case, and can perform whether it's for AI training, inference, or traditional purposes like data processing and servers.
I partly agree with the assumption that compute demand will keep increasing over time, but perhaps not to the extent that Nvidia's management would like. It may well turn out that GPUs can be seamlessly reoriented away from AI to general-purpose compute should the demand landscape change. This is great for Nvidia, as it would indeed have the opportunity to capture a share of the estimated $1T worth of cloud infrastructure. Note that in this scenario we need to factor in the advancement of in-house chips from cloud providers, as well as competitors, who, while not offering the same performance, may offer better value for their chips.
However, this raises the question: are cloud providers overprovisioning compute power, and are they overpaying a company making 75% gross margins on hardware?
Finally, the CEO seems to have glossed over a key point of the first question, which was likely about the ROI to end customers, not just the cloud providers that sell compute infrastructure. Here, I find it more difficult to see a high-ROI use case for the majority of companies and startups using AI. Some services, like customer support, communication streamlining, AI advertising content, AI recommendations, and knowledge summarization, are now widely enhanced with AI; however, the companies implementing them may not be generating the ROI needed to incentivise continued AI capex spend.
During all of the Q&A, the underlying assumption was that corporations & startups will keep working on AI applications, and thus keep renting more compute capacity from cloud providers.
Forward Value Upgrade To $2 Trillion
In my last update I noted that 2024 will be the year of the use case for AI. While AI spending increases as cloud providers continue to upgrade their infrastructure, I remain unconvinced that profitability for end customers is high enough to warrant continued GPU infrastructure capex at these hardware prices.
Extrapolating from the $32.5B Q3 revenue forecast and applying the 15% QoQ growth rate seen from Q1 to Q2, I get a $126B revenue forecast for 2024. This would mean the company is trading around 24x forward revenue, or 40x forward earnings based on my estimated 2024 net income of $70B.
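The full-year figure can be sketched as follows; the Q1 number is backed out from Q2's reported 15% QoQ growth, and the Q4 number extends the guided Q3 at the same 15% rate (the quarterly path is my assumption, not guidance):

```python
# Full-year revenue extrapolation, all figures in $B.
q2 = 30.0        # reported Q2 revenue
q1 = q2 / 1.15   # ~26.1, implied by Q2's 15% QoQ growth
q3 = 32.5        # guided Q3 revenue
q4 = q3 * 1.15   # ~37.4, extrapolated at the same 15% QoQ rate

fy_revenue = q1 + q2 + q3 + q4
print(f"FY revenue estimate: ~${fy_revenue:.0f}B")  # ~$126B
```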
I am changing my narrative to reflect an expectation of holding a 55% market share of my estimated $300B SAM in 2028 for Nvidia. This is an upgrade from my expected 30% market share when I initially published the narrative one year ago. The reason for this is that Nvidia indeed demonstrated that it can sustain technological dominance, while competitors have failed to catch up. The upgrade results in my new estimate for revenues of $165B in 2028. These are not projections shared by most analysts, but I find it difficult to justify a continued growth extrapolation after 2026.
I am also upgrading my net profit margin estimate from 20% to 35%, yielding $57.7B in net profit in 2028. This means I expect current profitability levels to prove unsustainable and to slowly decline over the next five years. At a PE of 35x, the forward value of Nvidia becomes $2T, some 50% higher than my previous estimate.
Discounting back at an 8.3% rate, I get a present value of $1.47T or $57 per share (assuming 24 billion shares).
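The discounting step is the standard present-value formula, PV = FV / (1 + r)^n. A 4-year horizon (2024 to 2028) is my assumption here; small differences from the figure above come from rounding and the exact horizon and compounding convention used:

```python
# Discounting the 2028 forward value back to today.
fv = 2.0e12   # 2028 forward value, USD
r = 0.083     # discount rate
n = 4         # assumed years to 2028

pv = fv / (1 + r) ** n
print(f"Present value: ~${pv / 1e12:.2f}T")  # ~$1.45T
```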
Investors who believe that the company will keep up or accelerate growth can easily justify the current market value of Nvidia, since the company is projected to surpass $105B in net income by 2027. Using this estimate, at a 30x PE, we get a $3T market cap. But alas, I am not one of those investors.
Key Takeaways
- NVIDIA facing challenges, 30% data center market share at risk from chiplets
- AMD, Samsung, Intel entering GPU manufacturing race at cheaper price points
- Largest customers producing own chips, limits market for NVIDIA
- Generative AI cloud solutions introduce revenue cannibalization risk
- Gaming market revenue share at risk of stagnating
Catalysts
Company Catalysts
Accelerating GPU Performance Will Reduce Profitability
There is an argument to be made that the rapidly accelerating performance of GPUs will unlock markets in the next 3 to 5 years, but the same technology can lead to oversupply of compute power in the long term.
As recently as 2021, there was a shortage of compute power for use cases like cryptocurrency mining, which was promoted as a key growth avenue. However, when crypto prices fell, the market saw a decline in GPU margins, swinging investor sentiment from bullish to bearish on industry stocks, including NVIDIA. In the short gap after the crypto downfall and before the emergence of the AI trend, investors stopped evaluating NVIDIA's future potential and flipped to analyzing the company based on its current performance. This may have been part of the reason why some portfolio managers, like Cathie Wood, sold their NVDA stock.
Now, NVIDIA is experiencing a “second coming” as investors price-in the potential of AI technology. This is backed by the accelerated computing innovation leading to an exponential increase in compute power that can be used in all kinds of AI and data center applications.
However, extrapolating this factor a few years further into the future, I arrive at the assumption that as GPUs become more powerful, they will inevitably become more cost-effective. This will lead to a runaway supply of compute power while businesses and consumers struggle to find use cases that keep up. This is why we will increasingly see CEOs flip their rhetoric from the capability of their products to the possible use cases for the new technology.
NVIDIA’s H100 vs the A100 Series Performance on AI Tasks
The end result of this scenario is an expanding market but shrinking profit margins, which converge toward the margins of the cheapest peer. I say this because there are no luxury compute machines, just as there is no luxury electricity. While companies are attempting to shield their customer base with proprietary technology, these walls will erode over time, possibly faster than we realize. One of these peers may be AMD, but even cheaper alternatives may come from hyperscalers like Amazon’s AWS, which produce their own chips and can vertically integrate their hardware with software infrastructure offerings.
The Difference in Selling GPUs to Consumers and Hyperscalers
As a result of innovation, semiconductor designers are under pressure to differentiate their products. They need to find ways to make their chips more powerful, efficient, or cost-effective than the competition. Otherwise, they risk losing customers to hyperscalers who are willing to develop their own chips.
Hyperscalers are large cloud computing providers that offer a wide range of services, including computing, storage, and networking. Some of the most well-known hyperscalers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.
As opposed to consumers who use a GPU for gaming or business, hyperscalers have more control over procurement pricing and can negotiate discounts from vendors like NVIDIA. Ultimately, these hyperscalers may choose to produce their own chips and close off a whole market for a vendor, as when Apple moved to produce its own M1 chips and cut Intel out of Apple products. This happened partly because Apple managed to develop chips superior to what Intel could offer, which is why it is important for NVIDIA to stay on the edge of innovation, or risk being shut out by customers.
Note, NVIDIA is currently the top player in the space, however a few years from now, companies may have access to comparative technology at lower prices.
Currently, key customers are developing their own chips in order to vertically integrate their products. Hyperscalers like Google, Amazon, Microsoft, and Oracle know their environments better than pure semiconductor companies do. This gives them the ability to design chips that are optimized for their specific needs.
We already have examples of hyperscalers vertically integrating and developing their own chips, and it is not a long jump before more of them produce their own GPUs:
- Amazon Web Services (AWS) uses its own Graviton chips, which are based on the ARM architecture.
- Microsoft Azure uses its own custom-designed FPGAs (field-programmable gate arrays) for some of its workloads.
- Google Cloud Platform is developing in-house data-center chips, and already uses its own custom-designed TPUs (Tensor Processing Units) for machine learning workloads.
- Meta is developing its own chips for a variety of workloads, including machine learning and video streaming.
In summary, I view NVIDIA's core market as exposed to demand from hyperscalers, which are in the end stages of validating demand for the cloud business and will strengthen the vertical integration of their services in order to cut costs; making their own chips is the rational next step for large cloud companies.
Monolithic vs. Chiplet GPUs
Nvidia has been a pioneer in the Graphical Processing Unit (GPU) and now the AI space, but the company is facing some challenges that could threaten its dominance. One of the biggest is the rise of chiplets, which are smaller, modular components that can be manufactured separately and then interconnected to form a complete processor.
This approach has several advantages over the monolithic chip design that Nvidia currently uses. Monolithic chips are large and complex, and they can be difficult to manufacture. As process nodes shrink, it becomes even more difficult to produce monolithic chips without introducing defects, and a single defect can render the whole chip unusable, leading to increased production costs.
STH: Difference Between Monolithic and Chiplet Design Approaches
We can think of monolithic chips vs. chiplets like old big-tube TVs vs. modern flat-screen OLED TVs. If one pixel of the OLED TV breaks, the rest of the TV still functions, while if something breaks in the older TV, you would need to repair the whole system. This is not a comparison of performance, but of the concept of connecting many small components vs. dealing with one big device.
If Nvidia doesn’t adapt, it could be disrupted by competitors who are already using this technology. AMD is one company that is using chiplets, giving them a significant advantage over Nvidia in terms of manufacturing costs and efficiency.
A somewhat recent NVIDIA paper explored the benefits of many connected GPUs instead of the monolithic design and noted that:
“Most importantly, the optimized MCM-GPU design is 45.5% faster than the largest implementable monolithic GPU, and performs within 10% of a hypothetical (and unbuildable) monolithic GPU.”
This indicates that the company is aware of the possible limitations of monolithic technology, but it seems to be betting that its technology will be more than enough to satisfy a large portion of the industry's compute needs. I largely agree with this bet, but think that margins will suffer as competitors introduce cost-effective alternatives.
The AI Race Has Just Started
Another challenge facing Nvidia is the rapidly accelerating pace of AI development. As AI technology continues to improve, the demand for high-performance GPUs will increase. This could put pressure on Nvidia's margins, as the company will need to invest heavily in research and development to stay meaningfully ahead, while competitors bet on more cost-effective solutions.
NVIDIA is a leader in this field and there is a good reason why. The combination of NVIDIA's DGX systems and its powerful GPUs is making it the go-to solution for AI training and inference in the cloud. NVIDIA's DGX systems are so powerful because they are equipped with multiple GPUs that are designed for parallel processing, which means that they can perform many calculations at the same time. This makes them ideal for AI workloads, which can be very computationally intensive.
The DGX H100 and DGX GH200 are the latest and most powerful DGX systems from NVIDIA. The DGX H100 is equipped with eight H100 GPUs, giving these systems petaflops of AI compute performance, more than many supercomputers of the recent past, and allowing them to handle even the most demanding AI workloads.
NVIDIA is a company that has been ahead of the curve in the AI space. However, the company is now facing some challenges that could threaten its dominance:
- Peers like AMD, Samsung and Intel are increasingly entering the GPU manufacturing race at cheaper price points.
- As mentioned above, larger customers are moving to produce their own chips, limiting the potential market of NVIDIA.
- NVDA’s generative AI cloud solutions will allow smaller customers to utilize NVDA’s compute capabilities to satisfy their market needs, effectively cannibalizing some of NVDA’s hardware revenue. The company has to provide this offering in order to retain market share; otherwise, peers will offer a similar service.
Stagnant Gaming Market Share
Despite Nvidia’s dominant position in the gaming industry with its GeForce GPUs, it faces competition from other players like AMD and Intel. In terms of raw hardware performance, these companies lag, but as reliance on accelerated computing increases, they have the ability to bridge that performance gap through software integration that Nvidia cannot match, e.g. the interplay between AMD CPUs and GPUs (or Intel’s CPUs and Arc GPUs).
The gaming market is still projected to grow to about $546B in 2028, with most of the growth expected to come from mobile gaming, so it may not be equally reflected in NVIDIA’s graphics segment, and NVIDIA’s gaming revenues may stagnate. NVIDIA is already the market leader in gaming GPUs, and while segment sales will grow, it doesn’t have much room to improve market share, while peers are moving into the segment more aggressively.
NVIDIA Omniverse’s Adoption Limits
Nvidia's Omniverse platform is a powerful real-time simulation and collaboration platform for 3D design. Nvidia positions it as something that will revolutionize the workplace, but people may be resistant to digitizing their workplace, and the technology may be ahead of a viable business case.
There are a few other factors that could limit the adoption of Nvidia's Omniverse platform. These include the high cost of the platform, the complexity of the software, and the lack of user-friendly documentation.
Geopolitical Tensions Mean Manufacturing Risks Are High
A steep rise in the stock price is a bet on a company’s future, but it is also a signal from investors saying “Hey, we think you are the right company for the job”. To that end, investors are electing the company to bring about innovation and growth.
This is difficult to pull off with the existing research and manufacturing capacity (NVIDIA designs chips, which are produced by Taiwan Semiconductor Manufacturing Company), and the company will need to ramp up R&D and secure more manufacturing capacity or additional foundry partners.
Given the geopolitical risks surrounding Taiwan, the company may reduce its production risk by investing in its own onshore production capacity, finding local partners, or looking for partners in other countries, as Apple has done.
In the end, not making a move may be more expensive than the possible up-front investment, especially given that NVIDIA’s current $1.15T market cap relies heavily on the company not stumbling in this respect.
NVIDIA’s Entry Into the EV Market is Experiencing a Counter From Tesla
NVIDIA's autonomous DRIVE platform is a suite of software and hardware solutions for developing and deploying autonomous vehicles. The company has secured customers that use its DRIVE platform, such as Volvo, Nio, XPENG, Zoox, and SAIC.
NVIDIA’s Growing Drive Pipeline
Tesla went for a lose-lose scenario against NVIDIA (and Apple) by testing the waters for licensing its self-driving software and hardware technology. I assume that part of the motivation is to retain market share, discourage investment in R&D for the technology, and shrink the profitability projections for the hardware and software components of the segment, i.e. to retain profitability only for the fully vertically-integrated EV solution.
In its latest earnings press release, NVIDIA outlined that $1T worth of data center and AI-related market value will transition from general-purpose to accelerated computing. Within this, the automotive opportunity is $300 billion, and Nvidia sees it as a valuable growth pathway. That goal is further out; nearer term, the company is targeting a $14B design pipeline from future customers.
There will be significant competition for the mentioned revenues, and it seems that Tesla wants to block NVIDIA before it can start establishing partnerships in the EV market.
Industry Catalysts
Data Center Demand Will Be A Core Revenue Driver
The growth of the data center market is a major driver of demand for GPUs. GPUs are used in data centers for a variety of computationally intensive tasks, including machine learning, artificial intelligence, and cloud computing.
The data center market is estimated (1, 2) to reach $565.5B by 2032, growing at a 7.3% CAGR from a $280B base in 2022. About 35% of the spending is estimated to come from the IT infrastructure segment, which includes the compute, servers, storage, and networking equipment that powers applications like computing, analytics, IoT, and machine learning & AI. This means that NVIDIA’s data center revenues may be exposed to a $200B market by 2032, or $164B in 2028.
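The market projection above is a straight CAGR compounding exercise, which can be verified in a few lines (the exact 2028 figure depends on the growth path assumed between 2022 and 2032):

```python
# Compounding the cited $280B 2022 base at the cited 7.3% CAGR.
base_2022 = 280.0    # $B
cagr = 0.073
infra_share = 0.35   # share of spend attributed to IT infrastructure

market_2032 = base_2022 * (1 + cagr) ** 10   # ~$566B, near the $565.5B cited
infra_2032 = market_2032 * infra_share       # ~$198B, i.e. the ~$200B exposure
print(f"2032 market: ~${market_2032:.0f}B; infrastructure: ~${infra_2032:.0f}B")
```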
As the data center market grows, GPU manufacturers will need to invest in research and development to develop new and more powerful GPUs. They will also need to expand their manufacturing capacity to meet the growing demand. An exception to this is accelerated computing - which is the technology that NVIDIA is betting on.
Should this become a reality, most CapEx spending will shift away from expanding production capacity toward R&D, since accelerated compute engines produce exponentially better performance per chip but remain a cutting-edge technology requiring further development and optimization. This has the potential to increase the global supply of compute power ahead of demand, which may negatively impact NVIDIA’s revenue and margins, since customers will be able to do much more work with fewer machines.
Cyclicality Impact Will Lead To Lumpy Revenues
High-quality hardware with higher sales will amplify chip cyclicality, as companies will satisfy their compute needs with a single purchase that may take years to become obsolete. However, if performance keeps accelerating, customers will have an additional reason to purchase the newest hardware in order to keep up with the competition.
It may be a dangerous extrapolation to assume that demand for chips backed by accelerated computing and new industry needs like AI will continue to be as strong. To me, the more reasonable assumption is that once hyperscalers satisfy their demand for accelerated computing, they will reduce their demand to maintenance needs and start a new demand cycle only when the next gen technology is differentiated enough in performance. This would mean that companies like NVDA will get a large boost in sales, but it will also quickly slump as they fulfill demand.
The counterpoint to this is that there is demand for years to come, but I see that as a marketing move to sustain high pricing; in reality, companies will likely surprise positively on their capacity to ship products, and foundries are already expanding production capacity (1, 2).
Hardware and Software Companies Will Benefit from AI Growth
The global AI market is projected to grow 39% annually and reach $357 billion in 2027. This growth is being driven by the increasing adoption of AI-powered solutions across a wide range of industries. The two key beneficiaries of this growth are likely to be hardware manufacturers and software companies.
Hardware manufacturers, such as NVIDIA, are making processing units that are specifically designed for AI applications. Software companies, such as Google, are integrating AI functionality into their existing offerings and developing new innovative technologies.
A Mizuho (1, 2) analyst estimates that, as NVIDIA currently holds around 90% market share in the industry, it may be able to retain 75% and reach $300B in AI-related revenue by 2027. A Citi analyst shares the sentiment and is even more bullish on NVIDIA’s ability to dominate the competition.
There is no doubt that NVIDIA will benefit from the growth in the industry. My concern is that the projections are optimistic, customers will find cheaper solutions, and peers will be increasingly competitive for margins and market share.
Assumptions
TAM Is Probably Lower Than Forecasts Expect
I assume that both the company’s TAM projection of $1T and sell side’s projections of a $300B opportunity are unrealistic revenue targets for NVIDIA as competitors and customers find more cost effective alternatives, eroding the company’s pricing power. I estimate that the 2028 total addressable market for NVIDIA is around $300B, comprising $164B in IT infrastructure, $100B for the EV market, and $20B for B2C (gaming and PC GPUs) customers. My estimate is that NVIDIA can reach up to $85B in sales by 2028, comprising $15B in B2C (gaming, consumer) sales and $70B in data center, EV and AI related revenues. This means that I expect the company to grow revenue by 3.3x in 5 years and capture some 30% of its 2028 TAM.
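The TAM composition and implied market share above can be tallied directly; the figures are my assumptions from the paragraph (the component sum of $284B is what gets rounded to "around $300B"):

```python
# Tallying the assumed 2028 TAM components and implied market share, in $B.
tam = {"IT infrastructure": 164.0, "EV": 100.0, "B2C": 20.0}
sales = {"B2C": 15.0, "data center / EV / AI": 70.0}

total_tam = sum(tam.values())       # 284, rounded to ~$300B in the text
total_sales = sum(sales.values())   # 85
implied_share = total_sales / total_tam
print(f"TAM: ${total_tam:.0f}B, sales: ${total_sales:.0f}B, "
      f"share: {implied_share:.0%}")  # ~30%
```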
Profit Margins To Reach 20%
I assume that NVDA will initially increase its profit margins in the next few years, but an oversupply and competitive pressures will start eroding that pricing power and the company will converge on a 20% profit margin in 2028.
Earnings Multiple To Rerate Lower
The reduction in 2028 margin estimates, as well as anticipation of the next cyclical sales downtrend for chips, driven by oversupply, hyperscalers’ self-developed silicon and AI solutions, and exponentially increased compute power, will lead to a re-rating of the stock’s multiple.
I use the price-to-earnings ratio in pricing NVIDIA and assume that as future revenue growth expectations fall from more than 60% in the next 12 months to 15% in 2028, the PE ratio will re-rate from the current 228.5x to a ratio more in line with technology peers such as AMD, GOOGL, MSFT, AAPL, and AMZN. That’s why my estimate for NVIDIA’s 2028 PE is 45x. While I considered the historical averages for the stock’s PE, I view that NVIDIA will find itself in a more competitive environment, and the higher future revenue base will make high growth difficult to sustain. This is why I think the stock will converge to a lower multiple than seen in the past 5 years.
Share Count To Stay Roughly The Same
I assume that the share count will fluctuate and may increase due to issuing new shares as NVDA takes on higher CapEx spending to foster the expected revenue growth. That’s why I estimate that the share count will be volatile but will net out over five years.
Risks
Nvidia May Remain Leader in AI Hardware if Chiplet Latency Issues Not Addressed
Despite these challenges, Nvidia is still well positioned. The global AI market is expected to grow significantly in the coming years, and Nvidia is one of the leading players. Additionally, Nvidia has a strong software ecosystem that supports its hardware products. The drawback of the chiplet approach is higher latency, which matters for AI, so customers in this segment may prefer monolithic hardware. If chiplets in GPUs can’t be interconnected with latency levels acceptable or even negligible to customers, then NVIDIA will likely remain the leader in AI hardware. While branching off to chiplets may be difficult given multiple points of friction, NVIDIA could move in via acquisitions.
Nvidia is a leading provider of hardware and software for artificial intelligence (AI)
The company's GPUs are used in a wide range of AI applications, including machine learning, natural language processing, and computer vision. Its proprietary edge is not going away, and performance per chip will keep increasing. The most insightful KPI for me is how peers react to innovation in this sector, monitoring the adoption of chiplet vs. monolithic GPU technology. On the software side, I would keep an eye on open-source AI algorithms, as they may end up iterating faster and make the market less profitable in the future.
Mitigating the chiplet risk
Nvidia has already taken some steps to adapt to the chiplet trend, and its acquisition of Mellanox Technologies in 2020 gives the company a leading position in the high-performance networking market. This will be important for Nvidia as it seeks to expand its reach into the data center market. I would also look for signs to see if NVIDIA starts branching out to adopt some form of chiplet technology, or if their monoliths and connectors manage to significantly improve in both performance and affordability against chiplets.
Nvidia has a strong software ecosystem that supports its hardware products
Nvidia's CUDA software platform is a key competitive advantage. CUDA makes it easy for developers to write code that runs on Nvidia GPUs. This has helped Nvidia to build a large and loyal customer base. Should it manage to enter another software vertical in the AI space it may unlock more revenue and mitigate the risk of chiplets.