After Nvidia’s (NVDA) incredible quarter and sharp increase in guidance, we believe the AI chip powerhouse’s stock can climb another 14% from its record highs over the next six to nine months. That’s generally our time horizon for a Club price target, which we’re raising to $450 per share from $300. We’re keeping our rating on the stock at 2, meaning we’d wait for a pullback before making any further purchases.

Nvidia closed at $305 a share on Wednesday, ahead of a blowout after-hours earnings report that pushed shares up nearly 29% to Thursday’s all-time intraday high of $394.80, putting the company within reach of a $1 trillion market capitalization. Jim Cramer, a supporter of Nvidia since at least 2017, recently called it the Club’s second own-it-don’t-trade-it stock (Apple was the first). Jim even renamed his dog Nvidia.

Our new $450-per-share price target values Nvidia at about 45 times fiscal 2025 (calendar 2024) earnings estimates. (Nvidia has an unusual fiscal calendar; Wednesday night’s results covered the first quarter of fiscal 2024.) While 45x isn’t cheap, at just over twice the S&P 500’s current valuation, it’s only slightly above the roughly 40x multiple investors have awarded the stock on average over the past five years. In our view, that premium is more than justified by the growth ahead of Nvidia. Thursday’s latest round of upward estimate revisions is also a reminder that, in most cases, Nvidia has proven cheaper (and more valuable) than initially thought, as analysts’ estimates of its potential repeatedly turn out to be too conservative. That dynamic is playing out again now that Nvidia stands as the undisputed leader in artificial intelligence chips.
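The arithmetic behind the target can be checked directly. This is a minimal sketch using only the figures cited above (the $450 target, the 45x multiple and the $394.80 record high); the earnings-per-share estimate is simply implied by dividing the target by the multiple, not an official company or analyst number.

```python
# Valuation math from the article, as a sanity check.
record_high = 394.80    # Thursday's all-time intraday high, $/share
price_target = 450.00   # the Club's new price target, $/share
pe_multiple = 45        # target expressed as a multiple of calendar-2024 earnings

# Implied calendar-2024 earnings-per-share estimate (derived, not official)
implied_eps = price_target / pe_multiple

# Upside from the record high to the price target
upside_pct = (price_target / record_high - 1) * 100

print(f"Implied EPS estimate: ${implied_eps:.2f}")       # $10.00
print(f"Upside from record high: {upside_pct:.1f}%")     # ~14%
```

Working backward this way also shows why upward estimate revisions matter: if the EPS estimate rises, the same 45x multiple produces a higher target.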
[Chart: Nvidia’s five-year stock performance]

Jim has been praising Nvidia CEO Jensen Huang for years, not to mention covering many of the graphics processing unit (GPU) technologies already in place that allowed the company to capitalize on AI’s explosion into consumer awareness when ChatGPT went viral earlier this year. During Wednesday evening’s post-earnings call, management made it clear that it expects business to improve later this calendar year. While the company doesn’t provide official guidance beyond the current quarter, the team said that demand for generative AI and large language models “extended the visibility of our data center by a couple of quarters, and we’ve secured a much higher supply for the second half of the year.” Put simply, management appears to be signaling that second-half results will be even stronger than the first half. The demand it describes is broad-based, coming from consumer internet companies, cloud service providers, enterprise customers and even AI-focused startups.

Keep in mind, Nvidia’s first-ever data center central processing unit (CPU) is coming later this year, with management noting that the University of Bristol announced this week at the International Supercomputing Conference in Germany a new supercomputer based on the Nvidia Grace CPU Superchip, one that is six times more energy efficient than its predecessor. Energy efficiency is a key selling point: as we saw in 2022, energy is a major expense in running a data center, so anything that reduces those costs will be extremely attractive to customers looking to boost their own profitability. Omniverse Cloud is also expected to become available in the second half of the year.

At a higher level, management spoke on the conference call about the need for the world’s data centers to go through a significant upgrade cycle to meet the computational demands of generative AI applications like OpenAI’s ChatGPT.
(Microsoft, also a Club name, is a major backer of OpenAI, using the startup’s technology to power its new AI-enhanced search engine, Bing.)

“Data centers around the world are moving towards accelerated computing,” Huang said Wednesday evening. That’s a $1 trillion installed base of data center infrastructure in need of an upgrade, because it’s almost entirely CPU-based, which, as Huang noted, means it’s “essentially unaccelerated.” With generative AI clearly becoming a new standard, and GPU-based accelerated computing far more energy efficient than unaccelerated CPU-based computing, data center budgets, as Huang put it, need to shift very dramatically toward accelerated computing, and that shift is visible now.

As explained in our guide to how the semiconductor industry works, the CPU is essentially a computer’s brain, responsible for fetching instructions and input, decoding those instructions, and routing them to perform an operation that produces the desired result. GPUs, on the other hand, are more specialized and can handle many tasks at once: while a CPU processes data sequentially, a GPU breaks a complex problem down into many small tasks and executes them simultaneously.

Huang went on to say that, moving forward, data center customers’ investment budgets will focus heavily on generative AI and accelerated computing infrastructure. So over the next five to 10 years, we expect data center budgets, currently around $1 trillion, to shift heavily in Nvidia’s favor as cloud providers look to Nvidia for accelerated computing solutions. Ultimately, it’s quite simple: all roads lead to Nvidia. Every big-name company is moving workloads to the cloud, be it Amazon Web Services (AWS), Microsoft Azure or Google Cloud, and all of those cloud providers rely on Nvidia to support their offerings.

Why Nvidia? Huang argued on the conference call that Nvidia’s core value proposition is that it offers the solution with the lowest total cost of ownership.
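The sequential-versus-parallel distinction described above can be sketched in a few lines. This is purely illustrative, with no Nvidia API involved: thread workers stand in for GPU cores simply to show the decompose-and-combine pattern, in which one large problem is split into small tasks that are processed concurrently and whose partial results are then merged.

```python
# Illustrative only: CPU-style sequential processing vs. a GPU-style
# split-into-small-tasks approach. Real GPUs run thousands of hardware
# threads; here a small thread pool demonstrates the same pattern.
from concurrent.futures import ThreadPoolExecutor

def sequential_sum_of_squares(data):
    # CPU-style: handle one element after another, in order
    total = 0
    for x in data:
        total += x * x
    return total

def parallel_sum_of_squares(data, n_chunks=4):
    # GPU-style: break the problem into chunks, process them concurrently,
    # then combine the partial results
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        partials = pool.map(sequential_sum_of_squares, chunks)
    return sum(partials)
```

Both functions return the same answer; the difference is how the work is organized, which is the point of Huang’s accelerated-computing argument.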
Nvidia excels in several areas that underpin that advantage. First, it offers a full-stack data center solution. It’s not just about having the best chips, but also about developing and optimizing the software that lets users get the most out of the hardware. During the conference call, Huang singled out a network stack called DOCA and an acceleration library called Magnum IO, commenting, “These two pieces of software are among the crown jewels of our company.” He added, “Nobody ever talks about it because it’s hard to understand, but it allows us to connect tens of thousands of GPUs.” Rather than optimizing a single chip, Nvidia excels at maximizing the architecture of the entire data center, the way it’s built from the ground up so that all the pieces work in unison. As Huang put it: “It’s a different way of thinking that the computer is the data center, or that the data center is the computer. It’s not the chip. It’s the data center, and nothing like this has ever happened before. In that particular environment, your network operating system, your distributed computing engines, your understanding of the architecture of the network equipment, the switches and the computer systems, the computer structure, the whole system is your computer, and that’s what you’re trying to serve. To get the best performance, you have to understand the full stack, you have to understand data center scaling, and that’s accelerated computing.”

Utilization is another important factor in Nvidia’s competitive advantage. As Huang noted, a data center that can do only one thing, even incredibly fast, will be underutilized. Nvidia’s “universal GPU,” by contrast, can do many things, again thanks to those vast software libraries, and thus achieves much higher utilization rates.

Finally, there’s the company’s data center expertise.
On the call, Huang discussed the issues that can arise when standing up a data center, noting that some take up to a year to get running. Nvidia, by contrast, has refined the process to the point that it can measure its delivery times in weeks rather than months or a year. That’s a key selling point for customers who want to stay at the cutting edge of technology, especially as we enter this new age of AI with so much market share up for grabs.

Conclusion

Looking to the future, it’s important to realize that while ChatGPT has been an eye-opening moment, an “iPhone moment” as Huang put it, we’re just getting started. The excitement around ChatGPT isn’t so much about what it can already do as about its role as a proof of concept for what’s possible. The first-generation iPhone, which launched 16 years ago next month, was nowhere near what we have today, but it showed people what a smartphone could really be. Extending the metaphor, what we have now is the original first-gen iPhone. If you own Nvidia, and don’t trade it, as we plan to do, then impressive as generative AI applications already are, you should think less about what we have now and more about what this technology will be capable of once we reach the “iPhone 14” versions of generative AI. That’s the really exciting (and a little scary) reason to hold on to this AI juggernaut.

(Jim Cramer’s Charitable Trust is long NVDA, MSFT, AMZN, AAPL and GOOGL. For a full list of stocks, click here.) As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust’s portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade. THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER.
NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.
Nvidia CEO Jensen Huang wears his usual leather jacket.
Getty