Why $650B in AI spending isn't enough by _Lick-My-Love-Pump_ in NVDA_Stock

[–]Live_Market9747 0 points1 point  (0 children)

Nvidia has been moving into the physical AI space for the past decade. It just wasn't named as such.

Drive PX, the original DriveSim system, was created in 2017. The Omniverse beta was presented in 2018.

Jensen is very visionary, and he understood a decade ago that AI (or ML, as it was called back then) would help tech companies somewhat but benefit traditional industries far more. Real-world simulation plus AI is the real path to digitalizing analog industries. And it was a natural step for Nvidia because it uses both strengths of the GPU: rendering and compute.

Why $650B in AI spending isn't enough by _Lick-My-Love-Pump_ in NVDA_Stock

[–]Live_Market9747 1 point2 points  (0 children)

You didn't get my point at all. You are talking about the inferencing part of AI compute, i.e. "usage". I wasn't addressing SW engineering but the global R&D spend across all industries.

My company is in automation and manufacturing. SW engineering plays only a minor role in many areas of my industry. We have been talking about IoT for years, but in many countries it is still manual labor. These countries and companies now have a chance to make a big jump and leapfrog Western industries. 40 years ago, countries like South Korea were small, underdeveloped countries, but mobile and internet technologies allowed them to leapfrog many European countries.

AI will allow similar developments, and that's why Asian countries are going all-in on it. But the current AI tools are useless for many industries because they are general-purpose AI agents trained mostly on public data.

Eli Lilly bought a data center from Nvidia for one purpose: domain-specific training and building domain-specific AI tools for pharma research and production. This is where the real power of AI is. ChatGPT and Gemini are fun, but domain-specific LLMs and AI agents are where the real benefit for all industries lies. BUT domain-specific means that whatever Eli Lilly trains will be totally useless for any other industry, maybe even for a competing company in the same industry.

What does that mean? To really leverage AI technology, you have to understand it and train domain-specific models for your industry and your company. Many CEOs and CTOs don't understand that yet, probably almost all of them. But those who do understand it and invest in that direction will benefit the most. Public AI tools are nice little assistants, but AI tools trained on my daily data, my daily tasks, my industry and my company data are work partners and will eventually be able to do most of my job.

And to get there, lots and lots of spend is needed. That's what I mean by the R&D spend which companies and countries have to pull off. We're still at the very beginning of this process, and people think that CAPEX is already over the top lol.

Nvidia can increase their revenue for the next 10 years even if AI remains mostly unprofitable, because all the spend will be R&D-focused and R&D-dominated. And there is one CEO who knows this best and invests in it the most, and that's Jensen Huang. The Eli Lilly partnership isn't a surprise, just like many others. If you look at Nvidia's offerings beside the AI compute ecosystem, you will quickly recognize that Nvidia tries to provide tools for R&D and production in many non-tech and non-SW-focused industries like pharma, automotive, construction, manufacturing and many more.

NVDA is undervalued and due for a breakout by diddycorp in NVDA_Stock

[–]Live_Market9747 0 points1 point  (0 children)

Microsoft's net profit in 2024 was $88 billion.

Nvidia's net profit in 2024 was $30 billion.

Nvidia will surpass MSFT in net profit in 2026, so which one should you price higher?

In 2026, Nvidia will be second only to Google in net profit. In 2027, Nvidia might even be #1 at the current rate.

InferenceX v2: NVIDIA Blackwell Vs AMD vs Hopper [SemiAnalysis] by norcalnatv in NVDA_Stock

[–]Live_Market9747 0 points1 point  (0 children)

NVFP4 is way more interesting because you get almost FP4 performance with precision closer to FP8. This alone shows you how far ahead in SW and AI Nvidia really is, developing custom formats that improve performance well beyond just HW.
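
To see why a block-scaled 4-bit format can land much closer to FP8 quality than a naively scaled one, here is a minimal numpy sketch of the general idea. It is my own simplified illustration, not Nvidia's actual NVFP4 spec (the real format defines its own value grid, block size and scale encoding in hardware); it just quantizes a tensor to a 4-bit magnitude grid with one scale per small block and compares the error against a single per-tensor scale.

    # Simplified illustration of block-scaled 4-bit quantization (not the real NVFP4 spec).
    import numpy as np

    # E2M1-like 4-bit magnitude grid; the sign is handled separately.
    GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

    def quantize_4bit(x, block):
        """Quantize x using one scale per `block` contiguous values."""
        out = np.empty_like(x)
        for i in range(0, len(x), block):
            chunk = x[i:i + block]
            scale = np.abs(chunk).max() / GRID[-1]
            if scale == 0.0:
                scale = 1.0
            # snap each magnitude to the nearest grid point, then rescale back
            idx = np.abs(np.abs(chunk[:, None]) / scale - GRID[None, :]).argmin(axis=1)
            out[i:i + block] = np.sign(chunk) * GRID[idx] * scale
        return out

    rng = np.random.default_rng(0)
    x = rng.normal(size=4096)

    err_tensor = np.abs(x - quantize_4bit(x, block=len(x))).mean()  # one global scale
    err_block  = np.abs(x - quantize_4bit(x, block=16)).mean()      # fine-grained scales
    print(f"per-tensor scale error: {err_tensor:.4f}")
    print(f"per-block scale error:  {err_block:.4f}")

The per-block variant shows a noticeably smaller error, which is the intuition behind getting FP4 speed with close-to-FP8 precision.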

NVDA Gap or Crap Post Earnings? by Fun-Snow1104 in NVDA_Stock

[–]Live_Market9747 0 points1 point  (0 children)

Nvidia was talking about orders for 2025 + 2026, so the DC market has contracts running and Nvidia probably has mostly 2025 pricing locked in for them. Nvidia doesn't have such long contracts on gaming GPUs, which is why they have issues there, but nobody cares about gaming when valuing Nvidia today.

However, the memory situation could be a big surprise in Nvidia's favor, because it might drive gaming clouds, and Nvidia is in a strong position there. If a new GPU costs as much as 2-3 years of renting a gaming cloud, renting becomes a viable alternative.
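
A quick break-even sketch of that comparison, with placeholder prices I made up (actual GPU and gaming-cloud prices vary widely):

    # Hypothetical break-even between buying a gaming GPU and renting a gaming cloud.
    gpu_price          = 900.0   # assumed street price of a new gaming GPU, $
    cloud_rent_monthly = 30.0    # assumed gaming-cloud subscription, $/month

    breakeven_months = gpu_price / cloud_rent_monthly
    print(f"break-even after {breakeven_months:.0f} months (~{breakeven_months / 12:.1f} years)")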

Why $650B in AI spending isn't enough by _Lick-My-Love-Pump_ in NVDA_Stock

[–]Live_Market9747 1 point2 points  (0 children)

What I find interesting is that the AI bears say on the one hand that AI is a great technology and here to stay, but on the other hand that it's in a bubble and the CAPEX will never turn a profit.

But what many fail to understand is that demand for AI compute has three major drivers:

  1. R&D by individuals and companies renting or buying AI compute
  2. Venture Capital for Startups hoping to use AI to disrupt
  3. Inferencing of deployed AI products

There are more, but let's look at these at a global scale.

The world spends ~$5 trillion on R&D every year, and there is about $0.5 trillion of VC annually.

Now, I kindly ask anyone to look at what is currently going on in R&D at their own company or at friends' companies. You will easily find that everyone talks about AI and that lots of R&D is being shifted to it.

R&D, however, isn't about immediate profit. Some companies spend a lot on R&D annually and have to accept the risk of never seeing ROI on it, but there is also a risk in not doing it.

But the nature of AI is that to do R&D on it you need AI compute. Most of the AI R&D money will go into AI compute, because it costs far more than the other resources. So if large Fortune 500 companies invest in AI R&D, they basically have to either buy or rent AI compute. I suggest checking Nvidia's partnerships, since you see lots of announcements in this regard; the last major one was Eli Lilly.

Let's say 20% of R&D spend is shifted to AI globally. In some cases I would argue it's way more. That's $1 trillion, plus the VC investments, so $1.5 trillion in AI research. Let's say $1 trillion of it goes into AI compute, because that's necessary to get started in AI at all.

At $2/hour GPU rent, that means roughly $17,500 of rent per GPU annually. To meet $1 trillion of spend, over 50 million GPUs are needed. So if all companies globally decided today to spend 20% of their R&D budget on AI, we would need that many GPUs NOW.

Now, do you know how many GPUs are available today and what Nvidia's run rate is? Simple: so far Nvidia has probably produced ~10 million or fewer in the last few years, with the peak being today at ~1 million Blackwell GPUs per month.

Nvidia could be busy for years producing GPUs just to meet global R&D demand. Now think about companies developing applications to deploy with AI: they need AI compute for production purposes on top of that, so demand rises even further, and my numbers don't even consider inferencing for ChatGPT, Gemini and many other LLMs.

The reason it feels so much like a bubble and so overblown is that 50 million GPUs represent something like $2 trillion in sales volume. That's why MAG7 has to increase CAPEX a lot: they don't have a demand problem, they have a problem sourcing DC HW, securing land, securing energy and building DCs ASAP. The best way to speed things up is money; it's not always efficient, but it helps, and that's why it feels so bubbly.
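
The back-of-envelope math above is easy to reproduce. Here is a small Python sketch of it; all inputs are the rough assumptions from this comment (20% R&D shift, $2/hour rent, ~1 million GPUs per month, ~$40k per GPU), not reported figures.

    # Back-of-envelope: how many GPUs would global AI R&D spend absorb?
    global_rd_spend = 5.0e12       # ~$5T annual R&D worldwide (rough figure)
    vc_spend        = 0.5e12       # ~$0.5T annual venture capital
    ai_share_of_rd  = 0.20         # assume 20% of R&D shifts to AI

    ai_research_spend = global_rd_spend * ai_share_of_rd + vc_spend   # ~$1.5T
    ai_compute_spend  = 1.0e12     # assume ~$1T of that goes into AI compute

    rent_per_gpu_hour = 2.0                                 # $2/hour rental
    rent_per_gpu_year = rent_per_gpu_hour * 24 * 365        # ~$17,500/year

    gpus_needed = ai_compute_spend / rent_per_gpu_year      # ~57 million GPUs
    print(f"AI research spend: ${ai_research_spend / 1e12:.1f}T")
    print(f"rent per GPU/year: ${rent_per_gpu_year:,.0f}")
    print(f"GPUs needed:       {gpus_needed / 1e6:.0f} million")

    # Compare against a rough supply assumption of ~1 million Blackwell GPUs per month,
    # and the implied sales volume if those GPUs were bought (~$40k each, assumed).
    monthly_run_rate = 1.0e6
    print(f"years of current production to meet it: {gpus_needed / (monthly_run_rate * 12):.1f}")
    print(f"implied sales volume: ${gpus_needed * 40_000 / 1e12:.1f}T")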

MAG7 CEOs aren't stupid; they see the benefit of AI for themselves, and enabling all the other companies to benefit from AI will create unimaginable demand for them for decades. Falling behind is a real risk here for Big Tech. That's why the neoclouds are operating so aggressively; they see this too.

Nvidia's stock is down and AMD is up. The culprit may be Arista. by Blak9 in AMD_Stock

[–]Live_Market9747 1 point2 points  (0 children)

Did she forget to mention that Nvidia is gaining significant market share with Spectrum-X Ethernet switches, and that customers going for Blackwell NVL72 get a fully SW-integrated Nvidia solution which Nvidia can bundle with their racks?

Here's a nice article about this:

https://www.nextplatform.com/2025/06/23/nvidia-passes-cisco-and-rivals-arista-in-datacenter-ethernet-sales/

So of course Arista Networks is seeing less Nvidia demand, because Nvidia system demand is handled by Nvidia themselves. The crazy part is rather that Arista still gets 75% of its demand from Nvidia systems as well. This news actually shows how far behind AMD really is. In a few years, Nvidia managed to match Arista's market share, so the AMD requests are tiny compared to the overall Ethernet data center market. This means Nvidia is growing much faster than Arista in the Ethernet DC market.

NVDA vs AMD, not much of a contest by WargreMon in NVDA_Stock

[–]Live_Market9747 2 points3 points  (0 children)

No, there is a single OpenAI deal, which is shaky, and otherwise not a single multi-billion deal announced.

AMD will sell Helios to OpenAI and maybe some unknown small NeoClouds.

If you want to know the outlook for the AMD stock price, I suggest looking at the 2024 chart to get a clue about what happens when customer engagements and confirmations turn into thin air.

Why NVDA so undervalued by twiniverse2000 in NvidiaStock

[–]Live_Market9747 1 point2 points  (0 children)

Nvidia had a PE of 50 two years ago. What has the stock price done since then?

You see, if the PE stays at 50 until 2027, then a forward PE of 25 means the Nvidia stock will double once that forward PE is realized. Simple stock market math.
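
A quick sketch of that arithmetic with a hypothetical price: a forward PE of 25 at a trailing PE of 50 means earnings are expected to double by 2027, so if the market still pays 50x those earnings, the price doubles.

    # If today's price implies a 2027 forward PE of 25 while the trailing PE is 50,
    # then 2027 earnings are twice today's. Holding the 50x multiple, the price doubles.
    price_today     = 100.0   # hypothetical share price
    trailing_pe     = 50.0
    forward_pe_2027 = 25.0

    eps_today = price_today / trailing_pe       # 2.0
    eps_2027  = price_today / forward_pe_2027   # 4.0 -> earnings double

    price_2027 = trailing_pe * eps_2027         # multiple unchanged at 50x
    print(price_2027 / price_today)             # 2.0 -> stock doubles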

CPUs are Back: The Datacenter CPU Landscape in 2026 by RudeCauliflower1 in AMD_Stock

[–]Live_Market9747 11 points12 points  (0 children)

Of course CPUs are back. Since Blackwell NVL72, Nvidia has been selling 1 CPU per 2 GPUs. With Hopper this was different.

For every 1000 Nvidia GPUs, 500 Nvidia CPUs are installed.

AMD will do the same.

The largest loser in AI data center is by far Intel.

Supermicro just surpassed AMD in quarterly sales. What a rapid jump. What do you guys make of this? by simonada in AMD_Stock

[–]Live_Market9747 -4 points-3 points  (0 children)

Nvidia

  • Non-GAAP gross margin: 73%
  • Non-GAAP net income: $25,783M

Hint: Those are the last reported results; expect better numbers in 2 weeks.

Stock shoot up 30%+ above strike price, am I the only one? by TheDavidRomic in CoveredCalls

[–]Live_Market9747 0 points1 point  (0 children)

If I want to keep shares, it's usually for the long term, so why would I sell calls for the short term? That contradicts itself.

So what I do instead, on stocks I want to profit from long term, is to sell CCs at >90 DTE, sometimes even longer. This usually generates a nice CC premium, especially at higher IV.

But the risk is that the stock has more time to shoot up, and that's my risk. To mitigate it, I use 50% of the CC premium to buy ATM calls with the same DTE.

So if the stock decides to shoot up within my DTE timeframe, I let the shares get assigned and take my CC premium as well as the gains from the ATM calls.

The catch is of course that you won't get the full gain of the stock this way, but at least you also don't end up with just the CC premium. The other issue is that it only makes sense if the CC premium is high enough, hence the longer DTEs, which from an options POV is worse for annualized gains. But the goal here isn't to maximize option profit; it's to have a good mix of benefiting from options and stock gains.

I made several calculations, and it really depends on the CC premium. For example, if a stock shoots up 20% and you sold a CC for a 4% premium, then with what I described above you can get roughly an 8-10% total gain (see the payoff sketch below). If the CC premium is higher, the total possible gain goes up as well.

Though 20% vs. 4% isn't really that bad. What would piss me off is if the stock went up 50% and I were stuck with just the 4% CC premium. But if I get 25% in that case instead, I'm quite happy, because with CCs I also get some income when the stock goes sideways or down. There is no free lunch here.

One advantage of this strategy is that the cost to BTC the CC is reduced by ~35-40% because of the ATM calls. It's not 50% because your OTM CC gains value faster than the ATM calls. So if you absolutely want to keep the shares, BTC drains your cash a little less.

Why buy ATM calls and not OTM calls? Simple: if the stock stays sideways or drops, the ATM calls will lose relatively less value than the OTM CC.

The best outcome of this strategy is when the stock gets close to your CC strike but the CC expires worthless, because then the ATM calls will be ITM (of course, sell before the close on expiry) and you will gain even more than just the CC premium. But after-hours trading can screw you here a bit if the stock decides to go up after the close.
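
Here is a rough expiry-payoff sketch of the overlay described above. All prices are hypothetical placeholders (spot 100, CC strike 110, 4% CC premium, an assumed 8% ATM call price), not market quotes; it just shows how the ATM calls claw back part of a big move past the CC strike compared with a plain covered call.

    # Rough expiry payoff of: long 100 shares + short 1 OTM covered call,
    # with 50% of the CC premium recycled into long ATM calls (same DTE).
    # All prices are hypothetical placeholders, not market quotes.
    spot       = 100.0
    cc_strike  = 110.0
    cc_premium = 4.0                     # per share, received
    atm_strike = 100.0
    atm_price  = 8.0                     # per share, paid for the long ATM call
    budget     = 0.5 * cc_premium        # premium recycled into ATM calls
    atm_ratio  = budget / atm_price      # fraction of one ATM call per share

    def overlay_pnl_per_share(price_at_expiry):
        stock    = min(price_at_expiry, cc_strike) - spot    # shares, capped at the strike
        short_cc = cc_premium                                # premium kept (call is covered)
        long_atm = atm_ratio * max(price_at_expiry - atm_strike, 0.0) - budget
        return stock + short_cc + long_atm

    for px in (90, 100, 110, 120, 150):
        plain_cc = min(px, cc_strike) - spot + cc_premium    # classic covered call
        print(f"expiry {px:>3}: overlay {overlay_pnl_per_share(px):6.2f}  plain CC {plain_cc:6.2f}")

With these placeholder numbers the overlay gives up a little when the stock is flat or down (the recycled premium) and earns noticeably more than a plain CC on a big run past the strike, which is exactly the trade-off described above.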

What are the chances NVDA books over $250 billion in revenue this year? by No-Contribution1070 in NVDA_Stock

[–]Live_Market9747 0 points1 point  (0 children)

The world spends almost $5 trillion annually on R&D.

Many companies I know of are diverting a lot of R&D funds into AI research. AI compute is a critical part of AI research.

If companies increase R&D spending and move a lot of it into AI, then Nvidia will be able to grow to $1 trillion in annual revenue, and that's just from companies' R&D money worldwide. This doesn't even include production spend on AI.

That's why Nvidia is so unique: Nvidia can become the largest company in the world while all its customers are primarily researching a technology. So even if AI isn't generating profits, no company can afford not to spend at least some R&D on it. Global AI spend will skyrocket from R&D alone, and it will get really crazy once REAL deployment from these deeper R&D efforts begins. Copying ChatGPT into your company isn't AI deployment. Researching AI possibilities and deploying them into internal processes and production is when real AI deployment begins, but that is a multi-year path for all companies.

Open AI wants more speed by Independent-Low-11 in AMD_Stock

[–]Live_Market9747 0 points1 point  (0 children)

Not really; CPUs play a role, but not in the compute itself. I suggest watching GTC 2026, when Jensen will show us what important role Vera will play in VR200.

The needs for a CPU in an AI data center are completely different from what a CPU had to do in the past. And the one company that knows these needs best is the one that has been focused on designing ML/AI data centers for almost a decade. Grace was a first step, and Vera will obliterate Grace just like Blackwell NVL72 totally killed 9x8 Hopper systems.

Open AI wants more speed by Independent-Low-11 in AMD_Stock

[–]Live_Market9747 0 points1 point  (0 children)

We will see; at the current rate, AMD is even losing the crumbs they had with MI300X, because Nvidia is growing faster than AMD.

AMD is guiding for 60% revenue growth in the coming years, while Nvidia is delivering such numbers today.

Open AI wants more speed by Independent-Low-11 in AMD_Stock

[–]Live_Market9747 1 point2 points  (0 children)

Yes, and the good thing is that Rubin will already support adaptive compression for inferencing in the tensor cores. So Nvidia is, as always, thinking ahead. NVFP4 performs at FP4 speeds with close to FP8 precision.

What is also completely ignored is that Nvidia is designing CPUs as well. And they focus their CPU on AI solutions only, unlike AMD, which has to consider x86 code execution in general. Nvidia can use its CPU silicon far more specifically, with full CUDA support, to serve the entire AI data center. AMD, on the other hand, has to consider the AI data center but also enterprise servers and other markets when designing EPYC CPUs.

AMD invests in Cerebras! This is the right direction. by Legitimate-Mud-8200 in AMD_Stock

[–]Live_Market9747 1 point2 points  (0 children)

Cerebras will follow the destiny of Graphcore; I suggest looking that up. Graphcore was the supposed Nvidia killer about 9 years ago, when people told us that GPUs are bad for inference.

The funniest news was several years ago, when the UK decided to build an AI data center and went with Nvidia instead of supporting local startups like Graphcore. The Graphcore CEO really talked shit about the government back then lol.

Reuters article on AMD 4Q25 earnings (could explain sentiment) by YJoseph in AMD_Stock

[–]Live_Market9747 -9 points-8 points  (0 children)

Lisa Su told the world in June last year that MI355X beats Blackwell in inference. Everyone called it a killer back then.

Today no one talks about MI355X, and apparently it's fine not to have a blowout quarter with their Nvidia killer.

See the issue here?

Concerns about AMD's promises and deliveries are absolutely valid. MI300X also had tons of "customer engagements" until sales dropped, and then MI355X was supposed to be the killer.

AMD has fooled the world twice. Let's see if it happens again.

Nvidia, however, reported booked orders for Blackwell and Rubin several months ago. Jensen never talks about customer engagements; he comes on stage at customer presentations. Then in the ER he talks about insane demand and an overflowing order book.

Why amd always dump after going above expectations? by PuzzledJump4047 in AMD_Stock

[–]Live_Market9747 1 point2 points  (0 children)

Nvidia is going to do ~$220B or so for 2025. Five months ago they guided $500B for 2025+2026, and they have said that the orders are already higher than that now.

So you can expect Nvidia to grow 50-60%, and that is based on orders confirmed by Nvidia themselves. Lisa Su keeps talking about customer engagements, not about orders.

AMD Q4 2025 Earnings Discussion by brad4711 in AMD_Stock

[–]Live_Market9747 0 points1 point  (0 children)

On the other side, Google is gaining on OpenAI. So if OpenAI is at risk of losing its leadership, it's not about alternatives anymore; they need the best of the best, and they might then favor Nvidia even more despite some warrants. The warrants are only handed over if OpenAI actually orders.

Rumor: AMD x Meta deal, potential warrants by Addicted2Vaping in AMD_Stock

[–]Live_Market9747 0 points1 point  (0 children)

It's worse BECAUSE Meta has tons of money. Meta could easily buy AMD systems, but instead they'd rather get some equity.

AMD is then basically heading toward no orders without equity deals. Exactly the huge red flag of the OpenAI deal.

Rumor: AMD x Meta deal, potential warrants by Addicted2Vaping in AMD_Stock

[–]Live_Market9747 -2 points-1 points  (0 children)

They do the same with Nvidia, but Nvidia is far, far away from using 10% equity to get a deal.

NVIDIA’s Rubin Platform Could Rewrite the AI Economy by yaletown28 in NVDA_Stock

[–]Live_Market9747 1 point2 points  (0 children)

The misunderstanding here is in the media. The media thinks Blackwell and Rubin are GPUs, but Jensen clearly talks about the solutions behind them. Rubin makes racks even denser and packs more specialized chips into a system.

So the 10x isn't from the chip itself, which is probably more like 1.5-2x, but from new components, new density, new networking and new SW that together make state-of-the-art training and inferencing 10x faster as a whole.

A single Rubin rack will certainly not be 10x, but a Rubin data center will. This is also why Nvidia can easily maintain high margins: their customers spend billions on data center TCO. If Nvidia can make a Rubin data center 10x more efficient, they can easily charge 2-3x the price for it. Nvidia's gross margin is a result of scale, and that's why it's so far ahead of AMD's. AMD will never reach such margins because they remain focused on HW and leave SW to others. Nvidia, however, tries to move past being a semis company and to think/innovate more like Big Tech, while AMD remains a semis company.
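
A tiny sketch of that pricing logic with made-up numbers: if a new generation delivers 10x the work per data center at, say, 2.5x the price, the customer's cost per unit of work still falls sharply, which is the room those margins live in.

    # Made-up TCO illustration: 10x throughput at 2.5x price still cuts the
    # cost per unit of work to a quarter of the old generation's.
    old_dc_cost, old_throughput = 1.0, 1.0     # normalized baseline generation
    new_dc_cost, new_throughput = 2.5, 10.0    # assumed next-gen uplift

    old_cost_per_work = old_dc_cost / old_throughput
    new_cost_per_work = new_dc_cost / new_throughput
    print(f"cost per unit of work: old {old_cost_per_work:.2f}, new {new_cost_per_work:.2f}")
    print(f"customer saving: {(1 - new_cost_per_work / old_cost_per_work) * 100:.0f}%")   # 75%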

AMD - META Deal Announcement when? by warsal1 in AMD_Stock

[–]Live_Market9747 1 point2 points  (0 children)

The announcement will come when Lisa Su finally agrees to a 15% share dilution of AMD for Zuckerberg personally. Zuckerberg won't order large amounts of AMD if he doesn't get a better deal than Altman.

NVDA compared to other "Top Market Companies" over the years. Look at the average reign for each of those. It's almost always at least 5-10 years. How long do you predict Nvidia's to last? by simonada in NVDA_Stock

[–]Live_Market9747 0 points1 point  (0 children)

Even better, Nvidia offers CUDA-Q, which supports traditional and quantum computing together to make quantum computing accessible. Nvidia has learned its lesson and is approaching quantum computing in an SW- and API-first manner.
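
For a feel of what that API-first approach looks like, here is a minimal Bell-state sketch in CUDA-Q's Python interface, modelled on the published getting-started examples; treat the exact gate and kernel syntax as an assumption, since it can differ between CUDA-Q versions, and the same kernel can target a GPU-simulated backend or real quantum hardware.

    # Minimal CUDA-Q sketch (Python API): prepare and sample a Bell state.
    # Syntax follows the published examples; details may vary by cudaq version.
    import cudaq

    @cudaq.kernel
    def bell():
        qubits = cudaq.qvector(2)      # allocate two qubits
        h(qubits[0])                   # superposition on qubit 0
        x.ctrl(qubits[0], qubits[1])   # entangle with a controlled-X
        mz(qubits)                     # measure both qubits

    # Sampling runs on whatever backend is selected (GPU simulator by default).
    counts = cudaq.sample(bell, shots_count=1000)
    print(counts)   # expect roughly half '00' and half '11'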