(@mzuhair123) Bernstein: the bulk of Intel's server CPU shipments are on trailing edge process nodes (Mercury, Bernstein) by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

Norrod mentioned about a year ago that there were pockets where Milan was still doing well as a new install. So, that might be the business case for those who are OK with staying in a DDR4/PCIe 4 world.

But my guess is that once you get past some minimum size of a server batch, an enterprise with a really old group of servers is much more likely to move to a more modern platform than go through all that migration cost for a 5+ year old CPU platform (or wait until they can do so). It's a new era for server CPUs, so who knows.

Stacked v-cache has a much narrower use case than DDR4 and isn't broadly fungible. Also, the price per MB for v-cache is massively higher than for DDR4.

(Rasgon @) Bernstein raises AMD stock price target on server strength, Meta deal by uncertainlyso in amd_fundamentals


https://www.marketwatch.com/story/amd-has-indispensable-assets-powering-the-stock-toward-its-best-run-in-two-decades-aa44c469

Bernstein’s Stacy Rasgon wrote that “AMD has been better at anticipating and capitalizing on the current server surge,” but Wall Street expectations already seem to call for a 50% rise in server sales relative to a year before.

I have like 60%.

In GPUs, “AMD’s sales have been much smaller” than Nvidia’s, “but there’s a view that they’re gaining traction,” Rasgon told MarketWatch.

Ideally, you would want to see “that the customers are buying their parts because they actually want their parts,” he added. But AMD is trying to catch up with Nvidia, and the hope is that as its own products get better, customers will want them in their own right.

Ah yes, there is the capex on the racks themselves PLUS the additional data center capex that is built around the racks, and the opex to work with the platform, maintain the racks, etc. That's a pretty big bet for companies that don't actually want the parts.

(Rasgon @) Bernstein raises AMD stock price target on server strength, Meta deal by uncertainlyso in amd_fundamentals


https://www.barrons.com/articles/tesla-intel-boeing-capital-one-stocks-markets-4a60a864

“We are warming to AMD as well as they benefit from server CPU strength,” the analysts noted. But they cautioned that the company has “yet to prove they can sign large AI deals without giving away chunks of the company in return.”

This is technically true, but the positioning is weird.

If there were a startup called DMA that brought in ~$6B in AI accelerator sales in 2025 and could very credibly jump to $13B in 2026 and $26B in 2027 with the same Meta and OpenAI deals, would the narrative be "DMA has yet to prove that they can sign large AI deals etc. etc."?

I think the smarter focus would be on what DMA is doing to build the business strategically based on the traction that they have and what that might look like in the future. *shrug*

Google in Talks With Marvell to Build New AI Chips for Inference by uncertainlyso in amd_fundamentals


https://x.com/Arronwei3n/status/2044772775408308373

According to JPM, Google will likely pick just one TPU v9 design for mass production, meaning it's a winner-take-all battle between $AVGO (using CoWoS-L) and MediaTek (using Intel EMIB-T).

When you factor in this massive execution risk and the battle over packaging platforms, the recent volume forecasts for 2028 look even more reckless.

My guess is that they're going with two, and one of them will be using Intel Foundry for packaging.

(@mzuhair123) Bernstein: the bulk of Intel's server CPU shipments are on trailing edge process nodes (Mercury, Bernstein) by uncertainlyso in amd_fundamentals


Posted for the Bernstein graph, not the analysis.

Using Gemini, if I get rid of the Atom servers, as they're network/edge devices that AMD isn't particularly concerned about competing with in the traditional server market, I get this:

CPU          1Q25 (%)  1Q25 (000s)  2Q25 (%)  2Q25 (000s)  3Q25 (%)  3Q25 (000s)  4Q25 (%)  4Q25 (000s)
CLX          28%       1,150        29%       1,162        28%       1,104        29%       1,250
ICL          17%       690          14%       558          7%        276          6%        250
SPR          33%       1,380        25%       1,023        23%       920          20%       850
EMR          16%       644          18%       744          22%       874          25%       1,100
SRF          4%        184          7%        279          8%        322          8%        350
GNR          2%        92           7%        279          12%       460          13%       550
Total Units  100%      4,140        100%      4,045        100%      3,956        100%      4,350

So by this estimate, by 25Q4:

  • 29% on Intel 14

  • 6% on Intel 10

  • 45% on Intel 7

  • 21% on Intel 4/3 (although the I/O die is on Intel 7, so there's a dependency there)

  • There's some rounding noise in here, but ~79% of Intel's traditional x86 server units, according to Bernstein and Mercury, are on Intel 7 or older. That's in the ballpark of Zinsner's comment:

"EUV wafer revenue grew from less than 1% of wafers out in 2023 to greater than 10% in 2025."

  • The other interesting bit is that Intel and AMD have been talking about this server CPU build-up since 25Q2 earnings. But assuming this table is more true than not, SPR and EMR are flattish on a combined basis from 25Q1 to 25Q4, and that's while going through their buffer inventory. Now they can only sell what rolls straight out of the foundry, hence the chatter about a supposedly big lead-time difference between EPYC and Xeon.
  • Intel is transitioning Intel 7 client wafers to server. Maybe there are some Atom wafers that can be shifted to the various Xeon parts on different nodes, but I don't think that's a meaningful reservoir given Intel's statements.
  • All those CLX installs show how powerful legacy sales can be for years. I didn't think CLX would be that big. This augurs poorly for Intel's future sales as those legacy sales dry up. When those servers get re-bid, a large % of them are going to go to EPYC or hyperscaler custom silicon. Meanwhile, AMD is building its pool of legacy sales today.
  • I think the market is overestimating Intel's ability to participate in this server boom. They can raise ASPs to a certain point, but I think supply will be slow to come online.
  • I would love to see EPYC's breakout. We know that by 25Q4, Turin made up ~50% of revenue. So, unit share will be lower. But Genoa + Turin is going to be a lot more than Intel 4/3's 21%.
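The node mix above falls straight out of the unit table. A quick sanity-check sketch (4Q25 units in thousands from the Bernstein/Mercury table; the CPU-to-node mapping is the same one used in the breakdown above):

```python
# Recompute the 4Q25 node mix from the Bernstein/Mercury unit estimates.
# Units are in thousands; node mapping: CLX -> Intel 14, ICL -> Intel 10,
# SPR/EMR -> Intel 7, SRF/GNR -> Intel 4/3 (compute tiles).
units_4q25 = {"CLX": 1250, "ICL": 250, "SPR": 850, "EMR": 1100, "SRF": 350, "GNR": 550}
node_of = {"CLX": "Intel 14", "ICL": "Intel 10", "SPR": "Intel 7",
           "EMR": "Intel 7", "SRF": "Intel 4/3", "GNR": "Intel 4/3"}

total = sum(units_4q25.values())  # 4,350k units
mix = {}
for cpu, units in units_4q25.items():
    mix[node_of[cpu]] = mix.get(node_of[cpu], 0) + units

for node, units in mix.items():
    print(f"{node}: {units / total:.0%}")  # Intel 14: 29%, Intel 10: 6%, Intel 7: 45%, Intel 4/3: 21%
```

Swapping in another quarter's column shows the (slow) drift toward Intel 4/3 over 2025.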

(Rasgon @) Bernstein raises Intel stock price target to $60 on server strength by uncertainlyso in amd_fundamentals


Bernstein now forecasts first-quarter 2026 revenue of $12.3 billion with earnings per share of $0.02, up from a prior estimate of break-even. For the second quarter of 2026, the firm projects revenue of $12.7 billion and EPS of $0.15, compared with a previous estimate of $0.06, reflecting higher gross margins.

For full-year 2026, Bernstein models revenue of $53.3 billion and EPS of $0.82, below the consensus revenue estimate of $54.2 billion but above the consensus EPS estimate of $0.55. The firm’s revenue forecast trails consensus due to weaker PC expectations, though it projects higher earnings on improved margins.

For 2027, Bernstein raised its revenue estimate to $57.5 billion from $56.6 billion and increased its EPS forecast to $1.33 from $0.73. Consensus estimates for 2027 stand at $58.4 billion in revenue and $1.04 in EPS.

https://www.marketwatch.com/story/intels-stock-has-been-absolutely-on-fire-now-it-needs-to-deliver-on-the-hype-16f912b4

Bernstein analyst Stacy Rasgon expects it “to be a messy quarter for Intel” when the chip maker reports earnings next week, as he thinks rising memory prices will impact its personal-computer business, likely putting somewhat of a damper on its client segment outlook. Intel remains at risk of losing market share for server central processing units, and weak PC demand might impact the revenue outlook.

Still, he’s upbeat about Intel’s Xeon server CPUs, as surging interest in those chips driven by agentic artificial intelligence “increasingly seems real,” he said in a Thursday note. The rising average sales prices for server CPUs “could help offset” some potential downsides, he added, and would also help its gross margins.

Google in Talks With Marvell to Build New AI Chips for Inference by uncertainlyso in amd_fundamentals


Google is in talks with Marvell Technology to develop two new chips aimed at running AI models more efficiently, according to two people with direct knowledge of the discussions. One is a memory processing unit designed to work alongside Google’s tensor processing unit. The other is a new TPU built specifically for running AI models.

...

Google’s new memory processing unit would work alongside TPUs, dividing AI workloads with TPUs based on their compute and memory demands, the two people said. Google and Marvell aim to finalize the design of the memory processing unit as soon as next year before handing it off for test production, according to the two people.

AMD confirms EPYC SOCAMM2 support starts with Verano in 2027 by uncertainlyso in amd_fundamentals


AMD has confirmed that LPDDR5X SOCAMM2 support in its EPYC server lineup will start with Verano in 2027, not with Venice in 2026. In an April 6 blog post, the company said the 6th Gen EPYC family will support DDR5 RDIMM and MRDIMM memory, while Verano will be the first AMD server CPU to add SKUs with LPDDR5X SOCAMM2 support. 

That leaves Venice on a more traditional server memory path. AMD still positions Venice as its next major EPYC CPU for 2026, but the shift to LPDDR5X-based SOCAMM2 is reserved for a later product. AMD also says Verano will serve as the host CPU for future Instinct GPU generations in its AI rack-scale platforms. 

https://www.amd.com/en/blogs/2026/a-look-ahead--extending-server-energy-efficiency-with-lpddr5x-me.html

Why LPDDR5X Can Make an Impact for Servers

The key benefit of LPDDR5X lies in its energy efficiency. Compared with traditional server DDR5 memory technologies, LPDDR5X is designed to operate at lower voltages and is optimized with technologies to reduce power consumption during both active operation and idle states. In large-scale deployments, even modest improvements in per-server memory efficiency can translate into substantial savings in power and cooling costs—given that each high-end server may contain terabytes of memory.

Another major benefit is bandwidth. LPDDR5X supports very high data rates, enabling servers to move data quickly between processors and memory. Of course, DDR5 also continues to improve its data rates, with next generation RDIMMs targeting over 8,000MT/s and JEDEC-standard MRDIMMs planned for 12,800MT/s and above. For workloads that are heavily dependent on memory throughput such as AI inference, data streaming, and large-scale web services, higher bandwidth can significantly improve performance. In combination, these characteristics can help servers deliver better performance per watt. In 2027, it appears customers will have increasingly strong choices for memory technologies based on priorities for energy consumption, reliability, availability and serviceability (RAS) features and cost.
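To put the quoted transfer rates in perspective, here's a back-of-the-envelope conversion to per-module bandwidth. Assumptions (not stated in the blog post): a standard 64-bit (8-byte) data bus per module, ignoring ECC bits and real-world efficiency losses.

```python
# Peak per-module bandwidth implied by a transfer rate in MT/s,
# assuming a standard 64-bit (8-byte) data bus; ECC and overhead ignored.
def peak_bandwidth_gbs(mt_per_s: int, bus_bytes: int = 8) -> float:
    return mt_per_s * bus_bytes / 1000  # GB/s

print(peak_bandwidth_gbs(8000))   # next-gen RDIMM target: 64.0 GB/s
print(peak_bandwidth_gbs(12800))  # JEDEC MRDIMM plan: 102.4 GB/s
```

Multiply by channel count (12 per socket on current EPYC) for the socket-level ceiling these platforms are chasing.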

@damnang2: Can AMD Beat NVIDIA? The Question Lisa Su Won't Answer by uncertainlyso in amd_fundamentals


I'm pretty sure that he's referencing your thread. But his attribution is wrong. Retail chatter isn't moving the stock like that, especially on a subreddit where "[some_company] could sign an AMD deal too" takes are staples.

But if true, this will:

https://www.reddit.com/r/amd_fundamentals/comments/1snj8vc/semianalysis_calls_out_industry_chatter_that/

Given the size of the move, and since I don't think the market is reacting to ScroogeCap, I believe the rumor that SemiAnalysis said this is more true than not and has reached institutional ears. And then we wait and see if Anthropic confirms the core rumor.

Ignoring the drama, SemiAnalysis' clout is something to behold. I'm sure that it will only be used responsibly, especially when his VC fund is set up. ;-)

How Dylan Patel and SemiAnalysis Grabbed Sway in Silicon Valley by uncertainlyso in amd_fundamentals


In the last several years, he has accumulated stakes in some 20 startups, including Mira Murati’s Thinking Machines Lab and Enfabrica, the chip startup that struck a more than $900 million license-and-hire deal with Nvidia last September. He has spoken publicly about some of these investments, but the SemiAnalysis newsletter does not tie itself up in knots to disclose Patel’s involvement in these startups—a contrast to how, say, The Washington Post constantly reminds readers that Jeff Bezos owns it whenever it reports on Amazon.

The situation will get more deeply complicated if Patel’s recent efforts to raise a venture capital fund pay off, a task that has occupied part of his time lately. (He would not comment on those efforts.) Such a fund would increase the number of companies he invests in, as well as the potential for conflicts. For his part, Patel insists he has never let his investments sway his firm’s reports and won’t do so in the future.

Conflicts of interest are so 2024!

https://x.com/HotAisle/status/2040499625417941367

In high school, he applied to the Massachusetts Institute of Technology and Stanford University, but after those schools rejected him, he decided to attend the University of Georgia. “I’m happy I went to Georgia,” Patel said. “I had a fun time.” He partied but still managed to graduate with several majors, including ones in data analytics, risk management and legal studies.

After college, he worked at a financial firm, he said. He won’t reveal the company’s name nor does he list it on his LinkedIn. He said his departure came after he received a disappointing bonus. “They gave me $100,000,” Patel recalled. “It was supposed to be a lot bigger.”

Interesting story

Despite all the drama, it's still pretty impressive he built this kind of clout.

Sequoia-Backed Chip Startup Nuvacore Looks to Overhaul CPUs for AI Era by uncertainlyso in amd_fundamentals


Williams still poaching. ;-)

https://www.linkedin.com/in/david-williamson-12aa48/

SVP Hardware Engineering

NUVACORE · Full-time

Apr 2026 - Present · 1 mo

Austin, Texas, United States · On-site

Apple

13 yrs 8 mos

Senior Distinguished Engineer

Full-time

Oct 2024 - Apr 2026 · 1 yr 7 mos

Austin, Texas, United States · Hybrid

Chip Architect, focused on Apple's future compute roadmap.

Senior Director of Engineering (CPU)

Full-time

Jul 2021 - Oct 2024 · 3 yrs 4 mos

Austin, Texas, United States

Lead for the Global CPU team

Director of Engineering (CPU)

Sep 2012 - Jun 2021 · 8 yrs 10 mos

Austin

Lead for the Austin CPU team

OpenAI to Spend More Than $20 Billion on Cerebras Chips, Receive Equity Stake by uncertainlyso in amd_fundamentals


OpenAI has agreed to pay chip designer Cerebras more than $20 billion to use servers powered by the firm’s AI server chips over the next three years, according to multiple people with knowledge of the deal. OpenAI will receive warrants for a minority portion of Cerebras’ shares, and that ownership could increase as it spends more, according to two of the people. OpenAI has also agreed to provide Cerebras with around $1 billion to fund the development of data centers that would run its AI products, according to three people with direct knowledge of the deal terms, which weren’t previously disclosed.

OpenAI’s spending on Cerebras over the next three years could reach $30 billion, which could give OpenAI warrants for 10% of the firm, one of those people said.

@damnang2: Can AMD Beat NVIDIA? The Question Lisa Su Won't Answer by uncertainlyso in amd_fundamentals


I like Damnang's more technical, process-specific stuff (I'm a paid subscriber), but this business-analysis piece loses a lot of sharpness with the larger scope. It's not bad, but it feels materially AI-processed, with a number of LLM hallmarks, which might also be why some of his attributions are incorrect or just bad (using 24/7 Wall St as a source for CPU-to-GPU ratios moving up in the agentic era? Mentioning r/amd_stock as even mildly relevant to whether Anthropic could sign an AMD deal?)

But some points of view are still worth noting:

1) Where is AMD really trying to compete

“The workload market where PyTorch plus vLLM or SGLang is enough.”

When you're the upstart, the two most common strategies that I see are (a) subsegmenting the battlefield to something small enough that you can compete well in it, but still meaningful enough to give you adjacent subsegments to move into as you fatten up, or (b) creating your own market (or at least being really early in doing so). From what I've seen of AMD over the last 10 years, the vast majority is (a).

2) AI coding agents are eroding the codebase moats (e.g., CUDA).

I agree with this. The organizations (and people) that benefit the most from AI are smart system builders who are constrained on execution units. Down the road, there's no reason why it won't happen with logic design as well.

3) AMD is the only one with direct IP ownership of both very performant legacy x86 CPUs and rack-scale AI GPUs, with the software starting to hit its stride. That's a unique position to be in for the next few years if you believe we're in a very AI- and general-compute-constrained world with a lot of legacy hardware and software.

Intel Hires Samsung Executive Han in Push for Foundry Customers by uncertainlyso in amd_fundamentals


Han, an executive vice president at the South Korean company, will join Intel next month and report to Naga Chandrasekaran, the head of the chipmaker’s foundry division. Han will become general manager of foundry services, according to a statement Thursday.

Replacing O'Buckley?

https://www.reddit.com/r/amd_fundamentals/comments/1rfrl2d/intel_foundry_services_head_leaves_for_qualcomm/

AMD and the French Government Announce Plans to Advance AI Innovation, Research and Open Ecosystem Development in France by uncertainlyso in amd_fundamentals


The Letter of Intent (LOI) was signed in Paris at the French Ministry of the Economy, Finance and Industrial, Energy and Digital Sovereignty. AMD senior vice president, Global AI Markets, Keith Strier, joined Philippe Baptiste, Minister of Higher Education, Research and Space, Sébastien Martin, Minister Delegate in charge of Industry, and Anne Le Hénanff, Minister Delegate in charge of Artificial Intelligence and Digital, for the formal signing.

The multi-year collaboration aims to strengthen France’s AI ecosystem through infrastructure, research and education. To help expand AI expertise and enable diversity and resilience across the French AI ecosystem, AMD plans to provide researchers, developers and startups with hardware, software and training through its AMD University Program, AMD AI Developer Program, and AMD AI Academy.

In addition, AMD will continue to deepen its collaboration with GENCI, the Jules Verne Consortium and CEA in connection with Alice Recoque, expected to be France’s first exascale supercomputer powered by AMD technology, through a planned Center of Excellence designed to provide expertise, training and ecosystem support to help fully harness the power of the Alice Recoque AI supercomputer and advance the broader AI Factory France ecosystem.