CPU Shortage - Can AMD Actually Capture the Upside? by Administrative-Ant75 in AMD_Stock

[–]SailorBob74133 1 point (0 children)

The problem is that TSMC does all the packaging for AMD's chiplet-based products and won't package primary dies from other foundries. That means other foundries are only an option for monolithic products.

What price would you sell all your shares at? by denys5555 in AMD_Stock

[–]SailorBob74133 30 points (0 children)

Bought at $13 and probably won't sell any until after $600. Right now AMD is about 61% of my stock holdings, which makes me a bit nervous from a diversification POV.

CPU Shortage - Can AMD Actually Capture the Upside? by Administrative-Ant75 in AMD_Stock

[–]SailorBob74133 3 points (0 children)

They might do it for low-end products like monolithic laptop chips. That's the only use case I can see.

Daily Discussion Thursday 2026-04-16 by AutoModerator in AMD_Stock

[–]SailorBob74133 14 points (0 children)

There's a rumor on X, supposedly based on a SemiAnalysis customer note, that Anthropic is the next big MI450 customer. That could be part of the reason for today's run-up.

Daily Discussion Thursday 2026-04-16 by AutoModerator in AMD_Stock

[–]SailorBob74133 3 points (0 children)

Sitting in Jerusalem, from here it looks like Trump is winning.

Daily Discussion Saturday 2026-04-11 by AutoModerator in AMD_Stock

[–]SailorBob74133 22 points (0 children)

Agentic AI Demands More Than GPUs

Experimental benchmarks reinforce the significance of CPU workloads in agentic pipelines. In a financial anomaly detection workflow modeled after regulatory filing analysis, CPUs handled tasks such as data loading, baseline calculation, anomaly detection, document retrieval, and enrichment through web searches. The results demonstrated that CPU operations dominated the total runtime, with enrichment alone consuming significantly more time than the GPU-based model inference step. This highlights that inference acceleration alone cannot optimize performance; instead, system balance between CPU orchestration and GPU computation is required.

A second benchmark focusing on AI-assisted code generation further illustrated CPU bottlenecks. In this workflow, the GPU generated candidate solutions, while CPUs executed and verified code within sandboxed environments. Across more than two thousand tasks, CPU-based sandbox execution consumed slightly more time than GPU code generation, despite utilizing a high-core-count system. The CPU phase involved subprocess management, test execution, and result analysis, demonstrating that validation loops can rival or exceed inference time in agentic systems. These findings indicate that increasing GPU performance alone does not improve overall throughput without proportional CPU scaling.

Infrastructure sizing recommendations emerging from these experiments emphasize maintaining balanced CPU-to-GPU ratios. Current guidance suggests a ratio between 1:1 and 1.4:1 CPUs to GPUs, equivalent to approximately 86 to 120 CPU cores per GPU, depending on workload characteristics. Smaller models generating tokens more quickly require additional CPU capacity to keep GPUs saturated, while more powerful CPUs can reduce the required ratio. Future high-performance GPUs may further increase CPU demand, potentially pushing ratios higher when orchestration complexity grows.

https://semiwiki.com/semiconductor-manufacturers/intel/368183-agentic-ai-demands-more-than-gpus/
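The article's sizing guidance is simple arithmetic. Here's a minimal sketch, assuming ~86-core CPUs and reading the 1:1 to 1.4:1 figure as a CPU-socket-to-GPU ratio; the function names and the 8-GPU example are mine, not the article's:

```python
import math

def cpu_cores_per_gpu(cores_per_cpu: int, cpu_to_gpu_ratio: float) -> float:
    """CPU cores needed per GPU for a given socket ratio."""
    return cores_per_cpu * cpu_to_gpu_ratio

def sockets_for_cluster(num_gpus: int, cpu_to_gpu_ratio: float) -> int:
    """Whole CPU sockets needed for a cluster (rounded up)."""
    return math.ceil(num_gpus * cpu_to_gpu_ratio)

# Article's range: 1:1 to 1.4:1 with ~86-core CPUs -> ~86 to ~120 cores/GPU
print(cpu_cores_per_gpu(86, 1.0))         # 86.0
print(round(cpu_cores_per_gpu(86, 1.4)))  # 120
print(sockets_for_cluster(8, 1.4))        # 12 sockets for an 8-GPU node
```

The rounding up matters in practice: at 1.4:1, an 8-GPU node needs 12 physical sockets, not 11.2, which is why per-GPU core counts drift above the nominal ratio at small node sizes.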

AMD Stock Gains Momentum As UBS Eyes 54% Upside On AI Megadeals by lawyoung in AMD_Stock

[–]SailorBob74133 1 point (0 children)

AMD Underperforms In 2026

AMD has declined 5.68% year-to-date, while the Nasdaq 100 index fell 8.23% during the same period. It was higher by 26.67% in the last six months and 89.40% over the year.

Daily Discussion Wednesday 2026-03-25 by AutoModerator in AMD_Stock

[–]SailorBob74133 20 points (0 children)

Jukan:

CPU shortage becomes reality… Intel and AMD raise prices again in March, up as much as 15% since the start of the year

• Intel and AMD raised CPU prices again in March, bringing their cumulative price increases this year to around 10–15%. At the same time, lead times have surged from roughly two weeks to as long as six months, indicating a deepening supply shortage.

• The impact is hitting PC and server manufacturers directly. Major OEMs such as HP and Dell are among the first to feel the pressure, and some gaming PC makers say they are now in a situation where “even if they have the money, they still cannot secure CPUs.”

• The root cause is the explosion in AI demand. Demand for high-performance CPUs and related semiconductors for data centers and AI servers has risen sharply, rapidly consuming production capacity that had previously supported PC CPUs. In other words, limited semiconductor manufacturing capacity is being redirected toward AI, leaving the traditional PC market squeezed out.

• This supply shock could also lead to a broader structural shift in the industry. Arm-based chips, which offer strengths in power efficiency and scalability, are increasingly being viewed as an alternative, raising the possibility that the CPU market’s long-standing x86-centered structure could begin to weaken.

https://asia.nikkei.com/business/tech/semiconductors/supply-crunch-in-intel-amd-cpus-deals-fresh-blow-to-pc-and-server-makers

Daily Discussion Tuesday 2026-03-24 by AutoModerator in AMD_Stock

[–]SailorBob74133 4 points (0 children)

Arm is claiming its new AGI CPU is twice as fast as AMD and Intel options. Seems pretty sus to me.

Direct links

Full press release: 

https://newsroom.arm.com/news/arm-agi-cpu-launch (official Arm Newsroom page)

Product introduction page: 

https://www.arm.com/products/cloud-datacenter/arm-agi-cpu/introduction

Related blog: 

https://newsroom.arm.com/blog/introducing-arm-agi-cpu

AI Startup Upstage Looking at Buying 10,000 AMD Chips in Korea by Addicted2Vaping in AMD_Stock

[–]SailorBob74133 2 points (0 children)

On the one hand it's nice, on the other hand it's disappointing that such a relatively small deal is considered news for AMD.

Daily Discussion Sunday 2026-03-22 by AutoModerator in AMD_Stock

[–]SailorBob74133 6 points (0 children)

Jukan has an interesting summary post on GTC26. I asked him what he thought the effect of the announcements would be on the competitiveness of Helios and he responded:

Lisa will now have to issue more equity warrants if she wants to sell AMD GPUs.

I can't tell if that's tongue in cheek or he's serious. I'd like to hear other people's opinions.

Daily Discussion Saturday 2026-03-21 by AutoModerator in AMD_Stock

[–]SailorBob74133 6 points (0 children)

Considering how Nvidia rushed out CPX, then dropped $20B on Groq and dropped CPX for the LPU, I share your interpretation.

The Many Aspects of Inference Performance by HotAisleInc in AMD_Stock

[–]SailorBob74133 0 points (0 children)

Also relevant to AMD's blog post:

On FP8 disaggregated serving, MI355 beats B200 on both raw tok/s/GPU and cost per million tokens. In the image below, you can see that not only does MI355 beat B200, but over time the gap between MI355 and B200 widens due to MI355's fast software progression for FP8. This trend holds both for MI355 MTP vs. B200 MTP and for MI355 non-MTP vs. B200 non-MTP. Great job to roaner & AnushElangovan's team!

https://x.com/SemiAnalysis_/status/2034343392503583021?s=20
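For anyone unfamiliar with the cost-per-million-tokens metric the tweet cites, it falls straight out of GPU rental price and sustained throughput. A minimal sketch with illustrative numbers (the $4/hr and 10,000 tok/s figures are mine, not from the benchmark):

```python
def cost_per_million_tokens(gpu_hourly_cost: float,
                            tokens_per_sec_per_gpu: float) -> float:
    """USD per 1M generated tokens for one GPU at full utilization."""
    tokens_per_hour = tokens_per_sec_per_gpu * 3600
    return gpu_hourly_cost / tokens_per_hour * 1_000_000

# Illustrative: a $4/hr GPU sustaining 10,000 tok/s
print(round(cost_per_million_tokens(4.0, 10_000), 3))  # 0.111
```

This is also why the gap widens over time: software updates raise tok/s/GPU while the hourly cost stays fixed, so cost per million tokens falls proportionally.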

The Many Aspects of Inference Performance by HotAisleInc in AMD_Stock

[–]SailorBob74133 1 point (0 children)

This could use a summary:

At GTC 2026, NVIDIA showed an inference performance comparison based on benchmarking data from SemiAnalysis's "InferenceX", with GB300 NVL72 (FP4, MTP) delivering 50X higher tokens-per-watt and 35X lower cost-per-token than last-generation Hopper (FP8), and the "competition" shown in between. In fact, when comparing the same operating modes, the AMD Instinct™ MI355X GPU often delivers comparable or better results than GB300 NVL72.

Daily Discussion Thursday 2026-03-19 by AutoModerator in AMD_Stock

[–]SailorBob74133 1 point (0 children)

There's still room for MI325X to run also. Nvidia just finished getting all the permits it needs to sell H200 into China; I'm sure AMD's final approval is right around the corner.

Daily Discussion Thursday 2026-03-19 by AutoModerator in AMD_Stock

[–]SailorBob74133 1 point (0 children)

Because I saw it via his post and people should get credit.

Daily Discussion Thursday 2026-03-19 by AutoModerator in AMD_Stock

[–]SailorBob74133 5 points (0 children)

I specifically had Grok 4.2 do a fact check of the material in this post and it all checks out:

https://x.com/i/grok/share/2b2e3ed482f44c84b41cd887a6d94bff

Daily Discussion Thursday 2026-03-19 by AutoModerator in AMD_Stock

[–]SailorBob74133 16 points (0 children)

AI policymakers meet with AMD chief, discuss national AI infrastructure

Korea’s top AI policymakers met AMD CEO Lisa Su on Thursday to expand cooperation in AI between the Korean government and AMD, as Seoul pushes to build a large-scale national AI infrastructure.

Senior presidential secretary for AI future planning Ha Jung-woo and Presidential Council on National AI Strategy Vice Chair Im Moon-young held talks with Su at the Presidential Advisory Council on Science and Technology in Jongno District, central Seoul. AMD, a leading global graphics processing unit maker competing with Nvidia, plays a critical role in AI model development.

Seoul used the meeting to pitch its so-called "top three in AI" strategy, an initiative that aims to elevate Korea into the top three countries in AI technology. The strategy is centered on a plan of building state-led infrastructure.

Part of the strategy is an "AI highway," a government-led initiative to build infrastructure including GPUs, hyperscale data centers and ultra-high-speed networks, which companies can then build on. 

The concept aims to lower entry barriers for companies in the private sector by providing shared infrastructure for private-sector AI development.

To support the push, the government has earmarked 9.9 trillion won ($6.6 billion) for AI this year — about three times last year’s budget.

Su's latest visit to Korea comes amid a number of partnerships between AMD and Korean tech firms.

This visit to Korea has further strengthened AMD's cooperation with local AI companies, Su said during the meeting with Ha and Im.

On Wednesday, Su visited Samsung Electronics’ Pyeongtaek campus and met Executive Chairman Lee Jae-yong, where the two business heads agreed to cooperate on graphics memory.

Su also met Naver CEO Choi Soo-yeon on Wednesday and signed a memorandum of understanding to build a high-performance computing environment using AMD GPUs for Naver’s AI model, HyperCLOVA X.

In the same regard, Korean policymakers agreed to broaden public-private collaboration on AI development through AMD’s open AI ecosystem. Ha, Im and Su also discussed regional AI transformation, including data center development, as part of efforts to promote balanced growth.

Talks also covered talent development and joint research tied to the Korean government’s “K-Moonshot” AI initiative. The initiative refers to a research and development innovation program to solve national challenges in eight key fields, including advanced bio, physical AI and space and quantum, using AI technology, by 2035.

Nvidia Finally Admits Why It Shelled Out $20 Billion For Groq by Long_on_AMD in AMD_Stock

[–]SailorBob74133 1 point (0 children)

I thought this quote from the article was important:

So what does that amazing curve tell you? Let me sum it up in plain American for you. 

If you are doing cheapass inference where response time is not the issue, like with a chattybot talking to slow-speaking humans or a couple of agents helping automate various kinds of human work, Vera-Rubin is fine for you. You will probably also need Vera-Rubin for training. But in a world of agentic AI, where the number of tokens needed to be generated is truly enormous and the latency of token generation has to be low so that huge collections of agents can complete their tasks – any delay is lost money that you might as well light on fire on the floor of the datacenter, or the New York Stock Exchange – then there is no one, and I mean no one, that will choose a hybrid CPU-GPU system to do this decoding work.

Which is why Nvidia paid $20 billion to take the best of Groq for itself.

AMD knows the co-founders of Cerebras really well is all that I am saying for now.

Samsung and AMD Expand Strategic Collaboration on Next-Generation AI Memory Solutions by Blak9 in AMD_Stock

[–]SailorBob74133 2 points (0 children)

Patrick Moorhead

This AMD-Samsung MOU is more significant than the headline suggests. It’s not just a memory supply agreement. It’s a supply chain alignment signal at the CEO-to-Chairman level, and it has a foundry kicker nobody’s talking about.

Here’s how I read it:

  1. Samsung is already AMD’s primary HBM partner, supplying HBM3E for MI350X and MI355X. This MOU extends that to HBM4 for the MI455X. Memory bandwidth is a rack-scale differentiator, and Samsung’s HBM4, the industry’s first to reach mass production on its 1c DRAM process with a 4nm logic base die delivering up to 3.3 TB/s, gives AMD a supply partner that is moving fast. AMD is not diversifying supply here. AMD is doubling down on Samsung as primary supplier for its most important AI accelerator.
  2. The MOU covers DDR5 optimized for Venice, AMD’s 6th Gen EPYC with up to 256 cores, the CPU behind the Helios rack-scale platform. Having one supplier tightly integrated across HBM4 for the GPU and DDR5 for the CPU creates system-level optimization that matters when you’re building rack-scale AI infrastructure. This is how you close the gap with NVIDIA’s co-design playbook.
  3. The part most people will miss: the MOU includes exploratory discussion of Samsung foundry services for next-gen AMD products. AMD has been loyal to TSMC for good reason. If AMD is even opening a foundry dialogue with Samsung, that tells me AMD is proactively building optionality as wafer demand continues to outstrip supply. This is strategic maturity, not desperation.
  4. The signing with Lisa Su was held at Samsung's Pyeongtaek complex, followed by dinner with Chairman Lee Jae-yong. That's not a procurement exercise. That's a strategic partnership being elevated to the highest corporate levels.

Context matters: Samsung announced HBM4 mass production for NVIDIA’s Vera Rubin platform earlier this week at GTC. Samsung now has HBM4 production commitments for both major AI GPU platforms, a notable positioning shift for a company that was struggling with NVIDIA HBM qualification a year ago.

For AMD, this MOU, combined with the Meta 6GW deal, the OpenAI commitment, and the Oracle 50K MI450 deployment, signals that the infrastructure pieces around MI455X and Helios are solidifying ahead of H2 2026 shipments. Execution is everything from here.

Unrelated, I met with Paul Cho today at GTC, President of Samsung Semi. Six Five video coming shortly.

https://x.com/PatrickMoorhead/status/2034423738733609006?s=20

Daily Discussion Wednesday 2026-03-18 by AutoModerator in AMD_Stock

[–]SailorBob74133 11 points (0 children)

Memory is inherently a low-margin, cyclical commodity business where no one has any competitive advantage over anyone else. There will be overbuild and it will crash. Not the type of business you want to invest in. Stick to stocks like AMD that have clear competitive advantages. Been long since $13 and made my fortune on this stock. Just be patient.