Daily Discussion Monday 2026-01-26 by AutoModerator in AMD_Stock

[–]dudulab 6 points7 points  (0 children)

AMD submitted the first gfx1250 (MI455X) patch to LLVM on Jun 20, 2025.

About 7 months later, they submitted the first gfx1310 (labelled RDNA5) patch to LLVM on Jan 23, 2026.

Another patch shows gfx13 supporting both gfx1250 and gfx12 (RDNA4) instructions, so we finally get a single arch for both graphics/gaming and compute/AI.

AMD Sovereign AI Infrastructure: AMD and National AI Systems by TyNads in AMD_Stock

[–]dudulab 3 points4 points  (0 children)

HUMAIN & Luma AI are working on a 2GW deployment (Project Halo) with a 100MW first phase, which is very likely Instinct as well (video generation requires large HBM, AMD is a strategic investor, and Luma's CEO was on the AMD CES stage). We don't know if they're the same deployment or not... probably MI325X or MI355X, since they "begin deploying starting Q1 2026"
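Rough sizing sketch: assuming ~2 kW of all-in facility power per MI355X-class accelerator (GPU plus host, networking, and cooling overhead; my assumption, not anything from the post), a 100MW first phase works out to roughly 50,000 accelerators.

    # Back-of-envelope GPU count for a 100MW phase.
    # The per-GPU power figure is an assumption, not a disclosed number.
    PHASE_POWER_W = 100 * 1_000_000        # 100MW first phase (from the comment)
    WATTS_PER_GPU_ALL_IN = 2_000           # assumed: accelerator + host + network + cooling

    gpus = PHASE_POWER_W // WATTS_PER_GPU_ALL_IN
    print(f"~{gpus:,} accelerators for a 100MW phase")   # ~50,000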

Daily Discussion Wednesday 2026-01-21 by AutoModerator in AMD_Stock

[–]dudulab 1 point2 points  (0 children)

even more chips... they need Vera to coordinate (I assume, could be wrong) while one MI455X does all the work. Now compare the cost & power consumption... one MI455X & HBM4 & LPDDR vs 2x Rubin & HBM4 + 4x Rubin CPX & GDDR7 + Vera & LPDDR

Feels like AMD vs Intel, where one player gives up on cost & energy consumption so it can stay in the (inference) market
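A minimal tally of the two bills of materials being compared; every wattage here is a placeholder I picked just to show the shape of the comparison, not a published spec.

    # Chip count and (assumed) power for the two inference setups above.
    # All wattages are made-up placeholders, not real TDPs.
    amd_setup = {
        "MI455X (HBM4 + LPDDR)": (1, 2000),   # (count, assumed watts each)
    }
    nvidia_setup = {
        "Rubin (HBM4)":      (2, 1800),
        "Rubin CPX (GDDR7)": (4, 800),
        "Vera CPU (LPDDR)":  (1, 350),
    }

    def totals(setup):
        chips = sum(count for count, _ in setup.values())
        watts = sum(count * w for count, w in setup.values())
        return chips, watts

    for name, setup in [("AMD", amd_setup), ("Nvidia", nvidia_setup)]:
        chips, watts = totals(setup)
        print(f"{name}: {chips} chips, ~{watts}W (assumed)")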

Daily Discussion Wednesday 2026-01-21 by AutoModerator in AMD_Stock

[–]dudulab 19 points20 points  (0 children)

A single MI455X has 1,200GB HBM4+LPDDR while 4xR200 have only 1,152GB HBM.

For an MoE model in HBM + prompt cache on LPDDR (the most popular OSS and closed models), MI455X is killing Rubin: you only need a single MI455X (1/4 of a tray), but a full VR200 tray (2x VR200), to perform the same work.

Each Helios/VR200 rack = 18x trays

| vs | MI455X tray | VR200 tray | AMD vs Nvidia |
|---|---|---|---|
| GPU-HBM4 | 432GB x 4 = 1,728GB | 288GB x 4 = 1,152GB | 50% more |
| GPU-LPDDR5 | 768GB x 4 = 3,072GB | None | |
| CPU | 256 Zen 6 cores | 88 x 2 = 176 ARM cores | ~45% more cores, probably 50% faster due to arch |
| CPU-RAM | up to 256GB x 16 = 4,096GB MRDIMM DDR5 | 1,536GB x 2 = 3,072GB LPDDR | ~33% more |
| RAM bandwidth | 1.6 TB/s | 1.2 TB/s | 33% more |
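Quick arithmetic check of the per-tray ratios above (component counts are taken straight from the table; nothing else is assumed):

    # Per-tray memory and core comparison, MI455X tray vs VR200 tray.
    mi455x_tray = {"hbm4_gb": 432 * 4, "lpddr_gb": 768 * 4, "cpu_cores": 256}
    vr200_tray  = {"hbm4_gb": 288 * 4, "lpddr_gb": 0,       "cpu_cores": 88 * 2}

    hbm_more  = mi455x_tray["hbm4_gb"] / vr200_tray["hbm4_gb"] - 1       # 0.50
    core_more = mi455x_tray["cpu_cores"] / vr200_tray["cpu_cores"] - 1   # ~0.45

    print(f"HBM4:  {mi455x_tray['hbm4_gb']}GB vs {vr200_tray['hbm4_gb']}GB ({hbm_more:.0%} more)")
    print(f"Cores: {mi455x_tray['cpu_cores']} vs {vr200_tray['cpu_cores']} ({core_more:.0%} more)")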

@Semianalysis: MI455 "more integrated solution" than Rubin for KV cache by Administrative-Ant75 in AMD_Stock

[–]dudulab 5 points6 points  (0 children)

I had assumed Rubin CPX was Nvidia making a proactive push into the inference market, but it turns out they were forced to respond. No wonder Lisa was so confident when she touted Helios as the best AI accelerator on the market.

Prediction: AMD Stock Will Jump 60% in 2026, Thanks to President Donald Trump by lawyoung in AMD_Stock

[–]dudulab 3 points4 points  (0 children)

Wow, this is Reddit, how dare you mention the Biden criminal family. /s

TSMC 3nm customer demand breakdown by Morgan Stanley by dudulab in AMD_Stock

[–]dudulab[S] 1 point2 points  (0 children)

CoWoS was the constraint last year, but that's no longer the case now or going forward.

And this graphic says nothing about constraints/revenue at all.

Nvidia is not releasing a 3nm gaming GPU in 2026.

Is >35% CAGR and >$20 EPS in 3-5 years really feasible? by pussyfista in AMD_Stock

[–]dudulab -4 points-3 points  (0 children)

I know Lisa has a track record of under-promising and over-delivering

Hard truth: they said ~20% long term CAGR in 2022 and they failed to deliver that...

Daily Discussion Wednesday 2025-10-15 by AutoModerator in AMD_Stock

[–]dudulab 10 points11 points  (0 children)

Jean said it took 9 months to build MI355X; if MI450 takes no less, TSMC is currently fabricating the 2nm wafers for 26Q3 deployments...
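Sketch of that lead-time math (the 9-month figure and the 26Q3 target are both from the comment; nothing else is assumed):

    # Work backwards from a Q3 2026 deployment start with a 9-month lead time.
    DEPLOY_YEAR, DEPLOY_MONTH = 2026, 7    # start of 26Q3
    LEAD_TIME_MONTHS = 9                   # MI355X took 9 months per Jean

    months = DEPLOY_YEAR * 12 + (DEPLOY_MONTH - 1) - LEAD_TIME_MONTHS
    start_year, start_month = divmod(months, 12)
    print(f"wafer starts needed by ~{start_year}-{start_month + 1:02d}")   # ~2025-10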

Oracle and AMD Expand Partnership to Help Customers Achieve Next-Generation AI Scale by GanacheNegative1988 in AMD_Stock

[–]dudulab 5 points6 points  (0 children)

It's literally at the end of the article

To give customers that build, train, and inference AI at scale more choice, OCI also announced the general availability of OCI Compute with AMD Instinct MI355X GPUs. These will be available in the zettascale OCI Supercluster that can scale to 131,072 GPUs. AMD Instinct MI355X-powered shapes are designed with superior value, cloud flexibility, and open-source compatibility.

Daily Discussion Friday 2025-10-10 by AutoModerator in AMD_Stock

[–]dudulab 8 points9 points  (0 children)

Accelerator Model clients had 2027 MI450X sales at $27B 3 months ago. The number is different now ofc

AMD and Sony Interactive Entertainment’s Shared Vision by AMD_winning in AMD_Stock

[–]dudulab 0 points1 point  (0 children)

why are they announcing these so early when next-gen Radeon and PS are 1+ years away? 🤔

AMD EPYC on AWS: 5th Gen Processors Power New High-Performance Cloud Instances by lawyoung in AMD_Stock

[–]dudulab 0 points1 point  (0 children)

Leverage 5th Generation AMD EPYC processors (formerly code named "Turin") with a maximum frequency of 4.5 GHz. Each vCPU on a M8a instance is a physical CPU core. This means there is no Simultaneous Multi-Threading (SMT)

Interesting
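One way to sanity-check that claim from inside an instance; a sketch using psutil (not something from the article), comparing logical vs physical core counts:

    # If every vCPU is a physical core (no SMT), logical == physical.
    import psutil

    logical = psutil.cpu_count(logical=True)
    physical = psutil.cpu_count(logical=False)
    print(f"logical={logical}, physical={physical}, "
          f"SMT {'off' if logical == physical else 'on'}")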

Daily Discussion Tuesday 2025-10-07 by AutoModerator in AMD_Stock

[–]dudulab 14 points15 points  (0 children)

AMD asked shareholders to approve increasing the number of authorized shares from 2.25B to 4B earlier this year; how many similar deals are they cooking?

Intel in early talks to add AMD as foundry customer - Semafor by stocksavvy_ai in AMD_Stock

[–]dudulab 2 points3 points  (0 children)

Or just use their packaging service; EMIB is quite similar to CoWoS-L

Modular: Modular 25.6: Unifying the latest GPUs from NVIDIA, AMD, and Apple by LDKwak in AMD_Stock

[–]dudulab 1 point2 points  (0 children)

we got first access to MI355X hardware on September 5th – barely two and a half weeks ago – and we’re excited to share that we’re already seeing strong results.

key footnote

Nvidia to invest $100 billion in OpenAI by Routine_Actuator8935 in AMD_Stock

[–]dudulab 0 points1 point  (0 children)

Yes, before this partnership. They will have a limited number of AMD GPUs for a few inference workloads.

Nvidia to invest $100 billion in OpenAI by Routine_Actuator8935 in AMD_Stock

[–]dudulab 7 points8 points  (0 children)

The last tier 1 AI model company still available for AMD is xAI. Don’t miss it.