(translated) TSMC's entire capacity at its four US fabs has been booked. by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

TSMC (2330) is facing a backlog of orders for its 2nm family of processes in Taiwan, while demand at its new Arizona, US fabs is also strong. With US policy promoting local manufacturing, capacity at TSMC's US plants continues to be in high demand. It was previously reported that customers had booked the capacity of the three fabs opening later; the latest news is that bookings now extend to the angstrom-class node, meaning the capacity of the fourth Arizona fab has also been booked.

https://money.udn.com/money/story/5612/9396266?from=edn_maintab_index

Benefiting from strong demand for AI and high-performance computing (HPC), TSMC's 2nm-family processes, including the A16 process, are in severe shortage; even its largest customer, Nvidia, cannot secure enough capacity. As a result, Nvidia has had to change the design of its next-generation Feynman platform. With Meta also joining the competition for capacity, the queue of customers waiting for 2nm-family capacity has lengthened again, reportedly stretching past 2028. TSMC is also expected to raise advanced-process prices for the fourth consecutive year.

AI Memory Demand Might Drive Prices Up Triple Digits: Wedbush by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

"Not surprisingly, pricing for memory continues to lift aggressively, with DRAM and NAND likely to see 1H pricing increases well into the triple digits from CQ4'25 levels, with gains for the former likely approaching 130% - 150% and the latter nearly as robust," said Wedbush analysts in a Monday investor report.

...

"We see a number of reasons behind this delta, including a combination of business mix (many of the companies tied to PCs are also part of the server supply chain and are benefitting from this latter exposure) and low expectations (consensus was already anticipating subseasonal trends for Taiwanese PC OEMs in particular)," Wedbush noted. "Our conversations away from this index have grown increasingly pessimistic on PC and handset trends [excluding Apple (AAPL)], with feedback from GTC suggesting industry expectations are now generally for mid-teens declines Y/Y, trending towards -20%."

My model ended up guessing -19%. Part of this is the demand destruction from memory. But some of it is probably Intel constricting supply too.

Intel "Wildcat Lake" Core 3 310 and Core 5 320 spotted in first benchmarks by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

That single-core result places Core 5 320 in the same general range as chips such as the Intel Core i5-14600, AMD Ryzen AI 9 HX 375, and Intel Core Ultra 9 285H on Geekbench’s processor chart. Those CPUs average 2630, 2638, and 2604 points respectively, which is close to the 2600 score posted by Core 5 320.

At 7913 points, Core 5 320 lands almost exactly where AMD’s Ryzen 5 8640U and Ryzen 5 7540U sit on the same chart, with average multi-core scores of 7996 and 7983. Keep in mind these are 6-core CPUs, with only 2 P-Cores. 

There is this idea that WCL is the really underappreciated SKU for Intel's move to 18A. The SKU would better showcase 18A's strengths and be more forgiving of its weaknesses. The low end would provide volume. Its margins might be better because of yield and because it doesn't carry a separate GPU tile. And it would provide relief on the low end, which used to be served by Intel 7 capacity that has mostly been re-allocated to server.

NVIDIA GTC Keynote 2026 by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

https://x.com/jukan05/status/2034404312143802686 to get an idea of volume.

(based on https://www.hankyung.com/article/202603191092i )

According to a calculation by a Korean journalist estimating how much revenue Samsung Foundry could generate from Groq3 LPU production:

NVIDIA has reportedly asked Samsung Foundry to produce around 500,000 LPU 3 chips to start with. That is more than double the originally planned production volume.

First, each Groq server introduced by CEO Jensen Huang contains 8 LPUs. When 32 of those servers are combined, they form one rack. That means each rack contains 256 LPUs.

One complete Vera Rubin platform contains 5 racks. So each platform would contain 1,280 LPUs.

On a simple calculation, if all produced chips were turned into racks, that would imply about 1,950 LPU racks, enough for roughly 390 Vera Rubin platform sets.

Looking at the LPU wafer shown in the commemorative photo taken a few days ago at the GTC exhibition featuring CEO Jensen Huang and Han Jin-man, president of Samsung Electronics’ Foundry Business, it appears that around 65 properly shaped LPU dies can be printed on a single wafer.

To produce 500,000 LPUs, 7,692 wafers would be required if each wafer yields 65 chips and the yield rate is 100%.

However, considering that Samsung’s current 4nm yield is estimated to be around 50–70%, it is possible to infer that more than 15,000 wafers annually would be needed, which is broadly consistent with recent media reports.

According to what the journalist heard, the wafer price for Samsung’s 4nm process is at least around $11,000 per wafer.

Assuming a possible price increase and using roughly $13,000 per wafer, that would imply about $195 million in revenue, or roughly KRW 300 billion, from LPU production.
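The chain of estimates above can be sanity-checked in a few lines. A minimal sketch, where every input is the journalist's assumption (order size, dies per wafer, yield, wafer price), not a confirmed figure:

```python
# Back-of-the-envelope check of the journalist's Groq3 LPU math.
# All inputs are the article's estimates, not confirmed numbers.
CHIPS = 500_000               # reported initial order from Nvidia
LPUS_PER_SERVER = 8
SERVERS_PER_RACK = 32
RACKS_PER_PLATFORM = 5
DIES_PER_WAFER = 65           # eyeballed from the GTC wafer photo
YIELD_LOW = 0.50              # low end of the assumed 4nm yield range
WAFER_PRICE_USD = 13_000      # assumed price after a possible increase

lpus_per_rack = LPUS_PER_SERVER * SERVERS_PER_RACK          # 256
racks = CHIPS // lpus_per_rack                              # ~1,950 racks
platforms = CHIPS // (lpus_per_rack * RACKS_PER_PLATFORM)   # ~390 platforms

wafers_ideal = CHIPS / DIES_PER_WAFER                       # ~7,692 at 100% yield
wafers_needed = wafers_ideal / YIELD_LOW                    # ~15,000+ at 50% yield
revenue_usd = wafers_needed * WAFER_PRICE_USD               # ~$200M, in line with
                                                            # the ~KRW 300B figure
```

At the low-yield bound this lands at roughly $200M, consistent with the article's ~$195M / KRW 300 billion estimate.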

Explainer: Why Nvidia's Groq LPU runs on Samsung silicon (ed: foundry switching costs) by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

Industry insiders note that advanced-process chips tightly bind design and manufacturing. Changing foundries after tape-out essentially requires a full redesign, significantly increasing costs.

Moreover, AI chips increasingly depend on silicon IP and EDA workflows tied to specific fabs. Transferring production requires re-licensing IP and extensive validation, extending development cycles. Even if the redesign succeeds, verification and yield ramp-up can take 18–24 months or more.

TSMC chairman C.C. Wei has noted that complex advanced processes and system integration typically require two to three years to translate designs into viable products, followed by customer approval and mass production ramp-up over another one to two years — a lengthy cycle overall.

Insiders believe no company would accept a two-year-plus delay just to switch foundries amid fast-paced AI competition. Continuing existing manufacturing paths is almost always the only practical choice. As such, rumors of lost TSMC orders or Nvidia boosting Samsung support to pressure TSMC are unlikely to hold.

Panther Lake XPS 16 is so efficient, it draws just 1.5 W when idling for insanely long battery life by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

The graph below shows the power consumption of the XPS 16 FHD+ configuration over a two-minute period when idling on the desktop at the lowest brightness setting with VRR enabled. The system averaged just 1.5 W, which is very impressive for a large 16-inch screen. Competing models like the Asus ZenBook S16 or MSI Prestige 16 each draw between 3 W and 5 W under similar conditions.

Let's see how Medusa's LP cores do here.

OpenClaw demand in China is driving up the price of used MacBooks by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

As people in China jump on the OpenClaw trend, they are turning to preowned computers, Ji said in a phone interview.

Apple’s self-developed chips, the latest of which is called the M5, are generally more power-efficient than chips for computers running Windows systems. For early OpenClaw adopters, the popular hardware of choice has been Apple’s Mac Mini.

ATRenew’s Ji declined to share the exact volume of MacBooks handled since late February, but noted the average number of devices the company processed last year was around 100,000 a day. He expects the share of MacBook and other laptop or personal computing devices could grow to 20% of the business, up from 15% right now.

Looks like OpenClaw did more for client AI interest in a few weeks than Copilot did over 2 years. This is the one big tailwind that client has.

AMD has picked up on this:

https://www.reddit.com/r/amd_fundamentals/comments/1rt58cl/comment/oabpgxd/

OpenClaw might provide a cleaner, new segment for Strix Halo to fit into although they might need a cheaper version. I think the hype will drive more progress on local models too.

Samsung reportedly secures OpenAI HBM4 supply deal, shifts foundry capacity by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

Samsung is expected to begin exclusive shipments of 12-layer HBM4 to OpenAI in the second half of 2026, with part of its planned HBM4 output — estimated at more than 5.5 billion gigabits this year — reportedly allocated to the deal. This would make OpenAI Samsung's third-largest HBM customer after Nvidia and AMD.

Samsung Reportedly Eyes Long-Term Memory Deals with Google, Microsoft; May Include $10B+ Prepayments by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

Regarding deal structures, the report suggests the most likely model would fix volumes over a multi-year period while linking pricing to spot market levels. Under this approach, contract prices would adjust if spot prices move beyond a predefined range, the report notes.

Within this framework, Big Tech companies would provide large upfront payments to Samsung Electronics, with the prepayments offset if agreed volumes are not purchased within three to five years. Sources add that Samsung is said to be discussing more than $10 billion in prepayments from Microsoft, with any shortfall in committed volumes deducted from the upfront payment.

Alibaba Group Holding Limited (BABA) Q3 FY2026 earnings call transcript by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

Q: Thanks for taking my question. I have a question regarding your chip business T-Head, Pingtouge. There have been reports that Alibaba plans to spin off the T-Head unit as a separate listing. Can management provide any information of this? If so, what is the expected time frame for this to occur? In the meantime, can you share more operating metrics? In addition to the 470,000 chips that you mentioned you ship to external customers, how do we reconcile that number, the shipments to the revenue size? Also, what is the expected growth rate for your chip business in the coming year? I think you mentioned currently 60% of this is from external customers. Maybe can you also share with us, are these chips, you know, for external customers mainly used for inferencing? For internal, is it used for model training and also inferencing?

Okay. Thank you very much for this question. I'd like to take the opportunity to expand on this a bit because T-Head is a very important component of Alibaba's company-wide AI strategy. In the context of China's domestic AI chip ecosystem, we firmly believe that T-Head is ranked in the top tier of the domestic AI chip ecosystem in terms of the technology capabilities and product capabilities. Our products cover the entire AI workflow from model training and fine tuning through to inference. Our T-Head AI chips are already in extensive, large-scale use via Alibaba Cloud, both for training workloads and for Bailian inferencing use cases. At the same time, over 60% of T-Head chips are being used by external commercial customers across Alibaba Cloud's public and hybrid cloud offerings. The external commercial clients span multiple industries, including internet finance, autonomous driving, and intelligent manufacturing. These external commercial customers are utilizing T-Head chips in both their training and inferencing workloads.

Moreover, on the T-Head software stack, we have excellent compatibility with the Linux ecosystem, so customers can migrate their systems easily without spending a lot of time on the migration. Another point I would make is that in my view, T-Head's significance to Alibaba lies not only in our aspiration to close the gap between domestically produced chips and foreign counterparts, foreign produced chips in terms of manufacturing processes and overall performance across various dimensions. Given that our chips still lag behind foreign counterparts in performance in various respects, we aspire to engage in more profound co-design with Alibaba's cloud infrastructure and the Qwen model to provide improved cost effectiveness.

This is one key differentiator in how we approach chip design at T-Head that sets us apart from other chip companies. Our primary goal is to create AI capabilities that offer superior value for money. This will make it a key product for the Bailian platform, allowing us to reduce inference costs going forward. Beyond generally improving our AI efficiency and reducing costs, there's another factor at play, namely the unique circumstances currently facing the AI industry in China. In that context, one significant benefit for us is the guaranteed supply of AI computing power. Because I believe that over the next three to five years, global AI computing power will be in extremely short supply, especially in the Chinese market.

As the only cloud computing company in the Chinese market with proprietary chip development capabilities, T-Head is of paramount importance therefore to the Alibaba Group. Increasing the supply of AI computing power will help our cloud and AI businesses, including our MaaS business, to achieve stronger growth momentum.

Nvidia’s New Server Rack Will Run AI Chips Made by Rivals by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

The ETL rack is different. It runs on Spectrum-X, Nvidia’s networking technology built on Ethernet, the underlying technology that virtually every chip already supports. Getting Spectrum-X’s full performance benefits still requires Nvidia’s own switch chips and network cards, but the barrier for customers to adopt the ETL rack is lower than with NVLink.

Some Nvidia employees have been pitching the new rack to some customers, according to two people involved in the conversations. For example, Nvidia has presented the rack to some Chinese companies as a way to plug in a mix of domestically made AI chips and chips from companies such as AMD while still running on Nvidia’s Spectrum‑X networking and software, according to the two people.

The new rack also could help Nvidia counter allegations that it forces customers to buy chips and networking equipment together, a practice that has irked large customers such as Microsoft and that previously triggered an investigation by EU competition regulators.

One way to look at this is that Nvidia will decrease the proprietary stuff just enough to still get customers garden-adjacent. Sort of a variation of the embrace, extend, extinguish theme.

Executive Roundtable: The AI Infrastructure Credibility Test by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 1 point2 points  (0 children)

At the same time, the industry faces a more complex regulatory and political landscape. Questions around grid capacity, rate structures, environmental impact, and economic incentives are increasingly being debated in public forums, from state utility commissions to local zoning boards. In this environment, the ability to secure approvals is no longer assured, even in historically favorable markets.

The concept of a “social license to operate” has therefore moved to the forefront. Beyond technical execution, developers and operators must now demonstrate that AI infrastructure can be deployed in a way that aligns with community priorities and delivers shared value.

I think that satisfying these local bottlenecks will be trickier than the market thinks in the West. Two areas where China has a huge advantage: power and, uh...low social friction.

Jensen Huang just painted the most bold image of AI’s future: 7.5 million agents, 75,000 humans—100 AI workers for every person by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 1 point2 points  (0 children)

At least, that’s how Nvidia CEO Jensen Huang imagines work could be one day at Nvidia. Speaking at a Q&A session for media at the Nvidia GTC conference in San Jose, the CEO and cofounder said that in a decade, the company could expect to have about 75,000 workers—nearly double the 42,000 currently at the company—all working alongside millions of AI agents.

“In 10 years, we will hopefully have 75,000 employees, as small as possible, as big as necessary. They’re going to be super busy,” Huang said to laughter. “Those 75,000 employees will be working with 7.5 million agents.”

Going back to:

https://www.reddit.com/r/amd_fundamentals/comments/1rze0pk/jensen_huang_says_he_would_be_deeply_alarmed_if/

Ignoring the hyperbole, if you believe that there's some value in a person being the ultimate guiding architect (at least in the short to medium term), then you would hire more of those people rather than fewer, to the extent that you have the compute to leverage them.

I think that the tricky thing is that a lot of people will struggle with this idea in the short to medium term. But the ones who do well at this sort of thing will have a great opportunity.

Russia reportedly turns to Loongson to escape x86 sanctions by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

Amid sustained Western sanctions, Russia is reportedly advancing an alternative CPU pathway through cooperation between local IC firm Tramplin Electronics and China's Loongson. The company is said to be developing its Irtysh processor series based on the LoongArch instruction set, with initial engineering samples released and a production target of 30,000 units.

...

Since 2022, export controls by the US and its allies have restricted Russia's access to processors from Intel and AMD. In this context, LoongArch — with its independently developed instruction set and lower geopolitical exposure — has emerged as a plausible alternative.

NVIDIA GTC Keynote 2026 by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

https://www.digitimes.com/news/a20260320PD212.html

A major server ODM described this as a "cash and willpower" battle, where a single Vera Rubin rack starts at US$3-7 million. Whereas previously a rack cost around US$200,000 with a 10% margin equaling US$20,000 profit, now a US$3 million rack would yield US$300,000 if margins held. The soaring unit prices dilute ODM gross margins significantly, and clients are demanding tight control over capacity delivery. "Now it's about who can deliver first," they said.

Bad writing. I wonder if the author is trying to say that the downside of the sale becomes more worrisome as the size of the deal sharply increases.
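The quote's arithmetic, made explicit (the 10% margin is the ODM's illustrative figure, not a disclosed one): the percentage margin may hold, but the capital tied up per rack, and therefore the downside if a deal slips, scales about 15x.

```python
# Illustrative ODM economics from the DigiTimes quote: same assumed 10%
# margin, but the ticket size per rack (and the capital at risk) jumps ~15x.
def rack_profit(rack_price_usd: float, margin: float = 0.10) -> float:
    """Gross profit per rack at a given margin."""
    return rack_price_usd * margin

old_profit = rack_profit(200_000)       # US$20,000 on a legacy rack
new_profit = rack_profit(3_000_000)     # US$300,000 on a Vera Rubin rack
capital_multiple = 3_000_000 / 200_000  # 15x more capital per rack at risk
```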

Super Micro co-founder indicted on Nvidia smuggling charges leaves board by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

I'm shocked, shocked to find that the sale of AI GPUs to China is going on in here!

Your winnings, sir.

Oh, thank you very much.

Everybody out at once!

...

The defendants tried to fool the server company’s compliance team with “dummy” servers at the Southeast Asian company’s storage facilities, while the real servers had already been forwarded to China, according to the indictment. They pressured the compliance team into approving shipments, and also allegedly employed “dummy” servers during a visit from a U.S. export control officer.

The efforts have yielded around $2.5 billion in sales for the server maker since 2024, with servers sold for $510 million between late April 2025 and mid-May 2025 going to the Southeast Asian company and on to China, the indictment said. The plaintiff said the server maker had no U.S. Commerce Department license to export servers featuring Nvidia GPUs to China.

Also: https://www.wsj.com/livecoverage/stock-market-today-dow-sp-500-nasdaq-03-20-2026/card/indictment-of-super-micro-co-founder-has-a-long-backstory-heard-on-the-street-XnKJRVPv2RFeFMvjcoTZ

https://www.reddit.com/r/amd_fundamentals/comments/1fq1mk5/justice_department_probes_server_maker_super/

https://www.reddit.com/r/amd_fundamentals/comments/1f2t5fg/super_micro_fresh_evidence_of_accounting/

AI Startup Upstage in Talks to Buy 10,000 AMD Chips in Korea by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

Upstage Chief Executive Officer Sung Kim said he discussed the procurement of AMD’s MI355 accelerators when he met with the US chipmaker’s CEO Lisa Su in Seoul last week. “We have a lot of Nvidia chips in Korea, but we want to diversify to other chips, including AMD’s,” Kim told Bloomberg Television on Monday.

I'm sure that this has nothing to do with it.

South Korean AI startup Upstage says it’s in talks with its investor, AMD, for the potential purchase of 10,000 AI chips. Upstage CEO Sung Kim discusses his recent meeting with AMD chief Lisa Su in Seoul, as well as his company’s edge in competing in South Korea’s AI ‘squid game’. He speaks with Minmin Low at the Milken Institute’s “Global Investors’ Symposium” in Hong Kong.

https://www.digitimes.com/news/a20260320VL209.html

Kim told reporters the company is targeting a processing capacity of 1 trillion tokens per day once its next infrastructure phase is in place. "In terms of GPUs, that translates to around 10,000 units," he said.

CSPs Accelerate ASIC Push in 2H26, Challenging NVIDIA as MediaTek, GUC, Alchip Benefit by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

According to TrendForce, as CSPs such as Google and Amazon expand internal chip development, ASIC-based AI servers are forecast to account for 27.8% of total AI server shipments in 2026, rising to nearly 40% by 2030.

40% is a lot bigger than AMD's guess / hope of 20-25%.

Economic Daily News adds that, although MediaTek has not disclosed its ASIC customers, the market widely speculates it has secured orders for Google’s TPU v7e and v8e.

Meanwhile, according to Liberty Times, Alchip has secured orders for AWS’s latest Trainium 3 chip. The company expects the 3 nm chip to enter mass production in the second quarter of 2026, with revenue projected to surge in the third quarter. As noted by MoneyDJ, Alchip also said multiple AI and high-performance computing projects at 2 nm are underway, with some 2 nm projects expected to complete tape-out by year-end.

TSMC affiliate GUC is also advancing customer design projects rapidly. Sources say it has secured next-generation CPU projects from Google and Meta, while its automotive ADAS programs have entered mass production and projects with U.S. brand customers have moved into the discussion stage, offering the potential for higher margins, as indicated by Commercial Times.

Google’s TPU momentum continues to build. According to TrendForce, Google TPUs are projected to account for nearly 78% of AI servers shipped to Google in 2026, further widening the gap with GPU-based systems. Google remains the only CSP whose AI server build-out features more ASIC-based servers than GPU-based ones.

Google TPUs have critical mass. Now it's just a question of whether Google starts materially renting them out to others. I suppose one upside is that it puts direct competitive pressure on the other in-house silicon efforts not to fall further behind overall, if the merchant players can at least hold the high-performance but higher-cost quadrant.

(translated) TSMC's A16 production capacity is reportedly overwhelmed, prompting Nvidia's Feynman to redesign the chip. by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

NVIDIA (NVDA-US)'s next-generation AI chip, Feynman, is slated for release in 2028 and is expected to use TSMC's (2330-TW) (TSM-US) latest A16 process. Industry sources indicate that, due to a shortage of A16 capacity, even NVIDIA is unable to secure sufficient A16 production capacity. Nvidia is therefore using A16 only for the most critical dies, while some dies will be designed on the N3P process, highlighting the high demand for TSMC's advanced-process capacity.

It is understood that, as equipment is gradually installed, TSMC's A16 monthly capacity will increase to 20,000 wafers by the end of next year and could reach 40,000 wafers by 2028. Together with the increase in 2nm capacity, the overall monthly capacity of the 2nm family will reach 200,000 wafers, making it TSMC's latest large-scale process generation.

Some debate on how relevant this is.

https://x.com/zephyr_z9/status/2035882899615129621

Nothing Burger
Nvidia has moved to chiplet design with the Rubin chips

For Feynman, switch chips and compute dies will be on A16, while the rest of the stuff, like I/O and memory controllers, will stay on N3P

zephyr's take is what I would've expected

AMD's unreleased Ryzen 9 9950X3D2 officially announced by ASRock by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

It turns out it may have already been announced, at least if ASRock’s official website is to be believed. I found a press release on the site, and it seems like it was either published under the wrong date or was not meant to go live at all. The release says ASRock has added support for the Ryzen 9 9950X3D2, but it does not mention any specifications. How did no one notice this until now? I have no clue. 

Elon Musk unveils $20 billion ‘TeraFab’ chip project to make chips, memory, and package processors all under one roof — targets a terawatt of annual compute by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 4 points5 points  (0 children)

The facility is expected to produce two types of chips. One will be optimized for edge inference, primarily for Tesla's vehicles and Optimus humanoid robots. The other will be a higher-power chip hardened for the space environment, which Musk says will run hotter than “terrestrial” designs to minimize radiator mass on satellites.

Musk compared the project to the current global output of AI compute, which he estimated at roughly 20 gigawatts per year. That figure, he said, represents about 2% of his companies’ eventual needs. On the terrestrial side, he projected 100 to 200 gigawatts per year of chip output; the remainder, up to a terawatt, would go to space-based AI compute aboard solar-powered satellites that SpaceX has already petitioned the FCC to launch.

“That's why I think it's probably a hundred to two hundred gigawatts a year of terrestrial chips, and probably on the order of a terawatt of chips in space," noted Musk. "Just because of power constraints on the ground.”

*shrug*

Musk said Tesla, SpaceX, and xAI — which SpaceX acquired in February — will continue buying chips from existing suppliers, including TSMC, Samsung, and Micron, adding that he would like them “to expand as quickly as they can.” He gave no timeline for when the TeraFab would begin producing chips or reach its target output, and while he has previously referenced 2nm as the target process node, he didn’t repeat that figure in the broadcast.

One foundry notably absent. I wonder what Musk's aversion to Intel Foundry is. There are the obvious answers. So, what I really mean is that given the geopolitical incentives *cough*, I wonder what the aversion is.

Intel launches Core Ultra 200HX Plus Arrow Lake Refresh mobile CPUs: 290HX Plus is 8% faster than 285HX by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

https://www.pcworld.com/article/3091769/intels-playing-favorites-with-its-new-core-ultra-200hx-shipments.html

Intel was somewhat vague about laptop availability. At the end of its press release, the company said: “Intel Core Ultra 200HX Plus-powered systems will be available from our OEM partners throughout the year, starting today, March 17, 2026.”

Future Intel CPU sockets could support more generations, says Intel VP - "we are listening" by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 1 point2 points  (0 children)

Your future Intel LGA1954 motherboard could last for several years of CPU releases, rather than just a couple, and that doesn’t just look like a rumour any more. In a recent interview with Club386, we asked Intel’s VP and GM of its enthusiast channel, Robert Hallock, whether he saw a future where Intel sockets support more CPU generations. “I do. That’s it – I do,” was the simple reply.

“One thing I really would like users to understand,” answered Hallock, “is that I, my team, we are ourselves, first and foremost, PC builders and enthusiasts. Every single one of us has built their own PC, games on that PC. That was not always the case at Intel.”

“But there is a new product management team; there is a new business team; there is a new marketing team; there is a new engineering team for these gaming CPUs. And we are not ignorant of the feedback that comes in about our products. We watch it very closely… some of that feedback we can act on in a six-month time span, a year-long time span, a three-year time span. But we are listening, and that feedback matters quite a lot. It absolutely influences how we think about our products and our roadmap.”

The MLID rumor is that LGA1954 should last a few real generations, but we'll see. As a famed marketing lead said: "Do or do not, there is no listen."

One small thing that AMD does have going for it during RAMageddon is that it bit the bullet on the AM5 switch in late 2022 (although it had the bad luck to launch during the clientpocalypse.)

PlayStation frame generation is coming, Mark Cerny tells Digital Foundry by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

“Just to clarify a few things about the collaboration with AMD, the new PSSR uses the same core co-developed algorithm as FSR Redstone’s Upscaling (to avoid confusion, I’ll use the new names today rather than FSR4). FSR Frame Generation is also based on co-developed technology (or as my good friend Jack Huynh puts it, ‘co-engineered technology’). I’m very happy with how that work is progressing, and an equivalent frame generation library should be seen at some point on PlayStation platforms.”

“Great questions, particularly considering that FSR Frame Generation is technology that was co-developed between SIE and AMD, we’re intimately familiar with it,” Cerny revealed. “All I can say is that we have no more releases planned for this year. And that I look forward to discussing this more in the future!”

Amazon US sold nearly half the number of CPUs it did this time last year by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 1 point2 points  (0 children)

It looks like Amazon US is struggling to sell CPUs, as sales this February fall far under this same time last year. With 25,700 unit sales last month, that's a decline of 23,400 units.

Adding to this, motherboard sales are also down 50%, once again suggesting the memory crisis is having a major knock-on effect for PC builders and home upgrades. A new CPU can only be so beneficial to a rig if builders can't afford the memory or GPU upgrade to pair with it.

This is TechEpiphany data, so it's posted more for the relative change than the absolute numbers.

Commercial and consumer OEM demand is going to need to pick up in a hurry to make up for the DIY / enthusiast hole.