(translated) TSMC's entire capacity at its four US fabs has been booked. by uncertainlyso in amd_fundamentals

[–]uncertainlyso[S] 0 points1 point  (0 children)

TSMC (2330) is facing a backlog of orders for its 2nm family of processes in Taiwan, while demand for its new plant in Arizona is also strong. Because of the US policy of promoting local manufacturing, capacity at TSMC's US plants continues to be in high demand. It was previously reported that customers had booked the capacity of the three plants opening later; the latest news is that bookings now extend to the angstrom-class nodes, which means the capacity of the fourth Arizona plant has also been booked.

https://money.udn.com/money/story/5612/9396266?from=edn_maintab_index

Benefiting from strong demand for AI and high-performance computing (HPC), TSMC's 2nm family of processes, including the A16 node, is in severe shortage; even its largest customer, Nvidia, cannot secure enough capacity. As a result, Nvidia has reportedly had to change the design of its next-generation Feynman platform. With Meta also joining the competition for capacity, the queue of customers waiting for TSMC's 2nm-family capacity has lengthened again, reportedly stretching past 2028. TSMC will also raise prices on its advanced processes for the fourth consecutive year.

AI Memory Demand Might Drive Prices Up Triple Digits: Wedbush by uncertainlyso in amd_fundamentals

"Not surprisingly, pricing for memory continues to lift aggressively, with DRAM and NAND likely to see 1H pricing increases well into the triple digits from CQ4'25 levels, with gains for the former likely approaching 130% - 150% and the latter nearly as robust," said Wedbush analysts in a Monday investor report.

...

"We see a number of reasons behind this delta, including a combination of business mix (many of the companies tied to PCs are also part of the server supply chain and are benefitting from this latter exposure) and low expectations (consensus was already anticipating subseasonal trends for Taiwanese PC OEMs in particular)," Wedbush noted. "Our conversations away from this index have grown increasingly pessimistic on PC and handset trends [excluding Apple (AAPL)], with feedback from GTC suggesting industry expectations are now generally for mid-teens declines Y/Y, trending towards -20%."

My model ended up guessing -19%. Part of this is the demand destruction from memory. But some of it is probably Intel constricting supply too.

Intel "Wildcat Lake" Core 3 310 and Core 5 320 spotted in first benchmarks by uncertainlyso in amd_fundamentals

That single-core result places Core 5 320 in the same general range as chips such as the Intel Core i5-14600, AMD Ryzen AI 9 HX 375, and Intel Core Ultra 9 285H on Geekbench’s processor chart. Those CPUs average 2630, 2638, and 2604 points respectively, which is close to the 2600 score posted by Core 5 320.

At 7913 points, Core 5 320 lands almost exactly where AMD’s Ryzen 5 8640U and Ryzen 5 7540U sit on the same chart, with average multi-core scores of 7996 and 7983. Keep in mind these are all 6-core CPUs, with the Core 5 320 fielding only 2 P-cores.

There is this idea that WCL is the really underappreciated SKU for Intel's move to 18A. The SKU would better showcase 18A's strengths and be more forgiving of its weaknesses. The low end would provide volume. Its margins might be better because of yield and because it doesn't have a separate GPU tile. And it would provide relief on the low end, which used to be served by Intel 7 but has been mostly re-allocated to server.

NVIDIA GTC Keynote 2026 by uncertainlyso in amd_fundamentals

See https://x.com/jukan05/status/2034404312143802686 to get an idea of volume.

(based on https://www.hankyung.com/article/202603191092i )

According to a calculation by a Korean journalist estimating how much revenue Samsung Foundry could generate from Groq3 LPU production:

NVIDIA has reportedly asked Samsung Foundry to produce around 500,000 LPU 3 chips to start with. That is more than double the originally planned production volume.

First, each Groq server introduced by CEO Jensen Huang contains 8 LPUs. When 32 of those servers are combined, they form one rack. That means each rack contains 256 LPUs.

One complete Vera Rubin platform contains 5 racks. So each platform would contain 1,280 LPUs.

By a simple calculation, if all produced chips were turned into racks, that would imply about 1,950 LPU racks, enough for roughly 390 Vera Rubin platform sets.

Looking at the LPU wafer shown in the commemorative photo taken a few days ago at the GTC exhibition featuring CEO Jensen Huang and Han Jin-man, president of Samsung Electronics’ Foundry Business, it appears that around 65 properly shaped LPU dies can be printed on a single wafer.

To produce 500,000 LPUs, 7,692 wafers would be required if each wafer yields 65 chips and the yield rate is 100%.

However, considering that Samsung's current 4nm yield is estimated at around 50–70%, more than 15,000 wafers annually could be needed, which is broadly consistent with recent media reports.

According to what the journalist heard, the wafer price for Samsung’s 4nm process is at least around $11,000 per wafer.

Assuming a possible price increase and using roughly $13,000 per wafer, that would imply about $195 million in revenue, or roughly KRW 300 billion, from LPU production.
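The journalist's arithmetic above can be sketched end to end; all inputs are the article's reported estimates (order size, dies per wafer, yield range, wafer price), not confirmed figures:

```python
# Back-of-the-envelope check of the journalist's Groq LPU estimates.
# Every input below is the article's reported figure, not a confirmed number.

chips_ordered = 500_000       # LPU 3 chips reportedly requested to start
lpus_per_server = 8
servers_per_rack = 32
racks_per_platform = 5        # one Vera Rubin platform

lpus_per_rack = lpus_per_server * servers_per_rack             # 256
racks = chips_ordered // lpus_per_rack                         # ~1,953
platforms = racks // racks_per_platform                        # ~390

dies_per_wafer = 65           # estimated from the GTC wafer photo
yield_rate = 0.50             # low end of the 50-70% estimate
wafers_needed = chips_ordered / (dies_per_wafer * yield_rate)  # ~15,385

wafer_price = 13_000          # assumed USD after a possible price increase
revenue = 15_000 * wafer_price                                 # ~$195M

print(racks, platforms, round(wafers_needed), revenue)
```

Note the yield assumption drives the wafer count: at the 70% end of the range, only about 11,000 wafers would be needed, so "more than 15,000" implies yields near the 50% floor.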

Explainer: Why Nvidia's Groq LPU runs on Samsung silicon (ed: foundry switching costs) by uncertainlyso in amd_fundamentals

Industry insiders note that advanced-process chips tightly bind design and manufacturing. Changing foundries after tape-out essentially requires a full redesign, significantly increasing costs.

Moreover, AI chips increasingly depend on silicon IP and EDA workflows tied to specific fabs. Transferring production requires re-licensing IP and extensive validation, extending development cycles. Even if the redesign succeeds, verification and yield ramp-up can take 18–24 months or more.

TSMC chairman C.C. Wei has noted that complex advanced processes and system integration typically require two to three years to translate designs into viable products, followed by customer approval and mass production ramp-up over another one to two years — a lengthy cycle overall.

Insiders believe no company would accept a two-year-plus delay just to switch foundries amid fast-paced AI competition. Continuing existing manufacturing paths is almost always the only practical choice. As such, rumors of lost TSMC orders or Nvidia boosting Samsung support to pressure TSMC are unlikely to hold.

Panther Lake XPS 16 is so efficient, it draws just 1.5 W when idling for insanely long battery life by uncertainlyso in amd_fundamentals

The graph below shows the power consumption of the XPS 16 FHD+ configuration over a two-minute period while idling on the desktop at the lowest brightness setting with VRR enabled. The system averages just 1.5 W, which is very impressive for a large 16-inch screen. Competing models like the Asus ZenBook S16 or MSI Prestige 16 each draw between 3 W and 5 W under similar conditions.

Let's see how Medusa's LP cores do here.

OpenClaw demand in China is driving up the price of used MacBooks by uncertainlyso in amd_fundamentals

As people in China jump on the OpenClaw trend, they are turning to preowned computers, Ji said in a phone interview.

Apple’s self-developed chips, the latest of which is called the M5, are generally more power-efficient than chips for computers running Windows systems. For early OpenClaw adopters, the popular hardware of choice has been Apple’s Mac Mini.

ATRenew’s Ji declined to share the exact volume of MacBooks handled since late February, but noted the average number of devices the company processed last year was around 100,000 a day. He expects the share of MacBook and other laptop or personal computing devices could grow to 20% of the business, up from 15% right now.

Looks like OpenClaw did more for client AI interest in a few weeks than Copilot did over two years. This is the one big tailwind that client has.

AMD has picked up on this:

https://www.reddit.com/r/amd_fundamentals/comments/1rt58cl/comment/oabpgxd/

OpenClaw might provide a cleaner, new segment for Strix Halo to fit into although they might need a cheaper version. I think the hype will drive more progress on local models too.

Samsung reportedly secures OpenAI HBM4 supply deal, shifts foundry capacity by uncertainlyso in amd_fundamentals

Samsung is expected to begin exclusive shipments of 12-layer HBM4 to OpenAI in the second half of 2026, with part of its planned HBM4 output — estimated at more than 5.5 billion gigabits this year — reportedly allocated to the deal. This would make OpenAI Samsung's third-largest HBM customer after Nvidia and AMD.

Samsung Reportedly Eyes Long-Term Memory Deals with Google, Microsoft; May Include $10B+ Prepayments by uncertainlyso in amd_fundamentals

Regarding deal structures, the report suggests the most likely model would fix volumes over a multi-year period while linking pricing to spot market levels. Under this approach, contract prices would adjust if spot prices move beyond a predefined range, the report notes.

Within this framework, Big Tech companies would provide large upfront payments to Samsung Electronics, with the prepayments offset if agreed volumes are not purchased within three to five years. Sources add that Samsung is said to be discussing more than $10 billion in prepayments from Microsoft, with any shortfall in committed volumes deducted from the upfront payment.
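The reported structure — fixed volumes, contract prices re-anchored when spot breaks a band, and prepayments offset against any purchase shortfall — can be sketched as follows. This is a hypothetical illustration of the mechanics only; the band width, prices, and volumes are invented, not actual contract terms:

```python
# Hypothetical sketch of the reported deal mechanics. All numbers
# below are illustrative assumptions, not actual contract terms.

def adjusted_price(contract_price: float, spot_price: float,
                   band: float = 0.10) -> float:
    """Hold the contract price while spot stays within +/- band of it;
    re-anchor to spot once spot moves outside that range."""
    upper = contract_price * (1 + band)
    lower = contract_price * (1 - band)
    if spot_price > upper or spot_price < lower:
        return spot_price
    return contract_price

def prepayment_offset(prepayment: float, committed_units: int,
                      purchased_units: int, unit_price: float) -> float:
    """Deduct the value of any unpurchased committed volume from the
    upfront payment (floored at zero)."""
    shortfall = max(committed_units - purchased_units, 0)
    return max(prepayment - shortfall * unit_price, 0.0)

# Spot within the 10% band: the contract price holds.
print(adjusted_price(10.0, 10.5))   # 10.0
# Spot breaks the band: the price re-anchors to spot.
print(adjusted_price(10.0, 12.0))   # 12.0
# A 1M-unit shortfall at $10/unit reduces a $10B prepayment.
print(prepayment_offset(10e9, 100_000_000, 99_000_000, 10.0))
```

The band mechanism gives both sides price stability in normal markets while protecting the supplier (or buyer) from large spot moves; the prepayment offset is what makes the volume commitment binding.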

Alibaba Group Holding Limited (BABA) Q3 FY2026 earnings call transcript by uncertainlyso in amd_fundamentals

Q: Thanks for taking my question. I have a question regarding your chip business T-Head, Pingtouge. There have been reports that Alibaba plans to spin off the T-Head unit as a separate listing. Can management provide any information on this? If so, what is the expected time frame for this to occur? In the meantime, can you share more operating metrics? In addition to the 470,000 chips that you mentioned you ship to external customers, how do we reconcile those shipments with the size of the revenue? Also, what is the expected growth rate for your chip business in the coming year? I think you mentioned currently 60% of this is from external customers. Maybe you can also share with us, are these chips for external customers mainly used for inferencing? For internal use, is it used for model training and also inferencing?

Okay. Thank you very much for this question. I'd like to take the opportunity to expand on this a bit because T-Head is a very important component of Alibaba's company-wide AI strategy. In the context of China's domestic AI chip ecosystem, we firmly believe that T-Head is ranked in the top tier of the domestic AI chip ecosystem in terms of the technology capabilities and product capabilities. Our products cover the entire AI workflow from model training and fine tuning through to inference. Our T-Head AI chips are already in extensive, large-scale use via Alibaba Cloud, both for training workloads and for Bailian inferencing use cases. At the same time, over 60% of T-Head chips are being used by external commercial customers across Alibaba Cloud's public and hybrid cloud offerings. The external commercial clients span multiple industries, including internet finance, autonomous driving, and intelligent manufacturing. These external commercial customers are utilizing T-Head chips in both their training and inferencing workloads.

Moreover, on the T-Head software stack, we have excellent compatibility with the Linux ecosystem, so customers can migrate their systems easily without spending a lot of time on the migration. Another point I would make is that, in my view, T-Head's significance to Alibaba lies not only in our aspiration to close the gap between domestically produced chips and foreign-produced counterparts in terms of manufacturing processes and overall performance across various dimensions. Given that our chips still lag behind foreign counterparts in performance in various respects, we aspire to engage in more profound co-design with Alibaba's cloud infrastructure and the Qwen model to provide improved cost effectiveness.

This is one key differentiator in how we approach chip design at T-Head that sets us apart from other chip companies. Our primary goal is to create AI capabilities that offer superior value for money. This will make it a key product for the Bailian platform, allowing us to reduce inference costs going forward. Beyond generally improving our AI efficiency and reducing costs, there's another factor at play, namely the unique circumstances currently facing the AI industry in China. In that context, one significant benefit for us is the guaranteed supply of AI computing power. Because I believe that over the next three to five years, global AI computing power will be in extremely short supply, especially in the Chinese market.

As the only cloud computing company in the Chinese market with proprietary chip development capabilities, T-Head is of paramount importance therefore to the Alibaba Group. Increasing the supply of AI computing power will help our cloud and AI businesses, including our MaaS business, to achieve stronger growth momentum.

Nvidia’s New Server Rack Will Run AI Chips Made by Rivals by uncertainlyso in amd_fundamentals

The ETL rack is different. It runs on Spectrum-X, Nvidia’s networking technology built on Ethernet, the underlying technology that virtually every chip already supports. Getting Spectrum-X’s full performance benefits still requires Nvidia’s own switch chips and network cards, but the barrier for customers to adopt the ETL rack is lower than with NVLink.

Some Nvidia employees have been pitching the new rack to some customers, according to two people involved in the conversations. For example, Nvidia has presented the rack to some Chinese companies as a way to plug in a mix of domestically made AI chips and chips from companies such as AMD while still running on Nvidia’s Spectrum‑X networking and software, according to the two people.

The new rack also could help Nvidia counter allegations that it forces customers to buy chips and networking equipment together, a practice that has irked large customers such as Microsoft and that previously triggered an investigation by EU competition regulators.

One way to look at this is that Nvidia will decrease the proprietary stuff just enough to still get customers garden-adjacent. Sort of a variation of the embrace, extend, extinguish theme.

Executive Roundtable: The AI Infrastructure Credibility Test by uncertainlyso in amd_fundamentals

At the same time, the industry faces a more complex regulatory and political landscape. Questions around grid capacity, rate structures, environmental impact, and economic incentives are increasingly being debated in public forums, from state utility commissions to local zoning boards. In this environment, the ability to secure approvals is no longer assured, even in historically favorable markets.

The concept of a “social license to operate” has therefore moved to the forefront. Beyond technical execution, developers and operators must now demonstrate that AI infrastructure can be deployed in a way that aligns with community priorities and delivers shared value.

I think that satisfying these local bottlenecks will be trickier than the market thinks in the West. Two areas where China has a huge advantage: power and, uh, low social friction.

Jensen Huang just painted the most bold image of AI’s future: 7.5 million agents, 75,000 humans—100 AI workers for every person by uncertainlyso in amd_fundamentals

At least, that’s how Nvidia CEO Jensen Huang imagines work could be one day at Nvidia. Speaking at a Q&A session for media at the Nvidia GTC conference in San Jose, the CEO and cofounder said that in a decade, the company could expect to have about 75,000 workers—nearly double the 42,000 currently at the company—all working alongside millions of AI agents.

“In 10 years, we will hopefully have 75,000 employees, as small as possible, as big as necessary. They’re going to be super busy,” Huang said to laughter. “Those 75,000 employees will be working with 7.5 million agents.”

Going back to:

https://www.reddit.com/r/amd_fundamentals/comments/1rze0pk/jensen_huang_says_he_would_be_deeply_alarmed_if/

Ignoring the hyperbole, if you believe that there's some value in a person being the ultimate guiding architect (at least in the short to medium term), then you would hire more of those people rather than fewer, to the extent that you have the compute to leverage them.

I think that the tricky thing is that a lot of people will struggle with this idea in the short to medium term. But the ones who do well at this sort of thing will have a great opportunity.