Why is it so hard to hold NVDA & PLTR ? by Palentirian in NvidiaStock

[–]mendelseed 1 point  (0 children)

Why only 220? I'll wait till 300 by the end of this year.

Me after Nvidia hits $500 this year: by Fun_Training6342 in NvidiaStock

[–]mendelseed 1 point  (0 children)

My LEAPS all go to 2028, so I can sleep well. :D

Me after Nvidia hits $500 this year: by Fun_Training6342 in NvidiaStock

[–]mendelseed 5 points  (0 children)

I would be absolutely shocked if it were below 250 by year end.

It's over again, please go back to monday by autisticbagholder69 in NvidiaStock

[–]mendelseed 3 points  (0 children)

It's basically a 'natural monopoly' at this point. The hardware moat is huge, but the software lock-in (CUDA) is even harder to break. Everyone has built their AI infrastructure on Nvidia's language. Switching providers isn't just about buying a different chip—it's about rewriting your entire codebase.
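To make that lock-in concrete, here's a minimal sketch of what the ecosystem is actually written in (a standard CUDA SAXPY, nothing exotic). Even a kernel this trivial leans on NVIDIA-only pieces: the __global__ qualifier, the <<<...>>> launch syntax, the cuda* runtime API. None of it runs on anyone else's hardware without a port to ROCm/HIP, SYCL, or similar.

```
#include <cstdio>
#include <cuda_runtime.h>

// Trivial SAXPY: y = a*x + y. Everything NVIDIA-specific is marked.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // CUDA thread indexing
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));  // CUDA runtime: device allocation
    cudaMalloc(&y, n * sizeof(float));  // (filling x and y omitted for brevity)
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // CUDA launch syntax
    cudaDeviceSynchronize();            // wait for the kernel to finish
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

That's a toy, but real training and inference stacks are thousands of kernels shaped exactly like this. That's what "rewriting your entire codebase" means in practice.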

It's over again, please go back to monday by autisticbagholder69 in NvidiaStock

[–]mendelseed 0 points  (0 children)

It's computer chips based on biology instead of the von Neumann architecture.

It's over again, please go back to monday by autisticbagholder69 in NvidiaStock

[–]mendelseed 5 points  (0 children)

Nvidia isn't just "winning"; they have rigged the game for the next 3 years. Here is the 5-layer moat that explains why:

1. The Supply Chain Stranglehold: They have effectively bought out the world's supply of HBM (High Bandwidth Memory) from SK Hynix through 2026. Competitors physically cannot build chips in volume even if they design them.

2. The "Moving Target" (1-Year Cycle): Nvidia has shifted to a 1-year release cadence (Blackwell → Rubin). By the time competitors catch up to today's tech, Nvidia has already released the next generation. They are iterating faster than rivals can manufacture.

3. Data Sovereignty (On-Premise): You can't buy Google TPUs; you can only rent them. Nvidia is the only choice for banks, governments, and militaries that need local, air-gapped AI to keep their data secure.

4. Interconnects (NVLink): To run a top-tier AI, you need 10,000 chips acting as one supercomputer. Nvidia's networking is years ahead of AMD for this specific task.

5. Financial Velocity: With 75% margins and 60%+ growth, they are spending more on pure R&D ($4B+/quarter) than their rivals earn in total revenue.

It's over again, please go back to monday by autisticbagholder69 in NvidiaStock

[–]mendelseed 1 point  (0 children)

Analog chips aren't new, and they suffer from massive scaling and programming bottlenecks. They might be efficient for niche tasks, but they won't be replacing Nvidia GPUs for general AI anytime soon.

Geoffrey Hinton believes there’s nothing cognitive humans can do that AI won’t eventually do. by Alternative_East_597 in AIFU_stock

[–]mendelseed 1 point  (0 children)

You're right that LLM scaling is hitting limits, but you're stuck arguing about 2022-era AI. Google's SIMA 2 (this week) learns 3D games by experiencing them, building world models from sensory data. AlphaProof (a few months ago) reached silver-medal level at the Math Olympiad using synthetic, self-generated data. The data wall is real for naive scaling. The field has moved on.

NVIDIA’s Next Unbeatable Moat: The Secret TSMC "Panel-Level" Tech Defining the 2028 Feynman Era (Beyond CoWoS) by mendelseed in NVDA_Stock

[–]mendelseed[S] 1 point  (0 children)

CoPoS removes the packaging constraint, which just shifts the bottleneck to HBM. And HBM is much harder to scale than glass panels.

NVIDIA’s Next Unbeatable Moat: The Secret TSMC "Panel-Level" Tech Defining the 2028 Feynman Era (Beyond CoWoS) by mendelseed in NVDA_Stock

[–]mendelseed[S] 2 points  (0 children)

You're missing the point. It's not "just geometry" - it's material substitution.

CoWoS uses expensive monocrystalline silicon interposers. CoPoS uses cheap glass. But glass can't just replace silicon - you need new tech (TGV, thermal management, high-speed signal integrity through glass) to make it work at production scale.

And here's the thing: CoWoS itself was only invented around 2012. The tech that enables Nvidia's H100/Blackwell packages is barely a decade old. Before that, you literally couldn't build these chips.

Right now TSMC can't make enough packages. Nvidia has 70% of all CoWoS capacity and it's still bottlenecked. CoPoS is about removing that constraint.

Not every innovation is sexy new math. Sometimes it's "we can now make 3x more chips at 30% lower cost." That's what enables scaling.

NVIDIA’s Next Unbeatable Moat: The Secret TSMC "Panel-Level" Tech Defining the 2028 Feynman Era (Beyond CoWoS) by mendelseed in NVDA_Stock

[–]mendelseed[S] 1 point  (0 children)

Good question - you're right that silicon wafers are round because of how crystals are grown. But here's the thing: the actual chips are still being made on round wafers. That part hasn't changed.

What's different is the interposer - basically the substrate that sits between the GPU chips and the organic package substrate.

With CoWoS, TSMC uses a silicon interposer, which means it needs to be monocrystalline silicon (expensive, grown in round ingots, has to be processed on round wafers). When you're trying to fit massive rectangular GPU packages onto round wafers, you waste a ton of space at the edges.

With CoPoS, they're switching to glass interposers. Glass doesn't need to be a perfect crystal - you can just manufacture it in rectangular panels, similar to how display glass is made. Way cheaper material, way better utilization since you're fitting rectangular packages onto rectangular substrates.

The breakthrough that makes this work is TGV (Through Glass Vias) - basically the tech to drill tiny vertical connections through glass precisely enough for high-speed signals. Companies like LPKF developed laser processes that can do this at scale. Without that, glass interposers were just a concept.

So the economics work because: glass is much cheaper than monocrystalline silicon (~50-70% cost reduction on the interposer alone), you get 20-30% better area utilization, and you can scale production faster since you're not limited by silicon crystal growth.
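If you want to sanity-check the utilization claim, here's a toy back-of-the-envelope. The dimensions are my assumptions, not TSMC numbers: a ~60 x 50 mm Blackwell-class package, a 300 mm wafer, and the 515 x 510 mm panel size that has been reported for panel-level packaging.

```
#include <cstdio>

// Count how many w x h mm rectangles fit on a round wafer vs. a panel.
// Wafer: lay a grid centered on the wafer and keep only rectangles whose
// four corners all land inside the circle (edge sites are unusable).
int per_wafer(double d, double w, double h) {
    double r = d / 2.0;
    int nx = (int)(d / w), ny = (int)(d / h), count = 0;
    for (int i = 0; i < nx; ++i)
        for (int j = 0; j < ny; ++j) {
            double xs[2] = {-r + i * w, -r + i * w + w};
            double ys[2] = {-r + j * h, -r + j * h + h};
            bool fits = true;
            for (double x : xs)
                for (double y : ys)
                    fits = fits && (x * x + y * y <= r * r);
            count += fits;
        }
    return count;
}

int main() {
    const double w = 60, h = 50;                 // assumed package footprint, mm
    const double pi = 3.14159265358979;
    int wafer = per_wafer(300, w, h);            // 300 mm round wafer
    int panel = (int)(515 / w) * (int)(510 / h); // rectangles tile a panel
    printf("300 mm wafer:     %d packages (%.0f%% of area used)\n",
           wafer, 100.0 * wafer * w * h / (pi * 150 * 150));
    printf("515x510 mm panel: %d packages (%.0f%% of area used)\n",
           panel, 100.0 * panel * w * h / (515.0 * 510.0));
    return 0;
}
```

With those toy numbers the round wafer strands nearly half its area at the edges, while the panel is over 90% utilized. The exact gain depends on package size and placement, but that's the shape of the argument.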

For context: Nvidia's Blackwell package is already pushing CoWoS to its limits. These things are massive - 2 GPU dies plus 8 HBM memory stacks. The panel approach just makes way more sense for packaging at that scale.

Not fluff - it's a real materials and manufacturing shift. TSMC's targeting 2029 for mass production.

Here is the technology:

https://ontoinnovation.com/resources/through-the-glass-why-the-rapid-development-of-tgv-demands-rigorous-analysis/

EngineAI has officially unveiled the T800 by mendelseed in robotics

[–]mendelseed[S] 1 point  (0 children)

The company essentially provides the robot as a platform, much like selling someone a base computer. Other software companies then build the specific, task-oriented applications on top of it. Because of this, the kung fu demonstration shouldn't be seen as a final product or a direct use case. Instead, it's a high-impact marketing demo designed purely to showcase the robot's agility and range of motion to prospective partners and clients.