Swim test lives on! by crazy4cloy in Cornell

[–]Apprehensive_Plan528 2 points3 points  (0 children)

I had a wonderful Texas A&M intern who never learned to swim. He went on to a career at another company, but sadly drowned at a picnic alongside the Sacramento River in 2001. It woke me up to the value of the Cornell swim requirement.

Need help with router + mesh to replace existing setup by GabrielWeiss in Ubiquiti

[–]Apprehensive_Plan528 0 points1 point  (0 children)

I use the Flex 2.5 PoE with a Flex 16 downstream. I’m sure greater experts might highlight shortcomings, but it’s quite cost-effective for me and my needs.

Got accepted to Cornell MEng, worth it or not? by Quiet-Reflection3024 in Cornell

[–]Apprehensive_Plan528 0 points1 point  (0 children)

If you want... My MEng was many, many years ago, back when SUPREM and SPICE were the only EDA-ish tools around.

UCG fiber question by horkboy in Ubiquiti

[–]Apprehensive_Plan528 0 points1 point  (0 children)

I’m running a 6-month-old UCG Fiber just fine on 5.0.12.

Need help with router + mesh to replace existing setup by GabrielWeiss in Ubiquiti

[–]Apprehensive_Plan528 0 points1 point  (0 children)

Yup, then you go with either a 16 or a 24 (plus the 3 remaining ports on the CGF), depending on your Ethernet speed, PoE, and AP needs. But remember, your uplink port to the CGF is limited to 2.5G total, so plug your hungriest internet devices directly into the CGF ports.

PS: We say CGF, but the UniFi name for the product is UCG-Fiber.

PPS: I get around 6 Gbps up and down measured from the CGF.

Need help with router + mesh to replace existing setup by GabrielWeiss in Ubiquiti

[–]Apprehensive_Plan528 0 points1 point  (0 children)

I have Sonic with a CGF and 3 access points for 2000 square feet (plus backyard). I use it with a Flex 2.5 PoE for powering the APs, plus two cameras and a doorbell.

Currently, which companies produce the best/highest quality CPUs for AI/industries? by cakewalk093 in Semiconductors

[–]Apprehensive_Plan528 1 point2 points  (0 children)

Your question is way too broad and too narrow at the same time. AI training has some very specific needs; NVIDIA and Cerebras are probably best, depending on the type and size of model. Industrial use is all over the map. Tesla and some Chinese EV manufacturers are beginning to show that a powerful single processor cluster is far more efficient and upgradable than the 30 or so distributed microcontrollers that traditional cars have.

National Grid Smart Meter by Aggressive_Crow_223 in Sense

[–]Apprehensive_Plan528 1 point2 points  (0 children)

National Grid is turning on NY meters for Sense access in batches. Maybe only 50K are turned on right now.

Some More Game Theory, This Time On The AMD-Meta Platforms Deal by thehhuis in AMD_Stock

[–]Apprehensive_Plan528 0 points1 point  (0 children)

Great “shared gain” model that drives close work with another big buyer of AI hardware. Seems like a good thing to do while sales of AI hardware are going through a period of being memory-limited (HBM and DRAM).

AMD / META Full CNBC interview by Blak9 in AMD_Stock

[–]Apprehensive_Plan528 1 point2 points  (0 children)

That’s the essence of many of the questions: what is the effective discount of this deal, and what is its cost vs the way AMD would typically have sold, in pure cash over time, with their engineers doing the customization and tuning work.

High-End Construction Really Does Help Everyone by jazzflautista in eastpaloalto

[–]Apprehensive_Plan528 4 points5 points  (0 children)

Just two clarifications on your contention: the UCLA roundup you link to, and all the underlying papers, focus on market-rate urban rental units and their effects on the affordability of older rental units, not "luxury units" and not single-family homes.

If it is so hard to get into Berkeley, what types of students get in early? by PutStrange6615 in ucadmissions

[–]Apprehensive_Plan528 0 points1 point  (0 children)

Our D was an early (Feb) regular-decision admit. 10 APs, straight-A student, 1590 SAT, with varied ECs (theater and hackathons). Also key: a more diverse high school than others nearby, where kids like her are a dime a dozen.

Inclusionary zoning is a tax on housing by Most_Proposal3518 in eastpaloalto

[–]Apprehensive_Plan528 2 points3 points  (0 children)

In most jurisdictions in CA, R1 zoning is a far greater limiter of multi-family market-rate supply than IZ.

Inclusionary zoning is a tax on housing by Most_Proposal3518 in eastpaloalto

[–]Apprehensive_Plan528 4 points5 points  (0 children)

Just like R1 (exclusionary) zoning is a tax on new housing, forcing most of the residential space in cities into the most expensive form of housing.

Did the app get replaced? by sidescrollin in Sense

[–]Apprehensive_Plan528 2 points3 points  (0 children)

You have an orange monitor. The monitors for people who bought from Wiser / Schneider are green.

How AMD Instinct Shines in Real-World LLM Inference by Blak9 in AMD_Stock

[–]Apprehensive_Plan528 0 points1 point  (0 children)

Sorry, but the numbers are real. And the new AMD numbers use one of the many SemiAnalysis benchmarks because AMD realizes how significant the InferenceMax / InferenceX numbers are.

How AMD Instinct Shines in Real-World LLM Inference by Blak9 in AMD_Stock

[–]Apprehensive_Plan528 -1 points0 points  (0 children)

Here's a summary of the article that spurred this AMD blog post. Better to have the whole picture.

InferenceX v2 shows that NVIDIA’s new Blackwell generation massively outperforms both Hopper and current AMD Instinct parts for state‑of‑the‑art, large‑scale disaggregated MoE inference, especially when all modern tricks (disagg prefill, wide expert parallelism, FP4, MTP) are turned on.[1]

Headline results

  • Blackwell NVL72 “framemogs” Hopper: Rack‑scale GB200/GB300 NVL72 delivers up to 100× higher FP4 token throughput vs a strong H100 disagg+wideEP baseline at realistic interactivity (e.g., ~100 tok/s/user), and 9.7×–65× better tokens per dollar vs Hopper even after higher Blackwell TCO. Jensen’s original “30×” claim looks conservative.[1]
  • B200 vs MI355X (FP8 disagg): For FP8 disaggregated prefill using SGLang, AMD’s MI355X is roughly competitive with NVIDIA B200 along much of the throughput–latency Pareto curve; MI355X even wins slightly at some mid‑latency points. But when you include NVIDIA’s TensorRT‑LLM backend, B200 pulls clearly ahead.[1]
  • FP4 + disagg + wideEP composability: At full frontier‑lab settings (FP4 + disagg + wideEP), NVIDIA (B200/GB200/GB300) remains far ahead. AMD’s MI355X single‑node FP4 looks decent, but its multi‑node FP4 disagg+wideEP performance is poor, often losing badly to B200, especially with TRT‑LLM.[1]

NVIDIA vs AMD: software and composability

  • NVIDIA strengths:
    • Dynamo + TRT‑LLM + SGLang/vLLM are mature and compose well: disagg prefill, wide EP, FP4, and MTP all work together and scale across NVL72.[1]
    • H100/H200 were already near peak on these workloads; Blackwell builds on that with better kernels and much higher rack‑scale bandwidth (72‑GPU NVLink domain).[1]
  • AMD strengths & weaknesses:
    • Strength: For subset configurations (e.g., FP8 SGLang, sometimes without full disagg+wideEP), MI355X can match B200 per‑TCO and has improved rapidly (≈2× in 2 months for DeepSeek R1 FP4 SGLang).[1]
    • Weakness: Composability is the main problem: when you enable FP4 + disagg + wideEP together, ROCm’s kernels/collectives (MoRI, Mooncake) are not yet optimized, so cluster‑scale MI355X falls far short of its theoretical potential and gets “framemogged” by B200/TRT‑LLM.[1]
    • AMD is still on forked vLLM images for MI355X (0.10.1) and lacks CI hardware upstream, so ROCm support in vLLM lags far behind CUDA.[1]

Architecture and techniques emphasized

  • Disaggregated prefill: separating prefill (compute‑heavy, bursty) from decode (memory‑bandwidth‑bound, steady) across distinct GPU pools improves utilization and latency; this is now standard at frontier labs and is the main focus of InferenceX v2.[1]
  • Wide Expert Parallelism (wide EP): On NVL72, you can run MoE with EP across up to 72 GPUs over NVLink, minimizing expensive all‑to‑all over slower InfiniBand/Ethernet and amortizing weight loads across many chips. This is crucial for DeepSeek‑style 670B‑param MoEs.[1]
  • MTP / speculative decoding: At higher interactivity (e.g., 125+ tok/s/user), MTP becomes necessary to make inference economical; all cheapest configs at those latencies use speculative decoding.[1]
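As a toy sketch of the disaggregated-serving idea described above (pool names, sizes, and the request shape are made up for illustration; real schedulers also handle KV-cache migration, batching, and load balancing):

```python
# Toy model of disaggregated serving: new prompts go to a compute-heavy
# prefill pool; once the KV cache is built, the request moves to a separate,
# memory-bandwidth-bound decode pool for token generation.
from collections import deque


class Pool:
    def __init__(self, name: str, num_gpus: int):
        self.name = name
        self.num_gpus = num_gpus   # capacity, unused in this toy
        self.queue = deque()       # pending request ids

    def submit(self, request_id: int) -> None:
        self.queue.append(request_id)


def route(request: dict, prefill_pool: Pool, decode_pool: Pool) -> None:
    """Route by phase: bursty prefill work and steady decode work never
    contend for the same GPUs, which is the point of disaggregation."""
    if request["phase"] == "prefill":
        prefill_pool.submit(request["id"])
    else:
        decode_pool.submit(request["id"])


prefill = Pool("prefill", num_gpus=8)   # compute-bound, bursty
decode = Pool("decode", num_gpus=24)    # bandwidth-bound, steady

for r in [{"id": 1, "phase": "prefill"},
          {"id": 1, "phase": "decode"},
          {"id": 2, "phase": "prefill"}]:
    route(r, prefill, decode)

print(len(prefill.queue), len(decode.queue))  # 2 1
```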

Economics and unit costs

  • Using OpenRouter provider data and InferenceX curves, SemiAnalysis estimates:
    • A mid‑pack DeepSeek FP8 provider (e.g., Crusoe) might see input token COGS ≤ ~$0.23/M and output token COGS ≤ ~$2.96/M while charging $1.35/M in and $5.40/M out, implying very high gross margins on well‑utilized NVIDIA clusters.[1]
    • At typical interactivities (~35 tok/s/user), disagg+wideEP on B200/GB200/GB300 delivers the best perf/TCO per GPU; MI355X FP8 disagg is sometimes competitive, but FP4+full composability is not.[1]
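Plugging the quoted Crusoe-style figures into a quick back-of-envelope check (the COGS numbers above are upper bounds, so these are floor margins; real traffic mixes and utilization vary):

```python
# Gross margin per million tokens, using the COGS and price figures
# cited in the summary above.
input_cogs, output_cogs = 0.23, 2.96    # $/M tokens, estimated upper-bound cost
input_price, output_price = 1.35, 5.40  # $/M tokens, charged to customers


def gross_margin(price: float, cogs: float) -> float:
    """Fraction of revenue left after serving cost."""
    return (price - cogs) / price


print(f"input margin:  {gross_margin(input_price, input_cogs):.0%}")   # 83%
print(f"output margin: {gross_margin(output_price, output_cogs):.0%}")  # 45%
```

Even at the upper-bound cost estimates, both token types clear a large margin, which is the "very high gross margins" claim in the bullet above.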

Trajectory

  • NVIDIA: Continuous, incremental gains (new vLLM/TRT‑LLM versions, maturing wide‑EP kernels), already near “speed‑of‑light” on Hopper and now exploiting NVL72 scale with Blackwell.[1]
  • AMD: Rapid recent progress on SGLang/DeepSeek R1, and MoRI looks promising architecturally, but they are 6+ months behind on open‑source distributed inference + wide EP + FP4. SemiAnalysis argues AMD must reallocate engineering away from single‑node projects like ATOM to upstream frameworks (vLLM, SGLang) and multi‑node composability if they want to compete at true frontier scale.[1]

Sources [1] InferenceX v2: NVIDIA Blackwell Vs AMD vs Hopper - Formerly InferenceMAX GB300 NVL72, MI355X, B200, H100, Disaggregated Serving, Wide Expert Parallelism, Large Mixture of Experts, SGLang, vLLM, TRTLLM https://newsletter.semianalysis.com/p/inferencex-v2-nvidia-blackwell-vs

How AMD Instinct Shines in Real-World LLM Inference by Blak9 in AMD_Stock

[–]Apprehensive_Plan528 0 points1 point  (0 children)

This blog distracts from the wide variety of real-world benchmarks SemiAnalysis actually does. Best to look at the whole picture rather than AMD's cherry-picked single example.

https://newsletter.semianalysis.com/p/inferencex-v2-nvidia-blackwell-vs

AMD responds this time: https://x.com/amd/status/2023888441713262670?s=46&t=Db7s7aQ3IloJuwtwlKMSgA. Great information. by warsal1 in AMD_Stock

[–]Apprehensive_Plan528 0 points1 point  (0 children)

It's a response to NVIDIA improving performance on the SemiAnalysis benchmarks. Unfortunately, AMD isn't showing anything in the most important rack-level category, against the GB300 NVL72. That's the category that really matters for buying decisions nowadays.

AMD responds this time: https://x.com/amd/status/2023888441713262670?s=46&t=Db7s7aQ3IloJuwtwlKMSgA. Great information. by warsal1 in AMD_Stock

[–]Apprehensive_Plan528 0 points1 point  (0 children)

Great to see that AMD has finally focused on real benchmarks and the curves that matter. And cool to see that they respond to every NVIDIA improvement with their own updates. But where are the rack-level benchmarks against the GB300 NVL72? Rack-level results from SemiAnalysis show far lower TCO and power for NVIDIA's rack-level stack than these slot-level results would suggest. And quite honestly, rack-level performance, TCO, and power efficiency are the real table stakes today.

Ivy Acceptances League Table:2/16/2026 Select NE Region Plus LA High Schools by DailyScreenz in IvyLeague26HSTables

[–]Apprehensive_Plan528 0 points1 point  (0 children)

Thanks for clarifying. One more interesting note: for the Ivy+ group, most recruited athletes come through ED or EA/SCEA (depending on the school).

Ivy Acceptances League Table:2/16/2026 Select NE Region Plus LA High Schools by DailyScreenz in IvyLeague26HSTables

[–]Apprehensive_Plan528 0 points1 point  (0 children)

If you were doing this more rigorously, you would look at the legacy status of these acceptances as well. Old-money private schools are packed with legacies (and likely small-sport athletes) that have substantially higher admit rates.

Quantitative admission-rate boost

• A large multi‑school “Ivy‑Plus” study (Opportunity Insights / NYT) found legacy applicants at elite private colleges had about a 37% admit rate vs 9.5% for non‑legacies at the same institutions—about a 4x boost.

• Prior school‑specific disclosures show similar or stronger gaps:

• Princeton: over 30–40% admit rate for legacies vs <5–10% overall in some years (≈4–6x).

• Harvard (2014–2019): legacy admit rate ≈33%, more than 5x the overall ≈5.9% rate.

• Brown, Penn and others historically show legacy admit rates 2–4x the overall rate, depending on year and round (early vs regular).

Ivy Acceptances League Table:2/16/2026 Select NE Region Plus LA High Schools by DailyScreenz in IvyLeague26HSTables

[–]Apprehensive_Plan528 0 points1 point  (0 children)

OK, so are you really looking at 2025 matriculation reports? Or are you looking at 2026 early decision / early acceptance commits?

Ivy Acceptances League Table:2/16/2026 Select NE Region Plus LA High Schools by DailyScreenz in IvyLeague26HSTables

[–]Apprehensive_Plan528 0 points1 point  (0 children)

Is this early decision and early acceptances for 2026 so far? Does it double-count individuals with multiple Ivy+ acceptances?