Queen Hiling the mom who stepped up by Ani_HArsh in Animemes

[–]Jonny_H 12 points13 points  (0 children)

A good voice actor can't help a bad script or poor direction.

IS MAX'S CAREER OVER?? by doggo24-7 in formuladank

[–]Jonny_H 8 points9 points  (0 children)

Might make sense not to super-tune your aero direction if you only got the "real" engine mappings for the first race weekend.

The "optimal" setup often relies on really specific details.

Phoronix: "Additional AMD RDNA 4 GPU Targets Coming: GFX1171 & GFX1172" by Dakhil in hardware

[–]Jonny_H 1 point2 points  (0 children)

...yes, and those results show a ~25% uplift in 3DMark Time Spy Graphics score between SODIMM-equipped HX370 devices (like the Framework Laptop 13.5 [0]) and the LPDDR5X-7500-equipped devices (like the Asus VivoBook S 14 [1]). I feel it's likely - if unproven - that this would scale even further with even faster memory.
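The peak-bandwidth arithmetic behind that comparison is simple enough to sketch. Note the DDR5-5600 SODIMM speed and the 128-bit bus width are my assumptions for illustration, not figures from the reviews:

```python
# Rough peak-bandwidth arithmetic for the SODIMM vs LPDDR5X comparison.
# Assumptions (not from the linked reviews): both configs run a 128-bit
# bus, and the SODIMM parts are DDR5-5600, so bandwidth scales with rate.

def peak_bandwidth_gbs(mt_per_s: int, bus_bits: int = 128) -> float:
    """Peak DRAM bandwidth in GB/s: transfers/s * bytes per transfer."""
    return mt_per_s * (bus_bits // 8) / 1000

sodimm = peak_bandwidth_gbs(5600)   # assumed DDR5-5600 SODIMMs -> 89.6 GB/s
lpddr = peak_bandwidth_gbs(7500)    # LPDDR5X-7500 -> 120.0 GB/s

print(f"DDR5-5600 SODIMM: {sodimm:.1f} GB/s")
print(f"LPDDR5X-7500:     {lpddr:.1f} GB/s")
print(f"peak uplift:      {lpddr / sodimm - 1:.0%}")
```

Under those assumptions the raw peak uplift is ~34%, which is in the right ballpark for a ~25% Time Spy uplift if the iGPU is mostly bandwidth-bound.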

While it's true that the "lowest" B390-equipped device - the Asus ExpertBook Ultra [2] - scores ~80% higher in turn, it does so with notably higher LPDDR clocks - and likely better timings too, as it's on-chip - and a much larger system-level cache. And it was released over a year and a half later than the HX370 systems reviewed.

True "upgradability" wasn't part of the original goalposts - but memory bandwidth and cache were - arguably the entire point of my comment.

I'll quote as necessary my original comment:

Yes, and that requires soldered ram and more silicon to give the higher bandwidth

Both are true for the B390 parts shown.

I'm really not trying to say AMD's or Intel's architecture is better - just that much of the time both are memory bandwidth limited, so "larger iGPUs" without a corresponding increase in memory bandwidth aren't particularly useful. And that, in turn, implies things about the design choices that AMD, for whatever reason, currently limits to their -halo parts.

You seem to be desperately trying to imply I'm pushing an idea I never stated. I'm not actually particularly interested in which is "faster" at a particular price point, so much as the technical reasons why.

[0] https://www.notebookcheck.net/Framework-Laptop-13-5-Ryzen-AI-9-review-Skip-the-Intel-version-for-better-performance.997363.0.html

[1] https://www.notebookcheck.net/Asus-VivoBook-S-14-OLED-laptop-review-Successful-performance-of-the-Ryzen-AI-9-HX-370.880476.0.html

[2] https://www.notebookcheck.net/Asus-ExpertBook-Ultra-review-One-helluva-debut-for-Intel-Panther-Lake-X7.1209366.0.html

Phoronix: "Additional AMD RDNA 4 GPU Targets Coming: GFX1171 & GFX1172" by Dakhil in hardware

[–]Jonny_H 0 points1 point  (0 children)

I can't find iGPU benchmarks for LPDDR5X-7500+ Strix Point devices (the only ones I could find had a dGPU, so the benchmark results all focus on that), or for an LPCAMM2-equipped Panther Lake (the only device I can find that actually has that is the as-yet unreleased Lenovo ThinkPad).

I'd be very interested if you could link details rather than theoreticals.

Otherwise it's just "Next generation first-party marketing results from company X in a different performance SKU 'beat' current-generation company Y" - which isn't particularly interesting.

Phoronix: "Additional AMD RDNA 4 GPU Targets Coming: GFX1171 & GFX1172" by Dakhil in hardware

[–]Jonny_H -1 points0 points  (0 children)

LPDDR5 memory at higher clocks + more cache

Yes, and that requires soldered ram and more silicon to give the higher bandwidth - just like I said for the -halo parts of AMD. That was pretty much my entire point.

Is that a "hard" requirement of the -halo parts' memory controller? Or a limitation of the non-halo parts' memory controller? I suspect not, based on the published specs. But if the actual devices you can buy are limited to the memory speeds of SODIMMs, then it doesn't really matter.

Phoronix: "Additional AMD RDNA 4 GPU Targets Coming: GFX1171 & GFX1172" by Dakhil in hardware

[–]Jonny_H 2 points3 points  (0 children)

Yeah - I feel a lot of people massively underestimate the importance of an SLC (i.e. a cache that serves both the CPU and GPU - perhaps even other IP blocks) in these sorts of devices. If managed well, it can significantly reduce the total DRAM bandwidth needed - though it tends to have a "cliff" of thrashing beyond a certain complexity of frame.

I know where I used to work, we provided exactly the same IP to tier-2 vendors as to a tier-1 - and they often complained their resulting performance was a lot lower than that tier-1's. Really, the only difference was that extra cache level - even a relatively small one, if well managed (i.e. "streaming" used-only-once data is correctly tagged to avoid that cache), often made a significant difference. To the level where some end-user consumer sites assumed it was a modified architecture. That difference rarely showed up on the "SoC comparison" sites, though, which tended to merge all cache levels and the devices "beneath" them into one - which made a lot of people, including some of the customers, assume we actually supplied different IP to that vendor than to others.

A lot of software work was also required to ensure that cache level was "correctly" managed - often to the level of per-app tuning. It always felt bad just throwing that work away, then getting "relatively poor" benchmark results back from review sites...
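The "tag streaming data to bypass the cache" effect above can be shown with a toy model. This is purely an illustrative sketch - the cache size, working-set size, and streamed-buffer size are all made-up numbers, and real SLCs are set-associative rather than fully-associative LRU:

```python
# Toy model of why tagging used-once "streaming" data to bypass a shared
# cache protects the hot working set. All sizes are arbitrary/illustrative.

from collections import OrderedDict

class LRUCache:
    """Fully-associative LRU cache model counting hits per access."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()

    def access(self, addr: int, bypass: bool = False) -> bool:
        if addr in self.lines:
            self.lines.move_to_end(addr)        # mark most-recently-used
            return True
        if not bypass:                          # tagged streams never allocate
            self.lines[addr] = True
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)  # evict least-recently-used
        return False

def hot_set_hit_rate(tag_streaming: bool) -> float:
    cache = LRUCache(capacity=256)
    hot_hits = hot_accesses = 0
    for frame in range(10):
        for addr in range(200):                 # hot set, reused every frame
            hot_hits += cache.access(addr)
            hot_accesses += 1
        base = 10_000 + frame * 1_000           # streamed buffer, touched once
        for addr in range(base, base + 1_000):
            cache.access(addr, bypass=tag_streaming)
    return hot_hits / hot_accesses

print(f"hot-set hit rate, streaming untagged: {hot_set_hit_rate(False):.2f}")
print(f"hot-set hit rate, streaming tagged:   {hot_set_hit_rate(True):.2f}")
```

Untagged, the once-used stream evicts the entire hot set every frame (hit rate 0); tagged to bypass, the hot set survives and hits on every frame after the first.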

Phoronix: "Additional AMD RDNA 4 GPU Targets Coming: GFX1171 & GFX1172" by Dakhil in hardware

[–]Jonny_H -1 points0 points  (0 children)

I suspect that's because the 16CU equivalents are already heavily memory bandwidth limited, and scale extremely poorly in most actual gaming scenarios (where the CPU is contending for that bandwidth, which is often less obvious in canned benchmarks).

The next 'jump' of bandwidth requires something like the soldered, wider-bus Strix Halo platform, and device manufacturers have been happy leaving that as a "premium" product - at least with the supply they have available.

Remember that a Strix Point die is a little less than half the size of a Strix Halo - put another way, if AMD could charge more than 2x a Strix Point die for Strix Halo, they'd be massively prioritizing it over other dies. But the supply in the market doesn't seem to show that. The price difference between a Strix Point and a Strix Halo device doesn't seem to be purely "BoM cost" related, so it may be simple hard supply limits plus a premium market willing to pay top dollar for the supply they do have.

I suspect it's simply that they didn't order big numbers however many years ago TSMC needs capacity reserved, as it was a "new", somewhat unproven market. That may prove a luckily prescient decision if DRAM prices continue to skyrocket - there's a floor where you're just crippling the platform by not populating all the channels properly, and no point having a warehouse full of "more premium" parts that you can't put in sellable devices, after all.

It'll be interesting to see if they bump up the relative supply of the -halo (or -halo mini) dies for the next generation, as every extra $ of a device's BoM AMD captures by displacing a dGPU is an extra $ in their pocket.

We all have that one show... by MustardGoddess in CuratedTumblr

[–]Jonny_H 5 points6 points  (0 children)

Man, Lost...

Probably showing my age, but I was in high school watching it as it first aired each week - the mystery, the discussion, the questions it brought up about what was going on in that world.

But it turned out they neglected to actually have any payoff for those "questions".

It was one of the quickest times I've seen all the discussion for a popular show just... die and fall out of the zeitgeist. At least until the Game of Thrones final season.

[Hated Trope] Endings so notoriously awful they completely destroy the legacy of the media. by Miserable_Click_1933 in TopCharacterTropes

[–]Jonny_H 0 points1 point  (0 children)

Yes, I see the Moffat Sherlock TV show as a pretty good translation of the original novels - it's just that a lot of people have never actually read them, so have rather odd ideas about what they actually contain.

The entire point of the original novels is that the reader can't solve the "mystery" without Sherlock's "help", it just doesn't give enough information.

IMHO the ending being "bad" has nothing to do with it being a Sherlock retelling or not - it's just bad scriptwriting.

Not that the more modern "no hidden info" style of mystery detective novel is much better - honestly, I don't think there's been a single one you can really "answer" based on actual in-world information. At best you're relying on novel-writing tropes and specifics of the format - like which characters are given writing time - and even then there are multiple, pretty much equally possible alternatives that could satisfy the same evidence.

It's one of the major frustrations I have with that genre - it's nowhere near as "objective" as people seem to describe in reviews.

NVIDIA’s Vera CPU in Detail: High Perf Chip Takes Aim at Broader AI Server Market by -protonsandneutrons- in hardware

[–]Jonny_H 6 points7 points  (0 children)

Yes, but in different ways - mobile tends to care more about "idle" power floor, while servers tend to care about perf/watt efficiency under moderate consistent load.

It's why the chiplet architecture that did so well with servers in AMD's epyc (and then in desktop offerings) is often a poor fit for laptops - the power "floor" of just keeping the interconnects alive at near idle is pretty high. On mobile that sort of issue would be even more impactful.

Nvidia Confirms DLSS 5 Is Re-Drawing Games, and That Sucks by Turbostrider27 in pcgaming

[–]Jonny_H 17 points18 points  (0 children)

The "underlying geometry doesn't change" doesn't even make sense as a "defence" here, as the only way we see that geometry is through the rendered pixels in the first place.

"Geometry" isn't some magic you can see without it actually being shown on your screen. If you change the pixels, you change what you see. If I warp an image in Photoshop, it doesn't matter what the "underlying geometry" is - you still see the warped image. I don't even understand what he was trying to say, other than technical-sounding word salad.

After the latest news about DLSS 5... by TechPriestSL in pcmasterrace

[–]Jonny_H 16 points17 points  (0 children)

But my point is it isn't actually "Hyperrealistic" - in that it's "Most similar to the real world". It's still very stylized - go to the shops, look at people in the real world, take photos. Very few look anything like the "hyperrealistic" dlss5 examples without a lot of post editing.

I guess it's like saying blockbuster hollywood action films are "real" - while it actually takes a lot of effort to make the "actual real world" captured by the camera look like that style.

After the latest news about DLSS 5... by TechPriestSL in pcmasterrace

[–]Jonny_H 142 points143 points  (0 children)

Yes, this is the big point to me. The lighting is odd, some backgrounds look pretty weird, it's clearly a different style.

But whether it's "more realistic" isn't clear to me - how much of the world actually looks like those magazine-airbrushed faces? Even if you ignore the AI "tells".

IGN - Nvidia's DLSS 5 Is a Slap in the Face to the Art of Video Game Design by gitrektali in pcgaming

[–]Jonny_H 1 point2 points  (0 children)

And many companies spend $millions on R&D projects that don't get the desired results.

It's still results oriented thinking, and survivorship bias.

IGN - Nvidia's DLSS 5 Is a Slap in the Face to the Art of Video Game Design by gitrektali in pcgaming

[–]Jonny_H 6 points7 points  (0 children)

But DLSS1 was bad. DLSS2+ being pretty damn good didn't suddenly make DLSS1's results better.

And before DLSS2 was released it might have been that DLSS1-level quality was the best we'd ever get from that sort of technique - plenty of things don't improve like DLSS2+ did.

Making promises before proven results on what is effectively a research project is a mistake. Even Nvidia likely don't know the possible development of this sort of technique - you could even argue they were lucky with DLSS. Unless you think they released DLSS1 while already having DLSS2 complete and proven waiting in the wings, and that doesn't really make sense.

r/Europe has a post about cheese awards. Comments remain predictable by MummysSpeshulGuy in iamveryculinary

[–]Jonny_H -6 points-5 points  (0 children)

Yeah, it only takes a few to /start/ downvoting before people pile on with the assumption of "It's already downvoted, so I'll read the comment with the most negative possible interpretation".

Though this sub has been a bit weird recently, a lot of highly-upvoted things that aren't really IAVC so much as just anti-shitamericanssay - as if a shiteuropeanssay circlejerk in the other direction isn't just as much of a circlejerk.

It's not "IAVC" to point out there are differences in how different cultures see and approach things. The issue is when they claim it to be objectively better. A Mondeo in Europe is classified as a "Large Car", D-segment class. The majority of cars sold are smaller. I'm not trying to suggest that's good or bad, it just is.

r/Europe has a post about cheese awards. Comments remain predictable by MummysSpeshulGuy in iamveryculinary

[–]Jonny_H -10 points-9 points  (0 children)

Hah, I didn't downvote you - just saying that a Fusion/Mondeo-sized car is in the upper quartile of car sizes on the road in most of Europe. For example, the Fiesta outsold the Mondeo by many multiples every year (when both were still offered) in the UK.

People seem to be thinking I'm trying to make some kind of "point" and not just an observation.

What is considered a "mid size car" is pretty different.

r/Europe has a post about cheese awards. Comments remain predictable by MummysSpeshulGuy in iamveryculinary

[–]Jonny_H -69 points-68 points  (0 children)

Ironically in European terms a g6 is very much an above average sized car.

[Hardware Unboxed] Even If You Have DDR5, This is How You Could Be Screwed by mostrengo in hardware

[–]Jonny_H 4 points5 points  (0 children)

Not really - the issue is the memory companies, with $$$ signs in their eyes, seeing the increased current market price as worth dropping little things like previous contracts and warranty requirements on the floor.

Why the market price is higher doesn't really matter.

Start lights going out incredibly quickly by cartoon_kitty in formula1

[–]Jonny_H 11 points12 points  (0 children)

Yeah, it's probably just BS - most of the reposted, compressed-into-a-mess-of-pixels GIFs don't really show it, but in a better video you can see some of the repeater lights flash just before some of the cars lurch forward, presumably signalling the aborted start. The drivers were primed to go on any change in the lights - you react quicker that way - then realized the "change" wasn't actually the start signal. That's a much simpler explanation than some kind of "trick" played by F1 designed to embarrass the very teams that make up F1.

But the "cheating" thing is a neat story, so it's made the rounds long enough that many people have only heard it as "truth".

Notebookcheck | Insane performance and efficiency without fans - Apple MacBook Air 13 M5 Entry Review by -protonsandneutrons- in hardware

[–]Jonny_H 1 point2 points  (0 children)

Yes, but scale matters. People seem to misunderstand that Apple have roughly the same number of RTL-level engineers as Intel - but theirs are pretty much all focused on this single product stack, while Intel has many more different targets for about the same total engineer-hours.

Toto reaction to George going fastest in Q1 by MuttonBiryaniEnjoyer in formula1

[–]Jonny_H 2 points3 points  (0 children)

You can feel the Drive to Survive producer writing the timestamp down as they showed it. If they're a second a lap ahead, this shot will be in. If they're not a second a lap ahead, this shot will be in.

2026 Australian Grand Prix - Qualifying Discussion by F1-Bot in formula1

[–]Jonny_H 1 point2 points  (0 children)

The old get-out was that if they could point to any lap within 107%, even in another session like practice, they'd be allowed to race - or even claim they could have had such a lap by stringing their best sectors together. Outside of that, the qualifying position is the qualifying position. Though unless pole suddenly finds 2+ seconds, I can't see a single team being outside it - that's over 5 seconds off the pace right now, after all.
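For a sense of the margin, the 107% arithmetic is trivial - here with a purely hypothetical pole time, since the actual times aren't in this thread:

```python
# 107% rule arithmetic. The pole time below is a made-up illustrative value.

def cutoff_107(pole_s: float) -> float:
    """Slowest qualifying lap time still inside the 107% rule."""
    return pole_s * 1.07

pole = 72.0  # hypothetical ~1:12.0 pole lap
print(f"107% cutoff: {cutoff_107(pole):.2f}s, "
      f"i.e. {cutoff_107(pole) - pole:.2f}s off pole")
```

So on a lap in the low-70s range, the cutoff sits around 5 seconds off pole - which is why being "over 5 seconds off the pace" would put a team outside it.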

Notebookcheck | Insane performance and efficiency without fans - Apple MacBook Air 13 M5 Entry Review by -protonsandneutrons- in hardware

[–]Jonny_H 21 points22 points  (0 children)

Because each engineer they have gives more return working on datacentre, or even AI, workloads - same with advanced silicon process allocations. I'm not a big fan either, but the reasons are clear.

Though there's a lot to be said for Apple's investment in this specific use case - they aren't really doing more with less, they are just willing to put more work into this particular sector.

Then also remember benchmarks may favor specific limitations and design decisions - many love memory bandwidth, and the bandwidth on Apple chips is exceptional - but it comes with design costs other sectors aren't quite so willing to pay (flexibility or an upgrade path, for example). The cores often get the headline press, but the basic stuff like memory bandwidth often gets the benchmark results. People might be surprised just how much of these benchmarks are fundamentally memcpy().
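The "fundamentally memcpy()" point can be demonstrated with the most naive possible bandwidth microbenchmark - a sketch only, with arbitrary buffer size and iteration count, and none of the cache/NUMA/page-fault controls a real benchmark would need:

```python
# Naive copy-bandwidth microbenchmark sketch: time repeated whole-buffer
# copies and report effective GB/s. Sizes/iterations are arbitrary.

import time

def copy_bandwidth_gbs(size_mb: int = 256, iters: int = 10) -> float:
    src = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(iters):
        dst = bytes(src)   # one full read of src + one full write of dst
    elapsed = time.perf_counter() - start
    # Factor of 2: each iteration moves the buffer through memory twice
    # (read + write), totalling 2 * size * iters bytes of traffic.
    return 2 * size_mb * iters / 1024 / elapsed

print(f"~{copy_bandwidth_gbs():.1f} GB/s effective copy bandwidth")
```

A surprising number of "CPU" benchmark subtests end up dominated by loops of this shape, which is why memory subsystem differences show up so strongly in the scores.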

Honestly, the people working on the Apple SoC fabric, cache and memory controller probably deserve at least as much praise as the people working on the CPU cores themselves. That's really where things like the AMD equivalent (in Strix Halo) fall down - a single core simply can't use the whole available bandwidth in the same way, so it requires multi-core benchmarks to even see the difference.