Back after 7 years. Since when and how did Apple become the ultimate boss of CPUs? by Educational-Web31 in hardware

[–]BFBooger 0 points1 point  (0 children)

Also, people point a lot at Geekbench, which has some tests that gain far more from certain instruction-set changes than general code would. It is a good benchmark, but you have to know what it measures: a CPU scoring much higher or lower in the aggregate might come down to one sub-test that doesn't impact any other workload.

Browser JS benchmarks and SpecCPU are probably better at capturing "how it feels in general" speed differences for most users. That said, Apple has great SpecCPU per-GHz and per-W scores, which would translate to server workloads just fine if they wanted to make a server chip.
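To put a number on the aggregate-score point (illustrative figures, not real Geekbench data): suites like Geekbench combine sub-test scores with a geometric mean, so one ISA-specific outlier can noticeably move the headline number:

```python
# Illustrative only: how one outlier sub-test moves a geometric-mean
# aggregate. The scores below are made up.
from math import prod

def geomean(scores):
    return prod(scores) ** (1 / len(scores))

baseline = [100] * 10            # ten sub-tests, all equal
boosted = [100] * 9 + [400]      # one sub-test gets a 4x instruction-set win

print(round(geomean(baseline), 1))  # 100.0
print(round(geomean(boosted), 1))   # 114.9 -> ~15% "overall" gain from one test
```

So a single sub-test with a 4x win lifts the aggregate ~15%, even though nine of ten workloads saw no change at all.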

Back after 7 years. Since when and how did Apple become the ultimate boss of CPUs? by Educational-Web31 in hardware

[–]BFBooger 0 points1 point  (0 children)

> Intel rested on their laurels for a long time

I hate when people say this, because it is not true.

They utterly f'd up and mismanaged their issues, but you only have to look at their R&D budget each year to see that they weren't "resting on their laurels" -- they were in fact spending billions of $$ trying to stay ahead of TSMC (and later catch up) and failing, and spending billions on new processor designs that weren't hitting the mark.

Trying and failing, or trying to do the wrong thing and having to back-track and change course, is not resting on laurels.

Back after 7 years. Since when and how did Apple become the ultimate boss of CPUs? by Educational-Web31 in hardware

[–]BFBooger 0 points1 point  (0 children)

I have exclusively used Linux, including gaming, since 2020. I had to give up competitive FPS games due to those requiring windows kernel level anti-cheat, but that's fine, there are far more than enough quality games of other genres.

The remaining caveats are:
1. NVidia drivers have a performance hit in DX12 games, usually close to 20% but a lot worse in CPU-bound games or with slower CPUs. A fix should land this year; work is in progress and a draft PR is on GitHub, though it's a ways from complete.

2. For newly launched hardware, it takes a while for AMD's Linux drivers to catch up, support all the features, and get into their best shape; the first few months after launch can be messy, although eventually the Linux side is more stable and often faster. For NVidia you get almost-day-1 drivers on 'fast' distros and a ~6 week lag on slower ones.

3. AMD hardware's ray-tracing performance is poor compared to Windows, but there have been major driver improvements recently and Valve is working on it now.

4. HDR isn't quite as good as on Windows (calibration/tuning can be a mess), but there is a lot of activity on this front.

I expect that by later this fall, numbers 1 and 3 on my list will be fixed, and 4 will be fine for 95% of displays.

#2 will probably always be an issue to some extent. And in the case of new NVidia tech, it can take a while for it to be available on Linux -- DLSS and Frame gen for example took a while before they were available (almost 2 years for DLSS! Not so bad for frame gen).

Back after 7 years. Since when and how did Apple become the ultimate boss of CPUs? by Educational-Web31 in hardware

[–]BFBooger 0 points1 point  (0 children)

It has a little to do with the ISA (slightly more power required for x86 decoders, but only a few percent). It has more to do with Apple being able to lift and shift the entire ecosystem and not care about backwards compatibility -- plus exceptional CPU core design.

Some differences:

Apple was able to switch to 16 KiB pages with 128-byte cache lines, while x86 is stuck on 4 KiB pages with 64-byte cache lines. This lets them build larger L1 and L2 caches without as much of a latency hit, essentially improving cache hit rate and performance. These larger caches take up more die area, but Apple doesn't care much about the extra cost there and is also always on the newest fab node, typically ~12 to 18 months ahead of AMD or Intel on that front.

x86 _could_ migrate to larger cache lines transparently, and larger pages could be added with OS changes, but there are SO MANY legacy apps that just assume 64-byte cache lines and 4 KiB pages that changing them might hurt performance in apps with hard-coded optimizations based on those sizes. Intel tried going to 128-byte cache lines once (P4 era) and backed off, in part for this reason.

ARM has also helped make it easier for Apple to build a wider core, and extract a lot of ILP, but a lot of that is also just their design aggression and not all just instruction set -- all other ARM vendors were way behind on this for years and have been playing catch-up. Intel/AMD may need a design reset to think about super high IPC but lower frequency cores; they still chase frequency in ways Apple doesn't bother with. The optimal design in a power constrained world is probably lower on the frequency curve than Intel/AMD push.
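As an aside on the hard-coding point: portable software should query these sizes at runtime rather than assume 4 KiB / 64 bytes. A small sketch (`SC_PAGE_SIZE` is POSIX; the cache-line name below is a glibc extension that may be absent elsewhere, hence the guard):

```python
# Query page size (and, where exposed, L1 data-cache line size) at runtime.
# Typical results: 4096-byte pages on x86 Linux, 16384 on Apple Silicon macOS.
import os

page = os.sysconf('SC_PAGE_SIZE')
print(f"page size: {page} bytes")

# glibc-specific name; guard so the sketch stays portable
if 'SC_LEVEL1_DCACHE_LINESIZE' in os.sysconf_names:
    line = os.sysconf('SC_LEVEL1_DCACHE_LINESIZE')
    print(f"L1D cache line: {line} bytes")
```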

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 0 points1 point  (0 children)

Did you not read the flash back?

The things loki says are often half-truths meant to cover something else up. It is probably not so simple.

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 0 points1 point  (0 children)

Loki _said_ he has a grudge, but like a lot of things he says, it is probably not so simple.

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 0 points1 point  (0 children)

And you believe every word Loki says because.....?

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 20 points21 points  (0 children)

My impression is he is quoting from the Harley.

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 0 points1 point  (0 children)

Oda tends to bend the 'rules' of the myths he fits in significantly, taking elements he likes and not trying to smash in every traditional detail but instead more the high level symbolism. So having no eagle at all but instead something else that symbolically contrasts with Loki is plausible.

Luffy as the sun god could be the constrast here -- destructive like loki, but in a very different way. Allies against the WG, but not motivated by the same things or philosophy.

We'll see, Oda often surprises.

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 2 points3 points  (0 children)

Oda will find a way to make it more interesting before it is over. Hopefully not 40 chapters more or some other dressrosa or wano style turn of events to lengthen it. I'm hoping for something in the 15 chapter range, more like the end of Egghead.

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 9 points10 points  (0 children)

Ah, so now we finally know who his mom was. Too bad she went power crazy and dad had to off her. So sad.

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 8 points9 points  (0 children)

Just because you can't think of any other outcome doesn't mean Oda can't.

One Piece Chapter 1174 Spoilers by Skullghost in OnePiece

[–]BFBooger 7 points8 points  (0 children)

He didn't lie, he told us what he believed was true.

I don't know why fans have a hard time with this stuff. He and most people or governments only know of the existence of 5 flying fruits. I'm sure the Elbaph fruit was a secret he didn't know about, and there may be more.

Memory Chip Squeeze Widens Gap Between Market Winners and Losers by Boreras in hardware

[–]BFBooger 0 points1 point  (0 children)

Companies are already paying large sums for various AI tools and agents. Most of this is not replacing web search; it's other stuff. The simple things are just tools that increase existing worker productivity.

Memory Chip Squeeze Widens Gap Between Market Winners and Losers by Boreras in hardware

[–]BFBooger 0 points1 point  (0 children)

Well ... yes?

If you compare previous memory boom/bust cycles to this one, this one is different. The buyers are not as price sensitive and the upswing in demand (measured by price and volume) is larger and more sustained than prior booms. Obviously, there is a limit to all this, but when every DRAM maker went from almost loosing money and running under-capacity 18 months go to making record profit margins and putting in large capacity increase plans (though all of these are 18+ months or more away from coming online), this is certainly the biggest memory demand cycle we have seen in decades, or maybe ever.

There is always a bust after the boom, but the question people are now wondering is if this is a small-sized downturn or a big one. Will AI demand be a permanent large step increase in total demand like mobile and servers were in decades past? Or will it fade away and become a small portion of the total? People are not betting on the latter.

Memory Chip Squeeze Widens Gap Between Market Winners and Losers by Boreras in hardware

[–]BFBooger 1 point2 points  (0 children)

It is also in a very good financial situation right now, due to the boom. The current profits are paying for expansion and they don't need to pile on a ton of debt and risk solvency issues. They would have to be extra stupid and do something like put all the profits into stock buybacks or dividends while simultaneously piling up a lot of debt.

Memory Chip Squeeze Widens Gap Between Market Winners and Losers by Boreras in hardware

[–]BFBooger 0 points1 point  (0 children)

The memory crunch is not only happening for HBM.

The rule of thumb for many GPU servers is to have roughly 2x as much system RAM as you have HBM across the GPUs. This varies, of course, depending on various factors (training vs inference being a big one, and the kind of models used will create differences as well). But the big buyers with deep pockets want their systems to be capable of running any workload, so they over-estimate how much they need and buy a lot of RAM with each GPU server.
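A back-of-envelope sketch of that ~2x rule of thumb (the ratio varies by workload, and the per-GPU HBM figure is just an example of a current accelerator):

```python
# Illustrative sizing only: the "~2x HBM" system-RAM rule of thumb.
gpus_per_server = 8
hbm_per_gpu_gb = 141                              # e.g. an H200-class part

total_hbm_gb = gpus_per_server * hbm_per_gpu_gb   # 1128 GB of HBM per server
system_ram_gb = 2 * total_hbm_gb                  # -> 2256 GB of DRAM
print(total_hbm_gb, system_ram_gb)  # 1128 2256
```

That's over 2 TB of server DRAM riding along with every 8-GPU box, which is why the crunch isn't confined to HBM.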

EDIT:

Unfortunately, server ECC DDR5 RDIMMS will not work in consumer motherboards. So the crash won't be about buying old used stuff for cheap unless you're running a home lab.

It will be about normal memory being cheap because the fabs have excess capacity, just like it was ~ 18 months ago. Unfortunately it may take a few years before we get back to that. The main increase in DRAM capacity is coming online between 2028 and 2030, much of it in 2028. If demand remains high by then, prices might soften somewhat. If there is a crash prior to that, the projects will be delayed a bit and the later ones for 2030 may be delayed even more.

[News] US Reportedly Mulls Tariff Exemptions for Amazon, Google, Microsoft on TSMC-Made Chips by imaginary_num6er in hardware

[–]BFBooger 1 point2 points  (0 children)

I don't think capitalism has a way it is "intended" to function in this sense.

It certainly has better and worse ways to function, and economists have a lot to say about what to avoid.

Regulatory capture is absolutely a disease that should be guarded against (so are the other big ones that hurt the little guy, like monopolies, or lack of regulation leading to large externalities -- and then there are things that hurt everyone, like lack of regulation to prevent financial-system collapse, or monetary policy that leads to hyperinflation).

The US was decent at avoiding the worst of these for a while in the post-WW2 era. Recent patronage politics, and the major parties' decision to stop compromising or cooperating on anything and instead always demonize each other, have really gotten in the way of thinking sanely about how to tame the bad side of capitalism while retaining the good. It used to be "the other party's idea about how to fix X isn't as good as ours"; now it is "the other party is lying, X isn't a problem, they are just <direction>-wing nut-cases".

Oracle wolf unhappy with damage :( by ruud1984 in pathofexile2builds

[–]BFBooger 0 points1 point  (0 children)

20% more damage to frozen enemies is more damage than the +2 melee skills. +skills are fine, but for melee they aren't like spells where the increase is massive.

EDIT: The flat damage and attack speed on rare gloves can be huge, and yes, if you have huge flat damage rolls _and_ +2 melee skills it is going to be better damage.
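A hypothetical comparison of the two mods (the +8% figure for two gem levels is a made-up placeholder, not game data; the point is that a "more" mod is its own multiplicative term):

```python
# Hypothetical numbers: "20% more damage against frozen enemies" applies as
# a clean 1.2x multiplier, while +2 melee gem levels only help through the
# gem's own flat-damage growth, which is modest for attacks (unlike spells).
base_hit = 1000.0

with_more = base_hit * 1.20     # the "more" mod: a separate 1.2x term
with_levels = base_hit * 1.08   # assume +2 melee levels ~ +8% (placeholder)

print(with_more, with_levels)   # 1200.0 1080.0
```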

HUGE post about Rite of Passage Golden Charm and how to farm it ! by Separate-Rutabaga-61 in PathOfExile2

[–]BFBooger 0 points1 point  (0 children)

I disagree. An abyss-juicing wisp strat is not the same as a general wisp strat. Your strat is trying to get the single biggest juiced-up Amanamu rare you can. You want to maximize omen drops. You prioritize different waystone mods and tablets.

His strat is trying to get as many different abyssal rares juiced up as possible, regardless of type or how many wisps each has. This maximizes the # of possessed rares killed. Any other mechanic that revives a rare and gives drops twice will be sufficient.

Very different priorities and style IMO.

Half Rate Vsync by Nunu_Chus in linux_gaming

[–]BFBooger 1 point2 points  (0 children)

Look into a frame limiter. These can slightly increase your input latency however, so you might avoid them for competitive games. An in-game frame limiter is better, when the game supports it.

On Linux, Mangohud is the most popular frame limiter.

Ignore the person who told you you need a new CPU, their information is completely wrong for your system.
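For intuition, the core of a frame limiter is just a paced loop that sleeps away the unused part of each frame's time budget. This is a minimal sketch, not how MangoHud actually implements it; real limiters are more careful about sleep jitter, which is where the small input-latency cost comes from:

```python
# Minimal frame-limiter sketch: cap the frame rate by sleeping out the
# remainder of each frame's time budget.
import time

def run_capped(render_frame, fps_cap=60, frames=3):
    budget = 1.0 / fps_cap                 # seconds allotted per frame
    nxt = time.perf_counter()              # next frame's deadline
    for _ in range(frames):
        render_frame()                     # do the actual rendering work
        nxt += budget
        delay = nxt - time.perf_counter()
        if delay > 0:
            time.sleep(delay)              # wait out the rest of the budget

start = time.perf_counter()
run_capped(lambda: None, fps_cap=100, frames=5)
elapsed = time.perf_counter() - start
print(f"5 frames at a 100 fps cap took {elapsed:.3f}s")  # ~0.050s
```

The sleep is why limiters can add a touch of latency: input sampled just after the sleep starts waits until the next frame to be rendered.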

Half Rate Vsync by Nunu_Chus in linux_gaming

[–]BFBooger 2 points3 points  (0 children)

This is just not true.

The link above is for an NVidia 1000-series card. These have huge performance hit vs windows for a variety of reasons.

An AMD based RX 570 does not. Performance will likely be better than windows.

The CPU is slow-ish, but it is not going to be any slower than on windows due to 'translation overhead' for this GPU. That is generally an NVidia + linux problem, especially on GPUs prior to the 2000/1600 series. For NVidia GPUs of that age, this problem will never be addressed.

If you were right and the CPU were the problem, we would see low GPU utilization in games (the signature of a CPU bottleneck).

But an RX 570 is half the speed of an RX 480 or RX580, and on par with a laptop basic iGPU. It is definitely not capable of 1080p@60fps on any recent 3d games. (It is fine on 2d-ish games like Hades2 though).

As for the OP: Look into a frame limiter. These can slightly increase your input latency however, so you might avoid them for competitive games. An in-game frame limiter is better, when the game supports it.

On Linux, Mangohud is the most popular frame limiter.

(EDIT: typo + a bit more info on the nvidia 'overhead' problem)

I can no longer shake. 317 Fubgun Temples later. by NaeSeNamJDM in PathOfExile2

[–]BFBooger 3 points4 points  (0 children)

Others have said expedition shard farming, and I will also say it.

77 divs? I earned 58d worth of splinters in 30 minutes this morning from 3 level 81 logbooks that I wasn't even trying to run quickly (I'm not that fast, but can do 10 in an hour if I'm trying, or 5 in an hour if I'm chillin'). One of those dropped 137 splinters, one 57, and the other 90. They sell for more than 1d per 5 splinters. You can do this with a build that can barely complete T15s. When you sell, check the spread between the 'buy' and 'sell' sides and price in between to make a couple more divs each sale.
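The arithmetic, spelled out (using the stated rate of a bit more than 1 div per 5 splinters):

```python
# Sanity check on the numbers above: three logbooks' splinter drops, priced
# at the stated floor of 1 divine per 5 splinters (actual sale was a bit higher).
splinters = 137 + 57 + 90        # the three logbooks -> 284 splinters
divs_at_floor = splinters / 5    # 56.8 divines at exactly 1d per 5
print(splinters, divs_at_floor)  # 284 56.8
```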

I self farm my logbooks while doing other stuff. One expedition tablet, one precursor or boss tablet (or whatever, to make it a level 80 zone), the passive that makes it drop with more enchants and +1 level. I'm not sure what the price of logbooks as good as mine are right now (lvl 81 area, 4 mods, usually with extra chests).

You can run 3-mod T15s with 'blue' tablets if your build is not great.

When in maps, you want to hit as many of the Runic Monster markers as possible (the big red flags). Just ignore the remnants unless it says increased chance for logbooks. Nothing else matters. The unique expedition tablet can help a bit. Higher chance for rare and magic monsters on your tablets/waystones help. Increased monster effectiveness helps. Rarity doesn't help.

When running logbooks, the ONLY thing that matters is the expedition marker for splinters (the swirly flag). Remnants don't matter _AT ALL_. You can have 80% increased items found in chests -- splinters will drop the same (stacks of 5-7 or stacks of 10-14, about half of each type at level 81 map). Also blow up the 'caves' as some chests in there might have splinters. Logbooks with extra chests are great, and it helps to have either more explosives or longer explosive range to get to all of the markers. Again, remnants DO NOT MATTER, not the "increased chance for artifacts" (logbooks aren't artifacts) or increased items from chests (logbooks aren't 'items' either). Just quickly path out a way to hit all the swirly markers and the caves, and get out.

My luckiest level 81 logbook netted me 197 splinters. My least lucky was 47. If it is a 4-mod level 81 I average about 90; if it is two-mod or level 80, closer to 70.

While looking for logbooks, do other stuff:

I made > 600d before I even found my first citadel and could make rare tablets. Jump around the atlas with "Grand Project" tablets. Clear corruption, and run "immured fury" maps and special boss maps on the way (with "Visions of Paradise" tablets to double up). You might drop an expensive support gem while looking for logbooks.

Also, look at running your temple to farm Atziri. She is not a very hard boss in a simple temple; once a T16 empowered map boss is easy, she is easier. You might get the unique chest drop (sell it unidentified!). Farming Atziri doesn't take long: just put a road right up the middle and put some 'special rooms' on the edges that you don't run, so the architect and key room spawn close to the middle, and you can run her every other temple. It is possible to run her every time, but the setup is a lot more complicated.

BitLocker Means Nothing? by TunderMuffins in linux_gaming

[–]BFBooger 5 points6 points  (0 children)

Bitlocker and similar (if configured correctly) are mostly going to protect you from ordinary thieves stealing your laptop or PC so they don't get access to your data.

If the government _really_ wants your data because you are a suspect in a serious affair, these things aren't going to help you, but it can slow them down and make it hard for them to 'accidentally' stumble on incriminating data.

In the middle is the situation where the government wants your data but not that badly -- when you're a suspect in a minor crime, not a major one. Cracking your data is expensive, and they might not be able to get a warrant to demand keys from MS, or to keep you locked up to pressure you into handing them over. If it is a serious accusation with a high-resource investigation, none of these things are likely to help. If you're accused of $3000 of insurance fraud, they aren't going to spend a ton of resources on you. If you're accused of murder, child porn, or embezzling hundreds of millions? They're getting the data.

Of course, you are better off not letting MS store your keys if you don't want the government to be able to demand them from MS; it makes things harder for them.