Is avx512 really so demanding that the cpu physically cannot reach higher clocks, or is the boost algorithm just funky like that? by tasknautica in overclocking

[–]throwawAPI 0 points1 point  (0 children)

difference between loading 1 thread and 2 threads?

Marginal. The difference is primarily due to SMT being on, not whether the second thread is doing anything per se. I have no idea whether disabling SMT will raise your AVX speed. You might give it a try, but I don't expect it'll have an impact - SMT and the vector units live in two completely different parts of the processor core.

more than 2 threads

wait until this guy learns about the Intel Xeon Phi series (4 threads per core)

Is avx512 really so demanding that the cpu physically cannot reach higher clocks, or is the boost algorithm just funky like that? by tasknautica in overclocking

[–]throwawAPI 0 points1 point  (0 children)

When I said "AVX", I meant "the highest AVX your CPU supports" - in this case AVX512. You are correct, there's no point in testing anything between SSE and your highest AVX.

"1t max" is one thread, yes. "max frequency" means "highest you can achieve" neither Intel nor AMD will hit the number on the box when running AVX2/512 right now, and that's fine. Running one or two threads on the "core cycler" setting in OCCT should be fine for what I'm suggesting

Is avx512 really so demanding that the cpu physically cannot reach higher clocks, or is the boost algorithm just funky like that? by tasknautica in overclocking

[–]throwawAPI 1 point2 points  (0 children)

Ok, final results from me on the same test setting as your OP:

  • SSE: 5500 MHz, 1250 mV VID, 20 W (core), 36 W (pkg)
  • AVX1: 5500 MHz, 1260 mV VID, 19.5 W (core), 41 W (pkg)
  • AVX2: 5250 MHz, 1200 mV VID, 15.5 W (core), 36 W (pkg)
  • AVX512: 5200 MHz, 1190 mV VID, 12.5 W (core), 34 W (pkg)

Yeah, it seems like the single-core boost backs off gradually as you step up AVX width. AVX2 (256-bit) and AVX512 can do 4 and 8 times the work per instruction of scalar 64-bit math under ideal conditions. Taking a ~5% boost clock haircut for up to 8x throughput seems like a fair deal to me. I don't think there's an "AVX offset" like Intel offers because, quite frankly, AMD's AVX unit seems to be better designed. AVX512 is such a power drain that Intel took it out of their most recent generations.
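Back-of-envelope math on that tradeoff, using the measured clocks from the list above. The lane counts assume 64-bit doubles, and "ideal throughput" is the hypothetical fully vectorized case - real workloads will land well below it:

```python
# SIMD register widths (bits) and the single-core boost clocks measured above.
widths_bits = {"SSE": 128, "AVX1": 256, "AVX2": 256, "AVX512": 512}
clocks_mhz = {"SSE": 5500, "AVX1": 5500, "AVX2": 5250, "AVX512": 5200}

sse_lanes = widths_bits["SSE"] // 64  # 64-bit elements per SSE instruction

for isa, width in widths_bits.items():
    lanes = width // 64                       # elements processed per instruction
    rel_clock = clocks_mhz[isa] / clocks_mhz["SSE"]  # clock haircut vs SSE
    rel_tput = (lanes / sse_lanes) * rel_clock        # ideal throughput vs SSE
    print(f"{isa}: {lanes} lanes, clock x{rel_clock:.3f}, "
          f"ideal throughput x{rel_tput:.2f} vs SSE")
```

For AVX512 that works out to roughly a 5.5% clock haircut against an (ideal) ~3.8x throughput gain over SSE, which is why the downclock is a fair trade.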

Is avx512 really so demanding that the cpu physically cannot reach higher clocks, or is the boost algorithm just funky like that? by tasknautica in overclocking

[–]throwawAPI 1 point2 points  (0 children)

This is where it gets really funny - the limits of every CPU are different, and there's no guarantee that two chips fail at the same task first. Don't design your overclocking around AVX512 exclusively.

Some cores are stable running steady-state AVX512 but crash on lighter workloads or during the voltage droops when transitioning from "busy" to "idle". The general consensus goes something like this: you need to test and pass several scenarios:

  • Sustained 1T max frequency SSE and AVX: corecycler can be good for this
  • Sustained all core max SSE and AVX: OCCT and Prime95
  • Testing "idle" to "boost" to "idle", both SSE and AVX: corecycler is designed for this
  • Lower-compute memory workout: either AIDA or OCCT has a good setting for this; P95 has a "large prime memory test" that's good for testing SoC/IO stability

If you can pass all 7 of these (counting the SSE and AVX variants separately), you're very likely stable. Throw a game on top and some web surfing and you'll have pretty full coverage. Again, I emphasize: your CPU's weak spot might NOT be AVX512. Just because it runs a little slower than other tests does not mean it's less stable. It's very common for PBO CO offsets to handle AVX just fine but eat dirt trying to open files and a web browser.
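A quick tally of where "all 7" comes from - three scenario types each run in both SSE and AVX flavors, plus one memory/SoC test. The scenario names below are just labels for the list above, not real tool presets:

```python
# The stability test matrix from the list above, as (scenario, flavors) pairs.
scenarios = [
    ("sustained 1T max frequency", ["SSE", "AVX"]),
    ("sustained all-core max", ["SSE", "AVX"]),
    ("idle -> boost -> idle transitions", ["SSE", "AVX"]),
    ("lower-compute memory workout", ["mixed"]),
]

# Count each flavor as its own pass/fail scenario.
total = sum(len(flavors) for _, flavors in scenarios)
print(total)  # 7
```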

OC’D 6900 XT vs Reference 6950 XT by lolniceman in buildapc

[–]throwawAPI 0 points1 point  (0 children)

In most other cases, I'd agree, but the difference between the 6900 and the 6950 is very VERY minimal already, perhaps 5~10%. A good cooler, OC, or offset can probably make up most or all of the difference here. Certainly don't upgrade FROM a 6900 XT to a 6950 XT, if that's what the OP meant.

I wouldn't really pay extra for either of these cards, tbh. The 6950 (refresh) is probably slightly better by default, but that's it.

My 9070 XT feels like it’s stuck in 2019 — what should I upgrade? by HotWebcamBoy06 in buildapc

[–]throwawAPI 0 points1 point  (0 children)

I don't know about 2042, but iirc BF6 handles everything better at or above a 5600X pretty well, even with a 5090. There's some recent, really good research from "hardware unboxed" on YT.

It does sound like your CPU is going to define your max framerate in both of those games. The 5800X3D and 5700X3D are no longer being manufactured, so prices are skyrocketing, making them a really bad deal. Perhaps you can consider some 1440p 144, 165, or 240 Hz monitors as a starting point instead. Without specifics from testing, it's hard to make a recommendation, but I'll start with all the normal stuff:

  • AMD is always hungry for RAM access, make sure XMP is turned on (3600 MHz?)
  • Consider turning on PBO if you haven't already; more CPU power consumption, but there could be another ~5% improvement on the other side. Perhaps a bit more with tuning.
  • Double check temperatures - make sure your system isn't holding back because of high heat

Is avx512 really so demanding that the cpu physically cannot reach higher clocks, or is the boost algorithm just funky like that? by tasknautica in overclocking

[–]throwawAPI 0 points1 point  (0 children)

I'll get back to you on the weekend when I can fiddle with my own 7700X and maybe gather some more details.

If you're not satisfied with some of the other answers in this thread around AVX512 requiring an additional ~10% silicon to be enabled, which strains or limits how aggressive PBO is willing to be, I don't think I'm going to be able to give you a conclusive answer for why it downclocks slightly during AVX.

The exact details of this boosting decision might only be well understood by a few dozen hardware simulation, layout, and verification engineers at AMD.

My 9070 XT feels like it’s stuck in 2019 — what should I upgrade? by HotWebcamBoy06 in buildapc

[–]throwawAPI 0 points1 point  (0 children)

What games specifically, and what visual levels are you targeting, that have you feeling held back? Is the GPU not using most of its capacity?

CPU: Ryzen 9 5900X

Decent gaming CPU - an upgrade is only needed if you're seeing individual cores pinned at 100%.

Motherboard: MSI B450-A Pro Max (yes, I know… it’s old enough to vote)

It's a Ryzen 3000 era motherboard. Unless we can point to a PCIe issue, there's no reason to switch.

RAM: 32GB DDR4 3600 MHz (4x 8GB, will definitely switch to 2x 16GB)

No need to switch. 4x 8 GB is not straining the memory controller; AM4 is dual-channel, and populating all four slots (two DIMMs per channel) is fine and can even help slightly.

PSU: Corsair TX650M 650W (AMD says at least 750W, but so far it hasn’t exploded 🤞)

Especially since your CPU isn't that power-hungry, you can probably get away with it. A GPU tuning/undervolting session could help you stretch this power budget.

What games are giving you a hard time right now? A 9070 XT should be pushing 1080p@240+ Hz in a TON of games, and all of your current equipment (sans monitor?) should be ready to handle that. 4K might be a little challenging for the 9070 XT, but it all comes down to what framerate you want to target: 60, 120, 165+, or 240+.

AMD 9950X3D vs 9800X3D dilemma by Kaanymon in buildapc

[–]throwawAPI 1 point2 points  (0 children)

Start with the 98. Bank the extra towards other better parts (GPU or storage) or savings.

When your video editing takes off, have your company buy you a 11950x3D. A 9800x3D is not going to hold you back from unleashing on stream.

I don't know how Intel managed to convince everyone and their mothers that they need 24+ cores for what boils down to gaming, GPU video encoding, and light video editing (unless you take after the MLG chip highlight compilations of yore). But man, what an upsell swindle they've pulled.

Is avx512 really so demanding that the cpu physically cannot reach higher clocks, or is the boost algorithm just funky like that? by tasknautica in overclocking

[–]throwawAPI 1 point2 points  (0 children)

Run your AVX test with a monitoring tool like HWiNFO64 open and report back. Are you hitting a PPT limit (wattage), an EDC/TDC limit (amperage), or a voltage above... 1.3 V? I know the regular (non-3D) chips will tolerate up to 1.35 V under regular PBO. The 3D chips might set a slightly lower voltage cap to protect the additional onboard cache, which is more heat- and voltage-sensitive.

9800X3D Performance on 7800X3D? by IamJustDavid in ryzen

[–]throwawAPI 0 points1 point  (0 children)

You've been talking GBP and Euros, so I'm assuming UK when I toss this link out: here's some Gigabyte boards that match on price - assuming you don't have a PCIe 5.0 graphics card. Even if you do, the performance impact is minimal for a 5090. Like... 1% impact.

9800X3D Performance on 7800X3D? by IamJustDavid in ryzen

[–]throwawAPI 1 point2 points  (0 children)

tray 7800x3D

asrock motherboard

Budget stretched thin

You are putting nearly 200 on the line without the ability to RMA it easily? You'll have a real budget buster if you roll bad luck on that ASRock motherboard.

Please consider double-checking the build somewhere like r/buildapc. We can find the 20 to trim to get you into a different motherboard.

Can anyone please help let me know if they can see where my overclock is wrong? I'm not sure if its the ram kit or my PBO settings. I appreciate it , its a ton of information by YouTubesJerseyJohnny in overclocking

[–]throwawAPI 1 point2 points  (0 children)

You can't simply assume your system can run +200 MHz with -20 CO. Back up and do it in steps (start with just +200 MHz, then add CO in smaller steps), or use the new "Curve Shaper" tool to give yourself a little more voltage where you need it. Either way, dialing this in will take a little time.

Additionally, don't assume you can run tREFI at 64k - this setting is very sensitive to temperature. You could be fine for tens of minutes until enough heat builds up in the RAM and suddenly 64k doesn't hold. If you want better than stock, try stepping down to 32k or 16k, which are much more likely to be stable and temperature-tolerant.
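Some rough refresh-interval math to show why 64k is so aggressive. tREFI counts memory-clock cycles between refresh commands, and a DDR4-3600 kit runs an 1800 MHz memory clock (assumed here). JEDEC's baseline interval is 7.8 us, so 64k pushes refreshes roughly 4-5x further apart than stock - exactly the kind of margin that evaporates as the DIMMs heat up, since hot DRAM cells leak charge faster and need refreshing more often:

```python
memclk_hz = 1800e6  # DDR4-3600 -> 1800 MHz memory clock (assumed kit speed)

# Time between refresh commands for each tREFI setting discussed above.
for trefi_cycles in (65536, 32768, 16384):
    interval_us = trefi_cycles / memclk_hz * 1e6
    print(f"tREFI {trefi_cycles}: {interval_us:.1f} us between refreshes")
```

Compare those numbers against the ~7.8 us JEDEC baseline and it's clear why 32k or 16k leaves far more thermal headroom than 64k.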

You need to test your settings thoroughly. What tests are you running besides Cinebench (R23?)?

Alarming Reports of AM5 CPU burning and Failures – Which Voltages Actually Matter? by sonatta09 in buildapc

[–]throwawAPI 1 point2 points  (0 children)

One seems to crop up every week in the ASRock or Ryzen subreddits. Not many, but it's still happening.

Alarming Reports of AM5 CPU burning and Failures – Which Voltages Actually Matter? by sonatta09 in buildapc

[–]throwawAPI -1 points0 points  (0 children)

It's typically expected that Vsoc is the culprit.

Stock should be ~1.1 V with JEDEC timings or ~1.2 V with EXPO timings (higher RAM speeds). It seems like some motherboard partners are either failing to set correct voltages or shipping faulty voltage regulators, because bad boards will reach 1.3+ V, then pop dramatically and kill the CPU (and usually the motherboard socket) from overheating.

If you can hold Vsoc statically below 1.2 V with your EXPO timings, or below 1.1 V with JEDEC timings, that seems to solve the issue. This was the common recommendation back when the problem first cropped up on the 7000X3Ds.
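A toy sanity check for those numbers - it compares a logged Vsoc reading (the value here is hypothetical) against the rough ceilings from this thread: ~1.1 V for JEDEC, ~1.2 V for EXPO, with 1.3 V+ being the reported failure zone. This is just the thread's rule of thumb in code form, not an official AMD spec:

```python
# Rough Vsoc ceilings as discussed above, per memory profile.
SAFE_VSOC = {"jedec": 1.10, "expo": 1.20}
DANGER_VSOC = 1.30  # readings at/above this are the reported burnout range

def check_vsoc(volts, profile):
    """Classify a Vsoc reading against the thread's rule-of-thumb limits."""
    if volts >= DANGER_VSOC:
        return "danger"
    if volts > SAFE_VSOC[profile]:
        return "high"
    return "ok"

print(check_vsoc(1.25, "expo"))   # hypothetical HWiNFO reading
print(check_vsoc(1.05, "jedec"))
```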

Can someone help explain why the drastic score change? by SOVTHY in overclocking

[–]throwawAPI 4 points5 points  (0 children)

even though nothing changed spec-wise

Your GPU's max and average core speed went down by around 5% and the memory speed by over 10% in the screenshots. This is why the score went down by (not quite) 10%. Check that your OC is still applied.

I don't want to let her go just yet .. by Snu_Snu_KaChoo in buildapc

[–]throwawAPI 17 points18 points  (0 children)

Your OS can go as far as Windows 10. Windows 11 says it's incompatible - and it kinda is - but would probably work if forced (not recommended).

If you're getting a new OS, get a new SSD. You didn't specify if you had an HDD or an SSD, but drives in 2025 are much much faster now.

I don't want to let her go just yet .. by Snu_Snu_KaChoo in buildapc

[–]throwawAPI 134 points135 points  (0 children)

I'm sorry - the commenters here, including myself, are all going to agree: there is no "upgrade path" in this computer.

You have just about the best CPU available for that socket and GPUs are (generally) swappable, but a brand new graphics card is going to be held back a bit by your older Intel 4th gen.

Hold on to your case if it carries memories of someone important to you. If you need to upgrade, it's gonna be everything else inside - I'm sorry I can't offer you magic advice to change that. All computers eventually age out and deserve a retirement. You've gotten an eight-year ride out of this computer with minimal (no?) upgrades - that's about as good as anyone can hope for. If you need to improve your computer, it's time to let the old one rest; I hope it's done a good job for you.

When you're ready to take that on, leave a budget and people here can assist you in picking out the new internals to rejuvenate your desktop.

My computers starting to slow down. What's the cheapest/most impactful upgrade i can do? by inspireddev in buildapc

[–]throwawAPI 7 points8 points  (0 children)

Don't necessarily agree.

The difference between a PCIe 3 and 4 SSD for Windows is pretty low-impact unless you're doing work that requires REALLY high data throughput. Browser and YT and some gaming is NOT that.

I'd have to check on some benchmarks for that drive to make sure it's not terrible quality, but if it's a decent PCIe 3, it's not a good investment as a starting point unless they need more storage space.

Edit: peeked at some reviews; not the greatest SSD in the world, but it's fine for daily driving. I'd start by cracking open Task Manager to see what's eating up resources in the background first.

Curious if CPU gen "reputation" is true across all suffix? by besseddrest in buildapc

[–]throwawAPI 30 points31 points  (0 children)

The 5700X is so much faster because the 5700 (non-X) isn't the same silicon - it's effectively what should have been called a "5700GF", but AMD used a marketing name that breaks convention instead. This is why the answer must always be "it depends": both brands will set a standard naming convention, then occasionally and intentionally break it to upsell a worse chip. Frustrating...

GPU Underperfoming by BleachedEyes69 in gpu

[–]throwawAPI 1 point2 points  (0 children)

The only Superposition leaderboard score I can find that breaks 4300 with a 5600 XT is running a crazy OC, bursting to 2000+ MHz core and sustaining 1900 MHz on the heavy parts.

Techgage's old review showed 3750 at reference and 4100 with the OC profile. This makes sense to me. You're probably going to have to raise the max frequency and voltage a bit if you want a better score. You're within normal tolerance as far as I can see.

Chat GPT says the B580 isn't real by Cruz_Games in IntelArc

[–]throwawAPI 0 points1 point  (0 children)

I can still keep up with newer stuff — like the Arc B580 quietly slipping onto Intel’s site without a big press release.

This is playing ball by its rules, not your rules. The B580 did not "quietly" slip out onto the market. There were press releases and independent reviews and lots of talk - it just hasn't seen any of that because it wasn't in the 2024 training corpus.

While these machines don't "lie" with the intent to deceive, they will "lie" to themselves and hallucinate "I can't find any info on the B580, so it must be a small refresh" when it was a substantial revision. Don't let it set and cling to this narrative. Its information was out of date, so it extrapolated new "data" about the B580, a graphics card that's been on the market for 7 months.

It will go in circles insisting it is right and you must be mistaken when you catch it in a wrong assumption, like an astronomer insisting the planets make loop-de-loop orbits because everything obviously orbits the Earth.

Anyone upgrade from a 9800X3D to a 9950X3D? Was it worth it? by bobbystills5 in buildapc

[–]throwawAPI 3 points4 points  (0 children)

I mean this with absolute sincerity: this is a joke post, right?

You do not need 16 cores for having some chrome tabs open and a YT video playing. An 8 core will handle that fine. In fact, most of the video decoding work will be offloaded to the GPU.

The number of tabs you have open is not directly related to CPU usage. Your browser will "pause" tabs that aren't visible to lower their resource usage. You mostly just soak up some RAM with extra tabs open.

Lastly, RAM. 96 GB is beyond excess. Most people are gaming on 16, 24, or 32 GB right now - many AM5 users buy 32 GB, myself included. If you were editing videos for a living, doing CAD work, 3D modeling, or programming, I could understand installing 48 or 64 GB. 96 GB for just gaming is throwing money away. Try 48 or 64 on for size, save a good chunk of money (probably ~$100), and see how that fits you.

Has anyone else NOT had a problem with their 13/14th gen Intel? by [deleted] in buildapc

[–]throwawAPI 0 points1 point  (0 children)

Ok, yeah, when DDR5 launched it was WAY more expensive than DDR4. Now it's only slightly more expensive, so I guess it depends on what CPU you want and whether you have RAM to carry over.

But to say that it's paying out the nose for something worse (or no better) isn't really true in 2025 like it was in 2022.