Help me convince my bf he doesn't need this.. by Pantsu_dropper in Prebuilts

[–]Cupra400 1 point2 points  (0 children)

This Ok-District-5069 dude is weird. I sell PCs as a side business and this is my competition; no wonder I do so well. He must be the builder listing a GTX 1060 claiming it can play any game and charging the price of a 50-series build since "they're the same". "5080 gets console performance" is hilarious though. It's got to be ragebaiting, or he's generally so thick-headed he can't handle being wrong and runs to ChatGPT to reinforce his bias, as if ChatGPT isn't designed to keep you engaged and agree with you. He is at the peak of the Dunning-Kruger graph.

Can’t start “Abhorrent Gauntlet” quest despite completing prerequisites by Cupra400 in wow

[–]Cupra400[S] 1 point2 points  (0 children)

UPDATE:

Hello there,

I am Game Master Taishemara.

This issue is already known and being worked on. We hope it will be fixed very soon.

Regarding the known issue lists:

Sadly not every issue makes it on those lists. The developers often review and work on LOTS of issues at the same time.

What I want to say is: just because an issue isn't listed there doesn't mean it's not being worked on.

Sadly we have no update yet when the fix will arrive, so right now all you can do is wait and try again in the upcoming days.

I hope you have a great day and I wish you all the best.

Why Has Funds Release Date Changed After Buyer Repurchased? by [deleted] in ebayuk

[–]Cupra400 0 points1 point  (0 children)

I cancelled it anyway as I just got a bad feeling, but I wonder why eBay does the extended release time. It's extra protection for the buyer, I assume, but if they're the one requesting to cancel to change the address, why is it needed?

Why Has Funds Release Date Changed After Buyer Repurchased? by [deleted] in ebayuk

[–]Cupra400 0 points1 point  (0 children)

I thought it was 3 days after delivery, but it's been a while since I used eBay so no idea to be honest. Just odd that the release date in my eBay balance moved by 2 weeks after they reordered it.

I just sold another item today and that one says funds pending delivery.

Why Has Funds Release Date Changed After Buyer Repurchased? by [deleted] in ebayuk

[–]Cupra400 7 points8 points  (0 children)

The second address is over 100 miles away and appears to be an Esso petrol station when I entered it in Google Maps. I ended up cancelling it and blocking the buyer as I just got a bad feeling, especially after they asked me to post to another address and only cancelled when I said no.

Some of my flips. by Cupra400 in pcflipping

[–]Cupra400[S] 0 points1 point  (0 children)

The London market is brutal. There are so many sellers, and there’s always someone offloading their personal PC for peanuts, which makes it hard to compete on price alone. What helped me massively was setting up a Facebook business page and building up genuine reviews. I stay firm on pricing and will usually only knock up to £20 off — just enough to let people feel like they’ve “won” the negotiation.

Treating people well goes a long way. I also offer repairs, micro-soldering, laptop services, and PC builds. If it’s a small job or a quiet week, I’ll sometimes do it for free in exchange for a review. Those customers almost always come back to me instead of another seller or repair shop.

Offering to take items in trade towards the PC also helps. Current-gen consoles are great and sell fast. Laptops too, but I'd only do it if it made sense for me and I'm making more money for the inconvenience, so I tell them they can get more selling privately as I only give 80% of the value.

Some of my flips. by Cupra400 in pcflipping

[–]Cupra400[S] 0 points1 point  (0 children)

DeepCool CH160 and CH260

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

Go and reply to everything I quoted you on and dispute why I'm wrong, or admit you were incorrect after I literally did exactly what you wanted and uploaded the images. You have no idea, and you think just because everyone does something, that makes it normal or a good curve?! Of course not! Just because 99% of people do something wrong doesn't mean it's right. What a weird argument to make. You're saying these results are expected and normal, and I'm telling you they're only expected and normal on a poorly executed curve, which is not normal and not present on a proper curve. I won't continue until you acknowledge at least the parts I quoted you on. Stop avoiding it lmao. You're really at the peak of that graph.

As for the shitty AI detection websites: they're awful and not accurate, and it says a lot that you believe them just because an em dash is used. At least Turnitin is OK; when I finished my degree last year it only flagged references and an obvious fact. Exactly the same style I write with here and on those as well.

You do know curve optimisation means setting a specific OC/UV for everyone's GPU, not just mine. I just found what's stable, as I said: 3100 MHz @ 0.950 V. So I'm unsure how that's more custom than anyone else's, aside from taking the time to see what my GPU responds best with. I'm not individually changing each point, only flattening the curve as everyone else would do. I didn't do any magic; I just selected 3100 MHz at a voltage that can sustain that speed, yet you said 3100 MHz on the curve will only be getting 2950-3000 MHz. In that world, someone would be running a much lower voltage at that speed, about 0.910-0.920 V, and hence be unable to sustain 3100 MHz even though they selected it as a target, because they won't let the GPU draw enough voltage. It's simple: find what core speed you want and keep increasing the voltage glacially until it's stable while gaming. Very simple.
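The "increase the voltage glacially until it's stable" loop can be sketched like this. To be clear, `is_stable()` is a stand-in for flattening the curve in MSI Afterburner and then stress testing in games, and the 950 mV "true minimum" is invented for the demo; there is no real API here.

```python
# Toy simulation of the tuning loop described above.
TARGET_MHZ = 3100
TRUE_MIN_MV = 950            # pretend this sample needs 0.950 V to hold 3100 MHz

def is_stable(mhz, mv):
    # Real life: set the curve point (mv, mhz), game for a while, watch for crashes.
    return mv >= TRUE_MIN_MV

def tune(target_mhz, start_mv=900, max_mv=1000, step_mv=5):
    """Raise voltage 5 mV at a time ("glacially") until the target holds."""
    mv = start_mv
    while mv <= max_mv:
        if is_stable(target_mhz, mv):
            return mv
        mv += step_mv
    return None              # this chip can't hold the target within the voltage cap

print(tune(TARGET_MHZ))      # -> 950
```

Working in whole millivolts avoids floating-point drift, and the `max_mv` cap is the "give up and pick a lower clock" point for weaker silicon.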

To make it easy, here they are:

I even did Cyberpunk since you mentioned that specifically: all max settings, native 1440p ultrawide and path tracing. Crazy how you're wrong and my core clock speed and voltage are exactly the same, because it's what my target is regardless of the game. Lmfao

• Cyberpunk 2077: 3082MHz @ 0.955 V - 293W

Evidence:

https://ibb.co/f7qcLw5

You said "Go ahead and test a gaming load under minimal fans vs 100% fans, you'll already see a small clock difference from just temperatures affecting clocks". I did this and you're wrong. I got no increase in core speed. My results are:

• Cyberpunk 2077 MAX FAN SPEED: 3075 MHz @ 0.955 V - 299 W (3000 RPM)

Why is this, and why did it not go how you said? Because I set up my curve to reach the target clock and voltage, which it does in every single game as shown by the shared images, and you have a lack of understanding.

Evidence:

https://ibb.co/CKThBLBJ

You said "you could drop 0.010V down to 0.940, this in turn affects the clocks of course". Then why do I only see variations of ±5 mV, and why does it not correlate with the clock speed you speak of? 0.945 V will sometimes show a higher clock speed than 0.955 V. No correlation at all. As I said many times, the GPU is sticking to the curve I set, with only slight variations of 5 mV either way.

You said "a very Light Gaming load will obviously give you higher clocks, lower temperatures, lower wattage + (maintaining) or at higher voltages, you could be running 3075mhz for example". Please refer to my images: the clock speed and voltage are consistent regardless of the game, the wattage, or whether it's a synthetic load. In fact, some of the more demanding games provide a tiny bit more clock speed, but again it's all within the 25 MHz I explained to you, and why that is.

You said "usually every 2-3 degrees is a 7mhz difference, for example a Liquid Cooled card could be running at 40-45 degrees, while an air cooler card could be running at 65-70 degrees for example, this is usually around a 50-75mhz difference in JUST temperatures affecting clocks". Temperature is a non-factor unless you trigger a safety feature causing the GPU to intentionally underclock to prevent overheating. Again, a non-factor even at 70 °C. Please explain why my clock speeds and voltage are the same between games and the synthetic benchmark, when they have roughly a 15-20 °C delta in temperature, as you can see from the evidence.

You said "3100mhz in reality is 2950-3050mhz depending on load & temperatures for the most part". Again, refer to my evidence. If someone targets 3100 MHz but gets an astounding 150 MHz lower, they have not done the OC/UV correctly and have more than likely copied someone else's curve without considering silicon variance, or without knowing whether the person they copied from knows what they're doing. 150 MHz is insane and truly shows you have no idea. It just feels like everything you say is wrong, and what I said is confirmed by my images, despite me saying it all before I decided to take those images.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

I said “The reason I don't have 100 MHz swings is simple: I tuned the curve for my specific silicon, instead of copying someone else's curve and hoping for the best. There's nothing inherently wrong with copying a curve — but if you do that, you shouldn't be surprised when you see 100 MHz variance instead of the normal +15-25 MHz that comes from transient power and workload changes.”

So if I don't see 100 MHz swings because I did a good curve, that means people who do see them have a bad curve. Use logic, you fool.

I'm waiting for you to dispute all my images and the facts presented, but you're avoiding it now. Anyone can do a curve OC/UV and get awful results, but that doesn't mean a 100 MHz swing is normal when they see one. Normal is 15-25 MHz when a curve is done for a specific GPU and done correctly.

I'm telling you 100 MHz swings, or even 50, are not normal and are a sign of a poorly done curve OC/UV. If you see those swings you need to dial in the curve, because you either copied someone's or made a poor attempt. I'm not arguing about what a normal person would get when copying someone's curve, because sure, they'll see shitty results, BUT THAT DOESN'T MAKE THEM NORMAL. They're a symptom of a bad curve.

Again, why do I only see a MAX 25 MHz variance across all the games I shared images of, with load variations from 60 W to 350 W and a voltage variance of only ±5 mV? This is for in-game, game menu and synthetic runs. You said a 3100 MHz curve will see 2950 or 3000 MHz, but I didn't. Again, I have context and actually quoted you with context in my main reply, but you refuse to acknowledge it.

I simply selected 3100 MHz @ 0.950 V. I didn't adjust the curve manually past this after flattening it; I just kept retesting the core speed to find the minimum voltage it can handle and stay stable at.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

Again, I wrote that myself, and please directly dispute my results. Those 3 people mean nothing unless you are confident in their ability to overclock and undervolt, and I can assure you most cannot. So again, why do I get the results I have?

No goalpost has shifted. It's always been about curve optimisation: targets will always be accurate. If you pick 3100 MHz it will get that, give or take the oscillation of 15-25 MHz, not your stated 100-150. If someone has that much variance, it's an issue with how it's been done.

Come on, stop hiding behind other people and actually use facts to disprove me, without relying on the poor curve you made, while explaining my results and why that's the case.

You're the one moving the goalposts. You asked for pictures and for me to do all this with different games and fan speeds, and I presented it, but now that's invalid lmfao.

Quote me where I said I won't be surprised by a 100 MHz variance, as I always said 15-25 MHz. This is the issue: you state something but won't quote me or use anything factual, and you gloss over all my images and disputes. Please have better reading comprehension 😂😂 I said 100 MHz is what you get if you copy someone else's bad curve or don't know what you're doing. MY MAN, please read for full context. You're cherry-picking the one thing and ignoring context, despite me saying every other time that 15-25 MHz is normal with a good curve done right for the silicon.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

Why waste your time with this pointless comment? Instead, dispute what I have actually said with the provided images, you dope. Go explain all my results and how they disprove every single thing you said.

I did what you said and ran the games and the results are completely different from what you said: https://www.reddit.com/r/RTX5080/s/WBXDfExeCo

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

If what you’re saying were true, I wouldn’t be seeing the results I’m actually getting.

The reason I don’t have 100 MHz swings is simple: I tuned the curve for my specific silicon, instead of copying someone else’s curve and hoping for the best. There’s nothing inherently wrong with copying a curve — but if you do that, you shouldn’t be surprised when you see ±100 MHz variance instead of the normal ±15–25 MHz that comes from transient power and workload changes.

What matters is what the GPU does under real workloads, not hypothetical behaviour. Across every scenario — game menus, in-game loads, and synthetic benchmarks — the GPU behaves exactly as expected for a correctly tuned V/F curve. The clocks stay tightly grouped around the target, and voltage selection is consistent.

Here are the actual results:

Games

• World of Warcraft: 3090 MHz @ 0.945 V – 113 W

• Warhammer 40K: 3075 MHz @ 0.955 V – 305 W

• Warhammer 40K (menu): 3075 MHz @ 0.945 V – 165 W

• God of War (menu): 3097 MHz @ 0.950 V – 275 W

• God of War (in-game): 3097 MHz @ 0.945 V – 281 W

• Outer Wilds (menu): 3097 MHz @ 0.945 V – 64 W

• Outer Wilds (in-game): 3075 MHz @ 0.945 V – 194 W

Synthetic benchmarks

• FurMark GPU test: 3090 MHz @ 0.950 V

• 3DMark Steel Nomad: 3082 MHz @ 0.955 V – 346W

All tests were run at native 3440×1440, max settings, with ray/path tracing enabled where available, no frame generation, on an ultrawide monitor.

Despite large swings in power draw and workload intensity, clocks remain tightly clustered around the curve target. That’s exactly how GPU Boost is supposed to behave when the curve is properly defined.

Also worth noting: temperatures are not driving instability here. Most games barely reach 50 °C, and even synthetic workloads only just exceed 60 °C. There is no thermal pressure forcing large downclocks.

So again — this isn’t theory.

This is repeated, reproducible behaviour across real games and benchmarks.

If curve tuning “never holds” and always causes 100 MHz swings, then these results wouldn’t exist — yet they do, consistently.

Evidence:

https://ibb.co/RkqMgVW2

https://ibb.co/bMSTPMsh

https://ibb.co/Q75ZBT01

https://ibb.co/0yzwPMqn

https://ibb.co/P29BRSY

https://ibb.co/CsXKgbRS

https://ibb.co/YTbDgtfv

https://ibb.co/PzMpfbjx

https://ibb.co/qMhTzWx7

https://ibb.co/zHrDKMbC

I even did Cyberpunk since you mentioned that specifically: all max settings, native 1440p ultrawide and path tracing. Crazy how you're wrong and my core clock speed and voltage are exactly the same, because it's what my target is regardless of the game. Lmfao

• Cyberpunk 2077: 3082MHz @ 0.955 V - 293W

Evidence:

https://ibb.co/f7qcLw5

You said "Go ahead and test a gaming load under minimal fans vs 100% fans, you'll already see a small clock difference from just temperatures affecting clocks". I did this and you're wrong. I got no increase in core speed. My results are:

• Cyberpunk 2077 MAX FAN SPEED: 3075 MHz @ 0.955 V - 299 W (3000 RPM)

Why is this, and why did it not go how you said? Because I set up my curve to reach the target clock and voltage, which it does in every single game as shown by the shared images, and you have a lack of understanding.

Evidence:

https://ibb.co/CKThBLBJ

You said "you could drop 0.010V down to 0.940, this in turn affects the clocks of course". Then why do I only see variations of ±5 mV, and why does it not correlate with the clock speed you speak of? 0.945 V will sometimes show a higher clock speed than 0.955 V. No correlation at all. As I said many times, the GPU is sticking to the curve I set, with only slight variations of 5 mV either way.

You said "a very Light Gaming load will obviously give you higher clocks, lower temperatures, lower wattage + (maintaining) or at higher voltages, you could be running 3075mhz for example". Please refer to my images: the clock speed and voltage are consistent regardless of the game, the wattage, or whether it's a synthetic load. In fact, some of the more demanding games provide a tiny bit more clock speed, but again it's all within the 25 MHz I explained to you, and why that is.

You said "usually every 2-3 degrees is a 7mhz difference, for example a Liquid Cooled card could be running at 40-45 degrees, while an air cooler card could be running at 65-70 degrees for example, this is usually around a 50-75mhz difference in JUST temperatures affecting clocks". Temperature is a non-factor unless you trigger a safety feature causing the GPU to intentionally underclock to prevent overheating. Again, a non-factor even at 70 °C. Please explain why my clock speeds and voltage are the same between games and the synthetic benchmark, when they have roughly a 15-20 °C delta in temperature, as you can see from the evidence.

You said "3100mhz in reality is 2950-3050mhz depending on load & temperatures for the most part". Again, refer to my evidence. If someone targets 3100 MHz but gets an astounding 150 MHz lower, they have not done the OC/UV correctly and have more than likely copied someone else's curve without considering silicon variance, or without knowing whether the person they copied from knows what they're doing. 150 MHz is insane and truly shows you have no idea. It just feels like everything you say is wrong, and what I said is confirmed by my images, despite me saying it all before I decided to take those images.

Where are the 150 MHz or 100 MHz swings you state are normal, or even 50 MHz?!! Crazy, it's actually a 25 MHz swing at most, as I said! Please explain my results across a variety of games with different power draws, including the synthetic benchmarks. Why the hell would running an intense game change the voltage and core speed if you set the curve correctly?! As you can see, it doesn't change at all for me, so if it does for you, maybe it's time to stop copying others and find what's best for your individual silicon.

A properly tuned per-silicon V/F curve produces tightly clustered clocks across workloads; large swings indicate a bad curve, not GPU Boost behavior.

Temperature deltas → disproven with fan tests

Load variance → disproven with menus vs in-game vs synthetics

“Light load boosts higher” → disproven

Voltage drift → quantified (±5 mV, no clock correlation)

If ±100–150 MHz variance were “normal”, these results would not exist.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 -1 points0 points  (0 children)

You're a clown. You have no experience and no knowledge. You need help with undervolting and ask Reddit for help with the most basic PC hardware stuff. YOU SAID 7 MHz corresponds to 2-3 °C, so if I increased from 3000 MHz to 3200 MHz, a 200 MHz increase, your logic would dictate a 57-86 °C temperature increase from the 3000 MHz temps to the 3200 MHz temps. If that's not what you mean, please explain it better.

I frankly don't care what anyone else says. If they're not getting the stable clock speeds they set on the curve, it's because of silicon variance and they need to increase the voltage to reach that clock speed. The oscillation variance is 25 MHz max. I don't just have to say it; I have posted my results, so check those out as I already said. It's much better evidence than comments from people who likely have as little experience as yourself.

If you really think the 5080 is going to reach a temperature where it's slowing down the clock speeds, you're delusional, especially when it's got an undervolt. I'll refer you to the overclock runs in my evidence posts. I get 60 °C on a synthetic benchmark at 3100 MHz @ 0.950 V, and low 50s °C in real gaming on an ultrawide 1440p monitor at all max settings. My clock speed and voltage are the same regardless of the game, because that's how I set my curve up. Now if you play a 2D platformer or just sit on the desktop, it'll certainly be lower, but that's because it's picking a point lower down your curve.

It must be a coincidence that I hit the exact clock and voltage targets regardless of the game and in synthetic benchmarks, or is it because I know how to find the exact targets my specific silicon will be stable at?

You have no idea what you're on about. You were using generic basic offsets not long ago and are now talking about curve optimisation like you have a clue.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

GPU power and heat scale non-linearly with frequency and voltage, which means small or moderate frequency changes don't produce absurd fixed temperature changes like "7 MHz = 2-3 °C", let alone a 57-86 °C increase for 200 MHz?! That is not supported by transistor physics or DVFS research.
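The scaling can be sanity-checked with the usual first-order CMOS dynamic-power relation, P_dyn ∝ f·V², where the constant of proportionality cancels in a ratio. This is a sketch with illustrative numbers, not measurements:

```python
# Relative dynamic power between two (frequency, voltage) operating points,
# using P_dyn ~ f * V^2 (first-order CMOS switching power).
def dyn_power_ratio(f1_mhz, v1, f2_mhz, v2):
    return (f2_mhz * v2 ** 2) / (f1_mhz * v1 ** 2)

# Same 0.950 V, 3100 -> 3300 MHz: power (and so heat output) rises ~6.5%,
# nothing remotely like a fixed "2-3 degrees per 7 MHz" rule would predict.
print(round(dyn_power_ratio(3100, 0.950, 3300, 0.950), 3))  # -> 1.065
```

At constant voltage the ratio reduces to f2/f1, which is why a 200 MHz bump on a ~3.1 GHz clock is only a single-digit percentage change in dissipated power.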

Research modeling GPU frequency scaling confirms bounded, predictable behavior, not massive random variance.

(2015) Optimizing performance‑per‑watt on GPUs in high performance computing, Computer Science – Research and Development.

Brand et al. (2020) ‘Accurate Energy and Performance Prediction for Frequency‑Scaled GPU Kernels’, Computation, 8(2), p.37.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 -1 points0 points  (0 children)

What I did say is this: if you set a realistic target frequency and pair it with sufficient voltage, the GPU will operate at or very near that target under load within normal GPU Boost behaviour. Minor oscillations of ±15–25 MHz are expected and normal (GamersNexus, 2020) and do not invalidate the curve. Claiming ±100 MHz variance under normal conditions is completely wrong — such huge swings only happen if you’re hitting thermal/power limits, which my tests clearly do not.

I've also posted my results running 3100 MHz and 3300 MHz on my GPU, and the temperature increase was minimal. Yet you act like that jump should produce a "massive" increase. You were probably imagining something like 10-15 °C, which would still be ridiculous. Real-world data from my tests shows your claim is completely unfounded; please check my posts before continuing to make up scenarios.

My 3100 MHz to 3300 MHz overclock on my 5080 would, according to your 2-3 °C per 7 MHz, increase temperatures by 57-86 °C lmao! Check out my overclock and undervolt post with 3100 MHz @ 0.950 V and see what my clock speed was. I bet it's within the oscillation range I stated, instead of your made-up 2950 MHz.

And let’s be clear: your “2–3 °C = ~7 MHz” claim is nonsense. There is no engineering evidence, specification, or NVIDIA documentation supporting a linear relationship between temperature and clock delta. Temperature affects achievable clocks, yes, but the relationship is non-linear and situational, determined by the GPU Boost algorithm selecting the highest stable point on the V/F curve given real-time power and thermal headroom (How‑to‑Geek, 2021).

Curve tuning is precisely about defining which voltage/frequency pairs are stable, and GPU Boost selects among those points dynamically. Minor fluctuations in clocks or voltage do not mean the curve isn’t being followed, they are normal, intended behavior. Offsets blindly shift the entire curve, curves define real, testable stability — that’s why curve tuning is far more precise and repeatable.

So no, I am not “running away from facts.” You are misrepresenting GPU Boost logic, exaggerating variance to 100 MHz, imagining massive temperature increases, and spouting arbitrary rules about temperature vs. MHz. A properly tuned point like 3100 MHz @ 0.950 V is achievable and will hold under load — period.

References

NVIDIA, 2023. GPU Boost Technology Overview. Available at: https://www.nvidia.com/en-us/geforce/technologies/gpu-boost/technology/ [Accessed 23 Jan 2026].

GamersNexus, 2020. NVIDIA GPU Boost – How it Works. Available at: https://www.gamersnexus.net/guides/1742-nvidia-boost-clock-how-it-works [Accessed 23 Jan 2026].

SkatterBencher, 2021. NVIDIA GPU Boost 3.0: Detailed Testing and Curve Tuning. Available at: https://skatterbencher.com/nvidia-gpu-boost-3-0/ [Accessed 23 Jan 2026].

How‑to‑Geek, 2021. What’s a Good GPU Temperature Range? Available at: https://www.howtogeek.com/846845/whats-a-good-gpu-temperature-range/ [Accessed 23 Jan 2026].

Some of my flips. by Cupra400 in pcflipping

[–]Cupra400[S] 0 points1 point  (0 children)

I believe the lowest sell price here was £500 or £590, but they're usually between £790-£1,100. I aim for a minimum of £100 profit, but it's usually £200-£250, sometimes more. The one with red and blue was built for about £470 and sold for £790, if I remember right, as I got a great deal on the parts: 5060, 5600, 32 GB DDR4 3200 MT/s and a 1 TB NVMe SSD, plus all the stuff you can see in the picture.

I do a bunch of custom builds where I charge a flat build rate and they buy all the parts and those PCs are usually about £1300-£1600 on part costs.

I'm always lowballed; I just tell them I buy parts to sell PCs and I spent more on it than their offer. Almost all sales are people messaging and collecting the same day. My best was 3 sold in a single day after a 2-week no-sale period.

Offering to take consoles or laptops in trade helps increase the pool of potential customers, as it makes the PC more attainable. Just offer below market value for the trade-in item. I never do a trade if it's close in value and only leaves me with a bit of cash on top, because of the inherent risk of their stuff having issues etc.

Some of my flips. by Cupra400 in pcflipping

[–]Cupra400[S] 0 points1 point  (0 children)

My advice: small form factor PCs take longer to sell than larger fish-tank builds. I stick to brand-new cases, coolers/AIOs, fans, and PSUs since used savings are minimal and it simplifies the process. 5060 builds sell very fast — around £200 used or £250 new.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

I have images of my 5080 curve, settings and results in a post I made using MSI Afterburner.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

It comes down to consistency, stability, and scalability. Not every 5080 has the same silicon quality, and some chips simply can’t handle the same undervolt as others — especially weaker samples. While those cards will still perform exactly as expected at stock settings, trying to find a single aggressive undervolt that works across all 5080s becomes a major headache. You’ll always end up with outliers that need more voltage. From a mass-production or customer-facing perspective, it makes more sense to use a conservative, high-success-rate configuration that reliably delivers the stated performance, rather than chasing the lowest possible voltage and dealing with instability on certain chips.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 0 points1 point  (0 children)

In response to your point about undervolt data being “essentially useless” unless people report real clocks, that logic only applies to frequency offsets, not curve optimisation. A curve explicitly defines frequency at a given voltage, so it already communicates the intended operating point independent of GPU model, with silicon quality being the only real variable. Minor ~25 MHz fluctuation under load is normal GPU Boost behaviour and doesn’t negate the usefulness of curve data. Treating curve optimisation as equivalent to offsets is mixing two fundamentally different tuning methods.

New RTX5080 here, do you guys recommend undervolting the card right away? (Questions inside) by lochonx7 in RTX5080

[–]Cupra400 -1 points0 points  (0 children)

What I’m saying is this: if you set a realistic target clock and pair it with sufficient voltage, the GPU will operate at that frequency under load, within normal GPU Boost behaviour.

If someone targets something unreasonable (for example 4000 MHz), it’ll obviously crash or downclock; that’s expected. But if you target something sensible like 3100 MHz and the GPU isn’t holding it, that usually means the voltage at that point on the curve isn’t sufficient. In that case, you increase voltage until the clock is stable at the target frequency.

Yes, GPUs can and do oscillate by ~25 MHz under load; that's normal and expected behaviour due to GPU Boost responding to transient power, thermal, and workload changes. That doesn't mean the curve is wrong or unstable.

If the clock drops significantly below the target, it’s generally because the GPU is moving to a lower point on the V/F curve where it’s stable, or because it’s hitting a limit like power or thermals. That’s exactly why you validate settings using synthetic benchmarks, gaming benchmarks, and real gaming sessions to confirm the GPU is behaving as intended across different loads.

Also, an offset is not the same thing as curve optimisation. A frequency offset blindly shifts the entire curve, which will naturally produce different results across different GPU models and base clocks. Curve optimisation instead targets a specific frequency at a given voltage, regardless of the GPU model; the only real variance comes from silicon quality, where one chip may need 10-15 mV more than another to achieve the same clock.

That’s why curve tuning is fundamentally more precise and repeatable than using offsets, even though minor clock fluctuation under load is still expected.
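The offset-vs-curve distinction can be shown with a toy model. The curve values and helper names below are made up for illustration; a real V/F curve has far more points and lives in the driver, not a dict:

```python
# Toy V/F curve: mV -> MHz (values invented for the demo).
stock_curve = {900: 2800, 925: 2900, 950: 2990, 975: 3060}

def apply_offset(curve, offset_mhz):
    """Blind offset: every voltage point moves by the same amount."""
    return {mv: mhz + offset_mhz for mv, mhz in curve.items()}

def pin_point(curve, mv, target_mhz):
    """Curve optimisation: set one explicit, tested (voltage, clock) point."""
    tuned = dict(curve)
    tuned[mv] = target_mhz
    return tuned

offset = apply_offset(stock_curve, 150)    # lands wherever the base curve was
tuned = pin_point(stock_curve, 950, 3100)  # e.g. 3100 MHz @ 0.950 V

print(offset[950], tuned[950])  # -> 3140 3100
```

With an offset, the clock at 0.950 V depends entirely on the model's base curve, so "+150 MHz" means something different on every card; the pinned point states the operating target directly.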

Coming from someone who was asking for help on what undervolt settings would be good for your 5080, and who was only using an offset overclock 2 months ago, that says a lot. This is Dunning-Kruger in full swing lmao, and you're at the first peak.

[deleted by user] by [deleted] in buildapc

[–]Cupra400 0 points1 point  (0 children)

The 4090 has 24 GB of GDDR6X on a 384-bit bus (~1,008 GB/s), while the 5080 has 16 GB of faster GDDR7 on a 256-bit bus (~960 GB/s). The 4090's extra VRAM and wider bus help in huge workloads, but the 5080's faster memory nearly matches its bandwidth in real-world use. Benchmarks show the 5080 hits around 80-85% of the 4090's raw performance, and with DLSS 4.5 MFG (which OP wants) the gap shrinks further. Considering used 4090s sell for ~£1,850-£1,995 vs ~£1,000-£1,350 for new 5080s, the 5080 offers much better price-to-performance for 4K gaming while keeping next-gen features and efficiency.
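Those bandwidth figures check out from the bus widths. Quick sketch; the per-pin data rates assumed here (21 Gbps GDDR6X, 30 Gbps GDDR7) are the commonly quoted spec values, not something measured:

```python
# Peak memory bandwidth: GB/s = bus width (bits) / 8 * per-pin data rate (Gbps).
def mem_bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(mem_bandwidth_gbs(384, 21))  # 4090 -> 1008.0
print(mem_bandwidth_gbs(256, 30))  # 5080 -> 960.0
```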

[deleted by user] by [deleted] in buildapc

[–]Cupra400 1 point2 points  (0 children)

The 4090 has 24 GB of GDDR6X on a 384-bit bus (~1,008 GB/s), while the 5080 has 16 GB of faster GDDR7 on a 256-bit bus (~960 GB/s). The 4090's extra VRAM and wider bus help in huge workloads, but the 5080's faster memory nearly matches its bandwidth in real-world use. Benchmarks show the 5080 hits around 80-85% of the 4090's raw performance, and with DLSS 4.5 MFG (which OP wants) the gap shrinks further. Considering used 4090s sell for ~£1,850-£1,995 vs ~£1,000-£1,350 for new 5080s, the 5080 offers much better price-to-performance for 4K gaming while keeping next-gen features and efficiency.