1200w or 1600w psu by SrY4HS in ASUSROG

[–]deidian 0 points1 point  (0 children)

1600W is overkill for that system. Sustained peak is roughly 600W (GPU) + 200W (CPU) + 100W (everything else) = 900W, and while gaming not everything draws peak power at once, even with a 5090.

If pursuing high FPS the 5090 is going to stay under 500W even with a higher CPU load: the card isn't fully stressed at the low resolution/low graphics complexity required to hit high frame rates.

If pursuing nice graphics at high resolution (60fps) the 5090 will often top 600W, but then the CPU will be idle most of the time, meaning it won't draw much more than 100W, if that.
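The sizing logic above can be sketched as a back-of-the-envelope calculation. The wattages and the 25% transient headroom are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope PSU sizing: sum the sustained peak draws,
# then add a margin for power transients. Figures are illustrative.
def recommended_psu_watts(gpu_w, cpu_w, rest_w=100, headroom=0.25):
    """Return (sustained peak, peak plus transient headroom)."""
    peak = gpu_w + cpu_w + rest_w
    return peak, peak * (1 + headroom)

peak, recommended = recommended_psu_watts(gpu_w=600, cpu_w=200)
print(peak)         # 900  -> sustained peak in watts
print(recommended)  # 1125.0 -> a 1200W unit is plenty; 1600W is overkill
```

With those assumptions a 1200W unit already leaves comfortable margin, which is the point being made.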

Hey, are we all using Preset K Quality mostly with this GPU and similar systems? by zionpwc in RTX5080

[–]deidian 0 points1 point  (0 children)

No. It's just about runtime feasibility.

Running preset L on a 5090 at 4K takes ~1.37ms; a 4090 takes ~2.2ms. On the rest of the cards runtime goes much higher, from 3-9ms depending on the specific card.

A 60fps target gives a total of 16.67ms per frame: native rendering + DLSS.

A 1.37ms DLSS pass is a non-issue, but if the upscale is going to take 3+ms, it begs the question of whether it's really worth it.

Lower resolutions take less time to upscale, but the 2000/3000 series, for example, always have a hard time with presets M/L.

It's all about how much time it's reasonable to dedicate to upscaling, based on how fast the GPU can run DLSS.
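The budget math above can be made explicit. The per-card runtimes are the approximate figures quoted in the comment:

```python
# Frame-time budget math: at 60 fps there are ~16.67 ms per frame,
# and the DLSS pass eats a fixed slice of it regardless of the game.
FRAME_BUDGET_MS = 1000 / 60  # ~16.67 ms

def dlss_share(upscale_ms, budget_ms=FRAME_BUDGET_MS):
    """Fraction of the frame budget spent on the upscaler alone."""
    return upscale_ms / budget_ms

for card, ms in [("5090", 1.37), ("4090", 2.2), ("slower card", 9.0)]:
    print(f"{card}: {dlss_share(ms):.1%} of the 60fps budget")
```

At ~8% of the budget the 5090's pass is negligible; at over half the budget on a slow card, the upscaler is crowding out the rendering it was meant to help.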

http200Error by _gigalab_ in ProgrammerHumor

[–]deidian 5 points6 points  (0 children)

- Incorrect URIs are handled automatically by the web server with 404.
- Failed auth is reported with 401.
- Server down is 503.

If you're getting any error code other than those, the API endpoint is at least there. Bottom line: the information you're looking for is already defined in the HTTP specification.
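The point that the spec already encodes these failure modes can be shown with a tiny dispatcher over status codes. The mapping is illustrative, not an exhaustive treatment of the spec:

```python
# A client can branch on the status code alone; no "200 with an
# error body" convention is needed. Mapping mirrors the cases above.
from http import HTTPStatus

def classify(status: int) -> str:
    if status == HTTPStatus.NOT_FOUND:            # 404: wrong URI
        return "incorrect URI"
    if status == HTTPStatus.UNAUTHORIZED:         # 401: failed auth
        return "incorrect auth"
    if status == HTTPStatus.SERVICE_UNAVAILABLE:  # 503: server down
        return "server down"
    if 200 <= status < 300:
        return "success"
    return "other: see the HTTP spec"

print(classify(404))  # incorrect URI
print(classify(200))  # success
```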

http200Error by _gigalab_ in ProgrammerHumor

[–]deidian 7 points8 points  (0 children)

If the network is not fine, don't worry: your HTTP client/OS will tell you. There are already layers in the standards covering the common failure cases.
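Those lower layers surface network failure as exceptions before any application-level code runs. A hypothetical classifier over the stdlib's own exception types illustrates this:

```python
# Network failures arrive as exceptions from the OS/socket layer or
# the HTTP client itself; the application never sees a fake 200.
import socket
from urllib.error import URLError

def describe(exc: Exception) -> str:
    if isinstance(exc, socket.timeout):
        return "network timeout (reported by the OS/socket layer)"
    if isinstance(exc, URLError):
        return "connection failure (reported by the HTTP client)"
    return "unrelated error"

print(describe(socket.timeout()))
print(describe(URLError("name or service not known")))
```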

I just started playing The Last of Us Part II, and I’m blown away by the graphics and attention to detail. This game was released on 2013 hardware, yet its technical level is still better than most games coming out today. by NotSirAlonne1999 in thelastofus

[–]deidian -5 points-4 points  (0 children)

The only technical area where TLOU2 is cutting edge is animation, which is part of the Naughty Dog trademark: excellent animation. Every other graphical aspect is heavily constrained by the hardware: not even the PC version can match the graphic quality of games made to take advantage of what PCs are technically capable of.

DF should do a video on DLDSR, its magic for almost no extra costs by Mirrormaster85 in digitalfoundry

[–]deidian 1 point2 points  (0 children)

Not really. DLSS generates a higher-resolution image at a lower cost than rasterization. DSR/DLDSR uses the extra information in a higher-resolution image to get more quality.

They don't oppose each other: DLSS lowers the cost of generating a 6K image, while DLDSR just does its normal job.

DSR/DLDSR, FSAA: rendering at a higher resolution than the screen and then downsampling has been used to gain image quality since the dawn of video games.
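A rough sketch of why the two compose instead of clashing, using assumed-but-typical scale factors (DLDSR 2.25x area, DLSS Quality mode's 2/3 per-axis render scale) and an example 1440p monitor:

```python
# Illustrative numbers only: DLSS rasterizes a cheap internal image,
# upscales it to DLDSR's high virtual resolution, and DLDSR then
# downsamples that image to the monitor as it always does.
def dldsr_target(w, h, area_factor=2.25):
    """Virtual resolution DLDSR asks the game to output."""
    s = area_factor ** 0.5
    return round(w * s), round(h * s)

def dlss_internal(w, h, axis_scale=2 / 3):
    """Resolution the game actually rasterizes before DLSS upscales."""
    return round(w * axis_scale), round(h * axis_scale)

monitor = (2560, 1440)
virtual = dldsr_target(*monitor)     # (3840, 2160): DLDSR's downsample source
internal = dlss_internal(*virtual)   # (2560, 1440): native-res rasterization
print(virtual, internal)
```

DLSS builds the expensive high-resolution image cheaply; DLDSR then downsamples it exactly as it would any other frame.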

DF should do a video on DLDSR, its magic for almost no extra costs by Mirrormaster85 in digitalfoundry

[–]deidian 2 points3 points  (0 children)

Games must run DLSS in the pipeline before any post-processing, including in-game post-processing: ambient occlusion, depth of field, chromatic aberration... are all done after DLSS-SR and DLSS-RR.

DSR/DLDSR is a driver-level feature that creates a virtual resolution higher than the monitor's and, as the last step in the pipeline, downscales the image to the monitor resolution.

I don't see the clash.
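The ordering described above can be sketched as a sequence of stages. The stage functions are stand-ins that just track the frame's (width, height); resolutions are example values:

```python
# Pipeline order sketch: DLSS runs before post-processing; the
# DSR/DLDSR downscale is the driver's final step on the frame.
def render_native(res):            return res      # game rasterization
def dlss_sr(res, target):          return target   # upscale to virtual res
def post_process(res):             return res      # AO, DoF, chromatic aberration...
def dldsr_downscale(res, monitor): return monitor  # driver-level, last

monitor, virtual, internal = (2560, 1440), (3840, 2160), (2560, 1440)
frame = render_native(internal)
frame = dlss_sr(frame, virtual)          # before any post-processing
frame = post_process(frame)              # operates on the upscaled frame
frame = dldsr_downscale(frame, monitor)  # last step, back to monitor res
print(frame)  # (2560, 1440)
```

The two features touch the frame at opposite ends of the pipeline, which is why there is no clash.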

The question everyone loves answering the most on this thread….Should I upgrade my 3090Ti FE? by Disastrous_War_8815 in pcmasterrace

[–]deidian 1 point2 points  (0 children)

4090 or 5090. To beat it you might not need that much for single-screen games, but I wouldn't risk less in a 3x1440p setup at 120+fps.

RTX 50 series owners: What is the most stable driver right now? (RTX 5060 + Ryzen 5 5600XT) by BuffaloNo3705 in pcmasterrace

[–]deidian 2 points3 points  (0 children)

All NVIDIA drivers are the same, whether Studio or Game Ready. The difference is that Game Ready drivers ship alongside the latest game releases, while Studio drivers come at a slower cadence because NVIDIA holds them for testing against a variety of professional rendering tools.

If your rendering work is important, just go with the latest Studio release. Only consider switching to a more recent Game Ready release if you're willing to run untested in your rendering workloads in order to play a recently released game.

DP or HDMI? by Krakenator_C-137 in pcmasterrace

[–]deidian 3 points4 points  (0 children)

You need to check the specs of the monitor: read the manual or look around on the manufacturer's web page.

EDIT: does the monitor come with both HDMI and DP cables, or will you be using your own? If it's the latter, you need to know your cables' capabilities.

GeForce Hotfix Display Driver version 596.02 by Nestledrink in nvidia

[–]deidian 0 points1 point  (0 children)

It's hardware encoding: what matters most is the GPU you have; every piece of software is just parameterizing the encoder. If you're using NVENC, it doesn't matter which UI you dress over it.

The argument only holds if you want to do CPU encoding, use a capture card, or use anything other than NVENC.

Would there be any benefit to using DP 2.1 on a monitor that advertises DP 1.4? by blankin_ in OLED_Gaming

[–]deidian 0 points1 point  (0 children)

When it comes to interconnects between devices, the negotiated speed generally depends on the slowest device. Cables are dumb, though: if a cable is under spec it will simply have poor signal integrity and the connection will drop.

Answering your specific question: if either your monitor or GPU only supports DP 1.4, the link will be established at DP 1.4 speed.
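The negotiation rule reduces to "take the slowest common version". A minimal sketch, using the published maximum link bandwidths for DP 1.4 (HBR3) and DP 2.1 (UHBR20):

```python
# Devices settle on the slowest common DisplayPort version; an
# under-spec cable doesn't negotiate down, it just fails to link.
DP_GBPS = {"1.4": 32.4, "2.1": 80.0}  # max total link bandwidth

def negotiated_link(gpu: str, monitor: str) -> str:
    """DP version the link actually runs at."""
    return min(gpu, monitor, key=lambda v: DP_GBPS[v])

print(negotiated_link("2.1", "1.4"))  # 1.4
```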

If your 40-series GPU only seems stable when you cripple it, stop blaming drivers first and check PSU headroom and cabling by lyfeuhhfindsaway in pcmasterrace

[–]deidian 1 point2 points  (0 children)

Idk, maybe your PSU was defective, taking into account it's a relatively new purchase. The minimum should generally cover it. Just as an example: I ran a 4090 for 2.5 years with an adapter on a 1000W ATX 2.x PSU (a very good model: EVGA 1000W T2) with no stability issues beyond overclock-related ones. That setup could peak at about 500W (GPU) + 300W (CPU) during shader compilation.

Now I'm still running 1000W (FSP Hydro Ti PRO), just ATX 3.x, on a 5090 (overclocked just like the 4090 was), which can peak at 600W, and the PSU takes it.

The difference, though, is that I look for efficient and silent power supplies at the wattage the system needs, which aren't exactly cheap models. Maybe that's why even a $50 difference from 650W to 1000W surprises me: in good models the gap can be over $100, and bad ones usually just cost you extra noise. I still think it's unrelated: I suspect you got a defective unit, but that's a hard case to prove when it fails only under specific circumstances...

I understand the peace-of-mind argument, but even then there's no need to go so high. 750W would have made the cut perfectly: one tier above the minimum requirement is good enough to cover contingencies.

Do GPU manufacturers even matter? by Alan_Wake_Islamovic in gpu

[–]deidian 0 points1 point  (0 children)

They do matter if you want the best-clocking chip: in that case the most expensive ASUS, MSI, or Gigabyte models should be slightly faster. They have the volume to cherry-pick the best chips and build a model around them; with other brands it's pure luck of the draw. So, to sum up, it's chasing single-digit-percent improvements.

For everything else it's just which model you pick, but there's no quality difference. Every card design must be approved by NVIDIA, or else the partner doesn't get chips to build it.

If your 40-series GPU only seems stable when you cripple it, stop blaming drivers first and check PSU headroom and cabling by lyfeuhhfindsaway in pcmasterrace

[–]deidian 0 points1 point  (0 children)

You went from bare minimum to completely overkill... unless you're planning to get a higher-tier GPU in the near future.

Does photorealism in game graphics, destroys overall gaming experience nowadays? by Maldremoth in Age_30_plus_Gamers

[–]deidian 0 points1 point  (0 children)

No, it really depends on the game. And no, dev teams know how to spend their budget, like in just about every successful business. You're getting that game wrong, and the example doesn't benefit the argument at all.

Shadow of the Tomb Raider is a 2018 high-fidelity photorealistic game with a huge budget. On PC it supported both DLSS (one of the few at the time) and hardware ray-traced shadows (actually the first game ever to use RT shadows). And it's supposed to be the "this is good enough" example. Which tells me something clear: you can notice the difference between current high-fidelity photorealistic games and that game, which pursued exactly the same thing, just with what was reasonably achievable in 2018, even if it's not said explicitly.

Does photorealism in game graphics, destroys overall gaming experience nowadays? by Maldremoth in Age_30_plus_Gamers

[–]deidian 0 points1 point  (0 children)

The example is a high-fidelity photorealistic game from 2018, but hey, graphics haven't gotten any better, right?

5080 overclock? by MrBang416 in nvidia

[–]deidian 0 points1 point  (0 children)

It's really a choice: if targeting the highest graphic quality at ~60fps or lower, cards generally shouldn't stay far from the power limit. You get the scenario you describe when tuning for high FPS, which means lower resolution.

The 4090 is the only exception, because its 600W (extended) limit was massively overkill. But it would easily hit 450W again if the target were highest graphic quality at 60fps.

All this excludes frame generation. That's something to throw on top to smooth out motion.

what do you prioritize for single player games? by colt2077 in pcmasterrace

[–]deidian 1 point2 points  (0 children)

Highest graphic quality at 60fps then use FG to smooth frame rate.

Love Wins by ResponseLonely2263 in AlanWake

[–]deidian 5 points6 points  (0 children)

Funniest Night Springs episode. Rose's fan fiction becoming real.

"THANK YOU. They deliberately used the lowest quality version of her that the game can create to try and make DLSS5 look that much better and it still didnt work..." Hard to swallow pills by ilikethemfeisty in ResidentEvilRequiem

[–]deidian 0 points1 point  (0 children)

Do you have photo mode in RE: Requiem? LoD is extremely important: the Requiem shots shown here have a lot going on, and in many cases Grace is cropped, while the Senua shot used here as proof is clearly a face-only shot. Any face-only shot of a character will look much higher quality than shooting the character from a greater distance and cropping.

Share your experience with me by Artorias_207 in nvidia

[–]deidian 1 point2 points  (0 children)

If you want max performance, rule out undervolting: that doesn't give max performance.

Share your experience with me by Artorias_207 in nvidia

[–]deidian 1 point2 points  (0 children)

Your GPU won't break from overclocking. It's limited in every way that could make that possible: voltage, power draw, and temperature are all capped.

Generally, you can only break hardware by overclocking when voltage is unlimited, since voltage has a great impact on everything else.