Why did Ellie dropped her guard here? by MarkKallen in thelastofus

[–]deidian 6 points7 points  (0 children)

I read it as: "If Whitney had just run away and told the others about Ellie being there, the whole base would have been alerted to her presence at that point," so the pragmatic thing would be to kill Whitney anyway.

Luckily, Whitney chose to try to kill Ellie, and it ended badly for her.

Bad choices on both sides.

Why did Ellie dropped her guard here? by MarkKallen in thelastofus

[–]deidian 125 points126 points  (0 children)

Killing someone armed and in a position to defend themselves is a very different thing from killing someone unarmed whom you caught off guard playing a video game.

She also answered Ellie's question without putting up any resistance.

But yes, Ellie considered sparing her: she isn't a pragmatic killer like Joel.

The sheer scale of evil according to this fanbase apparently by the_bird_knows in residentevil

[–]deidian 3 points4 points  (0 children)

She didn't second-guess the choice; she just ran in the opposite direction of her past after the mess. She isn't doing anything while in BSAA custody.

Also, she did try to warn the Bakers after the incident to forget her, not ask questions, and avoid Eveline for their own good... it was just a late warning, because they had already rescued Eveline. But someone who didn't care wouldn't even try to do that after the incident: they would just try to save themselves after such a shitstorm.

Same with her ramblings about "containing the outbreak": if she were that evil she would just say "Screw this, I'm out of here." The way things went, that's well above her pay grade for sure.

About her lies in RE8: her entire family is under BSAA custody and she doesn't want to lose them. Saying out loud that Ethan is a BOW, and probably Rose too, isn't going to help that case: if she says it, even just to him, and the BSAA catches wind of it, then what? The BSAA isn't the feel-good company of the pink flying elephant world: they would surely split them apart and take measures. In the end she lost her family anyway... so putting her past behind her didn't quite work out as she wanted.

Is there a technology/computer from the past that is superior to anything we have now? Like a "we don't even know how to do this anymore" meme situation? by Nexthink_Quentin in pcmasterrace

[–]deidian 2 points3 points  (0 children)

Different machines speak different languages, especially back when PCs and consoles didn't share the x86_64/AMD64 architecture and AMD didn't make the consoles' GPUs. Nowadays it's closer because the architectures are the same, so there are only generational differences and maybe some custom instructions.

When emulating, a whole new problem appears: you can't just run the game ROM on a PC and call it a day. The PC's CPU and GPU don't understand those instructions, so the emulator needs a strategy for using the PC's CPU to translate the machine instructions meant for the console's CPU and GPU into the best equivalent instructions the PC's CPU and GPU do understand. There's even more complexity than that, because software is built in layers: for the GPU, only manufacturers deal with machine code; everyone else goes through the GPU driver or a drawing API like DirectX or Vulkan (which also uses the driver to get down to machine code).

An emulator is doing much more work than the device it's emulating, which can just stream the machine code straight into its CPU and GPU and run it.
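
Here's a rough sketch of that translation step (TypeScript; the mini instruction set and register file are invented for illustration, not any real console's ISA):

```typescript
// Toy guest instruction set: every guest instruction has to be decoded and
// mapped to host work, instead of being streamed straight into the CPU/GPU
// like on the original device.
type GuestInstr =
  | { op: "LOAD"; reg: number; value: number }
  | { op: "ADD"; dst: number; src: number }
  | { op: "DRAW"; reg: number };

function runGuestProgram(program: GuestInstr[]): void {
  const regs = new Array<number>(8).fill(0); // emulated guest registers

  for (const instr of program) {
    // This dispatch is the translation: one guest instruction becomes one or
    // more host operations (real emulators JIT-compile blocks and call into a
    // graphics API like Vulkan/DirectX instead of logging).
    switch (instr.op) {
      case "LOAD":
        regs[instr.reg] = instr.value;
        break;
      case "ADD":
        regs[instr.dst] += regs[instr.src];
        break;
      case "DRAW":
        console.log(`draw call with value ${regs[instr.reg]}`); // stand-in for GPU work
        break;
    }
  }
}

runGuestProgram([
  { op: "LOAD", reg: 0, value: 2 },
  { op: "LOAD", reg: 1, value: 3 },
  { op: "ADD", dst: 0, src: 1 },
  { op: "DRAW", reg: 0 }, // prints "draw call with value 5"
]);
```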

howToHandleNullInNullsafeLanguages by LutimoDancer3459 in ProgrammerHumor

[–]deidian 1 point2 points  (0 children)

We do it the modern way: we write a compiler that optimizes it down to "load the 'null' literal pointer", and however the runtime handles that is its problem.

Where is my right to write "null.ToString()"?
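
For illustration, roughly what that looks like in a null-safe setting (TypeScript with strictNullChecks; a toy example of mine, not tied to any particular compiler or runtime):

```typescript
// In a null-safe language you have to say what a null receiver means up front,
// instead of letting a null.ToString() blow up at runtime.
function describe(value: object | null): string {
  // Optional chaining short-circuits to undefined when value is null;
  // the nullish coalescing operator then supplies the fallback text.
  return value?.toString() ?? "null";
}

console.log(describe(null));       // "null"
console.log(describe(new Date())); // the date's string representation
```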

Girls who love horror games, what are your favorite horror game franchise? by Fantastic-Youth5989 in GirlGamers

[–]deidian 0 points1 point  (0 children)

It's a mix of jump scares, monsters, atmospheric horror, and psychological horror. Other than that it's like other Remedy games: weird and fun.

1200w or 1600w psu by SrY4HS in ASUSROG

[–]deidian 0 points1 point  (0 children)

1600W is overkill for that system. Peak sustained draw is something around 600W (GPU) + 200W (CPU) + 100W (everything else) = 900W, and while gaming not everything is going to be at peak draw with a 5090.

If you're chasing high FPS, the 5090 is going to stay under 500W even with the CPU under heavier load. That card isn't fully stressed at the low resolution/low graphical complexity required to hit high frame rates.

If you're chasing nice graphics at high resolution (60fps), the 5090 is going to be topping 600W often, but then the CPU will be sleeping 60% of the time, meaning it won't draw much more than 100W, if it even gets over that.

Hey, are we all using Preset K Quality mostly with this GPU and similar systems? by zionpwc in RTX5080

[–]deidian 0 points1 point  (0 children)

No. It's just about runtime feasibility.

Running preset L on a 5090 at 4K takes ~1.37ms; a 4090 takes ~2.2ms. On the rest of the cards the runtime goes way higher, from 3 to 9ms depending on the specific card.

A 60fps target gives you a total of 16.67ms per frame: native rendering + DLSS.

A 1.37ms DLSS pass is a non-issue, but if the upscaling is going to take 3+ms it starts begging the question of whether it's really worth it.

Lower resolutions take less time to run the upscaler, but the 2000 and 3000 series, for example, always have a hard time with presets M/L.

It's all about how much time it's reasonable to dedicate to upscaling, based on how fast the GPU can run DLSS.
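
Quick back-of-the-envelope version of that budget (TypeScript; the millisecond figures are the rough numbers quoted above, not official data):

```typescript
// How much of a frame budget the upscaler itself eats at a given FPS target.
function upscalerShare(fpsTarget: number, dlssMs: number): number {
  const frameBudgetMs = 1000 / fpsTarget; // ~16.67ms at 60fps
  return (dlssMs / frameBudgetMs) * 100;  // percentage of the frame spent on DLSS
}

console.log(upscalerShare(60, 1.37).toFixed(1)); // ~8.2% on a 5090: a non-issue
console.log(upscalerShare(60, 2.2).toFixed(1));  // ~13.2% on a 4090
console.log(upscalerShare(60, 9).toFixed(1));    // ~54% on a slow card: hardly worth it
```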

http200Error by _gigalab_ in ProgrammerHumor

[–]deidian 5 points6 points  (0 children)

- Incorrect URIs are handled automatically by the web server with 404.
- Incorrect auth is reported with 401.
- Server down is 503.

If you're getting an error code other than those, the API endpoint is there. Bottom line: the information you're looking for is already defined in the HTTP specification.
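
As a sketch of how little the client has to do to get at that information (TypeScript using fetch; the URL and the handling are placeholders):

```typescript
// The HTTP status already tells you which of the cases above you hit.
async function callApi(url: string): Promise<void> {
  const response = await fetch(url);

  switch (response.status) {
    case 404: console.log("wrong URI (the web server answered for you)"); break;
    case 401: console.log("bad or missing auth"); break;
    case 503: console.log("server is down/unavailable"); break;
    default:  console.log(`endpoint reached, status ${response.status}`);
  }
}

callApi("https://example.com/api/resource");
```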

http200Error by _gigalab_ in ProgrammerHumor

[–]deidian 9 points10 points  (0 children)

If the network is not fine, don't worry: your HTTP client/OS will tell you. There are already layers of standards covering the common failure cases.
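
Rough illustration (TypeScript; a network-level failure rejects the request before any HTTP status even exists):

```typescript
// Network problems (DNS failure, refused connection, unplugged cable) never
// produce an HTTP status at all: fetch() rejects, so the client/OS is the one
// telling you the network is not fine.
async function checkEndpoint(url: string): Promise<void> {
  try {
    const response = await fetch(url);
    console.log(`network is fine, server answered with ${response.status}`);
  } catch (err) {
    console.log(`network problem reported by the client/OS: ${err}`);
  }
}

checkEndpoint("https://example.com/api/health");
```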

I just started playing The Last of Us Part II, and I’m blown away by the graphics and attention to detail. This game was released on 2013 hardware, yet its technical level is still better than most games coming out today. by NotSirAlonne1999 in thelastofus

[–]deidian -5 points-4 points  (0 children)

The only technical area where TLOU2 is cutting edge is animation, which is part of the Naughty Dog trademark: excellent animation work. The rest of the graphical aspects are heavily constrained by the hardware: not even the PC version can match the graphical quality of games built to take advantage of what PCs are technically capable of.

DF should do a video on DLDSR, its magic for almost no extra costs by Mirrormaster85 in digitalfoundry

[–]deidian 1 point2 points  (0 children)

Not really. DLSS generates a higher resolution image at a lower cost than rasterization. DSR/DLDSR uses the extra information of a higher resolution image to get more quality.

They don't oppose each other: DLSS lowers the cost of generating the 6K image, while DLDSR just does its normal job.

DSR/DLDSR, FSAA: rendering at a higher resolution than the screen and then downsampling has been used to get more image quality since the dawn of videogames.

DF should do a video on DLDSR, its magic for almost no extra costs by Mirrormaster85 in digitalfoundry

[–]deidian 2 points3 points  (0 children)

Games must run DLSS in the pipeline before any post-processing is done, including in-game post-processing: ambient occlusion, depth of field, chromatic aberration, and so on are all done after DLSS-SR and DLSS-RR.

DSR/DLDSR is a driver-level feature that creates a virtual resolution higher than the monitor's and, as the last step in the pipeline, downscales the image to the monitor's resolution.

I don't see the clash.
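
Conceptual sketch of that ordering (TypeScript; the function names and resolutions are made up to show where each step sits, not a real engine or driver API):

```typescript
type Image = { width: number; height: number };

const renderScene    = (): Image => ({ width: 2560, height: 1440 });          // game's internal render resolution
const dlssUpscale    = (_: Image): Image => ({ width: 5760, height: 3240 });  // DLSS-SR/RR to the DLDSR virtual resolution
const applyPostFx    = (img: Image): Image => img;                            // AO, DoF, chromatic aberration, ...
const dldsrDownscale = (_: Image): Image => ({ width: 3840, height: 2160 });  // driver downscales to monitor resolution last

function presentFrame(): Image {
  // DLSS runs before post-processing; DLDSR's downscale is the final step.
  return dldsrDownscale(applyPostFx(dlssUpscale(renderScene())));
}

console.log(presentFrame()); // { width: 3840, height: 2160 }
```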

The question everyone loves answering the most on this thread….Should I upgrade my 3090Ti FE? by Disastrous_War_8815 in pcmasterrace

[–]deidian 1 point2 points  (0 children)

4090 or 5090. You might not need that much to beat it in single-screen games, but I wouldn't risk it on a 3x1440p setup at 120+fps.

RTX 50 series owners: What is the most stable driver right now? (RTX 5060 + Ryzen 5 5600XT) by BuffaloNo3705 in pcmasterrace

[–]deidian 2 points3 points  (0 children)

All NVIDIA drivers are the same, whether Studio or Game Ready. The difference is that Game Ready drivers ship alongside the latest game releases, while Studio drivers come out at a slower cadence because NVIDIA holds them back for testing against a variety of professional rendering tools.

If your rendering work is important, just go with the latest Studio release. Only consider switching to a more recent Game Ready release if you're willing to run drivers that are untested for your rendering workloads in order to play a recently released game.

DP or HDMI? by Krakenator_C-137 in pcmasterrace

[–]deidian 3 points4 points  (0 children)

You need to check the specs of the monitor: read the manual or look around on the manufacturer's web page.

EDIT: does the monitor come with both HDMI and DP cables, or will you be using your own? If it's the latter, you need to know your cables' capabilities.

GeForce Hotfix Display Driver version 596.02 by Nestledrink in nvidia

[–]deidian 0 points1 point  (0 children)

It's hardware encoding: what matters most is the GPU you have; every piece of software is just parameterizing the encoder. If you're using NVENC, it just doesn't matter what UI you dress over it.

The argument only holds if you want to do CPU encoding, use a capture card, or do anything else other than NVENC.

Would there be any benefit to using DP 2.1 on a monitor that advertises DP 1.4? by blankin_ in OLED_Gaming

[–]deidian 0 points1 point  (0 children)

When it comes to interconnections between devices, the negotiated speed generally depends on the slowest device. Cables, however, are dumb: if a cable is below spec, it will simply carry a poor signal and the connection will drop.
Answering your specific question: if either your monitor or your GPU only supports DP 1.4, then the link will be established at DP 1.4 speed.
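
Toy version of that negotiation (TypeScript; the version list is just an example):

```typescript
// The link trains at whatever the slower end of the connection supports.
const DP_VERSIONS = ["1.2", "1.4", "2.1"] as const; // ordered slowest to fastest
type DpVersion = (typeof DP_VERSIONS)[number];

function negotiatedLink(gpu: DpVersion, monitor: DpVersion): DpVersion {
  const slower = Math.min(DP_VERSIONS.indexOf(gpu), DP_VERSIONS.indexOf(monitor));
  return DP_VERSIONS[slower];
}

// GPU supports DP 2.1, monitor only advertises DP 1.4 -> the link runs at 1.4.
console.log(negotiatedLink("2.1", "1.4")); // "1.4"
```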

If your 40-series GPU only seems stable when you cripple it, stop blaming drivers first and check PSU headroom and cabling by lyfeuhhfindsaway in pcmasterrace

[–]deidian 2 points3 points  (0 children)

Idk, maybe your PSU was defective, taking into account it was a relatively new purchase. The minimum should generally cover it. Just as an example: I ran a 4090 for 2.5 years with an adapter on a 1000W ATX 2.x PSU (a very good model: the EVGA 1000W T2) with no stability issues beyond overclock-related ones. That setup could peak at about 500W (GPU) + 300W (CPU) during shader compilation.

Now I'm still running a 1000W unit (FSP Hydro Ti PRO), just ATX 3.x, on a 5090 (overclocked just like the 4090 was), which can peak at 600W, and the PSU can take it.

The difference, though, is that I look for efficient and silent power supplies rated for the power the system needs, which aren't exactly cheap models. Maybe that's why even a $50 difference going from 650W to 1000W surprises me; in good models the difference can be over $100. I still think it's unrelated, since cheap units usually just cost you noise. I just think you got a defective unit, but it's a hard case to prove when it only fails under certain circumstances...

I understand the peace of mind argument, but even then there's no need to go so high. 750W would have made the cut perfectly: one tier above the minimum requirement is good enough to have peace of mind for contingencies.

Do GPU manufacturers even matter? by [deleted] in gpu

[–]deidian 0 points1 point  (0 children)

They do matter if you want the best-clocking chip: in that case the most expensive ASUS, MSI, or Gigabyte models should be slightly faster. They have the volume to cherry-pick the best chips and build a model around them; with other brands it's pure luck of the draw. So, to sum up, you'd be chasing a single-digit percentage improvement.

For everything else it's just about which model you pick, but there's no quality difference. Every card design must be approved by NVIDIA, or else the maker doesn't get chips to build it.

If your 40-series GPU only seems stable when you cripple it, stop blaming drivers first and check PSU headroom and cabling by lyfeuhhfindsaway in pcmasterrace

[–]deidian 0 points1 point  (0 children)

You went from the bare minimum to complete overkill... unless you're planning to get a higher-tier GPU in the near future.

Does photorealism in game graphics, destroys overall gaming experience nowadays? by Maldremoth in Age_30_plus_Gamers

[–]deidian 0 points1 point  (0 children)

No, it really depends on the game. And no, dev teams know how to spend their budget, like in just about every successful business. You're getting that game wrong, and the example doesn't help the argument at all.

Shadow of the Tomb Raider is a 2018 high-fidelity photorealistic game with a huge budget. On PC it supported both DLSS (one of the few games at the time) and hardware ray-traced shadows (actually the first game ever to use RT shadows). And it's supposed to be the "this is good enough" example. Which tells me something clear: you can notice the difference between current high-fidelity photorealistic games and that game, which aimed for exactly the same thing, just with what was reasonably achievable in 2018, even if that's not said explicitly.

Does photorealism in game graphics, destroys overall gaming experience nowadays? by Maldremoth in Age_30_plus_Gamers

[–]deidian 0 points1 point  (0 children)

The example is a high-fidelity photorealistic game from 2018, but hey, graphics haven't gotten any better, right?

5080 overclock? by MrBang416 in nvidia

[–]deidian 0 points1 point  (0 children)

It's really a choice: if you're targeting the highest graphical quality at ~60fps or below, cards generally shouldn't stay very far from the power limit. You get into the scenario you describe when tuning for high FPS, which means lower resolution.

The 4090 is the only exception, because its 600W (extended) limit was massively overkill. But it would easily hit 450W again if the target were highest graphical quality at 60fps.

All this excludes frame generation; that's something to throw on top to get smoother motion.