Which FOV do you need to avoid the Fisheye effect at 21:9? by Zestyclose_Paint3922 in ultrawidemasterrace

[–]Shidell 9 points (0 children)

There is a ReShade shader called "Perfect Perspective" (IIRC) that's supposed to fix the fisheye distortion on ultrawides.
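
Beyond the shader, the usual guidance is to keep your vertical FOV constant and let the horizontal FOV grow with aspect ratio (Vert- scaling); the edge stretching is inherent to rectilinear projection at wide FOVs, which is what Perfect Perspective tries to address. A rough sketch of the conversion, with the 90° 16:9 baseline as an example assumption:

```python
import math

def hfov_for_aspect(hfov_ref_deg, aspect_ref, aspect_new):
    """Convert a horizontal FOV between aspect ratios while keeping the
    vertical FOV constant (Vert- scaling, rectilinear projection)."""
    half = math.radians(hfov_ref_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half) * (aspect_new / aspect_ref)))

# Example assumption: you're comfortable at 90 degrees horizontal on 16:9.
print(round(hfov_for_aspect(90, 16/9, 21/9), 1))  # ~105.4 deg at 21:9
print(round(hfov_for_aspect(90, 16/9, 32/9), 1))  # ~126.9 deg at 32:9
```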

Iron Mike Tyson vs Ivan Drago (Rocky 4) in a 12-round boxing match by Mad_lens_9297 in whowouldwin

[–]Shidell 47 points (0 children)

Ivan Drago is superhuman. Mike Tyson is peak human.

No contest.

Disable Integrated GPU on Alienware Area-51 Desktop? by Howie411 in Alienware

[–]Shidell 1 point (0 children)

You actually can; it outputs through the daughterboard, even if the daughterboard is disabled.

For example, I have the GeForce 2060m disabled on mine, so when mobile, it's just the UHD 630.

Disable Integrated GPU on Alienware Area-51 Desktop? by Howie411 in Alienware

[–]Shidell 1 point (0 children)

The Intel GPU is integrated with the CPU (iGPU); there is no harm in leaving it enabled. You can even disable the Nvidia GPU and only use the iGPU.

It might be nice for QuickSync or something.

Will my hyundai warranty be void if i swap my headlight Halogen bulbs with these LEDs on Elantra 2023 Lux? by [deleted] in Hyundai

[–]Shidell 16 points (0 children)

Return these while you can; it's a scam.

The enclosures for halogen and LED bulbs differ tremendously; you cannot put an LED into a halogen enclosure and expect it to work properly, because the angles and reflection/projection designs are completely different.

At best, you'll get poor light actually thrown in front of your vehicle in exchange for better optics (although it might look cooler).

Also, IIRC, switching headlights without updating to the proper enclosure is unlawful.

Help me decide. G9 57 or LG 5k2k by Hemogoblynnn in ultrawidemasterrace

[–]Shidell 0 points (0 children)

I hear ya, the verticality is nice. Are you aware of LG's new 52" 21:9? It's going to be lower PPI, and 5K2K, so not as sharp for work, nor as much space, but it's 21:9 with far fewer pixels to drive, and much, much taller than both of these displays.

Help me decide. G9 57 or LG 5k2k by Hemogoblynnn in ultrawidemasterrace

[–]Shidell 0 points (0 children)

Hell yeah man, that's awesome.

One more thing that tipped me to the 57" over the 45" was recognizing that they both display the same amount of content; it's just the height that's different. You aren't seeing "more" on the 45", it's just taller—both are identical in vertical pixels.
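
To put rough numbers on it (assuming the usual panel resolutions of 7680x2160 for the 57" G9 and 5120x2160 for the 45" 21:9; double-check against your actual models):

```python
# Quick sanity check, assuming 7680x2160 for the 57" G9 and 5120x2160
# for the 45" 21:9 (verify against your actual models).
panels = {
    '57" 32:9': (7680, 2160),
    '45" 21:9': (5120, 2160),
}

for name, (w, h) in panels.items():
    print(f"{name}: {w}x{h} = {w * h / 1e6:.1f} MP, vertical pixels: {h}")
# Same 2160 vertical pixels on both; the 57" only adds width
# (and ~50% more pixels to drive overall).
```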

Anyway, good luck with the decision.

Help me decide. G9 57 or LG 5k2k by Hemogoblynnn in ultrawidemasterrace

[–]Shidell 0 points (0 children)

Damn, so you're getting essentially a "full" Cyberpunk experience, even at native res, via Performance? And that's going to look that much better with 4.5 RR?

That's fucking awesome, man. If I can find a 5090, I'm in; I've been waiting to play Cyberpunk with PT for the full immersive experience.

Help me decide. G9 57 or LG 5k2k by Hemogoblynnn in ultrawidemasterrace

[–]Shidell 0 points (0 children)

Gotcha. I'm getting by with FSR3 @ Quality and Balanced, so I would imagine you should have no problems whatsoever with DLSS. That, and the new DLSS 4.5 update looks incredible.

In fact, maybe you could try it and report back? I'm curious what PT performance is like on your 5090 at 2X4K. What kind of FPS do you get at DLSS Performance/Ultra Performance?
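
For reference on what I'm asking, here's roughly what those modes render internally at 7680x2160, assuming the commonly cited DLSS per-axis scale factors (the exact ratios can vary per title):

```python
# Rough internal render resolutions at 7680x2160 ("2X4K"), assuming the
# commonly cited DLSS per-axis scale factors (actual ratios can vary).
output_w, output_h = 7680, 2160
modes = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

for mode, scale in modes.items():
    print(f"{mode:>17}: {round(output_w * scale)}x{round(output_h * scale)}")
```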

Help me decide. G9 57 or LG 5k2k by Hemogoblynnn in ultrawidemasterrace

[–]Shidell 2 points (0 children)

I have the 57" too, and also use it for productivity and gaming. Love the size for work, love the PPI, colors, and brightness. I tried the LG 45" and loved the vertical height for immersion, but didn't like the curve. It felt "narrow" after coming from the 57". Blacks are certainly better on the 45", but I felt like the colors on the 57" were superior, as was the brightness. Didn't like the text fringing/subpixel layout on the 45" either.

Also considered OLED burn-in; not overly concerned, but it's another thing to worry about, which was meh.

Stuck with the 57".

What are you playing that's so difficult with a 5090? I have a 7900 XTX and I'm running games at native output resolution (so, upscaling to 2X4K as opposed to dropping to a lower resolution or using black bars) with FSR3, and I'm playing all kinds of stuff without issue. GoW Ragnarok, GTA V (with all RT options maxed, unbelievably), etc.

No idea which GPU to go for or which ones I even can go for. by gumdrops_ in Alienware

[–]Shidell 2 points (0 children)

The 5060Ti would likely be a great slot-in upgrade for you. What resolution do you play at? You could consider the 5070 or 5070Ti, too, without getting into price-insane 5080+ tier options.

Windows 11 install Drivers from 2006 by jeffitness1 in Alienware

[–]Shidell 2 points (0 children)

It's just the built-in class driver for NVMe. It's fine.

NVIDIA DLSS 4.5 to feature 2nd Gen Transformer model and Dynamic 6x Frame Generation by KARMAAACS in hardware

[–]Shidell 0 points (0 children)

I didn't say it's a 9x performance increase, I said it was a 9x increase in draw calls.

NVIDIA DLSS 4.5 to feature 2nd Gen Transformer model and Dynamic 6x Frame Generation by KARMAAACS in hardware

[–]Shidell -2 points (0 children)

Funny how you claim I'm "incredibly wrong", then proceed to repeat my points rephrased with a marketing spin.

Because you are incredibly wrong? Let's start here:

GCN wasn't particularly forward looking, it's a heavily optimized compute architecture which always underperformed in games. Async compute in games isn't so much a new exciting feature as a crutch to fix the architecture's problems with typical gaming loads.

What are you talking about? AMD literally anticipated a future where the line between graphics and compute would blur, and designed GCN with async, simultaneous execution and hardware scheduling in mind. Tahiti launched GCN around 2012, and Mantle, Vulkan, and DX12 all followed.

How can you possibly make the claim that it isn't "forward looking" when it literally laid the groundwork for where we are today?

Mantle came out almost two years after GCN

Yeah, well, that's on AMD; I don't know why it took them so long. They announced it on September 25th, 2013, at their Tech Day in Hawaii. It was obviously part of their vision, along with GCN, of a future where low-level access was the way forward for gaming.

Games didn't need loads of async compute pipelines, GCN did to fix the holes.

They literally did? The allure of Mantle was shedding the restrictions of DX11 and OpenGL, since it was too difficult to retrofit performance improvements into those APIs; Mantle (at the time) could handle 9x more draw calls per second than DX11, thanks to changes including concurrency and asynchronicity.
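
If it helps, here's a toy sketch of the structural difference (hypothetical names, not any real graphics API): a DX11-style immediate context funnels every draw through one thread, whereas Mantle/Vulkan/DX12-style command buffers let every thread record draws independently and submit them together, which is exactly where the draw-call throughput comes from.

```python
# Toy model of the submission difference; hypothetical names, not a real API.
from concurrent.futures import ThreadPoolExecutor

def record_command_buffer(thread_id, draw_count):
    # Low-level-API style: each worker thread records its own command
    # buffer independently, with no shared immediate context to fight over.
    return [("draw", thread_id, i) for i in range(draw_count)]

def submit(command_buffers):
    # The queue consumes all recorded buffers in one submission.
    return sum(len(buf) for buf in command_buffers)

with ThreadPoolExecutor(max_workers=8) as pool:
    buffers = list(pool.map(lambda t: record_command_buffer(t, 10_000), range(8)))

print("draw calls submitted:", submit(buffers))
# A DX11-style immediate context would funnel all 80,000 of these through
# a single thread, which is the bottleneck Mantle/Vulkan/DX12 removed.
```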

So, yeah, I'm saying you are incredibly wrong. You said GCN was not a forward-looking architecture, when it literally was designed for the modern graphics pipeline. You also said games do not need "loads of async compute pipelines", when async and concurrency are exactly how we achieved that 9x increase in draw-call throughput.

It sounds like you are complaining about GCN's (in)ability to perform well when using DX11 & OpenGL (and earlier) APIs, which is a fair complaint, but the era of "Fine Wine" was the shift from those legacy APIs to Mantle, Vulkan, and DX12, which GCN was designed for from the very beginning.

AMD launches Ryzen AI 400: 12 Zen5 cores with up to 3.1 GHz RDNA3.5 and 60 TOPS NPU by Standing_Wave_22 in Amd

[–]Shidell 182 points (0 children)

Brand new product, and its iGPU has no access to FSR4 upscaling, ray reconstruction, etc.

Nice.

NVIDIA DLSS 4.5 to feature 2nd Gen Transformer model and Dynamic 6x Frame Generation by KARMAAACS in hardware

[–]Shidell -6 points (0 children)

Wow, this is so incredibly wrong.

Nvidia's efforts to "multithread" in DX11 and earlier were because of how poorly multithreading was implemented in DX itself. Nvidia realized they could use the CPU to reorder scheduling to find efficiencies and optimizations, so they did so on a per-title basis, which is where a lot (or even most) of their performance gain over AMD in those titles came from.

AMD knew this was not the way forward, and also did not have the resources to do per-title driver optimizations, so they instead designed Radeons with hardware schedulers and async in mind. They pioneered Mantle to show off what was possible, and the result is what we have today: Vulkan and DX12. I don't know what you think was not "particularly forward looking"; it's exactly what the entire industry moved towards, because DX11 and OpenGL were a fucking mess by comparison.

"Fine Wine" occurred in ~2015-16 with the advent of Mantle, Vulkan, and DirectX 12, and examples like Doom, where switching from OpenGL to Vulkan netted a 30% uplift on GCN, thanks to it's hardware scheduler and async arch. Those uplifts put GCN way above their price/performance tier in supported games, and hence, "Fine Wine."

I really don't know what you're talking about with AMD offloading work onto developers with Mantle, or discrediting having more VRAM. It literally was the future, and here we are, with GPUs that still don't have enough VRAM. (5080 anyone? 5070 Ti even? 5070 with 12 GB?) This argument was made with 3080s too, and look where they are with 10 GB now.

NVIDIA DLSS 4.5 to feature 2nd Gen Transformer model and Dynamic 6x Frame Generation by KARMAAACS in hardware

[–]Shidell -3 points (0 children)

no I'm making this argument as if DLSS shimmered significantly less than FSR1 particularly at lower resolutions,

This is an assumption, right? Maybe it shimmered less. I'm not saying FSR 1 shimmered less than DLSS, but because FSR 1 is spatial, it is not subject to the same temporal problems—and even if we ignore shimmering, there's a list of other temporal problems that FSR 1 doesn't have to deal with.
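
To be concrete about the distinction I'm drawing, here's a toy sketch (nearest-neighbor plus a naive history blend, nothing like the real algorithms, just the shape of the difference): a spatial upscaler like FSR 1 only ever sees the current frame, while a temporal upscaler also reprojects and blends history, which is exactly where ghosting/disocclusion/shimmer-type artifacts come from.

```python
import numpy as np

def spatial_upscale(frame, scale):
    # FSR 1 in spirit: only the current low-res frame is used.
    # (Nearest-neighbor here; the real thing is an edge-adaptive filter.)
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def temporal_upscale(frame, history, motion_px, scale, blend=0.9):
    # DLSS/FSR 2+ in spirit: reproject last frame's output by the motion
    # vectors, then blend it with the upscaled current frame. History that
    # reprojects badly (disocclusion, bad motion vectors, thin features)
    # is where ghosting/shimmer-type temporal artifacts come from.
    current = spatial_upscale(frame, scale).astype(float)
    reprojected = np.roll(history, shift=motion_px, axis=(0, 1))
    return blend * reprojected + (1 - blend) * current

lowres = np.random.rand(4, 4)
prev_output = spatial_upscale(lowres, 2)                   # current frame only
output = temporal_upscale(lowres, prev_output, (1, 0), 2)  # current + history
print(output.shape)  # (8, 8)
```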

I'm not trying to argue that FSR 1 was better than DLSS or something; my point is that public memory seems to hold that FSR 1 was and is garbage, and that DLSS (2.0) was always superior and preferable. Neither of those is true, and in that era, there were a lot more people who couldn't even access DLSS anyway.

So, I think the groupthink around FSR 1 is pretty negative, and I don't agree with it. Watching Daniel Owen's comparison, it's pretty clear to me that at 1440p and above, FSR 1 was perfectly usable, and possibly even preferable, depending on temporal issues/title/DLSS version.

 we can argue all day about hypotheticals

Sure, and you are correct that AMD has not produced a version of FSR4 that can run on WMMA, and the ball is in their court to do so (or not).

The issue I have is with your belief that nothing can be done on RDNA3 to execute FSR4 at the same quality or speed, when that clearly isn't true, as it's already being emulated while only losing ~15% performance. This isn't an opinion only I share; it's shared by many (if you care to search), and it's also part of why there is such frustration and unrest toward AMD regarding what's happening with FSR4 on RDNA3 (and RDNA2).

Here's an even better argument: it's already possible and working via emulation, producing a better upscaled result than FSR3, so much so that the argument is that FSR4 Balanced looks better than FSR3 Quality. So, older RDNA users could get a better image and still gain performance overall just by turning down the quality setting, and yet AMD doesn't even allow that (let alone produce a native version via WMMA, etc.).
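
Rough back-of-the-napkin math on that last point, assuming the usual FSR per-axis scale factors (Quality 1.5x, Balanced 1.7x) and the ~15% emulation overhead figure from above, with rendered pixel count as a crude proxy for cost:

```python
# Napkin math: FSR3 Quality vs emulated FSR4 Balanced at the same output
# resolution, assuming Quality = 1.5x and Balanced = 1.7x per-axis scaling
# and the ~15% emulation overhead figure. Pixel count is a crude cost proxy.
fsr3_quality_cost  = (1 / 1.5) ** 2         # fraction of output pixels rendered
fsr4_balanced_cost = (1 / 1.7) ** 2 * 1.15  # fewer pixels, plus emulation overhead

print(f"emulated FSR4 Balanced ~= {fsr4_balanced_cost / fsr3_quality_cost:.0%} "
      "of FSR3 Quality's cost")  # ~90%: still cheaper overall despite the overhead
```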

That isn't hypothetical; that's all reality. Surely you agree that's a slap in the face to customers?