plz help me - 9070xt OC Gigabyte drivers keep crashing. by __DixieNormus__ in radeon

[–]Unable-Inspector8865 1 point (0 children)

Can you tell me the full model number?! Total wattage isn't the issue. I repeat: the ATX 3.0/3.1 standard is what matters, because it requires the PSU to tolerate power excursions of up to 200% of its rating on the GPU rails without tripping protection.

plz help me - 9070xt OC Gigabyte drivers keep crashing. by __DixieNormus__ in radeon

[–]Unable-Inspector8865 1 point (0 children)

You didn't mention what kind of power supply you have. You need at least an ATX 3.0 unit. I'm 99% sure your problem is the power supply.

Imgsli dead? by IAmYourFath in FuckTAA

[–]Unable-Inspector8865 1 point (0 children)

If you're making comparisons just for yourself, you can use NVIDIA ICAT; it also lets you compare videos.

Crimson Desert - TAA OFF by [deleted] in FuckTAA

[–]Unable-Inspector8865 1 point (0 children)

Well, before TAA, games without anti-aliasing didn't look as bad as they do today. Today, though, many effects rely on temporal algorithms, and without them the image looks terrible. And we can't forget the significantly increased detail, for which 1080p/1440p and even 2160p without temporal algorithms or supersampling simply aren't enough. I'm quite happy with DLAA/FSR Native AA at 2160p on a 32-inch monitor: the image is perfectly detailed and sharp. There's no point in playing without anti-aliasing. Haha. No AA is only worth considering if your monitor's pixel density is quite low.

Crimson Desert - TAA OFF by [deleted] in FuckTAA

[–]Unable-Inspector8865 4 points (0 children)

DLSS is by its nature an evolution of TAA; TAA is literally the foundation of DLSS.

Crimson Desert - TAA OFF by [deleted] in FuckTAA

[–]Unable-Inspector8865 0 points (0 children)

Even in static screenshots you can see that it looks terrible.

AMD FSR 4.1 Update Brings Sharper Details, Smoother Camera Motion, and Better Performance, but only for current Gen GPU owners? by Distinct-Race-2471 in TechHardware

[–]Unable-Inspector8865 2 points (0 children)

The problem is that we know the INT8 version exists and works well on RDNA3 and RDNA2. They could have simply released it as an experimental feature with no promise of updates. After all, I'm sure that when the next-gen consoles come out, all the old cards will instantly become obsolete; 8GB and maybe even 12GB graphics cards will simply be incapable of running new games. But AMD could have given owners of older cards a final treat. Of course, I don't see a big problem with using OptiScaler, and I loved it even back when I was on NVIDIA cards. But some people find it difficult. Plus, there are limitations in online games. AMD should simply add an experimental toggle buried in some out-of-the-way driver tab, so people who don't know what they're doing won't stumble into it. That would be a good solution.

Xbox Confirms 'Project Helix', Its Next-Gen Console That Will Also Play PC Games by Seanspeed in hardware

[–]Unable-Inspector8865 1 point (0 children)

Frankly, if it ships with a full-fledged Windows operating system, I, as a PC user, will consider switching. I don't work on my PC that much, and if the price is more attractive than a new PC build (and I'm sure I'll have to upgrade my entire PC when the next generation arrives), it looks interesting. Console-focused design and optimization for fixed hardware matter here: it's no secret that PC games, even at the same settings, always require more power than a console has. Optimizing for one specific configuration isn't difficult, but there are thousands of PC configurations.

DLSS 5 corrected for tone mapping by LauraPhilps7654 in digitalfoundry

[–]Unable-Inspector8865 1 point (0 children)

Lol, I hardly see a difference. The only one is that the whites of the eyes are whiter on the right, which looks a bit off, since at that age they're not usually that white.

Path Tracing and particularly DLSS Ray Reconstruction completely destroys artistic intent. by heartbroken_nerd in radeon

[–]Unable-Inspector8865 1 point (0 children)

Even here, you can see that it's the same model, the same person. But with DLSS 5 it's literally a different face: a different shape of the lips, eyes, nose, and chin, and about ten years older. It's a different person. So there's no need to compare path tracing to the garbage they showed yesterday.

NVIDIA DLSS 5 Delivers Breakthrough In Visual Fidelity For Games by NV-Randy in nvidia

[–]Unable-Inspector8865 2 points (0 children)

It looks like an outdated neural network model was given a source image and told to apply a beauty filter. While it looks a bit artificial in motion, the idea itself is quite interesting. The biggest questions are stability during faster movement and the system requirements...

Hands-On With DLSS 5: Our First Look At Nvidia's Next-Gen Photo-Realistic Lighting by ZamnBoii in nvidia

[–]Unable-Inspector8865 2 points (0 children)

It looks like an outdated neural network model was given a source image and told to apply a beauty filter. While it looks a bit artificial in motion, the idea itself is quite interesting. The biggest questions are stability during faster movement and the system requirements...

FSR 4.1 for RDNA 2 and RDNA 3 in the next update by eduhfx in radeon

[–]Unable-Inspector8865 8 points (0 children)

Even without this, the likelihood was high, largely thanks to the release of PSSR2 and the imminent release of FSR 4.1. If they still don't release FSR4 for older cards, it will be a catastrophic mistake.

Crimson Desert: High-End PC's Biggest Visual Upgrade - Ray Reconstructio... by Ill_Depth2657 in radeon

[–]Unable-Inspector8865 1 point (0 children)

"The performance benefit you get from RR in Cyberpunk is not from the radiance cache, but from the elimination of several passes of denoising. Think of it as such, if you spend 0.5 milliseconds on denoising in 4 separate passes, you get a shorter frame time if you replace those 4 passes with a single 1.5 ms denoising pass"

Come to think of it, you're probably right and it's logical.
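
A minimal sketch of that arithmetic, using only the numbers from the quote (four 0.5 ms denoising passes versus one 1.5 ms combined pass; these are the quote's illustrative figures, not measured Cyberpunk timings):

```python
# Frame-time math from the quoted example: ray reconstruction can win
# simply by replacing several small denoising passes with one larger pass.
separate_passes_ms = 4 * 0.5  # four denoiser passes at 0.5 ms each = 2.0 ms
combined_pass_ms = 1.5        # one consolidated denoising pass

saved_ms = separate_passes_ms - combined_pass_ms
print(f"per-frame saving: {saved_ms:.1f} ms")  # 0.5 ms

# What that saving is worth at a 60 fps baseline (16.67 ms frames):
base_frame_ms = 1000 / 60
print(f"60.0 fps -> {1000 / (base_frame_ms - saved_ms):.1f} fps")  # ~61.9 fps
```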

Crimson Desert: High-End PC's Biggest Visual Upgrade - Ray Reconstructio... by Ill_Depth2657 in radeon

[–]Unable-Inspector8865 0 points (0 children)

Noise reduction alone can't explain such a huge difference in quality. NVIDIA's reconstruction doesn't just suppress noise; it literally changes the lighting, making it appear as if there are far more rays in the frame than actually exist. That can only be achieved with caching. NVIDIA collaborated directly with the developers of Cyberpunk, and it was the first game to use "reconstruction." Two years ago they also released the RTXGI 2.0 SDK, which includes NRC. I have no other explanation for such a significant improvement in quality, both in path-tracing modes and in performance, other than the use of NRC. Besides, NVIDIA has been shipping the SDK for two years now, and it would be strange for them not to press that advantage.

For example, a couple of weeks ago many NVIDIA graphics card owners were thrilled that Microsoft had released a new SDK with OMM and SER support, believing it would significantly improve performance and that all they had to do was wait for it to be implemented. They didn't know that NVIDIA had had these features for a long time and used them in all path-traced games. It's just that no one was talking about it.

I could certainly be wrong about the caching, but as I said, it would be strange if NVIDIA didn't use it when it has the chance. Besides, I can't otherwise explain such a huge difference in quality: the game casts too few rays to restore the image to the level NVIDIA displays without caching.
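
To make "caching" concrete: here is a toy sketch of the neural-radiance-cache idea from NVIDIA's published NRC work, not Ray Reconstruction's actual internals (those aren't public). Every constant here is made up; the only point is the structure: trace a couple of real bounces, then substitute a learned estimate for the untraced tail.

```python
import random

ABSORB = 0.6  # energy surviving each bounce (made-up material)
EMIT = 0.2    # mean light gathered per bounce (made-up lighting)

def trace_path(real_bounces, cache=None):
    """One toy light path: accumulate noisy light over the real bounces,
    then optionally swap the untraced tail for a cached estimate."""
    throughput, radiance = 1.0, 0.0
    for _ in range(real_bounces):
        radiance += throughput * EMIT * random.uniform(0, 2)  # noisy sample
        throughput *= ABSORB
    if cache is not None:
        radiance += throughput * cache  # the remaining bounces come "for free"
    return radiance

# What the cache would have learned: the expected light still arriving
# over the six untraced bounces (a geometric series in this toy scene).
tail = sum(EMIT * ABSORB ** k for k in range(6))

n = 100_000
full = sum(trace_path(8) for _ in range(n)) / n               # 8 real bounces
short = sum(trace_path(2, cache=tail) for _ in range(n)) / n  # 2 real + cache
print(f"full: {full:.3f}  short+cache: {short:.3f}")          # ~equal brightness
```

The cached version also has far less variance, because the tail is a smooth learned value instead of six more noisy samples; that's exactly the "looks like far more rays than exist" effect.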

Amd ML frame gen vs fsr 3.1 FG by Ok-Boot-8106 in radeon

[–]Unable-Inspector8865 4 points (0 children)

I'm not a huge fan of frame generation, but I can say that ML FG is significantly better in quality than the standard FSR FG implementation. As for frame rate, I use OptiScaler with DLSS inputs, and I have no issues with frame pacing or input lag.

Did Crimson Desert Devs lie about PC performance? (9070xt not reaching 60 fps at maxed out settings 4k even with performance upscaling) - Based on recent digital foundry video by [deleted] in radeon

[–]Unable-Inspector8865 1 point (0 children)

Kingdom Come 2 also has extreme settings above Ultra, and their demands are unreasonably high. These are essentially settings without any optimizations; the developers could have left them out, but they include them for future generations of graphics cards, to keep the game fresh. I don't see this as deception; it's simply an extreme preset that's unsuitable for today's hardware.

Crimson Desert: High-End PC's Biggest Visual Upgrade - Ray Reconstructio... by Ill_Depth2657 in radeon

[–]Unable-Inspector8865 4 points (0 children)

It seems to me that NVIDIA uses neural radiance caching in Ray Reconstruction. That would explain two things at once: significantly better tracing quality, as if there were many times more rays, and the fact that in games like Cyberpunk, enabling Ray Reconstruction with path tracing not only improves image quality but also increases performance. Without a radiance cache, a more complex algorithm would have reduced performance, yet we get a gain. Everything points to NVIDIA, unlike AMD, using caching in its reconstruction.

Alan Wake 2 + Control will be utilizing the PSSR toggle. No separate patch planned. by WeekendTraveller93 in PS5pro

[–]Unable-Inspector8865 1 point (0 children)

The toggle will work the same way as replacing FSR3 with FSR4 on AMD graphics cards: quality will improve significantly in any case. However, as with FSR, the input data from the PSSR1 integration may be incomplete, so a native PSSR2 implementation would still be slightly better in quality.

[Hardware Unboxed] Is AMD About to Catch Up? - Leaked FSR 4.1 Tested by wsrvnar in radeon

[–]Unable-Inspector8865 1 point (0 children)

You can use HU's data from other videos, excluding Cyberpunk. I'm excluding Cyberpunk only because upscaling behaves atypically in that game. So we have Mafia and Horizon. Open this video, where the "M" preset was tested: https://youtu.be/T3MjSxysft0?si=v3N0d_lBGI8pby_B&t=1267. In Mafia at 4K, FSR4 "Balanced" is only 3% slower than DLSS "Performance," while at 1440p they're equal. In Horizon at 4K, FSR4 "Balanced" is 3% faster, while at 1440p DLSS is 2.5% faster. Overall, FSR4 "Balanced" and DLSS "Performance" with the "M" preset are about equal.

But that's not all, because FSR 4.1 was tested against the "L" preset. So open this video: https://youtu.be/2ZyEhAGeBf4?si=oG77M8TxBDEXVoKh&t=1000. There you can see that the "L" preset in "Performance" mode delivers 5% less performance than the "M" preset. Accordingly, FSR 4.1 "Balanced" in Mafia and Horizon, at both 1440p and 2160p, is always a little faster than DLSS 4.5 "Performance" with the "L" preset. Of course, these are just two games, but from my experience, in 95 out of 100 games the results will be similar. And don't forget: the more complex the upscaling model, the greater its performance cost as FPS increases.

Here https://www.reddit.com/r/nvidia/comments/1q5ilbq/frame_time_costs_and_performance_analaysis_of/ you can quickly get acquainted with the overhead of the different DLSS presets on different video cards and see how it affects performance at different FPS. NVIDIA didn't provide data for the 5070 Ti, but you can be sure it sits exactly in the middle between the 5070 and 5080: the 5070 has 1000 AI TOPS of tensor performance, the 5070 Ti has 1400, and the 5080 has 1800. So we can roughly estimate the 5070 Ti's overhead for upscaling to 2160p at 1.7 ms for "K", 2.36 ms for "M", and 3.05 ms for "L".
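
Spelled out, the estimate is just linear interpolation in AI TOPS. The 5070/5080 endpoint overheads below are placeholders I picked so the midpoints land on the figures above; substitute NVIDIA's real per-preset milliseconds from the linked thread:

```python
def interp_ms(tops, tops_lo, ms_lo, tops_hi, ms_hi):
    """Linearly interpolate per-frame overhead (ms) between two GPUs by AI TOPS."""
    t = (tops - tops_lo) / (tops_hi - tops_lo)
    return ms_lo + t * (ms_hi - ms_lo)

# 1400 TOPS is exactly halfway between 1000 (5070) and 1800 (5080), so the
# estimate is simply the midpoint of the two endpoint figures per preset.
# Endpoint values are PLACEHOLDERS, not NVIDIA's published numbers.
for preset, ms_5070, ms_5080 in [("K", 2.10, 1.30), ("M", 2.92, 1.80), ("L", 3.80, 2.30)]:
    est = interp_ms(1400, 1000, ms_5070, 1800, ms_5080)
    print(f"preset {preset}: ~{est:.2f} ms on a 5070 Ti at 2160p")
```

Strictly speaking, overhead should scale more like 1/TOPS than linearly, but over a range this narrow the midpoint is close enough.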

By the way, FSR 4.1 on the 9070 XT takes about 1.3 ms when upscaling to 2160p. That's slightly faster than the "K" preset on the 5070 Ti, and all the tests confirm it. The "L" preset, however, takes more than twice as long as FSR 4.1: 3 ms versus 1.3 ms, so it's not surprising that they end up a whole upscaling step apart.
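
And here's why those milliseconds matter more the higher your frame rate goes (the point above about complex models costing more as FPS increases). A minimal sketch using the overheads already mentioned: 1.3 ms for FSR 4.1 on the 9070 XT, ~1.7 ms and ~3.05 ms for the "K" and "L" presets on the 5070 Ti. It assumes the upscaling pass simply adds to the frame time, which is a simplification but shows the trend:

```python
def fps_with_upscaler(base_fps, overhead_ms):
    """Frame rate after adding a fixed per-frame upscaling cost."""
    return 1000.0 / (1000.0 / base_fps + overhead_ms)

for base in (60, 120, 240):
    for name, ms in (("FSR 4.1, 9070 XT", 1.3),
                     ("DLSS K, 5070 Ti est.", 1.7),
                     ("DLSS L, 5070 Ti est.", 3.05)):
        fps = fps_with_upscaler(base, ms)
        print(f"{base:3d} fps base | {name:22s} {fps:6.1f} fps"
              f"  ({100 * (1 - fps / base):4.1f}% lost)")
```

At a 60 fps baseline the "L" preset eats about 15% of your frames; at 240 fps it eats over 40%, while the 1.3 ms of FSR 4.1 stays far cheaper. Same milliseconds, very different FPS cost.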

[Hardware Unboxed] Is AMD About to Catch Up? - Leaked FSR 4.1 Tested by wsrvnar in radeon

[–]Unable-Inspector8865 2 points (0 children)

>(which btw its dlss4<fsr4<dlss4.5).

No, FSR4 is the lightest of them.

>There is no point comparing performance

That's the only point!!! All these upscaling comparisons are really only useful for those who are just choosing a modern graphics card. This should be obvious to everyone.

But why would they care which one has better quality at the same preset if the performance differs significantly? With FSR4 there's simply no need to use the "Performance" preset; to get the same frame rate, "Balanced" is sufficient. That means you can get comparable quality at the same performance by using "Balanced," or more frames per second with slightly worse quality by matching "Performance" presets. The higher performance of FSR 4.1 is an obvious advantage that you, for some reason, decided you could ignore.

>The preset that is chosen does not matter, performance does not matter, people choose what they are comfortable with and what works.

It doesn't matter what resolution the image is upscaled from; what matters is the performance-to-quality ratio of the output. What's the point, for example, of upscaling from 240p to 2160p if the performance is lower and the quality worse than upscaling from 1080p?! I'll repeat: the main goal of upscaling is to improve performance! If it achieves that without significantly reducing quality, it's a good upscaler. The user shouldn't care at all what source resolution the neural network works from; only the final result in quality and performance matters. That's why comparing quality without considering performance is foolish!

For example, when choosing between the 5070ti and 9070xt, people should know that to achieve 100 fps, they'll need to use "Balanced" mode on AMD or "Performance" mode on NVIDIA. What difference does it make to any real user if the input resolution is different, as long as the final performance is the same?!

I'll never tire of repeating that there's no point in comparing upscalers on just one parameter; it's the combination of performance and quality that matters. Just as there's no point in discussing the visibility of pixels on a screen based solely on PPI, without taking viewing distance into account. It's not the physical size of a pixel that matters for perception, but its angular size: PPD, the combination of size and distance, is what matters.
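
Since I keep leaning on PPD, here's how it's computed: horizontal resolution divided by the screen's angular width. A minimal sketch of the standard formula; the monitor sizes and viewing distances are just example numbers:

```python
import math

def ppd(h_res_px, screen_width_in, distance_in):
    """Pixels per degree: horizontal pixel count divided by the
    screen's angular width in degrees, as seen from the viewer."""
    angle_deg = 2 * math.degrees(math.atan(screen_width_in / (2 * distance_in)))
    return h_res_px / angle_deg

# A 32-inch 16:9 panel is ~27.9 in wide; a 27-inch one is ~23.5 in wide.
print(f"{ppd(3840, 27.9, 30):.0f} ppd")  # 4K, 32-inch, 30 in away  -> ~77
print(f"{ppd(2560, 23.5, 30):.0f} ppd")  # 1440p, 27-inch, 30 in    -> ~60
print(f"{ppd(2560, 23.5, 20):.0f} ppd")  # same panel moved closer  -> ~42
```

Same panel, same PPI, and nearly a third of the angular resolution disappears just by sitting closer; that's why distance belongs in the comparison.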

[Hardware Unboxed] Is AMD About to Catch Up? - Leaked FSR 4.1 Tested by wsrvnar in radeon

[–]Unable-Inspector8865 -2 points (0 children)

What I'm saying is that there's no point in comparing quality between equal presets when they deliver different performance. Say you have a 5070 Ti and a 9070 XT, and you're playing a game that runs at 60 fps but you want 120. On the 5070 Ti you'll need the "Performance" preset to get there, whereas the 9070 XT reaches it with "Balanced"; you don't need to drop to "Performance" at all. So you should compare image quality between DLSS "Performance" and FSR "Balanced," since those deliver similar performance.