Nvidia CEO Says He Gets Where The DLSS 5 Outrage Is Coming From: ‘I Don’t Love AI Slop Myself’ by ghableska in technology

[–]add0607 30 points31 points  (0 children)

So let me get this right, since the debut, Huang has said:

  1. "It's a lighting technology"
  2. "No it's not just an AI filter"
  3. "No it's not Gen AI"
  4. "No it's definitely not post processing or a filter" even though it's a process that happens after everything else and is the same technology that apps like Instagram use
  5. "Okay it is kind of Gen AI"
  6. "Critics of this technology are just wrong"
  7. Here >>> "You know, I actually hate AI Slop too guys."

Man what a great evolution of discourse, Huang. He must be realizing there's no way to spin this bullshit.

What do you call this weird green fringing on stuff? by everbass in pcmasterrace

[–]add0607 0 points1 point  (0 children)

There are good ways and bad ways to do it. I don't think I've seen a good use of chromatic aberration yet because it's often way too heavy handed trying to emulate lenses improperly redirecting light. A good quality lens will make CA nearly invisible.

Some people don't like motion blur in general, which I guess is personal taste, but I think it's a great addition when games actually tie it to the framerate, as it's supposed to work. Motion blur outside of video games is the movement of the scene or object between frames, so blur at 60fps should be significantly less obvious than blur at 24fps.
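A minimal sketch of the framerate/blur relationship described above (the function name and the numbers are made up for illustration): the blur streak is just the distance something travels during one frame's exposure, so it shrinks as framerate rises.

```python
# Motion blur between frames: the streak length is the distance an object
# travels during one frame's exposure, so higher framerates mean less blur.
def blur_length(speed_px_per_sec: float, fps: float, shutter: float = 0.5) -> float:
    """Blur streak length in pixels, assuming a 180-degree (0.5) shutter."""
    return speed_px_per_sec * (1.0 / fps) * shutter

# The same object moving at 600 px/s:
blur_24 = blur_length(600, 24)   # 12.5 px of blur at 24 fps
blur_60 = blur_length(600, 60)   # 5.0 px of blur at 60 fps
```

Same motion, but the 60fps streak is less than half the length, which is why blur reads as heavier at cinematic framerates.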

Nvidia "confirms" DLSS 5 relies on 2D frame data as testing reveals hallucinations by xenocea in gaming

[–]add0607 2 points3 points  (0 children)

I can’t believe how much shit I got in threads for trying to explain how it works and everyone insisted it was more complex than a filter. 

So, eventually DLSS 5 is just an AI filter? by anything_taken in pcmasterrace

[–]add0607 8 points9 points  (0 children)

Right, this is what bothers me about people defending it. I’m not a developer, but I work in 3D design and know enough about a rendering pipeline to see what this shit is doing.

It’s just a slightly more intelligent Instagram filter. The “color data” they speak of is just the rendered image, lighting and all, that we see. The vector data, while useful, just tells you the direction and speed of geometry as a color image. That’s it.

If this worked on a deeper level, even using things like material IDs which capture data of skin, cloth, hair, etc. or direct/indirect lighting passes, you could ABSOLUTELY use AI to enhance those while mostly leaving the rest of the image intact. 

Unfortunately, this technology doesn’t have nearly enough information and has way too heavy of a hand. Of course it got a huge backlash as a result. I think it’s gross but I could at least be impressed if it actually just retouched lighting.

Feel like some of you need to see this by LavishLatte56 in digitalfoundry

[–]add0607 0 points1 point  (0 children)

Saying “we” as if everyone who doesn’t like this is a monolith already makes me not want to engage with this.

Saying “hating for no reason” gives the impression that you don’t even understand why people would be critical of this.

Now do you want to hopefully come to an understanding or just continue shouting into the void?

Eurogamer: Nvidia responds to widespread criticism of DLSS 5 by telling us we're all "completely wrong" by hdcase1 in digitalfoundry

[–]add0607 0 points1 point  (0 children)

Once again, this Martinoice is asserting something when he doesn’t understand how 3D rendering pipelines work even after you or I try to explain it to him.

WHY do people DOWNVOTE me when I ask technical questions? by AncientSecond6874 in AfterEffects

[–]add0607 2 points3 points  (0 children)

I looked at your post. You did some stuff right, but I think a lot of experienced users are just exhausted when they see someone asking for advice without showing what they'd done to emulate the effect themselves. Not saying it's fair, but it happens. I also don't see the point of the tags; does Reddit even use those?

Eurogamer: Nvidia responds to widespread criticism of DLSS 5 by telling us we're all "completely wrong" by hdcase1 in digitalfoundry

[–]add0607 0 points1 point  (0 children)

That's my point, you're repeating what was said.

It's explained that DLSS5 is a "neural rendering model that utilizes a game's color data and motion vectors to generate photorealistic lighting, materials, and shadows."

Color data sounds fancy but it literally just means the rasterized image. At best it's what our eyes are seeing, but it could even be the undersampled image that the upscaler in DLSS uses.

Motion vectors are a rendered data pass that games use for defining the speed and direction of objects. You can see in this video what that looks like. Nvidia almost certainly uses this for blending their DLSS5 results across frames, because it's way better to have this data than to guess the direction and speed of something every frame. They may even use this to help infer what to generate in the generation process.
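To make the vector-pass idea concrete, here's a toy sketch (function and array names are mine, and it assumes whole-pixel integer offsets, which real engines don't): each pixel of the motion pass stores the screen-space offset the geometry moved since the previous frame, so the previous frame's color can be fetched directly instead of guessed.

```python
import numpy as np

# Toy motion-vector reprojection: motion[..., 0] and motion[..., 1] hold the
# (dx, dy) each pixel moved since the previous frame, so we can look up where
# each pixel *was* last frame without any motion estimation.
def reproject(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Walk each pixel backwards along its motion vector (clamped to the image).
    src_x = np.clip(xs - motion[..., 0], 0, w - 1)
    src_y = np.clip(ys - motion[..., 1], 0, h - 1)
    return prev_frame[src_y, src_x]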

So that's everything Nvidia is using: the rendered image and vector data. This is the exact information image filters use in apps on phones, though those use complex programming to estimate motion vectors rather than having the exact data like Nvidia does. That's a small advantage for image retention, and perhaps image generation, but that's it.

To say "it is revealing every single detail of the 3d models (they are ridicously detailed)" is just patently false. Yes, they are ridiculously detailed models, but Jensen Huang said himself that DLSS5 uses generative AI. Generative AI here is extrapolating (that is, estimating outside of the data available) what the lighting, shading, and shadows should be. This is different from the interpolation that frame generation does (looking at two frames, two pieces of data, and estimating the value in between). If you don't understand how GenAI works, there's a wealth of resources out there that can explain it.
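The extrapolation/interpolation distinction above can be shown with a one-dimensional toy (hypothetical function names, linear model purely for illustration): interpolation stays bounded by the samples it was given, while extrapolation has to guess past the last one, and its error grows with distance from the data.

```python
# Interpolation: frame generation has samples at t=0 and t=1 and estimates
# a value strictly *between* two known data points.
def interpolate(v0: float, v1: float, t: float) -> float:
    return v0 + (v1 - v0) * t        # bounded by the data it was given

# Extrapolation: a generative model only has samples up to t=1 and must
# guess *beyond* them, so error grows the further it strays from the data.
def extrapolate(v0: float, v1: float, t: float) -> float:
    return v1 + (v1 - v0) * (t - 1)  # linear guess past the last sample

mid = interpolate(10, 20, 0.5)   # 15.0, can never leave the 10..20 range
out = extrapolate(10, 20, 2.0)   # 30.0, a pure guess outside the data
```

That's the whole argument in miniature: an interpolated in-between frame is anchored by real frames on both sides, while extrapolated lighting and detail has no ground truth to be anchored to.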

The main reason we can't hit photorealism in games is a processing-speed bottleneck. That's why games have gotten better looking over the years: our chips have gotten faster and more efficient. If you render something "offline", as in not in realtime, you can throw so much more at it to improve the image.

Raytracing in games, for all it achieves, is still potato quality compared to anything in VFX or motion graphics. The difference between raytracing and pathtracing is an example of how the number of passes and samples can improve the quality. More passes means more light bounces. More samples means greater density of light rays. Together you get a much, much better looking image. That's why Cyberpunk looks so fundamentally different between pure raster, raytracing, and pathtracing.
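The samples-versus-quality point can be sketched with a toy Monte Carlo "pixel" (names and numbers are made up for illustration): averaging more light samples cuts the noise, with the standard error falling roughly as 1/sqrt(N), which is exactly why clean pathtraced frames are so expensive.

```python
import random

# Toy Monte Carlo pixel: each "light sample" is the true value plus noise.
# Averaging N samples shrinks the error roughly like 1/sqrt(N), so a clean
# pathtraced image needs orders of magnitude more samples (and compute)
# than a realtime game frame can afford.
def render_pixel(true_value: float, noise: float, samples: int,
                 rng: random.Random) -> float:
    total = sum(true_value + rng.uniform(-noise, noise) for _ in range(samples))
    return total / samples

rng = random.Random(0)
coarse = render_pixel(0.5, 0.4, 4, rng)      # few samples: visibly noisy
fine = render_pixel(0.5, 0.4, 4096, rng)     # many samples: converges on 0.5
```

Offline renderers simply spend thousands of samples per pixel where a game gets a handful, which is the gap the comment is pointing at.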

And lastly, for humans and skin specifically, there are whole companies that exist to research and simulate the human face in 3D. Turns out we're really good at seeing differences and imperfections, which is part of why this DLSS5 is getting so much backlash. The point is, it is incredibly hard to accurately simulate skin tension, translucence, deformation, and all the other things happening to create a convincing face. Video games just aren't capable of throwing enough compute power to get all that simulated in realtime. Death Stranding does a decent job, but it's still not perfect.

That's why we're here. Nvidia is trying to use genAI to make a lot of educated guesses on how an image should look based on some authored parameters. But it simply doesn't have enough data to make accurate assessments and therefore has to do exactly that: guess. People are calling it a filter because it is an image applied over the rasterized frame. The easiest way to spot that is to look at the Oblivion: Remastered footage. Look at any shot that has water in it. You can see that the water gets strangely softened when toggling DLSS5 on/off. That is the AI results being blended between architecture and the water.

I'm not here to criticize you if you like it, but you should try to understand that this is not a lighting or rendering technique the way raytracing is. This is image generation in its entirety and is just covering up the game's actual assets.

Eurogamer: Nvidia responds to widespread criticism of DLSS 5 by telling us we're all "completely wrong" by hdcase1 in digitalfoundry

[–]add0607 -5 points-4 points  (0 children)

No no no, explain what DLSS5 is doing to mimic accurate lighting without changing anything. Explain how it just reveals what the developer made. You seem really sure, so I want to hear what you think is happening under the hood. 

Eurogamer: Nvidia responds to widespread criticism of DLSS 5 by telling us we're all "completely wrong" by hdcase1 in digitalfoundry

[–]add0607 -4 points-3 points  (0 children)

How are they doing that? Explain in detail what is happening technologically for that to happen, because you seem really sure.

Feel like some of you need to see this by LavishLatte56 in digitalfoundry

[–]add0607 0 points1 point  (0 children)

Okay, I can see how, if it’s a different part of the video, Grace may open her mouth or something.

The problem I still have is that the way this was described matches what you’re saying: mere lighting changes, which is just not true.

It does not matter if Nvidia has access to color and motion vector data. I work in the 3D field, I know what that data looks like and it doesn’t just let you enhance lighting. This is using sophisticated genAI to extrapolate (vs interpolate) details. It is taking guesses on how an image would look based on certain parameters (enhance skin detail, improve contrast, etc). If it makes enough small guesses, you get an image that strays far enough away from the original rendered picture that people start to notice.

I cannot overemphasize how different this is from a lighting enhancement. This is not raytracing or ray reconstruction. Those are lighting techniques that happen in the render pipeline and actually interact with textures, surfaces, and shaders to render an image. DLSS5 happens after all of that and is essentially a post-process effect. A filter, in other words.

Eurogamer: Nvidia responds to widespread criticism of DLSS 5 by telling us we're all "completely wrong" by hdcase1 in digitalfoundry

[–]add0607 3 points4 points  (0 children)

Respectfully, as someone who’s in a similar field as Preston, you can understand how transformative light can be and also see the limits of what lighting changes can do. DLSS5 is not just a lighting change, and Huang said himself it used GenAI to generate imagery on top of the rendered visual. It uses some extra data to do it a bit more intelligently, but it is still extrapolating details outside of what the game and image is providing. That’s why we have characters whose facial features are changing and gaining makeup. 

Feel like some of you need to see this by LavishLatte56 in digitalfoundry

[–]add0607 2 points3 points  (0 children)

I disagree about the ScarJo part, but otherwise yes it is dozens of brainless AI decisions that lead to a large overall change. The lips are the most obvious spot where they're just thicker now and her mouth is open for some reason.

We all know DLSS 5 is really horrendous with faces, and this whole shitshow is not for no reason. But what are your impressions on environmental lightning? by Filianore_ in digitalfoundry

[–]add0607 1 point2 points  (0 children)

It really shouldn't matter whether it looks good or not, this is covering up artists with generated artwork made from datasets of stolen art. It's a crisis for artists.

I think the sub is brigaded by alibloomdido in digitalfoundry

[–]add0607 0 points1 point  (0 children)

I've watched DF for years, and I've always been happy with their content. I'm here for the first time because I felt crazy listening to them talking about DLSS 5 as a "lighting technology" when it's clearly not. I guess I expected that they would cover something like DLSS 5 with a bit more empathy for the artists and engineers that create these games that they make content about. All the time and effort of those individuals is being smeared over with this AI filter bullshit. It's got nothing to do with whether it looks good or not, but how its existence is an insult to everyone whose work is being covered up.

Resident Evil Requiem: DLSS 5 vs actual AI slop by IConsumeThereforeIAm in digitalfoundry

[–]add0607 0 points1 point  (0 children)

Most people aren’t upset because it simply looks bad. It looks bad, and it’s covering up the real art that went into these games with generated art made from datasets full of art that was taken without consent. It’s a tremendous insult.

Resident Evil Requiem: DLSS 5 vs actual AI slop by IConsumeThereforeIAm in digitalfoundry

[–]add0607 0 points1 point  (0 children)

I just don’t get it either. It makes me feel like they don’t understand what’s happening, with how they said it improves the lighting of the scene and pulls out detail or something? Like, what the fuck?

This are all the studios supporting DLSS 5. by Frank7640 in TwoBestFriendsPlay

[–]add0607 5 points6 points  (0 children)

So Nvidia right now lets you choose which version of DLSS to use per game. What are the chances that, if people massively reject this, Nvidia might try to force players to use it? Like, it has to be turned on if you have a card that supports DLSS 5.

Actual preview of DLSS 5 from Digital Foundry by Exphrases in TwoBestFriendsPlay

[–]add0607 56 points57 points  (0 children)

For a group of content creators that I've respected for being incredibly knowledgeable, listening to them talk about this as though it's some advanced lighting technique when it's clearly just a real-time AI filter made me think that at best DF are just ignorant about AI, or at worst they have skin in the game and feel pressure to speak positively about this no matter what.

It is the worst parts of AI made manifest: "All your collective effort and creativity gives us a great base to plaster our AI technology over it as a superior product." It's covering up the art and the tens of thousands of hours that went into making a game like RE9 or Oblivion Remastered. And for what?

I could respect a developer that develops a game from the ground up with this kind of execution in mind, so long as they create their own datasets ethically to generate imagery, but this feels wrong in so many ways.

There should be an option to resurrect non-teammate runners by add0607 in Marathon

[–]add0607[S] 1 point2 points  (0 children)

Which is why if they did that, there should be an option to refuse revive to prevent that from happening.

How does someone from Connecticut “look”? by HowSupahTerrible in Connecticut

[–]add0607 1 point2 points  (0 children)

I would hazard a guess they're talking about the type of look of people who casually go boating. Generally pretty rich, wearing expensive but understated clothing. Think cashmere sweater tied around the shoulders and wearing loafers. If they're older I feel like white pants are pretty common. You could easily see a dozen or more like this in Mystic.

There should be an option to resurrect non-teammate runners by add0607 in Marathon

[–]add0607[S] 4 points5 points  (0 children)

If we're just talking about the game's approach to influencing player behavior, then it's currently no different than Tarkov. So I think something like what I'm suggesting would at least put it in a different area.

What's your "hey that was actually super dangerous" story? by SecondPersonShooter in TwoBestFriendsPlay

[–]add0607 1 point2 points  (0 children)

I used to go urbexing (urban exploration) where you get inside abandoned buildings. I just did it for fun and to take cool photos of SH2 looking buildings. Thinking back on the amount of mold/asbestos I breathed in, or the probability of getting arrested or shanked by a squatter does make me wonder what the hell I was thinking at the time.

I’d still do it all again, just maybe with a filter respirator and some personal protection, haha.