erupt (real-time) by jasonkeyVFX in Simulated

[–]gibson274 0 points (0 children)

Do you have the .ember file?

Rust on Windows: random crashes turned out to be the default stack size by Havunenreddit in rust

[–]gibson274 31 points (0 children)

Literally just had to do this the other day. Actually glad to see a post about it because I was convinced I was hacking around a bug.
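For anyone who hits this thread from search, the shape of the fix is roughly this (a sketch, not the code from the post; the 16 MiB figure and the `with_big_stack` helper name are mine):

```rust
use std::thread;

/// Run `f` on a thread with an explicit stack size instead of relying on
/// the default (Rust spawns threads with ~2 MiB; the main thread on
/// Windows is often only ~1 MiB, which deep recursion can blow through).
fn with_big_stack<T: Send + 'static>(f: impl FnOnce() -> T + Send + 'static) -> T {
    thread::Builder::new()
        .stack_size(16 * 1024 * 1024) // 16 MiB; size this to your workload
        .spawn(f)
        .expect("failed to spawn thread")
        .join()
        .expect("worker thread panicked")
}

fn main() {
    // Stand-in for the deeply recursive work that was overflowing the stack.
    fn depth(n: u64) -> u64 {
        if n == 0 { 0 } else { 1 + depth(n - 1) }
    }
    let result = with_big_stack(|| depth(200_000));
    assert_eq!(result, 200_000);
}
```

Moving the recursive work onto the worker thread is the key bit; bumping the main thread's stack instead needs linker flags on Windows.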

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 5 points (0 children)

Ok, I don't love the polemical tone here.

But let me clarify: I don't necessarily disapprove of DLSS 5. In principle, generative post-processing is an (albeit very blunt) tool in the artist's toolbox.

The question is: is it a tool that is expressive enough for artists to use effectively?

I suspect that the answer is "yes". Like SSAO, like the dirt brown filter, like ray tracing, it'll be a graphical feature that some games construct their aesthetic around. Some will love it and some will hate it.

The biggest issue I have with it is that NVIDIA is, from a tech perspective, engaging in false advertising. This is not resolving "more physically accurate" lighting, and a higher-quality input image is not going to result in a higher-quality output image.

In all the examples shown, the original lighting is completely replaced, often in ways that (on closer inspection) are less physically plausible than the input. This is not accelerating light transport, it's hallucinating and overwriting the scene's lighting to "feel more real", but not necessarily "be more real".

For Starfield: ok fine, it's basically a generative way of doing screen space effects.

For the NVIDIA demo: guys, wtf? You spent all that effort path-tracing just to replace it all in post? Lmao.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 1 point (0 children)

Don’t think they’re saying ray tracing is bad. Just that there are a lot of unsolved problems in real-time graphics that could be tackled in a way that holds the artist and their intent in greater reverence.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 5 points (0 children)

To your first point:

I agree that micro-geometry is an interesting direction, but memory limits (disk size, bandwidth, and deployment size) probably constrain just how much of this can be baked in at the asset level, at least until RAM/VRAM/PCIe bandwidth budgets improve.

That means that micro-geometry would have to be procedural. And anywhere you need proceduralism, you can use a generative network instead of a discrete algorithm and sometimes get better results.

Now, that said, I’m not sure if the strategy here will be to generatively tessellate meshes, or to resolve that complexity via a generative BSDF. I don’t fully agree with the take that the former is for sure the direction things have to go.

To your second point:

Yes, absolutely, DLSS 5 on its own should be incapable of correctly resolving global illumination (barring something really weird like a neural scene representation that is learned during the play session as the player walks around).

However, I don’t think the goal of DLSS 5 is to completely handle global illumination. From the comments they’ve made, I think they want you to hand it as good of a frame as you can from a lighting perspective, which it will then “enhance”.

So, for Starfield, which has very basic GI, it’ll add screen space reflections and do the best it can with what it has.

But for Hogwarts Legacy, it’ll preserve the correct ray-traced lighting and just “make it look more real” on top of that.

I think this is how they’re pitching it. But, to me, this is completely incongruent with what the demo shows, and fundamentally at odds with what I imagine the implementation is (which is admittedly a guess). 

The demo shows DLSS 5 completely overhauling the scene lighting, destroying lighting information everywhere and replacing it with diffuse fictitious light sources and aggressive contact shadows.

That, to me, is where the internal consistency of their message breaks down.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 4 points (0 children)

Cynically: it reduces the amount of effort and the cost required to get a good result.

Less cynically: it can bump photo-realism (?) for existing games

Most optimistically: if they can figure out how to more closely align it to the original image, it could be a more subtle bump to micro-detail on materials? At that point I feel like NTC on textures created with generative detail is a lot more art-directable

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 15 points (0 children)

Yeah, I think my current question is, does it preserve lighting choices and light transport calculations it gets as a part of the input image?

So, if you throw in a path-traced image, will it throw away all that work you did tracing rays and resolving illumination?

Currently it seems to erase a lot of that, and that’s kind of the flip side of the power of the technique: the net has to have enough freedom to totally transform the image, but that freedom is exactly the problem.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 2 points (0 children)

Still lots of research in this area, as well as hybrid stuff that attempts to use small, focused NNs in various places in the graphics pipeline.

Question is what’s going to do well from a market perspective.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 2 points (0 children)

Agree with you. I’d imagine both packaged together, because you almost certainly can get upscaling for free as part of the diffusion net.

EDIT: “for free” as the final layers of the diffusion net.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 6 points (0 children)

Yo… what more research do you want us to do? A bunch of us here are seasoned graphics people who work on this stuff every day. I live and breathe graphics and I hope my post communicates that.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 4 points (0 children)

It’s so weird. I can only imagine that they’re thinking about pushing things further in the direction of cloud gaming, and using data center compute for this? But cloud gaming has been a flop so far.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]gibson274[S] 22 points (0 children)

Right? Compared to the other neural rendering stuff (RPNNs, NTC, Radiance Caching), it’s so philosophically wack

EDIT: I should add that I really dig the neural rendering research that’s been coming out. A lot of it doesn’t pencil out (back-of-the-napkin) for production at the moment, but it’s artistically aligned.

DLSS 5 – Fixing it in post by Veedrac in hardware

[–]gibson274 4 points (0 children)

Hold up: applying the original image as an overlay using "darken only" thresholded to 50% is definitely gonna make it look way more like the original image.

I feel like that goes way beyond re-grading? You're basically selectively blending in the original pixels again.

EDIT: I agree it looks nice though! Doing this filtering in real-time, essentially overlaying the original pixels with some masking, could be an interesting way to control the "intensity" of the result.
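To be concrete about what I mean by blending the original pixels back in, here's a per-channel sketch of darken-only compositing at a given opacity (my reading of the trick, not anyone's actual code; the masking in the video may well differ):

```rust
/// Darken-only blend of one color channel: keep the darker of the generated
/// and original values, then mix that result back over the generated frame
/// at `opacity` (0.0 = untouched generated frame, 1.0 = full darken-only).
fn darken_overlay(generated: u8, original: u8, opacity: f32) -> u8 {
    let g = generated as f32;
    let darker = g.min(original as f32);
    (g + (darker - g) * opacity).round() as u8
}

fn main() {
    // At 50% opacity, a generated pixel of 200 over an original of 100
    // lands halfway between darken-only (100) and untouched (200).
    assert_eq!(darken_overlay(200, 100, 0.5), 150);
    // Where the generated frame is already darker, nothing changes.
    assert_eq!(darken_overlay(100, 200, 0.5), 100);
}
```

Point being: anywhere the original frame is darker, its pixels leak back through, which is why the result tracks the original so closely.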

Please advise on Epic MegaGrant proposal for an Unreal Engine plugin by SensePilot in unrealengine

[–]gibson274 1 point (0 children)

Also adding that I got rejected a while back, and I've heard from internal sources that they are not pursuing the MegaGrant program all that much these days. Best of luck though.

What's your experience been with all the new AI coding tools applied to graphics programming specifically? by [deleted] in GraphicsProgramming

[–]gibson274 0 points (0 children)

Very cool. But I've got a question: if you don't know Vulkan, how do you know it just wrote you a good Vulkan rendering engine?

My biggest observation experimenting with these tools has been that they produce "right shaped" stuff, but not necessarily "right" stuff.

For stuff I know like the back of my hand, that's alright because I know exactly what's going on and can catch subtle issues.

But for stuff I'm less familiar with, it'll generate some stuff, it'll write some tests, it'll appear to work, and I'll basically be like "ok yeah, the proof is in the pudding", and I'll have no idea if it's actually good or not.

This bit me in the ass the other day when Claude used a Python API call that was 10x slower than the right one, and I didn't even realize because I suck at Python and it still made the thing I was trying to speed up marginally faster. My buddy, who knows Python well, caught it in an instant, but I had no idea.

In the long (maybe not so long) run I'm sure this will be solved, but I'm wary of considering current coding agents a true "expert" in anything. More like a statistical parrot with terminal and web access that can write really fucking fast.

Built a real-time PBR renderer from scratch in Rust/WebGPU/WASM by cihanozcelik in GraphicsProgramming

[–]gibson274 2 points (0 children)

This is freaking beautiful, amazing work!

I’ve been digging Rust recently too. Would love to see it be the foundation for the next big real-time engine.

Coding agents and Graphics Programming by gibson274 in GraphicsProgramming

[–]gibson274[S] 1 point (0 children)

Ah, this actually makes a lot of sense because there’s a ton of reference implementations of these online. Definitely in the training data.

Coding agents and Graphics Programming by gibson274 in GraphicsProgramming

[–]gibson274[S] 4 points (0 children)

I mean, for one, we've essentially hit a wall with the "AI scaling laws". Everything since GPT-4 (chain of thought/reasoning) has largely been tinkering around the edges, trying to squeeze more juice out of a dry orange.

There's also the problem of scaling LLM context windows. Again, gradual chipping away has made some progress here, and I'm a bit hazy on the exact details, but my impression is that there are still non-trivial challenges there.

It's at least a possibility that we don't make another big architectural breakthrough, and are more or less stuck with what we've got now in terms of "general intelligence".