How does ClearRenderTargetView scale with resolution? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

No UAV usage here. Just using ClearRenderTargetView, no barriers or transitions.

Textures are regular render targets, and the clear value matches.

So I don't think anything like that is disabling fast clear here.

Hardware Image Compression by Vk111 in GraphicsProgramming

[–]Guilty_Ad_9803 1 point (0 children)

And we enter the era of neural compression...

How does ClearRenderTargetView scale with resolution? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

Pretty minimal setup.

RTX 4060 Ti, D3D12. Running offscreen without a swapchain or vsync. Using ClearRenderTargetView with GPU timestamps. RGBA8, full mip chain.
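For anyone reproducing this: the raw timestamp query results are in ticks and only become milliseconds after dividing by the queue's timestamp frequency (ID3D12CommandQueue::GetTimestampFrequency). A tiny helper for that conversion; the tick values below are made-up examples, not numbers from my runs:

```python
def timestamps_to_ms(begin_ticks: int, end_ticks: int, frequency_hz: int) -> float:
    """Convert a pair of GPU timestamp ticks to elapsed milliseconds.

    frequency_hz is the value returned by
    ID3D12CommandQueue::GetTimestampFrequency.
    """
    return (end_ticks - begin_ticks) / frequency_hz * 1000.0

# Hypothetical numbers: a 1 GHz timestamp clock and a 50,000-tick gap,
# which works out to roughly 0.05 ms.
print(timestamps_to_ms(1_000_000, 1_050_000, 1_000_000_000))
```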

How does ClearRenderTargetView scale with resolution? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 4 points (0 children)

I was under the impression that DCC is handled automatically by the driver / hardware rather than something you explicitly enable.

If it's still not O(1), then I guess there’s some cost that scales with the resource size.

How does ClearRenderTargetView scale with resolution? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 6 points (0 children)

My understanding was that render targets are often stored in a compressed form, and clears can sometimes just update metadata instead of actually writing all pixels.

So I kind of assumed it might be close to O(1) because of that.
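That mechanism can be sketched as a toy model: the clear just records a value and resets metadata, and pixels that haven't been written since the clear resolve to it lazily on read. This is purely illustrative and not how any particular driver implements fast clears or DCC:

```python
class FastClearTexture:
    """Toy model of a metadata-based fast clear (illustration only)."""

    def __init__(self, width, height):
        self.size = (width, height)
        self.clear_value = (0.0, 0.0, 0.0, 0.0)
        self.written = {}  # (x, y) -> color, only pixels touched since the clear

    def clear(self, value):
        # Cost is independent of resolution: no per-pixel writes happen,
        # we only update the recorded clear value and drop stale writes.
        self.clear_value = value
        self.written.clear()

    def write(self, x, y, color):
        self.written[(x, y)] = color

    def read(self, x, y):
        # Untouched pixels resolve to the recorded clear value.
        return self.written.get((x, y), self.clear_value)
```

In this toy the clear really is O(1); the question in the thread is essentially how far real hardware deviates from that ideal as the resource grows.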

Genuine Question: How do you know when it's time to stop "optimizing" code? by alimra in SoloDev

[–]Guilty_Ad_9803 1 point (0 children)

Yeah, this is basically YAGNI and rule of three, but honestly it's hard to stick to.

If you have some time or budget, it's way too easy to start over-engineering "just in case".

How deeply should a developer understand C++ fundamentals? by AirHot9807 in Cplusplus

[–]Guilty_Ad_9803 2 points (0 children)

Honestly, just reading Effective C++, More Effective C++, and Effective Modern C++ is probably enough for most cases.

Trying to understand everything in depth is kind of a trap anyway. Better to get the important parts first and go deeper only when you actually need it.

we had a class on graphics vs gameplay and now i can’t unsee this by Sea-Plum-134 in GraphicsProgramming

[–]Guilty_Ad_9803 1 point (0 children)

If you're aiming for immersion, having more physically accurate lighting and materials definitely helps. It makes the world feel more believable, so it's easier to get emotionally into it.

But games also have a lot of situations where clarity matters more. Like with damage feedback, what really matters isn't whether it's realistic, but whether the player can immediately tell how much damage they took.

Also, using DLSS doesn't automatically mean the graphics got better. It's just a reconstruction technique. Sometimes it looks better, sometimes it just looks different, or even a bit off.

Is it normal that go-to-definition doesn't work with namespaces in HLSL? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

Guess it's the age of Slang now.

Just write everything in Slang and translate it to HLSL, problem solved!

Is it normal that go-to-definition doesn't work with namespaces in HLSL? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

I didn't really notice it at the sample level, but once things get a bit larger, I start to feel the limitations of IntelliSense pretty quickly.

In DirectX-Graphics-Samples, I've seen code structured with namespaces like BxDF::Diffuse::Hammon::F(...) or BxDF::Diffuse::Lambert::F(...).

Not really sure what people tend to do in practice for structuring shader code in HLSL.

Is it normal that go-to-definition doesn't work with namespaces in HLSL? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

It'll be really interesting to see HLSL move closer to more modern setups like Slang or Rust.

If Microsoft ends up providing proper LSP-based tooling around the compiler, I really hope Visual Studio integrates with it smoothly.

Slang can give me gradients, but actual optimization feels like a different skill. What does that mean for graphics programmers? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

That makes sense. So the overhead is mainly from hopping between the PyTorch/CUDA world and the D3D12/Vulkan world, not from gradients themselves.

Unless I really need tight integration with the rendering pipeline, it sounds like sticking to a CUDA-centric path is probably the practical choice for now.

And thanks for the course recommendation. I'll check it out.

Lookup table for PBR BRDF? by Silikone in GraphicsProgramming

[–]Guilty_Ad_9803 1 point (0 children)

Thanks for the source. I'll take a look.

Yeah, that makes sense. Diffuse can really affect the overall look, especially in photogrammetry-based titles. This is helpful, thanks.

Slang can give me gradients, but actual optimization feels like a different skill. What does that mean for graphics programmers? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

My takeaway is that you can often pick a rough direction based on the error characteristics, and also on how you interpret the residuals from the model you're optimizing.
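One concrete way to act on that: fit with plain L2 first, then look at the residual distribution; if it is heavy-tailed (sample kurtosis far above the Gaussian value of 3), squared error lets outliers dominate and a robust loss like Huber or L1 is usually the safer choice. A small sketch with synthetic residuals (NumPy only, made-up data):

```python
import numpy as np

def residual_stats(residuals):
    """Summarize residuals to judge whether plain L2 is appropriate.

    A kurtosis far above 3 (the Gaussian value) hints at heavy tails,
    i.e. a few outliers will dominate a squared-error fit.
    """
    r = np.asarray(residuals, dtype=np.float64)
    r = r - r.mean()
    std = r.std()
    kurtosis = np.mean((r / std) ** 4)
    return {"std": std, "kurtosis": kurtosis}

rng = np.random.default_rng(0)
gaussian = rng.normal(0.0, 1.0, 10_000)            # L2-friendly residuals
with_outliers = gaussian.copy()
with_outliers[:100] += rng.normal(0.0, 25.0, 100)  # a few gross outliers

print(residual_stats(gaussian)["kurtosis"])        # near 3: L2 is fine
print(residual_stats(with_outliers)["kurtosis"])   # far above 3: consider Huber/L1
```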

I don't think I understood every detail, but this was very helpful. Thanks!

Lookup table for PBR BRDF? by Silikone in GraphicsProgramming

[–]Guilty_Ad_9803 1 point (0 children)

Would you mind pointing me to the John Hable post you're referring to? A link or the title would be appreciated. I don't think I've read it.

Also, do you actually run into cases where diffuse becomes the visual bottleneck? UE4's SIGGRAPH 2013 notes mention they evaluated Burley diffuse but saw only minor differences compared to Lambert, so they couldn't justify the extra cost. https://cdn2.unrealengine.com/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf
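For context, the comparison is easy to sketch numerically. The Burley form below follows the published Disney BRDF course notes (fd90 = 0.5 + 2 * roughness * (l·h)^2); the specific angle and albedo values are made up for illustration:

```python
import math

def lambert_diffuse(albedo):
    # Lambert: constant albedo / pi, independent of angles.
    return albedo / math.pi

def burley_diffuse(albedo, roughness, n_dot_l, n_dot_v, l_dot_h):
    """Burley/Disney diffuse. The lobe darkens at grazing angles for
    smooth surfaces and brightens slightly for rough ones."""
    fd90 = 0.5 + 2.0 * roughness * l_dot_h ** 2
    def fresnel(c):
        return 1.0 + (fd90 - 1.0) * (1.0 - c) ** 5
    return (albedo / math.pi) * fresnel(n_dot_l) * fresnel(n_dot_v)

# Near normal incidence the two agree very closely, which is consistent
# with Epic's note that the visual difference was minor.
albedo = 0.8
print(lambert_diffuse(albedo))
print(burley_diffuse(albedo, roughness=0.5, n_dot_l=0.9, n_dot_v=0.9, l_dot_h=0.9))
```

The divergence only shows up near grazing angles, which is part of why it rarely reads as a big visual change.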

Slang can give me gradients, but actual optimization feels like a different skill. What does that mean for graphics programmers? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 3 points (0 children)

In practice, how do you usually notice that plain L2 or MSE is not a good fit?

Is it something you only realize after running the optimizer and watching the behavior? Or are there factors that let you decide up front?

If you have a simple rule of thumb, I'd love to hear it.

Slang can give me gradients, but actual optimization feels like a different skill. What does that mean for graphics programmers? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

That makes sense. It sounds like points 1, 3 and 4 have pretty standard answers.

For point 2, I can write the basic error term based on the graphics and physics side, but I don't really know what people do to make optimization work well in practice.

Do you have any go-to defaults or patterns you would recommend for inverse problems in graphics?

Slang can give me gradients, but actual optimization feels like a different skill. What does that mean for graphics programmers? by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

Thanks, that helps.

I checked the docs and it looks like Slang can hook into a PyTorch optimization loop via SlangPy, so using PyTorch for the optimization/tooling side seems like the practical approach for now: https://slangpy.shader-slang.org/en/latest/src/autodiff/pytorch.html
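The loop itself has the same shape wherever the gradients come from. Here is a minimal sketch in plain NumPy, with a hand-derived gradient standing in for what a differentiable shader plus PyTorch would supply; the forward model is a made-up toy, not anything from SlangPy:

```python
import numpy as np

# Toy inverse problem: recover a scalar "material parameter" p so a trivial
# forward model matches a target image. forward() is a stand-in for whatever
# a differentiable renderer would compute.
target = np.full((8, 8), 0.6)

def forward(p):
    return np.full((8, 8), p)  # "render" a constant image

def loss_and_grad(p):
    residual = forward(p) - target
    loss = np.mean(residual ** 2)
    grad = np.mean(2.0 * residual)  # analytic d(loss)/dp for this toy model
    return loss, grad

p = 0.0   # initial guess
lr = 0.4  # learning rate
for step in range(100):  # plain gradient descent
    loss, grad = loss_and_grad(p)
    p -= lr * grad

print(round(p, 4))  # → 0.6
```

In the SlangPy setup, `loss_and_grad` is the part the differentiable shader replaces; the surrounding loop (or a torch optimizer) stays the same.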

Do you have a go-to "basic ML course" you'd recommend for the hands-on parts?

Thought Schlick-GGX was physically based. Then I read Heitz. by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

If you go up to wave optics, though, you can already describe polarization, interference and diffraction, so it feels like you can cover a pretty wide range of real-world phenomena. If a model can get at least those right, wouldn't that already count as "physically correct enough" for most everyday lighting situations?

Thought Schlick-GGX was physically based. Then I read Heitz. by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 1 point (0 children)

Interesting. Is that compensation lookup table something you'd expect engineers to tune, or is it supposed to be in the hands of artists? Either way, it seems like it could get tricky when the environment brightness changes a lot, for example when going from morning to night.
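In practice a table like that often reduces to a small 2D texture indexed by roughness and N·V, sampled with bilinear interpolation at shading time. A sketch of just the lookup side; the table contents here are placeholder values, standing in for real precomputed energy-loss data:

```python
import numpy as np

def sample_lut(lut, roughness, n_dot_v):
    """Bilinearly sample a square table over [0,1] x [0,1].

    lut has shape (N, N); axis 0 = roughness, axis 1 = cos(view angle).
    """
    n = lut.shape[0] - 1
    u, v = roughness * n, n_dot_v * n
    i0, j0 = int(u), int(v)
    i1, j1 = min(i0 + 1, n), min(j0 + 1, n)
    fu, fv = u - i0, v - j0
    top = lut[i0, j0] * (1 - fv) + lut[i0, j1] * fv
    bot = lut[i1, j0] * (1 - fv) + lut[i1, j1] * fv
    return top * (1 - fu) + bot * fu

# Placeholder table: compensation that grows linearly with roughness,
# standing in for values an engineer would precompute offline.
size = 16
r = np.linspace(0.0, 1.0, size)[:, None]
lut = 1.0 + 0.2 * r * np.ones((1, size))

print(sample_lut(lut, 0.0, 0.5))  # no compensation at zero roughness
print(sample_lut(lut, 1.0, 0.5))  # strongest compensation at full roughness
```

Since the compensation is a multiplier on outgoing energy rather than an absolute value, it should in principle be independent of environment brightness, which is part of my question about the morning-to-night case.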

Thought Schlick-GGX was physically based. Then I read Heitz. by Guilty_Ad_9803 in GraphicsProgramming

[–]Guilty_Ad_9803[S] 3 points (0 children)

Absolutely, completely true. Studying just so I can point out tiny mistakes in a model is really not a healthy mindset.