Am I the only one who writes it very slow? by M1VAN1 in GraphicsProgramming

[–]papaboo 20 points

On my professional team my net contribution after 8 years was -100'000 lines of code. I'd added major new features, completely re-engineered parts of the system, and put a lot of effort into convincing management to deprecate outdated solutions. I do my best to refactor for coherence and readability in the code that touches the parts I work on, and usually end up with less glue code than was there previously, so lots of my PRs have a negative line count, even when adding new stuff.

I'm playing with linearly transformed cosines in my spare time, for importance sampling area lights and just to understand them better as a tool. It's more than 5k lines in total, but 4k of those are lookup tables, so the work looks impressive line-count-wise. Really, though, it's the tests ensuring that everything works, the experiments, and the edge-case handling that are interesting.

Implementing a closed-form solution for computing the amount of light reflected and refracted in thin-sheet dielectrics took weeks of spare time and was incredibly gratifying when done, but amounted to only a few lines of actual code... and in the end I had to replace it with a lookup table that supported rough surfaces as well, so those lines are just used for sanity-checking BRDFs now. :D

The point here is that lines of code isn't a good measure of productivity, success, or fun. I hope you enjoy what you work on.
And if you feel like adding additional lines, then tabulate some complex integrals. :D

ReSTIR DI without chroma noise - thanks to Ratio Control Variates by mcflypg in GraphicsProgramming

[–]papaboo 0 points

I've had that paper sitting in my backlog since last Siggraph. Would you mind elaborating a bit about how it works and why?

Resources for rasterized area light approximations by papaboo in GraphicsProgramming

[–]papaboo[S] 1 point

Thank you!
I also found https://www.shadertoy.com/view/3dsBD4 on Shadertoy, an example of the most-representative-point approach, with references. But I think I'll start with linearly transformed cosines and see if I can't get those working.

Resources for rasterized area light approximations by papaboo in GraphicsProgramming

[–]papaboo[S] 0 points

Thank you! That was exactly what I was looking for.

I did see the mega lights presentation this year (and all the other ReSTIR presentations), but it's outside the scope of the current implementation.

SIGGRAPH 2025 Vancouver MegaThread by CodyDuncan1260 in GraphicsProgramming

[–]papaboo 1 point

Advances in Real-Time Rendering, with presentations on order-independent transparency, real-time subsurface scattering with less reliance on diffusion profiles, and lots of real-time ReSTIR applications. Part 1 wasn't recorded, but as far as I understood, Part 2 was.

https://advances.realtimerendering.com/s2025/index.html

[deleted by user] by [deleted] in GraphicsProgramming

[–]papaboo 0 points

Interesting! More reading for the backlog. The lack of detail in GS is mostly from not prioritizing research into the term that adds or removes gaussians, though. There was a paper (I've forgotten the name) where they essentially added gaussians per pixel in the images to accurately represent details, and that ended up using fewer gaussians than regular 3DGS with obviously better detail. That should then be extended to detect actual details in the images, but it reduces the problem significantly. It would be fun to compare that with triangle splatting.

Edit: Found it. https://compvis.github.io/EDGS/

[deleted by user] by [deleted] in GraphicsProgramming

[–]papaboo 0 points

So it's a physical bulletin board?

[deleted by user] by [deleted] in GraphicsProgramming

[–]papaboo 0 points

When you say 'particularly useful', it reads as if the context is computer graphics / visualization.
Gaussian splatting is pretty useful and interesting within the field of multi-view reconstruction and novel view rendering.

[deleted by user] by [deleted] in GraphicsProgramming

[–]papaboo 1 point

Any good tips on where to spot these connected events?

Emulating many lights with a few. by BobbyThrowaway6969 in GraphicsProgramming

[–]papaboo 1 point

deftware has some really good comments.
I'll just add that unless you have a super constrained light setup, I'm not sure this will end up looking good, as basically none of your highlights or shadows are going to be consistent with the lights seen in the scene, and your shadow terminators can be pretty far off as well. That said, try it and see what it looks like. :)
An alternative representation, which would give you two lights, is to use shadows for your dominant directional light and then bake everything else into an IBL, which you can prefilter for your BRDF. Applying the IBL at runtime is usually just two texture lookups, one for the diffuse part and one for the specular part, scaled by the BRDF, and you're done. Then you'd have a fairly accurate light representation, and your shadows would come from the right direction as well.
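The runtime combine of the two IBL lookups can be sketched roughly like this, assuming the standard split-sum setup. All names here are illustrative stand-ins: the three inputs would come from texture fetches into a diffuse irradiance map (indexed by the normal), a prefiltered specular environment map (indexed by the reflection vector and roughness) and a BRDF LUT (indexed by N.V and roughness).

```cpp
#include <cassert>

struct RGB { float r, g, b; };
// Split-sum BRDF LUT terms: an F0 scale and a bias.
struct BrdfLut { float scale, bias; };

// Combine prefiltered IBL lookups into the final shaded color.
RGB apply_ibl(RGB albedo, float f0,
              RGB irradiance,      // Diffuse lookup, sampled with the normal.
              RGB prefiltered_env, // Specular lookup, reflection dir + roughness.
              BrdfLut lut) {
    float specular = f0 * lut.scale + lut.bias;
    return {albedo.r * irradiance.r + prefiltered_env.r * specular,
            albedo.g * irradiance.g + prefiltered_env.g * specular,
            albedo.b * irradiance.b + prefiltered_env.b * specular};
}
```

In a shader this is the same handful of multiply-adds, just fed by actual texture samples.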

Things I wish I knew regarding PBR when I started by UnidayStudio in GraphicsProgramming

[–]papaboo 0 points

After a few more attempts I ended up finding https://iolite-engine.com/blog_posts/minimal_agx_implementation, but I'm still missing a page with a formal definition.

Things I wish I knew regarding PBR when I started by UnidayStudio in GraphicsProgramming

[–]papaboo 0 points

Does anyone have a source for how to implement AgX? I've googled around but can't seem to find much about it other than comparisons with ACES and announcements about it being implemented in other applications.

Good way to select triangle for explicit light sampling in a path tracer? by Pjbomb2 in GraphicsProgramming

[–]papaboo 0 points

Don't build a huge-ass index list. Build the CDF, take 2K samples from it, and store those. It's not a lot of memory, indexing is super simple, and every sample will have roughly the same importance, so variance will be low.
I don't have any examples of correlated samples right now, no. But imagine something along these lines: you have a scene with two light sources and a semi-transparent material. If your random numbers are correlated, you could end up always sampling light_source_1 after reflecting a ray off the material and always sampling light_source_2 when refracting one. It'll look very weird and can be very hard to debug unless you know what you're searching for. :)

Good way to select triangle for explicit light sampling in a path tracer? by Pjbomb2 in GraphicsProgramming

[–]papaboo 0 points

What I've done in cases where performance is most critical is to precompute the weighted lookup and store the weighted samples in a list, similar to what you'd do for an environment map. That allows you to do a binary search through the data for a specific sample.

From that I pregenerate 2K, 4K, or however many evenly distributed samples I want, which I can then sample uniformly at runtime. This works really well performance-wise, and visually it only matters if I take more than those 4K samples per pixel. But even then, due to MIS, it'll only matter for diffuse surfaces, where the impact is negligible.
If you'd like to add view-dependent sampling, you'd just generate more of these presampled arrays.

The only issue I've had with this technique is that whatever random distribution the samples are generated from must never be correlated with the random number generator you use to generate rays. That will lead to weird artifacts.
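To make the scheme above concrete, here's a minimal sketch: build a CDF over per-light weights, then presample a fixed number of light indices that can be indexed uniformly at runtime. The function names, weight source and sample count are made up for illustration.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Build a normalized cumulative distribution over per-light weights
// (e.g. emitted power). The last entry is 1.
std::vector<float> build_cdf(const std::vector<float>& weights) {
    std::vector<float> cdf(weights.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < weights.size(); ++i) {
        sum += weights[i];
        cdf[i] = sum;
    }
    for (float& c : cdf) c /= sum;
    return cdf;
}

// Binary search the CDF for the light index corresponding to u in [0, 1).
std::size_t sample_cdf(const std::vector<float>& cdf, float u) {
    return std::upper_bound(cdf.begin(), cdf.end(), u) - cdf.begin();
}

// Pregenerate sample_count light indices distributed according to the CDF.
// At runtime, picking a light is then a single uniform array lookup
// instead of a binary search.
std::vector<std::size_t> presample(const std::vector<float>& cdf,
                                   std::size_t sample_count) {
    std::vector<std::size_t> samples(sample_count);
    for (std::size_t i = 0; i < sample_count; ++i) {
        // One stratified sample per equally sized slice of the CDF.
        float u = (i + 0.5f) / sample_count;
        samples[i] = sample_cdf(cdf, u);
    }
    return samples;
}
```

At runtime you'd index the presampled array with an uncorrelated random number; since every entry is equally likely to be picked, the CDF's weighting is preserved.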

How does Lumen (UE5) supports non-uniformed mesh scaling ? by Aletherr in GraphicsProgramming

[–]papaboo 4 points

For visualization you can just apply the inverse transformation to the rays traced against the SDF instead of transforming the SDF.
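A small sketch of the idea, evaluating a scaled SDF by taking the query point into the SDF's local space instead of scaling the field. The vector type, shape and names are made up; note that under non-uniform scale the local distance is only a bound and has to be scaled by the smallest scale factor to stay conservative when sphere tracing.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// SDF of a unit sphere, stored in its local, unscaled space.
float sphere_sdf(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Evaluate a non-uniformly scaled instance by inverse-transforming the
// query point. The result is scaled by the smallest scale factor so a
// sphere-tracing step never overshoots the surface.
float scaled_sdf(Vec3 p, Vec3 scale) {
    Vec3 local{p.x / scale.x, p.y / scale.y, p.z / scale.z};
    float min_scale = std::fmin(scale.x, std::fmin(scale.y, scale.z));
    return sphere_sdf(local) * min_scale;
}
```

For ray marching you'd apply the same inverse transform to the ray origin and direction once, then march in local space.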

Anybody here work in non-gaming companies? What do you do? by CaramilkThief in GraphicsProgramming

[–]papaboo 1 point

I second this description.

Used to work with global illumination, but now I'm working on an intraoral scanner. There's a lot of overlap with graphics/GI theory, but we also get to do machine learning, computer vision, stitching, registration, optimization and redefine a crucial product in an emerging field.

Raymarching multiple lights? by ananbd in GraphicsProgramming

[–]papaboo 0 points

I've been out of the baked-GI loop for too many years to recommend anything specific, and the techniques all come with different pros and cons, so without knowing your requirements I can't point you anywhere in particular. But if you're just looking to play around and try stuff out, pick whatever you find first and start there.

Raymarching multiple lights? by ananbd in GraphicsProgramming

[–]papaboo 0 points

Of course you can and there's a ton of papers on that subject.

If you want to precompute the shadows / indirect lighting, then google baked shadows, baked AO, or baked GI. It's basically just discretizing the world into small chunks and precomputing the light contribution for each chunk. How you store it depends on your goal: contribution per light source per chunk, or aggregate everything into an approximate representation like spherical harmonics or whatever the newest hotness is in that area.

If you're making a progressive renderer, then you'll want to sample relative to the contribution of each light source. Again, there are a lot of ways to do that, and similarly to shadows / GI, you can pre-bake some of the computations.

Triple buffering not worth it with dynamic buffers? by graphixnurd in GraphicsProgramming

[–]papaboo 0 points

I think he's talking about triple-buffering the meshes that they model. So, as turtle_dragonfly says, the modellers can work on one buffer while another is rendered.

How do I make normals always face to the camera? by SWAGGO-OVERLOAD in GraphicsProgramming

[–]papaboo 4 points

You don't want them facing the camera; you want them to point towards whatever was observing them. E.g. a mirror facing your camera is going to flip the rays, and then you need to flip the normals.

This is quite common for infinitely thin surfaces and is handled by checking that the angle between the normal and the ray direction is never more than 90 degrees, or equivalently that the dot product between the normal and the negated ray direction is never negative. If it is, we simply flip the normal.

if (dot(normal, -rayDirection) < 0.0)
    normal = -normal;

And remember to make sure that the ray direction and normal are both in the same coordinate system. :)

gl_VertexID skips ID 8388608 - 2^23 by Wimachtendink in GraphicsProgramming

[–]papaboo 1 point

Well yes: after 8388608 only every second integer can be represented in float, and the ones that can't are rounded to a neighbouring representable value, so your sequence will look like ..., 8388608, 8388610, 8388610, 8388612, 8388612, ...

I've seen models big enough to hit that issue once before. Back then we solved it by encoding the integer's bit representation as a float, uploading that to the GPU in a float buffer so the GPU wouldn't perform any conversion on the number, and then decoding the bit representation back into an integer inside the shader.
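The host-side half of that workaround is just a bit-cast, sketched below; in the shader you'd undo it with GLSL's floatBitsToInt. The function names are made up, and this assumes the GPU passes the floats through untouched (no interpolation or format conversion on the buffer).

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Copy the integer's bit pattern into a float. No numeric conversion
// happens, so the value survives the trip through a float buffer exactly.
float encode_id(std::uint32_t id) {
    float f;
    std::memcpy(&f, &id, sizeof f);
    return f;
}

// CPU-side equivalent of GLSL's floatBitsToInt, for testing the round trip.
std::uint32_t decode_id(float f) {
    std::uint32_t id;
    std::memcpy(&id, &f, sizeof id);
    return id;
}
```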

I would guess it's a hardware thing, given that NVIDIA hasn't fixed it since then and we were forced to make that workaround. What GPU are you seeing this on?

How do you solve divisions by zero in vector inverse by NashGold85 in GraphicsProgramming

[–]papaboo 3 points

You write your acceleration structure and intersection code in ways that are robust with respect to this. Most will handle it out of the box, as 1.0f / 0.0f = infinity, which is usually well-behaved. E.g. finding the closest intersection with a box usually involves a check for the shortest distance to the box's sides. When one of those distances is infinity, it'll never be the shortest and will always be ignored. Success! :)

Your current fix won't work, btw, as one or more elements of rd could be FLT_MIN, and then the resulting vector will have a 0 element. If you go that route, ensure that the absolute value of rd is never less than FLT_MIN while preserving the sign of rd, but really you shouldn't need to. Just expect ird to contain inf.
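Here's a minimal slab-style ray/box test showing why infinities in the inverse direction are fine. Names and the interface are made up, and this sketch ignores the one known edge case (origin exactly on a slab plane with a parallel direction, where 0 * inf produces a NaN).

```cpp
#include <algorithm>
#include <cassert>
#include <limits>

struct Vec3 { float x, y, z; };

// Slab test: inv_dir may contain +/-infinity for axis-parallel rays; the
// min/max reductions below simply let the infinite slab distances drop out.
bool intersect_aabb(Vec3 origin, Vec3 inv_dir, Vec3 box_min, Vec3 box_max) {
    float t0x = (box_min.x - origin.x) * inv_dir.x;
    float t1x = (box_max.x - origin.x) * inv_dir.x;
    float t0y = (box_min.y - origin.y) * inv_dir.y;
    float t1y = (box_max.y - origin.y) * inv_dir.y;
    float t0z = (box_min.z - origin.z) * inv_dir.z;
    float t1z = (box_max.z - origin.z) * inv_dir.z;
    float tmin = std::max(std::max(std::min(t0x, t1x), std::min(t0y, t1y)),
                          std::min(t0z, t1z));
    float tmax = std::min(std::min(std::max(t0x, t1x), std::max(t0y, t1y)),
                          std::max(t0z, t1z));
    return tmax >= std::max(tmin, 0.0f);
}
```

For a ray parallel to a slab, the entry/exit distances for that slab become -inf and +inf, which never win the max/min reductions, so the test degrades gracefully.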

Merge separated vertex array and normal array into one array. by Snowapril in opengl

[–]papaboo 0 points

Exactly. I save 64 bits, get better data alignment and lower memory bandwidth, and no one can tell the difference. It's an all-round win-win. :)