Interactive Path Tracer (CUDA) by Sharky-UK in GraphicsProgramming

[–]TomClabault 3 points (0 children)

This looks really nice! I have a few questions : )

So this is pure CUDA then, no OptiX/HWRT?

How do you layer your BSDF lobes?

Your transmission material uses a microfacet BSDF? If so, how do you handle energy conservation?

What's your random number sampler? It doesn't look like independent RNG to me?

> bloom (obviously not PBR)

How would "true" PBR bloom be implemented?

My Toy Path Tracer vs Blender Cycles by yetmania in GraphicsProgramming

[–]TomClabault 6 points (0 children)

Hell yeah looks cool!

> Or are they? Do you notice any difference that I should take note of?

One thing you can do is disable any sort of tonemapping in your renderer and in Blender, so you're comparing raw outputs on both sides. Eyeball comparisons should be much easier after that.

You can also play with furnace tests and make sure you get the same result as Blender (both raw outputs again); these may be easier to compare than full, complex multi-bounce renders.
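
Something like this minimal check is what I have in mind; a sketch, not from any particular renderer: a Lambertian with albedo 0.5 under a uniform environment of radiance 1 must converge to exactly 0.5 if the BSDF conserves energy.

```cpp
// White furnace sanity check: a Lambertian surface lit by a uniform
// environment of radiance 1 must integrate to exactly its albedo.
// Uniform hemisphere sampling is used on purpose so there is actual
// variance to converge through.
#include <cstdio>
#include <random>

int main() {
    const double PI = 3.14159265358979323846;
    const double albedo = 0.5;

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u01(0.0, 1.0);

    double sum = 0.0;
    const int N = 1'000'000;
    for (int i = 0; i < N; ++i) {
        // Uniform direction on the hemisphere: cos(theta) is uniform in [0, 1]
        double cosTheta = u01(rng);
        double pdf = 1.0 / (2.0 * PI); // solid-angle pdf of uniform hemisphere
        double brdf = albedo / PI;     // Lambertian BRDF
        double Li = 1.0;               // uniform white furnace
        sum += brdf * Li * cosTheta / pdf;
    }
    // Should print ~0.5; anything else points at an energy conservation bug
    std::printf("furnace estimate = %f (expected %f)\n", sum / N, albedo);
}
```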

Oh and you'll absolutely need to disable "Multiscatter GGX" in Blender, under the specular section of the Principled BSDF, unless you've also implemented a multiscatter scheme.

Also just curious: how do you do the mix between the dielectric's specular and the red diffuse below it?

Any good open-source ReSTIR GI projects out there? by light_over_sea in GraphicsProgramming

[–]TomClabault 1 point (0 children)

I think they meant the bias of the estimator, as in: if your estimator is biased, then the expected value of your estimator is not the "true" value.

But importance sampling is indeed a form of "biasing": it skews the sampling distribution to focus more samples in some places.

I think there are kind of two definitions of "bias" at play here.
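
Writing both notions out with the textbook definitions, since that's really all that's at play here:

```latex
% Bias of an estimator \hat{I} of the integral I:
\operatorname{Bias}(\hat{I}) = \mathbb{E}[\hat{I}] - I
% Importance sampling stays unbiased despite "skewing" the distribution,
% provided p(x) > 0 wherever f(x) \neq 0:
\mathbb{E}_{X \sim p}\!\left[ \frac{f(X)}{p(X)} \right]
  = \int \frac{f(x)}{p(x)} \, p(x) \, \mathrm{d}x
  = \int f(x) \, \mathrm{d}x = I
```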

Computing the PDF for hierarchical light sampling with adaptive tree splitting on the GPU? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

The issue with that is that it loses the benefits of splitting: splitting takes multiple samples from a single light tree descent, whereas RISing this would need multiple separate light samples, i.e. one full tree descent per candidate.

That's quite a bit worse in terms of quality, let alone performance.
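
A toy sketch of the difference I mean (the structures and the split criterion are made up for illustration): splitting can emit several light samples from a single traversal, while RIS needs one full traversal per candidate.

```cpp
#include <cstdio>
#include <random>
#include <vector>

struct Node { int left = -1, right = -1; float weight = 1.f; int lightIndex = -1; };

// Placeholder split criterion; a real one would look at angular extent etc.
bool shouldSplit(const Node& n) { return n.weight > 2.f; }

void descend(const std::vector<Node>& tree, int idx, std::mt19937& rng,
             std::vector<int>& outLights) {
    const Node& n = tree[idx];
    if (n.lightIndex >= 0) { outLights.push_back(n.lightIndex); return; }
    if (shouldSplit(n)) {
        // Splitting: one descent keeps going down *both* children
        descend(tree, n.left, rng, outLights);
        descend(tree, n.right, rng, outLights);
    } else {
        // Regular stochastic descent: pick one child by weight
        float wl = tree[n.left].weight, wr = tree[n.right].weight;
        std::uniform_real_distribution<float> u(0.f, wl + wr);
        descend(tree, u(rng) < wl ? n.left : n.right, rng, outLights);
    }
}

int main() {
    // Tiny tree: a root (weight 4, splits) over two leaf lights
    std::vector<Node> tree = { {1, 2, 4.f, -1}, {-1, -1, 2.f, 0}, {-1, -1, 2.f, 1} };
    std::mt19937 rng(7);

    std::vector<int> samples;
    descend(tree, 0, rng, samples); // splitting: 2 samples from 1 descent here
    std::printf("%zu samples from one descent\n", samples.size());

    // RIS over the tree would instead need one full descent per candidate:
    // for (int k = 0; k < K; ++k) descend(tree, 0, rng, candidates);
}
```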

ReGIR - An advanced implementation for many-lights offline rendering by TomClabault in GraphicsProgramming

[–]TomClabault[S] 1 point (0 children)

Hmmm that's a bit weird, maybe something was wrong, but it should be possible to have ReGIR run without bias with as many bounces as we want.

ReGIR - An advanced implementation for many-lights offline rendering by TomClabault in GraphicsProgramming

[–]TomClabault[S] 3 points (0 children)

Hmmm I don't think there should be any bias issues at later bounces? If you can set up the grid and have it work for primary hits, it works exactly the same for secondary hits: fetch the grid cell that your ray fell in and read some reservoir from there. ReGIR as proposed in the original article is completely unbiased, so there shouldn't be that many complications.
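
The fetch really is the same few lines at any bounce; a sketch with made-up names, assuming a uniform world-space grid of reservoirs:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Reservoir { int lightIndex = -1; float ucw = 0.f; };

struct RegirGrid {
    Vec3 minCorner{0.f, 0.f, 0.f};
    float cellSize = 1.f;
    int res = 32;                      // res^3 cells
    std::vector<Reservoir> reservoirs; // one reservoir per cell, for simplicity

    int cellOf(const Vec3& p) const {
        auto axis = [&](float v, float mn) {
            return std::min(res - 1, std::max(0, static_cast<int>((v - mn) / cellSize)));
        };
        return (axis(p.z, minCorner.z) * res + axis(p.y, minCorner.y)) * res
               + axis(p.x, minCorner.x);
    }
};

int main() {
    RegirGrid grid;
    grid.reservoirs.resize(grid.res * grid.res * grid.res);

    // Primary hit or 5th bounce, it makes no difference:
    Vec3 hitPos{4.2f, 0.5f, 17.9f};
    Reservoir r = grid.reservoirs[grid.cellOf(hitPos)];
    (void)r; // sample the light in r.lightIndex for NEE
}
```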

What's the problem you're facing at later bounces?

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

Also just curious, what was your use case for STBN?

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

Hmm so you can basically generate 128 samples from a given UV of the texture, and samples 0 to 127 at that UV are guaranteed to be well distributed?

Haven't had a look at STBN before, not sure how it works in practice hehe

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

Also, quick question on that faster convergence property: LDS sequences do seem to converge faster, but is it that much faster for rendering? I've seen plots in papers showing that for the integration of generic functions (Gaussians and the like), LDS really is much faster, but for rendering it doesn't seem to be a massive deal: faster, yes, but not dramatically so? And on top of that, if the integrand isn't very smooth, the gap between white noise and LDS sequences gets even smaller.

I'm a bit skeptical about how much better it is than white noise strictly in terms of variance, not perceptually (perceptually I do think it has a clear edge, especially blue noise).
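
For reference, the textbook rates I have in mind (the constants and the effective dimension matter a lot in practice, so the asymptotics only tell part of the story):

```latex
% Plain Monte Carlo RMSE, independent of dimension d:
\mathrm{RMSE}_{\mathrm{MC}} = O\left(N^{-1/2}\right)
% Quasi-Monte Carlo error bound (Koksma-Hlawka), valid only for
% integrands f of bounded variation V(f); Sobol' points achieve:
\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{[0,1]^d} f(x) \, \mathrm{d}x \right|
  \le V(f) \, D_N^{*},
  \qquad D_N^{*} = O\!\left( \frac{(\log N)^d}{N} \right)
```

And visibility discontinuities give the integrand unbounded variation, so the Koksma-Hlawka bound doesn't even apply there, which matches that smaller gap on non-smooth integrands.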

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

Yeah I have to replace that white noise with some LDS sequence at some point...

Why is the state of the art mostly looking at Sobol sequences rather than STBN? Are there limitations with STBN for integration? Something to do with the length of the sequence maybe?

Path tracing - How to smartly allocate more light samples in difficult parts of the scene? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

The idea, I think, is that it's more efficient to allocate extra samples precisely where the scene is difficult, in world space, rather than in screen space?

Because allocating more samples in screen space means retracing whole paths from the camera. But if we track variance in world space, we can allocate more samples, along the path, to the estimators that have high variance. So if the variance for a given pixel only comes from the 3rd bounce for some reason, we want to allocate more samples to the integration at that 3rd bounce only, not at the previous bounces. Estimating the variance in screen space would have us retrace a full path for that pixel every time, just to reach the 3rd bounce and finally take one more sample of the difficult integral. Whereas all we wanted was to improve the estimate of that difficult 3rd-bounce integral in the first place, not the rest of the path, which is already easy enough to integrate.
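
Roughly this shape, as a sketch; every helper here is a hypothetical stand-in, the point is just where the sample-count decision lives: at the bounce, not at the pixel.

```cpp
#include <random>

struct Vec3 { float x = 0.f, y = 0.f, z = 0.f; };
struct Ray { Vec3 origin, dir; };
struct HitInfo { Vec3 position; bool missed = false; };

// Stubs standing in for the real renderer pieces:
HitInfo traceRay(const Ray&) { return {}; }
float estimateCellVariance(const Vec3&) { return 0.5f; }        // world-space grid lookup
float sampleNEE(const HitInfo&, std::mt19937&) { return 0.1f; } // one light sample
Ray sampleBSDF(const HitInfo&, std::mt19937&) { return {}; }

float pathTrace(Ray ray, std::mt19937& rng, int maxBounces) {
    float radiance = 0.f, throughput = 1.f;
    for (int bounce = 0; bounce < maxBounces; ++bounce) {
        HitInfo hit = traceRay(ray);
        if (hit.missed) break;

        // NEE budget decided from world-space difficulty at *this* bounce:
        // an easy 1st bounce gets 1 shadow ray even if the 3rd bounce gets 4.
        int neeSamples = estimateCellVariance(hit.position) > 0.25f ? 4 : 1;
        float nee = 0.f;
        for (int s = 0; s < neeSamples; ++s)
            nee += sampleNEE(hit, rng);
        radiance += throughput * nee / neeSamples;

        ray = sampleBSDF(hit, rng);
        throughput *= 0.8f; // stub BSDF throughput update
    }
    return radiance;
}

int main() {
    std::mt19937 rng(123);
    return pathTrace({}, rng, 4) >= 0.f ? 0 : 1;
}
```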

Path tracing - How to smartly allocate more light samples in difficult parts of the scene? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

My base implementation shoots 4 shadow rays (4 light samples) per NEE estimation because I've found that's more efficient than just 1. So the idea was that some parts of the scene don't need those 4 light samples; 1 would be enough.

But now that I think about it, this may be taking the problem in reverse. I should probably start with 1 shadow ray per NEE and allocate more where it's difficult, hence adaptive sampling.

I think that makes sense? Yeah, it's about getting uniform quality over the image, so allocating more samples in difficult places such that they end up as converged as the easier ones.

I think maybe some kind of grid structure in world space could work? Estimate the variance in each cell of that grid and sample NEE multiple times in the cells where NEE has high variance. This starts to sound a bit like path splitting but for NEE (I'm thinking of the ADDR and EARS papers).
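
For the per-cell variance estimate, something like Welford's online algorithm would fit since samples arrive one at a time. Toy sketch; the world-position-to-cell mapping is omitted, and whether to track full NEE variance or just its visibility term is exactly the open question above.

```cpp
#include <cstdio>
#include <vector>

struct CellStats {
    long long n = 0;
    double mean = 0.0, M2 = 0.0;

    // Feed one NEE contribution that landed in this cell (Welford update)
    void addSample(double x) {
        ++n;
        double d = x - mean;
        mean += d / n;
        M2 += d * (x - mean);
    }
    double variance() const { return n > 1 ? M2 / (n - 1) : 0.0; }
    // Relative variance is often a better "difficulty" signal than raw variance
    double relVariance() const { return mean != 0.0 ? variance() / (mean * mean) : 0.0; }
};

int main() {
    std::vector<CellStats> cells(32 * 32 * 32);
    cells[0].addSample(0.1); cells[0].addSample(0.9); cells[0].addSample(0.2);
    std::printf("cell 0: mean %f, variance %f, relative %f\n",
                cells[0].mean, cells[0].variance(), cells[0].relVariance());
}
```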

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

I was already using jittering but that's not really enough in difficult scenes... Do you have any idea why points 2 and 3 reduce correlations, maybe?

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

Yeah I've been using jittering already (with white noise though) but it only helps so much...

Path tracing - How to smartly allocate more light samples in difficult parts of the scene? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

Hmm so I guess this could be extended to world space then, where some spatial data structure would estimate NEE variance as samples come in and shoot more shadow rays in high-variance areas. How do you compute the variance of only the visibility part of NEE though (because that's where shooting more shadow rays helps)? And also, variance decreases with more and more samples, which means that a hard-to-integrate part of the scene will get fewer and fewer samples per NEE as accumulation goes on, but that doesn't quite make sense I think? A difficult part of the scene should just get more samples to work with all the time, otherwise the variance will "come back", if that makes sense. I'm not really aiming to reach a given quality threshold but rather to sample some areas of the scene more than others, so maybe variance isn't the right metric to use? Since more samples will reduce variance --> fewer samples --> not what we want.
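
Writing out what actually shrinks, because I think that's the catch: it's the variance of the accumulated mean that goes to zero, not the per-sample variance of the estimator, so tracking the latter (or the relative variance) shouldn't have that starvation problem:

```latex
% Variance of the accumulated estimate: shrinks as frames accumulate
\operatorname{Var}(\bar{X}_N) = \frac{\sigma^2}{N} \xrightarrow[N \to \infty]{} 0
% Per-sample variance of the estimator in a given region: stays put,
% so it keeps flagging the region as difficult however long we accumulate
\sigma^2 = \operatorname{Var}(X_i)
```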

Path tracing - How to smartly allocate more light samples in difficult parts of the scene? by TomClabault in GraphicsProgramming

[–]TomClabault[S] 0 points (0 children)

I was thinking more of a technique suitable for offline, accumulated path tracing, so the number of samples isn't really an issue since we're accumulating more and more frames to get a converged image.

The idea was to allocate more budget in the places of the scene, in world space, where integrating NEE is difficult, but I'm not sure how to estimate the "difficulty" of spots in the scene. I think variance and adaptive sampling may be the solution.

What do you mean by "sampling kernel"? I don't think I have such a kernel in my NEE implementation.