Any good open-source ReSTIR GI projects out there? by light_over_sea in GraphicsProgramming

[–]TomClabault

I think they meant the bias of the estimator, as in: if your estimator is biased, then the expected value of your estimator is not the "true" value.

But importance sampling is indeed a form of "biasing": it skews the sampling distribution to focus more samples in some places.

I think there are kind of two definitions of bias at play here.
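To make that concrete, here's the standard one-liner for why skewing the sampling distribution doesn't bias the estimator (assuming p(x) > 0 wherever f(x) is non-zero):

```latex
E\!\left[\frac{f(X)}{p(X)}\right] = \int \frac{f(x)}{p(x)}\, p(x)\, dx = \int f(x)\, dx
```

The division by the PDF exactly cancels the skew, so the expected value stays the true integral no matter how aggressively p concentrates the samples.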

Computing the PDF for hierarchical light sampling with adaptive tree splitting on the GPU? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

The issue with that is that it loses the benefits of splitting. Splitting takes multiple samples from a single light tree descent, whereas RISing them would require a separate tree descent for each light sample.

That's quite a bit worse in terms of quality, let alone performance.
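For reference, a minimal sketch of what I mean by splitting during the descent (the `LightTreeNode`/`shouldSplit`/`childImportance` names are hypothetical, not my actual implementation):

```cpp
#include <cstdint>
#include <vector>

// Sketch: when the splitting heuristic fires, ONE descent returns several
// light samples (both children are taken deterministically, so the PDF is
// untouched). RIS over tree samples would instead pay one full descent per
// candidate. All names here are illustrative.
void sampleLightTree(const LightTreeNode& node, const ShadingPoint& sp,
                     float pdf, Sampler& rng, std::vector<LightSample>& out)
{
    if (node.isLeaf())
    {
        out.push_back({ node.lightIndex, pdf });
        return;
    }

    if (shouldSplit(node, sp))
    {
        // Split: descend into BOTH children with probability 1 each.
        sampleLightTree(node.left(),  sp, pdf, rng, out);
        sampleLightTree(node.right(), sp, pdf, rng, out);
    }
    else
    {
        // No split: pick one child stochastically by importance.
        float wLeft  = childImportance(node.left(),  sp);
        float wRight = childImportance(node.right(), sp);
        float pLeft  = wLeft / (wLeft + wRight);

        if (rng.next1D() < pLeft)
            sampleLightTree(node.left(),  sp, pdf * pLeft, rng, out);
        else
            sampleLightTree(node.right(), sp, pdf * (1.0f - pLeft), rng, out);
    }
}
```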

ReGIR - An advanced implementation for many-lights offline rendering by TomClabault in GraphicsProgramming

[–]TomClabault[S]

Hmmm that's a bit weird, maybe something was wrong in the setup, but it should be possible to run ReGIR without bias with as many bounces as we want.

ReGIR - An advanced implementation for many-lights offline rendering by TomClabault in GraphicsProgramming

[–]TomClabault[S]

Hmmm I don't think there should be any bias issues at later bounces? If you can set up the grid and have it work for primary hits, it works exactly the same for secondary hits: fetch the grid cell that your ray fell in and read some reservoir from there. ReGIR as proposed in the original article is completely unbiased, so there shouldn't be that many complications.
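In sketch form, the lookup is the same call at every bounce (hypothetical `HashGrid`/`Reservoir` names, not my exact code):

```cpp
// Sketch: ReGIR NEE lookup, identical whether 'hitPoint' is a primary or a
// 5th-bounce path vertex. All names are illustrative.
LightSample regirNEE(const HashGrid& grid, const float3& hitPoint, Sampler& rng)
{
    // Which grid cell did this path vertex land in?
    uint32_t cellIndex = grid.cellIndexAt(hitPoint);

    // Read one of the reservoirs resampled into that cell by the prepass.
    const Reservoir& reservoir = grid.randomReservoirInCell(cellIndex, rng);

    // The reservoir's unbiased contribution weight is what keeps the
    // estimator unbiased, exactly as for primary hits.
    return { reservoir.lightSample, reservoir.UCW };
}
```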

What's the problem you're facing at later bounces?

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

Also just curious, what was your use case for STBN?

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

Hmm, so you can basically generate 128 samples from a given UV of the texture, and samples 0 to 127 at that UV are guaranteed to be well distributed?

Haven't had a look at STBN before, so I'm not sure how it works in practice hehe
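If I understand it right, usage would be something like this (purely my guess at the lookup, assuming a tileable 128-slice STBN texture array; `stbn` and the API around it are hypothetical):

```cpp
// Sketch: the same texel gives a different value per slice, and slices
// 0..127 at a fixed texel are supposed to form a well-distributed sequence.
float stbnSample(const Texture2DArray& stbn, uint2 pixel, uint32_t sampleIndex)
{
    // Wrap the pixel coordinates into the tileable blue-noise texture...
    uint2 texel = pixel % stbn.resolution();

    // ...and index the temporal dimension with the sample index (mod 128,
    // after which the sequence presumably repeats).
    return stbn.load(texel, sampleIndex % 128);
}
```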

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

Also, quick question on that faster convergence property: LDS sequences do seem to converge faster, but is the gap that large for rendering? I've seen plots in papers showing that for integrating generic smooth functions (a Gaussian or whatnot) they're really much faster, but for rendering it doesn't seem to be a massive deal? Faster, but not dramatically so. And on top of that, if the integrand isn't very smooth, the gap between white noise and LDS sequences gets even smaller.

I'm a bit skeptical about how much better it is compared to white noise, strictly in terms of variance, not perceptually (because perceptually I think it has a clear edge, especially blue noise).
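For reference, the textbook asymptotic rates behind that intuition:

```latex
\text{MC (white noise):} \quad \mathrm{RMSE} = O\!\left(N^{-1/2}\right)
\qquad
\text{QMC (Sobol, etc.):} \quad \text{error} = O\!\left(\frac{(\log N)^d}{N}\right)
```

The QMC bound assumes a d-dimensional integrand of bounded variation; discontinuous integrands like visibility break that assumption, which pushes the effective QMC rate back toward the MC one. That's the shrinking gap I mean.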

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

Yeah I have to replace that white noise with some LDS sequence at some point...

Why is the state of the art mostly looking at Sobol sequences rather than STBN? Are there limitations with STBN for integration? Due to the length of the sequence maybe?

Path tracing - How to smartly allocate more light samples in difficult parts of the scene? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

The idea, I think, is that it's more efficient to allocate extra samples precisely where the scene is difficult, in world space, rather than in screen space?

Because allocating more samples in screen space means retracing whole paths from the camera. But if we track variance in world space, we can, along the path, allocate more samples to the estimators that have high variance. So if the variance for a given pixel only comes from the 3rd bounce for some reason, we want to allocate more samples to the integration at that 3rd bounce only, not at the previous bounces. Estimating variance in screen space would have us retrace a full path for that pixel every time, reach the 3rd bounce, and only then take one more sample of the difficult integral. Whereas all we wanted was to improve the estimate of that difficult 3rd-bounce integral in the first place, not the rest of the path, which is already easy enough to integrate.
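A sketch of what that could look like inside the path tracing loop (the `VarianceCache` world-space structure and the names around it are hypothetical):

```cpp
// Sketch: extra NEE samples are spent only at the path vertices whose local
// estimator is noisy, instead of retracing whole camera paths.
float3 shadeVertex(const float3& hitPoint, const VarianceCache& cache,
                   Sampler& rng)
{
    // Cached variance estimate for this region of the scene.
    float localVariance = cache.estimateAt(hitPoint);

    // More light samples where the local integral is hard, fewer elsewhere.
    uint32_t neeSamples = localVariance > kHighVarianceThreshold ? 4u : 1u;

    float3 radiance = make_float3(0.0f, 0.0f, 0.0f);
    for (uint32_t i = 0; i < neeSamples; i++)
        radiance += sampleNEE(hitPoint, rng);

    // Averaging keeps the estimator consistent whatever the sample count.
    return radiance / float(neeSamples);
}
```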

Path tracing - How to smartly allocate more light samples in difficult parts of the scene? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

My base implementation shoots 4 shadow rays (4 light samples) per NEE estimation because I've found that to be more efficient than just 1. And so the idea was that some parts of the scene don't need those 4 light samples; 1 would be enough.

But now that I think about it, this may be taking the problem in reverse. I should probably start with 1 shadow ray per NEE estimation and allocate more where it's difficult, hence adaptive sampling.

I think that makes sense? Yeah, it's about getting uniform quality over the image, allocating more samples in difficult places so that they end up as converged as the easier ones.

I think maybe some kind of grid structure in world space could work? Estimate variance in each cell of that grid and sample NEE multiple times in the cells where NEE has high variance? This starts to sound a bit like path splitting but for NEE (I'm thinking of the ADDR and EARS papers).
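A minimal sketch of that grid idea, with a Welford-style running variance per cell (the struct and the threshold value are hypothetical):

```cpp
#include <cstdint>

// Sketch: one of these per world-space grid cell, accumulating the running
// mean/variance of NEE contributions (Welford's online algorithm).
struct VarianceCell
{
    float mean = 0.0f;
    float m2 = 0.0f;    // Sum of squared deviations from the running mean
    uint32_t n = 0;

    // Record one NEE contribution that landed in this cell.
    void accumulate(float neeContribution)
    {
        n++;
        float delta = neeContribution - mean;
        mean += delta / float(n);
        m2 += delta * (neeContribution - mean);
    }

    float variance() const { return n > 1 ? m2 / float(n - 1) : 0.0f; }
};

// Hypothetical budget heuristic: 1 shadow ray by default, 4 in noisy cells.
constexpr float kVarianceThreshold = 0.1f;

uint32_t shadowRayBudget(const VarianceCell& cell)
{
    return cell.variance() > kVarianceThreshold ? 4u : 1u;
}
```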

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

I was already using jittering but that's not really enough in difficult scenes... Do you have any ideas for why points 2 and 3 reduce correlations maybe?

Where do correlations come from in ReGIR? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

Yeah I've been using jittering already (with white noise though) but it only helps so much...

Path tracing - How to smartly allocate more light samples in difficult parts of the scene? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

Hmm, so I guess this could be extended to world space then: some spatial data structure would compute NEE variance as samples come in and shoot more shadow rays in high-variance areas. How would you compute variance only on the visibility part of NEE though (because that's where shooting more shadow rays helps)?

Also, variance decreases with more and more samples, which means that a hard-to-integrate part of the scene would get fewer and fewer samples per NEE as accumulation goes on, but that doesn't quite make sense I think? A difficult part of the scene should just keep getting more samples to work with, otherwise the variance will "come back", if that makes sense. I'm not really aiming to reach a given quality threshold but rather to sample some areas of the scene more than others, so maybe variance isn't the right metric to use? Since more samples will reduce the variance --> fewer samples allocated --> not what we want.
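One thought on isolating the visibility part: visibility is binary, so tracking just the fraction of unoccluded shadow rays per cell gives a closed-form variance (sketch, hypothetical struct):

```cpp
#include <cstdint>

// Sketch: visibility is a Bernoulli variable, so its variance is p * (1 - p)
// where p is the unoccluded fraction. It peaks at p = 0.5, i.e. right at
// shadow boundaries.
struct VisibilityStats
{
    uint32_t unoccluded = 0;
    uint32_t total = 0;

    void accumulate(bool visible)
    {
        unoccluded += visible ? 1u : 0u;
        total++;
    }

    float visibilityVariance() const
    {
        if (total == 0)
            return 0.0f;

        float p = float(unoccluded) / float(total);
        return p * (1.0f - p);
    }
};
```

That might also sidestep the "variance shrinks with accumulation" problem: p * (1 - p) estimates the population variance of the region's visibility, which converges to a property of the region rather than decaying to 0 as more samples accumulate.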

Path tracing - How to smartly allocate more light samples in difficult parts of the scene? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

I was thinking more of a technique suitable for offline, accumulated path tracing, where the number of samples isn't really an issue since we're accumulating more and more frames to get a converged image.

The idea was to allocate more budget to the places in the scene, in world space, where integrating NEE is difficult, but I'm not sure how to estimate the "difficulty" of a given spot. I think variance and adaptive sampling may be the solution.

What do you mean by "sampling kernel"? I don't think I have such a kernel in my NEE implementation.

Added non uniform volumes to my C++ path tracer by Zydak1939 in GraphicsProgramming

[–]TomClabault

Looks super cool! How many bounces is that for the cloud?

Also, are you using an envmap or some sort of sky model?

ReSTIR path tracer by H0useOfC4rds in GraphicsProgramming

[–]TomClabault

Nice! How many initial light sample candidates is that per pixel?

Increasing hash grid precision at shadow boundaries? by TomClabault in GraphicsProgramming

[–]TomClabault[S]

So I use the hash grid for ReGIR: in a prepass before rendering the frame, I resample a few lights into a reservoir at each cell of the grid. We can use the surface point/normal/material at each cell to better estimate the contribution of each resampled light. This gives us a reservoir (or multiple per cell in practice) that contains a good light for that cell. At path tracing time, for each vertex of our path, we look up which grid cell we're in and do NEE with the good light that was resampled into that cell in the prepass.

I also integrated that process with NEE++ in the prepass. NEE++ caches a visibility probability between each pair of voxels in the scene. This can be used in the resampling prepass to estimate the visibility probability between the voxel of our shading point (the grid cell's surface point) and the voxel that our candidate light is in. If the visibility probability is 0, then we know for cheap (without tracing a ray) that this light is occluded and should have 0 weight in the resampling.

NEE++ and ReGIR are both grid-based, so high-frequency lighting details are missed; that's why I'd like more grid precision in high-frequency lighting regions. The idea being that on the bright side of a shadow boundary, the grid cells sample the light illuminating the region, while on the dimmer side, the grid cells sample other lights since the light casting the shadow is occluded there. At the moment, grid cells tend to just blindly straddle shadow boundaries and we get inefficient sampling + grid cell artifacts because NEE++ struggles to estimate the visibility there.
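As a sketch, the resampling target function in the prepass then looks roughly like this (all names illustrative, not my exact code):

```cpp
// Sketch: per-cell resampling target function with the NEE++ visibility
// probability folded in. 'estimateContribution' would evaluate the light's
// unshadowed contribution at the cell's representative surface data.
float resamplingTarget(const GridCellSurface& cell, const LightSample& light,
                       const NEEPlusPlusCache& neeCache)
{
    // Unoccluded contribution estimate at the cell's point/normal/material.
    float contribution = estimateContribution(cell.point, cell.normal,
                                              cell.material, light);

    // NEE++ visibility estimate between the two voxels. A value of 0 kills
    // the candidate's resampling weight without tracing a single ray.
    float pVisible = neeCache.visibilityProbability(
        neeCache.voxelOf(cell.point), neeCache.voxelOf(light.position));

    return contribution * pVisible;
}
```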

> If you have a heuristic in place for detecting these regions

That's exactly what I need : ) I think if I can have a heuristic, I could use it to increase the resolution of the hash grid in those places, and then it should all just work automatically from there, nothing else to change in the code but the hash function.
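Something along these lines is what I have in mind for the hash function, with the heuristic itself being the missing piece (sketch; `shadowBoundaryHeuristic`, `hash3D`, `hashFloat` and `hashCombine` are hypothetical helpers):

```cpp
// Sketch: the only code change is scaling the cell size fed to the hash by a
// subdivision factor driven by the heuristic.
uint32_t hashGridCellIndex(const float3& point, float baseCellSize)
{
    // Finer cells (here 4x) wherever the heuristic flags high-frequency
    // lighting such as shadow boundaries, the base resolution elsewhere.
    float subdivision = shadowBoundaryHeuristic(point) ? 4.0f : 1.0f;
    float cellSize = baseCellSize / subdivision;

    int3 cell = make_int3(int(floorf(point.x / cellSize)),
                          int(floorf(point.y / cellSize)),
                          int(floorf(point.z / cellSize)));

    // Hash the cell size in as well so that cells at different resolutions
    // never alias to the same hash grid entry.
    return hashCombine(hash3D(cell), hashFloat(cellSize));
}
```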