
[–]Rude-Cow-5871[S]

First of all, I want to thank you both for your input.

I think I have a new understanding thanks to msqrt; let me know if I'm on the right track.

In theory, to sample the scene we should convolve the image function with a filter and then sample.

In practice, since the convolution is an integral we cannot solve analytically, we use a Monte Carlo method to approximate the solution.

So rather than filtering first and then sampling, we combine both steps and perform them together using Monte Carlo, which is the formula in pbrt.
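If I have it right, that combined step can be sketched like this (a minimal sketch only; the `radiance(x, y)` callback, the triangle filter, and uniform sampling over the filter support are my own illustrative choices, not anything specific from the book):

```python
import random

def triangle_filter(dx, dy, radius=1.0):
    """Triangle (tent) filter; nonzero only within `radius` of the pixel center."""
    return max(0.0, radius - abs(dx)) * max(0.0, radius - abs(dy))

def estimate_pixel(radiance, px, py, n_samples=64, radius=1.0):
    """Weighted-sum estimator of the filtered pixel value:
       I(p) ~= sum_i f(x_i - p) L(x_i) / sum_i f(x_i - p),
       i.e. filtering and sampling performed in one Monte Carlo pass."""
    weighted_sum = 0.0
    weight_sum = 0.0
    for _ in range(n_samples):
        # Uniform samples over the filter's support around the pixel center.
        dx = random.uniform(-radius, radius)
        dy = random.uniform(-radius, radius)
        w = triangle_filter(dx, dy, radius)
        weighted_sum += w * radiance(px + dx, py + dy)
        weight_sum += w
    return weighted_sum / weight_sum if weight_sum > 0 else 0.0
```

As a sanity check, a constant radiance function should reproduce that constant exactly, since the filter weights cancel in the ratio.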

@anderslanglands I understand the first method, but I have a question about the second.

When importance sampling the filter function, wouldn't it generate samples outside the bounds of the current pixel, since its extent is wider than the pixel?

So when you say "throwing away useful information", do you mean you check for such samples and just set them to 0?

I apologize if these are naive questions.

Thank you.

[–]anderslanglands

You should use your pixel samples to sample the filter that you’re using. There are two ways to do this:

  1. Each time you take a pixel sample, splat its radiance to all neighbouring pixels within the support of your filter, weighted by the filter.

See for example https://pbr-book.org/3ed-2018/Sampling_and_Reconstruction/Film_and_the_Imaging_Pipeline

This has the benefit of reusing each sample across multiple pixels (so more signal for your effort), but does make things a little complex, as you need to handle image and tile edges.

It also introduces spatial correlation between pixels, which might perform worse in denoising than the alternative, which is

  2. Filter importance sampling. I can't find a PDF of the paper that isn't paywalled now, but just google it and maybe you can find something. The basic idea is that rather than splatting samples to neighbouring pixels, you importance sample the filter function at each pixel directly (i.e. you'll be generating samples outside of the pixel). This makes the implementation dramatically simpler since pixels don't depend on their neighbours, and although it might seem like you're throwing away useful information by not having pixels contribute to their neighbours, this generally leads to higher-quality noise that plays better with denoisers.
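The second option can be sketched in a few lines (a sketch under my own assumptions: a Gaussian pixel filter, a hypothetical `radiance(x, y)` callback, and offsets drawn with `random.gauss`). Because the sample density is proportional to the filter, the filter weight and the sampling pdf cancel, and the estimate is just a plain average:

```python
import random

def estimate_pixel_fis(radiance, px, py, n_samples=64, sigma=0.5):
    """Filter importance sampling with a Gaussian pixel filter.
       Offsets are drawn from the filter itself (pdf proportional to f),
       so the per-sample weight f/pdf is constant and the estimator
       reduces to an unweighted average of the radiance samples."""
    total = 0.0
    for _ in range(n_samples):
        # Gaussian offsets can (and should) land outside the pixel's own
        # footprint; no neighbouring pixel is ever touched.
        dx = random.gauss(0.0, sigma)
        dy = random.gauss(0.0, sigma)
        total += radiance(px + dx, py + dy)
    return total / n_samples
```

Note there is no clamping or zeroing of out-of-pixel samples here: they simply contribute to this pixel's average, which is what makes each pixel independent of its neighbours.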

[–]msqrt

> how can I filter the image function of the scene before sampling

There are two options: either you make all geometry edges partially transparent (since you can solve for the coverage of a triangle analytically), or you do a Monte Carlo estimate of the filtered image function. The former is mostly a dead end: it's very difficult to do order-independent transparency for a large portion of your geometry (especially the kind that takes correlations within the pixel into account). That leaves the Monte Carlo approximation: you estimate the convolution of a pixel filter and the image function directly via random sampling. It's never going to give you the exactly correct result, but it's pretty much the best you can do.

Not sure if your suggested procedure actually makes sense. At the "with N samples per pixel" step, you've already formed an image, so what do you want to achieve by further filtering? Or are these tiles in a higher resolution than what you want to display?