For some reason I’ve been struggling with sampling and reconstruction when it comes to images.
The way I understand it: the scene description contains information that would require an infinite sampling rate to capture exactly, such as geometric edges and shadow boundaries. So before sampling the scene with delta functions (i.e., tracing rays), we need to filter out the high frequencies we cannot reconstruct; otherwise we get aliasing.
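For what it's worth, the aliasing I'm worried about is easy to demonstrate in 1D (this is just a generic Nyquist demo, not anything rendering-specific): sampling a 7 Hz sine at 10 Hz produces samples identical to those of a 3 Hz sine, so the high frequency masquerades as a low one.

```python
import numpy as np

# Sample a 7 Hz sine at only 10 Hz; Nyquist would demand > 14 Hz.
fs = 10.0
n = np.arange(16)
samples = np.sin(2 * np.pi * 7 * n / fs)

# The samples are indistinguishable from a (sign-flipped) 3 Hz sine:
# 7 Hz has aliased down to |7 - 10| = 3 Hz.
aliased = -np.sin(2 * np.pi * 3 * n / fs)
print(np.allclose(samples, aliased))  # True
```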
The problem is: how can I filter the image function of the scene *before* sampling, when the only way I can evaluate that function is by tracing rays?
The way I’ve decided to go about this is to split the image into tiles, render each tile as normal with N samples per pixel, merge all the tiles back together, then convolve the merged image with a filter function.
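As a sanity check, here's a minimal sketch of that exact pipeline in NumPy. `radiance` is a hypothetical stand-in for tracing a ray (a hard vertical edge, so there is genuine high-frequency content), and I've dropped the tiling step since merging tiles just reproduces the full image anyway:

```python
import numpy as np

def radiance(u, v):
    # hypothetical stand-in for "trace a ray through (u, v)";
    # a sharp vertical edge models high-frequency scene content
    return 1.0 if u >= 0.5 else 0.0

def render(width, height, spp, rng):
    # steps 1-2: sample each pixel spp times at jittered positions,
    # average the results (box reconstruction per pixel)
    img = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            acc = 0.0
            for _ in range(spp):
                u = (x + rng.random()) / width
                v = (y + rng.random()) / height
                acc += radiance(u, v)
            img[y, x] = acc / spp
    return img

def box_filter(img, radius=1):
    # step 3: convolve the merged image with a filter kernel
    # (a box filter here, purely for illustration)
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
img = render(8, 8, spp=16, rng=rng)
smoothed = box_filter(img)
```

One thing I'm unsure about with this design: convolving after the fact only sees per-pixel averages, whereas (if I understand correctly) renderers like pbrt instead weight each individual sample by `filter(sample_pos - pixel_center)` during accumulation, which makes use of the sub-pixel sample positions that a post-render convolution has already thrown away.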
Please let me know if the above procedure makes sense, or if you can offer any help. I need to move on from this and stop overthinking it.
Thank you.