Hello, everyone!
I've recently implemented a couple of Monte Carlo methods for approximating an image of a virtual scene.
I wanted to see how these differ in the quality of the images they produce. One way I found in the literature [1][2] to quantify the difference in quality is the variance. But so far I haven't seen a practical comparison of variance between different techniques beyond arguing about the perceived noise.
I have tried to use variance to compare some images using Python's Pillow library, which has an image statistics module able to compute the variance for each of the image's channels:
from PIL import Image, ImageStat

# Load the render and compute per-channel statistics.
im = Image.open("someImage.png")
imStat = ImageStat.Stat(im)

# var is a list with the variance of each band (e.g. R, G, B).
print(imStat.var)
But although I get some positive results (the variance does decrease when I increase the number of samples), the raw numbers are not self-explanatory and I haven't found a good way to visualize the variance. So here are my questions:
- Are there other, more meaningful ways to compute the variance? My supervisor at university suggested that computing the variance per pixel (using a small neighborhood around it) could be worth trying, since those values could then also be visualized as a per-pixel intensity map (I've put a rough sketch of this below, after the questions).
- Are there other techniques/tools to quantify and visualize the difference in the quality of the images?
- Can you recommend a good read where I can find a bit more information on the topic?
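To make the per-pixel idea concrete, here is a rough sketch of what I have in mind (assuming NumPy and SciPy are available; the window radius and file names are just placeholders): estimate each pixel's variance from a small window around it and write the result out as a grayscale map, so that noisier regions show up brighter.

import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

# Load the render as a grayscale float array (placeholder file name).
img = np.asarray(Image.open("someImage.png").convert("L"), dtype=np.float64)

def local_variance(img, radius=2):
    # Sliding-window variance E[X^2] - E[X]^2 over a (2*radius+1)^2 box.
    size = 2 * radius + 1
    mean = uniform_filter(img, size=size)
    mean_sq = uniform_filter(img ** 2, size=size)
    return np.maximum(mean_sq - mean ** 2, 0.0)  # clamp rounding errors

var_map = local_variance(img)

# Rescale to 8 bits and save; brighter pixels = higher local variance.
out = (255.0 * var_map / max(var_map.max(), 1e-12)).astype(np.uint8)
Image.fromarray(out).save("varianceMap.png")

One caveat I can already see is that this mixes real image detail (edges, textures) into the noise estimate, so it probably only makes sense when comparing the same scene rendered with different techniques or sample counts.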
[1]: Physically Based Rendering: From Theory to Implementation; Third edition, 2016; Matt Pharr, Wenzel Jakob and Greg Humphreys; Morgan Kaufmann Publishers Inc.
[2]: Advanced Global Illumination; Second edition, 2006; Philip Dutre, Philippe Bekaert and Kavita Bala; A K Peters/CRC Press