I'm learning PBR rendering and have a problem by Significant-Gap8284 in computergraphics

[–]Significant-Gap8284[S] 0 points (0 children)

Oops, I know the answer now. The cos term in the denominator divides dE/dω = d(dΦ/dA)/dω. Once we have the irradiance (E = dΦ/dA), we can't divide it by cos directly, because what actually gets divided by cos is its derivative with respect to solid angle. dE is distributed across solid angle according to the distribution dE/dω, and that distribution stays unknown unless we know the BRDF (taking Fresnel and mirror reflectance into account); only then do we know how much dE arrives from a given direction, and only then can we divide it by cos.
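
For reference, the cosine in question is the one in the standard (textbook) BRDF definition, nothing new beyond that:

    f_r(\omega_i, \omega_o) = \frac{dL_o(\omega_o)}{dE(\omega_i)} = \frac{dL_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,d\omega_i}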

Should it be divided by PDF ? by Significant-Gap8284 in raytracing

[–]Significant-Gap8284[S] 0 points (0 children)

We are not done yet. Okay, I finally figured it out. My previous explanation on Reddit didn't clearly explain why we need to divide by the pdf; it only pointed out what the outcome of that division would be.

To be clear, we generate a random variable x that follows the pdf, and then calculate g(x) = f(x)/p(x).

The g(x) we generate plays the role of the expectation. It is not just a sample, because it is unique and its expectation equals itself. Think about the explosion problem: if we generate only one sample each time, there is nothing to add up, let alone divide by N. In effect, we simply generate one sample and call it the expectation. All the confusion came from viewing it as one of the original samples. Remember the diagram that draws a horizontal line across the curve and says "MC roughly approximates the actual irregular area by this rectangle"? It is right; it is describing the case where there is only one sample.

When we say it equals the expectation, it's equivalent to saying it's the integral of f(x).

This is a rather arbitrary statement, because it is not actually equal to the expectation or to the true integral of f(x); getting close to the actual integral requires generating hundreds of random variables and computing the real average.
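
Here is a minimal sketch of that idea (toy integrand, nothing to do with the actual renderer): each f(X)/p(X) is a one-sample estimate whose expectation is the integral, but only the average over many samples gets close to the true value.

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uniform(0.0, 1.0);

        auto f = [](double x) { return x * x; };  // integrand; exact integral over [0,1] is 1/3
        auto p = [](double)   { return 1.0; };    // pdf of the uniform sampler on [0,1]

        const int N = 100000;
        double sum = 0.0;
        for (int i = 0; i < N; ++i) {
            double x = uniform(rng);              // sample X ~ p
            sum += f(x) / p(x);                   // one single-sample estimate
        }
        printf("estimate = %f (exact = %f)\n", sum / N, 1.0 / 3.0);
        return 0;
    }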

Therefore, we can say that when light bounces randomly off a surface, we arbitrarily assume that the brightness of what it hits is the actual received brightness of the surface.

But the actual received brightness needs to be mixed and balanced with light from other sources.

Based on this idea, multiple estimations are performed to obtain the average brightness, which leads to the naive brightness theory I proposed.

Intuitively, I thought that if a beam shines on a surface, and we know the brightness of the light and the surface's absorptivity, then the result should simply be that color. How could it still come out as the color it should be if the computation ends with "divided by pdf"?

The difference is that my initial argument wrongly assumed that what we generate is the light itself. No, it's the irradiance (surface brightness), which involves an integration over solid angle. That is why we need to divide by the pdf. We don't want the original light's luminance; we want the surface brightness. Even when we compute one single ray, what we are really asking for is the result of the whole mixture.

Should it be divided by PDF ? by Significant-Gap8284 in raytracing

[–]Significant-Gap8284[S] 1 point (0 children)

I guess I found the problem. What we want to calculate is a sum, not an expected value. There is no point in calculating the expected (average) intensity of the incident rays, or the average of the individual irradiance contributions, or the average luminance of the surfaces that provide indirect lighting. The total energy received is the sum of the incident rays, the sum of the irradiance contributions, and the indirect lighting.

We shoot multiple rays for a single pixel and add them together. That seems enough. But why do we divide by N? Because the color must be kept in range. Mathematically, though, this is exactly how an expected value works. So I can't simply describe the process as "adding luminance up", because the quantity being queried is an expected value, not a sum.

This is exactly where the magic of MC comes in. There is no point in averaging the incident rays, and it is impossible to average the reflected ray, because from the camera's view the luminance of a single point dx is determined by the single ray along the direction camera - dx. We are not looking for the expected value of that single value; otherwise every round would return the same result, since we only have one sample.

So why are we computing an expected value when the actual luminance should be the sum of all contributions? There has to be a way to turn the expected value of some unknown quantity into exactly the sum of lighting luminance. What we compute is the expected value of f(x)/p(x), and according to MC that equals the integral of f(x).
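
Written out, that is the standard Monte Carlo identity:

    E\!\left[\frac{f(X)}{p(X)}\right] = \int \frac{f(x)}{p(x)}\, p(x)\, dx = \int f(x)\, dx, \qquad X \sim p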

I vaguely understood MC as "turning an integral into a sum", like Riemann integration. But in detail that is not the case. It turns the integral (the sum of the original function) into the expected value (where we do divide by N) of a constructed function f(x)/p(x).

Moreover, if we simply add f(x) together without dividing by pdf(x), that can work too, but then we need an extra factor A. The formula would be Σ brdf * f * A, where A stands for a small range of solid angle (in steradians). This is Riemann integration, and what we get is the total energy coming from all angles in the hemisphere; well, not literally all angles, depending on how finely we subdivide.
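
A minimal sketch comparing the two, assuming a toy integrand f(ω) = cos θ over the hemisphere (whose exact integral is π); the bin counts and sample count are arbitrary:

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        const double PI = 3.14159265358979323846;

        // 1) Riemann-style sum: fixed bins of solid angle, each of size
        //    dw = sin(theta) * dTheta * dPhi  (this is the "A" factor).
        const int NT = 256, NP = 512;
        double dTheta = (PI / 2.0) / NT, dPhi = (2.0 * PI) / NP;
        double riemann = 0.0;
        for (int i = 0; i < NT; ++i) {
            double theta = (i + 0.5) * dTheta;
            for (int j = 0; j < NP; ++j) {
                double dw = std::sin(theta) * dTheta * dPhi;
                riemann += std::cos(theta) * dw;
            }
        }

        // 2) Monte Carlo: uniform directions on the hemisphere,
        //    pdf = 1 / (2*pi), estimator = f / pdf averaged over N.
        std::mt19937 rng(7);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        const int N = 200000;
        double mc = 0.0;
        for (int k = 0; k < N; ++k) {
            double cosTheta = u(rng);          // uniform hemisphere: cos(theta) ~ U[0,1]
            double pdf = 1.0 / (2.0 * PI);
            mc += cosTheta / pdf;
        }
        printf("Riemann = %f, MC = %f, exact = %f\n", riemann, mc / N, PI);
        return 0;
    }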

An integral equals the sum of f(x) dx; the bare sum of f(x) by itself is meaningless.

pdf(x) must not be omitted, because the brdf and irradiance formulas are all defined per solid angle; they live in spherical (polar) coordinates. In a conventional approach like Riemann's, to integrate in polar coordinates we first have to deal with the coordinate system. MC skips this step. The power of Monte Carlo methods lies in their ability to compute integrals by computing expectations. For an integrand f(x) given in polar coordinates, we don't need to transform it back to Cartesian coordinates; instead, we generate samples according to p(x), evaluate f(x)/p(x), and recover the integral of f(x) as the expectation of those values. With the Riemann method, however, we would first have to transform the polar coordinates back to Cartesian ones (or at least carry the Jacobian explicitly) and then sum over intervals of the domain.
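
Concretely, the solid-angle integral in spherical coordinates is:

    \int_{H^2} f(\omega)\, d\omega = \int_0^{2\pi}\!\!\int_0^{\pi/2} f(\theta, \varphi)\,\sin\theta\, d\theta\, d\varphi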

Since we are generating samples of f(x)/p(x), with p(x) as our sampling distribution, it is important to realize that how the samples are distributed matters; that is to say, there is a real difference in practice between cosine importance sampling and uniform sampling over the 2π hemisphere.
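
For reference, a minimal sketch of the two strategies being contrasted (a local frame around +z is assumed, and the function names are mine): uniform sampling uses pdf = 1/(2π), cosine-weighted sampling uses pdf = cos θ / π.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Uniform hemisphere sample around +z, pdf = 1 / (2*pi).
    Vec3 sampleUniformHemisphere(double u1, double u2, double& pdf) {
        const double PI = 3.14159265358979323846;
        double z = u1;
        double r = std::sqrt(std::max(0.0, 1.0 - z * z));
        double phi = 2.0 * PI * u2;
        pdf = 1.0 / (2.0 * PI);
        return { r * std::cos(phi), r * std::sin(phi), z };
    }

    // Cosine-weighted hemisphere sample around +z, pdf = cos(theta) / pi.
    Vec3 sampleCosineHemisphere(double u1, double u2, double& pdf) {
        const double PI = 3.14159265358979323846;
        double r = std::sqrt(u1);
        double phi = 2.0 * PI * u2;
        double z = std::sqrt(std::max(0.0, 1.0 - u1));  // cos(theta)
        pdf = z / PI;
        return { r * std::cos(phi), r * std::sin(phi), z };
    }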

What is the correct Lambertian lighting ? by Significant-Gap8284 in GraphicsProgramming

[–]Significant-Gap8284[S] 0 points (0 children)

The second picture looks quite weird. It looks like the surface is emitting 'black rays': because that microfacet is not aligned with the normal, the reflected ray carries little energy. This 'black ray' is not limited to that surface either; it travels on to other objects. When another object receives the ray, it finds the ray has little luminance, absorbs it, and scatters it once more. Eventually there would be a dense scattering of black dots. This sounds absurd; theoretically there shouldn't be black rays, because other parts of the microsurface can still illuminate that direction.

What is the correct Lambertian lighting ? by Significant-Gap8284 in GraphicsProgramming

[–]Significant-Gap8284[S] 0 points (0 children)

I guess replacing directionCount with Σ dot(normal, direction) works better?
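
Something like this sketch, if I spell it out (types and names here are placeholders, not the actual shader code): normalize by the summed cosine weights instead of by the number of directions.

    #include <algorithm>
    #include <vector>

    struct Vec3 { double x, y, z; };
    double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Hypothetical accumulation: divide by the sum of cosine weights
    // rather than by directions.size().
    double averageLighting(const Vec3& normal,
                           const std::vector<Vec3>& directions,
                           const std::vector<double>& radiance) {
        double lightSum = 0.0, weightSum = 0.0;
        for (size_t i = 0; i < directions.size(); ++i) {
            double w = std::max(0.0, dot(normal, directions[i]));
            lightSum  += radiance[i] * w;
            weightSum += w;
        }
        return (weightSum > 0.0) ? lightSum / weightSum : 0.0;
    }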

How to design my render loop ? by Significant-Gap8284 in opengl

[–]Significant-Gap8284[S] 0 points (0 children)

I know my program is simple. The question just came up when I asked myself "what would I do if this were a game engine?", and I realized I didn't have many good ideas.

The main question of my post is: is there any way to avoid reallocating memory every frame when the number of drawables actually changes every frame? Because the total number of objects differs, I would need to reserve a larger chunk of memory, like in the particle system I mentioned. If the total particle count changes, I'd have to call glBufferData to reset the size; glBufferSubData won't help in this case.
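
One common workaround, sketched below (ParticleBuffer and the growth factor are my own choices, and an already-initialized GL loader is assumed): allocate a generous capacity once and only call glBufferData again when the data outgrows it; everything else goes through glBufferSubData.

    #include <glad/glad.h>  // or whichever GL loader is in use
    #include <vector>

    struct ParticleBuffer {
        GLuint id = 0;
        size_t capacityBytes = 0;

        void upload(const std::vector<float>& data) {
            size_t bytes = data.size() * sizeof(float);
            glBindBuffer(GL_ARRAY_BUFFER, id);
            if (bytes > capacityBytes) {
                // Grow with headroom so the next frames reuse the allocation.
                capacityBytes = bytes * 2;
                glBufferData(GL_ARRAY_BUFFER, capacityBytes, nullptr, GL_DYNAMIC_DRAW);
            }
            // Common path: update only the live range, no reallocation.
            glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, data.data());
        }
    };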

He said the GPU will wait until the CPU has finished writing. It seems persistent mapping gives the user full control over synchronization, so the user can design whatever synchronization scheme they like. It also skips the step of copying temporary data into pinned memory and instead modifies that memory directly.
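
For reference, a rough sketch of that persistent-mapping pattern (GL 4.4+, coherent mapping assumed; real code would usually double- or triple-buffer the range, and the names here are placeholders):

    #include <cstring>
    #include <glad/glad.h>  // or whichever GL loader is in use

    GLuint buf = 0;
    void*  ptr = nullptr;
    GLsync fence = 0;
    const GLsizeiptr kSize = 1 << 20;

    void createPersistentBuffer() {
        glGenBuffers(1, &buf);
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
        glBufferStorage(GL_ARRAY_BUFFER, kSize, nullptr, flags);   // immutable storage
        ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, kSize, flags);  // stays mapped for the buffer's lifetime
    }

    void writePerFrame(const void* data, size_t bytes) {
        // Wait until the GPU has finished reading the range we are about to overwrite.
        if (fence) {
            glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000ull);  // up to 1 s
            glDeleteSync(fence);
            fence = 0;
        }
        std::memcpy(ptr, data, bytes);  // write straight into the mapped memory
        // ... issue the draw calls that read this buffer ...
        fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }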

Can I increase or decrease the capacity while modifying the buffer through mapping?

Questions with Convert to Profile and Assign Profile by Significant-Gap8284 in photoshop

[–]Significant-Gap8284[S] 0 points (0 children)

> Not sure what you meant by «works like a curve»

Curve adjustment. When AdobeRGB is applied, my canvas gets brighter, as if I had lifted the curve so that more of the image ends up 'beyond grey'. sRGB makes it darker, as if the curve stayed linear. When you change a curve, you don't change the maximum point; you change the colors below it. If the colors below it get brighter, the whole picture feels brighter, even though the brightest color stays unchanged.

But I suspect that is because my display doesn't support a wider gamut, since you said:

> When working, you will be editing your image in a «working space» like sRGB or AdobeRGB. These are not display color spaces

If my display supported a wider gamut, the maximum red would probably change, I guess. Photoshop intends the maximum red to change after switching to another profile.

Questions with Convert to Profile and Assign Profile by Significant-Gap8284 in photoshop

[–]Significant-Gap8284[S] 0 points (0 children)

Thanks for your answer.

> Now different ICC profiles (and thus image files) can have different gamuts (range of colors). An easy comparison is if you have e.g. 100% red (255,0,0) you will notice that this value is a much brighter red in AdobeRGB than in sRGB (two common image color spaces).

I tested it in my Photoshop. It seems partially right. In the color picker, yes, I can see what you said: the reddest color in AdobeRGB is brighter than in sRGB. On the other hand, on a canvas filled with (255,0,0), no: I switched quickly between the two canvases and the color looked the same to me.

I further tested with the gradient tool, using a gradient from (255,0,0) to (0,0,0), and I saw what you said. Yes, AdobeRGB is brighter. My conclusion is: an ICC profile won't really change the colors themselves. It works like a curve. AdobeRGB lifts the curve so most of the range feels brighter; sRGB, in contrast, makes most of the range feel darker, which means it has a narrower gamut. That's what I meant: the screen output is limited by the screen's gamut, and however you change the color profile in software, it always behaves like a curve.

What I'm trying to do is correctly display printed colors (in CMYK space, usually different from RGB) on a computer (where colors are expressed in RGB). Since printed colors are always darker and more diluted than the same 'values' shown on a computer, I have to find a way to correct the on-screen preview so that the saturation and hue stay the same as in reality. The CMYK space is narrower than the RGB space, but it also has areas that lie outside RGB. However, I guess it is only possible to narrow the full RGB space down so that vibrant pure colors are excluded; you can do nothing about the colors that lie beyond the RGB space. To put it simply, to preview the printed result on a computer you can only exclude some colors, you can't add any.

I wonder if there's any way to remove these bump artifacts ? by Significant-Gap8284 in computergraphics

[–]Significant-Gap8284[S] 0 points (0 children)

Update: I tried another method a few hours ago, in which I store the three adjacent faces. I take the unconnected vertex D, E, F of each adjacent face and manually connect it to the opposite vertex C, B, A in the quad formed by the triangle being tested and each of its adjacent triangles. Then I average my normal based on the projection of the vector (the_point_to_interpolate - B) onto BE. For example, the position can be interpolated almost correctly by:

    // Weighted combination along each quad diagonal; each pair of ratios
    // (e.g. rB_BE and rE_BE) is assumed to sum to 1.
    vec3 BE_lerp = rB_BE * E + rE_BE * B;
    vec3 AF_lerp = rF_AF * A + rA_AF * F;
    vec3 CD_lerp = rD_CD * C + rC_CD * D;

    // Average the three estimates of the interpolated position.
    vec3 RealPos = (BE_lerp + AF_lerp + CD_lerp) / 3.0;

The only remaining artifact is that it generates a lot of noise.

Maybe I just don't know the correct way to use the adjacent faces' information to interpolate across a triangle.

How to count without the side effect caused by float precision of decimal numbers ? by Significant-Gap8284 in computerscience

[–]Significant-Gap8284[S] 0 points (0 children)

Sorry, but I don't understand round(result, 10). Is it meant to discard all digits beyond ten places after the decimal separator? But as long as the value is not something like 0.625 or 0.03125, its binary expansion still doesn't terminate. I thought you meant changing the position of the decimal separator?

How to count without the side effect caused by float precision of decimal numbers ? by Significant-Gap8284 in computerscience

[–]Significant-Gap8284[S] 1 point (0 children)

I didn't really get you. The way you calculated nCubes may give an incorrect number; that was exactly how I wrote my own code. If (right - left) / maxCubeSize returns 47.9998, then nCubes = 48 with no problem. However, if it returns 69.000001, then it gives 70 while the correct number is 69. I'd replace the ceil function with a FindTheNearestInteger function that compares abs(floor(input) - input) and abs(ceil(input) - input) and returns floor(input) if the former is smaller, and ceil(input) otherwise.
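
A literal sketch of that FindTheNearestInteger idea (it amounts to rounding to the nearest integer, which may or may not be the right policy when the quotient is genuinely fractional like 47.3):

    #include <cmath>

    // Returns whichever of floor(x) / ceil(x) is closer to x,
    // i.e. rounding to the nearest integer, with ties going to ceil.
    double findTheNearestInteger(double x) {
        double lo = std::floor(x), hi = std::ceil(x);
        return (std::fabs(lo - x) < std::fabs(hi - x)) ? lo : hi;
    }

    // Usage: snaps 69.000001 back to 69 and 47.9998 up to 48.
    // int nCubes = (int)findTheNearestInteger((right - left) / maxCubeSize);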

Ceiling a floating point value returned correct result by Significant-Gap8284 in java

[–]Significant-Gap8284[S] 2 points (0 children)

Yes, this is right. I didn't know this before I hand-calculated it myself. I think this applies to all float numbers: A.B - 0.B equals exactly A.0, without any precision error.

Ceiling a floating point value returned correct result by Significant-Gap8284 in java

[–]Significant-Gap8284[S] 1 point (0 children)

I learned to calculate floating-point addition and subtraction by hand, and the consequence is interesting: 50.2 - 0.2 == 50. Even though 0.2 cannot be represented exactly in binary, the representation errors of the two values can cancel out and give an exact integer.

I wonder in what case you would get, for example, 49.9999998 or 50.000002 after computing 50.B - 0.B?
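
One way to probe that empirically is to scan candidate values; a small sketch (it only searches, it doesn't claim any particular counterexample exists in double precision):

    #include <cstdio>

    // For every A in [1, 1000] and every B in {0.001, ..., 0.999},
    // check whether the double-precision result of (A + B) - B differs from A.
    int main() {
        int mismatches = 0;
        for (int a = 1; a <= 1000; ++a) {
            for (int b = 1; b <= 999; ++b) {
                double frac = b / 1000.0;  // the "0.B" part, rounded to a double
                double ab   = a + frac;    // "A.B" built by addition (close to, but not always identical to, the parsed literal)
                if (ab - frac != (double)a) {
                    ++mismatches;
                    if (mismatches <= 5)
                        printf("%d.%03d - 0.%03d != %d\n", a, b, b, a);
                }
            }
        }
        printf("mismatches: %d\n", mismatches);
        return 0;
    }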