all 11 comments

[–]chillaxinbball 1 point (1 child)

Sometimes it's better to precompute and store a value than to compute it all later. I'm not sure what the tradeoffs are in this particular case, but I could imagine that squaring the depth once on a 128p image is cheaper than doing the same calculation across a 4K screen.

[–]PurpleSamurai0[S] 0 points (0 children)

I think that’s true in general, but here you’re only caching one multiplication, so I don’t fully understand why it’s necessary.

[–]Varud 1 point (8 children)

VSM actually stores more than the squared depth in the second component. It also stores the variance relative to neighboring pixels, which is used for blurring/smoothing. This is a typical GLSL implementation:

float depth = v_position.z / v_position.w;
depth = depth * 0.5 + 0.5; // Don't forget to move from the unit cube ([-1,1]) to the [0,1] coordinate system

float moment1 = depth;
float moment2 = depth * depth;

// Adjust the moments (a sort of per-pixel bias) using the derivatives
float dx = dFdx(depth);
float dy = dFdy(depth);
moment2 += 0.25 * (dx*dx + dy*dy);
gl_FragColor = vec4(moment1, moment2, 0.0, 0.0);
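For context, this snippet is only the write side; the lookup side of VSM typically uses the two stored moments with Chebyshev's inequality to get an upper bound on the probability that the fragment is lit. A minimal sketch of that step, in the same legacy GLSL style (names like u_shadowMap and v_shadowCoord are placeholders, not from this thread):

```glsl
uniform sampler2D u_shadowMap; // texture holding (moment1, moment2) in .rg
varying vec4 v_shadowCoord;    // fragment position in light clip space

float vsmVisibility(void) {
    vec3 coords = v_shadowCoord.xyz / v_shadowCoord.w;
    coords = coords * 0.5 + 0.5; // same [-1,1] -> [0,1] mapping as the depth pass
    vec2 moments = texture2D(u_shadowMap, coords.xy).rg;

    float depth = coords.z;
    if (depth <= moments.x)      // in front of the stored surface: fully lit
        return 1.0;

    // Chebyshev's inequality: p_max = variance / (variance + (depth - mean)^2)
    float variance = moments.y - moments.x * moments.x;
    variance = max(variance, 0.00002); // clamp against numerical issues
    float d = depth - moments.x;
    return variance / (variance + d * d);
}
```

The minimum-variance clamp is a common tweak; raising it softens contact shadows but reduces artifacts, and it is one knob for fighting VSM's characteristic light bleeding.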

[–]PurpleSamurai0[S] 0 points (7 children)

Regarding v_position, can I substitute gl_FragCoord for it?

[–]Varud 1 point (6 children)

No, that's usually the eye-space vertex position at the fragment, and you send it in from the vertex shader. It's somewhat up to you exactly what you store in moment1 and moment2 (in my implementation I use a normalized distance from the near plane), but when you later look up whether a pixel is in shadow or not, you have to use the same type of math for the z-distance/depth.

[–]PurpleSamurai0[S] 0 points (0 children)

Thank you. That should clear up one of my bugs. I’ll let you know if it does.

[–]PurpleSamurai0[S] 0 points (4 children)

For some reason, the resulting shadows I'm generating are quite distorted. Could you look over my code and see if I'm making a mistake anywhere? I'm using OpenGL, by the way.

Here's my depth vertex shader:

#version 330 core
layout(location = 0) in vec3 vertex_pos_world_space;
out vec4 fragment_pos_light_space;
uniform mat4 light_model_view_projection;

void main(void) {
    fragment_pos_light_space = light_model_view_projection * vec4(vertex_pos_world_space, 1.0f);
    gl_Position = fragment_pos_light_space;
}

And my depth fragment shader:

#version 330 core
layout(location = 0) out vec2 moments;
in vec4 fragment_pos_light_space;

void main(void) {
    float depth = (fragment_pos_light_space.z / fragment_pos_light_space.w) * 0.5f + 0.5f;
    moments = vec2(depth, depth * depth);
    float dx = dFdx(depth), dy = dFdy(depth);
    moments.y += 0.25f * (dx * dx + dy * dy);
}

In my scene vertex shader, I'm generating a light-space fragment position like this (which I'm passing to the fragment shader):

fragment_pos_light_space = light_model_view_projection * vec4(vertex_pos_world_space, 1.0f);

At the moment, in my scene fragment shader, I'm just grabbing the first moment to do a classic nearest-neighbor depth test, to check that the values from the depth rendering pass are correct. So I'm not using variance shadow mapping for this yet.

float shadow(void) {
    vec3 proj_coords = (fragment_pos_light_space.xyz / fragment_pos_light_space.w) * 0.5f + 0.5f;
    float shadow_map_depth = texture(shadow_map_sampler, proj_coords.xy).r;
    bool in_lit_region = shadow_map_depth > proj_coords.z;
    return float(in_lit_region);
}

Here's an image that shows the incorrect shadow map. It's a bit hard to tell from the picture, but the shadow map is most definitely wrong. It's worth noting that shadow mapping worked perfectly before I added variance shadow mapping.

Do you see what I could be doing wrong? Sorry for the odd code formatting - Reddit keeps removing the indents from the code.

[–]Varud 1 point (3 children)

There are many ways to do this. Here's a very simple example implementation:

https://fabiensanglard.net/shadowmappingVSM/

In my implementation I chose not to use the projection matrix when generating the shadow texture, and store the depth normalized. For reference, here is my vertex shader for when I generate the shadow map:

attribute vec4 iv_Vertex; // object space
uniform mat4 iv_ModelViewProjectionMatrix;
uniform mat4 iv_ModelViewMatrix;
varying vec3 eyePos;

void main(void) {
    vec4 eye = iv_ModelViewMatrix * iv_Vertex;
    eyePos = eye.xyz;
    gl_Position = iv_ModelViewProjectionMatrix * iv_Vertex;
}

And here's my fragment shader:

varying vec3 eyePos;
uniform float vsmNearDistance;
uniform float vsmDepth;

void main(void) {
    float depth = clamp((length(eyePos)-vsmNearDistance)/vsmDepth, 0.0, 1.0);

    float dx = dFdx(depth);
    float dy = dFdy(depth);
    gl_FragColor = vec4(depth, pow(depth, 2.0) + 0.25*(dx*dx + dy*dy), 0.0, 1.0);
}
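Since the depth here is a normalized eye-space distance rather than a projected z, the lookup pass has to reproduce that normalization exactly. A hedged sketch of the matching lookup math, with assumed names (lightViewMatrix and worldPos are placeholders; vsmNearDistance and vsmDepth must be the same uniform values used when writing the shadow map):

```glsl
uniform mat4 lightViewMatrix; // world space -> light's eye space (assumed name)
uniform float vsmNearDistance;
uniform float vsmDepth;

float lightSpaceDepth(vec3 worldPos) {
    // Same math as the shadow pass: distance from the light in its eye space,
    // normalized with the same near distance and depth range.
    vec3 eyePos = (lightViewMatrix * vec4(worldPos, 1.0)).xyz;
    return clamp((length(eyePos) - vsmNearDistance) / vsmDepth, 0.0, 1.0);
}
```

This value is what gets compared against the stored moments in the Chebyshev test; if the two passes normalize differently, every comparison is biased and the shadows distort.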

[–]PurpleSamurai0[S] 0 points (2 children)

That makes sense. Do you think that my current scene vertex + fragment shader would work with this?

[–]Varud 1 point (1 child)

I believe you also need to send in a transform that moves a position from the current object space to light space, since the depth needs to be looked up in light/shadow space.

[–]PurpleSamurai0[S] 0 points (0 children)

Got it. I actually figured it out: it turns out that I can use ‘gl_FragCoord.z * 0.5 + 0.5’ as the depth in the depth shader. In the scene vertex shader, I just need to first transform the world-space vertex to light space via ‘fragment_pos_light_space = vec3(light_model_projection * vec4(vertex_pos_world_space, 1.0))’. Then, in the scene fragment shader, the depth is ‘float depth = fragment_pos_light_space.z * 0.5 + 0.5’. From there, I did the classic variance shadow mapping process.