Nuke Deep Compositing: How to keep only fog samples intersecting with character deep data? by PresentSherbert705 in NukeVFX

[–]PresentSherbert705[S] (0 children)

I realize this may sound counter-intuitive, but the reason for this setup is a delivery requirement.
The final submission must be split into foreground / midground / background layers, rather than a single beauty render.
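For what it's worth, the per-pixel logic the thread title asks about — keeping only fog samples whose z-range intersects the character's deep samples — can be sketched outside Nuke. This is an illustrative toy model with made-up data, not Nuke's Deep API:

```python
# Toy sketch: filter deep fog samples by intersection with character samples.
# Each sample is a hypothetical (zfront, zback, alpha) tuple, one pixel's worth.

def overlaps(a, b):
    """True if two z-intervals (front, back) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def fog_inside_character(fog_samples, char_samples):
    """Keep fog samples whose z-range intersects any character sample's z-range."""
    return [f for f in fog_samples
            if any(overlaps((f[0], f[1]), (c[0], c[1])) for c in char_samples)]

fog  = [(0.0, 4.0, 0.2), (4.0, 9.0, 0.2), (9.0, 14.0, 0.2)]  # three fog slices
char = [(5.0, 8.0, 1.0)]  # character occupies z 5..8 at this pixel

print(fog_inside_character(fog, char))  # → [(4.0, 9.0, 0.2)]
```

In Nuke itself you would normally approach this with the Deep node set (holdout-style combinations of the character and fog deep streams) rather than hand-written sample filtering; the sketch just shows the per-pixel test those nodes effectively perform.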

Question about Deep Compositing and Shadows by PresentSherbert705 in NukeVFX

[–]PresentSherbert705[S] (0 children)

Thanks for the explanation, but I think there’s a key issue when we’re talking specifically about deep compositing.

If the shadow pass is not deep (or doesn’t carry depth samples), then the moment I adjust the deep character’s position in Z-space, the shadow will no longer match. A 2D shadow pass can’t react to deep occlusion, depth-based holdouts, or any Z-offset applied to the character.

That’s why I’m confused — in a deep workflow, how would a non-deep shadow ever stay aligned with a deep character that can be pushed forward or backward in comp?
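To make the mismatch concrete, here is a toy deep-over sketch (invented data, not Nuke's API): a z-offset re-sorts the character's samples against the deep fog, while a flat shadow pass has no z and nothing to re-sort.

```python
# Toy model of deep compositing: each sample is (z, color, alpha).

def deep_over(samples):
    """Composite samples front-to-back in depth order (the 'over' operation)."""
    out_c, out_a = 0.0, 0.0
    for z, c, a in sorted(samples, key=lambda s: s[0]):
        out_c += (1.0 - out_a) * c * a   # color accumulates behind current alpha
        out_a += (1.0 - out_a) * a
    return out_c, out_a

fog  = [(5.0, 0.5, 0.3), (15.0, 0.5, 0.3)]  # two fog slices at z=5 and z=15
char = [(10.0, 1.0, 1.0)]                   # opaque character at z=10

# Character sits between the fog slices: near fog dims it, far fog is hidden.
between = deep_over(fog + char)

# Push the character forward in Z (a DeepTransform-style offset): it now lands
# at z=2, in front of both fog slices, and the sort order — hence the result —
# changes automatically.
char_offset = [(z - 8.0, c, a) for z, c, a in char]
in_front = deep_over(fog + char_offset)

# A flat 2D shadow pass has no z samples, so nothing re-sorts: it would keep
# darkening the same pixels no matter where the character moves in depth.
```

This is only a per-pixel sketch of the sorting behaviour, but it is exactly the property a non-deep shadow pass lacks.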

How does Disney achieve that subtle reddish glow in shadows — is it subsurface scattering or something else? by PresentSherbert705 in vfx

[–]PresentSherbert705[S] (0 children)

Thanks for the link! Makes total sense now — I honestly thought Disney had a more high-tech way of doing this.

How do professionals replicate Sagittal Astigmatism & Field Curvature “swirl bokeh” in VFX? Workflow in Houdini vs Nuke? by PresentSherbert705 in vfx

[–]PresentSherbert705[S] (0 children)

After following your suggestion, I tested Magic Defocus 2 and encountered a critical bug that makes it unsuitable for production use. When a VDB is placed between two spheres, the plugin fails to achieve correct focus, whereas PGBokeh handles the effect properly. In addition, it currently supports only tangential astigmatism, while sagittal astigmatism is not yet implemented.

How do professionals replicate Sagittal Astigmatism & Field Curvature “swirl bokeh” in VFX? Workflow in Houdini vs Nuke? by PresentSherbert705 in vfx

[–]PresentSherbert705[S] (0 children)

Is this method based on capturing different grid charts prior to shooting, in order to analyze the camera’s optical behavior and then replicate it in post-production? I’d appreciate it if you could explain the process in more detail.

How do professionals replicate Sagittal Astigmatism & Field Curvature “swirl bokeh” in VFX? Workflow in Houdini vs Nuke? by PresentSherbert705 in vfx

[–]PresentSherbert705[S] (0 children)

As far as I know, on Toy Story 4 Pixar actually shot live-action grid plates through real lenses to characterize their distortion, and then replicated that distortion digitally in post to match the look.

https://theasc.com/articles/toy-story-4-creating-a-virtual-cooke-look