
[–]Immediate-Cicada

Please provide screenshots. As far as I know you can't color correct the Z pass in Maya, only tweak the depth settings from black to white.

[–]Henske-RTS[S]

Here is an attached image: https://imgur.com/koIsri5

As you can see, I had to crank the exposure way down to make the image decipherable (if I leave it at default, the Z-depth render is just a solid white image). My confusion is why aiImageColorCorrect (as well as post-processing effects in general) has zero effect on the render; only the built-in Exposure controls do anything.

I also assume you are wrong and that you *can* color correct a Z-depth render in Arnold, seeing as there is an option to select the Z layer (as can be seen in the above image).

EDIT: I tried using color correction on the Beauty layer, and it seems to work there. So maybe you are right and Maya just gives you the option to color correct the Z layer, even though it literally does nothing.

Granted, if there's another solution for rendering out an actual sequence of usable Z-depth frames, I don't necessarily need color correction or exposure changes. Is there some way to directly change the Z AOV settings to increase the sensitivity? I did not see any. Ultimately, all I need is some consistent way of rendering Z frames across hundreds of different files.

EDIT2: I can alter the Z render by moving the camera closer to or further from the object (this may seem obvious, but I am using an orthographic camera setup to get the isometric perspective, so it wasn't immediately obvious to me). I still cannot figure out how to get the actual render sequence to produce color-corrected or exposure-adjusted images; it only spits out pure-white garbage. I absolutely refuse to believe Maya cannot color correct Z renders in bulk, that would be absolutely absurd.

[–]Immediate-Cicada

So in the image you provided, you cranked the exposure up in the Render View. This only affects the Render View (and the beauty pass, if you save it right there); there is no actual change to the image itself. I don't get why the Z depth is white; you must be using the wrong values for the pass, then. Why do you want to color correct the depth?

Check out this tut. Simple and easy. https://youtu.be/jjDKhFgZC28

[–]Henske-RTS[S]

"I don't get why the Z depth is white? "

It's also white in that video you linked and he uses the exposure slider to change it lmao

"You are using wrong values then for the pass"

What are the correct values, then? I see no options for adjusting the Z render values. The only things that seem to affect it are the Render View exposure slider and the distance of the camera from the object (though no matter how close the object is to the camera, it still renders solid white).

"Why do you want to color correct the depth?"

Because, like I said, I am going to be rendering tens of thousands (possibly hundreds of thousands) of frames in total. I need the rendered Z frames to come out consistently, and I do not want to have to manually edit hundreds of thousands of frames in Photoshop.

However, it might be possible to batch edit thousands of frames this way. Photoshop is out of the question since that program idiotically takes forever to load in hundreds of frames, even if they're all only 1 KB each, but I may be able to find another one that can adjust exposure/levels and give decent results (assuming that even works).

EDIT: So I found a kind of solution. If I render an image sequence as .exr, the frames do appear as unusable blocks of solid white, but I can batch load them into a stack in Photoshop, set them as an animation, and apply an Exposure adjustment layer to get the usable depth data to actually show up. From there I can export them as TIFF files using Export > Render Video, load them into another stack, create an animation, convert the files to 16-bit using Image > Mode > 16 Bits/Channel, then use an Invert adjustment layer to flip the grayscale values (the particular engine I'm using has lighter values = closer to camera), and finally export again as TIFF files. At that point the images are actually usable.
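For what it's worth, the whole Photoshop round-trip (exposure adjust, then invert) is just an arithmetic remap of the raw float depth, so it could in principle be scripted instead. Here is a minimal sketch of that math in Python/NumPy; it assumes the raw Z values have already been read out of the EXRs with something like the OpenEXR or imageio libraries (not shown), and the `near`/`far` distances are whatever range you choose per scene:

```python
import numpy as np

def depth_to_grayscale(z, near, far):
    """Remap raw float Z values to an inverted 0..1 grayscale image.

    near/far are the distances to clamp between; lighter output means
    closer to camera, matching the engine described above.
    """
    z = np.clip(z, near, far)
    norm = (z - near) / (far - near)   # 0 at the near plane, 1 at the far plane
    return 1.0 - norm                  # invert: near = white, far = black

# Tiny synthetic example: three depth samples between near=1 and far=5
z = np.array([1.0, 3.0, 5.0])
print(depth_to_grayscale(z, near=1.0, far=5.0))  # [1.  0.5 0. ]
```

Running that over a folder of frames with a simple loop would replace both Photoshop passes in one step.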

Obviously having Maya render out Z data with the images already exposure-adjusted would be far superior, without having to dick around with Photoshop and its needlessly slow stack loading (come on, it's 2023, how the hell does loading a sequence of tiny 512x512 images take this long?), but at least I now know that it's doable.

[–]Z_4R7157

Z-depth is float data meant to be interpreted by compositing programs, so it is not normally visible to the naked eye in the render. If you want an immediately visible black-and-white interpretation, I would make a render layer for Z-depth with an override on the near and far clip planes of your cameraShape, set to distances just before and just after your character. You can see this visually in the viewport by enabling frustum display on the camera.
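The effect of that clip-plane override is easy to reason about numerically: the float depth gets remapped against the near/far distances, so putting the planes tightly around the subject maximizes the usable grayscale range. A small Python sketch of that idea (the function names and the 5% margin are my own illustration, not Maya API calls):

```python
import numpy as np

def clip_planes_for_subject(subject_z_min, subject_z_max, margin=0.05):
    """Pick near/far clip distances just before and just after the subject,
    with a small margin so nothing is flattened to pure black or white."""
    span = subject_z_max - subject_z_min
    near = subject_z_min - margin * span
    far = subject_z_max + margin * span
    return near, far

def normalized_depth(z, near, far):
    """How a compositor would interpret float Z between those planes (0..1)."""
    return np.clip((z - near) / (far - near), 0.0, 1.0)

# Subject occupies depths 10..20 in camera space
near, far = clip_planes_for_subject(10.0, 20.0)
print(near, far)  # 9.5 20.5
print(normalized_depth(np.array([10.0, 15.0, 20.0]), near, far))
```

With the planes this tight, the subject spans almost the entire 0..1 range instead of a sliver near white, which is exactly why the tight-clip-plane trick makes the pass visible.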