So here is the example and the question:
I have two renders from 3D, one of a sphere and one of a cube, both of which live in the same 3D scene. From this particular angle, the sphere is in front of the cube (i.e. closer to the camera) and partially overlaps it. But from other angles, the cube will be in front of the sphere and overlap it.
For our particular final 2D deliverable, we need to deliver the sphere and the cube on different layers (think Photoshop); however, the area where the sphere overlaps the cube must be "cut out". So the sphere would be whole, but the cube would have, say, a half-sphere-shaped hole cut out of it.
What I am hoping to do is render a depth pass (i.e. a floating-point grayscale image indicating distance from camera) for the sphere and a separate depth pass for the cube. Then, my hope is that we can use Nuke to determine, for pixel 1,1 (just making up terms here…), that the sphere is closer to camera than the cube, and thus "keep" the sphere's pixel and delete the cube's pixel.
So essentially, Nuke would have to be able to determine from the depth passes which pixel is closer to camera, and then keep or delete the corresponding pixels in the main render.
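For what it's worth, the per-pixel logic I'm describing is simple enough to sketch outside of Nuke. Here is a minimal NumPy illustration (not Nuke code; the function name `depth_cutout` and the array layout are just assumptions for the example), assuming the depth passes store distance from camera, so smaller values mean closer:

```python
import numpy as np

def depth_cutout(rgba, depth, other_depth):
    """Keep this layer's pixels only where its depth is nearer
    (smaller) than the competing layer's depth; zero out the rest.
    rgba: (H, W, 4) float image; depth / other_depth: (H, W) floats."""
    keep = depth <= other_depth   # True where this layer wins the depth test
    out = rgba.copy()
    out[~keep] = 0.0              # cut out the occluded pixels
    return out

# Toy example: the sphere is nearer in the top row only,
# so the cube gets cut out there and survives in the bottom row.
sphere_depth = np.array([[1.0, 1.0],
                         [5.0, 5.0]])
cube_depth   = np.array([[2.0, 2.0],
                         [2.0, 2.0]])
cube_rgba = np.ones((2, 2, 4))

cut_cube = depth_cutout(cube_rgba, cube_depth, sphere_depth)
print(cut_cube[0, 0])  # top row: cube occluded by sphere
print(cut_cube[1, 0])  # bottom row: cube kept
```

The same comparison would run once per pixel regardless of how many objects overlap, so the operation count shouldn't be a problem; the open question is just whether Nuke exposes it directly.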
My question is whether this functionality exists in Nuke today, or whether you are aware of such functionality in a plugin, script, or other custom code. Or even whether it seems viable for us to try to write it ourselves.
Let me know if you need clarification of the example above – I’ve kept it very basic. In our actual deliverables, there are thousands if not millions of these operations in a single still frame – so please don’t respond by marginalizing the question (e.g. "Just mask it in Photoshop…") because the example is so simple…
Thanks a bunch.