I tried different aspect ratios to search for the error, but it always looks the same.
What is the workflow for getting deep images out of Maya using Mental Ray and into Nuke for compositing? I would also quite like to know how to use the position pass to create a custom depth pass in Nuke, i.e. which operations are involved.
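For reference, a custom depth pass from a world-position (P) pass is usually computed per pixel as the distance along the camera's viewing axis: depth = (P − cam_pos) · cam_forward. In Nuke this would typically be an Expression node reading the P channels; here is a minimal pure-Python sketch of the same math (the camera values are made up for illustration):

```python
import math

def depth_from_position(P, cam_pos, cam_forward):
    """Z-depth for one pixel: the world-space position P projected onto
    the camera's (normalised) viewing axis."""
    fx, fy, fz = cam_forward
    norm = math.sqrt(fx * fx + fy * fy + fz * fz)
    fx, fy, fz = fx / norm, fy / norm, fz / norm
    dx, dy, dz = P[0] - cam_pos[0], P[1] - cam_pos[1], P[2] - cam_pos[2]
    return dx * fx + dy * fy + dz * fz

# a pixel 5 units straight down the -Z axis from a camera at the origin
print(depth_from_position((0, 0, -5), (0, 0, 0), (0, 0, -1)))  # 5.0
```

Applied to every pixel of the P pass, this gives a depth image you can grade or fog against.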
The SIGGRAPH paper on deep image compositing says they use an un-over; what is an un-over operation in Nuke?
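For context: "A over B" with premultiplied colours is C = A + (1 − αA)·B, so an un-over is just the inverse, pulling a known front element back out of a composite: B = (C − A) / (1 − αA). The paper's deep-sample version is more involved, but the flat algebra looks like this:

```python
def over(a, alpha_a, b):
    """Premultiplied 'A over B' for one channel."""
    return a + (1.0 - alpha_a) * b

def un_over(c, a, alpha_a):
    """Invert 'over': recover B from the composite C and the known
    front element A (undefined when A is fully opaque)."""
    return (c - a) / (1.0 - alpha_a)

b = 0.8                        # background channel value
c = over(0.3, 0.5, b)          # composite A over B
print(un_over(c, 0.3, 0.5))    # recovers 0.8
```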
Any information would be very useful, as this technique will be quite handy for compositing volumetric fog, something that I need to do.
Here is the paper on deep image compositing; it's short and to the point. Full credit goes to the authors.
http://www.johannessaam.com/deepImage.pdf
Thanks.
sRGB colorspace workflow
Posted in: NUKE from The Foundry

OK, that's a ramp, so it can easily be checked with the sampler, but what if it's a complex image with a big range of brightness/color? Let's say we get this image from "outer space" and don't know what application was used, so I have no idea what colorspace it is in. And as in my example above, Nuke guessed the wrong colorspace for my image (a .tga from Shake), not to mention other formats like TIFF, MOV, etc. What is the ideal workflow supposed to be?
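For what it's worth, when the source colorspace really is unknown, one common sanity check is to compare sampled values against the standard sRGB transfer curve (IEC 61966-2-1), which is what an sRGB LUT implements; a sketch of the decode:

```python
def srgb_to_linear(c):
    """Standard sRGB decoding (IEC 61966-2-1) for one channel in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# mid-grey 0.5 in sRGB is roughly 0.214 in linear light
print(round(srgb_to_linear(0.5), 4))
```

If a sampled value round-trips sensibly through this curve, the file was probably sRGB-encoded; if not, try the format's other usual suspects (linear, log, etc.).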
Thanks.
(these 2 shots will be matched)
Basically I want to continue the camera movement in Z space from Nuke to Maya seamlessly. How can this be achieved?
I am lost.
PS: I hope my problem is clear. The final composite is a camera zoom-out.
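One common route for carrying camera animation between Nuke and Maya is a per-frame channel (.chan) file: one line per frame with the frame number, translation, rotation, and a focal/FOV column. A minimal sketch of writing one (the exact column order is an assumption here, so verify it against a file your own Nuke/Maya setup exports):

```python
def write_chan(path, frames):
    """frames: list of (frame, tx, ty, tz, rx, ry, rz, focal) tuples,
    written tab-separated, one line per frame."""
    with open(path, "w") as f:
        for row in frames:
            f.write("\t".join(str(v) for v in row) + "\n")

frames = [
    (1, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 50.0),
    (2, 0.0, 0.0,  9.5, 0.0, 0.0, 0.0, 50.0),  # camera moving in Z
]
write_chan("camera_move.chan", frames)
```

Importing that file onto a Maya camera (or a Nuke Camera node) keeps the Z move continuous across the handover, provided both scenes agree on units and axis conventions.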
Multi-input switch?
This made it easy to output a series of tests as one QuickTime.
There's a two-input Switch node, but there don't seem to be any other options. Is there something obvious I'm overlooking, or do I have to get into Python, etc.?
Thanks.
I can't find a way to access the GridWarp points from a script.
I got as far as accessing (printing) the grid array and the values of the dstgrid or srcgrid, but when I try to write to them, it returns false.
To make things easy, this is what I want to do:
have a user button on a Tracker or a GridWarp that automatically connects all four tracking points to the four corner points of a 4-point grid in the GridWarp. Does that make sense?
I know I can drag and drop the tracker curves onto the grid points, but it would simplify things if we could do it at the push of a button; I could also easily add some knobs for offsets, etc.
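Since writing to the grid directly seems blocked, one alternative to setting values is generating expression links, the scripted equivalent of the drag-and-drop above. The sketch below only builds the expression strings; the corner names and the "track1".."track4" knob names are placeholders, not a confirmed Nuke API, so substitute whatever your Tracker/GridWarp nodes actually expose:

```python
def corner_expressions(tracker="Tracker1"):
    """Pair each of four grid corner points with expression strings that
    would link it to one tracker point (names are placeholders)."""
    corners = ["bottom_left", "bottom_right", "top_right", "top_left"]
    links = []
    for i, corner in enumerate(corners, start=1):
        links.append((corner, f"{tracker}.track{i}.x", f"{tracker}.track{i}.y"))
    return links

for corner, ex, ey in corner_expressions():
    print(corner, "<-", ex, ey)
```

A user button would then loop over these pairs and apply each expression to the matching corner, plus any offset knobs you add.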
Any help is much appreciated!
Thanks, guys.
Another test: I set the UVProject node's "invert U", then added a ScanlineRender node with its projection set to spherical to get my lat-long back. Comparing the ScanlineRender output with the original lat-long, everything is identical except for a wrinkle at the left edge of my lat-long. If I remove/disable the UVProject node (so the default projection is used), the lat-long is perfect (just a bit of distortion near the edge). The question is: why does this UVProject ruin it?
Can anyone explain it?
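One plausible explanation (an assumption, not a confirmed diagnosis of UVProject's internals): the left edge of a lat-long image is the longitude seam, where atan2 wraps from +π to −π. Two nearly identical directions straddling that seam land at opposite edges of the U range, so any filtering or interpolation of projected UVs there smears across the whole image width, which reads as a wrinkle:

```python
import math

def latlong_u(x, z):
    """Horizontal lat-long coordinate in [0, 1) for a direction's x/z."""
    return (math.atan2(x, z) / (2.0 * math.pi)) % 1.0

print(latlong_u( 1e-6, 1.0))   # just inside the left edge, ~0.0
print(latlong_u(-1e-6, 1.0))   # wraps to the right edge, ~1.0
```

The default projection presumably handles this wrap internally, which would explain why disabling UVProject makes the seam clean.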
http://www.gnomonschool.com/events/mummy/mummy.php
In the "Rise of the Undermummy" (half of the video), into the many CG layers exist a "stress map" (output from Houdini or Syflex/Maya… this is another thread):
– magenta – maybe compression
– cyan – expansion??
What is the purpose of this layers, if used, inside compositing work (in Nuke especially)? Stretching the textures via warping?? Or maybe are for the Event explanation/visualization only.
Thanks in advance.
David.
Viewer question
I have the tablet mapped to whichever monitor I'm switched to (using a toggle-displays button). There is only one thing that bothers me: I have to switch whenever I want to start or stop playback. Is it possible to have the controls on one screen and the actual viewer on another?
This way I could switch only for roto 🙂