Generating an AO pass from your Nuke 3D scene with ScanlineRender?

Is it possible to generate an AO pass for the geometry and cards in your Nuke 3D scene, straight from the ScanlineRender? I know you can generate depth, motion vectors, and also the surface point and normal vectors for relighting, but is it possible to give your objects that AO depth inside Nuke? Or is the only way to generate an AO pass in your 3D program?
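The closest thing I can think of is faking it from the depth output with a depth-difference trick. This is just a rough sketch, not real ray-traced AO, and the node name and values are guesses:

```python
# Rough screen-space AO approximation from ScanlineRender's depth output:
# darken where the blurred depth differs from the local depth.
import nuke

render = nuke.toNode('ScanlineRender1')   # assumes this is your render node

depth = nuke.nodes.Shuffle(inputs=[render])
depth['in'].setValue('depth')             # copy the depth layer into rgb

blurred = nuke.nodes.Blur(inputs=[depth], size=25)

# Merge 'minus' computes A minus B; here A = blurred (input 1), B = raw depth
# (input 0), so the output is a crude proximity/occlusion signal.
diff = nuke.nodes.Merge2(inputs=[depth, blurred], operation='minus')

# Grade the difference into a usable occlusion matte; tune to taste, and note
# that Nuke's depth convention (often 1/z) may flip the sign you want.
nuke.nodes.Grade(inputs=[diff])
```

But that is only a screen-space cheat, so I would still like to know if ScanlineRender can do it properly.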

PAL 16:9 720×576 problem in QuickTime

Hi, I'm trying to reformat an HD clip to PAL 720×576 16:9. The problem is that when I open the exported clip in QuickTime Player or Final Cut, it is displayed almost as 4:3, with black bars on the sides of the clip.

I tried different aspect ratios to narrow down the error, but it always looks the same.
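For what it's worth, my understanding of the numbers: PAL 16:9 is anamorphic, so the 720×576 frame should carry a pixel aspect ratio of about 1024/720 ≈ 1.42 and display as roughly 1024×576, which is 16:9. If the player ignores that flag, 720×576 square pixels come out at 1.25:1, which is nearly 4:3 and matches what I'm seeing.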

Deep image from Position pass workflow

Hi,

What is the workflow for getting deep images out of Maya using mental ray and into Nuke for compositing? I'd also quite like to know how to use the Position pass to create a custom depth pass in Nuke: which operations to use, and so on.
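For the custom depth part, this is the sort of thing I have in mind (just a sketch; it assumes the world-position pass has been shuffled into rgb and the scene camera is named 'Camera1'):

```python
# Sketch: depth as the distance from the camera, computed from a
# world-position pass in rgb. 'Camera1' and the channel layout are assumptions.
import nuke

cam = nuke.toNode('Camera1')
cx, cy, cz = cam['translate'].value()   # camera position at the current time

node = nuke.nodes.Expression()
# Writes the distance into the red channel only; copy it wherever needed.
node['expr0'].setValue(
    'sqrt(pow(r - {0}, 2) + pow(g - {1}, 2) + pow(b - {2}, 2))'.format(cx, cy, cz))
```

For an animated camera I imagine you would reference Camera1.translate.x (and so on) directly inside the expression rather than baking the values, but I'm not sure that is the standard approach.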

The SIGGRAPH paper on deep image compositing says they use an un-over; what is an un-over operation in Nuke?

Any information would be very useful, as this technique will be quite handy for compositing volumetric fog, something I need to do.

Here is the paper on deep image compositing; it's very short and brief. Full credit goes to the authors.
http://www.johannessaam.com/deepImage.pdf

Thanks.

sRGB colorspace workflow

Hi, I'm having a bit of a problem with colorspaces. I create a Ramp (linear) in Nuke and render it out as a .tga with the default output colorspace, sRGB. When I import the image back into Nuke and check it with a sampler, I get a perfect linear curve. This is correct, since I rendered out as sRGB and read the image back in with the default, which is also sRGB. Then I render a linear ramp from Shake using the default settings and import it into Nuke. Checking with the sampler, I was surprised to find that Shake writes its default .tga as linear, so on import Nuke automatically applied the sRGB colorspace (Nuke guessed it was sRGB), which is wrong; I fixed it by changing it to linear.

OK, that was a ramp, so it could easily be checked with a sampler. But what if it's a complex image with a big range of brightness and colour? Say we get an image from 'outer space' and have no idea what application produced it, so I don't know what colorspace it is in. As in my example above, Nuke guessed the wrong colorspace for my image (the Shake .tga), not to mention other formats like TIFF, MOV, etc. What is the ideal workflow supposed to be?
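For reference, this is the standard sRGB encode I check sampled values against (plain Python, nothing Nuke-specific):

```python
# Standard sRGB transfer function (linear -> sRGB), per the sRGB spec.
def linear_to_srgb(x):
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

# e.g. mid-grey: linear 0.18 encodes to roughly 0.46
print(linear_to_srgb(0.18))
```

If a sampled pixel matches this curve applied to the expected linear value, the file was sRGB-encoded; if it matches the linear value directly, it wasn't.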

Thanks

Mixing 2 Camera Animations – Nuke & Maya

Hello, I have a camera zoom-out animation in Z space for the first 50 frames in Nuke (basically a projection setup on cards). Then I have a 200-frame camera zoom-out animation in Z space in Maya.
(These two shots will be matched.)

Basically, I want the camera movement in Z space to continue seamlessly from Nuke into Maya. How can this be achieved?

I am lost.

PS: I hope you understand my problem. The final composite is a camera zoom out.
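The only idea I've had so far: if I remember right, Nuke's Camera node has import/export chan buttons in its properties, so maybe I should bake my 50 Nuke frames out and rebuild them in Maya. A sketch of the export side (the node name, frame range, and the simplified column order are assumptions; I think the real .chan format also carries a lens column):

```python
# Sketch: bake 'Camera1' to a simple per-frame text file (chan-style:
# frame tx ty tz rx ry rz) that a small Maya script could read back.
import nuke

cam = nuke.toNode('Camera1')
with open('/tmp/nuke_cam.chan', 'w') as f:
    for frame in range(1, 51):            # the 50 Nuke frames
        tx, ty, tz = cam['translate'].getValueAt(frame)
        rx, ry, rz = cam['rotate'].getValueAt(frame)
        f.write('%d %f %f %f %f %f %f\n' % (frame, tx, ty, tz, rx, ry, rz))
```

Then I would offset the Maya animation to start where the baked move ends and match position, rotation, and focal length at the splice frame. Is that the right approach?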

Multi-input switch?

It's been a while since I used Shake, but as I recall there was a multi-input switch that I used when doing a test sequence (i.e. 24 frames of input 1, 24 frames of input 2, etc.).
This made it easy to output a series of tests as one QuickTime.

There's a two-input Switch node, but there don't seem to be any other options. Is there something obvious I'm overlooking, or do I have to get into Python, etc.?
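What I'm imagining is something like this (a sketch; Read1 to Read3 are assumed names, and I seem to recall the Switch node actually grows more inputs as you connect them):

```python
# Sketch: cycle a Switch through its inputs, holding each one for 24 frames.
import nuke

reads = [nuke.toNode('Read%d' % i) for i in (1, 2, 3)]
sw = nuke.nodes.Switch(inputs=reads)
# 'which' picks the active input; the expression steps it every 24 frames.
sw['which'].setExpression('floor(frame / 24) % {}'.format(len(reads)))
```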

Thanks.

accessing GridWarp points from scripts

hey guys,

I can't find a way to access the GridWarp points from a script.
I got as far as accessing (printing) the grid array and the values of the dstgrid or srcgrid, but when I try to write to them, the call returns False...

To make things easy, this is what I want to do:

have a user button on a Tracker or a GridWarp that automatically connects all four tracking points to the four corner points of a 4-point grid in the GridWarp... does that make sense?

I know I can drag and drop the tracker curves onto the grid points, but it would simplify things if we could do it at the push of a button; I could also easily add some knobs for offsets, etc. A sketch of where I'm stuck is below.
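The button half I can manage; it's writing into the grid that fails. Here is roughly where I am (the Tracker knob names are the standard ones; how to address an individual grid corner is exactly the part I can't figure out):

```python
# Sketch: add a user button to the GridWarp and read the four tracker corners.
# Writing those values into the grid knob is the part that returns False for me.
import nuke

warp = nuke.toNode('GridWarp1')
warp.addKnob(nuke.PyScript_Knob('link_corners', 'Link to Tracker1'))

tracker = nuke.toNode('Tracker1')
corners = [tracker['track%d' % (i + 1)].value() for i in range(4)]
print(corners)   # four [x, y] pairs; these need to land on the grid corners
```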

any help is much appreciated!
thanks guys….

UVProject spherical wrinkle problem

Hi, I've just started with 3D in Nuke. My first trial uses the UVProject node: I have lat-long footage (for spherical mapping), a UVProject node, and a Sphere node. I apply the lat-long to the sphere using UVProject, with the projection set to 'spherical'. Checking in 3D space, everything is fine until I notice a wrinkle on the sphere, so the lat-long image is not perfectly wrapped onto it. Does anyone know how to fix this?

In another test, I set the UVProject node to invert U, then added a ScanlineRender node with its projection set to 'spherical' to get my lat-long back. Comparing the ScanlineRender output with the original lat-long, everything is identical except for the wrinkle at the left edge of my lat-long. If I remove or disable the UVProject node (so the sphere uses its default projection), the lat-long is perfect (just a bit of distortion near the edge). The question is: why does UVProject ruin it?

Can anyone explain it?

“Stress map” from DD in “Mummy 3”

While viewing the case study of The Mummy 3 at Gnomon:

http://www.gnomonschool.com/events/mummy/mummy.php

In the "Rise of the Undermummy" (half of the video), into the many CG layers exist a "stress map" (output from Houdini or Syflex/Maya… this is another thread):

– magenta: maybe compression
– cyan: expansion??

What is the purpose of these layers, if they are used, inside the compositing work (in Nuke especially)? Stretching the textures via warping? Or are they only for the event's explanation/visualization?

Thanks in advance.
David.

Viewer question

Is it possible to remotely control playback?
I have my tablet mapped to whichever monitor I'm switched to (using a toggle-displays button). There is only one thing that bothers me: I have to switch whenever I want to start or stop playback. Is it possible to have the playback controls on one screen and the actual Viewer on another?
That way I would only have to switch for roto 🙂
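I did poke at the Python side from the Script Editor; nuke.activeViewer() and ViewerWindow.frameControl() look like the right hooks for binding play/stop to a hotkey, but I haven't found the exact argument mapping. This is just a sketch:

```python
# Sketch: drive the active Viewer's transport from a shortcut on any screen.
# The frameControl() argument value is an assumption; check
# help(nuke.ViewerWindow.frameControl) for the real mapping.
import nuke

def remote_play():
    v = nuke.activeViewer()
    if v:
        v.frameControl(2)   # assumed: a positive value plays forward

nuke.menu('Nuke').addCommand('Viewer/Remote Play', remote_play, 'F8')
```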