Best way to get stereo cam from 3DSMax to Nuke

Hello,

I'm working on a stereo project. The 3D is done in 3DSMax, and Nuke is used for the compositing and to add some elements into the stereo.
What’s the best way to work?

– How do I get the stereo cameras from 3DSMax into Nuke? I've seen the Max2Nuke script; it seems to be perfect for that.
– How do I get the work done in 3DSMax visible in Nuke (rendered as an image sequence and read into Nuke)?

I've already done stereo compositing in Nuke, but never mixed stereo from 3DSMax with stereo from Nuke…

Thank you for your help!
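In case Max2Nuke doesn't pan out, one fallback is baking each eye's camera to a .chan file, which Nuke's Camera node can load via its "import chan file" button. It's a plain text format: one line per frame with the frame number, translation, and rotation (plus an optional extra column for the lens, vertical FOV if I remember right). A minimal sketch, assuming you can dump per-frame transforms for each camera out of Max (the numbers below are made up):

```python
def write_chan(path, frames):
    """frames: list of (frame, tx, ty, tz, rx, ry, rz) tuples,
    rotations in degrees. One tab-separated line per frame."""
    with open(path, "w") as f:
        for row in frames:
            f.write("%d\t%.6f\t%.6f\t%.6f\t%.6f\t%.6f\t%.6f\n" % row)

# Example: a two-frame dolly for the left eye (made-up values).
write_chan("camera_left.chan",
           [(1, 0.0, 1.5, 10.0, 0.0, 0.0, 0.0),
            (2, 0.0, 1.5,  9.8, 0.0, 0.0, 0.0)])
```

One gotcha to watch: the rotation order knob on the Nuke Camera has to match whatever order you baked the rotations out of Max in, or the move will drift.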

Using more than the RGB for mask channels?

When you export mask/matte passes from Maya to Nuke, it's good to use the RGB channels, but that's only three channels per rendered pass. It would be nice to pack more masks into one rendered EXR file. You can add a lot more with EXR, right? What is the easiest way?
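Right, EXR supports arbitrary named channels, so you're not limited to rgba: you can declare whole extra layers and copy mattes into them. A small sketch of the naming side (the layer and channel names like "mattes.head" are made-up examples), with the Nuke-side calls shown in comments:

```python
def matte_channels(layer, names):
    """Build fully qualified channel names for a custom EXR layer,
    e.g. 'mattes.head' from layer 'mattes' and name 'head'."""
    return ["%s.%s" % (layer, n) for n in names]

channels = matte_channels("mattes", ["head", "body", "hands", "props"])

# Inside Nuke you would then declare the layer and fill it, e.g.:
#   nuke.Layer("mattes", channels)
#   nuke.nodes.Copy(from0="rgba.alpha", to0="mattes.head")
# and set the Write node to EXR with "channels: all", so every layer
# ends up in the single rendered file.
```

On the Maya side, any renderer that lets you name AOVs/render elements can write those extra layers directly into one EXR, so the mattes never need to be separate files at all.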

ProRes file format??

Hi guys,

I have a problem I cannot solve. I have an Apple ProRes 422 HQ 8-bit mov (let's call this "Original").
I simply Read it with the default 1.8 gamma.
I connect a Write node after that to write exactly the same Apple ProRes 422 HQ 8-bit mov with 1.8 gamma (call this "Converted").

Now I read the Converted mov back. I can't see any difference between Converted and Original in Nuke's Viewer (swapping back and forth). If I add a Merge node set to difference, there is a visible difference, unless I turn video colourspace on; if I do, it's OK.

Now if I open both in QT Player and swap back and forth again, the Converted is definitely lighter than the other one.

Nuke 6.1v2 and OSX 10.5.8 here.

Any ideas what is going on here? How can I fix this issue?

Thanks,

pH.

ps: easy on me please, I'm new to Nuke… :)
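For what it's worth, a lighter/darker shift between two players is exactly what a gamma-interpretation mismatch looks like: the pixel data in the two files can differ even when both decode identically under one curve. A back-of-the-envelope sketch with a made-up pixel value and plain power-law gammas (QuickTime's real transfer handling is more involved, but the mechanism is the same):

```python
# Encode a linear value with gamma 1.8 (what gets stored in the file),
# then decode it as if it were gamma 2.2, roughly what a mismatch
# between Nuke's 1.8 interpretation and the player's display
# pipeline would amount to.
linear = 0.5
encoded = linear ** (1.0 / 1.8)   # value written to the file
seen = encoded ** 2.2             # value a 2.2 decode displays
shift = seen - linear             # non-zero: mid-greys visibly move
```

So the usual fix is to make the Read and Write colourspaces agree with what QuickTime actually does to ProRes on your system, rather than trusting the 1.8 default in both directions.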

Just a thought of point cloud

I was wondering if one could generate a RealFlow point cloud and import it into Nuke… has anyone ever tried this?

Could make for some quite amazing effects, methinks…
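One low-tech route, assuming you can get the RealFlow particles out as plain coordinates (the native .bin format would need its own parser), is to bake them into an OBJ of bare vertices; a vertex-only OBJ should come into Nuke's ReadGeo as a point cloud. A minimal sketch with made-up data:

```python
def points_to_obj(points, path):
    """Write (x, y, z) tuples as bare OBJ vertex lines; with no faces
    defined, the geometry reads as loose points."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write("v %f %f %f\n" % (x, y, z))

# Made-up example data, one file per frame of the sim:
points_to_obj([(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)], "cloud.obj")
```

For an animated sim you'd write a numbered sequence (cloud.0001.obj, …) and let ReadGeo pick up the sequence.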

Cheers

Happy new Nukes 🙂

Which node do you use for a rendered DOF?

A quick question:

I have a beauty, a luminance depth, and a DOF pass from a 3D render. My impression was that to create lens blur, you apply the DOF as a blur to your image. The nodes I thought could do this well enough are Blur and ZBlur, but which one would create a more accurate DOF?

My thought was with Blur –

Input beauty and mask the blur with DOF.

With ZBlur –

Input the beauty, use the lumDepth as a Z channel, and mask with the DOF pass.

Since ZBlur uses more information, I assumed it would create a better blur in my scene, but I could easily be wrong about that. Could anyone take a quick moment to clarify the difference between the two? Thank you in advance for your reply.
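The key behavioural difference between the two setups: Blur applies one radius everywhere, and the DOF mask only attenuates where the result shows through, whereas ZBlur derives a different radius per pixel from the depth channel. A toy sketch of that per-pixel behaviour (a made-up linear falloff, not ZBlur's actual math):

```python
def blur_radius(z, focus_dist, in_focus_range, max_radius):
    """Per-pixel blur radius from depth: zero inside the in-focus
    band around the focal plane, growing with distance beyond it,
    clamped to a maximum size."""
    d = abs(z - focus_dist)
    if d <= in_focus_range:
        return 0.0
    return min(max_radius, d - in_focus_range)

# A pixel on the focal plane stays sharp; a distant one hits the cap.
sharp = blur_radius(10.0, 10.0, 1.0, 8.0)
soft = blur_radius(25.0, 10.0, 1.0, 8.0)
```

That graded falloff is what makes the depth-driven approach look more like a real lens than a single masked radius, especially across edges between near and far objects.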

Nuke file association help!

:( Dear friends, after using Nuke 64-bit for some time on Windows Ultimate 64, I have hit a permanent file association problem. For example, I create a project and instruct the Write node to render, say, a 100-image sequence, asking it to output in JPEG format. When I open the output folder, I get the 100-image sequence in anything but JPEG: the created images have virtually no name extension, and Windows refers to them as "File" in the File Type column. Premiere and other NLEs are unable to resolve the file type and won't import them for further processing.

I ran a preliminary search on the internet, and they say it's probably a registry problem. They propose downloading programs like PC Mighty Max 2010 to resolve many problems, file association mismatches among them. It seems that I can go nowhere from here… It's urgent for me. Can you help? Please!

By the way, I hope you enjoyed your Christmas, and I wish all of you a happy, prosperous and creative new year of 2011. :)
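One thing worth checking before blaming the registry: Nuke names output files literally from the Write node's file knob, so if the path you typed has no extension, the rendered files have no extension either, and Windows shows them as type "File" regardless of the format dropdown. A small sketch of the kind of path the Write node expects (the names are made up):

```python
def render_path(basename, ext="jpg", padding=4):
    """Build a Nuke-style sequence path with an explicit extension,
    e.g. 'shot.####.jpg'. The run of '#' is replaced by the padded
    frame number at render time."""
    return "%s.%s.%s" % (basename, "#" * padding, ext)

print(render_path("renders/shot"))
```

If the extension really is missing, a batch rename to add ".jpg" should also rescue the already-rendered sequence without re-rendering.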

Is multiple light wraps necessary?

Aside from the added ability to tweak and control objects’ lightwrap in your scene, is it really a good idea to have multiple lightwraps?

Say I have a rendered 3D scene with a metal ball and a wooden cube: two very different materials and textures. Say I rendered the ball, the cube, and the background separately, with no holdout, so there isn't a black edge where the ball and cube used to be, and now I need to integrate the objects back in with a LightWrap node.

In a situation with very different materials, would it be necessary to have multiple nodes, or is it fine to just clump them together and use one? My concern is that I may need different settings for each distinct material or it may not look right (although I don't know whether that would actually be an issue).

I honestly have very little knowledge of the physics of light, so I was wondering which would be most physically correct and look the best.

Closing a message box with Python

Is it possible to close a nuke.message() window through python rather than asking the user to hit OK?
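Not through nuke.message() itself as far as I know: it shows a blocking modal Qt dialog. One workaround is to arm a Qt timer *before* showing it, so that when the timer fires it closes whatever modal widget is active. A sketch, assuming a Nuke build that exposes PySide2 (older versions shipped PyQt, so adjust the import accordingly):

```python
def message_with_timeout(text, timeout_ms=3000):
    """Show nuke.message() but auto-dismiss it after timeout_ms.
    nuke.message() blocks, so the timer must be started before the
    call; when it fires, it closes the currently active modal dialog.
    Assumes Nuke's bundled Qt bindings (PySide2 here)."""
    import nuke
    from PySide2 import QtCore, QtWidgets

    def _dismiss():
        dialog = QtWidgets.QApplication.activeModalWidget()
        if dialog is not None:
            dialog.close()

    QtCore.QTimer.singleShot(timeout_ms, _dismiss)
    nuke.message(text)
```

Note the timer closes *whichever* modal dialog is up when it fires, so this is best kept to simple cases where no other dialog can appear in the meantime.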

3d camera movement

I just saw an effects shot I thought was interesting, but a little confusing. Or maybe I'm just not seeing the trick.

It's an exterior shot moving in a circle until it stops at about 90 degrees and begins moving toward the door of a house, a little wobbly, though not much. It's an intro to a movie…

What I am trying to figure out is whether the entire door is CG. As the camera begins to get closer, it closes in on the door's peephole and goes through it into the interior of the house.

It shook my mind up a little and got me thinking about 3D camera movement. Is it possible to match the camera movement of 3D elements to the camera movement of the footage at a certain point in time? So in this case, once the camera gets too close to the door, the footage stops and the 3D camera takes over? How exactly would you motion track a 3D camera to the footage's movement?

What other way could this effect have been done?

If the door is not CG, then how would the DP get the zooming shot into the door's peephole for the 3D camera animation? At least with no "real" door present, I figure moving the camera in would be that much easier, having nothing in the way. This way it would be simpler to comp in the door and track it; then, as the camera gets closer, the inside of the peephole is revealed and the interior of the home easily comped in behind it.

It's obvious the door's peephole is CG, but how the 3D camera movement through the hole is combined with the footage's camera movement, if that's the case, is a mystery to me right now.

Read over this a few times and let me know if this makes any sense. Please feel free to post some thoughts.

Thanks!

*If you're curious to see the shot, it's the beginning of Masters of Horror: Family (Se. 2, Ep. 2).

Positioning lights and camera

Hello,

Is it possible to position lights and cameras by looking through them and then moving the camera or light (like in Maya)?
thanks,

bern