Export DVCPRO HD 1080p25: what settings? Weird problem

What settings do you use when exporting DVCPRO HD out of Nuke? I export for use in a Final Cut project with a DVCPRO HD timeline.

If, in Final Cut, I mix a DVCPRO HD clip rendered from Nuke with original clips from the camera (HVX200 DVCPRO HD, 1440×1080), it looks good in the timeline. But if I output from FCP to a DVCPRO HD file, that clip gets a messed-up aspect ratio and needs to be re-rendered when opened in FCP. Why?

In Nuke I do this: import the original DVCPRO HD 1440×1080 footage, change the colorspace to sRGB, add a Write node, and export it as DVCPRO HD 1080p25. I don't check premult, raw, or use format aspect.

As long as I only use clips that have been rendered out of Nuke with these settings, everything is fine when rendering out of FCP. But as soon as I mix in other clips, I get the problem. I also get the same problem if I work only with clips from Nuke and add a color-correct filter in FCP.
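For what it's worth, aspect problems like this usually come down to one clip carrying the wrong pixel aspect ratio metadata. DVCPRO HD 1080 stores anamorphic 1440×1080 frames with a 4:3 pixel aspect, displayed as 1920×1080. A quick sanity check of the numbers (my own illustration, not from the post):

```python
# DVCPRO HD 1080 stores anamorphic frames: 1440x1080 with a 4:3 (1.333)
# pixel aspect ratio, which displays as 1920x1080 in square pixels.
STORED_WIDTH = 1440
PIXEL_ASPECT = 4.0 / 3.0

display_width = round(STORED_WIDTH * PIXEL_ASPECT)
print(display_width)  # 1920
```

Whether FCP and Nuke actually tag your clips with that 1.333 pixel aspect (rather than square pixels) is the thing to check on the clip that breaks.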

3ds Max camera export to After Effects

How do I export an animated camera from 3ds Max to After Effects?
And how do I then add that camera to an AE composition?

thanks

A nuke.Boolean_Knob issue

Hi, guys.

I got a nuke.Boolean_Knob issue.

Here is my script:

Code:

import nuke
import nukescripts

# create a panel class
makeSnapsPanel = nukescripts.PythonPanel('Make Snapshots', 'makeSnaps')

useOriginalSizeBo = nuke.Boolean_Knob('useOriginalSize', 'Use Original Size')
useOriginalSizeBo.setValue(True)

# Add knobs
makeSnapsPanel.addKnob(useOriginalSizeBo)

makeSnapsPanel.show()


I want to run some script each time I click the nuke.Boolean_Knob.
For instance, each time I change the state of the knob, it should print something like useOriginalSizeBo.value().

How can I do that?

I used help(nuke.Boolean_Knob) to list all of its methods, but I haven't found anything useful.

Does anybody know how to do this?

Thank you so much!
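One common pattern for this (an assumption on my part; the post doesn't confirm the poster's Nuke version supports it) is to subclass nukescripts.PythonPanel and override its knobChanged method, which Nuke calls whenever a knob on the panel changes. Below is a runnable sketch of that callback logic using a tiny stand-in knob class, since the real nuke module only exists inside Nuke; in Nuke itself you would drop FakeKnob and let the panel machinery call knobChanged for you.

```python
# Sketch of the knobChanged pattern. Inside Nuke you would subclass
# nukescripts.PythonPanel and override knobChanged(self, knob); here a
# minimal stand-in knob makes the callback logic runnable anywhere.

class FakeKnob:
    """Hypothetical stand-in for nuke.Boolean_Knob, for testing only."""
    def __init__(self, name, value=False):
        self._name, self._value = name, value
    def name(self):
        return self._name
    def value(self):
        return self._value
    def setValue(self, v):
        self._value = v

class MakeSnapsPanel:
    """Stand-in for a nukescripts.PythonPanel subclass."""
    def __init__(self):
        self.useOriginalSize = FakeKnob('useOriginalSize', True)
        self.messages = []

    # In Nuke, this override fires whenever any knob on the panel changes.
    def knobChanged(self, knob):
        if knob.name() == 'useOriginalSize':
            self.messages.append('useOriginalSize is now %s' % knob.value())

panel = MakeSnapsPanel()
panel.useOriginalSize.setValue(False)
panel.knobChanged(panel.useOriginalSize)  # Nuke would call this for you
print(panel.messages[-1])  # useOriginalSize is now False
```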

One Class Issue

Hi, guys.

I wanted to create a panel for batch-making snapshots.
The idea is to render the first frame of each QuickTime file.
I created a function to do the batch render, then I created the panel.

Here’s the function:

Code:

import os
import nuke

# create a function to make snapshots
def makeSnap(qtPath, snapPath, snapFrame):
    qts = os.listdir(qtPath)
    qts.sort()

    for i in qts:
        if not i.startswith('.') and i.endswith('.mov'):
            # create a Read node pointing at the qt file
            setFile = 'file "%s"' % os.path.join(qtPath, i)
            qt = nuke.createNode('Read', setFile, inpanel=False)

            # create a Write node with jpg output
            qtName = os.path.splitext(i)[0]
            snapName = qtName + '.jpg'
            write = nuke.nodes.Write()
            write['file_type'].setValue('jpeg')
            write['_jpeg_quality'].setValue(1)
            write['file'].setValue(os.path.join(snapPath, snapName))
            write.setInput(0, qt)

            # render the single snapshot frame
            nuke.render('root.%s' % write.name(), snapFrame, snapFrame)

            # after rendering, delete the temporary nodes
            for n in [qt, write]:
                nuke.delete(n)


It worked pretty well. But when I wanted to wrap the script into a class, a problem came up.

It seems that when the function is wrapped into the class, this line, write['file_type'].setValue('jpeg'), doesn't work.

When this line runs as part of the plain function, it works fine. But when it runs as part of the class, it only changes the file_type property of the Write node; the dependent '_jpeg_quality' knob doesn't appear, so I get an error when the line write['_jpeg_quality'].setValue(1) runs.

Is there something I need to pay attention to when building a class?
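One thing that sometimes bites here (an assumption on my part, since the class code isn't shown): format-specific knobs like '_jpeg_quality' only exist once file_type has actually been set to 'jpeg', so passing file_type at node-creation time, e.g. nuke.nodes.Write(file_type='jpeg'), sidesteps any ordering or deferred-update problem. The runnable stand-in below mimics that dependent-knob behavior with a plain dict-backed fake node, just to show why the ordering matters; the FakeKnob/FakeWrite names are hypothetical and only exist for this sketch.

```python
# Hypothetical stand-in for a Nuke Write node: the '_jpeg_quality' knob
# is created only after 'file_type' is set to 'jpeg', mimicking Nuke.

class FakeKnob:
    def __init__(self, owner, name):
        self.owner, self.name = owner, name
    def setValue(self, v):
        self.owner.values[self.name] = v
        # mimic Nuke: choosing the jpeg file type creates its quality knob
        if self.name == 'file_type' and v == 'jpeg':
            self.owner.values.setdefault('_jpeg_quality', 0.75)

class FakeWrite:
    def __init__(self):
        self.values = {'file_type': None, 'file': None}
    def __getitem__(self, name):
        if name not in self.values:
            raise KeyError('no knob %r yet' % name)
        return FakeKnob(self, name)

w = FakeWrite()
w['file_type'].setValue('jpeg')   # must happen first...
w['_jpeg_quality'].setValue(1)    # ...otherwise this raises KeyError
print(w.values['_jpeg_quality'])  # 1
```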

Simple transform and merge help

Hi,

I tried to merge two ramps, each with a resolution of 1024×256, into a final image of 1024×512, but I couldn't figure out how to manipulate the bbox or the Transform node to achieve this.


As you can see, the top ramp goes totally black when I transform it up 256 pixels in Y.

Code:

version 6.1 v1
Root {
 inputs 0
 name "C:/Documents and Settings/Administrator/Desktop/merge_ramp.nk"
 format "1280 720 0 0 1280 720 1 HD_720P"
 proxy_type scale
 proxy_format "1024 778 0 0 1024 778 1 1K_Super_35(full-ap)"
}
Constant {
 inputs 0
 channels rgb
 format "1024 512 0 0 1024 512 1 Lat-Long 1K"
 name Constant2
 xpos 54
 ypos -112
}
Reformat {
 type "to box"
 box_width 1024
 box_height 256
 box_fixed true
 name Reformat2
 xpos 54
 ypos -31
}
Expression {
 channel0 rgb
 expr0 1-sin((y/height)*0.5*3.14)
 name Expression1
 xpos 54
 ypos 76
}
Transform {
 translate {0 257}
 center {512 128}
 name Transform1
 xpos 54
 ypos 155
}
Constant {
 inputs 0
 channels rgb
 format "1024 512 0 0 1024 512 1 Lat-Long 1K"
 name Constant3
 xpos 267
 ypos -108
}
Reformat {
 type "to box"
 box_width 1024
 box_height 256
 box_fixed true
 name Reformat3
 xpos 267
 ypos -36
}
Expression {
 channel0 rgb
 expr0 sin((y/height)*0.5*3.14)
 name Expression3
 xpos 267
 ypos 79
}
Merge2 {
 inputs 2
 name Merge1
 xpos 267
 ypos 155
}
Viewer {
 frame 1
 input_process false
 name Viewer1
 xpos 267
 ypos 231
}
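The Transform is pushing the ramp outside its 1024×256 format, and the result gets clipped to that format before the Merge. A common fix (my suggestion, not confirmed by the post) is to Reformat each ramp to the final 1024×512 box first, with resize set to "none", so there is room for the 256-pixel move. Here is a row-level sketch in plain Python of what the corrected graph should produce, using one value per scanline since the Expression nodes vary only in y:

```python
import math

H = 256  # height of each source ramp

# one value per row of each 1024x256 ramp (matching the Expression nodes)
bottom = [math.sin((y / H) * 0.5 * math.pi) for y in range(H)]
top = [1 - v for v in bottom]

# pad each ramp onto the full 512-row canvas first (the Reformat "to box,
# resize none" step), then place the top ramp at rows 256-511 (the
# Transform step) -- nothing gets clipped
canvas = bottom + top

print(len(canvas))  # 512
```

The same idea in the node graph: Constant/Expression at 1024×256, Reformat to 1024×512 without resizing, then Transform by 256 in Y, then Merge.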


Thanks,
Jason

Blending 2 cards in 3D space?

Friends,

Really not sure if this is a stupid question, but how do we merge/blend cards in 3D space (a Scene setup)? :o

For example: I have two cards in a Scene setup (white smoke and dark smoke footage) and want to blend them together (overlay, in, out, multiply, etc.) and get the result out of the same ScanlineRender.

The only way I can think of is to build two separate Scene setups with the individual cards and combine the ScanlineRender outputs with a Merge node. But that would mean a Scene node for every blend. Am I making sense?

In After Effects, blending modes affect both 3D and 2D layers; I was thinking along those lines.

:confused:

Lens squeeze ratio in 3DE4

hi guys,

Can anyone tell me where to find the "Lens squeeze ratio" setting in 3DEqualizer 4?

thanks

50% gray in a 32-bit linear project

I am trying to understand some basics of the linear workflow. I hope someone can enlighten me.

I am having trouble understanding how to interpret 50% gray while working linear. This seems like a fundamental issue to me, but maybe I am wrong? Again, enlightenment would be appreciated.

To explain my confusion I have a few images:


This first image is an 8-bit render from a 3D application, as interpreted by After Effects in an 8-bit sRGB project. As expected, 50% gray sits visually halfway between black and white.


This next image is a 32-bit render from a 3D application. It has been gamma corrected so that it will be interpreted properly by After Effects in a 32-bit linear color space.


This last image is the 32-bit render as interpreted by After Effects in a 32-bit linear project. As you can see, the grays are visually identical to the 8-bit image, but the values have changed. Why is this happening? Is it a good or bad thing? What are the implications for compositing?
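If it helps to see the numbers, here is a quick sketch (my own illustration, not from the post) of why the value changes: the pixel that *looks* mid-gray on an sRGB display has a much lower value in linear light, because the display transfer function is nonlinear. The values changing is therefore expected and correct; compositing operations like blurs and merges behave more like physical light when performed on the linear values.

```python
# Why "50% gray" changes value in a 32-bit linear project: a pixel that
# looks mid-gray through an sRGB display transform sits near 0.21, not
# 0.5, in linear light.

def srgb_to_linear(v):
    """Inverse of the standard sRGB transfer function."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

print(round(srgb_to_linear(0.5), 3))  # 0.214  (exact sRGB curve)
print(round(0.5 ** 2.2, 3))           # 0.218  (simple gamma-2.2 approximation)
```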

Thanks for reading or for answering any of these questions.

How to make this VFX shot?

I put my hand in a bag or something like that, then pull it out, and only my arm is left, with blood at the end where my hand is gone, as if there were a monster inside the bag that bit off (and completely severed) my hand! ;D
Please explain in simple words, as I am a beginner.
Thanks in advance.

Propaganda3 Experience Site

There must be something in the water in Kansas City, because Propaganda3 just launched one of the most insane (and insanely good) sites I’ve seen in quite some time.