PFTrack – Determining focal length from a cropped sensor

I'm using a RED One with a 20mm lens to shoot a tracking shot, and I'm running into a big problem with my understanding of film backs and crop factors for sensors.

Say I shoot my footage at HD on my RED. From what I understand, the camera essentially crops the sensor to achieve this. So when I enter my focal length into PFTrack, is it still 20mm (the focal length I shot with), or has the focal length effectively changed because the sensor has been cropped?
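For what it's worth, the optics here are unambiguous: cropping the sensor never changes the lens's focal length, only the film back (and therefore the field of view). So the focal length stays 20mm; what must change in the tracker is the film-back size, which has to match the cropped sensor area. A quick sketch of the relationship (the sensor widths below are made-up illustration numbers, not RED specs):

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """Horizontal field of view of an ideal pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

focal = 20.0        # mm -- the lens; this never changes when you crop
full_width = 24.0   # mm -- hypothetical full sensor width
crop_width = 21.0   # mm -- hypothetical width actually used at HD

print(horizontal_fov_deg(focal, full_width))  # wider view
print(horizontal_fov_deg(focal, crop_width))  # narrower: same lens, smaller film back
```

Entering 20mm together with the full-sensor film back would hand the tracker a field of view that is too wide for the cropped footage; 20mm plus the cropped film-back dimensions is consistent.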

Thanks in advance.

Cinema 4D and MatchMover

Does anybody know of a tutorial covering MatchMover, Cinema 4D and After Effects, please?

Static camera – Moving scene (SynthEyes)

Hello to all,
Can someone give me some advice? I need to export a camera from SynthEyes, but I don't want a moving camera; I want a static camera with a moving scene (points).
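Not a direct answer for the SynthEyes UI (it may well have a coordinate or export option for exactly this), but if you end up baking it yourself after export, a static camera with moving points is just the same solve expressed in the camera's own coordinate frame: per frame, subtract the camera position and apply the inverse (transposed) camera rotation to every point. A minimal sketch, with the camera pose given as a camera-to-world rotation matrix plus a position (names are mine, not SynthEyes'):

```python
def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def to_camera_space(p_world, cam_rot, cam_pos):
    """Re-express a world-space point in the camera's local frame.

    cam_rot: 3x3 camera-to-world rotation; cam_pos: camera position.
    Doing this for every point on every frame leaves the camera fixed
    at the origin while the point cloud carries all the motion.
    """
    rel = [p_world[i] - cam_pos[i] for i in range(3)]
    return mat_vec(transpose(cam_rot), rel)

# Camera 5 units up the Z axis, no rotation: the world origin ends up
# 5 units along -Z in front of the now-static camera.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(to_camera_space([0, 0, 0], identity, [0, 0, 5]))  # [0, 0, -5]
```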
Thanks.

Voodoo Camera Tracker, No point cloud

I’m trying to track an image sequence 67 frames long. I get results when I let Voodoo estimate the focal length, but when I set the focal length to 5.6 mm I don’t get any 3D points. Can anyone help me?

Boujou export units problem

Hey.

I am using Boujou to track several video sequences in order to compare camera-movement data. I do this by exporting .txt data and then putting the x, y, z values into graphs.

My problem is that Boujou has no information about the real-world distances in the video, so the scale units in the exported data differ between sequences, which makes the results incomparable.
Does anybody know how to define real-world distances for my video sequences in Boujou?
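Boujou may let you fix this at the source by giving it a known distance in the scene (worth checking its scene geometry/survey options), but if not, you can normalize after export: measure one real distance on set between two features you track, then scale every exported coordinate so that distance comes out right. A sketch in pure Python (function and parameter names are mine, not Boujou's):

```python
import math

def rescale(points, p_a, p_b, real_distance):
    """Scale exported coordinates so the distance between two known
    points matches a measured real-world distance.

    points: list of (x, y, z) tuples from the exported .txt data
    p_a, p_b: the two reference points as (x, y, z)
    real_distance: their measured separation on set (e.g. in metres)
    """
    exported = math.dist(p_a, p_b)          # distance in Boujou units
    s = real_distance / exported            # Boujou units -> real units
    return [(x * s, y * s, z * s) for (x, y, z) in points]

# Example: two reference points 2.0 Boujou-units apart that are really
# 1.0 m apart -> everything gets scaled by 0.5.
print(rescale([(2.0, 0.0, 0.0)], (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 1.0))
```

Do the same rescale (same two reference points, same measured distance) for every sequence and the graphs become directly comparable.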

Thx. 😉

SynthEyes – Hardware render

Hello all.
How can I do a hardware render in SynthEyes (with tracker points, cones, or something else)?
Thanks.

Tracking in PFTrack 5

Hi! I want to track using user features in PFTrack 5.
Thing is, after I place my user features in the shot, they disappear after the first frame. That is, of course, because they aren’t tracked.
So my question is:
How do I track using user features in PFTrack 5?

I’ve seen video tutorials for previous PFTrack versions showing that you select your user feature, right-click, and choose Track forward (or backward), but as far as I can tell those options don’t exist in PFTrack 5.

Need help,
Magnus

PFTrack Export Issues

Alright, got a weird one here.

So we’re undistorting some DPX plates in PFTrack and want to re-distort them in Nuke, with 3D elements included, using the STMap method. Pretty standard stuff, and we’ve got the workflow down, but for some reason PFTrack seems to be spitting out 8-bit images.

To explain: We create a UV ramp in Nuke (oversized to match the undistorted plates), render it to an EXR, and import it into PFTrack. Then we apply our inverted distortion coefficients to the ramp, and it all looks good. Up until this point, everything is going swimmingly. However, when we try to export this re-distortion ramp from PFTrack to an EXR to apply it in Nuke, the whole thing goes tits-up.

The EXR that comes out of PFTrack claims to be float: the metadata says it’s float and Photoshop says it’s float, but the pixel data is definitely 8-bit. As a consequence, using it as any kind of STMap input results in horrible blocking artifacts.

So my question is, basically: is there a way to get PFTrack to process/export at anything higher than 8 bits? Has anyone else ever run into this issue?
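No answer on the PFTrack setting itself, but here's a quick way to confirm the diagnosis without trusting any metadata: pull some pixel values out of the file (however you like) and check whether the "float" samples all sit on the 256 evenly spaced levels k/255. A sketch with synthetic data standing in for the real samples:

```python
def looks_8bit(samples, tol=1e-6):
    """Heuristic: True if every float sample sits on one of the 256
    levels k/255 -- i.e. the 'float' data was quantized to 8 bits."""
    for v in samples:
        nearest = round(v * 255) / 255
        if abs(v - nearest) > tol:
            return False
    return True

# An 8-bit-quantized ramp passes; a genuinely smooth float ramp fails.
quantized = [round((i / 999) * 255) / 255 for i in range(1000)]
smooth = [i / 999 for i in range(1000)]
print(looks_8bit(quantized), looks_8bit(smooth))  # True False
```

If a redistortion ramp comes back True, the blocking artifacts are exactly what you'd expect: an STMap needs sub-pixel precision, and 256 levels across the frame can't provide it.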

A different way to matchmove-model a head

Hello people!

Just thought to share this. Maybe someone has already tried it, maybe not. At least I haven’t seen it done this way before.

All this because I don’t have a scanner and I’m not a Super Modeler Individual From Outer Space 🙂

No, I did not track hundreds of markers on the head and try to clean up the resulting mesh. Yes, of course, I have tried that too, but it is too time-consuming and complicated. My approach is different and more straightforward.

First of all, let’s make it clear that we shoot different clips for different purposes. So I’m not talking about a shot from some scene; I’m talking about a short, separate clip of video taken of our talent specifically for the purpose of modeling his head.

To model the face as accurately as possible, I have my talent turn his head from side to side, starting with him facing the camera, because I want roughly a 180-degree rotation of the head.

Alternatively, you can have him turn from one side to the other and make sure you know the exact frame in the middle where the face is pointing absolutely straight ahead. Whatever best serves your needs.

I did not have any tracking markers on the face, but you may want to use them, because tracking can take a while longer without them. It depends on whether you need the face clean and on the level of visible detail in the skin. If you have the format available, RED footage also gives you face texture material as you work.

Anyway, all this stuff I do to achieve the following situation:

I want to model the head, so I want my geometry at the origin of my 3D scene, and I need a rotating camera running around it, projecting the footage back onto it.

So I track the head, but solve the 3D track as if it were a moving camera. This means I get a stationary 3D world with markers and a moving camera.

Then I go to my 3D package (in my case 3ds Max) and bring in my moving camera.

Prior to this, I had modeled a simple piece of geometry, including only the most important features and edge loops of the face area. I place it on one side of the face and roughly fit it to the corresponding areas of the real face. Then I mirror the face geometry so that I get the other side as well. I don’t worry too much about asymmetry at this point.

Then I use camera mapping to project the tracked footage onto the face geometry (using the free Camera Map Animate plugin). When I turn on visible edges, I can see my footage on the rough face geometry along with my edges.

Now I have a setup with a rotating camera projecting the tracked video footage onto the geometry, and I can scrub back and forth through my timeline; it shows me exactly whether my edge loops are sliding as I refine the geometry one step at a time.
When the face works, go for the rest of the head.
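For anyone curious what the camera-mapping step is doing under the hood (this is the model, not the plugin's actual code): each vertex is projected through the solved camera for the current frame, and whatever pixel it lands on is what gets textured onto it. A bare-bones pinhole projection, with made-up parameter names:

```python
def project(p_cam, focal, film_w, film_h, width, height):
    """Project a camera-space point (camera looking down -Z) into
    pixel coordinates with a simple pinhole model.

    focal, film_w, film_h are in mm; width, height are in pixels.
    Returns None for points behind the camera.
    """
    x, y, z = p_cam
    if z >= 0:
        return None                     # behind the camera
    u = focal * x / -z                  # film-plane position, mm
    v = focal * y / -z
    px = (u / film_w + 0.5) * width     # 0..width, left to right
    py = (0.5 - v / film_h) * height    # 0..height, top to bottom
    return px, py

# A point straight ahead of the camera lands dead centre of the frame.
print(project((0, 0, -10), 20.0, 24.0, 13.5, 1024, 576))  # (512.0, 288.0)
```

If the geometry is right, a vertex projects onto the same facial feature on every frame as the camera orbits; if an edge loop slides across the projected footage while scrubbing, the geometry under it is off.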

It is up to you which frame counts, resolutions and formats you wish to use. I used a width of 1024 and a JPG sequence, which makes it relatively light to scrub through yet reasonably accurate in the viewport (and remember to customize how your 3D package displays texture detail so you get enough of it).

Ok.

So, seeing the edges along with the projected footage of the turning head made my life much easier, because I could model the head and scrub through the timeline as if I had asked my talent to turn his head to whatever position I needed.

And, most importantly, I got reasonable results in a much more acceptable time than I expected before trying this out. Then again, maybe it’s just me.

Masks

Hi, I am pretty new to VFX.
I filmed my footage standing in an alley, looking in the direction of a street at the end of the alley. I have some camera movement; I tracked everything in Boujou and imported it into Maya. Now, if I want a CG car passing by on the street, how can I mask the two walls of the alley so the CG does not overlap the footage (later, after 3D rendering and in compositing)? Sorry for my English and the bad explanation.
I filmed my footage standing in an alley looking in the direction of a street (at the end of the alley). I have some cameramovement and tracked everything in boujou and imported it into maya. Now, if i want a cg-car passing on the street, how can I mask the two walls of the alley so the cg is not overlapping the footage(later after 3d-rendering and in compositing)???? Sorry for my english and the bad explanation