Tracking Video Screens – AE Tutorial

In the new tutorial this week I show how you can use the Corner Pin tool in After Effects along with motion tracking and masking techniques to place a piece of video footage into a billboard in a streetscape scene. It’s a pretty versatile technique with a range of uses!

Tracking Screens

Also, don’t forget there’s heaps of other great video and audio tutorials on the site, along with a range of interviews with filmmakers, rants from Arlo about audio, and links to great work done by doco makers, animators, directors and more!

On a side note… I just finished editing a half-hour programme for the ABC on stop-motion animator Adam Elliot (2004 Oscar winner for Harvie Krumpet), and I can say that his new feature film, Mary and Max, looks unbelievable! It opened Sundance this year and has just been released in Australia. I hope it does very well, because it’s a beautiful film.

Peace.

Nick

XI. Grain and Noise

One of the rules is to match the black point of your CG to the black point of your film plate. You would never have a black point that is lower in the background than it is in the foreground. You can sample the color values of the darkest dark in your film plate and compare it to your darkest dark in your CGI. Do they match? Is one darker than the other? The color values may not be visible to you, but bump up that gamma on your monitor or TV. You’re sure to see some discrepancies.
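The black-point check described above can be sketched in a few lines of plain Python. This is a hypothetical example with made-up pixel values, not production code: sample the darkest value per channel in both images and see how far the CG blacks sit below the plate’s.

```python
# Sketch: compare the black point of a CG render against a film plate.
# Pixels are (R, G, B) floats in 0-1; these tiny sample "images" and
# their values are invented purely for illustration.

def black_point(pixels):
    """Darkest value found in each of the R, G, B channels."""
    return tuple(min(px[c] for px in pixels) for c in range(3))

plate = [(0.031, 0.028, 0.035), (0.045, 0.040, 0.050),
         (0.200, 0.180, 0.210), (0.090, 0.085, 0.100)]
cg    = [(0.000, 0.000, 0.000), (0.012, 0.010, 0.015),
         (0.300, 0.280, 0.310), (0.150, 0.140, 0.160)]

plate_black = black_point(plate)   # (0.031, 0.028, 0.035)
cg_black = black_point(cg)         # (0.0, 0.0, 0.0)

# The CG blacks sit below the plate blacks, so they need lifting.
lift = tuple(max(0.0, p - c) for p, c in zip(plate_black, cg_black))
```

The same comparison done by eye is exactly the "bump up the gamma" trick: discrepancies that are invisible at normal viewing levels jump out once the darks are lifted.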

What’s the difference? Grain is an artifact of shooting on film, while noise is an artifact of shooting on video. They look completely different! Some people use the terms interchangeably, but it can get confusing if someone says to add a little noise to your grain (which can mean something else entirely!), so I wouldn’t recommend it.

When you’re compositing 3D images into a film or video plate, they will rarely match up. Your job as a compositor is to dial in the color values, match the blacks, flesh tones, and so forth, until what was added and what is original blend together. Adding grain onto your final 3D is a necessary step to make the shot feel as if it was shot in one take.

Grain varies from film stock to film stock, and since it is a result of the film process, certain stocks are better at filming certain things. Kodak has a special stock for bluescreen and greenscreen, which cuts the amount of grain in those channels so you can pull an easier key. Usually you can set up a grain node that contains the same information as your original plate, and it will match the rest of the plates from that film reel. Sometimes it doesn’t, or it’s slightly off, and that involves adjusting it to match.

(Figure: grain shown per channel. In order: the full RGB layer, then red, green, blue.)

The amount of grain in a film plate also changes depending on the luminosity of the scene. Bright lights, clothing, etcetera, will have less apparent grain than the darker areas of the frame. Most grain packages can compensate for this. When matching grain, the easiest way is to zoom in fairly close and look at the individual channels. Match the grain for each channel, and then look at all the color channels together. Does the grain look correct? Do you need to add a little bit more in the blue channel? There are different types of grain: chunky, soft, patterned, biased to the red, green, or blue channels, and so forth. If you have the capability, shoot some frames of gray near the end of the roll; when it’s developed, you should have some nice grain on gray that you can use as reference, or even apply directly over your CG!
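As a rough illustration of luminance-dependent, per-channel grain, here is a small sketch in plain Python. The channel gains and the linear falloff are invented numbers, not values from any real grain package; the heavier gain on blue mimics the blue-channel bias mentioned above.

```python
import random

# Sketch: per-channel grain whose amplitude falls off with luminance,
# so bright areas show less apparent grain. The gains and the linear
# falloff curve are made-up values for illustration only.

def add_grain(pixels, gains=(0.02, 0.02, 0.04), seed=1):
    """gains: grain amplitude per channel (blue is often grainier)."""
    rng = random.Random(seed)
    out = []
    for r, g, b in pixels:
        luma = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 weights
        falloff = 1.0 - luma                          # brighter -> less grain
        out.append(tuple(
            min(1.0, max(0.0, v + rng.gauss(0.0, amp * falloff)))
            for v, amp in zip((r, g, b), gains)))
    return out

grained = add_grain([(0.1, 0.1, 0.1), (0.9, 0.9, 0.9)])
```

A real grain tool works per pixel over the whole frame and with response curves measured from the stock, but the shape of the operation is the same: amplitude scaled by channel and by brightness.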

Once grained, render and play out your shot. If the grain is still not matching, you may have to increase it. Sometimes watching it in motion will give a sense that it hasn’t been grained enough. Don’t push it too far though, and make sure the rest of your CG is integrated correctly in the first place!

Applying noise to your CG renders in video uses the same methodology. Analyze the grain in each channel, match, render, and play back. Fine tune.

Training Books

Hi, would anyone be able to recommend the best books available for learning Maya fluid/particle systems, and also a beginner’s guide to VFX in Houdini?

cheers

Which LCD?

Hi guys, I am going to get myself an LCD screen and will spend about 400. I am a graphic artist and editor. Can you recommend any good LCDs, especially when it comes to color and clarity? What should I look into when I am purchasing one?

thanks

Colour difference between applications?!!

hi everyone,

I was wondering if somebody could explain what’s the deal with all the color differences between applications? For example, colors in After Effects look one way; when I export the same clip from AE into Premiere it looks slightly different (more saturated), and when I export the final version from Premiere and play it in QuickTime or VLC, everything looks much brighter and nowhere near what I saw in Premiere’s viewfinder (which was set to highest quality)…

All the clips were exported as .mov or .avi files with no compression applied. What’s going on???

V. Paint

Another part in progress!

V. Paint

Paint. When do you paint frames as opposed to roto? How can you decide which method will work best given the task at hand? One thing to always remember is that painting multiple frames to remove something is time-consuming and wasteful. There are always easier ways to get rid of a camera, or a grip, or wires. By using roto you can effectively get rid of the aforementioned items, and use your paint skills to clean up harder areas of the frame. Paint is also not only used for clean-up, but also for creation. You can use paint strokes as lightning strikes, for electrical surges, for laser blasts. Almost anything that is dynamic in action can be created with paint. During my time on Stargate SG-1, I painted items such as staff blasts, zat hits, and electrical surges.

A method I’ve seen from some beginning artists (I’ve done this as well when I started!) is to paint tracking markers out by hand. Every frame. Or paint out wires. Many wires. Things to look out for when analyzing a frame and deciding when to paint come with practice and time. Let’s say I want to remove a wire rig that’s holding up an actor, and for the sake of argument, it’s a simple rig on a simple background: an actor suspended on bluescreen. The easiest way to remove this wire is to copy a bit of the surrounding bluescreen over the wire. You’re not painting through it, you’re covering it up with other bits of the frame. You’ll have to track this little bit and cover the wire as it moves, but it’s vastly easier than painting a clean frame and trying to match it up via grain later. However, sometimes it becomes necessary to do that. Pretty soon the only areas you will need to paint and touch up are where the wires meet the body.

Marker removal is tricky business. It can also be known as wire removal, grip removal, prop removal, etc. The object of marker removal is just that: removing a marker from an object, background or person for the comp. This could involve removing LEDs from a tracking shot, removing dots from an actor’s face, or removing wires and props from scenes where they don’t belong. The method most often used to remove tracking markers is replacing them with a similar background of the environment. If the tracking markers are on greenscreen, you would replace the markers with parts of the greenscreen, or a similar green color.

How to replace the markers with bits of the background or foreground now comes into question. I touched upon this briefly in the Paint section. However, instead of painting a clean frame, you can use roto and mask around the offending tracking marker. By offsetting your background and using the roto to effectively cut a small swatch of background (or foreground), you can cover up the marker! Oftentimes you’ll use tracking markers only to track the roto that covers the marker up. This method can usually be used for static markers such as the ones on greenscreens and tracking markers in environments.
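The offset-and-cut idea can be shown on a single scanline of invented pixel values: the roto mask selects the marker, and the replacement pixels come from an offset copy of the same plate. The marker position and offset here are hypothetical.

```python
# Sketch of the cover-up on one scanline: marker pixels are replaced by
# pixels pulled from an offset of the same plate, through a small roto
# mask. The values, marker position, and offset are all made up.

scanline = [0.2, 0.2, 0.2, 0.9, 0.9, 0.2, 0.2, 0.2]  # marker at 3-4
mask     = [0,   0,   0,   1,   1,   0,   0,   0  ]  # roto around marker
offset   = 3  # pull replacement pixels from 3 samples to the left

patched = [scanline[i - offset] if m else v
           for i, (v, m) in enumerate(zip(scanline, mask))]
# patched is now flat greenscreen: [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
```

In a moving shot the offset (and the mask) would be driven by the track, which is exactly the "track roto to cover the marker up" workflow described above.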

For more elaborate cover-ups, other techniques combined with the one above will usually get you in the right ballpark. For wire removal, instead of a circle of roto, you will have to create a line of roto over the wire, and instead of tracking, you may have to manually animate the roto to cover the wire. Large roto is usually not the best. The cleanest way is to cover the wire with a sliver of roto with a nice feathered edge. This should give you a smooth transition from the background over the wire. You may need to approach wire removal in sections instead of as a whole. This could involve many different techniques, from using the background as a cover, to painting a clean frame of a certain section of background, regraining it to match, and positioning it in place. Ideally you would use frame-by-frame painting as a last resort, and then only as a way to touch up edges or spots you may have missed. Prop removal usually requires either painting out a clean frame of an image, or having a clean background plate without the prop in it. While the methodology here is enough to start a decent clean-up, a good wire and/or rig removal can sometimes take a little time. Tracking marker removal is much easier!
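The feathered sliver can be sketched as a simple linear blend: where the mask is 1 the cover patch fully replaces the plate, and the fractional mask values soften the edge. All the numbers below are illustrative.

```python
# Sketch: a feathered sliver of roto blends an offset background patch
# over the wire. Mask values between 0 and 1 give the soft edge.

def feather_blend(cover, plate, mask):
    """Linear blend of a cover patch over the plate through a mask."""
    return [c * m + p * (1.0 - m) for c, p, m in zip(cover, plate, mask)]

plate = [0.2, 0.3, 0.8, 0.8, 0.3, 0.2]   # 0.8 = the wire
cover = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2]   # offset background patch
mask  = [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]   # feathered sliver

result = feather_blend(cover, plate, mask)
# the wire (0.8) is fully covered; the 0.5 mask samples blend halfway
```

The feather is what hides the seam: a hard-edged mask would print a visible line where the patch meets the plate.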

IV. Rotoscope Techniques

Another daily update. This comp manual is still in progress, and very few images are documented, so the final chapters which are published may stray slightly from what I’ve shown.

IV. Rotoscope Techniques

A compositor has many responsibilities for the successful completion of a shot. Almost every greenscreen or live action plate that requires a background change needs roto. Often, roto is supplemental and created on the fly by the compositor. You may need to isolate sections of greenscreen to create a better matte. Many people liken rotoscoping to junior-level work, when in reality, the techniques necessary to create an efficient matte come with years of experience and time. While some compositors are fortunate enough to have a roto artist help them with the completion of a composite, many don’t. What methods should you use?

I’ll describe articulate character or actor rotoscoping, which is the predominant form of roto that occurs in production. For roto, there are several different approaches and methods out there. Some dictate keyframing the roto every several frames. Others claim that starting with the beginning and end points of your shot and working inwards is the best way. Yet another efficient way is to rotoscope one piece and track that in. Out of all these different methods, what’s the best way? The best way is the most efficient and quick way for the shot that accomplishes the necessary mask without floating or jittering. We refer to floating and jittering as a mask that has an edge that moves around while the subject being rotoscoped moves smoothly. This can happen by adding too many points to a rotoscope, or animating the keys too frequently. Only experience will tell you how many points to add, but I will try and clarify some of these methods.

Humans move in a fairly predictable way. They are not jittery, as muscles take time to contract and move limbs. You can take advantage of this by creating roto keyframes at the extremes of their action. This allows for a smooth roto from extreme to extreme. If there are other changes in between them, you can add another keyframe. The big hint here is to not rotoscope an entire actor at one time. You’ll have way too many floating edges, and it will be tedious to go and adjust them. Think of each limb as a separate roto piece. The arms, legs and body are separate. Maybe the hat or the cape of another actor is separate as well. By dividing your goal into specific tasks, you’ll be able to complete the roto much quicker. Sometimes your actor is standing fairly still, and there’s no need to rotoscope so many appendages. What do you do then? Depending on the camera movement, if there is any, you can rotoscope a frame of the actor and track that to their movement. You can also apply this same method to numerous roto pieces: track the hat, track the arms, track the shovel, and so forth. Once tracked to a point correctly, you can go back in and adjust the mask, if necessary, again taking into account the extremes of the actor’s action. Items that don’t move very often are best done with tracked roto, while organic things should be articulate and keyframed.
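Keying at extremes works because of what the roto tools do for you in between: key a point only where the action changes direction and let the software interpolate. A tiny sketch with one hypothetical (x, y) point keyed at two extreme frames:

```python
# Sketch: a single roto point keyed at the two extremes of a move.
# Frames 1 and 25 and the coordinates are hypothetical; real packages
# interpolate whole shapes, often with smoother-than-linear curves.

keys = {1: (100.0, 200.0), 25: (160.0, 200.0)}

def interp_point(frame, keys):
    """Linearly interpolate between the surrounding keyframes."""
    frames = sorted(keys)
    lo = max(f for f in frames if f <= frame)
    hi = min(f for f in frames if f >= frame)
    if lo == hi:
        return keys[lo]
    t = (frame - lo) / (hi - lo)
    return tuple(a + (b - a) * t for a, b in zip(keys[lo], keys[hi]))

mid = interp_point(13, keys)   # halfway through the move: (130.0, 200.0)
```

Every extra keyframe you set between the extremes is a chance to introduce the floating and jittering described earlier, which is why fewer, well-placed keys beat keying every frame.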

This brings us to inanimate objects. Boxes, books, wires, airplanes, ropes, and more. While they don’t exhibit the same motion as organic things, the rotoscoping for these is just as challenging. Luckily, there are even more tricks that you can apply when you encounter them! Logic dictates that simple geometric objects are easily rotoscope-able. While not far from the truth, it’s much more difficult than it seems. The problem stems from these objects having a very defined edge, which can truly show how good or bad a rotoscope job is. A swinging pendulum, for example, is an interesting rotoscope challenge. Or a box crate lying on the ground. These objects are best rotoscoped by using a combination of tracking and keyframing at extremes.

The pendulum example is one where you must rotoscope one extreme and keyframe it through the motion of its arc. If your compositing package allows for offset rotation of its pivot point, you’re set. Just animate the pivot point of the pendulum and line it up at the extreme ends. The computer will do the rest. Unfortunately, many desktop compositing packages don’t have this offset pivot point capability, and doing so will just keyframe the points in the roto, and not the actual rotoshape. The only way around this is to either set another keyframe in between your two extremes, or attach a rotate node after your rotoshape and keyframe that.
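The rotate-node workaround amounts to rotating every shape point around an offset pivot, so one animated angle drives the whole shape instead of per-point keys. A sketch with a made-up pendulum bob:

```python
import math

# Sketch of the pendulum trick: rotate the whole rotoshape about an
# offset pivot rather than keying each point. One animated angle then
# drives the whole shape. Coordinates here are hypothetical.

def rotate_about(points, pivot, degrees):
    """Rotate a list of (x, y) points around an offset pivot."""
    a = math.radians(degrees)
    px, py = pivot
    return [(px + (x - px) * math.cos(a) - (y - py) * math.sin(a),
             py + (x - px) * math.sin(a) + (y - py) * math.cos(a))
            for x, y in points]

bob = [(0.0, -100.0)]                      # bob hanging straight down
swung = rotate_about(bob, (0.0, 0.0), 90)  # keyframe the angle, not the points
# swung[0] is approximately (100.0, 0.0)
```

Keyframing the angle at the two extremes of the arc then gives you the in-between positions for free, which is exactly why the offset pivot (or a rotate node after the rotoshape) beats keying the shape itself.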

Our box crate on the ground is another problem entirely. Because it is made up of wooden pieces, you must cut out holes, which can become time-consuming when trying to place the holes and figure out which one has been keyframed already. Instead of this, create rotoshapes for each individual brace of the crate and overlap them. While possibly tedious, this will make the edges much nicer, and you won’t suffer from the jittery edge syndrome that occurs when holes disappear and corners don’t line up.

Ropes and wires are another matter entirely, and may require a combination of all your talents to complete a convincing matte. Often it’s easier to key out wires and ropes than it is to rotoscope them. Unfortunately, you may be stuck without a keyable solution, and roto is the only option. Unlike box crates and pendulums, ropes and wires have a mind of their own. They are erratic and can change direction quickly, which makes keyframing at extremes almost impossible. What’s the best solution? The most unfortunate one: keyframing frame by frame. But like before, divide your goal into separate goals of individual wires or ropes, and life will be much easier. It will still take a long time, though. You may be able to keyframe some extreme movements of the rope. Wires are slightly easier, as they’re usually taut. You can get away with creating six points on your rotoshape, maybe eight to ten, even. Two points for each end of the wire, and a set in the middle to allow for sagging. The other points may be necessary if your wire has kinks. Again, things like ropes and wires are probably best keyed, if possible.

Creating rotoshapes with motion blur is another topic in itself. Some compositing packages allow you to create motion blur on the fly, requiring you to just create a hard edged matte around your actor or object. In this case, most of the work can be done for you. Other times it is necessary to feather your rotoshapes to encompass the actor. When creating these rotoshapes it’s often beneficial to view your composite as you work, so you can see how the motion blur is affecting your comp. Rotoscoping out of focus objects follows the same methodology as with motion blur. Using either soft feathers or blurs while viewing your comp will accomplish the rotoshape.

Another problem which you may encounter involves rotoscoping an extremely dark shot, which contains no discernible edge. In this case, review what’s called for in the completed shot. You will have to guess as to where the placement of the roto should be, and it is better to track a viewable point than it is to haphazardly keyframe points. An offshoot of this is rotoscoping an extremely bright or blown-out shot, where your actor’s edge is bloomed over by sunlight. Again, decide what’s being placed behind the actor or object. It may be necessary to create a clean frame of the foreground without the bloom, which we’ll get into in the next chapter. For such shots, it’s easier to draw a simple curve through the bloomed-out portion of the edge, following the curve of the object.

Combustion

Hi guys, I am a 3D artist and I have learned Combustion. I love compositing! I want to be better in 3D as well as in compositing, so I am searching for Combustion tutorial sites. Are there any sites to learn advanced compositing in Combustion? :)

III. Organization and Workflow

Starting with post 1995 and up to post 2000, over the next several days I’ll be posting excerpts from a book I’ve been writing (in my copious free time). Feel free to discuss the items mentioned.

III. Organization and Workflow

The great thing about this book is that it’s not the final say on any technique. The following tips are among the methods a compositor will try to make a shot work. It is up to you and your problem-solving skills to figure out the best approach to making a shot perfect within your time constraints. There’s no wrong way or right way to composite, only a quick way and a slow way, and comp veterans will know the best quick way.

Managing your script via notes and observable nodes will make your life easier. The key is consistency. These are simple adjustments that can be made immediately upon starting a new composite. Elements brought in should correctly show which version they are. Live action plates can have abbreviated names to show what they actually are. Instead of BL0450_plate, naming it BL0450_greenscreen or BL0450_cleanplate will make your script much easier to navigate. I often abbreviate the names even further. Greenscreen becomes gs. Bluescreen becomes bs. A stabilized plate becomes stab. A dustbusted plate becomes db. Here are some other ones which you may find useful:

_rt – a retimed plate
_mt – an articulate matte or mask
_tag – a color channel, or several
_CC – a colorcorrected plate
_grade – a graded plate

Make up your own names and abbreviations that match the image you’re bringing in. The key is consistency.

In addition to naming conventions, the way you organize your script will drastically improve your speed as a compositor. Organized, coherent scripts can be easily navigated by other artists if you need to point out a necessary technique. It will also make you faster, as you’ll know where to go to fix a problem. Each compositor has their own method to their madness. It’s up to you to decide how best to organize your work.

Layer organization. Channel management. These two (or four) words represent quite a bit to compositors these days, as comps get bigger and bigger, and supervisors and directors want more and more. There must be a way to organize your scripts and trees into organized bits of information that can be readily adjusted days, weeks, or months down the line. Today I’ll explain my methodology of organization, which you may have seen on sites like VFXTalk. If you’ve delved a bit into my gallery, you’ll notice that most of the scripts I present are organized, or at least start off that way, and then they blossom into some kind of freak, mangled tree. Once in a while the tree gets trimmed, and it gets back into some semblance of order. Follow the link below to read how I organize my mind, and how I can get through some of the more difficult comps I’ve been tasked with.

Some of the terminology that I’ll be going over is pretty specific, so I’ve labeled the image below with the jargon I’ll be using.

In each comp, you’re given a set of instructions: given a live action plate, add the requisite effects to make it look like everything was filmed at once. This is usually harder in practice. Most compositors build their scripts from the top down, where you have your plate and add things to it, one thing at a time. This is usually pretty effective, since you have control over every single element that you pipe into the main trunk. Other times, building branches of effects and then piping those into your main trunk will be easier, more controllable, and often faster. The roots of your comp are usually the final outputs, but often you will have roots further up the tree as precomps, which will enable quicker interactivity during compositing.

My method is a combination of the two, as well as copious node labeling and note taking. This makes it easier for another artist to quickly take over a comp that I’ve worked on, or take select bits of my comp to use in theirs. Some artists which I’ve known in the past are very protective of their techniques, which sometimes work, and sometimes don’t. I have often taken a small script and weeded through the extraneous bits, and kept the important nodes for my own comp.


For integration of CGI and live action element compositing, I approach it in several ways. What is the goal of the shot? What pieces will I have to replace or matte out? What areas will I have to roto or key? Years of production experience will help you in this manner, and that is the reason that comp leads are comp leads: they have the years of experience to direct you in the quickest and most efficient manner. In organizing my comp, I will usually lump all CG elements into one branch. Each live action element (smoke, debris, fire, etc.) is in its own separate branch. This allows me to adjust the CGI as one unit, which can be prerendered if necessary to speed up compositing. Sometimes the number of CG elements will require individual masks from the live action plate, and that will occur in that branch before being joined with the main trunk. Take a look at this script. Can you figure out where the trunk is? What about the CG elements? Keep in mind that in this comp in Shake I use over nodes, where A (left input) is over B (right input). Even though it’s been two-plus years since I’ve seen this shot, I know exactly where my original plate is, and what I’ve done to it. Here’s where the vital information is.

One of the key things you should be able to do is organize in a coherent manner. This means lining up nodes and labeling nodes intuitively: a regular over will become CGoverPLATE or NEOoverSKY, and so on. Groups are beneficial as well. You should be able to group methods of working, like enclosing a sequence of nodes and labeling it Grain Work, or pulling several keys, putting them together and calling that FG key. Some compositing programs have the ability to automatically append the version numbers of your CGI inputs in your script. If yours doesn’t, it is often beneficial to do so by hand, so you can see at a glance what versions you have if anyone asks. When organizing, do it continuously. It can be a pain when your comp starts getting larger and larger, and you just run out of space between nodes to place new nodes. Continually improving the layout of your comp as you work is an advantage that will surely help. I’ve found that as I near the end of a comp and the deadline approaches, it often becomes easier to just place a node between two others, and slowly build out a bulge on the branch of the tree. This can get messy very quickly if there are numerous revisions near the deadline of a shot!

Another method of organization has surfaced, in addition to simple RGBA color management. Combustion has had this for a while with its RPF file format, allowing different bits of information to reside in other channels of the image, and now Nuke has a similar capability. It allows you to have up to 1024 channels of image information, each of them floating point. This allows artists to use any channel as a matte, not just the simple alpha channel included in RGBA layering. For example, my comp usually contains the RGBA layering, labelled as rgba.r, rgba.g, rgba.b, and rgba.a. For other layers there is usually a depth.z, a uv.u and a uv.v, and things like spec.r, spec.g, spec.b. Keep in mind that when you view each of these color channels, they are grayscale (of course) and can contain selected items in the scene. In addition to these regular channels, I can add my own, which I usually do if I’m dealing with a large script which involves multiple channels of information. When I create my masks I usually name them maskA, maskB, maskC, and so forth. Each of these masks can contain four channels, which I label maskA.r, maskA.g, maskA.b, and maskA.a. However, I don’t have to follow the r, g, b, a naming convention, and I could just as easily label them maskA.front, maskA.middle, and maskA.back.
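The layer-and-channel naming above can be modelled as a plain mapping, independent of any particular package. This is only a sketch of the naming scheme, not any application’s real API; the layer names follow the text, and the point is simply that any named channel can serve as a matte:

```python
# Sketch: multichannel layers as a plain dictionary, in the spirit of
# named channels beyond rgba. Only the names matter here; pixel data
# would live elsewhere.

layers = {
    "rgba":  ["rgba.r", "rgba.g", "rgba.b", "rgba.a"],
    "depth": ["depth.z"],
    "uv":    ["uv.u", "uv.v"],
    "spec":  ["spec.r", "spec.g", "spec.b"],
    "maskA": ["maskA.front", "maskA.middle", "maskA.back"],
}

def channels_of(layer):
    """All channels registered under a layer name."""
    return layers.get(layer, [])

# Any single channel can act as a matte, not just rgba.a:
matte = channels_of("maskA")[1]   # 'maskA.middle'
```

Descriptive suffixes like front/middle/back carry more meaning than r/g/b when the channel has nothing to do with color.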

There is no wrong way to comp, only easier and faster. Only production experience will get you to the easier and faster method, simply because you’re running on someone else’s schedule, not your own. Like in almost every industry, the acronym KISS is the most important thing to remember. Hopefully the above tips will help you in production, and make you a quicker and more efficient compositor!

Realflow interaction

Hi there.

I’m not sure if the "plugins" section is the appropriate place to post this topic but well… seemed close to me.

Ok. Here is the question:
I started to play with realwave [realflow] to generate some waves interacting with a geometry.
Here is the little problem.

I have a large scale realwave plane, (50*50) with high resolution (0.2 polygon size). And I have a sphere (realflow sphere) in the very middle of my scene.
I also have a spectrum, connected to my realwave (as my main wave generator)

However, the problem is that the spectrum waves are not "interacting" with my sphere. I mean, the ripples are not reflected (they don’t ripple back).

How can I create this?

Am I doing anything wrong?