Folio updates

Designer Rob Cordiner updates his folio, photographer Pawel Fabjanski updates his portfolio, and McFaul Studio has an update too.

Ryan McGinness Works @ Deitch Projects

Ryan McGinness’ day-glo explosions are at Deitch Projects to celebrate his new book “Ryan McGinness Works”. We dragged ass on posting this in time for the actual opening, but it’s up until April, so go get some!


Noob Nuke Expression Question

Hi Everyone

I am new to Nuke and was wondering if anyone can tell me how to write a simple random expression? I can’t find any examples on the net or in any forum, and the manual doesn’t really give examples of the correct syntax. I just want the "mix" value on any given Merge node to return a random value between 0 and 1.

I have got this far: Merge24.mix = random ….something…. am I on the right track?
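From the docs it looks like the expression function might be random(frame), which should give a repeatable pseudo-random value between 0 and 1 per frame. Here is a sketch of how I think it could also be set from Python (assuming the node really is called Merge24 and I'm reading the knob API right):

```python
import nuke

# Assumes a Merge node named 'Merge24' exists in the current script.
merge = nuke.toNode('Merge24')

# random(frame) is Nuke's built-in expression function; it returns a
# repeatable pseudo-random value between 0 and 1, seeded by the frame number.
merge['mix'].setExpression('random(frame)')
```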

any help? cheers

Ken Ralston interview

Hero Complex interviews vfx supe Ken Ralston about his favourite shot from Forrest Gump.

Watchmen press release info

Some info about MPC Vancouver’s fx for Watchmen. Before-and-after Dr. Manhattan pics at Imageworks’ site. A press release about New Deal Studios’ work on the film. A few details of Intelligent Creatures’ contribution. And a Houdini press release on the Rorschach shots.

Keyer for Fusion that learns from the picture

Hi chaps (PVH….)

I was going to post some stuff last week, but have been a bit tied up (and not in the good way). Here is my latest poke at the Furnace tool F_ColourMatte.

Oh, and PVH, don’t ask me any questions about Thomas or his work. The aliens, coins and sunrises bit is a bit long, I am sorry, but stick with it… :)

Anyway, please find below a description of a tool for Fusion (actually it’s OFX, so it’s for Nuke, Shake, Flame, Fusion etc.) which, like the Rig Removal tool, blew my socks (ok, sock) off, and I had to share. The post below describes the Furnace tool F_ColourMatte from The Foundry, which can pull a key from extremely hard to key images by learning which bits of the image are foreground and which bits are background.

The problem was, I tried the demo in Fusion, plugged in the inputs, and the key was done. A bit short for something so cool. So I dug a bit deeper, and it turns out this tool uses some incredible technology: it learns how to pull a key from your image, from the image itself, using some extremely clever stuff invented by a mathematician (and minister) born 300 years ago: the Reverend Thomas Bayes.

Inference Matting for Nuke, Fusion and Shake

A Keyer that learns from the picture you are working with.

There are many tools available today to key, cut and remove, create mattes, and manipulate images to extract foreground elements from the background. In many cases, with careful setup and attention to lighting, green or blue screens, and many other criteria, we can successfully remove and/or separate these elements.

The real headaches come when either the background elements are complex, or close or identical in colour value to the foreground, or when you have extremely fine detail (such as hair) that needs to be preserved. A combination of identical colour values with fine detail like hair can be an extremely difficult problem to solve.

Fig 1. Clean Green, fairly easy to separate and key

Fig 2. Harder.

This post outlines how we can use one of the Furnace tools to quickly and accurately extract the foreground element in the “Harder” example from the background using Inference matting. If I simply plugged the tool into Nuke, Shake or Fusion, this would be a very short tutorial, and you would not get the joy of knowing how cool the technology behind this tool is, and maybe how you can get the best out of it in future. So I will try to explain how it does it, and then you can try it for yourselves.

The research team at The Foundry, including Dr Bill Collis, Dr Simon Robinson and Dr Anil Kokaram, in collaboration with Dr Paul White of Southampton University, and Trinity College, Dublin, set out to tackle the “hard to key” problem with an extremely efficient solution called Inference matting.

To give you an idea what I am talking about, take a look at the following.

It is fairly easy to decide which of these pixels is within a range of values; you only really need to take a single look to decide that, yes, that is green or green-ish and it’s background. All pixels in this image that are not green must be foreground, and the job is done. Sure, we can add some other nice techniques for spill suppression etc., and we can combine keyer types for the slightly harder to key shots. But the end result is that we ask one question about the pixel colour, and base the result on that.
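To make that single-question approach concrete, here is a toy sketch (pure illustration with made-up thresholds, not how any real keyer is implemented):

```python
import numpy as np

def naive_green_key(image, g_min=0.5, dominance=1.2):
    """Classify each pixel as background if green clearly dominates.

    image: float array of shape (height, width, 3), values in 0..1.
    Returns a matte: 1.0 = foreground, 0.0 = background.
    g_min and dominance are arbitrary illustration values.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    is_background = (g > g_min) & (g > dominance * r) & (g > dominance * b)
    return np.where(is_background, 0.0, 1.0)
```

One look per pixel, one yes/no answer, and the pixel is never revisited; that is exactly the limitation the rest of this post is about.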

But what about the image below?

It is going to be a much harder job for a keyer to distinguish which part of this is foreground and which is background. Simply asking whether a pixel is blue or green, or within some range of values, is not enough, and will clearly not give us a good result. Ideally we need a way of separating the foreground and background elements based on knowing that that grey bit above is a hair, and the grey-ish bits behind are background. We do this with Inference matting and Bayesian magic**.

** It’s not really magic, but it is very clever.

Inference matting can separate foreground and background elements, using Bayesian analysis, from images you would not think possible. It does this by repeatedly analyzing the unknown area, dynamically sampling pixels to a greater or lesser degree as it gets closer to or further from the foreground element. It then asks the question:

“Is this part of the foreground, or the background?”

Now, if your keyer asked this question of that pixel just once, as it could in the easy green scenario, you would not get a very good result. What is required is a technique that can look at each pixel in a defined area, ask whether it is a background pixel, and make a decision on that; but as it continues to decide where the boundary and separation are required, it needs to get more information by asking:

“Do we still believe this pixel is foreground or background, based on all of the previous samples and data we have looked at so far?”

The Furnace Colour Matte tool does just that, and below is a simple (I hope) description of how. Take the white translucent hair over the white-ish background below. Tricky.

Fig3. White translucent hair over a light background

We are going to key this using Inference matting, but to understand what that is, you will need a little bit of an idea of what this Bayesian magic consists of.

Bayesian analysis does not assume something to be true or false right away; rather, it continually re-assesses the likelihood that a particular theory or rule is true or false, based on the latest set of results given to it. In other words, it allows you to solve a problem by changing your belief that your answer is correct, based on new evidence and on the experience and knowledge you have gained so far.
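For the mathematically curious, the updating rule being described is Bayes’ theorem: P(theory | evidence) = P(evidence | theory) × P(theory) / P(evidence). The P(theory) on the right is what you believed before the new evidence arrived; the left-hand side is your updated belief after it.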

In the case of deciding whether a pixel is part of a background or foreground element, a non-Bayesian approach might be to look at the pixel, apply some operation or comparison (yes, that’s green), and then move on.

The decision of what to do with that pixel was based only on the information available at the time that particular pixel was analyzed. With Bayesian-based matting, the pixel is analyzed, but our belief in the answer is not set in stone; that only happens after the probability of the solution (is this really a foreground pixel?) has been weighed against the questions asked of other pixels, with this new evidence taken into account.

A simple example of this might help.
(Based on, but horribly changed from, The Economist: “In praise of Bayes”.)

Rather than the complex analysis of grey hair and grey backgrounds, let’s first take a look at a simple question, one which will give us one result if we take a single reading, and a different result if we allow our existing beliefs to be changed by new evidence.

Imagine you are an alien (bear with me, this really might help!) and you arrive on our planet and see your first sunrise. You decide to analyze this event (for some reason that is not clear).

An Alien looking at a sunrise with his or her notepad (notepad not shown)

You have no idea whether the appearance of the sun is a random occurrence. So you decide to score the likelihood that it will appear again as equal to the likelihood of it not appearing.

We (non-aliens) would all immediately score the sunrise as a 100% probability of re-appearing, based on our previous knowledge. But our alien cannot assume this to be correct, and decides to wait and gather more data before he/she confirms that the sun will rise again.

If we applied this way of thinking to our hard to key example, we would get a poor result. Is this pixel grey? Yes it is, so it must be background. What we need to do is think about other possibilities and probabilities: maybe the sun won’t come up tomorrow, and maybe that grey pixel isn’t 100% background.

So if we agree that maybe the sun won’t rise, and maybe that grey pixel isn’t background, how does this affect the outcome of our results? As we don’t know, we cannot disregard this data. Bayesian analysis does the same thing: it does not assume it has all the data it needs at the first pass, and it uses new evidence to allow us to change what we believed to be true.

You (you are an alien again, keep up PVH) keep track of how probable the sunrise is by placing a white and a black alien coin (ahem) in a bag, to represent the 50% chance that it will happen again and the 50% chance that it won’t. You have a 50/50 chance of pulling a white or a black coin from your alien handbag.

The next day the sun rises again. This new information changes your previous assessment of the event, and your theory of what is going on: you add another white coin, and the probability of the sun rising has changed from yesterday’s 50% to 66% today (two white coins, one black coin). If you had made a choice based on yesterday’s probability, you would have made your decision about the sunrise without having all the data you needed.

Of course, as the days go by, your alienness begins to realize that this is approaching a 100% probability, but you did not know that to begin with, and could not make a real analysis without updating your evidence with new data and taking into consideration all the prior events. If you did not know the prior events, it would have been wrong to assume the sun was going to rise, and it is equally wrong to assume that a grey pixel is background just because you looked at it once and it was grey. ☺
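If you prefer code to coins, here is a tiny Python sketch of the alien’s bag (just an illustration of the updating idea, nothing to do with the Furnace internals):

```python
def sunrise_probability(white_coins, black_coins):
    """Current belief that the sun will rise, read straight off the bag."""
    return white_coins / (white_coins + black_coins)

# Day 0: one white coin (sun rises) and one black coin (it doesn't) -> 50%.
white, black = 1, 1
print(0, sunrise_probability(white, black))           # 0.5

# Each new sunrise adds a white coin, nudging the belief upwards.
for day in range(1, 11):
    white += 1                                        # the sun rose again
    print(day, round(sunrise_probability(white, black), 3))

# Day 1 gives 2 white / 1 black = 0.667, matching the 66% above, and the
# belief creeps towards (but never quite reaches) 100%.
```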

This is basically how Bayesian analysis works, and you can see how constantly re-evaluating how certain you are that a pixel is foreground or background is far more advanced than deciding “it’s green, so it must be background” and never revisiting that pixel to ask: are we still sure this is background, based on all the other pixels we looked at afterwards? In other words, it learns from the picture how to pull the key.


Random Fig a. Man trying to get his head around the sunrise explanation.

Bayesian-only matting solutions are computationally scary, as they have to do all of the magical re-assessment of “belief” for every pixel in the image, and for multiple regions within that image. There are seven unknowns at each pixel (RGB for the foreground, RGB for the background, and alpha), all of which we have to guess, assess and believe in. This really would take far too long, and a more refined solution is required.
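For reference, those seven unknowns come from the standard matting equation, C = alpha × F + (1 − alpha) × B. That is general matting theory rather than anything Furnace-specific, but a tiny sketch shows why a single pixel is so under-determined:

```python
def composite(fg, bg, alpha):
    """C = alpha * F + (1 - alpha) * B, applied per channel.

    fg, bg: (r, g, b) tuples; alpha: 0..1.
    We observe only C (3 known values) yet must solve for fg, bg and alpha
    (3 + 3 + 1 = 7 unknowns), which is why a pixel on its own tells us so
    little and the solver has to lean on neighbouring evidence.
    """
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(fg, bg))
```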

Inference matting takes a step back from the problem and only works on areas of the separation that need it, while intelligently adapting itself to look in greater detail at the areas we are specifically interested in.

Inference matting first asks: how far away am I from the region I am interested in, and do I really need to sample a lot of data from an area I know to be background? It then adaptively scales the amount of work it has to do, based on the distance from the inner and outer user-defined mattes, and gets to work solving the intricate problem of the unknown area we need to separate. It adds the data gathered intelligently by this process to the Bayesian analysis described above (but does not use alien coins) and gradually learns from its experience, moving towards an informed solution. Clever, isn’t it?

Fig 4. Clever. Work smarter not harder.

Fig. 4 depicts the current pixel. If I am a long way from the current pixel and the unknown area, I am going to sample a lot of data to add to the possible solution. If I am close to the current pixel being evaluated, and I know this area is getting closer to the known background, I am only going to sample a small area.
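As a toy illustration of that idea (my own sketch, not The Foundry’s actual sampling scheme), you could imagine the sample radius simply scaling with distance from the unknown region:

```python
def sample_radius(distance_to_unknown, min_radius=2, max_radius=32, falloff=64.0):
    """Toy adaptive sampling: look wider the further we are from the unknown
    area, and tighter as we approach it. All constants are made up.
    """
    scale = min(distance_to_unknown / falloff, 1.0)
    return int(min_radius + scale * (max_radius - min_radius))

print(sample_radius(4))    # near the boundary: a tight neighbourhood
print(sample_radius(100))  # far from it: the full, wide sample
```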

As we sample these areas, we add the data to the Bayesian analysis, keep asking the questions, and keep assessing whether we believe the results to be true (and whether they are still true).

By intelligently choosing the position and amount of data to be looked at, the resulting separation and key is much faster, and gives a superior result. The performance increase from Inference matting allows us to perform other operations, such as smoothing, which cannot be added to the Bayesian-only solution, as it would take three weeks* to process.

Fig. 5 and Fig. 6 below show the type of artifacts that can be left behind when not applying smoothing with Bayesian matting (as we didn’t have three weeks*), and when not carefully looking at the unknown area with adaptive scaling of the sampled pixels. You can clearly see the artifacts around the eyebrow with Bayesian-only matting.

Fig 5. Inference matting
Fig 6. Bayesian-only matting

So how do we tell Inference matting where to look for our tricky areas?

To start with, we need the artist to define a region around the “hard” to separate foreground and background elements. This is not a full roto by any means, but just a rough guide as to what is 100% foreground (in the example here, 100% face and head, not containing the unknown values in and around the hair), plus another simple mask to define the outer limits of the unknown (hair) area.

Fig 7. Simple inner and outer mask shapes

If you think about the user-defined masks above, we know that the inner mask is absolutely foreground, so we do not need to sample huge amounts of data when we are close to the boundary of this mask. Around the outer mask, which specifies the boundary of the unknown area we need to preserve (the hair), the tool will also adaptively scale up the number of samples the further we get from the local region.

Fig 8. The black “unknown” area of focus for Inference matting

Once you have supplied these very simple masks to the Inference matting tool, there are various controls to fine-tune the result, as you would expect; however, in this particular example, the default settings produced the result in Fig. 9 below.

Fig 9. Default-settings output of Inference matting (Furnace F_ColourMatte tool)

Notice that the extremely fine detail of the individual hairs has been preserved, without additional masks or keying, and without the need for spill removal or any additional process. The Inference matting tool did not immediately assume or believe that a pixel was “green” and move on, but evolved its belief based on the collection and evaluation of data, changing the solution where required, to produce jaw-dropping results.

Now, I have to say that you could, with a lot of work and a combination of tools and skills, pull a key from this image; I am not saying that this cannot be done. What I am saying is that working smarter, not harder, with intelligent tools will make your life much easier.

As always, ping me an email if you want to test it for yourselves.

/Flame on

Afterburner Effect?

Does anyone know how to create an afterburner effect for jet engines using Maya? If anyone could send me some example project files (Maya 2008 .mb or ASCII), that would be great and I could figure it out myself. It would be even better if you could send files of a few variations with different looks. I have attached some examples of what I mean. If you post instructions, could it be more of a step-by-step guide, preferably with screen caps? Any help is welcome and I would be very grateful. Thank you.

Attached Thumbnails: vlcsnap-2658717.png, vlcsnap-2671128.png, vlcsnap-2673983.png

Problem with particles

The first image shows what I sometimes experience when I’m using the particle layer. The second one is a problem I always experience: whenever I have my particles on one layer and my original footage on a separate layer, the particle system tends to have dark shading around the edges, and sometimes my particles are almost black. I usually avoid this by applying my particles as an operator on my original footage. How do I combat this alpha problem with particles?

Attached Thumbnails: 1.jpg, 2.jpg, 3.jpg

How to write out all passes in a single EXR file

I have all the diffuse, spec, refl etc. passes from the network, and I want to merge all these files into one EXR file. I think we can use a Shuffle, but I don’t know how to do this… please help.
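The closest I have got is something like this in Nuke Python (ReadDiffuse/ReadSpec are placeholder names for my Read nodes; the Layer, Copy and Write calls are the standard API, but I am not sure this is the proper way to wire it):

```python
import nuke

# Declare a named layer so its channels exist in the script.
nuke.Layer('spec', ['spec.red', 'spec.green', 'spec.blue'])

# Copy the spec pass's rgb into the 'spec' layer on top of the diffuse
# stream (the diffuse Read stays in plain rgba).
copy = nuke.nodes.Copy(inputs=[nuke.toNode('ReadDiffuse'), nuke.toNode('ReadSpec')])
copy['from0'].setValue('rgba.red');   copy['to0'].setValue('spec.red')
copy['from1'].setValue('rgba.green'); copy['to1'].setValue('spec.green')
copy['from2'].setValue('rgba.blue');  copy['to2'].setValue('spec.blue')

# Repeat the Layer + Copy step for refl and any other passes, then write:
# an EXR Write node with channels set to 'all' stores every layer in one file.
write = nuke.nodes.Write(inputs=[copy], file='/tmp/merged.%04d.exr')
write['file_type'].setValue('exr')
write['channels'].setValue('all')
```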

Writing out files in parallel? Why does Nuke recompute every time?

I have a tree of very complex nodes, to which I connect two Write nodes: one outputting 2K EXRs and another outputting a 1K .mov. The problem is that Nuke recomputes all the operations as each final node requests them, instead of writing in parallel; everything is computed again. How do I avoid this situation?

I mean, when I render out from the lighting stage I need to write out all the diffuse, spec, reflection etc. passes for the compositor, for my light-composite stage.

I am just trying to figure out how to write images in parallel, instead of recomputing everything again and again for every single Write node…
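The only lead I have found so far is nuke.executeMultiple, which renders a set of Write nodes together, frame by frame, rather than as separate full passes. A sketch (Write2K and Write1K are placeholders for my node names):

```python
import nuke

# Placeholder names; substitute your own Write nodes.
writes = [nuke.toNode('Write2K'), nuke.toNode('Write1K')]

# Render both Writes together over frames 1-100, one frame at a time,
# instead of executing each Write as its own separate render.
nuke.executeMultiple(writes, ([1, 100, 1],))
```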