The tech of ‘Terminator 2’ – an oral history

Illustration by Aidan Roberts.

Ever since James Cameron’s Terminator 2: Judgment Day was released in 1991, I’ve been reading about the many ways ILM, led by visual effects supervisor Dennis Muren, had to basically invent new ways to realise the CG ‘liquid metal’ T-1000 shots in that film, of which there are surprisingly few. Tools like ‘Make Sticky’ and ‘Body Sock’ are ones that I’d heard referenced several times, but I’ve always wanted to know more about how those pieces of software were made.

So, over the past few months, leading up to the re-release of Terminator 2 in 3D, I’ve been chatting to the artists behind the technology who were there at the time. This was when ILM was based in San Rafael, and when its computer graphics department was still astonishingly small. Yet despite the obvious challenges in wrangling this nascent technology, the studio had been buoyed by the promising results on a few previous efforts, including Cameron’s The Abyss, and by the possibilities that digital visual effects could bring to modern-day filmmaking.

For this special retro oral history, vfxblog goes back in time with more than a dozen ILMers (their original screen credits appear in parentheses) to discuss the development of key CGI tools and techniques for the VFX Oscar-winning Terminator 2, how they worked with early animation packages like Alias, and how a selection of the most memorable shots in the film – forever etched into the history of visual effects – came to be.

Gearing up the computer graphics department

Tom Williams (computer graphics shot supervisor): I actually worked full-time for both Pixar and ILM for most of T2. Then I realised that was really dangerous. I fell asleep driving home once, and freaked myself out, and realised you can’t really do that. So towards the end of T2 I went over to ILM full time. The way I got there originally was, I got invited by [visual effects producer] Janet Healy and [visual effects supervisor] Dennis Muren because I had worked at a company called Alias, which did modelling and animation tools.

This still from the documentary ‘Industrial Light & Magic: Creating the Impossible’ shows ILM crew members at work on Terminator 2.

George Joblove (computer graphics shot supervisor): Each single gig at ILM was a small step above what we’d done before. And we were fighting with the limited computing resources we had at the time. We had done The Abyss which was a big step forward in a couple ways. First of all, in demonstrating what was possible and achieving it. Second of all, working for Cameron who had that great vision for how it could be used in The Abyss. With that film, had we not been able to pull it off, there would have been ways to work around it. But I don’t think there was any such opportunity in T2.

Eric Enderton (computer graphics software developer): Terminator 2 was my first big movie. I saw The Abyss in the SIGGRAPH film show and thought: I want to work for those guys. Fortuitously the CG group had decided to hire their first tools writer. They had lots of software but it was all being written by the same people who were doing the shots. I was the first ‘software-only’ person in ILM computer graphics, which obviously was a huge learning experience and just an amazing time.

Jay Riddle (computer graphics shot supervisor): I was working at ILM for several years and had learned how to animate by sitting with John Lasseter when he was in the Graphics Group, which was part of the Computer Division of Lucasfilm at the time. They were using this vector graphics display that they used with their own in-house software that they’d written, and they had this frame buffer. They were still in our building, and then they moved out to one of the other Lucasfilm buildings while they were trying to spin off and get their own place, which they eventually did. And just as they left we were doing The Abyss, and then they were kind of fully gone by the time T2 came around.

Visual effects art director Doug Chiang, in addition to sketching many incarnations of the T-1000 at different stages, also performed digital manipulation fix-its to final shots. Image via ILM Facebook page.

Michael Natkin (computer graphics software developer): I showed up at ILM in a suit, which was hilarious. I remember Eric Enderton and George Joblove and a few other folks took me up to the Ranch for lunch and showed me around and I was like, ‘Sure. Hell, yeah. I’ll do this. Let’s make it happen.’ I knew a lot about computer graphics, but nothing about movies whatsoever, so there was quite a learning curve.

Steve ‘Spaz’ Williams (computer graphics animation supervisor): I was at Alias and had been pushing for VA – video animation – but Alias was into the ID which stood for industrial design. At the time, VA was this very small budding thing. Then ILM called and they had purchased a cut of Alias, and so the first thing they had me do was a ride they were doing at Epcot Center called Body Wars – it was a fly-through of the heart. Then James Cameron came to ILM with The Abyss and from there we went on to Terminator 2.

“I’d point to a page and say, ‘Oh, well that looks interesting. How are you going to do that?’ And they’re like, ‘Oh, we don’t know yet.’” – John Schlag

Stefen Fangmeier (computer graphics shot supervisor): My role on T2 was as a technical director. Meaning that I would concentrate on rendering and compositing rather than modeling and animation. Back then, TDs really needed to have programming experience and since I have a computer science degree, these tasks were a natural fit for me. My tasks were to support the animators in technical areas which included writing C-shell scripts for frame to frame processing. Many of the features for doing this are now included in commercial software packages, but back then, most of the procedural, frame-by-frame batch processing had to be created from scratch.

Geoff Campbell (computer graphics animator): [I was at MPC] in the summer of 1990 when I received a phone call from ILM who wanted to set up a telephone interview regarding a new film they were starting work on. It turned out that Steve ‘Spaz’ Williams had reviewed my portfolio and had asked for the interview. The phone call came one morning at 2am and woke me out of my sleep, catching me completely off guard. I remember slurring my speech while standing at the bottom of the landing freezing in my underwear. That was also before satellite phones and the static and delay of the transatlantic connection were almost comical.

Everyone on the ILM side was asking me serious questions about my abilities, schooling etc. but every now and then Steve would chime in with a question asking me things like did I have any pets? I told him I had a cat back in Toronto, and his follow-up question was getting into specifics like my cat’s name and what type of cat food I served him. A week later I got the job and started working on Terminator 2 on Halloween day. Looking back I realized that Steve was serving me up a short hand during my London interview. I had already gotten the job and the interview was just a formality.

Storyboard example of ‘Head through bars’ shot, and the final result.

Tom Williams: When I came onto the show, ILM had all the storyboards up because there’s some particularly tricky shots that they were mulling over. They were just stuck. They were all color-coded. I was looking at them, and was like, ‘Oh yeah, the greens, I could do those and the yellows, that would be fun. I think I know how to do that.’ Then there was the blacks. I was like, ‘Wow.’ There was ‘head through bars’, and some of the stuff where the surfaces would merge with each other like when the T-1000’s hook hand gets stuck in the car and then melts back into his shoe. And ‘head through floor’. They said, ‘We want you to help us with the black ones and all the things with a black dot on it.’ I was like, ‘Awesome.’ When someone says, ‘Yeah, we’re not sure how to do this,’ you can’t do worse. My failure was to meet their expectations, I think.

John Schlag (computer graphics software developer): On my first day at work, I came in the door, they sat me down, and they showed me the storyboards, and they went through this binder. And I’d point to a page and say, ‘Oh, well that looks interesting. How are you going to do that?’ And they’re like, ‘Oh, we don’t know yet.’ I’m like, ‘You people are batshit! You’ve got to be kidding me! You bid this job, and it came in, but you don’t know how to do the work?’ So that was a big wake-up call on my first day at work in real visual effects, to realise you know, you make this Hail Mary bid, and lo and behold it comes in, and you’re celebrating, and then terrified.

Michael Natkin: Actually, I also remember on my first day on the job, George Joblove took me down to watch them blow up the practical warehouse for Backdraft, which was amazing. It was a really neat time at ILM because it was right as the transition was happening from everything practical and optical to everything digital.

Wait, can we actually do this?

George Joblove: I think we had cautious optimism. It just felt like we should be able to do it. We knew that there were going to be some tough challenges to solve but at the same time it felt like a really fun project that would be a great challenge and would be a great thing to accomplish.

James Cameron discusses a scene with actors Arnold Schwarzenegger, Linda Hamilton and Joe Morton.

Eric Enderton: Terminator 2 was this huge show because it had like 50 shots. I mean, today you can’t get out of bed for less than 300 shots.

Jay Riddle: When Robert Patrick is the actor playing the T-1000, it looks like one thing, but when we’ve got this chrome and poly-alloy character moving around, it’s like something weirdly different, right? And they had to kind of flow into each other, and re-form.

George Joblove: Chrome, in those days, was something that, you know, that computers did well. The idea of making it liquid, making it walk like a person, integrating it into a live action scene completely convincingly – those were all real challenges. But making a chrome character was going to be a lot easier than making a furry one would have been.

Doug Smythe (computer graphics shot supervisor): At that time, too, the staff at ILM for doing computer graphics was pretty small. It was like a dozen or so people, and we had to grow the department very quickly, so there was a lot of hiring that had to be done. We had divided up the shots and the teams.

“It was Terminator 2 where I thought, ‘Oh my god, we’re going to buy a million dollars worth of computers for this – what a staggeringly large number.’” – Eric Enderton

George Joblove: Hardware and software back then were so expensive. I think if you look at the hard drive storage in 1990, a gigabyte of storage was $9,000. This was also still the age of SGI boxes because they made computers that were specifically optimised for doing graphics work and with the most bang for the buck that you could get. We had a network of SGI machines that included some large servers and then a bunch of workstations.

Doug Smythe: The tools that we had at the time, well, some things were inherited from Pixar when they split off. But we kept copies of the tools, or at least some of the tools that were developed at Lucasfilm, and then we had some sort of deal back and forth with Pixar, including to use RenderMan, because we would keep in touch with the guys and they were still next door for a while. And we collaborated to the degree that our separate businesses and legal departments would allow.

Filming a practical make-up effects scene at the home of Miles Dyson. ILM’s CG work would ultimately work hand-in-hand with that of Stan Winston Studio. Image via Stan Winston School of Character Arts.

Eric Enderton: It was a really rare situation where you knew the film was going to be big. That hardly ever happens. We worked on stuff that we thought was going to be terrible and it turned out to be great, and then some things that went more the other direction, but this was one you just knew it was going to be big. I got to read the script and  I just thought it was great. And it was Terminator 2 where I thought, ‘Oh my god, we’re going to buy a million dollars worth of computers for this – what a staggeringly large number.’ Those 50 shots took us something like six months. I mean, that was all we could do. When I got there the CG group was 12 or 15 people and we had our meetings in the upstairs kitchen in C building. Then by the time I left it was almost the whole company – ILM had grown to 300 people and the great majority of that was CG.

George Joblove: Everything was done step by step with a lot of tests along the way guided by Dennis Muren who had great faith in what we could do. He was also excited about the prospects of being able to do things that hadn’t been done before.

Mark Dippé (associate visual effects supervisor): I give Cameron a lot of credit, the pseudopod [from The Abyss] and the liquid metal man in T2 are the same principle – they are what I would call the classic, perfect digital character. It has all the aesthetic elements that a digital system can be, and excel at.

Out from under The Abyss

John Schlag: ILM’s big splash before Terminator 2 was The Abyss. You know, the water creature, the pseudopod. They called it, internally, ‘the water weenie.’ And they had this single monolithic software that created the creature. You make a spine curve and a series of edge profile curves. They would lock those. And then you can provide it with a Cyberware face, and it would stick that on the end. And then there were water ripples that it would add throughout the whole thing. It was like everything that you needed to do that one creature in one programme. And the programme did only that.

So one of the first things I did on T2 was get my hands on that, and started disintegrating it. Like, pulling bits of it out and turning them into separate tools. There are some places in T2 where the T-1000 gets shot, and you can see liquid metal under the police uniform, and it is sort of rippling and healing. I made a tool to do that, with [computer graphics animator] Jonathan French for the bullet hole healing, for example, which came out of pulling apart the different tools.

Mark Dippé: The pseudopod from The Abyss was an abstract alien creature that had no relationship to humanness or even livingness. But for the T-1000, the big question was, how can you make it move and behave as if it’s a human inside, whatever you wanna call it, even though Robert Patrick in this case is not a human, he’s a T-1000, he’s a machine, but that was the big concern.

“We even originally included a limp Robert Patrick had from a football injury. I noticed it in the initial test that we shot with him.” – Steve ‘Spaz’ Williams

Jay Riddle: I’d been working at ILM in the camera department before getting into digital effects. For our animation tools, there were a number of visits to Wavefront Technologies. Initially, Alias was kind of being ruled out, because it was considered a toy and not really a legitimate contender.

Part of that was because there were some personal relationships between the people that worked at ILM and Wavefront, so it felt like, ‘Oh we know them’, so if something goes wrong or we need something fixed or changed, they’ll respond to us, and as soon as we signed the Wavefront deal, that person who was at Wavefront left! So, it kind of took away the whole argument of why that was the great advantage. And in fact, from an artist standpoint, which I was doing in modelling and animating, Alias was much easier to use.

Members of ILM’s team consider the pseudopod from The Abyss. Image via Lucasfilm website.

Wavefront was definitely the industry leader at the time, and had a lot of great features, and a huge community around it, and a lot of people that were good at it, and so ILM choosing to go the Alias route was kind of, well, people just kind of went, ‘What? You’re going with Alias?’ But it really legitimised Alias as a piece of software.

And really, what we did with Alias was, we hired Steve Williams from Alias itself for The Abyss, and he animated a spine moving around, and all of the little cross section circles along the path of the spine, and then Mark Dippé had written some software to kind of place those along the path, and make sure they were skinning properly and not twisting, and things like that, and Scott Anderson was also involved in that, as were a bunch of other people.

From real to digital

Steve ‘Spaz’ Williams: We had five separate categories of shots for Terminator 2. Now, we had what was called the pseudopod team, so we could re-purpose the data from The Abyss. But as opposed to refracting, the T-1000 was reflecting. Then we had the morph team, you know, which was the more two-dimensional transformations. Then we had the death team, that was the whole death sequence at the end. And then we had the [Human Motion Group] team.

We had Robert Patrick come up to ILM and we painted a grid on him, a four inch by four inch grid all over his body, and he was like in a crucifix pose. We had him run, and he ran so much on the rubber mat we had that he ended up blistering his feet, to the point where we had to cover his feet up.

So, there was no real motion capture at that time, at all, so we shot him with two VistaVision cameras exposing simultaneously. One from the front on an 85mm lens, and one from the side on a 50mm lens, and they’re firing simultaneously. So I can look at frame one from the front, and that would match frame one from the side. From there I basically rotoscoped Robert’s walk.

Mark Dippé: It was really through hand digitization not only of his body data but of his movement data that we created a database with a virtual character. It was all hand-built.
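
For readers who want a feel for what that hand digitization involved: with a front camera and a side camera exposing in sync, a grid point identified in both frames gives two screen positions that share a vertical axis, which is enough to place it in 3D. The sketch below is purely illustrative – it assumes roughly orthographic views and invented coordinates, and is not ILM’s actual rotoscoping setup.

```python
# Minimal sketch of recovering a 3D point from synchronized front and side
# views, assuming roughly orthographic projection and shared vertical framing.
# Purely illustrative; not ILM's actual tool.

def triangulate_point(front_xy, side_zy):
    """Front view gives (x, y); side view gives (z, y); average the shared y."""
    fx, fy = front_xy
    sz, sy = side_zy
    return (fx, 0.5 * (fy + sy), sz)

# One grid intersection on the hip, digitized in both synchronized frames:
front = (0.12, 0.94)   # normalized screen coords, front 85mm plate (invented)
side = (0.31, 0.95)    # normalized screen coords, side 50mm plate (invented)
print(triangulate_point(front, side))
```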

Steve ‘Spaz’ Williams: We even originally included a limp Robert had from a football injury. I noticed it in the initial test that we shot with him. So I had to try and correct that in the bone walk. So when I went and I reanimated CC1 for real when we got the plate photography I made a lot of corrections to that, because he was supposed to walk like a machine.

Mark Dippé: It is one of those things where it’s a little subtle, but you can see it, and it just came out of the rotoscoping.

Robert Patrick with the grid painted on him.

Steve ‘Spaz’ Williams: So, we had what we called RP1 through to RP5. Robert Patrick – RP – that was the actual naming convention.

Mark Dippé: RP1 is the blob, an amorphous blob. RP2 is a humanoid smooth shape kinda like Silver Surfer. RP3 is a soft, sandblasted guy in a police uniform made out of metal, and RP4 is the sharp detail of the metallic liquid metal police guy, and then RP5 is live action.

Steve ‘Spaz’ Williams: Now, to get to all those RP versions, we had to break it all down. In the script it said he migrates from the blob version into a fully clothed version. That’s Cameron’s idea – so we had to translate that. So we thought, okay, we’ll break it into four stages. Let’s just do that in data, but the control vertices have to actually share the exact same properties. But they migrate in time. That’s essentially what the MO was at that point.

“Spaz was so good at it that he could literally click ahead of the menus appearing.” – Michael Natkin

Mark Dippé: We chose those ones because we felt, first of all it was hard to do any of this, but we felt those five stages were sufficient enough for us to achieve all the story ideas that were required. You know, he’s a formless blob, oh, he’s kind of a soft humanoid form. Oh, he looks kinda like a policeman. He is the policeman, to Robert Patrick.

Steve ‘Spaz’ Williams: If you look at Robert Patrick and what we call the RP4, which is just before it becomes the real guy, all that data of his head we collected using a cyber scanner. Then what we had to do is write an equation to actually smooth it all down and make it stupid, make it essentially like ice cream for RP2. So the data all had to be the same. You were not changing the amount of control vertices in the actual data. You had to run a smoothing algorithm over it.
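
The smoothing pass Williams describes keeps the control-vertex count and ordering intact and only moves positions. Here is a minimal sketch of that idea, assuming a simple neighbour-averaging scheme on a grid of CVs – an assumption, since the actual ILM algorithm isn’t documented here.

```python
# Hypothetical sketch of smoothing a grid of control vertices "in place":
# every CV moves toward the average of its grid neighbours, so the point
# count and topology stay identical while the detail melts away (RP4 -> RP2).

def smooth_cv_grid(grid, amount=0.5, iterations=10):
    rows, cols = len(grid), len(grid[0])
    for _ in range(iterations):
        new = [[None] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                neighbours = [grid[i + di][j + dj]
                              for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                              if 0 <= i + di < rows and 0 <= j + dj < cols]
                avg = [sum(axis) / len(neighbours) for axis in zip(*neighbours)]
                p = grid[i][j]
                new[i][j] = tuple(p[k] + amount * (avg[k] - p[k]) for k in range(3))
        grid = new
    return grid
```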

Reference stills of Robert Patrick in police uniform.

Michael Natkin: Spaz was so good with Alias. Now, Alias was quite slow back in those days, and it had all these menus that you had to use. You’d click the bottom of the screen and a menu would pop up. Then, you’d look through it for the item you wanted, and then you’d click on that. Often, that would launch a submenu, and then you type in a couple numbers and press return, right? But it was super slow. It would do some operation. Spaz was so good at it that he could literally click ahead of the menus appearing. So he would click on the bottom of the screen, then click where the menu item was gonna be, then click where the submenu was gonna be, then type in the numbers, press return, then turn around, chat with you for a minute, and turn back around, and the screen would have done what he wanted.

Steve ‘Spaz’ Williams: In the script, the T-1000 is going to walk out of the fire and he’s going to, the term people used was ‘morph,’ but in fact it was model interpolation. He’s going to interpolate into the fully clothed version of Robert Patrick. So [the shot was called] CC1 where he migrates from RP2, which is what we call the ‘Oscar’ version, a smoothed-down T-1000, but he shares the exact same dataset or control vertices as RP4. And RP4, again, is the fully clothed version with the wrinkles and buttons. What I did is I hid all the buttons and the badge and the gun, I hid it inside his body cavity, and grew it out in time. The press called it morph. In fact, it was called model interpolation.
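
Because RP2 and RP4 share the exact same control vertices, the ‘model interpolation’ the press called a morph reduces, at its core, to a per-vertex blend between the two datasets. A toy sketch of that idea, with invented coordinates:

```python
# Hypothetical sketch of "model interpolation": RP2 and RP4 share the same
# control-vertex count and ordering, so each frame is just a per-CV blend.

def interpolate_model(cvs_a, cvs_b, t):
    """Blend two CV lists with identical topology; t=0 gives A, t=1 gives B."""
    return [tuple(a + t * (b - a) for a, b in zip(pa, pb))
            for pa, pb in zip(cvs_a, cvs_b)]

rp2 = [(0.0, 1.0, 0.0), (0.2, 1.1, 0.0)]       # smoothed 'Oscar' version (invented)
rp4 = [(0.0, 1.0, 0.05), (0.25, 1.15, 0.02)]   # detailed uniformed version (invented)
halfway = interpolate_model(rp2, rp4, 0.5)
```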

Geoff Campbell: Steve [Williams] had brought me on to work primarily with him on the T-1000 and I believe my first task was to take his detailed Robert Patrick model and make a smooth ‘Oscar’ like version for the liquid metal transitions. Today in just about any software that task would be a twenty minute job with a smoothing brush, but in those days the software was very limited and even a sophisticated package like Alias was ridiculously crude by todays standards. We were also using NURBs with overlapping control vertices so modeling was a very complicated process. Also there wasn’t a shaded GL mode when sculpting and on top of that you could only move one control point at a time.

A final frame from the fire sequence.

They had something revolutionary at the time called Prop Mod which allowed you to select a CV and type in a number of CVs in the surrounding u and v direction that you wanted to move with a falloff, but to use it you had to click down on the CV and wait for 5 seconds before you could drag your point to its new location. It was so slow I never bothered to use it. So for me sculpting was the tedious task of moving one point at a time. I used to joke that it was as intuitive as sculpting with chicken wire. The hardest part was sculpting those points in wireframe and not seeing the shaded form. You could only see the results of your sculpting if you clicked on the ‘quick shade’ option where your screen would go black for 5 minutes and then start building your image on the screen one line at a time. That was reserved for when you were close to finishing your model and you needed to see what the hell you had done all day. It also forced you to take a coffee break.

My first animation on T2 was of John Connor’s foster mother body transitioning back into the T-1000 and stepping over John’s dead foster father. We didn’t have inverse kinematics or constraints so you had to keep track of all your body rotations and when you overshot a particular joint’s rotation it could affect the whole arm or leg so animating was much more time consuming than it is today. Match moves were also not as accurate so you often had to cheat the feet sliding to a ground plane in order to make them appear to be locked to the floor one frame at a time.

Doug Smythe: In the hallways of ILM, we still have the little maquettes that were made of the five stages of the T-1000 and it starts from this very amorphous blob, which was actually just a keyframed pose of a spline surface doing whatever it needed to do, to different stages of levels of detail of Robert Patrick as silver, and then finally the live action actor.

The various T-1000 maquettes on display at ILM.

But we didn’t have any way to go from the first to the second, or from the fourth to the fifth. So any time we went from blobby to the low-resolution humanoid version, that involved the morph. We got it as close as we could just in animation and then you let the morph take over. I think we had some sort of mesh dissolve thing so that we could take the higher resolution mesh, smooth it, and project it onto the lower resolution mesh so we could actually transform, then do a cut from one to the other. We may have used some morphs to help that, but I think we could do a geometric transformation as you get sharper and sharper silver detail.

Alex Seiden (computer graphics animator): One of the things I coded was an interactive lighting editor (called ‘led’) that would help artists position reflections. I rendered a ‘geometry buffer’ – pre-computed surface normals and positions – so that shading parameters and reflection planes could be re-positioned and quickly re-computed without having to do a full render. There were also some features that would allow you to place a reflection or specular highlight by clicking where on the image you wanted it to appear.
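
Seiden’s geometry buffer is recognisably the same trick modern renderers call deferred shading: store a position and normal per pixel once, then re-evaluate a cheap shading model whenever a light or reflection plane moves, without re-rendering. A rough sketch of the idea, using simple Lambert shading as a stand-in for the real shading parameters (this is not the ‘led’ tool itself):

```python
# Rough sketch of re-lighting from a pre-rendered geometry buffer: each pixel
# carries a surface position and normal, so moving a light only re-evaluates
# the shading model, never the full render.

def relight(gbuffer, light_pos, light_intensity=1.0):
    image = []
    for row in gbuffer:
        out_row = []
        for position, normal in row:
            to_light = [l - p for l, p in zip(light_pos, position)]
            length = max(sum(c * c for c in to_light) ** 0.5, 1e-6)
            to_light = [c / length for c in to_light]
            lambert = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
            out_row.append(light_intensity * lambert)
        image.append(out_row)
    return image

# A tiny 1x2 buffer: (position, normal) per pixel, values invented.
gbuffer = [[((0, 0, 0), (0, 1, 0)), ((1, 0, 0), (0, 0, 1))]]
print(relight(gbuffer, light_pos=(0, 5, 0)))
```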

Sock stories

Steve ‘Spaz’ Williams: We were using Alias version 2.4.1. I had come up with a method to build the T-1000 using separate four-sided b-splines. Then we hired a guy out of Toronto – Angus Poon who was an excellent code writer. If you have four-sided b-spline patches and the character is breaking, well, he basically came up with ‘Sock’ [which would be revised and called ‘Body Sock’], a piece of code that stitched things together where it was all breaking.

Michael Natkin: Later, this kind of thing would be done with NURBs, but before that they were just b-spline patches. The process would be that they would make a still model that was perfect, all the surfaces were blended. Then, they would make the skeletons, and they would animate the skeletons. Of course, when you animate the skeletons, the splines would separate, right? If you imagine that your body is made up of plates of rigid armour, and then you reposition the arms and legs, or whatever, the armour plates are gonna separate, and, or overlap.

What Body Sock was doing was giving us a way to blend those patches back together. There’s certain parts of the body, particularly one of the biggest ones is the crotch area – all of these surfaces had four edges. They were rectangular, but the geometry of where the legs come together into the torso, there’s just not really a great way to do that with four-sided patches. Body Sock would basically let you specify different kinds of blends.

“A TD had to know much more of what went on ‘inside’ the software/computer then in order to achieve the desired results efficiently.” – Stefen Fangmeier 

Eric Enderton: I worked on Body Sock, and Carl Frederick, Mike Natkin and Lincoln Hu were also a big part of it. The way to think about, imagine somebody’s knee. As you bend the knee there’s going to be a separation. If you just have a rigid upper leg and a rigid lower leg, you bend the knee, there’s going to be this break. Either that or interpenetration, or something funny is going to go on. The question was, how can we do that skeletal animation but then end up with a smooth surface? So, nowadays this is built into so much software that nobody even thinks about it, but at the time it was like, ‘Oh boy, how do we do this?’

I don’t remember how we arrived at this at all, but the name came from imagining, could we put a body sock, like a stretchy nylon fabric around all these individual animated pieces of the body and have it be a smooth surface then that would follow the whole body? That was the original idea.

It ended up that that’s not what we did, instead, what we did was stitching. All of this stuff was being modelled in uniform cubic b-spline surfaces, so, NURBs, only simpler.

There was a button, a menu item in Alias that would do this for two surfaces statically. It ignored the animation, it was just a modelling operation that would stitch two surfaces together. One of the things that they had asked me to do earlier was to make an animated version of that tool. I wrote a little programme that read in a scene, you gave it the names of two surfaces and it did this stitching operation on each frame and then wrote the animation back out.

I tried it on a plane next to another plane or something, and it seemed to work, so I gave it to Spaz and he picked it up and in 20 seconds he made an arm animation with a muscle bulge, and then hooked it up and typed in the command and tried it out and there was this arm flexing back and forth. That was my first real experience of an artist picking up a tool I had made and making this beautiful art with it that I could never have made myself. I had the sense that this artist was held down by chains that were the limitations of their tools and I had just cut one of the chains. What a great feeling. I was hooked.

What we did with Body Sock was make an automatic stitching tool that would go and stitch all the seams in the entire character each frame. To do this, you needed a Sock file that told you where each of those seams was. It’d name the two surfaces and which side of each, plus U, plus V, minus V, minus U. Somebody had to very carefully figure this out. For the simple seams the math is really simple. Then you can do something a little more complicated where you have more subdivisions on one side than on the other. As long as it’s an integer multiple it’s okay.

Then the corners, if you have four surfaces that come together at a corner you can sort of imagine this same math is not too bad. You just line up all the control points and average them. But if you have three surfaces or five surfaces or some other number coming together at a corner, which you do in a humanoid form, you have to have at least a couple points like that, the math is a lot less obvious. It took us a while of poking around to figure out how to do that.
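
For the simple seams Enderton describes, the stitch really is just lining up the two rows of boundary control points and averaging them so both patches land on the same curve every frame. A toy sketch of that single operation (equal subdivisions assumed; the corner and integer-multiple cases he mentions are left out):

```python
# Toy sketch of stitching one seam between two b-spline patches: the boundary
# row of CVs on each side is replaced by the average of the two, so the
# animated surfaces stay welded every frame. Corners where 3 or 5 patches
# meet, and unequal subdivisions, are not handled here.

def stitch_seam(edge_a, edge_b):
    """edge_a and edge_b are equal-length lists of boundary CVs (x, y, z)."""
    assert len(edge_a) == len(edge_b), "simple seams need matching subdivisions"
    welded = [tuple((a + b) / 2.0 for a, b in zip(pa, pb))
              for pa, pb in zip(edge_a, edge_b)]
    return welded, welded  # both patches get the same boundary row

upper_leg_edge = [(0.0, 1.0, 0.0), (0.1, 1.0, 0.0), (0.2, 1.0, 0.0)]   # invented
lower_leg_edge = [(0.0, 0.9, 0.0), (0.1, 0.95, 0.0), (0.2, 0.9, 0.0)]  # invented
new_a, new_b = stitch_seam(upper_leg_edge, lower_leg_edge)
```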

Mimetic poly alloy. Wuh?

Alex Seiden: The first thing I did on T2 was to write the ‘poly alloy’ shader for the T-1000. The mercury-like surface of the T-1000 required very specific reflections, but in those days we didn’t have ray-tracing available in a production renderer. So I came up with a way to let us do enhanced, controllable reflection mapping. TDs could place multiple reflection planes in the scene with the animation, and inside the shader I’d do a quick hit test to see if the plane was hit. It was a RenderMan shader.

It had some similarities to a shader that had been written at ILM recently before, for a Diet Coke commercial, oddly enough, but was all new code. Some of the best feedback was, no surprise, from Dennis Muren. In particular, he was really great at guiding the overall look, such as making sure we had enough diffuse shading mixed with our reflections. We called it the ‘pewter’ look. Without that, the T-1000 didn’t have any mass.
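
The ‘quick hit test’ amounts to reflecting the view direction about the surface normal, intersecting that reflection ray with a TD-placed plane, and using the hit point to look up a reflection image, then mixing in some diffuse for the ‘pewter’ weight. The sketch below is only an illustration of that logic – the production version was a RenderMan shader, and every name and constant here is invented:

```python
# Illustrative sketch of "poly alloy"-style shading: reflect the view vector
# about the normal, test the reflection ray against a TD-placed plane, and
# mix the map lookup with a little diffuse so the surface reads as pewter,
# not pure chrome. Not the production shader; just the idea.

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add_scaled(a, b, s): return tuple(x + s * y for x, y in zip(a, b))

def reflect(view, normal):
    return add_scaled(view, normal, -2.0 * dot(view, normal))

def hit_plane(origin, direction, plane_point, plane_normal):
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-6:
        return None
    t = dot(sub(plane_point, origin), plane_normal) / denom
    return add_scaled(origin, direction, t) if t > 0 else None

def poly_alloy(point, normal, view, light_dir, reflection_map, plane):
    refl_dir = reflect(view, normal)
    hit = hit_plane(point, refl_dir, plane["point"], plane["normal"])
    reflection = reflection_map(hit) if hit else 0.1   # environment fallback
    diffuse = max(0.0, dot(normal, light_dir))
    return 0.7 * reflection + 0.3 * diffuse            # the "pewter" mix
```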

Stefen Fangmeier: I also worked on the poly alloy shader. RenderMan and its shading language were entirely new to me since I had previously only worked with Wavefront and then mental ray at Mental Images in Berlin. The ability of RenderMan to allow for complex light and reflection interaction was essential in getting the look right.

One of the best examples of the poly alloy shader is in the shot of the T-1000 walking out of the flames. In order to have the flames reflected in the chrome as the T-1000 walks out, I placed cards into the environment on which flame elements were mapped on every frame and the shader used the transformation ability in RenderMan to calculate the proper reflections. We didn’t have ray-tracing back then due to the high rendering times it would have required but were able to achieve this effect just as well with this sort of clever cheat.

“The funny thing was that the hospital didn’t really have checkerboard floors. They were all white. Cameron thought that the black and white was much creepier looking, so we went with it. He had some poor guy stick black stickers every other tile.” – Liza Keith

The wipe to the actor at the end of the shot was also achieved using the object space of the model and animating a card over it to achieve the wipe in the shader. The transparency wipe was offset by a fractal in order to not make it a straight edge. This effect was used in several other scenes as well and required that the animator would closely match the CG geometry to the actor to which the T-1000 was transforming.
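
That transparency wipe boils down to comparing each surface point against an animated card position, with a noise offset so the reveal edge is never a straight line. A small hypothetical sketch of the idea (the fractal here is a cheap layered-sine stand-in, not the one ILM used):

```python
# Hypothetical sketch of a transparency wipe with a noisy edge: a card sweeps
# through the model's object space, and each point's opacity depends on which
# side of the (noise-offset) card it falls on.

import math

def fractal_offset(u, v, octaves=4, scale=0.08):
    """Cheap layered-sine stand-in for the fractal used to break up the edge."""
    total = 0.0
    for o in range(octaves):
        freq = 2 ** o
        total += math.sin(freq * (7.1 * u + 3.3 * v)) / freq
    return scale * total

def wipe_opacity(point_y, card_y, u, v, softness=0.05):
    """1.0 = fully CG T-1000, 0.0 = fully transparent (live action shows through)."""
    edge = card_y + fractal_offset(u, v)
    t = (point_y - edge) / softness
    return min(1.0, max(0.0, t))
```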

It should also be said that there weren’t a great deal of interfaces that allowed for immediate interaction with the rendering process as there are today. So, a TD had to know much more of what went on ‘inside’ the software/computer in order to achieve the desired results efficiently. Times certainly have changed.

‘Head through floor’

Eric Enderton: For this shot, the problem was that you had the face, and you had the floor, and they were two completely unrelated objects and we didn’t want to try to animate that merge, because it was going to separate from the floor. The topology was going to change, and it just would have been really hard, so what we wanted to do was somehow make a surface that lay over the face and the floor, like a cloth.

A still from the checkerboard floor

The way we did that was, I made a ray casting tool. The new surface is defined by an array of control points. We’d compute those control points by shooting rays from a starting surface – a plane or curved surface – towards the combined surface, placing each control point at the ray intersection. It was new and interesting, but very difficult to control.
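
In other words, each control point of the new surface is found by shooting a ray from a clean starting grid onto the combined face-and-floor geometry and keeping the hit. The sketch below compresses that idea by treating the combined surface as a simple height field, which is an assumption made purely to keep the example short:

```python
# Toy sketch of the ray-casting resurfacer: shoot a ray straight down from
# each control point of a flat starting grid and keep the first hit on the
# combined surface. The combined face-plus-floor is simplified here to a
# height-field function, an assumption made for brevity.

import math

def combined_height(x, z):
    """Flat floor with a face-like bump rising out of it (invented shape)."""
    return 0.3 * math.exp(-((x - 0.5) ** 2 + (z - 0.5) ** 2) / 0.02)

def resurface(grid_size=8):
    """Cast rays from a regular grid; each hit becomes a control point."""
    control_points = []
    for i in range(grid_size):
        for j in range(grid_size):
            x, z = i / (grid_size - 1), j / (grid_size - 1)
            y = combined_height(x, z)      # where the downward ray first hits
            control_points.append((x, y, z))
    return control_points
```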

Jay Riddle: Liza Keith was the animator on that shot. It ended up taking quite a bit of time to figure out how to make this nice smooth transition between a flat surface, to something that has a face that starts to appear in it, and then pulls together and the texture itself feels like it’s doing something that makes sense, and not like tearing apart and going in weird directions, and looking CG.

Eric Enderton: Liza is both very technical and artistic and so she was wrestling with it and we would see it in dailies. The process took overnight to render so you couldn’t see what your animation looked like until the next morning. There was one day when she just didn’t even come to dailies because she was getting so discouraged, but that was the day that it really worked, visually, and there was spontaneous applause in the dailies room. So when she came in I told her, ‘You got it, that worked!’

Liza Keith (computer graphics animator): So they wrote that little rays programme that did an intersection and created a surface from the intersection. We ended up having to make two surfaces. One going out, that we used to do the one going in. Spaz was the one that generated the model. It was a scanned 3D model, but those things weren’t working very well at the time, so he turned it into an actual model that you could use.

A human form develops as the T-1000 mimics the guard.

Michael Natkin: Liza had modelled several frames but we didn’t have any good way to turn that into an animation, and so I worked on some software – it was really almost just like glue code. It would read the Alias files as still frames, and turn them into animations, setting them up as keyframes so that we could render the in-betweens. But these were all things that you couldn’t do in the Alias interface very easily. A lot of what I would be doing was simple tools that basically made Alias work better, or help with the translation from Alias to RenderMan.

Liza Keith: I think my big contribution to that shot, beyond the actual animation, was the fact that I had written a shader to do a floor for me to do tests with, because we didn’t have the background plates at that time. So I just made a big square and put the checkerboard on it. The funny thing was that the hospital didn’t really have checkerboard floors. They were all white. And if you look in the background in some of the other shots, you’ll see that the floors aren’t checkered, but Cameron thought that the black and white was much creepier looking, so we went with it. He had some poor guy stick black stickers every other tile, every other piece of linoleum in the hospital hallway.
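
A stand-in checkerboard like the one Keith describes is a few lines in any shading language; here is the same idea in Python, purely for illustration:

```python
# The classic procedural checkerboard: alternate tiles based on the parity of
# the integer tile indices. This is the stand-in-floor idea, not ILM's shader.

def checker(u, v, tiles=8):
    return 1.0 if (int(u * tiles) + int(v * tiles)) % 2 == 0 else 0.0

row = [checker(u / 15.0, 0.2) for u in range(16)]
```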

Michael Natkin: It was very gratifying because it was all this stuff that, from an engineering point of view, was very simple to do. It was basically just moving data around. It wasn’t fancy computer graphics, really, but it was just making the artists, the TDs, and the animators’ lives so much better. It was taking things that would have taken them days to do manually. We could write the code that would just do it instantly for them.

Making Make Sticky

Tom Williams: The shot of the T-1000 coming through the bars with his head and body was done with a program we called ‘Make Sticky’. It was the precursor to the 3D paint system we did, which was a way of letting you have a geometry underneath a frame and being able to preserve a texture along that geometry. Everything at the time assigned textures to mesh vertices. You just can’t do it that way. You end up with all sorts of UV and parametric distortion and stuff where you don’t want it.


Steve ‘Spaz’ Williams: So the way Make Sticky worked was – you take the texture map, which is a flat, orthogonal image, so it has these theoretical points on it. And you say, ‘We want to take these points of a texture map, and we want to stick them to a piece of data.’ And regardless of what the data does, we don’t want the visual image to slide around on the surface. So we want to tack them down. In three-dimensional space.

Tom Williams: From the Pixar days, we had done all this subdivision work and NURBs and micro-polygons and stuff like that. We used a lot of the breakdown of where the micro polygons would end up into indices in the scanned film frame. It worked a lot better. Then, as it distorted, we didn’t get to do a tonne of lighting, re-lighting, but we got to do a little bit so it looked better than just a 2D surface with texture mapped on it. By the way, it’d probably work much better now to use the underlying meshes. Particularly if you use subdivision surfaces or something, where you can get more consistent UV parameter space. At that time, we used a lot of these triangular patches and very large meshes. Bezier meshes. It was literally a hardware limitation and a tools limitation that it wouldn’t let you go super finely grained and it would take forever to render.

Doug Smythe: To push the head through the hospital bars we had a cyber scan mesh of Robert Patrick and I took that geometry and I just literally pulled the control CVs around the shapes of the thing. Then we had Make Sticky. It would literally just remember the coordinates that a texture was on at one frame and then as the geometry moves around, you keep those same texture coordinates stuck to the geometry at those points. It was a very simple UV mapping technique, although we didn’t really call it that at the time.
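
In today’s terms that is baking texture coordinates at a reference frame: project each point into the scanned plate once, remember where it landed, and reuse those coordinates no matter how the geometry later deforms. A minimal sketch of the idea, with toy stand-ins for the projection and the plate (and per-vertex rather than per-micro-polygon, a simplification):

```python
# Minimal sketch of the "Make Sticky" idea: project each vertex into the
# reference film frame once, store that (u, v), and reuse it on every later
# frame no matter how the geometry deforms, so the image never slides.

def make_sticky(vertices, project_to_plate):
    """Bake per-vertex plate coordinates at the reference frame."""
    return [project_to_plate(v) for v in vertices]

def shade_frame(deformed_vertices, sticky_uvs, sample_plate):
    """Later frames reuse the baked coordinates regardless of deformation."""
    return [sample_plate(uv) for _, uv in zip(deformed_vertices, sticky_uvs)]

# Example with trivial stand-ins for the projection and the scanned plate:
verts_ref = [(0.1, 0.2, 1.0), (0.4, 0.5, 1.2)]      # invented reference vertices
project = lambda v: (v[0] / v[2], v[1] / v[2])       # toy pinhole projection
plate = lambda uv: (uv[0] + uv[1]) / 2.0             # toy image lookup
uvs = make_sticky(verts_ref, project)
colors = shade_frame([(0.3, 0.1, 1.1), (0.6, 0.7, 1.0)], uvs, plate)
```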

John Schlag: One of the things I wrote was that procedural displacement generator. And it would take a Cyberware head from a whole model, and you could define different kinds of displacements, you know, the easiest one just being a sphere or a cylinder. And because the bars in the jail cell gate that he passes through are in fact cylinders, at least the vertical ones, the horizontal ones very nearly so, match-modeling those and feeding those as displacement generators into this gave us some nice sploogy effects as the head passes through the door.

Jay Riddle: I have to say that for me the head through bars is one of the great shots in the movie, because it was one of those that people remember, and that little thing with the gun getting caught, and he twists his arm, just such a great little moment that Cameron came up with. I mean, Cameron was an effects guy in his earlier life. He understands it. He’s not afraid of it. It gave us such freedom to talk to him and not be afraid of screwing up.

Tom Williams: By the way, Make Sticky was originally called Make Me Sticky but that wasn’t really appreciated at the time.

Un-splitting heads

Michael Natkin: One of the other tools I built was this thing called Chan-Math that turned out to be pretty helpful. We used it for the scene where the T-1000’s head is split open [the practical Stan Winston effect was called ‘Splash Head’ or ‘Saucehead’]. We were building all of these one-off tools to say, make some object fall along a spline, or blend these two surfaces and so on.

And what I wanted to do was create a way where instead of writing the code from scratch, there’d be an intermediate scripting language that would let you express those things. A lot of things were done by naming conventions, like we might have a script that says, ‘Okay, find all the things that have ‘skin’ in the name, and make them follow something with the same name, but ‘bone’ in the name.’ So it was, ‘Find all the things that are named ‘belly skin’ and make them track to anything named ‘belly bone,’ or whatever that might be.

So the idea of Chan Math was to be able to make it so that the TDs could do that themselves. It didn’t really quite work out that way. The language wasn’t that friendly, but at least it made it possible for me to turn around sort of custom scripts for them pretty fast. It was used to sew Splash Head back together by saying, ‘Find this particular set of patches and then create keyframes out of the named individual objects.’
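
The naming-convention part is easy to picture as a small script: find every object with ‘skin’ in its name and bind it to the object with the matching ‘bone’ name. A hypothetical sketch, with an invented scene and object names:

```python
# Hypothetical sketch of the naming-convention trick: every "<part> skin"
# object is made to track the transform of the matching "<part> bone" object.

scene = {
    "belly skin": {"follows": None}, "belly bone": {},
    "arm skin": {"follows": None}, "arm bone": {},
}

def bind_skins_to_bones(objects):
    for name in objects:
        if "skin" in name:
            bone_name = name.replace("skin", "bone")
            if bone_name in objects:
                objects[name]["follows"] = bone_name
    return objects

bind_skins_to_bones(scene)
# scene["belly skin"]["follows"] is now "belly bone"
```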

Steve ‘Spaz’ Williams: Chan Math. Now that was a powerful piece of code. So what would happen is that Alias had a certain limit with what you could animate. So, every object that you stuck on the screen had a pivot point on it, and you could animate the object as a channel of math but you could also animate the control vertices of the object. So in other words, I have channel one, it’s just the object motioning around and then channel two, three, four are like different control vertices that are animating, doing their own different thing. Every time you did that, the pivot point of the object would traverse as well in 3D or rotate or whatever it was.

“When we were doing Willow several years before, it had this transformation sequence in it and they said, ‘Well, you’re the software guy, you figure out how to do this transitioning from one animal thing to another.’” – Doug Smythe

It meant that eventually everyone got totally lost in the animation process. So for example with the death sequence in T2, I was doing what they call control vertices animation. So, the pivot points would be flying all over the place. And eventually you end up kind of like, oh shit man, if I touch one thing, the whole fucker is gonna blow up. So Mike wrote this thing called Chan Math where we compress all the pivot points and throw them down to zero – zero – zero, so you’re starting again. But you were building up on the original control vertices animation. So essentially it was like the stepping stone of that time, that was Chan Math.

Run, T-1000! Run!

Geoff Campbell: My second animation shot was HG-1 (Hospital Garage) where the T-1000 pries open the elevator doors and runs full speed toward camera. I loved this shot because it had a lot of great elements to it. We were using command-line shape animation for the transitions from the smooth bodied T-1000 to the fully clothed police officer, or what we called the Robert Patrick version. I was given A and B camera roll of Robert Patrick running out of the elevator and I animated to the B roll which was a side view camera. In a way it was like rotoscoping the action but because we had specific timing to match I needed to slow into the full-on run, which was not represented in the B roll.

I would then add the body transitions like his face transforming along with his uniform and gun belt. You couldn’t see those transitions in Alias, but they would show in the overnight renders which meant you had to do a lot of guess work to time the transformations. My first dailies take was embarrassing because I made a last minute rotation to the upper body the night before, sight unseen, and when I saw it in dailies the next morning his upper body was swinging from side to side as if he was joyfully skipping through the hospital garage. But the final shot looks pretty great.

Get (in)to the choppa!

Jay Riddle: The last shot that I ever animated was on T2, and that was the shot in the helicopter when he bursts through the window – Joe Pasquale animated that part – but then once he re-forms, and you see his head, and he’s the motorcycle cop, you know, with the helmet and everything, he turns to the helicopter pilot and says ‘Get out’, that was my shot.

A still from the helicopter scene.

I went to Monterey, to Cyberware, and we scanned Robert Patrick there for that, and then we had a bunch of different software that had been written to skin these different cyborg scans, because none of them lined up perfectly.

There was just no way of registering that stuff, but Mark Dippé had actually come up with a way to try to help get the heads in roughly the right position when they were scanned so there was less work to do, and then I took all the different in between mouth and facial positions, and we did a placement animation but they actually had motion blur applied to the movement of the meshes so it looked more lifelike let’s say.

I think the timing of it, it’s funny, in retrospect, the timing of it where he just kind of looks and goes ‘Get out’ was a little too much of a gap between the two words.

Alex Seiden: For the poly alloy shader, you could put alpha maps if you wanted to do some kind of cutout. I remember using that in the shot where the helicopter pilot’s face is clearly seen as the T-1000 ‘pours’ himself into the helicopter, although truth be told, it’s really cheated a lot closer to the T-1000 than it would have been in reality! (The convex shape of the T-1000’s head, of course, making images appear farther away).

Real steel

Doug Smythe: In the steel mill one of my favourite shots is when the T-1000 does that instant turn around shot where he morphs back through himself. My other favourite one is a very slow, full-face turnaround of the T-1000 going from Sarah Connor back to his male character. I think that was really nice because it let us just sort of showcase that one morphing technique in and of itself. By that time, we had gotten I think pretty good at doing those kinds of shots, and we knew what kind of thing would artistically look good.

When we were doing Willow several years before, it had this transformation sequence in it and they said, ‘Well, you’re the software guy, you figure out how to do this transitioning from one animal thing to another.’ Back then we had considered trying to do either a 3D approach or some sort of 2D approach with elements and we said, ‘Well, hair is really, really hard. We don’t know if we can do that. And realistic animals are really, really hard. We don’t know if we can do that.’ So, it was like, ‘Okay, we’ll make puppets and shoot actors.’ And they just needed a way to do the transition between the two, so that’s how morphing came about. I like to say that’s the thing that let me keep my job.

So, for these shots in T2 we were using MORF, which I had made for Willow. We used it also on Indiana Jones and the Last Crusade. It ran on Sun 3/180s or 280s, connected to the Pixar Image Computers. The Suns were for the file management type UI, and then the actual dots and grids and artist UI was displayed on the Pixar Image Computer, but the Sun was driving it all, so I had to create a menuing or an icon library to run on the Pixar as well as the actual image processing stuff.

“It’s the one that nearly killed me, that Terminator melting at the end. We worked and worked on it and they ended up picking a version that was not the most recent one. Josh Pines did some magic with scaling and printing to make it, barely.” – Tom Williams

Then, as part of ramping up to a larger crew for Terminator 2, I ported the code from that to run on the SGIs that we had gotten. We had a few SGIs from before that were much smaller, but we got the really powerful 340 VGXs, so the morph team, we all got those – massively powerful, in those days. I think they’re almost as powerful as our phones, now. But, back in the day those were monster little things. So we ported that and also all of our image processing tools over to run on the SGIs. There were some that I ported previously for other work, but it was absolutely imperative to have it run on the SGIs for the project because we couldn’t afford to have Pixar Image Computers for every artist that was going to do that. I don’t know if they were even still being made at that time.

A scene from Willow which utilized Doug Smythe’s MORF tool.

The ILM MORF approach was always a double grid system, so there would be two windows, the source and the destination. You would drag the corners of the mesh, I think it was even a fixed size. I may have added the ability to change how many grid points there were when you started it up, but I think we pretty much always used the default. Then you would go to various key frames throughout your sequence and then drag the grid points around and we’d sort of learned, if you have something that’s big to change into something that’s small, how to work things. You would try to make it so that you never had one thing be fairly flat. You always pull it to one extreme on one side and then push it to the opposite extreme on the other one because you would get fewer stretching artefacts and fewer overshoot sampling problems. Because this is all based on bicubic spline evaluation and because of that there were ringing problems.

So if you had a thing where you had too big of a gap from one point to the next point, and then a very small gap after that, the spline interpolation would overshoot. We had to have a way to deal, in software, with what happens if you get an overshoot from the ringing. It’s just a boring implementation detail but it is something we had to work with.

Then we had an alternate view where we kept the same image on the top, but the bottom image became sort of a timeline-type thing where you could pick any of the control dots at the corners of the mesh on the thing and see the timeline of how that would move and adjust it with a few key frame points to adjust the timing curve of how that particular point would change from the source colour to the destination colour.

I had overlaid grayscale dots on that upper grid so you can sort of see at the beginning all the dots are black and whatever thing changes first, that will start getting lighter and lighter grayscale values because it’s showing that it’s progressing further towards the destination colour and then by the end frame, they’re all white dots. But you could look at just the subtle shading of the grayscale on the dots to see how far along different parts of the transformation were progressing.

You could have this part change earlier than that part, or even if you’re really clever, you could sort of do an animated wipe kind of thing by carefully staggering the times and you can see if your blends were smooth or if you had any outliers that would cause a pop or anything like that. So it was pretty crude and rudimentary but it worked for what we needed it to, and that basic idea was unchanged from the Willow days. It worked exactly the same way. So, for T2 it was basically porting to the SGIs and fixing the few things that we found along the way.
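
Stripped to its essentials, the double-grid morph warps the source image forward and the destination image backward through a control grid, then cross-dissolves between them with a per-point timing value. The sketch below substitutes bilinear interpolation and nearest-pixel sampling for the bicubic splines Smythe describes (so no ringing), purely to keep the idea short:

```python
# Simplified sketch of the double-grid morph: warp the source and destination
# images through small offset grids, then dissolve between them using a
# per-control-point timing value. Real MORF used bicubic spline interpolation
# of the grid (hence the ringing discussed above); this uses bilinear
# interpolation and nearest-pixel sampling to stay short.

def bilerp(grid, u, v):
    """Interpolate a value from a small 2D grid at normalized (u, v)."""
    rows, cols = len(grid), len(grid[0])
    x, y = u * (cols - 1), v * (rows - 1)
    i0, j0 = min(int(y), rows - 2), min(int(x), cols - 2)
    fy, fx = y - i0, x - j0
    top = grid[i0][j0] * (1 - fx) + grid[i0][j0 + 1] * fx
    bot = grid[i0 + 1][j0] * (1 - fx) + grid[i0 + 1][j0 + 1] * fx
    return top * (1 - fy) + bot * fy

def warp_sample(image, offsets_u, offsets_v, u, v):
    """Sample an image (list of rows) at a grid-warped position."""
    su = min(max(u + bilerp(offsets_u, u, v), 0.0), 1.0)
    sv = min(max(v + bilerp(offsets_v, u, v), 0.0), 1.0)
    h, w = len(image), len(image[0])
    return image[int(sv * (h - 1))][int(su * (w - 1))]

def morph_pixel(src, dst, warp_src, warp_dst, timing, u, v):
    a = warp_sample(src, warp_src[0], warp_src[1], u, v)
    b = warp_sample(dst, warp_dst[0], warp_dst[1], u, v)
    t = bilerp(timing, u, v)          # per-control-point timing curve value
    return (1 - t) * a + t * b
```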

John Berton, Jr. (computer graphics animator): My job on Terminator 2 was to create the transitions between the CG T-1000 and the live-action actor, Robert Patrick. We used the now-famous MORF program for this, and this was interesting because even though the transitions were spectacular, the idea was that you had to believe it was really shape-shifting in 3D, without calling attention to the effect being used. I was only the 2nd or 3rd person to use this program besides Doug Smythe, who wrote it, so it was very much a developing tool and it was real cool to work with Doug to make it do what we wanted. He added color keys to the interface and opened up the program to allow time distortions more easily and that made it possible to use it to really animate the transitions.

This was most effectively used in the “Turnaround” shot, when the T-1000 is thrown into a wall and instead of turning around, he morphs back through himself to face forward and charge back into action. It’s a terrific idea, one of those shots you live for because you know if you get it right it will be great. While we were making the shot and had it more-or-less working I proposed that we go one step further and animate the transition to make it look more intentional and stylish, with the front of the shirt zipping up and the wrinkles all turning at the right time, so it really looked 3D. It also enhanced the story point that the T-1000 was a bit of a show-off, which set him up for his eventual fall. It’s always been one of my favourite shots because we developed tools that helped us tell the story better and make a good visual effects shot into a great one.

Geoff Campbell: A shot I animated was cut from the original film release and I believe put back into the director’s cut. It was a shot of the police-booted T-1000 malfunctioning and melting into a metal warehouse floor. That shot took me weeks to animate because the boots were constantly sliding against the film plate. It also involved crude shape animation of the sole and heel of each boot melting and reforming as it took each step.

Death squad

John Schlag: I was on the ‘death squad’ – we were assigned to kill the T-1000 at the end of the show, in the lava pit. For the T-1000 dying in that vat of molten steel, [visual effects art director] Doug Chiang would do an animatic, just a pencil and paper animation. There was some very strange, weird stuff going on in that sequence. Then Steve Williams would sit down and make it real in a 3D world.

The amount of geometry being rendered there was just hideous, for that day. You’ve got the character sort of throwing up himself, he’s inverting himself, by tearing his head back, and his guts come out through his mouth, and he’s melting down, basically. So that whole series of four or five shots, something weird and very strange is happening in each one of them. So there was a fair amount of work just to get the models through the size of machines that we had at the time, and that partly led me to rewrite the rendering scripts.

Everything in a feature film, if it’s going to be realistic, it has to be rendered with motion blur, right? But we were having troubles. With motion blur, at the very least, you need to know where the model is at the beginning of the frame, and then where it is at the end of the frame. And if you know the path between the two, then you can take your samples and make things blurry appropriately, as if a camera shutter were open for half of the frame time, typically. And we were having trouble just sticking together the beginning and ending geometry and getting it all through the pipeline and into the render, so that’s what caused me to roll up my sleeves and rewrite the render scripts, to make that work.
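
The bookkeeping Schlag is describing – geometry at shutter open and shutter close, interpolated across the exposure – looks something like this minimal sketch, which assumes a 180-degree (0.5) shutter and linear motion:

```python
# Minimal sketch of motion-blur bookkeeping: given each vertex's position at
# shutter open and shutter close, produce interpolated positions for a handful
# of time samples across a half-open (0.5) shutter.

def shutter_samples(verts_open, verts_close, num_samples=4, shutter=0.5):
    samples = []
    for s in range(num_samples):
        t = shutter * s / max(num_samples - 1, 1)    # sample times in [0, shutter]
        frame = [tuple(a + (t / shutter) * (b - a) for a, b in zip(va, vb))
                 for va, vb in zip(verts_open, verts_close)]
        samples.append((t, frame))
    return samples
```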

“In order to change a disc, I would have to go outside, run downstairs to the basement where the computer room was, and open up one of those platter drives, pull it out, put some in a storage place, put a different platter in there, and turn it, and go back upstairs just to see if something was on that drive.” – Jay Riddle

Michael Natkin: Tom Williams and I worked together on that final part of that melting scene. He’d come up with the idea of using fractals to basically displace the image of the Terminator as it dissolved into the molten iron. We were working like 80 hours a week. Then, Tom got really sick, and he had to disappear for a few days. Tom’s idea was using these random fractals, and Dennis Muren was like, ‘Yeah, that’s pretty cool. See that one a little bit over there? I want it to move left and I want that one to dissolve faster, and I want that one to swirl differently.’ And the problem was these were random things. There was nothing controllable about this fractal technology. So all I could basically do was keep changing random seeds and do various image processing and try to get it where he wanted it. You couldn’t direct the individual bits of molten metal.
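
As a loose illustration of why those notes were so hard to satisfy: with seed-driven fractal noise, the whole pattern is a function of the seed, so "move that blob left" is simply not a knob that exists. The sketch below is generic value-noise displacement written for this article; none of these function names come from ILM's tools.

    import numpy as np

    def fbm_noise(shape, octaves=4, seed=0):
        """Cheap fractal (fBm-style) noise: a sum of random grids at
        decreasing scales, entirely determined by the seed."""
        rng = np.random.default_rng(seed)
        h, w = shape
        noise = np.zeros(shape)
        for o in range(octaves):
            step = 2 ** (octaves - o)  # coarse grid for the low octaves
            grid = rng.random((h // step + 2, w // step + 2))
            up = np.kron(grid, np.ones((step, step)))[:h, :w]
            noise += up / (2 ** o)     # finer octaves weigh less
        return noise / noise.max()

    def fractal_displace(image, amount=8.0, seed=0):
        """Displace pixels by a seeded fractal field. Changing the seed
        changes the entire pattern at once, which is why individual bits
        of 'molten metal' could not be directed independently."""
        h, w = image.shape[:2]
        dx = (fbm_noise((h, w), seed=seed) - 0.5) * amount
        dy = (fbm_noise((h, w), seed=seed + 1) - 0.5) * amount
        ys, xs = np.indices((h, w))
        xs = np.clip((xs + dx).astype(int), 0, w - 1)
        ys = np.clip((ys + dy).astype(int), 0, h - 1)
        return image[ys, xs]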

Every day, we’d show another take. Here’s another take. So, my house was in Oakland, but I was sleeping in a motel in San Rafael just to shave like 20 minutes off my commute each way. Finally, we get a shot. We’re shooting them low-res every day because they take a long time to render. Tom’s not there, I’m about to die, and Dennis is like, ‘That’s it. Final that.’ I’m like, ‘Great, I’ll render the high-res version of it.’ And he says, ‘Nope. That’s it. Right there. That take. That’s it. We’re shipping it.’ I’m like, ‘But Dennis, it’s low-res. It’s 640. We gotta render this in 1280.’ And he’s like, ‘Nope, we’re shipping that.’ So if you watch the original movie, you will see there’s one of the cuts where it gets a bit fuzzy.

Tom Williams: It was probably the hardest shot, it’s the one that nearly killed me, that Terminator melting at the end. We worked and worked on it and they ended up picking a version that was not the most recent one. The one that they took, because we were in a rush, was pretty low resolution. The final resolution we rendered back then would be nothing nowadays. I think it was probably a 1K render. [Scanning supervisor] Josh Pines did some magic with scaling and printing to make it, barely.

Michael Natkin: And then I was TDing some of those scenes because we were just out of time and it was all hands on deck. I would send them the scenes, and every day they’d come back in dailies and I was trying to fix the colours. And every day they’d come back in dailies with the colour changed back again, and I’m like, ‘I don’t understand what the hell is going on here.’ I’d make it more red and it would come back more blue. And I’d make it more red and it would come back more blue. Finally, I went to talk to the film scanning and developing people, because in those days dailies were still done via film developing. We would take the frames, one file at a time, and send them to the optical department. Then, some magic would happen, and then it would be on film in the theatre the next morning.

It turns out that there was a guy whose job it was to colour grade every frame for every scene, and so he was very thoughtfully, carefully colour correcting it back to how he felt it was supposed to be. I had no idea. There was lots of this happening on the optical side. We just didn’t have a clue about it.

When film was still film

George Joblove: Terminator 2 was shot on film and so we still had to scan all that and get it back out onto film again after our CG work. As we first started building a department at ILM, one of the things we inherited from the pre-Pixar group is that they had hand-built a 35mm scanner-recorder, a laser scanner-recorder. So it was a single device that could be reversed. You could put a piece of film in there that had been shot normally and scan it with a laser and digitise it. It had a CCD. Or you could reverse it and use the laser to expose raw stock. So that is what we started with and then we had some other scanners and recorders, too.

FilmScanner
ILM’s film scanner, also known as the Kodak Scanner. The developers of the scanner won a Sci-Tech Award. Image via Jason Smith.

What we definitely had to do before we could do anything else was demonstrate that we could take a piece of 35mm film that had been shot in a camera, scan it in, do nothing to the image, and put it back out onto film in such a way that the output film perfectly matched the input film, so that you could intercut them and not realise that some of those frames had gone through a little detour. Once you could do that, you knew you had the power to do anything you were able to do in the digital realm while the image was there.

Interestingly, on The Abyss, with only one exception, all of the shots were optically composited. So the water tentacle would be rendered against black out to film. With each shot, we would hand the pieces of film over to the optical department and they would do an optical composite. By the time we got to T2, I think digital compositing had gotten to the point where it all worked. It was cost effective and the results were great. And that became the era when people started forgetting about matte lines, which used to be an issue when doing optical composites.

Jay Riddle: I know that even before T2, the technologies were almost always proprietary back in those days, so nobody really knew exactly what anybody else was doing, especially in relation to resolution. So we’d just let people say things like, ‘Oh, if you really want to be cutting edge, you’ve got to do 4K,’ because that’s what people were kind of thinking at the time, but we were doing 2K, or not even that, sub-2K, for the longest period of time. The thing is, we had really good sharpening algorithms, and techniques for how to get stuff onto film were being refined.
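
Unsharp masking is the classic example of the kind of sharpening Riddle alludes to: subtract a blurred copy of the image from itself and add the difference back in. The sketch below is a generic version written purely for illustration, assuming an image normalised to the 0-to-1 range; it makes no claim about the specific algorithms ILM used.

    import numpy as np

    def box_blur(img, radius=1):
        """Tiny box blur built from shifted copies (no SciPy needed)."""
        out = np.zeros_like(img, dtype=np.float64)
        n = 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                n += 1
        return out / n

    def unsharp_mask(img, amount=0.7, radius=1):
        """Classic sharpening: boost the difference between the image
        and a blurred copy of itself (img assumed in the 0..1 range)."""
        img = img.astype(np.float64)
        return np.clip(img + amount * (img - box_blur(img, radius)), 0.0, 1.0)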

“Everyone was right there building the road beneath your feet so you always felt optimistic that the work would get done.” – Geoff Campbell

Josh Pines on the recording side of things was instrumental in that. George Joblove was doing things with compression. We were using an 8-bit log file format and George came up with how to convert something from the larger linear film space down to this 8-bit log format. Back in those days when bandwidth of how fast you could get things on and off disc was important, that was huge. I mean, that made the difference between being able to get something done in a day, and having to take two or three days to record it.
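
To give a rough sense of the idea behind that conversion: a log curve spends the eight available bits far more evenly across exposure than a linear mapping does, so a frame takes much less space on disc and much less time to move around, at the cost of precision you mostly do not miss. The sketch below uses a generic log curve with made-up black and white reference values; it is an illustration of the principle, not George Joblove's actual formula.

    import numpy as np

    # Illustrative reference values only -- not the parameters ILM used.
    BLACK = 0.001   # smallest linear value, mapped to code 0
    WHITE = 1.0     # largest linear value, mapped to code 255

    def linear_to_log8(linear):
        """Pack floating-point linear film values into 8-bit log codes."""
        linear = np.clip(np.asarray(linear, dtype=np.float64), BLACK, WHITE)
        code = 255.0 * np.log10(linear / BLACK) / np.log10(WHITE / BLACK)
        return np.round(code).astype(np.uint8)

    def log8_to_linear(code):
        """Expand 8-bit log codes back to linear values for compositing."""
        return BLACK * (WHITE / BLACK) ** (np.asarray(code, dtype=np.float64) / 255.0)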

In order to change a disc, I would have to like, go outside, run downstairs to the basement where the computer room was, and open up one of those platter drives, pull it out, put some in a storage place, put a different platter in there, and turn it, and go back upstairs just to see if something was on that drive.

Into the digital realm

Michael Natkin: We were doing digital compositing, but it was all done with these crazy command-line scripts. You’d load the image into shared memory and then say, ‘Okay, composite into buffer A, and then load your matte into buffer B,’ and then say, ‘Okay, composite image C over buffer A using matte B,’ and it was just this ridiculous set of operations.

terminator_2.bg_.4
Scenes like this could be composited digitally instead of via optical means, thus avoiding matte lines.

Doug Smythe: Compositing really involved a lot of writing of shell scripts. We’d write shell scripts, and each compositing operation, whether it was loading an image, or saving it, or doing channel arithmetic, or merging layers, or blurring, or anything like that, was a separate command-line programme. We had this shared memory segment that we kept around, which we called the virtual frame buffer. It stuck around between processes, so we just had this memory segment that was locked, and there was a key that every programme got because it was stored as an environment variable. So the programmes would load stuff into the frame buffer or do math on the stuff in the frame buffer, and then finally, when it was all done, we would save stuff out from the frame buffer onto disc and then lather, rinse, repeat.
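
To make the shape of that workflow concrete, here is a toy Python stand-in for the pattern Smythe describes: a persistent set of named buffers that small, single-purpose operations read and write, driven by a scripted recipe. Every name and operation here is invented for illustration; the real tools were separate Unix command-line programmes talking to an actual shared-memory segment.

    import numpy as np

    # Stand-in for the "virtual frame buffer": named images that persist
    # between operations (in the real pipeline, a shared-memory segment
    # addressed via a key stored in an environment variable).
    buffers = {}

    def load(name, path):
        # In the real pipeline this would pull a scanned frame off disc.
        buffers[name] = np.load(path).astype(np.float64)

    def comp_over(out, fg, bg, matte):
        # Composite buffer fg over buffer bg using a single-channel matte.
        a = buffers[matte][..., None]
        buffers[out] = buffers[fg] * a + buffers[bg] * (1.0 - a)

    def save(name, path):
        np.save(path, buffers[name])

    # One comp expressed as a scripted recipe (file names hypothetical);
    # the computer executes it identically every time it is run:
    #   load("A", "bg_plate.npy")
    #   load("B", "t1000_matte.npy")
    #   load("C", "t1000_render.npy")
    #   comp_over("A", "C", "A", "B")
    #   save("A", "comp_out.npy")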

This is actually where digital compositing started to really show that it could be superior as far as workflow, because you had the opportunity to do quick tests. Once you figured out a recipe, the computer’s going to execute it exactly the same way every single time. You don’t ever have to worry about a human operator with an optical printer applying exactly the right steps in exactly the right order and loading the film in the right place, with the right gels and everything like that. I mean, I still worked with some guys who did that, and it was a mind-numbingly tedious task given the complexity of certain comps that had to be done. If there was any error on the lineup sheet, you had to start over from scratch.

John Schlag: There was a guy who has since been a visual effects supervisor at ILM, his name’s Dave Carson. But he was working on T2 as a paint fixer, basically. And we only half-jokingly started referring to our pipeline as ‘model, animate, render, composite, Dave.’ Because Dave would just paint out any fixes (that the CG software couldn’t deal with) in an early version of Photoshop, at the very end.

The impact of Terminator 2

Mark Dippé: I can remember T2 caused a huge explosion. The fan audience is a little bit of a specialized one but it was all over the world, because I went to some festivals and you’d see like the T2 skeleton there, it was a massive thing.

terminator_2.bg_.5
The T-1000 reforms.

Doug Smythe: Terminator 2 was simultaneously the most fun and the most difficult project of any sort, professionally or scholastically, of anything that I’d done up until that point. I still think that, looking back on the work today, it holds up really well, even by modern standards. You can watch the film and enjoy it as a movie and not look at all the technical flaws, of which I’m sure there are plenty, but it still works as a film. I think that the team of people that were assembled to do it collectively did a really good job overall.

Alex Seiden: Those were exciting times. The CG department was just beginning to come into its own, after the success of the pseudopod from The Abyss the previous year. Still, it was a tiny part of ILM: about 20 people in CG before T2, and we staffed up to 40 for that show. I was one of those lucky new hires, and it changed my life. We all knew the film was going to be huge, both as a movie and as a VFX milestone. We were all pretty young, filled with excitement and hubris and the energy that comes from not having families to go home to.

Eric Enderton: Later, when people asked me what my day job is, I said, ‘Well, on a good day I’m using mathematics to solve a visual problem. On a typical day I’m trying to add one more feature to a giant C++ programme without it collapsing under its own weight.’ Keeping things organised and keeping things systematic. The stuff we were doing then looks so small now and at the time it was enormously large.

Geoff Campbell: The modeling and animating of characters on T2 was rudimentary, but principally what we were doing was right on the mark and is still the way we work today. I know at the time we were frustrated with how hard it was to work intuitively as artists in what was then a very technical and unintuitive field, but we had such great support from guys like John Schlag, Carl Frederick, Doug Smythe and so many others that you could see software improvements almost daily. Everyone was right there building the road beneath your feet, so you always felt optimistic that the work would get done. I think more than anything I learned how to be patient and how to sculpt that digital chicken wire.

T2_BARS_crop
Illustration by Aidan Roberts.

Jay Riddle: People really found their footing on that film and there are some people who have gone on to do some amazing things. There was a young, fresh, excited kid who came in, whom we hired to do video dailies – Jim Mitchell. So, Jim got all his work done and said, ‘Hey Jay, can I have a shot? Like, I’d love to do something.’ I said, ‘All right. Well, we’ve got this one where the T-1000’s foot comes into frame and there’s like a little blob of him that got knocked apart from him, and it’s got to rejoin at his foot.’ And Jim sat down and, over the course of a few days, a week, did an animation that was like, ‘Holy crap, this is damn good!’ He became a full-fledged member of the animation team at that point, and he just had such drive to get stuff done. So, it was amazing to see people grow into roles like that, and we had actually turned him down a couple of times when he applied. He was working his butt off to try to get us to pay attention to him, and finally we did, and thank goodness we did.

Michael Natkin: I don’t know if I should really tell this story, but at some point when I was working 80 hours a week, my pay wasn’t particularly great. I was fresh out of college and the VFX industry didn’t really understand software at that point, so I think I was making like 35 grand a year, something like that. Finally, I had to go to Janet Healy and say, ‘Janet, if you really want me to work 80 hours a week, I think you’re gonna actually have to pay me overtime.’ And she said, ‘Not a problem. Go right ahead.’

Steve ‘Spaz’ Williams: I was actually back in Toronto when the film came out, and I took like twenty of my friends to the theater. People flipped out. We went to SIGGRAPH that year for T2 and we were swamped.

Liza Keith: For me, yes, the biggest impact was at SIGGRAPH because it came out the month before SIGGRAPH and I had people asking for my autograph. I still have people asking for my autograph. It’s really bizarre.

John Schlag: We came to think of each other as family, and we sweated it out together. And it was really the experience of a lifetime for myself, and I’d wager to say, for quite a few others.

Thanks to everyone who participated in this piece, and to ILM for permissions and imagery. Please note some quotes from Steve ‘Spaz’ Williams and Mark Dippé are from earlier conversations I have had with them, which may be published elsewhere on vfxblog.

Enjoyed this retro vfx story? There’s a ton more here at vfxblog.

‘The Punisher’: First look images of the upcoming series reveal a badass costume!

Netflix already has a lot of irons in the fire, with The Flash, Arrow, Supergirl and Stranger Things all lined up for their new seasons in the coming weeks. Yet a new poster for another upcoming series has spiced things up a notch.

Marvel’s The Punisher is next on the conveyor belt of live-action superhero series, and anxious fans can finally heave a sigh of relief as the first look is here. It reveals the costume that Jon Bernthal will be donning.

Clad in an all-black suit with a menacing skull design, Frank Castle looks all fired up to go charging at the wrongdoers and protect the city from their schemes. Like Daredevil, the Punisher doesn’t possess any supernatural powers, but the costume design suggests he will nonetheless be armed with all the weapons he needs to fight the baddies.

The Punisher is one of Marvel’s most anticipated series, and considering the protagonist’s brawny look and striking costume, one can easily predict the dark, brutal tone of the show.

Produced by Marvel Television in association with ABC studios, The Punisher debuts on Netflix in fall 2017.

Five Octane Quick Tips in Cinema 4D

Over the past couple of months I have been putting together some beginner Quick Tips for Octane that I wish I had known when I started using Octane Render inside C4D.

What will I Learn?

We are going to jump into C4D and I am going to show you:

  • Material Refresh (How to fix the issue where the material preview disappears in C4D)
  • Hair Emission (How to make a spline or hair object emit light via blackbody emission)
  • Fake Shadows (How to make light pass through specular materials, e.g. a light bulb)
  • Fog Animation (How to animate the Fog Volume Object without VDB files)
  • HDRI & Sky Scene Lighting (How to light your scenes with more control using HDRI and Octane Lights)

Learn more about HDRI Link

All things beautiful come from nature

Age of Empires IV Announce Trailer


It’s time to battle through history once more in the latest entry of the landmark Age of Empires franchise.

Learn more: http://wndw.ms/0QSUbC

1979 Solar Eclipse – ABC News Coverage


Excerpts from an ABC News Special Report that aired at 11:00-11:29 a.m. EST on Monday, Feb. 26, 1979 as the last total solar eclipse for North America until August 21, 2017 swept across the Pacific Northwest.

Frank Reynolds anchored from New York, with live reports from science correspondent Jules Bergman and reporter Bob Miller. Live images came from Portland, Oregon; Washington state’s Goldendale Observatory; and Helena, Montana.

Age of Empires Definitive Edition – Gamescom 2017 – Trailer


Welcome back to history! Age of Empires returns for its 20th anniversary in Definitive form with a host of new improvements and features. All-new 4K graphics, an increased population limit, an expanded user interface, a re-orchestrated soundtrack and much more await players as they build and battle through the ages.

Preorder today: http://wndw.ms/SEAel5

PS4 / Xbox One “Dragon Ball FighterZ” Second Promotional Video


[Official site] http://dba.bn-ent.net/?utm_source=youtube&utm_medium=direct&utm_campaign=direct

[Subscribe to the channel here: http://bnent.jp/youtube/]

A new fighting game from Bandai Namco Entertainment and Arc System Works finally arrives.
With ultra high-end anime visuals made possible by a “2.5D” presentation that is neither 2D nor 3D,
and the ultimate high-speed battles erupting on the ground and in the air, true to Dragon Ball,
this is a full-scale Dragon Ball fighting game like never before!
“Dragon Ball FighterZ” is scheduled for release in early 2018!

Release date: Early 2018
Platforms: PS4 / Xbox One
Genre: Dragon Ball fighting
Price: TBD
CERO: Rating pending