Tracking nightmares: behind Blur’s half-digested Billy shots in Deep Rising – from 20 years ago

Illustration by Aidan Roberts.

“We said, sure, we can do it! And we did it, although it was very, very painful.” David Stinnett, Blur Studio

While it may not have made a splash at the box office when it was released 20 years ago, Stephen Sommers’ Deep Rising certainly contained some considerable ‘out-there’ CG creature work. Most of that was tackled by Dream Quest Images and ILM, but one particularly gruesome sequence in the 1998 film – the ‘half-digested Billy’ scene – was realised by Blur Studio.

Blur had only been formed a few years earlier, in 1995, but had already established itself as a creative CG, animation and VFX house. It took on the tough Billy shots, in which actor Clint Curtis emerges partially digested yet still alive from a creature before collapsing, and helped generate one of the film’s classic moments.

On Deep Rising’s 20th anniversary, Blur co-founder and CG supervisor on those shots, David Stinnett, recalled for vfxblog the challenges involved: coming on board late in production, having to hand-track every single frame, and creating the CG with a tool that many people didn’t think was up to the challenge.

vfxblog: This was still relatively early in Blur’s history – had you done much feature film effects at that point?

David Stinnett: We had done visual effects for a friend’s film, which was a feature, but it was a very small, low-budget one. We did a number of shots for that; that was before Deep Rising. And before then, we did some shots for the Outer Limits TV show. Deep Rising was the first big film.


vfxblog: Some of the major creature work was done at Dream Quest and ILM. Do you remember how the Billy shots ended up coming to Blur?

David Stinnett: That’s an interesting story. I think that the shots were farmed out from the get-go to a smaller studio. I do not know the name of that studio or the people involved. But, apparently they bid on it, and they said, yeah, we can do it, no problem. And then they spent forever on it, and then finally they said they had to give up because they just could not get the shot done because of the tricky, tricky tracking involved. Especially back then.

So – it was maybe not the cleverest thing to say – but we said, sure, we can do it! And we did it, although it was very, very painful. To go back a little bit, I had started out doing make-up effects, I did that a number of years before we started Blur. Later, a friend of mine who I had worked with previously was the make-up supervisor on set, and he had seen my name on the film once it came out. And he gave me a call and said, yeah, how’d you guys do the shots? He said the supervisor for that smaller studio was on set, and my friend the make-up supervisor had asked him, do you need tracking markers or anything on this guy? And they had said, no, no problem. We can do it. And that was a huge mistake. And that was probably why they couldn’t do it, because he was glossed up so much that you just had specular highlights dancing all over the place. Add to that the fact that he was shaking and twitching, so there was motion blur changing direction frame to frame, so there was no way to get a clean auto track on it.

vfxblog: How were the plates filmed for the shot? How much make-up and prosthetic work was the actor wearing? These days I guess the approach would be called digital make-up.

David Stinnett: The side of his face that we replaced was just clean. He certainly had make-up effects on other parts, but we didn’t replace existing make-up [Rob Bottin was the film’s special make-up effects designer and creator]. It was shot as a digital shot from the get-go. It was basically ‘face replacement’, was how we termed it. We didn’t really call it digital make-up, although technically that’s what it was. But it was just kind of a cool challenge.


vfxblog: So, back then, what was your first approach in terms of matchmoving, roto, and just sort of jumping into the scene? What tools and techniques were available to you back then?

David Stinnett: Well, we started out as a PC shop from day one. I actually remember the first few jobs that we bid on. They would come over and realise we were on PCs and they would turn around and walk away, not believing that we could do it. But, our software at the time was 3D Studio Max version 2 for the PC, which we used to model the CG [note: there were some maquettes of a head, arm and leg made by production that were digitized and intended to serve as the basis of the CG models]. We attempted to do the tracking in After Effects, which had just come out for the PC. And then we were also using Digital Fusion, which had just come out too, for compositing.

vfxblog: What do you remember were some of the major challenges for tracking?

David Stinnett: The major challenge in the tracking was the tracking [laughs].

vfxblog: So, basically all of it was a challenge…

David Stinnett: Yeah, all of it. Every frame was a challenge. We had three people working on those shots. We had myself who was supervising, and then we also had Tim Montijo and Greg Tsadilas. One of the guys did start tracking that in After Effects – I guess After Effects had some sort of rudimentary tracking solution back then. I believe he spent three weeks trying to get it to work in After Effects, and then he had to, in shame, say, I’m sorry, I can’t do it. At which point we said, well, we have to finish this.

So, we wound up hand-tracking every single frame in 3D Studio Max, by pulling in the background plates as backgrounds. The way he’s bouncing around, he’s got so many blurred frames and highlights changing – that was the tricky part as far as tracking goes. I think even with tracking markers it would’ve been a nightmare.

And then we took the 3D skull make-up and basically hand-aligned that as best we could with the rotation and position. And we also had to do some serious warping per frame just to make it work. And it took three of us about seven weeks to track those three shots.
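
[Note for illustration: a minimal modern sketch of what ‘line it up by hand, then warp it per frame’ amounts to, written in Python with OpenCV rather than anything Blur actually used. The function, landmark points and array names are hypothetical.]

```python
# Hypothetical illustration (not Blur's actual tool chain): for each frame,
# four landmarks picked by eye on the rendered CG element are warped onto the
# same four features picked by eye on the live-action plate.
import cv2
import numpy as np

def align_cg_to_plate(cg_rgba, cg_points, plate_points):
    """Warp a rendered CG element so its landmarks land on the plate's landmarks."""
    src = np.float32(cg_points)      # 4 hand-picked (x, y) points on the CG render
    dst = np.float32(plate_points)   # the same 4 features, located on the plate
    warp = cv2.getPerspectiveTransform(src, dst)
    h, w = cg_rgba.shape[:2]
    return cv2.warpPerspective(cg_rgba, warp, (w, h))

# Per-frame usage: every frame needs its own hand-picked point sets, which is
# why three people spent roughly seven weeks tracking three shots.
# warped = align_cg_to_plate(cg_frame, cg_pts[frame_num], plate_pts[frame_num])
```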


vfxblog: What about clean plates?

David Stinnett: I’m almost positive we had clean plates. We had a camera move, which wasn’t automated, as far as I can recall. I’m trying to remember exactly how we did it. I keep trying to think how I would do it now, but that’s not necessarily how we did it back then! I’m sure we just pushed the clean plate through the holes where it needed to be.
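
[Note for illustration: ‘pushing the clean plate through the holes’ is essentially a matte composite. A minimal NumPy sketch follows; the function and array names are hypothetical, and this is not Blur’s actual Digital Fusion setup.]

```python
# Hypothetical sketch of "pushing the clean plate through the holes": wherever
# the hole matte is 1.0, the clean background plate shows through the plate.
import numpy as np

def punch_through(plate, clean_plate, hole_matte):
    """plate, clean_plate: (h, w, 3) float arrays; hole_matte: (h, w, 1), 1.0 inside holes."""
    return clean_plate * hole_matte + plate * (1.0 - hole_matte)
```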

vfxblog: I was about to ask you about the hand, in particular, because we see through that. Was that hand modified or was it CG?

David Stinnett: It’s a completely CG hand.

vfxblog: Oh really, I had no idea. I think what’s fascinating about that is it could easily be taken to be a make-up effect, but you’ve married it so well with whatever was done on set. Do you remember the challenges of actually doing that, in terms of rendering and just getting the lighting and lookdev correct?

David Stinnett: Back then, there were no real HDRIs, or anything like that. At least not that we were aware of, so it was all just eyeballing. But since his actual face was so slimed up, that was a good reference as far as where the highlights needed to be. So, it’s fairly easy to match that way. And his face was kind of torn up, so there wasn’t a whole lot of blending, as far as skin into his skin. It was just kind of melted away.

We basically just chopped out the concave bits of his face. And it was modelled in 3D Studio and rigged and textured. It was all pretty simple as far as that goes. It was just the integration that was a nightmare.


vfxblog: Is there anything else, while you’re looking at that shot, that you remember from working on the show?

David Stinnett: The shots were log space Cineon files, and we weren’t experienced with that at the time. So, that actually was a challenge until we figured out what was up with that. I remember trying to correct the raw files to make them look proper just by colour corrections and stuff. And that was obviously a mistake! Luckily, we figured out what was going on fairly early on.
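
[Note for illustration: the trap described here is grading log-encoded Cineon scans as if they were linear. Below is a minimal sketch of the standard Kodak Cineon 10-bit log-to-linear conversion, using the conventional 95/685 reference black/white points; Blur’s exact settings are not documented in the interview.]

```python
# Minimal sketch of the standard Kodak Cineon 10-bit log-to-linear conversion.
# Reference black/white of 95/685 and the 0.002/0.6 density factor are the
# conventional defaults; Blur's exact settings are not given in the interview.
import numpy as np

def cineon_log_to_linear(code, ref_black=95.0, ref_white=685.0):
    """Convert 10-bit Cineon code values (0..1023) to scene-linear, white at 1.0."""
    density = 0.002 / 0.6
    gain = 1.0 / (1.0 - 10.0 ** ((ref_black - ref_white) * density))
    offset = gain - 1.0
    code = np.asarray(code, dtype=np.float64)
    return 10.0 ** ((code - ref_white) * density) * gain - offset

# Grading the raw log code values directly (the mistake described above) skews
# contrast and colour; convert to linear first, then grade.
```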

You know, none of us had any visual effects training. It was all learned by doing. So, we started out, like I said, doing some of the smaller shots with Max. And the very first effects shots we did were for the Outer Limits show. And actually we did some shots for the space Hellraiser film (Hellraiser: Bloodline). And we did the compositing for those in the little compositing module that 3D Studio Max had. With the Hellraiser show, it was the DOS version of 3D Studio. I’m amazed we could do that. But, there was no roto or anything on that.

It was kind of a ‘make it up as you go’ thing. It was like, okay, it’s logical that we can do this. Oh, cool, we have a roto. And oh, cool, we can blur it. So, it was all kind of on the fly and make it work any way you can. And it was a lot of work. Looking at it today, I can see a lot wrong with it, but I’m still very happy with the way it turned out.

Illustration by Aidan Roberts.

vfxblog: It feels like, for Deep Rising, you really pulled off a significant shot that many people remember well.

David Stinnett: Thanks – one thing I do remember – I don’t remember where I saw this, it was some thread somewhere online – and it was a discussion of the shots. And they had heard they were done in 3D Studio. And they refused to believe it. They said, there’s no possible way 3D Studio could’ve done that. And sure, it’s not the software, it’s the people. If you have enough time, I mean, it was capable enough. It couldn’t do it automatically. But you could force it to do pretty much whatever you want.

vfxblog: Having worked in make-up effects previously, did you consider this shot a particularly tough thing to do?

David Stinnett: Yeah, I mean, I don’t recall seeing anything like this in CG. It would be make-up, but it was so hard to take volume away. And I think when he turned his head, people were shocked that it looked that way. Because at the time there wasn’t a whole lot of convincing CG make-up. Digi-doubles were just starting to make their way out then, and not very convincingly. But, looking at these, I’m still happy with these shots. I think it’s just the shock of seeing it and you don’t think necessarily oh, how was that done? It’s just like, oh my God! And you’re caught up in the moment.
