In December, after the Cinecomic Experiment unexpectedly proved virtual filmmaking was viable for me, I had a spare unfinished room in my house built up enough to put a greenscreen stage in it, featuring the new Panasonic GH4 4K DSLM camera, a professional light-absorbing greenscreen backdrop, photography lights, and a 40″ HDTV monitor. At first, I got a small 25″ monitor (in the above pic), but we actually missed that a take was out of focus during a wardrobe test, so I upgraded it (below) to be sure.
The implications of this were profound for me. I wanted to see if I could shoot a professional-grade short film virtually from within my studio space on a micro-budget, or even no budget at all, while being unlimited in where the film could take place. Environments could potentially be done all virtually, with only the actors captured in principal photography. Essentially, what was only possible through ILM pushing the envelope with millions of dollars in 1997 (despite that film's script being not so hot) was now within reach in my home with prosumer gear and software (a big thank-you to George Lucas and ILM for figuring out how to make this more affordable and accessible for future filmmakers now almost two decades later). My mind was reeling with the possibilities.
So, the December Cinecomic Experiment culminating in the test shot of me on the bridge of the Prometheus was retroactively deemed Virtual Filmmaking Test 01 (VFM01). From there, a new two-page script and storyboards were made, and in mid-January (after a texture-hunt vacation around Chicago), principal photography on Virtual Filmmaking Test 02 (VFM02) began. This would eventually be known as simply 'Prototype 2'.
For VFM02, I essentially used a scene from Prometheus (where David poisons Holloway) as a model and an emulation target, directorially. The idea was that if I could virtually re-create an actual dramatic scene by Ridley Scott (in a new context with a new original environment and characters), it would be a proof-of-concept that a virtually-composed film production could be doable for me. I was also tracking my time put into this to begin developing metrics for how long it takes to create a minute-and-a-half dramatic scene with about twelve angles over many cuts. That could then be extrapolated to estimate how long it might take to produce a short film or even a feature through these virtual filmmaking techniques.
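The extrapolation itself is simple rate math. Here's a minimal sketch of the idea using purely hypothetical numbers (the post doesn't give the actual tracked hours), scaling total production time linearly by finished runtime:

```python
# Back-of-envelope extrapolation from one tracked test scene.
# All figures below are placeholder assumptions, not actual metrics.

def extrapolate_hours(scene_minutes, scene_hours, target_minutes):
    """Scale total production hours linearly by finished runtime."""
    hours_per_minute = scene_hours / scene_minutes
    return hours_per_minute * target_minutes

# Assume the 1.5-minute test scene took 120 total hours (shoot + comp).
short_film = extrapolate_hours(1.5, 120, 10)   # 10-minute short -> 800.0 hours
feature = extrapolate_hours(1.5, 120, 90)      # 90-minute feature -> 7200.0 hours
```

A linear scale-up is pessimistic, of course: as the later paragraphs note, skills and reusable assets should make each subsequent production faster.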
For this I would play two characters, an older scientist and a 'fresher' experimental clone. I knew I'd need to really differentiate my looks for each, creating enough contrast that you might forget the two characters are played by the same person when watching. We would need makeup, another new territory for me. But I wanted to learn everything involved in this process as much as possible. After some research on old-age makeup and prosthetics, the wonderful Liz Irish was put to work as my makeup artist for both characters. Cortni Fleagle was the hair stylist: the scientist unkempt, the clone tightly trimmed.
Between the various wardrobe and makeup tests, I first shot all of Hale's angles, then got my hair cut and could shave (finally). Then I did the Clone's shots. Each character, across twelve angles or fewer, took a five-hour shooting session. I think that was largely due to the fact that I was 'acting in the blind', since I was both the actor and director. After I'd do a ten-minute run-through take, I'd stop, sit at the monitor, watch playback, make mental notes to direct myself, and do it again. Plus, this was my first time in front of a camera acting in a long while. Logistically, the shoot was spread over two weeks in spare time, but when it's all cut together, it's magically a single moment. I love stuff like that, the filmmaking magic that creates impossible results in a single frame.
I went the Fincher-50 route and did about fifty takes for each individual line. I believe in this a lot, especially since digital costs you nothing to do it. Since we've gone to all the trouble to get to these moments of shooting, we might as well be absolutely sure we 'get it' and try many different things. This then becomes a joy in editing, because you can custom-tailor a seamless performance between each cut, line-by-line at times even, to get the perfect total performance. This is even more critical when your actors are acting to focus targets on a stand: you need the flexibility to gel the performances together for cohesion. With that many takes, it's almost impossible to have a bad cut, provided your actors can give you those choices.
I actually don't really want to be in my films, but it's sort of hard not to utilize myself: I'm here, I'm always available, I'd like to think I look alright, I have acting experience, I can exhaust myself with fifty takes per angle no problem, and I'm free for the production. I'd much rather be behind the camera focusing purely on directing, but at least until I can prove out my filmmaking abilities to attract solid actors for these productions, I'm gonna have to be the lab monkey. I guess it's good anyway, because it will force me to get warmed up with acting again (it's been a while), study the craft, and know what it's like to be on that side of the camera, all good experience to make me a better director and even screenwriter. For example, I bought all of the Stanislavski books, and am learning that side of it. It's actually my goal to know everyone's job on set better than they do if I'm going to direct; I believe that's required. So begins that journey…
After I did the storyboards and before I began principal photography, I constructed the virtual set at a 'graybox' fidelity. This is common practice in game development as well. Once all the acting was shot, with every take cut up and selects made, I was then finally able to comp them together in After Effects. Once all of that was done per angle, I was able to export a 'graybox cut', so you could -finally- watch the whole thing cohesively in one go. It took about three or four weeks of spare time to get there.
That's basically where I am now. I'm building up the environment art more fully, first in concept art. This is definitely something I would normally do before shooting anything, but for the sake of a test I was satisfied to begin shooting with just the graybox environment and a solid idea of what it will ultimately look like. I'm going through training right now at Lynda.com on Maya, Mudbox, and After Effects courses (two playlists totaling 250 courses of material to get through to really up my game). So it'll be a while before these shots are completed.
But the idea is that once I get VFM02 done and under my belt, the next go at this will be way faster. The skills will be there, I will have done this before, and hopefully it will even get mundane. Then it'll all be less about how to do it, and more about what to do. And what to do better. I am just starting out on this stuff. But for my first go at 4K digital filmmaking, a virtual cinematic pipeline, and learning film VFX, it's very encouraging so far.
And then, suddenly, VFM02 got a bit derailed, because I got the camera ‘out of the lab’ to do straight-up practical cinematography tests. And the results of what became ‘Prototype 3’ would again surprise me. Can’t wait to share that with you too…
Until then, back to client work! (I don’t sleep much!)