The Future of Video Games is the Future of Animation

Seems I’m not alone in my assessment that game engines are now viable rendering engines for CG filmmaking; I’ve been seeing this discussion pop up a lot lately. While working at triple-A game studios during the advent of the eighth console generation, I routinely heard rumblings that this was the big objective: to achieve ‘Hollywood-level CG in realtime’. With a focus on game engines at this year’s GDC, it seems the idea has truly been brought to the forefront of people’s minds.

Now FastCompany is echoing these thoughts as well, with their article ‘The Future of Video Games is the Future of Animation’.

I particularly liked the stat that a single frame from Pixar’s Monsters University took them 29 hours to render using “what’s considered one of the fastest supercomputer rigs in the world: 2,000 computers with 24,000 processing cores”. In my own testing, a 4K frame was taking me about 4 hours in Maya Mentalray, though my frame wasn’t as complex as one set up by Pixar. Meanwhile, capturing a 4K frame out of Unity takes only 0.91 seconds on my five-year-old iMac. So: 29 hours (104,400 seconds) on a state-of-the-art supercomputer versus 0.91 seconds on a half-decade-old prosumer desktop. Since this represents literally millions of dollars in cost and time savings for individual creators, it’s not even a question of which route an indie should take for CG filmmaking.

And check out the recently released Kite demo, built in Unreal Engine 4 and running on the Nvidia Titan X, to see yet another impressive example of where this is all going:

I haven’t tested Unreal Engine 4 myself yet, as it barely runs smoothly on my five-year-old iMac. I’m going to need a new computer anyway (it’s time), and I’m aiming for the 5K iMac as my primary machine with Windows on Boot Camp. If that’s not enough, I’m also considering a super-beefy Windows box with the Nvidia Titan X to serve basically as a render machine only.

nvidia-titan

But the new computer(s) will be a bit down the road. I’m content to finish the virtual set for VFM02 in Unity 5 for now while I train myself up on CG modeling and texturing. I’ve also just finished figuring out cinematic cameras, lighting and rendering modes, and the full capture pipeline from Unity, and I’m thrilled to have gotten all of that working. I’m actually not that limited by my old iMac, primarily because uRecord can get animation out of Unity at any res in perfect lossless frame-by-frame PNGs, without runtime performance even being a factor. Then there’s deciding between Unity’s royalty-free offering and Unreal Engine’s 5% cut. Unity’s results aren’t bad at all, but Unreal Engine’s seem pretty obviously amazing– for a price.

starwars-rebels-texture-fidelity-1b

Going through VFM02, I’m already beginning to appreciate the value of a stylized approach at an indie scale. It really comes down to how easy it is to implement these features in the engine editor, and whether that scales across an entire production. Meanwhile, you can actually see low-fidelity texturing on background assets of Star Wars: Rebels, produced and aired by Disney (high res example 1, example 2), so there’s a level of efficiency possible through stylization that works. People tend to focus on the characters in the frame anyway, especially the eyes. It really just depends on your production parameters and goals, so there’s huge value in custom-tailoring a style to fit them. The Kite demo certainly wasn’t made by one person in spare time, but it nevertheless demonstrates an enormous leap forward in efficiency at smaller production scales.

Either way, it does seem that using game engines now for filmmaking is a trending idea. Go experiment and create something awesome! There’s never been a better time.



VFM02: General WIPs

20150319-VFM02-WIP00

20150319-VFM02-WIP01

20150319-VFM02-WIP02

Set UVs and established textures for the lower half of the scene– next I’ll spend some time really dressing those textures up. The holotable’s diffuse map needs a bit more polish as well.

Looking into getting Quixel Suite, but they don’t have a Mac version (it’d be the only thing in this entire pipeline – among Maya, Mudbox, ZBrush, Unity, and the full Adobe CC suite – that I’d have to Boot Camp into Windows for).

I also realized that, as I was adding new assets to the Maya scene, Unity was not automatically setting those elements to static, so lightmapping was skipping over them. Fixed that last night. There’s still some weirdness going on with some assets’ lightmapping, though, such as the pipes.
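
If it ever regresses, this kind of thing can be automated. Here’s a minimal sketch (my own assumption of how I’d script it with Unity 5’s editor API, not something from my actual project) of an import hook that flags every incoming model as lightmap-static:

```csharp
// MarkImportedModelsStatic.cs -- must live in an "Editor" folder.
// Sketch: automatically flag imported models as lightmap-static so
// baked lighting never silently skips newly added assets.
using UnityEditor;
using UnityEngine;

public class MarkImportedModelsStatic : AssetPostprocessor
{
    void OnPostprocessModel(GameObject root)
    {
        foreach (Transform t in root.GetComponentsInChildren<Transform>(true))
        {
            GameObjectUtility.SetStaticEditorFlags(
                t.gameObject, StaticEditorFlags.LightmapStatic);
        }
    }
}
```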

There’s still a long way to go on this, my first attempt at environment art of this kind. Overall, making progress!



SXSW Keynote: Mark Duplass Tips for Indie Success

“The cavalry isn’t coming… The cavalry is not coming. But YOU are the cavalry.”

This. A thousand times this.

Also, at 50:46, there was an amazing part where he pretty much echoed what I said earlier in the Unreal Engine 4 post. “In 1995, a kid from Ohio in the suburbs, who is 14 years old, couldn’t turn a camera on himself and make one of the more explosive movies we’ve seen come out of Sundance, and that could happen now with the technology.”

Well, I was 11 in 1995, from Ohio. At the time, we’d borrow my dad’s boss’s VHS camcorder to make home movies. Today, I’m shooting on greenscreen in 4K, creating virtual CG sets, and compositing in After Effects, in my spare time. If I were a teenager today, there’d be nothing stopping me from doing all of that at that age either. The real gatekeepers at this point are mainly skill level and finding the time. So keep working to learn, expand, and refine your skills so you can close the gap between vision and result in realistic time.

For pretty much every type of creative medium today, there is now a commercial distribution channel where you can self-publish and drive sales with do-it-yourself social media marketing. You have a shot at making good money (not crazy money, unless you get lucky), or at least enough to pay for the project. At the very least, you can do it purely for the satisfaction of having done it, and that’s a new reality of creativity today.

At this point for me, that’s mainly all I care about. If whatever I make earns some money, that’s great and extremely helpful, but I’m in it for the love of doing it above all else. I’m going to do it either way. And now the tools and the marketplaces are totally accessible for doing what you love and taking it seriously, at a level I call the ‘hobby pro’ project: one that can become a full-scale business or career trajectory but doesn’t actually have to. Today, you can just make a thing, share it worldwide, and that can be cool and make you happy, and that can be enough. I think most people, deep down, are hoping for even just that: to make enough doing what they love. It’s easy to get distracted by stories of unlikely mega-success; it’s wiser to accept the realities and probabilities and engage them to your advantage. If you can work the business side as much as the art side, your odds improve.

I liken the ability to make a film, game, or album today to writing a novel. It’s okay to self-publish now, where this used to carry a stigma. It’s so okay today that it’s really the norm. Duplass is basically saying it’s the only realistically viable option anymore, and I would wholeheartedly agree, for any medium. And just because anyone can write a novel doesn’t mean everyone succeeds. Even for those who do, skill level determines whether or not it’s any good.

As both Duplass and I have said, success is no longer assured on the merit that you’re doing it at all, as it used to be. You’re not one of seven indie films at Sundance, or one of three indie game devs in Indie Game: The Movie. Online, you’re really one of thousands spilling out of social media feeds that day alone. The same thing that happened to the music industry has happened to every other creative industry, one by one. But I’d rather have no one between me and my creativity than the old models of last century, even if the market is flooded. And if you get good, there are now new business models in place – like Netflix / Vimeo / VHX / Amazon Instant for film, or the now-ubiquitous indie support for games – that have figured out how to make this work in the new DIY reality.

Duplass’s whole keynote is so spot-on, and it’s refreshing to hear someone actually doing it on the filmmaking side tell it straight, especially for indies. The future is now. It comes with good news and bad news. The bad news is the old models are dead, and many will keep thinking they’re alive for decades, trying to resuscitate them with shock paddles, struggling every step of the way. The good news is, you are the cavalry now. Adjust accordingly.



VFM02: Mixamo Character Design, Animation, & Capture Test

Had some time today to experiment with the full character pipeline: Mixamo Fuse for rapid, intuitive character creation; Mixamo auto-rigging; applying a free default animation pack; importing into Unity; capturing 4K video at various framerates through uRecord; and, because Unity cameras don’t include motion blur by default (for gameplay reasons), applying synthetic motion blur at various framerates in After Effects, before finally exporting a 2KSCOPE 24fps H.264 movie, shown above as uploaded to YouTube. I also learned some YouTube customization settings, such as how to loop a video, which was way, way more complicated than it needed to be… But yay.

In testing the animations in Unity, I just ran the entire default pack. This test was mainly to put uRecord and Unity through their paces, to ensure smooth and correctly timed animation could be captured out of the engine regardless of the hardware’s runtime framerate. Turns out, this is totally the case. All it needs then is some synthetic motion blur in After Effects. Without it, the result looks uncanny and definitely more ‘computery’.

Here’s a list of things I learned or confirmed for my pipeline from today’s experiments:

  • Can design and auto-rig character in minutes via Fuse; totally worth it for even just establishing a base mesh
  • Can get this character correctly into Unity
  • Only issue seems to be the eyelashes not masking out correctly for unknown reasons
  • Can create base mesh in Fuse and customize more directly in Mudbox/Zbrush and Maya
  • Can create custom elements in Maya to bring into Fuse.
  • Once in Unity, animations worked as expected.
  • Perfect framerate capture and timing confirmed through correcting framerate in ‘Interpret Footage’ in After Effects on the PNG stack
  • Can render everything at once since runtime fps is irrelevant through uRecord, at any res – capture time is still the same, 0.91 seconds per frame, regardless of what’s being rendered in the scene, eliminating the usual film CG process of rendering in passes and compositing to assemble a single shot
  • Can add motion blur in AE (Pixel Motion Blur)
  • Add FilmConvert in AE with default source setting and 50% Color, 50% Curves, 25% Grain.
  • Tried capturing from Unity at 24, 30, and 60 fps, experimenting with motion blur interpolation from those different rates, then rendering final 24fps comps for each– the best results, which looked less ‘CG animation-ish’ and most ‘filmic’, came from Pixel Motion Blur over 24fps footage. Working from the higher framerates, After Effects and Pixel Motion Blur have more data to interpolate, creating a smoother and cleaner render down to 24fps, but it ultimately looked cartoony. Surprisingly, Pixel Motion Blur on straight 24fps looked most filmic; ironic that having less data to work with still produced a better final effect.
  • Thus, I only need to capture in Unity with uRecord at 24 fps, which will save substantial time and HDD space over the long haul.
  • Takes After Effects about 1 minute per second of final video to render with Pixel Motion Blur and FilmConvert. 16 seconds takes about 13.5 minutes to kick out of AE at 2K from a 4K comp and source.
  • Despite capturing in Unity, bringing frames into AE, setting up effects, and kicking out a final AE comp, this process still dramatically beats Maya rendering speed. Exponentially.
  • Theoretically can animate faces with Mixamo and a webcam– but Face Plus is currently broken on Mac OS X Yosemite on my 2011 iMac.

Here’s an overview of the amazing Mixamo Fuse. Ever since I first saw EVE Online’s character creator back in 2010, I’ve been hoping a stand-alone app would be created to do this kind of character creation outside of a game. It’s here, and it’s super easy to use.

I was also blown away by the prospect of using Mixamo Face Plus to literally sit at my desk and, through a simple webcam, act out facial performances for all my CG characters with ease.

However, when I tried the stand-alone demo, it crashed on Mac OS X Yosemite no matter which resolution or quality setting I chose, and when I tried the Unity plug-in, there were several errors from obsolete code (likely because it hasn’t yet been updated for the new Unity 5, I hope). So it’s currently unusable. Assuming that’s all it is, I’m sure Mixamo will resolve these issues in an update soon. If so, wow– the ability to act for all characters and map those performances onto them as recorded animations in realtime is enormously valuable.

Just had a short time to experiment, so that’s all for tonight!

UPDATE: 2015.03.18
A rep from Mixamo saw this post and contacted me on Tuesday. She confirmed Face Plus currently does not work with Unity 5, that they’re looking into updating it, and that they’ll be making announcements about their plans soon (exciting!). Then, she gave me a tip on the eyelashes: the transparency usually isn’t connected by default. I’ll need to duplicate the body material and connect the alpha channel of the diffuse map to the transparency channel of the material. That they found my post and reached out to me with solutions is awesome customer service! So once I find a moment to try this, I’ll post the results!
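
For reference, here’s my reading of that tip translated into Unity 5’s Standard shader, as a hedged sketch (the property names are the Standard shader’s internals; the component itself is hypothetical until I’ve actually tested it):

```csharp
using UnityEngine;

// Sketch of the eyelash fix: duplicate the body material and switch the
// copy to the Standard shader's 'Fade' mode so the diffuse map's alpha
// channel drives transparency. Assign the eyelash renderer in the Inspector.
public class EyelashMaterialFix : MonoBehaviour
{
    public Renderer eyelashRenderer;

    void Start()
    {
        // Duplicate so the shared body material stays opaque
        Material lashMat = new Material(eyelashRenderer.material);

        // Standard shader 'Fade' mode, set up the way the material
        // inspector does it internally
        lashMat.SetFloat("_Mode", 2f);
        lashMat.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
        lashMat.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
        lashMat.SetInt("_ZWrite", 0);
        lashMat.EnableKeyword("_ALPHABLEND_ON");
        lashMat.renderQueue = 3000; // Transparent queue

        eyelashRenderer.material = lashMat;
    }
}
```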



VFM02: Unity + uRecord for 4K Still & Video Capture

Tonight I stumbled upon uRecord in the Unity Asset Store, which works like a charm. It can capture stills at the set resolution, with upscaling (achieving flawless 4K+ capture), and it can even capture full-quality animation renders directly out of the Unity editor, without having to create a build at all.

Click the image below for the full 4K render, captured in literally less than a second:

20150313-UnityuCapture2K

Starting with Cinema Pro Cams, I set up a basic fly-through test animation. The f-stop was set high so the scene stayed almost entirely in focus as the camera moved through the set. Every time I tried adding the Camera Motion Blur script to the camera, however, Unity crashed. Overall though, uRecord was able to flawlessly capture 4K CinemaScope frames of animation with all camera effects and lighting intact. I then compiled those frames into video simply by dragging and dropping them into After Effects (I’d have to do this with a Maya render too).

Once in AE, I could tweak the captured imagery like any other video (such as applying FilmConvert, though I was able to tweak levels in Unity’s camera directly through Chromatica). From AE, I could render out a 4K MPEG4 or (especially for ease of playback and streaming) a 2K H.264. In the end, I had a 2K HD CinemaScope .mov made from Unity and AE, start to finish, in about 10 minutes total.

Please excuse the very work-in-progress set here– the point is a 4K CinemaScope capture from Unity 5 worked. Be sure to play the 4K-to-2K example here in 1080p. (Update: Now with Pixel Motion Blur Applied)

uRecord runs the scene and steps through time to match the set framerate (here, the 24fps film standard), saving the frames out as lossless 4K PNGs. Truly remarkable is how capture has been decoupled from runtime performance in frames per second (yay! this means I don’t need a new heavy-duty graphics workhorse machine to capture animation at any resolution, namely 4K, in correct playback time). For $30, uRecord has solved two of the four big issues I identified as challenges a week ago. The remaining two issues aren’t actually issues if one can accept the tradeoff in benefits here (and I totally can).
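
For the curious, the underlying trick is Unity’s Time.captureFramerate. Here’s a minimal sketch of the technique (my illustration of the principle, not uRecord’s actual code):

```csharp
using System.IO;
using UnityEngine;

// Sketch: lock engine time to the target framerate and save every frame.
// With Time.captureFramerate set, Unity advances exactly 1/24 s of game
// time per rendered frame, however long the frame takes to draw -- which
// is why runtime fps stops mattering for capture.
public class FrameCapture : MonoBehaviour
{
    public int framerate = 24;              // film standard
    public string outputFolder = "Frames";  // hypothetical output path

    int frameIndex;

    void Start()
    {
        Time.captureFramerate = framerate;
        Directory.CreateDirectory(outputFolder);
    }

    void LateUpdate()
    {
        // superSize = 2 upscales the capture (e.g. a 2K view becomes 4K)
        string path = string.Format("{0}/frame_{1:D5}.png", outputFolder, frameIndex++);
        Application.CaptureScreenshot(path, 2);
    }
}
```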

On my 2011 27″ iMac, a still image from a similar scene with equivalent lighting and rendering settings, rendered at 2K directly in Maya (Mentalray), would’ve taken approximately 2 hours. An instantaneous click compared to 2 hours for the same shot is an enormous time-saving benefit. Additionally, in Maya new renders would have to be made at every point of iteration and checking. In Unity’s What-You-See-Is-What-You-Get (WYSIWYG) workflow, however, iteration on fully rendered results happens in realtime. The key with WYSIWYG realtime rendering today is that it’s now (relatively) good enough to stand up against Maya-based rendering (though of course it can’t quite beat it in quality yet).

For animation at the standard film rate of 24 frames per second, a 60-second shot (1,440 4K frames) would take Maya, at 4 hours per 4K frame, 5,760 hours to render – that’s 240 full days, or 8 solid 30-day months. If I started the shot today, it would be done by November. Testing this with uRecord from Unity, however, it took on average less than a second (0.91) per 4K frame, or 1,310.4 seconds (21 minutes 50 seconds – call it 22 minutes) to render all 1,440 frames. That’s 1,310.4 seconds versus Maya’s 20,736,000 seconds: Unity produces 4K production frames for animation in roughly 0.006% of the time, a speedup of about 15,824× (14,400 seconds per frame / 0.91 seconds). At 2K, with Maya at about 2 hours per frame, it would still be a roughly 7,900× speedup. Unless I were to buy and build a render farm (which isn’t in the cards), 4K CG filmmaking at an individual indie scale with Maya animation rendering isn’t even a practical option!
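
Here’s the arithmetic as a quick scratch calculation, for anyone who wants to check it (plain C#, just the numbers above):

```csharp
using System;

class RenderMath
{
    static void Main()
    {
        double frames = 60 * 24;            // 60 s shot at 24 fps = 1,440 frames
        double mayaSecPerFrame = 4 * 3600;  // 4 hours per 4K Maya frame
        double unitySecPerFrame = 0.91;     // measured uRecord capture time

        double mayaTotal = frames * mayaSecPerFrame;    // 20,736,000 s (~240 days)
        double unityTotal = frames * unitySecPerFrame;  // 1,310.4 s (~22 min)

        Console.WriteLine("Maya:    {0:N0} s  (~{1:N0} days)", mayaTotal, mayaTotal / 86400);
        Console.WriteLine("Unity:   {0:N1} s  (~{1:N0} min)", unityTotal, unityTotal / 60);
        Console.WriteLine("Speedup: {0:N0}x", mayaSecPerFrame / unitySecPerFrame); // ~15,824x
    }
}
```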

This means for every single 4K frame Maya renders, Unity can produce about 15,824 frames in the same amount of time. The chart below visualizes this– the Maya column here is exaggerated by a factor of 50x so you can actually see a mark in its column at all! It’s no contest!

Animation-Frames-Over-Same-Time

A render through Maya with the Mentalray or VRay rendering engines would certainly be higher quality, but the time (and thus cost) savings of rendering in realtime for a small independent production, especially one primarily helmed by one person, is priceless in comparison. It’s what makes a CG filmmaking project under those circumstances feasible at all! If you know your daily burn rate, rendering a feature-length film in Unity instead of Maya means, by ‘a penny saved is a penny earned’ logic, you’ve essentially pulled a dump-truck of cash up to your production’s doorstep in cost savings, from funds you never had to begin with.

If your burn rate as an indie were, say, $100 a day, and you rendered a 4K feature-length (120 minute) CG film in Maya only once, that’s 172,800 frames at 4 hours per frame, or 691,200 hours, or 28,800 days (about 79 years). If you started rendering today, it would be done around 2094. Meanwhile, Unity would take about 45 hours (roughly 2 days) to render the entire feature. At that burn rate, rendering through Unity provides a value over Maya rendering of at least $2.9 million, and almost certainly more (since you’d render work-in-progress scenes more than once) – likely tens of millions of dollars in production output value, now available to the indie… for free. Thanks Unity!

I would wager that by the ninth generation of console graphics (around 2020, or 2023 at the latest), realtime game engine quality will match if not surpass Maya Mentalray and even VRay. By then, it would follow that no one will be rendering the old-fashioned way, even in Hollywood; the standard method will be realtime engines. For an indie, jumping to this a generation early makes so much sense that it seems like the only viable method at this point. Who in their right mind would not choose to spend 0.006% of the production time?

In summary, this should all be obvious anyway. Of course realtime rendering is more efficient. What changes the equation is that realtime game engines are now close to Hollywood filmic CG and democratically available: super-viable for virtual filmmaking and VFX at a no-budget scale. This feels hyperbolic, but it’s just simple math and an amazing time to be alive. Unity 5 was literally released last week, leaping forward to the eighth gen of console graphics with serious R&D behind it. This essentially all came down to whether or not I could get near-Maya-quality 4K lossless captures from a game engine in both still and animation forms. I can, on a four-year-old iMac. Which means: goodbye, Maya rendering. What I’ve added here is simply having gone through it myself, from theory to an actual proof-of-concept pipeline, all with a free-to-use and royalty-free game engine.

Once you do it, the potential this unlocks is thrilling!

Now back to modeling and texturing the virtual set, especially knowing it’s worth it! VFM02 as a prototype experiment is proving extremely worthwhile. I’ve learned so much already in such a short time!



VFM02: Holotable WIP

Made some progress tweaking lighting in Unity yesterday and started modeling and texturing the holotable for VFM02. (By the way, the red geometry is block-in, meant to be turned into production assets separate from the now-templated graybox in Maya.) The Skybox is also definitely temp, but it has the right kind of color palette I’m after.

Current glowmap:
20150307-VFM02-HolotableGlowmap

In the scene (still mostly graybox work-in-progress):
20150307-VFM02-SetWIP1
20150307-VFM02-SetWIP2

Original sketch:
20150307-Holotable-Sketch

In other news, Clive over at DarkArts has been quick to resolve the issue with Screenshot Creator, where it would crash on Mac in Unity 5. Now it reliably takes screenshots, but not with shadows at higher resolutions– perhaps because of a memory issue. Anyway, he’s let me know there are a lot of updates on the way– I’m really looking forward to them! It captures really nice quality screenshots when it works. I’ll get him more details for troubleshooting when I have a chance early this week. For now, I simply maximize the Game viewport and grab a 2.5K screenshot from there manually. UPDATE: posted the latest issues for troubleshooting Screenshot Creator over at the Unity forums.

UPDATE: The above screenshots were taken after switching the rendering mode to Deferred and the Color Space to Linear, per recommendations in Unity’s official lighting tutorial. However, the anti-aliasing was awful in Deferred mode. Just awful. So I switched back to Forward rendering and it’s much better. Switching back to Forward also resolved a scrambling issue with Screenshot Creator, but taking 4K screenshots still drops out realtime shadows… even though the scene runs at 60+ fps.
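
The likely explanation (my assumption, consistent with Unity’s docs at the time): the built-in MSAA from Quality Settings only applies in the Forward rendering path, so Deferred gets no hardware anti-aliasing unless you add a post-process AA effect. In script terms:

```csharp
using UnityEngine;

// Sketch of the relevant settings: hardware MSAA only kicks in on the
// Forward path, which matches Deferred looking so jaggy here.
[RequireComponent(typeof(Camera))]
public class ForwardAASetup : MonoBehaviour
{
    void Start()
    {
        GetComponent<Camera>().renderingPath = RenderingPath.Forward;
        QualitySettings.antiAliasing = 8; // 8x MSAA
    }
}
```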



VFM02: More Unity 5 Realtime Rendering Tests

Under Unity 5’s Lighting system, today I experimented with Ambient Occlusion and Final Gathering based on values from the new Skybox, all through Continuous Baking. It took about two hours to calculate, but the result then became part of the scene throughout, in realtime at any angle. Getting just one shot like this out of Maya would’ve taken hours of rendering for that single shot alone, with the result lost until rendered again.

Here are a few of the shots showing off Unity 5’s realtime rendering and ultra-fast continuous baking abilities on a still mostly graybox work-in-progress environment. It’s actually perfect that I’m mainly testing this with graybox, so I can directly see Unity’s lighting power at work without obfuscation from texture details.

VFM02-UnityRenderTest2-A00
VFM02-UnityRenderTest2-A01
VFM02-UnityRenderTest2-A04

Even after it’s done, it says there are "No Lightmaps 0 B", and under the Occlusion tab nothing has been baked yet either. These continuous bakes must have generated maps somewhere, but Unity isn’t reporting that they exist. Kicking off a build took an extra three minutes as it compiled all of this (though I could easily pull screenshots from the Game View, sized to exactly 1920×800 2KSCOPE HD, without making a build at all). It also added about two minutes to the initial loading of the scene when opening the project in the Unity editor. Not bad at all, though.

Today I also programmed a PlayMaker FSM to cycle through all 14 camera angles (set back in the storyboard stage) with a simple press of the space bar.

20150305-CameraCycleFSM
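
In plain C# terms, that FSM boils down to something like this (a sketch equivalent, not the actual PlayMaker graph; the cameras get assigned in the Inspector):

```csharp
using UnityEngine;

// Sketch: cycle through the anchored shot cameras with the space bar,
// enabling exactly one camera at a time.
public class CameraCycler : MonoBehaviour
{
    public Camera[] shotCameras; // the 14 storyboard camera angles
    int current;

    void Start()
    {
        ShowOnly(0);
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            current = (current + 1) % shotCameras.Length;
            ShowOnly(current);
        }
    }

    void ShowOnly(int index)
    {
        for (int i = 0; i < shotCameras.Length; i++)
            shotCameras[i].enabled = (i == index);
    }
}
```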

Then I broke the stand-in human positioning models out into a separate Maya file (and thus a separate GameObject in Unity) from the rest of the set, and replaced the sky-sphere and moon objects with a new Skybox used directly by Unity 5’s Lighting system.

I experimented with two screenshot plug-ins from the Unity Asset Store as well, which can allow you to capture at any resolution, like 8K, right out of the editor from any camera, including ones with final-shot-quality effects scripts attached. However, Screenshot Creator doesn’t seem to work with Unity 5 (update: we’re troubleshooting this now), and Instant Screenshot works but doesn’t capture camera effects. Another related issue: I’ll need to force the camera into a specific aspect ratio, because the framing currently depends on the viewport size, so it changes between the 2KSCOPE Game window in the editor and a build running on a 16:9 monitor. A functional screenshot plug-in would also resolve this, so here’s hoping they get it working in an update soon.
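
One plug-in-free fix I may try for the framing problem: force the camera into a letterbox/pillarbox via Camera.rect so the composition is identical in the editor and in any build (a sketch, assuming the 1920×800 target):

```csharp
using UnityEngine;

// Sketch: lock the camera to a fixed aspect ratio regardless of window
// shape by shrinking the viewport rect (letterbox or pillarbox).
[RequireComponent(typeof(Camera))]
public class ForceScopeAspect : MonoBehaviour
{
    public float targetAspect = 1920f / 800f; // 2.4:1 CinemaScope

    void Update()
    {
        Camera cam = GetComponent<Camera>();
        float windowAspect = (float)Screen.width / Screen.height;
        float scale = windowAspect / targetAspect;

        if (scale < 1f) // window too tall: letterbox top and bottom
            cam.rect = new Rect(0f, (1f - scale) / 2f, 1f, scale);
        else            // window too wide: pillarbox left and right
            cam.rect = new Rect((1f - 1f / scale) / 2f, 0f, 1f / scale, 1f);
    }
}
```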

Overall, at this point, I’m pretty convinced this is the way to go, especially as an indie that needs to work as efficiently as possible. Very exciting!



VFM02: Unity 5 Realtime Render Test

Within literally a single minute of opening Unity 5, I had created a new project, dropped in my (very work-in-progress) Maya scene, and bam: rendering exactly as I wanted by default… in realtime. I spent only an additional 30 minutes loading up and tweaking scripts to calibrate Unity’s camera, getting a more polished look (anti-aliasing, vignetting, chromatic aberration, slight bloom, depth of field, screenspace ambient occlusion, etc). The prototype image is nothing breathtaking (yet!) — this test is entirely about process, efficiency, and effectiveness.

20150304-Unity5Test01

For virtual filmmaking, this blows Maya out of the water for rendering, even though it isn’t as perfect as a Maya render can ultimately get. The trade-off is minimal. It took me half an hour to get this set up and only a split second to screenshot it at 2K+ res. Getting additional shots as I develop the set will be a simple process: open Unity (with changes made in Maya automatically updated in the scene), kick off and launch a build, and take screenshots while cycling around the same anchored cameras. We’re talking minutes instead of hours or days. Imagine: rendering a 60-second animation will take exactly 60 seconds… instead of two days… and it will look like filmic CG. We are now there.

Beyond instant renders, the ability to iterate and review has launched into warp speed, freeing up more time to try more things and get the shot just right, ultimately with less work. And if I want to change things much later in the project, going back, making the change, and getting a new capture will be trivial compared to Maya, especially for animated shots. Setting up new shots, like the one below, took seconds.

20150304-VFM02-Unity5-A01

I spent all day yesterday in Maya trying (and failing) to achieve a render as good as this: learning and tweaking render settings, quality levels, and testing lighting with lengthy (40 minutes to 2.5 hours) single still image test renders between 2K and 4K. With every tweak, I’d have to wait minutes for IPR to re-render even a 10% scaled preview so I could (barely) see how my changes were taking effect in as practical a workflow as it could provide.

Compared to that, the benefits of using a game engine to render in realtime are enormous, especially for an indie or small-team studio. Only now that game engines can render graphics this well is this even professionally viable. Literally, Unity 5 was released yesterday. Machinima is not a new concept, but machinima at this ‘next-gen’ level of quality certainly is new (and it’s only going to get better from here).

I wouldn’t expect a multi-million-dollar film and VFX house to jump over to this (though they too might find it appealing), but for one guy wanting to tell a cinematic story on a no/micro budget, it presents a huge advantage. Additionally, I can create assets for both a film and a future related video game simultaneously, preparing them all in-engine as I go. That is also enormous.

20150304-Unity5Test-Editor

There are a few issues I’m aware of right off the bat:

1) Resolution — My now four-year-old 27″ iMac has a maximum resolution of 2560×1440. When I bake a build and run it, that’s the best I can get for the realtime render from a direct screenshot or video capture. This is perfectly sufficient for the current (outgoing) standard of 2K (1080p), but I’m going to want to get 4K renders at least, if possible. There are two solutions to this off the top of my head: 1) get the new 5K iMac. Then screenshots can actually be more than 4K when running a build at fullscreen. 2) DVI-to-HDMI my existing iMac to a 4K UHDTV as a second monitor and run the build over there. Of course, both of these solutions are a tad ridiculous and certainly expensive.

UPDATE 2015.03.05: 3) The Unity Asset Store has plug-ins that will take screenshots at any resolution– even video capture. There’s a $20 one called Screenshot Creator, but upon loading it into Unity 5 I got a bunch of errors and it was unusable. There’s a free one, Instant Screenshot, which works but doesn’t render with camera effects. I could take an 8K render and scale it down to 4K in Photoshop to create anti-aliasing artificially, at least, but I’d lose all the other desired effects, especially depth of field. That’s probably the most viable solution so far, and it still beats the heck out of waiting hours for a Maya render at 4K. You have to wonder why a game engine doesn’t have the ability to take screenshots at any res built in, especially these days– but I guess that’s why the Asset Store is great, if often frustrating, as various assets break or don’t play nice with others.

2) Performance Framerate — Running this build on my four-year-old iMac, I got 17fps at first. All the camera effects dragged this down. I may be able to use a ‘naked’ camera (no effects) and replicate those effects in post via After Effects. If the objective is just to get still image plate renders into Photoshop and After Effects, this doesn’t even matter. But I’m increasingly considering going all-CG with filmmaking, at least for some of the projects I want to do, which means I’ll need to animate. Doing everything in Unity gives me significant animation advantages, I’d think, since I can review playback and iterate in realtime. So one solution is a much cheaper Windows gaming PC with super-beefy specs, used to capture engine video at a runtime performance of at least 30 to 60fps (the capture would probably only get 30fps at best, but for film I only need 24 and will drop the video down to 24fps in After Effects anyway). Alternatively, maybe the 5K iMac can run this stuff sufficiently, which would kill two birds with one stone. Either way, performance framerate will be exceptionally important, not only for clean capture but to ensure voice actor performances and animations sync up as intended on realtime playback.

(Later, I did some tests and discovered that the screenspace ambient occlusion on the camera tanked the framerate most. Camera Motion Blur even led to crashes consistently. Without them, I could get anywhere from 36 to 155fps (???) on my old machine, and I can simply bake ambient occlusion instead. With no camera effects, I get a perfect 60fps+.)

3) Limitations of Realtime Assets — The gap here is closing, especially now, but there will be certain limitations on realtime assets, whereas in Maya you can have as many polys and as high-res texture maps as your system can handle. It’s very likely that a hybrid between game-engine and Maya rendering for elements of varying complexity is the way to go, compositing them together in post. For example, I could use a game engine to render environments but Maya to render characters. Or I could do multiple passes in a game engine by simply turning layers of game objects on and off, or making them invisible, like in Maya (see the sketch below). But ultimately, this issue may actually be an advantage, as it will force me to develop assets through games practices that prep two types of product for the same IP simultaneously: films and video games.
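
As a sketch of that idea (the layer names here are hypothetical), in-engine passes can be as simple as swapping the capture camera’s culling mask between takes:

```csharp
using UnityEngine;

// Sketch: render 'passes' in-engine by restricting the camera to one
// layer per capture (environment pass, then character pass), and
// composite the resulting frame stacks in After Effects.
public class PassToggle : MonoBehaviour
{
    public Camera shotCamera;

    public void RenderEnvironmentOnly()
    {
        shotCamera.cullingMask = 1 << LayerMask.NameToLayer("Environment");
    }

    public void RenderCharactersOnly()
    {
        shotCamera.cullingMask = 1 << LayerMask.NameToLayer("Characters");
    }
}
```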

4) Less than Perfect Rendering — Today’s game engines do a damn good job, but of course not as good as something meticulously set up and rendered in multiple passes through Maya with VRay. Still, this is a trade-off I may just have to accept in exchange for the enormous time savings of working this way. To work around it, I can always enhance captures in Photoshop and/or After Effects manually, which still likely saves time. And I have to ask myself: do I really have the time or ability to build a render farm to do this in Maya practically anyway? That’s probably a stretch, especially for something longer than a short film. Finally, what if photo-realism isn’t the goal for a CG-film project, but instead a stylized hyperrealism? Then this matters much less. Look how amazing this recent short CG film is in its totally stylized look fit for a game engine, made by three people: Le Gouffre – inspiring! They even talk about how they enhanced every single shot with a ton of layers in After Effects to fully maximize their film’s look, so there really is no requirement that the plate render be perfect either way.

Unity-2KScope-Layout

Using this layout, I have a 1920×800 (2K CinemaScope) Game Viewport, so I can grab render captures directly out of the Editor without even having to kick off and launch a build at all.

Overall, not too bad. I’ll be thinking a lot about this going forward, and will probably upgrade my Unity 4 Pro licenses to Unity 5 Pro very soon. At this point, I’ll return to modeling the VFM02 set, knowing I can get a render of it at a desirable quality without having to painfully mess with Maya’s rendering and lights any further. This changes my virtual filmmaking formula, which is why I love prototyping– that’s exactly what VFM02 is all about.

UPDATE: 2015.03.04.23:11

Did a camera fly-through animation and video capture test. With all camera effects on, I’ll need stronger hardware to get perfectly smooth realtime playback, as it ended up dropping frames occasionally. Otherwise, I might be able to get a better framerate with effects off (UPDATE: yep, a perfect 60fps) and then simulate them in post via After Effects.

20150304-VFM02-Unity-Anim

For comparison, this same shot took Maya two days to render when I did it for VFM02’s initial graybox edit back in January. Using Unity 5 and lossless screen capture, I got this in a high-definition 2KSCOPE .mov within 30 minutes.



Maya Mentalray Final Gathering Test Render

Using the first model I made for VFM02’s virtual set, here I experimented with Maya Mentalray Final Gathering. I used tips from the first lesson in yesterday’s Gnomon lighting tutorial.

20150303-1K-LightingTestA

Click here for the full 4KSCOPE render.

This was far more straightforward to set up than the more complex full scene with many light sources. I’ll be going back to that full scene in Maya to wrestle it to this level. Later I’ll experiment with lighting it in Unity 5, for a possibly much, much faster set-up and realtime render capture of complex scenes.

