Portfolio Addition: WWE 2K17

I’ve added the work I did last year in collaboration with Alfonzo ‘Zo’ Burton on 2K Sports’ WWE 2K17 for PlayStation 4, Xbox One, PlayStation 3, Xbox 360, and Windows. Below is an example of a UI Motion Graphics concept I created for the production in Flash and After Effects:

I’m always happy to talk about possibilities with new clients, so if you have a project, feel free to contact me at hello@xanderdavis.com and we can discuss!

-Xander


www.xanderdavis.com
@XanderDavisLive
IMDb / LinkedIn


Portfolio Update: Moving Hazard

I’m happy to add to the portfolio a project I worked on full-time and remotely for a year with Illfonic (Denver) and Psyop Games (Los Angeles): Moving Hazard, built in Unreal Engine 4. I did UI / UX Design and Art Direction, produced After Effects motion graphics for the Front End background, and even wrote an original song for the menu music. It was a fantastic experience, and these clients were wonderful to work with!

I’m actively taking on new clients now, so if you’re hiring freelancers for UI / UX, e-mail me at hello@xanderdavis.com and let’s discuss!


www.xanderdavis.com
@XanderDavisLive
IMDb / LinkedIn


Moving Hazard at PAX!

A game I’ve been working on for nearly a year is out in the world at PAX this weekend! Check it out!

Three words: Zombies. As. Weapons.

Moving Hazard at PAX 2016

www.movinghazard.com

Looking forward to eventually talking about this more! But for now, you’ll have to be at PAX to experience it!


www.xanderdavis.com
@XanderDavisLive
IMDb / LinkedIn


New Client & Project: BOMBSHELL

Yet another new, awesome client! I’m helping lead UI with the international team at 3D Realms / Interceptor (Denmark) on the forthcoming blockbuster action game BOMBSHELL! www.bombshell.com

And if you’re hiring freelancers, don’t hesitate to send me an e-mail ( hello@xanderdavis.com ) and let’s discuss!

—X


www.xanderdavis.com
@XanderDavisLive
IMDb / LinkedIn


The Future of Video Games is the Future of Animation

It seems I’m not alone in my assessment that game engines are now viable rendering engines for CG filmmaking; I’ve been seeing this discussion pop up a lot lately. While working at triple-A games studios during the advent of the eighth console gen, I routinely heard rumblings that this was the big objective: to achieve ‘Hollywood-level CG in realtime’. With the focus on game engines at this year’s GDC, the idea seems to have truly been brought to the forefront of people’s minds.

Now Fast Company is echoing these thoughts as well, with their article ‘The Future of Video Games is the Future of Animation’.

I particularly liked the stat that a single frame from Pixar’s Monsters University took 29 hours to render using “what’s considered one of the fastest supercomputer rigs in the world: 2,000 computers with 24,000 processing cores”. In my own testing, a 4K frame was taking me about 4 hours in Maya’s Mental Ray, though my frame wasn’t nearly as complex as one set up by Pixar. Meanwhile, capturing a 4K frame out of Unity takes only 0.91 seconds on my five-year-old iMac. So: 29 hours, or 104,400 seconds, on a state-of-the-art supercomputer versus 0.91 seconds on a half-decade-old prosumer desktop. As this provides literally millions of dollars in cost and time savings to individual creators, it’s not even a question which route an indie should take for CG filmmaking.
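
As a quick sanity check on those numbers (acknowledging this is an apples-to-oranges comparison, since the Pixar frame is vastly more complex than my test frame), the per-frame math works out like this:

    using System;

    // Back-of-the-envelope comparison of the per-frame render times above.
    // Illustrative only: scene complexity differs enormously between the two.
    class RenderMath
    {
        static void Main()
        {
            double pixarSecondsPerFrame = 29.0 * 3600.0;  // 29 hours = 104,400 seconds
            double unitySecondsPerFrame = 0.91;           // uRecord 4K capture on my old iMac

            double speedup = pixarSecondsPerFrame / unitySecondsPerFrame;
            Console.WriteLine("Speedup: " + speedup.ToString("N0") + "x");  // ~114,725x
        }
    }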

And check out the recently released Kite demo, built in Unreal Engine 4 and running on the Nvidia Titan X, for yet another impressive example of where this is all going:

I haven’t tested Unreal Engine 4 myself yet, as it barely runs smoothly on my five-year-old iMac. I’m going to need a new computer anyway (it’s time), and I’m aiming for the 5K iMac as my primary machine, running Windows via Boot Camp. If that’s not enough, I’m also considering a super-beefy Windows box with the Nvidia Titan X as, basically, a dedicated render machine.

[Image: nvidia-titan]

But the new computer(s) will be a bit down the road. For now, I’m content to finish the virtual set for VFM02 in Unity 5 while I train myself up on CG modeling and texturing. I’ve also just finished figuring out cinematic cameras, lighting and rendering modes, and the full capture pipeline out of Unity, and I’m thrilled to have all of that working. I’m actually not that limited by my old iMac, primarily because uRecord can get animation out of Unity at any res as perfect, lossless, frame-by-frame PNGs, without runtime performance even being a factor. Then there’s deciding between Unity’s royalty-free offering and Unreal Engine’s 5% cut. Unity’s results aren’t bad at all, but Unreal Engine’s seem pretty obviously amazing, for a price.

[Image: starwars-rebels-texture-fidelity-1b]

Going through VFM02, I’m already beginning to appreciate the value of a stylized approach at indie scale. It really comes down to how easy these features are to implement in the engine editor, and whether that ease scales across an entire production. Meanwhile, you can actually see low-fidelity texturing on background assets of Star Wars Rebels, produced and aired by Disney (high-res example 1, example 2), so there’s a workable level of efficiency available through stylization. People tend to focus on the characters in the frame, especially the eyes, anyway. It really just depends on your production parameters and goals, so there’s huge value in custom-tailoring a style to fit them. The Kite demo certainly wasn’t made by one person in their spare time, but it nevertheless demonstrates an enormous leap forward in efficiency at smaller production scales.

Either way, it does seem that using game engines now for filmmaking is a trending idea. Go experiment and create something awesome! There’s never been a better time.


www.xanderdavis.com
@XanderDavisLive
IMDb / LinkedIn


VFM02: Mixamo Character Design, Animation, & Capture Test

Had some time today to experiment with the pipeline: Mixamo Fuse for rapid, intuitive character creation; Mixamo auto-rigging; applying a free default animation pack; importing into Unity; capturing video at various framerates through uRecord at 4K; and, because Unity cameras do not include motion blur by default (for gameplay reasons), applying synthetic motion blur at various framerates in After Effects; then finally exporting a 2KSCOPE 24fps H.264 movie, shown above uploaded to YouTube. I also learned some YouTube customization settings, such as how to loop a video, which was way, way more complicated than it needed to be… But yay.

In testing the animations in Unity, I just ran the entire default pack. This test was mainly to put uRecord and Unity through their paces and ensure smooth, correctly timed animation could be captured out of the engine regardless of the hardware’s realtime framerate. Turns out, this is totally the case. All it needs afterward is some synthetic motion blur in After Effects; without it, the result looks uncanny and definitely more ‘computery’.

Here’s a bulleted list of things I learned or confirmed for my pipeline from today’s experiments:

  • Can design and auto-rig character in minutes via Fuse; totally worth it for even just establishing a base mesh
  • Can get this character correctly into Unity
  • Only issue seems to be the eyelashes not masking out correctly for unknown reasons
  • Can create base mesh in Fuse and customize more directly in Mudbox/Zbrush and Maya
  • Can create custom elements in Maya to bring into Fuse
  • Once in Unity, animations worked as expected
  • Perfect framerate capture and timing confirmed through correcting framerate in ‘Interpret Footage’ in After Effects on the PNG stack
  • Can render everything at once since runtime fps is irrelevant through uRecord, at any res – capture time is still the same, 0.91 seconds per frame, regardless of what’s being rendered in the scene, eliminating the usual film CG process of rendering in passes and compositing to assemble a single shot
  • Can add motion blur in AE (Pixel Motion Blur)
  • Add FilmConvert in AE with the default source setting and 50% Color, 50% Curves, 25% Grain
  • Tried capturing from Unity at 24, 30, and 60 fps, experimenting with motion blur interpolation from each rate, then rendering final 24fps comps for each. The best results, looking least ‘CG animation-ish’ and most ‘filmic’, came from Pixel Motion Blur over 24fps source. From higher framerates, Pixel Motion Blur has more data to interpolate, creating a smoother, cleaner render down to 24fps, but it ultimately looked cartoony. Surprisingly, Pixel Motion Blur on 24fps source looked most filmic; ironic that having less data to work with still produced a better final effect
  • Thus, I only need to capture in Unity with uRecord at 24 fps, which will save substantial time and HDD space over the long haul.
  • Takes After Effects about 1 minute per second of final video to render with Pixel Motion Blur and FilmConvert; 16 seconds takes about 13.5 minutes to kick out of AE at 2K from a 4K comp and source (see the rough budget math after this list)
  • Despite capturing in Unity, bringing frames into AE, setting up effects, and kicking out a final AE comp, this process still dramatically beats Maya rendering speed. Exponentially.
  • Theoretically can animate the face with Mixamo Face Plus and a webcam, but Face Plus is currently broken on my 2011 iMac running Mac OS X Yosemite
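
Putting a few of those numbers together, here’s a rough time budget for the 16-second test shot, using the rates above (a sketch only; the actual AE render came in a bit under the estimate):

    using System;

    // Rough time budget for the 16-second test shot: uRecord capture at
    // 0.91 s/frame for 24fps output, plus roughly one minute of After
    // Effects render time per second of finished video.
    class ShotBudget
    {
        static void Main()
        {
            double shotSeconds    = 16.0;
            double captureMinutes = shotSeconds * 24 * 0.91 / 60.0;  // ~5.8 min out of Unity
            double aeMinutes      = shotSeconds * 1.0;               // estimate; observed ~13.5 min

            Console.WriteLine("Capture: " + captureMinutes.ToString("F1") + " min");
            Console.WriteLine("AE render: ~" + aeMinutes.ToString("F1") + " min");
        }
    }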

Here’s an overview of the amazing Mixamo Fuse. Ever since I first saw EVE Online’s character creator back in 2010, I’ve been hoping a stand-alone app would be created to do this kind of general character creation outside of a game. It’s here, and it’s super easy to use.

I was also blown away by the prospect of using Mixamo Face Plus to literally sit at my desk and, through a simple webcam, act out facial performance for all CG characters with ease.

However, when I tried the stand-alone demo, it crashed on Mac OS X Yosemite no matter which resolution or quality setting I chose, and when I tried the Unity plug-in, there were several errors from obsolete code (likely because it hasn’t been updated for the new Unity 5, I hope). So it’s currently unusable. Assuming that’s all it is, I’m sure Mixamo will resolve these issues in an update soon. If so, wow: the ability to act for all characters and map those performances onto them as recorded animations in realtime is enormously valuable.

Just had a short time to experiment, so that’s all for tonight!

UPDATE: 2015.03.18
A rep from Mixamo saw this post and contacted me on Tuesday. She confirmed Face Plus currently does not work with Unity 5, that they’re looking into updating it, and that they’ll be making announcements about their plans soon (exciting!). She also gave me a tip on the eyelashes: the transparency usually isn’t connected by default. I’ll need to duplicate the body material and connect the alpha channel of the diffuse map to the transparency channel of the material. That they found my post and reached out with solutions is awesome customer service! Once I find a moment to try this, I’ll post the results.
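
For reference, here’s roughly what that fix might look like if scripted in Unity rather than done by hand in the material inspector. This is a minimal sketch, assuming the eyelashes have their own renderer and that a legacy cutout shader is acceptable; the component and its placement are my own hypotheticals, not Mixamo’s instructions:

    using UnityEngine;

    // Hypothetical sketch of the eyelash fix described above: duplicate the
    // imported body material and route the diffuse map's alpha through a
    // transparent (cutout) shader so the lashes mask out correctly.
    public class EyelashMaterialFix : MonoBehaviour
    {
        void Start()
        {
            Renderer lashes = GetComponent<Renderer>();         // renderer holding the eyelash material
            Material fixedMat = new Material(lashes.material);  // duplicate; leaves the body material untouched
            // The legacy cutout shader reads the diffuse texture's alpha as a hard mask.
            fixedMat.shader = Shader.Find("Transparent/Cutout/Diffuse");
            lashes.material = fixedMat;
        }
    }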


www.xanderdavis.com
@XanderDavisLive
IMDb / LinkedIn


VFM02: More Unity 5 Realtime Rendering Tests

Under Unity 5’s Lighting system, today I experimented with Ambient Occlusion and Final Gathering driven by the values of the new Skybox, all through Continuous Baking. It took about two hours to calculate, but the result then became part of the scene throughout, in realtime, from any angle. Getting just one shot like this out of Maya would’ve taken hours of rendering, and the result would be that single shot only, lost until rendered again.

Here are a few of the shots showing off Unity 5’s realtime rendering and ultra-fast continuous baking abilities on a still mostly graybox work-in-progress environment. It’s actually perfect that I’m mainly testing this with graybox, so I can directly see Unity’s lighting power at work without obfuscation from texture details.

[Images: VFM02-UnityRenderTest2-A00, VFM02-UnityRenderTest2-A01, VFM02-UnityRenderTest2-A04]

Even after the bake is done, the Lighting panel says there are “No Lightmaps 0 B”, and under the Occlusion tab nothing has been baked there either. These continuous bakes must be generating maps somewhere, but Unity isn’t reporting that they exist. Kicking off a build took an extra three minutes as it compiled all of this (though I could easily pull screenshots from the Game View, sized to exactly 1920×800 2KSCOPE HD, without making a build at all). It also added about two minutes to the initial load of the scene when opening the project in the Unity editor. Not bad at all, though.
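
For anyone who’d rather drive this from code than the Lighting panel, Unity 5’s editor API exposes the same switches. A small sketch (it must live in an Editor folder, and the menu names are my own):

    using UnityEditor;
    using UnityEngine;

    // Editor-only helper: switch between Unity 5's continuous ("Iterative")
    // GI baking and an explicit on-demand bake of the open scene.
    public static class BakeHelper
    {
        [MenuItem("VFM/Enable Continuous Baking")]
        static void EnableContinuous()
        {
            Lightmapping.giWorkflowMode = Lightmapping.GIWorkflowMode.Iterative;
        }

        [MenuItem("VFM/Bake GI Now")]
        static void BakeNow()
        {
            Lightmapping.giWorkflowMode = Lightmapping.GIWorkflowMode.OnDemand;
            Lightmapping.BakeAsync();  // non-blocking; progress shows in the status bar
        }
    }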

Today I also programmed a PlayMaker FSM to cycle through all 14 camera angles (set back in the storyboard stage) with a simple press of the space bar.

[Image: 20150305-CameraCycleFSM]
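
For those not using PlayMaker, the equivalent behavior is only a few lines of plain Unity C#. This is a sketch of the same idea, not the actual FSM; the camera array gets filled in the Inspector:

    using UnityEngine;

    // Plain-C# equivalent of the PlayMaker FSM: press Space to cycle
    // through a fixed set of pre-placed shot cameras.
    public class CameraCycler : MonoBehaviour
    {
        public Camera[] shotCameras;  // assign the 14 storyboard cameras in the Inspector
        private int current = 0;

        void Start()
        {
            // Enable only the first camera to begin with.
            for (int i = 0; i < shotCameras.Length; i++)
                shotCameras[i].enabled = (i == 0);
        }

        void Update()
        {
            if (shotCameras.Length == 0) return;
            if (Input.GetKeyDown(KeyCode.Space))
            {
                shotCameras[current].enabled = false;
                current = (current + 1) % shotCameras.Length;
                shotCameras[current].enabled = true;
            }
        }
    }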

Then I broke the stand-in human positioning models out into a separate Maya file (and thus a separate GameObject in Unity) from the rest of the set, and replaced the sky-sphere and moon objects with a new Skybox used directly by Unity 5’s Lighting system.

I also experimented with two screenshot plug-ins from the Unity Asset Store, which should allow capturing at any resolution, like 8K, right out of the editor from any camera, including ones with final-shot-quality effects scripts attached. However, Screenshot Creator doesn’t seem to work with Unity 5 (update: we’re troubleshooting this now), and Instant Screenshot works but doesn’t capture camera effects. A related new issue is that I’ll need to force the camera into a specific aspect ratio: because framing is currently dependent on the viewport size, it changes between the 2KSCOPE Game Window in the editor and a build running on a 16:9 monitor. A functional screenshot plug-in would also resolve this, so here’s hoping they get it working in an update soon. In the meantime, locking the aspect ratio in a script is trivial, as sketched below.
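
Here’s what that aspect lock might look like, a minimal sketch assuming each shot camera carries this component; 1920×800 matches the 2KSCOPE framing used throughout:

    using UnityEngine;

    // Forces a fixed 2.40:1 (1920x800) aspect on the camera it's attached to,
    // so framing stays identical between the editor Game view and a 16:9 build.
    public class LockAspect : MonoBehaviour
    {
        void Start()
        {
            GetComponent<Camera>().aspect = 1920f / 800f;
        }
    }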

Overall, at this point, I’m pretty convinced this is the way to go, especially as an indie that needs to work as efficiently as possible. Very exciting!


www.xanderdavis.com
@XanderDavisLive
IMDb / LinkedIn


VFM02: Unity 5 Realtime Render Test

Within literally a single minute of opening Unity 5, I had created a new project, dropped in my (very work-in-progress) Maya scene, and bam: rendering exactly as I wanted by default… in realtime. I spent only an additional 30 minutes loading up and tweaking scripts to calibrate Unity’s camera for a more polished look (anti-aliasing, vignetting, chromatic aberration, slight bloom, depth of field, screenspace ambient occlusion, etc.). The prototype image is nothing breathtaking (yet!) — this test is entirely about process, efficiency, and effectiveness.

[Image: 20150304-Unity5Test01]

For virtual filmmaking, this blows Maya out of the water for rendering, even if it isn’t as perfect as a Maya render can ultimately get; the trade-off is minimal. It took me half an hour to get this set up and only a split second to screenshot it at 2K+ res. Getting additional shots as I develop this set will be a simple process of opening Unity again (with changes made in Maya automatically updated in the scene), kicking off and launching a build, and taking screenshots while cycling through the same anchored cameras. We’re talking minutes, instead of hours or days. Imagine, capturing a 60-second animation in exactly 60 seconds… instead of two days… and it will look like filmic CG. We are now there.

Beyond getting instant renders, the ability to iterate and review has launched into warp speed, freeing up more time to try even more things and get the shot just right, ultimately through less work. And if I want to change things much later in the project, going back, making the change, and getting a new capture will be trivial compared to Maya, especially for animated shots. Setting up new shots, like the one below, took seconds.

[Image: 20150304-VFM02-Unity5-A01]

I spent all day yesterday in Maya trying (and failing) to achieve a render this good: learning and tweaking render settings and quality levels, and testing lighting with lengthy single-still test renders (40 minutes to 2.5 hours each) between 2K and 4K. With every tweak, I’d have to wait minutes for IPR to re-render even a 10%-scaled preview just to (barely) see how my changes were taking effect — and that’s as practical a workflow as Maya could provide.

Compared to that, the benefits of using a game engine to render in realtime are enormous, especially for an indie or small-team studio. Only now that game engines can render graphics this well is this even professionally viable; literally, Unity 5 was released yesterday. Machinima is not a new concept, but machinima at this ‘next-gen’ level of quality certainly is new indeed (and it’s only going to get better from here).

I wouldn’t expect a multi-million-dollar funded film and VFX house to jump over to this (though they too might find it appealing), but for one guy wanting to tell a cinematic story on a no/micro budget, this presents a huge advantage. Additionally, I can create assets for both a film and a future related video game simultaneously, preparing them all in-engine as I go. That is also enormous.

[Image: 20150304-Unity5Test-Editor]

There are a few issues I’m aware of right off the bat:

1) Resolution — My now four-year-old 27″ iMac has a maximum resolution of 2560×1440. When I make a build and run it, that’s the best I can get for a realtime render from a direct screenshot or video capture. That’s perfectly sufficient for the current (outgoing) 2K (1080p) standard, but I’m going to want 4K renders at least, if possible. Two solutions off the top of my head: a) get the new 5K iMac, whose fullscreen screenshots can actually exceed 4K; or b) run DVI-to-HDMI from my existing iMac to a 4K UHDTV as a second monitor and run the build there. Of course, both of these solutions are a tad ridiculous and certainly expensive.

UPDATE 2015.03.05: A third solution: the Unity Asset Store has plug-ins that will take screenshots at any resolution, and even capture video. There’s a $20 one called Screenshot Creator, but upon loading it into Unity 5 I got a bunch of errors and it was unusable. There’s a free one, Instant Screenshot, which works but doesn’t render camera effects. I could get an 8K render and scale it down to 4K in Photoshop to artificially create anti-aliasing at least, but I’d lose all the other desired effects, especially depth of field. That’s probably the most viable solution so far, and it still beats the heck out of waiting hours for a 4K Maya render. You have to wonder why taking screenshots at any resolution isn’t a built-in engine feature, especially these days. But I guess that’s why the Asset Store is great, if often frustrating, since assets frequently break or don’t play nice with each other.
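
For what it’s worth, Unity does ship one basic built-in option: Application.CaptureScreenshot takes a supersize multiplier, which renders at a multiple of the current window resolution rather than an arbitrary target res (which may be exactly why these plug-ins exist). A quick sketch, with the key binding and filename as my own assumptions; whether screen-space camera effects survive the supersample is worth testing:

    using UnityEngine;

    // Unity's built-in screenshot call, supersampled 4x: a 2560x1440 view
    // yields a 10240x5760 PNG that can be scaled down for extra anti-aliasing.
    // (Unity 5 API; later versions moved this to ScreenCapture.CaptureScreenshot.)
    public class HiResCapture : MonoBehaviour
    {
        void Update()
        {
            if (Input.GetKeyDown(KeyCode.F12))
                Application.CaptureScreenshot("capture.png", 4);
        }
    }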

2) Performance Framerate — Running this build on my four-year-old iMac, I got 17fps at first; the camera effects dragged it down. I may be able to run a ‘naked’ camera (no effects) and replicate those effects in post via After Effects. If the objective is just to get still image plate renders into Photoshop and After Effects, this doesn’t even matter. But I’m increasingly considering going all-CG with filmmaking, at least for some of the types of projects I want to do, which means I’ll need to animate. Doing everything in Unity gives me significant animation advantages, I’d think, since I can review playback and iterate in realtime. So one solution is a much cheaper Windows gaming PC with super-beefy specs, used to capture engine video at a runtime of at least 30 to 60fps (the capture itself would probably only get 30fps at best, but for film I only need 24 and will drop the video down to 24fps in After Effects anyway). Alternatively, maybe the 5K iMac can run this stuff sufficiently, in which case getting it would kill two birds with one stone. Either way, performance framerate will be exceptionally important, not only for a clean capture but also to ensure voice actor performances and animations all sync up as intended on realtime playback.

(Later, I did some tests and discovered that the screenspace ambient occlusion on the camera tanked the framerate the most, and Camera Motion Blur consistently led to crashes. Without them, I could get anywhere from 36 to 155fps (???) on my old machine, and I can simply bake ambient occlusion instead. With no camera effects at all, I get a perfect 60fps+.)
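
Scripting that preview/capture toggle is trivial. A sketch, assuming the image effects are standard components on the shot camera; the hotkey is my own choice:

    using UnityEngine;

    // Toggles every other component on the shot camera (SSAO, motion blur,
    // and other image effects) with one key: effects off for smooth preview,
    // back on for capture. Keep this on a camera that carries only image effects.
    public class EffectsToggle : MonoBehaviour
    {
        void Update()
        {
            if (Input.GetKeyDown(KeyCode.E))
            {
                foreach (MonoBehaviour effect in GetComponents<MonoBehaviour>())
                {
                    if (effect != this)  // don't disable this toggle script itself
                        effect.enabled = !effect.enabled;
                }
            }
        }
    }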

3) Limitations of Realtime Assets — The gap is closing, especially now, but there will be certain limitations on realtime assets, whereas in Maya you can have as many polys and as high-res texture maps as your system can handle. Very likely, a hybrid between game-engine and Maya rendering for elements of varying complexity is the way to go, compositing them together in post. For example, I could use a game engine to render environments but use Maya to render characters. Or I could do multiple passes in a game engine by simply toggling layers of game objects on and off or making them invisible, like in Maya. But ultimately, this issue may actually be an advantage, as it will force me to develop assets through games practices that prep two types of product for the same IP simultaneously: films and video games.

4) Less-than-Perfect Rendering — Today’s game engines do a damn good job, but of course not as good as something meticulously set up and rendered in multiple passes through Maya with V-Ray. Still, this may be something I just have to accept as a trade-off for the enormous time savings of working this way. To work around it, I can always enhance captures in Photoshop and/or After Effects manually, which likely still saves time. And I have to ask myself: do I really have the time, or any ability, to build a render farm to do this practically in Maya anyway? That’s probably a stretch, especially for anything longer than a short film. Finally, what if photo-realism isn’t the goal for a CG-film project, but instead a stylized hyperrealism? Then this matters much less. Look how amazing the recent short CG film Le Gouffre is, with its totally stylized look fit for a game engine, made by three people — inspiring! They even talk about how they enhanced every single shot with a ton of layers in After Effects to fully maximize their film’s look, so there really is no requirement that the plate render be perfect either way.

[Image: Unity-2KScope-Layout]

Using this layout, I have a 1920×800 (2K CinemaScope) Game Viewport, so I can grab render captures directly out of the Editor without even having to kick off and launch a build at all.

Overall, not too bad. I’ll be thinking a lot about this going forward, and will probably upgrade my Unity 4 Pro licenses to Unity 5 Pro very soon. At this point, I’ll return to modeling the VFM02 set, knowing I can get a render of it at a desirable quality without having to painfully mess with Maya’s rendering and lights any further. This changes my virtual filmmaking formula, which is why I love prototyping — that’s exactly what VFM02 is all about.

UPDATE: 2015.03.04.23:11

Did a camera flythrough animation and video capture test. With all camera effects on, I’ll need stronger hardware to get perfectly smooth realtime playback, as it occasionally dropped frames. Otherwise, I might be able to get a better framerate with effects off (UPDATE: yep, a perfect 60fps) and then simulate them in post via After Effects.

[Image: 20150304-VFM02-Unity-Anim]

For comparison, this same shot took Maya two days to render when I did it for VFM02’s initial graybox edit back in January. Using Unity 5 and lossless screen capture, I got this in a high-definition 2KSCOPE .mov within 30 minutes.


www.xanderdavis.com
@XanderDavisLive
IMDb / LinkedIn
