Had some time today to experiment with Mixamo Fuse for rapid, intuitive character creation; Mixamo auto-rigging; applying a free default animation pack; importing into Unity; capturing video at various framerates through uRecord at 4K; and, because Unity cameras do not apply motion blur by default for gameplay reasons, adding synthetic motion blur at various framerates in After Effects, until finally exporting a 2K Scope 24fps H.264 final movie, shown above uploaded to YouTube. I also learned some YouTube customization settings, such as how to loop a video, which was way, way more complicated than it needed to be… But yay.
In testing the animations in Unity, I simply ran the entire default pack. The goal was mainly to put uRecord and Unity through their paces and confirm that smooth, correctly timed animation could be captured out of the engine regardless of the hardware's runtime fps. Turns out, this is totally the case. All it needs afterward is some synthetic motion blur in After Effects; without it, the result looks uncanny and definitely more 'computery'.
Here’s a bulleted list of things I learned or confirmed for my pipeline from today’s experiments:
- Can design and auto-rig character in minutes via Fuse; totally worth it for even just establishing a base mesh
- Can get this character correctly into Unity
- Only issue seems to be the eyelashes not masking out correctly for unknown reasons
- Can create base mesh in Fuse and customize more directly in Mudbox/Zbrush and Maya
- Can create custom elements in Maya to bring into Fuse.
- Once in Unity, animations worked as expected.
- Perfect framerate capture and timing confirmed by correcting the framerate via ‘Interpret Footage’ in After Effects on the PNG stack
- Can render everything at once, at any resolution, since runtime fps is irrelevant through uRecord – capture time holds steady at 0.91 seconds per frame regardless of what’s being rendered in the scene, eliminating the usual film-CG process of rendering in passes and compositing to assemble a single shot
- Can add motion blur in AE (Pixel Motion Blur)
- Can add FilmConvert in AE with the default source setting and 50% Color, 50% Curves, 25% Grain
- Tried capturing from Unity at 24, 30, and 60 fps, experimenting with motion blur interpolation from each rate, then rendering a final 24fps comp for each. The best results – least ‘CG animation-ish’ and most ‘filmic’ – came from Pixel Motion Blur over 24fps footage. With higher source framerates, Pixel Motion Blur has more data to interpolate, producing a smoother, cleaner conform down to 24fps, but the result ultimately looked cartoony. Surprisingly, Pixel Motion Blur on native 24fps looked most filmic; ironic that having less data to work with still produced the better final effect.
- Thus, I only need to capture in Unity with uRecord at 24 fps, which will save substantial time and HDD space over the long haul.
- Takes After Effects about 1 minute per second of final video to render with Pixel Motion Blur and FilmConvert applied; a 16-second clip takes about 13.5 minutes to kick out of AE at 2K from a 4K comp and source.
- Despite capturing in Unity, bringing frames into AE, setting up effects, and kicking out a final AE comp, this process still dramatically beats Maya rendering speed. Exponentially.
- Theoretically can animate faces with Mixamo and a webcam – but Face Plus is currently broken on Mac OS X Yosemite on my 2011 iMac.
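A quick back-of-envelope sketch of the pipeline timing above, using the figures from my tests (0.91 seconds of uRecord capture per frame, and roughly one minute of AE render time per second of final video – the measured 13.5 minutes for a 16-second clip comes in slightly under that rule of thumb; the function names here are just for illustration):

```python
# Back-of-envelope pipeline timing, based on the measured figures above.

CAPTURE_SECONDS_PER_FRAME = 0.91  # uRecord capture cost, independent of scene/resolution
AE_RENDER_MIN_PER_SECOND = 1.0    # approx. AE render cost with Pixel Motion Blur + FilmConvert

def capture_minutes(video_seconds, fps):
    """Minutes uRecord spends capturing a clip of the given length."""
    return video_seconds * fps * CAPTURE_SECONDS_PER_FRAME / 60

def total_minutes(video_seconds, fps):
    """Capture plus After Effects render time, in minutes."""
    return capture_minutes(video_seconds, fps) + video_seconds * AE_RENDER_MIN_PER_SECOND

clip = 16  # seconds of final video

# Capturing at 24 fps instead of 60 fps writes 60% fewer PNG frames to disk
# and cuts capture time proportionally.
print(f"24 fps capture: {capture_minutes(clip, 24):.1f} min, {clip * 24} frames")
print(f"60 fps capture: {capture_minutes(clip, 60):.1f} min, {clip * 60} frames")
print(f"Total at 24 fps (capture + AE): {total_minutes(clip, 24):.1f} min")
```

So even the full capture-plus-AE round trip for a 16-second clip stays in the tens of minutes, which is where the savings over a Maya render come from.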
Here’s an overview of the amazing Mixamo Fuse. Ever since I first saw EVE Online’s character creator back in 2010, I’ve been hoping a stand-alone app would be created to do this for general character creation outside of a game. It’s here, and it’s super easy to use.
I was also blown away by the prospect of using Mixamo Face Plus to literally sit at my desk and, through a simple webcam, act out facial performance for all CG characters with ease.
However, when trying the stand-alone demo, it crashed on Mac OS X Yosemite no matter which resolution or quality setting I chose, and when trying the Unity plug-in, there were several errors from obsolete code (likely because it hasn’t been updated for the new Unity 5, I hope). So it’s currently unusable. Assuming it’s just that, I’m sure Mixamo will resolve these issues in an update soon. If so, wow – the ability to act for all characters and map those performances to them as recorded animations in realtime is enormously valuable.
Just had a short time to experiment, so that’s all for tonight!
A rep from Mixamo saw this post and contacted me on Tuesday. She confirmed Face Plus currently does not work with Unity 5, that they’re looking into updating it, and that they’ll be making announcements about their plans soon (exciting!). Then, she gave me a tip on the eyelashes: the transparency usually isn’t connected by default. I’ll need to duplicate the body material and connect the alpha channel of the diffuse map to the transparency channel of the material. That they found my post and reached out to me with solutions is awesome customer service! So once I find a moment to try this, I’ll post the results!