Tonight I stumbled upon uRecord in the Unity Asset Store, which works like a charm. It can capture stills at the set resolution, with upscaling (achieving flawless 4K+ capture), and it can even capture full-quality animation renders directly out of the Unity editor, without having to create a build at all.
Click the image below for the full 4K render, captured in literally less than a second:
Starting with Cinema Pro Cams, I then set up a basic fly-through test animation. The f-stop was set high so the scene stayed almost entirely in depth-of-field focus as the camera moved through the set. Every time I tried adding the Camera Motion Blur script to the camera, however, Unity crashed. Overall though, uRecord was able to flawlessly capture 4K CinemaScope frames of animation with all camera effects and lighting intact. After Effects then automatically compiled those frames into video when I dragged and dropped the sequence in (I'd have to do the same with a Maya render, too).
Once in AE, I could tweak the captured imagery like any other video, such as applying FilmConvert (though I was able to tweak levels directly on Unity's camera through Chromatica). From AE, I could render out a 4K MPEG-4 or, especially for ease of playback and streaming, a 2K H.264. In the end, I had a 2K HD CinemaScope .mov made in Unity and AE from start to finish in about 10 minutes total.
Please excuse the very work-in-progress set here; the point is that a 4K CinemaScope capture from Unity 5 worked. Be sure to play the 4K-to-2K example here in 1080p. (Update: now with Pixel Motion Blur applied.)
uRecord would run the scene and step through time to match the set framerate (here, the 24fps film standard), saving the frames out as 4K lossless PNGs. What's truly remarkable is that capture is now decoupled from runtime performance: the frames per second the scene actually runs at has been made generally irrelevant (yay! this means I don't need to buy a new heavy-duty graphics workhorse machine to get animation capture at any resolution, namely 4K, in actual correct playback time). For $30, uRecord has solved two of the four big issues I knew would be challenges a week ago. The remaining two aren't really issues at all, if one can accept the tradeoff in benefits here (and I totally can).
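The decoupling works because the recorder advances simulation time by a fixed step per captured frame instead of following the wall clock. Here's a minimal Python sketch of the idea (the names and structure are mine for illustration, not uRecord's internals):

```python
# Sketch of fixed-timestep capture: each saved frame advances simulation
# time by exactly 1/24 s, no matter how long the frame takes to render.
CAPTURE_FPS = 24
DT = 1.0 / CAPTURE_FPS  # simulation seconds per captured frame


def frame_timestamps(n_frames):
    """Simulation-time stamp of each captured frame."""
    return [i * DT for i in range(n_frames)]


# A 2-second shot yields 48 frames spaced exactly DT apart in sim time,
# even if each frame took a full second of wall-clock time to render.
stamps = frame_timestamps(48)
```

Played back at 24fps, those frames reproduce correct real time regardless of how slowly the scene ran during capture, which is why runtime framerate stops mattering.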
On my 2011 27″ iMac, a still image from a similar scene with equivalent lighting and rendering settings, rendered at 2K directly in Maya (Mental Ray), would've taken approximately 2 hours. An instantaneous click versus 2 hours for the same shot is an enormous time savings. In Maya, new renders would also have to be made at every point of iteration and checking. In Unity's What-You-See-Is-What-You-Get (WYSIWYG) workflow, by contrast, iteration on fully rendered results happens in realtime. The key is that WYSIWYG realtime rendering is now (relatively) good enough to stand up against Maya-based rendering (though of course it can't beat it outright in quality yet).
For animation at the standard film rate of 24 frames per second, a 60-second shot (1,440 4K frames) would take Maya (at 4 hours per 4K frame) 5,760 hours to render: 240 full days, or 8 solid 30-day months (20,736,000 seconds). If I started the shot today, it would be done by November. Testing this with uRecord from Unity, however, it took on average less than a second (0.91s) per 4K frame, or 1,310.4 seconds (21 minutes 50 seconds, call it 22 minutes) to render all 1,440 frames. That's 1,310.4 seconds against 20,736,000 seconds: Unity delivers 4K production render frames for animation in about 0.006% of Maya's time, a roughly 15,800x speedup (14,400 seconds per Maya frame / 0.91 seconds per Unity frame). At 2K (2 hours per Maya frame), it would still be a roughly 7,900x speedup. Unless I were to buy and build a render farm (which isn't in the cards), 4K CG filmmaking at an individual indie scale with Maya animation rendering isn't even a practical option!
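The shot-level arithmetic can be sanity-checked in a few lines (the 4-hours-per-4K-frame figure is an extrapolation from the roughly 2-hour 2K Maya render measured earlier; the variable names are mine):

```python
# Rough render-time comparison for a 60-second, 24fps, 4K shot.
FPS = 24
SHOT_SECONDS = 60
frames = FPS * SHOT_SECONDS                            # 1,440 frames

MAYA_HOURS_PER_FRAME = 4                               # extrapolated estimate
maya_seconds = frames * MAYA_HOURS_PER_FRAME * 3600    # 20,736,000 s (240 days)

UNITY_SECONDS_PER_FRAME = 0.91                         # measured average
unity_seconds = frames * UNITY_SECONDS_PER_FRAME       # 1,310.4 s (~22 min)

speedup = maya_seconds / unity_seconds                 # ~15,824x
fraction_pct = 100 * unity_seconds / maya_seconds      # ~0.0063% of Maya's time
```

Swapping in 2 hours per frame for the 2K case halves the Maya total, giving a speedup of roughly 7,900x instead.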
This means that for every single 4K frame Maya renders, Unity can produce roughly 15,800 frames in the same amount of time. The chart below visualizes this: the Maya column is exaggerated by a factor of 50x just so you can see a mark in its column at all! It's no contest!
A render through Maya with the Mental Ray or V-Ray rendering engines would certainly be higher quality, but for a small independent production, especially one helmed primarily by one person, the time (and thus cost) savings of rendering in realtime are priceless; they're what make a CG filmmaking project under those circumstances feasible at all. If you know your daily burn rate, rendering a feature-length film in Unity instead of Maya means, by 'a penny saved is a penny earned' logic, you've essentially pulled a dump-truck of cash up to your production's doorstep in cost savings, from funds you never had to begin with.
If your burn rate as an indie were, say, $100 a day, and you rendered a feature-length (120-minute) 4K CG film in Maya only once, that's 172,800 frames at 4 hours per frame: 691,200 hours, or 28,800 days (about 79 years). If you started rendering today, it would be done around 2094. Meanwhile, Unity would take about 44 hours (under 2 days) to render the entire feature. At that burn rate, rendering through Unity provides a value against Maya rendering of at least $2.9 million, and almost certainly more (since you'd render work-in-progress scenes more than once): likely tens of millions of dollars in production output value, now available to the indie… for free. Thanks Unity!
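Running the same back-of-the-envelope math for the feature-length case (the burn rate and per-frame times are the assumptions stated above, not measurements at this scale):

```python
# Cost of a single full render pass of a 120-minute, 24fps, 4K film.
FPS = 24
FILM_MINUTES = 120
frames = FPS * FILM_MINUTES * 60                        # 172,800 frames

MAYA_HOURS_PER_FRAME = 4                                # extrapolated estimate
maya_days = frames * MAYA_HOURS_PER_FRAME / 24          # 28,800 days (~79 years)

UNITY_SECONDS_PER_FRAME = 0.91                          # measured average
unity_hours = frames * UNITY_SECONDS_PER_FRAME / 3600   # ~43.7 hours

BURN_RATE_PER_DAY = 100                                 # assumed indie burn rate, $/day
savings = maya_days * BURN_RATE_PER_DAY                 # ~$2.88M per render pass
```

And that's for one render pass; every work-in-progress re-render multiplies the gap.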
I would wager that by the next, ninth generation of console graphics (around 2020, or 2023 at the latest), realtime game engine quality will match if not surpass Mental Ray and even V-Ray. By then, it follows that no one, not even Hollywood, will be rendering the old-fashioned way; realtime engines will be the standard method. For an indie, adopting this a generation early makes so much sense that it seems to me like the only viable method at this point. Who in their right mind would turn down rendering in a small fraction of one percent of the time?
In summary, this should all be obvious anyway: of course realtime rendering is more efficient. What changes the equation is that realtime game engines are now close to Hollywood filmic CG, democratically available, and super-viable for virtual filmmaking and VFX at a no-budget scale. That feels hyperbolic, but it's just simple math, and an amazing time to be alive. Unity 5 was literally released last week, leaping forward to the eighth generation of console graphics with serious R&D behind it. For me, this all came down to whether I could get near-Maya-quality 4K lossless captures from a game engine, in both still and animation forms. I can, on a four-year-old iMac. Which means: goodbye, Maya rendering. The value here, I think, is simply in having gone through it myself, from theory to an actual proof-of-concept pipeline, all with a free-to-use, royalty-free game engine.
Once you do it, the potential this unlocks is thrilling!
Now to get back to modeling and texturing the virtual set, knowing it's worth it! VFM02 is proving extremely valuable as a prototype experiment; I've learned so much already in such a short time!