SAN FRANCISCO—Every year at the Game Developers Conference, a handful of competing companies show off their latest motion-capture technology, which transforms human performances into 3D animations that can drive in-game models. Usually, these technical demonstrations involve a lot of specialized hardware for the performance capture, plus a good deal of computer processing and manual artist tweaking to get the resulting data into a game-ready state.
Epic’s upcoming MetaHuman facial animation tool looks set to revolutionize that kind of labor- and time-intensive workflow. In an impressive demonstration at Wednesday’s State of Unreal stage presentation, Epic showed off the new machine-learning-powered system, which needed just a few minutes to generate impressively realistic, uncanny-valley-leaping facial animation from a simple head-on video taken on an iPhone.
The potential to get quick, high-end results from that kind of basic input “has literally changed how [testers] work or the kind of work they can take on,” Epic VP of Digital Humans Technology Vladimir Mastilovic said in a panel discussion Wednesday afternoon.