I've been experimenting with using the iPhone X for facial capture to bring life into the wild, immortal Baby characters I've been working on for our VR game, Bebylon. Since we're a team of about five, it's pretty hard to justify facial animation at this stage, considering how time consuming it can be. This is an early test, but it shows a lot of promise for capturing decent data right out of the phone (depending on your quality bar), and it's super duper fast and easy.
I'm pretty confident it can get a whole lot better with a little time spent perfecting the blendshapes (that should fix a lot of the mouth issues). Adding proper wrinkle maps will go a long way too, as will using the captured data to drive secondary blendshapes, which should add more life and expressivity to the character.
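For anyone curious what "driving secondary blendshapes" could look like in practice, here's a minimal Maya Python sketch using set driven keys. The node and target names (face_BS, jawOpen, jawOpen_wrinkle) and the 0.6 ramp value are made up for illustration, not the actual Bebylon rig:

```python
# Minimal sketch: drive a secondary/corrective shape from a primary
# captured weight using set driven keys in Maya. Node and target names
# are placeholders for illustration.
import maya.cmds as cmds

driver = "face_BS.jawOpen"            # primary weight keyed from the capture
driven = "face_BS.jawOpen_wrinkle"    # secondary shape (e.g. a wrinkle corrective)

# When jawOpen is 0 the corrective is off; when jawOpen is fully on,
# the corrective ramps up to 0.6 so it layers on top of the main shape.
cmds.setDrivenKeyframe(driven, currentDriver=driver, driverValue=0.0, value=0.0)
cmds.setDrivenKeyframe(driven, currentDriver=driver, driverValue=1.0, value=0.6)
```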
The next test is strapping the iPhone to a helmet and combining it with our Xsens mocap suit for a full body and face test. I'm also going to get this data into UE4 and connect it to the game version of this character.
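As a rough sketch of one way per-frame face weights could be pushed toward a game engine live, here's a plain UDP sender in Python. The address, port, packet format, and weight source are all assumptions for illustration, not the actual pipeline:

```python
# Rough sketch: push one JSON packet of blendshape weights per frame
# over UDP to something listening inside the engine/editor.
# Address, port, and the weight data are placeholders.
import json
import socket
import time

ENGINE_ADDR = ("127.0.0.1", 54321)  # assumed listener address

def stream_weights(frames, fps=60.0):
    """Send each frame's {shapeName: weight} dict as a UDP/JSON packet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for weights in frames:  # e.g. {"jawOpen": 0.42, "mouthSmile_L": 0.1}
        sock.sendto(json.dumps(weights).encode("utf-8"), ENGINE_ADDR)
        time.sleep(1.0 / fps)

if __name__ == "__main__":
    stream_weights([{"jawOpen": 0.0}, {"jawOpen": 0.5}, {"jawOpen": 1.0}], fps=30.0)
```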
If you guys are interested in this, I'll keep posting progress.
Here's a loose snapshot of the process of getting the data from the iPhone X into Maya (you can skip to the middle, where the Maya section happens): http://youtu.be/w047Dbo-fGQ
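Conceptually, the Maya end of that process boils down to keyframing the character's blendShape weights from the per-frame coefficients the phone records. Here's a minimal sketch of that idea, assuming a hypothetical JSON export (a list of per-frame {shapeName: weight} dicts); the file path and the "face_BS" node name are placeholders:

```python
# Minimal sketch of the Maya side: read per-frame blendshape coefficients
# from an assumed JSON export and keyframe them onto the character's
# blendShape node. Path and node name are placeholders.
import json
import maya.cmds as cmds

def apply_capture(json_path, blendshape_node="face_BS", start_frame=1):
    with open(json_path) as f:
        frames = json.load(f)  # e.g. [{"jawOpen": 0.3, "eyeBlink_L": 1.0}, ...]

    for frame, weights in enumerate(frames, start=start_frame):
        for shape_name, weight in weights.items():
            attr = "{}.{}".format(blendshape_node, shape_name)
            if cmds.objExists(attr):
                cmds.setKeyframe(attr, time=frame, value=weight)

apply_capture("/path/to/capture.json")
```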
Replies
How are you getting the captured data to text? Is this using the new face-recognition emoji system in iPhones? I take it the analysis and conversion to a Maya anim file is your own software?
Edit: woop, saw some info in the earlier YouTube vid. Great stuff. I wonder where the analogue is on Android; probably a long way behind, since Google didn't buy out Faceshift. Buggers.
Thanks Hito! Even though ARKit / the iPhone X make this kind of thing easy, face tracking using a normal phone camera (on Android) should deliver pretty comparable results. But as you say, not as convenient as Faceshift!
https://youtu.be/i51CizUXd7A