
Morph targets using 3D face scan photogrammetry data?

Jonathan85
polycounter lvl 9
Hello, I know you can scan a real human face using multiple cameras and programs like Agisoft PhotoScan to create a highly realistic 3D human face/head. I also think I understand the pipeline for retopologizing in ZBrush, creating proper UVs, and importing the mesh back for texture generation, so that you end up with a nice, real-time-ready "low poly" realistic face.
But what I don't understand is this: as far as I know, you can scan not just the head in a neutral expression, but also "each" facial expression of the real actor (happy, sad, angry, mouth open, etc.) and use these expression scans as morph targets for later facial motion capture.

My question is, how do you do this? I know you have the default head scan of a few million polygons. You retopologize a copy of the head in ZBrush down to a few thousand polygons (for example), create a UV map for the low poly, subdivide it a couple of times, and reproject the original few-million-polygon mesh onto it to capture the details for a normal or displacement map, etc.
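If it helps to make the reprojection step concrete, here is a minimal, hypothetical sketch of the idea: for each low-poly vertex, find the closest point on the dense scan and store the signed distance along the vertex normal as a displacement value. Real tools (ZBrush's Project All, xNormal, etc.) do this with proper ray/triangle intersection and subdivided meshes; here the scan is just a point cloud and all names and data are made up for illustration.

```python
import numpy as np

def bake_displacement(low_verts, low_normals, scan_points):
    """Return one signed displacement value per low-poly vertex.

    For each vertex, take the nearest scan point and project the offset
    onto the vertex normal -- a crude stand-in for detail reprojection.
    """
    disp = np.empty(len(low_verts))
    for i, (v, n) in enumerate(zip(low_verts, low_normals)):
        offsets = scan_points - v                                  # vectors to every scan point
        sq_dists = np.einsum('ij,ij->i', offsets, offsets)         # squared distances
        nearest = offsets[np.argmin(sq_dists)]                     # closest scan point offset
        disp[i] = nearest @ n                                      # signed distance along normal
    return disp

# Toy data: a flat two-vertex low-poly "patch" and a scan hovering 0.05 units above it.
low_verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
low_normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
scan = np.array([[0.0, 0.0, 0.05], [1.0, 0.0, 0.05]])

print(bake_displacement(low_verts, low_normals, scan))  # both vertices displaced by 0.05
```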

But how do you get the morph targets, one for each expression? Should I use the default retopologized mesh of a few thousand polygons as a base mesh and then try to reproject the mesh structure and details of each expression scan onto it? Is this the workflow? And what if some expressions are too different from the original neutral head scan? For example, if I have an expression with the mouth wide open, how do I reproject that?
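For what it's worth, my understanding of what happens *after* the reprojection is that once every expression scan has been wrapped to the same topology as the neutral retopo mesh, each morph target is just a per-vertex delta from the neutral pose, and the rig blends those deltas by weight. A minimal sketch of that idea (the vertex data and names below are hypothetical; a real head has thousands of vertices):

```python
import numpy as np

# Neutral base mesh: three vertices, fixed order.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

# A "mouth open" expression mesh with identical vertex count and order
# (e.g. the expression scan after reprojection onto the neutral topology).
mouth_open = np.array([[0.0, -0.2, 0.0],
                       [1.0, -0.2, 0.0],
                       [0.0, 1.0, 0.0]])

# The morph target is the per-vertex difference from neutral.
delta = mouth_open - neutral

def apply_morph(base, delta, weight):
    """Blend one morph target; weight 0 = neutral, 1 = full expression."""
    return base + weight * delta

half_open = apply_morph(neutral, delta, 0.5)  # halfway between neutral and open
print(half_open)
```

With multiple targets you would simply sum the weighted deltas, which is why all the expression scans must share the neutral mesh's exact topology in the first place.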
