
Blendshapes / Morph Targets / Shapekeys - need some technical clarification

CheeseOnToast greentooth
I'm arguing the case for something I'm about to work on, and I'd like to make sure I have my facts straight beforehand.

As I understand it, game engines generally store blendshapes as offset deltas for the vertices that are affected. So a vertex stores an offset vector (a direction and magnitude) only if it contributes to the blendshape; verts that don't contribute add no extra cost.

For example, on a mesh with 40k verts, you move 36 verts to deform a character's eyebrows. Is the additional cost of that blendshape limited to those 36 verts only?

Am I correct in my understanding here?

Hopefully that makes sense. By the way, the engine in question is Unity, if that makes any difference.
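A minimal sketch of that sparse-delta idea (the function name and data layout here are illustrative assumptions, not Unity's actual API):

```python
# A minimal sketch of sparse blendshape storage, assuming the engine keeps
# an offset vector only for affected vertices. Names are illustrative,
# not any engine's actual API.

def apply_blendshape(base_positions, deltas, weight):
    """Return deformed positions; `deltas` maps vertex index -> (dx, dy, dz)."""
    result = list(base_positions)
    for i, (dx, dy, dz) in deltas.items():
        x, y, z = result[i]
        result[i] = (x + weight * dx, y + weight * dy, z + weight * dz)
    return result

# A 5-vert mesh where only vertex 2 is moved by a "brow_up" shape.
base = [(0.0, 0.0, 0.0)] * 5
brow_up = {2: (0.0, 1.0, 0.0)}  # storage cost: 1 vertex, not 5

posed = apply_blendshape(base, brow_up, 0.5)
# posed[2] is (0.0, 0.5, 0.0); every other vertex is untouched
```

Whether the engine actually evaluates it this sparsely at runtime is a separate question, which the replies below get into.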

Replies

  • Eric Chadwick
    That's a question for Unity then. Optimizations like this are totally up to the engine devs themselves. But yeah in general, morph targets are stored as position deltas only. No additional vertex data is kept around... UVs or vertex colors or hard edges are simply reused from the base model.
  • monster
    monster polycounter
    I'd suggest detaching the head as a new object. Once you start swapping costumes, remaking the blendshapes will get old fast.

    You might say, "We won't have costume switching." To which I'll reply, "Next week you will." 

    I always pick morphs over bones for facial expressions. In Unity, one benefit is that if you name all your head objects "Head" and all the blendshapes are consistently named, then all the animation curves will just work. So you can make face idle animations and apply them to all the characters (blinks, micro expressions, etc.).

    Blendshapes are generally cheaper than bones on the CPU and GPU, regardless of engine. But they use considerably more memory, since each shape carries its own per-vertex delta data. So if you have lots of characters with lots of blendshapes, you may bump into trouble later.
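For a rough sense of scale, here's a back-of-envelope estimate assuming each shape stores a dense position delta (three 4-byte floats) per vertex. The 50-shape count is hypothetical, and Unity can also store normal and tangent deltas per shape, which would roughly triple the figure:

```python
# Back-of-envelope memory estimate, assuming each blendshape stores a dense
# position delta (3 x 4-byte floats) per vertex. Normal and tangent deltas,
# if present, would roughly triple this.

def blendshape_memory_mb(verts, shapes, floats_per_vert=3, bytes_per_float=4):
    return verts * shapes * floats_per_vert * bytes_per_float / (1024 * 1024)

# The 40k-vert character from the original post, with a hypothetical 50 shapes:
print(round(blendshape_memory_mb(40_000, 50), 1))  # ~22.9 MB
```

A few such characters loaded at once is where the memory pressure starts to show.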


  • monster
    monster polycounter
    Oh, and even though I always use blendshapes, I always use joints for the eyes, and both joints and blendshapes for the jaw.

    The eyes, because those tend to get controlled procedurally a lot.
    And the jaw, so animators have the option to overextend it for more extreme expressions.


  • RN
    RN sublime tool
    Someone who's seen the source code of Unity should be able to tell (off the record, of course, because of NDAs). You could also try asking the author of the Mega-Fiers plugin if they know more about Unity's internals.

    Based on this ( https://answers.unity.com/questions/1578678/performance-impact-of-blend-shapes.html ), I'm thinking that an imported blendshape is calculated over the entire mesh, even if that blendshape only affects a few of the vertices. This isn't to say that Unity's blendshape system couldn't be optimized further, but it might not be optimized like that right now.

    So if you have a way of isolating that area with the 36 vertices as a separate mesh (like splitting the head from the body as @monster said, so the head uses both joints and blendshapes while the body uses joints only), I would try that and see if the profiler gives a better result.

    I looked at the source of the Godot engine and it does the same: each blendshape has the same topology as the mesh it affects.
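If the engine really does evaluate a shape densely over its whole mesh, the saving from splitting the head off is easy to estimate. The head's vertex count below is a hypothetical figure for illustration:

```python
# Rough per-frame work if a blendshape is evaluated densely over its whole
# mesh: cost scales with the mesh's total vertex count, not with how many
# vertices the shape actually moves. Vertex counts are hypothetical.

def dense_eval_ops(mesh_verts, active_shapes):
    return mesh_verts * active_shapes

whole_body = dense_eval_ops(40_000, active_shapes=10)  # 400,000 vertex updates
split_head = dense_eval_ops(5_000, active_shapes=10)   #  50,000 vertex updates
print(whole_body // split_head)  # 8x less work after splitting off the head
```

The real win would need to be confirmed in the profiler, as RN suggests, since draw-call and skinning overheads also change when you split a mesh.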

  • CheeseOnToast
    CheeseOnToast greentooth
    Simply put, we have a bunch of characters where the head and hands share a flesh material. Separating the head from the hands into discrete meshes potentially creates another draw call. I was wondering which approach was more efficient. Thanks for all the replies everyone.