For a game character, is it typical to have a separate rig that includes a face rig just for cinematics? An example case might be something like Uncharted.
In cinematics you need a full face rig for acting, but in gameplay just a few blendshapes would probably suffice. There would be a few reasons to have a separate rig:
If the answers to 2 are negligible, perhaps it's worth just combining the rigs so that you only have one to deal with?
The software I am dealing with specifically is Maya (using Advanced Skeleton) and UE5. For now I have just built Advanced Skeleton around the DAZ skeleton because it's fast and easy, but I'm trying to decide whether it's worthwhile making a separate rig for the face, or just using a single rig that includes the face for gameplay as well.
My facial acting will be very simple though; I only need a few basic expressions.
Replies
The theory checks out for me but it does sound like something only you can really test to see how much of a performance difference there is in your case. Else the answer can only be a case of 'it depends'. And it sounds cumbersome to organize your assets in split fashion if you are doing this all on your own.
What happens if you use the full rig for everything all the time but for the ingame version don't animate or weight anything to the extra bones? I.e. by swapping in a duplicate of the head that is only deformed by a bunch of morph expressions.
Btw. Daz meshes seem rather heavy, isn't the base body already in the six figures, polycount-wise? What I'm saying is there's probably some optimization potential in the geometry.
@thomasp
The DAZ Genesis 8 model I am using has 32k tris. I think they are typically subdivided a few times when used for non-game stuff. It will be retopo'd eventually - I am just using whatever is easiest to verify the overall pipeline for now. It also has like 15 materials - a separate material for ears, lips, etc.
Yeah, I am curious if having a face rig attached that I just don't use would really make any difference at all. But I'm not sure how to measure it other than watching basic profile stats in Unreal. There is probably some way to identify a specific skeletal mesh's resource usage. But like you suggested, there are optimizations to be made all over, so if I can't notice anything major when swapping one rig for another, it's probably best to just go for the easier workflow.
The way I see it, it's like:
game rig only for now:
faster iteration, but if I need a joint face rig later and I don't want to redo animations, I would need to add a separate rig
game rig + joint face rig:
longer to set up, potentially wasteful resources in the game rig, but a single rig that works if I add some cinematics later
game rig + blendshapes later?:
IIRC blendshapes work in Unreal, so if I had a bunch of game animations authored and then kept cinematics simple enough that I only need a few basic expressions, I could just add blendshapes that can be keyed. This would probably be the best option as it is the simplest and most flexible?
I'll work on testing each approach out in coming days.
I checked again and indeed - the Genesis 9 figure I had looked at came with a fully subdivided mouth interior that drove up the overall polycount like 2x. Sneaky. :) Overall kind of a drag with that program to export the right subdivision level one wants. Or find any parameter in there, for that matter.
For the animation stuff I'd test with a skeleton that in one instance has facial bones animated and vertices weighted to them, and in another, as discussed earlier, leaves them unused. My suspicion is that the difference will be totally negligible either way unless you use very high bone counts. And yeah, all you can do is look at the statistics in Unreal. Perhaps subdivide the mesh once to make it heavier than anything you anticipate using in-game, to check how it runs in the worst case?
However, if you have nothing prepared for cinematics and it's only a consideration at this point, then setting up a facial rig now makes little sense to me. IMO you might as well look at doing a new rig/bones containing a facial setup down the line and transfer over your existing weighting and animations if that becomes a topic.
Yeah lol, DAZ is a user interface nightmare. I dread opening it up, buuuut it's still a super fast way to get various body types, and then with Advanced Skeleton in Maya it's like two button presses and you have a great rig ready to go. I am stuck using the Gen8 body though, because the autorig process isn't compatible with Gen9.
But like mentioned, eventually I'll retopo it into a proper game model.
I did test out and confirm that blendshapes work fine in Unreal. The only thing is, you have to drive them from code in Unreal; keys from the shape editor in Maya won't carry over because it's not joint animation. For simple things, like having a character show some expression on a cue, that could be fine. I think if I wanted any real talking it would just have to be a face rig. There's tons of tech for that sort of thing though, so it could be looked into on an as-needed basis, since I'm not even sure I'll do anything like that.
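For reference, what a blendshape weight actually does to the mesh is simple enough to sketch in plain C++. This is an engine-free toy, not Unreal code: in UE5 the runtime call would be something like `USkeletalMeshComponent::SetMorphTarget(FName, float)`, and all type/morph names below are made up for illustration.

```cpp
#include <map>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// One morph target: sparse per-vertex position offsets at full (1.0) weight.
struct MorphTarget {
    std::map<int, Vec3> Deltas;  // vertex index -> offset
};

// Apply weighted morphs to a copy of the base mesh, which is roughly what the
// engine does per frame after code sets the weights.
std::vector<Vec3> ApplyMorphs(
    const std::vector<Vec3>& Base,
    const std::map<std::string, MorphTarget>& Targets,
    const std::map<std::string, float>& Weights)
{
    std::vector<Vec3> Out = Base;
    for (const auto& [Name, W] : Weights) {
        auto It = Targets.find(Name);
        if (It == Targets.end() || W == 0.f) continue;
        for (const auto& [Idx, D] : It->second.Deltas) {
            Out[Idx].x += W * D.x;
            Out[Idx].y += W * D.y;
            Out[Idx].z += W * D.z;
        }
    }
    return Out;
}
```

The upshot is that a morph is pure per-vertex math with no joints involved, which is why Maya's shape-editor keys don't survive the export and the weights have to be driven on the Unreal side.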
Thanks for the insights @thomasp. I haven't done this sort of work before, so it's nice to have some reassurance and make sure I'm not missing anything major.
In terms of the face rig, there's an old writeup on Gamasutra for one of the Gears of War games that talks about how they converted blendshape animation to bone animation. The principles would still hold up today, I think, although I expect you'd need to multiply everything by 4.
The presence of the joints comes with an overhead even if they're not used, so you will want to bin them if you don't need them. It can become quite significant in terms of time spent on the CPU if you have a lot of joints in total, but if we're talking about a handful of characters it's probably not something to worry about until it actually becomes something to worry about.
Unreal had/has a means of dropping joints from skeletal mesh LODs. There are rumours of it having broken in later versions of 4.x, but it's worth looking into, as that'll save a bunch of effort if it does still work.
Thanks @poopipe, I'll take a look for that.
In this case, it's only the main character. At least for now, because it is the DAZ rig, it already has a joint face setup; I'd just need to create controls for it. But I'll also test removing the face joints and see if I can find any difference in Unreal. Even if I can't measure one, it's just an idiosyncrasy of mine that I hate having stuff around that isn't doing anything anyhow.
It seems like, for the game rig, if I wanted some face animations I could trigger blendshapes programmatically, like from anim notifies. And if I needed more control for cinematics later, it's probably easier to just make a separate rig for that, rather than trying to make the game rig pull double duty. Especially because it's not a given that I'll actually make cinematics, or that they'd necessarily include any talking.
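A minimal sketch of that cue-driven idea, assuming a simple table mapping notify cue names to morph weights. This is plain, engine-free C++: in UE5 the notify (e.g. a `UAnimNotify` subclass) would resolve the cue and apply each entry via the skeletal mesh component's `SetMorphTarget`. All cue and morph names here are hypothetical.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// A resolved expression: morph target names and the weights to set them to.
using MorphWeights = std::vector<std::pair<std::string, float>>;

// Expression cue table: an anim notify fires a cue name, and the character
// looks up which morph targets to drive and how hard.
const std::map<std::string, MorphWeights>& ExpressionTable()
{
    static const std::map<std::string, MorphWeights> Table = {
        {"Smile",   {{"MouthSmile", 1.0f}, {"EyeSquint", 0.3f}}},
        {"Grimace", {{"BrowDown", 0.8f}, {"MouthFrown", 0.6f}}},
        {"Neutral", {{"MouthSmile", 0.0f}, {"EyeSquint", 0.0f},
                     {"BrowDown", 0.0f}, {"MouthFrown", 0.0f}}},
    };
    return Table;
}

// Look up a cue; unknown cues resolve to no changes.
MorphWeights ResolveCue(const std::string& Cue)
{
    const auto It = ExpressionTable().find(Cue);
    return It != ExpressionTable().end() ? It->second : MorphWeights{};
}
```

Keeping the cue-to-weights mapping in data like this means animations only reference cue names, so expressions can be reauthored without touching the clips.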