Hey folks!
So, I was doing a little research on NPR/anime-style modelling/texturing and took a look at the Ace Attorney models available here, specifically Phoenix Wright.
Inspecting the model, I noticed that they have a bunch of alternate parts for the upper and lower face and the hands.
Trying to guess why the model is set up this way, my friend and I ended up with two theories:
1) This is how the 2D Ace Attorney games were built (parts swapped in on the sprites based on the script markup), so it was a natural technical progression to do the same in 3D. Seems a little odd to me, as these games were built on MT Framework Mobile, but it's possible the deformation features hadn't been implemented at that point? (A rough sketch of the kind of part-swapping I mean is below.)
2) Limitations on the 3DS (tri/polygon counts, lack of blendshapes, or limits on bone counts) made it more efficient to swap in different models rather than have a single model with the tri/polygon density needed to support all the deformations required - easier to have bespoke models with smaller polycounts.
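For clarity, here's a minimal sketch of the part-swapping setup I mean - every name here is invented for illustration, and I'm not claiming this is how MT Framework actually does it:

```cpp
#include <string>
#include <unordered_map>

// Stand-in for the engine's mesh resource.
struct Mesh {};

struct CharacterModel {
    Mesh body;  // shared, fully rigged body
    std::unordered_map<std::string, Mesh> upperFaces;  // "neutral", "shocked", ...
    std::unordered_map<std::string, Mesh> lowerFaces;  // "smile", "shout", ...
    const Mesh* activeUpper = nullptr;
    const Mesh* activeLower = nullptr;

    // Script markup along the lines of <face upper="shocked" lower="shout">
    // would resolve to a call like this: swap the visible modules, render.
    void setFace(const std::string& upper, const std::string& lower) {
        activeUpper = &upperFaces.at(upper);
        activeLower = &lowerFaces.at(lower);
    }
};
```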
This technique doesn't seem to be replicated in the later The Great Ace Attorney Chronicles, which appears to use facial bones, shape keys, or a mixture of both (and, man, what great work...!).
Any ideas, folks? I really know nothing about the 3DS hardware or MT Framework, just kinda curious about why the model was set up in this fashion.
Replies
Likely #2, because of performance concerns. Vertex deformation often takes CPU time, though I'm not sure about the 3DS hardware specifically.
This sticky might give some insight. https://polycount.com/discussion/226167/retro-3d-art-faq-everything-you-need-to-know-to-create-ps1-n64-dreamcast-etc-3d-art
I worked with a mobile engine about 12 years ago that would split meshes based on skinning information - I forget the exact restriction (something to do with the amount of memory required to store joint bindings) but it would effectively split models up into similar-sized pieces.
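Something like the following, perhaps - a hedged guess at the kind of splitting pass such an engine might run, assuming a per-draw-call bone limit (MAX_BONES and all the names here are made up):

```cpp
#include <set>
#include <vector>

struct Tri { int v[3]; };         // triangle = 3 vertex indices
struct Vertex { int bones[4]; };  // up to 4 bone influences per vertex

const std::size_t MAX_BONES = 20; // assumed per-draw-call palette size

// Greedily pack triangles into chunks whose combined bone set fits the limit.
std::vector<std::vector<Tri>> splitByBoneLimit(const std::vector<Tri>& tris,
                                               const std::vector<Vertex>& verts) {
    std::vector<std::vector<Tri>> chunks;
    std::set<int> palette;        // bones used by the chunk being built
    std::vector<Tri> current;

    for (const Tri& t : tris) {
        // Bone set this chunk would need if we added this triangle.
        std::set<int> trial = palette;
        for (int corner : t.v)
            for (int b : verts[corner].bones)
                trial.insert(b);

        if (trial.size() > MAX_BONES && !current.empty()) {
            // Close the current chunk and start a fresh one for this triangle.
            chunks.push_back(current);
            current.clear();
            palette.clear();
            for (int corner : t.v)
                for (int b : verts[corner].bones)
                    palette.insert(b);
        } else {
            palette = trial;
        }
        current.push_back(t);
    }
    if (!current.empty()) chunks.push_back(current);
    return chunks;
}
```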
That seems plausible.
Alternatively, perhaps they did it simply to allow mesh reuse across characters.
@Eric Chadwick - Thank you... and that thread is MIND-BLOWING.
@poopipe - Hmm. That's interesting, but I'm not sure that's happening here, given the significant vertex difference between parts - it really screams 'deliberate manual work'. Reusing the meshes across characters also seems unlikely, as the faces are so different (or I'm misunderstanding you!). Thank you, however!
Appreciate your input guys! Hmm. I'll go through your thread again, Eric, but I'm starting to think I'll need to track down some MT Framework engineers in order to solve this one... =P
Both the Dual Destinies and Chronicles games seem to be using morph targets, AKA blendshapes, for the facial animations.
I think those different bottom-half-of-head blendshapes are just the smallest, most convenient (artist-friendly) way of storing that data in the limited 3DS cartridge storage space. You don't need to keep full-body copies to describe the blendshapes if, for a certain mouth shape, only the vertices in the bottom half of the head move, of course.
There's another concern, which is vertex counts. When the engine morphs blendshapes, it's doing "vertex 233 of this mesh moves to vertex 233 of this other mesh", but if the meshes have different vertex counts then you need some offset to say "vertex A of this mesh moves to vertex B of this other mesh".
So those bottom-half head blendshapes all use the same offset (in the code that uses them), so that "vertex zero" of each blendshape lines up with the equivalent vertex of the full-body mesh and it can be animated properly.
The animation is a simple linear interpolation between the vertex positions. It's used mainly for facial expressions, as those require a lot of bones to get right otherwise. It's way easier to deform the mesh the way you want with blendshapes, especially for a cartoon/manga style.
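A minimal sketch of that idea - a partial blendshape lerped into a full-body vertex buffer at some offset. The names, the offset mechanism, and the data layout are all assumptions for illustration, not MT Framework's actual implementation:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Plain linear interpolation between two positions.
Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// neutral: full-body vertex positions (the base mesh)
// target:  vertices of the partial blendshape (e.g. bottom half of the head)
// offset:  index in the full mesh where the target's "vertex zero" lines up
// weight:  0 = neutral, 1 = full expression
std::vector<Vec3> applyPartialMorph(const std::vector<Vec3>& neutral,
                                    const std::vector<Vec3>& target,
                                    std::size_t offset, float weight) {
    std::vector<Vec3> out = neutral;  // untouched vertices stay where they are
    for (std::size_t i = 0; i < target.size(); ++i)
        out[offset + i] = lerp(neutral[offset + i], target[i], weight);
    return out;
}
```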
Maybe the animation is a simple mesh swap, but that would be a little harsh.
RN's explanation seems plausible to me, although most blendshape implementations only store transforms for the vertices that actually change, AFAIK (hence them being layerable and much less memory-heavy than full vertex animation).
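i.e. something like this, just to sketch the idea (again illustrative, not any particular engine's format):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Store only the vertices a shape actually moves: an index plus a delta.
struct SparseShape {
    std::vector<std::size_t> indices; // which vertices move
    std::vector<Vec3>        deltas;  // how far each moves at weight = 1
};

// Layer any number of weighted shapes on top of the neutral mesh.
std::vector<Vec3> applyShapes(
        const std::vector<Vec3>& neutral,
        const std::vector<std::pair<SparseShape, float>>& shapes) {
    std::vector<Vec3> out = neutral;
    for (const auto& [shape, weight] : shapes)
        for (std::size_t i = 0; i < shape.indices.size(); ++i) {
            Vec3& v = out[shape.indices[i]];
            v.x += shape.deltas[i].x * weight;
            v.y += shape.deltas[i].y * weight;
            v.z += shape.deltas[i].z * weight;
        }
    return out;
}
```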
Apologies, folks, I've just seen your messages now.
The head modules as blendshape targets - do you really think so? I know basically zero about MT Framework Mobile or the 3DS, so I don't have much to stand on here, but... that seems (to me) like a far more complex way to store the targets than just keeping them as part of the primary model information. As far as convenience goes, well, linking to the modules and then offsetting the vertex indices... seems like inefficient busywork (but I'm an ignoramus as regards the 3DS & MTF). Plus they're perfectly UV mapped - if you were just grabbing vertex positions, why keep the UV mapping data?
I want to be clear, I'm not saying this can't be the solution, I'm just raising questions about it.
Mmm, yeah. I went back and watched some animations from Dual Destinies. There's animation on the mouth movements, eye blinks, etc. (and I think we're all agreed this is probably blendshapes or morph targets or shape keys or what have you), but when poses change drastically, it literally pops from one pose to the next. Sometimes this is hidden by camera changes or a flash of light etc., but sometimes they just flip the poses right in front of you. This certainly points to leaning on the visual conventions established in the 2D Ace Attorney predecessors, where animated pose transitions are extremely rare.
This definitely increases the chances that the modularity comes from the scripting precedent of the previous games, as my friend suggested, but I honestly still think this is just down to saving memory by not recording transitions - which is the major character-animation difference between this game and the following Ace Attorney Chronicles (which is covered in gorgeous character animation and transitions).
Basically, if there's no precedent for you to animate changes in stance, and you can put that memory to better use elsewhere, then why not just swap in parts?
Agreed. But if I'm right in what I'm saying above, then are they duplicating vert transforms across multiple models, thus wasting more memory? Actually, no, probably not - presumably each pose requires a custom 'blink'.
Hmm.
Currently this is making sense to me as an explanation. It doesn't exactly tell us whether the choice stems from established code conventions or memory limitations, though. =D
I'm really not familiar with 3DS hardware, but generally speaking, bottlenecks occur when moving data from one processor, memory stack, etc. to another.
The last thing you want to do is be pulling stuff off a "disk"
Looking at the body animation - I reckon they skinned the body and they're just switching it between different poses. It's by far the cheapest way to handle full-body animation in terms of memory.
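For instance, pose switching could be as simple as this - one stored set of joint transforms per stance, swapped wholesale (all names invented; a sketch of the idea, not actual MT Framework code):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// One joint's local transform (4x4 matrix, for the sake of the sketch).
struct Mat4 { float m[16]; };

// A pose is just one transform per joint in the skeleton.
using Pose = std::vector<Mat4>;

struct Character {
    std::unordered_map<std::string, Pose> poses;  // e.g. "idle", "deskSlam"
    Pose current;

    // A stance change is a single copy of the transform set: cheap, instant,
    // and with no transition data stored - hence the visible "pop".
    void setPose(const std::string& name) {
        current = poses.at(name);
    }
};
```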
I suspect the splits are down to a combination of hardware skinning limitations and a desire to minimise the size of individual blendshapes.
I don't think I made it clear that I also thought the bodies were rigged and were just switching between poses, but I agree with you there, and agree with you regarding moving data. The only blendshapes they wanted were lip flaps and slight mouth changes for the lower face, and eyelid and brow movement for the upper face - bigger changes can be achieved by swapping out the modules.
Unless we get a 3DS/MT Framework Mobile coder in here, I think that's probably as close as we're going to get on this question. With that in mind, huzzah, medals for all, and thanks everyone. =D