Hey there! So I'm going to take a crack at the method Nintendo seems to use a lot for their characters' eyes, but I don't have any experience with it and only limited knowledge of the topic (mostly what I've gathered over the last couple of days, plus what my general 3D knowledge affords me by studying references). If you've worked with this method before and can share some info, I'd gladly welcome it.
I found this breakdown, which is really helpful.
http://www.benjones.us/twilight-princess-eyes-breakdown/

I have some thoughts about it. Wouldn't the plane with the eyelids / alpha have visible hard edges where it meets the rest of the face geo? Can that be resolved with the type of shader they use, or are the edges of the plane faded out with alpha and blended into the texture below? It couldn't be a cutout shader, though, or it'd likely have artifacts of its own. And if it's anything other than a cutout shader, it couldn't receive shadows? Which works for some stylized games, I suppose. Not for me, though.
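For what it's worth, here's the blend math behind the fade-out idea as I understand it. This is just a minimal sketch of standard "over" compositing, not anything confirmed from the game:

```python
# Standard "over" alpha blending: the eyelid card composited over the face.
# Where the card's alpha ramps to 0 at its border, the result converges to
# the face texture underneath, so there's no visible hard seam.
def blend_over(plane_rgb, plane_alpha, face_rgb):
    return tuple(p * plane_alpha + f * (1.0 - plane_alpha)
                 for p, f in zip(plane_rgb, face_rgb))

# At the card's faded edge (alpha = 0) you get exactly the face color:
print(blend_over((1.0, 0.8, 0.7), 0.0, (0.9, 0.6, 0.5)))  # -> (0.9, 0.6, 0.5)
```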
Anywho, those are just some thoughts shared so you can get a sense of where I'm at with it. Any help is welcome! Cheers.
Also, I'll post this here for anyone that hasn't seen it yet. I've been looking at it a lot for this technique.
Pokemon Let's Go
Replies
You could conceivably combine the eyelid and the eyeball into one shader too; I'm not sure whether that would be more expensive than two separate materials, with one using alpha blend. It might also be that they were working within the performance limitations of the hardware they were targeting.
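Rough sketch of what I mean, with the texture lookups stubbed out as plain functions (hypothetical, not any particular engine's API):

```python
# Hypothetical single-material version: sample both layers in one pass and do
# the "over" blend inside the shader, so the eye renders as one draw instead
# of two materials.
def eye_shader(uv, eyeball_tex, eyelid_tex):
    """eyeball_tex(uv) -> (r, g, b); eyelid_tex(uv) -> (r, g, b, a)."""
    br, bg, bb = eyeball_tex(uv)
    lr, lg, lb, la = eyelid_tex(uv)
    return (lr * la + br * (1 - la),
            lg * la + bg * (1 - la),
            lb * la + bb * (1 - la))

# Flat test "textures": eyeball shows through where the lid alpha is 0.
print(eye_shader((0.5, 0.5),
                 lambda uv: (0.2, 0.6, 0.9),
                 lambda uv: (0.9, 0.8, 0.7, 0.0)))  # -> (0.2, 0.6, 0.9)
```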
http://wiki.polycount.com/wiki/Foliage#Vertex_Normals
It's also tricky to get the weights just right, and it requires more vertices to hold the curvature. But... bones would avoid the need for large alpha-blended areas, which can be a fillrate hog.
It all depends on what else is going on in the frame, and what limitations you have from the most common hardware used by your customers.
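To illustrate the bone route: a minimal linear-blend-skinning sketch. The setup and names are mine, not from any actual rig:

```python
import numpy as np

# Linear blend skinning: each eyelid vertex moves by a weighted sum of bone
# transforms. Bad weights show up as pinching, and a smooth lid curve needs
# enough vertices to bend through.
def skin_vertex(rest_pos, bone_matrices, weights):
    """rest_pos: (3,) array; bone_matrices: 4x4 arrays; weights sum to 1."""
    v = np.append(rest_pos, 1.0)          # homogeneous coordinate
    out = np.zeros(4)
    for m, w in zip(bone_matrices, weights):
        out += w * (m @ v)                # blend the transformed positions
    return out[:3]

identity = np.eye(4)
lift = np.eye(4); lift[1, 3] = 0.1        # a bone that raises the lid 0.1 on Y
print(skin_vertex(np.zeros(3), [identity, lift], [0.5, 0.5]))
# -> [0.   0.05 0.  ]: halfway weighted toward the lifting bone
```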
Any opinion on assigning multiple materials to a single mesh based on face selection? I've heard negative things, but have seen it in practice in a few different places. Any downsides to that practice?
I'm also thinking it could have been the same material but different UV channels, which I don't know much about.
It was a 3DS model. A bit more info: the UVs occupied the space above the 0-1 tile, sort of like UDIMs.
Try to use the fewest materials possible, in general. Each time the game has to switch materials within a mesh (or across a scene), it has to send a separate batch of vertices to the GPU, and sending tons of batches slows performance (see the sketch below).
Plus, depending on the renderer, each UV channel can cause a separate batch of vertices to be sent to the GPU.
Having said that, it's still a common strategy to use 2 UV channels: one for tiling, the other for lightmapping. Just avoid going crazy with UV channels.
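To make the batching point concrete, a toy model of the cost (completely made-up data layout, real renderers have more going on):

```python
# Toy model of draw-call cost: the renderer issues (at least) one batch per
# unique material on each mesh, so material count multiplies batches.
def count_batches(meshes):
    """meshes: list of (mesh_name, list_of_material_names) pairs."""
    return sum(len(set(mats)) for _, mats in meshes)

# One head split across 3 materials vs. the same head merged into 1:
print(count_batches([("head", ["skin", "eyes", "teeth"])]))  # -> 3
print(count_batches([("head", ["face_atlas"])]))             # -> 1
```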
Any thoughts on blendshapes for mobile? I'm considering those vs another method I saw on a Pokemon Wii U model. It's basically blendshapes, but without the blendshapes: they just swap the geo, with the verts pushed around to make each expression.
I know a lot of decisions are based on what the project needs. One thing to note is that I don't think we'll need to see the blend from target to target; the more anime-style hard swap from expression to expression would be fine.
Are there any performance benefits to this approach over blendshapes? Or any other benefits, aside from not having to manage a bunch of blendshapes? It's still a toss-up between these two methods and a traditional bone setup.
In the case of model swaps, it seems more expensive to me to load a whole new model entity (verts / UVs / normals / skin weights / meta stuff) for an animated set, compared to blendshapes. (I'm no engine expert, though.)
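A rough data-layout sketch of the two options (made-up names, and simplified; real blendshape storage is usually sparse):

```python
import numpy as np

# Blendshapes: one base mesh plus per-target vertex deltas; an expression is
# a weighted sum. A "hard swap" look is just snapping a weight from 0 to 1.
def apply_blendshapes(base_verts, target_deltas, weights):
    """base_verts: (N, 3); target_deltas: {name: (N, 3)}; weights: {name: float}."""
    out = base_verts.copy()
    for name, w in weights.items():
        out += w * target_deltas[name]
    return out

# Geo swap: a full copy of the mesh per expression. Picking one is trivial,
# but every copy duplicates positions, UVs, normals, and skin weights.
def apply_swap(expression_meshes, current):
    return expression_meshes[current]
```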
Try a test! Crappy art is OK; just put 20 copies onscreen and compare stats between morph and swap: load times, memory use, framerate.
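If you want a CPU-side starting point before engine profiling, something like this (assuming the layout from the sketch above; fillrate and real memory numbers need the engine's own stats):

```python
import time
import numpy as np

# Quick-and-dirty timing harness: run each update path many times and report
# average milliseconds. This only captures CPU-side cost, not GPU work.
def time_ms(fn, iterations=1000):
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations * 1000.0

n_verts = 5000
base = np.zeros((n_verts, 3))
delta = np.random.rand(n_verts, 3)
expressions = {"smile": np.random.rand(n_verts, 3)}

print("morph:", time_ms(lambda: base + 0.5 * delta))    # blendshape-style update
print("swap: ", time_ms(lambda: expressions["smile"]))  # lookup only, no math
```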