Hey guys, quick question. I come from a film background and haven't done any game work, but I was wondering about normal mapping for games. I know that with games, UV and texture space must be optimized, so a lot of the time the left and right arm will have the exact same UVs. How does this affect normal map generation in a package like ZBrush or Mudbox? What is the general workflow for the normal map process on a game model?
Replies
As for the general normal mapping workflow: after modeling the high-poly model, I prefer to get a well-optimized low-poly version done and UVW it. Run your favourite normal mapping tool (mine is Render to Texture or xNormal) and voilà.
DO NOT detach or delete mirrored parts, because you want the smoothing to interpolate over the whole model. If you delete one side, the smoothing isn't going to be correct and you'll get seams, and the normals won't look right from certain angles.
The general workflow is:

1. Build a base model for Mudboxing/ZBrushing (or use one you already have) and do all the sculpting work you need in Mudbox/ZBrush.
2. Build a game-ready mesh on top of that (mostly because your topology for ZBrushing/mudding is going to be way different from topology optimized for games).
3. Unwrap the game model.
4. Project the normal maps, either in Mudbox or another application like Max; some people use xNormal (I haven't tried it, so I can't say). Conceptually this projection is a ray-cast from the low poly onto the high poly; see the sketch below.

I've gotten good results out of Mudbox. ZBrush is good if it's all one mesh, say an organic body for a humanoid creature, but if your game model isn't just the lowest resolution of your high-poly model, I wouldn't recommend ZBrush; same goes if you have a bunch of extra pieces you built with subds, etc. It also depends on where you're going to render it and where it's going to be presented. If you're going to make your final presentation in Max, then Render to Texture it out of Max. Some game engines have normal mappers built in; the Doom 3 engine has renderbump, and that's the best way to generate normal maps for it: it looks the best and usually totally eliminates seams.
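Here's a minimal sketch of what that projection step is doing per texel, just to make it concrete. The intersect_highpoly callback and the data layout are hypothetical stand-ins, not any real tool's API; Render to Texture, xNormal, renderbump, etc. all do this natively (and much faster):

```python
# Conceptual sketch of one texel of a normal-map bake. The
# intersect_highpoly callback is a hypothetical stand-in for the
# baker's ray-cast against the high-poly mesh.
import numpy as np

def bake_texel(low_point, low_normal, tangent, bitangent, intersect_highpoly):
    """Ray-cast from the low poly to the high poly and encode the hit
    normal in the low poly's tangent space as an RGB pixel."""
    # 1. Shoot a ray along the interpolated low-poly normal.
    hit_normal = intersect_highpoly(low_point, low_normal)
    if hit_normal is None:
        return np.array([128, 128, 255], dtype=np.uint8)  # flat "up" pixel

    # 2. Bring the world-space high-poly normal into tangent space.
    #    This TBN basis is exactly the part that differs between apps.
    tbn = np.stack([tangent, bitangent, low_normal])  # rows are T, B, N
    n = tbn @ hit_normal

    # 3. Remap from [-1, 1] to [0, 255] for storage.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)
```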
What sucks about normal maps is that there isn't one clear standard: every app and every game uses different code for generating normal maps and rendering them. So you can render a normal map in Max and it'll look fine in Max, then you put it in a game and it looks horrible. And there are differences beyond just having the Y axis (green channel) facing different directions; in general, different apps have different code for rendering normal maps. In some you'll get horrible seams no matter what, some will handle the same map without seams, but put that normal map into a game engine whose code handles normal maps differently and it'll look different again. In general, every normal map will look about right just about anywhere, but every app has its own subtle differences, and sometimes that's just enough to aggravate the hell out of you.
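One concrete example of that: the most common mismatch is just the green-channel convention (Y pointing up vs. down). If a map looks inverted in your target engine, flipping green is often the whole fix. A minimal sketch, assuming Pillow and NumPy (my tool choice here, not anything from this thread):

```python
# Flip the green (Y) channel of a normal map to convert between the
# two common conventions (what people call OpenGL- vs DirectX-style).
import numpy as np
from PIL import Image

img = np.array(Image.open("normal_map.png").convert("RGB"))
img[..., 1] = 255 - img[..., 1]  # invert green: +Y becomes -Y
Image.fromarray(img).save("normal_map_flipped.png")
```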
Oh, and in Mudbox, do NOT invert the binormal, lol. That'll basically switch your red and green channels and invert them... it's bad, real bad.
I think I've gotten away with mirroring things in xNormal and not even worrying about messing with the UVs at all. It just worked. I could be completely off here, but I vaguely remember doing that once.
Offsetting is the easiest way I've found. I use the Chuggnut UV tools, so all I have to do is offset by 1 on the U. It doesn't matter whether it tiles or not, because when the normal map is generated it only uses the 1x1 UV square and nothing outside of it. You technically don't need to clone the model either; you can just select the offset piece, and after you generate the normal map, repeat that offset with a negative value to snap it back.
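If you'd rather script that than use the Chuggnut tools, the trick looks roughly like this. The function name and data layout are just illustrative, not from any particular package:

```python
# Sketch of the mirror-offset trick: shift the mirrored shell's UVs out
# of the 0-1 square before baking, then shift them back afterwards.

def offset_uvs(uvs, du):
    """Shift every UV by du in U; a whole-unit offset leaves tiling unchanged."""
    return [(u + du, v) for (u, v) in uvs]

mirrored_shell = [(0.1, 0.2), (0.4, 0.2), (0.4, 0.8)]

# Before baking: push the mirrored half outside the baked 0-1 square.
baking_uvs = offset_uvs(mirrored_shell, +1.0)

# ... bake here; the baker only samples the 0-1 square ...

# After baking: repeat the offset with a negative value to snap it back.
restored = offset_uvs(baking_uvs, -1.0)
print(restored)  # matches the original shell (up to float rounding)
```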
I learned that the differences in how programs render each other's normal maps exist because normal maps rely on more than just their pixels. The real-time lighting code in your game also needs to generate special per-vertex tangents (plus bitangents/binormals, etc.).
These are used to transform the light from world space down into the local tangent space the normal map was created in; then the per-pixel normals finally affect the lighting direction. Programs calculate these vertex tangents differently, so apparently, for the best results, your game should use the same vertex-tangent generation code that your normal-map baker used (or the exporter should grab and export the tangents themselves).
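For what it's worth, here's the standard per-triangle tangent/bitangent derivation from position and UV edges, as a sketch. The per-vertex averaging, orthogonalization, and mirrored-UV handling layered on top of this are exactly where bakers and engines diverge:

```python
# Per-triangle tangent frame from position edges and the matching UV
# edges (the standard derivation). All inputs are NumPy 3D/2D vectors.
import numpy as np

def triangle_tangents(p0, p1, p2, uv0, uv1, uv2):
    e1, e2 = p1 - p0, p2 - p0      # 3D edges of the triangle
    d1, d2 = uv1 - uv0, uv2 - uv0  # the same edges in UV space
    r = 1.0 / (d1[0] * d2[1] - d2[0] * d1[1])
    tangent   = (e1 * d2[1] - e2 * d1[1]) * r
    bitangent = (e2 * d1[0] - e1 * d2[0]) * r
    return (tangent / np.linalg.norm(tangent),
            bitangent / np.linalg.norm(bitangent))

# The TBN matrix then takes a world-space light direction down into
# tangent space, where the normal map's per-pixel normals live.
def to_tangent_space(light_world, tangent, bitangent, normal):
    tbn = np.stack([tangent, bitangent, normal])  # rows are T, B, N
    return tbn @ light_world
```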