I recently started work on a little indie project. The lead coder has asked that I supply him with height map data rather than normal maps, as they're simpler to work with or some such.
Now, I've got my high and low poly models sorted and can bake out a normal map that looks nice. My next task is to generate a height map using the same data which I'd then like to be able to test out in Maya by converting it to a normal map. As you might imagine, this slightly convoluted workflow is giving me some problems - normal maps that are overly rounded and have triangular shading errors from the low poly running through them, for the most part.
I'm not really sure what my best way forward is here. I've got xNormal, Maya and the original nDo that I can work with. Any suggestions would be greatly appreciated.
Replies
Why not just bake them?
Run as fast as you can! haha
If they're having problems with tangent generation, there are loads of good resources for this. Or use a format that includes tangent data (FBX).
Just not how it works.... wtf
A baked height map converted to a normal map will never work, because: a) the accuracy is shit, and b) you're no longer accounting for the low-poly mesh normals.
Maybe on a flat plane, but nothing else, and even then the quality isn't going to be the same because of what happens to it when you sample it.
Please, for the good of humanity, tell your programmer how absolutely misguided this is.
If you really see any potential in this one, try to talk with him. Maybe he's open to discussion.
If not, don't waste your time. There are hundreds of people who think they're indie developers but just don't know what they're doing. Not worth your time and energy.
With my graphics artist, before he started work on my prototype, we talked about the relative importance of the different parts of the gameplay and, very importantly, about view distance.
After that, he explained to me how he works and what he needs in terms of shaders, specific particles, custom development, or anything else.
I have never forced specific techniques or workflows on him unless the base engine is limited, and even in that case I try to find a solution for him.
Yes, you can get normals from a height map, and it can theoretically give good results, but I'm pretty sure the rendering process would be different (i.e. you wouldn't be accounting for smooth shading), although I could be wrong as I don't know what you guys are doing.
Derivative maps actually operate on a similar principle as far as the normals go, and the code I was looking at actually considered taking heightmap values rather than slopes, so I'm assuming it's something along those lines. There are many benefits: for one, you have displacement values which can be added together, rotated, mirrored, and used at a whim in any way, with no need to constantly renormalize them (they're just raw values, unlike normal maps). The quality of the derivative map technique can actually be quite good, so I'd imagine if that's the approach your team is taking, things could work out, both technically and for the artists.
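To illustrate the "no renormalization" point, here's a minimal Python sketch (my own illustration, not anything from the thread): slopes of stacked height details add linearly and still yield a valid unit normal, whereas naively summing two unit normals does not.

```python
import math

def normal_from_slopes(dx, dy):
    # a heightfield with local slopes (dx, dy) has normal (-dx, -dy, 1), normalized
    l = math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx / l, -dy / l, 1.0 / l)

# two height details, expressed as their local slopes
a = (0.3, 0.1)
b = (-0.2, 0.4)

# height values (and hence slopes) stack linearly and still give a unit normal
combined = normal_from_slopes(a[0] + b[0], a[1] + b[1])

# naively summing the two unit normals gives a non-unit vector that would
# need renormalizing, and even then it wouldn't equal `combined`
na = normal_from_slopes(*a)
nb = normal_from_slopes(*b)
summed = tuple(x + y for x, y in zip(na, nb))
```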
However, if there's no good reason to completely deviate from the standard workflow (quality, performance, workflow speed) don't. The tools are built around and for the standard workflow.
It just sounds like the guy doesn't understand the workflows artists use, and if he takes this mindset to AAA studios he won't last a week.
Jackablade, I would not try to convert a height map into a normal map. If you need to use a normal map, use a normal map. If you need to use a height map, use a height map. If converting one to the other is really what your programmer wants to do, let him figure it out. It's not your job to figure out how to do 3D / image math in maya.
In fact, if anything, it will be more expensive to implement, whatever the reason, and will take more time, and hence cost more.
I think he genuinely doesn't understand the difference between a normal map baked from a high-res mesh versus one generated from a height map. Being paid to work on a project is quite a nice change from sitting around waiting for CTC to get a new project rolling, so I'm willing to look past what I suspect is a bit of a lack of experience in our team lead/coder, but I think this is something that's going to need to be addressed.
Does anyone have a nice technical link of how normal maps function?
In regards to implementation, your programmer needs to generate two tangents to the normal. One pointing along the U axis, one along the V. There's lots of ways of doing this, and lots of examples.
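For anyone curious, the standard per-triangle approach derives the tangents from the triangle's positions and UVs. A minimal sketch (Python for illustration; the function name is my own, and a real implementation would also average the per-triangle results at each vertex and orthonormalize against the vertex normal):

```python
def triangle_tangents(p0, p1, p2, uv0, uv1, uv2):
    # position-space and UV-space edges of the triangle
    e1 = tuple(b - a for a, b in zip(p0, p1))
    e2 = tuple(b - a for a, b in zip(p0, p2))
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    # assumes non-degenerate UVs (determinant != 0)
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    # tangent follows the U axis, bitangent follows the V axis
    tangent = tuple(r * (dv2 * x - dv1 * y) for x, y in zip(e1, e2))
    bitangent = tuple(r * (du1 * y - du2 * x) for x, y in zip(e1, e2))
    return tangent, bitangent
```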
Next, in the pixel shader, you unpack the normal map from 0->1 to -1->1 space with a multiply and add (MAD is one instruction on most GPUs) and multiply each channel of the texture with its corresponding normal/tangent. (R = U-axis tangent, G = V-axis tangent, B = normal) You can pack these into a matrix on the vertex shader to make it even cheaper in the pixel shader. (though not by much)
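The unpack-and-rotate step described above, written out as plain Python for clarity (in a real pixel shader this is one MAD plus a 3x3 matrix multiply; the function name is my own):

```python
def unpack_and_perturb(texel_rgb, tangent_u, tangent_v, vertex_normal):
    # unpack 0..1 texture values into -1..1 vector space: a single multiply-add
    x, y, z = (2.0 * c - 1.0 for c in texel_rgb)
    # R perturbs along the U tangent, G along the V tangent, B along the normal
    n = tuple(x * tu + y * tv + z * vn
              for tu, tv, vn in zip(tangent_u, tangent_v, vertex_normal))
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)
```

A "flat" texel of (0.5, 0.5, 1.0) just returns the interpolated vertex normal, which is why unperturbed areas of a baked normal map shade exactly like the plain low poly.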
Normal maps built by determining the angle from one point of a heightmap to the next will produce sort of correct lighting, yes, for details inside a face, but are absolutely no substitute. The heightmap is totally unaware of the normals of the low poly, and so cannot correct minor lighting distortions like a proper normal map can.
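For reference, that slope-based conversion looks roughly like this (a hypothetical Python sketch using central differences, not anyone's production code). Note that every output normal is relative to a flat (0, 0, 1) base, which is exactly why this can't account for the low poly's smoothed vertex normals:

```python
def heightmap_to_normals(h, strength=1.0):
    # central differences give the slope at each texel; edges are clamped
    rows, cols = len(h), len(h[0])
    result = []
    for y in range(rows):
        row = []
        for x in range(cols):
            dx = (h[y][min(x + 1, cols - 1)] - h[y][max(x - 1, 0)]) * strength
            dy = (h[min(y + 1, rows - 1)][x] - h[max(y - 1, 0)][x]) * strength
            l = (dx * dx + dy * dy + 1.0) ** 0.5
            # always relative to a flat (0, 0, 1) base -- the low poly's
            # actual vertex normals never enter the computation
            row.append((-dx / l, -dy / l, 1.0 / l))
        result.append(row)
    return result
```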
If your programmer refuses to budge, the project won't get anywhere with that mindset at the helm anyway.
http://www.infinity-universe.com/Infinity/index.php?option=com_wrapper&Itemid=89