Hey guys, first post. Sorry if this has been covered, but I'm pretty new to the whole normal mapping game. Basically, I've modelled a character, detailed it in ZBrush, and generated both tangent and object space normal maps for it. I think the object space map looks pretty good and makes the low poly model look quite high poly, whereas the tangent map IMO doesn't look any better than a generic bump map.
I'll post renders later (the net on my 3D computer's playing up at the moment), but I'm trying to figure out if I'm doing something wrong. The tangent space map still leaves the low-poly character looking low-poly, just with a few details I could have painted as a bump map in a few minutes, compared to the time it took to ZBrush everything.
I've been reading that object/world space normals cause odd shading problems if animated, though upon a quick test I couldn't see anything particularly off when the model was deformed.
Replies
My guess is that one of your tangent space channels is flipped. You might try inverting the green channel and seeing if that corrects the issue.
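If you want to test that quickly outside your 3D app, here's a minimal sketch using Pillow and numpy (the filenames are placeholders, not anything from your scene). Bakers and engines disagree on whether Y points up or down in the map, which is why the green channel is the usual suspect.

```python
# Minimal sketch: invert the green (Y) channel of a tangent-space normal map.
# "normal_tangent.png" is a placeholder filename; point it at your own bake.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("normal_tangent.png").convert("RGB"))
flipped = img.copy()
flipped[..., 1] = 255 - flipped[..., 1]  # channel 1 = G; use index 0 for R
Image.fromarray(flipped).save("normal_tangent_flipped.png")
```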
The way I seem to remember it being explained to me is that the "theoretical problems" with deforming object space stuff actually exist with deforming tangent space stuff as well, but it works fine there too, obviously. Now I may be completely misquoting here and talking out of my ass; I'll try to get a better explanation.
But having said all that, there really just isn't much reason to use object space on a character. If it's a robot or something that's completely hard-edged, then object space would be a better fit.
[edit] OK, got a decent quote here:
EarthQuakeblah: can you write a little bit about what your final conclusion was in regards to deforming object space normals? I can't really remember what you told me
jeffrussell: ah
jeffrussell: object space normals: you can deform them
jeffrussell: simply requires a bit more work in the pixel shader
jeffrussell: but allows for faster geometry processing, so it's about a wash
jeffrussell44: faster geometry because you don't have to send tangents/bitangents etc
EarthQuake: is the idea that all the data is already there for doing the math, since you just keep track of the vert transformations with the rigged mesh? and just use that to figure out the correct lighting?
jeffrussell: right, basically we have to xform vert normals and tangents etc by the transform matrices that result from skeletal animation anyways - what you can do instead is transform each pixel in the normal map instead
jeffrussell: so again, that moves work to the pixel shader
jeffrussell: but it can be worth it
EarthQuakeblah: ok cool, thanks!
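To make jeffrussell's point concrete, here's a rough numpy sketch (all names are mine, not from any engine) of rotating each stored object-space normal by the skinning rotation "in the pixel shader". It assumes a single full-weight bone and RGB8 normals covering [-1,1] on all three axes.

```python
# Rough sketch: deform an object-space normal map per texel by the
# skeletal rotation, instead of sending tangents/bitangents per vertex.
import numpy as np

def deform_object_space_normals(normal_map, bone_rotation):
    """normal_map: (H, W, 3) uint8 object-space map.
    bone_rotation: 3x3 rotation from the skeletal transform."""
    n = normal_map.astype(np.float32) / 127.5 - 1.0   # decode to [-1, 1]
    n = n @ bone_rotation.T                           # rotate every texel's normal
    n /= np.linalg.norm(n, axis=-1, keepdims=True)    # renormalize
    return n                                          # ready for lighting
```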
I think there's a method to convert object space into tangent space; that might be how it's done.
With tangent-space normals, only the basis vectors need to be animated. That's an inexpensive per-vertex operation. That's why everybody uses tangent-space normals for animating objects.
Uncorrected, the lighting problem looks the same in both cases. But it's cheap to correct using tangent space, and more expensive to correct using object space.
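Roughly, the cheap path looks like this (a numpy sketch; the names are illustrative): rotate the to-light vector into tangent space once per vertex, and the per-pixel work is just a dot product with the stored tangent-space normal.

```python
# Sketch of the cheap per-vertex path for tangent-space lighting.
import numpy as np

def light_to_tangent_space(light_dir, tangent, bitangent, normal):
    tbn = np.stack([tangent, bitangent, normal])  # rows = the basis vectors
    return tbn @ light_dir                        # world -> tangent space

def shade_texel(ts_normal, ts_light_dir):
    return max(float(np.dot(ts_normal, ts_light_dir)), 0.0)  # Lambert term
```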
Here's a dude from back in early '04 :P, when normal maps (and my knowledge of them) were pretty new, haha.
http://media.pc.ign.com/media/568/568737/img_1942917.html
These days I only see tangent space maps being used.
Maybe I'm wrong about deforming object-space normals being expensive then.
Currently we take our normal (and, if not using tangent-space lighting, tangent and binormal), transform them based on the bones, then either convert the normal to tangent space or transform each vector to world space.
In the pixel shader we use the normal map as-is if we're using tangent-space lighting, or, if we're using WS lighting, we multiply our WS vectors by our normal map.
So maybe we are wrong: if we are doing WS lighting, we can use OS normal maps? Either way we are getting a transformation matrix to transform our normals in the pixel shader; as long as we have the correct matrix to do so, we should be alright?
I do feel like I'm missing something, though.
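For what it's worth, here's a tiny numpy sketch of the WS-lighting-with-an-OS-map idea above (names illustrative; it assumes the blended bone matrix is a pure rotation): if the pixel shader already has that matrix, lighting an object-space sample is one rotate per texel.

```python
# Sketch: world-space lighting from an object-space normal map sample.
import numpy as np

def shade_ws(os_normal_sample, object_to_world, light_dir_ws):
    n_ws = object_to_world @ os_normal_sample  # object space -> world space
    n_ws /= np.linalg.norm(n_ws)
    return max(float(np.dot(n_ws, light_dir_ws)), 0.0)  # Lambert term
```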
You can think of those spaces as an "axis cross" with 3 vectors, one for each major axis. Whether you feed 3 vectors to transform tangent space to world space or object space to world space makes no difference here. It's rotating the stored normal to a new orientation to take transforms into account.
What is cheaper is rotating the "to-light" vector into tangent space at the vertex level and sending that vector to the pixel shader, in which you just do the angle calculation with the stored tangent-space normal. Which is what Ryan suggested.
However, modern pipelines do all lighting at pixel-shader level (and mostly in world space; UE3 and Crysis for sure), because it's better for handling multiple lights, especially with branching support. So they only send "tangent to world" matrices and the world position to the pixel shader. And you can send "object to world" just as well.
A benefit of using OS is that you need fewer vertex attributes sent from the application to the vertex shader (i.e. no tangent/binormal), which means less storage cost for meshes and faster upload of dynamic data. I guess I need to make some AAA eye-candy demo with luxinia to give my words some more gravity.
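To illustrate that the per-pixel cost is the same either way, a tiny numpy sketch (names illustrative): both routes rotate the stored normal into world space with a 3x3 matrix; OS just skips the per-vertex tangent/binormal data.

```python
# Either way, one 3x3 rotate per texel gets the stored normal into world space.
import numpy as np

def ws_normal_from_ts(ts_sample, tangent, bitangent, normal):
    tbn_to_world = np.column_stack([tangent, bitangent, normal])
    return tbn_to_world @ ts_sample       # tangent space -> world space

def ws_normal_from_os(os_sample, object_to_world):
    return object_to_world @ os_sample    # object space -> world space
```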
Vertices are moving around inside your object. Their movement is not reflected in your object-to-world matrix. So if you're lighting correctly, then you must be doing something to account for the movement of vertices.
EDIT: I think I understand... do you mean that you're supplying a different "object-to-world" matrix from each vertex, to account for its deformation? So it's more like a "vertex-to-world" transform passed into the pixel shader?
This is very interesting to follow, though. I'm far from well-versed in the technical issues behind game-art, and it's nice to see some of it get worded in a few different, easy to comprehend ways.
Also, he mentioned precision issues with them: since tangent-space normal maps only cover the half-space (180 degrees on X/Y, 0 to 1 on Z), they are less likely to suffer from artifacts, since the channels only have to hold half the "range" of normals that an object-space map does.
The artefact thing is indeed true. Mostly you have XY from -1 to 1, just like object space, but Z normally is 0 to 1.
However, I also think most software you use to bake normal maps will do 8-bit [-1,1] for each channel of tangent-space maps...
Another trick is to do 16-bit [-1,1] for X and Y, store those in a 32-bit texture, and calculate Z (assuming it always points "up") in the pixel shader, for high-precision normal maps.
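In numpy terms the reconstruction would look something like this (a sketch; it assumes Z is always positive, so Z = sqrt(1 - X² - Y²)):

```python
# Sketch: rebuild Z from two 16-bit X/Y channels packed in a 32-bit texture.
import numpy as np

def decode_xy16(x_u16, y_u16):
    x = x_u16 / 32767.5 - 1.0   # [0, 65535] -> [-1, 1]
    y = y_u16 / 32767.5 - 1.0
    z = np.sqrt(np.maximum(0.0, 1.0 - x * x - y * y))  # assumes Z >= 0
    return np.stack([x, y, z], axis=-1)
```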
Not being able to mirror OS normals is one of the biggest cons of using them, but in most cases where I'd like to use OS maps, I'd rather have one that's half the size of a tangent map than have to suffer through the ugly smoothing and compression problems that tangent maps have.
CB: About mirroring, would you have to somehow specify which polys get mirrored, or use UV orientation or something? Because aren't tangents and normals pretty much thrown out the window when calculating OS maps?
[edit] Another thing I forgot to mention is the versatility they have when doing things like LODs: you can take a 4000-poly mesh, LOD it down to 400 polys, and as long as the UVs are still relatively the same, your lighting is still going to match up. Now I'm sure this isn't really a big deal to most people, but it's quite cool. No need to rebake your textures for your LODs.
About mirroring... I guess, as CB said, you could do something like use the vertex tangents to determine flipped verts. Generally we want to fix this problem so we can mirror TS normal maps, so we fix the tangents on a mirrored mesh (through some mesh processing utility for the engine), but we could potentially use something like this to determine if UVs are mirrored? I'm still not sure how that'd be done. (Of course, we could also just add some new vertex data to encode an X or Y flip, such as R or G vertex colors, and use that to determine flipping of OS channels.)
MightyPea: Please be my Valentine.
You send a per-vertex mirror normal. For the "correct" side (i.e. what is really stored in the normal map) that would simply be (0,0,0), and for the mirrored vertices it would be the mirror plane's normal. Based on whether that normal is zero or "some value", you can flip the sampled OS normal back to the correct "side".
You compute the plane normal in the custom tool by comparing vertex object positions of "overlapping" regions. As with tangent stuff, you need to split the mesh along seams in a preprocessing tool. And of course, when baking, you must make sure each UV chunk contains pixels of only one side.
The preprocessing tool could sample the texture to find out which is the "stored" side.
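A minimal numpy sketch of that flip (names illustrative; a shader version would use a branchless select). A (0,0,0) mirror normal means "use the stored normal as-is"; otherwise the sampled OS normal is reflected across the mirror plane.

```python
# Sketch of the per-vertex mirror-normal trick for object-space maps.
import numpy as np

def unmirror(os_normal, mirror_normal):
    # The preprocess tool could derive mirror_normal from the offset
    # between a vertex and its mirrored counterpart, e.g. normalize(p - q).
    if np.dot(mirror_normal, mirror_normal) < 1e-6:
        return os_normal                 # stored side: nothing to do
    m = mirror_normal / np.linalg.norm(mirror_normal)
    return os_normal - 2.0 * np.dot(os_normal, m) * m  # reflect across plane
```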
I've never seen a game asset use world-space normal maps.
"My guess is that one of your tangent space channels is flipped. You might try inverting the green channel and seeing if that corrects the issue."
Thanks for the thoughts guys, it's been quite informative to a noob like myself.
And Kevin, you're right. I flipped both red and green channels and it's looking much better, so thanks a lot. (I still think the overall shading in object space looks a lot closer to the original hi-res ZBrush model than the tangent version does, though.)