
Tangent Space Confusion

jocose polycounter lvl 11
I'm really trying to wrap my head around normal maps. I don't have any background in math, so some of this has been quite a learning experience, but what I would like to be able to do is explain to someone how they work from start to finish.

This is what I think I understand so far.

Tangent space is a coordinate system that is relative to a surface. In the case of a 3D mesh it is a coordinate system that is relative to a single triangle.

[image: Lesson8-AxisSystems.png]

With normal mapping we take points in world space and convert them into points in tangent space, which we then store in UV/texture space.


Here is where I get confused:

How exactly is the tangent basis calculated, and am I correct in assuming the origin of this coordinate system is at the center of the triangle?

Also, what do the colors in the normal map actually correspond to? I know that if you have shading errors on your model the normal map will compensate for those errors, but how do the vertex normals of a mesh fit into how the tangent space is calculated, and why do they affect the colors in the normal map?

If it helps to understand where I am coming from, I am imagining all of this kind of like object space but only relative to a single triangle, so each pixel sits on the triangle like polygons sit on a flat grid. Then they are tilted relative to the high-res mesh you baked from. I don't know if that's the correct way of looking at this, and the problem is that I don't see how the vertex normals of the low-poly mesh fit into that, or why they affect anything.

If someone could give their best shot at explaining it to me I would really appreciate it. It's been a struggle to understand and I don't have access to any 3D gurus.

Replies

  • MoP
    MoP polycounter lvl 18
    Tangent basis is usually calculated per vertex-face (kinda like per-vertex, but when you include UVs you can consider a single vertex to have multiple UVs if it's split per face).

    So you take the vertex normal, and the UV coordinates for the connected edges of that vertex, and do some calculations with that to find the appropriate tangents. You don't really need to understand that stuff all that much though (I don't :D )
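
    Just for reference, this is roughly what that per-triangle calculation looks like, written out as a little Python sketch (the function and variable names here are made up for illustration; real bakers and engines also average these per vertex and orthogonalize them against the vertex normal):

        def sub(a, b):
            return tuple(x - y for x, y in zip(a, b))

        def scale(v, s):
            return tuple(x * s for x in v)

        def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
            # Position edges of the triangle and their matching UV deltas.
            e1, e2 = sub(p1, p0), sub(p2, p0)
            du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
            du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
            r = 1.0 / (du1 * dv2 - du2 * dv1)  # assumes the UVs aren't degenerate
            tangent = scale(sub(scale(e1, dv2), scale(e2, dv1)), r)    # follows the U direction
            bitangent = scale(sub(scale(e2, du1), scale(e1, du2)), r)  # follows the V direction
            return tangent, bitangent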

    When it comes to pixels on a normal-map, you can consider them kinda like "offset normals" relative to the lowpoly mesh normals, so each pixel represents an individual normal, stored as an offset from the overall surface curvature.

    That was a pretty crappy explanation actually, I'm sure someone else has a better one, or a proper article on it...
  • Eric Chadwick
    I took a stab at it here.
    http://wiki.polycount.net/Normal_Map#TB

    Still working on it, some To Do notes at the bottom, and it could use more pics. Suggestions welcome...
  • jocose
    jocose polycounter lvl 11
    Yeah, I have been reading your description over and over, Eric. It's been really helpful, but I need more to wrap my head around it. I just don't understand how vertex normals and triangles translate into a coordinate system. It would also be nice to see some kind of visualization of the normals of each pixel as if they were in 3D space, so I could understand the relationship a bit better.

    Oh, and BTW, here are two other articles I have been reading. They go over it step by step; I'm sure it would click for a lot of people after reading them, but I'm still struggling.

    http://jerome.jouvie.free.fr/OpenGl/Lessons/Lesson8.php

    http://www.gamasutra.com/view/feature/1515/messing_with_tangent_space.php
  • Eric Chadwick
    Do you have Maya? It has a display method that shows the tangent axes in the viewport, actually on your mesh.

    UDK also has this, though it seemed a bit strange to me last I looked at it, because the arms of each axis tripod weren't at right angles to each other.

    So does Xnormal...
    [image: uvweld.jpg]
  • Eric Chadwick
    Also, tangent space is very very similar to UV space. If you can visualize how UV space relates to mesh space, then you can think of tangent space as being very similar.
  • jocose
    jocose polycounter lvl 11
    Thanks Eric. What I am not understanding is how those tangents at the vertices are used as a coordinate system to store vectors along the surface of the triangle. I mean, a triangle consists of 3 of those vertices. I thought that tangent space was relative to the surface of a triangle... I must be missing something.
  • MoP
    MoP polycounter lvl 18
    It's relative to vertex normals and UVs, which are used to describe the surface of a triangle, so it sounds like you're on the right track.
  • Eric Chadwick
    The tangent space is partly derived from the mesh vertex normals, which in turn are influenced by the directions of the triangles.

    The direction of a triangle's surface (towards us or away from us) is defined by the winding order (creation order) of its vertices. CCW order = triangle is facing us. That's one part of it.

    Then the vertex normals are (by default) basically in the same direction as the triangle, except that their angles are further influenced by the angles of their neighboring triangles. So for example take a cube that has no hard edges (so only eight vertex normals)... each of the normals is "pulled" in the direction of its neighbors, so the normals end up angled 45° away from the triangles.
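
    If it helps to see it as code, here's a tiny Python sketch of both ideas (made-up helper names, assuming counter-clockwise winding means front-facing):

        def cross(a, b):
            return (a[1] * b[2] - a[2] * b[1],
                    a[2] * b[0] - a[0] * b[2],
                    a[0] * b[1] - a[1] * b[0])

        def normalize(v):
            length = sum(x * x for x in v) ** 0.5
            return tuple(x / length for x in v)

        def face_normal(p0, p1, p2):
            # With CCW vertex order, the cross product of the edges points out of the front.
            e1 = tuple(b - a for a, b in zip(p0, p1))
            e2 = tuple(b - a for a, b in zip(p0, p2))
            return normalize(cross(e1, e2))

        def smooth_vertex_normal(face_normals):
            # Average the normals of every face sharing the vertex; this is the
            # "pulling" that tilts the normals on the soft-edged cube.
            summed = tuple(map(sum, zip(*face_normals)))
            return normalize(summed)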

    Now, those vertex normals are re-used in tangent space as one of the three vectors that make up the vertex's tangent basis; you can think of it as the Z vector. The other two vectors are derived from the U and V axes.
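
    In code terms, those three vectors just get stacked into a little coordinate frame per vertex, and the normal-map value is expressed in it. A rough Python sketch (hypothetical names, not any particular engine's implementation):

        def normalize(v):
            length = sum(x * x for x in v) ** 0.5
            return tuple(x / length for x in v)

        def tangent_to_mesh_space(sample, tangent, bitangent, normal):
            # sample is the decoded normal-map vector: x along U, y along V,
            # z along the vertex normal.  The three basis vectors act like the
            # columns of the TBN matrix, so the transform is a weighted sum.
            t, b, n = normalize(tangent), normalize(bitangent), normalize(normal)
            x, y, z = sample
            return normalize(tuple(t[i] * x + b[i] * y + n[i] * z for i in range(3)))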

    Dunno if that helps or hinders. :)
  • jocose
    jocose polycounter lvl 11
    Edit: Actually, Eric, I think after reading your wiki post you answered what I was trying to ask. What I don't get is this statement: "Each tangent basis vertex is a combination of three things:"

    What are you referring to when you say "tangent basis vertex"? That doesn't make any sense to me. Are you talking about the vectors that reside in tangent space and are converted into texture space, or are you talking about the orientation of the entire coordinate system for a given triangle, or something else entirely?

    Edit: I think I am getting closer to understanding this. Eric you say "When a pixel in the normal map is neutral blue (128,128,255) this means the pixel's normal will be pointing straight out from the surface of the low-poly mesh."

    This might be true if the triangle is using all hard edges, but what about if the vertex normals of the mesh are all averaged? It's my understanding that the normals of a triangle are interpolated across the surface to produce the individual normals for each pixel. Meaning, if your vertex normals aren't all perpendicular to the surface of the triangle, then neither will the normals for each pixel be.

    Is this correct? If so I think I get this.
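
    For what it's worth, here's a tiny Python sketch of the decode step being discussed, just to show that neutral blue really does come out as "straight along the surface normal" (a toy example, not any engine's exact code):

        def decode_normal(r, g, b):
            # Map each 0-255 channel back into the -1..1 range, then renormalize.
            n = tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))
            length = sum(x * x for x in n) ** 0.5
            return tuple(x / length for x in n)

        print(decode_normal(128, 128, 255))  # roughly (0.0, 0.0, 1.0)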
  • Eric Chadwick
    I think you have it correct. Problem is, we're artists talking about code, so we're a bit like the blind men and the elephant.

    I would suggest having a few discussions with a graphics programmer you know. Whenever I do this, I find the info sinks in much better when I try to write it all down. Maybe just how my brain works, but when I try to write (or teach) it immediately shows where I have gaps in my understanding of the topic. Then I ask more questions, rinse and repeat.
  • Parkar
    Parkar polycounter lvl 18
    My attempt at explaining this.

    Forget about the normal map first and just consider the lowpoly mesh.

    For each pixel that is rendered on the screen, a normal is calculated by averaging the normal of the polygon being rendered and the neighbouring polygons. You can google this to get examples of algorithms to calculate this. The important thing is that this gives us a normal for the pixel we are rendering. Let's call this vector the surface normal.
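
    In practice this ends up being the familiar smooth-shading interpolation. A minimal Python sketch of getting that per-pixel surface normal from the three vertex normals (the weights are the barycentric coordinates of whatever point of the triangle the pixel covers; the names are made up):

        def normalize(v):
            length = sum(x * x for x in v) ** 0.5
            return tuple(x / length for x in v)

        def surface_normal(n0, n1, n2, w0, w1, w2):
            # Blend the three vertex normals by the pixel's weights, then
            # renormalize, because the blend shortens the vector.
            blended = tuple(n0[i] * w0 + n1[i] * w1 + n2[i] * w2 for i in range(3))
            return normalize(blended)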

    What we want to be able to do with the normal map is adjust this normal to point in a certain direction. To do this we need to define a coordinate system and then feed in an offset normal that we add to the surface normal.

    We have one axis for the coordinate system (the surface normal). Let's call this axis Z. We need two more to get a coordinate system. Since we want to store the offset in a texture, we define the other two axes as the U and V directions of the UV map along the surface of the model. Or in other words, the X axis is a vector pointing along the U axis of the UV map on a plane orthogonal to the surface normal, and the same for the Y axis but using the V axis.

    This means we can map the RGB channels such that R is the offset along the X axis, G is the offset along the Y axis and B is the offset along the Z axis.
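
    As a sketch (Python again, purely illustrative; real shaders do this in HLSL/GLSL and conventions differ between engines), the channel mapping plus the offset looks something like:

        def normalize(v):
            length = sum(x * x for x in v) ** 0.5
            return tuple(x / length for x in v)

        def perturb_normal(rgb, x_axis, y_axis, surface_normal):
            # R drives the X (U-direction) axis, G the Y (V-direction) axis,
            # B the Z axis, which is the surface normal itself.
            ox, oy, oz = (c / 255.0 * 2.0 - 1.0 for c in rgb)
            return normalize(tuple(x_axis[i] * ox + y_axis[i] * oy + surface_normal[i] * oz
                                   for i in range(3)))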

    The very short version: the coordinate system the normal map "lives" in is defined by the surface normal and the U and V axes of the UV map, mapped onto a plane orthogonal to the surface normal.

    So, trying to answer the last part of your post: yes, the surface normal is not the same over the whole polygon if the edges of the polygon are smooth. This means it will first need to be calculated before we can find the coordinate system and apply the normal mapping to adjust the normal.

    Not sure if this makes it clearer or more confusing, but I hope it helps.

    Edit:

    Regarding the origin of the coordinate system, you can consider it to be located at the point on the surface you are currently rendering. The origin is not that interesting though, as we only need the coordinate system to define directions, not locations.
  • jocose
    jocose polycounter lvl 11
    Eric, Parkar, thanks so much.

    Truth be told, I was actually talking to our graphics programmer throughout the day while posting this, but he kept going way over my head. I did learn a fair bit about vector math from talking to him, but he had a hard time getting to the heart of the issue, especially since he couldn't draw it out for me at the time (he could only talk over MSN).

    That's why I post here. A lot of the time I find you guys have a way of explaining things that is relevant to an artist and much easier to understand.


    Tangent Space normal maps, and normal maps in general make so much more sense now.

    It makes sense why tangent space normal maps are the colors they are and not rainbow colored like world or object space normal maps. It's because they are only storing small tweaks from a default which is defined by the angle of the normals that make up the polygon on which the pixel resides. If there is no tweak, there is no color change.

    The relationship between the mesh's vertex normals and the normals of the pixels explains why the shading of the non-pixel-shaded low-poly mesh directly affects the shading of the pixel-shaded one, and why those shading errors can creep into your normal mapped models.

    The final bit of information I was lacking was how the coordinate system worked, but you filled that in with your last bit, Parkar. The idea that it's only there to store the offset, and is relative to each pixel because each pixel is its own surface and has its own tangent space, makes sense to me.

    It also backs up why we only store 0-1 values, and not some much larger values that describe positions spanning the entire surface of the triangle.

    I think I get it all now, polycount saves the day again. Thanks so much everyone.


    Edit: The one last thing that I don't get is why normal maps have to be normalized, and what they are being normalized to. I thought that maybe all the color values had to add up to the same value, but that isn't the case. Why would the unit length of the normal matter anyway, if we are only concerned about defining a normal (angle)? I think that's the only thing I don't get.
  • Eric Chadwick
    Nice stuff Parkar.

    jocose, if a normal is -1 (dirty yellow in a tangent space map) then it is pointing towards the back of the triangle (if the surface normal is pointing towards the front of the triangle, as is usually the case). So then that pixel won't be lit unless the light is coming from the back. I put an example here.

    If the shader doesn't re-normalize the pixel normals, they could be very different lengths from one to the next, which would make a very uneven lighting situation, depending on the variance from one pixel to the next.

    Apparently this is much more pronounced with specular lighting, so I've seen for example a non-normalized normal map produce a very speckled-looking specular, where it looked like a spotted mess.
  • jocose
    jocose polycounter lvl 11
    Hey Eric, I just spoke with our engine guy and I think I get this now. Often, the Lambertian reflection equation is used to calculate the shading.
    diffuse = Kd * (N dot L)
    
    I don't get how it works, I suck at math, but I was able to understand that the equation simply assumes the normal will have a length of one. It's an arbitrary assumption. It could take into account that the length may not equal one and compensate for that, but it doesn't. That means artists have to take care of it on their end.

    Say Kd, our diffuse value, is (150, 200, 120), and say N dot L works out to be 29. (150, 200, 120) * 29 would be way past white. Usually the result of N dot L is between 0 and 1; if your normals are too long, everything appears white, and that's why.
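
    Putting those numbers into a toy Python snippet (just to illustrate the point, not real shader code):

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        kd = (150, 200, 120)             # our diffuse value
        light_dir = (0.0, 0.0, 1.0)      # unit-length light vector

        unit_normal = (0.0, 0.0, 1.0)    # length 1
        big_normal = (0.0, 0.0, 29.0)    # same direction, length 29

        # With a unit normal, N dot L stays between 0 and 1 and the color is sensible.
        print([c * dot(unit_normal, light_dir) for c in kd])  # [150.0, 200.0, 120.0]

        # With the un-normalized normal, the same equation blows the color way past white.
        print([c * dot(big_normal, light_dir) for c in kd])   # [4350.0, 5800.0, 3480.0]
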
    One thing that really confused me about your post was that you used your tree leaf example with an inverted normal map as an example of an un-normalized normal map.

    I believe, and please correct me if I'm wrong, that normalize only refers to the unit length of the normal and has nothing to do with its direction. To test this I took your image into photoshop and ran the inverted normal map through the Nvidia normalize filter and it didn't change a thing, because it was already normalized.

    So direction does not mean "un-normalized"; the normal's length is the only thing that determines whether it is normalized or not.

    If that's the case, you may want to correct that on the wiki so it's not confusing.

    Provided that what I just said checks out I don't think I have any more questions about tangent space normal maps, for the time being at least. :)

    Edit: I couldn't help myself, I found a cool video explaining how a dot product works; it helps explain why the unit length has to be 1: http://www.youtube.com/watch?v=KDHuWxy53uM I'm guessing 99% of people won't care, but for anyone who does, there you go.


    Basically we are getting the dot product of the light vector and the normal vector, and if that number comes out larger than 1 then we are going to push our diffuse values BEYOND 255. Obviously we can't go beyond 255; it's as white as we can go. So in order to not break this rule we normalize (make it equal 1) the length of our normal vector, so that its dot product with the light vector is always less than or equal to one, and we never go beyond 255.

    It really comes down to making sure the math we are doing stays in the same range as an RGB image where 0 is one extreme and 255 is the other. The math needs to be 0-1 not 0-some random big number.
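
    A tiny check of that in Python: normalizing is just dividing a vector by its own length, and once both vectors are unit length their dot product is the cosine of the angle between them, so it can never get bigger than 1 (toy example with made-up numbers):

        def length(v):
            return sum(x * x for x in v) ** 0.5

        def normalize(v):
            l = length(v)
            return tuple(x / l for x in v)

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        n = normalize((3.0, 4.0, 12.0))  # length was 13, now 1
        l = normalize((0.0, 1.0, 1.0))
        print(dot(n, l))                 # always between -1 and 1 (about 0.87 here)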
  • Eric Chadwick
    Awesome post! And a great lecturer, I like his style a lot.

    Your description of the issue sounds good to me, I will correct the wiki on this.

    As an aside, I wonder if the >1 result of pushing beyond 255 has anything to do with HDR rendering, like Valve showcased in their Half-Life 2 Lost Coast level? Bright values can be pushed past 255, so things look "blown out" or over-exposed. Probably a different equation...
  • Parkar
    Parkar polycounter lvl 18
    Pushing the color brightness beyond 255 isn't really the problem with non-normalized normals. This happens all the time when rendering in HDR. The problem is that if the length is not the same for all the normals and we are not compensating for it, a pixel with a "bigger" normal will look brighter.

    Of course, if you aren't doing HDR rendering, going over 255 would be a problem too.
  • jocose
    jocose polycounter lvl 11
    True, Parkar, the issue is really just that the math assumes the normal will be normalized; whether we are over 255 or not may or may not be an issue. Poor explanation on my part. I didn't know that about HDR though. Very interesting.
  • r_fletch_r
    r_fletch_r polycounter lvl 9
    I'd imagine this is an optimization. You don't really want to have to re-normalize every time you do a calculation.

    I think the idea of HDR is that the lighting isn't calculated with a 255 maximum. I'm pretty sure HDR is calculated in 32 bits per channel, which is then mapped to the 255 range using tone mapping.

    You can see this when you exit the building at the top of the cliffs in Lost Coast. Your 'eye' adjusts as a higher light level is mapped down into the 255 range of the screen.
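
    If anyone wants to see the idea in code, the simplest version of that mapping step is something like this (a toy Reinhard-style curve in Python, definitely not Valve's actual implementation):

        def tone_map(hdr_value, exposure=1.0):
            # Scale by exposure (the "eye adjusting"), squash the unbounded HDR
            # value into 0..1 with a simple curve, then quantize to 0..255.
            v = hdr_value * exposure
            ldr = v / (1.0 + v)
            return int(round(ldr * 255))

        print(tone_map(0.5), tone_map(4.0), tone_map(50.0))  # roughly 85 204 250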
  • Parkar
    Parkar polycounter lvl 18
    r_fletch_r wrote: »
    I'd imagine this is an optimization. You don't really want to have to re-normalize every time you do a calculation.

    I think the idea of HDR is that the lighting isn't calculated with a 255 maximum. I'm pretty sure HDR is calculated in 32 bits per channel, which is then mapped to the 255 range using tone mapping.

    You can see this when you exit the building at the top of the cliffs in Lost Coast. Your 'eye' adjusts as a higher light level is mapped down into the 255 range of the screen.

    Correct on all accounts.

    It's much better to simply make sure the normals are always normalized than to normalize them when rendering.

    You can also use the blue channel to store an extra 8-bit map (like a height map, for instance) if you are not interested in supporting normals pointing backwards (negative Z). If we can assume the normal is normalized, the shader can calculate the missing Z component using the red and green channels. Using this technique you can save on memory at the cost of shader complexity. Not sure how common this is though.
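
    That reconstruction leans entirely on the length-1 assumption; it looks something like this (Python sketch, assuming the usual 0-255 channel encoding):

        def reconstruct_normal(r, g):
            # Decode X and Y from the two stored channels, then recover Z from
            # x^2 + y^2 + z^2 = 1, taking the positive root (no backward-facing normals).
            x = r / 255.0 * 2.0 - 1.0
            y = g / 255.0 * 2.0 - 1.0
            z = max(0.0, 1.0 - x * x - y * y) ** 0.5
            return (x, y, z)

        print(reconstruct_normal(128, 128))  # roughly (0.0, 0.0, 1.0)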

    On the HDR issue, anything that is above 255 after the tone mapping can be used as the strength of a bloom pass. This way any pixel that's clamped down to 255 can glow to indicate that it is very bright.