Normal map types

adam polycounter lvl 19
I keep hearing malarkey about tangent space, world space, (outer space) normal maps, but I've not quite wrapped my head around their specifics.

Does anyone have some insight into this malarkey? Or some literature on the net somewhere that I could read? I'm wondering about their definitions and the pros/cons of each method.

Thanks! <3

Replies

  • Rob Galanakis
    Something I wrote up:
    http://www.twcenter.net/forums/showthread.php?t=84491
    There are more comprehensive articles available; it depends on how much you want to understand.

    Essentially, you need to know what world space is, what object (or local) space is, and what tangent space is. You also need to know what a "normal" is (if you don't know this, start there), and understanding the idea behind lighting calculations can help. Once you understand those, things should fall into place.

    If anything isn't clear, or I can/should elaborate more, just ask. Also check out poop's site, and www.bencloward.com has some good info. Googling for object space tangent world normal map may turn up more as well (I'm sure I've seen some good stuff).

    Generally, do everything in tangent space. You can use it on deformable and arbitrary surfaces, it can repeat, etc., and the extra cost of going from tangent space to world space in the shader (essentially multiplying your expanded RGB values by the world-space TBN matrix) is trivial. Many complex shaders, especially those that use the tangent or binormal (such as hair shaders), can only be done in tangent space, and you'll generally see the higher-end stuff (parallax occlusion, relief, cone, etc.) done in tangent space as well, for a variety of reasons which I've only discovered by trying to do them in world space, and which I don't fully understand.
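
    A rough Python sketch of that tangent-to-world step, just to show how little it is; the texel and basis vectors are made-up example values, not any engine's actual code:

    def normalize(v):
        l = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
        return (v[0] / l, v[1] / l, v[2] / l)

    # RGB texel from the normal map, stored 0..1 -> expand to -1..1
    texel = (0.5, 0.5, 1.0)                     # the "flat" colour, straight up in tangent space
    n_ts = tuple(c * 2.0 - 1.0 for c in texel)

    # interpolated per-pixel tangent, bitangent and normal in world space
    # (assumed orthonormal here to keep the example simple)
    T, B, N = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

    # world normal = x*T + y*B + z*N, i.e. multiplying by the TBN matrix
    n_world = normalize(tuple(n_ts[0]*T[i] + n_ts[1]*B[i] + n_ts[2]*N[i] for i in range(3)))
    print(n_world)   # (0.0, 0.0, 1.0) for the flat texel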
  • Eric Chadwick
    Whoa, awesome link. Looking forward to poring over it.

    I posted a quickie rundown a while back, a couple posts down in this thread...
    http://forums.cgsociety.org/showthread.php?s=&threadid=138110
    On the 2nd page, Ysaneya offered some more goodies.
  • MoP
    MoP polycounter lvl 18
    Nice one Professor420, I needed something like that. It's always useful to know what programmers are going on about when they use technical jargon.
  • Joshua Stubbles
    Joshua Stubbles polycounter lvl 19
    /me runs from my old team member...
    Good stuff as always, Professor.
  • Eric Chadwick
    Huh. I found your TW thread a bit difficult to follow. I guess being an artist I tend to learn better with more visual examples.

    I already understood (for the most part) how each vertex normal is converted into world space, so the light vector can be compared to it, then the surface is shaded appropriately. But then I read your thread and examined your image, and it seemed more confusing.

    If you're still interested in more writing, I think it might help to make a diagram using 3 axis tripods... one for world space, another for the object space, and another for the vertex normal on the surface.

    Am I right in conceptualizing that the vertex normal's vector is extracted, rotated to match the object tripod, then rotated to match the world tripod, and finally compared with the light vector? That doesn't sound right.
  • CrazyButcher
    CrazyButcher polycounter lvl 18
    Object space is just the normals as they are stored. So let's say you have a cylinder: the top flat area would have the normal (0, 0, 1) (assuming Z is up).

    Now in world space it could be that the cylinder is rotated by 45°, so after the object->world transform the normal is, say, (0, 0.7, 0.7).

    This step can be omitted if you either pass the light in object space, or you do the lighting in view space, in which case you normally just transform from object->view.

    In view space it could be that you're looking along the side of the rotated cylinder, so the normal points fully to the side: (0, 1, 0).
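
    In rough Python (made-up axis and angle, just to show the object->world step is nothing more than a rotation):

    import math

    def rotate_about_x(v, degrees):
        a = math.radians(degrees)
        x, y, z = v
        return (x, y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a))

    n_object = (0.0, 0.0, 1.0)                   # flat top of the cylinder, object space
    n_world = rotate_about_x(n_object, -45.0)    # tilt the cylinder 45 degrees
    print(n_world)   # roughly (0.0, 0.707, 0.707); the sign just depends on which way it tilts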

    Now the really complex and weird thing is tangent space, because it is built from "UV vectors" and the actual object-space normal. You could imagine an axis tripod that always has its Z axis sticking out perpendicular to the triangle... (very similar to the normal; the normal is just the Z axis, and the X and Y axes are built from the UVs, sort of).

    Now think of how vertex normals are made: normally the average of the connected triangles' normals. Something similar is done for those X/Y (tangent, bitangent) vectors. But compared to the surface normals, the connected triangles' UV coordinates can be very different. Hence the mess.
    And often you need to split things up and create two vertices (think along the lines of smoothing groups). At UV splits this is less of an issue, as we normally have two vertices anyway because of the different UV coordinates, but then we have the problem that the tangent spaces can be very different from one "edge" to its neighbour.
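
    A rough sketch of the usual way a per-triangle tangent/bitangent gets built from positions and UVs (conventions differ between tools; the triangle here is arbitrary example data):

    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def scale(a, s):
        return tuple(x * s for x in a)

    def triangle_tangent_bitangent(p0, p1, p2, uv0, uv1, uv2):
        e1, e2 = sub(p1, p0), sub(p2, p0)
        du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
        du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
        r = 1.0 / (du1 * dv2 - du2 * dv1)        # degenerate UVs would divide by zero here
        tangent   = scale(sub(scale(e1, dv2), scale(e2, dv1)), r)
        bitangent = scale(sub(scale(e2, du1), scale(e1, du2)), r)
        return tangent, bitangent

    # example triangle lying in the XY plane, UV-mapped straight on
    print(triangle_tangent_bitangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
                                     (0, 0),    (1, 0),    (0, 1)))
    # -> ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))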

    Oh, and yeah, an image would help, I guess. Probably I just overcomplicated stuff now.

    some good articles here, too:
    http://www.delphi3d.net/articles.php

    ... after sketching:

    [spaces.png: 2D sketch of world, object and tangent space]

    I kept it 2D because I think it might be easier to get.

    Basically it shows that "spaces" are just different orientations of stuff. They could contain skewing, but normally you try to avoid that; without skewing they are called "orthonormal". Tangent space in particular may be skewed, however, because the "UV" triangles typically don't have evenly matched side lengths.

    Now in the first tangent space pic, take the arrow pointing to the right; that one arrow represents two in 3D. Their orientation is in the "normal" plane (tangential to the triangle plane), but their lengths/directions within that plane depend on the UVs of the triangles which use this vertex.

    Normally this stuff would work in "per-triangle" space, because every triangle defines a single plane/orientation (side note: hence the q3 tags were only a single triangle).

    But when we use smoothing groups and all that, we merge the information down to the vertices. So when we want smooth shading, all the connected triangles' normals are summed up and "renormalized" (brought back to unit length), and that becomes the vertex's normal. Similar stuff happens with the other vertex attributes. Of course, when we don't want "smooth" results, we create as many vertices as needed (at worst one per triangle that uses this point), which then have their own individual attributes that can differ.
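
    That averaging step in rough Python (the face normals here are made up):

    def normalize(v):
        l = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
        return (v[0] / l, v[1] / l, v[2] / l)

    # normals of the triangles that share this vertex
    connected_face_normals = [(0.0, 0.0, 1.0), (0.0, 0.707, 0.707), (0.707, 0.0, 0.707)]

    summed = (sum(n[0] for n in connected_face_normals),
              sum(n[1] for n in connected_face_normals),
              sum(n[2] for n in connected_face_normals))
    vertex_normal = normalize(summed)
    print(vertex_normal)   # the smoothed normal shared by those faces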

    That's because the rasterizing hardware we mostly use doesn't have much "info" about what it draws (at least before Shader Model 4). There is no information about what a triangle is; a vertex has no idea what it originally belonged to, nor which other vertices make up the same triangle, they are all dealt with independently.
    The vertex shader gets a bunch of vertex attributes (position, texture coordinates...) and generates a bunch of them (position in '2D', vertex color...).
    Those attributes (other than position) are not fixed in any way; they are just lots of numbers that can be used for anything, say coloring, passing vectors, whatever.

    Now after the vertex transforms, the triangle is shaded. Every pixel within a triangle gets a mix of the 3 vertices' attributes. The values are linearly interpolated; that is, depending on the position within the triangle you get, say, 30% of vertex A, 50% of B and 20% of C, all summed up.

    Now with those mixed values you run the pixel shader to compute the output color. That color is then "alpha tested" and "blended" with the framebuffer, or discarded.

    What this means is that we lose lots of information and have to live with the "linear" interpolation of those vertex attributes that are fed to the pixel shader.

    Now linear interpolation can suck. Think of two directions:

    <-- and -->
    Linearly interpolated halfway between them, you get 0.
    But what if we actually meant an orientation, where no 0 is tolerated and we want a vector again, like "up" in this case?
    Things like this are the reason why a normal-mapped render in the 3ds Max viewport may look completely different than when rendered with the scanline renderer: the scanline pipeline knows a lot more about your model, and may interpolate non-linearly between vertices when creating "pixels".
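
    A rough Python sketch of that interpolation problem (made-up barycentric weights, not how any particular hardware actually does it):

    def normalize(v):
        l = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
        return (v[0] / l, v[1] / l, v[2] / l) if l > 0.0 else (0.0, 0.0, 0.0)

    def interpolate(a, b, c, wa, wb, wc):
        # wa/wb/wc are the pixel's barycentric weights, e.g. 0.3 / 0.5 / 0.2
        return tuple(wa * a[i] + wb * b[i] + wc * c[i] for i in range(3))

    left  = (-1.0, 0.0, 0.0)   # <--
    right = ( 1.0, 0.0, 0.0)   # -->
    halfway = interpolate(left, right, (0.0, 0.0, 0.0), 0.5, 0.5, 0.0)
    print(halfway)              # (0.0, 0.0, 0.0): the direction information is gone
    print(normalize(halfway))   # renormalizing in the pixel shader can't bring back "up" here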

    uh, not sure if that helped, this may need more time and better illustration.
  • jogshy
    jogshy polycounter lvl 17
    You missed FarCry's "clone space" normal maps!
    I have no idea what the hell they are... I suspect something to avoid mirrored UV problems?