Can someone explain the advantages of a normal map over a bump map? I did some quick tests in Maya today, and the only difference I can see between the two is that normal maps can be generated from hi-res geo right out of the box. You will see in my examples that there are a few artifacts on the bump map, but this is due to the tool I used to convert the normal map to a greyscale bump map; it left some speckles behind. I also noticed some of the Doom 3 textures are regular greyscale bump maps as opposed to RGB normal maps. Any experts out there? Cody, I'm sure you can answer this one.
I always thought RGB normal maps were more accurate than a greyscale heightmap (bump map) because each pixel actually stores a vector for the high-detail surface's angle, rather than just a height-difference.
I don't know much about the tech though... I just imagined RGB normal maps to be a more accurate depiction of the highpoly surface in relation to the lowpoly one.
In Doom 3 they use extra bump maps in combination with the normal map to do fine details. You can also overlay a bump map on your normal (after converting it to a normal with nVidia's plugin) and get the same result with only one map... but the way id does it, it's a little easier to edit your bump map. The most common comparison is painted bump maps vs. normal maps generated from high-poly geo, where it's obvious which is better. The main reason everyone is using normals over bumps these days (even painted bumps converted to normals) is SPEED: the engine has to convert a bump map to normal space before it renders it anyway, so you save the engine a little bit of work by doing it yourself. Also, normal maps are 24-bit whereas regular bump maps are only 8-bit. Hence the extra precision, some people say.
So is the consensus that normal maps are better for render engines, and that is why everyone is very excited about them? I had originally thought a normal map had an x, y, z vector and that created a better lighting model, but I've yet to be able to create a normal map whose curvature a bump map can't mimic. Earthquake, wouldn't it be easier to generate a greyscale bump map from high-poly geo and then hand paint extra details in the greyscale image, as opposed to trying to add detail to a multi-colour normal map and doing that secret composite layer trick in Photoshop just to see your change?
No, it's not exactly the same...
I need someone like the esteemed Mr. Chadwick to present a nice link that will put this all into perspective... I've looked through Google a couple of times and can't find any info comparing the two processes in one easily digestible chunk...
"In 3D computer graphics, normal mapping is an application of the technique known as bump mapping. While bump mapping perturbs the existing normal (the way the surface is facing) of a model, normal mapping replaces the normal entirely. Like bump mapping, it is used to add details to shading without using more polygons. But where a bump map is usually calculated based on a single-channel (interpreted as grayscale) image, the source for the normals in normal mapping is usually a multichannel image"
Plus I tend to believe that NMaps store more information than BMaps for the same resolution. Hence for the same result you might need less rez when using normal maps... which can explain why displacement maps used in the precomputed world are often rather large in size.
Chadwick! Help!
There ya go! Beat that hardcore technical explanation posm! ;-p
A normal map stores the slope/tilt of each pixel in the texture.
A bump map stores the simulated height of each pixel in the map. The renderer then has to convert these heights into slopes when it renders. It does this by comparing the current pixel with all its neighbors to find its slope.
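In code, that neighbor comparison is just a couple of differences per pixel. Here's a minimal sketch (Python with numpy assumed; the function name, the strength parameter, and the wrap-around edge handling are mine, not from any particular tool):
[ CODE ]
import numpy as np

def height_to_normal(height, strength=1.0):
    # 8-bit greyscale heightmap in, 24-bit RGB normal map out.
    h = height.astype(np.float32) / 255.0
    # Slope at each pixel = difference between its neighbors
    # (central differences along x and y; edges wrap around).
    dx = (np.roll(h, -1, axis=1) - np.roll(h, 1, axis=1)) * 0.5 * strength
    dy = (np.roll(h, -1, axis=0) - np.roll(h, 1, axis=0)) * 0.5 * strength
    # The surface normal leans against those slopes.
    n = np.dstack((-dx, -dy, np.ones_like(h)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)  # normalize to length 1
    # Pack -1..1 into the usual 0..255 RGB encoding.
    return ((n * 0.5 + 0.5) * 255.0).astype(np.uint8)
[/ CODE ]
The strength factor is also where "varying degrees of bump" comes from, as mentioned further down: scale the heights before conversion and you get a stronger or weaker normal map.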
Back in the day when all bump maps were hand-painted greyscale maps, they were converted to "dot-3" maps for game engines. It was really just a normal map. If I remember correctly, the name dot3 comes from the fact that it took 3 dot-product math operations to figure out the lighting per pixel with that map. But they are the same thing in the end.
Normal maps are better because:
1. The renderer wants the normal (slope/tilt) of the pixel for lighting. In a normal map, that's exactly what the data represents. In a bump map, it has to be calculated on the fly.
2. They are more accurate. For the most part, it's not really noticeable though. A greyscale bump map can only hold 256 levels of height, so it is technically possible to come up with a map that can't be represented in bump form, only normal form. Very sharp edges are easier in normal maps. I think a recent Game Developer has a great image that illustrates this well: each pixel in a bump map is a little block of a certain height. Lay down a sheet of rubber on top of this to see the shaded, sloping surface. There's a loss of detail in there.
3. They are better for representing something which is difficult to paint manually (i.e. high-rez geo). Remember, it's not just storing the curvature of the high-rez model, but the difference in curvature between the high-rez model and the low-rez model. That's tough to paint.
Bump maps are better because:
1. They are easier to visualize.
2. They are smaller on disk (although some of the new hardware's compressed normal map formats are quite comparable)
In the end, the best of both worlds comes from the combination of the two:
A: Generate a normal map from the high-rez model. Don't model down too small (pores, scars, stitching, etc). It just leads to model bloat and excessively dense meshes. Although cool to show off, they are pretty useless in the real world.
B: Hand paint a greyscale bump map for small details: fabric pattern, scars, stitching, pores, hairs, etc. Save yourself the trouble and paint this, not model it! Faster revision time, quicker to do, quicker to preview. I know zBrush is rad, but imagine detailing the mesh down to the pores & moles, and your AD asks for changes. Either go back to an old version, or try to smooth them out and remodel. Or just repaint in Photoshop.
C: Combine the bump map and normal map into a new normal map. This renders fast and has the best quality.
Doom 3 did it this way as well. Their pipeline was automated too: they supplied a normal map and a greyscale, and a strength for the bump in the shader. The engine would generate a new .dxt file for those when loading (composite while loading) if it needed to (ie, if the normal map or bump map were newer than the composite). Great for developing, and fast for anyone who wasn't changing the maps. Contrary to what most people think, it didn't render with both a normal map and a bump map, it was combined and rendered only with the final normal map (check the .paks, the .dxt's are the combined maps).
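That staleness check is nothing fancy, just file times; something along these lines (a sketch of the idea only — the function and paths are hypothetical, not id's actual code):
[ CODE ]
import os

def composite_needs_rebuild(normal_path, bump_path, composite_path):
    # Rebuild if the composite is missing, or older than either source map.
    if not os.path.exists(composite_path):
        return True
    out_time = os.path.getmtime(composite_path)
    return (os.path.getmtime(normal_path) > out_time or
            os.path.getmtime(bump_path) > out_time)
[/ CODE ]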
Both have a single failing: Imagine a flat panel on top of another flat panel. Neither can show this properly without a tiny bit of bevel around the edges. If you're capturing a normal map, it's lost completely. If you're painting a grey-scale, it's beveled a bit. Neither is really the best for that.
Here's an example of where they both suck. Yes, it's a bit contrived, but now you can see why all the techy panels in the Doom 3 world are kind of soft, or why they slightly bevelled everything in the first place:
(these are sized down, so some of it's lost I guess)
[img]http://www.members.shaw.ca/whargoul/bad normals 01.jpg[/img]
I should have provided a zoomed view of the edges. They are badly blurred, and even overlap each other in a funny way. Ugly ugly.
And here's a side view:
[img]http://www.members.shaw.ca/whargoul/bad normals 02.jpg[/img]
The low rez fires rays up into the high rez, and stores the surface tilt it sees. Since they are all flat, the map is all flat as well (128,128,255)! Sucky. Characters rarely get stuff like this, especially since the base mesh is curved. But vehicles and enviro art can easily hit this bad case.
Right, and the problem you mention there is the reason I model everything that I plan on normal mapping with sub-d surfaces, and don't ever keep any 90-degree angles that don't at least have some small bit of smoothing on the edges. But it still doesn't look the greatest.
Okay, now this makes more sense. Cody, so is your secret normal map workflow to bake down your normal information from the hi-res model to get your base normal map, then use a greyscale image for the fine detailing, convert it to a normal map using the nVidia plugin in Photoshop, and then composite the two normal maps with the overlay-and-normalize trick?
Nice objective reply there! Nothing we say from this point on will contribute to the discussion...
The workflow's not much different from cinematics is it, build a detailed med frequency mesh, and paint a grayscale bump for the super high frequency details. As far as it being a secret workflow, well, CG artists have been doing it a looong time.
I don't see why you can't generate a height map as a working file, and just convert everything to a normal map using the Nvidia filter...
Are there any drawbacks to using the Nvidia filter?
It didn't seem very clear in the above posts, but the speed loss is at loading time rather than a loss of FPS in-game, and it only happens once (at least in Doom 3) since, as Whargoul said, they are stored in a compressed format.
Also, snowfly, the drawbacks have been covered already: heightmaps are harder to make to represent complicated objects, and less accurate.
I meant, if you extracted a height map/displacement out of ZBrush and ran it through the nVidia filter, would the result differ any from something out of Kaldera? I don't have any of those tools, so obviously I can't test for myself. But they are essentially the same information, right?
I think (don't quote me on this) that I'm right in saying you can apply the bump map directly to the HIGH poly model, then render out the normal map, and get a more detailed map than from just the high poly on its own — and screw the Photoshop plugin!? Cut out the middle man, so to speak... On a related topic, I've been using the render to texture feature in Max 7 lately, and I get some parts of my UV map just showing up completely black, other parts perfect, and others with pixels screwed up on them... Why is this? I kinda thought it was something to do with the direction I projected my UV map (away from or towards the normals)... is this right? What's wrong, am I just doing it completely wrong?
Yes, but for the reasons mentioned above, it could be faster to paint the bump in Photoshop rather than re-generate the combined normal/bump map every time, if you are making multiple iterations... plus, your high-res model would need UV coordinates if you aren't going the procedural route.
Although yeah that's possible, the ATI Normal Mapper renders a normal map out of a high-res model and a bump map image.
Cubik - Well, I think it does support mirrored UVs, maybe not! But still, the model wasn't mirrored. DOES ANYONE KNOW WHY THIS IS HAPPENING??
Snowfly - I didn't think about the fact that you'd have to unwrap the high poly! Dahhh. But maybe you could hash together a crappy basic UV map... still, it would be easier with the PS tool... ahh well.
OK, I bet none of you know why my render to texture isn't working :P *secretly hoping someone will try to prove me wrong* Erm, I didn't say anything... dum dee dum dee dum.
[ QUOTE ]
the esteemed Mr. Chadwick... Beat that hardcore technical explanation posm... Chadwick! Help!
[/ QUOTE ]
LOL! Wharghoul's the man, not me.
Only thing I might add is that if you "normalize" your normal map to only represent normals that are 1 unit long (which most do), then you're basically only storing the data equivalent of the surface of a sphere... a thin shell of slope data.
If you use a height map then you can store a full volume of data, a height field. So in essence an 8bit grayscale height map can represent more than a 24bit normal map. Or at least that's what they tell me.
But height maps aren't easy to generate from geometry, for the reasons mentioned above. I think the technique Wharghoul outlined is an excellent one.
Another cool thing you can do with a height map is change the strength of the normal map you make from it, so you can get varying degrees of bump out of it. Whereas with a pre-created normal map you're stuck with the strength it contains.
BTW, the nVIDIA Photoshop filter works pretty well, in my workflow.
Hi Eric, I don't think what you have said is true. The length of a normal is irrelevant; normalizing a normal map is probably only necessary if you attempt to hand edit one. Normal data is extracted from a heightmap to be calculated on the video card, so a normal map cannot be less accurate than a heightmap.
Changing the strength of a normal map is not necessary: if you are generating it from a high-poly model it will be exactly how you want it, and if you are trying to paint it by hand you are crazy!
An example proving how a normal map is more accurate: make a heightmap with a gradient going from one side to the other. If you had more than 256 pixels, the result in-game would be banding across the surface, whereas you could represent this with a single colour in a normal map.
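You can see that banding for yourself with a throwaway script (numpy assumed, numbers made up):
[ CODE ]
import numpy as np

# A 1024-pixel-wide ramp stored as 8-bit greyscale: only 256 height
# levels exist, so each level smears across roughly 4 pixels.
ramp = np.linspace(0, 255, 1024).astype(np.uint8)

# The slope a converter reconstructs: zero inside each flat step, with
# a spike at each step boundary -- that's the banding you see when lit.
slope = np.diff(ramp.astype(np.int16))
print(np.unique(slope))  # [0 1] instead of one constant slope

# A normal map stores the true constant slope directly at every pixel,
# so the same surface is a single flat colour with no banding.
[/ CODE ]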
Yeah, same here. I post so I can learn. Please do correct me when I'm wrong.
As I understand it, a normal map is what we use on current hardware for bumpage, so a height map has to be converted into one to give you the effect you want. But the height map stores more data, you're just throwing away the extra info when you convert to a normal map.
The way I see it, a normal map is just a derivative, only a slope. But a height map is height. When I build a quality height map from geometry, and convert that to a normal map, I get the same normal map that I would have gotten if I went directly to a normal map. But... I can edit the height map much easier, overlaying fine detail, or fixing problems, etc.
If I'm doing character maps, then changing the bump strength really isn't needed. I just want the geometry reproduced. But when I'm doing effects, or level maps, or water, or whatever, then changing the strength has value (for me).
About renormalizing... the length of the normal affects the intensity of the light contribution calculated for it. If you're just generating a map from geometry, then your length will (should!) always be 1. But when you edit maps (it can be done), then the lengths can be altered.
If you have a negative normal, you'll be telling that texel to be lit from behind.
I'm not sure if I understand your gradient example. If you want a slope that's different from your actual geometry, then the normal map would also need a gradient, no?
The light intensity is effectively multiplied by the vector length, so you don't want non-normalized vectors.
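To make that concrete: the diffuse term is just a dot product, so any extra (or missing) length in the stored vector scales straight into the brightness. A sketch, assuming the usual 0..255 to -1..1 decode:
[ CODE ]
import numpy as np

def decode(rgb):
    # Usual normal map decode: 0..255 -> -1..1
    return np.array(rgb, dtype=np.float32) / 127.5 - 1.0

light_dir = np.array([0.0, 0.0, 1.0])  # light straight down the normal

flat = decode((128, 128, 255))     # roughly (0, 0, 1), length ~1
print(np.dot(flat, light_dir))     # ~1.0 -> full brightness

half = flat * 0.5                  # same direction, half the length
print(np.dot(half, light_dir))     # ~0.5 -> half as bright

# A normal with negative Z dots negative: the texel is "lit from behind".
[/ CODE ]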
I think what Eric means here is that you can use the heightmap for displacement mapping.
You can change the strength of a normalmap by adding a multiple of the (0 0 1) vector and renormalizing. Won't strengthen your normalmap (only weaken) but generally, when it's rendered from geometry you can just flatten or strengthen the geometry and when you're hand painting you're not painting a normal map directly, anyway. Of course you're not going to hand paint normalmaps, just as you're not going to render hipoly to heightmaps. The heightmap is an intermediate step for humans to handle, just like no programmer writes binary directly. If you want to add detail to the normalmap, you make a heightmap, convert that to a normalmap, limit the B channel to 0-127 via levels, overlay and normalize.
A slope is one color on a normalmap and a gradient on the heightmap.
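If you'd rather script the combine than do it in Photoshop, the core of it is only a few lines (a sketch of the idea, not the exact overlay math, assuming both maps are already decoded to -1..1 floats):
[ CODE ]
import numpy as np

def combine_normals(base, detail):
    # base, detail: (h, w, 3) float arrays decoded to -1..1.
    # Add the detail map's X/Y tilt onto the base and renormalize --
    # roughly what the overlay-then-normalize trick approximates.
    out = base.copy()
    out[..., 0] += detail[..., 0]
    out[..., 1] += detail[..., 1]
    return out / np.linalg.norm(out, axis=2, keepdims=True)
[/ CODE ]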
Yeah a slope like that would be a problem, good point. Frankie's example seems like an extreme one, but I'll grant that 256 grays can be limiting.
I guess my point is that both have their advantages and their disadvantages. I think it doesn't help to classify heightmaps as being strictly old school.
We're working on some cool displacement tech, I hope I can show it at some point. To do it, we must use heightmaps instead of normal maps. So... on older hardware the heightmaps are converted for plain normal-map bumpage, while current- or next-gen hardware will take advantage of the heightmap natively for displacement and auto-tessellation. Pretty cool stuff... <sigh>
There's a real goofy example of what can be done, on this page. Click on the alien head.
http://www.pcstats.com/articleview.cfm?articleid=1109&page=5
We're doing something quite different though.
For those who want a bit more technical info, like Daz...
http://members.shaw.ca/jimht03/normal.html
"The blue channel encodes normal vectors in the Z direction. 100% blue points straight out of the surface. 0% blue points straight behind the surface. A value of 50% in the blue channel indicates a Z normal component of 0. Normal maps don't contain values below 50% in the blue channel since these would be pointing behind the surface."
The new compressed normal map formats (supported by Xenon and newer cards, later DirectX's, etc) throw out the blue channel completely, and recalculate it on the fly (IN HARDWARE!). Of course, it only works if all your normals were length 1 to start with.
z = (1 - x*x - y*y)^(1/2), once x and y are converted back to their -1..1 range.
Since the blue contains the least significant details, the loss is extremely minimal.
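The reconstruction itself is one line of math, since a unit normal satisfies x^2 + y^2 + z^2 = 1 and the Z is never negative:
[ CODE ]
import math

def reconstruct_z(x, y):
    # x, y already decoded to -1..1; the clamp guards against compression
    # error pushing x*x + y*y slightly over 1.
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))
[/ CODE ]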
Also, as for a greyscale with a slope spanning more than 256 colours being an extreme case, it really isn't. As textures get larger, the limitation gets worse! If you have a 2048x2048 greyscale bump, a run of constant slope longer than 256 pixels isn't that difficult to achieve, and it results in banding. Now you could switch to 16-bit (or higher) displacement maps, but then they are almost unpaintable by hand again. And no tool I know of works to convert a higher-bit bump to a normal map.
Haha, yeah we have some work to do. Good points. We support higher bit depths, but there just aren't many tools like you say, and memory becomes a problem also.
http://paralelo.com.br/img/relief_shadows_fulldepth.jpg
a technique called relief mapping, looks pretty fancy too
http://paralelo.com.br/img/ReliefMappingCurved.wmv
http://www.taron.de/ and http://www.taron.de/Neckling.htm
That Neckling is really cool, I love the motion study too, when he turns his neck. I can't wait to see micro-triangle displacement mapping supported in hardware, that'll be an eye-opener. But really, he's using three 4096's... and 4min/frame.
That relief mapping demo looks cool. I wonder where they get the depth info. The red channel of the sample normal map just has the Z normal length, so maybe they're using an alpha channel for depth info? Looks like a nice alternative to parallax mapping, I wonder what the performance cost is like...
Oh well. Anyhow, I thought I'd pass that along.
I asked one of the coders here about the performance cost of that relief mapping method. He said it's horribly costly since it's ray-casting per-pixel to achieve the depth. Also you can't change the mesh in any way, so there's no deformation allowed (unless you animate the texture, which is then a memory hog).
Yeah, but greyscale for displacement will actually displace the geometry of a really dense mesh, so it's not in the same category really.
I think a good basic point to be made for the advantage of normal maps is that even if you could theoretically paint a greyscale image of, say, a nude figure to be converted to a normal map in photoshop with the same degree of accuracy as sculpting a high poly and doing it that way, why would you want to? It would be complete hell. And you WOULD need to convert it to a normal map, there's no possible way to achieve the per pixel surface of, say, a normal mapped cylinder through a bump map.
I meant the generally juvenile references that pass as wit on polycount. I wish things were a bit more mature, but I realize it goes hand-in-hand with game dev, so I'm cool with it. Guess I'm getting old.