Normal maps hold both negative and positive values, while bump maps hold only positive ones. I know this is confusing, because people author normal maps as 8-bit, positive-only images, and they only get expanded to the signed range in the shader. So it's kind of happening in the background.
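A minimal sketch of that hidden remap, assuming the usual n * 2 - 1 unpack (Python/NumPy; unpack_normal is my own illustrative name, not any engine's API):

    import numpy as np

    def unpack_normal(texel_rgb_u8):
        # 8-bit texels store unsigned [0, 255]; the shader expands them to signed [-1, 1]
        n = texel_rgb_u8.astype(np.float32) / 255.0   # -> [0, 1]
        n = n * 2.0 - 1.0                             # -> [-1, 1], the remap "in the background"
        return n / np.linalg.norm(n)                  # renormalize after quantization

    # 128 is (roughly) the stored zero, so this texel decodes to a flat +Z normal:
    print(unpack_normal(np.array([128, 128, 255], dtype=np.uint8)))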
With all this said, you could still author bump maps as HDR. Also, bump maps and height/displacement maps are essentially the same thing, since both hold height information as a grayscale map. The difference is the depth scale: bump maps are normally used for small irregularities on the surface, while a height/displacement map is usually used for large-scale details with big height differences. However, a height map can still be used like a bump map. Using 8 bit on a height map with a large depth range can lead to stair-stepping artifacts. I would assume the same thing could happen with a bump map in some cases, as it's technically the same thing, sampled the same way. In some extreme cases, normal maps authored in 8 bit can show the same stair-stepping effect.
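To make the stair-stepping concrete, here is a small illustration (Python/NumPy, numbers invented for demonstration): quantizing a smooth ramp to 8 bits collapses it into 256 flat steps.

    import numpy as np

    height = np.linspace(0.0, 1.0, 4096)         # a smooth, high-precision ramp
    stored = np.round(height * 255.0) / 255.0    # what survives an 8-bit grayscale map
    steps = np.diff(stored) != 0                 # True wherever the value jumps a level
    print(np.count_nonzero(steps))               # 255 jumps: the ramp became 256 stairs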
Are you saying that when we, for instance, bake maps, we should output the normal map at 24 bit? And what about game engines? I was under the impression that Unity, at least, doesn't take anything more than 8 bit?
All game engines compress textures to DirectX texture-compression formats known as DXT. For normal maps they often use either DXT1, DXTn, or 3Dc (which are all closely related block-compression formats).
https://en.wikipedia.org/wiki/S3_Texture_Compression
https://en.wikipedia.org/wiki/3Dc
https://www.fsdeveloper.com/wiki/index.php?title=DXT_compression_explained
These formats are all 8-bit and designed specifically for 3D graphics.
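For a rough sense of scale (my own arithmetic; the block sizes come from the S3TC/3Dc articles linked above): these formats pack each 4x4 block of texels into a fixed number of bytes, so the effective storage cost per texel is easy to compute.

    def dxt_bits_per_texel(block_bytes):
        # DXT-style formats encode every 4x4 texel block in a fixed byte budget
        return block_bytes * 8 / (4 * 4)

    print(dxt_bits_per_texel(8))    # DXT1: 8-byte blocks -> 4.0 bits per texel
    print(dxt_bits_per_texel(16))   # DXT5 / 3Dc: 16-byte blocks -> 8.0 bits per texel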
HOWEVER, these formats are all generated by the engine itself. So you can give the engine a 16-bit image and it will often produce a better result than with an 8-bit image, because it can dither the 16-bit image down to 8 bit. You generally don't need more than 16 bit per channel for normal maps (twice 8 bit).
https://en.wikipedia.org/wiki/Dither
This gives the impression that the image has much smoother detail than if it had been rendered at 8 bit. It does depend on what kind of object the normal map is for: a smooth piece of machinery needs a higher bit depth than a rock or bumpy skin.
https://polycount.com/discussion/148303/of-bit-depths-banding-and-normal-maps
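A sketch of the dithering idea (Python/NumPy; this is a naive random dither, not necessarily what any particular engine does): rounding a high-precision ramp straight to 8 bit gives clean bands, while adding sub-level noise before rounding diffuses the band edges.

    import numpy as np

    rng = np.random.default_rng(0)

    def to_8bit(img, dither):
        # img: float values in [0, 1], e.g. decoded from a 16-bit source image
        x = img * 255.0
        if dither:
            x = x + rng.uniform(-0.5, 0.5, size=img.shape)  # trade banding for noise
        return np.clip(np.round(x), 0, 255).astype(np.uint8)

    ramp = np.linspace(0.0, 1.0, 2048)
    hard = to_8bit(ramp, dither=False)          # long runs of identical values = visible bands
    soft = to_8bit(ramp, dither=True)           # same 256 levels, but band edges break up
    print(np.count_nonzero(np.diff(hard)))      # 255 transitions: clean stair edges
    print(np.count_nonzero(np.diff(soft)))      # far more transitions: edges diffused into noise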
Also, there can be a difference in nomenclature. When people talk about 8 bit vs 24 bit, they might simply be referring to the number of channels of an 8-bit-per-channel texture. A bump map is just one channel (grayscale), so 8 bit. Normal maps have two channels (when the third channel is calculated from the other two) or three channels (RGB), and 3 x 8 = 24. But it does make sense to render normal maps at higher bit depths, since you can always bring them down to a lower bit depth, while the same isn't true the other way round.
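For the two-channel case, the third channel falls out of the unit-length constraint x^2 + y^2 + z^2 = 1; a quick sketch (Python; reconstruct_z is my own name for it):

    import math

    # "24 bit" here is just three 8-bit channels:
    print(3 * 8)   # 24

    def reconstruct_z(x, y):
        # a unit normal satisfies x^2 + y^2 + z^2 = 1; tangent-space maps assume
        # z >= 0, so the blue channel can be dropped and recovered like this
        return math.sqrt(max(0.0, 1.0 - x * x - y * y))

    print(reconstruct_z(0.6, 0.0))   # 0.8 -> the unit vector (0.6, 0.0, 0.8)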
8 bits per pixel (bpp) = 256 colors.
8 bits per channel (bpc) = 16 million colors = 24 bpp color (8 bits each for red, green, and blue), or 32 bpp if you add an alpha channel.
16 bits per pixel = 65,536 colors.
16 bits per channel = a lot of fucking colors = 48 bpp color. Commonly used in games as the source for normal mapping, and for HDR.
32 bits per channel = more. Very rarely used in game development, sometimes for HDR. Anyone using 32 regularly? Mostly a waste of HDD space.
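Those color counts are just powers of two, easy to sanity-check (Python):

    for bits in (8, 16, 24):
        print(bits, 2 ** bits)   # 8 -> 256, 16 -> 65,536, 24 -> 16,777,216 (~16 million)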
32 bit is what I commonly use when exporting heightmaps from World Machine for a landscape in Unreal or terrain in Unity. It makes the terrain look smoother and more accurate.
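A rough way to see why bit depth matters here: divide the terrain's height range by the number of representable levels (Python; the 1000 m range is invented, and real 32-bit heightmaps are usually floating point, so treating 32 bit as integer steps is only for comparison).

    range_m = 1000.0   # hypothetical total terrain height range in meters
    for bits in (8, 16, 32):
        step = range_m / (2 ** bits - 1)   # smallest height difference the map can store
        print(bits, step)
    # 8 bit -> ~3.9 m terraces, 16 bit -> ~15 mm, 32 bit -> far below anything visible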