What is the difference between a normal map and a displacement map? No one at my school can clearly explain it to me, and once I have a displacement map, how do I apply it to my mesh in 3ds Max?
The stone floor image above is not a displacement map... there is no height change in the geometry, as you can see at the border of the plane. That's a parallax map: http://en.wikipedia.org/wiki/Parallax_mapping
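Since parallax mapping keeps coming up: the core trick is just shifting the texture lookup along the view direction based on the sampled height, which is why the geometry border stays flat. Here's a minimal CPU-side sketch in Python; the function name, the `scale` value, and the exact offset convention are illustrative assumptions, not any particular engine's shader.

```python
# Minimal parallax-mapping sketch: the UV lookup is shifted along the
# view direction in tangent space, scaled by the sampled height.
# Names and the scale value are illustrative assumptions.

def parallax_offset(uv, view_ts, height, scale=0.05):
    """Offset a UV coordinate for simple parallax mapping.

    uv       -- (u, v) texture coordinate
    view_ts  -- normalized view vector in tangent space (x, y, z), z > 0
    height   -- height sampled from the height map at uv, in [0, 1]
    scale    -- artist-tuned strength of the effect
    """
    u, v = uv
    vx, vy, vz = view_ts
    offset = height * scale
    # Shift the lookup toward the viewer proportionally to the height.
    return (u + vx / vz * offset, v + vy / vz * offset)

# Looking straight down (view = (0, 0, 1)) gives no offset:
print(parallax_offset((0.5, 0.5), (0.0, 0.0, 1.0), 0.8))  # -> (0.5, 0.5)
```

A real shader would do this per pixel (often with several refinement steps, as in steep parallax or cone mapping), but the arithmetic is the same.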
A question about this thread for the above posters: does tessellation require a special map, the way displacement does?
I'm sure someone else knows this better than I do, but I think a tessellation map would have to be a little cleaner, because you're adding geometry in real time. I wouldn't want any weird spots protruding, or to waste memory.
For standard production renderers, no; only the displacement or vector displacement map is used. Another method is to use the distance from the camera eye, and perhaps some other parameters, to determine tessellation. Mudbox, for instance, does a type of real-time tessellation based on camera distance. You could use curvature maps or something similar, but at this point that sort of work is experimental and at the research level. Nothing in production yet, from what I have seen.
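The camera-distance idea above is easy to sketch: pick a subdivision level that falls off with distance from the eye. This is a hedged illustration; the function name, thresholds, and linear falloff are assumptions, not Mudbox's actual scheme.

```python
# Sketch of distance-based tessellation selection: the subdivision level
# falls off with distance from the camera eye. Thresholds are made up.
import math

def tess_level(vertex, eye, max_level=6, full_detail_dist=5.0, cutoff_dist=100.0):
    """Pick a subdivision level from camera distance.

    Inside full_detail_dist we use max_level; beyond cutoff_dist we use 0;
    in between the level falls off linearly.
    """
    d = math.dist(vertex, eye)
    if d <= full_detail_dist:
        return max_level
    if d >= cutoff_dist:
        return 0
    t = (d - full_detail_dist) / (cutoff_dist - full_detail_dist)
    return round(max_level * (1.0 - t))

print(tess_level((0, 0, 1), (0, 0, 0)))    # close to the eye -> 6
print(tess_level((0, 0, 500), (0, 0, 0)))  # far away -> 0
```

Real implementations usually compute this per patch edge (so neighboring patches agree and don't crack), but the distance-to-level mapping is the same idea.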
Tessellation: this is the process of subdividing a mesh. Quake 3 used this technique to smooth out some geometry.
Occlusion/Parallax/Cone Mapping: this is more or less a raytracing approach that samples the texture while considering the encoded depth map. As with normal maps, you will see the flat nature of the texture at certain angles, and you need a lot of pixel-processing power to do it. A disadvantage is that you need to fake a lot of things (shadows, G-buffer depth, etc.) to get it right.
Displacement Mapping: this is the approach of shifting the vertices along the normal of the surface according to the encoded depth/height of the texture. This is done in the vertex and/or geometry shader stage, not in the pixel shader like occlusion/parallax/cone mapping. Either you have a high-resolution mesh at hand, or you need to tessellate it at runtime to get it right. Indeed, terrain engines often use a heightmap to displace a flat mesh, so a heightmap is a kind of displacement map. The great advantage of displacement mapping is that you have standard geometry, so all the nice stuff like shadowing, depth, etc. works out of the box.
Modern GPUs can do displacement mapping using geometry shaders for automatic tessellation, but the industry is still developing games for the Xbox 360/PS3. So the next-gen consoles will most likely introduce the use of displacement mapping to the average game.
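The displacement step described above is just vertex arithmetic: push each vertex along its normal by the sampled height. Here's a hedged Python sketch; the function name and scale factor are illustrative, and the heights are stand-ins for actual height-map samples.

```python
# Sketch of displacement mapping: each vertex is pushed along its normal
# by the height sampled from a grayscale map: v' = v + n * h * scale.

def displace(vertices, normals, heights, scale=1.0):
    """Return displaced vertices, one per (vertex, normal, height) triple."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights):
        out.append((vx + nx * h * scale,
                    vy + ny * h * scale,
                    vz + nz * h * scale))
    return out

# A flat quad with up-facing normals, heights pretend-sampled from a map:
verts   = [(0, 0, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1)]
normals = [(0, 1, 0)] * 4
heights = [0.0, 0.5, 0.5, 1.0]
print(displace(verts, normals, heights, scale=2.0))
# -> [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 1.0), (1.0, 2.0, 1.0)]
```

This is exactly why the silhouette actually changes with displacement, unlike parallax mapping: the geometry itself moves, so shadows and depth come for free.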
Minor nitpick: the pipeline is:
vertex -> tessellation (2 shaders) (DX11+) -> geometry (DX10+) -> fragment.
While you can do geometry generation with the geometry shader, it's not recommended (except for very simple cases like billboards); it's really best done with the proper DX11 tessellation shaders. Geometry shaders are hardly used.
The actual displacement is then done in the second tessellation shader (the domain/evaluation shader).
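To make that domain/eval step concrete, here's a hedged CPU-side sketch in Python (a real implementation would be HLSL/GLSL; all names here are illustrative): the fixed-function tessellator emits barycentric coordinates, and the evaluation stage interpolates the patch's position and normal at each one, then displaces along the normal.

```python
# CPU-side sketch of what the domain/eval stage does for one tessellated
# point on a triangle patch. The height value stands in for a texture fetch.

def lerp3(a, b, c, bary):
    """Barycentric interpolation of three same-length tuples."""
    u, v, w = bary
    return tuple(u * ai + v * bi + w * ci for ai, bi, ci in zip(a, b, c))

def domain_point(positions, normals, bary, height, scale=1.0):
    """Evaluate one tessellator-generated point and displace it."""
    p = lerp3(*positions, bary)
    n = lerp3(*normals, bary)   # note: a real shader would renormalize n
    return tuple(pi + ni * height * scale for pi, ni in zip(p, n))

tri_pos = [(0, 0, 0), (1, 0, 0), (0, 0, 1)]
tri_nrm = [(0, 1, 0)] * 3
center = (1/3, 1/3, 1/3)
print(domain_point(tri_pos, tri_nrm, center, height=0.5, scale=2.0))
```

The hull/control stage (the first of the two tessellation shaders) would have chosen the tessellation factors; the tessellator then generates many `bary` coordinates, and this function runs once per generated point.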
And just to further clear it up on the content side:
A "displacement map" or "parallax map" is the same thing as a "height map", i.e. these are all grayscale maps representing height on a 0-1 (or 0-255) scale.
The differentiation is how you plug that map into your shader: you can use it as a bump map, a parallax map, a displacement map, etc.
Often, how you paint/generate a height map will vary for different purposes (e.g. converting to a normal map vs. parallax/displacement), but the content is essentially the same format.
The only time you would need entirely different content is if you're using a vector displacement map, which can displace in multiple directions, as opposed to a standard grayscale height map that can only represent up and down.
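The "converting to a normal map" step mentioned above is worth seeing spelled out: the usual approach takes finite differences of the height map to get slopes, builds a normal from them, and remaps it into RGB. This is a hedged sketch; the `strength` parameter and the exact sign/encoding conventions vary between tools.

```python
# Sketch of converting a grayscale height map to a tangent-space normal
# map using central differences. Conventions (signs, strength) vary by tool.
import math

def height_to_normals(h, strength=1.0):
    """h: 2D list of heights in [0, 1]. Returns per-texel (r, g, b) in 0-255."""
    rows, cols = len(h), len(h[0])
    out = []
    for y in range(rows):
        row = []
        for x in range(cols):
            # Central differences with clamped borders.
            dx = (h[y][min(x + 1, cols - 1)] - h[y][max(x - 1, 0)]) * strength
            dy = (h[min(y + 1, rows - 1)][x] - h[max(y - 1, 0)][x]) * strength
            nx, ny, nz = -dx, -dy, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            nx, ny, nz = nx / length, ny / length, nz / length
            # Remap [-1, 1] -> [0, 255], the usual tangent-space encoding.
            row.append(tuple(int(round((c * 0.5 + 0.5) * 255))
                             for c in (nx, ny, nz)))
        out.append(row)
    return out

flat = [[0.5] * 3 for _ in range(3)]
print(height_to_normals(flat)[1][1])  # flat height -> straight-up (128, 128, 255)
```

This is also why the same grayscale source can feed bump, parallax, or displacement: the normal map is just a derived, baked-out view of the same data.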
So are displacement maps used a lot in games, or just in cinematics? Because you said displacement maps require high-resolution models, and high-resolution models take up a lot of memory in the engine.
I think I remember Doom 3 actually using displacement maps in real time, but that's the only direct application of it in an engine I've seen. However, with DirectX 11 it can now be combined with real-time tessellation.
In the current generation of games, displacement maps are not really used at all (besides some special cases like terrain rendering), because the memory impact would be too high and proper/fast tessellation support is not available on the Xbox 360/PS3.
Displacement maps have always been limiting in some areas, because they only retain height information for points on the model in a grayscale image. Displacement maps work well to represent detail on simplified meshes when the model's silhouette detail or overall shape needs to be apparent to the viewer.
On low-res meshes they might not give the effect you're looking for, which is where normal maps come in.
A normal map is a 2D RGB image that records the surface normal information of a mesh by using the red, green and blue color channels to store the X, Y and Z normal vector data.
Going forward, I would hope to see vector displacement maps (VDMs) being used more, if possible, as these can potentially offer the best of both worlds. Vector displacement maps can be advantageous because they record both height and directional information for points on the model as a 32-bit floating-point image. The map stores both the distance a vertex will be displaced and the direction of the displacement. They also don't rely on UV coordinates, as the wiki page here states.
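The RGB-channels-as-XYZ idea above is just a range remap: a unit normal's components live in [-1, 1], and an 8-bit channel in 0-255. A small hedged sketch of the round trip (the 0-255 convention is the common one, though some formats store signed or 16-bit data):

```python
# Encoding a unit normal into 8-bit RGB channels and back:
# each component is remapped from [-1, 1] to [0, 255].

def encode_normal(n):
    """(x, y, z) in [-1, 1] -> (r, g, b) in 0-255."""
    return tuple(int(round((c + 1.0) * 0.5 * 255)) for c in n)

def decode_normal(rgb):
    """(r, g, b) in 0-255 -> approximate (x, y, z) in [-1, 1]."""
    return tuple(c / 255 * 2.0 - 1.0 for c in rgb)

# The "flat" tangent-space normal (0, 0, 1) is the familiar light-blue pixel:
print(encode_normal((0.0, 0.0, 1.0)))   # -> (128, 128, 255)
print(decode_normal((128, 128, 255)))
```

A grayscale height map stores one scalar per texel; a vector displacement map stores a full (x, y, z) offset per texel in floating point, which is why it can represent overhangs that a height map can't, at the cost of a much heavier file.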
Well, as I said, VDMs use the point (or vertex) data from a mesh, both the height and, more importantly, the direction of the point vectors, to create their data. UVs don't really have anything to do with this. Sure, it's true that UVs can be (and are) used for storing and displaying that data in a map file, but it's really the additional metadata contained in the 32-bit file that holds the key.
Also, look at the Ptex texture-mapping system created by Disney Animation, which uses no UVs. You can texture meshes and extract maps (including VDMs, normal maps, etc.) in Ptex without the need to UV anything.
It's still relatively early days for this tech, but I can certainly foresee a time when it could find its way into games, especially as real-time/game technologies look to improve the way they tessellate meshes on the fly. Add to this Pixar's recent open-source release of its subdivision surface library, based on the original Catmull-Clark algorithm, and it's possible to see the potential.
Replies
Displacement maps push vertices along their normals...
http://robertokoci.com/photorealistic-rendering-vray-materials/
There are also ways to do this in real time with tessellation...
Hardware tessellation with DirectX 11 (Unigine "Heaven" benchmark): http://www.youtube.com/watch?v=bkKtY2G3FbU
Crysis 3 - Powered by CryEngine 3 Tech Demo: http://www.youtube.com/watch?v=EF7uAXenlAA
http://www.nvidia.asia/content/asia/event/siggraph-asia-2010/presos/Ni_Tessellation.pdf