In both cases, as far as I know, the GPU generates the mesh for you based on a height map and/or normal map input.
So in the end you depend on good normal maps + height maps. I don't really think it's fed a custom high-poly mesh that it then crunches down - the high-density meshes look generated because they are by no means optimal; they're just fast and easy, I guess, for a program or chip on the GPU to execute.
In Colin McRae: DiRT 2, for example, they used it to improve the water physics.
Tessellation works by taking a basic polygon mesh and recursively applying a subdivision rule to create a more complex mesh on the fly. It's best used for amplification of animation data, morph targets, or deformation models. And it gives developers the ability to provide data to the GPU at a coarser resolution. This saves artists the time it would normally take to create more complex polygonal meshes and reduces the data's memory footprint. The tessellation support built into the Radeon HD 5800 series, however, is different from what was offered in previous Radeon GPUs, and is now programmable via two new DX11 features dubbed domain and hull shaders.
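To picture what "recursively applying a subdivision rule" means, here's a toy CPU-side sketch in Python. It's purely illustrative - real DX11 tessellation runs on the GPU through hull/domain shaders and works on parametric patch coordinates rather than recursion, and the function names here are made up:

```python
# Toy illustration of tessellation-style amplification: recursively split each
# triangle into four by inserting edge midpoints. Real hardware tessellation
# does NOT work this way internally; this just shows how a coarse mesh can be
# amplified into a dense one on the fly.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(tri, levels):
    """tri is a tuple of three (x, y, z) vertices; returns a list of triangles."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for child in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(subdivide(child, levels - 1))
    return out

coarse = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(len(subdivide(coarse, 3)))  # 1 coarse triangle -> 64 triangles after 3 levels
```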
This is nothing new. It's been around for a while now. You can use a standard height map and shader to do this. You could get real fancy and have the shader support self-shadowing. It gets super expensive on the GPU though.
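For reference, the single-sample parallax trick that the height map + shader approach builds on looks roughly like this (a toy Python sketch; function names, the scale factor, and the sign convention are my own, and steep/self-shadowing variants ray-march the height field instead of taking one sample, which is where the GPU cost explodes):

```python
# Toy version of basic parallax mapping with a grayscale height map: shift the
# texture lookup along the view direction in proportion to the sampled height.

def sample(height_map, u, v):
    """Nearest-neighbour sample of a 2D list of 0..1 floats, with wrap-around."""
    rows, cols = len(height_map), len(height_map[0])
    return height_map[int(v * rows) % rows][int(u * cols) % cols]

def parallax_uv(u, v, view_dir, height_map, scale=0.05):
    """view_dir is a tangent-space (x, y, z) unit vector pointing toward the eye."""
    h = sample(height_map, u, v)           # 0..1 height at this texel
    vx, vy, vz = view_dir
    # Offset the UVs along the view direction, scaled by the height.
    # (Whether you add or subtract depends on storing height vs. depth.)
    return u + vx / vz * h * scale, v + vy / vz * h * scale
```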
Hardware Tessellation:
From what I've seen, this can be controlled with texture maps as well - telling the engine where to pop out or push in with a grayscale height map or some other custom map type. There was a thread on this video not too long ago; you should search for it and check it out.
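That kind of control map doesn't have to do much. Here's a made-up sketch of deriving a per-face tessellation factor from a grayscale map - the names, the max factor, and the variance heuristic are invented for illustration, not how any particular engine does it:

```python
# Drive tessellation density from a grayscale control map: faces whose UVs
# cover more height variation get a higher factor, flat faces stay coarse.

def tess_factor(face_uvs, height_map, max_factor=8):
    """face_uvs: list of (u, v) pairs in 0..1; height_map: 2D list of 0..1 floats."""
    rows, cols = len(height_map), len(height_map[0])
    samples = [height_map[int(v * rows) % rows][int(u * cols) % cols]
               for (u, v) in face_uvs]
    variation = max(samples) - min(samples)       # 0 = flat region of the map
    return max(1, round(variation * max_factor))  # at least 1, i.e. leave it alone
```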
Runtime tessellation isn't anything new either, but having it supported on the GPU is fairly recent.
Really, hardware tessellation isn't much different from a smoothing algorithm you'd use while building your model; it's just done with the vertex data on the GPU.
The triangulation/lighting/shading of a mesh only happens after the GPU has the mesh's vertex data in memory. That data also specifies which vertices are linked to one another via DirectX primitives.
That data can be appended or modified before it reaches the "transform and lighting" stage of the GPU rendering pipeline. So you simply feed in vertex buffers, add points as needed based on some heuristic, then send the altered vertex buffer on its way.
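In sketch form, with a made-up heuristic (a real implementation works on indexed vertex buffers and runs on the GPU; the threshold, names, and single-edge check are simplifications of mine):

```python
# Rough illustration of "add points based on some heuristic before the
# transform stage": one pass over a triangle list that splits any triangle
# whose first edge is longer than a threshold by inserting a midpoint vertex.
import math

def refine_once(triangles, max_edge=1.0):
    out = []
    for a, b, c in triangles:
        if math.dist(a, b) <= max_edge:      # heuristic: only long edges get a new point
            out.append((a, b, c))
        else:
            m = tuple((a[i] + b[i]) / 2.0 for i in range(3))   # midpoint of edge a-b
            out.extend([(a, m, c), (m, b, c)])
    return out

tris = [((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(refine_once(tris))   # the long a-b edge gets a new vertex at (2, 0, 0)
```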
It's kind of like the reverse of the adaptive runtime LOD technology seen in Insomniac's PS3 titles, starting with Ratchet & Clank Future, only with hardware-level support rather than engine-level support.
Steep parallax, like most GPU wizardry, works because texture data isn't an image. It's a grid of numbers. Those numbers can be interpreted as color values, but they don't HAVE to be. Every pixel in a texture file is anywhere from a one- to four-dimensional data construct, and the range of values it can hold is limited only by the format specification and the numeric range of the system (64 bits is the longest standard floating-point data type).
This means you can use image data LIKE vertex data if you want, and all sorts of effects become possible based on how you traverse that data. From a data standpoint, the difference between a mesh file and an image file is trivial.
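For example, nothing stops you from reading a grayscale "texture" as a plain 2D array of numbers and turning it straight into displaced grid vertices. A toy Python sketch, with names of my own invention:

```python
# The "a texture is just numbers" point in practice: take one channel of an
# image as a 2D array of floats and build vertex positions for a grid mesh,
# using the pixel value as the Z displacement.

def heightfield_to_vertices(pixels, scale=1.0):
    """pixels: 2D list of floats in 0..1 (one channel of an image)."""
    rows, cols = len(pixels), len(pixels[0])
    verts = []
    for y in range(rows):
        for x in range(cols):
            verts.append((x / (cols - 1), y / (rows - 1), pixels[y][x] * scale))
    return verts

tiny_image = [[0.0, 0.5], [0.5, 1.0]]          # a 2x2 "texture"
print(heightfield_to_vertices(tiny_image))     # four displaced grid vertices
```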
Replies
some more pics:
some info I found regarding the tessellation:
http://hothardware.com/printarticle.aspx?articleid=1383