Since tessellation seems to be a pretty hot topic right now with the Samaritan tech demo and the March UDK, I realized I had some questions about it. That probably means other people do as well.
Specifically, what does tessellation mean for modelers?
Obviously it's adding more geometry to what's already there, but how do the algorithms behave when subdividing meshes?
Is it necessary to add support loops to sharp edges on your low-poly model, or is this something that the displacement map can usually take care of?
Will tessellation start to make low-poly modeling look more like high-poly stuff so that things subdivide nicely?
If any of the Epic folks or any people from other studios and projects using tessellation would care to chime in, that'd be great. I've never worked with tessellation before and I'm sure there are several other people around who've never touched the stuff either.
Replies
Being curious myself and doing a quick search I found this:
http://blogs.msdn.com/b/chuckw/archive/2010/07/19/direct3d-11-tessellation.aspx
Looks like lots to read, and it's probably not all that artist-friendly, and I'm sure it will kick off a new round of tools and tech to go with it. But we're talking about extremely high-end PCs, and it might not show up on the next round of consoles... /sadface
http://www.stonegiant.se/
From the DX11 tessellation screenshots I've seen, the tessellated geometry does not modify the shape of the mesh by default, so you would use a displacement map (or vector displacement map) rendered the same way you render normal maps. You then sample that in the tessellation program and offset the verts accordingly.
So from an artist's view it would generally be the same as using parallax mapping, but it would just look much better.
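The sample-and-offset idea is simple enough to sketch outside any engine. This is an illustrative Python stand-in for what the displacement step does per vertex (the function name and scale parameter are made up for the example, not a real API):

```python
# Hypothetical sketch of the displacement step: each tessellated vertex
# is pushed along its interpolated normal by a height sampled from the
# displacement map. Names here are illustrative only.

def displace_vertex(position, normal, height, scale=1.0):
    """Offset a vertex along its normal by a sampled height value."""
    return tuple(p + n * height * scale for p, n in zip(position, normal))

# A vertex on a flat, upward-facing surface with a sampled height of 0.25:
print(displace_vertex((1.0, 0.0, 2.0), (0.0, 1.0, 0.0), 0.25))
# -> (1.0, 0.25, 2.0)
```

In the real pipeline this happens in the domain shader, with the height read from a texture rather than passed in directly.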
From a technical standpoint it also allows much more than just displacing surfaces. Here's a volumetric lighting implementation using hardware tessellation:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.167.5056&rep=rep1&type=pdf
Shouldn't it be possible to generate massive amounts of procedural geometry like trees, hair, and feathers? Seems like a bit of a holy-grail feature.
Assuming that something Maxwell-like will be in a console within four years down the line, and that you're dedicating research and development to pipeline possibilities and discoveries right now:
please sign me up for that thread, if only to proof-of-concept our own stone giants
for the next three years. Sounds like three years of fun to me.
Even if it's not ready for prime time, shouldn't the wiki evolve alongside the artists as the tech matures?
There seems to be next to nothing (really nothing yet) in the way of a pipeline description.
It appears you cannot just turn on tessellation and get ideal complexity throughout all geometry.
The variation the surface takes at differing levels of tessellation is kind of scary.
(Can that be controlled by an edge density map?) I'm trying to wrap my head around how tessellation is used to control LOD.
Seems there should be parallel handling.
Otherwise, if bare low topology is used (without traditional contributing levels of complexity), "control" of the silhouette would be chaotic.
Is this what is happening when the silhouette "dances" around as you up the tessellation level?
All I have tried so far is to dissect demo assets in the DirectX SDK and Unigine Heaven.
Don't know yet if UDK works in the same way. It appears from previous versions that it might.
Seems like there should be lots of room for discovery and I imagine in the end much of the pipeline will really be discovered/evolved here?
In which case the sooner the better?
Sure you can get more complex with directional maps and such, but for the basic implementation all you would need is a displacement map, which contains the difference between your high and low.
[edit] Claydough: this isn't specifically a reply to you, just for anyone who is struggling with the concept.
As far as combating completely random silhouettes, I think that starts with content. If your art content has a bunch of random noise in it, it's going to result in messy displacement. This is true of a basic parallax shader as well. I tend to blur my displacement map a bit so only the larger forms come through; that way there's less chance of a random pixel giving a totally random height value.
To get really fine detail out of this sort of tech, you need a 1:1 pixel-to-quad ratio (or higher), which isn't a good plan to depend on. I think the better use of this sort of tech is broad shapes, leaving the noise-type detail in the basic 2D maps: diff/spec/normal/etc.
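The blur trick above is just a low-pass filter over the height data. A minimal sketch (a plain box blur on a 2D list of heights, no image library assumed) shows how a single noisy pixel gets averaged down into a gentle bump instead of a silhouette spike:

```python
# Illustrative box blur for a displacement map stored as a 2D list of
# heights. Smoothing suppresses single-pixel spikes that would otherwise
# cause random silhouette pops. Not tied to any particular engine or tool.

def box_blur(height_map, radius=1):
    h, w = len(height_map), len(height_map[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            # Average over the (2*radius+1)^2 neighborhood, clipped at edges.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += height_map[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A lone bright pixel becomes a broad, low bump after one blur pass:
noisy = [[0.0] * 5 for _ in range(5)]
noisy[2][2] = 1.0
smooth = box_blur(noisy)
print(smooth[2][2])  # 1/9 of the original spike height
```

In practice you'd do this in your paint package, but the effect on the heights is the same.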
As far as topology goes, I don't think it should really change the workflow of character artists (except maybe creating a new bake, height or displacement, depending on the software/engine), unless you want to go for procedural changes like in the new UDK demo. That's beyond me. :poly122: And for environment stuff it was usually just a matter of subdividing your model a bit.
One thing bothers me, however. What about using a tangent-space normal map and a displacement map at the same time? Wouldn't tessellation destroy the vertex normal data used by the normal map?
You could skip the normal map and smooth the normals again in the shader, but I imagine this would be very slow.
I would imagine that in order to achieve optimal results you'd need some kind of vector displacement map, since a standard disp map will only push verts in the direction of the surface normal, right?
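That's the core difference between the two map types, and it's easy to sketch. Assuming a scalar map stores one height (applied along the normal) while a vector displacement map stores a full XYZ offset, only the latter can move a vertex sideways to form an overhang:

```python
# Sketch contrasting scalar vs vector displacement. Assumption: the scalar
# map stores one height moved along the normal; the vector map stores a
# full XYZ offset in some agreed-upon space (e.g. tangent space).

def scalar_displace(pos, normal, height):
    return tuple(p + n * height for p, n in zip(pos, normal))

def vector_displace(pos, offset):
    return tuple(p + o for p, o in zip(pos, offset))

pos, normal = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(scalar_displace(pos, normal, 0.5))      # straight up only
print(vector_displace(pos, (0.3, 0.5, 0.0)))  # can lean sideways too
```

That sideways component is what lets vector displacement produce mushroom-cap or ear shapes that a height-only map cannot.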
http://www.vimeo.com/17593021
I really don't see the in-game/untessellated mesh being much different from the ones we make today for most characters. But I think meshes with multiple disp maps like in the video would need a more generic, evenly quaded base mesh, since any part can morph into many different shapes; long, thin, or weird polys wouldn't tessellate well, or better put, they'd need to over-tessellate in order to displace well. My brain hurts....
Here is a really great PDF from Nvidia that discusses a lot of these techniques and is especially good at listing the pros and cons of each. I think it gives a pretty good idea of the types of problems we could be facing, but it's just as important to note that "tessellation" is not a singular hardware feature that only works one way. I think we'll see different algorithms evolve that serve different purposes well as we move forward.
Sweet PDF on this stuff -> http://www.nvidia.com/content/PDF/GDC2011/John_McDonald.pdf
Looking at the problems, they seem very similar to the ones you might see when subdividing or tessellating stuff in 3ds Max. Have you ever tried tessellating a textured low-poly model in Max and seen lots of problems? Same type of stuff. Want to test displacement mapping? It's very easy to do in Max; just try it and see how well it works for you. I think you'll end up seeing a lot of the same problems from the various tessellation algorithms.
^ This is the coolest thing I've seen in a long while. Very cool! :thumbup:
At first I was all
Then I was all :O
I've got a few questions though. First, how do you get texture coordinates for the new vertices? When you displace a surface, does it just use whatever texture was already there? That would mean that for a displacement that goes well beyond the silhouette of the original object (like the spikes in the demo), you would need a pretty high-rez diffuse to support all those details.
Also, were there three different base meshes in that demo? That skeleton one looks vastly different, to the point where I don't think a displacement map could do that.
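On the texture-coordinate question above: new vertices don't get coordinates from the texture; in the standard DX11 scheme the domain shader receives a barycentric location for each generated vertex and interpolates the patch corners' UVs with it, so the new verts sample the same map at in-between positions. A sketch for a triangle patch (the function name is illustrative, not an API):

```python
# Sketch of UV generation for tessellated vertices: interpolate the three
# corner UVs of a triangle patch using the barycentric coordinates the
# tessellator hands to the domain shader. Illustrative names only.

def interp_uv(uv0, uv1, uv2, bary):
    b0, b1, b2 = bary
    return (uv0[0] * b0 + uv1[0] * b1 + uv2[0] * b2,
            uv0[1] * b0 + uv1[1] * b1 + uv2[1] * b2)

# A new vertex at the midpoint of the edge between the first two corners:
print(interp_uv((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.5, 0.5, 0.0)))
# -> (0.5, 0.0)
```

Which also explains the texel-density worry: displaced geometry stretches the same UV real estate, so big displacements do want more resolution behind them.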
-basemesh with a couple of branches
-noise displacement to make trees and branches wobbly
-tessellation to get rid of the
But most of the work would really be done by a displacement map. It also sounds fairly restrictive. But hey I'm no pro at technical details and coding kick-assery.
BigJohn - one part of solving that is to have multiple unwraps, so you can redistribute detail to where you need higher texel density. As you can see in the demo, they blend between textures (diffuse/spec) as well. Though I reckon for most applications (such as bumpy rocks or tree bark) a single unwrap would probably do.
What might be even more awesome is that you could stack displacements like one would in ZBrush. So you could dynamically grow mushrooms, for instance:
-tessellate ground surface, displace center polygons upward
-load/blend secondary unwrap that puts higher texel density on those polygons
-tessellate and displace again, creating the 'hat' of the mushroom
-load/blend tertiary unwrap and textures
What I wonder is, how flexible is tessellation? Is it bound to whole objects, or could you tessellate one arm but not the other (without using floats)? Are there fixed iterations like 2x, 3x, or can you say "okay, give this 50% more triangles in the places you think need them most" (or would the latter actually be less efficient than simply 2x everything)?
Perhaps in the next-next gen we'll have tessellation-density maps? White spots get tessellated faster/with higher priority than dark spots?
UDK is also changing terrain density based on distance, which I figured was done in the shader using a depth buffer or something.
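Distance-based density like that usually comes down to computing a per-patch tessellation factor in the hull shader that falls off with camera distance. A toy sketch of such a falloff (the constants and names are made up for illustration, not from any engine):

```python
# Toy sketch of distance-adaptive tessellation: a per-patch factor that
# falls off linearly with camera distance, clamped to a hardware-style
# range. All constants here are invented for the example.

def tess_factor(distance, near=5.0, far=100.0, max_factor=64.0):
    if distance <= near:
        return max_factor          # closest: full tessellation
    if distance >= far:
        return 1.0                 # far away: no extra triangles
    t = (distance - near) / (far - near)
    return max(1.0, max_factor * (1.0 - t))

print(tess_factor(5.0))    # nearest patch gets the maximum factor
print(tess_factor(100.0))  # distant patch stays at its base density
```

The "tessellation-density map" idea above would just be another multiplier on this factor, painted by an artist instead of derived from distance.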
1. You create a LP mesh.
2. Subdivide the LP mesh to turn it into a HP mesh
3. Sculpt the HP mesh.
4. Extract a displacement / vector displacement map.
At runtime, the LP mesh will be subdivided with DX11's hull/domain shaders and the displacement/VDM map will be applied to reconstruct the HP mesh.
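Step 4 of that pipeline (extracting a scalar displacement map) boils down to, per sample: take the offset from the low-poly surface point to the matching high-poly point and project it onto the low-poly normal. A minimal sketch of that bake math (function name is illustrative; real bakers also handle ray casting, mismatched topology, etc.):

```python
# Hypothetical sketch of extracting one scalar displacement value:
# project the (high-poly minus low-poly) offset onto the low-poly normal.
# Real bakers do this per texel via ray casting; this is just the math.

def bake_scalar_displacement(lp_point, lp_normal, hp_point):
    diff = [h - l for h, l in zip(hp_point, lp_point)]
    return sum(d * n for d, n in zip(diff, lp_normal))

# High-poly detail sitting 0.2 units above the low-poly cage:
print(bake_scalar_displacement((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.2, 0.0)))
```

A vector displacement bake would instead store the whole `diff` offset, which is why it can capture shapes that lean away from the normal.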
I really prefer to use vector displacement maps (VDMs) because they are much faster than displacement maps (they don't really require subdivision) and also because they allow you to create mushrooms/ears, which is impossible with a scalar displacement map.
Oh, btw, you don't really need DX11 to perform tessellation. See the xN DX10 spiked-ball example (although DX11 will render it much faster and will make distance-adaptive tessellation possible):
And, yep, the UDK demo is awesome :poly142:
Vector Displacement in XN sounds awesome as well.
The link to the updated tessellation documentation in the April 2011 beta is broken...
The correct link is:
http://udn.epicgames.com/Three/TessellationDX11.html
Hope to have time to dive in soon and see what's good vs what needs werk