
tessellation, displacement maps.

Rockley Bonner
I have a general idea of what tessellation is (it's when the camera gets close enough, the mesh subdivides accordingly?) and I know what displacement maps are, but what I want to discuss here is how they will become more relevant in game development, and to what degree. Will it be as bad as when normal maps were first introduced and information was scarce?

Also, what's the deal with DX11?

I'm bringing this up because on the most recent episode of Crunchcast they mentioned that displacement and tessellation will be used in next-gen titles, and that it would be in an artist's best interest to learn them.

Thanks!!

Replies

  • leleuxart
    RJBonner wrote: »
    ...how will they be more relevant in game development and to what degree? Will it be as bad as when normal maps were first introduced and information was scarce? Also, what's the deal with DX11?

    I believe they will be used eventually. Right now, consoles don't support DX11, and that's the biggest market for the game industry. UDK added DX11 support a while ago and CryEngine just got it too, so I don't think it's been out long enough for any major developers to really utilize it for PC games.
  • Snader
    Tessellation is already being implemented on a limited scale. Most commonly in terrain, where you have a planar grid, generated on the fly, that gets pushed around by a heightmap. Or to round out some rocks/cliffs as you get closer, so it doesn't feel like you're playing Q3 when you walk up to them. And Metro 2033, for instance, uses tessellation on characters and some props. So we already have some experience with it.
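
    The terrain case is about the simplest form of this; in HLSL it's basically just a heightmap lookup in the vertex shader. Rough sketch (not from any particular engine; all the names here are made up):

        // Flat grid comes in with y = 0; the heightmap supplies the elevation.
        // Note: vertex shaders have no derivatives, hence SampleLevel.
        Texture2D HeightMap : register(t0);
        SamplerState LinearSampler : register(s0);

        float4x4 WorldViewProj;
        float HeightScale;

        float4 TerrainVS(float3 gridPos : POSITION,
                         float2 uv : TEXCOORD0) : SV_POSITION
        {
            float h = HeightMap.SampleLevel(LinearSampler, uv, 0).r;
            float3 p = float3(gridPos.x, h * HeightScale, gridPos.z);
            return mul(float4(p, 1.0), WorldViewProj);
        }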

    That said, there will still be instances where people don't handle tessellation well, because it requires a slightly different approach to low-poly modeling; namely, one that's more similar to sculpting base meshes: getting an even distribution of square-ish polygons instead of long rectangles/triangles.

    I think it's also a much simpler and more straightforward method (code-wise) than normal mapping, though I must admit I don't know a lot about it and am not very certain of this.
  • ambershee
    Snader wrote: »
    I think it's also a much simpler and more straightforward method (code-wise) than normal mapping.

    It really, really isn't ;)

    Normal mapping is a couple of lines of shader math. Dynamic displacement and tessellation are a whole different kettle of beans. In either case, I fully expect the next generation of games (and current PC games) to make increasing use of displacement over the next couple of years, alongside better lighting models. I'd strongly recommend becoming familiar with it.
  • almighty_gir
    My current workflow is based around the fact that (from what I know, anyway) displacement works by moving vertices.

    With that in mind, when it comes to baking or generating your displacement map, you need to account for the number of vertices you're going from, and to.

    The limited experience I've had so far has been in UDK, and while I can't show you the shader (NDA), I can explain the basic principle I used:

    1. Generate two displacement maps: the first with the low-poly model subdivided once, the second with it subdivided twice. If you generate the map in xNormal, it will be accurate to the vertex positions of your low-poly model once it's tessellated.
    2. ALWAYS generate the displacement map from a triangulated mesh that's been tessellated (see the number of iterations above).
    3. The shader in UDK then tessellates the mesh based on the camera's distance from the target; the maximum tessellation value matches the highest subdivision level you used when baking the displacement maps.
    4. The displacement map portion of the UDK shader is a linear interpolation (lerp) between the lowest- and highest-subdivision displacement maps you generated in step 1. The alpha of the lerp is, again, the camera's distance from the model.

    Essentially, as the camera gets closer to the model, it tessellates it and then displaces it: the closer you are, the more tessellation and the higher quality the displacement (rough sketch below).
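
    Pseudo-HLSL of the idea. NOT the actual shader (NDA), so every name here is made up, and it assumes mid-grey in the baked maps means "no displacement":

        // DispLow / DispHigh = the maps baked at 1x / 2x subdivision (step 1).
        float h0 = DispLow.SampleLevel(LinearSampler, uv, 0).r;
        float h1 = DispHigh.SampleLevel(LinearSampler, uv, 0).r;

        // 0 when far away (low tessellation), 1 when close (fully tessellated).
        float blend = saturate((MaxDist - distance(worldPos, CameraPos))
                             / (MaxDist - MinDist));

        // Step 4: lerp the two bakes by camera distance, displace along the normal.
        float height = lerp(h0, h1, blend) * 2.0 - 1.0;   // mid-grey = 0
        worldPos += worldNormal * height * DisplacementScale;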
  • Snader
    I don't mean in terms of transforming the vertices, I mean in terms of baking a map, and directions. Because AFAIK they're currently simple greyscale textures, just like how bump maps used to be black and white before they later evolved into normal maps.

    I realize that there are also more complex implementations, like this one:
    https://www.youtube.com/watch?v=VJAYNNfYCqs
    But from what I know, those aren't used much yet. Again, I might be wrong. I should probably go study this a bit more; I should study next-gen stuff more in general.
    Well, there's my goal for November!
  • Jedi
    ambershee wrote: »
    It really, really isn't ;)

    Normal mapping is a couple of lines of shader math. Dynamic displacement and tessellation are a whole different kettle of beans.


    When you use a normal map in code, those 'few lines' reference multiple functions that can add up to hundreds of lines in total.

    Those 'few lines' you're referring to aren't really a few lines, believe me...
  • CrazyButcher
    Actually, those few lines for a normal map really are just a few instructions; not sure where your "hundreds of lines" would come from. For a tangent-space normal map you only need to "rotate" your normal vector in the right direction, which is just a handful of instructions, as ambershee said.
  • JamesWild
    In most cases, a normal map can be applied thusly (rough HLSL sketch at the end of this post):
    - build a 3x3 matrix in the vertex shader from the normal, tangent, and bitangent.
    - pass it as an interpolated attribute to the fragment shader (yes, you can do this with matrices; it works on the raw values of the matrix, not any "clever" interpolation).
    - sample the texture.
    - unpack it into -1..1 space.
    - transform it by the matrix.

    Granted, texture sampling isn't very simple at all nowadays, but most of this can be done in very few instructions.
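
    In HLSL the whole thing boils down to something like this (sketch; assumes a standard 8-bit RGB normal map with the usual 0..1 encoding, and the names are illustrative):

        // Vertex shader: build the tangent basis and pass it down.
        // (Interpolating a float3x3 just interpolates its nine floats.)
        output.tbn = float3x3(input.tangent, input.bitangent, input.normal);

        // Pixel shader: sample, unpack from 0..1 into -1..1, rotate into
        // world space with the interpolated matrix, renormalize.
        float3 n = NormalMap.Sample(LinearSampler, input.uv).xyz * 2.0 - 1.0;
        float3 worldNormal = normalize(mul(n, input.tbn));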
  • leslievdb
    So normal greyscale tessellation/displacement only stores displacement amounts and displaces along the surface normal, while vector displacement stores a direction and a length/amount in which the displacement should happen, right?
  • ambershee
    leslievdb wrote: »
    So normal greyscale tessellation/displacement only stores displacement amounts and displaces along the surface normal, while vector displacement stores a direction and a length/amount in which the displacement should happen, right?

    Yeah, that sounds about right. The thing to remember is that, much like normal mapping circa 2004, tessellation and dynamic displacement are not very standardised yet, so you will encounter a fair few variations.
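
    In shader terms the difference is basically this (sketch; texture and variable names are illustrative):

        // Greyscale/scalar displacement: one channel, direction comes
        // from the surface normal.
        float h = HeightMap.SampleLevel(LinearSampler, uv, 0).r;
        pos += normal * h * Scale;

        // Vector displacement: three channels; the map itself supplies the
        // direction (here assumed tangent-space, hence the rotation).
        float3 d = VectorDisp.SampleLevel(LinearSampler, uv, 0).xyz;
        pos += mul(d, tbn) * Scale;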
  • JamesWild
    It'd be nice if there were a way to store and retrieve data for subdivided vertices. You could discard textures entirely and use vertex colour with subdivision (like how Polypaint works in ZBrush).
  • ambershee
    If you did that, you'd effectively be using textures anyway, albeit less effectively; think about what information is in a vertex versus what information is in a pixel.

    Textures also have other advantages that verts don't (mip-mapping, preprocessing, etc).
  • Gestalt
    I think the OP meant how the two will compare for artists, and in my opinion things will be pretty much as easy, if not easier. In fact, I think a lot of artists already work with heightmaps as well as normal maps, even without the tessellation. The way we get heightmaps is pretty much the same deal in terms of workflow.

    In terms of difficulties, with normal maps there are lots of little things you have to account for: the rotation of things, how you're mirroring your UVs and all the ways seams can come up, how 127 is not 128, how you need to renormalize whenever you do anything with them; in some ways they can be a big mess. With heightmaps you need to account for the range you're dealing with, but overall they are much more raw in terms of their utility.

    Heightmaps, I'd think, would greatly simplify things for an artist: it's one channel that moves tessellated verts along the normal, so there are no varying directions in the actual maps, making them much more versatile if you want to use them in different situations. They put precision where it counts. Normal maps, by their Cartesian nature, are amazingly inefficient at describing vectors. For all practical uses the blue channel will only use half of its values, 0.5 to 1 (or 0 to 1 expanded), and even then your values are skewed towards 1 (normals facing away from a vector have lower z values and less projected area; think of Lambert's cosine law). Heightmaps can carry information in a much more linear and usable way from the start.
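
    (For anyone who hasn't hit the 127-vs-128 thing, here's the arithmetic, assuming the standard n = c * 2 - 1 unpack of an 8-bit channel:)

        // An 8-bit channel c is sampled as c/255, then expanded with n = c*2 - 1:
        //   c = 127  ->  127/255 * 2 - 1 = -0.0039...
        //   c = 128  ->  128/255 * 2 - 1 = +0.0039...
        // No byte decodes to exactly 0, so a "flat" tangent-space normal
        // (0, 0, 1) can never be stored exactly in 8 bits per channel.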
  • JamesWild
    ambershee wrote: »
    If you did that, you'd effectively be using textures anyway, albeit less effectively; think about what information is in a vertex versus what information is in a pixel.

    Textures also have other advantages that verts don't (mip-mapping, preprocessing, etc).

    Mipmapping would be done implicitly anyway at lower subdivs due to there being fewer vertices.

    No need to unwrap or pack at all; lightmaps and all their problems, history.
  • leslievdb
    The no-texture argument doesn't make a lot of sense.
    A vertex could hold more information, but you're going to end up with very big meshes if you want a vertex to store a bunch of potential outcomes for the tessellated interpolations.

    Let's say you have 2 verts, and in between those verts you have 100 pixels in your texture; those pixels store information for 100 different potential vertices. If you stored that data in the mesh, you'd end up with the information of another 100 verts, and maybe even more than that, depending on what kind of map you're trying to keep inside them.

    I'm no expert on the technical side of these things, but I don't think it would be beneficial for games.
  • Gestalt
    I'd be curious to know how performance would be if everything were switched to dynamic 'verts' based on resolution (or, more likely, some sparse tree structure) rather than rasterization and the UV system with textures and 'texels'. As for tiling and the other benefits of UV textures, 3D blocks of colour information could also tile and be scaled, have different resolutions relative to the 'mesh' resolution, etc. (I don't know what merit there would be in explicitly assigning colour information to verts like Polypaint; having textures and geometry separate has benefits.)
  • ambershee
    JamesWild wrote: »
    Mipmapping would be done implicitly anyway at lower subdivs due to there being fewer vertices.

    Not really. I suspect you'll instead start to introduce artifacts.
    JamesWild wrote: »
    No need to unwrap or pack at all; lightmaps and all their problems, history.

    The issue here is that, generally speaking, lightmaps want to be higher resolution than the geometry they represent. If that's not the case, then you can simply fall back to old-school vertex lighting instead of per-pixel lighting.

    I'd expect that in future tessellation/dynamic-displacement-heavy renderers for next-gen titles, most if not all of the lighting will be dynamic anyway.
  • JamesWild
    Gestalt wrote: »
    I'd be curious to know how performance would be if everything were switched.

    That's the dream: a totally unified pipeline. A pipe dream, perhaps (haha!), but an ideal goal.
    ambershee wrote: »
    Not really. I suspect you'll instead start to introduce artifacts.



    The issue here is that, generally speaking, lightmaps want to be higher resolution than the geometry they represent. If that's not the case, then you can simply fall back to old-school vertex lighting instead of per-pixel lighting.

    I'd expect that in future tessellation/dynamic-displacement-heavy renderers for next-gen titles, most if not all of the lighting will be dynamic anyway.

    Yeah, what I'm talking about would pretty much be vertex lighting.

    Do engines like UE4 not use any static lighting at all? I know the system is designed so radiosity and dynamic lights are cheap, but I'd assume that precomputing any static lights with static actors could still be pretty advantageous in terms of performance, even if it can introduce inconsistencies when dynamic objects move into the scene. It may be 2012, but the cheapest operation is still the one you don't do.

    I'm still pessimistic about there being a next gen for a long time. The developers want it, but even after all this time the marginally more powerful Wii U has to be sold at a loss; we still have dying-console problems (even if they're fewer and further between); consumers are probably not going to be interested in slapping down the cash, judging by the reception of the 3DS and Vita (lukewarm and nonexistent, respectively); and the vendors themselves probably have no interest in starting the loss cycle again. We're also seeing companies continue to develop titles like GTA V and Halo 4 (granted, they've probably been in development for some time, but none of the studios seem to be slowing down for the next gen).
  • gray
    Awesome thread.

    If there were any advantage to using vertex colours over textures, then offline renderers and film studios would have adopted those techniques. Textures give you consistent surface coverage independent of topology, there are well-defined sampling and filtering algorithms, storage size is constant regardless of face counts, etc.

    If anyone has links or research papers on real-time displacement, give them up. I'd like to do some reading on this; it seems others do too.

    One interesting development on the art side is OpenSubdiv with Ptex in Maya and Mudbox. I don't think it's exactly how things will work out in game engine code, but for sculpting, animation, etc. it looks like we will have everything in real time, in package.

    http://area.autodesk.com/blogs/craig/pixar--opensubdiv-with-mudbox-and-maya



    https://www.youtube.com/watch?v=Y-3L9BOTEtw
  • gray
    ambershee wrote: »
    I'd expect that in future tessellation/dynamic-displacement-heavy renderers for next-gen titles, most if not all of the lighting will be dynamic anyway.

    I think that's the single most important advance for more realistic images. Baking lights and shadows into the colour channel is destructive and inaccurate. When you render in V-Ray or mental ray, the lights do the lighting and shading and the textures do the colour and surface properties, as it should be. It looks a million times better.
  • CrazyButcher
    It seems people are getting a bit too hyped; I wouldn't expect much of a change for the artist. Fundamentally, normal maps will still be around, as will the other maps; you just have "one more" map now for some assets. A displacement map just moves a vertex along some direction; it doesn't encode the surface's normal, which you still need for shading.

    I agree with gray that you should simply look at what film does; they've had tessellation forever and aren't working fundamentally differently (except maybe for the Ptex stuff).

    But anyway, as an artist, you do your work and hope there is an established baking pipeline.

    There is of course special-purpose tessellation usage, like hair, grass, terrain... I guess that will be handled through custom tools.

    Fundamentally, all the tessellation shaders do is let a programmer define how much a triangle's edges and interior should get subdivided, and then, later, where the new vertices should be positioned. That's all; what kind of tessellation algorithms people end up using is totally up to them. Engine X could do this, engine Y that, and engine Z a blend, or whatever...
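
    In DX11 terms, that's the hull shader's patch-constant function choosing the factors and the domain shader placing the new verts. Bare-bones sketch (flat triangles, one uniform factor, no displacement; struct and variable names are made up):

        struct PatchTess
        {
            float EdgeTess[3] : SV_TessFactor;
            float InsideTess  : SV_InsideTessFactor;
        };
        struct HSOut { float3 pos : POSITION; };
        struct DSOut { float4 pos : SV_POSITION; };

        float    TessFactor;   // how finely to split; could be distance-based
        float4x4 ViewProj;

        // Patch-constant function: this is where the programmer decides how
        // much each triangle's edges and interior get subdivided.
        PatchTess ConstantHS(InputPatch<HSOut, 3> patch)
        {
            PatchTess pt;
            pt.EdgeTess[0] = pt.EdgeTess[1] = pt.EdgeTess[2] = TessFactor;
            pt.InsideTess = TessFactor;
            return pt;
        }

        [domain("tri")]
        [partitioning("fractional_odd")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("ConstantHS")]
        HSOut HS(InputPatch<HSOut, 3> patch, uint i : SV_OutputControlPointID)
        {
            return patch[i];   // pass the control points straight through
        }

        // Domain shader: decides where each generated vertex goes. Plain
        // barycentric interpolation here; displacement would slot in just
        // before the final transform.
        [domain("tri")]
        DSOut DS(PatchTess pt, float3 b : SV_DomainLocation,
                 const OutputPatch<HSOut, 3> tri)
        {
            DSOut o;
            float3 p = b.x * tri[0].pos + b.y * tri[1].pos + b.z * tri[2].pos;
            o.pos = mul(float4(p, 1.0), ViewProj);
            return o;
        }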

    So technically the situation is as bad as the lack of a standard tangent space ;)

    There is some hope that Pixar's style of subdividing things could get adopted more widely in the industry (the OpenSubdiv release at this year's SIGGRAPH).

    Some tech background:

    http://www.nvidia.in/object/gpubbq-2008-subdiv.html

    http://developer.download.nvidia.com/assets/gamedev/files/gdc12/GDC12_DUDASH_MyTessellationHasCracks.pdf

    http://developer.download.nvidia.com/presentations/2010/gdc/Tessellation_Performance.pdf

    http://www.opensubdiv.com/

    In my opinion there is not much "preparation work" for the artists; the big jump was when high-poly was introduced, and the baking as such.

    The traditional art skills will continue to dominate :) At some point there will be new tools or some new "options" in baker settings, but until you're actually working with such tools on a real project, there's no need to worry too much about it.
  • gray
    Cheers for the links.
  • ambershee
    As it happens, I've seen examples of theoretical next-gen renderers that make use of Ptex or Ptex-like solutions.
    JamesWild wrote: »
    Do engines like UE4 not use any static lighting at all? I know the system is designed so radiosity and dynamic lights are cheap, but I would assume that precomputing any static lights with static actors could still be pretty advantageous in terms of performance

    UE4 uses no static lighting, but I have concerns over the performance of a voxel-based GI system; it is not going to be cheap at all, since it's borderline raytracing. There are plenty of decent solutions for fully dynamic lighting that are entirely feasible, but they have issues handling translucency and handling global illumination accurately and efficiently (which UE4's voxel scenario should actually handle quite well in theory).

    The trouble with lightmaps is that at the higher resolutions that will be demanded for next-gen games, the lightmaps themselves are going to start occupying a lot of space in memory; space which is already occupied by an increasing plethora of other texture maps, which are in turn higher resolution. The increased emphasis on dynamic lighting for appropriate light sources, as well as global illumination, means that there will still be a great deal of dynamic light to handle anyway; so you may as well do everything dynamically.
  • JamesWild
    ambershee wrote: »
    As it happens, I've seen examples of theoretical next-gen renderers that make use of Ptex or Ptex-like solutions.
    I think I saw ATI's stuff on this, and some work done along these lines for Milo; it looked like it could be quite good.
    ambershee wrote: »
    The trouble with lightmaps is that at the higher resolutions that will be demanded for next-gen games, the lightmaps themselves are going to start occupying a lot of space in memory.
    This is very true, and I suspect a megatexture solution for lighting could be a nice tradeoff, maintaining the reuse of high-quality textures with streamed, atlased lightmaps.
  • ZacD
    I hope there isn't a megatexture solution; baking a megatexture takes a ton of time, so baking a megatexture + lightmap = waiting hours to test lighting. Real-time with decent shaders, please. I love the way specular and reflections are handled in UE4; glossy reflections are cheaper than normal reflections :D
  • SHEPEIRO
    ambershee wrote: »
    UE4 uses no static lighting, but I have concerns over the performance of a voxel-based GI system; it is not going to be cheap at all, since it's borderline raytracing.

    It won't be cheap, but as I understand it, it's a fairly fixed cost, which makes it nice to work around to get consistently decent framerates. The production cost of lightmapping (unwrapping, rendering, iteration time) will become too expensive and restrictive (no time-of-day changes, etc.) for a lot of companies/games.
  • EarthQuake
    CrazyButcher wrote: »
    A displacement map just moves a vertex along some direction; it doesn't encode the surface's normal, which you still need for shading.

    Yeah, so many people seem to be confused by this. Displacement maps only affect the silhouette; you still need a normal map for your shading. Even if you re-normalized the displaced mesh, your shading would "swim" as the LOD on the tessellation changed, and it would look really ugly.

    Try to think of proper displacement maps as a replacement for parallax mapping instead; normal maps aren't going anywhere for a long time.
  • littleclaude
    Real-time DX11 tessellation in Maya:

    http://youtu.be/L5fOwSmSaW8

  • JacqueChoi
    We use real-time Phong tessellation on Thief.
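
    For the curious: Phong tessellation doesn't need a map at all; the domain shader "inflates" each triangle by projecting the flat interpolated position onto the corner tangent planes and blending. Rough sketch (not our actual shader; names are illustrative):

        // Project point q onto the tangent plane through p with normal n.
        float3 ProjectToPlane(float3 q, float3 p, float3 n)
        {
            return q - dot(q - p, n) * n;
        }

        // In the domain shader, b = barycentric coords of the new vertex:
        float3 flatPos  = b.x * p0 + b.y * p1 + b.z * p2;
        float3 phongPos = b.x * ProjectToPlane(flatPos, p0, n0)
                        + b.y * ProjectToPlane(flatPos, p1, n1)
                        + b.z * ProjectToPlane(flatPos, p2, n2);
        // Alpha = 0 gives the flat triangle, 1 the fully rounded version.
        float3 pos = lerp(flatPos, phongPos, Alpha);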



    From what I can tell, some of the tech will be shot down on the merits of its practicality (but please prove me wrong).

    From my understanding of displacement maps... they won't be very cost-efficient, as an UNCOMPRESSED texture will likely require a LOT more resources than the tens of thousands of additional triangles that could simply be modelled.

    Ptex won't be used, because it doesn't inherently solve the mipping issue... and we would likely require completely new hardware to properly run it in real time.
  • marks
    JacqueChoi wrote: »
    From my understanding of displacement maps... they won't be very cost-efficient, as an UNCOMPRESSED texture will likely require a LOT more resources than the tens of thousands of additional triangles that could simply be modelled.

    Hmm... possibly. However, a lot of people are starting to use a more layered approach to shaders (probably more applicable to environment production than to characters), so if you're using multiple material layers in your shader, built from small, tiled textures, then the majority of the cost goes directly onto the vertex/pixel shader rather than into huge uncompressed/16-bit textures.

    Which is, again (as I understand it), a lot more similar to what film VFX is doing.

    Useful for some things, not useful for others, I guess, on a cost-benefit level.
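
    For example, something like this (sketch; all the names are illustrative): a couple of small tiling layers blended by a mask, so texture memory stays small and the cost moves into the pixel shader.

        // Two small tiling material layers blended by a (low-res) mask.
        float3 rock   = RockAlbedo.Sample(LinearSampler, uv * RockTiling).rgb;
        float3 moss   = MossAlbedo.Sample(LinearSampler, uv * MossTiling).rgb;
        float  w      = BlendMask.Sample(LinearSampler, uv).r;  // or a vertex colour
        float3 albedo = lerp(rock, moss, w);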