I have a general idea of what tessellation is (it's when the camera gets close enough, the mesh subdivides accordingly?) and I know what displacement maps are, but what I want to discuss here is how they will become more relevant in game development and to what degree. Will it be as bad as when normal maps were first introduced and information was scarce?
Also, what's the deal with DX11?
I'm bringing this up because on the most recent episode of crunchcast they mentioned that displacement and tessellation will be used in next-gen titles, and that it would be in an artist's best interest to learn them.
Thanks!!
Replies
I believe they will be used eventually. Right now, consoles don't support DX11 and that's the biggest market for the game industry. UDK supported DX11 a while ago and CryEngine just got it too, so I don't think it's been out long enough for any major developers to really utilize it for PC games.
That said, there will still be instances where people don't handle tessellation well, because it requires a slightly different approach to low-poly modeling. Namely one that's more similar to sculpting basemeshes: getting an even distribution of square-ish polygons instead of long rectangles/triangles.
I think it's also a much simpler and more straightforward method (code-wise) than normal mapping, though I must admit I don't know a lot about it and am not very certain about this.
It really, really isn't
Normal mapping is a couple of lines of shader math. Dynamic displacement and tessellation is a whole different kettle of fish. In either case, I fully expect the next generation of games (and current PC games) to make increasing use of displacement over the next couple of years, alongside better lighting models. I'd strongly recommend becoming familiar with it.
With that in mind, when it comes to baking or generating your displacement map, you need to account for the number of vertices you're going to be going from, and to.
The limited experience I've had so far has been in UDK, and while I can't show you the shader (NDA), I can explain the basic principle I used.
1. Generate two displacement maps: the first with the low-poly model subdivided once, the second with the low-poly model subdivided twice. If you generate the maps in xNormal, they will be accurate to the vertex positions of your low-poly model once it's tessellated.
2. ALWAYS generate the displacement map from a triangulated mesh that's been tessellated (see the number of iterations above).
3. The shader in UDK then tessellates the mesh based on camera distance from the target, and the maximum tessellation value is whatever level you subdivided to when baking the displacement maps.
4. The displacement portion of the UDK shader is a linear interpolation (lerp) between the lowest and highest subdivided displacement maps you generated in step 1. The alpha of the lerp is also the camera distance from the model.
Essentially, as the camera gets closer to the model, it tessellates it and then displaces it. The closer you are, the more tessellation and the higher quality the displacement (rough sketch below).
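To make that concrete, here's a minimal sketch of the idea in plain Python (not the actual UDK material, which is node-based and under NDA); the near/far distances, the falloff, and the sample values are all made-up placeholders.

```python
# Hypothetical sketch of the distance-driven tessellation + displacement
# described above. All numbers and names are illustrative, not UDK's.

def saturate(x):
    """Clamp to [0, 1], like the HLSL intrinsic."""
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def tess_and_displace(camera_distance, disp_low, disp_high,
                      near=100.0, far=2000.0, max_tess=2.0):
    """Return (tessellation multiplier, displacement height) for one vertex.

    disp_low  - sample from the map baked against the once-subdivided mesh
    disp_high - sample from the map baked against the twice-subdivided mesh
    """
    # 0 when the camera is far away, 1 when it is at or inside 'near'.
    proximity = saturate((far - camera_distance) / (far - near))

    # More subdivision as the camera approaches, capped at the level the
    # displacement maps were baked for (step 3 above).
    tess_factor = 1.0 + proximity * (max_tess - 1.0)

    # Blend towards the higher-density bake as the camera approaches,
    # mirroring the lerp whose alpha is camera distance (step 4 above).
    height = lerp(disp_low, disp_high, proximity)
    return tess_factor, height

# Example: a vertex 500 units from the camera.
print(tess_and_displace(500.0, disp_low=0.42, disp_high=0.47))
```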
I realize that there are also more complex implementations like this one:
https://www.youtube.com/watch?v=VJAYNNfYCqs
But from what I know those aren't used much yet. Again, I might be wrong. I should probably go study a bit more on this. I should study next-gen stuff more in general.
Well, there's my goal for November!
When you use a normal map in code, those 'few lines' reference multiple functions that can add up to hundreds of lines in total.
Those 'few lines' you're referring to aren't really a few lines, believe me...
-build a 3x3 matrix in the vertex shader from the normal, tangent, and bitangent
-pass it as an interpolated attribute to the fragment shader (yes, you can do this with matrices; it interpolates the raw values of the matrix, nothing "clever")
-sample the texture
-unpack it into -1->1 space
-transform it with the matrix.
Granted, the texture sampling isn't very simple at all nowadays, but most of it can be done in very few instructions; a rough illustration follows below.
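This is written as plain Python rather than real shader code, just to show the data flow of the steps above; the texel value and the identity tangent basis are invented for the example.

```python
# Illustrative only: the normal-mapping steps listed above, done on the CPU.
# Assumes the normal/tangent/bitangent from the mesh are unit length and
# orthogonal, which real pipelines can't always take for granted.

def unpack_normal(rgb):
    """Map an 8-bit texel from [0, 255] into [-1, 1] per channel."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

def tangent_to_world(n_ts, tangent, bitangent, normal):
    """Transform a tangent-space normal with the 3x3 TBN matrix built
    from the interpolated vertex attributes."""
    x, y, z = n_ts
    return tuple(x * t + y * b + z * n
                 for t, b, n in zip(tangent, bitangent, normal))

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

# (128, 128, 255) would be "straight up" in tangent space; this texel
# leans slightly off that.
texel = (140, 120, 250)
n_ts = unpack_normal(texel)
world_n = normalize(tangent_to_world(n_ts,
                                     tangent=(1, 0, 0),
                                     bitangent=(0, 1, 0),
                                     normal=(0, 0, 1)))
print(world_n)
```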
Yeah, this sounds about right. The thing to remember is that, much like normal mapping circa 2004, tessellation and dynamic displacement are not very standardised yet, so you will encounter a fair few variations.
Textures also have other advantages that verts don't (mip-mapping, preprocessing, etc).
In terms of difficulties, with normal maps there are lots of little things you have to account for: the rotation of things, how you're mirroring your UVs, all the ways seams can come up, how 127 is not 128, how you need to renormalize whenever you do anything with them; in some ways they can be a big mess. With heightmaps you need to account for the range you're dealing with, but overall they are much more raw in terms of their utility.
Heightmaps, I'd think, would greatly simplify things for an artist; it's one channel that moves tessellated verts along the normal, so no varying directions in the actual maps, making them much more versatile if you wanted to use them in different situations. They put precision where it counts. Normal maps, by their Cartesian nature, are amazingly inefficient at describing vectors. For all practical uses the blue channel will only use half of its values, 0.5 to 1 (or 0 to 1 expanded), and even then your values are skewed towards 1 (normals not facing the baking direction have lower Z values and less projected area; think of Lambert's cosine law). Heightmaps can hold information in a much more linear and usable way from the start.
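A quick, throwaway check of that blue-channel point, assuming the usual z * 0.5 + 0.5 encoding (the sampling here is crude and purely for illustration):

```python
# For any unit normal facing the baking direction (z >= 0), the encoded
# blue value b = z * 0.5 + 0.5 can only land in [0.5, 1.0].
import random

blues = []
for _ in range(10000):
    # Random direction, forced into the upper hemisphere.
    x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
    z = abs(z)
    length = (x * x + y * y + z * z) ** 0.5
    blues.append(z / length * 0.5 + 0.5)

print(min(blues), max(blues))   # never drops below 0.5
```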
Mipmapping would be done implicitly anyway at lower subdivs due to there being fewer vertices.
No need to unwrap or pack at all; lightmaps and all their problems would be history.
A vertex could hold more information, but you're going to end up with very big meshes if you want a vertex to store a bunch of potential outcomes for the tessellated interpolations.
Let's say you have 2 verts, and in between those verts you have 100 pixels in your texture; those pixels store information for 100 different potential vertices. If you stored that data in the mesh, you'd end up with information for another 100 verts, and maybe even more than that depending on what kind of map you're trying to keep inside them.
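Here's the same point as back-of-the-envelope arithmetic; the byte counts are made up but typical, just to show the scale difference.

```python
# Storing 100 "potential vertices" of detail as mesh data vs. as texels.
bytes_per_vertex = 32   # position + normal + UV in a compact layout (assumed)
bytes_per_texel = 1     # a single 8-bit height channel

print("as extra vertices:", 100 * bytes_per_vertex, "bytes")   # 3200
print("as texels:        ", 100 * bytes_per_texel, "bytes")    # 100
```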
I'm no expert on the technical side of these things, but I don't think it would be beneficial for games.
Not really. I suspect you'll instead start to introduce artifacts.
The issue here is that, generally speaking, lightmaps want to be higher resolution than the geometry they represent. If this is not the case, then you can simply default to old-school vertex lighting instead of per-pixel lighting.
I'd expect that in future tessellation / dynamic displacement heavy renderers for next-gen titles, most, if not all, of the lighting would be dynamic anyway.
That's the dream. Totally unified pipeline. A pipe dream perhaps (haha!) but an ideal goal.
Yeah what I'm talking about would pretty much be vertex lighting.
Do engines like UE4 not use any static lighting at all? I know the system is designed so radiosity and dynamic lights are cheap, but I would assume that precomputing any static lights with static actors could still be pretty advantageous in terms of performance, even if it could introduce inconsistencies when dynamics move into the scene. It may be 2012, but the cheapest operation is still the one you don't do.
I'm still pessimistic about there being a next gen for a long time. The developers want it, but even after all this time the marginally more powerful Wii U has to be sold at a loss; we still have dying-console problems (even if they are fewer and farther between); consumers are probably not going to be interested in slapping down the cash, judging by the reception of the 3DS and Vita (lukewarm and nonexistent respectively); and the vendors themselves probably have no interest in starting the loss cycle again. We're also seeing companies continue to develop titles like GTA V and Halo 4 (granted, they've probably been in development for some time, but none of the studios seem to be slowing down for the next gen).
If there were any advantage to using vertex colors over textures, then offline renderers and film studios would have adopted those techniques. Textures give you consistent surface coverage independent of topology, there are well-defined sampling and filtering algorithms, storage size is constant regardless of face counts, etc.
If anyone has links or research papers on real-time displacement, give them up. I'd like to do some reading on this; it seems others do too.
One interesting development on the art side is OpenSubdiv with Ptex in Maya and Mudbox. I don't think it's exactly how things will work out in game engine code, but for sculpting, animation, etc. it looks like we will have everything in real time, in package.
http://area.autodesk.com/blogs/craig/pixar--opensubdiv-with-mudbox-and-maya
I think that is the single most important advance for more realistic images. Baking lights and shadows into the color channel is destructive and inaccurate. When you render in V-Ray or mental ray, the lights do the lighting and shading and the textures do the color and surface properties, as it should be. It looks a million times better.
I agree with gray that you should simply look at what film does; they have had tessellation forever and aren't working fundamentally differently (except maybe for the Ptex stuff).
But anyway, as an artist, you do your work and hope there is an established baking pipeline.
There is of course special-purpose tessellation usage, like in hair, grass, terrain... I guess that will be handled through custom tools.
Fundamentally, all the tessellation shaders do is allow a programmer to define how much a triangle's edges and interior should get subdivided, and then later where the new vertices should be positioned. That's all; what kind of tessellation algorithms people end up using is totally up to them. Engine X could do that, Y this, and Z a blend, or whatever (loose sketch below)...
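For illustration only, here's roughly what those two decisions look like, written as plain Python with a made-up screen-space heuristic; real hull/domain shaders run on the GPU and every engine picks its own metric.

```python
# Decision 1 ("hull shader" stage): how finely to split each edge and the
# interior. The 8-pixels-per-segment heuristic is invented for this example.
def edge_and_inside_factors(edge_lengths_px):
    edge_factors = [max(1.0, length / 8.0) for length in edge_lengths_px]
    inside_factor = sum(edge_factors) / len(edge_factors)
    return edge_factors, inside_factor

# Decision 2 ("domain shader" stage): where a generated vertex actually goes.
# Barycentric-interpolate the triangle corners, then push along the normal
# by the sampled displacement height.
def place_new_vertex(p0, p1, p2, bary, normal, height, scale=1.0):
    u, v, w = bary
    pos = [u * a + v * b + w * c for a, b, c in zip(p0, p1, p2)]
    return [p + n * height * scale for p, n in zip(pos, normal)]

print(edge_and_inside_factors([64.0, 32.0, 48.0]))
print(place_new_vertex((0, 0, 0), (1, 0, 0), (0, 1, 0),
                       bary=(1/3, 1/3, 1/3),
                       normal=(0, 0, 1), height=0.25))
```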
So technically the situation is as bad as the lack of a standard tangent space.
There is some hope that Pixar's style of subdividing things could get adopted more in the industry (the OpenSubdiv release at this year's SIGGRAPH).
some tech background:
http://www.nvidia.in/object/gpubbq-2008-subdiv.html
http://developer.download.nvidia.com/assets/gamedev/files/gdc12/GDC12_DUDASH_MyTessellationHasCracks.pdf
http://developer.download.nvidia.com/presentations/2010/gdc/Tessellation_Performance.pdf
http://www.opensubdiv.com/
In my opinion there is not much "preparation work" for the artists; the big jump was when hi-poly modeling and baking as such were introduced.
The traditional art skills will continue to dominate. At some point there will be new tools or some new "options" for baker settings, but until you are actually working with such tools on a real project, there's no need to worry too much about it.
UE4 uses no static lighting, but I have concerns over the performance of a voxel-based GI system - it is not going to be cheap at all, since it's borderline raytracing. There are plenty of decent fully-dynamic lighting solutions that are entirely feasible, but they have issues handling translucency and handling global illumination accurately and efficiently (which UE4's voxel scenario should actually handle quite well in theory).
The trouble with lightmaps is that at the higher resolutions we're going to find demanded for next-gen games, the lightmaps themselves are going to start occupying a lot of space in memory; space which is already occupied by an increasing plethora of other texture maps, which are in turn higher resolution. The increased emphasis on dynamic lighting for appropriate light sources, as well as global illumination, means that there will still be a great deal of dynamic light to handle anyway - so you may as well do everything dynamically.
This is very true, and I suspect a megatexture solution for lighting could be a nice tradeoff, maintaining the reuse of high quality textures with streamed, atlased lightmaps.
http://vimeo.com/55032699#t=1378
It won't be cheap, but as I understand it, it's a fairly fixed cost, which makes it nice to work around to get decent framerates consistently. The production cost of lightmapping (unwrapping, rendering, iteration time) will become too expensive and restrictive (no time-of-day changes, etc.) for a lot of companies/games.
Yeah, so many people seem to be confused by this. Displacement maps only affect the silhouette; you still need a normal map for your shading. Even if you re-normalized the displaced mesh, your shading would "swim" as the LOD on the tessellation changed, and it would look really ugly.
Try to think of proper displacement maps as a replacement for parallax mapping instead, normal maps aren't going anywhere for a long time.
http://youtu.be/L5fOwSmSaW8
From what I can tell, some of the tech will be shot down on the merits of its practicality (but please prove me wrong).
From my understanding of displacement maps, they won't be very cost-efficient, as an UNCOMPRESSED texture will likely require a LOT more resources than tens of thousands of additional triangles that are simply modelled.
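As a rough, back-of-the-envelope comparison of that claim (all numbers assumed, not measured):

```python
# One 2048x2048 single-channel 16-bit displacement map, uncompressed:
map_bytes = 2048 * 2048 * 2
print("displacement map:    %.1f MB" % (map_bytes / 2**20))    # 8.0 MB

# Tens of thousands of plainly modelled triangles, assuming a compact
# 32-byte vertex and roughly one vertex per triangle on a dense mesh:
geo_bytes = 50_000 * 32
print("50k extra triangles: %.1f MB" % (geo_bytes / 2**20))    # ~1.5 MB
```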
Ptex won't be used, because it doesn't inherently solve the mipping issue, and we will likely require completely new hardware to run it properly in real time.
Hmm... possibly. However, a lot of people are starting to use a more layered approach to shaders (possibly more applicable to environment production than to characters), so if you're using multiple material layers of small, tiled textures in your shader, then the majority of the cost goes directly onto the vertex/pixel shader rather than onto huge uncompressed/16-bit textures.
Which is again (as I understand it) a lot more similar to what film VFX is doing.
Useful for some things, not useful for others I guess on a cost-benefit level.