I won't pretend to understand everything I read in the short paper they've published, but it looks like it'll be almost completely transparent to 90% of people on here.
The TLDR seems to be 'more geo, more efficiently'
It's pretty low-level stuff, so obviously the efficiency improvements would lead to us being allowed more features to play with as artistic types, but I'm not seeing a paradigm shift or anything.
*Not an engine programmer
Yes, it's mostly targeting programmers, to allow more efficient encoding/processing of geometry. If there is anything you are specifically interested in, just ask; I've worked on this feature (research, software design) extensively over the past few years. Here is also an open-source sample: https://github.com/nvpro-samples/gl_vk_meshlet_cadscene
The primary motivation is to overcome the inefficiencies of "geometry shaders" (introduced with DX10-era chips) and the lack of flexibility of "tessellation shaders" (DX11).
Hi CrazyButcher, thanks for stopping by. I think I understand most of it, at least on some level, so I just have a few things to confirm.
- With this, people are able to partially cull meshes, e.g. unseen patches (meshlets) can be skipped? Does the culling technique use the ray tracing acceleration structure? Also, if we don't render the unseen parts of a mesh, won't they show up as holes in the ray tracing passes?
- Did the demo take advantage of instancing?
- How realistic are the triangle numbers seen in the video compared to a more life-like scenario? It was all asteroids and probably instanced geometry, so how much gain would we get in a regular scene where instancing can't be utilized as much?
- Were those LODs made at runtime, or pre-baked?
- Should we expect near-future games to rely on this, with heavily tessellation-based geometry and such an intense LOD system?
- And finally, when do you think all of this will actually get used in products?
- Yes, you can cull on multiple levels now: per drawcall using draw indirect (older hardware supports this), per triangle cluster (within the task shader; a sketch of such a test follows below), and per triangle (in the mesh shader, if feasible). This feature is completely orthogonal to ray tracing; you feed all data yourself, so you come up with your own culling strategies.
- The geometry is not procedurally generated in this demo; it uses instanced variations of some asteroids. Once you use modern APIs like Vulkan/DX12, the per-drawcall costs are so low that old-school instancing doesn't give you much benefit compared to, say, multi-draw indirect.
- "It depends", as always: the more you can cull, and the better you can organize your data for it, the more you will benefit.
- The LODs were pre-baked, and the shader chooses a different one depending on size.
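To make the per-cluster point concrete, here is a minimal sketch of the kind of visibility test a task shader typically runs per meshlet: a bounding-sphere frustum check plus a normal-cone backface check. This is not the demo's actual source; it's written as plain C++ for readability, and it assumes meshlet bounds (sphere center/radius, cone axis/cutoff) were precomputed offline, in the style tools like meshoptimizer produce.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float len(Vec3 v)         { return std::sqrt(dot(v, v)); }

// Precomputed per-meshlet bounds (hypothetical layout for this sketch).
struct MeshletBounds {
    Vec3  center;      // bounding-sphere center
    float radius;      // bounding-sphere radius
    Vec3  coneAxis;    // average facing direction of the cluster's triangles
    float coneCutoff;  // precomputed cosine-style cutoff for the normal cone
};

// One frustum plane: dot(n, p) + d >= 0 means "inside".
struct Plane { Vec3 n; float d; };

// Frustum test: reject the whole cluster if its sphere lies fully
// outside any of the six planes.
bool sphereInFrustum(const Plane planes[6], Vec3 c, float r) {
    for (int i = 0; i < 6; ++i)
        if (dot(planes[i].n, c) + planes[i].d < -r)
            return false;
    return true;
}

// Normal-cone backface test: if every triangle in the cluster faces away
// from the camera, the whole cluster can be skipped.
bool coneBackfacing(const MeshletBounds& m, Vec3 cameraPos) {
    Vec3 dir = sub(m.center, cameraPos);
    return dot(dir, m.coneAxis) >= m.coneCutoff * len(dir) + m.radius;
}

// Combined per-cluster culling decision, as described above.
bool meshletVisible(const MeshletBounds& m, const Plane frustum[6], Vec3 cameraPos) {
    return sphereInFrustum(frustum, m.center, m.radius) && !coneBackfacing(m, cameraPos);
}
```

In an actual task shader, each thread would run a test like this for one meshlet and the surviving meshlet indices would be compacted and handed to the mesh shader stage; the math is the same, only the execution environment differs.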
This particular "asteroid" demo doesn't truly require mesh/task shaders, because you could use a compute shader + execute indirect to do per-asteroid culling etc. However, you would lose the benefits of custom data structures and sub-drawcall culling, and the low LODs in particular would not be as efficient if rendered as their own drawcalls. No alternative mode was added to this sample, but in the GitHub sample I linked I show a comparison to classic rendering.
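For comparison, here is a hedged sketch of that "culling + draw indirect" alternative: cull per object, compact the survivors into an indirect command buffer, then issue a single multi-draw. The `ObjectLod` type and `buildDrawList` helper are hypothetical names for this illustration; only the Vulkan struct and the `vkCmdDrawIndexedIndirect` call are real API. The culling is shown on the CPU for brevity; the demo alternative discussed above would do the equivalent in a compute shader writing the same command struct.

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Hypothetical per-object geometry reference (one pre-baked LOD already chosen).
struct ObjectLod {
    uint32_t indexCount;
    uint32_t firstIndex;
    int32_t  vertexOffset;
};

// Build one indirect draw command per visible object; culled objects
// simply emit no command at all.
std::vector<VkDrawIndexedIndirectCommand>
buildDrawList(const std::vector<ObjectLod>& objects,
              const std::vector<bool>& visible) {
    std::vector<VkDrawIndexedIndirectCommand> cmds;
    for (size_t i = 0; i < objects.size(); ++i) {
        if (!visible[i]) continue;
        cmds.push_back({
            objects[i].indexCount,
            1u,                        // instanceCount
            objects[i].firstIndex,
            objects[i].vertexOffset,
            static_cast<uint32_t>(i),  // firstInstance, reused as an object ID
        });
    }
    return cmds;
}

// After uploading the commands to a GPU buffer, the whole scene is one call:
//   vkCmdDrawIndexedIndirect(cmd, drawBuffer, 0, drawCount,
//                            sizeof(VkDrawIndexedIndirectCommand));
```

This illustrates the trade-off named in the reply: the granularity here stops at the drawcall, so a tiny low-LOD asteroid still occupies a whole command, and there is no sub-drawcall (per-cluster or per-triangle) culling the way the task/mesh pipeline allows.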
In the end I expect we will see fully procedural use, as well as more dynamic LOD algorithms etc., being developed in the future. I also hope other hardware vendors will allow similar programmability in the next couple of years, so that we have a standardized approach to a "modernized" geometry pipeline.
Replies
Another cool tutorial series where these shaders are used is this one:
https://github.com/zeux/niagara