
hardware subdiv/displacement

CrazyButcher
Stumbled upon these slides:
http://developer.download.nvidia.com/presentations/2008/Gamefest/Gamefest2008-DisplacedSubdivisionSurfaceTessellation-Slides.PDF
by Ignacio Castaño (who is a researcher at NVIDIA, and a very helpful one hehe).

Anyway, it's about programmable tessellation and how it will be possible in the next generation of GPUs/graphics APIs. It mentions the technical hurdles that still have to be overcome and so on. Probably too technical, but it looks like in a few years, when the new consoles come out, game models will probably look just like their hi-res source art, which means less "downgrading" in the baking/export process... Still years away from mass use, but I'm pretty sure that by the time the successors to the current consoles come out, the tech will have matured.
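In case it helps to picture it, here's a minimal CPU-side sketch of what the tessellate-then-displace step boils down to conceptually. Everything in it (the Patch struct, sampleDisplacement, the bilinear patch evaluation) is made up for illustration; the real thing runs in the new hardware shader stages and evaluates proper subdivision/ACC patches rather than a flat bilinear quad.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 mix(Vec3 a, Vec3 b, float t) { return add(scale(a, 1.0f - t), scale(b, t)); }
static Vec3 normalize(Vec3 a)
{
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return len > 0.0f ? scale(a, 1.0f / len) : a;
}

// Stand-in for fetching a scalar height from a baked displacement map;
// here it's just a procedural bump so the sketch is self-contained.
static float sampleDisplacement(float u, float v)
{
    return std::sin(u * 3.14159f) * std::sin(v * 3.14159f);
}

// Four corners of an already-smoothed patch. A real implementation would
// evaluate the subdivision/ACC patch here instead of a flat bilinear quad.
struct Patch { Vec3 pos[4]; Vec3 nrm[4]; };

// Tessellate the patch into (n+1)^2 vertices and push each vertex out along
// its interpolated normal by the displacement height. This is what recovers
// the hi-res silhouette from a low-poly cage plus a displacement map.
std::vector<Vec3> tessellateAndDisplace(const Patch& p, int n, float amplitude)
{
    std::vector<Vec3> verts;
    for (int i = 0; i <= n; ++i) {
        for (int j = 0; j <= n; ++j) {
            float u = float(i) / float(n);
            float v = float(j) / float(n);
            Vec3 pos = mix(mix(p.pos[0], p.pos[1], u), mix(p.pos[3], p.pos[2], u), v);
            Vec3 nrm = normalize(mix(mix(p.nrm[0], p.nrm[1], u), mix(p.nrm[3], p.nrm[2], u), v));
            verts.push_back(add(pos, scale(nrm, sampleDisplacement(u, v) * amplitude)));
        }
    }
    return verts;
}
```

The win is that only the low-poly cage plus the maps would need to be stored and animated, with the dense vertices generated on the GPU each frame.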

Replies

  • Toomas
    ATI did realtime tessellation ages ago with their TruForm, and some Matrox card(s) featured realtime displacement maps ages ago as well.
  • CrazyButcher
    It never became part of a mainstream API like it is about to now, though.
  • Keg
    It was an interesting read, a little above my head at the moment.

    With the realtime displacement being API-based, it will work across ATI, NVIDIA, Matrox, etc., on any GPU that supports DX10 or higher. That will make it more appealing for developers to use, since it doesn't alienate any one group of owners. And adding features for just one make of card never goes over well on the internet forums.
  • [Deleted User]
    Wow. I didn't follow most of the technical parts, but it looks like Gregory ACC patches are going to be the new hotness. Does this mean displacement maps are going to completely replace normal or bump maps, though, or could they still be combined with those for smaller details? It mentioned that the new baking software would be able to extract normal maps too, but I don't think it said whether they could be used in addition to the displacement maps.

    If the next batch of GPUs supports this, I wonder how long it'll be before it's widely used in PC games.
  • CrazyButcher
    The next console generation will decide the mainstream techniques again. Anything else wouldn't be commercially successful for the mainstream market.

    I would think normal maps will still be used. With a displacement map you would have to recalculate normals from the displaced geometry, and why do that when it's faster, and probably less of an issue, to just use a baked normal map...

    Especially for LOD and all that, you would still want proper shading, even when the silhouette is more low-res.

    I think this is mainly there to get better LOD behavior, i.e. nicer silhouettes up close, for important models. (Rough sketch of what I mean below.)
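    As a made-up sketch of that point (the numbers and falloff are arbitrary): the tessellation factor can drop with distance, while the shading normal keeps coming from the baked normal map, so lighting detail doesn't depend on how finely the mesh is tessellated.

    ```cpp
    #include <algorithm>

    // Made-up LOD heuristic: tessellate less as the patch gets farther away.
    // Only the silhouette gets coarser; the shading detail is unaffected because
    // it comes from the baked normal map, not from normals rebuilt off the
    // displaced geometry.
    int tessellationFactor(float distanceToCamera, int maxFactor = 64)
    {
        float f = float(maxFactor) / std::max(1.0f, distanceToCamera);
        return std::max(1, std::min(maxFactor, int(f)));
    }

    // Pixel shading stays the same at every LOD (pseudocode):
    //   N     = perturb(interpolatedVertexNormal, sampleNormalMap(uv));
    //   color = light(N, ...);
    ```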
  • Eric Chadwick
    We talked about using this for some cool tech at Whatif, but never got the time to actually put it into play. The idea was to paint dynamically into a texture based on collision (sword strikes, etc.) and use that to displace wounds and the like, with hardware tessellation adding detail to the wound, and to mix in a subcutaneous shader using the same map (rough sketch of the painting part below). Can't wait to see someone take advantage of this. :)
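    A rough, hypothetical sketch of that painting step (in a real engine the splat would be a small render-to-texture pass, and the same map could feed both the displacement and the subcutaneous shader):

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Hypothetical CPU-side version of "paint a wound into the displacement map".
    struct DispMap {
        int width = 512, height = 512;
        std::vector<float> texels = std::vector<float>(512 * 512, 0.0f);
    };

    // Carve a radial dent centered on the UV returned by the collision query
    // (sword strike, bullet hit, ...). radiusUV is in texture space.
    void paintWound(DispMap& map, float hitU, float hitV, float radiusUV, float depth)
    {
        int cx = int(hitU * map.width);
        int cy = int(hitV * map.height);
        int r  = std::max(1, int(radiusUV * map.width));

        for (int y = cy - r; y <= cy + r; ++y) {
            for (int x = cx - r; x <= cx + r; ++x) {
                if (x < 0 || y < 0 || x >= map.width || y >= map.height) continue;
                float dx = float(x - cx), dy = float(y - cy);
                float d = std::sqrt(dx * dx + dy * dy) / float(r);   // 0 at center, 1 at rim
                if (d > 1.0f) continue;
                float dent = -depth * (1.0f - d) * (1.0f - d);       // smooth falloff
                float& t = map.texels[size_t(y) * map.width + x];
                t = std::min(t, dent);                               // keep the deepest cut
            }
        }
    }
    ```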
  • CrazyButcher
    http://www.beyond3d.com/private/staff/rys/beyond_programmable_shading_siggraph08.tar.bz2

    http://s08.idav.ucdavis.edu/ (individual slides)

    This contains a lot of "next-gen" technical info, including slides by Jon Olick (id Software) with details on a possible megageometry architecture for the next hardware generation.

    http://s08.idav.ucdavis.edu/olick-current-and-next-generation-parallelism-in-games.pdf

    Basically they change "triangle rasterization" to "voxel rasterization", and both can still work at the same time (very loose sketch of that at the end of this post).

    They predict their techniques could see mainstream use with the next hardware generation, assuming it's 4x as fast as the current one. At the end is a screenshot from a live demo showing it running on current hardware (a GeForce 8800 at 15 fps).

    Anyway, of course it would all be easier if the hardware vendors (NVIDIA/ATI) took up their idea. That may actually not be so unlikely, as it sounds feasible for them to integrate such a "ready to use" idea to counter Intel's Larrabee even more.

    I copy-pasted the image out of the slides and added the text info:
    [image: megageometry demo screenshot]

    However, not having seen the presentation, I of course can't say whether that's just a screenshot of the model (looks a lot like ZBrush) or the actual live demo.
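    To make the voxel point a bit more concrete, here's a very loose sketch, explicitly not the architecture from the slides (which is built around a sparse voxel octree): it just marches a dense occupancy grid along a view ray and shows how a voxel hit and a rasterized triangle depth can share one depth test.

    ```cpp
    #include <cstdint>
    #include <vector>

    // Illustrative only: a dense occupancy grid standing in for the real sparse
    // voxel structure, marched in fixed steps along a view ray.
    struct VoxelGrid {
        int dim = 64;
        std::vector<uint8_t> occupied = std::vector<uint8_t>(64 * 64 * 64, 0);  // 1 = solid

        bool solid(int x, int y, int z) const {
            if (x < 0 || y < 0 || z < 0 || x >= dim || y >= dim || z >= dim) return false;
            return occupied[(size_t(z) * dim + y) * dim + x] != 0;
        }
    };

    // Return the ray parameter of the first solid voxel, or maxT on a miss.
    float raymarchVoxels(const VoxelGrid& g, const float origin[3], const float dir[3],
                         float maxT, float step = 0.5f)
    {
        for (float t = 0.0f; t < maxT; t += step) {
            int x = int(origin[0] + dir[0] * t);
            int y = int(origin[1] + dir[1] * t);
            int z = int(origin[2] + dir[2] * t);
            if (g.solid(x, y, z)) return t;
        }
        return maxT;
    }

    // Per pixel, whichever hit is nearer wins the depth test, which is how voxel
    // geometry and ordinary rasterized triangles can coexist in one frame.
    bool voxelWinsPixel(float voxelHitDepth, float triangleDepthFromZBuffer)
    {
        return voxelHitDepth < triangleDepthFromZBuffer;
    }
    ```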
  • Eric Chadwick
    Cool stuff CB, thanks.
  • Zephir62
    I worked with Jon throughout the development of his presentation and his live demo, and that is a screenshot of the actual tech running. We contacted Dmitry Parkin, who won first place in the Dominance War 3 contest, and he was more than happy to let us use his model in the presentation and demo.

    Back at the office we tried various models on the tech, but settled on Mr. Parkin's as it was the most game-related and appealing option we could come up with under such tight time constraints. We even had a 16-billion-polygon fractal running in realtime at over 60 FPS, which was also shown in the slides for anybody who attended.

    Yes, the voxel technology does run at 15 FPS on an 8800, but when we tested it on newer cards such as the GeForce GTX 280, it ran at 60+ frames per second.

    We worked closely with NVIDIA, and with their help we've developed an engine capable of producing some amazing graphics with practical applications for next-gen games. While animating the voxels isn't really feasible, static objects and environments are what really benefit: imagine scrapping the low-poly models in your game and using the high-poly ones in realtime :)


    If you guys have any questions, feel free to ask.