stumbled upon these slides
http://developer.download.nvidia.com/presentations/2008/Gamefest/Gamefest2008-DisplacedSubdivisionSurfaceTessellation-Slides.PDF
by ignacio castano (who is a researcher at nvidia, and a very helpful one hehe).
anyway, it's about programmable tessellation and how it will become possible in the next generation of GPUs/graphics APIs. It covers the technical hurdles that still have to be cleared. Probably too technical for most, but it looks like in a few years, when the new consoles come out, game models will look just like their hi-res source art. Which means less "downgrading" in the baking/export process... Still years away from mass use, but I'm pretty sure the tech will have matured by the time the successors to the current consoles come out.
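To give a rough idea of what the tessellation-plus-displacement pipeline in the slides does, here is a toy sketch: subdivide a coarse edge, then push each generated vertex out along its interpolated normal by a value sampled from a displacement map. Everything here (names, values, the `displace` callback standing in for a texture lookup) is made up for illustration, not taken from the slides.

```python
def lerp3(a, b, t):
    """Linearly interpolate two 3-component tuples."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def tessellate_edge(v0, v1, n0, n1, segments, displace):
    """Split the edge v0-v1 into `segments` pieces and displace each
    generated vertex along its interpolated normal.
    `displace(t)` stands in for a displacement-map lookup."""
    out = []
    for i in range(segments + 1):
        t = i / segments
        pos = lerp3(v0, v1, t)
        nrm = lerp3(n0, n1, t)  # (skipping renormalization for brevity)
        d = displace(t)
        out.append(tuple(p + n * d for p, n in zip(pos, nrm)))
    return out

# Flat edge along x, normals pointing up, a bump in the middle of the "map".
verts = tessellate_edge(
    v0=(0.0, 0.0, 0.0), v1=(4.0, 0.0, 0.0),
    n0=(0.0, 1.0, 0.0), n1=(0.0, 1.0, 0.0),
    segments=4,
    displace=lambda t: 1.0 if t == 0.5 else 0.0,
)
print(verts[2])  # the middle vertex got pushed up to (2.0, 1.0, 0.0)
```

The point is that the coarse mesh stays small; the fine detail is generated on the fly, which is why the bake-down step gets less lossy.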
Replies
With the realtime displacement being API-based, it will work across ATI, NVIDIA, Matrox, etc. on any GPU that supports DX10 or higher. That makes it more appealing for developers to use, since it doesn't alienate any one group of owners. And adding features for just one make of card never goes over well on internet forums.
If the next batch of GPUs supports this, I wonder how long it'll be before it's widely used in PC games.
I would think normal maps will still be used. When using a displacement map, you would have to recalculate normals from the displaced geometry, and why do that when it's faster and probably less error-prone to just sample a baked normal map?
Especially for LOD and all that, you would still want proper shading even when the silhouette is more low-res.
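To make the trade-off concrete, here's a toy sketch of the extra work you'd be signing up for without a baked normal map: rebuilding a normal from the displacement (height) map itself via central differences. The 5x5 heightfield and the texel spacing are hypothetical values for illustration.

```python
import math

# Hypothetical 5x5 displacement (height) map, values in [0, 1].
HEIGHT = [
    [0.0, 0.1, 0.2, 0.1, 0.0],
    [0.1, 0.3, 0.5, 0.3, 0.1],
    [0.2, 0.5, 1.0, 0.5, 0.2],
    [0.1, 0.3, 0.5, 0.3, 0.1],
    [0.0, 0.1, 0.2, 0.1, 0.0],
]
TEXEL = 1.0  # assumed spacing between samples, in object-space units

def normal_from_heightfield(x, y):
    """Rebuild a surface normal from the displacement map via central
    differences -- the per-vertex work you avoid by shipping a baked
    normal map instead of deriving shading from the displacement."""
    h = HEIGHT
    xm, xp = max(x - 1, 0), min(x + 1, len(h[0]) - 1)
    ym, yp = max(y - 1, 0), min(y + 1, len(h) - 1)
    dx = (h[y][xp] - h[y][xm]) / ((xp - xm) * TEXEL)
    dy = (h[yp][x] - h[ym][x]) / ((yp - ym) * TEXEL)
    # Normal of the heightfield z = h(x, y) is (-dh/dx, -dh/dy, 1), normalized.
    n = (0.0 - dx, 0.0 - dy, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# At the peak of the bump the slope is zero, so the normal points straight up.
print(normal_from_heightfield(2, 2))
```

A baked normal map replaces all of that with a single texture fetch, and it can also encode detail finer than the displaced geometry itself, which is why both would likely coexist.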
I think this is only to get better LOD behavior, i.e. nicer silhouettes up close, for important models.
http://s08.idav.ucdavis.edu/ (individual slides)
this contains a lot of "next-gen" technical info, among which are slides by Jon Olick (id Software) with details on a possible megageometry architecture for the next hardware generation.
http://s08.idav.ucdavis.edu/olick-current-and-next-generation-parallelism-in-games.pdf
basically they replace "triangle rasterization" with "voxel rasterization", and both can still run at the same time.
They predict their techniques could see mainstream use in the next hardware generation, assuming it's 4x as fast as the current one. At the end is a screenshot from a live demo showing it running on current hardware (a GF8800 at 15 fps).
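For anyone who hasn't dug into the slides: the core data structure behind this kind of voxel rendering is a sparse voxel octree. Here's a hypothetical toy version in plain Python (not the actual id Software implementation) just to show the idea: child nodes only exist where there is geometry, so empty space costs nothing to store or traverse.

```python
# Minimal sparse voxel octree sketch. A node has up to 8 children,
# one per octant; leaves mark solid voxels. Only occupied regions
# allocate nodes -- that sparsity is what makes huge scenes feasible.

class OctreeNode:
    __slots__ = ("children", "filled")
    def __init__(self):
        self.children = [None] * 8  # one slot per octant
        self.filled = False         # leaf payload: is this voxel solid?

def octant(x, y, z, level):
    """Pick the child index from one bit of each coordinate."""
    return ((x >> level) & 1) << 2 | ((y >> level) & 1) << 1 | ((z >> level) & 1)

def insert(root, x, y, z, depth):
    """Mark the voxel at integer coords (x, y, z) as solid.
    Grid resolution is 2**depth per axis."""
    node = root
    for level in range(depth - 1, -1, -1):
        i = octant(x, y, z, level)
        if node.children[i] is None:
            node.children[i] = OctreeNode()
        node = node.children[i]
    node.filled = True

def query(root, x, y, z, depth):
    """Return True if the voxel at (x, y, z) is solid."""
    node = root
    for level in range(depth - 1, -1, -1):
        node = node.children[octant(x, y, z, level)]
        if node is None:
            return False  # hit empty space early -- the sparsity win
    return node.filled

root = OctreeNode()
insert(root, 5, 3, 7, depth=3)        # one solid voxel in an 8x8x8 grid
print(query(root, 5, 3, 7, depth=3))  # -> True
print(query(root, 0, 0, 0, depth=3))  # -> False
```

A renderer would ray-cast through this structure instead of rasterizing triangles; animation is hard precisely because the tree is baked from static geometry.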
Anyway, of course it would all be easier if the hardware vendors (NVIDIA/ATI) picked up their idea. That may actually not be so unlikely, since integrating such a "ready to use" concept sounds feasible for them, especially as a counter to Intel's Larrabee.
I copy-pasted the image out of the slides and added the text info.
However, not having seen the presentation, I of course can't say whether that's just a screenshot of the model (it looks a lot like ZBrush) or of the actual live demo.
I worked with Jon throughout the development of his presentation and his live demo, and that is a screenshot from the actual tech running. We contacted Dmitry Parkin who won first place on the Dominance War 3 contest and he was more than happy to let us use his model in the presentation and demo.
Back at the office we tried various models on the tech but settled on Mr. Parkin's, as it was the most game-related and appealing option we could come up with under such tight time constraints. We even had a 16 billion polygon fractal running in realtime at over 60 FPS, which was also shown in the slides for anybody who attended.
Yes, the voxel technology does run at 15 FPS on an 8800, but when we tested it on newer cards such as the GeForce GTX 280 it ran at 60+ frames per second.
We worked closely with nVidia, and with their help we have developed an engine capable of producing some very amazing graphics with practical applications in the gaming field for the next-gen. While animating the voxels isn't really feasible, static objects and environments are what really benefit - imagine scrapping the low-poly models in your game and using the high-poly assets in realtime.
If you guys have any questions, feel free to ask.