Just read this PDF: Real-Time Creased Approximate Subdivision Surfaces. It seems to be one awesome technique to achieve super-smooth models. Not sure if any engine can handle this stuff yet, but the approach they used to implement it in Source (not public yet though -.-) sounds rather good and performant. Microsoft is going to build this into DirectX 11; they won't like the fact that this shows you can do it in DirectX 9 with no problems.
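(For anyone wondering what "approximate subdivision" boils down to in practice: the general idea, as I understand it, is to turn each face of the coarse control mesh into a parametric patch and evaluate it at whatever density you want. Below is a minimal CPU-side sketch of evaluating a bicubic Bezier patch from a 4x4 control grid - this is not the exact construction from the paper, just the flavor of it; on the GPU you would run the same evaluation per generated vertex.)

```cpp
// Minimal sketch: evaluating one bicubic Bezier patch from a 4x4 control grid.
// NOT the paper's exact construction, just the general flavor of "turn a coarse
// mesh into parametric patches and evaluate them at any density you like".
#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 operator*(float s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Cubic Bernstein basis at parameter t in [0,1].
static void bernstein3(float t, float b[4]) {
    float u = 1.0f - t;
    b[0] = u * u * u;
    b[1] = 3.0f * u * u * t;
    b[2] = 3.0f * u * t * t;
    b[3] = t * t * t;
}

// Evaluate the patch at (u,v); cp is the 4x4 grid of control points.
static Vec3 evalPatch(const std::array<std::array<Vec3, 4>, 4>& cp, float u, float v) {
    float bu[4], bv[4];
    bernstein3(u, bu);
    bernstein3(v, bv);
    Vec3 p{0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p = p + (bu[i] * bv[j]) * cp[i][j];
    return p;
}

int main() {
    // Flat 4x4 grid in the xz-plane, just to show the call.
    std::array<std::array<Vec3, 4>, 4> cp{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            cp[i][j] = {float(i), 0.0f, float(j)};
    Vec3 p = evalPatch(cp, 0.5f, 0.5f);
    std::printf("patch center: %f %f %f\n", p.x, p.y, p.z);
    return 0;
}
```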
Replies
I am clueless
also, is it worth the effort of having to model using a completely different set of techniques for something that will only look correct in one particular method of rendering?
interesting stuff, but i don't really see this catching on for games. reminds me of that "truform" subdivision stuff ATI did a while back, basically subdivision in realtime, but it meant you had to model stuff specifically with that technique in mind, and i think partly because of that, it never caught on.
yeah ati used a tessellation module years ago but all it did was ruin lowpoly models - i remember playing Half-Life with it and the guns started looking like Mickey Mouse.
http://developer.download.nvidia.com/presentations/2008/GPUBBQ/GPUBBQ-Subdiv.pdf
a) Increase the number of polygons until the edge count approaches infinity (like the polygonal approximation of a circle)
b) Develop algorithms that build mathematical functions (Beziers) from given points (a function defined by given coordinates - 10th-grade math at school; see the little curve sketch below).
While point a) has been the approach up to today, it is already hitting its limit. You would need to move away from rasterization renderers to raytracing renderers, because rasterization slows down roughly in proportion to the number of polygons used, while the polygon count of a raytraced scene matters far less (it does matter, but by a vastly smaller factor than in rasterization). It's highly unlikely nvidia or ati are going to invest billions into new raytracing technology (they are actually researching it, but so far realtime raytracing systems tend to be too slow, see OpenRT, QuakeRT and similar), so they are going to look for other technologies.
So if a) is not an option for much longer, it becomes clear that b) will be the solution of choice.
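(To make b) concrete with the 10th-grade example: three stored coordinates are enough to define a whole curve analytically. The sketch below fits the unique parabola through three points and evaluates it anywhere in between - real engines would fit Bezier/B-spline patches in 3D instead, but the principle is the same: store a few points, compute a smooth function from them.)

```cpp
// Point b) in miniature: a function defined by given coordinates.
// The unique parabola through three sample points, in Lagrange form.
#include <cstdio>

struct Point { double x, y; };

// Evaluate the quadratic passing through p0, p1, p2 at position x.
double quadraticThrough(Point p0, Point p1, Point p2, double x) {
    double l0 = (x - p1.x) * (x - p2.x) / ((p0.x - p1.x) * (p0.x - p2.x));
    double l1 = (x - p0.x) * (x - p2.x) / ((p1.x - p0.x) * (p1.x - p2.x));
    double l2 = (x - p0.x) * (x - p1.x) / ((p2.x - p0.x) * (p2.x - p1.x));
    return p0.y * l0 + p1.y * l1 + p2.y * l2;
}

int main() {
    Point a{0.0, 0.0}, b{1.0, 1.0}, c{2.0, 0.0};  // three "given coordinates"
    for (double x = 0.0; x <= 2.0; x += 0.25)
        std::printf("f(%.2f) = %.3f\n", x, quadraticThrough(a, b, c, x));
    return 0;
}
```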
Normally a developer should know what his characters are made for and create the asset accordingly; if you never see the face up close, why put thousands of tris there?
It's just automated performance waste; sure, if it doesn't matter why not do it, but it only works for clean surfaces - what do you want to do with it on a realistic character?
it can't get more detailed than the initial model, just rounder, and texture space is already more of a bottleneck than polygons; displacing this surface would still need textures to displace from (sketch below), and those won't stay as detailed unless you go with fractals and such things to randomize the details on a macro level.
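(Just to spell out the "textures to displace from" point - a tiny hypothetical sketch; the HeightMap type and the nearest-neighbour lookup are made up for illustration. Each tessellated vertex gets pushed along its normal by a sampled height, so the detail you get back is capped by the resolution of that height texture.)

```cpp
// Hypothetical sketch of displacement after tessellation: each refined vertex is
// pushed along its normal by a height sampled from a texture, so the recoverable
// detail is limited by that texture's resolution.
#include <vector>
#include <cstddef>

struct Vec3 { float x, y, z; };
struct Vertex { Vec3 position; Vec3 normal; float u, v; };

// Stand-in for a height (displacement) texture; nearest-neighbour lookup,
// u and v assumed to be in [0,1].
struct HeightMap {
    int width = 0, height = 0;
    std::vector<float> texels;           // one height value per texel
    float sample(float u, float v) const {
        int x = static_cast<int>(u * (width - 1) + 0.5f);
        int y = static_cast<int>(v * (height - 1) + 0.5f);
        return texels[static_cast<std::size_t>(y) * width + x];
    }
};

// Displace every (already tessellated) vertex along its normal.
void displace(std::vector<Vertex>& mesh, const HeightMap& map, float scale) {
    for (Vertex& vtx : mesh) {
        float h = map.sample(vtx.u, vtx.v) * scale;
        vtx.position.x += vtx.normal.x * h;
        vtx.position.y += vtx.normal.y * h;
        vtx.position.z += vtx.normal.z * h;
    }
}
```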
Something like ZBrush is far more suited to this kind of work, and as a result it's going to be far more likely to get a good look by just throwing lots of polygons at it - in mudbox or zbrush you don't even have to think about polys, you just sculpt... whereas this crease stuff is really not artist-friendly, it would take a LOT of management and technical awareness to make it work and look good. Again another reason why it will not catch on in general.
The only thing I can see this technique really being used for is stuff like racing games or simulators where stuff like the car models need to be really smooth and detailed, and can actually get away with having the "ultra sharp" lines that creasing produces. I can maybe see it catching on for stuff like that.
It's also possible that stylised stuff like TF2 might make use of it, but again that's a very limited scenario and only works for certain things.
Again I really don't see it catching on for the whole industry just because it's something that only works in really specific scenarios and visual styles.
the back is supposed to be round but the silhouette clearly gives away the lack of polys.
What kind of technique (ie standard rasterizer with tessellation, real-time REYES rendering or other funky stuff, voxel sampling...) will become standard (if any) is "unsure" imo.
My personal feeling is that the top engine makers will try to cook their own soup. And hints by id & epic make me think they will.
The platform that will decide is the "next" generation of consoles due in 2012 or whenever. And when you think about cost cutting and the current trend, I would assume it will be something like the PS3 was originally meant to be (ie just Cell, no graphics card). The reason is that intel is trying to push its many-core pseudo graphics card (larrabee), and nvidia and ati will generalize their graphics cards even more. The Geforce GTX 3xx coming at the end of the year is rumored to be quite another big jump in architecture (from SIMD to MIMD) and would allow more customized rendering.
with dx9 still dominating I also fail to see dx11 making a big difference here; sure, it brings the stuff that was supposed to happen with dx10 (the geometry shader can do tessellation, just not performantly enough, so now we get extra shader stages to make it useful... rough sketch of the idea below)
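(For the non-graphics folks: conceptually the tessellator just splits each patch into lots of small triangles, and in dx11 the new hull/domain shader stages decide how finely and where the generated vertices actually land. The snippet below is not dx11 API code, just a CPU-side illustration of the splitting step, assuming uniform tessellation of a single triangle.)

```cpp
// Rough idea of tessellation, done on the CPU: split one triangle into factor^2
// smaller triangles using barycentric coordinates. The hardware tessellator does
// (roughly) this per patch; hull/domain shaders pick the factor and final positions.
#include <vector>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 bary(const Vec3& a, const Vec3& b, const Vec3& c, float u, float v) {
    float w = 1.0f - u - v;
    return { w * a.x + u * b.x + v * c.x,
             w * a.y + u * b.y + v * c.y,
             w * a.z + u * b.z + v * c.z };
}

// Returns the tessellated triangle list (three vertices per output triangle).
std::vector<Vec3> tessellate(const Vec3& a, const Vec3& b, const Vec3& c, int factor) {
    std::vector<Vec3> out;
    float step = 1.0f / factor;
    for (int i = 0; i < factor; ++i) {
        for (int j = 0; j < factor - i; ++j) {
            float u = i * step, v = j * step;
            // "upright" sub-triangle
            out.push_back(bary(a, b, c, u, v));
            out.push_back(bary(a, b, c, u + step, v));
            out.push_back(bary(a, b, c, u, v + step));
            // "inverted" sub-triangle (absent on the outer diagonal row)
            if (j < factor - i - 1) {
                out.push_back(bary(a, b, c, u + step, v));
                out.push_back(bary(a, b, c, u + step, v + step));
                out.push_back(bary(a, b, c, u, v + step));
            }
        }
    }
    return out;
}

int main() {
    std::vector<Vec3> tris = tessellate({0, 0, 0}, {1, 0, 0}, {0, 1, 0}, 4);
    std::printf("%zu triangles\n", tris.size() / 3);  // 16 for factor 4
    return 0;
}
```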
in the long run I think we will get back to the "software" rendering days, just that this time the hardware can do much more. This is much better in terms of performance per watt and cost cutting (ie have one central cpu that can do "more" and then many specialized chips that handle any kind of number-crunching task)