I was just browsing through for an answer to this question, but didn't see much. I understand that this is a gaming forum, but it's still a relevant question in some senses.
Now, as I understand it, pretty much all film models rely heavily on displacement maps for details. However, since everything is pre-rendered, are you limited only by the polycount the machine can handle (entire scene included)? I recently started trying my hand at a film-level model, and have found myself at an awkward in-between phase, as it essentially looks like a high-detail game model. Maybe a boss for next gen, something along those lines. I figure in film I would just keep adding polygons until I got the silhouette I desired, but I have no real benchmark to shoot for (and it's very possible none exists).
If anyone has any input on the matter, it would be much appreciated. spanku
Replies
Don't just think of the model itself, but think of how it's used overall and you can build along those lines.
Film 3D is always a matter of pushing technology. When you're looking at periods of sometimes 24 hours for a single FRAME of render on huge projects like Star Trek (the new movie), not including the time to composite it all together, polycounts can get pretty disturbing.
The guys at ILM were saying the Romulan ship in the new movie was built in 3D at 1:1 scale... 7 kilometers long or something ridiculous, and took 24 hours to render one frame of just the ship. There's a lot of detail that isn't seen at all, but when you get close to it, the texture work and sense of scale still hold up, in proper proportion to the objects and world around it.
You might be surprised by the density of some models used for animation.
http://www.monstersculptor.com/portfolio_Mist.htm
Get an idea for yourself:
http://features.cgsociety.org/story_custom.php?story_id=3452&page=1
I'd imagine this is before subdivision at render-time?
i.e. optimize when possible, but the sky is the limit so long as the machine can handle it and time permits.
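To put rough numbers on render-time subdivision: a quad-dominant mesh under Catmull-Clark roughly quadruples its face count per level, so "the sky is the limit" arrives fast. A minimal sketch (the function name and base count here are illustrative, not from any package):

```python
# Hypothetical sketch: how quickly render-time subdivision inflates polycounts.
# Assumes a quad-dominant mesh under Catmull-Clark, where each subdivision
# level roughly quadruples the face count.

def subdivided_faces(base_faces: int, levels: int) -> int:
    """Approximate face count after `levels` of Catmull-Clark subdivision."""
    return base_faces * 4 ** levels

# A modest 50k-quad base cage explodes quickly:
for level in range(4):
    print(level, subdivided_faces(50_000, level))
# level 3 already puts you at 3.2 million faces
```

This is why a fairly light cage in the viewport can still render as millions of polygons.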
http://www.cgarena.com/freestuff/tutorials/xsi/monster/index.html
Daaark, "the machine" can mean anything. For me, it means my laptop.
In answer to your question: as much as you need to make it look nice. There are many varying factors when considering resolution, such as distance from camera, type of camera shot (is it zooming into their face?), and final output resolution (there's no point rendering 5 million polys in a 100px shot, etc.).
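One way to reason about "no point rendering 5 million polys in a 100px shot" is to estimate how many pixels an object actually covers on screen. A hedged sketch, assuming a simple pinhole-camera model; all names and numbers are illustrative:

```python
import math

# Hypothetical heuristic: estimate an object's on-screen size in pixels from
# its world-space radius, distance to camera, vertical FOV, and output
# resolution. Not from any particular renderer.

def projected_pixels(radius: float, distance: float,
                     fov_deg: float, image_height_px: int) -> float:
    """Approximate projected diameter (in pixels) of a sphere of `radius`
    seen at `distance`."""
    angular = 2.0 * math.atan(radius / distance)   # angular size of the object
    return angular / math.radians(fov_deg) * image_height_px

# A 2 m wide object 100 m from the camera at 1080p:
size = projected_pixels(radius=1.0, distance=100.0,
                        fov_deg=45.0, image_height_px=1080)
print(round(size, 1))  # roughly 27 px tall: heavy detail is wasted here
```

If the projected size is tiny, displacement detail and dense geometry simply can't show up in the final frame.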
King Kong had around 10 million polys or something just in his face; that's why they initially invented Mudbox, as there was no software that could handle it when they started working on King Kong.
Definitely not in the 3D mesh, but in the final displacement. And then it doesn't matter whether you have the 10 million in your app and push render, or whether they are created at render time: 10 million are 10 million.
Polygons are really not the biggest issue when rendering. Turn on GI and it will kill your render times, or SSS. Of course polycount has an impact, but the impact is not as high as using fancy shaders.
Pretty much the same as with realtime engines: look at Uncharted 2 and the crazy polycounts they move around. Polys are not really the limiting factor anymore, but of course that doesn't mean you should waste them.
But this is not only about characters. Clever systems like instancing make it possible to have millions of polys in one scene. Or what about Mudbox/ZBrush? That's rendering too, and they render a hell of a lot of polygons at once; but start adding effects like shadows, AO and such, and the framerate drops quite fast.
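The reason instancing scales so well is that each copy stores only a transform plus a reference to shared geometry, so the scene's rendered polycount can dwarf what's actually in memory. A toy sketch of the idea (class names are made up, not from any package):

```python
# Hypothetical sketch of why instancing is cheap: one mesh in memory,
# many lightweight instances referencing it.

class Mesh:
    def __init__(self, name: str, polycount: int):
        self.name = name
        self.polycount = polycount

class Instance:
    """Stores only a position plus a reference to shared geometry."""
    def __init__(self, mesh: Mesh, position):
        self.mesh = mesh        # shared, not copied
        self.position = position

tree = Mesh("tree", polycount=200_000)
forest = [Instance(tree, (x, 0.0, z)) for x in range(100) for z in range(100)]

# 10,000 trees appear in the scene...
rendered = len(forest) * tree.polycount
print(rendered)  # 2_000_000_000 rendered polys, one 200k mesh in memory
```

That's how a forest or a fleet of ships can hit billions of rendered polygons without billions of unique ones.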
Polygons play their role, for sure, but these days it's not like "cut down those 50 polies, it will save us XX fps".
As for manually removing things that won't be on camera: frustum and backface culling are the first steps of any rendering system, and now occlusion culling (if something is in the field of view but completely occluded by a closer object, cull it) as well.
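A common way the frustum step is implemented is testing each object's bounding sphere against the frustum's planes. A rough sketch, assuming inward-facing plane normals (function and variable names are illustrative):

```python
# Hypothetical sketch of frustum culling: a bounding sphere is culled if it
# lies entirely behind any frustum plane. Each plane is (nx, ny, nz, d) with
# the normal pointing into the frustum.

def sphere_in_frustum(center, radius, planes) -> bool:
    """Return True if the sphere is inside or intersecting the frustum."""
    cx, cy, cz = center
    for nx, ny, nz, d in planes:
        # Signed distance from sphere center to the plane.
        dist = nx * cx + ny * cy + nz * cz + d
        if dist < -radius:      # completely behind this plane: cull
            return False
    return True

# With a single "left" plane keeping x >= 0, a sphere at x = -5 is culled:
print(sphere_in_frustum((-5.0, 0.0, 0.0), 1.0, [(1.0, 0.0, 0.0, 0.0)]))  # False
print(sphere_in_frustum((5.0, 0.0, 0.0), 1.0, [(1.0, 0.0, 0.0, 0.0)]))   # True
```

Objects that fail this cheap test never cost a single shaded polygon.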
Dunno, never really worked with a backface culling engine. But it might be so good that I didn't notice, so dunno.
Backface determination happens automatically; it's a by-product of winding order. The points that make up a polygon are specified either clockwise or counter-clockwise, and when a polygon faces away, its winding order is inverted, so it's automatically not facing the camera.
If a model of a sphere has 5000 polygons, 2500 will never make it past that stage, unless you disable it. Some objects will be marked as 'double sided', which disables that test.
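That winding-order test boils down to the sign of a 2D cross product after projection to screen space. A small sketch, assuming counter-clockwise means front-facing (the convention varies per API, e.g. it's configurable in OpenGL via glFrontFace):

```python
# Hypothetical sketch of backface determination: a screen-space triangle's
# signed area flips sign when its vertices appear in the opposite winding,
# so backfaces fall out of a single cross-product test per triangle.

def signed_area(a, b, c) -> float:
    """Twice the signed area of triangle abc in 2D screen space."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def is_front_facing(a, b, c) -> bool:
    # Convention assumed here: counter-clockwise winding = front-facing.
    return signed_area(a, b, c) > 0

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))      # counter-clockwise
print(is_front_facing(*tri))                     # True
print(is_front_facing(tri[0], tri[2], tri[1]))   # False: reversed winding
```

On a closed mesh like a sphere, roughly half the triangles fail this test in any given view, which is where that "2500 never make it past" figure comes from.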
The rendering pipeline has many stages.
-Setting up the view transforms and world matrices
-Testing all objects against the frustum (don't render what you can't see)
-Creating a list of models to be rendered
-...
-...
At render time, all that data has to be processed. Polygons with the wrong winding order are simply never sent down the pipeline. It's free, and it saves a ton of time, especially on surfaces with complex shaders.
There are other steps that can be taken, including rendering out low quality versions of the scene with no textures, and using that information to decide if objects are fully occluded (drop them altogether), or if they are too far for expensive shading operations to even produce a perceivable result (bind a cheaper version of the shader).
Winding order is why importing some models in some packages often produces a result where all the polygons are facing the opposite direction.
AND NOW YOU KNOW THE REST OF THE STORY...
If Max wants to add an expensive shader in there, or do some other processing from within the software itself that takes a hit on the framerate, that's their business. But the culling itself happens at the GPU level, when it's processing the array of vertices to be drawn, and is part of the fixed cost of every triangle that is rendered.