I know it's hard to pin down a single best-practice method for creating models for baking, but I was wondering how you like to have your wireframes before taking them into Zbrush, Mudbox or Mari, and then exporting them to Xnormal, Maya (Turtle), floating-geometry baking or whatever it is you use.
For example, in film we create even quad edge loops to help with the pipeline and displacement, although that is changing with the introduction of Arnold as a renderer, and in some companies we now just render 24-million-poly objects.
Here are some examples from Harry Potter, Narnia, Avatar and some more. I will load up more later.
Replies
So here are some more images to help compare; it's Hermione from Harry Potter.
A cow with nice even topology
From what I hear from one AAA company in London (I won't give their name), sometimes they just whack in the high-resolution model rather than using tessellation, and then in some areas they do use it.
It all seems a little hit and miss at the moment, and it would be great to start forming some best practice.
But your eyes are rarely drawn to the feet of a character. And in that screenshot from the movie, the upper body is in your face, including the hands.
Both will require clean geometry and good, even topology, and both will also use things like LODs. Games, however, even on high-end engines, will require a tighter budget to adhere to if assets are to fit into memory and not drag the framerate down. Games also, being real-time playback and interaction, present other challenges in that the player can interact with and see assets more closely.
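To make the LOD point a bit more concrete, here's a minimal sketch of the kind of distance-based switch a runtime does; it isn't any particular engine's API, and the struct, thresholds and counts are invented for illustration:

// Minimal sketch of distance-based LOD switching (generic, invented values).
#include <cstddef>
#include <vector>

struct LodLevel {
    float maxDistance;     // use this mesh while the camera is closer than this
    std::size_t triCount;  // coarser meshes are used further away
};

// Picks which LOD to draw for a given camera distance.
// Assumes lods is non-empty and sorted from most to least detailed.
std::size_t pickLod(const std::vector<LodLevel>& lods, float distanceToCamera) {
    for (std::size_t i = 0; i < lods.size(); ++i) {
        if (distanceToCamera < lods[i].maxDistance) {
            return i;
        }
    }
    return lods.size() - 1;  // beyond the last threshold, fall back to the coarsest LOD
}

The practical consequence is that the same asset effectively sits in memory several times over, which is part of why the in-game budget stays tight even on high-end hardware.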
With film, it's all about what's visible in the shot and frame, so there's a lot more freedom. But that doesn't mean you can go crazy with polycounts etc. Film makes a lot more use of shaders for all types of things, and can get smooth-looking results without having to push an object's poly count. For example, if you look at the Anubis statues, some parts look lower in res than others, but looking at the render, it all looks smooth. It's probable that they're using a geometry-approximation type shader in there to smooth things off. When it comes to render time, using the shader can be quicker and more economical than just adding more geometry. And this is where a renderer like PRman (which is what most films use) comes into its own, as it has some exceptional shaders, as well as handling subdivs and geometry approximation very well.
I don't know where the 24 million comes from, because (from what I see) that seems high even for film. Is that one 24-million-poly object, or 24 one-million-poly objects?
In Transformers, not even all the robots added together came to 24 million, Optimus Prime was approx 1.8 million polys.
On modern games, for characters, the topology is getting pretty close to what you see on that Hermione model already, so for the coming generation I think it'll be the same; in movies there's just usually some extra sub-d going on at render time, I imagine.
Still not quite sure how real-time tessellation will change all that, as it might actually simplify the exported models, but I think it'll be a while before companies actually start using it properly as it seems to require quite a bit of re-thinking of the production pipeline etc.
< for instance >
let's assume you have a foot with higher density than a knee, as in that pic. well, you also have a rig that can articulate each toe, probably with a variety of IK solvers and controllers for the animators, and those toes get animated touching surfaces. the envelope weighting on the whole foot is probably just as good as it is on the face, and you have to have the density in the cage to enable that animation. in addition there is probably a secondary pass to get a compression effect when the flesh touches the surface. that could be sculpted by hand for the shot or built into the deformation chain. in either case you have to have the geometry to enable that type of deformation. in contrast, the knee plus displacement has relatively little extreme deformation and would not need as much topology. but that model is lower res than anything you would see at this point. i would say subd the body 1 time and you would have an acceptable res for a background character.
~ there is no 'poly budget'. you model whatever resolution it takes to get the deformation that is needed, and the creature TDs can wrangle it through the pipeline and get it rendered.
~ density distribution is in relation to deformation not 'poly budget'. your topology decisions are dictated by the animation required.
One more thing: before you export your model to Zbrush, Mudbox or Mari, do you make it an even quad topology like in the film industry?
(Figure 1) This is how I see lots of artists in the games industry making their models before they export to Zbrush or Mudbox. Is this okay, or are they being lazy?
(Figure 2) We have to make each poly no larger than 60% of the last one, with no triangles, or it gets kicked out of the pipeline.
(Figure 3) It looks like it gives the same results, but it does affect the UVs for film, and that's why it has to have lots of loops and even quads like the cow above.
Figure 1 looks more like the final low-poly result, not a base mesh for sculpting.
Fig 1 needs to look a lot more like fig 2 if you're going to sculpt on it; as is, it will be nothing but trouble.
A base mesh for sculpting plays pretty much by the same rules film meshes do.
The final low poly (fig 1) is where it branches off and optimization becomes more important. BUT as others pointed out, that's changing: with tessellation becoming a bit more prominent and faster CPUs/GPUs pushing more and more polys (new consoles right around the corner), fig 1 might not be the preferred method for too much longer. I'm sure it will still be used for a lot of things that don't tessellate, but it might be the beginning of it being phased out?
I also agree about the Anubis statues; those knees and shoulders... oh, the weighting... it looks really bad. I guess they came out OK in the end, but it looks cringeworthy in that wire shot. I want to reach for the blend-weight brush and fix it. Maybe there was a reason for it?
You're also comparing a quaded model from Avatar with subdivisions cranked up to a triangulated version of Drake from Uncharted? I would be shocked if people were working on a triangulated version of Drake. That is a side effect of importing it into the engine; engines triangulate everything, artists don't.
Great answer, thank you.
Would you say that for the PS4 and Xbox 720, even quads are the way to go for tessellation purposes, so you get the most even UVs?
[ame="http://www.youtube.com/watch?v=-uavLefzDuQ"]DirectX 11 and Tessellation - Enabled vs. Disabled - YouTube[/ame]
[ame="http://www.youtube.com/watch?v=XsKFLcBWjkU"]T-rex UDK DX11 ????? - YouTube[/ame]
[ame="http://www.youtube.com/watch?v=KmmIAHMtGpU"]Gregory Patches - Direct3d11 Tessellation Test - YouTube[/ame]
even topology is what you want for sculpting, otherwise when you subdivide and sculpt, detail will be of different quality over the surface.
your cage is only an issue after resurfacing, and it does not need to be totally even.
any algorithm that gets widespread adoption would have to work with almost any type of topology layout, otherwise it would be too much of a constraint on model production, and that would be costly in terms of time and money.
that is the base animation cage with no smoothing.
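To put a rough number on the "even topology for sculpting" point above: Catmull-Clark subdivision roughly quadruples the quad count at every level, and every base quad gets the same number of sub-quads, so a base quad that is four times bigger ends up with a quarter of the sculptable detail per unit area. A tiny back-of-the-envelope sketch (all numbers invented):

// Back-of-the-envelope numbers for uneven base quads under subdivision.
#include <cstdio>

int main() {
    long quads = 10000;  // hypothetical base-mesh quad count
    for (int level = 0; level <= 4; ++level) {
        std::printf("level %d: ~%ld quads\n", level, quads);
        quads *= 4;      // Catmull-Clark roughly quadruples quads per level
    }
    // Two base quads of different size get the same number of sub-quads,
    // so the bigger quad ends up with coarser detail per unit area:
    const double smallQuadArea = 1.0, bigQuadArea = 4.0;  // cm^2, made up
    const double subQuadsPerFaceAtL4 = 256.0;             // 4^4 sub-quads per base quad
    std::printf("small quad: %.0f sub-quads/cm^2, big quad: %.0f sub-quads/cm^2\n",
                subQuadsPerFaceAtL4 / smallQuadArea, subQuadsPerFaceAtL4 / bigQuadArea);
    return 0;
}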
In real time I suppose not, but elongated polys do look crap when they are animated and you get squash and stretch.
Once the export process to Zbrush/Mudbox is finished, some pinching/collapsing of polygons goes on to optimise the models, leaving triangulated meshes, as can be seen below in this next-gen model.
So this practice is still going on as we move into the next generation. I wonder whether it will continue, or whether it will stop to help the pipeline process. It seems a little odd to have a model that you can never export to Zbrush/Mudbox again for tweaking; you have to go back to the quad version.
I don't know of any companies still using PRman apart from Pixar, although admittedly I last used PRman back in 2003; Arnold is what everyone has been using for years, thanks to Marcos Fajardo.
On a major film project recently, the team found that just chucking the high-poly object at the renderer (24 million polys with 1.7 TB of textures) was quicker than using displacement maps, thanks to a Mari/Nuke and Arnold pipeline.
This is the render guru, Marcos Fajardo, and his company is Solid Angle: http://www.solidangle.com/
[ame="http://www.youtube.com/watch?v=ldwRpJP6ApA"]SolidAngle's Arnold Renderer at SIGGRAPH 2012 - YouTube[/ame]
The Ironman dance
https://www.youtube.com/watch?v=3eAishlu4WM
of course they have more loops, they also have finer definition; it's based on an antique sculpture, they just don't have as much definition in the knee. just look at the final image: do you recognize the mesh? no? then it's all good. even for games, if you have individual toes (or let's say fingers), they are far more detailed than knees (or let's say elbows)...
You don't just add topology to support the model detail but to support good deformation for animation. In this case, knees, elbows, shoulders etc. are the main focus parts. Even if the model is huge, you won't be noticing the toes as much as you will all the other parts I listed.
Now if they subdivide it, it will obviously smooth over a lot, but your foundation is still not built with the topological focus in areas that deform the most and attract the most attention.
When you model, be it for film, games, advertisement etc., any topological consideration should be focused primarily on how it affects the model for animation (unless, obviously, it's not meant to animate).
I would recommend going over everything on this site, http://www.hippydrome.com/
I don't know where your information is coming from, but it's not entirely true. Just about every one of the big major visual effects companies uses PRman, and has done for years. Every year Pixar holds a PRman user group at Siggraph and/or in London, and it's rammed full of people.
It's true that some people are using Arnold in film as well, but PRman is still the most widely used and the primary renderer.
I know Marcos pretty well. Arnold was started in the late 90s, but from the early 2000s it was co-developed by Sony Pictures Imageworks while Marcos was working there. Since he moved on and created SolidAngle, Arnold has been in beta for the Softimage and Maya versions. The Softimage version is the most developed, and many of the Softimage houses in London's Soho use it. Many have lost confidence in Mental Ray, even though it's still a very competent renderer, so many have also adopted Arnold, even though it's still in beta and you can't actually buy it in the usual way.
I don't know many studios that are rendering 24-million-poly objects, even after subdividing. One studio that I have been working with had a big asset that was 5 million polys, and that was the biggest/highest they have ever done. Even if you took the rendering out of it, any simulation time needed for an object like that would be insane.
For every vertex you have a lot of information floating around inside the engine, and this giant block of information has to be juggled and loaded in and out of memory, in and out of the CPU and GPU. Certain things have to traverse this block quickly in order to keep things moving. If a vert isn't doing anything functional then it needs to be removed; if it serves a purpose then it can stay. If not, it's just bloating the list.
If I had to put it in analogy form, I would say the vert list in games is a bit like giving directions. You can fill that list up describing every rock and pothole, or you can give the person just the street signs and which direction to turn.
Adding hundreds if not thousands of verts to this list can create a real bottleneck, especially if you have a bunch of these bloated characters or props running around. You will need to draw down resources somewhere else to keep things moving along at a decent frame rate. If you bloat every character, you might have to give up some particle effects, or make the terrain more blocky, and all for what? Loops that aren't doing much but making it look nicely quad-ed? Players don't run around in wireframe mode, and if they did, they would see triangles, not quads, heh.
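To give that "block of information" a rough shape, here's a sketch of a fairly typical skinned-vertex layout and what a crowd of dense characters costs in raw vertex data alone; the layout and the counts are invented for illustration, not any engine's actual format:

// Rough sketch of per-vertex cost. Generic layout, invented counts.
#include <cstddef>
#include <cstdint>
#include <cstdio>

struct SkinnedVertex {
    float        position[3];     // 12 bytes
    float        normal[3];       // 12 bytes
    float        tangent[4];      // 16 bytes (w stores handedness)
    float        uv[2];           //  8 bytes
    std::uint8_t boneIndices[4];  //  4 bytes
    std::uint8_t boneWeights[4];  //  4 bytes -> 56 bytes per vertex, before index
};                                //    buffers, morph targets, skinning matrices...

int main() {
    const std::size_t vertsPerCharacter  = 30000;  // hypothetical
    const std::size_t charactersOnScreen = 20;     // hypothetical
    const std::size_t bytes = sizeof(SkinnedVertex) * vertsPerCharacter * charactersOnScreen;
    std::printf("%zu bytes (%.1f MB) of vertex data alone\n",
                bytes, bytes / (1024.0 * 1024.0));
    return 0;
}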
It's not unheard of to go back to the sculpting phase, redo some work and then redo the process to get an updated, finalized model. There are a lot of tips and tricks that minimize the rework, but studios know it does eat up valuable time that could be spent on other things. As a result they are often a bit more rigid when approving a sculpt.
Once the final mesh is skinned, you really don't want the designer getting a wild hair up their ass and telling you to make sweeping changes to the models; they know they had their chance for input early on. Even then there are plenty of workarounds, and people who have suffered through such scenarios at least once know how to deal with them better in the future. They expect them, plan for them, and are pleasantly surprised when they don't happen.
So quick recap
Mark - fantastic recap and great answer, lots to think about. God knows where this thread is going, but it's been interesting.
I have a feeling that by the end of the next generation we will be working on one mesh rather than using workarounds, but I guess we will have to wait and see. For now I will carry on optimising my meshes before exporting them into a games engine.
Having been through console cycles before, I have a saying that often rings true: 'bigger box, same problems'.
http://vimeo.com/55032699#
This is true. Most big companies still use PRman/Renderman; nothing else can handle hair and motion blur quite like it. Point clouds and brick maps are still invaluable for very dense scenes. At the same time, a lot of them are starting to adopt Arnold into their pipelines, but rarely will you find a big studio (Digic would be the exception, although I think Vargatom could explain more about this?) completely replace Renderman in favour of Arnold.
great examples on that site, now look at the topology: more detail in hands than in knees, or shoulders, or whatever. That was my whole point: in their examples they don't have articulated feet, and that's why the detail there is not higher than the rest. A knee or a shoulder that deforms nicely and looks round just needs fewer polygons than a foot with articulated toes and modelled-in toenails.
Here is one example. The wireframe had to be light to avoid pinching and to work well with the skeleton. I also ended up having to add a lot more edge rings along the limbs (than are pictured here) to work with the squash-and-stretch bones. You'll notice the toes and fingers are denser than the knees and elbows, for what I would hope are obvious reasons: mainly, as Neox and Mark have mentioned, that they also have to deform. I cannot remember any model I have ever seen where the knees were as dense as the fingers/toes.
That is not subdivided up. That is the animation mesh, it is that detailed so that they can form every wrinkle in the skin using blendshapes. 20-30k quads per head.
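For anyone wondering why that density pays off, blendshape (morph-target) evaluation is basically weighted per-vertex offsets on top of the neutral mesh, so every extra wrinkle shape is another full table of 20-30k offsets. A generic sketch of that math, not any studio's actual rig code:

// Sketch of morph-target (blendshape) evaluation: weighted per-vertex offsets.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// base:    the neutral head mesh (20-30k verts in the example above)
// shapes:  one offset table per blendshape target, stored as (target - base)
// weights: animator-driven values for the current frame, usually 0..1
std::vector<Vec3> evaluateBlendshapes(const std::vector<Vec3>& base,
                                      const std::vector<std::vector<Vec3>>& shapes,
                                      const std::vector<float>& weights) {
    std::vector<Vec3> out = base;
    for (std::size_t s = 0; s < shapes.size(); ++s) {
        const float w = weights[s];
        if (w == 0.0f) continue;  // skip shapes that aren't active this frame
        for (std::size_t v = 0; v < out.size(); ++v) {
            out[v].x += w * shapes[s][v].x;
            out[v].y += w * shapes[s][v].y;
            out[v].z += w * shapes[s][v].z;
        }
    }
    return out;
}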
We've actually gone from Renderman with a very small crew, through Renderman and Mental Ray for AO/RO passes, to Mental Ray on its own; until we've ended up with Arnold, starting on the Assassin's Creed 2 Brotherhood trailer in early 2010.
The main reason is that it's far more simple and quick to set up, more robust, and produces some nice results at reasonable render speeds. MR wasn't working out for us, and PRMan just took far too long to work with. Also, MTOR and other tools were quite unstable at the time; we had a lot of annoying little issues like shaders not connecting to the objects, rendering some nice LEGO-looking Warhammer characters (no textures or displacements at all, which looked kinda funny).
Arnold is working well so far, although we've grown so big that I'm no longer in touch with that end of the pipeline.
Arnold is used in big CG houses, by the way. Sony did the Smurfs movie, 'Cloudy with a Chance of Meatballs', Alice in Wonderland and a few others; I guess Amazing Spiderman too. Basically, Sony Imageworks has stopped using PRMan completely. There are some other houses working with Arnold too, but large VFX houses take time to change direction, with lots of legacy tools and such.
The reason to switch to a full raytracer is that the artist time spent generating all those temp passes, acceleration structures, shadow maps and other intermediate stuff for PRMan can be more expensive than just buying more CPUs for the render farm. There are a lot fewer knobs and no intermediate anything in Arnold, so it's quicker to get to similar results. Arnold is also well optimized, switching to fixed-point calculations whenever possible and so on.
Not sure if it could handle Avatar or Transformers 3 levels of complexity though.
Do they have an even lower-res model for the rig, and then transfer all that detail to a higher-res model like this one?
What is the best way to understand topology?
There are several tutorials and videos on game art on the net. Check them out and see how they model. It's just a matter of getting the best results within a polygon budget limit.