http://www.ericchadwick.com/examples/provost/byf1.html
"Beautiful, Yet Friendly, Parts 1 & 2"
by Guillaume Provost, Pseudo Interactive.
These great articles originally appeared in Game Developer magazine. If you missed them, they're now available online, courtesy of Mr. Provost (and a little time I spent with some optical character recognition software).
He offers tons of useful info about how game engines treat your art, and how to squeeze the most juice out of your engine's performance.
Many of the senior artists probably know this kind of info already, but I think many more of you will appreciate the insights. I know I did, even though I've been at it for a while. Most of this I learned the hard way, on the job, pestering programmers to explain it in a way my non-coder brain could understand. Or by struggling through technical whitepapers and hardware reviews. I wonder if any school courses are teaching this to game art students these days?
Admittedly, some of the subject matter may be difficult to wrap your head around. It may take multiple readings for things to sink in; it did for me. But I think much of it is essential information if you want your framerate to improve.
I tend to disagree with some of the points, like keeping texel density even across the entire character (although he makes it clear this is an issue for particular hardware). Other issues may not be relevant on next-gen hardware either, but most of it still holds true.
I'd like to hear what you think.
Replies
very well done
while we're at it, the nvidia/ati white papers are often more tech-based, but they are sometimes artist friendly as well.
like this one here
http://mirrors.wamug.org.au/nvidia/developer/presentations/2004/GPU_Jackpot/Models_and_Textures.pdf
I wonder if anyone who is in school these days (MoP?) could comment on whether they're teaching the nuts and bolts of what art techniques work best from the game engine's point of view. Are courses like this avoided by the artists because they're perceived as "dry" subjects? Or is this kind of info a part of the core curriculum for those studying game art?
I'm glad people are enjoying the articles. There's a lot of tasty meat in there.
some things i need clarification on...
-graphics engines treat objects as surfaces, in effect making all objects open meshes (it's impossible to create totally seamless UVs). so does that mean creating water-tight meshes is irrelevant in game art?
-on a character using a single texture sheet, different UV shells within the same character are treated as separate surfaces although they share the same material.. correct?
-on a skeletal mesh with verts weighted to 1.0 and below 1.0, are verts with rigid weighting less costly to transform than verts with blended weights, even if they exist in the same object? (in UnrealEd, i remember some commands to flag verts weighted to 1.0 so the engine doesn't perform unnecessary calculations on them during animation)
-on characters that use a skeletal system for body animation and morph shapes for facial animation, are the head and body separate models? (okay, this isn't exactly derived from the article)
I'm not exactly sure what constitutes a surface, but as far as I understood, all faces assigned to a material form one surface. It could be that every tristrip counts as one surface, but that's unlikely.
Vertices without weights (all bones equal) are faster to transform, because you just need to average all bone positions; with weights you have to add multiplications.
A morph should only influence the transformation process; while your engine could require you to split them from the SKA, the API won't demand that, and therefore it's up to your programmer.
but since I am no pro, don't take it as "the truth" (there is none anyway)
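The rigid-vs-blended weighting point above can be sketched in code. This is a minimal illustration only, not any engine's actual skinning path: a real engine uses 4x4 bone matrices, but here the "bone transform" is collapsed to a plain offset vector so the weighting math stays visible, and all the names are made up. A rigid vert (weight 1.0 on one bone) needs a single transform, while a blended vert pays one transform and one multiply-add per influencing bone:

```cpp
#include <array>
#include <cstddef>

// Illustrative only: a real engine uses 4x4 bone matrices; here a "bone
// transform" is just an offset so the weighting math stays visible.
struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Rigid vert: weighted 1.0 to one bone -> a single transform, no blending.
Vec3 skinRigid(Vec3 v, Vec3 boneOffset) {
    return add(v, boneOffset);
}

// Blended vert: one transform *and* one multiply-add per influencing bone.
Vec3 skinBlended(Vec3 v, const std::array<Vec3, 4>& boneOffsets,
                 const std::array<float, 4>& weights, std::size_t boneCount) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < boneCount; ++i)
        out = add(out, scale(add(v, boneOffsets[i]), weights[i]));
    return out;
}
```

The point is the loop: a 4-bone blended vert does roughly four times the work of the rigid case, which is why engines (like the UnrealEd flag mentioned above) like to special-case verts weighted 1.0.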
-closed mesh: better, because if there are small seams you would see aliasing along the edge. also remember that stencil shadows, which use volume extrusions of models, require closed meshes
-surface: the engine renders primitives, so each tri/quad is rendered on its own. however, when rasterized they know that they are connected, and the above-mentioned aliasing won't happen at edges that 2 primitives share
for efficiency it's best to render everything that has the same material in a single batch, but it doesn't matter if they are connected or not
- weighting: if all verts have just a single weight it's easier, since no blending is needed, as KDR mentioned. however, for efficiency I also think that if you have both 1.0 and blended weights in a single mesh, the rigid verts will be treated like the others, i.e. bone1 * 1.0 + boneX * 0.0, because it's easier when everything is calculated the same way and you can just plow through all vertices at once.
-morph: just added that to the engine hehe. so, for speed's sake, it's better to split the objects that morph from the ones that don't. you can still skeletally animate the morphed stuff too, but in general if it's 2 objects, we can do the morph only for the head and save a lot of time when nothing on the rest of the body changes (only skeletal)
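The morph split above is easy to see in a sketch. Hypothetical names throughout, and a simple additive morph scheme (vertex = base + sum of weighted deltas), which may differ from what any given engine actually does:

```cpp
#include <cstddef>
#include <vector>

// Illustrative additive morphing: vertex = base + sum(weight_i * delta_i).
// Names are hypothetical, not from the article or any particular engine.
struct V { float x, y, z; };

std::vector<V> applyMorphs(const std::vector<V>& base,
                           const std::vector<std::vector<V>>& targetDeltas,
                           const std::vector<float>& weights) {
    std::vector<V> out = base;
    for (std::size_t t = 0; t < targetDeltas.size(); ++t)
        for (std::size_t i = 0; i < out.size(); ++i) {
            out[i].x += weights[t] * targetDeltas[t][i].x;
            out[i].y += weights[t] * targetDeltas[t][i].y;
            out[i].z += weights[t] * targetDeltas[t][i].z;
        }
    return out;
}
```

If head and body are one object, every body vert runs through this per-frame loop even though its deltas are all zero; splitting the head out keeps the loop small, and the body stays skeletal-only.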
what i understood from the article about surfaces is that any batch of triangles sharing the same smoothing group is considered a single surface. now this is all verbatim, but breaks in meshes are caused by smoothing groups and UV seams. that's why i thought a UV shell of an arm might be considered a unique surface even if it corresponds to the same material as a leg.
i see about the weighting thing... the equation you stated, i'm assuming, would cost the same as a vert with multiple weights. hm, maybe this is more of an engine issue than anything else, but as i understand it, video cards also have routines to handle these things.
and as for morph, definitely an engine thing then.
thanks!
CrazyButcher mentioned batches... I've been hearing some grumbling across the industry that these vertex batches are killing performance, since generally each break in surface data causes a new batch, which causes a reset, which slows things down. It seems the hardware folks are aware of this; there are some whitepapers around about how to optimize your meshes for batching. But I think we're stuck with it for the foreseeable future.
that is: shader or texture. then you need a new draw call, and the more draw calls, the worse.
batching is good as it collects everything that has the same "appearance" and therefore can be rendered in a single call.
I refer to OpenGL here, but I assume DX won't be different.
you cannot change textures while rendering, and any state changes, like blending and so on, will also require a new batch.
there is no breaking on uv coords or smoothing groups; those are just per-vertex attributes. however, let's say a smoothing group changes along an edge: you will end up with 2 more vertices, say 2 pointing up and 2 pointing left. therefore it's more efficient if they share the same normal (smoothing group). but it does not interrupt the draw call, it just adds more vertices (which btw aren't much of a problem these days anymore; what really sucks performance is the pixel operations)
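That vertex duplication can be shown with a small counting sketch (illustrative code, with attributes reduced to plain integer ids): two quads share an edge, and if they disagree on the normal along that edge, the two shared positions must be stored twice.

```cpp
#include <cstddef>
#include <set>
#include <tuple>
#include <vector>

// Illustrative: a renderable vertex is the whole bundle of attributes, reduced
// here to integer ids (positionId, normalId, uvId). If two faces touch the
// same position but disagree on the normal (a smoothing-group break) or the
// uv (a mapping seam), that position has to be stored twice.
using Attr = std::tuple<int, int, int>; // (positionId, normalId, uvId)

std::size_t uniqueVertexCount(const std::vector<Attr>& faceCorners) {
    return std::set<Attr>(faceCorners.begin(), faceCorners.end()).size();
}
```

With a shared smooth normal, two quads touching at positions 2 and 3 need 6 stored vertices; give the second quad its own normal along that edge and the count rises to 8, exactly the extra vertices described above.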
still, it's good to know there's more we can get away with on the highest end hardware
Thanks for the info guys.
basically it's everything that forces you to make a new draw call. among the state changes there are ones that are more expensive than others.
e.g. the "shaders" like bump mapping, reflections... are a pretty heavy change. changing a texture is rather cheap on modern hardware, and changing color per vertex is free, since it's an attribute every vertex has.
other state changes would be blending, culling, depth testing, fog,...
so let's say you have a bumpy floor and wall, and a transparent polygon that simulates light "glow" slightly above the floor. ideally you would render floor, then wall (both same bump shader), and then the blended polygon, even though other orders might give the same result (of course blends require the correct background)...
in fact I think doom3 uses almost the same shader for everything; someone mentioned it in that "specular" thread. for a similar reason, it's too expensive to change shaders for every nuts'n'bolts object. Same with characters...
coders do a lot of optimizations to minimize such state changes, i.e. sorting what gets drawn so that everything with the same material makes a batch. But the best optimization is the one you don't have to do...
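That sorting can be sketched like this. It's a hypothetical example, assuming shader changes are the most expensive and texture changes next: sort by shader first, then texture, so items sharing state end up adjacent, then count how many (shader, texture) runs, i.e. draw calls, remain:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative: sort by the most expensive state first (shader), then the
// cheaper one (texture). Field names are hypothetical.
struct DrawItem { int shaderId; int textureId; int meshId; };

void sortForBatching(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  if (a.shaderId != b.shaderId) return a.shaderId < b.shaderId;
                  return a.textureId < b.textureId;
              });
}

// One draw call per run of identical (shader, texture) state.
int countDrawCalls(const std::vector<DrawItem>& items) {
    int calls = 0;
    for (std::size_t i = 0; i < items.size(); ++i)
        if (i == 0 || items[i].shaderId != items[i - 1].shaderId
                   || items[i].textureId != items[i - 1].textureId)
            ++calls;
    return calls;
}
```

Four meshes alternating between two materials cost four draw calls unsorted, but only two once sorted; the geometry doesn't change at all, just the order it's submitted in.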
so especially for portable games you will likely not see smoothing groups (remember the all-round guns in quake2), and it would be better for a character to use a single texture instead of many; even though multiple textures might be smaller in overall size, changing textures might end up being slower.
I am not too familiar with consoles and the like though, but since the engine I work on targets sorta "older hardware", i.e. geforce1-3 mostly, I use a few of those old optimizations...
you basically create a face that has the same vertex twice, i.e. an edge of zero length, which results in a "non-visible" triangle. because it doesn't generate any pixels, it's a pretty good trick to keep a strip running in some other place. However, not all platforms might support it (don't know which, just read it somewhere)
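The zero-length-edge trick above can be sketched as index-list surgery (illustrative code, not any particular API): repeat the last index of one strip and the first index of the next, and the bridging triangles each contain a repeated vertex, so they have zero area and rasterize to nothing.

```cpp
#include <cstddef>
#include <vector>

// Illustrative index-list stitching: repeating the last index of strip a and
// the first index of strip b bridges them with zero-area triangles.
std::vector<int> stitchStrips(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> out = a;
    out.push_back(a.back());   // degenerate bridge, start
    out.push_back(b.front());  // degenerate bridge, end
    out.insert(out.end(), b.begin(), b.end());
    return out;
}

// A strip triangle with any repeated index has zero area -> no pixels.
int countDegenerates(const std::vector<int>& strip) {
    int n = 0;
    for (std::size_t i = 2; i < strip.size(); ++i) {
        int a = strip[i - 2], b = strip[i - 1], c = strip[i];
        if (a == b || b == c || a == c) ++n;
    }
    return n;
}
```

Stitching two 4-vertex strips this way adds four degenerate triangles between them, but the whole thing now submits as one strip. Real stitchers also have to watch winding order across the bridge, which this sketch glosses over.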