Say a character has a shirt. The body mesh under the shirt would not need to be rendered, so I would delete the polys under the shirt, which would leave two floating arms, which would mean more draw calls. When would the benefit of doing this outweigh creating more draw calls?
Replies
It is like when some engines do batching: say there are a bunch of rocks in a certain area that share the same shader, and the engine groups all of those together and sends them to the GPU as one draw call instead of many. The engine combined the objects into one, and even though they don't share vertices they are a single draw call.
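The batching idea can be sketched in a few lines. This is a hypothetical, engine-agnostic illustration (the function and data are made up, not any particular engine's API): concatenate the vertex buffers and shift each mesh's indices by the running vertex count.

```python
def batch_meshes(meshes):
    """Combine several (vertices, indices) meshes into one buffer pair,
    so objects sharing a material can be drawn with a single draw call."""
    vertices, indices = [], []
    for verts, idx in meshes:
        base = len(vertices)                    # index offset for this mesh
        vertices.extend(verts)
        indices.extend(i + base for i in idx)
    return vertices, indices

# Two hypothetical one-triangle "rocks" sharing the same shader.
rock_a = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [0, 1, 2])
rock_b = ([(2, 0, 0), (3, 0, 0), (2, 1, 0)], [0, 1, 2])
verts, idx = batch_meshes([rock_a, rock_b])
print(idx)  # [0, 1, 2, 3, 4, 5] -- one index buffer, one draw call
```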
The way it works is usually one of two things: a special break command added as part of the mesh data to say we want to start drawing a new section, or (more commonly) "degenerate" geo, which is just zero-sized triangles.
So your left and right arm would be connected by an invisible quad that has two verts in the same place on the left arm and two verts in the same place on the right arm. The graphics card doesn't draw it because it is zero-sized, meaning it doesn't cover any pixels.
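A classic form of this trick is stitching triangle strips. Here's a rough sketch with hypothetical index values (winding-order parity is ignored for brevity): repeating the last index of one strip and the first index of the next produces the zero-area triangles described above.

```python
def stitch_strips(strip_a, strip_b):
    """Join two triangle strips into one by repeating the last index of
    the first strip and the first index of the second.  The repeated
    indices form zero-area (degenerate) triangles that cover no pixels,
    so the GPU skips them, and both pieces become one draw call."""
    return strip_a + [strip_a[-1], strip_b[0]] + strip_b

left_arm = [0, 1, 2, 3]    # hypothetical strip indices for one piece
right_arm = [4, 5, 6, 7]   # ...and for the other
print(stitch_strips(left_arm, right_arm))
# [0, 1, 2, 3, 3, 4, 4, 5, 6, 7]
```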
Let's say I have an object with round-ish smooth edges, and the rest of the surface is fairly clean, like a table or a chair, so the normal map would only hold important information for the rounded edges. What's better in this case: actually beveling the edges and getting rid of the normal map (more polys), or using a normal map and fewer polys?
Let's say the normal map is 512x512, and by not using it I have to add more polys. What should my polycount be then? How many polygons can I add before it becomes less optimal? I know this kind of question has a lot of variables, but can we generalize it a little bit?
Typically having one contiguous mesh is better for characters than having stacks of layered polys that could come busting out or deform oddly once the character starts moving.
For example:
Just deleting those polys will help reduce the overall triangle count, but it will be a nightmare to skin. Those two pieces should be welded together, or the topology should match almost identically, so that it can be weighted to deform the same way. If it doesn't...
Necks get welded to shirts, hands get welded onto the ends of sleeves, feet onto the ends of pants, etc.
I've heard in the past that a 512x512 normal map might take as much memory as around 2,500 polygons, but to be honest I have absolutely no idea where those numbers came from, so I'd take it with a grain of salt. When it comes to performance, there are several other factors to consider as well (like the long, thin triangles that a bevel can produce).
http://en.wikipedia.org/wiki/S3_Texture_Compression#DXT1
16 pixels per 64 bits of data.
512x512 / 16 * 64 / 8 = 131072 bytes.
Let's add mipmaps.
Total comes to about 174,762.5 bytes, plus a little overhead because the smaller mips still occupy whole 4x4 blocks.
That's roughly 170 KB.
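The arithmetic above (base level plus the full mip chain, rounding each mip up to whole 4x4 blocks) can be checked with a small script; this is my own sketch, not code from any SDK:

```python
from math import ceil

def dxt1_bytes(size, mipmaps=True):
    """DXT1 packs each 4x4 pixel block into 64 bits (8 bytes),
    i.e. 16 pixels per 64 bits of data."""
    total = 0
    while size >= 1:
        blocks = ceil(size / 4) ** 2   # even a 1x1 mip occupies a whole block
        total += blocks * 8
        if not mipmaps:
            break
        size //= 2                     # next mip level
    return total

print(dxt1_bytes(512, mipmaps=False))  # 131072 bytes for the base image
print(dxt1_bytes(512))                 # 174776 bytes with the full mip chain
```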
Now, a pessimistic scenario for 2,500 polygons: they're all plain triangles, no triangle stripping, and poor vertex sharing, say about one unique vertex per triangle.
I'd say most engines would probably use a 16-bit indices array, so that's 3*2*2500 = 15000 bytes right off the bat.
Keeping with our pessimistic theme, let's consider the memory usage of a vertex in a static model with no lightmapping data or anything, using single-precision floats for everything.
Location = 3x float
UVs = 2x float
Normal = 3x float
Tangent = 3x float
Bitangent = 3x float
Total = 14 floats per vertex. Multiply by four bytes per float for 56 bytes per vertex.
At about one unique vertex per triangle, that's 2,500 x 56 = 140,000 bytes.
140,000 + 15,000 = 155,000 bytes, roughly 151 KB.
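Putting that estimate into a small script of my own (assuming about 2,500 vertices, which is what the 140,000-byte figure above implies, and the attribute layout listed earlier):

```python
FLOAT_BYTES = 4
INDEX_BYTES = 2  # 16-bit indices, as assumed above

# Floats per attribute, matching the layout listed above.
ATTRIBUTES = {"position": 3, "uv": 2, "normal": 3, "tangent": 3, "bitangent": 3}

def mesh_bytes(vertex_count, triangle_count):
    """Vertex buffer plus index buffer size for a static triangle mesh."""
    vertex_size = sum(ATTRIBUTES.values()) * FLOAT_BYTES  # 14 floats = 56 bytes
    return (vertex_count * vertex_size
            + triangle_count * 3 * INDEX_BYTES)

print(mesh_bytes(2500, 2500))  # 155000 bytes, roughly 151 KB
```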
Granted, that vertex estimate is on the pessimistic side (a well-welded mesh might use half as many vertices or fewer), but real vertices probably carry extra data, and the normal map's format might be smaller, so the figures aren't far off.