I've been trying to understand this so that I can optimize models with transparent portions as best as possible and I'm having a hard time finding info.
Where exactly is the drain from overdraw issues? The name "overdraw" would seem to imply that the more transparent space is rendered on screen, the worse it is. If that's the case, does it help to add geometry to reduce the amount of transparent space on screen? Also, is it ONLY the amount of transparent space rendered on screen that matters? Does the issue affect the entire material, or just the portions of the material that are transparent in the alpha mask?
To help illustrate the point:
(Sorry for busting out the good old transparent magenta)
A) Where we're reducing geometry as much as possible, but it results in more transparent space being rendered.
B) We've doubled the geometry count to make the shape fit the model better, reducing the transparent space being rendered.
C) Exact same size as B, but it is completely transparent space, unlike B, which has an opaque portion of the material in it.
1) Between A and B, which is the more ideal solution? More geometry for a better shape? The main question I'm trying to answer is: is the amount of transparent space on screen important to overdraw, and is adding more geometry to match the shape a worthwhile effort? I'm not looking for hard rules to follow, I'm just trying to understand where the performance problem is.
2) Are B and C equivalent performance-wise? Or does the fact that there is an opaque area of the texture make B more attractive? The question I'm mainly getting at here is whether the entire material becomes a drain on performance, or if JUST the transparent area does. Meaning, if you were making a tree for example, would it be better to separate the trunk texture (which is probably 100% opaque) from the canopy leaf textures, or does it make no difference and you can include them all in the same material?
I apologize if this is confusing, I had a hard time trying to think of the right way to ask this question.
Replies
I try to keep my meshes as two pieces: trunk and leaves. Not sure if this is a big performance hit, but it's the way I've done it and it's always worked.
I remember GDC a few years back when the guys from Crytek were talking about this stuff. They went with method B as well.
You also need to factor in alpha test and alpha sort/blend.
Alpha test is an easier check: is the pixel transparent, Y/N?
With sort/blend you have various degrees of transparency and it takes a lot longer to calculate, especially when they are stacked.
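The difference between the two can be sketched in a few lines. This is a toy model of one pixel, not real GPU code; the cutoff value and function names are made up for illustration. Fragments are (color, alpha) pairs listed back to front.

```python
CUTOFF = 0.5  # hypothetical alpha-test threshold

def alpha_test(dst, fragments):
    """1-bit decision: each fragment is either fully kept or discarded."""
    for color, alpha in fragments:
        if alpha >= CUTOFF:      # simple Y/N check per fragment
            dst = color          # opaque write, no read of dst needed
    return dst

def alpha_blend(dst, fragments):
    """Every fragment reads the current value and blends over it."""
    for color, alpha in fragments:
        dst = alpha * color + (1.0 - alpha) * dst   # read-modify-write
    return dst
```

The blend path has to do a read-modify-write for every single fragment, and only produces the right answer if the fragments arrive in back-to-front order, which is why stacked blended layers are the expensive case.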
2) I think it's based on the number of transparent pixels, but only to a certain degree; even for the opaque pixels there is still a hit, because it has to figure out whether each pixel is opaque or not, but it doesn't do the calculations for what's behind it. So C is worse than B. Because of the slight hit with opaque pixels, I think it's better to put your transparent materials on one set of meshes and the opaque on another. Something like a tree trunk that is 100% opaque should get its own material that never has a check for opacity.
In general I think giant transparent cards are a bad idea in most games, unless the engine is specifically designed to handle them in an efficient way.
I explained a bit of this a while back here:
http://www.polycount.com/forum/showthread.php?p=1310622#post1310622
For question 2, I think if you have a lot of overlapping geometry then B would be faster than C, because the renderer will check whether the pixel is occluded (using the depth mask) before even trying to render the material. Of course that assumes it's rendering front to back, and in most cases the sorting will be random.
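That occlusion check is what early-z rejection does. A minimal sketch of the idea (illustrative names, not any real API) for a single pixel covered by opaque geometry drawn front to back:

```python
def draw_opaque(fragments):
    """fragments: list of (depth, shade_cost) pairs, in draw order.
    Returns how many fragments actually ran their (expensive) shader."""
    depth_buffer = float("inf")
    shaded = 0
    for depth, _cost in fragments:
        if depth < depth_buffer:   # passes the depth test
            depth_buffer = depth   # shade the fragment and write depth
            shaded += 1
        # else: rejected before shading - no texture fetches, no lighting
    return shaded
```

Drawn front to back, only the nearest fragment gets shaded; drawn back to front (or randomly), every fragment that happens to be nearer than the last one gets shaded, which is the worst case the reply above is pointing at.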
http://wiki.polycount.com/Overdraw
In my experience, when you have a forest of trees it is better to have a single alpha-test shader for the whole tree, including the trunk. Just one texture for the whole tree. This way each tree is a single draw call. Multiple draw calls and state switching tend to be more expensive than simply alpha testing the whole surface.
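The trade-off above is easy to put in numbers. This is a deliberately crude model, assuming a renderer that cannot batch across material switches, so each (mesh, material) pair is one draw call:

```python
def draw_calls(tree_count, materials_per_tree):
    # one submission per mesh/material combination, no batching assumed
    return tree_count * materials_per_tree

forest = 500
split   = draw_calls(forest, 2)  # separate trunk + canopy materials
atlased = draw_calls(forest, 1)  # one alpha-tested texture per tree
```

For a 500-tree forest, splitting trunk and canopy doubles the draw calls, and each extra call also carries state-switching overhead on the CPU side, which is the cost this reply is weighing against alpha-testing the trunk's already-opaque pixels.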
~Use alpha textures, but don't go overboard. The more screen space covered by an alpha plane, the more expensive it will be to render. Alpha planes use alpha testing and thus require a second drawcall.
~Keep the overdraw in mind. It is often better to cut the alpha-mapped polygon plane to match the shape that it represents. This keeps the amount of alpha on screen as low as possible. This also allows you to put more objects on the texture map.
You can divide this problem into different areas (applied only to alpha-blended geometry):
1. Special solutions:
Some major features of an engine get a special solution, like water rendering, where large areas of the scene are covered by transparent geometry.
2.a. Particles I: Light-emitting particles are often no problem and can be rendered in a forward pass, with no need to consider any lights etc.
2.b. Particles II: Particles that interact with light are quite expensive, or are rendered at a lower quality level. Most particles will only be lit by some kind of fake light or by the most important light sources in the scene.
3. Foliage, leaves: Same as 2.b.
Overdraw:
It describes the number of polygons rendered on top of a "single" pixel. With modern engines this can be handled quite effectively (deferred + early-z rejection) as long as you don't need to use alpha blending/transparency.
Simple example:
Think of a forest, rendering 1000 trees, but only the first trees will be visible. When you don't use alpha blending, you can reject rendering any tree once you are sure that another tree is in front of it. No need to render the tree, including expensive texture fetches and shaders. This is done on a per-pixel level (early z-rejection). Now think about leaves which are semi-transparent. Render-wise this is a pure nightmare. First you need to render the leaves in the correct order (back to front), and you need to fully render them, including texture fetches and shaders (including lighting).
Using alpha masking (on/off) instead of alpha blending would solve this problem.
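A rough per-pixel model of the forest example, under two simplifying assumptions: alpha blending shades every layer covering the pixel, while alpha masking (on/off) runs the shader until it hits the first opaque texel, after which the depth test rejects everything behind it. Layers are alpha values listed front to back (1.0 = opaque).

```python
def shaded_layers_blending(layers):
    # back-to-front blending: every layer is fully shaded
    return len(layers)

def shaded_layers_masking(layers, cutoff=0.5):
    shaded = 0
    for alpha in layers:       # front to back
        shaded += 1            # the mask test itself runs the shader
        if alpha >= cutoff:
            break              # opaque texel: depth now blocks the rest
    return shaded
```

So for four stacked leaf cards where the third texel is opaque, blending shades all four layers while masking stops at three; if the frontmost texel is opaque, masking shades just one. The exact numbers are hypothetical, but the scaling behaviour is the point.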
The large particles aren't as bad as you'd think - it's better to have 100 large planes make up a tornado (and this is a HUGE tornado, seen from EVERY angle) than hundreds of smaller ones, simply because what kills your performance is when there's a bunch of particles in front of one another.
If they overlap a handful either side of themselves, it's not too bad (using alpha testing), or a few if they're blended and sorted, but stack up 10 taking the same pixel and your computer's taking a hit.
Also, B makes it easier to get the most out of the UV space. Since our UVs are a square, any 4-sided shape will be better than a 3-sided shape. Also you can get some bend with B, if you feel like it.
I'm looking to compare LOD0 with 1 and 2. Crunching polygons seems to give me more overdraw than I'm saving on triangles.
As far as I know, if in example B a transparent (alpha-blended) shader/material is used, then every texel in this geo is considered transparent, even if the alpha of that texel is 1 (where it looks opaque).
If alpha masking is used, then only texels with alpha less than 1 are causing overdraw.
So in general there will be less overdraw in ex. B (with an alpha-blended shader) compared to ex. A.
But there'll be even less overdraw if we use an alpha-masked shader for ex. B.
Am I right?
With 8-bit alpha, opaque pixels are just as expensive as partially transparent pixels. It gets expensive when you have several layers of alpha cards on top of each other, because with opaque pixels the renderer does this:
Render the pixel - oh cool it's opaque, draw it to the framebuffer.
When the pixel you're drawing has multiple layers of transparency, you get this:
Render the pixel, oh it's not opaque. Render the thing behind it as well. Oh it's not opaque. Render the thing behind that. And again. And again. And again, until your pixel becomes opaque or you run out of objects.
You basically have to render the same pixel multiple times. Which is why it matters how big your alpha stuff is relative to the screen size. If you have alpha covering a lot of your screen (eg covering a lot of pixels per frame), that's gonna be mega expensive, because a large percentage of the pixels in your frame need to be rendered multiple times.
If you have like 20 layers of transparent polys, but they're really small in screen-space and not covering many pixels on the frame - that's kind of okay. If it's like 2% screen-space coverage, don't even worry about it...
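A back-of-the-envelope model of that point: the cost scales with screen coverage times layer depth, not with the size of any one card. The numbers below are hypothetical, just to show the scaling.

```python
def shaded_pixels(resolution, coverage, layers):
    """resolution: pixels per frame; coverage: 0..1 fraction of the
    screen under the alpha stack; layers: transparent layers per pixel."""
    return int(resolution * coverage * layers)

frame = 1920 * 1080
big_stack   = shaded_pixels(frame, 0.50, 4)   # half the screen, 4 layers deep
small_stack = shaded_pixels(frame, 0.02, 20)  # 2% coverage, 20 layers deep
```

Half the screen under just 4 blended layers costs several times more pixel work than 20 layers confined to 2% of the screen, which is why the tiny deep stack is "kind of okay".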
For some reason I thought that 1-bit alpha (or alpha testing) can define/separate opaque and completely transparent pixels, and therefore tells the renderer not to "check" pixels behind those where alpha is 1.
But in the case of alpha blending, the renderer will "check" pixels behind all the pixels with any value of alpha, even those with alpha 1.
I think what Computron said confused me:
Maybe in the first pass (drawcall) they use some sorting to distinguish pixels with alpha = 1, then "mark" them as 'opaque' and "mark" pixels with alpha = 0 as 'transparent'. And in the second pass they just render those opaque pixels as normal opaque pixels and don't render the transparent ones at all.
Not completely sure about it.
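Something close to the two-pass idea described above does exist, though this sketch is a guess at the general shape, not any specific engine's pipeline: pass 1 alpha-tests the (nearly) solid texels and writes depth; pass 2 alpha-blends only the soft edges on top, depth-tested but not depth-written. The cutoff value and names are made up.

```python
def two_pass(dst, depth, fragments, cutoff=0.9):
    """fragments: list of (depth, color, alpha). Returns (color, depth)."""
    # Pass 1: treat nearly-opaque texels as opaque, depth-test and write depth.
    for d, color, alpha in fragments:
        if alpha >= cutoff and d < depth:
            dst, depth = color, d
    # Pass 2: blend the remaining semi-transparent texels back to front,
    # depth-tested against pass 1 but without writing depth.
    soft = sorted((f for f in fragments if f[2] < cutoff), reverse=True)
    for d, color, alpha in soft:
        if d < depth:
            dst = alpha * color + (1 - alpha) * dst
    return dst, depth
```

The payoff is that the solid core of the foliage gets the cheap opaque path (and blocks overdraw behind it via the depth buffer), while only the thin soft fringe pays the blending cost.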