Overdraw, how does it work and how bad is it?

I've been trying to understand this so I can optimize models with transparent portions as well as possible, and I'm having a hard time finding info.

Where exactly is the performance drain from overdraw? The name "overdraw" would seem to imply that the more transparent space rendered on screen, the worse it is. If that's the case, does it help to add geometry to reduce the amount of transparent space on screen? Also, is it ONLY the amount of transparent space rendered on screen that matters? Does the issue affect the entire material, or just the portions of the material that are transparent in the alpha mask?

To help illustrate the point:
(Sorry for busting out the good old transparent magenta)

A) Where we're reducing geometry as much as possible, but it results in more transparent space being rendered.
B) We've doubled the geometry count to make the shape fit the model better, reducing the transparent space being rendered.
C) Exactly the same size as B, but it is completely transparent space, unlike B, which has an opaque portion of the material in it.

1) Between A and B, which is the more ideal solution? More geometry for a better shape? The main question I'm trying to answer is: is the amount of transparent space on screen important to overdraw, and is adding more geometry to match the shape a worthwhile effort? I'm not looking for hard rules to follow, I'm just trying to understand where the performance problem is.

2) Are B and C equivalent performance-wise? Or does the fact that there is an opaque area of the texture make B more attractive? The question I'm mainly getting at here is whether the entire material becomes a drain on performance, or JUST the transparent area. Meaning, if you were making a tree for example, would it be better to separate the trunk texture (which is probably 100% opaque) from the canopy leaf textures, or does it make no difference and you can include them all in the same material?

I apologize if this is confusing, I had a hard time trying to think of the right way to ask this question.


  • Jesse Moody
From my experience I have always gone with B. More geo, yes, but less area to render out in alpha = less overdraw.

I try to keep my meshes as two pieces: trunk and leaves. Not sure if this is a big performance hit, but it's the way I've done it and it's always worked.

    I remember GDC a few years back when the guys from Crytek were talking about this stuff. They went with method B as well.
  • Mark Dygert
1) As far as I know, B is the better option, because it is only the pixels that are marked as transparent that cause the problems. In general it's easier for an engine to render a few more tris than to process giant transparent areas, so trimming down those places is pretty key. Especially when you have several stacked in front of each other.

    You also need to factor in alpha test and alpha sort/blend.
Alpha test is an easier check: is the pixel transparent, Y/N.
With sort/blend you have various degrees of transparency, and it takes a lot longer to calculate, especially when they are stacked.
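A minimal per-pixel sketch of that difference, in Python. The cutoff value and function names here are illustrative, not any engine's API:

```python
# Illustrative only: ALPHA_CUTOFF and these function names are made up.
ALPHA_CUTOFF = 0.5

def alpha_test(dst_color, src_color, src_alpha):
    """Alpha test: a single yes/no check, then write or discard."""
    if src_alpha < ALPHA_CUTOFF:
        return dst_color                  # fragment discarded outright
    return src_color                      # fragment written as fully opaque

def alpha_blend(dst_color, src_color, src_alpha):
    """Alpha blend ('over' operator): every fragment pays a read-modify-write."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_color, dst_color))

def blend_stack(background, layers):
    """Stacked blended layers compound: each blends over the result so far."""
    color = background
    for src_color, src_alpha in reversed(layers):   # back to front
        color = alpha_blend(color, src_color, src_alpha)
    return color
```

The test branch touches only the incoming fragment, while each blend has to read what is already in the framebuffer, which is where stacking gets expensive.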

2) I think it's based on the number of transparent pixels, but only to a certain degree. Even for the opaque pixels there is still a hit, because it has to figure out whether each pixel is opaque or not, but it doesn't do the calculations for what's behind it. So C is worse than B. Because of the slight hit with opaque pixels, I think it's better to put your transparent materials on one set of meshes and the opaque ones on another. Something like a tree trunk that is 100% opaque should get its own material that never has a check for opacity.

    In general I think giant transparent cards are a bad idea in most games, unless the engine is specifically designed to handle them in an efficient way.

    I explained a bit of this a while back here:
  • dii
    Wow greatly appreciate the quick response guys, pretty much clears up all the confusion I was having!
  • commander_keen
In general B should be the best, although if the tris are far enough away that A would appear at about 4x4 pixels or less on screen, then A will probably be faster.

For question 2, I think if you have a lot of overlapping geometry then B would be faster than C, because the renderer will check if the pixel is occluded (using the depth mask) before even trying to render the material. Of course that assumes it's rendering front to back, and in most cases the sorting will be random.
  • Eric Chadwick
    I added a wiki glossary page for overdraw, hope it helps.

    In my experience, when you have a forest of trees it is better to have a single alpha-test shader for the whole tree, including the trunk. Just one texture for the whole tree. This way each tree is a single draw call. Multiple draw calls and state switching tend to be more expensive than simply alpha testing the whole surface.
  • Computron
    Crydev Wiki:

    ~Use alpha textures, but don't go overboard. The more the space on screen covered by an alpha plane, the more expensive it will be to render. Alpha planes use alpha testing, and thus, require a second drawcall.

    ~Keep the overdraw in mind. It is often better to cut the alpha-mapped polygon plane to match the shape that it represents. This keeps the amount of alpha on screen as low as possible. This also allows you to put more objects on the texture map.

  • Ashaman73
Most modern engines are based on some kind of deferred shading/lighting. This has the benefit that these engines can handle lots of geometry, lights and post-processing effects. One of the major drawbacks is that you can't handle transparency at all, or only very clumsily. The most common solution to this drawback is to use a forward renderer for the transparent polygons. But a forward renderer needs to handle lighting in a different and very expensive way, so the quality and performance of forward-rendered transparency is quite low. Additionally, almost all alpha-blended geometry must be sorted to be rendered correctly (alpha masking is not an issue!).

You can divide these problems into different areas (applied only to alpha-blended geometry):
1. Special solutions:
Some major features of an engine get a special solution, like water rendering, where you have large areas of the scene covered by transparent geometry.
2.a. Particles I: light-emitting particles are often no problem and can be rendered in a forward pass, with no need to consider any lights etc.
2.b. Particles II: particles that interact with light are quite expensive, or are rendered at a lower quality level. Most particles will only be lit by some kind of fake light, or by the most important light sources in the scene.
3. Foliage/leaves: same as 2.b.

Overdraw describes the number of polygons rendered on top of a "single" pixel. With modern engines this can be handled quite effectively (deferred + early-z rejection), as long as you don't need to use alpha blending/transparency.
    Simple example:
Think of a forest, rendering 1000 trees, but only the first trees will be visible. When you don't use alpha blending, you can reject rendering any tree once you are sure that another tree is in front of it. No need to render the tree, including expensive texture fetches and shaders. This is done on a per-pixel level (early-z rejection). Now think about leaves which are semi-transparent. Render-wise this is a pure nightmare. First you need to render the leaves in the correct order (back to front), and you need to fully render them, including texture fetches and shaders (including lighting).
    Using alpha masking (on/off) instead of alpha blending would solve this problem.
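A toy model of that early-z behaviour for a single pixel covered by several surfaces. The idealized rejection here is an assumption for illustration, not how any specific GPU is implemented:

```python
# Toy model: one pixel, several surfaces at different depths.
def shaded_opaque(depths):
    """Opaque, drawn front to back with a depth test: once the nearest
    surface has been written, everything behind it is rejected before
    the expensive shading work (early-z)."""
    shaded = 0
    nearest = float("inf")
    for d in sorted(depths):        # front-to-back order
        if d < nearest:             # passes the depth test -> gets shaded
            shaded += 1
            nearest = d
    return shaded

def shaded_blended(depths):
    """Alpha-blended, drawn back to front: every layer must be fully
    shaded (texture fetches, lighting), because each one contributes."""
    return len(depths)
```

With 1000 tree layers over one pixel, the opaque path shades it once, while the blended path shades it 1000 times.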
  • Brendan
    I'm working on the particles for a racing game at the moment, and that includes other enviro effects like a tornado and storms.

    The large particles aren't as bad as you'd think - it's better to have 100 large planes make up a tornado (and this is a HUGE tornado, seen from EVERY angle) than hundreds of smaller ones, simply because what kills your performance is when there's a bunch of particles in front of one another.

If they overlap a handful either side of themselves, it's not too bad (using alpha testing), or a few if they're blended and sorted, but stack up 10 taking the same pixel and your computer's taking a hit.

Also, B makes it easier to get the most out of the UV space. Since our UVs are a square, any 4-sided shape will be better than a 3-sided shape. Also you can get some bend with B, if you feel like it.
  • cupsster
At any time you must decide what will be better for the given situation. The general rule should be: be wise with anything you do, and if in doubt, do a comparison test ;) The truth will be revealed to you.
  • sltrOlsson
On this note, does anyone know of a plugin/software (for Maya) that can tell me how much overdraw I have on screen? Just an approximation would help too.

I'm looking to compare LOD0 with 1 and 2. Crunching polygons seems to give me more overdraw than I'm saving on triangles.
  • DA_Fox
Sorry for necroing this thread, I just want to be sure about one thing regarding overdraw.
As far as I know, if in example B a transparent shader/material is used, then every texel on this geo is considered transparent. Even if the alpha of that texel is 1 (when it looks opaque).

    If alpha masking is used then only texels with alpha less than 1 are causing overdraw.

So in general there will be less overdraw in example B (with an alpha-blended shader) compared to example A.
But there'll be even less overdraw if we use an alpha-masked shader for example B.

    Am I right?
  • marks
    Yeah that's almost right - 1-bit alpha also causes overdraw on every pixel, even the opaque ones.

With 8-bit alpha, opaque pixels are just as expensive as partially transparent pixels. Where it gets expensive is when you have several layers of alpha cards on top of each other, because with opaque pixels the renderer does this:

    Render the pixel - oh cool it's opaque, draw it to the framebuffer.

    When the pixel you're drawing has multiple layers of transparency, you get this:

Render the pixel - oh, it's not opaque. Render the thing behind it as well. Oh, it's not opaque. Render the thing behind that. And again. And again. And again, until your pixel becomes opaque or you run out of objects.

You basically have to render the same pixel multiple times, which is why it matters how big your alpha stuff is relative to the screen size. If you have alpha covering a lot of your screen (i.e. covering a lot of pixels per frame), that's going to be mega expensive, because a large percentage of the pixels in your frame need to be rendered multiple times.
If you have like 20 layers of transparent polys, but they're really small in screen space and not covering many pixels in the frame - that's kind of okay. If it's like 2% screen-space coverage, don't even worry about it...
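A back-of-the-envelope version of that cost model: shaded pixels scale with screen coverage times stacked layers. The resolution and percentages below are made-up numbers for illustration:

```python
# Illustrative cost model: the function name and numbers are made up.
def overdrawn_pixels(screen_pixels, coverage, layers):
    """Pixels shaded per frame for a stack of transparent cards covering
    the same `coverage` fraction of the screen, `layers` deep."""
    return int(round(screen_pixels * coverage)) * layers

SCREEN = 1920 * 1080

big_stack  = overdrawn_pixels(SCREEN, 0.50, 10)  # half the screen, 10 deep
tiny_stack = overdrawn_pixels(SCREEN, 0.02, 20)  # 2% coverage, 20 deep
```

Even with twice as many layers, the 2% coverage stack shades roughly a twelfth of the pixels that the half-screen stack does, which matches the "small in screen space is kind of okay" point.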
  • DA_Fox
    Thanks, Marks for clarification!

For some reason I thought that 1-bit alpha (or alpha testing) can define/separate opaque and completely transparent pixels, and therefore tells the renderer not to "check" pixels behind those where alpha is 1.

But in the case of alpha blending, the renderer will "check" pixels behind all the pixels with any value of alpha. Even those with alpha 1.

    I think what Computron said confused me:
    Crydev Wiki:

~Use alpha textures, but don't go overboard. The more the space on screen covered by an alpha plane, the more expensive it will be to render. Alpha planes use alpha testing, and thus, require a second drawcall.

Maybe in the first pass (drawcall) they use some sorting to distinguish pixels with alpha = 1, then "mark" them as 'opaque' and "mark" pixels with alpha = 0 as 'transparent'. And in the second pass they just render these opaque pixels as normal opaque pixels, and do not render the transparent ones at all.

    Not completely sure about it. :)