
Alpha to coverage

almighty_gir
so anyway, i was reading the FFXIII thread and came across the term:
"alpha to coverage".

what does it mean? how is it useful? what's the best way to make alphaed hair? which engines handle alphas better?

BRING FORTH THE GOODNESS BITCHES!

Replies

  • Rick Stirling
It's cheap/fake dithering to get antialiased alpha. It works almost as fast as 1-bit alpha test/cutout but looks a little more like 8-bit alpha blend.

    It's very useful for grass and hair, where you don't want harsh 1-bit alpha but can't afford to sort multiple layers of blended alpha.

    It's cheaper to do on the 360 than it is on the PS3 (it's ALMOST free on the 360 as I recall). No idea about the PC.
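To make the "cheap/fake dithering" idea concrete, here is a minimal Python sketch of screen-door transparency, the effect alpha to coverage approximates. Everything here (the 2x2 Bayer matrix, the function names) is illustrative, not taken from any particular engine:

```python
# Screen-door ("stipple") transparency: each pixel is kept or discarded
# by comparing its alpha against a per-pixel dither threshold, so a 50%
# alpha region renders as a checkerboard that averages out visually.

# 2x2 ordered-dither (Bayer) thresholds, normalized to 0..1 (illustrative).
BAYER_2X2 = [
    [0.25, 0.75],
    [1.00, 0.50],
]

def screen_door_keep(alpha, x, y):
    """Return True if the pixel at (x, y) survives the dithered alpha test."""
    threshold = BAYER_2X2[y % 2][x % 2]
    return alpha >= threshold

# A 4x4 patch at 50% alpha keeps exactly half of its pixels.
patch = [[screen_door_keep(0.5, x, y) for x in range(4)] for y in range(4)]
kept = sum(cell for row in patch for cell in row)
print(kept)  # 8 (of 16 pixels)
```

Unlike blended alpha, every surviving pixel is fully opaque, so no depth sorting is needed; multisampling (or simply high resolution) smooths the stipple into something close to a blend.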
  • almighty_gir
and how is it implemented?

    i mean, take mass effect 2 for example, that seems to be using 1 bit alpha, but it looks poop (hair wise, the rest looks great). so is UDK even capable of using alpha to coverage?

    is alpha to coverage even the correct term?
  • MoP
Alpha to coverage is the correct term, yes. Usually a screen-space stipple effect, IIRC.
    Implementing it is a code solution; I don't think you can do it any other way. I don't think it's purely a shader thing.
  • Rick Stirling
OK, take this all with a pinch of "I did this 12 months ago and can't remember the details"... Ask your graphics programmer. I worked with a shader programmer to get it implemented - that's not to say he didn't require engine code support to work with the shader. I think it's hardware related: you can do it in DX9 and DX10, and in the last few versions of OpenGL (but it's more work in OpenGL as I understand it, making it slightly slower on the PS3).

You still paint the alpha as 8-bit, and then you can specify how many threshold divides there are - so 2 for cutout (on/off), 4, 8, etc. I think we were using 4 levels on the PS3 for in-game hair on the two Episodes from Liberty City games (now that it's been announced). It's nowhere near as pretty as fully blended alpha, but with all the hairy bikers in The Lost and Damned I had to get some antialiasing in there.

    I know Source has it, no idea about Unreal.
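A rough Python sketch of the "threshold divides" idea above. The function and the rounding scheme are assumptions for illustration; real hardware selects coverage samples rather than snapping the alpha value itself:

```python
# Quantize an alpha value (0..1 here) into N coverage levels:
# levels=2 behaves like plain on/off cutout, while levels=4 or 8
# gives the intermediate steps described above. Purely illustrative.

def quantize_alpha(alpha, levels):
    """Snap a 0..1 alpha value to the nearest of `levels` coverage steps."""
    steps = levels - 1
    return round(alpha * steps) / steps

print(quantize_alpha(0.6, 2))  # 1.0 (plain cutout: above half -> opaque)
print(quantize_alpha(0.6, 4))  # ~0.667 (one of four coverage levels)
print(quantize_alpha(0.6, 8))  # ~0.571 (finer steps, smoother edges)
```

More levels means more distinct stipple densities along a soft edge, which is why 4 or 8 levels reads as antialiased where 2 levels reads as a hard cutout.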
  • Rick Stirling
Found two shots with it applied to a chain-link fence. Open both and compare (and note that it drops from 18 to 16 fps).

    Off: http://img205.imageshack.us/i/glo2x6dy.jpg/
    On: http://img205.imageshack.us/i/glo2xac9ew.jpg/
  • almighty_gir
    Thanks a lot guys!

it would appear that UDK does support it to an extent, as it's built into SpeedTree and all the in-engine foliage rendering.

    now, i need to figure out how to apply it to character assets.

    on a side note. why is it that engines can't seem to handle translucent alpha blending?
  • Vailias
Short answer: because it's not simple to do, which makes it difficult to do fast.

    Long answer:
In order to keep rendering times low, as little of a scene as possible must be drawn each frame. So objects and polygons need to be culled, and one part of this is depth from camera. Objects which are behind other objects are flagged as not needing to be rendered. In cases where part of an object is visible there are a few approaches. The simplest is to draw all visible objects in reverse depth order, so parts of an object that are behind another object are simply overdrawn.

This same sorting can be applied to individual polygons in a single mesh, using the vertex data that makes up a polygon as the test for what's ahead of what, and/or the depth buffer (Z-buffer). The trick with the Z-buffer is that it is essentially image data and does not respect transparency. So a simple Z-buffer implementation of which pixels to draw will result in big blank spots wherever there is partial alpha, as the pixels of the model behind it will be discarded for drawing, so you get a see-through model in somewhat unpredictable ways.

The other way to sort for draw order per polygon relies on vertex data: whichever polygon has the closer vertex is flagged to draw last. Then you get stuff like this:
[image: TransparencySorting.jpg]


The solution is to be able to sort and draw per pixel, or to subdivide the mesh at every internal intersection so there are no interpenetrating polygons, but that has the potential to increase the vertex count of a model to prohibitive levels very quickly. Drawing and sorting per pixel is great, but still not conducive to realtime gameplay and, as far as I know, not supported by most 3D APIs in use presently.
    I'd like to go into more detail, but the deeper mechanics of a non-game oriented rendering engine are still beyond my scope of knowledge.

Also, as far as translucent alpha blending goes, that also has to do with the 8-bit-per-channel color model and simple color math. If you overdraw a red plane with a blue plane as above, what color should the resulting pixel be? Some form of purple, of course, but should it be a brighter purple? A darker purple? What?
    If you simply add the color values of two translucent objects you'll get a brighter pixel than either of the original values; if you multiply or subtract, they'll likely be darker than before and will go black very quickly. So instead you wind up needing to do some trickery, like multiplying each pixel color by its alpha value, multiplying the results together, then multiplying by some constant, like 2, to get a value approximating the correct end color: ((C1*A1)*(C2*A2))*2

To get CORRECT translucency you'd basically need to do photon mapping: account for viewer position relative to the light source, alter the color of the light striking the second surface after it passes through the first surface, and then accumulate that color into the final pixel seen by the camera.

    (if there are any graphics pipeline guys out there please feel free to correct any of this that is out of order)
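The order-dependence described above can be shown in a few lines of Python using the standard "over" blend equation (the color tuples and function are illustrative only):

```python
# The standard alpha blend (src * a + dst * (1 - a)) is not commutative,
# so the same two translucent layers produce different pixels depending
# on draw order. Colors are (r, g, b) tuples in 0..1.

def blend_over(src, alpha, dst):
    """Composite src over dst: src * alpha + dst * (1 - alpha), per channel."""
    return tuple(s * alpha + d * (1.0 - alpha) for s, d in zip(src, dst))

WHITE, RED, BLUE = (1.0, 1.0, 1.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)

# Correct order: farthest layer first (blue behind red, over a white bg).
back_to_front = blend_over(RED, 0.5, blend_over(BLUE, 0.5, WHITE))
# Wrong order: the same two layers drawn front to back.
front_to_back = blend_over(BLUE, 0.5, blend_over(RED, 0.5, WHITE))

print(back_to_front)  # (0.75, 0.25, 0.5)
print(front_to_back)  # (0.5, 0.25, 0.75) - a visibly different pixel
```

That per-pixel difference is exactly why engines must sort blended geometry back to front, and why unsortable cases like intersecting hair cards break down.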
  • chronic
    my hacky artist understanding of this: (feel free to correct)

'coverage' refers to a concept linked to anti-aliasing - when you smooth (blur) the boundary where two objects overlap, you get pixels that are not 100% foreground and not 100% background but a range in between. 'Coverage' is a visualization of that anti-aliased boundary.

    this was useful when i was doing compositing - when you do depth of field or comp'ing with the depth buffer you need the 'coverage' information to recreate smooth edges on objects

useful in games - alpha to coverage - you can convert the info in an alpha channel to give you the 'coverage' info associated with the alpha's boundary edges - in many cases(?) this can be done in hardware/GPU, or is offered for free (always calculated) in DX10/11(?)
    you can combine the info about an alpha channel's 'coverage' with alpha testing to recreate a smooth edge using simple math.

alpha blending (the other guy) requires strict in-order, back-to-front processing of all objects(?)

    what this thread really needs is a graphics engineer.
  • chronic
    from the DirectX MSDN resource:
    http://msdn.microsoft.com/en-us/library/ee416415(VS.85).aspx
    http://msdn.microsoft.com/en-us/library/ee415665(VS.85).aspx
    Drawing Transparent Objects with Alpha To Coverage
The number of transparent leaves and blades of grass in the sample makes sorting these objects on the CPU expensive. Alpha to coverage helps solve this problem by allowing the Instancing sample to produce convincing results without the need to sort leaves and grass back to front. Alpha to coverage must be used with multisample anti-aliasing (MSAA). MSAA is a method to get edge anti-aliasing by evaluating triangle coverage at a higher frequency on a higher-resolution z-buffer. With alpha to coverage, the MSAA mechanism can be tricked into creating pseudo order-independent transparency. Alpha to coverage generates an MSAA coverage mask for a pixel based upon the pixel shader output alpha. That result is combined by AND with the coverage mask for the triangle. This process is similar to screen-door transparency, but at the MSAA level.

    Alpha to coverage is not designed for true order independent transparency like windows, but works great for cases where alpha is being used to represent coverage, like in a mipmapped leaf texture.
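A rough Python sketch of the mask logic the MSDN text describes, for 4x MSAA. The bit layout and function names are assumptions for illustration; real hardware also dithers which mask bits get set rather than always filling from the low bit:

```python
# The pixel shader's output alpha picks how many of the 4 coverage-mask
# bits are set; that mask is ANDed with the triangle's geometric coverage
# mask, and the surviving bits decide which subsamples take the color.

def alpha_to_coverage_mask(alpha, samples=4):
    """Set the first round(alpha * samples) bits of a coverage mask."""
    n = round(alpha * samples)
    return (1 << n) - 1          # e.g. alpha 0.5 at 4x MSAA -> 0b0011

def resolve(alpha, triangle_mask, samples=4):
    """AND the alpha mask with the triangle's mask; count surviving samples."""
    mask = alpha_to_coverage_mask(alpha, samples) & triangle_mask
    return bin(mask).count("1")

# Fully covered triangle, 50% alpha: 2 of 4 subsamples get written.
print(resolve(0.5, 0b1111))  # 2
# Triangle only covers two samples; with this naive bit fill the 50%
# alpha mask misses them entirely - hence the dithering on real GPUs.
print(resolve(0.5, 0b1100))  # 0
```

Because each surviving subsample is written opaquely with normal depth testing, the MSAA resolve averages them into a soft edge without any back-to-front sorting.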