
Transparencies, sorting and intersecting

I was having a conversation with a co-worker last week about transparencies. I found out that we both have pretty different ideas about how to handle them, and I wasn't sure what was hearsay, fact, or just plain BS. We're out of the office until after the first of the year and we'll probably get back on the subject at some point, since we left the conversation mid-point (oh shit, it's snowing like crazy!).

I wanted to pick everyone's brain and see what we think about transparencies. I'll start:

- Sorting a bunch of trans is hard on the CPU or GPU (not sure which).
- Sorting B/W trans is easier than sorting grayscale.
- Intersecting polys aren't a good idea, because the same poly is both behind and in front at the same time. Instead, where the two polys meet, place an edge in each.
- It's better to have a few planes with a larger texture that don't intersect than a bunch of tiny textures placed on polys that intersect. I think if the overall pixel density is the same between the sum of the tiny ones and the large one, it's a wash, except for sorting, and possibly texture draw calls (if I'm even using that term correctly).

I think that about does it for what I think I know about transparencies... anyone have any corrections or additions?

Replies

  • Xenobond
    -=Sorting B/W trans is easier than sorting grayscale.=-
    You must be referring to alpha test vs. alpha blending, right? You can have a full grayscale alpha and still use it with alpha test. You get smoother results that way.
  • Mark Dygert
    I think we're talking about the same thing.
    - Alpha test is just checking whether a pixel is transparent or not? A true/false, on/off kind of thing?
    - Blending is checking how transparent it is? Then deciding how much of the transparencies behind it show through, and if they're partially transparent, more code grinds on?

    Just to be sure, I meant THIS. Left is blending, right is test? Test is faster/smoother than blending?
  • Farfarer
    As far as I understand it...

    Alpha-tested stuff can usually get rendered in the same pass as the rest of the solid geometry, so it sorts fine and it's cheap to render. Blended alpha has to be rendered in a second pass, on top of the solid geo pass.

    Then there are oddities with the sorting order of the blended alpha polygons within one mesh: if one sits behind another but gets rendered afterwards, it clips through. There are tricks you can pull to ensure they get rendered in a certain order (but this means it only looks correct from one direction). If each is its own object, it'll render fine, as they can be sorted by distance to mesh origin.

    Intersecting them... I dunno, I've not done that with non-additive or non-alphatest stuff before.
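
    Roughly, that per-object sort amounts to something like this on the CPU (a sketch in C++; the Object/Vec3 types and the function name are made up, not any particular engine's API):

        #include <algorithm>
        #include <vector>

        struct Vec3 { float x, y, z; };

        struct Object {
            Vec3 origin;        // mesh origin used for the distance test
            bool translucent;   // blended objects need sorting, opaque ones don't
        };

        static float DistSq(const Vec3& a, const Vec3& b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return dx * dx + dy * dy + dz * dz;
        }

        // Draw opaque geometry first (the depth buffer handles order), then draw
        // the translucent objects farthest-to-nearest, sorted by origin distance.
        void SortBackToFront(std::vector<Object*>& translucent, const Vec3& cameraPos) {
            std::sort(translucent.begin(), translucent.end(),
                      [&](const Object* a, const Object* b) {
                          return DistSq(a->origin, cameraPos) > DistSq(b->origin, cameraPos);
                      });
        }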
  • Rob Galanakis
    One disclaimer I will give, for my post and ALL others (except maybe if CrazyButcher or Ryan Clark post), is that we are not graphics programmers. That does NOT mean graphics programmers are always right (they're not), but it does mean we can very possibly be wrong, since we cannot work or discover at the nuts-and-bolts level that true programmers can. I'd also caution the non-technical, artists, and those without shader programming experience (making an Unreal shader is not shader programming) who are going to answer: consider how you learned this information, how much of it is hearsay and from whom, how much you have actually tested or experienced, and on how many platforms. Even amongst experts you'll find differences in experience and opinion; amongst us artists it is going to be much worse. You may have better luck asking at programming forums like gamedev.net. You're going to get many answers that really don't mean much, especially if you want to feel like you have a solid understanding when you return from break. My post doesn't mean much either, even if I'm sure it is founded on more than most; I'd encourage you to look elsewhere.
    - Sorting a bunch of trans is hard on the CPU or GPU (not sure which).
    Sorting is done on the CPU; the actual rasterization happens on the GPU. Most games don't sort per-polygon: they sort objects, and within an object things render how they render (usually in vertex order and sub-object/element order, I would assume, and it will probably vary according to rendering engine, as this is CPU-side stuff). There are, of course, ways of enforcing a specific sub-object rendering order depending on the engine (the setup usually happens in the DCC app or a tool), but this is not a sure thing, since a fixed sort can only ever be correct from one view.
    - Sorting B/W trans is easier than sorting grayscale.
    Well, yes and no. If you are doing this all on the GPU and using an alpha test/rejection per pass, I am pretty sure the cost is about the same (get a second opinion on that). The problem is that alpha blending probably won't look right because the scene isn't sorted correctly, whereas alpha testing will always look right. I could be off on this, but yes, your assumption is correct in many ways because it is so general: easier to deal with from a content perspective, easier in that you won't get bad sorting, and possibly easier on the GPU. I haven't done any shader stuff since my last job and don't work next to a programmer anymore, so I am a bit rusty.
    - Intersecting polys aren't a good idea, because the same poly is both behind and in front at the same time. Instead, where the two polys meet, place an edge in each.
    If they are transparent, yes. One must render before the other, so within one pass the sorting cannot possibly be correct. For opaque rendering, it doesn't matter.
    - It's better to have a few planes with a larger texture that don't intersect than a bunch of tiny textures placed on polys that intersect. I think if the overall pixel density is the same between the sum of the tiny ones and the large one, it's a wash.
    Hmmm, not quite. 4x512 is not the same as 1x1024; you generally want to pack textures together. On the other hand, smaller textures can 'fit' in more places in texture memory, and simpler hardware can be optimized to handle certain sizes (not talking about PC here). I think it should be the same work in the pixel shader given the same texel (NOT pixel) density, but you're going to have to do more pass setup and draw calls with more textures, so it's generally better to use fewer.
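
    To illustrate the draw call point, a hedged sketch (not any particular engine's API; the Batch struct and function names are made up, the GL calls are the standard ones): each extra texture tends to mean another bind and another draw, while a packed atlas can go out in one batch.

        #include <GL/gl.h>
        #include <vector>

        struct Batch {
            GLuint  texture;      // one small texture per batch
            GLsizei vertexCount;
        };

        // Many small textures: one texture bind and one draw call per batch.
        void DrawSeparate(const std::vector<Batch>& batches) {
            for (const Batch& b : batches) {
                glBindTexture(GL_TEXTURE_2D, b.texture);
                // ... point GL at this batch's vertex data ...
                glDrawArrays(GL_TRIANGLES, 0, b.vertexCount);
            }
        }

        // One packed atlas: if the geometry shares one vertex buffer, everything
        // goes out with a single bind and a single draw call.
        void DrawAtlased(GLuint atlasTexture, GLsizei vertexCount) {
            glBindTexture(GL_TEXTURE_2D, atlasTexture);
            // ... point GL at the combined vertex data ...
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        }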
  • Xenobond
    The left and right alphas there can be used with both alpha test and alpha blend. What you're showing is how the alpha is stored; alpha test and alpha blend are just the methods of applying the transparency. With the grayscale alpha on the left, you can apply alpha test to it, adjust the cutoff value, and achieve better results than with the b/w-only alpha.

    Alpha test is faster/cheaper than alpha blend, and you don't get sorting issues between different elements intersecting/overlapping in the same mesh.
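
    In old fixed-function GL terms, the two methods boil down to something like this (a minimal sketch; the wrapper function names are made up, the gl* calls are the real ones):

        #include <GL/gl.h>

        // Alpha test: each fragment is either kept or rejected outright.
        // A grayscale alpha still works here; you just pick where the cutoff sits.
        void UseAlphaTest(float cutoff) {
            glDisable(GL_BLEND);
            glEnable(GL_ALPHA_TEST);
            glAlphaFunc(GL_GREATER, cutoff);   // e.g. 0.5f
        }

        // Alpha blend: the fragment is mixed with what is already in the framebuffer,
        // which is why draw order starts to matter.
        void UseAlphaBlend() {
            glDisable(GL_ALPHA_TEST);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);             // blended geometry usually doesn't write depth
        }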
  • CrazyButcher
    sounds pretty good what Rob and Xenobond said.

    alphatest = the 0/1 decision is fast and can be treated the same as regular geometry. It's also compatible with deferred shading, if the engine does that sort of stuff. I.e. no worries.

    alphablend = the "sorting" issue as mentioned. Basically Rob summed up all the sorting options; whether an engine really exposes those sorting modes or not depends.

    When alpha blending, frequently both alpha test and alpha blend are enabled, so that "wrong" sorting is only visible on those fine borders, and not over the whole plane (as most of the plane will be killed by the alpha test, i.e. not blended). The alpha test is important so that fewer pixels are written to the depth buffer; otherwise our full plane (even all those black, fully transparent spots) would prevent drawing behind it, due to depth testing.

    We may all remember seeing a "sky background / non-self background" as a thin border around trees in some games (GTA3), when there actually should have been buildings behind the tree...
    http://scalegamer.com/images/eeepc/games/gta3/Gta3%202008-01-20%2002-23-21-81.jpg
    (See the tree in the middle: one branch on the right doesn't blend against the tree itself but against the background. If alpha test weren't on, the full rectangle might have killed the branch behind it, which would look ugly.)
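
    A rough sketch of that combined state setup in fixed-function GL (made-up function name, standard gl* calls): a low alpha-test cutoff throws away the nearly invisible texels so they neither blend nor write depth, and blending only has to be right on the thin surviving border.

        #include <GL/gl.h>

        void UseTestPlusBlend() {
            glEnable(GL_ALPHA_TEST);
            glAlphaFunc(GL_GREATER, 0.1f);  // low cutoff: reject only nearly-invisible texels
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_TRUE);           // the texels that survive still write depth
        }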

    alpha to coverage = there is also a new kid on the block when it comes to multisampling. Basically, with antialiasing we have multiple samples per pixel, hence we can soften the alpha test. I.e. the hardware makes it less harsh and we still don't need to sort.

    (attached image: obliv%20eatm.png)

    I have no console experience, so I don't know whether that is efficiently supported there.
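
    For what it's worth, enabling it looks about like this in GL (a sketch assuming a header/loader new enough to expose the multisample tokens; the wrapper function name is made up):

        #include <GL/gl.h>

        // With a multisampled framebuffer, the fragment's alpha is turned into a
        // per-sample coverage mask, so MSAA softens the alpha-tested edge without
        // any sorting or blending.
        void UseAlphaToCoverage() {
            glDisable(GL_BLEND);
            glDisable(GL_ALPHA_TEST);
            glEnable(GL_MULTISAMPLE);
            glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
        }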

    alphablended stuff is still ugly for most rendering pipelines, especially the new deferred renderers, where we only have one set of information (one depth) per pixel. Mostly, all alphablended stuff is rendered after the rest, with less correct shading. So prefer alpha test always.

    as for the texture question, not sure if I understand it, but Rob's reply sounds good. If you think of vegetation, you might want to pack many "grass/bush" textures into one atlas texture, to be able to draw all the geometry at once (if such codepaths exist in your engine).
  • Neox
    when it comes to alpha sorting, i've recently seen a lot of hard alpha sorting mixed with soft alpha blending. how is that achieved?

    http://ps3media.ign.com/ps3/image/article/903/903022/final-fantasy-xiii-20080826034718288.jpg

    if you look close you can see some sort of dithering going on, which i believe is used to sort the whole thing, and the soft alpha is maybe left unsorted to save performance?
  • thomasp
    it's a technique that, from what i've been told, is very cheap on the PS3 (no significant performance hit). the 1-bit alpha gets offset and blended together. that creates a fuzzy effect up close but looks just fine and very smooth from a distance.
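
    A guess at how that kind of dithered 1-bit alpha can be set up, written as plain C++ for clarity (the 4x4 Bayer thresholds are a standard ordered-dither pattern; the function name is made up): the cutoff varies per screen pixel, so a grayscale alpha turns into a fine on/off stipple that reads as a soft edge from a distance.

        // The alpha-test cutoff follows a per-pixel dither pattern, so partial
        // transparency becomes a stipple that averages out to the right coverage.
        static const float kBayer4[4][4] = {
            {  0.0f/16.0f,  8.0f/16.0f,  2.0f/16.0f, 10.0f/16.0f },
            { 12.0f/16.0f,  4.0f/16.0f, 14.0f/16.0f,  6.0f/16.0f },
            {  3.0f/16.0f, 11.0f/16.0f,  1.0f/16.0f,  9.0f/16.0f },
            { 15.0f/16.0f,  7.0f/16.0f, 13.0f/16.0f,  5.0f/16.0f },
        };

        // Returns true if the fragment at screen position (x, y) should be kept.
        bool DitheredAlphaTest(int x, int y, float alpha) {
            return alpha > kBayer4[y & 3][x & 3];
        }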
  • warby
    that final fantasy shot is very interesting, i have no idea what's going on there.

    my philosophy is stay away from transparency as far as you can.

    - i'd always choose to add an extra N triangles to model each hair or tooth or burned wall carpet or curtain before i use alpha.

    - i'd always choose to have the hair or other usual transparency suspects on the same texture/material as the rest of the opaque geometry, for batchability's sake.
    and i would never make one of those textures where everything in the alpha channel is white except for those 3 black pixels, both for memory efficiency and because the entire model would gain the extra pixel draw cost from the transparent areas... whenever i see a texture like this it makes me cringe.

    - if you absolutely have to have alpha, say on the leaves of a tree, something that just can't be modeled for real, use alpha test, which sorts, draws and costs just like opaque geometry (not sure if on all hardware), plus some good old fashioned AA.

    - for decals it can be ok to use alpha blending, but make sure your engine draws all decals in a specific render pass before all the actual scene transparency like fire, smoke, etc. happens. the same goes for stuff like transparent background trees that only blend down onto the opaque skybox.

    - the only thing that continuously breaks my balls and has no perfect solution is window glass... multiply and additive transparency are just not cutting it, and they always blend down onto large areas of the screen, causing massive overdraw, and make other windows behind them disappear or have muzzle flashes or smoke draw over them... sigh... we need a world without windows, i say!
  • thomasp
    warby wrote: »
    - i'd always choose to add an extra N triangles to model each hair or tooth or burned wall carpet or curtain before i use alpha.

    you can get away with some modelling, but try creating eyelashes on a face, or any sort of feathered (or realistic, for that matter) hairstyle: they will stand out like a sore thumb if you go for any look that isn't nasty retro-nineties CG. ;) plus, at least on characters, you end up creating deformation issues, and in the end performance issues again if you use loads of skinned vertices.

    that being said, 1-bit alpha can be tricky to get looking decent as well. i still get the shivers from meryl's MGS4 haircut; it has burned my retina badly.
  • CrazyButcher
    looking at the image neox posted, I would say the alphatested surfaces (i.e. most of the hair) have their own, more complex shading, whilst in another pass the soft stuff is blended on top with simplified shading (as mentioned before when hinting at deferred shading). Hence you see that rapid change of shade on the left side.

    In the article by Blizzard on StarCraft's deferred renderer, they mention that they simply use the background's normal for shading; that could explain those changes on the left, with the highly transparent parts being lit similarly to the background, and the solid ones having other shading.

    personal speculation here... but it's obvious the shading of the "solid" hair is different from the more transparent hair.
  • Eric Chadwick
    Humus has a couple of interesting alpha-sorting demos on his site. Not sure how performance-friendly they are; curious to try them out sometime.

    Order Independent Translucency

    Alpha to coverage
    (what CrazyButcher mentioned)
  • Rick Stirling
    An interesting side effect of using an 8-bit alpha image for a 1-bit alpha test is that you can choose the threshold for what is on and what is off. With a bit of fiddling you can actually make 1-bit alpha hair/grass appear to grow.
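
    As a sketch of the trick (fixed-function GL; the function name is hypothetical): author the alpha as a gradient along the blades or strands, then animate the cutoff over time.

        #include <GL/gl.h>

        // Alpha is authored as a gradient (e.g. bright at the roots, darker toward
        // the tips). Sweeping the cutoff down reveals more texels, so the grass or
        // hair appears to grow; sweeping it up makes it recede.
        void SetGrowth(float cutoff) {
            glEnable(GL_ALPHA_TEST);
            glAlphaFunc(GL_GREATER, cutoff);   // e.g. drive from 1.0 down to 0.0 over time
        }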
  • Xenobond
    Even more dramatic, you can have it go the other way.... and the hair recedes!
    SCIENCE!!!
    XD
  • Farfarer
    Rick Stirling wrote: »
    An interesting side effect of using an 8-bit alpha image for a 1-bit alpha test is that you can choose the threshold for what is on and what is off. With a bit of fiddling you can actually make 1-bit alpha hair/grass appear to grow.
    Yeah, I'm pretty sure FarCry 2 uses something like this to make leaves burn off the trees.
  • Ryan Clark
    We may be able to quit worrying about sorting in the near future... check out this demo on nvidia's site: http://developer.download.nvidia.com/SDK/10/opengl/samples.html#dual_depth_peeling
  • CrazyButcher
    order independent transparency techniques have been around for a long time, mostly because they are important for vis stuff outside games. The problem is that the techniques are not the lightest on the hardware and often require a somewhat fancier setup.
    I.e. it's cool for CAD/medical, where you have just one type of object to show, but in the high diversity of games it's even worse to put in some effect that needs such a specific setup...

    the technique Eric linked to (a different setup that makes use of multisampling to store multiple layers per pixel) is similarly cool, but also too "unique" in terms of setup...

    however, my hope is that the next-gen console hardware in 2011/2012 will be "open" enough to allow developers to do more custom rendering, which would allow new methods of handling transparency. If that is what you meant by near future :)
  • arshlevon
    crysis does it as a post process; we are looking into this. they go into detail about it in GPU Gems 3.

    basically you make a 1-bit alpha, then as a post process it gets blurred; you can control the blur amount for softer alpha.
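
    The blur itself is nothing exotic; a hedged CPU-side sketch of the idea (the real technique runs on the GPU as a post process; the function here is made up):

        #include <algorithm>
        #include <vector>

        // Box-blur a hard 0/1 coverage mask; the blurred values give soft edges
        // whose width is controlled by the radius.
        std::vector<float> BlurMask(const std::vector<float>& mask, int w, int h, int radius) {
            std::vector<float> out(mask.size(), 0.0f);
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x) {
                    float sum = 0.0f;
                    int count = 0;
                    for (int dy = -radius; dy <= radius; ++dy) {
                        for (int dx = -radius; dx <= radius; ++dx) {
                            int sx = std::min(std::max(x + dx, 0), w - 1);
                            int sy = std::min(std::max(y + dy, 0), h - 1);
                            sum += mask[sy * w + sx];
                            ++count;
                        }
                    }
                    out[y * w + x] = sum / static_cast<float>(count); // soft alpha in 0..1
                }
            }
            return out;
        }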
  • Ryan Clark
    Yeah, "near future" was a bad choice of words... I don't know enough to make predictions like that. I was just thinking we're bound to abandon sorting as polycounts continue to rise.

    But maybe that's not correct either; sorting is pretty parallelizable. Is anybody using the GPU to sort polys for games? These guys seem to have interesting results: http://gamma.cs.unc.edu/SORT/#poly
  • Eric Chadwick
    Didn't Dreamcast use depth peeling?
  • CrazyButcher
    the PowerVR chips (Dreamcast, now iPhone) use tile-based rendering, which means they collect all the triangles for a tile and then shade at the end. Which also means that in theory they should be able to handle transparency correctly (because they have all fragments at hand); whether they do or not, I don't know...

    Read more on Wikipedia about PowerVR:
    "It also allows for correct rendering of partially transparent polygons independent of the order in which they are processed by the polygon producing application. (This capability was only implemented in Series 1 and 2. It has been removed since for lack of API support and cost reasons.)"

    too bad hehe