I was having a conversation with a co-worker last week about transparencies. I found out that we both have pretty different ideas about how to handle transparencies, and I wasn't sure what was hearsay, fact, or just plain BS. We're out of the office until after the first of the year, and we'll probably get back on the subject at some point since we left the convo mid-point (oh shit, it's snowing like crazy!)
I wanted to pick everyone's brain and see what we think about transparencies. I'll start:
- Sorting a bunch of trans is hard on the CPU or GPU (not sure which).
- Sorting B/W trans is easier than sorting grayscale.
- Intersecting polys aren't a good idea, because the same poly is behind and in front at the same time. Instead, where the two polys meet, place an edge in each.
- It's better to have a few planes with a larger texture that don't intersect, instead of a bunch of tiny textures placed on polys that intersect. I think if the overall pixel density is the same between the sum of the tiny ones and the large one, it's a wash, except for sorting and possibly texture draw calls (if I'm even using that term correctly).
I think that about does it for what I think I know about transparencies... anyone have any corrections or additions?
Replies
You must be referring to alpha test vs. alpha blending, right? You can have a full grayscale alpha and have it be used in alpha test. You get smoother results that way.
- Alpha test is just checking to see if it is transparent or not? True/false, on/off kind of thing?
- Blending is checking to see how transparent it is? Then deciding how much of the transparencies behind it are seen, and if they're partially transparent, more code grinds on?
Just to be sure, I meant THIS. Left is blending, right is test? Test is faster/smoother than blending?
Alpha-tested stuff can usually get rendered in the same pass as the rest of the solid geometry, so it sorts fine and it's cheap to render. Blended alpha has to be rendered in a second pass, on top of the solid geo pass.
Then there are oddities with the sorting order of the blended alpha polygons within one mesh: if one sits behind another but gets rendered afterwards, it clips through. There are tricks you can pull that ensure they get rendered in a certain order (but this means it only looks correct from one direction). If each is its own object, then it'll render fine, as the engine can sort them based on distance to mesh origin.
Intersecting them... I dunno, I've not done that with non-additive or non-alphatest stuff before.
Sorting is done on the CPU, the actual rasterization on the GPU. Most games don't sort per-polygon; they sort objects, and within an object the polygons render how they render (usually in vertex order and sub-object/element order, I would assume, and it will probably vary according to the rendering engine, as this is CPU stuff). There are, of course, ways of enforcing specific sub-object rendering orders depending on the engine (the setup usually happens in a DCC app or a tool), but this is not a sure thing, since sorting can only ever be correct from one view.
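To make that concrete, here's a minimal C++ sketch of the usual CPU-side approach: sort whole transparent objects back-to-front by distance to the camera, using the mesh origin as the key, as described above. All the type and function names here are my own invention, not any particular engine's API.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal types; not any particular engine's API.
struct Vec3 { float x, y, z; };

static float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct TransparentObject {
    Vec3 origin;  // mesh origin used as the sort key, as mentioned above
    // ... mesh data, material, etc.
};

// Sort back-to-front so the farthest objects blend first. This is correct
// per object, but says nothing about polygon order inside a single mesh.
void sortTransparents(std::vector<TransparentObject>& objs, const Vec3& camera) {
    std::sort(objs.begin(), objs.end(),
              [&](const TransparentObject& a, const TransparentObject& b) {
                  return distSq(a.origin, camera) > distSq(b.origin, camera);
              });
}
```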
Well, yes and no. If you are doing this all on the GPU and using an alpha test/rejection per pass, I am pretty sure the cost is about the same (get a second opinion on that). The problem is that alpha blending probably won't look right because the scene isn't sorted correctly, whereas alpha testing will always look right. I could be off on this, but yes, your assumption is correct in many ways because it is so general: it's easier to deal with from a content perspective, easy in that you won't get bad sorting, and it is possibly easier on the GPU. I haven't done any shader stuff since my last job, and I don't work next to a programmer anymore, so I am a bit rusty.
If they are transparent, yes. One must render before the other, so in one pass it cannot possibly have correct sorting. For general rendering, it doesn't matter.
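For the earlier true/false vs. how-much question, a tiny C++ sketch of the per-pixel logic might help. This is written as plain functions rather than shader code, and the 0.5 threshold is just a common default, not a fixed rule:

```cpp
// Alpha test: a binary keep/discard decision per pixel. Surviving pixels are
// written (and depth-tested) just like opaque geometry, so no sorting needed.
bool alphaTestPasses(float srcAlpha, float threshold = 0.5f) {
    return srcAlpha > threshold;  // true = write the pixel, false = discard it
}

struct Color { float r, g, b; };

// Alpha blend: the classic "over" operator. How much of what's already been
// drawn (dst) shows through depends on srcAlpha, which is exactly why the
// order things are drawn in suddenly matters.
Color alphaBlend(Color src, float srcAlpha, Color dst) {
    return { src.r * srcAlpha + dst.r * (1.0f - srcAlpha),
             src.g * srcAlpha + dst.g * (1.0f - srcAlpha),
             src.b * srcAlpha + dst.b * (1.0f - srcAlpha) };
}
```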
Hmmm, not quite. 4x512 is not the same as 1x1024; you generally want to pack textures together. On the other hand, smaller textures can 'fit' in more places in texture memory, and simpler hardware can be optimized to handle certain sizes (not talking about PC here). I think it should be the same work in the pixel shader given the same texel (NOT pixel) density, but you're going to have to do more pass setup and draw calls with more textures, so it's generally better to use fewer.
Alphatest is faster/cheaper than alphablend, and you don't have any issues with sorting between different elements intersecting/overlapping in the same mesh.
alphatest = the 0/1 decision is fast and can be treated the same as regular geometry. It's also compatible with deferred shading, if the engine does that sort of stuff. I.e., no worries.
alphablend = the "sorting" issue as mentioned. Basically Rob summed up all the sorting options; whether an engine really allows these sorting modes or not depends.
When alphablending, frequently both alphatest and alphablend are enabled, so that "wrong" sorting is only visible on those fine borders and not over the whole plane (as most of the plane will be killed by alphatest, i.e. not blended). Alphatest is important so that fewer pixels are written to the depth buffer. Otherwise our full plane (even all those black spots) would prevent drawing behind it, due to depth testing.
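In legacy OpenGL terms, that combined setup looks roughly like the sketch below. The GL calls are real fixed-function state; the 0.1 threshold is my guess at a typical value, and a current context is assumed:

```cpp
#include <GL/gl.h>

// Blend the soft edges, but alpha-test away nearly-transparent pixels so
// they never write to the depth buffer and never show bad sorting.
// Assumes a GL context is already current; 0.1 is a guessed threshold.
void setupBlendedFoliageState() {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.1f);  // kill pixels at/below 10% alpha
}
```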
We may all remember seeing a "sky background / non-self background" as a thin border around trees in some games (GTA3), when there actually should have been buildings behind the tree...
http://scalegamer.com/images/eeepc/games/gta3/Gta3%202008-01-20%2002-23-21-81.jpg
(See the tree in the middle: one right branch doesn't blend against itself, but against the background. If alphatest wasn't on, the full rectangle might have killed the branch behind it, looking ugly.)
alpha to coverage = There is also a new kid on the block when it comes to multisampling. Basically, with antialiasing we have multiple samples per pixel, hence we can soften the alpha test. I.e. the hardware makes it less harsh, and we still don't need to sort.
I have no console experience, so I don't know if that is efficiently supported there.
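For what it's worth, on PC-class OpenGL enabling it is a one-liner (core since GL 1.3; depending on your headers you may also need <GL/glext.h> for the enum), assuming you already have a multisampled framebuffer:

```cpp
#include <GL/gl.h>  // you may also need <GL/glext.h> for the enum

// Converts each fragment's alpha into a coverage mask over the MSAA samples,
// softening the hard alpha-test edge with no sorting required.
// Assumes a current GL context with a multisampled framebuffer.
void enableAlphaToCoverage() {
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
}
```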
Alphablended stuff is still ugly for most rendering pipelines, especially the new deferred renderers, where we only have one piece of information (one depth) per pixel. Mostly, all alphablended stuff is rendered after the rest and with less correct shading. So prefer alpha test, always.
As for the texture question, I'm not sure if I understand it, but Rob's reply sounds good. If you think of vegetation, you might want to combine many "grass/bush" textures into one atlas texture, to be able to draw all the geometry at once (if such codepaths exist in your engine).
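As a tiny illustration of the atlas idea (a hypothetical helper, not from any engine): each sub-texture just becomes a scale and offset applied to the mesh's original UVs. One caveat: UVs that tile outside 0..1 can't be atlased this way.

```cpp
struct UV { float u, v; };

// The rectangle a sub-texture occupies inside the atlas, in normalized [0,1] coords.
struct AtlasRect { float u0, v0, width, height; };

// Remap a mesh's original 0..1 UVs into its slot in the atlas, so many
// "grass/bush" meshes can share one texture and be drawn in one call.
UV remapToAtlas(UV uv, const AtlasRect& r) {
    return { r.u0 + uv.u * r.width, r.v0 + uv.v * r.height };
}
```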
http://ps3media.ign.com/ps3/image/article/903/903022/final-fantasy-xiii-20080826034718288.jpg
If you look closely you can see some sort of dithering going on, which I believe is used to sort the whole thing, and the soft alpha is maybe unsorted to save performance?
My philosophy is: stay away from transparency as much as you can.
- I'd always choose to add an extra N amount of triangles to model each hair or tooth or burned wall carpet or curtain before I use alpha.
- I'd always choose to have the hair or other usual transparency suspects on the same texture/material as the rest of the opaque geometry, for batchability's sake.
- And I would never make one of those textures where everything in the alpha channel is white except for those 3 black pixels, both for memory efficiency and because the entire model would incur the extra pixel draw cost from the transparent areas... whenever I see a texture like this it makes me cringe.
- If you absolutely have to have alpha, let's say on leaves on trees, something that just can't be modeled for real, use alpha test, which sorts, draws, and costs just like opaque geometry (not sure if on all hardware), plus some good old-fashioned AA.
- For decals it can be OK to use alpha blending, but make sure your engine draws all decals in a specific render pass before all the actual scene transparency like fire, smoke, etc. happens (see the sketch after this list). Same goes for stuff like transparent background trees that only blend down onto the opaque skybox.
- The only thing that continuously breaks my balls and has no perfect solution is window glass... multiply and additive transparency are just not cutting it, and they always blend down onto large areas of the screen causing massive overdraw, and make other windows behind them disappear or have muzzle flashes or smoke draw over them... sigh... we need a world without windows, I say!
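A rough sketch of the frame ordering described in the decals point above; every function here is an invented stub, and the only content is the order of the passes:

```cpp
// Every function here is an invented stub; the point is just the ordering.
void drawOpaqueGeometry()    { /* solid geo, including alpha-tested stuff */ }
void drawDecals()            { /* blended decals, in their own early pass */ }
void drawBackgroundBlends()  { /* e.g. far trees blending onto the opaque skybox */ }
void drawSceneTransparency() { /* fire, smoke, glass: sorted back-to-front */ }

void renderFrame() {
    drawOpaqueGeometry();      // 1. everything that writes depth
    drawDecals();              // 2. decals, before any scene transparency
    drawBackgroundBlends();    // 3. stuff that only blends onto the skybox
    drawSceneTransparency();   // 4. the rest of the blended alpha, last
}
```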
You can get away with some modelling, but try creating eyelashes on a face, or try any sort of feathered hairstyle (or realistic hairstyle, for that matter): they will stand out like a sore thumb if you go for any sort of look that's not nasty retro-nineties CG. Plus, at least on characters, you end up creating deformation issues, and in the end performance issues again if you use loads of skinned vertices.
That being said, 1-bit alpha can be tricky to get to look decent as well. I still get the shivers from Meryl's MGS4 haircut; it has burned my retina badly.
While in another pass the soft stuff is blended on top with simplified shading (as mentioned before when hinting at deferred shading). Hence you see that rapid change of shade on the left side.
In the article by Blizzard on StarCraft II's deferred renderer, they mentioned that they simply use the background's normal for shading; that could explain those changes on the left, with the highly transparent parts being lit similarly to the background and the solid ones having other shading.
Personal speculation here, but it's obvious the shading of the "solid" hair is different from the more transparent hair.
Order Independent Translucency
Alpha to coverage (what CrazyButcher mentioned)
SCIENCE!!!
XD
I.e., it's cool for CAD/medical, where you have just one type of object to show, but with the high diversity of games it's even worse to put in some effect that needs such a specific setup...
The technique Eric linked to (a different setup that makes use of multisampling to store multiple layers per pixel) is similarly cool, but also too "unique" in terms of setup...
However, my hope is that the next-gen hardware in the 2011/2012 consoles will be "open" enough to allow developers to do more custom rendering, which would allow new methods of handling transparency. If that is what you meant by "near future".
Basically you make 1-bit alpha, then as a post-process it blurs it; you can control the blur amount for softer alpha.
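A minimal sketch of that idea on the CPU (my own illustration; a real implementation would do this in a shader): box-blur a hard 0/1 alpha mask, with the radius controlling how soft the edge becomes.

```cpp
#include <vector>

// Box-blur a hard 0/1 alpha mask; 'radius' controls how soft the edge gets.
// A real implementation would run in a shader; this is just the idea.
std::vector<float> blurAlphaMask(const std::vector<float>& mask,
                                 int width, int height, int radius) {
    std::vector<float> out(mask.size(), 0.0f);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= width || sy >= height)
                        continue;
                    sum += mask[sy * width + sx];
                    ++count;
                }
            }
            out[y * width + x] = sum / count;  // soft alpha in [0, 1]
        }
    }
    return out;
}
```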
But maybe that's not correct either; sorting is pretty parallelizable. Is anybody using the GPU to sort polys for games? These guys seem to have interesting results: http://gamma.cs.unc.edu/SORT/#poly
Read more on Wikipedia about the PowerVR.
too bad hehe