Thanks, Marks, for the clarification! For some reason I thought that 1-bit alpha (or alpha testing) could define/separate opaque and completely transparent pixels and therefore tell the renderer not to "check" pixels behind those where alpha is 1. But in the case of alpha blending, the renderer will "check" pixels behind all the pixels…
Yeah, that's almost right - 1-bit alpha also causes overdraw on every pixel, even the opaque ones. With 8-bit alpha, opaque pixels are just as expensive as partially transparent pixels. Where it gets expensive is when you have several layers of alpha cards on top of each other - because with opaque pixels, the renderer does…
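If it helps to picture the cost, here's a rough back-of-envelope sketch in Python. The `fill_cost` helper and all the numbers are made up for illustration; the point is just that stacked alpha cards (tested or blended) pay full pixel-shader cost per layer, while opaque geometry drawn front-to-back can reject hidden layers via early-Z:

```python
def fill_cost(num_layers, card_pixels, opaque_early_z=False):
    """Toy estimate of pixel-shader invocations for stacked cards."""
    if opaque_early_z:
        # Opaque geometry drawn front-to-back with early-Z: hidden layers
        # are rejected before shading, so cost is roughly one layer.
        return card_pixels
    # Alpha-tested or alpha-blended cards: the transparent texels can't be
    # rejected early, so every layer pays full fill cost.
    return num_layers * card_pixels

screen = 1920 * 1080
print(fill_cost(5, screen))                      # 5 stacked alpha cards
print(fill_cost(5, screen, opaque_early_z=True))  # same coverage, opaque
```

So five overlapping full-screen smoke cards cost roughly five times the shading work of the same coverage in opaque geometry, in this simplified model.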
In general B should be the best, although if the tris are far enough away that A would appear at about 4x4 pixels or less on screen, then A will probably be faster. For question 2, I think if you have a lot of overlapping geometry then B would be faster than C, because the renderer will check if the pixel is occluded…
1) As far as I know, B is the better option because it is only the pixels that are marked as transparent that cause the problems. In general it's easier for an engine to render a few more tris than it is to process giant transparent areas, so trimming down those areas is pretty key. Especially when you have several…
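A quick made-up comparison of the trade-off being described - all numbers are invented for illustration, but they show why trading a few extra tris for less transparent area usually wins: shaded-pixel area times overdraw layers dominates the cost, not triangle count.

```python
def shaded_pixels(covered_area_px, layers):
    # Every covered pixel is shaded once per overlapping layer.
    return covered_area_px * layers

# A simple 2-tri quad whose texture is ~60% empty, vs. an 8-tri polygon
# trimmed to hug the opaque part of the texture (hypothetical numbers).
quad_card    = {"tris": 2, "area": 256 * 256}
trimmed_card = {"tris": 8, "area": int(256 * 256 * 0.4)}

for name, card in (("quad", quad_card), ("trimmed", trimmed_card)):
    cost = shaded_pixels(card["area"], layers=5)
    print(f"{name}: {card['tris']} tris, {cost} shaded pixels over 5 layers")
```

The trimmed card spends six extra triangles (essentially free) to cut the fill cost by more than half in this toy setup.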
I'm working on the particles for a racing game at the moment, and that includes other enviro effects like a tornado and storms. The large particles aren't as bad as you'd think - it's better to have 100 large planes make up a tornado (and this is a HUGE tornado, seen from EVERY angle) than hundreds of smaller ones, simply…
Most modern engines are based on some kind of deferred shading/lighting. This has the benefit that all these engines can handle lots of geometry, lights, and post-processing effects. One of the major drawbacks is that you can't handle transparency at all, or only very clumsily. The most often used solution to this kind of…
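To make the transparency drawback concrete, here's a toy sketch (not a real renderer) of the core problem: a deferred G-buffer stores exactly one surface per pixel, so a transparent surface has nowhere to live - it would either overwrite what's behind it or get dropped.

```python
# One pixel of a toy G-buffer: a single depth + albedo slot.
gbuffer = {"depth": float("inf"), "albedo": None}

def write_surface(depth, albedo, alpha):
    # Opaque surface: a normal depth-tested G-buffer write.
    if alpha >= 1.0 and depth < gbuffer["depth"]:
        gbuffer["depth"], gbuffer["albedo"] = depth, albedo
    # Transparent surface: there is no second slot to store it in, so its
    # contribution (and the blend with the surface behind it) is lost.

write_surface(10.0, "wall", 1.0)
write_surface(5.0, "glass", 0.5)  # closer, but transparent: can't be stored
print(gbuffer["albedo"])  # still "wall" - the glass never reached the G-buffer
```

This is exactly why deferred engines have to bolt on something extra for transparent geometry rather than running it through the main G-buffer pass.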