Yeah, a side effect (or deliberate design!) of a lot of SSAO systems is that "flat" areas stay grey, occluded areas go to black, and jutting edges get lighter:
It doesn't have to be done like this, but I think it's a useful thing, and as Peris says it's definitely one of the things that can make even an average model look a lot more detailed. There's a lot of stuff in Crysis using simple tiling textures that would look terrible in other engines - which is why, when you turn down the Shader detail, it's the most noticeable hit to overall scene quality.
You can drop the texture detail and it's not as noticeable that stuff is blurry when you have all the SSAO working, plus the colour correction and post-processing - I think that's where all the magic is.
Damn, this makes me really want to get the expansion... the art as a whole was incredible, and as they promised, it just keeps scaling upwards! The idea of a Jurassic Park-style game on this engine would be incredible!
So SSAO is based on the geometry's relation to the camera and not its position in world space? Is that it? And thus, when we have a window overlooking, say, nothing, the inside edges of that window would get this SSAO magic while not being entirely correct? Fuck, I'm tired, so I hope this makes sense.
EDIT: My above example assumes there's more geometry behind that window for the SSAO to cast on to.
AB - yeah, it's screen-space ambient occlusion, and it's a 'trick': the objects aren't being shaded by their relationship in world space, it's all based on the depth buffer. Which is why you will see little halos, or even objects shading things in the distance (although I think there is some stuff that can be done to alleviate that).
About floating objects casting AO shadows: if I understand correctly, you could get around this thanks to the Z-buffer.
Basically you render the AO shadows using the screen normals, so there is no notion of which surface is in front of which. However, the Z-buffer can tell the depth relationship between objects.
I would imagine that once you 'tag' the screen surface of each object being drawn (for example, the floating object gets tag #1 and the background object gets tag #2), you can easily extract the average depth value of each surface and compare them. The bigger the difference in Z, the less AO you create where the two objects meet? You could even prevent the AO from the background bleeding over the object in the foreground.
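A simpler variant of that Z-comparison is what many SSAO implementations do per sample rather than per object: a "range check" that attenuates each occlusion sample by the depth gap, so a floating foreground object stops shadowing the distant background. A toy sketch - the function name and threshold are made up for illustration, not taken from any engine:

```python
def range_check(center_depth, sample_depth, max_depth_delta=0.5):
    """Scale an occlusion sample by how close the occluder is in depth.

    A sample only slightly in front of the shaded point contributes
    full occlusion; one far in front (e.g. a floating object much
    closer to the camera) contributes none, which suppresses the
    haloing discussed above. `max_depth_delta` is an arbitrary tuning
    value in linear depth units.
    """
    delta = center_depth - sample_depth  # > 0 means the sample is in front
    if delta <= 0.0:
        return 0.0          # sample is behind: it can't occlude
    if delta >= max_depth_delta:
        return 0.0          # occluder too far in front: ignore it
    # fade the occlusion out linearly as the depth gap grows
    return 1.0 - delta / max_depth_delta
```

So instead of comparing average depths of tagged surfaces, every sample carries its own falloff, which gets the same "bigger Z difference, less AO" behaviour for free.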
The "highlights" are really a side-effect of how SSAO works. Yes, one could take care of it and remove it, but as mentioned, it does add more contrast to the shading...
The depth-unsharp-masking technique (linked further down) also illustrates how the "highlights" and darkening are done in principle.
Now the difference between SSAO and that one is that the depth isn't blurred; instead, for each depth pixel you look at the neighboring depth pixels and thereby find out how much occlusion is going on. Of course this is a simplification, as you only have an object's front depth, and not its back. Also, as the effect works in screen space, you get "wrong" results at the screen edges.
That "looking at neighboring pixels" is similar to placing a sphere on the depth point and shooting random rays of increasing distance around it. As this is very costly, only a few rays are shot, and SSAO is performed at a smaller resolution, then upsampled and blurred...
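That "look at neighboring depth pixels" loop can be sketched as a toy in Python rather than a real shader. All the names and the sample counts here are mine; the reduced-resolution pass and the blur are left out:

```python
import random

def ssao(depth, width, height, radius=2, num_samples=8, seed=0):
    """Toy screen-space AO over a row-major linear depth buffer.

    Per pixel, probe a few random neighbors within `radius` pixels,
    count how many are closer to the camera (potential occluders),
    and output 1 - occlusion. Real implementations shoot their few
    "rays" at a smaller resolution and then upsample and blur.
    """
    # all integer pixel offsets within the radius, excluding the pixel itself
    offsets = [(dx, dy)
               for dx in range(-radius, radius + 1)
               for dy in range(-radius, radius + 1)
               if (dx, dy) != (0, 0)]
    rng = random.Random(seed)
    ao = [1.0] * (width * height)
    for y in range(height):
        for x in range(width):
            d = depth[y * width + x]
            occluded = 0
            for dx, dy in rng.sample(offsets, num_samples):  # a few random "rays"
                sx = min(max(x + dx, 0), width - 1)   # clamping at the screen
                sy = min(max(y + dy, 0), height - 1)  # edges causes the "wrong" results there
                if depth[sy * width + sx] < d - 1e-3:  # neighbor is in front
                    occluded += 1
            ao[y * width + x] = 1.0 - occluded / num_samples
    return ao
```

A pixel whose sampled neighbors are all closer to the camera (a crevice) goes dark, while a flat region stays untouched - which is exactly the grey/black/light behaviour described at the top of the thread.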
You may remember my pdf (what's cool in graphics). There are a few shots in there that illustrate the effect (pages 51-53). Nvidia has a demo of a variant that uses the normal as well (which you normally have around in deferred shading); then you can sample around a hemisphere.
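The normal-based variant boils down to one step: keep any random direction that already points away from the surface, and mirror the rest, so every sample lands in the hemisphere around the normal and no rays are wasted below the surface. A hypothetical helper, not Nvidia's actual code:

```python
def orient_to_hemisphere(sample_dir, normal):
    """Flip `sample_dir` into the hemisphere around `normal` if needed.

    With the surface normal available (e.g. from a deferred G-buffer),
    sphere samples that would poke into the surface are mirrored
    instead of discarded, so all samples can contribute occlusion.
    Both arguments are 3-component direction vectors.
    """
    d = sum(a * b for a, b in zip(sample_dir, normal))  # dot product
    if d < 0.0:  # pointing into the surface: mirror it
        return tuple(-c for c in sample_dir)
    return tuple(sample_dir)
```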
Replies
Have any other games even come close?
ss? sub surface?
EDIT: screen surface AO
[ame]http://www.youtube.com/watch?v=VBnkJQWe0JQ[/ame]
no.
You said that if other games added all the stuff that Crysis has, their performance would be as bad as Crysis's on low-end computers.
A prequel/similar effect to SSAO is this paper:
http://graphics.uni-konstanz.de/publikationen/2006/unsharp_masking/webseite/
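The core of the unsharp-masking paper linked above fits in a few lines: blur the depth buffer and subtract the original. Positive values (neighbors are farther away, so the point juts toward the camera) brighten, negative values (inside a crevice) darken. A 1D toy sketch under that reading, not the paper's actual implementation:

```python
def depth_unsharp(depth, radius=1):
    """1D toy of unsharp masking a linear depth buffer.

    `blurred - depth` is positive where a point sticks out toward the
    camera and negative where it sits in a crevice - which is the
    "highlights and darkening" effect in one expression. A real
    implementation does this in 2D with a proper blur kernel.
    """
    n = len(depth)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        blurred = sum(depth[lo:hi]) / (hi - lo)  # box blur around pixel i
        out.append(blurred - depth[i])
    return out
```

Compared to SSAO, the depth here *is* blurred, which is exactly the difference CrazyButcher points out above.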