Hey to all,
Now before we start, I'd like to say that I'm an artsy-fartsy kind of guy, so I apologize if I use a few terms out of place or say something I don't fully understand; hopefully you peeps can help me out.
So here is my question: I have been trying to make a skin shader in UDK. So far it consists of a mixture of JIStyle's shader for Max (I lost the file, so I can't check up on it) as well as Miguel's SSS shader for UDK.
I was doing some extra research and came across several users (pros in the area of shaders, but for simplicity I'll leave names out) who have basically written their entire skin shader around the Eye/Camera vector.
Now here is my dilemma: I've studied my own skin and that of people around me extensively, and hell, even done the same with movies and pictures of people, and all of them seem to follow the same pattern as leaves and snow, which is that the way energy is conserved (through the surface) always depends on the light.
So basically, I guess my question is: what's the actual 'working' of skin in shader form that would be correct and make sense? If you guys could chime in with some feedback, that would be fantastic, because it perplexes me why people insist on using the Eye/Camera vector for some of the heavier skin calculations when Light seems to be the major trend-setter.
Cheers and thanks.
Replies
I think aspects of this are why people (at least myself) would use a falloff relative to the camera to describe the (potential) scattering properties of a material relative to the viewer. Anything that you can see is reflecting light; all visible objects are 'light sources' with varying intensities based on their unique properties.
So, theoretically, if I were starting out and had the ability, I'd use an HDR environment map for all of my lighting and use a falloff rate relative to the camera to qualify it (probably by imagining the falloff as a sort of 'percentage' map and multiplying the environment map by it). I'd do this for both the diffuse and the specular: the specular would use a fresnel for its falloff (offset and clamped to move toward describing overall reflectivity), and the diffuse would use a diffuse shader (ideally one that takes roughness into account, where the roughness changes not only the BRDF but also the blurriness of the environment map used, perhaps by lerping between different levels of blurred environment maps).
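Just to make the 'qualify it' part concrete, here's a rough Python-style sketch of what I mean; all the names and numbers here are made up for illustration, not actual UDK nodes:

    import math

    def fresnel_schlick(n_dot_v, f0=0.04):
        # Schlick's approximation: reflectivity climbs toward 1 at grazing angles.
        return f0 + (1.0 - f0) * (1.0 - n_dot_v) ** 5

    def shade_point(env_diffuse, env_specular, n_dot_v, min_spec=0.02, max_spec=0.8):
        # env_diffuse / env_specular stand in for HDR environment-map samples,
        # already blurred to match the surface roughness.
        # The fresnel acts like a 'percentage' map for the specular,
        # offset and clamped to describe overall reflectivity.
        spec_falloff = min(max(fresnel_schlick(n_dot_v), min_spec), max_spec)
        diffuse = env_diffuse            # would come from a roughness-aware diffuse shader
        specular = env_specular * spec_falloff
        return diffuse, specular

    print(shade_point(0.5, 1.0, 0.2))    # grazing view -> stronger specular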
For SSS I'd break the diffuse into two parts added together (the diffuse multiplied by x and the SSS multiplied by 1-x, with x in the range 0-1). The SSS would have its own falloff, 'roughness', scatter, and absorption (diffuse texture). I'd combine the final diffuse and the specular by adding them together after multiplying each by some percentage (again x and 1-x, with x in the range 0-1).
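The blend itself would basically just be two weighted adds, something like this (again only a sketch, and the x values would really come from masks or textures):

    def blend_skin(diffuse, sss, specular, x_sss=0.5, x_spec=0.7):
        # Diffuse and SSS added together, weighted by x and 1-x.
        final_diffuse = diffuse * x_sss + sss * (1.0 - x_sss)
        # Final diffuse and specular combined the same way.
        return final_diffuse * x_spec + specular * (1.0 - x_spec)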
This is mostly theoretical as I haven't put this together yet and can't say exactly what aspects would need to be readdressed (I'm currently working on learning programming and need to write a script for my texture setup before I commit to anything), but if you were to try something like this you should probably figure out how Lightmass could/would/should factor into everything (perhaps it could be used to qualify the environment maps somehow, if that's even possible). If anyone knows whether Lightmass info can be accessed earlier on in the material setup using a node, or whether its implementation could be modified using UnrealScript, that would be important to know.
It is better to use a single space and transform all other data from their original spaces into this common space. Most modern game engines use camera space as the common space and transform all other data into it (more technically: nowadays most rendering pipelines use some kind of deferred rendering, which creates a g-buffer represented in camera space). The alternative would be to transform all data into light space, but with several hundred lights in a scene you would need to do it hundreds of times, which is much more overhead compared to the 2-3 vectors that need to be transformed from light space to camera space.
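Here is a small Python sketch of what 'transform each light into the common camera space once' looks like; this is just illustrative matrix math with numpy, not what the engine actually does internally:

    import numpy as np

    def look_at(eye, target, up):
        # Build a 4x4 world-to-camera (view) matrix for a camera at 'eye'.
        f = target - eye; f = f / np.linalg.norm(f)       # forward
        r = np.cross(f, up); r = r / np.linalg.norm(r)    # right
        u = np.cross(r, f)                                # true up
        view = np.eye(4)
        view[0, :3], view[1, :3], view[2, :3] = r, u, -f
        view[:3, 3] = -view[:3, :3] @ eye
        return view

    view = look_at(np.array([0.0, 0.0, 5.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
    light_world = np.array([10.0, 4.0, -2.0, 1.0])    # homogeneous light position
    light_view = view @ light_world                   # one transform per light, done

The point is that each of those hundred lights only needs this one transform into camera space, instead of redoing everything per light if you picked light space as the common space.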
The issue with skin, or more generally with sub-surface scattering, is not the light sampling. The issue is that you need to sample surface data that is not visible from the camera view or the light view (e.g. looking at a head with a light source behind it: the light hits the back of the head, goes into the skin, bounces around inside it, changing color in the meantime, and leaves the skin at a point that is now visible). This can be done 'easily' with a ray tracer, but it is incredibly hard with a real-time rendering pipeline and shaders. Most shaders only use some hacks, e.g. shifting the color of reflected light toward red tones to simulate human skin, doing some blurring to simulate multiple skin layers, etc.
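To give you an idea of the kind of hack I mean, here is a tiny Python sketch of 'wrap lighting' plus a red shift; all the numbers and names are made up, it is only a fake, the GPU Gems link below shows a proper treatment:

    def wrap_diffuse(n_dot_l, wrap=0.5):
        # Lets the light 'wrap' past the 90-degree terminator to fake scattering.
        return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

    def skin_hack(n_dot_l, albedo=(1.0, 0.8, 0.7), scatter=(0.9, 0.25, 0.2), wrap=0.5):
        lit = max(n_dot_l, 0.0)                               # plain Lambert term
        extra = max(wrap_diffuse(n_dot_l, wrap) - lit, 0.0)   # only the wrapped part
        # Tint the wrapped region toward red to mimic light bleeding through skin.
        return tuple(a * lit + s * extra for a, s in zip(albedo, scatter))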
Good point, I didn't think of taking HDR into consideration, although this makes me ask: should an HDR map (even a highly hemispherical one) be taken into consideration during the specular calculations, or should it be an ambient one?
But isn't the rapid falloff from the camera a drawback, since the viewer can see the material change right in front of them across large surfaces, which doesn't respect the energy information?
Almost all of my characters have this issue: while stuff like the nose and hands can be faked easily with a camera-based falloff, the boobage and forehead, parts which take up a majority of the character's screen space, have too rapid a falloff (in the case of specular, as an example). So wouldn't sampling your light math (in this case, Blinn) instead of the camera vector be better? Wouldn't that better retain the information of what should reflect what?
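Just so we're talking about the same thing, this is roughly how I picture the two options (throwaway Python-style math, please correct me if I've got it backwards):

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        length = math.sqrt(dot(v, v))
        return tuple(x / length for x in v)

    def blinn_spec(n, l, v, gloss=32.0):
        # Light-dependent: uses the half vector between light and view directions.
        h = normalize(tuple(a + b for a, b in zip(l, v)))
        return max(dot(n, h), 0.0) ** gloss

    def camera_falloff(n, v, power=5.0):
        # View-dependent only: brightens grazing angles no matter where the light is.
        return (1.0 - max(dot(n, v), 0.0)) ** power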
I guess that is possible. Hmm, this might sound like a stupid question, but let's say the max value of roughness is 256 (I'm using gloss numbers for now): in what kind of state would the cubemap be blurred? I mean, how blurry would it be? Or should the user have separate control over the blur?
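For example, I'm guessing the mapping could be something as dumb-simple as this (made-up numbers, just to illustrate what I'm asking):

    def gloss_to_mip(gloss, max_gloss=256.0, mip_count=8):
        # Gloss 256 -> sharpest mip 0, gloss 0 -> blurriest mip (mip_count - 1).
        g = min(max(gloss / max_gloss, 0.0), 1.0)
        return (1.0 - g) * (mip_count - 1)

    print(gloss_to_mip(64.0))   # 5.25 -> blend between blurred cubemap mips 5 and 6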
Interesting, I wonder if there is a more dynamic solution to this instead of pushing two textures, hmm.
I didn't know lightmass had a preliminary effect on materials, interesting.
Argh, you're right, I always considered a light just as a light, not in terms of what space and view it is in.
Ah, again, that makes sense, but what about a single light source? What kind of effect would that have? Surely it doesn't have the weight of 100 lights, does it? Also, would the effect in question calculate the 100 lights even if they're out of distance range? Basically, what I'm asking is, does the effect/material always access those 100 lights, regardless of where they are in the scene?
Hmm, this part me no understand, mind showing me a reference/wiki on how a tracer would do this in an easy/dummies way to understand? Yeah, yeah, I know, I'm slow.
But... when you are more comfortable handling SSS in light space, just do it. It might be slower at first, but once you get it working you can eventually rewrite the calculation to gain more performance.
Easy is always relative :poly142:; a more accurate, but somewhat demotivating, description is:
it is hard with a ray tracer or photon mapper etc., and impossible with a real-time shader :poly141:
Whatever, here are some links.
Some theory:
http://graphics.ucsd.edu/~henrik/images/subsurf.html
http://www.neilblevins.com/cg_education/translucency/translucency.htm
One example (hack) shader:
http://http.developer.nvidia.com/GPUGems/gpugems_ch16.html
Cheers!