I was curious to know how people here handle skin in UDK. I tried out the SSS feature of the DX11 renderer, and frankly I thought it looked like a pathetic excuse for SSS. On top of that, the DX11 renderer gives me tons of issues.
Anyway, here is what I'm working on at the moment. I'm interested to know how it looks so far. It doesn't use multiple pre-blurred normal maps like some implementations do (http://forums.epicgames.com/threads/732232-DT3D-UDK-Skin-Shader). I gave that approach a try, but the results didn't seem to be any better than what I'm currently using, which doesn't require any additional normal maps or convolutions.
Basic Blinn-Phong material on the left, and custom SSS material on the right.
http://s20.postimage.org/woses0h4b/sss_comparison.jpg
Replies
If you blend his setup with a blue color, you will get essentially the same effect. You could also try mipping the normal map, but CustomTexture nodes don't play well with normal maps.
As for a 'more robust' SSS solution, I don't think it's really possible in UDK, especially with the shadow issues and the lack of control over light attenuation, unless you use a proxy mesh for shadow casting. But last I checked, UDK doesn't have such a solution in place.
That's essentially all mine is too, only I found that multiple normal maps make no major difference.
I tried that already. The results were similar. Memory usage was obviously smaller than DT3D's, but the instruction count was higher. I had no issues with the CustomTexture node.
What exactly are the shadow and attenuation issues you are talking about?
I'll post my technique as soon as I'm comfortable with the results and have had time to optimize it.
Also, I tend to use PointLights in my scenes a lot, and these can very easily shoot through the mesh and light up unwanted parts of your model, especially if you're using an inverted Lambert term for the transmission effect on skin.
So it's more of a technical issue: graphical fidelity is sometimes gimped in UDK, which can cause unwanted effects. I'm not sure if it's simply my UDK running at a lower setting or not.
Oh alright. What's the instruction count like at the moment?
http://s20.postimage.org/k249dhmzv/sss_skin.jpg
Here are some new images.
(2 pointlights + constant ambient term modulated by occlusion map)
http://s20.postimage.org/prz8tb7ot/skin1.jpg
(1 pointlight + constant ambient term modulated by occlusion map)
http://s20.postimage.org/vu6vjsw4t/skin2.jpg
(1 pointlight + diffuse and specular irradiance environment maps)
http://s20.postimage.org/jgu1cw6gd/skin3.jpg
Very cool!
On a side note : where can one download that generic head ?
http://www.ir-ltd.net/infinite-3d-head-scan-released/
First I take my normal map into Photoshop and create two versions of it: to one I apply a Gaussian blur (8 pixels wide in my case), and to the other I apply a high-pass filter (same diameter as before). This splits the normal map into two parts: the low-frequency part (containing the large-scale details) and the high-frequency part (containing fine details like pores and tiny wrinkles). Then I store the X and Y of the low-frequency normal in the R and G channels of my normal map, and the X and Y of the high-frequency normal in the B and A channels. I do this so that I can import the texture as TC_VectorDisplacementMap, which gives me the highest quality possible (normals are important!). To get the low-frequency normal, simply sample R and G as your X and Y and then derive Z. For the high-frequency normal, use RG + BA, then derive Z. The real reason I do this will become more apparent when I explain the environment mapping portion.
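To make the packing concrete, here is a small sketch (in Python rather than HLSL, and with the usual [0,1]-to-[-1,1] channel encoding as an assumption on my part, plus "RG + BA" read as adding the high-pass residual back onto the low-frequency XY):

```python
import math

def decode(c):
    """Map a [0,1] texture channel back to a [-1,1] normal component."""
    return c * 2.0 - 1.0

def derive_z(x, y):
    """Reconstruct Z for a unit-length normal from its XY components."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

def low_freq_normal(r, g):
    # R and G hold the blurred (low-frequency) normal's XY.
    x, y = decode(r), decode(g)
    return (x, y, derive_z(x, y))

def high_freq_normal(r, g, b, a):
    # "RG + BA": B and A hold the high-pass residual, centered at 0.5.
    x = decode(r) + decode(b)
    y = decode(g) + decode(a)
    return (x, y, derive_z(x, y))
```

A flat texel (0.5 in every channel) decodes to the straight-up normal (0, 0, 1) in both branches, which is a quick sanity check for the encoding.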
Analytical lighting:
I calculate diffuse as max(0, pow((dot(LightVector, Normal) + 1) * 0.5, 3)), once with the low-frequency normals and once with the high-frequency normals. Then I lerp between them using the per-channel coefficients (0.2, 0.8, 1.0). Many people simply use the geometry normals for the low-frequency portion; this will yield bad results if your normal map contains anything but high-frequency details.
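A sketch of that diffuse term and the per-channel blend (Python standing in for shader code; only the formula and the (0.2, 0.8, 1.0) weights come from the post, the function names are mine):

```python
def wrapped_diffuse(n_dot_l):
    # max(0, pow((dot(LightVector, Normal) + 1) * 0.5, 3)) from the post.
    return max(0.0, ((n_dot_l + 1.0) * 0.5) ** 3)

def lerp(a, b, t):
    return a + (b - a) * t

def skin_diffuse(n_dot_l_low, n_dot_l_high):
    """Per-channel blend: red leans toward the soft low-frequency normal,
    green and blue toward the sharp high-frequency one."""
    d_low = wrapped_diffuse(n_dot_l_low)
    d_high = wrapped_diffuse(n_dot_l_high)
    weights = (0.2, 0.8, 1.0)  # blend coefficients from the post
    return tuple(lerp(d_low, d_high, w) for w in weights)
```

Because red is weighted mostly toward the blurred normal, pore-scale shading bleeds red less than green/blue, which is what fakes the scattering tint at the shadow terminator.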
For the transmission effect I use DICE's technique for cheap translucency (http://www.slideshare.net/colinbb/colin-barrebrisebois-gdc-2011-approximating-translucency-for-a-fast-cheap-and-convincing-subsurfacescattering-look-7170855). I calculate the translucency map with xNormal the same way they describe.
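For reference, a sketch of the DICE-style translucency term from those slides (Python standing in for shader code; the parameter defaults here are illustrative, not tuned values from the post):

```python
def translucency(eye, light, normal, thickness,
                 distortion=0.2, power=4.0, scale=1.0, ambient=0.0,
                 attenuation=1.0):
    """Cheap translucency: distort the light vector along the normal,
    then view it 'through' the surface and attenuate by local thickness.
    Vectors are normalized 3-tuples; thickness is the baked map value."""
    lt = tuple(l + n * distortion for l, n in zip(light, normal))
    d = -sum(e * v for e, v in zip(eye, lt))   # dot(eye, -lt)
    lt_dot = max(0.0, d) ** power * scale
    return attenuation * (lt_dot + ambient) * thickness
```

The baked thickness map (from xNormal, as in the post) is what keeps ears and nostrils glowing while thick regions stay opaque.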
For my specular I use Cook-Torrance with a Beckmann distribution. I used the code from here to implement it (http://content.gpwiki.org/index.php/D3DBook:%28Lighting%29_Cook-Torrance).
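A sketch of that Cook-Torrance/Beckmann combination (Python for clarity; the f0 value and the pi denominator are assumptions on my part, since conventions vary between listings):

```python
import math

def beckmann_d(n_dot_h, m):
    """Beckmann microfacet distribution with roughness m."""
    c2 = n_dot_h * n_dot_h
    return math.exp((c2 - 1.0) / (m * m * c2)) / (math.pi * m * m * c2 * c2)

def cook_torrance(n_dot_l, n_dot_v, n_dot_h, v_dot_h, m=0.35, f0=0.028):
    # f0 = 0.028 is a commonly quoted skin reflectance; an assumption here.
    d = beckmann_d(n_dot_h, m)
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5            # Schlick Fresnel
    g = min(1.0,
            2.0 * n_dot_h * n_dot_v / v_dot_h,
            2.0 * n_dot_h * n_dot_l / v_dot_h)            # geometric term
    # Denominator conventions differ (pi vs. 4); the original paper uses pi.
    return d * f * g / (math.pi * n_dot_l * n_dot_v)
```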
Environment map lighting:
Getting HDR environment maps to work inside UDK was a major pain; it would be less difficult if you weren't so concerned with quality. Anyway, I take an equirectangular environment map (you can find them online for free in EXR or HDR format), import it into HDRShop (v1.0 is free for non-commercial use), and use the convolution filter to generate separate diffuse and specular versions. I then use a simple program written in C to convert the images into 32-bit RGBE-encoded bitmap files. The problem with using RGBE is that any type of interpolation of the texture will cause artifacts. To prevent that, I first import the textures as TC_VectorDisplacementMap, since any sort of compression would cause artifacts. Next I set mipmaps to none, as the mip levels would be generated using linear interpolation. Finally, I set the texture filtering mode to nearest neighbor; again, linear filtering would cause artifacts. Then, in my HDR sampling material function, I convert the incoming reflection vector into a 2D UV coordinate to look up in the equirectangular texture.
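A minimal sketch of that reflection-vector-to-UV conversion (the axis convention here is an assumption; flip or swap components to match your engine):

```python
import math

def reflection_to_equirect_uv(x, y, z):
    """Map a normalized direction to equirectangular UVs in [0, 1].
    Assumes Y-up: longitude comes from XZ, latitude from Y."""
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi  # 0 at top, 1 at bottom
    return u, v
```

The clamp on y guards against normalization error pushing acos out of its domain, which otherwise shows up as NaN speckles at the poles.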
The reason I use a 2D equirectangular texture instead of a cubemap is that with the filtering mode set to nearest neighbor, a cubemap environment would look awful. By using a 2D texture, I can implement custom linear interpolation in the shader. Linear interpolation samples the four nearest texels and lerps between them based on the texture coordinate. By implementing it in the shader, you can convert each of the four texture samples from RGBE to RGB and then perform the interpolation, giving you the correct result every time.
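Sketched out (Python in place of shader code; the 128 exponent bias follows the Radiance RGBE convention, which is an assumption about the encoder used):

```python
def rgbe_to_rgb(r, g, b, e):
    """Decode an RGBE texel (all channels sampled as [0,1]) to linear HDR RGB."""
    scale = 2.0 ** (e * 255.0 - 128.0)
    return (r * scale, g * scale, b * scale)

def lerp(a, b, t):
    return a + (b - a) * t

def bilinear_hdr(t00, t10, t01, t11, fx, fy):
    """Decode the four nearest RGBE texels FIRST, then interpolate.
    Interpolating raw RGBE would blend across exponents and produce
    the banding artifacts described above."""
    c00, c10 = rgbe_to_rgb(*t00), rgbe_to_rgb(*t10)
    c01, c11 = rgbe_to_rgb(*t01), rgbe_to_rgb(*t11)
    top = tuple(lerp(a, b, fx) for a, b in zip(c00, c10))
    bottom = tuple(lerp(a, b, fx) for a, b in zip(c01, c11))
    return tuple(lerp(a, b, fy) for a, b in zip(top, bottom))
```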
For the specular portion of the environment lighting I simply sample my specular irradiance environment map with the reflection vector, and then multiply it by a fresnel and specular term.
For the diffuse portion, I sample the diffuse irradiance environment map twice: once using the low-frequency normal, and again with the high-frequency normal. Then I lerp between them the same way I did for the analytical diffuse, and multiply by the diffuse texture and the occlusion texture. Since the diffuse irradiance environment map is generated using a simple cosine filter (normal Lambert), it doesn't have the same benefit of smoothing out the lighting that the modified Lambert I use for the analytical portion does. This makes the normals look too harsh (IMO), so I multiply the high-frequency portion of the normal map by 0.5 before combining it with the low-frequency part. This gives a softer look without messing up the large-scale details.
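That softening step can be sketched as follows (the 0.5 factor is from the post; the explicit renormalize is my addition, since scaling the residual changes the vector's length):

```python
import math

def soften_normal(lo, hi_residual, k=0.5):
    """Recombine the low-frequency normal with the high-frequency residual
    scaled by k, then renormalize. Both inputs are 3-tuples."""
    v = [l + k * h for l, h in zip(lo, hi_residual)]
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)
```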
You can get transmission from the environment by sampling the environment map with the negated view vector. The problem is that you get transmission at times when there should be none, and it is quite obvious. This is actually a problem for the specular reflection as well. I'm looking into ways of solving this; if anyone has ideas, I'd like to hear them.
I think that covers everything. This post sort of turned out longer than I anticipated, hopefully it makes sense and doesn't ramble.
[EDIT] What about using the blur tool in Photoshop to selectively blur your normal maps, so you can reduce the SSS on the bridge of the nose? Or even just masking the 'hard' normals back in after your blur filter?
Great work !
Where w = 0 is essentially normal Lambert and w = 1 is essentially half Lambert, only energy-conserving. This produces pretty nice results, but I have since switched to Pre-Integrated Skin Shading. It's not energy-conserving (something for me to explore), but the results already look pretty good.
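The wrap formula referenced above didn't survive in the post; one commonly published energy-conserving wrapped Lambert that matches the described behavior of w (not necessarily the poster's exact formula) is:

```python
def wrapped_lambert(n_dot_l, w):
    """Energy-conserving wrapped diffuse. w = 0 gives standard Lambert;
    w = 1 gives half Lambert scaled down so total reflected energy is
    preserved. A common published form, assumed here, not quoted."""
    return max(0.0, (n_dot_l + w) / ((1.0 + w) ** 2))
```

At w = 1 this is (N.L + 1) / 4, i.e. half Lambert divided by 2, which is where the energy conservation comes from.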
http://s20.postimage.org/pgscu5ktn/update3.jpg
Is it the same Pre-Integrated Skin Shading from Eric Penner?
I'm using the one from Penner, but I'm having difficulties with the tonemapper. When I use Pre-Integrated Skin Shading, the skin is extremely shiny.
Did you use a custom tone mapper?
edit :
good read : http://seblagarde.wordpress.com/2012/01/08/pi-or-not-to-pi-in-game-lighting-equation/
No, I'm just using the default settings.