
Reconstructing the ZBrush Wax shader

Since the nVidia shader contest was extended, I thought I'd try my hand at remaking the ZBrush wax shader that everyone loves oh-so-much (I just switched back to ZBrush from Mudbox, and wow is it awesome).

First I just want to get the surface properties right for a sphere before I move on to complex surfaces and all the different shading that creates (that waxy cavity feel). Anyway, here's a pic from ZBrush:
clipboard01ps4.jpg

And my WIP shader (started last night):
hlslqc0.jpg

Seeing them side by side, some things instantly pop out (I need to adjust the specular color to make it more bluish/whitish and less reddish-yellow, and adjust the wax color some as well). Right now, though, I'm concerned with techniques, effects, and shading, and not so much the specifics of which RGB colors I should be choosing.

The ZBrush wax shader is such a doozy; it's so complex and nuanced, but I want to see how far I can take it in HLSL.

I'm just looking for general crits and suggestions, but mostly to spark my thinking as I've had some brain drain lately and only one or two people to provide crits or ideas.

Just for reference and what I'd hope to approximate eventually:
headnv0.jpg

I think nVidia has some sample shaders with self-shadowing, so I can use those.
However, there are some surface effects I really can't wrap my head around. For example, on the head the "cavities" receive that waxy grey. But what's it based on? It doesn't seem to be any sort of facing ratio between the light, normal, or view vectors; is it based on cavities? Would creating a cavity map to control that waxy effect be a decent idea, or is there a better way?

The shader must fake lots of these effects, such as the shadowing (which on lots of things is just a self-shadowing 2D drop shadow; complex to implement, I think, but very fast and a great fake). How does it figure out things like cavities, AO, etc.? They must be faked, and I most likely can't do much of it in a shader, but I just can't figure out how some of these things are done/faked so quickly.

Having looked into it some more, I am going to implement the soft shadowing today after I get the other parameters set (gloss, colors, etc.), since it affects a lot of the shading nuance on the higher-res models.

Thanks for any suggestions, advice, crits, etc.

Replies

  • Whargoul
    Maybe have a look at screen-space depth compares to see if it's a cavity.
  • JordanW
What's weird about the matcap materials is that they don't respond to light, which makes me wonder how exactly they're shading the mesh. Maybe their lighting is somehow image-based, or the light info is just stored when the matcap is created.
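    For reference, a matcap lookup doesn't use a scene light at all: the lit-sphere image is sampled with the view-space normal, so the lighting is effectively baked into the texture. A minimal sketch (the sampler and input names are my own, not ZBrush's actual code):

    ```hlsl
    // Minimal matcap pixel shader sketch. matcapSampler and the
    // view-space normal input are assumptions for illustration.
    sampler2D matcapSampler;

    float4 matcapPS(float3 normalVS : TEXCOORD0) : COLOR
    {
        float3 n = normalize(normalVS);     // normal in view space
        float2 uv = n.xy * 0.5 + 0.5;       // remap [-1,1] -> [0,1]
        uv.y = 1.0 - uv.y;                  // flip V for the D3D texture origin
        return tex2D(matcapSampler, uv);    // lighting comes from the image
    }
    ```

    That would explain why the material ignores the scene light: all the shading is whatever was captured in the sphere image.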
  • CrazyButcher
    normal = derivative of position
    cavity = derivative of normal (= 2nd derivative of position)

Since you already have the normals, you can work from those and compute the rate of change into a render target.
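    One way to approximate "rate of change of the normal" without an extra pass is the pixel shader derivative instructions (available in ps_3_0). A rough sketch, with a made-up tuning parameter, not anyone's actual shader:

    ```hlsl
    // Screen-space cavity estimate from normal derivatives.
    // ddx/ddy return the per-pixel rate of change across the screen;
    // cavityScale is a hypothetical tuning knob.
    float cavity(float3 normalVS, float cavityScale)
    {
        float3 dx = ddx(normalVS);
        float3 dy = ddy(normalVS);
        // High curvature -> large derivative -> more "cavity"
        return saturate((length(dx) + length(dy)) * cavityScale);
    }
    ```

    This is cheap but noisy (derivatives are only 2x2-pixel-block accurate); a render-to-texture approach gives more control over the filter radius.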
  • Rob Galanakis
    hlsl2mr3.jpg

    Using RdotV instead of NdotH for specular component helped.

I also had the "wax" as a specular function instead of a fresnel function; fixed that as well.

It looks like the waxy sheen is shifted in the direction of the light and is not just a pure fresnel component; I'll be trying that next. I also need to start thinking about the more complex bits.

    "since you already have the normals, you can work from those. and compute rate of change into a render target."
I'm new to render targets, so I'll have to ask you to explain. I'm sure it's because of my ignorance, but since shaders only take one vertex/pixel into account at a time, how can I calculate rate of change purely in the shader? I'm guessing it can be done, but I have no idea how.

Also, I was mistaken about the nVidia sample for self-shadowing, though I could swear I've seen something, somewhere... any examples or sources you guys know of?
    EDIT: Found it, it was with FXComposer 1.8 and not with 2... anyway, implemented it.
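    The two fixes above (R·V specular, and a sheen that leans toward the light) could be sketched like this; all parameter names here are placeholders, not the actual shader:

    ```hlsl
    // Phong-style specular using R.V instead of Blinn's N.H,
    // plus a fresnel-style sheen biased toward the light direction.
    // specPower, sheenColor, and sheenBias are hypothetical parameters.
    float3 waxTerms(float3 N, float3 L, float3 V,
                    float specPower, float3 sheenColor, float sheenBias)
    {
        float3 R = reflect(-L, N);                        // light reflected about N
        float spec = pow(saturate(dot(R, V)), specPower); // R.V specular

        // Rim term shifted toward the light: bend the view vector
        // a little toward L before the facing test.
        float3 Vshift = normalize(V + L * sheenBias);
        float fresnel = pow(1.0 - saturate(dot(N, Vshift)), 2.0);

        return spec.xxx + fresnel * sheenColor;
    }
    ```

    With sheenBias at 0 this degenerates to a plain fresnel rim, which makes it easy to compare against the pure-fresnel version.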
  • CrazyButcher
I assumed you knew render targets, since you use them when doing shadows. Don't copy-paste source from the samples; analyze them and find out what they do.

Shadow mapping is rendering depth into a texture from the light's point of view, and later projecting that texture back onto the geometry. Although I'm not sure shadows are really wanted in a viewport shader used for modeling: you'd see no topology hints when "in the shadow" and would need to move the light source constantly.

You can render the model's normals, encoded as RGB colors, into another texture beforehand in the same way, just using the camera's point of view.
Then project it onto the model in screen space, and you can sample "neighbor" pixels (using a texcoord offset of 1.5/texturesize; the 0.5 is needed because D3D samples textures from corners, not centers, as OpenGL does). Ideally you would mask unwritten pixels, e.g. using the alpha channel, so that you don't compute cavity across silhouettes. Another issue might be inner silhouettes, but you can probably get away with ignoring those.