
Anime Process + Shader Techniques

astraldata (vertex)
Been doing lots of studying lately on making shaders, and I've decided to go the route of making an anime shader for Unity, in the style of both 2D anime (in order to achieve a particular look) and Zelda: Breath of the Wild (in order to learn how to do fancy lighting with ramp shaders).


Here are two examples, one of each style I'm going for:


https://www.youtube.com/watch?v=difliSBnnU8
and
https://www.youtube.com/watch?v=dcErKLeqGhw


There are two issues I'm finding hard to deal with though.


1)  I'm using totally free tools (i.e. Blender), and I'm having a hard time figuring out an artist-friendly way to paint normal directions for the portions of the model where I want to control the lighting more easily to attain the anime look. I was considering something from the "Guilty Gear Xrd" GDC talk about using an occlusion map to indicate the areas on the model where they want lighting to land, but I'm wondering if (and how well) this might work as a replacement for normal editing, especially if I only want the model to be seen from one camera angle. (That said, I would like to turn him toward the camera from the front and sides too, something akin to the SNES Zelda overhead view, so I'm wondering if I'd have to swap occlusion maps for this.)



2)  The actual ramp shading looks like total garbage in most shaders I've seen, because they highlight or shade parts of a model that clearly shouldn't be shaded until a certain angle. What's the best way to go about this, say, for the four views I mentioned in issue #1? Is it possible to draw maps of where you want the shadows to exist and interpolate between the different angles?

I saw something like this in Zelda BotW, where you see pencil strokes shading the clothing at certain angles, but I'm wondering how they did this, since I've got no idea how to make a shader behave differently at different angles code-wise. Any help in this area would be appreciated too!


Replies

  • RN (sublime tool)
    If I were you I would worry about one problem at a time, otherwise you won't know exactly how to proceed.
    I'm wondering if (and how well) this [using an occlusion map] might work as a replacement for normal editing, especially if I only want the model to be seen from one camera angle
    Normals are interpolated between vertices in the same way that vertex colours are. The unique values are at each vertex.
    An occlusion map (a texture) is going to be sampled throughout the faces (unique values at each texel), so you'll always have more room for detail when using textures. A way to visualise this is when comparing a mesh lit by vertex-normals vs. lit by normal-map.

    So if your mesh is not dense enough for you to rely on normals \ vertex colours to precisely define the shadow information you want, you'll have to use textures.
    The actual ramp shading looks like total garbage in most shaders I've seen due to them highlighting or shading parts of a model that clearly shouldn't be shaded until at a certain angle.
    You need to start from somewhere. Post your shader network and people will give feedback.
  • astraldata (vertex)
    RN said:
    Normals are interpolated between vertices in the same way that vertex colours are. The unique values are at each vertex.
    An occlusion map (a texture) is going to be sampled throughout the faces (unique values at each screen pixel the faces get when rendered), so you'll always have more room for detail when using textures. A way to visualise this is when comparing a mesh lit by vertex-normals vs. lit by normal-map.

    So if your mesh is not dense enough for you to rely on normals \ vertex colours to precisely define the shadow information you want, you'll have to use textures.

    Wow, good point there @RN !! -- I felt like it should work on a low-poly character, but I didn't quite realize the inherent power of pixel shaders until that image. Thanks!


    I agree I should tackle these issues one by one -- so the first issue I want to tackle is the occlusion map.

    Because this sprite gave me lots of lulz, I tried to spoiler it (can't seem to do it, though), but as you can see, there are certain areas on the, erhm, "geometry" where the light pools, and areas that pool shadows in slightly darker tones:



    What I am going for is being able to rotate a 3D version of this 'sprite' in a few different directions, but having the light only pool in certain areas such as the face or upper body and in some places on the hair. I also want to make the back leg have shadows something like what you see here.

    From what I saw in the Guilty Gear Xrd GDC notes, they did this using a combination of occlusion and normal editing, but I'm thinking you could get something almost as good using occlusion maps alone. On the other hand, would I have to swap out my occlusion maps if I were to need a different view of the character?



    From what I see here, I'm thinking a simple ramp shader would have a hard time replicating the look in this sprite -- especially when movement is applied.

    Additionally, if editing normals is a necessary part of the process, does anyone know if it's feasible for Unity to load different sets of normals for a model depending on which direction is pressed? Or would it be better to interpolate the occlusion maps themselves somehow in the shader?
  • RN (sublime tool)
    For reference, the Guilty Gear Xrd GDC presentation is this:
    https://www.youtube.com/watch?v=yhGjCzxJV3E
    They mention a few things in there, maybe this helps:

    - They preferred to use vertex attributes (colours and normals) instead of textures to store information because it was easier to edit (instant feedback) and these don't suffer from pixelation like textures would. So what I said about textures giving more details is incorrect, in this case. For a cel-shaded look vertex attributes will give cleaner results because of interpolation.

    - They used vertex colours as a factor for forcing parts of the model to become shadowed more easily.

    - They edited vertex normals only on some parts of the models, like the heads, to make sure that they had clean areas of only shadow or light, no messy patches.

    - Each character has its own light vector (a 3D direction that defines where the light is coming from). The light vector can be animated along with the character to adjust for the best results at every frame. Vertex colours and normals are not animated.

    - They didn't use a ramp, they used a single diffuse texture with flat colours that represent the lit state, and a "shade texture" with all colours as factors that will be used on the shadowed state, blended in a special way.
    On lit parts of the model, you just output the diffuse texture. On shadowed parts of the model, the diffuse and shade texture are blended with multiply: diffuse.rgb * shade.rgb.
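    As a small Python sketch of that blend (illustrative only; the function name and tuple-based colours are mine, not engine code):

```python
def ggxrd_shade(diffuse, shade, is_lit):
    """Lit/shadow blend described in the Xrd talk.

    diffuse, shade -- RGB tuples in [0, 1]
    is_lit -- 1.0 for lit, 0.0 for shadowed (output of the step test)
    """
    if is_lit >= 1.0:
        # Lit: output the flat diffuse colour untouched.
        return diffuse
    # Shadowed: multiply blend of diffuse and shade textures.
    return tuple(d * s for d, s in zip(diffuse, shade))
```

    Since the shade texture acts as a per-channel tint factor in [0, 1], the shadowed colour is never brighter than the lit one.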
  • kodde (polycounter lvl 18)
    Not familiar with Unity, but here's an idea for using several Vertex Normal sets. Store them as Vertex Color but use them as Vertex Normals :)

    Mesh.VertexNormals = normalize(lerp(Mesh.VertexNormals, Mesh.VertexColor[x], alpha));
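    In plain Python (just to illustrate the math behind that one-liner; not engine code, and the function names are made up):

```python
import math

def color_to_normal(rgb):
    # Map a vertex colour in [0, 1] back to a direction in [-1, 1],
    # the usual packing for storing vectors in a colour channel,
    # then renormalise to unit length.
    v = tuple(c * 2.0 - 1.0 for c in rgb)
    length = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / length for x in v)

def lerp_normals(a, b, alpha):
    # normalize(lerp(a, b, alpha)): a linear blend of two unit
    # vectors is shorter than 1, so renormalise after blending.
    v = tuple(x + (y - x) * alpha for x, y in zip(a, b))
    length = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / length for x in v)
```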
  • kodde (polycounter lvl 18)
    I tried playing around with something like the GGXRD way of modulating the tendency to get lit in one of my toon shader projects for Maya. Skip to 29:14 if the video doesn't jump there already.

    https://youtu.be/FCyRJlSYXXc?t=29m14s
  • astraldata (vertex)
    @RN

    Despite remembering them saying that stuff, apparently I still really needed an experienced eye on that video to help me see what I was missing/overlooking in shader terms (still very new to shader design), so thank you!

    I forgot to mention I'm trying for a lower-poly version of this technique (so your initial advice was still very helpful!) The GGXRD characters are *very* high poly in general, and this was considered okay since there were only 2 on screen at once. I, however, am looking for a way to make a shader like this useful for many characters on-screen at once, so any advice you can offer on this would be especially appreciated!


    @kodde

    So, basically, I never even considered using the color channel for normal editing and then just changing its use in the actual shader lol. No idea why since I've seen it done for other stuff a thousand times. Thanks for the reminder! Also, that's a great idea about storing multiple normals. Could I make the shader lerp between them potentially, depending on an east/west/north/south camera-facing direction?


    --

    @anyone

    To clarify, I'm trying to create a GGXRD-like shader for a model whose resolution is something along the lines of the following model (but with a higher-res texture and only slightly more polys, to *just* offset the pixely/sharp look):



    My biggest concern now is that using the normals approach may not be as effective on this sort of model as it is on a higher-res one. Perhaps a higher-res-texture-based shader would be better-served here?

    Also, I noted that in the GGXRD models, they do have an occlusion map, possibly to help with light pooling, but I don't understand the reason why since they edited the normals on the face to cover for the pixel-blur issue. Would a model this low-poly need/benefit from any sort of occlusion map, and if so, why or why not?

    Lastly, what would I need to do to make the highlight speculars like in the GG shader? Maybe that's actually the main purpose of the occlusion map in their shader setup?
  • pior (grand marshal polycounter)
    Your questions are hard to answer because you are not clearly stating what your current skillset is (or isn't).

    You mention being "still very new to shader design". Does that mean that you already know how to write/develop rudimentary shader code for your target environment, and are simply looking for help as to which math/operations to implement?

    Or does it mean that you want to achieve something similar to the many examples shown in this thread, and are looking for an already-working solution that you could use straight away?

    For instance, regarding this question :

    - - - - - - - - - -
     Is it possible to draw maps where you want the shadows to exist and interpolate between the different angles? 
    - - - - - - - - - -

    The answer is yes ... if you have access to a shader that does exactly that.

    So, just be more clear and you'll get all the answers you need. Looking at the GGearXrd talk won't help you much if you cannot develop the shader yourself (or find a readymade one for the target environment of your choice, or hire someone to write it for you).

    All that said you can also bypass the whole topic altogether and simply paint in the shadows directly, keeping everything "self illuminated/fullbright". It worked perfectly well for Fear Effect on PS1, it will work great for you too.





    Exact same technique in Megaman Legends 2 on the PS1, as shown here :



  • astraldata (vertex)
    pior said:
    Your questions are hard to answer because you are not clearly stating what your current skillset is (or isn't).

    You mention being "still very new to shader design". Does that mean that you know already know how to write/develop rudimentary shader code for your target environment, and you are simply looking for help as to which math/operation to implement ?


    Sorry, I didn't realize -- yes, exactly that -- I know enough that I can make rudimentary shader code, I'm just looking for operations/math/advice/direction/etc that might help me to achieve something like the above images/examples, especially with the constraints I've mentioned (performant enough for many onscreen chars / works with low-ish-poly).

    And thanks for your Fear Effect example (I loved that game, btw). I chose not to go that route because I wanted to see whether some modern techniques would let me use real lighting to attain something that doesn't look too "painted-on" and can potentially still be a little dynamic, but I was looking for guidance on how, or with what, a look like that might be achieved.

    A good example is just about any top-down Capcom sprite, but all I have to show is this guy:



    As you can see, he's got a top-down light source, but just as much of it is in the front also.

    The way Capcom style sprites work is that, when they turn around and you see their backs, the light is still from the top, but it lights up less of their back than it would if their front were exposed. Also, as with Psylocke in my previous post, sometimes these sprites have both speculars in particular places (such as Psylocke's hair and, erhrm, "geometry") and pools of dark shadow in other places, such as the back leg of Psylocke and *most* of Link's back leg there (except he *doesn't* have that extra shade or two of shadow like Psylocke has, which I *do* want).

    So basically, I'm trying to figure out how to pool light only in the places I want, add speculars only in particular areas, and how to, perhaps, define a particular "plane" or something that lets shadows pool if they exist behind that area (for example, Psylocke's back leg is on a totally different 2D 'plane' of shadow so perhaps I can find a way to use this to my advantage in replicating that look.)

    Any pointers on any of this would be greatly appreciated!!

    Thanks for everything guys -- I hope that clears some stuff up!
  • pior (grand marshal polycounter)
    Right on - so in my opinion what you have to do at this time is not so much diving in and trying to recreate any fancy shader, but rather, using a very simple one (with just a hard light/shadow separator, and without advanced light pooling control or highlights for now) and see if that even works well visually with your assets. That is to say, making things meet in the middle and iterate from there.

    You've shown the Link model example, but not the *actual* model you are building. I don't think there's much point worrying about advanced shader effects without also working on the actual model these effects will be used on.
  • Yadoob (polycounter lvl 7)
    That's a complex subject. The anime look is actually full of exceptions; see, for example, the eyebrows, which are drawn on top of everything. The same goes for the lighting: 2D artists simplify the lighting scheme a lot, and they aren't consistent in how they apply it, which is the exact opposite of how a standard shader works (a very precise, uniform light construction).

    So if you really want an anime-look shader, you have to look closely at your reference and break your shader down into smaller steps to handle all those exceptions.

    In Guilty Gear there are a lot of masks that correct the lighting: one is meant to add shadows that won't be affected by the light (for the neck and armpits), and two of them handle where and how much specular will be shown. On top of that there's hand-painted AO information and normal editing.

    I've put up a PDF on the subject, which is mainly the Guilty Gear talk (you can ask me for the .blend by message): https://www.dropbox.com/s/65cwf6kjuezhwdl/GuilltyGearXrd_shader.pdf?dl=0

  • astraldata (vertex)
    pior said:
    Right on - so in my opinion what you have to do at this time is not so much diving in and trying to recreate any fancy shader, but rather, using a very simple one (with just a hard light/shadow separator, and without advanced light pooling control or highlights for now)
    Excellent advice. That's the idea. I just wanted to be sure I started with the right fundamentals and guidance to take me down the path to the place I want to eventually arrive. After all, practicing something poorly leads to being an expert in poor practices. This is unfortunately all too common in the indie world, so please try to excuse my jumping ahead, as I am only attempting to see where I am actually heading before I get there and have to unlearn everything.
    pior said:
    You've shown the Link model example, but not the *actual* model you are building.
    Granted, if this were a simple rendering, that would be easy enough, but since this is for multiple-characters in-game, and since my goal is to make this look right with any model of the sort of geometry Link has (lots of round volumes, few angles), I can't really give an exact model. Additionally, I'm also in the middle of a prototyping phase, so Link is all I have to work with for now.


    Yadoob said:
    So if you really want to have a anime look shader you have to look closely at your reference and break down your shader in smaller steps to handle all those exceptions.

    In GuiltyGear there's a lot of masks that try to correct the lighting : one is mean to add shadows that won't be affected by the light (for the neck and armpits). Two of them handles where and how much specular will be shown. On top of that there's a hand painted AO information and normal editing.

    Thank you so much for this man! That is awesome advice.

    I think what's getting me the most is that I don't know the gist of what's needed for controlling stuff such as the movement/placement of specular/shadows/ramps by way of using texture-channel-data to do so (though I do understand the concept of multiplying colors together). That writeup of the GGXRD does help me understand necessary techniques in handling shadow control though, which is half the battle, so I'm not a lost cause. 

    There are three things I don't yet understand about that writeup, though:

    1. Would you mind elaborating on the purpose of the -1 to 1 vs. the 1 to 0 ranges of the normal direction and light vector? How does this translate into shader terms, and what does it actually do?
    2. The purpose and use of a low-poly version of the model to get the normals doesn't make much sense to me. How would something like this be done in, say, Blender, and what benefit does it have compared to painting the normals or calculating them automatically? What can a low-detail model actually help with in this case?
    3. I'm not sure I follow the part about the interior lines and how they're modeled on the character himself (I don't see any in the images outside the nose), or whether this can apply to a lower-poly character. I do understand that by keeping the UVs thinner than the line and stretching them across it where necessary, the texture can be blurred and faded somehow, but I'm still not too clear on how this works in practice.

    I'm also having a hard time deciding whether the shader should be camera direction dependent or independent. I'm sure I'm going to have to define my own lighting model but I'm having difficulty understanding how to proceed with that because the only real place I'd need to control the lighting direction is in specular placement. Most other stuff I could control with normal painting and shadow-control.

    I know these are dense questions, and it's a complex topic, but as said before, I'm just looking for some insight to give me an idea of what's going on inside a shader like this so that I may eventually program my own.

    Thanks everyone for your time and patience with me! -- I'm learning a lot already!
  • astraldata (vertex)
    After studying the shader to the left extensively (along with other various tutorials), this is the result:



    It is still a little ways away from what I was going for.

    First off, nice dark outlines done in the style to the left are hard to achieve on low-poly (there are angles where they appear broken, looking something akin to Okami) without the normals being smoothed. On the other hand, by smoothing the normals on the model in Unity, I lose the nice normals I've modeled that define how light should pool when exporting from my modeling program. In that case, I am forced to smooth the model in Unity, then, after the smoothing has taken place, I have to somehow get my nice light-pooling normals back in. Is this possible?

    In addition to that, I am also worried this might require a second pass.



  • radiancef0rge (ngon master)
    Doesn't shaderforge support outlines by default?
  • RN (sublime tool)
    It doesn't look like you're using the step function in your shader, and it's absolutely necessary to make the sharp shadowing.
    They mention it in the GDC presentation on this slide:



    So you take the dot product (also called "scalar" product) between the vertex-to-light vector (going from the vertex to the light position) and the vertex normal vector and plug this dot into a step function which uses the vertex colour as the threshold, like in the image: step( threshold, dot( light, vertexNormal ) ).
    If you're using nodes (like in Blender, or with node-based shader editors in game engines) then the step function can also be called "Math -> Greater Than", with the test being "is dot( light, vertexNormal ) greater than threshold?".
    The result of this (either the step or greater-than functions) is always 0.0 or 1.0, and you can use it to turn shadowing on and off in this way:
    float isLit = step( threshold, dot( light, vertexNormal ) );
    return ( diffuseTexture * isLit ) + ( diffuseTexture * shadeTexture * ( 1.0 - isLit ) );
    The "return" line says: "if isLit is 1.0, output just the diffuse texture. If isLit is 0.0, output the multiply blend of shade texture and diffuse texture."

    You can treat the R channel of the RGBA vertex colour as the threshold by subtracting 0.5 from it, so that 0.5 R means a threshold of 0.0 (default, good for most cases), 0.0 R means a -0.5 threshold (shadowed areas become lit), and 1.0 R means a +0.5 threshold (lit areas become shadowed).
    It's easier to visualise this once you test it.

    - - - - -
    Forgot to add, because of the way it behaves, the dot product will only be in the range [-1.0, +1.0] if both vectors are normalised. The vertex normal is usually already normalised, but the vertex-to-light vector might not be yet, so make sure to normalise it before plugging it into the dot( light, vertexNormal ) call.
    This dot product represents the lambert term, something used a lot in lighting for computer graphics. Read more about it here: http://buildnewgames.com/lighting/
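    Putting those pieces together in Python (illustrative only; vectors as tuples, and the names are made up, not engine code):

```python
import math

def normalize(v):
    # Scale a vector to unit length (guard against zero-length input).
    length = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / length for x in v)

def threshold_from_red(r):
    # Remap the vertex colour's R channel so 0.5 -> 0.0 threshold
    # (neutral), 0.0 -> -0.5 (forces lit), 1.0 -> +0.5 (forces shadow).
    return r - 0.5

def lambert_step(light_dir, normal, threshold=0.0):
    # step(threshold, dot(light, normal)): the lambert term tested
    # against the threshold gives a hard 1.0 (lit) or 0.0 (shadowed).
    n_dot_l = sum(a * b for a, b in zip(normalize(light_dir),
                                        normalize(normal)))
    return 1.0 if n_dot_l >= threshold else 0.0
```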
  • astraldata (vertex)

    Doesn't shaderforge support outlines by default?

    Yes, although I *was* actually coding my shader manually when I posted that; I've since switched to learning shaderforge (because why not?). Even so, I was looking into a potentially new way to make outlines, other than the standard 'upscaled-reversed-normals' approach most tools (including shaderforge) use, as I've got an idea carried over from my 2D sprite days that I want to try out once I finally 'get' everything shaders can do.



    RN said:
    It doesn't look like you're using the step function in your shader, and it's absolutely necessary to make the sharp shadowing.

    Nope, I guess I did not -- I used this instead:

    float nDotL = saturate(dot(input.normalDir, input.lightDir.xyz));
    Seems to do something similar, but I'm wondering... am I really better off with step vs. saturate since step is a sort-of 'conditional'?

    Also, speaking of outlines... I was thinking about that aspect a while ago and was wondering if there was any way to put an outline inside the silhouette of my model, for example just around the inside/internal edges only? Since I'm now using shaderforge, perhaps it would be easy to do this (but without post-processing?)

    Thanks for your clear explanation @RN -- it really put a lot into perspective for me. Took me a good 45 mins to decode that simple 2 lines of code, but it was so worth it, so thank you again for that.

    Still playing with shaderforge for now, but I'll update once I have something pretty to show.




  • SnowInChina (interpolator)

    you could use this method for the outlines
    (btw, kudos to gonkichi, awesome model)
  • bitinn (polycounter lvl 6)
    Did you check out Unity Chan Toon Shader 2.0? It's a pretty great shader implementation for the anime style, and you can see the full source code. (It was created with Shader Forge, plus a bit of manual coding.)
  • RN (sublime tool)
    am I really better off with step vs. saturate since step is a sort-of 'conditional'?
    If you want a sharp transition from shadow to light (a sharp terminator), you need to use 'step' or something equivalent, like a 1D texture acting as a colour ramp that maps your U texture coordinate to opaque white and transparent black, values that you can multiply back in your shader to turn stuff on and off. You could also use IFs (if nDotL > 0.0 etc.), but 'step' is a built-in way to do the same, so it should be much faster.

    But if I were in your situation I would make both versions and compare them. Good luck.
  • ActionDawg (greentooth)
    adding to the above, saturate clamps an input to the range 0 to 1, while step forces a value to be exactly 0 or 1.

    the 1D texture idea could work so long as it isn't interpolated at all, but that introduces another texture sample (and potential user error) when an intrinsic function is provided. the intrinsic is virtually guaranteed to be the fastest method.
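    the difference is easy to see in a couple of lines of Python (HLSL-style semantics, sketched outside a shader):

```python
def saturate(x):
    # Clamp to [0, 1]; values inside the range pass through
    # unchanged, so the terminator stays a smooth gradient.
    return min(max(x, 0.0), 1.0)

def step(edge, x):
    # Hard cut: 0.0 below the edge, 1.0 at or above it -- this is
    # what gives the sharp cel-shaded terminator.
    return 1.0 if x >= edge else 0.0
```

    e.g. saturate(0.3) stays 0.3, while step(0.5, 0.3) snaps to 0.0.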
  • astraldata (vertex)
    Thanks @RN and @somedoggy for the technical input on that -- it makes a lot of sense. I'd prefer to use step in this case, since the ramp doesn't seem necessary for my purposes, especially after looking at the shader @bitinn suggested.



    bitinn said:
    Did you check out Unity Chan toon shader 2.0? it's a pretty great shader implementation for Anime style, and you can see the full source code. (it was created using shader forge, with a bit of manual coding)
    Great advice man! -- I sure did, and it supports all the toon-lighting I could ever want -- and more. The right Link is closest to what I wanted shade-wise, but left-Link's outline is still closer to what I want for that:



    Other than the special outline, the only real problem I have now is that the shader doesn't support vertex-normal-maps from a painted RGBA channel.

    Unfortunately I can't modify my model's normals in my modeling program, since the internal shader outline (see left-Link's outline) depends on the model's actual normals being perfectly smooth. I need the shader to support an alternate set of normals for controlling where light pools (such as on the face). Although the shader is open source, I can't for the life of me figure out where to modify it so that it respects the vertex normals I painted into the R channel of the RGBA colour, so the shader source is unfortunately useless to me until I understand at least that.








  • astraldata (vertex)
    So now that I've looked into shader forge extensively enough to (kind of) know what I'm looking at, I guess what I'm hung up on now is the semantics between code and nodes.

    I know I must seem really dense here, but I can't seem to make the right node connections to achieve a basic diffuse/toon shade (i.e. telling SF which parts are lit using the step function + threshold), much less a way to set up the R channel to control the normal direction. On top of that, after searching for days, I still can't find a single example of a toon shader that isn't in code, and translating that code into a node graph that works in shader forge is proving very difficult.

    The only toon examples I've found using SF nodes so far are older and use a CubeMap for some reason, which is not at all how I did toon shading in my code.

    I'm apparently very bad at the translation here... I admit the math is my weak point, and abstraction by nodes isn't helping with that, it seems... Is there some place I can look to understand this better?