Also, take note of the shadows here. There is absolutely some magic going on, camera-based map interpolation maybe? Otherwise there would absolutely be a cast shadow off that massive strong nose from some angle - and again under the eyebrow on HIS left-hand side. Definitely not simply a case of build a model, apply a shader, set a light off in the distance and render.
Whatever it is, it's high-level Unreal Engine 3 sorcery and I badly want to know what it is...
It's funny, I didn't really understand what was going on. I just watched the video and thought it was just another 2D Japanese fighter. Then I realised the camera moved. Wow! That's insane.
Good news: Japanese 2D anime isn't going to die. It's just going to change form and yet look completely the same.
Arc System Works has been researching this for YEARS. Ever since they realised that the sprite process for Guilty Gear was just ridiculously painstaking, they've been R+D'ing ways to get 3D in. They did a lot of 3D work for BlazBlue. It shows that they still draw over things; it's a perfect marriage of 2D and 3D:
There are five key stages that each character goes through in the process from concept to final sprite:
1. All the characters in BlazBlue start life as 2D concepts, and each animation frame's pose is drawn by hand.
2. A 3D model is made of each character and posed according to the concepts and animation frames.
3. The 3D model is then used to create a consistent 2D line image as a guide for the final sprite.
4. Light and shadow are then applied and additional detail is worked into each individual frame.
5. Each frame is then turned into the dot image - the sprite itself. But the work doesn't end there!
More info:
Part 1: http://www.siliconera.com/2012/02/08/the-art-of-blazblue-part-1-concept-phase/
Part 2: http://www.siliconera.com/2012/02/09/the-art-of-blazblue-part-2-animation-phase/
Part 3: http://www.siliconera.com/2012/02/10/the-art-of-blazblue-part-3-background-phase/
Whatever it is, it's high-level Unreal Engine 3 sorcery and I badly want to know what it is...
Argh.
I THINK I saw another article that talked about this particular thing from Arc System, but it was in Japanese and Google Translate was spewing gibberish. I'll try to find it once I get a minute, Troy. Unless I imagined it and my memory is a lie, in which case I'll be back with nothing.
But that still results in 2D sprites as the final in-game art, not 3D models. I don't think this is what they're doing for GGX; why would they even switch to Unreal Engine then, when it doesn't even support this kind of sprite work out of the box?
Also, this is what their latest BlazBlue looked like: http://operationrainfall.com/wp-content/uploads/2013/04/blazblue-chrono-phantasma-playstation-3-ps3-1355320795-028.jpg
Even with their efficient 2D-over-3D pipeline, as Del detailed, they still get fairly low-res-looking sprites. They'd need to double or triple the resolution to get the sort of fidelity we're seeing here, and I can't believe that plays nicely with memory.
Also, take note of the shadows here. There is absolutely some magic going on, camera-based map interpolation maybe? Otherwise there would absolutely be a cast shadow off that massive strong nose from some angle - and again under the eyebrow on HIS left-hand side. Definitely not simply a case of build a model, apply a shader, set a light off in the distance and render.
Whatever it is, it's high-level Unreal Engine 3 sorcery and I badly want to know what it is...
Argh.
They're not doing the usual self-shadowing, that's for sure. Perhaps their shadows are being cast by a simpler, shadow-only model that moves along but stays invisible.
But that still results in 2D sprites as the final in-game art, not 3D models. I don't think this is what they're doing for GGX; why would they even switch to Unreal Engine then, when it doesn't even support this kind of sprite work out of the box?
No no! You misunderstand! I'm saying they've been R+D'ing ways of mixing 3D and 2D, and have been moving closer to 3D since BlazBlue. Clearly they've made another huge step since then.
Also it's worth considering what the outline in Borderlands 2 looks like, a game that's refined this kind of thing quite a bit by now:
http://i.i.com.com/cnwk.1d/i/tim/2012/09/17/Borderlands2_2012-09-17_21-52-28-55.bmp
That's nowhere near as good as what ASW is doing.
If it's not that technically challenging, then the people who understand how to do it should start sharing that knowledge for a community effort to replicate the look, similar to the Uncharted 2 texture blending thread here.
I THINK I saw another article that talked about this particular thing from Arc System, but it was in Japanese and Google Translate was spewing gibberish. I'll try to find it once I get a minute, Troy. Unless I imagined it and my memory is a lie, in which case I'll be back with nothing.
Man, I think I know the article you're referring to. I know I've read it, or at least something like it; I remember trying to track down a Japanese friend to do some translating for me - because, as you say, the Google Translate of the page read like gibberish.
They're not doing the usual self-shadowing, that's for sure. Perhaps their shadows are being cast by a simpler, shadow-only model that moves along but stays invisible.
Yeah man, interesting idea... gah! I want to know!
http://storage.siliconera.com/wordpress/wp-content/uploads/2012/02/image4.jpg
It really goes to show that the 2D anime head "abstract code" is impossible to model in 3D. The shaded model looks creepy as fuck! No wonder physical desk sculptures of most anime characters look off... Thank you for linking it, that's great information!
Yeah Pior, totally agree man, it's an absolute nightmare to try and get that anime head to look right in 3D. It's something that I've been obsessing over for years, and I've never really come up with any decent results - just a craptonne of failed sculpts and tests that I would never show anyone haha!!
I guess that's the other reason why this is so exciting to me haha.
Totally. Though I think if this says anything, it's that it is perfectly possible to model an anime head and make the geometry look great without needing to morph things for different angles.
The secret sauce is totally the shading and strokes.
Does anyone remember that amazing Mei from Totoro that some dude sculpted in ZBrush?
http://cfile7.uf.tistory.com/image/197D730C4C8301AEB34A19
Also, this cute character doesn't have the usual anime teardrop-shaped contour and physically impossible nose... Like Haz, I think that it simply cannot be done in full 3D (or at least not without killing the form shading altogether), because the front and side views of the common anime code for "cute female head" are just not compatible with each other...
But yeah - that's a fascinating subject. For what it's worth, the team behind Appleseed Ex Machina and Vexille did a great job with their Deunan/base female. They didn't follow the original manga design too closely, and turned the face into something more human (as opposed to the creepy doll look that was developed for the previous Appleseed CG movie, and the head used in the recent Appleseed XIII TV series too).
Appleseed XIII (TV, 2011), trying to follow the original manga design - super creepy!
http://www.myconsol.net/uploads/images/o/Appleseed_XIII_01.png
Appleseed Ex Machina (Movie, 2007) - a free interpretation of the original, making it work in real 3D space. In a way she is a bit westernized, but that fits the character well since she is supposed to be of an untraceably mixed type anyway: http://cdn.myanimelist.net/images/characters/12/89260.jpg
I think this also explains part of the art style change between the 2D Street Fighter games and Street Fighter 4 - the characters evolved quite a bit from their original designs and became chunkier - maybe for similar reasons?
Well, what I am saying is that it seems to me it's the shading that's allowing them to get away with modelling an anime head with the usual stylistic conventions of anime.
It's just a theory, but I think that if you took that creepy Appleseed head and applied the same Guilty Gear techniques, it wouldn't look nearly as creepy.
I suspect that without the shading, their model would also look creepy.
But I could be completely wrong, and they've managed to model around a problem nobody else has.
PS: I think the Star Ocean 3D models are a great example of fully shaded anime 3D.
http://www.rpgamer.com/games/socean/so4/art/so46.jpg
I see what you are saying, yeah - killing the shading is probably the only way to get that look just right, and it can totally make the average "creepy anime CG model" look more like the intended 2D look.
Thanks for mentioning Star Ocean, I didn't give that one much attention before. More neat stuff to look at! hehe.
It seems like they were referencing the work done on FF: Advent Children with it - the noses are sharper and straighter than in regular 2D manga: http://hanifanyou.files.wordpress.com/2010/09/edge-reimi.jpg
And still, comparing with the original concept art pieces, there is quite a jump when it comes to facial features.
http://www.videogamesblogger.com/wp-content/uploads/2010/01/star-ocean-the-last-hope-characters-artwork-cast.jpg
So indeed - they might have been able to achieve a look closer to their original 2D if they had been using the Guilty Gear approach. But then of course... making high-end CGI cutscenes with fully rendered skin shaders would have been impossible... This makes my head spin hehehe
Has anyone ever considered using normal maps to remove shading detail as opposed to adding detail? It seems plausible just thinking about it (though it would probably create a ton of shading errors).
With Star Ocean, I think they did a far better job on the main two characters than the rest; the little girl on the far right is creepy as hell. Same with the cat girl. I think maybe it worked better because the characters were sci-fi?
Oooooh... that would be somewhat similar to what is done with the face normals of trees, to make them look fuller... So basically it might be possible to simply bake the normals of an anime face from... a sphere? Now that's interesting haha, I need to try that.
So all I did was bake a TurboSmoothed version of the mesh that lacked the eyebrows and nose onto the low mesh that had those details, and used a simple ramp shader (via the Xoliul shader).
I don't think this is anything close to what they are doing, but it's interesting nonetheless. The more I look at it, it's probably baked shadows that are touched up, as there are parts in the cutscene where the shadows change and they snap in, especially on his nose at the 1:40 mark.
Edit: Added a flipped mesh normals stroke.
Also... that's an animated .gif waiting to happen!!
Btw, this is what it looks like without the ramp.
Now of course this is not a great example, because this Ryu model from SF4 does *not* follow the abstract anime feature symbols and therefore shades quite nicely without any tricks... but still, it seems like custom normals can ease out some unwanted noise, especially in strong shadows. There are a lot of little glitches everywhere, but I am sure they can be smoothed out in some kind of post. Also, I suppose that instead of baking the custom normals to a texture like I did here, they could simply be stored as plain custom normals at the vertex level...
Fun stuff
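For anyone who wants to poke at the idea without opening a 3D app, here's a rough sketch of the math involved (my own toy numbers, nothing to do with ASW's actual shader): a hard two-tone ramp applied to N·L, fed once with the "real" detailed normal and once with a normal baked or overridden from a smoothed stand-in mesh. The smoothed normal is what kills the nose/eyebrow shading detail.
```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def ramp(n_dot_l, threshold=0.3):
    """Hard two-tone ramp: lit tone above the threshold, shadow tone below."""
    return 1.0 if n_dot_l > threshold else 0.55

def shade(normal, light_dir):
    n, l = normalize(normal), normalize(light_dir)
    n_dot_l = sum(a * b for a, b in zip(n, l))
    return ramp(n_dot_l)

light = (0.3, 0.8, 0.5)

# Hypothetical pixel on the side of the nose:
detailed_normal = (0.8, -0.5, 0.3)   # what the real geometry gives you
smoothed_normal = (0.15, 0.1, 0.98)  # what a bake from a turbosmoothed / simplified mesh gives you

print(shade(detailed_normal, light))  # falls into the dark tone -> the "creepy" CG nose shadow
print(shade(smoothed_normal, light))  # stays in the lit tone -> flat, drawn-looking face
```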
Awesome, that works even better than I was expecting it to!
You are right about the shading errors, but for the most part the ramped shading covers that up. It seems to be the sort of thing that will amplify any issues with a normal map shader, as it is relying on a perfect match to cancel out detail.
Really cool!
EDIT: I want to see Haz try this out on some of his anime experiments!
You bastards!! Didn't check this for a few hours and it's blown up with some goodness!!! You're on to something here, awesome idea Muzz! Pior, that's perfect man, I definitely think you guys have unlocked something. I'll have to take a shot at this and see what happens!!!!
This has to be it, guys. I think removing shading is the key judging from those examples, good job Pior & Muzz. Subtractive art instead of additive. We need to stop thinking so 'western' about our game art ^ _ ^
Well no, it's not it. I am 95% sure what we are seeing in Guilty Gear is actually just hand-touched-up light maps that are swapped per pose. (I say "just", but that is still f**king awesome.)
With shaders and multiple UVs, it's something that is achievable in stock UDK, I think.
This is a novel solution for doing it in real time without the crazy amount of man-hours otherwise required.
Do you think they could perhaps just have different lights for different materials? So your skin is affected by a light that isn't so top-down in comparison to the clothing? Seems like a quicker solution, no?
The problem with shading anime faces isn't light positions, it's the reduction of shading complexity and the stylization of the lines the shadows make.
Ach, you clever buggers - nice work Pior! I was thinking some kind of custom normals driven by vertex maps that react to camera angle, but maybe Muzz is more on the money and it's much more handcrafted than that. That looks like a really promising technique, though!
It could also be possible to offset the ramp shading with a texture, which would allow for some areas to be more easily or always lit (faces) or always in shadow (the underside of Sol's kneecaps). It'd be a bit more intuitive for an artist to manipulate than the normals. Kind of like an ambient occlusion term, but in both positive and negative ranges.
Chev, could you elaborate? I can't visualize what you mean.
Pior, looking at your image again, the nose shading looks a little like a badly grabbed high poly in the projection? Or is it just the high-density end of the nose messing it up?
I think Chev's approach would be pretty similar to the vertex map approach used in that Softimage shader I posted? As in, marking areas for shadowing to ignore, whether it's by texture input or vertex tagging?
Chev, could you elaborate? I can't visualize what you mean.
Simple ramp shader mockup via Photoshop follows. Consider the sphere as what your mesh + normal map + lighting system would output as a light value, then on top of that blend a hand-drawn map that allows for offsets both towards light and dark, here a few random brush strokes. Basically you're tweaking the light level before the ramp is applied.
Stabbington, the main difference from what you're proposing is that rather than marking areas to ignore, you'd have an offset, which means you can go both ways, and more or less subtly (i.e. instead of ignoring the shading outright you could just bias it, but full white here is equivalent to ignoring shading).
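A quick sketch of how I understand the bias idea, just to make the order of operations concrete (the function names and numbers are mine, purely illustrative): the painted value nudges the light level before the ramp is applied, so full white pins a region in the lit tone regardless of the light angle and full black does the opposite.
```python
def ramp(light_value, threshold=0.5):
    """Two-tone ramp, applied last."""
    return 1.0 if light_value > threshold else 0.55

def shade(n_dot_l, bias):
    """bias is read from a hand-painted map, remapped so 0.5 = neutral,
    1.0 pushes toward always-lit, 0.0 pushes toward always-dark."""
    light_value = n_dot_l + (bias - 0.5) * 2.0
    return ramp(max(0.0, min(1.0, light_value)))

print(shade(0.2, 0.5))  # neutral bias: falls in the shadow tone
print(shade(0.2, 0.8))  # painted brighter: pushed over the ramp threshold into the lit tone
print(shade(0.2, 1.0))  # full white: always lit, no matter the light angle
print(shade(0.9, 0.0))  # full black: always in shadow, no matter the light angle
```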
Mmm, I think I get what you are proposing; how is this different from having a normal map add that extra detail?
A ramp grades based on lighting, so to offset the ramped lighting you have to offset the normal data (making it a convoluted way of doing my normal map method).
So, say, take the problem of an anime nose: how would you stop shadows being cast there using this method?
I'd love to see you put this technique into practice, but I'm skeptical of it even being possible.
The first difference from a normal map is that the result is mostly independent of light direction - you can get things that are always in light or always in dark (like a broody character who'd always have his eyes in the shadow of his hat; here you can make sure it *is* the case no matter the light angle. But again, check Sol's knees in the trailer: they always cast that small triangle shadow even when it makes no sense). To really get a light-independent result with a normal map you have to introduce aberrations (normals that aren't normalized). Second, it's a less physically correct but closer-to-2D-drawing way of thinking about lighting: you aren't thinking "how do I tweak the normals to get the shadows I want there" but directly painting said shadow, yet it'll still play nicely with the other bits since it's using the same ramp. Implementation is very simple; shader-wise it's really just an overlay like the one in Photoshop. Maybe I'll see if I can whip up a DirectX sample in two weeks when I'm on vacation.
As for stopping nose shadows being cast, the solution with this method would be to paint the face in perma-light, but that's only a good idea if that part is supposed to stay light. The real answer is, if you don't want the nose to cast a shadow, you don't include it in the shadow-casting geometry proxy in the first place.
Do keep in mind the game we're seeing there is a combination of a lot of effects. There may be that one plus a normal map plus special geometry, etc. For example, I'm pretty sure they also do that thing from Donkey Kong Country Returns where both characters are drawn to textures, then those textures are drawn over the field, which allows them to overlap like sprites instead of clipping through each other when they're too close.
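On the draw-to-texture point, here's a tiny mock-up of why that trick makes the characters overlap like sprites (this is just the compositing idea in isolation, nothing engine-specific and not a confirmed detail of the game): each character is rendered to its own RGBA buffer first, then the buffers are alpha-blended over the scene in a fixed order, so one fighter is always cleanly "in front" instead of the two meshes intersecting.
```python
import numpy as np

def over(src_rgb, src_a, dst_rgb):
    """Standard 'over' operator: composite one pre-rendered layer onto the destination."""
    return src_rgb * src_a[..., None] + dst_rgb * (1.0 - src_a[..., None])

H, W = 4, 8                       # tiny frame, just to show the idea
background = np.zeros((H, W, 3))

# Pretend these came out of two separate character render passes:
p1_rgb = np.broadcast_to([1.0, 0.0, 0.0], (H, W, 3)).copy()
p1_a = np.zeros((H, W)); p1_a[:, :5] = 1.0   # player 1 covers the left side
p2_rgb = np.broadcast_to([0.0, 0.0, 1.0], (H, W, 3)).copy()
p2_a = np.zeros((H, W)); p2_a[:, 3:] = 1.0   # player 2 covers the right side

# Draw order decides who wins in the overlap, exactly like 2D sprites:
frame = over(p1_rgb, p1_a, over(p2_rgb, p2_a, background))
print(frame[0, 4])   # overlapping column shows player 1's colour, no 3D interpenetration
```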
Nice job on the simplified normal bakes! I'm not 100% sure you actually need to bake for this: explicitly setting vertex normals could work too, if the polygon density is high enough.
The lighting bias map that Chev describes (sounds like a good name to me; it's a bias that forces lighting to always go darker or lighter) could well be part of it, but I think it could easily have unforeseen effects: a part that is fully biased to lit could look really strange when it goes into shadow: you'd fully see the texture doing its part and have strange seam lines. It would really have to be used on inconspicuous areas, I think.
Also, I don't think they have per-pose lightmaps like Muzz is saying: you can clearly tell during the idle anims that the shading on their legs changes slightly along with the movement. Plus it would involve having to paint or bake lightmaps for every animation, which sounds like it would severely limit the workflow...
Also here's a thought:
they might be animating their key light per character, per pose, and perhaps even split for the face and body (unlikely though). This sort of thing could be done easily in Unreal by having sockets on a character that are then just animated along with it.
I'm not sure about shadows though: they do exist, as seen under the jaw, but there are some extra rules. Hair doesn't cast shadows onto anything else, and I haven't seen arms cast any drawn-out shadows on the rest of the body...
Maybe there aren't any cast shadows at all. They could have painted the cast shadows into a texture, and then lerped that texture on top according to light direction.
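If anyone wants to try the painted-cast-shadow idea, the cheapest version I can think of would look something like this (entirely hypothetical, just to show what "lerped according to light direction" could mean): author one shadow mask per key light direction and blend between them with a weight derived from the actual light vector.
```python
def lerp(a, b, t):
    return a + (b - a) * t

def painted_shadow(light_x, mask_light_left, mask_light_right):
    """mask_light_left / mask_light_right: values sampled at the same texel from two
    hand-painted cast-shadow masks, one authored for a light on the character's left,
    one for a light on their right (1.0 = lit, 0.0 = painted shadow)."""
    t = (light_x + 1.0) * 0.5   # light_x in [-1, 1]: -1 = light on the left, +1 = on the right
    return lerp(mask_light_left, mask_light_right, t)

# A texel under the jaw, painted lit in one mask and shadowed in the other:
print(painted_shadow(-1.0, 1.0, 0.2))   # 1.0 -> no shadow
print(painted_shadow( 0.0, 1.0, 0.2))   # 0.6 -> partial blend
print(painted_shadow( 1.0, 1.0, 0.2))   # 0.2 -> painted shadow fully in
```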
Yeah you wouldn't have to use normal maps for this method. But if you want it to play nicely with normal maps it is one of the easier ways to do it.
I'm not sure why people always shy away from normal maps when it isn't a highly detailed piece; normal maps just add shading data and remove shading errors from vertex-calculated normals. Maybe the Team Fortress characters would have fewer shading errors if they used normal maps more liberally.
But Xoliul, I am certain that it is animated light maps. I don't see any proof that they are not; in fact I think I may have to make some gifs to prove my point. (I'll be back in like 20 min haha)
Chev, sorry if I was a bit more skeptical than I should have been at first; it can be hard to take a new forum poster with one post as someone who knows what they are talking about, as more often than not they don't. But you have some really cool ideas.
That being said, what you are describing doesn't actually sound that useful to me. I can understand how it works, but I don't believe it would help in the puzzle of making 3D look 2D, as it wouldn't allow you to sharpen shading lines or artistically direct where the shadows fall.
Instead of sounding like a solution to this problem, it sounds like an improvement to self-illumination maps, where 50% grey is fully lit, black is shadow and white is unaffected by light.
Yes, the shadows are changing per frame as they move in this, but also pay attention to how the shadows seem to hold their stylized shape and keep details from the last frame.
I know it probably sounds like a limiting and time-consuming workflow, but keep in mind these same guys are used to drawing all this stuff from scratch; adding in an animated light map seems like a lot less work to me.
Shadow on the white-clad guy's back under the half-cape does not change. Nor do the shadows under his shoulder pads, even when the right one goes vertical at the end of the move.
Shadows do move on the flowing cloth.
Also, in the lower gif the shadow and light on the ear appear constant while the face does change. The hair has nearly constant lighting, except for the most mobile and flexible of strands.
That doesn't rule out an animated lightmap, but I think the painted-in shadow sensitivity map or vertex colors are the more likely viable options of those presented so far.
Also, there is zero reason these characters can't have their own independent "lights" represented as a vector in the shader. You could even connect it to a hidden object in the scene so you could have interactive per-character light control.
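To make the per-character light idea concrete (a bare-bones sketch, nothing confirmed about the game, and the light vectors are made up): the shader just takes a light direction as a per-character parameter, which could be fed from a hidden dummy object that an animator keys however they like.
```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ramp(x, threshold=0.35):
    return 1.0 if x > threshold else 0.55

def shade(normal, character_light):
    """character_light is a per-character uniform, not a scene light:
    each fighter can be keyed to its own flattering angle."""
    return ramp(dot(normalize(normal), normalize(character_light)))

sol_light      = (0.4, 0.9, 0.2)    # hypothetical key light animated along with Sol
opponent_light = (-0.6, 0.8, 0.1)   # the other character gets its own, independent of the scene

n = (0.2, 0.3, 0.93)                # same surface normal, two different per-character lights:
print(shade(n, sol_light), shade(n, opponent_light))   # -> two different tones
```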
The shading looks mostly unlit with a selective ramp shader, but the outlining is quite nice.
I got some nice tapering strokes the other year by using a controllable Gaussian function with a depth map input and a depth cutoff as a post-process shader, but it wasn't up to this quality and was, of course, full-scene.
The combination of a depth-based stroke and a light/shading sensitivity map might open up possibilities, though.
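For reference, the depth-based stroke idea can be mocked up in a few lines (this is a generic depth-edge detector on a synthetic buffer, not the exact Gaussian setup described above): look at how much the depth changes between neighbouring pixels and mark a stroke wherever that jump exceeds a cutoff.
```python
import numpy as np

def depth_strokes(depth, cutoff=0.05):
    """Mark an outline pixel wherever the depth jumps more than `cutoff`
    between neighbours - a crude post-process edge pass on the depth buffer."""
    gy, gx = np.gradient(depth)
    edge_strength = np.sqrt(gx * gx + gy * gy)
    return edge_strength > cutoff            # boolean stroke mask

# Synthetic depth buffer: a flat background at depth 1.0 with a nearer blob in the middle.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
depth = np.ones((h, w))
blob = (x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2
depth[blob] = 0.4

mask = depth_strokes(depth)
print(mask.sum(), "stroke pixels around the blob's silhouette")
```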
I think Pior's iteration on the smoothed-normals idea is the right direction.
Keep in mind - he just smoothed the fuck out of the model, but if you wanted a specific stylized line on the cheek, you could model that in as well. This is how the shadow "keeps its stylized shape" - it's baked into a normal map.
Then take this to the next step. People have already done normal-map blending based on animation poses (think wrinkle maps). Think of these this way - they are normal maps, but really just for stylizing shadows. You could have multiples for different poses, or even artist-controlled blending at the animation level, some of them explicitly smooth, some of them with stylized shapes and silhouettes.
This sounds like a workable pipeline to me... you have the animation in-engine and it's always rendered the same, then you can just tweak the normal-map blending per animation over a timeline... no?
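A sketch of that wrinkle-map-style blending (the weights and normals here are invented for illustration): per animation, an artist-keyed weight crossfades between a smooth "neutral" normal set and a stylized one, and the result is renormalized before it hits the ramp.
```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / l for c in v)

def blend_normals(base_n, stylized_n, w):
    """Crossfade two tangent-space normals by an animation-driven weight w in [0, 1],
    then renormalize (a simple lerp-and-normalize blend, not RNM/UDN blending)."""
    mixed = tuple(a * (1.0 - w) + b * w for a, b in zip(base_n, stylized_n))
    return normalize(mixed)

base_normal     = (0.0, 0.0, 1.0)    # smooth, featureless shading
stylized_normal = (0.6, -0.3, 0.74)  # pushes a deliberate shadow shape onto the cheek

for w in (0.0, 0.5, 1.0):            # keyed per pose / per animation on a timeline
    print(w, blend_normals(base_normal, stylized_normal, w))
```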
Muzz, I really disagree man - what you're showing is clear proof to me that this is (at least partially) dynamic lighting and not prebaked. From a technical standpoint, I find it impossible to believe they'd have per-frame lightmaps for stuff like the cloth or the hair. If you're willing to go that far, you might as well just invest the time and effort in a proper dynamic solution; it's much less work in the long run!
I am starting to become convinced that the cast shadows are prebaked though: you can tell that under their jaws and on the coat beneath the shoulder part, there's often/always a slightly darker shade. So they probably have dynamic lighting without shadows for the first shade, and then multiply in the baked shadowmaps for the darker shade.
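That two-layer read (dynamic ramp for the first shade, baked cast shadows multiplied in for the darker one) would boil down to something like this, purely as a guess at the compositing order:
```python
def ramp(n_dot_l, threshold=0.4):
    return 1.0 if n_dot_l > threshold else 0.7       # first shade: dynamic two-tone

def final_tone(n_dot_l, baked_shadow):
    """baked_shadow sampled from a hand-touched shadow map: 1.0 = open, 0.8 = under the jaw, etc."""
    return ramp(n_dot_l) * baked_shadow               # second, darker shade multiplied in

print(final_tone(0.8, 1.0))   # lit, no cast shadow         -> 1.0
print(final_tone(0.8, 0.8))   # lit, but under the jaw      -> 0.8  (the "slightly darker shade")
print(final_tone(0.2, 0.8))   # shadow tone plus cast term  -> 0.56 (darkest)
```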
Anybody here willing to really try and match their results? I'd be up for creating some shaders for this.
I'm planning to do some tests to try and prove my hypothesis, but I don't think I'll need any custom shaders to do it.
My main reason for the theory is that there are absolutely no lighting artifacts whatsoever, and I have never seen dynamic cel shading produce such artistically directed lighting.
Like, take the nose in the second gif: that triangle of shadow pops in without the face changing angle at all. I don't see how that can be achieved without hand-drawn shadows.
I'm not even convinced it's light mapping, just that it is artist-directed and not the direct result of a lighting setup.
Yeah...the nose thing is a great example of something to try and emulate. But I do think it could be done in a shader.
If you had that model & shader in your 3D app, and you could tweak parameters and move lights around, you could probably get the shadows to work just like that - just the way you want them to. All we are talking about here are angle cutoffs and such. I dunno, seems challenging but plausible to me.
If you see the popping as implausible, that's just because of the nature of a stepped shading terminator: it's either shaded or not shaded, because the values are clamped to 0 and 1.
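The popping is easy to reproduce with any hard-stepped ramp (a toy demonstration, obviously not their shader): a tiny change in the angle to the light flips the whole region from one tone to the other in a single frame.
```python
import math

def stepped(n_dot_l, threshold=0.5):
    """Hard two-tone terminator: no gradient, so crossing the threshold 'snaps'."""
    return 1.0 if n_dot_l > threshold else 0.55

# Sweep the angle between the normal and the light by a few degrees:
for deg in (58, 59, 60, 61, 62):
    n_dot_l = math.cos(math.radians(deg))
    print(deg, round(n_dot_l, 3), stepped(n_dot_l))
# Right around 60 degrees the value crosses the threshold and the shade pops from lit to dark.
```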
I agree that cel shading has never looked this good, but isn't that what it's all about and what we're trying to figure out? The simplified normals would greatly reduce the amount of artifacts, I'd think.
Hell, the best outcome is to get multiple methods of doing this. I didn't mean to come across as just dogmatically pursuing one idea.
The shadow on the final frame of the guy's face looks polygonal and not 'hand-crafted' as well... I mean the lit area just below his left eye, check it.
Pior is right about the style. This tech is just trying to really match the style of the series so far, imperfections and all, only because it's super identifiable as GG and going 'typical' 3D would not help the brand. The style and aesthetic override any physical accuracy in lighting that you'd expect from 3D, because it looks cool and that's what matters [for this game anyway], which is awesome.
I wonder if special model/normal/vert data is stored in morph targets that are called on a shot-by-shot basis. Then the artist can selectively find the right morph that best matches the tone of the scene being conveyed. If that's the case, then I'd imagine most of the game has a flat or generic directional lighting setup, with special-case scenarios that don't ruin the overall look. It's not like anime or the original games are accurate with lighting anyway.
Guys, great creativity! I join those who are excited about this; this problem has puzzled me ever since I learned about normals and shading (six years ago).
And I have to say, I was too lobotomized by all the next-gen detailed normal map stuff to ever think out of the box like you did! You just blew my mind! Thx!