Okay, I'm bored of looking through the wiki and the stickied threads now. I can't find what I'm looking for, so I'm hoping I could get some help?
Are there any images of normal maps generated from flat images (e.g. a CrazyBump normal from a photo of bricks) compared to normal maps baked from a high poly? I know that studios have recently been pushing towards baking everything because the results are much crisper/richer/more accurate, but I just want to see a comparison for myself.
Thanks V much (Hope this is the right section for this request)
Replies
But for rocks and sometimes bricks (most organic stuff), generating the normal map in CrazyBump yields very nice results.
The normal map is there to compensate for the lack of vertex resolution in a typical low poly model. This is really visible in the lack of gradients in a typical 2D-generated map. The gradients help compensate for the lack of verts that would be needed to display the model's desired form and shape.
If you use a normal map generated from a high poly model, then for every pixel of the texture you get all the good lighting information about how the surface interacts with light.
On the other hand, with a 2D-generated normal map your low poly model is stuck using vertex normals (smoothing groups/hard-soft edges) to calculate the way light moves across the surface. Admittedly, you can do a LOT with vertex normals, but you start with a much lower resolution of lighting information to work with, because you then have to paint or Photoshop in all the missing surface lighting information, which IMHO is much harder to do convincingly than just modeling the high poly in the first place.
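To make the "lighting information for every pixel" point concrete, here's a minimal Python sketch (not from the thread, just an illustration) of what a shader effectively does with each normal map texel: decode it from the 0-255 range into a unit vector, then dot it against the light direction. A texel that leans away from the light comes out darker, with no extra geometry involved.

```python
import math

def decode_normal(r, g, b):
    """Map an 8-bit tangent-space normal texel from [0, 255] to a unit vector."""
    v = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def lambert(normal, light_dir):
    """Simple N.L diffuse term, clamped to zero."""
    length = math.sqrt(sum(c * c for c in light_dir))
    l = [c / length for c in light_dir]
    return max(0.0, sum(n * c for n, c in zip(normal, l)))

# A "flat" texel (128, 128, 255) decodes to roughly (0, 0, 1): facing straight out.
flat = decode_normal(128, 128, 255)
# A tilted texel leans the surface without any extra verts.
tilted = decode_normal(200, 128, 200)

light = (0.0, 0.0, 1.0)  # light pointing straight at the surface
print(lambert(flat, light))    # close to 1.0: fully lit
print(lambert(tilted, light))  # noticeably darker: the texel leans away
```

With only vertex normals, that `normal` would be interpolated between a handful of verts, which is exactly the "lower resolution of lighting information" the reply above is talking about.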
HP Model = Quality
Generated = Saves time
Sometimes a combination of the two is a great way to go. But it really depends on the asset, too. And art style, as SHEPEIRO mentioned. I can't confirm it, but from what I've heard, at DICE they tend not to work with hp models at all unless necessary. Then again, they use quite a specific art style, a bit like what is commonly seen with the Source engine.
Cheers
But I still found the results quite interesting, and tbh, the Nvidia filter can work quite well if you combine multiple layers and overlay them on top of each other like I did here.
oops, forgot the normal-maps:
Cheers
The main difference between PS and Crazy Bump is not in building up volume from a 2D source, which both can do well, but in the way they handle combining normal maps.
Crazy Bump can take two normal maps that would point light in different directions and combine them correctly using vector math. Photoshop just can't do it quite right.
So the results could potentially be really off (most artists know how that looks, and avoid it like the plague).
But with baking you can be sure that the shape you're baking will light exactly the same on a flat plane.
One thing I find really annoying lately is seeing the tell-tale CrazyBump diagonal "slant" in some normal maps in games. Then again I'm sure it has its uses, and Photoshop as well, for additions to modeled stuff or on its own for simple stuff.
Know how to use all your given tools, and you'll get great results.
I've seen fantastic artists work magic using just very little baking.
http://www.philipk.net/tutorials.html
I know, as said, I still need to brush up on that a bit :poly124:
The intensity was more or less maxed out though, and the normal map volume couldn't be enhanced much more in there without it just ending up as a big pile of bump.
About the normal map's green channel being flipped: it's not, not for Max at least. Would prolly come out wrong if you were to use Maya though.
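If a map does come out with the green channel pointing the wrong way for your app, the fix is just inverting that channel rather than re-baking. A tiny illustrative Python sketch (the flat pixel-list format here is made up for the example; which app expects which convention is worth double-checking yourself):

```python
def flip_green(pixels):
    """Invert the green channel of an 8-bit RGB normal map, converting between
    the two tangent-space Y conventions (green-up vs green-down)."""
    return [(r, 255 - g, b) for (r, g, b) in pixels]

# A tilted texel reverses its Y lean; a "flat" 128 flips to 127, off only by
# the usual half-texel rounding of 8-bit maps.
pixels = [(128, 200, 255), (128, 128, 255)]
print(flip_green(pixels))  # [(128, 55, 255), (128, 127, 255)]
```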
Edit: just did a quick test using a texture I did a while ago. I used nDo, xNormal, and the Nvidia filter. I don't have CrazyBump anymore so I couldn't use it for an example.
Generally, I would say that the ZBrush bake came out the best, BUT if I were to use a combination of the nDo and xNormal filters, I think that could give better results.