To make this simple, think of pores on a normal-mapped face: something you don't want to spend the time creating on the hi-poly master.
So after creating a high-poly character and rendering the normal map onto the low-poly version (3D), I was shown a method of then creating details like pores from plain black areas with an NVIDIA normal map plugin tool (2D).
(I'm using 2D vs 3D from this point on to distinguish the two.)
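For anyone who hasn't used the NVIDIA plugin: conceptually it turns a grayscale height image into a normal map by measuring slopes. Here's a rough numpy sketch of that idea; this is not the plugin's exact filter, and the `strength` parameter and channel sign conventions are assumptions that vary between tools:

```python
import numpy as np

def height_to_normal(height, strength=2.0):
    """Convert a grayscale height image (floats in 0..1) to a tangent-space
    normal map, roughly what height-to-normal filters do. Dark (pore)
    pixels become dents; flat areas stay the neutral (128, 128, 255)."""
    # Finite-difference slopes in x and y.
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    # Normal is perpendicular to the slope; normalize per pixel.
    # (The sign of the green channel differs between conventions.)
    n = np.dstack((-dx, -dy, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Remap components from [-1, 1] to the [0, 255] color encoding.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)
```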
The next step I was given was to lay a light-transparency layer of the "2D" normal map over the 3D version for the final. The question I have, or more a problem I'm seeing, is this.
Since the 2D version has only one static background depth, while the 3D version has multiple depths, won't this create some areas from the 2D version that end up raised too high or too low in the mixed version?
For example, in a concave area the 2D normal map would read correctly in the "mid" portion of the curve, but at the top or bottom of the concave it would be too apparent, or too lacking, if you simply did a transparency blend. It also affects the 3D version's overall depth, since you are again adding a one-color background over something that isn't flat.
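To put some made-up numbers on that worry: a plain transparency blend drags every baked pixel toward the 2D layer's flat background color, flattening the bake. A quick illustration with hypothetical pixel values:

```python
import numpy as np

# Hypothetical pixel values, just to illustrate the flattening.
baked = np.array([200.0, 128.0, 180.0])  # a tilted normal from the 3D bake
flat  = np.array([128.0, 128.0, 255.0])  # the 2D layer's neutral background

# A straight 50% transparency blend:
mixed = 0.5 * baked + 0.5 * flat
print(mixed)  # [164. 128. 217.5] -- pulled halfway back toward flat
```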
Am I missing something, or is there another method of combining them that would make the 2D version "follow" the 3D version's depth? (Other than, say, ZBrush.)
Replies
Hope this helps!
PS: Photoshop's description of the Overlay mode:
[ QUOTE ]
Overlay
Multiplies or screens the colors, depending on the base color. Patterns or colors overlay the existing pixels while preserving the highlights and shadows of the base color. The base color is not replaced but is mixed with the blend color to reflect the lightness or darkness of the original color.
[/ QUOTE ]
So is the base color being sampled from the layer below, or from the layer the Overlay is on itself? "The base color is not replaced but is mixed with the blend color" still seems to imply a mixture happening, versus just using the original layer's depth.
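For what it's worth, the commonly documented Overlay formula answers the sampling question: "base" is the composite of the layers underneath, and "blend" is the Overlay layer itself. A per-channel sketch, with values in 0..1:

```python
def overlay(base, blend):
    # base: from the layers below; blend: the layer set to Overlay.
    if base < 0.5:
        return 2 * base * blend                  # multiply: darks stay dark
    return 1 - 2 * (1 - base) * (1 - blend)      # screen: lights stay light

# A blend value of exactly 0.5 leaves the base untouched:
print(overlay(0.3, 0.5), overlay(0.8, 0.5))  # 0.3 0.8
```

Note that a blend value of exactly 0.5 (i.e. 128) leaves the base unchanged, which is why the halve-the-blue-channel trick later in the thread preserves the baked map's depth.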
Edit: Stealing your image pior for future reference.. MU HA HA (thanks)
That's what I'm trying to figure out.
Gmanx, that was my original plan, but most engines only support one or the other. The one in specific I'm thinking of only does normal maps.
[ QUOTE ]
Gmanx, that was my original plan, but most engines only support one or the other. The one in specific I'm thinking of only does normal maps.
[/ QUOTE ]
Disclaimer: I haven't done this myself yet, but I was told it works 100%
With ORB you can just put a (hand-painted) bumpmap on the low-poly model (!! no need to UV-map the high !!) before rendering the normal map. The bumpmap is then automatically included in the normal map (your normal map needs enough resolution, of course).
In the end you have just one normal map, but the extra detail from the bumpmap is included.
Pior: When I did the detail pass on Ryoka I used a normalmap-generator material (one that represents the surface normals as colors, using three gradients with texture coordinates set to normals). That's more accurate and saves you the NVidia pass. It also lets you continue a surface at a different height, since it doesn't care about height, just the normals. I've faked a lot of stuff that way.
Compare:
Perspective, using the diffuse colors, shadowing and ambient occlusion to show the trickery:
Ortho, normalcolor texture and fullbright, used as the detail normalmap (after a levels -> B 0-127 pass):
Note how the height discontinuities in the first render don't show up in the second?
1. Halve the intensity of the blue channel on the top layer (make it max out at 128 instead of 255)
2. Set top layer to overlay
3. Merge layers
4. Renormalize
Works perfectly for me. If you forget to lower the intensity of the blue channel, you'll destroy the info in your bottom layer, the original: in the 2D case the detail layer's blue is almost always pure white, which wipes out the generated map's blue, where the important values live. Putting it at 128 preserves them.
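In script form, for anyone who'd rather batch those four steps than click through Photoshop. A minimal sketch using numpy and Pillow; the function name and paths are mine, not from the thread:

```python
import numpy as np
from PIL import Image

def combine_normal_maps(base_path, detail_path, out_path):
    """Steps 1-4 above: halve the detail layer's blue, Overlay it
    onto the baked map, merge, then renormalize."""
    base = np.asarray(Image.open(base_path).convert("RGB"), dtype=float) / 255.0
    detail = np.asarray(Image.open(detail_path).convert("RGB"), dtype=float) / 255.0

    # Step 1: halve the detail layer's blue so it maxes at ~128 instead
    # of 255, so Overlay won't wipe out the baked map's blue channel.
    detail[..., 2] *= 0.5

    # Steps 2-3: per-channel Overlay blend, merged into one image.
    merged = np.where(base < 0.5,
                      2.0 * base * detail,
                      1.0 - 2.0 * (1.0 - base) * (1.0 - detail))

    # Step 4: renormalize so every pixel encodes a unit-length normal.
    n = merged * 2.0 - 1.0
    n /= np.maximum(np.linalg.norm(n, axis=2, keepdims=True), 1e-6)
    out = ((n * 0.5 + 0.5) * 255.0).round().astype(np.uint8)
    Image.fromarray(out).save(out_path)
```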
[edit] shit, just noticed kdr basically pointed out the same thing, and fixed it from screen to overlay [/edit]
Also, in Max7, if you apply a bump map to the high-poly mesh (procedural textures work well) and then render that down to a low-poly model's normal map, the bump information is included in the normal render, which is nice.
Then again, if what Mr Makowka says is true, ORB has an even better method.
Anyway, if you did that, you might as well just go the Photoshop route, since you'd have a painted greyscale map anyway. I was mainly thinking of this technique as using procedural textures to add quick repetitive bumpiness detail (like for cast plastic, or beaten copper etc.) ... much faster than painting it all by hand.
Yeah, the method I posted (some months ago) might not be 100% accurate, but since there's always a lot of tweaking to be done, I think it's the end result displayed on the model that matters.
Thanks for the levels trick, sounds good.
KDR, this 'normalmap generator material' sounds like a great tool. Is that app-specific? (I believe you use Blender?) I guess it's also doable with a Max7 render-to-texture with rays launched from a top-down flat plane... or by going the oldschool way, with three R, G, B lights around the objects to be normal-rendered, with no shadowing and no light distance attenuation either.
Hmmm can't wait to try this out
If, for example, you wanted a seam that runs through a large part of a model, it would have to change colors depending on the direction the faces are pointing, just like the baked normal map does, which it won't do.
So while on some parts of a model a detail might look beveled, on other areas it will look inverted, or a mix of both. It's usually fine for local details, but anything extensive would require retouching the colors in some areas or inverting some of the colors completely.
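A tiny numeric illustration of that inversion, with hypothetical vectors: a painted color always decodes to the same fixed direction, so it agrees with some faces and fights others.

```python
import numpy as np

# Decoded painted detail color: leans +X, drawn assuming a +Z-facing surface.
painted = np.array([0.3, 0.0, 0.954])

for face in (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])):
    # How the painted normal sits relative to each face's outward direction:
    print(face, "->", np.dot(painted, face))
# +Z face:  0.954  (points outward, shades as intended)
# -Z face: -0.954  (points inward, shades inverted)
```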
Or just Z-Brush the whole thing, pores and all!
pior: In Blender you can define the mapping-coordinate source for a texture; for the normalmap material you need to use "normals" as the source and map X to U for red, Y to U for green and Z to U for blue. The textures are gradients (a special texture type) that go from RGB 0 0 0 to the corresponding color. All channels are set to additive and the material to shadeless. I'm sure your app can do something similar. Lights might not work, because the lighting algorithms would include the other two coordinates of the normal vector, and because it might be hard to get an exact 0-255 gradient going.
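In numbers, assuming the usual (n+1)/2 color encoding, that gradient setup boils down to this per-pixel mapping (a sketch, not Blender's actual code):

```python
import numpy as np

def normal_to_color(n):
    # Each normal component drives one additive black-to-R/G/B gradient,
    # i.e. the [-1, 1] component range maps to [0, 255].
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return tuple(int(round(c)) for c in (n * 0.5 + 0.5) * 255.0)

print(normal_to_color([0, 0, 1]))  # straight up -> (128, 128, 255)
print(normal_to_color([1, 0, 0]))  # facing +X   -> (255, 128, 128)
```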
http://www.blender3d.com/cms/Normal_Maps.491.0.html
Maybe a bit complicated to set up but the results seem nice and crisp. Too bad that Blender can't display tangent-space maps properly tho.
I guess that a procedural texture could do that in max too... Got to try this out
I guess you know about the lighting method.
http://www.pinwire.com/article82.html
I've used this, but it's view-dependent.
Procedurally, I think you could set up a combo of Falloff maps, which I guess in theory could be rendered properly in UV space with Render To Texture... kind of a cool idea.
I don't know much about procedural textures, and now that I think of it I believe it might be a bit hard to set up.
Hey, by the way KDR, I've tried using a flat plane as a source to get normal-map details from a complex scene, and it worked wonderfully well, exactly like what you showed in the shadowed/normal comparison pic. A really easy technique, very fast and simple to set up in Max7. I'm loving it
Hence the technique I explained in the mini-tut is only worth it for apps with no render-to-texture/baking options... Aw, time for a tut update