i would really like some feedback from the pros here (not just the pros... but some pro input would be nice). Right now my workflow is as follows: build the low-res in Max / detail it in ZBrush / import the high-res from ZBrush and build the final low-to-medium-res mesh over it / unwrap the low-to-medium-res model / bake normals, etc.
i'm just curious (i've been out of the business for a year, and where i worked we never touched next-gen or high-res assets): since the introduction of ZBrush, is this workflow standard for the industry? Is there a better way to get normal maps than this method? Am i thinking about it too much? hahaha.
Replies
Build the low-res model in XSI, create the UVs for the low-res model in XSI, then optimize this low-res model for subdivision (quads) in XSI. Import the optimized version into ZBrush and create the high-poly model with the 3D paint tools and Projection Master.
Create the normal map in ZMapper (a free ZBrush plugin) and save it as a Targa.
Load the normal map into XSI and render the model with DirectX.
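For anyone unclear on what actually ends up in that Targa: a tangent-space normal map just stores a unit vector per pixel, remapped from [-1, 1] into the [0, 255] RGB range. Here's a minimal Python sketch of that standard encoding; it is generic math, not specific to ZMapper or XSI (some tools flip the green channel, so always check your bake settings).

```python
# Tangent-space normal maps store a unit vector in RGB:
# each 8-bit channel c maps to a component n = c / 255 * 2 - 1.
# Generic sketch of the standard encoding, not tied to any one baker.

def decode_normal(r, g, b):
    """Convert an 8-bit RGB normal-map pixel to a tangent-space vector."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    # Renormalize to undo 8-bit quantization error.
    length = sum(v * v for v in n) ** 0.5
    return tuple(v / length for v in n)

# The "flat" normal-map blue (128, 128, 255) decodes to a vector
# pointing straight out of the surface in tangent space.
flat = decode_normal(128, 128, 255)
```

That flat blue is why an empty normal map looks light purple: it is the surface normal with no perturbation at all.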
For technical stuff I would prefer working in the 3D app of your choice and creating the normal map with its own render-to-texture functions.
hmmmmmmmm.....
What I do is UV-map half my mesh, mirror the geometry and UVs, and shrink the UVs for the mirrored half into a corner of the UV map. Then I use ZMapper to generate the normal map. Go back into the other app, delete the mirrored half and re-mirror, but this time leave the UVs overlapping. Then import the mesh back into ZBrush, select the normal map as the texture, go into ZMapper, and voila.
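The two UV operations in that trick are simple transforms, so here's a hypothetical Python sketch of the math (you'd do this interactively in XSI/Max/Maya, this just shows what's happening to the coordinates; `uvs` is an assumed list of (u, v) pairs for the mirrored half):

```python
# Sketch of the "shrink the mirrored UVs into a corner" step, then the
# final overlapped layout. Purely illustrative; real apps do this in the UV
# editor, and the 0.01 scale factor is an arbitrary example.

def shrink_into_corner(uvs, scale=0.01):
    """Scale UVs toward the (0, 0) corner so the mirrored half occupies a
    tiny region and claims essentially no bake space of its own."""
    return [(u * scale, v * scale) for (u, v) in uvs]

def mirror_u(uvs):
    """Mirror UVs across u = 0.5 so both halves land on the same texels,
    which is the final in-game layout for a symmetric mesh."""
    return [(1.0 - u, v) for (u, v) in uvs]
```

With the overlapped layout, both halves of a symmetric character share the same texel space, so you effectively double your texture resolution for free.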
You can take a mesh back and forth between ZBrush and another app all you want, changing UV layout, pose, etc. Just don't add or delete geometry, because ZBrush matches the mesh up by vertex order.