This is probably a total newb question, but I'm trying to figure out how projection works in ZBrush. How does the low-res mesh retain the high-res detail? What's that whole process in the character production pipeline?
In short, it looks from each point of the low-res asset and measures how far the high-res surface is from that point along the point's normal direction. It then takes that distance (or the normal of the high-res surface as seen from that point) and stores it in a texture.

ZBrush projection is different from texture projection, though. The principle is the same, but instead of storing the result in a texture, it applies it as an offset to the low-res mesh's vertices.

ZBrush texture projection is also different from regular texture projection. Regular texture projection uses ray casting (the process described above) to read the surface properties at a given point. ZBrush texture projection uses UV matching instead: imagine laying both meshes out flat in the shape of the UV map (they must have the same UV layout), then taking a screenshot from the low-res mesh's perspective. This works for some types of maps, but it isn't accurate for vector maps such as normal maps, where the low-res mesh's vertex normals need to be taken into account during the tracing. Needing to UV the high-res mesh is another downside of this technique, since in a lot of cases the low-res mesh has quite different topology.
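To make the ray-casting idea concrete, here's a minimal NumPy sketch (not ZBrush's actual code): for each low-res vertex, cast a ray along its normal, find the nearest hit on the high-res triangles (using the standard Möller–Trumbore intersection test), and apply the hit distance as a vertex offset. The function names `ray_triangle` and `project` are my own, and the brute-force loop over all triangles is for clarity only; a real baker would use a spatial acceleration structure and a search-distance cage.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: signed distance t along the ray to the
    triangle (v0, v1, v2), or None if the ray misses it."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:           # ray is parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    return np.dot(e2, q) * inv   # distance along the ray (can be negative)

def project(low_verts, low_normals, high_tris, max_dist=1.0):
    """For each low-res vertex, find the closest high-res hit along
    +/- its normal and move the vertex onto the high-res surface
    (the offset ZBrush projection applies, instead of baking a map)."""
    out = low_verts.copy()
    for i, (pt, n) in enumerate(zip(low_verts, low_normals)):
        best = None
        for v0, v1, v2 in high_tris:
            t = ray_triangle(pt, n, v0, v1, v2)
            if t is not None and abs(t) <= max_dist:
                if best is None or abs(t) < abs(best):
                    best = t
        if best is not None:
            out[i] = pt + n * best
    return out
```

Storing `best` per vertex into an image instead of moving the vertex is exactly the displacement-map version of the same process.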
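The UV-matching approach can be sketched the same way, under the assumption of a shared UV layout: for each texel's UV coordinate, find the triangle covering it in UV space on each mesh, interpolate the 3D position with barycentric weights, and store the difference. All names here (`barycentric_uv`, `sample_at_uv`, `bake_displacement`) are hypothetical, and a real baker would rasterize texels rather than loop over a point list.

```python
import numpy as np

def barycentric_uv(uv, t0, t1, t2):
    """Barycentric weights of a 2D point in a UV triangle,
    or None if the point lies outside it."""
    d = (t1[1] - t2[1]) * (t0[0] - t2[0]) + (t2[0] - t1[0]) * (t0[1] - t2[1])
    if abs(d) < 1e-12:           # degenerate UV triangle
        return None
    a = ((t1[1] - t2[1]) * (uv[0] - t2[0]) + (t2[0] - t1[0]) * (uv[1] - t2[1])) / d
    b = ((t2[1] - t0[1]) * (uv[0] - t2[0]) + (t0[0] - t2[0]) * (uv[1] - t2[1])) / d
    c = 1.0 - a - b
    if a < -1e-9 or b < -1e-9 or c < -1e-9:
        return None
    return a, b, c

def sample_at_uv(uv, tris):
    """tris: list of ((uv0, uv1, uv2), (p0, p1, p2)) pairs.
    Returns the interpolated 3D position at this UV, or None."""
    for uvs, ps in tris:
        w = barycentric_uv(uv, *uvs)
        if w is not None:
            return w[0] * ps[0] + w[1] * ps[1] + w[2] * ps[2]
    return None

def bake_displacement(uv_points, low_tris, high_tris):
    """Because both meshes share a UV layout, the same UV coordinate
    addresses corresponding surface points; store their difference."""
    out = []
    for uv in uv_points:
        lo = sample_at_uv(uv, low_tris)
        hi = sample_at_uv(uv, high_tris)
        out.append(hi - lo if lo is not None and hi is not None else None)
    return out
```

Note there is no ray here at all, which is why vector data like normal maps comes out wrong: nothing in this sampling accounts for the low-res vertex normals.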