Hi,
I have been making models in 3ds Max for 2 years. I have been learning ZBrush for a week and have a question.
I studied the Introduction to ZBrush course from Digital Tutors and saw that it creates normal maps using the subdivision levels. The author creates the map from the highest level (4.4 million polygons), applies it to the lowest level (4,500 polygons) in Maya, and gets a perfect result. So why do I see on every other source that people always create a separate low-res mesh (modelled from scratch) by referencing the high-poly mesh (the 4.4-million-poly one)?
Isn't that doing the job twice? Why do they do it? What am I missing here? Why don't we just use the lowest subdivision mesh as the low-res mesh?
I am asking this in the context of game development; I want to learn ZBrush to create characters and other assets for games.
Thanks in advance to everyone who helps answer this.
Replies
Creating another, lower-poly version of that mesh gives you greater control over what geo you actually need and what geo can just be faked with normal maps.
What about creating the low-res mesh by referencing the lowest subdiv level from ZBrush, since the normal map applies to it perfectly? I imagine it would be easier that way, and you don't have to deal with a cage when creating the normal map in ZBrush, whereas in other programs you do, and that also creates problems like stretching, waviness, etc.
This. Topology that is ideal for subdividing and sculpting on can look completely different from topology that is ideal for being animated in a video game. This is true regardless of whether you're doing an organic character or a hard-edged object. There are a few cases where a base mesh might work out well enough for both, but I've found those situations are rare enough that you can't count on them, and even then there are sacrifices (using Exocortex's Species as an example: it can be an excellent time saver when the scenario allows, but the meshes still have some areas that are problematic to sculpt on, and the small/dense faces from its edge loops can result in a lot of wasted vertices when subdividing).
I wouldn't recommend editing a mesh after a normal map has already been created. The information in the map is designed to work with a specific set of data from the model it was baked from, and once you start removing and tweaking vertices (possibly changing the vertex normals in the process), the normal map may no longer line up with the data it needs to produce the result it was baked for.
Then again, god only knows how ZBrush bakes its maps, since you don't get to control things like vertex normals in ZBrush anyway. I've found you can get away with using ZBrush normal maps with a software renderer like Mental Ray for something like an illustration, but when it comes to games I can't readily think of a time where I've seen a model and (tangent) normal map go straight from ZBrush to an engine without some kind of error resulting from the simplicity of the bake (in 2009 Pixologic removed the ability to preview normal maps inside ZBrush because there was no telling how the result would actually look in the final engine). There's a rough sketch below of why the low poly's vertex normals and tangent basis matter so much here.
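To make that concrete, here's a toy sketch in plain Python with made-up numbers (not any particular baker's or engine's actual code) of how a tangent-space normal map texel gets decoded at render time. The texel is only half the data; the other half is the low poly's per-vertex tangent basis, so editing the mesh after the bake, or decoding with a different app's basis, changes the shading even though the map never changed.

def normalize(v):
    length = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def decode_tangent_normal(texel_rgb, tangent, bitangent, vertex_normal):
    # Map the 0..1 texel color back to a -1..1 vector
    tx, ty, tz = (2.0 * c - 1.0 for c in texel_rgb)
    # Rebuild the shading normal from the low poly's per-vertex TBN basis
    n = tuple(tx * tangent[i] + ty * bitangent[i] + tz * vertex_normal[i] for i in range(3))
    return normalize(n)

# Same texel, two different vertex normals (as if the low poly was edited
# after the bake, or another app built the basis differently):
texel = (0.5, 0.6, 0.9)
print(decode_tangent_normal(texel, (1, 0, 0), (0, 1, 0), (0, 0, 1)))
print(decode_tangent_normal(texel, (1, 0, 0), (0, 1, 0), (0.3, 0.0, 0.95)))

Same map, different answer, which is exactly the kind of mismatch you run into when a bake from one tool meets an engine that builds its tangent basis its own way.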
If you're going to retopologize anyway (which more times than not you probably should), it really isn't that much work to just do a proper job all around (including a cage, see the sketch below for what the cage is actually doing), and then it's smooth sailing.
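Since the cage came up: here's a deliberately over-simplified 1D toy (made-up numbers, not any real baker) of what the cage controls. The low poly is a flat line at height 0, the high poly is a bumpy curve above it, and the cage is the low poly pushed out along its normals; each texel's ray starts on the cage and travels back toward the low poly, and whatever high-poly surface it hits first is what gets written into the map. Detail that sticks out past the cage starts behind the ray origin and gets missed, which is why a too-tight cage (or a fixed ray distance) leaves holes and why pushing the cage out fixes them.

import math

def high_poly_height(x):
    # Sculpted detail sitting above the flat low-poly surface
    return 0.4 * abs(math.sin(3.0 * x))

def bake(samples, cage_offset):
    baked = []
    for i in range(samples):
        x = i / (samples - 1)
        hit = high_poly_height(x)
        # Ray origin sits on the cage; detail taller than the cage is
        # behind the origin and never gets hit
        baked.append(round(hit, 2) if hit <= cage_offset else None)
    return baked

print(bake(8, cage_offset=0.2))   # cage too tight: some texels miss (None)
print(bake(8, cage_offset=0.5))   # cage pushed out: everything is captured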