Hi all,
I'm baking my maps with xNormal and I've found that it pays off to keep a separate "baking mesh" with decent polygon distribution to use with xNormal. I then use the generated maps on my game mesh, which has the same UVs as the baking mesh but more optimized geometry...
It just gets a bit difficult if you need to change the UVs. I just redid the UVs on one (fortunately) simple piece, rebaked my maps and tried copying the UVs over to the game mesh - doesn't work! If I paste the bake mesh's UVs onto the game mesh, it isn't affected at all; the previous UVs remain.
I ended up redoing the entire (game) piece, which is not an option for anything more complicated. Is there a better way to do this?
Replies
As a matter of interest, are you baking normal maps with a different mesh to the game-res asset? This sounds like an exceptionally bad idea, as normal maps are tied to the shading of the model. If the topology/smoothing changes it can cause really bad artefacts.
I think perhaps you're better off doing things right in the first place and making a low-res model that is baking-friendly rather than fiddling with hacks.
Yes, I am baking the normals on a "different" mesh, but the only difference is a bunch of edge loops that do not alter the boundary edges.
I first tried to bake directly on the game mesh, but I seem to get a very poor map if I do that. For example, consider a squarish metal plate with screws in each of its four corners. The in-game mesh would obviously be a simple 2-tri plane, right?
But if I bake on that, I get a very distorted map, as if seen through a fish eye lens or something. The screw heads get elongated ridiculously.
I found that it helps a lot if I instead bake on a very dense version of that mesh and then remove the extra loops (the UVs are not affected) for the in-game mesh.
I have no idea why this happens, or whether fixing it like this is a good idea - please tell me if there's a way to bake good maps without this intermediate mesh.
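For what it's worth, one plausible cause of that fisheye look (an assumption on my part, not something confirmed in this thread) is that baking rays are cast along directions interpolated from the cage's vertex normals: on a 2-tri plate whose corner normals are averaged with the side faces of the box, the rays lean outward across the whole face, while extra interior loops with straight normals keep most rays perpendicular. A toy 2D cross-section in Python (all names and numbers illustrative, not xNormal internals):

```python
# Hypothetical 2D cross-section of a baking cage: shows how ray directions
# interpolated from tilted corner normals skew the projection.

def lerp(a, b, t):
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(len(a)))

def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

# Corner normals of a box face, averaged with the side faces: tilted 45 deg.
n_left  = normalize((-1.0, 1.0))
n_right = normalize(( 1.0, 1.0))

# Sample the interpolated ray direction at a few points across the face.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    ray = normalize(lerp(n_left, n_right, t))
    # A nonzero x-component means the ray leans sideways -> smeared bake.
    print(t, round(ray[0], 3))

# An interior loop with a straight (0, 1) normal at the midpoint confines
# the tilt to the outer half-spans, so most of the surface bakes straight:
n_mid = (0.0, 1.0)
d = normalize(lerp(n_left, n_mid, 0.8))  # 80% of the way to the mid loop
print("with loop:", round(d[0], 3))
```

Only at the exact center is the interpolated ray straight; it leans further sideways toward each corner, and the interior loop limits that lean to the outer spans, which would match why the dense bake mesh helps.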
I just tried a simple test to prove my point and of course the maps came out identical this time! I wish I knew what caused the distortion before, because I thought I did everything the same way, but now I can't reproduce it.
I'm baking the maps with xNormal. I export the hi-poly from ZBrush and the low-poly and the cage from Max.
For this example I made a box in Max, converted it to editable poly and added something like six loops both horizontally and vertically. I took it to ZBrush and sculpted some "screws" on the corners. Then I took everything into xNormal and baked away. Next, I tried to prove my point by removing the extra edge loops from the low-poly and the cage - alas, the result was an identical map with no distortion whatsoever.
If anyone has an idea what may have caused that, please let me know so I can avoid it in the future. It looked like looking straight down at an object in perspective view rather than orthographic view.
Instead you can use the same mesh: unwrap the model to a second UV channel (UV2), then in the RTT window set it to render to UV2. It will transfer everything from UV1 to UV2. Then you save out UV2 and load it into channel UV1.
A while ago I wrote a mini tutorial on doing it.
http://www.polycount.com/forum/showthread.php?t=58411
I didn't use the projection modifier at all for these, as I baked them in xNormal.
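To illustrate the UV1-to-UV2 transfer described above, here's a toy model in Python of what rendering to the second channel conceptually does (this is not Max or xNormal code; the data layout, corner ordering and names are invented for illustration): each surface point carries both a UV1 and a UV2 coordinate, and the new map's texel at the point's UV2 position receives the old map's color at the point's UV1 position.

```python
# Toy re-projection: transfer a map addressed by UV1 into a UV2 layout,
# for a single quad face with corners ordered bl, br, tr, tl.

def bilerp(c, s, t):
    """Bilinear interpolation of (u, v) corner pairs at parametric (s, t)."""
    bottom = tuple(c[0][i] + (c[1][i] - c[0][i]) * s for i in range(2))
    top    = tuple(c[3][i] + (c[2][i] - c[3][i]) * s for i in range(2))
    return tuple(bottom[i] + (top[i] - bottom[i]) * t for i in range(2))

uv1 = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]          # old layout
uv2 = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]  # new layout

def old_map(u, v):
    # Stand-in for the map baked against UV1: the "color" simply encodes
    # the UV1 position it was baked at, so the transfer is easy to follow.
    return (u, v)

# Transferring the face center: the texel at its UV2 position receives the
# old map's color at its UV1 position.
s, t = 0.5, 0.5
texel_pos = bilerp(uv2, s, t)        # where it lands in the new map
color = old_map(*bilerp(uv1, s, t))  # what gets written there
print(texel_pos, color)
```

Since every surface point is looked up through the face it lies on, the baked detail follows the geometry into the new layout, which is why nothing has to be rebaked from the high-poly.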
Correct me if I'm wrong, but it seems to me that you can simply delete the extra edges without affecting the UVs - and the maps look perfect on the optimized meshes as well.
Now, the real question is whether I have been doing all that in vain, as yesterday's test suggests. Oh well, I'll post here again if I catch that distortion again.
Yes, except for edges that lie on a UV border. If you delete one of those, things get messy.
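That border-edge caveat can be stated as a simple rule: an edge is safe to remove only if the faces on both sides assign the same UVs to its vertices; on a UV seam the two sides carry different UVs, so deleting the edge has to discard one of them. A minimal sketch in Python (the data layout is invented for illustration, not Max's internals):

```python
# An edge is a UV seam if the faces sharing it disagree on the UV
# coordinates of its two vertices.

def is_uv_seam(edge, face_uvs):
    """face_uvs: one {vertex_id: (u, v)} dict per face sharing the edge."""
    v0, v1 = edge
    sides = {(f[v0], f[v1]) for f in face_uvs}
    return len(sides) > 1  # differing UVs on either side -> seam

# Interior loop edge: both faces agree on the UVs -> safe to delete.
interior = is_uv_seam((4, 5), [{4: (0.5, 0.0), 5: (0.5, 1.0)},
                               {4: (0.5, 0.0), 5: (0.5, 1.0)}])

# Seam edge: the two UV islands store different UVs for the same vertices.
seam = is_uv_seam((4, 5), [{4: (0.5, 0.0), 5: (0.5, 1.0)},
                           {4: (0.0, 0.0), 5: (0.0, 1.0)}])
print(interior, seam)  # False True
```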