Of interest to most who will read this thread, I wrote an extensive
tutorial that covers this topic and many other common baking issues.
It's geared towards baking in Toolbag, but most of the Basics and Best
Results sections apply universally. Check it out on the Marmoset site: https://www.marmoset.co/posts/toolbag-baking-tutorial/
WATCH THIS FIRST:
Alec Moody put together some fantastic video tutorials to go along with the release of Handplane, which do a very good job of explaining many of the concepts in this thread and provide some great real-world examples.
Ok, so this has always been a common topic on Polycount/Tech Talk. How should you smooth your model? Smooth the entire thing? Use hard edges? It's something that has always had a certain amount of confusion surrounding it, so I am going to try and do a thorough writeup here.
First things first, let's define some terminology. “Cage” can be a little confusing, and app-specific, so I'll try not to use that term. There are two basic methods for projecting a normal map bake.
- “Averaged projection mesh”
This method will ignore your lowpoly mesh normals, meaning if you have hard edges/smoothing or any sort of custom edited normals on your lowpoly the “bake” mesh will average everything. The biggest advantage here is that you can use hard edges without any seams or negative artifacts.
- “Explicit mesh normals”
This method uses the lowpoly mesh normals directly as your projection direction. What this means is if you have hard edges or smoothing group splits on your lowpoly, your “bake mesh” will also have those splits, and you will get gaps in your bake in those areas.
If you're using “Explicit mesh normals”, but making sure to avoid any hard edges/smoothing group splits, all you're really doing is using an “Averaged projection mesh” without its inherent benefits (like the ability to use hard edges). Suggesting others do so is akin to saying “never use triangles” or “always use quads”; it's simply bad advice that doesn't tell the whole story, and it confuses novice users more than it helps.
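The difference between the two projection methods can be sketched in plain Python. This is a minimal model, not any baker's actual API: a vertex is a (position, normal) pair, and a hard edge is represented as two vertices sharing a position but carrying different normals.

```python
# Sketch of the two projection methods (hypothetical data layout, not any
# particular baker's API). A hard edge = duplicated position, split normals.

def normalize(v):
    x, y, z = v
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)

def averaged_ray_dirs(verts):
    """'Averaged projection mesh': normals at identical positions are
    averaged together, so hard edges bake with no gaps."""
    by_pos = {}
    for pos, normal in verts:
        by_pos.setdefault(pos, []).append(normal)
    avg = {pos: normalize([sum(c) for c in zip(*normals)])
           for pos, normals in by_pos.items()}
    return [avg[pos] for pos, _ in verts]

def explicit_ray_dirs(verts):
    """'Explicit mesh normals': each split vertex keeps its own normal,
    so rays diverge at hard edges and leave gaps in the bake."""
    return [normalize(n) for _, n in verts]

# A hard edge: two verts share a position but have different face normals.
verts = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
         ((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))]

print(averaged_ray_dirs(verts))  # both rays point the same way: no gap
print(explicit_ray_dirs(verts))  # rays diverge 90 degrees: a gap in the bake
```

With the averaged method both vertices cast rays in the same direction, which is exactly why the hard edge costs you nothing in the bake; with explicit normals the two rays fan apart and miss a sliver of the highpoly.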
How do I know which method I am using?
3ds Max:
- By default, you are using an “Averaged projection mesh”.
- If you go into the RTT options and turn on “Use offset”, you are essentially switching from “Averaged projection mesh” to “Explicit mesh normals” for projection.

Maya:
- When you select “match using: Geometry Normals”, the default setting in the Transfer Maps dialog, you are using an “Averaged projection mesh”.
- If you change the setting to “match using: Surface Normals”, you are using “Explicit mesh normals” for projection.

Xnormal:
- By default, if you use the simple ray trace distance settings, you are using “Explicit mesh normals” for projection.
- If you load your own cage, exported from Max for instance, you are using an “Averaged projection mesh”. If you set up your own cage within Xnormal’s 3D editor and save it out, you are also using an “Averaged projection mesh”.
So why would you ever use “Explicit mesh normals” for projection? Well, at first glance the quick Xnormal ray trace settings seem faster, but what you gain in a minute amount of workflow speed, you lose in flexibility. You can't use hard edges without gaps, and you don't get a nice visual representation of your projection mesh like you would with Max's projection modifier or XN's cage editor. In Maya you get the same “envelope view” either way.
Some users will use “Explicit mesh normals” as a means to get around “waviness” or “skewed details”, using hard edges and thus opting for gaps/seams instead of projection errors. This mentality is flawed, however; understanding how your mesh normals work and affect projection direction is usually the solution. See thread: http://www.polycount.com/forum/showthread.php?t=81154
When you use an “Averaged projection mesh”, the use of hard edges is completely irrelevant to the baked end result. A bake with hard edges and a bake with all soft edges will look exactly the same.
When considering the use of hard edges, you need to consider the implications. Wherever you have a hard edge or a uv seam, your “in-game” vertices are doubled in those areas. More than your triangle count or your vertex count, the “in-game” vertex count is what really affects performance in most game engines.
Some general rules with that in mind:
- To avoid artifacts, you must split your uvs where you use hard edges.
- You do not, however, NEED to use hard edges wherever you split your uvs (as some may suggest)
- With an “Averaged projection mesh” you can use hard edges along uv borders with no negative side effects. Because the verts are already doubled at your uv seams, you get it for free. Your verts do not “triple” or “quadruple” if you have both a uv seam and a hard edge in the same place.
These are the basic facts of life when dealing with uv seams, hard edges, and “in game” vertex count. There are other things that will contribute, like material seams and so forth but that is a different topic.
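The vertex-doubling rules above can be sketched with a toy model. The key fact: a GPU needs one vertex per unique (position, normal, uv) combination, so a hard edge that coincides with a uv seam doesn't split anything the seam hasn't already split. The data layout here is hypothetical, just enough to show the counting.

```python
# Rough model of "in-game" vertex count: the GPU needs one vertex per
# unique (position, normal, uv) combination. Hypothetical data, not any
# engine's API.

def ingame_vert_count(corners):
    """corners: one (position, normal, uv) tuple per triangle corner."""
    return len(set(corners))

pos = (0.0, 0.0, 0.0)

# Soft edge, no uv seam: both corners collapse to one in-game vertex.
soft = [(pos, "n_avg", "uv0"), (pos, "n_avg", "uv0")]

# UV seam only: the vertex doubles.
seam = [(pos, "n_avg", "uv0"), (pos, "n_avg", "uv1")]

# Hard edge AND uv seam in the same place: still only doubled; the
# normal split rides along with the uv split "for free".
both = [(pos, "n_a", "uv0"), (pos, "n_b", "uv1")]

print(ingame_vert_count(soft), ingame_vert_count(seam), ingame_vert_count(both))
# 1 2 2
```

Note how `seam` and `both` give the same count: once the uv seam has doubled the vertex, adding a hard edge in the same place is free, which is the whole argument for hardening edges along uv borders.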
When using a synced normals workflow, meaning that your normal map baker and game engine are synced up to provide accurate, reliable display of normal maps between the two, you can often get away with fewer uv seams, because you no longer need to use as many hard edges to avoid smoothing errors; in some cases you will not need any hard edges at all! This is really awesome, and it makes a huge difference in production speed and quality, as it enables you to simply model, and not worry about doing 20 test bakes to avoid smoothing errors and things of that nature.
However, and this is really important, just because you have a synced workflow does not mean you should NEVER use hard edges (again, assuming you're using an “Averaged projection mesh”). In fact, there is simply no drawback to using hard edges along your uv seams with a synced workflow. None whatsoever; it doesn't increase your in-game vertex count (as long as the uv layout is exactly the same between both meshes).
But why would you want to? Aren’t hard edges a thing of the past and only needed to correct old broken workflows? Not really, there are a variety of benefits to using hard edges even with a synced workflow, and most importantly, no drawbacks. Here are a few of the benefits:
- Less extreme gradients in your normal map content, which makes it easier to pull a “detail map” out of CrazyBump without all of those artifacts from the extreme shading changes
- Less extreme gradients which means you will get better, more accurate results when doing LOD meshes that share the same texture, as the normal map doesn’t need to rely so heavily on the exact mesh normals. You may need to have a separate normal map baked for LOD meshes otherwise, which uses up more VRAM.
- Better texture compression, because well, you guessed it, less extreme gradients
- Reduces what I like to call “resolution based smoothing errors”, which happen when you have a small triangle but not enough normal map resolution to properly represent the shading. These usually show up as “little white triangles” in-game. In the same regard, it improves how well your normal map will display with smaller mip maps.
- It's actually very easy to add hard edges along your uv borders with a Max/Maya script. In Max there is a function in Renderhjs' awesome TexTools script set: http://www.renderhjs.net/textools/ . In Maya you can use this script written by MoP/Paul Greveson: https://dl.dropbox.com/u/499159/UVShellHardEdge.mel . So it's not a huge workflow hit to do any of this stuff; it's actually really simple and easy. Though you may occasionally need to go in and set some of your edges back to soft, in cases where you have complex partial mirroring, or on the “seam” edge of cylinders and other soft objects.
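What those "harden uv borders" scripts actually do boils down to one test: an edge lies on a uv border when the two faces sharing it use different uvs for the same positions. Here is a minimal sketch of that test in plain Python, outside any DCC app; the mesh layout (positions as integer indices, triangles as lists of (position, uv) corners) is entirely hypothetical.

```python
# Sketch of uv-border detection, the core of the "harden uv seams"
# scripts. Hypothetical mesh layout, not Max's or Maya's API.

def uv_border_edges(faces):
    """faces: list of triangles, each a list of (position, uv) corners.
    Returns the set of edges (position pairs) lying on a uv border -
    exactly the edges the scripts would set to hard."""
    edge_uvs = {}
    for face in faces:
        n = len(face)
        for i in range(n):
            (p0, uv0), (p1, uv1) = face[i], face[(i + 1) % n]
            edge = tuple(sorted((p0, p1)))
            uvs = tuple(sorted(((p0, uv0), (p1, uv1))))
            edge_uvs.setdefault(edge, set()).add(uvs)
    # An edge used with more than one uv pairing is a seam/border.
    return {e for e, uvs in edge_uvs.items() if len(uvs) > 1}

# Two triangles sharing edge (1, 2), with split uvs across that edge.
faces = [
    [(0, (0.0, 0.0)), (1, (1.0, 0.0)), (2, (1.0, 1.0))],
    [(1, (0.2, 0.5)), (3, (0.9, 0.5)), (2, (0.2, 0.9))],
]

print(uv_border_edges(faces))  # {(1, 2)}: that edge would be set to hard
```

The interior edges each appear with a single uv pairing and stay soft; only the shared edge with mismatched uvs gets flagged, which matches the "hard edges only along uv seams" rule described above.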
Now you may be thinking to yourself “But I'm using a synced workflow, I don't need to use any hard edges”, and you would be correct. If the benefits brought up above do not appeal to you in any way, and you're using a synced workflow, there is absolutely no reason that you need to use hard edges. There is also absolutely no reason that you need to avoid hard edges either, provided you are using an “Averaged projection mesh”, which you should be doing!
Destructive baking workflows
Ok, so now I want to go over something I refer to as “destructive baking workflows”. To me, this is anything that needs to be redone entirely when you re-bake your mesh. Often in production you will get change requests that involve editing uvs and re-baking, or re-baking for some other reason. When we start doing a lot of stuff to our normal maps after the bake, we're piling on work that needs to be redone with every change request. Oftentimes another artist entirely will need to work on your asset, and if you've done all sorts of voodoo magic to your maps after the bake, the poor SOB will have no idea how to reproduce your bake.
So what do I generally consider “Destructive baking workflows”?
- Painting out “wavy lines”, again when we understand how the projection direction of our mesh works this is easy to fix in geometry, and quite often results in a more attractive model
- Hand editing your “cage” mesh; it's fiddly work that will need to be redone for every re-bake. Generally this means manually moving/scaling vertices in your cage. This is mostly typical of Max users. You can do it in Maya too, but there it only affects projection distance, not angle like in Max, so it's mostly useless. I think Xnormal works the same as Max. Generally speaking, if you're getting errors that require a lot of hand cage editing to fix, you can probably make a lot of improvements to your actual lowpoly geometry to improve the “bakability” of your model
- Combining an “Averaged projection mesh” bake and an “Explicit mesh normals” bake when using hard edges, to get the projection-error benefits of explicit normals and the edge/seam benefits of averaged normals. This is a method I actually wrote about many years ago, but it really isn't worth the hassle.
Note: I do not consider the basic “push” values in Max or the “envelope %” settings in Maya to be “hand editing your cage” or “destructive baking workflows”, as these settings will need to be set in some form regardless of baking method.
Now, there are a couple situations where you should be able to use these sort of “destructive baking workflows” without consequence:
- If you’re doing personal art and you know your model will never need to be rebaked
- If you’re doing professional work and you know your model will never need to be rebaked. This is often almost impossible to know however, many things can happen during the course of production that would require a model be edited and thus rebaked.
IMAGES YOU SAY?
Ok, so first things first. This is a test mesh I created to show off 3Point Shader's quality mode. What that means is this mesh was created specifically for a synced normals workflow and to show off the benefits of that workflow; there isn't an excessive number of uv seams, in fact there are fewer than I would normally use even with a synced workflow.
A: Soft edges for the entire model, "Averaged projection mesh"
B: Hard edges at uv seams, "Averaged projection mesh"
C: Hard edges at uv seams, "Explicit mesh normals" projection.
As you can plainly see, there are absolutely no visual drawbacks to using method B. None, nadda, zip. There aren't visual seams, aliasing, or any other issues like that. Even where you might expect them, like the softer shapes at the front of the sight where I would normally soften the edges (helps with lods mostly).
In fact, if anything B looks the best, as there are extra artifacts, again what I like to call "resolution based smoothing errors", on A in more spots than on B (they have the same issues where the smoothing is the same, of course).
Now you may think this is super subtle, and yes it is, but the simple fact is B gives better results than A. When we get further down the mip chain this issue becomes more apparent on the larger shapes.
Now, we can get into a very subjective discussion about how important this really is, because naturally with lower mip maps you're going to be viewing the object from a further distance, but it is again very clear that B gives you better results. Also, most games have texture quality settings, so if a user is playing on low or medium, they're going to see these mips sooner. Even if your model uses a 4096 texture, that doesn't mean that is what will be displayed in game; in fact, it will almost never use that high a mip in a real situation. Only when the mesh in question is larger than your screen resolution would it use a mip that high.
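To put some rough numbers on that last claim: hardware picks the mip level where one texel covers about one screen pixel, so the mip sampled is roughly log2(texture size / screen pixels covered). This back-of-the-envelope version (the numbers are illustrative, not from any particular engine) shows how quickly a 4096 map drops down the mip chain.

```python
import math

# Back-of-the-envelope mip selection: the hardware samples the mip where
# one texel covers roughly one screen pixel. Illustrative only.

def approx_mip(texture_size, screen_pixels_covered):
    """Approximate mip level sampled when a square texture of
    texture_size texels covers screen_pixels_covered pixels across."""
    if screen_pixels_covered <= 0:
        return int(math.log2(texture_size))  # smallest mip
    mip = math.log2(texture_size / screen_pixels_covered)
    return max(0, int(mip))

# A 4096 texture on a prop covering 512 pixels of screen width is
# already down at mip 3 (512 texels across), nowhere near the top mip.
print(approx_mip(4096, 512))   # 3
print(approx_mip(4096, 4096))  # 0: coverage must match the texture size
```

So the full-resolution mip is only ever sampled when the mesh fills the whole texture's worth of screen pixels, which is why the quality of the lower mips matters so much in practice.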
Here we have method C, and the problems here should be evident. With this projection method, you create seams along any hard edges from the gaps in projection. I won't do a huge write up here, this is simply something you should avoid doing.
I also did not bother to show Soft edges with "Explicit mesh normals" projection, as that gives you the same results as method A.
Here is the normal map content. I did not bother doing a compression comparison, simply taking a look at the content should suffice here.
Here is an example showing what happens when you try to pull a "detail map" out of CrazyBump with methods A and B, shown at 300% zoom from Photoshop so it's clear what I'm talking about. This stuff can be a pain to edit out if pulling these detail maps is part of your workflow. The fewer of these little artifacts, the better.
There, that should cover the visual examples for all of the benefits of using method B over method A.