So I've been having a problem in Unreal with smoothing errors. Unreal seems to display my normal map much like the Max viewport does for me: the big triangular smoothing artifacts that are on the low poly in Max's viewport are still there on the final normal-mapped asset in Unreal. When I view it in the Marmoset Toolbag viewer, though, it looks perfect.
I've realized that Max's bakes appear to have an inverted green channel for some reason. A Max bake looked really bad in Marmoset until I fixed this; then it looked more or less like the bake I got from xNormal. Does Unreal require this inverted green channel, or perhaps another channel to be inverted? The problems I'm getting look like the ones I had in Marmoset before I inverted the green, which is why I ask.
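For what it's worth, "inverting the green channel" is just flipping the Y component of the tangent-space normal, which converts between the two common conventions. A minimal numpy sketch of the fix (the function name is mine, not from any tool):

```python
import numpy as np

def flip_green(normal_map: np.ndarray) -> np.ndarray:
    """Invert the green (Y) channel of an 8-bit RGB normal map array.

    Converts between the Max-style (Y-) and Maya/xNormal-style (Y+)
    tangent-space conventions.
    """
    out = normal_map.copy()
    out[..., 1] = 255 - out[..., 1]
    return out

# A flat-normal texel (128, 128, 255) stays flat either way; a texel
# tilted "up" in one convention tilts "down" in the other.
pixel = np.array([[[128, 200, 255]]], dtype=np.uint8)
print(flip_green(pixel))  # green 200 -> 55
```

Loading and re-saving the actual texture (e.g. with Pillow) is left out; the point is only that the transform is a one-line channel inversion.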
I've been doing my baking in xnormal with a high and low obj that I exported, but I noticed that it also accepted .ase, which is what I've been feeding UE3, so I thought I'd give that a go too. XNormal gives me this error though:
Invalid function parameter 'msh': The mesh has vertex UV indices but the count is not the same than the position vertex indices. Please sure th mesh->UVIndices.size() is ZERO or the same than the mesh->PositionIndices.size() as specifyed in the SDK documentation.
...which I don't understand at all.
Why does unreal give me shading errors while marmoset displays everything correctly? Would getting my .ase file into xnormal and baking with that fix it?
[edit] Oops I didn't mean to post this yet, I was still editing it into something legible. I'm installing the ActorX plugin now and I'm going to do some tests with it as well.
Replies
This makes sense: xNormal computes tangents/binormals in a way that's closer to Marmoset's engine than Max or UE3 does. Max renders its normal maps to display correctly in an offline render, not in realtime, which explains the smoothing errors in Max renders. And UE3 probably displays the same way as Max's realtime shaders (again, not matched to Max's bakes).
Marmoset uses the Maya/xNormal convention for the green channel; afaik UE3 uses Max's format.
This means your mesh doesn't have UVs, or your UVs are messed up. Check your export options, and make sure you don't have multiple UV channels in Max (it may be exporting the wrong ones).
The best solution would be to bake in xNormal with a format that stores the mesh tangents/normals/binormals exactly as they display in UE3. I'm not sure xNormal supports such a format, though.
Here are images that describe my issues a lot better:
So the low-poly geometry has smoothing errors in the Max viewport. The normal map baked from it contains undulations that match these errors, and I assume they are equal and opposite, so they counteract the errors once the map is applied (I'm looking mainly at the big flat part of the main platform that has those long triangular smoothing errors in it). In every case except the Unreal viewport, the smoothing errors are visible when the mesh is viewed without the normal map, and applying the map corrects them. In the Unreal viewport the mesh appears *without* errors, so when the normal map is applied it tries to correct something that isn't there, and thus *creates* the errors.
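The "equal and opposite" idea can be sketched in a few lines: the renderer rebuilds a shading normal as TBN times the decoded texel, so the baked correction only cancels the low-poly shading if the renderer's interpolated basis matches the one the baker assumed. A toy numpy illustration (not Unreal's actual shader code):

```python
import numpy as np

def decode(sample_rgb):
    """Map an 8-bit RGB texel to a unit tangent-space normal in [-1, 1]."""
    n = np.asarray(sample_rgb, dtype=float) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

def shade_normal(tangent, bitangent, vertex_normal, sample_rgb):
    """Rebuild the world-space normal the renderer lights with:
    the TBN matrix (from the renderer's interpolated basis) times the texel."""
    tbn = np.column_stack([tangent, bitangent, vertex_normal])
    return tbn @ decode(sample_rgb)

# With the same basis the baker assumed, a flat texel yields the
# intended normal (straight up, here):
t, b, n = np.eye(3)
print(shade_normal(t, b, n, [128, 128, 255]))  # ~[0, 0, 1]

# The same texel lit through a *different* interpolated normal (a
# mismatched engine): the baked "correction" now points the wrong way.
skewed_n = np.array([0.2, 0.0, 0.98])
skewed_n /= np.linalg.norm(skewed_n)
print(shade_normal(t, b, skewed_n, [128, 128, 255]))
```

The second print is exactly the situation described above: the map is correcting shading errors the viewport's basis doesn't have.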
This is the frustrating part: the following two images show the Unreal material editor, first with the normal texture node disconnected, then connected. Here the smoothing errors are visible, and they are then fixed when the normal map is applied. Why the hell does the viewport work differently from the material preview window!?
I know a lot of people use this engine, how does everyone else handle this? I couldn't find much on this issue in my google travels.
I double-checked by triangulating it, but the errors were still there.
This could have to do with Unreal's unit granularity. I recall that on export, verts are effectively snapped to a grid, albeit a fairly fine one. I think it was on the order of 100ths or 1000ths of a UU, but that is often coarser than the precision of a 3D app, where vertices are stored as 32- or 64-bit floats. If the differences between the vertices fall below that threshold, they will still show in Max, and potentially in other engines, but in UT the surface will be planar.
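If the snapping theory is right, it would behave something like this quantization sketch (the 0.01 step is a guess based on the "100ths of a UU" figure above, not a documented Unreal value):

```python
import numpy as np

def snap_to_grid(verts, step=0.01):
    """Quantize vertex positions to a fixed grid, as an importer might.

    `step` is an assumed granularity; the real exporter's value may differ.
    """
    return np.round(np.asarray(verts, dtype=float) / step) * step

# Three verts that are *almost* coplanar in Max's float precision:
verts = np.array([[0.0, 0.0, 0.0],
                  [10.0, 0.0, 0.003],   # 3/1000ths of a unit off-plane
                  [20.0, 0.0, 0.0]])
print(snap_to_grid(verts))  # the 0.003 offset collapses to 0.0
```

Subtle surface undulations below the grid step would flatten out on import, which would make the base mesh shade differently in-engine than it did in Max.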
Try this. Export the model from unreal back to an OBJ and load it up into max to see if the smoothing errors are still there. If not, you've found the issue. Make the faces planar, and rebake.
edit: and even more off-topic: you could easily share UV space for those 6 equal parts, no reason to unwrap all 6 of them uniquely.
The .obj that Unreal spits out looks like a single triangle, but is actually 906 tris sitting in the exact same place. The UVs are totally messed up as well. Is this indicative of a problem, or does Unreal's exporter just suck?
Xoliul: The problem is there are also smoothing errors on each of the 6 flat pieces that sit on top of the large platform. These already have bevels to fix smoothing issues as well. I don't want to just keep adding geo until I'm back at my high poly. I shouldn't need to add any more, especially since it's showing up properly in another engine. This seems like an unreal issue to me? Maybe I'm wrong. And yes, only one side of those side pieces has a unique unwrap. If I ever get this working properly and do some tests, I may reduce it to only two pieces having a unique unwrap then just move the other pieces around to mix-and-match. But first, the issue at hand.
Would you mind posting the geo and normal map so I can mess with it in ut3 later on?
On another note, I know I can 'fix' this problem by assigning more smoothing groups to my mesh (i.e., hardening a bunch of edges). Here I ran an Auto Smooth at 30 degrees, then re-baked the normals:
The issues are visible in the blown-up section. This solves the large-scale smoothing errors at the expense of smaller-scale errors from all of the hardened edges on the mesh. I consider this a last resort rather than a fix. The Epic guys always talk about thinking of your low poly as 'putty' that your normal map gives definition to. Hard edges break this concept, and really mess with the look imo.
Read this:
http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html
Lots of people miss the basic concepts of smoothing groups & normal maps; these are especially important when working with mechanical stuff.
I've always wondered if there was a better way of doing this for mechanical objects, because using multiple smoothing groups shoots your vert count through the roof and can get quite expensive. I'm hoping someone will have a better solution to this problem.
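The vert-count cost comes from the fact that a GPU vertex is unique per (position, normal, UV) tuple, so every smoothing-group border duplicates the vertices along it. A back-of-the-envelope example for a cube:

```python
# How smoothing groups inflate the real (GPU) vertex count of a cube.
# A hardware vertex is unique per (position, normal, uv) tuple, so a
# hard edge means every vert along it is stored once per adjoining group.

corners = 8

# One smoothing group: corner normals are averaged, so each of the 8
# corners is a single GPU vertex.
one_group = corners            # 8 verts

# Every edge hard (6 smoothing groups): each corner touches 3 faces
# with 3 different normals, so it becomes 3 GPU verts.
all_hard = corners * 3         # 24 verts

print(one_group, all_hard)     # 8 24
```

The same tripling happens (locally) wherever you harden an edge on a real asset, which is why heavy smoothing-group use "shoots the vert count through the roof".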
Can it be seen when you finally get a texture on it? Ignore it.
Can you add some chamfers and fix it? Do that.
But today, a few of us here who were aware of it kicked around a few ideas and tested it, and I think we found what the issue is.
Apparently, something wonky happens when you either:
A) Use Reset XForm on the mesh before exporting (this can cause it noticeably inside of Max).
B) Simply import it into Unreal after collapsing everything together.
C) Simply attach pieces together (this can cause it inside of Max as well).
What seems to be happening is that attaching or resetting XForm in Max alters the way the object's normals work, and when you bring in an object that appears correct in Max, Unreal decides to "auto reset" it for you and breaks the mesh.
The screenshot below shows my workaround. Long story short: Reset XForm BEFORE you bake, and it should mostly take care of the issues. In extreme cases, the only real workaround is to use different smoothing groups.
Then, when using our custom exporter (which has a Reset XForm option), I would again notice a visual change in my meshes in 3ds Max after running the script. Again, I thought it was an Unreal problem. We've noticed it before and just adjusted smoothing groups instead.
Realistically, this issue is rarely noticeable on small details, and you shouldn't be attempting 90-degree angles on large surfaces without chamfering anyhow, so I think it slipped past most of the artists. I'm usually given a pretty low budget for some of my props, though, and have definitely seen it on several objects.
We had a little bit of downtime today so we ran some tests. Looks like it is fixing the biggest offenders so far.
It just looks like I need to use much more geometry in unreal than in other (better?) engines. A lot of what I had hoped to pick up with the normal map will instead need to be modeled in. Frustrating.
If you don't mind posting up the Low / High poly Max files it would be easier for me to check it out and test it to see where you might be having problems. The only other time I have also seen this happen is when you have overlapped UVs and you have to go to the low poly and turn edges for the collapsed / attached overlapping faces.
I have a feeling that this is what is happening sometimes when you collapse the mesh, hence the reason I see a visual change in the geometry when collapsing / Resetting X-Form...for no obvious reason.
This is infuriating. I'll post the normals, .max and .obj files tomorrow. I'm going to finish my drink and go to bed before I punch something.
1. Yes, reset XForm. I thought everyone knew this about making any art in Max whatsoever... Basically, if you ever scale ANYTHING in Max at the object level, or mirror it, it will screw up your normals. It has something to do with the normals keeping their information from the original scale while only the geometry moves.
2. I've said this in other threads, but I really do not trust 3ds Max to bake my normals. I haven't baked a normal map in Max in three years, and my life has been easier because of it. It will crash when you get lots of geometry, and it just doesn't bake normals well. (Obviously this is my opinion; there are other people out there who use Max just fine, I just find it a pain to get it to do what I want.) I recommend xNormal for baking normals: it's faster, and my results are more consistent.
3. Having said that, I export an .obj to xNormal from Max. I make sure my whole object is one smoothing group. (This is an area of debate in other threads, but in Gears 2 all of my low-poly meshes use one smoothing group and I'm happy with my results.)
4. I don't think your base low-poly geometry is helping matters. There's a lot of waviness in the smoothing, and you could help the renderer get past some of your problems by adding more supporting geometry, maybe a ring of edges around the top of your mesh. Just try to get rid of those big dark triangles.
5. When you bake, are you exploding your mesh or baking it all together? You really should be exploding it if you aren't. It'll make processing a lot easier.
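On point 1: the scaling/mirroring problem comes down to the normal matrix. Positions transform by the object matrix M, but normals must transform by the inverse transpose of M to stay perpendicular to the surface, and the two only coincide for uniform, unmirrored transforms. A small numpy illustration (my own example, not Max's internals):

```python
import numpy as np

M = np.diag([1.0, 1.0, 3.0])           # non-uniform object-level scale

# A 45-degree surface normal before scaling:
n = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)

wrong = M @ n                           # naive: normal scaled like a point
wrong /= np.linalg.norm(wrong)

right = np.linalg.inv(M).T @ n          # correct normal transform
right /= np.linalg.norm(right)

print(wrong)   # tilted toward Z: no longer perpendicular to the surface
print(right)   # still perpendicular to the scaled surface
```

If a tool keeps the old normals (or transforms them like positions) after a scale or mirror, you get exactly the kind of shading drift described above, and Reset XForm rebakes the transform into the mesh so the normals get recomputed consistently.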
And yes this works perfectly.
2. In addition, use exploded baking like JordanW suggested.
First off, it's important to understand why these issues are caused in the first place. The recurring theme here is that none of this stuff is standardized; pretty much every baker, engine, and file format does things slightly differently, by which I mean how they calculate tangents/normals/binormals. A few apps/engines have actually gotten this right. Doom 3, for the most part, was great in this regard if you rendered your maps inside of Doom 3. And if you're rendering and viewing inside of Maya, your results will be nearly perfect without having to do much extra work.
Ok, so now that we know the issue is slight deviation from app to app, we can explain why the two accepted methods tend to help.
So, the greater the change you have in your normals, the more apparent any deviation is going to be. The smoother/flatter your normals are, the less likely you are to have visible errors.
When you're adding geometry, beveling edges, etc., what you're really doing is softening the normals, making the transitions less harsh. This helps with smoothing errors, and it also makes the bake more accurate (less skewing, etc.) because the normals are closer to the surface. This is of course the ideal solution, but far from the only thing that should be considered.
When you add in hard edges, what you're doing is breaking the normals, and this gives you much the same result as adding more geometry, in that it makes a more "suitable" surface for displaying a normal map that is essentially created incorrectly (mathematically). The main problems here are: 1. you need to split your UVs along the same edges that you make hard, which isn't really a huge deal if it's something you're aware of going in; and 2. the less geometry you have in your cage (assuming your cage is averaged and not using the split edges, which will give you even more problems), the less accurate your bake is going to be. So if you've got small details, they may well end up being projected skewed, etc.
Now the really important thing here is that you need to take various factors into account: what sort of budget are you dealing with in the engine, how important is the object, etc. It's silly to say "always add in more edges", because that simply is not always an option. It's also silly to say "always add hard edges whenever you have problems"; you need to understand how this stuff works and make these decisions on a case-by-case basis.
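To make the "nothing is standardized" point concrete: every baker and engine derives a tangent from positions and UVs with roughly the same per-face formula, then diverges in how it averages per-vertex, orthonormalizes, and signs the bitangent. A sketch of the common starting point (numpy; a simplified version of the usual derivation, not any specific app's code):

```python
import numpy as np

def face_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Per-triangle tangent from positions and UVs.

    Apps agree on something like this per face, then differ in the
    vertex averaging / orthonormalization / handedness steps, which is
    why a bake matches one renderer and not another.
    """
    e1, e2 = p1 - p0, p2 - p0
    du1, dv1 = uv1 - uv0
    du2, dv2 = uv2 - uv0
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    tangent = (e1 * dv2 - e2 * dv1) * r
    return tangent / np.linalg.norm(tangent)

p = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
uv = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1)]]
print(face_tangent(*p, *uv))  # [1. 0. 0.], the tangent follows the U axis
```

Since the baked map is expressed in this basis, any engine that rebuilds a slightly different basis will decode slightly different normals, which is the deviation being discussed.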
So what must have been happening is this:
- base mesh has smoothing issues
- lighting takes these into account and lights it as if 'flat', correcting these smoothing issues as it does so
- normal map then tries to correct smoothing issues that have already been fixed, and thus causes them
This explains why the mesh looks okay in the static mesh viewer and material editor preview windows; they must use a different lighting system (since the material editor preview window would be rather useless if it *didn't* take the material into account when lighting). This also explains why I would need to use hard edges/smoothing groups before baking in unreal, since this base geo is what the lighting is generated from. Finally, this explains why the base geo with no normalmap applied looks different in unreal versus the max viewport or other engines.
So with this in mind, I deleted the point light that was previously the sole light in the scene and added a skylight (i.e., an ambient light). The result was exactly the same as what I'd seen in the material and static mesh editors:
So now that I've figured out what the issue is, I'm not quite sure how to fix it. I guess I'm stuck with the lighting the way it is, despite it not making sense to me. I suppose this is why I would need to use smoothing groups and hard edges to get a more consistent result with Unreal's inconsistent lighting setup. On a side note, is this 'per-vertex' lighting when I was expecting 'per-pixel'?
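The per-vertex vs. per-pixel distinction in that last question can be shown numerically: per-vertex (Gouraud) lighting computes the lit color at the verts and interpolates the colors, while per-pixel lighting interpolates the normal and lights it at each pixel. A toy Lambert example (my own illustration, not Unreal's lighting code):

```python
import numpy as np

L = np.array([0.0, 0.0, 1.0])  # light direction

def lambert(n):
    """Simple diffuse term for a (possibly unnormalized) normal."""
    return max(np.dot(n / np.linalg.norm(n), L), 0.0)

# Two endpoint normals bending away from the light, sampled at the
# midpoint between them:
n0 = np.array([0.8, 0.0, 0.6])
n1 = np.array([-0.8, 0.0, 0.6])

# Per-vertex (Gouraud): light at the verts, interpolate the *colors*.
per_vertex_mid = (lambert(n0) + lambert(n1)) / 2

# Per-pixel: interpolate the *normal*, then light it.
per_pixel_mid = lambert((n0 + n1) / 2)

print(per_vertex_mid, per_pixel_mid)  # roughly 0.6 vs 1.0
```

The two answers disagree across the face even with identical inputs, so a viewport using one model and a preview window using the other could plausibly shade the same mesh differently, which matches the symptom described above.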