http://www.unrealengine.com/news/epic_games_releases_july_2012_unreal_development_kit_beta/
http://udn.epicgames.com/Three/XNormalWorkflow.html
UDK finally has synced normals.
Normal Map Workflow Improvement - We have improved the workflow of baking and rendering normal maps
- Notice it's a far lower polycount than we could use before, since there are no supporting chamfers or subdivisions
- This workflow currently relies on using xNormal to bake normal maps, but produces much higher quality shading than before
- The normal map workflow you've used in the past will still work, but if you want higher quality shading, this is currently the best workflow to use
- It also allows you to use much less supporting geometry, since you don't have to fight incorrect shading
- The first step is to make sure your model consists of 1 smoothing group
- It also helps to triangulate your model using a modifier; that way you don't have to worry about any application changing your triangles
- When exporting your lowpoly model for xNormal, use these settings:
- The important points are the smoothing groups and tangents & binormals
- When loading your lowpoly model into xNormal, make sure "Use exported normals" is enabled
- After baking your normal map, export your low poly models for Unreal using the SAME FBX settings as before
- When importing your model into Unreal, enable "Import Tangents" and "Explicit Normals"
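For what it's worth, the "1 smoothing group" step just means every vertex ends up with a single normal averaged from all the faces that touch it. Here's a minimal pure-Python sketch of that averaging; the list-based mesh format is made up for illustration and isn't anything UDK or xNormal actually uses:

```python
import math

def smooth_normals(positions, triangles):
    """Average face normals at each vertex: the effect of one smoothing group."""
    normals = [[0.0, 0.0, 0.0] for _ in positions]
    for a, b, c in triangles:
        pa, pb, pc = positions[a], positions[b], positions[c]
        u = [pb[i] - pa[i] for i in range(3)]
        v = [pc[i] - pa[i] for i in range(3)]
        # Face normal = cross product of two edge vectors (area-weighted).
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        for idx in (a, b, c):
            for i in range(3):
                normals[idx][i] += n[i]
    # Normalize the accumulated sums into unit vertex normals.
    out = []
    for n in normals:
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        out.append([x / length for x in n])
    return out
```

A fully smoothed model like this is exactly why the baker and the engine have to agree on tangents, which is what the steps above are setting up.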
Nothing else too exciting; there are some bug fixes I'm sure some people will be happy about.
Replies
I haven't been able to test it yet, but does this fix only work when importing static meshes? Skeletal Meshes can't import with explicit normals, so won't the problem persist with vehicles and guns?
if so then asfuahfiouahfpiauhfpaofhu best game development news all year.
They will still work.
Not what he was asking; there were methods to get explicit normals in UDK before, but they didn't work on rigged meshes. So it will be interesting to see if that is fixed now.
Also, here's to hoping this is actually "synced" and not just "better". From the example mesh there, it looks like there are a *lot* of hard edges already (otherwise the "old" example would look much worse than it does). So looking forward to seeing some 3rd party examples here.
Yeah exactly, count me as *cautiously excited* for this.
EDIT: updated image to no longer use point lights, which don't work correctly in UDK.
FROM LEFT TO RIGHT: Skeletal mesh dynamically lit. Skeletal mesh with Light Environment. Static Mesh with 256 baked lightmap. Static Mesh unbaked with preview lighting.
Rendering of Skeletal meshes does not appear to have improved at all.
Rendering of Static meshes without lightmass looks pretty good, except for a few obvious errors. Lightmass brings some of the errors back in.
At any rate, this at least answers the questions Racer and I had.
Is there any way you can do a similar comparison, but with the old version as well? The far right looks very very good, but I'm curious to see if the skeletal meshes have improved at all.
I just want to touch on this quickly. In general terms this is bad advice. Even with a perfectly synced workflow, you still want to split your edges/SGs at your UV borders, because:
1. It's free, there is no extra vertex cost.
2. Your normal maps will have fewer artifacts, even with a 100% synced workflow.
3. Your normal maps will generally compress better (fewer crazy gradients, etc.).
4. Reuse may be easier in some instances.
5. A script can do this for you in 1 second in Max/Maya.
And there are literally no cons. The only possible con is if you've got a weird baking workflow where your cage is not averaged (i.e. the offset method in Max, or not using a proper cage in xNormal), in which case your projection mesh gives you a gap at your hard edges, but nobody should be baking like this.
TL;DR: "Using one smoothing group" isn't the point of a synced workflow; it is easier to do with a synced workflow, but it isn't the end goal.
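To spell out why point 1 is free: an edge is a UV seam exactly when the faces on either side disagree about the UV indices of its endpoints, which means those vertices are already duplicated in the render mesh. A hypothetical Python sketch of finding such edges (the face-corner representation is made up for illustration):

```python
def uv_seam_edges(faces):
    """faces: list of triangles, each corner a (pos_index, uv_index) pair.
    Returns the set of position edges whose UVs are split (UV seams).
    Hardening normals along these edges costs no extra vertices,
    because the UV split already duplicated them."""
    edge_uvs = {}
    for tri in faces:
        for i in range(3):
            (p0, t0), (p1, t1) = tri[i], tri[(i + 1) % 3]
            # Canonical undirected edge key, with UVs ordered to match.
            if p0 <= p1:
                key, uvs = (p0, p1), (t0, t1)
            else:
                key, uvs = (p1, p0), (t1, t0)
            edge_uvs.setdefault(key, set()).add(uvs)
    # An edge seen with more than one UV pairing is a seam.
    return {e for e, uvs in edge_uvs.items() if len(uvs) > 1}
```

A Max/Maya script that hardens edges at UV borders is essentially doing this lookup and then setting the smoothing split on the resulting edge set.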
What's a better way to bake?
I've always just reset the cage in max, then globally pushed it out until it looks about right. Is there some better way of doing that?
That is one of the proper ways. He is just saying "don't use offset when a cage is needed".
Correct, if your UV seams share the same borders as your smoothing groups/hard edges, there is no additional vert cost, and that is what performance in games comes down to: the *actual* vert cost (UVs, hard edges, material breaks, etc.), not the listed verts or triangles in your 3D app. There are some cool scripts to calculate this.
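The idea behind those scripts is simple: the GPU has to duplicate a vertex whenever any of its attributes differ between faces, so the real vertex count is the number of unique per-corner attribute tuples. A hypothetical minimal version in Python (the attribute layout is made up for illustration):

```python
def gpu_vertex_count(corners):
    """corners: one tuple per face corner:
    (position_index, smoothing_group, uv_index, material_id).
    A vertex splits whenever any attribute differs between the faces
    that share it, so the real count is the number of unique tuples,
    not the position count your 3D app reports."""
    return len(set(corners))
```

Run it on the same mesh with and without hard edges at UV seams and the count comes out identical, which is exactly the "it's free" argument above; harden an edge *inside* a UV island and the count goes up.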
Sorry, I wanted to make a quick point, so I wasn't super specific. In Max you can go into "bake options" and click on "use offset"; this ignores the standard Max cage and uses an entirely different projection method with a simple ray distance. You really have to go in and "break it" to work like this in Max, but it used to be pretty common. This is also the default way that xNormal works if you just enter a ray distance, which I would highly suggest not doing; it's much better to export a cage from Max/Maya or set up a cage in xNormal's 3D viewer. See the "waviness" thread in Tech Talk for more on this.
Now, there could be some entirely unknown but perfectly valid reason to use 1 SG for this method in UDK, I can't say for sure, maybe the exporter takes a crap if you have more than 1 SG or something.
But in very general terms, "Use 1 SG" is at worst terrible advice (with an un-synced workflow), bad-to-possibly-harmless advice (with a synced workflow), or not nearly specific enough to actually be helpful (if you're baking with a simple ray distance instead of an averaged projection mesh, then there is a whole different set of rules).
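To illustrate the cage distinction above: an averaged cage pushes every vertex along its smoothed (averaged) vertex normal, so projection rays from adjacent faces never diverge at hard edges the way independent per-face ray offsets do. A tiny hypothetical Python sketch of building such a cage (not actual Max or xNormal code):

```python
def build_cage(positions, averaged_normals, push):
    """Averaged-cage projection: offset each lowpoly vertex outward along
    its averaged vertex normal by a uniform push distance. Because every
    face sharing the vertex uses the same pushed position, the projection
    stays continuous across hard edges, avoiding the gaps you get from a
    simple ray-distance offset."""
    return [[p[i] + push * n[i] for i in range(3)]
            for p, n in zip(positions, averaged_normals)]
```

This is conceptually what resetting the cage in Max and pushing it out does; exporting that cage to xNormal keeps the bake projection continuous.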
There's a set of normals and tangents per vertex, and a UV split will always mean that new vertices have been created.
So what he is saying is that hardening those edges just modifies these already-created vertices' normals without actually adding anything.
As you can see, static meshes look mostly okay, and skeletal meshes still artifact badly (and look horrible with light environments when there's no environment to get bounces).
Here's from the May build. Just for fun, I exported a set with qualified normals just to compare.
I apologize for the blown-out look of these images; they look fine inside UDK and Photoshop, but for some reason something is interfering with Photoshop's color profile when they are saved.
Or do I need to set it up via FBX? In which case, I haven't seen any Documentation on how to set up a Skeletal Mesh via FBX.
Thanks for the comparison!
EarthQuake, I understand you're skeptical, but you should totally give this a try and see what kind of results you get.
Is that the command "Tiledshot"?
Because it doesn't seem to capture Bloom or anything else really when I do that now. I'm not sure if I'm misunderstanding.
In these shots, the current results for a static mesh look about the same as the old results using a qualified export.
That's good, but it's not the same as a fundamental fix to the problem until we have a solution for all asset types, including skeletal meshes.
For what it's worth, this is what I got once I imported with the new build. Old on left, new on right. This was triangulated and exported in Maya, baked in xNormal, and imported with tangents and explicit normals.
@aajohnny I think it means for the screen capture actors used for reflections.
Assuming that both this update and qualified normals mode work as advertised, they should be identical. Any differences in quality here could be from the position of the lights, or the fact that I just reused the xnormal map for the qualified example instead of baking out a new one in Max.
I kind of figured that the inconsistency in UDK's tangent basis was obvious enough, and discussed enough (especially here at Polycount), that the only reason it never gets fixed is that it can't be: the calculations are happening somewhere deep in the code, where altering them would break other things. For that reason, I wouldn't expect to see the problem truly fixed until UE4.
I'd love to be wrong, though.
I would like to know too.
Qualified normals doesn't exist in Maya.
Also, about skeletal meshes, since the question popped up: did you use FBX or the old skeletal mesh format?
Well, not for me at least : http://www.polycount.com/forum/showpost.php?p=1636140&postcount=22
There is an option inside the Lightmass settings which affects the normals of the static mesh. Maybe you can try disabling it just as a test?
Totally agree!
Even though we've got a synced workflow, when it comes to production you still need to use it wisely, especially for hard-surface/environment stuff.
Indirect normal influence boost? Turning it to zero doesn't help.
Wow, quite a boost in quality!
With this update, can I finally import these models in UDK without having to rebake my normal maps using hard edges/edge splits?
Without normalmap:
With normalmap: