I was going to use Substance Designer to bake my meshes, but I noticed right away that something was off about the bake, so I decided to test my theory that SD's bakes were sub-par.
The goal is to have a cage-less workflow. This eliminates 3Dsmax right away, so we are left with Xnormal and Substance Designer.
This image is the result:
I figured this could be useful to some so I thought I would post. Long story short, don't use Substance Designer to bake your meshes using Raycast. I recommend using Xnormal and supersample your map at twice the resolution and scale down.
Let me know if you find it useful or if I should fix anything.
Replies
Thanks for the test! Could you send us your asset so we can fix the potential problems on our side?
I sure will! I will package up the files and send them off, do you want a substance file with everything all set up? Or do you just want the geometry files so you can run your own tests?
Substance Designer uses an averaged normal calculation when ray casting, so you don't get those black lines on seams, which means you can forgo a cage if you were only using one to avoid those errors.
I THINK that xNormal does this too, but I am running some tests and doing some research to confirm this.
3Dsmax of course does not, and shouldn't be used without a cage. I used it in this image to show the quality of the details.
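To illustrate what "averaged" means here, a minimal sketch in Python (my own, not SD's actual code): every face normal touching a vertex position contributes to the projection direction, regardless of smoothing splits, so the rays stay continuous across hard edges and you don't need a cage just to bridge those gaps.

# Minimal sketch (not SD's actual code) of an "averaged" projection direction.
import numpy as np

def averaged_projection_normals(positions, triangles):
    """positions: (V, 3) float array, triangles: (T, 3) int index array."""
    normals = np.zeros_like(positions)
    for i0, i1, i2 in triangles:
        # Unnormalized cross product = face normal scaled by twice the area,
        # so larger faces naturally get more weight in the average.
        face_n = np.cross(positions[i1] - positions[i0],
                          positions[i2] - positions[i0])
        normals[i0] += face_n
        normals[i1] += face_n
        normals[i2] += face_n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lengths, 1e-8)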
Yea, it is unfortunate because I want to keep my workflow all inside of Substance Designer, but the normal map quality is too poor for me to use right now.
The reason for me to avoid cages is just to remove that step if it isn't necessary. It isn't usually a massive time sink, but it can be, especially on a 50k triangle hard surface mesh with tons of parts. And we are going to be encountering more and more meshes in and well above that range.
As for the comparison and different tangent spaces, the bakers I used pull the tangents and binormals from the FBX file, so they should technically be close to synced, unless I am mistaken. The outlier being Max, which I just included for posterity and for the flat face details. This all started because I noticed that Substance Designer's normal map bakes looked really weird over my entire model and I wanted to make sure it wasn't something I was doing that was causing that. It is a bit less about being 'synced' and more about the cleanliness of the normal details.
Xnormal needs a cage mesh or it will give errors at hard edges afaik. Though I vaguely remember something about hitting the use cage checkbox without a mesh loaded that would force it to use an averaged projection mesh, haven't tried that myself though.
Personally, setting up a quick cage and trial/error-ing the correct ray distance settings takes about the same amount of time, except with a cage you get a visual indication of what is happening.
Funny thing about that is, the first day I was baking the mech you guys released 2.05, and I had read that you stated that if you set the tangent space to Marm it reads from the FBX file. So I baked in SD and took it to TB2 and got some bad errors that weren't present in either SD or UE4, even after setting the tangent space settings.
Now, I could have been doing something wrong, as it was at the end of a long day at 3:30am when I did this, so I will try again with my much more finalized mesh and see how it goes.
[edit]
Now that I think about it, 3Dsmax 2014 has a broken FBX exporter, so that may have been the issue. I now go through 3Dsmax 2015 to export FBX with proper results and will try this with TB2.
It's not really that well presented, sorry.
Also if you've got some test content, I will be happy to look at it. From our tests here the results worked very well from a map baked in XN and loaded into both UE4 and TB2.
The geometry files should be enough.
Thanks!
Concerning the tangent basis, for the moment there are two options in SD:
- if the option "Always recompute Tangent Frames" in the preferences is unchecked, then we read and use the TBN from the file for both the baking and the 3D display
- if it's checked, we recompute the TBN the same way as Unity (for both baking and display)
We are working on adding multiple tangent bases though.
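For anyone curious what "recomputing the TBN" involves, here is a rough sketch (not SD's or Unity's actual code) of the standard per-triangle tangent solve: find the tangent that maps the triangle's UV deltas onto its position deltas. A real baker would accumulate these per vertex and orthonormalize against the vertex normal.

# Rough sketch of a standard per-triangle tangent computation (illustrative only).
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    e1, e2 = p1 - p0, p2 - p0        # position edges
    du1, dv1 = uv1 - uv0             # UV edges
    du2, dv2 = uv2 - uv0
    det = du1 * dv2 - du2 * dv1
    if abs(det) < 1e-8:              # degenerate UV mapping
        return np.array([1.0, 0.0, 0.0])
    tangent = (dv2 * e1 - dv1 * e2) / det
    return tangent / np.linalg.norm(tangent)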
For me, this is definitely the best option as I can pass one FBX and normal map bake around to various tools and they all sync fairly well. But having multiple tangent bakes would be great to ensure a sync.
As for the files, I will get you those tonight. We went to the UE4 meet up last night and I didn't get home early enough to get work done. Expect some files tonight!
While I have you, Jerc, I do have one request. From my image you can see that the smoothest normals come from supersampling the normal map, then using Photoshop to downsize the image. This downsizing cleans up the normal map in a very positive way. Is there a chance of a similar, but automated, way to do this in SD? I don't use the supersampled, quadruple-res map for anything but this, so having SD automatically bake a map at say 8k, do a nice downres to 4k, and output that 4k map would be great, as long as the downsizing algorithm can achieve Photoshop-level results.
The supersampling in SD basically works the same way, if you choose 8x it computes the map at 8 times the output resolution.
The only difference may be the algorithm used to downsize the image. Which one do you use in PS?
Nicolas
Hahah, sorry Nicolas! The avatar confused my brain. Best name ever btw.
I use Bicubic Smoother (best for enlargement) in Photoshop. The name is counter-intuitive, but this produces the best-looking normals in my opinion.
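If you want to script the "bake big, scale down" step outside Photoshop, here is a quick sketch using Pillow. Pillow's filters are not identical to Photoshop's Bicubic Smoother, so treat the filter choice as an approximation, and the file names are just examples. Strictly speaking you would also want to renormalize the RGB vectors afterwards, since filtering can shorten them slightly.

# Sketch: downscale a supersampled bake to the final resolution with Pillow.
from PIL import Image

def downres_normal_map(src_path, dst_path, factor=2):
    img = Image.open(src_path)                    # e.g. an 8k supersampled bake
    target = (img.width // factor, img.height // factor)
    small = img.resize(target, Image.LANCZOS)     # or Image.BICUBIC
    small.save(dst_path)

downres_normal_map("bake_8k.tga", "bake_4k.tga", factor=2)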
Yea, it looks like it does a couple of things correctly, but in general it looks pretty wild with its accuracy.
Cageless baking really is not such a crazy thing; we were doing it at Splash Damage; ask Ben or Vincent, the results we got on weapons were definitely on par with what they did before for Brink, and it saved us time.
Averaged ray casting envelope is correct!
Might also wanna mention that with higher polycounts you can model in a lot more detail with bevels or hard edges, so you should be able to ditch almost all of the larger surface variation information in the normal map and just Photoshop in detailed information like scratches and stuff.
First off, thank you for the information, it has been a huge help. I am curious to know what version of SD you are working in. I am currently researching map baking options for a more efficient texture pipeline, and so far this article and one other have basically left me saying "SD works great, but I will want XN for backup or just for a more accurate NRM map bake." I also would like to point out that your other post on bitmap information and dithering was really informative as well. Thanks for putting in the time.
We are looking forward to receiving them!
Max 2015 bake (auto-dithered)
Xnormal Bake (.tiff converted to .tga in photoshop for dither)
Substance Designer 5.03 bake (.psd file converted to .tga in photoshop for dither)
FBX 2014 Export from max
No Cage
I included a simple SD5 scene with all 3 normals imported and ready to go. Let me know if there is anything else needed.
LINK
Out of curiosity, have you tried baking it in Painter? I was working on a project the other day and got significantly better bakes there than in SD. I may have just had some settings wrong in SD... but everything was set up correctly afaik.
I'll run some tests on my end as well
I haven't yet, but I am moving into a more R&D role here at work that will allow me time to go crazy testing those things. So I expect to know very soon. I would ASSume that SD and SP are identical, but that may be a poor assumption.
Regarding the differences in antialiasing, could you redo the test for Substance? The method we use has changed since then.
Regarding the difference in the amount of skewing (caused by the averaging of normals to compute the direction of projection), I would not draw any conclusion based on just one mesh since it is very mesh-dependent. Not all programs compute the averaged normals exactly the same way: some may average over all neighbouring triangles, some may weight based on the triangle areas, some may weight based on the angle at the given vertex, etc. This may cause some visible differences, but I don't think any method will give better results than any other in all cases.
E.g. for a setup where the low poly is a simple box, if there is less skewing with one given baker for the details on the front face, it means that the averaged normals shared with the side faces are closer to the front face normal. That also means they are further away from the side face normals, which makes it likely that any detail on the sides will be more skewed with this baker.
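To make the weighting schemes mentioned above concrete, a small sketch (illustrative only, not any baker's actual code) of uniform, area-weighted and angle-weighted averaging of the face normals around a vertex; which scheme a baker picks shifts where the skewing ends up.

# Sketch of three weighting schemes for averaging face normals around a vertex.
import numpy as np

def averaged_normal(corner_triangles, weighting="angle"):
    """corner_triangles: list of (p0, p1, p2) triangles that share the vertex p0."""
    total = np.zeros(3)
    for p0, p1, p2 in corner_triangles:
        face_n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(face_n)
        if norm < 1e-12:
            continue                                  # skip degenerate faces
        face_n = face_n / norm
        if weighting == "uniform":
            w = 1.0
        elif weighting == "area":
            w = 0.5 * norm                            # triangle area
        else:                                         # angle at the shared vertex
            e1 = (p1 - p0) / np.linalg.norm(p1 - p0)
            e2 = (p2 - p0) / np.linalg.norm(p2 - p0)
            w = np.arccos(np.clip(np.dot(e1, e2), -1.0, 1.0))
        total += w * face_n
    return total / np.linalg.norm(total)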
Thanks for the response Oblomov!
I did redo the test with Substance 5.03, and the files I posted HERE are the updated ones. The SD5 scene I included has the 3 bakes updated with the latest versions. Take a look, I am sure the Knald peeps are up to some testing too, so we may see some comparisons there.
As for the skewing, I feel this part of the mesh represents such a wide variety of angles and details across many UV islands that I can safely conclude that cageless baking in Substance is still unusable for hard surfaces. I understand the limitations, and using a cage isn't the worst thing in the world, but the ability to lose that step would be a boon.
With that said, I will still recommend baking in Substance, it's just that a cage is still required.
Anti-aliasing of any edges is one of my biggest gripes with Substance at the moment. It seems to be fine for noisy detail, but as soon as you want to generate hard surface or clean normals, the aliasing seems to be of a much lower quality than Photoshop and nDo.
Thank you for doing this!
Could I possibly see your smoothing groups and UV islands for this object? I have been testing things at work, and despite all my reading about smoothing groups and UV islands for bakes, on objects like rocks I am finding that baking with 1 smoothing group, regardless of UV islands, yields the best results. For objects such as this that are more rigid and mechanical in nature, I am curious about your workflow.