I haven't read the entire thread, but couldn't a lot of these issues be avoided by using more geo and splits to avoid harsher gradients in your normal maps?
I haven't read the entire thread, but couldn't a lot of these issues be avoided by using more geo and splits to avoid harsher gradients in your normal maps?
I thought the same, but read pages 1-2. If you don't eliminate the gradients altogether, it doesn't quite work that way.
So what I'm wondering now is whether there's a better solution for converting down to 8 bit, especially after seeing the breakdown with the cubes. Or at least a smarter method in Photoshop.
I'm wondering this too. Seems weird to apply noise to the 16bit source, then downscale. In a similar train of thought (and looking at that BC5 goodness) I wonder how mips are handled. Like, if BC5 from 16 bit is good enough for the full-size texture, could dithering be done only at mip 2 and beyond? Not that it's a big deal, but if you're playing a game with low texture settings it would be cool if dithering was there when needed. But that's probably putting too much thought into something so insignificant.
Another great topic Earthquake!
I've been using 16-bit bakes when relevant ever since an engine coder mentioned it to us at work. It's not easy to get other artists to switch over, though; this topic should help.
The files EarthQuake provided (link: https://dl.dropboxusercontent.com/u/499159/bitdepth.zip) don't work for me, unfortunately.
The arc-shaped object has messed-up UVs. I want to experiment a bit with the normal maps and so on, but I don't know how to fix this or whether I'm doing something wrong. It seems to work for metalliandy, so why not for me?
Besides that, I would bring the mesh into Toolbag, assign the normal maps, change the tangent space of the object (if needed) and make it 100% gloss and reflective. Is that correct? I appreciate any help, thanks.
Seems weird to apply noise to the 16bit source, then downscale.
Just rephrasing, switching the image mode from 16 bit to 8 bit in PS dithers it with noise automatically. My new workflow is to bake normal maps with a 16 bit TIFF in xNormal, and then convert it to 8 bit TGA (not sure what file types you guys normally use, but I use .tga for some reason... and it works fine).
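Out of curiosity I sketched what that automatic dither is doing, at least in principle (this is the general technique, not Adobe's actual implementation): plain rounding snaps every in-between value to the same 8-bit level, while adding up to half a step of noise before rounding preserves the in-between value on average.

```python
import numpy as np

rng = np.random.default_rng(7)

# A flat 16-bit value that falls between two 8-bit levels
# (think of a near-flat region in a normal map channel).
flat = np.full(100_000, 127.53)

# Plain rounding: every pixel snaps to the same level, so a
# gentle gradient would turn into hard bands.
rounded = np.round(flat)

# Dither: add up to half an 8-bit step of noise before rounding.
# Individual pixels land on 127 or 128, but the local average
# preserves the in-between value.
dithered = np.round(flat + rng.uniform(-0.5, 0.5, flat.shape))

print(rounded.mean())   # 128.0
print(dithered.mean())  # ~127.53
```

The noise trades banding for grain: each pixel is slightly "wrong", but the renderer sees the correct value on average across neighbouring pixels.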
Kon, can you post an image of what the UVs look like for you? Also, I re-uploaded the zip file at some point, as initially the lowpoly was missing. If you're trying to bake the highpoly onto the highpoly, that may explain your UV problems.
The attached images show the content of the downloadable folder you made (thanks for this!) and the UVs in Maya. There is the "bitlow2" object missing, I think... And maybe the UVs are broken because of that, because it's the highpoly?
Aha, sorry, I thought I added that to the zip file. Here is a direct link to that obj: https://dl.dropboxusercontent.com/u/499159/bitlow.obj
Forgive me if this is a dumb question, but can someone explain why we don't use object space normals? They use a wider range of the colors available to them and should theoretically have less pronounced banding issues, right?
I'm sure some shader math needs to be done to make those work in animation, is that very expensive to do?
Forgive me if this is a dumb question, but can someone explain why we don't use object space normals? They use a wider range of the colors available to them and should theoretically have less pronounced banding issues, right?
I'm sure some shader math needs to be done to make those work in animation, is that very expensive to do?
Object space maps have a number of limitations, unfortunately. Animation can be done with OS contrary to popular belief, but it does require a little extra shader code. However, that's not the worst of it.
1. You can't mirror object space normal maps. You can make a special shader to do so, but it will only work in certain ways, e.g. mirrored on a specific axis.
2. You can't instance and rotate elements. Let's say you have a repeating element that shares UV space but is instanced around a cylindrical shape; you can't do that with OS.
3. It's much more difficult to add painted/converted normal map overlay detail after the fact.
4. You can't modify the rotation of your object after baking. You can if it's in a level editor and the transformations are tracked, but you can't, say, model one crate, then arrange a stack of crates rotated to different degrees to make them look unique and save that as a prefab mesh.
Forgive me if this is a dumb question, but can someone explain why we don't use object space normals? They use a wider range of the colors available to them and should theoretically have less pronounced banding issues, right?
I'm sure some shader math needs to be done to make those work in animation, is that very expensive to do?
The same limitations with banding still apply unfortunately. It's down to a fundamental lack of available resolution where 256 levels of grey per channel isn't enough to accurately represent the curvature of the surface.
In addition to what EQ said, object space normals come at extra cost: you can't junk the blue channel and then recreate it in the shader, which is standard practice with tangent space (I think Jeff posted about this above). They also compress poorly with DXT1, so you would have to use something like BC7 instead.
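As an aside, the junk-the-blue-channel trick works because a tangent-space normal is unit length and always points away from the surface, so Z is recoverable from X and Y. A quick numpy sketch of the reconstruction (Python standing in for the per-pixel shader math):

```python
import numpy as np

def reconstruct_z(xy):
    """Rebuild the blue channel of a tangent-space normal from R and G.

    xy: array of shape (..., 2) holding x and y in -1..1.
    Tangent-space normals point away from the surface, so z is the
    positive root of z^2 = 1 - x^2 - y^2.
    """
    z2 = 1.0 - np.sum(xy * xy, axis=-1)
    # Clip guards against 8-bit quantization pushing x^2 + y^2 past 1.
    return np.sqrt(np.clip(z2, 0.0, 1.0))

# A normal tilted 30 degrees off the surface: x = sin(30 deg) = 0.5.
print(reconstruct_z(np.array([0.5, 0.0])))  # ~0.866
```

This is also why two-channel formats like BC5 can drop Z entirely and spend all their bits on X and Y. Object space normals can point in any direction, so no channel is redundant.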
I've also been thinking: could we convert something useful out of object space normals? Like cavity, curvature, occlusion, transmission, height maps, etc. Just because most of the time they ignore the low poly normals.
EDIT: Or, is there a way to convert OS normals to TS normals STRAIGHT without the low poly mesh? Just for using it for curvature conversion, because I'm using this method: http://www.bs3d.com/index.php?page=7
I've also been thinking that, could we convert something useful out of object space normals? Like cavity, curvature, occlusion, transmission, height maps, etc. Just because most of the time it ignores low poly normals.
EDIT: Or, is there a way to convert OS normals to TS normals STRAIGHT without the low poly mesh? Just for using it for curvature conversion, because I'm using this method:
I'm asking, because I'm getting ugly low poly "details" for my curvature maps with just baked TS normals.
No, unfortunately you can't convert OS to TS without a mesh. If you are baking to a plane, the channel information is essentially identical to that found within a TS normal map, though possibly with the channel order swapped around depending on the orientation of the plane you are baking to.
I've also been thinking that, could we convert something useful out of object space normals? Like cavity, curvature, occlusion, transmission, height maps, etc. Just because most of the time it ignores low poly normals.
EDIT: Or, is there a way to convert OS normals to TS normals STRAIGHT without the low poly mesh? Just for using it for curvature conversion, because I'm using this method:
No, unfortunately you can't convert OS to TS without a mesh. If you are baking to a plane, the channel information is essentially identical to that found within a TS normal map, though possibly with the channel order swapped around depending on the orientation of the plane you are baking to.
You can bake curvature maps directly from the highpoly; that's what I would suggest instead of converting from a normal map.
Ah, bummer. Seems like this is the only option, even though it takes so much time to bake it in xNormal with higher quality settings and with CPU... and all the tweaking in the settings to get a satisfying result. Same goes for AO and thickness maps. :P
I forgot to mention that it's possible to circumvent the dithering that Photoshop performs when converting between 16bit and 8bit documents by doing either of the following:
Go to Edit > Color Settings > More Options > uncheck 'Use Dither (8-bit/channel images)'.
Or create a new 8-bit document and paste the 16-bit data directly into it, rather than doing the usual Image > Mode > 8 Bits/Channel conversion.
These methods don't result in textures that look any better visually than baking natively to 8 bits, but if for some reason people want to convert without the dithering, they work well enough.
Yeah, for the small percentage of cases where dithering isn't ideal, I guess you could save out two copies (one without dithering) and mask them together for the best of both worlds.
I think I found out how to replicate this behaviour in Substance Designer, and how to bake the model information there.
The first attached image shows the standard 8-bit normal map output on the mesh, side by side with the normal map. Like Joe did in his opening post, I increased the contrast of the normal maps to uncover the banding.
Now it becomes interesting... In Substance Designer, even if you set the output normal map to 16 bit, the banding is still there. I am not a tech artist; maybe someone has an idea why it behaves this way?
In the baking options (2nd attached image) you have to set the "Antialiasing" to 8x8 (marked in the image).
Even then, if you set your output image to 8 bit, it will look like the first screenshot.
BUT: when you sample the antialiasing at 8x8 and set the output normal to 16 bit, you get a result like in screenshot 3.
The only problem: in Marmoset they both look like 8-bit images when they are PNGs. If the 16-bit image is a PSD file it looks good. Why is that? I thought only the TIFF import was broken in TB2?
EDIT: 4x4 antialiasing seems to be enough. It's faster than 8x8 (isn't it?) and it has the same outcome.
I forgot to mention that it's possible to circumvent the dithering that Photoshop performs when converting between 16bit and 8bit documents by doing either of the following:
Go to Edit> Color Settings> More Options> Uncheck 'Use Dither (8-bit/channel images)
Creating a new 8bit document and pasting the 16bit data directly into it rather than doing the usual Image> Mode> 8-bits/Channel conversion.
These methods don't result in textures that look any better visually than baking natively to 8bits but if for some reason that people want to convert without the dithering it works well enough.
That could be handy. I haven't had time to check much in Photoshop, but seeing EQ's hard-edged cube with dithering across what should be flat areas makes me think that it's dithering RGB 127.5 to 127 and 128. So maybe that could be scripted to use a curvature mask, to at least keep the flats flat if it's actually a problem.
EQ, thanks for the info on object space maps.
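That curvature-mask idea is easy to sketch in numpy (hypothetical 1-D data, just to show the principle): build a mask from where the source varies and add dither noise only there, so flat areas stay perfectly flat.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 1-D slice of a 16-bit channel, in 8-bit units:
# a perfectly flat run at 128, then a gentle 3-level ramp.
src = np.concatenate([np.full(500, 128.0), np.linspace(128.0, 131.0, 500)])

# Mask: dither only where the source actually varies, so flat
# areas never pick up noise.
moving = np.abs(np.gradient(src)) > 1e-6
noise = rng.uniform(-0.5, 0.5, src.shape) * moving
out = np.round(src + noise).astype(np.uint8)

print(np.unique(out[:500]))           # the flat run stays a single value: [128]
print(len(np.unique(out[500:])) > 1)  # the ramp still gets dithered: True
```

A production version would presumably build the mask from a baked curvature map rather than the image gradient, as suggested above, but the masking logic is the same.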
The only Problem: In Marmoset they both look like 8 bit images; when they are PNGs. If the 16 bit image is a PSD file it looks good. Why is that? I thought only the TIFF-import is broken in TB2?
I'm not sure if our PNG loader supports 16 bit files; if it looks like the 8 bit version, probably not. PSD supports 8, 16 or 32 bit, which is why it looks better.
Yeah, for the small percentage of cases where dithering isn't ideal, I guess you could save out two copies (one without dithering) and mask them together for the best of both worlds.
This seems like the best way to get the best final result (if the effort is worth it in each case). I imagine masking by the 16-bit TS normal's flat value would work fine. That could probably be automated too. It's a shame Photoshop isn't smarter about how it dithers with noise, though.
Then again, I suppose the fact that this behaviour hasn't really been noticed until now says a lot about how unlikely this is to cause any actual issues.
I'm not sure if our PNG loader supports 16 bit files; if it looks like the 8 bit version, probably not. PSD supports 8, 16 or 32 bit, which is why it looks better.
It would be great if TB2 supported 16-bit PNG files, because many artists work with PNG.
For me, PNG is the way to go, because it has a lot of benefits.
Please add 16 bit
So in short, about 97% of the full RGB color space simply goes unused when normals are encoded in this way. There are other ways of doing things, but this is a very common way to treat normal map data.
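That figure can be sanity-checked with a rough Monte Carlo sketch of my own (the exact percentage depends on how you count, so treat it as an order-of-magnitude check): quantize a pile of random unit normals with the standard encoding and count how many of the 256^3 possible colors are ever hit.

```python
import numpy as np

rng = np.random.default_rng(7)

# Random unit vectors, uniform over the sphere.
v = rng.normal(size=(2_000_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Standard normal-map encoding: map -1..1 into 0..255 per channel.
codes = np.round((v * 0.5 + 0.5) * 255).astype(np.uint32)
packed = (codes[:, 0] << 16) | (codes[:, 1] << 8) | codes[:, 2]

used = len(np.unique(packed))
print(f"{used / 256**3:.2%} of the RGB cube is ever used")  # on the order of 1-2%
```

Intuitively, unit normals live on the thin shell of a sphere inscribed in the RGB cube, so only the codes touching that shell are reachable; the interior and corners of the cube are wasted.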
This bothers the heck out of me. Why don't we encode normals as azimuth/altitude pairs? There'd be no wasted space and you could throw negative altitudes out for even further information density.
This bothers the heck out of me. Why don't we encode normals as azimuth/altitude pairs? There'd be no wasted space and you could throw negative altitudes out for even further information density.
Now you've done it :poly124: http://aras-p.info/texts/CompactNormalStorage.html (different use case though).
But again, you can use more of the "resolution" by optimizing the vector length for the best direction (after quantization to 8 bit), at the expense of normalizing the normal after the read.
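The vector-length trick is Crytek's "best fit normals" (http://sebh-blog.blogspot.de/2010/08/cryteks-best-fit-normals.html). A brute-force Python sketch of the idea, not any engine's actual implementation: for each normal, try many lengths and keep the 8-bit triple whose decoded direction is closest.

```python
import numpy as np

def best_fit_encode(n, steps=512):
    """Return the 8-bit triple whose decoded direction is closest to n.

    Tries many vector lengths instead of only length 1; since the
    shader renormalizes after sampling, only the direction matters.
    """
    best, best_dot = None, -1.0
    for s in np.linspace(0.1, 1.0, steps):       # candidate lengths
        q = np.round((n * s * 0.5 + 0.5) * 255)  # quantize the scaled vector
        d = q / 255 * 2 - 1                      # decode back to -1..1
        d /= np.linalg.norm(d)
        dot = float(np.dot(d, n))
        if dot > best_dot:
            best, best_dot = q.astype(np.uint8), dot
    return best, best_dot

n = np.array([0.2, 0.3, 0.933])
n /= np.linalg.norm(n)

# Naive unit-length encoding, for comparison.
q1 = np.round((n * 0.5 + 0.5) * 255)
d1 = q1 / 255 * 2 - 1
naive_dot = float(np.dot(d1 / np.linalg.norm(d1), n))

q2, fit_dot = best_fit_encode(n)
print(fit_dot >= naive_dot)  # True: best-fit is never worse
```

Since s = 1.0 is among the candidates, the search can never do worse than the naive encoding; real implementations replace the loop with a precomputed lookup texture.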
I'm not sure if our PNG loader supports 16 bit files; if it looks like the 8 bit version, probably not. PSD supports 8, 16 or 32 bit, which is why it looks better.
While we're on the subject, can you fix support for 16-bit .tif and .exr please Joe? The world will thank you
Hmm, I seem to run into an issue with loading the 16-bit TIF (baked in xNormal) into Photoshop CS5. The bake preview in xNormal shows a normal-looking normal map (128,128,255 as average, etc.), but when opened in Photoshop I get the 'This document contains Adobe Photoshop data which appears to be damaged.' warning, and when opened the file looks like the attached image.
I feel like I'm overlooking something very simple...
Hmm, I seem to run into an issue with loading the 16-bit TIF (baked in xNormal) into Photoshop CS5. The bake preview in xNormal shows a normal-looking normal map (128,128,255 as average, etc.), but when opened in Photoshop I get the 'This document contains Adobe Photoshop data which appears to be damaged.' warning, and when opened the file looks like the attached image.
I feel like I'm overlooking something very simple...
I don't suppose you have CryENGINE installed do you? Their CryTif plugin causes issues with the .tif extension. Try renaming it to .tiff and see if that helps.
I don't suppose you have CryENGINE installed do you? Their CryTif plugin causes issues with the .tif extension. Try renaming it to .tiff and see if that helps.
Also, you can use "Open As" in Photoshop and explicitly select the regular .tif format in the drop-down menu below; that will override CryTiff's bullshit.
That did the trick! Thanks
I had your tutorial here open in my tabs for a long time but just tested it today in Marmoset. I was experiencing a lot of those problems in Dota over metallic materials. I am going to test this on the next texture for sure! Thanks for taking some time to share this :)
Hey guys, I've got an issue with normal maps and I'm not really sure that it's a banding problem, but please take a look. I'm baking in Maya and all the formats lead to the same results, except EXR, but I don't know how to convert it properly to 8 bit and I think I'm just missing some checkbox. The image below has increased contrast: http://i.imgur.com/rgVj12J.jpg?1
Sorry, working without weekends makes me a little bit of a zombie. Yes, I read it. I need to bake a 16-bit normal map and use dithering, but the thing is: how do you bake 16 bit in Maya? I tried all the formats; even when it says Tiff16, the result is 8 bit. And where can I enable dithering in Maya?
Oh yeah, as far as I know you can't bake 16-bit maps in Maya... even if you choose 16-bit TGA or TIFF or whatever, there's no solution for this. You can bake in xNormal instead, and to convert, simply open the 16-bit image in Photoshop and choose Image > Mode > 8 Bits/Channel.
Thanks huffer, that explains many things, so I've got a solution for Maya lovers, since baking 60+ separate meshes with cages and constantly reviewing results is pretty painful with the additional exporting to xNormal: in Maya I bake EXR and convert it to 8 bit. I had a wrong color profile that didn't show the EXR properly, but I set it to something else and it's working now, and that makes me happy.
If someone converts a tangent space normal map to 8 bit with this method and normalizes the output prior to rendering into the screen normals, does it benefit from this technique too?
Just wanted to mention here that GIMP does a poor job handling 16-bit images (normal maps baked in xNormal, in my case). It doesn't support 16-bit images and automatically converts them to 8-bit on import, without dithering. So it looks bad in MT2, with nasty banding artifacts.
My next question is how the new open source painting and image editing program, Krita, handles 16-bit to 8-bit conversion. If it dithers with noise, and it looks good in MT2 (or other game engines), that's just perfect.
Krita doesn't dither by default. I recommend working with 16 bits integer at all times within Krita for normal maps and using your engine to do the conversion to 8-bit. That way, the color management doesn't mess with the normal data. If you need to add some dithering on top of your normal map to break up banding that happens in both UE4 and Unity, you can easily do that with the following layer stack:
It's a Noise filter mask with Level set to 99 and Opacity set to 2, and a fill layer with blend mode set to Combine Normal Maps, opacity set to 73% and color set to 187, 187, 255. (For 8-bit images you'd use a color of 128, 128, 255 because of color management, although I recommend sticking with 16-bit images.) I found this adequate to hide most of the DXT5n compression artifacts that Unity has, although it might be too much noise and you might prefer some compression artifacts instead. For an organic model you would probably need less dithering, in which case you would lower the opacity of the fill layer accordingly. Then, whenever you export the image to Unity, use a 16-bit TIF and it will work correctly and hide most of the compression artifacts.
UE4 also has some banding when you import a 16-bit normal map straight from xnormal and it also doesn't import tiffs, so you have to export with PNGs which take a tiny bit longer to save even if you save them uncompressed. Setting the fill layer opacity to 14% is enough to conceal any compression artifacts. (Or, you could set the opacity in the noise filter layer properties to 1 and set the fill layer opacity to 28%.) The BC5 compression contributes to the noise somewhat so this is actually the lowest amount of noise that you'll ever need to add to an 8-bit normal map.
For Toolbag you can just use 16-bit tiffs because why not? For Marmoset Viewer you need to add some noise--for uncompressed normals, about 18% was good and for compressed normals about 44% although this didn't give me the greatest of results on High settings--it seemed to have worse compression than Unity, which is understandable because the compression is there to speed up downloads.
Also if you need to flip Y nondestructively for UE4 you can do that with an Invert filter layer set to Copy Green blending mode.
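If you'd rather script the Y flip than do it in the editor, it's just an inversion of the green channel. A numpy sketch (the random array is a stand-in for a loaded 8-bit texture):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for an 8-bit RGB normal map of shape (height, width, 3).
nm = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

flipped = nm.copy()
flipped[..., 1] = 255 - flipped[..., 1]  # invert G = flip the Y convention

# Inverting twice restores the original exactly.
twice = flipped.copy()
twice[..., 1] = 255 - twice[..., 1]
print(np.array_equal(twice, nm))  # True
```

Because the inversion is exact in 8-bit integers, it's lossless either direction, which is why switching between the two Y conventions never degrades the map.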
I didn't read the whole thread, but I wanted to quickly drop in and thank you for sharing your knowledge. It helped me already when I recently stumbled across a banding problem in a normal map. I just fixed it by putting a tiny bit of white noise in the Substance Painter height channel of the problematic part. That seemed to work just fine in my specific use case.
Dan Olson (@FoldableHuman on Twitter) made these images explaining Bit Depth and Sample Rate in audio. I feel they apply to images too; you could think of Sample Rate as Resolution in this example.
Thank you all for the informative thread. Noob question: would you suggest using a mesh with just smoothing groups for a large flat surface like a door, and maybe adding details with Substance Painter, instead of trying to bake the normal map and then having to pick between dithering and banding?
Well, if the normals on the lowpoly door are totally flat, you shouldn't get banding. For instance, if the door is a simple cube and the face of the door is a single quad with its own smoothing group/hard edges at each edge.
If they're not totally flat, you'll need that information to correct the shading, and adding the geometric detail in Substance Painter or whatever won't look quite right. So it's not really an either-or thing.
Sorry for posting on this old topic, but the techniques shown here are very important (anyway, it's stickied on the front page) and game artists should be aware of this: always bake at 32 or 16 bit depth in a raw color space, then generate an 8-bit normal map (or vertex curvature, or displacement) with dithering.
I have tried to bake a normal map in Blender, checking 32 bit Float after creating an image in Blender, but I don't know why the 32-bit version is baked in sRGB, and fixing the gamma generates artefacts:
Edit: using Raw color space fixes that.
Also, Blender does not apply dithering. For example, if I save two curvature maps, one in 16 bit and one in 8 bit, then use the same color ramp, I get this:
As you can see, color ramp texturing is impossible on an 8-bit image. The problem is fixed if I apply the color ramp to the 16-bit image and then save it in 8 bit. But I get lines if I save a 32/16-bit normal map to 8 bit (and I get even more lines after fixing the gamma of the 32-bit sRGB image; a 32-bit sRGB normal works well in Blender but not in a game engine, and a converted linear 32-bit normal adds lines).
Is there free software that can generate good dithering when reducing the color depth?
Note: you should also post this video in your first post; it shows how dithering works and helps to understand the theory with a very limited color depth: https://youtu.be/51f1m_cj7aA?t=36s