
Of Bit Depths, Banding and Normal Maps

Of interest to most who will read this thread, I wrote an extensive tutorial that covers this topic and many other common baking issues. It's geared towards baking in Toolbag, but most of the Basics and Best Results sections apply universally. Check it out on the Marmoset site: https://www.marmoset.co/posts/toolbag-baking-tutorial/

---


This is an issue that pops up more and more frequently these days. Now that many engines/renderers are finally using synced normal map workflows, the weaknesses in our pipeline become easier to spot.

First off, what is bit depth and why is it important? Bit depth refers to how many values an image can store. A 24 bit image stores 8 bits, or 256 values, per channel. From this point forward, if I say 8, 16 or 32 bit, assume I mean per channel. 256 values are usually enough for content like diffuse, specular or gloss maps, because these maps tend to use a broader range of the value spectrum.

Tangent space normal maps, on the other hand, tend to use a much narrower value range. Worse yet, the more closely the low poly surface matches the high poly, the more precious that precision gets: the difference recorded between the high and low poly normals (which is essential for accurate shading) is very small, often smaller than 1/256th, which is the finest step that can be stored in an 8 bit texture.
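To make that concrete, here is a small numpy sketch (an illustration only, not part of any baking tool) that quantizes a very shallow gradient, like the sort a normal map stores for a gently curved surface, at 8 and 16 bit:

```python
import numpy as np

# A very shallow gradient, like one channel of a normal map across a gently
# curved surface: the whole span covers only ~1.5% of the 0-1 range.
gradient = np.linspace(0.5, 0.515, 1024)

def quantize(values, bits):
    """Round values to the nearest step representable at the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(values * levels) / levels

q8 = quantize(gradient, 8)
q16 = quantize(gradient, 16)

# 8 bit can only represent a handful of distinct steps across this gradient,
# which reads as banding; 16 bit keeps a distinct level for nearly every pixel.
print("distinct 8 bit steps: ", len(np.unique(q8)))    # roughly 4-5
print("distinct 16 bit steps:", len(np.unique(q16)))   # close to 1000
```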

So here we have the perfect storm: a lowpoly object with smooth curves that very closely matches the surface of the highpoly, with a fully reflective, glossy material.



If we bake an 8 bit map with a baker that does not dither the result (e.g. xNormal and Maya), we get stair-stepping artifacts where the texture can't record precise enough values.



If we increase our bit depth to 16 in xNormal by baking a TIFF file, we get a much smoother, cleaner result.



So, if 16 is good, 32 bit must be better, right? In theory yes, but in practice it's generally not the case. Here we have a 32 bit map, and we can see there is essentially zero difference between the 16 and 32 bit bakes.



Storage space is trivial these days, so why wouldn't you always bake at 32 bit? Well, first off, the difference typically is not noticeable. Secondly, it's difficult to work with 32 bit files from bake through final texturing (many features are disabled in Photoshop in 32 bit mode), whereas you can, if so inclined, work entirely in 16 bit files. Thirdly, converting from 32 bit down to 16 or 8 bit is more prone to user error: by default, when you convert a 32 bit image to 16 or 8 bit, Photoshop presents you with tone mapping options, and tone mapping is something you never want to apply to a normal map.

What about in game? 16 and 32 bit file formats are simply not supported (or at least not commonly used, due to texture memory constraints) by most game engines. So, at the end of the day you most likely need to output an 8 bit file. Starting from a 16 or 32 bit source allows the content to be dithered while down-sampling. Dithering adds noise which breaks up the stair-stepping pattern. You might not like the noisy look, but it usually looks way better than banding/stair-stepping. Additionally, most assets won't have perfectly smooth mirror-like surfaces, so it's unlikely the noise will be noticed in the final product. I'll mention Max here, as Max dithers when baking out 8 bit files, so if you bake in Max you really don't need to worry about any of this (unless you want to generate cavity, AO, etc. maps from your normal map, in which case having a higher bit depth file is handy).
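Conceptually, that dithering step boils down to adding a tiny amount of noise before quantizing, so quantization errors turn into fine grain instead of lining up into bands. A minimal numpy sketch of the idea (my own illustration, not Photoshop's exact algorithm):

```python
import numpy as np

def to_8bit(values, dither=True):
    """Quantize 0-1 float data (e.g. a 16 bit bake) down to 8 bit.

    With dither=True, up to one least significant bit of random noise is
    added before rounding, trading visible banding for fine noise.
    """
    if dither:
        values = values + (np.random.random(values.shape) - 0.5) / 255.0
    return np.clip(np.round(values * 255.0), 0, 255).astype(np.uint8)

# A shallow gradient that bands badly when quantized directly.
gradient = np.linspace(0.5, 0.515, 1024)
banded = to_8bit(gradient, dither=False)
dithered = to_8bit(gradient, dither=True)
# Both end up using the same handful of 8 bit levels, but the dithered version
# mixes neighbouring levels randomly instead of forming hard step edges.
```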



Now, let's look at a real world asset. Here is a sword I modeled recently.



Yet again, baking to a higher bit depth helps to remove banding, and down-converting to 8-bit dithers the result to more effectively cram the values into the 8 bit space.

Now, to maintain a bit of perspective, take particular note of the final textured shot with the 8-bit (undithered) normal map. What were previously very obvious and annoying artifacts have gone away almost entirely. You can still see some issues here and there if you know where to look, but in game/motion, viewed from a reasonable distance, you would be very hard pressed to tell. This doesn't even take into account the havoc that texture compression will wreak on your normal map.

One final image to help visualize why this happens. Here I've increased the contrast of the normal map significantly to expose the problem. Even pushed to this extreme level, which you would never do in production, the 16 bit map holds up relatively well.
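If you want to inspect your own bakes the same way, one quick way to exaggerate banding is to stretch the contrast around the mid value. A rough Pillow/numpy sketch (the file name is just a placeholder, and this is for inspection only - never save the result back over your map):

```python
import numpy as np
from PIL import Image

# Load a baked normal map and heavily boost contrast around the midpoint (0.5)
# so the quantization steps become visible.
img = np.asarray(Image.open("normalmap.png")).astype(np.float32) / 255.0
stretched = np.clip((img - 0.5) * 16.0 + 0.5, 0.0, 1.0)   # 16x contrast boost
Image.fromarray((stretched * 255).astype(np.uint8)).show()
```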











Replies

  • metalliandy
    Nice post, Joe.
    It's great to get this written down for people. 16bits FTW!

    I just wanted to chime in real fast and note that banding can appear in areas with low curvature, in addition to the high curvature examples Joe made above (though his examples are more than adequate) - mainly so people don't assume that because their asset doesn't have large curves it is immune to such banding.

    Also, for those who are interested, the maximum number of unique values per channel for each bit depth is listed below:
    • 8 bit integer: 256 values
    • 16 bit integer: 65,536 values
    • 32 bit integer: 4,294,967,296 values
    • 16 bit float: 65,536 values
    • 32 bit float: 4,261,412,864 values
    Normal maps only use the 0-1 range, so the float values are mostly here for completeness; they matter when baking height maps, where there is a big difference between 16 and 32 bit.
  • EarthQuake
    Yeah, the value range difference between 8 and 16 bit files is absolutely massive. The difference between 16 and 32 bit is even more massive; however, normal maps really do not need 32 bits of float data. HDR panoramas with 25 stops of dynamic range, on the other hand, do.
  • metalliandy
    EarthQuake wrote: »
    Yeah, the value range difference between 8 and 16 bit files is absolutely massive. The difference between 16 and 32 bit is even more massive; however, normal maps really do not need 32 bits of float data. HDR panoramas with 25 stops of dynamic range, on the other hand, do.
    Yea, I agree 100%. 16 bit is plenty for normal maps, for sure. I mainly wanted to show the value ranges we are talking about so people have a point of reference.
    As you said, HDR panos and height maps do require vastly increased precision, but baking 32 bit int for normal maps does seem a little overkill :)
  • Vrav
    Noice. Might be fun to interlace the text with interactive image flipping instead of strips. Maybe polycount needs a little image gallery plugin (with tile separation options). Could be cool for wire overlays in ye olde 3d pimping sections. Hhmmmm
  • Racer445
    EarthQuake wrote: »
    What about in game? 16 and 32 bit file formats are simply not supported (or at least not commonly used due to texture memory constraints) by most game engines. So, at the end of the day you need to output an 8 bit file.

    just to chime in here, the last 3 clients i've worked with have requested 16-bit normals. i know two of those clients are using them in game that way, not sure about the third one. in any case, i think you'll start seeing this trend on all of the real big AAA shit as developers deal with the texture memory problem.
  • Clark Coots
    Very nice write up. How does one go about converting a 16 bit image to 8 bit with dithering? And how would you just dither an 8 bit image if you did not want to bake at 16 bit?
  • EarthQuake
    coots7 wrote: »
    Very nice write up. How does one go about converting a 16 bit image to 8 bit with dithering? And how would you just dither an 8 bit image if you did not want to bake at 16 bit?

    1. Load your 16 bit map up in Photoshop and go to Mode -> 8 Bit. That's it - resave the file and you'll have your dithered 8 bit map.

    2. You can sort of dither from an 8 bit source. First, bake at 2x the resolution, then open it in Photoshop, set it to 16 bit (Mode -> 16 Bit), resize it to half, and then convert back to 8 bit. This will dither to a degree, but it is not as good as baking in 16 bit.

    Really, there isn't much reason not to bake in 16 bit. You don't need to create all your textures in 16 bit; you can simply bake the normal map in 16 bit and convert it to 8 bit right away.
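    For what it's worth, the 2x-bake trick works because averaging four 8 bit pixels produces in-between values that only exist at higher precision. A rough numpy/Pillow sketch of that idea (the file names are just placeholders, and this is an illustration rather than a replacement for the Photoshop steps):

```python
import numpy as np
from PIL import Image

# A 2x-resolution 8 bit bake (e.g. 4096 when your target is 2048).
big = np.asarray(Image.open("bake_4096_8bit.png")).astype(np.float32) / 255.0

# Average each 2x2 block: four 8 bit values can land between 8 bit steps,
# so the downsized result effectively has higher precision than the source.
h, w = big.shape[0] // 2, big.shape[1] // 2
small = big[:h * 2, :w * 2].reshape(h, 2, w, 2, -1).mean(axis=(1, 3))

# Quantize back to 8 bit with up to one LSB of noise so that the extra
# precision turns into dither rather than being rounded away into bands.
noise = (np.random.random(small.shape) - 0.5) / 255.0
out = np.clip(np.round((small + noise) * 255.0), 0, 255).astype(np.uint8)
Image.fromarray(out).save("bake_2048_dithered.png")
```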
  • metalliandy
    Racer445 wrote: »
    just to chime in here, the last 3 clients i've worked with have requested 16-bit normals. i know two of those clients are using them in game that way, not sure about the third one. in any case, i think you'll start seeing this trend on all of the real big AAA shit as developers deal with the texture memory problem.

    I find that quite encouraging. Even if we don't see a move toward 16 bit in game, I think we will at least see a trend of compression moving over to newer DX10/11 formats such as BC5, rather than the legacy DXT awfulness that engines have used in the past.
    coots7 wrote: »
    Very nice write up. How does one go about converting a 16 bit image to 8 bit with dithering?

    Just save the image in Photoshop after converting from 16 bit to 8 bit.
    coots7 wrote: »
    And how would you just dither an 8 bit image if you did not want to bake at 16 bit?
    You wouldn't, tbh. Sure, you could add dithering noise to the bake after the fact, but it would be much easier and faster to bake at 16 bit natively.
  • EarthQuake
    Racer445 wrote: »
    just to chime in here, the last 3 clients i've worked with have requested 16-bit normals. i know two of those clients are using them in game that way, not sure about the third one. in any case, i think you'll start seeing this trend on all of the real big AAA shit as developers deal with the texture memory problem.

    Yeah, I should clarify by saying that 16 bit textures aren't supported by most standard compression methods, which is a big reason why they aren't used. Really, there is no technical reason why they can't be used; they just take up a lot more texture memory, which will be less of an issue going forward. I could see 16 bit normal maps being viable for hero assets, or for, say, a car game or something like that where the primary assets are the sort of worst case scenario, though those sorts of models often skip normal maps entirely.
  • Clark Coots
    Thanks EQ and metalliandy. I thought there might have been an option to dither an image, similar to a compression setting, depending on the image format you save as. From what you're saying, it sounds like dithering happens automatically when you convert from 16 to 8 bit and is independent of the final 8 bit image format. Cool - will start baking normals at 16 bit! Awesome information, thank you!
  • EarthQuake
    Yeah, it happens automatically, and if you don't have good source data to start from there is only so much you can do. Baking out larger and downsizing re-samples the image, which gives Photoshop a chance to dither it, but without good input you can't expect much. Also, that's even more work than simply baking 16 bit in the first place.
  • radiancef0rge
    BC5 with a derived Z vector is becoming very common in DX10+ games: 8 bit R and G channels, with the B & A channels dropped on import.

    As developers (finally) drop DX9, BC5 and BC7 become very appealing compression methods for using 16 bit textures, although BC7 is DX11 only.
  • EarthQuake
    Chris, what's the compression ratio on BC5/BC7?
  • Guedin
    Thanks a lot, EarthQuake, for this useful post, and for all the previous ones ;)
  • jeffdr
    I should jump in here to mention that baking high precision normals (16 bit and up) is not only a good idea for the reasons EQ mentions; it also benefits the case where an engine will ultimately compress the normals.

    The reason is that conversion from uncompressed to compressed normals can do a better job when the uncompressed version is as accurate as possible. This way you aren't daisy-chaining two lossy processes (conversion to 8-bit followed by compression), you only have one lossy step (compression of the true signal). This sometimes requires special tools as many texture compressors don't take high precision inputs.

    Since it came up: compressed sizes for various D3D compression formats are 1/2 byte per pixel for DXT1 and BC4, and 1 byte per pixel for everything else (compare this with 3 bytes per pixel for 8-bit RGB). This includes the newer BC5 through BC7 formats.

    One last note about the precision of 8-bit normals - it's even worse than you'd think, because generally they only store unit vectors. Picture a cube representing the full RGB color space, now imagine a sphere perfectly contained within it. The surface of that sphere is the space of possible normals, but the rest of the volume, both inside and outside the sphere, is wasted - the colors in those locations do not represent unit normals.
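    To put those per-pixel sizes in perspective, here is some quick back-of-the-envelope arithmetic for a single 2048x2048 map, ignoring mip maps (the 16 bit RGB row is my own extrapolation at 2 bytes per channel; the rest uses the figures above):

```python
# Rough texture memory for a 2048x2048 map at the per-pixel sizes listed above.
pixels = 2048 * 2048
sizes_mb = {
    "uncompressed 8 bit RGB (3 B/px)": pixels * 3 / 2**20,
    "uncompressed 16 bit RGB (6 B/px)": pixels * 6 / 2**20,
    "DXT1 / BC4 (0.5 B/px)": pixels * 0.5 / 2**20,
    "BC5 / BC7 (1 B/px)": pixels * 1 / 2**20,
}
for fmt, mb in sizes_mb.items():
    print(f"{fmt}: {mb:.1f} MB")   # 12.0, 24.0, 2.0 and 4.0 MB respectively
```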
  • Vrav
    Excellent points jeffdr
  • Bek
    And of course if you have an asset like a sword with a very glossy blade, it might be worthwhile to use extra geometry in that area to lessen the gradients the normal map has to use to make up for the lowpoly smoothing. Ideally you would notice this during a test bake and consider if dithering is good enough.
  • dzibarik
    I wish I had known this when I was learning normal maps; I spent a few hours of trial and error trying to fix this.
  • Maximum-Dev
    EQ, thanks for covering this - it was really needed.

    Here is the result of my 16Bit tiff dithered to 8Bit tiff.


    Though the 16Bit tiff itself had a lighting problem in Toolbag2.
    Anyone know of a way to make the noise look better?

    Thanks again.
  • Kon
    jeffdr wrote:
    One last note about the precision of 8-bit normals - it's even worse than you'd think, because generally they only store unit vectors. Picture a cube representing the full RGB color space, now imagine a sphere perfectly contained within it. The surface of that sphere is the space of possible normals, but the rest of the volume, both inside and outside the sphere, is wasted - the colors in those locations do not represent unit normals.

    I don't really get what you mean by that, can you explain it a bit further, please?

    As mentioned by EarthQuake, 3ds Max dithers normal maps even when exporting at 8 bpc. Why is that? Do I have to test this behaviour for every piece of software now, or is it better to export maps at 16 bit manually and convert them to 8 bpc afterwards?
    Do I really have to convert them, or am I safe using 16 bpc normal maps inside the engine? Does it increase the size that much?
  • hronet
    It is somewhat unusual to save all three channels (RGB/XYZ) in a normal map (for games); usually only two channels are saved and the final one is derived (using the fact that the length is always 1).

    If you -are- saving all three coordinates, you can use Crytek's method of optimising the length of the normal to improve the direction. This works by finding the length that gives the most directionally accurate normal after quantization. They use a cubemap lookup table to make it fast enough for realtime. See p. 38 and onwards in the following presentation: http://www.crytek.com/cryengine/presentations/CryENGINE3-reaching-the-speed-of-light

    The tradeoff is that when using the normal map you need to normalize the vector, which is more math per normal map sample. Oh, and you can of course still dither the result after length optimisation :)
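    As a minimal illustration of the derived-channel idea (a numpy sketch of what the reconstruction looks like, not Crytek's actual code):

```python
import numpy as np

def decode_two_channel_normal(r, g):
    """Rebuild a tangent-space normal from the two stored channels (0-1 range).

    Z is derived from the fact that the vector has length 1, which is why
    only X and Y need to be stored in the texture.
    """
    x = r * 2.0 - 1.0
    y = g * 2.0 - 1.0
    z = np.sqrt(np.maximum(0.0, 1.0 - x * x - y * y))
    return np.stack([x, y, z], axis=-1)

# A flat "up" normal stored as (0.5, 0.5) decodes back to (0, 0, 1).
print(decode_two_channel_normal(np.float32(0.5), np.float32(0.5)))
```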
  • EarthQuake
    Bek wrote: »
    And of course if you have an asset like a sword with a very glossy blade, it might be worthwhile to use extra geometry in that area to lessen the gradients the normal map has to use to make up for the lowpoly smoothing. Ideally you would notice this during a test bake and consider if dithering is good enough.

    Actually, that's neither here nor there when it comes to banding. Less contrasty gradients typically have more issues with banding, as the values are more subtle and require more precision.
  • EarthQuake
    @Earthquake I'm confused. The takeaway from all of this is to bake 16-bit normal maps for more precision, and convert them to 8-bit in Photoshop, which will dither them. But in your final image you used 8-bit with no dithering, and basically said that once you get all the other texture maps on it and it's in game you'll never see the problems. It just struck me as sort of contradictory, but I probably misunderstood.

    Right, it was intentionally contradictory. It's meant simply to maintain perspective: sometimes all the little things we obsess over when viewing assets untextured matter much less in the end result. Still, I would recommend baking in 16 bit and down-converting, because there are no real disadvantages and it only takes a few seconds extra. It's also better to have a 16 bit source for generating accompanying cavity maps and such in programs like dDo or Knald.
  • Bek
    Huh. I would've thought larger gradients would be worse because of the greater variety of colours required, but I hadn't considered that. So then it'd only make sense if you add enough geo to make, say, a large important area shade completely flat (or use hard edges to do the same).
  • mLichy
    Definitely looks better. Awesome stuff :)

    However, we tried 16 bit PSDs at work for a bit, and it was pretty insane. They were pretty ridiculous to try to work with and save quickly, especially when we sometimes have to open 12 PSDs for a single asset. You might waste 10 minutes or more just saving a few things at a time. Well, then there's HDD space, lol.

    But that does look much nicer.... damn.


    We had to go back to 8 bit, though, unfortunately, because of the time sink and the space. Photoshop had to be restarted probably 20 times a day to stop it maxing out RAM and to keep performance usable.

    Even on 8 core machines with 32GB of RAM and a higher end GPU.
  • sgtkoolaid
    Great post, very informative - crucial for saving space while minimizing image quality loss. I'll be saving this to refer to while I work on my projects. This is why I love this industry: you learn something new every day. :)
  • RogelioD
    This is amazing, thank you! I really wish I had seen this before the weekend, though, because I just submitted an art test with a hero prop that had really bad banding like this. Hopefully it doesn't affect whether or not I passed, but either way this is great info moving forward. Thanks again!
  • Kon
    How can I replicate this behaviour? I tried out several simple meshes in Maya but never got a result like that. Can you please share this arc-like mesh with us, EarthQuake? I would like to test a few things out.

    And because I think my questions got a bit lost, here is my post again:

    As mentioned by EarthQuake, 3ds Max dithers normal maps even when exporting at 8 bpc. Why is that? Do I have to test this behaviour for every piece of software now, or is it better to export maps at 16 bit manually and convert them to 8 bpc afterwards?
    Do I really have to convert them to 8 bpc, or am I safe using 16 bpc normal maps inside the engine? Does it increase the size that much?
  • EarthQuake
    Bek wrote: »
    Huh. I would've thought larger gradients would be worse from a greater variety of colours required, but I hadn't considered that. So then it'd only make sense if you add enough geo to make say, a large important area completely flat shading (or using hard edges to do the same).

    Yeah, getting rid of the gradients is the only real way to avoid banding entirely.

    More tests.

    [Image: bit depth comparison bakes on beveled cubes]

    Here we have a cube with edge bevels at varying geometry density; both versions show banding, the major difference being the specific pattern. A cube with all edges set to hard shows no banding, as there is no gradient at all and thus no precision concerns.

    Interestingly, converting the 16bit file to 8 bit to dither the result adds noise to the hard edge cube's normal map as well, introducing artifacts that don't specifically need to be there. I wonder if baking out of Max natively gives the same dithering result or if it does a better job.
  • EarthQuake
    mLichy wrote: »
    Definitely looks better. Awesome stuff :)

    However, we tried 16bit PSDs at work for a bit, and it was pretty insane.

    Yeah, personally I'm not a proponent of the "everything must be 16 bit" workflow. I know some guys swear by creating all their PSDs in 16 bit, but it's significantly less efficient in terms of resources, especially if you're working with big files, and it only shows clear and obvious benefits with certain map types like normals and displacement. For diffuse/spec/gloss, unless you're doing very heavy value/levels editing, you're not generally going to see any difference.

    Though again, you can still bake your normal map in 16 bit, use that to generate any secondary maps, and immediately convert to 8-bit when you start actually texturing. This is what I generally do.
  • EarthQuake
    Kon wrote: »
    How can I replicate this behaviour? I tried out several simple meshes in Maya but never got a result like that. Can you please share this arc-like mesh with us, EarthQuake? I would like to test a few things out.

    And because I think my questions got a bit lost, here is my post again:

    As mentioned by EarthQuake, 3ds Max dithers the normal maps, even if exporting in 8 bpc. Why is that? Do I have to test this behaviour for every software now or is it better to export maps in 16 bit manually to convert them to 8 bpc afterwards?
    Do I really have to convert them to 8bpc, or am I safe to use 16 bpc Normal maps inside the engine? Does it increase the size that much?

    Here are a variety of files to play with: both the high and low files for each of the test case meshes I've created, plus a variety of baked maps. These are all baked with XN, so set the tangent space to XN if previewing in TB2. Also of note: in 2.06 the TIFF loader is broken and won't import 16/32 bit files correctly. This is fixed in our internal 2.07 release; in the meantime, you can resave those files as PSDs and they will load at the correct bit depth.

    https://dl.dropboxusercontent.com/u/499159/bitdepth.zip

    Whether or not you can use 16 bit files directly in engine depends on the type of file formats/compression your game supports and the texture constraints of your project. If in doubt, talk to an engineer or technical artist.
  • Farfarer
    Strange, I had always thought that the dither amount was based on the values of nearby pixels... so if it was a pixel in the middle of a large area of flat value, you wouldn't get noise added.
  • FelixL
    If you're interested in technical details, here's a talk on something similar in Cryengine: http://advances.realtimerendering.com/s2010/Kaplanyan-CryEngine3%28SIGGRAPH%202010%20Advanced%20RealTime%20Rendering%20Course%29.pdf

    It also mentions storing 16 bit normal maps in 8bit 3dc through encoding improvements and the same for DXT1 via "tonemapping" or histogram renormalization. It's rather old and not that relevant anymore now that BC1/BC5 are supported.
  • hronet
    Farfarer - when dithering to improve on quantisation, noise is added per pixel (with a magnitude of one least significant bit). I'll just plug this here as it is related and may help with some concepts: http://loopit.dk/banding_in_games.pdf

    Slide 59 has an example of dithering the normals into the G-buffer during deferred rendering.
  • AdvisableRobin
    Once again, super awesome post EQ.
  • Farfarer
    hronet wrote: »
    Farfarer - when dithering to improve on quantisation, noise is added per pixel (with a magnitude of 1-Least-Significant-Bit). I'll just plug this here as it is related and may help with some concepts: http://loopit.dk/banding_in_games.pdf

    - slide 59 has an example of dithering the normals to the gbuffer during deferred rendering.
    Ah yeah, I've read that before.

    I didn't realise dithering was just a vague noise applied uniformly across the image, though - I thought it was a bit smarter than that.
  • JasonHeckmen
    This is a hero thread. Thank you EarthQuake!
  • hronet
    Farfarer wrote: »
    I didn't realise dithering was just a vague noise applied uniformly across the image, though - I thought it was a bit smarter than that.

    Importantly, it is a vague noise applied to the high-precision source image, before quantization :) But yea, more advanced algorithms exist that take surrounding pixels into consideration, though I am not aware of any uses in rendering (it's mostly for print).
  • Kon
    EarthQuake wrote: »
    Here are a variety of files to play with: both the high and low files for each of the test case meshes I've created, plus a variety of baked maps. These are all baked with XN, so set the tangent space to XN if previewing in TB2. Also of note: in 2.06 the TIFF loader is broken and won't import 16/32 bit files correctly. This is fixed in our internal 2.07 release; in the meantime, you can resave those files as PSDs and they will load at the correct bit depth.

    https://dl.dropboxusercontent.com/u/499159/bitdepth.zip

    Whether or not you can use 16 bit files directly in engine depends on the type of file formats/compression your game supports and the texture constraints of your project. If in doubt, talk to an engineer or technical artist.

    Big Thank you! It's great you share it with us.
    The only thing I'd point out: I think the low poly model of the arc-shaped object is missing from your files.
  • EarthQuake
    Kon wrote: »
    Big Thank you! It's great you share it with us.
    The only thing I'd point out: I think the low poly model of the arc-shaped object is missing from your files.

    Sorry about that, re-download the file and you should get everything.
  • jeffdr
    Kon wrote: »
    I don't really get what you mean by that, can you explain it a bit further, please?

    I once saw an excellent illustration of this, but I can't seem to find it anywhere now. Anyway, it's hard to explain without pictures to someone unfamiliar with the math, but I'll give it a shot.

    So picture a cube that is the full 8-bit RGB color space (each axis corresponding to R, G, and B respectively, and varying from 0 to 255). This cube contains all possible 8-bit colors. There are 256*256*256 = 16,777,216 such colors.

    Now, normal maps can be encoded in this color space. Normal vectors (each pixel in the normal map) are just sets of 3 values, x,y and z, which denote a surface direction. So these vectors are usually encoded by mapping -1.0 to +1.0 onto the 0 to 255 range (so that -1.0 maps to 0, and +1.0 maps to 255).

    A quick example - say we have a surface normal of {0.0,1.0,0.0}, which is just the +Y axis. This is encoded in the 8-bit RGB space as {127,255,127}.

    For the purposes of rendering, these normals usually need to be "unit" vectors, i.e. of length 1. This means that x*x + y*y + z*z = 1 (some of you may recognize this as the equation for a sphere). So if we look at all the RGB values which fit this equation, we end up with a thin spherical shell of our RGB cube. It actually ends up covering about 3% of the possible RGB values, as I recall.

    So in short, about 97% of the full RGB color space simply goes unused when normals are encoded in this way. There are other ways of doing things, but this is a very common way to treat normal map data.
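    For the curious, here's a quick brute-force sanity check of that figure (my own estimate; the exact percentage depends on how much tolerance you allow around unit length, but it stays in the low single digits):

```python
import numpy as np

# Count the 8 bit RGB triplets that decode to an (approximately) unit-length
# vector, here meaning within half a quantization step of length 1.
c = (np.arange(256, dtype=np.float32) / 255.0) * 2.0 - 1.0     # decoded channel values
x, y, z = np.meshgrid(c, c, c, indexing="ij", sparse=True)
length = np.sqrt(x * x + y * y + z * z)
usable = np.count_nonzero(np.abs(length - 1.0) < 1.0 / 255.0)   # half of one step (2/255)
print(usable / 256**3)   # a few percent at most of all ~16.7 million colors
```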
  • throttlekitty
    Great post as always EQ, and thanks for the extra info jeffdr.

    So what I'm wondering now is whether there's a better solution for converting down to 8 bit, especially after seeing the breakdown with the cubes. Or at least a smarter method in Photoshop.
  • hronet
    jeffdr wrote: »
    I once saw an excellent illustration for this, but I can't seem to find it anywhere now.

    It could be this one, from the Crytek presentation: http://www.crytek.com/cryengine/presentations/CryENGINE3-reaching-the-speed-of-light

    [Image: Crytek best fit normals slide]

    I've been wanting to try this out for a while.
  • metalliandy
    So I thought it would be cool to show how much of a difference there is between the uncompressed shots that Joe posted and their DXT1 and BC5 (3Dc) counterparts.

    First we have DXT1, which has been the de facto standard in normal map compression for years and is RGB at 0.5 bytes per px.

    [Image: DXT1 from 8 bit source]

    [Image: DXT1 from 16 bit source]

    [Image: DXT1 from 8 bit source with dithering]

    Next we have BC5 (3Dc) which is 2x greyscale channels/1 byte per px

    [Image: BC5 (3Dc) from 8 bit source]

    [Image: BC5 (3Dc) from 16 bit source]

    [Image: BC5 (3Dc) from 8 bit source with dithering]

    It's pretty obvious that DXT1 fails miserably, even with a 16 bit source.
    BC5 (3Dc), on the other hand, performs extremely well in all cases, with the 8 and 16 bit images all looking almost exactly the same as the source images.

    The extra quality does come at a cost, however, as BC5 is 2x the cost of DXT1, but with the extra memory available this generation that isn't such a big trade-off any more.

    Cool beans.
  • Kon
    The data EarthQuake is using (which he also provided to us - link: https://dl.dropboxusercontent.com/u/499159/bitdepth.zip) doesn't work for me, unfortunately.
    In the attached image you can see how the UVs are messed up. I want to experiment a bit with the normal maps and so on, but I don't know how to fix this or whether I'm doing something wrong.

    Besides that, I would bring the mesh into Toolbag and assign the Normal maps, change the Tangent Space of the Object (if needed) and make it 100% Gloss and Reflective, is that correct?

    I appreciate any help, thanks ;)
  • radiancef0rge
    I'd be curious what a half res BC5 looks like, tbh. I seem to recall during testing they looked very similar.
  • metalliandy
    I'd be curious what a half res BC5 looks like, tbh. I seem to recall during testing they looked very similar.

    Yea, they look pretty much identical at half res (25%), which is pretty cool when you think that last gen the only option for this kind of quality was pretty much an uncompressed 8 bit texture at 25% of the original size, so that it matched the footprint of DXT1. BC5 is still larger than DXT1 and than uncompressed at half res, but the quality is much better, of course.

    [Image: BC5 (3Dc) from 8 bit, half res]

    [Image: BC5 (3Dc) from 8 bit with dithering, half res]

    [Image: BC5 (3Dc) from 16 bit, half res]

    [Image: BC5 (3Dc) from 16 bit, difference]
  • Gestalt
    Thanks for putting this together! Cool comparisons. Writing 3D unit vectors as 8 bit Cartesian coordinates is such a poor use of information that it's actually pretty surprising 16 bit normals haven't caught on as a standard yet, at least as an archival format for assets. Then let the game engine convert into whatever it chooses - lookups, octahedral maps, and whatever else is trendy and suitable.
  • EarthQuake
    That BC5 looks really nice.