I got tired of not knowing what all the different options in the Nvidia DDS Photoshop export dialog box did, so I sat down and tried them all. I also did some online research for the types that weren't obvious. This isn't complete; I'm sure I'll get some feedback on things I got wrong or omitted, so please let me know about any parts that need changing. When it's finalized I'll add a PDF link so it can be kept handy.
http://www.poopinmymouth.com/tutorial/dds_types.html
Replies
[ QUOTE ]
This is the uncompressed 32-bit targa (8 bit alpha) equivalent DDS format. However it's larger than the original targa in filesize!
[/ QUOTE ]
DDS can contain mipmaps, so the additional size could well be those.
"8:8:8 has no alpha."
"8888.jpg" shows an alpha being used and labled as "8:8:8:8 (8-bit alpha)"
your 8:8:8 sextion,
"8:8:8 has no alpha."
"8888.jpg" shows an alpha being used and labled as "8:8:8:8 (8-bit alpha)"
[/ QUOTE ]
Whoops, I'll fix that. Since it didn't have an alpha, I didn't paste it into the Photoshop file. Thanks for that.
[ QUOTE ]
[ QUOTE ]
This is the uncompressed 32-bit targa (8 bit alpha) equivalent DDS format. However it's larger than the original targa in filesize!
[/ QUOTE ]
DDS can contain mipmaps, so the additional size could well be those.
[/ QUOTE ]
Hah! I forgot all about the mips. Let me redo it without them to give an accurate comparison.
I think your DXT5_NM info might be a tad off. In my experience, you leave the green channel where it is and you put the red channel in the alpha channel. That's because the compression places more bits in the green channel than the red or blue channels. As far as I know, this mode is exactly like DXT5 with alpha except it does the channel rearranging for you.
This info could be wrong. I haven't looked it up. But that's just my gut feeling.
[ QUOTE ]
The Palette options don't crash Photoshop if you actually Palette the image (make it 256 colours yourself)
[/ QUOTE ]
Should have tried that! I'll add those this weekend, thanks Thnom.
[ QUOTE ]
Nice Job, Ben!
I think your DXT5_NM info might be a tad off. In my experience, you leave the green channel where it is and you put the red channel in the alpha channel. That's because the compression places more bits in the green channel than the red or blue channels. As far as I know, this mode is exactly like DXT5 with alpha except it does the channel rearranging for you.
This info could be wrong. I haven't looked it up. But that's just my gut feeling.
[/ QUOTE ]
Sounds more right than my guess, I'll edit that part too, thanks Ben.
http://riccardlinde.com/gBxttut.html
Yeah, the photographic texture didn't work so well. If I get some time I'm going to redo it with a part normal map, part gradient image that you can blow way up to examine via a hyperlink.
The rest of the fixes have been updated.
I found DXT3 better when the alpha had islands of black and white that were antialiased, like interface or font bitmaps. If I had gradients bigger than 4 pixels across, then DXT5 was always better. Here's a poor example.
Nvidia is unlikely to update their PS plugin until or unless they make a CUDA version. The zip on my site has a bunch more fixes though; have you tried it? I think it's v7.81 or something.
Taking a look at it again, the 3D Preview has a nice comparison feature built in; it shows up to eight formats side by side, including uncompressed. Pretty cool, might help out. If you hit Preview Settings, you can also toggle a "difference" view.
There are also more formats to convert to in that version; here's a screengrab. I asked the guy at Nvidia to re-order them so it was easier for me to make sense of it all. Too bad Doug left. It kinda pisses me off that they aren't updating this thing, but I'd guess the power users are all using the command line instead.
"Most textures used in games seem to fall in roughly four
categories:
1. Textures with a lot of relatively high-frequency content everywhere.
Those are quite common and pretty well-behaved under DXT compression
(i.e. they tend to come out quite nicely no matter how smart or
stupid your DXT compressor is).
2. Textures with mostly mid-frequency content and some localized
high-frequency features. (Like textures for mechanical parts, fonts,
and the like). They are also relatively common. Blocks with high-freq
features come out pretty jaggy with all DXT compressors - no way to
get nice edge antialiasing when you only have 4 shade levels per
block . The rest is usually OK.
3. Textures with mostly low-frequency content. (Gradients, skies and the
like). This is where differences between DXT compressors are most
apparent, but in my opinion, even the best DXT compressors look
pretty bad on this kind of textures (i.e. tend to produce a lot of
banding no matter what).
4. "Special-case" textures, i.e. those not intended to be viewed alone,
but used as intermediate inputs in some shading calculation - this
ranges from lightmaps over normalmaps to LUTs for pixel shaders.
What they have in common is that they usually don't work well with
stock DXT compressors and need some extra work to come out nicely
(like considering DXT block boundaries in lightmap packing etc).
"
RE: #3...
"There a good feature in [ATI's] TheCompressionator that can help in some hard cases.
If you get banding in a blue sky, just increase the blue weighting in the options and the gradient quality should improve."
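Just to illustrate why that works, here's a generic Python sketch (not The Compressonator's actual code or options): a DXT encoder picks its two block endpoints by minimizing some per-channel error, and raising the blue weight makes banding in blue cost more, so the fit spends its precision there.

    # Illustrative only - a per-channel weighted error like the one a DXT encoder
    # minimizes when picking block endpoints (weights and names are made up here).
    def weighted_error(original, compressed, weights=(0.3, 0.59, 0.11)):
        err = 0.0
        for (r0, g0, b0), (r1, g1, b1) in zip(original, compressed):
            err += (weights[0] * (r0 - r1) ** 2 +
                    weights[1] * (g0 - g1) ** 2 +
                    weights[2] * (b0 - b1) ** 2)
        return err

    # Raising the blue weight (say to 0.6) makes banding in a blue-sky gradient
    # cost more, so the encoder chooses endpoints that preserve that gradient.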
RE: DXT1 vs. DXT3 vs. DXT5...
"RGB accuracy of all the DXTn variants is identical."
"You probably don't want to ever use DXT1 for stuff with alpha channels. The problem is that even if your original art only has full or nothing alpha, as soon as you generate mipmap levels, you need partial alpha or it starts to look really ugly. Unless you're using premultiplied alpha-blending (which you should, of course), this can cause problems with bilinear filtering."
"DXT3 gives you 4-bit alpha, meaning 16 values evenly spaced between 0 and 1 for your entire image. DXT5 gives you 3-bit alpha that is interpolated between 2 grayscale endpoints, meaning you get 8 values linearly spaced differently for every 4x4 block of pixels in your image. In general, DXT5 provides better results than DXT3 because most 4x4 blocks of alpha don't cover the full 0-1 range. However, for textures that do cover that range well, DXT3 would be better quality."
RE: Must DXT textures be powers of two?
"DXTn [DXT1 thru DXT5] texture dimensions must be multiples of 4. This is because the compressed format breaks textures up into fixed 4x4 blocks.
If the texture is being mip-mapped, then it must be a power of 2 in size. This is true whether it is DXTn compressed or not.
So a DXTn texture can be 36x36 if it is not using mip-mapping, but will fail if mip-mapping for that texture is also enabled."
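The same rules as a tiny Python helper (the function name is mine, not from any SDK):

    # Check DXT-friendly dimensions per the rules above.
    def dxt_dims_ok(width, height, mipmapped):
        def pow2(n):
            return n > 0 and (n & (n - 1)) == 0
        if width % 4 or height % 4:
            return False            # block compression needs multiples of 4
        if mipmapped and not (pow2(width) and pow2(height)):
            return False            # mip chains need power-of-two sizes
        return True

    print(dxt_dims_ok(36, 36, mipmapped=False))   # True
    print(dxt_dims_ok(36, 36, mipmapped=True))    # False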
"ATI 3Dc format (ATI2N) is similar to DXT5 in that ATI 3Dc is like having two DXT5 compressed alpha channels." More about 3Dc here:
http://ati.amd.com/developer/samples/dx9/3dcnormalcompression.html
RE: Using DXT1 for normal map compression, using the R and G channels for X and Y...
"if you're still targeting DX8 level hardware i'd strongly discourage using DXT1 as there's a NVIDIA hardware bug that produces wrong texels on GF3 and GF4 class cards (including the one in a certain console when using DXT1. The error is small enough not to affect texture maps visibly, but can lead to a screen full of garbage pixels when you're compressing normal maps with DXT1 and use them for specular or reflection mapping.
"DXT1 is expanded to 565 in the texture cache (dithered on the GF4, not dithered on DX3) rather than the 8888 it should have been expanded to. Quantising normal maps to 565, even with dithering, is going to cause some outrageous errors in lighting.
RE: comparing different DXT compressors (ATI, Nvidia, Microsoft)...
"After looking at all 3 tools here is what I concluded:
- The nVidia and ATI algorithms are more pleasing, as they don't over-dither compared to the D3DX October 2005 SDK release.
- Older D3DX Microsoft implementations are really bad. I'm guessing the algorithm was recently revised?
- The ATI library is very clean, small and simple. (one header + one 65k .lib)
- The ATI and nVidia algorithms are so close to each other that I could not find an image that looked better (even zoomed 400%) using one or the other."
RE: How both the DXT5n format (and the Nvidia DXT compressor) work...
"A = norm.x
G = norm.y
R and B are unused so they are set to norm.y so that the compressor does not try to compress two colors, only the green channel.
The compressor has to take into account R G and B for compression. The values are not independent, adding unrelated R and B values will compromise the G channel.
Because you have only two endpoints in the DXT compression scheme, the compression algorithm attempts to fit a 3D line to all 16 colors in a block, treating the R, G, B values as 3D points.
...
You are allowed two endpoints (and only two) that represent two palette entry colors. This is for every 4x4 block of texels.
Then you can interpolate from these two colors. To compress well, you need to fit a line through your colors and only store the two endpoints.
The index values tell you where along the line you can sample from for a grand total of 3 or 4 colors for the 16 texel block."
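A stripped-down toy version of that fitting idea, in Python (not NVIDIA's actual compressor):

    # Toy DXT1-style block fit (not a production compressor): use the two block
    # colors farthest apart as endpoints, interpolate 4 palette entries along the
    # line between them, and store a 2-bit index per texel.
    from itertools import combinations

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def fit_block(block16):                      # block16: sixteen (r, g, b) tuples
        c0, c1 = max(combinations(block16, 2), key=lambda pair: dist2(*pair))
        palette = [tuple(c0[i] + (c1[i] - c0[i]) * t / 3.0 for i in range(3))
                   for t in range(4)]
        indices = [min(range(4), key=lambda k: dist2(px, palette[k])) for px in block16]
        return c0, c1, indices

    # If R and B carry data unrelated to G, the farthest-apart pair (and so the
    # whole fitted line) gets dragged by that noise - which is why DXT5nm keeps
    # those channels flat or filled with copies of the Y data.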
"Actually, we do use signed textures for normal maps, although we dont support loading them at the moment. The unsigned normal maps are converted. The 8-bits-per-component signed normal maps we use store numbers in the range [-128, 127] whereas the equivalent unsigned normal map stores numbers in the range [0, 255]. The advantage of the signed normal map becomes apparent if we consider how the normal maps are sampled in the pixel shader.
Values from a signed texture are mapped so that [-127, 127]->[-1, 1] when sampled in the pixel shader. Notice that -128 is not included in the source range. It appears that it is clamped to -127. Values from an unsigned texture are mapped so that [0, 255]->[0, 1]. The unsigned values must be scaled by 2 and biased by -1 in the pixel shader to move them into [-1, 1].
The problem is that there is no way to represent 0 in the [-1, 1] range in an unsigned texture. If we try to transform 0 back from [-1, 1] into [0, 255] we get ( 0 + 1 ) / 2 * 255 = 127.5, which is not an integer. Therefore when it is stored in the texture it is rounded to 128, and when we map that back to [-1, 1] we get 128 / 255 * 2 1 = 0.003921568627450980392156862745098. With the signed texture, on the other hand, 0 represents 0 in both ranges.
That means only a signed texture can represent a normal with no bend in the pixel shader, and thats why we use it.
We convert unsigned textures to signed by subtracting 128 from all the components, which maps 1 to -127, 128 to 0 and 255 to 127.
"
And I prefer the DXn format for normal maps rather than the DXT5 method. It uses two channels, and has the same file size as the DXT5 map. So, same bits but packed into 2 channels = more accuracy for the same cost. And the hardware should return the 3rd channel (normalized) for you!
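The "return the 3rd channel for you" bit is just the unit-length trick; the reconstruction the hardware (or your shader) does is essentially this sketch:

    import math

    # Rebuild Z for a unit-length tangent-space normal stored as X and Y only,
    # the way a two-channel format like DXn/3Dc relies on.
    def reconstruct_z(x, y):
        return math.sqrt(max(0.0, 1.0 - x * x - y * y))

    print(reconstruct_z(0.3, 0.4))   # 0.866..., since x^2 + y^2 + z^2 = 1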
DXn = 3Dc? Or something different?
I didn't know what unsigned vs. signed means, so I asked one of the programmers here. Here's how he explained it to me. I added emphasis to highlight the meat of the issue...
"Actually, we do use signed textures for normal maps, although we dont support loading them at the moment. The unsigned normal maps are converted. The 8-bits-per-component signed normal maps we use store numbers in the range [-128, 127] whereas the equivalent unsigned normal map stores numbers in the range [0, 255]. The advantage of the signed normal map becomes apparent if we consider how the normal maps are sampled in the pixel shader.
Values from a signed texture are mapped so that [-127, 127]->[-1, 1] when sampled in the pixel shader. Notice that -128 is not included in the source range. It appears that it is clamped to -127. Values from an unsigned texture are mapped so that [0, 255]->[0, 1]. The unsigned values must be scaled by 2 and biased by -1 in the pixel shader to move them into [-1, 1].
The problem is that there is no way to represent 0 in the [-1, 1] range in an unsigned texture. If we try to transform 0 back from [-1, 1] into [0, 255] we get ( 0 + 1 ) / 2 * 255 = 127.5, which is not an integer. Therefore when it is stored in the texture it is rounded to 128, and when we map that back to [-1, 1] we get 128 / 255 * 2 1 = 0.003921568627450980392156862745098. With the signed texture, on the other hand, 0 represents 0 in both ranges.
That means only a signed texture can represent a normal with no bend in the pixel shader, and thats why we use it.
We convert unsigned textures to signed by subtracting 128 from all the components, which maps 1 to -127, 128 to 0 and 255 to 127.
"
[/ QUOTE ]
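The two mappings the programmer describes, as a quick Python sketch (not any engine's actual loader):

    # Unsigned 8-bit channel: [0, 255] -> [0, 1], then scale and bias to [-1, 1].
    # Note that no byte lands exactly on 0.0.
    def unsigned_to_float(b):
        return (b / 255.0) * 2.0 - 1.0

    # Signed 8-bit channel: [-127, 127] -> [-1, 1], with -128 clamped to -127,
    # so the byte 0 really is 0.0.
    def signed_to_float(b):
        return max(b, -127) / 127.0

    # The conversion mentioned above: subtract 128 from every unsigned component.
    def unsigned_to_signed(b):
        return b - 128                   # 1 -> -127, 128 -> 0, 255 -> 127

    print(unsigned_to_float(128))        # 0.00392..., the closest unsigned gets to zero
    print(signed_to_float(0))            # 0.0 exactly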
In English, signed and unsigned means that a single bit is used as a boolean flag for whether the current number (integer) is negative or not.
By using unsigned you can represent higher values, but you sacrifice the negative range.
This is something that is mostly a performance issue. If you need that extra positive range and have absolutely no use for the negative range, you use unsigned. You use them [unsigned integers] liberally in C and C++, but I have had bad experiences with them in C#, as various conversions don't handle them very well; there they're mostly used when you're interfacing with legacy code or the Windows XP API (such as mouse messages for DirectX and OpenGL).
As an artist, I usually need it explained in visual terms. I think that's the whole reason poopinmymouth started his doc. MSDN's DXTn pages are gibberish to most game artists like me.
I think we'll find some of the options in the DDS plugin are probably way beyond what a game artist needs to understand in their daily use. Signed/unsigned is likely one of those.
http://www.rsart.co.uk/2006/08/27/green-tangent-space-normal-maps-2/
Poop, when you get a chance to work on this some more, I found some good image examples to build off of. UDN chose images that are both typical and atypical, to highlight exactly what's going on with the compression. I thought they did a great job. It also shows their "bright" format, which is basically the same thing as DDS' 8-bit paletted RGB.
http://udn.epicgames.com/pub/Content/TextureComparison
Though I think they focus a bit much on the Nvidia GF3/GF4 DXT1 problem, which makes it seem like DXT1 is always bad. Not so true anymore, since Nvidia fixed it post-Geforce4.
Which console uses GF4 class hardware? Was it the original Xbox?
Looked up DXn, found this info. Thanks for the heads up Whargoul.
"
Xbox (NV2A) supports DXT1, DXT2/DXT3, and DXT4/DXT5 formats. That's S3TC.
Xbox 360 (Xenos) adds some new compressed texture formats.
DXT3A/DXT5A - single component, 4 bits precision
DXN - two components, 8 bits precision per component
CTX1 - two components, 4 bits precision per component
DXN and CTX1 are useful for normal map compression (3Dc).
Also, Xbox 360 DXT1 decompression is 32-bit, while Xbox DXT1 is 16-bit.
"
http://forum.pcvsconsole.com/viewthread.php?tid=19951
http://developer.nvidia.com/object/photoshop_dds_plugins.html
I added some info in a thread on the CrazyBump forum...
DXT5nm = y in green, x in alpha, red & blue empty.
When creating a DDS normalmap, the mip generation method is important. Nvidia has a great pdf called Let's Get Small that talks about the best way to make normalmap mips (grayscale->mips->normals, not grayscale->normals->mips). Makes a difference.
Not good to use DXT1 for normalmaps, because DXT5nm takes advantage of how DXT compresses the alpha separately from RGB; they don't influence each other the way the RGB channels do. DXT1 only has binary alpha.
DXT5nm is great because the normalmap stays compressed on the card. You get some artifacts from the compression, but mostly in the Y/green channel, less in the X/alpha channel.
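For reference, the channel shuffle that gets an ordinary RGB tangent-space normal map into that DXT5nm layout is trivial. Here's a sketch using Pillow; the file names are placeholders, and you'd still save the result as DXT5 in the DDS plugin.

    from PIL import Image

    # Repack a plain RGB tangent-space normal map into the DXT5nm layout:
    # Y stays in green, X moves into alpha, red and blue are left flat.
    # (Some pipelines copy green into R and B instead, per the quote earlier in
    # the thread; either way keeps the color fit from being skewed.)
    src = Image.open("normal_rgb.tga").convert("RGB")     # placeholder file name
    r, g, b = src.split()
    flat = Image.new("L", src.size, 127)
    Image.merge("RGBA", (flat, g, flat, r)).save("normal_dxt5nm.tga")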
3Dc looks pretty good, fewer artifacts than DXT5nm, but the same filesize. We don't support 3Dc at this point, otherwise I'd ask for that!
DXT1 vs. DXT3 vs. DXT5:
DXT1 is great if you don't have alpha. Same RGB compression method as DXT3 & DXT5, but about half the filesize.
DXT3 is better if you have alpha with mostly solid black/white and thin antialiased edges. It creates fewer artifacts for rapid value changes within each 4x4 block of pixels.
DXT5 doesn't do as well in that case, works better with slower value changes in each 4x4 block, like smooth gradients.
I use DXT1 for color maps that don't have alpha, as long as I don't mind DXT artifacts. For critical colormaps (splash screens, etc.) I use 8:8:8 RGB DDS, basically uncompressed 8-bit color, plus mips if needed.
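If it helps to see where "about half the filesize" comes from, here's the block math as a quick Python sketch (it ignores the 128-byte DDS header and assumes power-of-two sizes):

    # Bytes for a width x height texture, optionally with a full mip chain.
    # DXT1 = 8 bytes per 4x4 block, DXT3/DXT5 = 16, "8888" = 4 bytes per pixel.
    def dds_bytes(width, height, fmt, mips=False):
        per_block = {"DXT1": 8, "DXT3": 16, "DXT5": 16}
        total, w, h = 0, width, height
        while True:
            if fmt in per_block:
                total += max(1, w // 4) * max(1, h // 4) * per_block[fmt]
            else:                                    # "8888", uncompressed 32-bit
                total += w * h * 4
            if not mips or (w == 1 and h == 1):
                return total
            w, h = max(1, w // 2), max(1, h // 2)

    for f in ("DXT1", "DXT5", "8888"):
        print(f, dds_bytes(512, 512, f, mips=True))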
Maybe this can nudge the Poop tutorial machine. More likely it's just a place to store info until I can pare it down into a wiki.