I'm currently curious about optimizing textures. I read once that if you've got a mapped model, you should make all the unused areas in the texture black, so the engine doesn't waste calculations on those areas. Is this true when it comes to mapped textures for games?
I'm currently working on a stone statue, and there are areas on the texture that aren't being used by the model. So does masking off those areas as black help the engine, or is it just to make your texture cleaner, or does it even matter? (Artist preference?)
If this does help optimize the texture, are there any other ways of optimizing your textures?
Replies
I would tend to make unused space a kind of average of all the colours in the texture (or the most dominant and common colour) so that when the texture is mip-mapped to its lowest level, seams do not show up as obviously (learn more about mip-mapping here).
Ideally you'd have as little unused texture space as possible - that is the most memory-efficient way of doing things.
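If you want to script that flood-fill step, something like this would do it - a rough Python/Pillow sketch, assuming unused pixels are marked by zero alpha (a hypothetical convention; adjust it to however your baker marks empty space):
[ CODE ]
# fill_unused.py - flood unused texture space with the average used colour.
# Assumes unused pixels have alpha == 0 (hypothetical convention).
from PIL import Image

def fill_unused_with_average(path_in, path_out):
    img = Image.open(path_in).convert("RGBA")
    pixels = img.load()
    w, h = img.size

    # Average the colour of every *used* pixel.
    total_r = total_g = total_b = count = 0
    for y in range(h):
        for x in range(w):
            r, g, b, a = pixels[x, y]
            if a > 0:
                total_r += r; total_g += g; total_b += b
                count += 1
    if count == 0:
        return
    avg = (total_r // count, total_g // count, total_b // count, 255)

    # Flood every unused pixel with that average so low mips blend cleanly.
    for y in range(h):
        for x in range(w):
            if pixels[x, y][3] == 0:
                pixels[x, y] = avg

    img.save(path_out)

fill_unused_with_average("statue_diffuse.tga", "statue_diffuse_filled.tga")
[/ CODE ]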
If we are talking about JPEG compression, solid colors give much better compression than mixed colors and values. The actual color makes almost no difference (a couple of bytes, usually), whether black or periwinkle. Better compression means smaller files, which means better performance. But since it's unlikely you'll be using JPEG compression for many textures (I use it for a giant lightmap for my level, but that's about it... would never use it for a real texture), and your texture should be mostly used anyway, this isn't going to make much difference.
But as MoP said, unused areas should be the majority color of the texture because of mipping.
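If you want to see that compression difference for yourself, here's a quick, throwaway comparison (Python/Pillow, purely illustrative) of how a flat-colour image compresses against greyscale noise at the same JPEG quality:
[ CODE ]
# jpeg_size_test.py - compare JPEG file sizes for flat colour vs. noise.
import io
import random
from PIL import Image

def jpeg_size(img):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    return len(buf.getvalue())

flat = Image.new("RGB", (512, 512), (90, 90, 90))   # one solid colour
noisy = Image.new("RGB", (512, 512))
noisy.putdata([(random.randint(0, 255),) * 3 for _ in range(512 * 512)])  # greyscale noise

print("flat :", jpeg_size(flat), "bytes")
print("noisy:", jpeg_size(noisy), "bytes")
[/ CODE ]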
Disk format is irrelevant for performance. Use .bmp if you want - it will affect loading times and size on disk, but not performance.
Then smudge your edge-pixel colors outwards a bit around each chunk. Most texture-bakers have an option to do this automatically for you, called Edge Padding or somesuch. If you use Photoshop, you can use an Action script to do it, here's mine. After that, if you still have unused pixels, it's a good idea to flood-fill them using MoP's idea.
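If you don't have a baker or Photoshop handy, the padding step itself is just a dilation pass over the empty pixels - a minimal sketch below, again assuming zero alpha marks unused space (hypothetical; match it to your own masks):
[ CODE ]
# edge_pad.py - grow the edge pixels of each UV chunk outwards a few texels.
from PIL import Image

def edge_pad(path_in, path_out, passes=8):
    img = Image.open(path_in).convert("RGBA")
    w, h = img.size
    for _ in range(passes):
        src = img.load()
        out = img.copy()
        dst = out.load()
        for y in range(h):
            for x in range(w):
                if src[x, y][3] > 0:
                    continue  # already a used pixel, leave it alone
                # Borrow colour from the first filled neighbour we find.
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < w and 0 <= ny < h and src[nx, ny][3] > 0:
                        r, g, b, _ = src[nx, ny]
                        dst[x, y] = (r, g, b, 255)
                        break
        img = out  # each pass grows the padding by one pixel
    img.save(path_out)

edge_pad("statue_diffuse.tga", "statue_diffuse_padded.tga", passes=8)
[/ CODE ]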
Another article about MIPs.
http://developer.nvidia.com/object/lets_get_small.html
Compression... I'm surprised no one mentioned DXT, a widely-used format for bitmap compression in games. Nvidia has a great Photoshop plugin here.
It's a lossy format, so you want to save your originals for edits, but it's a big win on the video memory end of things.
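To put rough numbers on that video-memory win: DXT works on 4x4 texel blocks (8 bytes per block for DXT1, 16 for DXT5), so a quick back-of-the-envelope script like this shows the footprint of a 512x512 with a full mip chain (just arithmetic, not any particular engine's accounting):
[ CODE ]
# dxt_memory.py - rough video-memory footprint for a 512x512 texture,
# full mip chain included.
def mip_chain_bytes(size, bytes_per_texel=None, dxt_block_bytes=None):
    total = 0
    while size >= 1:
        if dxt_block_bytes:
            blocks = max(size // 4, 1) ** 2   # DXT stores 4x4 texel blocks
            total += blocks * dxt_block_bytes
        else:
            total += size * size * bytes_per_texel
        size //= 2
    return total

print("RGBA8:", mip_chain_bytes(512, bytes_per_texel=4) // 1024, "KB")
print("DXT1 :", mip_chain_bytes(512, dxt_block_bytes=8) // 1024, "KB")
print("DXT5 :", mip_chain_bytes(512, dxt_block_bytes=16) // 1024, "KB")
[/ CODE ]
That works out to roughly 1.3 MB uncompressed versus about 170 KB for DXT1 - an 8:1 saving (4:1 for DXT5).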
Except that jpeg compression won't help performance at all, because jpeg is a disk format and has zero influence on how texture data is represented on video cards.
[ QUOTE ]
Disk format is irrelevant for performance. Use .bmp if you want - it will affect loading times and size on disk, but not performance.
[/ QUOTE ]
Exactly, but I remember one texture format being able to - can .dds textures stay compressed within memory?
edit: nm, just read Eric's reply...
See http://www.gamasutra.com/features/20051228/sherrod_01.shtml
We use Targas, not JPEGs, BMPs or DDS files. So with the statue I'm working on: it's a 512x512, and I've got the whole thing in a stone texture, detailing the UV'd areas where needed of course. Would it be fine just leaving it all stone, or should I fill in the unused areas with a common color? Or would that just be for more complex textures (multiple types of materials, etc.)?
DXT compression isn't dependent on assets being in .dds format on disk, as the API can do the conversion when the texture is loaded. Whether assets are DXT compressed on the card is entirely controlled by API calls and the driver. The reason for using precompressed .dds files is that loading them from disk is quicker than on-the-fly conversion, and it also gives the artist direct control over the mipmaps.
[ QUOTE ]
See http://www.gamasutra.com/features/20051228/sherrod_01.shtml
[/ QUOTE ]
Very interesting, didn't know that. So from what I understand, the texture could be in .tga, but as long as the graphics API tells it to be DXT compressed it will be compressed/converted on the fly and stored DXT compressed to save video memory? If so, then that is pretty cool.
Yeah, that's a good point Black_Dog. I've seen programmers change texture formats on load, or during runtime, whatever is needed, usually to squeeze out better performance. It's a good thing, generally. Sometimes though you'll find your textures being downsampled without your consent. Maybe the artists are exceeding the available memory for a particular zone, or maybe the artist didn't understand why 2048's couldn't be used on everything. But in the end the framerate is king.
We don't do it here yet, but I love the systems studios have in place where the artists author in whatever format they're used to (PSD or whatever), and the runtime toolchain precompresses them automagically, not wasting artist time on it. Although hand-tweaking mips does have its place, I recently had to do it for a water-wave shader, since it was pixelling badly toward the horizon. Smoothing out the lower mips made it fade better.
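That kind of mip tweak is easy to prototype offline too - a rough sketch (Python/Pillow, hypothetical file names) that rebuilds a mip chain and blurs the lower levels so the detail fades out toward the distance instead of sparkling:
[ CODE ]
# soften_mips.py - build a mip chain and blur the small levels so a tiling
# detail texture (e.g. water waves) fades out instead of sparkling at distance.
from PIL import Image, ImageFilter

def build_softened_mips(path_in, blur_from_level=3, blur_radius=1.5):
    base = Image.open(path_in).convert("RGB")
    mips = [base]
    level = 0
    while mips[-1].size[0] > 1:
        level += 1
        w, h = mips[-1].size
        # Each mip is generated from the full-res base, then the lower
        # levels get an extra blur pass.
        mip = base.resize((max(w // 2, 1), max(h // 2, 1)), Image.LANCZOS)
        if level >= blur_from_level:
            mip = mip.filter(ImageFilter.GaussianBlur(blur_radius))
        mips.append(mip)
    return mips

for i, mip in enumerate(build_softened_mips("water_waves.tga")):
    mip.save(f"water_waves_mip{i}.tga")
[/ CODE ]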
[ QUOTE ]
...as long as the graphics API tells it to be DXT compressed it will be compressed/converted on the fly and stored DXT compressed to save video memory? If so, then that is pretty cool.
[/ QUOTE ]
Well, it's not perfect. DXT is a lossy compression format and it can have a visible quality impact in some cases, notably normalmaps. It's definitely a good choice to have available though.
Eric, that's a pretty creative use of mipmap control. Gotta remember that one...
Battlefield 2 has what I was trying to avoid. Here... left side horizon, you can see both the mips and the tiling causing some dots. Much more obvious in action though.