Technical Talk

Optimization in a Static World

Most of the optimization talk around here relates to current-gen hardware practices, or to functionally last-gen mobile and handheld practices.

I am starting on a project that is sort of neither here nor there. I am making a 3D world that is completely static (no animation, no interactivity except walking and looking), with exclusively unlit shaders, and polycounts a bit closer to mobile than to current gen. BUT this project will also have a pretty large amount of non-reusable textures (for the most part diffuse only) and will run on modern, non-mobile hardware.

I'm not an expert on optimization, but I'm assuming (reasonably, I think) that all this already gives me a big leg up. However, I'd really like to take full advantage of my blessings, and go about this as sensibly as possible. I'd hate to have a big optimization opportunity and miss it. Plus, the more extra resources I can pour into textures, the better.

I have a few specific questions to that end, and obviously if I'm saying anything terribly wrongheaded or missing something obvious I'd like to hear about that too.

First of all, I'm curious whether overdraw is as big a deal with unlit shaders. I've had trouble getting a clear answer about how much of the overdraw problem stems from lighting and how much is intrinsic to using alpha transparency. After all, older games used alpha to fake geometry quite a lot. Was overdraw always a problem, but polycount just a bigger one? Or is this problem somehow exclusive to this brave new gen? I'm asking because I'd LIKE to use alpha in places, if I can.

Second, I've of course also heard a lot about draw calls being a big problem. Originally I thought I'd have to break up my big world model into smaller pieces for optimization, but after doing some research and learning about draw calls it seems that people are suggesting just the opposite. So my question is, given a completely static world with non-reusable textures and at most 2 or 3 simple shaders, how should I break up my meshes and textures? Should the meshes be as big as possible while keeping a reasonable level of detail given 4096 textures? Should I break the meshes up into smaller chunks but use shared materials and texture atlases? Should I make the world one huge mesh, but apply the different textures as different materials to different areas? Knowing the most efficient way will have a big effect on my workflow.

Also, what would be a good way of compressing textures? I like the look of low-color dithering and for textures with high resolutions it's actually a desirable look for me. I'm not sure that sticking a 64 color gif into Unity is the best idea, though. Is there a common texture format that supports compression via dithering and color reduction and is efficient about it?

Thanks ahead of time, I hope I'm not out of line asking about such a specific set of restrictions. Hopefully my questions are general enough to still be of interest.

Replies

  • Der Hollander
    Without images to kind of illustrate what you're shooting for, my initial gut reaction is that "a pretty large amount of non-reusable textures", combined with the mention of high-resolution textures, is what's going to kill your scene performance straight out of the gate.

    From what I understand, poly counts are less of a concern with current-gen hardware than texture memory, which will probably be the #1 bottleneck for runtime graphics for a long time to come. From the mesh standpoint, one big mesh also sounds like a bad idea, since the engine won't be able to independently LOD or even cull objects at a large distance; it would essentially have to reference the entire scene at all times. For example, 3DS Max/ZBrush will eat millions of triangles in a scene no problem, as long as the scene is broken up into manageable pieces of, say, less than 50k tris per asset. But ask either program to crunch the same number of tris as a single piece and it's either going to chug horrendously or throw a big "NOPE" your way.

    At the end of the day, the way things are handled this gen is very much "second verse, much like the first" especially if you're not utilizing the changes in the specular rendering formulas.
    I'd personally recommend just eating the tri counts, using smaller, tileable textures and letting the geo handle your details. Even with unlit rendering, you could use the geo to generate vertex colour masks or even vertex-texture your assets, and then use small, dithered textures or a material/shader that does dithering transitions for you at shader compile time.
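    (If it helps, the ordered dithering being described is simple enough to prototype outside a shader. Below is a rough Python sketch; the 4x4 Bayer matrix is standard, but the level count and the toy gradient are just illustrative:)

```python
# Ordered (Bayer) dithering sketch: snap 8-bit values to a few levels,
# offsetting each pixel by a position-dependent threshold so smooth
# gradients turn into dither patterns instead of hard bands.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither(pixels, levels=4):
    """pixels: 2D list of 0-255 values; returns values snapped to `levels` steps."""
    step = 255 / (levels - 1)
    out = []
    for y, row in enumerate(pixels):
        out_row = []
        for x, v in enumerate(row):
            # Threshold in roughly [-0.5, 0.5) of one quantization step.
            threshold = (BAYER_4X4[y % 4][x % 4] / 16 - 0.5) * step
            q = round((v + threshold) / step) * step
            out_row.append(min(255, max(0, int(q))))
        out.append(out_row)
    return out

# A shallow horizontal gradient: dithering scatters it between the two
# nearest palette levels instead of cutting it at a single hard edge.
gradient = [[x for x in range(16)] for _ in range(4)]
result = dither(gradient, levels=4)
```

    The same threshold-matrix idea is what a "dithering transitions" shader would do per fragment, just with the matrix lookup in screen or UV space.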
  • tisTree25
    You're right, I should post an example. I don't have a full scene example presently, unfortunately. I do have a few smaller objects that should show what I'm going for overall, though. Here's one:

    [attached image: 4L1XP3l.gif]

    Basically, contrary to the usual "lit from above, no strong shadows" approach to hand-painting textures, I'm trying to go for strong directional lighting. Possibly for areas in shadow or for noon-time scenes I'd reuse textures to a greater extent, but generally I'm going to be using unique textures for most objects. I also want the shading to look more arbitrary and painterly, so I'm hesitant to use any computer-generated lighting at all. Ideally I'd want the geometry to only be in obvious display at the edges of objects.

    Also, my textures are going to be pretty loose and not very detailed in most areas, so I don't need the biggest textures, just big enough to keep them from looking too pixelated. I don't need too many colors either; I'm going to be keeping to a modest palette, and dithered transitions between colors are 100% good by me. But again, I'd like to keep the textures unique as much as possible, so I need to reduce the hit I'd take from this as much as I can.

    Thanks for your advice on models. I guess keeping the mesh as one piece is pointless; it sounds like it's more about assets sharing texture atlases and materials than about the mesh itself being kept together in a small number of assets.

    EDIT: I just noticed that I wasn't clear about the sizes of the textures I was planning to use in my original post, leading to your confusion about me using high-resolution textures. I mentioned using 4096 textures not because that would be my standard texture size, but because as I understand it that's the biggest texture size anybody would use. I was talking about using a 4096 texture not for one asset, but for large sections of a scene. The skull example uses a 1024 texture, and to me looks pretty good close up, and acceptable even at 512. Those are more the sizes I'd be using for individual assets.
  • Der Hollander
    Based on what you've said, I think you're totally headed in the right direction.
    One thing I'm definitely curious about is the limited colour palette approach, as I'm not certain how you could get memory savings by restricting your colours in texture. I know back in ye olden tymes it was a hardware restriction where the NES had specific colours it could display, the SEGA Master System had a different set, Windows/Apple colour profiles and so on and so forth.

    But today, aside from compression profiles for different image formats, textures basically run 256 values per channel (8 bits), and say, a .TGA using the full range it's capable of displaying vs. another .TGA that uses a limited palette still takes the same amount of memory. Will you be using an older file format (you mentioned a 64 colour .gif) for smaller texture memory cost, or force an engine colour profile through some code?
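    (For concreteness: uncompressed texture memory is just width x height x bytes per pixel, so palette size never enters into it. A quick sketch, with the 1024 size picked to match the textures discussed in this thread:)

```python
# Uncompressed texture memory depends only on dimensions and bit depth,
# not on how many distinct colors the image actually uses.
def uncompressed_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

rgb24  = uncompressed_bytes(1024, 1024, 24)  # 3 MB, full-color or 2-color alike
rgba32 = uncompressed_bytes(1024, 1024, 32)  # 4 MB once an alpha channel is added
```

    A 64-colour image saved as a 24-bit .TGA costs exactly as much as a full-range one; only a genuinely indexed format (like .gif's 8-bit palette mode) shrinks the file, and GPUs don't sample indexed formats directly.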
  • tisTree25
    That's something I'm wondering about, actually. I tried importing some GIFs into Unity to judge them vs. the full-color TGA textures, and there was SOME size reduction, but not as significant as when I just viewed the TGA and GIF side by side in a file manager. So I think Unity is probably converting the GIF (and for that matter the TGA) into something else, which makes sense, since it would be silly to wrap a 3D mesh in a GIF. I'm not familiar with the image formats used for 3D textures, so it may well be that limiting the colors won't be as much help as I'm hoping. In the end it's also a stylistic choice, so if the size reduction doesn't work out I may still go this route of converting GIFs to whatever Unity actually uses, but it would be nice to find a way to benefit from it too...

    Here are the size differences:

    ACCORDING TO FILE MANAGER
    Full-Color TGA -- 4097kb
    64-Color GIF ---- 235kb

    ACCORDING TO UNITY, AFTER IMPORT/MIPMAPPING
    Full-Color TGA -- 1300kb
    64-Color GIF ---- 700kb

    FORMAT AFTER IMPORT INTO UNITY
    Full-Color TGA -- RGBA Compressed DXT5
    64-Color GIF ---- RGB Compressed DXT1

    Seems like the reduced size is probably due more to the loss of the alpha channel than to any color reduction.
  • ZacD
    DXT5 and DXT1 are two common compressed texture formats that games use. Basically they're designed to heavily compress an image and work well in real time. You should be able to convert any image format to DXT5 or DXT1 in Unity, though you should just use DXT1 unless you're using the alpha channel for something. You can probably turn off the compression or use a different format if you want. Here are the formats from Unity's docs:

    http://docs.unity3d.com/ScriptReference/TextureFormat.html
  • Eric Chadwick
    We have a bit more info here too about DXT, might help
    http://wiki.polycount.com/wiki/DXT
  • tisTree25
    Thanks for all the resources guys. A lot of it is over my head, but I have tested some things out based on what I do understand.

    You were right, it seems like it doesn't matter how many colors (or how much complexity) a DXT1 file has. I imported a 24-bit 1024 TARGA black square and a 24-bit 1024 square full of multi-color noise. Unity says both are 0.7 MB. The only difference was that the colored noise took noticeably longer to import, despite both TARGAs being the same size. Converting manually with NVIDIA's DDS plugin in Photoshop had nearly identical results (assuming Unity rounds the file size up).
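    (The 0.7 mb figure falls straight out of DXT's fixed block sizes: a back-of-the-envelope sketch, assuming a full mip chain down to 1x1, which is roughly a 4/3 overhead on the base level:)

```python
# DXT compresses fixed 4x4 pixel blocks: DXT1 stores 8 bytes per block
# (4 bits/pixel), DXT5 stores 16 (8 bits/pixel). Size is therefore a pure
# function of dimensions -- image content never matters.
def dxt_size_bytes(width, height, bytes_per_block, mipmaps=True):
    size, w, h = 0, width, height
    while True:
        blocks = ((w + 3) // 4) * ((h + 3) // 4)
        size += blocks * bytes_per_block
        if not mipmaps or (w == 1 and h == 1):
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return size

dxt1 = dxt_size_bytes(1024, 1024, 8)   # ~0.67 MB with mips -> Unity's "0.7 mb"
dxt5 = dxt_size_bytes(1024, 1024, 16)  # ~1.33 MB with mips, exactly double
```

    It also backs up the alpha-channel observation: the DXT5 to DXT1 drop is a straight 2:1, and colour count never appears in the formula.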

    Importing 16-bit or 32-bit TARGAs made no difference for file size in Unity either, though the quality loss with the 16-bit file was huge.

    The other export options in the nvidia plugin seem to be mostly for specific purposes like alpha and normal maps.

    It seems like color restrictions are not going to get me anywhere with DXT files, not from a file size standpoint anyway. I haven't looked into the other Unity-supported file types yet, will do that next.

    At least I learned about DXT compression from all this. Seems pretty likely I'll end up using it, since the other options honestly don't look too promising. Looking at it more closely, without filtering and in comparison to truecolor, I kinda like how DXT compression looks. It's fuzzy, in a grainy/blocky way. Not blurry like a JPEG, like I feared.
  • marks
    DXT compression is great, especially DXT1, which is super cheap on filesize. You only ever need DXT5 if you need an 8-bit alpha channel, or DXN if you're going to store normal maps as two channels and reconstruct the third in the shader.

    Something useful to note is that DXT1 always reserves a 1-bit alpha regardless of what your actual content is, so using it for something is basically free, because it's always going to be there anyway.

    If you're targeting DX11 then BC7 compression is super good as well, but compression times can be quite long for large textures.
  • Avvi
    As for the alpha problem... remember that cutout transparency is very cheap. With big enough textures it looks like normal geometry.
    If you stay with blended alpha (transparent or fade) you can run into sorting problems.
    (There's a workaround for this in some cases: enter 'debug' inspector mode and change the render queue value to make a material always render in front of or behind other transparent materials. See: Issues with Transparency on Transparency.)

    Keep meshes small in dimensions (split if you need) and bake occlusion in Unity. The occlusion culling system is great. You can improve results with visibility volumes. Enable 'Batching Static' on objects to reduce draw calls. Unlike combining meshes manually, it properly responds to culling.
    http://blogs.unity3d.com/2013/12/26/occlusion-culling-in-unity-4-3-best-practices/

    DXT is a fixed ratio compression. The file size is always the same. There's a new thing in Unity 5.1 though: 'Crunched' format. The compression ratio is insane, at the price of quality and looong file saving time (use it after you're done with the texture).
    http://www.polycount.com/forum/showthread.php?p=2330886

    The profiler is now bundled with Unity free. Just make a test scene with abundant amounts of geometry, enable GPU profiling, and disable VSync. Set the camera in a fixed place and see what takes the most time :) The profiler is really cool and easy to use.
    Measure everything. Does occlusion help? Does alpha blending cause FPS drop? Some advice can be good but still pointless if it gains you almost nothing in your case :)
  • Thane-
    Finally an application for the Infinite Detail Engine.

    That reminds me, it's time to do my yearly checkup on that thing, with a short prayer said just before I hit the search button.