
Texture workflow for RTS game

matt881 polycounter lvl 9

Hi! Lately I've been wondering how modern RTS games like Age of Empires 4 handle texture optimization. I'm familiar with the approach from 0AD, which is amazing but outdated in some aspects (like separate head meshes, or one atlas texture for all buildings per faction).

https://web.archive.org/web/20170327055932/http://trac.wildfiregames.com/wiki/ArtDesignDocument


Things that boggle my mind:


1. Player Color

Every object needs to be distinguishable between factions, so in 0AD there was a mask nested in the Diffuse texture's Alpha channel: the black parts of the mask got the player color layer multiplied in. But an Alpha channel costs about as much as a separate three-channel texture, so I would rather not use the Alpha channel and would include Player Color in another texture instead.

Cossacks 3 probably uses a separate mesh added to buildings, but for units it uses a texture mask.
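For what it's worth, the runtime math that kind of mask drives is roughly a lerp-by-mask tint. A small numpy sketch of the idea (my own illustration, not 0AD's actual shader; flip the mask if your convention is inverted):

```python
import numpy as np

def apply_player_color(albedo, mask, player_color):
    """Tint the masked areas of the albedo with the faction color.

    albedo:       (H, W, 3) floats in 0..1
    mask:         (H, W)    floats in 0..1, here 1 = player-colored area
    player_color: (3,)      floats, e.g. team red/blue
    """
    mask = mask[..., None]                                   # broadcast to (H, W, 1)
    tint = np.ones(3) * (1.0 - mask) + player_color * mask   # lerp(white, color, mask)
    return albedo * tint                                     # multiply into the diffuse

# Usage: a fully masked 0.8 grey pixel tinted by team red
albedo = np.full((4, 4, 3), 0.8)
mask = np.ones((4, 4))
print(apply_player_color(albedo, mask, np.array([1.0, 0.2, 0.2]))[0, 0])
```

The nice thing is that the mask is just one greyscale channel, so it can live wherever there's a spare channel - it doesn't have to be the Diffuse alpha.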


2. Object Color (for detail variation like hair or ethnicity)

Actually, I would stay away from adding another channel or texture only for this. I've noticed that people include texture variations in one atlas instead.

(texture by mylae https://sketchfab.com/3d-models/hafsid-horse-general-machiavello-mod-8530cf6caf234e61a71f85dec5d3a467 )


3. Transparency

I noticed that there's only a small amount of transparency in units or buildings in AoE4, so I guess it's a design choice to avoid overdraw. But some units, like the trebuchet, have netting/a sling with visible holes. Or the thatching in AoE3.

And because there's no free channel in the Albedo or Masks texture (Metallic, Roughness, Player Color), transparency could be nested in the Normal Map's last channel, like this approach (look in the comments section: https://80.lv/articles/material-design-tricks-in-ue4/ ).

Is this valid, and better than an Alpha channel in the Albedo texture?
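If I read that trick right, the shader rebuilds the normal's Z from X and Y, which frees the blue channel for opacity. A rough numpy sketch of the unpack (this is just my reading of the 80.lv comment, not AoE4's actual setup):

```python
import numpy as np

def unpack_normal_and_opacity(tex_rgb):
    """Unpack a normal map whose B channel was repurposed for opacity.

    tex_rgb: (H, W, 3) floats in 0..1
             R, G = tangent-space normal XY, B = opacity mask
    """
    xy = tex_rgb[..., :2] * 2.0 - 1.0                               # remap 0..1 -> -1..1
    z = np.sqrt(np.clip(1.0 - np.sum(xy * xy, axis=-1), 0.0, 1.0))  # z = sqrt(1 - x^2 - y^2)
    normal = np.concatenate([xy, z[..., None]], axis=-1)            # rebuilt unit-length normal
    opacity = tex_rgb[..., 2]                                       # alpha-test against this
    return normal, opacity
```

One catch I can see: a two-channel normal would normally go into a two-channel compression format (BC5), and once blue carries opacity you're back to a three-channel format, so the saving isn't entirely free.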

4. Units/buildings variety/trimsheets

Units in AoE4 can be upgraded, so I guess the four skins are included in one texture? There's also a separate texture for common props, I guess? Also, weapons, shields, helmets etc. could be modular and shared across all units in the game (it would save texture space).


To sum up, I would go with this approach (small packing sketch below):


- Albedo (with baked AO)

- Normal Map using only the R and G channels (the remaining channel - B - used for Transparency)

- Masks (Metallic, Roughness and Player Color)


Is this good in terms of optimization, or is it not worth the work because there are easier and better approaches? What do you think?
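To make the proposed packing concrete, here's a minimal offline packing sketch with Pillow (the file names and channel order are just my assumption, not any engine's convention):

```python
from PIL import Image

def pack_masks(metallic_path, roughness_path, playercolor_path, out_path):
    """Pack three greyscale maps into one RGB texture:
    R = Metallic, G = Roughness, B = Player Color mask."""
    r = Image.open(metallic_path).convert("L")
    g = Image.open(roughness_path).convert("L")
    b = Image.open(playercolor_path).convert("L")
    Image.merge("RGB", (r, g, b)).save(out_path)

def pack_normal_with_opacity(normal_path, opacity_path, out_path):
    """Keep the normal map's R and G, replace B with the opacity mask."""
    nr, ng, _ = Image.open(normal_path).convert("RGB").split()
    opacity = Image.open(opacity_path).convert("L")
    Image.merge("RGB", (nr, ng, opacity)).save(out_path)

# pack_masks("unit_metal.png", "unit_rough.png", "unit_pcmask.png", "unit_masks.png")
# pack_normal_with_opacity("unit_normal.png", "unit_opacity.png", "unit_normal_packed.png")
```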

Replies

  • poopipe grand marshal polycounter

    In my experience it's almost never a good idea to muck around with normal maps - just leave them be.


    You're thinking about the right things but you can't make the best decision for your project without analysing your project.

    e.g. when you've profiled the game it may emerge that it's desirable to reduce texture reads at the expense of memory (in which case you should pack the alpha into one of the other maps) - it might equally be the opposite case.

    if you're in Unreal you don't have the luxury of changing your mind after you've profiled because it makes you pack textures manually - fortunately it has really good texture streaming so it's not really a big deal if you use a lot of texture memory. I'd probably suggest you pack the alpha into the masks texture just for convenience in that case.


    Atlasing is generally a good idea if you are likely to see (mostly) the whole atlas at the same time - it doesn't make sense to atlas textures that belong to units from different armies together, since it's highly likely you'll only ever see half the texture at any given moment, but it does make sense to atlas your goblin archer textures, since they'll hang around in a little group most of the time.

    On the other hand - if you're forcing your atlas to be resident in memory (eg, an atlas of projectiles, terrain decals or something else ubiquitous) it doesn't matter how they're arranged.


    TLDR - it depends

  • Alex_J grand marshal polycounter

    That sort of profiling you typically wouldn't be able to do until late into production?

    If that is true, is there any way to manage the art production in the meantime? Like maybe you just keep separate maps/materials for every little thing?


    That's what I've done, but I'm just winging it. No idea what professionals do.

  • poopipe grand marshal polycounter

    yeah - you don't really know what you're dealing with until it's fairly late in production and if you're packing manually it's almost certainly too late to do anything about it when it becomes an obvious problem

    There are a few things you can do to mitigate the problem - using material functions to unpack your textures can insulate your materials against this sort of change. Similarly, using packing nodes in Designer allows you to make global changes which you can batch export - this reduces the amount of manual work involved significantly, but you're still faced with the problem of importing all your new files (they may have different names / may be a different number of files) and cleaning up the ensuing mess.
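    As a toy example of that kind of batch step outside Designer: if the channel layout lives in one place, a project-wide repack becomes a single script run (the file naming here is made up, purely for illustration):

```python
from pathlib import Path
from PIL import Image

# Hypothetical layout definition: which greyscale map goes into R, G, B.
# Changing this tuple and re-running repacks the whole project in one go.
CHANNEL_LAYOUT = ("metallic", "roughness", "opacity")

def repack_folder(src_dir, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for metal in Path(src_dir).glob("*_metallic.png"):
        stem = metal.name[: -len("_metallic.png")]
        bands = tuple(
            Image.open(metal.with_name(f"{stem}_{suffix}.png")).convert("L")
            for suffix in CHANNEL_LAYOUT
        )
        Image.merge("RGB", bands).save(out / f"{stem}_masks.png")
```

    It obviously doesn't solve the re-import/renaming mess in the engine, which is the annoying part.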



    wrt Unreal

    You could absolutely build tools to manage all this, but that would require that you predicted the problem and built the tools before you started shoving content into your project - there's a good chance these tools would unstreamline the import process which is a bit sad.

    In practice - just doing what everyone else does with their packing (i.e. Basecolor, Normal, Metallic/Roughness/AmbientOcclusion/Opacity (optional)) is the most sensible approach for 99% of the materials in a project. You can specialise for landscapes/other weird shit as required.

    Your main real enemies are texture size and quantity - which can be resolved by politely reminding people of budgets (or hitting them with sticks), and are relatively easily handled using batch processes/device profiles.

  • sprunghunt polycounter

    You absolutely can plan performance in advance. The trick is to approach it like an engineer: set up a test level with test assets that test for the thing you want to profile, then use the performance numbers you get from that to inform your decisions.

    For example, if you want to see the effect of atlasing on performance, you can set up two different sets of assets using very basic geometry and basic flat-color textures, lay them out in large numbers in some test levels, and run profiling tools on those.

  • poopipe grand marshal polycounter

    That shows you the effect of doing something - it doesn't tell you which thing to do when your level runs like crap.

    Since all optimisations are some sort of compromise you need to know what's making your level slow to know which optimisation to make.

    e.g. atlasing will reduce drawcalls but requires an increase in texture size (for equivalent fidelity) that can have an impact on streaming flexibility, static memory usage (and in stupid cases texture read times). Atlasing isn't a great example since it's almost always a good idea, but you get my point.


    It's true that you can detect this stuff earlier if you profile regularly and cross-reference historic results, but you have to do it in your actual level (or a level that closely represents your actual levels).

  • sprunghunt polycounter

    No, you can absolutely plan performance in this way. A level is just a collection of objects being drawn at any one time. If you work out the cost of drawing a bunch of objects, you can determine how your objects should be made by experimenting with different ways of doing it.

    I've done it. I've had projects where we didn't optimize at all. We planned to hit a performance target and did it. You don't need to work out why your level is running badly if it never runs badly.

  • poopipe grand marshal polycounter

    Perhaps I'm just jaded...

    That's obviously the correct way to work, I've just never encountered a game team that was willing to compromise their vision early enough in the process to make it actually happen.

  • sprunghunt polycounter

    Well it's not about compromising the vision. That's a bad way to phrase that. It's about doing tests to discover the limits of the engine/platform. These limits are going to exist whenever you make a game.

    By establishing the limits at the start of production you can tailor your solution to your needs instead of having to come up with a band-aid solution later.

    If you optimize only after you've created all your art then you have an uncontrolled risk. You don't know how long you're going to take to optimize. It could cost more, and take more time, than you've budgeted for.

  • poopipe grand marshal polycounter

    Perhaps I should have said "commit to" rather than "compromise".

    What you're describing is the ideal - I just haven't seen it happen like that in practice


    The main point is that at the time you're making these initial decisions the game is not designed yet - the camera isn't where it will be later on, the number of enemies you throw at the player hasn't been defined, you don't know how much memory code needs for their stuff, how big a level is, and so on.

    In essence you simply don't have all the information you need to make optimal choices and are really just making educated guesses. Usually your guess will be good enough but sometimes it won't and then you have a problem to solve.


    The approach I've adopted over the years is to try and build pipelines that support cheap, large scale changes so when the worst case happens we have a way to recover.

  • okidoki polycounter lvl 2

    I guess this is called experience.. if a team starts to make a game, all these things have to be developed.. actually you can see it throughout the history of game studios: they did it this way.. starting by blocking out things.. trying different movements and controls.. testing the abilities of the engine (hardware and software), investigating what is possible or too expensive.. (just as an example, Lara Croft didn't have her famous ponytail in the first game.. and there were also no torches for her..)

    And if you make too much.. then you have to optimize... I just remember.... ah yes.. here.. listen to the next sentence at (pos 1549.. put it into the url..)

    Or simply: Rome wasn't built in one day..

  • sprunghunt polycounter

    Yes, you should do that sort of performance work as the design is being developed - the way they describe it in the video.

    What I see a lot is guesswork, and that causes problems. Instead of setting up a test level to see how many polygons you can actually fit in one view, people just go off estimates - some of these estimates are based on real experience, but some of them are based on wishful thinking. It's a very unreliable way to work. If you develop test environments for performance you can get exact figures, which are much more useful than vague guesses.
