Packing all textures into one map. Pros and cons?

Ludvigs polycounter lvl 2
So, I've had some free time on my hands, and wandered into that there rabbit hole named texturing.

I was originally just experimenting with non-square UVs when it occurred to me that, since I'm doing a 4:1 texture anyway, I might benefit from packing it all into one texture.

i.e. normal map at the top, then albedo, occlusion/roughness/metal, and emissive at the bottom (as an example).

Example map. (There's no emissive on this one, but the example still applies.) This is for a sword, which tends not to fit in a 1:1 space, if anyone wondered.


Now, can this be done in a feasible manner? I imagine I would somehow need to use 4 different UV sets on the model. I'm not sure how to implement this, or which engines would support it, but I still find it interesting. The thought here is mainly to make it cost-effective to put everything into one large texture instead of 3-4 smaller ones. The usefulness of this would likely depend on engine and asset, but having to load one texture instead of three should have benefits in quite a few cases.
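A rough sketch of what the shader-side remapping might look like, in Python for illustration (the band order and the `atlas_uv` helper are made up for this post, not engine code): each material slot would sample the same 4:1 map, but with its UV shifted into the right horizontal band first.

```python
# Hypothetical helper: remap a unit-square UV into one horizontal band of a
# vertically stacked 4:1 atlas, as a shader would do before each lookup.
def atlas_uv(u, v, band, num_bands=4):
    """Map (u, v) in [0,1]^2 into band `band` (0 = top of the atlas)."""
    return (u, (band + v) / num_bands)

# Band 0 = normal, 1 = albedo, 2 = occlusion/roughness/metal, 3 = emissive
# (the layout suggested above).
print(atlas_uv(0.5, 0.5, 1))  # -> (0.5, 0.375), i.e. inside the albedo band
```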

I imagine, since I have not heard of anyone doing this, that there are some serious drawbacks, so I'd like some critique on this idea, for educational purposes :open_mouth: I'll be running practical tests myself, but I thought if anyone already knew why this is silly, I'd save myself the trouble.

So have at it, point out where this will go wrong  :#

Replies

  • Sigmafie polycounter lvl 4
    I think you need to understand some fundamentals about textures first, as this is not feasible (in the same texture file). If I am incorrect, I ask anyone else to correct me and follow up.

    1. Normal maps: composed of 3 greyscale channel images (RGB) combined to generate the lighting information to be interpreted. They cannot be combined with other textures (AFAIK); each channel is necessary, and each greyscale image in each channel is necessary.

    2. You can combine roughness, metallic, occlusion and emissive into a single texture file by placing the greyscale image for each into its own channel (three in RGB, leaving one for emissive/opacity if you use the alpha channel as well). This will compact the textures, freeing up some resources in your final render package (typically speaking). The Unreal wiki, for example, explains how to accomplish this, and there are YouTube tutorials on how to do it in other programs such as Substance Painter.

    3. Albedo is a separate image file, similar to normal maps: it requires all three colour channels in a single image file to be correct. It cannot be combined with other images (AFAIK).

    EDIT: I misunderstood the question. See below for better responses/discussion.
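    Point 2 above can be sketched in a few lines. Here pure-Python lists stand in for greyscale image channels, and `pack_orm` is a hypothetical name for illustration, not any tool's API:

```python
# Interleave three greyscale maps into per-pixel (R, G, B) tuples, with an
# optional fourth map going into the alpha channel.
def pack_orm(occlusion, roughness, metallic, emissive=None):
    if emissive is None:
        return list(zip(occlusion, roughness, metallic))
    return list(zip(occlusion, roughness, metallic, emissive))

occlusion = [255, 200]  # made-up pixel values
roughness = [128, 64]
metallic  = [0, 255]
print(pack_orm(occlusion, roughness, metallic))  # -> [(255, 128, 0), (200, 64, 255)]
```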
  • SnowInChina polycounter lvl 8
    I am not a programmer, but I know that you can set the tiling for parts of a map via script, so it should be possible to output different parts of one map into different slots (normal, emissive, etc.). This would most likely need some wizard-level programming skills though; correct me if I am wrong.
  • Axi5 polycounter lvl 4
    This is totally possible, but it makes no difference, since the shader would still have to do the same texture lookup 4 times, just with different input UV coords.

    My advice: don't try to make this work.

    It's more work for a graphics programmer (are those textures hard-coded in that order, or could they vary? What if you don't have an emissive channel: do you just have 3 rows of textures? The shader has to account for this, which is wasted time, and if you're checking this on the GPU it's another performance cost), for probably the same performance results, maybe slightly better, maybe much worse. There's also more data loss and artifacting, since your textures and UVs are squashed. It's harder to author for the texture artist too.

    The only benefit I can see here is memory, and honestly this should be a last-minute hack for when you're running out of memory on a low-end device like mobile.
  • Ludvigs polycounter lvl 2
    Axi5 said:
    This is totally possible, but it makes no difference, since the shader would still have to do the same texture lookup 4 times, just with different input UV coords.

    My advice: don't try to make this work.

    It's more work for a graphics programmer (are those textures hard-coded in that order, or could they vary? What if you don't have an emissive channel: do you just have 3 rows of textures? The shader has to account for this, which is wasted time, and if you're checking this on the GPU it's another performance cost), for probably the same performance results, maybe slightly better, maybe much worse. There's also more data loss and artifacting, since your textures and UVs are squashed. It's harder to author for the texture artist too.

    The only benefit I can see here is memory, and honestly this should be a last-minute hack for when you're running out of memory on a low-end device like mobile.
    Ah, yes. I had figured it would be something like this. I got it to work by panning the texture in UE4 to where I needed it to be. I'm not sure how the engine handles that, but at least I don't need 4 different materials for each UV set.
    With more data loss and artifacts due to the fact your textures and UV's are squashed. It's harder to author for the texture artist too.
    How is my texture squashed, though, when I've rendered the maps at 4096x1024 and pasted them together in Photoshop? I don't notice a difference in artifacting, but that may be down to the nature of the asset. I know UE4 shows them as squashed, but I thought that was just a display thing; it would be the same for all non-tiling textures :o It's a one-minute job to paste them together anyway, so I'm not bothered about that.

    There is probably a brainfart in my logic here, but I would still think that having to load only one texture is a good thing (is memory the only gain?). Of course, I am no programmer, as you say. I am sure I will realise why the idea is crap when I have stewed on it long enough :p

    Another snag is that I need different compression for normal, albedo and occlusion/rough/metal, which I solved by cloning the texture, which I suspect leaves me at the same draw calls anyway. *sigh* I don't suppose finding a way to compress the same texture 3 different ways will make a difference.

    Here is my Material in any case.


  • Axi5 polycounter lvl 4
    All of what I said above is still valid. One other thing you would have to worry a lot about is mip-mapping: as you get further from the camera, it'll blend other parts of the texture in. This'll be even more noticeable with a very rough material, I believe. (I'm mistaken on this last part!)

    Mip-mapping should still be a concern though!

    EDIT: To go into more detail on the artifacts/stretching I meant.

    So you've rendered at 4096*1024, which is fine, but on a complex asset there might not be enough vertical data to keep up with your horizontal quality. It'll look a bit odd, especially in your normal map if you have some strong gradients, since there won't be enough pixels to describe the surface.

    Edit edit:
    Understood now that they're not squashed textures; ignore that last part. Sorry, I've been a bit occupied while reading this.
  • Ludvigs polycounter lvl 2
    Ah, the mip-mapping. Forgot about that. 

    Axi5 said:
    Edit edit:
    Understood now that they're not squashed textures; ignore that last part. Sorry, I've been a bit occupied while reading this.
    No worries, I was a bit unclear, as I found this difficult to wrap my head around, let alone explain :p I also managed to call it a non-tiling texture instead of a non-uniform one :#

    So, to sum up (feel free to correct!):

    Cons:
    • The shader will still use the same amount of draw calls, since it is still looking the map up 4 times.
      (Is it not one draw call per texture? Does this have to do with panning it? I'll have to dig further into draw call operations.)
    • There may not be enough vertical data to support normal map gradients, due to the lack of vertical/horizontal pixels.
      (This is a general drawback of non-uniform textures, then. I suppose the ideal thing would be to split up more UV shells, though sometimes one would want something like a sword blade as one shell. Any extra pixels in the U or V direction would then remain unused, unless one put more assets into the same texture.)
    • Mip-maps may bleed over between the packed textures.
      (I admit I am a bit blurry on the nitty-gritty of mip-map operations. More research to do.)
    • Wasted space if there is no emissive texture.

    Pros:
    • A possibility of slightly improved performance, depending on engine and shader.
    • Saving texture memory.

    Thanks a lot for the replies to this. Done a good learning today!



  • Noors polycounter lvl 8
    "Mathematical maps" like normal maps or gloss should not be read in the same gamma space as albedo.
    Pretty sure there would be 4 lookups anyway.
    Normal maps may also benefit from a better compression algorithm, like 3Dc.
    You'd need some more instructions in your shader to define the number of rows/columns, and/or to add new UV channels == more data.
    It's harder to maintain for the artist, though it could be packed only during the build.
    And it's not saving any memory.
    So basically I don't think it's a good idea.

  • poopipe polycounter lvl 7
    Interesting idea but probably not going to work as a general solution.

    Some thoughts... 

    It doesn't affect drawcalls; those are per material (if we keep things simple).

    If you only use one map then you'll reduce texture lookups. This is good, but it only works if you stick to one UV channel.

    Shifting/scaling UVs is cheap, but not free.

    Multiple UV channels will cost you a texture sample per channel and significantly increase mesh memory costs.

    Compression, as you've discovered, is an issue WRT normal maps. There's also linear/gamma space to consider, as there's liable to be variation between maps.

    I don't think you're saving any memory, and you could certainly end up wasting a fair amount.

    The most efficient setup is what people usually do (surprisingly): pack roughness, metallic and ambient occlusion together, leave the full-colour maps as their own images, and pack anything remaining into an extra one.

    If you're interested in experimenting, I'd be curious to see what happens if you try to fit two greyscale maps into a single channel, e.g. AO from 0-0.8 and metallic from 0.8-1.0.
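    A sketch of how that range-split could encode and decode, assuming a surface is either a dielectric (where AO matters) or a metal; the 0.8 split point and the decode rules are just one possible reading of the idea:

```python
AO_MAX = 0.8  # values below this encode AO; values above encode metallic

def encode(ao, metallic):
    if metallic > 0.0:
        return AO_MAX + (1.0 - AO_MAX) * metallic  # metal: AO is dropped
    return ao * AO_MAX                             # dielectric: scaled AO

def decode(value):
    """Return (ao, metallic) recovered from the single channel value."""
    if value > AO_MAX:
        return 1.0, (value - AO_MAX) / (1.0 - AO_MAX)
    return value / AO_MAX, 0.0

print(decode(encode(0.5, 0.0)))  # -> (0.5, 0.0)
print(decode(encode(0.0, 1.0)))  # -> (1.0, 1.0)
```

The obvious trade-off is that a metal pixel loses its AO entirely, and compression noise near the 0.8 boundary could flip a pixel between the two interpretations.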
  • Axi5 polycounter lvl 4
    Okay now I'm actually paying proper attention.

    poopipe said:
    If you only use one map then you'll reduce texture lookups. This is good, but it only works if you stick to one UV channel.
    This is incorrect. You're accessing the same texture with a different UV coordinate each time, and each of those counts as a separate texture lookup.

    The only thing you save by doing this is probably copy time to the GPU.

    The memory would be the same as four 4096*1024 maps, since that's all they are.

    You're introducing extra instructions to "calculate" which pixel to sample.

    Poopipe's other concerns were also valid, such as colour transformations. You could do this manually in the shader as well: i.e. you could treat the texture as linear, then for the base colour portion raise it to the power of 1/2.2 to bring it into sRGB range. You're basically fighting against the built-in system there, though.
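    The 1/2.2 trick mentioned there, as a one-liner (a rough approximation for illustration; real sRGB encoding uses a piecewise curve, so this is not exact):

```python
def linear_to_srgb_approx(x):
    # Gamma-encode a linear value with the common 1/2.2 approximation.
    return x ** (1.0 / 2.2)

print(round(linear_to_srgb_approx(0.5), 2))  # -> 0.73
```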

    I still think mip maps are going to be your biggest enemy with this. You'll start having your normal map blur into your roughness, into your base colour, and so on.
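    A toy illustration of that bleed, with made-up pixel values and a 1-D column for simplicity: each mip level halves the resolution by averaging neighbours, so after enough levels the averaging crosses the band boundary.

```python
def mip_level(column):
    """One step of a 2:1 box-filter downsample, as mip generation does."""
    return [(a + b) / 2 for a, b in zip(column[0::2], column[1::2])]

normal_band = [128, 128, 128, 128]  # hypothetical normal-map pixels
albedo_band = [200, 200, 200, 200]  # hypothetical base-colour pixels
column = normal_band + albedo_band

mip1 = mip_level(column)  # bands still separate
mip2 = mip_level(mip1)
mip3 = mip_level(mip2)
print(mip3)  # -> [164.0]: normal and albedo data averaged together
```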

    You also limit yourself to only being able to tile horizontally.

    When I think it through, the only benefit I can see to this is the copy time to the GPU, and that's it.

    It'll cost the same in memory and cost slightly more in instructions.
  • poopipe polycounter lvl 7
    I did add the caveat about multiple uv channels.

     Correct me if I'm wrong, but if you were to handle the UV offset/scaling after sampling, then surely you'd only need to look at the texture once? It'd be the same principle as scrolling UVs.




  • Axi5 polycounter lvl 4
    poopipe said:
    I did add the caveat about multiple uv channels.

     Correct me if I'm wrong, but if you were to handle the UV offset/scaling after sampling, then surely you'd only need to look at the texture once? It'd be the same principle as scrolling UVs.

    Yeah you're right about the multiple UV channels too.

    Scrolling UVs are a bit of a different use case, because they're still only sampling the texture once per frame; it's just that the UV coordinate has changed.

    In this case you're sampling the same texture 4 times in a single frame.

    Edit:
    To be clear: in shading languages there's a common function called tex2D, which does your texture lookup/sample. It's essentially the same thing as the texture sampler node in Unreal, since that's based on tex2D. It only accepts a sample buffer (such as a 2D texture) and a coordinate (a 2D texture needs a 2D coordinate). It can do some other stuff (involving finding partial derivatives for a quad), but that's above my typical use case (and comprehension) of it. The long and short of it is: the GPU is very simple; it can only look in one place, at one time, in one instruction.

    You're actually doing millions of texture samples per frame if you render a textured quad close to the camera: at HD, for example, you're rendering 1920 * 1080 pixels with a texture lookup per pixel (a bit of a simplification, but that's the basic gist). This is one of the reasons fillrate is (was) such a great thing to reduce.
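    Spelling out that arithmetic: one lookup per pixel for a full-screen quad at 1080p, times the four samples per pixel the packed atlas would need.

```python
width, height = 1920, 1080
lookups_per_frame = width * height  # one sample per pixel
print(lookups_per_frame)            # -> 2073600
print(4 * lookups_per_frame)        # -> 8294400 with 4 atlas samples per pixel
```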

    FYI I'm not an expert, I've dabbled a lot and written shaders and work closely with people responsible for our graphics pipeline at work but I am not one of those responsible for it.
  • poopipe polycounter lvl 7
    Yep, I get it (and feel slightly ashamed, given what I do for a living).


  • Axi5 polycounter lvl 4
    poopipe said:
    Yep, I get it (and feel slightly ashamed, given what I do for a living).


    Don't be. Artists don't often get involved in this, and I had to think for a moment about the texture panner example.
  • Ludvigs polycounter lvl 2
    Ah yes, I have the gist of it now. The conclusion is that this won't really work, as I suspected. And now I know why :# Thanks for the input guys, really appreciate it.

    poopipe said:
    If you're interested in experimenting, I'd be curious to see what happens if you try to fit two greyscale maps into a single channel, e.g. AO from 0-0.8 and metallic from 0.8-1.0.
    Now that is a thing! It gave me an idea that could be interesting, if it works. Say I pack my maps like this: (excuse the handwriting; I couldn't be bothered to actually use the text tool)
     


  • Eric Chadwick
    Some info that might help on the technique in that last reply:
    http://wiki.polycount.com/wiki/ChannelPacking
  • poopipe polycounter lvl 7
    The potential killer for packing multiple maps into a single channel is going to be compression (even if you discount the tricky nature of extracting the information when you want to use it). If you're compressing down to 5 or 6 bits per channel, you're going to have significantly fewer values to play with than it looks like you have in the original images.
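    To put numbers on that, here is a simple quantiser sketch (`quantize` is illustrative only; a real block compressor like DXT also works per 4x4 block, which makes things worse still):

```python
def quantize(value, bits):
    """Round an 8-bit value to the nearest level a `bits`-bit channel holds."""
    levels = (1 << bits) - 1
    return round(round(value / 255 * levels) / levels * 255)

# A 5-bit channel leaves only 32 distinct values out of the original 256...
print(len({quantize(v, 5) for v in range(256)}))  # -> 32
# ...so AO squeezed into [0, 0.8] of that channel gets roughly 26 of them.
```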


    Axi5 said:

    Don't be. Artists don't often get involved in this, and I had to think for a moment about the texture panner example.
    That would be fine if I were just an artist :D


  • leleuxart polycounter lvl 5
    I think channel packing in the normal map is more acceptable, but you run the risk of introducing horrible artifacts with that texture, so it's better to keep the normal map's blue channel something that resembles the red and green. Because you have to compress the texture with DXT1 (all 3 channels together at 5, 6 and 5 bits for R, G and B respectively, compared to BC5, which is 8 and 8 for only R and G, I believe?), if the blue channel is something like a random grunge texture you will get crosstalk between the channels, and your R and G will pick up some of that information. Plus it's more blocky overall.

    It also gets expensive when you do it a lot in a material, say for a terrain, as experience has shown :(
  • Ludvigs polycounter lvl 2
    Eric Chadwick said:
    Some info that might help on the technique in that last reply:
    http://wiki.polycount.com/wiki/ChannelPacking

    Oh, cheers for that. I keep forgetting that there are so many good resources around here, if one really looks.
    leleuxart said:
    I think channel packing in the normal map is more acceptable, but you run the risk of introducing horrible artifacts with that texture, so it's better to keep the normal map's blue channel something that resembles the red and green. Because you have to compress the texture with DXT1 (all 3 channels together at 5, 6 and 5 bits for R, G and B respectively, compared to BC5, which is 8 and 8 for only R and G, I believe?), if the blue channel is something like a random grunge texture you will get crosstalk between the channels, and your R and G will pick up some of that information. Plus it's more blocky overall.

    It also gets expensive when you do it a lot in a material, say for a terrain, as experience has shown :(
    Yeah, I noticed this as well. BC5 is the best compression I've seen so far, especially where banding is concerned, so at least on hero/close-up assets that's what I'll be using.

    Seemingly one can pack something into the red and blue channels with DXT5nm as well, though the artifacting issue remains.

    Well, it seems I have gone and beaten a dead horse again, but at least I learned a thing :#