
Modelling, texel density and Unreal Engine

hghtch node
Hi,
I want to be sure how to handle this, as I just came across a whole lot of material on texel-density-conscious UV packing that confused me, especially since it's fairly recent information (2018): https://www.youtube.com/watch?v=5e6zvJqVqlA
As far as I understand, UE handles its textures via texture streaming, which only streams in the resolutions required to display the needed LOD. Doesn't that make manual texel density considerations largely obsolete?

With UE, I just need to make sure that the biggest prop has a texel density that is acceptable for my needs (or higher), and the rest can be adjusted in engine to match that density.

To give an example: I have a hand-painted soda can prop on a 4k texture, and a cupboard on a 4k texture. Naturally, the can will have a much higher texel density than the cupboard. But if the cupboard looks fine at 4k, I can just downsample the can to match the cupboard's native texel density, or, if 4k is too much for my needs entirely, downsample both to a floor level that is still acceptable and takes resources into account.
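
To sanity-check my own math, this is the arithmetic I have in mind (just a sketch; square textures, uniform UV coverage, and made-up surface sizes assumed):

```python
# Back-of-envelope texel density check (illustrative numbers; assumes
# square textures and uniform UV coverage over the prop's surface).

def texel_density(texture_px: int, uv_world_size_m: float) -> float:
    """Texels per meter: texture edge length divided by the world-space
    extent the UV layout spans."""
    return texture_px / uv_world_size_m

can = texel_density(4096, 0.25)      # soda can, UVs span ~0.25 m -> 16384 px/m
cupboard = texel_density(4096, 2.0)  # cupboard, UVs span ~2 m    ->  2048 px/m

# To match the cupboard's density, the can only needs:
needed_can_px = cupboard * 0.25      # = 512 -> a 512x512 texture would suffice
print(can, cupboard, needed_can_px)
```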

A video I saw suggested authoring the can texture a lot smaller than the big-prop texture, manually making sure the UV islands have the same texel density relative to their textures as the islands of all other props sharing the same environment. But I guess I can forget about that when using UE.

Did I get this all right?

Because I'm a little worried that I'll have to redo a bunch of stuff later. At the moment I have things using 4k tiling textures that end up with texel densities of 5k+/m, while I have props with much lower texel density that use packed and painted textures.

Would it make sense to use 8k for those props, so that I later have fewer restrictions from the lowest common denominator? I'd like to aim for 4k/m at production level, so that with newer GPU generations I can utilize the increasing VRAM without having to re-author anything.

I'd be happy if somebody could verify that I got this right, or point out mistakes in my reasoning.

Thanks in advance!




Replies

  • Obscura
    Obscura grand marshal polycounter
    A lot of people tend to overthink and overrate this topic, especially nowadays. Long story short: when you determine texel density, you assume some worst case scenario, where the asset is as close to the viewer as it can get (sometimes they can't get close), and you calculate the maximum resolution / density from that. Texture streaming will stream in the mip level needed, based on some settings and the current screen size taken. As you get closer, it streams in a lower mip level / higher-res version, and eventually you reach the highest res.

    5k/m sounds like a lot, by the way. 1k/m means roughly 1 pixel per millimeter, which sounds a lot more reasonable. 4k per meter would mean one pixel every quarter millimeter. That is absurd. It would only make sense if the camera gets really close to the ground, like in a toy game or something.

    The virtual texturing feature is great, check it out. In theory you can use much higher-res maps with less memory usage. It also compresses useless pixels out, so the memory size of a not fully filled texture is noticeably smaller than the non-virtual equivalent. Tiling textures come out the same though...

    Also, good luck relying on the streaming system. Once it reaches the streaming pool limit, or gets close to it, mip levels will be all over the place. This happens with both virtual and non-virtual texture streaming on low-end hardware. I've also noticed that it very often doesn't "unstream" textures and mip levels that are no longer needed. Honestly, this happens almost all the time. So if you have a level with a bunch of 8k textures and you get close to all of them even once, they are basically stuck in video memory, and you end up with an unrealistic amount of video memory used. Unfortunately it was always like this, so maybe it's the intended behaviour? I don't know.
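
    To put numbers on that (rough conversion, taking 1k = 1024 px):

    ```python
    # Texels-per-meter converted to pixels-per-millimeter (1k = 1024 px).
    for density in (1024, 2048, 4096, 5120):
        px_per_mm = density / 1000.0
        print(f"{density} px/m -> {px_per_mm:.2f} px/mm "
              f"({1.0 / px_per_mm:.2f} mm per texel)")
    # 1024 px/m ~= 1 px/mm; 4096 px/m ~= 4 px/mm (one texel every quarter mm).
    ```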
  • hghtch
    hghtch node
    Thanks for the reply!
    I already took a sneak peek at virtual textures, but I wasn't sure whether sacrificing shader simplicity to gain memory would be a worthwhile trade. I really don't have a clue how the two costs relate to each other. But now that I'm somewhat deeper into the matter and doing my memory calculations, I can see my potential VRAM melting away, so I guess it could really be worth it.

    I haven't yet tested how far I can downsample the resolution before I lose the detail I'm after. But yes, 5k is over the top. It's only that way because the asset uses a 4k tiling texture that has to tile that way for the content of the texture to make sense. I could swap this for a 2k texture, but I don't know whether 2.5k/m would already remove detail in close-up views. I might sound like a silly noob (likely I am one), but the moment I saw the detail of 4k/m on a model, it blew my mind. I know games also look great at 2k/m, but I don't think 2 px/mm can convey the ridiculous amount of crispness I'm after.
    I made an Eternit roof tile texture with tiny specks of moss that are simply <1 mm, and I needed to do part of the coloring at 4k to get the detail I was after. And even though I'm aware this might be absurd, I'd still like to try to preserve this level of detail in the game. Retina displays are also absurd, yet I love them. 2 px/mm might not be noticeable, but 4 px/mm sure is (even if only when standing close to a wall or looking through a scope). So I want to at least give it a try.
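
    To check whether a ~1 mm moss speck actually survives, I'm counting texels per feature like this (my own rule of thumb, assuming you want at least ~2 px across the smallest feature):

    ```python
    # How many texels land on a feature of a given size at a given density?
    def px_on_feature(density_px_per_m: float, feature_mm: float) -> float:
        return density_px_per_m * (feature_mm / 1000.0)

    for density in (2048, 2560, 4096):
        print(f"{density} px/m -> {px_on_feature(density, 1.0):.1f} px "
              f"across a 1 mm speck")
    # 2048 px/m gives ~2 px per speck, right at the limit; 4096 px/m gives
    # ~4 px, which would be why the moss only reads at the higher density.
    ```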

    To preserve memory, I'd refrain from using packed unique textures as much as possible, and instead use a combination of hi-res tiling textures blended by comparatively low-res masks. On objects that by nature have a high vertex count (like a teddy bear), I'd use vertex painting instead of the masks (though I'm not sure whether that is more efficient). Only when neither of these blending methods is applicable, e.g. when I need too many unique layers that aren't used elsewhere, would I use packed unique textures. This way I can achieve a very high level of detail with only a few textures.
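
    My rough VRAM reasoning behind this, with assumed block-compression rates (BC1 ~0.5 byte/texel, BC7 ~1 byte/texel, full mip chain adds ~33%; the prop counts are made up, illustrative only):

    ```python
    # Rough GPU memory estimate for compressed textures (assumed rates:
    # BC1 ~0.5 byte/texel, BC7 ~1 byte/texel; mip chain adds ~33%).
    def tex_mb(size_px: int, bytes_per_texel: float) -> float:
        return size_px * size_px * bytes_per_texel * (4 / 3) / (1024 ** 2)

    # 10 props, each with a unique 4k packed set (two BC7 4k maps per prop):
    unique = 10 * 2 * tex_mb(4096, 1.0)

    # Same 10 props sharing two 4k tiling sets, plus a unique 512 BC1 mask each:
    shared = 2 * 2 * tex_mb(4096, 1.0) + 10 * tex_mb(512, 0.5)

    print(f"unique packing: {unique:.0f} MB, tiling + masks: {shared:.0f} MB")
    # ~427 MB vs ~87 MB with these made-up numbers.
    ```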

    Now, I don't know how far I can take this, but I saw a "max resolution" setting where the texture in the asset window simply gets downsampled. I assume this happens before anything enters the streaming system, so that even if 4k/m is unreachable, I can just set the max resolution for all textures to 2k instead of 4k and everything will be fine. Then I can use mip levels to balance out mismatches in texel density. In the end I'll have to work with what's possible and try to squeeze out what's in it. If 4k/m is impossible, I'll have to make cuts.
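
    The way I picture what such a cap does (each dropped top mip halves the edge resolution, halves the texel density, and cuts memory to roughly a quarter), as a sketch:

    ```python
    # Effect of capping/dropping top mip levels on density and memory.
    base_px, base_density = 4096, 4096.0  # 4k texture authored at 4k px/m
    for dropped in range(3):
        px = base_px >> dropped           # edge resolution halves per mip
        density = base_density / 2 ** dropped
        rel_mem = 0.25 ** dropped         # texel count quarters per mip
        print(f"drop {dropped}: {px} px, {density:.0f} px/m, "
              f"~{rel_mem:.0%} of base memory")
    ```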

    If I'm understanding you correctly, it's a balancing act between source resolution, being able to cater to low-end hardware, and making use of high-end hardware resources. Is that the reason you can often get separate texture packs (I remember there was an HD pack for BF3)? On low-end systems the streaming system can't break as easily because the source textures are lower-res, while on a high-end system you can use the high-res source files. I assume this "max resolution" setting is not something you can change at runtime, so the game ships with whatever settings you made and sticks with them.

    Anyway, I guess the guideline of authoring textures at as much resolution as possible and doing the rest in-engine is right. I just got really confused when I heard so much about texel density and the importance of authoring with it in mind. I guess that's something to consider when working at a studio with an in-house engine.

    I will take a deeper look into virtual texture streaming. Thanks a lot for the input!



