
Can someone explain this metalness/PBR thing to me?

13 Replies

  • FelixL
    cryrid wrote: »
    It combines an RGB diffuse and an RGB specular into a single image, and the metalness map can be packed in with the roughness. It may not work out well for all materials, but I can see there being a benefit for the ones it does work with.

    Yeah, but the roughness already needs to be packed with the normal map, as the two are related and need to go through the same processing.
  • BuLL
    FelixL wrote: »
    I don't see how the metalness model is an improvement over regular diff/spec/roughness in any way, both from a workflow perspective and from a memory perspective.

    In pretty much all real-world cases, you would need a metalness map, because the materials in a texture are never 100% metal. Dirty metal, partly painted metal, metal with dust on it would already require a metalness map.

    So for mixed materials you'd need:

    Oldschool diff/spec workflow: RGB diff + RGB spec
    Metalness workflow: RGB basecolor + grayscale metalness

    Since there is still no texture format that I know of that saves greyscale textures with a lower memory footprint than RGB textures, you wouldn't even save memory. This is assuming that RGB+A is the same memory footprint as two RGB as it was before.



    What am I missing? How is it advantageous?


    I understand entirely where you are coming from, m8. I had my doubts initially, but persevered and began to understand that a black and white metalness map is far less time-consuming to produce than a specular map.
    It's purely a case of practice and altering values within the albedo and gloss to produce the final material.
    I can produce a metalness map in seconds, whereas a specular would quite probably take me hours. I wouldn't get that level of realism in such a short time either, if I'm honest. For example:

    https://www.dropbox.com/s/ke42xllmjphc0my/scope1.jpg?dl=0

    30 minutes of work to get to that point. And I know there are quite a few tweaks still to be made, but I'll take 30 minutes any day of the week.
    When all is said and done, you have to put the initial time in. The benefits can be reaped though, m8. Cheers.
  • EarthQuake
    Only counting the time it takes to make the metalness map isn't a realistic look at content creation time.

    With the metalness workflow, you still need to make a specular map, however only for the metals, where the albedo acts as both the diffuse and specular map. Authoring a specular map that gives you the same amount of detail and control as the metalness workflow should take about the same amount of time.

    Authoring a full color spec map will only take significantly more time if you spend a lot of time tweaking values and adding detail and variation that don't really make sense in the first place. With most insulators there is no variance in the specular response unless the material changes, so you can get away with basically flat values, which take very little time to set up. For the asset posted above, you can get the same results just as quickly with either the metalness or the specular method.

    In either workflow with modern/pbr shaders, most of the time will generally be spent authoring the gloss/roughness map.
  • cryrid
    FelixL wrote: »
    Yeah, but the roughness already needs to be packed with the normal map, as the two are related and need to go through the same processing.

    I've never heard of a hard rule like that. I would assume the map can be packed anywhere the shader wants. In the end it's still saving a full RGB map per asset.
  • EarthQuake
    cryrid wrote: »
    I've never heard of a hard rule like that. I would assume the map can be packed anywhere the shader wants. In the end it's still saving a full RGB map per asset.

    Right, such a rule is absolutely incorrect. Roughness maps do not need to be "processed" like a normal map or anything close to that. They can be packed into any channel you please (provided your shader can read from that channel; reading from normal.a may be hard-coded into your shader, but that is a different issue entirely).

    In addition to that, packing into alpha channels should generally be avoided at all costs (unless your shader is heavy on texture calls and needs to run on mobile or other hardware that is limited by the number of textures you can load).

    The reason for this is texture compression: a 32-bit (RGB+A) image compresses to exactly twice the size of a 24-bit image with DXT. So putting a grayscale map into an alpha channel uses the same amount of VRAM as loading an extra full color 24-bit image.

    What you generally should do is pack your roughness into a single channel of a 24 bit image. For instance:
    R: roughness
    G: metalness
    B: cavity or ao

    This gives you 3 inputs for the cost of 1 alpha channel. Even if you only use 2 channels and leave the 3rd blank, your vmem is still more efficiently allocated.
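
    For what it's worth, that packing step can be done offline with a few lines of script. A minimal sketch (Python/NumPy + Pillow; the file names are just placeholders), assuming three same-sized 8-bit grayscale inputs:

        import numpy as np
        from PIL import Image

        # Load the three grayscale maps (placeholder file names).
        roughness = np.array(Image.open("roughness.png").convert("L"))
        metalness = np.array(Image.open("metalness.png").convert("L"))
        cavity    = np.array(Image.open("cavity.png").convert("L"))

        # Pack them into the R, G and B channels of a single 24-bit image.
        packed = np.dstack([roughness, metalness, cavity]).astype(np.uint8)
        Image.fromarray(packed, mode="RGB").save("rough_metal_cavity.png")

        # The shader then reads .r for roughness, .g for metalness, .b for cavity/AO.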

    Consequently, this is where we see one of the main benefits of the metalness workflow. Contrary to common misconceptions, it's not that it's required by PBR systems or that it's more accurate; it's simply a matter of optimization. Roughness, metalness and AO/cavity maps can be packed into the same texture space as a single full color spec map.

    The other primary benefit to the metalness workflow is that it acts as a safeguard to keep artists from setting up illogical materials. However, this relies on the artist understanding how metalness works; if they don't, and throw random grayscale values into the metalness map (which I see frequently), it's no more accurate than throwing random values into a spec map. Also, if you understand how the metalness workflow works (metals have full color spec, non-metals have greyscale spec in the 2-8% range, which lacks variation unless the material actually changes), you can apply the same basic principles to the specular workflow and make content just as accurate in about the same amount of time.

    The biggest con of the specular workflow is that it uses more texture memory. Both methods require artists to have some basic understanding of material properties; otherwise the system is easy to break. The biggest con of the metalness workflow is that it's not as accurate (you can't set a non-metal reflectance value other than the fixed ~4% default, or if you can, you have to load an extra input, which nullifies the texture memory savings) or as straightforward (potentially; it depends on how you prefer to work) as the specular workflow, and you may get nasty artifacts at transition points from metal to non-metal.
  • FelixL
    A roughness map basically describes micro-normals, so it definitely has to go through the same processing as the normal map. Our programmers insisted on it, and it's one of the major reasons why Ryse has such good IQ and so little noise/in-surface aliasing, as you can see for yourself in the 4K PC port. Not processing and sampling these two maps in the same pass is a good recipe for specular aliasing (or so I'm told).
  • FelixL
    As for the workflow:
    I wonder if metalness is only feasible for a material-stack/mask painting workflow like dDo or Substance Designer. But then again, it doesn't really matter which workflow you use, as you never actually work directly on the maps, just on the stack and the masks.

    If you manually create such maps in PS: what about a metal structure, say, on Mars? It has steel, maybe some brass, plus dirt, and on top a fine layer of Martian dusty soil (red). The "base map" needs to contain all these colors, and they need to blend smoothly into each other, as the red dust doesn't cover the metal 100%. The metalness map needs to match this exactly and have the same soft gradients, or it won't look right.
    I have a hard time imagining how to author something like this without a mask-based workflow.
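
    For reference, this is essentially what the stack/mask tools do under the hood: every output map is blended through the same set of coverage masks, so the basecolor and metalness gradients stay in sync by construction. A rough sketch of that idea (Python/NumPy, layer values are placeholders):

        import numpy as np

        # Per-layer material values (placeholders): basecolor RGB and metalness.
        layers = [
            {"basecolor": np.array([0.56, 0.57, 0.58]), "metalness": 1.0},  # steel
            {"basecolor": np.array([0.91, 0.78, 0.42]), "metalness": 1.0},  # brass
            {"basecolor": np.array([0.20, 0.15, 0.12]), "metalness": 0.0},  # dirt
            {"basecolor": np.array([0.45, 0.24, 0.15]), "metalness": 0.0},  # red dust
        ]

        def composite(masks):
            """masks: one (H, W) coverage map in 0..1 per layer, bottom layer first."""
            h, w = masks[0].shape
            basecolor = np.zeros((h, w, 3))
            metalness = np.zeros((h, w))
            for layer, mask in zip(layers, masks):
                # The same mask drives both outputs, so soft transitions match exactly.
                basecolor = basecolor * (1 - mask[..., None]) + layer["basecolor"] * mask[..., None]
                metalness = metalness * (1 - mask) + layer["metalness"] * mask
            return basecolor, metalness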
  • EarthQuake
    FelixL wrote: »
    A roughness map basically describes micro-normals, so it definitely has to go through the same processing as the normal map. Our programmers insisted on it, and it's one of the major reasons why Ryse has such good IQ and so little noise/in-surface aliasing, as you can see for yourself in the 4K PC port. Not processing and sampling these two maps in the same pass is a good recipe for specular aliasing (or so I'm told).

    I think you may have misunderstood what your programmers told you.

    Gloss/roughness maps do define the microsurface of the material, this is correct; they even do a theoretically similar job to the normal map in that they define qualities of the surface. However, they do not need to be "processed" or even packed with the normal map.

    What you're talking about may be temporal anti-aliasing, which applies to the entire frame and does not require the roughness maps to be processed in a special way.

    Anyway, there are clear and public examples where this is certainly not a requirement, namely Unreal 4 and Toolbag 2. If this is something required for CryEngine, that would be an engine-specific technical detail and not typical of PBR systems in general. If that is the case I would love to learn more about it, so if you can provide a detailed technical explanation or a link to a whitepaper, that would be awesome.
  • FelixL
    No, I am talking about shimmering and in-surface aliasing, which is produced in the shader as a result of the mip-mapping process and cannot be properly dealt with by post-process anti-aliasing.

    In any given frame, pretty much all textures are displayed at higher mips, unless you're in a close-up shot or zooming into something. The smaller the mip maps get, the more the noisy areas in a normal map get blurred and averaged. This causes a perceived lower roughness in these areas when objects are displayed at higher mips. In textures where you have smooth and noisy parts in both the normal and roughness only a few pixels apart, the result is specular aliasing or shimmering, as the values inevitably bleed into each other.

    The solution is to counteract the loss of normal variance in higher mips by boosting the roughness by the same amount. For this, the two maps need to go through the same processing. Processing in this case means converting the source TIF files into Direct3D .dds files.

    This may not be exactly what's happening, but it's what I understood from talks with programmers and from this presentation, starting on page 15: http://crytek.com/download/2014_03_25_CRYENGINE_GDC_Schultz.pdf

    Believe me, I wish it was different as it causes us a bunch of problems with various DCC packages. But I've seen the difference in quality first hand and it's too significant to ignore.
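
    For anyone curious, here's a very rough sketch of the general idea (boosting the roughness mips based on how much the averaged normals shorten). The exact CryEngine processing isn't public, so the formula and names here are just an approximation:

        import numpy as np

        def downsample2x(img):
            # Simple 2x2 box filter; assumes even dimensions.
            return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                           img[0::2, 1::2] + img[1::2, 1::2])

        def build_mips(normal, roughness, levels):
            """normal: (H, W, 3) unit vectors, roughness: (H, W) in 0..1."""
            mips = [(normal, roughness)]
            n, r = normal, roughness
            for _ in range(levels):
                n = downsample2x(n)          # averaged (and therefore shortened) normals
                r = downsample2x(r)
                length = np.linalg.norm(n, axis=-1).clip(1e-4, 1.0)
                # One common approximation: treat the shortening of the averaged
                # normal as extra variance and fold it into the roughness.
                extra_variance = (1.0 - length) / length
                r = np.sqrt(np.clip(r * r + extra_variance, 0.0, 1.0))
                n = n / length[..., None]    # renormalize for the next level
                mips.append((n, r))
            return mips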
  • EarthQuake
    FelixL wrote: »
    No, I am talking about shimmering and in-surface aliasing, which is produced in the shader as a result of the mip-mapping process and cannot be properly dealt with by post-process anti-aliasing.


    Thanks for the detailed writeup, I'll check out that paper.
  • almighty_gir
    That's an extreme example of an engine-specific implementation. It's something that a couple of places are using, but it's by no means something you have to do or need to do.

    All they're really doing is converting normal detail into roughness at further distances from the camera to help break up the roughness distribution.
  • oblomov
    This is known as LEAN mapping, and there are a couple of variants with similar names. The first reference I can remember for this is from Firaxis: http://www.csee.umbc.edu/~olano/papers/lean/. Ryse seems to be based on this: http://advances.realtimerendering.com/s2012/Ubisoft/Rock-Solid%20Shading.pdf
    As gir wrote, it only forces you to build the roughness mipmap pyramid based on your normal map, but it does not require you to pack them in the same texture. Maybe this was a requirement of the tools FelixL's team used to edit their game, but there is no technical reason I can see to have the normal packed with the roughness in the shipping game. This is confirmed by the Ryse presentation FelixL posted, which only says the normal has to come with the roughness in the "source assets" (presumably for pipeline reasons).
  • FelixL
    Yes, my bad, I didn't mean to say that this is a general requirement.
    As for it having to be in the same texture: even if it doesn't make a difference for sampling, if you change either the normal or the roughness, you have to change the other as well. So for version-control reasons in production alone, it makes sense to have them in one file.


    Still, assuming you pack the metalness into an RGB texture with roughness and cavity, you haven't saved any memory: basecolor, normal, metal/rough/cavity vs. diffuse, spec, normal/roughness.

    Cavity is not needed for the latter (I believe it applies to the specular term only, so it would just be multiplied over the spec map).
    I believe you could get away with DXT1 for the metal/rough/cavity at the expense of the cavity's compression quality, but still... it eludes me what people find so comfortable/intuitive about this workflow. I'll just go ahead and try it myself soon.
  • oblomov
    Sorry to be nitpicky, but yes, you did save memory in that case, since the normal+roughness texture in the spec/gloss setup would need to be in a format which encodes alpha, which is usually bigger than one with only RGB channels (e.g. DXT5 is twice as large as DXT1).
    Regarding the question of which workflow is simpler/more intuitive, I suppose it's really a case of personal taste. Some people find metalness/roughness simpler (I do), some don't.
  • FelixL
    That's true, but the same goes for the packed metal/rough/cavity. I doubt the quality would be acceptable if you pack them into a DXT1 texture. Since roughness is pretty much the most important texture in PBR shading, it deserves "special treatment", so it makes sense for it to go into an alpha channel. Does UE4 really use regular DXT for this?
  • EarthQuake
    If you pack roughness into the normal map's alpha you double the size of that texture with DXT5, so yeah, you do save memory.

    Spec workflow:
    Normal + roughness in A
    Full color spec
    Albedo
    = the size of 4x full color/24-bit maps; if you pack cavity or AO into another alpha channel, that's the same memory usage as 5x 24-bit maps.

    Metalness workflow:
    Albedo
    Normal
    Metalness, roughness, cavity/AO packed into one map
    = 3x 24-bit maps.
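
    To put rough numbers on that, here's a quick back-of-the-envelope sketch (assuming 2048x2048 textures, 4 bits per texel for DXT1 and 8 for DXT5, ignoring mip chains):

        def dxt_bytes(width, height, bits_per_texel):
            return width * height * bits_per_texel // 8

        w = h = 2048
        dxt1 = dxt_bytes(w, h, 4)   # RGB, 4 bits per texel
        dxt5 = dxt_bytes(w, h, 8)   # RGB + A, 8 bits per texel

        # Spec workflow: albedo (DXT1) + spec (DXT1) + normal with roughness in alpha (DXT5)
        spec_workflow = dxt1 + dxt1 + dxt5

        # Metalness workflow: albedo (DXT1) + normal (DXT1) + packed rough/metal/AO (DXT1)
        metal_workflow = dxt1 + dxt1 + dxt1

        print(spec_workflow / 2**20, "MB vs", metal_workflow / 2**20, "MB")
        # -> 8.0 MB vs 6.0 MB at 2048x2048, i.e. 4x vs 3x the cost of a single DXT1 map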

    The memory usage is unquestionably lower with the metalness workflow when compressed with standard DXT methods; whether you can live with the quality of packing into an RGB channel vs. an alpha channel is subjective and probably even varies per asset. Personally, I prefer the full color spec workflow, as I'm more familiar with it and you don't get those artifacts along transition points, but the metalness workflow is more efficient in terms of VRAM.

    In terms of what texture compression UE4 uses, it's very flexible; you can pick the compression type you need.
  • oblomov
    I don't know for sure what UE4 offers, but I suppose that there are different possible choices.
    In any case, whether the ~6 bits of precision in the green channel of DXT compression is "acceptable" probably depends on your expectations and on the type of assets. If you are trying to render fine details of fingerprints on a smooth surface like a car hood for example, that probably won't cut it, but it's probably more than enough on your average brick wall. In any case, modern graphics hardware offer other choices of texture formats (BC7, ASTC, ...) that are far less destructive than DXT1-5, so you may not need to put the roughness in an alpha channel to increase its bit-depth.
  • EarthQuake
    FelixL wrote: »
    No, I am talking about shimmering and in-surface aliasing, which is produced in the shader as a result of the mip-mapping process and cannot be properly dealt with by post-process anti-aliasing.

    Just to follow up on this a bit. Post-process AA (like FXAA) does indeed have issues, but what I am talking about is temporal AA, or temporal super-sampling, where you average multiple frames together for AA. This is what TB2 does, and from my understanding, what Epic has done with UE4 as well.

    http://advances.realtimerendering.com/s2014/index.html#_HIGH-QUALITY_TEMPORAL_SUPERSAMPLING

    There's an interesting paper on it and a video there.

    more:
    http://www.geforce.com/hardware/technology/txaa/technology
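
    Very roughly, the core of the idea is just accumulating sub-pixel-jittered frames into a history buffer over time. A placeholder sketch of that accumulation (real implementations also need reprojection and history clamping, which this skips entirely):

        import numpy as np

        HISTORY_WEIGHT = 0.9   # how much of the accumulated history to keep each frame
        JITTER = [(-0.25, 0.25), (0.25, 0.25), (0.25, -0.25), (-0.25, -0.25)]

        def temporal_accumulate(render_frame, num_frames):
            """render_frame(jitter) -> (H, W, 3) image rendered with a sub-pixel offset."""
            history = None
            for i in range(num_frames):
                frame = render_frame(JITTER[i % len(JITTER)])
                if history is None:
                    history = frame
                else:
                    # Exponential moving average of jittered frames = super-sampling over time.
                    history = HISTORY_WEIGHT * history + (1.0 - HISTORY_WEIGHT) * frame
            return history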
  • BuLL
    EarthQuake wrote: »
    Only counting the time it takes to make the metalness map isn't a realistic look at content creation time.

    With the metalness workflow, you still need to make a specular map, however, only for metals, the albedo acts as both the diffuse and specular map.

    I'm assuming what you are saying, in a totally misleading and confusing way btw, is that you should use reflectance values for metals in the albedo of a map, if that particular piece is metallic? As in...

    http://i.imgur.com/k9jKjiW.png
  • EarthQuake
    Yeah, that's pretty much it. I wrote a new tutorial that covers this in more detail:

    http://www.marmoset.co/toolbag/learn/pbr-conversion
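
    For reference, the core of that conversion is pretty simple. A minimal sketch assuming the common ~4% dielectric reflectance default (see the tutorial above for the full details):

        import numpy as np

        DIELECTRIC_SPEC = 0.04   # common default reflectance for non-metals (~4%)

        def metalness_to_spec(albedo, metalness):
            """albedo: (H, W, 3) in 0..1, metalness: (H, W) in 0..1.
            Returns (diffuse, specular) maps for a spec/gloss style shader."""
            m = metalness[..., None]
            diffuse = albedo * (1.0 - m)                          # metals get no diffuse
            specular = DIELECTRIC_SPEC * (1.0 - m) + albedo * m   # metals get colored spec
            return diffuse, specular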