I've been having this discussion with a co-worker for a while now and figured I would throw it out here:
Obviously, color maps (the pixels we want our screens to display) need to be converted to linear before the shader uses them, so when we feed them into (let's say) UE4 we check the sRGB box.
Information maps (as I call them): spec, roughness, anything that provides scalar info from 0 to 1. If we are painting them in PS, Mari, or Mudbox, we are painting what we see through the monitor, which shows us sRGB gamma-2.2 images. Are our gradients truly linear, or do these maps need reverse gamma correction applied before they go in engine (understanding that we leave the sRGB checkbox off)? Normal maps are an exception because they are baked, not painted (hopefully), but regardless: is 50% grey really 50% grey when I throw that map in a shader? In my mind it isn't, but I know there are a few people here who can clarify.
Thanks!
Replies
The data in the image is still 0-1, regardless of color space. If you paint a linear gradient in sRGB space, then import it with the sRGB option on, it's a linear gradient. It's the same math used to store it as to unpack it. If you map the raw values as stored in an sRGB image without applying that colorspace's conversion first, then no, it won't be linear.
Assuming an even gray gradient, the stored values will follow a curve approximating N^(1/2.2) (very generally speaking), but if you apply the inverse transformation to the raw data (N^2.2) you get your linear data back.
TLDR: The SRGB box is an inverse gamma correction.
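That round trip can be sketched in a few lines of Python. This uses the simple 2.2-power approximation mentioned above rather than the exact piecewise sRGB curve, which has a short linear segment near black:

```python
# Sketch of the gamma round trip: encode linear data for storage/display,
# then decode it back (which is what the sRGB checkbox does on import).
# Uses the 2.2 approximation, not the exact piecewise sRGB curve.

def encode_srgb(linear):
    """Linear light -> stored/display value, both in 0-1."""
    return linear ** (1 / 2.2)

def decode_srgb(stored):
    """Stored sRGB value -> linear light (the 'inverse gamma correction')."""
    return stored ** 2.2

mid_grey = 0.5
stored = encode_srgb(mid_grey)    # ~0.73 as stored in an sRGB image
recovered = decode_srgb(stored)   # back to ~0.5 linear
```

So 50% grey as painted on an sRGB monitor is not 50% linear light; it only becomes 0.5 again after the decode step.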
Edit: The wikipedia article on the what and why of SRGB may be useful, or at least interesting. http://en.wikipedia.org/wiki/SRGB
Also, if you are concerned about this color profile in your images, or are getting incorrect results, be sure to turn off the color profile usage in Photoshop for that document. On an open document go to Edit > Assign Profile and select "Don't Color Manage This Document".
Other than that, whether your gloss/roughness/metalness/etc. maps are in linear or sRGB space, or how you paint them, is not particularly important. What is important is that the space you preview them in while authoring matches the gamma/linear option checked on the final result.
If you're painting a gloss/roughness map, what you see in Photoshop has very little relevance. I mean you can make some basic assumptions like X value is brighter than Y, but you can't see the effect of these maps in Photoshop, so there is little need to be concerned about authoring them in the "correct" space. You should be authoring while previewing with some sort of realtime shader to see the end result. Thus, what color space they are authored or previewed in Photoshop doesn't matter.
However, for a project wide basis, you should make sure these maps are authored in the same consistent way. If one artist is painting gloss maps in sRGB and another in linear, you'll have problems when people try to create new shaders reusing those assets. So it's best to pick some standards for each map type.
Diffuse/Albedo and spec masks should be in sRGB and normal maps and spec roughness shouldn't.
EQ, that's the explanation I was using, that what mattered most is what you were seeing in engine. I was curious if there was a more scientific thinking to it.
I found this was very useful as a resource too: http://artbyplunkett.com/Unreal/unrealgamma.html
Thanks for the help!
By far the most important thing is that you aren't fucking up your texture inputs by doing unnecessary gamma conversions at any point in your pipeline, cos that breaks everything. What most people do is just author everything in sRGB (gamma space) (except normals which are always linear) and then either in the shader when the texture is loaded OR via hardware support, convert those textures from gamma-space to linear-space (because in general all shader calcs need to be done in linear otherwise everything breaks) and then the renderer processes everything in linear, and converts the final frame back to sRGB before it writes it to the screen.
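The pipeline described above can be sketched on single values. The function and variable names here are illustrative, not any engine's actual API, and the 2.2 power stands in for the exact sRGB curve:

```python
# Illustrative linear-workflow pipeline: decode sRGB textures to linear,
# do all shading math in linear, encode the final frame back to sRGB.

def srgb_to_linear(v):
    return v ** 2.2        # approximate sRGB decode (hardware or shader step)

def linear_to_srgb(v):
    return v ** (1 / 2.2)  # approximate sRGB encode before writing to screen

def shade(albedo_linear, light_intensity):
    # All lighting math happens in linear space; clamp for display.
    return min(albedo_linear * light_intensity, 1.0)

albedo_stored = 0.5                      # value as authored in an sRGB texture
albedo = srgb_to_linear(albedo_stored)   # ~0.218 in linear space
lit = shade(albedo, 1.5)                 # lighting calc in linear
frame = linear_to_srgb(lit)              # converted back for the display
```

Normal maps skip the first decode step entirely, since they're stored linear to begin with.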
Upon a few minutes' reflection, I'd probably say that gloss textures should absolutely be authored in linear space if possible, because god damn, the amount of artifacting I've seen with very glossy materials. Linear would help there. If you're using roughness though, sRGB all the way. It's to do with where in the histogram you need your precision, tbh.
What I am trying to get at is that even though the sRGB Photoshop color profile does indeed rely on the curve mentioned earlier, from what I understand it is also the default behavior when displaying images on any computer screen/device - so my guess would be that Photoshop displays images through this profile by default anyways, it being present in the file or not. I might be wrong though.
Now as far as texture authoring is concerned, my experience with all this is limited to UE4, which probably relies on quite a few optimizations of its own. All I can say is that things seem to behave quite well when working with PSD master files stripped of any custom profile, exported as TGA, with the sRGB option in UE turned on for color maps and off for data/value maps. I am certainly not 100% sure about all that though, so I too would appreciate any further info on the best practices for this, especially when it comes to establishing a perfectly synchronized pipeline between the Substance Painter output and UE4/Unity.
But while we're on the subject... has anyone seen the HDR monitors that are literally IN linear space and don't have/need gamma? http://www.trustedreviews.com/opinions/hdr-tv-high-dynamic-television-explained
I feel like the world is about to change yet again lol! The sweet thing is that for film, HDR TVs mean that literally what is captured by the camera on set and seen by the director during editing sessions can be displayed in your living room 1:1, with no bullshit h.264 codec washing out all your colors.
For games I'm not sure what this means yet since the only game I know of that uses ACES (the gold standard in film color space setup) is Ashes of the Singularity.
https://facepunch.com/showthread.php?t=1446269&p=50225536&viewfull=1#post50225536
If the link doesn't work properly, it's post #3644 in that thread, or just read off this screenie:
I've had a 3+ part series of articles that I'm slowly working on talking about the entire rendering pipeline and heavily analyzing gamma. I should pick up the speed on it. It seems people really need more good information.
Gamma issues have been well established for a long time in offline, but I still see a lot of experienced artists in the game industry who really struggle with it. As @artquest says, it shouldn't even be something to worry about. It should really be going on behind the scenes somewhere we don't have to deal with it. And thankfully we really don't: asset from Substance to UE4....2 roughness map settings to click and a 'yes' confirmation on the normal map import.....done.
Fiddling around in gamma space is a pain in the arse.
Secondly, if you're converting to linear in the shader, you're not going from 8 bits per colour channel to 8 bpc just in a different colorspace, you're going from 8 bpc to float, which is MUCH higher precision, so you don't really lose anything in the conversion process. In fact, your texture is the thing with the least precision in the entire process, which is why it's worth getting it right so that you're getting as much precision out of your textures as possible.
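One way to see that the decode itself loses nothing: decode all 256 possible 8-bit codes into float and check that every code still maps to a distinct value (a quick sanity check, again using the 2.2 approximation):

```python
# The sRGB->linear conversion in the shader goes from 8-bit codes to float.
# x**2.2 is strictly increasing, so all 256 input codes stay distinct:
# the precision bottleneck is the 8-bit texture, not the conversion.

codes = [n / 255 for n in range(256)]   # every possible 8-bit value, as 0-1
linear = [c ** 2.2 for c in codes]      # decoded into float linear values

distinct = len(set(linear))             # 256: no codes collapsed together
```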
All normalmaps are baked in linear space already. Colourspace is pretty much an abstract concept which is separate from your image file. Some formats (not many though I think) do store metadata in the image file itself to tell software how they should be interpreted, but the colour values in the image are just numbers. Most common formats (jpeg, png, tga etc) afaik don't reliably store colourspace / colour profile metadata and are usually assumed to be stored in sRGB. High-precision and 32-bit formats such as .hdr and .exr are commonly assumed to be stored in linear. Colourspaces do pretty much just come down to the reader knowing (or being told by you) how to interpret the raw colour data in the image.
Okay so, with gloss/roughness in particular: they are exactly the same thing, they just store the data inverted relative to each other (e.g. with glossiness, white pixels are glossy and black ones are rough, whereas with roughness maps the black pixels are glossy and the white ones rough). Now, in areas of your texture which are rough, any artifacts are going to be difficult to see, whereas in very glossy areas, texture artifacts are often quite obvious (especially because with most gloss/rough curves, even minor changes in the top 10% most glossy part of the histogram have a hugely obvious visual impact).
So really what we're doing here, is trying to use the larger part of the histogram to store the more important information in the texture (the high glossiness part). So if that data is being stored in the white part of your image, you probably want to use linear because linear affords more histogram space to bright values in 8-bit images. If that data is being stored in the black part of your image, you're better off using sRGB because sRGB gives you more precision in the darker values.
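A quick way to check that precision claim: count how many of the 256 codes in an 8-bit image land in the dark half of the range under each encoding (using the 2.2 approximation of the sRGB curve):

```python
# For 8-bit storage, count how many of the 256 codes represent linear
# values below 0.5, for straight linear storage vs sRGB-encoded storage.

codes = [n / 255 for n in range(256)]

# Linear storage: the stored value IS the linear value.
linear_dark = sum(1 for c in codes if c < 0.5)

# sRGB storage: the stored value decodes to c**2.2, so a much larger
# share of the codes ends up describing dark values.
srgb_dark = sum(1 for c in codes if c ** 2.2 < 0.5)
```

sRGB devotes noticeably more of its codes to the dark end than linear does, which is exactly why you'd pick sRGB when the important detail lives in the blacks and linear when it lives in the whites.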