
PS4 and Gamma Correction, sRGB

littleclaude (quad damage)
Hello

I was watching through the PS4 conference, which all looked fantastic, but I noticed that not many of the games appear to be working in sRGB, so the blacks are coming out very crushed and the highlights very bleached.

I passed a couple of shots from Watch Dogs through Nuke as an example just to see how they look in sRGB, and you can see they need dialling down quite a lot as they're way too bright. More or less all the games, apart from the old-man tech demo, need adjustment.


I thought all the next-gen games would be working in sRGB, so is there a reason why developers are choosing not to? The reason I ask is that I was going to teach some lessons on gamma correction, which affects textures, lighting and reflections, but if no one is doing it then I should explain that to the students.


Watch_Dogs.jpg

Here are some tutorials on gamma correction

http://forums.newtek.com/showthread.php?102397-Video-The-Beginners-Explanation-of-Gamma-Correction-amp-Linear-Workflow&highlight=linear%20workflow

http://galannicolas.com/mediawiki-1.13.3/index.php?title=Color_Space_101

Notice the greater dynamic range in the blacks and the highlights in the sRGB version. I will make some files in real time this weekend to show you what I mean.

Capture.jpg

Replies

  • Andreas (polycounter lvl 11)
    Hang on, are we supposed to want the image produced with sRGB? I'd rather have the one on the left. (??)
  • JamesWild (polycounter lvl 8)
    Always been curious how gamma correction came to be; why doesn't all software work in linear space with monitors/cameras/scanners trying to be as linear as possible? (doing the gamma correction in firmware if necessary) Seems like it'd be a lot simpler. Gamma correction is like including the size of the room a song was recorded in and the type of mic used as tags so your MP3 player can equalize it automatically.

    I also agree that the one on the left looks considerably better.
  • marks (greentooth)
    How can you *POSSIBLY* know what's going on inside their rendering pipeline?!
  • Andreas (polycounter lvl 11)
    marks wrote: »
    How can you *POSSIBLY* know what's going on inside their rendering pipeline?!

    It's next-gen time mate, you'll find pages and pages of pure conjecture and personal opinion on PC over the next few months; best settle into it and be nice about it. :):thumbup:
  • JordanW (polycounter lvl 19)
    I think you're being misled by video/streaming compression, which tends to kill the blacks/shadows of an image.
  • MM (polycounter lvl 17)
    Huh? Why would you prefer adopting sRGB over RGB? sRGB is inferior.

    A wide-gamut RGB space covers more colors than sRGB and is therefore superior.

    I hope wide-gamut RGB becomes the new standard in both software and hardware.

    this:
    http://en.wikipedia.org/wiki/Wide-gamut_RGB_color_space
  • mdeforge (polycounter lvl 14)
    Depends on the viewing device, the rendering pipeline, and texturing process. Haven't done much with sRGB for game work, but I pay attention to a linear workflow when rendering stuff in Mental Ray.
  • stabbington (polycounter lvl 10)
    All my experience of LWF is from TV and film, so I can't say for sure with games, but I'd presume the final output of any game will still be 8-bit sRGB or YUV or the like, as that's the output colour space monitors and TVs work in, and I'd wager what you've done in Nuke is double gamma correct it, as you've taken a grab from an sRGB/YUV video or screenshot?

    The engine itself could well be working in linear, especially with all those HDRs and tone-mapping whizz-bangs going on, but in the end it's still being displayed on a non-linear output device, so screengrabbing that will only give you the crushed-down 8-bit channels resulting at render, which is why linear correction blows out the highs and shows the lows clamped like that.
  • littleclaude (quad damage)
    Andreas wrote: »
    Hang on, are we supposed to want the image produced with sRGB? I'd rather have the one on the left. (??)

    No, you should calibrate your lighting for sRGB so it matches how your eyes see. In this case the image needs its brightness taken down, which would then help the colour range so the blacks come out better instead of being crushed as they are at the moment.
  • Two Listen (polycount sponsor)
    Pretty sure stabbington nailed it. The images you see through your browser are already sRGB (though they don't have to be, since most common browsers these days do support embedded color profiles for images - Opera, IE, Chrome, Firefox, Safari).

    And as MM said, why would you intentionally work in sRGB? It's a limited color palette. Yes, it's the default color space you'll see on most monitors and non-color-managed browsers, and it can be good to convert images to sRGB before posting them online, but that isn't quite as necessary these days, and even if it were, I wouldn't work in it.
  • IchII3D (polycounter lvl 12)
    JamesWild wrote: »
    Always been curious how gamma correction came to be; why doesn't all software work in linear space with monitors/cameras/scanners trying to be as linear as possible? (doing the gamma correction in firmware if necessary) Seems like it'd be a lot simpler. Gamma correction is like including the size of the room a song was recorded in and the type of mic used as tags so your MP3 player can equalize it automatically.

    I also agree that the one on the left looks considerably better.

    I could be wrong, but I think it's actually got something to do with the way your eye works... I think...
  • rube (polycounter lvl 17)
    From what I understand, it's to compensate for the loss of input energy vs output light on a monitor, which is more extreme at the darker end of the scale. I.e. as you linearly increase the voltage going to the monitor it only gets fractionally brighter at low levels, then starts to climb quickly. I don't know how bad displays are with that these days; it may now just be something held over because it's always been there.
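
    A rough way to see that in numbers (just a sketch assuming the common power-law model with an exponent around 2.2, which is only an approximation of real display response):

    ```python
    # Crude power-law model of a display's signal -> light output response.
    # Real displays (and the sRGB curve) differ a bit, but the shape is similar.
    def display_brightness(signal, gamma=2.2):
        """signal in 0..1 -> relative light output in 0..1"""
        return signal ** gamma

    for s in (0.1, 0.25, 0.5, 0.75, 1.0):
        print(f"signal {s:.2f} -> brightness {display_brightness(s):.3f}")

    # At half the input signal the display only emits roughly 22% of full
    # brightness, which is why the dark end of the scale is so squashed.
    ```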
  • JamesWild (polycounter lvl 8)
    Yeah, screens are nonlinear, and so are data sources like cameras and scanners, like I said. What I don't understand is why we leave it up to every single piece of software in the equation to sort out, not the actual device that's non-linear. I guess to make absolute maximum use of bit depth, but it still seems backwards to me to have every single program in the chain need to be gamma aware (converting to linear space and back to perform operations) rather than have the devices that are nonlinear fix it themselves.
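
    For what it's worth, the conversion each gamma-aware program ends up doing looks roughly like this (a minimal sketch of the standard sRGB transfer function, values normalised to 0..1):

    ```python
    # sRGB <-> linear conversions, per the piecewise sRGB transfer function.
    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    # Decode to linear, do the maths there, then re-encode for the display.
    mid_grey = 0.5
    print(srgb_to_linear(mid_grey))                  # ~0.214 in linear light
    print(linear_to_srgb(srgb_to_linear(mid_grey)))  # round-trips back to 0.5
    ```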
  • littleclaude (quad damage)
    The performance and programmability of modern GPUs allow highly realistic lighting and shading to be achieved in real time. However, a subtle nonlinear property of almost every device that captures or displays digital images necessitates careful processing of textures and frame buffers to ensure that all this lighting and shading is computed and displayed correctly. Proper gamma correction is probably the easiest, most inexpensive, and most widely applicable technique for improving image quality in real-time applications.

    http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html

    Also

    http://docs.unity3d.com/Documentation/Manual/LinearLighting.html

    LinearLighting-0.jpg

    LinearLighting-1.jpg
  • JamesWild (polycounter lvl 8)
    Yes, I know. The data isn't linear space, and the output isn't linear, so we have to transform it into linear space to perform operations and then to a different nonlinear space to put it on the screen properly. So why don't screens take in linear space data and cameras/scanners output linear space data? Surely they would know best, rather than passing an approximation exponent of their linearity. I don't understand, at all.
  • littleclaude (quad damage)
    I will look into it soon and do some experiments in UDK.

    Are you sitting down? It all begins with the RGB values in a texture, to be continued :)
  • stabbington (polycounter lvl 10)
    As far as I've managed to understand it over time, it's to do with the monitor's relationship to our eye and some entrenched approaches to image display formats. It's all bloody confusing stuff, though, so I hope I can make this vaguely understandable (and hopefully also get it vaguely correct!)

    Most digital cameras and render engines do actually capture their data in linear space, but our eyes just don't perceive light in a linear fashion. If twice the amount of photons hit a camera sensor, then twice the signal is output. Not so our eyes!

    Our optical system actually applies a gamma curve when viewing the 'linear data' present in the world, so that twice the photons (input) doesn't actually result in twice the signal information to the brain (output). As a result, humans have a much greater sensitivity in perceiving midtones than very darks or very lights.

    Conversely, there's some mathy physics stuff about how voltage (input) and intensity (output) have a nonlinear relationship, which results in monitors also inheriting a gamma curve in how images display. This means that when viewing linear data from a camera or renderer, a pixel that was captured twice as bright (input) doesn't display twice as bright (output).

    By some stroke of luck, CRTs naturally had almost exactly the inverse of the gamma curve of our eyes. This meant that the two non-linear curves of eyes and monitor cancelled each other out - the wrongness of how a monitor shows an image is corrected by the wrongness of how our eyes view things!

    The gamma correction used in our files, encoders and broadcast signals today extends the idea of this CRT gamma curve by optimising images to give the best perceptual result in as few bits per channel as possible. As far as I can tell, this is often how 8-bit images, broadcast signals and encoders/compressors save a lot of their space - throwing away all those darks and lights we can't actually perceive and optimising based on the nonlinear behaviour of our eyesight, using the nonlinear output of the monitor.
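
    A quick sketch of how lopsided that allocation is (assuming the standard sRGB decode formula; just a throwaway script, not from any engine or tool):

    ```python
    # Count how many of the 256 8-bit sRGB code values describe the darker
    # half of linear light. The answer is the large majority of them.
    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    dark_codes = sum(1 for i in range(256) if srgb_to_linear(i / 255) < 0.5)
    print(dark_codes, "of 256 codes sit below 50% linear luminance")  # ~188
    ```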

    Linear formats like floating-point .exr's, .hdr's, .raw's and the like can hold all that extra data, but you simply can't 'see' that it's there until you start mucking about in post with things like exposure. Gamma correction is basically baking in a corrected curve and discarding any unneeded data, much like the way mp3s discard audio frequencies our ears aren't particularly sensitive to.

    sRGB, as an example, offers a predictable and standardised way for non-floating-point luminance data to be stored and optimised using a curve that opposes our eye's curve in an efficient way on as many kinds of monitors as possible.

    Now, I think one reason this has started to become important recently is that we've started to discover that the mathematics of lighting and rendering in engines comes out consistently and predictably wrong if we don't work our lighting and textures in linear space.

    Working with images and renders that have already had some kind of luminance or gamma correction applied only results in incorrect values - 1 + 1 no longer equals 2; instead 1 + 1 = 1 + (1 multiplied by whatever gamma correction is taking place).
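
    A tiny numerical example of that (a sketch, using a plain gamma 2.2 power as a stand-in for whatever correction is baked into the image):

    ```python
    # Adding two lights: correct (sum in linear, then encode for display)
    # vs. naive (summing values that already have gamma baked in).
    GAMMA = 2.2

    def encode(linear):            # linear light -> gamma-encoded value
        return linear ** (1 / GAMMA)

    a = b = 0.2                    # two lights at 20% linear intensity each
    correct = encode(a + b)        # ~0.66
    naive = encode(a) + encode(b)  # ~0.96 -- the naive sum blows out
    print(correct, naive)
    ```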

    Working in linear means we're not fighting to compensate for a weird artificial curve or value correction added on top of every number and calculation. You can work with a straight 0-1 correlation for lighting, shader maths works more predictably, and you avoid odd side effects like lighting inexplicably blowing out, because without gamma correction baked into your inputs, twice the light intensity will actually result in twice the luminance.

    What working in a linear workflow does give you is more flexibility (greater range in image-format information for post-processing, increased dynamic range for lighting and reflection, easier plate matching for film VFX) and more predictable maths (lighting maths like the inverse-square law actually behaves the way the formula says it should, etc.), but at the end of the day, to view your render or game on a TV or monitor in a way that our eyes predictably understand, and in a manageable file size, it all has to be gamma corrected in the end.

    I'd bet whatever engine Watch Dogs uses calculates all the lighting and shaders internally in linear, like pretty much all 3D programs and engines do, and then outputs each frame to the screen gamma corrected. What happens in between, I couldn't possibly guess, but looking at the quality of the work I'm sure they're already using the best possible approaches!
  • littleclaude (quad damage)
    Thank you stabbington

    The Beginners Explanation of Gamma Correction and Linear Workflow - YouTube: http://www.youtube.com/watch?v=JJwoPhvQZBE
  • stabbington (polycounter lvl 10)
    Great video, cheers! I indirectly worked with Matt and Newtek when LWF was first implemented in Lightwave, which probably explains our similar thoughts on this. Except he explains it far better!

    I should probably clarify: what I think is happening in your original post is that you've taken an already gamma-corrected (not to mention probably compressed) image and applied gamma correction again. Even if Nuke de-gamma'd it on load, you won't be able to get back all that dynamic range that was present in-engine just by arbitrarily changing the display colour space and playing with the levels - I'm presuming any extra luminance info gets thrown away as soon as the image is sent to the screen by the game engine, if not before. On top of that, any video or image compression from wherever your screengrab came from would further clamp and break the dynamic range.
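
    In code terms, that double correction is roughly this (a sketch with a simple 2.2 power; the real engine output and Nuke's colour management are more involved than this):

    ```python
    # An 8-bit screenshot already has display gamma baked in. Treating it as
    # linear and encoding it again lifts everything towards white.
    GAMMA = 2.2

    def encode(linear):
        return linear ** (1 / GAMMA)

    scene_value = 0.2                      # 20% linear light in the engine
    screenshot = encode(scene_value)       # ~0.48, what the grab actually stores
    double_corrected = encode(screenshot)  # ~0.72, washed out and too bright
    print(screenshot, double_corrected)
    ```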

    It might look like it works on some images with mostly midtones and no major lights or darks but I'd think that you can only really do your initial test with a true floating point image format output straight out of the engine.
  • JacqueChoi (polycounter)
    I was under the impression it was a choice, to emulate the aesthetics of 35mm film.


    Hence the lens flares, depth of field, motion blur, and chromatic aberration.



    I believe Half-Life 2 was the first game to really play with dynamic ranges and clamping values, which greatly added to the gameplay.
  • Kwramm (interpolator)
    I can't name any clients, projects or specific platforms, but we're already working on projects for the next console generation. From big-name clients we can see they're moving to physically based shaders (similar to the one Kodde made) and linear textures. One of them even supplied a whole linear reference library of a few gigs, and there are a lot of things you're just not allowed to do in Photoshop any more when creating textures for those games.

    If you want to teach students for the next generation of games then I would definitely prepare them for working with linear sources, gamma and the whole process. Physically based shaders also seem to go very well with all this.
  • Kurt Russell Fan Club (polycounter lvl 9)
    Naughty Dog had a lot of discussion of gamma correction in some presentations in the past few years.

    Most licensed engines now support gamma correction pipelines, but it's generally not well supported for mobile versions since post-processing is so slow. UDK and Unity both support it for PC and not for mobile.
  • littleclaude (quad damage)
    Thanks everyone for your interesting thoughts, I have a lot to chew on, and I will show my results soon. Work have just given me a new Alienware laptop with a 2.7 GHz CPU, 16 GB of RAM and a great GeForce card, so I am really looking forward to getting my teeth into lots of new toys - lots of DirectX 11 stuff coming up :)


    ...such a shame the first thing I need to do is finish a research paper in Word! What a waste of all that power :)
  • littleclaude (quad damage)
    The incorrect way:
    - Textures are normally painted in Photoshop (JPEG/TGA for example), which means they carry a gamma of 2.2 (sRGB).
    - Lighting is always done in linear.
    - Nothing converts the textures to linear, so the lighting maths runs on gamma-encoded values.

    The correct way:
    - Textures are normally painted in Photoshop (JPEG/TGA for example) and have to be converted to linear (saving in EXR format does this automatically).
    - Lighting is always done in linear.
    - A gamma 2.2 (sRGB) filter is applied at the end of the engine's output.

    This way everything stays linear for the light/texture calculations, and the "human eye filter" of gamma 2.2 (sRGB) is only put on at the end. A rough sketch of that flow is below.
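
    Something like this in rough Python terms (just a sketch of the idea, using a plain 2.2 power as the gamma filter; real engines use the exact sRGB curve and do this per channel on the GPU):

    ```python
    GAMMA = 2.2

    def texture_to_linear(texel):       # undo the 2.2 encode baked into the painted texture
        return texel ** GAMMA

    def output_gamma(linear_value):     # the "human eye filter" applied at the very end
        return linear_value ** (1 / GAMMA)

    albedo = 0.5                        # a texel as painted in Photoshop (gamma encoded)
    light = 0.8                         # light intensity, already linear

    lit = texture_to_linear(albedo) * light   # all shading maths happens in linear space
    final_pixel = output_gamma(lit)           # gamma applied once, at output (~0.45)
    print(final_pixel)
    ```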