Hello
I was watching through the PS4 conference, which all looked fantastic, but I noticed that not many of the games are working in sRGB, so the blacks are coming out very crushed and the highlights very bleached.
I passed a couple of shots from Watch Dogs as an example through Nuke just to see how it looks in sRGB, and you can see it needs dialling down quite a lot as it's way too bright. More or less all the games apart from the tech demo of the old man need adjustment.
I thought all the next-gen games would be working in sRGB, but is there a reason why developers are choosing not to? The reason I ask is that I was going to teach some lessons on gamma correction at uhanimation, and this will affect textures, lighting and reflections - but if no one is doing it then I should explain this to the students.
Here are some tutorials on gamma correction
http://forums.newtek.com/showthread.php?102397-Video-The-Beginners-Explanation-of-Gamma-Correction-amp-Linear-Workflow&highlight=linear%20workflow
http://galannicolas.com/mediawiki-1.13.3/index.php?title=Color_Space_101
Notice the extra dynamic range in the blacks and the highlights in the sRGB version. I will make some files in real time this weekend to show you what I mean.
Replies
I also agree that the one on the left looks considerably better.
It's next-gen time mate, you'll find pages and pages of pure conjecture and personal opinion on PC over the next few months; best settle into it and be nice about it. :thumbup:
RGB is a larger color space than sRGB and therefore superior.
I hope wide-gamut RGB becomes the new standard in both software and hardware.
this:
http://en.wikipedia.org/wiki/Wide-gamut_RGB_color_space
The engine itself could well be working in linear, especially with all those HDRs and tone-mapping whizz-bangs going on, but in the end it's still being displayed on a non-linear output device, so screengrabbing that will only give you the crushed-down 8-bit channels that result at render, which is why applying a linear correction blows out the highs and shows the lows clamped like that.
No, you should calibrate your lighting to sRGB so you see how your eyes see. In this case the image needs to be taken down in brightness; that would then help with the colour range, so the blacks come out better, as they are being crushed at the moment.
And as MM said, why would you intentionally work in sRGB? It's a limited color palette. Yeah, it was the default color space you'd see on most monitors/non-color-managed browsers, and it could be good to convert images to sRGB before posting them online, but that isn't quite as necessary these days, and even if it is, I wouldn't work in it.
I could be wrong, but I think it's actually got something to do with the way your eye works... I think...
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html
Also
http://docs.unity3d.com/Documentation/Manual/LinearLighting.html
Are you sitting down? It all begins with the RGB values in a texture. To be continued...
Most digital cameras and render engines do actually capture their data in linear space, but our eyes just don't perceive light in a linear fashion. If twice the number of photons hit a camera sensor, then twice the signal is output. Not so our eyes!
Our optical system actually applies a gamma curve when viewing the 'linear data' present in the world, so that twice the photons (input) doesn't actually result in twice the signal information to the brain (output). As a result, humans have a much greater sensitivity in perceiving midtones than very darks or very lights.
Conversely, there's some mathy physics stuff about how voltage (input) and intensity (output) have a nonlinear relationship which results in monitors also inheriting a gamma curve in regards to how images display. This means viewing linear data from a camera or renderer also results in a pixel that was captured twice as bright (input), not displaying twice as bright (output).
By some stroke of luck, CRTs naturally had almost exactly the inverse of the gamma curve of our eyes. This meant that the two non-linear curves of eyes and monitor cancelled each other out - the wrongness of how a monitor shows an image is corrected by the wrongness of how our eyes view things!
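To put some numbers on that, here's a toy illustration using a plain 2.2 power law (not the exact curve of any real eye or monitor, just the shape of the idea):

[code]
# Toy sketch of the two opposing curves described above, using a plain
# 2.2 power law (the real sRGB curve is slightly different; see further down).

def encode_gamma(linear, gamma=2.2):
    """Bend a linear light value (0-1) into the non-linear space a display expects."""
    return linear ** (1.0 / gamma)

def decode_gamma(encoded, gamma=2.2):
    """What the monitor effectively does: turn the signal back into light intensity."""
    return encoded ** gamma

dim, bright = 0.2, 0.4   # 'bright' has twice the photons of 'dim'

# The encoded signal is NOT twice as big, which loosely mirrors how our
# eyes compress brightness:
print(encode_gamma(dim), encode_gamma(bright))   # ~0.48 vs ~0.66

# Push those through the display's curve and you land back on the original
# linear intensities - the two non-linear curves cancel out:
print(decode_gamma(encode_gamma(dim)), decode_gamma(encode_gamma(bright)))  # ~0.2 vs ~0.4
[/code]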
The gamma correction used in our files, encoders and broadcast signals today extends the idea of this CRT gamma curve by optimising images to give the best perceptual result in as few bits per channel as possible. As far as I can tell, this is often how 8-bit images, broadcast signals and encoders/compressors save a lot of their space - throwing away all those darks and lights we can't actually perceive and optimising for the nonlinear behaviour of our eyesight, using the nonlinear output of the monitor.
Linear formats like floating-point .exr, .hdr and .raw files can hold all that extra data, but you simply can't 'see' that it's there until you start mucking about in post with things like exposure. Gamma correction is basically baking in a corrected curve and discarding any unneeded data, much like the way mp3s discard audio frequencies our ears aren't particularly sensitive to.
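To make the space-saving point concrete, here's a small toy comparison (a made-up dark value and a simple 2.2 curve) of how a dark tone survives being squeezed into 8 bits with and without a gamma curve baked in:

[code]
# Toy example: why baking a gamma curve in before quantising to 8 bits
# preserves the darks. The value and the 2.2 curve are illustrative only.

def to_8bit(x):
    return round(max(0.0, min(1.0, x)) * 255)

dark = 0.002  # a very dark linear light value

# Stored linearly in 8 bits: it lands on code 1 of 255, and everything
# nearby collapses onto the same handful of codes (visible banding).
linear_roundtrip = to_8bit(dark) / 255

# Stored with a 2.2 curve baked in: the darks get far more of the 0-255 range.
gamma_roundtrip = (to_8bit(dark ** (1 / 2.2)) / 255) ** 2.2

print(linear_roundtrip)  # ~0.0039 - nearly double the original value
print(gamma_roundtrip)   # ~0.0020 - much closer to the original 0.002
[/code]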
sRGB, as an example, offers a predictable and standardised way for non-floating-point luminance data to be stored and optimised using a curve that opposes our eye's curve in an efficient way on as many kinds of monitors as possible.
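For anyone curious, the published sRGB transfer functions aren't a pure power curve; they have a short linear toe near black. Transcribed roughly as follows:

[code]
# The standard sRGB transfer functions (per channel, values in 0-1).
# Note the short linear segment near black rather than a pure 2.2 power law.

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Mid grey in an sRGB image (0.5) is only about 0.21 in actual linear light:
print(srgb_to_linear(0.5))
[/code]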
Now, I think one reason this has started to become important recently is that we've discovered that the mathematics of lighting and rendering in engines comes out consistently and predictably wrong if we don't work our lighting and textures in linear space.
Working with images and renders that have already had some kind of luminance or gamma correction applied only results in incorrect values - 1 + 1 no longer equals 2; it equals 1 + (1 bent by whatever gamma correction is taking place).
Working in linear means we're not fighting to compensate for a weird artificial curve or value correction added over the top of every number and calculation. You can work with a straight 0-1 correlation for lighting, shader maths works more predictably, and you avoid odd side effects like lighting inexplicably blowing out, because working without gamma correction means twice the light intensity will actually result in twice the luminance.
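Here's a toy version of that '1 + 1' problem (plain 2.2 curve for simplicity, made-up light value): summing two equal lights in gamma space gives a very different, blown-out answer compared to summing in linear and encoding once at the end.

[code]
# Toy example of why lighting maths misbehaves in gamma space (2.2 curve for simplicity).

def encode(x): return min(x, 1.0) ** (1 / 2.2)

light = 0.25  # linear intensity of one light

# Correct: add in linear, gamma-encode once at the end.
correct = encode(light + light)        # 0.5 linear -> ~0.73 on screen

# Wrong: add pixel values that are already gamma-encoded.
wrong = encode(light) + encode(light)  # ~0.53 + ~0.53 = ~1.07 -> blows past white

print(correct, wrong)
[/code]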
What working in a linear workflow does do is give you more flexibility (greater range of information in the image format for post-processing, increased dynamic range for lighting and reflection, easier plate matching for film VFX) and more predictable maths (lighting maths like the inverse square law actually behaves according to the formulas behind it, etc.), but at the end of the day, to view your render or game on a TV or monitor in a way that our eyes predictably understand, and in a manageable file size, it all has to be gamma corrected in the end.
I'd bet whatever engine Watch Dogs uses calculates all the lighting and shaders internally in linear, like pretty much all 3D programs and engines do, and then outputs each frame to the screen gamma corrected. What happens in between, I couldn't possibly guess, but looking at the quality of the work I'm sure they're already using the best possible approaches!
[ame="http://www.youtube.com/watch?v=JJwoPhvQZBE"]The Beginners Explanation of Gamma Correction and Linear Workflow - YouTube[/ame]
I should probably clarify, what I think is happening in your original post is that it seems like you've taken an already gamma corrected (not to mention probably compressed) image and applied gamma correction again. Even if Nuke de-gamma'd it on load, you won't be able to get back all that dynamic range that was present in-engine just by arbitrarily changing the display colour space and playing with the levels - I'm presuming any extra luminance info gets thrown away as soon as the image is sent to the screen by the game engine, if not before. On top of that any video or image compression from wherever your screengrab came from would further clamp and break the dynamic range.
It might look like it works on some images with mostly midtones and no major lights or darks but I'd think that you can only really do your initial test with a true floating point image format output straight out of the engine.
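To sketch out why (made-up numbers, with a plain 2.2 curve standing in for whatever the engine actually does): by the time a frame reaches a screengrab it has been clamped to 0-1, gamma-encoded and quantised to 8 bits, so linearising it afterwards just moves the surviving values around; anything that was brighter than white in-engine never comes back.

[code]
# Sketch: why linearising an already gamma-corrected, clamped 8-bit grab
# can't recover the engine's original HDR range. Numbers are illustrative.

hdr_pixels = [0.1, 0.5, 1.0, 2.5, 8.0]   # linear values inside the engine (can exceed 1.0)

# What ends up in the screengrab: clamp to 0-1, gamma-encode, quantise to 8 bits.
grab = [round(min(p, 1.0) ** (1 / 2.2) * 255) for p in hdr_pixels]

# De-gamma the grab back to 'linear' in post:
recovered = [(g / 255) ** 2.2 for g in grab]

print(grab)       # [90, 186, 255, 255, 255] - everything bright is pinned at 255
print(recovered)  # [~0.10, ~0.50, 1.0, 1.0, 1.0] - the 2.5 and 8.0 are gone for good
[/code]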
Hence the lens flares, depth of field, motion blur, and chromatic aberration.
I believe Half-Life 2 was the first game to really play with dynamic range, and clamping values greatly added to the gameplay.
If you want to teach students for the next generation of games then I would definitely prepare them for working with linear sources, gamma and the whole process. Physically based shaders also seem to go very well with all this.
Most licensed engines now support gamma correction pipelines, but it's generally not well supported for mobile versions since post-processing is so slow. UDK and Unity both support it for PC and not for mobile.
...such a shame the first thing I need to do is finish a research paper in Word! What a waste of all that power.
The incorrect way
-textures are normally painted in Photoshop (jpeg/tga for example), which will have a gamma of 2.2 (sRGB)
-lighting is always done in linear.
The correct way
-textures are normally painted in Photoshop (jpeg/tga for example) and have to be converted to linear (saving in exr format automatically does this)
-lighting is always done in linear.
-a gamma filter of 2.2 (sRGB) is applied to the engine's output
This way everything stays linear for the light/texture calculations, and the human-eye filter of 2.2 (sRGB) is put on at the end - see the sketch below.
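A minimal sketch of the 'correct way' list above, with plain Python standing in for shader code, a simple 2.2 power curve, and made-up texture/light values:

[code]
# Minimal sketch of the 'correct way': sRGB texture in, linear maths, gamma out.
# Simple 2.2 power curve and made-up values, just to show the order of operations.

GAMMA = 2.2

albedo_srgb = 0.5        # value painted in Photoshop (stored in ~2.2 / sRGB space)
light_intensity = 1.5    # lighting is defined directly in linear space

# 1. Un-gamma the texture so the maths happens on real light values.
albedo_linear = albedo_srgb ** GAMMA

# 2. Do all lighting/shading in linear.
lit_linear = albedo_linear * light_intensity

# 3. Apply the 2.2 'human eye filter' once, right at the end, for display.
final_pixel = min(lit_linear, 1.0) ** (1.0 / GAMMA)

print(albedo_linear, lit_linear, final_pixel)
[/code]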