
Let's talk about HDR gaming...

blankslatejoe
polycounter lvl 18
Hi guys. I did some quick searches for this on the forum/wiki, but couldn't find an appropriate "master thread"... perhaps someone could steer me towards a primer?

Today I was talking to a graphics engineer about HDR TVs and what it takes for games to take full advantage of them, and he seemed to be under the impression that ALL textures in a game could be authored to be "HDR ready." I was under the impression that that would be prohibitively expensive, since every texture would carry a significantly larger memory cost. I also realized that maybe what I think I know and what I actually know aren't aligned. Rather than look like an idiot in front of him, I'd rather look like an idiot in front of you all and the public internet.

So, here are my assumptions about HDR for gaming... can anyone verify or correct them?

Please note: I am NOT talking about HDRI skyboxes used only for lighting... I'm talking about a full HDR pipeline... I think?

General principle assumptions:
The basic principle of HDR: using a higher bit depth to preserve more information in very bright and very dark areas.
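To put a rough number on what that extra depth buys you, here's a hypothetical Python sketch (the `code_values` helper and the 5% gradient range are my own illustration, not from any engine or tool):

```python
# Hypothetical illustration: distinct code values available for a subtle
# dark gradient at 8 vs 16 bits per channel.
def code_values(lo, hi, bits):
    """Count the distinct integer steps an intensity range [lo, hi]
    (normalized 0..1) can actually use at a given bit depth."""
    levels = (1 << bits) - 1  # 255 for 8-bit, 65535 for 16-bit
    return round(hi * levels) - round(lo * levels) + 1

# A gradient covering just 5% of the range, e.g. a dim night sky:
print(code_values(0.0, 0.05, 8))   # -> 14 steps: visible banding
print(code_values(0.0, 0.05, 16))  # -> 3278 steps: smooth
```

The same scene region gets a couple of hundred times more steps at 16 bits, which is why banding shows up first in those dark/bright extremes.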

There are two sides to "HDR gaming":
1: The engine output: the game's final rendered image retains a high range of brights/darks AFTER the scene, lighting, post FX, color correction, etc. are incorporated, rather than being clamped to the standard television range.
2: The source textures: these need a high enough bit depth that there is actually information available to show when the engine pushes them into bright or dark areas (or at all times, in the case of unlit materials), so they don't just turn white/black with banding.

Does that understanding sound accurate?
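Side 1 above (the final-image range) can be shown with a toy sketch. This uses a simple Reinhard-style curve purely as a stand-in for whatever tone mapping a real engine does; the scene values are made up:

```python
# Hypothetical sketch: two bright scene-linear values that an SDR-style
# clamp collapses into the same white, but a tone map keeps distinct.
def sdr_clamp(x):
    """Clip to the 0..1 displayable range, destroying highlight detail."""
    return min(x, 1.0)

def reinhard(x):
    """Simple Reinhard tone map: compresses highlights instead of clipping."""
    return x / (1.0 + x)

bright, brighter = 4.0, 16.0  # e.g. a lamp vs. the sun, in scene-linear units
print(sdr_clamp(bright), sdr_clamp(brighter))  # 1.0 1.0 -> detail lost
print(reinhard(bright), reinhard(brighter))    # 0.8 ~0.94 -> detail kept
```

The point being: if the frame is clamped before it reaches the display, no amount of HDR TV hardware can bring those highlights back.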

More specific assumptions:
1: "HDR images" are, in this application, basically a marketing term for 16-bits-per-channel images (so 48-bit RGB / 64-bit RGBA images?).
2: In order to properly take advantage of a 16-bit-per-channel texture, it needs to ALWAYS be 16-bit, all the way down the authoring chain. Meaning, a Substance Painter project exported to 16-bit files (EXR/PNG rather than TGA, which tops out at 8 bits per channel) won't really see any benefit if the base materials used to author it weren't ALSO 16-bit.
3: Authoring 16-bit textures is time consuming (slower tool performance), and a lot of tools don't offer great support for them yet. (Does Substance Painter even allow for this gracefully? Does Unreal/Unity?)
4: Authoring something like a common brick wall, or other relatively mid-tone assets, at 16 bits per channel may offer some gains, but those may not be worth the added cost to memory/download footprint.
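On assumption 4's memory cost, some back-of-envelope arithmetic (uncompressed sizes for a hypothetical 2048x2048 RGBA texture, ignoring mips and block compression, which change the absolute numbers but not the doubling):

```python
# Rough uncompressed texture footprint at different bit depths.
def texture_bytes(width, height, channels, bits_per_channel):
    return width * height * channels * bits_per_channel // 8

rgba8 = texture_bytes(2048, 2048, 4, 8)    # 8 bits/channel
rgba16 = texture_bytes(2048, 2048, 4, 16)  # 16 bits/channel
print(rgba8 // 2**20, rgba16 // 2**20)     # 16 32 (MiB): double the cost
```

So every texture promoted to 16 bits per channel doubles its raw footprint, which is why doing it across the board sounds expensive.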

And here are my final assumptions:
1: You get more bang for your buck by focusing on specific KINDS of images to make 16-bit: unlit skyboxes (which would really show off the extra depth, since they map closely to the TV's range), FX, glowy materials designed to overbrighten, etc.
2: Trying to author everything, or even most assets, at a higher bit depth is probably a bad idea given the current state of the tools/tech.
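For what it's worth on the output side: HDR10 displays expect the final frame encoded with the SMPTE ST 2084 "PQ" curve rather than a plain gamma curve, which is the "TV's range" part. A rough sketch of that encode (constants are from the ST 2084 spec; treat this as illustrative, not production code):

```python
# SMPTE ST 2084 perceptual quantizer (PQ) encode, normalized to a
# 10,000-nit peak. This is the transfer curve HDR10 signals use.
def pq_encode(nits):
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = max(nits, 0.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

print(round(pq_encode(100), 3))   # SDR-ish white, ~0.508 of signal range
print(round(pq_encode(1000), 3))  # bright highlight, ~0.752
```

Note how roughly half the signal range sits above 100 nits; that headroom is exactly what skyboxes, FX, and emissive materials would get to play in.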


I've never really explored this road in a production environment before, and I'd really appreciate insight from anyone who has. How far off are my assumptions? I assume my understanding is super primitive/flawed since I'm new to the HDR way of doing things... but can someone correct me or point me toward a good primer?

Does anyone know the state of the common tools when it comes to 16-bit-per-channel images? (I know Photoshop has had basic support for them forever, but I'm asking more about engines/PBR texturing solutions/etc.)
Thanks guys!

