Hey, so I have some questions, and I'm also interested in a general discussion on the topic.
So I've been working at a VFX house now for about 2 months and we are integrating the Substance suite into our pipeline. Of course some general testing needs to be done before anything is set in stone and this becomes our preferred workflow (which I really think will happen in time).
First off, the Substance suite is really great at what it does well, in my opinion, and not so great at other things you might do in Mari when working in VFX. So right now we are making materials in Designer, and I took them into Painter to do some general look dev work.
Now then comes the question: can you really take it any further down the pipeline (of course pipelines differ between companies) when Painter does not yet support an ACES workflow? I've seen some hacky ACES methods using filters like the ones made by Jose Linares (https://forum.substance3d.com/index.php?topic=27727.0), and from my understanding (I'm by no means smart enough about color space to make a good argument for or against it) you get a passable result in Painter using these methods, but like I said you will be using hacky methods, so you won't get the range of values you can in a proper ACES workflow.
I know Substance has been used by studios/houses like ILM and DNEG recently for their VFX work on big projects like Stranger Things S03, The Umbrella Academy, The Expanse and more.
So what do you think? Have you used the Substance tools for VFX? Did you use both Painter and Designer or just one of them?
Have a great weekend or day, whatever time you see this.
Replies
First of all, I'm not sure if I understand it correctly, but I think I do. Basically, this is a tonemapper: you give it whatever input, and it transforms it into a specific space. I don't think artists have much to do with this. Not much to do with PBR either. It's a post process that you can put in your post-process chain, replacing the existing tone mapper.
Feel free to correct me if I'm talking bs.
I've seen people storing base color textures as 16 bits/channel even when it's not necessary. For a 1000px-long gradient, yeah, it's reasonable. But with the exception of vector textures, most textures are fine with 8 bits/channel, and you literally can't tell the difference between that and the 16-bit one.
So... some people seem to think that you do need it.
I know this isn't directly related to the topic, but it's a similar case.
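Just to make that concrete, here's a rough numpy sketch of the gradient case (the 1000px figure is just the example from above, nothing special about it):
[code]
# Quantize a long, shallow gradient to 8 and 16 bits per channel
# and count how many distinct steps survive.
import numpy as np

gradient = np.linspace(0.0, 1.0, 1000)      # 1000px gradient as 0..1 floats

q8  = np.round(gradient * 255) / 255        # 8 bits  -> 256 possible values
q16 = np.round(gradient * 65535) / 65535    # 16 bits -> 65536 possible values

print(len(np.unique(q8)))    # ~256 steps over 1000px -> visible banding
print(len(np.unique(q16)))   # ~1000 steps -> every pixel keeps its own value
print(np.abs(gradient - q8).max(), np.abs(gradient - q16).max())  # max quantization error
[/code]
At 8 bits the 1000 pixels collapse onto ~256 levels, which is where the banding comes from; at 16 bits every pixel keeps its own value.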
When you do things properly, everything is stored using linear values at a high bit depth (16 bits seems to be conventional).
Each application in the pipeline is then able to directly use the stored values for its internal maths (in linear space) and convert them to whatever display color space is appropriate for the situation.
Storing color space information in the files means an application can tell which conversion to apply to an image when it reads it in, but from my understanding, anything other than linear space for stored values has the potential for loss of precision in areas where the color space is distorted (e.g. sRGB crushes low values in particular).
Practically speaking you probably won't see any effect at 32-bit precision, but I feel like 16 bits could be an issue for VFX work.
As I say, I'm no expert at this point (I'm going to have to become one fairly soon) so please correct me if I'm wrong.
As far as painter integration goes....
https://docs.substance3d.com/spdoc/color-profile-154140684.html
Looks like you can work with pure linear values, so you'll be able to fit it into a managed pipeline one way or another.
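For anyone following along, here's a minimal sketch of the standard sRGB transfer functions (IEC 61966-2-1), just to make the linear vs. display-space distinction above concrete; the round trip at the end is purely illustrative:
[code]
# Standard sRGB encode/decode, the kind of linear <-> display conversion
# being described above. Values are 0..1 floats.
import numpy as np

def linear_to_srgb(x):
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def srgb_to_linear(x):
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.04045, x / 12.92, np.power((x + 0.055) / 1.055, 2.4))

# Round-trip linear data through an 8-bit sRGB encode to see the precision cost:
linear    = np.linspace(0.0, 1.0, 1001)
stored    = np.round(linear_to_srgb(linear) * 255) / 255   # 8-bit sRGB file
recovered = srgb_to_linear(stored)                         # back to linear for the renderer
print(np.abs(recovered - linear).max())                    # worst-case round-trip error
[/code]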
16 bits per channel is a lot more values (65,536 per channel) than 8 bits (256 values), but it's still limited; you can still see dithering or banding artifacts once you start to manipulate render passes. 32 bits per channel is floating point, which gives vastly more precision and dynamic range again. I can see a difference when using render passes for compositing, like relighting in comp, replacing textures using a UV pass, etc.
32-bit is, I hear, a fairly common workflow in VFX as well. The only passes that get stored in 16-bit are the "beauty" passes, the traditional fully-lit renders, since they're usually not tweaked very heavily.
I agree though, most textures are fine in 8 bits per channel. Except for linear data textures like normal maps, displacement, etc.
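A quick sketch of the "manipulating render passes" point, assuming integer quantization for the stored passes (half-float EXR behaves differently, but the idea is the same):
[code]
# Push a dark pass up a few stops after it has been quantized,
# and count the distinct values that survive.
import numpy as np

dark_pass = np.linspace(0.0, 0.05, 1000)    # a dim region of a render, linear float

as_8bit  = np.round(dark_pass * 255) / 255
as_16bit = np.round(dark_pass * 65535) / 65535

gain = 16.0                                  # aggressive exposure push in comp
print(len(np.unique(as_8bit  * gain)))       # ~14 distinct values -> heavy banding
print(len(np.unique(as_16bit * gain)))       # ~1000 distinct values -> smooth
print(len(np.unique(dark_pass * gain)))      # float source keeps full precision
[/code]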
I tend to disagree on 8 bits being enough - even for games where the results are compressed.
I've noticed distinct improvements in final compressed image quality when starting with 16bit images over 8bit on our engine.
It could just be our texture compiler but I suspect it's common to others since the sums are likely to be the same.
But the quality gain is subjective. What you and I notice, the average consumer is much less likely to. Helpful for me to keep in mind, as there are trade-offs to choosing higher-bit-depth source files: fewer tools, longer transfer times, more storage, slower loading & processing, etc.
https://www.toadstorm.com/blog/?p=694
The main takeaway for me is:
[quote]
For CG artists, a big benefit is the ACEScg color gamut, which is a nice big gamut that allows for a lot more colors than ye olde sRGB. Even if you’re working in a linear colorspace with floating-point renders, the so-called “linear workflow”, your color primaries (what defines “red”, “green” and “blue”) are likely still sRGB, and that limits the number of colors you can accurately represent.
[/quote]
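To make the primaries point concrete, here's a small sketch of the sRGB-to-ACEScg conversion. The 3x3 coefficients are the commonly quoted Bradford-adapted (D65 to D60) values, rounded; verify them against your OCIO config before relying on them:
[code]
# Converting linear sRGB/Rec.709 values to ACEScg (AP1 primaries) is just a
# 3x3 matrix. Coefficients are the commonly quoted Bradford-adapted values,
# rounded to 4 decimals -- check your own OCIO config for the exact numbers.
import numpy as np

SRGB_TO_ACESCG = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
])

pure_srgb_red = np.array([1.0, 0.0, 0.0])    # the most saturated red sRGB can store
print(SRGB_TO_ACESCG @ pure_srgb_red)        # ~[0.613, 0.070, 0.021] in ACEScg
# ACEScg values outside that (e.g. [1, 0, 0] in AP1) have no sRGB equivalent,
# which is the "more colors than ye olde sRGB" point in the quote.
[/code]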
So what I've been told by the guy at the company is that this is useful when you are recording real-life footage with different cameras: they can get a matching look using ACES. In terms of renders, this can be useful in grading when you get different inputs, say one shot is in sRGB and another is linear. But then again, like Eric said, just slap your guy in the face and tell him to give you a proper thing. lol.
I still kinda feel like this is not the whole picture.
https://github.com/bleleux/CustomTB/tree/master/shader/post
I'm always amazed by the magic numbers. It says it's a reference shader, so it should be pretty accurate.
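For reference, one widely used set of those magic numbers is Krzysztof Narkowicz's single-curve fit of the ACES RRT+ODT. This is just an illustration of what such a fit looks like, not necessarily the constants used in that repo, which implements a fuller version:
[code]
# Cheap single-curve approximation of the ACES filmic tonemap
# (Narkowicz's fit). Input is linear, scene-referred RGB.
import numpy as np

def aces_film_approx(x):
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)

hdr_pixel = np.array([4.0, 1.0, 0.2])        # a bright, saturated scene-referred value
print(aces_film_approx(hdr_pixel))           # rolled off into a displayable 0..1 range
[/code]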
My Toolbag example shared by Obscura is a good example of the "full" pipeline for ACES tonemapping specifically for SDR displays. There would be other ODTs for P3, Rec2020, etc. UE4 and Unity also have the full versions with other ODTs, with Unity defaulting to a simplified approximation as well.
That is not what the image is showing. The left image highlights the limitation of a simple gamma correction on a scene-referred image down to a smaller display-referred space, while the right shows the tonemapping that takes place to reduce clipping (luminance and saturation). In my experience, as far as tonemapping goes, a simple luminance fit for the ACES RRT/ODT handles most of these cases, while the added color space transform handles the very specific, highly saturated colors by desaturating them. Comparing the full ACES linked above to the approximation included with Toolbag will show the difference.
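A rough sketch of the difference being described, per-channel curve vs. luminance-only fit, using the same cheap single-curve approximation purely for illustration:
[code]
# Per-channel tonemapping shifts hue/saturation on bright saturated colors,
# while a luminance-only fit preserves the channel ratios but can leave
# individual channels clipping.
import numpy as np

def filmic(x):
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)

color = np.array([8.0, 0.5, 0.1])                    # intense, saturated scene-referred red

per_channel = filmic(color)                          # each channel rolls off independently
lum = np.dot(color, [0.2126, 0.7152, 0.0722])        # Rec.709 luminance
luminance_fit = np.clip(color * (filmic(lum) / lum), 0.0, 1.0)  # scale RGB by tonemapped luma

print(per_channel)     # red saturates near 1 while green/blue get pushed up -> hue drifts
print(luminance_fit)   # channel ratios preserved, but the red channel simply clips
[/code]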
Is this a tonemapper?
- As far as I understand, yes.
Should realtime artists care about this?
- No
Should compers care about this in offline rendering?
- Yes
Why is there a saturation shift?
- sRGB is still not good enough
Is this correct?
My own questions:
- Do you need to know the input color space in order to get this to work correctly?
- Does the code, or what needs to be done, differ when you are viewing on a low-range monitor versus a higher-range one?
- Do regular users (movie viewers) need to do anything to get this working correctly, or is there some metadata embedded in the media that tells the output software what to do?
@leleuxart
If you're interested in more of the technicalities of ACES, check out ACESCentral. It just got a face lift with easy to find documentation.