Advice: VFX - Substance - Colorspace - Workflow

arnareli
Hey, I have some questions, and I'm also interested in general thoughts on the topic.

I've been working at a VFX house for about two months now, and we are integrating the Substance suite into our pipeline. Of course, some general testing needs to be done before anything is set in stone and this becomes our preferred workflow (which I really think will happen in time).

First off, the Substance suite is really great at what it does well, in my opinion, and not so great at other things you might do in Mari when working in VFX. Right now we are making materials in Designer, and I took them into Painter for some general look-dev work.
Now comes the question: can you really take it any further down the pipeline (pipelines differ between companies, of course) when Painter does not yet support an ACES workflow? I've seen some hacky ACES methods using filters, like the ones made by Jose Linares (https://forum.substance3d.com/index.php?topic=27727.0), and I'm by no means knowledgeable enough about colorspace to make a good argument for or against them. You get a passable result in Painter using these methods, but, like I said, they are hacks, so you won't get the range of values you would in a proper ACES workflow.

I know some studios/houses like ILM and DNEG have used Substance recently for their VFX work on big projects like Stranger Things S03, The Umbrella Academy, The Expanse and more.

So what do you think? Have you used the Substance tools for VFX? Did you use both Painter and Designer, or just one of them?

Have a great weekend, or day, whatever time you see this :)

Replies

  • Eric Chadwick
    We use Substance for archviz work, thousands of shots per month. The key is to train your team to work within PBR limits from the start, rather than relying on someone's funky filter to downsample. Fix it upstream.
  • Obscura
    https://www.cinema5d.com/the-aces-workflow-future-digital-color/

    First of all, I'm not sure if I understand it correctly, but I think I do. This is basically a tonemapper: you give it whatever input, and it transforms it into a specific space. I don't think artists have much to do with this. It doesn't have much to do with PBR either. It's a post-process that you can put in your post-process chain, replacing the existing tonemapper (a toy sketch of the idea is below).

    Feel free to correct me if I'm talking bs.
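    For illustration, here's a minimal sketch of the idea in Python, using Krzysztof Narkowicz's well-known curve fit, which many real-time engines label "ACES" (it approximates the look of the reference RRT+ODT, not the full transform):

    [code]
    import numpy as np

    def aces_approx(x):
        # Narkowicz's fit of the ACES RRT+ODT curve for SDR output.
        # Input: linear scene-referred RGB; output: tonemapped values in
        # [0, 1], still to be gamma-encoded for the display.
        a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
        return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)

    # Dropped in at the end of a post chain, it replaces the existing tonemapper:
    hdr_pixel = np.array([4.0, 1.0, 0.2])   # linear HDR values from the renderer
    print(aces_approx(hdr_pixel))           # compressed into displayable range
    [/code]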
  • gnoop
    I don't think ACES is a tonemapper; it's rather, as they say, a Color Encoding Standard, of which we already had a gazillion before: one that stores linear, true color values whatever the energy of the light source is.

    Still, whatever way colors were encoded in your files, with properly set up color management you should see them the SAME, and ideally ACCURATE, on your screen, with only minor differences in those nuclear-acid, super-wide-gamut primary colors or super-bright, sun-like things on fancy HDR monitors.

    On a typical sRGB screen you shouldn't see any difference at all, whatever way the colors are encoded (if color management works). Like what Photoshop does: comparing what was encoded in the file to the individual monitor profile you made with a colorimeter, and showing you what it should look like.

    So I don't understand all those pictures of "better" contrast with ACES etc. Couldn't all those 2D compositing VFX programs read an sRGB-encoded texture file and re-encode it into whatever they need? Their IDTs can read and re-encode multiple cameras' RAW standards, but couldn't do it with a regular JPG file, for example?

    And BTW, the whole purpose of that new super-wide-gamut, unrestricted-light-energy pixel encoding is the huge light range of the real world. Do we need it in PBR textures, where we usually clip black and white?

    I bet there should be some way to tell the VFX software your texture is sRGB, and that's all, issue solved.


    P.S. Tone mapping is for special cases when you've color graded your final product to look a certain way, deviating from true colors: more contrast, or that notorious bluish indigo style. In such a case you could load this "wrong", deviated color profile in Substance Painter to see how your current work matches the decided style.
  • Obscura
    "We need it in PBR textures  where we usually clip black and white ?"

    I've seen people storing base color textures as 16 bits/channel even when it's not necessary. For a 1000px-long gradient, yeah, it's reasonable. But with the exception of vector textures, most textures are fine with 8 bits/channel, and you literally can't tell the difference between that and the 16-bit one (a quick sketch of the gradient case is at the end of this post).

    So... some people may think that you do.

    I know this isn't directly related to the topic, but it's a similar case.
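    To make the gradient case concrete, here's a toy sketch in Python (assuming simple rounding quantization, no dithering):

    [code]
    import numpy as np

    # A 1000px gradient has at most 256 distinct values at 8 bits/channel,
    # so neighboring pixels collapse into visible steps (banding).
    ramp = np.linspace(0.0, 1.0, 1000)
    q8  = np.round(ramp * 255) / 255          # 8 bits/channel
    q16 = np.round(ramp * 65535) / 65535      # 16 bits/channel

    print(np.unique(q8).size)    # 256  -> fewer steps than pixels
    print(np.unique(q16).size)   # 1000 -> every pixel keeps its own value
    [/code]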
  • gnoop
    One thing I wonder about ACES is why they call it a "workflow". It's basically just a way for compositing software to re-encode RAW files into something standard that looks similar for all cameras. Kind of a more advanced DNG. Cool beans, indeed.

    Still, the key to seeing consistent and accurate colors from different cameras on different monitors is, and has always been, profiling/calibrating each and every camera and monitor you work with: putting a color checker box and gray cards in each scene you shoot. That worked perfectly fine even during the film era.
  • poopipe
    I'm still looking into this myself, but as I understand it so far...

    When you do things properly, everything is stored using linear values at a high bit depth (16 seems to be conventional).
    Each application in the pipeline can then use the stored values directly for its internal maths (in linear space) and convert them to whatever display color space is appropriate for the situation.

    Storing colorspace information in the files means an application can tell which conversion to apply to an image when it reads it in, but from my understanding, anything other than linear space for stored values has the potential for loss of precision in areas where the color space is distorted (e.g. sRGB crushes low values in particular).
    Practically speaking you probably won't see any effect at 32-bit precision, but I feel like 16 could be an issue for VFX work.
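    For reference, the display conversion mentioned above is, for sRGB, just the standard transfer functions. A minimal sketch in Python (the piecewise constants are from the sRGB spec):

    [code]
    import numpy as np

    def srgb_encode(linear):
        # Linear values -> sRGB display encoding (the sRGB OETF).
        linear = np.asarray(linear, dtype=np.float64)
        return np.where(linear <= 0.0031308,
                        12.92 * linear,
                        1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

    def srgb_decode(encoded):
        # sRGB-encoded values -> linear (the inverse transform).
        encoded = np.asarray(encoded, dtype=np.float64)
        return np.where(encoded <= 0.04045,
                        encoded / 12.92,
                        np.power((encoded + 0.055) / 1.055, 2.4))

    print(srgb_encode(0.18))   # ~0.46: mid-grey lands near half the encoded range
    [/code]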

    As I say, I'm no expert at this point (I'm going to have to become one fairly soon), so please correct me if I'm wrong.

    As far as Painter integration goes...
    https://docs.substance3d.com/spdoc/color-profile-154140684.html
    Looks like you can work with pure linear values, so you'll be able to fit it into a managed pipeline one way or another.
  • Eric Chadwick
    We're moving to 32-bit linear for our archviz pipeline. It makes sense once you start to look at more post-process automation in apps like Nuke.

    16 bits per channel is a lot more colors (65k values per color channel) than 8 bits (256 values), but it's still compressed; you can still see dithering or banding artifacts once you start to manipulate render passes. 32 bits per channel is over 4 billion values per channel (and is usually stored as floating point). I can see a difference when using render passes for compositing, like relighting in comp, replacing textures using a UV pass, etc.
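    To see why manipulating passes hurts the lower bit depths, a toy sketch in Python (a simple exposure gain standing in for a comp operation like relighting):

    [code]
    import numpy as np

    # A shadows-only render pass, quantized to 8 and 16 bits.
    shadows = np.linspace(0.0, 0.05, 2048)
    q8  = np.round(shadows * 255) / 255       # only 14 distinct values survive
    q16 = np.round(shadows * 65535) / 65535   # all 2048 values survive

    gain = 8.0   # +3 stops in comp stretches those steps across the image
    print(np.unique(q8  * gain).size)   # 14   -> obvious banding once boosted
    print(np.unique(q16 * gain).size)   # 2048 -> still smooth
    [/code]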

    32 bits, I hear, is a fairly common workflow in VFX as well. The only passes that get stored in 16-bit are the "beauty" passes, the traditional fully-lit renders, since they're usually not tweaked very heavily.

    I agree though, most textures are fine in 8 bits per channel. Except for linear textures like normal maps, displacement, etc.
  • poopipe
    32 bits, I hear, is a fairly common workflow in VFX as well. The only passes that get stored in 16-bit are the "beauty" passes, the traditional fully-lit renders, since they're usually not tweaked very heavily.
    That's probably what I'm remembering from my reading into what studios do.

    I tend to disagree on 8 bits being enough, even for games where the results are compressed.
    I've noticed distinct improvements in final compressed image quality when starting with 16-bit images over 8-bit in our engine.
    It could just be our texture compiler, but I suspect it's common to others, since the sums are likely to be the same.
  • Obscura
    Rendered passes and textures are different things. I agree that some render passes require higher precision (depth, lighting, etc.). I also agree that some specific types of textures require higher precision, though some normal maps can live without problems in the 8-bit range too (half the range for negative values, half for positive), and the difference from the 16-bit counterpart would be negligible. But this really depends on the content. With a tight memory budget, storing normals as DXT1 with something else in the blue channel is a neat trick (a sketch of the reconstruction is below). Unrelated to the topic, though.
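    For context, that trick works because tangent-space normals are unit length, so a shader can rebuild Z from X and Y, freeing the blue channel for other data. A minimal sketch of the reconstruction (Python for readability; in a shader it's a couple of instructions):

    [code]
    import numpy as np

    def reconstruct_normal_z(x, y):
        # x, y: the normal's X/Y components, already remapped from [0,1]
        # texture values to [-1,1]. Unit length gives z = sqrt(1 - x^2 - y^2).
        z_sq = 1.0 - x * x - y * y
        return np.sqrt(np.maximum(z_sq, 0.0))   # clamp guards compression error

    print(reconstruct_normal_z(0.3, -0.2))   # ~0.933; blue channel never read
    [/code]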
  • Eric Chadwick
    Uncompressed source always produces better compressed quality, in my experience. 

    But the quality gain is subjective. What you and I notice, the average consumer is much less likely to. It's helpful for me to keep in mind, as there are trade-offs to choosing higher-bit-depth source files: fewer tools, longer transfer times, more storage, slower loading & processing, etc.
  • oglu
    That is a good read for those interested in ACES.

    https://www.toadstorm.com/blog/?p=694

    The main takeaway for me is:

    [quote] 

    For CG artists, a big benefit is the ACEScg color gamut, which is a nice big gamut that allows for a lot more colors than ye olde sRGB. Even if you’re working in a linear colorspace with floating-point renders, the so-called “linear workflow”, your color primaries (what defines “red”, “green” and “blue”) are likely still sRGB, and that limits the number of colors you can accurately represent.

    [/quote] 
  • gnoop
    I imagine Adobe's reaction to ACES: they see it and shrug. Why did those weird VFX guys invent this new color management system when everyone else has used ICC profiles for decades, including for cameras, and why do they think theirs is better? That's probably why there is no ACES profile in Photoshop.

    As for the benefits of super-wide gamut, I wonder if people realize that most earthly colors of the real world are inside sRGB, and that those extra colors beyond the sRGB primaries triangle are nuclear, oversaturated, super-acid colors: the notorious violet, almost ultraviolet, LED colors, or the colors of some neon lamps, etc.

    And to see those extra colors you need a special wide-gamut + HDR monitor, wider gamut than the typical Adobe RGB ones.


    That link oglu posted compares two renders with red/green walls and does it in a totally misinforming way. ACES has nothing to do with how realistic things look. It's just that when you show an sRGB-space picture on a wide-gamut monitor, it gets super over-saturated. With proper color management and an accurate monitor profile, both should look the same, just accurate.

  • Obscura
    OK, I got into a conversation about this with a camera guy at the company where I work, and based on what he told me, it makes no sense to talk about this in the context of textures. I also checked out some code today, thanks to @leleuxart, who posted it; he can probably shed some more light on this too. What I've seen there is basically a bunch of conversion functions from color space x to color space y.

    So what I've been told by the guy at the company is that this is useful when you are recording real-life footage and you have different cameras: they can get a matching look using ACES. In terms of renders, this can be useful in grading when you get different inputs, say one shot is in sRGB and another is linear. But then again, like Eric said, just slap your guy in the face and tell him to give you the proper thing. lol

    I still kinda feel like this is not the whole picture.
  • gnoop
    Can somebody explain to me why all those "ACES LUT vs. standard" rendered pictures show different visible gamma? Clearly ACES is more contrasty. Like a shift more toward linear?

    Since the typical rendering pipeline is also linear, de-gamma-ing all sRGB inputs into linear first and then applying gamma back on the final output for the monitor, why do I see a difference in what people show as ACES?

    The ACES workflow should basically do the same, with the only difference that it should do it accurately for wide-gamut (outside of sRGB) colors. Still, the contrast/gamma should have nothing to do with it?
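    The "standard" chain being described, sketched in Python (a plain gamma 2.2 standing in for the exact sRGB curve):

    [code]
    import numpy as np

    albedo_srgb = np.array([0.5, 0.3, 0.2])       # texture as authored
    light = 2.0

    albedo_lin = albedo_srgb ** 2.2               # de-gamma the input
    lit        = albedo_lin * light               # all shading math in linear
    display    = np.clip(lit, 0, 1) ** (1 / 2.2)  # gamma back for the monitor
    print(display)
    [/code]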


    It looks a bit like those "PBR vs. old" examples we saw a lot of before, where the "old" ones, with every input set in a physically sane and accurate way plus Fresnel, looked no different.
  • Obscura
    That is a thing that I also asked the guy, but didn't get an answer to, so I'd like to know too.

  • Obscura
    @oglu do you know why this is?
  • gnoop
    Another thing I wonder about is why I couldn't find anything about individual device profiles in this ACES color management approach.

    Like what the Photoshop color engine does when matching the camera profile, the file format's color space and the monitor profile: not just, say, an sRGB profile, but the exact profile of your monitor built with an i1 device, i.e. the standard ICC color management that has existed for decades already.

    That includes the possibility of building ICC profiles from color checkers for each and every one of your cameras too, and getting 100% consistent colors.

    It has always been too heavy for 3D applications, including games, so my pure guess is that ACES is just a simplified version of it, better than nothing.
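    For comparison, the classic ICC-managed conversion described here is a few lines with Pillow's ImageCms (the file names are hypothetical; the monitor profile would come from an i1-style colorimeter):

    [code]
    from PIL import Image, ImageCms

    im = Image.open("texture.jpg")               # hypothetical sRGB-encoded file
    converted = ImageCms.profileToProfile(
        im,
        ImageCms.createProfile("sRGB"),          # source: the file's color space
        "my_monitor_from_i1.icc",                # destination: measured profile
    )
    converted.save("preview_for_this_monitor.jpg")
    [/code]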
  • Obscura
    Not sure if this helps you but here is the code by Brian I was referring to, so you can see what happens:
    https://github.com/bleleux/CustomTB/tree/master/shader/post

    I'm always amazed by the magic numbers. It says it's a reference shader, so it should be pretty accurate.
  • leleuxart
    Not sure where to start in this thread, but I appreciate the tag, Obscura!

    For starters, it's worth pointing out that the ACES workflow is different for film/VFX and games. In fact, in games it's not much of a workflow at all, as there is little to no work involved for the artist. For film/VFX, there's more involved, like the numerous Input Device Transforms for converting all assets to ACES2065-1 or ACEScg, transporting assets in ACESproxy, grading in ACEScc/ACEScct, archiving in ACES2065-1, and numerous Output Device Transforms to transform down to display spaces.

    If you're rendering/compositing in an ACES environment, you need to convert your textures to the ACEScg color space. Most industry-standard tools should support this natively or through the use of OCIO, but if you want to do it in the source and you're texturing in Painter, you will need to use Jose's filter or make your own. The math is relatively simple (3x3 matrices for the primaries and chromatic adaptation), so you could do it in house to ensure accuracy, but it would require Designer to make. And if you're texturing in Designer, the last two updates have included a few ACES-specific nodes and OCIO integration.
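    For reference, a sketch of that matrix step in Python; the constants are the commonly published Bradford-adapted sRGB-to-ACEScg matrix, so verify them against your OCIO config before building anything on them:

    [code]
    import numpy as np

    # Linear sRGB (D65) -> ACEScg (AP1 primaries, D60), Bradford adaptation.
    SRGB_TO_ACESCG = np.array([
        [0.6131, 0.3395, 0.0474],
        [0.0702, 0.9164, 0.0134],
        [0.0206, 0.1096, 0.8698],
    ])

    def srgb_to_acescg(rgb_linear):
        # Decode the sRGB gamma to linear *before* applying this matrix.
        return SRGB_TO_ACESCG @ np.asarray(rgb_linear)

    print(srgb_to_acescg([1.0, 0.0, 0.0]))   # pure sRGB red inside AP1
    [/code]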

    Since textures are all in the same color space for games, there's no need to pre-convert. The conversion can be done as a whole during tonemapping, or not at all if you use an approximation just to get the ACES "look."

    The Toolbag code Obscura shared is a good example of the "full" pipeline for ACES tonemapping, specifically for SDR displays. There would be other ODTs for P3, Rec2020, etc. UE4 and Unity also have the full versions with other ODTs, with Unity defaulting to a simplified approximation as well.

    gnoop said:
    That link oglu posted compares two renders with red/green walls and does it in a totally misinforming way. ACES has nothing to do with how realistic things look. It's just that when you show an sRGB-space picture on a wide-gamut monitor, it gets super over-saturated. With proper color management and an accurate monitor profile, both should look the same, just accurate.
    That is not what the image is showing. The left image highlights the limitation of a simple gamma correction on a scene-referred image down to a smaller display-referred space, while the right shows the tonemapping that takes place to reduce clipping (luminance and saturation). In my experience, as far as tonemapping goes, a simple luminance fit for the ACES RRT/ODT handles most of these cases, while the added color space transform handles the very specific, highly saturated colors by desaturating them. Comparing the full ACES linked above to the approximation included with Toolbag will show the difference.
    Can somebody explain to me why all those "ACES LUT vs. standard" rendered pictures show different visible gamma? Clearly ACES is more contrasty. Like a shift more toward linear?
    ACES is designed with theatrical viewing conditions in mind and while there is a gamma compensation added to the ODTs, it may still feel too dark to some even in perfect viewing conditions. It's a highly subjective area that may be addressed in future versions of the RRT. There are a few ways to alleviate the issue, like pre-multiplying the linear color. The RRT/ODT resembles the range of a film stock, so you have the luxury of over-exposing a few stops while still maintaining visibility in the bright areas.
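    To make the contrast question above concrete: the ACES chain runs scene-linear values through the RRT's film-like S-curve before the display encode, while a "standard" chain only clips and gamma-encodes. A toy comparison in Python (Narkowicz's approximation again standing in for the real RRT/ODT):

    [code]
    import numpy as np

    def display_gamma(x):
        return np.clip(x, 0.0, 1.0) ** (1.0 / 2.2)       # clip + gamma only

    def rrt_odt_approx(x):
        # Film-like S-curve: darker toe, lifted mids, rolled-off shoulder.
        tone = (x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14)
        return np.clip(tone, 0.0, 1.0) ** (1.0 / 2.2)

    scene = np.array([0.02, 0.18, 4.0])   # shadow, mid-grey, bright highlight
    print(display_gamma(scene))    # [0.17, 0.46, 1.0] -- highlight clips flat
    print(rrt_odt_approx(scene))   # [0.13, 0.55, 0.99] -- the "contrasty" look
    [/code]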
  • Obscura
    Thanks for the detailed explanation. There were some words in there that I haven't even heard of, but let's pretend they are color spaces. Let's simplify some of the questions here:

    Is this a tonemapper?
    - As far as I understand, yes.

    Should realtime artists care about this?
    - No

    Should compers care about this in offline rendering?
    - Yes

    Why is there a saturation shift?
    - sRGB is still not good enough

    Is this correct?

    My own questions:
    - Do you need to know the input color space in order to get this to work correctly?
    - Does the code, or what needs to be done, differ when you are viewing on a low-range monitor versus a higher one?
    - Do regular users (movie viewers) need to do anything to get this working correctly, or is there some metadata embedded in the media that tells the output software what to do?

    @leleuxart

  • leleuxart
    Is this a tonemapper?
    Not the original intention afaik, but yes, it can be. In real-time, when anyone talks about ACES they're likely referring to it as the tonemapper.
    Should realtime artists care about this?
    Not as far as needing to know the details. I think it's helpful to understand what tonemappers in general do at a high level since tools like Blender, Toolbag, Unity, etc. offer different tonemapping operators. 
    Should compers care about this in offline rendering?
    Yep. It's important for them, since they may have to request files in a specific format, or they need the details of the format(s) they're receiving.
    Why is there a saturation shift?
    Depending on the implementation, it could be just desaturation of the high values (luminance-only fit) OR desaturation plus the shift in RGB primaries (luminance and color transforms).
    Do you need to know the input color space in order to get this to work correctly?
    For realtime, no, since all content is assumed to be in the same space. For offline renderers/color grading, yeah. In order for your image to be accurate and processed correctly, you have to tell ACES that the asset is in Alexa's LogC, linear Rec709, Cineon log, etc. From there it will do the necessary transform, if there is one, from that input to the unified ACES space (a sketch of doing this through OCIO is at the end of this post).
    Does the code, or what needs to be done, differ when you are viewing on a low-range monitor versus a higher one?
    You need to make sure you're using the correct Output Device Transform, otherwise the rest of the code is pretty much left alone. The reference code includes support for a few different output standards. There are some in-between steps in games and film where you may need to do a custom viewer/display LUT, but that's only based on your pipeline and not necessary for the base workflow.
    Do regular users (movie viewers) need to do anything to get this working correctly, or is there some metadata embedded in the media that tells the output software what to do?
    Not that I'm aware of. An asset rendered through the ACES pipeline with the correct ODT should be all you need for it to be viewed correctly, since the transforms are applied to the asset. There is ACES metadata, but I believe it's only used when sharing assets, to also share the necessary viewing information; from what I've seen this is still being improved. This doesn't include the HDR metadata either, but I'm not sure where that comes into play; I think the HDR metadata is only a fallback if the content goes outside the display's capabilities. This is getting out of my area though, so I could be entirely wrong! Still living in the old days with Rec709 displays :'(

    If you're interested in more of the technicalities of ACES, check out ACESCentral. It just got a facelift with easy-to-find documentation.
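    As a concrete illustration of the "tell ACES what the input is" step from the first question above, a minimal sketch using OpenColorIO's Python bindings (assumes an OCIO v2 install and an ACES config on disk; the exact color space names vary per config, so check yours):

    [code]
    import PyOpenColorIO as ocio

    # Path and color space names below are assumptions -- look yours up.
    config = ocio.Config.CreateFromFile("aces_config.ocio")
    processor = config.getProcessor("Utility - sRGB - Texture", "ACES - ACEScg")
    cpu = processor.getDefaultCPUProcessor()

    print(cpu.applyRGB([0.5, 0.25, 0.1]))   # one texel: sRGB texture -> ACEScg
    [/code]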

  • gnoop
    gnoop said:
    That link oglu posted compares two renders with red/green walls and does it in a totally misinforming way. ACES has nothing to do with how realistic things look. It's just that when you show an sRGB-space picture on a wide-gamut monitor, it gets super over-saturated. With proper color management and an accurate monitor profile, both should look the same, just accurate.
    That is not what the image is showing. The left image highlights the limitation of a simple gamma correction on a scene-referred image down to a smaller display-referred space, while the right shows the tonemapping that takes place to reduce clipping (luminance and saturation). In my experience, as far as tonemapping goes, a simple luminance fit for the ACES RRT/ODT handles most of these cases, while the added color space transform handles the very specific, highly saturated colors by desaturating them. Comparing the full ACES linked above to the approximation included with Toolbag will show the difference.
    Can somebody explain to me why all those "ACES LUT vs. standard" rendered pictures show different visible gamma? Clearly ACES is more contrasty. Like a shift more toward linear?
    ACES is designed with theatrical viewing conditions in mind and while there is a gamma compensation added to the ODTs, it may still feel too dark to some even in perfect viewing conditions. It's a highly subjective area that may be addressed in future versions of the RRT. There are a few ways to alleviate the issue, like pre-multiplying the linear color. The RRT/ODT resembles the range of a film stock, so you have the luxury of over-exposing a few stops while still maintaining visibility in the bright areas.



    I still don't agree. They don't explain it. They say "look at those nicer, more realistic colors", forgetting to mention that the same color transform for an sRGB monitor should be done for any render if anything out of the sRGB range is involved. And if you choose kind-of-neon primary red and green colors as a material, they SHOULD look primary. And nothing prevents you from getting the same "more realistic" render without ACES.

    That theatrical viewing has nothing to do with the color accuracy classical ICC color management was designed for. In fact, it just means that it is NOT an sRGB ODT, not for viewing on an sRGB or Rec709 monitor, and just part of an inbuilt personal preference, kind of a "post effect" basically.

    The same post effect you could easily do with the usual render pipeline.

    I am not questioning the virtues of ACES. It's cool to keep archived some currently invisible "scene-referred" colors, reserved for future uber-monitors, and cool to have software automatically match a gazillion camera standards (not that we couldn't do it before through the usual ICC profiles).

    I just question the multiple assertions I see, like "look at those more realistic colors we couldn't have before".

    And to the original poster's question: why would it be necessary to do anything in Substance Painter if Maya converts sRGB textures to ACES space on its own? Well, besides loading that "theatrical" LUT to get the same look.