Any new tech artists need to be aware of for the coming Next-Gen?

Blond polycounter lvl 5
The PS4/XBO generation saw the rise of PBR shading/lighting; complex vertex/normal shaders letting artists animate foliage and fake depth on geo; more realistic lighting and cubemap setups; new reflection solutions (SSR); higher polycount limits on assets; 4K textures; complex particle effects; volumetric smoke/clouds; and so on.

Now, I'm pretty sure most of you don't know how all of that stuff works under the hood, since you focus mostly on your specialization.
What new techs coming for the upcoming generation do you think you'll need to learn to stay updated? 

PS: I'm an animator, so I really don't think much will change on our side. I might be wrong, though?

Replies

  • Obscura
    Obscura veteran polycounter
    RTX will simplify the work of lighting artists somewhat, and let them focus on the actually interesting part instead of adjusting lightmap resolutions, trying to fix shadow acne, and baking for hours.
  • poopipe
    poopipe quad damage
    HDR pipelines are going to fry everyone's brains.

    Good luck :D
  • Eric Chadwick
    Yeah, everyone knows 16k is the sweet spot.
  • poopipe
    poopipe quad damage
    marks makes some good points, I think.

    The big difference between this and the previous gen switch is that anything material/rendering based will be an iterative change rather than the paradigm shift encountered when we all embraced pbr. 

    I'm (somewhat) hopeful that when we go linear there'll be a lot less pissing and whining from artists than last time round and I'll finally be able to get photoshop removed from everyone's machine.. (we can dream right?) 
  • zachagreg
    zachagreg polycounter
    @poopipe Can you expand on that a little bit? Or perhaps point me to a resource so I can look into it? Do you mean linear as in linear colorspace? And why would that eliminate Photoshop for your artists? I'm just ignorant in that area is all.
  • marks
    marks polycounter lvl 11
    Games "went linear" like 10-15 years ago, as a flippant estimate (perhaps less). Shading and lighting have been done in linear space for a long time now.

    I don't see much advantage in storing the source textures in linear colorspace (which is what I think you're referring to), but that would definitely complicate pipelines and debugging for the average artist. 
  • poopipe
    poopipe quad damage
    It's standard behaviour (as far as I understand it) to store HDR stuff in linear space for print/VFX etc. I assume this is because it means you only need to be concerned about colour management at the point the image hits the screen, be that your authoring tool, image viewer or the rendered result.

    If it's done right, the artist will never know it's getting stored all funny, and your pipeline gets a lot simpler. It does mean tools (and artists) need to be colour-management aware, but I don't see that being anywhere near as bad as trying to explain specular reflectance to everyone.


  • radiancef0rge
    radiancef0rge Polycount Sponsor
    Honestly most people haven't even mastered the previous gen stuff (and in a lot of cases the previous previous gen stuff).
    This is such a good point.

    it's important to remember that hardware upgrades and rendering advancements are two different things. budgets may increase slightly - however there are already current-gen titles that are hitting diminishing returns on higher budgets. higher texel density and tri counts are generally not noticeable on current gen if done correctly.

    raytracing is pretty cool, I guess. SSDs are pretty cool, I guess.

    I am not hopeful that we as an industry switch to linear, that sounds like a nightmare. I can't even get everyone to calibrate their monitor (that takes like.. 5 minutes).

    edit: a word
  • poopipe
    poopipe quad damage
    Uncalibrated monitors aren't going to be any more of a problem in a linear workflow than they are on whatever system you're using now surely? (depends how shit the monitors are I guess) 

    I have to admit I'm a little taken aback by the concerns from yourself and marks on this - You're both clever and grown-up so I'm beginning to wonder if I'm under-estimating the confusion it'll cause for the average artist. 

  • marks
    marks polycounter lvl 11
    I mean, consider what you have to gain from moving your texture source to linear space...
    Currently, the lifetime of a regular texture during a rendered frame, as regards colorspaces, goes like this:

    • Texture loaded/sampled by shader.
    • Texture sample converted in the shader from sRGB to linear (if needed; some formats like DXT5n or BC5 are implicitly assumed to be linear-space, since they're the compression formats normal maps typically use, and normal maps are linear data)
    • < LITERALLY EVERYTHING ELSE IN THE ENTIRE RENDERING PIPELINE >
    • Final frame converted from Linear to the display colorspace (sRGB, Rec709, Rec2020 or whatever depending on the display monitor)
    • Final frame pushed to the display

    The only place you're gaining anything is by removing the colorspace conversion step at the very start (which in terms of performance is relatively cheap, and is hardware-accelerated (read: almost free) on a lot of hardware).
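For reference, the sRGB-to-linear step described above is the standard IEC 61966-2-1 piecewise transfer curve. A minimal sketch in Python (function names are my own, not any engine's API):

```python
def srgb_to_linear(s: float) -> float:
    """Decode an sRGB-encoded value in [0, 1] to linear light --
    the per-texel conversion done when a shader samples an sRGB texture."""
    if s <= 0.04045:
        return s / 12.92
    return ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l: float) -> float:
    """Encode a linear value in [0, 1] back to sRGB -- the conversion
    applied to the final frame for an sRGB display."""
    if l <= 0.0031308:
        return l * 12.92
    return 1.055 * l ** (1 / 2.4) - 0.055

# Mid-grey on screen is much darker in linear light:
mid = srgb_to_linear(0.5)                      # ~0.214
assert abs(linear_to_srgb(mid) - 0.5) < 1e-9   # round trip is lossless in float
```

At float precision the round trip costs nothing; the losses people worry about come from quantizing to 8-bit storage, not from the curve itself.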

    I can tell you first-hand that this is a nightmare to manage: I tried to move an art department to linear-space stored textures, and it caused constant problems for very little benefit.

    Regarding radiancef0rge's comment about monitor calibration - that was more intended to show how difficult it is to get an art department to consistently do anything, not specifically calibrate monitors.

     

  • poopipe
    poopipe quad damage
    marks said:


    Regarding radiancef0rge's comment about monitor calibration - that was more intended to show how difficult it is to get an art department to consistently do anything, not specifically calibrate monitors.
    Probably the most compelling argument anyone could make against it tbf.. 

    The main benefit as I see it is that you don't irreversibly crush the bottom end of your data - not for render time so much but as maps move their way through a pipeline.
    I'll grant that shitty quality compression all but eliminates that as a practical concern right now but I have an eye on the future. 
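To put a number on that bottom-end point (my own illustration, not from the thread): 8-bit sRGB deliberately spends more of its 256 codes on dark values than 8-bit linear does, which is why linear storage in practice gets paired with 16-bit or float formats rather than plain 8-bit:

```python
def srgb_to_linear(s: float) -> float:
    # Standard IEC 61966-2-1 sRGB decode.
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# How many of the 256 8-bit codes land below linear 0.01
# (deep-shadow detail) under each encoding?
srgb_dark = sum(srgb_to_linear(c / 255) < 0.01 for c in range(256))
linear_dark = sum(c / 255 < 0.01 for c in range(256))

print(srgb_dark, linear_dark)   # 26 vs 3: 8-bit linear crushes the shadows
```

Half-float HDR formats like EXR sidestep this entirely, which is the print/VFX setup poopipe describes earlier in the thread.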

    The best course of action to me seems to be to try it out with material artists, see how confused they get and make a judgement based on that. 

    Thanks for the insight,  it's caused me to stop and think a bit harder 

  • marks
    marks polycounter lvl 11
    Yeah, there isn't much of an argument that it isn't the better process; it clearly is (even if the current advantages are minimal). It's more about how manageable the drawbacks are.

    Two of the major obstacles you're going to run into are:
    • Tons of DCC applications do implicit colorspace handling, and if the app chooses to do the wrong thing it can be hard to correct
    • Once you've exported your textures, they will be more difficult to open and look at if you (read: artists) need to debug anything.
    If you're going to go that route, I'd strongly recommend making your export process as automated as possible, managed by your tech team with a careful eye (and preferably with regression tests).
  • TheGabmeister
    TheGabmeister greentooth
    I'm seeing a lot of Houdini stuff lately. Excited to see more procedural art.
  • poopipe
    poopipe quad damage
    marks said:
    Yeah, there isn't much of an argument that it isn't the better process; it clearly is (even if the current advantages are minimal). It's more about how manageable the drawbacks are.

    Two of the major obstacles you're going to run into are:
    • Tons of DCC applications do implicit colorspace handling, and if the app chooses to do the wrong thing it can be hard to correct
    • Once you've exported your textures, they will be more difficult to open and look at if you (read: artists) need to debug anything.
    If you're going to go that route, I'd strongly recommend making your export process as automated as possible, managed by your tech team with a careful eye (and preferably with regression tests).
    Sage advice.. :) 

    One thing we have very much in our favour is that the art teams are now used to not fiddling with textures - the only manual texture editing that happens (on the whole) is in the form of inputs to the material pipeline rather than to the outputs. 

    Regression testing is an interesting problem..  What to test for....
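As a starting point for "what to test for", here is purely a sketch of my own (all names hypothetical): re-export a known source, compare the result pixel-for-pixel against a golden copy, and note that a mismatch which disappears after an sRGB decode is a strong hint a conversion got applied twice somewhere:

```python
def srgb_to_linear(s: float) -> float:
    # Standard IEC 61966-2-1 sRGB decode.
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l: float) -> float:
    # Standard sRGB encode (inverse of the above).
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def check_export(golden, exported, tol=1 / 255):
    """Hypothetical regression check: compare an exported texture
    (flat list of floats in [0, 1]) against its golden reference,
    and try to diagnose the classic double-encode bug."""
    if all(abs(g - e) <= tol for g, e in zip(golden, exported)):
        return "ok"
    # If decoding the export makes it match, an sRGB encode was
    # applied once too often somewhere in the pipeline.
    if all(abs(g - srgb_to_linear(e)) <= tol for g, e in zip(golden, exported)):
        return "extra sRGB encode applied on export"
    return "unexplained mismatch"

# Tiny synthetic 'texture' of four linear grey values.
golden = [0.0, 0.05, 0.214, 1.0]
buggy = [linear_to_srgb(g) for g in golden]   # simulated double-encode
assert check_export(golden, golden) == "ok"
assert check_export(golden, buggy) == "extra sRGB encode applied on export"
```

In a real pipeline you'd also want per-channel mean/histogram drift thresholds and checks that linear-data maps (normals, roughness) never pass through an encode at all.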




