The PS4/XBO generation saw the rise of PBR shading/lighting; complex vertex/normal shaders letting artists animate foliage and fake depth on geo; even more realistic lighting/cubemap setups; new reflection solutions (SSR); higher polycount limits on assets; 4K textures; complex particle effects; volumetric smoke/clouds; and so on.
Now, I'm pretty sure most of you probably don't know how all of that stuff works, focusing mostly on your specialization.
What new techs coming for the upcoming generation do you think you'll need to learn to stay updated?
PS: I'm an animator, so I really don't think much will change on our side? I might be wrong?
Replies
Good luck
All of the shiny "new tech".. expect it to be much less prevalent than people claim it will be; in 5 years' time the significant challenges we face as studios will, on the whole, be largely the same as they are now.
Expect the RTX trend to be largely a high-end PC phenomenon, consoles likely won't have the DXR hardware acceleration.
Lighting and post is going to become lots more complicated (though it's getting there already if you're doing it properly).
Expect to (still) not be using 4k textures on almost anything in a real production of any scale.
The big difference between this and the previous gen switch is that anything material/rendering based will be an iterative change rather than the paradigm shift encountered when we all embraced pbr.
I'm (somewhat) hopeful that when we go linear there'll be a lot less pissing and whining from artists than last time round and I'll finally be able to get photoshop removed from everyone's machine.. (we can dream right?)
I don't see much advantage in storing the source textures in linear colorspace (which is what I think you're referring to), but that would definitely complicate pipelines and debugging for the average artist.
If it's done right, the artist will never know it's getting stored all funny and your pipeline gets a lot simpler - it does mean tools (and artists) need to be colour-management aware, but I don't see that being anywhere near as bad as trying to explain specular reflectance to everyone.
it's important to remember that hardware upgrades and rendering advancements are two different things. budgets may increase slightly - however there are already current-gen titles that are hitting diminishing returns on higher budgets. higher texel density and tri counts are generally not noticeable if done correctly on current gen.
raytracing is pretty cool, i guess. ssds are pretty cool, i guess.
i am not hopeful that we as an industry switch to linear, that sounds like a nightmare. i can't even get everyone to calibrate their monitor (that takes like.. 5 minutes).
edit: a word
I have to admit I'm a little taken aback by the concerns from yourself and marks on this - You're both clever and grown-up so I'm beginning to wonder if I'm under-estimating the confusion it'll cause for the average artist.
Currently, the lifetime of a regular texture during a rendered frame, as regards colorspaces, goes like this: stored on disk sRGB-encoded, decoded to linear at sample time (typically by the hardware's sRGB fetch path), shaded in linear, then encoded back out via the display transform at the end of the frame.
The only place you're gaining anything is by removing the colorspace conversion step at the very start (which in terms of performance is relatively cheap, and is hardware-accelerated (read: almost free) on a lot of hardware).
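For concreteness, the conversion step being discussed is the standard sRGB transfer function. A minimal sketch in Python (scalar values in [0, 1]; real pipelines do this per channel, usually in hardware on texture fetch):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear value in [0, 1] back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

Note the piecewise form: the small linear segment near black exists precisely to keep the dark end well-behaved, which is also where the precision argument below lives.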
I can tell you first-hand what the downsides are: I tried to move an art department to linear-space stored textures, and it was a nightmare to manage for little benefit.
Regarding radiancef0rge's comment about monitor calibration - that was more intending to show how difficult it is to get an art department to consistently do anything, not specifically calibrate monitors.
The main benefit as I see it is that you don't irreversibly crush the bottom end of your data - not for render time so much but as maps move their way through a pipeline.
I'll grant that shitty quality compression all but eliminates that as a practical concern right now but I have an eye on the future.
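The "crushed bottom end" is easy to demonstrate numerically: quantising dark linear values straight to 8 bits leaves far fewer distinguishable shades than quantising a gamma-encoded version of the same data. A toy sketch (the 1/2.2 power is a stand-in for the full sRGB curve, and the 5% cutoff is an arbitrary choice of mine):

```python
def distinct_codes(encode, n_shades=256, max_linear=0.05):
    """Count how many distinct 8-bit codes the darkest 5% of the
    linear range survives as, for a given encoding function."""
    codes = set()
    for i in range(n_shades):
        linear = max_linear * i / (n_shades - 1)
        codes.add(round(encode(linear) * 255))
    return len(codes)

linear_codes = distinct_codes(lambda x: x)             # store linear directly
gamma_codes = distinct_codes(lambda x: x ** (1 / 2.2)) # gamma-style encode
print(linear_codes, gamma_codes)
```

Storing linear in 8 bits collapses that whole dark region onto a handful of codes, while the gamma encode keeps several times as many - which is exactly the irreversible loss being described once maps get re-processed downstream.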
The best course of action to me seems to be to try it out with material artists, see how confused they get and make a judgement based on that.
Thanks for the insight, it's caused me to stop and think a bit harder
Two of the major obstacles you're going to run into are:
- Tons of DCC applications do implicit colorspace handling, and if the app chooses to do the wrong thing it can be hard to correct
- Once you've exported your textures, they will be more difficult to open and look at if you (read: artists) need to debug anything.
If you're going down that route, I'd strongly recommend making your export process as automated as possible, managed by your tech team with a careful eye (and preferably regression tests).

One thing we have very much in our favour is that the art teams are now used to not fiddling with textures - the only manual texture editing that happens (on the whole) is in the form of inputs to the material pipeline rather than to the outputs.
Regression testing is an interesting problem.. What to test for....
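One answer to "what to test for" is pixel-level closeness to an approved golden export, with a small tolerance to absorb compression/codec jitter. A toy sketch (helper names are mine; images are nested lists of RGB tuples rather than a real image library):

```python
def max_channel_diff(img_a, img_b):
    """Max absolute per-channel difference between two equally-sized
    images, given as nested lists of (r, g, b) tuples in 0..255."""
    return max(abs(a - b)
               for row_a, row_b in zip(img_a, img_b)
               for px_a, px_b in zip(row_a, row_b)
               for a, b in zip(px_a, px_b))

def texture_regression_ok(exported, golden, tolerance=2):
    """Pass if the fresh export stays within `tolerance` 8-bit steps
    of the approved golden image everywhere."""
    return max_channel_diff(exported, golden) <= tolerance
```

Re-export a known set of source textures on every pipeline change and diff against the goldens; a tolerance of a couple of code values catches real colorspace mistakes (which shift everything) while ignoring encoder noise.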
Or a totally crazy ACES workflow?