And soon they'll use 3D scanners for everything and only have two in-house modelers to touch things up.
You joke, and yet I already know a good number of studios actually using/testing things like 3D scanners/photo capture to turn photos into in-game assets.
Procedural texturing isn't going to be like, "click the rock button" and my rock texture is created. It's in a way similar to dDo. You take the normal map/height map you derive from ZBrush into Substance Designer, and using networks that you set up, you adjust parameters mixed with different noise maps and such to create things like your diffuse/specular/gloss/AO/cavity maps. You get results comparable to those you'd create by hand in a matter of minutes compared to the HOURS you would spend painting everything by hand.
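To make that concrete, here's a rough Python/NumPy sketch of what one of those networks boils down to conceptually. This is not Substance's actual internals, and every parameter name here is made up for illustration; the point is just that a height map and a noise map go in, derived maps come out, and a few tweakable values drive the whole thing:

```python
import numpy as np

def texture_network(height, noise, cavity_strength=1.5,
                    base_color=(0.5, 0.45, 0.4), grime=0.3):
    """height, noise: 2D float arrays in [0, 1]. Returns a dict of derived maps."""
    # Cavity: crevices sit where the surface curves inward (negative Laplacian).
    gy, gx = np.gradient(height)
    curvature = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    cavity = np.clip(0.5 - cavity_strength * curvature, 0.0, 1.0)

    # Crude AO stand-in: darker in low, recessed areas.
    ao = np.clip(0.5 * height + 0.5 * cavity, 0.0, 1.0)

    # Diffuse: flat base color, broken up by noise, grounded by the AO term.
    diffuse = np.ones((*height.shape, 3)) * base_color
    diffuse *= (1.0 - grime * noise)[..., None]
    diffuse *= (0.6 + 0.4 * ao)[..., None]

    # Gloss: recessed, grimy areas read as less shiny.
    gloss = np.clip(0.8 * cavity - 0.2 * noise, 0.0, 1.0)
    return {"diffuse": diffuse, "ao": ao, "cavity": cavity, "gloss": gloss}
```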
You can then feed any normal/height map into that network with those parameter settings and output similar results. So when you need to create 4 different tiling textures of the same rock type, instead of painting those 4 by hand, you set one up, get it looking the way you want, and then just run the other 3 through the same network.
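Under the same made-up assumptions as the sketch above, that reuse is just calling the network again with the same settings (the height maps here are random stand-ins for what would actually come out of ZBrush):

```python
rng = np.random.default_rng(7)
noise = rng.random((512, 512))
# Four height-map variants of the same rock type (random stand-ins here).
variants = {f"rock_{i}": rng.random((512, 512)) for i in range(4)}

# Dial in the look once, then run every variant through the same network.
rock_settings = dict(cavity_strength=2.0, base_color=(0.42, 0.40, 0.38), grime=0.45)
outputs = {name: texture_network(h, noise, **rock_settings)
           for name, h in variants.items()}
```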
And if it can be implemented into your engine it becomes even more powerful: all you would need to supply is the normal/height map of a texture, input the parameter values you want, and it creates the diffuse, specular, etc. textures at run time. This will help alleviate texture memory/streaming issues, which are going to be some of the biggest issues with next gen. Not to mention creation time; more textures/assets than ever before will be needed, which happens with every generation leap.
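As a loose illustration of that idea (not any real engine's API; load_grayscale is a made-up loader, and this reuses texture_network from the sketch above), building at load time could look something like this, so only the height map ships on disk per material:

```python
_material_cache = {}

def load_material(height_path, params):
    """Build diffuse/AO/gloss at load time from just a height map plus a few
    parameter values, so only one map per material ships on disk."""
    key = (height_path, tuple(sorted(params.items())))
    if key not in _material_cache:
        height = load_grayscale(height_path)                   # hypothetical loader
        noise = np.random.default_rng(0).random(height.shape)  # deterministic noise
        _material_cache[key] = texture_network(height, noise, **params)
    return _material_cache[key]
```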
This of course does not mean everything will be made this way, but a lot of things, such as tiling textures, will be. You'll still have your nice hand-painted, super important textures.
Exactly, you still need a skilled artist's eye to understand and judge what looks good. It's a tool to save an artist time; same with 3D scanning: you still need good topology, alterations, things added, things taken away, and scanning only works for stuff that actually exists.
I see all of this as tools that speed up 'content creation', and that's a good thing: it means more content, less development time, and more time for better art!
Totally agree with autocon. The only thing I really hate about Substance is that they're fighting Photoshop - they'll tell everyone how they want to replace it - whereas combining those awesome tools would make so much more sense (at least for the next couple of years, that is).
I guess "we want to replace it" is more a long term side goal than an actual current focus for us.
Let' say it's an other way of saying "we want to do better than Photoshop".
Proof is, the next update will bring full support for PSD files as inputs
That's fast, and that's good to hear. My company (and maybe others too?) has given you guys a nudge about that feature. There's just no way for us right now to get rid of PS, so having both tools play together nicely is really important.
Textures generated procedurally!? ffff, I'm switching to concept art.
Serious question though: is Substance Designer worth messing around with?
...And as you go forth today remember always your duty is clear: To build and maintain those robots.
procedural paneling, Hawken levels for everyone!!!
I couldn't have explained it better myself, thank you
You should be pleased with what comes next.
If you plan on modifying it, please make sure you read this post as there is a known bug in the February release.