
[Pipeline] Substance Designer cliffs and rocks to in-game assets.

ng.aniki polycounter lvl 13
Hi there, this is more of an open topic, and I hope to get some insight from people with experience in this area.
The subject is the pipeline you would use to translate a cliff made in Substance Designer into an optimized asset for a real game production environment.

Here is the thing: I keep seeing those impressive Substance Designer screenshots of procedural cliffs and rock surfaces:

[image gallery: several procedural cliff and rock material renders]

(The last 4 are all from Daniel Thiger - https://www.artstation.com/dete)

What those Substances have in common is that they rely on heavy tessellation to look good, so here comes the problem:
While they are great for offline rendering and still pictures, I can't imagine using them in a game engine, applied to simple geometry with a real-time tessellation shader, to render those assets:
*Just like Substance's tessellation, any real-time tessellation will be highly unoptimized and will need to be pushed to a really high density to produce a satisfying render.
*Since the tessellation has such a strong effect on the silhouette, it will only be able to fade out at quite a high distance, which makes it even worse.
A single one of those cliff assets would generate really dense geometry, and I would not consider it safe to have it duplicated many times in the scene.

So here is my question: has anyone figured out a smart pipeline to use similar substances in a game engine, without relying too heavily on tessellation for the silhouette of the asset (so tessellation can be faded out at a reasonable distance), and preferably in a way that avoids the need for unique, modified texture sets for each variation of the cliff asset?

Maybe by applying that heightmap vertex offset to the source geometry? But then you probably couldn't use the Substance normal map anymore, as it would be added on top of the new vertex normals of the geometry. Maybe then projecting the source mesh normals onto the modified geometry to cancel out the vertex-normal change? A process like this could be automated through Houdini; a rough sketch of the first step is below.
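
For illustration only, here is what that first step could look like, assuming the mesh is just NumPy arrays of positions, normals and UVs, and the heightmap is a greyscale image read with Pillow (all names and values are placeholders):

import numpy as np
from PIL import Image  # assumed: Pillow, only used to read the greyscale heightmap

def displace_along_normals(verts, normals, uvs, heightmap_path, amplitude=0.25):
    """Push each vertex out along its normal by the height sampled at its UV."""
    img = Image.open(heightmap_path).convert("L")
    height = np.asarray(img, dtype=np.float32) / 255.0
    h, w = height.shape
    # Nearest-texel lookup with wrap-around so tiling UVs keep working.
    u = (np.mod(uvs[:, 0], 1.0) * (w - 1)).astype(int)
    v = (np.mod(uvs[:, 1], 1.0) * (h - 1)).astype(int)
    return verts + normals * height[v, u][:, None] * amplitude

The open question stays the same: once the geometry itself carries the big shapes, the displaced mesh has new vertex normals, so the tiling normal map cannot be used as-is without some kind of normal transfer.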

What are your thoughts? Have you experimented with solutions to this problem?

Replies

  • Gannon interpolator
    Substance > displacement in Zbrush > retopo > bake

    Seems like it'd be similar to the workflow some hand painted artists use when they paint first and then model second.
  • poopipe grand marshal polycounter
    I'd be inclined to either write some code or use Houdini to generate a decimated mesh based on the displacement maps. The ZBrush route is fine if you're making modular assets, but if you want genuinely unique surfaces based on blended materials (which you do) then you'll need to do something a bit cleverer. We had some tools at my last studio that did this, but for the most part we couldn't afford the geometry cost.

    Realistically, for most games on the current generation you simply can't do it, and these fancy portfolio shots bear little to fuck all resemblance to in-game textures.
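
    (Purely as an illustration of that route, and not the tools mentioned above: a minimal sketch assuming Open3D for the quadric decimation and a displaced grid already available as vertex/triangle NumPy arrays.)

    import numpy as np
    import open3d as o3d  # assumed dependency for quadric decimation

    def decimate_displaced_grid(verts, tris, target_tris=5000):
        """Reduce a densely displaced grid to a game-friendly triangle count."""
        mesh = o3d.geometry.TriangleMesh(
            o3d.utility.Vector3dVector(verts.astype(np.float64)),
            o3d.utility.Vector3iVector(tris.astype(np.int32)),
        )
        low = mesh.simplify_quadric_decimation(target_number_of_triangles=target_tris)
        low.compute_vertex_normals()
        return np.asarray(low.vertices), np.asarray(low.triangles)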


  • ng.aniki polycounter lvl 13
    So, I got some replies on Twitter; I will share them here, so something more permanent stays:

    Joshua Lynch:
    Thanks! I will reply with a more lengthy response on @polycount so it's more permanent. Typically the rock props require less displacement than the tiling world textures do, and use more surface-level detail. Tiling terrain / wall textures rely on more depth, and even if displacement isn't used the normal map will have all of the info and carry the shadow where the displacement would break the silhouette. You can make low-poly geo in ZBrush from the height, and that does an amazing job as well! I hope this helps and look forward to more discussion :-)


    DThiger:
    Offline-displaced and reduced cliff and rock meshes using tiling textures are what I use in production. Saves texture memory since it doesn't rely on unique textures for each asset. Also helps stuff look consistent.


    Gannon said:
    Substance > displacement in Zbrush > retopo > bake

    Seems like it'd be similar to the workflow some hand painted artists use when they paint first and then model second.

    Yes, but if you do that, you don't have generic/tiling textures anymore, which is one of the biggest strengths of substances. With that approach, every cliff asset ends up with its own texture set, and if you have 5 of them, that's 5x the whole texture set in memory. A good pipeline for using them would mean: zero baking, no new textures, just the tiling Substance-generated textures, but applied to many sets of geometry.

    poopipe said:
    I'd be inclined to either write some code or use Houdini to generate a decimated mesh based on the displacement maps. The ZBrush route is fine if you're making modular assets, but if you want genuinely unique surfaces based on blended materials (which you do) then you'll need to do something a bit cleverer. We had some tools at my last studio that did this, but for the most part we couldn't afford the geometry cost.

    Realistically, for most games on the current generation you simply can't do it, and these fancy portfolio shots bear little to fuck all resemblance to in-game textures.
    Yes, I agree...
    I was thinking of a similar, Houdini-based solution. I would love to give it a try.
    If we were to make a tool like that, which decimates the mesh based on the displacement maps, generates LODs and so on, the normal map would no longer be usable as-is, since it would be added on top of the decimated mesh and its new vertex normals. So my guess is that, after the decimation and the displacement are applied, we would need to transfer the vertex normals from the source geometry onto the generated geometry.

    I am wondering if this would be sufficient. What do you think?

    If it is, a tool that does that, and that also supports blending 3-4 different textures, could be really great.
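
    (To make that normal-transfer step concrete, a minimal sketch assuming SciPy and a simple nearest-vertex lookup; a real tool would more likely project onto the source triangles.)

    import numpy as np
    from scipy.spatial import cKDTree  # assumed dependency

    def transfer_source_normals(src_verts, src_normals, dst_verts):
        """Give each decimated/displaced vertex the normal of the closest source
        vertex, so the tiling normal map is not applied on top of doubled-up detail."""
        tree = cKDTree(src_verts)
        _, nearest = tree.query(dst_verts)   # index of the closest source vertex
        n = src_normals[nearest]
        return n / np.linalg.norm(n, axis=1, keepdims=True)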
  • ng.aniki polycounter lvl 13
    Or, another idea would be to split Substance Designer's generated heightmap into two different heightmaps, for example with a simple blur-based split (sketched below):
    -One with the high-frequency detail, used to generate the normal map (and possibly a real-time displacement map used with tessellation).
    -One with the low frequency, i.e. the large shapes of the rock/cliff, used only for the offline displacement.
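
    (A toy version of that split, assuming the heightmap is already a float NumPy array and using SciPy's Gaussian blur as a stand-in for whatever low-pass filter you would actually use.)

    import numpy as np
    from scipy.ndimage import gaussian_filter  # assumed stand-in for a real low-pass filter

    def split_height_frequencies(height, sigma=16.0):
        """Split a tiling heightmap into large shapes and fine surface detail."""
        low = gaussian_filter(height, sigma=sigma, mode="wrap")  # 'wrap' keeps it tileable
        high = height - low
        return low, high

    # low  -> offline displacement of the actual cliff geometry
    # high -> normal map (and, if used at all, short-range real-time tessellation)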
  • Gannon interpolator
    Aaaah, I understand now. You want to retain the ability to adjust the texture as well. That's a different beast entirely. This will be super cool if a process is figured out.
  • poopipe grand marshal polycounter

    ng.aniki said:
    Yes, I agree...
    I was thinking of a similar, Houdini-based solution. I would love to give it a try.
    If we were to make a tool like that, which decimates the mesh based on the displacement maps, generates LODs and so on, the normal map would no longer be usable as-is, since it would be added on top of the decimated mesh and its new vertex normals. So my guess is that, after the decimation and the displacement are applied, we would need to transfer the vertex normals from the source geometry onto the generated geometry.

    I am wondering if this would be sufficient. What do you think?

    If it is, a tool that does that, and that also supports blending 3-4 different textures, could be really great.


    You don't need to worry about normals etc. 

    What I mean is to displace geometry using a heightmap to form the big shapes and then apply what is effectively an unrelated tiling material to it.

    I think this is also what DThiger is saying he does.
  • motionblur polycounter lvl 11
    Sorry to revive an old thread, but I've been thinking about all of this lately, as I'm getting back into Substance Designer myself.

    My initial tests a few weeks ago were similar: using a tessellated mesh in Blender, applying a displacement modifier, then a poly-reduction modifier, then making a copy of the heavily reduced mesh (because static geo doesn't always need clean topology when you are looking for a fast and mostly automated way). Apply all modifiers -> tile it two times for a wall segment and snap the vertices to the other pieces so it tiles -> bake. Tileability could also be somewhat automated with a boolean mesh intersection at a tiled length, maybe...
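
    (For reference, a rough bpy sketch of that modifier stack; the image path, subdivision level, strength and ratio are placeholders, and it assumes a recent Blender and a UV-unwrapped plane.)

    import bpy

    # Dense, UV-unwrapped plane standing in for the wall/cliff segment.
    bpy.ops.mesh.primitive_plane_add(size=4)
    obj = bpy.context.active_object
    subdiv = obj.modifiers.new("Subdiv", 'SUBSURF')
    subdiv.subdivision_type = 'SIMPLE'   # just densify, no smoothing
    subdiv.levels = subdiv.render_levels = 6

    # Displace along the heightmap exported from Substance Designer.
    tex = bpy.data.textures.new("Height", type='IMAGE')
    tex.image = bpy.data.images.load("//height.png")   # placeholder path
    disp = obj.modifiers.new("Displace", 'DISPLACE')
    disp.texture = tex
    disp.texture_coords = 'UV'
    disp.strength = 0.3

    # Heavy poly reduction; static geo does not need clean topology here.
    dec = obj.modifiers.new("Decimate", 'DECIMATE')
    dec.ratio = 0.05

    for name in ("Subdiv", "Displace", "Decimate"):
        bpy.ops.object.modifier_apply(modifier=name)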

    I was wondering, though: is this process of Substance height maps -> retopology or poly reduction -> rebaking details actually used for tiling geometry like walls as well, or only on mostly chaotic surfaces like cliffs/rocks, etc.?
    Are there any real-world demos, talks or videos about how people go about doing this?
    This is usually the part where there aren't many good tutorials or best-practice videos to be found, and I am really wondering how people do this in actual game productions at different studios. Everybody seems to have their own kind of workflow here.
    I'd really love to hear some more takes on this subject, as I'm still trying to figure out whether all these super complex Substance shader graphs are actually usable or more of a portfolio / proof-of-concept flex. :D



  • poopipe grand marshal polycounter
    To your last question:
    As a rule of thumb, the heavily displaced material-ball stuff you see all over ArtStation is utterly useless in production (it also won't get anyone a job unless it does something super clever).

    and the rest:

    I've used texture-derived geometry for all sorts of things, as have many people I've encountered in many places. It's almost always cheaper to render than displacement/POM, so it's quite a common thing to need to do.
    You can dig into it deeper (deal with funny-shaped meshes, automate LOD chain creation, etc.) but fundamentally it all boils down to what you've already done in Blender.

    The problem I've found with man-made surfaces is that you tend to need a disproportionate amount of geometry to feed your decimation algorithm in order to get a good shape at the end, so it can get pretty clunky to work with.
    To the best of my knowledge this isn't a problem that's really been tackled academically. I've got a couple of ideas on how to generate a semi-efficient mesh in one pass but haven't dug into actually doing it yet.
  • gnoop polycounter
    In my experience, rocky textures like those in the examples repeat like hell, and nothing helps to cover/hide it. For some reason SD is not very comfortable for making anything bigger than 4x4 meters.
    And they are always missing something: proper dirt/twig/moss accumulation, which both SD and Alchemist often render in a bit of a naive style, and some distinctiveness in the macro structure.

    Imo it's totally misleading to show materials on those balls and not on actual rock walls or something.

    I try to use photogrammetry scans + some sculpted fragments + smaller details scattered with physics, baked/rendered down, to make it look more natural.
    When you re-shuffle those things manually, even in 2D, it's kind of easier to reach what you need than with countless re-seed tries in SD. It's less visually repeating, at least. Too bad there is no software where you could do it comfortably.

    ps. There was actually such a piece of software: MapZone, a parent of SD, where you could manually shuffle procedurally placed things, or rather delete some too-obviously-repeating pieces, but Allegorithmic decided to kill the feature.




  • motionblur polycounter lvl 11
    Huh, okay. While on one hand this confirms my suspicion to quite some degree, on the other hand I'd really hoped there was a way these elaborate displacement materials could actually be made usable. So for the time being, I'll rather keep it to flat-ish surfaces and sculpt/model any larger plasticity the traditional way, I assume...

    Wasn't MapZone the original version Allegorithmic created before Substance Designer?
  • gnoop polycounter
    They could still be usable. I sometimes start in SD, then bring color and displacement into ZBrush, then cut it into pieces/sub-tools and re-compose them manually, with slightly varying scale, into bigger-coverage tileable textures (from several SD seeds).

    For some reason (a sense of balance, probably) this allows making things that repeat less obviously over a bigger coverage.

    Yeah, MapZone was an ancestor of SD. It wasn't especially comfortable either.




  • poopipe grand marshal polycounter
    That's the way to do it, not that I generally bother with ZBrush.

    Layer the information at different frequencies (tile rates, or whatever you want to call it) and you break up the repeating patterns.
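
    (A toy, image-space illustration of that layering, assuming a square tiling texture as a NumPy array; in a game this would just be two texture fetches with different UV tiling in the shader.)

    import numpy as np

    def sample_tiled(tex, uv, rate):
        """Nearest-texel sample of a tiling texture at a given tile rate (wrap-around)."""
        h, w = tex.shape[:2]
        x = (uv[..., 0] * rate * w).astype(int) % w
        y = (uv[..., 1] * rate * h).astype(int) % h
        return tex[y, x]

    def layered_sample(tex, uv, rates=(1.0, 3.7), weights=(0.6, 0.4)):
        """Blend the same texture at two tile rates so no single repeat distance dominates."""
        return sum(wgt * sample_tiled(tex, uv, r) for wgt, r in zip(weights, rates))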

  • XilenceX polycounter lvl 10
    We actually use Substance Painter's ability to export high-poly tessellated meshes that have been deformed by the displacement map.
    So first you create "base" materials in SD. Then you apply them in SP to your model and randomize & blend them as needed.
    Finally, you export the displaced mesh from SP. For now, we use the UE4 LOD tools to get the resulting mesh to an acceptable polycount.
    The hope is that UE5 + Nanite will just automatically scale the resulting mesh to the required polycount.

    IMHO displacement-based modelling is the future for creating UE5-ready models efficiently.
    So I would be very interested in continuing this discussion!
    I would definitely also be open to developing some tools together for such a workflow, like some people mentioned earlier in this thread.

    Nanite's hunger for high-res geometry has to be fed somehow.
    And manually sculpting 100 rock variations is just too slow and tedious at that detail level.
    (We still sculpt the rough base shapes of the rocks in ZBrush, but that can be done in a few minutes per rock.)