
Workflow question: generating height/normal before Substance

Hey folks,

I’m doing some R&D around Substance workflows and automated texture map generation,
mostly as a side experiment.

In some cases (quick scans, old assets, noisy photos) I found myself missing a fast
way to extract usable height/normal information before bringing things into
Substance Designer/Painter.

So I built a small helper tool that generates height/normal/roughness from images,
mostly to speed up early iteration and blockout stages, not to replace Substance
in any way.

I’m curious:
– do you usually solve this directly inside Substance?
– or do you preprocess textures before bringing them into SD/SP?

Would love to hear how others approach this.

Replies

  • gnoop
    Substance Designer and Painter don't have anything to infer height from a regular image. Sampler does, but it does it so badly it's almost an advertising gimmick IMO: very blurry, unspecific height, often totally wrong. The only thing Sampler is actually good at is de-lighting ambient shadows. What would really be helpful is an image segmentation tool based on detected edges, to feed into the Flood Fill node in Designer. Or maybe some cleaning pass for a typical high-pass, to get rid of random dots and noise without killing continuous features like cracks.
    But I have never seen such segmentation work. It never follows the logical visible features, so it ends up being manual work. Something that would turn those contours and edges into enclosed areas that work with the Flood Fill node, so you can shift noises randomly per region in Designer.
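    Roughly the kind of high-pass cleanup I mean, as a quick OpenCV sketch (the threshold and the size/elongation cutoffs are made-up numbers you'd tune per texture):

    import cv2
    import numpy as np

    def keep_cracks(highpass, thresh=30, min_area=20, min_elong=3.0):
        # highpass: uint8 high-pass image, mid-grey = 128, dark lines = detail
        _, mask = cv2.threshold(highpass, 128 - thresh, 255, cv2.THRESH_BINARY_INV)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        out = np.zeros_like(mask)
        for i in range(1, n):  # label 0 is the background
            area = stats[i, cv2.CC_STAT_AREA]
            w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
            elong = max(w, h) / max(1, min(w, h))
            # keep big or long-and-thin components (likely crack segments),
            # drop compact little dots
            if area >= min_area or elong >= min_elong:
                out[labels == i] = 255
        return out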


  • RKey4
    Yeah, that makes sense. The biggest issue seems to be that most height extraction approaches are purely contrast-based, so they don't really understand structural boundaries, just tonal differences.
    Edge detection alone isn't enough either, since it rarely produces clean enclosed regions that would work nicely with Flood Fill in Designer.
    I've been experimenting a bit with combining edge-aware filtering and segmentation approaches to preserve continuous features like cracks while suppressing random noise (rough sketch at the end of this post), but it's definitely not trivial. If segmentation actually followed perceptual contours instead of just gradients, it would make procedural workflows way more powerful.
    Out of curiosity: have you tried any custom graph setups in Designer to approximate that?
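    To be concrete about the "edge-aware filtering" part: even a plain bilateral filter gets you halfway there, since it smooths within regions but stalls at strong edges, so dots average away while crack contrast survives. Just a sketch; filename and parameters are illustrative only:

    import cv2

    img = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    # d = neighborhood diameter; sigmaColor decides how hard edges stop the blur
    filtered = cv2.bilateralFilter(img, d=9, sigmaColor=40, sigmaSpace=9)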
  • gnoop
    RKey4 said:
    have you tried any custom graph setups in Designer to approximate that?


    Sorta, but it doesn't work well. I tried to go the high-pass way: blur a bit, threshold the dark lines, and then add white to the initially high-passed picture (on the right in my example) to isolate the cracks only and clean the small-dot noise, but it kills half of the tiny crack lines too. Then I tried to expand/bevel what was left of those cracks, trying to generate a sort of distance field around them, then blur and threshold again to connect closely spaced cracks. But that changes the resulting geometry, turning it into bubble-like contours that don't follow the original image any more. Maybe I just haven't given it enough thought and there is still a path, but I gave up. Maybe it should be more like the Median filter in Photoshop rather than a simple blur, but I'm not sure.
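    For what it's worth, here is roughly that pipeline in OpenCV terms, with the median idea in place of the blur, and a morphological close instead of the blur-and-rethreshold step that was rounding everything into bubbles (filename, threshold, and kernel sizes are placeholder guesses):

    import cv2

    img = cv2.imread("highpass.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

    # median kills salt-and-pepper dots but keeps hard line edges
    smooth = cv2.medianBlur(img, 5)
    _, cracks = cv2.threshold(smooth, 100, 255, cv2.THRESH_BINARY_INV)

    # bridge small gaps between crack segments; unlike blur+threshold this
    # only grows/shrinks by the kernel radius, so shapes stay closer to source
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    connected = cv2.morphologyEx(cracks, cv2.MORPH_CLOSE, kernel)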

    Ironically, decades ago I used the "Fluid Mask" plugin for Photoshop. It showed pretty nice image segmentation in its preview, very much edge-following, but it never let you export that segmentation as random ID color patches. It's long dead and I have never seen anything close since. The G'MIC plugin does some segmentation, but it never follows the edges nicely either.

  • RKey4
    Yeah, that's exactly the trap I kept running into as well. High-pass + threshold always ends up being contrast-driven rather than structure-driven. As soon as you start blurring or expanding to reconnect gaps, you're no longer preserving the original topology, you're essentially reshaping it. That's where the "bubbles" start to appear.
    Distance-field style expansion is great for continuity, but it completely ignores semantic structure. It treats everything as geometry, not as perceptual contours.
    Fluid Mask was interesting because it seemed to operate more on region segmentation rather than raw gradients, almost like it was grouping pixels into coherent areas before outlining them.
    I think the core issue is that most approaches are gradient-based, while what we actually need for Flood Fill workflows is region-aware segmentation that respects perceptual boundaries. Have you ever tried combining edge detection with some form of region growing instead of thresholding?
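    To show what I mean by region growing: marker-based watershed grows seeds outward until they collide, so every pixel ends up inside some closed region, which is exactly the property Flood Fill needs. A minimal scikit-image sketch, with made-up filename and seed spacing:

    import numpy as np
    from skimage import io, filters, feature, segmentation

    img = io.imread("albedo.png", as_gray=True)

    # edge strength map: regions stop growing at strong gradients
    edges = filters.sobel(img)

    # seeds: local minima of edge strength, one marker per smooth area
    coords = feature.peak_local_max(-edges, min_distance=20)
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

    # every pixel gets a region id; color the ids randomly for an ID map
    labels = segmentation.watershed(edges, markers)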
  • gnoop
    Nope, never found a way to do it reliably. I once tried to take a screenshot from Fluid Mask and turn it into closed segments in Designer. But the plugin doesn't even run anymore.

  • RKey4
    That seems to be the common conclusion — it works “almost”, but never reliably enough for production use.
    Fluid Mask definitely felt ahead of its time in terms of region segmentation. It's interesting that even today most approaches still operate mostly on gradients and contrast rather than coherent region grouping.
    I've been experimenting a bit with region-based preprocessing ideas lately, trying to keep structural continuity without reshaping the geometry like blur/distance-field methods do. It's surprisingly tricky to avoid either noise or topology distortion.
    Feels like this is still an unsolved niche in texture workflows.
  • gnoop
    Fiji, a piece of software ChatGPT suggested to me, has a few segmentation tools. One seems to be an AI training tool where you draw edge lines where a segment should stop. I haven't given it enough time though.
  • RKey4
    Yeah, Fiji is interesting. It feels more research-oriented compared to most production tools.
    That interactive training approach sounds closer to what's actually needed, since it's more about guiding region boundaries instead of relying purely on contrast or gradients. The tricky part is always making it consistent and fast enough to be usable in a real workflow.
    One thing I've been wondering about is whether something semi-automatic could work: you get a rough segmentation first and then only guide it in the problematic areas, instead of drawing everything manually (rough sketch below). If that kind of region-aware segmentation could be made stable, it would probably change how a lot of early texture work is done before bringing things into Substance.
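    Something like this, to be concrete: scikit-image's random walker takes user scribbles (an integer label image where 0 means "unlabeled") and fills in everything else, so you only re-scribble where the boundaries came out wrong. Filenames and beta are assumptions:

    import numpy as np
    from skimage import io, segmentation

    img = io.imread("albedo.png", as_gray=True).astype(float)
    seeds = io.imread("scribbles.png")  # int labels, 0 = unlabeled pixels

    # mode="bf" avoids the optional pyamg dependency; fine for small images
    labels = segmentation.random_walker(img, seeds, beta=130, mode="bf")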
    Did you feel like Fiji was getting close, or was it still too experimental?
  • gnoop
    Nope, I tried but gave up on Fiji.
  • RKey4
    Yeah, same here. I tried a few research tools as well, but most of them feel too experimental or slow to fit into a real production workflow.
    It seems like a lot of these segmentation approaches work in controlled cases, but once you throw real-world textures at them (noise, compression, lighting, material variation) they break down pretty quickly.
    What makes it tricky is that texture work often needs something fast and predictable, even if it's not perfect. Fully manual segmentation is too slow, but fully automatic rarely respects the structure.
    Lately I've been thinking more in terms of hybrid approaches: something that gives a rough region structure first, and then lets you guide or refine it only where needed. That feels more realistic for day-to-day work.
    Feels like there's still a gap between research segmentation tools and production texture workflows.
  • gnoop
    I think the whole texture/material side of CG is slowly dying. You can see it right here in this part of Polycount, in the lack of new posts, and in modern games in general. They have the same plastic look, "PBR" much, ha-ha, with generic 2x2m textures deprived of any individual traits, in perfectly same-looking games. Same procedural edge wear everywhere, same procedural details. Creating unique materials is too time-consuming and expensive, so it's getting weeded out of production pipelines. So the software has stopped innovating. Krita has had an assisted manual tool for years, but it's hell to work with, annoying and slow.

  • RKey4
    Yeah, I can see where that feeling comes from. A lot of pipelines today optimize for scalability and consistency rather than uniqueness. Procedural tools made production much more predictable, but at the same time they tend to converge toward similar visual patterns.
    At the same time though, I’m not sure the material side is really dying. It feels more like it’s shifting. Instead of handcrafted uniqueness everywhere, studios seem to focus their effort only on hero assets, while the rest is handled by procedural or scanned data to save time.
    Photogrammetry and scanning changed things a lot as well. A lot of variation now comes from real-world data instead of manual work. The challenge is that the tools for controlling and editing that data are still not very mature.
    So maybe the next step isn’t more procedural wear or generators, but better ways to guide and refine real-world textures faster, without fully manual work. Something in between automation and artistic control.
    Curious how you see that balance evolving in the next few years?
  • gnoop
    I sorta agree about manual craftsmanship, Designer included. Every time I do a material in Designer I know they are paying me for the gaps and mistakes of scanning contractors. And often I just take my own camera out at weekends. I feel like a camera replacement. It's so weird: we are discussing extracting material ID regions from plain photos without true height info, which is not hard to get from a proper photo series at all, and yet the capture is always wrong. Wrong light, too off-axis flash, mushy tele lens with soft corners, etc., no matter how many instructions I have written over the years. Now they do huge pixel-count shots with an awful kit lens that take forever to calculate in something like RealityCapture.
        
    And I agree there are absolutely zero nice tools for doing a simple height-aware combining collage outside of expensive 2D compositing software. I have wasted so much time trying to persuade ChatGPT to write me a few height-channel-aware mask and transform-syncing scripts for Photoshop, or to make something usable in Designer, which hates hi-res bitmaps so much.
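    The per-pixel rule I keep asking for is trivial, which is what makes the tooling gap so weird. A minimal numpy sketch of a height-aware combine (arrays and the 0..1 height range are assumptions):

    import numpy as np

    def height_blend(albedo_a, height_a, albedo_b, height_b, softness=0.05):
        # whichever layer sticks up higher wins, with a soft transition band
        t = np.clip((height_b - height_a) / max(softness, 1e-6) * 0.5 + 0.5,
                    0.0, 1.0)
        albedo = albedo_a * (1.0 - t[..., None]) + albedo_b * t[..., None]
        height = np.maximum(height_a, height_b)
        return albedo, height, t  # t doubles as a mask for other channels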

    A couple of years ago I tried to persuade the Affinity devs to add a 2.5D mode, ZBrush style but with non-destructive "smart objects" and a height channel, but nobody listened. It looks so close, and so far ahead of Photoshop's ancient architecture, yet it's hardly usable for this.
  • RKey4
    Yeah, that "camera replacement" feeling is very real. A lot of the work ends up being about compensating for capture problems rather than creating anything new.
    In theory scanning should reduce manual work, but in practice it just shifts the problem. Instead of sculpting or painting, we spend time fixing lighting, alignment, and missing structure. And the gap between ideal capture and production reality is still huge.
    The weird part is that the industry invested heavily in acquisition, but much less in tools for interpreting and controlling the data afterwards. We have amazing capture and reconstruction, but not enough ways to interact with the result in a height-aware or structure-aware way.
    That 2.5D space between image editing and full 3D still feels very underdeveloped. ZBrush handles height and form well, but it's not really designed for non-destructive material workflows. Photoshop and similar tools are still mostly 2D, and Designer is powerful but not always comfortable when you need to manipulate high-resolution source data directly.
    Feels like there's a missing layer in the pipeline: something that bridges raw scans and procedural workflows in a more intelligent and controllable way.
    Do you think studios will eventually push for that, or is production still too focused on speed and consistency for this to become a priority?
  • gnoop
    RKey4 said:
    Do you think studios will eventually push for that, or is production still too focused on speed and consistency for this to become a priority?
    Doesn't look that way. Two decades ago I had hopes that something convenient would appear soon, and here we are. In all honesty, content creation software doesn't look like it's evolving much in general. It evolved and then froze, with a few cosmetic changes each year and rarely anything new.

    It's stuck in the old approaches and punishes heavily even the slightest detour from the standard ways: go for non-square textures and you are in a danger zone already. And it rewards sameness, tries to automate those standard approaches, to make human input irrelevant.

    Any 2.5D, height-aware, "easy" image editor/compositor could probably be just a slight modification of Photoshop. After all, it needs just a couple of new blending modes and a sort of live frequency separation widget with a bit more options than plain Gaussian blur, like Affinity has; I love their recent Median option. Yet it would be totally off course from the direction content creation software is going, IMO. I have more hope for Krita in that regard, or anything open source.
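    The frequency separation part is, at its core, two lines of image math; the widget would just make the split live. A sketch with both low-pass options I mean (single-channel float image assumed, so the high band can hold negative values):

    from scipy import ndimage

    def freq_split(img, radius=8, use_median=False):
        low = (ndimage.median_filter(img, size=2 * radius + 1)
               if use_median else
               ndimage.gaussian_filter(img, sigma=radius))
        high = img - low  # signed detail layer; low + high rebuilds img exactly
        return low, high

    # edit `low` (big shapes, lighting) and `high` (texture) separately, then
    # recombine with a plain add: that's the "couple of new blending modes"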

       
  • RKey4
    I get that feeling too. A lot of mainstream tools seem to optimize for stability and compatibility rather than radical change. Once a workflow becomes industry standard, everything around it tends to reinforce that direction, because production pipelines depend on predictability.
    But historically most shifts in content creation didn't really come from established tools. They usually came from the edges: scanning, photogrammetry, procedural workflows, even PBR itself started outside of traditional pipelines before becoming standard.
    Maybe the current phase is similar. The big software ecosystems focus on incremental improvements and integration, while more experimental ideas happen in smaller or open-source environments first. If something proves to save time in real production, studios eventually adopt it regardless of where it came from.
    Open-source projects are interesting in that sense, because they can explore directions that commercial tools avoid. At the same time though, they often struggle with usability and consistency, which production teams still need.
    Feels like the real challenge isn't just new algorithms, but finding approaches that are both flexible and predictable enough to fit into real pipelines.