
Workflow question: generating height/normal before Substance

Hey folks,

I’m doing some R&D around Substance workflows and automated texture map generation,
mostly as a side experiment.

In some cases (quick scans, old assets, noisy photos) I found myself missing a fast
way to extract usable height/normal information before bringing things into
Substance Designer/Painter.

So I built a small helper tool that generates height/normal/roughness from images,
mostly to speed up early iteration and blockout stages, not to replace Substance
in any way.

I’m curious:
– do you usually solve this directly inside Substance?
– or do you preprocess textures before bringing them into SD/SP?

Would love to hear how others approach this.

Replies

  • gnoop
Substance Designer and Painter don't have anything that can infer height from a regular image. Sampler does, but it does it so badly it's almost an advertising gimmick IMO: very blurry, unspecific height, often totally wrong. The only thing Sampler is actually good at is de-lighting ambient shadows. What would really be helpful is an image segmentation tool based on detected edges, to feed into the Flood Fill node in Designer. Or maybe some cleanup for a typical high-pass, to get rid of random dots and noise without killing continuous features like cracks.
    But I've never seen such segmentation actually work. It never follows the logical visible features, so it usually ends up as manual work. Something that would turn those contours and edges into enclosed areas usable by the Flood Fill node, and shift noise bubbles randomly, in Designer.
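    Roughly the kind of thing I'm imagining, sketched outside Substance (Python with NumPy/SciPy; the function name, thresholds, and closing iterations are all made up for illustration, not any existing tool):

```python
import numpy as np
from scipy import ndimage

def edges_to_regions(gray, edge_thresh=0.2, close_iters=2):
    """Turn a grayscale image into labeled enclosed regions,
    i.e. the kind of mask a Flood Fill node could consume.
    Illustrative sketch only."""
    # simple gradient-magnitude edge detector
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    edges = mag > edge_thresh * mag.max()
    # morphological closing to seal small gaps so contours
    # actually enclose areas instead of leaking
    closed = ndimage.binary_closing(
        edges, structure=np.ones((3, 3)), iterations=close_iters)
    # label the non-edge areas: one integer id per enclosed region
    labels, n = ndimage.label(~closed)
    return labels, n
```

    On a clean synthetic step edge this separates inside from outside; on real photos the gap closing would need to be far more aggressive, which is exactly where it stops following the logical visible features.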


  • RKey4
Yeah, that makes sense. The biggest issue seems to be that most height extraction approaches are purely contrast-based, so they don't really understand structural boundaries, just tonal differences. Edge detection alone isn't enough either, since it rarely produces clean enclosed regions that would work nicely with Flood Fill in Designer. I've been experimenting a bit with combining edge-aware filtering and segmentation approaches to preserve continuous features like cracks while suppressing random noise, but it's definitely not trivial. If segmentation actually followed perceptual contours instead of just gradients, it would make procedural workflows way more powerful. Out of curiosity: have you tried any custom graph setups in Designer to approximate that?
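For the noise-suppression half, the crudest version of what I've been trying is just dropping tiny connected components from a binary high-pass mask, so long thin features like cracks survive while isolated speckles go away (SciPy sketch; `min_size` is an arbitrary knob, not a tuned value):

```python
import numpy as np
from scipy import ndimage

def keep_continuous_features(mask, min_size=30):
    """Remove small speckle components from a binary mask,
    keeping long connected features such as cracks.
    Illustrative sketch, not a production filter."""
    labels, n = ndimage.label(mask)
    # pixel count of each labeled component (ids 1..n)
    ids = np.arange(1, n + 1)
    sizes = ndimage.sum(mask, labels, index=ids)
    # keep only components at or above the size threshold
    return np.isin(labels, ids[sizes >= min_size])
```

It's purely size-based, so it can't tell a thick blob from a crack; a real version would also look at elongation or connectivity along the feature.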