Does anybody know a good, convenient application for compositing multiple rendered fragments, mostly from ZBrush, into tileable textures? It needs to be able to combine Z depth so that fragments intersect each other properly according to their depth (rock pieces/stones, for example).
Many programs can do this to some extent, but they are all pretty inconvenient.
ZBrush 2.5D layers - totally destructive, and they only have color and depth channels.
Substance Designer - capable of combining the depth of two or three things, but you can't really select a separate piece/stone to tweak/scale/move; that would require a tremendous amount of time and effort and a huge node network. Imo the ability to tweak any given piece easily is key to making things less repetitive, and it's hard to do in software focused exclusively on procedural work.
Fusion - a better and more elegant node structure, but the same drawbacks: too many nodes if you need to re-compose a couple dozen stones/rocks into a new cliff, and no convenient support for texture-specific channels. Depth combine is always aliased.
Blender - the best Z combine feature, but no transform gizmo and the same drawbacks of node-based systems: every subtle move/scale requires its own node.
Photoshop - possible in the latest version, but depth combine done with layer blending requires such a complicated stack of layers that it really only works with two objects.
Anything else I missed? Does anybody have any suggestions?
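For reference, the depth combine I mean is conceptually tiny: per pixel, keep whichever fragment is closer to the camera. A minimal NumPy sketch (the function name and the smaller-depth-is-closer convention are my assumptions):

```python
import numpy as np

def z_combine(color_a, depth_a, color_b, depth_b):
    # color_*: (H, W, 3) float arrays; depth_*: (H, W) float arrays.
    # Convention (my assumption): smaller depth = closer to camera.
    front = depth_a < depth_b                         # hard per-pixel winner
    color = np.where(front[..., None], color_a, color_b)
    depth = np.minimum(depth_a, depth_b)
    return color, depth
```

The hard `<` comparison is also exactly why the result comes out aliased: every pixel is all-A or all-B with nothing in between.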
Replies
As for the OP: I think you just have to choose between a layer-based or node-based approach. The only blend between the two I have ever dealt with is (Autodesk) Flame's Action module, and I think that may be a bit of a stretch to use for your purpose.
But Thomas is right... either you choose a layered solution or a node solution. No holy grail, and both have caveats.
For some reason video compositors don't find it necessary, while for textures it's imo a key thing. In Photoshop, at least, you can set a blur in fixed pixel values. In Substance Designer you have to adjust the same kind of blur manually for every fragment each time you rescale something. In Fusion, to get nice anti-aliased masks you have to work at a huge source resolution, which makes everything too slow.
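What I mean by an anti-aliased depth mask, roughly (a sketch, not any particular app's implementation; the `tolerance` parameter is my assumption and plays the role of the fixed-pixel blur):

```python
import numpy as np

def soft_z_mask(depth_a, depth_b, tolerance=0.5):
    # Feathered alternative to the hard depth_a < depth_b comparison.
    # Blends linearly across a fixed depth tolerance instead of snapping,
    # so the intersection edge comes out anti-aliased.
    # tolerance is in depth units (my assumption); tune per scene scale.
    diff = depth_b - depth_a                          # positive where A is closer
    return np.clip(diff / (2.0 * tolerance) + 0.5, 0.0, 1.0)

# mask = soft_z_mask(za, zb)   # 1 = A fully in front, 0 = B fully in front
# color = mask[..., None] * color_a + (1.0 - mask[..., None]) * color_b
```

And the catch I described above is visible right in the signature: a fixed tolerance has to be retuned every time a fragment is rescaled.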
Z depth with anti-aliasing is actually handled by the render app; compositing apps just work with 2D images. Deep EXR images will be multi-channel and floating point.
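For example, pulling a float Z channel out of a multi-channel EXR with the OpenEXR Python bindings looks roughly like this (the file name is hypothetical, and your renderer may call the pass something other than 'Z'):

```python
import numpy as np
import OpenEXR, Imath  # the official OpenEXR Python bindings

def read_channel(path, name='Z'):
    # Read one float channel from a multi-channel EXR into a 2D array.
    exr = OpenEXR.InputFile(path)
    dw = exr.header()['dataWindow']
    w = dw.max.x - dw.min.x + 1
    h = dw.max.y - dw.min.y + 1
    raw = exr.channel(name, Imath.PixelType(Imath.PixelType.FLOAT))
    return np.frombuffer(raw, dtype=np.float32).reshape(h, w)

# depth = read_channel('stone_a.exr')  # hypothetical file name
```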
I wasn't able to recreate the same behavior in Fusion. I wonder if Nuke has a workaround, or perhaps just the same feature?