I'm just wondering how far we can go with the native Pixel Processor in Substance Designer. For example: can we generate Screen Space Local Reflections using a depth image and world normals? Or is that too complicated for SD?
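For reference, here is roughly what a screen-space reflection has to do per pixel, sketched in plain Python/NumPy rather than Pixel Processor nodes. The function and parameter names are just for illustration, and the projection back to pixel coordinates is deliberately simplified:

```python
import numpy as np

def reflect(v, n):
    """Reflect direction v about unit normal n."""
    return v - 2.0 * np.dot(v, n) * n

def trace_reflection(depth, view_pos, normal, view_dir, steps=32, step_size=0.05):
    """March a reflected ray against the depth image and return the first
    hit pixel, or None. Illustrative only: assumes an orthographic-style
    mapping from view-space position to pixel coordinates for brevity.

    depth    : HxW array of scene depth
    view_pos : view-space position of the shaded pixel (3-vector)
    normal   : unit normal at the pixel (3-vector)
    view_dir : unit vector from camera to the pixel (3-vector)
    """
    ray_dir = reflect(view_dir, normal)
    h, w = depth.shape
    pos = np.array(view_pos, dtype=float)
    for _ in range(steps):  # this per-pixel loop is the problematic part
        pos = pos + ray_dir * step_size
        x = int(np.clip(pos[0] * w, 0, w - 1))
        y = int(np.clip(pos[1] * h, 0, h - 1))
        if pos[2] >= depth[y, x]:  # ray went behind the stored surface -> hit
            return np.array([x, y])
    return None
```

Every step of that loop samples the depth image at a new coordinate, which is exactly the part that maps badly onto a per-pixel function graph, as the replies below point out.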
I've done some pretty strange things with the pixel processor but it rapidly becomes unmanageable.
The limiting factors are that you can't loop (and thus can't sample other pixels effectively) and that it takes a ton of nodes to do anything. E.g. it's possible to make an auto levels node with it, but you'll be there forever.
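For context on why auto levels in particular blows up: the operation itself is trivial in ordinary code, but it depends on the global min and max of the image, i.e. on every other pixel. A minimal sketch in Python/NumPy (not SD nodes; the function name is just for illustration):

```python
import numpy as np

def auto_levels(img):
    """Stretch img so its darkest value maps to 0 and its brightest to 1."""
    lo, hi = img.min(), img.max()
    return (img - lo) / max(hi - lo, 1e-8)  # guard against a flat image
```

In the Pixel Processor, the per-pixel function has no cheap way to get that global min/max, so you end up hand-wiring a large reduction out of many individual samples.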
The fact that we don't have a "loop" node makes complex algorithms tedious to implement (you basically have to unroll them manually). Also, we don't support matrices larger than 2x2 (i.e. a Vector4), so that can become cumbersome to work around.
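To illustrate what that manual unrolling means, here is a hypothetical fixed 4-tap horizontal average written both ways, with plain Python standing in for the function graph; sample(u, v) is an assumed helper, not SD's actual API:

```python
def blur4_looped(sample, u, v, texel):
    # With a loop node this would be one small construct,
    # and changing the tap count would be a single edit.
    return sum(sample(u + i * texel, v) for i in range(4)) / 4.0

def blur4_unrolled(sample, u, v, texel):
    # Without a loop, every tap becomes its own chain of nodes in the graph.
    s0 = sample(u + 0 * texel, v)
    s1 = sample(u + 1 * texel, v)
    s2 = sample(u + 2 * texel, v)
    s3 = sample(u + 3 * texel, v)
    return (s0 + s1 + s2 + s3) / 4.0
```

Four taps is still manageable; the dozens or hundreds of samples that something like screen-space reflections or a global reduction would need is where it stops being practical.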