I've set out to get a vector direction pointing straight down (-Y) from a single position bake done in Painter. My goal is to get something that looks like a custom position texture baked as
R: 0, G: -1, B: 0 (X: 0, Y: -1, Z: 0) in Designer. FYI, I'm pretty sure my math for the direction is slightly off, but that's a problem for later.
I'm very much new to wrapping my head around this kind of math; the pixel processor was put together by my lead tech artist Robert Krupa, and I'm sure there's a slightly more elegant way of doing it.
I'm attaching the work file if you want to take a look as well. This is what both position maps look like:
Replies
Make sure you're using a floating-point bit depth for your graph, and in your pixel processor just run each channel through a range node remapping to [-1, 1], or do (y * 2) - 1.
That will tend to increase the contrast of the map, as you'll be adding a load of black to the image.
To extract the Y information only, simply put 0 in the X and Z channels.
You'll need to output an HDR image format for values outside 0-1 to be any use.
Generally speaking, though, for in-game use we'll use a 0-1 range image and do the range shift in the shader, because it means we can use a standard image format.
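If it helps to sanity-check that round trip outside Designer, here's a minimal numpy sketch of the remap described above (the array names and placeholder data are mine, not from the original graph): decode each channel with (y * 2) - 1, zero X and Z to keep only the Y information, then re-encode with x * 0.5 + 0.5 so a standard 0-1 format plus a shader-side shift still works.

```python
import numpy as np

# pos_01: H x W x 3 position bake with each channel stored in the 0-1 range
pos_01 = np.random.rand(4, 4, 3).astype(np.float32)  # placeholder data

# Decode to -1..1: (y * 2) - 1 per channel
pos = pos_01 * 2.0 - 1.0

# Keep only the Y information by putting 0 in the X and Z channels
direction = np.zeros_like(pos)
direction[..., 1] = pos[..., 1]

# Re-encode to 0-1 so a standard (non-HDR) format can hold it;
# the shader then repeats the (y * 2) - 1 shift at runtime.
encoded = direction * 0.5 + 0.5
```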
This is what I meant by "baking the position map as X: 0, Y: -1, Z: 0". (My bad, there are no RGB parameters.)
So, I can see where you're going with this, and I've hit a number of roadblocks caused by the same underlying issue, which is of course that you have no access to mesh information within the comp graph.
Firstly, your lead tech artist is very clever.
Secondly, I think it looks faceted because of the way the map samples against itself; when you increase the sample distances it yields a softer result but loses accuracy.
You could sample more times and get a better result, but it's only ever really going to give you an approximation (and it's a lot of nodes to plonk into a graph).
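Just to illustrate what extra samples buy you, here's a rough numpy analogue (not the actual pixel processor setup, and it assumes the direction map has already been decoded to -1..1): averaging each pixel's vector with its neighbours and renormalising softens the 1-pixel facets at the cost of some accuracy, which is exactly the trade-off described above.

```python
import numpy as np

def smooth_directions(vec, radius=1):
    """Average each pixel's direction with its neighbours, then renormalise.

    vec    : H x W x 3 array of directions already decoded to the -1..1 range.
    radius : how far the extra samples reach; bigger = softer but less accurate.
    """
    acc = np.zeros_like(vec)
    taps = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(vec, shift=(dy, dx), axis=(0, 1))
            taps += 1
    acc /= taps
    length = np.linalg.norm(acc, axis=-1, keepdims=True)
    return acc / np.maximum(length, 1e-6)  # guard against zero-length vectors
```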
Is there no way you can precompute the map? I'm assuming it's a pipeline issue preventing it.
If it's a matter of not everyone in the studio having Designer licenses, you should be able to build a ShaderFX material that could render it to texture in Max/Maya, or ask a render programmer to build you a little tool that can be distributed to the team. The sums are simple if you can access the mesh data, and the tool would be pretty straightforward since you won't have to worry about a GUI etc.
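As a sketch of how small such a tool could be, here's one hypothetical version of "the sums" in numpy, assuming you already have per-texel world positions (from a standard position bake or the mesh itself) and, purely as an example, a target point the vectors should aim at; the real math would of course depend on what the map is actually for.

```python
import numpy as np

def direction_map(position, target=(0.0, -1.0, 0.0)):
    """Per-texel direction toward a target point, re-encoded to 0-1.

    position : H x W x 3 world-space positions (e.g. a standard position bake).
    target   : hypothetical point the vectors should aim at (example value only).
    """
    d = np.asarray(target, dtype=np.float32) - position
    d /= np.maximum(np.linalg.norm(d, axis=-1, keepdims=True), 1e-6)
    return d * 0.5 + 0.5  # back into a standard 0-1 texture range
```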
Indeed he is; we call him the wizard for a reason.
Upping the samples might not be a bad idea! I'm aiming to use this with a vector morph, and that's where those 1-pixel transition facets become an issue. Just getting one more sample would most likely be enough.
This won't just end up in a production pipeline; the end goal is for a vanilla Painter user to be able to use it. My tech artists have done similar things in Painter before, calling on the Substance Batch Tools to spit out different things.