Hey guys,
I was just wondering if there was any way in a Post Process to compute the position in 3D space of each pixel?
My first thought was to take the camera position and add to it the camera vector multiplied by the value in the depth buffer. But I soon realized that by that stage of the rendering process the camera is most likely at (0,0,0) and facing -Y.
Any ideas? Thanks.
Replies
(edit: in the material editor, to be specific)
Not so long ago I reconstructed the view-space position from depth in a post-process material; world space might be doable as well.
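For reference, a rough sketch of that kind of view-space reconstruction (the function name and inputs are just placeholders for values you'd wire up yourself, e.g. in a Custom node; none of them are built-in material nodes):

```hlsl
// Sketch only: reconstruct the view-space position of a pixel from its scene depth.
// uv          : screen UV in [0,1], (0,0) = top-left
// sceneDepth  : distance along the view forward axis (SceneDepth-style convention)
// tanHalfFovX : tan(horizontal FOV / 2)
// tanHalfFovY : tan(vertical FOV / 2)
float3 ViewSpaceFromDepth(float2 uv, float sceneDepth, float tanHalfFovX, float tanHalfFovY)
{
    // Remap UV to [-1, 1], flipping Y so +Y points up.
    float2 ndc = float2(uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0);

    // Per-pixel offset, scaled out to the pixel's depth.
    float x = ndc.x * tanHalfFovX * sceneDepth;
    float y = ndc.y * tanHalfFovY * sceneDepth;

    // View space: X right, Y up, Z forward (depth).
    return float3(x, y, sceneDepth);
}
```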
It's not exactly that, but the idea is similar. I essentially need to know, during the post-process stage, whether the pixel is near the player/pawn (a position I send to the shader in the form of 3 floats).
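Once you have the pixel's world position, that check itself is just a distance comparison; something like this, where playerPosition stands in for the 3 floats you send to the shader and radius is whatever threshold you pick (names are placeholders):

```hlsl
// Sketch: 1 if the pixel is within 'radius' of the player, 0 otherwise.
float nearMask = distance(pixelWorldPos, playerPosition) < radius ? 1.0 : 0.0;
```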
If it's a 1st-person camera, could you just use pixel depth instead?
The basis is the scene depth; then I used a screen-covering horizontal gradient for X and a vertical one for Y. I figure if you can make those gradients "3-dimensional", rotate them accordingly, and feed the shader the camera's location and rotation, you might get it done.
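Put together, that idea might look roughly like this: the same screen gradients remapped to [-1, 1], scaled by depth, and "rotated" by building the offset out of the camera's basis vectors. All inputs here are assumed to be passed in yourself (e.g. as material parameters); none of these names are built-in:

```hlsl
// Sketch only: world-space position from depth, given the camera's location and orientation.
float3 WorldSpaceFromDepth(float2 uv, float sceneDepth,
                           float tanHalfFovX, float tanHalfFovY,
                           float3 cameraPosition,
                           float3 cameraRight, float3 cameraUp, float3 cameraForward)
{
    // Horizontal/vertical screen gradients remapped to [-1, 1], +Y up.
    float2 ndc = float2(uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0);

    // "Rotate" the gradients by expressing the per-pixel offset in the camera's basis.
    float3 offset = cameraRight   * (ndc.x * tanHalfFovX * sceneDepth)
                  + cameraUp      * (ndc.y * tanHalfFovY * sceneDepth)
                  + cameraForward * sceneDepth;

    return cameraPosition + offset;
}
```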