Hi,
First post; I'm stuck while rapid-prototyping a game.
I want to compare the Z-Depth between the player camera and another camera I have in the scene.
The second camera updates every frame.
How would I get the Z-depth of the second camera's pixels into the material editor so I could run it through a shader?
I am using SceneDepth to get the first player's camera depth, and I have tried using PixelDepth but don't know its particular usage.
Also is this even possible?
Thanks,
dnc
Replies
Vertex shading -> Pixel shading -> Post-processing
In the vertex shading step, all the information about the vertices making up a model is calculated. Then, in the pixel shading step, that per-vertex information is interpolated across every pixel of the screen. This is done locally on your system, as these pixels always change relative to where your camera is. By the time a Z-depth comes out of the UDK node, it's already at the pixel step.
What you'd have to do is include extra information in every vertex based on the position of every player's camera; from there you'd be able to do your effect during the pixel step. However, that's not something you can really do with the UDK node editor. You're going to have to learn REAL shader programming to stand a chance.
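To make the "extra information" idea concrete, here's a minimal HLSL sketch. Everything in it is an assumption about how you'd wire it up: `SecondCameraView` is a hypothetical 4x4 view matrix you'd have to feed in yourself (UDK's node editor won't do it for you), and the `mul` order depends on whether your matrices are row- or column-major.

```hlsl
// Hypothetical sketch: depth of a pixel as seen from a SECOND camera.
// SecondCameraView is assumed to be supplied externally every frame
// (e.g. pushed into the shader from game code); UDK's material editor
// has no built-in way to do this.
float4x4 SecondCameraView;

float DepthFromSecondCamera(float3 WorldPos)
{
    // Transform the pixel's world position into the second camera's
    // view space; the Z component is its depth along that camera's
    // forward axis, comparable against a scene-depth value.
    float4 ViewPos = mul(float4(WorldPos, 1.0), SecondCameraView);
    return ViewPos.z;
}
```

The comparison itself would then just be `DepthFromSecondCamera(WorldPos)` against the depth you already get from SceneDepth, both in world units.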
But why would you want to compare one depth to the other? o_O Here's an example I did where it's the difference between two different views. Some guesses at what you're after:
- A spy detection system where the pixels visible by another player are highlighted in your view?
- A trippy out-of-body effect where you're compositing two separate camera positions together?
- Some kind of crazy echolocation-based game where players send out signals to highlight the geometry for each other?
Well, I have been finding UDK's material editor quite limited, and what HLSL I know doesn't quite cut it for the things I would like to be doing. We have hit some problems, and not having access to the source has troubled our team; we know the effect is theoretically possible, but boundaries have been hit... etc. Trying not to give too much away yet.
@Angry Beaver
As I understand it, the depth pass is done first, and all the other rendering of meshes/alpha/fx/HUD/post comes after, all going through the shader pipeline (as you mentioned). But as I said, UDK is quite fixed, and as you mentioned, some real hard coding may have to come into it.
But you were right with your guesses: we are trying to use the player's position to change how the scene renders.
@Drew++
That image you posted was nearly spot on, except it made me realise that the camera may not always have the same scene depth, and the comparison would then fail. That's another problem we would have to look into. But thanks anyway.
@blankstatejoe
That seems like the best solution, and the one I was going to explore; it's very similar to what Drew was looking at.
Anyway, thanks again for all your responses. We may have found a workaround from this post: http://www.polycount.com/forum/showthread.php?t=97548
So I will find out what the team have been up to tomorrow and then see what is what.
Thanks again,
dnc.
Feed WorldPosition into one input of a Distance node, and into the other input feed the other camera's position as a vector parameter of the material, which you update from code each frame.
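If the graph gets unwieldy, the same idea fits in a one-line Custom node. This is just a sketch: the input names `WorldPos` and `OtherCameraPos` are ones you'd define yourself on the Custom node (wired from the WorldPosition node and a Vector Parameter you update from game code every frame).

```hlsl
// Custom node body: distance from this pixel's world position to the
// other camera. Both this and SceneDepth are in world units, so the
// two values can be compared directly downstream in the material.
return length(WorldPos - OtherCameraPos);
```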