
Getting Z-Depth from another player camera in Mat editor

dnc
Hi,

This is my first post, and I'm stuck while rapid prototyping a game.

I want to compare the Z-Depth between the player camera and another camera I have in the scene.

The second camera updates every frame.

How would I get the per-pixel Z-Depth from the second camera into the material editor so I could run it through a shader?

I am using SceneDepth to get the first player's camera depth, and I have tried using PixelDepth but don't know its particular usage.

Also, is this even possible?

Thanks,
dnc

Replies

  • Angry Beaver
    It's not going to be as simple as you want it to be. A shader has a few processing steps.
    Vertex shading -> Pixel shading -> Post-processing
    In the vertex shading step, all the information about the verts making up a model is calculated. Then, for the pixel shading step, that vertex information is interpolated across every pixel of the screen. This is done locally on your system, as those pixels always change relative to where your camera is. By the time a Z-Depth is figured out via the UDK node, it's already at the pixel step.

    What you'd have to do is include extra information into every vertex based upon the position of every player's camera, and from there you'd be able to do your effect during the pixel step; however, that's not something you can really do with the UDK node editor. You're going to have to learn REAL shader programming to stand a chance.
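
    Roughly, in raw HLSL (outside the node editor), the idea looks something like the sketch below. It's only a sketch: OtherCameraView is an assumed per-frame constant holding the other player's camera view matrix, and it's up to game code to feed it in every frame.

        float4x4 WorldViewProjection;   // this view's usual transform
        float4x4 OtherCameraView;       // the other player's camera view matrix, updated by game code each frame

        struct VSOutput
        {
            float4 Position : POSITION;
            float3 WorldPos : TEXCOORD0; // carried from the vertex step to the pixel step
        };

        VSOutput MainVS(float3 WorldSpacePos : POSITION)
        {
            VSOutput Out;
            Out.Position = mul(float4(WorldSpacePos, 1.0f), WorldViewProjection);
            Out.WorldPos = WorldSpacePos; // assuming the mesh is already in world space, for brevity
            return Out;
        }

        float4 MainPS(VSOutput In) : COLOR
        {
            // The pixel's depth as seen from the OTHER camera: transform the interpolated
            // world position into that camera's view space and take its Z.
            float otherDepth = mul(float4(In.WorldPos, 1.0f), OtherCameraView).z;
            return float4(otherDepth.xxx, 1.0f); // visualize it; normally you'd compare it against something
        }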
  • Drew++
    If you could store the player's scene depth as a texture, or multiply that depth onto a constant value, then maybe pass that along to Kismet, then to the material again or something, and then do a comparison (x) - (y), that might work.

    But why would you want to compare one depth to the other? o_O Here's an example I did where it's the difference between two different views

    [image: wat.jpg]
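
    If you do go the comparison route, the material-side math is small enough to fit in a Custom node. This is only a rough sketch; MyDepth, OtherDepth and DepthScale are assumed inputs you'd have to wire in yourself, and actually getting the other camera's depth into OtherDepth is the hard part:

        // Custom node body, with scalar inputs wired in:
        //   MyDepth    - SceneDepth/PixelDepth from your own camera
        //   OtherDepth - the other camera's depth, read back from wherever you stored it
        //   DepthScale - assumed constant to bring the difference into a visible 0..1 range
        float diff = MyDepth - OtherDepth;       // the (x) - (y) comparison
        return saturate(abs(diff) / DepthScale);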
  • blankslatejoe
    Hm... getting Z-depth from another camera? One way to do it would be to hook up a camera/SceneCapture actor to a post-process effect that only captures scene depth, and then have that write to a render-to-texture actor... which you can then reference in your material. It might involve some scripting or Kismet work to make the scene capture actor attach/animate/move... and it's definitely going to be expensive to have the render-to-texture update 30 times a second... probably prohibitively so... but you could see if the look gets you what you want. For a legit, performant solution you'd probably need a coder and/or source access.
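
    On the capture side, the depth-only material that the SceneCapture renders could be as simple as a one-line Custom node like the sketch below. MaxDepth is an assumed constant used to squash the depth into the 0..1 range a render target can store:

        // Custom node body for the depth-capture material's Emissive:
        //   SceneDepth - wired in from a SceneDepth expression
        //   MaxDepth   - assumed normalization constant (e.g. the farthest distance you care about)
        return saturate(SceneDepth / MaxDepth);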
  • Angry Beaver
    The different responses have made me realize something: are you trying to composite different images together in some way, or are you trying to let another player's position adjust how things look to you? In other words, which of these is most similar to what you're doing?

    - A spy detection system where the pixels visible to another player are highlighted in your view?

    - A trippy out-of-body effect where you're compositing two separate camera positions together?

    - Some kind of crazy echolocation-based game where players send out signals to highlight the geometry for each other?
  • dnc
    Thanks guys, all great replies.

    Well, I have been finding UDK's material editor quite limiting, and with the HLSL I know it still doesn't quite cut it for the things I would like to be doing. We have hit some problems, and not having access to source has troubled our team; we know the effect is theoretically possible, but we've run into boundaries... trying not to give too much away yet :D

    @Angry Beaver
    As I understand it, the depth pass is done first, and then all the other rendering of meshes/alpha/FX/HUD/post comes after, all going through the shader pipeline (as you mentioned). But as I said, UDK is quite fixed, and as you mentioned, some real hard-coded shader work may have to come into it.

    But you were right with your guesses: we are trying to get the player's position to change how the scene renders.

    @Drew++
    That image you posted was nearly spot on, except you made me realise that the camera may not always have the same scene depth and the comparison would then fail. That's another problem we would have to look into. But thanks anyway.

    @blankslatejoe
    That seems like the best solution, and the one I was going to explore; it's very similar to what Drew was looking at.

    Anyway, thanks again for all your responses. We may have found a workaround from this post here: http://www.polycount.com/forum/showthread.php?t=97548

    So I will find out what the team have been up to tomorrow and then see what is what.

    Thanks again,
    dnc.
  • Ace-Angel
    In RenderMonkey they have a distance-based geometry rendering shader. Just open it up and translate it back to UDK.
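
    For reference, that kind of distance-based shader boils down to something like the sketch below in plain HLSL (CameraPosition, FadeStart and FadeEnd are assumed parameters; WorldPos is the pixel's interpolated world position):

        // Fade factor: 1 near the camera, falling off to 0 at FadeEnd.
        float dist = distance(WorldPos, CameraPosition);
        float fade = 1.0f - saturate((dist - FadeStart) / (FadeEnd - FadeStart));
        return fade;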
  • Davision3D
    Possible solution:
    Feed WorldPosition into a Distance node, and into its other input feed the position of the other camera, which you get into the material by modifying a Vector Parameter of the material each frame.
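
    In HLSL terms, that whole node setup is roughly the sketch below (OtherCameraPos being the Vector Parameter that game code would update every tick):

        // Equivalent of WorldPosition -> Distance <- OtherCameraPos vector parameter.
        float distToOtherCamera = distance(WorldPos, OtherCameraPos);
        return distToOtherCamera;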