
The wonders of technical art (Unreal Engine)


Replies

  • Obscura
    Obscura godlike master sticky
    Also, does anyone know a way to generate UVs for SDF triangles?
  • Obscura
    Unfortunately, there is no easy way to get all intersecting faces within a volume, so I kind of need to brute-force it here and test them one by one. That makes the grid construction stage much slower, but until I find a better way, it'll do. The hit result of blueprint traces only returns the first face that was hit, even when I use a multi trace.
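    The brute-force pass described here amounts to binning each triangle into every grid cell its bounding box touches. A minimal CPU-side sketch (hypothetical names; it uses a conservative AABB overlap rather than an exact triangle-box test, so it can over-register triangles near cell corners):

```python
import math

def tri_aabb(tri):
    # Axis-aligned bounding box of a triangle given as three (x, y, z) points.
    lo = tuple(min(v[i] for v in tri) for i in range(3))
    hi = tuple(max(v[i] for v in tri) for i in range(3))
    return lo, hi

def cells_for_triangle(tri, cell_size):
    # Every grid cell whose box overlaps the triangle's box. Conservative:
    # a triangle near a cell corner may be registered in a few extra cells.
    lo, hi = tri_aabb(tri)
    ranges = [range(math.floor(lo[i] / cell_size),
                    math.floor(hi[i] / cell_size) + 1) for i in range(3)]
    return [(x, y, z) for x in ranges[0] for y in ranges[1] for z in ranges[2]]
```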
  • Obscura
    I found this, which confirms my finding...
    https://forums.unrealengine.com/unreal-engine/feedback-for-epic/71860-multi-line-trace-that-returns-all-hit-surfaces-not-just-actors

    "RaycastMulti only returns 1 hit result back and removes all of the rest of the results, it even says it in the source code:

    // Now eliminate hits which are farther than the nearest blocking hit, or even those that are the exact same distance as the blocking hit,"


    So basically, this should be used to find multiple hit actors along the trace, not multiple hit points on one actor.
  • Obscura
    The good news is that a mesh with ~3000 tris is still scanned very quickly (though too slowly to call it real time). I don't really mind this situation for now, if the actual grid turns out to be as much faster as I would think; I can optimize the construction stage later. Construction speed can also be traded for framerate. This is done by using a timer and testing one face when the timer ticks. A large interval distributes the construction across many frames, so the FPS stays high but the triangle array is iterated very slowly; a tiny interval forces more faces to be processed within one frame, at the price of losing FPS.
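    The timer-driven trade-off described here can be sketched as a small budgeted builder (a hypothetical sketch, not the actual blueprint setup): a fixed number of faces is processed per tick, so a bigger budget finishes the grid sooner at the cost of frame time.

```python
class IncrementalGridBuilder:
    """Spread grid construction across frames: test a fixed budget of faces
    per timer tick instead of all of them in one frame."""

    def __init__(self, triangles, faces_per_tick):
        self.triangles = triangles
        self.faces_per_tick = faces_per_tick  # bigger = faster build, lower fps
        self.cursor = 0
        self.done = False

    def on_timer_tick(self, insert_into_grid):
        # Called once per timer tick; processes the next batch of faces.
        end = min(self.cursor + self.faces_per_tick, len(self.triangles))
        for i in range(self.cursor, end):
            insert_into_grid(self.triangles[i])
        self.cursor = end
        self.done = self.cursor >= len(self.triangles)
```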


  • edoublea
    edoublea polycounter lvl 9
    Obscura said:
    Also, does anyone know a way for generating uvs for sdf triangles?
    Not certain about generating "true" UVs per se....  But I usually just use the final raymarch pos and normal to drive basic tri-planar mapping.  May not be what you're looking for, tho.  And obviously that falls apart under any sort of animated deformation.
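    For reference, basic tri-planar mapping from the raymarch hit position and normal can be sketched like this (a minimal scalar version; `sample_texture` is a stand-in for a real 2D texture fetch):

```python
def triplanar(sample_texture, pos, normal, sharpness=4.0):
    # Blend weights come from the absolute surface normal, sharpened and
    # renormalized so they sum to 1.
    w = [abs(n) ** sharpness for n in normal]
    total = sum(w)
    w = [wi / total for wi in w]
    x, y, z = pos
    # Project the hit position onto the three axis planes and blend.
    return (w[0] * sample_texture(y, z)   # X-facing surfaces: YZ plane
          + w[1] * sample_texture(x, z)   # Y-facing surfaces: XZ plane
          + w[2] * sample_texture(x, y))  # Z-facing surfaces: XY plane
```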
  • Obscura
    Yeah, I know about triplanar. I used it in some of my earlier experiments with ray marching (page 3). It's kind of trivial in this case, but thanks for the input anyway. We could probably just take some 3D coord, make XY parallel to the triangle, and then something... :D Just throwing out ideas, though. I'll probably need to look into matrices and proper rotation very soon. I've known about using acceleration structures to speed up ray tracing since the RTX cards came out - I was playing with ray marching even back then and knew I'd eventually want to trace meshes and such - but I thought it would take years to reach that point. And now I'm here. Same with the matrix stuff: I was hoping to get to it much later, but it would be wise to get into it very soon.
  • Obscura
    I feel like I'm making good progress, though. I've achieved everything I mentioned on an earlier page when I came back to this topic, and even more. I've tried out some of the less-used features of ray marching, including GI, realistic transparency, and multi-bounce reflections, and now I'm starting on meshes and acceleration structures. I'm enjoying the ride B)
  • Obscura
    It's like the Ray Tracing in One Weekend series, except that it's far from one weekend... Still good.
  • Obscura
    Any input from anyone regarding the usage of render target buffers, other than that I can make the whole thing half res relative to the actual resolution? For things like the DFAO from the last page that's cool, but for the pure ray-marched ones it would be just like using 50% screen percentage, so I don't see much benefit there. It would, however, work around the custom node returning only one variable, so that's definitely a plus. Not a huge one though, because you could just modify the code for yourself - but keeping such modifications up to date across engine versions is a pain. I don't like that.
  • Obscura
    I also realized that when using the uniform grid method, culling of the static objects that exist in the textures happens automatically, because your rays will only hit cells with objects in front of you along their path. I also think the uniform grid isn't bad with sphere tracing, because it doesn't have the downside of traversing and sampling many cells along the ray - ray marching works differently. You still travel through multiple cells, but only fresnel (grazing-angle) pixels would be more expensive. This is all theory for now, because I don't have the grid working yet, but I'm pretty sure about the object and triangle culling. It's basically the nature of this kind of acceleration structure.
  • serriffe
    serriffe polycounter lvl 7
    nice thread! keep posting! 
  • Obscura
    RTX and chill again. This is based on this concept art by "DOFRESH":
    https://www.artstation.com/artwork/gRrqQ

  • radiancef0rge
    radiancef0rge Polycount Sponsor
  • Obscura
    Hey Chris. If I gathered your comments from this thread into a single post, it would only fill up a single row. :D Even though you've made several comments. Thanks anyway!
  • Obscura
    Trying out cascades with volume textures. This can be used as a LOD, just like Epic did with their SDF implementation, so you can have longer render distances. The texture resolution is the same for all cascades, but the captured area increases. The rig would move with the player camera, and the player would stand in the center.

  • rollin
    rollin interpolator
    Eww.. can you explain this a bit more?
  • Obscura
  • rollin
    All ;)
    - What is the blue / red stuff? The different cascades?
    - How do the cascades work? Simply more and more space captured per cell? How does this look geometrically?
    - What kind of information do you capture in the volume texture, and what kind of LOD would you drive with the captured information?
  • Obscura
    Differently colored areas show the different cascades intersecting the subject.
    The two boxes show the sizes of the two cascades. All cascades have the same resolution, so this can be used to double/quadruple etc. the view distance. Farther cascades will look lower res due to the spatial upscale while keeping the original resolution, similarly to cascaded shadow maps. The UE4 SDF implementation uses the same thing, so you can have a relatively low-res global volume with a reasonably long render distance. It's a "LOD" in the sense that it decreases detail over distance.
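    The cascade scheme described here can be sketched as follows - a hypothetical sketch where each cascade doubles the covered extent at the same texture resolution, so the world-space voxel size doubles per cascade:

```python
def cascade_for_distance(dist, base_half_extent, num_cascades):
    # Pick the smallest (most detailed) cascade that still covers the point;
    # each cascade covers twice the extent of the previous one.
    half = base_half_extent
    for i in range(num_cascades):
        if dist <= half:
            return i
        half *= 2.0
    return num_cascades - 1  # clamp to the farthest cascade

def voxel_size(cascade_index, base_half_extent, resolution):
    # World-space size of one texel: same resolution per cascade, so the
    # voxel size doubles with each cascade (the "spatial upscale" above).
    return (2.0 * base_half_extent * 2.0 ** cascade_index) / resolution
```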

    For now, I work with distance (SDF) and color. Other types of data could be stored too, such as metallic, roughness, etc.
    Visually, the two captured volumes look like they were captured from the player position, but the second one captures a bigger area.

    The rig needs to move with the player camera in order to work properly, so the highest-res version is always around the player.
    Why am I doing this? Because the default Unreal volume only stores distance, so the options are pretty limited. If the global volume had other types of data, such as color, you could do a lot more things with it.

    Ultimately, I would like to have some voxel-looking game or scene using this tech. I also like to experiment with and learn from this stuff. My plan is to make a tool, similar to MagicaVoxel, that can be used to create content - small volumes of objects - and then take those object volumes with all their stored channels and make something bigger from them, involving optimizations such as this. A complete voxel renderer solution, in short.
  • Obscura
    The very basics of the editor are working. I can trace through the volume using a hybrid method and determine the entry and exit cells, where the painted voxel needs to be placed unless there is something in the way. That part isn't done yet, but it would be the equivalent of "attach" mode in MagicaVoxel. I'm using a CPU line trace as a first pass to determine the actual ray entry position. That is then fed to a shader that traces through the grid, using DDA (a digital differential analyzer) to determine the cells to visit. Like I said, I don't have the "attach" behavior working yet, but this video shows the DDA working (a correct GPU line trace). The next step is to involve the previous volume state (the previous capture) in the tracing to add the attach.
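    For anyone curious, the DDA traversal mentioned here (in the style of Amanatides & Woo's grid-marching) looks roughly like this on the CPU, assuming a uniform grid of cubic cells:

```python
import math

def dda_cells(origin, direction, cell_size, max_steps):
    """Return the grid cells a ray visits, in order (3D DDA)."""
    cell = [math.floor(origin[i] / cell_size) for i in range(3)]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        d = direction[i]
        if d > 0:
            step.append(1)
            boundary = (cell[i] + 1) * cell_size          # next +side wall
            t_max.append((boundary - origin[i]) / d)
            t_delta.append(cell_size / d)
        elif d < 0:
            step.append(-1)
            boundary = cell[i] * cell_size                # next -side wall
            t_max.append((boundary - origin[i]) / d)
            t_delta.append(cell_size / -d)
        else:
            step.append(0)                                # ray parallel to axis
            t_max.append(math.inf)
            t_delta.append(math.inf)
    cells = [tuple(cell)]
    for _ in range(max_steps):
        axis = t_max.index(min(t_max))  # cross the nearest cell boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        cells.append(tuple(cell))
    return cells
```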

  • Obscura
    Finally managed to come up with a decently working denoising method that I can use in my raymarchers. Currently it would get confused by a high-frequency normal map, since it works based on normals, so I'd need to check the surrounding color pixels too - but it seems like a good start. It uses a 5x5 filter.
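    A sketch of a 5x5 normal-weighted filter along the lines described here (hypothetical parameters): a neighbour only contributes when its normal is close to the center pixel's normal, which preserves geometric edges but, as noted, gets confused by high-frequency normal detail.

```python
def denoise_pixel(color, normal, w, h, x, y, radius=2, threshold=0.9):
    # 5x5 (radius 2) box filter that only averages neighbours whose normal
    # is close to the center normal, so geometric edges stay sharp.
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    center_n = normal[y][x]
    acc, weight = 0.0, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx, sy = x + dx, y + dy
            if 0 <= sx < w and 0 <= sy < h and dot(normal[sy][sx], center_n) >= threshold:
                acc += color[sy][sx]
                weight += 1.0
    return acc / weight  # the center always passes, so weight >= 1
```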
  • Obscura
    Little breakthrough again. Until now I didn't have to render passes into render targets, so the camera setup was fully straightforward. But when you only draw to a render target and not to the screen, the camera-related nodes (camera vector, camera position) immediately stop working, so I had to recreate their functionality. It took a few hours, but now it seems to work correctly, and I can draw directly into a render target without outputting anything to the screen. Camera position was obvious; the camera vector wasn't.
  • Obscura
    It actually isn't perfect after all... When I turn fully backwards, it still does some weird squishing and stretching :'(
  • Obscura
    If anyone could help me construct the camera vector from a screen position and a rotation input, I'd be very happy. It's starting to drive me nuts.
  • edoublea
    You already have the camera vector (the vector of the camera's local X-axis)... so I assume you mean constructing the Eye Vector (aka Ray Vector: the vector of each individual pixel)?

    If so, you could certainly calculate that by building a ViewProjectionMatrix manually, based on an arbitrary camera position, FOV, orientation, etc...  But I feel like that might be way over-complicating the problem, not to mention a LOT heavier on the GPU (matrix multiplication, etc).

    Since you already know the "camera position" (whether it be an actual camera, or even just an arbitrary helper widget or whatever), just feed that to the material, either as a MaterialParamCollection or directly as a param to a DynamicMaterial, and then just use the pixel WorldPosition (which should be correct, regardless of the "camera"/location you're rendering from) and calculate the worldspace eyeVector on the fly:
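    The suggestion above boils down to a one-liner: with the pixel's world position and a fed-in "camera" position, the worldspace eye vector is just the normalized difference, no matrices required. A sketch:

```python
import math

def eye_vector(pixel_world_pos, camera_pos):
    # Worldspace ray direction from the "camera" through this pixel.
    d = [p - c for p, c in zip(pixel_world_pos, camera_pos)]
    length = math.sqrt(sum(x * x for x in d))
    return [x / length for x in d]
```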



    It's entirely possible I'm missing a specific reason you'd need to factor in "Camera"Vector, ScreenSpacePos, etc...  but based on your description, it doesn't seem like you need to factor in your "camera"Vector at all.  And if for some reason you do, just feed the "camera" Front(X)Vector into the same MPC (in the BP) and access that the same way (in place of the intrinsic CameraVector node).
  • sprunghunt
    sprunghunt greentooth
    edoublea said:
    You already have the camera vector (the vector of the camera's local X-axis)... so I assume you mean constructing the Eye Vector (aka Ray Vector: the vector of each individual pixel)?

    If so, you could certainly calculate that by building a ViewProjectionMatrix manually, based on an arbitrary camera position, FOV, orientation, etc...  But I feel like that might be way over-complicating the problem, not to mention a LOT heavier on the GPU (matrix multiplication, etc).

    Since you already know the "camera position" (whether it be an actual camera, or even just an arbitrary helper widget or whatever), just feed that to the material, either as a MaterialParamCollection or directly as a param to a DynamicMaterial, and then just use the pixel WorldPosition (which should be correct, regardless of the "camera"/location you're rendering from) and calculate the worldspace eyeVector on the fly:



    It's entirely possible I'm missing a specific reason you'd need to factor in "Camera"Vector, ScreenSpacePos, etc...  but based on your description, it doesn't seem like you need to factor in your "camera"Vector at all.  And if for some reason you do, just feed the "camera" Front(X)Vector into the same MPC (in the BP) and access that the same way (in place of the intrinsic CameraVector node).

    you don't need to feed the camera position into a shader using blueprint - you can access it directly from inside the shader  using the cameraPositionWS node. 

    https://docs.unrealengine.com/en-US/Engine/Rendering/Materials/ExpressionReference/Coordinates/index.html

  • Obscura
    Thanks for the help guys, appreciate it. But neither the world position nor the camera position works - nor the camera vector. This is because I'm not rendering to the screen; basically nothing camera- or world-related works. I'd need the Shadertoy method to work inside Unreal: construct both the camera vector and camera position from the UV and a fed-in camera position (like in the example image). The problem is that I don't understand matrices, so I can't get the camera rotation to work properly.
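    The "Shadertoy method" referred to here typically builds the ray purely from the screen UV, a camera position, and a look-at target, with no engine camera involved. A sketch (assumed Z-up basis and handedness; `fov_scale` is roughly tan(fov/2)):

```python
import math

def normalize(v):
    l = math.sqrt(sum(x * x for x in v))
    return [x / l for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def camera_ray(uv, cam_pos, target, fov_scale=1.0, world_up=(0.0, 0.0, 1.0)):
    # Build an orthonormal camera basis from a look-at target...
    forward = normalize([t - c for t, c in zip(target, cam_pos)])
    right = normalize(cross(forward, world_up))
    up = cross(right, forward)
    # ...then fire a ray through the centered screen coordinate in [-1, 1].
    sx, sy = 2.0 * uv[0] - 1.0, 1.0 - 2.0 * uv[1]
    return normalize([forward[i] + fov_scale * (sx * right[i] + sy * up[i])
                      for i in range(3)])
```

    Depending on the engine's axis conventions, `right`/`up` may need flipping, which is exactly the swizzling discussed later in the thread.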
  • edoublea

    you don't need to feed the camera position into a shader using blueprint - you can access it directly from inside the shader  using the cameraPositionWS node. 

    https://docs.unrealengine.com/en-US/Engine/Rendering/Materials/ExpressionReference/Coordinates/index.html

    >.<   Yes, of course, but please read his previous posts...

    He's doing raymarching on the GPU, and is refactoring his system to render into a RenderTarget buffer instead of directly to the screen. When you're rendering into a buffer, CameraPosition (as well as all the other camera-specific nodes) does not work, so you must feed data to the shader manually.
  • edoublea
    Thanks for the help guys, appreciate it. But neither the world position nor the camera position works - nor the camera vector. This is because I'm not rendering to the screen; basically nothing camera- or world-related works. I'd need the Shadertoy method to work inside Unreal: construct both the camera vector and camera position from the UV and a fed-in camera position (like in the example image). The problem is that I don't understand matrices, so I can't get the camera rotation to work properly.
    Okay, so you don't want to use ANY proxy geometry in the scene at all... doing it purely with data that doesn't exist in worldspace at all? Hmmmm... Yeah, in that case you probably will need to go the "build a ViewProjectionMatrix" route anyway. The same workflow applies though: you should just be able to feed the "camera" Front, Right, and Up vectors into an MPC, then reconstruct that matrix in the shader and mul() it against the viewspace eyeVector, which should give you the same vector in worldspace. (I might have my matrix math backwards there; it might be an InverseTransform.)
  • Obscura
    Here is a bit more info. I think I got the non-rotated camera vector right, but after it goes through the RotateVector material function (with the camera direction fed in from blueprints) to align it with the player view, something goes wrong, and when I make a 180-degree turn it stretches. It's possible that even the input to RotateVector isn't fully correct. It seems to do this only on one axis, which would suggest that one axis of the made-up camera vector isn't correct. I'm unsure. Maybe I should use another node to rotate it?
  • edoublea
    Obscura said:
    after it goes through the rotate vector material function (with camera direction fed in from blueprints) to align it to the player view, something goes wrong
    Could you post a screenshot of this portion of the shader?  I think I know what might be going wrong.
  • Obscura
    @edoublea I'll show what I have currently, and if you could show me how to do what you described, that'd be great. Like I said I'm not really good with matrices.

    So this is what I have currently. It kind of works on two axes, and it does produce some sort of Y (I also want to keep how X/Y/Z are oriented and not go the OpenGL way), but when I turn from +Y to -Y it shows extreme stretching.


    Yeah, I don't want to use any proxy geo. I believe it's doable without that, since Shadertoys are done this way.

    My main issue is the camera vector. Once I have that, extending it to output the world position at the camera lens should be straightforward. Like in my example, simply adding the position to the camera vector seems to work.
  • Obscura
    I also tried to construct a proper Z, but the results were not much different - just different stretching, basically.
  • edoublea
    Yeah, the thing I suspected it might be, it was not (some people use the "RotateAboutAxis" node to rotate a unit vector, not realizing that node is meant to be used with WPO and strips out the current position, which screws up the results when you try to use it to rotate arbitrary vectors). But yeah, not the case here.

    I'm doing some tests now to see if I can provide better information (this is very relevant to my own interests as well, so it's no trouble).
  • Obscura
    Thanks, you are the man.


  • edoublea
    Holy hell, this one sucked (for several stupid reasons), but I think I have it working.  Let me see if I can summarize:

    First things first: in a BP, you'll need to kick out some basic "camera" data. Again, this doesn't have to be an actual camera - it could be a helper widget, or a mesh position, or whatever - but you'll want to store the following in an MPC (Position, XVector, YVector, ZVector):



    Then, in the shader, sample those back in from the MPC.  Use the CamPos like you normally would, and then use the XYZ Vectors to build a transform matrix like so, and use that to reorient your UV-based RayVectors:



    You'll notice I had to swap the X and Z vectors when building the transform matrix, as using them as-is was causing ray directions to come out of the "top" of my camera.... so be aware you may need to swizzle these depending on what your desired "front" orientation is.

    That last ConstantBiasScale probably won't be necessary the way you are using this. The vector coming out of Transform3x3Matrix _should_ just be your correct WS vector. In my test case, I was actually rendering the vectors directly into a RenderTarget to verify they were being processed correctly, and ran into a few snags. Mainly, the buffer seemed to REALLY have a hard time with negative values (even though my buffer was RGBA16f, which is supposed to support full 16-bit FP values, including negative ones... but WHO KNOWS). To work around that, I re-framed all my buffer values into the 0-1 range (hence that last ConstantBiasScale).
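    The shader-side steps described above reduce to: build a 3x3 matrix from the camera basis vectors, transform the UV-derived local ray with it, then bias/scale into 0-1 for buffer storage. A hypothetical sketch (basis vectors assumed worldspace and unit length; swizzling may still be needed, as noted):

```python
def local_to_world(ray_local, cam_x, cam_y, cam_z):
    # world = x*X + y*Y + z*Z: the camera basis vectors act as the columns
    # of the 3x3 transform. Swizzle/negate the basis vectors here if your
    # "front" axis comes out wrong, as noted above.
    return [ray_local[0] * cam_x[i] + ray_local[1] * cam_y[i] + ray_local[2] * cam_z[i]
            for i in range(3)]

def bias_scale(v):
    # ConstantBiasScale step: remap [-1, 1] into [0, 1] for buffer storage.
    return [0.5 * c + 0.5 for c in v]
```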

    Anyway, in order to verify everything was working, I added a debug function to the BP that re-samples the buffer I wrote out, using the ill-advised "Read Render Target Raw UV" node (which is suuuuper inefficient) to sweep over the render target, and then draws debug rays using the buffer-value ray directions (after converting them back to the proper [-1,+1] range, of course). All the values are coming out solid, and it works in all directions and orientations:



    Hope that helps some!

    -eaa

  • Obscura
    Ooh, I'm gonna try this out now. Negative emissive needs to be enabled separately though - it's a material property, and it only works in unlit mode.
  • edoublea
    Oh, good tip!  Unfortunately, even after that change, the negative-values thing still persists. I suspect it's an issue with the "Read Render Target" node... it seems to refuse to return negative values even when the material and RT are both set up to support them.

    ¯\_(ツ)_/¯
  • Obscura
    Unfortunately, this still isn't working. If you draw this as a post process and compare it to the camera vector node's output, you'll see that they look different. It's easiest to inspect one axis at a time. For example, if we display the Y of the camera vector, we see a radial gradient (two, actually, but the other side is negative so it isn't visible). If we turn the camera, the radial gradient moves according to the camera rotation. The output of this doesn't move that way - it spins, or I don't know how else to describe it. I made an example video:


    Swizzling the axes doesn't help. I also tried different value ranges, but it's pretty much the same.

    Thanks for the help though, I really appreciate it.
  • edoublea
    Ahh, bummer.  Yeah, keep us posted on how it goes.  I'll be curious to see what solution you land on.
  • Obscura
    Still no results, but it looks like X and Y are correct - they look the way they should if I don't normalize them. But the Z is doing bullshit, so probably simply using 1 as Z is what breaks this.
  • edoublea
    Got it working better (though I can't sufficiently explain why this works... matrix math is not my strong suit).
    Try this matrix construction order:
    float3x3(vecY, -vecZ, vecX)

    I'll post more, hold on....
  • edoublea
    Alright, so in an effort to figure this out (like I said, I need to do this at some point as well, so I'm glad to have gone through it), I implemented a simple raymarcher rendering directly in the shader, with no actual camera data involved - using basically the same method we've been discussing.

    The matrix transforms are indeed being fuckier than expected, and I don't understand why the three matrix components end up needing to be swizzled in such weird ways, but it seems to be working pretty solidly at the moment.

    All the BP is doing is filling in the MPC data as shown previously - but no more render-to-texture. Everything just lives right in the shader now (aside from the "camera data", which is still fed in via the MPC). This is the material; it doesn't get much simpler:


    The custom node is just a basic raymarcher doing some sphere crap. That's not the important part. The only thing relevant to this discussion is the handling of the eyeRay local-space to world-space transformation:



    Using that weird-ass matrix ordering, it works as expected.

    Here's the result. The white sphere is just a location marker for the origin (so I knew where the raymarch effect would be located), and the quad in the back is literally just a quad with the material applied, so I could see it in the scene. Obviously, rendering that to an RT buffer instead would be trivial and wouldn't change the shader at all:


    Try that and see if it works for you.

    -eaa

  • Obscura
    I got your last method to work!!!



    That's awesome, many thanks Eric! Will post some updates soon. I can start doing half-res lighting, denoising, and depth-based post processes, such as DOF using the raymarched scene depth.


  • edoublea
    Obscura said:
    That's awesome, many thanks Eric! Will post some updates soon. I can start doing half-res lighting, denoising, and depth-based post processes, such as DOF using the raymarched scene depth.

     Great!  Glad it helped.  Can't wait to see more.
  • RadiusGordello
    Just a quick note: if your render target's aspect ratio isn't 1:1, the result will be warped, so you have to scale your UVs by the aspect ratio to account for that:
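    Concretely, the fix is to fold width/height into the centered screen coordinate before building the ray - a sketch (assuming UV origin at the top-left):

```python
def centered_uv(uv, width, height):
    # Remap [0,1] UV to a centered coordinate and stretch X by the aspect
    # ratio so rays are not warped on non-square render targets.
    aspect = width / height
    return ((2.0 * uv[0] - 1.0) * aspect, 1.0 - 2.0 * uv[1])
```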


  • Obscura
    Yep, that was clear, but thanks.
  • Obscura
    My note on this: since I'm rendering into a render target created by blueprints or by hand, and not a native buffer, the ViewProperty node doesn't work - you need to use TextureProperty instead and hook the render target up to it. I'd guess the ViewProperty node wouldn't work anyway, for the same reason the other camera-related nodes don't, but I'm just guessing on that last part; I haven't actually tried it.
  • Obscura
    Found a cool way to get real-time performance with many SDF shapes:
    - Make a data texture out of placeholder objects from the scene. Store type, position, size.
    - Bake all primitives into a volume texture in one pass, by looping through the data texture. Store SDF or more...
    - Sphere trace the volume texture.


    This way, all objects still need to be evaluated at the same time, but only once, when the volume is baked - instead of many times per frame, as when they are not in textures. If you still want higher quality, you could switch to the analytic math version when you are close to the surface, but with many objects that would perform very poorly.
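    The three steps above in miniature (a hypothetical CPU sketch with spheres as the only primitive type and a nearest-neighbour volume fetch; a real version would bake on the GPU and trilinear-filter the volume):

```python
import math

def bake_volume(primitives, res, extent):
    # One-pass bake: min signed distance over all primitives (spheres here,
    # given as (center, radius)) sampled at every voxel center.
    cell = extent / res
    vol = {}
    for x in range(res):
        for y in range(res):
            for z in range(res):
                p = [(i + 0.5) * cell for i in (x, y, z)]
                vol[(x, y, z)] = min(math.dist(p, c) - r for c, r in primitives)
    return vol

def sample(vol, res, extent, p):
    # Nearest-neighbour fetch; a real version would trilinear-filter.
    cell = extent / res
    idx = tuple(min(res - 1, max(0, int(c / cell))) for c in p)
    return vol[idx]

def sphere_trace(vol, res, extent, origin, direction, max_steps=64, eps=1e-2):
    # Step along the ray by the sampled distance until we reach the surface.
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sample(vol, res, extent, p)
        if d < eps:
            return t  # hit
        t += max(d, eps)
    return None  # miss
```

    The key property described in the post holds here too: the primitive list is only evaluated once, inside `bake_volume`; the per-ray cost depends only on the volume, not on the number of objects.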