
The wonders of technical art (Unreal Engine)


Replies

  • Obscura
    Yeah, I know about triplanar. I used it in some of my earlier experiments with ray marching (page 3). It's kinda trivial in this case, but thanks for the input anyway. We could probably just take some 3D coord, make XY parallel to the triangle, and then something... :D Just throwing in ideas though. I will probably need to look into matrices and proper rotations very soon. I've known about using acceleration structures to speed up ray tracing since RTX cards came out, and even back then, playing with ray marching, I knew I would eventually reach the point where I'd want to trace meshes and so on - I just thought that was years away. And now I'm here. Same with the matrix stuff: I was hoping to get to it much later, but it would be wise to dig into it very soon.
  • Obscura
    I kinda feel like I'm making good progress though. I've achieved everything I mentioned on an earlier page when I came back to this topic, and more. I've tried out some less-used features of ray marching, including GI, realistic transparency, and multi-bounce reflections, and now I'm starting on meshes and acceleration structures. I'm enjoying the ride B)
  • Obscura
    It's like the Ray Tracing in One Weekend series, except that it's far from one weekend... Still good.
  • Obscura
    Any input from anyone regarding the usage of render target buffers, besides the fact that I can render the whole thing at half resolution relative to the actual screen? For things like the DFAO from the last page that's cool, but for the purely ray marched scenes it would be just like using 50% screen percentage, so I don't see much benefit there. It would, however, work around the custom node returning only one output, so that's definitely something. Not a huge win though, because you could just modify the engine code yourself - but keeping such modifications up to date across engine versions is a pain. I don't like that.
  • Obscura
    I also figured out that culling of the static objects stored in the textures happens automatically with the uniform grid method, because your rays only visit the cells that lie along their path. I also think the uniform grid isn't bad with sphere tracing, because it doesn't have the downside of traversing and sampling many cells along the ray - sphere tracing works differently. You still travel through multiple cells, but only the grazing-angle (fresnel) pixels would get more expensive. This is all theory for now because I don't have the grid working yet, but I'm pretty sure about the object and triangle culling. It's basically the nature of this kind of acceleration structure.
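    To illustrate the point, a minimal sphere-tracing loop over such a baked grid could look something like this (a pseudo-HLSL sketch; SampleGridSDF and the constants are made-up names, not engine API):

        // Each step jumps by the sampled distance, so empty cells along the
        // ray are skipped in large strides; only rays that graze a surface
        // (fresnel pixels) end up taking many small steps.
        float TraceGrid(float3 rayOrigin, float3 rayDir, float maxDist)
        {
            float t = 0.0;
            for (int i = 0; i < 128 && t < maxDist; i++)
            {
                float3 p = rayOrigin + rayDir * t;
                float d = SampleGridSDF(p);   // distance stored in the grid/volume
                if (d < 0.01)
                    return t;                 // close enough: surface hit
                t += d;                       // safe step: nothing is closer than d
            }
            return -1.0;                      // miss
        }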
  • Obscura
    Some RTX renders.


  • serriffe
    nice thread! keep posting! 
  • Obscura
    RTX and chill again. This is based on this concept art by "DOFRESH":
    https://www.artstation.com/artwork/gRrqQ

  • radiancef0rge
  • Obscura
    Hey Chris. If I gathered your comments from this thread into a single post, they would only fill up a single row, even though you've made several. :D Thanks anyway!
  • Obscura
    Trying out cascades with volume textures. This can be used as a LOD, just like Epic did with their SDF implementation, so you can have longer render distances. The texture resolution is the same for all cascades, but the captured area increases. The rig would move with the player camera, with the player standing in the center.

  • rollin
    Eww.. can you explain this a bit more?
  • Obscura
  • rollin
    All ;)
    - What is the blue / red stuff? The different cascades?
    - How do the cascades work? Simply more and more space captured per cell? How does this look geometrically?
    - What kind of information do you capture in the volume texture, and what kind of LOD would you drive with the captured information?
  • Obscura
    Differently colored areas show the different cascades intersecting the subject.
    The two boxes show the sizes of the two cascades. All cascades have the same resolution, so this can be used to double/quadruple etc. the view distance. Further cascades will look lower res, because the same resolution is spread over a larger area - similarly to cascaded shadows. The UE4 SDF implementation uses the same trick, so you can have a relatively low-res global volume with a reasonably long render distance. It's a "LOD" in the sense that it decreases detail over distance.
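    As a sketch, sampling such a setup could look like this (made-up names; it assumes two cascades centered on the player, the second covering twice the extent of the first):

        // Pick the finest cascade that contains the point. Both share the same
        // texture resolution, so the far cascade trades detail for range.
        float SampleCascadedSDF(float3 worldPos, float3 center, float halfSize0,
                                Texture3D Cascade0, Texture3D Cascade1, SamplerState Samp)
        {
            float3 local = worldPos - center;
            if (all(abs(local) < halfSize0))
            {
                float3 uvw = local / (2.0 * halfSize0) + 0.5;  // box -> [0,1]^3
                return Cascade0.SampleLevel(Samp, uvw, 0).r;
            }
            // coarser cascade: same resolution, double the captured area
            float3 uvw2 = local / (4.0 * halfSize0) + 0.5;
            return Cascade1.SampleLevel(Samp, uvw2, 0).r;
        }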

    For now, I store distance (SDF) and color. Other types of data could be stored too, such as metallic, roughness, etc.
    Visually, the two captured volumes look like they were captured from the player position, but the second one captures a bigger area.

    The rig needs to move with the player camera in order to work properly, so the highest-res version is always around the player.
    Why am I doing this? Because the default Unreal one only stores distance, so the options are pretty limited. If the global volume had other types of data, such as color, you could do a lot more things with it.

    Ultimately, I would like to make some voxel-looking game or scene using this tech - I also like to experiment with and learn from this stuff. My plan is to make a tool, similar to MagicaVoxel, that can be used to create content (small volumes of objects), and then take those object volumes with all their stored channels and build something bigger from them, involving optimizations such as this. A complete voxel renderer solution, in short.
  • Obscura
    The very basics of the editor are working. I can trace through the volume using a hybrid method and determine the entry and exit cells where the painted voxel needs to be placed, unless there is something in the way. That part isn't done yet, but it would be the equivalent of the "attach" mode in MagicaVoxel. I'm using a CPU line trace as a first pass to determine the actual ray entry position. This is then fed to a shader that traces through the grid, using DDA (digital differential analyzer) to determine the cells to visit. Like I said, I don't have the "attach" behavior working yet, but this video shows the DDA working (a correct GPU line trace). The next step is to involve the previous volume state (the previous capture) in the tracing to add the attach.
    https://www.youtube.com/watch?v=CAR5yl35B54&feature=youtu.be
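    For anyone curious, the core of a 3D DDA traversal looks roughly like this (a sketch with made-up helper names; it also assumes no ray direction component is exactly zero):

        // Visit grid cells in the exact order the ray crosses them.
        int3 TraceDDA(float3 entryPos, float3 rayDir, int3 gridSize)
        {
            int3 cell = int3(floor(entryPos));         // cell containing the entry point
            int3 cellStep = int3(sign(rayDir));        // +1 or -1 per axis
            float3 tDelta = abs(1.0 / rayDir);         // t needed to cross one full cell
            // t at which the ray crosses the next cell boundary on each axis:
            float3 tMax = (floor(entryPos) + max(sign(rayDir), 0.0) - entryPos) / rayDir;

            for (int i = 0; i < 256; i++)
            {
                if (CellFilled(cell))                  // made-up occupancy lookup
                    return cell;                       // hit an occupied voxel
                // step into the neighbor across the nearest cell face
                if (tMax.x < tMax.y && tMax.x < tMax.z) { cell.x += cellStep.x; tMax.x += tDelta.x; }
                else if (tMax.y < tMax.z)               { cell.y += cellStep.y; tMax.y += tDelta.y; }
                else                                    { cell.z += cellStep.z; tMax.z += tDelta.z; }
                if (any(cell < 0) || any(cell >= gridSize))
                    break;                             // left the grid
            }
            return int3(-1, -1, -1);                   // miss
        }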
  • Obscura
    Finally managed to come up with a decently working denoising method that I can use in my raymarchers. Currently it would get confused by a high-frequency normal map, as it works based on normals, so I'd need to check the surrounding color pixels too - but it seems like a good start. It uses a 5x5 filter.
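    Roughly, a normal-weighted 5x5 filter boils down to something like this (a sketch with illustrative names):

        // Average the 5x5 neighborhood, weighting each sample by how well its
        // normal matches the center pixel so geometric edges stay sharp. This
        // is also why a high-frequency normal map confuses it.
        float3 Denoise(Texture2D colorTex, Texture2D normalTex, SamplerState samp,
                       float2 uv, float2 texelSize)
        {
            float3 centerN = normalize(normalTex.SampleLevel(samp, uv, 0).xyz);
            float3 sum = 0;
            float wSum = 0;
            for (int y = -2; y <= 2; y++)
            {
                for (int x = -2; x <= 2; x++)
                {
                    float2 o = uv + float2(x, y) * texelSize;
                    float3 n = normalize(normalTex.SampleLevel(samp, o, 0).xyz);
                    float w = pow(saturate(dot(centerN, n)), 8.0);  // normal similarity
                    sum += colorTex.SampleLevel(samp, o, 0).rgb * w;
                    wSum += w;
                }
            }
            return sum / max(wSum, 1e-4);
        }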
  • Obscura
    Little breakthrough again. Until now I didn't have to render passes into render targets, so the camera setup was fully straightforward. But when you only draw to a render target and not to the screen, the camera-related nodes immediately stop working (camera vector, camera position), so I had to recreate their functionality. It took a few hours, but now it seems to work correctly, and I can draw directly into a render target without outputting anything to the screen. Camera position was obvious, but the camera vector wasn't.
  • Obscura
    Actually, it still isn't perfect... When I turn fully backwards, it does some weird squishing and stretching :'(
  • Obscura
    If anyone could help me construct the camera vector from the screen position and a rotation input, I'd be very happy. It's starting to drive me nuts.
  • edoublea
    You already have the camera vector (the vector of the camera's local X-axis)... so I assume you mean constructing the Eye Vector (aka Ray Vector: the vector of each individual pixel)?

    If so, you could certainly calculate that by building a ViewProjectionMatrix manually, based on an arbitrary camera position, FOV, orientation, etc...  But I feel like that might be way over-complicating the problem, not to mention a LOT heavier on the GPU (matrix multiplication, etc).

    Since you already know the "camera position" (whether it be an actual camera, or even just an arbitrary helper widget or whatever), just feed that to the material, either as a MaterialParamCollection or directly as a param to a DynamicMaterial, and then just use the pixel WorldPosition (which should be correct, regardless of the "camera"/location you're rendering from) and calculate the worldspace eyeVector on the fly:
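    Something like this (a sketch - CamPos is whatever you fed in, WorldPos comes from the Absolute World Position node):

        // world-space eye vector from the "camera" through this pixel
        float3 eyeVector = normalize(WorldPos - CamPos);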



    It's entirely possible I'm missing a specific reason you'd need to factor in "Camera"Vector, ScreenSpacePos, etc...  but based on your description, it doesn't seem like you need to factor in your "camera"Vector at all.  And if for some reason you do, just feed the "camera" Front(X)Vector into the same MPC (in the BP) and access that the same way (in place of the intrinsic CameraVector node).
  • sprunghunt
    edoublea said:
    You already have the camera vector (the vector of the camera's local X-axis)... so I assume you mean constructing the Eye Vector (aka Ray Vector: the vector of each individual pixel)? [...]

    You don't need to feed the camera position into a shader using Blueprint - you can access it directly from inside the shader using the CameraPositionWS node.

    https://docs.unrealengine.com/en-US/Engine/Rendering/Materials/ExpressionReference/Coordinates/index.html

  • Obscura
    Thanks for the help guys, I appreciate it. But neither the world position nor the camera position works - nor the camera vector. This is because I'm not rendering to the screen; basically nothing camera- or world-related works. I'd need the Shadertoy method to work inside Unreal: construct both the camera vector and the camera position from the UVs and a fed-in camera position (like in the example image). The problem is that I don't understand matrices, so I can't get the camera rotation to work properly.
  • edoublea

    sprunghunt said:
    You don't need to feed the camera position into a shader using Blueprint - you can access it directly from inside the shader using the CameraPositionWS node.

    https://docs.unrealengine.com/en-US/Engine/Rendering/Materials/ExpressionReference/Coordinates/index.html

    >.<   Yes, of course, but please read his previous posts...

    He's doing raymarching on the GPU, and is refactoring his system to render into a RenderTarget buffer, instead of directly to the screen.  When you're rendering into a buffer, the CameraPosition (as well as all other camera-specific nodes) do not work, so you must feed data to the shader manually.
  • edoublea
    Obscura said:
    Thanks for the help guys... I'd need the Shadertoy method to work inside Unreal: construct both the camera vector and the camera position from the UVs and a fed-in camera position. [...]
    Okay, so you're not even wanting to use ANY proxy geometry in the scene at all.... doing it purely with data that doesn't exist in worldspace at all?  Hmmmm...   Yeah, in that case you probably will need to go the "build a ViewProjectionMatrix" route anyway.  Same workflow applies tho, you should just be able to feed the "camera" Front, Right, and Up vectors into an MPC, and then reconstruct that matrix in the shader and mul() it against the viewspace eyeVector, which should give you the same vector in worldspace.  (I might have my matrix math backwards there, it might be an InverseTransform)
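    For what it's worth, since that basis is orthonormal, the inverse is just the transpose, so getting the mul() order "backwards" amounts to applying the inverse transform. A rough sketch, with illustrative vector names:

        // rows of the matrix are the camera's world-space basis vectors
        float3x3 camBasis = float3x3(frontVec, rightVec, upVec);
        // mul(v, M) treats v as a row vector, which equals mul(transpose(M), v)
        float3 worldRay = mul(viewRay, camBasis);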
  • Obscura
    Here is a bit more info. I think I got the non-rotated camera vector right. But after it goes through the rotate vector material function (with the camera direction fed in from Blueprints) to align it to the player view, something goes wrong, and when I make a 180-degree turn, it stretches. It's possible that even the input of the rotate vector isn't fully correct. It seems to do this only on one axis, which would suggest that one axis of the made-up camera vector isn't correct. I'm unsure. Maybe I should use another node to rotate it?
  • edoublea
    Obscura said:
    after it goes through the rotate vector material function (with camera direction fed in from blueprints) to align it to the player view, something goes wrong
    Could you post a screenshot of this portion of the shader?  I think I know what might be going wrong.
  • Obscura
    @edoublea I'll show what I have currently, and if you could show me how to do what you described, that'd be great. Like I said, I'm not really good with matrices.

    So this is what I have currently. It kinda works on two axes, and it does produce some sort of Y (I also want to keep how X-Y-Z are oriented and not go the OpenGL way), but when I turn from Y+ to Y- it shows extreme stretching.


    Yeah, I don't want to use any proxy geo. I believe it's doable without that, since Shadertoys are done this way.

    My main issue is with the camera vector. Once I have that, extending it to output the world position at the camera lens should be straightforward. As in my example, simply adding the position to the camera vector seems to work.
  • Obscura
    I also tried to construct a proper Z, but the results were not much different - different stretching, basically.
  • edoublea
    Yeah, the thing I suspected it might be, it was not (some people use the "Rotate About Axis" node to rotate a unit vector, not realizing that node is meant to be used with WPO and strips out the current position, which screws up the results when people try to use it to rotate arbitrary vectors). But yeah, not the case here.

    I'm doing some tests now to see if I can provide better information (this is very relevant to my own interests as well, so it's no trouble).
  • Obscura
    Thanks, you are the man.


  • edoublea
    Holy hell, this one sucked (for several stupid reasons), but I think I have it working.  Let me see if I can summarize:

    First things first, in a BP, you'll need to kick out some basic "camera" data. Again, this doesn't have to be an actual camera; it could be a helper widget, or a mesh position, or whatever. But you'll want to store the following in an MPC: (Position, XVector, YVector, ZVector):



    Then, in the shader, sample those back in from the MPC.  Use the CamPos like you normally would, and then use the XYZ Vectors to build a transform matrix like so, and use that to reorient your UV-based RayVectors:



    You'll notice I had to swap the X and Z vectors when building the transform matrix, as using them as-is was causing ray directions to come out of the "top" of my camera.... so be aware you may need to swizzle these depending on what your desired "front" orientation is.

    That last ConstBiasScale probably won't be necessary the way that you are using this. The vector coming out of Transform3x3Matrix _should_ just be your correct WS vector. In my test case, I was actually rendering the vectors directly into a RenderTarget to verify they were being processed correctly, and ran into a few snags. Mainly, it seemed like the buffer was REALLY having a hard time with negative values (even tho my buffer was RGBA16f, which is supposed to support full 16-bit FP values, including negative ones... but WHO KNOWS). To work around that, I re-framed all my buffer values into 0-1 range (hence that last ConstantBiasScale).

    Anyway, in order to verify everything was working, I added a debug function to the BP that just re-samples the buffer I wrote out, using the ill-advised "Read Render Target Raw UV" node (which is suuuuper inefficient), to sweep over the render target, and then draw debug rays using the buffer-value ray directions (after converting them back to proper [-1,+1] scale, of course).  All the values are coming out solid, and it works in all directions and orientations:



    Hope that helps some!

    -eaa

  • Obscura
    Ooh, I'm gonna try this out now. Negative emissive needs to be enabled separately though. It's a material property, and it only works in unlit mode.
  • edoublea
    Oh, good tip!  Unfortunately, even after that change, the negative-values thing still persists.  I suspect it's an issue with the "Read Render Target" node... seems like it refuses to return negative values, even when the Material and RT are both set up to support them.

    ¯\_(ツ)_/¯
  • Obscura
    Unfortunately, this still isn't working. If you draw this as a post process and compare it to the camera vector node's output, you'll see that they look different. It's easiest to inspect one axis at a time. For example, if we display the Y of the camera vector, we see a radial gradient (two, actually, but the other side is negative so it's not visible). If we turn the camera, the radial gradient moves according to the camera rotation. The output of this doesn't move that way - it spins, or I don't know how else to describe it. I made an example video:
    https://www.youtube.com/watch?v=GCbGEYZRbnU&feature=youtu.be

    Swizzling the axes doesn't help. I also tried different value ranges, but it's pretty much the same.

    Thanks for the help though, I really appreciate it.
  • edoublea
    Ahh, bummer.  Yeah, keep us posted on how it goes.  I'll be curious to see what solution you land on.
  • Obscura
    Still no results, but it looks like the X and Y are correct - they look the way they should if I don't normalize them. But the Z is doing bullshit, so simply using 1 as Z is probably what breaks this.
  • edoublea
    Got it working better (tho I can't sufficiently explain why this works... matrix math is not my strong suit).
    Try this matrix construction order:
    float3x3(vecY, -vecZ, vecX)

    I'll post more, hold on....
  • edoublea
    Alright, so in an effort to figure this out (like I said, I need to do this at some point as well, so I'm glad to have gone thru it), I implemented a simple raymarcher rendering directly in the shader, with no actual camera data involved - using basically the same method we've been discussing.

    The matrix transforms are indeed being fuckier than expected, and I don't understand why the 3 matrix components end up needing to be swizzled in such weird ways, but it seems to be working pretty solidly at the moment.

    All the BP is doing is filling in MPC data as shown previously - but no more render-to-texture. Everything just lives right in the shader now (aside from the "camera data", which is still fed in via the MPC). This is the material; it doesn't get much simpler:


    The custom node is just a basic raymarcher doing some sphere crap.  That's not the important part.  The only thing relevant to this discussion is the handling of the eyeRay LocalSpace to WorldSpace transformation:



    Using that weird-ass matrix ordering, it works as expected.
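    In custom-node form, the whole eyeRay construction boils down to something like this (a sketch: the UV-to-ray mapping and FOV handling are illustrative, and only the matrix ordering is the part confirmed above; vecX/Y/Z come in via the MPC):

        float3 ComputeWorldRay(float2 uv, float3 vecX, float3 vecY, float3 vecZ,
                               float tanHalfFov)
        {
            // ray through this pixel in "camera" space; signs/flips may need
            // adjusting for your UV convention, as discussed earlier
            float2 ndc = (uv - 0.5) * 2.0;                // [0,1] UV -> [-1,1]
            float3 localRay = normalize(float3(ndc * tanHalfFov, 1.0));
            // the weird-but-working ordering: float3x3(vecY, -vecZ, vecX)
            float3x3 camToWorld = float3x3(vecY, -vecZ, vecX);
            return normalize(mul(localRay, camToWorld));
        }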

    Here's the result.  The white sphere is just a location marker for the origin (so I knew where the raymarch effect would be located), and the quad in the back is literally just a quad with the material applied, so I could see it in scene.  Obviously, rendering that to a RT buffer instead would be trivial, and wouldn't change the shader at all:
    http://youtu.be/q8oB2iIeUPM

    Try that and see if it works for you.

    -eaa

  • Obscura
    I got your last method to work!!!

    https://www.youtube.com/watch?v=2KYbHsiHmYo&feature=youtu.be

    That's awesome, many thanks Eric! I'll post some updates soon. I can start doing half-res lighting, denoising, and depth-based post processes, such as DOF using the raymarched scene depth.


  • edoublea
    Obscura said:
    That's awesome, many thanks Eric! I'll post some updates soon. I can start doing half-res lighting, denoising, and depth-based post processes, such as DOF using the raymarched scene depth.

     Great!  Glad it helped.  Can't wait to see more.
  • Obscura
    Yep, that was clear, but thanks.
  • Obscura
    My note on this: since I'm rendering into a render target created by Blueprints or by hand, and not into a native buffer, the View Property node doesn't work - you need to use the Texture Property node instead and hook the render target up to it. I guess the View Property node wouldn't work anyway, for the same reason the other camera-related nodes don't. I'm just guessing on that last one, I haven't actually tried.
  • Obscura
    Found a cool way to get real-time performance with many SDF shapes:
    - Make a data texture out of placeholder objects from the scene. Store type, position, and size.
    - Bake all primitives into a volume texture in one pass, by looping through the data texture. Store SDF or more...
    - Sphere trace the volume texture.

    This way all objects still need to be evaluated together, but only once, when the volume gets baked - instead of many times, as when they are not in textures. If you still want higher quality, you could switch to the analytic (math) version when you are close to the surface, but with many objects that would cause very low performance.
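    A sketch of that bake pass, run once per voxel of the volume texture (PrimCount and the sphere-only packing are made up for illustration):

        // Loop over the primitive list stored in the data texture and keep the
        // minimum distance. This runs once at bake time; afterwards the volume
        // is just sphere traced.
        float BakeVoxel(float3 voxelWorldPos, Texture2D dataTex, int primCount)
        {
            float d = 1e8;
            for (int i = 0; i < primCount; i++)
            {
                // one texel per primitive: rgb = position, a = radius
                float4 prim = dataTex.Load(int3(i, 0, 0));
                d = min(d, length(voxelWorldPos - prim.xyz) - prim.w);  // sphere SDF
            }
            return d;
        }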
  • Obscura
  • rollin
    Hehe.. I see.. so you can now bool-model directly in-engine ;)
  • Obscura
    Made some optimizations to the data texture writes, so now, in this same scene, I have more than 120 fps even when I update everything on tick (it was around 40 before the optimization). Drawing the distance field texture is still expensive, but I'm not sure what could be done about that. It's a 4K texture...
  • Obscura
    Got colors to work as well.

  • Obscura
    It can also be used kinda like Clayxels, if you've heard of it... It's a similar SDF modeler thing, but for Unity.

    This is a building made out of SDF shapes, processed into a volume texture for real-time display.
    A cool thing about this is that later I can add the fancy lighting and reflections, so these can be used better in more complex scenes.



    I need to add "insert, remove, add" functionality to the data texture baking.
  • Obscura
    And with the usual stochastic skylight:
