Be sure to exclude the mesh with your shader applied from the GDF (so you don't get false positives from it sampling itself). [Disable "Affect Distance Field Lighting" on the Actor.]
5 rays with 20 max steps costs around 2.5 ms at 1080p. It could be optimized a little by using a lookup texture for the random directions instead of cell noise, but the global distance field samples are much more expensive than the cell noise itself...
I'm wondering if the initial hits can be reduced by calculating the normal of the SDF first, offsetting along that, and then using the world normal.
I didn't find an option to directly increase the resolution of the global SDF, but I found the view distance option, and pulling it closer makes things somewhat higher-res near the camera. It's still fairly low-res, though.
Thanks, I'll check that out at the next opportunity. I have to say that "ao" appearing in the command makes me kind of sceptical, since I'm using this in an indirect way. Hopefully it will yield the result I'm looking for. I also noticed that disabling DFAO in the console stops my code from working.
Yeah, I agree that the parameter names are confusing, but I can guarantee that r.AOGlobalDFResolution controls the overall Global Distance Field (regardless of what samples it).
Having to wear a cube on my "head" just to see the effect is also ridiculous. Is there any way to fix this?
To recap: if I use a post process that calls the distance-to-nearest-surface function in code, it only works if there is a mesh on screen using the same function as a node. Otherwise I get a black image, as if the global SDF isn't rendered unless a mesh on screen uses it, and post processes aren't taken into account.
In my implementation, the DFAO is a custom skylight done as a post process. So I haven't tried it, but does this work without a skylight? Ideally, I'd also like to disable every other feature but keep the global SDF for my traces. Do you think that's possible? Like I said, disabling DFAO from the console basically disables the global SDF.
Yes, it works fine without a skylight.
Not certain about the second part... I just tested locally, and I can absolutely set ShowFlag.DistanceFieldAO to "0" (which disables the visible DFAO effect), and the GDF sampling still works fine. So I'm not sure why yours seems to be getting disabled.
Cool. I find the custom DFAO to be pretty good even at a low sample count. If you've been following the more recent posts, you can see that I use a method of sphere tracing multiple light rays when an SDF medium is available; otherwise you can use a uniform step size. I use a cell noise function to generate random ray directions per pixel within a 180-degree cone. This setup requires a nested loop: light rays multiplied by the sphere trace max step count. My recent results use 5-6 light rays and 10-20 max steps. If max steps are reached without a hit, I sample the sky texture at some mip. Ray results are averaged at the end to get the final skylight value. The AO radius is the max "t".
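A minimal CPU sketch of the loop described above, written in Python for readability (the original is shader code). `scene_sdf`, `sky`, and the rejection-sampled hemisphere directions are illustrative stand-ins for the global SDF, the sky texture mip lookup, and the per-pixel cell noise.

```python
import math, random

def scene_sdf(p):
    # Hypothetical scene: a single unit sphere at the origin.
    return math.sqrt(sum(c * c for c in p)) - 1.0

def sky(direction):
    # Placeholder sky lookup: brighter toward +z.
    return max(direction[2], 0.0) * 0.5 + 0.5

def random_hemisphere_dir(normal, rng):
    # Stand-in for the per-pixel cell noise: a uniform random direction,
    # flipped into the hemisphere around `normal` (a 180-degree cone).
    while True:
        d = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        if 1e-6 < length <= 1.0:
            d = [c / length for c in d]
            break
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = [-c for c in d]
    return d

def skylight(pos, normal, num_rays=5, max_steps=20, ao_radius=4.0,
             eps=1e-3, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_rays):
        d = random_hemisphere_dir(normal, rng)
        t = eps * 4.0          # start slightly off the surface
        hit = False
        for _ in range(max_steps):
            p = [pos[i] + d[i] * t for i in range(3)]
            dist = scene_sdf(p)
            if dist < eps:
                hit = True
                break
            t += dist          # sphere trace: step by the SDF value
            if t > ao_radius:  # AO radius is the max "t"
                break
        # Occluded rays contribute nothing; escaped rays sample the sky.
        if not hit:
            total += sky(d)
    return total / num_rays
```

The nested loop (rays times max steps) is exactly the cost structure mentioned above: 5 rays x 20 steps is up to 100 SDF samples per pixel.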
Oh yeah, I've been following along. I've been doing raymarching tests in Unreal for a few months, and I'm super impressed by the stuff you've been posting. Haven't gone down the fractal structure or AO-sampling rabbit hole yet, but I'm sure I'll be doing that eventually. Great work.
This worked wonderfully, although setting it to 512 nearly fills my video memory, and that's on a 2080 Ti :'D So it should be used with caution. Hoping to see 32 GB consumer video cards in the upcoming releases :P
A discovery: using the volume texture asset (you can create a volume texture asset from your regular texture), you get seam-free tiling by default with those pseudo-volume textures. This opens up a lot of interesting opportunities with volumetric fog and ray marching in general.
Partially. You can make any of them tile, except simplex, if they were baked inside Unreal. Yeah, those kinds of pseudo-volume textures. And now you can convert them to seamlessly tiling textures instead of going through the pain of this: https://shaderbits.com/blog/tiling-within-subuv-or-volume-textures Basically, by default, because the slices sit next to each other on the texture, you get incorrect interpolation across the edges of the slices. The volume texture asset solves this without doing the things described in the link.
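For context, a rough sketch of the UV math a pseudo-volume lookup has to do (the function name is illustrative, not from the article). Because the slice at `slice_index` sits directly next to its neighbors in the 2D atlas, hardware bilinear filtering near a tile edge pulls in texels from the wrong slice, which is exactly the seam a true Volume Texture asset avoids by filtering in 3D.

```python
def pseudo_volume_uv(u, v, w, slices_per_axis):
    # Map a 3D coordinate (u, v, w in [0, 1)) onto a 2D flipbook texture
    # laid out as slices_per_axis x slices_per_axis tiles.
    n = slices_per_axis
    slice_count = n * n
    zf = w * slice_count
    slice_index = int(zf) % slice_count
    tx = slice_index % n          # tile column
    ty = slice_index // n         # tile row
    uv = ((tx + u) / n, (ty + v) / n)
    frac = zf - int(zf)           # blend factor toward the next slice
    return uv, slice_index, frac
```

A correct pseudo-volume sample does two 2D lookups (slice and slice + 1) and lerps by `frac` manually; the seams come from the hardware 2D filter bleeding across tile borders inside each lookup.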
Having the classic issues with refractions in ray marching at the moment. Does anyone have some good resources, excluding the Ray Tracing in One Weekend series? I already checked that out, and I'm not sure how it's supposed to fix my issue. I added the part that flips the normal direction on exiting the medium, but I still have the outline error. I also tried the default "refract()" function and custom refraction functions, but the results are similar. Also, when I debug the refracted ray directions, it turns out that the final ray direction on the wrong pixels has a value of 0! Why would that be?
Thanks. It's starting to look like something. Somehow it was expecting the IOR to be in the 0-1 range instead of bigger than 1, and it gives the correct result that way, so I was looking at results with the wrong IOR before; that's why I thought it was wrong. Not sure why that is, but whatever. I'll add some absorption and such soon, plus some different primitive types for better showcasing.
Ray marching GI experiment... It's the skylight from the earlier posts, but it got bounces and it picks up some color. It isn't physically accurate, but it's a step forward. I also got a system working where you drag and drop an actor into the level and a new SDF shape automatically gets added to the shader, and you can then set things such as the primitive type, color, size, and more. This is really cool compared to the hard-coded objects. I will need to add some acceleration structure soon though; populating the SDF scene on a per-object basis isn't cheap... That way I could also get away with not sampling anything in empty space. I was thinking about a 3D grid with an object list lookup texture, because that sounds relatively easy.
So I was thinking about the acceleration structure, and I made some initial tests. After all, I don't think a uniform grid is a horrible idea for ray marching. I have something in mind: a float2 volume texture grid that holds start and end coordinates into a 2D texture containing the object lists (objects in cells). For example, if a cell has a box and a sphere in it with object IDs 1 and 2, then when our ray is in that cell we look up a third texture, which is the actual object list of the world. There would also be a lot of empty cells. With traditional uniform grid traversal, we would sometimes need to take several steps to reach objects if the grid is dense. So maybe instead of float2, the volume grid could be float3, with the third channel holding a rough distance field representation of the bounding boxes, extended to match the grid. This is still kind of bad when the ray is nearly parallel to surfaces, but we get the benefit of sphere tracing when traversing the object list index volume. I would expect a great improvement in performance with math shapes, especially when there are many. I'm hoping triangle meshes will reach an acceptable speed; hopefully we only cover a few tris per cell. The GPU normal map baking implementation by Nvidia from an earlier page of this thread is similar.
Would ray-traced caustics just be the refraction applied when tracing the light through a translucent object? Because then it would be super easy to add, since everything it needs is already implemented.
Anyway, I started preparing the uniform grid stuff. I'll generate the necessary textures using Blueprints and render targets; there is a material I can use to write pixels one by one. The object list should be generated first (all objects inside the grid), then the grid volume texture and the per-cell object lists. The volume texture will contain a start and end pointer (float2) into the second 2D texture. I'll also try to add the bounding box distance field and put it in the third channel of the 3D texture. So far I have the texture writer material and Blueprint functions, the generation of the grid center points, and a function that returns the object list of a given cell.
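A CPU sketch of that construction pass under the same assumptions (all names are illustrative): for each cell, append the IDs of every AABB-overlapping object to one flat list and store the (start, end) range per cell, mirroring the float2 pointer into the 2D object-list texture.

```python
def build_grid(objects, grid_res, bounds_min, cell_size):
    # objects: list of (object_id, aabb_min, aabb_max) tuples.
    # Returns (cells, flat_list): cells[i][j][k] holds a (start, end)
    # range into flat_list, like a float2 volume texture pointing into
    # a 2D object-list texture.
    cells = [[[None] * grid_res for _ in range(grid_res)]
             for _ in range(grid_res)]
    flat_list = []
    for i in range(grid_res):
        for j in range(grid_res):
            for k in range(grid_res):
                cmin = [bounds_min[a] + [i, j, k][a] * cell_size
                        for a in range(3)]
                cmax = [c + cell_size for c in cmin]
                start = len(flat_list)
                for oid, amin, amax in objects:
                    # Axis-aligned box overlap test per cell.
                    if all(amin[a] <= cmax[a] and amax[a] >= cmin[a]
                           for a in range(3)):
                        flat_list.append(oid)
                cells[i][j][k] = (start, len(flat_list))
    return cells, flat_list

def objects_in_cell(cells, flat_list, i, j, k):
    start, end = cells[i][j][k]
    return flat_list[start:end]
```

An object spanning several cells simply appears in each of their ranges, which is why the flat list (the "2D texture") can be longer than the object count.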
I did some tests using a simple volume texture "cache" of an expensive SDF, in this case the mandelbulb. I used a 256^3 volume texture (I could probably use lower) to store a rough distance, and switch to the math-based one when the ray is near the isosurface. I got about 3x the fps, from 70 to 200! This is without fancy lighting of course... but it's with a much higher iteration count on the mandelbulb: 20 instead of 10.
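A sketch of that caching scheme with a cheap stand-in field (a sphere instead of the mandelbulb; the bake resolution, bounds, and switch distance are made-up parameters): bake a coarse grid of distances, shrink the cached value conservatively by half a cell diagonal, and only evaluate the exact field when the ray gets near the isosurface.

```python
import math

def exact_sdf(p):
    # Stand-in for the expensive field (the mandelbulb in the post):
    # a unit sphere at the origin.
    return math.sqrt(sum(c * c for c in p)) - 1.0

def bake_cache(res, half_extent):
    # Bake a coarse "volume texture" over [-half_extent, half_extent]^3.
    cell = 2.0 * half_extent / res
    cache = {}
    for i in range(res):
        for j in range(res):
            for k in range(res):
                center = tuple(-half_extent + (n + 0.5) * cell
                               for n in (i, j, k))
                cache[(i, j, k)] = exact_sdf(center)
    return cache, cell

def cached_sdf(p, cache, cell, res, half_extent):
    idx = tuple(min(res - 1, max(0, int((c + half_extent) / cell)))
                for c in p)
    # Conservative: the nearest-voxel value can be off by up to half
    # the cell diagonal, so subtract that to keep steps safe.
    return cache[idx] - cell * math.sqrt(3) * 0.5

def hybrid_sdf(p, cache, cell, res, half_extent, switch_dist=0.2):
    d = cached_sdf(p, cache, cell, res, half_extent)
    if d > switch_dist:
        return d            # far from the isosurface: cheap cache is enough
    return exact_sdf(p)     # near the surface: fall back to the exact field
```

Most steps of a sphere trace happen in empty space, so the expensive field is only evaluated for the last few steps near the surface, which matches the 3x speedup described above.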
Holy poop! That is looking fantastic. Nice work on the volume-cached "LOD" trick, too. I realize it wouldn't work with the volume-texture-caching enabled, but I would love to see a short clip of that thing with a couple of the params animated... that's stunning.
It actually would. Drawing it once per frame to a render target turns out to be cheap enough that we can render both while still getting higher fps than before. I would guess that only a quarter of the steps or fewer evaluate the actual math one. The LOD iso offset can be adjusted; I've set it pretty close to the actual surface in these tests.
While I was working on the grid thing today, I also made a tool that turns a polygon mesh into a signed distance field volume texture that is visible as an asset. Not very useful in this context, but maybe I can use it somewhere else later. These are the normals of the generated SDF.
I know what you're thinking, but unfortunately, drawing the triangles of even this low-poly mesh cripples performance like crazy, so I need to stick with some proper acceleration so that only a few tris get evaluated per step. In my original plan there was a rough SDF in one of the volume texture channels anyway.
So unfortunately there is no easy way to get all intersecting faces within a volume, so I kind of need to brute-force it and test them one by one. That makes the grid construction stage much slower, but until I find a better way, it'll do. The hit result of Blueprint traces only returns the first face that was hit, even when I use a multi trace.
The good news is that a mesh with ~3000 tris is still scanned very fast (though too slowly to call it real time). I don't really mind for now, as long as the actual grid turns out to be as much faster as I expect; I can optimize the construction stage later. Construction speed can also be traded for framerate. This is done by using a timer and testing one face per tick: a large interval distributes the construction across many frames, so fps stays high but the triangle array is iterated very slowly, while a tiny interval forces more faces to be processed within one frame, at the price of losing fps.
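That timer-driven trade-off can be sketched as a simple batched iteration (a Python generator standing in for the Blueprint timer; the batch size plays the role of the timer interval, and the names are illustrative):

```python
def make_incremental_scanner(faces, faces_per_tick):
    # Each tick processes at most faces_per_tick triangles, spreading
    # grid construction over multiple frames instead of stalling one.
    def scan():
        processed = 0
        for start in range(0, len(faces), faces_per_tick):
            batch = faces[start:start + faces_per_tick]
            # Real work would go here: test each face in `batch`
            # against the grid cells.
            processed += len(batch)
            yield processed          # one yield per "frame"/timer tick
    return scan()
```

Raising `faces_per_tick` finishes the scan in fewer frames but costs more per frame, which is exactly the fps-versus-construction-time knob described above.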
Also, does anyone know a way to generate UVs for SDF triangles?
Not certain about generating "true" UVs per se... but I usually just use the final raymarch position and normal to drive basic tri-planar mapping. May not be what you're looking for, though. And obviously that falls apart under any sort of animated deformation.
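For reference, a minimal sketch of that tri-planar idea (Python pseudocode of the shader math; `sample2d` stands in for any 2D texture lookup): project the final raymarch position onto the three axis-aligned planes and blend the three samples by the absolute normal components.

```python
def triplanar_sample(sample2d, pos, normal, scale=1.0, sharpness=1.0):
    # Weight each axis projection by |normal| raised to a sharpness
    # power (higher sharpness tightens the blend regions).
    ax = [abs(n) ** sharpness for n in normal]
    total = sum(ax) or 1.0
    w = [a / total for a in ax]
    x, y, z = (c * scale for c in pos)
    sx = sample2d(y, z)   # projection along the X axis
    sy = sample2d(x, z)   # projection along the Y axis
    sz = sample2d(x, y)   # projection along the Z axis
    return w[0] * sx + w[1] * sy + w[2] * sz
```

On a face whose normal is axis-aligned, only one projection contributes, so flat surfaces get an undistorted planar mapping; the blending only matters on slanted areas.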
Replies
@edoublea - Thanks for the help, but I'm applying the shader as a post process.
radius 200
radius 700
radius 1500
Not good, not horrible.
My stuff has been more focused around integrating pure raymarched assets in and amongst "regular" geometric assets:
https://twitter.com/edoublea/status/1169812517574434817
Increasing ior from left to right.
The bounces.
With the volume texture "lod" method, 4k 60 fps is possible with simple lighting on a 2080 ti.
https://forums.unrealengine.com/unreal-engine/feedback-for-epic/71860-multi-line-trace-that-returns-all-hit-surfaces-not-just-actors
"RaycastMulti only returns 1 hit result back and removes all of the rest of the results, it even says it in the source code:
// Now eliminate hits which are farther than the nearest blocking hit, or even those that are the exact same distance as the blocking hit,"
So basically this should be used to find multiple hit actors along the trace, and not multiple hit points of one actor.