Ok, this is really some freaky stuff... So you are bound to a certain volume - which you have to define first - and in there you have volumetric math shapes that you process with a compute shader to generate a mesh?! Which can then be normally lit and shaded?
Almost. I process things in an area, put them into a volume texture (color and distance field), and then render the volume texture as a regular distance field.
So essentially, a voxel texture gets created, and then I output data from a ray marcher that the regular Unreal shader can use: world normal, color, etc.
So this is what I do:
1. Have a blueprint class that can be used for placing primitives. This class will also contain basic info about the primitives, such as color, blend softness, etc. Have some of these placed in the level.
2. Have a second blueprint class that can be placed in the level as an area (box volume). Upon a transform change of any of the intersecting primitives, or when a new one is placed, gather info about them and store it in a data texture so a shader can actually use it. The texture this makes looks like this; one row represents one primitive's data:
3. Have a shader that takes this data texture as input, spawns math-based distance field shapes, and bakes them into a volume texture. At this point, the color also gets rendered (a sketch of this bake step follows the list):
distance field volume:
color volume:
4. These 2 textures can be used to drive a regular sphere-tracing-based ray marcher (sketched after this list). So render the result using ray marching:
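To make steps 2 and 3 concrete, here is a minimal sketch of the bake pass, assuming sphere primitives and a polynomial smooth minimum for blending. The row layout (position + radius in one pixel, color + softness in the next) is just an illustration; the real texture can pack whatever per-primitive data the shader needs:

// Hypothetical compute pass: one thread per voxel of the target volume.
// Each row of PrimitiveData describes one primitive:
//   pixel (0, i) = position.xyz, radius   pixel (1, i) = color.rgb, blend softness
Texture2D<float4>   PrimitiveData;
RWTexture3D<float>  DistanceVolume;
RWTexture3D<float4> ColorVolume;
int    NumPrimitives;
float3 VolumeResolution;

// Polynomial smooth minimum (Inigo Quilez); h doubles as a color blend weight.
float2 SmoothMin(float a, float b, float k)
{
    float h = saturate(0.5 + 0.5 * (b - a) / k);
    return float2(lerp(b, a, h) - k * h * (1.0 - h), h);
}

[numthreads(4, 4, 4)]
void BakeVolume(uint3 voxel : SV_DispatchThreadID)
{
    float3 p = (voxel + 0.5) / VolumeResolution; // voxel center in 0..1 volume space
    float  dist  = 1e10;
    float4 color = float4(0, 0, 0, 0);
    for (int i = 0; i < NumPrimitives; i++)
    {
        float4 posRad    = PrimitiveData.Load(int3(0, i, 0));
        float4 colorSoft = PrimitiveData.Load(int3(1, i, 0));
        float  d  = length(p - posRad.xyz) - posRad.w;        // sphere SDF
        float2 sm = SmoothMin(dist, d, max(colorSoft.w, 1e-4));
        dist  = sm.x;
        color = lerp(float4(colorSoft.rgb, 1), color, sm.y);  // h = 1 keeps the old color
    }
    DistanceVolume[voxel] = dist;
    ColorVolume[voxel]    = color;
}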
The normals can be computed from the distance field directly. In this example, I used the standard lit shader and hooked up the outputs of the ray marcher.
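For completeness, the step-4 tracer and the normal reconstruction could look roughly like this (central differences over the baked field; DistanceVolume and the 0..1 volume space carry over from the sketch above):

// Sphere tracing the baked volume; declarations repeated so this stands alone.
Texture3D<float> DistanceVolume;
SamplerState     VolumeSampler;

float SampleSDF(float3 p)
{
    return DistanceVolume.SampleLevel(VolumeSampler, p, 0);
}

float3 SDFNormal(float3 p)
{
    // Central differences; the epsilon is about one voxel of a 512^3 volume.
    float2 e = float2(1.0 / 512.0, 0.0);
    return normalize(float3(
        SampleSDF(p + e.xyy) - SampleSDF(p - e.xyy),
        SampleSDF(p + e.yxy) - SampleSDF(p - e.yxy),
        SampleSDF(p + e.yyx) - SampleSDF(p - e.yyx)));
}

bool SphereTrace(float3 rayOrigin, float3 rayDir, out float3 hitPos)
{
    float t = 0.0;
    for (int i = 0; i < 128; i++)
    {
        hitPos = rayOrigin + rayDir * t;
        float d = SampleSDF(hitPos);
        if (d < 0.001)
            return true;  // close enough to the surface
        t += d;           // safe step: nothing is closer than d
        if (t > 1.8)
            break;        // past the diagonal of the unit volume
    }
    return false;
}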
When I want more realistic and less game-ish lighting, I use this custom lighting model that stochastically samples the SDF and a cubemap for sky occlusion:
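I can't say this is the exact model, but stochastic SDF sky occlusion is usually some variant of the following: shoot a handful of random rays over the hemisphere, and only rays that escape the field get to sample the (pre-blurred) cubemap:

// Hedged sketch of stochastic sky occlusion. SampleSDF is the helper above;
// RandomUnitVector is a stand-in for whatever shader RNG you prefer.
TextureCube<float4> SkyCubemap;
SamplerState        SkySampler;

float3 RandomUnitVector(uint seed)
{
    // Cheap hash-based uniform direction on the sphere.
    float a = frac(sin(seed * 12.9898) * 43758.5453) * 6.2831853;
    float z = frac(sin(seed * 78.2330) * 43758.5453) * 2.0 - 1.0;
    float r = sqrt(saturate(1.0 - z * z));
    return float3(cos(a) * r, sin(a) * r, z);
}

float3 StochasticSkyLight(float3 p, float3 n, uint seed)
{
    const int NumRays = 8;
    float3 sky = 0;
    for (int i = 0; i < NumRays; i++)
    {
        // Normal + unit vector gives a rough cosine-weighted lobe.
        float3 dir = normalize(n + RandomUnitVector(seed + i));
        float  t = 0.02;                 // start offset avoids self-intersection
        bool occluded = false;
        for (int s = 0; s < 32; s++)
        {
            float d = SampleSDF(p + n * 0.01 + dir * t);
            if (d < 0.001) { occluded = true; break; }
            t += d;
            if (t > 1.0) break;          // escaped the volume
        }
        if (!occluded)
            sky += SkyCubemap.SampleLevel(SkySampler, dir, 4).rgb; // blurred mip
    }
    return sky / NumRays;
}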
Ah, no. I still render it with ray marching. Yeah, he uses some kind of particles to display the result, or something like that. I honestly don't like the look of that.
Yes, on these last images they are all instances. That is actually why I placed more than one: to see how well it works. And it does. You can apply the shader to any number of boxes...
I should make it so you can select whether you want to modify only the shape, only the color, or both, because currently the color of the cutout shape appears on top. This should be optional.
Hi. It sure would be. The first thing that comes to my mind is the usual marching cubes algorithm. The output would be similar to Blender's voxelization feature. There are other methods too, but this is the simplest. Also, I have never tried doing something like that.
I've been following along with Fake Fluid Sims Week as well... It seems like the popular technique people are using is a vertex shader accessing a read/write compute shader, and manipulating the verts directly. This looks more like a fragment-shader mask approach. Would love some more info, but here's my guess: it looks like you might be accumulating and dissipating a couple of different velocity float values (x and y directions maybe?) in BP, and then using those to drive a sine-wave-perturbed planar mask that pivots around the center. Am I close, or are you getting into some sneaky compute stuff here? How would you approach dealing with the liquid's "top surface" appearance? (My assumption, especially based on your other stuff, would be raymarching, which would work great.)
@edoublea I use "lagged floats" like Ryan. I get the xy linear and angular velocity and choose the larger of the two. I put them in 2 scalar parameters and slowly fade them out when there is no movement. Then I manipulate some sines in the shader. Yeah, I used ray marching.
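Guessing at the shader side, it's something in this spirit (SloshX/SloshY standing in for the two faded scalar parameters; none of this is the actual material):

// Hypothetical slosh surface: a tilted plane around the tank center, with
// traveling sine waves that die out as the lagged parameters fade to zero.
float LiquidSurfaceHeight(float3 localPos, float fillHeight,
                          float sloshX, float sloshY, float time)
{
    return fillHeight
        + localPos.x * sloshX * 0.3                    // tilt from lagged velocity
        + localPos.y * sloshY * 0.3
        + sin(localPos.x * 8.0 + time * 6.0) * sloshX * 0.05
        + sin(localPos.y * 8.0 - time * 5.0) * sloshY * 0.05;
}

// Inside-liquid test used by the ray marcher / mask.
float LiquidMask(float3 localPos, float fillHeight,
                 float sloshX, float sloshY, float time)
{
    return localPos.z < LiquidSurfaceHeight(localPos, fillHeight,
                                            sloshX, sloshY, time) ? 1.0 : 0.0;
}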
Thanks. Yeah, it's pretty convincing, even though it's so fake. I'm thinking about adding more perturbation using vector noise, so maybe I could have overhanging splashes too. I'm not sure yet how well that would work, but I guess I'll find out.
Great results. Are you using Unreal's transparent/lit lighting models, or doing all the lighting yourself in the raymarcher? Curious about the interplay between the "box surface" highlights/reflections and the water surface highlights.
For now it's just quickly thrown together using 2 normals (box outer and liquid surface). I'll have to do it inside the ray marcher to add multiple layers of reflections. Currently I'm adding tank thickness and refraction, so properly implemented multiple reflections and refractions will be there in the next update. I'm experiencing some weird stuff with refract(): an IOR lower than 1 works, while higher than 1 breaks. And I even need to use a flipped normal to get lower than 1 to look correct.
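For what it's worth, refract() in HLSL expects the ratio eta = n1/n2 for a ray going from medium 1 into medium 2, with the normal facing back toward the incoming ray - so entering water from air, the "correct" value is already below 1, which would explain both observations:

// refract(I, N, eta): I = incident direction, N = normal facing the incoming
// ray, eta = n1 / n2. Entering a denser medium means eta < 1.
static const float AirIOR   = 1.0;
static const float WaterIOR = 1.33;

float3 EnterWater(float3 rayDir, float3 outwardNormal)
{
    return refract(rayDir, outwardNormal, AirIOR / WaterIOR);   // eta ~ 0.75
}

float3 ExitWater(float3 rayDir, float3 outwardNormal)
{
    // Leaving the water: flip the normal so it faces the ray, invert the ratio.
    // refract() returns (0,0,0) on total internal reflection.
    return refract(rayDir, -outwardNormal, WaterIOR / AirIOR);
}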
Launched a Patreon page. Nothing there yet, but I'm planning to post "exclusive content" there. It will also serve as an extension of this page, where I go into more detail about the things I post here, including code samples, downloadables, and more... Don't judge; we live off money, and there are a bunch of people who think everything should be free. Some people can afford to give everything away for free, but unfortunately I can't.
I'm experimenting with the texture array asset as a possible alternative to the pseudo volume texture and the volume texture asset. The benefit of this method is that you can achieve a larger 3D resolution, because you don't have the size limits of a 2D texture. The second benefit is that you could theoretically have any axis ratio (1,1,2... 3,1,1... etc.), because there is no connection between the x-y and z resolutions. Reading/sampling performance is fine. These are examples of a 512^3 volume, in a non-volumetric material and in a ray marcher. Z interpolation is manually added, since the texture array doesn't interpolate between the elements. But this is the same with the pseudo volume; if you take a look at its code, you can see that it takes 2 samples and interpolates them to get smooth Z. The resource size of the 512^3 volume using this method and DXT is 128 MB of VRAM, so it's not even that much memory.
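The manual Z interpolation amounts to this (a sketch assuming normalized uvw coordinates):

// Texture arrays don't filter across slices, so take two samples and lerp -
// the same thing the pseudo volume texture code does internally.
Texture2DArray<float4> VolumeSlices;
SamplerState           VolumeSampler;

float4 SampleVolumeArray(float3 uvw, float numSlices)
{
    float slice = uvw.z * numSlices - 0.5; // slice centers sit at half-texel offsets
    float s0    = floor(slice);
    float blend = slice - s0;
    float4 a = VolumeSlices.SampleLevel(VolumeSampler, float3(uvw.xy, max(s0, 0.0)), 0);
    float4 b = VolumeSlices.SampleLevel(VolumeSampler, float3(uvw.xy, min(s0 + 1.0, numSlices - 1.0)), 0);
    return lerp(a, b, blend);
}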
Started to learn ray tracing (not ray marching). In the beginning, I had trouble wrapping my head around the ray-box intersection, but I'm starting to get the hang of it. This is essential, as it's the core of bounding volume hierarchies, which can be used to accelerate triangle ray tracing. Other than that, it can be used to render voxels relatively cheaply, which is exactly what I'm doing as a learning project. This way I can fairly easily make content for it to display by baking stuff into volume textures. So far I've got a basic tracer with eye and shadow rays working.
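For reference, the standard slab-method ray/box test, which is what most tutorials build on:

// Slab method: intersect the ray against the three pairs of axis-aligned
// planes and overlap the intervals. A miss is tNear > tFar or tFar < 0.
float2 RayBoxIntersect(float3 rayOrigin, float3 rayDir, float3 boxMin, float3 boxMax)
{
    float3 invDir = 1.0 / rayDir;  // IEEE infinities handle axis-parallel rays
    float3 t0 = (boxMin - rayOrigin) * invDir;
    float3 t1 = (boxMax - rayOrigin) * invDir;
    float3 tSmall = min(t0, t1);
    float3 tBig   = max(t0, t1);
    float tNear = max(max(tSmall.x, tSmall.y), tSmall.z);
    float tFar  = min(min(tBig.x,   tBig.y),   tBig.z);
    return float2(tNear, tFar);
}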
I'm not sure by how much, but in theory this could be accelerated too by using an octree. Since I'm working with a volume texture, this should be trivial to implement by utilizing the mip maps of the texture. I'll probably look into it soon.
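The idea, sketched very loosely (untested; for occupancy, the mips would need to be max-filtered rather than averaged, so a coarse voxel reads non-empty if any child is):

// Hypothetical empty-space skipping: treat mip M of the volume as octree
// level M. Empty coarse voxels allow big steps; non-empty ones refine.
Texture3D<float> OccupancyVolume;
SamplerState     PointSampler;
static const float VoxelSize = 1.0 / 512.0;

float TraceWithMips(float3 rayOrigin, float3 rayDir)
{
    float t   = 0.0;
    int   mip = 4;
    for (int i = 0; i < 256 && t < 1.8; i++)
    {
        float occ = OccupancyVolume.SampleLevel(PointSampler, rayOrigin + rayDir * t, mip).r;
        if (occ <= 0.0)
        {
            // Crude: step by a fraction of the coarse voxel size. A real
            // version would step exactly to the current voxel's exit (DDA).
            t += VoxelSize * exp2(mip) * 0.5;
            mip = min(mip + 1, 4);
        }
        else if (mip > 0)
        {
            mip--;          // something nearby: descend a level
        }
        else
        {
            return t;       // occupied at the finest level
        }
    }
    return -1.0;            // no hit
}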
Here is a more interesting model, which I exported from MagicaVoxel as slices and then turned into a volume texture asset inside Unreal. This also solves the question of the mip maps I will need for the octree.
What a crazy year...I still managed to accomplish most of the things I wanted, and maybe being at home all the time even helped with this. Here is a short recap of some of the shader stuff I did in 2020.
Is that cloud on the bottom left realtime? It looks cooler than the Microsoft Flight Sim clouds.
Spending the last few and the upcoming weeks with https://twitter.com/FluidNinjaLIVE to bring some fog and cloud options to FluidNinja LIVE! Maybe a custom ray marcher as well, but we will see about that.
I did not watch the whole video - do you intend to use FluidNinja and especially this demonstration for a game? Or is it mere playing around at this point? Asking because this tech demo alone could probably inspire/spawn a few game mechanics. Anyway, cool experiment.
Hi. I am not the developer of FluidNinja Live, I just laid the foundations for the cloud and fog materials. This is not for a game; we are just having fun with these techs.
I also have some different stuff to share. Remember the SDF modeler from the top of this page? I'm revisiting this project, and I actually have a lot of new stuff to share. It's basically a complete rework, so it has different features; some of the original ones are missing, but I have some new ones. The most recent update is that all SDF objects now blend and sort correctly with each other and with polygonal objects. So you could have a few different SDF textures of objects and make a layout like this:
These are made out of 3 SDF textures, and the rendering is done using sphere tracing. I added a lot of new shapes and features to the modeler part, so now it's easy to boolean stuff out. Here is a clearer example of what happens: the box meshes are laid out in the level, and they use an alpha test material that ray marches the SDF texture to render the object.
It outputs position, normal, and pixel depth offset, so you can use the default Unreal shader to shade it. Dynamic shadow maps do not work yet, but SSAO, screen-space shadows, and screen-space reflections work.
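Roughly what the material does (illustrative names; SphereTrace/SDFNormal are the kinds of helpers sketched earlier on this page):

// Inside the box mesh's material: march the SDF from the box surface along
// the view ray, then wire the results into the material outputs.
float3 rayDir = normalize(boxSurfaceWS - cameraPosWS);
float3 hitWS;
bool hit = SphereTrace(boxSurfaceWS, rayDir, hitWS);

// OpacityMask      = hit ? 1 : 0  (the alpha test that clips away empty box pixels)
// WorldNormal      = SDFNormal(hitWS)
// PixelDepthOffset = pushes the written depth from the box surface to the hit,
//                    which is what makes sorting against polygons work:
float pixelDepthOffset = length(hitWS - cameraPosWS) - length(boxSurfaceWS - cameraPosWS);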
So the voxel modeler with smooth voxels is back...And it just became more powerful with the blending of sdf objects.
Replies
And do (or could) all these buildings act like instances, rendered from the same volume texture?
Just great! I'm wondering how useful SDF would be for hard-surface modeling. Would it be possible to polygonize these shapes?
Based on:
https://wallisc.github.io/rendering/2020/05/02/Volumetric-Rendering-Part-2.html
https://www.youtube.com/watch?v=WCmpvJbNMEc&feature=youtu.be
https://www.youtube.com/watch?v=AsyJ9dX-GAY&feature=youtu.be
https://www.patreon.com/kristoflovas
https://blog.demofox.org/2020/05/25/casual-shadertoy-path-tracing-1-basic-camera-diffuse-emissive/
I started a series of blog posts breaking down this project. Here is the first part:
https://www.artstation.com/kristoflovas/blog/RYGb/building-a-voxel-based-ray-tracer-in-unreal-engine-part-1-the-ray-box-intersection-function
You can get it here:
https://gum.co/IRVhk
https://twitter.com/FluidNinjaLIVE/status/1375248650763374592
More random pictures:
https://www.youtube.com/watch?v=_JOgZD0BJZg