@PixelGoat Here's the Material Network. The basic idea is to generate a spherical distance field gradient from the player's location and use it to add a spherical offset to the basic wind math. We could easily reduce the number of instructions, but I haven't bothered with that for now. If you need a more detailed explanation, I'll probably make a blog post if I ever get some free time.
How would you handle it so that it works on all players and not just one? By using a render target centered around the camera and drawing to that using BP?
Honestly, I'm not sure; you would have to find some way to feed the closest player location per instance of the foliage, and that gets into gameplay code that is quite beyond me. Render targets, as I understand it, have a pretty heavy overhead and aren't really feasible in-game. I'm not even sure render targets are an option here, since there are multiple meshes and render targets live in 2D texture space; for this shader I needed a 3D spherical gradient in world coordinates.
And this collection with "TexLoc", is that the texture with your distance field?
"TexLoc" was just a temporary name I had set a long time ago, it is actually "Player Location" fed from the Player Blueprint, the distance field is not a texture, it is a 3 Dimensional gradient generated using math inside the shader, in this case, the Distance Function. "TexLoc" is literally just a vector3 coordinate value that has the player's location in the world.
AssetGen is a free addon that automates the tasks needed to get a game asset ready from a high-poly model. While it normally takes several hours to prepare an asset from a high poly, this addon does it in a matter of a few clicks. It is ideal for all your static assets.
It is developed by Srđan Ignjatović aka zero025 and supervised by Danyl Bekhoucha aka Linko.
First time creating hair with xgen, process so far: https://youtu.be/-Fy7g3g6uCI
@gilesruscoe That looks great. Could you explain how it works? My best guess would be several extruded and masked shells, but it looks definitely better than any fur I have seen using this technique, especially in motion.
Hi guys, I gave this talk and the recording was released. It's about the creative use of textures for things other than color - for example, storing position data in a texture to be able to pause and rewind time in the game (and therefore play back the effect). I hope you'll like it! Note: extra content can be found on the blog post below the video: https://simonschreibt.de/gat/cool-stuff-with-textures/
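As a toy illustration of that position-in-a-texture trick (my own hypothetical sketch, not material from the talk): record a position every frame, then scrub back through the recording with a normalized time value - a texture row would play the role of the list here.

```python
# Toy version of storing position data per frame so an effect can be
# paused and rewound; each entry stands in for one texel of a texture row.
history = []

def record(pos):
    history.append(pos)  # one sample per frame

def sample(t):
    """t in [0, 1]: 0 = start of the recording, 1 = most recent frame."""
    index = int(t * (len(history) - 1))
    return history[index]

for frame in range(60):              # simulate 60 frames of motion
    record((frame * 0.1, 0.0, 0.0))

print(sample(0.5))                   # halfway back in time: (2.9..., 0.0, 0.0)
```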
Following up on a free parallax-based fake interior shader we did a while back with our shader editor at Amplify (Amplify Shader Editor).
Currently working on a new version based on the original paper by Oogst 3D for fake interiors done entirely via shader sorcery. The Unreal implementation by Stefander was an awesome example, definitely recommend it to anyone using UE4. For even more information on how to build something like this in Unreal, check out the old UDK Example.
Interior Mapping in Unity3d
Possible extra features: extra layers for props (people, desks, boxes, etc.), lighting (emission), randomized rooms.
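For anyone new to the technique, here's a rough sketch of the core ray-plane step behind interior mapping, paraphrased in plain Python from my reading of the approach (the real thing runs per-pixel in tangent space inside the shader):

```python
import math

def interior_hit(p, d, room_size=1.0):
    """Interior mapping core: from a point `p` on the facade, march the
    view ray `d` into the building and find the first virtual room
    plane (ceiling/floor, side walls, or back wall) that it hits."""
    best_t = math.inf
    for axis in range(3):
        if d[axis] == 0.0:
            continue                      # ray parallel to these planes
        if d[axis] > 0.0:                 # next room boundary along the ray
            boundary = (math.floor(p[axis] / room_size) + 1) * room_size
        else:
            boundary = math.floor(p[axis] / room_size) * room_size
        best_t = min(best_t, (boundary - p[axis]) / d[axis])
    return best_t                         # shade with the point p + d * best_t

# A view ray entering the facade at (0.5, 0.5) hits the back wall first.
print(interior_hit((0.5, 0.5, 0.0), (0.1, 0.2, 1.0)))  # 1.0
```

Which axis produced the nearest hit decides whether you sample the wall, floor, or ceiling texture, which is also where a "randomize rooms" feature would plug in.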
Yep, you got it, this is a bunch of shells (I think 12 shells in the images, I don't remember). Specifically, it's a geometry shader that dynamically creates the shells and shell spacing depending on parameters. The only thing I'm doing differently is that the shells are offset by a noise vector, giving random direction to the strands; multiplying the colour by the layer step (top layers are brighter than inner layers); and also randomising the length of the strands using a single channel of the noise vector, to give it patches of different length. Besides that there are a few other things on top which are fairly straightforward, like a two-colour gradient over the length of the strands, a mask texture to control different coloured areas, etc.
For the lighting, one addition that I found to give a good result is to do a fresnel effect (N dot V) and add the strand length to it. This gave longer strands more fresnel regardless of angle and makes it look like light scattering through the hair.
The motion is done by rotating each normal around its local XY plane using a 2x2 matrix; this was just plugged into a sine wave for the gif, but I hope to plug the rotational velocity of the mesh into it at some stage.
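Here's a rough CPU-side paraphrase of the shell setup described above (the parameter names and the 0.3 noise weight are my own placeholders; the real version builds this in a geometry shader):

```python
import math

def shell_vertices(pos, normal, noise, num_shells=12, max_len=1.0):
    """Extrude one base vertex into fur shells: each shell is offset
    along a noise-perturbed normal, strand length is randomized from a
    single noise channel, and brightness grows toward the outer layers."""
    shells = []
    length = max_len * (0.5 + 0.5 * noise[0])    # patchy per-strand length
    for i in range(1, num_shells + 1):
        step = i / num_shells                    # 0 at the root, 1 at the tip
        bent = [normal[k] + 0.3 * noise[k] for k in range(3)]
        norm = math.sqrt(sum(c * c for c in bent))
        direction = [c / norm for c in bent]     # noise-perturbed direction
        offset = tuple(pos[k] + direction[k] * length * step for k in range(3))
        shells.append((offset, step))            # step doubles as brightness
    return shells

for position, brightness in shell_vertices((0, 0, 0), (0, 0, 1), (0.2, -0.1, 0.05)):
    print(position, round(brightness, 2))
```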
Calculating a wall-ride trajectory on the fly, and its offset for the character (exaggerated here; the offset algorithm was so fricking hard!!!). Next will be how to tackle the character movement mechanism to make use of the trajectory (which has velocity info embedded in it). We'll see...
I would like to ask you guys whether this idea makes any sense, in your opinion, before I build anything. I think you could technically voxelize a scene by taking slices along +/-X, +/-Y and +/-Z with a scene capture and rendering them into six volume textures, or something like that. Each slice would be a render of a short scene-depth view (1 / steps per side, for example). Then either do a procedural mesh pass that reads pixel values back from the render targets, or just render a volumetric result by raymarching. I would assume generating a mesh gives an overall more optimized result. All unfiltered, so you get planes or cubes when rendering per side.
You could also just read the render target pixel value in the mesh loop. I would think that, done that way, you only need a single render target at 1x1 resolution.
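A minimal sketch of the depth-slice idea in plain Python (the `depth_at` function is a stand-in for sampling a scene-depth capture, and only one of the six directions is shown):

```python
def voxelize(depth_at, steps=8, extent=1.0):
    """Build a boolean occupancy grid by slicing the volume along +Z:
    a voxel is filled once it sits at or behind the first surface the
    capture sees. `depth_at(x, y)` stands in for a render target read."""
    size = extent / steps
    grid = [[[False] * steps for _ in range(steps)] for _ in range(steps)]
    for ix in range(steps):
        for iy in range(steps):
            x, y = (ix + 0.5) * size, (iy + 0.5) * size
            d = depth_at(x, y)                  # first surface from +Z
            for iz in range(steps):
                z = (iz + 0.5) * size
                grid[ix][iy][iz] = z >= d       # behind the surface: solid
    return grid

# Toy scene: a ramp whose depth increases with x.
grid = voxelize(lambda x, y: x)
print(sum(v for plane in grid for row in plane for v in row), "filled voxels")
```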
Optional drop-in mesh helper for 3ds Max: my solution for multiple optional meshes in game, dependent on the game params. The helper has a custom attach constraint that binds it to the mesh that requires the optional geometry (the attach constraint returns all the data needed to fit the optional mesh to the existing geometry). The helper uses a NodeTransformMonitor so it can update when the target changes. The first part shows the helper updating to changes in the target mesh, then I change some of the mesh params in the helper, and finally I change the barycentric position of the helper on the face using the attach constraint. Still have to implement the surface blending properly (positions, UVs, vertex colours and normals).
https://youtu.be/tGugdbAw8MU
to show changing the face index in the attach constraint
https://youtu.be/Iy9fCdcrNSo
Fixed the blend to the target mesh, with the added bonus of flagging the drop-in mesh as invalid if it exceeds the bounds of the faces.
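For anyone curious, this is roughly the barycentric fitting that an attach constraint hands back (a hypothetical Python paraphrase, not the actual MAXScript):

```python
def barycentric_point(face, bary):
    """face: three (x, y, z) corners; bary: (u, v, w) with u + v + w = 1.
    Returns the surface position the helper sits at - the same kind of
    data the attach constraint returns for fitting the optional mesh."""
    (a, b, c), (u, v, w) = face, bary
    return tuple(u * a[i] + v * b[i] + w * c[i] for i in range(3))

# Changing the barycentric coords slides the helper across the face.
tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(barycentric_point(tri, (1 / 3, 1 / 3, 1 / 3)))  # face centroid
print(barycentric_point(tri, (0.0, 1.0, 0.0)))        # snapped to corner b
```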
It works by taking Max's Hair and Fur, creating splines from the guides, converting those to geometry (so they have UVs), then generating a unique grayscale value per strand and flooding the vertex color red channel with it, doing a root-to-tip gradient in the green channel, and a depth gradient in the blue channel.
The vertex colors can then be baked down in your app of choice.
Big thanks to Dennis Lehmann for helping me iron out some issues I was having.
https://www.youtube.com/watch?v=3sbOamdNE2Y&feature=youtu.be
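A small sketch of that channel packing in plain Python (the data layout is my own illustration, not the actual script):

```python
def strand_vertex_colors(num_strands, verts_per_strand):
    """Pack per-strand data into RGB vertex colors as described above:
    R = unique grayscale ID per strand, G = root-to-tip gradient,
    B = depth gradient (here simply strand order, front to back)."""
    colors = []
    for s in range(num_strands):
        strand_id = (s + 0.5) / num_strands            # R: unique per strand
        depth = s / max(num_strands - 1, 1)            # B: depth gradient
        for v in range(verts_per_strand):
            root_to_tip = v / (verts_per_strand - 1)   # G: 0 root, 1 tip
            colors.append((strand_id, root_to_tip, depth))
    return colors

for rgb in strand_vertex_colors(num_strands=2, verts_per_strand=3):
    print(tuple(round(c, 2) for c in rgb))
```

Baking these down gives the ID and gradient masks that hair shaders typically use for per-strand variation.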
In the end, I went with an entirely different approach from what I was writing about with my "voxelizer", because I ran into issues reading pixel values from a render target - it's only possible with static textures... So I ended up using a collision-based approach, iterating a collision box's location to "scan" the volume.
I'll probably switch from instanced static mesh to simple static mesh components, so you can bake it. Unfortunately this doesn't work with instanced ones.
Performance also isn't the best, because currently I iterate through the whole volume without any pre-filtering, so at a resolution of 24 it takes ~170 ms to update. So now I'm looking into using an octree to filter out areas that don't need to be scanned.
Using an octree to skip empty space works nicely; it didn't make it much faster, but at least now it can do any resolution without killing Unreal and the computer. It generates the result over multiple frames. Later I ran into issues with baking it to a static mesh: if I use static mesh components as the source, I again get bad performance from the draw calls, so that's not really an option. Then I tried instanced meshes, but Merge Actors and the convert-to-static-mesh function only see the main instance, and they don't see anything from a procedural mesh. I found this post about the problem: https://answers.unrealengine.com/questions/388977/i-cant-export-or-merge-procedural-mesh.html
So now I'm making an obj exporter
Edit - I did a few more things with the octree and now it's actually very fast. It can process a depth-8 search (256x256x256 resolution) in a few seconds, and it no longer needs to run over multiple frames. Woohoo! I'll upload a new video when I get the obj exporter working.
This is a slowed-down version, for demonstration...
https://www.youtube.com/watch?v=BxSLCHd2fYM
References:
https://en.wikipedia.org/wiki/Octree
https://en.wikipedia.org/wiki/Octant_(solid_geometry)
https://www.youtube.com/watch?v=jxbDYxm-pXg
http://www.merl.com/publications/docs/TR2000-15.pdf
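For readers unfamiliar with the trick, here's a minimal sketch of octree empty-space skipping (generic Python with a stand-in occupancy test; the real version scans with collision boxes in Unreal):

```python
def scan(center, half, is_occupied, depth, out):
    """Recursively subdivide a cube, descending only into octants that
    might contain geometry, so empty space is pruned wholesale."""
    if not is_occupied(center, half):
        return                        # whole octant empty: skip it
    if depth == 0:
        out.append(center)            # leaf voxel: record a hit
        return
    q = half / 2.0
    for dx in (-q, q):                # visit the eight child octants
        for dy in (-q, q):
            for dz in (-q, q):
                child = (center[0] + dx, center[1] + dy, center[2] + dz)
                scan(child, q, is_occupied, depth - 1, out)

# Toy occupancy test: does a box overlap a sphere of radius 0.4?
def sphere_overlap(c, h):
    d_sq = sum(max(abs(c[i]) - h, 0.0) ** 2 for i in range(3))
    return d_sq <= 0.4 ** 2

hits = []
scan((0.0, 0.0, 0.0), 1.0, sphere_overlap, depth=4, out=hits)
print(len(hits), "leaf voxels touched out of", 16 ** 3)
```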
A light breakdown at http://technicalartlead.blogspot.com/2017/10/basic-ibl-shaders-in-hlsl.html
Wow! This is awesome, man. There really needs to be a Max toolset for making game-ready hair. Will this be released for public use?
I don't see why not... I mean, the workflow (at the moment) is basically:
- Use Hair and Fur to create the hair you want, against a backplane (which will ultimately be your UV sheet). In the image above you can see I'm working on the second hair strip.
- Convert the hair to splines. Enable UV generation on the spline, give it whatever thickness you need, enable render-in-viewport, then convert to editable poly.
- Run the script on the selection. You can see on the left where I've done that on hair that's been converted to splines and then to poly.
Then you bake those vertex colors down to a plane.
I'm going to work on making the conversion from hair and fur to the final renderable hair a single click process.
"What are you working on?" Like, nothing I can publically talk about. As usual. Sigh. Cool work almighty_gir - now to fix the rest of the hair authoring pipe?
"What are you working on?" Like, nothing I can publically talk about. As usual. Sigh. Cool work almighty_gir - now to fix the rest of the hair authoring pipe?
Finally got round to doing more work on my object-space cookie cutter modifier for Max: https://youtu.be/piBuF8CdQhc Lots of work left, but it's showing some promise. https://youtu.be/1PwLvG-cIJ4 Animated versions, with a quick test of Max's built-in cut cleanup routines (it seems to just hide all the new edges that weren't in the cut, and doesn't remove any "incidental" verts, so we won't be using that).
Still writing my post on aniso content and shaders, but getting there: http://technicalartlead.blogspot.com/2017/11/achieving-anisotropy.html
Thanks @almighty_gir, dished the goods as much as I can in Achieving Anisotropy; it should help you understand what I've done. As for how, I suggest getting comfortable with the Maya API and Python.
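For context, a minimal sketch of the usual starting point for anisotropic highlights (my own illustration of the general Kajiya-Kay idea, not necessarily what the blog post builds): drive the specular term with the tangent direction instead of the normal.

```python
import math

def kajiya_kay_spec(tangent, half_vec, exponent=64.0):
    """Classic anisotropic (hair-style) specular: the highlight stretches
    perpendicular to the tangent because it is driven by T.H, not N.H."""
    dot_th = sum(t * h for t, h in zip(tangent, half_vec))
    sin_th = math.sqrt(max(1.0 - dot_th * dot_th, 0.0))  # sin(angle T,H)
    return sin_th ** exponent

# A half vector perpendicular to the strand maximizes the highlight;
# one parallel to the strand kills it entirely.
print(kajiya_kay_spec((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # 1.0
print(kajiya_kay_spec((1.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 0.0
```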
You know it's going to be tricky when you have to use visual aids to help debugging: https://youtu.be/t45kIHz3jmk Even when you think you've covered every angle, sod's law dictates an artist will break it!
btw those animated gifs ^^ above are killing my cpu so badly it's nigh on impossible to type.
First stage of cleanup, removing all the incidental verts from the cut: https://youtu.be/Vv7wak9HKT8 My brain is cooked!
Finished the cleanup code: https://youtu.be/0USokCVHpiA It has an added ToPoly modifier applied to remove concave polys and restrict poly degree to 4.
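A guess at what removing "incidental" verts involves, as a hypothetical Python sketch (the actual modifier is 3ds Max code): a vert introduced by the cut is incidental when it is collinear with its neighbours along the cut loop, so deleting it changes nothing.

```python
def is_incidental(prev_pt, vert, next_pt, tol=1e-6):
    """A cut vert is 'incidental' if it lies on the straight line
    between its two neighbours: removing it leaves the shape unchanged."""
    ab = [vert[i] - prev_pt[i] for i in range(3)]
    ac = [next_pt[i] - prev_pt[i] for i in range(3)]
    cross = (ab[1] * ac[2] - ab[2] * ac[1],     # near-zero cross product
             ab[2] * ac[0] - ab[0] * ac[2],     # means the three points
             ab[0] * ac[1] - ab[1] * ac[0])     # are collinear
    return sum(c * c for c in cross) < tol * tol

def cleanup(loop):
    """Drop collinear verts from a closed cut loop in one pass."""
    return [v for i, v in enumerate(loop)
            if not is_incidental(loop[i - 1], v, loop[(i + 1) % len(loop)])]

square = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]
print(cleanup(square))  # (1, 0, 0) is incidental and gets removed
```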
This has not been mentioned here, only in my tweet: https://twitter.com/blackfirestu/status/930120077273452544 This super cool model was made by the awesome 3D character artist Yekaterina Bourykina and is available for free. Here's the link: https://www.artstation.com/artwork/QqeVL
One of my biggest frustrations in Maya was losing my local object orientation when I combined objects. That, and the object being pulled out of its hierarchy... and its pivot automatically being sent to the origin. F' that noise!
I finally figured out how to combine objects while maintaining the object's original rotation, translation, hierarchy, pivot, and name. Rotation was the big breakthrough that I hadn't found in other combine scripts. Now that I've been using it for a while, I'm not sure how I lived without it.
Check out the demo. It comes with an entire suite of about 50 other tools.
https://youtu.be/O2mK7I1Errw?t=85
http://www.thomashamiltonart.com/TTools.html
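A minimal maya.cmds sketch of part of the recipe (my own guess, not the actual TTools code): save the keeper's name, pivot, and parent before the unite, then restore them. Restoring the local rotation - the breakthrough mentioned above - needs an extra step of re-expressing the merged vertices in the keeper's frame, which is not shown here.

```python
import maya.cmds as cmds

def combine_keep_attrs(objects):
    """Combine meshes while restoring the first object's name, pivot,
    and place in the hierarchy (a sketch, not production code)."""
    keeper = objects[0]
    short_name = keeper.split('|')[-1]
    parent = cmds.listRelatives(keeper, parent=True, fullPath=True)
    pivot = cmds.xform(keeper, query=True, worldSpace=True, rotatePivot=True)

    merged = cmds.polyUnite(objects, constructionHistory=False)[0]

    # polyUnite leaves the result parented to the world with its pivot
    # at the origin; put back what we saved from the keeper.
    cmds.xform(merged, worldSpace=True, rotatePivot=pivot)
    if parent:
        merged = cmds.parent(merged, parent[0])[0]
    return cmds.rename(merged, short_name)

# Usage: select the object to keep first, then the rest.
# combine_keep_attrs(cmds.ls(selection=True, long=True))
```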
The hidden cutter (a cylinder on a path constraint) and the base are on a sine wave controller, with the modifier set to difference: https://youtu.be/i2cAZhUN90U
Good stuff! How well does it work when the combined objects have different materials, or when pulling faces off an object that has multiple per-face materials? I've been using a couple of other scripts to avoid materials breaking (usually), but they don't do the fancy things yours does.
@throttlekitty Am I interpreting this correctly: "...an object that has multiple per-face materials"? Are you saying, for example, that you have a face with two materials assigned to it at once, or that you have an object with multiple materials assigned to different faces (but still only one material assignment per face)?
If it's the former, I didn't know you could even do that. I'd love to know how and the context of usage!
If it's the latter, I personally have not seen any issues when combining objects, detaching faces, or cloning faces on objects with multiple material assignments. A lot of the content I deal with has multiple material assignments.
If you have a test case you'd like me to validate, I'd be more than happy to put it through its paces. Cheers!
Slight diversion today: Outline Spline as a modifier, also with a rail option (the real reason, as I need two "rail" splines to generate some specific meshes). Also added a nifty manipulator.
https://vimeo.com/242842322
Finally launched the FREE Fake Interiors Unity shader pack. Check it out, it's fully editable in Amplify Shader Editor (optional). Feedback and requests welcome.
Fake Interiors - Asset Store
Added ray-intersect traversal to my kd-tree code, finding all verts within a prescribed distance from the ray (I use a reverse lookup table to get the faces used by the verts). The 4096-tri teapot averages about 200 low-cost lookups, giving 32-ish faces to test the ray against for the true intersection.
https://youtu.be/i6SlYOaLVVk
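The distance test underneath that traversal looks roughly like this (a hypothetical Python sketch; the kd-tree pruning itself is omitted):

```python
import math

def dist_to_ray(point, origin, direction):
    """Distance from a vert to a ray: project the vert onto the ray
    (clamped so nothing behind the origin counts) and measure across."""
    d = math.sqrt(sum(c * c for c in direction))
    u = [c / d for c in direction]                     # unit ray direction
    w = [point[i] - origin[i] for i in range(3)]
    t = max(sum(w[i] * u[i] for i in range(3)), 0.0)
    closest = [origin[i] + u[i] * t for i in range(3)]
    return math.sqrt(sum((point[i] - closest[i]) ** 2 for i in range(3)))

def verts_near_ray(verts, origin, direction, radius):
    """The cheap broad phase: only faces touching these verts get the
    full ray/triangle intersection test afterwards."""
    return [i for i, v in enumerate(verts)
            if dist_to_ray(v, origin, direction) <= radius]

verts = [(0, 0, 5), (0, 3, 5), (0, 0.5, 9)]
print(verts_near_ray(verts, (0, 0, 0), (0, 0, 1), radius=1.0))  # [0, 2]
```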
Always wanting to reinvent the wheel, and having to create a lot of tri meshes where I have "no control" over where the polys end up, I created my own quadify mesh modifier for Max.
It uses a "squareness" factor and the face edge angle for its decisions; it can also reject concave polys.
https://gfycat.com/GratefulVillainousAnhinga
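A toy version of what such a "squareness" factor could look like (my own guess at a metric, not the actual modifier): score each candidate quad by how close its corner angles are to 90 degrees, and only merge a tri pair when the score clears a threshold.

```python
import math

def squareness(quad):
    """Score in 0..1 for how square a quad is, from its corner angles.
    A tri pair would only be merged into a quad if this (plus the edge
    angle and a concavity check) passes the modifier's thresholds."""
    score = 0.0
    for i in range(4):
        a, b, c = quad[i - 1], quad[i], quad[(i + 1) % 4]
        v1 = [a[k] - b[k] for k in range(2)]
        v2 = [c[k] - b[k] for k in range(2)]
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        score += 1.0 - abs(angle - 90.0) / 90.0   # 1 when exactly 90 degrees
    return score / 4.0

perfect = [(0, 0), (1, 0), (1, 1), (0, 1)]
skewed = [(0, 0), (1, 0), (1.8, 0.9), (0, 1)]
print(squareness(perfect), squareness(skewed))  # 1.0 vs. something lower
```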