You guys are absolute stars by the way.
I'm learning so much from this thread.
To recap:
Each pixel on this coordinate image corresponds to pixels taking up the same space on a texture map.
And moving any of these pixels around would also move the pixels in the texture they are plugged into. So using the 'Smudge' tool in Photoshop to create the below map should, in theory, make a swirl effect in the material too?
Then combining this with a Texture's UVs and a simple Panner expression should make a rudimentary swirling effect, no?
Sorry for all the questions, I just want to make certain I have the theory right
Edit: How are you guys making your 4-point gradients in Photoshop anyway? I can Screen Blend a red/alpha and green/alpha gradient together, but it's not perfect and the falloff is too steep.
...the pixels you're moving are indicative of 2D vectors. Each has an R and a G component, or if you prefer, an X and a Y.
The value of 0 is still 0; the value of 255 (all 8 bits set to 1) is 1.
What these maps are doing is telling the texture fetcher to pull the pixel from your texture that is at coordinate R,G in the original image. You get the blending issues because texture space isn't limited to 8 bits. If I recall, the coords are 32-bit floating point.
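(To make the fetch concrete, here's a rough numpy sketch of what that lookup does; the function name is invented, it's nearest-neighbour only, and it's not the engine's actual sampler. Smudge the coord map and run it again and you get exactly the swirl test described above.)

    import numpy as np

    def apply_coord_map(texture, coord_map):
        # texture: (H, W, 3) image; coord_map: (H, W, 2) with R,G stored as 0..255
        h, w = texture.shape[:2]
        u = coord_map[..., 0].astype(np.float32) / 255.0   # R -> X
        v = coord_map[..., 1].astype(np.float32) / 255.0   # G -> Y
        x = np.clip((u * (w - 1)).round().astype(int), 0, w - 1)
        y = np.clip((v * (h - 1)).round().astype(int), 0, h - 1)
        # for every output pixel, pull the texel at coordinate (R, G)
        return texture[y, x]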
SirCalalot: it's not a 4-point gradient. It's 2 straight gradients.
Go to your channels. Put a horizontal gradient (black to white) in green and a vertical gradient in red.
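(Or skip the painting entirely; an untested PIL/numpy sketch that writes the same base map, green ramping horizontally and red vertically as described above:)

    import numpy as np
    from PIL import Image

    size = 256
    ramp = np.arange(size, dtype=np.uint8)
    img = np.zeros((size, size, 3), dtype=np.uint8)
    img[..., 1] = ramp[np.newaxis, :]   # green: horizontal ramp, black to white
    img[..., 0] = ramp[:, np.newaxis]   # red: vertical ramp
    Image.fromarray(img).save('coord_map.tga')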
And really, you guys could just do this (like I mentioned in your "how do I make a whirlpool" thread, Ace_).
edit: I forgot texture samples don't actually work with displacement, so you'd either have to do it like I am here, with the product of two Gaussian functions, or just vertex paint the depth.
For full rotation of the area, plug another rotator into the coordinate slot of the distorted rotator and build in a simple time control for the overall spin rate.
Here, have the network I'm using (minus the few nodes I'm using to generate the circular gradient).
Although, cheers for the Rotator method! But would that work for more complex hand-authored flow maps?
SirCalalot: sorta.. not so much, though. It's good for vortices confined to a single area, but for complex hand-authored stuff there's a different direction to take, since you'll mainly be moving UV coords there rather than rotating them.
Personally, I don't think they should be that hard to just plain paint with vertex paint. Depends how crazy your flow is.
edit: hmm.. also this is definitely showing me that procedural stuff is way superior to textures when possible. no artifacting due to compression..
Thank you very much for sharing, dude.
In my experience it will also draw faster in many cases. Texture lookups aren't as fast as math, as far as I can tell.
So I've found that if you change the compression settings to TC_VectorDisplacementmap, it mostly gets rid of the pixelation in the distortion. One thing I'm having an issue with, though, is that the distortion stretches A LOT from the top left of the material and barely as much from the bottom right.
This is the map I'm using; I tried to get something similar to your example.
And this is my result.
Ok, first things first! There is such a thing as a TC_VectorDisplacementmap compression setting! Holy crap! Totally missed this one. It must have been implemented in the last year or so.
The reason it is not being distorted properly is that you need to uncheck sRGB and set the UnpackMin values to -1. That would be my guess.
EDIT: Also, what sort of file type is the 16-bit image you used before? With Unreal, only Targas will work properly for textures, and if you have a 16- or 32-bit image, you can't have a Targa.
That is correct, you can't import 16-bit images. But you can split a 16-bit image in two 8-bit images and combine them back together in the shader. It's simple maths. I'd like someone here to give it a try before I reveal my secrets lol :P
And moving any of these pixels around would also move the pixels in the texture they are plugged into. So using the 'Smudge' tool in Photoshop to create the below map should, in theory, make a swirl effect in the material too?
Yes!
Then combining this with a Texture's UVs and a simple Panner expression should make a rudimentary swirling effect, no?
Yeah. If you pan a texture with that distortion it will still go from, let's say, left to right. But it will go through the distortion. Just give it a try.
Edit: How are you guys making your 4-point gradients in Photoshop anyway? I can Screen Blend a red/alpha and green/alpha gradient together, but it's not perfect and the falloff is too steep.
You can paint the gradients per channel. Try that.
Personally, I don't think they should be that hard to just plain paint with vertex paint. Depends how crazy your flow is.
I have tried this, and in my experience it's pretty much impossible. A nightmare lol. But this actually makes me think that I could do something similar to my 3D vector displacement script. You could perhaps bake down the UV distortion to vertex colors, and then have the shader use that data to reconstruct the distortion. Hmm... I might look into this.
edit: hmm.. also this is definitely showing me that procedural stuff is way superior to textures when possible. no artifacting due to compression..
Exactly! This is very important, and it does not only apply to vectormaps. Pretty much anything. If you can use procedural stuff, do it! You will save texture space and texture calls, and it will look better. Especially if it's distortion maps, due to the reason I explained in my previous post. (Careful with shader complexity, though.)
The example I used wasn't the best one. It was just something really quick I did. As you explained above, you can make a swirl effect without textures. But for more complex vector maps, you'll need a texture.
Since you get no compression with the TC_Grayscale setting, importing the red and green channels like that and combining them with a constant 0 blue channel might work.
not sure though, haven't tried
All in all, a very interesting discussion.
That is correct, you can't import 16-bit images. But you can split a 16-bit image in two 8-bit images and combine them back together in the shader. It's simple maths. I'd like someone here to give it a try before I reveal my secrets lol :P
Would it be something like this??
Nope hehe. What you are doing there actually makes no sense. You are essentially adding one texture's red channel to another's green channel. You are multiplying it with red and green in order to "tint" the greyscale image and be able to mix both together (red and green images). The result is a vectormap, yes. But you might as well plug it in directly.
The first thing you should figure out is how to split a 16-bit color image into two 8-bit color images in Photoshop. Once you figure out that part, you should be able to figure out how to "merge" them back together in the shader.
Norman has a point. If you used an 'append' node instead of an 'add' node, you wouldn't need the component mask node either.
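(In array terms the append is just stacking channels; a minimal sketch of the difference, with made-up variable names:)

    import numpy as np

    red_gray = np.random.rand(4, 4)     # stand-ins for the two grayscale imports
    green_gray = np.random.rand(4, 4)

    # Append(texA.r, texB.r): build the 2-channel vector map directly
    vec = np.stack([red_gray, green_gray], axis=-1)

    # the add-and-mask route tints each grayscale first, then adds;
    # same result, more steps
    tinted = np.stack([red_gray, np.zeros((4, 4))], axis=-1) \
           + np.stack([np.zeros((4, 4)), green_gray], axis=-1)
    assert np.allclose(vec, tinted)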
So the only thing I'm really coming up with for splitting the image is taking my 16-bit image, copying the red and green channels to new layers, and then going to File > Scripts > Export Layers to Files, which gives me two 8-bit images. But combining them, replacing how I did it before with what sprunghunt says (just using Append), still results in some distortion. So am I even splitting the image correctly?
No, that type of split does not have any benefit at all. The two greyscale images you get have the same information as if they were in an ordinary RGB image. You are still losing the "other" 8 bits of the original 16-bit image. Forget about splitting the red channel from the green channel. It has nothing to do with that.
Another hint: Look at how HDR images are created. What is needed in order to create an HDR photo?
PS: Don't worry, I'll eventually post the solution. I'm just trying to encourage you guys to put some thought into it. (I'm not saying you are not; as a matter of fact, you seem to be the only one trying, hehe.)
Quick review: an 8-bit value has a maximum integer value of 255. A 16-bit integer has a maximum of 65535.
Each additional bit doubles the range of values.
Since our normal textures have 8 bits per channel, one channel holding the lower 8 bits and a second holding the upper 8 bits can be combined to be equivalent to a single 16-bit channel.
Separating this via Photoshop will be interesting. I don't actually know this trick, but I think I can reason my way through it here. With direct bit manipulation it would be easy as pie: just a copy, two ANDs and a left shift. So we'll need to do it mathemagically.
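(For reference, that direct bit-manipulation version in plain Python - the copy, the two ANDs and the left shift - assuming an unsigned 16-bit value:)

    def split16(v):
        low = v & 0xFF           # AND off the lower 8 bits
        high = (v >> 8) & 0xFF   # shift, then AND off the upper 8 bits
        return low, high

    def join16(low, high):
        return (high << 8) | low   # left shift the upper bits back into place

    assert join16(*split16(54321)) == 54321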
Start with a 16-bit grayscale image. Make a copy to a new layer.
One will be for the lower bits, the other will be for the upper bits.
Make a new solid color layer over the low-bit image. Fill the solid color layer with a value equal to 255, which should be 0.0039 if PS handles 16-bit greys in floating point. Set the blending mode to multiply.
Make two copies of that solid color layer. Set them to normal blend mode. Change your upper-bit layer's blend mode to subtract, and ensure it is on top of the first solid color. Merge down and set the resulting layer to divide. Merge down again. This should give two layers with empty top bits and packed bottom bits.
Not sure how PS converts between 16-bit and 8-bit channels, though. If it just dumps the upper bits this will work. If it has some internal scaling then other steps will need to be taken. Will try later.
Recombining them is the easy part. Multiply the lower channel by 255. Multiply the upper-bit channel by 256, then square the result. Add the two multiplied channels and divide by 65536 to scale back to a 16-bit 0-1 range.
Good! Close enough! But not quite right (I think)...
So here is my solution...
As you said, we need to save the lower 8 bits and the higher 8 bits to different files in order to mix them back together in Unreal.
In order to do this, open up the original 16-bit vectormap and simply go to Image > Adjustments > Levels.
First we'll save the "darker" values. Set the input levels to: 0 / 1.00 / 127.
Convert the image to 8-bit and save the TGA. (At this point it actually already is an 8-bit image, since you killed half of the image's data.)
Now go back a couple steps, to before you adjusted the levels, when the image was still 16-bit.
This time the input levels will be: 127 / 1.00 / 255.
Convert it to 8-bit and save as a TGA. Now we just saved the "lighter" values.
Import both to Unreal and set up the shader like this in order to combine them back together.
I'm not sure if this is the only solution, though. I personally don't like the "lerping". To me it seems it should be losing data there. But the result proves otherwise.
EDIT: Also, I'm not really sure about the blending methods in Photoshop; I don't trust them. They are kinda inaccurate. IIRC there was a website that really detailed what each blending method was actually doing. There were some that were misleading.
EDIT2: Nevermind, I think the lerping is indeed a valid solution. I'm not really losing data there. It's not the same as if we were mixing both images in Photoshop by setting the normal blend method to 50%.
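(A numpy sketch of what the two levels passes and the 50% lerp work out to, ignoring the 127-vs-127.5 rounding. The lerp at 0.5 is just an average of the two halves, which is why nothing is lost; though each half still only carries 8 bits over half the range, which lines up with the point below that this isn't a true 16 bits of precision.)

    import numpy as np

    x = np.linspace(0.0, 1.0, 1001)            # the 16-bit map, normalised

    dark = np.clip(x * 2.0, 0.0, 1.0)          # levels 0 / 1.00 / 127
    light = np.clip(x * 2.0 - 1.0, 0.0, 1.0)   # levels 127 / 1.00 / 255

    recon = 0.5 * (dark + light)               # the shader-side 50% lerp
    print(np.abs(recon - x).max())             # ~0: the average reconstructs x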
Argh, already 4/6 texture calls, it seems, if you want the simple basics in the map. Poopy!
Still, awesome work you guys.
Well you can always put both textures side by side and split them up in the shader.
Personally I don't think texture calls are an issue, though. It will be extremely rare to top the 16-texturesample limit anyway (if that's what you mean by texture calls). Texture space I don't think is an issue either; we are talking about a 256x256 texture, 512x256 if you want the higher-bit workaround. It's totally OK if you want to use it for something big like the water in L4D2.
My main concern would be shader complexity, especially if you are going to use it on a surface that takes up a lot of screen space (again, the water in L4D2). The 16-bit workaround is already using quite a few extra instructions, and splitting up a texture (if you want to combine both in order to save a single texture call) will be a couple instructions more. Personally I don't think it's worth it.
Ace-Angel, did you have anything particular in mind that would require a lot of texture calls?
Oh no, nothing in particular. I was creating a beauty vertex-blend shader which would use 4 channels for everything, and was hoping to just hack in sort of a Distort Map.
So this means 4 diffuse textures, 4 spec and 4 normals, and so on. Just a beauty shader, not meant for games, just a good-looking shader, and I was going to plonk in a distort map to help things along.
Really now, that is a totally different topic. All I can say is cheers, much appreciated for all the feedback and solutions guys, you're epic (no pun intended)!
You shouldn't need 2 separate textures though, just double the channels you're going to use.. so yeah, 2 textures if you need a 4-channel 16-bit image, but for flow maps you could get away with a single texture, packing RG as one set and BA as the other.
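(i.e. something like this to pull the two flow maps back out of one texture; a trivial sketch with a stand-in array:)

    import numpy as np

    tex = np.random.rand(4, 4, 4)    # stand-in RGBA texture
    flow_a = tex[..., 0:2]           # RG: first flow map
    flow_b = tex[..., 2:4]           # BA: second flow map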
Thanks for reminding me of levels. (Duh.) I'm 98% certain my math would work, but I was doing the integer and binary arithmetic in my head on the train back from work, so it very well could be off; honestly, though, it should do something nearly identical to your method.
i.e. (in hex for brevity):
FF FF - 00 FF = FF 00 <- preserves the high values; then / 00 FF = 00 FF <- moves the high values to the lower registers
whereas FF FF * 00 FF = 00 FF <- preserves the lower values while discarding the high.
Also, I think that you are losing some data with the lerp. You're still getting a smoother flow simply due to more data being present. If you actually want a full 16 bits of texture precision, you'd need to get the "brighter" values into a higher floating-point value to begin with.
As for PS blending modes, multiply and divide are the pure arithmetic they suggest. I have the link for the PS blending mode function math here: http://dunnbypaul.net/blends/
And yes, I've found a trouble spot. It looks like PS has some internal scaling going on when transforming values from 16-bit to 8-bit (like I pretty much expected but hoped against), which is solved by your levels method.
I only get positive Y and positive X. It doesn't go negative.
That's because you've set up your material to only work that way. Doing something like subtracting 0.5 from a colour channel will give you a range of -0.5 to 0.5, which gives you bidirectional panning if you need it. It makes your flow maps harder to author, though.
I tried that, and painting gave unexpected results. Much harder to author.
What's the easier way to do this for vertex paint? I want it so I fill my vertices to 127,127 (yellow) and that will be untouched UVs, and when I paint the red or green channels down to black they will go in the negative direction, and vice versa.
Make sure your shader is scaling the raw values properly.
Multiplying your colors by 2 and then subtracting 1 will move them from 0..1 to -1..+1.
So painting with 127 (or 0.5) will be 0 offset; 0 will be -1 offset and 1 will be +1.
(Or subtract 0.5 and then multiply by 2. A ConstantBiasScale node is what you need: a bias of -0.5 and a scale of 2.)
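(Sanity check of that remap in numpy; note how a 127 paint lands at roughly zero offset:)

    import numpy as np

    raw = np.array([0.0, 127 / 255.0, 1.0])   # painted vertex color values
    offset = raw * 2.0 - 1.0                  # what the bias/scale works out to
    print(offset)                             # [-1.0, ~0.0, +1.0]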
This might not be as useful after everyone else's posts, but I just wrote a UDK specific tutorial implementing a version of Valve's approach from their Siggraph presentation. If not, don't mind me! ^_^
http://phill.inksworth.com/tut.php
Lerp with the cosine of time is good. I was trying a different blending method using that and it wasn't working so well.
Also minor technical point. Fmod is floating point modulus, so you can have remainders like 1.2836455. A normal modulus only returns integers.
Minor point but the difference can be big depending on what effect you're building.
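(The difference in two lines of Python:)

    import math

    print(math.fmod(7.2836455, 1.0))   # 0.2836455... the fractional remainder survives
    print(7 % 1)                       # 0: integer modulus throws the fraction away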
Thanks!
I just edited for clarity on that point. I can see that some people might assume I meant whole numbers if not explicitly told otherwise.
I'm hoping to write a few more tutorials as I go, so feedback is great.
Looks great - pretty similar to my own setup, but it does have considerably more nodes, and I reckon they're probably unnecessary (lots of duplication in there where you've got the 50% offset). I'd try it out if I had time, but I'm flying out to Malaysia first thing tomorrow. I'm not sure why you're seeing that last artifact with the reset, though; when I did my implementation, I lerped the offset path with the regular path in order to hide it.
Thanks! What artifact are you referring to?
For the number of nodes: the tutorial is exploded for comprehension rather than optimization. The last step addresses your point about simplification. The version of the shader in the tutorial comes out to 45 instructions for the baseline, and 85 (edit: which is higher than I'd like, but it's not too bad) for the "in-use" example I supplied. That final could be +/- 10 depending on what kind of opacity control you're using.
Have a good trip!
Did a little messing about with Blender 2.61 and its new waves and dynamic paint, and I think it might be possible to do things like Valve did in Houdini with Blender, running simulations and baking the results to a normal map.
I was able to make a sphere with an outward force on the surface it was placed on top of, which rippled the surface as the simulation ran. It would be possible to import level geometry and use it to interact with and reflect those ripples. After that point you can just choose the frame you like best, apply the modifiers, and export out to bake in xNormal with a second, flat UV-mapped plane.
Also, yes, it seems very likely possible with the latest version of Blender.
I didn't bake in Blender; I just exported out a UVed plane and the high poly made with the simulation and dynamic paint, and used xNormal.
Does anyone know how to use a noise texture as time input?
You can create a grayscale noise texture and multiply it against time. This lets you do a per-pixel variance for the passage of time. If you're following what I posted, it comes just after the FMod.
I would keep it low-contrast to avoid any major deviations. The higher the resolution of said noise texture, the more granular the distortion. I've used 16x16 textures to great effect, or higher-res with blobs of homogeneous values. Think grayscale camouflage in that case.
Others might have different implementation ideas, this is just how I've done it.
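(One reading of that in code; treat it as a sketch, since the exact placement around the FMod depends on your network:)

    import math

    def noisy_phase(time, noise_value):
        # scale the passage of time per pixel by the grayscale noise sample;
        # low-contrast noise keeps the per-pixel rates close together
        return math.fmod(time * noise_value, 1.0)

    print(noisy_phase(3.7, 0.45))   # slightly slower phase than a mid-gray texel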
I've posted a thread to explain: http://www.polycount.com/forum/showthread.php?t=92456
And here's my test:
I've used your setup (thank you so much btw for the tut!) and I plugged a multiply node with a time node and my noise textures into the FMod A slot.
Thank you for your answer!
Some people have their settings set to display a different amount of posts per page. For me there are 40 posts per page, so we're actually on page 3 right now. Instead, link to the post number like so: http://www.polycount.com/forum/showpost.php?p=1486893&postcount=96
Sonofabitch. How the crap did I miss the last page?!
Thanks for the tutorial anyway @Phill, it has been amazingly helpful!
With regards to using Vertex Colours instead of a Flow Map Texture, I take it that I would simply need to replace the TextureSample node with a VertexColor Node?
In that case, I would be filling the whole mesh in 0.5 Red and 0.5 Green before then painting extra Red to move flow in the direction of the X axis and Green for the Y?
Sorry if I'm being dense, I just want to make sure I have it right in my head, as the benefits of being able to paint flow in realtime using the viewport would be amazing.
@Phil - I think a lot of your material can be simplified if you multiply texture coordinates by your flow map before piping them into the panner node. It will change the way you paint your flow maps a bit, but should dramatically reduce the complexity of your material network. I'll throw up an example soon-ish.
Since this is basically the first and best result for this type of thing that pops up in Google, I figure it'd be good if I posted this tutorial in here.
It's quick and easy to make flow maps in Photoshop:
http://www.polycount.com/forum/showthread.php?p=1582749#post1582749