Rather than clog up the sticky, or bravely announce that I'm going to implement a Doom-style renderer in Designer (still working on it) and then fail to deliver, I thought I'd dump my stuff here
edit: speaking of the Doom-style renderer - my raycaster doesn't scale to the point where it'll generate anything good, so that's a dead duck
I'll start by showing off the pixel processor I made that finds the distance to points on a bezier curve.
So, this evening I set out with the very specific and very noble goal of encoding the sound of a fart into a texture that could be read and thus visualised by designer.
So I fired up PyCharm and hacked something together, and voila! A fart encoded as a 2D image. This is currently mono, but it would be trivial to support stereo samples
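For anyone curious, a minimal sketch of the idea in Python (my own guess at the packing - the actual script may well differ): quantise mono floats to 8-bit and lay them out row by row.

```python
import math

def encode_samples(samples, width):
    """Pack mono audio samples (floats in [-1, 1]) into a row-major
    2D grid of 8-bit values, padding the last row with silence."""
    height = math.ceil(len(samples) / width)
    padded = samples + [0.0] * (width * height - len(samples))
    # Quantise [-1, 1] -> [0, 255]; 128 is silence.
    pixels = [round((s * 0.5 + 0.5) * 255) for s in padded]
    return [pixels[r * width:(r + 1) * width] for r in range(height)]

image = encode_samples([0.0, 1.0, -1.0, 0.5, -0.5], width=4)
```

Reading it back in Designer is then just sampling the texture at (column, row) and remapping 0..1 back to -1..1.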
After some fiddling in Designer we have waveform visualisers - one as an fx-map that can read the 2D LUT, and one in a pixel processor that needs a 1D LUT because I was too tired and drunk to work out how to do it with a 2D one
I ran it through some gradients and I'm proud to present this image - which I have entitled 'the vaporwave fart'
If you use less messy samples you get much more pleasing results.
I've finally worked out how to do a voronoi without a loop
lulz - mine's 5 times faster than the built-in cells node :D (on a shitty AMD integrated GPU, at least)
Added support for the cells4-type coloring - there are three options: based on u-coord, v-coord, and the traditional dot-against-point-position method (which only tiles when it matches one of the other two, so it's sort of redundant).
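I don't know the exact graph here, but the usual loop-free voronoi trick is: hash each grid cell to one feature point and test only the 3x3 cell neighbourhood - a fixed nine taps you can unroll in a pixel processor. A rough sketch (the hash is a stand-in, not whatever the graph actually uses):

```python
import math

def hash2(ix, iy):
    # Cheap deterministic integer hash -> feature point in [0, 1)^2.
    # Placeholder for whatever hash the real graph uses.
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((h & 0xFFFF) / 65536.0, (h >> 16) / 65536.0)

def voronoi_dist(u, v, scale):
    """Distance from (u, v) to the nearest feature point, testing only
    the 3x3 cell neighbourhood - a fixed set of taps, so no dynamic
    loop over all points is needed."""
    x, y = u * scale, v * scale
    cx, cy = math.floor(x), math.floor(y)
    best = math.inf
    for ox in (-1, 0, 1):          # nine fixed taps, unrolled in a graph
        for oy in (-1, 0, 1):
            fx, fy = hash2(cx + ox, cy + oy)
            px, py = cx + ox + fx, cy + oy + fy
            best = min(best, math.hypot(x - px, y - py))
    return best
```

The Python loops are just shorthand - in a function graph those nine taps would be written out flat, which is presumably why it beats a looping node.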
Loving these.
Atlas selection by grayscale value - again, all pixel processor, and it seems to be very fast
our input atlas looks like this
This is nice because it allows for custom halftone patterns based on input value -
for example...
and Lena, cos image processing
Currently this just runs from top left to bottom right of the atlas. It'd be interesting to see if there's a way to link some sort of kernel-based dithering algorithm into this
thank you, sir :)
shame there's basically no practical value to any of it really :D
I've been doodling a bit
step 1: slicing heightmaps up at <n> intervals so we get a volume
source:
step 2: some hasty (read: improper) transforms give us a pseudo-3D view
I can't be bothered now but I do have a distance field for each line so in principle I could make this "solid"
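Step 1 is just thresholding the heightmap at evenly spaced levels - one binary mask per slice, which stacked together approximate a volume. A minimal sketch:

```python
def slice_heightmap(height, levels):
    """Threshold a 2D heightmap at `levels` evenly spaced values,
    giving one binary mask per slice. Stacked with an offset, the
    masks read as a volume."""
    masks = []
    for i in range(1, levels + 1):
        t = i / (levels + 1)
        masks.append([[1 if h > t else 0 for h in row] for row in height])
    return masks
```

The pseudo-3D view is then just compositing each mask with a small vertical offset per slice.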
I had a small flash of inspiration earlier and I think this is the beginnings of a not incredibly slow raycaster.
I'll grant it doesn't look that impressive at the moment, but I'm feeling quite pleased with myself cos I didn't even use Google.
edit: sadface
it's faster, but not fast enough
I made Square UVs that fit between two points
it feels useful but I'm not sure why
okay... so I think I know why it's useful - my control points are sort of backwards, but this looks suspiciously like I'm generating UVs that follow a bezier curve
edit :
yep - definitely on to something here. The maths falls over when the midpoint is significantly closer to one of the end points than the other, but I think that's solvable.
and Lena - because image processing
A nice side benefit of this is that I now know how to draw nice continuous bezier curves (and curves that run perpendicular to them) without any sort of hacked together looping / fixed point sampling along the curve
this should work for higher order beziers as well but I need to solve the exploding numbers and refactor the graph before that's anything other than a huge pain in the arse
Fantastic job poopipe. Could you share how you do it? I tried for pretty long to do something like this, then gave up :( Is it a series of distance fields around dots put together very closely? I mean for the perpendicular gradient?
this one is continuous. Using a fixed number of samples (dots) leads to a load of problems.
the key is that a bezier curve is just a series of stacked lerp operations so once you have UVs between two points you can just lerp between those and the next ones based on total distance along the whole curve
given points p1, p2 and $pos
the u-coord as a gradient between p1 and p2 is arrived at by doing this:
dot((p1 - $pos), (p1 - p2))
it needs scaling, and depending on your use case you might want to flip some things round
for the v-coord, something like this:
o1 = p1 - $pos
o2 = p1 - p2
o1.x * o2.y - o1.y * o2.x
again - you'll need to scale it etc.
to get a quadratic bezier curve you simply add a point (p3), repeat the process going from p2 to p3, then lerp the results; for cubic you add another, and so on
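To make the recipe concrete, here's a small Python sketch of the construction above. The scaling step is division by the squared segment length (so u runs 0..1 along the segment), and lerping by the clamped u-coordinate is my stand-in for "distance along the whole curve" - a rough assumption, not necessarily the exact blend used in the graph.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def dot(a, b): return a[0] * b[0] + a[1] * b[1]
def cross(a, b): return a[0] * b[1] - a[1] * b[0]
def lerp(a, b, t): return a + (b - a) * t

def segment_uv(p1, p2, pos):
    """UVs for a point relative to segment p1->p2, following the
    dot/cross construction above. Both coords are scaled by the
    squared segment length so u runs 0..1 along the segment."""
    d = sub(p1, p2)
    o = sub(p1, pos)
    scale = dot(d, d)                  # the "needs scaling" step
    return dot(o, d) / scale, cross(o, d) / scale

def quadratic_uv(p1, p2, p3, pos):
    """Quadratic case: UVs against each segment, lerped by the clamped
    u-coordinate (an assumed blend factor for illustration)."""
    u1, v1 = segment_uv(p1, p2, pos)
    u2, v2 = segment_uv(p2, p3, pos)
    t = min(max(u1, 0.0), 1.0)
    return lerp(u1, u2, t), lerp(v1, v2, t)
```

Note the v-coord comes out signed (negative on one side of the line), which is the "flip some things round" caveat from above.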
As I alluded to in the previous post, the u-coord values can get a bit extreme - something I will hopefully get the urge to look into soon. Since this is where I'm deriving the distance along the total curve, it most likely explains why it all gets explody when the curve becomes unbalanced
I pulled these out of the archives cos I found a screen-to-GIF app that looked like it didn't have a trojan in it - these are all captured from Substance Player.
This one was interesting - the ground is done using the same pseudo-3D technique you'd see in 16-bit driving games through the late 80s/early 90s. The clouds and box are "sprites", and it's all composited and animated within a pixel processor
This one is a bit more straightforward - in the sense that it composites a bunch of fx-maps. The parallax and depth cueing are procedural, so it's not completely dumb, and I had to make some compositor graphs to neatly handle all the layers
This is one of my early experiments with tiled UV manipulation. There's no fx-maps involved at all - everything is animated/offset by buggering around with UVs within a single pixel processor - what I learned here ended up in a nice bubbly liquid shader
Thanks poopipe. The last one is really cool
This was fun - nothing too clever so far but it's sparked off a couple of thoughts around performing pixel perfect transforms that I'd like to implement if I get around to comping together a bigger picture around this
remembered I'm bad at color
applied color theory to the problem
there's some fiddling to do still but the idea here is that you feed it a color and it generates a palette.
atm I've only covered analogous and split-complementary palettes.
I'm curious about trying this with red-yellow-blue to see whether it looks amazing or utter crap
simple spiral offset with awkwardly fleshy tones
le graph
back in a bit with something more interesting...
I like having loops
the graph itself got a bit wordy but I'm happy to share if it's of interest to anyone
And then if you fiddle a bit more you can get something resembling a solid surface - it looks weirdly like the ZBrush viewport for some reason
height-blended scattering of input images is a thing that isn't too painful now (diffuse is modulated with height in this image)
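Height blending in its simplest form is just a per-pixel "keep the tallest layer" - a rough sketch of the core idea, not the actual graph (which also modulates diffuse by height):

```python
def height_blend(layers):
    """Composite (height_grid, color_grid) layer pairs per pixel by
    keeping the sample with the greatest height - the core of
    height-blended scattering. All grids are the same size."""
    rows, cols = len(layers[0][0]), len(layers[0][0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # max over (height, color) pairs picks the tallest layer
            h, col = max((lay[0][r][c], lay[1][r][c]) for lay in layers)
            out[r][c] = col
    return out
```

In practice you'd soften the hard winner-takes-all with a smoothstep over the height difference, but the hard version shows the idea.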
I wonder if I can make a space filling growth thingy... even if I can't, I've made scribbles
it turns out that if you can draw lines between points you suddenly find yourself with some really interesting ways to graph UVs
feeding it some simple UVs produces fairly obvious results (the warped_uvs node wraps up a bunch of functionality like tiling, offsets, polar coords etc., and supports warping by an input image - I've mucked around with various params in these images)
The fun stuff starts to happen when you feed in some noise - one delightful side effect is that if you manipulate the disorder parameter of the perlin noise graphs it gives the effect of rotation in 3d space.
all these lines are SDFs, so in principle you can do quite a lot with them in terms of how they combine etc.
the main issue I'd like to solve is that I can't get smooth lines using the simple line-drawing technique I applied here. Designer's loop node seems to cap out at 1024 iterations, which gives limited fidelity.
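For reference, the resolution-independent alternative to stepping along the line with a capped iteration count is the standard analytic segment SDF - exact distance at every pixel, so the line stays smooth at any zoom:

```python
import math

def segment_sdf(p, a, b):
    """Exact distance from point p to segment a-b (the classic
    segment SDF): project p onto the line, clamp to the segment,
    measure the remainder."""
    ab = (b[0] - a[0], b[1] - a[1])
    ap = (p[0] - a[0], p[1] - a[1])
    t = (ap[0] * ab[0] + ap[1] * ab[1]) / (ab[0] ** 2 + ab[1] ** 2)
    t = min(max(t, 0.0), 1.0)          # clamp to the segment ends
    return math.hypot(ap[0] - t * ab[0], ap[1] - t * ab[1])
```

It's essentially the same dot-product construction as the bezier UVs earlier in the thread, just with the projection clamped.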
it wouldn't be too difficult to use bezier curves instead but I can't be arsed this evening, I have video games to play
more vertices!
it needs refactoring (which i will do when im not planning to go play video games)
but ...
I'm about 90% certain that this is a correct implementation of Conway's Game of Life in Designer (turned into a gif)
post refactor edit:
rather depressingly I don't think it's possible to get this into a single node and thus drive the generations using $time.
the algorithm relies pretty heavily on having the whole image generated in order to create the next generation and there's no facility in a pixel processor to write out a texture.
I can chain nodes together, but that means I have to hook them all up by hand and work out a way to display them based on an input float - given that some of the fun stuff doesn't happen 'til over a thousand generations in, it's going to be a pretty tedious exercise
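For reference, one generation of the rules sketched in Python on a tiling (toroidal) grid - each call needs the entire previous grid, which is exactly the dependency that forces one node per generation:

```python
def life_step(grid):
    """One Game of Life generation on a wrapping grid: a cell is born
    with exactly 3 live neighbours, survives with 2 or 3, dies
    otherwise. Needs the whole previous grid to compute any cell."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            n = sum(grid[(r + dr) % h][(c + dc) % w]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt
```

Since a pixel processor can read the whole input image but can't write one out for itself to re-read, the ping-pong between generations has to happen across nodes (or outside Designer entirely).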