Hey to all,
I'm wondering if it's possible, in a cheap way, to approximate information from a texture (in this case, a cube map) and remap that information roughly onto an object.
For instance, what if I captured the color from a small number of pixels at specific texture coordinates on each side of the object, giving me both color and location information?
Then I could reproject subtle hints of those colors into the emissive channel at each recorded location.
Using a smoothstep, I could then blend those colors to differing degrees.
Basically, a sky-ambient color type of thing, but on steroids.
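A minimal sketch of the idea, in plain Python rather than shader code. The sample directions and colors here are made up, standing in for the handful of cubemap pixels you'd record, and `emissive_hint` is a hypothetical name for the blend:

```python
def smoothstep(edge0, edge1, x):
    """Hermite smoothstep, clamped to [0, 1]."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

# Hypothetical pre-recorded samples: (direction the pixel was taken from, color).
SAMPLES = [((0.0, 1.0, 0.0), (0.5, 0.7, 1.0)),
           ((1.0, 0.0, 0.0), (0.9, 0.4, 0.2)),
           ((0.0, 0.0, 1.0), (0.3, 0.8, 0.4))]

def emissive_hint(normal, strength=0.2):
    """Blend the recorded colors into a subtle emissive tint, weighted by
    how closely the surface normal points at each sample direction."""
    r = g = b = 0.0
    for direction, color in SAMPLES:
        d = sum(n * s for n, s in zip(normal, direction))  # dot product
        w = smoothstep(0.2, 1.0, d)  # fade in as the normal aligns
        r += color[0] * w
        g += color[1] * w
        b += color[2] * w
    return (strength * r, strength * g, strength * b)
```

A normal pointing between two sample directions picks up a mix of both colors, which is the "blend to differing degrees" part.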
Replies
Yeah. It's what I'm working on when I get back from the gym. The bilinear cubemap filtering thing I've been working on for months does just this.
My first application of it had way too many artifacts.
My current sticking point is getting a proper spherical step for the multiple point samples needed for the color blending.
I may just wind up using my already-built vector rotator, but I can't help wondering if there is a cheaper way, mathematically speaking.
If I'm not mistaken, you mentioned using slerp to achieve said effect? I read there are several ways to do it mathematically; which one did you go with?
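For reference, the classic slerp formulation between two unit vectors, sketched in plain Python (not shader code; the epsilon fallback is my own guard, not something from the thread):

```python
import math

def slerp(a, b, t):
    """Spherical linear interpolation between unit vectors a and b."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    omega = math.acos(dot)  # angle between the two directions
    if omega < 1e-6:
        # Nearly parallel: plain lerp is fine and avoids dividing by ~0.
        return tuple(x + t * (y - x) for x, y in zip(a, b))
    s = math.sin(omega)
    wa = math.sin((1.0 - t) * omega) / s
    wb = math.sin(t * omega) / s
    return tuple(wa * x + wb * y for x, y in zip(a, b))
```

Unlike a straight lerp, the result stays on the unit sphere, which is why it comes up for stepping between cubemap sample directions.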
As for the cheaper way, this is basically what I've been thinking about, since I spent some time looking up every possible solution. The only cheap solution I could find mentioned is basically just a straight-up tangent-to-world projection, where you take each normal, binormal, etc., define which one casts what color, and smoothstep the final output.
It's not a proper blur by any stretch of the imagination, but a simple Gaussian/box blur lerped with this method should give you more or less a cheap alternative (although it depends on whether it's really cheaper).
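That axis-casting idea could look something like the following sketch (Python stand-in; `AXIS_COLORS` is a hypothetical assignment of a color per world direction, and in a material you'd feed the world-space normal components into smoothstep the same way):

```python
def smoothstep(edge0, edge1, x):
    """Hermite smoothstep, clamped to [0, 1]."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

# Hypothetical per-axis colors: which world direction "casts" which color.
AXIS_COLORS = [((0, 1, 0), (0.5, 0.7, 1.0)),    # up: sky blue
               ((0, -1, 0), (0.25, 0.2, 0.15)), # down: ground brown
               ((1, 0, 0), (0.9, 0.8, 0.6))]    # +X: warm side light

def axis_projection_color(world_normal):
    """Cheap directional tint: smoothstep each axis alignment, sum colors."""
    r = g = b = 0.0
    for axis, color in AXIS_COLORS:
        d = sum(n * a for n, a in zip(world_normal, axis))  # dot product
        w = smoothstep(0.0, 1.0, d)  # negative alignment clamps to 0
        r += color[0] * w
        g += color[1] * w
        b += color[2] * w
    return (r, g, b)
```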
With a flat/2D surface it's simple arithmetic. But inside a sphere, the location of the "next sample pixel to the right" is a rotation of the initial sample vector. I can do this with my already-built Rodrigues rotation function, but it involves a cross product, which is performance-heavy. I'll probably just use that to start with to get something working, then see if there's a simpler way later.
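For anyone following along, Rodrigues' formula itself is short. Here's a plain-Python sketch (the cross product is the part being called out as costly):

```python
import math

def rodrigues(v, k, theta):
    """Rotate vector v around unit axis k by angle theta:
    v_rot = v*cos(t) + (k x v)*sin(t) + k*(k . v)*(1 - cos(t))."""
    c, s = math.cos(theta), math.sin(theta)
    dot = sum(a * b for a, b in zip(k, v))
    cross = (k[1] * v[2] - k[2] * v[1],
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0])
    return tuple(v_i * c + cr_i * s + k_i * dot * (1.0 - c)
                 for v_i, cr_i, k_i in zip(v, cross, k))
```

Stepping "one sample to the right" on the sphere is then just rotating the current sample vector by a small theta around the appropriate axis.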
I made up a trilinear filtering function for cubemaps, which worked but had significant artifacting, because the projection inside the cube is essentially spherical.
The color interpolation between sample points is just simple bilinear filtering, as the inside surface of the projection sphere is still technically 2D. The trick is getting the correct sample points to begin with.
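The bilinear step mentioned here is the standard one; a quick Python sketch over RGB tuples (`u`, `v` being the fractional position between the four neighboring samples):

```python
def bilerp(c00, c10, c01, c11, u, v):
    """Standard bilinear filtering of four neighboring color samples.
    Lerp across u on the top and bottom rows, then across v between them."""
    top = tuple(a + u * (b - a) for a, b in zip(c00, c10))
    bot = tuple(a + u * (b - a) for a, b in zip(c01, c11))
    return tuple(a + v * (b - a) for a, b in zip(top, bot))
```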
(this gives me an idea actually.. hmm.. translate 2d coords to a spherical surface) lol thanks ace for the rubber ducky help.
Going to see how/if this idea works in just a sec, need a shower and graph paper.