While wasting some time on the interwebs I stumbled upon
the Peirce quincuncial projection, which is a map projection technique. Looking at it I wondered why we aren't using this type of projection in games, if it's feasible at all. And stemming from that, a more general question:
What are the pros and cons of different types of mapping?
Spheremapping:
Con:
-'black hole' behind the object
-noticeable distortion at the edge of the map
Cubemapping:
Pro:
-usable from all angles
-less distortion
Con:
-requires 6 separate textures, or one very wasteful texture
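For what it's worth, the reason a cubemap is usable from all angles is that the lookup is just a dominant-axis test plus a divide, with no singularity anywhere on the sphere. A rough Python sketch of that face selection (the function name and the face/orientation conventions here are mine; GL and D3D each fix their own signs):

```python
def cube_face_uv(x, y, z):
    # Choose the cubemap face from the direction's dominant axis, then
    # project the other two components onto that face. The sign and
    # orientation conventions below are just one consistent choice;
    # real APIs define their own per-face orientations.
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, ma, sc, tc = ("+x" if x > 0 else "-x"), ax, z, y
    elif ay >= az:
        face, ma, sc, tc = ("+y" if y > 0 else "-y"), ay, x, z
    else:
        face, ma, sc, tc = ("+z" if z > 0 else "-z"), az, x, y
    # remap the projected coordinates from [-1, 1] to [0, 1]
    return face, sc / ma * 0.5 + 0.5, tc / ma * 0.5 + 0.5

print(cube_face_uv(1.0, 0.0, 0.0))   # ('+x', 0.5, 0.5) -- dead center of the +X face
```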
Replies
The basic cubex6 type maps are much easier to edit as well. Really the question you should ask is what benefit does a sphere map give you?
DDS cubemap format saves all 6 sides in one image, so your con here isn't really correct. It saves them in a row, not separately, or with wasted space.
Spheremapping avoids showing its singularity because it's usually projected on the mesh to always face the viewpoint. The reflection looks fine if the viewpoint doesn't move; it falls apart when it does because the reflection doesn't change to match the new view (you always see the sun in the upper right corner, for example). So for simple reflections it's super cheap to use.
DDS cubemap format doesn't store the six images in a horizontal or cross layout, it actually stores them as slices (layers), same way it stores volume 3D textures (a bunch of slices). So in the end it's about the same file size as six 2D DDS maps, minus the 6-file overhead.
There are a couple other common pano formats in games, like latlong, but they're not supported in 3d hardware the way cubemaps are, so as I understand it they're not as easy to use (from a graphics programmer standpoint). Also latlong still has two singularities, top and bottom, which a cubemap really doesn't.
http://www.mentalwarp.com/~moob/show/polycount/texGenConvert.cgfx
you need to apply the cgfx in maya to a quad with unitized UVs, the rest is pretty self-explanatory. The easiest way to save the result is using maya hardware rendering.
and here's how to convert a regular world vector to latlong space:
float2 vector2ll(float3 v)
{
    // v is assumed to be a normalized world-space direction
    float2 vo = 0;
    vo.x = atan2(v.x, v.z) / 3.14159265; // longitude, -1..1
    vo.y = -v.y; // note: linear in v.y, not the acos() of a true equirectangular
    vo = vo * 0.5 + 0.5; // remap both to 0..1
    return vo;
}
one of the main problems is the uv derivative (mipmaps) at the seam; there's not much you can do except forcing the mip level with tex2Dlod() (I'd love to hear another solution for that tbh..)
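The mapping is easy to sanity-check on the CPU. A Python transcription of the cgfx function above (assuming, as the shader does, that the input vector is already normalized) also makes the seam visible: the straight-back direction lands on u = 1.0, right where the wrap happens:

```python
import math

def vector2ll(x, y, z):
    # mirrors the cgfx function above: longitude from atan2,
    # vertical coordinate linear in y, both remapped to [0, 1]
    u = math.atan2(x, z) / math.pi
    v = -y
    return u * 0.5 + 0.5, v * 0.5 + 0.5

print(vector2ll(0.0, 0.0, 1.0))    # forward maps to the center: (0.5, 0.5)
print(vector2ll(0.0, 0.0, -1.0))   # straight back lands on u = 1.0, the seam
```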
I tried Bixorama for this, but it had a ton of artifacts... a ring of clamped pixels around the center face, and no multisampling. Yuck!
HDRShop is prohibitively expensive ($400 for 2 yrs), so that's out.
Max does a decent job of rendering a latlong panorama from a Reflect/Refract map (6 images). Maybe the Polar Coordinates filter in Photoshop would be good enough from there?
I think the Wii is doing this. But it's not too difficult to implement it so it behaves like a cube map. The problem is mainly that the texture gets stretched and compressed differently across the sphere. The front hemisphere is pretty okay, because that's the "center" of your sphere map. The back hemisphere gives trouble because all the image data for that part of the sphere sits on the outer "ring" of the texture (forgive my lack of math vocabulary, I never had to describe a sphere before)
Also you end up having a "black" or "undefined" point right in the center of the back hemisphere, where the map cannot be calculated - at that point all the pixels that sit right at the outer edge of your sphere map get pulled together into a single point.
I think the calculation is slower than for a cube map though, since it has a sqrt in it and a couple of float divisions. Given the quality loss and the calculation cost, a cube map is probably better. I have a simple HLSL shader for this... I can post it in case anyone is interested.
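That shader isn't posted here, but the classic OpenGL-style sphere-map texgen formula shows where both the sqrt and the undefined point come from. A Python sketch (function name mine; this is the standard GL_SPHERE_MAP formula, not necessarily the poster's exact code):

```python
import math

def sphere_map_uv(rx, ry, rz):
    # classic sphere-map texgen: reflection vector r -> texture (u, v).
    # m is twice the length of (rx, ry, rz + 1); it goes to zero as
    # r approaches (0, 0, -1) -- exactly the undefined point described
    # above, where the whole outer ring of the texture collapses.
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return rx / m + 0.5, ry / m + 0.5

print(sphere_map_uv(0.0, 0.0, 1.0))   # straight back at the viewer: (0.5, 0.5)
```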
For converting cube maps, the free version of HDR shop (HDR shop 1) can do this - but I don't think it's automatable.
HDRShop 1 is licensed strictly for non-commercial use, so we can't use that either. Unless we didn't care, but... we do.