Home Technical Talk

Spheremapping/Cubemapping/Environmentmapping question/discussion

Snader polycounter lvl 15
While wasting some time on the interwebs I stumbled upon the Peirce quincuncial projection, which is a mapping technique. Looking at it, I wondered why we aren't using this type of projection in games, if it's feasible at all. And stemming from that, a more general question:

What are the pros and cons of different types of mapping?

spheremap.jpg
Spheremapping:
Con:
-'black hole' behind the object
-noticeable distortion at the edge of the map

cube_map_unfolded_191.jpg
Cubemapping:
Pro:
-usable from all angles
-less distortion
Con:
-requires 6 separate textures, or one very wasteful texture
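To make the comparison concrete, here's a rough Python sketch (not from any engine, just the standard major-axis rule) of how a cubemap lookup picks which of the six faces a direction vector falls on:

```python
def cubemap_face(x, y, z):
    """Pick the cubemap face for a direction vector by its dominant axis,
    the same rule GPU cubemap addressing uses."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'

print(cubemap_face(1.0, 0.2, 0.3))  # a mostly-rightward vector hits the +x face
```

Because any direction lands cleanly on one of six flat faces, the distortion stays low everywhere, which is the "usable from all angles" advantage listed above.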

Replies

  • EarthQuake
    Sphere maps result in MAJOR detail loss around the edges; it's just not really worth it if you want any sort of detail in your reflections. If you just want a blurry ambient cube, sure, but you can just load a very low-res ambient map, like 64x64x6, which is barely any memory.

    The basic cube-of-6 type maps are much easier to edit as well. Really, the question you should ask is: what benefit does a sphere map give you?

    The DDS cubemap format saves all 6 sides in one image, so your con here isn't really correct. It saves them in a row, not separately or with wasted space.
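    On the "barely any memory" point: DXT1 packs each 4x4 pixel block into 8 bytes (half a byte per pixel), so a quick back-of-the-envelope check in Python bears it out:

```python
def dxt1_bytes(width, height, faces=1):
    """DXT1 stores each 4x4 block in 8 bytes, i.e. 0.5 bytes per pixel."""
    blocks = (width // 4) * (height // 4)
    return blocks * 8 * faces

# A 64x64x6 ambient cube, top mip only:
print(dxt1_bytes(64, 64, faces=6))  # 12288 bytes, about 12 KB
```

    Even with a full mip chain (roughly a third more), a low-res ambient cube costs well under 20 KB.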
  • malcolm
    malcolm polycount sponsor
    Strangely enough, the game I'm working on now uses sphere maps instead of cubemaps, very weird. This is called an environment ball if you want to see how it works in Maya. The major difference is time vs. quality for me. Cube maps take forever to make by hand, and you might think: "Yeah, but I can just make these easily in my 3D app by creating 6 cameras, setting the angle of view to 90°, rendering out square images from each camera, then saving them to a volume texture with the Photoshop DXT1 plugin." That would work great, except your lighting, shaders, depth fog, and post-processing are all visible in game but not in Maya. Later on in the project, to save time, our rendering engineer wrote a tool for me where I could fly to any location in the game cam and capture a perfect cube map from that point.

    Another point worth mentioning is that if you want to use cube maps in Maya it's really painful, as it can't read a single DXT1 texture; it actually needs 6 file nodes. Super slow to set up, and if you want to edit something you have to adjust all 6 textures. Gross. Sphere maps are ghetto since I'm pretty sure they look the same no matter what angle you view them from, whereas with a cube map you can actually look at it from different angles and you'll see what you're supposed to.
  • Eric Chadwick
    Peirce is really cool! Nice find, thanks for this.

    Spheremapping avoids showing its singularity because it's usually projected on the mesh to always face the viewpoint. The reflection looks fine if the viewpoint doesn't move; it falls apart when it does because the reflection doesn't change to match the new view (you always see the sun in the upper right corner, for example). So for simple reflections it's super cheap to use.

    DDS cubemap format doesn't store the six images in a horizontal or cross layout, it actually stores them as slices (layers), same way it stores volume 3D textures (a bunch of slices). So in the end it's about the same file size as six 2D DDS maps, minus the 6-file overhead.

    There are a couple other common pano formats in games, like latlong, but they're not supported in 3d hardware the way cubemaps are, so as I understand it they're not as easy to use (from a graphics programmer standpoint). Also latlong still has two singularities, top and bottom, which a cubemap really doesn't.
  • Brice Vandemoortele
    Brice Vandemoortele polycounter lvl 19
    I once spent too many hours trying to convert regular DDS cubemaps to spheremaps. I wrote this; hopefully it will be useful to someone:
    http://www.mentalwarp.com/~moob/show/polycount/texGenConvert.cgfx

    You need to apply the cgfx in Maya to a quad with unitized UVs; the rest is pretty self-explanatory. The easiest way to save the result is using Maya hardware rendering.

    and here's how to convert a regular world vector to latlong space:

    float2 vector2ll(float3 v)
    {
        // longitude: atan2 returns [-pi, pi], divide to get [-1, 1]
        float2 vo = 0;
        vo.x = atan2(v.x, v.z) / 3.14159265;
        // latitude: linear in y (assumes v is normalized)
        vo.y = -v.y;
        // remap [-1, 1] to [0, 1] UV space
        vo = vo * 0.5 + 0.5;
        return vo;
    }

    One of the main problems is the UV derivative (mipmaps) at the seam; there's not much you can do except forcing the mip with tex2Dlod() (I'd love to hear another solution for that, tbh..)
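    The seam problem is easy to see numerically: two almost-identical directions either side of the atan2 wrap land on opposite edges of the texture, so the hardware's UV derivatives (and therefore the mip selection) go haywire there. A Python port of the same vector2ll math (my sketch, not Brice's exact shader) demonstrates it:

```python
import math

def vector2ll(x, y, z):
    """Map a world direction to latlong UV, same math as the cgfx snippet."""
    u = math.atan2(x, z) / math.pi
    v = -y
    return (u * 0.5 + 0.5, v * 0.5 + 0.5)

# Two nearly identical directions straddling the -Z seam...
u1, _ = vector2ll(-0.001, 0.0, -1.0)
u2, _ = vector2ll(+0.001, 0.0, -1.0)
# ...map almost a full texture width apart, faking a huge derivative.
print(abs(u1 - u2))  # close to 1.0
```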
  • Eric Chadwick
    Hmm. Can't you just use Clamp addressing to prevent edge filtering problems?

    I tried Bixorama for this, but it had a ton of artifacts... a ring of clamped pixels around the center face, and no multisampling. Yuck!

    HDRShop is prohibitively expensive ($400 for 2 yrs), so that's out.

    Max does a decent job of rendering a latlong panorama from a Reflect/Refract map (6 images). Maybe the Polar Coordinates filter in Photoshop would be good enough from that?
  • Kwramm
    Kwramm interpolator
    Eric Chadwick wrote:
    > Spheremapping avoids showing its singularity because it's usually projected on the mesh to always face the viewpoint. The reflection looks fine if the viewpoint doesn't move; it falls apart when it does because the reflection doesn't change to match the new view (you always see the sun in the upper right corner, for example). So for simple reflections it's super cheap to use.

    I think the Wii is doing this. But it's not too difficult to implement it so it behaves like a cube map. The problem is mainly that the texture gets stretched and compressed differently across the sphere. The front hemisphere is pretty okay, because that's the "center" of your sphere map. The back hemisphere gives trouble because all the image data for that part of the sphere is on the outer "ring" of the texture (forgive my lack of math vocabulary, I never had to describe a sphere before ;) )
    Also you end up having a "black" or "undefined" point right in the center of the back hemisphere, where the map cannot be calculated - at this point all the pixels that sit right at the outer edge of your sphere map get pulled together into a single point.

    I think the calculation is slower than for a cube map though, since it has a sqrt in it and a couple of float divisions. Given the quality loss and the calculation cost, a cube map is probably better than this. I have a simple HLSL shader for this... I can post it, in case anyone is interested.
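    The math being described matches the classic OpenGL GL_SPHERE_MAP texgen; a small Python sketch (my own port of that standard formula, not Kwramm's shader) shows both the sqrt and the undefined point at the exact back of the sphere:

```python
import math

def sphere_map_uv(rx, ry, rz):
    """Classic OpenGL sphere-map texgen for a reflection vector.
    Returns None at the singularity (reflection pointing straight back)."""
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    if m == 0.0:  # r == (0, 0, -1): the 'black hole' on the back hemisphere
        return None
    return (rx / m + 0.5, ry / m + 0.5)

print(sphere_map_uv(0.0, 0.0, 1.0))   # straight ahead maps to the center (0.5, 0.5)
print(sphere_map_uv(0.0, 0.0, -1.0))  # straight back is undefined -> None
```

    Every reflection vector near (0, 0, -1) has m approaching zero, so the entire outer rim of the texture collapses toward that one point, which is exactly the "pulled together in a single point" effect described above.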

    For converting cube maps, the free version of HDRShop (HDRShop 1) can do this - but I don't think it's automatable.
  • Eric Chadwick
    There's also dual-paraboloid mapping, which basically uses two spheremaps, one for each side. But apparently that has a seam too; we almost used it but didn't, for some reason or other.

    HDRShop 1 is licensed strictly for non-commercial use, so we can't use that either. Unless we didn't care, but... we do. ;)
  • Eric Chadwick
    Just fostering debate on the subject, more out of curiosity than need. I don't use cubemaps enough for it to pay for itself, at least not in my current role. Thanks for the links!