
Brief tutorial on using the Google Photosphere camera to create cubemaps.

mystichobo polycounter lvl 12
Hey guys!
I've been playing around recently with the leaked Android 4.2 camera app, especially its photosphere feature, and I thought it could be pretty cool to use it to fairly quickly create cubemaps of real locations for use in games (especially for model presentation; I've mainly been playing around with it in Marmoset Toolbag).

For those that don't know, here's how it works: https://www.youtube.com/watch?v=nrOaYHtnues
It basically creates a spherical panoramic image.

I've managed to get something that actually works pretty well so far. I'll run you through the steps of how I got there and made my baking rig (and will also provide the scene at the end to make it quicker, for Max users anyway). I'm sure there are other ways of doing it, but this works pretty well for me, and once it's set up there's very little you have to do.

1. Start off with a cube, and make sure you remember its dimensions for later on (I made mine 100cm x 100cm).
Unwrap it in the typical cubemap cross pattern and then subdivide it.
step1.jpg




2. Chuck a Spherify modifier onto the cube, and then hide it; we won't need it until later.
step2.jpg



3. Create a sphere primitive with a radius that is half the length of one side of your cube (so in this case, 50cm).
Give it 64 segments, and delete the top 3 rows of polygons. The stock photosphere viewer on the phone and on G+ leaves an empty area there, and I found that if I baked without cutting that section off, everything looked stretched.
step3.jpg



4. Create a Multi/Sub-Object material, with the first material being your photosphere, and the second being any bright colour that will be highly visible in PS (we will be painting this out with the clone stamp brush anyway). I'm using this photosphere by Shawn Maynard, who was kind enough to send me the original image.
step4.jpg



5. Apply this material to the sphere. You might have to change the material ID so it shows the photosphere texture instead of the fluoro colour. Fix the unwrap to use the whole image, like so:
step5.jpg

Cap the open edges and apply the bright green submaterial to them.
step5b.jpg



6. Unhide your cube-sphere, throw a Push modifier onto it, and shrink it ever so slightly so that it sits inside our photosphere sphere. Open the Render to Texture dialog; under the Projection section, choose Pick and select your photosphere.
In the modifier panel you now want to reset your cage and then push it out until it encompasses the photosphere.

Add a diffuse output slot, set the file path and resolution, and then you are free to hit Bake!
step6.jpg

7. It'll spit something out like this:
step7a.jpg

I then went over it in PS and cleaned up the green bits (took all of 2 secs with the clone brush) and voila!
step7b.jpg
And in the marmoset sky tool:
step7c.jpg


Though the rig is a little painful to set up, once it is, it's simply a matter of replacing the photosphere image and rebaking to create new ones. For those who are feeling lazy, my scene can be found here, and should work in Max 2010 upwards.
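For anyone who'd rather skip the Max rig entirely, the same lat-long-to-cube-face resampling the bake performs can be sketched directly in code. This is just a minimal numpy illustration of the underlying math, not anything from the scene file; the face names, axis layout, and nearest-neighbour sampling are my own conventions chosen for brevity.

```python
import numpy as np

def cube_face_from_equirect(pano, face, size):
    """Sample one cube face from an equirectangular (lat-long) panorama.

    pano: H x W x 3 array; face: one of '+x','-x','+y','-y','+z','-z'.
    Nearest-neighbour sampling, for brevity."""
    h, w = pano.shape[:2]
    # pixel-centre coordinates in [-1, 1] across the face
    a = (np.arange(size) + 0.5) / size * 2.0 - 1.0
    u, v = np.meshgrid(a, -a)  # v increases upward
    one = np.ones_like(u)
    x, y, z = {
        '+x': (one, v, -u), '-x': (-one, v, u),
        '+y': (u, one, -v), '-y': (u, -one, v),
        '+z': (u, v, one),  '-z': (-u, v, -one),
    }[face]
    # direction -> longitude/latitude -> panorama pixel
    lon = np.arctan2(x, z)                         # [-pi, pi]
    lat = np.arcsin(y / np.sqrt(x*x + y*y + z*z))  # [-pi/2, pi/2]
    px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    py = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return pano[py, px]
```

A bilinear sample (or the Max bake above) will look smoother; this version just shows where each face pixel lands in the panorama.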

Hopefully this will be helpful to someone!

Replies

  • JamesWild
    JamesWild polycounter lvl 8
    Cool, this workflow looks mostly compatible with the spherical images Photosynth (iOS/WP7 equivalent) puts out. Good idea for converting to a cubemap!
  • gauss
    gauss polycounter lvl 18
    Very, very cool. Thanks for the tutorial and the scene!
  • monkeyscience
    monkeyscience polycounter lvl 12
    Very clever. Toolbag's sky tool does handle lat-long panoramas like your source image but I suppose shooping the missing pole data is much easier with cube faces.
  • Computron
    Computron polycounter lvl 7
So, assuming that this ends up as a typical 128/256/512x6 size cubemap, resolution probably doesn't affect the quality too much, but what about high dynamic range?
  • EarthQuake
    Computron wrote: »
So, assuming that this ends up as a typical 128/256/512x6 size cubemap, resolution probably doesn't affect the quality too much, but what about high dynamic range?

    If you want HDR you're going to need to take multiple exposures, which is probably impossible on a phone that is stitching the images together for you. The images need to match up exactly to be "HDR-ized".

There may be some potential here if you've got a decent DSLR: shoot in raw and then stitch in Microsoft ICE, but only if ICE can export a 16bpc image. DSLR raw formats are usually 12 or 14bpc, which should be good enough for this sort of thing. I've been knocking this around in my head for a while but haven't had a chance to test it out.

    I have done some basic tests with HDR content from a single raw file and the results are encouraging, but haven't gotten around to the stitching bit.
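The bracketed-merge idea described above can be sketched like this. A deliberately naive numpy version that assumes the inputs are already linear (real raw files need response-curve recovery first, a la Debevec & Malik); the hat weighting simply trusts near-clipped pixels least.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Naive HDR merge: assume a linear sensor response, so
    radiance = pixel / exposure_time, and average the estimates
    with a hat weight that downplays values near 0 and 255."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wsum = np.zeros(images[0].shape, dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        # weight peaks at mid-grey, falls to ~0 at the clip points
        w = np.clip(1.0 - np.abs(img / 255.0 - 0.5) * 2.0, 1e-4, 1.0)
        acc += w * img / t
        wsum += w
    return acc / wsum
```

For real captures, something like OpenCV's MergeDebevec (which also calibrates the camera response curve) would be the more robust route.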
  • AlecMoody
    AlecMoody ngon master
The last DSLR I looked at doing HDR capture with was my 5D, and the raw files didn't have enough tonal information to use for lighting/reflections. One issue is that all that extra dynamic range in a raw file gets progressively less accurate at the extreme ends of the tonal range.
  • kodde
    kodde polycounter lvl 18
    Awww... I thought this was some new cool affordable camera when clicking this thread ;)

    Cool stuff though. That app seems to produce quite nice panorama photos. Thanks for sharing the workflow. By the way, doesn't HDRShop have any transformation filters for this type of panorama format?
  • EarthQuake
    AlecMoody wrote: »
The last DSLR I looked at doing HDR capture with was my 5D, and the raw files didn't have enough tonal information to use for lighting/reflections. One issue is that all that extra dynamic range in a raw file gets progressively less accurate at the extreme ends of the tonal range.

Yeah, 12 or 14 bit raw isn't as good as, say, taking 3 exposures at -2, 0, +2 (or -/+3) and getting a proper 16bpc HDR image, but it's a lot better than 8bpc. My Sony A580 can do 13.3EVs, which is a lot of range. Which 5D do you have? Dynamic range isn't really a strength of Canons, even the high-end bodies. The 5DIII is 11.7EVs and the 5DII is 11.9EVs. Also keep in mind that DR is best at base ISO (usually ISO 100).

    Then you've got the Nikon D800, D600, D7000 and Pentax K5 which are all around 14EV or better. So it varies a bit I guess. The difference between 12 and 14EV is like an extra stop of range in the highlights and an extra stop in the shadows, from what I understand.
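To put rough numbers on those EV figures: dynamic range in stops maps to a linear contrast ratio as 2^EV, so the arithmetic is trivial to sketch.

```python
def contrast_ratio(ev_range):
    """Dynamic range in EV (stops) -> linear brightest:darkest ratio.
    Each additional stop doubles the ratio."""
    return 2.0 ** ev_range
```

By that math, a 14EV sensor records a 16384:1 ratio versus 4096:1 at 12EV, i.e. the 2-stop difference quadruples the recordable contrast, matching the "extra stop in the highlights and an extra stop in the shadows" description.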

So the question is: is it good enough? Doing bracketed exposures can be a pain, and for this sort of thing you can get decent results even with LDR cubes, so a one-shot raw solution is probably going to be acceptable.

Well, it doesn't look like ICE can export anything other than 8-bit, even though it can load raw. So the benefits of a one-shot system are probably smaller, as you've still got to convert that image to an HDR format and then find some way to stitch it. You won't need to carry a tripod, at least.

Here's a -2, 0, +2 pull from my A580, just to show what I mean. It's an overcast rainy morning, so not the most realistic test case, but a scene with loads of contrast regardless.
    a580range.jpg
  • kodde
    kodde polycounter lvl 18
When you guys speak of a DSLR setup for panorama/cubemap purposes, how do you go about shooting the footage? Chrome ball? Some rotating tripod head? I'm curious to try this as well.
  • EarthQuake
The original sky content for Toolbag was done with:

    Canon 1Ds 12mp fullframe camera
    Sturdy tripod with fancy panoramic head
    Sigma 10mm 2.8 circular fisheye lens

    Then:

    Camera on tripod, take a few shots at X degree intervals
    "De-fish" images
    Merge to HDR
    Manually stitch together images in PS
    Run through sky tool

With the fisheye you can take fewer shots, but the detail outside the centre of the frame isn't very good.

I would probably suggest using an ultra-wide rectilinear lens (non-fisheye) like the Sigma 8-16mm on APS-C, or 12-24mm on full frame. You'll have to take a few more images, but the quality will likely be a lot higher. Though the resolution of the ambient/specular cubemaps doesn't necessarily call for it, the actual skybox images we had were never of the best quality (1024x per quad face).
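As a rough sanity check on shot counts: the horizontal FOV of a rectilinear lens is 2*atan(sensor width / (2 * focal length)), so you can estimate frames per 360-degree row. The 25% stitching overlap below is my own assumption, not a number from this thread.

```python
import math

def shots_per_row(focal_mm, sensor_width_mm, overlap=0.25):
    """Estimate shots needed for one 360-degree row of a panorama.

    Horizontal FOV of a rectilinear lens: 2*atan(w / 2f).
    Each shot advances the camera by fov * (1 - overlap)."""
    fov_deg = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
    step = fov_deg * (1 - overlap)
    return math.ceil(360 / step)
```

By this estimate, 8mm on APS-C (~23.6mm-wide sensor) needs about 5 frames per row, while an 18mm kit lens needs about 8, which is why the ultra-wides keep the shooting manageable.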
  • kodde
    kodde polycounter lvl 18
    Thanks for the tips EQ. Now if only camera gear wasn't that damn expensive...
  • EarthQuake
Yeah, you can do it with a lot worse gear than what I've listed. Pretty much any camera that has bracketing, manual exposure, or exposure compensation/exposure lock can do HDR. The big problem with most point-and-shoot cameras is that you'll need to take a LOT of images to get the full panorama, and of course lots and lots of processing time to do it that way.

You can pick up something like a used Tamron 11-18mm UWA lens for around $300, and a used low-end/mid-level DSLR body for about $300-500, plus pretty much any cheap tripod, and do it too. You can get the total cost under a grand easily, maybe even under $700. US prices, though, of course.
  • AlecMoody
    AlecMoody ngon master
    kodde wrote: »
When you guys speak of a DSLR setup for panorama/cubemap purposes, how do you go about shooting the footage? Chrome ball? Some rotating tripod head? I'm curious to try this as well.

I have done it using a panoramic head and a bunch of exposure brackets, typically about 6 exposures per camera position. You want to use the widest rectilinear lens you can find. I used a Sigma 8mm fisheye, but they introduce a lot of distortion that can make it very difficult for the stitching software to figure out. Essentially, a fisheye has one nodal point for the center of the frame and a different nodal point at the edge. In some environments this won't matter; in others it makes stitching very difficult.


    EQ:
One of the problems I found with using raw files to simulate brackets is that while there is more detail in the data, the luminance values are not proportionally correct. When you stack a bunch of these together you can get an okay result, but there will be a difference in overall scene contrast, and the lighting isn't true to the capture location.
  • Gestalt
    Gestalt polycounter lvl 11
EarthQuake wrote: »
Yeah, 12 or 14 bit raw isn't as good as, say, taking 3 exposures at -2, 0, +2 (or -/+3) and getting a proper 16bpc HDR image, but it's a lot better than 8bpc.

This is probably a stupid question, but what are you guys using to combine your bracketed images? I've always hated doing anything raw-related in Photoshop.

On a related note, is anyone aware of how raw translates to linear data? I'm worried about using camera data for actual measurements, and I don't know how people usually go about making linear files from it. Do the camera sensors have a falloff of some sort?
  • AlecMoody
    AlecMoody ngon master
I use HDRShop. I think the old version 1.0 (which was free) is still floating around online somewhere.
  • EarthQuake
    AlecMoody wrote: »
    EQ:
One of the problems I found with using raw files to simulate brackets is that while there is more detail in the data, the luminance values are not proportionally correct. When you stack a bunch of these together you can get an okay result, but there will be a difference in overall scene contrast, and the lighting isn't true to the capture location.

Yeah, absolutely, I wasn't suggesting the raw method would be equal in quality by any means, though I think for this sort of thing it would be good enough (8-bit LDR is good enough in a lot of cases). For Toolbag sky content you really just want enough range to prevent blown highlights and to get better gradients, especially with low specular values.

The biggest benefit of the raw option is this: say you're hiking in the mountains with your camera and a couple of lenses (I almost always take an UWA zoom with me). You can capture sky content on a whim, and you don't have to worry about carrying a tripod with a panorama head, which is something I would never take on a hike.

Now, if you're doing something like compositing renders into a specific photo/video with a very specific lighting environment that you need to match exactly, making sure your captures are as accurate as possible is really important. For Toolbag/games in general, where you don't really have a "reference point" to notice the inaccuracies, it's not as important. Often "true to the location" isn't what you want at all; you're going to adjust colour, saturation and contrast to make it a bit more interesting.

At the end of the day though, if you've got the time, you've planned it out, and you've got your tripod ready, there is no reason not to take multiple exposures.

[edit] Here are some image examples I did a year or two ago with my pathetic Canon 350D and its 10.5EV-or-so sensor.
    compare1.jpg
Left is a bracket pulled from a single raw, right is proper bracketed exposures. You be the judge; the raw is clearly worse, but how much worse gets very subjective.
  • EarthQuake
    Gestalt wrote: »
On a related note, is anyone aware of how raw translates to linear data? I'm worried about using camera data for actual measurements, and I don't know how people usually go about making linear files from it. Do the camera sensors have a falloff of some sort?

I'm not sure if this is what you're asking exactly, but Adobe Camera Raw (or any raw converter) can adjust exposure in stops like you would for bracketing. So just take your medium exposure, then do a -2 and a +2, and save them all (though you may need to kill the EXIF data, or else it will confuse your HDR app).

How far you can push it will depend on your camera's sensor. With older DSLRs you might only get 1-1.5 stops either way. My A580 can do 2 stops with decent results, provided the mid level is exposed well. Some of the cameras with better DR I mentioned earlier might give you 2.5 or even 3 stops both ways.
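In linear light that kind of exposure pull is just a power-of-two scale per stop. A tiny sketch of only the core arithmetic; real raw converters also handle the sensor response curve and highlight reconstruction, so this is not a substitute for them.

```python
import numpy as np

def pull_exposure(linear, stops):
    """Simulate pulling an exposure bracket from one linear-light image:
    each stop (EV) doubles or halves the recorded light, then values
    clip at the white point."""
    return np.clip(linear * (2.0 ** stops), 0.0, 1.0)

# a -2 / 0 / +2 "bracket" from a single frame would be:
# brackets = [pull_exposure(img, ev) for ev in (-2, 0, 2)]
```

The clipping is also why this trick runs out of headroom: anything the single raw already blew out stays blown out in every simulated bracket.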
  • Gestalt
    Gestalt polycounter lvl 11
Yeah, that's pretty much what I was referring to: what range of the data is reliable, and whether the camera has some type of response curve that would show up in the data.

Does anyone have experience with Luminance HDR (qtpfsgui)?
  • mystichobo
    mystichobo polycounter lvl 12
    Wow, this really took a bit of a turn from the Photosphere stuff :D

    Not complaining mind, this is interesting as hell.
  • fearian
    fearian greentooth
    Sweet info, I missed this thread!

Some more info for people looking for Android photosphere images for a low-res cubemap:

1. Search Google+ for #photosphere.

2. On an image you want, find the "display as flat picture" button in the bottom right.

3. Right-click and open the image in a new tab, or equivalent.

4. Change ".../s123/..." in the URL to ".../s2560/...", or play around with sizes till you find the largest you can get.

5. Save!
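Step 4's URL tweak is easy to script if you're grabbing a batch. A small sketch; the example URL below is hypothetical and just demonstrates the substitution.

```python
import re

def upsize_gplus_image_url(url, size=2560):
    """Swap the /s<number>/ size segment of an image URL for a larger one."""
    return re.sub(r'/s\d+/', f'/s{size}/', url, count=1)

# hypothetical example URL:
# upsize_gplus_image_url("https://host/path/s123/photo.jpg")
#   -> "https://host/path/s2560/photo.jpg"
```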


Here are some of my favourites from a quick trawl:


    http://i.imgur.com/LZcmd.jpg
    http://i.imgur.com/AOBw8.jpg
    http://i.imgur.com/RYNfe.jpg
    http://i.imgur.com/FeGhj.jpg
    http://i.imgur.com/GI6gp.jpg
    http://i.imgur.com/KKZ5V.jpg
    http://i.imgur.com/8reh3.jpg
  • ZacD
    ZacD ngon master
I just wanted to bump this thread because I just got a phone that supports this, and I look forward to trying it out in Marmo 2. I'm kinda disappointed there aren't more resolution and exposure controls to make better HDR images. Hopefully we will get raw support soon; they have promised it for a while. If anyone has any requests for a cubemap, I can try to make it happen.

    I wonder how well you can fake an HDR image with post editing...
  • Kosai106
    Kosai106 polycounter lvl 13
I've been wanting to try making panoramas with my DSLR, but I've been told that I should get a wide lens for my camera first. Would it be possible to do with an 18-55mm lens without too much trouble, though?

    I also tried to make one with my phone, just for the heck of it, but of course my camera is partially broken, which results in pink/purple images. Gotta send it in for repairs soon.