My buddy Joe Drust just posted this video on Facebook. It's just what the title says. Pretty phenomenal. I imagine this is the future of realistic game models and textures. He has a few more images of brick walls he made on Facebook, but I'm not sure if I should make those public or not. Anyway, check it out.
http://www.youtube.com/watch?v=Z4VebRhdQVY
Replies
It could definitely be a good base, like 3D scans already are, but it still requires a lot of additional work to fix geometry and badly stretched/overlapped textures...
You're still limited to things that actually exist too. Regardless, this is pretty neat, but I highly doubt it is the future.
I totally agree it only works for things that can be photographed easily (a traffic light would be pretty hard, for example), but there are quite a few things that can be photographed, and this looks much faster than having people model and texture it (especially at this fidelity). I think it's just going to get easier and easier to do this. Could be wrong.
That's why I want to stick to fantasy and sci-fi - things that don't exist in the real world.
Sites like 3D.sk should jump on this tech.
http://www.facebook.com/photo.php?fbid=10150325549818018&set=a.10150325549753018.417796.747488017&type=1&theater
http://www.facebook.com/photo.php?fbid=10150325549873018&set=a.10150325549753018.417796.747488017&type=1&theater
http://www.facebook.com/photo.php?fbid=10150325549903018&set=a.10150325549753018.417796.747488017&type=1&theater
....go to 4:30.
...
Okay, first results: it doesn't like pointy spikes too much, and using a lazy susan instead of walking around the object step by step seems to cause problems (I suppose because of the misleading landmarks in the background, hehe).
Gonna try again!
I thought the lazy susan trick would work (for lack of space to walk around everything), but I guess not so much.
Ah well. (Runs off to create a big tent-like circular structure to bounce uniform light around and occlude background elements.)
http://www.polycount.com/forum/showthread.php?t=85608
I think if you were going to photograph it on a lazy susan you would have to 1) make sure the object is perfectly centered, and 2) use a completely neutral background, such as a white wall with no shadows on it. That would be my guess anyway.
The video on the page doesn't do it justice. The generate-from-photo features are very powerful. This + MakeHuman - what else could you need?
Skip to about 3 minutes in this one:
http://www.youtube.com/watch?v=1lOFWaA7Q2Q
Quicker, perhaps, but this isn't going to beat a real scan. In that example the likeness disappears when the texture is gone.
I did one with 19 photos and it still hasn't emailed me back.
There are definitely a few rules to good captures. First off, you do need a background with clear geometrical landmarks; also, a lazy susan won't work at all, as the solver will see the scene as a fixed camera with a moving object in front of it.
I tried it with video capture exported as a 3 FPS image sequence, and that gave very good tracking too.
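If anyone wants to script that export step, here's a minimal Python sketch that shells out to ffmpeg (assuming it's installed and on your PATH; the filenames are just placeholders) to dump a clip to a 3 FPS JPEG sequence:
[code]
# Extract a 3 FPS JPEG sequence from a video clip for photogrammetry upload.
# Assumes ffmpeg is installed and on the PATH; paths are placeholders.
import subprocess
from pathlib import Path

def video_to_frames(video_path, out_dir, fps=3):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", str(video_path),
        "-vf", f"fps={fps}",   # resample to N frames per second
        "-q:v", "2",           # high JPEG quality, helps feature matching
        str(out / "frame_%04d.jpg"),
    ], check=True)

video_to_frames("walkaround.mp4", "frames", fps=3)
[/code]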
As far as how many pictures are needed, I think it's mostly a matter of constantly checking the screen of your camera and asking yourself whether details from one picture to the next overlap enough...
I am also under the impression that taking two pictures from the same angle might confuse the program, but I am unsure about that one.
Oh, and obviously, you need to avoid overhangs and/or thin details sticking out (like the laces below - I actually left them on on purpose to see if it would break the process, and it did).
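To put a rough number on "overlapping enough", here's a little sketch using OpenCV's ORB matcher to count shared features between consecutive shots - the thresholds are pure guesses on my part, not anything Photofly documents:
[code]
# Rough overlap check between consecutive photos using ORB feature matching.
# The distance and match-count thresholds are guesses, not Photofly numbers.
import cv2, glob

orb = cv2.ORB_create(nfeatures=2000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

prev_des = None
for path in sorted(glob.glob("frames/*.jpg")):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kp, des = orb.detectAndCompute(img, None)
    if des is None:
        print(f"{path}: no features detected at all")
        prev_des = None
        continue
    if prev_des is not None:
        matches = bf.match(prev_des, des)
        good = [m for m in matches if m.distance < 40]  # arbitrary cutoff
        print(f"{path}: {len(good)} strong matches with previous shot")
        if len(good) < 100:  # arbitrary threshold
            print("  -> weak overlap, consider an in-between shot")
    prev_des = des
[/code]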
The first images show the Photofly draft and the final mesh at 100% self-illumination, then some Max screenshots and scanline renders, hole filling in MeshLab, and a ZBrush reprojection after adding some geometry for the bottom sole.
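For what it's worth, that MeshLab hole-filling step can be scripted; a minimal sketch assuming the pymeshlab package (filter and parameter names per recent pymeshlab releases, and the hole-size cap is an arbitrary pick):
[code]
# Close small holes in a Photofly mesh via MeshLab's hole-filling filter,
# scripted with pymeshlab. Filter/parameter names follow recent pymeshlab
# releases; the 30-edge hole-size cap is an arbitrary choice.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("photofly_scan.obj")
ms.meshing_close_holes(maxholesize=30)  # skip holes over 30 boundary edges
ms.save_current_mesh("photofly_scan_filled.obj")
[/code]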
I think the best use for such captured objects would be integration into matte paintings / projected scenes, since 100% self-illuminated objects work well in those circumstances, as far as I know.
Loving this!!
But FML, I can't get this shit to work at all for me. I've tried like 15 different objects, inside, outside, different cameras. I tried putting down papers on the floor as reference points. The best result I got was a skewed model that looked really bad.
I don't know what the hell I'm doing wrong, but it seems like every time I try it, the goddamn thing gets stuck at 1% after it uploads. Doesn't matter if it's 4 photos or 30 photos.
I feel like they're touting this "cloud computing" bullshit over the actual technology. I don't give a damn if they're using Amazon cloud servers; I want to process this shit on my own computer so I can accurately tell if it's working or not.
But the real reason they won't let you do that isn't that it's easier to process in the cloud - this is a tech demo, and they probably don't want people dissecting the software, so they keep it to themselves.
Edit: goddamnit, I even used patterned sheets on the floor. Still nothing. Even when it doesn't get stuck at 1% and finishes, it doesn't give me a mesh, just a collection of photos plastered in the air, creating this weird floating point-cloud effect.
/hungover rant
Anyway - maybe post a few of your pictures here; there might be an obvious factor coming into play?
One thing, maybe: at some point I captured my pictures in 720p format (for another video test) but it seemed like the app couldn't process them, probably because it was a crop from the native format of my camera's sensor. So maybe there's something there?
As for patterns, I feel like the most important part is to have good geometry behind the object, not necessarily on the floor. But who knows...
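On the 720p thing: it might be worth sanity-checking that every shot in a set shares one resolution before uploading. A quick sketch with Pillow - nothing Photofly-specific, just a consistency check:
[code]
# Sanity check that all photos in a capture set share one resolution,
# since mixed or cropped formats seemed to choke Photofly. Uses Pillow.
import glob
from PIL import Image

sizes = {}
for path in sorted(glob.glob("shots/*.jpg")):
    with Image.open(path) as im:
        sizes.setdefault(im.size, []).append(path)

if len(sizes) > 1:
    print("Mixed resolutions found:")
    for size, files in sizes.items():
        print(f"  {size[0]}x{size[1]}: {len(files)} photo(s), e.g. {files[0]}")
else:
    print("All photos share one resolution:", next(iter(sizes)))
[/code]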
I finally got it to work after manually setting tracking points on like 30 images, but the final high-res model was so bad (random lumps everywhere) that it was unusable. (It was a ceramic dog.)
Eh, maybe I'll try again next weekend.
More seriously... I'm still not exactly sure why it sometimes works and sometimes doesn't. I kind of suspect that the scale of objects might matter, regardless of how far or close one is (like, mesh density seems to be absolute, not relative... but I'm not sure).
Trying another test now, starting from an unfinished clay figure, then remesh, resym, and export to Mudbox:
Screw Zspheres and basemeshes! Let's use clay!
I actually got awesome surface quality out of DAVID. It took a bit of tinkering, but I eventually found a simple enough solution using a projector to beam an animated line onto the object (instead of complex Arduino-driven line lasers). The swipes looked great, but assembling them together turned out to be problematic, since even a medium distortion can cause them to not line up.
NextEngine is cool, but not the one-click solution they claim it to be. Even though it has a motorized turntable, it still performs discrete passes and then composites them together - with potential alignment problems too. The surface quality is marginally better than DAVID's, but obviously it is much more practical to use. Their proprietary software for scan assembly sucks, though.
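For what it's worth, the pass-assembly step both scanners struggle with is just rigid registration, so if you can export the passes as point clouds you can try aligning them yourself. A sketch using Open3D's point-to-point ICP, assuming the passes start roughly pre-aligned and that the correspondence threshold (2 mm here) suits your scan scale:
[code]
# Align two scan passes with point-to-point ICP via Open3D.
# Assumes the passes are exported as point clouds and start roughly
# pre-aligned; the 2 mm correspondence threshold is an arbitrary pick.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("pass_01.ply")
target = o3d.io.read_point_cloud("pass_02.ply")

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.002,  # 2 mm, assuming units of meters
    init=np.eye(4),                     # rough pre-alignment assumed
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
source.transform(result.transformation)
o3d.io.write_point_cloud("pass_01_aligned.ply", source)
print("fitness:", result.fitness, "RMSE:", result.inlier_rmse)
[/code]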
All in all, I much, much prefer Photofly for my own use. The surfaces are lumpy and have holes, but at least it delivers a solid, continuous 360° mesh in one go. Scan data always has to be cleaned up anyway, so yeah, it definitely is my pick. Plus it worked perfectly fine at the scale I needed scans for (1/3-scale busts).
Good times!
It worked, but still: lumps and spikes everywhere:
Then I thought I could try the worst possible scenario and see what happens. That's the most complex shape I could find, with very fine texture detail. I took the photographs pretty much at random. Obviously it didn't get much of the complex hoops and the topological weirdness of the horns and bones sticking out, but the surface detail of the captured areas turned out flawless:
So I think it all means that you need scratches and rake marks on a model to really make it work in Photofly. Painted surface detail might not be enough, but as soon as you have cracks and sculpted texture work, it seems very powerful. Yay! Oddly enough, the proportions seem slightly squashed too - but nothing that cannot be fixed easily.
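That squares with how these solvers work: they need trackable surface features, and flat paint can be too smooth. If you want a crude check before shooting a whole set, counting detected features in a single test shot is a quick proxy (the threshold below is a pure guess):
[code]
# Crude predictor of how well a surface will track: count ORB features
# in one test photo. The 500-feature threshold is a pure guess.
import cv2

img = cv2.imread("test_shot.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=5000)
keypoints = orb.detect(img, None)
print(f"{len(keypoints)} features detected")
if len(keypoints) < 500:
    print("Surface may be too smooth/uniform; add scratches or texture.")
[/code]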
Here's my craptacular attempt; the first 3 failed miserably. Done with a shitty camera and equally shitty lighting conditions, 30-40 pics used. Had to manually stitch one to get a big portion of the pics to be used; it fixed a lot of missing areas though. It's still a lumpy pile of crap, but seeing Pior's tests gives me hope :poly142: