Hey guys, I've been wanting to try out 3D scanning for a while, and here's my first attempt! It's actually more like my third attempt; the first two sets of photos I took didn't work. I tried putting the camera in a fixed position and moving Mr. Gnome around for each shot, but the stitching process failed. I think it was because I had an even white backdrop, so there wasn't enough detail for the stitching software to lock onto.
After trying (and failing) with a studio lighting/backdrop setup, I took Mr. Gnome outside and sat him on a box in the shade, where the light was even, and took photos from multiple angles, making sure to cover any overlapping or overhanging geometry. I took one series of shots 360 degrees around him looking slightly up, and then another series angled down at him. For a more complex asset, you would probably want to take some detail shots of specific areas as well. The set of shots looked like this:
Camera stuff:
The camera I used is an Olympus EM1 with the 12-40/2.8 lens, though the camera really isn't that important (I've read that some people use cell phone cameras for this sort of thing). All you really need is something that gives you full manual control, can ideally shoot raw, and has a lens that takes screw-on filters. I used a circular polarizer to reduce specular reflections. I shot in manual mode to make sure each shot was evenly exposed, and in raw so that I could correct the white balance for each shot in Lightroom later. I also shot at F11 to make sure the depth of field was deep enough to keep him in focus in every shot, and at ISO 200 (the base ISO for my camera) to limit noise. F11 at ISO 200 meant slow shutter speeds, so I used a tripod with a 2-second delay to guard against motion blur. The exact settings will vary depending on your camera.
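As a side note, if you want to sanity-check that a whole set of shots really used identical exposure settings before you commit to a stitch, the EXIF data is enough. Here's a tiny hypothetical helper in Python (it assumes Pillow is installed and that you point it at the JPEGs exported from Lightroom; the folder argument and tag list are just examples, not part of my actual workflow):

import sys
from pathlib import Path

from PIL import Image
from PIL.ExifTags import TAGS

# Exposure-related EXIF tags that should match across every shot
WANTED = {"FNumber", "ISOSpeedRatings", "ExposureTime", "WhiteBalance"}

def exposure_settings(path):
    # Map numeric EXIF tag ids to names, then keep only the ones we care about
    exif = Image.open(path)._getexif() or {}
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return {name: named.get(name) for name in WANTED}

# Compare every shot in the folder against the first one
shots = sorted(Path(sys.argv[1]).glob("*.jpg"))
reference = exposure_settings(shots[0])
for shot in shots[1:]:
    if exposure_settings(shot) != reference:
        print(shot.name, "differs from", shots[0].name, "- check your settings")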
After the photos were processed, I brought them into Agisoft Photoscan to stitch them, which worked rather well. I also tried Autodesk 123D Catch, which is easier to use but doesn't appear to be able to create dense geometry and high-resolution textures the way Photoscan does. In the next shot you can see the pattern of the photos I took:
Here is the high resolution mesh in Agisoft Photoscan:
For the most part it looks good, though there are some random chunks here and there (it looks like it got confused by some background elements). You can mask your shots, which would probably help, but for me it was easier to clean this up in Modo/ZBrush than to mask all 37 shots.
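Incidentally, if you end up doing a lot of these, Photoscan has a Python scripting API, so the align/dense-mesh steps can be automated. The sketch below is based on my reading of the PhotoScan 1.x API docs - the file names are made up, and the exact function names and arguments may differ in your version, so double-check them:

import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Load the photo set (hypothetical file names - 37 shots in my case)
chunk.addPhotos(["shots/gnome_%02d.jpg" % i for i in range(1, 38)])

# Align the cameras - this is the "stitching" step that failed
# on my featureless white backdrop
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# Build the dense cloud, then the high resolution mesh from it
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
chunk.buildModel(surface=PhotoScan.Arbitrary, source=PhotoScan.DenseCloudData)

doc.save("gnome.psz")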
In addition to creating 3D geometry, Photoscan can create a UV map and bake out a diffuse texture from your photos. According to a tutorial I watched, you may want to make copies of your reference photos and process them to remove lighting information before you bake out your diffuse map. I didn't do that here, but it's something I may try next time. I baked out the texture at 16K because the auto UVs are very inefficient.
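Scripted, that UV/bake step is only a couple of calls - same caveat as above, this is a sketch against the PhotoScan 1.x Python API, and the 16K size is just what I used:

import PhotoScan

chunk = PhotoScan.app.document.chunk

# Auto-UV the dense mesh, then project the photos onto it.
# The generic auto UVs waste a lot of space, hence the 16K target.
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=16384)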
After I had the highpoly and texture data, I cleaned up the issues mentioned above, deleted the base Mr. Gnome was standing on, decimated the mesh in ZBrush to get my lowpoly, and then redid the UVs in Modo. At that point, I baked the highpoly content down to normal and diffuse maps:
Then I loaded the model and textures into Marmoset Toolbag, set the reflectivity to 0.04 (because I don't have a spec map), and eyeballed the gloss value; 0.326 seemed to look about right.
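In case anyone wonders where 0.04 comes from: it's the Fresnel reflectance at normal incidence for a typical dielectric with an index of refraction around 1.5, which covers most non-metal surfaces:

F0 = ((n - 1) / (n + 1))^2 = ((1.5 - 1) / (1.5 + 1))^2 = (0.5 / 2.5)^2 = 0.04

So if you don't have a measured spec map, 0.04 is a sensible default for anything that isn't metal.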
From there, I exported a Marmoset Viewer file and uploaded him to ArtStation, where you can see him in full 3D glory here:
https://www.artstation.com/artwork/gnome-scan-marmoset-viewer
Replies
Thanks for sharing the process.
Looks awesome
This looks cool, Joe! Hope it inspires more people to explore this. It's a really great time saver when there's a solid pipeline, and you can't argue with the results.
The HDRI thing is pretty straightforward, actually, as far as I understand it (I might be wrong about this, though). They've published some pretty in-depth slides and videos on it. I've done a bit of research and just bought some extra gear to try it, so I'll hopefully post some results soon. The chrome and matte balls are only for reference and for aligning the HDRI to the mesh. They then render the aligned lighting map in Maya and use something like a Difference blend in Photoshop to get rid of the shadows. They developed their own custom tools for this, though, so I don't know exactly what the process was.
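For anyone who wants to see the general idea in miniature: if you have the photo-projected texture plus a lighting-only bake rendered from the aligned HDRI in the same UV layout, dividing one by the other cancels the baked-in shading. This is just a toy NumPy/Pillow sketch of that concept (division rather than a literal Difference blend, and definitely not their actual tools; the file names are made up):

import numpy as np
from PIL import Image

# Hypothetical inputs: photo-projected diffuse and a lighting-only bake
# rendered from the aligned HDRI, both in the same UV layout
photo = np.asarray(Image.open("diffuse_projected.png"), dtype=np.float32) / 255.0
lighting = np.asarray(Image.open("lighting_bake.png"), dtype=np.float32) / 255.0

# Dividing out the lighting cancels baked-in shadows and shading,
# leaving an approximation of the flat albedo
albedo = np.clip(photo / np.maximum(lighting, 1e-3), 0.0, 1.0)

Image.fromarray((albedo * 255).astype(np.uint8)).save("albedo_delit.png")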
There are ways...
More seriously, excellent write-up. Thanks for sharing.
I have tried some 123D Catch stuff (like my shoe hanging from a string in the garage), but the results were pretty bad. I wasn't very good at it.
This is making me want to try some crappy camera-phone photogrammetry now. I wonder how feasible it would be to get a usable heightmap from a wall for a tile or trim sheet.
@Mant1k0re: Marmoset viewer tells you that when you view single channels.
Triangles: 46,152
Vertices: 23,601
Thanks, I thought it was sketchfab.
Hi Joe, can you tell us a bit more about how you got the diffuse texture from the highpoly in Photoscan onto the lowpoly you made in ZBrush?
The UVs won't be the same on the high and the low, right?
I'd like to know how you made that happen...
Kind regards
Goshi: Getting the texture content out is pretty straightforward; Photoscan has a feature that creates auto UVs for the dense mesh and projects the texture onto it, which looks like this:
Theoretically you could use this as-is, but the UVs are really bad and there's no pixel padding, so you'd see a lot of mip-mapping errors. I wanted to drop the poly count of the model anyway, which meant redoing the UVs. Once you have your lowpoly and its UVs done, you can load the highpoly into xNormal, load the Photoscan-generated texture map into the diffuse slot, and then make sure to bake both normals and diffuse - that's it!
You can also load the lowpoly back into Photoscan and project the texture directly onto it. I did both and didn't really see much difference in quality (both had some minor stretching/errors that needed to be cleaned up).
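That "project directly onto the lowpoly" route can also be scripted, for what it's worth - again a rough sketch against the PhotoScan 1.x Python API with made-up file names, so verify the calls against your version:

import PhotoScan

chunk = PhotoScan.app.document.chunk

# Swap the dense mesh for the cleaned-up lowpoly with its final UVs
chunk.importModel("gnome_low.obj")

# Reproject the aligned photos straight onto the lowpoly's UVs
# (no buildUV call, since we want to keep the UVs made in Modo)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)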
No UV copy/transfer was made - is that correct?
(Just want to understand this process and save this in my brain, you know) ^^
Kind Regards
Here's the original item and its in-game cousin (2,305 vertices and 3,933 tris in UE4). Pretty decent for the amount of work involved!
I'm suddenly itching to buy 40+ cameras and set up a rig in my garage
Regardless of the camera/optic price, it seems that less bokeh, vignetting, soft corners, etc. - all the things photographers are so fond of - works better for this.
So I guess a cheaper camera may work better than, or at least no worse than, an expensive one.
I'm still not sure whether image stabilization helps or not. Sometimes it seems to work better with IS off; other times there's no difference, or it's the other way around.
In my experience it's not a huge time saver, but rather a way to get very realistic textures, since you just can't compete with Mother Nature no matter how much time you spend.
Agisoft uses an arbitrary scale and orientation, since it has no real-world reference when building the alignment. You can only rescale it, and set its orientation with a script, in the Pro version.
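Roughly what such a script does, as I understand the Pro version's Python API (the attribute names may vary by release, and the numbers here are invented):

import PhotoScan

chunk = PhotoScan.app.document.chunk

# The chunk transform is arbitrary after alignment; overwrite the scale
# using a known real-world distance between two points on the object
measured = 0.35        # real distance in meters (made-up value)
reconstructed = 1.72   # same distance in Photoscan's arbitrary units
chunk.transform.scale = measured / reconstructed

# Orientation can be set the same way through chunk.transform.rotation
# (a PhotoScan.Matrix) - that's the part that needs the script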
That's what I get from my Lumia phone. Also, on my end Photoscan does generate padding, so there's no need to add it elsewhere.
First off, I went to the beach.
This came out really well; if you look closely you can even see that it's caught things like shells, seaweed, and pebbles on the sand. Sadly it's completely useless due to the lighting, but like I said, I was on holiday, and who would go to the beach on a cloudy day?
My second test was on a tree, since the beach got too hot.
This came out great, so I did some work on it and turned it into a bit of a model. You can download it on TurboSquid for free and poke around it a bit:
http://www.turbosquid.com/FullPreview/Index.cfm/ID/943340
I never thought about it until I started to process it, but you get these really awesome tiling textures when you completely scan an object: you go right around to the start in a perfect loop. It's awesome. Plus you get a height map for free.
My next job is to get a DSLR camera and take raw pictures. I've got to get some high-quality source material; I can see the camera's noise in the models and it's getting on my nerves.
After this tiny experiment I'm even trying to convince my bosses to use this in production at work. It's amazing what you can capture with a few photos.
I still can't find a proper answer for how to get rid of it. Sometimes it's better lighting; other times just proper parallax and overlap. It also looks like anything vividly green or red produces more noise.
I wonder if anybody uses the Pro (expensive) version, and whether it differs with regard to noise?
I'm assuming that, like with nDo, when you convert a texture to a normal map it's picking up some detail from the wood-grain texture of the table and making it bumpy.
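My mental model of what nDo does: treat image luminance as a height field and turn its gradients into normals, so photographed wood grain reads as slope. A minimal NumPy sketch of that concept (not nDo's actual algorithm; the file names are placeholders):

import numpy as np
from PIL import Image

# Treat luminance as height - any photographed detail (wood grain,
# shadows) becomes "geometry" as far as this conversion is concerned
height = np.asarray(Image.open("photo.png").convert("L"), dtype=np.float32) / 255.0

strength = 2.0
dy, dx = np.gradient(height)  # slopes of the fake height field
nx = -dx * strength
ny = -dy * strength
nz = np.ones_like(height)

# Normalize and pack into the usual 0-255 tangent-space encoding
length = np.sqrt(nx**2 + ny**2 + nz**2)
normal = np.stack([nx, ny, nz], axis=-1) / length[..., None]
Image.fromarray(((normal * 0.5 + 0.5) * 255).astype(np.uint8)).save("normal.png")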
Also, if your camera has a burst mode, try using that; I'm getting really good results with it.
I have a replica terracotta soldier in my house, and I knew I'd have to scan him eventually, so I spent my weekend doing just that. I wanted to make him in the style and quality of one of the chess pieces from one of my earlier projects, Pure Chess:
https://www.youtube.com/watch?v=YmJNRP_iYio
Like the game, I used a polycount of around 8 thousand, a texture res of 2048, and a physically based material system. Here's the final lowpoly model rendered in Marmoset 2.
I'll upload it to TurboSquid during the week, for free of course, so people can get a good close-up look at photogrammetry in production. This stuff should be much bigger than it currently is. We're literally capturing reality!
Now for the best part: this took about 5 hours from start to finish. For reference, the chess sculptures in Pure Chess took about 3-5 days each. A huge saving of time and a huge boost in realism.
I've even convinced my boss of its quality, and we're going to use it in the production of VooFoo Studios' next title. Thanks so much for the introduction to this, EarthQuake!
Thanks!