Wow. That is insane. I hope to see scanning become more popular in game art; it really makes some pretty crazy photoreal assets. The Vanishing of Ethan Carter was pretty amazing, but I would love to see this with more hard-surface kind of stuff. I've wanted to get into this.
I'm curious if scanning is faster than making complex things from scratch. The cleanup seems daunting from what I've read.
Fair point, the cleanup can be an issue, but you can get some amazing results if you get everything right. And like anything, it takes some practice.
Point clouds cop a lot of flak, but generally I think they are used in the wrong way. People try to model things that are really much simpler and would be more easily modelled traditionally.
My exposure to point clouds is photogrammetry, and I've seen that it requires better-quality photography and lots of photos, plus vast amounts of processing power, usually GPUs too (or two =]), so that you can over-model the object and cull the quality back. I can spend a few days on a few quite fast computers processing 2,000 photos or more.
I use VisualSFM for my photogrammetry, it's amazing, and free.
With VisualSFM and DotSwarm, a software project that I'm a part of, I made a point-cloud short film, "Sifted", which shows some of the analog-to-digital failures of photogrammetry.
Unfortunately I see people with no 3D modelling skills using it as a technique to replicate someone else's work, be it a little toy that was plastic injection moulded or a table in real life, expecting it's going to look perfect.
Maybe it will one day, but today it won't, so learn how to build - just as a carpenter or an engineer does in your world of 3D.
What they work great for is replicating nature and really intricate details, where the world isn't shiny: things like tree trunks, leaves, topography, places where there's lots of detail, little occluded spaces, old buildings, paint-flaked houses; really some pretty cool stuff too!
Hi MisterSande, ah yes, my bad, it's scanned from real materials and not photogrammetry. Anyway, it looks cool. I used the same technique on a weedkiller advert many moons ago.
I'd hate to see this thread die, so I thought I'd add this to the list of interesting videos/techniques...
[vv]60629702[/vv]
In a nutshell, they're building the initial photoscan inside Agisoft Photoscan, then exporting the geo, importing the result into 3DSMax to be rebuilt, then baking/transferring the colour and details between the high-res geo and the low-res geo inside of ZBrush.
To add to the process, you could rebuild a low-res (or at least cleaner) cage in your program of choice (Maya, 3DSMax, Topogun, etc.), then bake using whichever method you normally use for your HD-to-SD workflow (XNormal?).
Regarding the textures, I've found that I've gotten better results importing the new geometry (with UVs) back into Photoscan, to have reprojected textures rebuilt using the new geo.
So far, I've gotten *some* good results using Bitmap2Material to remove lighting and shadow from the reprojected textures, but nothing as phenomenal as what they demonstrated in the latest UnrealEngine Photogrammetry Demo.
I'd love to see some good photoscan to PBR workflows!
It was previously mentioned that you should shoot at the lowest ISO for the least grain; that's not entirely true. Your camera has a native ISO, and shooting lower than this will not improve the image. If you're lowering the signal more than you need for a proper exposure, you may be introducing noise, blur, or a wider aperture.
Normally it's not a big issue, since the native ISO is usually close to the ISO 100 mark, but cameras have come out, such as the Sony A7S, that have a high native ISO (the A7S sits at 3200).
edit: the scan of the house is insane, wouldn't mind exploring it in VR.
Hello everyone, I thought I would share some of my WIP in Agisoft Photoscan, provide any info that may be of help, and hopefully learn from the rest of you.
Two more items where I am just posting screens of them in Agisoft: I scanned a sandy area of a park to test as a ground texture for UE4, and I am working on a second tree scan; the mesh is giving me headaches, so I may just manually retopo it.
3D printing brings Da Vinci's anatomical drawings to life
Collaborative work between WMG (Warwick Manufacturing Group) and Warwick Medical School created translucent 3D plastinations that use Leonardo Da Vinci's original drawings of the human anatomy. The exhibition, which runs until 10 November 2013, sees the drawings, which are over 500 years old, side by side with plastinations, 3D scans and a translucent 3D heart.
The heart was generated by converting an MRI image into a STereoLithography (STL) file. The software then slices this STL file into layers that can be 3D printed by jetting a liquid polymer layer by layer, which is instantaneously solidified through the use of ultraviolet light.
In the process of printing, a number of print heads provided the polymer for the heart, and some created a gel-like support material, required to print the complex heart structure, which would later be removed by water jets.
Just got Skanect! Been blown away by some of the results people are getting using the Kinect for scanning. Anyone else have Kinect examples? http://skanect.occipital.com/
I'm using a Canon EOS 70D. Didn't buy it specifically for photogrammetry though. Works just fine. I usually use my EF 40mm STM or EF-S 18-135mm STM.
I haven't tried this yet, but with a polarizing filter you should be able to extract/separate glossy data to some extent. It will produce photos a few stops darker, but if you do one texture-generating pass with the polarizing filter and one without, then you should be able to produce a specular texture(?) by subtracting one from the other. This is going to be my next attempt.
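For what it's worth, the subtraction idea can be sketched in a few lines of Python (NumPy + Pillow). This is only an illustration of the concept, assuming a locked-off camera and matched exposure between the two passes; the file names are placeholders.
[code]
# Rough sketch of the subtraction idea: the cross-polarized shot is (mostly)
# diffuse-only, so subtracting it from the unfiltered shot leaves a crude
# estimate of the specular contribution. Assumes a locked-off camera and
# matched exposure between the two passes; file names are placeholders.
import numpy as np
from PIL import Image

unfiltered = np.asarray(Image.open("shot_no_polarizer.png"), dtype=np.float32) / 255.0
polarized  = np.asarray(Image.open("shot_cross_polarized.png"), dtype=np.float32) / 255.0

specular_estimate = np.clip(unfiltered - polarized, 0.0, 1.0)

Image.fromarray((specular_estimate * 255).astype(np.uint8)).save("specular_estimate.png")
[/code]
If the polarized pass comes out a couple of stops darker, compensate with shutter speed when shooting rather than scaling the image afterwards, otherwise the difference mostly measures exposure instead of gloss.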
The truth is, any number of likely hundreds of different cameras will be suitable for this purpose. If you ask which ones are best, you'll simply get a list of cameras that people have used, more so than a "best of photogrammetry" recommendation.
What would be the best camera for you depends on a wide range of factors:
1. Do you want it to be small and light for portability, or do you not mind carrying around a heavy camera/set of lenses?
2. How much money do you have to spend?
3. Do you want an optical viewfinder? Is an electronic viewfinder OK? Do you only need a rear LCD screen to compose?
4. Do you need the flexibility of an interchangeable-lens camera with multiple lenses, or would you rather have a fixed-lens compact?
5. Do you intend to use the camera for other purposes, like traditional photography?
6. If so, what sort of photography are you interested in? Portraits, sports, landscapes, architecture, low light, etc?
7. Are you willing to use a tripod or not?
As far as pulling gloss/spec out without special gear, no, there isn't really any easy way to do this. You can use a polarizer on both your lens and your light source (you need to use a custom light source, typically with off-camera flash(es)) to separate specular. Pulling gloss is a lot more complicated, and I'm not sure how one could do that short of building a specialized scanning rig.
Hey guys, thanks for the help!
I'm going to borrow a couple cameras today and do some reading up on those links you posted then go from there!
EQ: I was reading last night about using a turntable/greenscreen setup for smaller objects but it seems like you had troubles doing something similar with your gnome?
Was this just because there was no additional reference points besides the subject?
Yes, I think that was my issue. I was photographing the gnome on a completely flat, neutral background, great for photography but a poor choice for photogrammetry. If I were to do it again I think I could solve the problem by placing reference points. I've seen people use newspaper or other bits of paper with a lot of text/detail, or colored construction paper, etc.
If you use a turntable, you need to mask out your object. I use a light tent and a turntable: deer skull
The neutral background is just messing with Photoscan's "depth perception." The setup you had was actually pretty good.
The counter to this is, after you've taken your series of photos with the object on its turntable (and a stationary camera), you take a final photo without the gnome.
You can then feed this into Photoscan as a "Background Mask," and Photoscan can use that information to create an automask to help crop out your object (and give it a better chance of working out what's where in 3D space).
These two forum responses from the Agisoft team explain it better:
http://www.agisoft.com/forum/index.php?topic=1797.0
http://www.agisoft.com/forum/index.php?topic=2174.0
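Photoscan builds that automask for you from the clean background plate, but the underlying idea is easy to see outside the tool. Below is a small, illustrative Python/OpenCV sketch of the same trick, differencing each turntable shot against the empty-turntable photo; it is not how Photoscan does it internally, and the file names and threshold are placeholders.
[code]
# Illustration of the "background mask" idea: difference every turntable shot
# against the empty-turntable plate and threshold what changed. Photoscan does
# its own (better) version of this when you feed it a background image; file
# names and the threshold value here are placeholders.
import glob
import cv2

background = cv2.imread("empty_turntable.jpg", cv2.IMREAD_GRAYSCALE)

for path in glob.glob("turntable_shots/*.jpg"):
    photo = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(photo, background)              # what changed vs. the empty plate
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Clean up speckle so the mask hugs the object rather than sensor noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    cv2.imwrite(path.replace(".jpg", "_mask.png"), mask)
[/code]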
I picked up Agisoft Photoscan Standard edition over the weekend and did some testing with it. So far it seems to have no clue what to do with anything dark and slightly shiny. All of the black vinyl is missing or distorted, and the semi-gloss metals aren't coming in either:
Considering how much effort went into the image capture, the scan quality is not great. There are probably some workflow tricks I am missing that would improve the results.
The pricing structure on Photoscan is a little ridiculous. The only feature in the Pro edition I want is the tool to establish scale. I can do it manually in 3ds Max, but it will be more time-consuming. I am not paying over $3000 for such a simple feature.
I know baby powder is often used to dull shiny surfaces for laser scanning so I am going to try that on the problem areas. I also sprayed a dusting coat of primer on some control arms to see if that knocks the shine down enough to scan them better.
Hi, I saw this thread and I'm really interested!
I have a few questions to ask, if you don't mind.
-Is this technique widely used in game art?
-Is it practical for a small team of 3-4 artists?
-How many weeks does it take to master this technique at production quality?
I've had this crazy and silly workflow/pipeline in my head for a while:
1. Sculpt your character's anatomy (in T-pose) normally in ZBrush or any other 3D sculpting software of your choice.
2. 3D print the full body of your (T-posed) character at a reasonable size.
3. Get it dressed: depending on the printed size, sew custom-sized clothes, or use real clothes if it's a life-sized 3D print (quite a rare opportunity to do that, though).
4. 3D scan or use photogrammetry to get it back into ZBrush with the clothes.
5. Clean up, resize, fine-tune and detail it for the final sculpt, if needed.
Of course there are multiple points where the process can fail, but I'd love to get clothes with cool wrinkles and folds. The size of the print usually matters, since the bigger it is, the better the drapery is. And you could get any kind of wrinkles and folds, adjusting by hand.
That would be a great idea as long as you didn't need to do an original clothing design. Good fabric's expensive and the bigger the clothes are the more sewing you have to do, so working in 1/4 or 1/8 scale for example could save a good bit of time. Of course you would have to be careful to make sure that the clothes don't look like they're miniatures... in the end it's probably just easier to use MD if you're doing original clothing designs.
The effort and/or expense level required to get anything close to a production ready high poly model is so high that it doesn't make sense to attempt it unless you can't model and want to spend hours doing cleanup or you have the budget for a dedicated multi camera/lighting rig.
I am curious how people are handling the removal of lighting from the texture to use for an albedo. I know that you can Desaturate, Invert, and blend with Soft Light, but I'm not sure if that is the correct way to handle it. I have seen the presentations from Epic and others but haven't figured out how they are going about it.
To delight, you have to capture a 360-degree HDRI at the time of model capture and use something like V-Ray to render a lightmap that exactly matches reality. Then, in Photoshop, blend the delighting texture and the photoscanned texture with the Divide blend mode. If done correctly, the lighting gets stripped out.
It's blooming hard though. Getting your delighting lightmap to perfectly match reality is super hard; if it's off by a little bit, the small errors will literally glow once blended.
I think it's feasible to do it if you have the budget (probably 2 camera bodies, a fisheye lens, and a pano head) and the time, but otherwise shooting in diffuse lighting conditions will give you pretty good results.
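To make the Divide step above concrete, here is a minimal NumPy/Pillow sketch of the same maths, assuming the photoscanned texture and the rendered lightmap are already in the same UV layout and a linear colour space; the file names are placeholders.
[code]
# Minimal sketch of the Divide blend described above: scanned texture divided
# by a rendered lightmap that matches the real lighting. Assumes both images
# share the same UV layout and linear colour space; file names are placeholders.
import numpy as np
from PIL import Image

scanned  = np.asarray(Image.open("scanned_diffuse.png"), dtype=np.float32) / 255.0
lightmap = np.asarray(Image.open("rendered_lightmap.png"), dtype=np.float32) / 255.0

# Clamp the lightmap away from zero so dark, shadowed texels don't blow up;
# this is exactly where the "glowing" errors come from when the lightmap
# doesn't quite match reality.
albedo = np.clip(scanned / np.maximum(lightmap, 1e-3), 0.0, 1.0)

Image.fromarray((albedo * 255).astype(np.uint8)).save("albedo_estimate.png")
[/code]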
I did it on the cheap for about £100 off Amazon. I already had a tripod, and you can just swap your lens to a cheap fisheye; you definitely don't need another body.
It just takes about 2 minutes out in the field to capture the HDRI, and the processing on the computer takes maybe an hour per asset. One quick lightmap and a Photoshop blend mode and you're done.
Could you go into a bit more detail if you have the time? What equipment are you using? Are you creating a full panoramic HDRI or using a light probe ball?
I realise the tripod head's not right, but it's cheap and allows me to set the camera back far enough (and safely enough) to avoid about 90% of the parallax that comes from rotating the camera around its base.
Then once the HDRI's made, I line the image up in Max, render a delighting map, and Bob's your uncle.
I'm thinking of capturing some ruins piece by piece with a view to retopologizing the geometry. In small controlled environments, it's easier to set a scene up but outside it's more difficult. Grass blowing/moving, trees etc...
I'm still very new to this but I'm enjoying a period of fascination with photogrammetry. If anyone's working on a list of pointers, here or otherwise, it would be good to have a link .: ]
Hi guys, I am really new to photogrammetry stuff. The latest result I have here is a combination of raw data and an "inflate cheat" in ZBrush.
Canon 60D
After the inflate/balloon pass from the diffuse AO.
Some people in Facebook comments said I don't have to take a lot of pictures to get accurate mesh data; however, I tried that before and got so many holes, so I learned from my mistakes and take as many pictures as possible.
Still, I couldn't get pores/micro detail without the "inflate cheat".
I heard that it depends on the object size too. Is that true?
------------------ some progress
First iteration.
Detail reconstruction after cutting tons of unnecessary areas (I had to calculate twice and change the setting to high mesh detail before reconstructing it; I think Agisoft would do a better job).
Earlier test with a cellphone cam (Samsung Galaxy S4).
Replies
[ame]www.youtube.com/watch?v=0EyHSzfDo6c[/ame]
BTW, what about the Megascans video @linkedclaude?
[ame="http://www.youtube.com/watch?v=0DG51glKipU"]www.youtube.com/watch?v=0DG51glKipU[/ame]
Initial Settings
22 Cameras into Photoscan
Aligned - High Accuracy
Dense Cloud - Low
Mesh - High
Imported Mesh into ZBrush
Remesh with guides
Import to Maya for UV unwrap
Import back into Photoscan to reproject the textures
export textures from Photoscan
Normal map baked in XNormal
Lighting is baked in to the texture.
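For anyone who wants to batch that sequence, Photoscan Pro exposes a Python console (the Standard edition doesn't). The sketch below is only an outline from memory of the 1.x API; the function names, enum values and paths are assumptions and may differ in your version, so treat it as a starting point rather than a working script.
[code]
# Outline of the listed steps via Photoscan Pro's Python API (1.x-era).
# Names and enums are assumptions from memory; paths are placeholders.
import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("photos/*.jpg"))                  # 22 cameras in

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()                                        # Aligned - High Accuracy
chunk.buildDenseCloud(quality=PhotoScan.LowQuality)         # Dense Cloud - Low
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData)           # Mesh - High
chunk.exportModel("raw_scan.obj")                           # off to ZBrush / Maya

# ...remesh in ZBrush, unwrap in Maya, then bring the clean mesh back in and
# reproject the photos onto its existing UVs:
chunk.importModel("retopo_with_uvs.obj")
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
doc.save("scan_project.psz")
[/code]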
[SKETCHFAB]03360cf2690140b4b0e6a61984746327[/SKETCHFAB]
Still working on the PBR part...
I'll just add this here
[ame]https://www.youtube.com/watch?v=bdaepPjZKmM[/ame]
Photogrammetry agisoft, 210 photos at 18MP
http://www2.warwick.ac.uk/newsandevents/pressreleases/3d_printing_brings/
http://www.laboiteverte.fr/des-espaces-numerises-au-lidar/
Nice use for the medium
[vv]124635236[/vv]
[vv]120708083[/vv]
[vv]119136236[/vv]
[ame]www.youtube.com/watch?v=yh3MSIEsh2w[/ame]
http://www.smartprinting.co/3d-printing-news/next-apple-iphone-may-also-be-the-worlds-best-3d-scanner/
Also, is there any way using standard cameras that you would be able to extract specular/gloss information for a pbr engine?
Also, it's worth checking out these threads:
http://www.polycount.com/forum/showthread.php?t=153940
http://www.polycount.com/forum/showthread.php?t=130896
http://www.polycount.com/forum/showthread.php?t=128289
http://www.polycount.com/forum/showthread.php?t=154396
More about that stuff here:
https://udn.epicgames.com/Three/TakingBetterPhotosForTextures.html
http://gl.ict.usc.edu/Research/DigitalEmily/
http://gl.ict.usc.edu/Research/FaceScanning/EGSR2007_SGI_low.pdf
Autodesk just posted a Memento presentation:
[ame]https://www.youtube.com/watch?v=kjKJaTMZNJ8[/ame]
http://i.imgur.com/mHiMzji.png
The result is good enough to use for a detailed reference object and maybe to manually retopo over.
Like a "real-life Marvelous Designer".
http://starwars.ea.com/starwars/battlefront/news/how-we-used-photogrammetry
DICE uses photogrammetry extensively and our team learned a lot from the techniques they developed.
Any insight would be greatly appreciated.
https://www.marmoset.co/toolbag/learn/hdr-panos
And these things
[ame="http://www.amazon.co.uk/Q-45-12KG-Panoramic-Gimbal-Plate/dp/B00KWA68LI/ref=sr_1_5?s=photo&ie=UTF8&qid=1437422060&sr=1-5&keywords=panoramic+head"]Pro Q-45 12KG Load Panoramic Gimbal Head with 70mm QR: Amazon.co.uk: Camera & Photo[/ame]
[ame="http://www.amazon.co.uk/Opteka-Professional-Fisheye-Digital-Cameras/dp/B001LZJB9Y/ref=sr_1_1?s=electronics&ie=UTF8&qid=1437419973&sr=1-1&keywords=fisheye+lens+canon"]Opteka HD² 0.20X Professional Super AF Fisheye Lens for: Amazon.co.uk: Electronics[/ame]
Here's one I did earlier.
You should probably straighten that horizon though.
https://environmentagency.blog.gov.uk/2015/09/18/laser-surveys-light-up-open-data/
It’s 11 terabytes if you download it all so just an average UE4 export size, ha-ha!
Best,
Neil
You can cheat by using the different channels in your world-space normal map to delight/flatten your diffuse texture in Photoshop.
Any more info on this? Sounds like it would be a neat little trick.
Super awesome thread by the way, so much information. I'm hoping to have a piece to contribute very shortly.
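The post above never spells the trick out, but one plausible reading is to treat the up-facing component of the world-space normal map as a rough stand-in for sky lighting and divide it out of the diffuse, the same Divide idea as the HDRI workflow earlier, just much cruder. The sketch below is that guess, not the poster's actual recipe; it assumes the normal map is world-space with +Y up, and the file names are placeholders.
[code]
# A guess at the "world-space normal channel" trick: use the up-facing
# component of the world-space normals as a crude sky-light estimate and
# divide it out of the diffuse. This is an interpretation, not the original
# poster's exact recipe; file names are placeholders.
import numpy as np
from PIL import Image

diffuse = np.asarray(Image.open("scanned_diffuse.png"), dtype=np.float32) / 255.0
normals = np.asarray(Image.open("worldspace_normals.png"), dtype=np.float32) / 255.0

# Decode the up channel (assumed to be green, +Y) from 0..1 back to -1..1,
# then build a soft "sky" term: up-facing texels received the most light.
up = normals[..., 1] * 2.0 - 1.0
sky = np.clip(up * 0.5 + 0.5, 0.05, 1.0)   # wrapped and clamped away from zero

flattened = np.clip(diffuse / sky[..., None], 0.0, 1.0)
Image.fromarray((flattened * 255).astype(np.uint8)).save("diffuse_flattened.png")
[/code]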
I used Agisoft for this; I think I'm starting to get some nice results from it.