We would love to know more about the cameras used, software, breakdowns, pipelines, and any other hints and tips worth looking into.
What are your favorite pipelines for getting the detail and texture information from a digital scan transferred to a texture?
Could you tell us the best practices for photogrammetry for use in the games industry?
Who are the people at the forefront of this area? People like 3D scanning specialist Jeffrey Ian Wilson.
Haven't had the time, but I would be really interested in hearing from anyone who has given it a chance...
(Wondering if RealViz tech made it into the app, and what kind of application it is: Windows, Android, iOS, or web app.)
From phone pic capture to geometry sounds like an awesome on-the-fly world-geometry note-taking strategy.
I have used PhotoScan for a while now. Works great for me. I tried making a tree trunk, which turned out barely decent; I only got 20 photos because my memory card ran out (you should always get around 40-50 for small/medium objects, and 100+ for big ones).
*No displacement though, which would really help it.
Going out hunting tomorrow, so I will contribute a bit more to this thread. If you need to know anything about PhotoScan, ask me and maybe I'll have an answer. Really interesting subject and TRUE NEXT-GEN!
To claydough: I'm sure architectural stuff like houses will work, but it will be a lot of work later on, especially retopologizing it. Because it's such a big object, unique UVs will not look good; the object is too big to give a crisp texture unless you want to use 8k-16k, haha.
Almost any camera that has a decent-sized sensor and a good lens should work fine. I personally like to use a 50mm lens on a full-frame camera (Canon 5D Mk2). I tend to avoid zoom lenses (primes have higher quality for the price), and some photogrammetry software prefers all shots in a project to be taken with the same camera/lens; a prime (non-zoom) lens guarantees that. Even if you use a zoom and never touch the zoom ring, it can still move a little.
Try to use a high f-stop to get as much in focus as possible. Shoot in good light and make sure your pictures are correctly exposed. Use a tripod if you have to when taking shots.
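If you want to sanity-check how much will actually be in focus at a given f-stop before you head out, the textbook thin-lens approximations are easy to run. A rough Python sketch; the 0.03 mm circle of confusion is the usual full-frame assumption, and nothing here is specific to any photogrammetry package:

[code]
# Rough depth-of-field calculator (standard thin-lens approximations).
# Assumptions: full-frame circle of confusion ~0.03 mm; distances in metres.

def depth_of_field(focal_mm, f_number, subject_dist_m, coc_mm=0.03):
    f = focal_mm / 1000.0          # focal length in metres
    c = coc_mm / 1000.0            # circle of confusion in metres
    s = subject_dist_m
    hyperfocal = (f * f) / (f_number * c) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    if s >= hyperfocal:
        far = float("inf")         # everything out to infinity is acceptably sharp
    else:
        far = s * (hyperfocal - f) / (hyperfocal - s)
    return near, far

# Example: 50 mm prime at f/8, subject 2 m away.
near, far = depth_of_field(50, 8, 2.0)
print("In focus from %.2f m to %.2f m" % (near, far))
[/code]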
Take WAY too many pictures (or think that you are taking way too many pictures). The #1 rule of photogrammetry is that you can never have too many photos.
As for software: Agisoft PhotoScan is great, and honestly quite a steal at the price. Autodesk 123D Catch is fine and free, but it doesn't use full-resolution photographs and will only give you an averagely dense mesh. The higher-end Autodesk solution (ReCap Pro) can do a great job; stuff often looks better from that than from Agisoft, plus it now supports GoPro (super-wide) images, so if you do photogrammetry from drones, that is a plus.
My personal preference is to keep my work somewhat private, so I don't much like the idea of my data being sent off into the cloud for processing, but I might just be old and jaded.
When getting the data back it is super useful to get not only an OBJ/textures of your model, but also a scene that has the cameras in it, along with images corrected for lens distortion. Since 3D packages don't compensate a camera for lens distortion, you need to make sure that the images you load into a camera in your 3D app as background images have been corrected for lens distortion too, otherwise the mesh won't match up with the camera image plane correctly.
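If your photogrammetry package exports the calibrated intrinsics, undistorting the source photos yourself is straightforward with OpenCV. A minimal sketch; the fx/fy/cx/cy and distortion values below are placeholders, so substitute whatever calibration your software actually gives you:

[code]
# Undistort a photo so it lines up with an ideal (pinhole) camera in a 3D app.
# Placeholder intrinsics: substitute the values your photogrammetry package exports.
import cv2
import numpy as np

fx, fy = 4300.0, 4300.0        # focal length in pixels (placeholder)
cx, cy = 2880.0, 1920.0        # principal point (placeholder, roughly image centre)
camera_matrix = np.array([[fx, 0, cx],
                          [0, fy, cy],
                          [0,  0,  1]], dtype=np.float64)
dist_coeffs = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3 (placeholder)

img = cv2.imread("IMG_0001.jpg")
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("IMG_0001_undistorted.jpg", undistorted)
[/code]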
So I was outside for a while and decided to make a new model, this time a brick I found.
The most important thing, in my opinion, is even lighting. Go outside on an overcast day. And as matty says, you can never have enough pictures; there are still a few blurry parts on my model after 45 pictures, so I think you get the idea.
Base head model in about 5 minutes using 123D Catch and my Samsung S3 mobile phone. Lighting was far too bright; I will try again with a darker backdrop, but not bad just as a test. I will try again with a Canon 5D and then work out how to bake the textures down and patch it up, or just hand-paint it. All fun.
About 40. I need to do more from a low angle to capture under the nose and chin. I will try and come back to it today or tomorrow and see what's the best way to clean it up.
Do you guys bake textures across meshes in world space in Maya/Mental Ray? Or how do you do it?
xNormal can do that easy peasy. Look for the base texture bake option. I'm sure Maya can do it too, but I don't use Maya that often so I couldn't tell you how.
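If you would rather stay inside Maya, the Transfer Maps feature is also available as the surfaceSampler command, which bakes the scan's diffuse onto a clean mesh's UVs by world-space proximity. A rough sketch only; the flag names are from memory, so treat it as a starting point and check the surfaceSampler docs for your Maya version:

[code]
# Bake the photo-projected diffuse from a dense scan mesh onto a clean, UV'd
# game mesh using Maya's Transfer Maps backend (surfaceSampler).
# Mesh names, resolution and output path are placeholders.
import maya.cmds as cmds

cmds.surfaceSampler(
    source="scanMesh",        # dense photogrammetry mesh with the projected texture
    target="gameMesh",        # clean retopologized mesh with proper UVs
    uvSet="map1",
    mapOutput="diffuseRGB",   # bake the surface colour
    mapWidth=2048,
    mapHeight=2048,
    filename="C:/bakes/scan_diffuse",
    fileFormat="tga",
)
[/code]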
Nice. I've been looking into this stuff for a while now, so I'll pass along some info I've picked up from Googling and trial and error. Sorry if this goes long.
123D Catch is alright, but some users don't like its black-box nature, and apparently it limits the dimensions of your images for bandwidth reasons (or whatever).
Another way to work is detailed here: http://wedidstuff.heavyimage.com/index.php/2013/07/12/open-source-photogrammetry-workflow/
It's open source and gives you a good idea of how photogrammetry/SfM workflows work, but apparently there are restrictions on commercial use.
My favorite program for this sort of work is Agisoft PhotoScan (Standard Edition): http://www.agisoft.com/features/standard-edition/
It's $189 (the Pro edition has nice features, but the price is a little high for what you get; the Standard Edition is good enough).
WARNING: some of these steps may be personal preference and/or unnecessary, and this is all extremely W.I.P.
Now on to shooting (I use a Sony a6000 and am not the world's greatest photographer, so please take these as guidelines only):
Drop your ISO as low as the lighting conditions allow (I'll drop it all the way to ISO 100 if I can).
Stop down your aperture if you are able to (again, lighting conditions permitting); ideally you want the whole subject as sharp as can be (as little bokeh as you can get). Usually between f/4 and f/8 works for me.
Shutter speed depends more on how you are taking the photos. I take most of mine handheld, so I need a slightly faster one to avoid motion blur (SHARP! SHARP! SHARP!), but slower would probably be better if you can stabilise (since you have to move around the subject a tripod is probably overkill; a monopod is probably my next purchase).
MAKE SURE THE SUBJECT IS IN FOCUS (this is obviously important).
Also, if possible, use a prime lens (I don't think PhotoScan, or any photogrammetry program, can deal well with images of different focal lengths).
Eliminating noise and blur is the key goal when setting up your camera.
Currently I'll circle the subject (trees, statues, rocks) twice, taking two steps and then reframing as much of the object as possible (preferably all of it) into the shot.
I usually end up with 30 to 40 shots. Keep in mind that there must be overlapping detail for this to work, and there may be parts you need to take extra shots of (e.g. under a park table) to avoid gaping holes in the mesh. Usually anything that can be obscured by other objects needs this (legs, arms, tree branches). COVERAGE IS KEY.
Also, some things will absolutely not work with this method (leaves, grass, hair; mainly stuff with insanely fine detail).
If you want to just push the images into PhotoScan (or others) you can use JPEGs (but use the highest quality setting available; again, noise is the enemy). I personally shoot either RAW or RAW + JPEG, which gives more options. JPEG will limit you to 8-bit colour (not a problem for building the geometry, but the textures can be processed in 16-bit or as HDRs; I haven't tried this yet either). I process the RAW images into 16-bit TIFFs and remove any stray noise (probably unnecessary, but I'm still refining this part).
(IMPORTANT NOTE: PhotoScan relies on EXIF data to pick up focal length, shutter speed, etc. automagically, so don't use programs or file formats that destroy it.)
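A quick way to check that a whole shot set still carries EXIF, and that it was all taken at one focal length, is to scan the folder with Pillow before importing anything. A small sketch; the folder path is just an example:

[code]
# Verify that every image still carries EXIF and that the focal length is
# consistent across the set (what you'd expect from a prime lens).
import glob
from PIL import Image, ExifTags

focal_lengths = set()
for path in sorted(glob.glob("shoot_01/*.jpg")):        # placeholder folder
    exif = Image.open(path)._getexif() or {}            # legacy helper: flat tag-id -> value dict
    tags = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    focal = tags.get("FocalLength")
    if focal is None:
        print("WARNING: no focal length EXIF in", path)
    else:
        focal_lengths.add(focal)

print("Focal lengths found:", focal_lengths)             # ideally a single value
[/code]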
Read the manual for PhotoScan; it's not long and will reveal all the steps and settings involved.
PhotoScan can take a long time to process the images at each stage (a beefy GPU and CPU with a nice amount of RAM are a must). The MEDIUM quality setting will probably be more than enough for most subjects (unless you really have all day).
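For what it's worth, the Pro edition also exposes a Python console, so that whole pass can be queued up as a script and left to run overnight instead of clicked through. A rough sketch along the lines of the PhotoScan 1.x scripting API; the call names, enum values and paths below are from memory and assumption, so check the API reference for your version:

[code]
# Queue a basic PhotoScan pass: align, dense cloud, mesh, UVs, texture.
# Assumes the PhotoScan Pro Python console; names/enums per the 1.x API docs.
import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("D:/scans/brick/*.jpg")))        # placeholder path

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)            # medium is usually plenty
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

doc.save("D:/scans/brick/brick.psz")
[/code]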
I'm still learning these parts, but any mesh will require a LOT of cleanup work at several stages.
Most meshes will be extremely dense and have horrible topology, but these are problems that many polycounters already have solutions for, so maybe look those up.
Once that's done (a relatively low-poly mesh with good UVs and topology), you can re-import it and reproject the photos onto the model in PhotoScan, so hooray.
Sorry for the long post, but this is something I've been having fun with lately, and it's good to see some other polycounters are checking it out (to be honest I probably need some people to bounce ideas off; doing it all on your own sucks). HAVE FUN GUYS.
P.S. Bah, I should have read the whole thread first; half of my info is already there. Oh well.
I work for Plowman Craven in their VFX team, and worked on the 3D scanning for Edge of Tomorrow, Guardians, Avengers 2 and other stuff. We use a bunch of different scanning methods, and photogrammetry is something we use more and more often as it's been getting better and better. I can't go into workflow specifics, but I can give some general advice.
Firstly, Agisoft PhotoScan is definitely the king of photogrammetry. It's cheap, it works brilliantly; that's about it. There are so many aspects that affect the quality of a scan, but you can boil nearly all of them down to the photos you put in. Unfortunately that means camera body, lens, f-stop, ISO, shutter speed, lighting, movement, camera position, shadows: EVERYTHING. And what makes a nice photo is not usually what's good for Agisoft.
As others have mentioned, you generally want to stick to ISO 100 and keep everything as sharp as possible. You may end up manually masking out blurred parts of photos later, so save yourself some time.
Now, this will give you very dark photos, but bright lighting is also key. You want flat lighting, meaning shoot on an overcast day or use polarising filters in a studio environment. You may need to shoot in RAW and pull up the exposure quite a bit later. I've found that for the best texture results you want to be taking photos that look almost like flat albedo textures right off the bat. Also, keep the lighting constant from photo to photo.
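If you do end up shooting dark and pulling the exposure back up in post, a fixed push in stops is trivial to batch once the frames are developed. A minimal sketch on 16-bit TIFFs; the folder and the +2 EV value are just examples, and it assumes roughly linear data (gamma-encoded files would need linearising first):

[code]
# Push exposure on developed 16-bit frames by a fixed number of stops.
import glob
import cv2
import numpy as np

EV_PUSH = 2.0                                           # +2 stops (example value)

for path in glob.glob("developed/*.tif"):               # placeholder folder
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)        # preserve 16-bit depth
    pushed = img.astype(np.float32) * (2.0 ** EV_PUSH)  # one stop = 2x linear gain
    pushed = np.clip(pushed, 0, 65535).astype(np.uint16)
    cv2.imwrite(path.replace(".tif", "_push.tif"), pushed)
[/code]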
Now, coverage: you want to cover the object as much as possible. Think about covering every part of it from at least two angles or more, and by angles I mean 20 degrees or so apart. A common mistake I see people make is photographing, say, a cube from the side, then from the 45-degree corner, then from the top. You might be able to infer the shape of the object from those photos, but Agisoft can't; it's looking for similarities between clusters of pixels, so neighbouring shots need to be much closer together.
Re: "you should always get around 40-50 for small/medium objects, and 100+ for big ones". Not true! I would say shoot around 60-100 photos of any object. Size does not factor into it, just the level of detail; in fact smaller objects usually have smaller details that are harder to capture.
Finally, a word on formats: shoot in RAW so you get the best images to work from, but unless it is SUPER important that you get 16-bit colour, just give Agisoft JPEGs. Much, much quicker, and no real change in the quality of the mesh.
For more reading check out the Infinite Realities blog; this guy is an Agisoft pioneer. And the Agisoft forums: literally every question you have about Agisoft has been answered there by people who have sweated through the trials and troubles of learning how it all works from scratch. My advice is to find someone answering a question and just dig through their post history, because they probably have a lot of useful information.
Oh, and lastly: a crap mesh with photo textures will look like a photo, but ultimately you want something you can light in engine and don't have to spend days editing!
Hi there. Wondering if anyone has tried this technique?
Let's say you want a brick texture. Instead of having to worry about lighting conditions or having your camera parallel to the bricks' alignment, take maybe 10-15 pictures of the area where the bricks are. You will have a lot more flexibility, as you can practically choose which area to use.
I tried this method today and it works, but the workflow is still VERY WIP. You can get real heightmaps if you take enough pictures, plus real normals (this depends on how well your software handles the pictures).
An example of what you can achieve (note it's not tileable; as I said, it's WIP). Very good lighting and detail. If anyone is interested I would love to go into a bit more detail.
My main issue, though, is that if you look closely there are small white lines which come from the original UV map (because it came straight out of PhotoScan, it's a mess). Working on a fix, but kind of stuck at the moment.
Probably your best bet is to take the mesh into ZBrush (or any other program that can deal with UVs), create planar UVs, re-import the model back into PhotoScan and bake the texture using the keep-UVs feature (Export Model is under File, Import Model is under Tools; go figure).
The problem is that PhotoScan doesn't really like reasonably flat surfaces; the chance of it failing with a surface like this is high. Still, nice work.
I have tried this method with Mudbox, but the issue I got was that the transferred textures on the clean mesh got a bit distorted. Going to look into this again today.
EDIT: In my experience it does not really hate flat surfaces either. Taking some close-up pictures can get you a really nice high-poly mesh and the possibility of a sculpt-like heightmap.
Well, I finally got my tileable workflow to work. Here is the result; if anyone wants the workflow I'll share it.
PBR calibrated.
So is it a viable option? Depends on what you are after. For brick textures, and textures in general that have a certain repeating pattern, I would definitely recommend trying it out.
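The workflow itself isn't posted here, but for anyone who wants to experiment, the classic way to force a scanned texture to wrap is to blend it with half-offset copies of itself so the borders line up. A rough numpy sketch of that generic trick (not the poster's actual method); it guarantees the edges tile at the cost of some ghosting in the blend zones:

[code]
# Make a texture tileable by blending it with half-offset copies of itself.
# The original dominates in the middle; the offset copies take over toward
# the borders, so opposite edges end up matching when the texture repeats.
import cv2
import numpy as np

img = cv2.imread("brick_albedo.png").astype(np.float32)    # placeholder input
h, w = img.shape[:2]

# Triangular weight: highest at the image centre, zero at the borders.
wx = np.minimum(np.arange(w), w - np.arange(w)).astype(np.float32)
wy = np.minimum(np.arange(h), h - np.arange(h)).astype(np.float32)
weight = (wy[:, None] * wx[None, :])[..., None] + 1e-6

def shifted(a, dy, dx):
    return np.roll(np.roll(a, dy, axis=0), dx, axis=1)

acc = img * weight
total = weight.copy()
for dy, dx in [(h // 2, 0), (0, w // 2), (h // 2, w // 2)]:
    acc += shifted(img, dy, dx) * shifted(weight, dy, dx)
    total += shifted(weight, dy, dx)

result = np.clip(acc / total, 0, 255).astype(np.uint8)
cv2.imwrite("brick_albedo_tileable.png", result)
[/code]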
The repetition is looking good to my eyes (doesn't stick out too badly at all).
Nice light that day?
Yeah, light overcast. Sadly it's becoming autumn here in Norway, and it gets dark quickly, plus it rains a lot. Might be hard for me to get good references from now on...
In Maya I found the surface transfer works really well for a base texture, and then you can use the same textures that you captured for the 3D scan to clean it up in Mudbox. I did a quick test (with a mobile phone, during a break, in really poor lighting!), so character artists please forgive my hackiness; it was just to try out a pipeline.
The retopologize is on a base mesh from another source, as I trashed my 123D scan with some patch experiments. I'll come back to that later when I do the next one, but it was just to show you the options.
I am looking forward to trying this in proper lighting conditions and with a Canon 5D and Agisoft PhotoScan. Anyway, if anyone is interested, this is how it works; for more information on surface transfer see http://download.autodesk.com/us/maya/2010help/index.html?url=Lightingshading__Transfer_Maps.htm,topicNumber=d0e521790
For people's information, 123D Catch is only the hobbyist/consumer version of our reality capture solution. The professional version is ReCap, and it gives you more options.
Check out the info here, there's some really cool stuff: http://recap.autodesk.com/
If you're on active Subscription for Max, Maya and Mudbox, you should be able to access the free version.
There's also some really good information on our ReCap blog, including links to webinars and youtube videos: http://recapsupport.typepad.com/
Getting good shots all depends on your needs and the project, so some tinkering around is essential before you go and actually shoot stuff.
Agisoft PhotoScan seems like the best software for now: fast, reliable, and the best possible chance of a good reconstruction.
The only slight bugbear is that if you go overly dense with overlaps the camera matching can go awry; go too sparse and the matching improves up to a point, but then it may not allow a good dense cloud to be created.
Ultimately it's a feature tracker, so features that can be tracked are essential. Homogeneous surfaces, highlights/reflections and low-quality detail on the surfaces you want to recreate will cause problems.
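One cheap way to guess whether a surface will track before you burn an hour on alignment is to just count detectable features in a test shot. Agisoft has its own matcher, but OpenCV's ORB detector gives a rough proxy; the keypoint threshold below is only a rule of thumb I'm assuming, not an Agisoft number:

[code]
# Rough "will this surface track?" check: count detectable features in a test photo.
import cv2

img = cv2.imread("test_shot.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=20000)
keypoints = orb.detect(img, None)

print("Detected %d keypoints" % len(keypoints))
if len(keypoints) < 2000:                 # arbitrary rule-of-thumb threshold
    print("Surface looks too homogeneous / low-detail; expect matching problems.")
[/code]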
But for a project I'm working on right now, having PhotoScan and Clouds2Max in my toolset is invaluable!
I've taken random video (not rolling shutter type) from the web and generated good usable reconstructions from it!
I wonder if in a few years there will be a market for libraries full of videos just walking around items and stuff on overcast days!?

Dave
Amazing the difference I got using a Canon 5D and 123D, rather than a small digital camera.
I have taken some more photos where I tried to keep really still, Zen-like, and not breathe when the picture was being taken. I am converting them at the moment and I will post them soon.
This week I will try again with some kind of hard neck rest on the back of the head to help keep it in place.
I will look into ReCap next, as it's free for three years for students. I will also try adding some dots to the face and see if that helps; small enough to clean off in ZBrush/Mudbox.
From my mobile phone while in London, using 123D Catch. Note to self: must take more images! You can never take enough photos for photogrammetry. I think that is the key! Feel free to download: http://www.123dapp.com/catch/Capture_2014_12_08_11_38_59/3185281
What sort of misfortune are you having?
Haven't tried Agisoft yet myself, but I am interested in any comparisons you come up with, since everyone seems to be recommending Agisoft.
BTW, thanks for sharing your efforts thus far...
They are greatly appreciated.
I think I was being too impatient, so I am now building up the workflow as the above Agisoft SfM tutorial suggests, and I will post up results in a while. It seems to take a good couple of hours, but if the results are as good as some people are getting then it's well worth the wait.
This is where I am at the moment and I am about to build the mesh, just a little more clean up.
And this is what I got, back to the drawing board....
I used the same images in 123D Catch and this is what I got.
Next time I capture a person I will make sure the head is held still with some kind of head rest on the back of the head. Also, I think there may have been a monitor on, and the windows had some trees that may have been blowing in the wind a little; there is quite a lot of data tripping out around those areas.
A still model is definitely what to try first before jumping into a human, and take lots more images. Have a look at this:
[ame]https://www.youtube.com/watch?v=fNtHuL3Zw4E[/ame]
For a living person it's better to scan using a multiple-camera setup.
My example of a game environment, mostly photoscanned: http://i.imgur.com/nxZQAZe.jpg
Great thread! I dabbled for a little while with 123D Catch, but obviously the resolution's lacking, and personally, I found that my successful build rate was, like, 20%. Most of the time it would just hang forever and never finish compiling. Super frustrating. But yeah - that Epic Games GDC breakdown that Mr.Funkdog mentioned was incredibly impressive, and it definitely revitalized my interest in a big way. Got to look into Agisoft and a proper camera; maybe dip into the rainy-day fund.
http://www.123dapp.com/catch
You can even download the OBJ files all for free
http://www.123dapp.com/catch/2014-10-09-23-25-49/2904590
http://www.123dapp.com/catch/2014-10-09-23-33-05/2904618
I hope to start scanning the world with the Nikon soon!
Thanks for the OBJ.
Do you think environments and architecture are possible?
Just opened the dinos in Maya...
The geometry looks dead-on; awesome, gorgeous! Very exciting. (Great job... thanks again for the OBJ reference!)
The starting point for texture capture turned out very nice as well!
BORAT niiiiiccceee!!!!
A couple more 123D scans I tested today while out and about with my mobile phone.
Download links
http://www.123dapp.com/catch/door/2917565
http://www.123dapp.com/catch/lion-door-handle/2917573
[vv]98638688[/vv]
Neil pointed me to this forum.
As far as 3D scanning goes, there are many ways to get the job done. I use photogrammetry, structured light and LIDAR depending on the job at hand. I am interested in giving an online masterclass covering all aspects of 3D scanning in the near future.
In the meantime, visit the 3D Scanning Users Group on Facebook:
https://www.facebook.com/groups/1439036619645915/
My corporate Facebook Fan Page
https://www.facebook.com/2cgvfx
And my personal webpage
http://www.jeffreyianwilson.com/
Hope to see you soon,
Jeff
Tutorial on using photogrammetry with tileable textures. Hope you learn something.
http://coub.com/view/3uigh
I did have an angry-looking face in this; the sun was shining directly into my eyes, so I ended up squinting a lot.
Attacked it in ZBrush and eventually got it to a workable level:
Not many defined features unfortunately.
Pushed it a bit further (with some quick polypaint test)
Then I wanted to see if you could scan clothing, turns out you can get a fairly decent result!
(After some very minor cleanup)
Going to keep on working into it, but some decent starting points came out from using scanning.
The obvious thing to note is having enough clearance around the subject as well as good lighting (as most of the people in here have said)
[vv]106186817[/vv]
https://www.youtube.com/watch?v=v3X5OmHXa4c#t=35
[vv]97942368[/vv]
[vv]113753676[/vv]
Has anyone got any good tutorials as maybe I am missing something?
[ame]https://www.youtube.com/watch?v=xQw8rPvYaoA[/ame]
http://www.awn.com/news/autodesk-announces-public-beta-release-momento?utm_source=dlvr.it&utm_medium=facebook
Wish we had a YouTube embed button on Polycount.
www.youtube.com/watch?v=axi4AYWXWTs#t=63
http://youtu.be/axi4AYWXWTs
http://www.twitch.tv/unrealengine/b/632598238 starts at 45:30
Pretty nice result for mobile from 123D Catch.
[vv]2948844[/vv]
Nice teeth
[SKETCHFAB]b1e0c0a586b241b5996b1f2bcbfc664a[/SKETCHFAB]