Hey! Sorry for crossposting, I just couldn't get any responses on my original thread.
So far I have a Nikon D5100 with the kit zoom lenses, a pan/tilt tripod head, a circular polarizer, an X-Rite ColorChecker Passport, and finally a chrome ball to capture the HDR environments.
1) Do I need to acquire a fisheye lens and a pano head to recreate the lighting conditions later, or will the chrome probe do the job? I understand that an HDRI would be the ideal choice for this, but spending $400 on the lens + head is a little out of my budget. Is there anything I should consider for this technique?
2) Is there a method for extracting roughness information from the photos? I saw this post from Filmic Games about separating the specularity from objects, but is there any way to apply this in PBR besides extracting the albedo?
3) On tileable textures, do you recommend using nDo/CrazyBump/Bitmap2Material for creating normal maps from photos, or is there a way to get that information from PhotoScan?
4) I understand that overcast weather is best for taking the photos, but after seeing how Epic recreates the lighting conditions to de-light the textures, does that even matter anymore?
Hey "xvampire", what "balloon inflate" trick in ZBrush do you have in mind in your post? How does it work? I've used Inflate Balloon or Inflate in ZBrush before on scans to get rid of the surface noise globally (instead of smoothing), but I'm not familiar with your solution/trick. Could you or someone please expand on what you do with Inflate Balloon in ZBrush with scans?
Hi Jonathan, sorry, I have been too occupied lately.
To answer your question:
Basically, Inflate is in Tool > Deformation > Inflate Balloon.
Inflate is not for cleaning up noise; Inflate is technically the Push tool from 3ds Max (Inflate Balloon grows in all directions, instead of just the initial normal direction). If you want to clean up noise, just use the Polish slider, and click the small circle next to the slider to make Polish affect even bigger shapes.
Then, with the high-detail information masked, you push it out (Inflate, or Inflate Balloon later on). This needs practice and trial and error, despite being just one or two steps.
Or in some cases, you Mask By Cavity to enhance the 3D detail you already have.
If you do it correctly, it will enhance the detail evenly without too much manual sculpting. (Of course, if you have a 16-bit heightmap, use displacement instead of this method.)
Lately I use Substance Designer to enhance detail safely, and leave medium- and low-frequency detail sculpting to ZBrush.
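The mask-then-push idea above can also be sketched outside ZBrush, e.g. with numpy: mask some vertices, then push them along their normals (Inflate) or radially outward from the center (Inflate Balloon). This is a toy illustration only; the quad mesh, mask, and `amount` value are made up.

```python
import numpy as np

def inflate(vertices, normals, mask, amount):
    """Push each masked vertex along its own normal (like ZBrush Inflate)."""
    return vertices + normals * (mask * amount)[:, None]

def inflate_balloon(vertices, mask, amount):
    """Push each masked vertex radially away from the mesh centroid,
    i.e. grow in all directions (like ZBrush Inflate Balloon)."""
    center = vertices.mean(axis=0)
    direction = vertices - center
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    return vertices + direction * (mask * amount)[:, None]

# Toy example: a unit square in the XY plane with +Z normals.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
norms = np.tile([0.0, 0, 1], (4, 1))
mask = np.array([1.0, 0, 1, 0])   # stands in for "Mask By Colors / Cavity"
pushed = inflate(verts, norms, mask, 0.1)
```

Masked vertices move 0.1 along +Z; unmasked ones stay put, which is the "enhance the detail evenly" effect described above.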
I've found this thread to be a great resource; it helped push my photogrammetry skills quite a lot, so thanks to everyone who contributed and keeps contributing! Here's a tiled tree bark texture I just finished (I might just work on the roughness a bit more).
Here's the 3D model with the texture generated from photos. My photos turned out quite blurry: I took them handheld and had to go for quite a high ISO due to bad lighting conditions, and I went for a high f-stop to get everything in focus. Not sure that was a good trade-off? I did some photo enhancing in Camera Raw to reduce noise a bit and crank the exposure. Hopefully future photos will be better with the nifty fifty I've ordered; these were taken with the 18-55 kit lens.
The PhotoScan scene and the SD setup I used to finalize the tiling, tweak albedo values, get the AO and roughness, and add some high-frequency normal details.
I'm still very new to the photography side of this, so any feedback is welcome :] Oh, and if anyone has tips/tricks for shooting foliage, I'd love to hear them. Is it possible to get good results with basic gear? (Camera, basic lens, tripod - that's it :P)
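On the high-f-stop vs. high-ISO trade-off mentioned above, exposure value math makes the trade explicit: each full stop of aperture you open up buys back a stop of ISO at the same shutter speed. A small sketch; the specific f-numbers, shutter speed, and ISO values are hypothetical, not from the post.

```python
from math import log2

def ev100(aperture, shutter, iso):
    """Exposure value normalized to ISO 100: EV = log2(N^2 / t) - log2(ISO/100)."""
    return log2(aperture**2 / shutter) - log2(iso / 100)

# Hypothetical kit-lens shot: f/11, 1/60 s, ISO 3200 (noisy).
noisy = ev100(11, 1/60, 3200)
# Same scene on a fast 50mm: f/2, 1/60 s, ISO 100 (clean).
clean = ev100(2, 1/60, 100)
# Opening up from f/11 to f/2 gains ~4.9 stops, which roughly cancels
# the 5 stops of ISO reduction: the two exposures differ by a fraction of a stop.
print(round(noisy - clean, 2))  # -0.08
```

In other words, a fast prime like a nifty fifty lets you trade depth of field for a much cleaner sensor setting.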
I'm in the process of upgrading my scanning setup. Doing some tests over the weekend with what I have on hand and there is a bit too much sensor noise (ISO 100, F11, 1/5s): http://i.imgur.com/lkXioiz.jpg
I am picking up a used 5D Mark II body, and I'm going to build some simple steel lighting stands to accommodate eight T8 bulbs. I'm also going to pick up a few cans of matte white spray paint. Moving from mid gray to white ought to get me about 3 stops faster shutter speed, doubling the number of lights should net me another stop, and the new camera body will have much lower noise.
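The stop arithmetic above works out like this. A sketch: the 1/5 s baseline comes from the test shot mentioned earlier, and the 4-stop total gain is the poster's own estimate (about 3 stops from white paint plus 1 from doubling the lights).

```python
def faster_shutter(base_shutter_s, stops_gained):
    """Each stop of extra light halves the required shutter time."""
    return base_shutter_s / (2 ** stops_gained)

base = 1 / 5        # the 1/5 s exposure from the earlier test shot
gained = 3 + 1      # ~3 stops from white paint + 1 stop from doubling the lights
print(faster_shutter(base, gained))  # 0.0125 s, i.e. 1/80 s
```

So the same scene would come down from 1/5 s to around 1/80 s before even counting the quieter sensor.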
Welded up a simple steel stand for 4 lamps on one side, I'll make a bigger bank of lights soon but I don't have any more appropriate scrap tubing. More light + white paint makes a big difference.
Hey guys, I was going to buy a new cellphone, hopefully cheaper around Black Friday/Cyber Monday, and I was wondering if I could buy something old and cheap with a good camera. I'm really intrigued to start exploring photogrammetry through those devices, like the Allego guys are doing here. How many MP are needed these days to get good-looking results? 12 MP? I was looking into the Nexus 4 (I wanted something Android and CyanogenMod compatible), but that camera is only 8 MP... Any thoughts?
Same images as used in the scan result above, this time with RealityCapture instead of Agisoft. This was much faster, and in many areas the result is better:
A scan I did last night. Many of the surfaces are shiny and untreated, which is causing problems. The cast iron block and the matte aluminum oil pan look really, really good. 240 photos processed to a 14-million-triangle mesh by RealityCapture in about an hour.
I'm going to get a junk engine and spray paint it for a better result.
I'm shooting on a tripod with a cable release. 240 photos take ~45 minutes to shoot; if I had a nicer tripod it would be faster. I'm shooting at 1/20, so I have to wait for the camera to settle down between each shot.
An A/B comparison. First, RealityCapture with a Nikon D800E and a Sigma 50mm Art at f/8, 141 photos. This took only about 20 minutes to process from photos into a mesh:
Second, a Canon T2i with a Canon 50mm f/1.4, 140 photos processed in Agisoft PhotoScan. This took hours to process:
Alec, I've been using Handplane Baker for a quick de-lighting pass on some test scanned assets. I load the exported OBJ from RealityCapture into both the high- and low-poly slots in Handplane, bake object-space normals and AO, and then run a Photoshop action on the texture. The AO gets inverted and screened out of the diffuse, and the blue channel of the normal map is used to select the down-facing surfaces of the texture and lighten them up. For stuff shot outside on a cloudy day, it automagically removes lighting from the diffuse texture. After that I can bake the de-lit diffuse texture to polypaint in ZBrush and go to town.
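The compositing math described above can be sketched in numpy. This is not Handplane's or the poster's actual Photoshop action: the channel convention (blue channel encodes "up-ness") and the `lift` amount are assumptions for illustration.

```python
import numpy as np

def delight(diffuse, ao, normal_b, lift=0.3):
    """Rough de-lighting in the spirit of the action described above.
    diffuse: (H, W, 3), ao and normal_b: (H, W), all floats in [0, 1].
    normal_b is the object-space normal map's blue channel."""
    inv_ao = 1.0 - ao
    # Screen blend: result = 1 - (1 - a) * (1 - b); brightens occluded areas.
    screened = 1.0 - (1.0 - diffuse) * (1.0 - inv_ao[..., None])
    # Down-facing surfaces (low blue channel, by assumption) get lifted toward white.
    down_mask = np.clip(1.0 - normal_b, 0.0, 1.0)
    return np.clip(screened + down_mask[..., None] * lift * (1.0 - screened), 0.0, 1.0)

# Example: mid-gray diffuse with AO = 0.5 everywhere screens up to 0.75.
out = delight(np.full((4, 4, 3), 0.5), np.full((4, 4), 0.5), np.ones((4, 4)))
```

The key point is that both steps only brighten: occluded and down-facing texels get pushed back toward their unshadowed value, never darkened.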
That's a cool idea. Can you post about it in more detail in the Handplane thread? I really want to add baking textures from high to low. I think Handplane Baker is really well suited to scanning workflows, since it's good at handling really big meshes.
I have worked with laser scan data in the past (not those handheld scanners), and the current iteration of my photo-based scanning system is much better. I also did some dimensional accuracy testing, and I am getting results in the range of 0.10015% to 0.065822% error.
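A percent-error figure like those above is just the measured-vs-reference deviation. A minimal sketch, with made-up caliper numbers:

```python
def percent_error(measured_mm, reference_mm):
    """Dimensional accuracy as percent error against a caliper reference."""
    return abs(measured_mm - reference_mm) / reference_mm * 100

# Hypothetical check: a feature that calipers read as 85.20 mm
# comes out of the scaled scan as 85.26 mm.
print(round(percent_error(85.26, 85.20), 4))  # ~0.0704
```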
I looked at the Artec scanners pretty closely but wasn't impressed by the quality or the tech specs.
I'll post more on my current scanning setup soon. I switched to flash photography using two medium-sized monolights with stripboxes. This lets me ditch the tripod and shoot handheld at f/11; I can bang out 200 photos in 10 minutes.
Not really photogrammetry, but here's the latest from my experiments in multi-lighting-angle texture capture.
I've been using Substance Designer to process and output the maps.
I have updated the tutorials mentioned above for use with RealityCapture and environment scans. The results are pretty good, and there is some free stuff available for those interested.
Back from my Central Java (Yogyakarta) trip, and the photogrammetry experiments continue.
... I almost fell off the cliff, and some kids laughed at me for taking excessive images of the ground and stuff... lol... (totally understandable). More renders comin'. My laptop can't handle very dense stuff, unfortunately, so I have to either divide things into chunks or do hand polishing later, as usual.
Statue practice...
Another seamless tiling displacement piece (rendered in Marmoset).
The process is completely automatic. No coded targets, manual editing or special equipment are needed. 3DF Zephyr is built on top of our proprietary, cutting-edge, reconstruction technology.
What's your recommended workflow for removing shadows from models, particularly more environmental ones? I always inverted the AO map on the diffuse, but that isn't 100%. Anything would help a bunch.
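One common alternative to inverting the AO and compositing it on is to divide the baked diffuse by the AO, which pushes occluded texels back toward their unshadowed albedo. A rough numpy sketch of that idea, not a complete de-lighter; the `floor` guard value is an assumption:

```python
import numpy as np

def remove_baked_ao(diffuse, ao, floor=0.05):
    """Brighten shadowed areas by dividing the baked diffuse (H, W, 3)
    by a baked AO map (H, W). `floor` guards against division by
    near-zero AO, which would blow out the darkest crevices."""
    ao_safe = np.maximum(ao, floor)
    return np.clip(diffuse / ao_safe[..., None], 0.0, 1.0)

# Example: texels at 0.25 that sit in 50% occlusion recover to 0.5.
albedo = remove_baked_ao(np.full((4, 4, 3), 0.25), np.full((4, 4), 0.5))
```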
I guess being able to light it in ambient conditions would be the best solution, but it needs to be a constant source. Outside is a little trickier: obviously in the UK there is a lot of cloud cover for parts of the day, which makes life a little easier, but in places like the US and Oz I am not so sure? Maybe early morning or late evening, when you're not getting direct light?
Yeah, I figured those might be the best times of day. I haven't really figured out, though, how to deal with the smaller shadows that appear in the scans. For example, if you scan a small rock pile, you have all these small, fine shadows within the scan. I suppose a workaround to remove some of them would be to clone stamp in Photoshop.
But how does this relate to adding detail? It has to do with masking the contour and pushing out the detail. For example, when you use Mask By Colors in ZBrush:
http://docs.pixologic.com/wp-content/uploads/2013/01/4R6-Masking.jpg
A quick photogrammetry study. PhotoScan, ZBrush, UVLayout, xNormal, and a quick Quixel pass!
https://www.youtube.com/watch?v=7_dDpdibtJ0
https://www.youtube.com/watch?v=4kRP-RwZ9KQ
https://www.youtube.com/watch?v=0FZaJ3-4A-g
https://80.lv/articles/photogrammetry-how-does-it-help/
8 photos with a T2i:
NASA-3D-Resources, 3D scans of Mars, enjoy
https://github.com/nasa/NASA-3D-Resources/tree/master/3D Models
http://eriksphotogrammetery.blogspot.co.nz
Comparing 7 photogrammetry systems. Which is the best one?
http://arc-team-open-research.blogspot.co.uk/2016/12/comparing-7-photogrammetry-systems.html
Are you shooting handheld? If not, how long does it take to shoot 240 photos on a tripod? Ughh :S
Cheap Harbor Freight calipers to compare scale accuracy against the real object. This is in mm.
And my setup is a bunch of Philips TL 950 lamps attached to steel tubes clamped to the ceiling:
Something like this:
http://www.3dsystems.com/shop/sense
If yes, what gave better results?
Discover loads of 3D digitizations of historic stuff here:
http://vr3d.vn/trienlam/3d-digitization-of-historic-monument-cultural-heritage-3d-scanning
Check out my thread for more detail: http://polycount.com/discussion/167507/alexs-texture-scans
https://www.youtube.com/watch?v=AFpcRBuXik8
https://www.jeffreyianwilson.com/course/3d-scanning-for-games-and-vfx-vol2/
Thanks!
Jeffrey Ian Wilson
Trying to speed up my process from start to finish.
Manmade garden-decoration cliff; photogrammetry at the medium dense cloud setting.
Still using the same raw data from the manmade rock above; this time the texture was modified and rotated.
No sculpt, just procedurally tiled in SD.
Welp, seems I will need to go outside more.
original
http://www.gdcvault.com/play/1024340/#.WMjyvuMEUxU.facebook
https://www.youtube.com/watch?v=QSqZyvKhw3U
model
From face scan to performance, this is a good article:
https://www.framestore.com/work/saam-vr
https://youtu.be/GHoS1ysjMG0?t=26