
Photogrammetry Thread

245 Replies

  • tynew
    tynew polycounter lvl 9
    Wow. That is insane. I hope to see scanning become more popular in game art; it really makes some pretty crazy photoreal assets. The Vanishing of Ethan Carter was pretty amazing, but I would love to see this with more hard-surface kind of stuff. I've wanted to get into this.

    I'm curious whether scanning is faster than making complex things from scratch. The cleanup seems daunting from what I've read.
  • dotswarm
    Fair point, the cleanup can be an issue, but you can get some amazing results if you get everything right. And like anything, it takes some practice.

    Point clouds cop a lot of flak, but generally I think they are used in the wrong way. People try to capture things that are really much simpler and would be more easily modelled traditionally.

    My exposure to point clouds is through photogrammetry, and I've seen that it requires better-quality photography and lots of photos, plus vast amounts of processing power, usually GPUs too (or two =]), so that you can over-model the object and cull the quality back. I can spend a few days using a few quite fast computers working through 2,000 photos or more.

    I use VisualSFM for my photogrammetry; it's amazing, and free.

    With VisualSFM and a software project that I'm a part of, DotSwarm, I made a point-cloud short film, "Sifted", that shows some of the analog-to-digital failures of photogrammetry.

    Unfortunately, I see people with no 3D modelling skills using it as a technique to replicate someone else's work, be it a little plastic injection-moulded toy or a real-life table, expecting it to look perfect.

    Maybe it will one day, but today it won't, so learn how to build, just as a carpenter or an engineer does, in your world of 3D.

    What they work great for is replicating nature and really intricate detail, where the world isn't shiny: things like tree trunks, leaves, topography, places with lots of detail and little occluded spaces, old buildings, houses with flaking paint. Really some pretty cool stuff!
  • littleclaude
    littleclaude quad damage
    This is Megascans
    [ame]www.youtube.com/watch?v=0EyHSzfDo6c[/ame]
  • MisterSande
    MisterSande polycounter lvl 8
    That jet engine looks so good! Thank you for sharing. It will provide excellent reference for my next study project.

    BTW, what about the Megascans video, @littleclaude?
  • littleclaude
    littleclaude quad damage
    Hi MisterSande, ah yes, my bad: it's scanned from real materials and not photogrammetry. Anyway, it looks cool. I used the same technique on a weedkiller advert many moons ago.

    [ame="http://www.youtube.com/watch?v=0DG51glKipU"]www.youtube.com/watch?v=0DG51glKipU[/ame]
  • a3sthesia
    a3sthesia polycounter lvl 10
    I'd hate to see this thread die, so I thought I'd add this to the list of interesting videos/techniques...

    [vv]60629702[/vv]

    In a nutshell, they're building the initial photoscan inside Agisoft Photoscan, then exporting the geo, importing the result into 3DSMax to be rebuilt, then baking/transferring the colour and details between the high-res geo and the low-res geo inside of ZBrush.

    To add to the process, you could rebuild a low-res (or at least cleaner) cage in your program of choice (Maya, 3DSMax, Topogun, etc.), then bake using whichever method you normally use for your HD-to-SD workflow (xNormal?).

    Regarding the textures, I've found that I've gotten better results importing the new geometry (with UVs) back into Photoscan, to have reprojected textures rebuilt using the new geo.

    So far, I've gotten *some* good results using Bitmap2Material to remove lighting and shadow from the reprojected textures, but nothing as phenomenal as what they demonstrated in the latest UnrealEngine Photogrammetry Demo.

    I'd love to see some good photoscan to PBR workflows!
  • a3sthesia
    a3sthesia polycounter lvl 10
    Quick build.

    Initial Settings
    22 Cameras into Photoscan
    Aligned - High Accuracy
    Dense Cloud - Low
    Mesh - High

    Imported Mesh into ZBrush
    Remesh with guides

    Import to Maya for UV unwrap

    Import back into Photoscan to reproject the textures
    export textures from Photoscan

    Normal map baked in XNormal

    Lighting is baked in to the texture.

    [SKETCHFAB]03360cf2690140b4b0e6a61984746327[/SKETCHFAB]

    Still working on the PBR part...
  • littleclaude
    littleclaude quad damage
    Nice cobbles, and thank you for the explanation.

    I'll just add this here
    [ame]https://www.youtube.com/watch?v=bdaepPjZKmM[/ame]
  • Scruples
    Scruples polycounter lvl 10
    It was previously mentioned to shoot at the lowest ISO for the least grain; that's not entirely true. Every camera has a native ISO, and shooting lower than that will not improve the image. If you're lowering the signal more than you need to get a proper exposure, you may be introducing noise, blur, or a wider aperture.

    Normally this isn't a big issue, since the native ISO is usually close to the ISO 100 mark, but cameras have come out, such as the Sony a7S, that have a high native ISO (3200 on the a7S).

    edit: the scan of the house is insane, wouldn't mind exploring it in VR.
  • alexbone3d
    Hello everyone! I thought I would share some of my WIP in Agisoft Photoscan, provide any info that may be of help, and hopefully learn from the rest of you.

    sYaikis.jpg

    hLTmAer.jpg

    HymbYoB.jpg
  • alexbone3d
    Two more items, where I'm just posting screens of them in Agisoft. I scanned a sandy area of a park to test as a ground texture for UE4, and I'm working on a second tree scan; the mesh is giving me headaches, so I may just manually retopo it.

    uToyafh.jpg

    tYCCK1R.jpg
  • littleclaude
    littleclaude quad damage
    I saw this nice job on "3D Scanning Users Group"

    Photogrammetry in Agisoft, 210 photos at 18MP
    TBhH1gb.jpg
    URiQW7y.jpg
    enLcF9t.jpg
    kYh0R3C.jpg
    ouN22mp.jpg
    LGPQw9Y.jpg
  • littleclaude
    littleclaude quad damage
    German Hardware Hacker Creates Kinect-Based Portable 3D Scanner

    swXwwz9.jpg

    3D printing brings Da Vinci’s anatomical drawings to life

    Collaborative work between WMG (Warwick Manufacturing Group) and Warwick Medical School created translucent 3D plastinations that use Leonardo Da Vinci’s original drawings of the human anatomy. The exhibition, which runs until 10 November 2013, sees the drawings, which are over 500 years old, side by side with plastinations, 3D scans and a translucent 3D heart.

    The heart was generated by transferring an MRI image into a STereoLithography (STL) file. The software then slices this STL file into layers that can be 3D printed by jetting a liquid polymer layer-by-layer, which is instantaneously solidified through the use of ultraviolet light.

    In the process of printing, a number of print heads provided the polymer for the heart, and some print heads created a gel-like support material, required to be able to print the complex heart structure, that would later be removed by water jets.

    http://www2.warwick.ac.uk/newsandevents/pressreleases/3d_printing_brings/
  • littleclaude
    littleclaude quad damage
    If someone can add "Lidar" to the title that would be great.

    http://www.laboiteverte.fr/des-espaces-numerises-au-lidar/

    Nice use for the medium

    [vv]124635236[/vv]

    [vv]120708083[/vv]

    [vv]119136236[/vv]

    scanlab-06.jpg

    scanlab-01-1080x720.jpg
  • claydough
    claydough polycounter lvl 10

    [ame]www.youtube.com/watch?v=yh3MSIEsh2w[/ame]

    Just got Skanect! I've been blown away by some of the results some are getting using Kinect for photogrammetry. Anyone else have Kinect examples?
    http://skanect.occipital.com/


  • MikeF
    MikeF polycounter lvl 20
    I'm thinking about picking up a camera to try a few things out in this area. What's everyone using nowadays? (Total noob when it comes to cameras.)

    Also, is there any way using standard cameras that you would be able to extract specular/gloss information for a pbr engine?
  • kodde
    kodde polycounter lvl 19
    I'm using a Canon EOS 70D. Didn't buy it specifically for photogrammetry though. Works just fine. I usually use my EF 40mm STM or EF-S 18-135mm STM.

    I haven't tried this yet, but with a polarizing filter you should be able to extract/separate glossy data to some extent. It will produce photos a few clicks darker, but if you do one texture-generating pass with the polarizing filter and one without, you should be able to produce a specular texture(?) by subtracting one from the other. This is going to be my next attempt :)
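The subtraction kodde describes is simple enough to sketch. This is only an illustration of the idea, not a tested pipeline: it assumes the two exposures are pixel-aligned (locked-off tripod), in linear colour space, and exposure-compensated, and `estimate_specular` is a made-up helper name:

```python
import numpy as np

def estimate_specular(unpolarized, cross_polarized):
    """Subtract the cross-polarized pass (diffuse only) from the
    unpolarized pass (diffuse + specular) to approximate the
    specular contribution. Inputs are float arrays in [0, 1]."""
    spec = np.asarray(unpolarized, dtype=np.float64) - np.asarray(cross_polarized, dtype=np.float64)
    # negative values are just misalignment/exposure noise, so clip them
    return np.clip(spec, 0.0, 1.0)

# toy 1x1 "photo" pair: a surface with a highlight
unpol = np.array([[[0.8, 0.8, 0.8]]])  # diffuse + specular
cross = np.array([[[0.5, 0.5, 0.5]]])  # diffuse only
spec = estimate_specular(unpol, cross)
print(spec)  # the recovered highlight, 0.3 per channel
```

In practice the hard part is the capture, not the math: any shift between the two passes turns edges into false "specular".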
  • EarthQuake
    The truth is that any of likely hundreds of different cameras will be suitable for this purpose. If you ask which ones are best, you'll simply get a list of cameras that people have used, more so than a "best for photogrammetry" recommendation.

    What the best camera is for you depends on a wide range of factors:
    1. Do you want it to be small and light for portability, or do you not mind carrying around a heavy camera/set of lenses?
    2. How much money do you have to spend?
    3. Do you want an optical viewfinder? Is an electronic viewfinder OK? Do you only need a rear LCD screen to compose?
    4. Do you need the flexibility of an interchangeable-lens camera with multiple lenses, or would you rather have a fixed-lens compact?
    5. Do you intend to use the camera for other purposes, like traditional photography?
    6. If so, what sort of photography are you interested in? Portraits, sports, landscapes, architecture, low light, etc.?
    7. Are you willing to use a tripod or not?

    Also, it's worth checking out these threads:
    http://www.polycount.com/forum/showthread.php?t=153940
    http://www.polycount.com/forum/showthread.php?t=130896
    http://www.polycount.com/forum/showthread.php?t=128289
    http://www.polycount.com/forum/showthread.php?t=154396

    As far as pulling gloss/spec without special gear: no, there isn't really any easy way to do this. You can use a polarizer on both your lens and your light source (you need a custom light source, typically off-camera flashes) to separate specular. Pulling gloss is a lot more complicated, and I'm not sure how one could do that short of building a specialized scanning rig.

    More about that stuff here:
    https://udn.epicgames.com/Three/TakingBetterPhotosForTextures.html
    http://gl.ict.usc.edu/Research/DigitalEmily/
    http://gl.ict.usc.edu/Research/FaceScanning/EGSR2007_SGI_low.pdf
  • MikeF
    Offline / Send Message
    MikeF polycounter lvl 20
    Hey guys, thanks for the help!
    I'm going to borrow a couple cameras today and do some reading up on those links you posted then go from there!

    EQ: I was reading last night about using a turntable/greenscreen setup for smaller objects but it seems like you had troubles doing something similar with your gnome?
    Was this just because there was no additional reference points besides the subject?
  • EarthQuake
    Yes, I think that was my issue. I was photographing the gnome on a completely flat, neutral background: great for photography, but a poor choice for photogrammetry. If I were to do it again, I think I could solve the problem by placing reference points. I've seen people use newspaper or other bits of paper with a lot of text/detail, or colored construction paper, etc.
  • Necrophob30
    Necrophob30 polycounter lvl 10
    MikeF wrote: »
    Hey guys, thanks for the help!
    I'm going to borrow a couple cameras today and do some reading up on those links you posted then go from there!

    EQ: I was reading last night about using a turntable/greenscreen setup for smaller objects but it seems like you had troubles doing something similar with your gnome?
    Was this just because there was no additional reference points besides the subject?

    If you use a turntable, you need to mask out your object. I use a light tent and a turntable: deer skull
  • a3sthesia
    a3sthesia polycounter lvl 10
    EarthQuake wrote: »
    Yes, I think that was my issue. I was photographing the gnome on a completely flat, neutral background: great for photography, but a poor choice for photogrammetry. If I were to do it again, I think I could solve the problem by placing reference points. I've seen people use newspaper or other bits of paper with a lot of text/detail, or colored construction paper, etc.

    The neutral background is just messing with Photoscan's "depth perception." The setup you had was actually pretty good.
    The counter to this is: after you've taken your series of photos of the object on its turntable (with a stationary camera), you take a final photo without the gnome.
    You can then feed this into Photoscan as a "background mask," and Photoscan can use that information to create an automask to help crop out your object (and give it a better chance of working out what's where in 3D space).

    These two forum responses from the Agisoft team explain it better:

    http://www.agisoft.com/forum/index.php?topic=1797.0

    http://www.agisoft.com/forum/index.php?topic=2174.0
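For anyone curious what that clean-plate shot buys you, the core idea is a per-pixel difference mask. This is only a rough sketch of the concept (the function name and threshold are invented for illustration; Photoscan's actual masking is certainly more robust):

```python
import numpy as np

def background_mask(photo, clean_plate, threshold=0.08):
    """Pixels that differ noticeably from the empty-turntable shot
    are assumed to belong to the object. Inputs are float RGB arrays
    in [0, 1]; the threshold is a per-shoot guess."""
    diff = np.abs(np.asarray(photo, float) - np.asarray(clean_plate, float))
    return diff.max(axis=-1) > threshold  # True where the object is

# one background pixel, one object pixel
photo = np.array([[[0.2, 0.2, 0.2], [0.9, 0.1, 0.1]]])
plate = np.array([[[0.2, 0.2, 0.2], [0.2, 0.2, 0.2]]])
print(background_mask(photo, plate))  # [[False  True]]
```

This also shows why a featureless background is a double-edged sword: it masks cleanly, but gives the solver nothing to track between frames, which is why the fixed reference points help.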
  • claydough
    claydough polycounter lvl 10
    speaking of turntables...

    Autodesk just posted a Memento presentation:

    [ame]https://www.youtube.com/watch?v=kjKJaTMZNJ8[/ame]
  • AlecMoody
    AlecMoody ngon master
    I picked up agisoft photoscan standard edition over the weekend and did some testing with it. So far it seems to have no clue what to do with anything dark and slightly shiny. All of the black vinyl is missing or distorted and the semi gloss metals aren't coming in either:

    http://i.imgur.com/mHiMzji.png

    Considering how much effort went into the image capture, the scan quality is not great. There are probably some workflow tricks I am missing that would improve the results.

    The pricing structure on Photoscan is a little ridiculous. The only feature in the pro edition I want is the tool to establish scale. I can do it manually in 3ds Max, but it will be more time-consuming. I am not paying over $3,000 for such a simple feature.

    I know baby powder is often used to dull shiny surfaces for laser scanning so I am going to try that on the problem areas. I also sprayed a dusting coat of primer on some control arms to see if that knocks the shine down enough to scan them better.
  • AlecMoody
    AlecMoody ngon master
    Result of using a dusting coat of primer on shiny metal:
    CaAWHNH.jpg

    The result is good enough to use for a detailed reference object and maybe to manually retopo over.
  • Musetatron
    Musetatron polycounter lvl 8
    Hi, I saw this thread and I'm really interested!
    I have a few questions to ask, if you don't mind.
    -Is this technique widely used in game art?
    -Is it practical for a small team of 3-4 artists?
    -How many weeks does it take to master this technique at production quality?
  • FourtyNights
    FourtyNights polycounter
    I've had this crazy and silly workflow/pipeline in my head for a while:

    1. Sculpt your character's anatomy (in T-pose) normally in ZBrush or any other 3D sculpting software of your choice.

    2. 3D print the full body of your (t-posed) character with a reasonable size.

    3. Get it dressed in clothes. Depending on the printed size, sew custom-sized clothes, or use real clothes if it's a life-sized 3D print (quite a rare opportunity to do that, though).

    4. 3D scan or use photogrammetry to get it back to ZBrush with clothes

    5. Clean up, resize, fine tune and detail it for the final sculpt, if needed.

    Of course there are multiple ways this process could fail, but I'd love to get clothes with cool wrinkles and folds. The size of the print matters, since the bigger it is, the better the drapery is. And you could get any kind of wrinkles and folds, adjusting them by hand.

    Like a "real-life Marvelous Designer". :D
  • EarthQuake
    If you were going to do that, wouldn't it make more sense to use a real person as your model? Then you could use real clothes as well.
  • JedTheKrampus
    JedTheKrampus polycounter lvl 8
    That would be a great idea as long as you didn't need to do an original clothing design. Good fabric's expensive and the bigger the clothes are the more sewing you have to do, so working in 1/4 or 1/8 scale for example could save a good bit of time. Of course you would have to be careful to make sure that the clothes don't look like they're miniatures... in the end it's probably just easier to use MD if you're doing original clothing designs.
  • AlecMoody
    AlecMoody ngon master
    The effort and/or expense required to get anything close to a production-ready high-poly model is so high that it doesn't make sense to attempt it unless you can't model and want to spend hours doing cleanup, or you have the budget for a dedicated multi-camera/lighting rig.
  • Ben Cloward
    Ben Cloward polycounter lvl 18
    With all the buzz around Star Wars Battlefront right now, you guys might enjoy this article:

    http://starwars.ea.com/starwars/battlefront/news/how-we-used-photogrammetry

    DICE uses photogrammetry extensively and our team learned a lot from the techniques they developed.
  • Popeye9
    Popeye9 polycounter lvl 15
    I am curious how people are handling the removal of lighting from the texture to use for an Albedo. I know that you can Desaturate-Invert-and Blend with Soft Light but not sure if that is the correct way to handle it. I have seen the presentations from Epic and others but haven't figured out how they are going about it.

    Any insight would be greatly appreciated.
  • tadpole3159
    tadpole3159 polycounter lvl 12
    To de-light, you have to capture a 360-degree HDRI at the time of model capture and use something like V-Ray to render a lightmap that exactly matches reality. Then, in Photoshop, blend the de-lighting lightmap and the photoscanned texture with the Divide blend mode. If done correctly, the lighting gets stripped out.
    It's blooming hard though. Getting your de-lighting lightmap to perfectly match reality is super hard; if it's off by a little bit, the small errors will literally glow once blended.
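The Divide step itself is just per-texel division. A minimal numpy sketch of what the Photoshop blend mode does, assuming both maps are in linear colour space and share the same UV layout (`delight` is a hypothetical helper, not a Photoshop API):

```python
import numpy as np

def delight(scanned_texture, rendered_lightmap, eps=1e-4):
    """Divide blend: albedo ~= lit texture / rendered lighting.
    eps guards against division blow-ups in near-black lightmap
    texels, which is exactly where the 'glowing' errors come from
    when the rendered lightmap doesn't quite match reality."""
    albedo = np.asarray(scanned_texture, float) / np.maximum(np.asarray(rendered_lightmap, float), eps)
    return np.clip(albedo, 0.0, 1.0)

tex = np.array([[0.4]])  # texel as photographed, lit at 50% intensity
lm = np.array([[0.5]])   # rendered lighting for the same texel
print(delight(tex, lm))  # recovers the underlying albedo, 0.8
```

The clip and eps make the failure mode visible: wherever the lightmap is darker than the real lighting, the quotient shoots past 1.0 and the texel "glows", just as described above.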
  • Joost
    Joost polycount sponsor
    I think it's feasible to do it if you have the budget (probably 2 camera bodies, a fisheye lens, a pano head) and the time, but otherwise shooting in diffuse lighting conditions will give you pretty good results.
  • tadpole3159
    tadpole3159 polycounter lvl 12
    Joost wrote: »
    I think it's feasible to do it if you have the budget (probably 2 camera bodies, a fisheye lens, a pano head) and the time, but otherwise shooting in diffuse lighting conditions will give you pretty good results.

    I did it on the cheap for about £100 off Amazon. I already had a tripod, and you can just swap your lens for a cheap fisheye; you definitely don't need another body.

    It just takes about 2 minutes out in the field to capture the HDRI, and the processing on the computer takes maybe an hour per asset. One quick lightmap and a Photoshop blend mode and you're done.
  • ghaztehschmexeh
    tadpole3159 wrote: »
    I did it on the cheap for about £100 off Amazon. I already had a tripod, and you can just swap your lens for a cheap fisheye; you definitely don't need another body.

    It just takes about 2 minutes out in the field to capture the HDRI, and the processing on the computer takes maybe an hour per asset. One quick lightmap and a Photoshop blend mode and you're done.

    Could you go into a bit more detail if you have the time? What equipment are you using? Are you creating a full panoramic HDRI or using a light probe ball?
  • tadpole3159
    tadpole3159 polycounter lvl 12
    I used this tutorial (cheers, EarthQuake!)
    https://www.marmoset.co/toolbag/learn/hdr-panos
    And these things
    [ame="http://www.amazon.co.uk/Q-45-12KG-Panoramic-Gimbal-Plate/dp/B00KWA68LI/ref=sr_1_5?s=photo&ie=UTF8&qid=1437422060&sr=1-5&keywords=panoramic+head"]Pro Q-45 12KG Load Panoramic Gimbal Head with 70mm QR: Amazon.co.uk: Camera & Photo[/ame]

    [ame="http://www.amazon.co.uk/Opteka-Professional-Fisheye-Digital-Cameras/dp/B001LZJB9Y/ref=sr_1_1?s=electronics&ie=UTF8&qid=1437419973&sr=1-1&keywords=fisheye+lens+canon"]Opteka HD² 0.20X Professional Super AF Fisheye Lens for: Amazon.co.uk: Electronics[/ame]

    I realise the tripod head's not right, but it's cheap and allows me to set the camera back far enough (and safely enough) to avoid about 90% of the parallax effect that comes from rotating the camera around its base.
    Then, once the HDRI is made, I line the image up in Max, render a de-lighting map, and Bob's your uncle :)

    Here's one I did earlier
    1hD4I0n.jpg
    eKuRrHZ.jpg
  • Popeye9
    Popeye9 polycounter lvl 15
    Thanks for the explanation this really helps a lot.
  • Joost
    Joost polycount sponsor
    Pretty impressive result. I guess you don't need a top-quality HDRI to de-light.

    You should probably straighten that horizon though.
  • ghaztehschmexeh
    Cheers for taking the time! This is very useful information.
  • littleclaude
    littleclaude quad damage
    The whole of London has been laser scanned.

    https://environmentagency.blog.gov.uk/2015/09/18/laser-surveys-light-up-open-data/

    It’s 11 terabytes if you download it all :) so just an average UE4 export size, ha-ha!

    Olympic-Park-point-cloud-LIDAR-alt.jpg

    Best,
    Neil
  • o2car
    o2car polycounter lvl 16
    Tip of the day:
    You can cheat by using the different channels of your world-space normal map to de-light/flatten your diffuse texture in Photoshop.
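o2car doesn't spell out the recipe, so the following is an assumption about what the trick means: decode the world-space normal map, treat N·L against a guessed light direction as a stand-in lightmap, and divide it out of the diffuse. Every name and value here is an illustrative guess, not his actual steps:

```python
import numpy as np

def flatten_with_normals(diffuse, ws_normals, light_dir, ambient=0.2):
    """Hypothetical world-space-normal de-lighting: use a crude
    Lambert-plus-ambient estimate built from the normal map as a
    lightmap and divide it out. 'light_dir' and 'ambient' are
    eyeballed values you would tune by hand in Photoshop."""
    n = np.asarray(ws_normals, float) * 2.0 - 1.0           # decode 0..1 -> -1..1
    ndotl = np.clip(np.einsum('...c,c->...', n, light_dir), 0.0, None)
    shading = ambient + (1.0 - ambient) * ndotl             # crude lighting estimate
    return np.clip(np.asarray(diffuse, float) / np.maximum(shading, 1e-4), 0.0, 1.0)

# a texel facing straight up, lit from directly above: shading = 1.0,
# so the diffuse value passes through unchanged
diff = np.array([[0.9]])
nrm = np.array([[[0.5, 0.5, 1.0]]])  # encoded +Z (up) normal
print(flatten_with_normals(diff, nrm, np.array([0.0, 0.0, 1.0])))  # [[0.9]]
```

Texels angled away from the guessed light get brightened, texels facing it get darkened, which is roughly what layering a normal-map channel in a divide/soft-light blend achieves.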
  • jacob thomas
    jacob thomas polycounter lvl 9
    o2car wrote: »
    Tip of the day:
    You can cheat by using the different channels of your world-space normal map to de-light/flatten your diffuse texture in Photoshop.

    Any more info on this? Sounds like it would be a neat little trick.

    Super awesome thread by the way, so much information. I'm hoping to have a piece to contribute very shortly.
  • Zante
    Has anyone done any more with this?

    I'm thinking of capturing some ruins piece by piece with a view to retopologizing the geometry. In small, controlled environments it's easier to set a scene up, but outside it's more difficult: grass blowing and moving, trees, etc.

    I'm still very new to this but I'm enjoying a period of fascination with photogrammetry. If anyone's working on a list of pointers, here or otherwise, it would be good to have a link .: ]

    statue_photogrammtetry_settimio_photoscan.jpg
  • xvampire
    xvampire polycounter lvl 14
    Hi guys, I'm really new to photogrammetry stuff.
    The latest result I have here is a combination of raw data and an "inflate cheat" in ZBrush.

    Canon 60D

    After the "inflate balloon" from the diffuse/AO.

    Some people in the Facebook comments said I don't have to take a lot of pictures to get accurate mesh data. However,
    I tried that before and got so many holes, so I learned from my mistakes and now take as many pictures as possible.

    Still, I couldn't get pore/micro detail without the "inflate cheat".

    I've heard it also depends on the object size. Is that true?

    ------------------ some progress

    First iteration

    Detail reconstructed after cutting tons of unnecessary areas (I had to calculate twice, and change the setting to high mesh detail before reconstructing it;
    I think Agisoft would do a better job).

    Earlier test with a cellphone camera (Samsung Galaxy S4)

    --------

  • ZombieDawgs
    ZombieDawgs polycounter


    I used Agisoft for this; I think I'm starting to get some nice results from it.