
WTF is "3D scanned TEXTURES" (evermotion) + little talk...

Jonathan85 polycounter lvl 9
Hello

I saw this a few weeks ago:


Evermotion Archmodels vol. 226, which consists of "3D-scanned textures" (or rather, models made in an app that use "3D-scanned textures"). Here are a few examples:

https://evermotion.org/files/model_images/Am226_cover_closeup3.jpg
https://evermotion.org/files/model_images/Am226_cover_closeup3_wire.jpg
https://evermotion.org/files/model_images/Am226_cover_closeup.jpg
https://evermotion.org/files/model_images/Am226_top_02.jpg
https://evermotion.org/files/model_images/AM226_014_Urtica.jpg
https://evermotion.org/files/model_images/AM226_014_wire.jpg

Those are low-poly models made in some DCC like 3ds Max using "3D-scanned textures". My question is: WTF are "3D-scanned textures"?
Did they take one leaf at a time, use standard photogrammetry, and shoot something like 15 photos of each leaf from different positions/angles...? Why do that when you're only going to use it for low poly, so you can't use the high-poly model and can't use the height map? Sure, you get a quite good-looking NORMAL MAP, but is it worth it? All these leaves are meant for low-poly usage...

I would just take ONE standard top-view photo of each leaf. Sure, I would NOT get a nice high-poly model, but so what if I can't use it anyway (it's meant for low poly), and sure, I would NOT get a nice displacement/height map - but again, that isn't a problem. The advantage of taking several photos of one leaf (i.e. using photogrammetry) would in this case be a better/more accurate/more realistic normal map...
But you could also generate a somewhat less realistic normal map from just one photo, using programs like Bitmap2Material etc...
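
Roughly speaking, single-photo tools like that fake it by treating brightness as height and taking gradients. A toy sketch of that idea (this is not Bitmap2Material's actual algorithm, and the file names are hypothetical):

```python
# Toy "normal map from a single photo" sketch, in the spirit of tools like
# Bitmap2Material. It treats brightness as fake height, which is exactly why
# the result is less accurate than a real multi-light scan.
import numpy as np
import imageio.v3 as iio

img = iio.imread("leaf_photo.png").astype(np.float64)  # hypothetical file name
gray = img.mean(axis=-1) if img.ndim == 3 else img     # collapse RGB to luminance
gray /= gray.max() + 1e-8

strength = 2.0                                          # bump intensity knob
dy, dx = np.gradient(gray)                              # brightness gradients
nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(gray)
length = np.sqrt(nx**2 + ny**2 + nz**2)
normals = np.stack([nx, ny, nz], axis=-1) / length[..., None]

nmap = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)   # pack into 0..255 RGB
iio.imwrite("leaf_normal_from_photo.png", nmap)
```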

I don't get it... Why did they do that? Or what exactly are "3D-scanned textures"? (I understand it as a standard 3D photogrammetry approach for 3D models.)

Thank you

Replies

  • pior grand marshal polycounter
    You are waaaaaay overthinking this, based only on whichever hype word they decided to use.

    It's just a bunch of leaves laid flat and photographed with a normalmap filter on top. All that applied to manually modeled plant meshes. There's even a sample file you can download to see for yourself.
  • Jonathan85 polycounter lvl 9
    pior said:
    You are waaaaaay overthinking this, based only on whichever hype word they decided to use.

    It's just a bunch of leaves laid flat and photographed with a normalmap filter on top. All that applied to manually modeled plant meshes. There's even a sample file you can download to see for yourself.

    Hi, thanks, but no, I don't think so. I was right: it's not really photogrammetry, it's called "photometric stereo".
  • pior grand marshal polycounter
    Sure, they can always create the nmap by comping a few lit shots together, and derive the other maps with some lens filter trickery. The end product is still very much a dead-simple, manually created asset that uses techniques that were already possible with cheap gear 10 years ago :)
  • Panupat polycounter lvl 17
    I guess it's just a buzzword to wow people not familiar with the vocabulary we're used to, lol. Same reason they can sell 5AR as something revolutionary when it's basically "texture your shit and render in passes".
  • m4dcow interpolator
    I've been messing with this sort of stuff for a while; it is most likely photometric stereo.

    Depending on what maps they offer, it is essentially doing the same stuff as Megascans (when it comes to plants anyway).
    Any height/depth information is probably just derived from the normal map that is created from the multi-angle lighting shots; roughness/specular can be derived by using cross and parallel (polarized) lighting, and for translucency the subject is lit by a backlight.

    Substance even has a Node that will generate normals from a bunch of images with varying lighting angles, MultiAngleToNormal or something along those lines.

    There is also a similar technique called RTI (Reflectance Transformation Imaging), which takes into account arbitrary angles and elevations of lighting; the downside is that you need to capture a spherical object in the frame to determine those angles.

    The time and effort to use photogrammetry on assets like these would be overkill and you ultimately get better results (except for depth/height) with photometric stereo.

    Also, most of this stuff can be and is automated for photometric stereo: trigger the different lights and the camera shutter, even rotate the polarizer, with an Arduino, and then run all your photos through a well-crafted Substance graph to output everything.

    And as mentioned above, the actual modeling is all old school techniques that have been used for years.
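
    For anyone curious about the math, here is a minimal sketch of the "normals from the multi-angle lighting shots" step, assuming a simple Lambertian surface and four shots with known light directions (the file names and light vectors below are hypothetical); a real scanner pipeline does a lot more calibration than this:

```python
# Minimal Lambertian photometric stereo sketch (illustrative only).
# Assumes four grayscale shots of the same flat-laid leaf, each lit from a
# different known direction; file names and light directions are hypothetical.
import numpy as np
import imageio.v3 as iio

files = ["leaf_top.png", "leaf_right.png", "leaf_bottom.png", "leaf_left.png"]
# Light directions (x, y, z), one per shot, in the same order as `files`.
L = np.array([
    [ 0.0,  0.7, 0.7],
    [ 0.7,  0.0, 0.7],
    [ 0.0, -0.7, 0.7],
    [-0.7,  0.0, 0.7],
])
L /= np.linalg.norm(L, axis=1, keepdims=True)       # normalize to unit vectors

# Load the shots and stack intensities into a (num_lights, num_pixels) matrix.
imgs = []
for f in files:
    im = iio.imread(f).astype(np.float64)
    imgs.append(im.mean(axis=-1) if im.ndim == 3 else im)
h, w = imgs[0].shape
I = np.stack([im.ravel() for im in imgs])           # shape (4, h*w)

# Lambertian model: I = L @ (albedo * normal). Solve per pixel by least squares.
G, *_ = np.linalg.lstsq(L, I, rcond=None)           # shape (3, h*w)
albedo = np.linalg.norm(G, axis=0) + 1e-8
normals = (G / albedo).T.reshape(h, w, 3)           # unit normal per pixel

# Pack the results into textures (0..255).
iio.imwrite("leaf_normal.png", ((normals * 0.5 + 0.5) * 255).astype(np.uint8))
iio.imwrite("leaf_albedo.png",
            (albedo.reshape(h, w) / albedo.max() * 255).astype(np.uint8))
```

    The roughness, translucency and height passes would come from the extra polarized/backlit shots and from integrating this normal map; the sketch only covers albedo and normals.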

  • oglu polycount lvl 666
    There are several texture and material scanning systems out there: Quixel Megascans, Chaos Group has one, X-Rite another one, and HP is working on a system for home use.

    https://youtu.be/8alYZgkwClM

    https://www.xrite.com/categories/appearance/tac7
  • gnoop sublime tool
    There is also no problem whatsoever with "scanning" separate leaves with regular photogrammetry too. Do it with close shots and around 10-20 million polys per leaf (maybe 5 minutes in Reality Capture) and it will reveal details down to the tiny leaf veins. The only things that matter are proper camera focus and DOF, and not creating any airflow around the subject.
  • Jonathan85 polycounter lvl 9
    gnoop said:
    There is also no problem whatsoever with "scanning" separate leaves with regular photogrammetry too. Do it with close shots and around 10-20 million polys per leaf (maybe 5 minutes in Reality Capture) and it will reveal details down to the tiny leaf veins. The only things that matter are proper camera focus and DOF, and not creating any airflow around the subject.

    Thanks. How many pictures do you think you would need to take, MINIMUM, to replicate the same results as with this photometric stereo?

    Dabarti (dabarti.com or something) said he tested photogrammetry for this purpose, as you describe, and it was INFERIOR to photometric scanning...

    Are you sure you can replicate this with "simple" photogrammetry? Wouldn't you need something like a full-frame camera to capture really every detail...? And how many photos...?
  • Jonathan85 polycounter lvl 9
    m4dcow said:
    I've been messing with this sort of stuff for a while; it is most likely photometric stereo.

    Depending on what maps they offer, it is essentially doing the same stuff as Megascans (when it comes to plants anyway).
    Any height/depth information is probably just derived from the normal map that is created from the multi-angle lighting shots; roughness/specular can be derived by using cross and parallel (polarized) lighting, and for translucency the subject is lit by a backlight.

    Substance even has a Node that will generate normals from a bunch of images with varying lighting angles, MultiAngleToNormal or something along those lines.

    There is also a similar technique called RTI (Reflectance Transformation Imaging), which takes into account arbitrary angles and elevations of lighting; the downside is that you need to capture a spherical object in the frame to determine those angles.

    The time and effort to use photogrammetry on assets like these would be overkill and you ultimately get better results (except for depth/height) with photometric stereo.

    Also, most of this stuff can be and is automated for photometric stereo: trigger the different lights and the camera shutter, even rotate the polarizer, with an Arduino, and then run all your photos through a well-crafted Substance graph to output everything.

    And as mentioned above, the actual modeling is all old school techniques that have been used for years.

    (Thanks for the useful and informative post.)

    You wrote:

    "The time and effort to use photogrammetry on assets like these would be overkill and you ultimately get better results (except for depth/height) with photometric stereo."

    Hm... you think that (for leaves and similar surfaces) you get INFERIOR results for the depth/height map using photometric stereo (or the RTI method) in comparison with photogrammetry...? I would have thought it should be superior... If not, why not use photogrammetry instead? Is photometric stereo superior (again, talking only about things like leaves etc.) in albedo and normal maps, but inferior in height...? I would think the superior normal map would provide enough info to extract a superior height map (compared to photogrammetry)...?

    (thanks)
  • m4dcow interpolator
    Jonathan85 said:
    (Thanks for the useful and informative post.)

    You wrote:

    "The time and effort to use photogrammetry on assets like these would be overkill and you ultimately get better results (except for depth/height) with photometric stereo."

    Hm... you think that (for leaves and similar surfaces) you get INFERIOR results for the depth/height map using photometric stereo (or the RTI method) in comparison with photogrammetry...? I would have thought it should be superior... If not, why not use photogrammetry instead? Is photometric stereo superior (again, talking only about things like leaves etc.) in albedo and normal maps, but inferior in height...? I would think the superior normal map would provide enough info to extract a superior height map (compared to photogrammetry)...?

    (thanks)
    What I mean by that is that with photometric stereo the height is generated from the normal map, so it isn't accurate by any means (rough integration sketch at the end of this post). Some scanners use structured light to get basic but accurate height information and combine that with the normal map from photometric stereo to get the smaller details.

    The same technique could theoretically be used with photogrammetry, combining its real-world depth/height values with the finer, higher-frequency details that can be captured and derived from the normal map produced by photometric stereo.

    Basically, if you are okay with a not-so-accurate height map, photometric stereo is fantastic. Would I recommend using photogrammetry to capture relatively flat things like leaves or fabric just because of the accuracy limitations of photometric stereo depth? HELL NO; anyone who says you can get comparable results with comparable effort is crazy.
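
    To illustrate why that derived height is only approximate: a rough Frankot-Chellappa-style integration sketch, assuming you already have an (h, w, 3) array of unit normals (e.g. from a photometric stereo solve). The result is relative height with arbitrary scale, which is exactly the accuracy limitation above:

```python
# Rough normals-to-height integration (Frankot-Chellappa style), illustrative only.
# `normals` is an (h, w, 3) array of unit surface normals; the recovered height
# is relative, with an arbitrary offset and scale.
import numpy as np

def height_from_normals(normals):
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.where(np.abs(nz) < 1e-3, 1e-3, nz)      # avoid division by zero
    p = -nx / nz                                    # estimated dz/dx
    q = -ny / nz                                    # estimated dz/dy

    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)

    # Integrate the gradient field in the frequency domain.
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                               # DC term is a free offset
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0
    z = np.real(np.fft.ifft2(Z))
    return z - z.min()                              # relative height map
```

    That surface is fine for small vein-level detail, but it has no real-world scale, which is why scanners combine it with structured light or photogrammetry depth.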
  • gnoop sublime tool
    This is a 5 million poly model, simplified from the original 19 million from Reality Capture. 17 photos of old fallen leaves.

    It's a Sigma camera, 20 Mpix, with its fixed, non-interchangeable 48 mm (equivalent) lens. Shot handheld from around 30 cm distance. With a proper macro lens, shake reduction, and lots of light it would probably be a lot better.
  • m4dcow interpolator
    gnoop said:
    This is a 5 million poly model, simplified from the original 19 million from Reality Capture. 17 photos of old fallen leaves.

    It's a Sigma camera, 20 Mpix, with its fixed, non-interchangeable 48 mm (equivalent) lens. Shot handheld from around 30 cm distance. With a proper macro lens, shake reduction, and lots of light it would probably be a lot better.

    1:1 pixel crop (open the image in a new tab or save it to see), 4-direction photometric stereo + 1 frontally lit photo. Shot on a Canon 7D (18 Mpix) with a not-so-sharp Tamron zoom lens.

    Your photogrammetry looks fine, albeit a bit noisy, but it doesn't capture the subtle normal details that come from photometric stereo, even with only 4 directions. With the setup I used for this I can also rotate the polarizer 90 degrees and get another set of pictures to derive a decent gloss/roughness map; you can't do that easily with a photogrammetry approach.
  • gnoop sublime tool
    Thanks m4dcow. A cool example.
