
Photogrammetry - Texture baking - was I doing it all WRONG :-)?

Jonathan85
Hello
Photogrammetry (I use Agisoft PhotoScan, but this is program agnostic): you get a mesh of a few million polygons, export it, retopologize it (Max, ZBrush, whatever) to create a LOW poly version, and then bake the original mesh down to the (subdivided a couple of times) low poly. That gives you Normal, Displacement, etc. maps that you use on the low poly mesh to get the original mesh's complexity back when rendering. Everything is clear so far.
But my question is: what mesh do you use to bake the Diffuse (Color) map from in your photogrammetry software (in my case Agisoft PhotoScan; it has a new name now, I think)? I always import the HIGH poly mesh made from the low poly (the low poly subdivided, with the detail reprojected back from the original few-million-polygon mesh) back into Agisoft for the texture (diffuse/color) baking from the photographs. The reason is that when I import only the low poly mesh, the final diffuse texture usually comes out "blurry" and doesn't look as good. The diffuse/color texture is later usable on both the low poly mesh and the high poly mesh (created from the low poly by subdividing and applying the displacement map).

The problem is that this color map baked from the high poly model is sometimes slightly distorted when I use it on the low poly mesh directly.
So in those cases I use the low poly mesh in Agisoft for the texture baking instead, but only in a FEW cases.

My question is: which mesh do you use for color texture baking (Generate Texture) in Agisoft, the LOW poly or the HIGH poly (created from the low poly)?

Was I doing it all WRONG all these years :-D?
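
For concreteness, here is roughly what that re-bake step looks like scripted; a minimal, untested sketch against Agisoft Metashape's Python API (PhotoScan's current name). Keyword argument names vary between Metashape versions, and all file paths here are placeholders:

```python
# Untested sketch of the re-bake step, scripted against Agisoft Metashape's
# Python API (the renamed PhotoScan). Exact keyword names vary between
# versions, and the file paths are placeholders.
import Metashape

doc = Metashape.Document()
doc.open("scan_project.psx")          # project with the aligned photos
chunk = doc.chunk

# Swap in whichever mesh you want to bake onto: the low poly, or the high
# poly rebuilt from it (subdivided + reprojected). Its own UVs are kept.
chunk.importModel(path="highpoly_from_lowpoly.obj")

# Bake the diffuse/color texture from the source photographs.
chunk.buildTexture(texture_size=8192)

# Write the textured mesh (and its diffuse map) back out.
chunk.exportModel(path="baked/textured.obj")
doc.save()
```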

Replies

  • Eric Chadwick
    When you generate the high poly in Agisoft, doesn't it also generate a diffuse? 
  • Dash-POWER
    I think it should output diffuse maps as well; otherwise you won't get any surface color information. (You can bake the diffuse to vertex colors, but quality goes down.) I sometimes use Meshroom. It's free and open source, and the output quality is okay-ish. It outputs unwrapped diffuse maps for the high poly.

    My basic workflow is (excluding delighting):
    [Images] > [Meshroom] > [ZBrush (ZRemeshing high to low and unwrapping)] > [Marmoset (baking from high)] > [Substance Designer (generating a detail/micro normal map from the diffuse/albedo, generating roughness, and adjusting maps)]

    If you have a lot of shadows from sunlight, you can use Agisoft De-Lighter, which is free.
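
    On the vertex-color point: the quality drop is just sampling density. Each vertex stores exactly one color, so the effective "texture resolution" is the vertex count. A rough illustration (Python with trimesh and Pillow; hypothetical file names, and it assumes the mesh carries UVs):

```python
# Rough illustration of baking a diffuse texture to vertex colors:
# each vertex gets exactly one sample of the texture at its UV, so color
# resolution is capped by vertex density. Uses numpy + Pillow + trimesh.
import numpy as np
import trimesh
from PIL import Image

mesh = trimesh.load("highpoly.obj", process=False)   # hypothetical paths
tex = np.asarray(Image.open("diffuse_8k.png").convert("RGB"))

uv = np.asarray(mesh.visual.uv)                      # (n_verts, 2), 0..1
h, w = tex.shape[:2]

# UV origin is bottom-left; image rows run top-down, hence the flip.
px = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
py = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)

vertex_colors = tex[py, px]                          # one texel per vertex
mesh.visual = trimesh.visual.ColorVisuals(mesh, vertex_colors=vertex_colors)
mesh.export("highpoly_vertexcolor.ply")
```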
  • Jonathan85
    I don't generate a diffuse for the original mesh in Agisoft... I export only the original high poly, several-million-polygon mesh. I import it into ZBrush, retopologize (+ retopo in 3ds Max), and bake the displacement and normal maps in ZBrush. Then I take the low poly mesh, subdivide it a couple of times, and reproject the detail from the original few-million-polygon Agisoft mesh. And then I take this high poly mesh (generated from the low poly, with the mesh detail reprojected back) into PhotoScan to generate the diffuse texture.
    Am I doing it wrong :-)?


    Dash-POWER - what's the advantage of baking the displacement and normal maps in Marmoset instead of ZBrush...? (Better compatibility with some game engines...?)

    Your step of generating a detail normal map in Designer is new to me. I think I may have seen the results, but never the process. Is there any more info or a tutorial on this...? (Or is it your own secret invented pipeline :-)?)

    Thank you

  • Obscura
    Jonathan85 said:
    what's the advantage of baking the displacement and normal maps in Marmoset instead of ZBrush...? (Better compatibility with some game engines...?)
    ZBrush uses a "UV matching" method, while Marmoset, Substance Painter, xNormal, and other bakers use ray tracing. This means that the smoothing of the low poly is taken into account when you bake using ray tracing, while the UV matching method ignores the target mesh's smoothing and therefore its shading. This can result in visible UV seams and gradients on a lower poly (game-ish density) mesh. Ray tracing can produce seam-free normals and can perfectly reproduce the shading of the high poly if it's done right. This may not be an issue when the target mesh is high enough poly, though. I know that some cinematic character workflows use ZBrush normal baking, and those bakes are in fact really hard to properly convert to real-time usage; it usually requires re-baking the normals with ray tracing. There is one more thing: real-time renderers typically use "MikkTSpace" tangent space, which ZBrush's baking doesn't satisfy, so complex things baked in ZBrush will definitely show seams and artifacts in those renderers.
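
    To make the difference concrete, here is a heavily simplified sketch of what the ray-tracing bakers do (Python with trimesh; real bakers rasterize UV texels, use interpolated vertex normals and a proper cage, and output MikkTSpace tangent-space normals; file names are placeholders):

```python
# Heavily simplified ray-traced normal bake using trimesh. For each sample
# point on the low poly, cast a ray and read the high-poly normal at the
# hit. This "look along the low poly's normals" step is what ZBrush's
# UV-matching bake skips, which is why its maps ignore the target's shading.
import numpy as np
import trimesh

low = trimesh.load("lowpoly.obj", process=False)     # placeholder paths
high = trimesh.load("highpoly.obj", process=False)

# Sample points on the low poly (a real baker walks the texels of the UV map).
points, face_idx = trimesh.sample.sample_surface(low, 10000)

# Ray directions: flat face normals here for brevity. A real baker uses the
# smoothed, barycentrically interpolated vertex normals, which is exactly
# how the low poly's shading ends up accounted for in the map.
dirs = low.face_normals[face_idx]
origins = points - dirs * 0.01           # small offset, a crude cage stand-in

# Cast against the high poly and read its surface normal at each hit.
hits, ray_idx, tri_idx = high.ray.intersects_location(
    ray_origins=origins, ray_directions=dirs)
baked = high.face_normals[tri_idx]       # object space; a real baker converts
                                         # this to (Mikk) tangent space per texel
```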
  • Dash-POWER
    Jonathan85 said:
    Your step of generating a detail normal map in Designer is new to me. I think I may have seen the results, but never the process. Is there any more info or a tutorial on this...? (Or is it your own secret invented pipeline :-)?)
    I'm using the Refine Noise node from Substance Share to generate a fake height from the diffuse, and then a normal map node to generate the normals.


    It's not 100% precise, but it's OK.
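
    The normal-from-height half of that is just image gradients. A rough numpy/Pillow stand-in for the Designer graph (not the actual Refine Noise node; the plain-luminance height trick and the file names are assumptions):

```python
# Stand-in for the Designer graph: fake a height map from diffuse luminance,
# then turn the height gradients into a tangent-space normal map (what a
# normal map node does). Refine Noise is approximated here by raw luminance.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("diffuse.png").convert("RGB"), dtype=np.float32)
height = (rgb @ np.array([0.299, 0.587, 0.114])) / 255.0  # luminance as height

strength = 4.0                        # like the node's intensity slider
dy, dx = np.gradient(height)          # per-pixel height slopes
normal = np.dstack([-dx * strength,   # red: slope in X
                    dy * strength,    # green: slope in Y (flip for DirectX)
                    np.ones_like(height)])
normal /= np.linalg.norm(normal, axis=2, keepdims=True)

# Pack -1..1 into 0..255, the usual normal map encoding.
Image.fromarray(((normal * 0.5 + 0.5) * 255).astype(np.uint8)).save(
    "detail_normal.png")
```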
  • Jonathan85
    Obscura said:
    ZBrush uses a "UV matching" method, while Marmoset, Substance Painter, xNormal, and other bakers use ray tracing. [...]
    Thank you. Do the seams apply only to normal maps, or can baking in ZBrush also result in visible seams in displacement maps (mainly 32-bit EXRs, but 16-bit TIFFs too)?
  • Jonathan85
    Dash-POWER said:
    I'm using the Refine Noise node from Substance Share to generate a fake height from the diffuse, and then a normal map node to generate the normals. [...]


    Thank you, interesting... So in the lower normal map image, the one titled "Bitmap" that feeds into the "Normal Combine" node, that's the normal map baked from the high poly mesh in Marmoset (in your workflow), and you combine it with the normal map made from the albedo/diffuse. This then gives much better detail in the final normal map, correct?

    Is this a standard workflow or your own invention...? (Probably standard :-)?)

    Thanks
  • Dash-POWER
    Correct.
    Jonathan85 said:
    Is this a standard workflow or your own invention...? (Probably standard :-)?)
    It's always better to have the data in the high poly instead of deriving it from the diffuse, but it would take ages for photogrammetry software to process details that small. It's a kind of compromise. I wouldn't call it standard; it's a lazy improvement.
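
    For reference, that kind of "Normal Combine" is usually whiteout blending (or the more exact Reoriented Normal Mapping). A minimal numpy sketch of the whiteout blend, with hypothetical file names:

```python
# What a "Normal Combine" node roughly does: whiteout blending of a base
# (baked) normal map with a detail normal map. Reoriented Normal Mapping is
# the more exact variant; this is the common cheap one.
import numpy as np
from PIL import Image

def load_normal(path):
    """Decode an 8-bit normal map back to -1..1 vectors."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    return img / 127.5 - 1.0

base = load_normal("baked_from_highpoly.png")    # e.g. the Marmoset bake
detail = load_normal("detail_from_diffuse.png")  # e.g. the Designer output

# Whiteout blend: add the XY perturbations, multiply the Z components.
combined = np.dstack([base[..., 0] + detail[..., 0],
                      base[..., 1] + detail[..., 1],
                      base[..., 2] * detail[..., 2]])
combined /= np.linalg.norm(combined, axis=2, keepdims=True)

out = ((combined * 0.5 + 0.5) * 255).astype(np.uint8)
Image.fromarray(out).save("combined_normal.png")
```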