I'm watching the YouTube video about photogrammetry from the Star Wars Battlefront team, and they mentioned this issue: reconciling the look of assets derived from photogrammetry or scans with assets created the old-fashioned way. I'm still watching the video, but they aren't going into much detail on any subject, so I don't expect more on this one.
And it's an interesting question, especially regarding assets that aren't easily created by photogrammetry for indie teams limited to single-camera setups: plants, animals, people, hair and fur, worn clothing, etc. Does anyone happen to have a resource addressing this in some detail, or personal experience to relay? It's a hard thing to run searches for because there isn't a term for it, as far as I'm aware anyway.
Replies
I usually run into the same kind of problem. Anything procedurally generated that looks super cool at first glance instantly reads as false and artificial once it's put next to a hi-res scanned surface (billions of polys capturing tiny height variations in the initial scan). So I basically ended up using Substance Designer to add only a few procedural imperfections and scattered details over a depth-based mix of a few scanned base layers in every texture.
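For anyone curious what that kind of depth mix means in practice, here's a minimal numpy sketch of the idea. To be clear, this is not what Substance Designer does internally; the function names and parameters are my own, and it just illustrates the two steps: blend scan layers by comparing their height maps, then stamp a handful of procedural imperfections on top.

```python
import numpy as np

def height_blend(albedo_a, height_a, albedo_b, height_b, contrast=0.2):
    """Blend two texture layers by comparing their height maps.

    Wherever layer B's height exceeds layer A's, B shows through;
    'contrast' widens the transition band so the seam isn't a hard edge.
    All maps are float arrays in [0, 1]; albedos are HxWx3, heights HxW.
    """
    diff = height_b - height_a
    # Soft threshold around zero: mask 0 -> keep A, mask 1 -> keep B.
    mask = np.clip(diff / max(contrast, 1e-6) * 0.5 + 0.5, 0.0, 1.0)
    albedo = albedo_a * (1.0 - mask[..., None]) + albedo_b * mask[..., None]
    height = np.maximum(height_a, height_b)
    return albedo, height

def scatter_imperfections(height, count=40, radius=6.0, depth=0.05, seed=0):
    """Stamp a few shallow procedural dents into a height map.

    Keeping 'count' small is the whole point: the scanned data carries
    the realism, the procedural pass only breaks up the repetition.
    """
    rng = np.random.default_rng(seed)
    h, w = height.shape
    yy, xx = np.mgrid[0:h, 0:w]
    out = height.copy()
    for _ in range(count):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        # Smooth radial falloff so each dent fades out instead of clipping.
        out -= depth * np.exp(-d2 / (2.0 * radius ** 2))
    return np.clip(out, 0.0, 1.0)

# Usage: mix two scanned layers, then add a light procedural pass.
# albedo, height = height_blend(scan1_rgb, scan1_h, scan2_rgb, scan2_h)
# height = scatter_imperfections(height, count=30, depth=0.03)
```

The design point is in the ratio: the scans provide almost all of the surface detail, and the procedural layer stays subtle enough that it never has to hold up on its own next to scanned data.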
Too bad SD isn't that great for this approach, and too bad the whole photogrammetry process is a big pain in the a.. itself, even with Reality Capture. I dream of software that could bypass all the retopology and re-baking and output straight to a pair of height and color textures. And those Quixel Megascans textures cover too small an area (in meters) and are too repetitive to really be useful.