Polycount is one of the most hardcore CG modeling / rendering resources online. Because of that, I'm really hoping someone can advise on something I've been struggling with for months.
My end goal is a game-ready composite asset in Unreal, made up of around 100 medium-sized models (a dinosaur skeleton from scan data).
I have high-res bone scans, and have successfully reduced the poly count of a single bone in Blender (from 1 million to 5k) > baked normal / albedo / roughness in Substance Painter from the high-res scan to the low poly > exported 2K textures.
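In case a concrete example helps, this is roughly how I'm doing the decimation step as a script in Blender (just a sketch of my setup; the 5k target and the assumption that the active object is the high-res bone are mine):

import bpy

TARGET_TRIS = 5000  # rough low-poly target per bone (my choice, not a rule)

obj = bpy.context.active_object        # the high-res bone scan
current_tris = len(obj.data.polygons)  # scan data is already triangulated

# add a Decimate (collapse) modifier sized to hit the target, then apply it
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = min(1.0, TARGET_TRIS / current_tris)
bpy.ops.object.modifier_apply(modifier=mod.name)

print(obj.name, "is now", len(obj.data.polygons), "polys")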
That looks amazing for a single bone in Unreal (a 5,000-poly bone with 2K normal / albedo / roughness maps baked from the 1 million poly scan).
Now here's the problem: I don't think I can have 100 5k models, each with its own 2K texture set, in Unreal. From what I've read, that might be too many draw calls. The poly count doesn't seem to be a problem (500k total), but the textures are likely too much.
So, do I then import everything back into Blender, connect all the textures up to the low-res models, and re-bake / combine the 2K bone texture sets in groups of 5 or 10, so that every 10 models share one set of 2K maps? That's going to be a super labour-intensive process, and one I'm not even sure will work. If anyone has any suggestions for me I'd be really grateful.
Replies
Window > Developer Tools > Merge Actors.
Assuming that your individual UVs are not too wasteful to begin with, it will pack all the source textures into atlases and will create a single static mesh with a single mat ID, filled by a single lightweight material combining everything (and that also includes any color tinting done inside the original materials). Incredibly useful stuff.
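If you'd rather drive it from a script than the menu, something along these lines should work from the editor's Python console. I'm going from memory on the exact class and property names though, so treat it as a sketch and check the Python API reference for your engine version:

import unreal

# grab the bone actors currently selected in the level
actors = unreal.EditorLevelLibrary.get_selected_level_actors()

# merge_materials is the part that bakes the individual bone materials
# into shared atlases, same as ticking "Merge Materials" in the dialog
settings = unreal.MeshMergingSettings()
settings.merge_materials = True

options = unreal.MergeStaticMeshActorsOptions()
options.base_package_name = "/Game/Skeleton/SM_Skeleton_Merged"  # placeholder path
options.destroy_source_actors = False
options.mesh_merging_settings = settings

merged = unreal.EditorLevelLibrary.merge_static_mesh_actors(actors, options)
print("Merged actor:", merged.get_name() if merged else "merge failed")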
http://wiki.polycount.com/wiki/Polygon_Count#Typical_Triangle_Counts
Decide your target resolution, then work backwards from there. 200k tris with 1 material and a 4096x4096 texture set? 30k tris with 1 material and 512x512 textures?
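The back-of-envelope maths is easy enough to script if you want to play with the numbers (the figures below are only example targets, not recommendations):

# example budget check - plug in your own targets
bone_count = 130
tris_per_bone = 5_000
print(f"{bone_count * tris_per_bone:,} tris total")  # 650,000 tris

# texel budget: one shared 4096 atlas vs one 1024 map per bone
atlas_pixels = 4096 * 4096
per_bone_pixels = 1024 * 1024
bones_per_atlas = 10
print(atlas_pixels / bones_per_atlas / per_bone_pixels)  # ~1.6x the texels per bone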
The Smithsonian released a bunch of scanned meshes as open source models recently. You could examine those to see how they're assembled. Triceratops scan:
http://www.3d.si.edu/object/3d/triceratops-horridus-marsh-1889:d8c623be-4ebc-11ea-b77f-2e728ce88125
If you download the low resolution GLB file, you can load that in https://sandbox.babylonjs.com/ and see how it's built, and just how much detail you get for a 150k triangle model with 4096 textures.
Plenty of resolution for a showcase asset, like if a player needs to hide inside the skeleton to avoid being eaten by a zombie horde, or whatever.
A 2K texture uses around 8 MB of video memory if it's not a normal map; for a normal map, it's double. So even if you don't pack the grayscale textures, and we count 4 textures + a normal, that's 48 MB for one texture set. Multiplied by 100, that's still 4800 MB. Meshes use very little video memory, so the whole thing could end up being less than 6 GB. Packing the grayscales would reduce the texture memory from 4800 to 3200 MB. Enabling virtual texturing would reduce this even further. Depending on how much of the UV space is filled, and how high frequency the details in the textures are, it could maybe even cut it in half. With tightly packed UVs, or high frequency detail, not so much though.
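If you want to play with those numbers, here's the same estimate as a small script (the per-texture costs are the rough figures I used above; exact sizes depend on compression format and mips):

# rough VRAM estimate for the whole skeleton's textures
MB_PER_2K_COLOR = 8    # a 2k color or grayscale map, roughly
MB_PER_2K_NORMAL = 16  # a 2k normal map, roughly double

def set_cost_mb(color_or_gray_maps, normal_maps=1):
    return color_or_gray_maps * MB_PER_2K_COLOR + normal_maps * MB_PER_2K_NORMAL

bones = 100
unpacked = set_cost_mb(4) * bones  # albedo + 3 loose grayscales + normal
packed = set_cost_mb(2) * bones    # albedo + one packed grayscale texture + normal
print(unpacked, "MB unpacked")     # 4800 MB
print(packed, "MB packed")         # 3200 MB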
Otherwise, I can confirm the solution suggested by Pior works great.
The skeleton is the main focus of the experience, with relatively few other assets in the world except for the hall it's in.
I'm reducing the bones to 5k each, so across 130 bones that's 650k polys. Does that sound reasonable?
That's a fantastic link to that Smithsonian model! I will go look at that right away. Regarding textures: that model looks like it shares 1 texture, right? One challenge I face is that the Blender file is way too big to join / decimate the whole model at once, so I'm exporting bones 1 by 1 to decimate, then baking the high to low in Substance and exporting the maps. Where I'm a little confused is how to join the 100 models / 100 texture sets back together (I have a bake tool for Blender and am hoping that will work, but I'm worried about my UVs not being efficient enough for the atlas). Although if @Obscura 's suggestion is correct (just have the 100 models with their texture sets in there), then maybe I can get away with that? Or maybe it would be better to use 1k textures for all 100 bones (rather than a 4k atlas) to save on video memory?
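As a side note on the "exporting bones 1 by 1" part, this is roughly the loop I've been using so I don't have to do it by hand 130 times (the output folder and the assumption that every bone is its own mesh object are just my setup):

import bpy
import os

OUT_DIR = bpy.path.abspath("//bone_exports")
os.makedirs(OUT_DIR, exist_ok=True)

# export each bone mesh to its own FBX for decimation / baking in Substance
for obj in [o for o in bpy.data.objects if o.type == 'MESH']:
    bpy.ops.object.select_all(action='DESELECT')
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj
    bpy.ops.export_scene.fbx(
        filepath=os.path.join(OUT_DIR, obj.name + ".fbx"),
        use_selection=True,
    )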
Here's an image of what one of my UVs looks like on the low poly.
If you guys have any other tips, let me know (especially anything to know about the Pior approach in Unreal, as opposed to loading and baking texture atlases in Blender etc.).
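And in case it makes the Blender atlas route clearer, this is roughly the bake loop I'd try for one group of bones with plain Cycles baking (completely untested, and I'd still pack the shared "AtlasUV" islands by hand in the UV editor before running it):

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.bake.use_pass_direct = False    # bake flat albedo, no lighting
scene.render.bake.use_pass_indirect = False

# one shared 2k albedo for the whole group (repeat per map type in practice)
atlas = bpy.data.images.new("BoneGroup01_Albedo", width=2048, height=2048)

group = [o for o in bpy.context.selected_objects if o.type == 'MESH']
for obj in group:
    if "AtlasUV" not in obj.data.uv_layers:
        obj.data.uv_layers.new(name="AtlasUV")
    # each material needs an active image node pointing at the atlas
    # so the bake has somewhere to write
    for slot in obj.material_slots:
        nodes = slot.material.node_tree.nodes
        node = nodes.new("ShaderNodeTexImage")
        node.image = atlas
        nodes.active = node

for i, obj in enumerate(group):
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'},
                        uv_layer="AtlasUV", use_clear=(i == 0))

atlas.filepath_raw = "//BoneGroup01_Albedo.png"
atlas.file_format = 'PNG'
atlas.save()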
Are they definitely overly wasteful? Do you have any suggestions on what I should be doing differently?
I will start searching for best UV unwrapping practices. Sorry for the noob questions.
If you have any keywords or links, let me know.
Thanks for the workflow suggestion -- that sounds like a great plan. Thanks BIG TIME