Alright, I'm coming back to 3d after a few years off my game, and I'm trying to get my workflow down.
The pipeline I want to use is Blender (basemesh) -> Zbrush (sculpting/highpoly, retopo) -> Blender (texture painting) -> Unity (PBR material, poor-man's Toolbag)
So far, I've gotten everything to "work" except that I've found that Zbrush doesn't export very good normal maps. I'm reading up on fixing some of these errors, but a lot of it comes down to "use xnormal instead," which would be more like
Blender -> Zbrush -> Xnormal -> Blender -> Unity
My problem with this is that I want to work on some really high-res meshes, using all the neat ZBrush tools like NanoMesh and HD Geometry, and it doesn't sound like that's feasible when baking with xNormal.
Are there any tips or tricks for getting Zbrush to emit decent maps? Or should I just learn to deal with not being able to get the most out of Zbrush? What do people do with these absurdly-high-poly models you can make in Zbrush? Just beauty renders?
Replies
1. Export a low-res and a mid-res mesh, with a normal map of the tiny details already baked for that mid-res object (which has no hard edges). Then in xNormal you use the mid-res object as the high-definition mesh, and that detail normal map as its base texture, with "Base texture is a tangent-space normal map" checked. xNormal will combine it with its own baked normal map in the base texture output (see the sketch after this list for one way two tangent-space maps can be stacked). With this method you can't bake all those tiny details into the height map, but that's usually not necessary.
2. Decimate the mesh in ZBrush while preserving the vertex coloring, then proceed as usual in xNormal.
3. A combination of 1 and 2: use point 2 for scattered NanoMeshes, for example, as long as they don't use alpha.
4. Same as point 1, but with the mid-res mesh carrying a displacement map. Then bake it in Max/Maya; unfortunately not in Blender, since it can't import and work comfortably with even a mid-res mesh of a couple million polygons. Limited support for NanoMesh alphas is possible this way.
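For anyone wondering what "combine" means in point 1: conceptually, the detail map gets re-applied on top of the freshly baked map. Below is a minimal NumPy sketch of one common way to stack two tangent-space normal maps (Reoriented Normal Mapping). xNormal's internal blend may well differ, and the function name and array shapes are my own assumptions.

```python
import numpy as np

def blend_rnm(base, detail):
    """Stack a detail tangent-space normal map on top of a base one (RNM blend).

    base, detail: float arrays of shape (H, W, 3), values in 0..1 as stored in the textures.
    Returns the combined map, also in 0..1.
    """
    t = base   * np.array([ 2.0,  2.0, 2.0]) + np.array([-1.0, -1.0,  0.0])
    u = detail * np.array([-2.0, -2.0, 2.0]) + np.array([ 1.0,  1.0, -1.0])
    r = t * np.sum(t * u, axis=-1, keepdims=True) - u * t[..., 2:3]
    r /= np.linalg.norm(r, axis=-1, keepdims=True)   # renormalize the blended normals
    return r * 0.5 + 0.5
```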
What happens if you run something with nanomesh/GeoHD through Decimation Master? Just scrubs it all out?
There is also a pretty old, quick approach: bake the mid-poly into a single displacement texture, with all UV patches oriented consistently and with texel size as uniform as possible. Then displace that texture from a square plane and add all the NanoMeshes there, even hairs (not across the seams, though). After that you can just do a screen render in ZBrush. For the normal map, be sure to get rid of ambient light in the lighting settings and bake your own normal-sphere picture somewhere for the NormalRGB material (the built-in ZBrush one has the wrong gamma curve). For hairs and NanoMeshes you have to do a "Best" render first and, if ZBrush still hasn't hung after that, a BPR render. Save 16-bit outputs. Then follow option 1 from my previous post.
A depth pass can be used to make a quick curvature/cavity map with a high-pass filter and levels in Photoshop. You can then re-bake it onto your target low-poly mesh, with the final UV packing, in xNormal.
This method works for objects without too many small UV islands, and it's a real time saver.
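If you'd rather script that high-pass + levels step than do it in Photoshop, here's a rough Python sketch of the same idea. The filenames, blur radius, and remap band are placeholders, and it assumes the depth pass was saved as a 16-bit grayscale image.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

# Load the 16-bit depth pass as float (filename assumed)
depth = np.asarray(Image.open("depth_pass.tif"), dtype=np.float32)

blurred  = gaussian_filter(depth, sigma=8.0)   # low-frequency copy; radius to taste
highpass = depth - blurred                     # keeps only the small surface breaks

# "Levels" step: stretch a narrow band around zero to full range (mid-grey = flat)
band   = max(np.percentile(np.abs(highpass), 99.0), 1e-6)
cavity = np.clip(highpass / (2.0 * band) + 0.5, 0.0, 1.0)

Image.fromarray((cavity * 255.0).astype(np.uint8)).save("cavity_map.png")
```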
- Pipeline-wise, focus your attention on being able to easily export things out of ZBrush - either directly to your baking application, or to your main 3D program, which will serve as a hub for your scene (this forces you to work light and efficient, and also opens you up to all the traditional modeling tools). Don't believe the hype about "AAA characters created fully in ZBrush" - while this is indeed possible for portfolio pieces, it's a nightmare in production, as the amount of cleanup and the time wasted redoing things goes through the roof.
What you can do is think of it the other way around, like: "If this scene is not fluid and cannot be displayed at 60fps, then something is wrong and I need to address it." This might force you to work at lower densities than you'd like, but in the long run (after baking, texturing, and so on) the result will probably look just as good as if you had worked with gazillions of polygons. And if a scene is light enough to be displayed responsively in ZBrush, then it will likely be very fast to write out as an OBJ too, and Blender will have no problem displaying it.
I would go the route of exporting and working on each item separately in its own .ztl file before exporting. I also don't decimate any meshes, as I find it ruins my normal maps (I don't know if I'm doing it wrong or if it's a flaw in the design, but the normals are never the same as from the raw subtool, for me at least; I might be missing a step somewhere).
Just for the record's sake, the thing I'm working on now has 103 subtools. Usually once I have them together at a moderate level and with the final silhouette, I can place them into their own .ztl files and work on them more there if need be (mentioning this in case you need that type of process to get a bit more out of the details).
HOWEVER, I find that just using your program of choice for tessellating and subdividing is a quicker route to a better end result (a recent finding from the sticky here in Tech, since I only just started learning more about hard-surface work), using ZBrush as a detail-only app.
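In case it helps, a minimal Blender/Python sketch of that "tessellate first, detail in ZBrush" idea; the modifier name and subdivision levels are just placeholder values:

```python
import bpy

obj = bpy.context.active_object                       # the basemesh you plan to send to ZBrush
mod = obj.modifiers.new(name="PreSubdiv", type='SUBSURF')
mod.levels = 2                                        # assumed viewport level, tune to your budget
mod.render_levels = 2
bpy.ops.object.modifier_apply(modifier=mod.name)      # bake the subdivision in before export
```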
I still don't get the "keep all objects together when making the AO bake" advice - it never works for me, and I still explode-bake those separately. (The only time I would decimate is for a screenshot of the high-poly, say for the portfolio.)
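A minimal Blender sketch of one way to do that explode step, assuming each subtool came in as its own object; the offset distance is a placeholder:

```python
import bpy
from mathutils import Vector

OFFSET = 2.0  # placeholder distance in scene units; adjust to your scale

for obj in bpy.context.selected_objects:
    # Use the world-space bounding-box centre, since ZBrush exports often share one origin
    center = sum((obj.matrix_world @ Vector(c) for c in obj.bound_box), Vector()) / 8.0
    if center.length == 0.0:
        continue  # part sits exactly on the origin: leave it in place
    obj.location += center.normalized() * OFFSET
```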
Thought I'd attempt to help with the subtool separation "method".
Hope this helps someone somewhere on the forums.
David: Yeah, I'm not excited about having to decimate my highpolys, but if that's what it takes to get them OUT of ZBrush, then I guess I'll have to deal. I'm reading that xNormal can handle something like 30 million polygons for a high-poly, so that doesn't seem like it would be a problem. Except if I go nuts on a sculpt up to like 1 million, and THEN try to do HD Geometry or NanoMesh on top of it. That might be pushing it. Who knows.
Again, tackle the problem the other way around: work with what you have, identify the bottlenecks, and adjust your pipeline accordingly. For instance, you could decide that an export should never take more than 5 seconds, and determine the density of your models from there.
(and to be sure, I can spin hardware limitations as part of the process, if I decide to show these pieces for some sort of interview or something)
That aside, what is the "industry standard" workflow for Zbrush? Or would that be a question better suited for ZBC? Maybe I should make an account there too.