From your personal perspective and experience, what baking features would you like to see in your software of choice, irrespective of whether it's Blender, Maya, Max, Substance Painter, Marmoset Toolbag, etc.? Would per-object baking settings be something you'd like to see? For instance, say you have a hero character model with clothes on, and you'd like to bake a cavity map for the whole model, where the strength (for lack of a better term) of the cavity map would vary between apparel pieces. Would that be something you'd need or like? Or is the traditional way good enough for you, where you bake a color ID map for those individual pieces and use it as a selection mask in Substance Painter to adjust the cavity through a Levels modifier/filter?
Also, what are some of the drawbacks you most dislike when it comes to your current baking process?
Replies
Also, the trouble I get with overlapped UVs. Game assets often have symmetry or reused parts, which can share the same UV space. To bake those, you typically have to move all the overlaps outside the (0,1) UV space, and you have to make sure whatever remains inside is not actually facing backwards (and thus invisible to most bakers). Example: http://wiki.polycount.com/wiki/Texture_Baking#UV_Coordinates It would be nice to not have to do that extra work; the baker should be smart enough to figure it out (only render one forward-facing bit of the overlapped UVs).
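For reference, the manual offset step boils down to something like this; a minimal bpy sketch, assuming an active mesh object in Edit Mode with the overlapping faces already selected:

```python
# Minimal sketch of the manual workaround: shift the UVs of the selected
# (mirrored/overlapping) faces one unit to the right, so only the
# forward-facing copy stays inside the 0-1 tile during the bake.
import bpy
import bmesh

obj = bpy.context.edit_object
bm = bmesh.from_edit_mesh(obj.data)
uv_layer = bm.loops.layers.uv.verify()

for face in bm.faces:
    if face.select:
        for loop in face.loops:
            loop[uv_layer].uv.x += 1.0  # push the island into the 1-2 tile

bmesh.update_edit_mesh(obj.data)
```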
I get you here completely. It must be frustrating having to do that all the time when baking, then placing the mirrored UVs back into (0,1) UV space to examine the results, only to find out that not everything went as expected, and now you're confined to repeating the same process again and again until all baking mistakes have been properly addressed. It's a rinse-and-repeat process. However, I have to admit I did not expect this, since I've long held the belief that this method is a relic of the past, given how much hardware has improved over the past decade or so and that SP became an integral part of every 3D artist's pipeline way back in 2015.
Substance Painter doesn’t alter the need for UV offsets. This is an issue about UV space savings, and how intentional overlapping causes problems with baking.
I realize the purpose and benefit of stacking mirrored UVs on top of each other; I just falsely assumed that doing so creates problems in SP, or in any texture painting software that uses materials, since most materials by default project their textures onto the 3D model from a cube with blending, which creates "broken" textures unless you've imported the mesh with the mirrored parts removed. I know this projection method can be switched to UV projection, but that mode of operation introduces unwanted seams, which kinda defeats the purpose of texturing in 3D space in the first place.
I wouldn't be surprised if what I've just explained is flat-out wrong, so I'm anticipating harsh criticism, especially after seeing your portfolio full of industry experience.
By the way, if you don't mind me asking, which software do you find yourself using for baking?
https://rapidpipeline.com/
Of course, I work there, ha ha. But it is a pretty cool baker. When baking AO (and also if you want to remove hidden meshes) we simply ignore surfaces with transparent materials.
I've never used any of the add-on bakers that Blender's community provides, so I have to ask whether the problem with the exploded bake option is that it's not automatic and has to be done manually, or whether it's destructive in the sense that it separates high-poly objects that should be baked together. I don't have any visual examples at hand right now, but I think I'll be able to explain it in words. Say we have a high-poly keyboard with retractable legs and, obviously, numerous keys. We plan on baking the base keyboard along with the keys onto a single low-poly mesh object, and the high-poly legs onto their own low-poly representation; naturally, this requires us to explode the keyboard in order to avoid bake bleeding. However, the problem now arises that our keys have also been exploded, as they don't share an origin with the large low-poly base keyboard mesh, and thus they won't be captured by it during the baking process. Or is the problem something else?
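To illustrate the pairing I have in mind: an automated explode could offset each low/high group by the same vector, so the keys stay aligned with the keyboard they bake onto while moving away from the other groups. A hypothetical sketch; the object names and the BAKE_GROUPS layout are made up for the keyboard example:

```python
# Hypothetical explode step: every low-poly object and its matching
# high-poly source(s) receive the same offset, so pairs stay aligned
# with each other while separating from the rest of the bake groups.
import bpy
from mathutils import Vector

# (low-poly name, [high-poly names]) pairs -- illustrative only
BAKE_GROUPS = [
    ("keyboard_low", ["keyboard_body_high", "keys_high"]),
    ("legs_low",     ["legs_high"]),
]
EXPLODE_STEP = Vector((0.0, 0.0, 2.0))  # push each group up the Z axis

for index, (low_name, high_names) in enumerate(BAKE_GROUPS):
    offset = EXPLODE_STEP * index
    for name in [low_name] + high_names:
        bpy.data.objects[name].location += offset
```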
And what's your opinion on skew-offset normals? Would including that in Blender be a big benefit?
And how would you feel about a Blender add-on that non-destructively automates most of the aforementioned work, with UI-friendly fine-tuning controls, with the downside of baking a bit slower than Marmoset Toolbag? To be clear, I'm not offering anything at the moment, I'm just curious.
So if I got this correctly, you essentially want a form of post-processing for baked textures? If that's the case, I'm curious to know for what bake types and what exact benefit. Would channel packing and compression (e.g. dropping the normal's Z channel and reconstructing it in the shader) also be something of value to you?
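To make the packing part concrete, I mean something along these lines; a rough numpy sketch, where the file names are placeholders and imageio is just one way to read the images:

```python
# Rough sketch of a post-bake packing pass: keep the tangent-space
# normal's X and Y, reuse the freed third channel for AO, and let the
# shader rebuild Z from the unit-length constraint.
import numpy as np
import imageio.v3 as iio

normal = iio.imread("normal.png").astype(np.float32) / 255.0
ao = iio.imread("ao.png").astype(np.float32) / 255.0
ao_gray = ao[..., 0] if ao.ndim == 3 else ao  # AO may be RGB or grayscale

packed = np.empty_like(normal[..., :3])
packed[..., 0] = normal[..., 0]  # R = normal X
packed[..., 1] = normal[..., 1]  # G = normal Y
packed[..., 2] = ao_gray         # B = ambient occlusion

iio.imwrite("packed.png", (packed * 255.0 + 0.5).astype(np.uint8))

# In the shader, Z comes back from the unit-length constraint:
#   xy = packed.rg * 2.0 - 1.0
#   z  = sqrt(max(0.0, 1.0 - dot(xy, xy)))
```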
Not to stray too far off topic, but I've personally written a semi-practical add-on for Blender that, among other bake types, bakes an AO map and automatically puts it through a post-processing stage where you can control the smoothness between huge height discrepancies. Hope I'm not breaking any rules or TOS here, but here's a link to it on blenderartists; it's free of charge.
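The gist of that post-processing stage, boiled down to a toy remap (not the add-on's actual code, just the idea):

```python
# Toy version of the idea: remap the baked AO with a smoothstep so the
# transition between deep and shallow occlusion can be tuned, instead
# of hard jumps where the height discrepancies are huge.
import numpy as np

def smooth_ao(ao: np.ndarray, low: float = 0.2, high: float = 0.8) -> np.ndarray:
    """Smoothstep remap of an AO image with values in [0, 1]."""
    t = np.clip((ao - low) / (high - low), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)
```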
to add..
it's not just about alpha-blended surfaces (those are hard anyway)
Proper support for alpha-clip / alpha-test on the source mesh is a really useful thing that very few bakers have.
i.e. the ability to render through clipped pixels in the materials on the source mesh and capture the underlying opaque pixels.
Use cases include baking foliage cards from arrangements of alpha-clipped leaf meshes and opaque branch meshes, or baking hair cards down to a surface.
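In baker terms it's roughly this loop; Python-flavoured pseudocode, where cast_ray() and sample_material_alpha() are hypothetical stand-ins rather than any real baker's API:

```python
# "Render through clipped pixels": when a ray hits a surface whose
# material alpha at the hit UV is below the clip threshold, keep
# marching instead of returning that hit.
EPSILON = 1e-4

def trace_through_alpha_clip(scene, origin, direction, clip_threshold=0.5):
    hit = cast_ray(scene, origin, direction)      # hypothetical helper
    while hit is not None:
        alpha = sample_material_alpha(hit)        # hypothetical UV lookup
        if alpha >= clip_threshold:
            return hit                            # opaque enough: capture it
        # clipped pixel: restart the ray just past the hit point
        origin = hit.position + direction * EPSILON
        hit = cast_ray(scene, origin, direction)
    return None                                   # ray escaped the scene
```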
What I'd really like is to be able to push arbitrary data from the source object's materials onto the target object (essentially, let me write a shader and you bake the result to the target) but that's probably pushing it
Would it be correct to assume that what you're trying to describe is essentially baking LODs for foliage, where the lowest LOD has a billboard for each leaf (plus trunk and branch meshes) and each successive LOD incorporates ever larger billboards that encompass more leaves, all the way until we're left with only two large billboards that essentially capture the whole tree from the Y and X axes?
Well... I believe that Blender already kinda allows for that. But I wouldn't be surprised if you're using other software, since the most common way of creating shaders in Blender is by constructing them via shader nodes rather than by writing them (even though it's possible via OSL), and you specifically wrote "write". Could you kindly provide an example of how and where this would be beneficial?
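For the record, the Blender route I was thinking of is the usual Emission trick: wire whatever node data you want into an Emission shader and bake the EMIT pass, which writes the raw values to the target image. A minimal sketch, assuming Cycles, a selected object, and an Image Texture node selected in its material as the bake target:

```python
# Bake arbitrary node data by routing it into an Emission shader and
# baking the EMIT pass; the values land in the target image verbatim.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'     # baking requires Cycles
scene.cycles.samples = 1           # flat data needs no sampling
bpy.ops.object.bake(type='EMIT')   # bakes the emission input as-is
```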
And, to the shader question:
the ability to gather any data I might happen to need from a mesh, composite and transfer that as a texture from a source mesh to a target mesh means that I can reformat data into something useful for my target engine
eg. certain tools pack data into UV sets and vertex color and I might want to gather that, mess with it and bake it to a texture
Where I work we write a fair amount of custom tooling that runs as part of our build process to manipulate data in this way, which works great, but it's hard to debug for artists as the output is not readily available for viewing.
if we're talking about Blender and you already plan to support 'any' material then it's likely that it'll cover most of my potential use cases - OSL is ok and Blender's node-based tools seem to be fairly well featured
I've seen the workaround video before, and it's quite tedious, requires additional (unnecessary) geometry, and is thus incredibly time-consuming. This could be "easily" circumvented by procedurally generating the additional geometry and carefully capturing/storing the custom split normals from the original mesh along with the standard normals around its edges. That way you'd be able to bake any map without skewing errors and display it on your original low-poly mesh, yes... the one from before adding the additional geometry. I know that what I wrote doesn't make much sense yet; hopefully in a couple of months I'll finish the algorithm that does exactly that and I'll demonstrate it.
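The capture-and-restore step itself is simple enough in bpy; a sketch, assuming the pre-4.1 split-normals API and that the loop layout is unchanged when restoring:

```python
# Capture each loop's split normal before generating the helper
# geometry, so the originals can be restored (or baked against) later.
import bpy

mesh = bpy.context.object.data
mesh.calc_normals_split()  # populate loop normals (pre-4.1 API)
stored = [loop.normal.copy() for loop in mesh.loops]

# ...generate the helper geometry and bake here...

# Afterwards, write the captured normals back (loop layout must match).
mesh.use_auto_smooth = True  # required for custom normals pre-4.1
mesh.normals_split_custom_set(stored)
```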
As for "waviness", I'm not too sure about solution for it but I believe that it could also be addressed in a similar manner with a bit more extra steps introduced in the aforementioned "algorithm".
As for the shader nodes: I'm no expert when it comes to them, but I've noticed that they lack some fundamental stuff, such as partial derivatives, and that they're object-bound, which means you can't perform any operations on the final image output for viewport display purposes, e.g. an outline effect. Thus I find them fairly limited; great for prototyping, though.