From your personal perspective and experience, what baking features would you like to see in your software of choice, whether that's Blender, Maya, Max, Substance Painter, Marmoset Toolbag, etc.? Would per-object baking settings be something you'd like to see? For instance, say you have a hero character model with clothes on, and you'd like to bake a cavity map for the whole model, where the strength (for lack of a better term) of the cavity map varies between apparel pieces. Would that be something you'd need or like? Or is the traditional way good enough for you, where you bake a color ID map for those individual pieces and use it as a selection mask in Substance Painter to adjust cavity through a levels modifier/filter?
Also, what are some of the drawbacks you most dislike when it comes to your current baking process?
Replies
Also, the trouble I get with overlapped UVs. Often game assets have symmetry or reused parts that can share the same UV space. To bake those, you typically have to move all the overlaps outside the (0, 1) UV space, and you have to make sure whatever remains inside is not actually facing backwards (and thus invisible to most bakers). Example: http://wiki.polycount.com/wiki/Texture_Baking#UV_Coordinates It would be nice to not have to do that extra work; the baker should be smart enough to figure it out (only render one forward-facing bit of the overlapped UVs).
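If it helps illustrate the manual chore, here's a minimal Blender Python sketch of the usual offset step; it assumes the overlapped/mirrored faces were selected in Edit Mode, and simply shifts their UVs one tile to the right:

```python
import bpy

# Minimal sketch: push the selected (overlapped/mirrored) faces' UVs one
# full tile to the right so only the forward-facing copy stays in 0-1.
# Run in Object Mode so the polygon selection flags are up to date.
obj = bpy.context.active_object
uv_data = obj.data.uv_layers.active.data
for poly in obj.data.polygons:
    if poly.select:
        for loop_index in poly.loop_indices:
            uv_data[loop_index].uv.x += 1.0
```

(After baking, the same loop with `-= 1.0` moves everything back, which is exactly the busywork a smarter baker would make unnecessary.)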
I get you here completely; it must be frustrating having to do that every time when baking, then placing the mirrored UVs back into (0, 1) UV space to examine the results, only to find out that not everything went as expected, and now you're stuck repeating the same process until all baking mistakes have been properly addressed. It's a rinse-and-repeat process. However, I have to admit I did not expect this, since I've long believed this method is a relic of the past, given how much hardware has improved over the past decade or so and that SP became an integral part of every 3D artist's pipeline way back in 2015.
Substance Painter doesn’t alter the need for UV offsets. This is an issue about UV space savings, and how intentional overlapping causes problems with baking.
I realize the purpose and benefit of stacking mirrored UVs on top of one another; I just falsely assumed that doing so creates problems in SP, or any texture painting software that uses materials, since most materials by default project their texture from a cube onto the 3D model with blending, which creates "broken" textures unless you've imported the mesh with the mirrored parts removed. I know this projection method can be switched to a UV-based one, but that mode of operation introduces unwanted seams, which kind of defeats the purpose of texturing directly in 3D space in the first place.
I wouldn't be surprised if what I've just explained is plain wrong, so I'm anticipating harsh criticism, especially after seeing your portfolio full of industry experience.
By the way, if you don't mind me asking, which software do you find yourself using for baking?
https://rapidpipeline.com/
Of course, I work there, ha ha. But it is a pretty cool baker. When baking AO (and also if you want to remove hidden meshes) we simply ignore surfaces with transparent materials.
I've never used any of the add-on bakers that Blender's community provides, so I have to ask whether the problem with the exploded bake option is that it's not automatic and has to be done manually, or whether it's destructive in the sense that it separates high-poly objects that should be baked together. I don't have any visual examples at hand right now, but I think I can explain it in words. Say we have a high-poly keyboard with retractable legs and, obviously, numerous keys. We plan on baking the base keyboard along with the keys onto a single low-poly mesh object, and the high-poly legs onto their own low-poly representation; naturally this requires us to explode the keyboard in order to avoid bake bleeding. However, the problem now arises because our keys have also been exploded, as they don't share an origin with the large low-poly base keyboard mesh, and thus they won't be captured by it during the baking process. Or is the problem something else?
And what's your opinion on skew-offset normals? Would including that in Blender be a big benefit?
And how would you feel about a Blender add-on that non-destructively automates most of the aforementioned work with UI-friendly fine-tune controls, with the downside of being a bit slower at baking compared to Marmoset Toolbag? To be clear, I'm not offering anything at the moment, I'm just curious.
So if I got this correctly, you essentially want a form of post-processing for baked textures? If that's the case, I'm curious to know for which bake types and what exact benefit. Would channel packing and compression (i.e. normal XYZ down to normal XY) also be something of value to you?
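To make sure we mean the same thing, here's a rough numpy sketch of that idea: drop the normal map's Z, repurpose the freed channel for another mask, and rebuild Z on unpack. The `roughness` input is just a hypothetical example of a channel to pack:

```python
import numpy as np

# Sketch of channel packing: keep only X and Y of a tangent-space
# normal map and rebuild Z later, freeing the blue channel.
def pack_normal_xy(nrm, roughness):
    # nrm: (H, W, 3) normal map in [0, 1]; roughness: (H, W) grayscale
    packed = np.empty(nrm.shape[:2] + (3,), dtype=np.float32)
    packed[..., 0] = nrm[..., 0]   # R: normal X
    packed[..., 1] = nrm[..., 1]   # G: normal Y
    packed[..., 2] = roughness     # B: repurposed for the packed mask
    return packed

def unpack_normal(packed):
    x = packed[..., 0] * 2.0 - 1.0
    y = packed[..., 1] * 2.0 - 1.0
    # reconstruct Z from the unit-length constraint x² + y² + z² = 1
    z = np.sqrt(np.clip(1.0 - x * x - y * y, 0.0, 1.0))
    return np.stack([x, y, z], axis=-1)
```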
Not to stray too far off topic, but I've personally written a semi-practical add-on for Blender that, amongst other bake types, bakes an AO map and automatically puts it in a post-processing stage where you can control the smoothness between huge height discrepancies. Hope I'm not breaking any rules or ToS here, but here's a link to it on Blender Artists; it's free of charge.
to add..
it's not just about alpha-blended surfaces (those are hard anyway)
Proper support for alpha-clip / alpha-test on the source mesh is a really useful thing that very few bakers have.
i.e. the ability to render through clipped pixels in the materials on the source mesh and capture the underlying opaque pixels.
Use cases include baking foliage cards from arrangements of alpha-clipped leaf meshes and opaque branch meshes, or baking hair cards down to a surface.
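For anyone following along, the "render through" behaviour could be sketched roughly like this. It's pure pseudocode in Python form; `scene.ray_cast`, `sample_alpha`, and `continued_past` are hypothetical stand-ins, not any real baker's API:

```python
# Rough pseudocode for rendering through alpha-clipped source surfaces:
# keep marching the bake ray past any hit whose pixel alpha-tests away,
# and only capture the first hit that survives the clip threshold.
def trace_through_clips(ray, scene, max_hits=8):
    for _ in range(max_hits):
        hit = scene.ray_cast(ray)          # hypothetical intersection query
        if hit is None:
            return None                    # ray escaped the high poly
        if sample_alpha(hit) >= hit.material.clip_threshold:
            return hit                     # surviving pixel: bake this one
        ray = ray.continued_past(hit)      # clipped pixel: keep going
    return None
```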
What I'd really like is to be able to push arbitrary data from the source object's materials onto the target object (essentially, let me write a shader and you bake the result to the target), but that's probably pushing it.
Would it be correct to assume that what you're describing is essentially baking LODs for foliage, where the lowest LOD has a billboard for each leaf (plus trunk and branch meshes), and each successive LOD incorporates an even larger billboard that encompasses more leaves, all the way until we're left with only two large billboards that capture the whole tree from the Y and X axes?
Well... I believe that Blender already kind of allows for that. But I wouldn't be surprised if you're using other software, since the most common way of creating shaders in Blender is by constructing them via shader nodes rather than writing them (even though it's possible via OSL shaders), and you specifically wrote "write". Could you kindly provide an example of how and where this would be beneficial?
and - to the shader question.
The ability to gather any data I might need from a mesh, composite it, and transfer it as a texture from a source mesh to a target mesh means I can reformat data into something useful for my target engine.
e.g. certain tools pack data into UV sets and vertex colors, and I might want to gather that, mess with it, and bake it to a texture.
Where I work we write a fair amount of custom tooling that runs as part of our build process to manipulate data in this way, which works great, but it's hard to debug for artists as the output is not readily available for viewing.
If we're talking about Blender and you already plan to support 'any' material, then it's likely that it'll cover most of my potential use cases. OSL is OK, and Blender's node-based tools seem to be fairly well featured.
I've seen the workaround video before, and it's quite tedious: it requires additional (unnecessary) geometry and is thus incredibly time consuming. This could be "easily" circumvented by procedurally generating the additional geometry and carefully capturing/storing the custom split normals from the original mesh, along with the standard normals around its edges. This way you'd be able to bake any map without skewing errors and display it on your original low poly mesh, yes... the one before the additional geometry was added. I know that what I wrote doesn't make much sense yet; hopefully in a couple of months I'll finish the algorithm that does exactly that and demonstrate it.
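As a hedged sketch of just the capture/restore step (the rest of the algorithm is still hypothetical), Blender's Python API already lets you snapshot and reapply per-loop custom normals:

```python
import bpy

# Sketch: snapshot the original custom split normals before generating
# any skew-fix geometry, then restore them on the untouched low poly.
# (Blender 4.0 and earlier may need mesh.calc_normals_split() first;
# newer builds expose loop.normal directly.)
mesh = bpy.context.active_object.data
stored_normals = [loop.normal.copy() for loop in mesh.loops]

# ... generate the extra edge geometry, bake, delete it again, then:
mesh.normals_split_custom_set(stored_normals)
```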
As for "waviness", I'm not too sure about solution for it but I believe that it could also be addressed in a similar manner with a bit more extra steps introduced in the aforementioned "algorithm".
Well, I'm no expert when it comes to Blender's shader nodes, but I've noticed that they lack some fundamental things such as partial derivatives, and that they're object-bound, meaning you can't perform operations on the final image output for viewport display purposes, e.g. an outline effect. So I find them fairly limited, though great for prototyping.
(Legend for the sketch posted earlier: S = source, A = what you get, B = what you want.)
admittedly this is a pretty niche area and use cases are highly context dependent
I'm not sure how the post-processing from general_baking.py is supposed to work, whether the user has to rebake every time for it to apply or what; I couldn't get it to do anything. But the basic height baking seems solid so far. In Blender you usually have to resort to Pointiness, AO, or Geometry Nodes to get something similar, and the first method has banding while the second can be awfully slow. Yours baked quickly on a small test object. I'm pretty curious to see how it fares on heavy organic meshes and will try with one later. It'd be very useful to have a way to bake quality cavity/convexity maps directly in Blender!
I also really like the high and low poly lists. I haven't gotten around to baking multiple objects yet, but it's pretty handy and intuitive. Nice work!
Edit: Oh, I see why the post-processing didn't work! It's using a method for OpenGL and I'm using Vulkan. It's also a really cool feature, made even better by the fact that it updates immediately, so you see it affect the material in real time. That's great when texturing!
I just can't believe I forgot about the limit on the number of attributes per vertex; now it's clear to me how "baking" those attributes could be useful. However, would it be correct to assume that you're talking about data extraction from the low poly object itself, and thus it would be most efficient to "bake" those values to a 1D texture that we can later access via vertex IDs in the vertex shader, saving on texture space while keeping the interpolation of those values between vertices?
Of course, it goes without saying that this is only valid as long as our shader language of choice allows texture reads in the vertex shader in the first place, as thankfully most do nowadays. I'm asking all of this because if that's the case (and just to be clear again), then technically we wouldn't need to actually bake anything at all, just iterate through the vertices in ascending order and write that specific attribute data into a 1D texture via scripting.
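Something like this minimal Blender Python sketch is what I have in mind; the "wear" attribute name is made up, and it assumes a float point attribute exists on the active object:

```python
import bpy
import numpy as np

# Sketch: pack a per-vertex float attribute into a 1D image so a vertex
# shader can later fetch it by vertex ID. "wear" is a hypothetical
# float point attribute on the active mesh.
obj = bpy.context.active_object
attr = obj.data.attributes["wear"].data
values = np.array([a.value for a in attr], dtype=np.float32)

img = bpy.data.images.new("wear_lut", width=len(values), height=1,
                          float_buffer=True)
pixels = np.zeros((len(values), 4), dtype=np.float32)
pixels[:, 0] = values   # store the attribute in the red channel
pixels[:, 3] = 1.0      # opaque alpha
img.pixels.foreach_set(pixels.ravel())
```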
No worries, you don't sound pretentious at all. You're quite right and I agree: it's not an add-on one bit, and it's my fault for calling it an add-on by mistake in this thread. If you refer back to the Blender Artists page, you'll see that it's not posted in the Released Scripts and Themes section but in the testing section, and I never originally called it an add-on but rather a script, since I'm aware it isn't one. Again, you and I agree on this one 100%.
I know you've already figured it out, but I'll just explain it for the sake of explaining, so no need to read this.
The post-processing does not require rebaking every time; you just have to place a grayscale texture in the post-processing image slot, and the changes performed on it are real time. Sadly it's a bit slow for textures larger than 2048x2048, though that depends on your device.
The simplistic explanation of how height map construction works is as follows: one world position map is baked for each of the high and low poly, plus a normal map for the low poly. From these three textures a height map is extrapolated using the most basic arithmetic operations, and then offset and clamp operations are applied so that the range is mapped between 0.0 and 1.0, leaving no unused bit space. So while it's not the slowest, it's definitely not the fastest. I could theoretically speed up the height map construction by switching from Python's numpy methods (CPU-bound) to GLSL (GPU-bound).
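In numpy terms, the extrapolation amounts to something like the sketch below; the array shapes and the normalisation step follow the description above, and the function and variable names are just illustrative:

```python
import numpy as np

# Sketch of the height extrapolation: project the high-to-low world
# position delta onto the low poly normal, then remap to the full 0-1
# range so no bit space is wasted.
def build_height_map(pos_high, pos_low, nrm_low):
    # pos_high, pos_low: (H, W, 3) world position bakes
    # nrm_low: (H, W, 3) low poly normals stored in [0, 1]
    normals = nrm_low * 2.0 - 1.0
    height = np.sum((pos_high - pos_low) * normals, axis=-1)
    height -= height.min()             # offset so the minimum is 0.0
    height /= max(height.max(), 1e-8)  # scale so the maximum is 1.0
    return np.clip(height, 0.0, 1.0)
```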
I'll see what I can do about cavity maps in Blender, but as far as I know cavity is more or less AO with a really small sample radius, so while it's possible to create in Blender, it's going to run really slowly since it will be CPU-bound; but that's just my guess. I'm going to have to explore that deeper. Convexity has already been developed, with a fine-tune control preview per high poly object!
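One CPU-friendly alternative I might explore, instead of short-radius AO rays, is deriving cavity from the baked normal map's slope divergence. A rough numpy sketch (sign conventions depend on the normal map's green channel, so take it with a grain of salt):

```python
import numpy as np

# Rough sketch: approximate cavity from a tangent-space normal map.
# Concave areas make the XY slope field converge, so its divergence
# goes negative there; remap that around mid-grey.
def cavity_from_normals(nrm, strength=4.0):
    # nrm: (H, W, 3) normal map in [0, 1]
    x = nrm[..., 0] * 2.0 - 1.0
    y = nrm[..., 1] * 2.0 - 1.0
    divergence = np.gradient(x, axis=1) + np.gradient(y, axis=0)
    return np.clip(0.5 + strength * divergence, 0.0, 1.0)
```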
I've been using Pointiness, which also isn't great because it's still a bit slow for dense geometry, and you're forced to use workarounds to avoid a couple of serious float precision glitches. Although it's a shader, it's derived from the geometry, so geo resolution has a direct impact on the final quality.
A streamlined equivalent that doesn't use tens of GB of RAM to bake would be very welcome. Unfortunately, I think the memory-eating sluggishness is just an inherent limitation of Blender's baking. It's so inefficient.
I've not done a great job of explaining myself
there are a number of use cases
As a high-res to low-res transfer example - and sticking with trees:
In your source model (the picture I drew before):
You can use SpeedTree to assign vertex color values by index to leaves/branches, and you can also apply a gradient running from the base to the tip of branches, etc., all of which is very useful information for seasonality, plant degradation and so on.
This can be baked to your target mesh (a plane) and used to control masking in runtime shaders that handle seasonality, degradation etc.
It wouldn't be possible to encode this data as vertex color into the low-res because there aren't enough vertices.
The benefit of being able to run a shader on the source mesh is that I could manipulate the values at bake time, e.g. index into a noise function to randomise the indices, composite channels together, etc.
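To make that concrete, a small numpy sketch of the kind of bake-time compositing meant here; the noise jitter and channel layout are invented for illustration:

```python
import numpy as np

# Sketch: take a baked vertex-color index map, jitter the indices with
# noise, and pack the result with a base-to-tip gradient into one RGBA
# mask texture for the runtime seasonality/degradation shader.
rng = np.random.default_rng(42)

def composite_mask(index_map, gradient_map):
    # index_map, gradient_map: (H, W) float arrays from the source bake
    noise = rng.random(index_map.shape)
    randomized = (index_map + 0.25 * noise) % 1.0  # jittered leaf indices
    out = np.zeros(index_map.shape + (4,), dtype=np.float32)
    out[..., 0] = randomized     # R: randomized index for seasonality
    out[..., 1] = gradient_map   # G: base-to-tip branch gradient
    out[..., 3] = 1.0
    return out
```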
I generally do the sort of manipulation I describe here as a post-process step by scripting Substance Designer, but that adds another link in the chain, and links are where problems occur.
For low-res to low-res transfer there are lots of use cases, but the most common would probably be taking a complex shader and baking it down to something simpler, e.g. if you're doing a Switch port of a console title, or if you want to aggregate a bunch of meshes to generate a LOD or background asset.
There are very few tools that allow you to do this sort of thing with arbitrary shader code, and even fewer that aren't a colossal ballache to operate (looking at you, Simplygon...).