I notice Blender has no UDIM option for UV layout. Any alternative to include in this addon, or a manual process? Thanks.
Anything you have in mind? You can already lay out UVs outside of the 0-1 area, which is how I understand UDIM. Perhaps some interface to offset the selected faces' UVs into a UDIM block? Or cycle / swap UDIM blocks?
Here is a mockup of something I could imagine being useful: with the click of a button, cycle the UVs from block to block, like offsetting. Perhaps another tool to pop in a new block and offset all the others to make room for it.
To those that have tried Eevee (the PBR viewport of 2.8) (...)
If I load the "Race Spaceship" scene it looks very different from the pictures, with the scene almost all black, and the only light type that seems to work for me is Sun.
To answer myself, if you're on Windows then you need to right-click the Blender 2.8 executable and pick Run with Graphics Processor -> Nvidia (...) or whatever dedicated GPU you have on your system.
Turns out if you don't do that then 2.8 will use your integrated GPU, which may lack a feature or two and won't draw the viewport right.
On the Nvidia control panel you can also set this option permanently for Blender 2.8 so you don't have to keep doing that right-clicking.
I have recreated the node setup to generate a curvature map from a normal map; the result is closer to Substance Designer's. Important note: the normal map must be imported with its color space set to Linear (so it is not gamma-corrected to sRGB):
The picture of the node setup; it's easy to reproduce: a positive translation direction uses an inverted image, and it's the opposite with a negative direction. The red channel moves the image in X and the green channel in Y.
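The channel-shift idea behind those translate-and-overlay nodes can be sketched in plain Python. This is a toy stand-in (hypothetical helper, not AssetGen or TexTools code): offsetting and overlaying the red channel in X and the green channel in Y is, in effect, differencing them.

```python
def curvature_from_normal(red, green):
    """red/green: 2D lists holding the normal map's R and G channels in 0..1.
    Returns a curvature map centered on 0.5 (flat surface = 0.5)."""
    h, w = len(red), len(red[0])
    curv = [[0.5] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = (red[y][x + 1] - red[y][x - 1]) * 0.5      # red moves in X
            dy = (green[y + 1][x] - green[y - 1][x]) * 0.5  # green moves in Y
            curv[y][x] = min(max(0.5 + dx + dy, 0.0), 1.0)
    return curv
```

A flat normal map (R = G = 0.5 everywhere) comes out as a uniform 0.5 curvature, which matches the neutral grey you see in the node result.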
Here I use a color ramp in HSV mode (from cold, where max cold = blue, to warm, where max warm = yellow) and the blue channel of the normal map to add relief; then I combine the curvature with the colors using the value node:
Alternative with a curvature smooth:
I am currently working on the Normal to Curvature Smooth node setup and will share the node group here. Everything will be automated in AssetGen, so you won't have to recreate any nodes (nor do UVs or retopo).
Edit: do not use the node setups in the two images below, they create artifacts; instead, you can overlay the result with a smooth curvature map to hide the aliasing.
And here is how to improve your curvature maps: most artists use a blur filter to remove the aliasing; instead, I suggest increasing the pixel width by a factor of 2, starting at 1 (0.5 in each direction) up to 64. You can go higher, up to 256, but the difference will be minor.
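The doubling sequence described above can be sketched like this (illustrative helpers with made-up names, not part of any addon): each curvature pass is made at twice the previous pixel width, and the passes are averaged rather than blurred.

```python
def doubling_widths(start=1.0, stop=64.0):
    """Pixel widths doubling from 1 (0.5 offset each direction) up to 64."""
    out, w = [], start
    while w <= stop:
        out.append(w)
        w *= 2.0
    return out

def average_passes(passes):
    """Average, per pixel, the curvature passes made at each width."""
    n = len(passes)
    return [sum(px) / n for px in zip(*passes)]
```

With the defaults this gives the seven passes 1, 2, 4, 8, 16, 32, 64; extending `stop` to 256 adds the two extra passes the post mentions.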
So here is what you get with the improved curvature nodes; like before, I use an overlay with a factor of 3 on the image itself.
You can multiply the blue channel to add more relief so the details of your texture pop out:
I continue to work on reproducing the curvature smooth of Substance Designer.
As for Blender - we use it in AAA development. It is an excellent tool. Modelling in Blender is a blast, and I'm talking about modelling complex objects with lots of modifiers, entities and geometry. It has some tools that no other package has (or that aren't as convenient to use as in Blender). For example, the Y.A.V.N.E. addon provides the best custom normals workflow I have ever seen. Folks in my studio who mainly use Maya or Modo say they wish they had something as useful. The same goes for booleans. There are more examples, but I think you get the general idea.
And another thing: please don't judge tools. It's up to the human being using the tool to utilize its power. If you find yourself unsatisfied with the results you get from Blender, maybe you need to improve as an artist; and looking at your ArtStation page, I can say there is room for improvement.
Please don't debate the capabilities of Blender in the mega thread; create a topic instead, and if you think Blender lacks some feature you can suggest it here: https://blender.community/c/rightclickselect/
But almost everything you mentioned already exists:
- Dyntopo is awesome, way better than Dynamesh, because it's local, works in real time and never crashes, and rebuilding the topology doesn't exist at all in Mudbox. The Orb brushes have been ported to Blender on Blend Swap for stylized artists. You can combine your sculpts in Blender with poly editing and cloth simulation, you can paint the bump map directly, and there's the future clay viewport with OpenGL 3.3, SSAO and matcap editing. Blender is built for 3D, not originally for 2.5D. You also have the Skin modifier, which combined with Subsurf is as good as ZSpheres.
- For sculpting tutorials: the YanSculpts series on YouTube, the Art of Sculpting series on Blender Cookie, and the Low Poly Character Creation workshop and Creature Factory 2 on the Blender Cloud, to name a few.
- The game engine will become UPBGE with EEVEE; it is an interactive mode not intended for AAA games but mostly for quick demos such as game prototypes to explore game design ideas, small simulations (some car manufacturers use it with Blend4Web to configure a car online before buying it, and NASA uses it for the Curiosity demos to teach the public) and archviz.
- For booleans and shading you can combine Bool Tool + a rounded bevel shader + edge splits, which fixes the problem easily, or you can use Hard Ops.
- Manual retopology has always been a pain, but ZRemesher isn't the answer to this problem when creating a model that needs to be animated.
- Something like ZRemesher, called Interactive Quadrilateral Remeshing, is planned after Blender 2.8, when the developers have more time.
- For static meshes, Blender's Decimate modifier is much better than manual retopology or ZRemesher, because it automatically keeps more triangles where the details are and fewer on flat surfaces, and you can create vertex groups to decimate hidden parts more or protect areas you want more detailed. The game asset looks better, and you can set exactly the polycount you want for each LOD (see AssetGen) to get the maximum amount of detail within the tris limit (which also means better normal map quality, and better curvature generated from it, because the low poly envelops the high poly perfectly). After the decimation you can use the Bool Tool addon to union all the meshes and remove the intersections. See AssetGen to generate a game asset in one click.
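The exact-polycount idea can be illustrated numerically. This is a sketch (the function name is made up; real triangle counts would come from the mesh): the Decimate modifier takes a ratio, so hitting a triangle budget is just target over current.

```python
def decimate_ratio(current_tris, target_tris):
    """Ratio to feed a Decimate (collapse) modifier to hit a triangle budget."""
    if current_tris <= 0:
        raise ValueError("current_tris must be positive")
    # never exceed 1.0: a mesh already under budget is left untouched
    return min(1.0, target_tris / current_tris)

# e.g. three LOD budgets for a 20,000-tri mesh
ratios = [decimate_ratio(20000, t) for t in (5000, 2500, 1250)]
```

Feeding each ratio to its own Decimate modifier gives one mesh per LOD with a predictable polycount.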
Is my Blender on drugs, or am I? I am learning how to bake normal maps, but some strange things are happening. I was baking and only changing my meshes, like the seams and smoothing splits, but after some time things got weird: the normal bakes change colors and get distorted, even though I was only changing the meshes.
This second one is from yesterday on another model, almost the same thing, but here the normals get distorted too, plus one bake of the same mesh in Handplane. I think it may be some setting, but I have no idea what it could be.
I can upload the other blend file if someone wants it, sorry for all those pics :X
For the normal map that appears brighter: that's because Blender works in the sRGB color space (invented for old CRT monitors, it makes every image brighter and more contrasted; more info here: https://docs.unity3d.com/Manual/LinearRendering-LinearOrGammaWorkflow.html and here: https://youtu.be/m9AT7H4GGrA). Normally Blender bakes everything in sRGB, and when you set the bake type to normal map it automatically switches the image to Linear, unless you change the color space manually (probably what you did by mistake). In the "N" panel of the image editor, set the color space back to Linear. If that doesn't solve the problem, save the normal map, import it into the compositor and add a Color > Gamma node set to 2.2.
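What that Gamma node does per channel can be sketched with a plain 2.2 power curve (a simplification; the real sRGB transfer function is piecewise, but the 2.2 approximation is what the node applies here).

```python
def apply_gamma(v, gamma=2.2):
    """What the compositor's Gamma node does per channel: v ** gamma.
    A value wrongly displayed as sRGB (too bright) is pushed back down
    with gamma 2.2; the inverse, gamma 1/2.2, brightens a linear value."""
    return v ** gamma
```

Applying 2.2 and then 1/2.2 round-trips a value, which is why a stray color-space conversion can be undone this way.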
For the normal map distortion, check whether you have a problem with the cage; you can use the extrusion setting to generate an invisible cage automatically during the bake.
Thanks for the answer. I didn't intentionally change anything in the 'N' panel, but apparently the problem was there: the default is (source: generated) and (color: sRGB), and mine was set to (source: single image), (color: linear). So you saved me on that one.
As for the cage: using the extrusion settings I am not able to get output as good as when using a ''physical'' cage, but the problem is in the cage, since the result changes just by changing one edge. However, the bake ends up distorted regardless, and if the same mesh/cage is imported into Handplane, the distortion disappears.
I will import those files into a fresh blend file just to make sure, and will play more with the automatic cage to see if I can use it properly.
@Linko Here is what I have so far. This method 'composite_nodes' is called after I bake the tangent normal map. You can find the full code here
def composite_nodes(image, scene_name):
    previous_scene = bpy.context.screen.scene

    # Get the scene that holds the compositing nodes, appending it
    # from the bundled blend file if it isn't loaded yet
    scene = None
    if scene_name in bpy.data.scenes:
        scene = bpy.data.scenes[scene_name]
    else:
        path = os.path.join(os.path.dirname(__file__), "resources/compositing.blend") + "\\Scene\\"
        bpy.ops.wm.append(filename=scene_name, directory=path, link=False, autoselect=False)
        scene = bpy.data.scenes[scene_name]

    if scene:
        # Switch to the compositing scene
        bpy.context.screen.scene = scene
        path = bpy.app.tempdir

        # Feed the baked normal map into the Image node
        if "Image" in scene.node_tree.nodes:
            scene.node_tree.nodes["Image"].image = image
        if "File Output" in scene.node_tree.nodes:
            scene.node_tree.nodes["File Output"].base_path = path
            scene.node_tree.nodes["File Output"].file_slots[0].path = image.name + "#"

        # Render (runs the compositor and writes the file output)
        bpy.ops.render.render(use_viewport=False)

        # Load the composited result back into the image datablock
        image.source = 'FILE'
        image.filepath = path + image.name + "0.png"
        image.reload()

        # Restore the previous scene and delete the compositing scene
        bpy.context.screen.scene = previous_scene
        bpy.data.scenes[scene_name].user_clear()
        bpy.data.scenes.remove(bpy.data.scenes[scene_name])
This is the external blend file that I load in temporarily to composite the curvature map. The green rectangles are the areas I accessed via code; see the code above for reference.
There are a few minor bugs remaining but in a nutshell it already works. Next I have to refine the node setup to add multiple blur variants and combine them.
Hi, AssetGen already generates the curvature map; it recreates this old node setup (the new one I posted keeps more details). The script recreates the node setup, imports the normal map as Linear, then saves the result through an output node with a gamma node at 2.2. In the script it's called Generate Curvature, and it can't be enabled if the normal map is unchecked. We can set the pixel width and blur, but those settings will be removed, and we may use the improved curvature node setup I showed above with iterative pixel width reduction.
I have posted how to calculate the pixel width: it is better to bake at 2048 minimum and to divide the X and Y resolutions by 4096 to get the offset value (0.5 at 2048px). For a curvature map that must be below this resolution, bake at 2048 anyway (because we can't work with sub-pixel offsets) and then scale it down.
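That rule is small enough to write down directly (hypothetical helper name, values straight from the post):

```python
def pixel_width_offset(resolution):
    """Translate-node offset for a square bake: resolution / 4096.
    Gives 0.5 at 2048 and 1.0 at 4096. Below 2048 the offset would go
    sub-pixel, so bake at 2048 and scale the result down instead."""
    return resolution / 4096.0
```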
Also, I recommend generating the curvature, and especially the smooth curvature, as a 32-bit float image so the color gradient applied afterwards looks better, without harsh color transitions.
And here is the node setup to generate a curvature smooth map:
@Linko much appreciated, will have a look tonight and update on my end. As for float precision bakes: I was planning to look into dithering down to 8 bits per channel to avoid banding on normals; this might apply to these grayscale images as well.
Yes, I am very interested in dithering. A Blender user showed me how to reduce the color data on Blender Stack Exchange (to create a retro 256-color sprite); maybe it is a starting point for reducing the number of colors of an image with more than 8 bits of color data: https://blender.stackexchange.com/a/90949/23134
For now, one solution is to use the free software IrfanView to get dithering while reducing the bit depth.
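The idea behind dithered bit-depth reduction can be sketched in plain Python. This is a toy stochastic-rounding version, not IrfanView's algorithm or the linked Stack Exchange setup: noise is added before rounding, so flat gradients break up into dither instead of banding.

```python
import random

def quantize_16_to_8(values, seed=0):
    """Reduce 16-bit grayscale values (0..65535) to 8-bit (0..255),
    adding noise before rounding so gradients dither instead of banding."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = []
    for v in values:
        scaled = v / 257.0              # 65535 / 257 == 255 exactly
        q = int(scaled + rng.random())  # stochastic rounding
        out.append(min(255, max(0, q)))
    return out
```

On a smooth 16-bit ramp, neighbouring output pixels now flip between adjacent 8-bit values in proportion to the lost fraction, which is what hides the bands.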
About my post with anti-aliasing on the curvature map: avoid it, it creates artifacts. The best way is to bake at a high resolution and then scale the texture down. The minimum resolution for the curvature map is 2048 (= 1 pixel width). Below that, at 1024, you can't get a 0.5 pixel width; it will be rounded to 1px, so the lines will get thicker.
I have asked whether my curvature and smooth curvature node setups could be officially integrated into Blender and perhaps used with Cycles baking.
You need to overlay the two curvature maps, with the smooth curvature on top to shadow the other map. Use a value node and, in the first input, a color ramp in HSV mode in between, to pick analogous colors from cold to warm.
Then mix your base albedo with your thickness map. To color it, use a color ramp with R: 0.7, G/B: 0.1 for the left stop to get the color of the blood, and RGB: 0.5 for the right stop (middle grey in sRGB) so it overlays correctly without touching the brightness. For the factor, invert the thickness map, add a color ramp and set the right stop's RGB to 0.2 (higher values add more blood). You can optionally connect the alpha of the thickness node to the alpha of the viewer node.
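Numerically, that two-stop color ramp is just a linear blend between the blood color and middle grey (illustrative helper; the stop values are the ones given above).

```python
def blood_ramp(t):
    """Two-stop ramp: left (t=0) is R 0.7 / G,B 0.1; right (t=1) is RGB 0.5."""
    t = min(max(t, 0.0), 1.0)  # clamp like a color ramp does
    left, right = (0.7, 0.1, 0.1), (0.5, 0.5, 0.5)
    return tuple(a + (b - a) * t for a, b in zip(left, right))
```

Evaluating at t = 0 returns the blood color and at t = 1 the middle grey, so overlaying the result leaves overall brightness untouched, as described.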
Multiply it with an ambient occlusion map if it's for a diffuse texture. In PBR the AO is used separately, so it only shows in the shadows; the AO is removed in areas directly hit by the light, as in the real world.
I am doing some hard-surface stuff and I'm using a cage that is a copy of my low poly (I think this is called an ''averaged projection mesh''), and I wonder if there is any way to avoid doing it manually. I tried the extrusion option, but it gave the results of the ray-traced method, I guess, with gaps where I have hard edges.
When using the ray distance with hard edges, it gives me ''double edges/gaps'' where I have hard edges, so I was using a custom cage. But it's annoying to have to duplicate your low poly, then inflate it or use a displacement, and probably end up moving the cage to another layer too.
Because of that, I was wondering if there was any way to get the same results as when using a custom cage, but faster, like just having to click a button, as with changing the ray distance. In fact, I was thinking about making a suggestion that could be implemented in TexTools, but first I wanted to make sure this is not already possible with Blender's default ''cages''.
@Udjani You could try using the Solidify modifier. @metalliandy showed me that, and it works quite well. Just be sure to have 'Only rim' enabled, and generally make sure the mesh is closed.
What works for me is the Solidify modifier; as @derphouse said, you should get proper results with it.
I've got some strange baking behaviour in Blender using TexTools, though I'm not sure it's TexTools-specific, since I seem to recall encountering this problem before: seemingly arbitrarily-shaped islands in the normal map where the color is a shade different, like a stain. Anyone have any ideas?
My friend is using Softimage 2015 (and the XSI Mod Tool for Source engine stuff), and he praises its weight painting tools, basically because they have "multi-color feedback" for each bone simultaneously. You can also paint by replacing influences from neighbouring bones with a selected bone, and smooth between influences.
Weight painting in Softimage looks like this, and it definitely looks REALLY artist-friendly, making it visually much easier to see how your weights are distributed across the mesh:
With Blender's current weight painting visualization it's just confusing as hell. Think about how much easier a time you'd have skinning your character to a rig and posing it for your portfolio.
With this kind of multi-color feedback, it would actually make weight painting FUN and a lot easier for character artists.
Hey, I've been watching this thread for a while, and I just finished my addon, so I figured I'd post it here to see what you guys think. It's an addon that lets you drag to create primitives, like you can in other 3D software such as 3ds Max.
Here's the link to download it: https://github.com/3ckSpeX/PrimitiveCreationModals I'd also like to know if anyone has any idea why, in edit mode, the Shift+A add menu calls the execute function instead of the invoke function. It makes the addon impossible to use from the Shift+A menu in edit mode, and it's the only bug I found that I wasn't able to fix.
3ckSpeX - sweet. Can you make it snap to verts? Right now I see it works with snapping to the grid (if the Ctrl key is pressed and held), but I wish your addon could use Blender's vertex snapping option too.
@3ckSpex this addon is amazing. It should be included in Blender by default. Would you mind if I add it to Bforartists? For the hotkey problem, maybe it's an override conflict with other hotkeys? Maybe you can get help at Blender Stack Exchange.
@JoseConseco - Vertex snapping is something that I wanted to add when I started, but when I thought about it I couldn't think of a good way to implement it, or how it would work. I think I can do vertex snapping when you're first placing the object, but I'm not sure how to do it while dragging without doing something too performance-intensive. If someone could give me an idea of how vertex snapping should work I might be able to do it, or maybe just snapping on the initial placement of the new object is enough.
@Tiles - Yeah, no problem, I made it free because I like free stuff too, so feel free to use it. I think it should be in default Blender too; the only problem is that I didn't stick to the proper convention (I'm not sure I stuck to any convention, lol) when making it, and I'm not sure how much support I'm going to provide for the addon later on. I'd have to fix those things if I wanted to submit it for default Blender, and I'm not sure I want to do that yet.
Thanks! Don't worry about support. I am not even sure if I can use it unmodified in Bforartists; we have thrown out quite a few duplicate menus, so I might need to change the scripts here and there.
There's finally a developer task for editing multiple objects at the same time: https://developer.blender.org/T54242 There's talk of going for a 'clumsy but functional' path instead, and I'd encourage anyone here to get involved in the discussion (in an adult, constructive way) to prevent that from happening. This is something that matters a lot to me, and I imagine to game developers in general, and would be a real quality-of-life improvement in my daily work!
I'm trying to use Blender's lasso select (Ctrl+LMB) and was wondering if it's possible to select all encompassed faces instead of just the ones the user can see. Right now I have to switch to wireframe mode every time I want to lasso select, otherwise backfaces and such are completely ignored. It would also be nice if there were a way to lasso select faces when only part of them is within the lasso area (instead of needing to encompass the entire face).
Maybe someone here knows if it's possible to do any of that?
I believe Heavypoly's custom selection script does this, but I seem to be having trouble with it in 2.79a. Hopefully he's fixed it by now, but I haven't checked yet. Here's the thread that links to his Dropbox with his scripts. I highly recommend all of them!
The "clumsy but functional" approach was for UV unwrapping; you don't need multi-object editing for that. A new option could be added to unwrap several objects into the same UV map, so this could happen before multi-object editing itself.
There's "Limit selection to visible" next to the selection mode buttons. Also, in face select mode you don't need to encompass the whole face; you just need to select the face dot.
In edit mode, click this button to select front and back faces
I was hoping there might be a way to do it that didn't change how the mesh is displayed or affect basic selection; just something I could set and forget. Limit Selection to Visible switches display modes like wireframe mode, so it's not really ideal. Oh well, thanks anyway guys.
Any idea how to get that shape without that bumpy shading? I tried with the Screw modifier, and it ended up with better shading, but it wouldn't bend/deform as well.
To use it, set your normal map's color space to Linear (so it is not converted to sRGB), then separate the red channel of your normal map with an angle of 0 and a height of 1, and the green channel with an angle of 90 and a height of 1, then overlay both images. You can then add a second overlay of the image on itself to add contrast. At the end you can optionally use a gamma node at 2.2 to convert:
I will ask for its official integration on rightclickselect.
Replies
As for displaying materials, from what I know Cycles has UDIM support via the image node:
https://developer.blender.org/D2575
https://blender.stackexchange.com/questions/52902/how-to-convert-a-normal-map-into-a-curvature-map-cavity-map/72602#72602
Blender is not perfect, but it's great; when you have more experience, you will understand.
I have been looking into this as well recently, after bookmarking the Stack Exchange link and this Blend Swap resource, to implement it as another bake mode in TexTools.
I am close to finishing the scripted loop where I can bake the curvature map in 1 click. My process:
Current state preview screenshot
I will point to the Git source once I commit later today - feel free to use parts of the code for your addon.
That way I can adjust the thickness with just one UI variable.
I have posted the explanation on how this works here: https://blender.stackexchange.com/questions/89278/how-to-get-a-smooth-curvature-map-from-a-normal-map/100637#100637
Here I explain how to combine them and generate a fake hand-painted effect; the smooth curvature must be in the first input of the overlay node to shadow the details of the curvature map: https://blender.stackexchange.com/questions/90152/how-to-generate-a-fake-hand-painted-texture
You can download the node setups here: https://drive.google.com/file/d/1bixkxs6cSes-J9GVwDeIP7pW2OnGNhYd/view?usp=sharing
Here is my test between 16 and 8 bits of color depth per pixel for the curvature (pointiness): http://polycount.com/discussion/comment/2567171/#Comment_2567171 It's impossible to work with an 8-bit grayscale when using color ramps/gradients.
You must use:
- A curvature map: https://blender.stackexchange.com/a/72602/23134
- A curvature smooth map: https://blender.stackexchange.com/a/100637/23134
- A thickness map: https://blender.stackexchange.com/a/100725/23134
You need to overlay the two curvature maps, with the smooth curvature on top to shadow the other map. Use a Value node, and in the first input a color ramp in HSV mode to pick analogous colors from cold to warm.
Then mix your base albedo with your thickness map. To color it, use a color ramp with R: 0.7, GB: 0.1 for the left cursor to get the color of the blood, and RGB: 0.5 for the right cursor (middle grey in sRGB) to overlay it correctly without touching the brightness. For the factor, invert the thickness map, add a color ramp, and set the right cursor's RGB to 0.2 (higher values add more blood). Then you can optionally connect the alpha of the thickness node to the alpha of the viewer node.
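A quick check of that middle-grey claim, using the standard overlay blend formula (NumPy sketch, my own function name): overlaying with a constant 0.5 leaves the base image untouched, which is why the right cursor at RGB 0.5 doesn't change the brightness.

```python
import numpy as np

def overlay(base, blend):
    """Standard overlay blend: darkens the darks and brightens the brights of `base`."""
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

base = np.linspace(0.0, 1.0, 11)
print(np.allclose(overlay(base, 0.5), base))  # True -> middle grey is a no-op
```

Both branches reduce to `base` when `blend` is 0.5, so only the colored (non-grey) part of the ramp actually tints the albedo.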
Multiply it with an ambient occlusion map if it's for a diffuse texture. In PBR, the AO is used separately so it only shows in the shadows; the AO is removed in areas directly hit by light, like in the real world.
I am not sure if I fully understand but have you tried just the Ray Distance parameter in the bake settings?
There is also the Displace modifier you could use on your custom cage.
When using the ray distance with hard edges, it gives me "double edges"/gaps where I have hard edges, so I was using a custom cage. But it is annoying to have to duplicate your low-poly, then inflate it or use a displacement, and probably end up moving the cage to another layer too.
Because of that, I was wondering if there is any way to get the same results as when using a custom cage, but faster, like just clicking a button, as with changing the ray distance. In fact, I was thinking about making a suggestion that could be implemented in TexTools, but first I wanted to make sure this is not already possible with Blender's default "cages".
Quick question: in grease pencil mode, what is the shortcut to rotate the view and how can I change it please?
https://www.youtube.com/watch?v=mUZ9y-eWos8&lc=z22oc3a4jx2qhrx0zacdp43b53arrvra415bil0cautw03c010c
https://youtu.be/-BWe5BtqZYE
Weight painting in Softimage looks like this, and it definitely looks REALLY artist-friendly, making it much easier to see where your weights are distributed across the mesh:
With Blender's current weight-painting visualization it's just confusing as hell. Think about how much easier a time you'd have skinning your character to a rig and posing it for your portfolio.
With this kind of multi-color feedback, it would actually make weight painting FUN and a lot easier for character artists.
It's an addon that lets you drag to create primitives, like you can in other 3D software such as 3ds Max.
Here's the link to download it - https://github.com/3ckSpeX/PrimitiveCreationModals
I'd also like to know if anyone has any idea why, in edit mode, the Shift+A add menu calls the execute function instead of the invoke function. It makes it impossible to use the Shift+A menu in edit mode, and it was the only bug I saw that I wasn't able to fix.
For the hotkey problem, maybe it's an overriding issue with other hotkeys? Maybe you can get help at Blender Stack Exchange.
@Tiles - Ya, no problem, I made it free because I like free stuff too, so feel free to use it. I think it should be in default Blender too; the only problem is that I didn't stick to the proper convention when making it (I'm not sure I stuck to any convention, lol), and I'm not sure how much support I'm going to provide for the addon later on. I'd have to fix those things if I wanted to send it in for default Blender, and I'm not sure I want to do that yet.
Don't worry about support. I am not even sure if I can use it unmodified in Bforartists. We have thrown out quite a few duplicate menus, so I might need to change the scripts here and there.
There's talk of going for a "clumsy but functional" path instead, and I'd encourage anyone here to get involved in the discussion (in an adult, constructive way) to prevent that from happening.
This is something that matters a lot to me, and I imagine game developers in general, and would be a real quality-of-life improvement in my daily work!
Maybe someone here knows if it's possible to do any of that?
The "clumsy but functional" approach was for UV unwrapping; you don't need multi-object editing for that. A new option could be added to unwrap several objects into the same UV map, so this could happen before multi-object editing itself.
There's "Limit selection to visible" next to the selection mode buttons. Also, in face select mode you don't need to select the whole face; you just need to select the face dot.
In edit mode, click this button to select front and back faces
To use it, set your normal map's color space to Linear (to convert it to sRGB), then separate the red channel of the normal map with an angle of 0 and a height of 1, and the green channel with an angle of 90 and a height of 1, then overlay both images. You can then add a second Overlay node to overlay the image on itself to increase the contrast. At the end you can optionally use a Gamma node at 2.2 to convert it:
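The gist of the channel-shift trick, sketched in NumPy (my own function names; the Blender setup does this with Translate and Overlay nodes instead): shifting a channel against its inverted copy is effectively a finite difference, so edges facing one direction come out bright and the opposite edges dark, centered on mid grey.

```python
import numpy as np

def overlay(base, blend):
    """Standard overlay blend used to combine the X and Y derivatives."""
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def curvature_from_normal(normal_rgb, offset_px=1):
    """Approximate curvature from a normal map (H x W x 3 float array in [0, 1]):
    X derivative of the red channel combined with the Y derivative of the
    green channel, remapped so flat areas sit at mid grey (0.5)."""
    r = normal_rgb[..., 0]
    g = normal_rgb[..., 1]
    # In the node setup this is a translate-and-overlay with an inverted copy;
    # numerically it is a centered finite difference per channel.
    ddx = (np.roll(r, -offset_px, axis=1) - np.roll(r, offset_px, axis=1)) * 0.5
    ddy = (np.roll(g, -offset_px, axis=0) - np.roll(g, offset_px, axis=0)) * 0.5
    curv_x = np.clip(0.5 + ddx, 0.0, 1.0)
    curv_y = np.clip(0.5 + ddy, 0.0, 1.0)
    return overlay(curv_x, curv_y)
```

On a flat normal map (RGB 0.5/0.5/1.0 everywhere) the result is uniform mid grey, which matches what the node setup produces on flat surfaces.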
I will ask for its official integration on Right-Click Select.
More information about the node setup: https://blender.stackexchange.com/a/102727/23134