Wow, thanks for the explanation. I didn't use alpha blend or alpha test. It was something like the lerp you mentioned. In Unreal, you can use "layered materials", which are also a kind of lerp, but they allow you to blend between materials. I was planning to update my materials to use this, but I left this project for some time. I think I'll get back to it when I have time, and check the options for GBuffer solutions, and also the layered material thing.
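To illustrate the concept (just a sketch of the idea, not my actual node graph - in UE4 this is all built from nodes):

```
// Simplified sketch of what "blend between materials" means. Illustration
// only - in UE4 this is node-based, not hand-written HLSL like here.
struct MaterialSample
{
    float3 albedo;
    float3 normal;
    float  roughness;
    float  metallic;
};

// Layered materials are conceptually a per-channel lerp driven by a mask.
MaterialSample BlendMaterials (MaterialSample a, MaterialSample b, float mask)
{
    MaterialSample o;
    o.albedo    = lerp(a.albedo,    b.albedo,    mask); // mask 0 -> a, 1 -> b
    o.normal    = normalize(lerp(a.normal, b.normal, mask)); // crude normal blend
    o.roughness = lerp(a.roughness, b.roughness, mask);
    o.metallic  = lerp(a.metallic,  b.metallic,  mask);
    return o;
}
```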
Hmm, wait, so you have a slot for the underlying material, and you have to change a reference in it if you want your decal to fade into e.g. a plastic instead of a metal background?
It could do that too if I used layered materials. The last version of mine used many texture samplers; I had slots for tiling textures and masks. I should change to layered materials because that would be more user friendly.
No-no, I don't mean to ask how materials inside the decal are blended - I'm asking how your decal itself is blended with the underlying surface. Is it the left or the right example in this image?
So how does it work in this case, where there are multiple materials under the faces with the decal material?
More than that, even in a case with one material, how in the world would the decal draw the background with proper UVs (texture position, scale and rotation) without having any access to vertices of the underlying mesh? The tiled texture will be mismatched, and blending will only be unnoticeable if your background is high-frequency noise or a flat fill, no?
It works only with tiling textures at the moment. I used a triplanar projection transformed into local space, so it can work on moving objects too. At the bottom of the first page of this thread, I'm showing the full materials and describing them. Here is the post: http://www.polycount.com/forum/showpost.php?p=2333521&postcount=25
Yes, I'm talking about tiling textures - they won't match at all with this approach, unless you are using world space triplanar mapping or local space triplanar mapping in meshes with identical pivots and positions - otherwise decal surfaces won't get UVs identical to underlying surface for their tiled texture.
But I'm using local space triplanar projection, so it works. It's very limited, but it worked for what I wanted to achieve. I'll try to implement something better when I have some free time.
Makes sense. Ideally I'd prefer the shader not to require any information about the underlying material at all. It should just blend on top - otherwise, the workflow isn't really that fun to use, because you are forced to limit the materials of underlying surfaces and to configure tons of duplicate decal materials differing only in the underlying surface reference property.
Here is an example of my shaders blending over 8 different materials:
Example of full override on top requires nothing fancy (it's almost a traditional transparent shader, just with some z-offset), and the example of normal + smoothness override depicted below requires the per-GBuffer blending I talked about.
In that case:
All your decals can be batched per detail type
All your decals require just one draw call
You can overlay any decal over any surface
You get perfect blending with underlying surface textures without the need to know their UVs
You don't need expensive triplanar in the decal shader (I still use it for underlying surfaces when I'm feeling lazy, tho)
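For reference, the full override variant really needs nothing special - roughly this bare-bones sketch (unlit here purely to show the blend and offset states; a real decal shader would feed the standard lighting path):

```
// Bare-bones sketch of a "full override" decal: traditional alpha
// blending plus a depth offset so the decal faces hovering on the
// surface don't z-fight with it. Not the exact production shader.
Shader "Sketch/DecalFullOverride"
{
    Properties
    {
        _MainTex ("Decal RGBA", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha // traditional alpha blending
        Offset -1, -1                   // the z-offset that prevents z-fighting
        ZWrite Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv  : TEXCOORD0;
            };

            v2f vert (appdata_img v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // the alpha channel decides how much of the decal overrides
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
```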
So I took another look at the newer versions of UE4, and the decal material domain can still be used only with decal actors, so unfortunately there is still no way to operate on the GBuffer. A workaround would be to write a shader by hand, I think. But I won't do that because I can't.
So @bac9-flcl + @Obscura: How much unwrapping are you guys actually doing on your base/parent objects? Is it all tiled textures?
I can use triplanar (then no mapping at all) or automatic perpendicular mapping (if I want to apply tiled textures in a performant way). I also auto-generate UV2 mapping for lightmaps - and sometimes reuse those coords for a lightmapping-independent UV2 atlas with AO, curvature and cavity that is then used by the shader to drive stuff like dirt and edge wear.
Overall, UV work ranges from "not much" to nonexistent if you don't count decals. And decal mapping is either a very easy strip texturing-like thing (map a sequence of quads to a seam from the decal atlas, etc.), or dead-simple planar mapping (map a circle to a quad).
OK so, if I understand correctly, you have a separate auto-unwrapped UV channel to lay down dirt and edge wear? Do you just load that into Substance or whatever to generate localized texture information, or is there a way to do this in-engine in Unity (i.e. mask a material based on object-space AO)?
I do not use SD or DDO for this - after all, authoring a huge set of traditional per-object PBR maps isn't very nice performance-wise. But one packed map is alright. So, as I've said, I only use UV2 for one texture on my side - for a packed map containing AO, cavity and curvature. I bake those out in xNormal.
Using those maps, I can drive some interesting effects, especially if my tiled texture contains an unused alpha channel I can stuff some cool noise into. I use curvature to optionally boost albedo (for brightened worn plastic edges, for example), to drive transitions to edge material, to drive smoothness shifts. I use cavity to make albedo pop more and to drive main dirt intensity. I use AO to drive dirt spread along with cavity and to output occlusion info for the renderer.
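In shader terms, the whole interpretation step boils down to something like this (a condensed sketch - the parameter names and exact formulas here are made up for illustration, not the production shader):

```
// Sketch: one packed UV2 bake drives edge wear and dirt in a surface shader.
// Channel layout and numbers are illustrative.
Shader "Sketch/BakeDrivenWear"
{
    Properties
    {
        _MainTex    ("Tiled detail (RGBA)", 2D) = "white" {}
        _PackedMap  ("UV2 packed: AO (R), cavity (G), curvature (B)", 2D) = "gray" {}
        _EdgeWear   ("Edge wear intensity", Range(0,1)) = 0.5
        _DirtAmount ("Dirt intensity", Range(0,1)) = 0.5
        _DirtColor  ("Dirt color", Color) = (0.2, 0.17, 0.14, 1)
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Standard
        #pragma target 3.0

        sampler2D _MainTex, _PackedMap;
        half _EdgeWear, _DirtAmount;
        fixed4 _DirtColor;

        struct Input
        {
            float2 uv_MainTex;    // tiled UV1
            float2 uv2_PackedMap; // unique UV2 with the bake
        };

        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            fixed4 detail = tex2D(_MainTex, IN.uv_MainTex);
            fixed4 baked  = tex2D(_PackedMap, IN.uv2_PackedMap);
            half ao = baked.r, cavity = baked.g, curvature = baked.b;

            // curvature above 0.5 = convex edges: brighten albedo, raise smoothness
            half edge = saturate((curvature - 0.5) * 2.0) * _EdgeWear;

            // dirt gathers where cavity and AO are dark
            half dirt = saturate((1.0 - cavity) * (1.0 - ao)) * _DirtAmount;

            fixed3 albedo = detail.rgb * (1.0 + edge * 0.5);
            albedo = lerp(albedo, _DirtColor.rgb, dirt);

            o.Albedo     = albedo;
            o.Smoothness = saturate(0.4 + edge * 0.4 - dirt * 0.3);
            o.Occlusion  = ao;
        }
        ENDCG
    }
}
```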
Here are some quick and dirty examples of bake-driven edges and dirt:
And a heavy gif with the wall from the previous page:
http://i.imgur.com/2QPCO5I.gifv
Oh damn, so that is all done internally within the Unity shader?
Yeah, exactly! And it's pretty simple to set up if you have a clear idea of how exactly you want your shader to interpret that packed map. Makes it really easy to iterate on stuff too - instead of editing textures, you just drag sliders until you're satisfied with results.
Basically, my approach is like having a tiny, tiny SD-like algorithm right inside your surface shader. You input bakes, you get fancy surface effects, except you have no intermediate steps like moving externally generated maps around.
All this is definitely less flexible than using SD, but you don't get a huge texel density per object with UV2 anyway, so using it for intricate SD texturing is useless; and it's pretty cheap performance wise; and it's incredibly fast. You can bake asset after asset, authoring dozens of objects like those daily.
So, to sum it up:
One per-scene RGBA packed texture with decal albedo+smoothness+cavity+alpha
One per-scene RGB texture with decal normals
One per-object RGBA packed texture with UV2 occlusion, smoothness, cavity and colormask
One per-surface type (metal, wood, plastic, whatever else) RGBA packed texture with tiled surface detail applied in overlay blending (contains albedo, smoothness and occlusion variation and a noise mask for UV2 based effects in alpha)
All decals get batched (same shader, same material, same texture everywhere), potentially using as low as just one draw call, and then you use another draw call per object or batched group of objects of same type. I think it's a pretty neat lightweight setup, both in terms of drawcalls and memory use.
And you can go bananas with geometry detail (chamfers + FWN).
Just to be clear (I'm pretty much a newb w/ the technical stuff): when you say UV2, you're referring to just a second UV channel? So like you'd have one channel that's just a box UVW map for tiling textures, and then a second UV channel with AO/curvature/etc., right? Hah, sorry if you already basically said that, just trying to put it into my own words for interpretation.
Now, if we could please please please please get realtime curvature baking, I would be able to die a happy man.
Old Knald can only bake AO from lowpoly
DDO 2.0 isn't out yet
In-Unity baking methods exist (Beast, WN bakers), but are ungodly slow
You can get proper lowpoly curvature out of xNormal, but you have to do two cavity bakes (one on an inverted mesh) to get it, and that's a pretty slow process (the combine step itself is simple enough - see the sketch below)
Substance Designer is incapable of baking curvature on UV seams

Life is suffering
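About that xNormal route - the combine step for the two cavity bakes is roughly this, per pixel (treat it as a sketch and double-check the channel polarity against your own bakes):

```
// Combining two cavity bakes into one curvature map: cavity on the normal
// mesh darkens concave areas, cavity on the inverted mesh darkens what is
// convex on the original. Sketch only - verify polarity on your own bakes.
float CurvatureFromCavities (float cavityNormal, float cavityInverted)
{
    float concavity = 1.0 - cavityNormal;
    float convexity = 1.0 - cavityInverted;
    // 0.5 = flat, below = concave, above = convex
    return saturate(0.5 + 0.5 * (convexity - concavity));
}
```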
That's exactly right. I usually get UV2 mapping either by using Flatten mapping in 3ds Max, or by stealing the auto-generated UV2 mapping by Enlighten from internally stored meshes in Unity (basically, you import a mesh into Unity, tell Enlighten to generate UV2, then use the exposed Mesh object data to write an .obj file, then do your xNormal bakes with it and import it into Unity, removing the original source file). Enlighten auto-generated UVs are far, far nicer for curved surfaces, but sometimes I get too lazy to spend time on them and just deal with Flatten mapping.
Ah, another approach to applying tiled textures - you can just use UV2 with altered UV scale to get desired detail mapping. Then you don't even need two UV channels.
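In the shader that's nothing more than this (sketch, names made up):

```
// Sketch: reusing UV2 for tiled detail by scaling the coordinates, so one
// unique unwrap serves both the bake and the detail texture.
sampler2D _DetailTex;
float _DetailScale; // e.g. 8 detail tiles across the 0-1 UV2 space

fixed4 SampleDetailFromUV2 (float2 uv2)
{
    return tex2D(_DetailTex, uv2 * _DetailScale);
}
```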
Very cool, I'm still trying to wrap my head around the basics of this whole decal technique (will probably make a test asset today) and am gonna be doing it in UE4, but that's great to know. Don't know if there's an analogous way to recreate this process in Unreal, but if I manage to get the basics down without my head exploding, I'll wanna give it a try.
Otherwise, I suppose there's always the method of isolating certain strips of geometry (edges/cavities) and mapping those onto decals of dirt/edge wear, but obviously that's not always a practical way, since the geometry won't always be laid out in a way that is conducive to unwrapping into linear strips.
If you are going to use UE4, then I'd recommend approaching it with layered materials. It would save a lot of texture samplers, at the cost of more draw calls. But it would be easier to use later. I'm also planning to change mine to use them.
About the UVs: UE4 can generate second UVs for you, and then you can export the mesh back to bake things based on the new second UV map.
Hmm, how's that different from making the second UVs in Max?
Maybe it also gives a better result than Max, like Enlighten does? Actually, I don't know how much better it is - you'll have to try it out if you want. I just mentioned it as an equivalent of bac9-flcl's workflow, just with UE4.
Gotcha. OK, well I'm going to start testing this out, and will probably be asking a lot of newb questions about the technique. Figured I'd put them into this thread in case there are other ppl like me lurking the thread / wanting to try out this technique but still confused.
All of this has been a fantastic read! It's amazing how some impractical methods from years ago are now super useful because the tech is making the resource demands negligible. And that's cool!
K so Obscura
The tri-planar projection set-up - is that just for the sake of not having 'seams' on the tiled map (I thought that was the point of tri-planar projection, from what I've looked up about it)? Or is it somehow required for projecting the normal map data from the decals as well? (Just trying to figure out how much of the Unreal material tree I can safely eliminate, cause uhh, I'm lazy and it's my first time so I wanna simplify it.)
In the case of Obscura's implementation, it's pretty simple - as far as I understand, he needs triplanar mapping because his decals are actually opaque and rendered with the surface texture as a background (see my illustration on the previous page). To maintain the illusion that decals are seamlessly integrated into the surface, UVs at every pixel have to match perfectly, making position, scale and rotation of the tiles identical everywhere.
That can be done in two ways - either through use of local space triplanar mapping (but ONLY if decal object has exactly the same position and pivot as the surface object, otherwise triplanar mapping won't match), or through use of world space triplanar mapping (that one will look bad on moving objects, though).
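For the curious, local space triplanar boils down to roughly this (a generic sketch of the math, not Obscura's actual node-based UE4 material):

```
// Generic local-space triplanar sketch. localPos/localNormal are the
// object-space position and normal; because everything stays in object
// space, the mapping follows the object around - but the decal mesh must
// share the surface mesh's pivot for the two projections to line up.
sampler2D _TileTex;
float _TileScale;

fixed4 TriplanarLocal (float3 localPos, float3 localNormal)
{
    // blend weights from the normal, sharpened so projections don't smear
    float3 w = pow(abs(localNormal), 4.0);
    w /= (w.x + w.y + w.z);

    // one planar projection per axis
    fixed4 x = tex2D(_TileTex, localPos.yz * _TileScale);
    fixed4 y = tex2D(_TileTex, localPos.xz * _TileScale);
    fixed4 z = tex2D(_TileTex, localPos.xy * _TileScale);

    return x * w.x + y * w.y + z * w.z;
}
```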
In my implementation, triplanar mapping is not used in decals and is entirely a matter of me being lazy with surface UV mapping.
I thought the decal objects were to be attached to the parent object and thus would have the same co-ords/pivot? Or do they get brought in separately?
Man, I am really wishing I was better at the technical stuff beyond just understanding normals/diffuse/metalness/roughness lol
So this mostly applies when you are trying to project normals only, since any other channels you'd want to modify (i.e. if you have a red button or whatever that is projecting its own albedo + roughness information) could be masked out using alphas, right?
Nope-nope-nope. Obscura is not using any alpha blending, and matching position/pivot/triplanar is important not because of normals, but because it allows him to match background texture mapping on surface and decal faces.
Count Vader - I attached the decals to the main meshes, so the pivots and positions always match by default. I use triplanar projection in local space so it works on moving objects too, but it wouldn't work on a skeletal mesh, for example - when only a part of the mesh moves, it breaks; it works only with whole-object movement... The triplanar is used to match the decal's background to the underlying surface (decals are opaque). I used mask textures to blend multiple materials/textures together, like the red button you mention. There are buttons in my first example on the first page - that one used 2 masks, one for the plastic button and one for the emissive line. When you want to modify only the normals with the decals, then it doesn't need a mask.
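In shader terms the masking is just lerps stacked on top of the triplanar background - something like this (simplified; mine is all nodes in UE4):

```
// Simplified version of the mask blending. The triplanar background is
// the base, and each mask lerps another material's color on top of it.
float3 ComposeDecal (float3 background, float3 plasticButton, float3 emissiveLine,
                     float buttonMask, float emissiveMask)
{
    float3 result = lerp(background, plasticButton, buttonMask);
    result = lerp(result, emissiveLine, emissiveMask);
    return result;
}
```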
There can be a workaround when you want a mechanical mesh with individually moving parts... If it's only a few pieces, then you can import them individually and attach them to sockets in the editor. Then individually moving parts would work - it would act like a skeletal mesh, and the projection would still work.
But I will look for a solution to replace the projection with something better in the near future. Unfortunately we can't access the GBuffer with nodes, so I have to think of something else, or someone would have to write a shader.
One thing I'm wondering about is why you would need access to the GBuffer in the case you are demonstrating with that MFD/console. There, your decal needs to overlay its own albedo, smoothness, metalness and emissive (plastic, glowing bits, etc.). If that's the case, then you do not need any exotic mixing - it's a textbook case for traditional alpha blending. The only difference the decal shader would have vs. a normal transparent shader would be a Z-offset to prevent Z-fighting.
Second example with the door, though, will indeed require selective blending (so that the normal GBuffer receives the output but others don't), yeah.
Originally I didn't want to go with the transparent version. But... I realized that if there is any detail in the normals of the underlying surface (some noise or anything), then the masked alpha blend will block it from being visible. So it works only on completely flat surfaces (flat normals under the decal).
Ok, so for now I am just trying to do it with one decal, since it's my first time doing this and I want to keep the variables to a minimum. Once I figure out this much, I'll try to add more types of decals and maybe try for edge scratches and stuff.
I know Obscura posted the shader for his thing, but since looking at any complicated shader tree tends to confuse me, I was hoping someone could just hold my hand while I try this myself. And maybe it will be useful for people who wanna try this out but are also confused.
So with all that said, I have imported into Unreal one mesh with a plane attached to it, each one with its own material element, which were specified in 3ds Max with a multi-sub-object mat.
Some pretty advanced shit there, but so far so good I guess.
So the two materials, 'main' and 'decal' are the most basic possible. 'Main' is just tiling albedo/roughness/normals, with a constant of 0 for metallic.
'Decal' has bitmaps for the 'Occlusion', which is baked out from Max, and a mask for the metallic bit in the center, which is packed into the AO texture. The rest is constants, since as I said, I wanna keep it as simple as possible for this first step.
From here, it's probably pretty obvious what I want - The metallic circle bit to propagate the metalness/roughness/base color values, and the inset around it to only show normals (but in a manner where they are 'overlayed' with the base normals)
I am guessing the base material can be left alone, since all the 'magic' happens in the decal mat.
SO!
Assuming I am not concerned with having moving parts or anything like that, what is the most absolute simplest way I can get the decal to do what I need?
Is there a way to use mesh decals in UE4, btw?
It seems that the deferred decal material works only on the decal actor.
Any other way to, say, blend just the normal of the material on top with the one below, rather than blend normals within one material?
@frmdbl - Sorry, no idea. To do that you need control of what happens after you output that struct with normals, albedo etc. on the right of your graphs. That is, you need control over how exactly that struct is dropped into the deferred RTs, so that you can selectively remove contribution to some channels. I don't know a way to do that in UE4. It certainly can't be done in the graph itself, because the graph ends earlier.
I'm hitting a bit of a problem with my Unity implementation. Specifically, Unity folks decided to use four channels in their second deferred render target: they store specular color in RGB and smoothness in A.
As far as I understand, no GPUs support blending of float4s with a custom float as a factor - only one, zero, or .w (fourth channel) can be used. That poses a huge issue for smoothness blending, because it becomes the blending factor, making it impossible to simultaneously correctly output smoothness and blend smoothness+specular with any smoothness values but 1.
This can be solved either by changing the deferred rendering implementation to move smoothness to a separate render target (thankfully, Unity makes this possible, providing full source of whole deferred rendering implementation and an easy way to plug in your modified one), or - not yet sure - with a two-pass surface shader that will alpha blend output to RGB render targets and then alpha test output to RGBA render targets. Not sure how to do the latter correctly yet. The former is definitely possible, but I'd like to leave it as last resort - if it's possible to solve everything with one surface shader, I'd very much prefer to keep it that way.
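To make that plan concrete, here's roughly the structure I have in mind (an untested sketch - this is exactly the part I'm not sure about yet):

```
// Untested sketch of the two-pass idea, pass states only:

// Pass 1 - alpha blend. Hardware blending uses source alpha as the factor,
// which is fine for targets where A is unused, but corrupts the target
// where A stores smoothness instead of opacity.
//     Blend SrcAlpha OneMinusSrcAlpha

// Pass 2 - no blending, alpha test instead. clip() discards soft pixels;
// surviving pixels overwrite all four channels outright, so smoothness
// reaches the RGBA target's alpha untouched by the blending unit.
//     Blend Off
float4 fragPass2 (float4 decalOutput, float decalAlpha) : SV_Target
{
    clip(decalAlpha - 0.5); // threshold is illustrative
    return decalOutput;     // opaque write, smoothness safe in .a
}
```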
Edit: Okay, looks like changing deferred rendering is out because you can only use four RTs at once and the current setup is already at this limit. Two-pass shader route it is, then.
I think Count Vader is trying to do a similar thing to what I described.
In your materials you seem to blend the additional normals within the materials, which poses a problem of masking.
I think in CE3 it is possible to use mesh decals and set up the material to blend just the channels that you want.
Edit: How about adding another UV layer for the decals? It does multiply the vertex count, right?
@frmdbl - You see it right; unfortunately the decal material domain works only with decal actors in UE...
Anyways, yeah, it's blending, but not of the normals. I'm calling in the textures of the underlying surface (albedo, roughness - but it could blend normals too) and using them in the decal material. Their placement will match the underlying surface just because of the triplanar projection, so it works only with tiling textures/materials.
We could use another UV channel, yes. But it would be almost impossible to place the decal's UVs at the proper place on the main mesh's UVs. At least I don't know of a way to do this.
The default decal options can do what CE can do, but unfortunately they work only with decal actors for some reason.
@Obscura So I have it set up correctly thus far though, right? And I only need to copy the decal material - the base mat can be left alone, or do I need to do something special to that too?
Also: What are these three roughness things you have:
Why is the thumbnail different in each slot? Or is that just the R, G, B channels I'm seeing?
That's the roughness map of the underlying surface. And it's there 3 times because the triplanar has to sample it 3 times, because of the projection from 3 directions. The part you marked is the slot that tells it what textures the base material has.
No, it's different because they have different coordinates. One gets projected from the X direction, another from Y, and the third from Z. So you get projection from 3 directions -> triplanar projection :P
So if I have an underlying normal map that I want to show through (like in my thing, there's a bit of bump in the normal channel), or albedo or whatever, I basically have to copy that setup (i.e. everything that comes before the three roughness slots) and plug those into the respective channels in the decal material, right?
But... if you use "texture sample parameters", then you only have to make that huge mat once, and for other meshes you can make material instances. It will have texture slots and you won't have to remake the mat. It will give you a different interface where you can change only parameters, and there won't be any nodes.