Issue
I'm trying to composite a bunch of grunge details together, but my final result looked almost... empty. It's as if most of the detail I added just didn't show up. At first I assumed I had hooked up the nodes wrong, but after double-checking, the graph was wired correctly and the problem remained. Normally in Substance Designer I can take one translucent image and a second translucent image, then pump them both through a Blend node with the blend mode left on Copy, and I can do this as many times as I want. But LERP does not work like that.
Solution (Update)
Unreal will not carry an object's alpha value over multiple nodes; you are effectively re-blending over and over again. The solution is to pack the maps. The maps I had issues with were exported from Substance Designer as three combined maps instead of a bunch of independent ones. Those three maps were Albedo, Roughness, and all the previous masks composited into a single mask in Substance Designer using a chain of Screen blends. That way I only have to LERP once.
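As a toy sketch of what that offline packing step does (plain Python floats standing in for grayscale mask pixels; the values and mask sizes here are illustrative, not the actual export): all the grunge masks get Screen-blended into one combined mask before export, so the material only pays for a single LERP.

```python
def screen(a, b):
    """Screen blend: 1 - (1-a)(1-b). Only ever brightens, order-independent."""
    return 1.0 - (1.0 - a) * (1.0 - b)

# Three toy 1x3 grayscale grunge masks (0..1 values).
masks = [
    [0.20, 0.00, 0.50],
    [0.50, 0.30, 0.00],
    [0.10, 0.60, 0.25],
]

# Composite them into a single mask, as Substance Designer would
# with a chain of Screen blends.
combined = [0.0, 0.0, 0.0]
for mask in masks:
    combined = [screen(c, m) for c, m in zip(combined, mask)]

# In Unreal, this one pre-composited mask then drives a single LERP
# over the base color, instead of one LERP per grunge layer.
```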
Steps to resolve
I did what I normally do: take out more and more stuff until I isolated the system where the core problem resides. Normally the hardest part is finding the problem and solving it is the easy part. This time it's backwards. The problem is that LERP does not work like SD's Blend node in Copy mode. With LERP, the first alpha value stacks with the second alpha, so the quality of the mask degrades with each generation, sort of like pointing a camera at a screen and re-filming a movie as it plays. I understand the problem, but I can't fathom how one would even begin to fix it. I know it's possible, because I've played several games made with Unreal; people have stacked dirt detail successfully before.
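To make that generation loss concrete, here's a small Python sketch (single float values standing in for pixels, with made-up numbers): chained LERPs scale everything that came before by (1 - alpha) on every pass, while a Screen-style composite leaves earlier detail intact.

```python
def lerp(a, b, t):
    """UE4-style LinearInterpolate: returns a at t == 0, b at t == 1."""
    return a + (b - a) * t

def screen(a, b):
    """Substance-style Screen blend; never darkens the result."""
    return 1.0 - (1.0 - a) * (1.0 - b)

detail = 0.8                    # a bright grunge value from the first layer
flat_layers = [0.0, 0.0, 0.0]   # three later layers with nothing at this pixel

# Chained LERPs at 50% alpha: each pass halves what came before.
lerped = detail
for layer in flat_layers:
    lerped = lerp(lerped, layer, 0.5)
# lerped is now about 0.1 -- the original detail is almost gone

# Screen-compositing the same stack preserves the detail.
screened = detail
for layer in flat_layers:
    screened = screen(screened, layer)
# screened is still about 0.8
```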
Any ideas?
Specs
Unreal 4.12.5
OS: Windows 10 Pro (Build: 14393.222)
Video Card: NVIDIA GeForce GTX 980 (Driver: 372.90)
Replies
Something like this:
You have to think of the order in which these things blend. I emulated what you had in your graph, which is dirt over the base, then light, then dark.
You would modulate your alphas to control how strong or weak the blend is for a particular LERP.
As far as I know, the way you were using LERPs, doing the same thing in Substance Designer would produce the same result, i.e. a Blend node with 50% opacity. Maybe show a screenshot of what you were trying to replicate in Substance.
Also, a thing to keep in mind: in Substance you are able to use different blend modes within the Blend node, whereas in Unreal you kind of have to split it up like this (unless you make your own material function to do it):
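For reference, here's roughly what those split-up node chains compute, sketched in Python (the function names are illustrative; in UE4 these would be Multiply / OneMinus / LinearInterpolate nodes, or a material function wrapping them):

```python
def lerp(a, b, t):
    return a + (b - a) * t

# Each Substance blend mode becomes: compute the blended value with
# math nodes, then LERP between the base and that result by the alpha.
def blend_multiply(base, layer, alpha):
    return lerp(base, base * layer, alpha)

def blend_screen(base, layer, alpha):
    return lerp(base, 1.0 - (1.0 - base) * (1.0 - layer), alpha)

# Multiplying the alpha by a scalar parameter before the LERP's Alpha
# pin is the "modulate your alphas" part: it scales the blend strength.
strength = 0.5
result = blend_screen(0.4, 0.8, 0.9 * strength)
```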
I'm playing with so many masks; that's the reason I've reached the material limit. I don't know. I see guys like Polygoo and Figmentpigment create these super-high-detail, crisp textures and think, "This had to have been rebuilt through the material." But they would have to somehow stack all that various wear and scuff detail in series without killing the alpha detail. I'll get back to you when I pack all my colored masks more efficiently.
You can manipulate the TexCoord node's properties directly for tiling, but that node can't be exposed via parameters, so the best approach is to multiply the TexCoord by a constant (or scalar parameter) and plug the result into the UV input instead.
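The math behind that Multiply-into-UV trick is just a scale, sketched here in Python (the sampler's wrap addressing is emulated with a modulo; in the material the tiling value would be the constant or scalar parameter feeding the Multiply):

```python
def tile_uv(u, v, tiles):
    """Equivalent of Multiply(TexCoord, tiles) feeding a sampler's UV pin.
    A GPU sampler in wrap mode keeps only the fractional part of the UV."""
    return (u * tiles) % 1.0, (v * tiles) % 1.0

# The same TexCoord tiled 2x: UVs past 1.0 wrap back around,
# so (0.60, 0.25) lands at roughly (0.20, 0.50).
u2, v2 = tile_uv(0.60, 0.25, 2)
```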
Under the hood this counts as a single texture sample against the sampler limit, even though you have multiple Texture Sample nodes.
Each Texture Sample is an instance of the same texture: the first is tiled twice, the second three times, and the third four times. Different channels are also being used in the outputs for each sampler (Red, Green, and Blue respectively), but this can be useful if you want, say, the blue channel tiled twice instead of four times, by simply taking it from the top Texture Sample.
Also, looking at your screenshot, you don't need to Mask the values if you use the individual channel R/G/B/A outputs from the Texture Sample nodes; you are already outputting a single channel at that point. If you have performed other operations further down the line, you might have to.