I'm trying to figure out how to achieve the terrain texturing effect in the attached images.
As you can see, the dirt paths on top of the grass texture appear to be some kind of decal, with both the dirt and the grass being their own tiling materials, but the border between them is some kind of masked (and, in the bottom example, normal-mapped) texture that tiles or is somehow dynamically painted along the edges.
How would you achieve this in UE4? Creating an interpolation blend mask on two terrain textures doesn't seem like the right approach. It seems like a decal, but it needs to be more dynamic than that: when two decals are close to each other they need to be aware of each other and 'meld', kind of like two water droplets touching. Is it possible to apply this effect to a UE4 terrain? I'm totally fine with using a UV-unwrapped mesh for my ground planes if that's the way to go.
The only ways I can think of to do this for sure are to either have a different decal for each path shape I want to use, or to fully texture and UV map the paths to a mesh ground plane, using the UV co-ords for the alpha map between the two materials, plus two tiling materials for the grass and dirt, respectively.
I'd really appreciate any leads on how to achieve this effect!
Replies
Two options: use multiple meshes for the borders, or render into an RTV and use the result as the terrain material.
In the Zelda example I think that's actual grass mesh overlapping the ground. You could render something just like it into an RTV and reduce the cost at runtime, even if it is a lot of stuff to draw.
A better solution would be to create a patterned grass border mesh, render it once, and place it about liberally in the form of a solid decal.
The 2D solution would be to use terrain tiling. A classic approach, if you will, but it will never look as good as Zelda if it's proceduralized.
I created a basic 2x2 tileable grid with a black and white mask - white is grass and black is dirt. Then I built my "terrain" out of grid squares and UV'd them to create my ground pattern:
Then in UE4 I set up a simple Lerp node to blend between two different materials, allowing me to use a different material for each part of the mask. The goal here is to make the materials tile in world space, but be masked in UV space:
Hot tip: in order to remove seams between tiles, I had to set the texture sample nodes' 'Sampler Source' property to 'Shared: Clamp' to prevent leaking UVs:
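For anyone following along, here's the logic of that graph spelled out in UE4 C++ terms. This is just to show the math; the real work happens per-pixel in the material graph, and names like MaskR are mine for illustration:

```cpp
#include "Math/Color.h"
#include "Math/UnrealMathUtility.h"

// The per-pixel logic the graph implements: the mask is sampled with the
// mesh's own UVs, the two tiling textures with world-aligned UVs, then a
// lerp picks between them per channel. 0 in the mask = dirt, 1 = grass.
FLinearColor BlendGround(const FLinearColor& DirtSample,
                         const FLinearColor& GrassSample,
                         float MaskR)
{
    return FMath::Lerp(DirtSample, GrassSample, MaskR);
}
```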
Since I'm only using a single channel in my mask texture, I think I'll use a second channel to tint the edges of the grass and put a shadow underneath it, as you can see in the Pokémon picture in my first image. Then I'm thinking I'll use the remaining two channels for a whole different mask, this one with dirt and rocks as the top layer.
Here is the material so far:
One thing I'm trying to do here is keep the shader complexity as low as possible, as I'm an optimization geek. The shader complexity doesn't look too bad so far, but as you can see it's a little darker than even the mannequin, which you can just barely see in the screenshot here:
Does anyone have any tips for optimizing this thing? Is the WorldAlignedTexture node inherently a bit complex?
What I would do is set it up so you can paint it. You have the basics going, but it doesn't seem like something you can just apply without changing the texture yet.
I think your best bet is to use a render target process to draw in tiles at the mouse location, render out a texture, and save it for the final usage (reducing the cost by using an actual texture).
I would probably make a specific Blutility node to click for each paint mode: grass, mud, or border, where the border can also be rotated so you can cover the whole area intuitively.
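If it helps, the draw-into-a-render-target part looks roughly like this in C++. The Kismet rendering calls are real; PaintMaskAt, the brush texture, and getting a UV from the mouse are placeholders you'd wire up yourself:

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Engine/Canvas.h"

// Stamp a brush texture into the mask render target at a UV position
// (e.g. derived from a mouse trace). White paints grass; painting dirt
// back in could be a black brush or a different blend mode.
void PaintMaskAt(UObject* WorldContext, UTextureRenderTarget2D* MaskRT,
                 UTexture2D* BrushTexture, FVector2D UV, float BrushSize)
{
    UCanvas* Canvas = nullptr;
    FVector2D Size;
    FDrawToRenderTargetContext Context;
    UKismetRenderingLibrary::BeginDrawCanvasToRenderTarget(
        WorldContext, MaskRT, Canvas, Size, Context);

    const FVector2D Pos = UV * Size - FVector2D(BrushSize * 0.5f, BrushSize * 0.5f);
    Canvas->K2_DrawTexture(BrushTexture, Pos, FVector2D(BrushSize, BrushSize),
                           FVector2D(0.f, 0.f), FVector2D(1.f, 1.f),
                           FLinearColor::White, BLEND_Translucent);

    UKismetRenderingLibrary::EndDrawCanvasToRenderTarget(WorldContext, Context);
}
```

Once you're happy with the result you can bake the render target out to a static texture asset (there's an editor-only Kismet call for that, RenderTargetCreateStaticTexture2DEditorOnly if I remember right), so the runtime cost is just a regular texture sample.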
Also, one thing you may be forgetting is material collision/definition.
The landscape has this built into the layer paint; for a custom mesh you need to define a custom area. You can also just use the black/white mask you're currently using, since 4.24 or so. It allows up to 8 masks I believe, using standard color permutations: ff0000/ffff00/00ff00, etc.
I would just re-purpose the render target results to be used as both, saving on memory, if you only have 2 materials. If you have more than that it gets a bit more complex, but your use case is still easier than most, since you have no alpha-blend situation where something can be 50% grass and 50% mud...
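To show what re-using the mask for gameplay queries could look like, here's a rough CPU-side readback sketch. It assumes the mask is uncompressed BGRA8 and CPU-accessible (or that you keep a plain-array copy around); IsGrassAt is my name, not an engine function, and this is untested:

```cpp
#include "Engine/Texture2D.h"

// Look up whether a UV position on the mask is grass or dirt.
bool IsGrassAt(UTexture2D* Mask, FVector2D UV)
{
    FTexture2DMipMap& Mip = Mask->PlatformData->Mips[0];
    const int32 W = Mip.SizeX;
    const int32 H = Mip.SizeY;
    const int32 X = FMath::Clamp(int32(UV.X * W), 0, W - 1);
    const int32 Y = FMath::Clamp(int32(UV.Y * H), 0, H - 1);

    const FColor* Pixels = static_cast<const FColor*>(Mip.BulkData.LockReadOnly());
    const bool bGrass = Pixels[Y * W + X].R > 127; // white = grass
    Mip.BulkData.Unlock();
    return bGrass;
}
```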
Another thing you should keep in mind right now is the overall size of the textures. Even compressed, you're bound to hit some nasty limits with the method you're currently employing, which is why you normally don't use it.
2D tiled landscapes are made by using the same image bits over and over again (think Super Mario, where the cloud is the bush upside down and a different color) and specifying where each tile should live.
The process is normally called tile mapping.
In essence, the engine (Unreal or anything else) has a much easier time rendering small bits of an image, one tile at a time, than it does rendering an 8K texture.
Because of the simple nature of what you need, a process similar to tile mapping will end up being better suited to generating a large world without having any of the limitations you'll encounter from using single textures to define the areas.
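As a sketch of what that can look like in UE4: one quad static mesh, instanced once per grid cell, with a per-instance custom data float telling the material which atlas tile to show. Per-instance custom data needs 4.25+, and the TileIndices convention here is mine:

```cpp
#include "Components/InstancedStaticMeshComponent.h"

// Build a grid of quad instances, one per cell, with a per-instance
// custom data float the material reads to pick a tile from the atlas.
void BuildTileGrid(UInstancedStaticMeshComponent* ISM,
                   const TArray<uint8>& TileIndices,
                   int32 GridW, int32 GridH, float CellSize)
{
    ISM->NumCustomDataFloats = 1;
    for (int32 Y = 0; Y < GridH; ++Y)
    {
        for (int32 X = 0; X < GridW; ++X)
        {
            const FTransform Xform(FVector(X * CellSize, Y * CellSize, 0.f));
            const int32 Instance = ISM->AddInstance(Xform);
            ISM->SetCustomDataValue(Instance, 0,
                float(TileIndices[Y * GridW + X]), /*bMarkRenderStateDirty*/ true);
        }
    }
}
```

The nice side effect is that the whole grid is a single component, so it renders in very few draw calls no matter how many tiles are on screen.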
In terms of my texture sizes, the tiling grass and dirt textures in the last post (and the end goal) are 512 each, and probably 512 for the mask as well. I'm basing my game on a similar concept to the Zelda: Link's Awakening Switch remake, where it's all based on a uniform grid and collisions will generally be the size of that grid. If I explode out my ground mesh you can see what I mean:
In terms of texture density I'm thinking one 512 texture should match one 3x3 area on that grid.
I'm just reading through this: https://docs.unrealengine.com/en-US/Engine/Rendering/RenderTargets/BlueprintRenderTargets/HowTo/CreatingTextures/index.html and it looks like I'll have to do some fiddling before I really understand what I'm in for. While I do like being able to bake out a texture based on dynamic editing in UE4, it seems like this will end up creating really large texture files, as I need it to map over the entire space of my ground plane. Is that a correct assumption? I would love to be able to paint this in the editor though!
Is there a cheaper alternative to WorldAlignedTexture? Perhaps something that simply ignores UV co-ords?
Let it be UV dependent and just multiply it by a scale factor.
World aligned would only help if you plan on having several meshes next to each other and want to prevent discontinuity between them. Very situational.
A cheaper alternative is to use Absolute World Position, mask out R and G, and divide by the size of the texture. Basically, open the WorldAlignedTexture function and break it down to its bare components; you only really need a quarter of the math in it, if that, to get the terrain mapped on X and Y.
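Written out, the stripped-down version is basically just this (TextureWorldSize being how many world units one repeat of the texture should cover; the material does the same per-pixel):

```cpp
#include "Math/Vector.h"
#include "Math/Vector2D.h"

// WorldAlignedTexture projects on all three axes and blends them by the
// surface normal; for flat ground you only need the top-down projection,
// which reduces to a divide (texture wrapping handles the tiling).
FVector2D TopDownUV(const FVector& AbsWorldPos, float TextureWorldSize)
{
    return FVector2D(AbsWorldPos.X, AbsWorldPos.Y) / TextureWorldSize;
}
```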
Moving on: I'm afraid that the file size, or the amount of maps you'd need to generate with a render target system, is just too much for an extensive world such as that of Zelda, particularly when you can literally just cut things up into tiles and re-use the same exact image for a mere fraction of the overall cost.
At the same time, you need to pay particular attention to the overall amount of draw calls.
How many map tiles would you say are on screen all the time?
Will the camera be allowed to pan and show more tiles?
If yes, that's when the HLOD system can help. Basically it turns a lot of actors into just one, if you can get the setup right.
Aside from that, you also need to consider the memory of the device you'll be publishing for. A Switch can handle a lot less than a PC with 64 GB of RAM...
But, now that you mention it, maybe I could use a second UV channel for the tiling textures, and use the first UV channel for the mask. Is that possible in the material editor?
Also, my plan is to break the world up into bite-sized pieces and pan between them, similar to how the old Zelda Game Boy games worked:
So I should be able to tightly limit the number of these meshes in memory at any one time. I haven't put any time into making this work yet, but I'm assuming that streaming in all adjacent 'world nodes' (as I'm calling them) as you move from node to node, and unloading any nodes that are more than one node away, should strike a balance between memory use and a lack of loading screens. I do plan on generally having more than just the one ground plane per node though, as there will be tiered areas and ledges and such.
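In code terms my rough plan maps onto level streaming, with each world node as a streaming sublevel; something like this, where the Node_X_Y naming is just a placeholder I made up:

```cpp
#include "Kismet/GameplayStatics.h"

// Load the 3x3 block of node sublevels around the current one. Unloading
// anything further away would use UnloadStreamLevel the same way.
void StreamAdjacentNodes(UObject* WorldContext, FIntPoint Current)
{
    int32 Uuid = 0;
    for (int32 DY = -1; DY <= 1; ++DY)
    {
        for (int32 DX = -1; DX <= 1; ++DX)
        {
            const FName Level = *FString::Printf(TEXT("Node_%d_%d"),
                Current.X + DX, Current.Y + DY);
            FLatentActionInfo Latent;
            Latent.UUID = Uuid++; // each latent call needs a unique id
            UGameplayStatics::LoadStreamLevel(WorldContext, Level,
                /*bMakeVisibleAfterLoad*/ true,
                /*bShouldBlockOnLoad*/ false, Latent);
        }
    }
}
```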
I understand that each material assigned to a mesh creates a new draw call; does that extend to multiple textures within a material? And does drawing the mesh itself, before applying materials, create another draw call? So in this instance, where I have the mesh, the mask, and two textures, is that four draw calls?
I'd also like to have material controls on each of the tiling textures so that I can control, say, specular and roughness separately. If I were to do that, would the references to those materials create more draw calls?
And tangentially, considering all my nodes will be largely self-contained, would grouping each world node under one HLOD reduce draw calls? I saw something somewhere about UE4's 'join meshes' tool that lets you cut draw calls by packaging a bunch of separate meshes together.
Thanks for all your help by the way! You've already helped me shortcut several days of brainstorming!
The major drawback here is that I have to place all this stuff in my 3d editor, which is limiting my workflow. Here's that same map in blender:
As you can see, I've got my tile palette down the bottom and I am picking and placing quads, merging verts as I go.
I'd really love to make some tooling so that I can place tiles in the UE editor, and I have a process in mind, but I need to know more about how the 'join meshes' feature works in UE4.
If I place a bunch of meshes in the editor and then use 'Merge Actors', is that the same as if I were to merge those meshes in blender, export an fbx and import that as a single mesh? Are there still overheads in the form of leftover object data or memory references?
Secondly, is there a way in Blueprint (or C++) to automate that merge when I 'cook' the game? I'm yet to learn about cooking and compiling and whatever those build steps are, but I imagine there would be some interface I can hook into to run custom code at build time?
I'm thinking that I'll make a UI that allows me to 'paint' tiles into a grid where each quad is a separate Mesh asset, and then when I compile the game for production (or whenever the whim takes me) I can have all the meshes merge themselves and cleverly optimize the whole shebang. Anyone got any leads on something like that?
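Not an answer to the cook-time hook specifically, but for the merge itself there's an editor-scripting route I'd look at first: the Editor Scripting Utilities plugin exposes merge functions to Blueprint and C++. A rough, untested sketch; the function and options struct exist in that plugin as far as I know (around 4.22+ with the plugin enabled), while the package path and selection logic are mine:

```cpp
#if WITH_EDITOR
#include "EditorLevelLibrary.h"
#include "Engine/StaticMeshActor.h"

// Merge all the tile actors of one world node into a single static mesh
// actor, destroying the sources. Run from an editor utility (Blutility).
AStaticMeshActor* MergeNodeMeshes(const TArray<AStaticMeshActor*>& Tiles)
{
    FEditorScriptingMergeStaticMeshActorsOptions Options;
    Options.bDestroySourceActors = true;
    Options.BasePackageName = TEXT("/Game/Merged/WorldNode"); // placeholder path

    AStaticMeshActor* Merged = nullptr;
    UEditorLevelLibrary::MergeStaticMeshActors(Tiles, Options, Merged);
    return Merged;
}
#endif
```

For running it automatically at build time, the usual route would be a custom commandlet or a UAT step rather than Blueprint; I don't know of a Blueprint-accessible cook hook.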
Is there a way for me to apply a nicer border along those edges though... would I be able to use a modulation mask? My current setup gives me some options for applying colouration and shading changes along the edges: I'm using a second map to give a drop-shadow effect between the grass and the dirt (which you can see prominently in the Pokémon example in the OP). Is that something I could do using this mask approach, or is it limited to hard edges only?