I have a few general tech questions on a bunch of different topics. If anyone could shed some light and clear some things up for me, that would be great.
1. I see a lot of Substance Designer textures that are basically things you could just model with geometry: buildings, walls, more structural stuff. For example:
https://www.artstation.com/artwork/qq8nL
Just wondering what the advantages of doing this are? My only guess is being able to quickly make changes on the fly within the Substance graph itself. But if this is being heavily tessellated in engine, wouldn't it make more sense to just model all of these out as modular pieces?
2. When modeling, I sometimes have to try different methods to get what I want, and sometimes I start over when my first approach isn't working the way I want. When prepping something in Maya for import into ZBrush, I might use a mix of sub-d and plain beveled edges on the same mesh. In ZBrush I sometimes use DynaMesh, and when I want clean angled edges (since DynaMesh only resamples on a grid), I use ZRemesher instead. Sometimes I feel my mesh is subdivided higher than the detail really needs, but I keep it that high anyway, as long as it's workable and the high poly is just going to be decimated in the end. Is this the way other people work as well, or is my workflow bad? Or is it more "whatever gets the job done"? Just curious how this works at bigger studios.
3. Second UV channel. I've been doing game art for a little bit now, and still haven't come across a reason I need a second UV channel (except for lightmaps). Can anyone point out a good example for this?
thanks
Replies
1- Normally, for this kind of surface detail, we don't use geometry at all. If we did, we'd need a lot of geometry, time, and resources to reproduce those details, which isn't efficient at all! That's why we use normal mapping, and for surfaces where we really want slopes to read at oblique angles, we use shader tricks like parallax occlusion.
And if we need to change things later, we can do it easily by modifying our graphs and textures.
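To make the parallax occlusion idea concrete, here's a rough CPU-side sketch of the core loop (Python standing in for actual shader code): march a ray from the eye down through the height field, and once it dips below the surface, use that hit point's UV for the texture lookups. The function names, step count, and height scale are just illustrative, not any engine's API.

```python
import math

def parallax_occlusion_uv(height_map, uv, view_dir, height_scale=0.05, steps=32):
    """height_map(u, v) -> height in 0..1; view_dir is a tangent-space
    (x, y, z) vector pointing from the surface toward the eye, z > 0."""
    # Total UV shift if the ray marched all the way down to height_scale.
    total_u = -view_dir[0] / view_dir[2] * height_scale
    total_v = -view_dir[1] / view_dir[2] * height_scale

    layer = 1.0 / steps
    u, v = uv
    ray_depth = 0.0
    for _ in range(steps):
        surface_depth = 1.0 - height_map(u, v)  # stored height -> depth
        if ray_depth >= surface_depth:
            return (u, v)  # ray is below the surface: this UV "wins"
        u += total_u * layer
        v += total_v * layer
        ray_depth += layer
    return (u, v)

# Tiny demo with a procedural bump pattern standing in for a height map.
bumps = lambda u, v: 0.5 + 0.5 * math.sin(u * 40.0) * math.sin(v * 40.0)
print(parallax_occlusion_uv(bumps, (0.5, 0.5), (0.4, 0.2, 0.9)))
```

In a real engine this runs per pixel in the material shader, which is why it can fake quite a lot of depth for the cost of a few extra texture samples instead of actual tessellated geometry.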
2- There is no recipe for doing things right. You will find many workflows online that are meant to speed things up and avoid production errors, but sometimes you just go for "whatever gets the job done". Researching online and speaking to more advanced artists (not me) will help you develop new methods and better workflows.
3- I have a good example for this. I worked with an in-house game engine where all dirt masks and decals were placed using a second UV map made by our artists. You can find lots of good examples online of stuff made using another UV channel.
The most common use of UV2 is lightmapping, like you noticed. Another is baked ambient occlusion for PBR. Another is a custom painted mask that blends together multiple tiling textures.
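As a concrete illustration of that last one, here's a minimal sketch (again Python standing in for shader code) of blending two tiling textures with a hand-painted mask sampled through a second UV channel. The texture callables and names are made up for the example.

```python
# UV1 addresses the tiling materials (and repeats), while UV2 addresses
# a unique, non-overlapping mask used as the blend weight.

def lerp(a, b, t):
    return a + (b - a) * t

def shade_pixel(brick, plaster, mask, uv1, uv2):
    """brick/plaster tile across the surface via uv1; mask is unique per
    texel via uv2. Each texture is a callable taking (u, v)."""
    base = brick(uv1[0] % 1.0, uv1[1] % 1.0)    # tiling sample
    top = plaster(uv1[0] % 1.0, uv1[1] % 1.0)   # tiling sample
    t = mask(uv2[0], uv2[1])                    # unique 0..1 blend weight
    return tuple(lerp(b, p, t) for b, p in zip(base, top))
```

The key point is that the mask lives in unique UV space while the materials it blends keep tiling in UV1, so one small mask texture can break up repetition across a large surface.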
I can't see any actual practical use for having large details like columns in a Substance Designer texture myself. It wouldn't be useful in an actual game, as you would never have enough tessellation to support that kind of detail.
I think it's more of a "why not?" thing, or for the challenge, pushing your skills further. Even if there's no direct use, that creative spark is what drives the industry. Maybe someone else will see substances like this and say "Hey, I bet I can write a script that makes a low poly building from this in a smart way". Procedural building modeling is already a thing, but the more tools we have available to us, the better off we are. I guess this flows back into your "whatever gets the job done" question.