Star Citizen assets workflow questions

Rawalanche
Hi,

I am fairly new to realtime/game asset creation, although I have lots of experience as a 3D generalist in offline rendering over the past 9 years. I am trying to find a streamlined workflow that lets me produce high quality assets for UE4 as quickly as possible, meaning more procedural and fewer manual approaches.

Recently I've read an awesome Reddit post about a process used to shade and texture Star Citizen assets to maintain high detail regardless of the UV map size, this one: https://www.reddit.com/r/starcitizen/comments/3ogi3o/im_an_tech_artist_in_the_industry_and_id_love_to/

What was fascinating to me is that the described workflow very closely resembles a texturing and shading workflow I've been using in offline rendered VFX since around 2012. Basically, I avoided UVs in the vast majority of cases and just box/triplanar mapped the whole thing, then used procedural, raytraced AO maps to create convex and concave cavity masks to drive the effects, with some more map mixing on top of that to distribute some of the wear and tear even outside of the cavity maps.
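For anyone who hasn't used the triplanar part of this: the core of box/triplanar mapping is just weighting three planar projections by the surface normal. A minimal Python sketch of that weighting step (function name and the sharpness exponent are my own illustration, not any specific renderer's API):

```python
def triplanar_weights(normal, blend_sharpness=4.0):
    """Blend weights for the X/Y/Z planar projections of a triplanar map.

    The absolute normal components are raised to a power so that the
    transition zones between the three projections tighten as the
    sharpness grows, then normalized so the weights sum to 1. Each
    weight says how much of that axis's planar projection to sample.
    """
    w = [abs(n) ** blend_sharpness for n in normal]
    total = sum(w)
    return [x / total for x in w]

# A normal pointing straight up samples only the top (Y) projection;
# a diagonal normal blends two or three projections.
wx, wy, wz = triplanar_weights((0.0, 1.0, 0.0))
```

The shader version does the same thing per pixel with the world-space normal, which is why no UV layout is needed for the base tiling textures.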

Thanks to this approach, I had a growing library of smart materials which I could drop onto any mesh, regardless of topology and UV layout and quickly get a nice, visually complex material. On top of that, thanks to tiling and re-purposing of a very few input textures, this approach was extremely memory efficient and the details held very well up close.

Here are some examples:
https://www.artstation.com/artwork/x2EnX
https://www.artstation.com/artwork/4b3N2

Now, when I read the Reddit post, I was really blown away that a workflow similar to what I am used to can actually be used in realtime engines too, but I can't understand how, because I am quite confused about a few things:

In offline rendering, since I have literally hours to calculate a single frame, I can afford to use live, on-the-fly raytraced AO maps to get the convex and concave cavities necessary for the wear and tear effects I want on my materials. Realtime game engines, however, can't afford anywhere near such expensive effects at sufficient quality and temporal stability in just a few milliseconds. What this means is that, after all, my model still needs an unwrapped UV layout, and I need to supply at the very least baked-down concave and convex cavity masks. There are two routes I can go:

1, I can set up pretty much the final edge scratching and corner dirt masks, bake them out, and bring them into Unreal Engine to mix maps or materials with tiling textures:

PROS:
A, I will have the final wear, tear & dirt masks ready for use in the game engine, so I won't have to build complex material editor mixing and color correction networks to get the result.

CONS:
A, The detail of the wear and tear on edges and in corners will still be limited by the UV map resolution, resulting in low-res, blurry transitions of the wear and tear masks compared to the fine texture detail achieved by tiling.

B, It's not an interactive workflow. In something like Substance Painter, I can work in the context of the shader: I can adjust my wear and tear distribution masks while looking at the final material. It looks very different to see how the mask affects the actual material versus just looking at the black and white mask, having to commit to it, bake it down, try it in the engine, and, if the look doesn't work, do the round trip again.

2, I can pre-bake just the clean cavity masks, and then use the game engine's material editor to lerp them together with some tiling textures and compress the result with some color mapping to get sharp, scratched masks:

PROS:
A, I can get high-res detail up close, since I am mixing the smooth gradients of the cavity map with fine-detail tiled textures and then clamping the result to get sharp scratching and/or dirt distribution, which means the mask detail will match the detail of the materials it is masking.

B, It is interactive, so I actually have control of the radius, contrast and general look of the mask in real time, right in the editor.

CONS:
A, It's shader-instruction heavy. There is quite a bit of math needed to color correct and mix the cavity map gradients with tiling textures, and to expose enough input parameters to control the look. This can tax performance significantly.

B, It's harder to control. In an offline renderer, I can actually use a texture map to drive the AO ray distance. This is especially useful when tracing AO rays inside of objects to get convex edge masks. On a very thin surface, you are very limited in the length of an AO ray before it hits the back side; however, if you modulate the ray distance with some grunge texture, the rays can reach much further while still not turning thin parts of the mesh completely black. If I have to pre-bake just a clean convex AO map, I can't afford to do this, so what I end up with is a very limited distance for the convex mask gradient, and I am not able to create very wide edge wear.
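To make route 2 concrete, the lerp-and-clamp step amounts to breaking up the smooth baked cavity gradient with a tiled grunge sample and then remapping it hard. A minimal Python sketch of the idea (the function names, the `0.5 + grunge` modulation, and the smoothstep remap are my own illustration of the technique, not anyone's actual shader):

```python
def smoothstep(edge0, edge1, x):
    """Standard smoothstep: clamp to [0, 1], then apply 3t^2 - 2t^3."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def wear_mask(cavity, grunge, threshold=0.5, contrast=0.15):
    """Sharp wear mask from a smooth baked cavity gradient.

    cavity -- baked convex cavity value, 0 (flat) .. 1 (sharp edge)
    grunge -- tiled grunge/noise texture sample, 0 .. 1

    The tiled grunge modulates the smooth gradient, adding detail finer
    than the UV resolution of the baked map; smoothstep then clips the
    result into a hard-edged mask. Exposing threshold and contrast as
    parameters is what makes the look tweakable live in the editor.
    """
    broken = cavity * (0.5 + grunge)
    return smoothstep(threshold - contrast, threshold + contrast, broken)

# A flat area stays clean regardless of grunge; a strong edge wears through.
clean = wear_mask(0.1, 0.5)
worn = wear_mask(1.0, 0.5)
```

In a UE4 material graph the same thing would be a Multiply and a SmoothStep (or a Lerp plus Clamp) on two texture samples, which is also why point A above holds: each exposed control adds instructions.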

So, in a nutshell, I am curious what the standard workflows for this are, or if anyone knows how it's handled in games like Star Citizen. Having box/triplanar mapped textures across the mesh is one thing, deferred decals are another, and those are easy, but how about mixing these with effects that just inevitably have to be UV-map specific?

Thank you in advance.
