
Unreal Engine - Artist's Guide to Optimize [WIP]

scslover polycounter lvl 6
I've decided to shift all Unreal Engine tutorials and information, including free tools, materials, textures and uassets to my website at:


B3D Game Dev
http://b3dgamedev.com


It's still in its early phases of development, but there are already tips on Unreal tools in there. I'm always open to questions, comments, and suggestions, so please take a look. My main goal is to educate people who want to get into the gaming industry without falling into massive amounts of debt. If I help at least one person, then I feel like I've accomplished something.  :)

Thanks






I'm logging updates and creating easy-to-navigate Google Docs that will include an outline. Information will still be here, but it's much easier to reference the docs.

Updates:
- Updated Master Materials breakdown on 4/16/2019
  Google Doc: https://bit.ly/2Gq5Wi9
- Updated Lighting section on 3/18/2019
  Google Doc: https://bit.ly/2Hufhaq




Hello everyone,
I'm not great at introductions so I'll just get right to it.

I've been working in UDK and Unreal 4 professionally for around 4 years now, and I decided to start an optimization guide from an artist's viewpoint. I am currently an environment artist, but I've tried to immerse myself in the techy side of game art for a couple of years, all in my free time (which unfortunately is very, very little). There are tons of resources scattered around the internet on optimizing your assets, materials, UVs, etc., but I thought it would be nice to have a single go-to guide.

I did write this up on my own (quickly), and I'm fully aware that these suggestions are platform-specific. I am also very open to anyone pointing out things that are incorrect or arbitrary; in fact, I encourage it. This is an iterative process that I plan on keeping updated as I learn new things (profiling, tips and tricks, etc.).


OPTIMIZING & PERFORMANCE 

A look at optimizing game projects in Unreal Engine 4 (currently 4.19).

There's really no single definitive way to start your optimization pass, as it depends heavily on your target platform. My experience is primarily with PC, and this should be a pretty good starting guide for getting some of those frames back.


Artists

  1. Unused Assets & Redirectors

    1. Remove any unused assets.

    2. Deleting, renaming, and moving assets all leave behind redirectors. A redirector tells the engine where an asset has been moved to. For example, if you move a material that has been applied as an override in a sublevel, all associated assets need to be checked out and saved/submitted so they know where each other are; without checking out the referenced assets, a redirector is left behind. Later on, when all assets are available, you can run the redirector fixup commandlet or fix them manually inside the editor (right-click a folder in the Content Browser and use Fix Up Redirectors in Folder). A quick way to find leftover redirectors is sketched after this list.

  2. Textures

    1. Compression Types
      1. Albedo textures should stick to BC1 compression unless you can get away with a single-channel albedo tinted in the material. In that case, it can be packed into an RGB mask texture or compressed on its own as a low-resolution single-channel (Alpha/grayscale) texture.

      2. Packed RGB textures should use the Masks compression setting (linear, non-sRGB).

      3. Normals use BC5 compression.

      4. Single channel textures can be packed into RGB masks.

      5. Never use the alpha channel!

        1. This rule depends on the specific use case. If you only need one additional channel, it's okay to use the alpha. But if you have room in a packed texture elsewhere, consider packing the map into that channel (preferably the green channel). See the discussion of texture fetches vs. memory in the replies below.

    2. LOD Bias & Streaming

      1. Run through your textures and adjust the LOD Bias, checking whether the change is noticeable on the asset in context. By raising the LOD Bias, you can cut texture memory considerably. (A way to batch-audit these texture settings is sketched after this list.)

      2. Check that the texture is streaming properly. If the texture is not a power-of-two resolution, it will not stream, meaning the texture will never mip down.

  3. Merging, LODs, HLODs and Proxy Meshes

    1. For meshes that are used a lot, combining them into larger combos can help reduce draw calls. This should be a balance of size and count; be sure your combos don't get so large that they become difficult to occlude entirely.

      1. Combine like meshes only (meshes that use the same material and are generally close to each other in the level).

    2. LODs are an integral part of optimization. Unreal has options for auto-generating LODs, which are useful as a starting point.

      1. LODs can have their own materials - check for ways to reduce material costs, like cutting normals, masking, transparency, specular information, etc - and apply these to the LODs. This is very useful for masked materials and translucent materials.

      2. Make sure you’re making LODs for meshes that actually benefit from them. Some assets with reasonably low tri-counts may not even need an LOD. I can't remember the exact number, but I think it's somewhere in the 100 - 600 tri range (need source).

      3. Meshes with vertex colors will need decent supporting geometry to be able to LOD down without the vertex color information getting completely ruined. A ‘pseudo-fix’ is to enable LOD dithering in the material details panel.

    3. HLODs and proxy meshes are the next level of LODing. Both take existing meshes and combine them into a lower-quality mesh (the proxy mesh) with its own combined textures and material. In most cases it’s pretty bad visually, but that’s the point: at a distance you won’t be able to tell that one big ugly mesh just magically switched places with 20 - 30 smaller, higher-quality meshes. This is usually a huge performance gain. One thing to watch out for is material metalness. If you disable metalness on a proxy mesh whose source meshes rely on metalness maps, you will see a pretty evident pop when it switches, even from a distance (from experience).

      1. The proxy mesh tool needs to be enabled in the project settings under Editor -> Hierarchical LOD Mesh Simplification. Select ProxyLODMeshReduction and restart the editor.
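
A quick aside on finding those leftover redirectors mentioned above: if you have the editor's Python scripting plugin enabled, something like the sketch below will list any redirectors sitting under a folder. This is only a rough sketch: the /Game/Environment path is a placeholder, and the EditorAssetLibrary calls match the 4.2x-era Python API, so names may differ slightly in your engine version. The actual fixup still happens through the Content Browser's "Fix Up Redirectors in Folder" action or the commandlet. You can run snippets like this from the editor's Python console or with the py console command.

# Rough sketch (editor Python): list leftover redirectors under a folder.
# Assumes the Python Editor Script Plugin is enabled; /Game/Environment is a
# placeholder path.
import unreal

SEARCH_DIR = "/Game/Environment"  # hypothetical folder

for asset_path in unreal.EditorAssetLibrary.list_assets(SEARCH_DIR, recursive=True, include_folder=False):
    data = unreal.EditorAssetLibrary.find_asset_data(asset_path)
    # Redirectors show up in the asset registry as ObjectRedirector assets.
    if str(data.asset_class) == "ObjectRedirector":
        unreal.log_warning("Leftover redirector: {}".format(asset_path))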
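
On the texture side, the same Python plugin can batch-audit compression settings and LOD bias instead of clicking through every texture. Again, this is just a sketch: the folder path and the "_Mask" naming filter are placeholders for your own conventions, and the calls shown assume the 4.2x-era EditorAssetLibrary / Texture2D API.

# Rough sketch (editor Python): print compression settings and LOD bias for
# every texture in a folder, and force anything named like a packed mask onto
# the linear Masks compression setting.
import unreal

TEXTURE_DIR = "/Game/Environment/Textures"  # hypothetical folder

for asset_path in unreal.EditorAssetLibrary.list_assets(TEXTURE_DIR, recursive=True, include_folder=False):
    tex = unreal.EditorAssetLibrary.load_asset(asset_path)
    if not isinstance(tex, unreal.Texture2D):
        continue

    compression = tex.get_editor_property("compression_settings")
    lod_bias = tex.get_editor_property("lod_bias")
    unreal.log("{}: compression={}, lod_bias={}".format(asset_path, compression, lod_bias))

    # Only touch textures you know are channel-packed masks ("_Mask" is a
    # hypothetical naming convention).
    if "_Mask" in asset_path:
        tex.set_editor_property("compression_settings", unreal.TextureCompressionSettings.TC_MASKS)
        tex.set_editor_property("srgb", False)
        unreal.EditorAssetLibrary.save_asset(asset_path)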

----------------------------------------------------------------------------------------------------------------------------

Lighting Information and Optimization

General Lighting

Lighting will have a huge impact on how well your scene or game performs, depending on which platform you are targeting. These tips should help you narrow down which lighting scenario to go with, and give a little insight into the use of each.


Place your lighting in one or two sublevels. If you are using a mixture of dynamic and static lighting, use two. One sublevel should hold the static lights, with its streaming method set to Blueprint; this ensures the static lights get baked but the sublevel is not loaded at runtime. The dynamic lighting sublevel can be left at the default setting (Always Loaded).





  1. Mobility

  1. Static Lighting
    1. Static lights are lights that are “baked” into your scene. The light color and shadows are compressed into textures, so there is no runtime overhead when using static lighting. But those maps are loaded into VRAM, and with large scenes they can become problematic.

    2. Static lighting cannot shadow moving objects such as players or other dynamic assets.

    3. The quality of the shadows is directly related to the resolution and layout of your asset’s lightmap (see Lightmapping below); higher resolutions produce better shadows. For softer shadows, increase the Source Radius on your light.

  2. Stationary Lighting
    1. Stationary lights are a mixture of static and dynamic lighting. The shadows and indirect lighting are baked into light and shadow maps; however, the direct lighting and its color can be changed at runtime.

    2. Stationary lighting is typically used for sunlight (directional).

    3. Stationary lighting shadow costs come from movable objects.

      1. To reduce the cost of dynamic shadows, adjust the dynamic shadow distance (available for both movable and stationary lights).

      2. For softer shadows produced from your stationary light, enable “Use Area Shadows for Stationary Light”. This produces softer shadows depending on the distance from its caster. Having proper lightmap resolution will help this effect.

  3. Movable Lighting
    1. Movable lights are 100% dynamic and their properties can be changed at runtime. They produce no shadow maps or lightmaps and have zero indirect lighting (global illumination) influence.

    2. The cost of movable lights depends on how many objects the light affects and their triangle count. (A quick way to audit light mobility across a level is sketched below.)
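
If you want a quick census of what mobility your placed lights are actually using, the editor's Python plugin can count them for you. This is a rough sketch only; it assumes the Python Editor Script Plugin is enabled and that the EditorLevelLibrary / Light property names match the 4.2x-era API.

# Rough sketch (editor Python): count placed lights by mobility
# (static / stationary / movable) in the currently open level.
import unreal
from collections import Counter

counts = Counter()
for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    if not isinstance(actor, unreal.Light):
        continue
    component = actor.get_editor_property("light_component")
    mobility = component.get_editor_property("mobility")  # ComponentMobility enum
    counts[str(mobility)] += 1

for mobility, count in counts.items():
    unreal.log("{}: {} lights".format(mobility, count))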

More Tips & Performance

  1. Use spot lights as opposed to point lights when possible.

    1. Point lights cost the same as 6 spot lights.

  2. Set the attenuation radius of spot lights to affect only what they need to reach.

  3. Adjusting the inner cone angle will soften their shadows and penumbra.

  4. Adjust each light's draw distance and fade range to help with per-light performance cost. (A way to batch-check light settings is sketched after this list.)

  5. Where possible, disable dynamic shadows on your placed lights.

  6. Disabling shadows altogether will also help; this is an easy way to create cheap fill lighting (though it can also create light bleed).

  7. Stay away from volumetric fog.

    1. Fog is cool, and it’s even cooler when you see light shafts scattering in it, but it’s expensive. There are cheaper ways to produce fake light shafts.

    2. Volumetric fog can be enabled in the exponential height fog options.

  8. Disable translucency lighting on lights that don’t need it.

  9. Using more than one Lightmass Importance Volume is okay.

    1. Placing one large volume over the level may seem like the correct thing to do, but it’s easier to maintain and control important areas by using multiple volumes. There is no effect on performance, and it lets you focus the lighting importance on areas that are more active.

  10. The Lightmass Character Indirect Detail Volume should only be used in areas where it’s necessary.

    1. The character indirect detail volume increases the number of indirect lighting samples within the volume. Generally, these sample spheres are only generated above character height level so that characters are always lit, and for the spheres to generate there needs to be a ground mesh with collision.
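
Here's a rough sketch of how you could batch-check some of the tips above (oversized attenuation radii, shadow casting on fill lights) from the editor's Python console. The 1500-unit threshold and the "Fill" name filter are placeholders, not recommendations, and the property names assume the 4.2x-era Python API.

# Rough sketch (editor Python): flag point/spot lights with a large attenuation
# radius and turn off shadow casting on lights named as fill lights.
import unreal

MAX_REASONABLE_RADIUS = 1500.0  # hypothetical threshold, in Unreal units

for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    if not isinstance(actor, (unreal.PointLight, unreal.SpotLight)):
        continue
    component = actor.get_editor_property("light_component")

    radius = component.get_editor_property("attenuation_radius")
    if radius > MAX_REASONABLE_RADIUS:
        unreal.log_warning("{} has a large attenuation radius: {}".format(actor.get_name(), radius))

    # Cheap fill lights: no shadow casting at all ("Fill" is a hypothetical
    # naming convention).
    if "Fill" in actor.get_name():
        component.set_editor_property("cast_shadows", False)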

Lightmapping

Lightmapping is necessary when you are using static lighting in your scene. A lightmap is a UV layout, generally in the second UV channel, that Unreal uses to bake light and shadow maps, which are then compressed and used at runtime.


Lightmaps should be hand-crafted in your 3D software of choice. However, depending on the complexity of your asset, Unreal has a very handy tool that will lay them out for you.


  1. Rules
    1. Lightmap UVs should never overlap each other.

    2. UVs should stay within the 0-1 UV space.

    3. Because lightmaps scale with video settings, making sure there is proper padding between shells will ensure that your light bake stays consistent from low settings to high.

    4. For better quality, consider mimicking your smoothing groups when producing the lightmap UV shells. There are tools for Maya and 3ds Max that will do this automatically for you.

    5. Keeping the UV shell edges straight will produce crisper shadows and can help remove ‘stair-stepping’.

    6. Every face counts, and every face gets lighting and shadow information. By unwrapping your own UVs, you can prioritize shells. For example, if you have backfaces (these should be deleted anyway), those faces can be minimized and packed into their own little space away from the important shells, which can then be made larger to get better resolution.

  2. Editor Tips
    1. To generate lightmaps in the editor, simply check the box “Generate Lightmap UVs” in the static mesh’s Build Settings. Min Lightmap Resolution is important here; this is the lowest resolution to use, so keeping it low (the default) is fine. The Source Lightmap Index is the UV channel that you want Unreal to use to make your new lightmaps, and the Destination Lightmap Index is your new lightmap UV channel (in Unreal, index 1 is UV channel 2). (A way to batch-check lightmap resolutions is sketched below.)
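
For the lightmap settings themselves, a small Python pass can print (or raise) the lightmap resolution and lightmap UV channel on every static mesh in a folder, which is handy when hunting down blotchy bakes. As before, this is a sketch: the folder path and the 64 floor are placeholders, and the property names assume the 4.2x-era Python API.

# Rough sketch (editor Python): report each static mesh's lightmap resolution
# and lightmap UV index, and bump anything below a minimum resolution.
import unreal

MESH_DIR = "/Game/Environment/Meshes"  # hypothetical folder
MIN_RESOLUTION = 64                    # hypothetical floor

for asset_path in unreal.EditorAssetLibrary.list_assets(MESH_DIR, recursive=True, include_folder=False):
    mesh = unreal.EditorAssetLibrary.load_asset(asset_path)
    if not isinstance(mesh, unreal.StaticMesh):
        continue

    resolution = mesh.get_editor_property("light_map_resolution")
    uv_channel = mesh.get_editor_property("light_map_coordinate_index")
    unreal.log("{}: lightmap res={}, lightmap UV index={}".format(asset_path, resolution, uv_channel))

    if resolution < MIN_RESOLUTION:
        mesh.set_editor_property("light_map_resolution", MIN_RESOLUTION)
        unreal.EditorAssetLibrary.save_asset(asset_path)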


More lighting resources:

Cache Shadows: https://docs.unrealengine.com/en-us/Engine/Rendering/LightingAndShadows/LightMobility/DynamicLights

Lighting Troubleshooting Guide: https://wiki.unrealengine.com/LightingTroubleshootingGuide

Unwrapping Lightmaps: https://docs.unrealengine.com/en-us/Engine/Content/Types/StaticMeshes/LightmapUnwrapping






Performance Team / Profiling 

I’m working on this section.

  1. Rendering Method(s) and how to choose.

    1. Forward

    2. Deferred

  2. Console Commands (a small example of running these from editor Python is sketched after this outline)

  3. Blueprints Nativization

  4. Ticks & Runtime Elements

  5. Python Scripting

  6. Modifying *.ini files

  7. Plugins

  8. Profiling
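
For the console commands item above, here's a rough sketch of firing a few of the usual profiling commands from the editor's Python console instead of typing them one by one. The commands themselves (stat unit, stat gpu, profilegpu) are standard UE4 console commands; the Python calls assume the Python Editor Script Plugin is enabled.

# Rough sketch (editor Python): toggle some common profiling stats.
import unreal

world = unreal.EditorLevelLibrary.get_editor_world()

# stat unit  - frame / game / draw / GPU timings
# stat gpu   - per-pass GPU timings
# profilegpu - capture the current frame's GPU work for inspection
for command in ["stat unit", "stat gpu", "profilegpu"]:
    unreal.SystemLibrary.execute_console_command(world, command)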

-----------------------------------------------------------------------------------------------
Resources:
Performance Guidelines for Artists and Designers
https://docs.unrealengine.com/en-US/Engine/Performance/Guidelines
Console Commands for Unreal Engine
https://digilander.libero.it/ZioYuri78/

Replies

  • Taylor Brown ngon master
    bookmarked. thank you for this!
  • Calvinatorr polycounter lvl 4
    Interesting stuff. Though I don't think 2.5.1 of never using the alpha channel should be a hard and fast rule. This removes a texture fetch, which isn't the cheapest operation in the shader, so it becomes a case of balancing texture memory vs. run-time performance.

    I should also add that if you view the HLSL code it will show sampling the alpha channel as 2 texture fetches, but the assembler will optimise this to 1 texture fetch.
  • scslover polycounter lvl 6
    This is good info. I didn't know about 'texture fetching' and I haven't jumped into understanding the HLSL code yet. I'm trying, but my schedule is very busy these few weeks. :)    Thanks for the info!

    So you're saying that by removing the alpha channel, it drops a texture fetch. Assuming a texture fetch includes the full RGB values, I would think you could benefit from using a new texture where you could store more info in the other two channels. Personally, I've just noticed the texture size is always doubled, so I've steered clear of using it in studio projects.
  • scslover polycounter lvl 6
    Taylor Brown said: bookmarked. thank you for this!
    You're welcome - I will update it soon when my schedule frees up  :) 
  • Calvinatorr polycounter lvl 4
    @scslover Yes because fetching a texture returns a vec4 (RGBA) value. If you look at the generated HLSL code from the material editor this will show as duplicated code (the same expression repeated) but this will be optimised by the assembler.

    Texture fetches can be expensive, so you want to keep them as minimal as you can. Yes, if you require other data then always try to pack it in RGB wherever you can (prioritising the G channel as this contains an extra bit). But if you have a left-over channel then it's probably best to shove this in the alpha of another texture, unless there's a specific reason not to do so, like a detail texture.
    tl;dr it's contextual to what you are doing  :smile:

    Hope that makes sense!
  • scslover polycounter lvl 6
    Makes perfect sense, thanks for the response and clarification! 