
Struggling with Shader Performance on Large-Scale 3D Assets in Unreal Engine 5

Hello

I have been working on a large-scale 3D project in Unreal Engine 5, and I’m encountering performance issues related to my custom shaders. When applying complex shaders to large assets, especially ones with high polygon counts, performance drops significantly. I’ve tried optimizing the shaders by reducing texture resolutions and simplifying the material setup, but the performance hit persists. :s

Has anyone experienced similar performance issues when working with shaders on large-scale 3D assets in UE5? :'( I suspect it might be related to the complexity of my shader graphs or how Unreal handles shaders for large objects in its pipeline. I’ve explored different optimization techniques, such as using simpler lighting models and baking some of the materials, but I’m not seeing a significant boost in performance. I checked https://forums.unrealengine.com/t/shader-compilation-is-slow-and-do-not-use-available-ressources/530585 for reference.


Additionally, I’m curious if there are any specific settings or optimizations within Unreal Engine 5’s rendering pipeline that could help mitigate this issue. Should I be adjusting my LOD settings or using a different approach to handle these complex shaders? Any advice or recommendations would be greatly appreciated, especially from those with experience working with large-scale assets in UE5. :smile:



Thank you! :)

Replies

  • Klunk
    Klunk ngon master
    I know nothing of UE - is there a performance profiler you can use to show where the slowdown is occurring?... Other than that, if it's the shaders then replace them with something standard... if it still persists then it's geometry or lighting etc.; if not, add them back in one at a time and see how this affects performance. Also I think you're going to have to be more specific about things like number of tris, number of materials, and what machine and card you're running it on. And can we assume there's no game code running with a gazillion AI bots running around? :) Apologies if you've tried all this ;/
  • poopipe
    poopipe grand marshal polycounter
    That link is about shader build times rather than the runtime shader cost. 

    Unreal caches shaders once they're built, so long build times are to be expected on first load - or after enabling/disabling certain features (raytracing, changes to shading models, compression defaults).
    I have certainly encountered issues with it either not properly caching or deciding to rebuild anyway. Tbh I've just ended up moving my content to another project in most cases. At the office I just made someone else fix it and I have no idea what they did :D

    If the problem is at runtime, there's stuff that isn't related to your actual shader cost that can impact things, but a lot of it won't be apparent without running Insights and doing GPU captures.
    One common issue is around on-demand shader compilation - any branching (i.e. boolean options) in your shaders can cause this to kick off and dramatically increase frame times (I believe it's less of an issue on UE5 than UE4) - you won't know if that's a problem without profiling, but don't use boolean switches in your materials anyway.

    To fill @Klunk in:

    Unreal comes with really, really good integrated profiling tools for both GPU and CPU - honestly, it's murder going back to a proprietary engine after a few years on Unreal.
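    For a first pass, these are the stock console commands I usually reach for (from memory, so double-check the exact names against your engine version):

        stat unit      - rough frame time split between game thread, render thread and GPU
        stat gpu       - per-pass GPU timings
        stat rhi       - draw call and memory counters
        ProfileGPU     - one-off GPU capture of the current frame

    That plus an Unreal Insights trace will usually tell you whether you're paying for shader cost, compilation hitches, or something else entirely.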

  • Benjammin
    Benjammin greentooth
    poopipe said:

    One common issue is around on-demand shader compilation - any branching (i.e. boolean options) in your shaders can cause this to kick off and dramatically increase frame times (I believe it's less of an issue on UE5 than UE4) - you won't know if that's a problem without profiling, but don't use boolean switches in your materials anyway.
    Can you expand on this at all or point me toward some documentation? I work with materials (in 4.27) with a TON of boolean switch parameters which are enabled/disabled in instances....
  • poopipe
    poopipe grand marshal polycounter
    Not with any great confidence in the technical details - the issue was first raised by one of our render programmers after identifying huge stalls while profiling.

    The material that really upset everything featured nested branches that were driven by non-static boolean switches. eg..
            bool
           /    \
        bool    bool
               /    \
            bool    bool
    (it went 5 or 6 layers deep, but the forum's code format box is white on yellow so it's hard to ascii art)
    My interpretation of what happens is that all paths aren't necessarily built 'offline', and when a new path is encountered it gets built on demand (as in, when the material appears on screen or something). This is generally fine, but if the branches are nested and/or precede a lot of work it can cause your GPU to stall.

    Static switches are largely okay because they don't (shouldn't) feature at runtime - obviously proof only comes from profiling, but I believe that's stated in the Unreal docs.
    In general though, I will try to use a lerp rather than a branch if I want an option in a material. This ensures that all of the logic is accounted for at all times (this applies outside of Unreal, btw).
    It's somewhat counterintuitive, but in practice what upsets a GPU most is being diverted to a different task (that's why 'draw calls' are expensive). As such, branching in a shader is one of those things you should only do if you have to.
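    As a very rough sketch of the difference (hand-written HLSL rather than anything the material compiler actually emits, and the names are made up):

        // Branch version: when threads in a wave disagree on bUseDetail the GPU
        // can end up paying for both sides, plus the divergence overhead.
        float3 ShadeWithBranch(float3 baseColor, float3 detailColor, bool bUseDetail)
        {
            if (bUseDetail)
            {
                return baseColor * detailColor;
            }
            return baseColor;
        }

        // Lerp version: the detail work runs every time and the "option" is just a
        // 0..1 mask, so the cost is flat and predictable.
        float3 ShadeWithLerp(float3 baseColor, float3 detailColor, float detailMask)
        {
            return lerp(baseColor, baseColor * detailColor, detailMask);
        }

    The lerp pays for the detail multiply even when it's 'off', but that fixed, predictable cost is usually a better trade than paying for divergence.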

    I don't want to scaremonger - Unreal has a really good shader compiler/optimiser and, outside of the more egregious situations, it can probably cope completely fine.
    This is just something I've seen happen a few times, and it's a trap people can fall into when they're trying to do the right thing (i.e. reduce overall material count, make things flexible/friendly).



  • Benjammin
    Benjammin greentooth
    poopipe said:
    Not with any great confidence in the technical details - the issue was first raised by one of our render programmers after identifying huge stalls while profiling. [...]

    Thanks, appreciate it! It's all static switches and only 2-3 layers deep. :lol:
