Streaming blocks of prefabs -- is it the best way??

astraldata vertex
Alright, so I've got to know -- what's the best way to optimize a scene composed of a huge amount (and a wide variety) of smaller prefabs that compose the terrain of a 3d world as the player moves about in it?

I've seen engines such as Unreal throw a seemingly unlimited number of high-res objects at the engine and it seems to handle it without a hitch. Unity, on the other hand, seems to struggle with even a very small number of objects.

I've heard object pooling is the most common answer, but keeping such a large number of prefab objects in memory at once would kill performance, wouldn't it? Additionally, I don't know how many objects a chunk of the world could consist of before performance starts to drag across various platforms. What would be the best approach to optimizing something like this so that it's practical?

Replies

  • RyanB
    RyanB Polycount Sponsor
    Like anything, it depends.

    If your "smaller prefabs" are suitable for dynamic batching, then that would reduce your draw calls a lot.  You may also want to crunch all of your prefabs into some bigger objects to reduce draw calls.  You probably can't use static batching because your world is not static.  Without seeing the data, I can't say.

    I write a lot of custom scripts to disable objects based on their type and distance from the camera.  I use LODs.  I have custom shadow casting objects.  I swap shaders at different distances.  Lots and lots of optimizations based on distance from camera.

    Number of lights and type of lights can have a huge impact on GPU.  Number of shader passes and forward rendering on mobile is important.  I manually optimize all shaders.  Shader forge is great for prototyping but often highly unoptimized.

    Pre-loading shaders and textures before the level begins gets rid of loading spikes.

    Object pooling usually helps but it doesn't mean it's the optimization you need.

    You need to Optimize -> Profile -> Test -> Repeat until you reach the framerate you want.  Profiling ON DEVICE is critical.  That means using Xcode connected to your iOS device, or the Adreno Profiler for some Android devices, etc.  Without profiling, you are just guessing what is wrong and how to fix it.

    Optimize -> Profile -> Test -> Repeat
    Optimize -> Profile -> Test -> Repeat
    Optimize -> Profile -> Test -> Repeat



  • astraldata
    astraldata vertex
    Thanks for the straight-talk.  I totally appreciate the heads-up from a more experienced 3D developer on this one. I come from a 2D dev background, so I'm learning my way around 3D optimizations a little at a time. Thankfully I've got enough of a hang of it that I've understood everything you've said so far.

    I'm doing desktop development right now, so I have a little more flexibility. Though, in my case I'm making a specific type of world where all terrain and vertical structures (including floating terrains) are made of smaller prefabs (sort of like minecraft tiles) and trying to display these many, many prefabs, all with collision and mesh info, is proving to be a huge pain.

    Any idea what direction I'd need to take in order to be able to display something like this in Unity?

    What I have right now is okay with smaller maps, but when I increase the map size, even a little, things start slowing to a crawl, even on decent Desktop hardware, due to all the overhead in collision and whatnot in the prefabs. I don't know where to start because I need the map to be much MUCH larger than it's capable of being right now. I don't think a simple camera optimization using object pooling would be enough...
  • RyanB
    RyanB Polycount Sponsor
    Sounds almost exactly like the problem we ran into on our mobile game.  Too many small pieces = too many draw calls. 

    What we did:
    - weld as many models together as possible then save the welded pieces as prefabs.  One big piece is almost always better than lots of tiny pieces. 
    - design your world so you can't see to infinity; put bends, doors, etc. at regular intervals
    - separate walking/driving surfaces from walls.  Limit as much as possible where lights and shadows fall.
    - I reduce something like an enemy vehicle from ~15 draw calls to around 3 by welding parts and reducing the materials.  With 5 enemy vehicles on screen, that takes you from around 75 draw calls down to around 15.
    - I make custom shadow caster objects with a custom shadow casting shader that I attach to the vehicles and other objects that use about 90% fewer polys than the main object
    - Real-time lights are expensive.  Shadow casting is expensive.  Use fancy lighting only on things very close to the camera.  Switch to simple lighting for everything else.  For really distant stuff I use shaders with no textures, just colours that blend into the fog.
    - reduce draw distance to around 700 - 800 units for the main camera
    - add a second camera that draws only far distant objects like mountains, main camera draws on top of this. 
    - use lots of fog
    - use sprites with simple shaders for some fx to make them dynamically batch
    - turn off fx when ~500 units away from camera.  This will vary from game to game but the main idea is to only render something if the player can actually tell the difference
    - If you write scripts, use coroutines and similar techniques so you don't recalculate stuff every frame unless absolutely necessary
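    The last tip above can be sketched with a coroutine (a hypothetical example; `ScanForTargets` stands in for whatever expensive work you're spreading out):

```csharp
// Sketch: run expensive logic a few times per second instead of every frame.
using System.Collections;
using UnityEngine;

public class SlowTicker : MonoBehaviour
{
    public float Interval = 0.25f; // seconds between recalculations; tune per game

    void Start()
    {
        StartCoroutine(SlowTick());
    }

    IEnumerator SlowTick()
    {
        var wait = new WaitForSeconds(Interval); // cache to avoid per-loop allocation
        while (true)
        {
            ScanForTargets();  // expensive work runs ~4x per second, not 60x
            yield return wait; // hand control back to Unity until the next tick
        }
    }

    void ScanForTargets()
    {
        // ... expensive logic here (pathfinding, distance checks, etc.) ...
    }
}
```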

    Most important:
    Check your assumptions in a profiler.  I've often found things hidden away that I never thought would be killing memory and performance but were. 
    Frame debugger on big scenes can be an eye opener.  You'll see your game trying to draw teeny tiny little things in the distance that you are much better just turning off.

    Some videos on optimization:
    Unite 2012 - Performance Optimization Tips and Tricks for Unity  https://www.youtube.com/watch?v=jZ4LL1LlqF8  
    How to use the Frame Debugger https://youtu.be/4N8GxCeolzM
    How to use the Profiler https://www.youtube.com/watch?v=sBpXiJ9G3OY
  • Eric Chadwick
    Absolutely awesome posts RyanB!
  • RyanB
    RyanB Polycount Sponsor
    Thank you sir.  I've been digging deeper into optimization for about a year now and can share a lot of what I've learned.
  • astraldata
    astraldata vertex
    Seconded. Much helpful info there man! Been spending a lot of time trying to process it all -- Unity apparently has a LOT of room for optimizations that I never would have known about without your help!

    Thank you!


    If you don't mind, I still have a few optimization questions boggling my mind that you might be able to answer:

    1. First, using these techniques, how would you approach an open-world where the terrain itself is made of stacks of small-but-similar gameobjects?
    2. A follow-up question, how would you handle collision/gameobject/transform overhead in a situation like that?
    3. What kind of world-streaming would you employ when the terrain is made from tiny gameobject stacks?
    4. Regarding streaming a world like this, would you even bother with pooling, assuming the horizon can be seen in the distance -- wouldn't pooling cost more in trying to keep up with so many stacks of tiny gameobjects than it's worth trying to implement and keep in memory to pool?
  • RyanB
    RyanB Polycount Sponsor
    Unity apparently has a LOT of room for optimizations that I never would have known about without your help!
    I'm glad you find it useful. 

    There's a lot of scripts, shaders, lighting, LODs and Unity settings all working together so it's a challenge to explain it all clearly. 
    First, using these techniques, how would you approach an open-world where the terrain itself is made of stacks of small-but-similar gameobjects?
    To qualify everything I write:  I'm not an engineer and I don't pretend to be.  I work closely with them, I do a lot of scripting but I stay out of their code.  That being said, here's my 2 cents:

    I'm trying to envision a terrain made of small-but-similar gameobjects.  Like a pile of blocks or other geometric primitives or maybe a rockslide with lots of big and small rocks piled on each other?  Voxels?  Randomly generated or already baked?  Lots of variables there but...
    You could attempt to use dynamic batching if the small objects meet all the criteria of dynamic batching.  You would need a simple shader, low poly count and UNIFORM scaling, preferably 1, 1, 1.  If you had a limited number of prefabs that met the criteria, you would have one drawcall per prefab.  Easier said than done but it could work if set up correctly.  You would have for example five rock prefabs all with scaling left at 1,1,1 and put in a pile.  You could potentially get the draw calls down to 5.
    But!  You would also be limited in terms of lighting and shaders depending on your design.  If you wanted each small object to cast real time shadows, then that would also cost you a drawcall per shadowcasting object.  It could add up fast.  You could limit shadowcasters via Unity settings or swapping shaders based on distance.
    Static batching could also work as long as you aren't moving your sectors around but it sounds like you may be. 
    Deferred shading or lighting would have to be considered depending on the design.
    The alternative is to just create one big terrain sector from all of your little ones.  You can do this in Unity using Meshbaker or similar or do it in a modelling package outside Unity.  I prefer this method because it's simple, works and doesn't require a lot of fiddling.

    A follow-up question, how would you handle collision/gameobject/transform overhead in a situation like that?
    Assuming I must have lots of small objects, I would deactivate anything that is outside of a small radius around the player.  That should reduce the cost significantly.  Proper layering is always a good idea.  Use primitive colliders instead of mesh colliders.  For example, I have overlapped multiple rectangular boxes to approximate a sawblade.  You can also adjust the timestep but generally this isn't acceptable if things are moving quickly in the game.  See example below of turning something (mesh renderers, not colliders) on/off based on distance.


    What kind of world-streaming would you employ when the terrain is made from tiny gameobject stacks?
    Honestly, I would avoid lots of small objects.  It requires a lot more work. 
    Streaming is handled by the engineers so I don't make decisions about that directly.
    I would look on the Unity asset store for something to help with streaming.  Often you can spend $50 and save a lot of work. Some examples:
    https://www.assetstore.unity3d.com/en/#!/content/36486
    https://www.assetstore.unity3d.com/en/#!/content/15356  -- We've used this at work

    Regarding streaming a world like this, would you even bother with pooling, assuming the horizon can be seen in the distance -- wouldn't pooling cost more in trying to keep up with so many stacks of tiny gameobjects than it's worth trying to implement and keep in memory to pool?
    If every object is different then yes, pooling might be a waste.  But if every enemy and object is totally unique, you are making a lot of work for yourself; I would re-design with re-use in mind.  And if you have something like a missile object or magic spell or bullet that is constantly being spawned, pooling is almost certainly better than instantiating and destroying it over and over.
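    A minimal sketch of that kind of pool, assuming a `bulletPrefab` reference assigned in the Inspector (names are illustrative, not from this thread):

```csharp
// Sketch: reuse a stack of inactive instances instead of Instantiate/Destroy churn.
using System.Collections.Generic;
using UnityEngine;

public class BulletPool : MonoBehaviour
{
    public GameObject bulletPrefab; // hypothetical prefab, assigned in the Inspector
    public int initialSize = 32;

    private readonly Stack<GameObject> pool = new Stack<GameObject>();

    void Start()
    {
        // Pre-warm the pool so no allocation spikes happen during gameplay.
        for (int i = 0; i < initialSize; i++)
        {
            var go = Instantiate(bulletPrefab);
            go.SetActive(false);
            pool.Push(go);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        // Reactivate a pooled instance; fall back to Instantiate if the pool is empty.
        var go = pool.Count > 0 ? pool.Pop() : Instantiate(bulletPrefab);
        go.transform.position = position;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        go.SetActive(false); // deactivate instead of Destroy -- no GC garbage
        pool.Push(go);
    }
}
```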
    You almost definitely want to preload your materials, textures and shaders.  I've seen this done with a separate camera that looks at each prefab before the game starts.  This puts everything into memory so you don't get a load spike when a new enemy or sector appears on screen.  Look at using ShaderVariantCollection to preload shaders and update regularly as you add new shaders.  You should always be looking for these load spikes in your profiler. 
    Of course, the Resources folder should be used properly.
    Unite 2016 has a good talk on optimizing games with lots of objects from a programming perspective.  Some important info about the costs of hierarchies that artists should also know.


    They've also put up a best practices guide at Unity.  Lots of good advice https://unity3d.com/learn/tutorials/topics/best-practices

    // Sketch: disable child mesh renderers beyond a set distance from the main camera.
    using UnityEngine;

    public class MeshDisableDistance : MonoBehaviour
    {
        public float UpdateInterval = 0.1f;     // seconds between distance checks
        public float DisableDistance = 700.0f;  // units from camera; tune per game

        private MeshRenderer[] mrs;

        void Start()
        {
            mrs = GetComponentsInChildren<MeshRenderer>();

            // Check a few times per second instead of every frame.
            InvokeRepeating("UpdateMeshActive", 0f, UpdateInterval);
        }

        void UpdateMeshActive()
        {
            if (Camera.main == null)
                return;  // no active camera, e.g. during a scene load

            bool close = Vector3.Distance(transform.position,
                             Camera.main.transform.position) < DisableDistance;

            foreach (MeshRenderer mr in mrs)
                mr.enabled = close;  // renderers only; colliders are untouched
        }
    }




  • astraldata
    Thank you for such an awesome overview!! I'm still in awe of the wealth of information contained in that single post!!

    That information took some serious time to process over the holidays -- and I'm still going through it too -- but all of it is totally useful to me, so once again, thank you!

    For now, some questions --- First, I'm still unclear on whether I should bother with pooling:
    If every object is different then yes, pooling might be a waste.  But if every enemy and object is totally unique then you are making a lot of work for yourself.  I would re-design with re-use in mind.
    I'm not sure what you meant to say there, but to clarify from my end, in my project, every terrain object is very similar and rarely changes. Mostly there are just a few rotational and height variations of the same tile object in an environment set. Think of them essentially as individual modular tiles from a 2d tileset. There are just a LOT of instances of these similar tiles over a large world.

    I'm trying to envision a terrain made of small-but-similar gameobjects.

    As hinted at above, the world is made out of many (stationary/non-interactable) prefabs resembling 3d voxels, using essentially the 3d equivalent of a 2d tileset (something like minecraft, but using slightly more detailed prefab pieces for blocks instead of voxels) and with a vertical component to the world (i.e. not all flat terrain). The environment won't change except in very specific areas of the world, and probably just with a shader that draws an alternate version of an arrangement of prefabs (while hiding the original placements/arrangements of the 3d tiles in that special area.) Any ideas on a better way to do this would be very welcome though! If you've ever played LoZ: Skyward Sword, the time-shift stones are what I'm thinking about doing shader-wise (no clue how to go about this just yet though, so any hints would be great!)


    Regarding the current performance of many gameobjects in a single (small) location/area of the world:

    After doing a few tests using various approaches, I'm still not sure of the cost of gameobjects in a system like this, but I am sure now that I've got the drawcall count down a little (probably with dynamic batching, as you suggested), with most calls being the various rotations of gameobject prefabs in the scene. Tons seem to be saved by batching, but I read that batching a forest, for example, isn't the greatest idea, so I'm wondering if there's an alternative way to keep the flexibility of the tile system somehow without having to MeshCombine into single areas.

    Here's my current scene stats -- (and are these good or bad?):

    CPU: main 4.1ms
    render thread 3.1ms
    Tris: 30.7k
    Verts: 53.5k
    Batches: 41
    Saved by batching: 2500
    Screen: 760x427
    SetPass calls: 33
    Shadowcasters: 1379
    Visible skinned meshes: 3
    Animations: 0
    Basically, this is representative of a scene that's pretty close to what I want. There will be more prefab decoration types, a few more characters and FX onscreen, and a GUI, outside of what's indicated in those stats. The screen resolution is pretty low too compared to my target, but I think that's negligible here. I can disable some shadowcasting objects (such as the floors and some prefab decorations and such), but not many. The gameobjects are pretty simple geometry-wise. This is mostly a top-down view, but the camera can be rotated to see the horizon sometimes (I've implemented simple fog to deal with this issue in that scene.)

    Outside of what's mentioned by you already (and in that video), is there anything else that can be optimized with this sort of setup?

    For what it's worth -- thank you so much for your help and advice! I'm miles ahead in what I want to do thanks to you! :)


    (PS, as an aside and a bit off-topic: I've been watching the Unite 2016 videos and I must say -- there are some really great info gems in almost all of those. I'm geeking out for the realtime cloth stuff more than anything else though! I'd love to see that make its way into the realm of possibility for games one day. It really looks amazing.)







  • RyanB
    RyanB Polycount Sponsor
    Your CPU and GPU times are very low and your "saved by batches" is high which is good.  

    Your shadowcasters number is quite high.  That isn't an issue now but it may be if you add lights to the scene.  By default, I turn all shadowcaster switches off for each object mesh renderer unless I know the shadow will be seen by the player.  You can do this with scripts in the editor or at run time, for example within a certain distance of the camera.

    Screen resolution is too low and not representative of any real platform.  As you increase screen resolution your GPU time will go up.

    Tri and vert counts are low, even for mobile.

    Setpass calls aka drawcalls are very low which is good (typical numbers for the mobile game I'm working on is 150).  I suspect you are sharing a single default material at this point so your "Saved by batches" number and setpass calls might change a lot.  Keep an eye on those numbers as you change materials and shaders.

    So, you are fine and don't have anything to worry about right now.  In the future, I would play the game with the Profiler running and look for garbage collection spikes, rendering spikes, etc.  The rendering stats give you a good overview but miss the spikes.

    As for the shader, sounds like you want to just do a test of being inside/outside of an expanding sphere.  You might want to add a gradient blend on the edge of the sphere.  Keep in mind that when you are blending between two things it usually renders a complete version with shader A, then a complete version with shader B and then blends between the two complete versions.  So, it can be expensive to render depending on how fancy your versions A and B are.

    Or you could just have two overlapping scenes and activate/deactivate objects based on whether they are inside or outside of an invisible mesh.
  • astraldata
    Thanks a ton for all your help! Thanks to you, I've got so much more of an understanding now of what to look for than I ever thought possible when I started this topic!

    Thanks to you, I think I've finally got the prefab optimization thing figured out. I apologize for blasting you with questions, though I hope others can learn from this thread too!

    The shader thing is my next monster to tackle, as I have no clue how to handle geometry with them. Any example code for a shader that displays stuff within a sphere would be great if you could spare the time (I've yet to find any good examples), otherwise I'll just do some research and figure it out on my own once I get time to do it. I know Unity has recently changed up how they do shaders, and probably plan to do so again with the new FMV/CG movie features they aim to release sometime this year. Just a rundown summary of the basic idea of how this works right now would help immensely though. Right now, I have no clue how to mess with the display of geometry inside a shader -- I only know about color-swapping / alpha-channels really. As said before, I've been a 2D guy up to this point, so anything you know about geometry/selective-display shaders would be useful! Regarding my use-case with the shader, the overlapping scenes idea may work, but wouldn't loading spikes in an open-world potentially be a problem there?

    Shaders aside -- I only have one more concern, and that pertains to how to properly display huge stuff in the distance. I know using old-school fog is a key element to optimizing distant stuff, but there are a couple of things I'd really love to do that need more than fog alone:

    • 1) First, I've seen stuff like huge planets/moons or space-stations being displayed that get closer as you move toward them -- though their scale is f*ing HUGE -- and it seems to be clear that you can look all the way around you in any direction, up or down, and none of the large structures get clipped in the clipping plane of the camera. How does something like this work? Is there anything I can read that explains the best way to go about displaying something like that? Are they simply some weird version of a skybox??

    • 2) Secondly, if I have a large seamless world, how would I display something like a mountain ridge with a big, prominent, castle on top of its highest peak that you can eventually walk up to if you go far enough across the plains of an open-world? What kind of techniques would one use to display something of this magnitude without being able to use typical tricks like breaking it up into loading zones? It's clear Breath of the Wild and GTA V allow travel across great distances seamlessly, but I've seen older open-world games accomplish this too. I know these are higher-end games, but what they're doing can't be voodoo. Maybe you can give some clues/theories on how it's done?

    Regarding question #2, maybe they use something akin to the method used to accomplish question #1? Or, alternatively, maybe some type of shader-geometry-fog-blending-voodoo instead? I'm by no means making a huge open-world MMORPG or anything, but I genuinely need to know how these types of distant objects/vistas are created! Even Journey had them, and you could travel through them seamlessly (such as at the end of the game when you were flying up the mountain.) I've tried and tried to dissect these areas, but my limited knowledge of 3d optimization failed to help me understand how they do such seamless transition sequences. Any ideas? D:


    -- PS: Thanks so much for taking the time to answer these endless questions! I assure you, these are really the last ones I've got! Hopefully others can totally learn from this too! I know I've learned a lot! :smiley:
  • RyanB
    RyanB Polycount Sponsor
    I don't mind answering questions.  It helps me organize things in my head because I often have to explain these things to people. 

    Huge stuff in the distance:
    So, you probably have a skybox.  But, you want a huge object far, far away.  You don't want to set your camera far clipping plane to something huge because that messes with depth calculations and shadows.  What you can do is use two cameras.
    Camera A is your main camera.  It renders everything from 0.3 to around 1000.
    Camera B is your distant camera.  It renders things far, far away.

    Make a new layer.  Let's call it "DistantLayer".  Assign your big object to "DistantLayer"

    Set Camera A with:
    Clear Flags = Depth
    Culling Mask = no check mark next to "DistantLayer"
    Clipping Planes Near = 0.3  Far=700 (or less, depending on your game)
    Depth = 10 or higher (really just needs to be 1 more than what Camera B is set to)

    Set Camera B with:
    Clear Flags = Skybox
    Culling Mask =  check mark next to "DistantLayer", NO check mark next to anything else
    Clipping Planes Near = 10000  Far=20000 (so you put all of your distant objects 10,000 to 20,000 units away from camera)
    Depth = -10 or lower (really just needs to be 1 less than what Camera A is set to)
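    If it helps, the same two-camera settings could be applied from a script (a sketch using the values above; assumes a layer named "DistantLayer" already exists in the project):

```csharp
// Sketch: wire up the main/distant camera split described above.
using UnityEngine;

public class TwoCameraSetup : MonoBehaviour
{
    public Camera mainCamera;    // Camera A
    public Camera distantCamera; // Camera B

    void Start()
    {
        int distant = 1 << LayerMask.NameToLayer("DistantLayer");

        // Camera A: everything except DistantLayer, drawn on top of Camera B.
        mainCamera.clearFlags = CameraClearFlags.Depth;
        mainCamera.cullingMask = ~distant;
        mainCamera.nearClipPlane = 0.3f;
        mainCamera.farClipPlane = 700f;
        mainCamera.depth = 10;

        // Camera B: only DistantLayer, drawn first with the skybox.
        distantCamera.clearFlags = CameraClearFlags.Skybox;
        distantCamera.cullingMask = distant;
        distantCamera.nearClipPlane = 10000f;
        distantCamera.farClipPlane = 20000f;
        distantCamera.depth = -10;
    }
}
```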

    Bonus:
    Go to Edit -> Project Settings -> QualitySettings
    Set "Shadow Distance" to as low as possible before you notice shadows popping.  You can greatly increase shadow edge quality and use a lower shadow resolution if you reduce your main camera's far distance and reduce the shadow distance.  May not be possible depending on your game.  Play with these settings until you get the balance that is right for you.

    Easiest way to do the shader/material swap:
    You have an invisible sphere, can be static or expanding.
    You have a script that tests to see what objects are within the sphere.
    For each object found within the sphere, swap its material.
    This will allow you to just set up Material A and Material B and you can use whatever shaders you have available.
    Doing this in Update might be slow, so use InvokeRepeating or a coroutine with yield
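    A sketch of those steps, assuming the swappable objects have colliders on their own layer and a MeshRenderer whose material gets swapped (swapping back when an object leaves the sphere is left out for brevity):

```csharp
// Sketch: swap materials on objects found inside an invisible sphere.
using UnityEngine;

public class SphereMaterialSwap : MonoBehaviour
{
    public float radius = 5f;         // sphere size; could be animated to expand
    public Material materialB;        // material applied to objects inside the sphere
    public LayerMask swappableLayers; // limits the overlap test to relevant objects

    void Start()
    {
        // Poll a few times per second instead of every frame.
        InvokeRepeating(nameof(SwapInside), 0f, 0.2f);
    }

    void SwapInside()
    {
        Collider[] hits = Physics.OverlapSphere(transform.position, radius, swappableLayers);
        foreach (Collider hit in hits)
        {
            var mr = hit.GetComponent<MeshRenderer>();
            if (mr != null && mr.sharedMaterial != materialB)
                mr.sharedMaterial = materialB; // Material A -> Material B inside the sphere
        }
    }
}
```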

    There are fancier ways to do the shader but they would be harder to explain.  Their main benefit is on the edge where something is partially in or out of the sphere.

    Hope that helps.  Cheers.