
Sphere and box reflection probes, when do I use them?

zombie420
polycounter lvl 10
I'm finding a lot of info about the differences between the two reflection methods, but not a lot on when one is preferred over the other. I'd also really like to know how they each generate reflections so I can be sensible about optimizations.

Replies

  • Obscura
    Obscura grand marshal polycounter
    Performance: 

    The number of reflection captures on the screen matters, but you can go pretty much crazy; I've never noticed a significant impact, even when using a lot of them. That was with the default cubemap resolution. You can increase it in the project settings if you need to...

    Also, they get baked when you press build lighting or when you update the captures, so normally no recapture happens at runtime. Just sitting in the level they don't have that much impact; the real costs are the file size of the cubemaps and the shader cost of applying them.
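
    If you want to change that resolution from code instead of the project settings panel, the setting is exposed as the r.ReflectionCaptureResolution console variable. Minimal sketch only: the cvar name is the engine's, but calling it from a game module like this (and the example value) is just my assumption for illustration.

        #include "CoreMinimal.h"
        #include "HAL/IConsoleManager.h"

        // Bump the reflection capture cubemap resolution, e.g. from a game module's
        // StartupModule(). r.ReflectionCaptureResolution is the console variable
        // behind the project setting; 128 is the engine default and the value
        // should be a power of two.
        void SetReflectionCaptureResolution(int32 NewResolution /* e.g. 256 */)
        {
            IConsoleVariable* CVar =
                IConsoleManager::Get().FindConsoleVariable(TEXT("r.ReflectionCaptureResolution"));
            if (CVar)
            {
                CVar->Set(NewResolution); // higher = sharper cubemaps, more memory per capture
            }
        }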

    Box one:

    As the name says, it has a box shape and it uses parallax-corrected cubemaps. This is better when you have angular/planar walls and rooms, because the projection matches the geometry more correctly. You can adjust the box extents of the volume to determine the cubemap's dimensions in the world.
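
    If you ever need to drop one in from code instead of dragging it into the level, something like this is the idea. Just a sketch: ABoxReflectionCapture is the engine actor class, but the RoomCenter/RoomExtent inputs and the scale-to-extent mapping are assumptions you'd want to eyeball in the editor afterwards.

        #include "CoreMinimal.h"
        #include "Engine/World.h"
        #include "Engine/BoxReflectionCapture.h"

        // Sketch: cover a rectangular room with one box capture. The influence box
        // is driven by the actor's 3D scale, so size it to roughly match the walls.
        void PlaceRoomBoxCapture(UWorld* World, const FVector& RoomCenter, const FVector& RoomExtent)
        {
            ABoxReflectionCapture* Capture =
                World->SpawnActor<ABoxReflectionCapture>(RoomCenter, FRotator::ZeroRotator);
            if (Capture)
            {
                // Scale 1.0 is not one world unit of extent, so treat this divisor
                // as a placeholder and verify the result in the viewport.
                Capture->SetActorScale3D(RoomExtent / 100.0f);
            }
        }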

    Sphere one:

    Sphere shape; it gives you panorama-style reflections. Not much to say about this one. In angular rooms it works less correctly, so in that case I'd use it as a detail capture with a smaller radius that picks up reflections for smaller objects inside the room.
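
    Same idea for the detail captures, again as a sketch: ASphereReflectionCapture and the InfluenceRadius property are from the engine, but the exact accessor chain can differ between versions, and PropLocation/DetailRadius are made-up inputs.

        #include "CoreMinimal.h"
        #include "Engine/World.h"
        #include "Engine/SphereReflectionCapture.h"
        #include "Components/SphereReflectionCaptureComponent.h"

        // Sketch: a small "detail" sphere capture around a shiny prop inside a
        // larger room, so nearby objects pick up local reflections.
        void PlaceDetailSphereCapture(UWorld* World, const FVector& PropLocation, float DetailRadius)
        {
            ASphereReflectionCapture* Capture =
                World->SpawnActor<ASphereReflectionCapture>(PropLocation, FRotator::ZeroRotator);
            if (Capture)
            {
                if (USphereReflectionCaptureComponent* Sphere =
                        Cast<USphereReflectionCaptureComponent>(Capture->GetCaptureComponent()))
                {
                    Sphere->InfluenceRadius = DetailRadius; // keep it tight around the object
                }
            }
        }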

    With higher roughness you will notice the inaccurate reflections less; with lower roughness you'll see the out-of-place reflections more.


    And here are some resources, but I guess you probably already found this:

    https://docs.unrealengine.com/latest/INT/Resources/Showcases/Reflections/ 
  • zombie420
    zombie420 polycounter lvl 10
    Thanks for the help man! Yeah, I'm working on a VR game and my art director wants me to be pretty sparse with reflection actors, so I'm currently trying to make sure performance is well optimized. Your explanation of how to apply the box and sphere probes is super helpful.
  • Obscura
    Obscura grand marshal polycounter
    Cool... Knowing that it's for VR makes this a little bit different. You should aim for the best fps. But I don't think the reflection captures should be your main concern, because they are nicely optimized by default; I'd just avoid putting one on every object in a bigger environment. In a smaller one, you probably could.

    But that usually won't give you the best result anyway.

    Since you are working with VR, I assume you need to render at a higher screen resolution. What matters in this case is how much screen space the expensive pixel shaders and geometry take up. So you should avoid having many expensive post processes and material effects. You can have some, but having a lot of them on the screen at the same time will make things noticeably worse.

    In post process, depth of field or any iteration-based effect is definitely very expensive at high resolution, depending on the video card. The same goes for anti-aliasing; I know you want MSAA in VR, but you could also try some supersampling methods or some hacks along those lines... Screen space reflections are also iteration based; you can tweak their settings in an ini file.
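
    As a rough sketch of what I mean by tweaking SSR: r.SSR.Quality and r.SSR.MaxRoughness are the actual console variables (you can also put them in DefaultEngine.ini or type them in the console); the specific values below are just examples, not recommendations.

        #include "HAL/IConsoleManager.h"

        // Dial screen space reflections down from code. 0 disables SSR entirely and
        // higher quality levels trace more steps; lowering MaxRoughness stops SSR
        // on rougher surfaces sooner, where the probes take over anyway.
        void TuneScreenSpaceReflections()
        {
            if (IConsoleVariable* Quality =
                    IConsoleManager::Get().FindConsoleVariable(TEXT("r.SSR.Quality")))
            {
                Quality->Set(1);
            }
            if (IConsoleVariable* MaxRoughness =
                    IConsoleManager::Get().FindConsoleVariable(TEXT("r.SSR.MaxRoughness")))
            {
                MaxRoughness->Set(0.4f);
            }
        }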

    Iteration based means it runs in a loop; it needs to execute the same calculation several times per pixel. In the case of a blur with an amount or radius of 8 pixels, you have to run the blur math for every pixel on the screen at least 8 times, and that only gives you a one-dimensional blur in one direction. Sampling both sides in a horizontal and a vertical pass puts you at roughly 32 * screen pixel count samples, plus the math you do on top, or more samples from other passes, depending on what you do... This applies to post process.

    But simpler post process effects should be much cheaper, like coloration effects or chromatic aberration. Chromatic aberration takes 3 samples, so that's 3 * screen pixel count plus the math to calculate the final render.
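
    Just to put numbers on that, here's the back-of-the-envelope version. The 2160x1200 resolution is a made-up HMD example, and the sample counts ignore the actual per-sample math, bandwidth and hardware.

        #include <cstdint>
        #include <cstdio>

        int main()
        {
            const uint64_t PixelCount = 2160ull * 1200ull;        // example HMD render size
            const uint64_t BlurSamples = 32ull * PixelCount;      // ~radius-8 blur, horizontal + vertical pass
            const uint64_t AberrationSamples = 3ull * PixelCount; // one tap per color channel

            std::printf("blur:                 %llu samples per frame\n",
                        static_cast<unsigned long long>(BlurSamples));
            std::printf("chromatic aberration: %llu samples per frame\n",
                        static_cast<unsigned long long>(AberrationSamples));
            return 0;
        }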

    Also avoid massive overdraw.

    And back to the captures: applying them is still a full-screen pixel shader effect at the end of the day, so yeah, I'd still say the number of them on the screen at the same time matters. Maybe you should run some tests, but having a few in a room should be completely fine, and you should look at other possibilities for optimization instead.

  • leleuxart
    leleuxart polycounter lvl 10
    Obscura said:

    And back to the captures: applying them is still a full-screen pixel shader effect at the end of the day, so yeah, I'd still say the number of them on the screen at the same time matters. Maybe you should run some tests, but having a few in a room should be completely fine, and you should look at other possibilities for optimization instead.

    Based on my experience, you'll notice a performance hit from larger capture resolutions with fewer actors before you will from a ton of actors at the 128 pixel resolution. But that's texture memory and not directly related to just the probes; I've just never had any other issues with them.

    DX11 has a limit of 341 (I think?) reflection capture actors on screen at once, but according to Epic they're culled pretty efficiently based on their radius, so you'd have to be trying pretty hard to reach that limit. The big thing to keep in mind is overdraw. Reflection actors should be treated like deferred lights, if possible: you can have many on the screen at once, but the more they overlap, the more expensive those pixels get. Generally though, you're always going to have some overlap, because a common workflow with probes is to have one larger one for the entire room/scene, then slightly smaller ones in key areas, then really small ones for the local reflections where the player will be moving through. That applies to Sphere actors mostly, but if I'm able to, I use a box for the total room actor, then smaller Sphere actors.

    One thing I like to do when placing probes is to disable SSR, because the probes are the fallback when SSR isn't present. It lets me see the probe reflections without having to constantly aim my camera away to break SSR. You can also look at the Reflections view mode, but it can be hard to work in. I like to use it for fixing issues: if you get some random hot spot or color, you can figure out where it's coming from and which probe is responsible.
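
    If you'd rather toggle it than use the view mode, r.SSR.Quality 0 in the console is enough; here's the same thing as a tiny helper, as a sketch only (the cvar is real, restoring to 3 as "the default" is my assumption).

        #include "HAL/IConsoleManager.h"

        // Toggle screen space reflections while auditioning probes: with SSR off,
        // whatever reflections remain are coming from the capture actors.
        void SetScreenSpaceReflectionsEnabled(bool bEnabled)
        {
            if (IConsoleVariable* Quality =
                    IConsoleManager::Get().FindConsoleVariable(TEXT("r.SSR.Quality")))
            {
                Quality->Set(bEnabled ? 3 : 0);
            }
        }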