Hey guys, I have a couple of general questions about optimizing my game. I'm making a desktop PC game.
In the largest scenes, the tri count rarely exceeds 500k until you turn on shadows; then it jumps to 1-3 million. I've disabled shadow casting on all but the essential character and environment pieces. I have baked reflection probes. Most scenes use an HDRI skybox for ambient lighting, plus one or two directional lights set to realtime mode. No GI, realtime or baked. Our environment geometry is bare bones, so the programmer didn't think baking lights would give us much benefit, though neither of us really knows; we're just speculating. First question, then: is that setup about right? And for older machines?
Particle FX add a big wallop to the tri count as well. We've trimmed those down, mostly by reducing the total number of particles and disabling shadow casting. Is there anything else we can do there?
Texture resolution is where all the RAM goes, right? And is that going to be the bottleneck on older video cards? Initially I was using 2k maps for all characters, but that wasn't necessary, so I reduced them to 512. I thought that would be a huge performance boost, but I didn't see much change, likely because I'm on a modern machine. The goal, though, is for the game to run well on older hardware. Is there any way to gauge that without buying a toaster just for testing? Some metrics to look for?
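For a sense of scale on the 2k-to-512 change: an uncompressed RGBA texture costs width × height × 4 bytes, plus roughly a third more for the mip chain, and block compression (DXT/BCn etc.) then divides that by 4-8x. A quick sketch of the uncompressed math (the exact overhead varies by engine and format, so this is an approximation):

```python
def texture_bytes(size, bytes_per_pixel=4, mips=True):
    """Approximate GPU memory for a square uncompressed texture;
    a full mip chain adds ~1/3 on top of the base level."""
    base = size * size * bytes_per_pixel
    return int(base * 4 / 3) if mips else base

MB = 1024 * 1024
print(texture_bytes(2048) / MB)  # ~21.3 MB per 2k map
print(texture_bytes(512) / MB)   # ~1.3 MB per 512 map
```

So each 2k-to-512 swap saves about 20 MB uncompressed (a few MB compressed), which adds up across 30 materials but won't show as a framerate change on a modern card with VRAM to spare. The metric to watch is texture memory in the profiler, not FPS on your dev machine.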
An example scene:
The main thing contributing to the tri count is the characters. They are still being optimized; by the end, their total tris will be reduced by about three quarters. That will take about a week's worth of work, but it's a week I could put into other things if I knew the benefit wouldn't be significant. Does anybody have experience with that? Would cutting a million tris matter much? Or would the time be better spent compositing the textures into atlases? For instance, about 30 character materials at 512 could become about 7 x 1k materials.
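On the atlas math: a 1k atlas holds exactly four 512 maps, so 30 materials pack into 8 atlases rather than 7, assuming square power-of-two maps that tile perfectly (a simplification; real packing may waste some space or mix sizes):

```python
import math

def atlases_needed(num_maps, map_size, atlas_size):
    """How many square atlases of atlas_size are needed to hold
    num_maps square maps of map_size, assuming perfect tiling."""
    per_atlas = (atlas_size // map_size) ** 2
    return math.ceil(num_maps / per_atlas)

print(atlases_needed(30, 512, 1024))  # 8 atlases, 4 maps each
```

Worth noting the win from atlasing is usually not memory (it's the same pixels either way) but fewer materials and draw calls: going from 30 materials to ~8 lets renderers batch together, which tends to matter more on older CPUs than the raw tri count does.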