Hi guys, I'm trying to get some basic knowledge about how the number of polygons can affect the performance of the hardware a game runs on.
I'm also trying to get a basic understanding of how the hardware itself handles the polygons, or in other words, how polygons get rendered in a game engine.
I'm hoping you guys can help me out, since I'm not really sure what to search for to get decent answers. If you have answers or know of any articles on these subjects, I'd appreciate it if you could share them with me.
Sorry if there's already a thread on this; I'm really bad at knowing what to search for.
Thanks in advance, onewinged!
Replies
Polys affect two things, for the most part: they take time for the GPU to draw on screen, and if there are a lot of them they take up VRAM, if it's all unique, uninstanced geo.
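To put rough numbers on the VRAM part, here's a quick back-of-the-envelope sketch; the 32-bytes-per-vertex layout is just an assumption, real vertex formats vary:

```cpp
#include <cstdio>

int main() {
    // Rough VRAM cost of unique (uninstanced) geometry.
    // Assumed layout: position (12 B) + normal (12 B) + UV (8 B) = 32 B/vertex.
    const long long verts = 1000000;   // one million unique vertices
    const long long bytesPerVert = 32;
    std::printf("~%lld MB of vertex data\n",
                verts * bytesPerVert / (1024 * 1024));  // ~30 MB
    // Instancing stores the mesh once and reuses it per copy; a hundred
    // *unique* meshes of this size would store the data a hundred times.
    return 0;
}
```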
A thousand individual triangles will render slower than a thousand-triangle plane. This is because drawing a single triangle takes three vertex operations, but drawing an adjacent triangle takes only one more: 3000 operations versus 1002 at the same triangle count.
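The arithmetic spelled out in a tiny program, if it helps to see it:

```cpp
#include <cstdio>

int main() {
    const int tris = 1000;
    // Unconnected "triangle soup": every triangle supplies its own 3 vertices.
    const int soupOps = tris * 3;          // 3000 vertex operations
    // One connected plane/strip: 3 vertices for the first triangle,
    // then each adjacent triangle reuses two and adds just one.
    const int sharedOps = 3 + (tris - 1);  // 1002 vertex operations
    std::printf("soup: %d ops, shared: %d ops\n", soupOps, sharedOps);
    return 0;
}
```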
We target tri counts because vertex counts are hard to pin down; many things affect them. On top of the triangles themselves, UV seams, broken normals, material boundaries, and tri strips all affect the in-engine vertex count.
http://www.ericchadwick.com/examples/provost/byf1.html
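Here's a small sketch of why those things split vertices; the struct layout and the hard-edged-cube numbers are just an illustrative assumption, not any particular engine's format:

```cpp
#include <cstdio>

// A GPU vertex is more than a position. If two triangles share a position but
// disagree on any other attribute (normal, UV, material), the exporter has to
// emit two separate vertices, so the in-engine count grows past what the
// modeling app shows.
struct Vertex {
    float position[3];
    float normal[3];   // a hard edge ("broken" normal) forces a split here
    float uv[2];       // a UV seam forces a split here
};

int main() {
    // Hypothetical example: a hard-edged cube. The modeling app shows 8 corner
    // positions, but each corner touches 3 faces with 3 different normals,
    // so the engine actually uploads 8 * 3 = 24 unique vertices.
    std::printf("modeling-app corners: 8, in-engine vertices: %d (%zu B each)\n",
                8 * 3, sizeof(Vertex));
    return 0;
}
```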
So as long as a single frame doesn't exceed 100 million triangles, it's good to go (;.
The real issue is the number of unique objects per scene. On PC, because of software overhead, the maximum is about 3-4k. Unique objects are not just geometry; textures, render passes, shaders, etc. also count. So it's about 1-2k unique meshes maximum per frame.
In the end, if you're doing work for PC/next-gen hardware, I wouldn't bother counting every single triangle. It doesn't matter. If you feel a mesh has too many tris, just add a LOD level.
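For what it's worth, here's a minimal sketch of what instancing looks like, assuming an OpenGL 3.3+ context with the mesh's VAO already set up elsewhere; this is how one draw call can cover thousands of copies:

```cpp
#include <GL/glew.h>

// One instanced call draws every copy of the same mesh; per-instance
// transforms live in a buffer the vertex shader indexes via gl_InstanceID.
// Assumes an OpenGL 3.3+ context and a fully configured VAO for the mesh.
void drawInstanced(GLuint meshVao, GLsizei indexCount, GLsizei instanceCount) {
    glBindVertexArray(meshVao);
    // CPU cost is one draw call whether there are 10 copies or 10,000,
    // which is how engines stay under the per-frame unique-object budget.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, instanceCount);
}
```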
To my understanding, the GPU may be limited by something called fill rate, which is the number of pixels per second the GPU can render to the screen (or write to memory)? This would mean that more polygons overlapping on screen and larger screen resolutions require more fill, so the GPU has to process more information, and this in turn reduces framerate?
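Here's how I picture the math; the overdraw factor of 3 is just a number I assumed for illustration:

```cpp
#include <cstdio>

int main() {
    // Illustrative fill-rate estimate (the overdraw figure is an assumption).
    const double pixels   = 1920.0 * 1080.0;  // screen resolution
    const double fps      = 60.0;
    const double overdraw = 3.0;  // each pixel shaded ~3x by overlapping surfaces
    const double required = pixels * fps * overdraw;
    std::printf("~%.2f Gpix/s of fill required\n", required / 1e9);  // ~0.37
    // Doubling resolution or overdraw doubles this, so fill rate can cap
    // framerate even when the triangle count is modest.
    return 0;
}
```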
The CPU may be limited by the number of things that need to be drawn each frame; these submissions are called draw calls. So the more models the CPU has to submit to the GPU, the more time it takes, which causes framerate drops.
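And this is roughly how I imagine the per-draw-call cost; the bindMaterial/bindBuffers/submitDraw names are made-up placeholders standing in for the state setup and submit work a real API (D3D, OpenGL) does per draw:

```cpp
#include <vector>

// Hypothetical placeholder API, not a real library.
struct Mesh { int id; };
void bindMaterial(const Mesh&) {}  // CPU: validate state, update constants
void bindBuffers(const Mesh&)  {}  // CPU: point the driver at vertex/index data
void submitDraw(const Mesh&)   {}  // CPU: encode one draw command for the GPU

// One draw call per mesh: the loop body is pure CPU/driver overhead, so a
// scene with thousands of small unique meshes can go CPU-bound while the
// GPU sits partly idle.
void renderScene(const std::vector<Mesh>& meshes) {
    for (const Mesh& m : meshes) {
        bindMaterial(m);
        bindBuffers(m);
        submitDraw(m);
    }
}

int main() {
    renderScene(std::vector<Mesh>(3000, Mesh{0}));  // 3000 draw calls of CPU work
    return 0;
}
```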
Is this correct, or am I totally wrong?
Game Renderer Articles
http://wiki.polycount.com/CategoryWhitepapers#Game_Renderer_Articles