Topics like this have been on my mind since I got into video game art; I've been dreaming of the perfect engine toolset since I started. Anyway, I have always wondered whether game engines do indeed render faces/textures/lighting, etc. that the camera cannot see, and if so, why. At any given time there can be thousands of polys in the scene, many lighting situations, many textures, but it obviously isn't necessary to render what can't be seen. So why would a scene with thousands of unseen polygons and a small amount in view run slower than a scene with only a few in view at all (if that is the case, of course; I'm not too certain or educated as to how engines go about rendering polygons or what happens under the hood)?
Replies
Read about culling. http://docs.unity3d.com/Documentation/Manual/OcclusionCulling.html
http://en.wikipedia.org/wiki/Hidden_surface_determination#Viewing_frustum_culling
And on that note, what engines other than Unity implement this as a stock feature? Sorry if I sound a little naive; I'm just trying to better understand the inner workings of things before I tackle a personal project.
Here's what CryEngine has to say about culling
http://freesdk.crydev.net/display/SDKDOC4/Culling+Explained
UDK
http://udn.epicgames.com/Three/VisibilityCulling.html
It's a good thing you're thinking about these things. It will give you an advantage to know how things work under the hood; if nothing else, it gives you a better intuition for why stuff happens the way it does.
edit:
Once the renderer/engine knows what it needs to render and what it does not, it stops calculating the hidden stuff, so in general no computation is done at all on occluded geometry.
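To make that concrete, here's a minimal sketch (my own illustration, not any particular engine's code) of the kind of visibility test a renderer runs before anything is sent to the GPU: a bounding sphere checked against the six planes of the view frustum.

    #include <array>

    // A plane in the form ax + by + cz + d = 0, with the normal (a,b,c)
    // normalized and pointing toward the inside of the frustum.
    struct Plane { float a, b, c, d; };

    struct Sphere { float x, y, z, radius; };

    // Signed distance from a point to a plane; negative means "outside".
    static float SignedDistance(const Plane& p, float x, float y, float z) {
        return p.a * x + p.b * y + p.c * z + p.d;
    }

    // Returns false if the sphere lies entirely outside any frustum plane,
    // in which case the object is skipped before any draw call is issued.
    bool IsVisible(const std::array<Plane, 6>& frustum, const Sphere& s) {
        for (const Plane& plane : frustum) {
            if (SignedDistance(plane, s.x, s.y, s.z) < -s.radius)
                return false;  // fully outside this plane: cull it
        }
        return true;  // inside or intersecting the frustum: render it
    }

Engines run cheap tests like this (often on a hierarchy of bounding volumes) every frame; occlusion culling, as in the links above, goes further and also rejects objects hidden behind other geometry.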
@ZacD
Those are some good links. I always say RTFM.
It's possible now with tessellation to have environments with almost a triangle per pixel.
The other thing to keep in mind is that it's not just geometry; it's lighting, effects, post-processing, etc., and all of those are improving too. Dynamic ray-traced lights with soft shadows, real-time reflections, and so on all take computation. So even if you can render 5 million polygons in real time, can you do it in a scene with all the effects just mentioned? It's hard to get your head around, but real-time graphics are just at the beginning of what is possible. It will take a long time yet to get there, and once you're there you have not even scratched the surface of the physics that will bring lifelike environments.
But I get your point. I think things are right about at the point where the choppy polygon look is no longer an issue and you can concentrate on the actual limit surface and the base cage, as opposed to trying to fake a polygon model into looking organic. That's a huge step and definitely a major hurdle to have cleared.
Then there are specialized systems, such as terrain and particles, which generate their render geometry a bit differently from regular characters/objects in the scene.
Traditionally you'd want to minimize the data you move from CPU to GPU memory; on future architectures (PS4...) this likely becomes less of an issue thanks to general full-speed access to common memory. But at some point it's just cheaper to draw "invisible" stuff than to work out exactly what is visible and render only that.
There are several bottlenecks that you have to balance when rendering, just to name a few:
- the CPU/GPU transfer mentioned above
- draw calls and state changes: you want to maximize the work you give the GPU per call, a few big jobs rather than many small ones; otherwise you risk the GPU sitting idle while the CPU creates work for it
- vertex/transform boundedness: the number of vertices being transformed when rendering an object. This can become a problem when the same scene has to be rendered multiple times, even when simple fragment shaders are used
- fragment boundedness: shading that is too heavy per pixel, or too many pixels overdrawn by occluding surfaces, wasting time. This is typically avoided with the "deferred" style approaches or a depth pre-pass (see the sketch after this list)
- ...
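To illustrate the depth pre-pass mentioned in the fragment-boundedness point, here's a hedged sketch in plain OpenGL (the Bind*/DrawSceneGeometry helpers are hypothetical stand-ins for real material and scene-submission code, and an active GL context is assumed): depth is rasterized first with color writes off, then the second pass shades only the fragments that survive the depth test, so the expensive pixel shader never runs on overdrawn surfaces.

    #include <GL/gl.h>

    // Hypothetical helpers, assumed to exist elsewhere in the app:
    void BindDepthOnlyShader();   // trivial shader that only writes depth
    void BindFullShader();        // the expensive material/lighting shader
    void DrawSceneGeometry();     // submits all opaque draw calls

    void RenderWithDepthPrePass() {
        // Pass 1: lay down depth only. Color writes off, cheap shading.
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        BindDepthOnlyShader();
        DrawSceneGeometry();

        // Pass 2: full shading. The depth buffer already holds the nearest
        // surface, so GL_EQUAL rejects hidden fragments before the expensive
        // fragment shader ever runs on them.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_FALSE);
        glDepthFunc(GL_EQUAL);
        BindFullShader();
        DrawSceneGeometry();
    }

The geometry is drawn twice, which costs vertex work, but that trade is usually worth it when per-pixel shading is the bottleneck.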
Agreeing with you: the hardware is pretty decent when it comes to poly performance, so we get much better silhouettes/shapes, but shading complexity is still tough, and animation is another big, complex task.
Can you elaborate on this "full-speed access to common memory", or provide a link to more info? I have not heard much about it, but it sounds interesting.
Here you go:
http://www.gamasutra.com/view/feature/191007/
Although, saying that, I think most engines do pretty well at only filling the pixels that are visible (excluding alpha), BUT you still need to store/transform the verts and polys to get to that point.
Forgive me if I'm talking shit; this is written with a massive hangover.
... but it is always better to see the whole picture. Even if you want to make a character out of 2 million vertices, you suddenly have the problem of sub-pixel triangles. That is, from farther away, suddenly 100 triangles need to be rendered at the position of a single pixel, which introduces new challenges in real-time rendering (maybe dynamic tessellation will be the future).
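As a rough sketch of how dynamic tessellation can avoid those sub-pixel triangles (this uses a standard perspective-projection estimate, not code from any shipped engine; the function names and the tuning constant are my own): project the bounding-sphere radius to pixels and cap the tessellation so triangles never shrink below a few pixels.

    #include <algorithm>
    #include <cmath>

    // Estimate how many pixels an object's bounding sphere covers vertically.
    // fovY is the vertical field of view in radians; screenHeight in pixels.
    float ProjectedRadiusInPixels(float sphereRadius, float distanceToCamera,
                                  float fovY, float screenHeight) {
        // Standard perspective estimate: apparent size falls off as 1/distance.
        return (sphereRadius / (distanceToCamera * std::tan(fovY * 0.5f)))
               * (screenHeight * 0.5f);
    }

    // Clamp the tessellation factor so triangles stay a few pixels across
    // instead of collapsing into 100 triangles per pixel.
    // kMinTrianglePixels is an assumed tuning value, not a standard constant.
    float TessellationFactor(float projectedRadiusPixels, float maxFactor) {
        const float kMinTrianglePixels = 4.0f;
        float affordable = projectedRadiusPixels / kMinTrianglePixels;
        return std::max(1.0f, std::min(affordable, maxFactor));
    }

Hardware tessellation shaders make essentially this decision per patch, so distant geometry is never amplified into more triangles than the pixels it covers can show.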
Some thoughts about the PS4 and 8 GB of shared memory: yes, it is really cool for the CPU and GPU to access memory at the same time, but you still need to consider that memory bandwidth is a very limiting factor. There's no great benefit to having a lot of memory if you can't access it efficiently in a single frame.
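To put rough numbers on that (assuming the commonly quoted ~176 GB/s figure for the PS4's GDDR5): at 60 fps that's about 176 / 60 ≈ 2.9 GB of traffic per frame, shared between every texture fetch, vertex read, and render-target write. So even with 8 GB resident, you can only actually touch a fraction of it in any one frame.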
The memory bandwidth on the PS4 is huge, so that will be a nice advantage over the Xbox One.