So I've been wondering about this for nearly my whole (3D) life.
Which is better, and why should you use one or the other?
The obvious difference is that Face-Centric doesn't need to split edges on UV borders. That's why every modeling tool I know of uses this approach.
Anything else?
Many games use Point-Centric (that's why you end up with more vertices in-game compared to your modeling tool). This can be an issue if you rely on point indices for some effects.
For clarification:
Face-Centric is like the Unreal Engine 1 format, where each face stores its characteristics (which vertices, UVs, colors, etc. it uses) as indices into separate lists that hold the actual values (like the vertex positions).
Point-Centric is like the 3ds format, where every point is defined by its characteristics (position, UV coordinates, color, etc.) and all faces are then defined as just a long list of point indices.
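As a rough sketch (hypothetical structs just to illustrate the two layouts, not any engine's actual on-disk format), it looks something like this:

```cpp
#include <cstdint>
#include <vector>

// Face-centric (Unreal1-style): positions, UVs and colors live in their
// own pools; each face says per corner which entries it uses.
struct FCFace {
    uint32_t pos[3];   // indices into the position pool
    uint32_t uv[3];    // indices into the UV pool
    uint32_t color[3]; // indices into the color pool
};
struct FaceCentricMesh {
    std::vector<float>   positions; // x,y,z triples
    std::vector<float>   uvs;       // u,v pairs
    std::vector<uint8_t> colors;    // r,g,b,a quads
    std::vector<FCFace>  faces;
};

// Point-centric (3ds/Unity-style): every point carries all of its
// attributes; faces are just a long list of point indices.
struct PCVertex {
    float   px, py, pz;
    float   u, v;
    uint8_t r, g, b, a;
};
struct PointCentricMesh {
    std::vector<PCVertex> vertices;
    std::vector<uint32_t> indices; // 3 per triangle
};
```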
Replies
The question is: why use the one or the other (esp. in games)
Where are the graphics programmers?
I only wanted to show what data is stored in a face-centric and a point-centric format based on certain UVs. It doesn't matter where I take the screenshot; I could just have posted the file's byte data.
So I assume you also don't know why some engines use the one or the other format?
An engine might allow you to import a .3DS mesh, but that file is not going to be shipped with your game, because that's an interchange format like FBX, OBJ etc. and it's not as efficient for runtime loading as your own custom format with the data ordered in a way that's easy to load, with as few I/O seeks and reads as possible.
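Just to illustrate what I mean by "ordered in a way that's easy to load" (a made-up layout, not any particular engine's format):

```cpp
// A made-up runtime mesh layout: one fixed-size header followed by tightly
// packed blobs, so the whole file can be pulled in with a single read and
// handed to the GPU almost as-is. Interchange formats like .3DS/FBX/OBJ
// need parsing and conversion first.
#include <cstdint>

struct RuntimeMeshHeader {
    uint32_t magic;            // file identifier / version
    uint32_t vertexCount;
    uint32_t indexCount;
    uint32_t vertexDataOffset; // byte offset of the vertex blob
    uint32_t indexDataOffset;  // byte offset of the index blob
};
// Loading: read the file into one buffer, interpret the first bytes as the
// header, and point the GPU upload at buffer + vertexDataOffset and
// buffer + indexDataOffset.
```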
See here:
But it might have something to do with the vertex processing, as someone mentioned in one of the threads.
At least this would make sense, as from a shader's point of view you work point-centric. You would have to pre-process a face-centric data package before sending it to a vertex shader. This might be more costly than the few additional vertices you get from splitting UV borders.
And as far as I know, vertices at the exact same position don't really make a difference when it comes to the fragment part.
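Something like this is what I mean by pre-processing (a minimal sketch with made-up names; every distinct position/UV pair becomes its own vertex, so points on a UV border get duplicated):

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Corner { uint32_t posIndex; uint32_t uvIndex; }; // one face corner

struct VertexBufferBuild {
    std::vector<Corner>   vertices; // unique (position, uv) combinations
    std::vector<uint32_t> indices;  // 3 per triangle
};

// Turn face-centric corners into the point-centric vertex/index buffers
// a vertex shader expects.
VertexBufferBuild buildPointCentric(const std::vector<Corner>& corners)
{
    VertexBufferBuild out;
    std::map<std::pair<uint32_t, uint32_t>, uint32_t> remap;
    for (const Corner& c : corners) { // corners come 3 per triangle
        auto key = std::make_pair(c.posIndex, c.uvIndex);
        auto it = remap.find(key);
        if (it == remap.end()) {
            // first time we see this (position, uv) pair -> new vertex
            it = remap.emplace(key, static_cast<uint32_t>(out.vertices.size())).first;
            out.vertices.push_back(c);
        }
        out.indices.push_back(it->second);
    }
    return out;
}
```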
Now that I think about it, it seems very obvious that the Direct3D / OpenGL way of processing 3D data has something to do with it.
But @Eric Chadwick you might be right in the sense that it's a bit like left-side vs. right-side driving. I guess part of the story is simply that since there are two possible approaches, both will be (and were) invented at some point.
Would be interesting to know if the way current shading works is just because they had to choose between two equal approaches or because point-centric outperforms face-centric at the hardware level.
This is simple to process because every element looks the same as the last one, and it's reliable because triangles are always planar.
In a situation where you are severely limited on storage and can make assumptions about the geometry, face-centric is probably a very efficient way of defining a mesh (hence Quake/UE using the method for brush geometry). But if you want to define an arbitrary mesh with a face-centric approach, you would need to store loads of extra data to make sure the faces can be recreated correctly (e.g. how would you accurately describe a non-planar 7-sided face?).
You can do this of course, but the data structures associated with each face couldn't be guaranteed to be the same size and shape as any other face, and I believe that would have a pretty serious impact on how quickly you can shovel the data around inside a computer (GPU or otherwise).
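Roughly, the storage difference looks like this (hypothetical structs, purely to illustrate the point):

```cpp
#include <cstdint>
#include <vector>

// Fixed size: a triangle record is always exactly 3 indices and always
// planar, so triangle data can live in one flat, trivially streamable array.
struct Triangle {
    uint32_t v[3];
};

// Variable size: an arbitrary (possibly non-planar) n-gon needs n indices,
// plus whatever extra data is required to rebuild it correctly, so face
// records no longer all have the same size and shape.
struct PolygonFace {
    std::vector<uint32_t> vertexIndices; // size differs per face
};
```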
And @Farfarer I did mention that e.g. Unreal1 stores vertices only once. The whole face-vs-point-centric thing is not about storing redundant data. There is a good reason for Unreal1 to store the data that way. And the reason is that Unreal1 uses vertex animation instead of bone animation. So one frame is defined by a set of vertex positions. All vertices of all frames are just stored in a big list (<modelname>_a.3d).
How the vertices are related to each other (the face-data) is stored in a separate file (<modelname>_d.3d).
You might be right about the drawing, as Unreal1 also does point-centric rendering, if I'm not mistaken.
"All vertices of all frames are just stored in a big list"
"The obvious difference is [...]"
"[...] doesn't need to split edges on UV borders."
If someone wants to dig deeper, they can read about various file formats (I'm just using examples I'm currently dealing with):
Face Centric
http://paulbourke.net/dataformats/unreal/
http://www.stevetack.com/archive/TacksDeusExLab/unr2de/unr2de.html (here you can download a 3ds-to-unreal1-deusex1 format converter in C++)
Point Centric
https://docs.unity3d.com/ScriptReference/Mesh.html (Unity's internal mesh data format)
But because it's you:
"All vertices of all frames are just stored in a big list"
Check the above reference to the Unreal1 format and look at the mesh class that writes the vertices. First the whole frame range is stored (number of frames), then the size of a single frame (data size of one vert * vert count), and then just all the vertices (position only).
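A rough sketch of that layout, based on the referenced description of the _a.3d file (the field sizes and the packed-vertex decode are taken from that page, so double-check against it before relying on them):

```cpp
#include <cstdint>

#pragma pack(push, 1)
struct AnivHeader {
    uint16_t numFrames; // the whole frame range
    uint16_t frameSize; // bytes per frame = data size of one vert * vert count
};
#pragma pack(pop)

struct Vec3 { float x, y, z; };

// Standard Unreal1 vertices are one packed 32-bit int each
// (11 bits X, 11 bits Y, 10 bits Z, fixed point), per the referenced page.
Vec3 unpackVertex(uint32_t p)
{
    Vec3 v;
    v.x = ( p        & 0x7FF) / 8.0f - 128.0f;
    v.y = ((p >> 11) & 0x7FF) / 8.0f - 128.0f;
    v.z = ((p >> 22) & 0x3FF) / 4.0f - 128.0f;
    return v;
}
// After the header the file is just numFrames * (frameSize / 4) packed
// vertices in one big list; every frame is a full set of positions.
```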
"The obvious difference is [...]"
Well, if you have X vertices on the one hand and Y vertices on the other hand, simply based on the logic of the format, then this is somehow obvious, isn't it? At least it seems so to me.
"[...] doesn't need to split edges on UV borders."
Same as above. If the format can store (just from its logic) multiple UVs per point, you don't have to manually duplicate points to make sure every point has just one related UV coordinate.
But you're right. I'm lazy and should add more references to what I'm saying
Yep, indeed. I found an old Polycount post about this topic (with a lot of old and familiar faces):
https://polycount.com/discussion/50588/technical-talk-faq-game-art-optimisation-do-polygon-counts-really-matter
Not sure how much of an issue this is nowadays, but it might be part of the reason why a point-centric approach was favored back then, which leads us to today's standards.
Edit: it also nicely shows that one point can only have one normal. So whenever you split the normals (e.g. smoothing groups), you have to clone vertices in a point-centric model.
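A quick worked example: a hard-edged cube has 8 corner positions, but each corner touches 3 faces with 3 different normals, so a point-centric buffer needs 8 * 3 = 24 vertices even before any UV splits come into it.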
https://devblogs.nvidia.com/introduction-turing-mesh-shaders/
https://www.youtube.com/watch?v=CFXKTXtil34&feature=emb_err_woyt
https://www.youtube.com/watch?v=0sJ_g-aWriQ
Would this, performance-wise, make any previous optimizations less relevant, as UV cuts etc. would now merely act as cut guides?
Does this increase the vertex count or is it just a lookup table that is generated?
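From what I can tell from the linked NVIDIA post, a meshlet is basically that lookup table (a rough sketch, names and exact fields approximate):

```cpp
#include <cstdint>
#include <vector>

// Rough sketch of a meshlet, loosely following the linked NVIDIA post:
// the mesh's vertex buffer stays as-is, each meshlet just carries small
// local index tables into it.
struct Meshlet {
    std::vector<uint32_t> vertexIndices;    // unique indices into the mesh's vertex buffer
    std::vector<uint8_t>  primitiveIndices; // 3 per triangle, indexing into vertexIndices
};
// So the vertex data itself isn't duplicated; the extra cost is the
// per-meshlet index tables generated when the mesh is split up.
```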
Seems you can do whatever results in some data a mesh shader can take as input...
And one of the most important questions... who is going to put all these meshlets back together if someone accidentally drops such a model?