
3D Mesh Formats - Face-Centric or Point-Centric - Pro and Contra

rollin polycounter
So I've been wondering about this for nearly my whole (3D) life.

What is better, and why should you use one or the other?
The obvious difference is that face-centric formats don't need to split edges on UV borders. That's why every modeling tool I know of uses this approach.
Anything else?

Many games use point-centric formats (that's why you end up with more vertices in-game compared to your modeling tool). This can be an issue if you rely on point indices for some effects.

For clarification:
Face-centric is like the Unreal Engine 1 format, where you store each face's characteristics (which vertices, UVs, colors, etc. it uses) as indices into separate lists of those values (like the list of vertex positions).

Point-centric is like the 3DS format, where every point is defined by its characteristics (position, UV coordinates, color, etc.) and all faces are then defined as just a long list of point indices.
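
For illustration, a minimal sketch of the two layouts in C++ (the struct and field names are made up, they don't match any real format spec):

    #include <cstdint>

    // Point-centric (3DS-style): every vertex owns all of its attributes,
    // and faces are just index triples into the one vertex list.
    struct PCVertex { float pos[3]; float uv[2]; uint8_t color[4]; };
    struct PCFace   { uint32_t v[3]; };  // indices into the vertex list

    // Face-centric (UE1-style): positions, UVs and colors live in separate
    // pools, and each face corner indexes into each pool independently,
    // so one position can pair with different UVs on different faces.
    struct FCFace {
        uint32_t pos[3];    // per-corner indices into the position list
        uint32_t uv[3];     // per-corner indices into the UV list
        uint32_t color[3];  // per-corner indices into the color list
    };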

Replies

  • RN sublime tool
    Hi. From what you described, there's not much difference between those two approaches other than how the data is organized. 
    At the very least you're going to need a list of points with attributes (pos, normal, uvs, color etc.) and a list of polygons that are formed by those points.

  • rollin polycounter
    Ah, come on, there is obviously the difference in vertex count depending on the UV cuts.



    The question is: why use the one or the other (esp. in games) 

    Where are the graphics programmers? 
  • RN sublime tool
    That image is from Blender and it's hiding the actual vertex count (other programs do the same). Both cases are point-centric and both those cubes have 24 vertices.

    The image should actually be this:
    Vertex attributes are interpolated across the faces. If you happen to need a sharp change in one or more of those attributes, you're going to have to duplicate the vertex so that one face owns one copy and the other face the other copy.
    Those duplicates will exist in the same location, causing a sharp change in color, normal or UV. This is what a UV seam is, for example. 
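
    To make that concrete, a rough sketch (hypothetical C++ types, not from any particular exporter). A cube is the classic example: it has 8 corner positions, but every corner touches 3 faces with different normals, so the GPU-ready hard-edged cube needs 8 * 3 = 24 vertices.

        struct Vertex { float pos[3], normal[3], uv[2]; };

        // Two triangles meeting along a UV seam share a position and a
        // normal but disagree on the UV there, so the point-centric mesh
        // stores two copies of the seam vertex. Values are made up.
        Vertex seamLeft  = { {1, 0, 0}, {0, 1, 0}, {0.90f, 0.5f} };
        Vertex seamRight = { {1, 0, 0}, {0, 1, 0}, {0.10f, 0.5f} };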

    I had to deal with this process when writing a mesh exporter in MaxScript long ago.
  • rollin polycounter
    You are a bit quick to guess what's going on. 
    I only wanted to show what data is stored in a face-centric and a point-centric format for certain UVs. It doesn't matter where I take the screenshot. I could just as well have posted the file's byte data.

    So I assume you also don't know why some engines use the one or the other format?
  • RN sublime tool
    rollin said:
    why some engines use the one or the other format?
    Depends on what you mean by "use". 
    An engine might allow you to import a .3DS mesh, but that file is not going to be shipped with your game, because that's an interchange format like FBX, OBJ, etc., and it's not as efficient for runtime loading as your own custom format, with the data ordered so it's easy to load with as few I/O seeks and reads as possible.
    See here:
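
    For illustration, a minimal sketch of what "ordered for easy loading" can mean (the header layout here is invented, not any real engine's format): counts first, then tightly packed blocks that go straight into GPU-ready buffers.

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        struct Vertex { float pos[3], normal[3], uv[2]; };

        // Two header fields, then two bulk reads; no parsing, no seeking.
        bool loadMesh(const char* path, std::vector<Vertex>& verts,
                      std::vector<uint32_t>& indices) {
            FILE* f = std::fopen(path, "rb");
            if (!f) return false;
            uint32_t vertCount = 0, indexCount = 0;
            std::fread(&vertCount, sizeof vertCount, 1, f);
            std::fread(&indexCount, sizeof indexCount, 1, f);
            verts.resize(vertCount);
            indices.resize(indexCount);
            std::fread(verts.data(), sizeof(Vertex), vertCount, f);
            std::fread(indices.data(), sizeof(uint32_t), indexCount, f);
            std::fclose(f);
            return true;
        }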
  • rollin polycounter
    Thx for the links, they were quite informative, but they still didn't really address the point.

    But it might have something to do with the vertex processing, as someone mentioned in one of the threads.
    At least this would make sense, as from a shader point of view you work point-centric. You would have to pre-process a face-centric data package before sending it to a vertex shader, and this might be more costly than the few additional vertices you get when splitting UV borders.
    And as far as I know, vertices at the exact same position don't really make a difference when it comes to the fragment part.

    Now that I think about it, it seems very obvious that the Direct3D/OpenGL way of processing 3D data has something to do with it.
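
    That pre-processing step would look roughly like this (a sketch with made-up types; real importers do the same welding in smarter ways): expand every face corner to a full vertex, then merge corners whose attributes all match and build an index buffer. The merge test is also exactly where the extra vertices on UV and normal splits come from.

        #include <cstdint>
        #include <cstring>
        #include <map>
        #include <vector>

        struct Vertex { float pos[3], normal[3], uv[2]; };

        // Ordering so Vertex can be a map key; a byte-wise compare is
        // crude but fine for a sketch (real code would hash or quantize).
        struct VertexLess {
            bool operator()(const Vertex& a, const Vertex& b) const {
                return std::memcmp(&a, &b, sizeof(Vertex)) < 0;
            }
        };

        // corners: one expanded Vertex per face corner (3 per triangle).
        void weld(const std::vector<Vertex>& corners,
                  std::vector<Vertex>& outVerts,
                  std::vector<uint32_t>& outIndices) {
            std::map<Vertex, uint32_t, VertexLess> seen;
            for (const Vertex& c : corners) {
                auto it = seen.find(c);
                if (it == seen.end()) {
                    it = seen.emplace(c, (uint32_t)outVerts.size()).first;
                    outVerts.push_back(c);
                }
                outIndices.push_back(it->second);
            }
        }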
  • Eric Chadwick
    I think what you're observing as differences are due to limits in the different formats you've chosen. Assuming you're starting from the same source mesh, different formats will toss out data just because they don't support it, which will affect file size and (likely) vertex count as well.
  • rollin polycounter
    Maybe I didn't make it clear enough that this is a purely programming-side question.

    But @Eric Chadwick you might be right, in the sense that it's a bit like left-side vs. right-side driving. I guess part of the story is simply that because there are two possible approaches, both will be (and were) invented at some point. 
    Would be interesting to know if the way current shading works is just because they had to choose between two equal approaches or because point-centric outperforms face-centric at the hardware level.
  • poopipe grand marshal polycounter

    In a normal setup where a GPU draws triangles, every triangle is defined by 3 unique points (in order) and every point has the same-size data structures (normals, UV data, etc.).
    This is simple to process because everything is the same as the last thing, and it's reliable because triangles are always planar.

    In a situation where you are severely limited on storage and you can make assumptions about the geometry, face-centric is probably a very efficient way of defining a mesh (hence Quake/UE using the method for brush geometry). But if you want to define an arbitrary mesh with a face-centric approach, you would need to store a load of extra data to make sure the faces can be recreated correctly (e.g. how would you accurately describe a non-planar 7-sided face?).

    You can do this of course, but the data structures associated with each face couldn't be guaranteed to be the same size and shape as any other face, and I believe that would have a pretty serious impact on how quickly you can shovel the data around inside a computer (GPU or otherwise).
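
    A sketch of that difference (made-up structs, just to illustrate): the point-centric records are fixed-size, so the whole mesh is two flat arrays and vertex N is one multiply away, while a face record for arbitrary polygons can't have a fixed size.

        #include <cstdint>
        #include <vector>

        // Fixed-size records: every triangle is processed exactly like
        // the last one, and vertex N sits at byte offset N * 32.
        struct Vertex   { float pos[3], normal[3], uv[2]; };  // always 32 bytes
        struct Triangle { uint32_t v[3]; };                   // always 12 bytes

        // Arbitrary-polygon face record: each face needs its own corner
        // count and index lists, so finding face N means walking the data
        // or keeping a separate table of per-face offsets.
        struct Face {
            std::vector<uint32_t> positions;  // one entry per corner
            std::vector<uint32_t> uvs;        // one entry per corner
        };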


  • Farfarer
    Vertices are the fundamental building blocks of polygonal 3D.

    Polygons are really little more than just a set of indices into a vertex array.

    I don't really understand what you mean by being face-centric; it's not efficient for each polygon to uniquely carry around the full information of each vertex that makes it up. You'd need to carry around a ton of duplicated data and keep it all in sync between "the same" vertices of each polygon. Any programmer would rightfully slap you for doing that :P
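
    Some rough numbers on that duplication (illustrative only, assuming a 32-byte vertex and a dense closed mesh where each vertex is shared by about 6 triangles):

        #include <cstdio>

        int main() {
            // V verts, ~2V triangles (Euler), 32-byte vertex, 4-byte index.
            const long V = 10000, T = 2 * V, vertBytes = 32, idxBytes = 4;
            const long indexed    = V * vertBytes + 3 * T * idxBytes; // shared verts
            const long duplicated = 3 * T * vertBytes;                // per-face copies
            std::printf("indexed: ~%ld KB, duplicated: ~%ld KB\n",
                        indexed / 1024, duplicated / 1024);  // ~546 KB vs ~1875 KB
        }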

    What you describe with Unreal 1 sounds like it's meant for optimised storage of the source file (only storing unique values). Part of me suspects that's not the data structure used when editing the meshes, and it's almost definitely not the data structure used for drawing them.
  • rollin polycounter
    @poopipe You are right when thinking about arbitrary faces, but what if they all consist of 3 points (by definition)?

    And @Farfarer, I did mention that e.g. Unreal 1 stores vertices only once. The whole face-vs-point-centric thing is not about storing redundant data. There is a good reason for Unreal 1 to store the data that way: Unreal 1 uses vertex animation instead of bone animation, so one frame is defined by a set of vertex positions. All vertices of all frames are just stored in one big list (<modelname>_a.3d).
    How the vertices are related to each other (the face data) is stored in a separate file (<modelname>_d.3d).
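
    Roughly like this, as a sketch (the field names are mine, not the real UE1 spec): the animation file is one flat position array indexed by frame, and the mesh file supplies the face data once for all frames.

        #include <cstdint>
        #include <vector>

        struct AnimData {                    // contents of <modelname>_a.3d
            uint32_t numFrames;
            uint32_t numVerts;               // the same for every frame
            std::vector<float> positions;    // numFrames * numVerts * 3 floats

            // position of vertex v in frame f
            const float* vertex(uint32_t f, uint32_t v) const {
                return &positions[(f * numVerts + v) * 3];
            }
        };
        // <modelname>_d.3d then holds the face/UV data, which never changes
        // across frames and only references vertices by index.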

    You might be right about the drawing, as Unreal 1 also does point-centric rendering, if I'm not mistaken.
  • pior grand marshal polycounter
    I think you'd be much better off actually posting practical examples of this so-called "face centric" file structure, as opposed to basing everything on loose assumptions like:

    "All vertices of all frames are just stored in a big list"
    "The obvious difference is [...]"
    "[...] doesn't need to split edges on UV borders."




  • rollin polycounter
    Hehe @pior, come on.. you really think I'm just assuming here? I'm not trying to convince anyone, just hoping for some insight and good ideas. 
    If someone wants to dig deeper, they can read up on various file formats (I'm just using examples I'm currently dealing with).

    Face Centric
    http://paulbourke.net/dataformats/unreal/
    http://www.stevetack.com/archive/TacksDeusExLab/unr2de/unr2de.html  (here you can download a 3ds-to-Unreal1/DeusEx1 format converter in C++)

    Point Centric
    https://docs.unity3d.com/ScriptReference/Mesh.html  (Unity's internal mesh data format)

    But because it's you:

    "All vertices of all frames are just stored in a big list"
    Check the above reference to the Unreal 1 format and look at the mesh class that writes the vertices. First the whole frame range is stored (number of frames), then the single-frame data size (data size of one vert * vert count), and then just all the vertices (position only).
    "The obvious difference is [...]"
    Well, if you have X vertices on the one hand and Y vertices on the other - simply based on the logic of the format - then this is somewhat obvious, isn't it? At least it seems so to me.
    "[...] doesn't need to split edges on UV borders."
    Same as above. If the format can store (just by its logic) multiple UVs per point, you don't have to manually duplicate points to make sure every point has just one related UV coordinate.

    But you're right. I'm lazy and should add more references to what I'm saying :)
  • gnoop polycounter
    I recall video cards doing so-called tri-stripping with vertices that share the same UV. We had a tool that showed the actual triangle strips 20 years ago, and it didn't work across UV borders.

    As I understand it, triangle strips are a kind of optimization where only a single new vertex has to be processed for each additional triangle.
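
    Expanding a strip back into plain triangles shows the idea (a small sketch; the alternating winding is the standard convention): n triangles need only n + 2 indices instead of 3n, but the sharing only works while consecutive triangles can reuse vertices, which a UV seam (with its duplicated vertices) breaks.

        #include <cstdint>
        #include <vector>

        // A strip {0,1,2,3,4} encodes triangles (0,1,2), (2,1,3), (2,3,4).
        std::vector<uint32_t> expandStrip(const std::vector<uint32_t>& s) {
            std::vector<uint32_t> tris;
            for (size_t i = 2; i < s.size(); ++i) {
                if (i % 2 == 0) tris.insert(tris.end(), {s[i-2], s[i-1], s[i]});
                else            tris.insert(tris.end(), {s[i-1], s[i-2], s[i]});
            }
            return tris;
        }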
  • rollin polycounter
    @gnoop
    Yep indeed. I found some old polycount post about this topic (with a lot of old and familiar faces)
    https://polycount.com/discussion/50588/technical-talk-faq-game-art-optimisation-do-polygon-counts-really-matter

    Not sure how much of an issue this is nowadays, but it might be part of the reason why a point-centric approach was favored back then - which leads us to today's standards.

    Edit: it's also nice to show that one point can only have one normal. So whenever you split the normals (e.g. with smoothing groups), you have to clone vertices in a point-centric model.
  • rollin polycounter
    So I wanted to revive this topic with the introduction of meshlets in DX12 and mesh shaders.

    https://devblogs.nvidia.com/introduction-turing-mesh-shaders/

    https://www.youtube.com/watch?v=CFXKTXtil34&feature=emb_err_woyt

    https://www.youtube.com/watch?v=0sJ_g-aWriQ

    Would this - performance-wise - make any previous optimizations less relevant, as UV cuts etc. would now merely act as cut guides?

    Does this increase the vertex count, or is it just a lookup table that is generated? 
    Seems you can do whatever results in some data you can feed into a mesh shader...
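
    For reference, a meshlet is roughly this kind of structure, going by the NVIDIA article linked above (the C++ names here are mine, it's just a sketch): the vertex data itself isn't duplicated, each meshlet just re-indexes a small window of the global vertex buffer with cheap local indices.

        #include <cstdint>
        #include <vector>

        struct Meshlet {
            std::vector<uint32_t> vertexRefs; // up to ~64 refs into the global vertex buffer
            std::vector<uint8_t>  triangles;  // up to ~126 tris, 3 local 8-bit indices each
        };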

    And one of the most important questions.. Who is putting all these meshlets back together if one accidentally drops such a model?