Why is a triangle used instead of a quad? Why is a triangle used in the first screenshot instead of a quad as in the second screenshot? I thought diamond-shaped quads could be used?
First of all, in the engine everything is triangulated.
But also because it's safe. If you use that quad, chances are the triangulation will get flipped on export to the engine. You really do not want that one flipped there.
We use triangles because triangles are always planar.
This factors hugely into calculations about depth, sort order, etc.
A quad can be non-planar, which makes things a lot less certain.
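To make that concrete, here's a minimal sketch (plain Python, all function names hypothetical) of a planarity test: a quad is planar exactly when the scalar triple product of the three edge vectors from one corner is zero, i.e. the fourth vertex lies in the plane of the first three.

```python
# Hypothetical sketch: checking whether a quad v0-v1-v2-v3 is planar.
# The quad is planar when the scalar triple product of the edge
# vectors from v0 is (near) zero.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def is_planar_quad(v0, v1, v2, v3, eps=1e-6):
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    e3 = tuple(b - a for a, b in zip(v0, v3))
    return abs(dot(cross(e1, e2), e3)) < eps

# Flat quad in the XY plane -> planar
print(is_planar_quad((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)))    # True
# Same quad with one corner lifted -> non-planar
print(is_planar_quad((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.5)))  # False
```

A triangle never fails this test, which is why it sidesteps the whole problem.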
Are there any resources you can recommend for this?
A game engine would turn it into triangles anyway. At least ours does. It's just that in a place like that it could end up as a very thin triangle, so better to set it manually first. Thin triangles are considered bad for game renderers.
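One common way to quantify "thin" is a triangle's smallest interior angle: slivers have a tiny minimum angle, which hurts rasterization and interpolation quality. A rough sketch (the function name and thresholds are just for illustration):

```python
# Hypothetical sketch: measuring triangle "thinness" by its smallest
# interior angle, via the law of cosines at each corner.
import math

def min_angle_deg(a, b, c):
    la = math.dist(b, c)  # side opposite corner a
    lb = math.dist(a, c)  # side opposite corner b
    lc = math.dist(a, b)  # side opposite corner c
    angles = []
    for opp, s1, s2 in ((la, lb, lc), (lb, la, lc), (lc, la, lb)):
        angles.append(math.degrees(
            math.acos((s1*s1 + s2*s2 - opp*opp) / (2*s1*s2))))
    return min(angles)

print(min_angle_deg((0, 0), (1, 0), (0.5, 1)))     # healthy triangle
print(min_angle_deg((0, 0), (1, 0), (0.5, 0.02)))  # sliver triangle
```

The auto-triangulated arch corner is exactly the kind of place a sliver can appear, which is why setting the edge manually is safer.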
People often say triangulation is important for accurate normal map baking. It's not true, actually; only vertex normals are important.
Imo it's perfectly fine to have quads too in many areas. They're easier to cope with.
Don't forget to distinguish between modeling and the final result. Lots of modeling tools require a quad topology. Edge loops may fail on a tri topology. So model in quads, and for game needs check for correct triangulation in your engine, which can mean triangulating the geometry before you export. This was best practice for quite a while, before game engines learned to deal with quads too.
Which engine actually does use quads as the final rendering geometry?
Not sure about Unreal, but Unity can import and render quads.
When we talk about final rendering though, even modelers like Blender, Maya, or Max use triangulated geometry under the hood. It's the most stable geometry.
Everything uses triangles under the hood; that's the only geometry the GPU understands. Rasterization deals with groups of 3 verts. There is no notion of an edge in the mesh data.
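For illustration, this is roughly what the data a GPU receives looks like (a generic sketch, not any particular engine's format): four vertices plus an index buffer of two triangles. There is no "quad" or "edge" anywhere in it.

```python
# Generic sketch of GPU-side mesh data: a quad is just four vertices
# and six indices forming two triangles.
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]

# One possible triangulation: split along the v0-v2 diagonal
indices = [0, 1, 2,   0, 2, 3]

# The rasterizer consumes the index buffer three entries at a time
triangles = [tuple(vertices[i] for i in indices[t:t + 3])
             for t in range(0, len(indices), 3)]
for tri in triangles:
    print(tri)
```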
Indeed Unity can interpret quads (even n-gons?), like Blender or Maya, though I'm not sure what the point is. (Edit: for tessellation!) But it's still all triangles for the GPU. It's also true that quads are easier for artists to handle when they need to quickly select loops, or when they're needed for subD, for instance.
The issue when you don't triangulate is that the software in your pipeline doesn't always triangulate n-gons the same way. And that's a pain in the ass.
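A tiny sketch of why that matters (hypothetical names, toy data): the same non-planar quad split along its two possible diagonals gives two genuinely different surfaces. Sampling the point at the quad's center, which lies on whichever diagonal was chosen, shows the disagreement directly.

```python
# Hypothetical sketch: the two possible triangulations of a
# non-planar quad v0-v1-v2-v3 disagree about where the surface is.

def tri_point_at_center(quad, diagonal):
    v0, v1, v2, v3 = quad
    # The quad's parametric center lies on either diagonal, so it is
    # the midpoint of whichever diagonal the triangulation used.
    if diagonal == "02":   # triangles (v0, v1, v2) and (v0, v2, v3)
        a, b = v0, v2
    else:                  # diagonal "13": (v0, v1, v3) and (v1, v2, v3)
        a, b = v1, v3
    return tuple((p + q) / 2 for p, q in zip(a, b))

# Non-planar quad: one corner lifted off the plane
quad = [(0, 0, 0.0), (1, 0, 0.0), (1, 1, 1.0), (0, 1, 0.0)]
print(tri_point_at_center(quad, "02"))  # -> (0.5, 0.5, 0.5)
print(tri_point_at_center(quad, "13"))  # -> (0.5, 0.5, 0.0)
```

Two applications that silently pick different diagonals are literally rendering different geometry, which is exactly what bites you in a baking pipeline.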
"People often say triangulation is important for accurate normal map baking . It's not true actually. Only vertex normals are important."
Not true: the triangulation dictates how surface normals are interpolated between vertex normals.
The normals are calculated for every pixel, with a barycentric interpolation of the three vertex normals of each triangle (aka Phong shading).
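As a minimal sketch of that interpolation (hypothetical names, not any renderer's actual code): the shaded normal at a point inside a triangle is the barycentric blend of the three vertex normals, renormalized. Which triangle a pixel falls in determines which three normals get blended, so the diagonal choice changes the result.

```python
# Hypothetical sketch of per-pixel normal interpolation (Phong shading):
# blend the three vertex normals with barycentric weights, renormalize.
import math

def interpolate_normal(n0, n1, n2, w0, w1, w2):
    # barycentric weights satisfy w0 + w1 + w2 == 1
    n = tuple(w0*a + w1*b + w2*c for a, b, c in zip(n0, n1, n2))
    length = math.sqrt(sum(x*x for x in n))
    return tuple(x / length for x in n)

# Two vertex normals tilted apart, one pointing straight up
n0 = (-0.6, 0.0, 0.8)
n1 = ( 0.6, 0.0, 0.8)
n2 = ( 0.0, 0.0, 1.0)

# At the triangle's centroid all weights are 1/3; the tilts cancel
print(interpolate_normal(n0, n1, n2, 1/3, 1/3, 1/3))
```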
To add to what's already been said and hopefully clarify a few things: there are a number of different reasons why an artist would choose to use a triangle instead of a quad when creating an arch. The relevance of any specific answer really depends on the model's intended use and the desired outcome for the project. Looking at those two images, it's not entirely clear whether this model is just a block out or a final low poly.
If it's a block out then it's possible that the triangles provide some additional edges that are required for subsequent modeling operations. Quad corners work well for most linear edge transitions and some types of curved intersections but they can also disrupt the curvature of adjacent edge loops. N-gon corners produce a similar effect.
While it can be beneficial to model with quads and n-gons, triangles can also be useful for pulling support loops inwards along a curve. Below is an example of how quad and n-gon corners can create subtle smoothing artifacts near curved surfaces. Converting these corner faces to triangles produces edge tension that can help reduce the visibility of smoothing artifacts near surfaces that transition from flat to curved.
If it's a low poly model then the mesh may need to be triangulated in certain areas to make seam placement and UV unwrapping easier. While most applications can display quads and n-gons, there are a variety of different triangulation methods and there's no guarantee that the order of the underlying edges and faces will be the same in every application. Which is why it's generally considered best practice to triangulate the final low poly model before exporting for baking and texturing. This will ensure that the model's geometry and smoothing behavior is consistent as it moves through the production pipeline.
Here's an example of just how varied the triangulation methods can be when moving the same mesh between two different applications. This sort of geometry mismatch isn't a deal breaker for most simple modeling operations or basic material authoring, but it can cause significant issues when working with tangent space normal textures.
Severe gradation in the normal map is often caused by the differences between the shading behavior of the high poly and low poly surfaces. This additional color information is specific to the state of the meshes during the bake. Alterations to the triangulation order of the low poly mesh will tend to produce normal artifacts, unless the difference between the shading behavior and the baked normal information is resolved.
Which is why normal bakes from meshes without any smoothing splits tend to be sensitive to triangulation changes. The example below shows how the direction and intensity of the normal values change, based on the low poly's shading splits and edge order.
While it's possible to find workarounds for actively controlling mesh triangulation and using smoothing groups, these types of "Never fail, quick and easy!" solutions tend to ignore the fundamentals of established, current generation workflows. Granted, there are some situations where it may be beneficial or necessary to use single smoothing groups, add support loops to the low poly, or rely heavily on normal data transfers, but a lot of the application specific workarounds tend to fall apart when working with a team that uses a wide variety of tools and regularly moves models between applications.
Creating a low poly model with consistent shading behavior goes a long way towards making tangent space normal bakes a one or two shot deal. Adding smoothing groups is a large part of that, but controlling mesh triangulation is another important element in the optimization process. A lot of current, industry standard applications use MikkTSpace, which makes it relatively easy to create assets in a synced tangent workflow. Yes, there are edge cases, but that really isn't a good excuse for actively avoiding contemporary tools and workflows for content destined for popular engines like UE or Unity.
While this following example is far from best practice, it does demonstrate that using smoothing splits, in conjunction with a synced normal workflow, is fairly robust when it comes to accidental or unintentional changes in the triangulation after baking. All of the meshes have the same hard edges and only use the texture baked from the quad / n-gon low poly. Much less impressive when considering that the application automatically triangulated the low poly during the baking process but still a reasonable demonstration of how important it is to control the shading behavior with smoothing splits.
As shown below, when using a single smoothing group, even quad geometry doesn't guarantee that the triangulation order won't affect the surface shading and normal data. Using hard edges [smoothing groups] with a synced normal workflow tends to produce clean bakes that can be more resilient but curved surfaces and single smoothing group workflows tend to be more sensitive towards changes to the edge and face order after baking.
Another thing to consider about triangulation order is that the placement of the edges can skew surface details. Which, though it's often quite subtle, can have a negative impact on the overall quality of the bakes. In the example below, the diagonal surface elements have a slight distortion wherever a low poly edge crossed over a change in the high poly's surface.
While it may be possible to resolve some of these issues with a skew map or custom low poly cage, it's worth noting that the low poly that has a triangulation order with similar diagonal edges does have less overall distortion. Simple stuff like this can help avoid unnecessary complexity and extra work that comes with trying to resolve visible baking errors that are caused by letting the software choose the triangulation method on critical areas.
In a workflow with synced tangent space and smoothing splits, edge triangulation in low value areas can be largely ignored, provided it remains consistent during baking, texturing, and importing. However, it is worth running some quick test bakes to evaluate how the current low poly triangulation is affecting areas that are right in front of the player.
The mesh used to demonstrate these principles is fairly simple, so the issues are quite subtle, but carelessness towards these sorts of things tends to compound in a way that produces a lot of small errors that bring down the overall quality of a project, often with no tradeoff for a tangible benefit. Below is a short animation that compares a few different triangulation methods and better highlights how these changes impact the details baked into the normal texture.
There are a lot of great resources on these topics here on polycount and on the help pages of the various texturing applications. Here are a few links that are a good jumping off point for additional self-guided learning:
Thanks Frank. Great examples of how topology influences shading and normal mapping :)
"Indeed Unity can interpret quads (even n-gons?), like Blender or Maya, though I'm not sure what the point is."
The point is that most of the time you can simply export the quad mesh that you have modeled and get away with it. It might even work with a dynamic mesh. Nowadays some game characters are made of 20k-plus tris. You might not even notice a wrongly bending quad there. That's a workflow issue. It saves time and energy. And time is money :)
That everything works with triangles under the hood at the GPU level is another issue. My comment was aimed at the statement that game engines only understand tris, which is no longer valid. Some still do, some don't.
The model compiler part of the engine will spit a list of triangles out to the renderer; what the various editors show you is irrelevant.
As mentioned in an earlier post, there's no guarantee that a quad/ngon will be subdivided in the same way by two different applications. The only way to be sure is to manually define edges.
As tiles said, this is less important when you have dense geometry as the errors will be far less obvious.
Not sure if Frank covered it (probably did, I don't read his posts cos they're always completely correct ;) )... but
You should triangulate your target mesh before baking normals, and you must use a mesh with identical triangulation in engine. The reference frame for calculating normal direction is influenced by the presence of edges, and you can easily end up with borked shading if they differ. It's the same underlying thing that causes UV interpolation issues as well.
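To sketch why the reference frame depends on the triangles (hypothetical names; real bakers such as MikkTSpace additionally average and orthonormalize per vertex, but the per-triangle step looks roughly like this): the tangent is solved from each triangle's positions and UVs, so a different diagonal means a different frame for decoding the baked normals.

```python
# Hypothetical sketch: deriving a per-triangle tangent from positions
# and UVs. Changing the triangulation changes which triangles exist,
# and therefore changes the tangent frame at the shared vertices.

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    e1 = tuple(b - a for a, b in zip(p0, p1))
    e2 = tuple(b - a for a, b in zip(p0, p2))
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    # Solve the 2x2 system mapping UV deltas onto position deltas
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    return tuple(r * (dv2 * a - dv1 * b) for a, b in zip(e1, e2))

# Unit quad mapped 1:1 to UV space: the tangent points along +U
print(triangle_tangent((0, 0, 0), (1, 0, 0), (1, 1, 0),
                       (0, 0), (1, 0), (1, 1)))  # -> (1.0, 0.0, 0.0)
```

With skewed UVs or a non-planar quad, the two candidate triangulations no longer produce matching tangents, and a normal map baked against one decodes incorrectly against the other.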
This does not apply to offline render engines such as Pixar's RenderMan:
Primitives are tessellated into micropolygons.
Based on our game, I have never seen a single issue from quads exported with split edges only where they typically should be. But my quads are usually planar and the export is from Max only. Pretty much the same as in this gif https://us.v-cdn.net/5021068/uploads/L1SEPP7O9J38/example-bakingtriangulation-animation.gif posted above. So I stopped caring long ago. Well, I still put a Triangulate modifier in Blender just in case. Never seen issues there either.