@DiamondDog A couple of posts above there are a few discussions about blocking out shapes, matching segments and creating base geometry that's suitable for subdivision or re-meshing. Even though these aren't the exact same shapes, the same basic principles apply, so it takes a bit of work to figure out how to adapt the existing information to a specific shape. Definitely take some time to skim through the pages and look for similar shapes.
That said: extruding off the existing grid is fine for basic shapes, but with complex shape intersections (like that brass deflector) there's just too much going on for it to work well. Instead, take the time to develop each shape independently, rotate it into position and merge it with the rest. Blocking out the shapes, using Booleans to keep parts separate and saving incremental versions before making destructive edits will make the whole process a lot easier if anything goes wrong.
Here's one way to approach the basic shape of the brass deflector: start with a basic primitive, rotate it to match the reference, scale the peak down until it matches, round over the top edges, round over the peak and make any final adjustments. Once all of the features match the references, the shape can be added to the base model with a Boolean operation.
Here's an example of how extruding shape profiles from existing geometry can be combined with Boolean operations to quickly and accurately block out complex shapes. Start with features that have known dimensions and use those to scale the rest of the mesh. One big advantage to creating shapes separately is that any adjustments to those parts won't require rebuilding the entire mesh.
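If it helps to script or repeat the join step, here's a minimal sketch of the Boolean setup using Blender's Python API. The object names are placeholders for whatever the blockout parts are actually called:

```python
# Minimal sketch, assuming Blender's bpy API and placeholder object names.
import bpy

base = bpy.data.objects["Deflector_Base"]   # hypothetical base mesh
part = bpy.data.objects["Deflector_Peak"]   # hypothetical shape to merge in

# Add a Boolean modifier that unions the separate shape into the base model.
boolean = base.modifiers.new(name="MergePeak", type='BOOLEAN')
boolean.operation = 'UNION'     # 'DIFFERENCE' would cut the shape out instead
boolean.object = part
boolean.solver = 'EXACT'        # slower, but handles coplanar faces better

# Keeping the modifier live keeps the blockout non-destructive; apply it only
# once the shapes are final.
```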
To recap:
Block out the basic shapes for scale and create more complex shapes independently before joining them to the base mesh.
Research existing solutions for similar topology and make samples to see if they solve any issues that come up while modeling.
Maintain some kind of incremental history of changes to the shapes in case a part needs rework.
Splitting and straightening the UV islands to increase the overall texture density makes sense in theory, but in practice it's a bit more complicated than that, since there are multiple factors to consider.
In the overall scheme of things a single road wheel is a very small part of a tank and it's unlikely that every part of every road wheel would have its own unique UV islands, much less its own unique texture set. The reality is that having more UV islands requires more padding, which in turn either reduces the overall texture density by taking up extra space or reduces the overall efficiency by requiring additional texture sheets.
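To put rough numbers on the padding cost, here's a small illustrative calculation. The island sizes and padding value are made up for the comparison, not measured from the model:

```python
# Minimal sketch: illustrative arithmetic only, the island sizes and padding
# value are invented rather than measured from the model.
padding = 8  # pixels of padding on each side of an island

def usable_fraction(islands, island_px):
    used = islands * island_px ** 2
    padded = islands * (island_px + 2 * padding) ** 2
    return used / padded  # fraction of the padded footprint that's usable

print(f"1 x 512 px island:   {usable_fraction(1, 512):.1%} usable")
print(f"16 x 128 px islands: {usable_fraction(16, 128):.1%} usable")
```

The exact numbers depend on the packer and the island shapes, but the trend holds: splitting the same area into more islands spends a larger share of the texture on padding.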
It's common practice to either pack all of the running gear
together or to pack it into the extra space around the hull and turret
sheets. So, at a minimum, it's much more efficient to reuse the same geometry and UV islands for as much of the running gear as possible. Reducing the number of unique road wheels and reducing the number of UV splits in each road wheel will also help increase the texture density, since space that would otherwise be wasted on padding can be used on the single reusable UV layout. If the UVs need to support camouflage patterning then the face of each road wheel can have its own unique UV island while the rest of the road wheel reuses the shared UV layout.
Here's a comparison of a few different UV unwrapping and packing strategies. The overall texture density (at the same texture size) is largely the same, but it's worth noting the differences in usable UV space and the amount of UV distortion. Starting from the left, the efficiency of the used area is: 39.84%, 38.24%, 37.87%, 36.8%.
Based on these results, having fewer UV splits and using fewer UV islands allowed the individual UV islands to be slightly larger and the packing was also more efficient since it left additional space for other items. Since the overall texture density is roughly the same the first UV layout in the example below would have more room for other parts or it would fit into less space on an existing UV layout.
The UV splits for the second and fourth examples are identical; the only difference is that the packing algorithm used to generate the second example is able to nest UV islands inside of each other. This is something the default UV packer in Blender cannot do, so it's definitely worth investing in a good UV packing add-on.
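For anyone who wants to measure their own packing tests the same way, here's a minimal sketch that sums the UV-space area of every face with bmesh. It's run from Edit Mode on the unwrapped mesh and only measures coverage, so it won't detect overlapping islands:

```python
# Minimal sketch, assuming Blender's bpy/bmesh API and an unwrapped mesh
# open in Edit Mode.
import bpy
import bmesh

obj = bpy.context.edit_object
bm = bmesh.from_edit_mesh(obj.data)
uv_layer = bm.loops.layers.uv.active

def uv_face_area(face):
    # Shoelace formula over the face's UV coordinates.
    uvs = [loop[uv_layer].uv for loop in face.loops]
    area = 0.0
    for i in range(len(uvs)):
        a = uvs[i]
        b = uvs[(i + 1) % len(uvs)]
        area += a.x * b.y - b.x * a.y
    return abs(area) * 0.5

used = sum(uv_face_area(f) for f in bm.faces)
print(f"UV space used: {used * 100:.2f}% of the 0-1 tile")
```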
When it comes to baking there are a few things to consider: hard edges will reduce the gradation of the normal bakes but they will also increase the number of unique UV islands, which can reduce the overall texture density through additional padding space. Straightening curved UV strips will generally increase the texture density by making it easier to pack things closer together, but this extreme deformation of the UVs can introduce distortion and other artifacts. These strategies are not mutually exclusive, so it doesn't have to be an all or nothing approach. There's an acceptable amount of normal gradation and there's an acceptable amount of texture distortion. It's important to run test bakes and figure out where the best balance is between UV efficiency and baking quality.
Here's an example of two different approaches to balancing these elements:
The low poly model in the top row has hard 90° shape transitions, matched with hard edges and UV splits and the UV layout has both straightened strips and nested circles. There's very little gradation or distortion in the normal map but the UV layout takes up most of the texture space.
The low poly model in the bottom row has hard 90° shape transitions (with hard edges and UV splits) around the outside faces and uses softer tapered transitions on the inside corners. There's some gradation and very little distortion in the normal map, but the UV layout only takes up about two thirds of the texture space. The placement of the edge splits complements the UV seams and the result could be further improved by using weighted normals, but the point was to demonstrate that even in a worst case scenario the gradation doesn't render the result completely unusable.
What makes sense comes down to the project goals and to what comes out of the baking tests. It's really important to take the time to run some unwrapping, packing and baking samples to get a feel for what's going to make the most sense for balancing the quality and efficiency.
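For running those test bakes in Blender, here's a minimal sketch of a selected-to-active normal bake with Cycles. The object names, cage extrusion and margin values are placeholders, and the low poly is assumed to already have an image texture node selected in its active material to receive the bake:

```python
# Minimal sketch, assuming Blender's Cycles baker and placeholder object names.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.bake.use_selected_to_active = True
scene.render.bake.cage_extrusion = 0.02   # illustrative, push past the high poly
scene.render.bake.margin = 16             # padding / dilation in pixels

high = bpy.data.objects["HighPoly"]       # hypothetical names
low = bpy.data.objects["LowPoly"]

bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

bpy.ops.object.bake(type='NORMAL', normal_space='TANGENT')
```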
Here's a few examples of how other professional artists have unwrapped the running gear on their tank models:
@LilacGear Welcome to Polycount. A few posts above there are a couple of discussions about adding or subtracting shapes from cylinders and curved surfaces, and back on page 168 there's a little write-up that covers working with a similar shape on a curved surface. The principles covered in these discussions are the same ones you'll want to apply to this shape.
Here's a couple of examples of how this could be done. Start off by blocking out the shapes and adjusting the number of segments on the cylinder so there's room for the support loops to end on the existing edge segments of the cylinder walls. From there it's a simple matter of adding the necessary support loops and continuing the geometry on the inside of the angled cut-out.
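Picking the segment count up front is mostly simple arithmetic. Here's an illustrative example; the angles and loop counts are made up rather than taken from the reference:

```python
# Minimal sketch: choose a cylinder segment count so the cut-out's support
# loops land on existing wall segments. All values are illustrative.
cutout_angle = 30.0        # angular width of the cut-out in degrees
loops_across_cutout = 4    # edge segments wanted across that width

segment_angle = cutout_angle / loops_across_cutout   # 7.5 degrees per segment
segments = round(360.0 / segment_angle)              # 48 wall segments
print(f"{segments}-segment cylinder, {segment_angle:.2f} degrees per segment")
```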
The corners can be sharpened by:
Sliding the edge segments closer to the center of the corners.
Adding additional support loops to the inside of the corner and merging them down to the outside vert in the corner's support loop.
Increasing the number of segments that make up the walls of the cylinder.
No need to overcomplicate the mesh though. As long as the additional geometry remains relatively consistent and parallel to the face that makes up the edge segment there shouldn't be any major smoothing issues.
Which approach and how much geometry is required will depend on what the model will be used for and how closely it will be viewed. There's more information on this topic in the last couple of pages so it's definitely worth skimming through and reading some of the other posts here.
To recap:
Search for existing solutions to similar problems and see if they can help resolve the issues with your shape.
Block out the major forms, adjust the segment counts of adjacent shapes so the existing geometry can be used as support loops.
Leave room for support loops that run across curved shapes by placing intersecting geometry between the edge segments.
@DiamondDog The answer is it depends. Is there a workflow specific technical limitation or requirement for all quad grid topology?
If this type of topology isn't a hard technical requirement then the answer is: yes, there are more efficient topology layouts and modeling strategies. Leaving extra geometry (that's a byproduct of modeling with automated tools) in a working mesh is one thing, but manually adding in all of that extra geometry is something that should be avoided. Flat areas are arguably the least affected by topology changes. There's minimal benefit to extending edge loops across flat areas and in most cases this will only complicate things and slow down any edits that need to be made in the future.
If this is a low poly mesh that needs to be cleaned up and optimized then a quick way to do this is to run a limited dissolve, triangulate the mesh and convert the triangles to quads. It's fine to leave the mesh messy until all of the shapes are finalized, but there can also be some advantages to periodically cleaning sections of the mesh with a limited dissolve. Unless there's a specific shading or normal baking error, it's best to simplify the low poly mesh as much as possible while still maintaining the overall shapes and desired smoothness.
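Here's a minimal sketch of that cleanup pass using Blender's Python operators, run in Edit Mode with the geometry selected. The angle thresholds are illustrative starting points, not fixed values:

```python
# Minimal sketch, assuming Blender's bpy API in Edit Mode.
import bpy
import math

bpy.ops.mesh.select_all(action='SELECT')
# Collapse coplanar faces and collinear edges below the angle threshold.
bpy.ops.mesh.dissolve_limited(angle_limit=math.radians(1.0))
# Triangulate, then rebuild quads so the remaining geometry is reorganised
# into a cleaner layout.
bpy.ops.mesh.quads_convert_to_tris(quad_method='BEAUTY')
bpy.ops.mesh.tris_convert_to_quads(face_threshold=math.radians(40.0),
                                   shape_threshold=math.radians(40.0))
```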
If this is a high poly cage mesh that won't need further editing then it might be fine the way it is. However, if this is still a work in progress or if this needs to be sent to another artist then it would probably be best to selectively dissolve some of the excess edges. An important concept behind subdivision modeling is being able to control complex shapes and smoothing behavior with a minimal amount of geometry.
Here's a comparison of a similar shape with different topology layouts. The column on the left is the working mesh topology. The column in the middle is the final mesh topology and the column on the right is a shaded subdivision preview.
The first row is a low poly model that has a very simple and loosely organized working topology. To maintain consistency between applications the mesh shading is controlled by hard edges and the final mesh is triangulated before it's exported.
The second row is a high poly model that has a loose grid topology layout and the support loops are added to the working topology with a modifier that's controlled by edge weights. Support loops and working topology can be adjusted with minimal effort until the modifier is applied. Triangles and n-gons are kept within the flat areas and have a minimal impact on the mesh when subdivision is applied.
The third row is a high poly model that has support loops placed in the working topology. This provides direct control over the supporting geometry but also increases the complexity and amount of effort required to make any major changes to the shapes. Triangles and n-gons on the flat areas of the mesh subdivide without causing any major smoothing artifacts.
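For the modifier-driven approach in the second row, here's a minimal sketch of the stack using Blender's Python API. The object name is a placeholder and the width and segment values are illustrative; edges still need bevel weights assigned for the modifier to pick them up:

```python
# Minimal sketch, assuming Blender's bpy API and a placeholder object name.
import bpy

obj = bpy.data.objects["HighPolyCage"]   # hypothetical object name

bevel = obj.modifiers.new(name="SupportLoops", type='BEVEL')
bevel.limit_method = 'WEIGHT'   # only weighted edges receive support loops
bevel.width = 0.002             # illustrative loop width
bevel.segments = 2

subsurf = obj.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.levels = 2
subsurf.render_levels = 3
```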
A great resource for learning about different modeling and topology strategies is the "How do I model this?" thread in the technical talk section. Here's a link to a couple of recent discussions about topology layouts, triangles, n-gons and quad grids in that thread:
@Yogifi There's a lot to unpack but the answer to most of these questions is: it often depends.
There's a significant amount of overlap between poly modeling and
subdivision modeling but they are still distinct processes that require
slightly different approaches. What's "right" or "best" depends entirely on how a
model will be used and what the limitations are. As an example: the requirements for a
VFX model that will be used in a close up in a feature film will be
quite different from the requirements for a background prop in a game. This is something that Arrimus mentions briefly at several points in his video.
One of the major issues with trying to extrapolate some kind of perfect rule set from general technical overviews is that, without any context to guide why and when something should be done, it becomes very attractive to try and use relatively meaningless technical statistics as some kind of quality indicator. This is a primary factor in the perpetuation of some long standing misconceptions about certain geometry elements and modeling strategies.
A good example of this is when an artist spends very little time on blocking out accurate shapes and instead jumps right into adding minor surface details and manually moving edge loops around for hours to maintain all quad geometry.
This raises the question: where is the added value of the all quad topology if the model's shape is inaccurate and it took significantly longer to make? Subdivision modeling is a commodity and developing an efficient workflow will help an artist bring value to their skill set. Excessive manual topology rework and manually replicating work done by automated tools is something to avoid whenever possible.
An all quad topology layout isn't inherently good
and a topology layout with a lot of triangles and n-gons isn't
inherently bad. It's much more important to judge a given model by
specific project
goals and evaluate how well the various geometry elements were used to
optimize the return on the time spent.
If all quad topology is a hard technical requirement then Boolean re-meshing workflows, including the one Arrimus covered in his video, can have a significant speed advantage over traditional subdivision workflows. (Though it's still worth mentioning that art fundamentals and an understanding of the basic concepts behind subdivision modeling are still an important part of this workflow.) Here's a couple of recent discussions with an artist who starts out with a Boolean re-meshing process and moves into a subdivision modeling workflow. This is a great example of how the overlapping modeling skills can transfer over.
Taking a broad view of things: the basic concepts, technical
fundamentals and best practices for poly modeling and subdivision
modeling are pretty cut and dry. This provides a solid foundation and is
a great place to start learning about how things work. However, there
comes a point where learning to be effective with subdivision modeling
becomes less about how things are done and more about why things are
done and when things are done. Building up this knowledge requires researching and practicing and can take some time to develop. It's all about picking up the tools and screwing up until the screw ups start to resemble completed work.
With the subdivision models: a lot of the artifacts and smoothing issues are caused by mismatched curve segments (where there isn't enough adjacent geometry to support the shapes) and incorrect edge placement to control the subdivision smoothing behavior. Placing the edges on the outside segments of the corners does result in all quads but it also causes the center segment of the corner to collapse inwards. In general, when it's not possible to match the adjacent segments, it's better to have the center corner segment connected to the nearby geometry and pull the shape outwards.
Flat surfaces are least likely to be affected by topology changes and are a good place to end extra edge loops. If smoothing artifacts are appearing on or around flat surfaces then it's likely that either the geometry elements aren't completely coplanar or there's a missing support loop around the shape transitions. Sharp transitions between surfaces should generally be supported by edge loops on both sides. It looks like there may be some spots where the geometry isn't fully supported and this could be causing some of the smoothing issues around the perimeter of the shapes.
With the low poly shading and topology: it's important to use both the geometry and sharp edges to control the smooth shading behavior. Adjusting the geometry, triangulation and placing hard edges will help resolve some of the distortion in the low poly model. It's also important to optimize the low poly mesh by removing any geometry that doesn't add to the visible profiles of the major shapes.
Here's an example of all smooth shaded, smooth shaded + hard edges and all smooth shaded + chamfer.
Here's an example that compares two different topology layouts used to optimize the geometry for high poly (top) and low poly (bottom) models. The n-gons and center segment edge connections in the corners facilitated the use of edge weights to control a bevel / chamfer modifier to quickly and automatically add all of the supporting geometry without causing any major smoothing artifacts.
There's a number of different strategies for creating low poly models. Starting with a fairly detailed base model that can be developed into both the subdivision cage mesh and the final low poly seems to be one of the more efficient approaches. It's fine to keep the low poly topology organized with quads and n-gons while editing, but it's important to triangulate the mesh before exporting. Different applications can use different triangulation methods, and without a set triangulation order there can be a triangulation mismatch between programs, which can cause issues with baked normals.
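One way to lock the triangulation in Blender is a Triangulate modifier applied on export. Here's a minimal sketch; the export path is illustrative and the object is assumed to be selected and active:

```python
# Minimal sketch, assuming Blender's bpy API and a selected low poly object.
import bpy

obj = bpy.context.active_object
tri = obj.modifiers.new(name="LockTriangulation", type='TRIANGULATE')
tri.quad_method = 'BEAUTY'        # or 'FIXED' for a predictable split direction
tri.keep_custom_normals = True

# Export with modifiers applied so the baked and in-engine triangulation match.
bpy.ops.export_scene.fbx(filepath="//lowpoly_triangulated.fbx",
                         use_selection=True,
                         use_mesh_modifiers=True)
```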
Here's an example of the source low poly and two different triangulation methods:
A couple of great resources to learn more about normal baking are Joe Wilson's baking tutorial
and Alec Moody's video about controlling shading behavior.
@kuronekoshiii The sharper corners will need support loops to prevent the shapes from collapsing, and the smoothing artifacts across the rest of the shape can be resolved by removing the excess geometry and edge loops that cross over into the curved shapes. Flat surfaces are largely unaffected by triangles and n-gons so they can be a good place to end the extra edge loops.
Try simplifying the starting geometry, matching the segment counts of adjacent shapes whenever practical and adding support loops around major shape transitions with a bevel / chamfer operation or modifier. Here's an example of what this could look like:
@navneethdodla94 When comparing modeling workflows for film and games there's some overlap but there's also some significant differences in the technical requirements for each. Within each industry there's also a range of acceptable quality levels. Much of this variance is based on how the models are used and what the budgets are. What's acceptable for one company or project may not be acceptable somewhere else.
If the goal is to become part of a particular industry, specialization, company or team then it's important to research who is leading in that field and emulate what they are doing. Comparing and contrasting the processes and work of artists in each field can highlight where and how the technical requirements of each discipline are different. As an example, compare the wire frames of film and game models:
Andrew Hodgson is an artist that works in film and shares a lot of his modeling process and philosophy:
Matthias Develtere's subdivision modeling work for Wolfenstein II is an example of how n-gons and triangles can be used to speed up the production process with a minimal impact on the overall shape accuracy and surface quality:
For high poly game models in general: as long as there aren't any specific technical requirements for all
quads and as long as the mesh is easy to work with and subdivides
cleanly then a base mesh or cage mesh with n-gons is passable. Creating
the high poly model is just a part of the asset creation process. It's
not the entire process itself and it's unlikely the player will ever see
the high poly model. Other parts of the process (the low poly model,
normal bakes, textures, lighting, animation and presentation) will end
up directly in front of the player and are (as a whole) arguably more
important.
There are a few discussions on n-gons in this thread, and one thing that comes up repeatedly is that many of the misconceptions about subdivision modeling are based on the abstraction and oversimplification of specific, contextual technical issues, limitations and requirements. Often the nuanced context of these situations is stripped away, and this can lead to the perpetuation of nonsensical and counterproductive rules. This is why it's important for artists who are learning this skill to take the time to research and verify what's being said.
Another issue is that time, tools and topology are relatively easy to quantify and it can be attractive to look at these factors as a primary benchmark for judging quality. In theory this is fine for process improvement but it can also become a trap where an artist will judge the result of someone else's work based entirely on how well the rules were followed while excusing deficiencies in their own results solely because they followed the rules they made up.
For the shape question: Jan has pretty much covered it all, but it's also worth mentioning that it's important to match the segment counts of the adjacent shapes. This will help reduce the chance of smoothing artifacts appearing on more complex shapes.
The hard edges all appear to have matching UV splits so that doesn't seem to be the issue. Increasing the texture size captured more details but even at 8k there were significant baking artifacts. It appears that the pixel density is quite low in some areas and this means there's a limited ability to capture the relatively sharp edges of the high poly mesh.
Softening the edges on the high poly mesh removed most of the baking artifacts at a 4k resolution. Increasing the antialiasing samples had a minimal effect on the baking results. There are some areas where the low poly mesh appears to fall inside the high poly mesh and this does have an effect on the quality of the baking results.
Overall it seems that these artifacts are a combination of issues related to the texture density and the sharpness of the high poly mesh. Here's a comparison of the sharper (left column) and softer (right column) high poly meshes baked to the same low poly mesh with 2k, 4k and 8k texture sizes.
Here's a closeup of the low poly with 4k normal textures baked from the sharp high poly mesh. The pixelated normal artifacts are still visible along some of the edge segments.
Here's a closeup of the low poly with 4k normal textures baked from the soft high poly mesh. Most of the pixelated normal artifacts are resolved and the edge segments appear to be relatively clean.
Where to go from here really depends on the scope of the project: time deadline, remaining budget, intended use, quality level and technical limitations. If the edges must remain this sharp then you'll have to find some way to balance the texture size and the ability to capture the minimal edge width on the high poly. Softening the edges on the high poly will translate to a better bake at a lower texture resolution.
Wasted space in the UV layout could be minimized by straightening some of the curved UV islands and using one of the advanced UV packing add-ons to generate a more efficient UV pack. Another option is to remove any internal faces that won't be seen. All of this would help increase the pixel density without increasing the texture size.
If the asset's resource footprint is a concern then it might also be worth dissolving some of the excess geometry on the flat areas. This could be done automatically with the limited dissolve operation or manually with the other dissolve operations. It also looked like there were a few spots with unwelded vertices and flipped normals, so try running a merge by distance operation and recalculating the normals outwards.
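That cleanup can be done in a few operator calls if it needs to be repeated. Here's a minimal sketch, run in Edit Mode on the low poly; the merge threshold and dissolve angle are illustrative:

```python
# Minimal sketch, assuming Blender's bpy API with the low poly in Edit Mode.
import bpy
import math

bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.0001)        # merge by distance
bpy.ops.mesh.normals_make_consistent(inside=False)   # recalculate outside
bpy.ops.mesh.dissolve_limited(angle_limit=math.radians(1.0))
```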
Try increasing the texture size, sampling count and dilation value. If that doesn't resolve the issue then try adjusting the softness of the high poly mesh and see if that improves the bake.
Both Obscura and rollin have provided valid solutions and feedback. There's often more than one way to resolve an issue and what's right for a given project will depend on finding a suitable balance between expedience and efficiency. Understanding the root causes and first principles is beneficial going forward as these issues will manifest again in different situations and specific technical constraints may require other solutions.
To illustrate Rollin's point: the skewing can be resolved without adding additional geometry.
Figure 1: Automatic mesh triangulation produces skewing where the low poly's horizontal edges cross diagonal elements on the high poly.
Figure 2: Blending vertex and face normal directions with a skew map in Toolbag resolves the issue without requiring additional geometry.
Figure 3: Adjusting the low poly triangulation to match the diagonal elements resolves the skewing without requiring additional geometry.
Figure 4: Adding additional edge loops that match the diagonal elements resolves the skewing but increases the resource footprint.
Looks like the order of operations is important: Bevel the horizontal segments first. Dissolve and bridge where necessary. Bevel the vertical segments last to add the final curve.