I'll be using this sketchbook thread as a place to warehouse write-ups that wouldn't really fit anywhere else. Most of the content will cover concepts and fundamentals related to hard surface modeling with some broader commentary on the creative process.
@chien Thanks for the question. There are a few links to some write-ups about common baking artifacts in this post and some examples of how triangulation affects normal bakes in this discussion. Additional content about these topics is planned but, since most of the tools used for baking are already well documented, the focus will tend to be on application-agnostic concepts.
Block out, base mesh, and bakes.
This write-up is a brief look at using incremental optimization to streamline the high poly to low poly workflow. Optimizing models for baking is often about making tradeoffs that fit the specific technical requirements of a project. Which is why it's important for artists to learn the fundamentals of how modeling, unwrapping and shading affect the results of the baking process.
Shape discrepancies that cause the high poly and low poly meshes to intersect are a common source of ray misses that generate baking artifacts. This issue can be avoided by blocking out the shapes, using the player's viewpoint as a guide for placing details, then developing that block out into a base mesh for both models.
Using a modifier based workflow to generate shape intersections, bevels, chamfers and round overs makes changing the size and resolution of these features as easy as adjusting a few parameters in the modifier's control panel. Though the non-destructive operations of a modifier based workflow do provide a significant speed advantage, elements of this workflow can still be adapted to applications without a modifier stack. Just be aware that it may be necessary to spend more time planning the order of operations and saving additional iterations between certain steps.
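For anyone using Blender, this kind of setup can even be scripted. Below is a minimal sketch using Blender's Python API; the modifier name, width and segment values are just placeholders.

```python
import bpy
from math import radians

# Minimal sketch: a parametric round over setup on the active object.
obj = bpy.context.active_object

bevel = obj.modifiers.new(name="RoundOvers", type='BEVEL')
bevel.limit_method = 'ANGLE'    # only bevel edges sharper than the angle limit
bevel.angle_limit = radians(45)
bevel.width = 0.002             # feature size: one number resizes every round over
bevel.segments = 2              # feature resolution: add or remove geometry at will
```

Changing the width or segment values pushes the update through the whole stack, which is the main draw of keeping these operations live instead of applying them.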
It's generally considered best practice to distribute the geometry based on visual importance. Regularly evaluate the model from the player's in-game perspective during the block out. Shapes that define the silhouette, protruding surface features, and parts closest to the player will generally require a bit more geometry than parts that are viewed from a distance or obstructed by other components. Try to maintain relative visual consistency when optimizing the base mesh by adding geometry to areas with visible faceting and removing geometry from areas that are often covered, out of frame or far away.
For subdivision workflows, the block out process also provides an excellent opportunity to resolve topology flow issues, without the added complexity of managing disconnected support loops from adjacent shapes. Focus on creating accurate shapes first then resolve the topology issues before using a bevel / chamfer operation to add the support loops around the edges that define the shape transitions. [Boolean re-meshing workflows are discussed a few posts up.]
Resolution constraints make it difficult to accurately represent details that are smaller than individual pixels, which is a common cause of baking artifacts. E.g. texel density limits what details are captured by the baking process and size on screen limits what details are visible during the rendering process. Which is why it's important to check the high poly model, from the player's perspective, for potential artifacts. Especially when adding complex micro details.
Extremely narrow support loops are another common source of baking artifacts that also reduce the quality of shape transitions. Sharper edge highlights often appear more realistic up close but quickly become over sharpened and allow the shapes to blend together at a distance. Softer edge highlights tend to have a more stylistic appearance but also produce smoother transitions that maintain better visual separation from further away.
Edge highlights should generally be sharp enough to accurately convey what material the object is made of but also wide enough to be visible from the player's main point of view. Harder materials like metal tend to have sharper, narrower edge highlights and softer materials like plastic tend to have smoother, wider edge highlights. Slightly exaggerating the edge width can be helpful when baking parts that are smaller or have less texel density. This is why it's important to find a balance between what looks good and what remains visible when the textures start to MIP down.
By establishing the primary forms during the block out and refining the topology flow when developing the base mesh, most of the support loops can be added to the high poly mesh with a bevel / chamfer operation around the edges that define the shapes. An added benefit of generating the support loops with a modifier based workflow is they can be easily adjusted by simply changing the parameters in the bevel modifier's control panel.
Any remaining n-gons or triangles on flat areas should be constrained by the outer support loops. If all quad geometry is required then the surface topology can be adjusted as needed, using operations like loop cut, join through, grid fill, triangles to quads, etc. Though surfaces with complex curves usually require a bit more attention, for most hard surface models, if the mesh subdivides without generating any visible artifacts then it's generally passable for baking.
Since the base mesh is already optimized for the player's in-game point of view, the starting point for the low poly model is generated by turning off any unneeded modifiers or by simply reverting to an earlier iteration of the base mesh. The resolution of shapes still controlled by modifiers can be adjusted as required then unnecessary geometry is removed with edge or limited dissolve operations.
It's generally considered best practice to add shading splits*, with the supporting UV seams, then unwrap and triangulate the low poly mesh before baking. This way the low poly model's shading and triangulation is consistent after exporting. When using a modifier based workflow, the limited dissolve and triangulation operations can be controlled non-destructively. Which makes it a lot easier to iterate on low poly optimization strategies.
*Shading splits are often called: edge splits, hard edges, sharp edges, smoothing groups, smoothing splits, etc.
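When working in Blender, for example, both the limited dissolve and the triangulation can be kept live at the end of the modifier stack. A rough sketch, with placeholder values:

```python
import bpy
from math import radians

obj = bpy.context.active_object

# Non-destructive limited dissolve: planar decimation removes edges between
# faces that are within the angle limit of being coplanar.
dissolve = obj.modifiers.new(name="LimitedDissolve", type='DECIMATE')
dissolve.decimate_type = 'DISSOLVE'
dissolve.angle_limit = radians(5)

# Lock the triangulation in before export so it matches what was baked.
obj.modifiers.new(name="Triangulate", type='TRIANGULATE')
```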
Low poly meshes with uncontrolled smooth shading often generate normal bakes with intense color gradients that correct for the inconsistent shading behavior. Some gradation in the baked normal textures is generally acceptable but extreme gradation can cause visible artifacts. Especially in areas with limited texel density.
Marking the entire low poly mesh smooth produces shading that tends to be visually different from the underlying shapes. Face weighted normals and normal data transfers compensate for certain types of undesired shading behaviors but they are only effective when every application in the workflow uses the same custom mesh normals. Constraining the smooth shading with support loops is another option. Though this approach often requires more geometry than simply using shading splits.
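As a side note, Blender ships with a weighted normal modifier that approximates face weighted normals. A hedged sketch; the auto smooth line only applies to Blender 4.0 and earlier, and the caveat about every application in the workflow needing the same custom normals still applies:

```python
import bpy

obj = bpy.context.active_object
obj.data.use_auto_smooth = True   # required for custom normals in Blender 4.0 and earlier

# Face weighted normals: large flat faces dominate the vertex normal direction.
wn = obj.modifiers.new(name="FaceWeighted", type='WEIGHTED_NORMAL')
wn.mode = 'FACE_AREA'
wn.keep_sharp = True              # don't override existing hard edges
```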
Placing shading splits around the perimeter of every shape transition does tend to improve the shading behavior and the supporting UV seams help with straightening the UV islands. The tradeoff is that every shading split effectively doubles the vertex count for that edge and the additional UV islands use more of the texture space for padding. Which increases the resource footprint of the model and reduces the texel density of the textures.
Adding just a smoothing split or UV seam to an edge does increase the number of vertices by splitting the mesh but once the mesh is split by either there's no additional resource penalty for placing both a smoothing split and UV seam along the same edge. So, effective low poly shading optimization is about finding a balance between maximizing the number of shading splits to sharpen the shape transitions and minimizing the number of UV seams to save texture space.
Which is why it's generally considered best practice to place mesh splits along the natural breaks in the shapes. This sort of approach balances shading improvements and UV optimization by limiting smoothing splits and the supporting UV seams to the edges that define the major forms and areas with severe normal gradation issues.
Smoothing splits must be paired with UV splits to provide padding that prevents baked normal data from bleeding into adjacent UV islands. Minimizing the number of UV islands does reduce the amount of texture space lost to padding but also limits the placement of smoothing splits. Using fewer UV seams also makes it difficult to straighten UV islands without introducing distortion. Placing UV seams along every shape transition does tend to make straightening the UV islands easier and is required to support more precise smoothing splits but the increased number of UV islands needs additional padding that can reduce the overall texel density.
So, it's generally considered best practice to place UV seams in support of shading splits, while balancing UV distortion against the amount of texture space lost to padding. Orienting the UV islands with the pixel grid also helps increase packing efficiency. Bent and curved UV islands tend to require more texel density because they often cross the pixel grid at odd angles. Which is why long, snaking strips of wavy UV islands should be straightened. Provided the straightening doesn't generate significant UV distortion.
UV padding can also be a source of baking artifacts. Too little padding and the normal data from adjacent UV islands can bleed over into each other when the texture MIPs down. Too much padding and the texel density can drop below what's required to capture the details. A padding range of 8-32px is usually sufficient for most projects. A lot of popular 3D DCCs have decent packing tools or paid add-ons that enable advanced packing algorithms. Used effectively, these types of tools make UV packing a highly automated process.
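A rough way to reason about padding is that each MIP level halves the texture resolution, so the effective padding halves along with it. A quick back-of-the-envelope sketch:

```python
def padding_at_mip(padding_px, mip_level):
    # Each MIP level halves the resolution, so the padding shrinks with it.
    return padding_px // (2 ** mip_level)

for pad in (8, 16, 32):
    print(pad, [padding_at_mip(pad, m) for m in range(5)])
# 8  -> [8, 4, 2, 1, 0]
# 16 -> [16, 8, 4, 2, 1]
# 32 -> [32, 16, 8, 4, 2]
```

So 8px of padding is down to a single pixel by MIP 3, which is why assets that are often seen at a distance tend to need the higher end of that range.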
It's generally considered best practice to optimize the UV pack by adjusting the size of the UV islands. Parts that are closer to the player tend to require more textel density and parts that are further away can generally use a bit less. Of course there are exceptions to this. Such as areas with a lot of small text details, parts that will be viewed up close, areas with complex surface details, etc. Identical sections and repetitive parts should generally have mirrored or overlapping UV layouts. Unless there's a specific need for unique texture details across the entire model.
Both Marmoset Toolbag and Substance Painter have straightforward baking workflows with automatic object grouping and good documentation. Most DCC applications and popular game engines, like Unity and Unreal, also use MikkTSpace. Which means it's possible to achieve relatively consistent baking results when using edge splits to control low poly shading in a synced tangent workflow. If the low poly shading is fairly even and the hard edges are paired with UV seams then the rest of the baking process should be fairly simple.
Recap: Try to streamline the content authoring workflow as much as possible. Especially when it comes to modeling and baking. Avoid re-work and hacky workarounds whenever possible. Create the block out, high poly and low poly model in an orderly workflow that makes it easy to build upon the existing work from the previous steps in the process. Remember to pair hard edges with UV seams and use an appropriate amount of padding when unwrapping the UVs. Triangulate the low poly before exporting and ensure the smoothing behavior remains consistent. When the models are set up correctly, the baking applications usually do a decent job of taking care of the rest. No need for over-painting, manually mixing normal maps, etc.
Additional resources:
https://polycount.com/discussion/163872/long-running-technical-talk-threads#latest
Subdivision sketches: automotive details.
This write-up is a brief look at how creating accurate shapes can make it easier to generate geometry that maintains a consistent surface quality, while also producing quad grid topology that subdivides smoothly. Which is important for surfaces with smooth compound curves and objects with highly reflective materials. Something that's fairly common on smaller automotive parts and finely machined mechanical components.
Constraining features are key shapes that heavily influence the amount of geometry required to generate clean, quad grid intersections with the adjacent geometry. Blocking out these features first allows the remaining shapes to be developed using a segment matching strategy. Which makes adding the support loops a lot easier, since most of the topology flow issues are resolved during the block out.
The following example shows what this process could look like when modeling hard surface components with a mix of rounded shapes and sharp transitions. Start blocking out the spindle nut by identifying the constraining features. Focus on modeling the shapes accurately then adjust the number of segments in the rest of the model to match. Apply the booleans then add the support loops with a bevel / chamfer operation.
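In Blender terms, the boolean step could look something like the sketch below. The object names are hypothetical and the modifier stays live until it's time to apply:

```python
import bpy

# Hypothetical object names for the base mesh and the boolean cutter.
base = bpy.data.objects["SpindleNut"]
cutter = bpy.data.objects["SlotCutter"]

boolmod = base.modifiers.new(name="CutSlot", type='BOOLEAN')
boolmod.operation = 'DIFFERENCE'
boolmod.object = cutter

cutter.display_type = 'WIRE'   # keep the cutter editable but visually unobtrusive
```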
Subtle features are the less obvious surface modifiers and surface transitions that change how the intersecting shapes behave. Things like pattern draft, rounded fillets, shallow chamfers, etc. all play a major role in the actual shapes produced by joining or subtracting geometric primitives. Study the references closely and identify any subtle features that produce sweeping curves, oblong holes, tapered transitions, etc. Merge extraneous geometry into the defining elements of the major forms and use the space between the support loops of the shape transitions to average out any minor differences in the shape intersections. This will help preserve the accuracy of the shapes while also maintaining a consistent flow when transitioning between features with known dimensions.
Cast aluminum wheels often have subtle tapers on surfaces that otherwise appear to be parallel and perpendicular. These slight angles determine how much overlap there is between the cylindrical walls and intersecting shapes like counter bores for the wheel hub and lug holes. The following example shows how a little bit of pattern draft tends to produce shape transitions with a lot of gradual overlap and what the boolean cleanup can look like.
Start the block out by establishing the major forms. Add the draft and rounded fillet on the central counter bore then place the lug holes. Adjust the segment count and draft angles on all of the shapes until the topology lines up. After that, the boolean operations can be cleaned up by merging the extra vertices into the geometry that defines the cylinder walls and the rest of the support loops can be added with loop cuts and a bevel / chamfer modifier. Since the shapes of most lug holes are fairly simple and don't require much geometry, the constraining feature tends to be the number of spokes on the rim.
Tiling features are shapes or groups of shapes that repeat across a surface. Simplifying these features into small, reusable elements generally makes the modeling process a lot easier, since a lot of the complex topology routing issues can be solved on a small section of the mesh. Which is then copied and modified to complete the rest of the model.
Wheel assemblies, e.g. rims and tires, tend to have radially tiling features such as tread patterns, spokes, lug holes and other repeating design elements. All of which can be broken down into basic shapes for ease of modeling. Below is an example of what this process could look like. Start the block out by establishing the scale and proportion of the primary features. This will help determine how many segments are required for the larger shapes.
Try to break the complex geometric patterns down into individual features and use local mirroring, in conjunction with radial tiling, to reduce the amount of work required to create the base model. Clean, quad grid topology doesn't compensate for geometry that breaks from the shape of the underlying curves. So, keep surface features constrained to the underlying curvature by either cutting across the existing geometry or projecting the shapes onto the basic forms. This will help ensure the consistency of the final surface.
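In Blender, one way to set up the radial tiling non-destructively is an array modifier driven by a rotated empty. A minimal sketch, assuming a single modeled section named "SpokeTile" and five repetitions as in the wheel example:

```python
import bpy
from math import radians

tile = bpy.data.objects["SpokeTile"]   # hypothetical: one modeled spoke section

# An empty rotated by 360 / 5 degrees acts as the per-copy offset.
pivot = bpy.data.objects.new("RadialPivot", None)
bpy.context.collection.objects.link(pivot)
pivot.rotation_euler[2] = radians(360 / 5)

arr = tile.modifiers.new(name="RadialArray", type='ARRAY')
arr.count = 5
arr.use_relative_offset = False
arr.use_object_offset = True
arr.offset_object = pivot
```

Since the copies reference the original section, any topology fix made to the tile propagates around the whole wheel.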
Recap: Identify features that constrain the adjacent geometry and model those areas first. Resolve topology flow issues before dealing with the added complexity of support loops. Plan ahead and try to focus on creating accurate shapes for these features then match the rest of the mesh to those existing segments. Be aware of how subtle changes to the angle of intersecting surfaces produce different shapes. Analyze the references to find subtle shape transitions that generate unique shape profiles. Break down complex, repeating patterns into smaller sections that can be modeled and copied. This will help reduce the amount of work required to create the object and can make it easier to solve some types of topology flow issues.
"...and ensure the smoothing behavior remains consistent."
@FrankPolygon thank you for such a detailed write-up on baking.
Quick question: what do you mean by the above line, that the smoothing behavior remains consistent? As in one smoothing group for the full model, or split groups on hard edges followed by UV splits?
@HAWK12HT Thanks! Good question.
Low poly smoothing is a term that's kind of ambiguous but it generally means the overall shading of the model. In this context it would include meshes with and without hard edges, provided there's some soft shading.
It's important to triangulate the low poly before exporting because uncontrolled changes to the mesh can generate different types of normal artifacts. Checking the model in different applications, like Toolbag, Substance Painter, Unity, Unreal, etc., helps eliminate potential baking issues by verifying the shading behavior is consistent and things like custom normals, smoothing splits, triangulation, etc. were exported / imported properly.
So, a better choice of words might have been: Triangulate the low poly before exporting to ensure the shading behavior remains consistent.
@bewsii Reply was delayed but really appreciate your comment. Thanks for sharing your experience! Always good to evaluate different workflows and adapt elements that fit new goals.
@FrankPolygon, thank you so much for this series of posts, it's great to see this amount of detailed breakdowns when it comes to doing this type of modeling.
I'm enjoying each of your updates like a little kid and I love to see how you solve each of the modeling challenges in a smart way. Reading your posts is quite addictive.
Now, I understand that modeling with booleans and n-gon subdivision can work fabulously on baking models for video games, but would it be possible and acceptable to use this workflow for Pixar film productions?
Do you think that submitting a portfolio detailing this workflow could be a problem if the company reviewing my work is used to the classic quad-only workflow?
I would like to hear your opinion about this and also the opinion of other modelers more talented than me.
Thank you very much for your attention. I'm currently adapting my workflow for my personal project, following your modeling advice.
@orangesky Thanks for the comment. Glad the write-ups have been enjoyable and informative.
The short answer is: it depends. Context is really important because, even from a broad perspective, there are some significant differences between organic and hard surface modeling. While there's definitely some overlap in the fundamentals of subdivision modeling, these two broad fields are still completely different disciplines. Each with specific sub-focuses that require somewhat unique skill sets. What's acceptable in one instance may not be ideal for another.
It's the same sort of situation when it comes to 3D modeling for different types of media. E.g. animation, game, VFX, and visualization projects all tend to have specific technical requirements. The various studios that specialize in each of these fields will generally have their own preferred workflow(s) and best practices. Which are sometimes made public through articles, discussions, documentation, interviews, presentations, etc.
Information that references sources with firsthand experience is probably more accurate than personal opinion. As an example: use cases for triangles and n-gons are discussed in documentation for OpenSubdiv and booleans are also mentioned in other user docs and articles about artists that have worked on feature length animations at the company in question.
In general though, most technical elements are relatively easy to measure. That's probably why it's tempting to try distilling creative processes down into a set of fixed rules. While this sort of approach can work well enough when just learning the basics, it also tends to restrict problem solving by oversimplifying things and optimizing for binary decision making. Which generally produces less than optimal results when dealing with complex problems over a longer period of time and contributes to the perpetuation of technical dogma.
Just in game art alone, the relatively short evolutionary period of the design tools has already seen several inflection points where established workflows have changed suddenly and artists that were unwilling to at least entertain the idea of doing things differently were completely left behind.
The switch to PBR texturing workflows and the subsequent rise of dedicated texturing applications is one fairly recent example. Another, which is adjacent to the earlier shift towards sculpting organics, is the rapid evolution of the boolean re-meshing workflow that's now seeing 3D DCCs being replaced with CAD applications. Parametric modeling is accurate and relatively fast. Two things that, arguably, old school, grid based subdivision modeling is not.
These kinds of rapid paradigm shifts are often focused on moving to processes that offer significant improvements in efficiency and visual fidelity. Something that a lot of the older workflows just can't compete against. That's not to say that elements of these older workflows aren't still relevant. It's just that the weaknesses now outweigh the strengths. Traditional subdivision modeling is no exception to this. Especially when it comes to hard surface content.
Booleans, modifiers and n-gons speed up different parts of the modeling process but it's important to remember that they're just intermediate steps and aren't the entire workflow. When combined with effective block outs and segment matching strategies, the n-gons in a base mesh can be resolved to all quads if required. So all quad geometry isn't necessarily exclusive to slower, traditional modeling methods like box, point inflation, edge extrusion or strip modeling.
The takeaway from all this is that the technical elements should be chosen to serve the creative elements that further the story. Not the reverse. Part of improving as an artist is learning to work within resource constraints to creatively solve aesthetic and communication problems while maintaining a cohesive narrative. Which admittedly does require technical understanding. It's just that understanding has to be tempered with other creative skills that might be neglected during these types of discussions.
Sometimes it's helpful to look at the technical elements of a workflow with a more pragmatic lens. Focusing less on tradition and more on comparing the cost of the inputs [time, emotional capital, etc.] to the value generated. Exploring different workflows also provides some needed contrast that helps identify weaknesses in current workflows. It's that sort of iteration and reflection that moves things forwards.
Polycount's career and education section is also a great resource for learning about what's expected from artists working in a specific field and is probably a better venue for discussing building a portfolio to land an animation, VFX or film job. Definitely worth the time to read through the advice offered there by other experienced artists that have worked on similar projects.
Subdivision sketch: hemispherical intersections.
This is a quick look at how segment matching can be used to create a clean intersection between the shapes that are commonly found on the ball fence of side by side break actions. Many antiques and some current production examples use the same hand shaping processes. So it's not uncommon to find references with minor shape inconsistencies or blended surfaces that have very gradual transitions. Which means close enough is good enough for most models.
Segment matching is a fundamental element of most subdivision modeling workflows. Using primitives with an arbitrary number of segments can be a convenient way to start the block out but also tends to produce geometry that makes it difficult to route the loop flow around the base of the shape intersection. Matching the number of segments in the intersecting shapes helps provide a relatively consistent path for the adjacent edge loops. Which not only improves the flow path for the support loops around the base of the intersection but also makes it easier to join the shapes without generating undesired surface deformation.
Start with a simple block out of all the major shapes. Adjust the number of segments in the round over and the sphere to roughly align the edges around the base of the shape intersection. Use a boolean operation to join the shapes then clean up any stray geometry by merging it down into the vertices of the underlying surfaces.
Slice across the surface of the sphere to re-route the loop flow and resolve the mesh to all quads by dissolving any left over edges. Depending on the size and position of the sphere and round over, it may be necessary to adjust the position of the corner vertex that turns the loop flow up and around the sphere. The edges that define the shape intersection then become the loop path for the support loops. Which can be added with a bevel / chamfer operation or modifier.
The same approach also works with wider round overs and shallower curves. Simply adjust the amount of geometry so the edges of the shapes align roughly where the surfaces overlap. Use enough geometry to accurately represent the shapes at the desired view distance, balancing geometry density with editability.
Quad spheres can also be used with a segment matching strategy but the arbitrary number of segments in the quad sphere means most of the adjustment will need to be done on the shape with the round over. Which tends to make optimizing the sphere side of the intersection a bit more challenging. In some of these cases it's necessary to dissolve the extraneous edges and slide the remaining loop along the surface to form the desired shape.
Recap: Block out the basic shapes and adjust the number of segments in each until the overlapping edges are roughly aligned. This will generally make it easier to clean up boolean operations and provide a clean path for the support loops around the shape intersection.
Subdivision sketch: studying shapes and simplifying subdivision.
This write-up is a brief overview of a simple, shape based approach to subdivision modeling. This approach, with the primary focus being to create accurate shapes that define the loop flow paths, can help streamline the modeling process for most hard surface objects.
Working without enough information to visually interpret the shapes tends to add unnecessary frustration. So, start the modeling process by researching and gathering different references. Which can include things like background information, images, measurements, videos, etc. These references should be used as a guide for modeling real world objects and as a baseline for evaluating the form and function of design concepts.
Look at the references and think about how the object is made and how it's used. This will provide additional context that ties the real world details from the reference images to the creative narrative used to guide the development of the artwork. Something that also helps inform decision making during the modeling process and provides inspiration for developing the visual storytelling elements that carry through to the texturing process.
Analyze the shapes in the concepts and references. Identify the key surface features and observe how they interact with each other. Establish the overall scale of the object then figure out the proportions between the different shapes that make up the surfaces. Use this information to come up with an order of operations for blocking out the basic shapes. If necessary: draw out the basic primitives that make up the object. Highlight flat areas, curved areas and the transitions between them.
Most topology flow issues can be solved during the block out. Which is why it's generally considered best practice to: Use the edges of the existing geometry as support for shape intersections whenever possible. Use the minimum amount of geometry required to create reasonably accurate shapes. Use a segment matching strategy to maintain uniform edge spacing when joining curved shapes.
Develop the block out in stages. Keep things relatively simple for the first few iterations of the block out. Larger shapes should generally be defined first, while also keeping the smaller details in mind. Focus on creating shapes that are accurate to the references then solve the major topology flow issues before adding the support loops. Overall mesh complexity can also be reduced by modeling individual components of the object separately.
Let the shapes define the loop flow. Some features may have curvature that influences or restricts the loop flow of adjacent surfaces. Block out those shapes first then adjust the number of segments in the adjacent surfaces to roughly match the edges where the two shapes intersect. Any significant difference between the edges of intersecting shapes can usually be averaged out between the support loops.
With this iterative approach to blocking out the shapes then solving the topology flow issues, the edges that define the borders of the shapes become the loop paths. Which means most of the support loops can be added by simply selecting those defining edges and using a bevel / chamfer operation to add the outside loops. Alternately, loop cuts and inset operations can also be used when the support loops are only needed on one side of the edges that define the shapes.
This shape based loop routing strategy tends to require little manual cleanup and can be mostly automated using modifiers. Something that helps make hard surface subdivision modeling much more approachable. The examples in this write-up show how this basic workflow can be applied to a moderately complex, plastic injection molded part which has a mix of soft, lofted shape transitions and hard seam lines. Which are commonly found on a variety of different hard surface objects. So, the same sort of approach will generally work with most hard surface modeling workflows.
Recap: Analyze the shapes in the concepts and references. Develop the block out in stages. Let the shapes define the loop flow. Match the segments of intersecting shapes. Use the existing geometry to guide the loop paths. Solve topology issues early on then add the support loops.
Subdivision sketch: hand guard.
This is a follow up to the previous post about shape analysis. It's just a quick look at applying the iterative block out process to larger plastic components. Which are often part of other hard surface objects.
Identifying how the basic shapes are connected is a fundamental part of subdivision modeling. So, gather a lot of good reference material. Analyze the references to figure out what the shapes are then come up with a plan for connecting those shapes.
Work through the block out process in stages. Keep things relatively simple early on and focus on creating accurate shapes before adding surface details. This will make it a lot easier to maintain a higher level of surface quality during subsequent modeling operations.
Approach the modeling process with specific goals but also be willing to adjust the order of operations based on the actual results. Rather than sticking with preconceived ideas. Focus on getting the shapes right and rely on tools or modifiers to generate curves, complex shape intersections, fillets, roundovers, etc.
There's often significant overlap in the poly modeling fundamentals used to develop block outs for both re-meshing and subdivision workflows. Three fundamental concepts that make subdivision modeling more approachable are: use a reasonable amount of geometry in the shapes, adjust the number of segments in the intersecting shapes so they roughly match each other and use the existing geometry as support for shape transitions.
Most hard surface game art props aren't required to deform. Which opens up a lot of possibilities for using simplified topology on high poly surfaces that are flat or otherwise well supported. This makes it a lot easier to streamline the modeling process by reducing mesh complexity with workflow elements like booleans, modifiers and n-gons. Something that's still relevant in contemporary re-meshing workflows.
In this example: the basic shapes are mostly generated by boolean operations and all of the small support loops are generated by a simple bevel / chamfer modifier. Which means it's possible to adjust the width and profile of the edge highlights by changing the values in the modifier controls. These modifier based support loops are also used to replicate the parting lines. Where the splits in the mold, used to manufacture the real part, generate visible interruptions in the shape transitions.
Recap: It's very easy to over focus on the technical aspects of different workflows but one of the core elements of hard surface modeling is being able to recognize and recreate the shapes that make up an object. Regardless of the modeling workflow, using an iterative block out strategy makes it easier to create accurate shape profiles and solve problems sequentially. Without becoming encumbered by minor surface details and complex technical elements that often aren't relevant during the early part of the modeling process.
Subdivision sketch: cylinder release.
This is a quick overview of an iterative block out process, combined with a boolean and modifier based subdivision workflow.
A significant part of hard surface modeling is figuring out the shapes in the references then generating an accurate block out. Booleans and modifiers help streamline a lot of basic modeling tasks. They also make it easier to adjust individual surface features, without having to remodel large sections of the mesh. Which reduces the amount of work required to make significant revisions to the basic shapes. There's also the added benefit of using the final block out as a base mesh for generating both the high poly and low poly models. Something that's still relevant to contemporary poly re-meshing workflows.
In the example above: The curved surfaces are generated by bevel / chamfer modifiers and surface features, like the spherical knurling pattern, are cut in with booleans. Everything in the block out remains editable through the modifier stack. First pass cleanup of the base mesh is handled by modifiers that dissolve extraneous edges by angle and weld any stray vertices by distance. Support loops for the high poly are automatically generated by a simple angle based bevel / chamfer modifier and the width parameter can be adjusted to make the edge highlights sharper or softer.
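For reference, here's a rough Blender approximation of that kind of stack; the thresholds and angles are placeholders that would need tuning per model:

```python
import bpy
from math import radians

obj = bpy.context.active_object

# Weld any stray vertices by distance.
weld = obj.modifiers.new(name="Weld", type='WELD')
weld.merge_threshold = 0.0001

# Dissolve extraneous coplanar edges by angle.
cleanup = obj.modifiers.new(name="DissolveFlat", type='DECIMATE')
cleanup.decimate_type = 'DISSOLVE'
cleanup.angle_limit = radians(1)

# Angle based bevel generates the support loops; width controls edge sharpness.
bevel = obj.modifiers.new(name="SupportLoops", type='BEVEL')
bevel.limit_method = 'ANGLE'
bevel.angle_limit = radians(30)
bevel.width = 0.0015
bevel.segments = 2
```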
Recap: Using an iterative block out process makes it easier to focus on resolving issues with the individual shapes of an object and is a workflow element that's relevant to almost all hard surface modeling workflows. There's also a significant amount of overlap in the poly modeling skills used to develop the block out, base mesh, high poly and low poly models. Which are still relevant to both boolean re-meshing and subdivision workflows. It's these overlapping skills that are worth developing and the associated workflow processes that are worth streamlining.
@FrankPolygon thank you for sharing, I will also try to apply this in my workflow. Can I ask if you also write your own MaxScript to improve your workflow?
Thanks Frank, I'm always blown away by how you do things and it's great to be able to learn from it! Also, I love the way you present it all, it must take a while to get all the pics set up just right.
@chien Appreciate the comment. Writing custom scripts can be useful for solving specific workflow problems but most of the major 3D DCCs already have a variety of third party solutions that cover common modeling tasks. So, it often makes more sense to look for existing solutions before investing a lot of time, especially when there's a dedicated scripting community.
@danielrobnik Thanks! Great to hear the write-ups are informative. Producing consistent documentation is a significant time investment but saving the working files often and setting up presentation templates does help streamline the process. Testing the different workflows and summarizing the results is probably the most time consuming part.
Subdivision sketch: cylinder details and radial tiling.
This write-up looks at matching a cylinder's segment count to radially tiled details. While there are different ways to approach adding these details, the focus here is less about specific modeling operations and more about planning during the block out stage.
Gathering reference material is part of this planning process. Dimensional references, like CAD data, GD&T prints, scans, photogrammetry, etc., are often ideal reference sources but aren't always available. High quality images [photo or video] are an alternate reference source that can also be used to identify detailed features. Different camera angles and lighting setups are often helpful for figuring out how all of the shapes blend together. Near orthographic views are also helpful for establishing the overall scale and proportion.
Analyzing the shapes in the references usually provides some insight about the minimum number of segments required to accurately block out the basic shapes. Start the shape analysis by identifying the primary forms then look for the smallest details on those forms that need to be accurately modeled. The relative size of these smaller details often constrains the adjacent geometry in a way that's a significant factor in determining how many segments are required in the larger forms.
Some details are too small to be reasonably integrated into the base mesh. Depending on the visual quality goals, smaller surface details can be added with floating geometry or with texture overlays in the height or normal channels. Figuring out what to simplify on the high poly and low poly models is generally going to be based on artistic elements like style, prominence, typical view distance and other technical constraints like poly count, texture size, texel density, etc.
Below is a visual example of this type of shape analysis: Overall, the primary form is a basic cylinder with some radially tiling details. The smallest detail on the outside wall of the cylinder is the stop notch. Which means it's the constraining feature for the segment spacing of the primary form. The stop notches and other details, like the flutes, chambers and ratchet slots, are grouped together and repeat radially at regular intervals. Each detail appears five times. So, the total number of segments in the larger cylinder will need to be divisible by five. That way the larger form can be simplified into a single, tileable section.
There are a few different ways to come up with ratios that describe radially tiling features. Using an image to measure the features and comparing the size difference is relatively straightforward but doesn't account for the full distance along the curvature of the surfaces. Inserting a cylinder primitive over a background image then adjusting the number of segments until they line up with all of the shapes and are divisible by the total number of unique features is another option. However, this approach can be time consuming if the 3D application doesn't support parametric primitives.
With radially tiling features, it's also possible to use the total number of unique elements to develop some basic ratios. These basic ratios can then be used to determine the minimum number of segments required to create the repeating patterns. Which makes it possible to quickly calculate a few different options for the total segment count needed to block out the larger forms.
As shown below, a simple mesh can be helpful for visualizing the relationship between the radial features. The cylinder in the reference image has five flutes and five empty spaces between the flutes. If the width of the flutes and empty space is the same then it's a simple 1:1 ratio and the minimum segment count is 5+5. If the flutes are half the width of the empty spaces then the ratio is 1:2 and the minimum segment count is 5+10. If the flutes are only 25% smaller than the empty spaces then the ratio is 3:4 and the minimum segment count is 15 + 20. Etc.
Multiples of this ratio can then be used to adjust the total number of segments in the primary form to support the constraining features. Using the previous example, where the flutes are 25% smaller than the empty spaces, if the shape of each flute requires six radial edge segments then the total number of segments in the larger cylindrical form is 70 or 30+40.
To produce an evenly distributed segment count, that's divisible by the number of unique radial features, it's sometimes necessary to round the ratio to the nearest whole number. E.g. Multiplying the base ratio of 3:4 by 1.5 is 4.5:6 or 22.5 + 30 which needs to be rounded to 5:6 or 25 + 30 to match the flute geometry and be evenly divisible by 5.
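For anyone who prefers to sanity check the numbers in code, below is a small Python sketch of the exact-ratio case. When the scaled ratio doesn't land on whole numbers, round it to the nearest whole ratio as described above:

```python
from math import ceil

def min_radial_segments(features, detail_ratio, gap_ratio, detail_segments):
    # Scale the base ratio so the detail gets the segments its shape requires,
    # rounding the gap up to keep a whole number of segments per tile.
    scale = detail_segments / detail_ratio
    gap_segments = ceil(gap_ratio * scale)
    return features * (detail_segments + gap_segments)

# Five flutes that are 25% narrower than the gaps (3:4 ratio),
# with six segments per flute: 5 * (6 + 8) = 70 total segments.
print(min_radial_segments(5, 3, 4, 6))  # 70
```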
Math isn't a hard requirement for most modeling tasks. Intuition gained through practice is often good enough. It's just that math makes it easier to evaluate certain technical questions, like "How much geometry is required to accurately represent these shapes?", without having to model the object several times to find an answer.
Blocking out the shapes with an arbitrary number of segments then adjusting the mesh to line up the edges is a fairly straightforward approach that doesn't require using a lot of math. The tradeoff is that trying to match the segments using trial and error relies heavily on intuition, iteration and luck. Which often means either reworking large portions of the mesh or compromising on quality around complex shape intersections. Especially when using only basic poly modeling operations.
This is where the flexibility of an iterative block out process, that uses a parametric primitive and modifier based modeling workflow, has a significant advantage. With this sort of workflow the mesh density can be adjusted non-destructively. Which makes aligning the segments of each shape fairly straightforward. Just change the input numbers on the shape or modifier that controls an individual feature and the updated segment count is pushed through the modifier stack. Without requiring any manual mesh editing operations.
Whether using destructive or non-destructive modeling, dividing the object into tileable sections can reduce the amount of work required. Which makes quickly testing different segment count and topology strategies a lot easier. Below is an example of what this iterative block out process could look like. Using a non-linear, non-destructive modeling workflow that leverages booleans, modifiers and parametric primitives.
Deciding how much geometry to use really comes down to how accurate the shapes need to be. For these shapes the minimum viable segment count is pretty close to the original estimate of 35 segments. While the subdivided mesh looks fine from most angles there are some subtle deformations around the stop notch. Likely caused by all of the geometry bunching up in that area and disrupting the segment spacing between the edges that make up the larger shape.
Some of these issues can be resolved by simplifying the geometry but this also tends to soften some of the sharper corners. While it is possible to compensate for a lot of these artifacts by manually deforming the geometry of the base mesh, this can reduce the overall accuracy and quality of the surface. In some situations it may make sense to use shrink-wrap modifiers to project the mesh onto a clean surface but this approach does come with certain limitations.
Something with this level of surface quality might be fine for a third person prop or background element but wouldn't be acceptable for most AAA FPS items. This is why it often makes sense to do some quick tests with just the constraining features and decide what level of surface quality is appropriate for a given view distance and time budget.
Sometimes it makes sense to try and use the existing geometry as support loops and other times it's more effective to generate the support loops on top of the existing geometry.
In the example above, offsetting the stop notch from the existing edges in the cylinder provides a set of outer support loops but using a bevel / chamfer modifier to automatically generate a set of tighter support loops around the shapes is part of what's causing the geometry to bunch up and deform the curve. Manually placing support loops on the inside of those shapes would solve a few more of the smoothing artifacts but would also reduce the sharpness of the shape transitions. Which could work if softer edge highlights were acceptable.
However, it's much more time efficient to rely on the support loops being automatically generated via a modifier. In this case it makes more sense to place the shapes and support loops directly on the existing edges of the primary form. Increasing the segment count by a multiple of 1.5 then rounding up to the next whole number that's divisible by 5 produces much cleaner results.
When it comes to subdivision modeling, there's often a tendency to over complicate the mesh. Mostly due to skipping over parts of the block out process or arbitrarily increasing the mesh density to avoid using booleans and modifiers. Sometimes this sort of decision making is rooted in the limitations of the 3D application's tool set and other times it's based on technical dogma or popular misconceptions.
It's very easy to get bogged down trying to perfect a lot of the technical elements. Mainly because they're easily measurable and aiming for specific numbers or a quad grid mesh can be really satisfying. What's important to remember though is that most players won't ever see the high poly model. Much less care about what the wire-frame looks like. So, it's much more important to focus on the artistic and effort/cost components of the modeling process and less on chasing technical minutia.
There's also some important tradeoffs that are worth considering. If a surface is going to have a lot of high frequency normal details added later in the workflow, like rust, pitting, dents or other surface defects, then the actual surface quality of the high poly model probably doesn't need to be absolutely perfect.
Of course that sort of pragmatic outlook isn't an excuse for creating meshes with visible shading artifacts. It's more about having the permission to explore workflow elements that save time and effort while producing usable results.
There are also easier ways to create some of these patterns, like modeling the details flat and deforming them into a cylinder, but there will be certain situations where that approach is unworkable or causes other issues. So the overall goal here was to look more at the planning process for complex shapes tiled along curves.
To recap: the planning process really starts with finding good reference material and building an understanding of the shapes that make up the object. From this shape analysis it's possible to come up with some basic ratios that can be used to derive the minimum amount of geometry required to represent the shapes on a cylinder.
Working through an iterative block out process makes it a lot easier to resolve major shape and topology issues before investing a lot of time into any specific part of the mesh. Sometimes it's necessary to make compromises on the accuracy of the shapes and the surface quality but it's still possible to generate usable results quickly. Streamlining the modeling process and using parametric primitives or modifiers to control the different shapes will make it a lot easier to experiment with different density and topology strategies.
Subdivision sketch: hand grip.
This is a quick overview of an iterative block out process that uses subdivision to generate soft[er] hard surface shapes.
An over complicated base mesh, often caused by rushing through the block out process, can make it difficult to generate clean shape transitions and smooth surfaces. Which is why it's generally considered best practice to: Focus on defining the larger forms before adding details. Develop both the shapes and topology flow together. Keep the base mesh relatively simple, until the larger shapes are accurate, then apply subdivision as required. Continue refining the surfaces and adding smaller details in stages.
Adding small surface details too early in the block out can cause extreme changes in mesh density. Which can be difficult to manage when modeling everything as a single mesh. Wait to add these types of details later in the modeling process. To simplify the mesh further, consider using floating geometry or textures to add these types of small surface details. Hide any potential seams around floating geometry by using the natural breaks between individual parts and other decorative surface features.
One major drawback of subdivision modeling is that it's often difficult to manage extreme density shifts across a mesh with a lot of compound curves. This grip is an example of something that's relatively quick and easy to block out but can become an absolute nightmare to edit, if trying to manually add all of the details to a single, watertight mesh.
Which demonstrates why it's important to develop the larger shapes first then work through adding all of the details in stages. While also using different modeling and texturing techniques to streamline the workflow.
That hand-guard (foregrip) is nuts. Oh well, guess it'll take a lot more practice to get my boolean operations to look as good or even behave : /
@sacboi It definitely has an interesting combination of linear and curved features that produce some challenging shape intersections.
Every application is a bit different but a few things that can cause boolean solvers to fail are non-manifold geometry, overlapping co-planar edges or faces and un-welded vertices. Sometimes these types of issues can be resolved by simply changing the order of the boolean operations but persistent issues with the boolean meshes often need to be cleaned up with weld or decimate [by face angle] modifiers.
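In Blender, a quick bmesh pass can flag the first and last of those failure cases before running the boolean; the merge distance is a placeholder:

```python
import bpy
import bmesh

obj = bpy.context.active_object
bm = bmesh.new()
bm.from_mesh(obj.data)

# Edges that don't border exactly two faces will often break boolean solvers.
non_manifold = [e for e in bm.edges if not e.is_manifold]

# Un-welded vertices stacked on top of each other are another common culprit.
doubles = bmesh.ops.find_doubles(bm, verts=list(bm.verts), dist=0.0001)

print("non-manifold edges:", len(non_manifold))
print("un-welded vertex pairs:", len(doubles["targetmap"]))
bm.free()
```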
There's a lot of different ways to approach developing and combining the shapes. Below is a quick animated breakdown of the process used in the previous example. Most of the external features are generated by modifiers. The internal features could have also been done with booleans and modifiers but the loop cuts and insets were a bit quicker. Individual boolean objects and destructive edits are highlighted in the animation.
Non-destructive workflow elements like booleans, modifiers and parametric primitives make adjusting the density or shape of key features fairly quick and easy. This type of workflow also helps avoid having to manually re-model large sections of the mesh when creating the in-game low poly and high poly base mesh. Something that's relevant to poly modeling for both traditional subdivision modeling and re-meshing + polishing workflows.
Subdivision modeling can also be streamlined quite a bit with a boolean and modifier based workflow. All of the minor support loops in this example are generated by a bevel / chamfer modifier. Which means the edge width or sharpness of the subdivided model can be adjusted at any time by simply changing the values in the modifier's settings panel.
Recap: Traditional subdivision modeling techniques do have some significant drawbacks, like the steep learning curve and counterintuitive smoothing behavior, but more contemporary workflow elements make results like this a lot more achievable. For those who prefer re-meshing + polishing workflows, the same shape focused approach to block outs and boolean + modifier workflow elements can help streamline a lot of different poly modeling processes.
Very cool, appreciate the additional info :)
Subdivision sketch: manifold block.
This is another animated overview of a boolean and modifier based subdivision workflow. It covers the modeling process from block out to final high poly.
Basic forms are generated using primitives and modifiers. Matching the segments around shape intersections reduces the amount of manual cleanup required. Triangles and n-gons are constrained to flat surfaces to prevent visible smoothing artifacts. Additional loop paths are established with basic cut and join through operations. Final support loops are generated with an edge weighted bevel / chamfer modifier. The base mesh can be easily resolved to all quads if required.
It's a simple part but still demonstrates how a shape first approach reduces unnecessary complexity and makes it easier to create hard surface subdivision models. With modifiers, the shape and density of most features will remain flexible until the loop flow is finalized. The same base mesh can also be used to create low poly models or be pushed through a re-meshing and polishing workflow to avoid traditional subdivision all together.
Below is a comparison of the subdivision high poly model and a couple of low poly bakes. Both low poly models were generated, from the high poly base mesh, by adjusting the modifiers that control segment density on the individual features then unwrapping the UVs by face angle. Any destructive edits were done towards the end of the low poly optimization pass. This streamlined workflow makes it a lot easier to iterate surface features and mesh density, without having to manually re-model large sections of the mesh.
Recap: Avoid wasting time trying to perfect minor technical elements before the fundamentals are fully developed.
Methodically work through the iterative phases of the modeling process. Focus on creating accurate shapes first then adjust the number of segments in the features and adjust the topology flow. Test variations of the low poly mesh and optimize anything that doesn't contribute to the visual quality of the surface or clarity of the silhouette. Place hard edges and UV seams at the same time. Optimize for consistent shading behavior while using the minimum number of mesh splits. Use UV checkers and test bakes to validate the unwrap and pack before committing to a specific UV layout.
Be willing to go back and adjust the mesh when issues are identified. It's a lot easier to fix problems with the models before low poly optimization and texturing. Continually evaluate the model from the player's point of view. Use resources efficiently and try to achieve the best possible results with the time available for each stage of the development process.
Material study: model tank road wheel.
Material block out: Start simple. Establish the base values without damage, wear and weathering effects. Regularly check the materials under different lighting conditions and compare the results to the reference images. Use subtle variations in base color [diffuse] and roughness [gloss] values to help differentiate individual materials.
Below is what the material block out looked like for this study. Values for raw surfaces like bare metal and rubber were established first. Followed by matte red oxide primer and semi gloss dark yellow. The mask between the two paint layers is just a tiling overlay applied to the model with tri-planar projection.
This camouflage scheme is a stylized interpretation of an allegedly ahistoric late war factory pattern. Which had patches of red primer left exposed to save paint and speed up production. Evidence for this pattern is mostly anecdotal but the contrast provides some visual interest over the usual solid red or solid yellow that's often depicted in exhibits and on scale models.
Wear masks: Sometimes basic edge masks are too uniform. Use [tiling] overlays (chipped paint, random scratches, etc.) to add some visual interest and break up the shapes in the masks. Carefully select damage overlays that match the wear patterns in the reference images. Adjust the contrast, intensity, projection, rotation, and scale of the overlays to fit the visual narrative of the components.
Here's what the combined masks and damage overlays look like for the wear pass. Small circles were manually painted over areas that would be worn down by fastener rotation during field maintenance. An edge mask was used to reveal the base materials in areas that would be damaged by regular contact with the environment and tools. E.g. The rim of the steel road wheel would be exposed to abrasive material like dirt, gravel, large rocks, etc. Corners of fasteners and protruding parts of the hub assembly would contact repair equipment.
These basic wear masks were then refined with texture overlays. Contact damage tends to accumulate along exposed edges and produces wear patterns with sharp borders around large chips in the coating. Recessed surfaces tend to be better protected from impact damage but can trap abrasive material that creates softer and smaller scratches over a wide area.
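Conceptually, combining the masks boils down to something like this numpy sketch, where every input is a grayscale 0-1 array and the strength value is just a placeholder:

```python
import numpy as np

def wear_mask(edge_mask, chip_overlay, cavity_mask, scratch_overlay):
    # Sharp chips accumulate along exposed edges...
    edges = edge_mask * chip_overlay
    # ...while recessed areas collect softer, wider scratches.
    recesses = cavity_mask * scratch_overlay * 0.5   # placeholder strength
    return np.clip(edges + recesses, 0.0, 1.0)
```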
Here's what the progression looks like for the wear pass. A subtle overlay lightened the color values in areas faded by exposure to the sun. Chipped edges and scratched surfaces were used to create points of visual interest, without overpowering the narrative. Surface cracks and dents were added to the normal channel to provide some contrasting wear patterns.
The wear pass provides a great opportunity to create interesting details but it's also important to avoid over wearing the surfaces. Since these wheels are supposed to be fairly new, it's unlikely that freshly exposed metal would have heavy rust or that the fresh paint would be flaking off. Surface damage on the rubber material is mostly tears, caused by rocks in the tracks, but there's also some subtle cracking that hints at the substandard quality of late war production.
Subdivision sketch: placing intersecting cylinders on existing edges.
Avoid deforming the edges or vertices that make up the walls of either cylinder. Use the space between the intersecting shape and the outer support loop around it to connect the two shapes without causing unintended surface deformation. Merge or dissolve any stray geometry back into the nearest vertices that are in line with the existing geometry.
The same segment matching and topology routing strategy also works with multiple cylinder intersections, where each circumference lands on or close to an existing edge in the larger cylinder. In the example below the largest intersecting cylinder is the same diameter as the base cylinder so there's no need to adjust the segment counts on that one.
Subdivision sketch: cylinder with chamfered slot.
Subdivision sketch: slide release.
This is another brief look at using an iterative block out process and modifier based subdivision workflow.
Study the reference images and figure out how the shapes fit together. Establish the rough proportions then come up with an order of operations for developing each important detail. Keep things relatively simple. Focus on accurately representing the shapes and use segment matching to reduce the amount of clean up required. Let the defining edges of the shapes guide the topology flow.
An open letter to anyone just starting out in game art.
There will also be periods where related skills are at different levels. When the next step in a process relies on one of these underdeveloped skills it will slow progress down. Which can be very frustrating when the source of the skill gap isn't obvious. This is why it's important to regularly think about what elements of the workflow can be improved and whether or not there are other skills required to reach the desired goal.
While it's not strictly necessary, some game artists do find it helpful to learn about or practice related art skills. Such as traditional drawing, painting, photography, etc. All of these require working with color, composition, and proportion in different ways. Which can help bring a fresh approach to understanding forms and solving similar problems in game art.
As an example: Though 3D modeling is a highly technical process, most art fundamentals (Like analytical observation, pattern recognition, and problem solving.) are shared across most creative disciplines. If identifying shapes in reference images is difficult then it can be helpful to practice analyzing shapes by outlining key surface transitions to gain a better understanding of how everything is connected. So, practicing those other traditional art skills can help level up game art skills indirectly.
Going through the various stages of the emotional cycle of change is a part of almost every project and learning experience. When progress is slow or things become difficult, it can be helpful to remember this chart and continue moving forward past the low points.
Managing these feelings is its own skill set and it's often easier to get through this part of the learning process with the support of a dedicated community or other artists that are moving towards similar goals and sharing similar experiences. Just be careful to guard against letting external emotional trauma or preconceptions dictate a path before things even get started. Continue to show up, sharpen skills and take new opportunities as they come.
There's a number of dedicated game art discords but that experience isn't for everyone. So, it can be helpful to seek out experienced artists on other platforms. Like the career and education section here on Polycount.
Recap: When learning the fundamentals of game art, it can be really helpful to pick a series of smaller projects that provide a unique set of challenges but are also easy enough to complete in a couple of days or a couple of weeks. Invest a few hours per day on these projects, stick with it, find a supportive community of artists, regularly ask for feedback on the various stages of each project, be willing to implement changes based on relevant feedback, consistently finish projects and share the results on a professional platform.
Keep everything in perspective. Success isn't guaranteed but it's often much closer than it appears.
Sources: the concepts depicted in these graphs aren't new, they have been presented by others numerous times and similar images make the rounds on different pages. So, it's difficult to pin down exactly who came up with some of the presentation formats. The written component of this post is a revised message, based largely on personal experience, to another artist.
An open letter to artists that are stuck.