and ensure the smoothing behavior remains consistent.
@FrankPolygon thank you for such a detailed write-up on baking.
Quick question: what do you mean by "smoothing behavior remains consistent" in the line above? As in one smoothing group for the full model, or split groups on hard edges followed by UV splits?
Low poly smoothing is a term that's kind of ambiguous but it generally means the overall shading of the model. In this context it would include meshes with and without hard edges, provided there's some soft shading.
It's important to triangulate the low poly before exporting because uncontrolled changes to the mesh can generate different types of normal artifacts. Checking the model in different applications, like Toolbag, Substance Painter, Unity, Unreal, etc., helps eliminate potential baking issues by verifying that the shading behavior is consistent and that things like custom normals, smoothing splits, triangulation, etc. were exported and imported properly.
So, a better choice of words might have been: Triangulate the low poly before exporting to ensure the shading behavior remains consistent.
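For anyone working in Blender, here's a minimal sketch of what that export step could look like (the modifier and export settings are illustrative defaults, not a prescription):

```python
import bpy

# Lock the triangulation with a modifier so the exported mesh can't be
# re-triangulated differently by another application.
low_poly = bpy.context.active_object
tri = low_poly.modifiers.new(name="Triangulate", type='TRIANGULATE')
tri.keep_custom_normals = True  # preserve custom split normals

# Export with modifiers applied so the triangulation is baked in.
bpy.ops.export_scene.fbx(filepath="//low_poly.fbx", use_mesh_modifiers=True)
```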
@bewsii Reply was delayed but really appreciate your comment. Thanks for sharing your experience! Always good to evaluate different workflows and adapt elements that fit new goals.
@FrankPolygon, thank you so much for this series of posts, it's great to see this amount of detailed breakdowns when it comes to doing this type of modeling.
I'm enjoying each of your updates like a little kid and I love to see how you solve each of the modeling challenges in a smart way. Reading your posts is quite addictive.
Now, I understand that modeling with booleans and n-gons for subdivision can work fabulously for baked video game models, but would it be possible and acceptable to use this workflow for Pixar film productions?
Do you think that submitting a portfolio detailing this workflow could be a problem if the company reviewing my work is used to the classic quad-only workflow?
I would like to hear your opinion about this and also the opinion of other modelers more talented than me.
Thank you very much for your attention, I am adapting my workflow following your modeling advice at the moment for my personal project.
@orangesky Thanks for the comment. Glad the write-ups have been enjoyable and informative.
The short answer is: it depends. Context is really important because, even from a broad perspective, there are some significant differences between organic and hard surface modeling. While there's definitely some overlap in the fundamentals of subdivision modeling, these two broad fields are still distinct disciplines. Each with specific sub-focuses that require somewhat unique skill sets. What's acceptable in one instance may not be ideal for another.
It's the same sort of situation when it comes to 3D modeling for different types of media. E.g. animation, game, VFX, and visualization projects all tend to have specific technical requirements. The various studios that specialize in each of these fields will generally have their own preferred workflow(s) and best practices. Which are sometimes made public through articles, discussions, documentation, interviews, presentations, etc.
Information that references sources with firsthand experience is probably more accurate than personal opinion. As an example: use cases for triangles and n-gons are discussed in documentation for OpenSubdiv and booleans are also mentioned in other user docs and articles about artists that have worked on feature length animations at the company in question.
In general though, most technical elements are relatively easy to measure. That's probably why it's tempting to try distilling creative processes down into a set of fixed rules. While this sort of approach can work well enough when just learning the basics, it also tends to restrict problem solving by oversimplifying things and optimizing for binary decision making. Which generally produces less than optimal results when dealing with complex problems over a longer period of time and contributes to the perpetuation of technical dogma.
Just in game art alone, the relatively short evolutionary period of the design tools has already seen several inflection points where established workflows have changed suddenly and artists that were unwilling to at least entertain the idea of doing things differently were completely left behind.
The switch to PBR texturing workflows and the subsequent rise of dedicated texturing applications is one fairly recent example. Another, which is adjacent to the earlier shift towards sculpting organics, is the rapid evolution of the boolean re-meshing workflow that's now seeing 3D DCCs being replaced with CAD applications. Parametric modeling is accurate and relatively fast. Two things that, arguably, old school, grid based subdivision modeling is not.
These kinds of rapid paradigm shifts are often focused on moving to processes that offer significant improvements in efficiency and visual fidelity. Something that a lot of the older workflows just can't compete against. That's not to say that elements of these older workflows aren't still relevant. It's just that the weaknesses now outweigh the strengths. Traditional subdivision modeling is no exception to this. Especially when it comes to hard surface content.
Booleans, modifiers and n-gons speed up different parts of the modeling process but it's important to remember that they're just intermediate steps and aren't the entire workflow. When combined with effective block outs and segment matching strategies, the n-gons in a base mesh can be resolved to all quads if required. So all quad geometry isn't necessarily exclusive to slower, traditional modeling methods like box, point inflation, edge extrusion or strip modeling.
The takeaway from all this is that the technical elements should be chosen to serve the creative elements that further the story. Not the reverse. Part of improving as an artist is learning to work within resource constraints to creatively solve aesthetic and communication problems while maintaining a cohesive narrative. Which admittedly does require technical understanding. It's just that understanding has to be tempered with other creative skills that might be neglected during these types of discussions.
Sometimes it's helpful to look at the technical elements of a workflow with a more pragmatic lens. Focusing less on tradition and more on comparing the cost of the inputs [time, emotional capital, etc.] to the value generated. Exploring different workflows also provides some needed contrast that helps identify weaknesses in current workflows. It's that sort of iteration and reflection that moves things forwards.
Polycount's career and education section is also a great resource for learning about what's expected from artists working in a specific field and is probably a better venue for discussing building a portfolio to land an animation, VFX or film job. Definitely worth the time to read through the advice offered there by other experienced artists that have worked on similar projects.
This is a quick look at how segment matching can be used to create a clean intersection between the shapes that are commonly found on the ball fence of side by side break actions. Many antiques and some current production examples use the same hand shaping processes. So it's not uncommon to find references with minor shape inconsistencies or blended surfaces that have very gradual transitions. Which means close enough is good enough for most models.
Segment matching is a fundamental element of most subdivision modeling workflows. Using primitives with an arbitrary number of segments can be a convenient way to start the block out but also tends to produce geometry that makes it difficult to route the loop flow around the base of the shape intersection. Matching the number of segments in the intersecting shapes helps provide a relatively consistent path for the adjacent edge loops. Which not only improves the flow path for the support loops around the base of the intersection but also makes it easier to join the shapes without generating undesired surface deformation.
Start with a simple block out of all the major shapes. Adjust the number of segments in the round over and the sphere to roughly align the edges around the base of the shape intersection. Use a boolean operation to join the shapes then clean up any stray geometry by merging it down into the vertices of the underlying surfaces.
Slice across the surface of the sphere to re-route the loop flow and resolve the mesh to all quads by dissolving any left over edges. Depending on the size and position of the sphere and round over, it may be necessary to adjust the position of the corner vertex that turns the loop flow up and around the sphere. The edges that define the shape intersection then become the loop path for the support loops. Which can be added with a bevel / chamfer operation or modifier.
The same approach also works with wider round overs and shallower curves. Simply adjust the amount of geometry so the edges of the shapes align roughly where the surfaces overlap. Use enough geometry to accurately represent the shapes at the desired view distance, balancing geometry density with editability.
Quad spheres can also be used with a segment matching strategy but the arbitrary number of segments in the quad sphere means most of the adjustment will need to be done on the shape with the round over. Which tends to make optimizing the sphere side of the intersection a bit more challenging. In some of these cases it's necessary to dissolve the extraneous edges and slide the remaining loop along the surface to form the desired shape.
Recap: Block out the basic shapes and adjust the number of segments in each until the overlapping edges are roughly aligned. This will generally make it easier to clean up boolean operations and provide a clean path, for the support loops, around the shape intersection.
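For those following along in Blender, here's a hedged sketch of that block out sequence (the segment counts, sizes and positions are placeholders; match them to the actual shapes):

```python
import bpy

# Block out the round over and the ball with matching segment counts
# so the edges roughly align around the base of the intersection.
bpy.ops.mesh.primitive_cylinder_add(vertices=16, radius=1.0, depth=2.0)
fence = bpy.context.active_object

bpy.ops.mesh.primitive_uv_sphere_add(segments=16, ring_count=8,
                                     radius=0.6, location=(0.0, 0.0, 1.0))
ball = bpy.context.active_object

# Join the shapes with a boolean union; stray geometry still needs to
# be merged down into the underlying vertices afterwards.
union = fence.modifiers.new(name="BallUnion", type='BOOLEAN')
union.operation = 'UNION'
union.object = ball
union.solver = 'EXACT'
```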
Subdivision sketch: studying shapes and simplifying subdivision.
This write-up is a brief overview of a simple, shape based approach to subdivision modeling. This approach, with the primary focus being to create accurate shapes that define the loop flow paths, can help streamline the modeling process for most hard surface objects.
Working without enough information to visually interpret the shapes tends to add unnecessary frustration. So, start the modeling process by researching and gathering different references. Which can include things like background information, images, measurements, videos, etc. These references should be used as a guide for modeling real world objects and as a baseline for evaluating the form and function of design concepts.
Look at the references and think about how the object is made and how it's used. This will provide additional context that ties the real world details from the reference images to the creative narrative used to guide the development of the artwork. Something that also helps inform decision making during the modeling process and provides inspiration for developing the visual storytelling elements that carry through to the texturing process.
Analyze the shapes in the concepts and references. Identify the key surface features and observe how they interact with each other. Establish the overall scale of the object then figure out the proportions between the different shapes that make up the surfaces. Use this information to come up with an order of operations for blocking out the basic shapes. If necessary: draw out the basic primitives that make up the object. Highlight flat areas, curved areas and the transitions between them.
Most topology flow issues can be solved during the block out. Which is why it's generally considered best practice to: Use the edges of the existing geometry as support for shape intersections whenever possible. Use the minimum amount of geometry required to create reasonably accurate shapes. Use a segment matching strategy to maintain uniform edge spacing when joining curved shapes.
Develop the block out in stages. Keep things relatively simple for the first few iterations of the block out. Larger shapes should generally be defined first, while also keeping the smaller details in mind. Focus on creating shapes that are accurate to the references then solve the major topology flow issues before adding the support loops. Overall mesh complexity can also be reduced by modeling individual components of the object separately.
Let the shapes define the loop flow. Some features may have curvature that influences or restricts the loop flow of adjacent surfaces. Block out those shapes first then adjust the number of segments in the adjacent surfaces to roughly match the edges where the two shapes intersect. Any significant difference between the edges of intersecting shapes can usually be averaged out between the support loops.
With this iterative approach to blocking out the shapes then solving the topology flow issues, the edges that define the borders of the shapes become the loop paths. Which means most of the support loops can be added by simply selecting those defining edges and using a bevel / chamfer operation to add the outside loops. Alternately, loop cuts and inset operations can also be used when the support loops are only needed on one side of the edges that define the shapes.
This shape based loop routing strategy tends to require little manual cleanup and can be mostly automated using modifiers. Something that helps make hard surface subdivision modeling much more approachable. The examples in this write-up show how this basic workflow can be applied to a moderately complex, plastic injection molded part which has a mix of soft, lofted shape transitions and hard seam lines. Which are commonly found on a variety of different hard surface objects. So, the same sort of approach will generally work with most hard surface modeling workflows.
Recap: Analyze the shapes in the concepts and references. Develop the block out in stages. Let the shapes define the loop flow. Match the segments of intersecting shapes. Use the existing geometry to guide the loop paths. Solve topology issues early on then add the support loops.
This is a follow up to the previous post about shape analysis. It's just a quick look at applying the iterative block out process to larger plastic components. Which are often part of other hard surface objects.
Identifying how the basic shapes are connected is a fundamental part of subdivision modeling. So, gather a lot of good reference material. Analyze the references to figure out what the shapes are then come up with a plan for connecting those shapes.
Work through the block out process in stages. Keep things relatively simple early on and focus on creating accurate shapes before adding surface details. This will make it a lot easier to maintain a higher level of surface quality during subsequent modeling operations.
Approach the modeling process with specific goals but also be willing to adjust the order of operations based on the actual results. Rather than sticking with preconceived ideas. Focus on getting the shapes right and rely on tools or modifiers to generate curves, complex shape intersections, fillets, roundovers, etc.
There's often significant overlap in the poly modeling fundamentals used to develop block outs for both re-meshing and subdivision workflows. Three fundamental concepts that make subdivision modeling more approachable are: use a reasonable amount of geometry in the shapes, adjust the number of segments in the intersecting shapes so they roughly match each other and use the existing geometry as support for shape transitions.
Most hard surface game art props aren't required to deform. Which opens up a lot of possibilities for using simplified topology on high poly surfaces that are flat or otherwise well supported. This makes it a lot easier to streamline the modeling process by reducing mesh complexity with workflow elements like booleans, modifiers and n-gons. Something that's still relevant in contemporary re-meshing workflows.
In this example: the basic shapes are mostly generated by boolean operations and all of the small support loops are generated by a simple bevel / chamfer modifier. Which means it's possible to adjust the width and profile of the edge highlights by changing the values in the modifier controls. These modifier based support loops are also used to replicate the parting lines. Where the splits in the mold, used to manufacture the real part, generate visible interruptions in the shape transitions.
Recap: It's very easy to over focus on the technical aspects of different workflows but one of the core elements of hard surface modeling is being able to recognize and recreate the shapes that make up an object. Regardless of the modeling workflow, using an iterative block out strategy makes it easier to create accurate shape profiles and solve problems sequentially. Without becoming encumbered by minor surface details and managing complex technical elements, that often aren't relevant during the early part of the modeling process.
This is a quick overview of an iterative block out process, combined with a boolean and modifier based subdivision workflow.
A significant part of hard surface modeling is figuring out the shapes in the references then generating an accurate block out. Booleans and modifiers help streamline a lot of basic modeling tasks. They also make it easier to adjust individual surface features, without having to remodel large sections of the mesh. Which reduces the amount of work required to make significant revisions to the basic shapes. There's also the added benefit of using the final block out as a base mesh for generating both the high poly and low poly models. Something that's still relevant to contemporary poly re-meshing workflows.
In the example above: The curved surfaces are generated by bevel / chamfer modifiers and surface features, like the spherical knurling pattern, are cut in with booleans. Everything in the block out remains editable through the modifier stack. First pass cleanup of the base mesh is handled by modifiers that dissolve extraneous edges by angle and weld any stray vertices by distance. Support loops for the high poly are automatically generated by a simple angle based bevel / chamfer modifier and the width parameter can be adjusted to make the edge highlights sharper or softer.
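A rough Blender approximation of that kind of stack (the names, angles and widths are illustrative, not the exact values used in this example):

```python
import bpy
from math import radians

obj = bpy.context.active_object

# First pass cleanup: dissolve extraneous coplanar edges by angle...
cleanup = obj.modifiers.new(name="PlanarCleanup", type='DECIMATE')
cleanup.decimate_type = 'DISSOLVE'
cleanup.angle_limit = radians(1.0)

# ...and weld any stray vertices by distance.
weld = obj.modifiers.new(name="Weld", type='WELD')
weld.merge_threshold = 0.0001

# Angle based bevel generates the support loops; adjust the width
# parameter to make the edge highlights sharper or softer.
bevel = obj.modifiers.new(name="SupportLoops", type='BEVEL')
bevel.limit_method = 'ANGLE'
bevel.angle_limit = radians(30.0)
bevel.width = 0.002
bevel.segments = 2

subsurf = obj.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.levels = 2
```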
Recap: Using an iterative block out process makes it easier to focus on resolving issues with the individual shapes of an object and is a workflow element that's relevant to almost all hard surface modeling workflows. There's also a significant amount of overlap in the poly modeling skills used to develop the block out, base mesh, high poly and low poly models. Which are still relevant to both boolean re-meshing and subdivision workflows. It's these overlapping skills that are worth developing and the associated workflow processes that are worth streamlining.
Thanks Frank, I'm always blown away by how you do things and it's great to be able to learn from it! Also, I love the way you present it all, it must take a while to get all the pics set up just right.
@chien Appreciate the comment. Writing custom scripts can be useful for solving specific workflow problems but most of the major 3D DCCs already have a variety of third party solutions that cover common modeling tasks. So, it often makes more sense to look for existing solutions before investing a lot of time, especially when there's a dedicated scripting community.
@danielrobnik Thanks! Great to hear the write-ups are informative. Producing consistent documentation is a significant time investment but saving the working files often and setting up presentation templates does help streamline the process. Testing the different workflows and summarizing the results is probably the most time consuming part.
Subdivision sketch: cylinder details and radial tiling.
This write-up looks at matching a cylinder's segment count to radially tiled details. While there are different ways to approach adding these details, the focus here is less about specific modeling operations and more about planning during the block out stage.
Gathering reference material is part of this planning process. Dimensional references, like CAD data, GD&T prints, scans, photogrammetry, etc., are often ideal reference sources but aren't always available. High quality images [photo or video] are an alternate reference source that can also be used to identify detailed features. Different camera angles and lighting setups are often helpful for figuring out how all of the shapes blend together. Near orthographic views are also helpful for establishing the overall scale and proportion.
Analyzing the shapes in the references usually provides some insight about the minimum number of segments required to accurately block out the basic shapes. Start the shape analysis by identifying the primary forms then look for the smallest details on those forms that need to be accurately modeled. The relative size of these smaller details often constrains the adjacent geometry in a way that's a significant factor in determining how many segments are required in the larger forms.
Some details are too small to be reasonably integrated into the base mesh. Depending on the visual quality goals, smaller surface details can be added with floating geometry or with texture overlays in the height or normal channels. Figuring out what to simplify on the high poly and low poly models is generally going to be based on artistic elements like style, prominence, typical view distance and other technical constraints like poly count, texture size, texel density, etc.
Below is a visual example of this type of shape analysis: Overall, the primary form is a basic cylinder with some radially tiling details. The smallest detail on the outside wall of the cylinder is the stop notch. Which means it's the constraining feature for the segment spacing of the primary form. The stop notches and other details, like the flutes, chambers and ratchet slots, are grouped together and repeat radially at regular intervals. Each detail appears five times. So, the total number of segments in the larger cylinder will need to be divisible by five. That way the larger form can be simplified into a single, tileable section.
There's a few different ways to come up with ratios that describe radially tiling features. Using an image to measure the features and comparing the size difference is relatively straightforward but doesn't account for the full distance along the curvature of the surfaces. Inserting a cylinder primitive over a background image then adjusting the number of segments until they line up with all of the shapes and are divisible by the total number of unique features is another option. However, this approach can be time consuming if the 3D application doesn't support parametric primitives.
With radially tiling features, it's also possible to use the total number of unique elements to develop some basic ratios. These basic ratios can then be used to determine the minimum number of segments required to create the repeating patterns. Which makes it possible to quickly calculate a few different options for the total segment count needed to block out the larger forms.
As shown below, a simple mesh can be helpful for visualizing the relationship between the radial features. The cylinder in the reference image has five flutes and five empty spaces between the flutes. If the width of the flutes and empty space is the same then it's a simple 1:1 ratio and the minimum segment count is 5+5. If the flutes are half the width of the empty spaces then the ratio is 1:2 and the minimum segment count is 5+10. If the flutes are only 25% smaller than the empty spaces then the ratio is 3:4 and the minimum segment count is 15+20, etc.
Multiples of this ratio can then be used to adjust the total number of segments in the primary form to support the constraining features. Using the previous example, where the flutes are 25% smaller than the empty spaces, if the shape of each flute requires six radial edge segments then the total number of segments in the larger cylindrical form is 70 or 30+40.
To produce an evenly distributed segment count that's divisible by the number of unique radial features, it's sometimes necessary to round the ratio to the nearest whole number. E.g. multiplying the base ratio of 3:4 by 1.5 gives 4.5:6 or 22.5+30, which needs to be rounded to 5:6 or 25+30 to match the flute geometry and be evenly divisible by 5.
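Here's a short Python sketch of that segment arithmetic (the function and its interface are hypothetical, purely to make the math explicit; it integer-scales the ratio, which sidesteps the fractional rounding described above):

```python
from math import ceil

def radial_segments(repeats, feature_ratio, space_ratio, min_feature_segments):
    """Total cylinder segments needed for a radially tiling pattern."""
    # Scale the base ratio so the constraining feature has enough edges.
    scale = ceil(min_feature_segments / feature_ratio)
    feature = feature_ratio * scale
    space = space_ratio * scale
    # Total is repeats * (feature + space), so it stays evenly
    # divisible by the number of unique radial features.
    return repeats * (feature + space)

# 5 flutes, 3:4 ratio, six radial edge segments per flute -> 70
print(radial_segments(5, 3, 4, 6))
```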
Math isn't a hard requirement for most modeling tasks. Intuition gained through practice is often good enough. It's just that math makes it easier to evaluate certain technical questions, like "How much geometry is required to accurately represent these shapes?", without having to model the object several times to find an answer.
Blocking out the shapes with an arbitrary number of segments then adjusting the mesh to line up the edges is a fairly straightforward approach that doesn't require using a lot of math. The tradeoff is that trying to match the segments using trial and error relies heavily on intuition, iteration and luck. Which often means either reworking large portions of the mesh or compromising on quality around complex shape intersections. Especially when using only basic poly modeling operations.
This is where the flexibility of an iterative block out process, that uses a parametric primitive and modifier based modeling workflow, has a significant advantage. With this sort of workflow the mesh density can be adjusted non-destructively. Which makes aligning the segments of each shape fairly straightforward. Just change the input numbers on the shape or modifier that controls an individual feature and the updated segment count is pushed through the modifier stack. Without requiring any manual mesh editing operations.
Whether using destructive or non-destructive modeling, dividing the object into tileable sections can reduce the amount of work required. Which makes quickly testing different segment count and topology strategies a lot easier. Below is an example of what this iterative block out process could look like. Using a non-linear, non-destructive modeling workflow that leverages booleans, modifiers and parametric primitives.
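In Blender specifically, the stock primitives lose their parametric controls after creation, but a similar non-destructive cylinder can be set up with a Screw modifier spinning a simple profile. A minimal sketch:

```python
import bpy
import bmesh
from math import tau

# Build a two vertex profile edge for the cylinder wall.
mesh = bpy.data.meshes.new("ProfileMesh")
bm = bmesh.new()
bottom = bm.verts.new((1.0, 0.0, 0.0))
top = bm.verts.new((1.0, 0.0, 2.0))
bm.edges.new((bottom, top))
bm.to_mesh(mesh)
bm.free()

obj = bpy.data.objects.new("ParametricCylinder", mesh)
bpy.context.collection.objects.link(obj)

# Spin the profile into a cylinder; the segment count stays editable
# through the modifier stack instead of requiring manual mesh edits.
spin = obj.modifiers.new(name="Spin", type='SCREW')
spin.axis = 'Z'
spin.angle = tau      # full revolution
spin.steps = 35       # change this to push a new density through the stack
spin.use_merge_vertices = True
```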
Deciding how much geometry to use really comes down to how accurate the shapes need to be. For these shapes the minimum viable segment count is pretty close to the original estimate of 35 segments. While the subdivided mesh looks fine from most angles there are some subtle deformations around the stop notch. Likely caused by all of the geometry bunching up in that area and disrupting the segment spacing between the edges that make up the larger shape.
Some of these issues can be resolved by simplifying the geometry but this also tends to soften some of the sharper corners. While it is possible to compensate for a lot of these artifacts by manually deforming the geometry of the base mesh, this can reduce the overall accuracy and quality of the surface. In some situations it may make sense to use shrink-wrap modifiers to project the mesh onto a clean surface but this approach does come with certain limitations.
Something with this level of surface quality might be fine for a third person prop or background element but wouldn't be acceptable for most AAA FPS items. This is why it often makes sense to do some quick tests with just the constraining features and decide what level of surface quality is appropriate for a given view distance and time budget.
Sometimes it makes sense to try and use the existing geometry as support loops and other times it's more effective to generate the support loops on top of the existing geometry.
In the example above, offsetting the stop notch from the existing edges in the cylinder provides a set of outer support loops but using a bevel / chamfer modifier to automatically generate a set of tighter support loops around the shapes is part of what's causing the geometry to bunch up and deform the curve. Manually placing support loops on the inside of those shapes would solve a few more of the smoothing artifacts but would also reduce the sharpness of the shape transitions. Which could work if softer edge highlights were acceptable.
However, it's much more time efficient to rely on the support loops being automatically generated via a modifier. In this case it makes more sense to place the shapes and support loops directly on the existing edges of the primary form. Increasing the segment count by a factor of 1.5, then rounding up to the next whole number that's divisible by 5, produces much cleaner results.
When it comes to subdivision modeling, there's often a tendency to over complicate the mesh. Mostly due to skipping over parts of the block out process or arbitrarily increasing the mesh density to avoid using booleans and modifiers. Sometimes this sort of decision making is rooted in the limitations of the 3D application's tool set and other times it's based on technical dogma or popular misconceptions.
It's very easy to get bogged down trying to perfect a lot of the technical elements.
Mainly because they're easily measurable and aiming for specific numbers or a quad grid mesh can be really satisfying. What's important to remember though is that most players won't ever see the high poly model. Much less care about what the wire-frame looks like. So, it's much more important to focus on the artistic and effort/cost components of the modeling process and less on chasing technical minutia.
There's also some important tradeoffs that are worth considering. If a surface is going to have a lot of high frequency normal details added later in the workflow, like rust, pitting, dents or other surface defects, then the actual surface quality of the high poly model probably doesn't need to be absolutely perfect.
Of course that sort of pragmatic outlook isn't an excuse for creating meshes with visible shading artifacts. It's more about having the permission to explore workflow elements that save time and effort while producing usable results.
There are also easier ways to create some of these patterns, like modeling the details flat and deforming them into a cylinder, but there will be certain situations where that's unworkable or causes other issues. So the overall goal here was to look more at the planning process for complex shapes tiled along curves.
To recap: the planning process really starts with finding good reference material and building an understanding of the shapes that make up the object. From this shape analysis it's possible to come up with some basic ratios that can be used to derive the minimum amount of geometry required to represent the shapes on a cylinder.
Working through an iterative block out process makes it a lot easier to resolve major shape and topology issues before investing a lot of time into any specific part of the mesh. Sometimes it's necessary to make compromises on the accuracy of the shapes and the surface quality but it's still possible to generate usable results quickly. Streamlining the modeling process and using parametric primitives or modifiers to control the different shapes will make it a lot easier to experiment with different density and topology strategies.
This is a quick overview of an iterative block out process that uses subdivision to generate soft[er] hard surface shapes.
An over complicated base mesh, often caused by rushing through the block out process, can make it difficult to generate clean shape transitions and smooth surfaces. Which is why it's generally considered best practice to: Focus on defining the larger forms before adding details. Develop both the shapes and topology flow together. Keep the base mesh relatively simple, until the larger shapes are accurate, then apply subdivision as required. Continue refining the surfaces and adding smaller details in stages.
Adding small surface details too early in the block out can cause extreme changes in mesh density. Which can be difficult to manage when modeling everything as a single mesh. Wait to add these types of details later in the modeling process. To simplify the mesh further, consider using floating geometry or textures to add these types of small surface details. Hide any potential seams around floating geometry by using the natural breaks between individual parts and other decorative surface features.
One major drawback of subdivision modeling is that it's often difficult to manage extreme density shifts across a mesh with a lot of compound curves. This grip is an example of something that's relatively quick and easy to block out but can become an absolute nightmare to edit, if trying to manually add all of the details to a single, watertight mesh.
Which demonstrates why it's important to develop the larger shapes first then work through adding all of the details in stages. While also using different modeling and texturing techniques to streamline the workflow.
@sacboi It definitely has an interesting combination of linear and curved features that produce some challenging shape intersections.
Every application is a bit different but a few things that can cause boolean solvers to fail are non-manifold geometry, overlapping co-planar edges or faces and un-welded vertices. Sometimes these types of issues can be resolved by simply changing the order of the boolean operations but persistent issues with the boolean meshes often need to be cleaned up with weld or decimate [by face angle] modifiers.
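For stubborn meshes, that cleanup can also be scripted. A minimal Blender bmesh sketch (the distance thresholds are arbitrary examples):

```python
import bpy
import bmesh

# Weld un-welded vertices and dissolve degenerate geometry on the
# active object before retrying the boolean operation.
obj = bpy.context.active_object
bm = bmesh.new()
bm.from_mesh(obj.data)

bmesh.ops.remove_doubles(bm, verts=bm.verts, dist=1e-4)
bmesh.ops.dissolve_degenerate(bm, edges=bm.edges, dist=1e-4)

bm.to_mesh(obj.data)
bm.free()
obj.data.update()
```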
There's a lot of different ways to approach developing and combining the shapes. Below is a quick animated breakdown of the process used in the previous example. Most of the external features are generated by modifiers. The internal features could have also been done with booleans and modifiers but the loop cuts and insets were a bit quicker. Individual boolean objects and destructive edits are highlighted in the animation.
Non-destructive workflow elements like booleans, modifiers and parametric primitives make adjusting the density or shape of key features fairly quick and easy. This type of workflow also helps avoid having to manually re-model large sections of the mesh when creating the in-game low poly and high poly base mesh. Something that's relevant to poly modeling for both traditional subdivision modeling and re-meshing + polishing workflows.
Subdivision modeling can also be streamlined quite a bit with a boolean and modifier based workflow. All of the minor support loops in this example are generated by a bevel / chamfer modifier. Which means the edge width or sharpness of the subdivided model can be adjusted at any time by simply changing the values in the modifier's settings panel.
Recap: Traditional subdivision modeling techniques do have some significant drawbacks, like the steep learning curve and counterintuitive smoothing behavior, but more contemporary workflow elements make results like this a lot more achievable. For those who prefer re-meshing + polishing workflows, the same shape focused approach to block outs and boolean + modifier workflow elements can help streamline a lot of different poly modeling processes.
This is another animated overview of a boolean and modifier based subdivision workflow. It covers the modeling process from block out to final high poly.
Basic forms are generated using primitives and modifiers. Matching the segments around shape intersections reduces the amount of manual cleanup required. Triangles and n-gons are constrained to flat surfaces to prevent visible smoothing artifacts. Additional loop paths are established with basic cut and join through operations. Final support loops are generated with an edge weighted bevel / chamfer modifier. The base mesh can be easily resolved to all quads if required.
It's a simple part but still demonstrates how a shape first approach reduces unnecessary complexity and makes it easier to create hard surface subdivision models. With modifiers, the shape and density of most features will remain flexible until the loop flow is finalized. The same base mesh can also be used to create low poly models or be pushed through a re-meshing and polishing workflow to avoid traditional subdivision altogether.
Below is a comparison of the subdivision high poly model and a couple of low poly bakes. Both low poly models were generated, from the high poly base mesh, by adjusting the modifiers that control segment density on the individual features then unwrapping the UVs by face angle. Any destructive edits were done towards the end of the low poly optimization pass. This streamlined workflow makes it a lot easier to iterate surface features and mesh density, without having to manually re-model large sections of the mesh.
Recap: Avoid wasting time trying to perfect minor technical elements before the fundamentals are fully developed.
Methodically work through the iterative phases of the modeling process. Focus on creating accurate shapes first then adjust the number of segments in the features and adjust the topology flow. Test variations of the low poly mesh and optimize anything that doesn't contribute to the visual quality of the surface or clarity of the silhouette. Place hard edges and UV seams at the same time. Optimize for consistent shading behavior while using the minimum number of mesh splits. Use UV checkers and test bakes to validate the unwrap and pack before committing to a specific UV layout.
Be willing to go back and adjust the mesh when issues are identified. It's a lot easier to fix problems with the models before low poly optimization and texturing. Continually evaluate the model from the player's point of view. Use resources efficiently and try to achieve the best possible results with the time available for each stage of the development process.
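As one concrete way to place the hard edges and UV seams at the same time, here's a hedged Blender sketch (the 60 degree threshold is only an example; use whatever matches the intended smoothing splits):

```python
import bpy
from math import radians

# Select sharp edges by angle on the active low poly, then mark them
# as both hard edges and UV seams in a single pass.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='EDGE')
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.mesh.edges_select_sharp(sharpness=radians(60.0))
bpy.ops.mesh.mark_sharp()
bpy.ops.mesh.mark_seam(clear=False)
bpy.ops.object.mode_set(mode='OBJECT')
```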
This is a quick overview of a recent material study for real-time vehicle models. The primary goal was to create a simple PBR texture, using a visual style that blends a few detail cues from reference images with some stylized painting techniques for weathering scale models. An additional technical goal was to use directional tiling overlays with the bakes (ambient occlusion, cavity, curvature, etc.) to reduce the amount of manual painting required.
Material blockout: Start simple. Establish the base values without damage, wear and weathering effects. Regularly check the materials under different lighting conditions and compare the results to the reference images. Use subtle variations in base color [diffuse] and roughness [gloss] values to help differentiate individual materials.
Below is what the material block out looked like for this study. Values for raw surfaces like bare metal and rubber were established first. Followed by matte red oxide primer and semi gloss dark yellow. The mask between the two paint layers is just a tiling overlay applied to the model with tri-planar projection.
This camouflage scheme is a stylized interpretation of an allegedly ahistoric late war factory pattern. Which had patches of red primer left exposed to save paint and speed up production. Evidence for this pattern is mostly anecdotal but the contrast provides some visual interest over the usual solid red or solid yellow that's often depicted in exhibits and on scale models.
Wear masks: Sometimes basic edge masks are too uniform. Use [tiling] overlays (chipped paint, random scratches, etc.) to add some visual interest and break up the shapes in the masks. Carefully select damage overlays that match the wear patterns in the reference images. Adjust the contrast, intensity, projection, rotation, and scale of the overlays to fit the visual narrative of the components.
Here's what the combined masks and damage overlays look like for the wear pass. Small circles were manually painted over areas that would be worn down by fastener rotation during field maintenance. An edge mask was used to reveal the base materials in areas that would be damaged by regular contact with the environment and tools. E.g. the rim of the steel road wheel would be exposed to abrasive material like dirt, gravel, large rocks, etc. Corners of fasteners and protruding parts of the hub assembly would contact repair equipment.
These basic wear masks were then refined with texture overlays. Contact damage tends to accumulate along exposed edges and produces wear patterns with sharp borders around large chips in the coating. Recessed surfaces tend to be better protected from impact damage but can trap abrasive material that creates softer and smaller scratches over a wide area.
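The layering logic behind those masks is simple to express as image math. A hedged numpy sketch (the inputs and weights are hypothetical grayscale arrays in the 0-1 range):

```python
import numpy as np

def wear_mask(edge_mask, ao, chip_overlay, scratch_overlay):
    """Combine baked masks with tiling overlays into one wear mask."""
    # Sharp chips accumulate along exposed edges and corners...
    exposed = np.clip(edge_mask * chip_overlay * 1.5, 0.0, 1.0)
    # ...while recessed areas trap abrasives that leave finer,
    # softer scratches spread over a wider area.
    recessed = np.clip((1.0 - ao) * scratch_overlay * 0.5, 0.0, 1.0)
    return np.clip(exposed + recessed, 0.0, 1.0)
```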
Wear pass: Use damage effects like edge wear, surface scratches and paint chips to build a narrative by exposing the underlying material layers. Complement these damage effects with surface defects like cracks, dents, slashes, etc. Add subtle color variations to indicate fading, oxidation, and staining.
Here's what the progression looks like for the wear pass. A subtle overlay lightened the color values in areas faded by exposure to the sun. Chipped edges and scratched surfaces were used to create points of visual interest, without overpowering the narrative. Surface cracks and dents were added to the normal channel to provide some contrasting wear patterns.
The wear pass provides a great opportunity to create interesting details but it's also important to avoid over wearing the surfaces. Since these wheels are supposed to be fairly new, it's unlikely that freshly exposed metal would have heavy rust or that the fresh paint would be flaking off. Surface damage on the rubber material is mostly tears, caused by rocks in the tracks, but there's also some subtle cracking that hints at the substandard quality of late war production.
Weathering masks: Environmental particulates like dust, dry soil, wet mud, etc. tend to accumulate in different patterns. Select unique texture overlays for each weathering layer. Match these overlays to the patterns in the reference images. Use different masking inputs to control where the overlays appear.
Here's what a few of the individual weathering masks look like. Inverse AO was used to mask off paint that was faded by the sun. AO and direction were combined with a dust overlay to create a mask for the dry dirt. The same setup was reused with different overlays to create separate masks for each of the mud layers.
Some of these overlays were a mix of wipe and spatter patterns. Which are visually similar to dry and wet brushing techniques for painting scale models. Keeping the scale and direction of these details in line with what's in the reference images helps blend the more stylized elements with the rest of the overlays.
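The directional dust mask follows the same pattern, with a world space up vector as the extra input. Another hypothetical numpy sketch:

```python
import numpy as np

def dust_mask(ao, normal_up, dust_overlay):
    """Dry dirt settles on occluded, upward facing surfaces."""
    facing_up = np.clip(normal_up, 0.0, 1.0)  # world space normal Z
    trapped = 1.0 - ao                        # occluded crevices
    return np.clip(trapped * facing_up * dust_overlay, 0.0, 1.0)
```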
Weathering pass: Avoid trying to do too much with too few layers. Build up the weathering effects in stages. Use materials with different color and roughness values to generate some subtle visual contrast. It's often helpful to have separate materials with light, medium and dark earth tones.
Here's what the basic weathering progression looks like. A light colored dry sand material was added first. Followed by a medium brown mud material. Then a darker oil material was added on top of the previous layers. These weathering details are just a base for additional detailing.
Since the base weathering textures aren't dynamic, it's important to balance out the placement of larger details in a way that prevents odd patterns from appearing when the wheels rotate in game. The streak of grease leaking from the hub is one exception to this but it's relatively thin and in the same orientation as the stripes on the camouflage pattern. If it becomes an issue then the overlay used to generate it can be toggled off.
Wash masks: Additional visual contrast can be created by highlighting recessed surface details with darker or lighter weathering materials. Use several complementary masking inputs to outline the larger shapes and add a couple of overlays to break up the outline.
Here's what the combined mask and overlay inputs look like for the liquid weathering layer. Surface details were outlined by combining AO and curvature masks. A grunge overlay was used to create some variation in the outline and drip overlays were used to add directional streaks.
In this material study the dark oily liquid is supposed to represent a combination of spilled engine oil and grease leaking from the cap on the wheel hub. The thin outline on the inside of the rim is where the oil has soaked into the dried mud and the drips are where it's been flung out by the rotation of the road wheel.
Dark washes work well for outlining fine surface details on scale models but it's easy to push it too far when replicating this on game assets. Use this effect sparingly and try to keep the outlines fairly thin or relatively subtle.
Environment pass: Use additional layers of dry dirt and wet mud to build up different weathering states that match the environment. Combine similar earth materials to create a subtle contrast between light and dark color variations and matte and shiny roughness values. The illusion of depth can be built up between the layers by adjusting the height and normal values of each material.
Here's what the different environmental states look like. The first layer is just a thicker dry earth. Followed by wet earth and mud. The previous oil layer is on top of the two dirt layers but below the mud layer. Height values between the layers were increased proportionally with the amount of mud.
Weathering a model so it looks like it fits in the environment is something that takes time and patience. Though there are some exceptions, the earth tones of a specific area tend to be fairly consistent. Use different types of earth materials to create subtle contrast. Adjust the color values of each environmental layer to build a narrative about where the vehicle has been.
Note: The material study was just a quick test to get a rough feel for what it would take to use this sort of workflow for a complete vehicle. A full production texture set would likely require additional hand painting. This model is also a placeholder and some of the smaller details may be a bit anachronistic. The primary goal of this sort of material study is to try new things out and get to a minimum viable product while streamlining the authoring process.
Recap: Establish the base material values first then build up the wear and weathering layers individually. Check the materials against reference images and under different lighting conditions. Use subtle variations in the base color and roughness values to differentiate each material. Combine different mask inputs and overlays to create unique patterns that fit the style and references.
Subdivision sketch: placing intersecting cylinders on existing edges.
Sometimes the size of an intersecting cylinder causes its circumference to fall directly on top of an existing edge. One way to resolve this is to adjust the number of segments in the intersecting cylinder until they are aligned with the existing edges of the other shape.
There generally isn't a hard requirement for the cage mesh of a subdivision cylinder to have a segment count that's evenly divisible by 4. Use whatever segment count provides the best match with the underlying geometry. As long as the spacing is relatively consistent, it's also fine to have cylinders with different rotational orientations or an odd number of segments.
Avoid deforming the edges or vertices that make up the walls of either cylinder. Use the space between the intersecting shape and the outer support loop to connect the two shapes without causing unintended surface deformation. Merge or dissolve any stray geometry back into the nearest vertices that are in line with the existing geometry.
There's a few different ways to structure the order of operations for routing the topology around the shape intersection. Below is just one example of what this process could look like.
Start by adjusting the number of segments in the intersecting cylinder to align with the edges in the larger cylinder. Join the shapes with a boolean operation then select the edges around the base of the shape intersection and chamfer them. This might look messy but all of the new geometry is coplanar with the existing faces. Clean up any stray geometry with snap vertex merge and edge dissolve operations. Cut in perpendicular edge loops to support the rest of the shape intersection.
Here's what the final topology routing looks like when subdivided. It's possible to achieve similar results with projection and manual loop cutting but that still requires clean up and increases the possibility of introducing unintended shape deformation.
The same segment matching and topology routing strategy also works with multiple cylinder intersections, where each circumference lands on or close to an existing edge in the larger cylinder. In the example below the largest intersecting cylinder is the same diameter as the base cylinder so there's no need to adjust the segment counts on that one.
Recap: Match the segments of intersecting cylinders. Adjust the segment count so the overlapping edges are parallel. Use the space between the base of the intersection and the outer support loop to make up for any difference between the smaller and larger shapes.
Additional resources on cylinder and cone intersections:
This is a quick look at using segment matching to create complex cutouts in cylindrical shapes and a comparison of the cleanup required for different modeling operations.
The basic modeling process, as shown below, is relatively straightforward. Align the segments by adjusting the density of the curved surfaces then run a boolean operation and inset or chamfer the features. Dissolve or snap merge left over geometry into the vertices that define the basic shapes. Cut in additional support loops to resolve the base mesh to all quads.
Cleaning up the mesh after the boolean operation does simplify the following modeling operations but can also cause accuracy issues or unintended shape deformation. Whether or not this is an issue often depends on the shape and the tool. Here the effect is quite subtle because the inset tool generates the smaller inner profile from the larger outer profile. Which effectively constrains the new shape to a smaller area where minor differences are less noticeable.
Below is what the rest of the modeling process could look like. Create additional features like end chamfers and a through hole with basic modeling operations. There should be minimal cleanup on the inside of the bore since the segments are already matched on the outside of the shape. Add support loops by running a bevel / chamfer operation on the edges that define the shapes.
Here's what the final mesh topology and subdivision previews look like.
There's a lot of different ways to structure the order of operations for adding the chamfer to the slot on the cylinder. The middle column of the following examples shows just how much difference there is between the inside and outside profiles created by each modeling operation.
Creating the outside profile then insetting to generate the chamfer produces fairly accurate shapes but requires some clean up. Removing geometry left over by the boolean operation then insetting does reduce some of the clean up but can also reduce the accuracy of the interior profile.
Creating the inner profile then insetting to generate the outer profile creates all of the chamfers at once but requires a lot more clean up and tends to produce some minor variance in the profile of the chamfer. With this particular shape, cleaning up the geometry then running the inset operation will likely cause deformation of the chamfer profile or disrupt the segment spacing of the cylinder.
Creating the inner profile then beveling / chamfering does generate the desired chamfer profile but also requires some clean up. Removing the left over geometry before running the bevel / chamfer operation does simplify things. Though it also tends to either change the profile of the chamfer or deform the cylinder.
Recap: When creating polygonal or subdivision models, there are some shapes that just require extra clean up work. The modeling workflow can be streamlined by approximating some of the features but it's also important to balance tradeoffs between accuracy and efficiency. Align the segments of the intersecting shapes then use modeling operations that produce the desired shapes with minimal clean up.
Study the reference images and figure out how the shapes fit together. Establish the rough proportions then come up with an order of operations for developing each important detail. Keep things relatively simple. Focus on accurately representing the shapes and use segment matching to reduce the amount of clean up required. Let the defining edges of the shapes guide the topology flow.
A lot of hard surface work is just variations of the same basic modeling operations. There's a certain repetition to it. Each individual part is different in its own way but it's also similar. All the individual parts tend to have a shared uniformity that brings the whole piece together. It all comes down to seeing how the basic shapes interact with each other to form the surface features.
An open letter to anyone just starting out in game art.
Learning a game asset creation pipeline requires commitment and repetition. It can be tempting to rush through projects but it's important to remember that it's often less about how fast the work is done and more about what is learned along the way. Actively participating in the learning process is an essential part of growing as an artist.
Active learning is a mix of gathering new information, discussing things with other artists, and working through different parts of a process to understand how things are generally done. While gathering new information is an important part of learning, it's the practice, personal reflection, and asking for outside feedback that really helps boost skill growth.
There's a number of different learning styles and formal education or mentorship can be a great way to learn from other artists but there really isn't a shortcut around putting in the work. Which is why leveling up game art skills really just comes down to lots of mindful practice by working through a series of specific goals. Part of this learning process is evaluating past work and finding areas that need improvement. Which often means being willing to go back and try several different approaches to see which works best in a specific situation.
Skill development tends to progress in stages. Fundamental knowledge quickly develops into a basic understanding but there are plateaus where experience or knowledge gaps make it difficult to solve unfamiliar challenges.
Progress can be slow during this period and it's important to keep moving forward. Try different approaches and figure out what moves things closer to the desired outcome. With enough effort, feedback, and practice this will generally result in a breakthrough moment where skills increase rapidly as a path forward is discovered.
This process of incremental improvement is more or less continuous: styles change, technology changes, workflows change, etc. There's always one more challenge, one more project, one more skill to level up. What's important is to keep learning. To advance along a path and grow as an artist.
There will also be periods where related skills are at different levels. When the next step in a process relies on one of these underdeveloped skills it will slow progress down. Which can be very frustrating when the source of the skill gap isn't obvious. This is why it's important to regularly think about what elements of the workflow can be improved and whether or not there are other skills required to reach the desired goal.
While it's not strictly necessary, some game artists do find it helpful to learn about or practice related art skills. Such as traditional drawing, painting, photography, etc. All of these require working with color, composition, and proportion in different ways. Which can help bring a fresh approach to understanding forms and solving similar problems in game art.
As an example: though 3D modeling is a highly technical process, most art fundamentals (like analytical observation, pattern recognition, and problem solving) are shared across most creative disciplines. If identifying shapes in reference images is difficult then it can be helpful to practice analyzing shapes by outlining key surface transitions to gain a better understanding of how everything is connected. So, practicing those other traditional art skills can help level up game art skills indirectly.
Going through the various stages of the emotional cycle of change is a part of almost every project and learning experience. When progress is slow or things become difficult, it can be helpful to remember this chart and continue moving forward past the low points.
Managing these feelings is its own skill set and it's often easier to get through this part of the learning process with the support of a dedicated community or other artists that are moving towards similar goals and sharing similar experiences. Just be careful to guard against letting external emotional trauma or preconceptions dictate a path before things even get started. Continue to show up, sharpen skills and take new opportunities as they come.
There's a number of dedicated game art discords but that experience isn't for everyone. So, it can be helpful to seek out experienced artists on other platforms. Like the career and education section here on Polycount.
Recap: When learning the fundamentals of game art, it can be really helpful to pick a series of smaller projects that provide a unique set of challenges but are also easy enough to complete in a couple of days or a couple of weeks. Invest a few hours per day on these projects, stick with it, find a supportive community of artists, regularly ask for feedback on the various stages of each project, be willing to implement changes based on relevant feedback, consistently finish projects and share the results on a professional platform.
Keep everything in perspective. Success isn't guaranteed but it's often much closer than it appears.
Sources: the concepts depicted in these graphs aren't new, they have been presented by others numerous times and similar images make the rounds on different pages. So, it's difficult to pin down exactly who came up with some of the presentation formats. The written component of this post is a revised message, based largely on personal experience, to another artist.
Subdivision sketch: seams on softgoods.
This is a quick look at blocking out linear details on softgoods, before sculpting the surface details.
Use basic subdivision modeling strategies to block out the larger shapes then place key edge loops around the seams on the fabric shell. Shrink down the middle support loops to create depth along the seams. Undulations and macro fold details can also be created by generating subtle height differences in the shell geometry.
Randomly select and move some of the vertices away from the surface of the shell to create a subtle height difference. This can be done either manually with randomized selections and scaling operations or with a displacement modifier. Triangulating the base mesh, before subdividing, creates additional edges that change the localized smoothing behavior and that's what helps generate these larger surface wrinkles.
All of these operations are done at subdivision level 0 and using modifiers to control the localized displacement of select vertices and the triangulation order makes it really easy to quickly change things. It's also not that difficult to set up since most of it is just throwing a modifier into the stack and adjusting the parameters.
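For anyone following along in Blender, below is a minimal bpy sketch of that kind of stack. The 20% selection chance, group name, and strength values are illustrative assumptions, not settings from the original scene.

import bpy
import random

obj = bpy.context.active_object
mesh = obj.data

# Randomly weight some vertices into a group for the displace modifier to use.
group = obj.vertex_groups.new(name="wrinkle_mask")
picked = [v.index for v in mesh.vertices if random.random() < 0.2]
group.add(picked, 1.0, 'REPLACE')

# Displace only the weighted vertices to create subtle height differences.
disp = obj.modifiers.new(name="WrinkleDisplace", type='DISPLACE')
disp.vertex_group = "wrinkle_mask"
disp.strength = 0.02  # keep it subtle; tune to the scale of the shell

# Triangulating before subdivision changes the localized smoothing behavior.
tri = obj.modifiers.new(name="WrinkleTris", type='TRIANGULATE')
tri.quad_method = 'SHORTEST_DIAGONAL'  # swap methods to change the wrinkle patterns

# Subdivision resolves the mesh back to quads, ready for detail sculpting.
subd = obj.modifiers.new(name="Subdiv", type='SUBSURF')
subd.levels = 2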
Localized differences in surface height control where the wrinkles appear but the triangulation method used controls the orientation, scale, and frequency of the wrinkles themselves. The difference can be subtle but there is a difference in the smoothing stress created by each triangulation method.
This effect can be controlled non-destructively with modifiers. Different triangulation methods produce different stress patterns for the wrinkles. It's a really cheap, quick, and easy way to generate undulations and macro wrinkles on certain types of softgood shapes.
It's not perfect but it's very fast and completely reversible since the surface displacement and triangulation can be controlled with modifiers. Subdivision resolves the mesh to all quads so it's pretty much ready for sculpting the fine details. The rest of the sculpting work can be done either in ZBrush or Blender using the cloth sim brushes that inflate, stretch, and manipulate the folds on the high poly sculpt.
Alternately, just use Marvelous to model and simulate textile shells. It's a fairly straightforward application with a lot of documentation and learning a little bit about sewing also goes a long way towards figuring out how all the pieces of fabric shells are stitched together on real products.
Some additional examples of similar soft hard surface shapes:
I've been testing 'simplified' methods to create detailed ammo pouches - load bearing harness - backpacks...etc, without resorting to using dedicated cloth apps like MD, so thanks very much for your insight.
In general though, most technical elements are relatively easy to measure. That's probably why it's tempting to try distilling creative processes down into a set of fixed rules. While this sort of approach can work well enough when just learning the basics, it also tends to restrict problem solving by oversimplifying things and optimizing for binary decision making. Which generally produces less than optimal results when dealing with complex problems over a longer period of time and contributes to the perpetuation of technical dogma.
Just in game art alone, the relatively short evolutionary period of the design tools has already seen several inflection points where established workflows have changed suddenly and artists that were unwilling to at least entertain the idea of doing things differently were completely left behind.
The switch to PBR texturing workflows and the subsequent rise of dedicated texturing applications is one fairly recent example. Another, which is adjacent to the earlier shift towards sculpting organics, is the rapid evolution of the boolean re-meshing workflow that's now seeing 3D DCCs being replaced with CAD applications. Parametric modeling is accurate and relatively fast. Two things that, arguably, old school, grid based subdivision modeling is not.
These kinds of rapid paradigm shifts are often focused on moving to processes that offer significant improvements in efficiency and visual fidelity. Something that a lot of the older workflows just can't compete against. That's not to say that elements of these older workflows aren't still relevant. It's just that the weaknesses now outweigh the strengths. Traditional subdivision modeling is no exception to this. Especially when it comes to hard surface content.
Booleans, modifiers and n-gons speed up different parts of the modeling process but it's important to remember that they're just intermediate steps and aren't the entire workflow. When combined with effective block outs and segment matching strategies, the n-gons in a base mesh can be resolved to all quads if required. So all quad geometry isn't necessarily exclusive to slower, traditional modeling methods like box, point inflation, edge extrusion or strip modeling.
The takeaway from all this is that the technical elements should be chosen to serve the creative elements that further the story. Not the reverse. Part of improving as an artist is learning to work within resource constraints to creatively solve aesthetic and communication problems while maintaining a cohesive narrative. Which admittedly does require technical understanding. It's just that understanding has to be tempered with other creative skills that might be neglected during these types of discussions.
Sometimes it's helpful to look at the technical elements of a workflow with a more pragmatic lens. Focusing less on tradition and more on comparing the cost of the inputs [time, emotional capital, etc.] to the value generated. Exploring different workflows also provides some needed contrast that helps identify weaknesses in current workflows. It's that sort of iteration and reflection that moves things forwards.
Polycount's career and education section is also a great resource for learning about what's expected from artists working in a specific field and is probably a better venue for discussing building a portfolio to land an animation, VFX or film job. Definitely worth the time to read through the advice offered there by other experienced artists that have worked on similar projects.
Subdivision sketch: hemispherical intersections.
This is a quick look at how segment matching can be used to create a clean intersection between the shapes that are commonly found on the ball fence of side by side break actions. Many antiques and some current production examples use the same hand shaping processes. So it's not uncommon to find references with minor shape inconsistencies or blended surfaces that have very gradual transitions. Which means close enough is good enough for most models.
Segment matching is a fundamental element of most subdivision modeling workflows. Using primitives with an arbitrary number of segments can be a convenient way to start the block out but also tends to produce geometry that makes it difficult to route the loop flow around the base of the shape intersection. Matching the number of segments in the intersecting shapes helps provide a relatively consistent path for the adjacent edge loops. Which not only improves the flow path for the support loops around the base of the intersection but also makes it easier to join the shapes without generating undesired surface deformation.
Start with a simple block out of all the major shapes. Adjust the number of segments in the round over and the sphere to roughly align the edges around the base of the shape intersection. Use a boolean operation to join the shapes then clean up any stray geometry by merging it down into the vertices of the underlying surfaces.
Slice across the surface of the sphere to re-route the loop flow and resolve the mesh to all quads by dissolving any left over edges. Depending on the size and position of the sphere and round over, it may be necessary to adjust the position of the corner vertex that turns the loop flow up and around the sphere. The edges that define the shape intersection then become the loop path for the support loops. Which can be added with a bevel / chamfer operation or modifier.
The same approach also works with wider round overs and shallower curves. Simply adjust the amount of geometry so the edges of the shapes align roughly where the surfaces overlap. Use enough geometry to accurately represent the shapes at the desired view distance. Balancing geometry density with editability.
Quad spheres can also be used with a segment matching strategy but the arbitrary number of segments in the quad sphere means most of the adjustment will need to be done on the shape with the round over. Which tends to make optimizing the sphere side of the intersection a bit more challenging. In some of these cases it's necessary to dissolve the extraneous edges and slide the remaining loop along the surface to form the desired shape.
Recap: Block out the basic shapes and adjust the number of segments in each until the overlapping edges are roughly aligned. This will generally make it easier to clean up boolean operations and provide a clean path for the support loops around the shape intersection.
Subdivision sketch: studying shapes and simplifying subdivision.
This write-up is a brief overview of a simple, shape based approach to subdivision modeling. This approach, with the primary focus being to create accurate shapes that define the loop flow paths, can help streamline the modeling process for most hard surface objects.
Working without enough information to visually interpret the shapes tends to add unnecessary frustration. So, start the modeling process by researching and gathering different references. Which can include things like background information, images, measurements, videos, etc. These references should be used as a guide for modeling real world objects and as a baseline for evaluating the form and function of design concepts.
Look at the references and think about how the object is made and how it's used. This will provide additional context that ties the real world details from the reference images to the creative narrative used to guide the development of the artwork. Something that also helps inform decision making during the modeling process and provides inspiration for developing the visual storytelling elements that carry through to the texturing process.
Analyze the shapes in the concepts and references. Identify the key surface features and observe how they interact with each other. Establish the overall scale of the object then figure out the proportions between the different shapes that make up the surfaces. Use this information to come up with an order of operations for blocking out the basic shapes. If necessary: draw out the basic primitives that make up the object. Highlight flat areas, curved areas and the transitions between them.
Most topology flow issues can be solved during the block out. Which is why it's generally considered best practice to: Use the edges of the existing geometry as support for shape intersections whenever possible. Use the minimum amount of geometry required to create reasonably accurate shapes. Use a segment matching strategy to maintain uniform edge spacing when joining curved shapes.
Develop the block out in stages. Keep things relatively simple for the first few iterations of the block out. Larger shapes should generally be defined first, while also keeping the smaller details in mind. Focus on creating shapes that are accurate to the references then solve the major topology flow issues before adding the support loops. Overall mesh complexity can also be reduced by modeling individual components of the object separately.
Let the shapes define the loop flow. Some features may have curvature that influences or restricts the loop flow of adjacent surfaces. Block out those shapes first then adjust the number of segments in the adjacent surfaces to roughly match the edges where the two shapes intersect. Any significant difference between the edges of intersecting shapes can usually be averaged out between the support loops.
With this iterative approach to blocking out the shapes then solving the topology flow issues, the edges that define the borders of the shapes become the loop paths. Which means most of the support loops can be added by simply selecting those defining edges and using a bevel / chamfer operation to add the outside loops. Alternately, loop cuts and inset operations can also be used when the support loops are only needed on one side of the edges that define the shapes.
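In Blender, that last step can be as simple as running a single bevel over the selected defining edges. A minimal sketch, with placeholder width and segment values:

import bpy

# With the defining edges selected in edit mode, one bevel operation
# adds the outside support loops on both sides of each selected edge.
bpy.ops.mesh.bevel(offset=0.002, segments=2, affect='EDGES')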
This shape based loop routing strategy tends to require little manual cleanup and can be mostly automated using modifiers. Something that helps make hard surface subdivision modeling much more approachable. The examples in this write-up show how this basic workflow can be applied to a moderately complex, plastic injection molded part which has a mix of soft, lofted shape transitions and hard seam lines. Which are commonly found on a variety of different hard surface objects. So, the same sort of approach will generally work with most hard surface modeling workflows.
Recap: Analyze the shapes in the concepts and references. Develop the block out in stages. Let the shapes define the loop flow. Match the segments of intersecting shapes. Use the existing geometry to guide the loop paths. Solve topology issues early on then add the support loops.
Subdivision sketch: hand guard.
This is a follow up to the previous post about shape analysis. It's just a quick look at applying the iterative block out process to larger plastic components. Which are often part of other hard surface objects.
Identifying how the basic shapes are connected is a fundamental part of subdivision modeling. So, gather a lot of good reference material. Analyze the references to figure out what the shapes are then come up with a plan for connecting those shapes.
Work through the block out process in stages. Keep things relatively simple early on and focus on creating accurate shapes before adding surface details. This will make it a lot easier to maintain a higher level of surface quality during subsequent modeling operations.
Approach the modeling process with specific goals but also be willing to adjust the order of operations based on the actual results. Rather than sticking with preconceived ideas. Focus on getting the shapes right and rely on tools or modifiers to generate curves, complex shape intersections, fillets, roundovers, etc.
There's often significant overlap in the poly modeling fundamentals used to develop block outs for both re-meshing and subdivision workflows. Three fundamental concepts that make subdivision modeling more approachable are: use a reasonable amount of geometry in the shapes, adjust the number of segments in the intersecting shapes so they roughly match each other and use the existing geometry as support for shape transitions.
Most hard surface game art props aren't required to deform. Which opens up a lot of possibilities for using simplified topology on high poly surfaces that are flat or otherwise well supported. This makes it a lot easier to streamline the modeling process by reducing mesh complexity with workflow elements like booleans, modifiers and n-gons. Something that's still relevant in contemporary re-meshing workflows.
In this example: the basic shapes are mostly generated by boolean operations and all of the small support loops are generated by a simple bevel / chamfer modifier. Which means it's possible to adjust the width and profile of the edge highlights by changing the values in the modifier controls. These modifier based support loops are also used to replicate the parting lines. Where the splits in the mold, used to manufacture the real part, generate visible interruptions in the shape transitions.
Recap: It's very easy to over focus on the technical aspects of different workflows but one of the core elements of hard surface modeling is being able to recognize and recreate the shapes that make up an object. Regardless of the modeling workflow, using an iterative block out strategy makes it easier to create accurate shape profiles and solve problems sequentially. Without becoming encumbered by minor surface details and managing complex technical elements that often aren't relevant during the early part of the modeling process.
Subdivision sketch: cylinder release.
This is a quick overview of an iterative block out process, combined with a boolean and modifier based subdivision workflow.
A significant part of hard surface modeling is figuring out the shapes in the references then generating an accurate block out. Booleans and modifiers help streamline a lot of basic modeling tasks. They also make it easier to adjust individual surface features, without having to remodel large sections of the mesh. Which reduces the amount of work required to make significant revisions to the basic shapes. There's also the added benefit of using the final block out as a base mesh for generating both the high poly and low poly models. Something that's still relevant to contemporary poly re-meshing workflows.
In the example above: The curved surfaces are generated by bevel / chamfer modifiers and surface features, like the spherical knurling pattern, are cut in with booleans. Everything in the block out remains editable through the modifier stack. First pass cleanup of the base mesh is handled by modifiers that dissolve extraneous edges by angle and weld any stray vertices by distance. Support loops for the high poly are automatically generated by a simple angle based bevel / chamfer modifier and the width parameter can be adjusted to make the edge highlights sharper or softer.
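As a rough illustration of that kind of stack in Blender's Python API; the thresholds, angles, and widths below are placeholder values rather than the ones used on this model:

import bpy

obj = bpy.context.active_object

# First pass cleanup: weld any stray vertices by distance...
weld = obj.modifiers.new(name="WeldCleanup", type='WELD')
weld.merge_threshold = 0.0001

# ...then dissolve extraneous edges by angle (planar decimation).
dec = obj.modifiers.new(name="DissolveByAngle", type='DECIMATE')
dec.decimate_type = 'DISSOLVE'
dec.angle_limit = 0.0175  # radians, roughly 1 degree

# An angle limited bevel generates the support loops; the width parameter
# makes the subdivided edge highlights sharper or softer.
bev = obj.modifiers.new(name="SupportLoops", type='BEVEL')
bev.limit_method = 'ANGLE'
bev.angle_limit = 0.5236  # radians, roughly 30 degrees
bev.width = 0.002
bev.segments = 2

# Subdivision on top of the stack previews the final high poly smoothing.
subd = obj.modifiers.new(name="Subdiv", type='SUBSURF')
subd.levels = 2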
Recap: Using an iterative block out process makes it easier to focus on resolving issues with the individual shapes of an object and is a workflow element that's relevant to almost all hard surface modeling workflows. There's also a significant amount of overlap in the poly modeling skills used to develop the block out, base mesh, high poly and low poly models. Which are still relevant to both boolean re-meshing and subdivision workflows. It's these overlapping skills that are worth developing and the associated workflow processes that are worth streamlining.
@FrankPolygon thank you for sharing, I will also try to apply this in my workflow. Can I ask if you also make your own MAXScript to improve your workflow?
Thanks Frank, I'm always blown away by how you do things and it's great to be able to learn from it! Also, I love the way you present it all, it must take a while to get all the pics set up just right.
@chien Appreciate the comment. Writing custom scripts can be useful for solving specific workflow problems but most of the major 3D DCCs already have a variety of third party solutions that cover common modeling tasks. So, it often makes more sense to look for existing solutions before investing a lot of time, especially when there's a dedicated scripting community.
@danielrobnik Thanks! Great to hear the write-ups are informative. Producing consistent documentation is a significant time investment but saving the working files often and setting up presentation templates does help streamline the process. Testing the different workflows and summarizing the results is probably the most time consuming part.
Subdivision sketch: cylinder details and radial tiling.
This write-up looks at matching a cylinder's segment count to radially tiled details. While there are different ways to approach adding these details, the focus here is less about specific modeling operations and more about planning during the block out stage.
Gathering reference material is part of this planning process. Dimensional references, like CAD data, GD&T prints, scans, photogrammetry, etc., are often ideal reference sources but aren't always available. High quality images [photo or video] are an alternate reference source that can also be used to identify detailed features. Different camera angles and lighting setups are often helpful for figuring out how all of the shapes blend together. Near orthographic views are also helpful for establishing the overall scale and proportion.
Analyzing the shapes in the references usually provides some insight about the minimum number of segments required to accurately block out the basic shapes. Start the shape analysis by identifying the primary forms then look for the smallest details on those forms that need to be accurately modeled. The relative size of these smaller details often constrains the adjacent geometry in a way that's a significant factor in determining how many segments are required in the larger forms.
Some details are too small to be reasonably integrated into the base mesh. Depending on the visual quality goals, smaller surface details can be added with floating geometry or with texture overlays in the height or normal channels. Figuring out what to simplify on the high poly and low poly models is generally going to be based on artistic elements like style, prominence, typical view distance and other technical constraints like poly count, texture size, texel density, etc.
Below is a visual example of this type of shape analysis: Overall, the primary form is a basic cylinder with some radially tiling details. The smallest detail on the outside wall of the cylinder is the stop notch. Which means it's the constraining feature for the segment spacing of the primary form. The stop notches and other details, like the flutes, chambers and ratchet slots, are grouped together and repeat radially at regular intervals. Each detail appears five times. So, the total number of segments in the larger cylinder will need to be divisible by five. That way the larger form can be simplified into a single, tileable section.
There's a few different ways to come up with ratios that describe radially tiling features. Using an image to measure the features and comparing the size difference is relatively straightforward but doesn't account for the full distance along the curvature of the surfaces. Inserting a cylinder primitive over a background image then adjusting the number of segments until they line up with all of the shapes and are divisible by the total number of unique features is another option. However, this approach can be time consuming if the 3D application doesn't support parametric primitives.
With radially tiling features, it's also possible to use the total number of unique elements to develop some basic ratios. These basic ratios can then be used to determine the minimum number of segments required to create the repeating patterns. Which makes it possible to quickly calculate a few different options for the total segment count needed to block out the larger forms.
As shown below, a simple mesh can be helpful for visualizing the relationship between the radial features. The cylinder in the reference image has five flutes and five empty spaces between the flutes. If the width of the flutes and empty space is the same then it's a simple 1:1 ratio and the minimum segment count is 5+5. If the flutes are half the width of the empty spaces then the ratio is 1:2 and the minimum segment count is 5+10. If the flutes are only 25% smaller than the empty spaces then the ratio is 3:4 and the minimum segment count is 15 + 20. Etc.
Multiples of this ratio can then be used to adjust the total number of segments in the primary form to support the constraining features. Using the previous example, where the flutes are 25% smaller than the empty spaces, if the shape of each flute requires six radial edge segments then the total number of segments in the larger cylindrical form is 70 or 30+40.
To produce an evenly distributed segment count, that's divisible by the number of unique radial features, it's sometimes necessary to round the ratio to the nearest whole number. E.g. Multiplying the base ratio of 3:4 by 1.5 is 4.5:6 or 22.5 + 30 which needs to be rounded to 5:6 or 25 + 30 to match the flute geometry and be evenly divisible by 5.
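For anyone who prefers to script the arithmetic, here's a small plain Python helper that reproduces the example above; the function name and the rounding choice are just one possible approach:

from math import ceil

def radial_segments(features, feature_ratio, space_ratio, edges_per_feature):
    """Minimum total segment count for a repeating feature:space pattern."""
    # Scale the base ratio so each feature gets the required number of edges,
    # rounding the space share up so the total stays divisible by the feature count.
    scale = edges_per_feature / feature_ratio
    space_edges = ceil(space_ratio * scale)
    return features * (edges_per_feature + space_edges)

# Five flutes, 25% narrower than the spaces (3:4), six edges per flute:
# 2x scale gives 6 + 8 edges per repeat, so 5 * 14 = 70 total segments.
print(radial_segments(5, 3, 4, 6))  # 70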
Math isn't a hard requirement for most modeling tasks. Intuition gained through practice is often good enough. It's just that math makes it easier to evaluate certain technical questions, like "How much geometry is required to accurately represent these shapes?", without having to model the object several times to find an answer.
Blocking out the shapes with an arbitrary number of segments then adjusting the mesh to line up the edges is a fairly straightforward approach that doesn't require using a lot of math. The tradeoff is that trying to match the segments using trial and error relies heavily on intuition, iteration and luck. Which often means either reworking large portions of the mesh or compromising on quality around complex shape intersections. Especially when using only basic poly modeling operations.
This is where the flexibility of an iterative block out process, that uses a parametric primitive and modifier based modeling workflow, has a significant advantage. With this sort of workflow the mesh density can be adjusted non-destructively. Which makes aligning the segments of each shape fairly straightforward. Just change the input numbers on the shape or modifier that controls an individual feature and the updated segment count is pushed through the modifier stack. Without requiring any manual mesh editing operations.
Whether using destructive or non-destructive modeling, dividing the object into tileable sections can reduce the amount of work required. Which makes quickly testing different segment count and topology strategies a lot easier. Below is an example of what this iterative block out process could look like. Using a non-linear, non-destructive modeling workflow that leverages booleans, modifiers and parametric primitives.
Deciding how much geometry to use really comes down to how accurate the shapes need to be. For these shapes the minimum viable segment count is pretty close to the original estimate of 35 segments. While the subdivided mesh looks fine from most angles there are some subtle deformations around the stop notch. Likely caused by all of the geometry bunching up in that area and disrupting the segment spacing between the edges that make up the larger shape.
Some of these issues can be resolved by simplifying the geometry but this also tends to soften some of the sharper corners. While it is possible to compensate for a lot of these artifacts by manually deforming the geometry of the base mesh, this can reduce the overall accuracy and quality of the surface. In some situations it may make sense to use shrink-wrap modifiers to project the mesh onto a clean surface but this approach does come with certain limitations.
Something with this level of surface quality might be fine for a third person prop or background element but wouldn't be acceptable for most AAA FPS items. This is why it often makes sense to do some quick tests with just the constraining features and decide what level of surface quality is appropriate for a given view distance and time budget.
Sometimes it makes sense to try and use the existing geometry as support loops and other times it's more effective to generate the support loops on top of the existing geometry.
In the example above, offsetting the stop notch from the existing edges in the cylinder provides a set of outer support loops but using a bevel / chamfer modifier to automatically generate a set of tighter support loops around the shapes is part of what's causing the geometry to bunch up and deform the curve. Manually placing support loops on the inside of those shapes would solve a few more of the smoothing artifacts but would also reduce the sharpness of the shape transitions. Which could work if softer edge highlights were acceptable.
However, it's much more time efficient to rely on the support loops being automatically generated via a modifier. In this case it makes more sense to place the shapes and support loops directly on the existing edges of the primary form. Increasing the segment count by a multiple of 1.5 then rounding up to the next whole number that's divisible by 5 produces much cleaner results.
When it comes to subdivision modeling, there's often a tendency to over complicate the mesh. Mostly due to skipping over parts of the block out process or arbitrarily increasing the mesh density to avoid using booleans and modifiers. Sometimes this sort of decision making is rooted in the limitations of the 3D application's tool set and other times it's based on technical dogma or popular misconceptions.
It's very easy to get bogged down trying to perfect a lot of the technical elements. Mainly because they're easily measurable and aiming for specific numbers or a quad grid mesh can be really satisfying. What's important to remember though is that most players won't ever see the high poly model. Much less care about what the wire-frame looks like. So, it's much more important to focus on the artistic and effort/cost components of the modeling process and less on chasing technical minutia.
There's also some important tradeoffs that are worth considering. If a surface is going to have a lot of high frequency normal details added later in the workflow, like rust, pitting, dents or other surface defects, then the actual surface quality of the high poly model probably doesn't need to be absolutely perfect.
Of course that sort of pragmatic outlook isn't an excuse for creating meshes with visible shading artifacts. It's more about having the permission to explore workflow elements that save time and effort while producing usable results.
There are also easier ways to create some of these patterns. Like modeling the details flat and deforming them into a cylinder but there will be certain situations where that is unworkable or causes other issues. So the overall goal here was to look more at the planning process for complex shapes tiled along curves.
To recap: the planning process really starts with finding good reference material and building an understanding of the shapes that make up the object. From this shape analysis it's possible to come up with some basic ratios that can be used to derive the minimum amount of geometry required to represent the shapes on a cylinder.
Working through an iterative block out process makes it a lot easier to resolve major shape and topology issues before investing a lot of time into any specific part of the mesh. Sometimes it's necessary to make compromises on the accuracy of the shapes and the surface quality but it's still possible to generate usable results quickly. Streamlining the modeling process and using parametric primitives or modifiers to control the different shapes will make it a lot easier to experiment with different density and topology strategies.
Subdivision sketch: hand grip.
This is a quick overview of an iterative block out process that uses subdivision to generate soft[er] hard surface shapes.
An over complicated base mesh, often caused by rushing through the block out process, can make it difficult to generate clean shape transitions and smooth surfaces. Which is why it's generally considered best practice to: Focus on defining the larger forms before adding details. Develop both the shapes and topology flow together. Keep the base mesh relatively simple, until the larger shapes are accurate, then apply subdivision as required. Continue refining the surfaces and adding smaller details in stages.
Adding small surface details too early in the block out can cause extreme changes in mesh density. Which can be difficult to manage when modeling everything as a single mesh. Wait to add these types of details later in the modeling process. To simplify the mesh further, consider using floating geometry or textures to add these types of small surface details. Hide any potential seams around floating geometry by using the natural breaks between individual parts and other decorative surface features.
One major drawback of subdivision modeling is that it's often difficult to manage extreme density shifts across a mesh with a lot of compound curves. This grip is an example of something that's relatively quick and easy to block out but can become an absolute nightmare to edit, if trying to manually add all of the details to a single, watertight mesh.
Which demonstrates why it's important to develop the larger shapes first then work through adding all of the details in stages. While also using different modeling and texturing techniques to streamline the workflow.
That hand-guard (foregrip) is nuts, oh well guess a lot more practice to get my bool operands to look as good or even behave : /
@sacboi It definitely has an interesting combination of linear and curved features that produce some challenging shape intersections.
Every application is a bit different but a few things that can cause boolean solvers to fail are non-manifold geometry, overlapping co-planar edges or faces and un-welded vertices. Sometimes these types of issues can be resolved by simply changing the order of the boolean operations but persistent issues with the boolean meshes often need to be cleaned up with weld or decimate [by face angle] modifiers.
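When a boolean does fail in Blender, a quick way to find the usual suspects is selecting the non-manifold geometry. A minimal diagnostic sketch, assuming the problem mesh is the active object:

import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='EDGE')
bpy.ops.mesh.select_all(action='DESELECT')

# Highlights open borders, wire edges and edges shared by three or more faces.
bpy.ops.mesh.select_non_manifold()

# Merging by distance also catches the un-welded vertices mentioned above:
# bpy.ops.mesh.select_all(action='SELECT')
# bpy.ops.mesh.remove_doubles(threshold=0.0001)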
There's a lot of different ways to approach developing and combining the shapes. Below is a quick animated breakdown of the process used in the previous example. Most of the external features are generated by modifiers. The internal features could have also been done with booleans and modifiers but the loop cuts and insets were a bit quicker. Individual boolean objects and destructive edits are highlighted in the animation.
Non-destructive workflow elements like booleans, modifiers and parametric primitives make adjusting the density or shape of key features fairly quick and easy. This type of workflow also helps avoid having to manually re-model large sections of the mesh when creating the in-game low poly and high poly base mesh. Something that's relevant to poly modeling for both traditional subdivision modeling and re-meshing + polishing workflows.
Subdivision modeling can also be streamlined quite a bit with a boolean and modifier based workflow. All of the minor support loops in this example are generated by a bevel / chamfer modifier. Which means the edge width or sharpness of the subdivided model can be adjusted at any time by simply changing the values in the modifier's settings panel.
Recap: Traditional subdivision modeling techniques do have some significant drawbacks, like the steep learning curve and counterintuitive smoothing behavior, but more contemporary workflow elements make results like this a lot more achievable. For those who prefer re-meshing + polishing workflows, the same shape focused approach to block outs and boolean + modifier workflow elements can help streamline a lot of different poly modeling processes.
Very cool, appreciate the additional info :)
Subdivision sketch: manifold block.
This is another animated overview of a boolean and modifier based subdivision workflow. It covers the modeling process from block out to final high poly.
Basic forms are generated using primitives and modifiers. Matching the segments around shape intersections reduces the amount of manual cleanup required. Triangles and n-gons are constrained to flat surfaces to prevent visible smoothing artifacts. Additional loop paths are established with basic cut and join through operations. Final support loops are generated with an edge weighted bevel / chamfer modifier. The base mesh can be easily resolved to all quads if required.
It's a simple part but still demonstrates how a shape first approach reduces unnecessary complexity and makes it easier to create hard surface subdivision models. With modifiers, the shape and density of most features will remain flexible until the loop flow is finalized. The same base mesh can also be used to create low poly models or be pushed through a re-meshing and polishing workflow to avoid traditional subdivision all together.
Below is a comparison of the subdivision high poly model and a couple of low poly bakes. Both low poly models were generated, from the high poly base mesh, by adjusting the modifiers that control segment density on the individual features then unwrapping the UVs by face angle. Any destructive edits were done towards the end of the low poly optimization pass. This streamlined workflow makes it a lot easier to iterate surface features and mesh density, without having to manually re-model large sections of the mesh.
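As a hedged sketch of what that low poly pass could look like when scripted in Blender, assuming a bevel modifier controls the support loop density; the names and values here are illustrative:

import bpy
from math import radians

src = bpy.context.active_object

# Duplicate the object so the modifier driven base mesh stays editable.
low = src.copy()
low.data = src.data.copy()
bpy.context.collection.objects.link(low)
bpy.context.view_layer.objects.active = low

# Reduce the segment density through the modifier settings before applying.
for mod in low.modifiers:
    if mod.type == 'BEVEL':
        mod.segments = 1  # collapse the support loops for the low poly

for mod in list(low.modifiers):
    bpy.ops.object.modifier_apply(modifier=mod.name)

# Unwrap by face angle. (Recent Blender versions expect radians here.)
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(angle_limit=radians(66))
bpy.ops.object.mode_set(mode='OBJECT')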
Recap: Avoid wasting time trying to perfect minor technical elements before the fundamentals are fully developed.
Methodically work through the iterative phases of the modeling process. Focus on creating accurate shapes first then adjust the number of segments in the features and adjust the topology flow. Test variations of the low poly mesh and optimize anything that doesn't contribute to the visual quality of the surface or clarity of the silhouette. Place hard edges and UV seams at the same time. Optimize for consistent shading behavior while using the minimum number of mesh splits. Use UV checkers and test bakes to validate the unwrap and pack before committing to a specific UV layout.
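One way to place hard edges and UV seams at the same time is to mirror one onto the other with a few lines of script. A minimal Blender sketch, assuming the hard edges are already marked sharp on the active mesh:

import bpy

obj = bpy.context.active_object
mesh = obj.data

# In object mode: every edge marked sharp also gets marked as a UV seam,
# keeping the shading splits and UV splits in sync for baking.
for edge in mesh.edges:
    if edge.use_edge_sharp:
        edge.use_seam = True
mesh.update()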
Be willing to go back and adjust the mesh when issues are identified. It's a lot easier to fix problems with the models before low poly optimization and texturing. Continually evaluate the model from the player's point of view. Use resources efficiently and try to achieve the best possible results with the time available for each stage of the development process.
Material study: model tank road wheel.
Material blockout: Start simple. Establish the base values without damage, wear and weathering effects. Regularly check the materials under different lighting conditions and compare the results to the reference images. Use subtle variations in base color [diffuse] and roughness [gloss] values to help differentiate individual materials.
Below is what the material block out looked like for this study. Values for raw surfaces like bare metal and rubber were established first. Followed by matte red oxide primer and semi gloss dark yellow. The mask between the two paint layers is just a tiling overlay applied to the model with tri-planar projection.
This camouflage scheme is a stylized interpretation of an allegedly ahistoric late war factory pattern. Which had patches of red primer left exposed to save paint and speed up production. Evidence for this pattern is mostly anecdotal but the contrast provides some visual interest over the usual solid red or solid yellow that's often depicted in exhibits and on scale models.
Wear masks: Sometimes basic edge masks are too uniform. Use [tiling] overlays (chipped paint, random scratches, etc.) to add some visual interest and break up the shapes in the masks. Carefully select damage overlays that match the wear patterns in the reference images. Adjust the contrast, intensity, projection, rotation, and scale of the overlays to fit the visual narrative of the components.
Here's what the combined masks and damage overlays look like for the wear pass. Small circles were manually painted over areas that would be worn down by fastener rotation during field maintenance. An edge mask was used to reveal the base materials in areas that would be damaged by regular contact with the environment and tools. E.g. The rim of the steel road wheel would be exposed to abrasive material like dirt, gravel, large rocks, etc. Corners of fasteners and protruding parts of the hub assembly would contact repair equipment.
These basic wear masks were then refined with texture overlays. Contact damage tends to accumulate along exposed edges and produces wear patterns with sharp borders around large chips in the coating. Recessed surfaces tend to be better protected from impact damage but can trap abrasive material that creates softer and smaller scratches over a wide area.
Here's what the progression looks like for the wear pass. A subtle overlay lightened the color values in areas faded by exposure to the sun. Chipped edges and scratched surfaces were used to create points of visual interest, without overpowering the narrative. Surface cracks and dents were added to the normal channel to provide some contrasting wear patterns.
The wear pass provides a great opportunity to create interesting details but it's also important to avoid over wearing the surfaces. Since these wheels are supposed to be fairly new, it's unlikely that freshly exposed metal would have heavy rust or that the fresh paint would be flaking off. Surface damage on the rubber material is mostly tears, caused by rocks in the tracks, but there's also some subtle cracking that hints at the substandard quality of late war production.
Subdivision sketch: placing intersecting cylinders on existing edges.
Avoid deforming the edges or vertices that make up the walls of either cylinder. Use the space between the intersecting shape and the outer support loop around it to connect the two shapes without causing unintended surface deformation. Merge or dissolve any stray geometry back into the nearest vertices that are in line with the existing geometry.
The same segment matching and topology routing strategy also works with multiple cylinder intersections, where each circumference lands on or close to an existing edge in the larger cylinder. In the example below the largest intersecting cylinder is the same diameter as the base cylinder so there's no need to adjust the segment counts on that one.
Subdivision sketch: cylinder with chamfered slot.
Subdivision sketch: slide release.
This is another brief look at using an iterative block out process and modifier based subdivision workflow.
Study the reference images and figure out how the shapes fit together. Establish the rough proportions then come up with an order of operations for developing each important detail. Keep things relatively simple. Focus on accurately representing the shapes and use segment matching to reduce the amount of clean up required. Let the defining edges of the shapes guide the topology flow.