
Sketchbook: Frank Polygon


  • FrankPolygon grand marshal polycounter

    Texturing breakdown

    This write-up is a brief overview of the processes used to create PBR textures for a hard surface model with subtle weathering effects. The documentation for most of the popular texture authoring tools does a good job of covering the technical side of things. So the majority of the focus here will be the decision making process that drives the artistic components of texture creation.

    All that said, there is one technical thing worth mentioning: The quality of the baked input textures does tend to have an impact on the overall quality of the output textures and the texture authoring experience. Which is why it's generally advisable to identify and resolve any significant issues with the bake setup before moving forward with the texturing.

    Check that the UV layout has appropriate padding and texel density. Make sure that any hard edges in the low poly model are paired with UV splits. Try to minimize UV distortion and straighten the UV islands whenever possible. Run a few test bakes, then adjust the high poly and low poly models to resolve any major artifacts.
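
As a rough sanity check, the texel density and padding numbers can be estimated before committing to a layout. The sketch below is a minimal, tool-agnostic example; the function names and the 2048 / mip-level figures are illustrative assumptions, not values taken from any particular baking suite.

```python
# Rough texel density / padding estimate for a UV layout.
# All names and numbers are illustrative, not any tool's API.

def texel_density(island_uv_area, island_world_area, texture_size):
    """Texels per world unit for one UV island.

    island_uv_area is in normalized 0-1 UV space,
    island_world_area is in scene units squared."""
    return ((island_uv_area * texture_size ** 2) / island_world_area) ** 0.5

def min_padding(mip_levels=3):
    """Padding (in mip-0 pixels) needed so islands stay separated
    down to the given mip level: each mip halves the resolution,
    so the border must cover 2 ** mip_levels pixels at full size."""
    return 2 ** mip_levels

# Example: an island using 2% of a 2048 map to cover 0.5 units^2.
density = texel_density(0.02, 0.5, 2048)  # ~409.6 texels per unit
pad = min_padding(3)                      # 8 px of padding
```

The same check, repeated per island, is a quick way to spot islands that are badly over- or under-scaled relative to the rest of the layout.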

    Try to keep things as orderly as possible. Take the time to set up a well-organized material ID map. Document the baking setup and note when any unconventional shortcuts are used to meet deadlines. This way, if the work is handed off to someone else or if there are future revisions, there won't be a bunch of time wasted trying to guess, remember or reverse engineer whatever process was used to generate the input textures.

    Smart materials and masks tend to rely on the input textures for information about the model's surface. So having clean bakes for the normal, ambient occlusion and curvature maps can help make the texturing process a bit easier.

    Blocking out the lighting and material values is a good place to start most texturing projects. Look at the reference images and try to create several different lighting setups that match the typical environmental conditions. It's often helpful to have a dark, neutral and bright environment setup. Frequently rotate the camera and lighting setup to check the material values from different angles. This makes it easier to see how the base materials will perform under various in-game lighting conditions.

    The material block out is all about establishing the base values for the diffuse, roughness and metalness channels. It's important to work through the material block out before adding any weathering overlays. This is because it tends to be a lot easier to nail the look of clean materials that haven't been obscured by additional layers of environmental build up and damage effects.

    Visual identification of specific materials is usually filtered through some kind of evaluative hierarchy: shape, color, reflectivity, physical [damage] state, surface texture, etc. These sorts of subtle details tend to be highly contextual and help convey information about what the underlying material is supposed to be made of. Break down complex materials into individual layers and work through the inputs for each of the texture channels.

    Ask and answer questions about the physical properties of the object, using visual cues in the reference images to fill in the answers. What is it? A compass. What color is it? Green / olive drab. What's it made out of? Painted metal. What type of paint is generally used? Single-stage semi-gloss. What condition is it in? Slightly worn. What environmental clues are present? Sandy dust, light brown dirt and grease smudges on flat surfaces.

    Additional surface details that weren't part of the high poly model can also be added during the material block out. Things like paint build up around the edges of shapes and fine micro surface textures are fairly easy to add using smart masks and triplanar texture projection. How well the chosen input values and texture overlays represent the actual materials (Metal, plastic, paint, etc.) is an important part of creating convincing textures.

    Working through the material block out then adding the major wear states can make it easier to figure out exactly where to place the dirt and grime layers. There's a maxim in the scale modeling community that's something along the lines of "Well sculpted models practically paint themselves." The same sort of logic applies here. Physical surface texture and damage tends to accumulate loose environmental matter over time. So it can make sense to start with the damage layers then add the weathering on top of them.

    Effective use of texture details and weathering effects will help an object have greater visual impact and can also be used as storytelling elements. Damage effects can be used to indicate whether something is broken or about to break. Visible repairs can be used to indicate how reliable something is. Dirt, grime and wear marks can be used to indicate how old something is, how much it's been used or how it should be used.

    Gather additional reference images of the same object or similar objects in a variety of different wear states. Study the references and note how each type of physical material tends to have its own unique accumulation and wear patterns. Categorize the types of damage and weathering effects, based on attributes like shape or location, then come up with a rough plan for where they should appear on the model.

    Smart materials and masks offer a tempting shortcut for adding all of these details with just a couple of layers, but the results often look overly flat or muddy. Most of the time this is because there isn't enough separation between the individual effects and it's difficult to control how finely the individual details blend together. That's not to say things can't be done efficiently; there just needs to be enough subtle variation between the layers to create the illusion of material building up over time.

    Any damage that breaks through a coating should generally be on its own layer. These layers can be stacked on top of each other and the width of the masks can be adjusted so they reveal subtle steps in height between the paint, primer and bare metal. Putting each type of unique damage effect into its own layer not only makes it easier to adjust each element independently but also helps simulate the built up appearance that tends to develop over time.
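
The layer-stacking idea can be reduced to a toy sketch: one grayscale wear value per texel, thresholded into paint / primer / bare metal. The thresholds play the role of the adjustable mask widths described above, and the specific numbers are made up purely for illustration.

```python
# Toy model of stacked damage masks: a single 0-1 "wear amount" per
# texel is thresholded into the exposed surface layer. Widening one
# mask (lowering its threshold) reveals more of the layer beneath it
# without touching the other layers' settings. Thresholds are
# illustrative assumptions, not values from any texturing tool.

def classify_wear(wear, primer_at=0.4, metal_at=0.7):
    """Return which surface layer is exposed for a 0-1 wear value."""
    if wear >= metal_at:
        return "bare_metal"
    if wear >= primer_at:
        return "primer"
    return "paint"

# Light, medium and heavy wear expose successively deeper layers.
surface = [classify_wear(w) for w in (0.1, 0.5, 0.9)]
# surface -> ["paint", "primer", "bare_metal"]
```

Because each threshold is independent, each "layer" stays individually adjustable, which is the same property the separate mask layers provide in a texturing tool.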

    Start with the clean base materials. Add wear through damage that reflects how the object contacts or interacts with the environment and any moving parts. Once the primary damage pass is complete, the weathering layers can be added on top of the existing materials.

    These weathering layers can be made up of wet or dry materials like: oil, grease, paint, water, dust, dirt, lint, sawdust, metal shavings, environmental particulates, etc. It can make sense to start with a subtle layer of semi-wet materials, like accumulated grease, since this will help inform where the next set of dry layers should stick to the model.

    Wet material layers can also be used to suggest how other entities interact with the object and communicate their relative scale. Under harsh use or certain environmental conditions there may not be any clear shape signatures like fingerprints. This is because grease and oils usually mix with environmental elements like dirt and are easily disturbed by contact with things like cloth, plant material, plastic coverings, water immersion, heat sources, etc.

    It's important to have subtle variations in the roughness values. When the object is viewed from a distance the individual materials should have a relatively uniform appearance but when viewed up close or under specific cross lighting conditions the other materials present in the weathering overlays should become more apparent.
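
A minimal sketch of what "subtle variation" can mean numerically, assuming a simple linear blend between a base roughness and a weathering overlay. The values and the strength factor are illustrative assumptions, not settings from any specific material system.

```python
# Subtle roughness variation as a linear blend: the strength factor
# keeps the overlay barely visible at a distance, while dense areas
# of the weathering mask still read up close or under cross lighting.
# All numbers are illustrative.

def blend_roughness(base, overlay, mask, strength=0.15):
    """Blend an overlay roughness into the base, scaled by a 0-1
    mask value and a global strength that keeps the shift subtle."""
    return base + (overlay - base) * mask * strength

r_clean = blend_roughness(0.45, 0.8, 0.0)  # untouched base: 0.45
r_worn = blend_roughness(0.45, 0.8, 1.0)   # fully masked: only 0.5025
```

Even at full mask coverage the shift is small, which matches the goal of a uniform read from a distance with extra detail appearing only under favorable lighting.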

    After the first wet weathering pass is added, it makes sense to add a few broad layers of dry weathering materials, preferably in contrasting colors, with a darker base and a lighter accent layer on top. The base layer of dirt is usually wider than the accent layer. Each of these contrasting dirt layers should generally target different areas of the model to provide some visual interest, but there should also be some areas where they overlap to maintain chronological continuity.

    Another wet weathering pass provides some additional visual interest and should generally be a more fluid material like oil. The glossiness of this second wet layer can be moderated with another subtle dry pass and accented with clumps of dirt, lint, fragmented plant matter, etc. These final dry layers are mostly used to break up the larger patches of accumulated weathering materials and provide a visual break or bridge between the various weathering layers.

    Layers of dust, dirt and other particles can be used to hint at the presence of certain environmental conditions in the game world. Multiple shades of dirt with different grain compositions can be used to suggest that the object has been to multiple geographic locations. Strongly emphasizing just one type of dirt is a good way of communicating the idea that the object has only been to one location or that it's been stored for a long period of time.

    Dirt and dust tend to accumulate in corners, behind shallow ridges and on or near wet areas. Each type of material used in the dry weathering layers will tend to behave differently. Three basic shades of dust [dark, neutral and light] and a few variations of dirt particles should be enough for most basic projects.

    In this example, the subtle dust layer is a darker color and it accumulates on the flatter surfaces. The heavy dust layer has a slightly brighter color and collects around the tighter corners. These contrasting dust layers blend together around transitional areas and certain flat spots that are partially cleaned by regular use. On top of this first set of dry weathering layers is another wet layer with oily smudges and some partial fingerprints. This suggests that the object is used often in an outdoor environment.

    A very thin layer of fine dust gets caught behind the subtle ridge of paint around the edges of the shapes. This thin layer of dust has a slightly lighter color than the heavy layer of dust and just provides some visual contrast that helps outline the shapes. It's also useful for breaking up the monotonous tones of the first dry weathering pass.

    The smaller void shapes are partially filled with a crumbly looking dirt that's lighter than the rest of the dry weathering layers. This contrast helps draw attention to these areas and also suggests that it's recently been to a place that's completely different from the previous environment. A top layer with small dust speckles helps cover the areas between the dirt in the corners and the patches of dust that accumulate on the flat surfaces. This helps unify the dry weathering layers and the size of the particles also provides a scale reference.

    When creating multiple assets for the same game world, it can be helpful to develop specific rules for the weathering layers so everything is fairly consistent. On this project the older layers were darker and the newer layers were brighter. There's also a visible difference in the brightness and glossiness (sharpness of the reflections) on the different wet layers. Which helps suggest the age of the materials, since the more volatile components have evaporated off with passing time.

    It's also important to test the combined weathering passes under various lighting conditions to make sure everything will behave correctly in-engine. This will help ensure that everything has a cohesive look and that none of the individual layers have material values or texture patterns that break the immersion. Most of the subtle micro texture variations in the normal and roughness channels should be barely visible under regular conditions. Only under the right lighting, when the camera or environment lights hit at glancing angles, should some of these details become visible.

    Diffuse colors that are relatively close together can make it difficult to spot some of the finer details. As the lighting rolls across the surface it will pick up on the different values in the normal and roughness channels. Which can help enhance or suppress the visibility of certain details. This can be used to create a layered look where only certain weathering elements will be visible at any given time. Which can help harmonize the different layers so they blend together in a cohesive way that doesn't appear too stylized or artificially flattened.

    Under more diffuse lighting conditions, where everything is evenly lit, the differences in the roughness values should be fairly subtle. Equipment that looks like it's been dragged to hell and back is its own aesthetic category but reality is often a bit more muted. Especially when it comes to precision instruments or something that needs to be properly maintained so it will function reliably. The weathering cues should match the game world's back story and the wear level should match the age of the object and its intended use. Just because they're storytelling elements doesn't mean they are the entire story.

    Previous lighting breakdown: https://polycount.com/discussion/comment/2764276/#Comment_2764276

  • LaurentiuN interpolator

    Amazing breakdown man, your modeling/baking/texturing is top notch!

  • Alex_J grand marshal polycounter

    Fantastic breakdown. Now that's "paying close attention to detail."

  • FrankPolygon grand marshal polycounter

    @LaurentiuN Thanks! Usually find that texturing isn't as fun as modeling but having clean bakes does make it a lot more enjoyable.

    @Alex_J Thank you. Always trying to push quality and efficiency. Ironically some of the write ups take longer than the production work.


    Currently working through a very brief overview of the modeling, unwrapping and baking for this project. After that the intent is to work through posting the backlog of modeling write-ups from mentoring and some other thoughts about texturing and game art in general.

  • RRRabbit

    Thank you so much!!! Your article and cases helped me a lot, wish you a happy life!

  • FrankPolygon grand marshal polycounter

    Modeling breakdown

    This write-up is a summary of the different modeling processes and the reasons they were used to create specific parts of the models. The amount of planning and research required varies based on the complexity of the project. Creating a rough outline of the intended workflow can help evaluate how technical constraints may affect the individual processes. Thinking about how the asset will be used in-game, along with the art style and technical constraints, will help determine how accurate the model and textures need to be.

    Objectives for this project are relatively straightforward: Since the asset will be used in a simulation, the model and textures will need to look fairly realistic. It will also fill the screen when the player is reading the dial and sighting landmarks, but the model and textures will need to be optimized to fit within the allotted resource footprint. The ideal poly budget is between 10,000-12,000 triangles with a maximum of two materials. One set of textures for transparent materials and the other for opaque materials. Dimensional accuracy is also important because the linear graduation marks on the side of the base can be used to measure distance on in-game maps.

    The accuracy of a model's details tends to be limited by the quality of the reference material. Physical access, scan data and detailed drawings are ideal reference sources for real objects. These types of references can be measured directly and that makes it a lot easier to establish real world scale. Which helps create a model with accurate proportions. Photographs and illustrations can also be used as references but require a bit more effort to use effectively. Collect, compare, cross check and cull the reference images until there's a set that accurately represents the subject.

    Reference images of key angles (Top, front, side, etc.) can be aligned and overlapped to create a reference sheet. The object's overall dimensions can then be used to estimate the size of key features on the reference sheet. Following the reference sheet while also using these dimensions and incremental snapping (In real world units.) should produce a model that's built close to scale with fairly accurate proportions.
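
The dimension-to-pixels estimate can be sketched in a few lines: one known overall dimension fixes the scale of the reference sheet, and every other feature is then measured against that scale. The compass diameter and pixel counts below are made-up example numbers, not measurements from this project.

```python
# Estimating a pixels-per-millimeter scale for a reference sheet
# from one known overall dimension, then sizing a smaller feature.
# All measurements here are illustrative examples.

def scale_from_known(known_size_mm, measured_px):
    """Pixels per millimeter implied by one known dimension."""
    return measured_px / known_size_mm

def feature_size_mm(feature_px, px_per_mm):
    """Convert an on-sheet pixel measurement back to real units."""
    return feature_px / px_per_mm

# e.g. a 56 mm body measures 1120 px across on the reference sheet.
px_per_mm = scale_from_known(56.0, 1120)   # 20 px per mm
dial_mm = feature_size_mm(400, px_per_mm)  # a 400 px feature is 20 mm
```

With the derived sizes in hand, incremental snapping in real-world units does the rest of the work of keeping the block out close to scale.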

    Obscure topics tend to be more difficult to research but there's usually a lot of usable reference material for most objects. It's often helpful to start the search with broad terms then refine the keywords as the terminology used to describe the components becomes more familiar. Collector or enthusiast communities can be great resources for reference material and specialized information. Locating useful reference sources just takes a bit of effort and persistent searching.

    A few quick tests with primitive shapes can be used to evaluate the optimal number of segments for the low poly model and verify the dimensional accuracy of key features. Small scale samples like this can be used to identify potential issues, isolate specific variables and test different solutions. The simplicity of these test models also makes it a lot easier to resolve proportion and topology issues with the major forms, without having to invest a significant amount of time.
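
One of these quick tests can be done with arithmetic alone: the maximum deviation between a circle of radius r and the chords of an inscribed n-gon is r * (1 - cos(pi / n)), so the smallest segment count that keeps the silhouette error under a chosen tolerance can be computed directly. The radius and tolerance below are illustrative example values.

```python
import math

# Picking a segment count for a curved low poly feature: find the
# smallest n whose chord deviation (sagitta) stays under a tolerance.
# The 30 mm radius and 0.1 mm tolerance are illustrative examples.

def max_deviation(radius, segments):
    """Largest gap between the true circle and an n-gon chord."""
    return radius * (1 - math.cos(math.pi / segments))

def min_segments(radius, tolerance):
    """Smallest segment count keeping the deviation under tolerance."""
    n = 3
    while max_deviation(radius, n) > tolerance:
        n += 1
    return n

# A 30 mm radius dial seen up close, 0.1 mm allowable silhouette error:
n = min_segments(30.0, 0.1)  # 39 segments
```

Running the same check with the expected on-screen size in place of the tolerance gives a quick, repeatable answer instead of eyeballing test bakes alone.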

    The initial block out just establishes the overall scale and proportions of the basic shapes. These shapes are then used as a base for the detailed block out. Individual features are added to the model in a series of iterative detail passes. Boolean operations are used to join the individual mesh components and modifiers are used to generate precise curves, bevels and copies of repetitive shapes. This helps keep the individual mesh components fairly simple. Which makes it a lot easier to make major changes without having to manually re-work large sections of the model.

    Both halves of the model were blocked out at the same time. It's just easier to show progress on each half separately.

    The first couple of detail passes are focused solely on blocking out the larger components. Smaller details will be added in subsequent passes. Deciding which details get added to a specific stage of the block out comes down to two basic factors: context and consistency. Context means looking at decisions in relative terms that relate to things like scale, in-game usage, workflow, etc. Consistency means looking at the overall goals for the project and previous decisions then applying the same logic in a uniform way.

    E.g. adding all of the text to the base mesh would easily push the model beyond the resource budget for the low poly, but adding the linear graduation marks may be necessary to increase the accuracy of the map readings. Especially for users with lower end systems that can't display textures at full resolution. This is where using modifiers to generate certain features provides a lot of flexibility that hedges against future change requests.

    Updating complex details or toggling them on and off with the modifier stack means that both the high poly and low poly models can be generated from the same base mesh. Which can help streamline a lot of the high poly to low poly workflow and allows for a more pragmatic approach towards high poly modeling. This block out strategy is generally compatible with both subdivision modeling and boolean re-meshing workflows.

    E.g. there's a lot of text and repetitive surface details on the larger components and it's easier to run those parts through a boolean re-meshing and polishing workflow in ZBrush. The rest of the smaller components are fairly simple shapes. So, it makes more sense to use a subdivision modeling workflow with a mix of floating and solid geometry to create those parts.

    It's also important to keep things organized. Logical and consistent naming conventions will make it easier to locate files. Notes with the settings used for different modeling operations, like the various re-meshing and polishing steps, will make it easier to update the model after making changes. General process notes will also be helpful for handing off project files or when revising a project that needs adjustments 6-12 months down the road.

    Below is a visual outline of the basic workflow for this asset and a brief description of the modeling techniques used during each stage of the process.

    • Block out: basic poly modeling with numerous booleans and modifiers.
    • Base model: poly modeling with modifiers for edge bevels and round overs.
    • High poly model: re-meshing with edge polishing pass and basic subdivision with modifiers.
    • Low poly model: basic poly modeling.


    Certain parts of this workflow could have been simplified further. Mainly the high poly re-meshing and polishing but trading a small increase in speed just isn't worth the reduced flexibility. The primary advantage of this workflow is only having to build the base mesh once and being able to use that one mesh to generate both the high poly and low poly model. Which makes it a lot easier to push any last minute changes through without having to do a lot of re-work.

    Low poly optimization is pretty straightforward: turn off the modifiers used to generate the edge and surface details. Set up the mesh smoothing groups then unwrap and pack the UVs. Adjust the split placement and re-pack as needed to maximize the quality. Run several test bakes before accepting a layout.

    Hard edges [edge splits] are placed to control the smoothing behavior on the low poly. If the mesh shading and normal bakes look good then it's generally acceptable to have larger smoothing groups and some gradation in the normal map. Each new UV island increases the amount of texture space lost to padding. So, unnecessary hard edges effectively reduce the overall texel density of a model.
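
The padding cost of over-splitting can be approximated with a back-of-the-envelope calculation. The sketch below assumes equal square islands, which real layouts never are, so treat it as a rough illustration with made-up numbers rather than a packing formula.

```python
# Rough cost of extra UV islands: every island carries a padding
# border, so splitting adds perimeter and loses texture area.
# The equal-square-island assumption and numbers are illustrative.

def usable_fraction(islands, texture_size=2048, padding=8):
    """Approximate usable texture fraction when the map is split into
    `islands` equal square islands, each shrunk by a padding border."""
    side = texture_size / islands ** 0.5      # side of each island
    usable = max(side - 2 * padding, 0) ** 2  # area left inside border
    return islands * usable / texture_size ** 2

few = usable_fraction(16)    # a handful of large islands: ~94% usable
many = usable_fraction(256)  # heavily split layout: ~77% usable
```

Even this crude model shows why merging smoothing groups and avoiding unnecessary hard edges pays off directly in texel density.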

    UV splits can be placed at the same time to match the hard edges and control the UV unwrap. Hiding splits and seams along the natural breaks in the shapes can help reduce the visibility of any minor artifacts that may appear along the split edges. It may also be necessary to adjust the scale of the UV islands, based on their distance from the player's camera, to optimize the texel density for the mesh's size on screen and visibility. The goal here is to use an appropriate number of seams, placed in a way that balances straightening the UVs with minimizing any noticeable distortion and limiting the total number of UV islands.

    In the end, adding the linear graduation marks and lettering with texture masks would have simplified the high poly process. The base model could have just been left blank and used to create a subdivision high poly where the edge loops are generated with a bevel / chamfer modifier. Once again, the tradeoff in time saved just wasn't worth the headache of solving potential accuracy issues with the linear graduation marks and this would have also made it extremely difficult to pivot to a fully modeled low poly if the project required it.

    Alternately: a full sculpt pass could have been done on the high poly model but much of the damage was fairly shallow. So it made more sense to just paint the texture masks and use the height values to generate the changes in the normal maps. This also made it easier to gradually reveal the different layers in the materials and turn specific types of damage on or off based on the different wear states. Much faster than manually sculpting these details.

    Some additional write-ups on related topics:

    Block outs and incremental progression.

    https://polycount.com/discussion/comment/2751240/#Comment_2751240

    Mesh complexity and shape accuracy

    https://polycount.com/discussion/comment/2751340/#Comment_2751340

    Isolating variables and iterating to solve complex problems.

    https://polycount.com/discussion/comment/2760744/#Comment_2760744

  • sacboi high dynamic range

    Per usual, knocked my socks off 😀

  • wirrexx ngon master

    Agree with Sacboi, you are just *bows down*

  • FrankPolygon grand marshal polycounter

    @RRRabbit Thanks for the comment! Glad the write-ups were helpful.

    @sacboi Thanks! It was a bit shorter and took longer than expected but happy to hear it was on par with the previous write-ups.

    @wirrexx Thank you, really appreciate your support and contributions to the how do I model this thread.


    Upcoming content: Next few posts will be a mix of visual process breakdowns for hard surface shapes, along with some additional thoughts on other art processes.

  • KebabEmperor polycounter lvl 3

    frankpolygon is a higher being, we can just sit back and admire his work.

  • FrankPolygon grand marshal polycounter

    Subdivision sketch: turbocharger compressor housing.

    This write-up is a brief process overview that shows how segment matching, during the initial block out, can make blending complex shapes together a lot easier. The order of operations and choice of modeling tools may be different for each shape but the important thing is to try and match the geometry of the intersecting meshes. Getting everything to line up is mostly about creating curved surfaces with a consistent number of edge loops that are shared between the adjacent shapes. Solving these types of topology flow issues early on in the modeling process is one of the keys to efficient subdivision modeling.

    For this example, it made the most sense to start with the largest shape first, because it was the most complex. Since curves can be used to generate procedural geometry, it's a fairly straightforward process to adjust the density of the circular cross section, number of segments along the path and taper the end of the spiral. This flattened helical shape was created by splitting a Bézier circle, extruding one side and adjusting the curve's geometry settings.

    The center of the housing was created by outlining the shape's profile then using a modifier to sweep it the same number of segments as the adjacent shape.
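
The "same number of segments" rule can be sketched as a tiny helper: derive a segment count from the larger shape's radius and a target edge length, then reuse that count for every adjacent cross section so the loops line up at the intersections. The radii and target edge length below are illustrative numbers, not measurements from this model.

```python
import math

# Segment matching sketch: pick one segment count that keeps edge
# length roughly consistent across adjacent circular cross sections,
# so their loops line up when the shapes are merged with booleans.
# Radii and the target edge length are illustrative examples.

def segments_for(radius, target_edge):
    """Even segment count giving edges close to target_edge length."""
    n = max(8, round(2 * math.pi * radius / target_edge))
    return n + (n % 2)  # keep it even so loops pair up symmetrically

def shared_segments(radius_a, radius_b, target_edge):
    """Drive both shapes from the larger radius so the bigger curve
    is never under-segmented, then reuse that count everywhere."""
    return segments_for(max(radius_a, radius_b), target_edge)

# e.g. a 40 mm scroll meeting a 25 mm hub, ~8 mm target edges:
n = shared_segments(40.0, 25.0, 8.0)  # build both shapes with n
```

Because both procedural shapes read their segment count from the same value, revising the density later stays a one-number change instead of a manual retopology pass.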

    Details like the outlet flange can also be sketched flat then extruded and rounded over with modifiers.

    Smaller surface details are added towards the end of the block out. Since the base shapes are still procedural geometry, it's fairly easy to adjust the number of segments in the larger shapes so all of the circular bosses are supported by adjacent geometry.

    Once the block out is completed the shapes can be merged with boolean operations. Any stray geometry can be removed or blended into the existing shapes with operations like limited dissolve, merge by distance, snap merge, etc. It may also be necessary to make room for the support loops by moving some of the vertices along the surface of the shapes. However, most of the topology flow issues should resolve cleanly because of the segment matching.

    After the base mesh is completed, a chamfer modifier can be used to generate the support loops around the edges that define the shapes. (Highlighted in the example below.) Using a modifier to add the support loops isn't strictly necessary but it helps preserve the simplicity of the base mesh. Which makes it a lot easier to adjust the shapes and sharpness of the edges when the subdivision preview is applied.

    Below is what the final base mesh looks like, along with a couple of mesh previews with the chamfer and subdivision modifiers active.

    While it is possible to manually create these shapes and try to plan out all of the segment counts ahead of time, using procedural geometry generated by curves and modifiers makes the process a lot easier. It's also important to try and solve most of the topology flow issues at the lowest level possible. This will help prevent a lot of unnecessary work whenever the mesh density has to be increased to support smaller details.

    Another thing to keep in mind is that efficient subdivision modeling is often about making tradeoffs that complement the model's intended use. The important thing is to try and balance accuracy and efficiency. In the context of high poly modeling for game assets, segment matching doesn't always have to be perfect. More often than not, close enough will be good enough.

    Recap:

    Evaluate the shapes in the references and figure out which part constrains the rest of the nearby surfaces. Establish the block out using a reasonable amount of geometry. Match the number of segments in the intersecting geometry to the adjacent shapes. Try to maintain consistent distribution of edge loops along curved surfaces. Rely on specific tools and modifiers that can generate accurate geometry to make things easier. Solve major topology issues before subdividing and adding smaller details.

  • tatertots

    Thank you for another awesome write-up, Frank. So informative, and it's really great to be reminded to use modifiers and procedural geometry as much as possible to make the work easier. Thanks for that! You inspire me to do some more hard-surface modelling. Soon as my bad monster-sculpt is finished xD

  • FrankPolygon grand marshal polycounter

    @KebabEmperor Thanks for the kind words. Just comes down to making a lot of sample models and sharing the results.

    @tatertots Thank you and you're welcome! Glad to hear that these posts encourage artists to continue exploring subdivision hard surface modeling.

  • FrankPolygon grand marshal polycounter

    Subdivision block out: stamped dust cover.

    This is a quick look at how multiple bevel / chamfer modifiers can be used to generate relatively complex shape intersections from a simple block out. Basic modeling operations are used to create the rough block out, and leaving the mesh as a quad grid makes it a lot easier to visualize different topology routing options.

    All of the extraneous geometry is then cleaned up with a limited dissolve operation. While this isn't strictly necessary, it does improve the visual cleanliness of the mesh. If the final base mesh is going straight through to a subdivision modeling workflow then it probably makes sense to just leave the quad grid in place.

    Each set of curved features has its own bevel modifier that's controlled by separate vertex groups. This means that changing the shape profile or number of segments in each curved surface is as easy as going into the modifier panel and adjusting the settings. Since the geometry generated by these modifiers remains editable, it's possible to adjust the density of the mesh throughout the entire block out stage. Which makes it a lot easier to do segment matching as shapes are added or revised.

    After blocking out the basic proportions and primary shapes, rounded edges can be added with another bevel modifier. Edge weights are used to control both the placement and width of the rounded edges. Adjusting the smoothness of the shapes is as easy as changing the number of segments generated by the bevel modifier. Which makes it really easy to preview low poly optimizations.

    Once the appropriate level of mesh density is established, any stray geometry can generally be removed with either weld or planar decimation modifiers and the rest of the n-gons in the mesh can be filled with a triangulation modifier. Parts of the mesh that are larger or closer to the player's camera will tend to need slightly more geometry than shapes that are smaller or further away. In certain situations it may also make sense to do some final manual cleanup with edge dissolve operations.

The same sort of approach can be used to generate a high poly from the block out, but the base mesh can be a lot simpler, since that makes it a lot easier to solve any topology flow issues and the subdivision modifier will smooth out the entire surface of the mesh. Constrained cut and join-through operations can be a quick way to organize any trouble spots in the base mesh's topology flow, and final support loops can generally be added with another bevel modifier.


  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision sketch: radial selector switch.

    This write-up is a brief overview of how resolving topology issues, before moving on to the next level of mesh complexity, can help make subdivision modeling processes a lot easier to manage.

It often makes sense to start the block out by establishing the larger shapes and curved surfaces, but it's also important to include any complex shape intersections that could constrain the nearby geometry when planning out the order of operations and topology flow. Basic modeling operations like [constrained] extrude, inset and bevel / chamfer can be used to generate accurate angles and curves that preserve the underlying edge structure.

Surface shapes, especially along curved areas, can be defined by adding loop cuts that follow the existing topology flow. Constrained cut operations can also be used to slice through the existing geometry. This is an easy way to connect nearby loops in a way that defines the straight edges of the surface features. Extraneous edge loops generated by these operations can be left in place to preserve the shapes for additional modeling operations, but it is important to remove leftover geometry as soon as it's no longer needed. After outlining the surface shapes, connect and dissolve operations can be a quick way to organize edge loops along a surface.

    Extruding edges into existing shapes sometimes requires cutting holes in the surface of the mesh. Bridge and fill operations can be a quick way to close these holes but mismatched edge segments can leave behind triangles and n-gons. Both can be left in to make additional modeling operations easier or when they have minimal impact on surface quality.

    If it's necessary to resolve the mesh as all quads, it can make sense to work through adding secondary shape details that can be used to even out the segment count of adjacent loops. Focus on creating accurate shapes first then adjust the number of segments in secondary details, to resolve the topology flow issues at the lowest possible level, before complicating the mesh by adding lots of support loops.

    After the major surface features are established, it should be fairly straightforward to continue adding smaller details. Keeping the edge flow relatively tidy after adding each shape and breaking the mesh up into tileable sections will also make it a lot easier to add the final support loops.

Boolean operations and other modifiers can be used to speed up the block out process, but the same fundamental principles of subdivision modeling still apply. The increased flexibility of these tools makes it a lot easier to adjust each shape independently and experiment with different segment counts to find complementary geometry, without having to rely on lucky guesses or manually rework the same shape multiple times.

    In this example: the basic profile is created from a flat outline and a modifier is used to sweep it around the central axis of the shape. Another modifier is used to generate the outer round over. The curves of the subtractive boolean object are also created with modifiers. This setup makes it incredibly easy to adjust the number of segments in the features of the primary shape.

Once all of the segments in the shapes are roughly aligned, the modifiers can be applied and the mesh can be cleaned up with a merge by distance operation. Multiple mirror modifiers make it possible to split the base mesh into the smallest possible tiling segment, which makes it a lot easier to work on the topology routing. Additional edge loops are cut in to provide structure and resolve the mesh to quads, then the support loops are added around the shapes using a bevel / chamfer modifier. So adjusting the softness of the edges is as easy as changing the modifier's settings.
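For anyone curious what a merge by distance operation is conceptually doing, here's a minimal, hypothetical sketch (a quadratic brute-force weld, fine for small vertex lists, and not how Blender actually implements it):

```python
def merge_by_distance(vertices, threshold):
    """Weld vertices that lie within `threshold` of an already-kept vertex,
    mimicking the behavior of a Merge by Distance operation. Returns the
    kept vertices and a map from each original index to its surviving index."""
    kept = []      # surviving vertex positions
    remap = []     # original index -> index into `kept`
    t2 = threshold * threshold
    for v in vertices:
        for i, k in enumerate(kept):
            if sum((a - b) ** 2 for a, b in zip(v, k)) <= t2:
                remap.append(i)   # close enough: collapse onto the kept vertex
                break
        else:
            remap.append(len(kept))
            kept.append(v)
    return kept, remap

# Two mirrored halves meeting at x = 0: the near-duplicate seam vertex welds.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1e-7, 0.0, 0.0)]
kept, remap = merge_by_distance(verts, 1e-5)
print(len(kept), remap)   # → 2 [0, 1, 0]
```

This is why the threshold matters after applying mirror modifiers: too small and the seam stays split, too large and intentionally close details collapse.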

    Final surface details can be added using grid fill, proportional editing and basic inset operations.


  • wirrexx
    Online / Send Message
    wirrexx ngon master

Frank, as I'm getting more accustomed to Blender, would you mind showing the second process in a video, if you have the time? One of the things I have a hard time with in Blender is trying to match segments, since Blender breaks the ability to adjust segments once an item has been moved.

  • KebabEmperor
    Offline / Send Message
    KebabEmperor polycounter lvl 3

That's why you use modifiers like Screw, Bevel and Radial Array. Modifiers are non-destructive; you can adjust values anytime you want, and it makes trial and error easier.

    With experience, you will get to know how many segments you need for details depending on their scale, visibility, etc.

  • wirrexx
    Online / Send Message
    wirrexx ngon master

That's not the issue for me; I come from a 3ds Max background. Adjusting a primitive while rotating and moving it is very easy there. In Blender I have to use modifiers that need to be on the right axis from the get-go to actually make that work. Rotating them sometimes changes their shape, and that is something I am not used to.

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    @wirrexx Currently putting together a brief video of the modifier based process and will follow up by posting it here.

Most of the modifiers in Blender inherit a transform orientation that's based on the object's stored rotation. Managing this, along with object origin points, does add another layer of complexity but also provides flexibility in how things can be set up. Transforms in object mode can be stored and reset but the same operations in edit mode cannot, since they are directly changing the mesh. So, it's often helpful to keep the object's transforms zeroed out to match the global values. Depending on the follow up questions this topic could become its own write-up or video.
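A small numeric illustration of the point above (hypothetical helper, not a Blender API call): a modifier's effective axis is the object's local axis pushed through the stored rotation, so a non-zero object-mode rotation changes what the modifier appears to do.

```python
import math

def rotate_z(v, angle):
    """Rotate a vector around the global Z axis by `angle` radians;
    a stand-in for an object's stored rotation."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

# A modifier that works along the object's local X axis:
local_x = (1.0, 0.0, 0.0)

# With zeroed rotation, local X matches global X.
print(rotate_z(local_x, 0.0))                                   # → (1.0, 0.0, 0.0)

# After rotating the object 90° in object mode (without applying the
# rotation), the modifier's effective axis is now global Y, which is why
# the result can appear to "change shape" when rotated.
print([round(c, 6) for c in rotate_z(local_x, math.pi / 2)])    # → [0.0, 1.0, 0.0]
```

Applying (or resetting) the object-mode rotation is what re-aligns the stored axes with the global ones.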

  • wirrexx
    Online / Send Message
    wirrexx ngon master

Really appreciate you going above and beyond for me! Thank you once again for taking care of the community.

  • sacboi
    Offline / Send Message
    sacboi high dynamic range

    "Most of the modifiers in Blender inherit a transform orientation that's based on the object's stored rotation. Managing this, along with object origin points, does add another layer of complexity but also provides flexibility in how things can be setup. Transforms in object mode can be stored and reset but the same operations in edit mode cannot, since they are directly changing the mesh. So, it's often helpful to keep the object's transforms zeroed out to match the global values. Depending on the follow up questions this topic could become it's own write-up or video."

Indeed, succinctly put. I struggled with this very issue for some time when first starting out, so I'm similarly curious about your approach.


    @wirrexx

    "really apprecaite you going over and beyond for me! Thank you once again for taking care of the community."

    +1

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    @wirrexx @sacboi Below is a link to the video showing the boolean block out process described in the write-up.

This is just one example; there are other ways to use the same tools to accomplish similar results. When to apply the modifiers really comes down to how the model will be used and whether or not there's a possibility of revisions. It's generally a good idea to save a copy of the working files before applying the modifiers and moving on to the next stage in the modeling process. It just comes down to finding an order of operations that works best for the individual artist and the project.

  • wirrexx
    Online / Send Message
    wirrexx ngon master

@FrankPolygon yeah I don't lurk here... just happened to turn the webpage on! :D Thank you

  • wirrexx
    Online / Send Message
    wirrexx ngon master

    ok just watched it. MIND F***ING BLOWING! thank you

  • Fabi_G
    Offline / Send Message
    Fabi_G insane polycounter

    Very nice video, thanks 👍️

  • sacboi
    Offline / Send Message
    sacboi high dynamic range

    @FrankPolygon

Awesome, appreciate the time and effort spent describing your workflow. The accompanying vid clarifies quite nicely the reasoning behind each operation when utilizing a non-destructive subd workflow. Certainly, in my humble opinion, an efficient and time-performant process that I'll look into applying to an upcoming personal project - thanks again for sharing.

  • RocketAlex
    Offline / Send Message
    RocketAlex polycounter lvl 6

I wish I'd spotted something like this thread long ago. It could be used as a modelling book lol. Awesome work. 😯

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    @wirrexx Not a problem. There's definitely some things Blender handles differently. Good to hear the video was helpful in visualizing the process.

    @Fabi_G Glad you liked it. Appreciate the support!

    @sacboi Thanks! There's so many interesting workflows for hard surface modeling now but it does seem like there's a significant efficiency bonus that comes from being able to do a lot of the work in a single application. It's also nice to have a process that's structured around creating a base model with features and surfaces that can be adjusted quickly. This kind of flexibility makes it a lot easier to send the model out to the more specialized applications when it's required.

    @RocketAlex Thank you. The Polycount community has a long running modeling thread that's definitely worth checking out.

  • tatertots

Awesome video, Frank! So good to actually see the process with the modifiers that you're using to sub-d model efficiently. It's one thing to know you should use modifiers, but another to actually see them being applied in a workflow and used to their fullest.

  • Oreazt
    Offline / Send Message
    Oreazt polycounter lvl 8

    If you're taking requests for videos, I've been pretty stumped trying to figure out how to do pic 5 and 6 of the pistol in this post.

    https://polycount.com/discussion/comment/2731641/#Comment_2731641

  • itaibachar
    Offline / Send Message
    itaibachar polycounter lvl 8

    This is awesome education!

I have a question regarding:

    Modeling process 2 (≈ 3-5 individual operations.)

    • Fill half cap - 2x.


    How do I fill the gap of these uneven edge loops?

    A 'Fill' command doesn't work.

    Thanks!

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    @tatertots Thanks! Appreciate hearing how the video was helpful. Figuring out when it makes sense to use modifiers, instead of directly editing the mesh, is definitely one of the challenges to incorporating them into an existing modeling workflow.

@Oreazt Sketching shape profiles with flat primitives is something that will be covered in at least one upcoming write-up and that will probably get its own video as well. Depending on how long it takes to make that content, I will look at making a shorter video covering the process used in that previous example and tag you when the video is available.

    @itaibachar Thank you. The exact process for creating faces between edges and closing non-manifold shapes varies based on the application. When using Blender there's a few different operations that can be used. Each behaves differently so it may be necessary to only select half of the edges for the tool to recognize the open area of the shapes.

    Additional documentation for these types of tools can be found in the manual:

    https://docs.blender.org/manual/en/latest/modeling/meshes/editing/face/fill.html

    https://docs.blender.org/manual/en/latest/modeling/meshes/editing/vertex/make_face_edge.html

    https://docs.blender.org/manual/en/latest/modeling/meshes/editing/edge/bridge_edge_loops.html

    https://docs.blender.org/manual/en/latest/modeling/meshes/editing/face/grid_fill.html

  • itaibachar
    Offline / Send Message
    itaibachar polycounter lvl 8

    Thank you for taking the time to reply and explain so much!

    I actually work in Maya, but trying to learn the concepts as you show them.

    Cheers!

    itai

  • Joopson
    Offline / Send Message
    Joopson quad damage


It ain't pretty, but in Maya, so far as I know, this may be the best way to fill it with the least fuss: connect the inside cylinder to the outside with one polygon using the Append to Polygon tool, then use Fill Hole to fill in the rest. Then you can delete the one extra edge you'll be left with.


    It's possible there's a better way, but opening Maya up to take a look, this is the first way that came to me as a long-time Maya user.

  • itaibachar
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision sketches: miscellaneous cylinder topology.

    These are a few examples of simple topology layouts that use segment matching to resolve shape intersections on curved surfaces. Only the key loop paths are shown, to simplify the presentation, and the n-gons could be resolved to all quad geometry using grid fill or manual loop cuts. Most of these shapes can be created with basic modeling operations. So this post is just a test for smaller pieces of content that don't require a lot of explanation. The next couple of posts will be the usual long form write-ups.

    Angled cylindrical boss intersecting both flat and curved surfaces.

    Toroidal section with intersecting cylinder.

    Chamfered cylinder with rounded boss.


  • slrove
    Offline / Send Message
    slrove polycounter lvl 5

    Hey Frank, just want to say your topology is always an inspiration. If I'm ever stuck on how something should flow, I am certain to find the answers in your posts!

In your last example, I actually had a similar case with the angled cylindrical boss intersecting flat and curved surfaces last year for a project; mine just had a few more extruding cylinders on the top. Looking at your topology, though, I see where I may have been able to improve the edge flow of each cylinder. Because this was a real-world object, based on the reference I did end up filling in more of the top with tris, since I couldn't quite make each top cylinder flow perfectly into the others, and there were time limitations. The project has shipped, but it doesn't hurt to touch up previous work for practice and portfolio.

If you had to model this, how might you approach making the main glass body? I had started with polygons, but after making the main connections I just wasn't retaining the smooth bevel of the main cylinder body, so I ended up building more of my intersecting cylinders and their connection bevels with NURBS surfaces and converting to polygons. It was a kind of roundabout and not ideal workflow, but it ended up working for the production.


  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    @slrove Thanks! Appreciate the kudos and detailed question.

    Subdivision sketch: angled cylinder intersections.

    Quad grid topology can be important for certain parts of a game art workflow but it's generally going to be acceptable to use some triangles and n-gons to streamline the modeling process. Most in-game assets will need to be triangulated before baking. So there's often little benefit to spending a significant amount of time creating a low poly mesh that's all quads.

    Some pure subdivision shapes will have surface quality issues that can be resolved with consistent quad geometry but the root cause of a lot of common subdivision artifacts is shape deformation that's directly related to the modeling process or abrupt changes in the base topology. Whether or not the high poly mesh needs to be all quads, excluding any workflow specific requirements, really just comes down to what the surface looks like when subdivision is applied.

    There's generally going to be a couple of different ways to layout the base topology and a few more ways to structure the order of operations used to model the shapes. Which approach makes the most sense often comes down to how the model will be used, technical limitations, project requirements, timeline, etc. but it should be possible to create most hard surface objects using basic poly modeling operations.

    Trying to manage all of the support loops, while modeling the shapes individually, tends to create a lot of unnecessary mesh complexity early on. Which can make it difficult to adjust the topology flow or connect adjacent mesh components, without deforming the underlying shapes. It's often much simpler to resolve topology issues at the lowest level possible by using the block out process to create accurate shapes that also define the loop flow.

    Redirecting the loop flow

    Using the existing edges as support loops and matching the edge segments of the intersecting shapes is generally considered best practice but there are some situations where it's not possible. A common strategy to resolve this type of situation is to match the existing segments wherever possible then cut in additional edges to redirect the loop flow around the shape intersections to produce a mesh that's all quads.

    Below is an example of what that process could look like. The arbitrary number of segments in each cylinder limits the segment matching to the slight round over that runs around the edge of the larger cylinder and intersects the base of the smaller, angled cylinder. Additional loop cuts can be used to join the remaining segments around the shape intersection.

    This strategy can produce usable results quickly but does tend to have some compromises when it comes to the overall surface quality.

    The area around the highlighted quad, where additional loops are used to redirect the flow around the shape and as a workaround for segment matching, has edges that cross over the existing segments in the base shape. This abrupt change in the topology disrupts the spacing and deforms the shape of the curved surface. Which tends to produce a subtle smoothing artifact that's visible when lit or viewed from glancing angles and when using highly reflective material values. The change in topology flow also produces a five sided E pole that's left unsupported in the middle of the shape. While this pole isn't causing any visible artifacts with this material, it does have the potential to cause surface quality issues. Especially if there are fewer segments in the underlying surface.

    There's also a visible pinching artifact around the base of the shape intersection, where the loop around the base of the fillet interrupts the segment spacing of the larger cylinder. Manually adjusting the position of the highlighted edges can minimize the visibility of the pinching artifact between the shapes but can reduce the shape accuracy or create other types of smoothing artifacts. Expanding the fillet to the same size as the loop around the shape intersection would have similar tradeoffs.

    Topology like this is workable. It just requires some extra care when routing the loops around the surface and a willingness to trade some minor smoothing artifacts for increased speed and flexibility. This type of topology layout can be especially useful when there isn't enough starting geometry to match the segments in the intersecting shapes but completely redoing the base mesh isn't an option either.


    Matching the intersecting segments.

    Adjusting the number of segments in each shape, until everything lines up so the existing geometry can be used as the outer support loops, is a lot easier to do during the initial block out. This process can take a bit of trial and error but the increased control over the basic topology will help maintain the accuracy of the shapes and provide a clean set of edges that can be used to guide the loop flow.

    Below is an example of what this process could look like. It may be necessary to visualize the final width of the fillet before committing to a specific segment spacing for both shapes. If modifier based boolean operations aren't available then an alternate option is to use another cylinder, with the same number of segments as the intersecting cylinder, that's the same size as the desired fillet.

    After combining the shapes and adding the fillet around the shape intersection, any extraneous geometry can be removed using an edge dissolve operation. Use the loops that make up the fillet to constrain any shape changes between the large and small cylinders. This will help preserve the accuracy of the basic shapes and minimize smoothing errors caused by unintended deformation of the base mesh.

    This strategy can take a bit more time to work through but tends to provide a cleaner topology layout with evenly spaced segments.
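The arithmetic behind this kind of segment matching can be sketched as keeping the edge length (circumference divided by segment count) roughly equal across the intersecting shapes. The helper below is a hypothetical illustration, not from the original post:

```python
import math

def matched_segments(r_large, n_large, r_small):
    """Pick a segment count for a smaller cylinder so its edge length
    roughly matches that of a larger cylinder with n_large segments."""
    spacing = 2 * math.pi * r_large / n_large   # edge length on the big shape
    return max(3, round(2 * math.pi * r_small / spacing))

# A 64-segment cylinder of radius 40 mm intersected by a 10 mm boss:
print(matched_segments(40.0, 64, 10.0))   # → 16
```

In practice the result is just a starting point; the fillet width and where the existing edges land on the intersection still have to be eyeballed, which is why doing this during the block out (while the counts are still cheap to change) pays off.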

    Complex shape intersections on compound curves do generate poles but with this topology layout they are generally going to be constrained to a very small area that's well supported. This extra support around the poles tends to reduce their potential to affect the mesh flow and reduces the visibility of any subtle smoothing artifacts.

    The topology flow here isn't all that different from the previous example. It's just that the support loops around the shapes are a lot more consistent. There are certain situations where it does make sense to offset the intersecting shapes and average out any potential smoothing issues over a wider area. It's just that this particular combination of intersecting curves needs to be well supported to prevent unintended shape deformation.

    Segment matching on the rest of the shapes is fairly straightforward and the flat areas can be used to either terminate the unnecessary support loops or resolve the mesh to all quads. Ideally the segment counts would be matched before adding any support loops but once the key surface features are defined it may not be strictly necessary to block out everything in one pass. The internal portion of the vessel can be generated using a solidify operation if desired.


    Low / high poly topology overview.

    Below is what the final topology layout would look like for both the low poly and high poly models. In this example the low poly model is derived from the high poly base mesh by using edge dissolve to remove the unnecessary support loops. Depending on the view distance, it may be necessary to add additional segments to the round over on the larger cylinder. This little bit of extra geometry would help provide a cleaner silhouette when viewed up close.

    If a modifier based workflow was used to block out the shapes it would also be fairly easy to increase the number of segments in the individual shapes to optimize the low poly for close viewing. Without having to over-complicate the high poly or rebuild the low poly from scratch.


    Recap:

    • Try to solve most of the major topology flow issues during the block out.
    • Match the segments of intersecting shapes and use existing geometry as support whenever possible.
    • Examine the tradeoffs between different modeling and topology strategies and select the one that best fits the project's goals.


    Also want to mention, for those who haven't already seen it, that polycount's dedicated modeling thread is a great resource for community feedback, topology layouts, and modeling strategies. It's something that a lot of artists have contributed over the years and there's numerous examples and discussions about the pros and cons of different approaches to the same problems. Even if the shapes aren't exactly the same, a lot of the basic fundamentals will still apply.

  • slrove
    Offline / Send Message
    slrove polycounter lvl 5

@FrankPolygon Awesome, thank you for the detailed and thoughtful response. I did give it a try matching the flow between the shapes and settled on something with just a little more density, because why not, it could be a hero prop. Your layout with extra edge loops around the main beveled edges is handy for the high poly version that gets baked. Here were my results: (on the far left is the first iteration, next over are the blocked shapes and topologies, followed by the refined low poly version, the same version without the wireframe, and the baked smooth mesh)

    Just for fun tried to improve the glass shader I had on it in Unity: (left is the original, and right is the new/improved model)

Again, I really appreciate your feedback. These threads are a great source of information for resolving so many 3D issues.

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision sketch: soft hard surface.

    This write-up is a brief look at using basic modeling operations to create a base mesh for stiffer types of soft goods. Complex folds and detailed wrinkles can be added to the base mesh using textile specific sculpting and simulation tools but the tradeoff is these processes tend to require a fairly dense mesh.

    Simple fabric details, often found on smaller, less complex parts of some hard surface models, can also be added to the base mesh by using some creative subdivision modeling techniques. The details may be a bit softer with this method but the simplified mesh tends to use a lot less geometry and the surface can still be detailed later with fabric specific tools.

    Modeling fabric products with a subdivision only approach does have some significant limitations but can still be an efficient way to create simpler shapes. Certain types of stiff fabrics like foam, leather, plastic, etc. tend to be used for more utilitarian or geometric designs and often have less pronounced fold details. Below is a preview of what can be achieved by making some quick changes to the topology of a simple base mesh.

    Starting with hard surface shapes, that are already well defined, will help establish a sense of scale and proportion for the fabric parts that are attached to the rest of the model. The underlying hard surface components can be created using basic modeling operations like solidify, bevel, loop cut, etc.

    Blocking out the fabric parts is fairly straightforward. Start by establishing the basic topology flow around the shape's profile. Extrude and add loop cuts to create a relatively consistent quad grid. Enable subdivision preview and adjust individual edge segments to form the basic shapes along the surface of the mesh. Create additional geometry to support complex surface features by using inset or loop cut operations and continue adjusting the shapes to match the references.

    For softer fabrics, the quad grid mesh produced during the block out can be used as a base for sculpting or simulating fine details.

For stiffer fabrics, it's possible to create minor creases and ripples by triangulating sections of the mesh, then randomly selecting individual edges and moving them into or away from the surrounding surface. Dissolving individual vertices or edges and redirecting adjacent loops will also help produce subtle folds and wrinkles. Tools that are able to make random selections and move geometry relative to the surface normals can make this a fairly quick process.
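A minimal sketch of that randomized displacement, written as plain Python over raw vertex and normal lists rather than any specific tool's API (the function name and seeding are illustrative assumptions):

```python
import random

def jitter_along_normals(vertices, normals, amount, seed=0):
    """Nudge each vertex a small random distance along its normal to fake
    the minor creases and ripples of a stiff fabric surface. Seeded so the
    result is repeatable."""
    rng = random.Random(seed)
    out = []
    for (x, y, z), (nx, ny, nz) in zip(vertices, normals):
        d = rng.uniform(-amount, amount)
        out.append((x + nx * d, y + ny * d, z + nz * d))
    return out

# A flat 3x3 patch with straight-up normals picks up subtle height variation.
verts = [(i, j, 0.0) for i in range(3) for j in range(3)]
norms = [(0.0, 0.0, 1.0)] * len(verts)
bumpy = jitter_along_normals(verts, norms, 0.05)
print(max(abs(v[2]) for v in bumpy) <= 0.05)   # → True
```

Keeping the amount small relative to the fabric thickness is what keeps the result reading as ripples rather than damage.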

The example below shows the final base mesh and subdivision previews. A few more details have been added by triangulating other sections and moving some of the new edges away from the existing surface. While it is possible to continue sculpting in high frequency details like micro folds, pores, weaves, patterns, etc., adding those types of micro details with texture overlays tends to be a bit more flexible. For this type of heavy fabric, the unsubdivided base mesh can also be used as the low poly model, which can help save a bit of extra time.

    Recap:

Applications like Blender, Marvelous Designer, ZBrush, etc. provide a variety of different workflow options for modeling fabric parts. Deciding which approach should be used for a project really just depends on the complexity of the components and the size of the surface details. While a subdivision only approach can be a quick way to model stiffer types of fabric, blocking out the basic shapes of soft goods can still be useful when using a sculpting or simulation workflow. It can be helpful to avoid adding unnecessary complexity to both the model and the modeling process, but it's also important to evaluate how the model will be used and choose a workflow that can efficiently create accurate shapes.

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision sketch: hemispherical headgear.

    This is just a quick look at a simple modeling process and topology layout for common helmet shapes.

    Start with a quad sphere [or a UV sphere with 8 segments and a quad cap] that uses the minimum amount of geometry required to define the largest shapes. Work through all of the major shape and topology flow issues during the first few steps of the block out phase.

    Refine those basic shapes then continue developing the smaller details, applying each level of subdivision as required, while keeping the topology as simple as possible. Rely on the subdivision to smooth the shapes and add geometry.

    Mirroring the mesh can help speed up the workflow by reducing the amount of work required to adjust the shapes. Solidify can be used to create the internal thickness, once the shape of the outer shell is completed.
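To put a number on how quickly subdivision adds geometry when relying on it to smooth the shapes: each Catmull-Clark level turns every quad into four, so an all-quad base mesh quadruples its face count per level. A tiny sketch (hypothetical helper):

```python
def subdivided_faces(base_faces, levels):
    """Face count of an all-quad mesh after `levels` of Catmull-Clark
    subdivision; each level turns every quad into four quads."""
    return base_faces * 4 ** levels

# A light quad-sphere shell of 24 quads after three subdivision levels:
print(subdivided_faces(24, 3))   # → 1536
```

This is why it pays to solve shape and topology flow issues on the minimal base mesh first: every stray loop gets multiplied along with everything else.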

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision sketch: minimum viable topology.

    This is a brief write-up about subdivision artifacts caused by mesh deformation and how segment matching can create topology that reduces the disruption caused by intersecting shapes.

    The accuracy of subdivision modeling is limited because it smooths the mesh by averaging the geometry. Adding, merging or moving geometry disrupts the segment spacing of curved surfaces and can deform the mesh. Subtle deformation may be acceptable but severe deformation will likely cause visible smoothing artifacts.

    Below is an example of how mesh deformation affects subdivision smoothing when sliding, moving or adding edge segments on a cylinder.
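To put rough numbers on that deformation: the inward flat-spot depth of each straight segment on a circular cross-section is its sagitta, r(1 - cos(π/n)) for n equal segments. The short sketch below (a hypothetical helper, not from the original post) shows why dissolving or merging segments reads as a dent after smoothing:

```python
import math

def sagitta(radius, segments):
    """Maximum inward deviation of a straight chord from a true circle when
    the circle is approximated with `segments` equal chords."""
    return radius * (1 - math.cos(math.pi / segments))

r = 10.0
fine = sagitta(r, 24)     # evenly spaced 24-segment cylinder
coarse = sagitta(r, 12)   # same cylinder after every other edge is dissolved

# Doubling the segment length roughly quadruples the flat-spot depth, so
# merged or unevenly spaced segments stand out against their neighbors.
print(round(fine, 4), round(coarse, 4), round(coarse / fine, 2))
```

The absolute depths are small, but subdivision smoothing averages across neighboring segments, so it's the sudden change in spacing (not the depth itself) that shows up as a visible artifact.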

    Mesh deformation, around shape intersections on curved surfaces, is often caused by adding support loops that disrupt the segment spacing or by merging the existing geometry into the intersecting geometry. The examples below show what this can look like when the mesh is subdivided.

    From left to right: Adding support loops that disrupt the segment spacing near the base of the intersection tends to deform the curvature outwards. Increasing the number of segments and merging them into the support loop at the base of the intersection tends to deform the curvature inwards.

    Using the geometry of the curvature to form part of the support loop around the base of the shape intersection greatly reduces the visibility of the smoothing artifact but doesn't completely resolve the deformation that's caused by the inconsistent segment spacing.

    Adjusting the shapes, so the edge segments match, will produce a shape intersection with a relatively even polygon distribution. Since the segments in each curve are roughly the same length, the subdivision smoothing will be relatively consistent across all of the connected shapes. Which helps prevent undesired mesh deformation that could cause smoothing artifacts.

    Working through these basic topology flow issues early in the block out will make the modeling process a lot easier but sometimes it's not possible to match the segments of each shape perfectly. In most cases though, close is good enough.

    Any minor difference between the shapes can usually be taken up by the faces between the outer support loop and the base of the shape intersection. The support loop on the inside of the shape intersection can also be adjusted to compensate for the width of the support loop on the outside of the shape intersection.

    Below is an example of the order of operations that works for segment matching most curved shapes. Start by adjusting the shapes until the segments that make up the curve are aligned then cut in the additional loops along the curve. Remember to include the width of the outer support loop when matching the segments.

    Basic cylinder intersection: Line up the segments in both shapes until they're close to where the outer support loop will be. Confine any potential difference in the shapes to the area between the inner and outer support loops.

    Tapered cylinder intersection: Same as the basic cylinder intersection but a triangular quad can be used to redirect the loop flow where the three shape profiles meet.

    Edge to edge cylinder intersection: Rotating the starting position of the intersecting shape can provide better options for directing the topology flow, without having to arbitrarily increase the amount of geometry required.

    [Perpendicular] Overlapping cylinder intersection: Complex shape intersections increase the likelihood that some of the support loops will disrupt the curvature and cause smoothing artifacts. Rotating the intersecting shape can help optimize the use of existing geometry as support for the adjacent topology flow but sometimes it's just necessary to increase the number of segments in the shapes to reach the desired level of surface quality.

    [Parallel] Overlapping cylinder intersection: Similar to the shapes in the previous example but could also benefit from moving the smaller shape a bit further into the larger one. Sometimes it's necessary to simplify certain details or compromise slightly on the position of the shapes to achieve the desired results when the intersecting shape is constrained by the number of segments in the existing mesh.

    Increasing the number of segments in a curve can make it easier to route the topology flow across shape intersections and can also increase the accuracy of the subdivided curve. However, there are certain situations where details need to be added to an existing mesh that can't be adjusted easily. The same segment matching strategy can be used to find the minimum amount of geometry required to match the intersecting shape to the existing shape, even if the number of segments isn't evenly divisible or if the intersecting cylinder geometry has to be rotated to line up with the existing geometry.
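    The "rotate to line up" step mentioned above comes down to simple angle arithmetic. A minimal sketch with illustrative helper names:

```python
def rotation_offset(segments):
    """Half-segment rotation: lines an intersecting cylinder up with the
    flats of an existing cylinder instead of its vertices."""
    step = 360.0 / segments  # angle covered by one segment
    return step / 2.0

print(rotation_offset(12))  # → 15.0
```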

    Recap: Block out complex shape intersections before merging and adding support loops. Match the segments of the intersecting shapes and line up the edges of adjacent shapes as support whenever possible. When merging intersecting shapes, avoid deforming the curved surfaces by constraining any differences in the shapes to the area between the inner and outer support loops that define the shape intersection.

  • Brandon.LaFrance
    Brandon.LaFrance polycount sponsor

    Dammit, Frank, your constant stream of incredibly useful content is out of control. Seriously, this thread gets its own permanent, dedicated tab in my browser, and I reference it constantly. It is my first recommendation to anyone asking me for help or advice regarding sub-d modeling. Just wanted to say thanks!

  • FrankPolygon
    FrankPolygon grand marshal polycounter

    @Brandon.LaFrance Really appreciate your support! Glad the write-ups are helpful and thanks for recommending them.

  • FrankPolygon
    FrankPolygon grand marshal polycounter

    Subdivision sketch: truncated cones.

    These examples show how segment matching can be used to join truncated cones to curved surfaces.

    When the support loop around an intersecting shape disrupts the segment spacing of an existing shape, it tends to cause unintended deformation that can generate visible smoothing artifacts. Simplifying the support loop routing, by using the geometry of the existing shape to support the shape intersection, maintains consistent segment spacing and helps reduce the visibility of smoothing artifacts caused by abrupt changes in the topology.

    Truncated cone joined to cylinder: This will generally behave like a simple cylinder to cylinder intersection. Align the segments in both shapes and simplify the support loops around the shape intersection whenever possible.

    Truncated cone with adjacent cylinder joined to cylinder: Adjust the number of segments in each shape to maintain a relatively consistent geometry density while also aligning the edges around the shape intersection.

    Truncated cone with radially clocked cylinder joined to cylinder: Start by aligning the segments in the shapes then add perpendicular support loops, as required, to match the support loops around the base of the shape intersection.

    Truncated cone with adjacent cylinder joined to truncated cone: Get the alignment as close as possible then constrain any differences in the shapes to the area between the inner and outer support loops around the shape intersections.

    Truncated cone joined to chamfered cylinder: Rotate the intersecting geometry as required and adjust the number of segments in each shape to align the edges around the base of the intersection. Perfect alignment isn't always possible but close enough is usually good enough. Perpendicular edge loops can be routed across the intersecting shape or reduced with a triangular quad.

    Angled cylinder joined to truncated cone: Steeper tapers and proportionally larger intersecting shapes tend to amplify the difference between the segment spacing around the extreme ends of the shape intersection. Using the minimum amount of geometry required for each shape can help reduce the overall complexity and make it a lot easier to join the two shapes, without generating unintended shape deformations.

    Recap: Adjust the number of segments to align the edges around intersections while also preserving the accuracy of the underlying shapes. Simplify topology routing and use the existing geometry to maintain the segment spacing of curved surfaces. Rotate intersecting geometry to align the edges without adding unnecessary mesh complexity.

  • chien
    chien polycounter lvl 13

    @FrankPolygon Just came to this thread and saw your process for optimizing topology. Do you also have a process for high poly baking?

  • bewsii
    bewsii polycounter lvl 9

    This thread is just absurd. After 14 years of modeling (off and on) as both a hobbyist and professional, your methodology has me rethinking my entire workflow.


    I'm an oldschool 3ds Max user who loves hard surface modeling and has mainly focused on vehicles, where booleans are mostly an afterthought since there's so few flat surfaces in that space. As I've recently gotten back into 3d I've been branching out into weapons and other assets, which led me down a rabbit hole of finding ways to become more efficient and able to work quicker, which is how I found your thread. I've been blown away by the way Blender handles booleans (though Max's ProBoolean, and tools like LazerCut by KeyHydra are very nice too) and how artists like yourself are using them to create insanely complex objects that I'd spend days modeling with traditional SubD methods.


    I may have to learn Blender here soon. It just seems too good to pass up. lol

  • FrankPolygon
    FrankPolygon grand marshal polycounter

    @chien Thanks for the question. There are a few links to some write-ups about common baking artifacts in this post and some examples of how triangulation affects normal bakes in this discussion. Additional content about these topics is planned but, since most of the tools used for baking are already well documented, the focus will tend to be on application agnostic concepts.

    Block out, base mesh, and bakes.

    This write-up is a brief look at using incremental optimization to streamline the high poly to low poly workflow. Optimizing models for baking is often about making tradeoffs that fit the specific technical requirements of a project. Which is why it's important for artists to learn the fundamentals of how modeling, unwrapping and shading affect the results of the baking process.

    Shape discrepancies that cause the high poly and low poly meshes to intersect are a common source of ray misses that generate baking artifacts. This issue can be avoided by blocking out the shapes, using the player's view point as a guide for placing details, then developing that block out into a base mesh for both models.

    Using a modifier based workflow to generate shape intersections, bevels, chamfers and round overs makes changing the size and resolution of these features as easy as adjusting a few parameters in the modifier's control panel. Though the non-destructive operations of a modifier based workflow do provide a significant speed advantage, elements of this workflow can still be adapted to applications without a modifier stack. Just be aware that it may be necessary to spend more time planning the order of operations and saving additional iterations between certain steps.

    It's generally considered best practice to distribute the geometry based on visual importance. Regularly evaluate the model from the player's in-game perspective during the block out. Shapes that define the silhouette, protruding surface features, and parts closest to the player will generally require a bit more geometry than parts that are viewed from a distance or obstructed by other components. Try to maintain relative visual consistency when optimizing the base mesh by adding geometry to areas with visible faceting and removing geometry from areas that are often covered, out of frame or far away.

    For subdivision workflows, the block out process also provides an excellent opportunity to resolve topology flow issues, without the added complexity of managing disconnected support loops from adjacent shapes. Focus on creating accurate shapes first then resolve the topology issues before using a bevel / chamfer operation to add the support loops around the edges that define the shape transitions. [Boolean re-meshing workflows are discussed a few posts up.]

    Artifacts caused by resolution constraints make it difficult to accurately represent details that are smaller than individual pixels. E.g. technical restrictions like texel density limit what details are captured by the baking process, and size on screen limits what details are visible during the rendering process. Which is why it's important to check the high poly model, from the player's perspective, for potential artifacts. Especially when adding complex micro details.

    Extremely narrow support loops are another common source of baking artifacts that also reduce the quality of shape transitions. Sharper edge highlights often appear more realistic up close but quickly become over sharpened and allow the shapes to blend together at a distance. Softer edge highlights tend to have a more stylistic appearance but also produce smoother transitions that maintain better visual separation from further away.

    Edge highlights should generally be sharp enough to accurately convey what material the object is made of but also wide enough to be visible from the player's main point of view. Harder materials like metal tend to have sharper, narrower edge highlights and softer materials like plastic tend to have smoother, wider edge highlights. Slightly exaggerating the edge width can be helpful when baking parts that are smaller or have less texel density. This is why it's important to find a balance between what looks good and what remains visible when the textures start to MIP down.
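    The trade-off between edge width and MIP visibility can be sanity checked numerically. This is a hedged sketch that assumes a meters-based scene and an illustrative texels-per-meter figure:

```python
def edge_width_px(width_world, texels_per_unit):
    """Width of a baked edge highlight in pixels at mip 0."""
    return width_world * texels_per_unit

def survives_mips(width_px, mips):
    """True while the highlight still covers at least one pixel after
    the texture has been halved `mips` times."""
    return width_px / (2 ** mips) >= 1.0

w = edge_width_px(0.002, 2048)  # a 2 mm bevel at 2048 px/m ≈ 4.1 px
print(w, survives_mips(w, 2), survives_mips(w, 3))
```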

    By establishing the primary forms during the block out and refining the topology flow when developing the base mesh, most of the support loops can be added to the high poly mesh with a bevel / chamfer operation around the edges that define the shapes. An added benefit of generating the support loops with a modifier based workflow is they can be easily adjusted by simply changing the parameters in the bevel modifier's control panel.

    Any remaining n-gons or triangles on flat areas should be constrained by the outer support loops. If all quad geometry is required then the surface topology can be adjusted with operations like loop cut, join through, grid fill, triangles to quads, etc. Though surfaces with complex curves usually require a bit more attention, for most hard surface models, if the mesh subdivides without generating any visible artifacts then it's generally passable for baking.

    Since the base mesh is already optimized for the player's in-game point of view, the starting point for the low poly model is generated by turning off any unneeded modifiers or by simply reverting to an earlier iteration of the base mesh. The resolution of shapes still controlled by modifiers can be adjusted as required then unnecessary geometry is removed with edge or limited dissolve operations.

    It's generally considered best practice to add shading splits*, with the supporting UV seams, then unwrap and triangulate the low poly mesh before baking. This way the low poly model's shading and triangulation is consistent after exporting. When using a modifier based workflow, the limited dissolve and triangulation operations can be controlled non-destructively. Which makes it a lot easier to iterate on low poly optimization strategies.

    *Shading splits are often called: edge splits, hard edges, sharp edges, smoothing groups, smoothing splits, etc.

    Low poly meshes with uncontrolled smooth shading often generate normal bakes with intense color gradients that correct for the inconsistent shading behavior. Some gradation in the baked normal textures is generally acceptable but extreme gradation can cause visible artifacts. Especially in areas with limited texel density.

    Marking the entire low poly mesh smooth produces shading that tends to be visually different from the underlying shapes. Face weighted normals and normal data transfers compensate for certain types of undesired shading behaviors but they are only effective when every application in the workflow uses the same custom mesh normals. Constraining the smooth shading with support loops is another option. Though this approach often requires more geometry than simply using shading splits.

    Placing shading splits around the perimeter of every shape transition does tend to improve the shading behavior and the supporting UV seams help with straightening the UV islands. The trade-off is that every shading split effectively doubles the vertex count for that edge and the additional UV islands use more of the texture space for padding. Which increases the resource footprint of the model and reduces the texel density of the textures.

    Adding either a smoothing split or a UV seam to an edge increases the vertex count by splitting the mesh, but once the mesh is split by one there's no additional resource penalty for placing both a smoothing split and a UV seam along the same edge. So, effective low poly shading optimization is about finding a balance between maximizing the number of shading splits to sharpen the shape transitions and minimizing the number of UV seams to save texture space.
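    The doubling rule can be illustrated with a quick count. The toy model below (function name is illustrative) treats each exported vertex as one copy per distinct attribute island touching it, which is roughly how vertex data is split for the GPU:

```python
def exported_vertex_count(islands_per_vertex):
    """Each mesh vertex is exported once per distinct attribute island
    (smoothing group and/or UV island) that touches it. An edge that is
    both hard and seamed still only splits the mesh once."""
    return sum(islands_per_vertex)

# A cube with every edge hard (and seamed): 3 islands meet at each of
# the 8 corners, so 24 vertices are sent to the GPU instead of 8.
print(exported_vertex_count([3] * 8))  # → 24
```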

    Which is why it's generally considered best practice to place mesh splits along the natural breaks in the shapes. This sort of approach balances shading improvements and UV optimization by limiting smoothing splits and the supporting UV seams to the edges that define the major forms and areas with severe normal gradation issues.

    Smoothing splits must be paired with UV splits to provide padding that prevents baked normal data from bleeding into adjacent UV islands. Minimizing the number of UV islands does reduce the amount of texture space lost to padding but also limits the placement of smoothing splits. Using fewer UV seams also makes it difficult to straighten UV islands without introducing distortion. Placing UV seams along every shape transition does tend to make straightening the UV islands easier and is required to support more precise smoothing splits but the increased number of UV islands needs additional padding that can reduce the overall texel density.

    So, it's generally considered best practice to place UV seams in support of shading splits, while balancing UV distortion against the amount of texture space lost to padding. Orienting the UV islands with the pixel grid also helps increase packing efficiency. Bent and curved UV islands tend to require more texel density because they often cross the pixel grid at odd angles. Which is why long, snaking strips of wavy UV islands should be straightened. Provided the straightening doesn't generate significant UV distortion.

    UV padding can also be a source of baking artifacts. Too little padding and the normal data from adjacent UV islands can bleed over into each other when the texture MIPs down. Too much padding and the texel density can drop below what's required to capture the details. A padding range of 8-32px is usually sufficient for most projects. A lot of popular 3D DCCs have decent packing tools or paid add-ons that enable advanced packing algorithms. Used effectively, these types of tools make UV packing a highly automated process.
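    The 8-32px recommendation follows from how MIP chains work: each level halves the texture, so the padding gap between islands halves too. A small sketch of that reasoning (hypothetical helper):

```python
def safe_mip_levels(padding_px, min_px=1):
    """Number of times the padding gap can be halved before it drops
    below min_px and adjacent islands start to bleed together."""
    levels = 0
    while padding_px / 2 >= min_px:
        padding_px /= 2
        levels += 1
    return levels

print(safe_mip_levels(8), safe_mip_levels(32))  # → 3 5
```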

    It's generally considered best practice to optimize the UV pack by adjusting the size of the UV islands. Parts that are closer to the player tend to require more texel density and parts that are further away can generally use a bit less. Of course there are exceptions, such as areas with a lot of small text details, parts that will be viewed up close, areas with complex surface details, etc. Identical sections and repetitive parts should generally have mirrored or overlapping UV layouts. Unless there's a specific need for unique texture details across the entire model.
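    Texel density itself is just resolution times UV coverage over world size, which makes island scaling easy to reason about. A minimal sketch (hypothetical helper, meters assumed):

```python
def texel_density(texture_px, uv_coverage, world_units):
    """Pixels per world unit: texture resolution times the fraction of
    UV space the island spans, divided by the part's world-space size."""
    return texture_px * uv_coverage / world_units

# A 2048px texture, island spanning 25% of the UV width, part 0.5 m wide:
print(texel_density(2048, 0.25, 0.5))  # → 1024.0 px/m
```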

    Both Marmoset Toolbag and Substance Painter have straightforward baking workflows with automatic object grouping and good documentation. Most DCC applications and popular game engines, like Unity and Unreal, also use MikkTSpace. Which means it's possible to achieve relatively consistent baking results when using edge splits to control low poly shading in a synced tangent workflow. If the low poly shading is fairly even and the hard edges are paired with UV seams then the rest of the baking process should be fairly simple.

    Recap: Try to streamline the content authoring workflow as much as possible. Especially when it comes to modeling and baking. Avoid re-work and hacky workarounds whenever possible. Create the block out, high poly and low poly model in an orderly workflow that makes it easy to build upon the existing work from the previous steps in the process. Remember to pair hard edges with UV seams and use an appropriate amount of padding when unwrapping the UVs. Triangulate the low poly before exporting and ensure the smoothing behavior remains consistent. When the models are set up correctly, the baking applications usually do a decent job of taking care of the rest. No need for over-painting, manually mixing normal maps, etc.

    Additional resources:

    https://polycount.com/discussion/163872/long-running-technical-talk-threads#latest

  • FrankPolygon
    FrankPolygon grand marshal polycounter

    Subdivision sketches: automotive details.

    This write-up is a brief look at how creating accurate shapes can make it easier to generate geometry that maintains a consistent surface quality, while also producing quad grid topology that subdivides smoothly. Which is important for surfaces with smooth compound curves and objects with highly reflective materials. Something that's fairly common on smaller automotive parts and finely machined mechanical components.

    Constraining features are key shapes that heavily influence the amount of geometry required to generate clean, quad grid intersections with the adjacent geometry. Blocking out these features first allows the remaining shapes to be developed using a segment matching strategy. Which makes adding the support loops a lot easier, since most of the topology flow issues are resolved during the block out.

    The following example shows what this process could look like when modeling hard surface components with a mix of rounded shapes and sharp transitions. Start blocking out the spindle nut by identifying the constraining features. Focus on modeling the shapes accurately then adjust the number of segments in the rest of the model to match. Apply the booleans then add the support loops with a bevel / chamfer operation.

    Subtle features are the less obvious surface modifiers and surface transitions that change how the intersecting shapes behave. Things like pattern draft, rounded fillets, shallow chamfers, etc. all play a major role in the actual shapes produced by joining or subtracting geometric primitives. Study the references closely and identify any subtle features that produce sweeping curves, oblong holes, tapered transitions, etc. Merge extraneous geometry to the defining elements of the major forms and use the space between the support loops of the shape transitions to average out any minor differences in the shape intersections. This will help preserve the accuracy of the shapes while also maintaining a consistent flow when transitioning between features with known dimensions.

    Cast aluminum wheels often have subtle tapers on surfaces that otherwise appear to be parallel and perpendicular. These slight angles determine how much overlap there is between the cylindrical walls and intersecting shapes like counter bores for the wheel hub and lug holes. The following example shows how a little bit of pattern draft tends to produce shape transitions with a lot of gradual overlap and what the boolean cleanup can look like.

    Start the block out by establishing the major forms. Add the draft and rounded fillet on the central counter bore then place the lug holes. Adjust the segment count and draft angles on all of the shapes until the topology lines up. After that, the boolean operations can be cleaned up by merging the extra vertices into the geometry that defines the cylinder walls and the rest of the support loops can be added with loop cuts and a bevel / chamfer modifier. Since the shapes of most lug holes are fairly simple and don't require much geometry, the constraining feature tends to be the number of spokes on the rim.
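    The spoke count acting as a constraining feature can be expressed as simple divisibility: the rim's segment count has to be a whole multiple of the number of radially tiled slices. A hedged sketch with illustrative names:

```python
def rim_segments(spokes, segments_per_slice):
    """A radially tiling rim needs a segment count that is a whole
    multiple of the spoke count so one slice can be copied around."""
    return spokes * segments_per_slice

def slice_angle(spokes):
    """Rotation step used when arraying one slice around the axis."""
    return 360.0 / spokes

print(rim_segments(5, 8), slice_angle(5))  # → 40 72.0
```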

    Tiling features are shapes or groups of shapes that repeat across a surface. Simplifying these features into small, reusable elements generally makes the modeling process a lot easier, since a lot of the complex topology routing issues can be solved on a small section of the mesh. Which is then copied and modified to complete the rest of the model.

    Wheel assemblies, e.g. rims and tires, tend to have radially tiling features such as tread patterns, spokes, lug holes and other repeating design elements. All of which can be broken down into basic shapes for ease of modeling. Below is an example of what this process could look like. Start the block out by establishing the scale and proportion of the primary features. This will help determine how many segments are required for the larger shapes.

    Try to break the complex geometric patterns down into individual features and use local mirroring, in conjunction with radial tiling, to reduce the amount of work required to create the base model. Clean, quad grid topology doesn't compensate for geometry that breaks from the shape of the underlying curves. So, keep surface features constrained to the underlying curvature by either cutting across the existing geometry or projecting the shapes onto the basic forms. This will help ensure the consistency of the final surface.

    Recap: Identify features that constrain the adjacent geometry and model those areas first. Resolve topology flow issues before dealing with the added complexity of support loops. Plan ahead and try to focus on creating accurate shapes for these features then match the rest of the mesh to those existing segments. Be aware of how subtle changes to the angle of intersecting surfaces produce different shapes. Analyze the references to find subtle shape transitions that generate unique shape profiles. Break down complex, repeating patterns into smaller sections that can be modeled and copied. This will help reduce the amount of work required to create the object and can make it easier to solve some types of topology flow issues.

This discussion has been closed.