
Sketchbook: Frank Polygon

FrankPolygon grand marshal polycounter
I'll be using this sketchbook thread as a place to warehouse write-ups that wouldn't really fit anywhere else. Most of the content will cover concepts and fundamentals related to hard surface modeling with some broader commentary on the creative process.

Replies

  • Hoodelali polycounter lvl 4
    Your way of teaching is very clear and truly awesome. You always take the time to write detailed answers with amazing schematics in the "How the F# do I model that" topic. Will this thread be a repository of all those sweet detailed pictures/explanations you've posted there?
  • FrankPolygon grand marshal polycounter
    @Hoodelali Thank you.

    Most of what I intend to post here is new content that covers broader concepts with more of an emphasis on the how and why behind it all. Some of this content will be very niche hard surface, mechanical and fabrication stuff and how that ties into creating art. There will also be some general art process stuff that I think is worth sharing. The more technical stuff will be very image heavy and I've been considering doing some of that content as video (maybe audio) but it really depends on how it all works out and what people are interested in. Organizing links back to the existing write-ups, so it's easier for people to find and share what they're interested in, will be a part of this thread but it will be one of the smaller parts.
  • Hoodelali polycounter lvl 4
    Understood, thanks for the detailed answer! Can't wait to read what you'll post here!
  • FrankPolygon grand marshal polycounter

    Subdivision topology: grids, edge tension and smoothing stress.

    [I think that] this topic is worth talking about because there's a tendency for artists learning subdivision modeling to look at examples of isolated topology layouts without considering the broader context: why the example was modeled a particular way and how the topology will interact with adjacent shapes on the object.

    Without this context there's a good chance that some of the underlying concepts (which aren't entirely obvious or may be counter-intuitive) can't be easily explained or understood. Combine this missing background information with the logical appeal of grid topology and the definitive nature of mantras like "just add more geometry" and "only use quads" and it's easy to see why artists often latch on to these ideas.

    There's nothing inherently wrong with these concepts. In fact they're often based on some truths that are still relevant to specific workflows and older modeling paradigms. However, things move on, and where hard surface subdivision modeling for games is concerned some of the older truisms no longer hold up in the face of competing re-meshing workflows. That's not to say that subdivision modeling can't be part of an effective contemporary modeling workflow. (It can.) It's just that the benefits of Boolean / poly re-meshing and CAD re-meshing workflows have put pure subdivision modelers in a position where time is a precious commodity. This is why, as the current trends in hard surface modeling continue to grind on, pure subdivision workflows need to focus on balancing accuracy and efficiency to keep pace with competing workflows.

    An important technical aspect of many art processes is learning how to avoid creating unnecessary complexity and how to cleverly hide minor imperfections. This is particularly true for subdivision modeling. It's not CAD and there are always going to be some minor imperfections. When working with limited resources on time sensitive projects it's important to learn how to pick and choose the battles. Time spent on work that doesn't provide a substantial improvement to the overall visual quality or editability of a mesh is essentially time wasted.

    One of the obvious solutions to improving the quality of curved surfaces is to increase the number of segments along the curve. Sometimes, though, on minor details or features that are constrained by the surrounding geometry, this approach isn't an effective option without completely re-working the entire mesh. In these situations it's important to work with the available geometry and balance the preservation of shape accuracy against the reduction of smoothing artifacts.

    That's why it's worth mentioning that edge tension isn't always a bad thing. There are certain situations where increasing the smoothing stress can pull curved geometry into shape and reduce smoothing artifacts. This effect can be useful in situations where it's not feasible to increase the geometry density. Below is a comparison that shows how increasing the edge tension and smoothing stress can produce a cleaner result when there's a limited amount of geometry.



    A similar effect can still be observed (though only when viewing from glancing angles) with additional support loops, denser geometry and tighter edge widths. Increasing the segment count of the cylinder walls does improve the shape accuracy and smoothing behavior but adding perpendicular edge loops around the cylinder generates a minor smoothing artifact.

    Arbitrarily increasing the mesh density to maintain grid topology or to relax tension in the corners increases mesh complexity, reduces editability and doesn't necessarily guarantee improved smoothing results. Much of the improvement is coming from the reduced distance between the segments of the cylinder rather than the overall increase in geometry density.



    More importantly: using the existing edge segments that make up the cylinder wall as support loops allows them to remain mostly parallel and concentric to the rest of the shape. It's the effect of adjacent loops pulling the existing cylinder wall segments out of shape that causes most of the smoothing issues. This topic has been discussed at length in the "How do you model this?" thread so there's no need to rehash that discussion here.

    The example below demonstrates this basic principle with minimal geometry and it also demonstrates how, as the segment count increases, shape accuracy tends to increase and smoothing artifacts tend to decrease. Zooming in and out will also demonstrate how changes in object size and view distance will change the amount of geometry required for the desired level of shape accuracy.
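
    To put rough numbers on that relationship: for a circle of radius r approximated by an inscribed polygon with n segments, the maximum gap between the polygon and the true circle is r × (1 - cos(π/n)). This only measures the cage before smoothing (subdivision pulls the surface inward further) but it makes a useful back-of-the-envelope check. Below is a minimal Python sketch; the 50 mm radius is purely illustrative.

    ```python
    from math import cos, pi

    def chord_deviation(radius, segments):
        """Maximum radial gap between a circle and its inscribed
        n-segment polygon approximation: r * (1 - cos(pi / n))."""
        return radius * (1 - cos(pi / segments))

    # Hypothetical 50 mm radius cylinder at a handful of segment counts.
    for n in (12, 16, 24, 32, 64):
        print(f"{n:>2} segments: {chord_deviation(50.0, n):.3f} mm max deviation")
    ```

    Doubling the segment count roughly quarters the deviation, so each doubling costs twice the geometry for an improvement that quickly drops below what anyone can perceive.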

    After a certain point the size of the object and the view distance render minor smoothing artifacts immaterial and inconsequential. It's also important to consider whether or not the material has micro surface normal details that will help disguise any minor smoothing artifacts.

    If a mesh is easy to work with and subdivides cleanly without any major smoothing artifacts then it's generally passable. It's highly unlikely the average player will ever see or care about the high poly wire frames.



    The same topology can be sharpened by adding support loops around the perimeter of the flat areas and shape transitions. Using modifiers to add these support loops will tend to reduce the amount of work required while also increasing the editability of the mesh. Again, the same basic topology layout and associated modeling processes have already been discussed at length in the "How do you model this?" thread.



    Recap:

    Take the time to evaluate your pure subdivision modeling process and topology strategies. Continuing excess edge loops across the entire model and manually editing loops to maintain all-quad or grid topology can be a waste of time if it isn't significantly improving the visual quality of the model. Hard surface subdivision modeling for game art is a commodity. If your process is being slowed down by perfectionism and unnecessary complexity then it's likely that process won't stand up to the competition from artists using re-meshing workflows.

    Even grid topology doesn't always eliminate smoothing artifacts and edge tension / smoothing stress doesn't always generate smoothing artifacts. Under certain conditions edge tension and smoothing stress can be used to reduce smoothing artifacts.

    Though it's true that larger shapes and sharper edges are generally going to require more geometry than smaller shapes and softer edges, increasing the geometry density isn't always the best way forward. Instead try to balance the overall mesh density based on how players will view and interact with the object.

    Learn to accept the minor imperfections of subdivision modeling and move on. If you look hard enough you'll find artifacts everywhere and after a certain point it just becomes a pointless exercise in chasing ghosts. Stare into the n-gon long enough and it will stare back.
  • FrankPolygon grand marshal polycounter

    Subdivision modeling: process optimization, order of operations and flat surface topology.

    Process optimization is an important part of skill building and taking the time to evaluate the underlying assumptions that drive your modeling strategy can help improve both the efficiency and quality of your modeling process.

    Below is a visual overview of a simple modeling task. Fill the cap between two cylinders that have different segment counts and add support loops to create a subdivision ready cage mesh. How much time and effort it takes to complete this simple task will depend on the modeling tools and strategies used.



    The following example is a modeling process that's driven by a lot of common preconceptions and assumptions. Here's a breakdown of the rationale behind this process and a summary of how the model was created.

    Subdivision modeling implies that the cage mesh needs to be all quads with even grid topology to avoid smoothing artifacts. Starting with fixed topology suggests that everything needs to be manually joined together to create the perfect edge flow. Edge extrusion modeling extends the existing topology so it will be the fastest way to fill in the cap. Loop cut, fill and connect are basic operations that provide granular control over the topology layout phase of the process.

    Modeling process 1 (≈ 48-62 individual operations.)
    • Extrude upper outer diameter support loop.
    • Extrude upper inner diameter support loop.
    • Fill radial segment - 12x.
    • Cut lower outer diameter support loop.
    • Cut lower inner diameter support loop.
    • Join inner diameter support loops as quads - 42x. (Alternate: select and fill as quads - 30x.)



    This modeling process relies heavily on a brutally straightforward approach that uses a very limited selection of modeling tools and a strategy that follows whatever trajectory feels right in the moment. There's a significant amount of time spent on repetitive modeling tasks and the misconception that everything MUST resolve into quads to be subdivision ready only compounds this problem. What follows is a look at whether or not the previous assumptions created artificial limitations that generated unnecessary complexity.

    Modeling process 1 technical parameters:

    All quad geometry - Unless there's a specific technical limitation that absolutely requires the cage mesh to be all quad geometry, there's generally minimal benefit to manually adjusting the topology to create it. Simply subdividing the cage mesh will make it all quads. If quad grid topology is required for other processes (like detail sculpting) then using automatic quad re-meshing tools is a better approach. The assumption that all subdivision cage meshes need to be quad grids is something of a misconception and in this case it's an artificial limitation.

    Fixed segment counts - Part of the block out process is figuring out how much geometry will be needed for subsequent modeling operations. There will be situations where it's either impossible or impractical to plan for all additional details and in these situations the base geometry will become a limiting factor. This is a realistic technical limitation and it's important to build an understanding of how to work around issues caused by having a limited amount of geometry to work with.

    Manual topology routing - There are certain cases where manual topology routing is necessary but, unless the automatic tools have failed to create usable topology, there's no real benefit to manually creating all of the topology flow. Sometimes there's a tendency to assume that creating every face and placing every support loop by hand improves the quality of the mesh but the reality is that this is just needless busy work and it should be avoided whenever possible.

    Edge extrusion modeling - Basic manual modeling tools can be used to accomplish a wide variety of tasks and tend to offer a high degree of granular movement control, but that doesn't mean they're always the best tools for the job. The biggest drawback to using these tools is that it can be difficult and time consuming to maintain a consistent edge width. More complex tools can be used to place support geometry and automatically maintain a consistent edge width. This improves both the process efficiency and the visual quality of the model: consistent edge widths tend to improve the readability of the shapes and help provide a clean, professional look.

    With the exception of fixed segment counts, most of these assumptions introduced artificial limitations that negatively influenced the modeling process by narrowing the tool selection and increasing the number of manual operations. If edge extrusion was the method of choice then it would be much better to extrude the inside and outside segments to form the support loops and then fill the faces between them.



    The only really important part of the topology is the support loops on either side of the outer bounds of the main shapes. Flat areas are largely unaffected by changes in topology so it makes more sense to average out the differences between the segments there rather than in the support loop around a key shape transition. With this broader context, now backed up with experience gained through experimentation, it becomes more obvious that the only real constraint here is the starting geometry. The rest of the process is open to interpretation.

    The following example is a modeling process that applies what was learned by removing unnecessary limitations, expanding the tool selection and adjusting the order of operations. Here's a breakdown of the rationale behind this process and a summary of how the model was created.

    Modeling process 2 technical parameters:

    Fixed segment counts - In certain situations the underlying topology will be a limiting factor that determines how much geometry is available to work with. In this case the fixed segment counts of the inner and outer diameters are the only true limitation.

    Abandoning the previously mentioned preconceptions means it's possible to explore new ways of approaching the problem of connecting two support loops with different segment counts. Thinking about what's actually important and what tools are available will help identify more efficient ways of adding the support loops around the shapes.

    Modeling process 2 (≈ 3-5 individual operations.)
    • Fill half cap - 2x.
    • Bevel / chamfer outer and inner diameter to add support loops.
    • Add edge between inner and outer support loops on the cap - 2x.

    Changing the order of operations and tools used means it's possible to add most of the support geometry in a single step and all of the newly added support loops will have a consistent width. The flat area between the two support loops around the inside of the cap is used to absorb any major topology changes without causing any major smoothing artifacts. Not only does this modeling process ensure that potential shading issues are minimized but there's also the added benefit of having a simplified mesh that's easier to work with.
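
    As a rough sketch of what that single step can look like in Blender (operator arguments assume a 2.9x build; the offset value is illustrative): with the cap filled and the inner and outer boundary edges selected in Edit Mode, one bevel operation places every support loop at a consistent width.

    ```python
    import bpy

    # segments=2 with profile=1.0 produces two flat loops that hug the
    # original edge (support loops) instead of a rounded chamfer.
    bpy.ops.mesh.bevel(offset=0.002, segments=2, profile=1.0, affect='EDGES')
    ```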



    In most cases: the important topology layouts are around the shape boundaries and are made up of the support loops that hold the shapes. Whatever happens on the flat surfaces is often irrelevant. Trying to resolve all of the intersecting edge loops into quad geometry doesn't improve the smoothing results and (in this case) would only add unnecessary complexity.

    That's not to say that it's unimportant to be aware of how the underlying topology affects subdivision smoothing behavior, or that it's fine to ignore technical specifications when a situation calls for quad grid topology. Instead the message is that, in most cases, a few triangles here and there are a moot point if they aren't causing any major subdivision smoothing artifacts. Along similar lines: n-gons can be especially useful since they make the mesh easier to work with and, when used correctly, they have virtually no downsides compared to mixed quad / triangle topology.

    The most important takeaway here: the geometry that matters most is the edges that define the shapes and the support loops that back those shapes up.

    Below is a sample of a variety of topology strategies that all produce similar results. Double looping isn't always necessary but it can be helpful on large flat surfaces where the distance between support loops is quite wide. It's also worth noting that there's practically no distinguishable difference between the all quad geometry and the mixed triangle topology when it's used on a flat surface. This isn't an exhaustive list of all the possible topology combinations but it illustrates the point.



    Recap:

    Taking time to evaluate the assumptions that drive the modeling process is an important part of skill building. Automatically following popular modeling mantras (without evaluating the broader context behind the how, when and why) can introduce unnecessary process restrictions that tend to lead to extra work and excessive complexity.

    Evaluate whether or not these assumptions introduce artificial limitations and experiment with different tools and order of operations to identify simpler and more efficient ways to model the underlying shapes.
  • Whoolkan polycounter lvl 6
    Great insight on some modeling practices. Sometimes I have a hard time making details work on curved surfaces without shading distortions, so it's always good to know some extra tricks.
  • FrankPolygon grand marshal polycounter

    Subdivision modeling: When is good enough, good enough?

    The cylinder on the left has 20 segments. The cylinder in the middle has 64 segments. The cylinder on the right has 12 segments. So why can't the geometry on the left hold its shape when subdivision smoothing is applied?



    [Arguably] one of the most common answers to this question tends to be some variation of "the cage mesh needs more geometry" but is this really the best answer?

    The subdivision modeling process tends to be more approximate than exact and this means there's trade-offs between shape accuracy and process efficiency. While it is true that increasing the geometry density of a mesh does tend to increase the overall accuracy of the shapes, it's worth noting that it also tends to decrease the overall editability of the model. Arbitrarily increasing the geometry density to resolve shape inaccuracies (without addressing the fundamental topology issues) tends to introduce unnecessary complexity and under certain conditions it can create more problems than it solves.

    A better answer to the previous question might be: The cylinder has uneven geometry distribution and the support loop topology is poorly routed around the shapes. Evaluate the size and orientation of intersecting features, match the segments of adjacent surfaces and position intersecting features between the existing geometry to maintain a relatively consistent segment spacing around the cylinder walls. Maintaining relatively consistent spacing around the cylinder and using adjacent geometry to support intersecting shapes tends to be a more efficient way to achieve visually similar results with less geometry.



    Returning to the previous point about subdivision modeling being an approximate process with trade-offs: there's almost always going to be minor shape inaccuracies and smoothing issues. The overall scale of the object, visual prominence in any given scene and player interaction distance should be the deciding factors in whether or not it's worth the time to resolve minor shape and smoothing issues.

    The question should be "How good is good enough?" not "How can this mesh be made perfect?" because there are diminishing returns on the amount of geometry used and the amount of time spent improving the results. In the first example it's fairly obvious that the cylinder on the left needs to be improved, but what's not so clear is whether there's a significant enough improvement to warrant all of the geometry that the center cylinder uses...

    Below is an example of four different cylinders. Two of them have 24 segments and two of them have 12 segments. Is there a visually significant difference between any of them? Are the cylinders with twice as many segments twice as good? How does scale affect the visual quality of each when compared to the others?



    At this scale there seems to be minimal benefit to using double the amount of geometry. The 12 segment cylinders perform just as well visually as the 24 segment cylinders. Which topology layout makes the most sense depends on what the adjacent geometry looks like and what features need to be added to the shapes.

    There's definitely some minor differences (arguably the worst performing of all is the one with 24 segments and the quad grid topology) but they are all visually similar enough that (with textures) it would be difficult for players to notice a difference under normal conditions.



    Recap:

    Increasing the geometry density tends to increase the accuracy of the shape but at the cost of editability. Doubling the mesh density doesn't guarantee a result that's twice as good. Consistent segment spacing and using adjacent geometry as support has a large impact on the shape accuracy of a low density mesh. Use the appropriate amount of geometry that holds the shapes and subdivides cleanly.

    Consider how large the object is and how closely players will view the object before spending a significant amount of time trying to improve minor smoothing issues. Visually insignificant smoothing artifacts are rarely worth the time and may be hidden by normal texture details. Spend time improving the results where players will actually be looking and can see them.
  • FrankPolygon grand marshal polycounter

    Subdivision [Saturday] Sketch: Camera rod clamp and rosette.

    [Trying a new content format with more images and less text.]

    Here's a basic overview of my hard surface subdivision modeling process. Most of the support loops are added with a bevel / chamfer modifier and can be adjusted at any time.

    The support loops around some of the smaller shapes and fine details on some of the parts required limited manual adjustments. In those cases the bevel / chamfer modifier was applied and the mesh was adjusted.

    Only the body of the clamp and the head of the flag screw have manual support loop edits. All key features are modeled to scale in real world units.

    Top row: shaded subdivision preview.
    Middle row: wire frame subdivision preview.
    Bottom row: base meshes with the bevel / chamfer modifier support loops turned off.



    Here's the modeling process for the rosette and small screws. Start with primitive shapes and add details with inset operations, using real world scale and incremental snapping.

    The teeth on the rosette were created by making a single tooth and spin duplicating it in a circle. Screw holes are added with simple boolean operations.
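
    For reference, one common way to set up that kind of radial duplication non-destructively is an array modifier driven by a rotated empty. A minimal bpy sketch, where the object name and tooth count are hypothetical placeholders:

    ```python
    import bpy
    from math import radians

    tooth = bpy.context.active_object                   # the single tooth mesh
    pivot = bpy.data.objects.new("RosettePivot", None)  # empty at the rosette center
    bpy.context.collection.objects.link(pivot)
    pivot.rotation_euler[2] = radians(360 / 24)         # one tooth every 15 degrees

    arr = tooth.modifiers.new(name="RadialArray", type='ARRAY')
    arr.count = 24                      # tooth count, hypothetical
    arr.use_relative_offset = False
    arr.use_object_offset = True        # each copy inherits the empty's rotation
    arr.offset_object = pivot
    ```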

    Both the rosette and the screw have support loops generated by active (editable) modifiers.



    Here's the modeling process for the body of the clamp. It starts with primitive shapes and edge loop sketches made to real world scale.

    Connecting geometry is created with single click automated bridge and fill operations. Minor details are added with inset operations and boolean subtractions. Support loops are generated with bevel / chamfer and inset operations.

    Minor topology adjustments are made after automated edge loop placement. Triangles and n-gons are used to simplify the modeling process. Since they aren't causing any major smoothing issues there's marginal benefit to resolving the topology to all quads.



    Here's the modeling process for the 15mm camera rail segment and flag head clamp screw. The camera rail segment is a basic primitive shape that was beveled on the ends, and the support loops around the shapes are generated by active (editable) modifiers.

    The flag head clamp screw starts as a cylinder primitive and additional shapes are extruded from it. There are a lot of small details in a limited amount of space so some of the support loops required manual editing. Minor details like the hex head in the top of the screw are simplified and slightly exaggerated to improve baking performance.

    Screw threads are made from a single segment that's stacked with an array modifier and capped with the separate flag head mesh. Washer geometry is a flat plane with thickness and support loops automatically generated by modifiers.
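
    That thread stack can be as simple as an array modifier with a constant Z offset equal to the thread pitch. A sketch, assuming the active object is one full turn of the thread profile and using an illustrative 1.25 mm pitch:

    ```python
    import bpy

    thread = bpy.context.active_object          # one full turn of the profile
    arr = thread.modifiers.new(name="ThreadStack", type='ARRAY')
    arr.count = 8                               # number of turns, hypothetical
    arr.use_relative_offset = False
    arr.use_constant_offset = True
    arr.constant_offset_displace[2] = 0.00125   # 1.25 mm pitch in scene meters
    ```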



    Recap:
    Mimicking a CAD type workflow in a non-parametric modeling program, working in real world units, sketching shapes with primitives, using minimal geometry, leveraging modifier functionality and avoiding repetitive manual editing operations helps maintain accurate shape geometry and streamlines the modeling process.

    Spending a lot of time on minor details that won't be seen by players generally isn't a great use of time. This model is a very small part of a larger stabilizer system so any minor smoothing imperfections won't be noticeable to players after the bake is completed and normal texture details are applied. There are certain areas that could be abstracted or optimized further to speed up the modeling process but the goal for this project was to maintain a high level of dimensional accuracy with minimal geometry.
  • FrankPolygon grand marshal polycounter

    Subdivision modeling: all quads and manual loops vs. n-gons, triangles, booleans and modifiers.

    Three of the [arguably] most popular subdivision modeling mantras [used out of context] go something like this: "[Someone] told me to..."


    Although the previous statements are all based on small grains of truth, advice like this is problematic because it tends to be oversimplified and abstracted well beyond the appropriate context of a specific use case. Then it's often repeated [mercilessly] until it starts to sound like some sort of universal rule set.

    Subdivision modeling is a broad field that covers a wide variety of use cases and specializations. What's best for one specific situation may not be best for another. Narrowing this discussion to creating high poly game art assets: although there's a lot of skill overlap, there's also a wide range of technical requirements for each discipline and each project. Topology that's acceptable to hard surface artists may not be acceptable to organic artists. The same holds true when breaking each category down into specific specialties: character artists, vehicle artists, prop artists, environment artists, etc. Similar trends continue when breaking down each asset type: character, vehicle, terrain feature, structures, hero props, background props, etc.

    "How do I get better results and model faster?" is a fairly common question. There's two basic answers: practice to improve the skills and stop doing stupid shit that wastes time and doesn't improve the model. Reducing wasted time is often much easier than improving skills with a high learning curve.

    Treating [and repeating] popular subdivision modeling mantras as universal rules, without understanding the broader context of how and where they fit into the modeling process, can result in buried misconceptions that lock in inefficient and unproductive modeling habits. For hard surface subdivision modeling, following [self-imposed] quads-only and manual topology routing rules often results in unnecessary complexity and wasted time. All this wasted time could be better spent polishing the asset or creating additional assets.

    When learning subdivision modeling it's important to research, practice, evaluate, adjust and repeat. Continually repeating this process is important because it helps develop an understanding of the contextual knowledge that drives various modeling processes and [most importantly] because manual editing habits can be a time sink and they tend to stick around. They don't just go away on their own as the artist's skill level increases. It takes a lot of conscious effort to adjust an established process so it's best to start making those adjustments to the process on the front end.

    All quads and manual support loop placement.

    Here's an example of a modeling process that's been artificially limited by the quads-only, manual topology routing mentality. This process isn't based on any one workflow or modeling approach but rather on some common habits I've seen [artists of all skill levels] fall into.



    After the basic shapes are created the artist starts adding and positioning each support loop. This example took an incredible amount of time because most of the support loops were manually positioned exactly the same distance from each edge. Most artists who use this method of adding support loops just slide them into place until the edge width looks close enough.

    Though this process is painfully slow and obviously has a lot of steps, it seems really fast in real time because there's always something to do and the geometry is always in motion. Some automatic tools like inset and grid fill were used in places where it was painfully obvious there's no point in manually creating the geometry, but all of the major support loops were added manually.



    There's still a couple of large n-gons and the only obvious solution is to continue adding geometry until there are enough loops to match the number of segments in the cylinder. Since the geometry is already boxed in it's OK to use grid fill again. What follows is more manual loop placement and some manual vertex merging to get the support loops to turn the corner around the intersecting shapes.



    Most artists who use this method will stop when the mesh reaches the state shown above. All the major shapes are supported and there are no triangles or n-gons in the mesh. However some artists feel the need to add additional loops to support the existing shapes or in case they want to extrude some additional features, as shown below.


    Process evaluation.

    While working through this process in real time it doesn't seem overly complex because it's all straightforward dissolve, extrude and loop cut operations. There's nothing wrong with any particular operation. Each has its place and individually they're all quite quick. The problem is the way the artificial restrictions have impacted the tool usage and order of operations.

    Watching this process in real time it only feels fast because the artist is always doing something. Breaking the process down, looking at it step by step and evaluating the results it's pretty clear that a lot of effort was expended without seeing a significant return on the time invested.

    N-gons, triangles, booleans and modifiers.

    Here's an example of a modeling process that leverages boolean operations, non-quad geometry and support loops generated by active modifiers.

    The two primitive shapes are added, sized and joined with a boolean union operation. An edge loop is added to give the support loops a place to run out around the cylinder shape. The corners of the cylinder are joined to the corners of the rectangle with a couple of edges. A bevel / chamfer modifier is added with the appropriate edge weights and the support loops are automatically generated by the modifier.
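
    Here's a rough bpy translation of those steps; the object names are hypothetical and the width is illustrative:

    ```python
    import bpy

    body = bpy.data.objects["Block"]      # hypothetical object names
    boss = bpy.data.objects["Cylinder"]

    # Join the primitives, then apply so the result can be edited directly.
    union = body.modifiers.new(name="Union", type='BOOLEAN')
    union.operation = 'UNION'
    union.object = boss
    bpy.context.view_layer.objects.active = body
    bpy.ops.object.modifier_apply(modifier="Union")

    # After the run-out loop and corner edges are added by hand, a
    # weight-limited bevel modifier generates all of the support loops.
    bev = body.modifiers.new(name="SupportLoops", type='BEVEL')
    bev.limit_method = 'WEIGHT'
    bev.width = 0.002
    bev.segments = 2
    bev.profile = 1.0
    ```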



    The topology of the base mesh provides a clean path around the shapes and the edge width of the support loops can be adjusted at any time via the modifier settings. All of the support loops have a consistent spacing and the mesh subdivides cleanly without any major smoothing artifacts. None of the support loops require any manual adjustments.


    Process review.

    When used properly there are very few drawbacks to n-gons or geometry created by automated tools. The editability of a lightweight base mesh and support loops generated by the modifier system is a powerful combination that makes creating and editing subdivision models less of a chore. The simplicity of the process and the quality of the results should more or less speak for themselves.

    Result comparison.

    Below is a comparison of the base [cage] meshes, wire-frame subdivision previews and the shaded subdivision previews. For most hard surface, high poly game assets there are few legitimate technical reasons to restrict the modeling workflow with unnecessary geometry, quads-only rules and manual support loop placement. There's definitely a place for all-quad grid topology in certain workflows, but with automated re-meshing tools like ZBrush's DynaMesh, QuadRemesher, etc. there's minimal benefit to manually creating this sort of topology.


    In most cases, if they aren't causing any major smoothing errors then triangles and n-gons shouldn't be a problem. Subdivision modeling is an approximate process and it's important to make the right trade-offs between shape accuracy and process efficiency. Evaluate which parts of the object players will interact with and how closely they'll view the object. Put time and effort into polishing the parts players will spend the most time looking at. Few players will ever see or care about the high poly wire frames.

    Recap:
    Examine how advice fits into the broader context of a particular modeling workflow. Avoid generating misconceptions caused by repeating oversimplified concepts that lack important context. Take the time to evaluate the modeling process, look for areas to improve the results and try to minimize the amount of manual work required. Spend time polishing the areas that improve the player's experience.
  • Nominous polycounter lvl 10
    Great posts, Frank. :) Your last post reminds me that I need to embrace how subdivision is just an approximation and use more shortcuts to model faster. I've avoided Blender's bevel modifier like the plague since I obsess over perfectly straight support edges when, in hindsight, wobbly support edges aren't really noticeable after they're subdivided and especially after texturing.

    Are you using 3ds Max in your screenshots btw? I had to add a bevel weight to those bottom two diagonal edges connecting the cylinder to the cuboid in Blender since its bevel modifier doesn't support both sharp and arc outer miter types at the same time.

    Also, an important question: do you model the high poly base mesh and the low poly model at the same time in order to have most (~80%) of the low poly done right off the bat? I can see a significant advantage of using the bevel modifier instead of manual support edges in that there are far fewer support loops to remove for the low poly.
  • FrankPolygon grand marshal polycounter
    @Nominous Thank you, glad you enjoyed the content. Overall I'd have to agree that getting a clean and consistent result is (almost always) an important priority.

    Modifier based modeling strategies can be a really efficient way to work since the underlying mesh is kept relatively simple. This makes the model easier to edit since most of the support loops are generated by the modifier system and will remain editable (via modifier settings) throughout most of the modeling process.

    Offloading repetitive modeling tasks to tools and modifiers should reduce manual editing and increase shape consistency. The structure of the underlying mesh topology tends to have the single greatest influence over the quality of the geometry generated by modifiers and other tools. If the input mesh is clean, straight and well organized then the support loops added by the modifiers should be clean, straight and well organized.

    It should be possible to re-create the previous example without having to apply any modifiers. Here's an example that shows the base mesh and modifier settings required to get this to work in Blender 2.91. Depending on the shapes, angle based modifier controls aren't always accurate or reliable so sometimes it's necessary to use grouping or weights to achieve the desired results.
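
    For reference, the same setup can be sketched from Python against the 2.9x API. The width is illustrative and the weight assignment assumes the relevant edges were selected beforehand:

    ```python
    import bpy

    obj = bpy.context.active_object

    # Object Mode: give the previously selected edges a full bevel weight.
    for edge in obj.data.edges:
        if edge.select:
            edge.bevel_weight = 1.0

    bev = obj.modifiers.new(name="Bevel", type='BEVEL')
    bev.limit_method = 'WEIGHT'      # only weighted edges get support loops
    bev.width = 0.002
    bev.segments = 2
    bev.profile = 1.0
    bev.miter_outer = 'MITER_ARC'    # the arc outer miter discussed above
    ```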


    Correct: there's a significant efficiency bonus to using the same base mesh for both the high poly and low poly models.

    Using modifiers to control the geometry density of specific features means this process works in both directions since the parameters can be used to add or remove geometry from across the entire mesh or specific areas.
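
    A minimal sketch of that two-direction toggle, assuming the support loops and the smoothing both come from bevel and subdivision modifiers:

    ```python
    import bpy

    def show_high_poly(obj, high=True):
        """Switch one cage between high poly (modifiers on) and
        low poly (raw cage) output."""
        for mod in obj.modifiers:
            if mod.type in {'BEVEL', 'SUBSURF'}:
                mod.show_viewport = high
                mod.show_render = high

    show_high_poly(bpy.context.active_object, high=False)  # preview the low poly
    ```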

    How complex this setup is depends entirely on the project and the complexity of each individual part. It's all about balancing shape accuracy and editability to efficiently arrive at a result that hits the project's quality goals.

    This process is something I plan on covering in the near future.
  • Hoodelali polycounter lvl 4
    Extremely useful and very well detailed, you're doing an amazing job 🔥 please continue! 🙏
    Thanks a lot for your hard work ❤️
  • FrankPolygon grand marshal polycounter

    Subdivision modeling: use more geometry or create better geometry?

    Removing subdivision smoothing errors from shape intersections on curved surfaces is a common theme for questions about hard surface subdivision modeling. A lot of these questions are about pinching and stretching issues near corners and around shape transitions. Most of these questions have been asked countless times and more often than not they've been answered with some variation of "Just use more geometry." but is this really the best answer?

    More importantly, if using more geometry is the best answer to these questions then how much more geometry is enough geometry?

    In most cases: increasing the geometry density does tend to increase the overall accuracy of the underlying shapes but it also tends to increase the complexity of the base mesh. Adding unnecessary complexity will make it harder to edit the base mesh and using too much geometry on a small segment of the model can generate its own type of smoothing artifacts.

    Outside of the obvious cases where there isn't enough starting geometry to support the intersecting shapes, arbitrarily increasing the number of segments in the starting geometry can be a viable solution to remove some smoothing issues, but there are diminishing returns on the trade-off between accuracy and efficiency.

    In some cases there's already more than enough starting geometry and the smoothing issues are caused by improper geometry distribution or topology flow that disrupts the natural forms of the underlying shapes. Fundamentally, these types of smoothing issues are problems with the underlying shapes, not the geometry density. Blindly increasing the amount of geometry will work but it's a brute force approach that doesn't address the root cause of the issue.

    Subdivision modeling is a [mostly] approximate process but the smoothing behavior is relatively consistent across all platforms. To understand how subdivision smoothing works it's important to break things down into their simplest forms and observe the fundamental smoothing behavior of the basic shapes. Once subdivision smoothing behavior is understood it becomes easier to plan out efficient topology routing solutions for complex shape intersections.

    Below is an example of basic cylinder geometry:

    Row 1: The edges that define the cylinder wall segments tend to pull inwards uniformly. To counter this it's necessary to add support loops around the edge loop at the top and bottom of the cylinder. These support loops counter the uniform inward pull and deform the top of the shape into a flat plane with a rounded transition into the cylinder wall.

    Row 2: Moving an individual segment of the cylinder wall (in or out, up or down, right or left) will change the spacing between the segments and this causes the wall of the cylinder to deform. Failure to maintain concentricity and spacing of cylinder wall segments is a major cause of smoothing artifacts on curved surfaces.

    Row 3: Adding edge loops that run perpendicular to the cylinder wall segments has virtually no effect on the curvature of the cylinder wall. Placing additional edge segments between existing cylinder wall segments tends to disrupt the subdivision smoothing, which flattens the cylinder wall. Adding support loops to either side of existing cylinder wall segments tends to disrupt the subdivision smoothing and causes pinching.

    As shown in this example: moving existing cylinder wall segments can cause undesirable deformation. Additional geometry that runs perpendicular to simple curves tends to have a relatively minor impact on smoothing behavior along the curve. Additional geometry that runs parallel to existing edge segments in the cylinder wall tends to disrupt the smoothing behavior and is a major cause of smoothing artifacts on curved surfaces.

    To avoid these kinds of smoothing errors it's important to maintain a relatively consistent spacing between curve segments and ensure that the smoothed mesh components and surface features are concentric to the wall of the primary shape.
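
    These behaviors are easy to reproduce first hand: drop a low segment cylinder into an empty scene, add a subdivision modifier and nudge individual wall segments while watching the preview. A minimal setup sketch with illustrative dimensions:

    ```python
    import bpy

    # 12 segment test cylinder with a live subdivision preview.
    bpy.ops.mesh.primitive_cylinder_add(vertices=12, radius=0.05, depth=0.1)
    cyl = bpy.context.active_object
    sub = cyl.modifiers.new(name="Subdiv", type='SUBSURF')
    sub.levels = 2           # viewport preview level
    sub.render_levels = 2
    ```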



    Below is an example of basic rectangle geometry:

    Row 1: The edges that define the volume of the rectangle tend to pull inwards uniformly. To counter this it's necessary to add support loops around the edges that define each face of the rectangle. These support loops counter the uniform inward pull of the smoothing and deform the basic shape into a series of flat surfaces with a slightly rounded transition between each surface.

    Row 2: Adding geometry to or moving geometry along flat surfaces [between the outer support loops] generally has no major impact on the overall shape. Moving surface geometry components out of plane does cause deformation.

    When properly supported, flat surfaces are generally very tolerant of substantial topology changes, so they can be a good place to simplify the mesh by culling unnecessary edge loop propagation. Flat surfaces that are angled relative to other surfaces should still be flat to themselves. Keeping shape boundaries well supported and individual surfaces coplanar should be all that's required to prevent major smoothing issues on shapes with flat surfaces.



    The previous observations seem fairly obvious but it's important to keep this information in mind when merging shapes. Trying to add geometry to curved surfaces, without accounting for the effects of edge loop propagation, often results in pinching and stretching errors.

    The modeling process shown below seems to work but once the final support loops are added to the cube they run off the cube and parallel the existing geometry of the cylinder. This changes the segment spacing in that area and causes undesired cylinder wall deformation.



    Moving the extra edge loop along the cylinder wall can reduce the severity of the smoothing artifacts but it generally doesn't fully resolve the issue. It is possible to manually re-adjust the segments so things are relatively smooth but this often requires a significant time investment and tends to introduce other issues with the overall accuracy of the major shapes.

    When asking for feedback on resolving this type of smoothing error, it's very common to get a vague answer like "Just add more geometry." and that's understandable, because this strategy is simple, easy to explain and generally works. Sadly this advice is often simplified into something along the lines of "Just double or triple the segment count." Most of the time this will work but it's not an optimal strategy and this kind of advice ignores the wider context of what's appropriate for a particular part of a model.

    Arbitrarily increasing the complexity of the model without addressing the root problem that's causing the smoothing error is a surefire way to reduce process efficiency. It will also make it more difficult for anyone who has to come back and make adjustments to that part of the model. This is why it's important to evaluate what's actually causing the smoothing error, take other factors into account (such as object size, average view distance, complexity of adjacent mesh segments, etc.) and make an informed decision about where to add the additional geometry.

    A better answer is: match the segments of the intersecting shapes and use the appropriate amount of geometry to match the necessary edge loops on the adjacent shapes.

    Below is an example of how the number of segments on the larger cylinder can be increased to match the number of edges that make up the intersecting rectangle. As discussed previously, the number and position of the segments in the cylinder wall are critical elements that influence smoothing behavior. It's also generally considered best practice to use the existing geometry as support for the shape transition area around the shape intersection.



    The process of segment matching is fairly straightforward and this is one of the reasons why it's important to block out all of the major shapes before investing a significant amount of time in adding support loops and other details. Establishing accurate shapes with good topology flow between them will make it easier to place and route support loops that won't cause major smoothing issues.

    With subdivision modeling: the goal should generally be to use the minimum amount of geometry required to effectively hold the required level of shape accuracy and surface details.

    Once all of this is understood and put into practice it becomes much easier to generate meshes that have a better balance between shape accuracy and process efficiency. Here's an example of the previous modeling process but the number of segments in the cylinder is increased slightly to match the number of segments required to support the intersecting cube.


    It's also important to view the mesh from multiple angles and evaluate the total prominence of any smoothing artifacts. A shiny preview material with a wide highlight roll off can really light up surface imperfections when a mesh is viewed from off axis glancing angles. Using this type of material and toggling the subdivision preview will help make it easier to identify the severity and source of any smoothing errors.
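
    One quick way to build that kind of inspection material in Blender is a fully metallic Principled BSDF with a moderate roughness; a small sketch with illustrative values:

    ```python
    import bpy

    mat = bpy.data.materials.new(name="SmoothingCheck")
    mat.use_nodes = True
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Metallic"].default_value = 1.0
    bsdf.inputs["Roughness"].default_value = 0.2   # wide, soft highlight roll off
    bpy.context.active_object.data.materials.append(mat)
    ```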

    When increasing the mesh density there's a definite fall off in quality gains as the amount of geometry increases, along with a corresponding jump in editing difficulty. This is why it's important to consider how visible a smoothing error is before investing a significant amount of time trying to remove it. If there are minor smoothing errors in a small segment of the mesh that isn't regularly visible to players, or if minor smoothing errors are covered by high frequency surface normal details, then there's minimal benefit to achieving perfection in those areas.

    Below are three examples [with increasing mesh density] to compare how increasing the amount of geometry affects the overall visual quality of the subdivided mesh and the editability of the base mesh.

    Here's the cube intersecting a 12 segment cylinder.



    Here's the cube intersecting a 24 segment cylinder.


    Here's the cube intersecting a 32 segment cylinder.



    As the density of the mesh increases there's a clear fall off in how perceivable each additional improvement in visual quality is.

    This means that, past a certain point [for an object of a given size, viewed from an average view distance] increasing the number of segments in a cylinder will no longer have a measurable difference to what players experience. At this point there's very few reasons to continue adding additional segments.

    Where good enough is good enough will be different from project to project but there's a distinct case to be made against over complicating the mesh. Time spent perfecting something that doesn't directly affect the playability or the player's perception of the overall visual quality of the model is time that could be better used elsewhere. This all circles back to why block outs are so important.

    Another point worth making is that there will be situations where it's either impractical, impossible or imprudent to increase the number of segments in the base geometry. Often there's cases where parts are very small or constrained by adjacent geometry and the cost of making minor improvements far outweighs leaving in minor smoothing artifacts.

    In situations like this it's best to try and reduce the visibility of minor smoothing artifacts by manually adjusting the mesh to help pull everything to shape when subdivision is applied. Often small, soft smoothing artifacts can be covered by surface noise in the normal textures.

    It's generally acceptable to make minor manual adjustments to small details constrained by adjacent topology, but keep in mind that using this to make large changes to most of the shapes in an object can produce undesirable results. Excessive manipulation of geometry components, to compensate for an issue caused by a lack of adjacent supporting geometry, can generate its own type of smoothing issues and tends to decrease the overall accuracy of the shapes.

    Below is an example of what this process can look like. Dissolve unnecessary edge loops and, with subdivision preview enabled, scale or move the offending geometry components back into line until the stretching or pinching issues are minimized.



    The results are far from perfect but it is an improvement on the original base mesh's smoothing behavior. This strategy is really only useful for making minor improvements to parts that would have otherwise required reworking a large section of the mesh to blend in. Again, it's important to evaluate the root cause of the smoothing issues and address that whenever possible. This solution is only passable on minor parts that are generally out of the player's close view.



    All of these principles can be applied to other shapes. Below is an example of how this works with cylinder to cylinder intersections. Start by blocking out the major forms then match the number of segments in the intersecting shapes. Preserving the natural shape of the underlying geometry will reduce the likelihood of prominent smoothing artifacts.

    The mesh in this example is passable if the geometry density is constrained by adjacent shapes but it does have some deformation issues caused by the limited amount of geometry in the larger cylinder.



    Here's another example that shows the same modeling process with more starting geometry.

    The overall goal should be to preserve the natural forms of the underlying shapes and increasing the geometry density to closely match the segments of the intersecting geometry tends to improve both shape accuracy and smoothing behavior. Using the appropriate amount of geometry will help provide a good balance between shape accuracy and process efficiency.

    It's also worth noting that transition areas around shape intersections can provide natural support loops and these should be used to constrain any smoothing errors caused by differences between the geometry of the intersecting shapes.


    Below are three examples to compare the difference made by increasing the geometry density of intersecting curve shapes.

    Here's the small cylinder intersecting a 12 segment cylinder.


    Here's the small cylinder intersecting a 24 segment cylinder.


    Here's the small cylinder intersecting a 32 segment cylinder.



    After a certain point it becomes necessary to also increase the number of segments in the smaller cylinder to match the segment increase in the larger cylinder. Failure to match the number of segments on curved intersections will generally just move the smoothing issues from one shape to the other.

    Intersecting shapes with flat segments and sharp corners tend to be more demanding than circular shapes with soft corners. This circles back to the idea that it's possible to adjust the underlying geometry to compensate for minor smoothing issues. With cylinder to cylinder intersections it's often possible to average out any minor smoothing issues over a wider area without needing to increase the density of the starting mesh.

    As shown in the example below: there is a minor decrease in overall shape accuracy but the tension in both curves helps pull everything back into shape. The accuracy penalty for making minor shape adjustments on this type of geometry tends to be much less than when it's used on intersecting shapes with strong linear features.


    Becoming proficient at subdivision modeling does require practice but it also requires learning to see the underlying shapes and being able to extrapolate new solutions from existing knowledge. Limited experience can be a barrier that makes it difficult to take relevant examples and apply them to new problems. The basic principles will work on most shapes. Start by learning the fundamentals and build up from there.

    Below are a couple examples that show how the topology for a through hole is essentially the same as the topology for a boss. The only difference is one pushes inwards and the other pokes outwards. There's very little magic. It's mostly about repetition and rhyme.


    There will be challenges along the way but it's important to keep going and to study how other artists have resolved similar problems. Sometimes it seems like it can't be that easy [because sometimes it's not] but it just takes thoughtful practice and evaluation. Experimenting with different topology solutions and comparing the results can help identify winning modeling strategies.



    Here's another example: surely an oblong through hole must have different topology than what's needed for a circular through hole! Nope. It's pretty much the same, just split the circular through hole and fill the gap. Fundamentally it's mostly the same or variations on a theme because the underlying process (subdivision smoothing) is relatively consistent. It's all repeatable which is why it's important to research, experiment, evaluate and repeat.



    Below is another example that's a variation on the theme: does it need more geometry or does it just need better geometry?



    Well for starters it's overly complicated and poorly organized. But remember that there's cases where more geometry is a brute force solution to problems with the underlying shapes. Effective subdivision modeling is about using the minimum amount of geometry required to accurately hold the shapes.

    Here's an example of how a better result can be achieved with the same amount of starting geometry. Start by defining ALL of the basic shapes, use the existing geometry as support for shape intersections and place the intersecting geometry on or between existing segments as appropriate.



    Below is a comparison of the same shape with different mesh densities. Under optimal conditions there's very little practical difference between all of these. With that said, there will be certain situations where increasing the sharpness of the edge width by adding additional support loops will require more geometry to support the increased edge sharpness.

    Again it all comes down to what the project requires and knowing how and when to make tradeoffs between accuracy and efficiency. This is why it's important to block out everything beforehand and evaluate how accurate things need to be based on player view distance.



    How about this shape? More geometry or better geometry?



    Parallel edge propagation is making an absolute mesh of the underlying shapes. This is a case where increasing the geometry density will help and so will adjusting the placement of the cylinder's edge segments so the intersecting geometry is properly supported by the existing geometry.

    Here's one example of how these smoothing issues could be resolved without throwing a lot of geometry at the problem. Start by blocking out the shapes then match up the segments of the intersecting shapes. Try to preserve as much of the existing geometry as possible and use it as support for the shape intersections.

    The initial result is much cleaner than the previous version but it does have some unwanted deformation near the peak of the circle at the end of the slot. This minor imperfection can be resolved by shrinking the height of the circle's center vertex at the surface of the curve.



    As before, increasing the geometry density does tend to increase the overall accuracy of the shapes. The question is how much has this increase improved the results and how much has it complicated the underlying mesh? The answer depends on the quality goals for the project.


    It's also worth noting that, when increasing the geometry, it doesn't always have to be an even number and the segments don't always have to align perfectly. It's very common for artists to stick with common even numbers like 8, 16, 32, 64 or 12, 24, 48, etc. All of these numbers are easy to work with and visualize but sometimes it's better to use less conventional numbers like 10, 14, 20, etc. Sometimes it's more important to line things up with the intersecting geometry than it is to have a common number of segments.
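
    As a rough illustration of the segment matching idea, here's a small helper sketch in plain Python (not tied to any particular modeling package; the function name and defaults are made up for this example). It estimates a cylinder segment count from the width of an intersecting cut so the cut spans a whole number of segments:

        import math

        def cylinder_segments_for_cut(radius, cut_width, segments_per_cut=2, min_segments=8):
            # Angle the cut subtends at the cylinder's axis.
            theta = 2.0 * math.asin(min(1.0, cut_width / (2.0 * radius)))
            # Segment count that lets the cut span a whole number of segments.
            segments = segments_per_cut * (2.0 * math.pi) / theta
            return max(min_segments, round(segments))

        # A 6 unit wide slot on a 20 unit radius cylinder:
        # cylinder_segments_for_cut(20.0, 6.0) -> 42, not a "common" count

    Treat the output as a starting point rather than a rule; the final count still needs to be sanity checked against the rest of the shapes on the object.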
  • Jackskullcrack
    Offline / Send Message
    Jackskullcrack polycounter lvl 6
    I'd very much like to see you do some tutorial videos or perhaps a live stream at some point so we can see your process in real-time.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision [Saturday] Sketch: Camera mount baseplate.

    Last weekend's hard surface sketch continues with the camera rig theme. Here's a subdivision model of my favorite lightweight camera mount for standard 15mm rods.

    Another modifier heavy workflow where the basic shape is created with a series of boolean operations on a mirrored base mesh. Most of the support loops are generated by modifiers that are controlled by vertex groupings and edge weights. The waffle pattern on the rubber pads is a floater that's generated by a simple tile pattern and trimmed with a boolean operation.

    Some parts were reused from the previous project and everything is built to scale in real world units.

    Top row: shaded subdivision preview.
    Middle row: wire frame subdivision preview.
    Bottom row: base meshes with the mirror and bevel / chamfer modifiers turned off.



    When it comes to modifiers and non-destructive modeling there's a lot of different ways to approach creating the shapes but it's always important to evaluate the overall practical efficiency of the entire process.

    Some artists recommend only using unedited primitives with boolean operations. The argument in support of this strategy is that everything can be walked back to the original starting primitive and avoiding mesh edits will maximize the non-destructive editability of the surface features. While this type of modeling strategy can make sense on projects where large, continuous changes can be expected, there's often a significant efficiency penalty to brute forcing shapes with unedited primitives and over complicating the boolean stack.

    The idea that everything needs to roll back to unedited primitives to mimic CAD processes doesn't really hold water because drawing flat dimensional sketches and extruding basic shapes is an actual process used in CAD workflows.

    In most cases it will be more efficient to combine modeling operations wherever possible by editing the underlying shapes that drive the boolean operations. Keeping these mesh shapes simple and controlling surface features with other modifiers reduces complexity without sacrificing editability.

    Here's the basic modeling process for the camera mount's metal base plate. Start with the overall dimensions and use snap to grid to keep all of the mesh components in line when blocking out the basic shapes. Use bevel / chamfer modifiers to add chamfered and rounded corners that can be adjusted at any time. Both edited and unedited primitives can be used to define key surface features. Sketch out flat outlines of the complex pocket features and use modifiers to solidify the shape and add corner radii.
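
    For anyone following along in Blender, a minimal Python sketch of the flat sketch to solid feature step might look something like this (the object name and dimensions are assumptions, and the same stack can be built by hand in the modifier panel):

        import bpy

        outline = bpy.data.objects["pocket_outline"]  # assumed flat n-gon sketch of the pocket

        # Round the sketch corners first so the radii stay adjustable.
        corner = outline.modifiers.new(name="CornerRadius", type='BEVEL')
        corner.affect = 'VERTICES'  # vertex bevel rounds the flat corners (Blender 2.90+)
        corner.width = 0.002
        corner.segments = 4

        # Then give the sketch thickness so it can drive a boolean operation.
        solid = outline.modifiers.new(name="Thickness", type='SOLIDIFY')
        solid.thickness = 0.004  # working to real world scale, in meters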

    Most boolean operations will leave behind some unnecessary geometry. Once all of the major features are present and the model is approved then it should be safe to apply the boolean operations and clean up the mesh by triangulating and running limited dissolve.
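
    In Blender those cleanup steps map to two stock Edit Mode operators. Here's a hedged sketch (the angle limit is a judgment call, not a magic number):

        import bpy

        # Run in Edit Mode with everything selected.
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.mesh.quads_convert_to_tris()               # triangulate away boolean leftovers
        bpy.ops.mesh.dissolve_limited(angle_limit=0.0175)  # ~1 degree: merge coplanar tris into clean faces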

    This editable base mesh can then be used to create both the high poly and low poly models. Add a bevel / chamfer modifier to preview the edge width before cutting in surface features that are close to the outside of the shapes. A mirror modifier is used to simplify the base mesh and bevel / chamfer modifiers, controlled by face angles, edge weights and vertex groups, are used to automatically generate support loops that can be adjusted at any time.
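
    Here's a minimal sketch of what that modifier stack could look like in Blender Python (the object name, axis and widths are assumptions; the edge weights themselves are assigned in Edit Mode):

        import bpy

        plate = bpy.data.objects["baseplate"]  # assumed base mesh object

        mirror = plate.modifiers.new(name="Mirror", type='MIRROR')
        mirror.use_axis = (True, False, False)
        mirror.use_clip = True  # keep center verts welded to the mirror plane

        loops = plate.modifiers.new(name="SupportLoops", type='BEVEL')
        loops.limit_method = 'WEIGHT'  # only edges with a bevel weight get support loops
        loops.width = 0.0005
        loops.segments = 2
        loops.profile = 1.0  # squared profile so the loops hug the original edge

        preview = plate.modifiers.new(name="Preview", type='SUBSURF')
        preview.levels = 2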


    Here's the modeling process for the rubber pads on the top of the base plate. Start with the basic shapes and use bevel / chamfer modifiers to generate the corner radii and support loops.

    The waffle pattern is a simple inset tile that's repeated across the top of the surface and trimmed with a boolean operation. This surface detail is floating geometry [floater] so it's basically a one sided mesh that sits above the base mesh. More often than not it will be more efficient to add minor surface details like this using texturing tools.
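
    A sketch of how the tiled floater could be assembled with modifiers (object names and repeat counts are placeholders): one inset tile repeated with array modifiers, then trimmed back to the pad with a boolean intersect.

        import bpy

        tile = bpy.data.objects["waffle_tile"]  # single inset tile, one sided
        pad = bpy.data.objects["rubber_pad"]    # closed mesh used as the trim volume

        for axis, count in ((0, 12), (1, 8)):   # repeat along X and Y
            arr = tile.modifiers.new(name="Array" + "XY"[axis], type='ARRAY')
            arr.count = count
            offset = [0.0, 0.0, 0.0]
            offset[axis] = 1.0
            arr.relative_offset_displace = offset

        trim = tile.modifiers.new(name="Trim", type='BOOLEAN')
        trim.operation = 'INTERSECT'  # keep only the tiles inside the pad outline
        trim.object = pad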



    There's a lot of basic shapes and a few complex intersections with compound curves but the base mesh is fairly simple. All of the support loops are generated by un-applied bevel / chamfer modifiers and remain fully editable.

    The simplicity of the base mesh and use of modifiers means that making minor changes to the shape won't be an issue. Any major changes can be made by going back to the block out model with the active boolean operations.

    Additional standardized parts can be pulled from previous projects to save time. Here the flag head clamp screws are reused from the rosette clamp model.



    Recap:
    Keep the base mesh simple and use modifiers whenever possible to avoid repeating manual editing operations. Avoid over complicating non-destructive workflows and use destructive editing where it makes sense. Work in real world units and reuse components from other projects.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision modeling: corner topology edge flow.

    Topology layouts for sharp, angled corners are fairly simple but often overlooked. This is why it's important to build a solid understanding of how different corner topology layouts impact edge flow, mesh complexity and smoothing behavior.

    Most corner topology can generally be described as being either boxed or mitered. Boxed corner topology tends to produce a linear edge flow where adjacent support loops intersect and run off the end of the shape. Mitered corner topology tends to produce a continuous edge flow that follows the form of the underlying shape.

    Both are viable but it's important to remember that efficient subdivision modeling is about cleanly directing edge flow while using the minimum amount of geometry required to accurately represent the shapes. The overall goal should be to generate an accurate base mesh that's easy to edit and relies on the subdivision to generate smooth shape transitions and polished edges.

    Here's an example that compares the underlying topology layouts and edge flow for both individual loops and support loops. The top row shows how boxed corner topology creates linear edge flow channels that direct the loops across the mesh until they intersect each other or run off into adjacent shapes. Compare this with the bottom row where the mitered corner topology creates continuous edge flow channels that direct the loops around the mesh without generating any extraneous geometry.



    Enabling subdivision preview shows how the smoothing behavior deforms the underlying mesh and highlights where additional edge loops are needed to support the corners.

    The linear flow of the boxed corner topology does provide some additional support for the outside corners but this extraneous geometry could be problematic if it runs out into other topology layouts on adjacent shapes. Mitered corner topology layouts tend to use less geometry and in most cases this will make it a bit easier to work with.

    When comparing the results of the two topology layouts: there's often no significant perceptible difference in the overall quality of the shapes but there does tend to be a difference in overall geometry efficiency.


    Another important thing to consider is whether or not the support loops need to flow around the shape to support additional details. The continuous edge flow of mitered corner topology tends to make it easier to select and adjust the loops that support the shapes.



    Mitered corner topology also tends to make it easier to make consistent adjustments to adjacent corners without affecting adjacent surfaces. With that said, there are certain edge cases where the linear flow of boxed corner topology is needed to support additional shape intersections and surface features.

    Which topology layout makes the most sense will depend entirely on how the support loops interact with the surrounding shapes and whether or not the mesh needs to be edited again in the future.



    Choosing to use mitered corner topology isn't always that obvious. Certain types of shape intersections do require grid topology or intersecting loops to support the shapes but they can often be further optimized by using mitered corners on the perimeter support loops. The first and second rows in the example below show how support loop topology can be optimized to produce better results with less geometry.

    There's also some specific edge cases, similar to what's shown in the third row, where secondary surface features require additional support and support loops need to turn corners or flow around complex shape intersections. In cases like this the mitered corner topology will help direct the edge flow without creating extraneous edge loops that flow off into adjacent shapes.



    Recap:
    Optimizing the topology layout and edge flow will help reduce extraneous edge loops that tend to increase mesh complexity without providing significant improvements to the overall visual quality of the final mesh.

    Effective subdivision modeling is about balancing accuracy and efficiency so it's worth taking the time to block out all of the major shapes and figure out the most effective topology routing before jumping in and investing a significant amount of time adding support loops.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision modeling: variations on a theme and observation as a skill.

    Building a solid foundation with relevant technical knowledge is essential but it's also important to develop fundamental art skills. One thing new artists tend to overlook is just how important observational skills are.

    Prioritizing the accumulation of technical knowledge over the development of observational skills tends to lead to situations where the artist can only see exactly what they're looking for. This often means a lot of potential solutions are ignored because they don't look EXACTLY like the problem at hand. The result tends to be a lot of frustration and wasted time.

    Below are some examples of common topology layouts that cause subdivision smoothing artifacts. Almost all of these examples share the same problem and therefore they almost all share the same solution. This is where observational skills, like the ability to identify and compare similarities and differences, become important to solving the problem.

    Take some time to compare the topology layouts, smoothing artifacts and smoothing behavior (edge tension) around the shape intersections. What attributes are shared across these smoothing artifacts?


    Smoothing artifacts around rectangular shape intersections on curved surfaces are a common pain point and a good place to start. Extruding the geometry directly off of the cylinder wall geometry tends to generate stretching artifacts along the side of the shape intersection. Adding parallel support loops to the existing cylinder wall geometry changes the segment spacing and introduces its own kind of pinching artifact.

    The generally accepted solution to this problem is to use the existing geometry as support by adjusting the segment count of the cylinder so the intersecting geometry lands between the existing edge segments that make up the cylinder wall and routing the rest of the topology into the adjacent edge segments as evenly as possible. Keeping the segment spacing relatively even and confining the topology changes to small areas within the shape transitions helps minimize the visibility of any smoothing issues.


    The topology routing on the curved surface is often the same for bosses, pockets and slots. Once the basic principles of support loop behavior on curved surfaces are understood the same topology strategy can be used to resolve the smoothing issues on similar shapes.



    This same basic topology strategy can be applied to almost all of the previous shapes to resolve most of the smoothing issues. The key here is to research the correct technical information and use observational skills to figure out how to apply this information to similar problems.


    Building technical knowledge is great but it's not a substitute for building observational skills and putting in the work to experiment with different solutions to come up with an answer that works for the project.

    A large part of art is problem solving. Relying on rote memorization of shapes or getting other artists to solve the problems isn't a sustainable long term solution or even a shortcut to becoming a better artist.

    Self reflection isn't always fun but sometimes it's worth looking at the results of a piece and asking: Is this a technical knowledge issue or is this an art fundamentals issue? If it's a knowledge issue then research different ways of doing things, make some samples and compare the results. If it's a fundamental skill issue then work on exercising that particular skill over a longer period of time.
  • ant1fact
    Offline / Send Message
    ant1fact polycounter lvl 9
    Hi @FrankPolygon
    Would you consider selling some kind of 3d reference package of these subd examples that you have made over time? as .blend or .obj? Thanks
  • wirrexx
    Offline / Send Message
    wirrexx quad damage

    Subdivision [Saturday] Sketch: Camera rod clamp and rosette.

    [Trying a new content format with more images and less text.]

    Here's a basic overview of my hard surface subdivision modeling process. Most of the support loops are added with a bevel / chamfer modifier and can be adjusted at any time.

    The support loops around some of the smaller shapes and fine details on some of the parts required limited manual adjustments. In those cases the bevel / chamfer modifier was applied and the mesh was adjusted.

    Only the body of the clamp and the head of the flag screw have manual support loop edits. All key features are modeled to scale in real world units.

    Top row: shaded subdivision preview.
    Middle row: wire frame subdivision preview.
    Bottom row: base meshes with the bevel / chamfer modifier support loops turned off.



    Here's the modeling process for the rosette and small screws. Start with primitive shapes and add details with inset operations, using real world scale and incremental snapping.

    The teeth on the rosette were created by making a single tooth and spin duplicating it in a circle. Screw holes are added with simple boolean operations.

    Both the rosette and the screw have support loops generated by active (editable) modifiers.
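
    The spin duplicate step could be scripted like this in Blender Python (the tooth count and pivot are assumptions; interactively it's just as quick with duplicate and rotate or the Spin tool):

        import bpy
        import math

        teeth = 24  # assumed tooth count for the rosette

        # In Edit Mode with one tooth selected: duplicate it and rotate each
        # copy one step further around the rosette axis.
        for i in range(teeth - 1):
            bpy.ops.mesh.duplicate()
            bpy.ops.transform.rotate(value=2.0 * math.pi / teeth, orient_axis='Z',
                                     center_override=(0.0, 0.0, 0.0))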



    Here's the modeling process for the body of the clamp. It starts with primitive shapes and edge loop sketches made to real world scale.

    Connecting geometry is created with single click automated bridge and fill operations. Minor details are added with inset operations and boolean subtractions. Support loops are generated with bevel / chamfer and inset operations.

    Minor topology adjustments are made after automated edge loop placement. Triangles and n-gons are used to simplify the modeling process. Since they aren't causing any major smoothing issues there's marginal benefit to resolving the topology to all quads.




    hey Frank, love these!!
    One question about the first picture with the splines (maybe nurbs?): do you use it as a guide to fill in the missing polygons by hand or do you close it automatically? If so, how?

    Keep up the great work!

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter
    @Jackskullcrack Thank you for the feedback. Workflow related video content is something I've done in the past and will probably try adding again sometime in the future. That type of content tends to require a significant level of commitment and it's something I'd like to do in a way that provides viewers a meaningful way to interact with the discussion and ask questions but in the past moderating comments has been an issue. Video platforms tend to have a different audience so it's very likely any future video content will just be unlisted and posted here.

    @ant1fact Thank you for the feedback. I'm currently reworking my Gumroad content and topology samples are something I'll be adding there. Once the content refresh is complete I'll add that page to the rest of my links.

    @wirrexx Thank you. The highlighted lines in the first picture are just raw verts and edges generated by flat primitives. They're used in a way that mimics the sketch feature in Fusion. This makes it easier to figure out where everything needs to be before committing to certain shapes.

    Fill in geometry is automatically generated by using face fill selection, grid fill or bridge operations. All of the face geometry is automatically generated from the n-gons using triangulation and quadrangulation operations. Every operation is tied to a hotkey or macro button so the whole process only takes a couple keystrokes and selections.
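
    For reference, these are most likely the stock Blender Edit Mode operators being described (no add-ons required); each one runs on the current selection:

        import bpy

        bpy.ops.mesh.edge_face_add()          # face fill from a closed edge selection
        bpy.ops.mesh.fill_grid()              # grid fill between opposing edge runs
        bpy.ops.mesh.bridge_edge_loops()      # bridge between separate edge loops

        # The pair used to resolve the large corner n-gon:
        bpy.ops.mesh.quads_convert_to_tris()  # triangulate
        bpy.ops.mesh.tris_convert_to_quads()  # quadrangulate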

    Here's what these steps look like, starting from the highlighted edges in the previous image. Order of operations is important to reduce the number of clicks but there's no hard and fast way to go about filling everything in. The n-gons on the sides could really be left in until after the support loops are added with modifiers but the compound rounding on the corner does need to be filled by triangulating and quadrangulating the large n-gon.


    The actual corner is rolled over with a belt sander in a couple of different directions so it's not perfectly spherical and the shape was a good excuse to use triangles and n-gons in an unexpected way. If the shape was perfectly spherical the geometry would be slightly different.

    This is something that will be covered a bit more in a future post about making revisions to existing projects.
  • wirrexx
    Offline / Send Message
    wirrexx quad damage
    You’re awesome thank you!
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision Sketch: camera rig - detail iterations and streamlining geometry.

    Project constraints can introduce specific issues that need to be solved with effective resource management strategies. Sometimes it makes sense to start off simplifying the mesh by skipping over minor details in low visibility areas. Resources [time, geometry, texture density, etc.] saved can then be reallocated to the more visually prominent areas where fine details are more likely to be noticed. Any left over project time can then be used to go back and make incremental improvements to the model in a series of detail and polish passes.

    The underside of the mount was simplified on the first iteration so a couple of the weight reduction pockets were missing. Leaving out minor details like this can speed up the modeling process and produce minor performance improvements for the in-game model. It's unlikely that players would notice these missing details since the underside of the mount isn't something that will be viewed regularly. Nothing major was left out but the bottom of the mount did feel a bit too monolithic without them so they were added during a secondary detail pass.

    Reusing existing content can be a great way to maximize process efficiency but it can also lead to over repetition of certain shapes that can become visually boring. A good example of this would be the straight tabbed heads on the clamp screws. These are a common item across different brands of camera rod accessories but they're also pretty bland and lack visual appeal when paired with the shapes on the camera mount.

    Letting things settle for a while between passes can help reset expectations and make it easier to identify things that feel incomplete or look out of place. Changing the clamp screw design to a slightly shorter version with rounded convex tabs helped increase the visual interest and provided another opportunity to streamline the base geometry by using the modifier workflow on more mesh components. Below is what the updated high-poly looks like after the detail pass.


    The first iteration of the clamp screw had a very basic shape and it was easy enough to create the entire cage mesh with just extrude, inset and loop cut operations. Although the modeling process was fairly quick it did lock in the width of the support loops which reduces the overall editability of the basic shapes. Some of the mesh components were reusable but the key shapes were rebuilt with simplified geometry that uses modifiers to streamline the modeling process by generating adjustable support loops.

    Below is a topology comparison between the first and second iteration of the clamp screws.


    Here's what the basic modeling process looks like for the second version of the clamp screws. The block out starts with a top down sketch of the basic shapes and topology. This sketch is extruded and merged with mesh components reused from the first version and unnecessary geometry is removed using dissolve operations. Additional surface details are added and support loops are generated by a bevel / chamfer modifier controlled by edge weights. Not only is the new shape much more interesting but the base geometry is also easier to work with.


    There's been a few questions about the shape and topology of the corners on the rosette clamp. The short answer is there's a lot of variations on this type of clamp and the overall shape is often determined by how it was manufactured. A large part of the decision about which shape to go with just came down to what looked interesting and what could have the widest variation in surface finish.

    Round shapes are fairly easy to produce with specialized equipment but this means the surface finish on that shape would tend to be quite good which would be somewhat uninteresting. Going with blended shapes opens up more possibilities for adding chatter and directional sanding marks in the normal map.

    When it comes to the topology: making samples is an underrated part of the art process but it's a great way to explore different design ideas before committing to a specific direction. A pure spherical shape (left) has a more traditional topology layout but tends to have a very soft and bulbous appearance. An angled secondary radius (right) also has a fairly standard layout with sharper edges but the shape is visually too simplistic. The blended corner (middle) combines the compound curves of the spherical shape with the relative flatness and simplicity of the angled radius and provides a more interesting topology layout that subverts expectations.



    Recap:
    Focusing on visible details first then iterating on less prominent details can ensure that critical visual elements are properly represented at the minimum viable stage while also leaving open the possibility of making incremental improvements. This strategy can be expanded or contracted as needed to meet overall time budgets.

    Reusing existing components can speed up the modeling process but it's also important to have reasonable levels of shape variance to help maintain visual interest across the entire project. Sometimes it helps to let parts of a project sit for a little bit, then come back with fresh eyes and make the necessary adjustments.

    Creating samples is an important part of the art process and provides a cost effective way of exploring alternative solutions to design and technical issues without having to commit a significant amount of resources.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Art fundamentals: test samples and process validation.

    Creating test samples, to explore different workflows or approaches to solving a specific problem, is something that takes a bit of extra time but can be a very powerful tool for validating ideas without having to commit to taking a specific path.

    It's tempting to skip over this part of the creative process in the hope that it will speed up the completion of the project. The problem is that skipping over this part of the process often means placing artificial limitations on knowledge of both potential problems and solutions. This often means wading into a project with a heavier reliance on assumptions, intuition and improvisation.

    There's absolutely nothing wrong with making assumptions. It's often a safe shortcut around things that have already been encountered. Likewise both intuition and improvisation are great fundamental art skills to develop. The problem is that assumptions can be based on incorrect or incomplete information, which tends to lead intuition astray; the result is often a lot of wasted time re-working problems caused by over improvisation.

    Another thing to consider is that sometimes the work piece can become so jumbled that it's just going to be easier and faster to take the lessons learned from the current iteration and start over. It can be painful to discard work but there's diminishing returns and a point of no return.

    One of the biggest time sinks artists can fall into is trying to force an overly complex or deformed piece of work into the proper shape. Planning ahead, blocking out the major forms, adding details incrementally and saving revision files all help provide fallbacks to work forward from if something goes wrong.

    Anticipating and working through potential problem areas on small scale samples can be helpful because it allows for direct comparison of different solutions. Resolving something once using intuition and improvisation can be down to pure luck. Doing things a couple of times and comparing the results provides insight into exactly what works and what doesn't. That's where a lot of the real learning happens.

    Here's an example of why it's important to create samples and analyze different solutions before making blanket assumptions about specific workflows and topology layout strategies.

    Below is a relatively basic shape. It's just a blind oblong hole in a rectangular cuboid. The modeling process is relatively simple and the topology layout is fairly conventional. Nothing really amazing or ground breaking.

    The process starts with a boolean subtraction of the hole then the center points on each side of the semi circle are connected to the adjacent points on the rectangle. Additional geometry is cut in to support the hole's geometry then perpendicular support loops are added to prevent the rounded areas from deforming the sidewalls. Additional support loops are added around the shapes so they keep their shape when subdivision is applied.
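
    As a concrete starting point, the boolean subtraction step might look like this in Blender Python (the object names are placeholders; in the UI it's a single boolean modifier):

        import bpy

        block = bpy.data.objects["cuboid"]
        cutter = bpy.data.objects["oblong_cutter"]

        cut = block.modifiers.new(name="HoleCut", type='BOOLEAN')
        cut.operation = 'DIFFERENCE'
        cut.object = cutter

        # Apply the cut so the resulting topology can be connected up by hand.
        bpy.context.view_layer.objects.active = block
        bpy.ops.object.modifier_apply(modifier=cut.name)
        cutter.hide_set(True)  # keep the cutter around for later revisions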



    The results are as expected. There's no major smoothing errors and the entire process is fairly intuitive.



    But what happens if the circle geometry requires additional segments? The subdivided result doesn't look much better but, since these changes were based on the assumption that all of the geometry elements must have their own support loops and the mesh must remain all quads, the overall mesh complexity has increased.


    Projecting this behavior forward: what happens if there's adjacent geometry that needs to be connected to the rectangular cuboid?

    All of these extraneous support loops will tend to increase the amount of time and effort required to get the topology to flow smoothly around the rest of the shapes. It's also quite likely that the addition of multiple complex shape intersections with this type of topology layout strategy would be quite difficult to improvise with while also keeping a relatively clean and well organized topology layout.

    So the next logical step would be to start challenging some of the underlying assumptions and looking for alternate workflows that reduce the amount of stray geometry and provide some level of automated support loop generation. Part of this process may involve searching out examples of other workflows, similar to what's shown below.

    In this example the basic shapes are generated using a similar modeling process but the base mesh is much simpler and the support loops are generated with a bevel / chamfer modifier that remains editable. The simple mesh and automated support loop placement makes it easier to add additional shapes and produces minimum viable results that are comparable to the previous examples.



    Testing this workflow should be a fairly straightforward process but skipping over critical details or trying to apply previous assumptions may produce different results. So what went wrong here?

    The same boolean subtraction process is used to generate the basic shapes and the support loops around the shapes are generated by a modifier. Yet the results are completely unusable because the mesh collapses in on itself. When this sort of thing happens there can be a strong temptation to go back to what works and try to apply the previous assumptions that generated a usable result.


    This anchoring bias can make it difficult to troubleshoot the problem because, even if it takes longer and is less flexible, there's already a workable solution. This can lead to even more assumptions which tend to be based on incomplete or faulty information.

    The following is an example of how anchoring bias and faulty assumptions can mask the underlying problems and generate even more faulty assumptions that can create circular reasoning loops: "The sample above is pretty close to the previous example so it should work. Since this latest sample doesn't work and the first sample with all of the extraneous loops works that must mean the only way to make this shape is to manually add all of the support loops and the extraneous geometry. Clearly, based on these samples and these logical assumptions this simplified mesh workflow with automated support loops is completely flawed and unusable. On top of that there's now conclusive proof from the first example that having lots of support loops and an all quad mesh is the only way to create usable subdivision meshes."

    Again, the paragraph above is just an example of how shallow observations and assumptions based on incomplete information can create a false picture of what's actually happening. When results don't match expectations and there's sufficient documentation to demonstrate a process works then it's important to go back through the sample and compare technical elements like order of operations, topology layout, segment matching, etc.

    While the basic modeling process was the same and the basic mesh was close it wasn't exactly the same as the reference mesh above it. In this case the problem wasn't the workflow but the placement of the edges that connect the blind oblong hole to the rectangular face. Adjusting where the edges connect the shapes solved the issue with the mesh collapsing in on itself.

    Small details like this may seem unimportant at first glance but when learning about new processes it's important to pay attention to the details and figure out how things work. Sometimes this kind of stuff can slip through the cracks due to inattentiveness or hurry-up-itis. When results don't meet expectations it's important to walk back through the process to try and identify what the root cause is and troubleshoot a solution.



    Recap:
    Assumptions, intuition and improvisation are all excellent tools but come with their own blind spots that need to be filled in by testing. When creating test samples it's important to compare the inputs and the results while also double checking whether or not assumptions are correct and complete. Creating test samples is an important part of the creative process and is a tool for learning and solving problems.

    Test samples aren't just for technical stuff either. They can also be used to compare the effectiveness of different design elements and design strategies which can be used to solve common design problems. The whole point of making test samples is to evaluate different ideas before committing a significant amount of resources to any one direction. This reduces the risk of any single failure and helps encourage exploration by making experimentation a low cost high reward exercise.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision modeling: block outs and incremental progression.

    Drawing is a creative process that most people encounter quite early on so it's often appealing to try and draw parallels between the initial rough sketch used to develop a finished drawing and the block out mesh used to develop a finished model.

    Taking this simple analogy at face value tends to overlook how sketches can be more than just a framework that guides the creation of the finished drawing. Sketching is also something that tends to happen throughout the entire drawing process. Sometimes it happens above the paper and sometimes it happens on the paper. This is repeated throughout the drawing process until more and more of the initial sketch is lost to incremental progression. The transitory nature of sketching, along with the rough sketches (test samples) actively discarded along the way, tends to suggest that block outs can also be treated as purely disposable elements.

    The idea that the block out is only used to establish proportions and then thrown away is problematic because: at best it undersells the importance of a process that helps solve shape and topology issues before they become major problems and at worst it reinforces the idea that the process is some kind of redundant work that only needs to be done to tick another box. This overly simplified view of things can make it tempting to skip over the block out process to save time.

    While block outs can be used like rough sketches, to establish forms, check proportions and test out different ideas, they can also have a more direct role in defining the quality of the finished product. Skipping over the block out or only using it during the very beginning of the modeling process often leads to situations where unnecessary complexity ends up causing problems that burn up more time than was initially saved.

    Blocking out the base [cage] mesh is important because it's a lot easier to get the shapes and basic topology flow right without all of the support loops and excess geometry in the way. Support loops and topology layouts are important but they won't really matter if the underlying shapes are so inaccurate that the final mesh doesn't look anything like the concepts or references.

    Creating a block out may not be the most exciting thing in the world but jumping into a model without having a clear idea of what the shapes are and how the topology will flow often results in creating an unnecessarily complex mesh. All of this added mesh complexity often makes it difficult to join shapes, route topology and (most importantly) make changes to the model.

    Here's an example of a typical modeling workflow where the block out process stops after the basic outline of the shape is created. Support loops are added right away and the rest of the model is created with a subdivision modifier active. A lot of time and effort goes into manually placing and straightening support loops to provide space for additional surface details. All of the details are added with basic modeling operations that preserve the underlying quad topology.

    The result is an all quad mesh that doesn't require any major cleanup but this approach also tends to generate a dense topology that locks in the shapes and makes the mesh difficult to edit. Shortening the block out process did save some time but it also means that if any major smoothing issues appear or if any major change requests are made then it's very likely the entire model will have to be reworked.



    Here's an example of a modeling workflow where the block out process extends well into the addition of minor secondary surface features. Editable modifiers make changing most of these surface details a non-issue but even if this wasn't the case the incremental progression of the block out makes it easy to roll back to a previous iteration and make changes without having to redo a lot of existing work. Most of the time and effort in this example is focused on the creation of accurate shape details to match the references. Only once the shapes are correct is any time spent managing support loops.

    That's not to say that support loop topology isn't important. It is. Instead the idea is that the underlying shapes will determine what the support loop topology needs to look like so it's more effective to manage the basic shapes first then the support loop topology second.

    Keeping the mesh relatively simple and progressively adding details before adding support loops (generated by modifiers and a couple of inset operations) means that any change requests can be handled before a significant level of mesh complexity makes changing the shapes a lot of extra work. Waiting to add additional support loops until after the block out model is approved reduces the risk of having to rework a significant portion of the model to accommodate even relatively minor changes.



    Both modeling processes produce usable results but there are subtle differences in overall surface quality. The model created with a limited block out phase suffers in a couple of places where the underlying topology had to be deformed to accommodate the serrations while also maintaining quad geometry. However this minor quality difference shouldn't be the primary focus. The major difference here is the amount of additional time and potential risk for rework that can accrue when skipping over or shortening time spent on the block out.

    It's often helpful to think of the block out as more of an extended process of establishing the shapes that make up the major forms while also developing the underlying geometry. Time spent resolving shape and topology issues early on in the modeling process is often paid back later when the mesh complexity starts to increase. Making major changes to a simple but accurate base mesh is much easier than trying to re-route a bunch of extraneous edge loops around additional shape intersections or surface details. This is why it's important to work through the block out phase and really nail down the major shapes before adding all of the support loops.



    For those interested in some additional reading on the topic: shape analysis is another important part of the block out process that's covered in this write up: https://polycount.com/discussion/comment/2745166/#Comment_2745166

    Recap:
    Avoid jumping into a subdivision model without doing some level of planning. Block outs can be used to experiment with different forms and topology layouts but they're also an important part of the modeling process. Often the final quality of the subdivided mesh is directly influenced by the quality of the shapes that were created while blocking out the base mesh. Incremental progression also helps reduce risk by creating regular opportunities to evaluate and change the model while also generating iterative states to fall back on if major changes are required.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision modeling: mesh complexity and shape accuracy.

    Modeling hard surface objects with more organic surface features, like stamped metal parts and castings, can be challenging because of the complex compound curves and the odd combination of hard and soft shape transitions. When working on these types of models it can be helpful to block out the major forms of the complex compound curves with a relatively simple cage mesh then apply the subdivision to lock in the shapes and provide support for smaller surface features.

    While this strategy of incrementally applying subdivision to a model with a lot of complex compound curves does tend to work well, one thing to avoid is applying too high of a subdivision level too early in the modeling process. Subdividing a mesh beyond the minimum amount of geometry needed to hold the shapes tends to introduce a lot of unnecessary mesh complexity, which can make it difficult to adjust the larger shapes without generating slight undulations in the subdivided mesh.
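
    In Blender terms, incrementally applying subdivision can be as simple as repeating a one level apply between rounds of shape adjustment. A hedged sketch:

        import bpy

        obj = bpy.context.active_object

        # One level at a time: apply, refine the new geometry, then repeat
        # only if the shapes actually demand more density.
        subsurf = obj.modifiers.new(name="LockShapes", type='SUBSURF')
        subsurf.levels = 1
        bpy.ops.object.modifier_apply(modifier=subsurf.name)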

    Over subdividing the mesh can also make it difficult to add larger secondary surface features without having to add or reroute support loops. This can be especially problematic when the additional support loops deform the underlying surface or significantly change the segment spacing of a curve on the larger shape. These extra support loops often generate undesired creasing or pinching elsewhere on the curve of the larger form and it can be extremely difficult to manually even out the curvature.

    Below is an example of what the modeling process can look like when too much subdivision is applied to the basic shape. Though most of the larger forms on the tank are correct, applying the subdivision too early has made it difficult to adjust the mesh in a way that effectively creates some of the secondary shapes where the sides of the tank curve inwards. This unnecessary mesh complexity also tends to actively discourage fully exploring some of the finer shape transitions and can push artists to try manually carving the details into the larger shape.

    While there are some cases where it's necessary to model sharper raised or depressed panel lines (welds, gaps, bead rolls, coining, embossing, etc.) most deep shapes formed in a single sheet of metal have very soft, gradual shape transitions. With traditional metal forming, extremely sharp or harsh shape transitions on pressed metal objects tend to indicate spots where multiple parts have been joined together. Too much harshness in the shape transitions can give the model a different appearance or surface read from the actual object.



    When modeling hard surface objects with soft compound curves it's important to analyze the reference images and establish realistic transitions between the primary and secondary surface forms. Gather good reference images that show the object from multiple angles and under different lighting conditions then study how the shapes flow into each other.

    Below is an example of what the modeling process can look like when less geometry is used and the mesh topology is kept relatively clean and simple. Start by blocking out the major forms and only apply the subdivision when it's absolutely necessary to support the shapes. Keeping things as simple as possible for as long as possible tends to make larger shapes easier to work with.

    Try working in some of the secondary shapes earlier in the block out process and carry that simplicity over into the base mesh. This should help reduce the overall complexity of the cage mesh. It can also be helpful to deform some of the edge loops on the base mesh so the underlying topology will fit around the shapes in the references. This will make it easier to add surface details without having to use a lot of complex boolean operations.

    The closer the shapes are to what's in the references, at the lowest possible subdivision level, the easier it should be to add additional details to the curved surfaces later in the process. Spend the time refining the shape of the curves at the lowest level that makes sense then work up from there while keeping things as simple as possible.



    Avoid the assumption that a dense or complex mesh is an accurate mesh. Mesh density does tend to increase the quality of a subdivided surface but the position and form of the underlying geometry are what really determine the overall accuracy of the mesh. Poorly constructed shapes that don't match the reference images or are inconsistent won't be improved by increasing the mesh density.

    In fact there's a lot of situations where arbitrarily increasing the mesh density can actually introduce more issues than it solves. Representing as much of the surface shapes as possible at the lower levels will also help keep things relatively simple. When to add additional subdivision levels or support loops will depend entirely on the complexity of the shapes / details that need to be added, as well as where it all falls in the overall modeling process.


    Recap:
    Take the time to gather quality reference images. Study these reference images and if necessary draw over them to highlight important shapes and shape transitions. Carry the block out process as far as it needs to go to accurately develop all of the shapes. Avoid over subdividing the mesh and trying to manually carve or batter surface features into shape. Start with a relatively simple mesh and use the appropriate amount of geometry to hold the shapes. Let the subdivision do the work of filling in the geometry and smoothing the shapes.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision Sketch: Camera rig grip + clamps and articulated ball arm.

    Nothing really special here, just an update on the modular camera rig parts.

    Here's some images of the progress from a couple of hours spent noodling around this weekend. Same modeling process as before with the modifier based support loops. Most of the base models used to control the subdivided high poly models will be used directly as low poly models when the modifiers are turned off. This can be a really efficient way to create both the high and low poly models at the same time. Some areas, like the knurling on the grips, will be baked down to simplified geometry on the low poly models. The low poly optimization and baking part of this workflow will be covered some time in the future.
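
    One way that high / low split could be scripted (the object name and modifier choices are assumptions): copy the base object and strip the smoothing modifiers from the copy so it becomes the low poly starting point.

        import bpy

        base = bpy.data.objects["grip_base"]  # base mesh with the full modifier stack

        low = base.copy()
        low.data = base.data.copy()
        low.name = base.name + "_low"
        bpy.context.collection.objects.link(low)

        # Drop the support loop and subdivision modifiers; keep mirror and the rest.
        for mod in list(low.modifiers):
            if mod.type in {'BEVEL', 'SUBSURF'}:
                low.modifiers.remove(mod)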

    Multipurpose L shaped rail clamp.



    Top of the vertical hand grip.



    Knurled rubber grip pattern. (First two steps are generated with modifiers. Final geometry generated by a single inset operation.)



    Screw in hand grip assembly. (All the major support loops are generated by editable modifiers.)



    Angled rosette to screw adapter for the hand grip. (Some of the modifier based support loops got a little too close so the edge weights need to be adjusted in a couple of spots.)



    Articulated arm segment. (No need to use a quad sphere since both poles are trimmed off to support circular details.)



    Articulated arm parts details.



    Arm joint tension knob. (Good example of segment matching and boolean cleanup. Done in tileable sections to reduce work.)



    Articulated arm with threaded ball heads.



    Example of how all the modular camera rail parts connect.



  • KebabEmperor
    Offline / Send Message
    KebabEmperor polycounter lvl 3
    What camera rig is this exactly? I want to give it a try as well. Looks like good practice. Couldn't find the name

  • pixelpatron
    Offline / Send Message
    pixelpatron polycounter
    Incredible stuff happening in here. Where were you when I needed you in 2015!?
  • dlz
    Offline / Send Message
    dlz polycounter lvl 4
    I'm glad I encountered that thread! All the processes are very illustrative, well described and I value a lot that you explain the benefits and the reason for each process.
  • pignaccia
    Offline / Send Message
    pignaccia polycounter lvl 11
    pure pornography! thx
  • jeanfree
    Nice images. Is there a tutorial?
  • HAWK12HT
    Offline / Send Message
    HAWK12HT polycounter lvl 13
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter
    @KebabEmperor This camera rig is based on a couple of different setups I've put together for commercial video projects. It's basically a collection of parts from different brands so it isn't available as a complete package. A lot of professional cinema camera accessories are built around the 15mm camera rod standard so there's a lot to choose from. Most of the parts are interchangeable and can be built up into whatever size rig is needed.

    @pixelpatron Thank you. Been lurking here since 2008 but never signed up until 2016 and didn't start posting until 2019... So yeah, I should have probably done all that a lot sooner. Really appreciate the support.

    @dlz Thank you. Good to hear the write ups have been informative.

    @pignaccia Thanks!

    @jeanfree Thank you. Glad you like the visual process breakdowns. This sketchbook is currently just write ups with breakdown images but there is some step by step content on my ArtStation blog. I've also been looking at the possibility of creating different types of video content which would feature a bit more step by step content.

    Overall there's just a lot of existing tutorial content that already covers the how to side of things. So, for game art, I generally like to focus more on the why side of things. It's certainly not outside the realm of possibility though so it just depends on how much demand there is and what people want to see.

    @HAWK12HT Thanks!
  • SnowInChina
    Offline / Send Message
    SnowInChina interpolator
    nice clean work
  • dopamine
    Offline / Send Message
    dopamine polycounter lvl 7
    This is the most precious thread which I read on this site.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter
    Thanks!

    Thank you, glad it's helpful. Just a small part of the institutional knowledge available here on Polycount.

    Sketchbook backlog

    Slowly working through the content backlog. There's the usual mix of subdivision modeling examples, along with some other write ups about general art processes, low poly optimization and creating materials. Posts will continue as time allows.

    Individualized feedback

    After some requests for focused, individualized feedback: I've started doing artistic and technical critiques. The goal of these critiques is to meet the artist where they're at in the process and provide actionable feedback that fits within the context of the project and artist's skill level.

    Depending on the nature of the critique the feedback will be provided as images, text or some combination of the two. Some things are fairly straightforward but doing a deep dive into the evaluations tends to take a significant amount of time. So there's a limited depth and number of [unpaid] reviews that I can handle at any one time.

    Currently only considering feedback requests [at any stage] on full cycle hard surface game art portfolio projects. So this requirement is less about what stage a project is at and more about the attempt to create a finished portfolio piece and the learning that takes place along the way.

    If this is something that interests you then message me on Polycount or on ArtStation.
    This is just something I'm testing out on a small scale. If it's sustainable then it will probably be spun off into its own thread in the appropriate section. As such the current focus of this sketchbook thread hasn't changed. So I will continue to post hard surface modeling and art process content here.


  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Art fundamentals: isolating variables and iterating to solve complex problems.

    The process of creating game art often requires solving complex artistic and technical problems. Common issues tend to be well documented, so searching for existing solutions to similar problems can be a good place to start.

    Asking for help can uncover the non-obvious information that makes it easier to understand certain concepts but waiting for someone else to solve complex problems isn't a sustainable long term strategy. It just takes way too much time. This is why [as an artist] it's important to develop strong, independent problem solving skills.

    Complex problems tend to be caused by a chain of failures. So it's often helpful to look at the results of a failed process and try to break it down into specific issues that can be solved independently from each other. This process of separating the different issues should make it much easier to search for existing solutions and uncover the relationship between unknown interactions.

    When trying to resolve multiple issues it's important to isolate the variables first then troubleshoot each one independently. Breaking down a complex problem like this will make it a lot easier to see exactly what effect each change has on the results. Which also makes it a lot easier to understand exactly what's causing each problem and the different options for resolving the individual issues.

    Since there's often more than one way to resolve a problem it's also important to test different solutions and compare the results to identify any trade-offs between efficiency and visual quality. Having several test samples for direct comparison will make it a lot easier to identify which optimization decisions produce the best results for the specific goals of a project.

    Optimizing low poly models and resolving artifacts in baked normal textures are two areas with a lot of overlapping elements that interact with each other. So this type of problem solving strategy can be useful for figuring out where to place hard edges and UV seams, as well as how much geometry is needed to maintain the desired visual quality level. Running a few different bake tests on a model can really highlight what works and what's important at a given view distance. Things that might not be obvious without being able to see a direct comparison.
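
    For anyone who wants to automate that kind of A/B testing, the loop below is a minimal sketch of the idea in Blender / Cycles Python. The object names, the cage extrusion value and the assumption that each low poly variant already has a material with an image target assigned are all placeholders, not a definitive setup.

        import bpy

        # Bake the same high poly onto several low poly variants so the
        # results can be compared side by side. Names are hypothetical.
        scene = bpy.context.scene
        scene.render.engine = 'CYCLES'
        scene.render.bake.use_selected_to_active = True
        scene.render.bake.cage_extrusion = 0.02  # tune per asset scale

        high = bpy.data.objects["pump_high"]
        for name in ("pump_low_64", "pump_low_48", "pump_low_32"):
            low = bpy.data.objects[name]
            bpy.ops.object.select_all(action='DESELECT')
            high.select_set(True)
            low.select_set(True)
            bpy.context.view_layer.objects.active = low
            # Writes into the image texture node selected in low's material.
            bpy.ops.object.bake(type='NORMAL')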

    Below are a few preview images and links back to previous write ups that show how to use this problem solving strategy for low poly optimization, hard edge placement and identifying common normal baking artifacts.







    https://polycount.com/discussion/comment/2759935#Comment_2759935



  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision modeling: mesh density vs. loop structure on curved surfaces.

    Sharpening corner details by adding support loops that interrupt the segment spacing of a curved surface will generally result in some kind of smoothing artifact. The example below demonstrates how adding the highlighted support loops to the existing edges of the cylinder produces an undesired faceting artifact.


    Increasing the number of segments in the curved surface, until the desired level of edge sharpness is reached, is a common fix for this type of smoothing artifact. This approach to modeling tends to produce evenly spaced topology layouts that provide consistent results when subdivision smoothing is applied.

    Though dense topology layouts tend to be predictable they can also be difficult to work with. Creating larger objects or sharper corners will often require adding a significant amount of geometry to the curve. Which can make it difficult to manage complex shape intersections that require a lot of manual clean up.

    The example below shows just how much geometry is required to produce a relatively soft corner on a shape that doesn't take up much of the screen. Segment counts from left to right are 24, 48, 96. If the object needed to fill the screen and have fairly sharp edges it's quite possible that the density of the starting geometry would have to double or even triple to maintain the current level of corner sharpness.
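
    The diminishing returns are easy to put numbers on: the maximum gap between a chord and the true circle is r * (1 - cos(pi / n)), so each doubling of the segment count only cuts the deviation to roughly a quarter of what it was. A quick throwaway check (the 50mm radius is just an arbitrary example):

        import math

        # Maximum deviation between a regular n-gon and the circle it
        # approximates: the chord midpoint sits r * (1 - cos(pi / n))
        # inside the true arc.
        def max_deviation(radius, segments):
            return radius * (1.0 - math.cos(math.pi / segments))

        for n in (24, 48, 96):
            print(n, round(max_deviation(50.0, n), 4))
        # 24 -> 0.4278, 48 -> 0.1071, 96 -> 0.0268 (mm at a 50mm radius)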



    Adding support loops that merge into the existing segments of the cylinder at the corners is another option for resolving this type of smoothing artifact. This type of topology layout can be used to produce sharper corners with less starting geometry. When the support loops are generated by a bevel / chamfer modifier the width of the edge highlight can be adjusted any time before the modifier is applied.

    Though this topology layout tends to be easier to work with and can be quite flexible when paired with support loops generated by modifiers, it does tend to have some minor artifacts that can be visible in the edge highlights around the outside corners. With sufficient starting geometry and minimal deformation of the underlying shapes these minor artifacts generally won't be visible at normal viewing distances.

    The example below shows how this type of topology layout can be created with basic modeling operations. This modeling process starts by figuring out the minimum number of segments required to create all of the basic shapes. A loop is added to the middle of the cylinder and the shape of the loop is adjusted to match the concept references. Running a bevel / chamfer operation produces two loops that are a set distance apart then the depth and taper are added with an inset operation. Support loops are added around the highlighted edges with a bevel / chamfer modifier and subdivision is applied.

    Bottom row shows the base mesh with the modifiers turned off, smoothing preview with the support loops added and shaded view of the subdivided model. The cylinder primitive used in this example has 48 segments.
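
    For reference, the whole non-destructive stack can be sketched in a few lines of Blender Python. The values here are placeholders and the bevel weights would still be marked on the highlighted edges in edit mode:

        import bpy

        # 48 segment cylinder with a weight limited bevel feeding subdivision.
        bpy.ops.mesh.primitive_cylinder_add(vertices=48, radius=1.0, depth=0.5)
        obj = bpy.context.active_object

        bevel = obj.modifiers.new(name="SupportLoops", type='BEVEL')
        bevel.limit_method = 'WEIGHT'  # only weighted edges get support loops
        bevel.width = 0.02             # edge highlight width, adjustable any time
        bevel.segments = 2

        subsurf = obj.modifiers.new(name="Subdivision", type='SUBSURF')
        subsurf.levels = 2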


    Below is a comparison of both topology layouts with similar levels of edge sharpness. It's worth noting that the structured edge loops can produce a sharper edge highlight with significantly less geometry and minimal mesh complexity. The quality trade off being the slightly truncated edge highlight near the outside corners of the shape.



    These same modeling and topology layout strategies can be applied to more complex shape intersections that are present on a lot of popular hard surface modeling exercises. With these kinds of shapes it's often a good idea to start by trying to find the minimum number of segments required to hold the shapes. Assuming the radial features are symmetrical: if there's 8 tapered tabs and 8 tapered cut outs that's a minimum of 16 segments to capture the basic shapes.

    Since these features are often curved along the shape of the underlying cylinder it makes sense to add a bit more geometry. Using two segments per feature and adding an additional segment for each of the tapered transitions brings the total minimum viable segment count up to 48, while using three segments per feature brings the segment count up to 64 and improves the smoothing results near the corners of the cut outs. From there the shapes themselves are pretty basic and can be created quickly with standard modeling operations.
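
    The segment math is simple enough to sanity check with a throwaway helper (purely illustrative, names hypothetical):

        # Each radial feature gets a few segments plus one per tapered transition.
        def radial_segment_count(features, segs_per_feature, segs_per_transition=1):
            return features * (segs_per_feature + segs_per_transition)

        print(radial_segment_count(16, 2))  # 48: minimum viable for 8 tabs + 8 cut outs
        print(radial_segment_count(16, 3))  # 64: smoother corners on the cut outs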

    Start by blocking out the basic shape of the cylinder. Add the loop cuts for the secondary surface features. Move a section of the new loops to create the basic outlines of the shapes and use inset to add depth and (edge taper) draft. Straighten out some of the edges and merge to remove the excess space between the shapes. Select the outline of the bottom lug and use inset again with the same settings to produce a consistent extrusion away from the main shape. Cut in another edge halfway through the truncated cone at the bottom and use join through to create the basic angle of the shape. Dissolve the edge along the bottom of the shape to produce the tapered transition into the truncated cone. Set up the edge bevel weights and add a bevel modifier to generate the support loops.
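
    Scripted, the last two steps of that process might look something like this (operator and property names per Blender 3.x, values arbitrary). With the defining edges selected in edit mode:

        import bpy

        # Mark the selected edges so the weight limited bevel picks them up.
        bpy.ops.transform.edge_bevelweight(value=1.0)

        obj = bpy.context.edit_object
        bevel = obj.modifiers.new(name="SupportLoops", type='BEVEL')
        bevel.limit_method = 'WEIGHT'
        bevel.width = 0.01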


    This same topology layout strategy still works at lower density levels but after a certain point there just isn't enough starting geometry to hold the shapes and the smoothing artifacts around some of the loops become more apparent. For this shape a 48 segment starting primitive is about the limit before the artifacts become visually intrusive.



    There's some misconceptions about just how much geometry is actually required to hold radially tiling shapes and often the threshold is much lower than expected. Increasing the geometry density is often a go-to recommendation because it's simple to explain and predictable. However it isn't always the most efficient way of doing things.

    Well structured support loops that produce minimal shape distortion are generally going to be more efficient but the trade-off is they sometimes produce very minor cut offs in the edge highlights. Another thing to consider: when trying to attach a single shape to a cylinder it's important to work between the segments, but with radial details that are unbroken it's sometimes easier to just run the support loops around the continuous shapes.

    In this case very few of the support loops actually displace or disrupt the vertical edges of the cylinder so it doesn't cause a smoothing issue. On this shape the reason this works is that most of the support loops are perpendicular or diagonal to the existing edges that make up the wall of the cylinder, so instead of displacing the geometry sideways it just moves it upwards, which doesn't affect the underlying curvature.

    Recap:
    Increasing the density of the starting primitives does tend to increase the sharpness of surface features on a curved surface. However this increase in geometry does have diminishing returns and there's a steep fall off in editability as the mesh density increases.

    Well structured support loops that merge into the existing geometry can be used to create sharper surface features with less starting geometry but there are some tradeoffs in visual quality when minor artifacts appear near the outside corners.

    Both approaches are viable so it really comes down to whether or not the tradeoffs are acceptable for a given project.
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Subdivision modeling: working through complex shape intersections.

    A fair number of questions about mesh topology [on curved surfaces] are focused on finding an answer that completely explains a very specific type of complex shape intersection. While this approach to trying to solve all of the topology problems at once can sometimes yield results, if someone else has solved the exact same problem, it may not be the best way to frame the question.

    Instead try to break down each part in the reference images into a series of simple shapes. This will make it easier to search for solutions that solve the problems with similar shape intersections. Which can then be applied to each individual shape that was identified earlier. Each of these simple solutions can then be layered on top of each other to build up the shape without having to spend hours trying to find or wait for someone to post a perfect solution to the whole problem.

    The example below shows how a series of simple shapes can be combined with basic subdivision modeling principles, like placing intersecting shapes between existing geometry, matching the number of edge segments in intersecting curves and blocking out the shapes to resolve topology flow issues before adding support loops, to create a very simple but effective base mesh.



    Once the basic shapes are blocked out and the segments are matched the shapes can be joined with boolean operations and cleaned up with limited dissolve. From there it's a simple process of cutting in additional support loops that snap to the existing vertices and adding support loops around the edges that define the shapes. Solving the majority of the topology flow issues at the lowest possible level helps keep the mesh relatively simple and can reduce the amount of work required to get a clean result.
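
    As a rough sketch of that join-and-clean step in Blender Python (object names are placeholders and this assumes the segments were already matched during block out):

        import bpy, math

        # Join the block out shapes with a boolean modifier, then apply it.
        base = bpy.data.objects["base"]
        cutter = bpy.data.objects["intersecting_shape"]
        boolean = base.modifiers.new(name="Join", type='BOOLEAN')
        boolean.operation = 'UNION'
        boolean.object = cutter

        bpy.context.view_layer.objects.active = base
        bpy.ops.object.modifier_apply(modifier="Join")

        # Limited dissolve clears the coplanar clutter the boolean leaves behind.
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.mesh.dissolve_limited(angle_limit=math.radians(1.0))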



    Once the basic topology problems are solved the same process can be used to iterate on the existing model by adding more complex shapes. The example below shows how changing the shape and position of the block out primitives will produce a more complex variation of the first model that more closely matches the reference images.



    Again. It's very important to solve most of the topology flow on the simple block out mesh. Doing this will help ensure that the intersecting shapes define the path of the supporting topology. Which makes it much easier to add support loops without generating smoothing artifacts. Starting with an appropriate number of segments for the size of the object and matching the segments of the intersecting shapes to the underlying geometry will also help prevent smoothing artifacts.



    Recap:
    When trying to solve topology issues around complex shape intersections: start the block out process by breaking the object down into simple shapes. Look for existing topology solutions for each individual shape. Solve the topology problems with each individual shape and layer each solution onto the previous one to create the complex shape intersection out of simple shapes.

    Use fundamental subdivision modeling techniques like placing intersecting shapes between existing edge segments on curves, matching the segments in intersecting shapes whenever possible and resolving topology flow problems before adding a lot of support loops.

    Here's a link to another write up that covers how to break down complex shapes into simple ones and block everything out before adding more complex support geometry. https://polycount.com/discussion/comment/2745166/#Comment_2745166
  • HAWK12HT
    Offline / Send Message
    HAWK12HT polycounter lvl 13

    Hey Frank, thank you so much for these tips on modelling.

    You mention getting this sort of shape in a few clicks and I am trying to wrap my head around this. The breakdown you posted earlier helped me get the shape manually but I had to do a lot of clicks and use a conform tool for that rounded corner. Can you explain more in depth? Cheers 


    My attempt: with tight loops around the curves I managed to get a result similar to yours but with pinching and plenty of manual work. 


  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    The surface of the softened corner (on the second iteration) is fairly smooth but it doesn't have a single, continuous curvature. This is because the shape is created by blending several curves together. It's a bit counterintuitive but the center of that softened corner is actually being flattened by the three shallow round overs that all converge around that point.

    Below is an example that shows how the basic shape would be made by grinding off the pointed corner and rolling the part along each of the three edges, that originate from the points of the starting triangle, to create the round overs.


    Each iteration of the model has a corner shape that's based on a different machining process. The image below shows the basic topology and the tool shapes they were derived from. From left to right: The first has a continuous curvature that's based on a sphere. The second has a blended curvature that's based on three shallow, intersecting curves. The third has a continuous curvature that's based on a single, steeper curve.



    There's a few different ways to approach modeling the shapes. Most of the simple curves can be generated with standard bevel / chamfer operations or modifiers. On this particular shape the round over along the bottom is always slightly smaller than the radius on the back.


    The workflow in the previous examples becomes more application specific when it comes to sketching shape profiles with just edges and vertices. In applications that support this, it's possible to develop some of the shape profiles by either inserting primitives that are just vertices and edges or by duplicating existing edge geometry without attaching it to any faces. Which opens up some additional possibilities for previewing and lofting edges in a mesh edit without having to work with splines, NURBs or Booleans.

    This edge and vertex only, primitive based sketching workflow is what's shown in the first post about the rod clamp.

    Of course this approach has its own set of drawbacks that will need to be carefully managed. Most of which are related to unintentionally creating unwelded duplicates or non-manifold geometry but that's to be expected with any unconventional workflow that's trying to mimic parametric sketch based modeling functions. So there's a relatively narrow path where this type of modeling makes sense and it relies heavily on application specific tools or modifiers and working with incremental snapping to real world units.

    Here's an example that shows how duplicating the edges from the existing curve can also be used to quickly sketch the corner profile but will leave behind non-manifold geometry when the corner vertex is deleted or dissolved. In applications that don't support this type of workflow it will be necessary to look at alternate modeling operations like boolean subtraction, lofting splines, etc.


    One of the previous questions about this workflow was whether or not the faces had to be created individually by hand. The short answer is: No, there's tools for locating non-manifold geometry, filling edge strips with faces, converting n-gons into triangles and converting triangles into quads. So the discussion about click count and hot keys is mostly within the context of filling non-manifold geometry and converting n-gons into organized faces. https://polycount.com/discussion/comment/2749130/#Comment_2749130

    Below is an example of how non-manifold geometry can be resolved to organized topology with a series of simple operations. Select by feature can be used to identify all non-manifold edges. Face fill can be used to close the non-manifold edges with n-gons. Face select mode can be used to select the corner n-gon and the attached edges. Triangulate faces can be used to turn the n-gon into triangles. Tris to quads can be used to turn the triangles into quads that are organized by features like face angle and shape angle.

    All of these operations can be done in one or two clicks, depending on how many parameters need to be adjusted. These tools are fairly reliable so there really isn't much need to manually create or adjust faces in most of the shapes here.
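
    In Blender terms the whole chain could be scripted like this (a sketch with the selection handling kept minimal; the thresholds are just the defaults expressed in radians):

        import bpy, math

        # Locate and fill the non-manifold edges, then organize the result.
        bpy.ops.mesh.select_mode(type='EDGE')
        bpy.ops.mesh.select_all(action='DESELECT')
        bpy.ops.mesh.select_non_manifold()    # select by feature
        bpy.ops.mesh.edge_face_add()          # face fill with n-gons

        bpy.ops.mesh.quads_convert_to_tris()  # triangulate faces
        bpy.ops.mesh.tris_convert_to_quads(   # tris to quads
            face_threshold=math.radians(40.0),
            shape_threshold=math.radians(40.0))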



    Most of the time the support loops can be created with bevel / chamfer operations. Modifiers provide some additional flexibility for previewing and adjusting the edge width on the fly. Though there's going to be certain shapes where there just isn't enough room for support loops. That's a point where either the base mesh needs to be adjusted to make room for the loops or the modifier needs to be applied so the overlapping geometry can be removed.

    Another fairly efficient way to add support loops around certain types of complex shapes is to use a series of inset operations. Working through one side of the support loop at a time and having direct control over the selection tends to make it easier to avoid creating overlapping support loop geometry.

    The example below shows how this process could be done. On some shapes there just isn't enough room in the corners so overlapping geometry is inevitable. This can usually be cleaned up with merge by distance and vertex dissolve operations. The left over edge loop around the bottom of the chamfer can be removed with edge dissolve.
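
    A minimal sketch of that sequence, assuming the faces of the shape are already selected in edit mode (values arbitrary):

        import bpy

        # One side of the support loop via inset, then clean up the overlaps.
        bpy.ops.mesh.inset(thickness=0.005, depth=0.0)

        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.mesh.remove_doubles(threshold=0.0001)  # merge by distance
        # Leftover stray loops or two-edge vertices can then be removed with
        # bpy.ops.mesh.dissolve_edges() / bpy.ops.mesh.dissolve_verts().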



    This particular corner shape was built to mirror specific processes so it's not representative of anything other than itself. As far as workflow goes the vertex + edge sketching might be a little bit off the wall but the rest of the modeling operations are fairly standard. There's lots of different modeling tools and that means there's different ways to approach creating this shape or something that looks similar. So after a certain point the whole conversation moves from the technical to the expressive.


    It can be tempting to try to resolve smoothing issues by moving lots of geometry around but the problem is that once something moves far enough out of plane it's pretty much over. Any modeling operation that comes afterwards just inherits the inaccuracy of the starting shape and the errors will just keep piling up on top of each other.

    There's definitely a time and place for manually cutting in support loops to organize topology flow, moving vertices to compensate for smoothing artifacts and conforming geometry to clean primitives to restore curvature but for most shapes it doesn't have to be the default method for generating a clean result when subdivision is applied.

    Mathematically generated shapes created by primitives and tool operations tend to be more accurate and consistent than geometry that's created by freehand modeling the shapes. With a lot of hard surface objects it's important to preserve the accuracy of these underlying shapes and avoid introducing any undesired surface deformation.

    That's why it often makes sense to rely on tools to generate consistent shapes and whenever there's major issues with artifacts or the accuracy of a surface it's probably worth looking at resolving any problems in the underlying shapes first.
  • HAWK12HT
    Offline / Send Message
    HAWK12HT polycounter lvl 13
    @FrankPolygon oh sweet, thank you so much for all this knowledge share. I've literally saved all your posts as PDFs since the written part is crucial, not just the images in a ref folder :) 
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter
    Currently working through some process breakdowns on lighting and texturing.

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Toolbag renders: high contrast, two point lighting setup with FOV equivalent to a 100mm macro lens. Composition style is influenced by low budget product photography. Lighting and depth of field are used to direct the viewer's attention to specific points on the model by controlling what surfaces are clearly visible.

    Model is around 11,200 triangles with two materials and texture sheets. Solid materials use 4k textures. Transparent materials use 2k textures. At most view distances the texture sheets can be downsampled by 50% with minimal impact to the visual quality but when viewed up close the higher resolution textures keep small surface details sharp.

    Process breakdowns to follow as additional post(s).

  • LaurentiuN
    Offline / Send Message
    LaurentiuN polycounter

    Looks awesome man, keep'm coming!

  • Alex_J
    Offline / Send Message
    Alex_J grand marshal polycounter

    Now that's how you model a 3d compass! :)

  • sacboi
    Offline / Send Message
    sacboi high dynamic range

    Badass!

    Looking forward to drooling over those breakdowns, dude 😀

  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    Lighting breakdowns

    This write-up looks at how some of the fundamentals of photographic composition can be applied to render setups for presenting real-time assets. Since each render engine is slightly different this is less of a deep dive into the technical nuances of how things work and more of a broad overview of how to use certain render features to enhance the overall quality of portfolio images. Keywords to aid in searching for additional information are in italics.

    Lighting is an important part of creating compelling compositions and environmental elements can have a significant impact on how light interacts with the subject. In traditional photography, natural lighting is constrained by things like time, weather, terrain and nearby objects while studio lighting is controlled by things like flags, bounces, modifiers and artificial lights. All of these environmental elements can be manipulated to visually sculpt the surfaces by controlling where the light and shadow falls on the subject.

    In traditional art disciplines, this use of contrast to create depth is often referred to as chiaroscuro. So this technique has roots in prior media and art movements. How this is all relevant to real-time render setups really comes down to how physical based rendering is currently implemented and how most contemporary texturing tools default to image based lighting for render previews.

    With IBL setups, most of the environmental elements are already baked into the HDRI sky image. So when authoring material textures, without a controlled [calibrated] lighting setup, it's important to cycle through more than one background to get a feel for how the material values will read under various lighting conditions. This type of simple, early lighting test can be used to catch potential issues that could arise from choosing material values based solely on how something looks under a single HDRI sky image that has uneven lighting with a strong color cast or a color temperature that doesn't match the final in-engine lighting setup.

    Another reason to test several different options for IBL lighting is that color temperature and environmental reflections are storytelling elements. So, when it comes to picking a HDRI sky image for the final portfolio renders, it's important to choose one that fits the overall theme of the project. In general, when the goal is to create a narrative piece it can make sense to use lighting with a strong shift in color temperature but for most run of the mill portfolio pieces it's much safer to use lighting with a more neutral color balance.

    All that said... Lighting trends are definitely a thing and following what's popular can be advantageous. The only downside is that it can make it a bit more difficult for portfolio work to maintain a relatively consistent appearance across a longer period of time. Which may or may not be important for certain roles.

    Using sky images as visible backgrounds can be helpful for evaluating material values during the texturing process and can provide some broader context to the stylistic theme in certain types of narrative renders but it can also be very distracting when used as the background for final portfolio renders. Blurring the sky image can provide some separation between the subject and sky but it's still possible for large parts of the model to visually blend with the background. Which can make it difficult to pick up on the details in the silhouette.

    A fairly common critique of sky backgrounds is that there's too much competing detail that makes the image look busy and tends to draw the viewer's attention away from the work that went into the asset. The longstanding consensus on this is that solid background colors tend to be less distracting than visible sky images. Using a dark neutral gray or dull white color as a background is often recommended as it helps improve the consistency of the presentation and will make it easier to read the shapes and texture details.

    Subtle overlays like fine dust particles, long sparks, gradients and vignettes can be used to add some depth and visual interest to the background but the fine line between interesting and distracting can be quite narrow. When in doubt: keep the focus on the subject by using a background that's clean and simple.

    On its own, basic IBL can be pretty dull. Enabling additional render features like ambient occlusion, local reflections and global illumination can help increase the perceived detail of surface lighting effects by simulating more realistic surface reflections and shadow depth. Certain render features can have significant resource overhead and while they won't always be usable in-game there are few reasons to avoid using them to improve the visual quality of portfolio renders.

    When using IBL and GI, it's possible that any distinct colors or forms in the sky image might produce localized color shifts or tint effects in nearby surface reflections. So it's often helpful to completely rotate the sky image several times to find the optimal position for environmental reflections. Adjusting the relative brightness of the environmental lighting, before adding any independent light sources, can help establish a baseline for just how much additional lighting is required to achieve the desired results.

    Multiple dynamic lights with shadow casting and contact refinement do tend to increase resource consumption but the complete absence of shadows tends to produce bright surface reflections that don't match the way lights work in the physical world. While intentionally disabling these render features can work well for heavily stylized work and technical constraints often mean finding a balance between visual quality and performance, most portfolio renders aren't going to be constrained by performance targets. So for assets with realistic materials it makes sense to bias shadow settings towards maximum quality.

    Perceived lighting quality can also be affected by the size of the lights. Using smaller direct lights will tend to produce highlights and shadows with sharp edges. Which creates a stark, artificial look that can seem unnatural when paired with outdoor environments or interior scenes with natural light components. Increasing the size of the light or the area from which the light is emitted will tend to produce softer highlights and shadows that mimic natural sunlight.

    In theory, setting up studio lighting to mimic natural sunlight is fairly straightforward: add a large light with a high intensity and rotate it so the highlight and shadow fall in line with the existing natural light in the environment. Reality is often a bit more complex. Incoming light bounces off nearby surfaces and creates indirect lighting that lifts the value of areas that would otherwise be occluded.

    IBL and GI help simulate this effect but depending on the environment and lighting style it may be necessary to tune both the direct and indirect lighting inputs to produce the desired results. Adjustments to other light settings like type, shape, intensity and color all tend to mirror the behavior of the light modifiers (bulbs, flags, scrims, gels, etc.) used in [studio] photography.

    Basic lighting setups are generally given labels that describe the number of lights used in the scene. Single point lighting setups can be both simple and realistic but just because something is realistic looking doesn't mean it's going to be interesting to look at. In contrast to this, multi point lighting setups do tend to provide more options for creating visual interest but the use of additional light sources also introduces some unique challenges. Most of which come from having to blend highlights and shadows from competing angles while also managing any stray lighting effects caused by secondary bounces.

    When it comes to basic studio lighting arrangements: One point lighting setups generally use a strong key light that's placed off to one side so it illuminates that side of the object while the shadow obscures the opposite side. Two point lighting setups generally use a strong key light placed on one side with a weaker fill light placed on the other side to boost the light levels in the shadows. Three point lighting setups generally follow the same lighting arrangement as two point lighting but with an additional rim light that's generally placed behind the subject to produce a visible highlight around the outer edges of the shapes.

    These basic studio lighting setups can be modified to produce different lighting styles. High-key lighting is a good example of a lighting style that uses a modified 3 point lighting system where the key and fill lights have similar intensity levels and lights are positioned to minimize shadows. Low-key lighting tends to be on the opposite end of the spectrum. Often using single or two point lighting setups with strong key and rim lights that create harsh contrast between patches of light and shadow. So it's acceptable to adjust the type, position and intensity of each individual light to achieve the desired results.
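
    The renders in this write-up were lit in Toolbag but the same two point arrangement translates to any engine. Here's a rough Blender Python equivalent just to make the structure concrete (every position, rotation and intensity value here is a placeholder to be tuned by eye):

        import bpy

        # Low-key two point setup: strong key, weaker fill on the other side.
        def add_area_light(name, energy, size, location, rotation):
            data = bpy.data.lights.new(name=name, type='AREA')
            data.energy = energy  # key is several times stronger than fill
            data.size = size      # larger emitter = softer shadows
            obj = bpy.data.objects.new(name, data)
            obj.location = location
            obj.rotation_euler = rotation
            bpy.context.collection.objects.link(obj)
            return obj

        add_area_light("Key",  1000.0, 2.0, (-3.0, -2.0, 3.0), (0.9, 0.0, -0.8))
        add_area_light("Fill",  200.0, 3.0, ( 3.0, -1.0, 1.0), (1.3, 0.0,  0.8))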

    An image circle's field of view will be determined by the focal length of the lens. How much of this image circle will be captured is determined by the size of the light sensitive media. Though there's been a wide range of film and sensor sizes, the de-facto standard format is usually represented by a 35mm film equivalent.

    Some real-time render engines support using focal length as an input to drive FOV settings, which makes it a lot easier to follow photography tutorials without having to consult conversion tables. If an engine only supports entering FOV angles then it will be necessary to find the focal length and sensor size used in any particular photography tutorial and find a conversion table that provides the closest FOV angle.
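
    The conversion itself is a single line of trigonometry, so it's easy to skip the tables entirely. This computes the horizontal FOV for a full frame (36mm wide) sensor; engines that expect vertical FOV would use the 24mm sensor height instead:

        import math

        # fov = 2 * atan(sensor_width / (2 * focal_length))
        def focal_to_fov(focal_length_mm, sensor_width_mm=36.0):
            return math.degrees(
                2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

        for f in (24.0, 50.0, 100.0):
            print(f, round(focal_to_fov(f), 1))
        # 24mm -> 73.7, 50mm -> 39.6, 100mm -> 20.4 (degrees)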

    Without getting too far into technical minutia, due to the differences between how the human eye and various camera systems function, there's several different schools of thought on which focal lengths mimic what the average person sees. A lens with a 50mm focal length is often quoted as being THE normal lens for 35mm film format cameras. However there are convincing arguments for using wider lenses. With 24mm being near or past the limit for acceptable distortion. More traditional thinking usually limits the range of widely accepted normal lenses to between 35mm and 50mm.

    This all becomes important because, when a subject completely fills the captured area of the image circle, lenses with shorter focal lengths will tend to produce strong barrel distortion and lenses with longer focal lengths will tend to produce pincushion distortion. Some lens distortion is generally acceptable but too much outwards image distortion can create the illusion that the subject is much larger than it actually is.

    Lens compression is used to describe the PERCEIVED difference in flatness between two focal lengths, when the subject is kept a consistent size in the image circle, by moving the position of the camera. Again, without getting into a very convoluted and often misunderstood phenomenon, the basic idea here is that longer focal lengths allow the subject to fill the frame while being further away from the camera. It's this increased distance between the subject and the image plane that tends to result in relative changes to the perspective of distant objects that appears to produce straighter lines, flatter shapes and less depth between the subject's features.

    A lens's depth of field is basically the area it can keep in focus at a given distance. Setting the point of focus determines what area of the subject will be visibly sharp. Areas in front of and behind the POF will be out of focus. Placing the POF on key details and adjusting the width of the DOF will help guide the viewer's eyes to important areas in the image.

    Since most real-time render systems don't simulate the complex physical limitations of optical systems, that determine the width of the DOF, there's no real benefit to discussing that particular topic in any depth. The DOF inputs can simply be adjusted until the final results look appealing.

    As a final word of caution: an extremely narrow DOF can also cause issues with the perceived scale of an object. So it's often helpful to find some reference images of both snapshots and art photos to compare the minimum and maximum depth of field that's achievable at a given scale.

    Camera angles are generally something that's more or less left up to the individual artist's tastes. While there's a lot of different philosophies on the topic and a virtually endless loop of upcoming trends, there's always the risk that unconventional compositions will turn away most viewers who are just looking for traditional breakdown shots of in-game assets.

    Since consistency and ease of viewing is important for most general portfolio projects it's often going to make sense to just use the traditional mix of three-quarters, front, side and back angles for the majority of the images. When it comes to thumbnails, leading covers and narrative images there's a lot more latitude for creative compositions. So having the right mix of creative and conventional camera angles should make for broader appeal.

    One thing to be especially mindful of is that certain extreme low and high camera angles can make things appear out of scale. When combined with lens distortion this effect can produce some unique but often unrelatable results. There's a fine line between interesting and incomprehensible. So really take the time to work through several iterations of any unconventional camera angles and try to find the most visually appealing variant that still aligns with the stylistic goals for the project.

    The multi point lighting setups used to author the textures were mostly bright, outdoor scenes with warm to neutral color casts. This type of lighting setup made it really easy to simulate a wide variety of environmental lighting conditions by turning individual lights on and off. After the textures were mostly completed, the final materials and textures were put through a series of lighting tests with a few different HDRI sky images and very basic single point lighting.

    So coming up with the idea for a very dark, high contrast two point lighting setup for the final renders was more the result of an iterative process of testing the textures than anything else. While testing the material separation between the details in the normal and roughness channels, the text details on the bottom cover really stood out when lit from behind with a single rim light.

    This eventually led to adjusting the rim light to become a strong key light and adding a softer fill light to the same side to pick up some of the details in the shadows. Additional lighting tests started moving in a similar direction but with slightly deeper shadows and less rotational separation between the key and fill lights. A fairly strong highlight from the key light helps lift out the subtle details in the normal and roughness channels at different times and the softer fill light brings up the shadows just enough to expose the major shapes on the other side of the model.

    Adding a strong rim light did help by adding some visual interest to some of the tests but due to the mix of flat and curved shapes on the model it often just got in the way and caused some really distracting highlights. So the final setup used for most of the renders is fundamentally a two point lighting system. Though a few of the renders do have supplemental kicker lights that are masked off so they only interact with a very small part of the model.

    Standard zoom lenses can handle a wide range of situations but as a trade-off they also tend to have comparatively long minimum focal distances for relatively short focal lengths. Which can make it difficult to photograph small objects up close with these types of lenses. For small subjects and closeups of fine surface details it often makes sense to use a dedicated macro lens that has a short close focus distance and reasonably long focal length. This allows the lens to keep things in focus while getting reasonably close to the subject without introducing a lot of outwards distortion. Using a longer focal length like 100mm provides a bit of lens compression that will help fill the frame while also rendering the image circle with a perspective that keeps the linear elements fairly crisp.

    Composition in the first image is pretty basic. Standard side view with a slight down angle. A strong key light comes in from the upper left side of the image, from behind the subject, to act as a partial rim light while obscuring the foreground elements with shadows. It's also positioned a bit lower than normal so it picks up a lot of the minor surface imperfections with a wide highlight that rolls off into the base color of the paint.

    The fill light is a bit softer and much lower. Almost completely parallel with the flat surface in the center of the image. It also has a slightly warmer color temperature that provides a bit of contrasting color that makes the surface a bit more interesting and easier to read. There's a lot of text on the back cover so the DOF is kept fairly thin and the focus point is set near the bottom of the frame to draw attention to that single line of text at the bottom.

    Modified three-quarters front view with the camera placed slightly above and a slight downward tilt. Camera angle and lighting are both positioned to place a strong emphasis on the linear forms. Key light is in front of the subject and enters the frame from the top right. This creates strong highlights on the top of the shapes and partial shadows over some of the flatter areas that are empty.

    Fill light matches the intensity of the key light and comes in from the left of the frame. The direct side lighting is slightly lower than the subject and points upwards to lift some of the shadows on the underside of the shapes. Overall the lighting setup for this image is a lot brighter since it needs to catch the details in the roughness textures. Focus point is kept towards the front of the hinge to draw attention to the paint chips, scratches and grime layers. DOF is still pretty thin since the recessed details on the top of the lid aren't that interesting when viewed from such a low angle.

    Another modified three-quarters view that's slightly down and off to one side. An earlier version of this composition had a lower camera angle but this created a forced perspective that felt out of scale when shown alongside the previous images. Key light is placed to the right side and slightly above, shining down to highlight the details in the normal textures. Shadows from the key light do obscure some of the empty space between the top and bottom half of the shapes to provide a visual rest and some additional depth.

    A soft fill light shines from right to left. It's placed slightly above and behind the center line so it only catches a limited number of surfaces that are facing the right side of the frame. There's also an orange kicker light that's shining up from almost directly below. This helps catch some of the subtle shape transitions around the bottom edges and provides a nice boost to the color in the worn brass parts.

    Point of focus is still very close to the edge of the object but the way it's rotated means more of the shapes towards the front are in focus. Since there's lots of repetitive details on this side the DOF is still fairly shallow to keep the attention focused on the details closest to the camera.

    More conventional high angle three-quarters view that's slightly rotated more towards one side to match the previous composition. A fairly strong key light comes in from the upper left side of the image. Placed further in the background this key light rakes across the surface at a slightly lower height with a slight downward tilt to highlight the details in both the normal and roughness textures.

    The softer fill light shines down from the upper right side of the image and highlights details in the roughness channel as it wraps around the curves on this side of the subject. There's also a kicker light that comes in from the middle left side of the image and shines towards the right side to simulate a light bounce from the fill light. This additional lighting detail is just there to help lift some of the darker shadows on the left side of the image and makes it a bit easier to read the shapes. Deeper DOF shows off a bit more of the detail on the top cover and the focus is more towards the middle of the image.

    A direct top down view makes the text on the dial and ruler easier to read. Rotating the top cover up slightly helps fit more of the subject in the frame and also provides an opportunity to shape the light with the opening in the center. The strong key light shines down from the top of the frame and has a slightly weaker kicker light that simulates a bounce from the fill light. This supplemental lighting effect helps lift the values along the top edge of the ruler so a consistent highlight is visible across the base that's laying flat and the top cover that's tilted upwards.

    Bright fill light is placed on the right side of the image and shines across the flat areas along the top of the base. There's an additional kicker light that's shining through the slot and down onto the bezel. When it comes to replicating complex studio lighting setups in real-time engines it sometimes makes sense to do things the easy way. Technically this setup uses four lights: key, fill and two kickers, but it could be done using just the key and fill. Adding additional mesh objects to act like bounces and flags would take a lot more time and effort than just adding a couple extra lights and masking them in post.

    Wider DOF in this composition is needed to handle the increased surface depth caused by rotating the cover up towards the camera. Focus point is somewhere between the text on the dial and the text on the ruler. This setup uses a background focus where the closer something is the softer it appears to be as it enters the out-of-focus zone while everything behind the POF remains relatively sharp.

    There's definitely some risk in opening with more abstract camera angles and high contrast lighting setups. This is just a quick example of what a more conventional set of camera angles would look like with this particular lighting setup. Not the most interesting composition but it does provide a decent overview of the model and textures. Which is probably what a lot of people care about more than anything else. Very similar lighting setup to the previous five images but the camera angles make for a very different type of presentation.

    This comparison of the low-key two point lighting setup and a more conventional three point lighting setup illustrates just how much the lighting can influence how the model and textures are perceived. Under normal lighting conditions the more subtle surface details won't be visible unless the lighting hits just right.

    That's where there's certain trade-offs in terms of just how far to push the lighting and composition. Too bland and it's just not all that compelling. Too intense and it becomes unrelatable. So a large part of the challenge of creating unique portfolio renders is balancing what the audience expects to see with the artistic vision for the piece.

    For portfolio renders, where the goal is to show off as much of the work as quickly as possible, it's hard to beat a standard 3 point lighting setup with DOF turned off. That way everything is evenly lit and clearly visible. While it may not be the most exciting approach to lighting a scene it does provide a certain level of consistency while also putting the modeling and texturing skills that went into creating the asset front and center.


  • SnowInChina
    Offline / Send Message
    SnowInChina interpolator
    yo frank, freaking beast
    need to read this later, happy new year to you
  • FrankPolygon
    Offline / Send Message
    FrankPolygon grand marshal polycounter

    @LaurentiuN Thanks! Plan on working through a few more lighting and texturing write-ups on different assets.

    @Alex_J Thanks, it was a neat project and it's cool to see it in LandNav.

    @sacboi Thank you, really appreciate the support. The lighting breakdown is completed and can be viewed on the previous page.

    @SnowInChina Thanks! Happy new year to you as well.

This discussion has been closed.