@kuronekoshiii The sharper corners will need support loops to prevent the shapes from collapsing, and the smoothing artifacts across the rest of the shape can be resolved by removing the excess geometry and edge loops that cross over into the curved shapes. Flat surfaces are largely unaffected by triangles and n-gons, so they can be a good place to terminate the extra edge loops.
Try simplifying the starting geometry, matching the segment counts of adjacent shapes whenever practical and adding support loops around major shape transitions with a bevel / chamfer operation or modifier. Here's an example of what this could look like:
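To make the segment-matching idea concrete, here's a small Python sketch. The function name and heuristic are my own illustration, not a feature of any particular modeling tool: the idea is to pick a curve's segment count as a clean multiple of the adjacent shape's count, so every edge on the smaller shape can land on a vertex of the larger one instead of spawning stray loops.

```python
def match_segments(adjacent_segments, minimum_segments):
    """Smallest multiple of adjacent_segments that is >= minimum_segments.

    Keeping the counts as clean multiples means the edges of the smaller
    shape line up with vertices on the larger one, which avoids the
    crossing edge loops that cause smoothing artifacts.
    """
    count = adjacent_segments
    while count < minimum_segments:
        count += adjacent_segments
    return count

# A hypothetical 6-segment hole meeting a curve that needs at least 16 segments:
print(match_segments(6, 16))   # -> 18
print(match_segments(8, 16))   # -> 16
```

The exact minimum depends on how tight the curve is; the point is only that the two counts divide evenly.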
@navneethdodla94 When comparing modeling workflows for film and games there's some overlap, but there are also significant differences in the technical requirements for each. Within each industry there's also a range of acceptable quality levels. Much of this variance is based on how the models are used and what the budgets are. What's acceptable for one company or project may not be acceptable somewhere else.
If the goal is to become part of a particular industry, specialization, company or team then it's important to research who is leading in that field and emulate what they are doing. Comparing and contrasting the processes and work of artists in each field can highlight where and how the technical requirements of each discipline are different. As an example, compare the wireframes of film and game models:
Andrew Hodgson is an artist who works in film and shares a lot of his modeling process and philosophy:
Matthias Develtere's subdivision modeling work for Wolfenstein II is an example of how n-gons and triangles can be used to speed up the production process with a minimal impact on the overall shape accuracy and surface quality:
For high poly game models in general: as long as there aren't any specific technical requirements for all quads, and as long as the mesh is easy to work with and subdivides cleanly, then a base mesh or cage mesh with n-gons is passable. Creating the high poly model is just a part of the asset creation process. It's not the entire process itself and it's unlikely the player will ever see the high poly model. Other parts of the process (the low poly model, normal bakes, textures, lighting, animation and presentation) will end up directly in front of the player and are (as a whole) arguably more important.
There are a few discussions on n-gons in this thread, and a recurring point is that many of the misconceptions about subdivision modeling are based on the abstraction and oversimplification of specific, contextual technical issues, limitations and requirements. Often the nuanced context of these situations is stripped away, and this can lead to the perpetuation of nonsensical and counterproductive rules. This is why it's important for artists who are learning this skill to take the time to research and verify what's being said.
Another issue is that time, tools and topology are relatively easy to quantify and it can be attractive to look at these factors as a primary benchmark for judging quality. In theory this is fine for process improvement but it can also become a trap where an artist will judge the result of someone else's work based entirely on how well the rules were followed while excusing deficiencies in their own results solely because they followed the rules they made up.
For the shape question: Jan has pretty much covered it all but it's also worth mentioning that it's important to match the segment counts of the adjacent shapes. This will help reduce the chance of smoothing artifacts appearing on more complex shapes.
The hard edges all appear to have matching UV splits so that doesn't seem to be the issue. Increasing the texture size captured more details but even at 8k there were significant baking artifacts. It appears that the pixel density is quite low in some areas and this means there's a limited ability to capture the relatively sharp edges of the high poly mesh.
Softening the edges on the high poly mesh removed most of the baking artifacts at a 4k resolution. Increasing the antialiasing samples had a minimal effect on the baking results. There are some areas where the low poly mesh appears to fall inside the high poly mesh and this does have an effect on the quality of the baking results.
Overall it seems that these artifacts are a combination of issues related to the texture density and the sharpness of the high poly mesh. Here's a comparison of the sharper (left column) and softer (right column) high poly meshes baked to the same low poly mesh with 2k, 4k and 8k texture sizes.
Here's a closeup of the low poly with 4k normal textures baked from the sharp high poly mesh. The pixelated normal artifacts are still visible along some of the edge segments.
Here's a closeup of the low poly with 4k normal textures baked from the soft high poly mesh. Most of the pixelated normal artifacts are resolved and the edge segments appear to be relatively clean.
Where to go from here really depends on the scope of the project: time deadline, remaining budget, intended use, quality level and technical limitations. If the edges must remain this sharp then you'll have to find some way to balance the texture size and the ability to capture the minimal edge width on the high poly. Softening the edges on the high poly will translate to a better bake at a lower texture resolution.
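To put rough numbers on that trade-off between texture size and the minimal edge width, here's an illustrative Python sketch. The dimensions are hypothetical assumptions, not measurements from this scene: the number of texels a bevel occupies in the bake is its world-space width times the texel density of its UV island.

```python
def bevel_texels(bevel_width, island_world_size, island_uv_span, texture_size):
    """Approximate texels covered by a bevel of bevel_width (world units)
    on a UV island that maps island_world_size world units across
    island_uv_span of the 0-1 UV range, baked to a square texture_size map.
    """
    texels_per_unit = texture_size * island_uv_span / island_world_size
    return bevel_width * texels_per_unit

# Hypothetical 2 mm bevel on a 1 m part whose island spans 25% of the UV space:
for size in (2048, 4096, 8192):
    print(size, bevel_texels(0.002, 1.0, 0.25, size))
```

Under these assumptions the bevel only spans about 1 texel at 2k and about 4 texels at 8k, which lines up with sharp high poly edges still looking pixelated even at large texture sizes. Softening the bevel or raising the island's texel density both widen that span.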
Wasted space in the UV layout could be minimized by straightening some of the curved UV islands and using one of the advanced UV packing add-ons to generate a more efficient UV pack. Another option is to remove any internal faces that won't be seen. All of this would help increase the pixel density without increasing the texture size.
If the asset's resource footprint is a concern then it might also be worth dissolving some of the excess geometry on the flat areas. This could be done automatically with the limited dissolve operation or manually with the other dissolve operations. It also looked like there were a few spots with unwelded vertices and flipped normals so try running a merge by distance operation and recalculate the normals outwards.
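For anyone unsure what merge by distance actually does, here's a simplified pure-Python stand-in. Real implementations use spatial hashing and also rebuild the face indices; this sketch only shows the core idea of welding vertices that sit within a threshold of each other.

```python
def merge_by_distance(vertices, threshold):
    """Weld vertices closer than threshold (a simplified model of the
    merge-by-distance operation; not any tool's actual implementation).
    Returns the merged vertex list and an old-index -> new-index remap.
    """
    merged = []
    remap = []
    for v in vertices:
        for i, m in enumerate(merged):
            if sum((a - b) ** 2 for a, b in zip(v, m)) <= threshold ** 2:
                remap.append(i)   # close enough: reuse the existing vertex
                break
        else:
            remap.append(len(merged))
            merged.append(v)      # far from everything: keep as a new vertex
    return merged, remap

# Two nearly coincident (unwelded) verts and one distinct vert:
verts = [(0.0, 0.0, 0.0), (0.00005, 0.0, 0.0), (1.0, 0.0, 0.0)]
merged, remap = merge_by_distance(verts, 0.0001)
print(len(merged), remap)   # -> 2 [0, 0, 1]
```

The threshold matters: too small and unwelded verts survive, too large and legitimately separate details collapse together.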
Try increasing the texture size, sampling count and dilation value. If that doesn't resolve the issue then try adjusting the softness of the high poly mesh and see if that improves the bake.
Both Obscura and rollin have provided valid solutions and feedback. There's often more than one way to resolve an issue and what's right for a given project will depend on finding a suitable balance between expedience and efficiency. Understanding the root causes and first principles is beneficial going forward as these issues will manifest again in different situations and specific technical constraints may require other solutions.
To illustrate Rollin's point: the skewing can be resolved without adding additional geometry.
Figure 1: Automatic mesh triangulation produces skewing where the low poly's horizontal edges cross diagonal elements on the high poly.
Figure 2: Blending vertex and face normal directions with a skew map in Toolbag resolves the issue without requiring additional geometry.
Figure 3: Adjusting the low poly triangulation to match the diagonal elements resolves the skewing without requiring additional geometry.
Figure 4: Adding additional edge loops that match the diagonal elements resolves the skewing but increases the resource footprint.
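The normal-blending approach in Figure 2 can be sketched conceptually: a painted skew map weight blends the interpolated vertex normal toward the flat face normal before the bake rays are cast. This is only an illustration of the idea with made-up values, not Toolbag's actual implementation.

```python
import math

def blend_normals(vertex_normal, face_normal, weight):
    """Linearly blend an interpolated vertex normal toward the face normal
    and renormalize. weight=0 keeps the smooth vertex normal (skewed rays
    across diagonal elements); weight=1 uses the flat face normal
    (rays cast straight out, no skewing).
    """
    blended = [v * (1.0 - weight) + f * weight
               for v, f in zip(vertex_normal, face_normal)]
    length = math.sqrt(sum(c * c for c in blended))
    return tuple(c / length for c in blended)

smooth = (0.707107, 0.0, 0.707107)   # hypothetical tilted vertex normal near an edge
flat = (0.0, 0.0, 1.0)               # face normal of the flat surface
print(blend_normals(smooth, flat, 1.0))   # -> (0.0, 0.0, 1.0)
```

This is why the skew map fixes the artifact without extra geometry: it only changes the direction the projection rays travel.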
Looks like the order of operations is important: Bevel the horizontal segments first. Dissolve and bridge where necessary. Bevel the vertical segments last to add the final curve.
Start by blocking out the shapes to determine how many segments the arch will need to match the intersections with the vertical gussets. This dumpster looks like it's mostly welded plate and square tubing. The individual parts could be modeled separately or combined into simplified sub-assemblies.
Since there's a lot of basic shapes it might be worth looking into building the high poly with floating geometry or running a Boolean to ZBrush / Quadremesher workflow. What's most efficient depends on the project goals and technical limitations.
Here's an example of one approach to creating the arched side with subdivision modeling. Block out the basic shapes using inset, cut and chamfer / bevel operations. Clean up the mesh and extrude the rest of the flat / rectangular shapes. Use a chamfer / bevel operation or modifier to add the support loops. This workflow supports creating both soft stylized and sharper realistic shapes.
The end goal for the model should inform how you approach breaking up the low poly and high poly models. Keep in mind that baking to a simplified low poly model generally requires creating slightly more exaggerated features on the high poly model. Breaking up the low poly model into specific components (if the polygon budget is large enough) will make working on the high poly easier since it will require less shape merging.
Try to keep the geometry as simple as possible while still holding the shapes and maintaining a good edge flow. Take advantage of all the flat surfaces by using them to absorb triangles and n-gons generated by terminating excess edge loops. Depending on what the model is going to be used for it's probably worth taking some time to look at alternate workflows such as floating geometry and Booleans + re-meshing.
For me the best option has always been a Wacom Intuos Small without a screen. It's convenient to always see your work as a whole, unobstructed by your right hand, palette, stick or whatever, always on a big horizontal IPS screen, with no need to bend over in some uncomfortable pose. I've had the same screenless Wacom in the large and medium sizes. The large required too much energy to move your hand around and didn't leave enough space to lay a keyboard nearby, but the medium is OK too.
I once had an iPad with a pen. It was a nice toy to play with outside your work environment, but no more than that.
@aregvan @guitarguy00 You're welcome. Glad it was helpful. Thank you to everyone else who posts questions and answers too.
Relying on tools to generate geometry just means avoiding unnecessary manual work when there's a tool or modifier that will do the job quicker and more accurately.
Here's an example of manual work. Please, for the love of all that's holy, don't do this sort of stuff.
The verts are moved into place freehand, edge loops cut in one segment at a time and the fillet is scaled up manually. Yes, the result is usable but the amount of work that went into it doesn't justify the result. There are tools that can do most of this in just a few keystrokes and will be more accurate than an artist pushing geometry.
Here's an example where using the correct tools speeds things up. Generate the primitives. Block out the intersection and match the segment counts. Run a Boolean operation. Run a chamfer operation. Merge down the leftover geometry. Add three edge loops and join them up with the base of the intersection. The tools have done all the work and kept the geometry reasonably accurate.
If shape accuracy isn't a big deal then the Boolean operation could have been cleaned up with a merge by distance operation and the perpendicular edge loops could have been added before the chamfer operation. This would have been even faster but less accurate.
There's a fine line between manual work and manual adjustment. Manual adjustment is part of the process and the important thing is to use tools that will keep the mesh co-planar and parallel along edge normals, etc.
If the shape requires an excessive amount of manual adjustment then it might be time to re-evaluate how the shapes were blocked out. Sometimes it just doesn't matter and the project calls for something quick and dirty. Some shape intersections can be fudged and others can't.
Here's an example where the geometry was created using tools and some minor manual adjustment was required at the end. The vert was moved along the edge normal to alleviate the stress where the support loop came close to the cylinder's edge segment.
Generally speaking, as long as the geometry remains in plane and subdivides cleanly then it's OK to use tools to move things around accurately. There's also exceptions to this where the geometry has to be purposefully distorted to counter subdivision smoothing but that's a different discussion.
Here's a subdivision preview of all the meshes. They all work. It's just that some took significantly longer to make than others. Manually creating everything and manually adjusting everything can be a huge time sink. Avoid it where you can and spend time wisely.
The big take away is avoid having to manually bash things into shape by moving every vert, cutting every edge loop and smoothing shapes by hand. If things are falling apart and causing smoothing errors then there's a fundamental problem with the geometry. Stop and take the time to block things out and work through each problem.
Also don't get caught up on perfection. It's subdivision modeling, not CAD. There's going to be some imperfection. Get it as good as it needs to be and move on to the next part.
It looks like there's a specific order of operations. Start by beveling the opposing vertical and horizontal edges. This should create the desired edge flow. Select the new edge loop and bevel. The difference in bevel width on the second operation looks uniform so it may be a percentage or distance based bevel. The specific bevel operator settings are something you'll have to experiment with to find the exact shape.
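The difference between a percentage-based and a distance-based bevel can be shown with a small Python sketch. This is a simplified model of the two width modes for illustration, not any tool's exact math: an absolute offset produces the same width everywhere, while a percentage slides the new verts a fraction along each adjacent edge, so the width scales with edge length.

```python
def bevel_width(edge_length, amount, mode="offset"):
    """Resulting bevel width on one edge.
    'offset'  -> amount is an absolute distance (uniform width).
    'percent' -> amount is a percentage of the adjacent edge length.
    """
    if mode == "percent":
        return edge_length * amount / 100.0
    return amount

edges = [0.5, 1.0, 2.0]
print([bevel_width(e, 0.1, "offset") for e in edges])   # -> [0.1, 0.1, 0.1]
print([bevel_width(e, 10, "percent") for e in edges])   # -> [0.05, 0.1, 0.2]
```

Comparing both modes against the reference shape is a quick way to work out which one the original bevel used.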
Reducing the segment count to the minimum amount of geometry required to hold a shape and maintain edge flow is a good strategy for subdivision modeling. The base geometry looks good. You have the basic shapes and loops working together. Definitely on the right track.
The distortion between the two outlets can be corrected by increasing the loop count on the chamfer / bevel and adjusting the bevel profile settings until the fillet radius matches the reference image. This example uses a similar base mesh (32, 24, 16 segments) and shows the stress patterns.
I didn't see any triangles on the front of your mesh so they must be on the sides or the back? Edge loops can be cut in on the back of the stand pipe to match the horizontal segments on the outlet intersections. Triangles and n-gons are fine as long as they aren't causing visible artifacts. Part of subdivision modeling is controlling shading errors by either limiting them to a small area or averaging them out over a larger area. Sometimes it just takes a couple extra loops to match the surrounding geometry.
The subdivision workflow for creating game models is usually something like this: start by blocking out the shapes with a base mesh. Use this base mesh to create the high poly model. Either optimize the high poly cage mesh by deleting edge loops and collapsing geometry or use the base mesh and build up the low poly model. UV unwrap, set up mesh smoothing groups and bake.
Technical requirements for high poly models and low poly models will be different. A lot depends on the project. Overall it looks like you have the process down. Now it's just a matter of working through the shapes and matching the reference image.