
Question regarding complete workflow chain

Hello everyone,

I've been learning how to model for the purpose of generating some 3D assets for inclusion in a video game I am working on, and was hoping that perhaps some more experienced eyes could take a look at the production chain I am planning and identify any possible issues or ways of doing things differently. I greatly appreciate any insight anyone can provide.

I have a couple of questions regarding how to connect up the various parts of the modelling process:

(1) I'm using Blender for the modelling and it doesn't appear to have terribly advanced controls for painting the UV map. Is it usual for the modeller to unwrap the UVs within the modelling software and then pass it off to an artist with a dedicated 2D art package such as Photoshop for rendering the diffuse texture for the model?

If so is it usual to use a plugin within the 2D art package to visualise the unwrapped polygons as an overlay during the painting process? Or do artists normally have to just try and visualise the unwrapping in their heads?

(2) I purchased some software that takes a diffuse image and uses this to generate additional textures such as the normal and specular maps.

For game modelling my understanding is that you build both a low poly model for use in game, and a high poly one that is used for the generation of normal maps. You then run both the low-poly and high-poly through a tool which compares the two and builds the extra detail of the high-poly into the low poly's normal maps.

My question is how do you combine these two outputs - (i) the generated normal maps from the high-poly comparison and (ii) the normal maps from your textures used during texture painting?

Thanks for any help.

Replies

  • Bartalon
    1) Modeling software comes equipped with UV mapping tools that are more than capable of unwrapping anything, but there are also some standalone tools out there that some people use instead. Roadkill and Headus are a couple examples.

    I don't think many studios have dedicated texture artists, especially not smaller ones. It's far more common for one person to model, unwrap, bake maps, and texture their own assets.

    Generally, when you texture an asset in Photoshop you will also have a second software package open to view the model in 3D space, such as Blender, Marmoset Toolbag, etc. As you make additions to your texture, you save your document then refresh it in your 3D package to see how it looks on your model. It's becoming more common to apply textures directly onto your 3D models, as is possible in various programs like 3D Coat, Mudbox, ZBrush, Substance Painter, and even Max and Maya (not sure about Blender).

    2) If you are still learning the game art pipeline, I would recommend avoiding software that performs tasks for you until you understand the foundation of the discipline. Learn how to make normal maps and specular maps on your own and understand their purpose and proper implementation.

    (i) Baking HP information down to your LP model can be done inside your native 3D modeling software in most cases. Blender, Max, Maya, ZBrush, Mudbox, etc. all have this capability. A very popular alternative is a third-party program called xNormal.

    (ii) All the texture maps you generate with your various software packages will eventually be combined into the form of a material, or shader, inside a real-time rendering engine like Unity3D, CryEngine, Unreal Editor, Marmoset Toolbag, etc. The shader is applied to your model and will display all your maps on a model.
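    To make that last point concrete, here is a rough sketch of what a real-time shader does with those maps per pixel: a generic Lambert diffuse plus Blinn-Phong specular combination, written in plain Python. This is illustrative only, not any particular engine's code, and all the sample values are made up:

```python
import math

def normalize(v):
    # Scale a 3D vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_pixel(diffuse_rgb, normal, spec_level, light_dir, view_dir, shininess=32.0):
    """Combine a diffuse map sample, a (decoded) normal map sample, and a
    specular map level the way a simple Lambert + Blinn-Phong shader would."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    # Lambert term: how directly the light hits the surface.
    n_dot_l = max(dot(n, l), 0.0)
    # Blinn-Phong specular term using the half vector between light and view.
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    spec = spec_level * (max(dot(n, h), 0.0) ** shininess)
    # The diffuse map tints the Lambert term; the specular highlight adds on top.
    return tuple(c * n_dot_l + spec for c in diffuse_rgb)

# Light and camera both looking straight down the surface normal:
color = shade_pixel((0.8, 0.2, 0.2), (0.0, 0.0, 1.0), 0.5,
                    (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

    The point is just that the individual maps never get merged into one image; the shader samples each of them every frame and combines them mathematically.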
  • Biomag
    For starters - I use Maya (for modeling), ZBrush (sculpting), and Photoshop (for texturing), plus several additional tools (nDo, xNormal, Handplane, ...). I have no experience with Blender, 3ds Max, or Gimp.


    The whole workflow depends on the scope and size of the team as well as their skills. While bigger companies can afford to split the work among specialists, I'd guess many, if not most, companies have their 3D artists handle the whole workflow (even up to rigging and animation if you are on a small team).

    As I haven't used Blender I don't know its options, but normally you don't do texturing in your modeling program. It might vary depending on your team, but in most cases you will model the asset, make the UV layout, bake whatever is needed, and then texture it (in Photoshop or something similar). You can do the textures without a 3D view, but it's going to be painful to the point of being stupidly slow and frustrating. Normally I link the texture in Maya to my PSD file while painting, so each time I save I can refresh the Maya material to see how the changes look (Maya and Photoshop running side by side, constantly switching between the two). Anything else would leave me working nearly blind.

    Regarding normal maps - it's not a matter of converting a diffuse map to a normal map by pressing a single button. There are a few things to consider, and it's actually too much to write down here. A normal map is NOT a desaturated diffuse map! The same goes for the specular map! You should really read up on this before planning a workflow, and see what your project actually needs.

    When it comes to high poly models and baking, there isn't a one-size-fits-all solution either. The approach has its advantages, but if you are making a small mobile game, for example, it may well be overkill to build high poly assets and bake them down to low poly, as all the detail you can get out of a high poly is going to be lost on an iPhone-sized screen.


    My suggestion would be to start by understanding the technology first - see what each program does and how it is used - and only then worry about the workflow. Workflows need to be adapted to the project and team anyhow, and you won't be able to do that if you don't understand the basics ;) Keep two things apart: the workflow of the whole pipeline and the workflow within each individual program. For starters you should only worry about the second.



    When it comes to low poly / high poly workflows, simply put: you use the process of baking maps (normal, ambient occlusion, ...) to get the information from the high poly model into the textures of the low poly. Normal maps have two uses - 1. as normal maps (obvious), and 2. programs like nDo/dDo and others can work with them to create or enhance other textures. AO maps might be used to create basic shading for the textures, or as a mask in Photoshop when working on your diffuse or other textures...
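    To make "the information in the textures" concrete: each pixel of a tangent-space normal map stores a surface direction, with the three 8-bit colour channels mapped from [0, 255] back to [-1, 1]. A minimal sketch of that decode step in plain Python, purely for illustration:

```python
def decode_normal(r, g, b):
    """Convert 8-bit normal map channels (0-255) into a unit-length
    tangent-space vector with components in [-1, 1]."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)

# The characteristic light blue of a "flat" normal map pixel decodes
# to a vector pointing (almost exactly) straight out of the surface:
nx, ny, nz = decode_normal(128, 128, 255)
```

    This is also why a normal map can never be a desaturated diffuse map: its channels are direction components, not brightness.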


    I suggest you focus on the basics for now, and you will be surprised how quickly you then understand some of the bigger things ahead of you :)
  • Chimp
    Agree 100% with all of the above, but both of you have set aside his specific, legitimate question in favour of educating him more broadly - combining normal maps generated from geometry with those generated from, say, a DDO material: http://www.polycount.com/forum/showpost.php?p=1967493&postcount=11 provides a quick solution.
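    For reference, one common way to combine a baked normal map with a painted or generated detail normal map is a "whiteout"-style blend: add the x/y tilts, multiply the z components, then renormalize. This is one widely used technique, not necessarily the exact method in the linked post, and it assumes both normals are already decoded to [-1, 1] components. A plain-Python sketch:

```python
def blend_normals(base, detail):
    """Whiteout-style blend of two decoded tangent-space normals:
    sum the x/y tilts, multiply z, then renormalize so the result
    is still unit length."""
    x = base[0] + detail[0]
    y = base[1] + detail[1]
    z = base[2] * detail[2]
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)

# Blending a baked normal with a flat detail normal leaves it unchanged:
flat = (0.0, 0.0, 1.0)
baked = (0.3, 0.0, (1.0 - 0.3 ** 2) ** 0.5)  # hypothetical baked sample
combined = blend_normals(baked, flat)
```

    In practice you would run this per pixel (e.g. as a Photoshop action, a shader, or a script over the two images) and re-encode the result back to the 0-255 range.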

    But I should reinforce that the above posters are, in my opinion, 100% correct.