Disclaimer: Some of this depends on using object space and won't really work with tangent space, but I tried to keep it as application-agnostic as possible. You can always follow all of these steps and then convert your object space map to a tangent space map at the end using xNormal's converter tool, but you will get artifacts on the split edges of your lowres mesh, etc.
Alright, well, I forgot to update the files on my USB drive when I came home, but I do have some stuff I started to collect about the process, so I'll go over it a bit here.

This is mainly a discussion of my workflow for creating FPV weapons; I'll cover the points I think are important. I'm not going to spend much time going over exactly how to make highres meshes, at least not today, maybe a bit later.
Alright, so first off, I already have my highres mesh done at this point, along with my lowpoly + UVs. I slap a grid texture on, prioritize where I want the most pixel space, and resize the UV shells accordingly. You can see some objects have roughly twice as much pixel space as others (scaling a shell to 150% x 150% gives about 2.25x the area; 200% x 200% would be 4x).
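Quick aside on the math in that parenthetical: UV scale works on both axes, so pixel coverage grows with the square of the scale factor. A tiny sketch of that, nothing app-specific:

    # UV shells scale in both U and V, so pixel coverage grows with the square of the factor.
    def texel_area_multiplier(uv_scale):
        """Relative pixel coverage after scaling a UV shell by uv_scale in both U and V."""
        return uv_scale ** 2

    print(texel_area_multiplier(1.5))  # 150% x 150% UVs -> 2.25x (roughly "twice") the pixel space
    print(texel_area_multiplier(2.0))  # 200% x 200% UVs -> 4x the pixel space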
Now at first glance there may not seem to be a huge difference between these areas, but this stuff really matters. The areas you want to give more UV space are the spots that will be up close, especially something like iron sights or a scope that you'll zoom in on; those need the most relative space (the smallest squares). Areas like the very front and the very back, which will either be far away from the camera or completely out of sight, should get the least coverage. I'll usually end up with a few small details that get a bit more space, just because they are smaller shapes and easier to pack into the UVs.

Now that we have everything to a point where we're happy with the pixel distribution, we can move on to packing the UVs into their final positions (sorry, I didn't save WIP shots of the UVs before this). Things might shuffle in size a little at this point, but I try to keep the same rules in mind and keep everything consistent (relative to how we set it up above, of course).

Ok, now that we have our UVs laid out correctly, we can move on to prepping the meshes to be baked.
Alright, first, here is the wireframe of the lowpoly mesh.

Now, this is really the most important, ULTRA-TIME-SAVING aspect of baking a clean normal map: we're going to make a copy of this mesh. Then we'll take both the lowres mesh and the highres mesh and split them apart in exactly the same way; usually I do this in offsets that are easy to remember, like 100 or 50 units (there's a small sketch of the idea below the exploded shots). We do this so that when it comes time to bake, you don't have lots of objects intersecting and causing all sorts of errors that you would have to painstakingly paint out later in Photoshop.
So here's the lowpoly mesh, exploded:

And here's the highpoly mesh, exploded:
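If you ever wanted to script those offsets instead of eyeballing them, the idea is just "move each low/high pair by the same round number." A rough sketch in plain Python (the data layout and names here are mine, not any particular package's API):

    # Hypothetical sketch: translate matching lowres/highres part pairs by the same
    # easy-to-remember offset (e.g. 100 units) so both meshes explode identically.
    SPACING = 100.0

    def explode(parts, spacing=SPACING, axis=0):
        """parts: list of (low_verts, high_verts) pairs, each a list of [x, y, z] positions.
        Returns new pairs, with pair i translated by i * spacing along the chosen axis."""
        def shift(verts, offset):
            return [[v[j] + offset[j] for j in range(3)] for v in verts]
        exploded = []
        for i, (low, high) in enumerate(parts):
            offset = [i * spacing if j == axis else 0.0 for j in range(3)]
            exploded.append((shift(low, offset), shift(high, offset)))
        return exploded

The only thing that matters is that each low part and its high part share the exact same offset, so the projection still lines up per part.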
After we have all that in order, there's something else I like to do, and there are a few ways of doing it depending on what app you use. With xNormal I'll make a simple image with a few flat colors and basically assign different colors to the different materials of the highres by planar mapping each material over the corresponding color. In Max you could just use material colors, etc. This will save a SHITTON of time when it comes to creating layer groups for the different material types in your PSD later down the line. You generally want pretty contrasting colors here; this isn't the time to choose your color scheme, I just use this for creating masks. So here white is polymer, black is metal, and grey is rubber. You can see I switched some of these materials later on while texturing, which is a lot easier to do when you have these simple masks available to you.

I'll include the final result of this mask texture. You'll generally have to tweak a few errors with it, but it's easy. I actually render this map last, after the normals/AO are all set.
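If you want to turn that baked ID map into ready-made masks automatically, something like the following works. This is a minimal sketch using Pillow/NumPy; the file name, exact color values, and tolerance are all assumptions:

    import numpy as np
    from PIL import Image

    # Assumed inputs: the baked color-ID map and roughly the colors described above
    # (white = polymer, black = metal, grey = rubber).
    id_map = np.asarray(Image.open("material_id.png").convert("RGB")).astype(int)

    MATERIALS = {"polymer": (255, 255, 255), "metal": (0, 0, 0), "rubber": (128, 128, 128)}
    TOLERANCE = 40  # leave some slack for filtering/bleed around chart edges

    for name, color in MATERIALS.items():
        # Per-pixel distance to the reference color, taken as the worst channel.
        dist = np.abs(id_map - np.array(color)).max(axis=-1)
        mask = ((dist <= TOLERANCE) * 255).astype(np.uint8)
        Image.fromarray(mask, mode="L").save("mask_%s.png" % name)

Each output is a black-and-white mask you can drop straight into a layer group in your PSD.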
Alright, now that we have colors all set up on our highres mesh, I'll do a test render to see how well the normals are turning out. There will pretty much always be tweaks needed at this point to my bake mesh (the exploded mesh), and since I'm using object space maps I can edit this geometry in any way I see fit! That means I can add or remove edges, etc., really do anything I want with it whatsoever.

So I'll generally go back in and add a bunch more edge loops in certain parts if the details aren't rendering out straight (warped, crooked details, etc.). The basic idea here is really similar to maintaining hard edges in your highres: you want more geometry in your cage to give a more accurate projection. If you're using tangent space with split smoothing groups, this may not be as much of a problem and you can skip this step. Alternatively, since this really only applies to object space, you can convert an OS map to tangent space (xNormal again!) after you have the results you want. That tends to create artifacts around the edges where you have split smoothing groups, though, which sort of defeats the purpose.
So here's a comparison of before and after. Ignore the messy geometry, it really *doesn't matter at all* as long as the bake comes out clean; do whatever you like here. We won't be using this mesh for anything meaningful aside from baking.

Now that we've done a pass over all of the parts of the mesh that aren't quite right, we can render out a nice, final, clean normal map!
Now I don't always do this next step, but I have on the last couple of models and it tends to go a long way. When you've got the mesh all baked and can preview your normals, check which details are close to the camera but look pretty flat; once you figure that out, you can cut in some extra geometry and really make those areas pop. Doing this in a few areas helps create the illusion that more is actually modeled than you can see, and makes it hard to tell what is modeled and what is just normal map detail.

You can do this before your final bake, but I tend to like doing it after, that way I can keep my UVs simple. And since I'm using object space normals, I can do anything I want to the mesh afterwards without needing to rebake. If you were using tangent space, you'd definitely want to nail this sort of detail down before your final bake.
I won't bother going over my process for AO, because I've already covered that here:
http://wiki.polycount.net/Ambient_Occlusion_Map?highlight=%28ambient%29
Now this may seem like a lot of crap just to get a decent normal map, but most of these steps are actually pretty simple once you get in there and do them. If anyone has any questions, feel free to ask, or if something doesn't make sense, let me know so I can edit o_O
Replies
Seriously awesome stuff though, thanks for the tut.
Any feedback you can provide on this would be great; this is just what I have noticed, and I'm wondering how correct these observations are. When I first started normal mapping things, I would give the low poly object that was going to be paired with the high to generate the normals a single smoothing group, and not care if it had shading errors. The result of the bake was crap, and it took me a while for it to dawn on me that the way that low poly model shades with vertex lighting was affecting the normal maps.

Bahh, I think I gave enough information about my thoughts; hope I don't put you to sleep with the post. Let me know what you think, thanks.
Alex
Also, I don't think all current engines use the good old vertex interpolation algorithm to shade meshes; I believe there are more precise algorithms out there now.
Alex
I wish I could show off what I have so far (in comparison to your results, it's pretty lame), but somehow I accidentally saved the diffuse over the normal, so that's lost, as is the high res mesh (it's entirely my fault, I should have saved them iteratively).

I also tried your AO method and it worked out pretty well. I had never thought before to make a duplicate of the low mesh and use it to bake AO back onto itself.

Also nice to see that you're doing it pretty much the same way I ended up doing it; I thought I might have been horribly wrong or something.
Good tip about being able to modify the lowpoly mesh afterwards since it's object-space. Leads to interesting ideas for iteration/refinement/optimisation.
Anyways, thank you for the article; gotta copy down some of the workflow things I liked.

Anyway, looking at your "scale UV charts based on importance" step, I think one could automate that process (not sure about the quality, though). Basically you place a reference camera and then detect which geometry covers the most screen space and which the least. Charts that take up more screen space relative to their real size get scaled up, to a maximum of 2x, and the others (i.e. invisible or rarely visible bits, like the clip at the back) get halved. It wouldn't be a strict cutoff but a gradual scaling with "double - half" as the limits (or better said, quadruple - quarter).

Such a script could give a first-draft UV scaling (at the beginning all charts are normalized to a common texel ratio, then scaled), which the user can later refine and lay out.
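Something like this, as a very rough sketch; the clamp values are the "quadruple - quarter" above, and the per-chart screen coverage and surface area would have to come from your own exporter/renderer (everything else here is an assumption):

    # Sketch of the "scale charts by screen importance" idea: compare how much screen
    # space a chart covers to its actual surface area, clamp the ratio to 0.25..4.0,
    # and use it as an area multiplier on charts that start at a common texel ratio.

    def chart_scale(screen_coverage, surface_area, min_scale=0.25, max_scale=4.0):
        if surface_area <= 0.0:
            return min_scale
        importance = screen_coverage / surface_area
        return max(min_scale, min(max_scale, importance))

    # In practice you'd normalize the ratio so an average chart lands at 1.0, then hand
    # the result to the artist as the first draft to refine and lay out.
    print(chart_scale(2.0, 1.0))   # close-up sight geometry -> 2.0x the area
    print(chart_scale(0.05, 1.0))  # hidden / rarely seen bit -> clamped to 0.25x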
Commanor: Yeah, that's totally fine; the important thing is just that it's exploded in some form so you're not getting intersecting objects.
World space normals won't really work for a gun in an FPS because it is moving around; it's basically your character in first person. The animations, not to mention whether it even gets swapped over to the other hand, make it fairly unusable. I can't remember any specific examples, though there were many when we were using U3.

Also, when you explode a mesh and bake it in all those separate parts, how are you handling the edges between the parts? By doing it that way I think you're going to get a lot of hard lines, which is unacceptable to some people.
And object space you can animate just fine.
Murdoc, he's using object-space, which is very different from world-space, because the incoming lighting can be rotated into the model's space. You can't do that with world-space. The only limitation I see is if the weapon has some animated pieces that rotate independently of the gun itself, like a hatch that opens... then I guess that needs to be a separate object so it can pass its own object-space transform to the pixel shader for proper lighting.

Sage/pior/etc.: tangent-space maps require a constant dialogue with the model's vertex normals in order to calculate the right lighting direction. The light direction is rotated into the tangent space of the mesh surface for every vertex, so the surfaces can be lit from the right direction. Maybe some methods do it per-pixel?

But object-space maps only require a dialogue with the model's overall transform, not each vertex's normal, so when the object rotates, the incoming lighting direction is rotated to match that one transform, not every vertex's vector. The vertices are ignored, so smoothing groups don't matter at all.
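To make that concrete, here's a tiny NumPy sketch of the object-space case (my own illustration, not anyone's engine code): the light gets rotated by the model's one transform and dotted straight against the normal sampled from the map.

    import numpy as np

    def light_object_space(normal_from_map, light_dir_world, model_rotation):
        """Lambert term for an object-space normal map sample.
        normal_from_map: normal stored in the texture, already in the model's space.
        light_dir_world: unit vector pointing toward the light, in world space.
        model_rotation:  3x3 object-to-world rotation; the whole mesh shares it, so
                         vertex normals / smoothing groups never enter the calculation."""
        light_obj = model_rotation.T @ light_dir_world  # inverse of a rotation = its transpose
        return max(0.0, float(np.dot(normal_from_map, light_obj)))

    # Example: model yawed 90 degrees around Z, light coming from world +X.
    yaw90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    print(light_object_space(np.array([0.0, -1.0, 0.0]), np.array([1.0, 0.0, 0.0]), yaw90))  # 1.0

A tangent-space shader would instead need the per-vertex tangent basis to do the equivalent rotation, which is exactly the "constant dialogue with the vertex normals" mentioned above.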
EQ, why not use floating mesh for those divots in the last pic?
For a firstperson view model on a current-gen FPS game, modelled detail like that can really help sell the realism of a weapon model.
commador, were you talking about working in exploded space from the start? One downside is that I think it would be much harder to visualize the end result.
This has been stated a few times, but the assumption that you can't deform or translate meshes with object space normals is 100% incorrect. It's uncommon, but it's not impossible or even hard; I'm pretty sure it took one of our programmers about half a day to add support for animating object space meshes. So if your programmers are telling you that you can't animate *object* space, they are wrong or misinformed, likely just don't want to do it (gasp! a programmer telling an artist something is impossible because he doesn't want to do it, never! hehe), or simply think you mean world space, or don't know the difference =P

It can and even has been used on characters in games (BF: Vietnam); not that I personally think that's efficient at all, but it's more than possible.

What? The objects in the low that are split are the same objects that are split in the highres, so you will have hard edges at the intersections no matter what you do, and you actually want that. There's no need for everything to be built from one mesh, or look like it is, because things aren't built like that in the real world.
Eric: I guess I could have, actually; I can't think of any reason why that wouldn't work =P I did it the way I did so I wouldn't have to do much UV work. But yeah, it would be entirely feasible to just throw the geo on top, or even intersecting, and just match up the UVs correctly, and you wouldn't be able to tell the difference. Well, an indent would need to be modeled in, of course, but for extrusions you could do this.

Unless you're talking about the highres mesh? In which case, yeah, all those details are floated.

Anyway, I'll try to get Jeff, our graphics programmer, to post and point out where I am wrong here and make sense of this stuff in a more technical capacity.

I think he just means he explodes after his highres is done, but before he starts modeling his lowres. This could have a couple of downsides; for example, maybe you're accidentally modeling in geo that you don't actually need, because it's hidden inside another part... But overall this should be fine; I wouldn't do it this way, but it's probably cool.
You can transform world-space normal maps, object-space normal maps, and tangent-space normal maps into other spaces, or transform the lights into them; it doesn't matter, as long as at the end everything is in the same space. Sometimes the "origin" position isn't actually needed.

Anyway, the conversion between the spaces is of course more or less efficient depending on the case; sometimes extra data needs to be sent per-vertex (which raises mesh costs), or some more costly math needs to be done at the pixel level. Whether it pays off depends entirely on how the engine takes care of things and how stuff is organized... But technically it's all possible and no voodoo; transforming from one space to another with matrices is basically the backbone of graphics/engine programming.
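Spelled out (my notation, just to make the claim concrete): if n_o and n_t are the normals stored in an object-space or tangent-space map, M is the object-to-world rotation, and TBN is the per-vertex tangent-to-world basis, then

    n_w = M\,n_o = \mathrm{TBN}\,n_t, \qquad l_o = M^{-1} l_w, \qquad l_t = \mathrm{TBN}^{-1} l_w

so you can either push the stored normal up into world space or pull the light down into the map's space; the lighting result is the same, the cost is just paid in different places.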
--
Mirroring is also possible, but it would require one more vector to be sent, to flip the normals back for mirrored UV charts.

Sending that vector from the application to the vertex shader is not so heavy (for tangent space you need the vertex normal, tangent, and sometimes the bitangent; here you just need one extra).

But when you do the lighting in world space in the pixel shader, you need 3 vectors to rotate your object-space normal to world space, and now you need another vector for the mirroring, which is one more than tangent space would need (just the 3 vectors to rotate tangent-to-world). The vectors sent from the vertex shader to the pixel shader (where the lighting happens) are even more costly and limited, and once you're also sending things like pixel positions, UV coords, and so on, you can quickly end up with many interpolated vectors needed inside the pixel shader, where three more can hurt too much. Though it all depends on how many you really need and so on... the magical "it depends".

Tangent-space textures can also be compressed better/more... I'd say those are the major reasons why tangent space is so common for deformed stuff.
Correct. I'll try to break the high res mesh down into different surface types or shapes, and when I get it to a point I like, I isolate and move everything to its own spot. Then I'll build the low around that so it's ready to bake. Once it's baked, I move the parts to their final location. Anywhere the parts intersect, I'll go in and clean up any unnecessary faces. I'm not sure how this compares to other methods, but it's worked out for me so far.

Though as others have explained, it's nice to see that people are getting around this. I'd like to know if they are using U3, because we had no luck with doing this at the time.

Here's a tutorial, even:
http://www.3dkingdoms.com/tutorial.htm
One thing I've noticed is that you can seemingly retain details better if you render a very large image and then size it down, as opposed to just using a lot of AA at the target render size; I guess that's just Photoshop's resampling magic for you.
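For the downscale step itself, a minimal Pillow sketch (file names and sizes here are just placeholders):

    from PIL import Image

    TARGET = 1024
    big = Image.open("normals_bake_4096.png")            # baked at several times the target size
    small = big.resize((TARGET, TARGET), Image.LANCZOS)  # the resampling filter does the anti-aliasing
    small.save("normals_%d.png" % TARGET)

    # Caveat: averaging pixels can leave normals slightly shorter than unit length,
    # so a renormalize pass afterwards doesn't hurt.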
I was talking about the checker texture you used. With all those gun parts in the UV layout box sized down, shouldn't the checkered squares be a lot bigger than that?

Besides, I'm not sure what you mean by "two methods"; which two do you mean? Like that tutorial vs. how it would be done today? If yes, then wait for that tutorial stuff. But basically, those loops where he processes light positions per-vertex on the CPU can move into the vertex shader.
CB: by the different ways I meant tangent vs. OS; I was just curious if you already had something you could show =D Will look forward to the next release then!

Ah... didn't know that, thanks!

Edit... here's some good reading:
http://www.rsart.co.uk/mediawiki/index.php?title=Links
Would you recommend this for anything? What if the size spec was 512? Would baking at 2048 and knocking it down 75% to 512 be good, or would most of the details become too obscured?