
FPV Weapon baking tutorial

Disclaimer: Some of this depends on using object space and won't really work with tangent space, but I tried to keep it as application-agnostic as possible. You can always follow all of these steps and then convert your object-space map to a tangent-space map at the end using xNormal's converter tool, but you will get artifacts on the split edges of your low-res mesh, etc.

Alright, well, I forgot to update the files on my USB drive when I came home, but I do have some material I started to collect about the process, so I'll go over it a bit here.

This is mainly a discussion of my workflow for creating FPV weapons; I'll cover the points I think are important. I'm not going to spend much time going over exactly how to make high-res meshes, at least not today; maybe a bit later.

Alright, so first off: at this point I already have my high-res mesh done, along with my low-poly and UVs. I slap a grid texture on, prioritize where I want the most pixel space, and resize accordingly. You can see some objects have roughly twice as much pixel space as others (150% x 150% UVs; scaling to 200% x 200% would give 4x the space).
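Since UV scale and pixel coverage are related quadratically, the numbers above are worth a quick sanity check. A tiny Python sketch of the scale-to-area math (purely illustrative, not part of the workflow itself):

```python
# A UV shell scales linearly, but its pixel coverage scales with the square:
# 150% linear scale covers 1.5^2 = 2.25x the texels (roughly "twice"),
# and 200% linear scale covers 2^2 = 4x the texels.
def pixel_area_factor(linear_scale):
    """Pixel-area multiplier for a UV shell scaled by `linear_scale`."""
    return linear_scale ** 2

for scale in (0.5, 1.0, 1.5, 2.0):
    print(f"{scale:.0%} UV scale -> {pixel_area_factor(scale):.2f}x pixel area")
```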

Ok, now that we have our UVs laid out correctly, we can move on to prepping the meshes to be baked.

[image: cgw01jl2.jpg]

Now, at first glance there may not look like a huge difference between these areas, but this stuff really, really matters. The areas that should get more UV space are the spots that will be up close, especially something like iron sights or a scope that you will zoom in on; those need the most relative space (smallest squares). Areas like the very front and the very back, which will either (a) be far away from the camera or (b) be completely out of sight, should definitely get the least coverage. I'll usually end up with a few details that get a bit more, just because they are smaller shapes and easier to pack into the UVs.

Now that we have everything to a point where we're happy with the pixel distribution, we can move on to packing the UVs into their final positions (sorry, I didn't save WIP shots of the UVs before this). Things might shuffle in size a little at this point, but I try to keep the same rules in mind and keep everything consistent (relative to how we set it up above, of course).

[image: cgwuvsmm6.jpg]

Alright, first, here are just the wires of the low-poly mesh.

[image: cgw02av3.jpg]

Now, this is really the most important, ULTRA-TIME-SAVING aspect of baking a clean normal map: we're going to make a copy of this mesh. Then we'll take both the low-res mesh and the high-res mesh and split them apart in exactly the same way; usually I do this by units that are easy to remember, like 100 or 50 or whatever. We do this so that when it comes time to bake, you don't have lots of objects intersecting and causing all sorts of errors that you would have to painstakingly paint out later in Photoshop.
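To make the idea concrete, here is a minimal Python sketch of the explode step (the data layout is hypothetical; in practice you would just move the parts in your 3D app):

```python
# Move each matched low/high pair by the same round-number offset, so every
# pair stays perfectly aligned for the bake but stops intersecting its
# neighbours. Meshes here are plain vertex lists, purely for illustration.

EXPLODE_STEP = 100.0  # an easy-to-remember unit, so it's trivial to undo later

def explode(pairs):
    """pairs: list of (low_verts, high_verts); each vert is an (x, y, z) tuple."""
    exploded = []
    for i, (low, high) in enumerate(pairs):
        dz = i * EXPLODE_STEP  # stack the pairs along one axis
        move = lambda verts: [(x, y, z + dz) for (x, y, z) in verts]
        exploded.append((move(low), move(high)))
    return exploded
```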

So here's the low-poly mesh, exploded:
[image: cgw03ne6.jpg]

And here's the high-poly mesh, exploded:
[image: cgw04mx7.jpg]

After we have all that in order, here is what I like to do (there are a few ways of doing this depending on what app you use). With xNormal, I'll make a simple image with a few colors and apply different colors to the different materials of the high-res by planar-mapping them over the corresponding color. In Max you could just use material colors, etc. This will save a SHITTON of time when it comes to creating layer groups for the different material types in your PSD later down the line. You generally want pretty contrasting colors here; this isn't the time to choose your color scheme, I just use this for creating masks. So here, white is polymer, black is metal, and grey is rubber. You can see I switched some of these materials later on while texturing, which is a lot easier to do when you have these simple masks available.
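Once the ID map is baked, pulling a mask out of it is just a color selection (Select > Color Range in Photoshop does the same thing). A small sketch, assuming Pillow and numpy, with hypothetical filenames:

```python
# Pull a black/white mask for one material out of the baked ID map.
# The tolerance absorbs the slight color bleed/AA you get around baked edges.
import numpy as np
from PIL import Image

def id_mask(id_map_path, color, tolerance=30):
    """color: (r, g, b) of the material ID; returns an 'L' mode mask image."""
    pixels = np.asarray(Image.open(id_map_path).convert("RGB"), dtype=np.int16)
    distance = np.abs(pixels - np.array(color, dtype=np.int16)).max(axis=-1)
    return Image.fromarray(np.where(distance <= tolerance, 255, 0).astype(np.uint8))

# e.g. white = polymer, black = metal, grey = rubber, as above:
# id_mask("id_map.png", (255, 255, 255)).save("mask_polymer.png")
```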

[image: cgw05es7.jpg]

I'll include the final result of this mask texture. You'll generally have to tweak a few errors in it, but that's easy. I actually render this map last, after the normals/AO are all set.

[image: cgwhighcolorsnq4.jpg]

Alright, now that we have the colors all set up on our high-res mesh, I'll do a test render to see how well the normals are turning out. There will generally always be tweaks needed to my bake mesh (the exploded mesh) at this point. And since I'm using object-space maps, I can edit this geometry in any way I see fit! That means I can add or remove edges; really, do anything I want with it whatsoever.

So I'll generally go back in and add a bunch more edge loops in certain parts if the details aren't rendering out straight (warped, crooked details, etc.). The basic idea here is very similar to maintaining hard edges in your high-res: you want more geometry in your cage to give a more accurate projection. If you're using tangent space with split smoothing groups, this may not be as much of a problem and you can skip this step. Alternatively, since this really only applies to object space, you can convert an OS map to tangent space (xNormal, go!) once you have the results you want. That tends to create artifacts around the edges where you have split smoothing groups, though, which sort of defeats the purpose.
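For the curious, what that OS-to-tangent conversion boils down to per texel is rotating the baked object-space normal into the low-poly's local tangent frame at that texel. A rough numpy sketch (the frame here is an assumed example; a real converter like xNormal rasterizes the interpolated frame from the low-poly, and the seam artifacts come from that frame jumping across split smoothing groups):

```python
import numpy as np

def os_to_tangent(n_obj, tangent, bitangent, normal):
    """Re-express an object-space normal in an (assumed orthonormal) tangent frame."""
    tbn = np.column_stack([tangent, bitangent, normal])  # tangent -> object space
    n_tan = tbn.T @ n_obj                                # transpose = inverse here
    return n_tan / np.linalg.norm(n_tan)

# Example texel: frame aligned with the object axes, normal tilted slightly.
n = os_to_tangent(np.array([0.1, 0.0, 0.995]),
                  np.array([1.0, 0.0, 0.0]),
                  np.array([0.0, 1.0, 0.0]),
                  np.array([0.0, 0.0, 1.0]))
rgb = ((n * 0.5 + 0.5) * 255).round().astype(np.uint8)  # pack to 0-255 like a normal map
```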


So here's a comparison of before and after. Ignore the messy geometry; it really *doesn't matter at all* as long as the bake comes out clean, so do whatever you like here. We won't be using this mesh for anything meaningful aside from baking.

[image: cgw06km3.jpg]


Now that we've done a pass over all of the parts of the mesh that aren't quite right, we can render out a nice, final, clean normal map!

[image: cgwnormalsec0.jpg]

Now, I don't always do this next step, but I have on the last couple of models and it tends to go a long way. When you've got the mesh all baked and can preview your normals, check which details are close to the camera but look pretty flat; once you figure that out, you can cut in some extra geometry and really make those areas pop. Doing this to a few areas helps create the illusion that more is modeled than actually is, and makes it hard to tell what is modeled and what is just normal-map detail.

You can do this before your final bake, but I tend to like doing it after, so I can keep my UVs simple. And since I'm using object-space normals, I can do anything I want to the mesh afterwards without needing to rebake. If you were using tangent space, you would definitely want to nail this sort of detail down before your final bake.

[image: cgw07zb8.jpg]


I won't bother going over my process for AO, because I've already done that here: http://wiki.polycount.net/Ambient_Occlusion_Map?highlight=%28ambient%29

Now, this may seem like a lot of crap just to get a decent normal map, but most of these steps are actually pretty simple once you get in there and do them. If anyone has any questions, feel free to ask, or if something doesn't make sense, let me know so I can edit o_O

Replies

  • Tumerboy
    FOOL! You're giving away your precious secrets BEFORE you win the contest!?

    Seriously awesome stuff though, thanks for the tut.
  • Sage
    EQ, maybe you can clarify something I noticed. When making low-poly models (cages) that don't have an even distribution of edges (perfect quads), you get all kinds of lighting errors. You could call these smoothing errors, but it's really a limitation in how vertex lighting works. I've noticed that if I don't take the time to get this even distribution of edges into the model I'll be baking the high-poly normals onto, any shading issues the low-poly model has affect the final bake of the normals. So if the low-poly model has crappy shading, the normal map picks some of that up when it gets generated. I usually just make the low-poly model look good with one smoothing group and add enough edges to get it to work. If I do this, the normals bake clean. Of course, this might be more of an issue with tangent-space normal maps than with object space.

    Any feedback you can provide on this would be great. This is just what I have noticed; I'm wondering how correct these observations are. When I first started normal mapping things, I would give the low-poly object that was going to be used with the high-res to generate the normals one smoothing group, and not care if it had shading errors. The result of the bake was crap, and it took me a while to realize that how that low-poly model shaded with vertex lighting was affecting the normal maps.

    Bahh, I think I've given enough information about my thoughts; I hope I don't put you to sleep with this post. Let me know what you think, thanks.

    Alex
  • pior
    Sage, in this example he uses object-space normals, which makes the low-poly shading irrelevant (I think). But I might be totally wrong anyway, haha!
    Also, I don't think all current engines use the good old vertex interpolation algorithm to shade meshes. I believe there are more precise algorithms out there now.
  • Sage
    Pior, I know that he is using object space, but I'm wondering about what I noticed with tangent-space baking. :D I always thought that when normals were baked, the only thing that mattered was how the high-res model shaded, but after fighting with this stuff over the years, it seems that the shading of the low-poly cage might affect the bake as well when baking tangent-space normals. Once the map is baked, the lighting it uses should be that of the pixels, but if the pixels have crappy shading from the low-poly, then it would look like crap.


    Alex
  • commador
    Very cool insight! I do wonder, though: does it make a difference if you build the model in separate parts which are "exploded" before combining them into one? The way I built my gun was to model the high-res as a bunch of different parts, then offset each piece and build the low-res mesh around them, so each one was isolated before UV mapping and baking. After the UVs were done and baked, I moved the parts back together and merged them all into one mesh.

    I wish I could show off what I have so far (in comparison to your results, it's pretty lame), but somehow I accidentally saved the diffuse over the normal map, so that's lost, as is the high-res mesh :( (it's entirely my fault, I should have saved them iteratively).

    I also tried your AO method and it worked out pretty well. I had never thought before to make a duplicate of the low-res mesh and use it to bake AO back onto itself.
  • MoP
    Nice one EQ, thanks for the tutorial.
    Also nice to see that you're doing it pretty much the same way I ended up doing it; I thought I might have been horribly wrong or something :)

    Good tip about being able to modify the lowpoly mesh afterwards since it's object-space. Leads to interesting ideas for iteration/refinement/optimisation.
  • kio
    One thing: doesn't this UV checker hurt? I couldn't work with that for more than, let's say, one minute.

    Anyway, thank you for the article; gotta copy some workflow thingies I liked.
  • CrazyButcher
    Cool. BTW, EQ, do you still use ICQ? Don't see you on anymore.

    Anyway, looking at your "scale UV charts based on importance" step, I think one could automate that process (not sure about the quality, though). Basically, you place a reference camera and then detect which geometry covers the most screen space and which the least. Charts that take up more screen space relative to their real size get scaled up, to a maximum of 2x, and the others (i.e. invisible or rarely visible, like the clip at the back) get halved. It would not be strict but gradual scaling, with double/half as the maximums (or better said, quadruple/quarter in area).
    Such a script could give a first-draft UV scaling (at the beginning all charts are normalized to a common texel ratio, then scaled), which the user can later refine and lay out.
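    A rough sketch of what such a script could compute (all names and numbers are hypothetical; this is just the clamped importance math):

    ```python
    # Scale each UV chart by how much screen space it covers from a reference
    # camera, relative to its surface area, clamped to the quarter/quadruple
    # range described above (in area; double/half in linear scale).
    import math

    def chart_scales(charts, clamp=4.0):
        """charts: list of (screen_coverage, surface_area) per chart, both as
        fractions of their totals. Returns a linear UV scale per chart."""
        scales = []
        for coverage, area in charts:
            importance = coverage / max(area, 1e-9)        # >1: big on screen
            area_factor = min(max(importance, 1.0 / clamp), clamp)
            scales.append(math.sqrt(area_factor))          # linear = sqrt(area)
        return scales

    # e.g. iron sights fill lots of screen for little surface area,
    # the stock almost none:
    print(chart_scales([(0.30, 0.05), (0.02, 0.20)]))  # -> [2.0, 0.5]
    ```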
  • EarthQuake
    Sage, yeah, generally any bad smoothing you have on your low-poly you'll end up getting in the final mesh with normals, and you can fix this by adding extra geometry or using smoothing groups; I don't think either of those is that great a solution, but yeah, it can be done. One of the bigger advantages of OS, as MoP mentions, is the ability to do anything you want with your mesh after it's baked. That means they are GOLD for making LODs: no need to bake extra LOD textures, or settle for really poor smoothing errors on LODs, etc.

    Commador: Yeah, that's totally fine; the important thing is just that it's exploded in some form so you're not getting intersecting objects.
  • Murdoc
    Neat, but a couple of comments:

    World-space normals won't really work for a gun in an FPS because it is moving around; it's basically your character in first person. The animations, not to mention the gun possibly being swapped over to the other hand, make them fairly unusable. I can't remember any specific examples, though there were many when we were using U3.

    Also, when exploding a mesh and baking it in all those separate parts, how are you handling the edges between the parts? By doing it that way, I think you're going to get a lot of hard lines, which is unacceptable to some people.
  • CrazyButcher
    When people say object-space normal maps, I'm sure that in 99% of cases they just mean non-tangent ;)
    And object space you can animate just fine.
  • Eric Chadwick
    Thanks for the tut EQ, you get great results, and that's all that matters in the end.

    Murdoc, he's using object space, which is very different from world space, because the incoming lighting can be rotated into the model's space. You can't do that with world space. The only limitation I see is if the weapon has some animated pieces that rotate independently of the gun itself, like a hatch that opens... then I guess you need that to be a separate object, so it can pass its own object-space transform to the pixel shader for proper lighting.

    Sage/pior/etc.: tangent-space maps require a constant dialogue with the model's vertex normals in order to calculate the right lighting direction. The light direction is rotated into the tangent space of the mesh surface for every vertex, so the surfaces can be lit from the right direction. Maybe some methods do it per pixel?

    But object-space maps only require a dialogue with the model's overall transform, not each vertex's normal, so when the object rotates, the incoming lighting direction is rotated to match that single transform, not every vertex's vector. The vertex normals are ignored, so smoothing groups don't matter at all.
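    A rough numpy sketch of the difference (illustrative only, not any particular engine's shader code):

    ```python
    # Object-space: one rotation per object brings the light into the map's
    # space; per-vertex normals never enter the picture.
    # Tangent-space: the light must be re-expressed in each vertex's TBN frame.
    import numpy as np

    def lambert_object_space(n_map, model_rot, light_dir_world):
        # model_rot is the object's 3x3 rotation; transpose = inverse rotation
        light_obj = model_rot.T @ light_dir_world
        return max(float(np.dot(n_map, light_obj)), 0.0)

    def lambert_tangent_space(n_map, tbn, light_dir_obj):
        # tbn columns are this vertex's interpolated tangent/bitangent/normal
        light_tan = tbn.T @ light_dir_obj
        return max(float(np.dot(n_map, light_tan)), 0.0)
    ```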

    EQ, why not use a floating mesh for those divots in the last pic?
  • MoP
    Eric, in the last pic I think that's the low-poly; he's modelled the divots in after baking the high-poly onto the flat surface. It just shows that the object-space normal map will still hold up after the low-poly is edited.
    For a first-person view model in a current-gen FPS game, modelled detail like that can really help sell the realism of a weapon.
  • Eric Chadwick
    Ah, gotcha, to add some parallax. Thanks MoP.

    commador, were you talking about working in exploded space from the start? One downside is that I think it would be much harder to visualize the end result.
  • EarthQuake
    It doesn't need to be an actual separate mesh to be transformed; all it needs is to be rigged. We have weapons rigged up in our game currently that flip around, rotate, and do everything. The shader/engine just needs to keep track of the vertex transformation, which I'm pretty sure it's doing anyway.

    This has been stated a few times, but the assumption that you can't deform or translate meshes with object-space normals is 100% incorrect. It's uncommon, but it's not impossible or even hard; I'm pretty sure it took one of our programmers about half a day to add support for animating object-space meshes. So if your programmers are telling you that you can't animate *object* space, they are wrong/misinformed, or likely just don't want to do it (gasp! a programmer telling an artist something is impossible because he doesn't want to do it, never! hehe), or they simply think you mean world space, or don't know the difference =P
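    For what it's worth, that half day of programmer work plausibly boils down to something like this sketch (numpy, hypothetical data; not the studio's actual code): rotate the sampled object-space normal by the same blended bone rotation that moves the vertex.

    ```python
    import numpy as np

    def animated_os_normal(n_map, bone_rotations, weights):
        """Linear-blend the skinning rotations (3x3 each) and apply the result
        to the object-space normal, mirroring what the vertex positions get;
        renormalize because blended matrices aren't exactly rotations."""
        blended = sum(w * r for w, r in zip(weights, bone_rotations))
        n = blended @ n_map
        return n / np.linalg.norm(n)
    ```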

    It can and has even been used on characters in games (BF: Vietnam); not that I personally think that is efficient at all, but it's more than possible.

    Murdoc wrote:
    Also, when exploding a mesh and baking it in all those separate parts, how are you handling the edges between the parts? By doing it that way, I think you're going to get a lot of hard lines, which is unacceptable to some people.

    What? The objects in the low that are split are the same objects that are split in the high-res, so you will have hard edges at those intersections no matter what you do, and you actually want that. There's no need for everything to be built from one mesh, or to look like it is, because things aren't built like that in the real world.

    Eric: I guess I could have, actually; I can't think of any reason why that wouldn't work =P I did it the way I did so I wouldn't have to do much UV work. But yeah, it would be entirely feasible to just throw the geo on top, or even intersecting, and just match up the UVs correctly; you wouldn't be able to tell the difference. Well, an indent would need to be modeled in, of course, but for extrusions you could do this.

    Unless you're talking about the high-res mesh? In which case, yeah, all those details are floated.

    Anyway, I'll try to get Jeff, our graphics programmer, to post and point out where I'm wrong here and make sense of this stuff in a more technical capacity.
  • EarthQuake
    Eric Chadwick wrote:
    Ah, gotcha, to add some parallax. Thanks MoP.

    commador, were you talking about working in exploded space from the start? One downside is that I think it would be much harder to visualize the end result.

    I think he's just saying he explodes after his high-res is done, but before he starts modeling his low-res. This could have a couple of downsides; for example, maybe you accidentally model in geo that you don't actually need, because it's hidden inside another part... But overall this should be fine. I wouldn't do it this way, but it's probably cool.
  • Eric Chadwick
    Hey, cool, glad to hear about being able to deform object-space meshes. Could come in handy. I guess the only real downside would be UV space, unless you could get mirroring going.
  • EarthQuake
    Yeah, that's still the big issue, and I think it's why most studios shy away from it.
  • CrazyButcher
    Spaces are like "orientation (3 axes) + origin (1 position)". You can transform from one to another more or less easily, and as long as the work you have to do (lighting, whatever) is all in the same space, it doesn't matter which one it is.

    You could transform world-space normal maps, object-space normal maps, and tangent-space normal maps into other spaces, or transform lights into them; it doesn't matter, as long as at the end they are all in the same space. Sometimes the "origin" position isn't actually needed.

    Anyway, the conversion between the spaces is of course more or less efficient, and sometimes needs extra data to be sent per vertex (which raises mesh costs), or some more costly math to be done at the pixel level. Whether it pays off depends entirely on how the engine takes care of things, how stuff is organized... But technically it's all possible and no voodoo; transforming from one space to another with matrices is the backbone of graphics/engine programming.

    --
    Mirroring is also possible, but would require one more vector to be sent, to flip normals back for mirrored UV charts.
    Sending that vector from the application to the vertex shader is not so heavy (for tangent space you need the vertex normal, tangent, and sometimes bitangent; here you just need the one extra).
    But when you do lighting in world space in the pixel shader, you need 3 vectors to rotate your object-space normal to world space, and now you need another vector for the mirroring, which is one more than tangent space would need (just the 3 vectors to rotate tangent to world). The vectors sent from the vertex shader to the pixel shader (where the lighting happens) are even more costly and limited. And when you send things like pixel positions, UV coords... you can quickly end up with many interpolated vectors needed inside the pixel shader, where 3 more can hurt too much. Though it all depends on how many you really need, and so on... the magical "it depends".

    However, tangent-space textures can be compressed better/more... I'd say those are the major reasons why tangent space is so common for deformed stuff.
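    The mirroring fix itself is tiny; a sketch of the idea (numpy, hypothetical data):

    ```python
    # Mirrored UV charts sample a normal that points the wrong way along the
    # mirror axis, so one extra per-vertex sign vector flips it back.
    import numpy as np

    def sample_mirrored(n_map, flip):
        """flip: per-vertex sign vector, e.g. (-1, 1, 1) on charts mirrored
        across X, and (1, 1, 1) everywhere else."""
        return n_map * np.asarray(flip, dtype=float)
    ```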
  • EarthQuake
    Another thing worth mentioning is that you can easily store tangent-space normals and object-space normals in the same texture. I've done this a few times; to make it work, you just need two separate materials for the different chunks.
  • commador
    Eric Chadwick wrote:
    commador, were you talking about working in exploded space from the start? One downside is that I think it would be much harder to visualize the end result.

    EarthQuake wrote:
    I think he's just saying he explodes after his high-res is done, but before he starts modeling his low-res. This could have a couple of downsides; for example, maybe you accidentally model in geo that you don't actually need, because it's hidden inside another part... But overall this should be fine. I wouldn't do it this way, but it's probably cool.

    Correct. I'll try to break the high-res mesh down into different surface types or shapes, and when I get it to a point I like, I isolate everything and move each piece to its own spot. Then I'll build the low around that so it's ready to bake. Once it's baked, I move the parts to their final locations. Anywhere the parts intersect, I'll go in and clean up any unnecessary faces. I'm not sure how this compares to other methods, but it's worked out for me so far. :)
  • Murdoc
    I got my terminology messed up, but yeah, in my experience an object-space normal map won't work with a gun. When it moves around while being animated (think reloading), the gun is being rotated and moved around.

    Though as others have explained, it's nice to see that people are getting around this. I'd like to know if they're using U3, because we had no luck with doing this at the time.
  • EarthQuake
    It's a relatively simple shader change, just a few lines of code. It should work in any engine that supports shaders and rigged meshes. Your programmers just need to understand the math.

    Here's a tutorial, even:
    http://www.3dkingdoms.com/tutorial.htm
  • CrazyButcher
    That tutorial is quite outdated, however, and does a lot on the CPU; but yeah, a somewhat different approach with the same result is possible fully in shaders now.
  • EarthQuake
    CB: would you happen to have any code examples showing the difference between the two methods, just out of curiosity?
  • Joshua Stubbles
    Great work EQ, much appreciated.
  • OBlastradiusO
    How the hell did you fit all that on the UV layout with almost perfect resolution?
  • EarthQuake
    Not sure what you mean? The normals picture is a 512x512, sized down and sharpened a little from the 2048x2048 it was baked at (don't worry, I'll size it down to 1024 for the final shots =P)

    One thing I've noticed is you can seemingly retain details better if you render a very large image and then size it down, as opposed to just using a lot of AA at the target render size. I guess that's just Photoshop's resampling magic for you.
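    The downsize step amounts to something like this (Pillow, hypothetical filenames); baking big and letting a good resampling filter average the detail down often beats heavy AA at the target size:

    ```python
    from PIL import Image

    baked = Image.open("bake_2048.png")  # rendered at 2048x2048
    small = baked.resize((512, 512), Image.LANCZOS)
    # strictly speaking, a resized normal map should be renormalized afterwards
    small.save("final_512.png")
    ```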
  • Murdoc
    Well, I weep for those programmers then; very disappointed in them :)
  • OBlastradiusO
    EarthQuake wrote: »
    Not sure what you mean? The normals picture is a 512x512, sized down and sharpened a little from the 2048x2048 it was baked at (don't worry, I'll size it down to 1024 for the final shots =P)

    One thing I've noticed is you can seemingly retain details better if you render a very large image and then size it down, as opposed to just using a lot of AA at the target render size. I guess that's just Photoshop's resampling magic for you.

    I was talking about the checker texture you used. With all those gun parts in the UV layout box sized down, shouldn't the checkered squares be a lot bigger than that?
  • CrazyButcher
    I will provide sample shaders with the next major Luxinia release (which sadly is still a bit away; I'm swamped with work ATM), but I will have mirrored local-space normal maps, as well as animated ones, just for the sake of it ;) and also provide tutorials. I just want to get lots of new documentation content and tutorials done, plus an improved asset import pipeline. ATM, without proper tutorials, it's basically just us who can use Luxinia to its full extent easily ;)

    Besides, I'm not sure what you mean by "two methods"; which two do you mean? Like that tutorial vs. how it would be done today? If so, then wait for that tutorial stuff; but basically, those loops where he processes light positions per vertex on the CPU can move into the vertex shader.
  • EarthQuake
    Oblast: that really just depends on how large the squares in your checker map are, hehehe.


    CB: by the different ways I meant tangent vs. OS; I was just curious whether you already had something you could show =D Will look forward to the next release then!
  • Eric Chadwick
    I often increase my checker map's tiling in the material while I'm UV-ing, just to get finer detail, less blurry while I'm working on things. Then I set the material back to 1x1. Might help you, OblastoramaO.
  • OBlastradiusO
    Eric Chadwick wrote:
    I often increase my checker map's tiling in the material while I'm UV-ing, just to get finer detail, less blurry while I'm working on things. Then I set the material back to 1x1. Might help you, OblastoramaO.

    Ah... didn't know that, thanks!
  • OBlastradiusO
    One more thing: couldn't you have made the texture layout 1024x512, just to have more UV space/room?
  • Eric Chadwick
    The bigger the texture, the more video memory it uses. Every game has a texture budget; if you use too many bitmaps, or they're too large, the game's framerate slows down while it waits for the textures to load into/out of memory.

    Edit... here's some good reading.
    http://www.rsart.co.uk/mediawiki/index.php?title=Links
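    Some back-of-the-envelope math behind the budget argument (illustrative numbers; real costs depend on format and compression): doubling one texture dimension doubles the memory too.

    ```python
    def texture_bytes(width, height, bytes_per_texel=4, mipmaps=True):
        """Uncompressed RGBA8 size; a mip chain adds roughly a third on top."""
        base = width * height * bytes_per_texel
        return base * 4 // 3 if mipmaps else base

    print(texture_bytes(512, 512) // 1024, "KB")   # ~1365 KB
    print(texture_bytes(1024, 512) // 1024, "KB")  # ~2730 KB, twice the cost
    ```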
  • commador
    EarthQuake wrote: »
    Not sure what you mean? The normals picture is a 512x512, sized down and sharpened a little from the 2048x2048 it was baked at (don't worry, I'll size it down to 1024 for the final shots =P)

    One thing I've noticed is you can seemingly retain details better if you render a very large image and then size it down, as opposed to just using a lot of AA at the target render size. I guess that's just Photoshop's resampling magic for you.


    Would you recommend this for anything? What if the size spec were 512? Would baking at 2048 and knocking it down 75% to 512 be good, or would most of the details become too obscured?
  • EarthQuake
    Rendering out larger isn't something I do all the time, but it can help if you have really fine details that aren't showing up quite right, or are getting poorly anti-aliased. Not sure what you mean about 75%, though? A 512x512 is actually 1/16th the size of a 2048x2048.
  • commador
    Heh, I guess that's what being up for over 20 hours gets you. 75 is a 1024. I'm going to sleep now.
  • EarthQuake
    1024 is 25% =D You can fit four 1024x1024s into one 2048x2048.
  • Tumerboy
    LOL, I was confused at first too, EQ. I think he's saying it's a "75% reduction from 2048".