[Technical Talk] - The ultimate be-all end-all normal mapping thread

perna
Summary: Ask all your nooby normal mapping questions in this thread, and we'll compile the info in a sticky to lower the level of noise on this subject and make things clear once and for all. I'll just say one thing to those wanting to volunteer answers: get your facts right. I'm up to here with people trying to educate others when they have no idea what they're talking about and just make the situation worse, like saying that the low and hipoly models shouldn't intersect or some messed-up stuff like that. Sorry, you guys just make the situation a lot worse.

Long rambling version of the above:
ok.. this... wtf...
I know there are moments when I don't understand shit about something and have such a lack of confidence in that subject that I just want to be walked through the whole thing from step one. So I can understand that whole deal. It's like when I'm in bed with a woman: I still haven't figured out what goes where. You're right, Alex, there should be some sticky, if only to shut people up about this stuff, because the same questions get answered several times a week, I feel. Oh, and it can be nice to be helpful. I just remember now how a lot of the people who teach this stuff today needed a hell of a lot of teaching themselves.

OK, let's do a new take on this. Of course I and some of the more experienced guys can give you all the information you need, we just think there are things so obvious you don't need to hear them. Let me get a sense of where the most nooby of the noobs are, ok? And let me understand if you're just damn lazy, or if you're actually making an effort.


So I'll try out some random questions just to test the waters.


Bounchfx: [ QUOTE ]
how do you go about re-building the low poly model on top of the high poly one if the high poly one is way too many polygons for your computer to handle?

[/ QUOTE ]

Don't you understand that you can just use a lower subdivision level of the hipoly that runs fine in the viewport? I'm not asking to insult you, I really want to get an idea of where you guys stand here. Feels like being one of those TV cops trying to track down a murderer by psychologically evaluating his crimes.

ok noobs, do you understand and can you visualize the direction that rays travel from the lowpoly to scan the hipoly model during processing time? This is vital.

What do you feel are the slowest steps of the whole modeling and generating process?

Do you have the necessary viewers/viewport shaders you need to properly check your results?

What kind of timeframes do you work with and what kind of timeframes would you LIKE to work with?

Where exactly does it fail for you?

And so on. Don't be embarrassed to ask stupid questions. I might make fun of you, but hey, you'll get the information you want.

Also, let me tell you this, so you won't think that you're a complete moron for failing at this stuff: some of the guys you look up to, and who teach you about normal mapping now, were completely retarded on the subject, and some of them took years to understand the basics.

I figured all this stuff out 6 years ago, so there's no question you won't get answered; just make at least a half-hearted attempt at prose. State which software you're using, what you're trying to do versus the results you're getting, and include illustrative images as needed.

quick addendum: Don't start moaning like a crybaby about how a poster hates you if you don't get the answer you wanted. Most likely you have to rephrase or elaborate on your question.

Replies

  • perna
    No joke answers this time around, guys, let's get a good idea of where you all are.
  • jgarland
    Great idea for a thread, Per. I don't have any issues with normal mapping right now, but I'm sure I will sometime down the road. I'll know who to pester from now on. wink.gif
  • Brice Vandemoortele
    In this tutorial the author shows differences between the render and realtime, only referring to them as "smoothing errors". Where does this come from? Are there different methods for generating normals on the fly from the geometry, and how can the results differ? What are those methods called, and which package uses which one? Would storing per-vertex "proper normals" (matching the generated normal map) solve these issues?

    thx wink.gif
  • perna
    Brice:

    In realtime rendering, vertex data is generally linearly interpolated between vertices.
    For lighting, the original reason ("Gouraud" shading) is to give the appearance of curvature where there is none. In fact only faceted shading is ever "correct", but by having gradual transitions between verts, we can do things like make a 5-sided cylinder appear round.

    Now, in offline rendering the interpolation does not have to be linear, which means that between two verts on a cylinder, the normal can move in a curve (the curvature is determined by analyzing surrounding geometry). Because of this, the shading on an offline lowpoly cylinder can look indistinguishable from that of a hipoly version of the same geometry.

    Non-linear shading can be performed in realtime shaders, though due to the calculations involved it would often be more beneficial to rendering speed if you simply add more geometry instead.
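    To make the interpolation point concrete, here is a tiny sketch (my own illustration, not from the thread) comparing the renormalized linear interpolation of two vertex normals against the true normals of a circular arc between the same two verts; the few degrees of difference are part of why a realtime lowpoly cylinder shades differently from an offline one.

    ```python
    import numpy as np

    n0 = np.array([1.0, 0.0])   # vertex normal at one end of the edge
    n1 = np.array([0.0, 1.0])   # vertex normal a quarter-turn around the cylinder

    for t in np.linspace(0.0, 1.0, 5):
        lerped = (1 - t) * n0 + t * n1
        lerped /= np.linalg.norm(lerped)    # what linear interpolation gives the pixel
        true = np.array([np.cos(t * np.pi / 2),
                         np.sin(t * np.pi / 2)])  # normal of the real curved surface
        error = np.degrees(np.arccos(np.clip(lerped @ true, -1.0, 1.0)))
        print(f"t={t:.2f}  error={error:.1f} degrees")
    ```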

    In offline rendering there are also other similar tricks that will improve visual quality. In max, for example, it takes me half a second to render a box, while my graphics card can draw literally millions of them in the same time.

    When you use world space normal maps, the problem goes away, since you're no longer relying on the existing, crude vertex normals of the mesh.

    When you consider that each polygon influences the normal of the verts it touches, it may make sense to say that relatively smaller polygons should contribute less than larger ones. This can help cases where, say, a big smooth surface has bad lighting because a small piece of geometry protrudes from it. How solutions like this are handled, however, depends on the generator and renderer.
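    If you want the weighting idea in code form, here is a hedged sketch (illustrative only, not any particular package's method) of area-weighted vertex normals, where a tiny protruding face barely shifts the shading of the big surface it touches:

    ```python
    import numpy as np

    def face_normal_and_area(a, b, c):
        cross = np.cross(b - a, c - a)
        area = 0.5 * np.linalg.norm(cross)
        return cross / (2.0 * area), area          # unit normal, triangle area

    def weighted_vertex_normal(faces):
        """faces: list of (a, b, c) position triples for the triangles sharing the vertex."""
        total = np.zeros(3)
        for a, b, c in faces:
            n, area = face_normal_and_area(a, b, c)
            total += n * area                      # big faces dominate the result
        return total / np.linalg.norm(total)
    ```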

    edit: oh, and this is an advanced topic, Brice is just hardcore and you won't be expected to know this stuff to work anywhere... though it can't hurt.
  • Joao Sapiro
    per is made of winsauce BBQ ! u monkey...
  • Rick Stirling
    Howsabout a few:

    Q. Should I build the lowpoly or highpoly first?
    A. It's up to you.
    * Some people build a lowpoly model, then up-res it and sculpt it.
    * Some people build and sculpt a high resolution model and then build a new low poly model around that.
    * Some people build a low res model, upscale that, sculpt and detail it, then step down a few levels of detail and use that as their low poly (or as a base for building a better lowpoly).



    Dunno if this will be useful, but when explaining what a normal map actually was to someone, I explained the difference between a pixel and a polygon, and how UV maps worked. Once this was understood I said that basically, on a normal map, each pixel was pretending to be a polygon.
  • perna
    I think it's an issue of semantics. If you have a lowpoly model, then quadify, distribute its elements evenly and clean it up to become a cage, then make a hipoly model from THAT, then you didn't start the hipoly from the lowpoly, you started it from your cage.

    There's a very large distinction between lowpoly and cage, and only in extremely rare cases can they both be the same model.

    I would like to ask a question now: For those of you who see any advantages to building and optimizing a game model before you start working on the hipoly, what are they? I see a lot of people discuss this subject without ever stating what the advantages would be of finishing the lowpoly work first. If you don't see any advantages to it, why are you even contemplating doing it that way? It's like, I don't see any advantages to sleeping in the sewers, so I DON'T.

    Not saying there can't be any advantages to a low-first approach, mind you, I've worked like that at times too.
  • perna
    Rick: What I say along the same lines is first teach them about vertex normals, and when they get that, I say: well, now imagine you have such a normal at each texel rather than at each vertex. I guess I cover that in the powerpoint thing on my site.
  • jgarland
    What do you mean by "the difference between a pixel and a polygon," Rick? Surely they couldn't have gotten this far in the industry, learning to use normal maps, without understanding the difference between the two?
  • poopinmymouth
    Per, I've made a low poly first, and used it as my cage in Mudbox without any adjustments or quadification. In that instance the pros were:

    Approval of the in game model first (important for this client)
    I knew what the low poly would hold, since I already had it.
    It seemed to go faster for my personal workflow.

    That said, I've only done two models that way, but it worked out pretty well in that instance. Very few triangles anyway, and it was low enough that when subdivided a few times in mudbox, it didn't get any kinks.

  • perna
    poop: Well I got to ask you now, has there ever been a case where the lowpoly doesn't "hold" as you say? smile.gif I mean you're experienced now.. lowpoly modeling tends to be a minor detail at the end of a project, not a big challenge exactly.
    Maybe you can get into why it was faster for you that way?

    You know, I want to pick everything apart so we can get to the depth of things once and for all, not because I'm saying one way is better than the other. It's like with the square UV thing; sometimes you do stuff because it "feels" right, not necessarily because it has any real advantage.

    I wouldn't want this thread to dissolve into "whatever works for you".. which is very like hippie, peace and love, and everything, but is really worthless for someone looking to be awesome at what he does.

    Like with you, I worked from lowpoly models because the client already had them made.
    Mind you, I didn't subdivide the lowpolies, I used them as base for the cages. In some cases, like with you, there's been very little difference between low and cage, but those are exceptions.

    If you're given 20k tris to do a character and told to optimize for polystrips, then your low may very well end up looking like a nice cage. If however you're working with 6k tris and every poly counts, that mesh is going to make a terrible cage.

    My point of view is that lowpoly models and cages are so easy and quick to make that you might as well always take advantage of optimizing them both for what they are.
  • poopinmymouth
    Oh to be certain, if given my own preference, I like working with a cage that I upres for the high poly and optimize for the low. No question. I find that to yield the best results.

    However if I'm trying to be super fast, building a game res, sculpting it up quickly and then baking, can be faster. Not sure why I feel that way, I've never tried it twice on identical models to check. It just feels like the shortcut hack that yields passable but faster results.

    Oh and by "hold" I mean what amount of silhouette I can make with the poly limit I'm given. If I model the game res first, I can figure out if the pocket on the pants can be an extruded volume, or if it has to sit flush on a tube. That can dictate how I sculpt it, flat, or more volumetric.

  • perna
    haha this back-and-forth is working, you just brought up a great point.. the shortcut hack. Personally I find it hard to turn something in that isn't like Per128 representative. Often the clients are more than happy with something that is just ok looking, and it's important to know how to churn stuff like that out.

    I got to stop writing how I speak, I mean not because it offends people but it just takes too damn long smile.gif
  • Joao Sapiro
    I usually work on the highpoly first, then import a mid level of detail into Max and build the lowpoly around it. But on a recent job I had to present a game model with UVs before even starting the highpoly. What did I do? I made the highpoly basemesh, built the lowpoly around that, and baked normals from it to see if it could hold up, and it worked. Twice the work, but I delivered ahead of schedule...

    P.S - yeah im dumb.

    P.P.S - awesome thread.
  • Joshua Stubbles
    [ QUOTE ]
    I got to stop writing how I speak, I mean not because it offends people but it just takes too damn long smile.gif

    [/ QUOTE ]

    Are all norweeeeeeeeeGiAN's long winded?
    You're a mean bastard though, Per. Thank you. laugh.gif

    I usually work on a middle-road cage, then split off into each version. Usually I'll do the low poly first, then the high (with the low poly semi-transparent in the same scene).
  • JKMakowka
    Ok here is my question:
    I was told that it is best to tweak the lowpoly mesh before rendering the normal map and then just use the resulting texture with the original untweaked lowpoly mesh (since the UVs are the same). This all makes perfect sense to me in theory, but does anyone have any good tips on how to tweak an actual model for normal map generation?
  • perna
    JKMakowka: I take it you're talking about when you have normal map warping, but are at the top of your polybudget, so you want to add polies to the generation mesh that are not added to the game mesh.

    OK, if we just assume for now that you know how and where to add those polies, I'll briefly explain why that technique can be good sometimes. The lowpoly normals control the direction of each ray. When normals point in all sorts of weird directions, you can end up with a skewed result, like a perfectly round shape on your hipoly turns out oval in the normal map.

    When you add extra, temporary geometry, you force the normals in that area to angle more towards the position of the geo you added, for example to avoid interference from polies wrapping around a corner or edge. This is great for capturing the shapes correctly, but since the new geometry won't exist in game, there's now an inconsistency that results in lighting errors. Sometimes it's not very noticeable, and I've used this technique myself on mechanical work.
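    Here is a small numerical sketch of that skew (mine, not from any baker): bake rays start on the lowpoly plane and travel along the vertex normal interpolated at each point, so when the two vertex normals are splayed apart, as happens where a surface wraps a corner, a round detail on the hipoly ends up with a squashed, oval footprint in the map.

    ```python
    import numpy as np

    def footprint(n0, n1):
        """Width, in the map, of a unit circle sitting on the hipoly at z = 1."""
        xs = np.linspace(-2.0, 2.0, 801)          # lowpoly points between two verts
        hits = []
        for x in xs:
            t = (x + 2.0) / 4.0
            d = (1 - t) * n0 + t * n1             # interpolated ray direction
            d = d / np.linalg.norm(d)
            step = 1.0 / d[2]                     # travel until the ray reaches z = 1
            px, py = x + step * d[0], step * d[1]
            if px * px + py * py <= 1.0:          # this ray lands inside the circle
                hits.append(x)
        return max(hits) - min(hits)

    up = np.array([0.0, 0.0, 1.0])
    splayed0 = np.array([-0.6, 0.0, 1.0])         # normals leaning away from each other
    splayed1 = np.array([ 0.6, 0.0, 1.0])
    print("straight rays:", round(footprint(up, up), 2))               # ~2.0, stays round
    print("splayed rays: ", round(footprint(splayed0, splayed1), 2))   # ~1.53, reads as oval
    ```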
  • StJoris
    Thanks for having this awesome thread.
    My question:
    I usually have trouble getting the cage right; I'm not sure whether tweaking the individual points fixes things or makes things worse. If I manually move points in other directions, not necessarily along the normal direction used when doing "push", it sometimes seems to be beneficial. Especially with cylinders it seems to work well, but I'm wondering whether in other cases this might mess up more than it helps. Is it bad when the cage intersects? I have had some problems preventing it from intersecting when I had tubes running close to a surface.
  • perna
    The cage controls the direction of the rays, meaning it's no longer in the hands of the normal directions, and you'll have less need for techniques like adding temporary geometry to control your result.

    I think the best thing to do would not be to explain how to use the cage, but to make clear the nature of how rays travel and how normal mapping actually works.

    There's a ppt file somewhere on my old tutorial page that deals with it, maybe you'll want to have a look at that and come back here with your findings? It's got illustrations, which is more than what I have time for right now.

    As for intersection.. the position of each cage vert controls the end point of a ray range, so it should always extend past the geometry you're baking. I may have misunderstood you on that one though.
  • JKMakowka
    Yeah, that was part of what I meant, but I was also wondering if relaxing the mesh slightly, or moving certain polys closer to (or further away from) the highpoly mesh, would help.
    Sure, the result is then slightly inconsistent, but it might be worth it in that particular area of the mesh.
  • perna
    JKMakowka: I hope we get some illustrations up here, because what's going to help you the most is to be able to confidently visualize the rays on a per-case basis, much like Einstein famously did. When you understand the nature of how the trace rays move, you can predict the result.

    I've moved away from cages since I tend to edit texture, lowpoly, uvmap and even hipoly simultaneously towards the end of the project, to get everything to match up, and because of this need to be able to move between any stage freely.
  • perna
    Can anyone tell me whether their software has a ray limiter function? It would be a mesh that determines the length of each ray by stopping traces upon collision with it. That way, it doesn't need to match the lowpoly vert count in any way.

    I'm sure I've seen it quite a while ago, but can't remember in which app.
  • Joseph Silverman
    I think he means when the cage intersects with itself. Which I don't imagine would be good, but would also appreciate the answer to. Specifically why/why not, exactly.
  • Sage
    Okay, I consider myself a noob. Why? Because I don't yet make a living just doing game art and have to work a shit job to support myself, and I don't yet have a published game under my belt. This is as far as I have gotten making normal maps.

    Big problem number one: failing to see the importance of how you orient your UV islands when baking your normal maps. It's important and solves most problems noobs have with normal maps. I can give a half-assed answer, and I'll describe what I think is going on so someone who knows the full story can educate me. Say you make a tank chassis and unwrap it; here is my cool tank as an example. I unwrapped it considering the orientation the geometry had and matched my UV islands to it, because the orientation of the UV islands affects how the normals are generated. I generated two normal maps to illustrate what I mean. My question used to be: does the normal map generator consider the model itself, or the UV island orientation?

    ct_uv_n.jpg

    rotated uv layout and rebaked normal mapped


    ct_uv_nr.jpg


    model shots

    hi
    c_tank_h.jpg

    low
    c_tank_l.jpg

    low with normal map

    c_tank_p.jpg

    If my images are too big I'll shrink them.

    My process:

    1. Make low poly and unwrap to my liking. Optimize it so it's as efficient as possible.

    2. Copy the low poly before I remove quads, for example, call it "hi" and add more detail if needed. I try to get a nice smooth result in the lighting with this, since the low poly model will use the normals for shading. If there are areas that look weird I'll change the smoothing groups, turn edges, add quads. This model can be subd'd if needed.

    3. Make all the floating geometry I want to finish the model.

    Problem number two. What's the proper use for normal maps? If I wanted to make my tank chassis not look like it has those sharp angles should I bevel those angles for the low poly game model or should I try to fake it with the normal map? Is the normal map supposed to contain small details like pores, rust, dents or is a bump map still used for that?

    This is it for now, I'll ask more later. Thanks.

    Alex
  • conte
    good thread, Per.
  • JordanW
    I'm not entirely sure what your first question is; are you asking if you have to orient your UV sections a certain way? If so, then in my experience the answer would be no.

    Question 2, It generally does help to have bevels on the model, it makes for a cleaner normal map and helps avoid bends in light. Plus you get rid of sharp angles which can hurt the illusion the normal map creates. As far as pores and dents go, you can go ahead and put those into the normal map (if the pipeline doesn't use bump maps).
  • Daz
    [ QUOTE ]

    I would like to ask a question now: For those of you who see any advantages to building and optimizing a game model before you start working on the hipoly, what are they? I see a lot of people discuss this subject without ever stating what the advantages would be of finishing the lowpoly work first.

    [/ QUOTE ]

    Circumstance has forced me into working this way a couple of times. Not many, just a couple. In a nutshell, we were a 3-man team consisting of character artist, character TD and animator, working against a hard deadline to complete a demo to show our venture capitalists that we hadn't been spending their money in Vegas.
    Now, in that instance, with each team member having a very specific role and little time, it made infinitely more sense for me to nail the mass and proportions of the game resolution model in a day and hand it off for rigging and animation, than to have those guys twiddling their thumbs for days while I sculpted leisurely and merrily away in Z. So really, it's a time management issue. It's definitely a bit more scary, since there isn't too much scope for tweaking stuff at the sculpting stage if someone else is well into walk and run cycles for the creature. But yeah, where possible I'd start with the sculpt.
  • perna
    Thanks for contributions!
    Just a quick one before bed...
    SupRore: Self intersection as I understand it would mean the rays are crossing at some point. Now, if the hipoly mesh is placed beyond that point, the end result on the normal map will be a mirror image of that area.

    Again, a ray-path visualization illustration would be super useful here. Basically, draw a line from each vert to its corresponding cage vert. That's the path the ray is going to travel. All the rays between two verts will have angles that are interpolated between them. Wherever the ray intersects the hipoly is where the normal will be read and plotted back onto the normal map.
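    A minimal sketch of one bake sample in the terms Per uses above (the ray/triangle test is standard Moller-Trumbore; the function names and setup are just illustrative, not any baker's actual code):

    ```python
    import numpy as np

    def ray_hits_triangle(origin, direction, tri, eps=1e-8):
        a, b, c = tri
        e1, e2 = b - a, c - a
        p = np.cross(direction, e2)
        det = e1 @ p
        if abs(det) < eps:
            return None                       # ray parallel to the triangle
        inv = 1.0 / det
        s = origin - a
        u = (s @ p) * inv
        q = np.cross(s, e1)
        v = (direction @ q) * inv
        t = (e2 @ q) * inv
        if u < 0 or v < 0 or u + v > 1 or t < 0:
            return None
        return t                              # distance along the ray to the hit

    def sample_normal(low_vert, cage_vert, hipoly_tris):
        direction = cage_vert - low_vert      # the ray path Per describes
        best = None
        for tri in hipoly_tris:
            t = ray_hits_triangle(low_vert, direction, tri)
            if t is not None and (best is None or t < best[0]):
                n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
                best = (t, n / np.linalg.norm(n))
        return None if best is None else best[1]   # the normal that lands in the map
    ```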

    For a cylinder, ideally you'd want all the top triangles to have normals that face upwards and the side triangles to have normals that face outwards, in a circle. That would give you the best result. However, when both top and sides are in the same smoothing group, faces on the side and top share vertices, and since normals are per-vertex, you now have a 45-degree normal angle around the rim. For many cylinders, you'd want to grab the rim verts in the cage and pull them down so they describe rays/paths that are perpendicular to (stick out from) the cylinder sides. What will happen now is that you should get perfect cylinder sides... the caps, on the other hand, will have rays that travel in up to a 180-degree arc at the extreme. In some cases this may be ok, where the surface there is simple, or hidden by other geometry, like a tube sticking out of it or something.

    These things tend to be worth playing around with only if you can't afford to add more faces to smooth out the normal transitions.

    You can set the cap elements to a different smoothing group. You'll get a hard edge, which may not be too bad in some cases. Ok, this leads to another issue, which I'm going to describe just superquick now:

    The render engine doesn't care about polygons as much as vertices. When your art lead gives you a poly budget, he's basically giving you an approximate vertex budget. When you use smoothing groups (I take it everyone's familiar with the term, even if it's max-centric), the engine breaks up vertices at that point, even though they seem connected in the 3d editor. This, of course, increases the vertex count. Basically what this means is that instead of setting cylinder caps to another smoothing group, you might as well chamfer the whole ring there. The end vert count will be the same. The reality of performance is a hell of a lot more complex than polycount. Often by optimizing and triangulating a mesh too much, you're creating something that is less optimal to render than if you actually ADDED geometry. One thing all this translates to is that once you've hit your 5000 poly budget for a model, you should in most cases be free to add a couple hundred tris to fix normal mapping errors; it's going to mean zilch for the render performance.
    Well, better yet, always keep in mind that you're going to be adding geometry AFTER you've unwrapped and tested normal maps on your model. I always aim a little lower than my budget, then slightly exceed it after adding geo to fix shading. That's a lot better than being stuck at 5000 with no room to navigate since you've carefully balanced your tri usage so well that you can't remove a couple without the whole thing falling apart.
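    A back-of-the-envelope version of the smoothing-group point above, with made-up cylinder numbers; the only rule it encodes is that a vert sitting on a hard edge or UV seam gets stored twice by the card.

    ```python
    def render_verts(positions, verts_duplicated_by_splits):
        # the GPU stores one normal (and one UV) per vertex, so every vert on a
        # smoothing split or UV seam exists twice in the vertex buffer
        return positions + verts_duplicated_by_splits

    # 12-sided cylinder, 12 positions per rim, UV seams ignored for simplicity
    smoothing_split = render_verts(24, 24)   # caps hard-edged: every rim vert doubles
    chamfered       = render_verts(48, 0)    # extra loop of 12 positions per rim, all smooth
    print(smoothing_split, chamfered)        # 48 vs 48: same vert cost, chamfer usually shades better
    ```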

    Phew, I'm gonna have to learn to be more concise
  • Joseph Silverman
    Ah, thanks for the excellent post. smile.gif

    The quick breakdown of verts/smoothing groups in engine helps a lot. I'd read before that separate smoothing groups would double the vert count on the edges, but for some reason it never clicked with me that this would mean chamfering the loop was pretty much the same performance-wise.
  • Rob Galanakis
    Per, on the subject of chamfer/bevel/hard edges, I wrote up a somewhat brief article a month or two ago:
    http://www.twcenter.net/wiki/Beveling
  • Brice Vandemoortele
    hehe no Per, glad somebody tries to explain what lies behind it :) please don't be concise ^^

    [ QUOTE ]
    In realtime rendering, vertex data is generally linearly interpolated between vertices.

    Now, in offline rendering the interpolation does not have to be linear
    Non-linear shading can be performed in realtime shaders, though due to the calculations involved it would often be more beneficial to rendering speed if you simply add more geometry instead.
    When you use world space normal maps, the problem goes away, since you're no longer relying on the existing, crude vertex normals of the mesh.
    edit: oh, and this is an advanced topic, Brice is just hardcore and you won't be expected to know this stuff to work anywhere... though it can't hurt.

    [/ QUOTE ]

    thx Per for your answers :) I get your point about interpolation, but I still can't figure out why results vary from one piece of software to another, both in realtime rendering. Maybe some renormalize the interpolated vertex normals while others don't? Are there different algorithms for generating the normals?
    For example, Max doesn't display seams on UV borders most of the time, while Maya does. I think it's because Maya generates the vertex normals on the fly for cgfx rendering (high quality mode is basically a big cgfx), while Max stores some kind of precalculated normals. Since vertices are split on UV borders as well as hard edges - as you mentioned above - the normals are slightly different, showing a seam.
    What do you think? I do understand adding geo will help the normal map cover a tighter angle, I'd just like to know, hehe. It will help me move more easily from one package to another.
  • Gav
    [ QUOTE ]

    Circumstance has forced me into working this way a couple of times. Not many, just a couple. In an nutshell, we were a 3 man team consisting of character artist, character TD and animator, working against a hard deadline to complete a demo to show our venture capitalists that we hadn't been spending their money in Vegas.
    Now, in that instance, with each team member having a very specific role and little time, it made infinitely more sense for me to nail the mass and proportions of the game resolution model in a day and hand it off for rigging and animation, than to have those guys twiddling their thumbs for days while I sculpted leisurely and merrily away in Z. So really, it's a time management issue. It's definitely a bit more scary, since there isn't too much scope for tweaking stuff at the sculpting stage if someone else is well into walk and run cycles for the creature. But yeah, where possible I'd start with the sculpt.

    [/ QUOTE ]

    I've used this workflow before as well for more or less the same purposes. Getting work to the animator faster so that he could animate while I sculpted and we'd both finish for the deadline.
  • bounchfx
    Okay, this thread is awesome, but I have a few (undoubtedly noobish) questions.

    You say cage and low poly, I think I'm getting confused, and I think I'm about to answer this myself.. it just got confusing the way you guys were wording it..

    CAGE is what you use to build the High Poly from, Right? and the low poly is obviously the model that will be used in game that the high poly is projected onto? I guess I just got confused with the term CAGE.

    Ok, the other thing is you were talking about having intersections when you go to bake the normals in.. between the hi and low poly models.. and this is.. ok to do? having them intersect? Not sure I fully grasp that yet, but if that's true I guess you are trying to say that it shoots the rays out both ways and not just inside from the low to the high? (or whatever)

    and here "Basically draw a line from each vert to its corresponding cage vert." I think I might be reading the context wrong but I believe you are talking about from the high to the low (vert coming from hi, cage vert being low?), how can there be corresponding verts because the high poly is going to have a shitload more than the low obviously

    I hope I'm not getting too off-track or reading stuff the wrong way, any help is hugely appreciated, this info is gold for trying to learn and understand better.
    Thanks!!
  • Joseph Silverman
    Projection cage, which casts the rays that determine the normal map's rendering -- from Ben's tutorial --:

    normal_13.jpg

    Doesn't theoretically have to be your lowpoly, as far as I understand, the uvs just need to line up.
  • bounchfx
  • Rob Galanakis
    Profanity on message boards is fine when it's used with reason. Honestly, it makes you seem retarded when each sentence contains the f word, not to mention quite annoying, at least to me.

    Cage is your projection cage, or a quaded-out low-res model that you sculpt from. Different contexts.
  • bounchfx
    yeah, I'll cut the bombs, they are unnecessary, I was just getting flustered. I was thinking correctly when I was thinking about the 'cage' then, and wasn't even thinking about the fact that you can tweak the cage as much as you needed. d'oh. Thanks.
  • StJoris
    [ QUOTE ]
    Again a ray-path visualization illustration thing would be super useful here. Basically draw a line from each vert to its corresponding cage vert. That's the path the ray is going to travel. All the rays between two verts will have angles that are interpolated between them. Wherever the ray intersects with the hipoly, is the are where the normal will be read and plotted back on the normal map.

    [/ QUOTE ]

    Thanks for explaining that, that cleared up a lot :) You talk about adding temporary geometry; how would I go about using that? I've tried using Tessellate with tension 0, so it has the same UVs, but it doesn't make my normal maps any better.
    And sorry for the many questions, but another:
    Sometimes parts of the mesh can share the same UV chunk because of similar geometry, but it would not look good because of the normal map. How do I know which parts I can mirror fine and which totally don't work?
  • perna
    Appreciate the questions and feedback, people. It's all good stuff.

    I know it can be hard to go through all the info already in here and know what's relevant now. Call it brainstorming, we're just pouring out bits of info and discussing, it should all come together in the end.
    Here's something from another thread:

    Basically what happens is that when drawing a new triangle in a strip, the renderer only has to process one new vert to describe an entire polygon.

    The problem is.. it's not like you can spend time benchmarking every single model you make and tweak it until it has the optimal tri flow. Most of the time you'd be talking about such tiny performance improvements that it's not even worth considering.

    There are so many other factors involved as well, when it comes to tweaking for performance, that you're better off just following the polybudget you're given fairly accurately.

    The reason why geometry is split up and vertices doubled at UV and normal seams is that a vertex is typically stored on the graphics card with only one UV coordinate and one normal.
  • Eric Chadwick
    On the topic of vertex duplications, it should be mentioned that vertex count is more important than triangle count. I know I'm not going to change years of habit. But vertex count is really where the rubber meets the road, not triangle count.

    Overuse of smoothing groups, over-splittage of UVs, too many material assignments (and too much misalignment of these three properties) all lead to a very different and much larger vertex count, which can stress the transform stages for the model, slowing performance.

    FatAssasin made a great Maxscript awhile back that gives an accurate vertex count, might be useful.
    http://boards.polycount.net/showflat.php?Cat=0&Number=9246&an=&page=0&vc=1

    Kind of in the same ballpark, too many bones affecting each vert is bad (>4 per vert), and too many bones on one mesh is also generally bad (>30 or so. Better numbers here anyone?).

    Cool thread Per2048.
  • Whargoul
    Hah - just saw your posts about vert counts, after I posted about it in the tri-strip thread.

    Anyways, just wanted to mention the way Maya uses its cage (called an envelope in Maya). It does NOT control the direction the rays are fired - it only stops them from passing through (well... one option does that, the other chooses the intersection nearest the envelope). This is really what a cage should do - firing rays not along the surface normal is actually incorrect, but tends to look better visually in some cases. I think skewing the cast increases the parallax errors somewhat, but it's a sketchy area. A model will always look best when viewed "down" the normals it was cast at. The old cylinder with wobbly edges (a great test case) looks shitty when viewed from the side - but perfect when viewed at the 3/4 angle (where the smoothed normals are looking at you).

    The one nice thing about Maya's envelopes is that they don't have to be the same model, shape, or anything. You can just throw a plane in between the character's legs and call it a day. You can edit the auto-created envelope (your pushed model), split edges to get around a tight crease, smooth the whole thing, whatever. Pretty handy.


    Back to Eric's topic of vertex counts. UV splits and hard edges (smoothing groups for you Maxians) inflate the vertex count. Material breaks & bones don't inflate the vertex count directly. Materials will cause a draw break, ie. even worse than adding extra verts. The renderer has to submit a new draw call, switch shaders, upload constants, etc. More draw calls per model = worse performance.

    Adding more weights per vert kind of does the same thing (in our engine). A "chunk" of model can only have so many bones weighted to it in total. This isn't as bad as a new draw call, as only the bones' matrices need to be uploaded, and they can be streamed in with the vertex buffer streams. But it does add a small cost. The model gets broken down into mesh fragments, each using only a limited number of bones (I think ours splits at 40). A full human model can get split into 3 to 5 pieces if it's one whole model using one material. If it's already split into materials (shaders), sometimes only 1 or 2 of those need to be split into a couple of mesh primitives.

    More bone weights per vert makes the problem above a little worse, as now there's a larger list of needed bones for each chunk of mesh, potentially making the chunks smaller (and greater in number). The vertex shader code also gets larger (an extra matrix multiply and add per bone weight) and causes more data to be passed around per vert. 4 is usually enough per vert anyway - it's pretty rare to need more than that. The pipeline can auto-reduce this as an optimization, and it's hard to spot the difference. We tried limiting it to 3 bones in the pipe, and there were only a few spots where the person who weighted the mesh could even tell the difference. Pretty subtle.

    Now, how I build my models. I usually do this:

    1. Build proxy/cage/dummy model in Maya. Just a boxy thing with all quads, good layout for zBrush/Mudbox. Quick UVs, clean layout, no overlaps, good coverage. Helps if you want to extract displacements from other models to re-use, or for step 6.
    2. Sculpt the shit out of it in Mudbox - get approval on this model from AD. Use lots of tricks like multiple meshes, floating bits, different rezzed sections, etc to make it manageable.
    3. Bring the mid-rez version (usually anywhere from 50k to 300k - whatever holds all the detail I need to see) into Maya as a reference.
    4. Build a new low poly model. Sometimes it starts from scratch, sometimes it starts from a low level of the high-rez model, and usually it's a combination of both. For a character (say an arm), I usually just create a cylinder, align it to the model, and shrink wrap it to the surface. Then I cut all my muscles in, add edges where I need them, cut all the folds & wrinkles in, etc. Optimize the inner areas (where silhouette doesn't matter), add any cuts I need for deformation, and stitch it all up. Really focus on the silhouette edges, and internal occlusion. Follow the surface as close as you can; I like to exaggerate the surface more than the high-rez (ie for wrinkles: deeper in the cracks, higher on the highs) to help with parallax and self-shadowing.
    5. UV map - as few breaks as possible. Mirroring is OK depending on the engine/pipeline. I move all mirrored stuff over a unit in UV space to make the sampling easier.
    6. Generate AO off the high-rez model. I sometimes do a few passes and blend as needed, ie no floor, floor, arms up, arms down, etc. This helps shape the model a bit. Place this as a colour map on your highest high-rez model. If no UVs, just autogenerate and bake a huge map.
    7. Sample maps - grab normals and AO off the high rez.
    8. Paint bump textures, Photoshop nVidia filter for fine details, overlay this normal map onto the sampled one (a simple version of that blend is sketched below). Paint other colour maps, etc.
    9. weight model, export, finish...
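    (A rough stand-in for the overlay in step 8, not Whargoul's exact Photoshop math: one simple way to combine a painted detail normal map with a baked one is to add the detail map's XY tilt onto the base and renormalize. Function names are made up.)

    ```python
    import numpy as np

    def decode(rgb):                                    # 0..1 texture -> -1..1 normal
        return np.asarray(rgb, dtype=float) * 2.0 - 1.0

    def encode(n):                                      # -1..1 normal -> 0..1 texture
        return n * 0.5 + 0.5

    def combine(base_rgb, detail_rgb):
        base, detail = decode(base_rgb), decode(detail_rgb)
        combined = base.copy()
        combined[..., 0:2] += detail[..., 0:2]          # stack the XY perturbations
        length = np.linalg.norm(combined, axis=-1, keepdims=True)
        return encode(combined / length)                # renormalize back to unit length
    ```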

    Sometimes I jump back and forth, ie I might paint a colour map between step 4-5, transfer that to the high-rez, to help place details on the high-rez model that align to a specific colour map.


    (I can't believe PC still has the bug where if you take too long to write a response it gets lost - fucking hell I typed this up twice, excuse the typos but I'm tired!)
  • perna
    Great post whargoul. So it was in max that I'd seen the envelope feature.
    Eh, yeah the lost message thing happened to me too, kind of discouraging eh?

    I'm gonna get around to older stuff in the thread when I have the time kids
  • Eric Chadwick
    Whargoul, nice post. I like the sound of Maya's envelopes. Smart.

    When PC tells me the post is invalid, usually the Back button gets me back to my text. Anyhow, I'm paranoid here, I always Ctrl-A Ctrl-C before hitting that Continue button.
  • CrazyButcher
    hehe whargoul, I am trying to make (ctrl+a,ctrl+c) before posting big stuff a habit for that reason, or writing into notepad wink.gif (like this one), but thx for retyping!

    About the vertex-bone assignments: as Whargoul suggested, limits like 2-4 make sense, because anything beyond that normally isn't visible. Sometimes this weight reduction can be done in code (find the most dominant weights and redistribute the weight of the too-subtle ones), but in general, for a true WYSIWYG experience, you can enforce the limits yourself.
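    A sketch of that weight reduction (illustrative only, not from any engine): keep the strongest few bone influences per vertex and renormalize so they still sum to one.

    ```python
    import numpy as np

    def prune_weights(bone_ids, weights, max_influences=4):
        order = np.argsort(weights)[::-1][:max_influences]   # strongest influences first
        kept_ids = [bone_ids[i] for i in order]
        kept_w = np.array([weights[i] for i in order])
        return kept_ids, kept_w / kept_w.sum()               # redistribute the dropped weight

    ids, w = prune_weights([3, 7, 12, 20, 31], [0.40, 0.30, 0.15, 0.10, 0.05], 4)
    print(ids, w)   # the 0.05 influence is dropped, the rest scaled up slightly
    ```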

    Now concerning rendering performance: there are many "ifs" and "whens" when it comes to rendering. Engines might do things differently, and hardware generations often differ as well. Here are just some general statements.

    short
    Get it good looking with the least possible counts, but don't let some fixed counts prevent it from looking good (artefacts...).
    And don't forget consistency. If your game enviro is fairly low-res (but mostly complete), it makes no sense to put some hi-res characters in it, even though the renderer could afford it.

    Skip to end, probably redundant info for many of you...

    long

    memory-costs
    Obviously, the textures used cost the most on average. That is more true for characters with their unique textures than for environments, which reuse stuff, but either way texture data will typically be the biggest videoram eater.

    Vertex data is the other. Typical vertex sizes are around 32 bytes, sometimes a bit fatter, sometimes less when few attributes (normals, colors, UV channels) are used. Skinned models typically require a bit more (bone index + bone weight) for each vertex-bone pair. 4 bytes is an uncompressed RGBA pixel (compressed, you can typically divide texture costs by 4). So to give you a comparison of costs: a compressed 1024x1024 texture weighs about as much as 10,000 unique triangles (that is, every triangle has its own 3 vertices, which is very unlikely; only particles have similar uniqueness of vertices).
    As hardware likes 32-byte-aligned stuff, coders pack in as much as possible, sometimes sacrificing accuracy and so on, just to get everything needed in. Now say they found the "perfect setup" for all needed attributes, and then some artist comes along saying he needs a second UV channel, which would mean, say, a size of 40 bytes instead of 32, and that is "almost as bad as 64", not to mention that the shaders involved need to take this into account as well... Normally they will make different versions of vertex formats, like light, medium, fat, and different shaders, but maybe they won't have the one you would like. I just want to hint that something rather simple might create an avalanche internally, and therefore cannot be done.
    In an ideal world, communication between artists and coders about engine capabilities and setups will result in fewer problems.

    This doesn't mean you're safe to throw vertex counts around, but it shows that textures are more evil. I think the one important thing about textures is keeping a consistent texel density, that is, the size of a texel (a pixel of a texture) in your world. A) it looks better, B) it performs better, especially when it ends up similar to screen resolution. It makes no sense to have hi-res textures on characters that are tiny on your screen, or whose vertex count approaches their pixel count...
    Be aware that to minimize drawcalls, meshes are sometimes "dynamically" batched into bigger ones. Think of copying a mesh rather than instancing it (only newer hardware can instance easily). The copying process is of course faster with smaller vertex counts, and when copied statically rather than dynamically, the static costs will be higher too. Think about level geometry: it makes no sense to instance every "box brush"; you generate one giant triangle mesh out of all the boxes. Characters normally don't fall into this category.
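    Rough numbers behind the texture/vertex comparison above, using the same assumptions the post makes (4 bytes per uncompressed pixel, roughly 4:1 compression, 32-byte vertices, and the worst case of 3 unique vertices per triangle):

    ```python
    texture_bytes = 1024 * 1024 * 4 // 4           # compressed 1024x1024 ~ 1 MB
    vertex_bytes = 32
    verts = texture_bytes // vertex_bytes          # ~32,768 vertices for the same memory
    triangles = verts // 3                         # ~10,900 fully unique triangles
    print(texture_bytes, verts, triangles)
    ```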

    Who else eats videoram?
    Your framebuffer window (color + depth, that is 64 bits per pixel); then all those new cool post-processing fx also tend to render to texture, so those rendertargets reside in vram too (think floating point for HDR stuff: 64 or 128 bits per pixel for color alone). And of course shaders (which are somewhat negligible though).

    state-changes

    Graphics cards are stream processors, ie they are super at doing "the same thing, very very often". They don't like it so much if you change "the task" often. Hence material/shader changes result in more drawcalls, as state cannot be changed during a drawcall, and fewer drawcalls is good.
    However, coming back to the statement that multi-materials and so on require a new drawcall: that is not always the case. Think about all surfaces using the same "shader", say 1 texture + 1 lightmap. If you can put the textures used into a bigger atlas, you can render them all (same shader) with the same setup, and they will just pick their "subtexture" out of the big atlas, hence requiring no new drawcall. Some engines might do this automatically (the atlas generation), some not. If not, you can help by packing multiple textures that are likely to be used together into a bigger one (think trashcan + park bench + mailbox + ...). This works easily for non-tiling textures; once tiling is involved, as with terrain or big walls, it very often has to be handled on the code side, as the tiling itself needs to be done in the pixel shader, and not all engines might support/do that.
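    A tiny sketch of the atlas idea (my own illustration, made-up names): once several props share one big texture, each prop's UVs just get remapped into its sub-rectangle so everything can be drawn with the same material state.

    ```python
    def remap_uv(u, v, rect):
        """rect = (x, y, width, height) of this prop's region, in 0..1 atlas space."""
        x, y, w, h = rect
        return x + u * w, y + v * h

    trashcan_rect = (0.0, 0.0, 0.5, 0.5)       # bottom-left quarter of the atlas
    print(remap_uv(0.5, 0.5, trashcan_rect))   # (0.25, 0.25)
    ```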

    Shader changes are very costly. That is why we don't see very different shaders (just the same ones with different tweak values) within a game. It is also a reason why "hey, let's do my own .fx shader" does not always translate well: such shaders are too specific, and if there are too many unique ones, it isn't worth it. What also makes it hard is that you need subversions of the same thing, like shadowed, skinned, both...
    Other changes are things like blending, alpha masking and so on.

    vertex/pixel-operations
    Before cards got a unified architecture (that is, Xbox 360 & GF8/ATI R600), there were dedicated units for vertex and pixel operations. This is still true for the majority of hardware around, and basically means that if you "imbalance" the operations, you can pump more time into the other side. New cards share the same units for all (pixel & vertex) operations and do automatic load balancing.
    On old hardware, for example, if you have a very complex pixel shader (normal mapping, subsurface...), then you can add more vertices for "free", as the pixel shader will slow things down anyway. However, if you have very simple pixel operations (think lightmap + texture), then vertices will cost a bit more.

    Of course pixel costs directly relate to the pixels on screen; if your model is tiny, it shouldn't have too many vertices, as the vertex count would be too close to the pixel count.
    Be aware that the vertex count is "all vertices" of the model, not only the visible ones, while pixels are only the visible ones (and even more if anti-aliasing is used).

    What this means is: say you have a somewhat complex shader going, then very likely the pixel costs are much higher than the vertex costs (because, say, you sample 4-5 textures in that shader, do lighting...), so adding a few vertices doesn't hurt. If that gets you a smoother surface (a better bake result...), then don't think of polylimits as the ultimate law, but as a guideline.

    Profiling (measuring what takes how long) saves the day; good engines will offer you lots of data about what the current frame costs: how many drawcalls, texture memory... Hence optimizing from a scene works fairly easily. The problem is you need the models/assets first to make that scene ;)
    Also think about the following within production: "oh, that new cool shadowing technique came out and we want it". The shader changes and so on might shift the vertex/pixel balance from before. Different hardware generations will shift it too... That is why games will often have completely different codepaths depending on hardware capability, and especially shader versions. Now combine that with the "one shader for specific needs" thing, like 1 light, 3 lights, +skinned... and you will see that certain shaders explode into many more. And even though this is more true for current/older hardware, that is what the majority uses.


    Both vertex and pixel operations have caches. Caches are needed, as graphics cards never evaluate all vertices first and then all pixels, but handle each triangle on its own. The result of the vertex shader is interpolated linearly over the triangle (pixels within a triangle get a mix of the 3 vertices' values) and passed to the pixel shader.

    Easiest to explain with a vertex. Say vertex A is part of 3 triangles; it would be very dumb to re-evaluate the vertex with every new triangle. So cards will store the results of, say, the last 32 vertices (more or less depending on hardware). Coders normally re-sort the polygon and vertex order of models to make the best use of that (if not, tell them to!). However, if we are past that vertex cache size and come along with another triangle needing vertex A, it has to be fully re-evaluated again.

    For pixels there is a similar cache for texture lookups, as fragments close to each other normally have similar UVs/mip levels and therefore hit similar texels.
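    A toy model of that post-transform vertex cache (illustrative only; real cache behaviour and sizes vary by hardware): a vertex still in the cache is reused for free, otherwise the vertex shader has to run again.

    ```python
    from collections import deque

    def count_transforms(index_buffer, cache_size=32):
        cache, transforms = deque(maxlen=cache_size), 0
        for index in index_buffer:
            if index not in cache:
                transforms += 1       # cache miss: the vertex shader runs again
                cache.append(index)   # oldest entry falls out once the cache is full
        return transforms

    # Two triangles sharing an edge: only 4 transforms instead of 6.
    print(count_transforms([0, 1, 2, 2, 1, 3]))
    ```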

    end long


    in context of normalmapping

    How complex is your game's normalmap shader, and how many textures is it already pulling from? Is your model residing "static" in memory, or is it being skinned by bones? How big is the model typically on screen (ie how many pixels)? All of this influences how strictly you have to look at vertex counts.

    When using tangent mapping, the tangent-space creation is the key: if the game's method doesn't match the baker's, you are in trouble. Check with your coders which tools they recommend for baking and how they create the tangents... Even worse, 3dsmax's viewport renderer doesn't match the baker (which is based on the scanline renderer), so slight errors at the extremes might occur in the viewport but not in the render.

    A "object normalmap"'s vertex shader is pretty simple, the "tangent" version, not, it needs to transform light vector to tangentspace, it needs normal/tangent (and sometimes binormal) as vertex attributes.

    Texture compression might leave artefacts; make sure you try your normalmaps without compression first, to avoid hunting "baking bugs" that aren't.


    As for texture pixel usage, Eric mentioned those special .dds textures that come with differently colored mipmaps. Try substituting your diffuse with them; if you end up never seeing the top level, you can resize your texture down to a quarter...

    PS: is that long-winded enough,Per wink.gif

    edit: I mixed up my vertex/texture weight comparison by factor 8
  • MoP
    Very awesome thread here. Nice stuff Per, Whargoul, Crazybutcher - thanks! smile.gif
  • EarthQuake
    I'll add a little bit on object/world space normal maps, because most people seem to have incorrect information on this topic. I've written it before, but I'll do it again for the sake of this thread.

    Things you can do with object space maps:

    1. Get perfect results regardless of the smoothing on your lowres mesh, the amount of geometry you have, etc. Works great for making LODs and reusing the same texture, because the normals/binormals don't matter. It is also very good for mechanical hard-surface objects: no need to split the model into smoothing groups to get good results, or to add tons of extra polygons (polys that you can then use to add even more detail to your awesomely rendered model).

    2. You can animate meshes with object space normals, as long as the mesh is rigged and transformed by the engine, and as long as the engine supports it. Definitely not only for static props, as some people will tell you.

    Things you CANT do:

    1. Mirror any parts of your mesh.
    2. Rotate your model after generating in your 3D app. If this is done in your level editor/game engine, or just with animations on a rigged model, it's fine.
    3. Add additional bump map detail. Combining these in photoshop can be a pain, because the normal map is using the full color range, not simply the blueish color....


    If any of your programmers come and tell you you can't use these maps, they're just being lazy and refuse to take the 15 minutes it will take to add support for this stuff.
  • Rick Stirling
    [ QUOTE ]
    As for texture pixel useage, Eric mentioned those special .dds textures, that come with differently colored mipmaps. Try substituting your "diffuse" with them, if you end up never seeing "toplevel", then you can resize your texture by a quarter...

    [/ QUOTE ]

    And sometimes, you don't want the mipmaps -you only want the hires texture/1st mip level (main character who is never seen at a distance for example). In that case NOT creating the mip levels will save you both memory and disk space. I think it's around 30%
  • perna
    I'm on some random laptop in the middle of nowhere, just want to quickly suggest that it's important to describe what something is before detailing all the advantages of that something smile.gif

    world space normal maps bake the normals from the hipoly model directly, while tangent-local space normal maps bake the hipoly normal relative to the lowpoly normal, which means that the model can be freely animated without messing up the lighting.

    To animate a worldspace normalmapped model you have to treat the onmodel normals differently. I
  • Eric Chadwick
    I think you're talking about object-space maps. World-space and object-space = same bitmap, different shaders.
  • Rob Galanakis
    Let's break it down some more.

    A "space" refers to the setup of the XYZ axes. As far as normal maps are concerned, we have World, Object, and Tangent space.

    Consider also what a normal is. It is a vector made of 3 variables, XYZ, which is encoded by the RGB information on a texture.

    So, let's take the same point in space, or vector, or whatever. It is different in each map space.

    World and object space are similar, in that they use a relatively comprehensible space. World space uses the world's XYZ axes, while object space uses the object's XYZ axes to describe the normal. For this reason, world-space-mapped objects can't move; object-space-mapped objects can move and rotate and whatnot, but cannot deform (since the normals are always relative to the object's axes; as long as this relationship doesn't change, you're fine, which means you cannot deform the object, and you must rotate and translate it from the object's root, so any sort of movement must be application-based: ie, you cannot have it break into pieces unless each piece is its own object, for example).

    It's unusual to use world space maps because they are so restrictive, and the cost of putting an object space map into world space is low (and object space maps are more foolproof... with a world space map, if you need to rotate your model in your editor, you are screwed). I don't know why anyone would use world space maps, tbh, except in special cases (something with environment mapping, maybe? I don't know...)

    Tangent space is much harder to understand conceptually. It uses the normal as the Z axis, and the surface U and V directions as the X and Y (gah, I hope I didn't mix that up). So everything is relative to the normal of the vertex... I know I've seen tutorials at Polycount about it, and it is a complex topic, so I won't go into detail now.
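    A tiny sketch of the encoding and spaces described above (my own illustration, with made-up vectors): the map stores XYZ in RGB (0..1 mapped to -1..1); a tangent-space sample is re-expressed through the surface's tangent/bitangent/normal basis, while an object-space sample only needs the object's rotation, which is why it survives rigid motion but not deformation or mirroring.

    ```python
    import numpy as np

    def decode(rgb):                                           # 0..1 RGB -> -1..1 normal
        return np.asarray(rgb, dtype=float) * 2.0 - 1.0

    def tangent_to_world(sample_rgb, tangent, bitangent, normal):
        n = decode(sample_rgb)
        tbn = np.column_stack([tangent, bitangent, normal])    # basis at this surface point
        return tbn @ n

    def object_to_world(sample_rgb, object_rotation):
        return object_rotation @ decode(sample_rgb)            # one rotation for the whole mesh

    flat = [0.5, 0.5, 1.0]                                     # "straight out" in tangent space
    print(tangent_to_world(flat, np.array([1, 0, 0]),
                           np.array([0, 1, 0]), np.array([0, 0, 1])))
    ```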