Yeah if you've got synced normals at least. If not that would create some nasty shading unfortunately.
However, in that case you might re-think how the low is constructed in the first place: would the player notice that sort of indent, or could you just "fake" it in the normal map?
We get into sort of a different discussion there entirely, but one that I think is underappreciated nonetheless: how certain types of geometry are going to work better with bakes/smoothing issues/etc. (this is sort of covered in the waviness thread, the gun muzzle example), and how you can plan your high out to minimize these sorts of situations in your low. Maybe a future topic for a write-up.
I probably wouldn't model something like that personally, it is just the most extreme example I could think of to put context to my question. Say I am using synced normals, such as xNormal to UDK, and needed a model like that for whatever reason. If I set my one hard edge to where the green meets the purple it should be fine? Even though in Maya the red section gets a bad gradient over the large surface? Am I understanding you correctly?
I've got to thank you for this as well EQ - the debate of using all one smoothing group vs. not is one I've discussed all too often with people, and it's really nice to have it all laid out like this. Also, I didn't know xNormal did explicit normals by default. Such great info to be aware of!
In xNormal:
By default, if you use the simple ray trace distance settings, you are using Explicit mesh normals for projection.
If you load your own cage, exported from Max for instance, you are using an Averaged projection mesh. If you set up your own cage within xNormal's 3D editor and save it out, you are also using an Averaged projection mesh.
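To make the explicit-vs-averaged distinction concrete, here is a rough Python sketch of the idea (an illustration only, not xNormal's actual code; the function names and numbers are made up):

```python
# A rough illustration of projecting along explicit (exported, possibly split)
# vertex normals versus along an averaged projection mesh/cage.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def explicit_ray(vertex_pos, exported_normal, max_dist):
    """Explicit projection: each vertex fires a ray along its own exported normal.
    At a hard edge the two coincident verts fire in different directions, which
    can leave gaps or overlaps in the capture."""
    return vertex_pos, normalize(exported_normal), max_dist

def averaged_cage_ray(vertex_pos, face_normals, push_out):
    """Averaged projection: all verts sharing a position use one averaged
    direction and the cage is pushed out along it, so rays from coincident
    verts agree and the hard edge doesn't split the capture."""
    avg = normalize(sum(normalize(n) for n in face_normals))
    cage_pos = vertex_pos + avg * push_out
    return cage_pos, -avg  # rays are cast back in from the cage

# Example: a 90 degree hard edge shared by two faces
p = np.array([0.0, 0.0, 0.0])
face_n = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
print(explicit_ray(p, face_n[0], 0.1))    # fires straight along +Z
print(averaged_cage_ray(p, face_n, 0.1))  # fires along the 45 degree average
```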
There's a dropdown for low poly models in the "Smooth Normals" column. You can set it from "Use Exported Normals" to "Use Average Normals"
Doesn't setting this avoid the need to use a cage?
There's a dropdown for low poly models in the "Smooth Normals" column. You can set it from "Use Exported Normals" to "Use Average Normals"
Doesn't setting this avoid the need to use a cage?
No, because this will smooth your lowpoly's mesh normals, and if you've got hard edges set it will override your exported normals, which means you'll get smoothing errors when you try to apply your normal map back onto your mesh with hard edges.
There isn't really much reason to ever use this setting unless, for instance, you've exported your highpoly mesh without vertex normal data; then you might want to set it to averaged.
It would be lovely if there was an option to set the projection mesh type with a simple drop-down (this is essentially what Maya does). I've suggested this to Santiago but I don't remember why it never got in, probably some technical reason.
I probably wouldn't model something like that personally, it is just the most extreme example I could think of to put context to my question. Say I am using synced normals, such as xNormal to UDK, and needed a model like that for whatever reason. If I set my one hard edge to where the green meets the purple it should be fine? Even though in Maya the red section gets a bad gradient over the large surface? Am I understanding you correctly?
I don't think it's going to be helpful for me to give you a yes or a no answer here.
What you should really do, in any situation where you're unsure, is do a test bake and export it to your engine of choice. There can be so much variance between engine and baker. Do enough test bakes with your workflow of choice and you'll start to know exactly what you can and can't get away with.
I do a lot of test bakes throughout my process of creating the lowpoly, and sometimes even the highpoly, to see if I can represent the highpoly shape with X amount of geometry, to see if I have enough texture resolution to represent certain details, to check for ray skewing, to check for smoothing errors, etc. etc. It's a really good idea to get in the habit of doing quick test bakes.
Going through the process blindly assuming it will work, or relying on "rules" for what to do in X situation, is where you're going to run into problems.
I don't think it's going to be helpful for me to give you a yes or a no answer here.
What you should really do, in any situation where you're unsure, is do a test bake and export it to your engine of choice. There can be so much variance between engine and baker. Do enough test bakes with your workflow of choice and you'll start to know exactly what you can and can't get away with.
I do a lot of test bakes throughout my process of creating the lowpoly, and sometimes even the highpoly, to see if I can represent the highpoly shape with X amount of geometry, to see if I have enough texture resolution to represent certain details, to check for ray skewing, to check for smoothing errors, etc. etc. It's a really good idea to get in the habit of doing quick test bakes.
Going through the process blindly assuming it will work, or relying on "rules" for what to do in X situation, is where you're going to run into problems.
That makes sense. I will try it out this week to see what I come up with and report back here. Thanks for the help once again.
A. I thought a synchronized pipeline means the normal map baking application and your engine use the same type of tangent space, i.e. +x +y +z in Maya. But when I bake a normal map in Maya using only soft edges I get no shading errors, whereas when I bake the same asset in xNormal I get shading errors when viewed in Maya.
So what more is there to a synchronized pipeline?
B. I was going through this thread: http://www.polycount.com/forum/showthread.php?t=73593 . One thing I didn't understand: when we bake a normal map, does it get bilinear filtered before producing the final result? I don't see any option for bilinear filtering when I bake in Maya or xNormal.
That's not what the tangent basis is. For example, this is how Max's scanline calculates it (or used to?):
http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping
It's basically a lot more codey/mathey, with I'm guessing a lot more ways to go about doing it compared to just swizzling the channel values (which is why getting everything synced up isn't always an easy task).
A. I thought a synchronized pipeline means the normal map baking application and your engine use the same type of tangent space, i.e. +x +y +z in Maya. But when I bake a normal map in Maya using only soft edges I get no shading errors, whereas when I bake the same asset in xNormal I get shading errors when viewed in Maya.
So what more is there to a synchronized pipeline?
There's a bunch of maths used to take the object space normal (i.e. the exact normals of the high res object you're baking) and transform it into the tangent space normal map that's commonly seen (the purple one).
What's required is that the maths used for transforming the object space normal into that normal map during baking are exactly the same as what the game engine or 3D viewport uses to display that same normal map.
These maths are based on the position, UVs and normals of the low polygon mesh's vertices. Almost every game engine and 3D application has subtle (sometimes major) differences in the maths they use to calculate these things. In the case of 3DS Max, different parts of the same program used different maths for it. This means the result from one will not properly display in another.
When those two bits of maths are exact - those used to bake and those used to display normal maps - it's said to be a synced pipeline.
Here's a Unity3D example with various bakers and import options and one (the centre one) that's perfectly synced to Unity's tangent basis.
You can clearly see that the synced one shows none of the flaws that the other unsynced objects do. That's the advantage.
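To give a rough idea of what "the same maths on both ends" means, here is a minimal Python sketch (a simplified example using one common tangent convention, not the actual code of any particular baker or engine):

```python
# The tangent basis is derived from the low poly's positions, UVs and normals.
# The baker and the renderer must derive it the same way, or the stored
# tangent-space normal decodes to the wrong direction -> shading errors.
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Per-triangle tangent/bitangent from positions and UVs (one common convention)."""
    e1, e2 = p1 - p0, p2 - p0
    duv1, duv2 = uv1 - uv0, uv2 - uv0
    r = 1.0 / (duv1[0] * duv2[1] - duv2[0] * duv1[1])
    tangent   = (e1 * duv2[1] - e2 * duv1[1]) * r
    bitangent = (e2 * duv1[0] - e1 * duv2[0]) * r
    return tangent, bitangent

def encode_tangent_space(obj_normal, tangent, bitangent, vert_normal):
    """What a baker does: express a high-poly (object space) normal in the
    low poly vertex's tangent basis, then pack it into 0..1 RGB."""
    t = tangent / np.linalg.norm(tangent)
    b = bitangent / np.linalg.norm(bitangent)
    n = vert_normal / np.linalg.norm(vert_normal)
    ts = np.array([np.dot(obj_normal, t), np.dot(obj_normal, b), np.dot(obj_normal, n)])
    return ts * 0.5 + 0.5  # packed into the familiar purple-ish colors

def decode_tangent_space(rgb, tangent, bitangent, vert_normal):
    """What a renderer does: unpack and rotate back out of tangent space.
    If its T/B/N differ from the baker's, the result is wrong."""
    ts = rgb * 2.0 - 1.0
    t = tangent / np.linalg.norm(tangent)
    b = bitangent / np.linalg.norm(bitangent)
    n = vert_normal / np.linalg.norm(vert_normal)
    return ts[0] * t + ts[1] * b + ts[2] * n

# Example: one triangle and one high-poly normal round-tripped
p = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
uv = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1)]]
t, b = triangle_tangent(p[0], p[1], p[2], uv[0], uv[1], uv[2])
n = np.array([0.0, 0.0, 1.0])                  # the low poly vertex normal
hi = np.array([0.2, 0.1, 0.97]); hi /= np.linalg.norm(hi)
rgb = encode_tangent_space(hi, t, b, n)        # what ends up in the texture
print(decode_tangent_space(rgb, t, b, n))      # matches hi only if T/B/N match
```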
Thanks for the reply Farfarer that clears everything up.
One suggestion, EarthQuake: combine this thread and your previous thread "WHO PUT WAVINESS IN MY NORMAL MAP" and make it a "Normal Map MASTER THREAD" where people can post all their normal map issues.
That was in CryEngine 2; the latest version of the engine syncs fine, as I did a full asset for someone in that engine recently with no troubles. One-SG tests looked fine.
Also, I had no idea what I was doing then, so yeah.
So what is the latest engine synced to? Their baker, broken Max viewport, or something else?
So what is the latest engine synced to? Their baker, broken Max viewport, or something else?
CE3 has an xNormal tangent basis plugin and a CryTIFF plugin now, so I'm assuming that it is all synced up correctly.
http://freesdk.crydev.net/display/SDKDOC3/Normal+map+baking+with+xNormal
http://freesdk.crydev.net/display/SDKDOC3/Using+the+Xnormal+CryTIFF+plugin
Baker. As far as I know, nothing was ever synced to the broken viewport math. A one-SG bake of a gun model I did in Max looked the same in Max 2011 (qualified normals) and CE3. The project's tech artist told me it was synced to Max and I believe him.
Looks like I'll be taking some work using CE3 soon, so I'll do some more tests this month.
I remember having issues with Crysis 2's tangent basis, although perhaps the latest CE3 SDK is synced with the qualified Max normals; thanks for the info Racer445. Either way, if the xNormal plugin works with the latest version, I think I'll have to try that out. I've been avoiding xNormal; perhaps it's time to start using it, especially with the CryTIFF plugin and the other benefits.
I wonder what bakers most pro hard surface artists use? I saw Tor Frick using modo's baker on his stream, and I heard that it doesn't even use a cage for baking. I wonder how common xNormal is for hard surface?
Should the philosophy of arranging smoothing groups to complement UVW islands also apply to "soft" biological shapes?
i.e.: I have a UVW seam running down the middle of the inside of a character's arm; should I arrange it so that there is a smoothing group split along that seam?
Should the philosophy of arranging smoothing groups to complement UVW islands also apply to "soft" biological shapes?
i.e.: I have a UVW seam running down the middle of the inside of a character's arm; should I arrange it so that there is a smoothing group split along that seam?
As far as I'm aware, the answer is yes. There's no reason not to harden uv seams.
Should the philosophy of arranging smoothing groups to complement UVW islands also apply to "soft" biological shapes?
i.e.: I have a UVW seam running down the middle of the inside of a character's arm; should I arrange it so that there is a smoothing group split along that seam?
I tried some time ago and either way didn't make a difference for me. So yeah, cutting won't make a difference (unless you're porting your model to Skyrim, in which case I had only mild success in understanding how it interprets such data correctly).
Should the philosophy of arranging smoothing groups to complement UVW islands also apply to "soft" biological shapes?
i.e.: I have a UVW seam running down the middle of the inside of a character's arm; should I arrange it so that there is a smoothing group split along that seam?
It applies to everything that uses smoothing groups, mate. It doesn't cost any more so you might as well use it
I think for character work the benefits are less, and generally speaking for soft organics I wouldn't really bother. If you've got LODs, sometimes the seams can get worse if you harden the seam edge on a cylinder, so for something like the seam on an arm I wouldn't do it (though you won't get a visible hard edge in game, dunno how many times I have to say this). But for something like the bottom of a shoe, or hard-edged armor/props, or a cuff on a shirt that has a 90 degree angle, etc., it would be a good idea.
Usually for organic stuff your model is going to be made up of mostly soft shapes and not a lot of extreme normal changes.
The benefit also of having one smoothing group (A) is that you can weld your verts a lot more, you can do quicker unwraps with roadkill and pelting on your hard surface objects, and as a result of the bigger UV islands, you have a lower vert count in the engine ( UV breaks = more verts )
would the subtle difference you get in the bake be enough to offset the unwrap time saving (on unique texture pages) and vert count reduction?
You can UV the exact same way you are used to. Once finished, set all existing shells to their own smoothing group, and chances are you'll have a cleaner map.
Since your smoothing breaks follow your UV breaks, the vertex count in-game will not change either.
The benefit also of having one smoothing group (A) is that you can weld your verts a lot more, you can do quicker unwraps with roadkill and pelting on your hard surface objects, and as a result of the bigger UV islands, you have a lower vert count in the engine ( UV breaks = more verts )
would the subtle difference you get in the bake be enough to offset the unwrap time saving (on unique texture pages) and vert count reduction?
Read the entire first post, don't just skim it. The whole point of the writeup is to discuss the two methods with the *same uv layout*.
You can't make a uv layout without seams, so there will always be parts of your mesh you can give hard edges with zero additional cost, and those parts naturally tend to be where you would want hard edges anyway.
How many uv seams or exactly how you layout your uvs is completely irrelevant.
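To spell out the "zero additional cost" point, here is a toy Python example (assuming the usual simplification that a vertex gets split wherever any of its attributes differ; the numbers are made up):

```python
# A toy vertex-count check showing why a hard edge placed exactly on an
# existing UV seam adds no extra verts, while one placed elsewhere does.
def gpu_vertex_count(corners):
    """corners: list of (position, uv, normal) tuples, one per face-corner.
    Simplified rule: any attribute difference forces a separate vertex."""
    return len(set(corners))

pos = (0.0, 0.0, 0.0)

# Two face-corners meeting at a UV seam, soft edge (shared normal):
soft_at_seam = [(pos, (0.1, 0.5), (0, 0, 1)), (pos, (0.9, 0.5), (0, 0, 1))]
# Same seam but with a hard edge (normals split too):
hard_at_seam = [(pos, (0.1, 0.5), (0, 0, 1)), (pos, (0.9, 0.5), (1, 0, 0))]
# Continuous UVs (no seam), soft vs hard edge:
soft_no_seam = [(pos, (0.5, 0.5), (0, 0, 1)), (pos, (0.5, 0.5), (0, 0, 1))]
hard_no_seam = [(pos, (0.5, 0.5), (0, 0, 1)), (pos, (0.5, 0.5), (1, 0, 0))]

print(gpu_vertex_count(soft_at_seam))  # 2 - the UV seam already split it
print(gpu_vertex_count(hard_at_seam))  # 2 - the hard edge costs nothing here
print(gpu_vertex_count(soft_no_seam))  # 1
print(gpu_vertex_count(hard_no_seam))  # 2 - a hard edge off a seam adds a vert
```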
Awesome read, EQ. Thanks for sharing, I learnt a great deal. I've basically been using soft edges all over the mesh and surface normals for baking, which usually results in a good bake but has its obvious drawbacks like you mentioned here. I'll start using hard edges more often now that I know the technical workings behind it. Looking forward to doing my tests in Maya. I did actually try hard edges along UV seams the other day on a rock I made, but I used surface normals for baking, which resulted in a lot of artifacts. Now I know how to fix this. In this case I think I'd be fine with soft edges and surface normals; shading was good and all and it was for personal use. But it's great to have some technical knowledge on the subject.
Read the entire first post, don't just skim it. The whole point of the writeup is to discuss the two methods with the *same uv layout*.
You can't make a uv layout without seams, so there will always be parts of your mesh you can give hard edges with zero additional cost, and those parts naturally tend to be where you would want hard edges anyway.
How many uv seams or exactly how you layout your uvs is completely irrelevant.
Hey, yeah I read it. I guess I'm following the same lines as MM. With the hard-edged method you have a LOT more breaks in your UVs. It really has nothing to do with using the same UVs in your example, because your UVs have already had time put in to support the hard edge workflow.
What I am saying is, that if you use 1 smoothing group with normals and tangents, You can get much larger UV islands, pretty quickly, that make texturing and painting it much easier. You would also have less seams, which results in a lower vert count.
I think both methods have merit, and it really depends on the person's time limit and diffuse texture. If you want to spend time in a 3D painting software like Mudbox, fixing all the diffuse seam issues with method B, you can.
Let's say this is a wood gun sight with wood grain direction. You would spend a LOT of time matching up all that wood grain along the breaks in your UV islands in the diffuse. I guess you can paint the high res in Mudbox, but for Photoshop texturing it would be a pain (along the edges of the sight housing that are in their own UV island, for example).
I don't think there is any RIGHT or WRONG way between A and B, just whether the engine supports it and the person's preferred workflow. I personally prefer larger UV islands and not having to deal with separate smoothing groups and seams.
You can UV the exact same way you are used to. Once finished, set all existing shells to their own smoothing group, and chances are you'll have a cleaner map.
Since your smoothing breaks follow your UV breaks, the vertex count in-game will not change either.
Yes, depending on your pipeline you might have to break more shells for a cleanly baked angular hard-surface model, but breaking smoothing along your EXISTING shells is already beneficial, no matter what type of model.
Hey, yeah I read it. I guess I'm following the same lines as MM. With the hard-edged method you have a LOT more breaks in your UVs. It really has nothing to do with using the same UVs in your example, because your UVs have already had time put in to support the hard edge workflow.
Except that I didn't do that. It's just a regular old uv layout; it isn't obsessive-compulsively welded together to minimize seams, nor is it set up to use a lot of hard edges. For instance, if the sight was meant to go into an engine that doesn't support synced normals, it would need even more uv splits to support the hard edges required. The player-facing end of the sight is pretty much one uv island and would need to be split to reduce smoothing errors in that case.
The example mesh was never intended to be an example of the most optimized uv layout, as again, for the purpose of the thread the uv layout is irrelevant. I think I've said it about 20 times now, but virtually all of the statements I've made in this post rely on your uv layout BEING THE SAME. How optimized or unoptimized your uv layout is has absolutely no bearing on the base concepts in this thread. Sure, the benefits of using hard edges may be more or less depending on your exact uv layout, but that's getting really subjective (which is again something I've already stated, if you read the entire thread) and of course is something you should/can decide on a case by case basis.
The mesh I used was a couple year old example mesh for 3point Shader, I used it because it was one of the few assets I could show/that wasn't under NDA. There was no greater intent in it than to grab an existing mesh to show the basic principles outlined in this thread.
What I am saying is, that if you use 1 smoothing group with normals and tangents, You can get much larger UV islands, pretty quickly, that make texturing and painting it much easier. You would also have less seams, which results in a lower vert count.
This is all fine and correct, however nothing in my post was contrary to this point.
One thing I feel is worth noting however: the larger your uv islands are/the fewer seams you have, often the harder it is to get an optimized uv pack that doesn't have a decent amount of wasted space. I've seen it suggested here that fewer seams = more texture resolution, and in my experience it is the opposite. Usually the smaller and more numerous your uv islands are, the easier they are to pack into an efficient uv layout with minimal wasted space. This is of course entirely dependent on your model and uvs; it may be the case for some and not for others. So again it's all a balancing act, as most everything in game art is.
Fewer seams and it's a bit easier to paint, but you get more distortion, maybe it's harder to pack efficiently, and certainly it uses fewer vertexes in game.
More seams and it's easier to pack and you get less distortion, but it's harder to paint and the vertex use is higher.
Just two sides of the coin, neither is "better", it's just a balance.
I think both methods have merit, and it really depends on the person's time limit and diffuse texture. If you want to spend time in a 3D painting software like Mudbox, fixing all the diffuse seam issues with method B, you can.
Let's say this is a wood gun sight with wood grain direction. You would spend a LOT of time matching up all that wood grain along the breaks in your UV islands in the diffuse. I guess you can paint the high res in Mudbox, but for Photoshop texturing it would be a pain (along the edges of the sight housing that are in their own UV island, for example).
Again uv layout is irrelevant, your exact layout is personal preference and does not prove or disprove any of the facts presented in the initial post.
I don't think there is any RIGHT or WRONG way between A and B, just whether the engine supports it and the person's preferred workflow. I personally prefer larger UV islands and not having to deal with separate smoothing groups and seams.
You're confused in thinking that methods A and B require a certain type of UV layout, they are simply two different methods to bake your normals, regardless of uv layout.
Nowhere did I say there was a right or wrong; it's up to you to come to your own conclusions, and it's great that you did, but it's a bit disappointing that you either came into this thread with preconceived notions as to what you were going to get out of it, or didn't really understand what I wrote. I suggest giving it another read.
If you can come back to me with some factual inaccuracies I will be happy to discuss them or even edit the initial post. But please give actual examples of what I've said instead of assuming I did this or meant that or the other thing.
This thread is great! I have been doing a bunch of test bakes to see if I grasp what's going on, and I think I am getting it. I am feeling more equipped to tackle errors in my bakes now, that's for sure haha.
Sorry, yes everything is very factually correct and well written (and should be pinned imo)! It's a great document to explain to people having trouble with their normal bakes.
Basically it comes down to: yes, if you have time to do hard-edge baking and setup, go for it! Not all models require this though, and sometimes it's detrimental to focus on it in production.
I've seen a junior always running over time on his asset creation. I found he would obsess over getting a perfect bake with hard edges, exploding his prop out to do multiple bakes, when in the end a quick 1 smoothing group bake works 99% as well in a fraction of the time (as long as your engine supports it). It really comes down to the importance of your item in the game. Yes, you cover examples of when not to use it, though it does get lost in the massive info dump.
With your example, if it's an FPS gun sight, then yes it might be worth it (since it takes up a large amount of screen space).
I guess my only caveat would be: think of how your object will be used in the game and plan your time accordingly when choosing one method over the other.
Then your game hits optimization, and 50% of your normals are cut anyway (I kid, I kid).
I understand the point, it's just a very long post with lots of caveats for something as simple as "use hard edges on your UV seams, except when using 1 smoothing group workflows" and showing a picture of it with and without. I should have treated it as Unwrapping-Baking 101 education, instead of delving into it deeper.
Two notes:
In the latest xNormal, with FBX exporting from Max, if you have one element with 1 smoothing group and another with multiple smoothing groups, you will get crazy bake errors. xNormal has a hard time calculating when you have this across elements in Max.
Also, TexTools for Max 2012-13 is super buggy; a lot of studios won't allow it to be used because of the lack of updates. Maybe add a link to another tool that would work for later versions. In Max I still don't think there is a native way to split smoothing groups based on UV islands.
I understand the point, it's just a very long post with lots of caveats for something as simple as "use hard edges on your UV seams, except when using 1 smoothing group workflows" and showing a picture of it with and without. I should have treated it as Unwrapping-Baking 101 education, instead of delving into it deeper.
Short blanket statements without understanding the theory behind them are always a bad idea. It's why we have people posting "Just use 1 SG" in response to other users' bake problems, often to the detriment of the user seeking help (i.e. with a non-synced system it's terrible advice).
Or "Always use quads"
Or "Never use triangles"
Or "Use moar geo" - Crap I started that one.
I and many others spend so much time fighting the regurgitation of these stupid little phrases, repeated with little understanding of their original intent, that I have no desire to create more of these little catch phrases to be misconstrued into eternity. I am happy however to explain to the best of my ability all of the factors in the equations so people can come to their own conclusions. I would rather it be too technical and be able to answer questions that someone may have than be too simple and simply misunderstood.
I think the "use how you see fit" advice is really applicable to any tutorial, guide, or technical writeup ever written; how you implement information into your own personal workflow is up to you. I agree with you completely, but that isn't something exclusive to this write-up. Honestly I should probably write something to this effect and stick it in my signature!
I've seen a junior always running over time on his asset creation. I found he would obsess over getting a perfect bake with hard edges, exploding his prop out to do multiple bakes, when in the end a quick 1 smoothing group bake works 99% as well in a fraction of the time (as long as your engine supports it). It really comes down to the importance of your item in the game. Yes, you cover examples of when not to use it, though it does get lost in the massive info dump.
As far as doing exploded bakes etc. goes, I am of the opinion that you should know how to do it correctly first, and know how to do it fast second. Getting into timeframes for work is so completely subjective that it's better left to the specific company/project/task you're working on; it's not really feasible for me to give that sort of advice.
Also, I think there are a lot of misconceptions out there about the "speed" of doing certain things. For instance:
Not taking a few minutes to set up a cage in xNormal may "seem" like it's fast, but then you have to trial-and-error the ray distance, possibly bake two maps with different ray distances (which can take hours for AO), edit those two maps together, and of course redo those edits if you get a change request for your asset. So those few minutes setting up a cage are really time well spent if you don't need to do any touchups after the fact.
Same thing with setting up an exploded bake: if you spend 10 minutes keyframing your high and low poly to avoid intersection issues, that saves you a lot of time over manually painting those sorts of errors out, which is again rework that needs to happen with revisions. The cool thing about exploded bakes is they scale with the complexity of your asset. Have a simple prop with 2-3 or so mesh chunks? It will only take a minute to set up the keyframes.
If your junior artist was doing separate bakes for each chunk and then compositing them together, though, I agree 100% that is a massive waste of time; I don't really know why anyone does that. But setting up a proper cage and doing exploded bakes are, in my experience, time savers, not time wasters.
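For what it's worth, here is a tiny, tool-agnostic Python sketch of the explode step (just an assumed way of laying chunks out, not any particular studio's script; in a DCC you would keyframe the equivalent offsets instead, frame 0 assembled and frame 1 exploded):

```python
# Each matching high/low chunk pair gets the *same* offset (here just spread
# along one axis), so the projection still lines up but chunks no longer
# intersect each other's rays.
def explode_offsets(chunk_bounds, gap=0.1):
    """chunk_bounds: list of (min, max) extents along the chosen axis,
    one per chunk pair. Returns one offset per chunk so they sit in a row."""
    offsets = []
    cursor = 0.0
    for lo, hi in chunk_bounds:
        offsets.append(cursor - lo)   # shift chunk so it starts at the cursor
        cursor += (hi - lo) + gap     # leave a gap before the next chunk
    return offsets

# Example: three chunk pairs that overlap when assembled
bounds = [(0.0, 2.0), (1.5, 3.0), (2.5, 4.0)]
for i, off in enumerate(explode_offsets(bounds)):
    print(f"chunk pair {i}: move high AND low by {off:+.2f} on the explode axis")
```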
Two notes:
In the latest xNormal, with FBX exporting from Max, if you have one element with 1 smoothing group and another with multiple smoothing groups, you will get crazy bake errors. xNormal has a hard time calculating when you have this across elements in Max.
Use the SBM exporter instead; you can even export a cage from Max's projection modifier, it's super super fast and easy to do. Just make sure your mesh is triangulated before you export or it won't save the cage. I think you can just throw an e-mesh modifier on before export.
Also, TexTools for Max 2012-13 is super buggy; a lot of studios won't allow it to be used because of the lack of updates. Maybe add a link to another tool that would work for later versions. In Max I still don't think there is a native way to split smoothing groups based on UV islands.
Post a bug report here: http://www.polycount.com/forum/showthread.php?t=69736
renderhjs is an active polycount member and I'm sure he'll be happy to look into any issues you're having with the script. If you know of another script I will be happy to include it as well. I'm not a Max guy primarily so you'll have to help me out there.
What would be your recommendation on splits with LODs (bevels on edge or thin rings like around the brim of a hat)? It's extremely easy to ruin a good bake with a single collapse of those, but having a separate UV island/SG to support it doesn't seem efficient in the slightest.
As far as doing exploded bakes etc. goes, I am of the opinion that you should know how to do it correctly first, and know how to do it fast second. Getting into timeframes for work is so completely subjective that it's better left to the specific company/project/task you're working on; it's not really feasible for me to give that sort of advice.
Also, I think there are a lot of misconceptions out there about the "speed" of doing certain things. For instance:
Not taking a few minutes to set up a cage in xNormal may "seem" like it's fast, but then you have to trial-and-error the ray distance, possibly bake two maps with different ray distances (which can take hours for AO), edit those two maps together, and of course redo those edits if you get a change request for your asset. So those few minutes setting up a cage are really time well spent if you don't need to do any touchups after the fact.
Same thing with setting up an exploded bake: if you spend 10 minutes keyframing your high and low poly to avoid intersection issues, that saves you a lot of time over manually painting those sorts of errors out, which is again rework that needs to happen with revisions. The cool thing about exploded bakes is they scale with the complexity of your asset. Have a simple prop with 2-3 or so mesh chunks? It will only take a minute to set up the keyframes.
Do you think exploded AO bakes are more accurate/realistic, given the fact that they are exploded and not in the actual occluding state they should be in on the final model?
I know you can do separate contact AO passes and combine them, but is it the same quality?
As for having a bake process that requires no cleanup, it's a great process and I agree.
But if there were changes, you don't necessarily need to bake the entire asset again. You can do partial bakes; the resulting normal/AO would be the same as a full bake.
Also, as you talk about a "destructive" workflow, I wonder about the other maps like diffuse and specular. I am sure they are created in a manner that could be described as "destructive", because if there were changes that clients needed to do later you can't just generate the diffuse map.
The way I see it, pretty much all 3D art workflow is destructive in one aspect or another if it requires any manual craftsmanship. It would only be non-destructive if it was 100% automated from beginning to end. What if the client can't set up hard edges or cages properly like you did, how will they implement changes on their end? I know the process is fairly easy for us, but usually when a client is coming to you for your service that means they lack the resources to do it themselves at the same quality as you.
Just trying to understand your thought process, don't get angry ok
What would be your recommendation on splits with LODs (bevels on edge or thin rings like around the brim of a hat)? It's extremely easy to ruin a good bake with a single collapse of those, but having a separate UV island/SG to support it doesn't seem efficient in the slightest.
LODs and normal maps get tricky for a number of reasons. There isn't ever going to be any great solution whether you're using hard edges or not, simply because normal maps depend on the lowpoly mesh normals of the highest LOD, so when you remove geometry you're editing those normals and you'll always get smoothing errors.
A few things you can do:
A. Bake an extra normal map for your 2nd lod mesh if it is very important that the shading holds up.
B. Try to plan for your lods when you're doing uvs and geometry placement. Sometimes you just have to bite the bullet and retain some important detail on lower LODs if it is going to be visible at the distance the LOD pops in.
C. You can create custom normals in such a way that removing certain edge loops will have little effect on the mesh normals. I've been required to do this for important assets on certain projects before. It works, but it may require some custom tools and some planning, and it can be a fairly big time sink.
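As a loose illustration of option C (an assumption about how such a tool might work, not a description of any specific studio pipeline), one related approach is to copy the vertex normals the map was baked against onto the decimated LOD by nearest position:

```python
# After decimating to a lower LOD, copy each remaining vertex's normal from
# the closest vertex of the LOD0 mesh the normal map was baked against, so
# removing loops disturbs the baked shading as little as possible.
import numpy as np

def transfer_normals(lod0_positions, lod0_normals, lod1_positions):
    """For every LOD1 vertex, take the normal of the nearest LOD0 vertex."""
    lod0_positions = np.asarray(lod0_positions, dtype=float)
    lod0_normals = np.asarray(lod0_normals, dtype=float)
    out = []
    for p in np.asarray(lod1_positions, dtype=float):
        nearest = np.argmin(np.linalg.norm(lod0_positions - p, axis=1))
        out.append(lod0_normals[nearest])
    return np.asarray(out)
```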
I think the most important thing to remember with LODs, though, is to check them from the actual viewing distance you'll see them at in game. Don't zoom in on your LODs like they were the high detail version. When you start doing this you realize that small or even large errors may not matter when it's actually in game.
Do you think exploded AO bakes are more accurate/realistic, given the fact that they are exploded and not in the actual occluding state they should be in on the final model?
I know you can do separate contact AO passes and combine them, but is it the same quality?
Yes adding a "lowpoly" baked AO pass is usually what I do, is it the same quality? Its worse in some ways and better in others. It can be more accurate to the actual shapes of the lowpoly for instance, if you have a very round cylinder in your high and not so many sides on your low, the lowpoly baked AO can be better. Probably in general terms I would say its worse quality, but not to the point that it makes a difference in the end result.
Its the best combination of speed, ease of doing and quality that I've done yet and I've tired most methods. Its pretty much the base AO workflow we use at 3Point, so you can check any of the lowpoly assets on our site to see what it looks like.
I know some people like to do the AO bake by material ID in Max but I've never had a chance to try it.
It is another thing you have to redo with major changes though, so I usually do it after the uvs/normal bakes are approved.
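As a rough sketch of that compositing step in Python (the blend modes and strengths here are assumptions for illustration; multiply is a common choice, but the exact mix is up to you):

```python
# Combine an exploded high-poly AO bake with a lowpoly "contact" AO pass
# (and optionally a cavity map) to restore the large-scale occlusion that
# the explode removed.
import numpy as np

def combine_ao(high_ao, low_ao, cavity=None, cavity_strength=0.5):
    """All inputs are float arrays in 0..1; returns the combined AO."""
    ao = high_ao * low_ao                      # multiply brings back contact shadows
    if cavity is not None:
        # blend the cavity in partially so it doesn't crush the fine detail
        ao = ao * (1.0 - cavity_strength + cavity_strength * cavity)
    return np.clip(ao, 0.0, 1.0)

# Example with tiny dummy "maps"
high = np.array([[1.0, 0.8], [0.6, 1.0]])
low  = np.array([[1.0, 1.0], [0.5, 0.9]])
print(combine_ao(high, low))
```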
As for having a bake process that requires no cleanup, it's a great process and I agree.
But if there were changes, you don't necessarily need to bake the entire asset again. You can do partial bakes; the resulting normal/AO would be the same as a full bake.
Yes, that is true, you can often do partial rebakes if you just need to tweak a detail on a section of your highpoly. The biggest problem you get is when, say, the client requests more UV space in "X" part, so you have to repack the entire layout and rebake, and then you have to redo all of the work.
Also, as you talk about a "destructive" workflow, I wonder about the other maps like diffuse and specular. I am sure they are created in a manner that could be described as "destructive", because if there were changes that clients needed to do later you can't just generate the diffuse map.
Oh absolutely, if you're getting highpoly change requests after you've started texturing that is a major problem. That really shouldn't happen in most cases, and if it does I think you should insist on a better approval process, i.e. first the high, then the low, then the uvs/bakes, then the materials are approved. You shouldn't get change requests on a prior stage that has already been approved. This is going to vary job to job and client to client, but it's something I think you should always push for, as it saves a lot of time for both clients/management and artists.
The way I see it, pretty much all 3D art workflow is destructive in one aspect or another if it requires any manual craftsmanship. It would only be non-destructive if it was 100% automated from beginning to end.
Sure, but there is plenty of stuff you can do to make the process as painless as possible for rework.
What if the client can't set up hard edges or cages properly like you did, how will they implement changes on their end? I know the process is fairly easy for us, but usually when a client is coming to you for your service that means they lack the resources to do it themselves at the same quality as you.
Yeah, this is something you're always going to have to discuss with the client. Sometimes they will have their own workflows and you'll just have to work with that; sometimes they will be open to making improvements if you can explain what you'd like to do. It varies too much to really give a clear answer.
Just trying to understand your thought process, don't get angry ok
Hehe, sorry if I come off a bit snide, I probably get a little too carried away at times.
Hehe, sorry if I come off a bit snide, I probably get a little too carried away at times.
not at all!
At the end of the day, every process has some positives and some negatives.
For example, I would avoid exploded baking at the cost of a couple more hours to have a more realistic AO bake. If I have to do a couple of minutes of manual cleanup, so be it; I want a better end-result AO.
You mention the Brink guns, but I don't want to compare finished assets without knowing for sure how they were made. I could do the same with my work, but I'd rather not.
Obviously the Brink guns look great, but there is a lot more to them than just normal and AO maps.
Overall, your process seems overcomplicated to me, at the expense of many things that I think are as important as the things you mention, and maybe more so in some cases.
No doubt this article is very important for hard surface artists.
However, I am talking from a generalist's perspective. I personally work on everything from characters, weapons and vehicles to props.
So I guess I would say the first post or the main write-up needs some disclaimer or clarification that this workflow is recommended mostly for hard surface (at least after you weigh the pros and cons). I know you listed exceptions many times in this thread, but the main post warrants a disclaimer up front.
Also, the scope example used seems a bit unfair, because a scope like that in today's games would get higher priorities in texture sizes, polycount etc. in most cases. Not to mention, you should display something with a good diffuse map as well, so people can weigh the actual final outcome rather than a low res normal mapped asset. It was meant as an example only, I know, but it is what is used to make your case, so it has to be fair and balanced.
I know we discussed a lot of these in several posts in the thread, but it should be clarified in the main posts. I say this because, after I consider everything, at the end of the day it becomes a subjective call on a case by case basis, and in my case I would not follow this workflow the majority of the time even for hard surface, ONE main reason being I tend to lay out UVs differently. I would still have a lot of uv shells to maximize pixel density, but I would not put UV cuts like you do. There are several other priority changes as well, which I would have to write 5 more paragraphs to explain.
Finally, many new or inexperienced users will take this article as bible, and that is what I think is unfortunate. Like you said before, you don't like misinformation spreading, so I think this write-up needs more disclaimers, exceptions and a fairer representation to actually weigh the positives and negatives. Not to mention, I think there are a lot of things in here that become a matter of priority among other things.
Lot of great discussions in this thread! I agree that the OP can be structured a bit more. Maybe leave all the basic terminology to a different post, or point to the Wiki where the info on it is, and highlight the pros and cons a bit more.
Don't know if it has been linked to, but here is Unreal's page showing a flawless, hard surface 1 smoothing group bake with xNormal, tangents and binormals, and how to set up your exports/imports.
No doubt this article is very important for hard surface artists.
However, I am talking from a generalist's perspective. I personally work on everything from characters, weapons and vehicles to props.
I don't have time to respond to your entire post right now but just wanted to say, you realize that weapons, vehicles and props are all hard surface work right? Robots, characters with armor and props, etc also have a lot of hard surface elements to them as well.
If you do 90% soft organic characters that is one thing, but the majority of the things you mention fall under hard surface work.
Maybe I just read your statement in a funny way or something though.
You mention the Brink guns, but I don't want to compare finished assets without knowing for sure how they were made. I could do the same with my work, but I'd rather not.
Obviously the Brink guns look great, but there is a lot more to them than just normal and AO maps.
There are shots of just the low+AO bakes on our site if you want to take a look, nothing fancy just the high AO + low AO + CB cavity map thing. In the thread linked in my signature too.
I don't have time to respond to your entire post right now but just wanted to say, you realize that weapons, vehicles and props are all hard surface work right? Robots, characters with armor and props, etc also have a lot of hard surface elements to them as well.
If you do 90% soft organic characters that is one thing, but the majority of the things you mention fall under hard surface work.
No, it does not. Weapons, vehicles or props can be as organic as characters.
Your definition is a bit narrow, I think.
There are shots of just the low+AO bakes on our site if you want to take a look, nothing fancy just the high AO + low AO + CB cavity map thing. In the thread linked in my signature too.
They look great, and I never said they didn't. The same results, or maybe better results, can be achieved with a different workflow. Also, I have no clue how the UVs are set up on those, so I wouldn't be able to tell how friendly they would be to texture.
There are a lot of things I could keep discussing, but this is getting a bit old.
If my point is not clear enough from my last post then I guess I failed at explaining.
Replies
I probably wouldn't model something like that personally, it is just the most extreme example I could think off to put context to my question. Say if I am using synced normals such as xnormal to UDK, and needed a model like that for whatever reason. If I set my one hard edge to where the green meets the purple it should be fine? Even though in maya the red section gets a bad gradient over the large surface? Am understanding you correctly?
There's a dropdown for low poly models in the "Smooth Normals" column. You can set it from "Use Exported Normals" to "Use Average Normals"
Doesn't setting this avoid the need to use a cage?
No, because this will smooth your lowpoly's mesh normals, and if you've got hard edges set it will over-ride your exported normals, which means you'll get smoothing errors when you try to apply your normal map back on to your mesh with hard edges.
There isn't really much reason to ever use this setting unless for instance you've exported your highpoly mesh without vertex normal data, then you might want to set it to averaged.
It would be lovely if there was an option to set the projection mesh type with a simple drop down(this is essentially what maya does). I've suggested this to Santiago but I don't remember why it never got in, probably some technical reason.
I don't think its going to be helpful for me to give you a yes or a no answer here.
What you should really do, any in situation where you're unsure is do a test bake and export it to your engine of choice. There can be so much variance between engine and baker. Do enough test bakes with your workflow of choice and you'll start to know exactly what you can and can't get away with.
I do a lot of test bakes throughout my process of creating the lowpoly, and sometimes even the highpoly, to see if I can represent the highpoly shape with X amount of geometry, to see if I have enough texture resolution to represent certain details, to check for ray skewing, to check for smoothing errors, etc etc. Its a really good idea to get in the habit of doing quick test bakes.
When you go through the process blindly assuming it will work, or rely on "rules" for what to do in X situation is where you're going to run into problems.
That makes sense. I will try it out this week to see what I come up with and report back here. Thanks for the help once again.
A. I thought synchronized pipeline mean, normal map baking application and your engine uses the same type of tangent space i.e +x +y +z in maya. But when I bake normal map in maya using only soft edge I get no shading error, but when I bake the same assest in x normal I get shading error when viewed in maya.
So whats more is there in a synchronized pipeline?
B. I was goin through this thread :http://www.polycount.com/forum/showthread.php?t=73593 . One thing I didnt understand When we bake a normal map does it get bilinear filter before producing the final result ? I dont see any option of bilinear filtering when I bake in maya or xnormal .
That's not what the tangent basis is. For example, this is how Max's scanline calculates it (or used to?):
http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping
It's basically a lot more codey/mathey, with I'm guessing a lot more ways to go about doing it compared to just swizzling the channel values (which is why getting everything synced up isn't always an easy task).
What's required is that the maths used for transforming the object space normal into that normal map during baking are exactly the same as what the game engine or 3D viewport uses to display that same normal map.
These maths are based on the position, UVs and normals of the low polygon mesh's vertices. Almost every game engine and 3D application has subtle (sometimes major) differences in the maths they use to calculate these things. In the case of 3DS Max, different parts of the same program used different maths for it. This means the result from one will not properly display in another.
When those two bits of maths are exact - those used to bake and those used to display normal maps - it's said to be a synced pipeline.
Here's a Unity3D example with various bakers and import options and one (the centre one) that's perfectly synced to Unity's tangent basis.
You can clearly see that the synced one shows none of the flaws that the other unsynced objects do. That's the advantage.
One suggestion earthquake combine this thread and your previous thread "WHO PUT WAVINESS IN MY NORMAL MAP" and make it "Normal Map MASTER THREAD" where people can post all their normal map issue.
So what is the latest engine synced to? Their baker, broken Max viewport, or something else?
CE3 has an xNormal tangent basis plugin and a CryTIFF plugin now, so I'm assuming that it is all synced up correctly.
http://freesdk.crydev.net/display/SDKDOC3/Normal+map+baking+with+xNormal
http://freesdk.crydev.net/display/SDKDOC3/Using+the+Xnormal+CryTIFF+plugin
baker. as far as i know nothing was ever synced to the broken viewport math. a one SG bake of a gun model i did in max looked the same in max 2011 (qualified normals) and CE3. the project's tech artist told me it was synced to max and i believe him.
looks like i'll be taking some work using CE3 soon so i'll do some more tests this month.
I wonder what bakers most pro hard surface artists use? I saw Tor Frick using modo's on his stream and I heard that doesn't even use a cage for baking. I wonder how common xNormal is for hard surface?
should the philosophy of arranging smoothing groups to compliment uvw islands also apply to "soft" biological shapes?
ie: i have a UVW seam running down the middle of the inside of a characters arm, should i arrange it so that there is a smoothing group split along that seam?
As far as I'm aware, the answer is yes. There's no reason not to harden uv seams.
I tried some time ago and either way didn't make a difference for me. So yeah, cutting won't make a difference (unless you're porting your model to Skyrim, in which I had mild success in understanding how it interprets such data correctly again).
I used XN to Max, X2 shader.
It applies to everything that uses smoothing groups, mate. It doesn't cost any more so you might as well use it
But you don't HAVE to use a seam, so you'd only use it if it looks good right? So in that example, a characters arm, you wouldn't want a hard edge?
Usually for organic stuff your model is going to be made up of mostly soft shapes and not a lot of extreme normal changes.
would the subtle difference you get in the bake be enough to offset the unwrap time saving (on unique texture pages) and vert count reduction?
You can UV the exact same way you are used to. Once finished, set all existing shells to their own smoothing group, and chances are you'll have a cleaner map.
Since your smoothing breaks follow your UV breaks, the vertex count in-game will not change either.
Read the entire first post, don't just skim it. The whole point of the writeup is to discuss the two methods with the *same uv layout*.
You can't make a uv layout without seams, so there will always be parts of your mesh you can give hard edges with zero additional cost, and those parts naturally tend to be where you would want hard edges anyway.
How many uv seams or exactly how you layout your uvs is completely irrelevant.
Cheers!
hey, yeah i read it. I guess i'm following the same lines as MM. With the hard edged method you have a LOT more breaks in your UV. It really has nothing to do with using the same UVs in your example, because your UVs have already had time put in to support the hard edge workflow.
What I am saying is, that if you use 1 smoothing group with normals and tangents, You can get much larger UV islands, pretty quickly, that make texturing and painting it much easier. You would also have less seams, which results in a lower vert count.
I think both methods have merit, and it really depends on the persons time limit and diffuse texture. If you want to spend time in a 3D painting software like Mudbox, fixing all the diffuse issue seams with method B you can.
Lets say this is a wood gun sight with wood grain direction. you would spend a LOT of time matching up all that wood grain along the breaks in your UV islands in the diffuse. I guess you can paint the high res in Mudbox, but for phtooshop texturing, it would be a pain (along the edges of the sight housing that are in their own UV island, for example).
I don't think there is any RIGHT or WRONG way between A and B, just if the engine supports it and the persons preferred workflow. I personally prefer larger UV islands and not have to deal with separate smoothing groups and seams
Yes, depending on your pipeline you might have to break more shells for a cleanly baked angular hard-surface model, but breaking smoothing along your EXISTING shells is already beneficial, no matter what type of model.
Except that I didn't do that. Its just a regular old UV layout, it isn't obsessive compulsively welded together to minimize seams, nor is is set up to use a lot of hard edges. For instance, if the sight was meant to go into an engine that doesn't support synced normals, it would need even more uv splits to support the hard edges required. The player facing end of the sight is pretty much one uv island and would need to be split to reduce smoothing errors in that case.
The example mesh was never intended to be an example of the most optimized uv layout, as again, for the purpose of the thread the uv layout is irrelevant. I think I've said it about 20 times now, but virtually all of the statements I've made in this post rely on your uv layout BEING THE SAME. How optimized or unoptimized your uv layout is has absolutely no bearing on the base concepts in this thread. Sure, the benefits for using hard edges may be more or less depending on your exact uv layout, but thats getting really subjective(which is again something I've already stated if you read the entire thread) and of course is something you should/can decide on a case by case basis.
The mesh I used was a couple year old example mesh for 3point Shader, I used it because it was one of the few assets I could show/that wasn't under NDA. There was no greater intent in it than to grab an existing mesh to show the basic principles outlined in this thread.
This is all fine and correct, however nothing in my post was contrary to this point.
One thing I feel that is worth noting however; the larger your uv islands are/the less seams you have often the harder it is to get an optimized uv pack that doesn't have a decent amount of wasted space. I've seen it suggested here that less seams = more texture resolution and in my experience it is the opposite. Usually the smaller and more numerous your uv islands are the easier they are to pack into an efficient uv layout with minimal wasted space. This is of course entirely dependent on your model and uvs, it may be the case for some and not for others. So again its all a balancing act, as most everything in game art is.
Less seams and its a bit easier to paint, but you get more distortion, maybe its harder to pack efficiently, certainly it uses less vertexes in game.
More seams and the easier it is to pack, the less distortion you get but the harder it is to paint and the vertex use is higher.
Just two sides of the coin, neither is "better", its just a balance.
Again uv layout is irrelevant, your exact layout is personal preference and does not prove or disprove any of the facts presented in the initial post.
You're confused in thinking that methods A and B require a certain type of UV layout, they are simply two different methods to bake your normals, regardless of uv layout.
Nowhere did I say there was a right or wrong, its up to you to come to your own conclusions, and its great that you did, but its a bit disappointing that you either came into this thread with preconceived notions as to what you were going to get out of it, or didn't really understand what I wrote. I suggest giving it another read.
If you can come back to me with some factual inaccuracies I will be happy to discuss them or even edit the initial post. But please give actual examples of what I've said instead of assuming I did this or meant that or the other thing.
Basically down to, yes if you have time to do hard edges baking and setup, go for it! not all models require this though, and sometimes its detrimental to focus on it in production.
I've seen a junior always running over time on his asset creation. I found he would obsess over getting a perfect bake with hard edges, exploding his prop out to do multiple bakes, when in the end a quick 1 smoothing group bake works 99% as well at a fraction of the time. (as long as your engine supports it) Really down to the importance of your item in the game. Yes you cover examples of when not to use it, though it does get lost in the massive info dump.
With your example, if it's a FPS gun sight, then yes might be worth it (since it takes up a large amount of screen space
I guess my only caveat i would add would be, think of how your object will be used in the game a plan you time accordingly to using one method over the other.
Then your game hits optimization, and 50% of your normal's are cut anyway ( i kid i kid )
two note:
In the latest Xnormal with FBX exporting from max, if you have one element with 1 smoothing group, and another with multiple smoothing, you will get crazy bake errors. Xnormal has a hard time calculating with you have this across elements in max.
Also textools for max 2012-13 is super buggy, lot of studios won't allow it to be used because of the lack of updates. maybe add a link to another tool that would work for later versions. I max i still don`t think there is a native way to split smoothing groups based on UV islands.
Also, not allowing scripts because they're buggy, but running the latest Max? That seems somewhat counter-productive.
Short blanket statements without understanding the theory behind them is always a bad idea. Its why we have people posting "Just use 1 SG" in response to other users bake problems, often to the detriment of the user seeking help(IE: with a non synced system its terrible advice).
Or "Always use quads"
Or "Never use triangles"
Or "Use moar geo" - Crap I started that one.
I and many others spend so much time fighting the regurgitation of these stupid little phrases that are repeated with little understanding of their original intent, I have no desire to create more of these little catch phrases to be misconstrued into eternity. I am happy however to explain to the best of my ability all of the factors in the equations so people can come to their own conclusions. I would rather it be too technical and be able to answer questions that someone may have than it be too simple and simply misunderstood.
I think the "use how you see fit" advice is really applicable to any tutorial, guide, technical writeup ever written, how you implement information into your own personal workflow is up to you, I agree with you completely but that isn't something exclusive to this write up. Honestly I should probably write something to this affect and stick it in my signature!
As far as doing exploded bakes etc, I am of the opinion that you should know how to do it correctly first, and know how to do it fast second. Getting into timeframes for work is so completely subjective that its better left to the specific company/project/task you're working on, Its not really feasible for me to give that sort of advice.
Also, I think there are a lot of misconceptions out there about the "speed" of doing certain things. For instance:
Not taking a few minutes to set up a cage in Xnormal may "seem" fast, but then you have to trial-and-error the ray distance, possibly bake two maps with different ray distances (which can take hours for AO), edit those two maps together, and of course redo those edits if you get a change request for your asset. So those few minutes setting up a cage are time well spent if it means you don't need to do any touch-ups after the fact.
Same thing with setting up an exploded bake: if you spend 10 minutes keyframing your high and low poly to avoid intersection issues, that saves you a lot of time over manually painting those sorts of errors out, which is again rework that needs to happen with revisions. The cool thing about exploded bakes is they scale with the complexity of your asset. Have a simple prop with 2-3 mesh chunks? It will only take a minute to set up the keyframes.
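For anyone who hasn't set one up before, the "explode" itself is nothing more than moving each matching high/low chunk pair by the same offset on a later keyframe, while frame 0 keeps the assembled pose. A minimal sketch of that offset logic in plain Python (no DCC API; the containers here are stand-ins for whatever your package exposes):

```python
def explode_offsets(chunk_count, spacing=50.0, axis=(1.0, 0.0, 0.0)):
    """One translation per chunk, spreading the chunks out along an axis.
    The same offset gets applied to a lowpoly chunk and its matching
    highpoly chunk, so the projection between the pair is unchanged;
    only the neighbours that would cause intersecting rays move away."""
    return [tuple(c * spacing * i for c in axis) for i in range(chunk_count)]

def apply_offset(vertices, offset):
    # vertices: list of (x, y, z) tuples for one chunk (low or high poly).
    return [(x + offset[0], y + offset[1], z + offset[2]) for x, y, z in vertices]

# Keyframe 0: every chunk at (0, 0, 0), the assembled asset.
# Keyframe 1: apply these offsets and bake on that frame.
print(explode_offsets(3))  # [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
```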
If your junior artist was doing separate bakes for each chunk and then compositing them together, though, I agree 100% that is a massive waste of time; I don't really know why anyone does that. But setting up a proper cage and doing exploded bakes are, in my experience, time savers, not time wasters.
Use the SBM exporter instead; you can even export a cage from Max's projection modifier, and it's super fast and easy to do. Just make sure your mesh is triangulated before you export or it won't save the cage. I think you can just throw an Edit Mesh (e-mesh) modifier on before export.
Post a bug report here: http://www.polycount.com/forum/showthread.php?t=69736
renderhjs is an active polycount member and I'm sure he'll be happy to look into any issues you're having with the script. If you know of another script I will be happy to include it as well. I'm not a Max guy primarily so you'll have to help me out there.
Do you think exploded AO bakes are more accurate/realistic, given that the pieces are exploded and not in the actual occluding positions they'll be in on the final model?
I know you can do a separate contact AO pass and combine them, but is it the same quality?
As for having a bake process that requires no cleanup, it's a great process and I agree.
But if there were changes, you don't necessarily need to bake the entire asset again; you can do partial bakes, and the resulting normal/AO would be the same as a full bake.
Also, since you talk about a "destructive" workflow, I wonder about the other maps like diffuse and specular. I am sure they are created in a manner that could be described as "destructive", because if there were changes the client needed later you can't just regenerate the diffuse map.
The way I see it, pretty much every 3D art workflow is destructive in one aspect or another if it requires any manual craftsmanship; it would only be non-destructive if it were 100% automated from beginning to end. What if the client can't set up hard edges or cages properly like you did, how will they implement changes on their end? I know the process is fairly easy for us, but usually when a client comes to you for your services it means they lack the resources to do it themselves at the same quality as you.
Just trying to understand your thought process, don't get angry, OK?
LODs and normal maps get tricky for a number of reasons. There isn't ever going to be any great solution whether you're using hard edges or not, simply because normal maps depend on the lowpoly mesh normals of the highest LOD, so when you remove geometry you're editing those normals and you'll always get smoothing errors.
A few things you can do:
A. Bake an extra normal map for your 2nd LOD mesh if it is very important that the shading holds up.
B. Try to plan for your LODs when you're doing UVs and geometry placement. Sometimes you just have to bite the bullet and retain some important detail on lower LODs if it is going to be visible at the distance the LOD pops in.
C. You can create custom normals in such a way that removing certain edge loops will have little effect on the mesh normals (see the sketch after this list). I've been required to do this for important assets on certain projects before. It works, but it may require some custom tools and some planning, and it can be a fairly big time sink.
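To illustrate option C: if a vertex sits on a loop that a lower LOD will collapse, giving it the normalized blend of its two kept neighbours' normals means the shading it contributes is roughly what the remaining edge will interpolate once the loop is gone, so removing the loop barely shifts the result. A tiny, hypothetical sketch of that idea:

```python
import numpy as np

def lod_safe_normal(n_prev, n_next, t=0.5):
    """Custom normal for a vertex an optimized LOD will remove.
    n_prev/n_next: normals of the two kept vertices the loop sits between.
    t: the loop vertex's relative position along that kept edge.
    Blending and renormalizing approximates the shading the kept edge
    will interpolate after the loop is collapsed."""
    n = (1.0 - t) * np.asarray(n_prev, float) + t * np.asarray(n_next, float)
    return n / np.linalg.norm(n)

# A loop vertex halfway along an edge between a flat area and a 45 degree bevel:
print(lod_safe_normal((0.0, 0.0, 1.0), (0.0, 0.7071, 0.7071)))  # ~[0, 0.38, 0.92]
```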
I think the most important thing to remember with LODs, though, is to check them from the actual viewing distance you'll see them at in game. Don't zoom in on your LODs as if they were the high-detail version. When you start doing this you realize that small or even large errors may not matter when it's actually in game.
Yes, adding a "lowpoly" baked AO pass is usually what I do. Is it the same quality? It's worse in some ways and better in others. It can be more accurate to the actual shapes of the lowpoly, for instance: if you have a very round cylinder in your high and not so many sides on your low, the lowpoly baked AO can be better. In general terms I would probably say it's worse quality, but not to the point that it makes a difference in the end result.
It's the best combination of speed, ease of use and quality that I've found yet, and I've tried most methods. It's pretty much the base AO workflow we use at 3Point, so you can check any of the lowpoly assets on our site to see what it looks like.
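For what it's worth, the composite itself is just the passes multiplied together; the function name and the cavity weighting below are made up for the example, but the idea is the same whether you do it with layers in Photoshop or a script:

```python
import numpy as np

def combine_ao(high_ao, low_ao, cavity, cavity_strength=0.5):
    """high_ao, low_ao, cavity: H x W float arrays in 0..1.
    Multiply the highpoly AO bake, the lowpoly AO bake and a cavity pass;
    cavity_strength fades the cavity layer toward white, like lowering a
    multiply layer's opacity. Clamp to keep the result a valid 0..1 mask."""
    cavity_faded = 1.0 - cavity_strength * (1.0 - cavity)
    return np.clip(high_ao * low_ao * cavity_faded, 0.0, 1.0)
```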
I know some people like to do the AO bake by material ID in Max but I've never had a chance to try it.
It is another thing you have to redo with major changes, though, so I usually do it after the UVs/normal bakes are approved. Yes, that is true, you can often do partial rebakes if you just need to tweak a detail on a section of your highpoly. The biggest problem you get is when, say, the client requests more UV space for "X" part, so you have to repack the entire layout and rebake, and then you have to redo all of that work.
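Merging a partial rebake back into the existing map is mechanical enough to script too; just mask by the UV islands you rebaked (the array names here are hypothetical):

```python
import numpy as np

def merge_partial_bake(old_bake, new_bake, island_mask):
    """old_bake, new_bake: H x W x C float arrays of the same resolution.
    island_mask: H x W bool array covering the rebaked UV islands plus
    their padding. Take the new bake there, keep the old bake elsewhere."""
    return np.where(island_mask[..., None], new_bake, old_bake)
```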
Oh absolutely, if you're getting highpoly change requests after you've started texturing, that is a major problem. That really shouldn't happen in most cases, and if it does I think you should insist on a better approval process. IE: first the high, then the low, then the UVs/bakes, then the materials are approved. You shouldn't get change requests for a prior stage that has already been approved. This is going to vary from job to job and client to client, but it's something I think you should always push for, as it saves a lot of time for both clients/management and artists.
Sure, but there is plenty you can do to make rework as painless as possible.
Yeah, this is something you're always going to have to discuss with the client. Sometimes they will have their own workflows and you'll just have to work with that; sometimes they will be open to making improvements if you can explain what you'd like to do. It varies too much to really give a clear answer.
Hehe, sorry if I come off a bit snide, I probably get a little too carried away at times.
Not at all!
At the end of the day, every process has some positives and some negatives.
For example, I would avoid exploded baking at the cost of a couple more hours to get a more realistic AO bake. If I have to do a couple of minutes of manual cleanup, so be it; I want a better end-result AO.
You mention the Brink guns, but I don't want to compare finished assets without knowing for sure how they were made. I could do the same with my work, but I'd rather not.
Obviously the Brink guns look great, but there is a lot more to them than just normal and AO maps.
Overall, your process seems overcomplicated to me, at the expense of many things that I think are just as important as the things you mention, and maybe more so in some cases.
No doubt this article is very important for hard surface artists.
However, I am talking from a generalist's perspective; I personally work on everything from characters and weapons to vehicles and props.
So I guess I would say the first post or the main write-up needs some disclaimer or clarification that this workflow is recommended mostly for hard surface work (at least after you weigh the pros and cons). I know you listed exceptions many times in this thread, but the main post warrants a disclaimer up front.
Also, the scope example seems a bit unfair, because a scope like that in today's games would get higher priority in texture size, polycount, etc. in most cases. Not to mention, you should display something with a good diffuse map as well, so people can weigh the actual final outcome rather than a low-res normal-mapped asset. It was meant as an example only, I know, but it is what is used to make your case, so it has to be fair and balanced.
I know we discussed a lot of this in several posts in the thread, but it should be clarified in the main posts. I say this because, after considering everything, at the end of the day it becomes a subjective call on a case-by-case basis, and in my case I would not follow this workflow the majority of the time, even for hard surface work; one main reason being that I tend to lay out UVs differently. I would still have a lot of UV shells to maximize pixel density, but I would not place UV cuts the way you do. There are several other priority changes as well, which would take five more paragraphs to explain.
Finally, many new or inexperienced users will take this article as gospel, and that I think is unfortunate. Like you said before, you don't like misinformation spreading, so I think this write-up needs more disclaimers, exceptions, and a fairer representation to actually weigh the positives and negatives. Not to mention, I think a lot of things in here come down to a matter of priorities, among other things.
Don't know if it has been linked already, but here is Unreal's page showing a flawless hard surface one-smoothing-group bake with Xnormal (tangents and binormals) and how to set up your exports/imports.
http://udn.epicgames.com/Three/XNormalWorkflow.html
With an old vs. new result.
I don't have time to respond to your entire post right now, but I just wanted to say: you realize that weapons, vehicles and props are all hard surface work, right? Robots, characters with armor, props, etc. also have a lot of hard surface elements to them.
If you do 90% soft organic characters that is one thing, but the majority of the things you mention fall under hard surface work.
Maybe I just read your statement in a funny way or something though.
There are shots of just the low + AO bakes on our site if you want to take a look; nothing fancy, just the high AO + low AO + CB cavity map thing. They're in the thread linked in my signature too.
No, it does not. Weapons, vehicles and props can be just as organic as characters.
Your definition is a bit narrow, I think.
They look great, and I never said they didn't. The same or maybe better results can be achieved with a different workflow. Also, I have no clue how the UVs are set up on those, so I wouldn't be able to tell how friendly they would be to texture.
There are a lot of things I could keep discussing, but this is getting a bit old.
If my point is not clear enough from my last post, then I guess I failed at explaining it.