Hehe, yeah Xol, I took it straight from EQ's zip, baked it in Max and xNormal and threw it in Unreal. I understand what you did to fix it, and it's good to see that it is possible to get something clean out of Max+Unreal... but to me it also stresses the big weaknesses of this combination.
What are your results out of a bake from the original UVs? I could barely believe what I saw, honestly; it looked so bad when I tried.
Also on top of increasing the vert count, splitting everything would really make texturing a pain... (I mean texture overlays)
I've had arguments where I say I find it annoying that splitting the UVs so excessively seems necessary, arguments arising from people just not being able to stomach any negativity or curiosity about the tools we use on a daily basis. You know the kind... 'it's not the tool...'. Well, obviously I am living with it, but I still find it a horrid waste of tris as well as a big pain in the arse while unwrapping and texturing, so I'm glad to see the debate in this thread, even when it's people arguing against the point (like Xoliul). It makes for interesting discussion.
It's only too bad we have practically 0 influence on Autodesk to change any of this, so we're still limited to talking about workarounds.
I always thought it was just me when I made normal maps; it's nice to see that it isn't. So is it safe to say that you are keeping the extra edges to keep the model looking nice?
I've kind of been reading this thread off and on. It's good to evaluate techniques, but I feel like this is something this board goes through phases about; I really thought this same thread occurred with the same examples like six months or a year ago? At any rate, here's my take on normal mapping:
It sucks; there are so many problems, and every engine has its pros and cons. One thing you have to keep in mind is that a lot of the time you may get more visual inconsistencies because there are approximations in the shader math. I think someone mentioned earlier that this is why offline renderers look the best.
Just looking at that worst-case example makes me cringe. It contains like every case where normal maps fall apart.
I don't know if it's good work ethic or being lazy, but every time I make a normal-mapped asset I try to make it as shrinkwrapped as possible, with as many supporting edges as I can without the polycount getting uncomfortable. I also try to just go ahead and put a construction seam where I know I might have some wonkiness, to hide some of the ugly shading.
I don't know man, is it laziness or just a desire to be more efficient and get better results? Even with a very clean mesh, you're still going to have some errors if your baker and engine aren't synced up. So you're doing extra work AND your end result is still worse than what it would be if your engine/pipeline were simply doing it correctly. Again, laziness or having the sense not to want to waste time while still producing an inferior result, you tell me.
Hmm, I think you missed my point. I was saying that normal maps suck in general and I've yet to see a perfect result in a game engine. So I usually go the lazy route of adding supporting geo and a more shrinkwrapped shape vs. splitting UVs, smoothing groups, etc.
I've never run into major warping issues in UE3, but I guess I could be using 'too much' geometry, so it minimizes the areas where the normal map has to work hard. There's nothing special about my pipeline, just Max + xNormal.
Oh wait, I think I read you as saying laziness as in the people here who are trying to find better solutions being the lazy ones, creating poor meshes etc., which didn't make a whole lot of sense to me.
I think the idea that it will always suck so we might as well buck up and accept it is a flawed notion. Sure, there will be things that are always going to be difficult to do, but I think most workflows can be improved a considerable amount (as shown with the Maya examples). With a little bit of tools work, you should be able to save a LOT of time you would otherwise spend debugging poor shading, adding extra supporting geometry, splitting your UVs to hell, etc., and still end up with a better overall result, which to me is just win-win in every way.
I think it's easy to get into these cycles of thought in game dev where we just accept things like this as 'facts of life'. In fact, I bet if you polled graphics programmers working in the games industry, 90%+ of them would tell you that it is impossible to animate models using object-space normals, and that you have to use tangent space for anything that animates/deforms. Which is simply incorrect. I see the sentiment that normal maps are always going to suck as being in the same boat as that thought process. Most workflows simply aren't doing it correctly/well enough, so we've become accustomed to certain ways of working which are actually pretty inefficient, adding tons of small, thin triangles to our meshes in the form of bevels to correct rendering errors etc.: inefficient in time spent on asset creation and in terms of render performance.
So while subjectively these sorts of workflows may be alright, or someone might not notice the difference in a game running around at high speed, that doesn't have a whole lot of bearing on what is a technical discussion. And an entirely solvable technical problem. (Just wait to see what CB is working on for Max =P )
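[Editor's note] The claim above that object-space normals can work on animating/deforming models is worth making concrete. A minimal sketch, assuming plain linear-blend skinning with made-up bones and weights (not any particular engine's code): the sampled OS normal is simply rotated and blended by the same bone transforms as the vertices, then renormalized.

```python
import math

def rotate_z(deg):
    """3x3 rotation about Z as nested lists."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def normalize(v):
    l = math.sqrt(sum(x * x for x in v))
    return [x / l for x in v]

def skin_normal(normal, bones, weights):
    """Blend the bone-rotated normals, then renormalize -- the same
    thing a skinning shader already does to its lighting vectors,
    just applied to the sampled object-space normal."""
    out = [0.0, 0.0, 0.0]
    for m, w in zip(bones, weights):
        r = mat_vec(m, normal)
        out = [o + w * x for o, x in zip(out, r)]
    return normalize(out)

# A normal along +X, skinned 50/50 between a rest bone and a bone
# rotated 90 degrees about Z, ends up at 45 degrees as expected:
n = skin_normal([1.0, 0.0, 0.0],
                [rotate_z(0), rotate_z(90)],
                [0.5, 0.5])
```

This is only the deformation half of the story; the engine still has to sample the OS map and light in a consistent space, but nothing here requires a tangent basis.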
Yeah it seems like the main difference here is that people using Max & UE3 have just put up with adding extra supporting geometry, or increasing the number of UV splits in problem areas, mainly due to Max's baker being dodgy with regard to real-time tangents.
I have a feeling that if people were using Maya for UE3 stuff they might not have to worry about the lowpoly and UVs quite as much, simply because the baker's tangent space is more representative of the real-time display method, therefore you don't have to "fight it" as much.
I agree that normal-maps are not an ideal solution, but there are varying degrees of this, and I'd much rather be using the "best" version of that sort of solution rather than a mediocre one that I have to manually take into account and work around.
[HP], but now you have to worry about displacement accuracy and mesh density too, for example a standard 8-bit displacement map will look "stepped" due to the lack of range. That's a whole different discussion though
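[Editor's note] A quick way to see the "stepped" look mentioned above is to compute the quantization step an 8-bit displacement map imposes. The 10 cm displacement range here is an arbitrary illustrative assumption, not from any real asset.

```python
# An 8-bit map can only store 256 height levels across its full range,
# so a smooth slope becomes a staircase once displaced.

def quantize_height(h, h_range=10.0, bits=8):
    """Round a height (0..h_range) to the nearest representable level."""
    levels = (1 << bits) - 1          # 255 steps for 8 bits
    step = h_range / levels
    return round(h / step) * step

# Smallest representable height change over an (assumed) 10 cm range:
step_8bit = 10.0 / 255               # roughly 0.04 cm per level
step_16bit = 10.0 / 65535            # far finer -- why 16-bit maps help
```

Any surface detail shallower than `step_8bit` either vanishes or snaps to a visible ledge, which is exactly the stepping artifact.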
I imagine vargatom knows what he's talking about since IIRC he works on prerendered cinematics and probably already uses this stuff every day.
What I was suggesting more specifically was a universal Object Space to Tangent Space converter tool. It would accept the mesh (xyz + uv), a normal map in OS and custom (scriptable, or imported) tangent basis (with a few presets). The output would be a TS normal map in whatever basis your pipeline requires.
Frontends could be made afterwards to integrate the tool with Max, Maya, Blender, etc..
It could also allow importing any OS normal map into Max, converting to Max's native tangent basis in the process.
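[Editor's note] As a rough illustration of the proposed converter's core step (my own sketch, not the actual tool): for each texel you would take the interpolated tangent basis supplied by the preset/script, build the TBN matrix, and solve for the tangent-space normal. Using a full 3x3 inverse rather than a transpose is what tolerates skewed, non-orthogonal bases. The basis values below are made up for illustration.

```python
def invert_3x3(m):
    """General 3x3 inverse via cofactors (handles skewed bases)."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    inv_det = 1.0 / det
    return [
        [(e*i - f*h) * inv_det, (c*h - b*i) * inv_det, (b*f - c*e) * inv_det],
        [(f*g - d*i) * inv_det, (a*i - c*g) * inv_det, (c*d - a*f) * inv_det],
        [(d*h - e*g) * inv_det, (b*g - a*h) * inv_det, (a*e - b*d) * inv_det],
    ]

def os_to_ts(os_normal, tangent, bitangent, normal):
    """Express an object-space normal in the TBN frame.
    The basis vectors form the COLUMNS of the matrix, so we invert
    it instead of just transposing; for an orthonormal basis the two
    are the same, but a skewed basis needs the true inverse."""
    tbn_cols = [[tangent[i], bitangent[i], normal[i]] for i in range(3)]
    inv = invert_3x3(tbn_cols)
    return [sum(inv[r][k] * os_normal[k] for k in range(3)) for r in range(3)]

# With an axis-aligned basis the conversion is the identity:
ts = os_to_ts([0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0])
```

A real tool would run this per texel after interpolating the low-poly basis across the UV layout, then remap the result to the 0..255 texture range.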
I'm all for this, especially the "universal" part, as this isn't really a Max issue but an issue affecting all apps that bake normal maps and all engines that display them. Not everyone is going to have access to any particular app to do "corrected" bakes.
[HP], but now you have to worry about displacement accuracy and mesh density too, for example a standard 8-bit displacement map will look "stepped" due to the lack of range. That's a whole different discussion though
Exactly. Also remember that the normals you want to modify are the normals after the tessellation and displacement... so all the problems discussed here would apply, with the extra problem of having less control over the original normals (because tessellation will be view- and camera-distance dependent).
The added bonus is when you have relatively small but highly displaced details, like spikes, carvings etc. In these cases the tessellation is probably not going to be detailed enough to sufficiently reproduce the shapes and forms, so you get some blobby mess of a mesh. Then you add your normals on top of it and you get an even bigger mess... So you may have to selectively exclude details from one map or the other... or between LOD changes... I can only guess at this point, but the potential for suffering will definitely increase.
I imagine vargatom knows what he's talking about since IIRC he works on prerendered cinematics and probably already uses this stuff every day.
Yeah, and we abandoned PRMan with its quasi-unlimited tessellation quite a while ago, so we have to work with displacement and normal maps combined. It can be a real pain in the ass at times; we've ended up building more detailed highres models to sidestep the problem.
pior, do you import your normalmap as a normalmap into Unreal? I didn't try it and maybe you did, but if you didn't you should, as importing it as an ordinary texture always messes things up in Unreal.
Sometimes imperfections like the one seen on the right (teal arrow) can help a model look more real-world, as it could easily be a mold imperfection or a dent. In our search to make everything perfect and mathematically correct, we could be stripping out the kinds of detail we try hard to build in.
We might actually want to learn why and how these things pop up, so we can force them in certain situations rather than eradicate them.
The real-world objects we seek to replicate have tons of imperfections; we need to be careful we don't eradicate all traces of them and judge them to be errors... The imperfections you see in Maya bakes (as above) seem more passable than the ones in Max (hidden edges killing surface detail).
Well yeah, but in the case where you REALLY want something smooth (like a polished object with an env map reflection on it) you really want accuracy...
Now I understand that in such cases one might be better off increasing the density and not even using normal maps at all (like on car driving simulators), but still, it shouldn't prevent us from looking for a perfect normal-mapping solution :P
I agree: figure out what causes it, so you can stamp it out if you need to. Just take a second and ask yourself, "can this be used as imperfect detail?" before taking the extra time to kill it just to be mathematically perfect.
In our quest for the perfect bake we can take more time and end up with less detail.
Right, smoothing errors to me are never a good thing. If I want a specific look I will create that specific look, not rely on bad software to... maybe give it to me.
Yeah, good points. I agree that if it's a planned detail you've probably already thought about it, and it's not like it would be hard to paint in. I probably wouldn't waste time trying to get a spot to bake badly, but I might not nuke it if it pops up... maybe...
dur23: Well, the whole point of a normal-map is to represent the high-poly normals using the texture, it's not really anything to do with having "the inverse of the low poly shading" as you seem to be suggesting.
All it represents is the difference between the interpolated vertex normal and the high-poly normal, per pixel. It just so happens that a lot of the time, to make that look correct, you end up with per-pixel normals which are "counteracting" the shading of the lowpoly normals, resulting in the stuff you're identifying in those images.
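[Editor's note] A tiny numeric sketch of that "counteracting" behaviour, assuming an orthonormal basis; the 30-degree tilt and the frame below are made up purely for illustration. If the interpolated low-poly normal leans 30 degrees but the high-poly surface there is actually flat (facing straight up), the baked tangent-space normal must lean 30 degrees the other way so that, once rotated by the vertex frame at render time, it faces straight up again.

```python
import math

def world_to_tangent(world_n, t, b, n):
    """For an orthonormal TBN the inverse is just the transpose:
    project the world-space normal onto each basis vector."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return [dot(world_n, t), dot(world_n, b), dot(world_n, n)]

a = math.radians(30)
# Interpolated vertex frame, tilted 30 degrees in the XZ plane:
tangent = [math.cos(a), 0.0, -math.sin(a)]
bitangent = [0.0, 1.0, 0.0]
vtx_normal = [math.sin(a), 0.0, math.cos(a)]

# The high-poly surface is flat: its true normal points straight up.
baked = world_to_tangent([0.0, 0.0, 1.0], tangent, bitangent, vtx_normal)
# baked leans opposite the vertex normal's tilt, instead of being the
# "neutral" (0, 0, 1) a bake against a matching flat lowpoly would give.
```

So the baked texel is only correct for the exact interpolated normal it was baked against, which is why baker and renderer have to agree on that interpolation.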
Yar MoP, I do understand the per-pixel representation of the high-poly normals. I think I just worded it incorrectly, because in the second paragraph you kind of say exactly what I was saying with "counteracting the shading of the lowpoly normals". Which translates to (imo) the baker taking the normal of the lowpoly, inverting it (per pixel) and overlaying it on top of the per-pixel high-poly normals. Which means the lowpoly shading (normals) = zero. So when you add the high-poly normals they do 100% of the shading, which means the lowpoly's shading represents 0% of the overall shading.
Now when I look at the Max render, it appears it is not properly adjusting for the lowpoly mesh's normals (smoothing groups). Which to me looks like the whole problem starts at the interpolation of the lowpoly normals. Anyone wanting to use Max baking in their engine would have to interpolate the lowpoly normals the same way that Max does (which is why it only looks right in Max) in order to replicate the shading within Max. Am I crazy? Am I being useless by pointing this out???
dur, not really useless; it's clear you get it, and that's the whole point of the thread. And it's not really the lowpoly normals as such, more the tangent space it creates from the lowpoly at render time, which is different from the viewport tangent space.
Single smoothing group applied, raytrace projection; the Crytek method generates its own custom normals (however, those won't affect the bake ray direction).
Further tests need to be done, whether smoothing groups help Crytek's way at all and so on... but anyway, the proof of concept is done. How this is taken further within 3ps and outside (considering the associated additional work of rebuilding the Maya way) still needs evaluation.
Bear in mind that I am no baking expert; I just applied the standard stuff, set the projection method to "Raytrace" and that's it. As said, whether it's really worth further effort will be found out by fellow artists.
CrazyButcher, got to love how major behemoth apps are being fixed by their users, again ...
Not really sure if I fully understand what you are doing but yeah that screenshot looks odd and awesome at the same time :P like something that should have been here always, but only happens now, thanks to you!
Turtle also makes a mess of all your Maya scenes with extra nodes that you can't remove if you don't have the plugin installed. We've done a few outsourcing jobs where the client sent us these nice surprises.
I did my own tests with EQ's mesh, and my conclusion is that Max uses a different method to interpolate the normals in the software renderer no matter what you do.
The only partial solution is to try to bring the normals in the viewport as close as possible to what the offline renderer does.
The best result with EQ's mesh came after I ran a retriangulation in Edit Poly, added an Edit Normals modifier and retested.
Haha, no wonder more people don't use Turtle if it's $1500 per license... that kinda prices it out of the budget of anyone other than a studio doing a lot of lightmap baking.
Undoz, can you elaborate on that? Please post your results too.
The best result with EQ's mesh came after I ran a retriangulation in Edit Poly, added an Edit Normals modifier and retested.
I don't see why or how this would change anything at all, but I totally trust you, and I wouldn't be much surprised if it was yet another one of these special Max cases...
I've also got some progress going on: I'm well on my way developing the conversion tool, but decided to test the theory first.
In the shots below, taken from the viewport, I'm using Max's own tangent basis. Both shaders are using an object space normal map as a normal source. The mesh on the right, however, is running a shader that converts OS to TS on the fly.
I should be able to get the tool up and running in the next couple of weeks.
So fozi, what we see there on the right is what the generated normal map from Max would have to look like in order to work perfectly with a realtime shader in the Max viewport? Interesting stuff! It would be interesting to check the delta between that shader output of yours and Max's default generated normal map, to see what the differences are.
I don't think there's any conversion to file going on yet, right?
Converting OS to TS in the shader doesn't really benefit the baking process yet; it's just more instructions for the same result?
Those converted maps need to be exported to file for there to be any benefit I think.
Great work fozi! When I tried the same with converting OS to nvMeshMender's TS there were still issues (http://boards.polycount.net/showthread.php?t=44460), but you seem to have nailed it! It's nice to see that this time the issue generates more interest than 3 years ago.
You mentioned you wanted the TS stuff in your tool to be scriptable; what language will you go for? I would scrap my plugin then and support you on this if you want.
Xoliul, not yet. I just wanted to make sure the math works. The precision loss, when saving to 8bit / channel, shouldn't affect the end result in any significant way.
CB, I don't see any of those seam related issues. It seems to work with any mesh and uv configuration, including organic objects. I'm using an expensive matrix inverse to make sure the potential lack of orthogonality in the basis doesn't break the conversion.
Regarding the scripting, I'm thinking Mono, but I'll probably support native dll plugins starting from the first version. The app is going along ok right now, but I'm going to need a lot of help for presets: finding out how Max, Blender, UE3 and others compute the tangent basis would be very helpful.
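[Editor's note] For what it's worth, most of the tangent bases those presets would need to replicate start from the same textbook per-triangle step: deriving a tangent and bitangent from position and UV deltas (Lengyel's method). Where Max, Maya, Blender, UE3 and the rest diverge is in how they average, orthogonalize and split those per vertex, which is exactly what the presets would have to capture. A sketch with illustrative triangle data, not taken from any particular app:

```python
def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Per-face tangent and bitangent from position/UV deltas."""
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)   # assumes non-degenerate UVs
    tangent = [(dv2 * e1[i] - dv1 * e2[i]) * r for i in range(3)]
    bitangent = [(du1 * e2[i] - du2 * e1[i]) * r for i in range(3)]
    return tangent, bitangent

# Axis-aligned corner triangle: the tangent follows +U (world +X)
# and the bitangent follows +V (world +Y), as expected.
t, b = triangle_tangent([0, 0, 0], [1, 0, 0], [0, 1, 0],
                        [0, 0], [1, 0], [0, 1])
```

The per-vertex averaging, smoothing-group splits, Gram-Schmidt step and handedness sign are where the real per-app differences live.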
Back then I didn't do the expensive inversion, but I also worked from an orthonormal basis. Well, native code would have the benefit of being able to plug the "tangent space generator" lib into any frontend application more easily (standalone, plugin...).
Yeah, exactly that "preset" work is what I was thinking of as well. I am also not sure how legal it actually is, as some of it falls under "reverse engineering", which is prohibited by the EULAs.
Still have to try out D3 renderbump!
It's gonna be the same, just an extra bake map. (Heightmap)
Now I understand that in such cases one might be better off increasing the density and not even use normalmaps at all (like on car driving simulators) but still, it shouldnt prevent us from looking for a perfect normalmapping solution :P
I say, IRRELEVANT!
I see what you mean, Vig, but if I want an imperfection, I'll model or texture it in rather than trying to make some bake error fit into my texture.
wuts turtle? links plz
http://www.illuminatelabs.com/
Because it costs money and you have to email Illuminate Labs for pricing information?
How much does a single user license cost, bugo?
http://www.illuminatelabs.com/shop/Turtle/purchasing-turtle
Never tried it, but it's always praised in 3D World.
I have no idea, I use it here at the company, so I don't know the prices; I can research that though.
But yes, it might be expensive for one user only.
Price aside, it is a great plugin, the best I've ever used.
So I retested it, because I get like that sometimes: maybe I can crack this impossible problem! But of course I can't.
Interesting, though, that adding an Edit Normals modifier in Max fixes some of the errors! How so? Who knows! It must be magic!
here:
From left to right: scanline, shader before Edit Normals, shader after Edit Normals.
Good old Max, the package that keeps on giving!
Oh, I think this is the same as undoz posted, sorry!
good work!
CW, exactly. Here's both + diff:
(edit: forgot to abs the diff; scaled to x3)
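[Editor's note] For anyone reproducing that comparison: the diff image is presumably just a per-channel absolute difference scaled up 3x so small deviations become visible. A toy sketch with made-up 8-bit texel values:

```python
def diff_x3(pixel_a, pixel_b):
    """abs(a - b) * 3 per channel, clamped like an 8-bit image."""
    return [min(255, abs(a - b) * 3) for a, b in zip(pixel_a, pixel_b)]

# Two nearly identical normal-map texels ((128, 128, 255) is "flat"):
d = diff_x3([128, 132, 255], [128, 128, 250])
```

Without the abs, negative differences wrap or clamp to black and half the error disappears, which is what the edit above is correcting for.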
Shading comparison: