Kodde, I noticed in your second example you are using the same UV layout on both the all-soft and the hard normal instances. With soft normals it's a good idea to keep UVs connected, but where you have hard/split normals the UV shells should be kept separate.
This is why you are getting nasty-ass seams on your hard normal instance.
I would definitely favour the hard edge method out of your examples; the kind of wavy/multi-coloured interpolation in your soft edge cases is bad juju in my opinion.
A sexy smooth lavender colour is going to behave better.
-Things may look fine in a simple example like this, but if you deform your asset it could totally change the way the normals are interpolated across a face. You may also want to delete part of the asset later; this will likewise ruin the result.
A hard edge (flatter normal map) will not be affected so much and is more flexible.
-Also, if your mesh is not triangulated, I suppose the end result in game could be triangulated differently and not match.
-These are very simple shapes just inheriting bevels from their high poly brethren.
Things would be different if you had more shapes (screws, vents, whatever) modelled in the high poly. With soft normals across 90-degree edges your only option is to have the surface-transfer rays fire out interpolated around the edge.
This may warp and distort the shapes on the surface. If the edge is hard, the rays can be made to fire straight and parallel on flat areas and not distort the forms.
With a single bevel the 90-degree angle has been turned into two 45s; this is still going to interpolate across the flat areas and give some slightly sketchy results.
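To make the ray behaviour concrete, here's a rough numpy sketch of how a baker typically derives its sample-ray direction from the low-poly vertex normals. This is an illustration of the principle only, not any particular baker's actual code:

```python
import numpy as np

def bake_ray_dir(bary, corner_normals):
    # A baker fires its sample rays along the vertex normals interpolated
    # across the face (barycentric weights). Soft normals differ per corner,
    # so ray directions fan out across the face and can warp projected
    # detail; with hard edges every corner shares the face normal, so all
    # rays fire straight and parallel.
    n = sum(b * cn for b, cn in zip(bary, corner_normals))
    return n / np.linalg.norm(n)

# Soft case: corner normals averaged with the adjacent 90-degree faces.
soft = [np.array([ 0.707, 0.707, 0.0]),
        np.array([-0.707, 0.707, 0.0]),
        np.array([ 0.0,   1.0,   0.0])]
print(bake_ray_dir((0.7, 0.1, 0.2), soft))  # tilted: varies across the face

# Hard case: all corners use the face normal.
hard = [np.array([0.0, 1.0, 0.0])] * 3
print(bake_ray_dir((0.7, 0.1, 0.2), hard))  # always (0, 1, 0): parallel rays
```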
To echo the point JordanW made about tangent space maps versus object space ones: having this kind of interpolation over flat areas means you can't reuse the same flat area of the map on other geometry. It becomes bespoke, locked down to the original sample geometry.
I was actually playing around with separating the UV shells in the Hard Edge version earlier today, and yes it more or less completely eliminated the nasty edges.
Then it hit me that there is a "Warp Image" feature in Maya that lets you transfer texture data from one UV set to another. So I figured maybe I could bake to a UV set with separated faces (to avoid the errors) and then transfer this information to the sewn-together UV set in order to save vertices. The Warp Image feature worked fine, but the results I ended up with after warping looked pretty much the same as baking the sewn-together version. Hmm...
So I started comparing the normal maps generated from the separated and sewn-together versions, and the separated one had more color variation around all the shell borders. This is when it hit me that my colleague (old Programming Lead) had told me something like "the normal map samples adjacent pixels". I hadn't completely understood what he meant at the time, but now it seemed to be making sense. The separated version had more color variation outside its UV shell borders. Could it be that a few pixels outside the UV shell get sampled, and therefore the separated version, with more info outside the shells, ends up looking good?
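That extra colour outside the shells is usually called edge padding or dilation. A crude sketch of the idea, hypothetical code rather than Maya's actual implementation, and ignoring wrap-around at the image border:

```python
import numpy as np

def pad_shell_borders(texels, coverage, passes=4):
    # texels:   (H, W, 3) baked normal map
    # coverage: (H, W) bool, True where a UV shell actually got rendered
    # Each pass copies border colours one texel outward, so bilinear
    # filtering and mipmaps sampling just outside a shell still pick up
    # sensible colours instead of background.
    texels = texels.copy()
    filled = coverage.copy()
    for _ in range(passes):
        grown = filled.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbour_filled = np.roll(filled, (dy, dx), axis=(0, 1))
            take = neighbour_filled & ~filled
            texels[take] = np.roll(texels, (dy, dx), axis=(0, 1))[take]
            grown |= take
        filled = grown
    return texels
```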
Just to be clear on this: all my examples in this thread so far are screenshots from Maya's viewport, some of them with the "High Quality Rendering" option turned on. High Quality is still a realtime viewport result. Whether this counts as "scanline" or not I wouldn't know, as I don't really know exactly what that term means.
Kodde, yes, that is exactly what is going on. The colour variation around the edges you're seeing is the bleed (edge padding), which is there for precisely the reasons you gave.
And yes, that is why the hard edge shells can't be placed directly next to each other: the tangent basis is different from shell to shell, so the colours will be radically different even though the UVs may correspond to the same vertex.
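For anyone wanting to see why the basis differs per shell, here's the standard per-triangle tangent construction from position and UV deltas. This is a minimal sketch; real tools also build the bitangent and average/orthonormalize per vertex:

```python
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    # Lengyel-style tangent: solve the position edges against the UV
    # edges. The tangent comes from the UV layout, not from the map
    # itself, so two shells covering identical texels can still decode
    # the same colours into different normals.
    e1, e2 = p1 - p0, p2 - p0
    d1, d2 = uv1 - uv0, uv2 - uv0
    r = 1.0 / (d1[0] * d2[1] - d2[0] * d1[1])
    t = (e1 * d2[1] - e2 * d1[1]) * r
    return t / np.linalg.norm(t)

p0, p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 0., 1.])
# Same triangle, same texels, but the second shell is rotated 90 degrees in UV:
print(triangle_tangent(p0, p1, p2, np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])))
print(triangle_tangent(p0, p1, p2, np.array([0., 0.]), np.array([0., 1.]), np.array([1., 0.])))
# Different tangents, so identical map colours mean different surface normals.
```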
Persistent testing of all scenarios and a bit of technical insight will get you far. Oh, and having a programmer to bug about stuff now and then also helps.
I'm learning a lot in this thread, gotta love polycount.
Seriously though, I'm working with the pay-it-forward theory. I work as a teacher, so all the good stuff you send at me will kick back to the gaming industry some day in the future.
Oh, and thanks for putting up with my questions. Hopefully many others can learn something as well by reading this.
As I said earlier, we're doing the same thing that id engines do (custom baking app built into the engine tools), however we have set it up to use Maya's normals and reconstruct tangents in the same way, so we end up with a result where if a normal-map looks correct in Maya, it will look correct in game. We could bake in Maya but doing it in-engine guarantees perfect results without having to tweak various settings that Maya offers, and also means we can batch re-bake all the models in the game if anything changed with how we were dealing with normals.
I think this is the way to go ... if you don't have a setup where an object with a normalmap can be displayed the same way in your app as in your engine then you're potentially losing time in your pipeline for various reasons.
It isn't that difficult to tweak either your 3d app or your engine to be compatible either way, and I definitely think it's worth it.
As has been said, the bottom line is how it looks in game. Anything else is secondary.
You hear so often about people's bad experiences with teachers who are completely behind the times and teach courses they know nothing about. So it's refreshing to see one who is actually interested in learning how and why we do things.
Kodde you might find an article I wrote some time ago useful.
Damn it's been a while since I've updated this thing, never getting around to doing it.
EarthQuake, do you have any screenshots of assets using object space with mirroring? I want to learn more about this, but I haven't seen it implemented in any game except the Marmoset engine.
If normal maps are going to stay around for the next few years, it will be a shame if this doesn't become a standard.
chai: http://boards.polycount.net/showthread.php?t=53986
There should be virtually no difference between mirrored and unmirrored, if done right. I intend to make a 3dsmax viewport shader and tools for it.
Can someone elaborate on using object space normal maps for deformable geometry? How can you do that without calculating the surface tangent? The only information I found about it was basically converting the object space map into tangent space in the shader, which seems a bit pointless... or am I missing something?
Anyone?
If the model is rigged, it's easy to keep track of the vert transformations and apply them when you're lighting the object. At least that is how I understand it. It's a little more expensive than a tangent shader, but OS is a little less expensive by nature (TS needs to be converted to OS to render anyway), so it's pretty much a wash, or at least a performance hit that would never be noticeable.
Also, another way that OS is cheaper: there is no need to store bi/tangents, and there is no need to have hard edges or add a bunch of extra edges, so your vert count will be lower with the same results.
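A minimal sketch of what EQ is describing: the shader just runs the sampled object-space normal through the same blended bone matrix that skins the vertex. Illustration only, assuming rigid 3x3 rotation parts of the skinning matrices:

```python
import numpy as np

def light_skinned_os_normal(os_normal, bone_rots, weights, light_dir):
    # Blend the (rotation parts of the) skinning matrices with the vertex
    # weights, exactly as the positions are skinned, then push the sampled
    # object-space normal through the same transform before the N.L term.
    # No stored tangent/bitangent anywhere.
    blended = sum(w * m for w, m in zip(weights, bone_rots))
    n = blended @ os_normal
    n = n / np.linalg.norm(n)
    return max(0.0, float(n @ light_dir))

rot_z = np.array([[0., -1., 0.],   # bone rotated 90 degrees about Z
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
print(light_skinned_os_normal(np.array([0., 1., 0.]),
                              [rot_z, np.eye(3)], [0.5, 0.5],
                              np.array([0., 1., 0.])))
```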
EQ: Regarding lower vert count - not necessarily. To get a good result on hard-surface stuff with tangent-space maps you pretty much just have to harden the edges along discontinuous UVs, which are split verts anyway, so you actually end up with the same count for the vast majority of objects, since your UVs will be the same for both maps.
Well, it will never be higher when you're using OS, that's for sure. But I don't think you can assume that for every asset, wherever you have a hard edge, you also have a UV split. That's generally the case, but not always. It's the ideal, of course.
But really, there is a different mentality altogether when using OS maps as far as UVs go: you can actually get away with fewer splits in your UVs, because you have no hard edges, and thus no reason to split UVs where you'd split edges.
So really, in virtually every case it will be equivalent or less, and probably a little less in nearly every case. I really doubt it makes much of a difference, though.
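The vert-count argument boils down to this: the GPU treats each unique (position, normal, UV) combination as a separate vertex, so a hard edge splits verts exactly like a UV seam does. A toy sketch of the counting:

```python
def gpu_vert_count(corners):
    # The GPU treats every unique (position, normal, uv) tuple as its own
    # vertex, so hard edges and UV seams each split verts; a hard edge
    # that sits exactly on an existing UV seam costs nothing extra.
    return len({(p, n, uv) for p, n, uv in corners})

edge = [((0, 0, 0), (0, 1, 0), (0, 0)),
        ((1, 0, 0), (0, 1, 0), (1, 0))]
print(gpu_vert_count(edge))  # 2 - soft edge, shared UVs

hard = edge + [((0, 0, 0), (1, 0, 0), (0, 0)),
               ((1, 0, 0), (1, 0, 0), (1, 0))]
print(gpu_vert_count(hard))  # 4 - same positions, split normals double the verts
```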
Here's a quick example; this was from an OS mesh. If I'm doing a TS bake, there's a good chance I will want to have this little bit on a different smoothing group, and I would want to break the UVs here to avoid artifacting at the edges. With a complex mesh you could have many instances like this, where it would be more efficient to keep your UVs in as few shells as possible, but you'd have to break them up more for TS maps.
If I'm getting bad smoothing there, and it is noticeable, then yes, I would. But that is beside the point; it was simply an example.
If you take the same mesh after the fact - say you have a mesh built for TS maps that only has hard edge splits at the UV splits - then yeah, this will be the same vert count. But that's making a mesh by "TS" rules, so you have to do extra stuff that just isn't necessary when doing a mesh for OS maps.
Before trying it out myself: can you float detail with an OS normal map like you can with a tangent-space one, or do I have to model the detail straight in there if I want to have it?
Wait, what's going on here? You can compress OS maps the same way a lot of games compress tangent-space maps, with DXT5 compression. You can't do 3Dc or other normal-map-specific compression, because OS uses all three channels, but none of those formats are "faster" - they're actually slower, because you have to recalculate that third channel at load time, and AFAIK they take up the same amount of video memory, only saving space on disk. That third channel has to be calculated at some point. Unless it's doing it every frame?
I'm confused now.
Edit: if you're just saying OS mirroring, yeah, gotcha.
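For reference, that "recalculate the third channel" step is just renormalization, whether it happens at load time or per-fetch in the shader. A sketch of the usual two-channel trick:

```python
import numpy as np

def reconstruct_z(xy):
    # Two stored channels in [-1, 1] -> full unit normal. Works for
    # tangent-space maps because Z always points away from the surface
    # (positive); a full object-space normal can face anywhere, which is
    # why this trick doesn't carry over to OS maps.
    x, y = xy[..., 0], xy[..., 1]
    z = np.sqrt(np.clip(1.0 - x * x - y * y, 0.0, 1.0))
    return np.stack([x, y, z], axis=-1)

print(reconstruct_z(np.array([[0.6, 0.0]])))  # [[0.6, 0.0, 0.8]]
```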
As far as compressing OS maps as a practical matter, you tend to get... different artifacts. You have much less of the wide gradients that you would need in TS maps to compensate for smoothing, so you get fewer errors from that sort of thing, but there also tends to be - I don't know how to say it - a wider range of colors. Say you have a cylinder: you basically have colors representing the entire rainbow, and these areas do not work so well with compression.
Also, I don't think CB was saying that you can't compress mirrored OS maps (I don't see why there would be a problem), just that he hasn't tried.
And you can compress OS maps, but if you look at the id/NVIDIA research paper on compression, you will see that compressed OS is slightly inferior in quality to compressed tangent space, and that 3Dc yields the best quality.
Even if you have to recalc Z with two-channel tangent compression, cards nowadays have fewer issues with arithmetic ops than with the texture fetch itself. And a texture fetch on a compressed map is much faster than on an uncompressed one, because more pixels fit into the texture cache (to my understanding, that is). So compression is not really only about saving disk costs; in general it is faster. It is faster to unzip game data into memory than to read the same data uncompressed from disk, simply because moving lots of data is slower than calculating data.
Now back to the quality discussion. While in theory OS compressed stuff is inferior to tangent compressed, one should ask whether OS doesn't still win the race, due to tangent space simply having many more interpolation issues and requiring a proper mesh layout... I would say that for mechanical stuff OS will win.
All this has been mentioned in the "be all end all" thread, too. The false truths about OS not being animatable (BF2 uses it), not being mirrorable (I showed that myself)... everything has been stated before, but certain opinions simply stick...
If you think of the math involved in lighting, it's basically a normal direction and a light direction; where those two come from doesn't play any role, be it a per-vertex normal, an OS map, or a tangent map. And any vector can be transformed (read: bone animated). There really isn't any difference here. We had animated characters without normal maps before, and a per-vertex normal is in object space. Whether we deform it per-vertex or per-pixel makes no difference to the result.
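CB's point in code form: rotations preserve dot products, so transforming the normal out to world space or the light back into object space gives identical lighting. A toy check, not engine code:

```python
import numpy as np

def lambert(n, l):
    return max(0.0, float(np.dot(n, l)))

R = np.array([[0., -1., 0.],   # object-to-world rotation, 90 degrees about Z
              [1.,  0., 0.],
              [0.,  0., 1.]])
n_os = np.array([1.0, 0.0, 0.0])      # normal sampled from the OS map
l_world = np.array([0.6, 0.8, 0.0])   # light direction in world space

# Either move the normal into world space, or move the light into object
# space with the inverse (transpose) rotation - the result is the same:
print(lambert(R @ n_os, l_world))    # 0.8
print(lambert(n_os, R.T @ l_world))  # 0.8
```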
I have the strong feeling that many people simply don't believe what they haven't seen... even if you can argue for it perfectly based on the math involved...
So why do we have tangent space, then? I would say the main issues are texture re-use and generating maps from the oldschool "bump maps". More or less historical reasons, but it is pretty nice if you want to use the same texture over large areas in your game - think of the brush-based maps... That became so popular (plus the fact that you can recalc Z) that IMO enhancements to OS maps were simply overlooked, despite the many issues you have baking tangent space...
The strong colours in Kodde's normal-maps are what I'd expect to see from baking an object like this.
EQ: What app did you bake / display those meshes in? It looks like the baker and/or shader aren't calculating things correctly. As you can see from the Maya bakes and shaders, the output normals are much stronger, and the previews are much less blobby (which implies it's calculating the tangent basis better). You can see a couple of small issues on the far right mesh, but nothing hugely noticeable.
Basically if you can't display a normalmap like that in an engine then something is probably wrong with your shaders, or your normal map baker.
Ah, I missed this post. It was baked in Max and displayed with a standard viewport shader in Max.
I'd be really interested to see what sort of results you're getting with your process - exactly what you can and can't get away with.
For work on DoD, we added xnormal support for our file format (to get the correct bi/tangents etc), and while this helped make our results as accurate as possible, it didn't come anywhere near solving the most common problems with tangent space maps, i.e. needing to double your polycount by adding tons more edges just to get a "clean" (still inaccurate compared to a proper OS map) bake, or needing to use tons of hard edges for anything remotely complex.
EQ> I actually took your test models and tried them with your baked normal map in Maya; it looked pretty much the same as your Max screenshots. Then I used your high/low models to bake my own normal map in Maya, and it turned out way different. First of all I got some weird artifacts I had never seen before, but aside from those the rest looked better than the Max-baked ones.
Cool, maybe it will be worth trying the Maya-baked maps in Max. I'm going to be pissed if Max can't even render a normal map that displays properly... in... MAX.
Oh also, make sure you're throwing some spec on your test materials. Often smoothing errors can seem pretty minimal, but throw a nice shiny material on there and it all goes to hell. Even worse, a cube map or image based lighting can really bring out errors.
Oh shit, yeah, I was getting something like this from xnormal I think - damn, I don't remember what I did to fix it. Oh wait, yeah, it was just messed-up smoothing from resizing in modo.
Just unlock normals in Maya, then average the normals on these meshes, separate out that middle one and set its smoothing angle to something like 45. Then you should get a proper bake - sorry about that.
Edit: Considering how terribly wrong the smoothing is on those meshes, I'm actually pretty surprised how good the Maya bake looks there, lol!
Yeah, like I said earlier, it's a matter of having a baker that creates perfect maps for your shader. I wouldn't trust Max at all for this; it doesn't bake normal maps that it can display properly in real-time (I assume they're calculating the tangent basis differently for offline render and viewport display, which is silly).
Maya does it right - and there's no reason why game engines can't do this either. Ours does.
Hey MoP, on a related note - IIRC the 3dsmax bakes work perfectly in UE3, since the Epic pipeline revolves around that app. So that's a combo to keep in mind. However... some UT3 props suffer from normal map mirroring issues. So maybe it's all related in the end?
Yeah, if your pipeline is set up to work with a specific app then there will be no problem - I assume the programmers just used the exact same information and calculations on the meshes/UVs in their display shaders as Max does when baking normals, so the results should be identical.
Hey, I have a quick question, if anyone is still reading this thread, lol. From what I have gathered, using "hard edges" or non-averaged vertex normals effectively doubles your vert count along those hard edges. So if you compare the beveled version and the hard edge version, aren't they ultimately the same in terms of performance?
No, this is not the case. People have claimed this in the past, but they fail to realize two important factors:
1. You're adding extra triangles, and while vert count may be the main thing to consider when dealing with performance, triangles are *not* free.
2. Small, thin polygons tend to be a bottleneck, because you're spending time rasterizing stuff that is too small to show up on screen, so the pixel shader just sits around waiting on these polys without actually rendering anything. - I'm pretty sure this is the gist of it, but someone could probably give a more technically accurate response.
Thanks EQ, sometimes it makes my head spin reading some of these misconceptions and then trying to apply them. This is by far one of the best threads on normal maps I have ever come across. I think a good chunk of the issues were addressed here.
That, or you may just have enough experience doing this that you know what sort of geometry tends to work out well. I would imagine that is the case.
Hence the part of my post saying: this isn't accurate, don't do this.
lol EarthQuake, so that means I know of no game that uses it now :P
If I make an asset such as a rock formation, would I be better off using an OS normal map than a TS one? Would it produce a better end product?
Let me get you some screenshots.
If you want to try Maya baked normal maps in Max then go back a page or two and grab my test scenario.
Although I think this has already been tested. Looks like some color channel needs flipping.