http://jbit.net/~sparky/academic/perturb.h
Feel free to comment!
I am also thinking of making a small demo available (nothing fancy though).

It's always awesome when someone has a question about something and the original author pops up to answer it.

Why exactly do we need the .GetDimensions function?
As I'm using this in a d3d9 environment I've been having to pass the texture width and height into the shader externally - but if I put the full dimensions in (e.g. width: 2048, height: 1024) I find I have to turn the strength down extremely low (0.001 or so) for it to look correct, because it's scaled up so much by the multiplication by dim in float2 dBdUV = dim * derivMap * _BumpStrength.
But if I feed in the texture ratio rather than the size (e.g. width:2, height:1), it seems I can generally keep strength at 1.0 and it looks pretty good.
Not sure, but it seems to differ between applications. In Maya and Unity I don't multiply by sidelength, but in Max I do. There are some differences in how they handle UVs so I guess that has to do with it.
morten, I've been wondering how this works with anisotropy for a while. I guess there is a local coordinate frame with r1,r2 and the normal, but I'm not sure if that's the right set of coordinates to use considering there's some additional magic to compute the surface gradient. I know that anisotropy is a pretty big issue in itself, I'm just really interested in how they would behave with screen space derivatives.
> Why exactly do we need the .GetDimensions function?
It's just the d3d10-d3d11 way of getting the resolution of the top mip level. You can just pass these yourself which is probably what you're already doing?
The reason you have to scale by the texture resolution is because in a derivative map of the height map the slopes are stored w.r.t. texture space in units of texels.
The scale by the resolution will change it to units of an entire texture which allows us to use the normalized texture coordinates when using the chain rule to transform the derivative such that we obtain the derivative of the heights w.r.t. screen-space.
The size of the bump scale you have to apply is a separate issue. This has to do with the size of your object, like if you're doing displacement mapping. In that case you would also have to adjust the scale to match the extent of your object. However, if you don't want to set the bump scale manually then you can generate it in code, as was explained in this post --> http://mmikkelsen3d.blogspot.com/2011/11/derivative-maps-in-xnormal.html
Also notice if you choose to use this method to generate the bump_scale then the dependency on texture resolution disappears altogether for square textures. This is because the function SuggestInitialScaleBumpDerivative() returns a value that has been divided by sqrt(width*height).
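To make that scaling chain concrete, here is a minimal HLSL sketch of the whole perturbation as described above. This is my paraphrase of the idea, not the actual perturb.h listing, and derivTex, samp, dim and bumpScale are placeholder inputs you would wire up yourself:

float3 PerturbNormal(float3 vN, float3 surfPos, float2 texST,
                     Texture2D derivTex, SamplerState samp,
                     float2 dim, float bumpScale)
{
    // the map stores slopes in units of texels; scaling by the top-mip
    // resolution converts them to slopes per unit of normalized UV
    float2 dBdUV = dim * derivTex.Sample(samp, texST).xy * bumpScale;

    // chain rule: derivative of the height w.r.t. screen-space pixels
    float2 dUVdx = ddx(texST);
    float2 dUVdy = ddy(texST);
    float dBdx = dot(dBdUV, dUVdx);
    float dBdy = dot(dBdUV, dUVdy);

    // surface gradient built from screen-space derivatives of position;
    // note the only per-vertex input is the normal, no tangents required
    float3 vSigmaX = ddx(surfPos);
    float3 vSigmaY = ddy(surfPos);
    float3 vR1 = cross(vSigmaY, vN);
    float3 vR2 = cross(vN, vSigmaX);
    float fDet = dot(vSigmaX, vR1);
    float3 vSurfGrad = sign(fDet) * (dBdx * vR1 + dBdy * vR2);

    // Blinn's perturbed normal
    return normalize(abs(fDet) * vN - vSurfGrad);
}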
> Not sure, but it seems to differ between applications. In Maya and Unity I don't multiply by sidelength, but in Max I do. There are some differences in how they handle UVs so I guess that has to do with it.
Sounds like you have a bug somewhere. The method is definitely not application dependent, though you want to watch out for things like the size of the object changing. Also set bFlipVertDeriv to true if the texture origin (upper-left vs. lower-left) doesn't match the derivative map.
> I've been wondering how this works with anisotropy for a while
The short answer is yes, it still works well in such cases. There was a little bit more info from me in a previous post about why the silhouette is the perfect place to put derivative/tangent splits.
The more elaborate answer to the question is that one of the fundamental things I show in the paper is that Blinn's perturbed normal depends on the shape/size of the surface and the distribution of height values and not the underlying parametrization used to obtain the specific surface and the specific distribution of heights. Since inverse projection from the screen is a valid local parametrization (of the surface and the distribution of heights) it is correct to use this to obtain Blinn's normal.
Stuck the bump scale estimator into Unity and it seems to be giving fairly decent results.
There are still some visual oddities, but I can't easily tell whether it's the baker, the shader or Unity. No worse than normal mapped, just different.
Cheers
As I was saying earlier you need to have "well-behaved" normals on the mesh you are bump mapping (trivially the case for tessellated meshes).
Essentially, the mesh to be bump mapped has to look nice on its own when lit for this to work. Roughly, this means the variance of the vertex normal within the scope of one face must be "moderate". So for non-tessellation cases this means if you want a hard edge you have to either use smoothing groups on the lo-res or shove in more faces. As I was saying earlier, once an artist learns to work with this it is trivial to reproduce the same result in any engine.
metalliandy and I are releasing a small demo a few days from now so there will be a reference implementation available. It's D3D11 only though. There'll be code and binary.
So does that mean it's back to using support edges like non-identical tangents require? Unlike something like 3point shader/modifier that syncs the tangents up identically and doesn't require as many support edges?
I understand that for tessellated meshes this method is a substantial benefit that doesn't require tangents. But for regular geometry, this seems a bit of a step back at the moment.
> Essentially, the mesh to be bump mapped has to look nice on its own when lit for this to work. Roughly, this means the variance of the vertex normal within the scope of one face must be "moderate".
> So does that mean it's back to using support edges like non-identical tangents require?
Eek yeah, that's a pretty big dealbreaker. If this method works no better than a broken tangent space workflow and worse than a synced tangent space workflow, really, what is the point of using it?
Working like this is the farthest thing from trivial, it's what we've been doing for years, and now that awareness is higher for proper synced workflows, going back to a method where extreme normal changes cause problems is just like.... going back in time.
The value of working with a proper synced workflow with reliable, accurate results really cannot be overstated; the time it takes to create the content, and the resources you need to spend (verts, tris), are both significantly less.
I don't think it's quite as bad as I might have made it sound.
I suggest people try it out. The point to what I was saying was it can't pull off the more extreme cases like the one we saw earlier in this thread.
In practice it works well for regular geometry too but we'll see what happens when people start trying to use this stuff. In the end it comes down to how practical it is for real world cases. There'll also be a viewer in xNormal's next release for this.
> I don't think it's quite as bad as I might have made it sound.
> I suggest people try it out. The point to what I was saying was it can't pull off the more extreme cases like the one we saw earlier in this thread.
Well that's the problem though: if it can't pull off that test mesh well, it won't work for 90% of hard surface models. It's not really that extreme of a case, it's actually very common for any sort of non-organic modeling to have that sort of topology.
Maybe it's slightly better than broken tangents? Still, I wouldn't see the benefit of using it as a normal map replacement over synced tangents if it can't do accurate shading. It really is a huge dealbreaker if it can't, for a lot of types of work. Even synced tangents aren't perfect, but they're about 98%; if this is... 90% or so, that's not nearly good enough.
It's like a cure for cancer that also causes cancer. =P
At the end of the day you're left with the same problem as with a broken tangent space.
You may think I'm just being an ass here, but I'm looking at this from an actual production point of view. I mean, sure you can get good results with this, I'm not going to say otherwise, but if it's more work (new workflows + same old problems with broken TS) for a production artist to use it and there's no visual gain, there's just no benefit in it.
I would love to try it out, but I've only got a DX10 card, so unfortunately I will remain a skeptic until someone proves otherwise (that this can replicate accurate shading on complex geometry).
This is what I've been messing around with. At the moment I'm using a derivative map that's generated from a tangent space normal map, because xNormal's throwing errors at me when I bake a deriv map.
I've tried it with 1-smooth-group cubes and it doesn't work so well. But then neither do normal maps (at least, in Unity).

http://mmikkelsen3d.blogspot.com/2011/12/so-finally-no-tangents-bump-demo-is-up.html

For some reason I have this error with the demo: http://stackoverflow.com/questions/2015810/visual-c-sharp-2010-msvcr100-dll-missing-when-opening-a-project-tried-everythi
(I also use Windows 7 x64, with the Visual C++ 2010 redistributable package installed)
"msvcr100.dll is missing, reinstall blabla"
If I copy msvcr100.dll into the same directory I get a crash with just a pointer ("0xc000007b" - I guess that doesn't help..)
> At the moment I'm using a derivative map that's generated from a tangent space normal map, because xNormal's throwing errors at me when I bake a deriv map.
> I've tried it with 1-smooth-group cubes and it doesn't work so well.
If you're baking to a broken tangent space and then converting to derivative, that may affect the end quality you're getting; you're essentially baking those smoothing errors into your deriv map. So doing a native deriv map bake, or converting from object space if you're able to do either, may give you better results?
Synced up normals work pretty well on simple box shapes, so I wouldn't really use Unity and its broken tangents as your reference point for TS normals. =P
It's baked in XNormal and imported into Unity using the same 3DS Max 2012 exported FBX file with explicit normals and tangents (both Unity and XNormal will import them).
Until the next release of XNormal I can't give a true comparison on the raw baked maps (as XNormal's deriv map baker's broken) so I'll have to convert TS Normal-to-Derivative. You can't easily convert OS Normal to Derivative as Derivative's still in UV space so I'm not going to compare them.
> It's baked in XNormal and imported into Unity using the same 3DS Max 2012 exported FBX file with explicit normals and tangents (both Unity and XNormal will import them).
> Until the next release of XNormal I can't give a true comparison on the raw baked maps so I'll have to convert TS Normal-to-Derivative.
If you want to PM me the LP obj files (with all SGs applied etc.) and the baked maps, I would be happy to compare them in Blender for you and send you shots or send you a .blend file with everything set up so you can check them out.
xNormal's bakes match Blender 100%, so it's actually the best way to view them currently, imho.
I'm baking Derivative maps in xNormal ok, so I'm really not sure what the problem could be.
Have you tried baking with obj, rather than fbx?
Also, I'm not sure if I mentioned this, but the Normal to Derivative and Derivative to Normal actions assume that the original normal map is baked as X+Y+Z+ and that you want the final Normal Map to be X+Y+Z+ (in the case of the Derivative to Normal), if that helps any.
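To make the convention concrete, the Derivative to Normal direction is essentially just height-field renormalization. A minimal sketch of the idea under that X+Y+Z+ assumption (my illustration, not the actual Photoshop action):

float3 DerivToNormal(float2 deriv)
{
    // a height field with slopes (dh/du, dh/dv) has a normal proportional
    // to (-dh/du, -dh/dv, 1); the signs here assume the X+Y+Z+ convention
    float3 n = normalize(float3(-deriv.x, -deriv.y, 1.0));

    // repack from [-1,1] into the [0,1] range stored in the texture
    return n * 0.5 + 0.5;
}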
Yep, under some strange circumstances xN will emit a scale error. I'll hot-fix that in a few days.
We also want to add derivative maps to the 3D viewer for the next version, by Feb-March probably. Then we'll post our experiences dealing with this new type of map... for now, it's R&D.
> Well that's the problem though: if it can't pull off that test mesh well, it won't work for 90% of hard surface models.
> At the end of the day you're left with the same problem as with a broken tangent space.
That's exactly how I felt. I asked Andy to tell me more about it, and I lost interest once it became clear to me it has at least as many cons as the current way, and is in fact worse than a 100% synced TS solution.
> It's baked in XNormal and imported into Unity using the same 3DS Max 2012 exported FBX file with explicit normals and tangents (both Unity and XNormal will import them).
But didn't you say earlier that this method isn't properly synced in Unity? Again, baking with a broken tangent space (just because you can import tangents doesn't mean it's synced) and then converting will likely result in less than optimal results.
I've managed to get the tangents synced as close as possible. Unity won't import explicit binormals from .FBX (it generates them in-shader via the usual cross(normal, tangent.xyz) * tangent.w;) but the tangents appear to be imported fine.
I don't know how xNormal handles .FBX files exported with binormals so I can't be certain it's baking with an identical set of data compared to how Unity renders it. I'll do some testing with the 3-point modifier and binormals copy/pasted to vertex colour, see if that makes a difference to things. But it's as close as I can get them at the moment.
So, yes, converting from normal map to derivative map isn't ideal and so isn't a true/scientific comparison by any means but at the moment it's the best I can do :P
@mmikkelsen - I think this is promising and I really appreciate your work, but I just wanted to throw my support behind what Earthquake is saying. The number one problem artists have with tangent space normal maps is the workflow pain that comes with smoothing errors like those shown in the "extreme case." It's very difficult for an artist to predict exactly when these errors will show up, and it's somewhat of a trial and error workflow to correct them (resulting in many time consuming edits and re-bakes). If I'm restating the obvious, forgive me.
We all try and get our engine and tools programmers to sync up the renderer to a single baker so we can eliminate these issues more easily, but we aren't always successful.
The best thing a tangent space normal map replacement could claim would be to eliminate smoothing errors. Any other benefits are secondary, and many artists will consider those benefits moot if this one problem cannot be solved better than a synced tangent space normal map workflow.
...
It also tends to fail during export when objects change hands. As an example, one studio I know of did vertex cache optimization before tangent space calculation in their engine, which triggered reordering and gave problems.
In other cases it comes down to things as simple as someone performing welding steps or just changing the overall index list layout. Another issue is quads, since results also change depending on which diagonal split is chosen. And all of this is assuming you're even able to get your hands on the source code used by the baker to generate the tangent spaces.
If you do end up taking the conventional normal mapping route then I strongly suggest that you use the tangent space generation that is used in xNormal since it does overcome a lot of these problems.
Off the top of my head that's all I have to say but I'll post again if there's more I think of or if there are any questions for me.
Morten.
Questions, yes please.
I am trying to wrap my head around the initial discussion of pipeline gotchas where... vertex cache optimization triggers reordering in engine? Because that happens before baking?
Performing welding steps/changing index list layout affects baked quality in engine? Why and how?
or...
What is meant by which diagonal split is chosen for a quad messing up baking - why, how, and at what point? ( does it happen before baking, and why, or after baking, and how? )
Sorry.. I am clueless. And as these seem to suggest being easily solvable with pipeline consideration... I am very interested. ( as easy as baking with xNormal? )
Earthquake makes a very good point...
( On the other side of the coin )
Anything that fits perfectly within the path between Maya and Engine is blessed.
Thanks in advance for any illumination on the pipeline issues you described. ( I have no clue how such problems actually manifest and what the direct results are )
Your argument is a fair one. However, there's more to it than convenient work flow.
The main reason I think this technique will end up being common in console game development is the memory savings. With a derivative map you don't save anything on the texture but you do save 14% on the vertices. When using the height map like I show in the demo you save an additional 50% on the bump map at some loss in quality. But once you throw in color textures, post filters etc. and see the game in motion it becomes more difficult to see this. After all, the end-user doesn't scrutinize each object up close for days like an artist does. I hope I don't offend anyone by saying that, but I am sure everyone is aware of it already. To us enthusiasts it can be difficult to see these things through the eyes of the end-user.
I don't think it's going to come down to ease of workflow or quality of results but simply savings. That's what's going to force this in the end. At least on consoles that is.
And mind you, I am someone who's experienced everything from last minute dumping of mip levels and character LODs to people getting into fights over 200kB of memory, all out of desperation to get the game to fit.
Btw, I also wanted to say if anyone gets an error message, when running the demo, on some compiler dll missing you just need to install the most recent version of DirectX from June 2010. Top link --> http://msdn.microsoft.com/en-us/directx/aa937788
What puzzles me is why it's not able to match synced tangent space normal maps.
Surely a lot of the underlying maths is pretty similar - take a map that alters the normals in some way, rotate everything to be in the same space (either the normal or the camera/light vectors), calculate the lighting, output the results. And this method really only relies on the normals being identical between baker/renderer (which is easily done these days).
What is it about tangents that allows them to give superior results?
> I don't think it's going to come down to ease of workflow or quality of results but simply savings. That's what's going to force this in the end.
Not offensive, but that would be completely horrifying... I sorta like the Tim Sweeney future where raw computing power makes all hack cleverness obsolete and seem like a bad dream. ( coming from his dorky god lips it just sounds so obvious )
> if anyone gets an error message, when running the demo, on some compiler dll missing you just need to install the most recent version of DirectX from June 2010.
I thought the demo looked great in comparison. Thank you.
Again, is there any explanation ( easier for my thick skull to understand, in lame layman illumination? ) for the query I made above?
I ask because I have been trying to get a friend to release some awesome modeling tools that he has been hesitant to put out into the world, because of what sorta sounds like the same topological concerns...
Just wondering if his fears were in fact valid.
Thanks for any light you can shed on what you meant?
> Tim Sweeney future where raw computing power makes all hack cleverness
Mind you that raw computing power is not the same as size of memory. In fact, what's going on is that the cut-off point is gradually changing in favor of burning more ALU as opposed to memory consumption and data transfers. Anyway, back to the subject.
So regarding your questions the deal is that many (though not all) tangent space implementations have order-dependencies in them meaning they end up with different spaces depending on which order the faces are processed in. So the point is with such an implementation you'd get different results if you were to build tangent space before or after vertex cache optimization because this changes ordering of faces.
Another bad example is mirroring. If your tangent space implementation has order-dependencies then the tangent space (even on a perfectly mirrored mesh) on the mirrored side can show up as different, as opposed to being a perfectly mirrored tspace as it should be. Other interesting cases of order-dependencies are quads. If you just always split them as 0,1,2 and 0,2,3 this creates an order-dependency where you get different results if the data were reordered, and thus tangent space generation is affected.
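Here's one way to remove that particular order-dependency - an illustrative sketch of my own, not code from the thread: always cut the quad along the diagonal through its smallest vertex index. That choice is invariant under any rotation or reversal of the quad's index order, so reordering no longer changes the triangulation feeding the tangent space generator.

void SplitQuadStable(uint4 q, out uint3 triA, out uint3 triB)
{
    // find the corner holding the smallest vertex index
    uint r = 0;
    if (q.y < q[r]) r = 1;
    if (q.z < q[r]) r = 2;
    if (q.w < q[r]) r = 3;

    // rotate so that corner comes first; the split diagonal (s.x, s.z)
    // then passes through the same two vertices however the quad arrived
    uint4 s = uint4(q[(r + 0) & 3], q[(r + 1) & 3],
                    q[(r + 2) & 3], q[(r + 3) & 3]);
    triA = uint3(s.x, s.y, s.z);
    triB = uint3(s.x, s.z, s.w);
}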
The other issue regarding index lists is that most implementations just assume that whatever index list comes with the mesh is the right one, but often when meshes change hands going from application to application the mesh representation tends to change. Some use multiple index lists, some weld to remove duplicates, some decide to weld and switch to one index list entirely, and in some cases the data is switched to unindexed entirely. These differences will, with most implementations, trigger different results in tangent space generation. With most implementations, unindexed data won't work at all.
In other cases some might remove degenerate primitives and others might not, and again most implementations would generate different results depending on this. So the point is that using the same implementation to generate the spaces doesn't even guarantee that you will get the same tangent spaces across applications. The specific implementation has to be designed to deal with all these caveats.
Vertex level tangent space generation is extremely fragile in regards to maintaining a consistent result.
So for this reason normal maps don't travel very well across applications as many artists have noticed by now. You basically need the magical per vertex key (tangent space) that was used to bake the map originally in order to play the result back correctly.
Just a quick heads-up. I've updated the demo with a third shader which shows how to do proper mixing of derivative maps while still using auto bump scale to achieve scale invariance like you have for normal maps.
http://mmikkelsen3d.blogspot.com/2012/01/how-to-do-more-generic-derivative-map.html
It is a more typical scenario compared to triplanar bump because here all the bump maps are mapped to the mesh using just one set of texture coordinates, but with a different scale and offset per texture.
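For what it's worth, here's how I read the mixing part in HLSL form - an assumed sketch based on the description above, not the demo's actual shader code, with placeholder per-texture scale/offset/bump-scale parameters. Since mixed heights simply add, their derivatives add too, and each map's UV scale enters through the chain rule:

float2 MixedDerivative(float2 texST, SamplerState samp,
                       Texture2D derivTexA, float2 dimA, float2 scaleA, float2 offsetA, float bumpScaleA,
                       Texture2D derivTexB, float2 dimB, float2 scaleB, float2 offsetB, float bumpScaleB)
{
    // one shared UV set, but each map applies its own scale and offset
    float2 uvA = texST * scaleA + offsetA;
    float2 uvB = texST * scaleB + offsetB;

    // chain rule: d(uvA)/d(texST) = scaleA, so each map's slopes pick up
    // their own UV scale; heights add under mixing, so the slopes add too
    return scaleA * (dimA * derivTexA.Sample(samp, uvA).xy * bumpScaleA)
         + scaleB * (dimB * derivTexB.Sample(samp, uvB).xy * bumpScaleB);
}

The result drops straight into the dBdUV slot of the perturbation sketch earlier in the thread.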
I quickly put together a shader that uses derivative maps in UDK, but for some reason it seems that the intensity of the map increases as I get closer to it, and weakens when I'm further away. Any idea what's going on?
> I quickly put together a shader that uses derivative maps in UDK, but for some reason it seems that the intensity of the map increases as I get closer to it, and weakens when I'm further away. Any idea what's going on?
My guess is that the engine is filtering the texture mip maps so that you get the effect you described.
> I quickly put together a shader that uses derivative maps in UDK, but for some reason it seems that the intensity of the map increases as I get closer to it, and weakens when I'm further away. Any idea what's going on?
You are doing something wrong for sure. Take a look at the shaders in the demo and see how it's done there.
> My guess is that the engine is filtering the texture mip maps
Nope. It's straightforward. Just follow the shaders in the demo.
Just wanted to let everyone know that I have updated the Photoshop actions I posted a few pages back, which brings the current version up to 1.2.
The update adds a new Normal to Derivative (Accurate) option and the old method has been renamed to Normal to Derivative (Fast).
What has changed?
Normal to Derivative (Fast) vs Normal to Derivative (Accurate):
Normal to Derivative (Fast) - A much faster, but less accurate, conversion that trades accuracy in the slopes for speed. (Takes less than 1 second on an i7 950 @ 3.07GHz)
Normal to Derivative (Accurate) - A much more accurate, but slower, conversion that calculates the true slopes of the derivative map via a signed division of the normal's X and Y by Z. (Takes 5 seconds or so on an i7 950 @ 3.07GHz)
You can get the actions HERE
Enjoy!
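Going by that description, the "Accurate" math per texel would look something like the snippet below - again a sketch of the idea, not the actual action, and the sign handling assumes an X+Y+Z+ bake as metalliandy noted earlier:

float2 NormalToDerivAccurate(float3 texel)
{
    // unpack from the [0,1] texture range to a [-1,1] vector
    float3 n = texel * 2.0 - 1.0;

    // a height-field normal is proportional to (-dh/du, -dh/dv, 1), so the
    // true slopes come from a signed division of X and Y by Z
    n.z = max(n.z, 1.0 / 128.0); // guard against division by (near) zero
    return -n.xy / n.z;
}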
It now supports derivative maps and parallax mapping in its viewer without the use of geometric tangent spaces, as we have been discussing in this thread. So if anyone, artist or programmer, would like a quick way to evaluate the technique(s) with your own 3D assets this might be a good way to do it.
Has anyone made a derivative map shader in UDK? I'd like to try this out. From what I've seen it doesn't look any worse than tangent space, and it will allow you to bake tangent space maps with higher resolution bake meshes/add in details to the map via mixing in Photoshop.

http://web.archive.org/web/20131003040342/http://eat3d.com/files/eat3d_derivative_map_actions_set_1.2.zip

I put metalliandy's link into the Wayback Machine, then took his attachment link and put that into the Wayback Machine. Bingo.