
Derivative Normal Maps - What are they?

Ace-Angel
So I was reading a white paper, specifically this one from R&C, and came across something called D-Normal Maps.

http://www.insomniacgames.com/tech/articles/1108/files/Ratchet_and_Clank_WWS_Debrief_Feb_08.pdf

Article I found written about it: http://mmikkelsen3d.blogspot.com/2011/07/derivative-maps.html

What also caught my eye is that now xNormal supports it too, thanks to the author above (Mikkelsen).

From what I could gather, it's the same as a normal map but lacking the Z component in terms of pixel instructions, making it lightweight (so in the long run, with detail maps, you get 3-4 instructions instead of 9-10).

I was wondering how feasible this technique would be in engines like UDK, CE3, Unity, etc., whether they're open to people without the need for a full shader rewrite at the base level (i.e. buying source code), whether there's any more information I could gather about it, such as the requirements from an artist (smoothing groups, syncs, etc.), and also how it would be represented graphically vs. a standard normal map.

Replies

  • Computron
    if the 2 channels were normalized before import, the shader can do very simple math (which you can implement with nodes) to reconstruct the blue channel. very simple in udk, there may even be an easier way with one of the new material functions.


    EDIT: Oops, NVM, I thought you were talking about how they save memory by removing the blue channel and then reconstructing it in the shader.
  • Vailias
    The derivative, in this case, is a slope value.

    If I understand it right, the map encodes the slope of the underlying surface along the U and V directions in the R and G channels respectively. Likely packed similarly to a standard normal map.

    No idea if you can use this in UDK, even as a custom shader setup.
  • Computron
    This thread is already #5 in the google search results for Derivative Normal Maps. Derailed, sorry.
  • fade1
    i haven't had time to read the paper, but for our current project we use normal maps based on just the r and g channels. the b channel is free and is calculated on the fly by the gpu (two components are enough to reconstruct a unit direction). in the blue channel i store ao information, so i save texture space and texture swaps.

    and for the artist it doesn't change anything. just whiten the blue channel or leave it as it is. the result in the viewport/rendering is the same as "regular" normal maps.
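    in shader terms it's something like this (a rough sketch, names made up; our actual packing may differ):

        // reconstruct Z from the 2-channel normal, read AO from the freed-up blue channel
        float4 texel = tex2D(normalAOSampler, uv);
        float2 nxy = texel.rg * 2.0 - 1.0;                // unpack [0,1] -> [-1,1]
        float  nz  = sqrt(saturate(1.0 - dot(nxy, nxy))); // z = sqrt(1 - x^2 - y^2) for a unit normal
        float3 tangentNormal = float3(nxy, nz);
        float  ao = texel.b;                              // blue channel repurposed as AO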
  • chronic
    This is what I've gotten from reading the paper and links:

    This concerns Detail Maps, not a new technique for normal mapping as we normally use it in a high to low res bake.

    The original paper by Morten S. Mikkelsen presents a way to apply a single-channel height/bump detail map to a mesh without needing a precomputed tangent space, by calculating screen-space derivatives.

    He extends the idea on his web site (check OP's link) by precomputing the derivatives into 2 channels. (So instead of the single-channel height map, you now have 2 channels for the x and y derivatives, but fewer shader instructions.)

    The advantage of using either of these two methods for detail maps is that they look better under magnification than traditional tiled detail normal maps. It's a great idea, and you should be able to implement it easily.
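    As far as I can tell, the height-map variant boils down to something like this in the pixel shader (my rough reading, not verbatim from the paper):

        // sample the height at the pixel and at one-pixel screen-space offsets,
        // then forward-difference to get screen-space height derivatives
        float2 texDx = ddx(uv);
        float2 texDy = ddy(uv);
        float  Hc  = tex2D(heightSampler, uv).x;
        float  dBs = tex2D(heightSampler, uv + texDx).x - Hc; // height change along screen x
        float  dBt = tex2D(heightSampler, uv + texDy).x - Hc; // height change along screen y
        // the precomputed 2-channel variant stores (dH/du, dH/dv) and instead does:
        //   dBs = dot(dHduv, texDx);  dBt = dot(dHduv, texDy);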
  • Ace-Angel
    Computron: I guess that's what it is? I have no idea, since the closest I've come to it is appending RG from textures and multiplying by 0,0,1 (which is very situational) or using DeriveZ (which costs a little more). Either way, in terms of instructions I still see the texture weighing in at 47 vs. the Append/Mask + Z at 51, which doesn't look like much of a saving to me.

    Vailias: That's what I figured, but I guess my lack of any in-depth understanding of this stuff leaves my mind boggling with questions as to how this is possible with the RG channels by themselves. Does the Z channel truly not have any purpose other than to look pretty?

    fade1: That's pretty interesting, so the calculation is being done on the fly by the GPU, you say? Hmmm.

    chronic: Oh... it's mainly for details then (sorry if that's wrong, that's what I understood; yes, I'm a bit slow on shaders and stuff)? That's disappointing, I use a Cross-based system to create my XYZ information from greyscale images directly, so a person can have ARGB channels in their texture being rendered out as a normal map.

    Oh well, thanks for the feedback peeps, much appreciated.
  • jogshy
    http://mmikkelsen3d.blogspot.com/2011/11/derivative-maps-in-xnormal.html

    Immediate benefits:
    1. No more tangent-space seams, yay!
    2. Less VRAM used (because you won't need to precompute tangent space... plus they use only 2 color channels instead of 3).
    3. More accuracy under magnification (that is, when the camera gets close).
    4. Works with everything: tessellated surfaces, animation, UV mirroring, etc.

    I plan to write a blog entry too about them once I implement them in the 3D viewer for the next version... but they're truly promising.
  • Ace-Angel
    Thanks Jogshy, it would be indeed awesome to see them in action. Much appreciated for the drop-in.
  • equil
    partial derivative maps (the ratchet and clank thing) don't really have any gains over normal mapping that i can see. I implemented them in unity without much issue and the results are more or less the same. In fact, reconstructing the pDeriv unit vector took me more instructions than reconstructing a usual 2-component normal map. Comparison between them stored with x in red and y in green, and then x in alpha and y in green (dxtnm):
    http://makeartbutton.com/gfx/nm_pderiv_channels.png
    note how blotchy things look when not using dxtnm. http://makeartbutton.com/gfx/pderiv_nm.png this is what my pderiv map looked like. I didn't bother looking into why some parts get blown out.

    mikkelsen's derivative maps are not the same thing. What his technique does is (i think) perturb the normal instead of replacing it, so you can technically extend it to multiple overlaid bumps. Not having to deal with tangent issues sounds really nice, but all the comparisons I've seen have shown normal maps yielding better results than derivative maps. And I'm kind of worried we'd lose anisotropy if we dropped tangent space. But maybe that's not true?

    edit: in simpler terms one is a derivative normal map, while the other is a derivative height map.

    Thanks for that link jogshy.
  • Vailias
    Ok some math explanation here.
    Basic calculus: there are two branches, developed independently by Newton and Leibniz, namely derivative (differential) calculus and integral calculus.
    Derivative calculus is essentially being able to find the speed of a car at a given point in time from the distance it travels over some period of time.
    Integral calculus is finding the distance a car travels over some period of time from knowing its speed at each point in time.
    I'm oversimplifying, but that's the core.

    Graphing a car's distance traveled on one axis, and time on the other axis, you get a big curve in 2d space. The average slope of this curve between any two points on this curve is the average speed for the car during that time segment. The slope AT a SINGLE POINT is the instantaneous speed of that car, which is the derivative.

    So let's say you have a bumpy surface (like a high-density model). If you gather the height offset from some baseline (like a low-density model) into a dataset (like a height map texture), then you have what amounts to a 2d graph of the height differences between the baseline and the original surface.
    If you think of each row and column of pixels as a curve in space, you can imagine them as cross sections of the original surface.
    Then you can encode the instantaneous change in height, the slope, of each point (every pixel) in this height map into two new maps: one for rows (we'll call it red) and one for columns (we'll call it chartreuse... or green, since chartreuse is a bit long to say). Then you have a field of slopes, or angular differences, rather than normal directions as with a projected normal map.
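    In pseudo-shader form, baking those two slope maps from a height map could look like this (hypothetical names, simple central differences):

        // invRes = 1.0 / texture resolution (hypothetical constant for a 1024 map)
        float2 invRes = float2(1.0 / 1024.0, 1.0 / 1024.0);
        float hL = tex2D(heightSampler, uv - float2(invRes.x, 0)).x;
        float hR = tex2D(heightSampler, uv + float2(invRes.x, 0)).x;
        float hD = tex2D(heightSampler, uv - float2(0, invRes.y)).x;
        float hU = tex2D(heightSampler, uv + float2(0, invRes.y)).x;
        // slope along rows goes to red, slope along columns to green
        float2 slope  = 0.5 * float2(hR - hL, hU - hD);
        // scale/bias into [0,1] for storage in an 8-bit texture
        float2 packed = slope * 0.5 + 0.5;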

    From what I can tell from Mikkelsen's little article there, the gem of it is that the shading calculations are being done in screen space rather than tangent space, using the slope encoded in the map. This is why you don't get tangent seams: it's down to the space the calculations are done in. So long as your model doesn't have existing visible splits at UV seams, this technique won't show them either.

    Also, I don't think "standard" normal mapping replaces the normal, unless it's object-space mapped. I remember that was a mistake I made when I first wrote up a normal mapping shader. Tangent space normal maps should be treated as vector offsets to the interpolated surface normal. (I'm pretty sure, anyway.)

    Partial Derivative Maps: the main benefits seem to be summed up in the Insomniac document: simpler reconstruction, and ease of blending in detail normals using the same encoding. It's a similar idea, but rather than getting the slope of the high-poly model at a pixel it's still producing a normal offset; the offset is just calculated from the slopes of the height in the x and y directions instead of from a raw stored vector.
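    A sketch of that reconstruction (assuming R/G store the height slopes dh/dx and dh/dy packed into [0,1]; exact sign and packing conventions vary per engine):

        float2 dh = tex2D(derivSampler, uv).rg * 2.0 - 1.0;
        float3 n  = normalize(float3(-dh.x, -dh.y, 1.0));
        // blending a detail map is just summing slopes before the single normalize:
        float2 dhDetail = tex2D(detailSampler, uv * detailTiling).rg * 2.0 - 1.0;
        float3 nBlended = normalize(float3(-(dh + dhDetail), 1.0));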

    @Ace: The blue channel is quite often just there to look pretty, or essentially wasted space, even in a standard normal map. You can reconstruct any data it would happen to hold mathematically in the shader. I think part of the reason it's still common to have it included is legacy hardware and practices. This is a guess, but I believe on shader model 1 and 2 hardware the vector reconstruction took up enough instructions to be prohibitive for shader performance, so a direct texture lookup was preferable.
  • Gestalt
    I'm still not sure I really understand where the advantage is, I'm not very technical with these things. It seems one is normalized and the other one isn't, but shouldn't a normal map without a blue channel basically be saying the same thing? Would you have to manipulate the map into normal info anyway for it to work? Also couldn't you still do a direct texture lookup without a blue channel on a normal map provided the map was made correctly?
  • Bigjohn
    There's too much text and not enough pictures for me to be able to wrap my mind around this.

    Sounds cool though. Better normal maps? That's what I got out of this whole thing.
  • Eric Chadwick
    Thanks for the text guys, I found it informative.
  • Eric Chadwick
    Reading through Mikkelsen's site, he has some interesting visual examples. The difference is really apparent when you get close to the surface...

    Normal mapped:
    [image: glsl_normalmapped.png]

    Derivative mapped:
    [image: glsl_deriv.png]
  • metalliandy
    Derivative map mixing is also much easier, because there is no need to overlay maps and adjust the blue channel + normalise, so you can use whichever blend mode you like and it will always be correct.
    You can also increase the bump scale (strength) after baking without normalisation or any adverse effects :)
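    In code form the idea is just linear math (a hedged sketch, sampler/parameter names invented):

        float2 dBase   = tex2D(baseDerivSampler, uv).rg * 2.0 - 1.0;
        float2 dDetail = tex2D(detailDerivSampler, uv * tiling).rg * 2.0 - 1.0;
        float2 dMixed  = dBase + 0.5 * dDetail; // any weighted sum is still a valid slope
        float2 dFinal  = bumpScale * dMixed;    // strength tweak, no renormalisation needed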
  • Michael Knubben
    Andy: Really? How does that work, then? I don't understand how it can counteract lowpoly shading but still allow you to fuck around with the contrast etc. Since you seem to know more about this (I gather you know this Morten personally), could you explain it 'as if' I were an idiot? (I am)
  • cryrid
    So is this still detail-map-only, or any normal map?
  • Gestalt
    Would anyone know how these could be used in UDK? I'd like to try them out but I'm not sure if there's an option or material setup.
  • metalliandy
    @MightPea,

    It's a really complicated thing to explain and I'm not even sure I understand it myself 100% yet (I only heard of them 3 or 4 weeks ago) :P
    Basically, the way that Morten uses derivative maps is a little different to standard derivative maps; from what I understand, he doesn't store the tangent spaces in the verts at runtime, and it's all sorted by shader magic and lots of complicated mathematics that I really don't understand.

    The thing with the mixing is that it's more like mixing 2 heightmaps than mixing 2 normal maps, so there is no need for normalisation, as derivative maps don't require it. As such it's much more versatile and easy to mix things together.

    I'm going to set up a scene that compares the differences when rendering in real time, which I will post here; hopefully that will make it a little easier to understand.

    Hopefully Morten will chime in when he has some spare time and will answer everything better than I can :)
    cryrid wrote: »
    So is this still detail-map-only, or any normal map?
    It's designed as a replacement for tangent space normal maps. :)
    You can add multiple detail maps to a derivative map (overlay, soft light, hard light etc.) without worrying about it being mathematically incorrect, because it's not a hack like the standard "Overlay" detail map method we all use for normal maps.
    Gestalt wrote: »
    Would anyone know how these could be used in UDK? I'd like to try them out but I'm not sure if there's an option or material setup.
    I don't think they can be used in UDK at the moment.
    The only places that support them, AFAIK, are xNormal and Blender, though I'm sure they will become more popular soon.
    They are still a very new map type, so the industry needs to catch up in regards to support, but I'm sure it won't take forever :)


    In the meantime, I made a set of Photoshop actions (with some guidance from Morten :)) that convert an RGB heightmap into a derivative map, along with some other features, and thought people might like to give them a go.

    Currently the actions can do the following:


    From a Heightmap

    • Height to Derivative (Extra Fine)
    • Height to Derivative (Fine)
    • Height to Derivative (Medium)
    • Height to Derivative (Extra Large)
    • Height to Derivative (XX Large)


    [image: height_to_derivative.jpg]


    From a Derivative Map
    • Derivative to Normal (no xN) - Not Normalised
    • Derivative to Normal (xN Installed) - Normalised via the xNormal Photoshop filters


    [image: derivative_to_normal.jpg]


    From a Normal Map
    • Normal to Derivative
    [image: derivative_actions_menu.jpg]

    You can get the actions HERE
  • equil
    this thread definitely needs more pictures.
    [image: vtd.png]
    i tried implementing it real quick using this test model by Earthquake. The derivative map was generated in xnormal and looks like this:
    [image: testmesh_derivNormals.PNG]
    seems like they work ~ok~. there seems to be a small accuracy gain over tangent space normal maps, since derivative vectors don't need to be unit length to generate a unit length normal. but...

    i'm getting mesh boundary discontinuities. this means that there's a seam on every triangle edge (note the faceted cylinders in particular). this is almost definitely an implementation issue on my end, but i haven't figured out what's wrong yet. hard to say whether tangent space seams go away when it looks like this though.

    there is no inherent texture memory gain. using dxtnm, deriv maps and normal maps are both 2-channel data stored in a 4-channel texture. the derivmaps seem to work a bit better in dxt1 than tangent space, but the quality still takes a minor hit. compared to tangent space normal maps the quality seems noticeably less noisy though.
    [image: pd_nm.png]

    there is no interpolator gain (or uh, loss). you don't need to pass tangents but instead you rely on position. you do lose one vertex attribute, which makes your mesh data a bit smaller (4 bytes per vertex, i think?). actually, if you transform view and light vectors in the vertex shader i guess tangent space calculations would use less interpolators since you wouldn't need to pass the tangent to the pixel shader.

    it requires more shader instructions. i don't have an exact count, but just reading the blog and the paper it should be obvious that there's quite a bit of code involved.

    so what's the deal? from what i understand this technique just isn't made with dx9 in mind. there are some pretty interesting gains in the context of tessellation and domain shader optimization, but i don't have the hardware to test that out. my view is that it might be more interesting for the next generation. really hoping some of the 8monkey guys or EQ chimes in, they know their normal stuff a lot better than I do.

    edit: cool stuff andy! going to play with your actions and see if i can get some more comparisons
  • mmikkelsen
    Hey Equil,

    I just wanted to try and shed some light on some of the questions people have regarding the technique and the issues you've raised.

    First of all you are correct that this will not work well on examples such as these where the base lighting on the object doesn't look good. In other words the normals on the lo-res have to indicate, roughly, the actual tangent plane of the surface.

    With conventional normal mapping this does indeed work, however, only if you are pedantic about making sure that the tangent spaces that were used for baking are also the same that are used for rendering. I have found that in practice most game studios are not.

    In practice it is a lot easier to break this dependency than most seem to acknowledge. First of all, pretty much every existing baker uses its own proprietary implementation of tangent space generation. Furthermore, many implementations are dependent on the order in which faces are given, which means they fail to produce correctly mirrored tangent spaces when mirroring an object (many don't work with mirroring at all). It also tends to fail during export when objects change hands. As an example, one studio I know of did vertex cache optimization before tangent space calculation in their engine, which triggered reordering, which gave problems. Another saw a difference in behavior because the baker was not trimming degenerates before evaluation and their own engine was. In other cases it comes down to things as simple as someone performing welding steps or just changing the overall index list layout. Another issue is quads, since results also change depending on which diagonal split is chosen. And all of this is assuming you're even able to get your hands on the source code used by the baker to generate the tangent spaces.

    Another option is relying on exporters to pump out tangent spaces, but I have in many cases seen this turn out badly for people too, where they just assume the mesh format contains the spaces and then find out after years that they had been running into problems because the spaces were never in the formats they were using to export with.

    If you do end up taking the conventional normal mapping route then I strongly suggest that you use the tangent space generation that is used in xNormal since it does overcome a lot of these problems.
    For more information you can read about it here --> http://wiki.blender.org/index.php/Dev:Shading/Tangent_Space_Normal_Maps

    So anyway, why use this new method that does not rely on vertex level tangent spaces?
    Well a super good reason is that it does solve the issue of compliance.
    It's just a short pixel shader so if an artist authors the asset such that it looks good in one engine/tool/viewer then it'll look good anywhere. In my opinion this is a big win.

    I also wanted to say you are correct that this method is intended for recent/upcoming gpus.

    There are other reasons why this is a win. You save the memory you would have spent on tangent spaces which is of course nice. But it also gives you a greater degree of freedom because you can apply more complex forms of deformation and surface synthesis without having to recreate vertex level tangent spaces w.r.t. texture domain.

    One obvious example is HW tessellation. When doing Gregory/ACC patches you are bringing in anywhere from 16-32 vertices per patch. So we prefer to have a slim vertex structure here. Essentially, all you need is the vertex positions and 4 texture coordinates. Normals are synthesized in the domain shader and then we have all we need to apply this new bump mapping technique in the pixel shader (pos, norm, uv) and get really nice results.

    Another reason is mixing. By bringing height derivatives into screen space we can accurately mix height derivatives brought in from any number of different texture spaces. People mix with normal maps, but that always gives issues, since it is incorrect to do so, and for the other reasons I mentioned.
    With this method you get perfect results, and it's flexible enough that you can even use different methods/maps too: height maps (using listing 2 in the paper), derivative maps as explained in the post on my blog, or even a procedural function used to generate the derivative. All of these can be brought into screen-space cheaply (relative to next gen), then mixed correctly there, and finally we perturb the normal only once. This is among other reasons why this method can do triplanar bump so well.
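    In outline the mixing looks something like this (helper names invented for illustration; each helper returns screen-space height derivatives):

        float2 dB = float2(0.0, 0.0);
        dB += DerivMapScreenDerivs(derivSamplerA, uvA);   // derivative map, its own UV set
        dB += HeightMapScreenDerivs(heightSamplerB, uvB); // height map, via listing 2
        dB += proceduralScreenDerivs;                     // any procedural source
        // perturb the interpolated normal exactly once at the end
        float3 vN = PerturbNormal(surfPos, surfNorm, dB.x, dB.y);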

    It's a very efficient way to do this, since synthesizing the surface tangents relative to the screen domain is significantly faster than trying to synthesize the tangents w.r.t. texture space. And it also allows us to mix the height derivatives in one consistent domain that we always have (screen-space).

    Yet another advantage is that unlike texture space this will always place the distinct splits in derivatives/tangents at the silhouette where they can't be seen.
    As an added bonus to this observation this is also where the lowest resolution mip maps are sampled which means height derivatives tend toward zero there.
    So when you think about it, it really is the perfect place to put them.

    Off the top of my head that's all I have to say but I'll post again if there's more I think of or if there are any questions for me.

    Thanks!

    Morten.
  • metalliandy
    Thanks for posting Morten :)
    Very good explanation!
  • Ace-Angel
    Indeed, pity it's for upcoming GPUs. Either way, awesomesauce.
  • metalliandy
    @Ace-Angel,

    By recent/upcoming GPUs, I'm assuming Morten means any DX11 card. I can use derivative maps perfectly with my GTX 460 :)
  • Computron
    Dang, no DX10?

    So what I am trying to figure out is: how does this affect the artist? Any workflow changes? I read that you can composite normal maps with overlay in Photoshop much better with these derivative normal maps; is this done differently now?

    In layman's terms, what are the other visual benefits, just more accurate normals from a matched baking/rendering algo and better detail maps?
  • metalliandy
    DX9 and DX10 will both work with these maps, though they might be a little slower on dx9. I guess that as they are intended as future tech for the next gen, legacy cards have not been the focus in development and are not as well optimised for the complex shaders required.

    There are no workflow changes really and aside from making the LP look a little better, you can still work exactly the same as you do now :)

    You can composite derivative maps much better than normal maps, as the normal map overlay/soft light etc. is a hack that is not mathematically correct. With derivative maps no such problem exists and you can blend maps together without such worries. You don't need to mess around with removing the blue channel etc. Mixing derivative maps is more like mixing height maps together than mixing normal maps.
  • jeffdr
    Ace-Angel wrote: »
    From what I could gather, it's the same as a normal map but lacking the Z component in terms of pixel instructions, making it lightweight (so in the long run, with detail maps, you get 3-4 instructions instead of 9-10).

    Similar idea, but it's not the same thing as a normal with the z component taken out. It stores the slope in texture space, rather than tangent space. The difference is subtle there, but a derivative map has no dependence on the way in which it is mapped onto a mesh. This is its main benefit.

    As some have mentioned, yes it does apply well to detail maps. That is, it's easy to (correctly) layer maps of this kind in the code. It is not *only* for that though, it is potentially a full blown normal map replacement as well. Works great with tiling, mirroring, etc.

    Also very much in agreement here with Morten's comments on tangent space never matching up. That shit is a mess and a constant hassle, for artists and programmers, and I would happily welcome any reasonable technique that lets us do away with it. This looks like one of those.

    For those familiar with shader writing, I bet you could drop not only the tangent from the vertex attribute and interpolation, but the vertex normal as well. The whole tangent space parameterization (of which the vertex normal is a part) can disappear. Double savings.
    Ace-Angel wrote: »
    I was wondering how feasible this technique would be in engines like UDK, CE3, Unity, etc.

    Probably not very hard to implement. Speaking for Marmoset/Toolbag, I think it would be a few day's work + some testing. The hard part is hardware support. As many have said, D3D11 class hardware is required for these precise derivative shader instructions. So even once you've written it, the base of users who can use it is small (and excludes all console and mobile platforms that exist today).

    Until today, I didn't even realize there were people who wanted to try it out though. Perhaps Toolbag needs to take a swing at this...
  • jeffdr
    Vailias wrote: »
    Also, I don't think "standard" normal mapping replaces the normal, unless it's object-space mapped. I remember that was a mistake I made when I first wrote up a normal mapping shader. Tangent space normal maps should be treated as vector offsets to the interpolated surface normal. (I'm pretty sure, anyway.)

    Almost. A normal in a tangent space normal map defines a vector in tangent space, which is not the same thing as an offset from the vertex normal.

    A non mathematical way to think about it is:

    Red (X) denotes how much of the vertex tangent to use
    Green (Y) denotes how much of the vertex binormal to use
    Blue (Z) denotes how much of the vertex normal to use

    These three then get summed up, and that's your normal that gets used for rendering.
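    In shader code that sum is simply (a minimal sketch, assuming T, B and N are the interpolated vertex tangent, binormal and normal):

        float3 tn = tex2D(normalSampler, uv).xyz * 2.0 - 1.0;  // unpack tangent-space normal
        float3 n  = normalize(tn.x * T + tn.y * B + tn.z * N); // weighted sum of the basis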
  • Computron
    So, the yellowish/reddish-green maps are derivative normal maps?

    Don't most normal map compression schemes rely on the 128,128,255 color for good compression?

    Hooray! Even less humanly readable normal maps!

    So will software like crazybump need big updates to support this?
  • Stromberg90
    Computron wrote: »
    Hooray! Even less humanly readable normal maps!

    How come?
    I don't see how we will have any more of a "problem" understanding these than tangent space normal maps?
    Not saying that you are wrong, I just want to hear why :)
  • Ace-Angel
    You shouldn't be editing your normal map in any form anyhow, other than a small amount of soft smudging to massage easy fixups into place; generally, a good cage will get your work done. There's no reason to try to 'read' your normal map, since the shader is doing all the reading and working, and any edits should come from your models for correct bakes.

    Plus, even the smallest and most 'basic' edits to normal maps can spell doom. Try editing a normal map, even on an organic model (which is very forgiving), right around a UV seam and/or on a harsh bend: your normal map will literally look like a black hole at a 45° inclination from the camera, negating all light in that pocket region.

    Also, if you're worried about Crazybump, just ask the author for an update, and if you used nDo or want to convert already-created Crazybump normal maps, just use the actions metalliandy so kindly posted on the first page, which convert a normal map into a D-map.

    I for one will welcome even the most unreadable normal maps, as long as I can finally bake a normal map which doesn't break my model at the most finicky of things, and doesn't require me to create 4 different exports with tangents and bakes between a static mesh and a skeletal mesh just to see what matches up with what.
  • Stromberg90
    Ace-Angel: Sure thing, I didn't mean to imply that people should edit their normal maps, I was just wondering why.
    And there are some who like editing their normal maps, so it might matter more to them.

    I agree with you, I would also like a normal map that just works in every engine.

    I know that it wasn't a direct reply to me ;)
  • metalliandy
    @jeffdr,
    I would love Marmoset to support this! :D
    Any chance of using Morten's Tangent Space too?
  • mmikkelsen
    >Hooray! Even less humanly readable normal maps!

    Well if you want to be able to edit your maps post baking then one way to achieve this, that I like the idea of (though I am not an artist), is working with 16/32 bit single channel height maps which can be baked out in ZBrush and Blender multires bake. If you need to bake using independent lo-res and hi-res then you can of course do this in xNormal but in this case you'll have to apply a subdiv on your lo-res before giving it as a lo-res to xnormal. This gives you a similar smooth height map. I generally make the density level of faces the same as on the hi-res which gives the best results.

    Once you are done you can convert these to derivative maps using Andy's actions --> http://eat3d.com/forum/tips-tricks-and-free-videos/derivative-map-photoshop-actions
    and drop precision at this point to BC5 for DX10+ or DXT5 on older cards (if it's for a game).
    How to author the assets ("derivative maps") and why the technique is good to use are, in several ways, separate issues. In principle an artist never has to look at the derivative map. A game studio could work with floating-point height maps all the way through authoring if they wanted to and still be using the same technique.

    Blender also allows you to do bump painting and actually uses the same technique to perturb the normal and will show you the lighting interactively.

    http://vimeo.com/21186170
    http://cgcookie.com/blender/2011/10/18/using-the-texture-paint-layer-add-on/

    Results might be somewhat more coarse/grainy though, because it's working straight off the height map instead of producing the derivative map, and the height map is currently given to the gpu in 8 bits (even when working on a float image). But visually it's more than enough to give you the idea of what it's going to look like.
  • metalliandy
    I have updated the actions to 1.1 and fixed a bug where the blue channel was filled with the foreground colour rather than black.
    You can get the actions HERE

    Also, here is a shot of the map mixing in Blender. The object to the left is a normal map and the right is a Derivative map. Both have an overlay blend mode set to 50% opacity (with the blue channel removed on the detail normal map)
    [image: Derivative_vs_Normal_Mixing.jpg]
  • EarthQuake
    metalliandy wrote: »
    @jeffdr,
    I would love Marmoset to support this! :D
    Any chance of using Morten's Tangent Space too?

    I think supporting Max or Maya would be much more productive.
  • mmikkelsen
    >I think supporting Max or Maya would be much more productive.

    I understand how you might feel that way, but it doesn't really address the underlying issues. For instance, you wanna leave all the people who bake using xNormal and Unity hanging? And even if we did agree to use their implementation of tangent space generation, their version doesn't address most (if any) of the issues I brought up.

    Order-independence, mesh data layout/indexing, degenerates and so on.
    And this is even assuming the programmer at the corresponding game studio has the knowledge and the interest to get their implementation integrated correctly into their own code-base.

    As a case study I integrated mikktspace into Blender. The mesh structure used in Blender is something no one has seen since the mid-eighties. Every time I see it it makes me want to shake that ass and break out singing "All that she wants! is another Be-Be! Uh-Ye-eah!".
    Basically, there is one single index list but it only references positions and unconditionally averaged (at each position) normals.
    Texture coordinates are stored unindexed (reminds me of how we did things on the Playstation 1). Additionally, the actual normals used for rendering don't exist in memory. At the last minute when passing the mesh for rendering, if a face is tagged as FLAT, it will calculate and pass the face normal and if the face is set to smooth it will grab the unconditionally averaged normal. In other words there is no index list to the "real" normals and they are not even stored unindexed either.

    Now if anyone can show me a self-contained implementation of tangent space generation code (mikktspace is one .c and one .h) that can pull off guaranteeing a perfect match between such a mesh layout and that of xNormal or even just the typical indexed layout used for rendering then I am all ears. So far I have never heard of any other implementation that can do this than mikktspace.

    So far it seems the industry has completely failed in committing to one successful implementation standard and this is one advantage to having people quit using vertex level tangent space altogether. It forces everyone to commit to just a few lines of trivially reproducible pixel shader code anywhere.
  • EarthQuake
    mmikkelsen wrote: »
    >I think supporting Max or Maya would be much more productive.

    I understand how you might feel that way, but it doesn't really address the underlying issues. For instance, you wanna leave all the people who bake using xNormal and Unity hanging?

    Yes.

    Let's flip that around: you want to leave all the people who use Max or Maya hanging? These are real user bases we're talking about here, making up the vast majority of the games industry.

    Xnormal has a custom tangent loader, it can easily load a max/maya centric Marmoset Toolbag tangent space.

    Let's just look at it from a common-sense perspective; which is more likely to happen:

    A. Autodesk decides to conform to the "blender" standard.

    or

    B. Someone writes a plugin for blender to conform to one of Autodesk's standards.

    Maya is a great choice for toolbag, given that:
    A. Toolbag's pipeline is already centered around Maya
    B. Maya's renderer and internal viewport display are synced up and show perfect results.
    mmikkelsen wrote: »
    So far it seems the industry has completely failed in committing to one successful implementation standard and this is one advantage to having people quit using vertex level tangent space altogether. It forces everyone to commit to just a few lines of trivially reproducible pixel shader code anywhere.

    While I agree that it is an utter failure, expecting everyone to get together on the matter and sing kumbaya while simultaneously synching their code base is just completely unrealistic.

    Putting something together in blender is meaningless if you can't get into industry standard tools like Max and Maya.

    In addition to that, the need for absolutely everything to play nice is overstated, all that really needs to happen is for each studio or app to have a clearly defined standard. If I'm on X project using X app, I don't need to be reassured that bakes in A, B, C and D app are going to work, I just want a standard, locked down workflow I know I can count on.

    Now this is talking JUST tangent space here, any discussion about derivative normal maps is neither here nor there. Derivative normals presents a great opportunity to actually have some standardization, as nobody has implemented them yet.
  • jeffdr
    mmikkelsen wrote: »
    First of all you are correct that this will not work well on examples such as these where the base lighting on the object doesn't look good. In other words the normals on the lo-res have to indicate, roughly, the actual tangent plane of the surface.

    I'd like to hear some clarification on this. Is this because derivatives are unbounded values (that don't necessarily lie on [-1,1] or some convenient range like normals do)? How does one encode a derivative map exactly, without resorting to floating point textures or something?
  • mmikkelsen
    No, it's actually fairly simple. Imagine you have a quad that is essentially perfectly flat (all 4 positions in the same plane). Then imagine one vertex normal points far away from the face normal in one direction, and the other 3 lean moderately in the other direction. And let's say the ones leaning moderately in one direction are on the same triangle (after the quad is split).

    Then when taking ddx_fine() and ddy_fine() on the surface position, and crossing these with the interpolated surface normal, the results will be affected because the surface normal's transition across the quad is so broken. However, this will not really be an issue with tessellation, since there we synthesize the normal.

    Overall, the method works very well with regular meshes too, but you do need a well-behaved transition of the interpolated surface normal to get good results. Intended hard edges work fine as well, obviously.

    I definitely recommend you try it out so you'll see for yourself. I was very surprised myself by how well it works even for regular meshes.
  • Farfarer
    Morten: Any chance you could post the HLSL/CG code for building the screen-space transform matrix? Trying to build a shader to test these out but I'm struggling to get my head around what's needed to transform into screen-space rather than tangent space.
  • mmikkelsen
    Hey Talon,

    I can definitely help you out but before I hand you code did you look at listing 1 in the paper that is referenced from the blog post?

    When using the derivative map variant you simply replace the dBs and dBt in listing 1
    with the code from the blog post --> http://mmikkelsen3d.blogspot.com/2011/07/derivative-maps.html

    Don't forget to apply a float bump_scale to dBs and dBt though before using it as a replacement in listing 1. This was discussed in the more recent blog post --> http://mmikkelsen3d.blogspot.com/2011/11/derivative-maps-in-xnormal.html which also tells you how you can auto-generate the bump_scale if you don't want a user-defined one.
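    In rough outline the combined shader looks like this (a sketch, not the exact code; derivSampler and bump_scale are assumed globals, and the [0,1] packing is an assumption, so check the paper/blog for the authoritative version):

        float3 PerturbNormal(float3 surfPos, float3 surfNorm, float2 texST)
        {
            float3 vSigmaS = ddx(surfPos);
            float3 vSigmaT = ddy(surfPos);
            float3 vN  = surfNorm;
            float3 vR1 = cross(vSigmaT, vN);
            float3 vR2 = cross(vN, vSigmaS);
            float  fDet = dot(vSigmaS, vR1);

            // derivative map stores (dH/du, dH/dv); these replace dBs/dBt from listing 1
            float2 dHduv = tex2D(derivSampler, texST).xy * 2.0 - 1.0;
            float  dBs = bump_scale * dot(dHduv, ddx(texST));
            float  dBt = bump_scale * dot(dHduv, ddy(texST));

            // surface gradient, then the perturbed normal
            float3 vSurfGrad = sign(fDet) * (dBs * vR1 + dBt * vR2);
            return normalize(abs(fDet) * vN - vSurfGrad);
        }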

    If this doesn't answer all your questions then I'd be happy to provide more code or answer questions that you might have about the existing code.

    Cheers,

    Morten.
  • TSM
    I am part of the community over at Terathon Software, who produce the C4 engine, headed by Eric Lengyel. He had this to say on the technique:

    "I was the lead reviewer for Mikkelsen's paper in JGT. I implemented his technique in C4 and ran several comparisons of both visual quality and performance. My conclusions were that, even though the technique does work correctly, it was neither better looking nor was it faster. Therefore, we'll just stick with the ordinary tangent-space normal maps."

    In response, a member asked:

    I haven't read the paper closely, but if there's no need to compute tangents, then why isn't it faster for skinned models?

    Eric:
    The shader is much slower.
  • mmikkelsen
    Hey TSM,

    I believe Eric tested this on an 8800 card (which is d3d10), and though the technique will work even on d3d9, the method was intended for very recent/upcoming gpus.

    Someone on gamedev reported this method as being marginally faster on a fermi card over conventional normal mapping.

    Either way my response is the same now as it was then. Relative to the shader length of a next generation game the cost of perturbing the normal one way or the other just once (independent of the number of lights) is not going to make a significant difference once you throw shadows, multiple lights, post filters and so on into the mix. And this method is very practical in many ways as has been discussed already.

    Another example is tessellation. With this you're producing a very dense mesh, and using pregenerated tangent spaces will add significantly to the patch footprint (after vertex shading). Irregular patches are 32 vertices a pop. Furthermore, you'd prefer the domain shader output to be as small as possible too.
  • mLichy
    I'm pretty positive Mafia 2 also uses derivative normals.
  • mmikkelsen
    Hey mLichy,

    I think you're confusing the method we're discussing with the method some have already used, where they use a derivative map texture but use it almost the same way as normal maps, with per-vertex tangents used to achieve the perturbation. See Insomniac's presentation here -->
    http://www.insomniacgames.com/tech/articles/1108/files/Ratchet_and_Clank_WWS_Debrief_Feb_08.pdf

    Also, as pointed out by jeffdr in this thread, there are some other subtle differences regarding the interpretation of the term derivative map in this context (in regards to using a non-orthonormal basis etc.).

    Anyway, the technique we're discussing here is entirely different. The concept of derivatives and derivative maps is obviously well known. In fact derivative maps were suggested by Jim Blinn himself in his paper on Bump Mapping back in 1978.

    The technique that we are talking about does not rely on pregenerated tangent spaces.
  • mLichy
    Yeah, now that I think about it, I think they had the blue channel in the Alpha if I remember right.
  • Farfarer
    mmikkelsen wrote: »
    I can definitely help you out but before I hand you code did you look at listing 1 in the paper that is referenced from the blog post? [...]
    Cool, cheers. I'll check out those and see if I can get something going.
  • Farfarer
    Got them pretty much working in Unity :D

    Sadly Unity's currently d3d9, so no ddx/y_fine() or .GetDimensions(), but it works all the same.

    Looks a little wonky here but that's more because I did some rushed bakes than anything else.

    [image: derivativeMapShader.jpg]
  • mmikkelsen
    Excellent mate!

    Regarding ddx and ddy: I don't know if this covers all d3d10/d3d11 cards, but I've tried d3d10 hlsl in FX Composer 2.0 on one of each, and in both cases it appears ddx and ddy are the same as ddx_fine and ddy_fine. In d3d11 hlsl (which is not supported by FX Composer), ddx and ddy are the same as ddx_coarse and ddy_coarse (at least on the cards I have tried). So ironically, ddx and ddy are lower quality in d3d11 than they are in d3d10.

    Though I have never seen any official info on the definitions this is what I find when testing the differences between ddx_coarse and ddx_fine on the card in my laptop.

    It would appear that ddx_coarse returns the same value in blocks of 2x2 pixels, while ddx_fine and ddy_fine return the same values in blocks of 2x1 and 1x2 respectively.

    If anyone knows more do share :)

    So anyway, you probably shouldn't worry about using ddx and ddy as long as you're not using d3d11, in which case you must use ddx_fine and ddy_fine.