
Bump instead of normals, or.. wat?

http://jbit.net/~sparky/sfgrad_bump/mm_sfgrad_bump.pdf

So, let's see if someone a bit smarter than me can figure this out. I had a bit of a chat with Jeff (from 8ml) about this and have a vague understanding, but would love to know what other people think about it, whether you understand any of it, and what it really means from a content creation/display standpoint.

From what I gather:

Pros
1. No need to account for smoothing errors with the image content, which means:
a. Easier to edit content
b. No "resolution based" smoothing errors
c. No need to store tangents or worry about tangent space in shader
d. etc
2. Uncompressed 8-bit may offer higher quality than compressed 24-bit?

Cons
1. No floaters, depth must be taken into account, not just direction. Similar to displacement mapping.
2. More complex shader, needs recent-ish hardware
3. Most likely voodoo witch craft, who knows if it actually works lolol

Replies

  • jeffdr
    EarthQuake wrote: »
    3. Most likely voodoo witch craft, who knows if it actually works lolol

    Con #3 is my concern too :P Haven't tried it myself yet.

    I think you could find a way to bake floaters, but it might require custom baking code/tools.

    This could be a big thing for content generation, I feel. You can just freely "paint" detail into the map. It also opens a lot of doors on the code end for dynamic effects; procedural deformations and detail bump maps (layering multiple bump maps) become easier and cheaper to add on. In short, it's just much simpler to work mathematically with height than with surface normals.
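    To illustrate the "simpler math" point: layering two height maps is plain addition, while layering two normal maps needs a blend-and-renormalize step. The xy-sum blend below is just one common approximation, not anything from the paper:

```python
import numpy as np

# Layering detail with heights: just add.
base_h   = np.array([0.0, 0.10, 0.30, 0.20])
detail_h = np.array([0.0, 0.05, 0.00, 0.05])
layered_h = base_h + detail_h            # done, still a valid height map

# Layering with normals: combine the slopes, then renormalize.
def blend_normals(n1, n2):
    # Sum the xy perturbations, multiply z, renormalize (one common approximation).
    n = np.array([n1[0] + n2[0], n1[1] + n2[1], n1[2] * n2[2]])
    return n / np.linalg.norm(n)

print(blend_normals([0.0, 0.0, 1.0], [0.0, 0.0, 1.0]))  # [0. 0. 1.]
```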
  • equil
    I've been trying to get this working but it's unfortunately way over my head too. No tangent-break seams sounds lovely. A pretty big drawback is that it doesn't work with deferred though, and it requires shader model 5.0. The derivative map thing sounds more reasonable since it doesn't use ddx_fine (I think?), and apparently all of this still works fine with mirrored geometry (since it's kind of screen space).

    And boy do I wish academic papers were more implementation-oriented.
  • jeffdr
    Oh, balls, I didn't see that it required SM 5.0. That makes it basically untenable as a current technique then since you need a brand new D3D11 card to use it. :(

    I think when they say deferred lighting wouldn't work with it, they mean that you can't defer the derivative (but you could just as easily output to a screen space normal geometry buffer as per usual). This would be ok with me. The main win seems to be not having to deal with tangent space in your shader engine & pipeline.
  • Joshua Stubbles
    Not being able to use floating geometry is a big concern. That's a ton of extra time needed to cut shapes in, make sure they smooth properly and such. If it's really quality stuff I'm sure that's the direction we'll all be heading. =\
  • arrangemonk
    @perna: wrong
    a normal map can be converted to a height map and a height map to a normal map
    so at a certain resolution you should be able to bake as usual
  • Pedro Amorim
    arrangemonk wrote: »
    @perna: wrong
    a normal map can be converted to a height map and a height map to a normal map
    so at a certain resolution you should be able to bake as usual

    prepare to die
  • arrangemonk
    you die because of lack of knowledge

    since the main problem is incompatibility of tangent space calculations most of the time,
    staying within one specific algorithm always gives proper results for the height map (at least when taking the underlying mesh into account)

    but I can not tell if a simple conversion on a 2d basis (eg integration) gives the same result
    but I still see the problem with resolution dependence
    [image: resolution dependence example]

    edit:
    here you see, when increasing the "bump value" you get waviness, and as a result you get some sort of opposite triangulation to what your mesh has.
    but it would look fine at the height it's defined for, and I'm sure different meshes have different heights, so you need a height parameter for each mesh, which could lead to problems
  • Frankie
    arrangemonk wrote: »
    @perna: wrong
    a normal map can be converted to a height map and a height map to a normal map
    so at a certain resolution you should be able to bake as usual

    Every time you convert it you lose accuracy; in most cases I think it would be a massive difference.

    I don't understand how this would work on smooth shaded low poly models.
    Actually, I don't understand any of it :poly124: Is it just a faster way of using heightmaps in certain situations? Or is it meant for everything...

    Especially this part, "A normal map may be thought of as height derivative map which requires two values" As I don't derive my normals from a height map, I miss out that stage and directly take them from a high poly model.
  • arrangemonk
    as you see in the picture, the highpoly model differs from the lowpoly both in height and in surface direction
  • Frankie
    make a regular height map with a constant normal that isn't directly up (e.g. 45 degrees), make it 512 or above, then convert it to a normal map and you will get banding every other pixel...
  • arrangemonk
    that doesn't happen when baking a normal map,
    since the original would have to be a plane at an angle, which is usually something to avoid
    don't make examples that do not exist

    Edit: I suspect you of doing voodoo
  • equil
    Frankie wrote: »
    Especially this part, "A normal map may be thought of as height derivative map which requires two values" As I don't derive my normals from a height map, I miss out that stage and directly take them from a high poly model.

    The derivative of something is the rate of change. The derivative of a height is a slope. So just think of normal maps as slope maps or something. This doesn't really change no matter how you generate them. And in fact all this technique really does is derive this slope at runtime.
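    That slope view is easy to demo. A minimal numpy sketch (assuming unit texel size and a z-up frame; `height_to_normals` is just an illustration, not the paper's method) that turns a height map into per-texel normals by taking slopes:

```python
import numpy as np

def height_to_normals(h):
    # Slopes of the height field: the "derivative" of height along y and x.
    dh_dy, dh_dx = np.gradient(h.astype(float))
    # The normal leans back against the slope; renormalize per texel.
    n = np.dstack([-dh_dx, -dh_dy, np.ones_like(h, dtype=float)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# A ramp rising along x: every normal tilts the same way, away from the slope.
ramp = np.fromfunction(lambda y, x: 0.5 * x, (4, 4))
print(height_to_normals(ramp)[1, 1])  # roughly [-0.447, 0, 0.894]
```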

    - - -

    There are things you can't do with a normal map and things you can't do with a bump map, but I could see this being useful on the hardware it works on, since you'd probably do displacement at that stage, and just authoring one map for everything would be pretty sweet (and a map that's easy to mess with in Photoshop at that).

    Also the derivative maps he mentions in the article are pretty much normal maps without any tangent basis encoded. The upside with them is no tangent seams, but on the other hand it seems to use a lot of texture reads.

    tl;dr: might be useful when we move over to dx11.
  • Frankie
    If I wanted to I could create a normal map 100% filled with every pixel having a constant normal pointing at 45 degrees in any direction. Why I would want to do that I'm not sure, but it shows the beginning of the conversion breaking down.

    If you want a more realistic example try convert this, http://www.filterforge.com/filters/6523-normal.jpg

    Edit: Thanks equil, that clears up that part!
  • jeffdr
    Frankie wrote: »
    If I wanted to I could create a normal map 100% filled with every pixel having a constant normal pointing at 45 degrees in any direction. Why I would want to do that I'm not sure, but it shows the beginning of the conversion breaking down.

    That would work just fine. The height map equivalent of the normal map you describe would just be a gradient.
  • Frankie
    jeffdr wrote: »
    That would work just fine. The height map equivalent of the normal map you describe would just be a gradient.

    Until you tried to get a smooth gradient with 256 grays spread over 512 pixels, so you'd get 2px banding. Or am I getting confused?
  • equil
    uh, bilinear filtering.
  • Frankie
    not sure how that would help.....

    banding example for that case:
    http://dl.dropbox.com/u/26016303/banding.jpg

    I could be missing something basic, sorry if I'm going off topic.
  • arrangemonk
    no issue in the pixel shader, it works in float
    and that example is a bit extreme
    I would recommend working in 16 bit grayscale, or a color ramp + conversion, for that issue

    256 levels/channel is the worst case you can have while editing images
    in my little conversion application (some other thread) I used 32 bits/channel and the results were just fine
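    The bit-depth point is easy to check numerically. A 512-pixel ramp quantized to 8 bits has only 256 distinct grays, so each value spans about two pixels (the banding described above), while 16 bits keeps every pixel distinct:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 512)          # ideal smooth ramp over 512 pixels

q8  = np.round(x * 255) / 255           # stored as 8-bit grayscale
q16 = np.round(x * 65535) / 65535       # stored as 16-bit grayscale

# 8-bit: 256 distinct values over 512 pixels -> 2px steps. 16-bit: all distinct.
print(len(np.unique(q8)), len(np.unique(q16)))   # 256 512
```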
  • Frankie
    yeah true! I hadn't really thought about just how much better it is, but I suppose 16 bit grayscale is 65536 times more accurate than the 8 bit one.
  • arrangemonk
    actually its 256 times better
  • EarthQuake
    perna wrote: »
    So it seems this method makes production harder, which would make it the biggest fail. With synchronized tangent handling there's already little need to worry about poor shading.

    Right, this was my first concern too. All this stuff about converting normals to height etc just sounds like a huge mess as well, and needing to make sure your height is normalized to contain the correct information to prevent seams and look like the highpoly, that just sounds like the issues with improper tangent space all over again to me.

    I would however be very curious to see a real world example of this, on a regular asset, compared to normals etc, with the proper workflow outlined. If this can be accomplished with the same basic baking workflow, i.e. simply bake a height map with some preset values and throw it on the mesh without issue, some of the pros would be very attractive.

    But I doubt that will happen or that it works seamlessly at all.
  • jeffdr
    If it were more feasible today (better hardware support), I'd consider switching to this technique just on the engine end of things (pre-converting normal mapped assets to use height maps behind the artist's back). Then one would have the code advantages without having to convince the artists to do anything.

    It occurs to me that the exact same map could be used simultaneously for displacement (especially since this technique requires D3D11 anyway... you might as well). Then from a single channel texture map you have per-pixel normals, displacement with LOD, and no mesh tangent data required for the engine. Vertex data size can be a significant obstacle to high poly performance, so dropping tangents and bitangents would be desirable from a tech standpoint if not a content one.
  • equil
    I don't really understand your aversion to this. When tessellation becomes ubiquitous we'll bake height maps for every asset anyway, and the issues with floaters (brush tool in Photoshop?) / uv seam differences (just add padding?) apply to tessellation too. Could you guys just spell out the drawbacks you see with this for me?
  • jeffdr
    perna wrote: »
    There's no way to accurately extract the original height data from a normal map.

    Mathematically it works 100% (you just integrate instead of differentiate) - but there would be limits imposed by the normal map precision. A 24 bit normal map may not be precise enough to perfectly reconstruct the height. But if you are just after a height map for the purposes of emulating a normal map... then it might be fine.
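    A 1D toy version of the integrate-instead-of-differentiate idea (plain numpy, not the paper's method): the slope field recovers the shape of the height, but the constant of integration is gone, so the absolute height never comes back:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 256)
h = np.sin(x) + 3.0                      # true height, offset by a constant
slope = np.gradient(h, x)                # what a slope/derivative map stores

# Trapezoid-rule integration of the slope back into a height field:
dh = 0.5 * (slope[1:] + slope[:-1]) * np.diff(x)
h_rec = np.concatenate([[0.0], np.cumsum(dh)])

shape_ok = np.allclose(h_rec - h_rec[0], h - h[0], atol=1e-2)   # shape survives
offset_lost = not np.allclose(h_rec, h, atol=1.0)               # the +3.0 is gone
print(shape_ok, offset_lost)  # True True
```

    That lost constant is harmless for shading, but it matters if you want the same map for displacement.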
    perna wrote: »
    Anyway, I don't get the point of this topic. We're already getting fantastic results from existing solutions.

    Fair enough. It's mainly a tech thing rather than any sort of image quality improvement. I was just curious to run it by EQ and the rest of the polycount crew to see what the artist's take is.
  • EarthQuake
    jeffdr wrote: »
    Mathematically it works 100% (you just integrate instead of differentiate) - but there would be limits imposed by the normal map precision. A 24 bit normal map may not be precise enough to perfectly reconstruct the height. But if you are just after a height map for the purposes of emulating a normal map... then it might be fine.

    Actually, because a normal map is just normal direction, and a height map baked from high is a map of the variation in distance between the two compared surfaces (high, low), these are two drastically different data sets. You can't extract an accurate height map from a normal map for the purposes of displacement, for example. This isn't really an issue of precision, just an apples to oranges sort of thing.

    I don't think the idea that you would be able to use one map for displacement and bump is actually correct. Unless it's just a flat plane with a painted height map. Any sort of baked data is going to bring up a variety of issues.

    However, I'm not even sure that you would want a baked height map for this sort of thing, as your "slopes" will be influenced by surface variation, i.e. when your lowpoly cylinder isn't as round as your high, there will be a large difference in height there, but you wouldn't want that translated into normal direction.
  • arrangemonk
    LET THEM CODERS DREAM! one grayscale map instead of a 3 channel map is very tempting for coders
  • jeffdr
    EarthQuake wrote: »
    Actually, because a normal map is just normal direction, and a height map baked from high is a map of the variation in distance between the two compared surfaces (high, low), these are two drastically different data sets.

    Theory meets practice, and practice wins (the math works only in theory and with continuous functions). EQ and perna are right for the general case - normal maps can't capture vertical surfaces, so anything with a ledge perpendicular to the surface isn't present at all in the normal map, and therefore isn't taken into account in the integration, and so shit doesn't work. The process might work fine for a lumpy organic surface, but for some greebled device or something it wouldn't even come close. The more I think about it, there would be definite precision issues too, with many small errors propagating into big height errors as the integrator ran.

    Another problem, if you integrate a function, the result you get contains an unknown constant. Which means you couldn't use it for displacement mapping very well because even if you had the whole surface reconstructed to scale (and you wouldn't, at least not very accurately), it could still be shifted up and down by an arbitrary amount.

    So that's out. You'd just have to bake depth directly, which is a new set of production problems as many have pointed out. The tech-side advantages of working with depth still apply: saving memory w/ displacement mapping, better compression, and better compatibility with surface deforming effects. A depth-based renderer is worth looking into, and I think it's hasty to dismiss it outright.
    perna wrote: »
    It's not a working solution so it has zero relevance
    Using height maps for normals does work, that's what the paper illustrates. Somewhat unknown is the level of quality.
    perna wrote: »
    Won't run on console
    True but that hardly makes it irrelevant. There are other platforms today and the current consoles won't be around forever.
    perna wrote: »
    Even if it ran on console and tools for the whole pipeline existed, it would only be a valid alternative to normal mapping if it held significant performance benefits.
    In general terms there are often strong reasons to switch tech that have nothing to do with run time performance, or even image quality.
  • jeffdr
    The normal map pipeline is very well established and gives the best results today, but like perna says there are tool chain issues and people are still screwing it up, in both art and code. A lot of guys are old pros at making normal maps, so much so that it must seem easy. But I think there are a lot of hardships there, and if they could be overcome it would be worth sacrificing performance if the art pipeline got easier. It's a tradeoff that's made all the time.

    That said maybe this derivative map stuff isn't the silver bullet. If it looks worse then it's dead in the water no matter what. But by eliminating the tangent space parametrization I feel it's stepping in the right direction tech-wise.

    (Lengthy possibly boring megapost on the future of real time computer graphics follows)

    We have all these systems of mapping detail onto rather bland un-detailed meshes. Texture mapping, normal mapping, and even displacement mapping are just kind of hacks to graft detail onto simpler mesh frames. The only reason we even specify normals is because we don't have sufficient resolution on our meshes to take a derivative at run time reliably (which is partly the reasoning behind the new proposal that started this thread).

    We don't have the ability to just throw million triangle meshes around like they're nothing, so it seems like we don't have much other choice today. But consider that we have been doing per-pixel lighting for years, and that even a single 1024x1024 map contains over a million points, and that there are several layers of data there (diffuse, normal, spec, gloss, sometimes more), and that even prev gen GPUs can handle it with no problems at all. These pixels are what we're really after, they contain nearly all of the detail data. The triangles are just the thin canvas upon which we stretch them. The silhouette.

    Maybe the problem is triangles.

    Consider what id has been doing recently. With Rage upcoming with their megatexture stuff, they've focused their emphasis not on fancy new shading techniques, but on getting more pixels to fit in memory (by streaming them in and out in blocks). In this way image quality can be attained by just clobbering things with resolution. It changes the art pipeline a bit, but only because the artists working there have suddenly been granted so much extra *space*.

    They have some other tech that's in R&D right now. A voxel-based GPU ray tracer, that does not use triangles at all. "What?" you say. It's totally not reasonable yet, barely runs on the best hardware today. But it has some advantages quality-wise, and art pipeline-wise. Models are stored as a network of points, each containing the data needed to be rendered (diffuse, spec, and so on). Normals are computed dynamically, because the mesh actually has the fidelity for this to be done with good quality, and it can happen as a side effect of the tracing process.

    And before you ask, no, it does not look like a horrible blob of sharp squares, as old voxel tech did. The data are stored similarly but the rendering process is different. Surfaces are continuous and filtered. They look smooth.

    The best analogy I can give is that it means treating geometry like textures are today. For example instead of mip mapping textures, you mip the entire geometry tree. So things scale bandwidth-wise and don't alias to hell as they move farther away.

    The art advantages are that the artist does not need to take any action to produce multiple levels of detail. At all. There is no low res and high res mesh, or normal or height baking steps that have to be tweaked and coddled. You just make a high detail object and then you are done. And you can do this without any regard whatsoever for using too much resolution, because things can always be scaled mip-style as needed by the engine or tool chain for you.

    There are often killer Achilles' heels in new tech like this, and I'm not really saying that voxels are the future per se. I am saying though that the smaller triangles become, the less value they offer in their current 3-sided form. And that we are still going to a lot of effort to NOT render triangles today (exhibit A: normal maps). And that as GPUs become more and more flexible parallel computing devices, we need to be ready to re-evaluate some basic assumptions.

    tl;dr - prepare for changes in coming decades