
Lightmaps: tech pros and cons

breakneck polycounter lvl 13
Coming from an artist's standpoint, I am aware of the visual pros of using lightmaps.

Not specific to UDK, but overall, I would like to understand the pros and cons when it comes to the technical side of things. For example, how much of an impact does adding lightmaps have on performance? Or better yet, where can I expect to see better performance when implementing lightmaps into a game, and what should I keep track of so I don't decrease performance?
Any other info would be great. Thanks!

Replies

  • Xendance polycounter lvl 7
    Re: performance, compared to what? Lightmaps vs. unlit, vertex lit, per pixel?
  • jocose polycounter lvl 11
    Depends on how the engine handles things and what platform you're running on. However, that's the canned answer. There are some universal issues; you just need to look at the engine/platform to determine how much of an issue they are.

    The two things you want to look at are:

    1. Memory
    2. Draw calls (EDIT: I was incorrect about this; please read some of the later posts for the correction)

    Adding lightmaps can, and often does, add another draw call to the model. This is because the model has to be painted twice. This effectively doubles the effort the system has to take to render it. That said, certain optimizations can be made to an engine to better handle this so you would have to look at your specific engine to figure out what is going on.

    Memory, on the other hand, is another huge concern when storing that many maps. Lightmaps are small, but there are a lot of them, so if your system is really short on memory then you're going to be hurting in that department.

    However, things like...

    • compression
    • special types of texture filtering
    • texturing streaming

    ...can all affect how this plays out in the end.

    At the end of the day the game industry is pretty damned UNSTANDARDIZED. If you are coming at it thinking you can glean hidden truths about how to best build your assets, you are probably going to be in for a rough ride. As soon as you figure out one way of doing things, you're going to find the next generation of engines/platforms requires you to do it another way.

    The key is knowing the language, and the ideas behind these systems, so you can learn new ones very quickly. Put all that in the back of your mind and then focus on making great art :)
  • breakneck polycounter lvl 13
    Xendance wrote: »
    Re: performance, compared to what? Lightmaps vs. unlit, vertex lit, per pixel?

    I am thinking performance compared to, say, unbaked ambient occlusion, or the pros/cons versus using vertex lighting. Also, a dominant directional light casting dynamic shadows on everything - is that a costly process compared to lightmaps?

    jocose: very good point about draw calls, I didn't think of that. That could end up being a huge problem if that stacks onto each object.

    Also, the memory thing: can lightmapping pack every lightmap in a level into one texture, or does it have to be a separate lightmap for each object - making the engine call that many more textures?
  • jocose polycounter lvl 11
    also the memory thing, can lightmapping pack every lightmap in a level into one texture or does it have to be a separate lightmap of each object - making the engine call that many more textures?
    As far as I know it depends on the engine. Some engines do create an atlas texture for their lightmaps, others don't.

    Also keep in mind that it depends on your lighting model. Dynamic lights can have huge overhead because they can add an extra draw call to whatever they interact with. So in that case the draw call cost would be avoided by using lightmaps instead of dynamic lights, but then again it's completely different depending on your dynamic lighting model (there are a lot of them).
  • d1ver polycounter lvl 14
    jocose wrote: »
    Adding lightmaps can, and often does, add another draw call to the model. This is because the model has to be painted twice. This effectively doubles the effort the system has to take to render it.

    Sorry m8, but what makes you say that? As far as I know it just adds another pass for the pixel shader. And probably the simplest one compared to spec map or normal map.

    Compared to dynamic lighting and vertex shading, lightmapping is obviously the cheapest and most efficient way to light things, and that would be "period" for Unreal-type engines. At the price of not being interactive, unfortunately. If your lightmaps are streamed with your textures then there's actually no memory footprint from lightmaps at all.

    The CryEngine kind of engines seem to rely heavily on screen-space techniques, which have little to do with the environment artist's work. Although I'm probably not the one who should be speaking about this.

    But once again, for UDK kind of stuff, vertex shading can't be streamed with textures, thus cluttering your precious memory. And dynamic shadows are always more expensive than static, since they have to be calculated per frame. So lightmaps for the win.



    I have a bit about it in my artists' "hygiene" paper. Pardon the shameless self-promotion, but hopefully it could be of some use.

    good luck!
  • jocose polycounter lvl 11
    That's a fantastic write-up, thank you so much for sharing, and yes, you're correct, it wouldn't add another draw call. I was wrong about that. I don't know why my brain doesn't operate on all 8 cylinders all the time. That's why I just love posting on PC - it always keeps me in check :)
  • cman2k polycounter lvl 17
    Here's a useful breakdown of current-gen lighting methods. These are all in addition to or in place of dynamic lighting (i.e. pure dynamic lighting isn't taken into account on this chart).

    [Image: comparison of real-time lighting methods in games, Crytek, August 2010]

    I've researched this topic a lot so if you want to talk more about it you should stop and chat with me sometime dude, ;)
  • ZacD ngon master
    Might be a good idea to summarize the other 2 methods or link to something that explains them. Or at least say what engines use each one.
  • cman2k polycounter lvl 17
    I agree. However I quickly stole this from a recent Crytek Powerpoint, haha. It's a good format though. I'll try and expand on it a bit.

    Lightmapping: Unreal Engine, Source Engine

    Pre-computed radiance transfer: Infernal Engine (Ghostbusters), Batman: Arkham, etc.

    Light Propagation Volumes: CryEngine
  • breakneck polycounter lvl 13
    d1ver: sweet paper- thanks for sharing

    cman: yea dude, we shall chit chat about this tomorrow :P
  • malcolm polycount sponsor
    Off the top of my head, this is assuming you're comparing lightmaps to a real time shadow map solution.

    LIGHTMAPS
    pros:
    1.quality of the light rig is unlimited as it is baked, so you can do whatever you want global illumination and all that magic

    2.saves a lot of frame rate, our real-time shadows cost about 4ms, which is expensive

    3.can be blurred, no razor-sharp shadows or artifacts on the edges of shadows

    4.no transitions in the cascades and shadows can be seen in the distance

    5.almost non existent cost to render

    6.mip maps can be turned off for lightmaps to save memory

    7.shadow edges do not flicker when the camera moves

    cons:
    1.costs a lot of texture memory if you want to get good quality

    2.resolution bound, stair stepping can be seen on shadows, stretched pixels, black spots where geo intersects

    3.lightmap seams anywhere the lightmaps change resolution between geo, which is pretty much everywhere

    4.every single piece of geo in the world needs to be unwrapped with non-overlapping UVs in the second UV set

    5.environment has to constantly be rebaked if anything moves or changes

    6.animating or destructible objects are harder to do or will just look wrong

    7.slow to iterate, need to wait for local bake or render farm bake and then integrate to preview and bug check

    8.bottlenecks the bug-fixing process, lighting artists need to rebake after bug fixing breaks lighting

    9.lighting does not affect the character; tunnels have to be lit twice, once for the bake and once for the character

    10.character shadows are cast on top of the lightmap shadows, need to implement a solution to avoid the double-shadow bug

    11.dxt compression artifacts, dark areas have purple and green pixels in them, looks like crap, dxt favours textures above a certain brightness level

    12.low dynamic range, lightmaps can't go above a value of 0-1, colour clipping is seen above a value of one, looks like crap, range can be stretched above 0-1 but introduces even more dxt compression artifacts

    13.need to implement a second shadow solution for moving objects

    REAL TIME LIGHTING
    pros:
    1.doesn't cost much texture memory wise

    2.nothing has to be unwrapped to the second uv set

    3.no rebake when environment changes or objects are moved

    4.fast to iterate, if you have a nice pipeline you can tune the lighting in real time

    5.anyone can bug fix, lighting is hard to destroy and artists see right away if lighting is broken

    6.lighting is unified, affects character, world, and objects; everything has the same quality, tunnel lights work on the character correctly, animating objects come for free

    7.no double shadow bug when the character is standing in shadow

    8.high dynamic range, lighting can go above 0-1 range without artifacts

    cons:
    1.costs a lot of fps

    2.need to implement a solution to keep shadow edges from flickering

    3.is resolution bound, looks low quality and jaggy when viewed up close, can be blurred but usually doesn't look good and also costs fps

    4.light leaks, backfaces of geometry need to be modeled depending on the shadow angle to the geo

    5.ugly transitions between shadow cascades, looks like razor slices in most games, creating nice transitions costs fps

    6.shadows turn off completely in the distance, ours turn off at 250 units from the camera

    7.adding extra lights costs fps

    8.casting shadows from lights besides the sun costs even more fps, usually rendering engineers only give you one or a couple shadow casting lights

    9.light rig is limited by frame rate, can't do fancy global illumination, need to fake AO and bounce light with other techniques that don't look as good
  • MoP polycounter lvl 18
    malcolm, that is a superb overview comparison between lightmaps and real-time shadow maps!
    Thanks for taking the time to write this up.
  • d1ver polycounter lvl 14
    Hey, no problem, jocose, that's what I'm here for too.)

    No problem, breakneck, I hope it's useful.)

    cman2k, thanks a lot, dude, for that table! It got me interested in digging into the other methods, and they seem very interesting.)
  • breakneck polycounter lvl 13
    malcolm: AWESOME!!!
  • malcolm polycount sponsor
    No problem MoP. breakneck, hopefully that's what you're looking for.
  • Autocon polycounter lvl 15
    Awesome write-up, Malcolm.

    Just going to throw my two cents in here as well, based on what was done for Reach.

    We used 6 different types of lighting: Dynamic, Vertex, Lightmap, Single Probe, Emissive and Uber. Some of these are lights themselves, some are lighting methods; these are just the main important ones I thought would be interesting to share.



    Dynamic - Obviously very expensive and used mainly for things like blinking lights, headlights and so forth. Everything you would expect to have a dynamic light.



    Lightmaps - They were the cheapest, but did come at the price of requiring texture space. The better shadow quality you wanted, the more res you needed. For the most part lightmaps were used for BSP or assets that we wanted really nice shadows on and that were not instanced very much.

    There was no need to unwrap a second UV which was nice. If I recall someone gave the analogy that they were computed like a PolyPainted Zbrush model that never had UV's created so everything was in little squares to get the most out of the space available. I could be totally wrong on this though but never had to unwrap a model for lightmaps.



    Per Vert - The most expensive pre-computed method, but it gave some of the nicest results if you didn't have the texture memory. The reason it was the most expensive was that each asset set to Per-Vert was basically duplicated on top of the original asset, and then each vert was colored based on the intensity/color of the lights around it.

    The more verts you had, the better the lighting would look, but the more expensive things got. This also had some issues: if a vert was covered up by another asset, that vert would not be calculated and would be colored black. This allowed us to force some great shadows in some areas by "hiding" verts behind another model, with another row of verts close by that was obstructed, to basically "catch" the light and give some nice shadowing. This also forced us to add more verts to some models to make sure they lit properly in some areas.
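
    Just to make the idea concrete, baked per-vertex lighting in a shader boils down to something like this (a generic sketch of the idea, not our actual implementation; the struct and sampler names are made up):

    // the bake writes a colour into each vertex; the GPU interpolates it across the triangle
    struct V2F {
        float4 pos        : POSITION;
        float2 uv         : TEXCOORD0;
        half4  bakedLight : COLOR0;    // per-vertex baked light colour/intensity
    };

    half4 frag(V2F IN) : COLOR {
        half4 diffuse = tex2D(_MainTexture, IN.uv);   // _MainTexture is a made-up sampler name
        // no lightmap fetch and no second UV set: the lighting rides on the vertices,
        // so more verts = smoother lighting, fewer verts = blockier lighting
        return diffuse * IN.bakedLight;
    }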



    Single Probe - (that was our name for it, not sure exactly what it would be called.) This method would only light the asset and not allow it to cast any kind of shadows onto itself. Models would be lit by whatever the main dominant light affecting them was after bounce lighting was computed. Generally this was the sun, but in corridors it could just be an omni, as long as it was the most dominant light affecting that object.

    Basically it would tell the asset how bright and what color to be, based on the most dominant light affecting it. Issues such as incorrect color/brightness were very common and sadly present in the final game of Reach : /

    Emissive - You could flag individual polygons as Emissive Lights, which would give off light much like an omni, but from wherever you flagged an Emissive Polygon. This was used for lighting areas and used in bounce calculations; they could not cast shadows. Always coupled with an emissive texture, as that would give the glow look of the light.



    Uber - By far the most expensive dynamic light we had. Used almost exclusively for cinematics/trigger events, as it was the only light that could be turned on and off completely. I only ever used one, and that was in a cinematic space; I am not sure if there was any other use of them in production environments by other artists beyond trigger events, as they were very expensive.
  • r_fletch_r polycounter lvl 9
    12.low dynamic range, lightmaps can't go above a value of 0-1, colour clipping is seen above a value of one, looks like crap, range can be stretched above 0-1 but introduces even more dxt compression artifacts

    How do you mean colour clipping? Are you talking about blow-out? I was under the impression that lack of range in LDR lightmaps would only be an issue if you changed the tone mapping of the scene. I mean, if the map is exposed the same as the lighting, then even if it's blown out it is contextually correct.

    Do you use HDR because you want to be able to dynamically change the exposure of the camera?
  • malcolm polycount sponsor
    Autocon, you worked on Reach? That's cool, I'm working on Kinect JoyRide; looking forward to getting my free copy of Reach from Microsoft when it comes out.

    r_fletch_r, no, nothing to do with blowout or exposure. Lightmaps are multiplied on top of the texture in default baked lighting, so LDR lightmaps can only darken textures, as strange as that sounds. For example, if you have a dark grey texture and you shine a light on it and then bake the lighting, the brightest light you can ever put on it would be a value of pure white. But that's pretty limiting: what if you want the lighting to be brighter because you're in a cave or whatever? So you turn the light up brighter, but this makes the resulting lightmap try to go brighter than a value of 1, or pure white, and it can't, because lightmaps can only store a value of 0-255, or 0-1.

    That's when colour clipping occurs: the lightmap can't actually brighten the underlying texture any more, so it clips out and creates this crappy-looking effect where you are viewing the diffuse texture in an unlit fashion (or emissive, as they call it in Unreal), fading off into lighting that hasn't reached its maximum range. This is where you have to stretch the range in a kind of hacky way and make the lightmaps 0-2 or 0-1.7 or whatever you choose, but this halves the precision of the lightmap and introduces even more DXT artifacting, which is already a problem. There's a solution to try and fix that banding but that's a whole other thread. I'll try to post an image.
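
    To make the clipping concrete, here's a minimal sketch of the two cases in a Cg/HLSL-style pixel shader (the sampler names are made up, this is not our actual shader):

    half4 diffuse  = tex2D(_MainTexture, IN.uv1coords);
    half4 lightmap = tex2D(_LightMap, IN.uv2coords);   // 8-bit; anything brighter than 1 already clipped at bake time

    // plain LDR lightmapping: the result can never be brighter than the diffuse itself
    half4 clipped = diffuse * lightmap;

    // stretched range: bake the lightmap at half brightness and scale by 2 in the shader;
    // lighting can now reach 2x the diffuse, but each 8-bit step is twice as coarse,
    // so DXT banding gets worse
    half4 overbright = diffuse * lightmap * 2.0;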
  • malcolm polycount sponsor
    Here's an example of colour clipping with lightmaps where the lights try to push the lightmap to a value above 1 (or 255, if you prefer).

    [Image: example of lightmap colour clipping]
  • Xendance polycounter lvl 7
    I used to get that when doing levels in the UT 2004 editor, but I haven't run into that problem in UDK. Maybe it's because of the 64-bit rendering pipeline the engine features?
  • Noors greentooth
    malcolm wrote: »
    there's a solution to try and fix that banding but that's a whole other thread.

    I'm interested to know how you achieve this. I use 16-bit lightmaps when I want top quality, but it's a bit brutal maybe.
  • commander_keen polycounter lvl 18
    Lightmaps should always be multiplied by at least 4. If they are created with decent quality it will look good. There's no reason to worry about it unless you are lighting a scene with almost no texture contrast.
  • malcolm polycount sponsor
    Xendance, you probably don't see the issue anymore because the range of the texture is being stretched to something higher than the 0-1 range; in UT 2004 it was probably just 0-1.

    Noors, never heard of anyone using a 16-bit lightmap, that would be huge texture memory. We used to use uncompressed lightmaps on the PS2, but back then we only lightmapped the terrain and the largest lightmap we used was 16x16 pixels.

    commander_keen, do you mean stretching the lightmap range from 0-1 to 0-4? If so you are quartering your lightmap precision, so you will have a lot of DXT colour banding and artifacting.

    Chances are if you're purchasing an engine like unreal this particular issue is taken care of for you since it would be unacceptable to ship a baked lighting tool that couldn't be used to light anything.
  • commander_keen polycounter lvl 18
    You will only be able to see artifacts if you are lighting a pure white surface. In 99% of games you would never notice it. The alternative of a lower range would result in a lot more problems.

    Even in Unreal 1 the lightmaps must have been scaled to more than 0-1, probably 0-2. If your lightmaps aren't used to brighten the scene, then what good are they?
  • Noors greentooth
    malcolm, I use them only for some demos. I know it's very memory consuming, but since you have 8 bits for lighting and 8 bits for darkening, it improves the gradient quality and it's more accurate when using a linear workflow.
    I guess back then, lightmaps/vertex colors were only used for darkening diffuse textures. Remembering Quake 3, the diffuse textures are very bright.
    Now I guess even vertex-colored objects use modulate 2x or more.
  • jocose polycounter lvl 11
    I'm actually kind of ignorant when it comes to how lightmaps are applied to textures. In my simplistic understanding, lightmaps are simply multiplied onto the diffuse. Does UDK, for example, do much more than that now?

    From what I read here: http://udn.epicgames.com/Three/Lightmass.html#Diffuse%20Textures
    During rendering, lit pixel color is determined as Diffuse * Lighting, so diffuse color directly affects how visible the lighting will be. High contrast or dark diffuse textures make lighting difficult to notice, while low contrast, mid-range diffuse textures let the lighting details show through.

    Is this actually the case? What about HDR, how is that handled? If anyone has any good links for me to read on this I'd be interested. I'm still looking around online and finding lots of stuff, but I am thinking some of it might be dated.

    Seems like there are quite a few ways to handle lightmaps.
  • commander_keen polycounter lvl 18
    I made an example of different modulation levels. Each texture is DXT1 compressed. As you can see there are almost no visible compression artifacts with 4x, and 1x just looks horrible.
    [Image: comparison of 1x, 2x and 4x lightmap modulation under DXT1 compression]
  • malcolm polycount sponsor
    Noors, even back on the PS2 we had to stretch the range of the lightmaps to 0-2; as commander_keen says, it's pretty much useless to only be able to darken textures with lightmaps in a conventional lighting setup.

    commander_keen, something to note about your example is that you are showing the difference in DXT compression based on a bright lightmap. Try a very dark lightmap, and if you are using the NVIDIA DXT compression algorithm you'll find the compression artifacts and banding are actually worse; NVIDIA DXT seems to favour bright textures. Not sure if Unreal or anyone else is using a different DXT compressor, but we found back in 2005 that even with a range of only 0-2, in darkly lit environments with lots of gradients the compression artifacts were unacceptable. I'd be really interested to know if anyone has integrated a better DXT compressor into their pipelines or if everyone is just using NVIDIA. Here's an example of what NVIDIA DXT compression looks like in a dark environment with a range of 0-2. Not shippable in my opinion.

    [Image: dark environment lightmap with a 0-2 range showing NVIDIA DXT compression banding]
  • ZacD ngon master
    I wonder how Mirror's Edge did it; it's so clean that you'd notice any banding quickly.
  • Ben Apuna
    This is a great thread, lots of stuff I didn't know.

    commander_keen would you mind explaining the math and/or process behind the lightmap modulation?

    I'm currently coding my own pixel shaders in Unity for the Sidescroller challenge going on right now and I'd like to be able to do more than just multiply my lightmap on top of everything at the end.

    On a side note I'd totally be using your awesome node based editor for Unity 3.0 rather than coding them myself but I'm using Unity 2.6 Indie...
  • commander_keen polycounter lvl 18
    Ben Apuna, it's simple. If you are baking your lightmaps in a 3D package like Max, you can set up your lighting so it looks how you want it, then when you actually bake the lightmaps, bake them 2 or 4 times darker than they were when you were preview rendering. You could probably do this easily with some exposure control in Max. Then in the shader you just sample the lightmap and multiply it by 2 or 4, like this:
    // sample the diffuse texture with the first UV set
    half4 diffuseColor = tex2D(_MainTexture, IN.uv1coords);
    // sample the lightmap (baked 2-4x darker) with the second UV set
    half4 lightmapColor = tex2D(_LightMap, IN.uv2coords);
    // scale the lightmap back up when combining
    return diffuseColor * lightmapColor * 4.0;
  • Ben Apuna
    Awesome! Thank you :)

    I'm using Modo, but I'm pretty sure exposure control is available with its renderer as well.

    I'll give that a go when I get to it.
  • malcolm polycount sponsor
    ZacD, I haven't looked at Mirror's Edge in a long time, but for the outdoor stuff there wouldn't be an issue because they use white diffuse textures; this means they can use the 0-1 range and just use lightmaps to darken the image, as they don't need any overbright from the lighting. Also they might not be using the NVIDIA DXT compressor; maybe they found a better one that can compress dark lightmaps. I think there is one called squish, but you need a programmer to implement it, and ATI has one called Compressonator; I can't remember, but I think I tested the ATI compressor and found it had the same issue, though this was 5 years ago so I can't quite remember. As for their dark tunnel stuff I'd need to take a look at it up close, but perhaps they did the same thing we did to fix our darkly lit areas back in 2005 and bracketed the lightmap ranges for each environment.

    Our CG Supervisor came up with this nifty idea to only use the range of the lightmaps needed to create the art, rather than using an explicit range like 0-2 or 0-4. We worked in a 0-2 range until we were happy with the lighting and the art director had signed off on it. Once that was done we'd find out what the darkest black and the brightest white in the scene were, and then reduce the range from 0-2 to something that only included the values used. Below is an example of before and after the lightmap bracketing technique. In the particular example below, the image on the left is stretched from 0-2 and the image on the right is bracketed to 0.5-1.2; this tightened up the range and reduced a lot of banding. The interesting part here for me is that there is no pure black in the lightmaps, so starting your lightmap range at zero is already wasting precision.

    [Image: before and after lightmap range bracketing - left stretched to 0-2, right bracketed to 0.5-1.2]
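
    In shader terms the bracketing is just a remap of the 0-1 lightmap sample into the bracketed range; something like this sketch (the constant and sampler names are invented, and as CrazyButcher notes later, the per-environment min/max values have to be fed to the shader by the pipeline):

    // per-environment bracket values, e.g. 0.5 and 1.2 from the example above
    half _LightmapMin;
    half _LightmapMax;

    half4 diffuse  = tex2D(_MainTexture, IN.uv1coords);
    half3 lightmap = tex2D(_LightMap, IN.uv2coords).rgb;
    // the 8-bit 0-1 sample now only spans the range the scene actually uses,
    // so every step of precision is spent on values that matter
    half3 lighting = _LightmapMin + lightmap * (_LightmapMax - _LightmapMin);
    half4 finalColor = half4(diffuse.rgb * lighting, diffuse.a);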
  • Ben Apuna
    Great info malcolm.

    I'll probably be making a night time scene for my sidescroller challenge entry so I might try to implement your workaround in combination with the method described by commander_keen somehow.

    What do you suppose they are doing here with the type of lightmap shown in this write-up on Resistance: Fall of Man?

    http://www.cybergooch.com/tutorials/pages/lighting_rfom4.htm

    I've never seen lightmaps like that before. If you invert it in PS it sort of looks like a normal map...

    Do you think they are using the different color channels in a way to get around DXT compression limitations? I've heard that DXT favors the green channel over others when choosing what data to preserve.

    Kind of like how you can throw away the blue channel in a normal map and reconstruct it in the shader to improve the overall compression quality of the texture?
  • commander_keen polycounter lvl 18
    malcolm, yeah, that can generally help the problem, but it's really just a workaround and not guaranteed to work in all cases. Mirror's Edge has tons of blown-out parts (and even uses dynamic exposure), so a range of at least 0-2 would be necessary. There is no case where a static range of 0-1 is acceptable, because textures are not always 100% white. Even though many of the textures in Mirror's Edge are bright, they still have darker parts within them, and then there are the strongly saturated textures which would look very bad if you just darken them.

    This brings up a good point though: if every material in your scene is modulated by more than 1 you could get away with less lightmap modulation, assuming you're not clipping that color in the albedo buffer (in deferred rendering) before applying the lightmap pass. Doing that would split the accuracy between the diffuse texture and the lightmap, so you could have your lightmap modulate from 0-2 and your actual materials also rendered within a 0-2 range, as in the sketch below.
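
    Roughly like this (made-up names again, and it assumes the albedo texture is authored/stored at half brightness):

    // both textures are stored 0-1 but each is scaled by 2 in the shader,
    // so the final range is effectively 0-4 while each texture only gives up half its precision
    half3 albedo   = tex2D(_MainTexture, IN.uv1coords).rgb * 2.0;
    half3 lighting = tex2D(_LightMap,    IN.uv2coords).rgb * 2.0;
    half3 finalColor = albedo * lighting;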

    Ben Apuna, that looks like some funky way of baking light normal and distance to calculate normal map shading and stuff. I don't know where they are storing the actual light color though O_o. Based on the screenshots it looks like they are using a uniform color for the entire lightmap, which looks pretty bad...
  • malcolm polycount sponsor
    Ben Apuna, no, I've never seen a lightmap like that before, interesting. DXT is a 5:6:5 compression as I remember; I think it works by using 5 bits red, 6 bits green, 5 bits blue. So yeah, if you used the green channel that would be the part with the most precision/range. Another technique is to have all your lightmaps be greyscale, use just the green channel for the lightmap, and colourize the lighting in the shader rather than baking colour into the lighting. As I recall, on Skate 1 they used greyscale lightmaps and stored a separate lightmap in each colour channel of a DXT1 texture to try and save some memory.
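
    A rough sketch of that greyscale approach (names invented, and the light colour would have to come from somewhere in your pipeline, e.g. a per-zone constant):

    half4 diffuse = tex2D(_MainTexture, IN.uv1coords);
    // greyscale lightmap stored only in the green channel (the 6-bit channel in DXT's 5:6:5)
    half intensity = tex2D(_LightMap, IN.uv2coords).g;
    // colourize in the shader instead of baking colour into the map
    half3 lighting = intensity * _LightColour.rgb;   // _LightColour is a made-up constant
    half4 finalColor = half4(diffuse.rgb * lighting, diffuse.a);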
  • malcolm polycount sponsor
    commander_keen, good point, I forgot about all those red and orange textures in Mirror's Edge. No idea then, I haven't thought about this stuff in a really long time.
  • Bruno Afonseca
    @ben: those are directional lightmaps if I remember correctly. They store direction (and intensity?), but colors, I have no idea. Vertices maybe? I've read a paper about it some time ago but can't find it :(
  • arrangemonk polycounter lvl 15
    It seems all lightmaps are even density, like texturing is usually done.
    But isn't lighting mostly boring on the floors, while the interesting stuff happens on the walls and complex shapes?
    So in theory, by baking a hi-res lightmap and then shrinking the UV islands according to the statistical contrast of the pixels they contain (the higher the contrast, the bigger the island), then rearranging and baking to a new map, you should keep the quality but at a lower resolution.

    I think I already had that theory but couldn't test it because of a dying computer, or something related to laziness.
  • malcolm polycount sponsor
    arrangemonk, I've seen that technique used on games before. It works but you need some fancy algorithm to figure it out in the pipeline.
  • r_fletch_r polycounter lvl 9
    Thanks for the explanation earlier, malcolm. Do you have any pics of that lightmap on textures? I've seen worse artifacts disappear when laid atop a texture.

    Not that I'm saying it isn't an issue.
  • arrangemonk polycounter lvl 15
    Fancy algorithm, assuming the lightmap has proper padding:
    1. Binarize the lightmap, then per UV island go line by line and count the jumps from 1 to 0 and vice versa, then do the same for the columns and add them together. Each UV island then has its own value, and normalized (biggest = 1) that becomes the scale factor. Then scale the UV islands according to the scale factor and repack the UVs.

    Edit: I think it's smarter to run that algorithm per triangle and not per UV island.

    A better result was to subtract the next pixel from the current pixel and add the absolute value to the unnormalized factor, but this is slower.

    Edit 2: learning image processing at school is awesome sometimes ^^
  • Noors greentooth
    Yeah, those are directional lightmaps as used by Valve on HL2. Light intensity is stored in one channel for each direction, and color is probably stored in vertex color.
    See my incomplete thread here:
    http://www.polycount.com/forum/showthread.php?t=71538
    Though I'm not sure directional lightmaps are the way to go nowadays. UDK uses another system, but I don't know how it works. Would be glad to hear about it.
    By the way, since lightmaps don't use mipmapping (too low-res already), UV shells don't need any padding, methinks.
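
    For what it's worth, here's roughly how that kind of directional lightmap gets combined in the shader, loosely based on Valve's radiosity normal mapping; the basis values and names here are from memory, so treat it as an approximation rather than the real HL2 code:

    // three lightmaps baked along three fixed tangent-space basis directions
    half3 lm0 = tex2D(_LightMap0, IN.uv2coords).rgb;
    half3 lm1 = tex2D(_LightMap1, IN.uv2coords).rgb;
    half3 lm2 = tex2D(_LightMap2, IN.uv2coords).rgb;

    // the HL2-style basis, spread evenly around the surface normal
    const half3 basis0 = half3(-0.4082, -0.7071, 0.5774);
    const half3 basis1 = half3(-0.4082,  0.7071, 0.5774);
    const half3 basis2 = half3( 0.8165,  0.0,    0.5774);

    // weight each lightmap by how much the per-pixel normal faces its basis direction
    half3 normalTS = normalize(tex2D(_NormalMap, IN.uv1coords).rgb * 2.0 - 1.0);
    half3 weights  = saturate(half3(dot(normalTS, basis0),
                                    dot(normalTS, basis1),
                                    dot(normalTS, basis2)));
    weights *= weights;                          // squared...
    weights /= dot(weights, half3(1, 1, 1));     // ...and renormalized

    half3 lighting = lm0 * weights.x + lm1 * weights.y + lm2 * weights.z;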
  • malcolm polycount sponsor
    r_fletch_r, no, I don't have an image of that exact shot with textures applied; I have shots of that scene after it was bracketed, with textures. Even with textures, before bracketing it was unshippable. Below you can see that environment in its final state.

    http://www.malcolm341.com/franklingallery.htm
  • Ben Apuna
    Directional lightmaps, that explains it, thanks for pointing that out fonfa and Noors.

    Cool stuff, yeah I remember reading one of those Valve papers before and seeing Ali Rahimi's work using it, now it's starting to sink in a bit.

    The results of doing it that way are VERY nice, though it looks quite expensive to have so many textures dedicated to lighting. Perhaps that's why RFoM might be using vertex colors to store light color rather than using an additional RGB map.

    Looking at a UDK level I've been working on, there seem to be two versions of lightmaps. A complex version uses three textures: DirectionalMaxComponent, NormalizedAverageColor, and a grayscale ShadowMap. The simple version might be using two textures: a SimpleLightmap and the grayscale ShadowMap from before.

    Sadly it seems like I can't export the textures, otherwise I'd post them here for further dissection. So I guess this will have to do for now.

    [Image: UDK lightmap textures from the level]

    I suppose this is same technique they are using to bake all that goodness down onto the iPhone.
  • Noors greentooth
    Oh well, it doesn't look that different from Valve.

    Valve used 3 RGB lightmaps, each storing light intensity, color and shadows for one of the directions.

    Epic seems to store shadows, intensity and colors independently. It's maybe more flexible.
    The simple lightmap doesn't look like it stores shadows. It's just storing regular lighting multiplied by this NormalizedAverageColor, so I guess it's also combined with the ShadowMap later.
    My goal is to mimic that stuff with V-Ray because we don't have a lighting system in our engine.
  • CrazyButcher polycounter lvl 18
    http://www.bungie.net/Inside/publications.aspx

    Bungie has some nice papers on their lighting pipeline. They analyze lightmap charts for variance and repack them by size.

    As for bracketing (custom ranges), the problem then is that you have to feed those min/max values to the pipeline per-instance (or per-something) as well. Anyway, it's good to have a variety of "lightmap tools/ideas" around, but what you end up using can be quite custom.
  • Bruno Afonseca
    Hey guys, had to dig this out because I have a question - is it possible to raise the range of a lightmap using the default lightmap shader in Max? I need that for an art test; it would be cool to have some parts looking a bit overexposed instead of being limited to the brightness of the base texture. It's supposed to be a daytime scene, so being able to do that would help a lot.
  • Ben Apuna
    I'm going to bump this thread as I've stumbled across some tutorials on making directional lightmaps and a tool for generating Self-Shadowed Bump (normal) Maps.

    SSbump Generator, a free GUI based program that can be used to generate Self-Shadowed Bump Maps.

    Also this thread on the Unity forums has links to example files, tutorials, and a non-free set of shaders for the use of directional lightmaps in Unity. The tutorials and examples (which are free) cover how to create directional lightmaps in Maya and Modo.
  • Noors greentooth
    Hi ! thread necro ^^

    So, mmh, what about linear workflow and lightmaps?

    I baked my lightmap (V-Ray) in gamma 1.0 and applied a 2.2 gamma correction in Max with Xoliul's shader, but I get horrible banding. It's logical as I'm re-exposing an 8-bit file, but still, I thought it would look better with a bit of shader magic.

    [Image: banding in an 8-bit lightmap after gamma correction]

    How is it done in games? Do you use 16-bit lightmaps for tone/gamma correction? Or do you just not give a fuck about linear workflow?

    Edit: tested with a .hdr map (16 bits/channel), it's waaaay better, as expected. Is that what Valve calls HDR lightmaps, with LDR lightmaps being 8 bits/channel?
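
    In shader terms that's the whole problem in a nutshell (sampler name invented, just to illustrate):

    // lightmap baked in linear space (gamma 1.0) into an 8-bit texture
    half3 lm = tex2D(_LightMap, IN.uv2coords).rgb;
    // gamma-correcting in the shader stretches the dark end of those 256 steps apart,
    // which is exactly where the banding shows up
    half3 corrected = pow(lm, 1.0 / 2.2);
    // baking to a higher-precision format (like the .hdr test above), or baking with
    // the gamma already applied, keeps more usable steps in the darks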


    Also, I assume the previous lightmap examples from UDK use a separate shadowmap to mask the specular in shadowed areas? Is that right?

    Thanks for any answers, I'm tired of getting my own threads as results on Google :D!!