Coming from an artist's standpoint, I am aware of the visual pros of using lightmaps.
Not specific to UDK, but overall, I would like to understand the pros and cons on the technical side of things. For example, how much of an impact does adding lightmaps have on performance? Or better yet, where can I expect to see better performance when implementing lightmaps in a game, and what should I keep track of so I don't decrease performance?
Any other info would be great. Thanks!
Replies
The two things you want to look at are:
Adding lightmaps can, and often does, add another draw call to the model, because the model has to be painted twice. This effectively doubles the effort the system has to make to render it. That said, certain optimizations can be made to an engine to handle this better, so you would have to look at your specific engine to figure out what is going on.
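To make the "painted twice" idea concrete, here's a minimal sketch of the naive multipass approach older hardware used, next to the single-pass version most modern engines use (plain Python with a stand-in draw function; none of this is any engine's real API):

```python
# Naive multipass lightmapping submits the mesh once per texture and
# multiply-blends the second pass; a modern shader samples both maps in
# one pass instead. draw() is a hypothetical stand-in for a GPU draw call.

def draw(mesh, texture, blend=None):
    note = f" ({blend} blend)" if blend else ""
    print(f"draw call: {mesh} with {texture}{note}")

# Old-school multipass: two draw calls per lightmapped mesh.
draw("crate", "diffuse.dds")                     # base color pass
draw("crate", "lightmap.dds", blend="multiply")  # modulate by baked light

# Single-pass equivalent: one draw call, pixel shader does diffuse * lightmap.
draw("crate", "diffuse.dds * lightmap.dds")
```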
Memory, on the other hand, is another huge concern when storing that many maps. Lightmaps are small, but there are a lot of them, so if your system is really short on memory then you're going to be hurting in that department.
However, things like...
...can all affect how this plays out in the end.
At the end of the day the game industry is pretty damned UNSTANDARDIZED. If you are coming at it thinking you can glean hidden truths about how to best build your assets, you are probably in for a rough ride. As soon as you figure out one way of doing things, you're going to find the next generation of engines/platforms requires you to do it another way.
The key is knowing the language, and the ideas behind these systems, so you can learn new ones very quickly. Put all that in the back of your mind and then focus on making great art.
I am thinking of performance compared to, say, unbaked ambient occlusion, and the pros/cons versus using vertex lighting. Also, is a dominant directional light casting dynamic shadows on everything a costly process compared to lightmaps?
jocose: very good point about draw calls, I didn't think of that. That could end up being a huge problem if it stacks onto each object.
Also, the memory thing: can lightmapping pack every lightmap in a level into one texture, or does it have to be a separate lightmap for each object, making the engine call that many more textures?
Also keep in mind that this depends on your lighting model. Dynamic lights can have huge overhead because they can add an extra draw call to whatever they interact with. So the draw-call cost in that case would be negated by using lightmaps over dynamic lights, but then again it's completely different depending on your dynamic lighting model (there are a lot of them).
Sorry m8, but what makes you say that? As far as I know it just adds another pass for the pixel shader, and probably the simplest one compared to a spec map or normal map.
Compared to dynamic lighting and vertex shading, lightmapping is obviously the cheapest and most efficient way to light things, and that would be "period" for Unreal-type engines. At the price of not being interactive, unfortunately. If your lightmaps are streamed with your textures, then there's actually no extra memory footprint from lightmaps at all.
CryEngine-type engines seem to rely heavily on screen-space techniques, which have little to do with the environment artist's work. Although I'm probably not the one to take responsibility for talking about this.
But once again, for UDK-type stuff, vertex shading can't be streamed with textures, thus cluttering your precious memory. And dynamic shadows are always more expensive than static, since they have to be calculated per frame. So lightmaps for the win.
I have a bit about it in my artists' "hygiene" paper. Pardon the shameless self-promotion, but hopefully it could be of some use.
good luck!
I've researched this topic a lot, so if you want to talk more about it you should stop and chat with me sometime, dude.
Lightmapping: Unreal Engine, Source Engine
Pre-computed radiance transfer: Infernal Engine (Ghostbusters), Batman: Arkham, etc.
Light Propagation Volumes: CryEngine
cman: yea dude, we shall chit chat about this tomorrow :P
LIGHTMAPS
pros:
1. quality of the light rig is unlimited as it is baked, so you can do whatever you want: global illumination and all that magic
2. saves a lot of frame rate; our real-time shadows cost about 4 ms, which is expensive
3. can be blurred, so no razor-sharp shadows or artifacts on the edges of shadows
4. no transitions in the cascades, and shadows can be seen in the distance
5. almost non-existent cost to render
6. mip maps can be turned off for lightmaps to save memory
7. shadow edges do not flicker when the camera moves
cons:
1. costs a lot of texture memory if you want good quality
2. resolution bound: stair-stepping can be seen on shadows, stretched pixels, black spots where geo intersects
3. lightmap seams anywhere the lightmaps change resolution between geo, which is pretty much everywhere
4. every single piece of geo in the world needs to be unwrapped with non-overlapping UVs in the second UV set
5. environment has to constantly be rebaked if anything moves or changes
6. animating or destructible objects are harder to do or will just look wrong
7. slow to iterate: need to wait for a local bake or render farm bake and then integrate to preview and bug check
8. bottlenecks the bug-fixing process; lighting artists need to rebake after a bug fix breaks the lighting
9. lighting does not affect the character; tunnels have to be lit twice, once for the bake and once for the character
10. character shadows are cast on top of lightmap shadows; need to implement a solution to avoid the double-shadow bug
11. DXT compression artifacts: dark areas have purple and green pixels in them, looks like crap; DXT favours textures above a certain brightness level
12. low dynamic range: lightmaps can't store a value above 1, so colour clipping shows up past that, looks like crap; the range can be stretched beyond 0-1 but that introduces even more DXT compression artifacts
13. need to implement a second shadow solution for moving objects
REAL TIME LIGHTING
pros:
1. doesn't cost much texture memory
2. nothing has to be unwrapped to a second UV set
3. no rebake when the environment changes or objects are moved
4. fast to iterate; if you have a nice pipeline you can tune the lighting in real time
5. anyone can bug fix; lighting is hard to destroy, and artists see right away if lighting is broken
6. lighting is unified: it affects the character, world, and objects, so everything has the same quality, tunnel lights work on the character correctly, and animating objects come for free
7. no double-shadow bug when the character is standing in shadow
8. high dynamic range: lighting can go above the 0-1 range without artifacts
cons:
1. costs a lot of fps
2. need to implement a solution to keep shadow edges from flickering
3. resolution bound: looks low quality and jaggy when viewed up close; can be blurred, but that usually doesn't look good and also costs fps
4. light leaks: backfaces of geometry need to be modeled depending on the shadow angle to the geo
5. ugly transitions between shadow cascades, which look like razor slices in most games; creating nice transitions costs fps
6. shadows turn off completely in the distance; ours turn off at 250 units from the camera
7. adding extra lights costs fps
8. casting shadows from lights besides the sun costs even more fps; usually rendering engineers only give you one or a couple of shadow-casting lights
9. the light rig is limited by frame rate: can't do fancy global illumination, so you need to fake AO and bounce light with other techniques that don't look as good
Thanks for taking the time to write this up.
No problem, breakneck, I hope it's useful.)
cman2k, thanks a lot, dude, for that table! It got me interested in digging into the other methods, and it seems very interesting.)
Just going to throw my two cents in here as well based off what was done for Reach.
We used 6 different types of lighting: Dynamic, Vertex, Lightmap, Single Probe, Emissive, and Uber. Some of these are lights themselves and some are lighting methods; these are just the main important ones I thought would be interesting to share.
Dynamic - Obviously very expensive and used mainly for things like blinking lights, headlights and so forth. Everything you would expect to have a dynamic light.
Lightmaps - They were the cheapest, but did come at the price of requiring texture space. The better the shadow quality you wanted, the more res you needed. For the most part lightmaps were used for BSP or assets that we wanted really nice shadows on and that were not instanced very much.
There was no need to unwrap a second UV, which was nice. If I recall, someone gave the analogy that they were computed like a PolyPainted ZBrush model that never had UVs created, so everything was in little squares to get the most out of the space available. I could be totally wrong on this though, but I never had to unwrap a model for lightmaps.
Per Vert - The most expensive pre-computed method, but it gave some of the nicest results if you didn't have the texture memory. The reason it was the most expensive was that each asset set to Per-Vert was basically duplicated on top of the original asset, and then each vert was colored based on the intensity/color of the lights around it.
The more verts you had, the better the lighting would look, but the more expensive things got. This also had some issues where, if a vert was covered up by another asset, that vert would not be calculated and would be colored black. This allowed us to force some great shadows in some areas by "hiding" verts behind another model, with another row of verts close by that wasn't obstructed to basically "catch" the light and give some nice shadowing. It also forced us to add more verts to some models to make sure they lit properly in places.
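As a rough illustration of the idea (this is a generic sketch, not Bungie's pipeline; the light layout and the occlusion test are hypothetical stand-ins):

```python
import math

# Per-vertex baked lighting: every vertex stores one color, so denser
# meshes give smoother lighting but cost more. A vert that the occlusion
# test says is covered gets no contribution, i.e. it bakes black.

def bake_vertex_colors(vertices, normals, lights, is_occluded):
    colors = []
    for pos, normal in zip(vertices, normals):
        total = [0.0, 0.0, 0.0]
        for light in lights:
            if is_occluded(pos, light["pos"]):  # hidden verts stay black
                continue
            to_light = [l - p for l, p in zip(light["pos"], pos)]
            dist = math.sqrt(sum(c * c for c in to_light)) or 1.0
            to_light = [c / dist for c in to_light]
            ndotl = max(0.0, sum(n * d for n, d in zip(normal, to_light)))
            atten = 1.0 / (1.0 + dist * dist)   # simple distance falloff
            for i in range(3):
                total[i] += light["color"][i] * ndotl * atten
        colors.append(tuple(total))
    return colors
```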
Single Probe - (that was our name for it, not sure exactly what it would be called.) This method would only light the asset and not allow it to cast any kind of shadow onto itself. Models would be lit by whatever the main dominant light affecting them was after bounce lighting was computed. Generally this was the sun, but in corridors it could just be an omni, as long as it was the most dominant light affecting that object.
Basically it would tell the asset how bright and what color to be based on the most dominant light affecting it. Issues such as incorrect color/brightness were very common and sadly present in the final game of Reach : /
Emissive - You could flag individual polygons as emissive lights, which would give off light much like an omni, but from wherever you flagged an emissive polygon. This was used for lighting areas and in bounce calculations; emissive polys could not cast shadows. Always coupled with an emissive texture, as that would give the glow look of the light.
Uber - By far the most expensive dynamic light we had. Used almost exclusively for cinematics/trigger events, as it was the only light that could be turned on and off completely. I only ever used one, and that was in a cinematic space; I am not sure if other artists used them in production environments beyond trigger events, as they were very expensive.
How do you mean colour clipping? Are you talking about blow-out? I was under the impression that the lack of range in LDR lightmaps would only be an issue if you changed the tone mapping of the scene. I mean, if the map is exposed the same as the lighting, then even if it's blown out it is contextually correct.
Do you use HDR because you want to be able to dynamically change the exposure of the camera?
r_fletch_r, no, nothing to do with blowout or exposure. Lightmaps are all multiplied on top of the texture in default baked lighting, so LDR lightmaps can only darken textures, as strange as that sounds. For example, if you have a dark grey texture and you shine a light on it and then bake the lighting, the brightest light you can ever put on it would be a value of pure white. But that's pretty limiting: what if you want the lighting to be brighter because you're in a cave or whatever? So you turn the light up, but this makes the resulting lightmap try to go brighter than a value of 1, or pure white, and it can't, because lightmaps can only store a value of 0-255, or 0-1.

That's when colour clipping occurs: the lightmap can't actually brighten the underlying texture any more, so it clips out and creates this crappy-looking effect where you are viewing the diffuse texture in an unlit fashion (or emissive, as they call it in Unreal), and that fades off into lighting that hasn't reached its maximum range. This is where you have to stretch the range in a kind of hacky way and make the lightmaps 0-2 or 0-1.7 or whatever you choose, but this halves the precision of the lightmap and you introduce even more DXT artifacting, which is already a problem. There's a solution to try and fix that banding, but that's a whole other thread. I'll try to post an image.
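Here's the same thing as a tiny numeric sketch (plain Python; the texel values are made up for illustration):

```python
# LDR lightmap clipping vs. a stretched 0-2 range. Texel values made up.

def shade_ldr(diffuse, light):
    """Default modulate: the lightmap is stored 0-1, so the result can
    never be brighter than the diffuse texture itself."""
    lightmap = min(light, 1.0)          # anything above 1 clips here
    return diffuse * lightmap

def shade_scaled(diffuse, light, scale=2.0):
    """Range-stretched modulate: store light/scale, multiply the scale
    back in the shader. Buys headroom, but each 8-bit step now covers
    twice the range, so precision is halved and DXT banding gets worse."""
    lightmap = min(light / scale, 1.0)  # the value that actually gets stored
    return diffuse * lightmap * scale   # reconstructed at render time

dark_grey = 0.2
print(shade_ldr(dark_grey, 3.0))        # 0.2 -> clipped at the texture color
print(shade_scaled(dark_grey, 3.0))     # 0.4 -> lighting really brightens it
```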
I'm interested to know how you achieve this. I use 16-bit lightmaps when I want top quality, but it's a bit brutal maybe.
Noors, I've never heard of anyone using a 16-bit lightmap; that would be huge texture memory. We used to use uncompressed lightmaps on the PS2, but back then we only lightmapped the terrain, and the largest lightmap we used was 16x16 pixels.
commander_keen, do you mean stretching the lightmap range from 0-1 to 0-4? If so, you are quartering your lightmap precision, so you will have a lot of DXT colour banding and artifacting.
Chances are, if you're purchasing an engine like Unreal, this particular issue is taken care of for you, since it would be unacceptable to ship a baked-lighting tool that couldn't be used to light anything.
Even in Unreal 1 the lightmaps must have been scaled to more than 0-1, probably 0-2. If your lightmaps aren't used to brighten the scene, then what good are they?
I guess back then, lightmaps/vertex colors were only used for darkening diffuse textures. Remembering Quake 3, the diffuse textures are very brightly lit.
Now I guess even vertex-colored objects use modulate 2x or more.
From what I read here: http://udn.epicgames.com/Three/Lightmass.html#Diffuse%20Textures
Is this actually the case? What about HDR, how is that handled? If anyone has any good links for me to read on this I'd be interested. I'm still looking around online and finding lots of stuff, but I am thinking some of it might be more dated.
Seems like there are quite a few ways to handle lightmaps.
commander_keen, something to note about your example is that you are showing the difference in DXT compression based on a bright lightmap. Try a very dark lightmap, and if you are using the NVIDIA DXT compression algorithm you'll find the compression artifacts and banding are actually worse; NVIDIA DXT seems to favour bright textures. Not sure if Unreal or anyone else is using a different DXT compressor, but we found back in 2005 that, even with a range of only 0-2, in darkly lit environments with lots of gradients the compression artifacts were unacceptable. I'd be really interested to know if anyone has integrated a better DXT compressor into their pipelines or if everyone is just using NVIDIA's. Here's an example of what NVIDIA DXT compression looks like in a dark environment with a range of 0-2. Not shippable in my opinion.
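For anyone wondering why the darks band so badly: DXT stores each 4x4 block as two RGB565 endpoint colors plus interpolated in-betweens, so each endpoint channel only gets 5 or 6 bits over the whole 0-255 range. A quick sketch of just the endpoint quantization (plain Python, not any real compressor):

```python
# DXT block endpoints are RGB565: red/blue get 5 bits (32 levels), green
# 6 bits (64 levels), spread over the full 0-255 range. A dark gradient
# therefore lands on very few representable levels. This models only the
# endpoint quantization, not a real DXT encoder.

def quant5(v):                        # snap an 8-bit value to 5-bit precision
    return round(v / 255 * 31) * 255 // 31

dark_gradient = range(33)             # a subtle 0..32 ramp
levels = sorted(set(quant5(v) for v in dark_gradient))
print(levels)                         # [0, 8, 16, 24, 32] -> 5 bands

# Stretching the lightmap range to 0-2 doubles what each stored step means
# once it's multiplied back up, so the banding gets twice as coarse again.
```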
commander_keen, would you mind explaining the math and/or process behind the lightmap modulation?
I'm currently coding my own pixel shaders in Unity for the Sidescroller challenge going on right now, and I'd like to be able to do more than just multiply my lightmap on top of everything at the end.
On a side note, I'd totally be using your awesome node-based editor for Unity 3.0 rather than coding them myself, but I'm using Unity 2.6 Indie...
I'm using Modo, but I'm pretty sure that exposure control is available with its renderer as well.
I'll give that a go when I get to it.
I'll probably be making a night time scene for my sidescroller challenge entry so I might try to implement your workaround in combination with the method described by commander_keen somehow.
What do you suppose they are doing here with the type of lightmap shown in this writeup on Resistance: Fall of Man?
http://www.cybergooch.com/tutorials/pages/lighting_rfom4.htm
I've never seen lightmaps like that before. If you invert it in PS it sort of looks like a normal map...
Do you think they are using the different color channels in a way that gets around DXT compression limitations? I've heard that DXT favors the green channel over the others when choosing what data to preserve.
Kind of like how you can throw away the blue channel of a normal map and reconstruct it in the shader to improve the overall compression quality of the texture?
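For reference, that blue-channel trick works because a tangent-space normal is unit length, so z can be rebuilt from x and y in the shader. A minimal sketch:

```python
import math

# Rebuilding a tangent-space normal's blue channel from red/green: the
# normal is unit length and z is positive in tangent space, so
# z = sqrt(1 - x^2 - y^2) after remapping x, y from [0, 1] to [-1, 1].

def reconstruct_z(x, y):
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

print(reconstruct_z(0.0, 0.0))   # flat surface -> z = 1.0
print(reconstruct_z(0.6, 0.0))   # tilted -> z = 0.8
```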
This brings up a good point though: if every material in your scene is modulated by more than 1, you could get away with less lightmap modulation, assuming you're not clipping that color in the albedo buffer (in deferred rendering) before applying the lightmap pass. Doing that would split the accuracy between the diffuse texture and the lightmap, so you could have your lightmap modulate from 0-2 and your actual materials also rendered within a 0-2 range.
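A quick back-of-the-envelope version of that split (illustrative numbers, not any engine's actual scheme):

```python
# Splitting overbright range between the albedo and the lightmap: both
# stored textures stay in 0-1 at full 8-bit precision, and the scale
# factors are multiplied back in the shader. Numbers are illustrative.

ALBEDO_SCALE = 2.0     # materials reconstructed over a 0-2 range
LIGHTMAP_SCALE = 2.0   # lightmap reconstructed over a 0-2 range

def shade(albedo_stored, lightmap_stored):
    albedo = albedo_stored * ALBEDO_SCALE
    light = lightmap_stored * LIGHTMAP_SCALE
    return albedo * light              # combined result can reach 0-4

print(shade(0.5, 0.75))                # 1.0 * 1.5 = 1.5, no clipping needed
```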
Ben Apuna, that looks like some funky way of baking light normal and distance to calculate normal-map shading and stuff. I don't know where they are storing the actual light color though O_o. Based on the screenshots it looks like they are using a uniform color for the entire lightmap, which looks pretty bad...
But isn't lighting mostly boring on the floors, while the interesting stuff happens on the walls and complex shapes?
So in theory, baking a hi-res lightmap and then shrinking the UV islands according to the statistical contrast of their contained pixels (the higher the contrast, the bigger the island), then rearranging and baking to a new map, should keep the quality but at a lower resolution.
I think I already had that theory but couldn't test it because of a dying computer or something related to laziness.
Not that I'm saying it isn't an issue.
assuming the lightmap has proper padding:
1. binarize the lightmap, then per UV island go line-wise and count the jumps from 1 to 0 and vice versa, then do the same for columns and add the two; each UV island then has its specific value, and normalized (biggest = 1) that becomes the scale factor. Then scale the UV islands according to their scale factors and repack the UVs (see the sketch below).
edit: I think it's smarter to run that algorithm per triangle and not per UV island.
A better result was to subtract the next pixel from the current pixel and add the absolute value to the unnormalized factor, but this is slower.
edit2: learning image processing at school is awesome sometimes^^
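Here's a rough numpy sketch of that counting pass (the island masks and the actual UV repacking are assumed to come from somewhere else in the pipeline):

```python
import numpy as np

# Contrast-based island scaling: binarize the lightmap, count value
# transitions along rows and columns inside each UV island, and normalize
# so the busiest island gets scale factor 1.0. Island masks and the UV
# repacking itself are assumed to exist elsewhere.

def island_scale_factors(lightmap, island_masks, threshold=0.5):
    binary = (lightmap > threshold).astype(np.int8)
    factors = []
    for mask in island_masks:                         # one bool mask per island
        isle = np.where(mask, binary, 0)
        jumps = np.abs(np.diff(isle, axis=0)).sum()   # jumps down the columns
        jumps += np.abs(np.diff(isle, axis=1)).sum()  # jumps along the rows
        factors.append(float(jumps))
    factors = np.asarray(factors)
    return factors / max(factors.max(), 1.0)          # biggest contrast -> 1.0
```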
See my incomplete thread here.
http://www.polycount.com/forum/showthread.php?t=71538
Though I'm not sure directional lightmaps are the way to go nowadays. UDK uses another system, but I don't know how it works. Would be glad to hear about it.
btw, since lightmaps don't use mipmapping (too low-res already), UV shells don't need any padding, methinks.
http://www.malcolm341.com/franklingallery.htm
Cool stuff. Yeah, I remember reading one of those Valve papers before and seeing Ali Rahimi's work using it; now it's starting to sink in a bit.
The results of doing it that way are VERY nice, though it looks quite expensive to have so many textures dedicated to lighting. Perhaps that's why RFoM might be using vertex colors to store light color rather than an additional RGB map.
Looking at a UDK level I've been working on, there seem to be two versions of lightmaps. A complex version using three textures: DirectionalMaxComponent, NormalizedAverageColor, and a grayscale ShadowMap. The simple version might be using two textures: a SimpleLightmap and the grayscale ShadowMap from before.
Sadly it seems like I can't export the textures, otherwise I'd post them here for further dissection. So I guess this will have to do for now.
I suppose this is the same technique they are using to bake all that goodness down onto the iPhone.
Valve used 3 RGB lightmaps, each storing light intensity, color, and shadows for one direction.
Epic seems to store shadows, intensity, and colors independently. It's maybe more flexible.
The simple lightmap doesn't look like it stores shadows. It's just storing regular lighting multiplied by this NormalizedAverageColor, so I guess it's also combined with the ShadowMap later.
My goal is to mimic that stuff with V-Ray, because we don't have a lighting system in our engine.
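For anyone trying to mimic it, here's a minimal sketch of how the three directional lightmaps get combined; the basis vectors are the fixed tangent-space set from Valve's Source papers, and the weighting shown is one common variant, not necessarily what any shipped shader does:

```python
import math

# Radiosity normal mapping, roughly as in Valve's Source papers: three
# lightmaps are baked along three fixed tangent-space basis vectors and
# blended per pixel by the normal-map normal. The weighting here
# (clamped dot squared, renormalized) is one common variant.

S6, S2, S3 = math.sqrt(1 / 6), math.sqrt(1 / 2), math.sqrt(1 / 3)
BASIS = [(-S6, S2, S3),
         (-S6, -S2, S3),
         (math.sqrt(2 / 3), 0.0, S3)]

def combine(lightmaps, normal):
    """lightmaps: three RGB tuples sampled at this texel.
    normal: unit-length tangent-space normal from the normal map."""
    weights = [max(0.0, sum(n * b for n, b in zip(normal, basis))) ** 2
               for basis in BASIS]
    total = sum(weights) or 1.0
    return tuple(sum(w * lm[c] for w, lm in zip(weights, lightmaps)) / total
                 for c in range(3))

# A flat normal (0, 0, 1) weights all three maps equally:
print(combine([(1, 0, 0), (0, 1, 0), (0, 0, 1)], (0.0, 0.0, 1.0)))
```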
Bungie has some nice papers on their lighting pipeline; they analyze lightmap charts for variance and repack them by size.
As for bracketing (custom ranges), the problem then is that you have to feed those min/max values to the pipeline per instance (or per something) as well. Anyway, it's good to have a variety of lightmap tools/ideas around, but what you end up using can be quite custom.
SSbump Generator, a free GUI based program that can be used to generate Self-Shadowed Bump Maps.
Also this thread on the Unity forums has links to example files, tutorials, and a non-free set of shaders for the use of directional lightmaps in Unity. The tutorials and examples (which are free) cover how to create directional lightmaps in Maya and Modo.
So, mmh, what about linear workflow and lightmaps?
I baked my lightmap (V-Ray) in gamma 1.0 and applied a 2.2 gamma correction in Max with Xoliul's shader, but I get horrible banding. It's logical, as I'm re-exposing an 8-bit file, but still, I thought it would look better with a bit of shader magic.
How is it done in games? Do you use 16-bit lightmaps for tone/gamma correction? Or do you just not give a fuck about linear workflow?
edit: tested with a .hdr map (16 bits/channel), and it's waaaay better, as expected. Is that what Valve calls HDR lightmaps, with LDR lightmaps being 8 bits/channel?
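The banding makes sense numerically: an 8-bit linear lightmap spends almost no code values on the darks, and the 2.2 correction then stretches those few values far apart. A quick sketch, assuming a plain pow-based correction like the shader applies:

```python
# Why an 8-bit linear lightmap bands after gamma correction: the darks get
# only a handful of linear code values, and pow(x, 1/2.2) spreads them out.
# Assumes a plain pow-based correction, like the shader would apply.

for code in range(6):                      # first few 8-bit linear values
    linear = code / 255.0
    display = linear ** (1 / 2.2)          # gamma correction in the shader
    print(code, round(display * 255))

# Prints display values 0, 21, 28, 34, 39, 43: steps of up to 21/255 between
# neighboring codes, i.e. visible bands. A 16-bit or HDR source has enough
# dark values that the same correction stays smooth.
```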
Also, I assume the previous lightmap examples from UDK use a separate shadowmap to mask the specular in shadowed areas? Is that right?
Thanks for any answers; I'm tired of getting my own threads as results on Google.