
Advice on lightmaps in game engines

Emil Mujanovic polycounter lvl 18
I've been put in charge of doing some R&D on lighting solutions for our current project. Most of the lighting we've done in the past has been handled by vertex lighting, and it has served us pretty well and achieved the results we wanted.
With early versions of our current project, we've noticed a lot of our environments were lacking crisp shadows, which gave the illusion of floating geometry.
After a few tests, the only way we could get those crisp shadows was by sub-dividing our geometry (going from 1,000 tris to 20,000 tris for a ground plane).
My question to you is: what is the common practice for using lightmaps?

We are developing this for the Wii and the engine we are using doesn't support multitexturing (applying two or more textures to a single piece of mesh in a single pass: spec, bump, alpha, normal, etc.), but we are looking at implementing it so we can explore the lightmap avenue.

Now... Does the mesh need to have a unique unwrap for this to work properly (no overlapping UVs)?
Do the UVs have to be contained within the 0 to 1 UV co-ordinates?
Any information you can give me on this would be great. I've done a little research already and found out the basics of how it works. Figured I'd ask here as well for those of you that have worked with this medium before.
Thanks.

-caseyjones

Replies

  • Rob Galanakis
    While I'm sure a number of people on this forum can (and should) give their advice and experience, since this is for your studio/game, let me give some advice beyond what our ideas of implementation may offer. Level lightmapping is much better handled programmatically and procedurally when building levels. Every surface needs its own lightmap space, and instances of the same object need their own unique space. It is possible to set this up in advance, such as creating your level in Max, unwrapping all the objects with a second UV channel, and baking. Obviously there are limitations, but this is what I would do for a one-shot deal such as a tech demo. However, this isn't a tech demo (AFAIK), so whatever good solution you find, it will not be handled by the artists. It will be handled by the engine (at runtime or pre-rendered/calculated); the implementation of a usable and flexible system will be done by programmers.

    My suggestion is to find a programmer who has done this before; if there are none at your studio, then ask around, as this has been a common thing. However, there just isn't much for an artist to do on this aspect, since it should be independent of the content pipeline for the most part. The programmer will be doing most of the work, but it is always helpful to have an artist watching, since this is, after all, about graphics.

    But to answer your question directly: most lightmapping solutions I know of (either in-engine, or software plugins I don't know much about; Maya 2008 has one, I think?) handle lightmaps the same way: create a second UV channel for all geo and give all geo unique UVs in that UV space, then render the lightmap. Multitexturing is required; I can't think of any other way to do it if you're not doing vertex-baked lighting.
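
    For what it's worth, that second-channel setup can be roughed out in script form. A minimal Maya Python sketch follows; the set name 'lightmapUVs' and the polyAutoProjection flags are just examples of how I'd expect it to look, not something from a shipping pipeline, so check them against the Maya docs:

        # Sketch: give each selected mesh a second UV set for lightmap baking.
        # The set name and the polyAutoProjection flags are illustrative only.
        import maya.cmds as cmds

        def add_lightmap_uvs(mesh, uv_set='lightmapUVs'):
            existing = cmds.polyUVSet(mesh, query=True, allUVSets=True) or []
            if uv_set not in existing:
                cmds.polyUVSet(mesh, create=True, uvSet=uv_set)

            # Make the new set current so the projection writes into it,
            # leaving the first (tiling/mirrored) channel untouched.
            cmds.polyUVSet(mesh, currentUVSet=True, uvSet=uv_set)

            # Quick non-overlapping unwrap; a hand unwrap or a proper packer
            # would replace this step in production.
            cmds.polyAutoProjection(mesh + '.f[*]', layoutMethod=0,
                                    scaleMode=1, percentageSpace=0.2)

        for mesh in cmds.ls(selection=True, long=True):
            add_lightmap_uvs(mesh)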

    If you want to do more research, your best bet is to look around at other engines that have good lightmapping solutions and look for papers (I'm sure forum members here can come through with papers they've found or read), but also look at engines that don't have great lightmapping for ideas, as well as at lightmapping plugins/baking apps.
  • Emil Mujanovic
    Thanks for the reply, Professor420. It confirmed a lot of the suspicions I had and it has definitely given me a huge starting point.
    I'm working closely with our lead programmer to get an optimal lighting solution, and he's more than confident he'll be able to get this to work in our engine. His concerns were more about the increased texture count and the high-resolution textures required for an accurate lightmap.

    I'll post updates as I get them.

    -caseyjones
  • Rob Galanakis
    Something that came to mind: suggest to him using a unique lightmap in each channel of a texture; it would save space if you are dealing with RGB or RGBA maps. Lightmapping's been around for a long while, so I'm sure you'll be able to overcome whatever limitations and get it working even on a Wii. Though my knowledge doesn't go back before this console generation, so I can't give any real specifics, but there are others here and elsewhere who could.
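
    To illustrate the channel idea with a throwaway numpy sketch (the array shapes and the 0-1 range are my own assumptions, nothing engine-specific): four greyscale lightmaps of the same size can ride in one RGBA texture, and each instance just reads back the channel it was assigned.

        # Sketch: pack four greyscale lightmaps into the R, G, B and A channels
        # of a single texture, and read one of them back out.
        import numpy as np

        def pack_lightmaps(lm_r, lm_g, lm_b, lm_a):
            # Each input is an (H, W) float array in [0, 1]; output is (H, W, 4) uint8.
            packed = np.stack([lm_r, lm_g, lm_b, lm_a], axis=-1)
            return (packed * 255).astype(np.uint8)

        def unpack_channel(packed, channel):
            # channel 0..3 selects which of the four greyscale maps to read.
            return packed[..., channel].astype(np.float32) / 255.0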
  • rooster
    Am I wrong, or would that mean you're stuck with greyscale lighting? If you want coloured effects, you'd need the full range of channels, right?
  • JKMakowka
    By the way... as this also applies to creating models which can be self-shadowed:
    Am I guessing right that having a second UV map results in a duplication of vertices and thus greatly increases the transform costs (as with splits in the UV map and so on, as explained in the "why poly optimisation isn't always king" sticky)?
    And would it thus be better in most cases to have the first UV map made in a way that would also support a lightmap (e.g. no overlapping/mirrored UV parts)?
  • Emil Mujanovic
    Thanks for the suggestion, Professor420. It was something we did think about, possibly storing some of the lightmap data in the alpha channel of our texture to conserve space. But as rooster pointed out, it would only give us a greyscale colour spectrum. We need a coloured lightmap, so we'll have to use the full RGB channels for the single lightmap.

    I did however have some success while messing around at work and was able to get a coloured lightmap.
    My procedure:
    1. I first built a flat plane and a cube that sat in the middle of it. I unwrapped them to accommodate a tiling texture.
    2. I created a second UV channel and made a new unwrap so I wouldn't have any overlapping UVs.
    3. I then threw in a single spotlight and applied colour to both the light (peach) and the shadow (dark purple).
    4. Switched to the Mental Ray renderer, set up my bake-to-texture settings to bake lighting only, and fiddled with some of the settings. Once that was done, I ran the bake and ended up with a coloured lightmap.
    5. Created a new Lambert material and assigned a tiling texture to the Colour channel and then assigned the lightmap texture to the ambient channel.
    6. Using the Relationship Editor (UV Linking > UV-Centric), I connected the first UV channel to the Colour channel texture from the Lambert material, and the second UV channel to the Ambient channel texture (this probably makes little sense without screenshots; see the scripted sketch after this list).
    7. In the viewport it looks like only the Colour channel texture is displaying. Once you render the scene, the Ambient channel acts as an additive colour overlay, giving me the exact result I was after.
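
    (For reference, step 6 could also be done in script rather than through the Relationship Editor. This is only a guess at how I'd wire it up with uvLink; the node names are made up and I haven't verified the exact syntax:)

        # Sketch of step 6: link each UV set to the texture that should use it.
        # 'pPlane1', 'tilingFile' and 'lightmapFile' are made-up node names.
        import maya.cmds as cmds

        mesh = 'pPlane1'

        # First UV set drives the tiling texture in the Colour channel...
        cmds.uvLink(make=True, uvSet='%s.uvSet[0].uvSetName' % mesh, texture='tilingFile')
        # ...second UV set drives the baked lightmap in the Ambient channel.
        cmds.uvLink(make=True, uvSet='%s.uvSet[1].uvSetName' % mesh, texture='lightmapFile')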

    My next step is to get this to display realtime in the viewport, so I can get more of an idea of how it all will look. If anyone has any suggestions on how to accomplish this, please let me know.
    Like I mentioned in the earlier post, I'm using Maya 8.0. Most of our experienced Maya guys had already left for the day before I had a chance to ask them. If I get a solution, I'll be sure to post it here and will eventually write up a tute with my findings.

    -caseyjones

    EDIT:
    JKMakowka: I don't think it doubles up on the vert count, but having a second UV channel does add to the processing, so there will be a hit on performance.
    Though, by transforms, I'm assuming you mean animated mesh? Well, the lightmaps will only be used on the environment mesh; we have another lighting solution in place for the characters. Plus, the characters will have an ambient occlusion pass baked into the texture itself and will then be dynamically tinted based on how the environment mesh is lit.
    I didn't read your post because I spent too long writing up mine :P
  • Rob Galanakis
    Hmmm, maybe I'm being misunderstood. I should also mention I don't know about the FFP, only shaders/the programmable pipeline, so while it should be the same, I can't make any guarantees. In your first UV channel, you have your normal texture setup: mirroring, tiling, etc. On a second channel, you would unwrap all objects in the scene to unique UVs. The objects would be very small on this second UV set, since there are many of them and everything is unique.

    You pass two texture coordinates into the vertex shader. In response to JKM: no, this doesn't double the vert count; it is only a bit (a float2) of extra vertex data to pass in per vertex (though you will have to split more verts). Since this is for static geo, though, it won't transform, so the vertex shaders should be cheap, and with unique hand-made second UV channels the meshes shouldn't get broken up like an auto-generated level lightmap would break them up. So the overhead is small.

    You use the first vertex coords to look up the regular diffuse texture. You use the second to look up the lightmap texture. You multiply them together.
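
    Written out as plain Python rather than shader code (nearest-neighbour lookups just to keep it short; the texture layout here is an assumption for the example), the combine step is nothing more than this:

        # Sketch: combine diffuse (looked up with UV set 0, tiling) and
        # lightmap (looked up with UV set 1, unique) by multiplying them.
        import numpy as np

        def sample(texture, uv):
            # texture: (H, W, 3) float array; uv wraps so tiling coords work.
            h, w = texture.shape[:2]
            x = int(uv[0] * w) % w
            y = int(uv[1] * h) % h
            return texture[y, x]

        def shade(diffuse_tex, lightmap_tex, uv0, uv1):
            return sample(diffuse_tex, uv0) * sample(lightmap_tex, uv1)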

    If you are worried about memory, then there's no need to have coloured lightmaps; the only reason for them is radiosity solutions or static coloured lights. They should probably be black and white for your purposes.

    (btw, Casey, they are multiplied, not added, together)
  • JKMakowka
    Well, I was talking about animated meshes projecting shadows onto themselves. But thanks for the info so far!
  • CrazyButcher
    The workflow of making a "manual" lightmap UV is fine. The "all code" solution Prof means basically unwraps the geometry for you and tries to create huge UV charts for the lightmap. As we all know, manual unwrapping can't be beaten, so this is better if you have more organic/generic models. The automatic lightmappers mostly came from the days when level geometry was made of box brushes, which of course are dead easy to unwrap...

    What can be done in fixed function as well is simply offsetting/scaling the lightmap texcoords via the texture matrix. luxinia's shader system was mostly built for fixed-function hardware similar to the Wii. For the editor I implemented a simple baker. We went for greyscale and encoded different times of day into RGBA...
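
    As a sketch of what the texture-matrix trick boils down to (the 3x3 layout and the numbers are illustrative, not luxinia's actual code): the per-instance placement is just a small scale/offset matrix applied to the shared lightmap UVs.

        # Sketch: per-instance scale/offset of lightmap UVs as a 3x3 texture matrix,
        # the same transform fixed-function hardware exposes for texcoords.
        import numpy as np

        def lightmap_texture_matrix(scale_u, scale_v, offset_u, offset_v):
            return np.array([[scale_u, 0.0,     offset_u],
                             [0.0,     scale_v, offset_v],
                             [0.0,     0.0,     1.0]])

        def transform_uv(matrix, uv):
            u, v, _ = matrix @ np.array([uv[0], uv[1], 1.0])
            return (u, v)

        # Example: this instance's chart covers a quarter of the shared lightmap,
        # starting at (0.5, 0.0) in the atlas.
        m = lightmap_texture_matrix(0.5, 0.5, 0.5, 0.0)
        print(transform_uv(m, (0.25, 0.75)))  # -> (0.625, 0.375)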

    [image: editor-baker_sm.png]

    Now, while at that time it was just a flat projection from the top for the UV coords, a manual UV set for the lightmaps would be no different code-wise.

    Basically, you manually unwrap each object uniquely. Each instance of that object will get a part within a bigger lightmap (for efficiency it's better to let objects share lightmaps). That "parting" can be done in code, and the baking should also be done in code if possible. If you want to use Maya as the primary lightmap baker for the full level, you should probably get someone to do some MEL scripting work,
    so that you only need to "press bake", multiple lightmaps are generated for every instanced geometry, and later all the lightmaps of the scene are stored in an atlas, along with the transforms for the lightmap UVs.
    This part should remain in the coders' hands, as Prof said. It just saves so much time when that process is automated...
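
    To make the "parting" step concrete, here is a minimal sketch of a naive shelf packer (chart sizes, atlas size and the returned scale/offset convention are all assumptions; a real packer would be smarter about wasted space):

        # Sketch: naive shelf packing of per-instance lightmap charts into one atlas.
        # Returns, for each chart, the scale/offset that remaps its 0..1 UVs into the atlas.
        def pack_charts(chart_sizes, atlas_size):
            # chart_sizes: list of (w, h) in texels; atlas_size: (W, H) in texels.
            placements = []
            x = y = shelf_height = 0
            for w, h in chart_sizes:
                if x + w > atlas_size[0]:        # row is full, start a new shelf
                    x, y = 0, y + shelf_height
                    shelf_height = 0
                if y + h > atlas_size[1]:
                    raise ValueError('atlas too small for these charts')
                placements.append({
                    'scale':  (w / atlas_size[0], h / atlas_size[1]),
                    'offset': (x / atlas_size[0], y / atlas_size[1]),
                })
                x += w
                shelf_height = max(shelf_height, h)
            return placements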
  • Eric Chadwick
  • CrazyButcher
    http://www.illuminatelabs.com/press/newsarchive/practical-precomputed-gi-presentation/view
    Another paper on directional lightmaps (which are used in the latest games); it's technically too heavy for the PSP, but the workflow is the same.

    also some stuff on lightmaps at
    http://www.blitzcode.net/3d_1.shtml
    http://www.blitzcode.net/3d_2.shtml
  • jogshy
    Well, this is how I did it in an old engine some time ago:

    - Used lightmaps on quad polygons (for walls). The walls were planar-UV-mapped automatically (and automatically packed) by the game editor, not by the artist. Via the texture matrix you could mirror, tile or rotate the wall's mapping.

    - Static meshes. The game editor calculated vertex colors automatically based on the scene's lights. This is very fast, but it does not allow accurate shadows to be cast over them and requires enough tessellation to look OK.
    If you need better results, just tell the artist to create a unique, non-overlapping UV channel to render the lightmap into (so a lightmapped object will contain the base UVs on channel 0 and the lightmap UVs on channel 1).

    - Dynamic meshes used per-vertex dynamic lighting.

    If the HW you use does not allow multitexturing, you have two options:

    1. Do multipass with blending enabled... but that will double the rendering time, because the vertices and pixels need to be processed twice.

    2. Create a texture atlas with the base*lightmap color... but that will require more VRAM to store the baked textures.
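
    To put rough numbers on that trade-off (the sizes and counts here are made up purely for illustration):

        # Sketch: why option 2 eats VRAM. One shared tiling diffuse plus one lightmap
        # versus a unique baked base*lightmap texture per surface. Numbers are made up.
        def texture_bytes(width, height, bytes_per_texel=4):
            return width * height * bytes_per_texel

        # Option 1 style: a shared 512x512 diffuse + a 1024x1024 lightmap, two passes.
        shared = texture_bytes(512, 512) + texture_bytes(1024, 1024)

        # Option 2 style: 20 surfaces, each with its own baked 512x512 texture,
        # so texels can no longer be shared between instances.
        baked_unique = 20 * texture_bytes(512, 512)

        print(shared // 1024, 'KB vs', baked_unique // 1024, 'KB')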
  • CrazyButcher
    The Wii allows something like 8 or 16 textures in a single pass (similar to the GC), so it's more a question of someone adding support for it to the engine.

    I think if there is no HW support for multitexturing, one should try to go the vertex-lighting route; a few more vertices will not be as bad as a full second pass...
  • Emil Mujanovic
    Awesome links, EricChadwick and CrazyButcher. Thanks a bunch for those. They've definitely given me a push in the right direction and opened up a whole bunch of new avenues to explore.
    We are going to need to develop some automated process for lightmap packing because at the moment, all the tests I've done have been within Maya and the auto-unwrap feature is seriously balls. It leaves so many gaps that approximately 1/4 of the sheet is left unused.

    -caseyjones
  • Joao Sapiro
    Thanks for the links, CrazyButcher! Really interesting.
  • kabab
    [ QUOTE ]
    My next step is to get this to display realtime in the viewport, so I can get more of an idea of how it all will look. If anyone has any suggestions on how to accomplish this, please let me know.
    Like I mentioned in the earlier post, I'm using Maya 8.0. Most of our experienced Maya guys had already left for the day before I had a chance to ask them. If I get a solution, I'll be sure to post it here and will eventually write up a tute with my findings.

    [/ QUOTE ]
    To see it realtime in the Maya viewport is pretty simple: in your shader there should be a tab called "Display" or something similar. Open that up and it lets you pick which channels to show in the viewport.

    Just pick combined.