I've been put in charge of doing some R&D into lighting solutions for our current project. Most of the lighting we've done in the past has been handled by vertex lighting, and it has served us well, achieving the results we wanted.
With the early versions of our current project, we noticed that a lot of our environments lacked crisp shadows, which made the geometry look like it was floating.
After a few tests, the only way we could get crisp shadows was by subdividing our geometry (going from 1,000 tris to 20,000 tris for a ground plane).
My question to you is: what is the common practice when using lightmaps?
We are developing this for the Wii, and the engine we're using doesn't support multitexturing (applying two or more textures to a single piece of mesh in a single pass: spec, bump, alpha, normal, etc.), but we are looking at implementing it so we can explore the lightmap avenue.
Now... Does the mesh need to have a unique unwrap for this to work properly (no overlapping UVs)?
Do the UVs have to be contained within the 0 to 1 UV co-ordinates?
Any information you can give me on this would be great. I've done a little research already and found out the basics of how it works. Figured I'd ask here as well for those of you that have worked with this medium before.
Thanks.
-caseyjones
Replies
My suggestion is: find a programmer who has done this before. If there are none at your studio, then ask around, as this is a common technique. There just isn't much for an artist to do on this aspect, since it should be independent of the content pipeline for the most part. The programmer will be doing most of the work, but it is always helpful to have an artist watching, since this is, after all, about graphics.
But to answer your question directly: most lightmapping solutions I know of (whether in-engine or via software plugins; Maya 2008 has one, I think) handle lightmaps the same way: create a second UV channel for all geo, give all geo unique UVs in that UV space, then render the lightmap. Multitexturing is required; I can't think of any other way to do it if you're not doing vertex-baked lighting.
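To picture it engine-side, here is a minimal sketch in C of a vertex carrying both UV sets (field names are made up for illustration; the Wii's GX API declares vertex attributes its own way):

    /* One vertex carrying two UV sets: uv0 tiles the diffuse texture,
     * uv1 addresses a unique, non-overlapping region of the lightmap.
     * (Illustrative layout only, not any particular engine's format.) */
    typedef struct {
        float position[3]; /* object-space position */
        float uv0[2];      /* diffuse UVs: may tile, overlap or mirror */
        float uv1[2];      /* lightmap UVs: unique, kept inside 0..1 */
    } LightmappedVertex;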
If you want to do more research, the best bet is to look around at other engines that have good lightmapping solutions and look for papers (I'm sure forum members here can come through with papers they've found or read), but also look at engines that don't have great lightmapping, for ideas, and at lightmapping plugins/baking apps.
I'm working closely with our lead programmer to get an optimal lighting solution, and he's more than confident he'll be able to get this working in our engine. His concerns were more to do with the increased texture counts and the high-resolution textures required for an accurate lightmap.
I'll post updates as I get them.
-caseyjones
Am I guessing right that having a second UV map results in a duplication of vertices, and thus greatly increases the transform costs (as with splits in the UV map and so on, as explained in the "why poly optimisation isn't always king" sticky)?
And would it thus be better, in most cases, to build the first UV map in a way that would also support a lightmap (e.g. no overlapping/mirrored UV parts)?
I did however have some success while messing around at work and was able to get a coloured lightmap.
My procedure:
1. I first built a flat plane and a cube that sat in the middle. I unwrapped them to accommodate a tiling texture.
2. I created a second UV channel and made a new unwrap so I wouldn't have any overlapping UVs.
3. I then threw in a single spotlight and applied colour to both the light (peach) and the shadow (dark purple).
4. Switched to the Mental Ray renderer and set up my bake-to-texture settings to bake lighting only, and fiddled with some of the settings. Once that was done, I ran the bake and ended up with a coloured lightmap.
5. Created a new Lambert material, assigned a tiling texture to the Colour channel, and then assigned the lightmap texture to the Ambient channel.
6. Using the relationship editor (UV-Linking > UV Centric), I connected the first UV channel to the Colour channel from the Lambert material, and the second UV channel to the Ambient channel (this probably made no sense - needs screenshots).
7. In the viewport it looks like only the Colour channel texture is displaying. Once you render the scene, the Ambient channel acts as an additive colour overlay, giving me the exact result I was after.
My next step is to get this to display realtime in the viewport, so I can get more of an idea of how it all will look. If anyone has any suggestions on how to accomplish this, please let me know.
Like I mentioned in the earlier post, I'm using Maya 8.0. Most of our experienced Maya guys had already left for the day before I had a chance to ask them. If I get a solution, I'll be sure to post it here and will eventually write up a tute with my findings.
-caseyjones
EDIT:
JKMakowka: I don't think it doubles up on the vert count, but having a second UV channel does add to the processing, so there will be a hit on performance.
Though, by transforms, I'm assuming you mean animated mesh? The lightmaps will only be used on the environment mesh; we have another lighting solution in place for the characters. Plus, the characters will have an ambient occlusion pass baked into the texture itself and will then be dynamically tinted based on how the environment mesh is lit.
I didn't read your post because I spent too long writing up mine :P
You pass two texture coordinates into the vertex shader. In response to JKM: no, this doesn't double the vert count; it is only a bit of extra vertex data (a float2) to pass in per vertex (though you will have to split more verts). Since this is for static geo, it won't transform, so the vertex shaders should be cheap, and with hand-made unique second UV channels the mesh shouldn't be broken up like an auto-generated level lightmap would be. So the overhead is small.
You use the first set of coords to look up the regular diffuse texture and the second to look up the lightmap texture, and you multiply them together.
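As a rough illustration of that multiply, here is how fixed-function multitexturing expresses it in desktop OpenGL 1.3 (the Wii's GX TEV stages do the equivalent with a different API, so treat this purely as an analogue):

    /* Sketch: framebuffer colour = diffuse * lightmap via two
     * fixed-function texture stages, each fed by its own UV set.
     * Assumes a GL 1.3+ header so glActiveTexture is available. */
    #include <GL/gl.h>

    void bind_lightmapped(GLuint diffuseTex, GLuint lightmapTex)
    {
        /* Stage 0: diffuse texture, sampled with UV set 0 */
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, diffuseTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        /* Stage 1: lightmap, sampled with UV set 1; MODULATE multiplies
         * the stage-0 result by the lightmap texel. */
        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, lightmapTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }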
If you are worried about memory, then there's no need to have coloured lightmaps; the only reason for them is radiosity solutions or static coloured lights. They should probably be black and white for your purposes.
(By the way, Casey, they are multiplied together, not added.)
What can also be done in fixed function: offsetting/scaling the lightmap texcoords via the texture matrix. luxinia's shader system was mostly built for fixed-function hardware similar to the Wii. For the editor I implemented a simple baker. We went for greyscale and encoded different times of day into the RGBA channels...
Back then it was just a flat projection from the top for the UV coords, but a manual UV set for the lightmaps would be no different code-wise.
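A minimal sketch of that texture-matrix trick, again using desktop fixed-function OpenGL as a stand-in for GX (the function name and parameters are made up):

    /* Sketch: remap an instance's canonical 0..1 lightmap UVs into its
     * sub-rectangle of a shared atlas, without touching vertex data. */
    #include <GL/gl.h>

    void set_lightmap_region(float offsetU, float offsetV,
                             float scaleU,  float scaleV)
    {
        glActiveTexture(GL_TEXTURE1);  /* the lightmap stage */
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(offsetU, offsetV, 0.0f); /* uv' = offset + scale*uv */
        glScalef(scaleU, scaleV, 1.0f);
        glMatrixMode(GL_MODELVIEW);
    }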
Basically, you manually unwrap each object uniquely. Each instance of that object will get a part within a bigger lightmap (for efficiency it's better to let objects share lightmaps). That "parting" can be done in code, and the baking should also be done in code if possible. If you want to use Maya as the primary lightmap baker for the full level, you should probably get someone to do some MEL scripting work,
so that you only need to "press bake": lightmaps are generated for every instanced geometry, and all the lightmaps in the scene are then stored in an atlas, along with the transforms for the lightmap UVs.
This part should remain in the coders' hands, as prof said; it just saves so much time when the process is automated...
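To make the "parting" concrete, here is a naive shelf-packer sketch in C (all names are hypothetical, and real tools use smarter packing; see the "Packing lightmaps" link below). It produces exactly the per-instance offset/scale transforms mentioned above:

    #include <stddef.h>

    /* Offset/scale that remaps an instance's 0..1 lightmap UVs into
     * its rectangle of the shared atlas. */
    typedef struct { float offsetU, offsetV, scaleU, scaleV; } UvTransform;

    /* w/h are rectangle sizes in texels, atlasSize is the (square)
     * atlas dimension. Rects are placed left to right on "shelves";
     * returns how many rects fit. */
    size_t shelf_pack(const int *w, const int *h, size_t count,
                      int atlasSize, UvTransform *out)
    {
        int penX = 0, penY = 0, shelfH = 0;
        size_t placed = 0;
        for (size_t i = 0; i < count; ++i) {
            if (penX + w[i] > atlasSize) { /* start a new shelf */
                penX = 0;
                penY += shelfH;
                shelfH = 0;
            }
            if (penY + h[i] > atlasSize)   /* atlas is full */
                break;
            out[i].offsetU = (float)penX / atlasSize;
            out[i].offsetV = (float)penY / atlasSize;
            out[i].scaleU  = (float)w[i] / atlasSize;
            out[i].scaleV  = (float)h[i] / atlasSize;
            penX += w[i];
            if (h[i] > shelfH) shelfH = h[i];
            ++placed;
        }
        return placed;
    }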
Deferred rendering in Killzone2 (8MB PDF)
Shading in Valve's Source Engine (14MB PDF)
More Valve papers here
Packing lightmaps
giles - a radiosity lightmapper
Lighting in Onimusha 3
another paper on directional lightmaps (which are used in the latest games); technically too heavy for the PSP, but the workflow is the same
also some stuff on lightmaps at
http://www.blitzcode.net/3d_1.shtml
http://www.blitzcode.net/3d_2.shtml
- Used lightmaps on quad polygons (for walls). The walls were planar-UV-mapped automatically (and automatically packed) by the game editor, not by the artist. Via the texture matrix you could mirror, tile or rotate a wall's mapping.
- Static meshes. The game editor calculated vertex colors automatically based on the scene's lights (see the sketch after this list). This is very fast, but it does not allow accurate shadows to be cast over them and requires enough tessellation to look OK.
If you need better results, just tell the artist to create a unique, non-overlapping UV channel and render the lightmap there (so a lightmapped object will contain the base UVs in channel 0 and the lightmap UVs in channel 1).
- Dynamic meshes used per-vertex dynamic lighting.
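For reference, a minimal C sketch (illustrative names only) of the per-vertex Lambert term that such an editor bake, or the dynamic per-vertex path, evaluates for each light:

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static Vec3 normalize3(Vec3 v)
    {
        float len = sqrtf(dot3(v, v));
        Vec3 r = { v.x / len, v.y / len, v.z / len };
        return r;
    }

    /* One point light's diffuse contribution at a vertex; the editor
     * would sum this over all lights, scale by light colour/intensity,
     * and store the result in the vertex colour. */
    float lambert_term(Vec3 vertexPos, Vec3 vertexNormal, Vec3 lightPos)
    {
        Vec3 toLight = { lightPos.x - vertexPos.x,
                         lightPos.y - vertexPos.y,
                         lightPos.z - vertexPos.z };
        float ndotl = dot3(vertexNormal, normalize3(toLight));
        return ndotl > 0.0f ? ndotl : 0.0f; /* clamp backfacing to zero */
    }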
If the HW you use does not support multitexture, you have two options:
1. Do multipass with blending enabled (a sketch follows below)... but that will double the rendering time, because the vertices and pixels need to be processed twice.
2. Create a texture atlas with the baked base*lightmap color... but that will require more VRAM to store the baked textures.
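A sketch of option 1 in desktop OpenGL (draw_mesh_with_uv_set is a hypothetical engine call; the point is the DST_COLOR/ZERO blend, which multiplies what is already in the framebuffer by the lightmap):

    #include <GL/gl.h>

    extern void draw_mesh_with_uv_set(int uvSet); /* hypothetical */

    void draw_lightmapped_multipass(GLuint diffuseTex, GLuint lightmapTex)
    {
        /* Pass 1: plain diffuse */
        glBindTexture(GL_TEXTURE_2D, diffuseTex);
        draw_mesh_with_uv_set(0);

        /* Pass 2: redraw the same geometry; dst = dst * src multiplies
         * the framebuffer by the lightmap texel. GL_EQUAL makes sure we
         * only re-touch the pixels laid down in pass 1. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_COLOR, GL_ZERO);
        glDepthFunc(GL_EQUAL);
        glBindTexture(GL_TEXTURE_2D, lightmapTex);
        draw_mesh_with_uv_set(1);
        glDepthFunc(GL_LESS);
        glDisable(GL_BLEND);
    }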
I think if there is no HW support for multitexture, one should try to go the vertex-lighting route; a few more vertices will not be as bad as a full second pass...
We are going to need to develop some automated process for lightmap packing, because at the moment all the tests I've done have been within Maya, and the auto-unwrap feature is seriously balls. It leaves so many gaps that approximately a quarter of the sheet is left unused.
-caseyjones
[ QUOTE ]
My next step is to get this to display realtime in the viewport, so I can get more of an idea of how it all will look. If anyone has any suggestions on how to accomplish this, please let me know.
Like I mentioned in the earlier post, I'm using Maya 8.0. Most of our experienced Maya guys had already left for the day before I had a chance to ask them. If I get a solution, I'll be sure to post it here and will eventually write up a tute with my findings.
[/ QUOTE ]
To see it realtime in the Maya viewport is pretty simple: in your shader there should be a tab called "Display" or something similar; open that up and it lets you pick which channels to show in the viewport.
Just pick Combined.