Hey there!
I was wondering how UDK or other cool engines manage static vertex lighting + normal maps.
I've never really gotten my hands into those beasts, for lack of time.
So I get the whole directional lightmap (or radiosity normal map) thingy, used to make normal maps work on lightmapped objects.
But what if they are lit with vertex colors and not maps? Same process: light directions and colors stored in vertex data? Actually, in our engine from 1980, we apply a lightmap to every single little normal-mapped object, which really isn't optimized. So I'd like to use vertex colors and still use normal maps.
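Roughly, here is how I understand the bake step, sketched in Python (the three basis directions are the standard Half-Life 2 ones; the function name and layout are just mine, not any engine's actual code):

```python
import math

# Three fixed tangent-space basis directions (orthonormal, each tilted
# toward the surface normal: z = 1/sqrt(3) for all of them).
BASIS = [
    ( math.sqrt(2.0 / 3.0),  0.0,                  1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0),  1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bake_vertex_rnm(lights):
    """lights: list of (tangent_space_dir, rgb_color) reaching one vertex.
    Returns three RGB colors, one per basis direction - exactly what a
    directional lightmap stores per texel, just stored per vertex here."""
    baked = [[0.0, 0.0, 0.0] for _ in range(3)]
    for light_dir, color in lights:
        for i, basis_dir in enumerate(BASIS):
            w = max(0.0, dot(basis_dir, light_dir))  # project light onto basis i
            for c in range(3):
                baked[i][c] += w * color[c]
    return baked
```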
Thanks for any input.
Replies
I'd like to know how it is handled in UDK: how/where the lighting information is stored.
It's probably transparent for UDK users (and that's good for them), but I'd like to mimic the process within 3ds Max and a custom shader. My idea is to store the light data in vertex channels instead of in maps. This will probably be less precise, but it saves a second UV set and a directional lightmap, which would be welcome on small props.
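The shader side would then be something like this, continuing the Python sketch above (pseudo-code standing in for HLSL; none of this is UDK's actual code):

```python
def shade_rnm(baked, normal_ts):
    """baked: three RGB colors from bake_vertex_rnm, interpolated across
    the triangle like any vertex color; normal_ts: tangent-space normal
    fetched from the normal map. Returns the diffuse lighting."""
    out = [0.0, 0.0, 0.0]
    for i, basis_dir in enumerate(BASIS):
        w = max(0.0, dot(basis_dir, normal_ts))  # how much the bumped normal faces basis i
        for c in range(3):
            out[c] += w * baked[i][c]
    return out

# One white light from above, one red light from the side:
baked = bake_vertex_rnm([((0.0, 0.0, 1.0), (1.0, 1.0, 1.0)),
                         ((1.0, 0.0, 0.0), (1.0, 0.0, 0.0))])
print(shade_rnm(baked, (0.0, 0.0, 1.0)))      # unperturbed normal: near white
print(shade_rnm(baked, (0.707, 0.0, 0.707)))  # normal bent toward the red light: noticeably redder
```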
" Maybe also with lightmapping though you could move the UV scope around with uniforms?)"
Sorry, I don't get what you mean.
If you can explain how you think storing it in a lightmap versus storing it on a vertex is different, I can try to explain more/better.
In fact, I was guessing that for vertex-lit objects, UDK still uses radiosity normal maps, but instead of storing the data in maps, it stores it in vertex data.
But first, I'm not really sure how RNMs are set up in UDK for better performance.
I think they pack the 3 directional intensities into the 3 channels of one map. Then there's a shadow map to prevent objects from having specularity in dark areas, but I'm not sure how the color of the lights is stored.
Storing everything in vertex data in 3ds Max might be tricky, but I guess it's doable. I was just wondering if it's the way to go. It would require at least 2 paintable vertex channels (vertex color + vertex illum?).
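If that guess is right, two RGB vertex channels would be enough: the three per-basis intensities in one channel, a shared light color in the other. A sketch of that packing (just my guess at the scheme, not how UDK actually does it; the helper names are mine):

```python
def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def encode_two_channels(baked):
    """baked: three RGB colors from the bake -> (shared_color, intensities).
    shared_color would go in vertex color; the three per-basis
    intensities in the R, G, B of vertex illum."""
    lums = [luminance(c) for c in baked]
    total = [sum(c[i] for c in baked) for i in range(3)]
    total_lum = max(sum(lums), 1e-6)
    shared_color = [t / total_lum for t in total]  # one averaged chromaticity
    return shared_color, lums

def decode_two_channels(shared_color, lums):
    """Rebuild three RGB colors in the shader. Intensities survive intact,
    but every basis direction now has the SAME hue, just scaled."""
    return [[l * c for c in shared_color] for l in lums]
```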
Well, maybe I'm wrong, but I think this would be useful for people using Unity for instance, or people who want to use a specific render engine (V-Ray), and AFAIK I haven't seen any tutorial on per-vertex RNM. Though since I've started looking at RNM, I've seen it become a bit more popular on Unity, but only with maps.
BTW, nice tutorial on calculating RNM with V-Ray here:
http://on-mirrors-edge.com/forums/viewtopic.php?id=4675
Better than mine on the wiki.
For Mirror's Edge, each surface had 3 × RGB lightmaps (one for each incoming light direction, each storing both color and intensity).
With a pixel ratio of 1 texel/cm it soon becomes clear that this used a fair amount of memory (a 10 × 10 m wall is 1000 × 1000 texels per map, times three maps), but since the diffuse textures in ME could in many cases be monochrome, this was not a big problem for us.
However, Epic made a change to optimize their RNM setup in UDK so that only two maps were used: color was stored in one map and intensity in the other, severing the connection between the two, which made everything look very flat. The difference was probably not that visible in a game like Gears, but for Mirror's Edge it was devastating. Luckily we could revert it.
Separating color and light intensity is something currently done in compositing, though. Not sure why it would be different "in theory". I shall run some tests.
3 × RGB maps are easier to get straight out of Max, but for my per-vertex thingy, I'm afraid I can't use 3 vertex channels that support RGB.
If you separate them (2 maps per surface), you will get the intensity of the incoming light displayed correctly, but the color will be flat.
Having this done right is much of the "secret" behind the lighting in Mirror's Edge.
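To make that concrete, a tiny numeric check, reusing the Python sketches from earlier in the thread (an illustration of the principle only, not our actual pipeline): a red light from one side, a blue light from the other.

```python
baked = bake_vertex_rnm([(( 0.8, 0.0, 0.6), (1.0, 0.1, 0.1)),   # reddish light
                         ((-0.8, 0.0, 0.6), (0.1, 0.1, 1.0))])  # bluish light
flat = decode_two_channels(*encode_two_channels(baked))
for i in range(3):
    print("basis", i,
          "full:", [round(v, 2) for v in baked[i]],
          "separated:", [round(v, 2) for v in flat[i]])
# The full 3 x RGB bake keeps red on one basis direction and blue on the
# others; the separated version has the same hue on all three, only
# scaled brighter or darker - the flatness described above.
```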
Well, OK, I could still use one of these two methods depending on the project and the optimization required.
A good example of how RNMs separate colors depending on the angle of indirect light in Mirror's Edge:
The first image shows the lightmap only; the second shows the same scene in the game.
Look at the ventilation thingy.
So everything was baked down into maps? Props etc.? Nothing on vertices?