Hi! I was testing how old-school GI faked with point lights could look in UDK, and I think it looks pretty good, and it doesn't seem too expensive even in a bigger scene. I was thinking it could be used as a real-time GI solution (which I really miss, and I think not just me, because who loves pixelated lightmaps with seams that are sometimes hard to fix?). Then I thought about how it could work, and I came up with this little write-up:
New actor type: GLGI actor ("grid light global illumination")
If you place this in the scene, it places invisible point lights on the grid points (the
density should be an option you can set). Basically a 3D light "grid".
A new scripted point light is needed for this, and the GLGI actor would place these. The light should know to change its color based
on the diffuse color of nearby objects. Let's say you have a long object with a horizontal gradient
texture on it. If you place a few of these new point lights along the mesh, they should be able
to recognize the nearest color. So if the mesh is red on one side, the nearest light should be red, and if the next color on the mesh is blue, then the next light should be blue, etc. Because these would be fill lights, their intensity should change based on the nearby main lights (plus checking whether the mesh's material has emissive, and how large the emissive multiplier is). If they are close to a main light, their intensity should increase, and as they get further from a main light, their intensity should fall toward zero, until they reach the end of the main light's attenuation radius.
If some fill lights aren't inside any main light's attenuation radius, they should be disabled, and re-enabled only when they get inside a main light actor's attenuation, unless there is an emissive area in a material close to them. There can be many main lights in a scene, not just one! The main lights are the ordinary movable point light actors that ship with UDK.
The fill lights should NOT cast shadows, just light!
The GLGI would be for non-moving meshes. About the moving meshes:
Because they also need to cast "GI" or moving illumination, and we don't want the fill lights to look "locked"/static, there should be another new actor type that you can attach to your moving actors/skeletal meshes. Its light density should also be adjustable. The previously written rules would apply to this actor too (checking the main lights' distance, checking the emissives, and the diffuse colors of the actor it is attached to).
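To make the rules above concrete, here is a minimal sketch of the fill-light logic in plain Python, not UnrealScript. All names here (`MainLight`, `grid_points`, `fill_intensity`) are made up for illustration; the falloff is assumed to be simple linear attenuation, which is just one possible choice:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class MainLight:          # stand-in for UDK's movable point light actor
    pos: tuple            # (x, y, z) world position
    radius: float         # attenuation radius
    brightness: float

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def grid_points(origin, size, spacing):
    """Place fill-light positions on a regular 3D grid (the GLGI volume).
    The spacing parameter is the user-adjustable density option."""
    n = int(size / spacing) + 1
    return [tuple(origin[i] + spacing * idx[i] for i in range(3))
            for idx in product(range(n), repeat=3)]

def fill_intensity(point, main_lights):
    """Fill-light intensity: full brightness next to a main light, fading
    linearly to zero at the edge of its attenuation radius. A point outside
    every radius gets 0, i.e. that fill light stays disabled."""
    best = 0.0
    for ml in main_lights:
        d = dist(point, ml.pos)
        if d < ml.radius:
            best = max(best, ml.brightness * (1.0 - d / ml.radius))
    return best

# Example: one main light, a 1000-unit GLGI volume with 250-unit spacing.
lights = [MainLight(pos=(0, 0, 0), radius=500.0, brightness=2.0)]
pts = grid_points(origin=(0, 0, 0), size=1000.0, spacing=250.0)
states = [(p, fill_intensity(p, lights)) for p in pts]
```

A real implementation would also tint each fill light by sampling the nearest surface's diffuse color and would add the emissive check, which this sketch leaves out.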
I really hope that somebody experienced in such things could help me with this (I absolutely can't code...). I've seen unpaid jobs/mod requests here and elsewhere, and they usually get what they asked for, so I thought maybe somebody would be kind enough to make this somehow, or at least help me make it!
Here is a picture of how it looks if I place the lights and set the colors by hand, by the way:
As you can see there is pretty nice color bleeding and cool ambient color on everything, but setting this up by hand is really slow, and it doesn't work if there are moving meshes or lights.
Any help would be really appreciated! I would actually be one of the happiest UDK users if this were made. Please write if you can help, or if you can't but support the idea, or if you have any idea how it could work better, or anything!
Thanks!
PS: Sorry, my English isn't perfect!
Replies
In UE4 this would be more doable, but it would still be heavy. Also, the color detection would be heavy, I believe, if you want it in real time.
The quality is not resolution dependent: you'll never get pixelated, really low-res lightmaps or lightmap seams. You don't have to make a second UV channel, and you don't have to be careful about how you place your UVs in the lightmap UV map to avoid seams. There is no long rendering time to compute lightmaps, no need to worry about texel density on the lightmap, and it doesn't need texture space for lightmaps (yes, it costs performance elsewhere). It works with moving things, it works with moving "area light" type objects, and it's real time.
What are you looking to use dynamic lighting for?
1. The result is pixelated if you are not using ultra-high resolution, and ultra-high lightmap resolution causes very long rendering times and takes a lot of texture space.
2. Mipmaps: visible transitions, and an even worse (more pixelated) look from further away. If you are not using LODs, this is very visible.
3. No edge padding. I really don't understand why this never got fixed, because if it were, we wouldn't get the seams.
4. Shading differences sometimes show up across different UV areas. Here is a picture of this (seams are also visible here, even though the lightmap resolution is 512; the mesh is dense, but because of the shape you can't place the verts properly on the grid). The walls use a 1k lightmap per wall and they are also pixelated.
I know fewer cuts would give a more continuous result, but it would introduce stretching and non-uniform texel density instead.
5. Baked lighting doesn't work when you have a lot of moving things/moving lights. My example would be a horror game where you use swinging/flickering lamps. Another example would be a Sims-type game where you can dynamically place objects and lights. A third would be a game where the environment can change (some destructible walls/objects, for example, or, like a Legacy of Kain game, where you can go to the underworld and the environment changes). These were just a few examples; it's easy to find more.
Another idea for dynamic GI would be screen-space-reflection-based color bleeding, but it would shift as you move the camera. With heavy blurring it wouldn't be very noticeable, but we can only have global reflections in UDK. If somebody could explain how screen-space (local) reflections work, and whether they could be implemented through a shader, that would be awesome too!
Anyway, my opinion is that it isn't a problem if it doesn't work well on consoles, because they can still use Lightmass, and it also isn't a problem if it doesn't work well on cheap/older PCs, as long as it works OK on good PCs. Over the years it will work better and better.
And there are other options to optimize it. The two hand-made scenes run smoothly on my calculator of a PC, so I see no reason not to try it out. It's worth at least a try, in my opinion.
And a really detailed level would mean far more draw calls, because you wouldn't make the entire level as just a few giant meshes. It would be split up into many separate meshes, which would quickly multiply the draw calls through the roof, unless you keep it all low-poly and simple and build one giant mesh out of large portions of the level. Still, in a forward renderer this is just not efficient.
To do so, change the following in your UDKLightmass.ini:
[DevOptions.StaticLighting]
bAllowLightmapCompression=False
It might not solve all your problems, but it also costs only a fraction of the performance.
http://www.geomerics.com/
Some things to note: GI/radiosity data does not need to be high-res; it's only supposed to be a wash of color. Enlighten's real-time solution uses a grid of light probes, sort of like what you're doing, but they still bake lightmaps for static geometry. Here's a link to one of their SIGGRAPH presentations:
http://www.geomerics.com/downloads/radiosity_architecture.pdf
Mirror's Edge used UE3; yes, it used a separate GI engine, but it wasn't dynamic, just really nice bounce light.
Also, if you are interested in examples, Laurent Harduin's portfolio includes some really good bounce-light simulation with hand-placed point lights.
It doesn't require 30 lights to make something look like it has really good bounce light if the drive is artistic. If it's about technology and you just want dynamic lights everywhere, I would look at engines other than UE3, like Hourences suggested.
Hourences - Isn't UE3 using deferred rendering in DX11 mode?
Arnage - Yeah, unfortunately it's not solving my main problem, which is that lightmaps aren't dynamic. But it's good to know, because I will surely need to use lightmaps in the future.
Harbinger - I know that the GI does not need to be high-res, but the direct lighting's shadows sometimes need to be. I saw this PDF earlier, by the way.
radianceforge - The problem is still that baked lighting limits the possibilities so much. Just think about a day-night cycle. I looked at Laurent's site, and those are some really nice lighting setups in my opinion. As another option, I was thinking about localized dynamic cubemap/planar-capture reflections, with a bilinear post-blur filter if possible. I'm currently testing this. Obviously not the nicest option, but better than nothing.
Ignore the green lines at the bottom; they only show up in the editor.
The real-time solution I'm thinking of is something like "Instant Radiosity".
http://www.liensberger.it/Web/Blog/wp-content/uploads/Instant_Radiosity_kl08.pdf
Basically it traces some paths from a primary light source and creates VPLs (Virtual Point Lights) where the paths reflect, after bouncing around a bit.
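That VPL-generation step could be sketched roughly like this in Python. This is only an illustration of the idea from the paper, not engine code; `scene_hit` is an assumed ray-cast callback, and the single-bounce energy split is deliberately simplistic:

```python
import random

def random_unit_vector():
    """Rejection-sample a uniformly distributed unit direction."""
    while True:
        v = tuple(random.uniform(-1, 1) for _ in range(3))
        n = sum(x * x for x in v) ** 0.5
        if 1e-6 < n <= 1.0:
            return tuple(x / n for x in v)

def random_hemisphere(normal):
    """A direction in the hemisphere around the surface normal."""
    v = random_unit_vector()
    dot = sum(a * b for a, b in zip(v, normal))
    return v if dot > 0 else tuple(-x for x in v)

def generate_vpls(light_pos, light_color, scene_hit, n_paths=64, bounces=1):
    """Tiny Instant Radiosity sketch: shoot random rays from the primary
    light; wherever a ray hits a surface, drop a Virtual Point Light
    tinted by that surface's diffuse color.
    scene_hit(origin, direction) is an assumed callback returning
    (hit_pos, hit_normal, diffuse_color) or None on a miss."""
    vpls = []
    for _ in range(n_paths):
        origin, color = light_pos, light_color
        direction = random_unit_vector()
        for _ in range(bounces):
            hit = scene_hit(origin, direction)
            if hit is None:
                break
            hit_pos, normal, diffuse = hit
            color = tuple(c * d for c, d in zip(color, diffuse))  # tint by surface
            vpls.append((hit_pos, tuple(c / n_paths for c in color)))  # split energy
            origin, direction = hit_pos, random_hemisphere(normal)
    return vpls
```

Each returned VPL would then be rendered as a small shadowless fill light, which maps nicely onto the original GLGI idea.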
Do your cubemaps have mipmaps? If so, you could probably use anywhere from 8 to 32 samples and blur one of the smaller mips, depending on its resolution. Ambient cubes don't need to be very big in the first place, so you could probably blur at around 32x32. Same thing for the floor!
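For reference, blurring a small mip like that is essentially just a box filter over the face. A minimal single-channel sketch in Python (in a shader this would be a handful of clamped texture taps, not nested loops):

```python
def box_blur(img, radius=1):
    """Simple box blur over a small 2D grayscale image (e.g. one 32x32
    cubemap face mip), with edge clamping. Plenty for an ambient cube
    that is only meant to be a soft wash of color."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = min(max(y + dy, 0), h - 1)  # clamp to the face edge
                    nx = min(max(x + dx, 0), w - 1)
                    total += img[ny][nx]
                    count += 1
            out[y][x] = total / count
    return out
```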