Ah yeah, I remember reading up on this stuff when we were looking into ambient lighting models for Brink early on in development - a nice volume grid of pre-baked lighting information is definitely a good way to go, since it means stuff moving through the world can have constant seamless correct ambient lighting, even including bounced light from nearby surfaces etc.
Thanks for the link.

This is related work that might be interesting (they mention it in the text, but I think their stuff is more optimized for real-world application):
http://www.mpi-inf.mpg.de/resources/ImperfectShadowMaps/
http://www.mpi-inf.mpg.de/~ritschel/SSDO/
One of the authors will be joining our university.
No problem MoP - nice links CrazyButcher, the indirect shadows look great. I'm surprised to see this stuff so well implemented so soon. I wonder how expensive a process it is to run in CryEngine 3 - it seems to be fine for them, but Crysis nearly killed most people's PCs.
Well, the principles of this stuff running in real time have been around for at least a year - all it really takes is one or two programmers dedicated to it for a few weeks, maybe a month or two at most (really wild guesses at timing here), and then it's implemented.
As CrazyButcher's links show, there have been papers on this since 2008, and as the video from Crytek says, they're running it on a pretty super-duper high-end PC (although hopefully NV280-level stuff will be common standard hardware in most people's PCs in a year or so - currently still limited to people who like to be close to the cutting edge of tech)
Good thing that Turtle in Maya has already supported that for a long time.
Turtle IS great, but it's entirely computed/baked offline - the thing with the CryEngine solution is that it's REAL TIME. You really have to see it in action to appreciate how fantastic it is (this isn't bias, it's really REALLY awesome!)
To win the middleware war, engine makers have to look ahead quite a bit... a lot is now "changing" with the programmability of GPUs. There's a nice talk by Epic's Tim Sweeney about the coming challenges and trends: http://graphics.cs.williams.edu/archive/SweeneyHPG2009/TimHPG2009.pdf
Hardware will become 20X faster, but:
* Game budgets will increase less than 2X.
Therefore...
* Developers must be willing to sacrifice performance in order to gain productivity.
* High-level programming beats low-level programming.
* Easier hardware beats faster hardware!
* We need great tools: compilers, engines, middleware libraries...
If it costs X (time, money, pain) to develop an efficient single-threaded algorithm, then…
* Multithreaded version costs 2X
* PlayStation 3 Cell version costs 5X
* Current “GPGPU” version costs 10X or more
Over 2X is uneconomical for most software companies!
Previous Generation:
* Lead-time for engine development was 3 years
* Unreal Engine 3: 2003: development started, 2006: first game shipped
Next Generation:
* Lead-time for engine development is 5 years
* Start in 2009, ship in 2014!
CrazyButcher, so if I understood that presentation, we might be making 4-million-poly displacement-mapped characters for a game based on a software rendering engine in 4 years' time or less?
Is the only purpose of realtime GI day/night cycles and weather cycles?
nope
- Realtime lighting is much faster and less time-consuming for the art team, as only one time/weather cycle needs to be set up for all outside envs (maybe more if you have very different locales, but still just variations). All map/level/env artists need to worry about are small local lights.
- Baked lighting often requires a lot of extra work setting up object sets, extra UVs etc. - even the most painless system still requires extra work to achieve perfect results. And that's not including render times for the baking, which can be incredibly painful at crunch time, when a simple change means the level needs to be rebaked.
- There is a huge memory overhead for baked-in lighting, especially when you're using directional lightmapping: 9 channels of pixels at, say, 1 pixel per 10cm very quickly adds up to a lot of texture space, which has to be loaded/streamed and held in memory. Especially a problem with streaming envs (rough numbers sketched after this list).
Realtime will conquer in the next gen, as it's cheaper to develop.
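To put a rough number on that memory point, here's a back-of-the-envelope sketch - the level size, texel density and bit depth are all illustrative assumptions of mine, not figures from anyone's engine:

```python
# Back-of-the-envelope directional-lightmap memory estimate.
# All inputs are assumptions for illustration only.

level_size_m = 200.0      # assumed 200m x 200m outdoor level
texels_per_m = 10.0       # 1 texel per 10cm, as mentioned above
channels = 9              # directional lightmapping: 3 RGB coefficient maps
bytes_per_channel = 1     # 8 bits per channel, uncompressed

texels = (level_size_m * texels_per_m) ** 2
raw_bytes = texels * channels * bytes_per_channel
print(f"{texels:,.0f} texels -> {raw_bytes / 2**20:.1f} MiB uncompressed")
# 4,000,000 texels -> 34.3 MiB, before mip levels; DXT compression
# shrinks it, but it all still has to be streamed and kept resident.
```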
Ged, yeah, that's the spirit/goal that is set, at least for the high-end stuff. Just like now, we will still have games around with older tech.
The other point being that there is (ideally) more diversity, with different approaches to rendering possible again. If you look back at the time before hardware rendering arrived, games had quite distinctive looks (e.g. check slide 6: http://www-e.uni-magdeburg.de/kubisch/Whats_cool_in_game_graphics_eng.pdf)
And it's about cutting costs: if you "just" need the hipoly, you save money - just like Shepeiro mentioned about the costs and resources saved with dynamic lighting solutions.
How the "can use hipoly asset directly" part will be achieved is still unclear, and that's the ultimate battleground between the top middleware makers. But I think we will see a further narrowing of the gap between game and movie assets & pipelines, as movies would benefit from the faster render times of near-real-time engines as well (just look at how many GPU accelerators are coming out now). Interesting times ahead for sure...
However, the real question IMO is whether animation/AI can catch up (uncanny valley...). We already have very good-looking stuff, I think, but "simple" things (Wii, Lego...) feel more complete than many "realism" games IMO.
Yeah, I did notice this in the past - for example, I used to play Syndicate and Magic Carpet and Aces Over Europe and Big Red Racing, and each game looked completely different! I always assumed that programming the engines that ran those games was very time-consuming, and that the advantage we now have is middleware and APIs full of easy-to-implement, powerful commands for rendering, lighting and shading, so we should be able to develop games far faster than before... but that paper demonstrates that the opposite has become true: we are making a rod for our own backs and limiting the possibilities for game development.
That said, I totally see how GI and real-time lighting in games would be a powerful ally, as would the ability to use the highpoly model directly. Not only do these things make games look incredible, they also cut costs because of the time saved baking etc. It's quite exciting.
Why use real time lighting instead of baked lighting? A couple reasons:
You can move stuff.
You can destroy stuff.
Dynamic objects & static objects light using the same tech, so they "fit" better.
Plus what SHEPEIRO said.
Lightmaps suck - I hope they go away! They are just something to use in the interim until we can do the exact same thing in real time. Shadows & GI are the 2 biggest problems right now.
Basically, more interaction. Many games these days use pre-baked maps (AO/dirt, heightmaps, ...); if more and more of that stuff can be ported to real-time solutions, you can add new interactive stuff that wasn't possible before.
I expect some great atmospheric lighting effects in upcoming games, with more dynamic shadows from objects that wave in the wind (see Crysis), or just smoother dynamic shadows instead of what we often see these days: blocky shadow maps.
http://www9.informatik.uni-erlangen.de/publications/publication/Pub.2005.tech.IMMD.IMMD9.reflec/
Yep, I see no reason why people should cling to pre-baked lightmaps for much longer - fully dynamic stuff is the future, and it's rapidly becoming a production reality.
I sent this link to our render programmer to read; he said the CryEngine stuff still requires a bake of the light volumes, so it's not as magical as I had thought. I think you have to do some other rigging too, so it's not as easy as just dropping your scene in and getting magical GI results. No lightmap UVs though, which is a plus. I'll be the first to ditch baked lightmaps as soon as the quality of the realtime maps gets close to baked, but I don't see that happening for quite some time still.
Read closely, and don't mix up their descriptions of related work with the decisions they made for their own stuff. There is no rigging involved, nor other representations - that was a specific goal of theirs, to make the technique game-friendly.
Yeah, and the part where the guy is just dragging the bright blue couch / red rug around the room while the bounce light updates in real time implies it's not pre-baked anyway.
I guess it's possible that they allow the real-time update in editor and then "bake" once the map is finalised to improve performance, but that wouldn't allow for any dynamic stuff in-game, and if it's real-time in the editor then I don't see why they wouldn't enable that in-game too.
I don't think they'd win any performance if they baked it. That would mean having several methods running at the same time - what would be the point of that? And it would mean a whole new logistics system to get the baked data, and to differentiate between unchangeable and changeable states... IMO it would make things more complex. If you pay the price for the dynamic system anyway, you may as well do everything with it.
But I see your point: as engine middleware, they could offer a static solution (which would still require setting up the static system for runtime, however...) in case a client wants to spend the frame time on other stuff and not use the dynamic system (doubtful, I think - you don't disable stuff when you pay for the full thing).
This thing here is a more "correct" real-time version (but slower) than their thing:
http://www-sop.inria.fr/reves/Basilic/2007/DSDD07/
I'm pretty sure they pre-bake/generate the light volumes, similar to how you have to pre-bake/generate spherical harmonic probes for ambient character/prop lighting (there's a rough sketch of the SH-probe idea after this post). Once the light volumes are generated for a scene, you can move shit around in real time and see the magic. I would imagine that when you change textures or geometry in the scene, you have to regenerate the light volumes, so you're essentially rebaking the scene each time a scene element changes. If that takes 3 seconds then it's a win, but it seems unlikely given how long it takes to generate SH probes for a medium-sized world.
While I was still at EA we were talking with these guys - I think their tech is similar:
http://www.geomerics.com/enlighten-media.htm
They pre-generate/bake the bounce info, and then you can move shit around the scene in real time and the bounce updates on the fly. I think the Battlefield guys were also evaluating this at the time for Bad Company 2. We found two major flaws that steered us away from the so-called real-time GI: one, you still have to pre-bake/generate it, and it's not instant by any means; and two, it actually costs substantially more memory than storing a DXT1 lightmap.
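For anyone who hasn't worked with the SH probes being discussed, here's a minimal toy sketch of the general idea - bake incoming light at a point into a handful of spherical-harmonic coefficients offline, then cheaply look up ambient light for any surface normal at runtime. The conventions and constants are textbook order-1 SH; none of this is any particular engine's code:

```python
import numpy as np

def sh_basis(d):
    # Order-1 SH basis at unit direction d = (x, y, z): Y00, Y1-1, Y10, Y11.
    return np.array([0.2820948,
                     0.4886025 * d[1],
                     0.4886025 * d[2],
                     0.4886025 * d[0]])

def bake_probe(samples):
    """Offline step: project sampled incoming radiance into SH.
    samples: list of (unit direction, rgb) pairs."""
    coeffs = np.zeros((4, 3))
    for d, rgb in samples:
        coeffs += np.outer(sh_basis(d), rgb)
    return coeffs * (4.0 * np.pi / len(samples))  # Monte Carlo weight

def eval_probe(coeffs, normal):
    """Runtime step: approximate irradiance for a surface normal."""
    # Convolving with the cosine lobe just scales each SH band.
    band = np.array([np.pi, 2.0943951, 2.0943951, 2.0943951])
    return (band * sh_basis(normal)) @ coeffs

# Bake a probe lit mostly from above with warm light:
rng = np.random.default_rng(0)
dirs = rng.normal(size=(256, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
samples = [(d, np.array([1.0, 0.8, 0.6]) * max(d[2], 0.0)) for d in dirs]
probe = bake_probe(samples)
print(eval_probe(probe, np.array([0.0, 0.0, 1.0])))   # bright facing up
print(eval_probe(probe, np.array([0.0, 0.0, -1.0])))  # dim facing down
```

The point being made above is that gathering the radiance samples for thousands of probes is the slow offline part; the runtime lookup is the cheap bit.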
Hm, malcolm, I don't understand how baking this information would allow you to move stuff around?
For example if you have a red box in the corner of a white room, and bake that as light volumes, what if someone moves that red box to a different corner? How can it update baked information, when the object wasn't there before - the local colour in that volume has changed...
I can't imagine you'd bake stuff to allow for every combination of coloured objects in a room...
Unless I just don't understand something fundamental to this? In general "baking" means that the result can't be dynamic, since the situation that you used to compute your "baked" solution has now changed, therefore the solution no longer applies.
I'm pretty sure they pre-bake/generate the light volumes...
I could be wrong, but it sounds like you're just talking about SH maps? This quote from the doc those guys released is very telling to me:
"While Enlighten does not support dynamic geometry affecting the indirect lighting, all dynamic
geometry (and any other geometry that you do not wish to make part of the radiosity calculations)
can still be lit by indirect light."
So if you move your red table next to the white wall, you won't get a red bounce on the wall from it. That's exactly what the CryEngine vids seem to indicate it does provide, which is why it looks so very cool and different from what others are doing.
neutron engine uses http://www.mpi-inf.mpg.de/~ritschel/SSDO/
SSDO is mentioned in the Cry paper, but they discuss using it only for distant objects or close-up details... SSDO can do bounce only from information in screen space, whilst Crytek does it differently (there's a toy illustration of that screen-space limitation after this post).
I think all of you who want to understand what's going on should really take the time to read it all through (more than once, and more than just looking at the pictures), and look into some of the related work that is cited. Yes, that will take some time, but it will stop you from making wrong assumptions and stating false information.
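To make that screen-space limitation concrete, here's a tiny toy version of the SSDO-style gather step: every pixel can only collect bounce light from other pixels that made it into the frame, so an off-screen red table bounces nothing. This is my own sketch with made-up buffer names, not the paper's actual implementation:

```python
import numpy as np

def ssdo_one_bounce(direct_rgb, normal, position, rng=None):
    """Toy screen-space one-bounce gather. Inputs are G-buffer-style
    HxWx3 arrays: direct lighting, world-space normals and positions."""
    rng = rng or np.random.default_rng(0)
    bounce = np.zeros_like(direct_rgb)
    offsets = rng.integers(-4, 5, size=(8, 2))   # screen-space taps
    for dy, dx in offsets:
        # Shifting the buffers turns neighbour pixels into "senders".
        # (np.roll wraps at the screen edge - acceptable for a toy.)
        src_rgb = np.roll(direct_rgb, (dy, dx), axis=(0, 1))
        src_pos = np.roll(position, (dy, dx), axis=(0, 1))
        src_nrm = np.roll(normal, (dy, dx), axis=(0, 1))
        to_src = src_pos - position
        dist2 = np.maximum((to_src ** 2).sum(-1, keepdims=True), 1e-4)
        direction = to_src / np.sqrt(dist2)
        # Form-factor-ish weights: both surfaces must face each other.
        w_recv = np.clip((normal * direction).sum(-1, keepdims=True), 0, None)
        w_send = np.clip((-src_nrm * direction).sum(-1, keepdims=True), 0, None)
        bounce += src_rgb * w_recv * w_send / dist2
    return bounce / len(offsets)

# Quick smoke test on a random 8x8 scene patch:
rng = np.random.default_rng(1)
rgb = rng.random((8, 8, 3))
nrm = rng.normal(size=(8, 8, 3))
nrm /= np.linalg.norm(nrm, axis=-1, keepdims=True)
pos = rng.random((8, 8, 3))
print(ssdo_one_bounce(rgb, nrm, pos).shape)  # (8, 8, 3)
```

Notice there's no scene data anywhere in there - only the frame buffers - which is exactly why anything outside the view contributes no bounce.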
I'm pretty sure they pre-bake/generate the light volumes...
Tell me where in the paper (PDF) you see evidence for this. You are "pretty sure" based on what? As glib said, what you describe is the classic old SH volumes, nothing new at all. No one at SIGGRAPH (the mecca conference for graphics) would care if this were just an established technique. Please take that into context and read in detail - base your statements on facts. I hope I don't sound like an ass here, but it's very common for someone to briefly look over stuff, see some pictures and base an opinion on that, which can be a completely wrong conception of things (and this also holds true for peer opinions).
No, it's 100% real-time. I saw it at SIGGRAPH, heard him talk about it, and I've talked to him through email about a few issues with it too.
The main advantage is that it is all real-time, no baking. Enlighten is really nice, but the pre-baking time is very slow, you can't move static geo around, you have to create custom shapes or UVs, etc. etc.
Real time: generate an RSM, use the RSM to generate lots of point lights (VPLs), inject the VPLs into SH volumes (the LPV), propagate light through the LPV, then render the light volume in screen space and apply the lighting (there's a toy sketch of this dataflow after this post).
Some problems are that it only bounces once, and only from the light source you create the RSM from, and it's hard to scale to massive enviros without cascades, which might be tricky.
It should use less memory than other solutions (Enlighten). Not sure how much CPU time it takes, although 3.5 ms on consoles is very light on the GPU side, which is good! We're probably going to have a look at this as well.
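Here's a toy scalar version of that pipeline, just to show the dataflow: inject VPLs into a coarse grid, then let each cell repeatedly push light to its neighbours. The real thing stores low-order SH per cell and propagates directionally on the GPU; the grid size, weights and names here are all my own assumptions:

```python
import numpy as np

GRID = 32  # assumed 32^3 cells covering the scene

def inject(vpls):
    """vpls: list of ((x, y, z) cell index, intensity) - in the real
    pipeline these come from rendering a reflective shadow map."""
    lpv = np.zeros((GRID, GRID, GRID))
    for (x, y, z), intensity in vpls:
        lpv[x, y, z] += intensity
    return lpv

def propagate(lpv, iterations=8):
    """Each pass shares some of every cell's light with its 6 neighbours."""
    for _ in range(iterations):
        neighbours = np.zeros_like(lpv)
        for axis in (0, 1, 2):
            neighbours += np.roll(lpv, +1, axis) + np.roll(lpv, -1, axis)
        lpv = 0.4 * lpv + 0.1 * neighbours  # keep 40%, spread 60%
    return lpv

# Single bright VPL near the grid centre - the one bounce:
lpv = propagate(inject([((16, 16, 16), 100.0)]))
print(lpv[16, 16, 16], lpv[20, 16, 16])  # intensity falls off with distance
```

This also makes the single-bounce limitation visible: nothing re-injects light from lit surfaces back into the grid, and the propagation range depends on the iteration count, which is why cascades come up for big environments.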
I think the Enlighten tech creates much nicer results, with the downside of not being 100% dynamic. The Crytek stuff looks really imprecise (at least judging by the videos I've seen).
A gamer won't care about fancy tech. He wants awesome graphics and gameplay.