
Unreal Engine 4


Replies

  • Computron
    Computron polycounter lvl 7
    But are the large particle counts restricted to the small oval shape material?
    I'm talking about the millions of particles they were showcasing:
http://i.imgur.com/2PaRS.jpg

    Halo: Reach does GPU particles similarly and they are also like that, only really used for sparks.
  • Anchang-Style
    Anchang-Style polycounter lvl 7
For the hair: I still don't really get how it's working with tessellation. The SE engine does the hair with tessellation, similar to how the Samaritan demo did the smoke with tessellation. Does that mean the hair is a fiber-like, polygon-based output, like what ZBrush does?

Well, for the demo: I'm really impressed and excited about it, and I really hope Epic can push both MS and Sony not to try to save 20 bucks on RAM, or 30 on the HDD, and still make the next gen possible. The tech looks amazing. I love how everything looks so noisy thanks to tessellation. No more flat surfaces with clearly visible normal maps (same with the SE tech demo); everything has a certain surface quality to it. The particles blow me away, and the lighting too. The art direction for the trailer could be a bit more glorious; that's what I love about SE, the art for their trailers is usually through the roof. But the fact that this will be available to everyone, rather soon, and is at its core such an amazing technology is all that counts for me about this tech demo (the SE one actually got more interesting when they started messing with the hair color and stuff than what was shown in the trailer, but since it will never be available to normal users, it loses its fascination quickly).
  • Joshua Stubbles
    Joshua Stubbles polycounter lvl 19
    Talbot wrote: »
This could be what they are shooting for. I mean, look at the gap between Xbox and 360, or PS2 to PS3.

When the PS3 & 360 came out, they were already two years behind in GPU tech compared to the PC market. With the CPUs already being manufactured for the PS4/Xbox 720, I think we're closer than two years out from system launches. That means they're likely using something more akin to a GeForce GTX 560 in terms of power, if that (those still cost $150-200). A GTX 680 that runs the UE4 demo costs $500 just for the card. There's no way that's going to be in the next wave of consoles unless they launch in 2015-2016 at the earliest.
  • Anchang-Style
    Anchang-Style polycounter lvl 7
It's kinda complicated to get an idea of the new system. I mean the rumored hardware with the Liverpool Fusion CPU on x86...? Fusion and still a dedicated GPU? x86? Really? Weird. Then a GPU codenamed Tahiti. There is an AMD Tahiti-XT out right now that's supposed to go up against the 570 or even 580, but it's not really cheap either.
  • iniside
    iniside polycounter lvl 6
Anchang-Style wrote: »
It's kinda complicated to get an idea of the new system. I mean the rumored hardware with the Liverpool Fusion CPU on x86...? Fusion and still a dedicated GPU? x86? Really? Weird. Then a GPU codenamed Tahiti. There is an AMD Tahiti-XT out right now that's supposed to go up against the 570 or even 580, but it's not really cheap either.
A GPU integrated into the CPU may help with running physics computations. Nothing surprising here for me.
  • Anchang-Style
    Anchang-Style polycounter lvl 7
Well yeah, that's true; I forgot that there are ways to actually use the integrated GPU for mundane tasks.
  • Snefer
    Snefer polycounter lvl 16
Joshua Stubbles wrote: »
When the PS3 & 360 came out, they were already two years behind in GPU tech compared to the PC market. With the CPUs already being manufactured for the PS4/Xbox 720, I think we're closer than two years out from system launches. That means they're likely using something more akin to a GeForce GTX 560 in terms of power, if that (those still cost $150-200). A GTX 680 that runs the UE4 demo costs $500 just for the card. There's no way that's going to be in the next wave of consoles unless they launch in 2015-2016 at the earliest.

Raw hardware power will be less than the machine they demonstrated on, but then again you squeeze sooo much more out of the consoles than PCs. So I think that halfway through the next generation we will have surpassed this visual quality, no probs :)
  • IchII3D
    IchII3D polycounter lvl 12
    Computron wrote: »
    But are the large particle counts restricted to the small oval shape material?
    I'm talking about the millions of particles they were showcasing:

    http://i.imgur.com/2PaRS.jpg

    Halo: Reach does GPU particles similarly and they are also like that, only really used for sparks.

The physical simulation of the particles will be the main cost for this type of effect. You have the physics side and the render side of the effect. As an example, on today's consoles you could easily have an explosion emit 100-300 sparks for a short period of time with limited physics, as long as they are simply dots and don't take up much pixel space. The main cost is the transparency, based on screen space: one particle can become more expensive than 300 if it takes up 50% of your screen with transparency.

As a general rule of thumb, 2 layers of transparency is fine; at 3-4 layers you start running into problems. If you have 5-6 layers covering large amounts of screen space, then there goes your frame render time. But it's all relative to screen space.
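For a rough sense of scale, here's a tiny back-of-the-envelope model in Python (illustrative only, not engine code; the resolution and coverage numbers are made-up assumptions) showing why screen coverage, not particle count, is what eats your frame time:

```python
# Toy fill-rate model: the cost of transparent particles scales with the number
# of shaded-and-blended pixels, i.e. count x screen coverage, not count alone.
SCREEN_PIXELS = 1280 * 720  # assumed render resolution

def blended_pixel_cost(num_particles, coverage_per_particle):
    """Total pixels shaded and alpha-blended (every overlapping layer pays again)."""
    return num_particles * coverage_per_particle * SCREEN_PIXELS

sparks = blended_pixel_cost(300, 0.00001)  # 300 spark dots, ~9 pixels each
smoke = blended_pixel_cost(1, 0.5)         # one sprite covering half the screen

print(f"300 sparks   -> {sparks:,.0f} blended pixels")
print(f"1 big sprite -> {smoke:,.0f} blended pixels")  # ~170x the sparks
```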

This is why you typically don't get much dense vegetation in games, and why, when you do see trees in console games, they have a habit of pushing all the foliage up as far as possible.

Also, just to add: GPU particles are possible on 360 but not PS3 (not without massive headaches), which is why you generally haven't seen them in many multi-platform console titles.
  • nick2730
Anyone see this yet? Awesome, looks like a new UI. It actually shows him working in-engine in real time; everything seems to be going the way Crytek does it. Love it.
    http://www.ign.com/videos/2012/06/08/unreal-4-engine-development-demo
  • Bigjohn
    Bigjohn polycounter lvl 11
Joshua Stubbles wrote: »
When the PS3 & 360 came out, they were already two years behind in GPU tech compared to the PC market. With the CPUs already being manufactured for the PS4/Xbox 720, I think we're closer than two years out from system launches. That means they're likely using something more akin to a GeForce GTX 560 in terms of power, if that (those still cost $150-200). A GTX 680 that runs the UE4 demo costs $500 just for the card. There's no way that's going to be in the next wave of consoles unless they launch in 2015-2016 at the earliest.

    There are a few things wrong here. You might be right, but I'm hoping for awesomeness's sake that you're wrong :)

First, if we assume that we'll see the same 2-year gap in GPU tech, then we're already on the mark. The GTX 680 is out right now. If the consoles come out with its equivalent two years from now, then that's the same 2-year gap.

Second, the price of those GPUs is what we pay if we buy them from Newegg or something. MS and Sony won't just go on Newegg and order a million GPUs; their price would be much lower, so the price of the console would be right back at the same ~$500 level, which is reasonable.

Third, those are the numbers from right now. Just a couple of years ago, in 2010, the GTX 480 was top of the line at ~$500, and you can find it for about $200 now. That's less than half the price.

    So I don't see any reason why this isn't feasible.
  • Sandro
    New features shown are indeed awesome, especially lighting and particle stuff.

I'm not sure about the UI though. Why is subdividing screen space into an infinite number of dockable windows so trendy nowadays?
  • ZacD
    ZacD ngon master
    Also they normally sell consoles at a slight loss for the first year or so, besides Nintendo.
  • SHEPEIRO
    SHEPEIRO polycounter lvl 17
Demo looks amazing.

But anyone else feel that the voxel-based lighting lacks a little realism? Not sure why; I've been watching and re-watching. Maybe it's a lack of bounces, or a lack of precision in the voxel tree or something... maybe a touch of SSAO might sort it out. It makes certain intersecting surfaces quite harsh...

    still pretty damn spangly
  • ErichWK
    ErichWK polycounter lvl 12
Well, maybe we can save the time spent doing massive optimizations, making lightmaps, baking, and having to get our games to run on PS3??
  • Computron
    Computron polycounter lvl 7
    Just read that they took out the ability to bake lightmaps.
    Not sure how they expect it to scale down to mobile systems as they mentioned.
    Bring your own lighting solution?
  • eld
    eld polycounter lvl 18
    Computron wrote: »
    Just read that they took out the ability to bake lightmaps.
    Not sure how they expect it to scale down to mobile systems as they mentioned.
    Bring your own lighting solution?

    easy: unreal engine 3 :P

    The whole point of unreal engine 4 is moving towards a fully real-time solution, which is good in many ways, lightmaps are a burden on any pipeline, both with memory cost and lack of flexibility.

    Many games out today are already sporting fully realtime scenarios, sans the fancy bouncy lights and radiosity we get in lightmaps, now we can have the best of both worlds!
  • Ace-Angel
    Ace-Angel polycounter lvl 12
Not to crap in anyone's cornflakes (Unreal 4 looks dope as hell), but I can't help but wonder if all this shiny new tech will continue the trend of production costs for "AAA" games to go up, up, up. Publishers are already rolling the dice on any IP that isn't Call of Duty or its ilk, which is why the climate has been so sink-or-swim this last decade or so.

    Hopefully these tools also allow devs to get their work done with less time and money so publishers can afford to take some more risks on new IPs, and more studios can flourish under healthier industry economics.
    That's only true if the engine and people working on it don't know anything.

For example, having each artist spend 15+ minutes correcting certain things on a mesh (per asset) so they get better normal map bakes is a waste of time and manpower. If an artist makes 10 pieces alone, they're wasting both their own and the company's time, about 2 man-hours; that's about 20 man-hours for 10 artists alone, and I'm being generous here with 15 minutes per asset. What about that busty space marine girl with all kinds of weapons and stuff getting up close and personal? That's easily several hours per part.

So no, next-gen is supposed to make things cheaper, not more expensive. It only gets/got expensive because companies like Autodesk haven't bothered solving the 'small' issues, which over the years have stacked up and taken a toll, as well as the studios themselves, who think, 'Eh, you know what, let them kludge it out; who cares if we're wasting several hundred man-hours in a single day in our studio? It's not like we need to ship headache-free products in under 3 years without resorting to firing people all the time.'

    It's the pipeline that is at issue here, and from the looks of it, hopefully Epic have realized this and taken out the hammer and streamlined the process.
    Computron wrote: »
    Just read that they took out the ability to bake lightmaps.
    Not sure how they expect it to scale down to mobile systems as they mentioned.
    Bring your own lighting solution?

Most people I've met (at least, many technical artists) always vouched for baking their lightmaps in UDK, exporting them, fixing them up (dilating, blurring, etc.), and importing them back into the engine, since by default lightmaps in many engines are subpar when it comes to the end result.

    Many more use Plugins like Hare and Turtle to bake them out.

So I can see why Epic would want to remove them, although I figured they would have fixed the smaller issues and kept them as a side option. I guess we'll see if they change their stance in 2014. Of course, the talk of next-gen is also at the heart of this, so that could be the reason too.
  • Daelus
    Computron wrote: »
    Just read that they took out the ability to bake lightmaps.
    Not sure how they expect it to scale down to mobile systems as they mentioned.
    Bring your own lighting solution?

    They took out the necessity, not the ability. Or at least that's what I got out of it.
  • Xendance
    Xendance polycounter lvl 7
    So does anyone actually know what their algorithm for the lighting is?
  • Gestalt
    Gestalt polycounter lvl 11
    I heard the term 'voxel' mentioned when referring to lighting, so I'm thinking that they're doing something along the lines of voxel cone tracing.
  • iniside
    iniside polycounter lvl 6
    Daelus wrote: »
    They took out the necessity, not the ability. Or at least that's what I got out of it.

No. They clearly said there isn't any form of lightmapping. Why would you keep lightmapping in a next-gen engine?
  • ambershee
    ambershee polycounter lvl 17
    ..because memory is cheap and your environments don't require dynamic lighting?
  • artquest
    artquest polycounter lvl 14
Not to crap in anyone's cornflakes (Unreal 4 looks dope as hell), but I can't help but wonder if all this shiny new tech will continue the trend of production costs for "AAA" games going up, up, up. Publishers are already rolling the dice on any IP that isn't Call of Duty or its ilk, which is why the climate has been so sink-or-swim this last decade or so.

    Hopefully these tools also allow devs to get their work done with less time and money so publishers can afford to take some more risks on new IPs, and more studios can flourish under healthier industry economics.

To be honest, from what I saw in the demo, the actual assets themselves didn't look much different from current-gen models. Just model a high and a low res, bake a height map for tessellation and a normal map, and you're done. How much detail you put into your assets is really up to the studio and the kind of game you're making.

With software like ZBrush and 3D-Coat or other retopo software, you can pump out a great-looking, highly detailed asset relatively quickly, IMO.

That being said... if things keep pushing towards film quality (and games are catching up very quickly, to the point that film/TV is using more real-time rendering engines for VFX! Zoic even said that if they were to do 300 again, they could do it in real time), I think we may see a move towards the film-style pipeline, where you split modelers out from the texture artists in most cases.
  • blankslatejoe
    blankslatejoe polycounter lvl 19
By fully deferred they mean splitting the rendering into pieces (normal pass, base pass, etc.) and then lighting at a more screen-space level, so you're not taking a draw-call hit, but rather a shader-complexity hit, I believe.
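To picture what "fully deferred" means in practice, here's a minimal sketch in Python/NumPy (the names, resolution, and the simple Lambert term are illustrative assumptions, not UE4's actual shading): geometry is rasterized once into a G-buffer, then each light becomes a screen-space pass over those buffers, so lighting cost tracks pixels x lights rather than scene draw calls.

```python
import numpy as np

H, W = 720, 1280  # assumed resolution

# G-buffer: per-pixel attributes written once by the geometry ("base") pass.
gbuffer = {
    "albedo": np.full((H, W, 3), 0.5, dtype=np.float32),
    "normal": np.tile(np.array([0.0, 0.0, 1.0], dtype=np.float32), (H, W, 1)),
}

lights = [
    {"dir": np.array([0.0, 0.0, 1.0]), "color": np.array([1.0, 0.9, 0.8])},
    {"dir": np.array([0.3, 0.3, 0.9]), "color": np.array([0.2, 0.3, 0.6])},
]

# Lighting: one full-screen pass per light reading the G-buffer; no extra draw
# calls per object, just shader work per pixel per light.
frame = np.zeros((H, W, 3), dtype=np.float32)
for light in lights:
    L = light["dir"] / np.linalg.norm(light["dir"])
    n_dot_l = np.clip((gbuffer["normal"] * L).sum(axis=-1, keepdims=True), 0.0, 1.0)
    frame += gbuffer["albedo"] * n_dot_l * light["color"]
```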

It's what you see in Crytek, Metro 2033, and a bunch of other more recent shooters... there might be a baked solution for calculating the bounced light, and that's probably where the 'voxel' lighting thing comes in, I imagine, as it would bake a grid of voxels that contain spherical harmonics information or something. It's probably a similar solution to Enlighten (Geomerics Enlighten Demo: http://www.youtube.com/watch?v=Dd8yMPZzWfE).

    But yeah, baking lighting is tedious and kludgey and going deferred is going to save tons of iteration time when it comes to lighting.
  • Computron
    Computron polycounter lvl 7
    On the subject of removing lightmaps:
    eld wrote: »
    easy: unreal engine 3 :P

    The whole point of unreal engine 4 is moving towards a fully real-time solution, which is good in many ways, lightmaps are a burden on any pipeline, both with memory cost and lack of flexibility.

    Many games out today are already sporting fully realtime scenarios, sans the fancy bouncy lights and radiosity we get in lightmaps, now we can have the best of both worlds!

I understand, but they specifically mentioned that UE4 would scale across all the current hardware classes that UE3 does, which would have fit great with my theory that they would partner with Geomerics and use Enlighten. That way, artists could have real-time dynamic GI (like in BF3) on workstations or enthusiast gamer hardware, and because of the way Enlighten works, they could use (without having to rebake) that same preview GI as quality static lightmaps on console and mobile, instantly.

    That means:

    Same engine, same game, multiple platforms at the appropriate level of detail, very happy developers.
    Gestalt wrote: »
    I heard the term 'voxel' mentioned when referring to lighting, so I'm thinking that they're doing something along the lines of voxel cone tracing.

They are doing something very similar. Nvidia's website went slightly in depth in their write-up; they do a cone trace at a per-pixel level. It uses a ton of RAM and DX11 compute shaders and requires DMA for textures, so unlike Geomerics, it's not likely to scale down from high-end PC DX11-class hardware.
    Daelus wrote: »
    They took out the necessity, not the ability. Or at least that's what I got out of it.

I read that they flat-out took out the ability. Maybe that was a misquote; we will see soon.

blankslatejoe wrote: »
By fully deferred they mean splitting the rendering into pieces (normal pass, base pass, etc.) and then lighting at a more screen-space level, so you're not taking a draw-call hit, but rather a shader-complexity hit, I believe.

It's what you see in Crytek, Metro 2033, and a bunch of other more recent shooters... there might be a baked solution for calculating the bounced light, and that's probably where the 'voxel' lighting thing comes in, I imagine, as it would bake a grid of voxels that contain spherical harmonics information or something. It's probably a similar solution to Enlighten (Geomerics Enlighten Demo).

    But yeah, baking lighting is tedious and kludgey and going deferred is going to save tons of iteration time when it comes to lighting.

Geomerics is different. Epic's solution is more like an advanced version of CryEngine 3's LPVs, with a lot higher-frequency detail because of their method of sparsely voxelizing their scenes, as well as greater support for accurate direct/indirect specular.

Baking is not necessarily a slow process. Enlighten does it in real time on many different classes of hardware, but at a low-to-medium frequency / lower level of quality. But hey, it works well enough for all the Frostbite 2 games!

Deferred indirect lighting on cascaded light propagation volumes is how CryEngine 3 does its GI, while UE4 cone-traces per pixel, so it doesn't have much to do with their indirect lighting. It will be interesting to see what their new material editor allows for with the deferred materials; that is the one thing CryEngine 3 is missing right now for me. I'll be curious to see what kind of limitations that will impose.
    ambershee wrote: »
    ..because memory is cheap and your environments don't require dynamic lighting?

Exactly. Geomerics makes this point very clear in their presentation, 'Rethinking Game Lighting Pipelines.' Lightmaps are a very convenient and proven format that requires very little memory, can be streamed, and can be updated/rebaked dynamically in real time as necessary, which is actually how Enlighten works. BF3's Frostbite engine already uses this tech for its real-time GI.

Anyway, I was wrong, they are not using Enlighten. But what's interesting is that Epic did not use Cyril Crassin's work (as I posted earlier with the voxel cone tracing Nvidia write-up), from what I have read so far, but rather opted for an internally developed solution from one of their engineers. I wonder how it differs from Cyril's work, since that was last year and Epic's new method seems to be a lot faster.

    I want to hear about mobile soon.
  • Computron
    Computron polycounter lvl 7
    artquest wrote: »
To be honest, from what I saw in the demo, the actual assets themselves didn't look much different from current-gen models. Just model a high and a low res, bake a height map for tessellation and a normal map, and you're done. How much detail you put into your assets is really up to the studio and the kind of game you're making.

With software like ZBrush and 3D-Coat or other retopo software, you can pump out a great-looking, highly detailed asset relatively quickly, IMO.

That being said... if things keep pushing towards film quality (and games are catching up very quickly, to the point that film/TV is using more real-time rendering engines for VFX! Zoic even said that if they were to do 300 again, they could do it in real time), I think we may see a move towards the film-style pipeline, where you split modelers out from the texture artists in most cases.

I was hoping for a bigger push toward DCC tool integration (think CryEngine 3's 3ds Max/Maya exporter plugins), and things like Project Skyline.
I was also wishing they would allow extensions for some more exotic techniques/content pipelines, like megatextures or even Lionhead's (lesser-known, but even more ambitious) Mega Meshes.
  • Gestalt
    Gestalt polycounter lvl 11
    Do we know what type of textures/meshes/techniques UE4 works with yet? I was sort of hoping things would move to where you could essentially drop a high-poly, vert painted static mesh into the editor and it would automatically process and tessellate it. I guess sticking to textures has plenty of benefits though in terms of modularity and reuse/tiling/etc.
  • CrazyButcher
  • SHEPEIRO
    SHEPEIRO polycounter lvl 17
    SHEPEIRO wrote: »
Demo looks amazing.

But anyone else feel that the voxel-based lighting lacks a little realism? Not sure why; I've been watching and re-watching. Maybe it's a lack of bounces, or a lack of precision in the voxel tree or something... maybe a touch of SSAO might sort it out. It makes certain intersecting surfaces quite harsh...

    still pretty damn spangly

Just to answer my own question, quoted from NeoGAF...

    "NVidia: Please give us an overview of how the algorithm works from generating the octree, to cone tracing, to the gathering pass.

    Tim Sweeney: The technique is known as SVOGI – Sparse Voxel Octree Global Illumination, and was developed by Andrew Scheidecker at Epic. UE4 maintains a real-time octree data structure encoding a multi-resolution record of all of the visible direct light emitters in the scene, which are represented as directionally-colored voxels. That octree is maintained by voxelizing any parts of the scene that change, and using a traditional set of Direct Lighting techniques, such as shadow buffers, to capture first-bounce lighting.

    Performing a cone-trace through this octree data structure (given a starting point, direction, and angle) yields an approximation of the light incident along that path.

    The trick is to make cone-tracing fast enough, via GPU acceleration, that we can do it once or more per-pixel in real-time. Performing six wide cone-traces per pixel (one for each cardinal direction) yields an approximation of second-bounce indirect lighting. Performing a narrower cone-trace in the direction of specular reflection enables metallic reflections, in which the entire scene is reflected off each glossy surface."

So that's two-bounce lighting: one direct (current tech: light hits a surface and is reflected/diffused into the camera) and one "bounce" (between materials before entering the camera)... I presume.

This is obviously a major step forward from direct-only or baked lighting... but I'm wondering if a low level of baked lighting would improve the look at all.
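For anyone trying to picture the cone trace, here's a toy Python/NumPy sketch of the general idea: a dense, mip-mapped 3D grid stands in for the real sparse octree, and a "cone" is just a march that samples coarser mip levels as it gets wider. The step sizes, mip selection and compositing are simplified guesses for illustration, not Epic's implementation.

```python
import math
import numpy as np

def build_mips(grid):
    """Build a 3D 'mip chain' by averaging 2x2x2 blocks (dense stand-in for the octree)."""
    mips = [grid]
    while mips[-1].shape[0] > 1:
        g = mips[-1]
        n = g.shape[0] // 2
        mips.append(g[:2*n, :2*n, :2*n].reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5)))
    return mips

def cone_trace(mips, origin, direction, half_angle, max_dist=32.0):
    """Approximate light arriving along a cone by sampling coarser mips as it widens."""
    direction = direction / np.linalg.norm(direction)
    radiance, transmittance, t = 0.0, 1.0, 1.0
    while t < max_dist and transmittance > 0.01:
        diameter = max(1.0, 2.0 * t * math.tan(half_angle))
        level = min(int(math.log2(diameter)), len(mips) - 1)   # wider cone -> coarser mip
        g = mips[level]
        i, j, k = (int(c) for c in (origin + t * direction) / (2 ** level))
        if 0 <= i < g.shape[0] and 0 <= j < g.shape[1] and 0 <= k < g.shape[2]:
            sample = float(g[i, j, k])          # voxel holds injected direct lighting
            radiance += transmittance * sample  # front-to-back accumulation
            transmittance *= 1.0 - min(sample, 1.0)
        t += diameter * 0.5                     # step roughly with the cone footprint
    return radiance

# Toy scene: a 32^3 grid with one brightly lit wall; trace one wide cone toward it.
grid = np.zeros((32, 32, 32), dtype=np.float32)
grid[:, :, 24] = 1.0
print(cone_trace(build_mips(grid), np.array([16.0, 16.0, 4.0]),
                 np.array([0.0, 0.0, 1.0]), math.radians(30)))
```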
  • Anchang-Style
    Anchang-Style polycounter lvl 7
So that's where the voxel engine tests went. Very interesting, thanks for that :)
  • iniside
    iniside polycounter lvl 6
    ambershee wrote: »
    ..because memory is cheap and your environments don't require dynamic lighting?

OK. Then you don't have any particles casting lights, any dynamic reflections, movable lights, etc.? Everything is perfectly static?

What I mean is: if you have lightmapping in the engine, you have to mix it with dynamic lighting so the two blend together nicely.
I believe maintaining two almost separate lighting systems in a single engine is not something you want to do, for the sake of cleaner and easier-to-maintain code.
  • SHEPEIRO
    SHEPEIRO polycounter lvl 17
Maintaining light baking as well as RT lighting would be more of a tools issue for Epic than a runtime issue for games that run with either.

Running with voxel lighting turned off would (probably, just mildly educated speculation here) just cut a chunk of code out when compiled (clean).
With voxel lighting turned on, assets would have their lightmaps and 2nd UVs stripped and the shaders changed, which is a pretty simple process for a multi-platform engine like UE.
  • r_fletch_r
    r_fletch_r polycounter lvl 9
I may be understanding this wrong, but if the voxel grid is pretty coarse, doesn't this mean there's going to be light leaking all over the shop? Or does the SVO make up for this while maintaining sane amounts of memory usage?
  • blankslatejoe
    blankslatejoe polycounter lvl 19
Another downside of mixing baked with dynamic lights: once something is baked, trying to mix a baked shadow with a dynamic one leads to nothing but headaches. I could see the baked thing working for maybe a subtle AO/GI pass, but if you use it for direct shadowing you'd end up with that ol' shadow-stacking problem that you see on earlier UE3 games' characters... only now it would be on the entire environment... unless you managed the 'which shadow do I cast?' flags in UnrealEd closely... which... is fun... and is also exactly the sort of thing UE4 seems like it's trying to simplify!
  • Bigjohn
    Bigjohn polycounter lvl 11
    Man... I considered myself a fairly technical person, and always tried to keep up on the latest maps and whatnot, but wow, this Voxel GI thing is going waaaaaaaay over my head.

    Anyone else feel the same? Or am I just being retarded?
  • Xoliul
  • Andreas
    Andreas polycounter lvl 11
    ... I really hate that man...

    I love how he spent 3 minutes saying 'Kismet is better, and I hope we can have photo-realistic graphics this coming gen.'
  • Joseph Silverman
    Joseph Silverman polycounter lvl 17
    Andreas wrote: »
    ... I really hate that man...


    =/

    why? what is wrong with what he just said?
  • Computron
    Computron polycounter lvl 7
    r_fletch_r wrote: »
I may be understanding this wrong, but if the voxel grid is pretty coarse, doesn't this mean there's going to be light leaking all over the shop? Or does the SVO make up for this while maintaining sane amounts of memory usage?

Light leaking was a problem with the propagation algorithm in CryEngine 3's coarse LPVs. You are correct: the sparse nature of the voxel structure, and the fact that it's tracing cones over a hemisphere per pixel, means very little bleeding, if any. It's more similar to final gathering, if you are familiar with that.
  • Computron
    Computron polycounter lvl 7
    Bigjohn wrote: »
    Man... I considered myself a fairly technical person, and always tried to keep up on the latest maps and whatnot, but wow, this Voxel GI thing is going waaaaaaaay over my head.

    Anyone else feel the same? Or am I just being retarded?

Do you read any whitepapers or watch tech presentations? I suggest this one by Cyril Crassin on his very similar method for RTGI.
  • Andreas
    Andreas polycounter lvl 11
    =/

    why? what is wrong with what he just said?

Nothing much, lol. I just really went off him during the making-of documentary for the first Gears. For some reason half of it was about him and his girlfriend and how she dumped him, hah. The other half was Microsoft going 'Hmm... can we really get away with this chainsaw gun? Seems a bit too much.' :D
  • ambershee
    ambershee polycounter lvl 17
    iniside wrote: »
I believe maintaining two almost separate lighting systems in a single engine is not something you want to do, for the sake of cleaner and easier-to-maintain code.

The system which generates lightmaps is completely discrete from the one that casts dynamic lights; they are not related, and it is not difficult to maintain. They are later combined in the shader, which can optionally compile in/out whatever is required without any real difficulty (and probably already does this kind of work in UE4 for other features).

    The reason there's no lightmapping in UE4 is because the target platforms are not expected to want to make memory tradeoffs in exchange for performance (since just like the current generation, they're likely lacking memory where it counts). That and dynamic lighting often tends to look better, but not where global illumination is required. I bet if you put their dynamic GI in a room without heavily perturbed surfaces, it will look absolutely terrible.
  • Computron
    Computron polycounter lvl 7
    ambershee wrote: »
    The reason there's no lightmapping in UE4 is because the target platforms are not expected to want to make memory tradeoffs in exchange for performance (since just like the current generation, they're likely lacking memory where it counts).



    From what I understand that is not likely the case ambershee.

If the way they do their GI is at all similar to how Cyril does it, then they will require a very large 3D texture and a lot more memory than they would with lightmaps (they specifically mentioned that their method requires a ton of RAM). Also, if UE4 is targeted toward a wide range of hardware as Cliffy says, something like Enlighten's real-time, dynamically updated baked GI makes more sense to me in terms of memory savings and performance.
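Quick back-of-the-envelope on that memory point (illustrative formats and sizes, not Epic's actual budgets):

```python
# Dense 3D lighting volume vs. a typical 2D lightmap atlas, in MiB.
def mib(num_texels, bytes_per_texel):
    return num_texels * bytes_per_texel / (1024 ** 2)

volume_512 = mib(512 ** 3, 8)    # 512^3 voxels at RGBA16F (8 bytes/texel)
lightmap_2k = mib(2048 ** 2, 4)  # 2048^2 lightmap at uncompressed RGBA8

print(f"dense 512^3 volume: {volume_512:,.0f} MiB")  # ~1,024 MiB
print(f"2048^2 lightmap:    {lightmap_2k:,.0f} MiB") # ~16 MiB
# Sparseness (the octree) is what keeps the volume approach anywhere near sane.
```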

Here is the abstract for the presentation I linked above; it covers what I wish more people understood about game lighting:
    Overview
    Many urban myths surround game lighting technology: baking lighting into textures is slow, high-quality real-time radiosity is not affordable, most lighting research is not practical for games, hardware is not yet fast enough for complex lighting, etc. This talk challenges these myths, showing that considerable, unexploited potential still exists on current consoles. An argument is made for why the static/real-time divide no longer is relevant, and how rethinking the way lighting is generated can provide higher quality, more iterative lighting. Examples are provided for what alternative lighting pipelines are possible today with existing techniques, and what considerations should be made to prepare for the next generation.
  • ZacD
    ZacD ngon master
    Computron wrote: »
Do you read any whitepapers or watch tech presentations? I suggest this one by Cyril Crassin on his very similar method for RTGI.

Cool, a pretty easy concept to explain, and I want to make sure I understand it, but the guy is a bit hard to understand at points. It's a 3D grid that gets saved off into a 3D texture with different levels of subdivision, and from what I understand, up to 9 subdivisions for that example scene. Where there is no geometry the subdivision is low, and it's also smart to separate static and dynamic objects. Since all of the lighting exists in 3D space it is easy to create specular reflections, and glossy reflections are oddly cheaper, because it just needs to look at a higher subdivision level. I don't understand exactly how the light bounces are calculated, though, and he said it does 2 bounces in the example.
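In case the "only subdivide where there's geometry" part is the confusing bit, here's a toy sparse octree in Python (a hypothetical structure purely for illustration; the real thing lives in GPU memory and stores filtered lighting, not booleans):

```python
class OctreeNode:
    def __init__(self, origin, size):
        self.origin = origin     # min corner (x, y, z) of this cube
        self.size = size         # edge length
        self.children = None     # 8 children, created only when needed
        self.occupied = False

    def insert(self, point, max_depth):
        """Subdivide toward a surface point; empty regions never subdivide."""
        self.occupied = True
        if max_depth == 0:
            return
        half = self.size / 2.0
        if self.children is None:
            self.children = [OctreeNode((self.origin[0] + dx * half,
                                         self.origin[1] + dy * half,
                                         self.origin[2] + dz * half), half)
                             for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
        idx = (((point[0] >= self.origin[0] + half) << 2) |
               ((point[1] >= self.origin[1] + half) << 1) |
                (point[2] >= self.origin[2] + half))
        self.children[idx].insert(point, max_depth - 1)

    def count_nodes(self):
        return 1 + (sum(c.count_nodes() for c in self.children) if self.children else 0)

root = OctreeNode((0.0, 0.0, 0.0), 64.0)
for p in [(10.0, 10.0, 10.0), (10.5, 10.2, 10.1), (11.0, 10.8, 10.4)]:
    root.insert(p, max_depth=5)
print(root.count_nodes())  # 41 nodes vs. tens of thousands for a fully subdivided tree
```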
  • Computron
    Computron polycounter lvl 7
I'll try to explain it as well and as easily as I can; correct me if I make any mistakes, guys, but remember I am generalizing.

    So, the lights aren't actually bouncing photons around the scene two times and then showing up in the camera, but this method approximates what that would look like and performs fast enough to be real-time.

    For diffuse interreflection:

First, for every light, you would render the scene from its 'point of view'.
Any surface pixels that are directly illuminated by it store their brightness in their respective voxel in the 3D grid. This is the direct light represented in voxels.
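A hypothetical mini-sketch of that injection step in Python (the names, grid size, and the max() splat are assumptions for illustration; the real thing stores directionally colored values and runs on the GPU):

```python
import numpy as np

GRID_RES, WORLD_SIZE = 64, 64.0
voxels = np.zeros((GRID_RES, GRID_RES, GRID_RES), dtype=np.float32)

def inject_direct_light(lit_points, intensities):
    """Splat the direct lighting of surface points (seen from the light) into voxels."""
    for point, intensity in zip(lit_points, intensities):
        i, j, k = (int(c / WORLD_SIZE * GRID_RES) for c in point)
        if 0 <= i < GRID_RES and 0 <= j < GRID_RES and 0 <= k < GRID_RES:
            voxels[i, j, k] = max(voxels[i, j, k], intensity)

# e.g. points that a shadow-map style pass found to be directly lit:
inject_direct_light([(12.0, 5.0, 30.0), (12.5, 5.0, 30.0)], [1.0, 0.8])
```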

After this, there are some fancy filtering algorithms that make sure these voxels will work at all the various subdivision levels. Same basic idea as with mip-maps, but in 3D. So it creates the mip-maps for the lighting, which allows things like diffuse inter-reflection to be calculated faster, because they don't require super high resolution to be accurate enough. More on that later.

Then you render from the game camera.
For every pixel of any surface you render, you can get indirect lighting by basically re-rendering a low-resolution, voxelized version of that pixel's surrounding environment from its 'point of view' (which, in this case, is a hemisphere pointing out along that pixel's normal).
You would then average what that pixel 'sees' into a value that represents its brightness and color. This approximates diffuse inter-reflection for every pixel (i.e. lighting from all directions).
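Here's a toy version of that per-pixel gather in Python (the cone directions and cosine weights are illustrative assumptions; cone_trace stands for something like the cone-trace sketch earlier in the thread):

```python
import numpy as np

# A handful of wide cones spread over the hemisphere (assumes the normal is ~ +Z).
CONE_DIRS = [np.array(d, dtype=np.float32) for d in
             [(0, 0, 1), (0.8, 0, 0.6), (-0.8, 0, 0.6),
              (0, 0.8, 0.6), (0, -0.8, 0.6), (0.6, 0.6, 0.6)]]

def diffuse_indirect(position, normal, cone_trace):
    """Average what the pixel 'sees' over its hemisphere, cosine-weighted."""
    total, weight_sum = 0.0, 0.0
    for d in CONE_DIRS:
        d = d / np.linalg.norm(d)
        w = max(float(np.dot(d, normal)), 0.0)   # cosine weight
        total += w * cone_trace(position, d)
        weight_sum += w
    return total / max(weight_sum, 1e-6)

# With a dummy cone_trace returning constant light, the gather just returns it:
print(diffuse_indirect(np.zeros(3), np.array([0.0, 0.0, 1.0]), lambda p, d: 1.0))
```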

Because your scene is represented by a 3D voxel grid with multiple subdivisions (and thus mip-map levels), lighting calculations can be done much more quickly with cone tracing than if you were to ray-trace or use other methods...


    For specular reflections:

These are done the same way, but they don't sample the lighting in all directions, rather in the direction of the bounce.
The glossiness (and BRDF, if you want to get technical) of the pixel determines how wide an area they will sample (the cone's angle). Rough or less-glossy/more-diffuse surfaces perform faster because they can use a lower-resolution mip level, much like with diffuse inter-reflection.
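A hedged sketch of that glossiness-to-cone-width relationship (the exact mapping is made up for illustration; real implementations derive it from the BRDF):

```python
import math

def cone_half_angle(glossiness):
    """Glossy (1.0) -> very narrow cone; rough (0.0) -> wide cone."""
    return math.radians(2.0 + (1.0 - glossiness) * 58.0)

def mip_level_at(distance, glossiness, voxel_size=1.0, max_level=8):
    """Pick the mip whose voxel footprint matches the cone's width at this distance."""
    diameter = 2.0 * distance * math.tan(cone_half_angle(glossiness))
    return min(max(int(math.log2(max(diameter / voxel_size, 1.0))), 0), max_level)

for g in (1.0, 0.5, 0.1):
    print(g, [mip_level_at(d, g) for d in (1, 4, 16)])
# Rough surfaces hit coarse (cheap, blurry) mips almost immediately; a near-mirror
# surface stays at the finest level, which is limited by the grid resolution.
```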

So, mirror-like reflections probably won't be possible with this method without greatly increasing the resolution of the voxel grid.


    I don't have any directly applicable visual examples, but here is a link to an old explanation of how the radiosity method is calculated with lots of good visuals.

Now, it's not the same thing, but it explains the 'pixel's point of view' and the 'averaging to get visibility' concepts pretty well.
Here, instead, the first pass is done from the point of view of the light, as I mentioned earlier, and instead of rasterizing the scene into hemicubes, you cone-trace in the voxel grid (in real time, rather than waiting for a bake), and you would only get results roughly as good as the second-pass picture shows.
  • Ace-Angel
    Ace-Angel polycounter lvl 12
    =/

    why? what is wrong with what he just said?

I think his attitude, his T-shirt, and how he passively spins most interviews into "Hey, I'm old school, I have unproven facts and I have a GF", not to mention how he got his start in game dev by simply playing Mario a lot, might have something to do with it...

He is the reason many schmuck gamers who know nothing about game dev want to become one...
  • iniside
    iniside polycounter lvl 6
    Ace-Angel wrote: »
I think his attitude, his T-shirt, and how he passively spins most interviews into "Hey, I'm old school, I have unproven facts and I have a GF", not to mention how he got his start in game dev by simply playing Mario a lot, might have something to do with it...

He is the reason many schmuck gamers who know nothing about game dev want to become one...

Or he's a living, proven example that if you want something strongly enough, you may end up doing it.
Depends how you look at it.
  • WarrenM
    Cliff is good at what he does and he's successful. Thus, there are haters. :)
  • SHEPEIRO
    SHEPEIRO polycounter lvl 17
Another downside of mixing baked with dynamic lights: once something is baked, trying to mix a baked shadow with a dynamic one leads to nothing but headaches. I could see the baked thing working for maybe a subtle AO/GI pass, but if you use it for direct shadowing you'd end up with that ol' shadow-stacking problem that you see on earlier UE3 games' characters... only now it would be on the entire environment... unless you managed the 'which shadow do I cast?' flags in UnrealEd closely... which... is fun... and is also exactly the sort of thing UE4 seems like it's trying to simplify!

Yeah, I agree... I was thinking of something extremely generic, almost like a large-scale SSAO pass sitting at the very bottom of the range... no lights as such, just a ray cast for occlusion at 2m or something. It could possibly even be vert-based, as it wouldn't need much accuracy etc., but just enough to lift the lighting a touch...

Would need to see it in front of me to be certain, though, video compression and all that.
  • SHEPEIRO
    SHEPEIRO polycounter lvl 17
    Computron wrote: »
Light leaking was a problem with the propagation algorithm in CryEngine 3's coarse LPVs. You are correct: the sparse nature of the voxel structure, and the fact that it's tracing cones over a hemisphere per pixel, means very little bleeding, if any. It's more similar to final gathering, if you are familiar with that.

Got any sources for that info? Interested.