Looks insane. I really wonder how UVs are going to work if you can just import stuff from ZBrush at those high polycounts, and how texturing in general will work.
"And that, in a nutshell, is the definition of a micro-polygon engine. The cost in terms of GPU resources is likely to be very high, but with next-gen, there's the horsepower to pull it off and the advantages are self-evident. Rendering one triangle per pixel essentially means that performance scales closely with resolution. "Interestingly, it does work very well with our dynamic resolution technique as well," adds Penwarden. "So, when GPU load gets high we can lower the screen resolution a bit and then we can adapt to that. In the demo we actually did use dynamic resolution, although it ends up rendering at about 1440p most of the time.""
"A number of different components are required to render this level of detail, right?" offers Sweeney. "One is the GPU performance and GPU architecture to draw an incredible amount of geometry that you're talking about - a very large number of teraflops being required for this. The other is the ability to load and stream it efficiently. One of the big efforts that's been done and is ongoing in Unreal Engine 5 now is optimising for next generation storage to make loading faster by multiples of current performance. Not just a little bit faster but a lot faster, so that you can bring in this geometry and display it, despite it not all fitting and memory, you know, taking advantage of next generation SSD architectures and everything else... Sony is pioneering here with the PlayStation 5 architecture. It's got a God-tier storage system which is pretty far ahead of PCs, bon a high-end PC with an SSD and especially with NVMe, you get awesome performance too."
But it's Sweeney's invocation of REYES that I'm particularly struck by. The UE5 tech demo doesn't just show extreme detail at close range, it's also delivering huge draw distances and no visible evidence whatsoever of LOD pop-in. Everything is seamless, consistent. What's the secret? "I suppose the secret is that what Nanite aims to do is render effectively one triangle per pixel, so once you get down to that level of detail, the sort of ongoing changes in LOD are imperceptible," answers Nick Penwarden. "That's the idea of Render Everything Your Eye Sees," adds Tim Sweeney. "Render so much that if we rendered more you couldn't tell the difference, then as the amount of detail we're rendering changes, you shouldn't be able to perceive the difference."
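Some napkin math on what "one triangle per pixel" implies for the per-frame triangle budget, and why dynamic resolution helps - just my own sketch of the scaling argument, not anything Epic has published:

```python
# Rough sketch: if Nanite targets ~1 rasterized triangle per pixel, the
# visible-triangle budget per frame is roughly the pixel count, so the cost
# scales with output resolution -- which is why dynamic resolution helps.

RESOLUTIONS = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),   # roughly where the demo reportedly settled
    "4K":    (3840, 2160),
}

for name, (w, h) in RESOLUTIONS.items():
    pixels = w * h
    print(f"{name}: ~{pixels / 1e6:.2f}M pixels -> ~{pixels / 1e6:.2f}M visible triangles per frame")

# Dropping from 4K to 1440p cuts the triangle (and shading) work
# by the same ratio as the pixel count.
ratio = (2560 * 1440) / (3840 * 2160)
print(f"1440p is ~{ratio:.0%} of the 4K pixel/triangle load")
```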
It's got a God-tier storage system which is pretty far ahead of PCs, but on a high-end PC with an SSD and especially with NVMe, you get awesome performance too."
This line is odd, surely everyone in the game industry is using SSDs and some NVMes at this point? Is the storage in the PS5 a different architecture than this? Is it faster?
Man, I wonder what the asset creation pipeline for something like this would look like. Dynamesh > Decimate > Import into Substance Painter with 8k textures > Crash.
The tech demo looks amazing!!! I can't believe we're finally getting rid of baking. @Udjani we can probably just create a mid-poly model, throw that into Painter, and use normal map decals for the small details.
What's kinda weird to me is how this is suddenly running like this on console hardware. The specs of the PS5 are good but not unbelievable. They say the SSD has around 5 GB/s of bandwidth, but a lot of PC NVMe SSDs are similar. Also, NVMe hasn't shown much difference in games compared to a SATA SSD until now. I'm also wondering if this "Lumen" thing is hardware-independent raytracing? RTX ray tracing on PC has bad performance and needs to be used with a lot of caution. What has changed?
@Obscura I remember when AMD's CEO said that they would support raytracing, but only when it could be done right; maybe this is it. There are also a lot of rumors about the new Nvidia cards that are supposed to be much better at raytracing.
I wonder if Epic is going to help turn Quixel Mixer into something viable for texturing these dense assets. It doesn't need much, just an auto-UV algorithm that doesn't completely suck.
That was amazing. Unlimited detail finally happened.
That said, games with needlessly high poly models are going to need equally insane amounts of storage. Console SSDs will only be so large, and many people will be downloading these games instead of buying them on disc (not too keen on downloading 500+ GB of data for a single game...).
zachagreg said: This line is odd, surely everyone in the game industry is using SSDs and some NVMes at this point? Is the storage in the PS5 a different architecture than this? Is it faster?
The PS5 has specialized I/O hardware for the purpose of getting around typical NVMe bottlenecks, and the SSD in the PS5 itself is also somewhat customized. NVMe on PC has thus far been badly bottlenecked, to the point where the difference between a SATA and NVMe drive is imperceptible 99% of the time (not counting synthetic benchmarks). Sony is the only company so far that seems to have seriously wanted to tackle this issue.
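For a rough feel of what those bandwidth numbers mean per frame, here's a simple budget calculation (the 5.5 GB/s figure is Sony's published raw spec; the per-frame framing and the other drive numbers are just my own illustrative assumptions):

```python
# How much data can be streamed in during a single frame at a given
# sequential read bandwidth? Assumes the I/O stack actually sustains the
# rated throughput, which PC NVMe often doesn't in practice.

def per_frame_budget_mb(bandwidth_gb_s: float, fps: int) -> float:
    """Megabytes that can be streamed in during one frame."""
    return bandwidth_gb_s * 1024 / fps

for label, bw in [("SATA SSD (~0.5 GB/s)", 0.5),
                  ("Typical PC NVMe (~3.5 GB/s)", 3.5),
                  ("PS5 raw spec (5.5 GB/s)", 5.5)]:
    print(f"{label}: {per_frame_budget_mb(bw, 30):.0f} MB/frame @ 30 fps, "
          f"{per_frame_budget_mb(bw, 60):.0f} MB/frame @ 60 fps")
```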
Will skipping the optimization phases help reduce the infamous 'Crunch Time' at AAA studios?
If game studios truly start skipping the optimization parts of the art pipeline, what are the file sizes going to be like? Are the mid-late generation UE5 games going to be like 9 discs to install, and one disc to play? What about downloading games?
Will this have any impact on the film/VFX/Animation industry? The last few years I've seen Unreal Engine start popping up in behind the scenes content, like for "The Mandalorian". Are render farms going to one day be a thing of the past?
Will I even be able to run Unreal Engine 5 on my current gen PC?
It's all about the Cloud; it's all over after the Next Gen Console Thread. This generation will be the end of what we know as consoles in the house, with SSDs and discs.
Was I the only one who got his next-gen expectations crushed early by those hair physics?
Anyway, the big thing seems to be scalability, which I imagine can only mean nice things for the whole PITA that LODs are. The rest of the lofty promises I will treat with tempered expectations. As should you.
Oh and apparently we are going to need all new rigs and all new content creation software to even handle the asset complexity some people are dreaming about. Sounds nice...?
zachagreg said: This line is odd, surely everyone in the game industry is using SSDs and some NVMes at this point? Is the storage in the PS5 a different architecture than this? Is it faster?
What's kinda weird to me is how this is suddenly running like this on console hardware. The specs of the PS5 are good but not unbelievable. They say the SSD has around 5 GB/s of bandwidth, but a lot of PC NVMe SSDs are similar. Also, NVMe hasn't shown much difference in games compared to a SATA SSD until now. I'm also wondering if this "Lumen" thing is hardware-independent raytracing? RTX ray tracing on PC has bad performance and needs to be used with a lot of caution. What has changed?
One thing that most people overlook about SSDs is that to this day they remain an optional piece of hardware. Basically all software is required to work properly without one; you don't see it listed as a hard requirement anywhere. That being said, no software is actually designed with SSDs in mind. Yes, you can speed up some processes by using an SSD, but it works transparently to most current-gen software. Things get different with the next-gen consoles, where an SSD is a given, and while it may not be the fastest in terms of raw bandwidth, it is still lightning fast compared to what current-gen software is designed to work with, on top of being heavily customized for specific needs. So I believe that having an SSD as a standard component of every console unit will lead to new software design that makes SSD technology really shine, far beyond simply improving loading times in specific places.
This is every artist's wet dream, finally a step forward. The workflow of making a high poly only to throw it away after wasting time baking it down to a lower-res model is obsolete and over a decade old. The fact that this has come out will unleash our potential!
Don't be scared guys, this purely benefits us and our artistic qualities as artists. The poly count isn't what is standing out here (that's just for marketing), but the fact that we can now remove the obsolete workflow of baking/retopo for enviro/hard-surface assets. We will have to learn to texture assets more like film/VFX artists, but it will be worth it. Realtime-engine filmmaking has been crossing over into gamedev for a while now, most prominently with shows like The Mandalorian using Unreal Engine as a backdrop.
Unfortunately, I don't see how this could benefit character artists, since they will still be constrained to making usable topology for animators.
We should all be celebrating the bright future we have. We get more creativity at our hands to be actual artists rather than wasting time cranking out technical art conforming to budgets like machines.
I seriously doubt this will remove the need or want for baking overall
From what I've seen of the DirectX 12 presentation, you'll have to cut up your meshes into chunks pre-import for this. And yes, stones are clearly the best case for such tech.
https://www.youtube.com/watch?v=CFXKTXtil34
Also, what's the file size going to be like? Making a full game in that fashion is then 20 TB? Where would I get that from? Loading and rendering that is nice and all, but it needs to come from somewhere. I'm not importing 500 MB highpoly meshes en masse in any scenario, that's just awful, but it's great to have the option. The lighting is more impressive, especially as Unity has no lighting solution that works decently at the moment.
No intermediate scripting language in Unreal 5 is really a huge misstep from Epic, however. There's no way I get real programmers onto C++ or Blueprints, and even where they like it, Unreal will still be a no-go for many people or teams because of that, sadly. With Unity at a vulnerable point this would have been a very strong move, but instead we get even more graphics rather than the things the engine desperately needs, like making it viable for non-AAA teams, who can't afford dev teams crunching their lives away in C++ to make things work, to ship non-FPS/TPS games.
I was wondering about the hard-surface workflow too, so I made a test.
I took an AK-47 subd model, removed all of the very simple meshes like bolts and cylinders that would be easy to make an optimized version of, and then decimated the rest of the model. It ended up at 460k tris, and the FBX is 44 MB. Compared with an optimized version of another low-poly gun that I have, whose FBX only weighs 880 KB, that is quite a lot of difference.
Even if Unreal had no problem at all with a mesh like this, exporting/importing and storing it could be quite a pain.
Anyway, I still think and hope that we can say goodbye to baking normals on razor-sharp edges like we do now. Having a 3-5 edge bevel with weighted normals would make the mesh look much nicer and wouldn't be heavy either.
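Out of curiosity I ran the same comparison as rough arithmetic - the ~100 bytes per triangle of the decimated FBX versus what a packed binary vertex format would need. Everything besides the 460k / 44 MB / 880 KB figures from the test above is my own assumption:

```python
# Per-triangle cost of the decimated AK test vs. a packed GPU-style format.

fbx_bytes     = 44 * 1024 * 1024   # decimated high-poly FBX from the test
tris          = 460_000
lowpoly_bytes = 880 * 1024         # the optimized low-poly gun's FBX

print(f"FBX: ~{fbx_bytes / tris:.0f} bytes per triangle")

# Assumed packed format: position (3 x float32) + octahedral normal (2 x int16)
# + UV (2 x int16) = 20 bytes per vertex, plus 3 x uint32 indices per triangle.
verts = tris // 2                  # ~half as many vertices as triangles on a closed mesh
packed_bytes = verts * 20 + tris * 12
print(f"Packed estimate: ~{packed_bytes / 2**20:.1f} MB "
      f"(~{packed_bytes / tris:.0f} bytes per triangle)")
print(f"Still ~{packed_bytes / lowpoly_bytes:.0f}x the optimized low-poly FBX")
```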
Anything that gets animated still requires good topology. Just because something is technically possible doesn't mean it's the right thing to do. There is no need to create trillions of polys if no one will ever see them; it's only a waste of resources. As much as needed, as little as possible.
Are any of the AAA devs here able to tell us what the average file size of the meshes for a whole AAA project is?
It would be interesting to know whether, or how feasibly, the highpoly meshes could be decimated. We would get absurdly high-detail meshes for around 50x or 100x the file size of current meshes (or at least that is what I calculated off the top of my head).
It looks amazing, especially considering this is running on a console. I cannot imagine what kind of resources would be needed to create a full-scale game with this level of detail. If just one asset looks "average", you ruin the whole scene. Would it be feasible? As much as I love this eye candy, in a game I would prefer visual variety to visual awesomeness, if I had to choose.
Would I be wrong to assume that we could see practical use of displacement mapping (with proper subpixel tessellation)? Not really what the presentation was about, but if the engine can handle that kind of polycount, it might become possible to use it in production...
I think you would be in the realm of reality with that assumption. I know the actual computation of tessellation and displacement will eventually add up, but rather than super-highpoly assets being thrown into the engine at a couple of million triangles each, I think we will start to see a big rise in displacement map usage and tessellation in games. That seems to me like a more usable solution than the large file sizes everyone is talking about.
But it may also not be the case at all, depending on how "Nanite" actually works.
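To make the displacement idea concrete, here's a tiny numpy sketch of tessellating a quad into a dense grid and pushing the vertices by a heightmap - purely illustrative, and not a claim about how Nanite itself works:

```python
import numpy as np

# Tessellate a flat quad into an N x N vertex grid and displace it along +Z
# using a heightmap: the basic "store the detail as a displacement texture,
# generate the dense geometry on demand" idea.

def displace_grid(heightmap: np.ndarray, scale: float = 1.0) -> np.ndarray:
    n = heightmap.shape[0]
    u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    verts = np.stack([u, v, heightmap * scale], axis=-1)   # (n, n, 3)
    return verts.reshape(-1, 3)

# Random 256x256 heightmap standing in for a baked displacement texture.
rng = np.random.default_rng(0)
hm = rng.random((256, 256)).astype(np.float32)

verts = displace_grid(hm, scale=0.05)
tris = 2 * (256 - 1) ** 2
print(f"{verts.shape[0]:,} vertices, {tris:,} triangles from one 256x256 displacement map")
```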
Are any of the AAA devs here able to tell us what the average file size of the meshes for a whole AAA project is?
Hundreds of GB, just for meshes. For textures, you're looking in the terabytes range most likely - that's typically what large numbers of Painter files run into, especially at the resolutions they're talking about here.
obviously I'm talking about the high-res source data, not what it gets compressed and optimized down to
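For anyone wondering where "terabytes" comes from, some quick arithmetic with made-up but plausible numbers - the asset count, map count and resolution here are my assumptions, not anyone's actual project stats:

```python
# Rough source-data estimate for a AAA project's texture library.
# All inputs are illustrative guesses.

assets          = 3000       # unique textured assets
maps_per_asset  = 5          # albedo, normal, roughness, metalness, AO...
resolution      = 8192       # 8k source maps
bytes_per_texel = 4          # uncompressed 8-bit RGBA working data

total_bytes = assets * maps_per_asset * resolution**2 * bytes_per_texel
print(f"~{total_bytes / 2**40:.1f} TB of uncompressed 8k source maps")
# Layered Painter project files are typically several times larger again
# than the flattened exports counted here.
```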
This is a really cool tech demo, but good luck making a real game with it. There are almost certainly a *ton* of severe drawbacks and restrictions to using this tech, which you simply haven't been told about because it is a marketing video. It would be quite naive to think this is going to significantly disrupt the status quo of how art production for games is done in the next few years.
Would I be wrong to assume that we could see practical use of displacement mapping (with proper subpixel tessellation)? Not really what the presentation was about, but if the engine can handle that kind of polycount, it might become possible to use it in production...
Not really. Based on the little hints here and there, it's more like the micropolygons in REYES renderers. Both of these have a wiki page, though. It's some heavily technical stuff, and I think artists don't need to know how it works, because it happens under the hood. It basically recreates your mesh with some dense topology, plus auto-LODs with less dense (but still dense) topology, like mesh shaders.
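Here's a toy version of that "pick whatever level of detail gives roughly one triangle per pixel" idea - a guess at the general principle, not Nanite's actual algorithm:

```python
import math

# Toy LOD picker: choose the most detailed level whose triangle count does
# not exceed the number of pixels the mesh covers on screen. Each level is
# assumed to halve the triangle count, like a typical auto-LOD chain.

def pick_lod(full_res_tris: int, covered_pixels: float) -> int:
    if full_res_tris <= covered_pixels:
        return 0                       # full detail is already ~1 tri/pixel or less
    return math.ceil(math.log2(full_res_tris / covered_pixels))

mesh_tris = 33_000_000                 # e.g. one statue-sized source mesh
for pixels in (2_000_000, 200_000, 5_000, 50):
    lod = pick_lod(mesh_tris, pixels)
    print(f"{pixels:>9,} px on screen -> level {lod} (~{mesh_tris / 2**lod:,.0f} tris)")
```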
I have my doubts about this. The demo looks amazing, but the whole 'no baking' talk is nonsensical (from what we currently know). Importing hundreds of static meshes that have been subdivided and dynameshed straight into the engine would send the frame rate into the toilet. The file sizes are massive compared to a baked low poly with a normal map. I can barely import a few million-poly meshes into Marmoset Toolbag on my beefy PC (2070, 32 GB RAM, 3700X, etc.) without severe loading times and occasional crashing.
Would be interesting to see how this new system copes with trees, 3D grass, etc. - things that traditionally use alpha clip/blend even in movies.
And if it's micropolygon-based, my guess is it would still be necessary to bake displacement textures? Streaming all of the raw geometry in and out all the time would be insane.
Even Clarisse, which is able to render billions of polys in a viewport, needs optimization, so I am not sure how they could do it purely in the engine. Some super clever AI guessing it all, together with GI without actual ray tracing?
Isn't there some presentation coming up soon for the next generation of Nvidia GPUs? I'm betting they are going to one-up this demo with another round of game<->movie convergence.
Looking forward to people registering to ask if you can have a career in games just sculpting really high poly rocks.
Hundreds of GB, just for meshes. For textures, you're looking in the terabytes range most likely - that's typically what large numbers of Painter files run into, especially at the resolutions they're talking about here.
So I don't know how any of this works, but out of curiosity, are there not also huge savings to be made by upping the model resolution? As you said, textures are orders of magnitude bigger than meshes, so I would assume removing the need for some of those textures by increasing model complexity would be a net positive for final game size. For example, would a low poly gun with high-res normal and AO maps be larger than a high poly gun that doesn't need those maps?
The highpoly raw mesh of the gun will still need textures somehow... It can't be vert paint, since you'd need multiple channels for the different specular/roughness/etc. values. It'll most likely need a ridiculously high-res texture a la Megascans.
I don't think we need to worry about the memory cost of high-res textures anymore. Virtual texturing in Unreal does a crazy job of keeping them small. Just today I was comparing a sparse 8k non-virtual texture (with an alpha channel) to a virtual one: the standard one used 65 MB of memory while the virtual one only used 6.
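Those two numbers line up roughly with this kind of arithmetic - the page size and resident-page count below are my guesses at how a sparse virtual texture might behave, not measured UE internals:

```python
# Why a sparse 8k texture can cost so little once it's virtual:
# only the tiles (pages) that actually get sampled need to be resident.

res            = 8192
bc_bytes_texel = 1.0      # BC7/DXT5-class compression is ~1 byte per RGBA texel

full_resident = res * res * bc_bytes_texel
print(f"Fully resident 8k (block compressed): ~{full_resident / 2**20:.0f} MB")

# Virtual texturing splits the texture into small tiles, e.g. 128x128 texels.
tile_bytes     = 128 * 128 * bc_bytes_texel
resident_tiles = 400      # guess: tiles actually touched for a sparse texture
print(f"~{resident_tiles} resident 128x128 tiles: "
      f"~{resident_tiles * tile_bytes / 2**20:.1f} MB")
```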
The highpoly raw mesh of the gun will still need textures somehow... It can't be vert paint, since you'd need multiple channels for the different specular/roughness/etc. values. It'll most likely need a ridiculously high-res texture a la Megascans.
Sure, but you need those textures regardless of which rendering method you use. What I'm wondering is whether the file size increase from the higher triangle count can be offset by losing the normal/AO maps?
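For what it's worth, here's that trade-off as naive arithmetic - the triangle counts, map resolutions and per-vertex sizes are all assumptions I picked just to make the comparison concrete (albedo/roughness maps are needed in both cases, so they cancel out and are left out):

```python
# Does dropping the normal/AO maps pay for the extra triangles? Naive estimate.

def texture_bytes(res: int, bytes_per_texel: float) -> float:
    return res * res * bytes_per_texel * 1.33          # +33% for the mip chain

def mesh_bytes(tris: int, bytes_per_vertex: int = 20, bytes_per_index: int = 4) -> float:
    verts = tris / 2                                   # closed-mesh approximation
    return verts * bytes_per_vertex + tris * 3 * bytes_per_index

lowpoly  = mesh_bytes(30_000) + texture_bytes(4096, 1.0) + texture_bytes(4096, 0.5)
#          30k-tri gun          4k BC-compressed normal    4k BC4 AO map
highpoly = mesh_bytes(3_000_000)                       # 3M-tri gun, no normal/AO bake

print(f"Low poly + normal/AO maps: ~{lowpoly / 2**20:.1f} MB")
print(f"High poly, no bakes:       ~{highpoly / 2**20:.1f} MB")
```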
I'm going to work on the assumption that Nanite revolves around vector displacement or voxelisation, since I can't think of any other way to make a mesh resolution-independent. Whether that's exposed to the user or done as part of the build process is anyone's guess at this stage.
Assuming that's the case (it probably is), they'll be streaming in whichever mip of the displacement texture is required for the current view distance and generating the geometry from that, so in terms of resource cost at runtime it should be pretty dynamic, flexible and efficient.
You'll still have to make good models with UVs and LODs though, I'm afraid; this stuff can only add information, not take it away.
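A tiny sketch of the "stream whichever mip the current view distance needs" part (this is just standard texture-LOD math, nothing UE5-specific):

```python
import math

# Which mip of an 8k displacement texture do you need so one texel maps to
# roughly one screen pixel at a given viewing distance?

def required_mip(texture_res: int, texture_world_size: float, distance: float,
                 fov_deg: float = 90.0, screen_width_px: int = 2560) -> int:
    # World-space width covered by one screen pixel at this distance.
    pixel_world = 2 * distance * math.tan(math.radians(fov_deg) / 2) / screen_width_px
    texel_world = texture_world_size / texture_res
    # Each successive mip doubles a texel's world-space footprint.
    return max(0, math.floor(math.log2(pixel_world / texel_world)))

for d in (1, 5, 20, 100):   # metres from the camera, texture covering a 4 m surface
    print(f"{d:>4} m away -> stream mip {required_mip(8192, 4.0, d)}")
```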
This looks so impressive. No more lightmap or texture baking? Millions of polygons? Unbelievable stuff...unreal even oh ho ho!
https://www.eurogamer.net/articles/digitalfoundry-2020-this-is-next-gen-unreal-engine-running-on-playstation-5
"And that, in a nutshell, is the definition of a micro-polygon engine. The cost in terms of GPU resources is likely to be very high, but with next-gen, there's the horsepower to pull it off and the advantages are self-evident. Rendering one triangle per pixel essentially means that performance scales closely with resolution. "Interestingly, it does work very well with our dynamic resolution technique as well," adds Penwarden. "So, when GPU load gets high we can lower the screen resolution a bit and then we can adapt to that. In the demo we actually did use dynamic resolution, although it ends up rendering at about 1440p most of the time.""
"A number of different components are required to render this level of detail, right?" offers Sweeney. "One is the GPU performance and GPU architecture to draw an incredible amount of geometry that you're talking about - a very large number of teraflops being required for this. The other is the ability to load and stream it efficiently. One of the big efforts that's been done and is ongoing in Unreal Engine 5 now is optimising for next generation storage to make loading faster by multiples of current performance. Not just a little bit faster but a lot faster, so that you can bring in this geometry and display it, despite it not all fitting and memory, you know, taking advantage of next generation SSD architectures and everything else... Sony is pioneering here with the PlayStation 5 architecture. It's got a God-tier storage system which is pretty far ahead of PCs, bon a high-end PC with an SSD and especially with NVMe, you get awesome performance too."
But it's Sweeney's invocation of REYES that I'm particularly struck by. The UE5 tech demo doesn't show extreme detail at close range, it's also delivering huge draw distances and no visible evidence whatsoever of LOD pop-in. Everything is seamless, consistent. What's the secret? "I suppose the secret is that what Nanite aims to do is render effectively one triangle for pixel, so once you get down to that level of detail, the sort of ongoing changes in LOD are imperceptible," answers Nick Penwarden. "That's the idea of Render Everything Your Eye Sees," adds Tim Sweeney. "Render so much that if we rendered more you couldn't tell the difference, then as the amount of detail we're rendering changes, you shouldn't be able to perceive difference."
We do that all day for offline rendering.
https://www.anandtech.com/show/15352/ces-2020-samsung-980-pro-pcie-40-ssd-makes-an-appearance
What are next gen art pipelines going to be like?
Doubtful that it'll reduce crunch; bad management will still be bad management.
https://quixel.com/megascans/collections?category=environment&category=natural&category=limestone-quarry
Should be enough for... one or two games... ahhh c'mon, let it be three!
Hehe, exactly my thoughts.