I dl'd it. Took like 8 hours because I live in a rural area and lol @ US internet infrastructure. So far it's just UE4 with a new UI. I can't seem to find how to do nanite stuff. I imported a ~12M polygon mesh which took a few minutes to load but it never did the magic nanite stuff. It definitely did not like that many polygons. I assume there's some kind of plugin or something coming down the line for that later. Also there's no documentation that I can find.
Don't be foolish and think unlimited polycount is the new workflow. Your TDs are going to kill you. You won't have enough disk space in production, and it won't make the game better.
There are two ways to create UVs for such high-poly assets. One is to UV your basemesh. If you don't have a basemesh, create an auto-retopo, do the UVs there, and transfer them.
Each one is around 12M. I'm still on a GTX 970 and 16GB of DDR3. It definitely starts cutting into my frame rate, but it's fine for making portfolio stuff.
Even if you could just toss ZBrush sculpts into the engine, that doesn't really open up that much possibility. That's a slow way to work in general, and many assets can't be made that way. Many assets have to be made in regular DCCs like Max/Maya/Blender, and a pipeline where every minor prop is too heavy will be a nightmare to handle even if the game engine can carry the weight.
I think it will help people relax in general by giving more breathing room, and also help the teams who really know what they are doing push the tech, but I don't think it means artists suddenly stop doing retopo and only become polygon soup chefs.
If you read through the documentation it is very revealing. Nanite + Lumen both look like they were built for a fairly targeted set of content, rather than being highly generalized solutions. It'll be interesting to see how the technology develops.
Seems to me like it could be part of Epic's push into Film & TV: make it so those teams don't have to radically alter their asset production to make use of real-time rendering.
Sounds like it might be the case, with the expectation that games hardware will catch up in time, at which point these systems can gradually start to be deployed there too. And in the meantime you might still be able to utilize this stuff here and there for special cases.
Like - I imagine - the Alembic hair import systems might have been put in with similar objectives, but could see some action in games too, depending on the scope.
For everything else you have pretty hard limits on how large or demanding your games can get before you start to limit your audience one way or another.
There are cool things in it for sure, but from what I've seen the translucency and reflections seem to have taken a hit. I hope they find a better solution soon.
Lumen is a huge improvement over the disparate pre-existing lighting systems - that's the killer feature imo.
Nanite looks like it's going to be pretty useful - never mind the bullshit about dumping million-tri meshes into your project, but the instancing seems to be really effective and the auto-LODing is a lot better than letting Unreal or Simplygon (or an artist) take a massive shit on the geometry.
I did manage to provoke some bugs - raytraced shadows started artefacting on nanite geometry, and I think the streaming system got a little bit confused. But for an early access release it seems to be in pretty good shape.
The sample project doesn't run too well even on beefy machines though. Nanite is certainly useful, but I realized that Lumen, for people who already own RTX cards, is not too interesting - especially given that it currently runs worse than DXR ray tracing and at lower quality too.
I didn't open the sample project - I assumed it was bollocks like most of their others
I've not tried on my big computer yet but on a 10900k with a 2070super I'm definitely seeing better performance for a given quality level than I am on ue4 with rtx turned on - particularly with regard to GI. It's possibly not as accurate but I'll gladly take that in exchange for how much simpler and more predictable lighting is - especially with fully dynamic scenes using realistic values.
Admittedly that's still a pretty high-end machine, but we're looking at tech that won't mature for a couple of years - midway through this console generation I'd expect it to be a shitload faster and thus a viable option for use in an actual game.
I'll be interested to see what it's like on the 3080 since ue4 with rtx is actually a viable option on that but that's a job for the weekend
I've tested the sample project on multiple high-end machines by now (Intel vs AMD, mainly) with 2080 Tis at 1080p to 1440p on ultra settings, and I'm getting between 30 and 45 fps depending on the view. That's everything maxed out except the console settings. I'd say this is OK but not great, and it's certainly not like it will just work for a fully blown game project with all the high-poly meshes artists would want to throw in. There is still a nice middle ground, and you still need to be careful with the powerful tools you just got.
Absolutely - there's no magic bullet here. What they've done with nanite is remove the problems associated with shit or overworked artists - I assume you pay for it with memory although I haven't found a stat that actually tells you what the memory cost of a nanite mesh is
So far - there seem to be a lot of (prop) situations where nanite negates the need for a baked/unique normal map - whether that's useful in practice or not depends how memory for the nanite meshes is managed but there's potential there.
@poopipe, in the documentation I linked a few posts above there is a section that goes into a little depth about those concerns:
A little ways down from my screen capture they also consider the bigger picture when you include textures, etc. It looks like the nanite mesh is mildly larger than a low poly with 4K maps. But who is using 4K maps for terrain assets? Anyway, they have some words about that stuff.
So now that UE5 is out, what's the TLDR on nanite?
My understanding is it's basically a system where a mesh is pre-processed into an octree format ("virtualized geometry") to make it fast enough to find out which triangle is closest to the camera for each pixel each frame, thus avoiding the need to render more than one triangle per pixel (a sort of per-pixel occlusion)? This is then combined with an automatic LOD system (though I'm not sure if they're pre-processed or generated at runtime).
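To make my mental model concrete, here's a toy sketch of how such a hierarchy could be walked each frame. This is mine, not Epic's - the structure, the names, and the 1-pixel threshold are all assumptions:

```python
# Toy model of my understanding - NOT Epic's actual implementation.
# The mesh is pre-processed into a hierarchy of triangle clusters; at
# render time you stop descending as soon as a cluster's simplification
# error would be smaller than a pixel on screen.

from dataclasses import dataclass, field

@dataclass
class Cluster:
    triangles: int                                  # tri count at this LOD
    error: float                                    # world-space simplification error
    children: list = field(default_factory=list)    # finer-detail clusters

def error_in_pixels(cluster, distance, screen_height_px):
    # crude perspective projection of the error onto the screen
    return cluster.error / max(distance, 1e-6) * screen_height_px

def select_clusters(cluster, distance, screen_height_px, out):
    if error_in_pixels(cluster, distance, screen_height_px) < 1.0 or not cluster.children:
        out.append(cluster)                         # coarse enough - draw this one
    else:
        for child in cluster.children:              # need more detail - descend
            select_clusters(child, distance, screen_height_px, out)

# tiny demo: a 2-level hierarchy, viewed from far away and then up close
leaf = Cluster(triangles=128, error=0.001)
root = Cluster(triangles=32, error=0.05, children=[leaf])
for dist in (100.0, 2.0):
    chosen = []
    select_clusters(root, dist, 1080, chosen)
    print(dist, [c.triangles for c in chosen])
# far away -> the 32-tri cluster; up close -> the 128-tri children
```

If that's roughly right, the "LOD" is just how deep you stop in the hierarchy, which would explain why it looks continuous rather than popping between discrete meshes.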
Haven't really dealt with the engine for a year or more now and was wondering: are they still expected to release 4.x versions at this point or is everything now being put into UE5?
Updates/fixes for 4.x are pretty much going to be non-existent now, based on what I heard in some (very good) training I took part in recently, so presumably it's the official line.
This is a shame since there's a number of things still borked in 4.x but it does rather suggest 5 isn't far off.
Thanks, shame indeed. My mind seems to auto-translate 'early access' to 'rough edges crashtastic edition' somehow.
One can see things a bit differently though - 4.26 could very well become the last/ultimate 4.x revision, and they could simply maintain it going forward, slowly but surely eradicating bugs and backporting some of the quality-of-life features from 5.0. Who knows if that will happen (did they even do that with 2.x and 3.x?), but that'd be pretty great.
UE5 is not production-ready, so 4.26 will - as far as I know - at least get a 4.27. And there is still a lot of similarity between 4 and 5, so I see the transition being rather smooth on both ends.
I downloaded the project and played with it a little bit. The min requirements say a 1080 Ti or Vega 64 and a 12-core CPU. I have an 8-core Ryzen 1700 and a 4GB Radeon 380, so I was expecting it not to run at all. Instead... it runs, at 20 FPS in the editor, 10-15 in "play mode", and it looks like a movie, basically.
About the "this is not infinite polys", "this doesn't change everything" using uppercase words to mock, some insult i read there in one post..... i know some people "know stuff" work with engines and 3d graphics and what not, but these statements are not something you can just make because "i'm an expert" or something, sometimes that is not even needed. I remember 1 year ago some people saying that in fact this was absolutely FAKE and the demo would be like terabytes and what not, well they were wrong and like always no one that say things that are completely wrong comes back to fix the mistake, like usually happens in the news. But no problem we can keep destroying this building with new destructive weapons, so now is "not for all cases" "not unlimited polys" and other negative comments, or overwhelming pessimistic i would say.
First, there is a content creator (happy, as I am) already playing with this engine who just put 300 billion triangles of assets on screen, and it runs with no problem, as it did with 30 million. So the first conclusion, "not infinite polys", should at least be "not infinite polys in all situations". Also, the word "infinite" or "unlimited" is tricky, because the engine isn't really rendering all of that. On the hard drive? Absolutely, hard drives have a limit, but I don't think ANYONE said anything about infinite assets, which is the more accurate word - assets, not polys; polys can be effectively unlimited. Instanced or not, the same assets repeated over and over? Yeah... such a bad thing, right? Well, maybe not, because the repetition won't necessarily be noticed - hello, almost every video game has repeated textures.
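And the instancing point is easy to sanity-check with napkin math. These numbers are made-up assumptions, but the shape of the argument holds:

```python
# Napkin math: instancing means the heavy mesh is stored once, and each
# repeated copy only costs a transform. Numbers are illustrative guesses.
mesh_disk_mb = 14.0              # one compressed high-poly mesh (assumed)
bytes_per_instance = 64          # a 4x4 float transform per placed copy

copies = 10_000
total_mb = mesh_disk_mb + copies * bytes_per_instance / 1024**2
naive_gb = copies * mesh_disk_mb / 1024

print(f"{copies} copies: ~{total_mb:.1f} MB instanced, not ~{naive_gb:.0f} GB")
# -> 10000 copies: ~14.6 MB instanced, not ~137 GB
```

So repeating one detailed asset 10,000 times is nearly free on disk; it's the count of *unique* heavy assets that costs you.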
Another thing about size: this is 100GB, and the map is HUGE even though the playable area of this project is small - the project is loading a huge area that in a commercial product could be a game map. So maybe the game is not 100GB, maybe it's 150GB; is that the end of the world? Is that "a really, really bad thing, a limitation"? Red Dead Redemption 2 was 100GB for an old-gen game, so expect any decent AAA game to use more than that.
Besides games - which need developers to decide to use this, and that is unpredictable, since human behavior can't be predicted from technicalities or objective merits; the best engine on paper can be ignored by most (CryEngine 2+), and developers have been lazy up until now, probably for economic reasons, I don't know, I'm not judging - if some studios do adopt this, there are many things it can be used for. Objects that can be repeated are an advantage, for example a building in a city, or, well... anything. I think they will use this for landscape. Also, this runs great on the latest cards; future cards will have more power, but all of that will go to resolution or something else, and DLSS is improving, so in two years people could be upgrading storage more than GPUs - I think that's the correct way to read this situation. Nothing stops people from piling up storage. The present consoles COULD be the bottleneck, but if at least one game CAN run on present consoles - I can imagine Sony releasing one that uses this; there must be a reason Sony was involved in the development of Nanite, huh? Still, considering the stuff in this project, 100GB is probably a demonstration of a standard for games using nanite, rather than "oh, these 100GB did just a little and a real game would need gazillions of GB" - no... just don't say that, it's CLEARLY not true. So expect at least some games, AAA at best, but I think many will adopt it in 2-4 years.
Someone mentioned movies, I believe? That's a great point, because for the plus-ultra pessimists who use every little technical corner to attack this engine, and the less pessimistic people... sorry, you have no place in the movie industry, sir. Movie studios, commercials, and whatever you like will have a revolutionary tool in hand with this.
I enjoyed playing this demo on a humble Radeon 380 4GB, absolutely outside the minimum specs, with a dirt-cheap SSD just in case. All the latest games I downloaded were 100GB or getting close; a Chinese anime game with crappy graphics is 25GB. So maybe don't be "SSD-phobic" and the problem is solved.
Another thing I forgot - did I already mention my Radeon R9 380 4GB? OK, and that it runs at 10-20 FPS? Great. So why are people saying this doesn't run well on high-end computers? And why are people treating this BETA/ALPHA (whatever it is) project as representative of what a game will be after optimization? I believe games are compiled first, and by the time the engine is released there will be a lot of optimization in many aspects - not just general performance but also the "size problem" some are so scared of.
Note: the project is 100GB, but 24GB when packaged. 75% of the space is taken up by textures, many of which are 8K and might not need to be. It also contains a lot of unique textures; if you could find a good way to use tiling textures, masks, and detail textures, you could probably get a lot of those environment assets to share textures.
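Napkin math on why the 8K maps dominate - this assumes typical block compression at roughly 1 byte per texel (BC7-ish) plus about a third extra for the mip chain, which are my assumptions, not measured figures from the project:

```python
# Rough disk/memory cost of one block-compressed texture (BC7-ish:
# ~1 byte per texel), with ~33% extra for the mip chain. Assumptions,
# not numbers measured from the UE5 sample project.
def texture_mb(res, bytes_per_texel=1.0, mip_factor=4/3):
    return res * res * bytes_per_texel * mip_factor / 1024**2

for res in (8192, 4096, 2048):
    print(f"{res} x {res}: ~{texture_mb(res):.0f} MB per map")
# 8192 -> ~85 MB, 4096 -> ~21 MB, 2048 -> ~5 MB per map
```

So every 8K map you drop to 4K claws back roughly 60MB; across hundreds of unique megascans that's most of the package.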
Attempting to do Vertex Painting in UE5. It won't paint in certain areas of the geometry. If I do a "fill" it floods with the proper material correctly. It was working fine in UE4. Any thoughts or ideas?
Tested it on some Starter Content as well, with the same issues.
First, there is a content creator (happy, as I am) already playing with this engine who just put 300 billion triangles of assets on screen, and it runs with no problem, as it did with 30 million. So the first conclusion, "not infinite polys", should at least be "not infinite polys in all situations". Also, the word "infinite" or "unlimited" is tricky, because the engine isn't really rendering all of that. On the hard drive? Absolutely, hard drives have a limit,
Content creators make their dime off of creating attention-grabbing videos. It's not that it's impossible, or that they're faking it. It's that they're doing something a developer wouldn't/couldn't do in a real development scenario. Even the documentation from Unreal points out the limitations of nanite. Sure, nanite could get better and produce even smaller file sizes. Then it would be more competitive with current sizes. But it's likely not going to be so good that you would want to have 300B tris on screen. This is why people, who are working in real development environments, are saying that it gives artists more room but doesn't mean you just dump your zbrush meshes into it.
I'm sure Epic understands the nature of social media and is counting on content creators to create videos like that. It creates hype and interest. You shouldn't take that for the state of things though. You're basically falling for what is functionally marketing/advertising.
Content creators also tend to have enthusiast level hardware because having a beefy rig is part of their job as content creators. It's how they produce content. They would absolutely be willing to go buy a couple new SSDs if it let them make videos about having 1 trillion tris on screen because that would translate to more eyeballs.
He seems like a film graphics guy, which is quite different from talking about nanite and game development. Plus he generates a lot of content off of showing off high-end graphics. Of course he's going to have a video about 300B triangles in UE5. Look how much content he already made for UE5. Look how much content people on youtube have made already. Look at Artstation trending right now. If you want to get eyeballs on your portfolio or channel, throw old work in UE5 and upload it.
Don't fall for the social media hype. Listen to the people here who have experience doing real game development.
I'm using an RTX 3090 and an AMD 5900X with 64GB RAM; the sample project averages 55-60 fps, and the dark world 70. But spikes occur that freeze the game for a whole second.
Just because you can use these meshes now doesn't mean you should. You don't want your project to exceed 100-150GB either way. I haven't tested everything, but for me it means you can use much more detail on your hero props, better displacement quality, and many more small meshes for the ground, like leaves and branches, as far as games go. Even if you had everything super high-poly at LOD 0, it still wouldn't be worth the amount of space they take. That sample scene is 100GB, but with lots of mockups of the same props.
That's smaller than expected. But I have more concerns about the data created during the modeling process. If you have a version control system like Perforce you need giant storage. Or don't save to the network.
The demo has performance spikes because there are no loading screens, so shaders are being built and things are being cached. A second playthrough should be smoother.
The demo is 24GB packaged, with 75% of the space taken up by textures (lots of unique 8K megascans textures). The packaged game doesn't include the source assets the way the project does.
I just realised that if everything runs so much faster in UE5, does that mean we are free to use many more draw calls? Because I would rather spend my budget there than on high-poly meshes.
How is this going to impact Environment Art? I started 3 years ago, got a job last year, and I just got to the point where I consider myself good to really good. I can't draw and don't know anatomy, so most of the stuff I make is real-world objects from references and concept art, especially guns. Also, I really like doing these kinds of objects - really understanding how they work and adapting them into a game asset, even if it's just a humble TV. I don't like the idea of scans replacing what I do. What use is there for me if you can just scan all the parts of a gun or prop and make a perfect game asset of it?
I realize that photogrammetry can't be used for stuff that doesn't exist, but won't it basically cut a lot of the jobs, with only the most experienced getting what's left? I think I'm pretty good, but I don't have years of experience under my belt and I'm afraid I won't be able to catch up.
Replies
can't be arsed firing it up
clearly im jaded
I think it shows how in this video.
Use only the polys you "really" need.
https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Nanite/
I think it will help people relax in general by giving more breathing room, and also help the teams who really know what they are doing push the tech, but I don't think it means artists suddenly stop doing retopo and only become polygon soup chefs.
edit - actually a lot of these thoughts are better answered by Epic already: Nanite Virtualized Geometry | Unreal Engine Documentation
Lumen is a huge improvement over the disparate pre-existing lighting systems - that's the killer feature imo.
Nanite looks like it's going to be pretty useful - never mind the bullshit about dumping million-tri meshes into your project, but the instancing seems to be really effective and the auto-LODing is a lot better than letting Unreal or Simplygon (or an artist) take a massive shit on the geometry.
I did manage to provoke some bugs - raytraced shadows started artefacting on nanite geometry, and I think the streaming system got a little bit confused. But for an early access release it seems to be in pretty good shape.
You can do a Smart UV Project in Blender and use the Triplanar function in Painter to make seams go away.
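If it helps, this is roughly the Blender side of that as a script - a minimal sketch to run in Blender's scripting tab with the mesh selected; the parameter values are just starting points, not gospel:

```python
# Minimal Smart UV Project sketch for Blender (2.9x+). Run with the
# high-poly mesh as the active object.
import bpy, math

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')     # unwrap every face
bpy.ops.uv.smart_project(
    angle_limit=math.radians(66.0),          # split islands at hard angles
    island_margin=0.02,                      # padding so bakes don't bleed
)
bpy.ops.object.mode_set(mode='OBJECT')
```

Then a fill layer with triplanar projection in Painter hides whatever seams the auto-unwrap leaves behind.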
I've not tried on my big computer yet but on a 10900k with a 2070super I'm definitely seeing better performance for a given quality level than I am on ue4 with rtx turned on - particularly with regard to GI.
It's possibly not as accurate but I'll gladly take that in exchange for how much simpler and more predictable lighting is - especially with fully dynamic scenes using realistic values.
Admittedly that's still a pretty high-end machine, but we're looking at tech that won't mature for a couple of years - midway through this console generation I'd expect it to be a shitload faster and thus a viable option for use in an actual game.
I'll be interested to see what it's like on the 3080 since ue4 with rtx is actually a viable option on that but that's a job for the weekend
What they've done with nanite is remove the problems associated with shit or overworked artists - I assume you pay for it with memory although I haven't found a stat that actually tells you what the memory cost of a nanite mesh is
So far - there seem to be a lot of (prop) situations where nanite negates the need for a baked/unique normal map - whether that's useful in practice or not depends how memory for the nanite meshes is managed but there's potential there.
A little ways down from my screen capture they also consider the bigger picture when you include textures, etc. It looks like the nanite mesh is mildly larger than a low poly with 4K maps. But who is using 4K maps for terrain assets? Anyway, they have some words about that stuff.
That totally depends on the game: how big is the world, and how many assets do you have in your game?
Geometry is 15 times bigger if we use the chart posted above.
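For what it's worth, a ~15x figure falls out of pretty ordinary assumptions. The per-triangle byte costs below are mine (the ~14 bytes/tri is the ballpark I remember from Epic's docs), not numbers taken from the chart:

```python
# How a ~15x geometry-size gap can fall out. Both per-triangle costs
# are assumptions, not figures from the chart posted above.
lowpoly_tris, nanite_tris = 30_000, 1_000_000
lowpoly_bytes_per_tri = 32       # indexed verts + UVs + tangents, amortized (assumed)
nanite_bytes_per_tri = 14        # compressed cluster data (ballpark from Epic's docs)

lowpoly_mb = lowpoly_tris * lowpoly_bytes_per_tri / 1024**2
nanite_mb = nanite_tris * nanite_bytes_per_tri / 1024**2
print(f"low poly ~{lowpoly_mb:.1f} MB vs nanite ~{nanite_mb:.1f} MB "
      f"= {nanite_mb / lowpoly_mb:.0f}x")
# -> low poly ~0.9 MB vs nanite ~13.4 MB = 15x
```

The per-triangle storage is actually cheaper for nanite; it's the 30x triangle count that produces the gap.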
My understanding is it's basically a system where a mesh is pre-processed into an octree format ("virtualized geometry") to make it fast enough to find out which triangle is closest to the camera for each pixel each frame, thus avoiding the need to render more than one triangle per pixel (a sort of per-pixel occlusion)? This is then combined with an automatic LOD system (though I'm not sure if they're pre-processed or generated at runtime).
It's not quite 1 triangle per pixel but otherwise I think you've basically got it.
It seems to be really, really good with instancing
And I don't think it's a gimmick
This is a shame since there's a number of things still borked in 4.x but it does rather suggest 5 isn't far off.
Really wish I was able to get a 3070 or 3080...
http://www.elopezr.com/a-macro-view-of-nanite/
https://forums.unrealengine.com/t/inside-unreal-nanite/232881
https://www.notion.so/Brief-Analysis-of-Nanite-94be60f292434ba3ae62fa4bcf7d9379
Just because you can use these meshes now doesn't mean you should. You don't want your project to exceed 100-150GB either way. I haven't tested everything, but for me it means you can use much more detail on your hero props, better displacement quality, and many more small meshes for the ground, like leaves and branches, as far as games go. Even if you had everything super high-poly at LOD 0, it still wouldn't be worth the amount of space they take. That sample scene is 100GB, but with lots of mockups of the same props.
Yes, but a majority of that was not nanite meshes... the texture data was far larger, for instance.
If you believe the developers, the nanite data on disk was 16.14GB in the demo.
https://www.youtube.com/watch?v=TMorJX3Nj6U&t=4300s&ab_channel=UnrealEngine
We'll figure it out soon.