So far, it seems like you can only use it for static stuff. Given the nature of the tech (if it's actually what we think it is), that's not a big surprise. If it really is micropolygons, then LOD is automatic. There is no vertex/index buffer as such, just a position map and a vertex normal map. LOD can be done by mipmapping these: fewer pixels, fewer polygons to spawn. Also, I believe a displacement map doesn't make sense in this context.
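Here's a minimal sketch of that mip-based LOD idea, assuming the mesh is stored as a geometry-image-style square position map (my own illustration, not Epic's actual implementation); picking a lower mip of the position map directly controls how many micropolygons get spawned:

    import numpy as np

    def lod_vertex_grid(position_map, mip_level):
        # Average 2x2 texel blocks per mip step, exactly like mipmapping a
        # texture; each remaining texel becomes one vertex of the spawned grid.
        grid = position_map.astype(np.float32)
        for _ in range(mip_level):
            r = grid.shape[0] // 2
            grid = grid.reshape(r, 2, r, 2, 3).mean(axis=(1, 3))
        return grid

    # Hypothetical 1k "geometry image" position map (sizes/names are made up).
    pos_map = np.random.rand(1024, 1024, 3)
    for mip in range(4):
        g = lod_vertex_grid(pos_map, mip)
        quads = (g.shape[0] - 1) ** 2
        print(f"mip {mip}: {g.shape[0]}x{g.shape[0]} verts, ~{2 * quads} tris")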
Maybe they have some new next-gen compression algorithms to go along with the jump up to 20 terabyte game files. They just didn't talk about that part because it's not as sexy in a graphics demo.
Actually, it should be possible to use this on animated meshes, but it would be pretty much like Alembic: a simple texture swap per frame. Not worse than Alembic, and you can stream it. So there is no vertex transform in the way we're used to.
Maybe they have some new next-gen compression algorithms to go along with the jump up to 20 terabyte game files. They just didn't talk about that part because it's not as sexy in a graphics demo.
I doubt any compression of that magnitude exists. No, I think they are really planning for the future.. I used to be impressed by my 2GB hard drive, where I didn't have to uninstall my old games when new ones arrived. Now... find a modern game that fits on 2 GB.
Actually, it should be possible to use this on animated meshes, but it would be pretty much like Alembic: a simple texture swap per frame. Not worse than Alembic, and you can stream it. So there is no vertex transform in the way we're used to.
I'm just thinking out loud though.
I think it's more likely to be a set of transforms from a base mesh (i.e. vector displacement) than a pseudo-volumetric object defined entirely by a texture (position/normal combo).
You lose an awful lot of flexibility by not having UVs to play with (tileables, shader animation, etc.), not to mention parity with VFX pipelines, which is clearly where they're looking.
I mean, you can store the UVs of the original mesh alongside the positions and normals too. In that case, the data would be almost identical to what we use now. The only difference is that it's in a more GPU-friendly format.
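As a rough illustration of what that per-texel layout could look like (purely a sketch on my part, nothing Epic has described), every attribute we currently put in a vertex buffer could just become another channel of the baked maps:

    import numpy as np

    # Hypothetical texel layout for a geometry-image-style asset: each texel
    # carries the same attributes a vertex in a vertex buffer would.
    res = 2048
    geometry_image = {
        "position": np.zeros((res, res, 3), np.float32),  # object-space xyz
        "normal":   np.zeros((res, res, 3), np.float16),  # vertex normal
        "uv":       np.zeros((res, res, 2), np.float16),  # original UVs, kept for texturing
        "color":    np.zeros((res, res, 4), np.uint8),    # optional vertex color
    }
    print({k: v.shape for k, v in geometry_image.items()})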
I hate to sound cynical, but some of the oooohs and aaaahs in reaction to this demo are really quite naive.
It doesn't matter if this new asset type allows for "importing straight from ZBrush", because besides local color, these fancy stone statues, environment rocks and metal pans still need at the very least AO and roughness data to look the way they do ... and even a ZBrush model with gazillions of polygons doesn't store that at this time. Of course in the case of some assets it may be possible to derive all passes from a limited dataset, but then things become more limiting as opposed to more freeing.
Besides, baking isn't necessarily a waste of time - it generates some useful datasets that applications like Substance can leverage to generate cavity, orientation, and so on. Think of high detail/sharp props like a gun part, the speed dial of a sports car, the body of a digital camera. Such assets actually benefit from a lowpoly+bake+textures approach. Just imagine how painful such assets would be to create if you didn't have the ability to apply textures to them, and even more so if you had no access to UVs.
Now of course nothing is fake in that demo ; but no, this tech will not magically replace any bake-centric pipeline with a bloated mesh/vertex color pipeline. To be fair Epic is partly to blame for this, as they framed this demo the perfect way to confuse the average YouTube gaming expert into thinking "OMG infinite detail".
Lastly, besides these very specific technical considerations ... never in the history of CG has "more detail" meant "faster creation time for a project" (besides the use of photoscan libraries of course, but that's beside the point). It has always been the exact opposite : more powerful tech handling more detail has *always* meant slower creation time overall, and a heavier load on the shoulders of artists in the trenches. If anything, film asset creation is now comparatively becoming lighter than asset creation for games, since at an equal level of detail film has no need for scenes to hold up to scrutiny at all angles, and detail/materials/textures can be scaled up/down per shot. Just think about it for a second.
Now on the plus side ... this tech would work extremely well for games embracing a somewhat medium detail, clay-like aesthetic like Dreams. As a matter of fact this is probably very similar tech under the hood, isn't it ? That would make sense.
At the end of the day the smoothest games hitting 60fps and scaling down well to low end platforms will always be the most well-optimized - not the ones with assets created in a bruteforce manner.
Since optimisation won't be an issue anymore, texturing might move towards something like Ptex, or maybe a better technique or evolved form of it? No need for UVs, and with Ptex you can have high resolution textures. That's what Disney has been using for their animated films, because they were never fans of UV texturing.
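For anyone who hasn't looked at it, here's a toy sketch of the Ptex-style per-face texturing idea (my own simplification, not the actual Ptex API): each quad face owns its own small texture, addressed by face id plus a local coordinate, so no global unwrap is needed:

    import numpy as np

    # One small texture per quad face instead of one global UV atlas.
    def make_face_textures(num_faces, res=16):
        return [np.random.rand(res, res, 3) for _ in range(num_faces)]

    def sample_per_face(face_textures, face_id, u, v):
        # (u, v) are local to the face, in [0, 1); there are no seams between
        # charts because there is no shared atlas to begin with.
        tex = face_textures[face_id]
        res = tex.shape[0]
        return tex[int(v * res), int(u * res)]

    faces = make_face_textures(num_faces=6)
    print(sample_per_face(faces, face_id=2, u=0.25, v=0.75))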
On this occasion, it's probably fair to say that this tech truly deserves the nomenclature, *Disruptive*
"To be fair Epic is partly to blame on this as they framed this demo the
perfect way to confusethe average youtube gaming expert into thinking
"OMG infinite detail"
...indeed, like their reaction to the whole Euclideon engine hullabaloo a ways back.
Keep in mind 5G networking: they are starting to roll it out across the world and it now runs at up to 10 gigabits per second, so it wouldn't be inconceivable to stream such a game from a remote server at low latency. It's looking like good news for Google Stadia in the long run.
Because besides local color, these fancy stone statues, environment rocks and metal pans still need at the very least AO and roughness data to look the way they do
This doesn't seem right.. Roughness sure, but this scene was built almost entirely out of Megascans stuff, and none of their photogrammetry models come with AO maps. The surface scans do, but I thought that was because those textures are used on surfaces with minimal geometry. If the model has all the silhouette information and you combine it with this upgraded lighting system, do you really need AO?
but no, this tech will not magically replace any bake-centric pipeline with a bloated mesh/vertex color pipeline.
And this seems to be a misconception a few people have. They never said UVs were going away, or that this was some kind of vertex colour pipeline. In the Eurogamer article, they talk about still having all the maps you're used to, except now you don't need AO, and that normal maps are small, tiling detail maps. As oglu said on the first page of this thread, how bad would it be to unwrap a base model, presumably once you reach a stage where the silhouette won't change and you're just adding surface details, then keep subdividing over that?
never in the history of CG has "more detail" meant "faster creation time for a project"
I'll admit this is where I could be very naive, but don't artists at AAA studios already make these super detailed models that just get baked down? If you take out the optimising LODs and making sure all the details you need are accurately captured in the bakes, surely that's time saved? Of course that time might be used to make more props or something, in which case, yes, more work. But at least that would be artistic work, not a technical requirement.
I have to say I'm with pior about the necessary workflow. You might save some time by skipping LODs, but you still need to make LOD0, which now has an even higher level of detail. You still need proper UVs and you still need proper topology. The biggest impact I would see is getting rid of the steps needed to make sure you get proper (normal) bakes via cages, UV splits, etc.
So you gain a bit but you most likely also have to pay a bit.
However... if I understand this right, this will just be another workflow you add to the already existing ones. So e.g. for animated characters you're back to the good old baking times + LODs.
On another note: I could imagine this becoming a bit nerve-wracking at the beginning, as your asset resolution grows but maybe your workstation RAM doesn't.
never in the history of CG has "more detail" meant "faster creation time for a project"
I'll admit this is where I could be very naive, but don't artists at AAA studios already make these super detailed models that just get baked down? If you take out the optimising LODs and making sure all the details you need are accurately captured in the bakes, surely that's time saved? Of course that time might be used to make more props or something, in which case, yes, more work. But at least that would be artistic work, not a technical requirement.
The highres sculpts aren't really in a format you could just stick into a game. A ton of subtools and cheap cheats to achieve the look, uneven mesh density between subtools - or, for the folks who go crazy with Dynamesh and the like, uneven density within a highres mesh - really unoptimized or broken up UVs...
And if you for some reason need multiple UV or vertex color channels or custom normals, then you will have to deal with the DCC software commonly available, which simply cannot handle high polygon counts (reliably). It's not even a question of buying a faster machine; the software internals aren't built for this.
So even just from a workflow perspective it's not going to be a general solution (any time soon).
The highres sculpts aren't really in a format you could just stick into a game. A ton of subtools and cheap cheats to achieve the look, uneven mesh density between subtools - or, for the folks who go crazy with Dynamesh and the like, uneven density within a highres mesh - really unoptimized or broken up UVs...
Okay, so here's where I prove I don't have a clue what I'm talking about haha (if I hadn't already...)
How would you normally bake sculpts like that? If you can bake textures from sculpts with uneven mesh density etc without it being noticeable on the resulting textures, why would it be noticeable if you just used that sculpt as is? As for UVs, would they really fall apart if you unwrapped at a "mid-poly" stage when the silhouette is defined and the shapes are manageable, before subdividing further for the surface details?
The highres sculpts aren't really in a format you could just stick into a game. A ton of subtools and cheap cheats to achieve the look, uneven mesh density between subtools
Does that matter though, with their micropolygon engine? Aren't the meshes retessellated at render time?
And this seems to be a misconception a few people have. They never said UVs were going away, or that this was some kind of vertex colour pipeline.
I think they said they use the full 8K maps with virtual texturing. So the high res models still need UVs and the usual texturing process. I imagine if that's the case, after some performance updates to support such geometry, you could bring a ZBrush model into Substance Painter, auto-unwrap, and bake curvature or AO or whatever else you need directly from the mesh. I think you could also make use of the displacement and use height maps to add geometric details to a mid-poly model, then just export the mesh with the displacement subdivision applied.
Any thoughts on what this could mean for vegetation in games? If we could start using geometry for twigs and leaves instead of billboards it would be a dramatic visual improvement, however I imagine that there's no feasible performant way to actually deform/animate that much geometry? They would obviously still have to be authored with proper topology and uv sets.
Even if using high poly meshes for everything were technically feasible with this new tech, we're not getting any of that on anything other than hero assets for a long time still. This tech demo likely already weighs in at dozens of GB. Having a game full of high poly meshes and 8k textures is simply not feasible with today's storage and bandwidth limitations, even if the hardware could theoretically run it. It would be a logistical nightmare from the authoring to the storing and versioning, all the way to shipping a 1TB game.
Even the VFX industry typically doesn't use high poly meshes because DCCs and pipelines in general can't handle them, it's much more efficient to use simpler topology, subdivision and displacement.
I know Polycount is not huge on the engine side of things, but I expect a few things with this tech:
- If you worry about shipping a game with full-res source assets, which causes runtime memory, storage and content security concerns, then Epic is likely storing the source asset in a pre-processed, compressed way: something like a voxel representation of the source asset that still allows fast rasterization at runtime.
- The key benefit doesn't appear to be "no more LODs" or "20 mil micropolygons on screen" - the hardware limit is still the same - but rather the by-product of this tech: high-res real-time shadow maps. That can't be easily emulated in a traditional pipeline.
- This tech won't solve the texture limit, which is again imposed by hardware. But for scenes like the demo, with tiling textures and only a few hero asset textures, you can use 8K without breaking PS5.
- The dynamic GI is also SDF / voxel-based, which is a more "traditional" solution if you compare it to static GI baking; I guess it will require more computation at runtime, and thus can only be done on PS5-level hardware. Previously I wondered if it depends on the special pre-processed mesh data from Nanite; from the article, I guess not.
- Things like foliage and transparency will be very hard to do, not sure they can make it even with another 6-12 months of R&D.
"- If you worry about shipping a game with full res source asset, which cause both runtime memory, storage and content security concerns, then Epic is likely storing the source asset in a pre-processed compressed way: like a voxel representation of the source asset, but allow fast rasterization at runtime."
- Voxel representation needs to be stored for the SDF system anyway. The virtual texturing feature compresses textures like crazy. Some textures can show a 10x smaller memory footprint and storage size than a standard texture.
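A toy sketch of why virtual texturing saves that much memory (my own illustration with made-up numbers, not Epic's implementation): only the tiles that were actually requested last frame, at the mip level they were requested at, stay resident, and a page table redirects lookups to them:

    # Hypothetical numbers: an 8k x 8k RGBA8 texture vs. a sparse resident set.
    TILE = 128                       # texels per tile edge
    full_bytes = 8192 * 8192 * 4     # ~268 MB if the whole texture were resident

    # Suppose feedback from the last frame says only these tiles are needed,
    # keyed by (mip, tile_x, tile_y); everything else stays on disk.
    resident_tiles = {(0, 3, 5), (0, 3, 6), (2, 0, 1), (4, 0, 0)}
    resident_bytes = len(resident_tiles) * TILE * TILE * 4

    def sample(page_table, mip, x, y):
        # The page table maps a virtual tile to a physical cache slot, or
        # falls back to a lower mip if the tile isn't resident yet.
        return page_table.get((mip, x, y), "fallback_to_lower_mip")

    print(f"fully resident: {full_bytes / 1e6:.0f} MB, "
          f"resident tiles: {resident_bytes / 1e6:.2f} MB")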
"- This tech won't solve the texture limit, which is again imposed by hardware. But for scenes like the demo, with tiling textures and only a few hero asset textures, you can use 8K without breaking PS5."
- Again, virtual texturing is the key.
"- The Dynamic GI is also SDF / voxel-based, which is a more "traditional" solution if you compare it to static GI baking; I guess it will require more computation at runtime, thus can only be done on PS5-level hardware; previously I wonder if it depends on the special pre-processed mesh data from Nanite, from the article I guess not."
- Yes, it needs some pre-processing, much like the current distance field shadowing system. Each mesh needs to have its SDF counterpart, which gets created and stored upon importing the mesh. The SDF serves as a scene representation and "acceleration structure" (instead of a bounding volume hierarchy for triangle ray tracing). If an SDF exists, ray marching against it is fast. But to put it simply, the stored SDF is already a voxel representation - each voxel stores the distance to the nearest occupied voxel. By "voxels", they most likely mean that the SDF has a color channel too. The color channels are needed in order to gather color information from the surroundings of a given pixel.
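For anyone who hasn't seen it, here's a minimal sphere-tracing (SDF ray marching) sketch, using a simple analytic sphere SDF as a stand-in; the engine would march a baked voxel SDF per mesh, but the stepping logic is the same idea:

    import math

    def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
        # Signed distance from point p to the surface of a sphere.
        d = [p[i] - center[i] for i in range(3)]
        return math.sqrt(sum(c * c for c in d)) - radius

    def ray_march(origin, direction, sdf, max_steps=64, eps=1e-3, max_dist=100.0):
        # Step along the ray by the distance the SDF guarantees is empty space.
        t = 0.0
        for _ in range(max_steps):
            p = [origin[i] + t * direction[i] for i in range(3)]
            d = sdf(p)
            if d < eps:
                return t             # hit: distance along the ray
            t += d
            if t > max_dist:
                break
        return None                  # miss

    print(ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf))  # ~4.0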
Just to be clear - I am absolutely not bashing the tech, as what it shows opens up great possibilities. And I can ignore the "YouTube-oriented" statements made in the video ("you can totally import a model straight from ZBrush, no problem !").
My comments are a reaction to the naiveté of some of the comments seen here. My point is that for a given artstyle, a larger dataset has *never* meant that things become magically faster to make. The tech can bring some editing flexibility for the few cases when baking can indeed be bypassed ; but the cleanest games out there would actually look *worse* if they were done with such dense source assets as opposed to using lean lowpoly models. It's counter-intuitive (and outside of the grasp of anyone who has never built an actual game asset themselves before, hence one cannot blame folks like the DigitalFoundry guys for not quite seeing that) but that's the reality and IMHO the elegant beauty of game art. Just imagine doing clean modular environment pieces without the ability to make things match at the edges. Or how slow it would be if everything had to be uniquely sculpted as opposed to tiling in a few trim sheets shared across dozens of assets. One single rock or wall piece might be easier/faster to sculpt than building it with shared modularity in mind ; but when you have dozens of modular pieces to create then all of a sudden the sculpt-centric approach falls apart very quickly.
What saddens me is that this reveals somewhat of a blindspot in the appreciation of how game art is made. Of course to each their own, but it's a bit of a shame that lean practices might become harder and harder to explain to newcomers, even though said practices are the key to speedy production and can allow people to realize their dream game projects on their own. Instead most seem to believe that everything is about highres models/sculpts and 4k textures in Substance Painter. Yay YouTube tutorials.
Now that said ... I really can't wait to see games actually embracing the few original artstyles that this high detail tech may empower. Basically potentially bringing the clay/voxel aesthetic of Dreams to UE powered games. I can imagine some indie studios leveraging this in very unique ways - almost like a new equivalent to faceted lowpoly.
However one shouldn't forget that even though sculpting always seems "fun and relaxing" it absolutely isn't fast. Ever. I think this is a misconception coming from the fact that many artists just lose track of time when spending hours "in the zone" in sculpting programs as opposed to manipulating a few verts nice and quick. At the end of the day, the ability to leverage big datasets boils down to a resource/time management issue. Just be wary if you catch yourself thinking that "more sculpting = more fun !" as it inevitably backfires.
This is IMHO especially important to grasp because the singularity point of games looking realistic (or realistic enough) has already been reached a few years back (I'd say around the time of MGSV Ground Zeroes, Assassin's Creed Black Flag, GTAV, and so on). So from there embarking on a game project aiming to look realistic will *always* mean that asset creation will take longer and longer with each year that passes. Of course there's nothing new about the proposition of "more detail every year" ; I just mean that the artistic return on investment from working on realistic games (or stylized games aiming to look like action figures/CG movies) will always diminish from there. As the leaps in overall visual fidelity become smaller and smaller past the point of realistic enough, assets take ironically longer and longer to make as they need to support the increasing level of detail supported by the tech ... That makes it a pretty important point to consider when choosing a career path.
Obviously my experience is more limited than many here, but I agree most with @pior.
Even if the promise were "send directly from ZBrush, no baking, no retopo anymore", that doesn't actually do anything for me.
When I am making a game on my own, usually my goal is to get a certain amount of work done in a certain amount of time. So a couple of clean base meshes and modular kits that I can scale into an entire game's worth of content is key. I rarely open ZBrush, and if I do, it is only to put some final touches on a low poly model.
Of course that's totally different from big studios with a highly paid artist who does nothing but sculpt a hero's wrinkle maps for months at a time, but I'd expect more game studios work like I do than not.
Also, a general note: technology will never save you work. You are always going to be working as many hours as the man can squeeze out of you. If tech buys you more time in one area, it will be filled with another. So maybe you don't like UVs or baking because it's boring, but you aren't going to be happy with anything when you do it for too many hours on end. This is why it's important to enjoy the work as it is. Otherwise you never will.
It is probably worth thinking about what this new tech is aiming for and which gap it fills. Practically, it focuses on one of the areas which imo always lacked in terms of visual quality: natural organic structures. These unique rock, sand and mud surfaces can now be replaced by scanned or sculpted high res assets, without LOD popping and with proper contact shadows.
Imagine the Unreal 5 Demo in a clean sci-fi environment... I guess it's safe to assume that choosing this kind of theme for the demo had a reason.
tl;dr: imo the demo showcases a best case scenario for the new tech.
The Unreal Engine's integration with virtual film production techniques will become much more seamless with the release of UE5. Not just in asset creation but also materials, lighting, cameras and much more, especially with tools like MaterialX and Universal Scene Description (USD) and whatever else is in the pipeline. It all makes for very interesting times ahead.
This is all great news for all industries - "Real-time for Everyone", as the descriptor on Epic's website puts it.
Well, I hate to be a downer again but some of these pipeline reports are also littered with over-inflated PR talk. Anyone who's worked with UE4 knows that even though the engine is fantastic, it is far from being as snappy as these spotlight videos make it look. Like how editing any material that isn't an MI causes hundreds (or even thousands) of shaders to recompile. Or the way it hangs for seconds on asset delete. Or the asset redirectors. Oh, the asset redirectors.
For some reason the people talking about how revolutionary these processes are for fast iteration ... are almost never the ones actually working on creating/implementing the assets and virtual sets. Of course everything is fast once compiled or in PIE, but getting there is absolutely not that smooth of a ride.
Sorry for my noob question, but did they create everything you can see, or do they use sculpting software like ZBrush and just render it with the engine?
@garciiia the latter - they haven't incorporated all-in-one production software. Pretty sure the narrator even mentions "this statue was 32 million polygons inside of ZBrush and brought into Unreal".
So in terms of changes to a general asset pipeline, what can we reasonably expect?
My impression was that you could basically do game props as you would for something like a pre-rendered scene: create a base mesh, unwrap and texture it, subdivide as needed with displacements and detail normals?
Is this something we can expect, or will there still be a broad need for proper low poly baking?
So in terms of changes to a general asset pipeline, what can we reasonably expect?
My impression was that you could basically do game props as you would for something like a pre-rendered scene: create a base mesh, unwrap and texture it, subdivide as needed with displacements and detail normals?
Is this something we can expect, or will there still be a broad need for proper low poly baking?
so for now will we just see low-polys that are higher res, or do you think that not much will change there?
I'm not sure what this actually changes if you still have to bake down most assets to low polys as usual
As @Jerc mentioned this will be an additional workflow for important and / or specific assets.
The point some people are missing is that you already gain a lot if you just do LOD0 the usual way and then skip the LOD creation process. And skip prebaked lighting. This all frees up memory you can use to make stuff look better and not 'just' faster. Nobody says "wow, look how awesome these LODs look".
Though, we'll have to see what the pipeline looks like at the end of the new console generation.
but the cleanest games out there would actually look *worse* if they were done with such dense source assets as opposed to using lean lowpoly models. Just imagine doing clean modular environment pieces without the ability to make things match at the edges. Or how slow it would be if everything had to be uniquely sculpted as opposed to tiling in a few trim sheets shared across dozens of assets. One single rock or wall piece might be easier/faster to sculpt than building it with shared modularity in mind ; but when you have dozens of modular pieces to create then all of a sudden the sculpt-centric approach falls apart very quickly
....
Now that said ... I really can't wait to see games actually embracing the few original artstyles that this high detail tech may empower. Basically potentially bringing the clay/voxel aesthetic of Dreams to UE powered games. I can imagine some indie studios leveraging this in very unique ways - almost like a new equivalent to faceted lowpoly.
....
This is IMHO especially important to grasp because the singularity point of games looking realistic (or realistic enough) has already been reached a few years back (I'd say around the time of MGSV Ground Zeroes, Assassin's Creed Black Flag, GTAV, and so on).
I think the "cleanest" games will be in the realms of stylized art now. They would look worse with dense assets, so continuing to make low poly models and baking will still be part of the aesthetics of those games. And you'll be free to do it like before, for "clean" stuff. But that's a not-so-big part of games, for the rest, the big chunk of realistic or big selling AAA games that most players want, Nanite tech is going to be ideal. Normal maps can't last forever as the de facto tech in games. Industry is changing , in the same way pixel art was the norm and it got replaced, BUT it lives on as retro artstyle or stylized.
New tech does mean a lot of questions for artists now, especially with insufficient details about it, workflows not yet established, etc. Trim sheets and edge matching and whatever else feels really hard to do with super dense geo can still happen. It's an exciting thing to adapt to, for both artists and software developers, with new tools to help artists do those parts faster and more easily.
I don't think games looking "realistic enough" has happened or will anytime soon. When I was a kid I thought Doom 2's graphics were mind-blowing. 5 years from now all those games you listed will look outdated compared to the next-gen stuff that is coming out. And that's gonna happen again and again; the bar is always going up and up, and we'll always have a new standard to compare against.
... "I don't think games looking "realistic enough" happened or will anytime soon. "
Well, to be clear I mean in the sense of "realistic enough to convey a sense of recognisable space from the real world". I suppose one could extend it all the way back to GTA3 but that would be pushing it. Maybe GTA4. For racing games this could even extend to the first Gran Turismo.
Of course I am not saying that visuals didn't get better since the era of these titles. My point is that the tech to deliver thrilling experiences has been decent enough for longer than most realize, and that one should be aware of that when thinking about one's career, as it means that the time spent on an equivalent asset then and now gradually moves from days to weeks to months (literally). When you think about it, everything that this UE5 demo does at the emotional level (perception of contrast, space, lighting) could have been done on UE3 tech or earlier. Of course it would have required more hacks/optimizations and a more intimate knowledge of lighting (as older engines were obviously far less physically accurate when it comes to light calculation) but the ETA would still have been 10x shorter or less.
There to me lies the biggest misunderstanding about game art, which itself can lead to catastrophic crunches : The clever tricks used to work around the limitations of a given tech are almost never a waste of time compared to doing things bruteforce and "optimizing later". The upfront cost of clever optimization always pays off in the end ; and inversely, the temptation to work linearly or in unoptimized ways *always* backfires.
The above is not so much a reaction to the tech itself ; rather, simply me dismissing naive beliefs like "No more baking ! So many polygons ! This will save so much time !" as what they are : a gross misconception of the ever-growing time required to match an ever-growing expectation of "realistic" detail.
I made a little demonstration video about micropolygons for people who don't understand it yet. I see a lot of "pseudovolumetric" and "voxel" and "claylike" here. And actually it isn't really like that.
I had to skip one step because I didn't know how to do it, but this doesn't really change the outcome. Mine has some waste while the actual tech wouldn't.
1. After the mesh gets imported, UE5 makes a new UV channel for the mesh, and it will have 100% space utilization (see the link I posted earlier). This is the step I skipped.
2. Bake position, normal, color, and other maps using this new unwrap. Since I skipped step 1, my maps look like this. I don't show the color map, because it looks the same as the original:
3. Spawn a grid of polygons using the GPU (compute). The XY divisions of the grid should depend on the texture res (4k texture -> 4k * 4k polygons). So mipmapping this texture can serve as an input to the LOD system. The UVs of the grid should be a planar mapping of the whole grid, containing all polygons inside 0-1 space.
4-5. Transform the vertices of the polygon grid using the baked position map, then shade them using the other maps:
https://www.youtube.com/watch?v=bFEkC64N7mM&feature=youtu.be
In my example, there is a lot of waste because of the lack of step 1. This is basically a "new way" of storing and reading a mesh. It's fully GPU-based, so it allows many more polygons for much cheaper than before.
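A rough CPU-side sketch of steps 3-5 above (my own mock-up in Python rather than an actual GPU compute shader): spawn a regular grid, move each grid vertex to the position stored in the baked position map, and fetch its color from the other maps:

    import numpy as np

    def reconstruct_from_maps(position_map, color_map):
        # Step 3: a regular grid with one vertex per texel of the position map.
        res = position_map.shape[0]
        grid_v, grid_u = np.meshgrid(np.arange(res), np.arange(res), indexing="ij")

        # Steps 4-5: displace every grid vertex to the baked position and fetch
        # its color; a real implementation would do this in a compute shader
        # and emit two triangles per texel quad.
        vertices = position_map[grid_v, grid_u]          # (res, res, 3)
        colors = color_map[grid_v, grid_u]               # (res, res, 3)
        return vertices.reshape(-1, 3), colors.reshape(-1, 3)

    # Made-up 256x256 maps, just to show the shapes involved.
    pos = np.random.rand(256, 256, 3)
    col = np.random.rand(256, 256, 3)
    v, c = reconstruct_from_maps(pos, col)
    print(v.shape, c.shape)  # (65536, 3) (65536, 3)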
Yes, there are definitely some limitations to this tech, and it isn't ideal for everything.
Replies
Check this out:
http://hhoppe.com/gim.pdf
And this:
https://en.wikipedia.org/wiki/Reyes_rendering
And this:
https://en.wikipedia.org/wiki/Micropolygon
This actually makes a lot of sense. Sounds like a more GPU-friendly way to do such things.
Edit: oh well @Obscura..
https://www.eurogamer.net/articles/digitalfoundry-2020-unreal-engine-5-playstation-5-tech-demo-analysis
So yeah, it's micropolygons, and the lighting is hardware-independent ray tracing using a more advanced distance fields implementation and some other techniques. So basically ray marching...