I made a little demonstration video about micropolygons for people who don't understand it yet. I see a lot of "pseudovolumetric" and "voxel" and "claylike" descriptions here, and it isn't really like that.
Isn't this what DX10 runtime tessellation does, only in tangent space? I wonder what the advantage of doing it in object space is. You still need to convert the vectors to world space at runtime, right?
The difference is that you store the whole mesh this way, not just the displacement. The input format is better suited to the GPU than what we use currently. No actual mesh needs to be stored (no traditional vertex and index list), just some textures (position and normal maps are a must, but you can store other data too), and we create polygon grids at runtime, which are deformed into the desired shape using the position map. This is fundamentally different from what we do currently. A 4K texture can hold a mesh of roughly 33 million triangles. Mipmap the texture and you automatically get a quarter-triangle-count version at runtime.
Also, tessellation depends on the base topology; this doesn't. One pixel means one quad. An entirely new topology gets created (a uniform grid, which gets warped into the shape).
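To make that concrete, here's a minimal numpy sketch of the idea (my own illustration under the assumptions above, not Epic's actual implementation): the mesh exists only as a position map, a uniform grid gets warped into shape by looking the positions up, and each mip level quarters the quad count.

```python
import numpy as np

def grid_from_position_map(pos_map):
    """pos_map: (H, W, 3) float array of object-space positions.
    Returns the warped grid: one vertex per pixel, roughly one quad per pixel."""
    h, w, _ = pos_map.shape
    verts = pos_map.reshape(-1, 3)                    # the grid, already "deformed"
    idx = np.arange(h * w).reshape(h, w)
    quads = np.stack([idx[:-1, :-1], idx[:-1, 1:],
                      idx[1:, 1:], idx[1:, :-1]], axis=-1).reshape(-1, 4)
    return verts, quads

def mip(pos_map):
    """Box-filter one mip level (assumes even H, W): a quarter of the quads."""
    h, w, c = pos_map.shape
    return pos_map.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
```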
@Obscura Probably I'm getting this all wrong, but does the process you describe basically mean that all meshes get remeshed (step 3, "Spawn a grid of polygons using the GPU...") and the vertex position data is stored in an image file format rather than a usual mesh format?
Thanks Obscura, I think I get it. No more issues with LODs not matching tangent-space normal maps. Cool, but my guess is it requires 32-bit floating-point textures, right? And what about texture compression? If a DX11 mesh is not that dense, it could still be less data to stream to the GPU, fewer bottlenecks, etc.
Just trying to figure out if we could get an advantage from this in a typical MMO running on a range of hardware.
I used 16-bit position and normal maps in my example. Some meshes may require 32-bit position, yes, but it's probably rare. I'm just assuming this, but Epic probably used virtual texturing for this too, so it's still not horrible. You can probably also set the max res, and it's not necessary to go with 4K maps; a 256x256 texture can still hold a ~65k polygon mesh, with a few LOD levels.
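For reference, the polygon counts quoted above are just pixel counts. A back-of-envelope check (my own arithmetic, assuming one quad per pixel and two triangles per quad):

```python
# One quad per pixel, two triangles per quad.
for res in (4096, 1024, 256):
    quads = res * res
    print(f"{res}x{res}: {quads:,} quads (~{2 * quads:,} tris)")
# 4096x4096: 16,777,216 quads (~33,554,432 tris)  -> the ~33 million figure
#  256x256:      65,536 quads (~131,072 tris)     -> the ~65k polygon figure
```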
I do also think Obscura is on the right track, but it must be an improved technique compared to what is described in these papers. The triangles seem to be created in a content-aware way, because they can be highly irregular, matching the geometry. I wish we had seen a Nanite visualization of something that wasn't photo-scanned, and they should come out and say how much space that level and its assets took. I personally believe that the file size of the level shown is surprisingly small for how detailed it is. A technique like this should allow for highly compressed model files compared to traditional vertex-based models, and they don't need prebaked lightmaps, so it's just a lot of instanced meshes placed everywhere.
They run this and traditional rasterization at the same time, so you have the option to just put a mesh in as-is. I think what we can see in the picture is a mix of micropolygons and standard meshes.
... I personally believe that the file size of the level shown is surprisingly small for how detailed it is. ...
Considering that the statue used 24 8K maps, I doubt the size is surprisingly small, and I don't think they cared about data size when they created this demo.
They run this and traditional rasterization at the same time, so you have the option to just put a mesh in as-is. I think what we can see in the picture is a mix of micropolygons and standard meshes.
That sounds possible. Brian Karis said that it has two different software rasterizers running in parallel, so it could be related to that.
Considering that the statue used 24 8K maps, I doubt the size is surprisingly small, and I don't think they cared about data size when they created this demo.
Where did you get that 24 8K maps figure from? That's the first time I've heard it.
I'm basing my assumption on my previous experience with how they've presented new engine features in the recent past. Of course there is still a lot of marketing mumbo jumbo involved, but they do tend to show and present things that are actually usable for what they advertise them for, and games taking up terabytes of space just aren't usable.
Edit: I've found the 24 8K textures quote in that Eurogamer article. It's honestly a bit confusing what exactly it means, because according to that it uses 8 sets of tiling 8K normal maps; what the fuck is that supposed to be, tiling maps representing detail at a subatomic level? I feel like large parts are missing there, and even then the 24 8K textures are 2 gigabytes max, probably less with better compression and channel packing, which doesn't seem super terrible for a hero asset.
"He says that each asset has an 8K texture for base colour, another 8K texture for metalness/roughness, and a final 8K texture for the normal map. But this isn't a traditional normal map used to approximate higher detail, but rather a tiling texture for surface details. "For example, the statue of the warrior that you can see in the temple is made of eight pieces (head, torso, arms, legs, etc). Each piece has a set of three textures (base colour, metalness/roughness, and normal maps for tiny scratches). So, we end up with eight sets of 8K textures, for a total of 24 8K textures for one statue alone," he adds."
https://www.eurogamer.net/articles/digitalfoundry-2020-unreal-engine-5-playstation-5-tech-demo-analysis
I'm curious how this tech scales with reduced memory. Like, instead of using 8K maps, what if they were using maps that were 2048x2048, 1024x1024, or even lower? At what point would the meshes turn into soup?
Edit: I've found the 24 8K textures quote in that Eurogamer article. It's honestly a bit confusing what exactly it means...
Yes, that's the article I got it from. Maybe they use 8K textures just to stress test the engine and make it sound more "awesome" for PR. I cannot imagine there being a visible difference between 8K and 4K, or even 2K, for this kind of asset at a reasonable distance.
I'm curious how this tech scales with reduced memory. Like, instead of using 8K maps, what if they were using maps that were 2048x2048, 1024x1024, or even lower? At what point would the meshes turn into soup?
Even a 1K map can store a lot of polygons, but yeah, some shapes would degrade a lot. I'd say it depends, but probably somewhere under 256 or so. The rock in my example is fine at 256, but that's a very simple mesh.
Finally! With this we could use subdivision surface models for almost everything. I won't need to worry about baking times and stupid technical issues.
http://c0de517e.blogspot.com/2020/05/some-thoughts-on-unreal-5s-nanite-in.html?m=1
Thank you so much for the time and effort, @Obscura! I certainly wish Epic had shown that sort of stuff, as opposed to presenting things so superficially. I suppose it's just regular PR, but it's quite a departure from their history of being very open and straightforward about the tech, like when Sweeney presented older demos, for instance. In the sense that even though they always pulled some impressive "gotcha, this is realtime!" moves, they used to at least lift the curtain after the fact - but not here.
Regarding the "clay" aspect: I think there is some truth to that. As soon as shapes (especially very sharp/clean ones) get processed in such ways, there's a somewhat distinctive softness being introduced even at high densities - akin to what can be seen on raw photoscans.
Anyways - as said, very interesting stuff. Still, anyone hoping for positive production paradigm shifts because of tech being able to handle *more* detail than before (regardless of how it's done) is in for a rude awakening imho.
If anything this could gel quite well with the approach used on faces in LA Noire, which if I am not mistaken was another topology-agnostic approach. One could certainly imagine a pretty cool visual style emerging from that: using real puppets (Dark Crystal) as source data for characters' heads, regular topology and textures for the bodies, and photoscans of miniature sets for environments. Essentially relegating CG (in the sense of "CG assets created from scratch") to secondary/non-hero assets, for instance.
Yet ... there's another thing to be said about any objectively measured increase in level of detail, and that is the gradual loss of suspension of disbelief. The more detailed games like Uncharted get, the more exhausting they become to look at, so to speak - because gameplay elements (you can climb here, not there) get more and more muddled with their surroundings. Of course this can be proactively addressed - but that too requires more thought and production than ever before, as opposed to being implicitly conveyed by visual simplicity. That's a tangent topic of course, but I don't see this (imho unfortunate) trend changing anytime soon, as someone will always ask for this ladder or that door to "blend in just a little more".
Look at it this way, if it seems too good to be true, it usually is. This tech looks awesome for the use case demonstrated. As with literally all other new tech, it will become another additional tool in the toolbox to make stuff look good. Across the wider industry, when you look at many different games and projects, it is not going to replace (or in many cases significantly disrupt) the workflows used today.
A lot of the opinions being expressed in this thread are very naive.
Regarding the "clay" aspect: I think there is some truth to that. As soon as shapes (especially very sharp/clean ones) get processed in such ways, there's a somewhat distinctive softness being introduced even at high densities - akin to what can be seen on raw photoscans.
That's right, especially on the LODs. Yeah, it would have a similar "softness" to what scans sometimes have, but different from Dreams, for example.
I believe for foliage and some other stuff you will still need to use the regular approach, but scans or organic shapes can use this tech, as long as you are developing for modern hardware.
Obscura, is this technique going to limit what we can do with materials or spline meshes? Could we still do some sort of snow shader, vertex painting, world position offset? Is it at all usable for foliage?
Since UE5 is capable of handling drastically more polygons, I still think normal maps are useful for micro details, instead of putting them into the actual geometry. Basically leaning towards the film/VFX asset workflow, but baking only ambient occlusion?
But it also comes down to file sizes. I noticed some time ago that Blender has a few types of glTF formats to export, and they were a lot smaller than typical .fbx or .obj files. Not sure if that file format is going to be popular, unless .usd ends up packing down to similar or smaller sizes.
I couldn't help noticing that when they go into triangle/noise mode, the character seems to disappear. Might be an indication they're not comfortable applying this technique to non-static assets (might also just be that the character is properly optimised and doesn't benefit from it).
They also claim there's no more need for lowpolys, but does this apply to physics and collision meshes too? Even if the collision meshes conform to the highpolys, wouldn't the adaptive triangulation leave the physics open to (a lot of) glitches? If that's even feasible at all.
It somewhat bothers me that the YT video doesn't have a 60 FPS option.
Look at it this way, if it seems too good to be true, it usually is.
It happens with every console iteration, doesn't it? From the GameCube's Space World Zelda demo to the PS3's Killzone 2 trailer. Better to be cautious than hyped. There's always a noticeable increase in graphical prowess, but this Unreal 5 demo reeks more of a "cinematographic vertical slice" than an actual game demo to me. (Not saying I'm not impressed, though.)
Even if this were magically infinite detail, we don't have infinite time. I can't imagine really being able to take full advantage of something like that even on a 5-year development cycle. I assume it will be like tessellation, where you use it sparingly. However, it would be cool if we could start using it to make procedural objects like this.
That way, instead of worrying about modeling, you could replace some of your static props and environment details with procedural textures. But it still looks like you would need some kind of position data, not just displacement? Anyway, just looking forward to new workflows.
Since the Xbox 360, artists have been putting insane amounts of detail into games; the Gears of War 3 art dump always comes to mind (https://polycount.com/discussion/159588/kevin-johnstone-of-epic-games). I don't think too much detail is actually a problem; the detail is already there, we might just have to apply it differently in our workflows.
I'm expecting that in the next 5 years we're going to see tools designed for photogrammetry/Megascans data that can transform assets while keeping the same look. Take a rock scan and turn it into a column, take a patch of ground and turn it into a landscape, take a rock and get 40 variations, take a rusty metal utility box and apply it to a blockout of machinery. Autofill and content-aware filters, but for 3D.
Alright, I have another one - being the annoying contrarian is always fun.
I see quite a few comments (a few here of course, but mostly on more layman outlets like gaming news and video comments) along the lines of "no more baking normalmaps, OMG yes!".
I find it somewhat amusing, because... when you actually stop to think about it, baking is an extremely fast and automatic process. Literally! You load your source and target, load presets, press a button, wait for a few seconds to a minute, done. It's magical.
What actually takes a ton of time is everything that leads up to it: creating the high (the source data), creating the low for the game to handle/for animations, and of course creating UVs for it.
So whenever I hear/read "OMG no more baking, just direct modeling!"... even putting aside the fact that it's a gross simplification of the tech, I'd invite anyone to *actually* think about what it would take to create the same asset as, say... the last one you actually worked on, but without baking. The truth is that you'd still have to create the source art, and you'd still have to UV/texture it - except this time, instead of doing it on a lightweight unified low, you'd do it on subdiv/highpoly meshes, using UVs where needed, and so on - basically exactly the same process that has been used in CG movies for decades. Except that movies can afford to selectively deal with level of detail per shot, while games can't. Therefore, even if it were possible to make a whole game without baking normal maps... you'd still have to aim for consistent texture density, cleanly UVed objects, and so on.
Again, that's not to say nothing can come of the tech - I am 100% sure that brilliant artists will leverage it in unexpected ways, and close-up cinematic shots will likely benefit greatly from it. But it certainly won't make anything faster overall as far as asset creation is concerned.
In the video I had to skip one step because I didn't know how to do it, but this doesn't really change the outcome; mine has some waste, while the actual tech wouldn't.
1. After the mesh gets imported, UE5 makes a new UV channel for the mesh, and it will have 100% space utilization (see the link I posted earlier). This is the step I skipped.
2. Bake position, normal, color, and other maps using this new unwrap. Since I skipped step 1, my maps look like this. I'm not showing the color map, because it looks the same as the original:
3. Spawn a grid of polygons using the GPU (compute). The x/y divisions of the grid should depend on the texture res (4K texture -> 4k x 4k polygons). So mipmapping this texture can serve as input to the LOD system. The UVs of the grid should be a planar mapping of the whole grid, containing all polygons inside 0-1 space.
In my example there is a lot of waste because of the missing step 1. This is basically a "new way" of storing and reading a mesh. It's fully GPU-based, so it allows many more polygons, more cheaply than before.
Yes, there are definitely some limitations to this tech, and it isn't ideal for everything.
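As a rough sketch of step 3 (again my own illustration, not the actual engine code): the grid's UVs are a planar 0-1 mapping over the whole grid, so each vertex's final position is nothing more than a texture fetch.

```python
import numpy as np

def spawn_grid(divisions):
    """Planar mapping of the whole grid: every vertex UV inside 0-1 space."""
    u, v = np.meshgrid(np.linspace(0.0, 1.0, divisions),
                       np.linspace(0.0, 1.0, divisions))
    return np.stack([u, v], axis=-1).reshape(-1, 2)   # (divisions^2, 2) UVs

def vertex_shader(uvs, pos_map):
    """Per-vertex 'shader': the output position is just a (nearest) texture fetch.
    Feeding in a mip of pos_map gives the lower LOD with no other changes."""
    h, w, _ = pos_map.shape
    x = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    y = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return pos_map[y, x]                              # object-space positions
```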
Is it more efficient than using plain polygons? I'm assuming the lossless vector displacement maps on top of the normals, diffuse and speculars will take up more memory and computation power than loading instanced meshes. Or is there an advantage that comes with this method?
It's more efficient for the GPU to access textures, and the vertex transform can be applied in SIMD fashion. This is why it has no problem drawing a lot of multi-million-poly meshes. It's similar to why you can draw and handle many more particles on the GPU than you could on the CPU. The traditional method wouldn't allow this; it would be very slow. The raw memory requirements are similar to the traditional method's, but streaming is a big part of this.
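As a loose illustration of the "SIMD fashion" point (my analogy, with numpy standing in for the GPU's wide execution units): because the vertices are just pixels of the position map, one batched matrix multiply transforms the whole mesh at once, with no vertex list to walk.

```python
import numpy as np

def transform_all(pos_map, world_matrix):
    """Apply a 4x4 transform to every vertex of the position-map mesh at once."""
    h, w, _ = pos_map.shape
    p = np.concatenate([pos_map.reshape(-1, 3),
                        np.ones((h * w, 1))], axis=1)    # homogeneous coords
    return (p @ world_matrix.T)[:, :3].reshape(h, w, 3)  # world-space positions
```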
One of the things I think is super interesting about this approach is that it optimizes textures in a similar way to how PTEX works. I was wondering how long it would take until we reached that point, and we seem to be nearly there.
Kinda. It's difficult to say without knowing the specifics. The Unreal stuff seems to be post-authoring though, as in it's converted at import time rather than being generated at authoring time. But it's an interesting step towards optimizing texture usage - no more wasted pixels in UV layouts...
Assuming that mips are the foundation of the mesh decimation, they can't just be baking each poly's location/normal to a pixel to get lossless UVs, can they? That'd only work with an uncompressed and unmipped image, surely?
I still think it's a shit idea and they should be doing it all with displacement off a low-res mesh, so it's actually useful.
Pior is right about how some people go "make art button" style.
NO MORE LODS, NO MORE LOWPOLYS, NO MORE BAKING, NO MORE UVS, NO MORE POLYS, NO MORE TEXTURES, NO MORE ANYTHING!! Just skip straight to drinking some coffee
So for tech-illiterate folks like myself (assuming this were the future of game art), does this mean UV unwrapping would be antiquated? Similarly, what happens to workflows like Substance Painter that rely on the traditional low poly + baked maps?
The way I understand it, you'd still have the same starting point as now; at render time, things are just streamed in differently, onto a newly created polygon soup under the hood.
I have been looking into this since I saw the demo. I've always hated tech demos and personally don't believe in most things; ray tracing, for me, is more marketing than useful, and I would invest in any other aspect of a game instead of ray tracing. I've known about some very good real-time techniques, like Lumion, since before the ray tracing thing appeared. So I was seeing the new generation of games as a little upgrade that could be better if procedural techniques were implemented correctly, just to grow texture detail in things like clothing or maybe skin. Tech demos for me have always been marketing: showing high-poly assets with good textures to look impressive. Not them, not RTX cards, not... anything impresses me, except artists.
This time, after the demo I was watching ended, my world changed. The future has already changed.
There are some "but ifs" here and there; mine is data storage. That puzzle isn't completely solved. But to be clear, this is way too detailed, different, specific, technical and... possible (oh, also BIG) for it to be "hey, it's just marketing"; the lie would be too big and it wouldn't make any sense to do this. There is also the side move with the 1 million limit for the free pass for developers; that is another sign.
A lot of people talk about this, some with knowledge, some not. Among the ones with knowledge and experience in the field, some are against and some in favor.
Looking for more info, beyond what I had already searched over the past few days, I found a paper from 2009 where Tim Sweeney explains a technology using micropolygons and the concept of REYES, something that comes from the '80s and Lucasfilm, and that Pixar then worked with. So Tim already had his eyes on this in 2009 at the latest, and he said "it's the future". Another piece that fits the puzzle.
So, in short, and answering some comments about "being naive" and this not being that simple, or not a revolution, or whatever: I would say that NOT believing that this is just what we saw, that games will do what we saw in the demo, and that this is a revolution and a bigger leap forward in CGI than ever, that is being naive.
There is one question I accept: how much of this demo are we going to get? My opinion (and everything here is just that, an opinion) is that we are going to get all of it, or at the very least half. And that will be a whole lot more than enough.
Be happy. Be hyped. Wait for the downloadable demo to enjoy.
Good points. I think it's obvious this is no good for a game art pipeline due to memory limits, and it would be a painful creation process with those file sizes. It is, however, great news for games lighting, archviz, automotive, film production, medical simulation, aeronautics and a whole raft of other visualization industries.
Welcome to the dawn of Unreal Engine 5! It's also great for making games, you know.
At this moment, this feels like a real threat to PC gaming. They will need to sort this out asap, because otherwise our fancy PCs will become pretty much useless relative to a console very soon. The only reassuring thing is that there are fields other than gaming, and it just can't happen that film and other such things get stuck on bad hardware; then gaming is also saved. I'm really curious to see where this goes. I'm also wondering how we got to this point where the playback device is more powerful than the creation device. I actually kinda get it, but it feels so wrong.
From what I understand, there is a piece of hardware in the new consoles, not located inside the SSD itself, that helps the system deal with the large amounts of data coming from the SSD. And this piece is not present in PCs.
This would kinda explain why we never get our PC NVMe drives to their full advertised potential.
I feel like most negative or "you're being naive" comments come from a gaming background.
I've worked as a 3D generalist for the last 8 years; I mostly work in classic offline renderers like V-Ray. I've dabbled here and there with realtime, mostly due to some clients asking for VR tech. When the first archviz demos came out, I started thinking I might be missing out on "the next big thing". But having also played with some GPU renderers, I always felt there were too many restrictions I had to work around to make it worthwhile. No poly limit and instant GI are two more restrictions now vanishing from that list.
I really don't think we're going to get massively more detailed games just because that tech demo makes it look possible; all your concerns in that area are probably valid. Looking at the top 50 list on Steam... man, there are soooo many "crappy"-looking retro games...
But have you actually watched The Mandalorian? That is HUGE. There is an Unreal for virtual production page on Facebook that is rapidly growing, mostly due to Corona, but those practices will certainly stay. Do you know how huge the automotive and archviz industries alone are? There are currently still a couple of areas that will rely on offline renderers (refractions, volumetrics), but that list is shrinking every year.
Personally, I'm moving my short film from V-Ray to Unreal because of that demo, and I'm sure many others will follow.
But have you actually watched The Mandalorian? That is HUGE. There is an Unreal for virtual production page on Facebook that is rapidly growing, mostly due to Corona, but those practices will certainly stay.
Mostly due to Corona? Can you give more info on that? Are you saying that people are moving from Corona to Unreal?
Agreed with your post. More people are moving to real-time rendering; the gap is almost closed.
Are you saying that people are moving from Corona to Unreal?
Haha, yeah, let's all move from corona to Unreal... :)
I still don't get a lot of the technical explanations, but if it's like Obscura says it is, it sounds great. Of course, only for those who can or want to afford the asset creation time. Have you looked at the Steam charts lately? How many titles use absolutely outdated graphics and still have their audience?
I've done it all; I know it's not just "press a button". I'm still super excited. I'm ALSO moving my short film to Unreal because of Megascans, to be quite honest. I have some pretty big environments to fill; I know I could do it "manually", but that just takes a lot of time, not skill. Been there, done that. I'd rather spend that time on cinematography, light, composition, etc.
And I also think we'll see more and more procedural, scan-based approaches in the future.
But can someone explain how this will or won't also work for moving characters? I want my 11 UDIMs of creature textures, but tessellation in Unreal 4 isn't getting me the detail, and it's still one of my biggest concerns in moving from an offline renderer to Unreal. I'm fine waiting for Unreal 5 if that works there. But if it's just for static meshes, I want to prepare my workflow accordingly, focusing my efforts on the environments and rendering the characters offline...
https://youtu.be/roMYi7BU1YY?t=1996
UE5 preview should be coming early next year.
UE5 priorities are optimizing for SSDs, helping make larger/more dynamic worlds, general engine performance/speed-ups, more online-game improvements (more/bigger BRs?), and a continued multi-platform/scalability focus.
UE5 is built with mobile, current gen, and next gen in mind. No break going from UE4 to UE5; it will feel like a larger-than-normal UE4 update if you're migrating a project.
General improvements to reduce or remove CPU time spent on loading/streaming in the engine.
Focus on improving remote working/collaboration on large projects.
4.25 will support building for PS5/XSX and will receive extended support and updates throughout the console release/launch cycle.
Fortnite will be a day-1 release on PS5/XSX; the UE5 version should be coming in about a year.
Nanite
Keeps full source data, even in real time. Supports high object counts as well: more than a million assets at once. The entire town in the demo was built and kept as smaller modular pieces.
Low, flat CPU cost. The streaming pool is about 800 MB of VRAM for the demo.
Sounds like they are still making hard disk optimizations for it?
Doesn't support masked/translucent materials yet. Object movement is supported.
Doesn't support skeletal meshes or deformed objects. Doesn't work with grass/leaves/hair or similarly small, dense things.
Lumen
No baking, no lightmap UVs.
Supports shadowing skylights.
Has indirect specular reflections.
Supports time of day, weather, flashlights, destruction, construction; works for all of that.
Running at 30 fps on next gen currently, aiming for 60 fps.
No mirror reflections yet.
Lots more Quixel Megascans coming.
PhysX is not getting much support in UE5 and will exist in a deprecated state. With Chaos they are looking to improve online/network physics support.
Yeah, that behind-the-scenes on how they made the demo was great. It's really just tons of high-poly assets in the editor; nothing that only gets created at runtime, or some other hack.