^ this. There are LOADS of tiny things I want them to add. For example, speeding up the material editor interface so you can easily connect nodes, or fixing the color picker so it's not so laggy in the material editor. But I would prefer a new engine to those tiny issues.
aaaah...cant waaaiiiit to see what the guys at epic are up to now, im expecting to be blown away, plz dont let me down : DD
You guys keep mentioning what are really just small fixes and incremental features, but do they really warrant a whole new Unreal Engine?
Maybe they WILL have support for new rendering schemes, like partial raytracing and POINT CLOUD DAAAAATAR. I'm not saying they will replace polygons, I don't think that will happen any time soon, but rather work in tandem? Each of them has its uses and its downsides.
For example, CryEngine recently started implementing screen-based path tracing for glossy surfaces (SSR/RLR). EDIT: They also use voxels for terrain. (Caves and such.)
Maybe it's a full rewrite, even. Or it could just be another incremental set of features, like the last release of Unreal Engine 3.99575. I don't know...
It's not going to be a full rewrite. They've gained a huge amount of market share with UE3, and tons of developers have learned to use it. They're not going to throw that momentum away. UE4 will have lots of shiny new features, to be sure, but it will not be fundamentally different from UE3.
I could ask the same about Unreal Engine 2.5 -> 3.0: did it really warrant a new engine?
I mean, people often mistake Bioshock for an Unreal Engine 3 title.
When we do see Unreal Engine 4, we'll find out what it's all about.
Mark Rein, the vice-president of Epic Games, revealed on August 18, 2005 that Unreal Engine 4 had been in development since 2003. The engine targets the next generation of PC hardware and consoles after the seventh generation. The only person to work on the Unreal Engine 4 core system design up to that point was Tim Sweeney, technical director and founder of Epic Games.
Sweeney gave a speech at POPL '06 (the Symposium on Principles of Programming Languages) that described aspects of how Unreal Engine 3 worked at the time and "what we would like to write" in future. He predicted the next generation of games consoles would arrive in 2009, at which time game designers would work with CPUs that had 20 or more cores, 80 or more hardware threads, and more than a teraflop of computing power. In March 2008, Sweeney predicted that the number of developers working on Unreal Engine 4 would be ramped up to three or four engineers by the end of that year, and implied that it would be aimed predominantly at the next generation of consoles rather than PCs. Sweeney has stated in a recent interview with IGN that Unreal Engine 4 will probably be ready for use in 2014.
In February 2012, Mark Rein, the vice-president of Epic, said "People are going to be shocked later this year when they see Unreal Engine 4".
Also, we continuously work on transitions when we go through large portions of the engine. We completely throw out parts and create large subsystems from the ground up, while reusing some things that are still valid.
I don't want to get semantic, but that made sense to me, if only because they went from Unreal Engine 2.5 -> 3.
Interesting how he predicted it would be ready for new consoles in 2009 with 20 cores, and now it's changed to 2014 while we are stuck with mostly quad cores. (How many FLOPS are we at, roughly, for consumer-grade gaming?)
Technically it could've been that way, but I believe it stalled a bit because few developers actually utilized many cores until now, and thus hardware makers never went that far.
Console focus is partly to blame for why we never hit more cores, but also partly to praise for actually forcing developers to use more than one or two cores.
I don't think he was entirely wrong; take a look at modern GPUs and how many multiprocessors (the equivalent of a CPU core) they have. This workstation has 12 logical CPU cores, but graphics-wise I have 32 multiprocessing units at my disposal. I could argue that if my hardware were more standardised, much like current consoles, I'd be using a machine with 44 'cores' available for various ends.
It's all been about need and what game developers and engineers can actually utilize at any sane level, which is why, again, the console market is at the forefront of games, for good and bad.
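For a rough sense of where the "teraflop" talk lands, peak throughput is usually estimated as cores x clock x FLOPs per cycle. Here's a quick back-of-the-envelope sketch; the core counts, clocks, and SIMD widths below are illustrative assumptions, not measurements of any particular part:

```python
# Rough peak-FLOPS estimate: cores * clock (GHz) * FLOPs per core per cycle.
# All figures are illustrative assumptions for a ballpark comparison,
# not benchmarks of any specific CPU or GPU.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak throughput in GFLOPS."""
    return cores * clock_ghz * flops_per_cycle

# A quad-core CPU with 256-bit SIMD (8 single-precision lanes) and fused
# multiply-add (2 FLOPs per lane per cycle): 8 * 2 = 16 FLOPs/cycle/core.
cpu = peak_gflops(cores=4, clock_ghz=3.4, flops_per_cycle=16)

# A mid-range GPU: 32 multiprocessors * 32 lanes each, FMA, ~1 GHz clock.
gpu = peak_gflops(cores=32 * 32, clock_ghz=1.0, flops_per_cycle=2)

print(f"CPU peak: ~{cpu:.0f} GFLOPS")   # ~218 GFLOPS
print(f"GPU peak: ~{gpu:.0f} GFLOPS")   # ~2048 GFLOPS, i.e. ~2 TFLOPS
print(f"Combined: ~{(cpu + gpu) / 1000:.1f} TFLOPS")
```

So even with "only" quad cores on the CPU side, the GPU's wide multiprocessors are what push a consumer box past the teraflop mark.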
Nah, Bioshock caused a lot of confusion because it looked awesome but was still UE2.5. They just added a lot of stuff themselves. I think even Bioshock 2 was 2.5, but don't quote me there.
They used Unreal Engine 2.5 for SWAT 4, which is what they continued to build on for Bioshock, I'd guess, and Bioshock 2, being a direct sequel, most likely used the same.
I loved it! It was co-op magic, and police brutality! So much tension in the realism.
SWAT 4 pissed me off because they dumped you into some very realistic and very ugly scenarios and then once you shot your way through them... nothing. Just on to the next set of bad guys.
No closure, nothing. Not knowing what happened afterwards especially bugged me with the serial killer and the tenement cult. Those two levels were full-on fucking nightmarish and the game gave you no exit mentally or emotionally from those experiences.
I expect they'll throw the video out to the public a week or two after GDC. Epic have released a showcase video every year post-GDC for a few years now.
I think most of the "big improvements" in the next console cycle with tech like UE4 will be stuff like better shaders capable of more instructions, higher draw call limits, better fx, much better lighting, plus all the fancy DX11 stuff in limited use nowadays like Tesselation. Most of that we can already see in Samaritan.
But looking at Samaritan, and even taking what they said with a grain of salt (i.e. that UE4 will blow it sky high, which I'm not sure I believe), what do you guys expect to see in "traditional" terms? Meaning, what sort of character polycounts, textures, that kind of stuff.
A few games have big polycounts already for their in-game characters (with Uncharted being the biggest outside fighting games, I think, at around 35k), but if I had to do an "average" for most games, I would say important game characters are around 10-15k and NPCs much lower.
Is it reasonable to think that characters (say, Garrus in ME4) will have 100k tris, with a lot of stuff modelled in and complemented by normals and/or displacement with tessellation?
Will they be just like models nowadays in terms of target specs and manual craft, with artists letting displacement maps and tessellation handle making everything "higher detail"?
The rumored GPU for Xbox 720 isn't that mind blowing, so we're not gonna see 4k textures. But will we see a ton of 2k maps on everything (again, imagine Garrus with a 2k each for weapon, legs, arms, face, torso, etc), or will it still be more conservative?
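For a rough sense of what "a 2k map on every part" would cost, here's a quick memory sketch. The per-part map counts and the choice of DXT1/DXT5 compression are just assumptions for illustration, not anyone's actual budget:

```python
# Rough texture-memory budget for one character, comparing 1k/2k/4k maps.
# Assumes DXT1 (0.5 byte/texel) for diffuse and DXT5 (1 byte/texel) for
# normal and spec maps, plus ~33% extra for the mip chain. Illustrative only.

def map_bytes(size, bytes_per_texel, mips=True):
    base = size * size * bytes_per_texel
    return base * 4 / 3 if mips else base   # full mip chain adds ~1/3

def character_mb(size, parts=5):
    # per part: diffuse (DXT1), normal (DXT5), spec/gloss (DXT5)
    per_part = map_bytes(size, 0.5) + 2 * map_bytes(size, 1.0)
    return parts * per_part / (1024 ** 2)

for size in (1024, 2048, 4096):
    print(f"{size}^2 maps, 5 parts: ~{character_mb(size):.0f} MB")
# 1024 -> ~17 MB, 2048 -> ~67 MB, 4096 -> ~267 MB for a single character
```

Each step up quadruples the footprint, which is why the jump from "2k on the hero" to "2k on everything" is where a mid-range GPU's memory starts to hurt.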
I'm guessing it's going to be like the last generational leap, where texture sizes double where it's useful. Where you used to use a 512 (80-90% of most stuff), you'll have more latitude to use a 1024. Gotta remember you are still pixel-bound by screen resolution, so in most cases it won't even matter if it's a 2048 on a wall, because your TV is only 1080 pixels high.
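For what it's worth, here's the back-of-the-envelope version of that screen-resolution argument: how many on-screen pixels a surface covers bounds how many of its texels you can actually see at once. The wall coverage fractions below are made-up examples:

```python
# Sketch of the "pixel-bound by screen resolution" argument: a texture can
# only contribute as many visible texels as the pixels its surface covers.
# Screen size and coverage fractions below are illustrative assumptions.

SCREEN_W, SCREEN_H = 1920, 1080

def visible_texel_ratio(texture_size, coverage):
    """Texels available per on-screen pixel for a surface covering a given
    fraction of the screen, unwrapped roughly 1:1 onto the texture."""
    pixels_covered = SCREEN_W * SCREEN_H * coverage
    texels = texture_size * texture_size
    return texels / pixels_covered

for tex in (512, 1024, 2048):
    for coverage in (0.25, 1.0):   # wall fills a quarter / the whole screen
        r = visible_texel_ratio(tex, coverage)
        print(f"{tex}^2 texture, {coverage:.0%} of screen: "
              f"{r:.1f} texels per pixel")
# At full-screen coverage a 2048 map already supplies ~2 texels per pixel,
# so going bigger mostly pays off when the camera gets much closer.
```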
My guess is that the main improvements will be on things like lighting, physics, and surface shaders, or more of the power will be used to do interesting things like AI/crowd behavior in large scenes, or managing a ton of stuff going on in the environment. Games already look pretty phenomenal; while I'm sure next gen will be awesome looking, I would bet that developers will be able to use the power to inject a more emotional connection with what they are trying to convey, and really make you feel the kind of emotion film is able to deliver while most games are still failing to. Stuff like that Kara demo is interesting to see.
We already have 4k textures in some games for the current consoles (LAIR comes to mind).
I know. I meant as standard. I know console games can leverage the hardware much better because of the single spec, but I don't see how a medium-range GPU like the one rumored for 720 can handle that.
Ah yes, PixelMasher. One thing I definitely forgot in my original post, I expect to see a big improvement in physics, hair, and cloth. Again, Samaritan already shows some of those latter two.
Just because the texture is larger than your screen size doesn't mean you are being wasteful. For example, a lot of procedural shader utility maps can be quite high-res. Another example: normal-mapped hard-surface assets are not a flat plane; they have a bunch of curved surfaces and lots of little UV islands that need a good amount of padding. 1k textures are hardly enough for most unique hard-surface assets of a significant size. Here is a 1x2k door I made:
It looks pretty good from far away, but not so much up close:
Not having support for 2k+ textures is ridiculous.
I do all this high-detail subdivision modelling work only to have it bake/export into highly compressed and low-res 'pixel soup'. :poly127:
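To make that concrete, you can estimate the texture size a uniquely unwrapped asset needs from its surface area, a target texel density, and how much of the UV sheet gets eaten by island padding. The door-sized dimensions, densities, and packing efficiency below are assumptions for illustration, not measurements of the asset above:

```python
# How big a texture does a uniquely-unwrapped hard-surface asset need?
# Dimensions, target density and UV packing efficiency are illustrative.
import math

def required_texture_size(surface_area_m2, texels_per_meter, packing=0.7):
    """Smallest power-of-two square that holds the unwrap at the target
    density, given the fraction of UV space actually usable (the rest is
    padding/gaps between islands)."""
    texels_needed = surface_area_m2 * texels_per_meter ** 2 / packing
    side = math.sqrt(texels_needed)
    return 2 ** math.ceil(math.log2(side))

# e.g. a door-sized prop: ~2m x 1m, both sides plus edges ~ 4.5 m^2 of surface
area = 4.5
for density in (256, 512, 1024):   # texels per meter
    print(f"{density} px/m -> {required_texture_size(area, density)}^2 texture")
# 256 px/m fits in a 1k, but 512 px/m already wants a 2k sheet, and
# 1024 px/m (what you'd want for crisp close-ups) wants a 4k.
```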
BTW, What would be even more ridiculous would be if they didn't have a synced normal map tangent basis.
You guys are totally missing the point Computron is trying to make. Just because resolution only goes up to 1080p doesn't mean you are wasting pixels if you have a 2k or 4k texture. He's pointing out that the information is mapped in space and not to the screen.
My asset was unwrapped uniquely/no-overlap since it's asymmetrical.
You are right about one thing: with today's limitations, you will most likely have to do some smart unwrapping and symmetry, but I still hope that UE4 will allow for many more nice High-Res Large Hard-Surface Unique/Asymmetrical Assets. (HRLHSUAA)
If newer graphics cards support higher res, UE4 most likely will as well.
I was just about to point this out... Depending on how complex the actual shapes are, the texture layout can really get screwed up. If you have a lot of undercuts where stuff can't be unwrapped as one big piece and you have to have islands because it would overlap, or stuff like larger vehicles or even the thing Computron posted, having access to large texture sizes would mean being able to have that show up looking good on screen.
Adding stuff like bolts and smaller details to a lot of environment stuff is nice, but sometimes it just isn't feasible because of the actual resolution in the texture and on the mesh, not the screen it's being displayed on. Would be nice to have that crisp detail kick through...
There's always symmetry within asymmetry, and there's always memory to save from smart unwraps in any case, and while it would be a wonderful world if we could uniquely unwrap everything and use consistent texel sizes, the guy who tiles, mirrors and distributes texture density after detail will have a much more detailed object at the same texture size.
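As a tiny illustration of that trade-off, mirroring or overlapping UV shells reuses the same texels for more surface, so the same sheet yields higher effective density. The numbers here are just an example, reusing the same assumed door-sized prop as the sketch above:

```python
# Mirrored/overlapped UVs reuse the same texels for more surface area,
# so the same texture yields a higher effective density. Illustrative only.

def effective_density(texture_size, surface_area_m2, reuse_factor, packing=0.7):
    """Texels per meter you effectively get when each texel covers
    'reuse_factor' copies of itself on the mesh (1 = fully unique unwrap)."""
    usable_texels = texture_size ** 2 * packing
    unique_area = surface_area_m2 / reuse_factor
    return (usable_texels / unique_area) ** 0.5

area = 4.5  # same assumed door-sized prop as above
for reuse in (1, 2, 4):
    d = effective_density(1024, area, reuse)
    print(f"1k sheet, {reuse}x reuse: ~{d:.0f} texels per meter")
# Mirroring the two halves (2x reuse) buys roughly a 1.4x density bump,
# which is why smart overlap often beats simply asking for a bigger map.
```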
The point was to illustrate how textures are spatially mapped, rather than on the screen.
It's also not just about texel density, it's also about your time. Modularity takes time to plan and skill to achieve seamlessly. Sometimes it's just easier to have proper unrestricted tools, but also a lot of times it's easier to do things modularly within certain restrictions.
You're right that it would be a wonderful world to have things unique and consistent, and the point is that it is well within the realm of possibility for UE4.
Speaking of which, I can't wait to see what they do with Doom 4 and id Tech 5. Given that Doom is a more linear game, I'd like to think they are targeting a much higher texel density, especially given their target of 30 FPS. I also hope they don't end up baking the normal maps in with the lighting to save space. (At least on PC, please.)
Maybe UE4 will have support for Sparse Virtual Textures kinda like Rage, or more like how BF3 used them for the terrain. That and a texture stamping utility would kick ass.
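For anyone curious how the Rage/BF3-style virtual texturing trick works at its core, here's a minimal sketch of the indirection step: a huge "virtual" texture is split into pages, a small page table maps virtual pages to slots in a resident cache, and only the pages you actually touch need to live in memory. The page/cache sizes and the dict-based table are my own simplification, not how either engine actually implements it:

```python
# Minimal sparse-virtual-texture sketch: only visited pages are resident,
# and a page table redirects lookups. Sizes/structures are illustrative.

PAGE = 128                      # texels per page side
VIRT_PAGES = 2048               # virtual texture is (2048*128)^2 texels

class VirtualTexture:
    def __init__(self):
        self.page_table = {}    # (page_x, page_y) -> cache slot
        self.cache = []         # resident pages (stand-in for a GPU atlas)

    def _resident_slot(self, page):
        # Fault the page in on first use (a real system streams from disk
        # and evicts least-recently-used pages when the cache fills up).
        if page not in self.page_table:
            self.page_table[page] = len(self.cache)
            self.cache.append(f"pixels for page {page}")
        return self.page_table[page]

    def sample(self, u, v):
        """Map a virtual UV in [0,1) to (cache slot, in-page texel coords)."""
        tx = int(u * VIRT_PAGES * PAGE)
        ty = int(v * VIRT_PAGES * PAGE)
        page = (tx // PAGE, ty // PAGE)
        return self._resident_slot(page), (tx % PAGE, ty % PAGE)

vt = VirtualTexture()
print(vt.sample(0.1, 0.2))        # first touch faults the page in
print(vt.sample(0.1000001, 0.2))  # nearby sample hits the same resident page
print("resident pages:", len(vt.cache))
```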
On the subject of lighting, by 2013-14 who's to say we don't just get a much more dense and efficient version of the (already existing, tested and usable) Crysis 2 LPVs/irradiance volumes?
There is already quite a lot of work being put into a variant of it: "Interactive Indirect Illumination Using Voxel Cone Tracing". This method takes a polygonal scene and dynamically voxelizes it (even animated geometry, as shown four minutes into the video below). Any GI lighting calculations are then performed in this voxel space in real time.
http://www.youtube.com/watch?v=QNQtwzVGmsM
In their paper, they compare their technique to the LPV technique used by Crysis 2 and, as you can see, they have a much more detailed approximation, with an extra bounce of light, while still being faster by a few ms:
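Roughly, the cone-tracing step works like this: march along a cone from the shaded point, and at each step sample a pre-filtered (mipmapped) voxel volume at the mip level whose voxel footprint matches the cone's current radius, accumulating colour and occlusion front to back. Here's a heavily simplified sketch of that accumulation loop; the toy voxel sampler, mip selection, and step sizes are my own illustration, not the paper's exact scheme:

```python
# Simplified voxel cone trace: step along a cone, sample a pre-filtered
# voxel grid at a mip level matching the cone radius, and accumulate
# front-to-back. A toy stand-in for the real 3D texture lookups.
import math

def sample_voxels(position, mip):
    """Stand-in for sampling a mipmapped 3D voxel texture: returns
    (rgb, opacity) pre-filtered for the given mip level. Illustrative."""
    occupancy = 0.1 / (1 + mip)          # coarser mips are more diluted
    return (0.8, 0.7, 0.6), occupancy

def trace_cone(origin, direction, aperture, max_dist=10.0, voxel_size=0.1):
    color = [0.0, 0.0, 0.0]
    occlusion = 0.0
    dist = voxel_size                    # start one voxel out to avoid self-hits
    while dist < max_dist and occlusion < 0.95:
        radius = aperture * dist                       # cone widens with distance
        mip = max(0.0, math.log2(max(radius / voxel_size, 1.0)))
        pos = tuple(o + d * dist for o, d in zip(origin, direction))
        rgb, a = sample_voxels(pos, mip)
        for i in range(3):                             # front-to-back compositing
            color[i] += (1.0 - occlusion) * a * rgb[i]
        occlusion += (1.0 - occlusion) * a
        dist += max(radius, voxel_size)                # larger steps as the cone widens
    return color, occlusion

c, o = trace_cone(origin=(0, 0, 0), direction=(0, 0, 1), aperture=0.577)  # ~60 deg cone
print(f"gathered colour {tuple(round(x, 3) for x in c)}, occlusion {o:.2f}")
```

A handful of such cones per pixel (a few for diffuse, one tight one for glossy reflection) is what gives the technique its bounce lighting at interactive rates.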
Texture resolution and polygon count aside, I am more looking forward to these 'hybrid rendering' systems, as Carmack calls them, enabling some truly spectacular lighting and other dynamic systems. The impression I get from the hearsay around Epic's GDC showing is that it's going to be something big like this.
Yeah, uniquely unwrapped objects will benefit from the texture boost for sure. What I should have said is: going in with the mentality that slapping a 2-4k texture on most things will make it look amazingly better is probably going to end with the first cycle of launch titles needing a huge optimization phase before shipping to get things to run better.
He's not saying that's a problem. He's saying that because everybody is so gung-ho about the new advancements in technology, they're not gonna be thinking about optimization or focusing detail where it's needed. I'm going to assume that most of these games probably aren't going to have uber different budgets (some will, most probably won't), so detail is still going to need to be placed strategically (just like it is now)...
Replies
I never said that, but you moan miserably about little things like they are gamestoppers.
You basically said if they don't provide access to mip levels they shouldn't think about another iteration of the engine? Give me a fucking break.
Hahaha, I am waiting for that BS to be dug up again ^^.
http://en.wikipedia.org/wiki/Unreal_Engine#Unreal_Engine_4
Also, a full rewrite of the engine does not mean that you're going to have to re-learn everything. The interface may very well stay the same.
There is technically no Unreal Engine 3.0+.
http://en.wikipedia.org/wiki/Teraflops_Research_Chip
1000 cores?
http://www.physorg.com/news/2011-01-scientists-cores-chip.html
4096 cores?
http://en.wikipedia.org/wiki/Adapteva
http://www.gametrailers.com/side-mission/2012/02/27/unreal-engine-4-being-shown-at-gdc-next-week-just-not-to-the-public/
... can't wait!
Well it's news to me that it isn't!
but that's one of those franchises that most likely won't ever see a sequel again.. too complex for the average gamer ;/
er, off topic.
is this the new stuff?