I'm considering making a game in UE4 (for fun), and figured why not try to make it as detailed as possible. So many games cater to consoles, so I was wondering how far one could push a game in terms of graphics if it were intended solely for a PC running a 1080 Ti, an i7-7700K (or above), and 64 GB of RAM? Even if I actually ended up finishing such a game, those specs would probably be much more affordable by then anyway, so why not go crazy and have fun.
I'm wondering just how detailed the character models could be. Assume there are ~10 on screen at any given time with hyper-realistic hair/fur, a third-person camera, level assets and environments using 4K textures, and volumetric effects. How many polys do you imagine you could get away with on the characters while maintaining a respectable draw distance?
Replies
And oh... it's the unlimited graphics guys again...
The first immediate problem I ran into is that these assets are friggin' huge. Think about it: if you're using 4K textures for everything, you're going to need a lot of disk space to store them all, and then you either have to stream those assets or fit them within your memory budget.
And since games right now don't have [fully dynamic] GI/raytracing, you're going to have to dedicate more of that memory budget to lightmaps.
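To put rough numbers on that (my own back-of-envelope, nothing UE4-specific; the per-material map count and the material count are guesses):

```cpp
// Back-of-envelope VRAM cost of 4K PBR materials (illustrative numbers only).
#include <cstdio>

int main() {
    const double texels     = 4096.0 * 4096.0;          // one 4K texture
    const double mipFactor  = 4.0 / 3.0;                // full mip chain adds ~1/3
    const double rgba8MiB   = texels * 4 * mipFactor / (1024 * 1024); // 4 B/texel
    const double bc7MiB     = texels * 1 * mipFactor / (1024 * 1024); // 1 B/texel
    const int    mapsPerMat = 4;    // e.g. albedo, normal, roughness/metal, AO
    const int    materials  = 100;  // a guess for one detailed scene

    std::printf("4K RGBA8 with mips: %.0f MiB\n", rgba8MiB);   // ~85 MiB
    std::printf("4K BC7   with mips: %.0f MiB\n", bc7MiB);     // ~21 MiB
    std::printf("%d materials x %d maps (BC7): %.1f GiB\n",
                materials, mapsPerMat,
                materials * mapsPerMat * bc7MiB / 1024);       // ~8.3 GiB
    return 0;
}
```

Even block-compressed, a hundred 4K materials eat roughly 8 GiB before you've spent a single byte on meshes, lightmaps, or render targets, so an 11 GB 1080 Ti fills up fast.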
Nah, it wasn't worth it. I also felt like just throwing more polygons at things gets boring.
The jump from PS1 to PS2 was a massive difference in both modeling and how it affected gameplay. Everyone saw the difference that GTA 3 had over Driver or Ocarina of Time.
But adding 10 or 20k more polygons to a PS4-level vehicle isn't going to elicit the same response. You'd have to make a jump closer to a VFX/movie-quality prop, but that technology is much more demanding and farther away.
I'm also not going to bother with a serious answer to the OP because it's silly. Every engine is different, so take the one of your choice and try it. Tch!
If I were to go back to UE4 today, I would stick to my original plan of a game that only uses hand painted diffuse textures and nothing else.
For photorealism, I'm making pre-rendered cutscenes where I just build an asset to a certain quality and hit render. But this keeps pushing my game even further back in development (an average scene takes 7-8 hours to render).
Until then, I believe they can be adequately disproved from a theoretical standpoint. "Trillions" of "atoms". Whatever. At 32-bit float precision, an 8 GB graphics card holds just over 2 billion floats. Store each point's position as a float4 and you're already down to ~537 million points in memory. Even if they're streaming from the hard drive, that's data constantly being uploaded and overwritten, which is slow. Then you have the colour of the "atom", another float4, and oh look, now it's only ~268 million points.
Let's not forget that what they're technically doing is rendering the points as hulls, so they actually are polygons anyway; a point is just a point, it needs a surface descriptor to actually appear. It's a very inefficient way to render. Next question: how the hell are they going to skin a few hundred thousand points on a character's arm compared to a few hundred vertices? They showed off a video of an animated snail with specularity (another piece of data per atom, so divide the above again), but it just looks like a baked cache.
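For anyone who hasn't written one, here's a minimal linear-blend-skinning sketch (my own illustration, not Euclideon's or UE4's code) just to make the scaling obvious: the outer loop runs once per point, so deforming ~300,000 scan points per limb instead of ~300 mesh vertices is roughly a 1000x jump in work.

```cpp
// Minimal linear blend skinning: cost is O(points x influences).
#include <cstdint>
#include <vector>

struct Float3 { float x, y, z; };
struct Mat4   { float m[16]; };                  // bone transform, row-major

Float3 transform(const Mat4& b, const Float3& p) {
    return { b.m[0]*p.x + b.m[1]*p.y + b.m[2] *p.z + b.m[3],
             b.m[4]*p.x + b.m[5]*p.y + b.m[6] *p.z + b.m[7],
             b.m[8]*p.x + b.m[9]*p.y + b.m[10]*p.z + b.m[11] };
}

void skin(const std::vector<Float3>&  rest,     // bind-pose positions
          const std::vector<uint8_t>& boneIdx,  // 4 bone indices per point
          const std::vector<float>&   weight,   // 4 weights per point
          const std::vector<Mat4>&    bones,
          std::vector<Float3>&        out) {
    out.resize(rest.size());
    for (size_t i = 0; i < rest.size(); ++i) {  // one pass per point -- this
        Float3 acc{0, 0, 0};                    // loop is what explodes when
        for (int j = 0; j < 4; ++j) {           // "atom" counts replace verts
            const float  w = weight[i*4 + j];
            const Float3 p = transform(bones[boneIdx[i*4 + j]], rest[i]);
            acc.x += w*p.x; acc.y += w*p.y; acc.z += w*p.z;
        }
        out[i] = acc;
    }
}
```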
Unlimited, my ass. It also looks like raw, unprocessed scan data, far inferior to cleaned-up scans retopologized into low-to-mid-res geometry.
Total points they could render given position, colour, and specularity: roughly 238 million... hmm, trillions indeed. It also looks as though they need a few hundred million points just to render what's in front of the camera, so I bet they're streaming data in from the HDD as they pan around, which is why there isn't any high-frame-rate footage of their renderer. It's a great way to abolish loading screens, if the actual renderer IS the loading screen.
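Spelling that arithmetic out (assuming an 8 GiB card and my guessed per-point layout: a float4 position, a float4 colour, and one 32-bit specular value, i.e. 36 bytes per point):

```cpp
// VRAM point budget for an 8 GiB card at 32-bit float precision.
#include <cstdio>

int main() {
    const double vramBytes  = 8.0 * 1024 * 1024 * 1024; // 8 GiB
    const double floats     = vramBytes / 4;            // ~2.1 billion floats
    const double posOnly    = floats / 4;               // float4 position
    const double posColor   = floats / 8;               // + float4 colour
    const double posColSpec = vramBytes / 36;           // + spec float: 36 B/point

    std::printf("total floats:   %.2f billion\n", floats / 1e9);       // ~2.15
    std::printf("position only:  %.0f M points\n", posOnly / 1e6);     // ~537
    std::printf("+ colour:       %.0f M points\n", posColor / 1e6);    // ~268
    std::printf("+ specularity:  %.0f M points\n", posColSpec / 1e6);  // ~239
    return 0;
}
```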
It's not impossible, it's just inefficient, and I find the CEO's/company's stance of attacking polygons for being inefficient laughably hypocritical. Cue them making another video about how computer scientists think what they're doing is impossible and how we hate them (they bring it up every video and spend an inordinate amount of time on it; prove us wrong with a practical demo, then!). We don't hate the technology, we hate the lies; other companies have been far more upfront about the shortcomings of their products. Euclideon promises perfection, yet it's been years and there are no products. If it worked, it'd be everywhere by now.
FYI Bruce Dell: 2015 has been and gone. Where are those games you said were coming?
I'm excited for the future of this technology but I hope this company takes their head out of their ass.
I just look at those YouTube comments and get a bit infuriated when people act like current game devs are wasting their time, when the supposed answer to all our problems is this tech that has never seen the light of day.
I wish I knew these people so I could sell them a bridge
Edit: I'm also drunk. I shouldn't forum.
https://www.youtube.com/watch?v=rqm0mG-VBsA
I can't believe I still have this image... old school.
Felidire said:
I'm considering making a game in UE4 (for fun), and figured why not try to make it as detailed as possible. So many games cater to consoles, so I was wondering how far one could push a game in terms of graphics if it were intended solely for a PC running a 1080 Ti, an i7-7700K (or above), and 64 GB of RAM? Even if I actually ended up finishing such a game, those specs would probably be much more affordable by then anyway, so why not go crazy and have fun.
I'm wondering just how detailed the character models could be. Assume there are ~10 on screen at any given time with hyper-realistic hair/fur, a third-person camera, level assets and environments using 4K textures, and volumetric effects. How many polys do you imagine you could get away with on the characters while maintaining a respectable draw distance?
...no need to throw processing grunt and/or polys at the issue.
...so you don't have to wait after working on your pet project for more than a decade.