I'm wondering how big a performance increase you actually get from modular environments, and where it helps.
In my mind, logically, I feel it would only really help load times, since the computer loads less unique geometry. But does it help framerates as well? To me it seems like once it's loaded it wouldn't really matter; there would still be a similar amount of geometry whether or not you work modularly.
Am I wrong?
Replies
In other words, the question is too broad.
Look up "geometry instancing".
If your engine is more bound by the GPU's fill rate, then it may not help as much.
I do know from some demos, and just from using 3ds Max, that instancing à la modular building is a huge help.
I have been able to get 15-20x the tris on screen with heavy instancing compared to all-unique geo, while keeping a slow but still usable framerate.
And yet engines like UE 3 don't seem to instance meshes when you duplicate them (only in special cases like landscape foliage). Every discrete mesh is its own draw call.
Worth mentioning: any time you change the scale of your object, new mesh data has to be created for that object in nearly every engine (if not every engine)...
So if you're instancing objects and changing the scale of every one of them, your memory savings will be minimal.
If you're instancing geometry and sharing textures, you get a performance gain because the hardware doesn't have to do texture look-ups for new textures in the materials (if I understand that correctly).
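To make the draw-call side of that concrete, here's a tiny toy sketch (plain C++, not any engine's real API; names like DrawMesh and DrawMeshInstanced are made up purely for illustration) of why 500 plain duplicates cost 500 draw calls while true hardware instancing costs one:

#include <cstdio>
#include <vector>

// Toy model only: it just counts draw calls, nothing is actually rendered.
struct Transform { float x, y, z, scale; };

static int drawCalls = 0;

// Plain duplicates: one draw call per mesh, the way most engines submit them.
void DrawMesh(const Transform&) { ++drawCalls; }

// Hardware instancing: one draw call for the whole batch; the GPU reads the
// per-instance transforms from a second stream/buffer.
void DrawMeshInstanced(const std::vector<Transform>&) { ++drawCalls; }

int main() {
    std::vector<Transform> crates(500, Transform{0.0f, 0.0f, 0.0f, 1.0f});

    drawCalls = 0;
    for (const Transform& t : crates) DrawMesh(t);
    std::printf("duplicated: %d draw calls\n", drawCalls);   // 500

    drawCalls = 0;
    DrawMeshInstanced(crates);
    std::printf("instanced:  %d draw calls\n", drawCalls);   // 1
    return 0;
}

Either way the GPU still processes roughly the same vertices; the saving is mostly on the CPU/driver side, plus memory, since the mesh data only exists once.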
LOTS of stuff to read on this topic if you want to fully understand it...
Biggest gains will be had in production cost savings (as was mentioned before)... performance is such a case-by-case thing that you just have to keep watching it... instance a bunch of objects, look at the numbers, instance two objects a bunch of times, look at the numbers... until you get an idea of what's going on there.
Just depends on too many things to be able to say... but I think 12.3 is about accurate.
As far as loading time, mesh loading is usually a very small part of it. Texture and shader compiling along with any engine/game specific preprocessing have the biggest effect on loading, and a good streaming system will pretty much completely remove texture loading time from that.
What makes you think that way?
All resulting in a pretty damn boring environment, where every city, every building within it, and every interior in each building looks exactly the same *clears throat* "Oblivion, Fallout 3, New Vegas, cough". It's a shame that people seem to have to make such a heavy trade-off to maximise performance. Either that, or like other people have said, it makes for a quicker, more "bish bash bosh" workflow, while sacrificing SO much originality and character between different areas of an environment.
I think developers should still use modular environments, but they need to utilise them better, so we get less of "hey, how come everything looks so boring and identical", and more of "wow, everything looks so... I don't know, amazing. Each city has its own sense of character. I've only seen this building 3 times". I think it's nothing out of the ordinary to create a big environment which has MINIMAL "copy & paste geometry". It just takes more TLC and creativity on the designer's part. I mean, they did it with "San Andreas", and to an extent they did it with "Red Dead Redemption" (can't remember if RD Revolver was similar in that respect), and other older games I can't remember lol.

Also, even from a console perspective it's possible. Thanks to Blu-ray, we can cram something like 40 GB onto ONE disc. Let's not even get started on how that applies to PC gaming. If they started using Blu-ray, or better yet a new type of DVD-drive-friendly disc (lol?), they could fit over 4 DVDs' worth of content (e.g. a huge, sprawling, unique world PACKED with thousands of lines of unique dialogue) onto ONE disc. People would complain that they don't have 40 GB to set aside for ONE game, but times have to move on. I remember when 1 MB of RAM was a HUGE deal, back in the days of the Atari ST, and you could pay like £40 for it. Now you spend £40 and get a thousand times more RAM. It's crazy! I think we're WELL overdue for the next MAJOR evolution in gaming technology.
I understand that it would mean adding another 6 months, maybe even a year or more, to the production time, which might not be an option if you have stringent time constraints, but if other developers can do it, then all of them should at least TRY, IMHO, because hyper-boring free-roam environments are ruining a genre (if you can call it a genre) that I love so much! I used to love exploring open-world maps to see what I could find, but when everything looks the same, what's the point? Might as well just play the story and be done with it, never straying off the main path.
I only know a little about the technical side of the reasoning behind "modularitis", but surely there's a way to make a game's environment look consistently unique without needing a government-classified hyper-computer to run it. I do understand why it's not a good idea to make huge objects like buildings one single mesh, though, and the general theory behind the performance/graphics trade-off problems.
I love free-roaming games, or what they used to be, or strive TO be, so I hope they one day manage to break free of this "modular disease". I know somebody's gonna come along one day with a new game engine with cutting-edge technology, and it's gonna blow EVERYTHING else out of the water. It's not gonna be the next engine from Crytek either, OR Epic. It's gonna be a newcomer with a big budget and a lot of ideas, or a similar pipe dream like that!
Anyway, enough, I don't mean to rant like an old man HAHA!
Reusing the same textures is obviously important, but I think creatively using a different mesh to reuse the textures will be the way to go in most situations, especially buildings. A great way of doing it non-redundantly, though, is to break up walls/roofs into small modular pieces and create new buildings out of those, like Lego, as long as we don't get the same-looking object as a whole.
I guess in the end there's the lazy man's modular building, and then there's the creative, good way to do it where you don't make it so obvious, but I think that's usually going to entail using less of it.
The main question is: if you were to get a job as an environment artist, would people get mad at you for not using it enough, because of performance? And yeah, it sounds like it's just a scene-size-dependent thing.
If you are getting a job, just learn to use modularity. Don't avoid it, embrace it. It is a system designed to help you save time, not just framerate and memory.
Duplicate static meshes in the editor and use the "stat d3d9rhi" command. It'll show the number of primitive draw calls per frame.
You're still saving on texture space and materials, though: if you've got multiple elements using the same material, then you can have those elements duplicated all over the place.
http://www.scottjonescg.co.uk/FYPResearch/Investigation_into_modular_design_within_computer_games_v1.0.pdf
However, there are other benefits: you can attach parameters to the individual transforms of the objects that help with defining the gameplay of the space. You can also have richer physics and interactivity overall if your engine draws separate objects. And the culling is easier to handle.
You're not just making an environment. You're making a game. So your way of working has to help with the gameplay as well as just the way it looks.
I can't think of many engines that draw the scene as one big mesh? I think I saw mention that Halo was like this?
That was an awesome link (at least it was new to me).
Thanks!
Halo uses large, stitched BSP for big sections of the level, like large brush strokes: things like terrain and buildings. That helps with visibility culling. They don't render everything as one draw call. They also use tons of instances to fill in the detail strokes. So not exactly.
Let me see if I got this right. If I duplicate an identical mesh in UDK, when it renders it, even though it's the same textures, materials, and mesh, it will still be another draw call? That means the video card stalls until the CPU issues the next set of instructions, right? Why wouldn't it reuse the old instructions? Is this why Epic suggests merging your instanced meshes for mobile UDK games? What happens when a mesh uses a multi-sub-object material?
Multi-sub materials, i.e. multiple material IDs in UDK, are each their own draw call. So if you have a static mesh with three mat IDs, it's three draw calls.
Here are some pics to illustrate it:
Upper one has about 2k cubes with one material with a texture that has three different colours.
Lower one is the same cube except the coloured faces are separate material IDs (so three in total) and the colours are just constants assigned in the material editor.
http://www.cs.helsinki.fi/u/jonimake/good.jpg
http://www.cs.helsinki.fi/u/jonimake/bad.jpg
That was with my old Core 2 Duo E6700; I was getting severely CPU-limited with over 2k draw calls.
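For anyone wondering why the material IDs multiply the cost, here's a rough sketch of how a render loop ends up issuing the calls (toy C++ only, not UDK's actual code; the commented SetMaterial/DrawSection names are made up):

#include <cstdio>
#include <string>
#include <vector>

// One material ID = one "section" that needs its own state change + draw.
struct Section { std::string material; };
struct Mesh    { std::vector<Section> sections; };

int main() {
    Mesh cube{{{"red"}, {"green"}, {"blue"}}};   // three material IDs
    std::vector<Mesh> scene(2000, cube);         // ~2k cubes, like the pics

    int drawCalls = 0;
    for (const Mesh& mesh : scene) {
        for (const Section& section : mesh.sections) {
            // SetMaterial(section.material);  // CPU-side state change
            // DrawSection(mesh, section);     // the actual draw call
            (void)section;
            ++drawCalls;
        }
    }
    // 2000 meshes * 3 sections = 6000 draw calls, vs 2000 with one material.
    std::printf("%d draw calls for %zu meshes\n", drawCalls, scene.size());
    return 0;
}

The GPU work is roughly the same either way; it's the per-call CPU/driver overhead that piles up, which matches the CPU-limited numbers above.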
Well, only Epic programmers can answer that.
Don't know if multithreaded rendering from the newer DirectX APIs helps in the case of UE 3. Haven't tried.
With modular chunks you also have the added headache of seams, so plot out your UVs carefully.
EDIT:
One other thing to think about is that working modularly, you have the option of combining meshes and exporting larger chunks. So you get the added bonus of building in chunks, but the final exported pieces don't necessarily need to be modular.
Also, from what I understand it's not so much a performance boost as it saves on texture space, final footprint, and shader complexity. Of course it all depends on how the game is built, and how the pieces are used...
I don't know the exact answer but it's actually pretty tricky. Someone mentioned culling, which would be negated by batching together meshes like that. You also have problems with dynamic lights. If you batch a ton of meshes together and there's a dynamic light hitting one corner of one of them, you'll be rendering that huge batched mesh multiple times which might undo any of the gains that the batching got you.
Though now UE 3 has deferred rendering in DX 11 mode, which should help with the dynamic light redrawing (or rather make it unnecessary). Though only when there are no shadows being cast from the light, AFAIK.
But anyway, that's for DX 11 only with that specific engine.
On another note, why is deferred rendering limited to DX11 in UDK? It seems like a lot of features that I have seen work in DX9 (and even DX10, what happened there?) are now being limited to DX11. For example, Crysis 2 introduced POM as a DX11-only feature, but Crysis 1 had it in DX9/10, although I suspect this has something to do with their transition to fully deferred shading.
There are basically no speed differences. When talking about modular pieces such as a building, you will not have 1,000 of the same piece. So drawing 10 separate modular pieces vs. grouping them together will not really change anything. The reason to make modular stuff is 80% dev time / ease for designers, and maybe in certain cases (mainly consoles) 20% to save some memory. Also, most man-made things are just modular. You mass-produce metals, buildings, etc., which means lots of repetition.
With respect to batching and instancing, you don't want to draw stuff you can't see. The more you batch, the bigger the object gets. So by batching and not doing separate draw calls, you are basically saying, "I will always draw ALL of these, as long as I can see even one of them." Like batching trees: do you batch the whole forest and say "draw all or none of it", or do you say "draw the few trees I can see, draw low-poly versions of trees far away, and draw NONE of the trees I don't even see"? So UE3 can't just batch ALL static meshes; they would have to figure out what to batch, or leave it up to the artists/designers.
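To put some made-up numbers on that forest example (just a toy C++ sketch of the trade-off, not how any particular engine actually decides):

#include <cstdio>

int main() {
    const int trees        = 1000;   // whole forest
    const int trisPerTree  = 2000;
    const int visibleTrees = 150;    // what the camera frustum actually sees

    // Separate objects: the engine can cull per tree and only draw the
    // visible ones, at the cost of one draw call each.
    const long long separateTris  = 1LL * visibleTrees * trisPerTree;
    const int       separateCalls = visibleTrees;

    // One big batch: a single draw call, but the whole forest gets submitted
    // as soon as any part of it is on screen.
    const long long batchedTris  = 1LL * trees * trisPerTree;
    const int       batchedCalls = 1;

    std::printf("separate: %d draw calls, %lld tris\n", separateCalls, separateTris);
    std::printf("batched:  %d draw call,  %lld tris\n", batchedCalls, batchedTris);
    return 0;
}

Which version wins depends on whether you're CPU-limited on draw calls or GPU-limited on vertices and fill rate, which is why there's no blanket answer.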