
[Technical Talk] - FAQ: Game art optimisation (do polygon counts really matter?)

Replies

  • EarthQuake
    I'm going to go out on a limb and say no, definitely not. Those sorts of projects are designed to run in a web browser, on a large range of hardware, and would likely not benefit from the optimizations of a current-gen rendering engine.
  • CrazyButcher
    Ged, I think it's a mix of everything that results in low-end graphics. The last time I looked at Director 3D, it sat on something like a DirectX 7 or 8 renderer, and it didn't use any "modern" features ("modern" meaning 4 years old already)... Those bigger content apps normally don't target modern hardware; they do a lot more work on the CPU and hit batching limits earlier. They are mostly meant to run on "anything", which means integrated chipsets with age-old drivers. Though I am not sure how good Flash or the others are. I know certain Java libraries that use performance-enhancing capabilities do exist, but I don't know how widespread that stuff is.
    It would be good to just test the engine with dummy assets of different resolutions and see how it behaves on the target hardware.
  • Mark Dygert
    This reminds me of the color palette optimization discussions. Do we use 16 shades of brown and make the most of it, or do we pick stock colors and hope people like all-rainbow levels? Now no one cares what palette your textures use. We're breaking down the barriers that tie artists' hands.

    I think it's important to keep the post as it was presented. It's not a license to waste, but approval to stop overworking something to the point it hurts the end result. It's also a call to take the game as a whole into account when modeling one tiny aspect of it. I think people in general (beginners especially) will overestimate the time that will be allowed per asset. Yes, you can make a lovely dumpster out of 250 tris given 2 months to work on it. Or you could make an entire alley with 25k tris in those same 2 months.

    You want to be careful not to run the other way and never optimize. Being neat and tidy can be a boost to production time, especially if that asset is going to be worked on by other people. Passing on something that is easy to work on can be pretty critical when the bugs start rolling in. I always hate having to go back into other people's files, label materials, and sleuth around a file for 20 minutes before I can start fixing things. Spend 20 minutes organizing up front to save someone else 20 minutes of headache. Technically it's a wash, but people won't mind working on your files if they aren't a nightmare. At that point it's not an issue of game resources but production time, which for me is king over all.

    The market of games I work on is much lower than the low end mentioned in those PDFs, and as such we still have to keep to the old idea of optimize until it hurts, but just a little. It will be a few years before I can toss polys to the wind and not care. I thank Microsoft for pushing quality video cards and making them a centerpiece of a good Vista PC. It will only quicken the death of this time-consuming tradition that keeps me from creating more.
  • rooster
    i think you made a great point in pnp, Vig: polygons and draw calls aren't the only resource, time is THE resource
  • JKMakowka
    OK, maybe a bit OT, but what about animation costs (e.g. transformation costs)? More vertices would certainly mean higher transformation costs, or is that mostly limited by the number of bones (and number of weights) anyway?
    And what about vertex animations (.md3) and those new DX10 geometry shaders?

    Assuming, of course, that the object isn't fillrate-limited anyway.

    Edit: hmm, to clarify: I think I read somewhere that DX9-and-below hardware only does vertex animations, and all the bones and vertex weighting are done on the CPU (and then transferred as vertex animations to the GPU), while on DX10 hardware with geometry shaders the GPU can do that. Is that right?
  • perna
    JKM: Someone else can give you accurate data on what you want to know, but think about it in a practical way: how many of the polygons you see onscreen in your game are bone-animated anyway? In a typical modern FPS, not many. If you're gonna have a whole bunch of characters on-screen, well, there's LOD for that.
  • eld
    Vig, while it is true that time is the most expensive thing, optimization knowledge is a skill just like the art itself is.

    A great artist with technical knowledge can do those optimizations quickly, and if they're done for every single prop then there's something to gain from it.

    The optimizations I do for work don't take much extra time; it's nearly always just a quick plan for how to make the object, and a thought process while creating it.

    It even helps quite a lot with the artistic side too!
  • Bruno Afonseca
    Interesting material, but I think you guys are just misunderstanding each other.

    Timing is part of a game artist's skill too, along with optimizing. You just gotta find the balance between the two.
  • CrazyButcher
    JKM: since the first shader model 1 cards (GeForce 3...), it has been possible to do skinning on the GPU. With higher shader models it just became more efficient (more bones and more weights possible).
    This is in fact done, so what you heard is wrong ;) Geometry shaders are mostly good for "generating vertices", which was not possible before. That can be used, for example, to generate 6 copies of a mesh and render into all 6 cubemap faces at once. Geometry shaders can also be used to generate shadow silhouettes for stencil shadows. For those shadows it was indeed necessary to use the CPU before, simply because only the CPU could detect silhouette edges; hence Doom 3 is very stressful for the CPU as well. Most games, however, don't do stencil shadows, and benefit from GPU vertex processing. There are some workarounds that can do silhouette detection on older GPU hardware too, but they're not so common, I think.

    For GPU skinning on SM1/2 and even most SM3 hardware, the bones are stored in the "uniform/constant" memory of a vertex shader. Typically SM1 had limits like 25 bones, and SM2 allows up to around 75. Then you feed per vertex the bone indices and weights; typically that is something like 2 shorts per assignment. Vertex shaders are written for a certain maximum number of weights per vertex (say 2 or 3), and all vertices (regardless of how many weights they actually use) are transformed the same way. Hence, if you know how many weights per vertex the engine allows, there is no reason at all not to use them to full extent. Typical would be 2 or 3 max weights.
    The bone matrices are computed by the CPU beforehand and sent as those "constants". Fewer max weights per vertex = fewer instructions in the shader + less per-vertex data to store. Fewer bones in total = fewer "constants" to send every time the model is rendered.

    Vertex animation, aka morphing, is a bit of a different story, and requires another per-vertex attribute stream that is either also preloaded and "fixed" (think morph targets) or dynamically changed every frame (aka md3). The latter is particularly ugly, as it means sending per-vertex data every frame, which is supposed to be avoided.

    Skinning basically allows all mesh data to be preloaded and stored in video memory; only the bones' matrices must be resent. Hence it's the preferred way.
    However, several more advanced techniques for animation are possible, which store matrices in textures (SM3 vertex shaders can access textures, but it's kinda slow), or use render-to-vertex-stream, and so on. Not in the common case, though: UT3 and Crysis still use just the constants, as I would say nearly everyone else does.
    On consoles with dedicated vertex processing hardware (like what SSE was supposed to be for the CPU), skinning might be done in software for load balancing. The PS3's Cell, for example, has 7 streaming units that can work with the GPU directly and "help out". Or when really complex vertex stuff is done (unlikely), or stencil shadowing...
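To make the skinning math above concrete, here is a minimal sketch of linear blend skinning in plain Python: the CPU builds the bone "constant" palette, and each vertex is transformed by a weighted sum of its bones' matrices, which is exactly what the vertex shader would do on the GPU. All names (`skin_vertex`, the palette layout) are illustrative, not any engine's API.

```python
# Minimal linear blend skinning sketch. Each vertex carries bone
# indices + weights; the bone palette plays the role of the vertex
# shader "constants" described above.

def mat_vec(m, v):
    """Apply a 4x4 row-major matrix to a position (implicit w = 1)."""
    x, y, z = v
    return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3] for r in range(3))

def skin_vertex(pos, bone_indices, weights, bone_palette):
    """Blend the bone transforms of one vertex (2-3 weights is typical)."""
    out = [0.0, 0.0, 0.0]
    for bi, w in zip(bone_indices, weights):
        p = mat_vec(bone_palette[bi], pos)
        for axis in range(3):
            out[axis] += w * p[axis]
    return tuple(out)

identity = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]
move_x   = [[1,0,0,2],[0,1,0,0],[0,0,1,0],[0,0,0,1]]  # translate +2 on x
palette  = [identity, move_x]

# A vertex weighted half to the static bone and half to the moved bone
# lands halfway between the two transforms: translated +1 on x.
print(skin_vertex((0.0, 0.0, 0.0), (0, 1), (0.5, 0.5), palette))  # (1.0, 0.0, 0.0)
```

The per-frame cost is visible here too: only `palette` changes each frame; positions, indices, and weights stay fixed, which is why skinned mesh data can live in video memory.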
  • Wells
    this thread is incredibly informative.

    thanks for taking the time and effort.

    i'm learning!
  • adam
    Just so I may recap on a couple of points made early on:

    The vert count between 2 polys, in-engine, will increase if:
    -The 2 polys are in separate smoothing groups
    -The 2 polys are part of two different UV islands

    The vert count between 2 polys, in-engine, will stay 'the same' as the application's count if:
    -The 2 polys share the same smoothing group
    -The 2 polys are part of the same UV island

    Is this correct?
  • Rick Stirling
    Adam, I believe that is correct, and I believe you can also add in shaders/materials. If you apply a different shader to each polygon, that will break it into 2 objects.
  • perna
    AdamBrome: The best way to look at it is one vertex can only contain one data entry of each type, be it position, normal, uv coordinate or color. Whenever you need two data entries (as with a smooth group break: You'll need two normals), you need two vertices.

    So, if the "same" vert has 2 positions on a UV map (such as you get when there are seams) there needs to be 2 3D verts as well.

    In Max, when you're in UV edit mode, you can select geometry in the 3D viewport and the selection will be reflected in the UV viewport.
    Sometimes you'll select an edge in 3D and it selects TWO in the UV viewport. You'll select a vert in 3D and it selects SEVERAL in the UV viewport. The highest count is always the real count. If it shows 4 verts selected in the UV window, then you actually have 4 verts in the 3D window as well; they're just "grouped" and handled as one.

    Smoothing groups just control vertex normals. Whenever you make a hard break with different SGs you create more verts. You'll have one vert pointing one direction and the other pointing another direction. That's how you get the hard shading there.

    So from this, you'll understand that things aren't split up twice. If your UV seam is in the same place as your smoothing group seams, that doesn't mean 4 times the verts.

    edit: Yeah, plus what Rick says. That's handled differently than the above stuff though, as material isn't a per-vertex thing: you first set the material, then you push the geo data, set another material, push other data. So basically a separate material means a separate model.
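One way to see Per's "one data entry of each type per vertex" rule is to do what an exporter effectively does: key every triangle corner on its full attribute tuple and count the unique keys. This is a toy sketch (not any particular exporter); positions, normals, and UVs here are made-up 2D/3D tuples.

```python
# Toy exporter pass: the in-engine vertex count is the number of
# unique (position, normal, uv) tuples across all triangle corners.
# A UV seam or smoothing split gives the same position a second uv
# or normal, so it becomes a second hardware vertex.

def count_hw_verts(corners):
    """corners: iterable of (position, normal, uv) per triangle corner."""
    return len({(pos, nrm, uv) for pos, nrm, uv in corners})

# Two triangles sharing an edge: smooth + same UV island -> shared verts.
shared = [
    ((0,0), (0,0,1), (0.0, 0.0)),
    ((1,0), (0,0,1), (1.0, 0.0)),
    ((0,1), (0,0,1), (0.0, 1.0)),
    ((1,0), (0,0,1), (1.0, 0.0)),   # reused corner
    ((0,1), (0,0,1), (0.0, 1.0)),   # reused corner
    ((1,1), (0,0,1), (1.0, 1.0)),
]
print(count_hw_verts(shared))  # 4 unique verts

# Same mesh with a UV seam down the shared edge: the two reused
# positions now carry different uvs, so the count grows to 6.
seamed = list(shared)
seamed[3] = ((1,0), (0,0,1), (0.2, 0.0))
seamed[4] = ((0,1), (0,0,1), (0.2, 1.0))
print(count_hw_verts(seamed))  # 6 unique verts
```

A smoothing-group split works identically: swap in a second normal instead of a second uv and the shared positions duplicate the same way.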
  • Rick Stirling
    I *think* that when it comes to the shader, a different shader breaks the polygons into a different draw call, rather than just a new batch, but I'm willing to stand corrected.

    As for smoothing groups adding extra verts: if you get your normal maps nailed, you can often forgo the smoothing groups and set your entire object to a single SG. Also, in the past we'd use smoothing groups on hard edges to stop the polygon shading leaking round (cuffs, jacket hems, hard-edged machinery). Since adding this group adds extra verts and will (usually) break your batching, it's (usually) cheaper just to chuck in those extra polygons that a bevel will give you.

    Usually cheaper, but when you are dealing with deformable objects (skin/morph), you've got more transforms to compute, so it's a toss-up there.
  • perna
    Rick: Transforms are on vertex-level
  • Eric Chadwick
    Adam this picture really helped me get my head around it.

    byf2_figure2.jpg

    Good thread dudes.
  • adam
    Per, Rick, and Eric.. thanks!

    While I have definitions for these words in my head, can someone else state what they mean: Batch & Drawcall. I'll put down what I think they mean, though I'm sure it's wrong.

    "Batch"
    -If a model is duplicated a number of times and its material isn't changed, then they're all a part of the same 'batch', so long as they aren't grouped or defined by LODs. Guh, does that make sense? Probably not..

    "Drawcall"
    -Not entirely sure. I want to say it's when material layers (spec, diffuse, etc.) are called to the frame buffer, but I am not certain.

    EDIT: Also, vertex normals. Aaargh, wtf! haha. I always thought the normal of a triangle was what mattered, and now I learn of vertex normals? Anyone have a handy picture demonstrating how a vertex's normal is defined?
  • Rick Stirling
    Not entirely sure... but I think I'm in the right area here:

    Batch: a chunk of verts that are sent from the CPU to the GPU. Batches are part of a draw call, and you can have several batches in a single draw call. Batches are chunks of contiguous verts that don't get broken by smoothing groups, UV boundaries, etc.
  • perna
    AdamBrome: Strictly, there is no such thing as a polygon normal. Of course you can measure the normal of a polygon if you want, but it has no relevance in rendering. You know how on a non-flat object such as a sphere the shading goes gradually from light to dark? This shading goes from vertex to vertex, and the vertex normals determine how much influence the light should have.

    It's important to realize that if light were calculated "realistically", light on an angular lowpoly object would never look smooth. Even hipoly FMV work has to use fake Gouraud-style shading to get by.

    There's a method to interpolate vertex normals on a curve as opposed to linearly; it is used in offline rendering and can be done in realtime shading as well.
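A tiny sketch of what "shading goes from vertex to vertex" means: light each vertex using its vertex normal, then interpolate the resulting brightness across the triangle with barycentric weights (Gouraud-style). Plain illustrative Python, not shader code; the normals and light direction are made up.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse(normal, light_dir):
    """Lambert term: clamp(dot(N, L), 0)."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def gouraud(vertex_normals, light_dir, bary):
    """Light at each vertex, then blend the *brightness* across the face."""
    shades = [diffuse(normalize(n), light_dir) for n in vertex_normals]
    return sum(w * s for w, s in zip(bary, shades))

light = (0.0, 0.0, 1.0)
normals = [(0, 0, 1), (1, 0, 1), (0, 1, 1)]  # verts tilted away differently

# At the first vertex the light hits head-on; mid-triangle is a blend
# of the three vertex shades, which is where the smooth gradient
# across a "flat" triangle comes from.
print(gouraud(normals, light, (1.0, 0.0, 0.0)))  # 1.0
print(gouraud(normals, light, (1/3, 1/3, 1/3)))
```

The "interpolate normals instead of colors" method Per mentions (blend the normals per pixel, then light) gives smoother highlights; the structure is the same, just with the normalize-and-dot moved after the interpolation.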
  • CrazyButcher
    For coders, mostly batch = drawcall.

    Drawcall:
    typically consists of a "world matrix", fragment & vertex shaders, render states (blend, alpha test, depth test...), the textures to be used, and the geometry.
    The geometry is a vertex buffer + indices which make the triangles (1,2,3, 2,3,4, ...). You don't need to use all the vertices in the buffer, so that doesn't really matter...

    It might be that the vertex data is made of different streams that reside in different buffers as well, but let's not overcomplicate things.

    The reason those non-material "splits" are nasty is simply storage space: the more "duplicates", the larger the vertex buffer, and the less chance of reusing the same vertex.

    The vertex chunk often resides in graphics card memory (static meshes). Sometimes it may be copied dynamically (lots of copies of the same stuff concatenated for simple instancing, or manual vertices from rocket trails, particles, shadow silhouettes...). These kinds of copies are not super fast; most data in games is static.

    It is simply the raw data for each vertex, i.e. it is already broken down to the "unique" vertex level. Several vertices can of course be indexed multiple times when they share a triangle that doesn't require splits.

    Batching:
    being able to "render as much as possible" in a single drawcall. That means trying to maximize the triangles being drawn, as every time you "start a new drawcall" it's not as fast as rendering a lot with one call/batch.

    So say we have that huge vertex data chunk already stored in VRAM, so we don't need to send it per frame. Then batching basically means trying to make long series of "indices" into those vertices that we actually need now.

    I can recommend reading the PDFs linked in the very first post (the very first slide of the batchbatchbatch paper starts with "What Is a Batch?")
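The batching idea above can be sketched by counting state changes in a naive draw list versus one sorted by material: every material switch roughly means ending the current batch and issuing a new draw call. This is a toy model (real engines track far more state than one material key); the scene contents are invented.

```python
# Toy render loop: each mesh has a material; switching material ends
# the current batch/draw call. Sorting the draw list by material lets
# meshes that share a material render together.

def count_draw_calls(draw_list):
    calls, current = 0, None
    for mesh, material in draw_list:
        if material != current:   # state change -> new draw call
            calls += 1
            current = material
    return calls

scene = [("crate", "wood"), ("barrel", "metal"), ("crate2", "wood"),
         ("pipe", "metal"), ("crate3", "wood")]

print(count_draw_calls(scene))                              # 5, interleaved
print(count_draw_calls(sorted(scene, key=lambda d: d[1])))  # 2 after sorting
```

Same meshes, same triangles, but the sorted list does the work in 2 calls instead of 5, which is the whole point of "render as much as possible in a single drawcall".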
  • Ryno
    Another reason to avoid extreme optimization: LODs.

    A few months back, Okkun made a good point to our team as we were making a bunch of LOD models. Some of our artists were focused on reducing poly counts to exactly the target numbers, even if it meant massacring a model. He made the point: if performance turned out to be poor, what might be the first way to improve things?

    They'd bring the LODs in much closer, where those craptacular models are right in plain view.

    So rather than completely destroying an LOD model to save a whopping 34 polygons, it might be better to just leave them in and maintain the proper silhouette and UV borders. Even if several dozen of these models are on screen at the same time, an extra 5000 polys means very little to a modern graphics card. But as Per was pointing out, bad looking art is bad looking art no matter how you slice it. And in the case of some of these LODs, the degradation for a very minimal polygon savings was quite extreme.
  • cardigan
    I've had long long discussions with my lead engine coder about this, and he has raised something which I don't think has been mentioned here yet:

    Having bevels along edges that could otherwise be hard (as in the first toolbox example in this thread), whilst not increasing the vertex count, does lead to quite a lot more small triangles on screen, especially if applied to everything in an environment.

    My lead engine guy says that because GPUs can fill multiple pixels in parallel, but only within the same triangle, having lots of triangles that contain very few pixels leads to stalls in filling the pixels and thereby hits your fillrate.

    Example - if your triangle is 2 pixels big on screen the maximum number of pixels that can be processed simultaneously is 2, when actually the GPU could be pushing a lot more through.

    As our engine was fillrate limited (I believe most are these days), he felt that this was a significant factor and therefore said that we should use hard edges where possible.

    Has anyone else heard this? Any thoughts?
  • CrazyButcher
    Yes, that's true, but I think it was mentioned before (I've put it into the first post now). Thin triangles, or triangles only a pixel or a few pixels in size, will be bad for the reasons he mentioned.

    That box mesh example here depends, of course, on the size of the box on screen. Once it's just background, or always small on screen, the bevels would be overkill, and one can live with a few minor shading artefacts.
    I.e. it's a question of LOD and how the model is used. If you have a game with corridors and the box will always be exposed at a reasonable size, there is no reason for a LOD/lower-res model, as the bevel tris will be okay (unless you're talking about ultra-fine bevels, which would always be too thin).
  • cardigan
    I've been wanting to actually do a side by side performance comparison test and see what the impact is when dealing with a whole environment, unfortunately it requires building the environment twice. If I get round to it I'll report back!
  • 00Zero
    Perfect example: CoD: WaW uses MODELS for its grass, not alphas.
  • Frankie
    The start of this thread is pretty old and I haven't spent much time keeping up to date with the cutting edge of 3d engines. Are there any big changes in the way things are done worth knowing about that haven't been mentioned in this thread?
  • CrazyButcher
    Not much should have changed; this is basically a hardware / principle-of-rendering issue rather than a matter of individual 3D engines' advancements.

    We are pretty much fixed to "older" hardware anyway (especially with the consoles). IMO we won't see a "real change" until all-new consoles or all-new GPU systems (Larrabee, or whatever advanced ATI/NVIDIA GPUs come in years ahead) are mainstream, i.e. a few years to go still...

    And it's hard to define "mainstream" with tons of Wiis, iPhones, PSPs, DSs, and casual PCs having "last-gen" hardware.
  • Tumerboy
    No, this was simply pointing out that technology is to a point where counting every last triangle isn't really a big deal any more. This is not an excuse to do sloppy work, or to not take appropriate optimization steps, but rather to say that it's better to leave in a few bevels to make the model look better.
  • Frankie
    Thanks for the reply CB. I'm also interested in how the new shadowing method works and how expensive it is (compared to stencil shadows), if you don't mind explaining :)
  • CrazyButcher
    You have to render the scene's depth from the light's frustum (classic shadow mapping). But as you only need depth, this is very fast (i.e. mostly vertex bound).

    There are numerous tweaks to enhance quality, like rendering multiple (cascaded) shadow maps for the sun, shearing... which just raise the number of times a single object might be rendered to depth.

    Once the shadow maps are created, "normal" shaders can sample them (again, different methods exist for soft shadows, removing aliasing... whatever).

    The setup for stencil shadows as such was more complicated (requiring a stencil buffer, requiring volumes/silhouettes...). However, for less complex geometry it is still practical, as it gives you pixel-perfect shadows, while the shadow map approach has to do more tricks to hide aliasing. Then again, shadow mapping is better suited for soft shadows.

    Shadow mapping is more pipeline friendly, as you have a real "shadow texture" to sample and can therefore do more tricks. But it needs more tweaks and advanced methods to get really nice results.

    So how expensive it is depends on the quality you want to achieve, but you can get "more" out of it than stencil. And silhouette generation for stencil stuff requires (slow) CPU interaction on pre-SM4 hardware (i.e. the majority of hardware out there).
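The two passes described above can be sketched in miniature: pass 1 keeps the nearest depth per shadow-map texel as seen from the light (a depth-only render); pass 2 compares each shaded point's light-space depth against the stored value. This is a 1D toy in plain Python, and the `BIAS` constant is a hypothetical illustration of the usual anti-acne tweak, not a real engine value.

```python
# Miniature shadow mapping along one axis: "texels" are integer
# light-space coordinates, depth is distance from the light.

BIAS = 0.01  # illustrative depth bias to avoid self-shadowing ("acne")

def build_shadow_map(fragments):
    """Pass 1: keep the nearest depth per texel (depth-only render)."""
    smap = {}
    for texel, depth in fragments:
        smap[texel] = min(depth, smap.get(texel, float("inf")))
    return smap

def in_shadow(smap, texel, depth):
    """Pass 2: a point is shadowed if something nearer wrote its texel."""
    return depth > smap.get(texel, float("inf")) + BIAS

fragments = [(0, 2.0), (0, 5.0), (1, 3.0)]  # (texel, depth) from the light
smap = build_shadow_map(fragments)

print(in_shadow(smap, 0, 5.0))   # True: the surface at depth 2.0 occludes it
print(in_shadow(smap, 0, 2.0))   # False: it *is* the nearest surface
print(in_shadow(smap, 2, 4.0))   # False: nothing wrote that texel
```

Cascades, PCF soft shadows, etc. all layer on top of this same depth-compare; the aliasing tricks mentioned above exist because each texel covers a whole range of screen pixels.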
  • Richard Kain
    I am given to understand that a lot of outside-the-box coders are seriously considering a revolution in rendering. Apparently a lot of them think that the advent of multi-core processors is going to make the traditional GPU obsolete, and that the standard rendering pipeline will also be phased out in a few more years. As I understand it, the ability to multi-thread opens up new avenues for software-based rendering, avenues that will make software-based rendering comparable to, and in some cases superior to, traditional GPU-supported rendering.

    Older rendering techniques that were discarded years ago are now being looked at anew. I believe there are some who are attempting to resurrect voxel rendering. It could be that in the future polygonal modeling will be rendered obsolete.

    Of course, this is pure conjecture at this point. It will be years before multi-core processors are common enough to make such development financially viable. And current polygon-based tools and engines are so prevalent that such a major shift in methodology is sure to be slow.

    Still, it's probably a good time to be a C programmer.
  • Frankie
    Thanks for the replies, interesting to read.
  • CrazyButcher
    Removed the link; it was an artificial benchmark test (a 48- to 48,000-sided circle) to prove the adverse effect of thin triangles on performance.

    NOT suggesting to use certain layouts for the "caps" of low-poly models; with so little geometry at hand, interpolation artifacts are much more dominant...
  • Muzzoid
    CrazyButcher, wow, that is such a difference in speed. Definitely something to keep in mind :). Thanks for that link.
  • Mark Dygert
    Hey, isn't that the crazy guy who documented the interior mapping pixel shader?

    Interesting read, thanks for posting, but like you said, I don't think I'll be redoing any cylinders anytime soon...
  • ArtsyFartsy
    Added this link
    http://www.humus.name/index.php?page=Comments&ID=228
    to the first post.

    It illustrates how triangulation strategies affect speed: basically it shows the micro-thin-triangles issue vs. large-area triangles. Bear in mind that the "number of sides" in the stats is really high, something you will not reach in regular modelling. I.e. the low counts you work with all have the same performance cost, so don't redo your cylinders ;)

    That was a great little read.

    I guess these are issues the engine programmer needs to deal with rather than the modeler, since the modeler will supply mostly quad-based geometry.

    However, if you're following the workflow of reimporting high-poly geometry into a modeling app (3ds Max/Maya) and then optimizing it, then the optimizer should perform the recursive area subdivision, which is, I think, the way 3ds Max does it.
  • Mark Dygert
    I think you're right, this is the method 3ds Max uses to create cylinder caps; nice to know there's some logic behind what looked like chaos.

    "I guess these are issues the engine programmer needs to deal with rather than the modeler, since the modeler will supply mostly quad based geometry."

    Modelers need to be aware of what their hidden edges are doing, and not think that engines work in quads, but know that everything will be interpreted into triangles. In Max it's pretty easy to flip hidden edges around, which sometimes falls on the rigging/animation guy. In Maya I think you have to force the edge by creating it or making it visible? It tends to retriangulate the hidden edges on its own... maybe there's a way to force it to stop and I haven't found it yet.
  • CrazyButcher
    @Vig: yes, Humus did that.

    @Artsy: I agree, normally you don't run into extreme situations like that, i.e. most triangles in a game mesh should have similarly sized edges.
    For reimport I think usability is more important, i.e. something like the last triangulation scheme would be sucky to work with.
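For anyone curious what "fan vs. more balanced triangulation" means for a cap in index terms, here's a toy comparison in the spirit of that discussion: a fan (long thin triangles all touching vertex 0) versus recursive halving (fatter triangles). This is a sketch of the general idea, not 3ds Max's actual algorithm; it assumes a convex n-gon with vertices indexed 0..n-1.

```python
# Two ways to triangulate a convex n-gon cap into n-2 triangles.

def fan(n):
    """Fan triangulation: every triangle shares vertex 0 (thin slivers)."""
    return [(0, i, i + 1) for i in range(1, n - 1)]

def balanced(indices):
    """Split the polygon in half with a diagonal, then recurse on each half."""
    if len(indices) == 3:
        return [tuple(indices)]
    mid = len(indices) // 2
    # Diagonal from the first vertex to the middle vertex splits the n-gon.
    return balanced(indices[:mid + 1]) + balanced(indices[mid:] + indices[:1])

print(fan(8))                    # 6 triangles, all touching vertex 0
print(balanced(list(range(8))))  # 6 triangles, spread across the cap
```

Both layouts cost the same verts and the same triangle count; for the small side counts used in actual game cylinders, the performance difference is negligible, exactly as said above.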
  • Mark Dygert
    Would it be easy to script recalculating the edges based on that last method, say on a selection, or only on polys with more than 2 tris? Maybe evaluate the mesh kind of like STL Check does, highlight any polys that might not be optimal, and give the person the chance to deselect/select them before changing them around?

    I'm not suggesting anyone get cracking on this, just wondering if it's possible... seems like it would be...
  • CrazyButcher
    Well, first of all we are talking about ngons with more than 48 sides; I very much doubt you will find those in the real world ;) I suspect those images of the 12-sided cylinders got burnt into your head, suggesting that layout would matter much even at that detail level... which I doubt ;)

    I second the idea of highlighting triangles with extreme proportions, or whose area relative to the rest of the mesh is very small...

    but I would leave the actual fixing to the person.
  • Zwebbie
    While the difference in speed is impressive, the Max Area method of triangulation completely ruins your normals. When I have to choose between a performance boost and correct normal maps, I prefer the latter.
  • CrazyButcher
    I will remove the link; it creates too many false impressions.

    Zwebbie, you will never gain a performance boost on regular "ingame" cylinders, because they have too few sides... and yes, with so little geometry, interpolation issues are much more important.
  • EarthQuake
    OK math dorks, since I saw some mention of how this applies to cones in that article, I was curious myself.

    WHICH IS BEST?

    A = standard cone
    B = each loop has half the number of edges
    C = the most uniform method I could come up with

    cones.jpg

    Edit: actually I'm sure 64, 48, 32, 16 would have been more uniform for C, oh well.
  • Proxzee
    I think optimization will still be relevant, since most people do not own high-end cards.

    The casual market is still huge, and things like handhelds and phones still require low-fidelity models.
  • [PB]Snoelk
    I think the B cone would be best.
    Cone A needs plenty of smoothing groups to look smooth without over-smoothing the cone tip: something like face A smoothed with B, face B with C and A, and C with D and B.
    Its UV space uses the same vertices as the mesh, plus 2 more for the seam.
    On cone B you can put everything in smoothing group 1 except the cone tip. UV space again uses the same verts as the mesh, plus 2 more for the seam.
    Cone C works like cone B but uses more vertices.
  • Chai
    I go with option D: just a tiny poly at the tip using a different smoothing group.
  • sama.van
    I never added this one here, but maybe it will help someone?

    It was an attempt to create a detailed military box from... a box (= cube)... :)
    It's not really original work, but it might help some people understand how to remove polygons with a "good" diffuse and some shader work.


    http://www.samavan.com/3D/Realistic/Box_A/samavan_Obj_Box_A001.jpg

    http://fc02.deviantart.net/fs48/o/2009/337/9/6/965775fcac1a424cde129ca0fad29247.jpg

    http://www.samavan.com/3D/Realistic/Box_A/samavan_Obj_Box_A002.jpg
  • Hitez
    I spent a solid few months optimizing polys, lightmap UV channels, and collision meshes for everything in UT, and the act of stripping 2 million polys out of a level generally improved the FPS by 2 or 3 frames.

    Polycount is not the huge issue people were (rightly, at the time) sure it was. The bigger issue now is texture resolution, because all assets carry 3 textures as standard (normal map, diffuse, and spec), and that's before you have additional mask textures for emissives and reflection or whatever other stuff you are supporting in the shader.

    Shader complexity is also a bigger issue now, because it requires longer rendering time.
    Section counts are a bigger issue: meshes carrying 2 of each texture require a 2nd rendering pass.

    I can't explain things technically enough, but the coders have explained to me a couple of times that just beating everything down with optimization on things like polycount doesn't help as much, because of different things being CPU- or GPU-bound.

    Mesh count is a big issue now that everything is a static mesh rather than the majority being BSP. BSP is terribly inefficient compared to mesh rendering, too.

    A mesh is pretty much free for the first 600 polys; beyond that, its cost can be reduced dramatically by using lightmaps for self-shadowing and so on rather than vertex lighting.

    The reason I was saying I wouldn't take out the horizontal spans on this piece was also largely that, as an environment artist, you have to be thinking about the crimes against scale the level designers will often commit with your work to make a scene work.

    Just because I know it's a box doesn't mean it won't get used as something else much larger, so I always try to make sure it can hold up, whatever it is, at 4 times the scale!

    Butcher mentioned instancing; this is another feature that we relied upon much more heavily to gain performance.
    Due to textures/BSP being more expensive now and polycounts cheaper, we made things modular, very very modular.

    For instance, I made a square 384/384 straight wall using a BSP tiling texture and generated about 60 modular lego pieces that use the same texture and all fit with themselves and each other, to replace BSP shelling-out of levels.

    This led to lots of optimizations in general and quick, easy shelling of levels, and it gave our levels a baseline for the addition of new forms to the base geometry.

    I doubt I'm changing anyone's opinion here; maybe making a normal-map-driven next-gen game will convince you, though.
    And by next gen, I just mean the current new technology, like id's new engine, the Crysis engine, UE3, etc., because they are normal-map driven and the press likes fancy names for simple progressions of technology.
    r.

    DID ANYONE READ ALL THIS?! That fucking blew my mind, that's so insane. All that crazy dedication; no wonder I still think UT is one of the best looking of the "next-gen" era.
  • fade1
    Even if I come from the low-spec game dev side (Wii & DS), I can only underline Kevin Johnstone's post. Of course you shouldn't waste polys on senseless edge loops and details, but it's the shaders and SFX that bring your performance down to stop motion. To get our games running at 60 fps on Wii, our bottleneck is the shaders, which need to be simplified here and there. The vertex count is just a base issue, and something comparably easy to optimize.