An extremely laconic summary of the text below, which I figured might be useful to have in one place:
Things to keep in mind:
It’s not the tri count that matters, but the vertex count. Smoothing groups (soft/hard edges), UV seams and multiple materials all increase the number of verts, so keep them to the minimum possible. On the other hand, vert counts hardly matter as long as they don’t add another draw call, so feel free to find the spare ones a proper use. Uber-lowpoly models are actually bad for performance.
Triangulate using “Max Area” or “Shortest Edge” principles.
Materials (shaders) are the most fruitful thing to optimize.
If you know you’re going to have dynamic lighting, then break bigger objects into smaller ones.
Lightmap static meshes and make them lightmapping ready if your engine supports it.
Workability and Beauty is still king.
Make sure you've acquired all the specific knowledge you need from the people responsible.
Things to check upon:
- Deleted History, Frozen Transformations/Reset XForm, Collapsed Stack
- Inverted Normals
- Mesh splits/ Open Edges
- Multiple edges/ Double faces
- Smoothing groups/soft-hard edges
- UV splits
- Grid Alignment/ Modularity/Pivot point placement
- Material Optimization
- Lightmapping possibility
If you're interested in more details, I suggest you just take it from the top. And ready yourself to spend some time reading.
I felt like doing this paper because becoming a video game artist isn’t all that easy, and information unfortunately isn’t all that widely available. I learned a lot from the materials other people were so kind to provide, and being incredibly thankful, I see no other way to repay them but to spread the knowledge.
Information is too valuable a resource, and it’s pretty much the only thing, besides confidence, that separates a newbie from a professional.
As time went by I’ve managed to accumulate some amount of information which I’ve never seen presented in one place and somewhat carefully systematized. It’s just stuff that I’ve picked up all over the internet (especially from you guys) and from my personal experience, crammed together. This paper is aimed at people who are almost totally unfamiliar with the technical aspects of a video game artist’s work, and will hopefully help them paint a much clearer picture once they are done reading. I also tried to be pretty thorough, because I want to translate this paper into Russian for all those artists who can’t get useful knowledge due to not knowing English. I won’t be able to link them to all the great articles and forum threads, which means it had to be pretty much all in one place. So please pardon me if this won’t become a discovery for you. Even though this goes out to those who need knowledge most, I hope that even some hardcore veterans might find something small but useful in here, or just have a place to link somebody to when they encounter some familiar questions.
Aaand, I have one last thing to ask of you. Providing people with knowledge is a very responsible task. I don’t pretend to know everything, and least of all do I want to provide somebody with false information. This paper is just how I came to see things. But things vary so much and change so fast that I am worried. So if you encounter something in this text you feel doubtful about and can clearly explain why, please don’t hesitate to point it out. I didn’t write this for the sake of writing it, but for the sake of people using it, so feedback would be really appreciated. And now on to the paper:
[size=+2]Video Game Artist's Hygiene. Theory and Practice.[/size]
A video game artist’s work is all about efficiently producing incredible-looking assets. In this article I won’t speak about what makes an asset look good, but only about what makes its production efficient.
But before we proceed to speak about efficiency, it is crucial to clearly define what the term “efficient” actually means in relation to video game asset production:
1. Your practice, whether you realize it or not, is all about learning to spend fewer of your resources to do more work. And of the resources spent to get things done, the most significant and valuable one is time. Not even just in your work, but in your whole life, and I suggest you don’t forget that. This, without any argument I hope, leads to the conclusion that saving asset production time is efficient.
2. On the other hand, video game graphics, being displayed in real time, have to face certain restrictions. With each project each team tries to surpass the others and hopefully themselves, but the amount of memory and the processing power consoles can offer doesn’t get any bigger until the next generation hits the shelves. So in the name of making better, prettier games we need to make our assets engine-friendly. In other words, we could say that saving asset render time is efficient.
From here this article splits into two parts. The first one is about things you need to know to produce an efficient asset. The second one is about things you need to do to make sure the asset you’ve produced is efficient. Let’s get rolling, people=)
[size=+2]Things you'd want to know[/size]
[size=+1]Brain Cells Do Not Recover[/size]
Even though most of the things to follow are about making your models more engine-friendly, please don’t forget that this stuff is just an insight into how things work, something you should use as a guideline to know which side to approach your work from. None of this should be time-consuming at all. It has to come naturally while you’re working.
Saving your or your teammates’ time is a much more important guideline. I would say that if an optimization would cost an additional hour of work for you or anyone further down the production pipeline, then you probably don’t want it (provided you’ve been building your assets with the following information in mind from the start). An extra hundred tris won’t make the fps drop through the floor. If you aren’t going to save another draw call on an object that is going to be heavily instanced or on screen all the time, then it’s not worth it. Feeling work go smoothly makes for a happier team that is able to produce a lot more stuff a lot faster. Don’t turn work into a struggle for anyone, including yourself. Always take the needs of the people working with your stuff into account.
There’s surely an endless amount of stuff you could do to save your or your colleagues’ time, so I won’t even try to cover it all, but I’d still like to throw in some examples (which I may expand upon until it becomes worthy of some personal space):
- Check everything twice when you’re done.
- Use your UV for selection while making LoD models, if your model has a lot of repetitive parts.
- Your texture artists will thank you if you put UV seams someplace hard to spot. Plus, please take some time to straighten your UV lines and keep them mostly perpendicular or parallel, to better “utilize the square nature of the pixels”.
This adds incredibly to workability, while the distortion usually goes completely unnoticed. And don’t forget some padding, especially if your piece will be seen from a distance a lot.
- Level designers will think you’re the best guy to work with if you keep to the grid (if your game uses one) and always place the pivot point in the most convenient spot, which you can ask them about.
[size=+1]In the beginning there was Vertex.[/size]
Remember geometry for a second. When we have a dot, well, we have a dot. A dot, or a point, is zero-dimensional by itself. But take two points and we can define a line, and a line is the building block of flat, 2-dimensional shapes. If you take a closer look, you can see that a line is simply an endless number of points put alongside each other according to a certain rule, or a linear function as you would’ve said in high school. Now let’s move a level up again. In 3-dimensional space we can operate with both points (or vertices) and lines. But if we add one more point to the previous two that defined a line, we can define a face. And a face is the building block that forms 3-dimensional shapes, the things we are able to look at from different angles.
I think we are all used to receiving a triangle count as the main guideline for creating an asset or a character, and I think the fact of it being the building block of 3-dimensional shapes has something to do with it. )
But that’s the human way of thinking. We humans also have digits from 0 to 9, but hardware processors don’t. It’s just 0 and 1 - binary - the most basic number representation system. In order for a processor to execute anything at all, you have to break it into the smallest and simplest operations it can solve consecutively. And I am terribly sorry for dragging you through these little technical and geometrical issues, but it was necessary to make you see that processors work the same way in order to display 3D graphics - from the basics. Even though a triangle is the building block of 3D shapes, it is still composed of 3 lines, which in their turn are defined by 3 vertices. So basically, it’s not the tris that you have to save, but the vertices. Someone by now would say “Who cares? The lower the tri count, the fewer vertices there are!” And he’d be absolutely right. But unfortunately, the number of tris is not the only thing affecting your vert count. There’s also some subtler stuff going on.
And I’m sorry we have to do this again, but here comes the programmer stuff. Keep it together, people=)
A 3D model stored in memory actually presents itself as a number of objects based on a vertex structure. Structures, speaking object-oriented programming language (figuratively), are predefined groups of different types of data and functions composed together to represent a single entity. There can be tons of such entities which all share the same variable types and functions, just with different values stored in them. Such entities are called objects. That may be a lousy explanation, and there’s much more depth to it programming-wise, but here’s a simplified example of what a vertex structure might look like:
If you think about it, it’s really obvious that a vertex structure should only contain necessary data. Anything redundant becomes a great waste of memory once your scenes hit the couple-million-vertex mark.
There is a great number of attributes that a vertex structure encapsulates, but I’ll speak only about the artist-related ones (since I don’t know anything about the others). As far as I know, a vertex structure has enough variables declared for only one set of each attribute. What does that mean for us artists? It means that a vertex can’t have 2 sets of UV coordinates, or 2 normals, or 2 material IDs. Now that’s odd, ‘cause we’ve all seen how smoothing groups (soft/hard edges) work, or applied multiple materials to objects, and everything was fine. Or at least it looked fine. How is that possible? Well, it appears that the most affordable way to add that extra attribute to a vertex is to simply create another vertex right alongside it!
Speaking a bit less technically: every time you set another smoothing group for a selection of polys or make a hard edge in Maya, the number of border vertices doubles, invisibly to you. The same goes for every UV seam you create, and for every additional material you apply to your model. If you want to create engine-friendly pieces, you just have to take this into account. The guys at bigger companies sure as hell know this: Epic’s Unreal Development Kit automatically compares the number of imported and generated vertices on asset import and warns you if the numbers differ by more than 25 percent. Those are pretty tight shoes to fill, but no one said producing efficient art was easy. If Epic’s programmers consider it an issue serious enough to be checked on import, I suggest you think about it too.
[size=+1]Connecting the dots[/size]
This small chapter concerns the stuff that keeps those vertices together - the edges. The way they form triangles is important for an artist who wants to produce efficient assets. Not only because they define shape, but because they also define how fast your triangles are rendered, in a pretty nontrivial way.
How would you render a pixel that sits right on an edge shared by 2 triangles? You would render the pixel twice, once for each triangle, and then blend the results. That leads us to a pretty interesting conclusion: the tighter the edge density, the more re-rendered pixels you get, and that means longer render time. This issue should hardly affect the way you model, but knowing about it can come in handy in some specific cases.
Triangulation would be a perfect example of such a case. It’s a pretty well-known issue that thin tris aren’t all that good to render, and that holds while talking about modeling. But talking about triangulation, saying you’ve made one triangle thinner means that, with the exact same action, you’ve made another one wider. Imagine zooming out from a uniformly triangulated model: the smaller the object becomes on screen, the tighter the edge density and the bigger the chance of re-rendering the same pixels.
But if you forgo uniform triangulation and instead make every triangle have the largest possible area (thus making it cover more pixels), you end up with triangles of consecutively decreasing area. Then, once you zoom out again, the areas of higher edge density are limited to a much smaller number of on-screen pixels, and the smaller the object becomes on screen, the fewer potentially redrawn pixels it has. You could also work this the other way around and start by making the triangle edges as short as possible. Either way makes for a more efficient asset and saves you some render time otherwise spent on multiple passes over the same pixel.
In case you find yourself wondering whether to add some neat chamfers, make it all one smoothing group, but inevitably increase the small-triangle count, or to just leave it a bit rough, I suggest you go with whatever makes your model look better first of all. That guideline overrules all the others.[size=+1]
Eating in portions makes for a fuller stomach.[/size]
Exactly the way your engine draws your object triangle by triangle, it draws the whole scene object by object. In order for your object to be rendered, a draw call must be sent. Since hardware is created by humans, it’s pretty bureaucratic.) You can’t just go ahead and render everything you want - first you’ve got to have some preparation done. The CPU (central processing unit) and GPU (graphics processing unit) share the duties somewhat like this: while the GPU goes ahead and just renders stuff, the CPU gathers information and prepares the next batches to be sent to the GPU. What’s important for us here is that if the CPU is unable to supply the GPU with the next batch by the time it’s finished with the current one, the GPU has nothing to do. From this we can conclude that rendering an object with a small number of tris isn’t all that efficient: you’ll spend more time preparing for the render than on the render itself and waste the precious milliseconds your graphics card could be crunching some sweet stuff.
A frame from NVidia's 2005(?) GDC presentation
The number of tris a GPU can render until the next batch is ready to be submitted varies significantly, but here are some examples. I’ve seen it somewhere on the UDN that for Unreal Engine 3 the number is between 1,000 and 2,000 triangles. While working with the BigWorld engine we set the bar at 800, even though some of the programmers said it could be around 1,000.
Defining such a number for your project would be an incredible help for your art team. It would save a real ton of both production and render time. Plus it’ll serve as a perfect guideline for artists to solve some tricky situations completely on their own.
You’d want a less detailed model only when there’s really no point in making it more complex and you’d have to spend extra time on things no one would ever notice. And that luckily works the other way around: you wouldn’t want to make your model any lower-poly than this number unless you have specific reasons. Plus, the reserve of tris you have could be well spent on shaving off those invisible vertices mentioned earlier. Add some chamfered edges and fearlessly assign one smoothing group to the whole object (make all the edges soft). It may sound weird, but by having a smoother, more tessellated model you can actually help performance.
If you’d like your game to be more efficient, try to avoid making very low-poly objects single independent assets. If you are making a tavern scene, you really don’t want every fork, knife and dish hand-placed in the game editor. You’d rather combine them into sets, or even combine them with a table. Yeah, you’d have less variety, but believe me, when done right, no one will even notice.
But this in no case means that you should run around applying TurboSmooth to everything. There are some things to watch out for, like stencil shadows, instancing and even vertex lighting, to name a few. Plus some engines combine multiple objects into a single draw call, so watch out. But I’ll speak about that at the very end of the “Things You Need To Know” part of this paper.[size=+1]
Vertex VS Pixel[/size]
If I asked you, as an artist, what the main difference is between art production for the last and the current generation of consoles, what would you say?
I’m pretty damn sure that the most common answer would be the introduction of per-texel shading, and the use of multiple textures to simulate different physical qualities of a single surface becoming the de facto standard. Yeah, sure, polycounts have grown, animation rigs now have more bones and procedurally generated physical movement is everywhere. But normal and spec maps are the ones contributing the biggest visual difference. And this difference comes at a price. Nowadays I hear the terms “fill rate driven engine”, “fill rate bound engine” and “fill rate oriented engine” thrown around more and more. All those terms didn’t come out of nowhere, and the reason behind their appearance is that in modern-day engines most of an object’s render time is spent processing and applying all those maps based on the incoming lights and the camera’s position.
Complex/Simple Shaders in UDK
From the viewpoint of an artist who strives to produce efficient art, this means the following:
Optimizing your materials is much more fruitful than optimizing vertex counts. Adding an extra 10, 20 or even 500 tris isn’t nearly as stressful for performance as applying another material to an object. Shaving hundreds of tris off your model will hardly ever bring a bigger bang than deciding that your object can do without an opacity map, or a glow map, or bump offset, or even a specular map. Kevin Johnstone of Epic Games once said that while working on Unreal Tournament 3 he optimized a single level by 2-3 million triangles just to gain somewhere around 2-3 fps. I think this example makes it obvious: it’s not the tri count that affects performance the most. I’d say it’s the number of draw calls and the shader and lighting complexity that count. Then there are vertex transformation costs, when you have some really complex rigs or a lot of physically controlled objects. And post-processing.
Surely, as an artist, you have a lot more control over your materials than over the lighting, but there are still some things you can do, depending on your engine.
If you know you’re going to have dynamic lighting in the scene, then you don’t want a huge object only a small part of which will be lit dynamically at any single moment. Break it into smaller pieces. For example, if you are doing a haunted hotel scene where the player has to navigate dark hallways, lighting his way with a flashlight, you’d rather have every chandelier on the wall be a separate object, even if it’s like 30-50 tris. It may seem logical, in order to optimize things, to go ahead and attach all the chandeliers into a single object, since they are pretty low-poly and share the same material, but all the profit that comes of it wouldn’t compare with the stress caused by processing dynamic lighting every frame for an object so widely dispersed across the level.
Even though I am speaking from my Unreal Engine 3 experience, I believe those guys know what they are doing, and their knowledge can be taken into account: if your engine gives you a choice between vertex lighting and lightmapping, you’d want to go with the latter.
First of all, because in the case of vertex lighting you need to store lighting data in memory for every single vertex you have, and that kind of makes you wish you had fewer vertices. But since we’ve figured out that we have them for free until the next batch is ready, we’d rather put them to good use.
You could use a 128, a 64 or in some cases even a 32 by 32 lightmap that would still look smoother than vertex lighting but eat up a lot less memory.
Plus, since a lightmap is pretty much a usual bitmap, you can weave it into your texture streaming pipeline and not affect the overall texture memory budget. I can hardly think of a way to make vertex lighting almost free memory-wise, so lightmaps for the win.
If you want to make your asset a bit more engine-friendly and your engine supports lightmapping, then I suggest you don’t hesitate to make a second UV set for the lightmaps.[size=+1]
The most important thing[/size]
After all the things said, there’s still one most important thing that you need to know, and that is that things differ. Sometimes dramatically. As with everything in life, there’s no universal recipe, and the best thing you can do is figure out what your specific case looks like. Get all the information you can from the people responsible. No one knows your engine better than the programmers. They know a lot of stuff that could be useful for artists, but sometimes, due to a lack of dialogue, this information remains with them. Miscommunication may lead to problems that could’ve been easily avoided, or be the reason you’ve done a shit ton of unnecessary work or wasted a truckload of time that could’ve been spent much more wisely. Speak - you’re all making one game after all, and your success depends on how well you’re able to cooperate. Communication with programmers could actually be the job of your lead or a tech artist, so you could just ask them instead. Asking has never hurt anyone, and it’s actually the best way to get an answer.
Dalai Lama once said:
“Learn your rules diligently, so you would know where to break them.”
And I can do nothing but agree with him. Obeying rules all the time is the best way to never do anything original. All rules and restrictions have some solid arguments to back them up and fit some general conditions. But conditions vary. If you take a closer look, every other asset could be an exception to some extent. Ideally, having faced some tricky situation, artists should be able to make decisions on their own, sometimes even break the rules, if they know the project will benefit from it and that breaking the rules won’t hurt anything. But if you don’t know the facts behind the rules, I doubt you’ll ever go breaking them. So I seriously encourage you to take an interest in your work. There’s more to video game production than art.
If you’re doing freelance work and you feel like your model would really benefit from those extra tris, then I’d say just ask, for it is in the best interest of the people you’re working for. Plus, if for some reason they are unaware of all the stuff listed above, there’s a big chance that you’ve helped them a lot - and that’s some respect points for you.)
You can grab the other part, about "things you'd want to do", right here.