Hi all,
I am currently doing some research into workflows that are commonly used in the gaming industry to create next-gen graphics for games such as Gears of War, Assassin's Creed, CoD 4, Halo 3, Fallout 3, GTA 4, etc.
I have a bit of a questionnaire coming up, so the more replies I get the better. Even if someone else gives the same or a similar response, every response counts!
The following questions could be about characters, environment props, anything really that you feel you have a solid workflow for; the more examples I get the better! OK then, here we go...
So once you have a concept, what workflow would you use to create a highly detailed normal map for a lower-poly asset?
What software would you use, and how would it be implemented in your workflow? For example, would you model in Maya, sculpt in ZBrush, then use projection mapping in Max? Or would you model in Max, sculpt in Mudbox, then bake in xNormal?
Why would you choose a certain piece of software over another? Is it personal preference, or is there something in one package that just isn't as good in another? For example, you may prefer Mudbox 2009 as opposed to ZBrush because the interface is more intuitive, or it may simply be that it's cheaper to purchase.
Do you use apps to assist you, such as xNormal, the NVIDIA Photoshop filter, UVLayout, etc.? How do they fit into your workflow?
Do you think normal maps help when creating the colour map? Which one do you think shows the most detail when applied to an in game model, the normal map or the colour map?
If you could change anything about your workflow to make it more efficient, what would it be and why?
Finally, do you currently work, or have you previously worked, in the gaming industry? If so, who for?
The more detail you can give about the methods, apps, software or techniques you use, the better. If you could write an essay on this, please write an essay!
Thanks so much in advance!:thumbup:
Replies
1. Blockout mesh to test in-game, to make sure nothing unexpected will arise from the asset - this can be very quick, as long as the proportions are correct. You just need to test things like player collision (e.g. if the player has to walk through / around / inside the asset). This can be vital - otherwise you can waste a lot of time making highpoly stuff that you then have to heavily edit later.
2. Make highpoly meshes - initially all sub-d for most things, take anything into ZBrush that needs an extra detail or organics pass. Work smart, re-use stuff, use lots of modular pieces and shortcuts like deformers, instances, snapshots along paths etc.
At work I'd always model in Maya and sculpt in ZBrush. At home I use Max and ZBrush, doesn't really matter. Don't really use any other apps for geometry creation.
3. Build low-poly around the highpoly mesh (can re-use either lowest sub-d level and optimise, or use the blockout mesh if you made one and add detail where necessary). All in Maya or Max.
4. Bake in the app it was modelled in usually, or if I'm at work I'll bake using in-house tools to make sure everything is perfect for the game engine.
5. At work I use Maya because our pipeline is based around it - all the exporters and tools are designed for Maya, so anything else can be a pain to import/export between different apps. I prefer to keep the workflow as straightforward as possible - the less importing and exporting, the better. You can lose useful data or waste time if you have to move stuff between lots of different apps.
I use ZBrush because it's all I ever use, really, and it gets the job done. I trialled Mudbox for a while and found no compelling reason to switch. The UI is a little more friendly but I just didn't really like the "feel" of the tools, no good reason really.
6. I occasionally use xNormal at home since I think it does faster/cleaner bakes than Max. I use the NVIDIA Photoshop filter at home and at work because it's simple and fast for generating high-frequency detail from a heightmap. I use CrazyBump too for getting a fast "pits and peaks" map from a normal-map bake - very handy for masking stuff out in Photoshop during texturing, and generally giving textures a nice "pop" without doing lots of extra manual work. I'm lazy.
Basically these apps all fit into my texturing workflow because all the output comes into Photoshop and stays there.
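To show what those heightmap-to-normal filters are doing under the hood, here's a minimal Python/NumPy sketch (my own illustration of the general technique, not the NVIDIA filter's actual code): the height gradients become the X/Y components of the per-pixel normal.

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a 2D heightmap (H x W, values 0..1) into a tangent-space
    normal map (H x W x 3, values 0..1). Same idea as the NVIDIA
    Photoshop filter, using gradients instead of its Sobel kernels."""
    # Per-pixel slopes; np.gradient returns (d/d_rows, d/d_cols) = (dy, dx).
    dy, dx = np.gradient(height.astype(np.float64))
    # Tilt the normal against the slope; strength scales the bumpiness.
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(height, dtype=np.float64)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    normal = np.stack([nx / length, ny / length, nz / length], axis=-1)
    # Remap from [-1, 1] into the [0, 1] range a texture expects.
    return normal * 0.5 + 0.5

# A flat heightmap bakes to the "neutral" blue (0.5, 0.5, 1.0).
print(height_to_normal(np.zeros((4, 4)))[0, 0])
```

This is why a flat area of the heightmap comes out as that characteristic flat-blue colour in the normal map.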
7. Normal maps definitely help when creating the colour map. As mentioned, I use CrazyBump to get a quick map to help mask stuff out. I'm not sure what you mean by "which one shows the most detail" - all texture maps should contain the same amount of detail, really, otherwise they won't match up. A poor diffuse or specular map can make a great normal map look rubbish, and likewise a rubbish normal map can make a great diffuse map look terrible. You don't mention specular maps - in many cases these can be even more vital than the diffuse map, since they really help define the surface properties and make materials feel more realistic.
In any case, all maps are important and I try to make sure that every map gets as much love as the others - of course, the most vital thing is the end result in game.
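As an aside on that "pits and peaks" map: the usual trick is to look at where the normals converge or diverge. A rough NumPy sketch of the idea (my own approximation for illustration, not CrazyBump's actual algorithm):

```python
import numpy as np

def pits_and_peaks(normal_map):
    """Derive a rough "pits and peaks" mask from a tangent-space normal
    map (H x W x 3, values 0..1). Where neighbouring normals converge
    the surface is concave (a pit); where they diverge it is convex
    (a peak)."""
    # Unpack the [0, 1] texture back into [-1, 1] vectors.
    n = normal_map * 2.0 - 1.0
    # Divergence of the (x, y) components: positive = peak, negative = pit.
    _, dnx_dx = np.gradient(n[..., 0])
    dny_dy, _ = np.gradient(n[..., 1])
    cavity = dnx_dx + dny_dy
    # Remap to [0, 1] with 0.5 as "flat", ready to mask layers in Photoshop.
    return np.clip(cavity * 0.5 + 0.5, 0.0, 1.0)
```

Drop the result into Photoshop as a layer mask and you can confine dirt to the pits and wear to the peaks with almost no hand-painting.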
8. If I could change anything about my workflow, it'd probably be to keep everything in a single app. I'd love for CrazyBump to be a Photoshop plugin, and similarly I'd love to have Max or Maya with powerful integrated sculpting tools. Probably not going to happen any time soon, though.
9. Yes, I currently work for Splash Damage, and previously I have worked for Visual Science and Liquid Development.
Hope this is of some help.
I made a thread similar to this one - do you think you could merge my message into this one?
Does it make a difference in terms of the quality of the model, or is that just personal preference?
If you build the "final" lowpoly first then you should have a really good idea of how you want the final piece to look because otherwise you might waste time having to re-do a fair bit of the geometry or UV-maps if it doesn't work ideally with the highpoly you build afterwards.
As I said though, a lot of the time it's good to start with a "blockout" model to get a solid idea of size/shape and how the player will interact with it, and once the blockout is done then it can form the basis of both your highpoly and, later, the final lowpoly mesh. It can be the starting point for both.
I know none of you could care less, but I am going to try to make a Spartan shield in a little while when I get my new computer. It will be the first normal-mapped asset I have ever created, and I'm kinda excited - I can't wait to use some of the techniques you mentioned, Mop.
When you have finished creating the sculpt in Zbrush how would you then get a normal map from it? Would you bake in Zbrush or using in-house tools?
Do you ever use Polycruncher or anything like that to optimise the mesh if it's really high-poly?
I know that Epic use (or used) this workflow for a lot of their assets on Gears and Unreal Tournament - poly crunch down a zbrushed piece of geometry so it can be brought into max and managed in the final scene.
I have tried baking normals in ZBrush, and while it does give good results, and is actually pretty fast compared to some other apps, I find the process a bit fiddly and imprecise compared to apps like Max, Maya or xNormal, where you can control the ray-casting cage (it's possible that you can do this in ZBrush too, but I've never found the option - the UI is far from user-friendly!).
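For anyone unclear on what the cage actually does, here's a toy Python/NumPy sketch of a single bake sample (simplified to one highpoly triangle - a real baker does this per texel against the whole highpoly mesh):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2):
    """Moller-Trumbore ray/triangle test; returns the hit distance t,
    or None on a miss. A baker runs this for every texel of the map."""
    eps = 1e-8
    edge1, edge2 = v1 - v0, v2 - v0
    h = np.cross(direction, edge2)
    a = np.dot(edge1, h)
    if abs(a) < eps:
        return None                      # ray parallel to the triangle
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, edge1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(edge2, q)
    return t if t > eps else None

def bake_sample(low_point, low_normal, cage_offset, hi_tri):
    """Push the lowpoly sample point out to the cage, then cast back
    inward to find the highpoly surface. A bigger cage_offset catches
    detail sitting further above the lowpoly - that's what the
    adjustable cage in Max / Maya / xNormal controls."""
    cage_point = low_point + low_normal * cage_offset
    t = ray_triangle(cage_point, -low_normal, *hi_tri)
    return None if t is None else cage_point - low_normal * t
```

With a generous cage offset the ray finds highpoly detail sitting above the lowpoly surface; shrink the offset too far and that same detail gets missed entirely - which is exactly the control you lose when an app won't let you edit the cage.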
At work I would bake using the in-house tools, they natively support large OBJs from ZBrush so we don't have to do any conversion on them, just load and bake. You could do the same with a tool like XNormal though, and I imagine this is how most people and studios will do it (bake inside their "main" app - Max, Maya, XSI etc).
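If you do go the xNormal route, it can also be driven without the UI - as far as I remember it will load a saved settings .xml passed on the command line and run that bake, which makes batching a pile of re-bakes easy. A quick Python sketch (the paths here are made up, so check them against your own install):

```python
import subprocess
from pathlib import Path

# Hypothetical path - point this at your own xNormal install.
XNORMAL_EXE = r"C:\Program Files\xNormal\x64\xNormal.exe"

def bake_commands(settings_dir, exe=XNORMAL_EXE):
    """One command line per saved xNormal settings file, in a stable
    alphabetical order, ready to hand to subprocess."""
    return [[exe, str(xml)]
            for xml in sorted(Path(settings_dir).glob("*.xml"))]

def batch_bake(settings_dir):
    """Run every bake in the folder one after the other, unattended."""
    for cmd in bake_commands(settings_dir):
        subprocess.run(cmd, check=True)

# Usage: save one settings .xml per asset from the xNormal UI, then
# batch_bake(r"D:\project\bakes") and go make a cup of tea.
```

Saving one settings file per asset also doubles as documentation of exactly how each map was baked, which helps with the re-bake problem mentioned above.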
Generally I think you want to try and keep a scene as organised as possible, so that if something needs to be re-baked, you don't have to jump through hoops re-exporting stuff, optimising high meshes, and making sure it's all set up the same way - for this reason I keep everything together inside a Maya scene at work, with groups set up to quickly re-export the highpoly or lowpoly meshes if anything changes, so a re-bake or tweak can be done fast.
But on the other hand, maybe it makes sense if it really doesn't make a difference to the normal map - baking the decimated geometry should be faster, so it's worth trying to see if it speeds up the bake a lot without a quality drop.
Is there a way in Max to bake 2 different pieces at the same time?
If you have a bunch of pieces to render out, I also suggest setting up a local backburner server on your own machine (Launch server, monitor and manager) and set RTT to network render. That way as soon as all the jobs are sent you can continue working in 3dsmax while it renders in the background.
If you use Mental Ray it likes to hog your system resources so you might want to set up the server on a separate machine, or use scanline for rendering normal maps. I personally don't use MR/AO much and use the skylight/lightdome and scanline method to bake out AO. A little more control and a little less system hogging.
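The skylight/light-dome method is really just hemisphere sampling: fire a bunch of rays up from each surface point and see what fraction escape to the sky. A toy Python/NumPy sketch of the idea (spheres stand in for real occluder geometry here, purely for illustration):

```python
import numpy as np

def ray_hits_sphere(origin, direction, centre, radius):
    """True if the ray hits the sphere (quadratic discriminant test)."""
    oc = origin - centre
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    root = np.sqrt(disc)
    return (-b - root) > 1e-6 or (-b + root) > 1e-6

def bake_ao(point, normal, occluders, samples=256, seed=0):
    """Fraction of skylight reaching a point: cast random rays over the
    hemisphere above the surface and count how many escape - in spirit,
    what a skylight/light-dome scanline bake computes per texel."""
    rng = np.random.default_rng(seed)
    unblocked = 0
    for _ in range(samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0.0:      # keep rays in the upper hemisphere
            d = -d
        if not any(ray_hits_sphere(point, d, c, r) for c, r in occluders):
            unblocked += 1
    return unblocked / samples
```

A point with open sky above it comes out white (1.0), a fully enclosed point comes out black (0.0), and everything in between gives you those soft crevice shadows.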
Hi all, I need to know all about workflow too. I come from an arch-vis / motion graphics background, and we do things differently to you gamer types. Max is my software. Your advice, Mop, is most enlightening.
I have a question, though, if anyone can help. It's about sub-d modelling. I think I know what it is, and there is a modifier in Max, but I don't think I should be using that, not on geometric objects like a building. I'm modelling a church; I started out spline modelling it, fleshing out the polys and working the meshes. But the Subdivide modifier gives me triangles, and ideally I want uniform, perpendicular polys. Is there a plugin or something I should use, or am I doing things wrong? I've seen images of models that other guys have done, and they have a nice-looking sub-d mesh. How do I get to this, working in Max? I haven't used ZBrush yet but will be looking at it soon. Is that the missing link?
Sorry guys, I'd upload an image but my FTP isn't working right now. (Sat on a balcony in Spain and the internet is crap.)
Appreciate all replies..:poly121:
What I have been doing is to model everything using Edit Poly (often on top of renderable splines) and then slap a TurboSmooth modifier on top of everything. Then I add whatever edges my mesh needs under the TurboSmooth to get the shape right. If you keep your base mesh all quads, it *should* subdivide into quads. If you are seeing triangles after adding a TurboSmooth to a model that's made of quads, you might want to check your video driver settings - with Maxtreme 8 it tends to default to showing all triangle strips for some reason (on my machine, anyway). The triangles are always there, but most often the computer will only display actual edges and hide the triangle edges.
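For the record, the quads claim is exact: one Catmull-Clark subdivision step turns every n-sided face into n quads, so an all-quad cage can only ever produce quads. A quick sanity check in Python:

```python
def subdivided_faces(face_sides, steps=1):
    """Catmull-Clark face count: the first step splits each n-sided face
    into n quads, and every later step splits each quad into 4. So an
    all-quad cage stays all-quads - the triangles you see in the
    viewport are just the display triangulation. steps must be >= 1."""
    faces = sum(face_sides)       # after step 1: one quad per original side
    faces *= 4 ** (steps - 1)     # each later step quarters every quad
    return faces

# A 6-quad box under TurboSmooth: 24 quads at 1 iteration, 96 at 2.
print(subdivided_faces([4] * 6, steps=1))  # 24
print(subdivided_faces([4] * 6, steps=2))  # 96
```

Triangles and n-gons also come out as quads after one step, but they leave poles (vertices with an odd number of edges) that can pinch the smoothing, which is why all-quad cages are preferred.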
Just my 2 cents (0.0151 Euro :P) Hope it helps!
-N
ah, thanks. I call it texture bakin' !
So why'd you leave arch-vis, Mechadus - bored of the same old, same old, or just laid off like me due to the credit crunch? In fact, I was bored of the same old developments, but I didn't want to be laid off; maybe it's a blessing in disguise. Suppose that depends on what happens next, though (and how hard I try).
:-|