Just wondering how AAA studios making realistic games (or really any studio in general) use Megascans in their daily workflow. Playing any newer realistic game like MW2, I know a lot of Megascans textures/props are used. But how much is actually scanned vs. made by artists? Is Substance Designer just not used much for textures anymore, since you can scan textures in now? I guess there's less room to adjust things once they're scanned, so I'm thinking it's probably a mixture of both?
Replies
I'm not sure MW2 used that many Megascans at all, and definitely not "as is".
Personally, just throwing in Megascans is not a great way to build a production environment that can be art directed or recolored while keeping a clean, fast library.
You should plan what exactly you need, and especially in AAA, no environment artist will just open up Bridge and drag things in from thousands of unrelated assets.
A lead will usually hand-pick some assets, group them into a set, merge their textures into shared sets, build masks from those textures so color and wear can be changed later, put them into an asset bundle group, and adjust the textures so they all match.
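To make the "merge textures into sets and build masks" step a bit more concrete, here is a minimal sketch of the kind of batch script a lead might have run over a curated set. It channel-packs per-asset grayscale masks into one map so a single material can drive recoloring and wear. The folder layout, file names, and the Pillow dependency are assumptions for illustration, not any studio's actual pipeline.

```python
# Minimal sketch: channel-pack per-asset grayscale masks (wear, dirt, tint)
# into one RGB texture so one material can recolor/wear-tweak a whole asset set.
# Folder layout and file names are hypothetical examples.
from pathlib import Path
from PIL import Image

ASSET_DIR = Path("assets/rock_set_a")   # hypothetical set folder

def pack_masks(asset: Path) -> None:
    """Merge three grayscale masks into the R, G and B channels of one map."""
    wear = Image.open(asset / "wear_mask.png").convert("L")
    dirt = Image.open(asset / "dirt_mask.png").convert("L")
    tint = Image.open(asset / "tint_mask.png").convert("L")
    packed = Image.merge("RGB", (wear, dirt, tint))
    packed.save(asset / "masks_packed.png")

for asset in ASSET_DIR.iterdir():
    if asset.is_dir():
        pack_masks(asset)
```

In-engine, a material would then sample those channels and lerp tint colors or wear layers from them, which is what makes the set art-directable without re-authoring every texture.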
What I've learned over the last few years is that you should start with a big library to play with, figure out what you really need, and then build your real library from just the few essential, most flexible assets.
From your 150-rock gray mountain library, you realistically need maybe 10 for any gray rock environment you'll ever make. You don't need 40 Megascans branches for the ground, you need maybe 6. Much of the art lead's work is optimizing and creating these workflows and libraries.
This is an always-evolving process and depends on the studio. For example:
- Maybe you go make your own scans based on your best-performing key shapes and sizes.
- Maybe you sculpt highpolies based on exactly what you want and overlay scanned textures on the non-edge areas.
- Maybe you take your 8 Megascans rocks, batch and mask them, and reuse them in 3 different environments with different recolored scan textures.
- Maybe you take Megascans meshes, rebake them, batch them all into one texture and retexture them (see the sketch after this list).
- Maybe you just drop in scanned assets and accept having to manually tweak 200 diffuses if you ever want to change all the colors, or use a cheap color multiply and pay for it later in optimization and project messiness, once you've already convinced people to buy.
- Maybe you take scans as reference for your own highpolies.
- Maybe you throw scanned rocks into a voxelizer and remake new rocks from scratch traditionally, but with a more realistic base shape.
- Maybe you use trim sheets for everything and don't bake or require anything new at all.
- Maybe you don't care about scans at all because your artists are brutally good at highpolies and only use scans for reference, if at all.
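As a concrete illustration of the "rebake and batch them all into one texture" option above, here is a minimal sketch of a grid-atlas step using Pillow. The tile size, grid width, and folder layout are made-up assumptions; a real pipeline would also remap UVs, add padding, and handle the other texture channels.

```python
# Minimal sketch: paste the albedo maps of a small rock set into one grid atlas
# so the whole set can share a single texture. Names and sizes are hypothetical.
from pathlib import Path
from PIL import Image

TILE = 1024          # per-asset resolution in the atlas (assumption)
COLS = 4             # grid width (assumption)
albedos = sorted(Path("assets/rock_set_a").glob("*/albedo.png"))  # hypothetical layout

rows = (len(albedos) + COLS - 1) // COLS
atlas = Image.new("RGB", (COLS * TILE, rows * TILE))

for i, path in enumerate(albedos):
    tile = Image.open(path).convert("RGB").resize((TILE, TILE))
    atlas.paste(tile, ((i % COLS) * TILE, (i // COLS) * TILE))

atlas.save("rock_set_a_albedo_atlas.png")
```

Run the same step per channel (albedo, normal, packed masks) and the set can share one material and one texture set.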
There are many ways to do things. It depends on the studio's needs in certain areas, budgets, the desired flexibility and art direction, and the planned longevity of these assets, and everyone does it differently. Any of these approaches are valid and can lead to great results if executed well.
We hardly use them at all. They repeat like crazy, and weeding that out is too hard or asks too much extra work. It's easier to just shoot your own photogrammetry series of exactly the coverage you need, or use Substance Designer.