Just wanted to give you guys here a heads up; I didn't want to flood this more "technical talk" thread with a bunch of images, but I did JUST that on the ZBrush Central forums; if you're looking for more technique-y sculpt-y stuff (or just more renders of things) check out the ZBrush Central forum post:
http://www.zbrushcentral.com/showthread.php?192625-GDC-2015-Next-Gen-Pipelines-(starring-ZBrush!)
Hi Pavlovich, thanks for pointing me to this thread. How do you guys currently handle UVs? That can't be automated, can it? Right now that's the biggest timesink in my workflow, and if I can figure out how to speed that up I should be golden. Thanks for all the info in here. It's much appreciated!
How do you guys currently handle UVs? That can't be automated, can it? Right now that's the biggest timesink in my workflow and if I can figure out how to speed that up I should be golden.
Sorry I missed this yesterday! Yeah, UVs are a tough one. ZBrush's UV Master is the closest thing I've found for "surprisingly ok" UVs, especially in organic situations; they have a pretty sweet algorithm for determining seams and pelting without overlaps. However, it's not really command line friendly, and it does require artist input (not a huge deal, but we're trying to limit that as much as possible).
Anyway, a smart auto UV solution is definitely a huge part of that holy grail (of this year anyway)...there may be some interesting solutions involving non destructive subd boolean topology and making good UV decisions based on that kind of topology, mixed with UVs based on angle thresholds...and maybe integrating the UV Master algorithm and/or attraction based on AO...
Somebody or some group way smarter than me will figure it out. Right now we're inching forward on the iteration part of better UV decisions made by the robots, but it's not there yet: right now UVs are automated, but they're not ship quality--we can iterate on the models and materials, but eventually during production someone will have to go in there and make the "real" UVs. Have to start somewhere I guess!!
Yeah, like Mike said. Right now we use the default auto UV stuff, which has a ton of shells. It doesn't really matter to some extent, since the textures are baked and auto generated from tiling materials, but they're not ship ready, because we lose a lot of resolution in the little shells and get more seams than we would prefer.
UV Master has the best algorithm right now for a one button solution; most software has good unfold algorithms at this point, the trick is selecting the seams. There are a couple of good white papers on possible solutions; I'm thinking of implementing one or both of them this year if I don't hear back from the software folks.
Headus, Modo and Maya all have really good pelting algorithms, so you can actually do the manual process pretty quickly now. Because we are projecting our textures, those UVs are actually pretty good; the less distortion the better. The only issues we've had are with rotated UVs and tiling patterns, but the newest Substance takes care of that with the new 3-axis projection stuff.
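As an aside, the "3-axis projection" mentioned above is essentially tri-planar projection: sample the tiling texture along world X, Y and Z and blend by the surface normal, so UV rotation stops mattering. Here's a rough NumPy sketch of the general idea (not Substance's implementation; it assumes you have world-space position and normal maps baked into UV space to sample from):

```python
import numpy as np

def triplanar_sample(position_map, normal_map, tile, tile_scale=1.0):
    """Blend a tiling texture projected along world X/Y/Z, weighted by the
    surface normal, so the result doesn't depend on UV rotation at all.

    position_map: HxWx3 float world positions baked into UV space
    normal_map:   HxWx3 float world normals baked into UV space
    tile:         hxwx3 uint8 tiling texture
    """
    pos = position_map * tile_scale
    weights = np.abs(normal_map.astype(np.float32))
    weights /= weights.sum(axis=-1, keepdims=True) + 1e-8

    h, w = tile.shape[:2]

    def sample(u, v):
        # Wrap the planar coordinates into the tiling texture.
        iu = (np.mod(u, 1.0) * (w - 1)).astype(int)
        iv = (np.mod(v, 1.0) * (h - 1)).astype(int)
        return tile[iv, iu].astype(np.float32)

    x_proj = sample(pos[..., 1], pos[..., 2])  # projected along world X
    y_proj = sample(pos[..., 0], pos[..., 2])  # projected along world Y
    z_proj = sample(pos[..., 0], pos[..., 1])  # projected along world Z

    blended = (x_proj * weights[..., 0:1] +
               y_proj * weights[..., 1:2] +
               z_proj * weights[..., 2:3])
    return blended.astype(np.uint8)
```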
Holy crap. My head's spinning and all. Thank you thank you thank you.
We're not in game content - CG cinematics - but I'm sure a LOT of this still applies, particularly the texture stuff. It's like an earthquake. Amazing.
Select:
+boundary
+Sharp edges
Unwrap (Cylindrical)
Orient pieces
That's best done piece by piece, though it would be easy to automate (obviously not perfect; it still requires some manual selection and straightening). Then you could send that to IPackThat, once it is released, and you'd be done.
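For what it's worth, here's a minimal Blender Python sketch of that select-boundary-and-sharp-edges-then-unwrap idea. It's not anyone's production tool; the 60 degree sharpness threshold and the angle-based unwrap are just assumptions, and bpy.ops.uv.cylinder_project() would be the cylindrical variant:

```python
import bpy
import math

# Assumes the active object is the mesh to unwrap.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='EDGE')

# Pass 1: mark open boundary edges as seams.
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.mesh.select_non_manifold(use_boundary=True, use_wire=False,
                                 use_multi_face=False,
                                 use_non_contiguous=False, use_verts=False)
bpy.ops.mesh.mark_seam(clear=False)

# Pass 2: mark edges sharper than the threshold as seams.
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.mesh.edges_select_sharp(sharpness=math.radians(60.0))
bpy.ops.mesh.mark_seam(clear=False)

# Unwrap everything using the seams we just marked.
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')
```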
Pretty cool to think about automating a lot of the current pipeline though. Less time spent on small / less significant props would be awesome. If anything it's more time to polish hero assets or generate more content.
Damn, this was really cool. It's really interesting to see where things are heading and what to expect in the near future. Really liked the section about blending concept and production into one, with all the sketch, refine, play-it-in-game stuff.
A quick question here: You mention that the process of assigning materials to ID colours becomes automated due to the fact that Substance reads the overall ID map and is able to assign materials depending on the amount of red/green/blue in the texture. Is this something that's built into Substance or did you guys code it in? (I'm fairly new to Substance and its magic, so forgive me if this is a silly question.)
Looking forward to the longer version of this for sure!
Time to go learn Substance properly too.
A quick question here: You mention that the process of assigning materials to ID colours becomes automated due to the fact that Substance reads the overall ID map and is able to assign materials depending on the amount of red/green/blue in the texture. Is this something that's built into Substance or did you guys code it in? (I'm fairly new to Substance and its magic, so forgive me if this is a silly question.)
The idea behind this technique is (from what I've understood) to make Substance recognize any color as a material. So red can be brushed metal, orange wax, and so on. When you're assigning your colors on the high poly you have to respect this convention so Substance can recognize what is what.
Again, just guessing, and I think Quixel Suite and Substance already have some sort of preset browser for this stuff; I just don't know if it gives you the same automation process as what Michael has shown in his video.
A quick question here: You mention that the process of assigning materials to ID colours becomes automated due to the fact that Substance reads the overall ID map and is able to assign materials depending on the amount of red/green/blue in the texture. Is this something that's built into Substance or did you guys code it in? (I'm fairly new to Substance and its magic, so forgive me if this is a silly question.)
The idea behind this technique is (from what I've understood) to make Substance recognize any color as a material. So red can be brushed metal, orange wax, and so on. When you're assigning your colors on the high poly you have to respect this convention so Substance can recognize what is what.
Again, just guessing, and I think Quixel Suite and Substance already have some sort of preset browser for this stuff; I just don't know if it gives you the same automation process as what Michael has shown in his video.
Thanks for the reply, Fansub. What I meant is the process of looking at an ID map and reading it in terms of: there's 60% red -> primary material -> assign to X material. Just interested in hearing whether it's an integrated feature of Substance. Hope that makes sense!
I think you need to basically come up with your own standards for material ID colors; as far as I know, there's no universal ordering and color assignment for color IDs.
I've been contemplating creating a standard color chart for artists to follow when using Substance, along with the right smart materials/templates that would pretty much apply all materials to the right places automatically. Would it be something you guys think would be useful and followed, or is every artist or studio just going to use their own thing?
I tend to think artists prefer not to follow guidelines and would rather make their own custom sauce instead, but I may be wrong.
Awesome, glad to hear it! I've been in game-res land too long, I'd love to hop over to cinematic quality some day and see how the process translates!
I will certainly report back once we get some results, although it may take some time - it's the least I can give back
It's probably too early to draw too many conclusions, but the most important issue I see right now is that our characters require high quality quad topology, because we do a lot of complex deformation stuff - blendshape based facial animation, cloth sims, secondary dynamics, shot based sculpting on point caches etc. And of course we absolutely need subdivs to get enough vertices for displacements. I'm afraid we don't yet have good enough automation to procedurally build these - we've experimented a little but in my opinion it takes more time to fix an auto retopo mesh than it'd take to build it by hand.
Still, triangle meshes are perfectly fine to get the asset into the shot and render it to see how it works, what has to be tweaked in the shapes, materials and Substance textures. Then we can do the mesh and UVs manually knowing that the asset will work well enough in the big picture. This is something we've been trying to figure out for a while now, it just never really occurred to me that we could automate the texture prototyping process to this level (I'm a modeler guy). See, we usually hand paint everything, so all the work on a rough tri mesh would've either been lost or messed up during reprojections (loss of the PSD layers).
So, I'm super excited now and will make sure to let you know how it works out!
I've been contemplating creating a standard color chart for artists to follow when using Substance, along with the right smart materials/templates that would pretty much apply all materials to the right places automatically. Would it be something you guys think would be useful and followed, or is every artist or studio just going to use their own thing?
I tend to think artists prefer not to follow guidelines and would rather make their own custom sauce instead, but I may be wrong.
If you do, make one of these to go with it: http://i.imgur.com/rMOFPgk.png
I'd leave a lot more room for customs, and have the normal material ID colors listed first, with the rest as the custom ones.
A quick question here: You mention that the process of assigning materials to ID colours becomes automated due to the fact that Substance reads the overall ID map and is able to assign materials depending on the amount of red/green/blue in the texture. Is this something that's built into Substance or did you guys code it in? (I'm fairly new to Substance and its magic, so forgive me if this is a silly question.)
This is something that we coded, but we're transitioning to a convention of sticking to 255s and 128s. So red, green, blue, yellow, magenta, cyan, etc. etc.
Then the way we set up our material graphs in Substance is in order of importance on the material palette. So if you have a scientific faction, your most important (red) is white plastic, then your second most (green) is a red painted trim, then your third (blue) is rubber, and so on and so forth.
We have a system right now that will take an existing material map and remap it to red/green/blue/... based on the number of pixels found. So if you did Purple, Orange and Pink, we remap it to Red/Green/Blue.
But in theory if you stick with a convention of maybe 10 materials sorted by importance (just stick to the same order all the time), you won't need the remapping part of the pipeline.
It's basically sorting the colors based on importance, that way we're not tied to a chart that is like "yellow green is always steel". You abstract that out, so you can have the same gun be a "good guy faction" or a "bad guy faction" and still mostly work out of the gate.
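Not their actual tool, but the remapping step described above can be sketched in a few lines of NumPy/Pillow: count how many pixels each ID color covers, then reassign the most-covered color to red, the next to green, and so on down a canonical palette. The file names here are made up, and it assumes a hard-edged (non-anti-aliased) ID map:

```python
import numpy as np
from PIL import Image

# Canonical palette in order of importance (the 255s-and-128s convention).
CANONICAL = [(255, 0, 0), (0, 255, 0), (0, 0, 255),
             (255, 255, 0), (255, 0, 255), (0, 255, 255),
             (128, 0, 0), (0, 128, 0), (0, 0, 128), (128, 128, 0)]

def remap_id_map(src_path, dst_path):
    """Remap whatever ID colors the artist used to the canonical ordering,
    ranked by how many pixels each color covers. Colors beyond the palette
    length are left untouched."""
    img = np.array(Image.open(src_path).convert("RGB"))

    # Count pixels per unique color, most-covered first.
    colors, counts = np.unique(img.reshape(-1, 3), axis=0, return_counts=True)
    order = np.argsort(-counts)

    out = img.copy()
    for rank, idx in enumerate(order[:len(CANONICAL)]):
        mask = np.all(img == colors[idx], axis=-1)
        out[mask] = CANONICAL[rank]

    Image.fromarray(out).save(dst_path)

remap_id_map("gun_id.png", "gun_id_remapped.png")  # hypothetical file names
```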
...will make sure to let you know how it works out!
Awesome!! Like you say, worst case scenario you still have a prototyping tool to answer quick questions, but there might be more elegant solutions on the manual process side of cinematic-quality workflows...I want to look into that more as well on my side. Like I mentioned before, I've been in game-res land for too long; might be cool to scratch that cinematic itch for a little bit!
This is something that we coded, but we're transitioning to a convention of sticking to 255s and 128s. So red, green, blue, yellow, magenta, cyan, etc. etc.
Then the way we set up our material graphs in Substance is in order of importance on the material palette. So if you have a scientific faction, your most important (red) is white plastic, then your second most (green) is a red painted trim, then your third (blue) is rubber, and so on and so forth.
We have a system right now that will take an existing material map and remap it to red/green/blue/... based on the number of pixels found. So if you did Purple, Orange and Pink, we remap it to Red/Green/Blue.
But in theory if you stick with a convention of maybe 10 materials sorted by importance (just stick to the same order all the time), you won't need the remapping part of the pipeline.
It's basically sorting the colors based on importance, that way we're not tied to a chart that is like "yellow green is always steel". You abstract that out, so you can have the same gun be a "good guy faction" or a "bad guy faction" and still mostly work out of the gate.
Hopefully that makes sense!
Thank you for the detailed explanation, it's cleared up a few doubts for sure. Starting to learn Substance now and trying to work out a good production pipeline I want to test with my own environments!
Great presentation. Thanks to you both, Ikruel and Pavlovich, for further explanations here!
There was a slide about auto exploding low-polys.
How do you explode high-polys then, so they match their positions when baking?
And how does it deal with high-polys that have more elements than low-polys? By some sort of manual polygon grouping?
The artists provide multiple high res objs. The pipeline then generates multiple game res objs (or the artists provide the matching low res).
The way they seem to do it is by subtool in ZBrush. In the beginning we started with looking at the bounding box of each object and evenly spacing the high and low. But the bake times were taking too long because loading the complete high was sometimes a bit much to handle (20 pieces of 2 million polys each). So we moved to baking individual high>low and then compositing the resulting maps together using a UV mask.
So you point the tool at a directory that has "head_high.obj, body_high.obj, grenades_high.obj, arms_high.obj", and if the tool doesn't see the corresponding _low it generates them automatically. Then it bakes the pairs and composites the resulting maps. Hope this makes sense!
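A rough sketch of that folder-scan-and-pair step, plus the UV-mask composite mentioned above. The folder name and the *_normal.png / *_uvmask.png outputs are made up, and auto_gamerez()/bake_normal() are just placeholders for whatever auto game-res and bake tools a pipeline would actually call:

```python
from pathlib import Path

import numpy as np
from PIL import Image


def collect_bake_pairs(asset_dir):
    """Find *_high.obj files and pair each with its expected *_low.obj."""
    pairs = []
    for high in sorted(Path(asset_dir).glob("*_high.obj")):
        low = high.with_name(high.name.replace("_high", "_low"))
        pairs.append((high, low))
    return pairs


def composite_bakes(bake_paths, mask_paths, out_path):
    """Merge per-piece bakes into one map, taking each piece's pixels only
    where its UV-island mask is white. Later pieces win on overlaps."""
    result = None
    for bake_path, mask_path in zip(bake_paths, mask_paths):
        bake = np.array(Image.open(bake_path).convert("RGB"))
        mask = np.array(Image.open(mask_path).convert("L")) > 127
        if result is None:
            result = np.zeros_like(bake)
        result[mask] = bake[mask]
    Image.fromarray(result).save(out_path)


bakes, masks = [], []
for high, low in collect_bake_pairs("assets/space_marine"):  # hypothetical folder
    if not low.exists():
        pass  # auto_gamerez(high, low) -- placeholder for the auto low-res / auto UV step
    # bake_normal(high, low, ...)      -- placeholder for the per-pair high>low bake
    bakes.append(f"{low.stem}_normal.png")
    masks.append(f"{low.stem}_uvmask.png")

composite_bakes(bakes, masks, "character_normal.png")
```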
In the beginning we started with looking at the bounding box of each object and evenly spacing the high and low. But the bake times were taking too long because loading the complete high was sometimes a bit much to handle (20 pieces of 2 million polys each).
After I asked the question, I started writing a script for Blender based on a similar approach. So I have a script that compares bounds, as you said, then puts pairs on an even grid.
Floating geometry is still not resolved. I thought about merging it with nearest high-polys.
This mesh heaviness is an interesting issue. Blender can have multiple 'scenes' in a single file. Maybe I should separate each pair into a new scene.
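The bounds-comparison script described above might look roughly like this in Blender Python: match each *_low to the closest *_high by bounding-box center, then shift each pair to its own slot on a grid along X. The naming convention, the spacing value, and the 2.7x-style matrix multiply are assumptions:

```python
import bpy
from mathutils import Vector

def bbox_center(obj):
    """World-space center of an object's bounding box (use @ instead of * in 2.8+)."""
    corners = [obj.matrix_world * Vector(c) for c in obj.bound_box]
    return sum(corners, Vector()) / 8.0

# Assumes low-poly objects are named "*_low" and their high-polys "*_high".
lows = [o for o in bpy.data.objects if o.name.endswith("_low")]
highs = [o for o in bpy.data.objects if o.name.endswith("_high")]

spacing = 10.0  # arbitrary; should be larger than the biggest piece
for i, low in enumerate(lows):
    # Pair each low with the closest remaining high by bounding-box center.
    high = min(highs, key=lambda h: (bbox_center(h) - bbox_center(low)).length)
    highs.remove(high)

    # Move both by the same offset so the bake projection still lines up.
    offset = Vector((i * spacing, 0.0, 0.0)) - bbox_center(low)
    low.location += offset
    high.location += offset
```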
So you point the tool at a directory that has "head_high.obj, body_high.obj, grenades_high.obj, arms_high.obj", and if the tool doesn't see the corresponding _low it generates them automatically. Then it bakes the pairs and composites the resulting maps. Hope this makes sense!
Yes, it does! Thanks again
Actually, that is one of the presentation's workflow ideas I liked the most. Allowing for manual work while generating everything that is missing / trivial / not done yet.
That's awesome that you're writing it in Blender, I might send you some PMs in the future. I have a long term goal of contributing to Blender and getting it a bit more game production friendly, but I haven't found the time to dedicate to it. It seems like you can do a lot just in Python, which seems awesome. I could use some help getting started.
You shouldn't need to save the scenes out ever; they're just for baking, they're temp files. So if you can do the bake with everything loaded, awesome; if not, break it up and bake them individually. We're doing some pretty heavy meshes at work, so the system was starting to take too long.
Having the tool figure out and try to match which part is which seems overly complicated, no? Just establish a naming convention on the pieces and let the artists control the pairing. I've also considered adding a _cage in case the artists want a different mesh for baking as opposed to the low res, but we haven't really run into a case where this was absolutely needed.
And yeah, the whole point of the pipeline is that the artists have complete control of the output, otherwise it wouldn't have worked, especially with the auto gamerez and UVs; people immediately freak out when they hear that. But giving them the option to override everything gives them a sense of safety, even if they never use that feature.
I found scripting Blender both pleasurable and useful. At my previous company we scripted it heavily; it served as the level editor (even generator, at some point) for handheld racing games.
The Python integration is solid. C++ is only needed for modifications of Blender itself (like the viewport or custom mesh modifiers).
Every GUI item executes some command in Python, or reads a variable via Python, so the tooltips can teach you the API.
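For example (standard Blender calls, not part of anyone's script here, and assuming "Python Tooltips" is enabled in the user preferences):

```python
import bpy

# Hovering over the "Shade Smooth" button shows the operator it runs;
# the exact same call works from the text editor or the Python console.
bpy.ops.object.shade_smooth()

# Properties behave the same way: the tooltip shows the data path,
# which you can read or drive directly.
bpy.context.object.location.z += 1.0
```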
As for the mesh explosion: I somehow intuitively came up with the idea of auto-finding pairs. Maybe I was still angry about last week's task of baking a human skeleton, where each bone had to be a separate piece.