I keep running out of RAM whilst using Substance Designer, but before I go and buy a very expensive 32 GB kit (I'm currently on 16 GB), I want to see if there's something I'm doing wrong.
What I have is a material blend node with a colour mask, and all 16 slots have a material feeding into them. The materials are mostly ones I have made myself, and each has a 2048 colour, normal, and roughness map saved as a PNG to keep file size down.
In each substance I have downsized the input textures to 1024, in the hope that the program will keep the 1024 version in memory rather than the full 2048, but I'm not sure if that's how it works.
I've now reached the point where I can't open my graph, even with no other programs running.
Has anyone else had issues like this or am I using Designer wrong?
Replies
Then, you can select your bitmap resources in the Explorer to edit them and change the bitmap format from Raw to JPEG (you can fine-tune the compression quality). You probably don't want to compress the textures you want to keep in 16 bits, or the normals, but the others should be fine. This should help reduce the memory pressure on the Substance Cooker.
If it still fails, can you export your Substance with dependencies and share it with us?
After I posted my message, I split my graph in two and referenced the split part back into my main graph. That allowed me to open the file and continue working.
I have now gone through my materials and switched the linked bitmaps to JPEG compression. By the way, which is the highest quality on the compression setting, 1 or 0?
Now that I've changed my materials from Raw to JPEG, my RAM usage doesn't appear to be any different, but I'm sure it will help once I've changed it everywhere I can.
I'll see how it goes and post again if I run into more trouble, thanks again.
Could you send me the graph that eats that much RAM by PM?
Yes, you can export it easily by doing "right-click > Export with dependencies..." on your Substance in the Explorer.
It seems you want to output textures at 4K, but Designer keeps its working data in RAM to be efficient. When I opened your graph, my computer's total RAM consumption reached 15 GB. This can result in some lag/freezing from Windows if you reach the maximum amount of RAM available. Is that a problem? Not really; that's normal. The only solution is to be patient and wait until Designer finishes its work. Once the computation is done, Designer will dump everything it no longer needs and therefore free some RAM.
With the x64 version of Designer, we never wanted to limit ourselves. However, working in 4K is very demanding and can eat a lot of resources. I would suggest working in 1K/2K most of the time, and switching to 4K at the end, when you want to export your bitmaps.
I suggest watching this excellent tutorial made by Wes: https://www.youtube.com/watch?v=5_sFdP4TnbI?t=22m12s
So, regarding your graph:
- The first thing to note is that the bitmap size and its node size in the graph are kinda independent. Say, for example, you drag'n'drop a 2K normal map into your graph. What you get is a node set to Absolute, so independent in resolution from the rest of the graph, at 2K. From that point you can set it to "Relative to parent", or keep it Absolute but at a different resolution, say 1K. Your source image is still 2K, but the version used by Designer will be 1K.
Therefore, if you rebake your texture at 512x512, for example, but don't change the node in Designer, you will still get a texture at 1K (Designer will upscale or downscale the bitmap source to fit the node's requirements).
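To make the Absolute vs. Relative-to-parent behaviour concrete, here is a tiny sketch of the size logic (a simplified model, not Designer's actual code; the function name is made up, and it assumes Designer's convention of storing sizes as powers of two, so 8 means 256, 10 means 1024, 11 means 2048):

```python
# Simplified model of how a node's working resolution is chosen.
# Sizes are log2 values, as in Designer's Output Size settings.

def node_resolution(mode, parent_log2, absolute_log2=None):
    """Return the resolution (in pixels) a node actually works at."""
    if mode == "absolute":
        log2_size = absolute_log2       # fixed, ignores the graph size
    elif mode == "relative_to_parent":
        log2_size = parent_log2         # follows the graph's own size
    else:
        raise ValueError(f"unknown mode: {mode}")
    return 2 ** log2_size

# A 2K source bitmap whose node is forced to Absolute 1K:
# Designer downscales the 2048 source to 1024 before using it.
print(node_resolution("absolute", parent_log2=11, absolute_log2=10))  # 1024

# The same node switched to Relative to parent, in a graph set to 2K:
print(node_resolution("relative_to_parent", parent_log2=11))          # 2048
```

This is why rebaking the source at 512 changes nothing on its own: the node's own size setting, not the file on disk, decides what Designer keeps in memory.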
- The second thing to note is that you also have the "Relative to input" resolution mode. I noticed, for example, one Transform node in your graph set to "Relative to input" with a 256x256 node connected to its input. However, that Transform node has its Width and Height set to 12 (i.e. 2^12), so it outputs an upscaled version of the 256x256 blend at 4096x4096. And your graph is set to "Relative to parent".
So what happens in the end? Your graph being "Relative to parent" means its size can be dynamic, but by default, when viewing it, everything will be at 256x256. What happens with your 4K node set to "Relative to input"? If the graph is set to an absolute 512, for example, your previous 256 node will switch to 512, and since you increase the size based on the input, you will switch to 8K in theory. It would be better to set your graph to an absolute size of 1K/2K, for example, with all your nodes set to "Relative to parent" (or at least the first one of the tree).
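The Relative-to-input arithmetic above can be sketched as follows (again a hedged model, not Designer's real API: it assumes the node effectively stores a log2 offset from its input size, which matches the 256→4096 and 512→8K behaviour described):

```python
import math

# Assumed model: a Relative-to-input node applies a fixed log2 offset
# to whatever size its input has.

def log2_offset(input_px, output_px):
    """Derive the log2 offset from the sizes seen at setup time."""
    return int(math.log2(output_px) - math.log2(input_px))

# The Transform node was set up with a 256 input producing 4096 output:
offset = log2_offset(256, 4096)     # +4 in log2, i.e. a 16x upscale

# Raise the graph's absolute size to 512: the 256 node follows the
# parent (256 -> 512), and the Transform applies the same offset:
new_output = 512 * 2 ** offset
print(offset, new_output)           # 4 8192 -> the 8K mentioned above
```

That 16x multiplier is why a single mis-set node can blow up the memory footprint of an otherwise modest graph.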
Let me know if something is not clear !