I need some advice on how best to break down converting pictures that I take myself or source online into tweakable materials. What I am doing now is this:
-Import source photo to Bitmap2Material to generate BaseColor, Normal, and Height Information
-Import those Bitmaps into Substance Designer
-Tweak/Make changes as needed
-Save as an .sbs into a library folder for later use inside Designer
The biggest issue I am running into now is that the exported .sbsar files are massive, and I attribute that to all the bitmaps embedded in the material. That brings me to a new question: any tips for optimizing the size of the material when I am done? I suppose I could just use the base color map, generate the normal from the height map, and use a simplified roughness map.
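Here's a rough sketch of what I mean by trimming the bitmaps down before they get embedded (file names, folders, and scale factors are just placeholders; it only uses Pillow and doesn't touch any Substance API):

```python
from pathlib import Path
from PIL import Image

SRC = Path("source_maps")    # hypothetical folder with the Bitmap2Material output
OUT = Path("embed_ready")    # the versions that actually get linked into the .sbs
OUT.mkdir(exist_ok=True)

# Keep base color at full resolution, store height as 8-bit grayscale,
# and shrink roughness hard -- it rarely needs anywhere near 2K of detail.
plans = {
    "basecolor.png": {"mode": "RGB", "scale": 1.0},
    "height.png":    {"mode": "L",   "scale": 1.0},
    "roughness.png": {"mode": "L",   "scale": 0.25},
}

for name, plan in plans.items():
    img = Image.open(SRC / name).convert(plan["mode"])
    if plan["scale"] != 1.0:
        w, h = img.size
        img = img.resize((int(w * plan["scale"]), int(h * plan["scale"])), Image.LANCZOS)
    img.save(OUT / name)
    print(name, img.mode, img.size)
```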
The source pictures I am using now from Textures.com are great, but to me they still lack the crisp detail that fully procedural materials have, even at 4K resolution. Is that just a downside of using photos instead of going fully procedural? If so, what's the point of even using them?
Just trying to work things out, so if I need to completely abandon what I am doing, that's fine; I just don't want to get too far down a path before turning back.
Thanks!
Replies
What is the size of your project? A good thing about Substance Designer (though in your case a bad thing) is that resolution is non-destructive, meaning you can scale it up and down without destroying your pictures. Also, wherever you have grayscale information you might want to put a grayscale conversion node (or just test it), and wherever an image is only a single color, reduce it to 1 pixel. Don't use bitmaps for a single color at all; Substance Designer has a color picker, so you can just add a 1-pixel Uniform Color node.
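Something like this is what I mean by checking which maps even need to be bitmaps (just a rough Pillow/NumPy sketch; the file name and tolerance are made up):

```python
from PIL import Image
import numpy as np

def classify_map(path, tol=2):
    """Report whether a map is effectively one color, pure grayscale, or full color."""
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    # All pixels (nearly) identical -> a 1-pixel Uniform Color node does the job.
    if np.ptp(arr, axis=(0, 1)).max() <= tol:
        return "uniform-color"
    # R, G and B match everywhere -> import it as grayscale instead of RGB.
    if np.abs(arr[..., 0] - arr[..., 1]).max() <= tol and \
       np.abs(arr[..., 1] - arr[..., 2]).max() <= tol:
        return "grayscale"
    return "full-color"

print(classify_map("roughness.png"))   # assumed file name
```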
As a small test I exported an .sbsar file with three bitmap nodes at 2K (base color, normal, height), used an HBAO node to pull in the height data, and used a 16x16 grayscale uniform color node for roughness and metallic. When I exported it, the file size was 11 MB! Even setting the compression to JPEG on all the bitmap nodes did not help very much.
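For reference, here's the back-of-envelope math on those three 2K bitmaps (the compression ratios are just guesses, not anything measured from the .sbsar):

```python
res = 2048
channels = {"basecolor": 3, "normal": 3, "height": 1}   # channels per map

raw = sum(res * res * ch for ch in channels.values())   # bytes at 8 bits per channel
print(f"uncompressed: {raw / 2**20:.1f} MB")            # ~28 MB for the three 2K maps

# Assumed, ballpark compression ratios just to see where ~11 MB could land.
for label, ratio in [("lossless ~2:1", 2), ("lossy ~5:1", 5)]:
    print(f"{label}: {raw / ratio / 2**20:.1f} MB")
```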
I am concerned that trying to use that in an actual game would cause all sorts of issues with file sizes that large. Which begs the question: if the file sizes are going to be that large, what is the point of using those photos and trying to convert them in the first place? I am still learning SD, but this approach lets me experiment more easily; I was just curious whether anyone else uses a similar workflow and how they get around these issues.
I think the biggest hurdle is thinking a few moves ahead about how you want your result to look and how to work backwards with the nodes to produce that result. On top of that, learning how to really use all the nodes together is what makes the program so powerful, but also very difficult to use. I'll keep plugging along!