I need to use an alpha channel for two different things: one is the actual alpha (transparency), the other is for a shader. What would be the best way to handle the following?
A: Add noise to help disguise the fade-out of the shader and the alpha, since neither will have the full range of 256 levels.
B: These aren't split evenly in half, so I'm trying to think of a way to work on each separately and then combine them.
C: There was a way to "cut off" an image's total range in R, G, or B. For example, instead of 256 levels, the image would stop* at 197 in red.
*I don't want it to stop there, though; I want it to re-level / compress the full 256 range down to that number.
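Something like this is what I mean by re-leveling (a rough sketch assuming a numpy-style workflow, roughly what Photoshop's Levels "Output Levels" slider does, not anything engine-specific):

```python
import numpy as np

def compress_levels(channel, out_min=0, out_max=197):
    """Remap an 8-bit channel from 0-255 down to [out_min, out_max]."""
    scaled = channel.astype(np.float32) / 255.0            # normalise to 0-1
    return np.round(out_min + scaled * (out_max - out_min)).astype(np.uint8)

# e.g. red = compress_levels(red, 0, 197) keeps the relative detail,
# but no pixel ever goes above 197.
```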
How much compression noise will DXT5 add to the alpha channel? And how do I compensate for per-pixel value shifts if it does indeed compress / add noise?
Replies
You are asking for answers to questions you really don't have enough information to be asking. You are asking about very technical considerations that, to a technical person, sound curious bordering on absurd; I wouldn't even attempt to answer them, because odds are there is a better way to do what you want to do than the narrow questions posed.
Take a step back and maybe explain the larger context, because your questions don't really make sense.
I don't know how much information I can give out without getting permission from the developer and programmer. NDAs and all, which I'm sure you can appreciate; hence why I'm not going into specifics. When I next see the programmer, I'll ask what I can describe.
Let's just say I find this whole situation absurd as well, but the oh-so-wonderful engine I work with (TGEA) has so many whacked-out ways of doing things, and my programmer wants to limit it to one pass. So the alpha in question has to work for both transparency and a shader. Each will use a different part of the greyscale spectrum of the alpha to accomplish this. Of course, since neither has the full spectrum to work with, I need to find a way to add diffusion noise to each within its respective range, in the areas that are semi-transparent or semi-affected by the shader.
The best idea I can think of is working on each separately at full greyscale, then limiting/crunching (whatever word you prefer) each to its respective range, then combining them into one. The problem is I'm not sure which Photoshop tool would give the most exact results.
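In pseudo-terms, the workflow I'm imagining looks something like this (a rough sketch only; the 0-127 / 128-255 split, the array names, and the mask are my assumptions, not anything from the engine):

```python
import numpy as np

def pack_channel(alpha_full, shader_full, shader_mask):
    """Crunch two full-range 8-bit greyscale maps into one channel.

    alpha_full, shader_full: uint8 arrays authored at full 0-255 range.
    shader_mask: boolean array marking where the shader value applies.
    """
    alpha_low   = np.round(alpha_full.astype(np.float32) / 255.0 * 127).astype(np.uint8)            # 0-127
    shader_high = (np.round(shader_full.astype(np.float32) / 255.0 * 127) + 128).astype(np.uint8)   # 128-255
    return np.where(shader_mask, shader_high, alpha_low)
```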
The issue comes from the page you linked: "Its 8-bit alpha is interpolated, which yields much smoother results." That's with DXT5, the format I must use. So... how am I going to do the above and then, when the DXT5 is created, still keep each pixel's interpolation pinned to the value authored in the original bitmap? If that isn't possible, I need suggestions to minimize the effect; otherwise I'll have stray pixels showing the shader, or part of the transparency, where they should be the opposite.
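From what I understand of the format, DXT5 alpha works per 4x4 block: it stores two 8-bit endpoints and every pixel in the block snaps to one of only eight values derived from them, which is exactly why I'm worried about exact values shifting. A rough sketch of how that per-block palette is built (based on the standard BC3/DXT5 layout, not any particular compressor):

```python
def dxt5_alpha_palette(a0, a1):
    """The eight alpha values a 4x4 DXT5/BC3 block can actually store."""
    if a0 > a1:
        # two endpoints plus six interpolated steps
        return [a0, a1] + [((8 - i) * a0 + (i - 1) * a1) // 7 for i in range(2, 8)]
    # two endpoints, four interpolated steps, plus hard 0 and 255
    return [a0, a1] + [((6 - i) * a0 + (i - 1) * a1) // 5 for i in range(2, 6)] + [0, 255]
```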
Hopefully, with this bit more of context, my questions make more sense?
If you change the output levels, you can limit the number of greys each image uses to more or fewer than 128. Then you can just multiply them together to create your final alpha.
How will that even work?
Let's say you have two required things (your opacity/alpha and your "shader parameter"), each limited to a range of 128 levels... what if the opacity is 32 and the shader value is 32? That adds up to 64. But this could be an opacity of 64 and a shader value of 0, or just as easily an opacity of 12 and a shader value of 52.
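A quick toy illustration of why you can't pull the two values back apart once they share one number (hypothetical values, just to show the collision):

```python
# Different (opacity, shader) pairs collapse to the same stored value.
for opacity, shader in [(32, 32), (64, 0), (12, 52)]:
    print(opacity, shader, "->", opacity + shader)  # every pair prints 64
```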
This sounds like an exercise in futility to me.
Either drop your "shader parameter" (or use something else for it, like vertex colour), or drop the alpha. The only other solution would be using an extra texture. If your programmer suggested doing this, ask him how exactly he was planning on separating out the components.
If it was you who came up with this plan, consider that you may need to learn more about how this stuff works.
You have 256 levels in 8 bits of information... each bit is a 0 or 1 (binary). The reason you have 256 levels is:
2 combinations * 2 combinations * 2 * 2 * 2 * 2 * 2 * 2 = 256
Now let's say you want to split this channel's bits in half, 4 and 4. It isn't 128 + 128. It is:
2 * 2 * 2 * 2 = 16
2 * 2 * 2 * 2 = 16
So, you actually only have 16 levels per 'piece of information', NOT 128 or 256! Oh noez! Well, this isn't terrible... you can still do 128 levels in the first 7 bits and then a 1-bit alpha in the last bit (on or off), which can work alright. And TBH I'm not sure how this'll work with DXT compression either, or how you'd set this up in PS (your programmers should be able to write you a tool to pack stuff together; you should never have to do it in PS).
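For what it's worth, the 7-bit + 1-bit packing would look something like this inside whatever packing tool the programmers write (hypothetical helpers, just to show the bit layout):

```python
def pack(shader7, alpha_on):
    """Pack a 0-127 shader value and a 1-bit alpha flag into one byte."""
    return ((shader7 & 0x7F) << 1) | (1 if alpha_on else 0)

def unpack(byte):
    """Recover (shader7, alpha_on) from a packed byte."""
    return byte >> 1, bool(byte & 1)

# pack(100, True) -> 201; unpack(201) -> (100, True)
```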
All of this sounds extremely mad/weird, and in no way does the task sound like something an artist should ever have to think about.
I did tell him not to split it directly in half, since I want the shader to have more values than the alpha (the alpha will pretty much be on/off with slight variation). The only other option I was given was adding a whole bunch of extra faces so the alpha areas could go on a separate map. Since these surfaces are curved, I couldn't see that happening without upping the tri count, and the items I'm working with are already quite heavy in that area. (They have to be, considering what they are and how they can be seen.)