Do they really *have* to be 1 pixel high?
Doom3 has a few textures that are basically just ramps, for light falloffs etc.; they are something like 8 pixels high, and the width is some power of two as well.
I guess you could feed any texture to the graphics card as a "1D texture" (with some tex1D command in the fragment shader, for example) and it will sample it as such. No idea how those are sampled by default though - maybe just from the "top left" to the "top right"?
Have to be one pixel? No, but why bother with more if your sampler only uses one dimension of the texture? The rest would just be wasted space and memory.
And it's a one-dimensional texture because when it is indexed by a shader it only has one coordinate component; the other is always zero, because (remember kids) counting starts with one, indexing starts with zero. Pixel (0,0) is the upper-left corner.
EQ: Wake me up when computer graphics terminology makes sense or is intuitively consistent.
rebb: Yes, a 1D texture is one pixel high. However, you can achieve the same effect with any height; it is just more expensive in terms of texture space, but much simpler in terms of pipeline (1D textures are not very common). So I doubt we'll see many 1D textures used anymore, but instead very thin 2D textures.
A tex1D command just uses a scalar instead of a 2D texture coordinate (float2 or what have you). So, like the U of a UV... if the gradient only varies along U, then no matter what V is, the result will be the same. You can accomplish the same things, it's just a pipeline issue.
I can't test right now, but you should be able to use a 2D texture with a tex1D command, and a 1D texture with a tex2D command.
As for the DDS issue, the minimum size needs to be 4 pixels in a dimension for a 2D .dds file, if I remember correctly (can't test that right now either); not sure why, in all honesty.
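Just to illustrate the point, here is a rough HLSL/Cg-style sketch (not from anyone in the thread - the sampler names and the assumption that the same 256x1 ramp is bound both as a 1D and as a 2D texture are made up):
[ CODE ]
// Hypothetical ramp bound twice, once per sampler type.
sampler1D rampSampler1D;
sampler2D rampSampler2D;

float4 SampleRamp(float u)
{
    // 1D lookup: a single scalar coordinate is enough.
    float4 a = tex1D(rampSampler1D, u);

    // Equivalent 2D lookup on a one-pixel-high texture:
    // V can be anything, the fetched row is always the same.
    float4 b = tex2D(rampSampler2D, float2(u, 0.5));

    return a; // a and b hold the same color
}
[/ CODE ]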
I don't mean to hijack Ben's thread, but could someone explain to me the applications of a one-dimensional texture as opposed to a two-dimensional one? Professor's explanation was helpful, and I looked it up on Wikipedia, but I still don't know where they would be used, or what they're used for.
It's not one-dimensional in any sort of *actual* sense, as in what those words actually mean. But for something like a gradient, where it's going to be the same whether it's 256x256 (if you're not counting dithering of course) or 1x256 stretched out to the same size, you really aren't losing any information. Really just saves memory. This whole 1d texture thing is something some retarded programmer thought up to make himself seem smarter than he really is.
Sure, what you see on the screen is a 2D picture. (It sorta has to be, otherwise you couldn't see it.)
But the thing being represented is a list of colors in a certain order. That's what's one-dimensional: not the picture, but the subject of the picture. It's a 2D picture of a 1D object.
The screen shows 2D pictures of your 3D objects also, doesn't it? This works the same way.
1d textures can be used to store more complex calculations, although this is less true today with powerful hardware. Think about any mathematical function that takes a single scalar value as input. Of course the result (unless float textures are used) wouldn't be as precise, but especially on older card generations it was easier to do the calculation beforehand and fetch the results this way, instead of calculating per pixel.
2d textures can be used the same way for functions that take 2 scalar inputs (think x to the power of y for glossiness).
Now the nice thing about such textures is that you actually don't have to store the "exact simple" result; you can apply more complex twists to it, which would require many more instructions in an on-the-fly calculation scenario.
Say you precalculate particle colors in a 1d texture, or sizes, and then fetch the proper one using the relative age of the particle.
Or transfer functions for other types of data (think palette shifting in the old days, just for stuff like medical/engineering data).
1D is actually very often the case in other coding (think arrays), so I heavily disagree with EQ saying it's bullshit. The hardware has to be optimized for the best case in each dimension, and if it saves time and you know you only have 1D data, then why use more costly 2D sampling...
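The particle example above might look roughly like this in HLSL (a sketch under assumed names - "particleAgeRamp" and the age-in-TEXCOORD0 setup are made up for illustration):
[ CODE ]
// Hypothetical 1D lookup of a precomputed particle color ramp,
// e.g. a 256x1 strip painted by an artist.
sampler1D particleAgeRamp;

float4 ParticleColorPS(float normalizedAge : TEXCOORD0) : COLOR
{
    // normalizedAge = current age / lifetime, in [0,1].
    // One texture fetch replaces whatever curve was painted into the
    // strip, however non-linear it is.
    return tex1D(particleAgeRamp, saturate(normalizedAge));
}
[/ CODE ]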
For it to qualify as 1d (for those arguing about this *actually* being it), it would have to have the same perceived width at every zoom level, i.e. the smallest one possible, and would therefore have to be vector-based.
I'm nitpicking, I know, but I don't think this is really the same as a 3d object being displayed as a 2d one.
In the sense of only using the one dimension, that's where my theory becomes shaky. What EQ and I are nitpicking about is purely the fact that the actual texture is 2d, like a couple of killjoys.
[ QUOTE ]
It is a bit like using 3D Studio to draw a 2D picture, but what else are you gonna do?
Open up notepad and type numbers in a list?
[/ QUOTE ]
I actually made a bitmap by typing numbers into an ASCII file once. It's not that hard, but I had to do a bit of coding in PostScript to get it into the proper array.
But I thought that Daz was talking about something like a const in UE3. It's just a straight vector that you can replace your diffuse or another bitmap with. It has no height at all because it's just a single value that you can increase or decrease.
A 1d texture most definitely is a line... the dividing one between technically adept minds and artistic ones, it seems.
For my own part, one of the greatest leaps I made was when I understood the power of storing values/computational results in an incredibly fast, massively parallel system.
Everything from radiance functions, fast square roots, falloff tables, collision bounds, particle data, state machines etc. can be stored on the gfx card/pipeline... not to mention the unique ability to actually transform this data from a more 'artistic' perspective... want some anti-aliasing on that n-dimensional FFT lookup table?
Strange, though, that a lot of artists I speak to are fully accepting of concepts like normal/bump mapping and yet share EQ's perverse view that the gfx pipeline developers are all mindless numpties just out to obfuscate the process in the hope of looking superior.
[ QUOTE ]
A line is definitely 1d, that's for sure. Actually having an artist create a 1d texture in Photoshop is just nonsense.
[/ QUOTE ]
It would be if the 1d texture was a single color fading into another single color, but it's most definitely not when you want to have multiple colors with varying falloffs in between.
It's also beneficial when you want multiple artists on the team to be able to create their own 1d textures, and many of them wouldn't have the knowledge or patience to code one up using numerical inputs, but wouldn't mind painting a 1x256 texture exactly how they want it.
Thanks Ryan and the other programmers for explaining the usage better than I could.
I think the issue is that we're so used to thinking of spatial dimensions, rather than the base concept of a dimension.
Technically speaking, any given texture could very well be thought of as an 8-dimensional structure rather than a 2-dimensional one:
you have 2 spatial dimensions (U and V) and 4 color dimensions (RGBA), and each can be accessed individually and separately from the rest. Heck, if you treat the luminance value of the pixel as a third dimension then it's a 12D structure.
If anyone remembers the Nalu demo from NVIDIA a few years ago, a few of the rather complex functions of how light should pass through her hair were precomputed and stored in textures, with differing results in separate channels.
And if you want a good example of a use of a 1-dimensional texture, ever hear of a palette? 256 indexed color values can just as easily be stored as a 1x256 texture, and then it can benefit from all the things you can do with a GPU natively.
And yeah, the result of just about any single-input function can be stored as a 1D texture; you can then linearly interpolate between the stored data points (pixels) if you need in-between values.
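The palette idea is easy to sketch in HLSL (the names and the point-filtered index map are my own assumptions, not anything from the thread):
[ CODE ]
// Hypothetical palettized lookup: an 8-bit index map plus a 1x256
// palette strip, both assumed to be bound by the application
// (the index map with point filtering, so indices aren't blended).
sampler2D indexMap;     // single-channel texture holding indices 0..255
sampler1D paletteStrip; // 256-entry color ramp

float4 PalettePS(float2 uv : TEXCOORD0) : COLOR
{
    // The stored index comes back already normalized to [0,1].
    float index = tex2D(indexMap, uv).r;

    // Use it as the coordinate into the palette; swapping the palette
    // strip recolors everything without touching the index map.
    return tex1D(paletteStrip, index);
}
[/ CODE ]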
@Vailias:
Sure, making it really just 1 pixel high might be a bit more "efficient" memory-wise, but I guess id Software made them higher so the artists could actually work with the textures without squinting a lot.
[ QUOTE ]
Technically speaking, any given texture could very well be thought of as an 8-dimensional structure rather than a 2-dimensional one
[/ QUOTE ]
While I can see your point, I couldn't even imagine thinking in 4 dimensions, let alone 8.
I prefer to see data-type textures the same way you would think about world space: a container of other 'lesser' hierarchical spaces (object, camera, light, etc.), an array of arrays as it were.
But then again, if you can see in 8 dimensions, can you transport me to work in the next 3 nanoseconds, as I am very late?
[ QUOTE ]
I think it opens up some really awesome shader possibilities.
[/ QUOTE ]
qfa
Without trying to sound condescending, if you lerp'd between the starting pixel and the end, you'd have pretty much the same results using only three 1x2 px textures.
It's always nice to see artists stepping into (thankfully) what used to be purely the realm of coders/boffins.
Thanks for this thread. I didn't know anything about this stuff before, glad it's stayed (mostly) educational
Per: Bear in mind that not everyone used to be a programmer ¬_¬
For ShaderFX users, this is a breeze to set up, and it's pretty awesome fun.
Thanks poop, because reading this was one of those things which made something click in my head regarding shaders, and I am all excited about some ideas I'm having right now.
I was using these about two years ago with a friend's engine for controlling lighting falloff, specular falloff, etc. It was really cool: if you used block colors you'd get a cel-shaded effect, and for specular falloff it was a full color map, so you could do things like add faint colored rings to the specular to get cool iridescence and other effects.
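That kind of lighting-falloff ramp might look roughly like this in HLSL (a sketch - the sampler name and input semantics are assumptions on my part, not that engine's actual code):
[ CODE ]
// Hypothetical lighting falloff: N.L is remapped through a 1D ramp.
// A strip painted with hard blocks of color gives a cel-shaded look;
// a smooth strip gives ordinary diffuse falloff.
sampler1D falloffRamp;

float4 RampLitPS(float3 normal   : TEXCOORD0,
                 float3 lightDir : TEXCOORD1) : COLOR
{
    // Map N.L from [-1,1] into [0,1] so the whole ramp is usable.
    float ndotl = dot(normalize(normal), normalize(lightDir));
    float u = ndotl * 0.5 + 0.5;

    return tex1D(falloffRamp, u);
}
[/ CODE ]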
I'm not familiar with Mental Mill, but I've done similar things in Maya using ramp and facing-ratio nodes. You can make a nice x-ray shader by using them for transparency as well, or add glowing edges to things by using transparency + emissive.
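The facing-ratio trick translates to shader code too; a rough sketch (sampler name and semantics again made up, not Maya's actual nodes):
[ CODE ]
// Hypothetical "facing ratio" lookup: how much the surface faces the
// camera drives a 1D ramp, e.g. for an x-ray style transparency falloff.
sampler1D facingRamp;

float4 XRayPS(float3 normal  : TEXCOORD0,
              float3 viewDir : TEXCOORD1) : COLOR
{
    // 1 when looking straight at the surface, 0 at grazing angles.
    float facing = saturate(dot(normalize(normal), normalize(viewDir)));

    // The ramp's alpha could drive transparency; its RGB could feed
    // an emissive term for glowing edges.
    return tex1D(facingRamp, facing);
}
[/ CODE ]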
We're using 1D bitmaps here too, for similar artist-controlled gradient effects (thermal shaders like the Predator movies, fog falloff ramps, etc.). In fact I just made a couple of these today.
I usually don't need to go any wider than 256, so I can use 8-bit TGA. Works well.
FWIW, Nvidia's DXT compressor intelligently pads a 1-pixel-tall image with 3 empty pixels, just so it can save a conformant DXT. MSDN talks about this trick, I think.
Unfortunately DXT's 4x4 block compression sucks when you try to feed it a smooth gradient, so I don't use it. Too bad Nvidia doesn't save a valid 8-bit RGB DDS. Whatever, not like I need the mips.
Replies
Do you mean something like 256px by 0px? I don't think that's possible.
Ho ho ho ho ho!
Edit: Oh... It actually exists? I guess the pun was out of line...
2x Edit: Oh zing!
They can be quite useful when you get creative with shaders (they can act as really cheap ramps or gradients).
Ok, I'll try a different format, thanks.
Ok, I got my answer. I have to make it 256x1, not 1x256. Thanks anyway guys.
I know this!!!
[ QUOTE ]
1d? Wouldn't that just be a single point?...
[/ QUOTE ]
0 dimensions = a point
1 dimension = a line
2 dimensions = a field
3 dimensions = a volume
It has length only, no width or depth.
Get with it, EQ
[ QUOTE ]
I don't think this is really the same as a 3d object being displayed as a 2d one.
[/ QUOTE ]
It's the same, in that the picture has two dimensions, but the object it's intended to represent has a different number of dimensions.
(Yeah, you design a 1D colored line by editing a 2D picture representing said line. It's a hack, but nobody has bothered to write "ColoredLineShop.")
EDIT: okay, now I'm trying really hard to resist the urge to write ColoredLineShop.
These are my three 1x256 1d strips (enlarged to show texture!)
And here is the result when using it as a shading lookup (this is inside Mental Mill)
Hope that clarifies what I'm using it for. I think it opens up some really awesome shader possibilities.
[ QUOTE ]
without trying to sound condescending, if you lerp'd between the starting pixel and end, you'd have pretty much the same results
[/ QUOTE ]
Lerps look about like this:
Ben's functions look like this:
My mistake (and a crappy LCD); apologies all round.
Cool thread.