
Shader programming: GLSL etc

JKMakowka polycounter lvl 18
There is a surprising lack of discussion about this (seemingly) integral part of game development on this forum, which is a shame, as it would probably be quite beneficial for every game artist to at least understand what the "shader guy" is talking about ;)

So I recently ventured into these unknown lands of wonder, and was wondering if anyone has some especially good tutorials and/or tools they would like to share.
Or just some general insights to maybe get this discussion started so that others on this board might benefit too.

Replies

  • Rick Stirling polycounter lvl 18
    Yes, I'd be very interested in that.

    GLSL vs HLSL etc.

    I downloaded an HLSL primer a few months ago, but to be honest, it was just mathematics, far too programmer-centric (and I have a degree in programming).
  • hyrumark polycounter lvl 12
    I just bought Ben Cloward's DVD. I'm about halfway through it and already it's been worth the price.
  • CheeseOnToast greentooth
    From a layman artist's point of view hyrumark? Or do you have a programming background already?
  • JKMakowka polycounter lvl 18
    About HLSL/GLSL etc: this is what I have gathered so far:

    HLSL/GLSL/Cg are all high-level languages that get translated to an assembly language before execution.
    In the case of OpenGL that is always the ARB assembly language (which can also be used to program in directly, and often was in recent years because of its higher compatibility with older hardware).

    HLSL/GLSL/Cg are pretty similar in syntax, and there are a few tools to convert between them (but often they don't work well).

    HLSL is for DirectX only, GLSL is for OpenGL 2.0.
    Cg is Nvidia's own solution, which is very similar to HLSL, but (I think) can also be used for OpenGL. It works better on Nvidia hardware, of course ;)

    More insights on this?
  • Rob Galanakis
    Are you concerned about shader programming in general? Or the differences between HLSL and CG and GLSL?
  • CrazyButcher polycounter lvl 20
    Some of what you just said is wrong, jkm.

    Cg HLSL GLSL
    Cg was developed by Nvidia; it was more or less the first high-level shading language for the mass market. HLSL was made in a cooperation project with Microsoft, hence Cg and HLSL are very close code-wise, and often no translation is necessary.

    Cg supports a few more things than the others, e.g. interfaces, unsized arrays and so on, which makes dynamic shader code generation easier.

    Cg and HLSL compile to "profiles", which often are ASM-like representations of a shader. It is very important to know that there is no common assembly language for all graphics cards; every chip generation has its own microcode. It's not like in the CPU world, where there is a standard. So even the ASM-like stuff on the GPU will be compiled/optimized again into native instructions.

    HLSL supports various "shader model" profiles; each shader model has a different set of features / instruction limits: ps2_0, ps3_0 and so on.

    Cg has the most profiles: it can compile to the OpenGL ARB ASM-like code, to Nvidia-specific stuff (mostly), and to the HLSL profiles. It can also turn Cg into HLSL and GLSL code.
    GLSL code generation is still quite buggy, though. Cg is mostly very Nvidia-centric (well, it's their thing), so a lot of the latest stuff like branching/loops will not work so well in OpenGL on non-Nvidia hardware.
    The reason is that the ARB ASM languages are not updated anymore and are basically frozen at DirectX's shader model 2 feature set (without the instruction limits, however). Nvidia has its own ASM extensions that expand on this; ATI and others have not.
    The latest version of Cg can also compile to GeForce 8 stuff, so you get geometry shaders and so on.

    GLSL is a different story from the other two, because it does not compile to ASM at all, but directly to the GPU's microcode. So the vendor has to write a robust compiler themselves, but can also benefit from a bit more optimization information. Nevertheless it makes GLSL a bit buggy. ATI especially have suffered here a lot, but their new Vista OpenGL driver + the Linux one feature a completely new core. At some point they will make that available for XP, too.
    I am not sure how good Intel's GLSL support is, but well.
    Since nothing is compiled to any target other than the hardware itself, there are no "profiles" in GLSL; a shader either compiles or it doesn't, depending on the current hardware/driver. This leaves GLSL maximum flexibility to incorporate new, vendor-specific stuff and so on, which was always the strength of GL. But it also means a certain lack of standards, and it's a bit uglier for software developers.

    ATI offers an HLSL-to-GLSL library, and Cg tries to be as multiplatform as possible (it is used for the PS3 as well).

    As said, syntax-wise Cg and HLSL are basically the same; GLSL is a bit different. GLSL is also used for OpenGL ES 2.0, which targets portable 3D devices, think mobile phones.


    Shaders
    Nevertheless, the principles of shaders are always the same. You have a vertex and a pixel (in OpenGL called fragment) stage (and the latest hardware may have a geometry stage).

    The vertex shader gets data from the application "per vertex": attributes like colors / positions / texcoords. Shaders' "native" datatype is a vector with 4 components, and attributes are like that too. A typical limit is that you can send up to 16 Vector4s per vertex to the vertex shader. How you use those 16 vectors is up to you, but you must always use 1 to send positions. In many shading languages you will get "default" bindings like POSITION, NORMAL, TEXCOORD0 and so on, which are more the internal "feeding" names.

    The vertex shader's sole job is computing the position on screen, i.e. turning 3D coordinates (mostly object space) into a box called "clip space", basically -1 to 1 in each dimension.
    Any other stuff it outputs, like colors, texcoords and such, is totally optional.
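    As a concrete sketch, here is a minimal GLSL vertex shader doing exactly that: one mandatory clip-space position plus one optional texcoord passed on. It uses the GLSL 1.x built-ins (gl_Vertex, gl_MultiTexCoord0, gl_ModelViewProjectionMatrix); only the varying name is made up here.

    ```glsl
    // Minimal vertex shader: transform the object-space position into
    // clip space and pass a texcoord through to the fragment stage.
    varying vec2 v_texcoord;  // hypothetical name for the interpolated output

    void main()
    {
        v_texcoord  = gl_MultiTexCoord0.xy;                      // optional output
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;  // the one required job
    }
    ```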

    The fragment/pixel shader takes those outputs of a vertex shader, where each output has now become a value interpolated from the 3 vertices of a triangle. We always shade triangles, nothing less... The fragment shader must output one Vector4, which typically is the color of a pixel.
    It can modify the pixel's depth, but shouldn't, as that kills a lot of speed. With multiple rendertargets, it can output more than one Vector4.
    It can never read the "current pixel's" color or depth to perform blending. It can, however, discard a pixel (i.e. not write anything).
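    A minimal fragment shader along those lines might look like this (the sampler and varying names are just illustrative): it samples a texture at the interpolated texcoord, optionally discards, and writes its one Vector4 to gl_FragColor.

    ```glsl
    // Minimal fragment shader: sample a texture, discard nearly transparent
    // fragments, and output one Vector4 as the pixel color.
    uniform sampler2D u_diffuse;  // hypothetical texture binding
    varying vec2 v_texcoord;      // interpolated from the vertex shader outputs

    void main()
    {
        vec4 c = texture2D(u_diffuse, v_texcoord);
        if (c.a < 0.01)
            discard;              // write nothing at all for this fragment
        gl_FragColor = c;
    }
    ```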
    Another difference between OpenGL and DirectX is that depth textures can be read "by value" (i.e. 0-1) in OpenGL, and not just "by comparison" (i.e. 0/1) as in DirectX prior to 10.

    After the Vector4 is passed out, additional tests are performed (these are still fixed-function render state and not part of the shader code): the alpha test, depth test and stencil test. If all tests pass, the framebuffer blending is done, i.e. just overwrite, or "decal" or "modulate" or "add"... the way a pixel is blended is very simple and doesn't allow a lot of variety.

    As said, you cannot read the current pixel, so many techniques will actually render the whole scene not to the window's framebuffer but to a texture, and then perform post-processing effects by drawing a simple fullscreen quad that reads from the scene texture. But again, you can never read from and write to the same texture.
    You may have multiple scene textures that encode color, shadow, lighting... and mix them at the end.
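    A post-processing pass then boils down to a fragment shader like the sketch below, run over a fullscreen quad. The uniform name is hypothetical; it stands for the texture the scene was rendered into (a different texture from the one being written).

    ```glsl
    // Post-process sketch: the scene was first rendered into u_scene;
    // a fullscreen quad then samples it, here with a simple warm tint.
    uniform sampler2D u_scene;  // hypothetical: the scene-in-a-texture
    varying vec2 v_texcoord;    // quad texcoords, 0-1 across the screen

    void main()
    {
        vec4 scene = texture2D(u_scene, v_texcoord);
        gl_FragColor = vec4(scene.rgb * vec3(1.0, 0.9, 0.8), 1.0);
    }
    ```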

    Precision within a shader is typically 16, 24 or 32 bit, but the resulting color is stored at 8 bit (256 values representing 0-1) per component. If you render to a texture, you can render at higher precision and outside of 0-1.

    The key to writing shaders is learning something about vector math: cross and dot products, matrix multiplication. You don't need to know a lot, however. It mostly ends up being * + - and / anyway ;)
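    For instance, the classic use of the dot product is simple diffuse (Lambert) lighting, which in GLSL is a one-liner (the uniform/varying names here are made up, and the light direction is assumed to be normalized and in the same space as the normal):

    ```glsl
    // Dot product in practice: per-fragment Lambert diffuse term.
    uniform vec3 u_lightDir;  // hypothetical; assumed normalized
    varying vec3 v_normal;    // interpolated normal from the vertex shader

    void main()
    {
        vec3  n       = normalize(v_normal);       // re-normalize after interpolation
        float diffuse = max(dot(n, u_lightDir), 0.0);
        gl_FragColor  = vec4(vec3(diffuse), 1.0);  // greyscale lighting only
    }
    ```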

    I could go on endlessly but basically you would need to focus on what kind of effect work you want to do.



    The GPU as a Stream Processor
    It is very important to know that GPUs are "stream" processors, i.e. they just flow straight through the data; they are memoryless. Say you have a multipass algorithm: your vertex transforms will be done every time. Also, because they have small caches, even within the same pass, if more triangles use the same vertex and the vertex is outside the cache, it will be fully re-evaluated again. That is why triangle strips and internal optimization of the triangle order are important.

    Only the latest SM4 (GeForce 8) hardware knows about "primitives", i.e. triangle IDs and such. Before that, a vertex will not know its "neighbors", nor the triangle it is part of; same for the pixel. They just get those attribute vectors + shared common "uniforms/constants". Those uniforms are parameters like "lightcolor", matrices and so on. And of course they can access textures and sample from them. You can give every vertex an id manually and use it to index a constant array. Bone skinning can be done that way: each vert gets an index into the matrix array of all bones, plus a weight (or multiple thereof).
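    The skinning idea can be sketched like this: per-vertex bone indices and weights index into a uniform matrix array. All the names (u_bones, a_boneIndices, a_boneWeights) and the array size are hypothetical; two bones per vertex is shown for brevity.

    ```glsl
    // Skinning sketch: each vertex carries bone indices + weights and
    // looks the matrices up in a uniform (constant) array.
    uniform mat4 u_bones[32];       // hypothetical bone palette
    attribute vec4 a_boneIndices;   // indices packed as floats
    attribute vec4 a_boneWeights;   // matching blend weights

    void main()
    {
        // Blend two bone matrices by their weights, then transform.
        mat4 skin = u_bones[int(a_boneIndices.x)] * a_boneWeights.x
                  + u_bones[int(a_boneIndices.y)] * a_boneWeights.y;
        gl_Position = gl_ModelViewProjectionMatrix * (skin * gl_Vertex);
    }
    ```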

    Basically you can only use the color output value to "memorize" your results. GeForce 8/SM4 can actually store the transform results of a vertex shader, too.

    Because of the lack of "standards" (or rather, standards being made afterwards), OpenGL will normally feature the latest technology first, as every vendor can just extend their driver. Hence you already get all the so-called DirectX 10 features, like geometry shaders and texture arrays, in OpenGL under XP and Linux.


    Engine Shader Systems
    About engines and shaders: many engines (at least the real top ones) will have systems that generate shader code from different options, like varying light counts, whether shadows are on or off... That large number of permutations of a single effect leads to tons of shaders per game. DirectX allows precompiling and storing binaries (i.e. not readable as a text file). Engines like Crysis/Unreal 3 ship with large archives of precompiled shaders for different hardware.
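    One common way such permutations are generated is by prepending preprocessor defines to one master shader before compiling each variant. A GLSL-flavored sketch (the define and uniform names are invented for illustration):

    ```glsl
    // Permutation sketch: the engine injects #defines per variant,
    // e.g. "#define NUM_LIGHTS 2", and compiles one shader per combination.
    #define NUM_LIGHTS 2            // would normally be prepended by the engine
    uniform vec3 u_lightDirs[NUM_LIGHTS];  // hypothetical per-light directions
    varying vec3 v_normal;

    void main()
    {
        vec3  n   = normalize(v_normal);
        float sum = 0.0;
        for (int i = 0; i < NUM_LIGHTS; ++i)          // loop unrolls per variant
            sum += max(dot(n, u_lightDirs[i]), 0.0);
        gl_FragColor = vec4(vec3(sum), 1.0);
    }
    ```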

    This "internal shader system" way, however, prevents you guys from writing a .fx file in RenderMonkey/FX Composer or whatever and just "using it". As those engines are far too optimized/pipelined, you just won't have that freedom; mostly you tweak values of given shaders, or use their tools to build a shader.

    You can see the .fx stuff in Max and so on more as an "I want something like this", or as a simple presentation of one of the engine shaders, so you can more easily tweak/view your models outside the engine.
    Of course less "über" engines will allow a bit more flexibility in plugging in your own shader, without worrying about unified lighting/shadowing and whatever.
  • JKMakowka polycounter lvl 18
    Well, since I am using Linux (= OpenGL only) I don't really care about anything other than GLSL and ARB... so yes, shader programming in general ;)

    Edit: Ahh, thanks CB. I stand corrected :)
  • Eric Chadwick
    Mr. Cloward was in a similar boat to you a couple of years ago, it seems. His blog makes for some great reading; he shared his learning process pretty thoroughly.
    http://bcloward.blogspot.com/2005_01_01_archive.html
  • hyrumark polycounter lvl 12
    [ QUOTE ]
    From a layman artist's point of view hyrumark? Or do you have a programming background already?

    [/ QUOTE ]

    It's definitely geared for someone with little to no programming knowledge (like me).

    While I don't expect to be a shader expert just from watching the DVD, at least now I can open an existing shader and understand the structure and generally what is going on, etc. From there I can tinker and make some changes and modifications, and generally predict the results. I recommend the Ben Cloward DVD.
  • Ruz polycount lvl 666
    DirectX is the princess of the graphics world, while OpenGL is the ugly unpopular ogre :)
  • rebb polycounter lvl 17
    Maybe Direct3D really is the ogre, but because it has a secret mound of gold, it was able to afford a really fancy princess dress and some plastic surgery over the years ;)
  • JKMakowka polycounter lvl 18
    Lol :)

    So what shader editors are you using to test your code?
    These two (from my open-source and Linux-centric viewpoint) seem to be not all that bad:
    http://lumina.sourceforge.net/
    http://code.google.com/p/qshaderedit/
  • CrazyButcher polycounter lvl 20
    I use Crimson Editor + the compilers (cgc or fxc) directly.
    Crimson Editor is one of the few text editors I know that allows easily setting up multiple command-line tools you can just call with a shortcut; something I need a lot and miss in every other free editor. For viewing I just use luxinia/3dsmax.

    While I don't use GLSL, I found the ATI tool for analyzing GLSL compile results worth a look. qshaderedit looks great, too; too bad it's Linux only.
    I don't like RenderMonkey and FX Composer, they are just too bloated.
  • jogshy polycounter lvl 17
    GLSL is good because it is very portable (multi-vendor, multi-platform)... but it is currently lacking a bit because it does not have an "effect" system like HLSL. It is also a bit buggy sometimes (due to the different compiler implementations of the different IHVs). I think we need to wait for OpenGL 3.0 to solve some design problems (effect system in the same file, UI annotations, offline/realtime compilation, shader debugging + emulation + profiling, etc.).

    HLSL, on the other hand, is more mature and allows you to use "effects" in the same file. I prefer this to GLSL, but it is Microsoft only, so not very portable.

    About Cg... I find it completely worthless. It is NVIDIA only and not at all well optimized for ATI or other IHVs' cards.

    About shader IDEs... I don't like them. Good game engines use their own shading editor and system (for example UE3 or Dice's Frostbite). That's the best way to ensure optimal speed and portability. Note that some systems don't support GLSL/HLSL shaders (for example the GameCube, Wii, some mobile phones, etc...), so they need other ways to customize the rendering system.

    However, I must admit that knowing HLSL + GLSL + Cg + some lighting and material theory will be very useful for writing custom shaders or the material system of any 3D engine. Knowledge never hurts ;)
  • Ben Cloward polycounter lvl 18
    Hey Guys!

    It's really exciting for me to see that more artists are getting into shader writing. I'd like to see every game/real-time artist learn just a bit of shader writing as it really gives you a ton of power over what your final art looks like - when you have access to the actual code that's under the hood.

    Learning to create shaders has been a real thrill for me. I especially love those moments when - after working on a specific shader for a couple of days it all comes together and starts to look really cool. This happened to me when I was writing my skin shader - and again when I was creating my hair shader.

    My second DVD on shader programming will soon be available on the Cg-Academy web site. I've finished all of the recording and it's currently being edited. It is focused on lighting in shaders - point lights, directional lights, spot lights, global illumination, etc.

    I'd also like to recommend The Cg Tutorial. It's a great book that starts at a very basic level and works up to some pretty cool stuff. With almost no programming experience, it really helped me get started learning.

    -Ben
  • hyrumark polycounter lvl 12
    [ QUOTE ]


    I'd also like to recommend The Cg Tutorial. It's a great book that starts at a very basic level and works up to some pretty cool stuff. With almost no programming experience, it really helped me get started learning.

    -Ben

    [/ QUOTE ]

    Where can I find this book?

    EDIT: Nevermind, found the link from your DVD!