@JackyBoy
This In.position etc. is C syntax; search for "structs in C" and you will find plenty of tutorials on the topic.
@dnc
Sadly that ShaderFX version expires, so I couldn't work on that shader any longer. Instead I'm now learning HLSL from scratch in Notepad.
It's going quite well. I've written a simple diffuse Lambert shader today, but there are still some things I haven't overcome yet.
Now I want to add Phong specular lighting, but I don't know how to calculate the eye vector / view vector inside 3ds Max?
@JackyBoy
Now I want to add Phong specular lighting, but I don't know how to calculate the eye vector / view vector inside 3ds Max?
The view position is simply the world position of the viewpoint, so taking the view position minus the world position will give you the vector that points from the world position to the view.
You can get the view position from the fourth row of the inverse view matrix.
So, to finally get your normalized view vector or eye vector, it would simply be the following. Hope this explains it well.
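(The original snippet was lost here; a minimal sketch of what the post describes, assuming the ViewInverse matrix semantic used later in this thread:)

float4x4 ViewInverse : VIEWINVERSE;                       // inverse view matrix, provided by 3ds Max
float3 viewPosition = ViewInverse[3].xyz;                 // the fourth row holds the view position in world space
float3 eyeVec = normalize(viewPosition - worldSpacePos);  // world-space vector from the surface towards the view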
During my work on that specular lighting I've found a bug in my shader: whenever I put a light between my models, the outer one will be lit like the inner one?
I think a picture describes this better:
And my Lambert shader:
//////////////////////////////////////////////////////////////
// Data Structs
// input from application
struct application2vertex {
    float4 position : POSITION;
    float4 normal : NORMAL0;
};
// output to fragment program
struct vertex2fragment {
    float4 position : POSITION;
    float4 worldNormal : NORMAL0;
    float4 worldSpacePos : TEXCOORD0;
};
//////////////////////////////////////////////////////////////
// Vertex Shader
vertex2fragment vertexShader(application2vertex In)
{
    vertex2fragment Out;
    Out.worldNormal = mul(In.normal, World); //transform normals from object space to world space
    Out.worldSpacePos = mul(In.position, World);
    Out.position = mul(In.position, WorldViewProjection); //transform vertex position from object space to clip space
    return Out;
}
//////////////////////////////////////////////////////////////
// Pixel Shader
float4 pixelShader(vertex2fragment In) : COLOR
{
    float4 Normal = In.worldNormal;
    float3 LightDir = normalize(light1Pos - In.worldSpacePos); //calculate light vector
    float diffuseLight = max(dot(Normal, LightDir), 0);
    float4 lambert = saturate(DiffuseColor * diffuseLight * light1Color);
    float4 final = saturate((AmbientColour * AmbientIntensity) + lambert);
    return final;
}
//////////////////////////////////////////////////////////////
// Techniques
technique Base
{
    pass one
    {
        VertexShader = compile vs_3_0 vertexShader();
        ZEnable = true;
        ZWriteEnable = true;
        ZFunc = LessEqual;
        CullMode = none;
        AlphaBlendEnable = false;
        AlphaTestEnable = false;
        PixelShader = compile ps_3_0 pixelShader();
    }
}
Trying to jump in quick to show off my recently learnt knowledge.
But is it because in your pixel shader you have your LightDir calculated with "light1Pos" to work out the light vector, when in fact you need multiple passes for it to work out each individual light and blend? Or at least have a LightDir2 with light2Pos, if that's possible?
light1Pos = mul(light1Pos, World);
The Lambert looks ok; where is the code for your specular pls?
Specular is a lot more open in how you calculate it. Try not to get lost in all the crazy models and functions out there at the start (like Cook-Torrance and so on); some people tend to obsess a bit over that. I think it doesn't matter as long as it looks good.
Also, here's my function from my shader:
//separate specular calculation to make my life easier coding this thing
//color and masking is NOT done here; this is just for pure, raw specular calculation
//thanks to http://wiki.gamedev.net/index.php/D3DBook:(Lighting)_Blinn-Phong for the very clean and understandable explanation
float4 blinnspecular(float3 normal, float3 lightvec, float3 eyevec, float4 glossiness)
{
    normal = normalize(normal);
    lightvec = normalize(lightvec);
    eyevec = normalize(eyevec);
    float3 halfvector = normalize(eyevec + lightvec); //add eye and light together for the half vector (Blinn)
    float4 specular;
    specular = dot(halfvector, normal); //dot between half and normal (Blinn)
    specular = float4(pow(specular.r, glossiness.r), pow(specular.g, glossiness.g), pow(specular.b, glossiness.b), pow(specular.a, glossiness.a)); //power specular to glossiness to sharpen the highlight
    specular *= saturate(dot(normal, lightvec) * 4); //fix for the specular-through-surface bug: make sure no specular happens on unlit parts; the multiplier works as a bias
    return specular;
}
I tried to make it generic enough: no spec map required yet, and it can be used for any light. I just run this three times to calculate every light's spec on the model.
Take note of the final line: this is a hacky little fix for a bug where spec appears through surfaces. You won't find this in any official book or formula; I just write stuff that works for me. What it does is quickly check whether light actually reaches the area where specular is calculated, and make sure no spec will be visible if the area is unlit.
LoL, this makes me mad. I don't know if it is the HLSL compiler or Max, but it is crazy. I've put my code into some fancy functions like Xoliul did, to get a better overview. But instead of working like before, as I would expect, it does not.
I've got no compile errors or anything else, but my specular calculation doesn't work now. I can't see the problem here.
It's the same code as before, just put into functions.
Here is my full code; you can switch between the old and the new version with the "use external functions" flag in 3ds Max.
I just tried your code; none of it works fine. Both the external and non-external versions are doing really weird things... as in really, really broken lighting.
I suggest you fix that first before bothering with those functions. It's some sort of matrix/space problem, as the lighting seems offset and dependent on distance.
Found your problem: you are using a float4 as input for the normal from the application. That means a W component is created for it as well, which you end up using in all your calculations. That W component messes up pretty much everything.
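(A minimal sketch of the kind of fix this implies: treat the normal as a direction, not a position. The float3x3 cast is my assumption, not code from the thread:)

// take only xyz and rotate by the upper 3x3 of the world matrix, so no translation or W leaks in
Out.worldNormal = mul(In.normal.xyz, (float3x3)World);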
I've now added normal map and gloss map support and light attenuation to my shader, and fixed some errors.
It's still not the prettiest, but I think I've learned enough to move further.
I can post my shader here, if anyone is interested?
I'm now learning cubemaps and how to use them for reflections/refractions. And I'm stuck here. I don't know what I'm doing wrong that my shader looks like this (left side):
There are a couple of things you probably should change here.
-First thing is when you are transforming your tangents, normals, and binormals into 'world space'. They naturally are in world space, so simply pass them to your fragment shader straight from the application (see the sketch after this list).
-Second thing is when using a normal map, you need to swizzle your normal map around before using it. You would also need to do your "bFlipY" before this operation.
-Third thing is that cubemaps need a different normal lookup, since 3ds Max is Z-up; so your normal strictly for cubemaps has y and z simply swapped.
One more thing... normalize your variables such as view vec, normals, etc.
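(The code snippets from this post were lost; a rough reconstruction of the first and third points under the thread's naming. The exact swizzle for the second point was in the missing snippet, so it is not guessed here:)

// 1) pass the tangent basis straight through (but see dnc's correction further down about rotation)
Out.worldNormal = In.normal.xyz;
Out.worldTangent = In.tangent.xyz;
Out.worldBinormal = In.binormal.xyz;

// 3) for cubemap lookups in Z-up 3ds Max, swap y and z of the shading normal
float3 cubeNormal = N.xzy;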
Hey all, learning is still going strong! But I wondered if anyone could explain a little more to me about lights.
The shaders I have been tinkering with use 1 or 2 lights, but what I want to know is how this would differ if you were going to use it in a game engine, or with even more lights in Max (I don't need actual code, just the theory behind it). Do you have to code a "light1Pos", "light2Pos", "light3Pos"... for every light you MIGHT have in the scene, or do you do it in a funky way where the shader realises there is another light and does a lightNPos + 1?
Good question, Jacky.
As you see, you need to explicitly provide code for every additional light, and it adds performance cost. If you want to optimize in case there are fewer lights, you get branching as an additional issue: doing IFs in shaders is not as straightforward as regular CPU code. Shaders are meant for the GPU to thunder through, running the same code with just different variables; changing the code (like with an IF, or when switching shader files) comes with a cost.
The solution to this lies with the engine, and adjusting your shaders accordingly. First of all, your engine renders either forward or deferred. Forward rendering is what my shader is and what you guys are practicing: simply a shader per object, working by itself. Light code is done per object, and you can have vastly different shaders for different objects. Deferred shading is completely different and is more like a post-effect. The renderer will do a number of passes (for example diffuse color, normals, specular, reflection, emissive), all with standardized shaders for all objects; then a final pass will composite everything together. The advantage is that you're calculating your lights only for that single, final image, and not for every object. Much better performance-wise, but limited in other ways (much harder to do node-based, unlimited shaders like Unreal). Deferred is a newer technique because GPUs couldn't handle that many rendertargets until about 8 years ago.
There are also different approaches. I think Unreal does forward shading most of the time (lightmapped objects are forward shaded), and it will do one single dynamic light (the dominant light, like the sun). Any additional dynamic lights (like explosions or flashing lights) are done in an extra deferred pass that gets composited on top.
Frostbite and CryEngine are both fully deferred, I think. Frostbite has node-based shaders; CryEngine limits you to a set of pre-built shaders.
I guess to conclude I could say this:
The way we are coding shaders is actually a bit inefficient and not suited for integration in a large engine; there's so much more that comes into play there. This stuff could work fine in a simple, limited "engine" or framework though, but you'd be limited to a set amount of lights (the code then picks, for example, the 3 closest ones). See the sketch below.
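(Not from the thread: a minimal sketch of that fixed-light-count setup; the array names and the count of 3 are my assumptions:)

#define MAX_LIGHTS 3
float3 lightPositions[MAX_LIGHTS]; // filled by the framework, e.g. with the 3 closest lights
float4 lightColors[MAX_LIGHTS];

float3 accumulateDiffuse(float3 N, float3 worldPos, float3 albedo)
{
    float3 result = 0;
    for (int i = 0; i < MAX_LIGHTS; i++) // fixed trip count, so the compiler can unroll it (no dynamic branching)
    {
        float3 L = normalize(lightPositions[i] - worldPos);
        result += albedo * saturate(dot(N, L)) * lightColors[i].rgb;
    }
    return result;
}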
Ok, so I had a little look at how deferred works, and as far as I can tell you would still do it in HLSL, but you need to add in a G-buffer (or MRTs?). Then instead of doing the lighting in the pixel shader, you store the various scene elements in the G-buffer and composite them in a pixel shader at the end? Again, no code needed, I just want to get the theory sorted in my head.
Also, would this be a good thing to move onto next for learning shaders? Is that "the way the industry is heading", or is it just another technique that I shouldn't worry too much about?
You would do some of the stuff in HLSL, but things will be a lot more spread out; the majority happens in the actual render framework code of the application.
I've never written this stuff before, but to put it simply: you would have a few simple shaders (or one with multiple techniques) for the passes. One that purely returns diffuse, one that returns normals, one for spec, etc. These would each render to a rendertarget or G-buffer, though creating, handling and storing these is done outside of HLSL. HLSL/a shader is pretty "dumb" when viewed in the total render pipeline: it doesn't do much else than crunch pixels (even shadows, for example, require application code outside of HLSL).
Once you have stored all the passes in G-buffers, you pass them to another composite shader as textures. This composite shader then does another pass of crunching to produce the final image.
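(Not from the thread: a minimal sketch of a G-buffer fill pass in HLSL using multiple rendertarget outputs; the struct and sampler names are assumptions borrowed from the shaders in this thread:)

struct GBufferOut
{
    float4 diffuse : COLOR0; // rendertarget 0: albedo
    float4 normal  : COLOR1; // rendertarget 1: world-space normal packed into 0..1
    float4 spec    : COLOR2; // rendertarget 2: specular
};

GBufferOut fillGBuffer(v2f In)
{
    GBufferOut Out;
    Out.diffuse = tex2D(diffuseMapSampler, In.texCoord);
    Out.normal = float4(normalize(In.worldNormal) * 0.5 + 0.5, 1.0);
    Out.spec = tex2D(specularMapSampler, In.texCoord);
    return Out;
}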
I wouldn't recommend doing that, really, unless you're prepared to write your own render framework to surround it. There's no point in "practising" that unless you also have access to the main application code surrounding it, as they are so closely tied together.
Not to say that is impossible, but since there is no existing, easy framework for it (like Max for forward shading), you'd need to get into some heavier programming.
I have to be honest: despite doing shaders in my spare time and having a good understanding, I never actually write shader code for the game at work (though I have written "tool" shaders for Max). The engines are too complex (Unreal and Frostbite) and provide a good node-based interface, so there is no need. If you work with a simpler engine (Unity too, I think) or some indie project, then there is a much larger possibility. You'd still have to work closely with a programmer though.
If you want a higher-level overview of what I think is a good path:
Get up to a level where you can easily write a multifunctional, standard shader with some bells and whistles like normal mapping and cubemap reflections.
You could try some post-effect shaders in Max. In theory simpler than object shaders (you're just working on a single image, kind of like image editing), though the crappy interface in Max makes it harder. Interesting nonetheless; I wrote a bunch of them a few years ago, I should put them up on my site again.
You could try some special shaders that are more specific in purpose. Stuff I've written/seen:
Vertex-colored blending, or world-direction based (like terrain painting or snow on top of things).
Mixed object/post shader to do actual glow outside of model edges (with different techniques and passes).
Front/top/back/side modelsheet projection onto a blockout model.
UV-distortion visualisation with ddx() and ddy(). Complicated shit already!
Procedural noise effects in world space; try a few different noises like Perlin, Voronoi, etc. This stuff gets really, really complicated.
All fun stuff! If you manage to do all of this, I'd say you got as far as "mastering shaders" goes for an artist. Any further (like coding a deferred pipeline) and you move into graphics programmer territory...
I got an opportunity to work on a deferred pipeline; most of the shaders were pretty self-explanatory, and the skills I learned from HLSL/CgFX were just as easy to port over. Definitely agree with the path Xoliul laid out. It sounds like a good flow, and you touch a bunch of stuff in the shaders/graphics programming realm.
I have a much better idea now of where shaders fit into the overall pipeline. Also good to know the scope of what I will be learning. That is a great list of shaders to try and create too; I shall definitely be using something along those lines for "further learning" (once I figure out what some of them mean, of course... :poly124: )
I have got to a point where I have a textured normal map shader working (I even got a little ahead of myself and threw in a cheeky array), but the only way I can get the alpha to work correctly is by having Out.a = 1.0f; at the end. If I just have it return Diffuse + Ambient, it looks correct, but the light also seems to act as some kind of alpha, and where it is lit it becomes transparent?
So I wondered if someone could have a little look and see why it is doing that? Another point is float3 and float4; I can't figure out when to use which. I have to swap between them and back again when writing my code, but that might just be down to inexperience.
Also, if you could look at the general state of my code, see if I am doing things "correctly", or if anything is sticking out where it shouldn't be.
Thanks for all the help!
I see some weirdness:
A dot product always returns a single float, not a vector. So what happens when you put this into a float4 is that every component (r, g, b and a) gets assigned this value. It will look greyscale, with that same greyscale value in the alpha. Your alpha then just comes along all the way to the end: the color texture's alpha is 1, and this gets multiplied by your light, so it becomes the same value.
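(A quick illustration of that splat; not code from the thread:)

float light = saturate(dot(N, L)); // dot returns a single scalar
float4 lit = light;                // splats to (light, light, light, light): the alpha gets the value too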
What you're doing at the end, explicitly setting it to 1.0f, is a good thing though. I do that too: alpha tends to be a simple value. I just do an if-statement that sets Out.a to either DiffuseTexture.a or 1.0f.
Now, regarding when to go for float3 or float4:
float3 should be your default in almost every case for actual vector values. When doing vector calculations, the W/A component tends to act as a length of sorts, often messing up calculations (like what was happening to NBLM).
Only go for float4 when you explicitly need it, such as for a color value.
Really, the only times you should use float4 are for your actual output pixel value and whenever you want to sample a texture with alpha.
Admittedly, this often gets annoying, since you can't perform operations on a float3 and a float4 together, so you end up using float4 for every color. The solution is to do what you do: override the alpha at the end to make sure. See the sketch below.
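(Not from the thread: a tiny sketch of that pattern:)

float3 color = Diffuse.rgb + Ambient.rgb + Specular.rgb; // do the math in float3
return float4(color, 1.0f);                              // widen only at the output, forcing alpha to 1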
I've found my problem. I've forgotten to assign a compiler :shifty:. It works now with:
string ParamID = "0x003";
I've also updated my code with your suggestions, thx Drew++!
Just that point about not transforming the normals is not 100% true. If you just pass the normals, binormals and tangents to your pixel shader, they may work as before, but they are static then and do not get affected by rotation.
Ahh right... That would be object space. I'm used to working with an engine that handles this for me :V
So just do this:
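(The snippet that originally followed here is missing; presumably the rotation-aware transform, matching the line used in the shader below:)

Out.worldNormal = mul(In.normal, WorldInverseTranspose).xyz; // rotates along with the object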
Hey all, trying to get spec and gloss working now. I have my gloss in the alpha of my spec map. They look like this:
But it seems that my specular is always getting blown out.
Here it is with the gloss spinner at 100:
And here it is at 0:
Here is my code:
string ParamID = "0x003";
float4x4 wvp : WORLDVIEWPROJ < string UIWidget = "None"; >;
float4x4 WorldInverseTranspose : WORLDINVERSETRANSPOSE < string UIWidget = "None"; >;
float4x4 ViewInverse : VIEWINVERSE < string UIWidget = "None"; >;
float4x4 World : WORLD < string UIWidget = "None"; >;
//////////////////////////////////////////////////////////////
// Parameters section
float4 specularColor : SPECULAR
<
    string UIName = "Specular Colour";
    //string UIWidget = "ColorSwatch";
> = {0.15f, 0.15f, 0.15f, 1.0f};

float4 ambientColor : AMBIENT
<
    string UIName = "Ambient Colour";
    //string UIWidget = "ColorSwatch";
> = {0.15f, 0.15f, 0.15f, 1.0f};

float glossiness
<
    string UIName = "Glossiness Level";
    string UIType = "FloatSpinner";
    float UIMin = 0.0f;
    float UIMax = 100.0f;
    float UIStep = 0.05;
> = 25.0f;

texture diffuseMap : DiffuseMap
<
    string name = "default_color.dds";
    string UIName = "Diffuse Texture";
    string TextureType = "2D";
>;

texture normalMap : NormalMap
<
    string name = "default_normal.dds";
    string UIName = "Normal Map";
    string TextureType = "2D";
>;

bool bFlipGreenChannel
<
    string gui = "slider";
    string UIName = "Flip Green";
> = true;

texture specularMap : SpecularMap
<
    string name = "default_color.dds";
    string UIName = "Specular Texture";
    string TextureType = "2D";
>;

sampler2D diffuseMapSampler = sampler_state
{
    Texture = <diffuseMap>;
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
};

sampler2D normalMapSampler = sampler_state
{
    Texture = <normalMap>;
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
};

sampler2D specularMapSampler = sampler_state
{
    Texture = <specularMap>;
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
};

//////////////////////////////////////////////////////////////
// Lights section
float4 lightPos : POSITION
<
    string UIName = "Light Position";
    string Object = "PointLight";
    string Space = "World";
    int RefID = 0;
> = {100.0f, 100.0f, 100.0f, 0.0f};

float4 lightColor : LIGHTCOLOR
<
    int LightRef = 0;
    string UIWidget = "None";
> = { 1.0f, 1.0f, 1.0f, 0.0f };
//////////////////////////////////////////////////////////////
// Structs section
// input from application (Application 2 Vertex) These are "filled" from the app, you can get data from these
struct a2v {
    float4 position : POSITION;
    float2 texCoord : TEXCOORD0;
    float4 normal : NORMAL; //float4 for multiplication purposes in 4x4 matrices
    float4 binormal : BINORMAL;
    float4 tangent : TANGENT;
};
// output to fragment program (Vertex 2 Fragment) These are "filled" from the Vertex shader
struct v2f {
    float4 position : POSITION;
    float2 texCoord : TEXCOORD0;
    float3 lightVec : TEXCOORD1;
    float3 worldNormal : TEXCOORD2;
    float3 worldBinormal : TEXCOORD3;
    float3 worldTangent : TEXCOORD4;
    float3 eyeVec : TEXCOORD5;
};
//////////////////////////////////////////////////////////////
// Vertex Shader
v2f vShader(a2v In)
{
    v2f Out;
    Out.worldNormal = mul(In.normal, WorldInverseTranspose).xyz;
    Out.worldTangent = mul(In.tangent, WorldInverseTranspose).xyz;
    Out.worldBinormal = mul(In.binormal, WorldInverseTranspose).xyz;
    float3 worldSpacePos = mul(In.position, World).xyz; //transform vert pos to world space
    Out.eyeVec = ViewInverse[3] - worldSpacePos;
    Out.lightVec = lightPos - worldSpacePos;
    Out.texCoord = In.texCoord;
    Out.position = mul(In.position, wvp);
    return Out;
}
//////////////////////////////////////////////////////////////
// Pixel Shader
float4 pShader(v2f In) : COLOR
{
    float4 Out;
    float4 colorTexture = tex2D(diffuseMapSampler, In.texCoord);
    float3 normal = tex2D(normalMapSampler, In.texCoord).rgb * 2 - 1;
    float4 specTexture = tex2D(specularMapSampler, In.texCoord);
    float3x3 objToTangentSpace;
    objToTangentSpace[0] = normalize(In.worldBinormal);
    objToTangentSpace[1] = normalize(-In.worldTangent);
    objToTangentSpace[2] = normalize(In.worldNormal);
    if (bFlipGreenChannel) normal.g = -normal.g;
    float3 N = mul(normal, objToTangentSpace);
    N = normalize(N);
    float3 L = normalize(In.lightVec); //makes lightVec length 1
    float3 V = normalize(In.eyeVec);
    float light = saturate(dot(N, L));
    float4 Diffuse = colorTexture * light * lightColor;
    float4 Ambient = ambientColor * colorTexture;
    float3 halfVec = normalize(V + L);
    float4 Specular = saturate(dot(halfVec, N)); //working out where spec should be
    float specPower = pow(glossiness, specTexture.a); //power the glossiness spinner by the gloss map (specTexture alpha), so you can control gloss
    Specular = pow(Specular, specPower); //powering the spec by the powered gloss
    Specular *= saturate(dot(N, L) * 4); //Xoliul's spec-through-surface bug fix
    Specular *= specTexture * lightColor; //multiplying by light color so spec disappears with the light
    Out.rgb = Diffuse.rgb + Ambient.rgb + Specular.rgb;
    Out.a = 1.0f;
    return Out;
}
//////////////////////////////////////////////////////////////
// Techniques
technique Simple
{
    pass one
    {
        VertexShader = compile vs_3_0 vShader();
        PixelShader = compile ps_3_0 pShader();
    }
}
I think it looks strange, but I can't figure out what I am doing wrong.
Another thing I have noticed is that the further away my omni is, the brighter and larger the highlight becomes. Do I just ignore this for the time being until I go over light attenuation?
I did manage to get a "Flip green channel" tick box in there, which I was quite happy about though.
Edit: When I put the Xoliul shader at a gamma of 1, the light is very similar to mine, but still not as blown out, so I may want to try and implement gamma in my shader. When set to 2.2 it looks FAR better than mine.
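(Not from the thread: the usual final gamma lift, as a one-line sketch:)

Out.rgb = pow(Out.rgb, 1.0 / 2.2); // convert the linear lighting result to gamma space for display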
Well, I just carried on plugging away and I have managed to get the spec working correctly; it looks pretty much identical to XoliulShader (my benchmark for seeing whether things are working correctly).
I was multiplying and powering by the wrong things. I was trying to figure it out just from the Xoliul source, but it is a bit hard that way when you have to jump around the shader to see where it is getting its values from.
Here is my shader now, in case anyone is interested.
Another tech question: how would this work if it was being used in an actual game engine? Would the "glossiness spinner" be hard-coded into the engine, or would it usually be something that is edited on a per-model basis and carried into the engine?
Now I can carry on studying, and will probably have another update next week with some cubemap stuff.
Sorry I haven't posted in a while; I'm in my last week of uni and am really busy.
BUT
There's some good stuff coming out of this, guys, so keep it up!
As for gloss maps, I'm not sure what the standard method (if there is one) of doing the calculation is. Personally, when I have used it (only once), I multiplied the gloss map with the specular power attribute. I don't know if this is right or not, so I'm curious what other people do?
(I'll kick this back up as it is one of the only interesting threads left around here.)
dnc: a gloss map should be seen as a modifier for a range of power values. If you just multiply it with a static power value, you could end up powering the specular to 0 (for pure black pixels on your gloss map). The best solution is to use the gloss map as a lerp alpha between your max and min specular power values, as sketched below.
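(Not from the thread: a minimal sketch of that lerp; the min/max parameter names are assumptions:)

float minGlossiness = 2.0f;    // specular power for black gloss pixels
float maxGlossiness = 100.0f;  // specular power for white gloss pixels
float specPower = lerp(minGlossiness, maxGlossiness, specTexture.a); // the gloss map drives the blend
Specular = pow(Specular, specPower);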
Another tech question: how would this work if it was being used in an actual game engine? Would the "glossiness spinner" be hard-coded into the engine, or would it usually be something that is edited on a per-model basis and carried into the engine?
That really depends on the engine, but it would be a bit along these lines:
Let's assume the engine does not do anything node-based. It can either figure out what variables are exposed by looking at the UI code in the FX file, or it could work with an extra XML file that sits next to the FX file, telling it what values do what (more flexible than just DXSAS). The engine could then expose whatever values it has collected when you want to edit a shader "instance" (a case where a model gets drawn with a specific set of parameters for this shader). These instance parameters are then stored along with the asset, and applied when render time comes.
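(For illustration only: the kind of FX-file UI annotation an engine could parse to discover an exposed parameter; this mirrors the annotation style already used in the shader above:)

float glossiness
<
    string UIName = "Glossiness Level"; // an engine can read this annotation block...
    float UIMin = 0.0f;                 // ...and build a spinner with these limits for the shader instance
    float UIMax = 100.0f;
> = 25.0f;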
Haha, good to know I could make an interesting thread!
That's actually really interesting, knowing how various engines can do it in different ways. I love learning about the process of how other engines do it; it really helps me get my head around how to attack it from an art standpoint.
I will definitely be carrying on my studies soon and reviving this thread, but unfortunately I have recently lost my job, so I must prioritise getting work first. I will still be dipping in and out, but can't focus as much time on it as I could before. Still love seeing how others are getting on though!
I've never watched those, but Ben Cloward was the co-creator of ShaderFX, so that should say enough
Edit, some more stuff:
The official Microsoft HLSL reference on MSDN: quite technical and not always easy, but the best source for clear answers on the syntax, I think. Also a basic, old article I wrote on color math; it's not really up to standard anymore, but it should help a bit with the very basics. I promise, I want to rewrite and continue this stuff this year!
And perhaps allow me to add my opinion on HLSL: I think it's the easiest, least-nonsense, least-effort language there is. Once you feel at home in a shader's standard structure, it's really very fun and satisfying to work with (just looking at that MSDN page makes me want to write stuff again, haha). No setup of program APIs required, no headaches with memory management, no compile steps, just Ctrl+S; core functionality relatively portable between applications, etc...
Do you know an equivalent for Maya?
http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_slides.pdf
I was wondering where I can get a refresher course on the math symbols. I recognize some of them, but I can't for the life of me remember what the other ones are. A full list would be appreciated if anyone has one, or an 'idiots guide to basic math symbols' for the wiki or something. (Yeah, the wiki.)