If I'm not completely wrong, the limit for those under SM3 is 10. I did test GLSL, since Unity proposed that, but I think it has the same limits. My shader had 12 without lightmapping and with per-pixel lighting turned on. The shader is the same one I posted a while ago, just with additional stuff like a fresnel rim, a screen-space texture interpolated with the lighting lookup texture, and a tintable texture input using the model's UVs. In the end I wanted too much from it on older hardware, and I'm now splitting some of those features into an image-effect shader.
Man, congrats on the release, and the price is awesome. I'll wait to see the mobile features before buying it, though; for the moment I prefer to write my shaders by hand, since with mobile as my exclusive target they need a lot of optimization.
Anyway, here is a feature request that maybe hasn't been asked yet:
- Since you are exposing variables (floats, sliders, colors and so on) in the shader tree, it would be cool to automatically generate a control C# script associated with the exposed variables, to get preset animations for them: behaviours like tweening, looping (repeat, ping-pong, clamp...), time-range adjustment, and everything possible to animate the look, with what goes into the script and what kind of animation to use defined right in the properties of the exposed parameter.
This would be a great feature, and a real workflow speed-up.
Possibly! Request it on the feedback page
SF currently doesn't delve outside its editor window and the material & shader inspectors, and doing so might have several implications I hadn't thought of before. (For instance, if a control script exists, people are pretty likely to request a ton of features for it, even though that's not what SF is primarily for.)
Also, there was a quite critical bug in 0.17 that made some properties not show up. I submitted 0.18, which fixes the issue, last Friday, but Unity is still verifying it. It's taking rather long!
What tutorials do you think would be nice to make at this point?
I've been thinking of making a basic tutorial on how to understand vector3/2/1, components, component mask and append, just to make sure new users clearly understand how the basic flow of it works
A basic tutorial is surely fine, but maybe try to think of it in a bigger scope. This stuff is so powerful... Ideally I'd see this become something like Video Copilot: lots of lessons, starting with the basics and introducing really cool and complicated effects later. It would be a shitload of work, but hell, a community-driven Polycount effort could make it happen.
tl;dr
Basic stuff is cool, but so are very sophisticated special effects.
Perhaps I can alternate between advanced/simple tutorials for now
I've released a very basic one, perhaps I can make an advanced one now?
What would you want to see? Custom lighting? Distortion effects? Triplanar mapping? Flowmap? Using SF for particle effects?
A little update on development: deferred might be a bit more straightforward than I initially thought, but due to how deferred lighting works (lighting is computed per-light from the G-buffer, not per-material), the lighting options will be *very* limited.
For instance, you won't be able to use these inputs when using deferred:
Diffuse Power
Custom Lighting
Transmission
Light Wrapping
Alpha (Maybe)
Outline (Maybe)
Outline Color (Maybe)
And pretty much all light nodes will be unavailable when using deferred:
Half Direction
Light Attenuation
Light Color
Light Direction
Light Position
Shader Forge Beta 0.19 now released:
Added node: Remap Range (see the math sketch after this changelog)
You can now specify offset factor and offset units
Lightmapping and lightprobes are now mutually exclusive
Added credits button in the main menu
Fixed a bug where you got a compile error when EditorLabel.cs was present when making a build
Fixed a bug where the dot product node didn't generate its node preview properly
Fixed a bug where using various characters in property names would break the shader
The instruction counter now shows the current render platform (OpenGL / Direct3D 9 / etc.)
A bug where shaders didn't save when using the built-in Perforce version control, may have been fixed now
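In case anyone is wondering what the new Remap Range node computes: it should be equivalent to the standard linear range conversion, roughly like this sketch (illustrative C#, not SF's actual generated shader code):

```csharp
// A sketch of the presumed math behind Remap Range.
public static class RemapSketch
{
    // Linearly map a value from [inMin, inMax] to [outMin, outMax].
    // E.g. Remap(0.25f, 0f, 1f, -1f, 1f) returns -0.5f.
    public static float Remap(float value, float inMin, float inMax, float outMin, float outMax)
    {
        float t = (value - inMin) / (inMax - inMin); // normalize into 0..1
        return outMin + t * (outMax - outMin);       // rescale into the target range
    }
}
```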
Acegikmo, as far as tutorials go, I have some thoughts.
A tool like SF lowers the barrier to entry considerably for people who want to make amazing effects but lack graphics and math knowledge.
I think SF presents a great opportunity for teaching some of the more advanced concepts through sample node setups with step-by-step explanations.
Tutorials like that UDK water one are useless for really learning the concepts needed to produce the result; it even says at one point "the image speaks for itself"... really? No comments, just a network of various math operations.
Shader Forge Beta 0.20 now released:
• Added node: Remap (Same as Remap Range, but with input connectors instead of constants)
• Added node: Depth Blend (Requires a camera with depthTextureMode set to Depth or DepthNormals)
• Added node: Scene Depth (Requires a camera with depthTextureMode set to Depth or DepthNormals)
• Fixed a bug where hotkeys didn't work on Windows (Ctrl+C/V/X/D)
• You can now use the MIP inputs of textures and cubemaps when on the OpenGL platform (note that this will disable the instruction counter)
• The Scene Color node is now affected by Refraction
• Fixed a bug where the multi-add/multiply/min/max nodes did invalid typecasting
• Fixed a bug where the rotator node speed input expected a Vector2
• Fixed a bug where the geometry normals sometimes weren't normalized, even though the quality setting told it to
• Renamed node: "Constant Clamp" is now known as "Clamp (Simple)"
• Renamed node: "Constant Lerp" is now known as "Lerp (Simple)"
• Renamed node: "Remap Range" is now known as "Remap (Simple)"
• Normal map negative values are no longer visualized as positive
• The image on the Main node now looks better in the light Unity skin
Yeah! Looking fantastic. I will be picking this up to use at home as soon as I have the money. And maybe once I'm done with my current project at work, I will suggest we use Shader Forge for the next one.
Also, the automatic screenshot feature is now working! It's not completely done, I have yet to add buttons in the interface to take the screenshot, but here's how an auto-screenshot looks now:
Ah yeah I saw that! I am about to purchase it although I just wanted to ask a quick question. Is it compatible with Unity Mobile? Or would anything I create destroy performance? I'm mainly working on mobile projects and I would see a great use for this.
Got this yesterday, been testing like mad today. Seems like a wonderful tool, though I'll have to spend some time getting properly into it. Looking forward to more (and more elaborate) documentation and more tutorials. I'm an illustrator first and foremost, so a lot of the current tooltips are so much gobbledygook to me right now.
But apart from beginner's problems, I am loving it. Keep up the fantastic work.
Shader Forge Beta 0.21 / 0.22 now released:
There's now a button available in the top left corner of the shader preview, to take a screenshot of the whole node tree
Fixed a bug where some setups leading to the UV channel of texture nodes would break
Fixed a bug where normal maps didn't respect the normal quality
Fixed a bug where transparent objects caused depth issues
Fixed a bug where Unlit shaders had to have a Light Color node in order to use Light Attenuation
There's now a script included for enabling depth rendering on cameras (a sketch of what such a script might look like follows below)
Updated readme.txt
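For reference, the Depth Blend and Scene Depth nodes need the camera to render a depth texture, and enabling that boils down to one line; a minimal sketch of such a script (the included one may differ):

```csharp
using UnityEngine;

// Minimal sketch: make the attached camera render a depth texture,
// which the Scene Depth / Depth Blend nodes sample from.
[RequireComponent(typeof(Camera))]
public class EnableDepthTexture : MonoBehaviour
{
    void Start()
    {
        // Use DepthTextureMode.DepthNormals instead if you also need
        // per-pixel scene normals.
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
    }
}
```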
Question: let's say I want to draw sprites that appear 3-dimensional. For that I'd use a normal map and a map storing depth information, to get the virtual world-space location for every pixel I draw, with a fixed perspective.
Is it possible to create a shader that writes the correct depth information with your tool, and if so, what would be the recommended way of doing that?
I don't think you can actually write depth to the depth buffer without the geometry being in that actual location, so you'll be limited to a per-vertex depth offset, in which case you'll at least be able to use vertex offset.
There should be a way of doing that; it has been done by this guy, for example: https://www.youtube.com/watch?v=-Q6ISVaM5Ww
Check time index 3:54. You can see he writes height-map information to the depth buffer to let the 2D sprites intersect in 3D space. From what I understood from my short conversation with him, he renders the distance from the sprite location along the (fixed) view direction into those height maps, and uses that information to recalculate the world location of each pixel inside the shader. I just don't understand how he writes that world-location info back to the depth buffer to achieve the intersecting. Any ideas? I know this is a bit off-topic, but I'm hoping it could actually lead to an idea for an additional node that could be used to achieve this effect in games.
There are multiple ways. I haven't used the animation editor for animating material parameters, but you should be able to use it. If not, you can animate them with code, using material.SetFloat(...) etc.
Hey there! So for a few days now I've been playing around with Shader Forge, and I'm really excited for it to be in Unity because it makes my life so much easier. I've been trying to translate some shaders I made in Unreal into Shader Forge, and I'm having a really hard time replacing the "BumpOffset" node. Here is a quick example of what I'm trying to get it to do, if anyone has any ideas on how I could go about translating it.
If this is the wrong place to ask something like this, a redirection would be lovely as well. Thanks.
It seems like even though I can set keyframes, they zero out. Is there a way to make a material instance for multiple objects (let's say that there are two materials in the same scene where their parameters are being animated independently)?
Guess you have to animate it in code then!
If you use renderer.material.Set... Unity will instantiate the material automatically.
If you use renderer.sharedMaterial.Set... Unity will tweak the original material, shared by all your objects using it.
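A minimal sketch of the code route, assuming a float property named "_Glow" on the material (the property name and the ping-pong motion are just examples):

```csharp
using UnityEngine;

// Minimal sketch: animates a float property on this object's material.
public class MaterialAnimator : MonoBehaviour
{
    public float speed = 1f;
    Material mat;

    void Start()
    {
        // .material instantiates a per-object copy, so each object animates
        // independently; use .sharedMaterial instead to drive the original
        // asset, affecting every object that uses it.
        mat = GetComponent<Renderer>().material;
    }

    void Update()
    {
        mat.SetFloat("_Glow", Mathf.PingPong(Time.time * speed, 1f));
    }
}
```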
Shader Forge Beta 0.23 now released:
• You can now zoom the node view
• Added node: Normal Blend. This node will combine two normals; a detail normal perturbed by a base normal (a sketch of the idea follows after this changelog)
• Added node: Blend. Photoshop-style blending with 18 blending modes:
- Darken, Multiply, Color Burn, Linear Burn
- Lighten, Screen, Color Dodge, Linear Dodge
- Overlay, Hard Light, Vivid Light, Linear Light, Pin Light, Hard Mix
- Difference, Exclusion, Subtract, Divide
• The Component Mask node will now display outputs per-channel, in addition to the combined one. For example, if you mask out RGB, you will now get 4 connectors, RGB, R, G and B
• There’s now a checkbox under settings, where you can enable/disable node-preview updating, in case you’re having performance issues in large node trees
• Fixed a bug where you couldn’t use depth nodes in the refraction input
• Fixed a bug where the Length node output the wrong component count
• Fixed a bug where objects in the node view could be selected through the side panels
• Fixed a bug where the screenshot button overlapped the toolbar when expanding settings
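On the Normal Blend node: I won't promise this is the exact formula it generates, but a common way to combine a base and a detail normal is "whiteout" blending: add the tangent-space XY perturbations, multiply the Z components, and renormalize. In C# form, for illustration:

```csharp
using UnityEngine;

public static class NormalBlendSketch
{
    // "Whiteout" normal blending: sum the tangent-space XY perturbations,
    // multiply Z, then renormalize. Inputs are unpacked, unit-length normals.
    public static Vector3 Blend(Vector3 baseNormal, Vector3 detailNormal)
    {
        return new Vector3(
            baseNormal.x + detailNormal.x,
            baseNormal.y + detailNormal.y,
            baseNormal.z * detailNormal.z).normalized;
    }
}
```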
The main problem I have with the UDK material editor is:
no ability to control multi-pass materials
no access to deferred layers / ability to affect deferred layers.
If Shader Forge can let me specify how to use each pass, what to affect in each pass, and let me grab hold of and create full-screen deferred effects (not just post effects), I'll gladly purchase it and switch to Unity.
You can't set up multi-pass shaders manually yet, but I have very loosely thought about implementing it, though it's a *huge* topic that would require loads of restructuring.
As for deferred Gbuffers and such, it's also a highly advanced topic.
Some things border on being too low-level for node-based editing. If you're already feeling the need to make multi-pass shaders and custom deferred pipelines, you're most likely better off learning to code shaders by hand.
That said, I have thought about that too, but I'm waiting until Unity's new rendering tech comes out, as it will alter the deferred pipeline, and it might, just might, get implemented.
While I understand you entirely, I do think there is no disadvantage to giving high-powered tools a broader audience.
The original idea of node-based materials was to give artists, not coders, more control over the look of a material. The intention was simply that artists could hook up textures nice and quick, and use a few nodes to make the material shiny. But because of the level of access these tools allow, artists started playing around with stuff they would have had no way of even knowing about in the previous generation of engines. Stuff like this starts showing up, and even more complex materials appear. Imagine if you gave Mr. Artist even lower-level access, with more powerful tools, instead of telling Mr. Artist to "learn to code."
Just think about it. Artists generally aren't coders; our brains don't operate in a way where lines of text combine to form a living, breathing material. If our brains DID operate like that, we would probably not be artists. The node-based structure is so powerful because artists (like myself) can understand and follow the procedure without having to worry about declaring variables or proper syntax.
Sorry if I came across as telling you to go code instead of using a node-based approach, it wasn't my intention at all! SF was definitely designed to be artist-friendly, with lots of depth!
My point is, however, that these are *very* advanced topics.
"there is no disadvantage to giving high-powered tools a more broad audience"
The disadvantage is the time spent making it. That time could instead be spent on features that would be useful to a larger audience, on improving the workflow, on documentation, and so forth.
When I started working on SF, I did an on-paper design of a whole system that actually used a multi-pass and vert/frag split, in order to get as much control as possible.
However, the more I researched, the more I realized that about 98% of shaders don't need custom multi-pass control, so I asked myself, why should I spend about 40% of the development time, working on a feature for about 2% of the audience?
So, in short, I'd love to make SF super deep, but I currently don't know whether the time spent on it would be worth it. But that doesn't mean it won't ever happen
Replies
tl;dr I went a bit too nuts :P
It's going very well so far!
Shader Forge Beta 0.18 now released:
You can now use Shader Forge in Unity 4.2.0
New node: Vector Projection
New node: Vector Rejection
The main menu now has a much more visible out-of-date notification, when a new version is out
Fixed a bug where properties didn't show up when connected to some inputs
Fixed a bug where deleting nodes didn't update the Main input availability status
A bug where shaders didn't save when using the built-in Perforce version control, may have been fixed now
Click to see all changelogs
(Don't forget to delete the old Shader Forge before updating to this one!)
Take a look at this tutorial for UDK (don't mind that it's unreadable):
http://minimin0425.blogspot.com/2013/05/udk-water-material-tutorial.html
It brings up a few technical aspects and also looks really good. I'd tackle it in a similar way, if possible.
This time it's on vertex color blending and UV tiling:
[ame="http://www.youtube.com/watch?v=2ZNJ_KytrE4"]Shader Forge - Vertex color blending & UV tiling - YouTube[/ame]
[ame="http://www.youtube.com/watch?v=EjCXwV0YYdU"]Shader Forge - Custom Blinn-Phong - YouTube[/ame]
I've been experimenting with a screenshot feature today - working out the optimal auto-placement of the 3D preview in the node tree
Looks like Manhattan distance might be the best norm to go for
(It doesn't avoid connection lines yet though, just the nodes)
Also, do you all think I should put up a blog?
Are you planning to make a tutorial on creating detail normal maps?
It would be really great!
Sure, I could do that
I have however planned to make a node specifically for combining normal maps, so I might do it after that is done.
Also, because of my tendency to post things at the bottom of previous pages: the automatic screenshot feature mentioned above is now working, and it's already out in case you didn't know.
Thanks.
Works okay! But you have to be careful
http://acegikmo.com/shaderforge/faq/?Q=mobileopt#mobileopt
Thanks! Yeah, it's just about being sensible with it.
That would be the Parallax node
http://acegikmo.com/shaderforge/nodes/?search=Parallax
Check out the parallax example shader if you want a usage reference!
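For reference, UDK's BumpOffset is plain parallax mapping: shift the UVs along the tangent-space view direction by an amount driven by a height sample. The math looks roughly like this (a C# sketch mirroring UnityCG's ParallaxOffset; the names are mine):

```csharp
using UnityEngine;

public static class ParallaxSketch
{
    // Classic parallax / bump offset. 'height' is the heightmap sample (0..1),
    // 'scale' is the artist-tuned depth, 'tangentViewDir' is the view direction
    // in tangent space. Add the returned offset to your UVs.
    public static Vector2 ParallaxOffset(float height, float scale, Vector3 tangentViewDir)
    {
        float h = (height - 0.5f) * scale;            // center on the reference plane
        Vector2 dir = new Vector2(tangentViewDir.x, tangentViewDir.y)
                      / (tangentViewDir.z + 0.42f);   // bias to tame grazing angles
        return dir * h;
    }
}
```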
Possibly! There are no plans for it at the moment, but it could happen if people want it to