
Tuesday, April 05, 2016

Performant Stylized Shaders in Unity - Shader Basics

This tutorial is for people who wish to know the basics of fast shaders, and are already somewhat familiar with programming and working with 3D content!

A shader is a piece of code that runs on your graphics card, and determines how triangles are drawn on the screen! Typically, a shader is composed of a vertex shader, for placing vertices in their final location on the screen, and a pixel shader (generally referred to as a 'fragment shader') to choose the exact color on the screen! While shaders can often look very confusing at first glance, they're ultimately pretty straightforward, and can be a really fun blend of math and art!

Creating your own shaders, while not particularly necessary with all the premade ones running about, can lend your game a really unique and characteristic look, or enable you to achieve beautiful and mesmerizing effects! In this series, I’ll be focusing on shaders for mobile or high-performance environments, with a generally stylized approach. Complicated lighting equations are great and can look really beautiful, but are often out of reach when targeting mobile devices, or for those without a heavy background in math!

In Unity, there are two types of shaders: vertex/fragment shaders, and surface shaders! Surface shaders are not normal shaders; they're actually "magic" code! They take what you write, and wrap it in a pile of extra shader code that you never really get to see. It generally works spectacularly well, and ties straight into all of Unity's internal lighting and rendering systems, but this can leave a lot to be desired in terms of control and optimization! For these tutorials, we'll stick to the basic vertex/fragment shaders, which are generally /way/ better for fast, un-lit, or stylized graphics!

You can create a simple vertex/fragment shader in Unity by right clicking in your Project window->Create->Shader->Unlit Shader! It gives you most of the basic stuff that you need to survive, but here I'm showing an even more stripped down shader, without any of the Unity metadata attached! This won't compile by itself, since it does need the metadata in order to work, but if you put it in between the CGPROGRAM and ENDCG tags, you should be good to go! (Check here for how the full .shader file should look!)
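If you just want a rough idea of what that metadata wrapper looks like, here's a minimal sketch (the shader name and tags here are just placeholders), with the code below going between the CGPROGRAM and ENDCG tags:

Shader "Unlit/SimpleColor" {
 SubShader {
  Tags { "RenderType"="Opaque" }
  Pass {
   CGPROGRAM
   // ...the vertex/fragment code below goes here...
   ENDCG
  }
 }
}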

#pragma vertex   vert
#pragma fragment frag

#include "UnityCG.cginc"

struct appdata {
 float4 vertex :POSITION;
};
struct v2f {
 float4 vertex :SV_POSITION;
};

v2f vert (appdata data) {
 v2f result;
 result.vertex = mul(UNITY_MATRIX_MVP, data.vertex);
 return result;
}

fixed4 frag (v2f data) :SV_Target {
 return fixed4(0.5, 0.8, 0.5, 1);
}


Breaking it down:
#pragma vertex   vert
#pragma fragment frag

The first thing happening here is that we tell CG we have a vertex shader named vert, and a pixel/fragment shader named frag! It then knows to pass data to these functions automatically when rendering the 3D model. The vertex shader will be called once for each vertex on the mesh, and the data for that vertex is passed in. The fragment shader will be called once for each pixel on the triangles that get drawn, and the data passed in will be values interpolated from the 3 vertex shader results that make up its triangle!

struct appdata {
 float4 vertex :POSITION;
};
struct v2f {
 float4 vertex :SV_POSITION;
};

One of the great things about shaders is that they're super customizable, so we can specify exactly what data we want to work with! All function arguments and return values have 'semantics' tags attached to them, indicating what type of data they are. The :POSITION semantic indicates the vertex position from the mesh, just as :TEXCOORD0, :TEXCOORD1, :COLOR0 would indicate texture coordinate data for channels 0 and 1, and color data for the 0 channel. You can find a full list of semantics options here! These structures are then used as arguments and return values for the vertex and fragment shaders!
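For example (just an illustration, not part of this shader), a pair of structs that also pulls in UVs and vertex colors might look like this:

struct appdata {
 float4 vertex :POSITION;  // vertex position from the mesh
 float2 uv     :TEXCOORD0; // first texture coordinate channel
 float4 color  :COLOR0;    // vertex color
};
struct v2f {
 float4 vertex :SV_POSITION; // final screen position
 float2 uv     :TEXCOORD0;   // UV passed along to the fragment shader
 float4 color  :COLOR0;      // vertex color, interpolated across the triangle
};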

v2f vert (appdata data) {
 v2f result;
 result.vertex = mul(UNITY_MATRIX_MVP, data.vertex);
 return result;
}

The vertex shader itself! We get an appdata structure with the mesh vertex position in it, and we need to transform it into 2D screen coordinates that can then be easily rendered! Fortunately, this is really easy in Unity. UNITY_MATRIX_MVP is a value defined by Unity that describes the Model->View->Projection transform matrix for the active model and camera. Multiplying a vector by this matrix will transform it from its location in the source model (Model Space), into its position in the 3D world (World Space). From there, it’s then transformed to be in a position relative to the camera view (View Space), as though the camera was the center of the world. Then it’s projected onto the screen itself as a 2D point (Screen Space)!
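Just as a sketch (you don't need this in the shader above, it gives the same result as the single MVP multiply), that chain written out stage by stage might look like this, using Unity's built-in _Object2World, UNITY_MATRIX_V, and UNITY_MATRIX_P matrices:

v2f vert (appdata data) {
 v2f result;
 float4 worldPos = mul(_Object2World,  data.vertex); // Model Space -> World Space
 float4 viewPos  = mul(UNITY_MATRIX_V, worldPos);    // World Space -> View Space
 result.vertex   = mul(UNITY_MATRIX_P, viewPos);     // View Space  -> projected screen position
 return result;
}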



The MVP matrix is a combination of those three individual transforms, and it can often be useful to do them separately or manually for certain effects! For example, if you want a wind effect that varies based on the vertex's location in the world, you would need the world position! You could do something like this instead:

v2f vert (appdata data) {
 v2f result;
 
 float4 worldPos = mul(_Object2World, data.vertex);
 worldPos.x += sin((worldPos.x + worldPos.z) + _Time.z) * 0.2;
 
 result.vertex = mul(UNITY_MATRIX_VP, worldPos);
 return result;
}


Which transforms the vertex into world space with the _Object2World matrix, does a waving sin operation on it, and then transforms it the rest of the way through the View and Projection matrices using UNITY_MATRIX_VP! For a list of more built-in shader variables that you can use (like _Time!) check out this docs page!

fixed4 frag (v2f data) :SV_Target {
 return fixed4(0.5, 0.8, 0.5, 1);
}

And last, there's the fragment shader! This takes the data from the vertex shader, and decides on a Red, Green, Blue, Alpha color value. Alpha being transparency, but we'll need to dig through Unity's metadata for that! In this case, we're just returning a simple color value, so it's a solid color throughout! Here, you may notice the return value is a fixed4 rather than a structure, but note that it still uses the :SV_Target semantic for the return type at the end of the function definition! If you're wondering about the difference between float4 and fixed4, a float4 is four full 32-bit floats (128 bits in total), while a fixed4 is four low-precision values (closer to 32 bits in total), more on that later, but we only really need to hand low-precision color data back to the graphics card anyhow!

Again though, we can do cool math with this too! In this case, we just attach the pixel's x and y screen position to the R and G channels of the color!

fixed4 frag (v2f data) :SV_Target {
 return fixed4(
  (data.vertex.x/_ScreenParams.x)*1.5, 
  (data.vertex.y/_ScreenParams.y)*1.5, 
  0.5, 
  1);
}

So that’s the basics of the syntax for creating shaders in Unity! You can do a fair bit with this already, but the real fun bits come in when you tie into Unity’s inspector, using textures for color and data, and a bit of graphics specific math! We’ll get into that next time, but for now, here are the final shader files with their Unity metadata wrapper which were used to create the screenshots you see here!

Sunday, June 16, 2013

Alpha test-like results using regular alpha blend.

So it turns out that using alpha test on mobile hardware can be a rather expensive thing. Technically, so is alpha blending, but hey, transparency is kinda important! I'd just like to present a really simple method here for obtaining alpha test-like results while still using regular old alpha blending.

Edges on an alpha-test material ("Unlit/Transparent Cutout" from Unity)

Alpha test edges are beautifully crisp, and extremely desirable in many cases; try telling one of your artists they can't use it for the project, see how that goes! Here's an example with a standard alpha blend:

Alpha blend, eww, gross ("Unlit/Transparent" from Unity)

There's really just no substitute for that crisp edge, but fortunately, you don't have to settle for one! If you think about it, it's really just a sharp edge where the alpha goes straight from 0.0->1.0; if we can replicate that, then we should be all set! Math to the rescue, and simple math at that!

Using the plain old y = mx + b line equation, we can create a graph that does exactly that! With a super steep slope, and a small offset to x, take a look at the graph for y = 10,000(x-0.5):

I love Google

So check that out! Looks pretty much like an on/off switch to me; all that's missing is to clamp it between 0.0 and 1.0 with a call to saturate, and we should be all set! We can attach that 0.5 to a variable, and expose it to users as a cutoff slider.
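In shader terms, that boils down to one line (the same line you'll see in the full shader below), where _Cutoff is the exposed slider value:

float alpha = saturate(10000 * (color.a - _Cutoff));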

Custom alpha blend shader with our awesome math function.

I'm honestly not sure I can actually tell the difference here! And that's pretty awesome. The code for it is super simple, but the results look great! I haven't actually tested all this on mobile just yet, but if what I hear is correct, this could be a better solution than alpha testing. Although, I honestly wouldn't be surprised to see that change from device to device. If people are interested, I can try and have some stats on that in the future!

Here's the code, in CG, in the format of a Unity shader, or you can download it here.


Shader "Custom/AlphaBlend Cutout" {
 Properties {
  _Tex   ("Texture (RGB)", 2D   ) = "white" {}
  _Cutoff("Alpha cutoff",  Float) = 0.5
 }
 SubShader {
  Tags { "Queue"="Transparent" "RenderType"="Transparent"  }
  Blend SrcAlpha OneMinusSrcAlpha

  LOD 200
  
  Pass {
   CGPROGRAM
   #pragma vertex   vert
   #pragma fragment frag
   #include "UnityCG.cginc"

   sampler2D _Tex;
   float     _Cutoff;

   struct VS_OUT {
    float4 position : SV_POSITION;
    float2 uv       : TEXCOORD0;
   };

   VS_OUT vert (appdata_full input) {
    VS_OUT result;
    result.position = mul (UNITY_MATRIX_MVP, input.vertex);
    result.uv       = input.texcoord;

    return result;
   }

   half4 frag (VS_OUT input) : COLOR {
    half4 color = tex2D(_Tex, input.uv);
    float alpha = saturate(10000*(color.a-_Cutoff));
    return half4(color.rgb,alpha);
   }
   ENDCG
  }
 }
}



Sunday, March 25, 2012

Basic XNA Post Shader Tutorial

So here's going to be a really simple, basic introduction to post shaders in XNA! If you aren't familiar with what a post shader is, try thinking of the popular special effects like Bloom, or Motion Blur. These are special effects that get done after the entire screen has been completely drawn! A shader or two is then used to manipulate the resulting image to do something cool.

In this example, I will show you how to do a simple single pass post shader using basic color information. More advanced techniques will use more than just color information, and use multiple layers of post shaders. So for this example, we're just going to invert the colors of the screen!

I'm going to start with a basic project that draws a simple model to the screen, you can follow this tutorial here, or just download this project as a starting point. If you just need a 3D model to get you started, you're also welcome to use these: glaive model, glaive texture.


These are the things that we'll need to do to get this post stuff working:

  • Make a post shader
  • Load it
  • Create an off-screen surface to draw to
  • Draw to the off-screen surface
  • Draw the off-screen surface using the post shader
And fortunately, none of these bits are particularly hard. One or two might be a little arcane, but that's what webpage bookmarks and copy/paste are for ;)

We'll start at the top with defining our variables, so first! Define these in your Game1.cs, right below the GraphicsDeviceManager and SpriteBatch.

Effect         postShader;
RenderTarget2D offscreenSurface;

So the Effect will store our post shader; it's essentially the exact same thing as any other shader you might deal with, but the devil is in the details. You'll see exactly what I'm talking about when you get to the .fx file for it! The RenderTarget2D holds our off-screen surface; it's basically a Texture2D that's been specialized for having things rendered to it. You can also use it for things like mirrors or reflections on water, or even TV screens and security cameras!

Next would be initializing them. So in the Game1 LoadContent method, add these lines:

postShader       = Content.Load<Effect>("PostShader");
offscreenSurface = new RenderTarget2D(graphics.GraphicsDevice,
                                      graphics.GraphicsDevice.Viewport.Width,
                                      graphics.GraphicsDevice.Viewport.Height,
                                      false,
                                      graphics.GraphicsDevice.DisplayMode.Format,
                                      DepthFormat.Depth24);

We'll add in the PostShader.fx file shortly, but check out that constructor for RenderTarget2D! You can see there that we're specifying the viewport width and height. This tells it how large its backing texture will be. The values we're specifying here are exactly the same size as the window, but theoretically, we could make it smaller, or larger! Making it smaller could even be thought of as a performance optimization, as this texture will eventually get stretched back up to the size of the window anyhow (I saw this as an option once in Unreal Tournament III, pretty spiffy!).

False there specifies no mip-maps, which would be pointless anyhow, since we aren't zooming in and out from the window. If you don't know about mip-maps, go learn about them, they're cool =D

The last two arguments specify to use the same color format as the screen, and a 24-bit depth (z) buffer.

The remaining bit we need to code in C# is also pretty easy! At the top of the Draw method, add this line:

graphics.GraphicsDevice.SetRenderTarget(offscreenSurface);

Which should then be followed by whatever draw code you may have. What this line does is tell the graphics card to draw everything from here on out to our specific off-screen surface! Instant awesome as far as I'm concerned~

Later in the Draw method, after base.Draw(gameTime), add in this remaining code:

graphics.GraphicsDevice.SetRenderTarget(null);

spriteBatch.Begin(SpriteSortMode.Immediate, 
                  BlendState.Opaque, 
                  null, null, null, 
                  postShader);
spriteBatch.Draw (offscreenSurface, 
                  new Rectangle(0, 
                                0, 
                                graphics.GraphicsDevice.Viewport.Width, 
                                graphics.GraphicsDevice.Viewport.Height), 
                  Color.White);
spriteBatch.End  ();

graphics.GraphicsDevice.DepthStencilState = DepthStencilState.Default;

The first line there clears the render target, letting the graphics card render to the screen again, instead of our off-screen surface.

The spriteBatch Begin allows us to set the card up for 2D drawing, and also lets us specify the shader as its last argument! This is the cool part, where we let the SpriteBatch do all the heavy lifting for us. Theoretically, we could create a quad with 3D geometry, set up a camera and a whole pile of things, and draw our off-screen surface manually, but I've settled on this as the easiest way to get it done.

Drawing the off-screen surface is now exactly like drawing a regular 2D image onto the screen! Nothing complicated there =D

Then all the way at the end, we reset the DepthStencilState, as the SpriteBatch will change it for us when Begin is called. If we don't reset it, then we get all sorts of fun drawing artifacts in our 3D geometry.

The only thing that remains now is the shader! So right click on your content project and Add->New Item->Effect File, and call it "PostShader". Then completely replace the code there with this:

sampler TextureSampler : register( s0 );

float4 PixelShaderFunction(float2 UV : TEXCOORD0) : COLOR0
{
 float4 color  = tex2D(TextureSampler, UV);

 float4 result = float4(1, 1, 1, 1);
 result.xyz -= color.xyz;

 return result;
}

technique DefaultTechnique
{
 pass Pass1
 {
  PixelShader = compile ps_2_0 PixelShaderFunction();
 }
}

As you can see, this is where some strange things happen. There is no Vertex Shader in this effect file, just a Pixel Shader! We also aren't quite defining the sampler, merely pointing it to a register. Since we're taking advantage of the SpriteBatch for drawing our plane, we don't have to worry about those things. Our shader is almost like an override method: it just plops in and changes the behavior of the Pixel Shader only.

In this particular shader, the PixelShaderFunction gets called once for every pixel of the image being drawn, and receives that pixel's UV coordinate. The tex2D function then takes the UV coordinate and the sampler, and looks up the appropriate color, which we can then do whatever we like to =D


Now you can tweak it and play with it however you feel like!

You can download the completed tutorial project (with comments!) here.

Monday, July 20, 2009

A Quick Note on Shaders

A vertex shader function must always return an initialized value tagged as POSITION0, but then, the pixel shader function can't touch it. It throws an obscure error, along the lines of "invalid input semantic 'POSITION0'".

Learned that the hard way :/

If you really need to access the position, you need to copy it into a variable tagged TEXCOORD0, and then use that. It makes sense, really, if you think about it... Positions are for vertex shaders, textures are for the pixel shaders! Ish, oh well.
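For illustration, here's a rough sketch of that workaround (names like WorldViewProjection and VS_OUT are just placeholders, set them up however your effect normally does):

float4x4 WorldViewProjection; // assumed to be set as an effect parameter from the C# side

struct VS_OUT
{
 float4 position  : POSITION0; // required output, but the pixel shader can't read it
 float4 screenPos : TEXCOORD0; // a copy that the pixel shader IS allowed to read
};

VS_OUT VertexShaderFunction(float4 position : POSITION0)
{
 VS_OUT result;
 result.position  = mul(position, WorldViewProjection); // transform to clip space
 result.screenPos = result.position;                    // stash a readable copy

 return result;
}

float4 PixelShaderFunction(float4 screenPos : TEXCOORD0) : COLOR0
{
 // perspective divide, then remap from -1..1 to 0..1 just to visualize the position
 float2 remapped = (screenPos.xy / screenPos.w) * 0.5 + 0.5;
 return float4(remapped, 0, 1);
}

technique DefaultTechnique
{
 pass Pass1
 {
  VertexShader = compile vs_2_0 VertexShaderFunction();
  PixelShader  = compile ps_2_0 PixelShaderFunction();
 }
}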