Tuesday, April 05, 2016

Performant Stylized Shaders in Unity - Shader Basics

This tutorial is for people who wish to know the basics of fast shaders, and are already somewhat familiar with programming and working with 3D content!

A shader is a piece of code that runs on your graphics card, and determines how triangles are drawn on the screen! Typically, a shader is composed of a vertex shader, for placing vertices in their final location on the screen, and a pixel shader (generally referred to as a ‘fragment shader’) to choose the exact color on the screen! While shaders can often look very confusing at a first glance, they’re ultimately pretty straightforward, and can be a really fun blend of math and art!

Creating your own shaders, while not particularly necessary with all the premade ones running about, can lend your game a really unique and characteristic look, or enable you to achieve beautiful and mesmerizing effects! In this series, I’ll be focusing on shaders for mobile or high-performance environments, with a generally stylized approach. Complicated lighting equations are great and can look really beautiful, but are often out of reach when targeting mobile devices, or for those without a heavy background in math!

In Unity, there are two types of shaders: vertex/fragment shaders, and surface shaders! Surface shaders are not normal shaders; they’re actually “magic” code! They take what you write, and wrap it in a pile of extra shader code that you never really get to see. It generally works spectacularly well, and ties straight into all of Unity’s internal lighting and rendering systems, but this can leave a lot to be desired in terms of control and optimization! For these tutorials, we’ll stick to the basic vertex/fragment shaders, which are generally /way/ better for fast, unlit, or stylized graphics!

You can create a simple vertex/fragment shader in Unity by right clicking in your Project window->Create->Shader->Unlit Shader! It gives you most of the basic stuff that you need to survive, but here I’m showing an even more stripped down shader, without any of the Unity metadata attached! This won’t compile by itself, it does need the metadata in order to work, but if you put this in between the CGPROGRAM and ENDCG tags, you should be good to go! (Check here for how the full .shader file should look!)

#pragma vertex   vert
#pragma fragment frag

#include "UnityCG.cginc"

struct appdata {
 float4 vertex :POSITION;
};
struct v2f {
 float4 vertex :SV_POSITION;
};

v2f vert (appdata data) {
 v2f result;
 result.vertex = mul(UNITY_MATRIX_MVP, data.vertex);
 return result;
}

fixed4 frag (v2f data) :SV_Target {
 return fixed4(0.5, 0.8, 0.5, 1);
}


Breaking it down:
#pragma vertex   vert
#pragma fragment frag

The first thing happening here is that we tell CG we have a vertex shader named vert, and a pixel/fragment shader named frag! It then knows to pass data to these functions automatically when rendering the 3D model. The vertex shader will be called once for each vertex on the mesh, with that vertex’s data passed in. The fragment shader will be called once for each pixel on the triangles that get drawn, and the data passed in will be values interpolated from the three vertex shader results that make up its triangle!
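That interpolation step is worth seeing in action. Here’s a minimal sketch in plain Python (not shader code) of the barycentric blend the GPU performs on each vertex shader output before it reaches the fragment shader; the function name and weights here are just illustrative:

```python
# Sketch (plain Python, not shader code) of how the GPU blends the three
# vertex shader results for a fragment, using barycentric weights that
# describe where the fragment sits inside its triangle.

def interpolate(v0, v1, v2, w0, w1, w2):
    # Barycentric blend of three per-vertex values; w0 + w1 + w2 == 1.
    return tuple(a * w0 + b * w1 + c * w2 for a, b, c in zip(v0, v1, v2))

# Three vertex colors (RGB), as if output by the vertex shader:
red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# A fragment sitting exactly on the first vertex gets that vertex's value:
print(interpolate(red, green, blue, 1.0, 0.0, 0.0))  # (1.0, 0.0, 0.0)

# A fragment at the triangle's center gets an even blend of all three:
print(interpolate(red, green, blue, 1/3, 1/3, 1/3))
```

The same blending happens to every field in the v2f struct, which is why texture coordinates and colors flow smoothly across a triangle’s surface.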

struct appdata {
 float4 vertex :POSITION;
};
struct v2f {
 float4 vertex :SV_POSITION;
};

One of the great things about shaders is that they’re super customizable, so we can specify exactly what data we want to work with! All function arguments and return values have ‘semantics’ tags attached to them, indicating what type of data they are. The :POSITION semantic indicates the vertex position from the mesh, just as :TEXCOORD0 and :TEXCOORD1 would indicate texture coordinate data for channels 0 and 1, and :COLOR0 would indicate color data for the 0 channel. You can find a full list of semantics options here! These structures are then used as arguments and return values for the vertex and fragment shaders!

v2f vert (appdata data) {
 v2f result;
 result.vertex = mul(UNITY_MATRIX_MVP, data.vertex);
 return result;
}

The vertex shader itself! We get an appdata structure with the mesh vertex position in it, and we need to transform it into 2D screen coordinates that can then be easily rendered! Fortunately, this is really easy in Unity. UNITY_MATRIX_MVP is a value defined by Unity that describes the Model->View->Projection transform matrix for the active model and camera. Multiplying a vector by this matrix will transform it from its location in the source model (Model Space), into its position in the 3D world (World Space). From there, it’s then transformed to be in a position relative to the camera view (View Space), as though the camera was the center of the world. Then it’s projected onto the screen itself as a 2D point (Screen Space)!
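To make that chain of spaces concrete, here’s a tiny sketch in plain Python (not shader code) showing that one multiply by the combined MVP matrix gives the same result as applying the Model, View, and Projection matrices one at a time. The matrices here are simple translations standing in for the real ones (a real Projection matrix would do perspective, not translation):

```python
# Sketch: the MVP matrix is just Projection * View * Model, so
# mul(MVP, v) equals mul(P, mul(V, mul(M, v))).

def mat_mul(a, b):
    # Standard 4x4 row-major matrix multiply.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    # Multiply a 4x4 matrix by a 4-component column vector.
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(x, y, z):
    # Simple stand-in for a real M/V/P matrix.
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

M = translation(5,  0,  0)  # model -> world
V = translation(0, -2,  0)  # world -> view
P = translation(0,  0, 10)  # view  -> screen (a real one would project)

v = [1, 1, 1, 1]
step_by_step = mat_vec(P, mat_vec(V, mat_vec(M, v)))
combined     = mat_vec(mat_mul(mat_mul(P, V), M), v)
print(step_by_step == combined)  # True
```

Pre-combining the three matrices on the CPU is why the shader can do the whole transform with a single mul per vertex.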



The MVP matrix is a combination of those three individual transforms, and it can often be useful to do them separately or manually for certain effects! For example, if you want a wind effect that happens based on its location in the world, you would need the world position! You could do something like this instead:

v2f vert (appdata data) {
 v2f result;
 
 float4 worldPos = mul(_Object2World, data.vertex);
 worldPos.x += sin((worldPos.x + worldPos.z) + _Time.z) * 0.2;
 
 result.vertex = mul(UNITY_MATRIX_VP, worldPos);
 return result;
}


Which transforms the vertex into world space with the _Object2World matrix, does a waving sin operation on it, and then transforms it the rest of the way through the View and Projection matrices using UNITY_MATRIX_VP! For a list of more built-in shader variables that you can use (like _Time!) check out this docs page!

fixed4 frag (v2f data) :SV_Target {
 return fixed4(0.5, 0.8, 0.5, 1);
}

And last, there’s the fragment shader! This takes the data from the vertex shader, and decides on a Red, Green, Blue, Alpha color value. Alpha being transparency, but we’ll need to dig through Unity’s metadata for that! In this case, we’re just returning a simple color value, so it’s a solid color throughout! Here, you may notice the return value is a fixed4 rather than a structure, but note that it still uses the :SV_Target semantic for the return type at the end of the function definition! If you’re wondering about the difference between float4 and fixed4, it’s a matter of precision: a float4 is four full 32-bit floats (128 bits total), while a fixed4 is four much lower-precision values. More on that later, but the screen only stores around 8 bits per color channel anyway, so we only really need to hand 32 bits of color data back to the graphics card!

Again though, we can do cool math with this too! In this case, just attach the x and y axes to the R and G channels of the color!

fixed4 frag (v2f data) :SV_Target {
 return fixed4(
  (data.vertex.x/_ScreenParams.x)*1.5, 
  (data.vertex.y/_ScreenParams.y)*1.5, 
  0.5, 
  1);
}

So that’s the basics of the syntax for creating shaders in Unity! You can do a fair bit with this already, but the real fun bits come in when you tie into Unity’s inspector, using textures for color and data, and a bit of graphics specific math! We’ll get into that next time, but for now, here are the final shader files with their Unity metadata wrapper which were used to create the screenshots you see here!

Friday, May 16, 2014

ScriptableObjects in Unity and how/why you should use them!

I relatively recently discovered Unity's ScriptableObjects, and fell in love with them immediately. For some reason, you don't see them around a whole lot, yet they solve a number of important problems, like... How do you store data in Unity? I assure you, my early attempts were quite horrific.

The problem

Lets imagine you have a shooting game. Guns are an integral part of a game like this, and so you need lots of guns in your game! In theory, you could just prefab all your guns, and work with them that way, and in a simple world, this generally works pretty well! But in this game, you need to interact with these guns in a lot of different ways, selecting them, customizing them, stat tweaking them, admiring their detail, shooting them, etc. Having all that in a prefab becomes ridiculous quite quickly!

I did mention lots of guns, right?

Soo, the next intuitive leap is to do some sort of data file, XML, JSON, plain text, or whatever. The problem with this is zero tie-in with the Unity editor itself! At least, not without a substantial amount of extra work. That might be acceptable after enough work, but ScriptableObjects provide a significantly better alternative!

The solution

ScriptableObject is a class, very much like the MonoBehaviour, without a lot of the component specific items. It gets serialized, saved, and loaded automatically through Unity, as well as standard inspector window support! You can easily spot Unity using ScriptableObjects in the Edit->Project Settings menu, and other similar locations. Unfortunately, Unity didn't make them quite as simple to use; you really won't find anything mentioned about ScriptableObjects in the UI.

So how do we use 'em? Lucky for us, it's relatively trivial~
Step 1. Inherit from ScriptableObject
Step 2. Treat it like a MonoBehaviour without events.
Step 3. Tie it into the editor.

Tying it into the editor can be fun, but here is where you get some handy-dandy code =D Unity treats ScriptableObjects the same way it does everything else, so you have to create an asset file, and add it to the file system!

The code

This is a little function I use that will create an asset for any ScriptableObject that I throw at it. I usually keep it around in a tiny little utility file I call SOUtil.cs (click here to download it!) It looks like this:
#if UNITY_EDITOR
    public static void CreateAsset(System.Type aType, string aBaseName) {
        ScriptableObject style = ScriptableObject.CreateInstance(aType);

        string path = UnityEditor.AssetDatabase.GetAssetPath(UnityEditor.Selection.activeInstanceID);
        if (System.IO.Path.GetExtension(path) != "") path = System.IO.Path.GetDirectoryName(path);
        if (path                              == "") path = "Assets";

        string name = path + "/" + aBaseName + ".asset";
        int    id   = 0;
        while (UnityEditor.AssetDatabase.LoadAssetAtPath(name, aType) != null) {
            id  += 1;
            name = path + "/" + aBaseName + id + ".asset";
        }

        UnityEditor.AssetDatabase.CreateAsset(style, name);
        UnityEditor.AssetDatabase.SaveAssets();

        UnityEditor.EditorUtility.FocusProjectWindow();
        UnityEditor.Selection    .activeObject = style;
    }
#endif

As you can see, this is a pretty darned generic function. It creates an object, finds a path for it, gives it a default name, saves it, and then sets it as the user's focus! One thing to keep in mind when working with ScriptableObjects: you shouldn't create them using the 'new' keyword. Always use ScriptableObject.CreateInstance.

Ok, cool! That's the generic part, now, what about actually tying in specific objects? Well, here's a super simple object to give you a good idea about that.
public class SOString : ScriptableObject {
    public string stringValue = "";

#if UNITY_EDITOR
    const string editorMenuName = "String";
    [UnityEditor.MenuItem("GameObject/Create Scriptable Object/Create " + editorMenuName, false, 0  ), 
     UnityEditor.MenuItem("Assets/Create/Create Scriptable Object/"     + editorMenuName, false, 101)]
    public static void CreateAsset() {
        SOUtil.CreateAsset(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType, editorMenuName);
    }
#endif
}

Not that hard. A couple of menu tie-ins, for the GameObject menu, and the project window context menu, and then a call to our CreateAsset method, automatically picking the type through reflection, and providing a default asset name! Now you can drag and drop these little suckers all around your inspectors, just like any other GameObject or MonoBehaviour!

Editor code is darned cool.
Enjoy =D

Sunday, June 23, 2013

Quickly placing random objects in a landscape without overlap

For me, creating procedural content is one of the most awesome things ever! Despite having created it yourself, you're still never quite sure what you'll get this time. So today I'm going to discuss a problem that I often have: placing objects randomly in a scene, without having them overlap each other!

It's a small scene, but nothing's overlapping, not even with the path. Looks like the code works to me!

The easiest approach to creating a random landscape is to just launch into a for loop, and start assigning random coordinates! And that's often pretty great all by itself, but you'll get -tons- of overlap.

Sooo... let's just check if it overlaps with anything, and if it does, assign it a new random position? Great! What if there's no room, or very little room? You could easily spend a lot of loops trying to pick a good spot, or even worse, catch yourself in an infinite loop. It also takes an unpredictable duration, which is generally a bad idea.

The solution I've decided on isn't perfect, but it is a decent approximation, and it's fairly fast. The idea is to create a grid over the area, and use each cell in it to store the distance to the nearest object. If we're using a bounding circle to determine where an object will fit, then these values should instantly tell us whether or not a cell is a valid location!
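Here's a minimal sketch of that idea in Python. The class and method names are just illustrative, not the actual Placer API, but the core trick is the same: a circle of radius r fits at a cell only if the cell's stored distance is at least r.

```python
# Sketch of the distance-grid idea: each cell remembers how far it is
# from the edge of the nearest placed object.
import math

class DistanceGrid:
    def __init__(self, w, h):
        self.w, self.h = w, h
        # No objects yet, so every cell is infinitely far from one.
        self.dist = [[float("inf")] * w for _ in range(h)]

    def place_circle(self, cx, cy, r):
        # Update every cell with its distance to this circle's edge.
        for y in range(self.h):
            for x in range(self.w):
                d = math.hypot(x - cx, y - cy) - r
                self.dist[y][x] = min(self.dist[y][x], d)

    def fits(self, x, y, r):
        # A new circle of radius r fits if nothing is within r of here.
        return self.dist[y][x] >= r

grid = DistanceGrid(20, 20)
grid.place_circle(4, 4, 2)
print(grid.fits(4, 4, 2))    # False - right on top of the placed circle
print(grid.fits(15, 15, 2))  # True  - far away, plenty of room
```

Collecting every cell where fits() is true, and then picking one at random, gives you a placement in predictable time instead of looping until a random guess happens to land somewhere clear.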

ASCII Visualizations are the best.

As you can see here, we've got an ASCII gradient, to show distance from a single placed object, with 'x' marking the actual areas the object takes up. The data is actually stored as floats, so it's very simple to tell the actual distance. Here's the code I used to make it:

PlacerRandom random = new PlacerRandom(0);
Placer placer = new Placer(20, 20, random);
placer.PlaceCircle(4, 4, 2);
placer.DebugDistance();

You can find the Placer code at the bottom of this post. From here, it's pretty simple to pick out valid cells, and then choose one of those to spawn your object at. Using the placer's GetCircle method, you can pick out a random location that will fit a given radius.

This scene contains a path, and 4 objects. It looks a bit more confusing.

I also discovered the need to add paths through my world, so I wanted to be able to set lines that would be clear of stuff. Using the closest point on a line algorithm with a distance test, it's not hard to do exactly the same sort of thing with a line. Here's the code I used to create this scene:

PlacerRandom random = new PlacerRandom(0);
Placer placer = new Placer(20, 20, random);
placer.PlaceLine(0, 8, 20, 15, 1f);
   
float x = 0, y = 0;
float radius = 2;
for (int i = 0; i < 4; i++) {
 placer.GetCircle(radius, out x, out y);
 placer.PlaceCircle(x, y, radius);
}
placer.DebugDistance();

The speed for these is pretty excellent on small maps, for a 32x32 grid, getting and placing 100 circles was about 3ms on my machine. Unfortunately, this goes up pretty fast, with a 128x128 grid taking about 33ms for the same thing. Fortunately, it's not likely you'll be doing this sort of thing every frame, but if you're generating tiles as you move, having a hiccup might not be ideal either.

There's plenty of room for optimization, better math, different distance algorithms, silly things, but it's already pretty workable.

Oh, and don't forget, even after you've placed all those objects, that distance information can still be pretty handy! As you can see in my leading image, I've used it to do some fake shadow estimates. With a bit of tweaking, that'll look excellent =D

You can download the code for Placer.cs and PlacerRandom.cs. I haven't really polished them yet, and I still consider them a WIP, but they should be pretty easy to use! Also, this code should work right away in Unity.

Sunday, June 16, 2013

Alpha test-like results using regular alpha blend.

So it turns out that using alpha test on mobile hardware can be a rather expensive thing. Technically, so is alpha blending, but hey, transparency is kinda important! I'd just like to present a really simple method here for obtaining alpha test-like results while still using regular old alpha blending.

Edges on an alpha-test material ("Unlit/Transparent Cutout" from Unity)

Alpha test edges are beautifully smooth, and extremely desirable in many cases, try telling one of your artists they can't use it for the project, see how that goes! Here's an example with a standard alpha blend:

Alpha blend, eww, gross ("Unlit/Transparent" from Unity)

There's really just no substitute for that crisp edge, but fortunately, you don't have to settle for one! If you think about it, it's really just a sharp edge where the alpha goes straight from 0.0->1.0, if we can replicate that, then we should be all set! Math to the rescue, and simple math at that!

Using the plain old y = mx + b line equation, we can create a graph that does exactly that! With a super steep slope, and a small offset to x, take a look at the graph for y = 10,000(x-0.5):

I love Google

So check that out! Looks pretty much like an on/off switch to me, all that's missing is to clamp it between 0.0 and 1.0 with a call to saturate, and we should be all set! We can attach that 0.5 to a variable, and expose it to users as a cutoff slider.
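To see why this reads as an on/off switch, here's the same math in plain Python; hard_edge mirrors the shader's saturate call with the 10,000 slope and the cutoff offset:

```python
# The cutoff trick outside the shader: saturate(10000 * (alpha - cutoff))
# snaps alpha to 0 or 1 everywhere except a razor-thin band around the
# cutoff, which is what produces the crisp alpha-test-style edge.

def saturate(x):
    # Clamp to the 0..1 range, same as the CG saturate() intrinsic.
    return max(0.0, min(1.0, x))

def hard_edge(alpha, cutoff=0.5):
    return saturate(10000.0 * (alpha - cutoff))

print(hard_edge(0.49))  # 0.0 - just below the cutoff, fully transparent
print(hard_edge(0.51))  # 1.0 - just above it, fully opaque
```

Only alpha values within about 0.0001 of the cutoff land anywhere between 0 and 1, so in practice every pixel ends up fully on or fully off.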

Custom alpha blend shader with our awesome math function.

I'm honestly not sure I can actually tell the difference here! And that's pretty awesome. The code for it is super simple, but the results look great! I haven't actually tested all this on mobile just yet, but if what I hear is correct, this could be a better solution than alpha testing. Although, I honestly wouldn't be surprised to see that change from device to device. If people are interested, I can try and have some stats on that in the future!

Here's the code, CG, in the format of a Unity shader, or you can download it here.


Shader "Custom/AlphaBlend Cutout" {
 Properties {
  _Tex   ("Texture (RGB)", 2D   ) = "white" {}
  _Cutoff("Alpha cutoff",  Float) = 0.5
 }
 SubShader {
  Tags { "Queue"="Transparent" "RenderType"="Transparent"  }
  Blend SrcAlpha OneMinusSrcAlpha

  LOD 200
  
  Pass {
   CGPROGRAM
   #pragma vertex   vert
   #pragma fragment frag
   #include "UnityCG.cginc"

   sampler2D _Tex;
   float     _Cutoff;

   struct VS_OUT {
    float4 position : SV_POSITION;
    float2 uv       : TEXCOORD0;
   };

   VS_OUT vert (appdata_full input) {
    VS_OUT result;
    result.position = mul (UNITY_MATRIX_MVP, input.vertex);
    result.uv       = input.texcoord;

    return result;
   }

   half4 frag (VS_OUT input) : COLOR {
    half4 color = tex2D(_Tex, input.uv);
    float alpha = saturate(10000*(color.a-_Cutoff));
    return half4(color.rgb,alpha);
   }
   ENDCG
  }
 }
}



Wednesday, March 06, 2013

3D LUTs

I recently discovered that Photoshop CS6 supports 3D LUTs (Look Up Table)! I've encountered them briefly in the past, but never took serious note, as they didn't do anything terribly useful for me. Now, however, they've become a potential creative outlet, given their technical nature!

It's easiest to compare a 3D LUT to a matrix, if you so happen to be familiar with those. It's a collection of color transforms all wrapped into one big chunk that lets you move colors from one "space" to another!

You can think of an RGB color as an XYZ coordinate, which is then essentially an index into the LUT. You put a color in, it gives a color back! Here's an example of an 8x8x8 identity LUT. You put a color in, and it should give you that same color back.
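Here's a quick sketch of that lookup in Python (the names are illustrative, not from any real LUT tool). It builds an 8x8x8 identity LUT and snaps an input color to the nearest cell; a real tool would also interpolate between neighboring cells for smooth results.

```python
# Sketch of a 3D LUT lookup: RGB in [0,1] becomes a 3D grid index,
# and the cell's stored color is the output.

def identity_lut(n):
    # An n x n x n LUT where every cell stores its own coordinate as a
    # color, so lookups return the input color unchanged.
    d = n - 1
    return [[[(r / d, g / d, b / d)
              for b in range(n)] for g in range(n)] for r in range(n)]

def lookup(lut, r, g, b):
    # Snap each channel to the nearest grid cell (no interpolation here).
    n = len(lut)
    idx = lambda c: min(n - 1, round(c * (n - 1)))
    return lut[idx(r)][idx(g)][idx(b)]

lut = identity_lut(8)
print(lookup(lut, 1.0, 0.0, 0.0))  # (1.0, 0.0, 0.0) - unchanged
```

Replacing the identity values with anything else, like colors sampled from video frames, is all it takes to turn the LUT into a creative color transform.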


You'll see 3D LUTs used most frequently to work with color correction or post processing on video. That's boring. As a photographer, I want to use it to do something creative with my stills! So I've started making this tool to manipulate LUTs in a fashion that can be used for creative control over images. It's still in the beginning phases, but it's still pretty spiffy.

I had a thought recently that videos are actually just 3D images, with time being the Z-axis. But aren't 3D LUTs basically the same thing? 3D cubes of color? Absolutely! So I made a tool to resample a video into a 3D LUT!

Here's the Sintel trailer converted into an 8x8x8 3D LUT =D

Kinda wonky looking, but it is an entire trailer. I'm going to experiment with recording abstract, shorter video clips in the future, this is just proof of concept still. But either way, this is usable! Here's what it looks like before and after applied to an image:


Interesting, for sure! But not attractive just yet. I have found that smaller LUTs look a little nicer, the 4x4x4 LUT did quite well. Again, better video clips will help for sure, but I also haven't even gotten around to the editing part of the tool! After that happens, stuff should start getting magic =D

I'll have the tool up when it's actually usable without manual scripting! Also, those LUTs may be rotated weird, I'm more interested in the creative vision of things, rather than technical accuracy.

Check an early beta version here!


Tuesday, February 12, 2013

Pomegranate, before and after

I thought it'd be cool to show some of my process for editing pictures, at the very least, a before and after comparison. You can find the full size finished image over on deviantArt, go favorite it or something! Here ya go!


Before

After


Camera: Canon 7D
Lens: Tamron 28-75mm f/2.8
Focal length: 75mm
Aperture: f/2.8
ISO: 1600
Lighting: speedlight off to my left, ~8 meters @1/64, cloudy ambient "skylight" above.

So as you can see, I darkened the background a lot! I really wanted the subject to stand out from the background, and the initial image was quite flat. I used a darkening curves adjustment layer once with an "Apply Image" mask ( which also adds in some nice contrast ), and another one with a vignette mask.

If you look close, I also took out a ton of details on the floor and the wall, just to keep it a little smoother and cleaner. Just boring old spot healing there!

As far as the figure itself, I left it primarily untouched except for dodging and burning. I added a little bit of depth to the dress, and some extra lights and shadows in her hair.

And last, a curves adjustment layer to play with the blue channel, blue in shadows, and a tiny -tiny- bit of yellow in the highlights. I actually had to mess around with that one quite a bit after I checked it on my other monitor! Evidently my secondary monitor is far warmer than my primary~

A bit of a self-critique on this picture, I'd say that I messed up the lighting a fair bit. It was an overcast day, and it's the first time I've really worked with a speedlight outdoors! Her face needs some extra light from a slightly different direction, perhaps a stronger flash, or a second flash with a "snoot". I also could have benefited from a smaller aperture, at full resolution, this picture was very much not sharp.

Either way, I'm darned happy with this picture, and hopefully you like it too!

Tuesday, May 15, 2012

Corona SDK code dump

Here's a couple of small sample scripts I wrote for a presentation on Corona. They're pretty small and simple, but give a decent overview of how to do some basic stuff with Corona. I meant to write a lot more about them here, but I haven't had the time lately, so the comments will have to suffice. Enjoy!

Also, some pretty useful links at the bottom here =D

Samples

Here's a brief sample of a LUA script that creates basic display objects:
-- basic object types
local rectangleObject = display.newRect       (10, 10,  160, 60)
local roundedObject   = display.newRoundedRect(10, 80,  160, 60, 20)
local circleObject    = display.newCircle     (90, 180, 30)
local lineObject      = display.newLine       (10, 220, 170, 280)
local textObject      = display.newText       ("Test text!", 10, 290, "Arial", 24)

-- setting color!
rectangleObject:setFillColor(200, 100, 100)
roundedObject  :setFillColor(100, 200, 100)
circleObject   :setFillColor(100, 100, 200)

-- a few handy properties
textObject.x = 300
textObject.y = 300
textObject.rotation = 45

-- some handy display properties
circleObject.x = display.contentWidth  / 2
circleObject.y = display.contentHeight / 2

-- a note on . vs : for method calls, . is a static method, where : is for an instance method
-- it could also be said that : as in obj:Move( x, y ) is the same as obj.Move(obj, x, y)

Here's another example showing events with Corona:

-- create a basic object
local obj = display.newCircle(display.contentWidth/2, display.contentHeight/2,  40)
obj:setFillColor(200, 100, 100)

-- define some functions to use for catching events
local function onEnterFrame( aEvent )
    obj.y = display.contentHeight/2 + math.sin(aEvent.time * 0.01) * 20
end

local function onTouch( aEvent )
    print( "Caught a touch!" )
end

-- attach the functions to the events
Runtime:addEventListener( "enterFrame", onEnterFrame )
obj    :addEventListener( "touch",      onTouch      )

An example using transitions and time delayed events:
-- create a basic object
local obj = display.newCircle(display.contentWidth/2, display.contentHeight/2,  40)
obj:setFillColor(200, 100, 100)

-- Create a transition! It's easy!
transition.to(obj, {time=2000, x=0, y=0, alpha=0})

-- also, create a time delayed event
local function onTimer( aEvent )
    transition.to(obj, {time=2000, x=display.contentWidth/2, y=display.contentHeight/2, alpha=1})
end
timer.performWithDelay(3000, onTimer)

And lastly, a quick example using physics:

-- set up physics
physics = require("physics")
physics.start      ( true )
physics.setGravity ( 0, 10 );
physics.setDrawMode( "hybrid" )

-- create some basic objects
local ball = display.newCircle(display.contentWidth/2, display.contentHeight/2,   40)
local wall = display.newRect  (0, display.contentHeight-40, display.contentWidth, 40)
ball:setFillColor(200, 100, 100)
wall:setFillColor(100, 200, 100)

-- attach them to physics
physics.addBody( ball,           { density = 1.0, friction = 0.3, bounce = 0.2, radius = 40 } )
physics.addBody( wall, "static", { density = 1.0, friction = 0.3, bounce = 0.2 } )

-- attach the physics gravity to the accelerometer of the device
local function onTilt( event )
    physics.setGravity( 10 * event.xGravity, -10 * event.yGravity )
end
Runtime:addEventListener( "accelerometer", onTilt )

And a quick interactive demo:
-- set up physics
physics = require("physics")
physics.start      (true)
physics.setGravity (0, 0);
physics.setDrawMode("hybrid")

-- define variables
local left      = -1000
local right     =  1000
local top       = -1000
local bottom    =  1000
local minWidth  =  50
local minHeight =  50
local maxWidth  =  150
local maxHeight =  150
local touchDown = {x = 0, y = 0}

-- create the ball that the user directs
local ball = display.newCircle(0, 0, 40)
ball   :setFillColor(100, 200, 100)
physics.addBody     (ball, {density = 1.0, friction = 0.3, bounce = 0.2, radius = 40})

-- create a decorative background and a physical foreground
for i=1,100 do
    -- create a random decorative block
    local x      = math.random(left,      right    )
    local y      = math.random(top,       bottom   )
    local width  = math.random(minWidth,  maxWidth )
    local height = math.random(minHeight, maxHeight)
    
    local backBlock = display.newRect(x, y, width, height)
    backBlock:setFillColor(100, 100, 100)
    
    -- create a random interactive block
    x      = math.random(left,      right    )
    y      = math.random(top,       bottom   )
    width  = math.random(minWidth,  maxWidth )
    height = math.random(minHeight, maxHeight)
    
    local frontBlock = display.newRect(x, y, width, height)
    frontBlock:setFillColor(200, 200, 200)
    physics   .addBody     (frontBlock, {density = 1.0, friction = 0.3, bounce = 0.2})
end

-- look for user flicks
local function onTouch(aEvent)
    if aEvent.phase == "began" then
        touchDown.x = aEvent.x
        touchDown.y = aEvent.y
    end
    if aEvent.phase == "ended" then
        ball:setLinearVelocity(aEvent.x - touchDown.x, aEvent.y - touchDown.y)
    end
end
Runtime:addEventListener("touch", onTouch)

Links