Shader Series 1 – High Level Concepts

This is the first in a series I’m creating to walk people from “heard of shaders” to “writing complex shaders from scratch”.  This one covers shader basics and high level concepts. Future articles will focus on more specific shader implementations, using shaders in tools like Unity, or advanced shader concepts.

Edit: This was the first entry in my Shader Series, which can be explored here.

This article may contain affiliate links, mistakes, or bad puns. Please read the disclaimer for more information.

Some Concepts

There are a handful of different kinds of shaders.  Much of my focus will be on pixel or fragment shaders, with some digressions into the world of the vertex shader. I do this because I find that fragment shaders make for the best medium to explore concepts.

In most of my lessons I’ll cover how they could be applied in Unity, but in general these lessons should work in any context.

What is a Fragment Shader

Fragment shader programs run on each fragment during the drawing process.  For a full-screen post-processing shader, that means each pixel, hence the shader’s two possible names (fragment or pixel).  If it’s a texture on the side of a 3D cube that appears skewed to the camera, then each fragment does not map one-to-one to a screen pixel.
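
To make that concrete, here is a minimal sketch of a fragment program in Unity-style HLSL (the struct, the _MainTex name, and the function name are illustrative, not required by any engine): it runs once per fragment and returns that fragment's color.

```hlsl
sampler2D _MainTex;            // the texture being drawn (illustrative name)

struct v2f
{
    float4 pos : SV_POSITION;  // clip-space position from the vertex stage
    float2 uv  : TEXCOORD0;    // normalized texture coordinate, 0..1
};

// Runs once per fragment and returns that fragment's color.
fixed4 frag (v2f i) : SV_Target
{
    return tex2D(_MainTex, i.uv);   // simply show the texture as-is
}
```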

What about surface shaders?  Some engines (such as Unity) have the concept of “surface” shaders.  In reality, this is a submethod run inside Unity’s built-in fragment shader.  The idea of a surface shader is that the engine gives you the opportunity to define the unlit look of the fragment, and then Unity (or whatever program) deals with the lighting part.  For the sake of most of our lessons, the concepts that apply to a fragment shader apply to a surface shader as well.

Other Shader Types

The second most common shader you’ll run into is the vertex shader.  The vertex shader program runs on each vertex of a mesh. It allows you to move a vertex around, do some math that can affect lighting, and adjust colors.  I’ll hit on this shader a little, but fragments are my focus.  
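
As a quick illustration, here is a hedged sketch of a vertex program in Unity-style HLSL (the structs and the 0.1 offset are made up for the example; UnityObjectToClipPos comes from Unity's shader include files): it runs once per vertex and can move that vertex before it is projected to the screen.

```hlsl
struct appdata
{
    float4 vertex : POSITION;   // object-space position from the mesh
    float2 uv     : TEXCOORD0;
};

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

// Runs once per vertex; this example pushes every vertex up a little
// before projecting it into clip space.
v2f vert (appdata v)
{
    v2f o;
    v.vertex.y += 0.1;                        // move the vertex upward
    o.pos = UnityObjectToClipPos(v.vertex);   // Unity helper: object -> clip space
    o.uv  = v.uv;
    return o;
}
```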

Beyond the vertex shader are less common ones, such as geometry and compute shaders.  These are not super relevant to what I’m trying to teach here, so I’ll mostly pretend they don’t exist.  Poof, they are gone.

Pull a Fragment, Don’t Push It

One of the most fundamental mindset shifts needed to start creating great fragment shaders is switching from a “push” to a “pull” view of the world.  Looking through the eyes of your code, you are generally a thing that needs to be put somewhere on the screen.  A sprite that has to walk to the right. To do that, the code pushes the sprite towards positive X.

Fragment shaders invert this logic.  In a shader, you instead are a where that needs to pull a thing to you.  You are a where, because you are the pixel (or fragment).  In this context, if you need to move a character to the right, then you (as the pixel) need to see what part of the character is to your left, and draw it.  

Of note, in vertex shaders, you are still a thing (the vertex) that needs to push yourself where you want to go.

[Image: Pixel Movement in Shaders. Code pushes objects where they need to go; fragment shaders pull colors where they should be.]

This image shows the same idea both ways: the character pushing towards the right, or the pixel pulling from the left.
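
Here is a minimal sketch of the pull mindset in Unity-style HLSL (_MainTex, _Offset, and the amount of the shift are assumptions for illustration): to draw the sprite shifted to the right, each pixel samples the texture from a coordinate to its left.

```hlsl
sampler2D _MainTex;   // the sprite's texture (illustrative name)
float _Offset;        // how far right the sprite should appear, in UV units

fixed4 frag (float2 uv : TEXCOORD0) : SV_Target
{
    // To show the sprite shifted right, this pixel pulls color from its left.
    float2 sourceUV = uv - float2(_Offset, 0.0);
    return tex2D(_MainTex, sourceUV);
}
```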

Normalized Numbers

One of the other key concepts to be aware of when writing shaders is that almost all numbers are normalized between 0 and 1.  The colors that you generally think about in a 0 to 255 space are now 0 to 1 (meaning 128 is roughly 0.5, 64 roughly 0.25, etc.).

Similarly, your coordinates within a sprite would have been 0 to width and 0 to height, but are both now 0 to 1. This is particularly interesting because it means one pixel’s width and height do not translate to the same fractional step unless the sprite is square.  If your sprite were 64×128, then one pixel would be about 0.0156 wide and 0.0078 tall.
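
A small sketch of how that plays out in code (Unity-style HLSL; the sprite size and color values are just the examples above):

```hlsl
float4 NormalizedExamples()
{
    // A 64 x 128 sprite: one pixel step in UV space is not square.
    float2 spriteSize = float2(64.0, 128.0);
    float2 onePixel   = 1.0 / spriteSize;   // ~ (0.0156, 0.0078)

    // A color you'd write as (255, 128, 64, 255) in 0..255 space becomes 0..1:
    float4 color = float4(255.0, 128.0, 64.0, 255.0) / 255.0;  // ~ (1.0, 0.5, 0.25, 1.0)

    return float4(onePixel, color.r, color.a);
}
```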

RGBA

Colors in shaders are four-element vectors (such as vec4, float4, or fixed4, depending on the language and exact type in question).  You can access them via variable.rgba, which translates to Red, Green, Blue, and Alpha. Of note, with these variable types you can also access the same values with variable.xyzw.  Once you are in the shader, a float4 is a float4; it doesn’t matter if it had been a color, a coordinate, or just a collection of four floats.
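
For example (Unity-style HLSL; the values are arbitrary):

```hlsl
float4 SwizzleDemo()
{
    // A float4 is just four floats; .rgba and .xyzw address the same slots.
    float4 c = float4(0.25, 0.5, 0.75, 1.0);

    float red   = c.r;   // 0.25, identical to c.x
    float alpha = c.a;   // 1.0,  identical to c.w
    float2 rg   = c.gr;  // swizzles can also reorder: (0.5, 0.25)

    return float4(rg, red, alpha);
}
```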

Sampling

Sampling is the process of reading colors from a texture. Most of the time in shaders, you’ll be reading colors from textures, so while this is a very simple concept, it’s at least worth mentioning. Sampler functions take the texture and coordinates as inputs, and return an RGBA color output.
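
In Unity-style HLSL that call is tex2D (GLSL’s equivalent is texture). A minimal sketch, with an illustrative texture name:

```hlsl
sampler2D _MainTex;   // illustrative texture name

fixed4 SampleDemo()
{
    // Inputs: a texture and a normalized coordinate. Output: an RGBA color.
    float2 uv = float2(0.5, 0.5);   // the exact center of the texture
    return tex2D(_MainTex, uv);
}
```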

Two key texture settings determine the way the coordinate inputs are handled: Wrap and Filter. Unity’s texture importers call these “Wrap Mode” and “Filter Mode”.

Wrap

Wrap describes what the sampler should do when your texture coordinates are outside the bounds of the texture (greater than 1 or less than 0). The basic choices are:

  • Repeat. This will tile the texture. A coordinate of 3.2 will sample from the 0.2 spot.
  • Clamp. This will continue drawing the edge color. 3.2 will sample from 1.0.
  • Mirror. Similar to the Repeat mode, this mirrors each tiled piece. 1.2 will sample from 0.8.
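
The sampler hardware applies these modes for you, but as a rough illustration (a sketch in Unity-style HLSL, not something you would normally write), here is how each mode remaps a single out-of-range coordinate:

```hlsl
// Rough per-axis equivalents of the three wrap modes, for illustration only.
float WrapRepeat(float x) { return frac(x); }      // 3.2 -> 0.2
float WrapClamp(float x)  { return saturate(x); }  // 3.2 -> 1.0
float WrapMirror(float x) { return 1.0 - abs(frac(x * 0.5) * 2.0 - 1.0); } // 1.2 -> 0.8
```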

Filter

Filter describes what the sampler should do when you are not sampling exactly on a pixel. Say your texture was 10×10 pixels and each pixel was a different color. If you sampled at (0.1,0.1) you’d get an exact color from your image. If you sampled at (0.15,0.1) you’d be sampling between two different pixels. Filtering determines how that sampling is done. Or say you have a 1000×1000 texture, rendering into a 10×10 space on screen. Here filtering determines if you are going to blur pixels together.

Fundamentally, there are two choices for filter mode. “Point” and “anything else”. Point means you’ll get exactly one pixel every time you sample. If you sample between two pixels, it’ll round down. If your filter mode isn’t set to point, then it’ll do some form of averaging. What form depends on the mode. Generally your choices would be Bilinear, Trilinear, and Anisotropic. Those choices reflect ascending levels of quality and cost.
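
As an illustration of what point filtering effectively does (a sketch in Unity-style HLSL; _MainTex and _TexSize are names I have made up), this snaps a coordinate to the center of the texel it falls in before sampling, so you always get exactly one pixel’s color; the other modes blend neighboring texels instead.

```hlsl
sampler2D _MainTex;   // illustrative texture name
float2 _TexSize;      // texture dimensions in pixels, e.g. (10, 10)

fixed4 PointSample (float2 uv)
{
    // Snap to the center of the texel containing uv -- one exact pixel,
    // which is effectively what point filtering gives you.
    float2 snapped = (floor(uv * _TexSize) + 0.5) / _TexSize;
    return tex2D(_MainTex, snapped);
}
```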

Tools

There are a lot of tools you can use to play with shaders. Here are my favorites.

Unity

I tend to use Unity a lot to play with shaders.  Generally, if I’m writing a shader, it’s for a game, so why not see it in the exact medium for which it is intended? There are a couple ways to create and work with shaders in Unity.  There’s the multitude of asset store helpers, Unity’s built in shader graph, and old-school text editing.  In general, I’ll try to teach concepts, then display them in the medium that makes most sense for that concept (code, graph, etc.).

ShaderToy.com

The other place I do much of my shader-playing is on shadertoy.com.  It’s a great, free platform for testing out WebGL shaders. Just be careful not to feel bad about yourself when you see what some others are putting up.  Many use the platform not to create game content, but to create art. What they create is amazing and enjoyable to see, but isn’t what you could practically use in a game.

Shadron

Shadron is another useful tool for testing out shaders. Unlike the above two, it’s not intended for real-time use. This tool bills itself as a “procedural graphics editor”. Basically that means it’s an image (or video) creating tool, where you create the art via shaders. It’s great if you actually want to create pre-rendered art, or if you need to send someone an image/video of something you’re trying to pull off.

Summary

These concepts are just some of the core basics to be familiar with before diving into the rest of the shader tutorials.  Understanding that most of your work will be done at the fragment level, and getting comfortable with the mindset of normalized math, are keys to early success.  I’ll have a series overview page up soon, so keep an eye out for it.

Do something. Make progress. Have fun.