Do you ever say to yourself “I wish math were sneakily hidden in more places around me”? Obviously you do. Well, today I’m going to cover how hiding math inside a texture can boost your shader capabilities. For context, an example of this would be using a normal map in a shader in Unity.
If you remember from an earlier lesson, color in a texture is stored as a number between zero and one. That number generally represents a color, but it can just as easily represent information your shader needs easy access to.
Using a texture to store data enables you to provide your shader with four chunks of data (in the RGBA channels), unique to every single pixel on your screen. Here we’ll walk through some of the what and the why of this trick.
This is the fourth entry in my Shader Series.
This article may contain affiliate links, mistakes, or bad puns. Please read the disclaimer for more information.
Surface Maps
The most common use of math hidden in colors is surface maps. So common, in fact, that you may forget those bluish hues are indeed just data. These are things like a normal map, bump map, or height map.
The normal map is probably the most interesting one to discuss. It encodes a 3D vector (xyz) into the color. The way it can do this in full 3D space is to take values that are in the range -1 to +1 and shrink them into 0 to 1. So a color of 0.5 represents 0 in the decoded vector.
Given your perspective of the surface (x is left/right, y is up/down, and z is coming out at you), only x and y can have the full range of values: -1 to 1 in the vector, which encodes to 0 to 1 in color. The z value, on the other hand, is pointing towards you, so it’s always positive. Here a vector value of 0 to 1 encodes to a blue value between 0.5 and 1. Hence normal maps always look bluish.
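To make that concrete, the encode and decode are just a remap between the two ranges. The variable names below are only for illustration:
// Encoding (done by the tool baking the map):
// a normal of (0, 0.6, 0.8) becomes the color (0.5, 0.8, 0.9).
fixed3 encoded = normal * 0.5 + 0.5;
// Decoding (done in your shader): back to (0, 0.6, 0.8).
fixed3 decoded = encoded * 2 - 1;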
Normals
Without needing much help from us, normal maps can provide some lighting trickery to add detail to an otherwise flat surface. For example, you are able to provide a normal map texture to Unity’s built-in Standard shader by default.
If you are doing the normal mapped lighting yourself, you could use some logic like this:
// Decode: remap the stored color from the 0 to 1 range back to -1 to 1.
fixed4 texColor = tex2D(_SomeTex, iTexCoord.uv) * 2 - 1;
// Basic diffuse term: how directly the decoded normal faces the light.
return dot(texColor.xyz, _WorldSpaceLightPos0.xyz);
The code above is incomplete, but I’m not going to get into these details. This lesson is about hiding math in colors, not how to calculate lighting.
This is where something like Unity’s Surface shader is so handy. You can write a shader function to muck with your normals, and then let Unity do the lighting itself. Below is a Unity surface shader that takes the main texture input and sets it as the normals.
void surf (Input IN, inout SurfaceOutputStandard o)
{
    // The main texture holds encoded normals here, not color data.
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
    // Flat bluish base color so the lighting changes stand out.
    o.Albedo = fixed3(.4,.5,1);
    // Decode from the 0 to 1 color range back to a -1 to 1 normal.
    o.Normal = c.rgb*2-1;
    //...
}
Note that in Unity this code works because the normal map texture is set up (in its importer settings) as a standard 2D texture. If its importer settings had it identified as a normal map, we’d need a little more logic, which I’ll show in a later example on this page.
Noise
Another common use for hiding math in colors is noise. Noise can be calculated on the fly, but it can be expensive. Depending on your needs, and your hardware, it can often be cheaper to generate the noise ahead of time. The image below is one I’ve used quite a few times, and you are welcome to as well. It’s a nice cloudy noise texture, but most importantly, it tiles seamlessly, meaning if you were to go off the right edge and wrap around to the left, there would be no break in continuity. This is really helpful when moving your noise texture across a surface. If you are just animating your UV coordinates, as discussed in the “Time Movement” section of the basic shader tutorial, you’ll never see a break in the clouds.
In this example, the noise is functionally represented as a height map. As such, I only really need one color channel. I could embed other data (more noise?) into the other channels if I wanted to. You could also turn that height into a normal map if that’s more useful for your particular shader.
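If you did want to convert the height into normals, one common approach (not the only one) is to compare neighboring height samples. This is just a sketch: _NoiseTex_TexelSize is the (1/width, 1/height, width, height) float4 Unity fills in if you declare it, and strength is a made-up tuning value.
// Build a tangent-space normal from a height map via finite differences.
float2 texel = _NoiseTex_TexelSize.xy;
float left  = tex2D(_NoiseTex, uv - float2(texel.x, 0)).r;
float right = tex2D(_NoiseTex, uv + float2(texel.x, 0)).r;
float down  = tex2D(_NoiseTex, uv - float2(0, texel.y)).r;
float up    = tex2D(_NoiseTex, uv + float2(0, texel.y)).r;
float strength = 4; // assumed knob: higher exaggerates the bumps
// Height slope in x and y becomes the normal's x and y; z points outward.
float3 normal = normalize(float3((left - right) * strength, (down - up) * strength, 1));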
Dissolve Effect
The above cloud texture can be used for a lot of things, but a common use for something like that is a dissolve effect (or materialize, which is just a reverse-dissolve).
Below is a simple shader that uses _SinTime to oscillate a “present state” between zero and one. If the present state is a value smaller than the one encoded in the cloud (at this pixel), then the current pixel is drawn. If not, it isn’t. At the edge, I also do a fade to black so that things look a little better (that’s what the saturate call is for).
You’ll notice in this shader I have a comment of “efficient version”, which returns a result in two lines, followed by an “easy to read version”, which takes a few more. The second version never gets executed because of the early return. I have them both in there so you can more easily figure out what’s going on, but also know the better way to do it if borrowing this code.
fixed4 dissolve (v2f iTexCoord) : SV_Target
{
    fixed4 texColor = tex2D(_MainTex, iTexCoord.uv);
    fixed4 cloudColor = tex2D(_CloudTex, iTexCoord.uv);
    // _SinTime.z is sin(time), in -1 to 1; remap it to 0 to 1.
    float normalizedSinTime = (1+_SinTime.z)*0.5;
    // Positive delta means this pixel hasn't dissolved yet.
    float delta = cloudColor.r - normalizedSinTime;

    //efficient version.
    float mult = lerp(0, saturate(delta*10), smoothstep(-0.01, 0, delta));
    return texColor * mult;

    //easy to read version
    if(delta > 0)
    {
        // Fade to black near the dissolve edge.
        delta = saturate(delta * 10);
        return texColor * delta;
    }
    return fixed4(0,0,0,0);
}
Aside about Efficiency
I wanted to make two quick notes about efficiency. Maybe these should live on my concepts page, or in a dedicated article on efficiency, but for now, I’ll just make a note here.
First, shaders are not good at branching logic. If statements are legal, but costly. In many shaders you’ll write, it probably doesn’t matter. The performance cost may be small enough to merit the better readability. That being said, I do want to encourage best practices. So wherever possible, I’ll write my shaders in both the efficient way and the easy to read way. This gives you one version to help grasp what’s going on, and one to copy into your own systems.
Second, shaders are decent at sampling, when used sparingly. Obviously almost all shaders will need to sample at least once. My above dissolve shader samples twice. Once for the base texture, and once for the noise. On today’s hardware, you can get away with quite a few samples, but it’s still something to keep an eye on. If you are running into performance problems, the sampling might be one of the first places to look.
Normal Manipulation
To combine the topics above, I’m going to take the concept of a normal map and the noise texture (treated as noisy normals) and use them together in a shader that generates dynamic normals. I sample the main normal map (same circle as above) based on where I am in the image, but the second, noisy one, I move through over time. This allows the surface to change its reflective appearance.
To illustrate the difference between 2D imported assets in Unity and Normal Map assets, I’ve made one of each in this sample. The main normal map is a 2D image. The noisy one is set as a true Normal Map. The key difference is that after I sample from the normal map, I use a Unity provided method to decode it. This does two things. One is the move from the 0 to 1 space into -1 to 1. The other is some swizzling. Why does Unity need to swizzle? Because for efficiency, it actually stores the normals in a rearranged format, so it can optimize the compression. To achieve all this, you just need to feed the texture colors into a Unity provided method, UnpackNormal.
void surf (Input IN, inout SurfaceOutputStandard o)
{
    // Main normal map was imported as a plain 2D texture, so decode it manually.
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
    // Scroll the noise texture horizontally over time.
    float2 noiseCoord = IN.uv_MainTex;
    noiseCoord.x += _Time.x;
    // The noise texture is imported as a true Normal Map, so use UnpackNormal.
    fixed3 noise = UnpackNormal(tex2D(_NoiseTex, noiseCoord)).rgb;
    float3 normal = c.rgb*2-1;
    // Scale the noise by the base normal's z, so flatter areas pick up more noise.
    float mult = clamp(normal.b, 0, 1);
    float3 noiseNormal = noise * mult;
    normal += noiseNormal;
    o.Albedo = fixed3(.4,.5,1);
    o.Normal = normal;
    o.Metallic = 0;
    o.Smoothness = 1;
    o.Alpha = c.a;
}
Displacement Maps
A third usage commonly seen for data encoded into colors is a displacement map. Arguably this fits in the category of Surface Maps above, but its functionality is different enough that I like giving this map its own treatment. This is very similar to a normal map, but serves a slightly different function. In a fragment shader, this map would only utilize an x and y component (rg), whereas in a vertex shader, it might be 3D (xyz or rgb).
The 2D version is used during sampling in a fragment shader. The rg color is read in, translated from 0 to 1 into -1 to 1, and treated as a pixel offset. If your shader had UV coordinates (sampling coordinates) of (0.2, 0.3) and a displacement of (0.1, -0.05), you’d actually sample your color from (0.3, 0.25). A common use case for this would be textured glass, or water. Looking through either surface will warp and displace what’s behind it.
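A minimal sketch of that sampling flow might look like this. The _DispTex and _Strength names are just assumptions for illustration, with _Strength scaling how far the offset reaches.
// Read the displacement, remap from 0 to 1 into -1 to 1, and offset the UVs.
fixed2 displacement = tex2D(_DispTex, iTexCoord.uv).rg * 2 - 1;
float2 warpedUV = iTexCoord.uv + displacement * _Strength;
fixed4 texColor = tex2D(_MainTex, warpedUV);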
To utilize a displacement map in 3D space, you can sample it from within the vertex shader and use it to offset your vertices. This is also commonly used for water: a moving displacement map can animate waves across a plane.
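A rough sketch of the vertex version, assuming a surface shader with a vertex:vert modifier and the same made-up _DispTex and _Strength properties (tex2Dlod is used because regular tex2D isn’t available in the vertex stage):
void vert (inout appdata_full v)
{
    // Scroll the displacement map over time to animate the waves.
    float2 uv = v.texcoord.xy;
    uv.x += _Time.x;
    // Decode the offset and push the vertex along it.
    float3 offset = tex2Dlod(_DispTex, float4(uv, 0, 0)).rgb * 2 - 1;
    v.vertex.xyz += offset * _Strength;
}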
Rain Screen Effect
Combining a lot of what we’ve been talking about, another popular effect is creating rain drops on a screen. Doing this really well is far more complex than just a shader, and requires some well managed game-side systems. In Unity, there are several great asset store items to either provide this as-is, or give you a launching point. My favorite is the free Rain Drop Effect 2. I like this one for three reasons. First, it’s free. Second, the effects (yes, plural) look good and support extensive customization. Third, and my favorite, is that you have access to the source (both C# and shader) when you download it. That means you can utilize the effect, as well as learn from it, and if needed tweak it.
sRGB
There’s one last warning I need to give, and it relates to color space. If you encode numbers into the color of a texture, it would obviously be beneficial to get the same numbers out that you put in. Surprisingly, this isn’t automatic. By default in many applications, including Unity, the color space used is sRGB. You’ll find this as a checkbox in your Unity texture importer settings.
With sRGB on, your colors are gamma corrected. Essentially, this means they’re adjusted to make up for oddities in how displays work. The tool adjusts the colors one way, because your monitor will adjust them the other. When your colors are actually math, you don’t want the engine doing some pre-adjustment. So turn that option off on any mathy textures.
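To make the danger concrete: the decode the GPU applies when sampling an sRGB texture is roughly a power curve, so the value you read back is not the value you stored.
// Rough approximation of the sRGB decode applied on sampling.
// A stored 0.5 comes back as about pow(0.5, 2.2) ≈ 0.22, silently
// corrupting any math you encoded in the texture.
float readBack = pow(storedValue, 2.2);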
Conclusion
There are a ton of samples I could get into here, with many more ways to use this little trick. Instead of going too overboard, I tried to keep these samples simple, to focus on the main concepts. In later lessons I’ll get into some more advanced techniques, but for now, this should get you started.
Do something. Make progress. Have fun.