Unity Vertex Shader and Geometry Shader Tutorial

Vertex And Geometry Title

UPDATE: there is now a post up on how to set up a geometry shader in URP. The steps are a little different from what’s described in this article, as this one focuses on the legacy renderer. There’s also a YouTube version of that URP article.

I’ve been working on a shader tutorial for paper in Unity: burning it, folding it, those sorts of things. But as I’ve done so, I’ve had to do a good bit of coding in Unity vertex shaders, and a little in geometry shaders as well. I realized that I haven’t talked about either on this site, so I figure I should toss up an intro before getting into the paper goodness.

This article may contain affiliate links, mistakes, or bad puns. Please read the disclaimer for more information.

This will build on roughly everything in my shader tutorial series, but most relevant is likely the Unity shader graph tutorial.

If you have any questions or comments, please hit me up on Twitter, or reply to this forum thread. And if the mood strikes, a follow on Twitter, a download of my free shader graph node, or a purchase of a recommended asset is always appreciated.

Intro

Vertex shaders are programs run on each vertex in your model.  In a 2D sprite, this will just be the four corners. In more complex models, there will be many more.

As to geometry shaders, well, they run on the geometry!  That probably doesn’t clear things up. Basically, they work on each triangle.  So in the 2D sprite example, the geometry shader would be executed twice, once for each triangle.  If you remember back in my very first shader tutorial post, I said I was going to largely ignore geometry shaders.  This was both because I rarely needed them in my own work, and because I felt they were a bit too advanced to want to dive into here. Then I read an excellent post on them by Harry Alisavakis.  That helped me see some additional applications of the shader, but more importantly, showed that it could be explained in a fairly straightforward way.  I’ll cover some of the same topics he hit on, but I definitely recommend checking out his site (and following him on Twitter, he’s great).

Of note, geometry shaders are totally unrelated to funny geometry t-shirts. Just in case you were wondering.

If you don’t view his tutorial, you should at least take a quick look at this graphic that he uses (that he borrowed from the Vulkan tutorial site).


Push Don’t Pull

In my earlier concept post, I discussed how you need to flip your logic in fragment shaders.  In game code you’re accustomed to pushing items around, but in fragment shader code, you need to think about pulling a color. With vertices, things are back to the push model.  You are a vertex, and you (the code) want to push yourself (the vertex) somewhere.  In the vertex shader you are a single vertex, in the geometry shader you are several, but the mindset is the same for either.

diagram showing pushing vertices in a vertex shader vs pulling colors in a fragment shader

What to do in a Vertex Shader

There are generally two things I do in a vertex shader. One is calculating something that I’ll need in the fragment shader, and the other is altering the vertices’ positions or normals.  In the case of the calculations, I may be doing that because there is data available in the vertex shader (such as positional data) that I don’t have access to in the fragment shader.  Alternatively, a reason to do calculations in the vertex shader as opposed to the fragment shader is simply efficiency: the vertex shader is run once per vertex, whereas the fragment shader is run (generally) many more times.
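
To make the “calculate once per vertex” idea concrete, here’s a minimal sketch (not code from this article’s example) of computing a value in the vertex shader and letting the rasterizer interpolate it for the fragment shader. It assumes the standard unlit-shader boilerplate and Unity’s built-in appdata_base struct:

struct v2f
{
	float4 vertex : SV_POSITION;
	float2 uv : TEXCOORD0;
	float3 worldPos : TEXCOORD1; // computed once per vertex, interpolated for the fragment shader
};

v2f vert (appdata_base v)
{
	v2f o;
	o.vertex = UnityObjectToClipPos(v.vertex);
	o.uv = v.texcoord.xy;
	// do this work here, once per vertex, rather than once per pixel in the fragment shader
	o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
	return o;
}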

Unfortunately, if we’re talking about the Unity shader graph, the fragment and vertex nodes are all jumbled together.  I’m honestly not sure how it decides which calculations go in which shader, outside the situations where it has no choice.  So in shader graph, you’ll mostly be writing vertex-like logic just to move vertices.

This is likely fine, as moving vertices is the more common use case. Generally you’ll be doing it because you want to deform the object somehow.  For example, you may want to wobble a fish’s tail, or ripple the surface of some water.
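
As a rough sketch of that kind of deformation (again, not this article’s example), here’s a vertex program that ripples a surface with a sine wave. It reuses the v2f struct from the sketch above, and _Amplitude and _Frequency are made-up properties you’d declare yourself:

float _Amplitude;  // hypothetical property: ripple height
float _Frequency;  // hypothetical property: ripple density

v2f vert (appdata_base v)
{
	v2f o;
	float4 pos = v.vertex;
	// push each vertex up and down based on its x position and the current time
	pos.y += sin(pos.x * _Frequency + _Time.y) * _Amplitude;
	o.vertex = UnityObjectToClipPos(pos);
	o.uv = v.texcoord.xy;
	o.worldPos = mul(unity_ObjectToWorld, pos).xyz;
	return o;
}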

What is a Geometry Shader

Geometry shaders are weird.  And powerful. Weirderful. There are a lot of different things you might do with one, but I’ll cover three common ones below.  First, let me show a snippet of what this shader looks like. To set it up, you alter the inputs & outputs of your vert & frag shaders slightly, and then create this method:

[maxvertexcount(3)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> triStream)
{
	g2f o;
	
	for(int i = 0; i < 3; i++)
	{
		o.vertex = UnityObjectToClipPos(input[i].vertex);
		UNITY_TRANSFER_FOG(o,o.vertex);
		o.uv = input[i].uv;
		o.normal = UnityObjectToWorldNormal(input[i].normal);
		o._ShadowCoord = ComputeScreenPos(o.vertex);
		#if UNITY_PASS_SHADOWCASTER
		o.vertex = UnityApplyLinearShadowBias(o.vertex);
		#endif
		triStream.Append(o);
	}
	triStream.RestartStrip();
}

The brief version of what’s going on here:

  1. [maxvertexcount(n)] tells the shader compiler the maximum number of vertices this shader will output.  That’s right, geometry shaders can add vertices.  I’ll talk more about that later.
  2. The for() loop iterates over all three input vertices and creates the vertices to be output.  That’s the basic pass-through logic of this shader.

You’ll notice that I have methods like UnityObjectToClipPos() operating on the vertex position variable.  Generally this would be done in the vertex shader. If you move it (or other vertex shader work) into the geometry shader, you have to take it out of the vertex shader so it isn’t applied twice.
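
For completeness, here’s a minimal sketch of what the matching vertex program might look like once the transform has moved into the geometry shader; it just passes object-space data straight through (the full version appears in the complete shader later in this article):

v2g vert (appdata v)
{
	v2g o;
	o.vertex = v.vertex;        // still in object space; geom() calls UnityObjectToClipPos later
	o.normal = v.normal.xyz;
	o.uv = v.uv;
	return o;
}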

What to do in a Geometry Shader

Now, on to three examples of things to do in this shader.  

First, recalculating normals. If you deform an object’s vertices, you will often need to recalculate the normals.  Depending on the math used to deform them, this could all be done in the vertex shader, but it’s often easier to do in the geometry shader.  Here you have access to all three corners of a given triangle, so determining the normal of that face is straightforward.

The downside of recalculating the normals in a geometry shader is that you lose the smoothing that comes from each vertex sharing its normal across the triangles that use it.  Below is a screenshot of spheres with and without recalculated normals.

recalculated normals in geometry shader

Second, object deformation.  Just like the vertex shader, you can move vertices around in a geometry shader.  The main advantage (or disadvantage, depending on your case) of doing this here is that you get unique vertices for each triangle.  In the case of a quad, the vertex shader sees four vertices. The geometry shader sees two triangles, each with three vertices. At this stage, those vertices are unique, so you actually have six vertices you’re working with.

Below I’ve extruded all vertices via the formula position.xyz += normal.xyz * 0.5 within the geometry shader.  This grows the sphere outward along the normal. Because vertices are not shared between triangles in the geometry shader, the sphere on the right can be split apart into its individual triangles.

geometry shader extruding along normals

Third, adding vertices.  As I mentioned when describing [maxvertexcount(n)], you can increase n to set up the shader to add new triangles.  Below is a simple example where I take each triangle I want to draw, and create another triangle behind it facing the opposite direction.  This makes each surface two-sided. This shader also does the normal-based extrusion. Now you can see the inside of the sphere!

geometry shader adding vertices
[maxvertexcount(6)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> triStream)
{
	g2f o;
        //recalculate the normal
	float3 normal = normalize(cross(input[1].vertex - input[0].vertex, input[2].vertex - input[0].vertex));
	
        //draw the normal triangles, but extruded outwards
	for(int i = 0; i < 3; i++)
	{
		float4 vert = input[i].vertex;
		vert.xyz += normal * 0.5;  //math to move tri's out
		o.vertex = UnityObjectToClipPos(vert);
		UNITY_TRANSFER_FOG(o,o.vertex);
		o.uv = input[i].uv;
		o.normal = UnityObjectToWorldNormal(normal);
		o._ShadowCoord = ComputeScreenPos(o.vertex);
		#if UNITY_PASS_SHADOWCASTER
		o.vertex = UnityApplyLinearShadowBias(o.vertex);
		#endif
		triStream.Append(o);
	}
	triStream.RestartStrip();
	
        //process verts in reverse order, creating a backwards tri
	for(i = 2; i >= 0; i--)
	{
		float4 vert = input[i].vertex;
		vert.xyz += normal * 0.5;  //math to move tri's out
		o.vertex = UnityObjectToClipPos(vert);
		UNITY_TRANSFER_FOG(o,o.vertex);
		o.uv = input[i].uv;
                //invert normal
		o.normal = UnityObjectToWorldNormal(-1*normal);
		o._ShadowCoord = ComputeScreenPos(o.vertex);
		#if UNITY_PASS_SHADOWCASTER
		o.vertex = UnityApplyLinearShadowBias(o.vertex);
		#endif
		triStream.Append(o);
	}
	triStream.RestartStrip();
}

In this case, my adding vertices was fairly straightforward. More often, this is where people will add complex geometry to their objects. The most common example is starting with a simple plane, and using a geometry shader to add blades of grass to it.
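
To make the grass idea a little more concrete, here’s a rough sketch (my own guess at an approach, not a production grass shader) that emits one extra quad per input triangle, standing up from the triangle’s center. It reuses the v2g/g2f structs and the pass-through loop from the earlier snippet, and the blade width and height values are made up:

[maxvertexcount(7)]
void geomGrass(triangle v2g input[3], inout TriangleStream<g2f> triStream)
{
	// ... first, pass the original three vertices through, exactly as in the earlier snippet ...

	// object-space center of this triangle, and its face normal to grow the blade along
	float4 center = (input[0].vertex + input[1].vertex + input[2].vertex) / 3.0;
	float3 faceNormal = normalize(cross((input[1].vertex - input[0].vertex).xyz,
	                                    (input[2].vertex - input[0].vertex).xyz));
	float4 side = float4(0.05, 0, 0, 0);      // blade half-width (made-up value)
	float4 up = float4(faceNormal * 0.5, 0);  // blade height (made-up value)

	g2f o;
	o.uv = input[0].uv;
	// reuse the surface's face normal for lighting; a real grass shader would compute the blade's own normal
	o.normal = UnityObjectToWorldNormal(faceNormal);

	// the four corners of the blade quad
	float4 corners[4];
	corners[0] = center - side;
	corners[1] = center + side;
	corners[2] = center - side + up;
	corners[3] = center + side + up;

	// emit a 4-vertex strip (two triangles) forming the blade
	for (int j = 0; j < 4; j++)
	{
		o.vertex = UnityObjectToClipPos(corners[j]);
		UNITY_TRANSFER_FOG(o, o.vertex);
		o._ShadowCoord = ComputeScreenPos(o.vertex);
		triStream.Append(o);
	}
	triStream.RestartStrip();
}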

My title image for this post is another example. That multi-sphere image is actually just one sphere with a geometry shader that adds six more spheres at different sizes, positions, and colors.

Geometry Shader adding spheres during render

Example

To explore vertex manipulation options, I’ll dig into an example.  Below I have four different ways to do the same thing. Each succeeds to a varying degree, but I want to show you what happens as you mess with vertices in several different ways. The first three are code based, and the last is in Unity shader graph:

  1. Surface Shader with a Vertex program added
  2. Unlit Shader with a Vertex program
  3. Unlit Shader with both Vertex and Geometry programs added
  4. Unity Shader Graph 

All versions of the shader push the vertices of the sphere outwards horizontally in a spiky pattern.  Kinda like a star fruit, only less delicious looking.

star fruit image
Image by Christian Ananta from Pixabay

Logically, it’ll do this:

//a number that goes 0 to 8*PI around sphere
float fourFullCycles = 8 * PI * v.texcoord.x;
//sine of that number.  Will have four peaks and four valleys
float fourSineCycles = sin(fourFullCycles);
//absolute value of sine result.  Eight peaks
float eightBumps = abs(fourSineCycles);
//grow the sphere outwards horizontally based on inverted bumps (to make pointy ends)
v.vertex.xz *= 2 - eightBumps;

I’ll just condense that down to one line for the actual code samples.

Attempt 1 – Surface Shader

First we’ll try to do what we want in a surface shader because, surprisingly, we can add a vertex shader method to a surface shader program.  So make a regular sphere in a built-in-renderer project, then create a material and a surface shader (Create->Shader->Standard Surface Shader).

I won’t go into much detail here, because, well, it just doesn’t work right. This is an excerpt of the changes in the surface shader file:

#pragma surface surf Standard fullforwardshadows vertex:vert

void vert (inout appdata_full v) {
          v.vertex.xz *= 2 - ( abs( sin(25.13 * v.texcoord.x) ) );
}

This moves the vertices, but the lighting is all wrong.

As you can see in these two pictures, the sphere still casts a sphere-like shadow.  And in an even more mind-melting shot, the shadow on the ground somehow draws over the extruded shape everywhere except the original sphere region.

surface shader casting wrong shadow
surface shader rendering under ground shadow

Attempt 2 – Unlit Shader, Vertex Shader Edition

Next up we’ll take an Unlit shader, do some of our own simple lighting, and add a vertex shader. So it’s the same setup as before, but I’m starting with an Unlit shader on my sphere (Create->Shader->Unlit Shader).

I’ll add lighting as step one.  Then I’ll add the position multiplier that spikes out our sphere.

I’m not going to show the code for this shader just yet.  The next section, where I add the geometry shader, is an extension of this one, so I’ll show the code there; it will include this lighting.

All in all, this looks better.  The object can properly be both over and under shadows as you can see in the images below. 

unlit vertex shader with shadow over and under

There are some weird artifacts along the surface, but the bigger issue is that the normals have not been updated from what they were on the sphere.  If you look at the shape without a shadow over it, you can see that the spike pointing towards the light is brightest, even though its sides are at fairly sharp angles to the light.

So how would we update the normals?  As I mentioned in the intro section on geometry shaders, we can do it there.  That’s what I’ll show below.  

The main alternative would be to recalculate normals within the vertex shader.  This will often look better, but it’s also generally really hard. So at this stage of the example, we’re going to skip this option (we’ll revisit it later though).

Attempt 3 – Unlit Shader, Geometry Edition

In a geometry shader we can easily recalculate normals.  

float3 calculatedNormal = normalize(cross(input[1].vertex - input[0].vertex, input[2].vertex - input[0].vertex));

Here’s the full code for an unlit shader, with lighting & shadow added, as well as a geometry shader. 

Shader "Unlit/VertAndGeometry"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
		CGINCLUDE
            #include "UnityCG.cginc"
			#include "Autolight.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
				float4 normal : NORMAL;
                float2 uv : TEXCOORD0;
            };

            struct v2g
			{
                float4 vertex : POSITION;
				float3 normal : NORMAL;
                float2 uv : TEXCOORD0;
			};
            struct g2f
            {
                float2 uv : TEXCOORD0;
                UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
				float3 normal : NORMAL;
				unityShadowCoord4 _ShadowCoord : TEXCOORD2;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2g vert (appdata v)
            {
                v2g o;
				
				//move my verts
				float4 position = v.vertex;
				position.xz *= 2 - ( abs( sin(25.13 * v.uv.x) ) );
				
                o.vertex = position;
				o.normal = v.normal;
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                UNITY_TRANSFER_FOG(o,o.vertex);
                return o;
            }
			[maxvertexcount(3)]
            void geom(triangle v2g input[3], inout TriangleStream<g2f> triStream)
            {
                g2f o;
				float3 normal = normalize(cross(input[1].vertex - input[0].vertex, input[2].vertex - input[0].vertex));
				
                for(int i = 0; i < 3; i++)
                {
					float4 vert = input[i].vertex;
					o.vertex = UnityObjectToClipPos(vert);
                    UNITY_TRANSFER_FOG(o,o.vertex);
                    o.uv = input[i].uv;
					o.normal = UnityObjectToWorldNormal((normal));
					o._ShadowCoord = ComputeScreenPos(o.vertex);
		            #if UNITY_PASS_SHADOWCASTER
					o.vertex = UnityApplyLinearShadowBias(o.vertex);
					#endif
                    triStream.Append(o);
                }
 
                triStream.RestartStrip();
            }
		ENDCG
		
        Pass
        {
			Tags { "RenderType"="Opaque" "LightMode" = "ForwardBase"}
			LOD 100
            CGPROGRAM
            #pragma vertex vert
			#pragma geometry geom
            #pragma fragment frag
            #pragma multi_compile_fog
            #pragma multi_compile_fwdbase
            #pragma shader_feature IS_LIT

            fixed4 frag (g2f i) : SV_Target
            {
                // orangy color
                fixed4 col = fixed4(0.9,0.7,0.1,1);
				//lighting
				fixed light = saturate(dot(normalize(_WorldSpaceLightPos0.xyz), i.normal));
				float shadow = SHADOW_ATTENUATION(i);
				col.rgb *= light * shadow + ShadeSH9(float4(i.normal, 1));
                // apply fog
                UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }
            ENDCG
        }
		
        Pass
        {
        Tags { "RenderType"="Opaque" "LightMode" = "ShadowCaster" }
        LOD 100
		CGPROGRAM
            #pragma vertex vert
			#pragma geometry geom
            #pragma fragment fragShadow
            #pragma target 4.6
            #pragma multi_compile_shadowcaster
            float4 fragShadow(g2f i) : SV_Target
            {
                SHADOW_CASTER_FRAGMENT(i)
            }   
        ENDCG
	    }
    }
}

What does this give us?

rotating shape with geometry shader calculated normals

With the fixed normals, the wing pointing towards the light is dim, because it only catches light at a sharp angle. The wings pointing straight out are the brightest part of the object.  

This looks pretty good, other than the lack of normal smoothing.  There are a few possible fixes for the smoothing. Adding more tessellation (either to the source model, or within the tessellation stage of the render pipeline) is one way to make a curvy model look smoother.  Alternatively, if your model is particularly angular, you may not need or want to correct for this look.

Attempt 4 – Unity Shader Graph

We’ve seen what can be done in a fairly straightforward way in the built-in pipeline, writing code shaders.  Now let’s try it with shader graph.

A note for HDRP vs URP/LWRP: my sample below is set up for HDRP, where all normals are in Object Space. If you are using this shader in URP or LWRP, it should all be the same, except you’ll need a Transform node at the very end. Use a Transform node to convert from Object Space to Tangent Space just before the normals are fed into the Master Node.

I’m going to use the same logic to drive vertex position, which looks like this:

Shader Graph vertex position logic

Which gives us this:

Shader Graph result without normal correction

As you take a look, you’ll see that for the most part, this looks really good.  Better than “attempt 2” for sure. The shadows are right. The surface is smooth and well lit. But the normals. Oh the normals.  They are still not updated.

Unfortunately, in the graph we do not have the easy out that a geometry shader provides. So we have to do the hard thing I mentioned earlier: calculate normals based on math. In my case, I’m going to do a very rough approximation. I’m going to convert the xz of the normal vector into polar coordinates, rotate it clockwise or counterclockwise depending on which part of the sine wave I’m on, then convert it back to xz.  Luckily I can do the “convert back from polar coordinates” step with my handy Cartesian Coordinate node in the asset store.

A quick side note: two of the wings have some weird spikes in them. That’s due to some edge math being not quite right. I could fix it by complicating our math slightly, but I’ll ignore it for this demo.

Normal Calculation Logic

Let me walk through the process I used to get to the final graph. As a first step, I like to set up the framework without actually changing anything. So I converted the normals to polar and back, unchanged, just to make sure nothing was going weird with my conversion. In this image, I add 0 to the angle as a placeholder for the upcoming effort.

Shader graph normal rotation baseline

Next I split the result of my normal logic (currently unchanged), and fed it into the albedo color like so: [normal.x, -1*normal.x, 0]. This allowed me to play with rotating my normals in a way I could clearly see.

Tip: when driving data into color channels to debug, make an unlit shader so lighting won’t alter your colors.
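
In code terms, that debug trick is just the fragment shader returning normal data as a color instead of doing any lighting, something like this (a throwaway sketch, not part of the final shader):

// positive normal.x shows up as red, negative normal.x shows up as green
return fixed4(i.normal.x, -1 * i.normal.x, 0, 1);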

Now, with normal.x in the color channel, I used the Add node on the normal’s angle and started playing with the numbers. With these nodes, I discovered that an angle of 1 represents 360 degrees (I had forgotten until I started playing with it). So adding 0.167 rotated the normals about 60 degrees.

Now I need to make it rotate one direction while the vertices are on their way out, and the other while they are on their way in. Luckily, my vertex movement is based on eight pi, so I can easily determine the direction using sixteen pi.  It turns out, after playing with it, that I also want to scale down the rotation towards the top and bottom of the sphere, so I added in some logic based on uv.y as well. The logic becomes:

//determine the 16*PI pattern around the sphere
float horizRot = clamp(sin(uv.x * 16 * PI)*2, -1, 1) * -0.16;
//make a scaled and offset Y coordinate
float expandedY = uv.y*2.5-0.25;
//the two saturates give us a 0->1->0 range.
float finalAdd = horizRot * saturate(expandedY) * saturate(2-expandedY);

That finalAdd variable is what’s fed into the Add node I put in the previous graph image.
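
If you’d rather see the whole normal-rotation chain as code than as nodes, here’s a rough HLSL approximation of what the graph is doing (my own translation, so treat the signs and constants as illustrative; UNITY_PI comes from UnityCG.cginc):

float3 rotateNormalXZ(float3 normal, float2 uv)
{
	// convert the normal's xz into polar coordinates
	float radius = length(normal.xz);
	float angle = atan2(normal.z, normal.x);   // in radians

	// rotation amount, as derived above; the graph works in "turns" (1 = 360 degrees),
	// so convert finalAdd to radians here
	float horizRot = clamp(sin(uv.x * 16 * UNITY_PI) * 2, -1, 1) * -0.16;
	float expandedY = uv.y * 2.5 - 0.25;
	float finalAdd = horizRot * saturate(expandedY) * saturate(2 - expandedY);
	angle += finalAdd * 2 * UNITY_PI;

	// convert back from polar coordinates to xz
	normal.xz = radius * float2(cos(angle), sin(angle));
	return normalize(normal);
}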

Shader graph normal rotation logic

With that in place, I’ll set the albedo back to the orangish color I had before, and take a look at the finished product.

Final shader graph image result

Wrapping up

To summarize my four attempts to move vertices:

  1. A vertex function inside a surface shader: complete disaster. It does have the clear advantage of being super easy to create, so if your scenario happens to look ok, this could be a good option.
  2. An unlit shader with custom lighting: works relatively well. The clear downside is the requirement to use hard math to calculate adjusted normals.  The less obvious downside is doing my own lighting; that’s also hard to make look really good.
  3. A geometry shader added to #2: makes the normal calculation much easier, at the expense of a loss of smoothing. Shares the downside of self-managed lighting.
  4. Shader graph: similar to #2, I have to calculate my own normal adjustments.  Unlike #2, however, I get really good lighting for free.

All in all, you have many options when messing with vertex positions.  Each option has some benefits and drawbacks. What’s right for you will depend on your situation.

If you find yourself unable to use the kind of normal calculation done in this sample, be sure to look at the next article in this series. How to calculate shader graph normals walks through a more general solution that can apply to almost any geometry modification.

My overall goal today was to get you familiar with these two types of shaders so that future tutorials would have a foundation to build on. This wound up being much longer than I expected, and I still feel like there is much more to say.  I look forward to continuing to dive into these depths together with you.

Do something. Make progress. Have fun.

3 comments on Unity Vertex Shader and Geometry Shader Tutorial

  1. Hello, I found this a great explanation of the geometry and vertex shaders.
    I have been wondering for some time if there is any way to generate new geometry for a mesh using shader Graph.

    please give some help

    1. Well, you can’t directly do that. Though I’m hoping the shader graph folks fix that.

      For now, I have one idea I’ve been meaning to experiment with, but haven’t yet. That is to use shader graph, then get its generated shader file and hand edit that to add a geometry shader. I have no idea if that’ll work. Just something I want to try. If I get something working I’ll be sure to write up a post about it.

Comments are closed.