In my last tutorial (vertex & geometry shaders in Unity), I went over the basics of manipulating vertices in both vertex and geometry shaders. If you remember, the challenge that kept arising was how to deal with normals. Especially shader graph normals. I had some solutions, but they weren’t as robust as I might like. Today we’ll explore how to fix normals, and how not fixing them can affect your lighting.
I am not strange. I am just not normal – Salvador Dali
Update – I’ve created a video version of this tutorial, available here.
Proper normals are critical to having proper lighting. Below I’ve bent a tree trunk, and show both fixed and not fixed normals.
This article may contain affiliate links, mistakes, or bad puns. Please read the disclaimer for more information.
Update: I’ve created a GitHub repo for sample projects. It has one called NormalCalculation that matches the contents of this tutorial. Be sure to follow me on twitter or (and?) subscribe on this page to be informed of updates like this.
Update 2: I’ve added a NormalCalc_URP project to the above github sample, as the original one was just for HDRP. Also, I’ve had to disable comments here, but you can reach out on the forums, or on twitter with questions.
Not Just For Graphs
While the title of this post mentions Shader Graph, this technique isn’t actually graph specific. I chose this title because one of the main alternatives to this technique (using a geometry shader) isn’t available in a graph.
Given this reality, I’m showing all the logic below both in code and in graph form. This has the added benefit of showing the logic in a way that (I believe) is easier to explain.
There will be some subtle differences, like my code below uses float3’s for position because that’s what the graph has, even though a true coded shader would have a float4. These differences should be easy to spot if you start copy/pasting this code into your own shader.
All the shader graph logic will work in both HDRP and URP/LWRP.
Defining Our Normal Problem
First let’s make sure we are all on the same page about what I mean by “normals” and why they might need calculating.
Normals, or more formally normal vectors, are vectors of length 1 that are perpendicular to the surface of the object. Here’s the helpful image from wikipedia showing what normals on a curved surface would look like.
When you are in a vertex shader, normals are fed in by the engine from the mesh. In addition, while in a fragment shader, you can add per-fragment normal data from a normal map. I talk about this generally in the colors as math post. In shader graph, the master node has separate inputs for per-vertex normals and normal map. This article is just talking about dealing with the per-vertex normals.
So, if we have all this normal data already, why do we need to calculate normals?
This can be illustrated with that same image above. Imagine the source mesh is a flat plane and our vertex shader is moving vertices up or down to give the plane a wavy surface. If we didn’t fix the normals, all those blue arrows would just point straight up. But they need to point out from the adjusted surface as they do in that drawing.
Why They Matter
Normals are used to calculate lighting. After moving vertices, you can end up with the wrong parts of your surface being bright, dim, or shiny. In the screenshot below I bend the tree trunk into a half circle. This causes geometry that was at the top, facing away from the light, to end up at the far end of the arc, facing directly towards the light. Without fixing the normals, that section is dark. With them fixed, it’s bright.
Adjusting Provided Normals
At a high level, you can solve this problem either by adjusting the normals that were passed in, or recalculating them from scratch.
In the last tutorial I chose the adjustment option when in shader graph. It worked fine in that case because it was easy to extend the vertex math to control normals. Unfortunately you’ll often find your vertex math too complex for this.
In addition to the complexity issue, this solution is cumbersome as it has to be custom written each time you change your manipulation logic. This makes it much harder to maintain and modify existing shaders.
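For a concrete sense of what that adjustment looks like, here’s a minimal sketch for the flat-plane-plus-sine-wave case described earlier. The amplitude/frequency values and the assumption that the original normal points straight up are mine for illustration, not the previous tutorial’s exact logic:

// Flat plane in object space, original normal assumed to be (0,1,0).
// amplitude and frequency are illustrative values.
float amplitude = 0.5;
float frequency = 4.0;

// Displace the vertex up/down with a sine wave.
position.y += amplitude * sin(frequency * position.x);

// The displaced surface's slope along x is amplitude * frequency * cos(frequency * x),
// so tilt the normal against that slope and renormalize.
normal = normalize(float3(-amplitude * frequency * cos(frequency * position.x), 1, 0));

Even for this trivial case the normal math has to be derived by hand, which is exactly why it gets painful as the deformation grows more complex.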
Recalculate Normals
The other option is to get access to enough points on the surface to actually calculate the normal. If you remember your high school math class (trigonometry? geometry? shape-ometry?) all you need is your current point, and two more (that aren’t all collinear). I highly recommend keeping this cheat sheet with you at all times. From there, you can calculate two vectors on the surface (a and b below), and use the cross product to calculate the normal.
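In shader code that boils down to just a couple of lines. Here p is your current point and p1, p2 are the two extra points (placeholder names, matching the a and b vectors mentioned above):

// Two vectors lying on the surface (the "a" and "b" from above).
float3 a = p1 - p;
float3 b = p2 - p;
// The cross product is perpendicular to both surface vectors, i.e. the normal.
float3 n = normalize(cross(a, b));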
In the last tutorial, I was able to use this technique inside geometry shaders because a geometry shader always has a full triangle. In shader graph we don’t have access to neighbor information. Instead we need to predict where some neighbors might be.
How To Calculate Shader Graph Normals
The steps to this process are:
- Approximate original-surface neighbors
  - Find another point on the plane represented by the normal
  - Find two coplanar vectors
  - Find two points, equidistant from the original point
- Deform position of neighbors & original vertex
- Calculate normal
If your deformation logic is expensive, you’ve just tripled its cost, which is the main downside of this technique. The upside is that it works in almost all cases, and will generally give you better looking (more correct) results than the first setup.
Overall there’s a lot going on here, so I’ve included a screenshot of the full graph. It’s not very useful if you’re trying to see individual nodes, but it helps when wiring up the overall graph. Also, as noted earlier, this project is on github, so you can follow along with that as well.
Step 1: Approximate Original-Surface Neighbors
Our first step will be to get some neighbors. Since we don’t have access to real neighbors, we need math to imagine some. If we were on a plane, it’d be pretty easy. In object space, we’d just add some small offset n to the x and z to get our new spots: position + (n,0,0) and position + (0,0,n).
But our shader is running on a sphere. Which means we’d like to get neighbors that are some small distance around the sphere’s longitude and latitude. And what if the object was more complex? Things get even harder to determine.
To solve this, we rely on the provided normal. This input is a vector perpendicular to the surface. Even though the surface may not be flat, the normal somewhat pretends it is. We can continue that pretending by finding spots along the surface (the plane) that the normal represents. Below is an illustration of this, with blue being the normals, and red being our approximate neighbors. The shorter we make the red arrow, the closer to on-surface the red dots will be.
Points and Planes
Things get pretty mathy here, so I’ll try to zip through it. First off, the information we have is a normal vector <a,b,c> and a point (x,y,z). Those two pieces of data uniquely define a plane. What we need to do is find two other points on that plane, and use them to calculate a new normal.
- Approximate original-surface neighbors
  - Find another point on the plane represented by the normal
  - Find two coplanar vectors
  - Find two points, equidistant from the original point
These two new points are what we will later deform.
Step 1.1: Find another point on the plane represented by the normal
This is the longest step in our entire process, because the math is a bit complex and somewhat hard to translate into shader graph. We’ll first compute the plane’s constant d from the plane equation (ax + by + cz = d), then use it to calculate another point on the plane (position1). Note this point will not be a consistent distance from our original point, which is why we’ll later normalize the distances.
float d = normal.x * position.x + normal.y * position.y + normal.z * position.z;
float offset = 0.05; // some small value
float3 position1; // <-- this is that first "other point" we want
if (normal.x == 0)
{
    float x1 = position.x + offset;
    position1 = float3(x1, position.y, position.z);
}
else
{
    float y1 = position.y + offset;
    float x1 = (d - normal.y * y1 - normal.z * position.z) / normal.x;
    position1 = float3(x1, y1, position.z);
}
You’ll notice I have an if statement in the above code. As discussed in previous posts, code with if branches will generally be easier to read, but inefficient in execution. If statements are also a huge pain to do in shader graph. The fix is generally to calculate both legs, and then do a lerp and smoothstep to combine them. In our case, that would result in a divide by zero, so we need to first make an alternate version of normal.x. We turn what was a straight line through zero into more of a v shape with the bottom flattened out:
float segment1 = abs(normal.x);
float segment2 = segment1 - 0.001;
float segment3 = saturate(segment2);
float fixedNormalX = segment3 + 0.001;
float signOfX = sign(normal.x);

// and replace:
//float x1 = (d - normal.y * y1 - normal.z * position.z) / normal.x;
// with
float x1 = signOfX * (d - normal.y * y1 - normal.z * position.z) / fixedNormalX;
Below is this step converted to oh-so-many graph nodes. Click any graph image for a zoomed in view.
I replaced the 0.001 with 0.1 so you can clearly see it’s never zero in the “fixed normal.x” below.
This gives us the two possible values for position1, and we need to use a smoothstep and lerp to emulate the if statement. The logic is like so:
//(abs(normal.x) - 0.001) is actually "segment2" from earlier logic...
float equalZero = smoothstep(0, 0.01, -1 * (abs(normal.x) - 0.001));
position1 = lerp(position1WhenXNonZero, position1WhenXZero, equalZero);
Step 1.2: Find two coplanar vectors
Given the first extra point we found in step 1a, we can calculate a normalized vector pointing towards it. From there we can get another normalized vector on that same plane, but perpendicular to our first one.
float3 vector1 = normalize(position1 - position);
float3 vector2 = normalize(cross(normalize(normal), vector1)); // do I need to normalize(normal)? IDK. safety first!
Step 1.3: Find two points, equidistant from the original point
Scaling those two normalized vectors down, and adding them to our original position gives us two new points. This vectorScale value may require some tweaking depending on your model. More on this in Step 2.
float vectorScale = 0.01;
float3 neighbor1 = position + vector1 * vectorScale;
float3 neighbor2 = position + vector2 * vectorScale;
Step 2: Deform position of neighbors & original vertex.
Now we will move the original vertex position and these two neighbors using the same logic. Since the neighbors were roughly on the original surface, they will roughly be on the new, modified surface.
Move logic into sub-graph
Since we need to do this movement logic three times, it’s a good idea to move it into a sub-graph. To do so, we need to make sure it’s controlled by the fewest inputs possible. Ideally just vertex position and any constants you have fed into your shader as a whole.
Up until now in this tutorial, I’ve been showing code/graphs that are generic for any normal calculation. The deformation logic, obviously, is not generic. I’m going to use some logic similar to what I did in the last tutorial. I won’t do the exact same logic, for reasons I get into near the end of the tutorial. Instead it’ll be a puffy plus. Kinda like a weird puffy health pack!
position.xz *= 2 + sin(8 * 3.1415 * uv.x);
Here I’m using UVs, which complicates life (how do we get our pretend neighbor’s UV???). So I want to stop that. In the above logic, uv.x indicated how far around the sphere we were horizontally. Luckily we can calculate this using the position in object space and the Polar Coordinates shader graph node. So our logic becomes this:
float2 radius_angle = PolarCoordinates(position.xz);
position.xz *= 2 + sin(8 * PI * radius_angle.y);
To finalize this step, we select all the nodes in the above image except the Position input, right click, and select “Convert To Sub-Graph”. This will make a sub-graph that takes a position as input, and returns an adjusted position as output. For more info on sub-graphs, see my post on custom shader graph nodes.
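In code terms, the resulting sub-graph behaves like a single function: a position goes in, a deformed position comes out. Here’s a rough sketch of that idea; the PolarCoordinates helper below is my stand-in for the graph node (with my own guesses at centering and scale), not Unity’s exact implementation:

// Rough stand-in for the Polar Coordinates graph node, centered on the object origin.
float2 PolarCoordinates(float2 p)
{
    float radius = length(p) * 2;
    float angle = atan2(p.x, p.y) / 6.28318;
    return float2(radius, angle);
}

// The "SimpleDeform" sub-graph as one function: position in, deformed position out.
float3 SimpleDeform(float3 position)
{
    const float PI = 3.14159265;
    float2 radius_angle = PolarCoordinates(position.xz);
    position.xz *= 2 + sin(8 * PI * radius_angle.y);
    return position;
}

Keeping the sub-graph down to a single position input is what lets us reuse it on the neighbors in the next step.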
This is the resulting shape without fixed normals.
Move the three positions
With the deformation logic in a sub graph, we can calculate where our position would be, and where the neighbors would move to if they were really on the surface.
position = SimpleDeform(position);
neighbor1 = SimpleDeform(neighbor1);
neighbor2 = SimpleDeform(neighbor2);
Step 3: Calculate Normal
With all three points moved we can find vectors pointing from our new position to its new neighbors. Doing a cross product of those will result in our new normal.
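In code form that final step is short. The deformedPosition/deformedNeighbor names below are mine, and if the result points into the surface instead of out of it, swap the cross product’s arguments:

// deformedPosition, deformedNeighbor1, deformedNeighbor2 are the three points
// that came out of SimpleDeform in step 2 (illustrative names).
float3 toNeighbor1 = normalize(deformedNeighbor1 - deformedPosition);
float3 toNeighbor2 = normalize(deformedNeighbor2 - deformedPosition);
float3 newNormal = normalize(cross(toNeighbor1, toNeighbor2));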
We feed this into the Vertex Normal input of the master node, and we wind up with a weird health pack that has much better lighting.
When This Needs Modification
This technique has one main situation where it runs into trouble: Whenever there are very sharp turns at a vertex. The star fruit shader math from the last tutorial is a good example. It pushes the vertices out in spikes. At the tip of each spike, the edges are sharp, and have a quick return on both sides. This means that along the sharp edge, we need the normal to be pointing straight out. If we use the technique above, which grabs a point just to the side of the current one, that point will be a good bit back along the spike’s edge.
A solution that will get us pretty close to correct in these spike scenarios is just to calculate our normal multiple times, and average the result. For the star fruit example, it works fairly well to toss a negative in the early calculations for position1 (in both legs of the if).
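As a rough sketch of that averaging, assuming you’ve run the full calculation twice with the neighbor offsets flipped the second time (names here are illustrative, not from the graph):

// normalA and normalB are two runs of the calculation above, the second
// with the neighbor offsets negated.
float3 averagedNormal = normalize(normalA + normalB);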
Conclusion
There are lots of reasons you’ll find yourself manipulating vertices in a shader. In almost all of those cases, correct lighting will not just magically happen. Sometimes it’ll be close enough to not care, but often it’ll look noticeably better once fixed up.
Do something. Make progress. Have fun.
Great job to make a tutorial on this! Is there a chance you could upload a higher res version of the final graph, or the final file(s)? I’ve made a mistake somewhere and am having trouble tracking it down.
Thanks for the feedback!
It’ll likely be a couple days, but I’ll put something together.
Ok, it’s up: https://github.com/gamedevbill/Tutorials.
If you’re still stuck, comment here or reach out on Twitter @gamedevbill. Though if you think a back & forth is needed, perhaps reply to my post on the Unity forums (https://forum.unity.com/threads/tutorial-how-to-calculate-vertex-normals-in-shader-graph.833710/) as that’s an easy platform for conversation.
Very nice work (even if i understand 1/8 of it), i was able to adapt it to my shader thingy stuff and it looks as i wanted it for now. Big thanks, you rock !
Glad to hear it! Thanks for the encouragement 🙂
Thanks a lot for the detailed explanation, Bill! I started dipping my toes in writing shaders just several days ago, and yesterday I stumbled for the first time upon this problem. Will take me some time to unpack all the logic here, but it was easier enough to follow to build the graph step by step and see the correct results.
Glad to hear it. Thanks for the feedback