After creating a raytracer, I also learned how to implement a rasterizer. The rasterizer supports rendering meshes, applying textures and interpreting effect maps (normal, specular, glossiness). You can move the camera around and adjust the FOV. In contrast to a raytracer, we don't shoot rays through each pixel; instead, we project the geometry onto the screen and check whether a pixel lies inside the projected geometry.
This is the first stage of the rasterization pipeline. It's all about projecting 3D geometry onto our 2D view plane.
The vertex positions are transformed to screen space by taking the camera View matrix and the camera Projection (FOV & frustum) matrix into account.
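As a rough sketch (using minimal stand-ins for the project's math types, not the actual code), that transform could look like this:

#include <array>

// Minimal stand-ins: a 4x4 row-major matrix and a 4-component position.
using Mat4 = std::array<std::array<float, 4>, 4>;
struct Vec4 { float x, y, z, w; };

Vec4 Multiply(const Mat4& m, const Vec4& v)
{
    return { m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z + m[0][3] * v.w,
             m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z + m[1][3] * v.w,
             m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z + m[2][3] * v.w,
             m[3][0] * v.x + m[3][1] * v.y + m[3][2] * v.z + m[3][3] * v.w };
}

// World space -> view space -> clip space -> NDC -> screen space.
Vec4 WorldToScreen(const Vec4& worldPos, const Mat4& view, const Mat4& projection,
                   float screenWidth, float screenHeight)
{
    Vec4 p = Multiply(projection, Multiply(view, worldPos));

    // Perspective divide: clip space -> normalized device coordinates.
    p.x /= p.w; p.y /= p.w; p.z /= p.w;

    // NDC ([-1, 1]) -> pixel coordinates; y is flipped because screen y points down.
    p.x = (p.x + 1.f) * 0.5f * screenWidth;
    p.y = (1.f - p.y) * 0.5f * screenHeight;
    return p;
}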
Here, we also rearrange the vertices to satisfy the triangle topology (list or strip).
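A sketch of how the index buffer could be walked for both topologies (the names here are illustrative, not the project's actual code):

#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

enum class PrimitiveTopology { TriangleList, TriangleStrip };

// Returns the three vertex indices of triangle 'i' for the given topology.
// For a strip, every odd triangle swaps two indices to keep a consistent winding order.
std::array<uint32_t, 3> GetTriangleIndices(const std::vector<uint32_t>& indices,
                                           std::size_t i, PrimitiveTopology topology)
{
    if (topology == PrimitiveTopology::TriangleList)
        return { indices[i * 3], indices[i * 3 + 1], indices[i * 3 + 2] };

    // TriangleStrip: triangle i is built from indices i, i+1 and i+2.
    if (i % 2 == 0)
        return { indices[i], indices[i + 1], indices[i + 2] };
    return { indices[i], indices[i + 2], indices[i + 1] };
}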
After projecting the geometry onto the 2D view plane, we enter the rasterization stage. Here, we check whether a pixel in the 2D view plane overlaps the projected geometry.
In order to get meaningful depth data for a triangle, we need to interpolate the z-coordinates of the vertices using the vertices' weights. (In other words: calculate the depth of the hit point on the triangle, not of the vertices.)
1 / (v0.weight / v0.z + v1.weight / v1.z + v2.weight / v2.z);
Now the depth value is mapped between 0 and 1. Most other attributes should be interpolated in the same perspective-correct way.
Lastly, we check the depth buffer to see whether the current pixel already stores a value closer to the camera. If not, we update the depth buffer with the value we just calculated.
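Putting the interpolation and the depth test together, a sketch could look like this (the buffer layout and names are assumptions, not the project's actual code):

#include <vector>

// weights[] are the barycentric weights of the pixel inside the triangle,
// z[] are the depths of the three projected vertices.
bool DepthTest(std::vector<float>& depthBuffer, int pixelIndex,
               const float weights[3], const float z[3])
{
    // Perspective-correct depth: interpolate 1/z, then take the reciprocal.
    const float interpolatedDepth =
        1.f / (weights[0] / z[0] + weights[1] / z[1] + weights[2] / z[2]);

    if (interpolatedDepth >= depthBuffer[pixelIndex])
        return false;                            // something closer was already drawn here

    depthBuffer[pixelIndex] = interpolatedDepth; // this pixel is now the closest hit
    return true;                                 // the caller can go on to shade the pixel
}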
For every triangle, we calculate a 2D bounding box. For every pixel in that box, we check whether its center lies within the triangle. As you can see on the left, the pixel lies within the triangle when the result of each Cross product, divided by the total triangle area, lies between 0 and 1. This value is the 'weight' of the vertex you Crossed from. We need these weights for the depth test.
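A sketch of that inside test for a single pixel, assuming a small 2D vector helper (names and the weight convention are illustrative):

struct Vec2 { float x, y; };

// 2D cross product of edge (a -> b) with (a -> p): a signed area.
float Cross2D(const Vec2& a, const Vec2& b, const Vec2& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Returns true when pixel center p lies inside triangle v0-v1-v2 and fills in
// the barycentric weights we need later for the depth/attribute interpolation.
bool PixelInTriangle(const Vec2& p, const Vec2& v0, const Vec2& v1, const Vec2& v2,
                     float& w0, float& w1, float& w2)
{
    const float area = Cross2D(v0, v1, v2);   // total signed area of the triangle
    w0 = Cross2D(v1, v2, p) / area;           // weight of v0, the vertex opposite edge v1-v2
    w1 = Cross2D(v2, v0, p) / area;
    w2 = Cross2D(v0, v1, p) / area;

    // Inside when every weight lies between 0 and 1 (the three weights always sum to 1).
    return w0 >= 0.f && w1 >= 0.f && w2 >= 0.f;
}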
In this stage we handle everything important for coloring the pixels. Mainly, we sample textures to color an object, and we sample normal maps, specular maps and glossiness maps to manipulate the lighting.
To find the observed area, we take the same approach as with the raytracer. This time we just have a directional light. By taking the Dot product of the opposite (negated) light direction with the surface normal, we get a value that represents how directly the light hits the hit point.
We use a diffuse texture to color our object. We specifically want a Lambert diffuse result. By multiplying the UV sampled color with the light intensity and dividing by Pi, we get the Lambert diffuse value. Multiply this by the observed area and we get a nicely colored object.
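Roughly, the observed area and the Lambert diffuse term could look like this (the small color/vector helpers are stand-ins, not the project's own types):

#include <algorithm>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Observed area: how directly the directional light hits the surface.
float ObservedArea(const Vec3& normal, const Vec3& lightDirection)
{
    const Vec3 toLight{ -lightDirection.x, -lightDirection.y, -lightDirection.z };
    return std::max(0.f, Dot(normal, toLight));   // clamp: surfaces facing away receive no light
}

// Lambert diffuse: sampled texture color * light intensity / pi, scaled by the observed area.
Color LambertDiffuse(const Color& sampledColor, float lightIntensity, float observedArea)
{
    const float kd = lightIntensity / 3.14159265f * observedArea;
    return { sampledColor.r * kd, sampledColor.g * kd, sampledColor.b * kd };
}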
We use specular and gloss textures to add some shininess to the object. In my implementation, I use Phong formulas to handle this. First we calculate the reflected light direction over the surface normal. Then, we Dot this with the view direction.
We sample the grayscale value from the gloss texture and use this as the exponent on our previous result. Lastly, we multiply all of this by the sampled Specular value.
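A sketch of that Phong term, assuming the view direction points from the hit point towards the camera (names are illustrative, not the project's actual code):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Phong specular: reflect the incoming light direction over the surface normal,
// compare it with the view direction, raise the result to the gloss exponent
// and scale it by the sampled specular value.
float PhongSpecular(const Vec3& lightDirection, const Vec3& viewDirection, const Vec3& normal,
                    float specularSample, float glossExponent)
{
    // reflected = l - 2 * (n . l) * n
    const float nDotL = Dot(normal, lightDirection);
    const Vec3 reflected{ lightDirection.x - 2.f * nDotL * normal.x,
                          lightDirection.y - 2.f * nDotL * normal.y,
                          lightDirection.z - 2.f * nDotL * normal.z };

    const float cosAlpha = std::max(0.f, Dot(reflected, viewDirection));
    return specularSample * std::pow(cosAlpha, glossExponent);
}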
To manipulate the way light acts upon a surface, we can use normal maps. Every pixel sampled from the texture holds a three-component value (RGB), but we can interpret this as a (normal) vector.
Because we want the normal components to have a range from -1 to 1, we first remap the sample (result * 2 - 1). Then, we need to transform the result from tangent space into world space: Cross the normal and tangent vectors to get the bi-normal, and multiply the sample with the tangent-space (TBN) matrix built from the tangent, bi-normal and normal.
Now we can use this resulting normal instead of the interpolated vertex normal in our lighting calculations.
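A sketch of the remap and the tangent-space transform (the vector type and names are assumptions, not the project's actual code):

struct Vec3 { float x, y, z; };

Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// normalSample is the RGB value read from the normal map, each channel in [0, 1].
// normal and tangent are the interpolated vertex normal and tangent (assumed normalized).
Vec3 SampleNormalMap(const Vec3& normalSample, const Vec3& normal, const Vec3& tangent)
{
    // Remap each channel from [0, 1] to [-1, 1].
    const Vec3 n{ normalSample.x * 2.f - 1.f,
                  normalSample.y * 2.f - 1.f,
                  normalSample.z * 2.f - 1.f };

    // Build the tangent-space basis: bi-normal = normal x tangent.
    const Vec3 binormal = Cross(normal, tangent);

    // Multiply the sample with the TBN matrix (columns: tangent, bi-normal, normal)
    // to bring it from tangent space into world space.
    return { n.x * tangent.x + n.y * binormal.x + n.z * normal.x,
             n.x * tangent.y + n.y * binormal.y + n.z * normal.y,
             n.x * tangent.z + n.y * binormal.z + n.z * normal.z };
}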