Developing this project gave me my first experience working with the GPU. Using DirectX11 and HLSL, I implemented shaders and loaded them into the rendering pipeline. I also implemented transparency handling using the blend modes provided by DirectX11.
The DirectX11 rendering pipeline defines several shader stages. In this project, I used the Vertex and Pixel shader stages, both written in HLSL. The Geometry shader is typically used when you need to render fur, grass and similar effects, while the rasterizer stage largely does what I had to implement myself in the Software Rasterizer project.
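As a rough illustration of what loading shaders looks like with plain D3D11 (a minimal sketch, not this project's actual code; the file name and entry-point names are made up), an HLSL vertex/pixel shader pair can be compiled and created like this:

```cpp
#include <d3d11.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile an HLSL file and create the vertex/pixel shader objects.
// "PosCol3D.fx", "VS" and "PS" are placeholder names for this sketch.
bool CreateShaders(ID3D11Device* pDevice,
                   ID3D11VertexShader** ppVS, ID3D11PixelShader** ppPS,
                   ID3DBlob** ppVSBlob /* kept alive for input-layout creation */)
{
    ID3DBlob* pPSBlob{ nullptr };

    // Vertex shader: entry point "VS", shader model 5.0
    if (FAILED(D3DCompileFromFile(L"PosCol3D.fx", nullptr,
            D3D_COMPILE_STANDARD_FILE_INCLUDE, "VS", "vs_5_0",
            0, 0, ppVSBlob, nullptr)))
        return false;

    // Pixel shader: entry point "PS", shader model 5.0
    if (FAILED(D3DCompileFromFile(L"PosCol3D.fx", nullptr,
            D3D_COMPILE_STANDARD_FILE_INCLUDE, "PS", "ps_5_0",
            0, 0, &pPSBlob, nullptr)))
        return false;

    pDevice->CreateVertexShader((*ppVSBlob)->GetBufferPointer(),
                                (*ppVSBlob)->GetBufferSize(), nullptr, ppVS);
    pDevice->CreatePixelShader(pPSBlob->GetBufferPointer(),
                               pPSBlob->GetBufferSize(), nullptr, ppPS);
    pPSBlob->Release();
    return true;
}

// At draw time the shaders are bound to their pipeline stages:
//   pContext->VSSetShader(pVS, nullptr, 0);
//   pContext->PSSetShader(pPS, nullptr, 0);
```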
When reading a texture, there are multiple ways to sample a color from it. The most basic option is the Point filter, which simply returns the color of the texel the UV coordinate lands on. Point filtering can make projected textures look pixelated in some cases.
If you want smoother color blending, the Linear filter returns a mix of the texels surrounding the UV coordinate.
When using the Anisotropic filter, the sampling area adapts to the shape the textured polygon takes on screen, which keeps surfaces sharper at steep viewing angles.
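A minimal sketch of how these filter modes map to D3D11 sampler states (the address mode, anisotropy level and sampler slot are illustrative assumptions, not taken from the project):

```cpp
#include <d3d11.h>

// Create a sampler state for one of the three filter modes discussed above.
ID3D11SamplerState* CreateSampler(ID3D11Device* pDevice, D3D11_FILTER filter)
{
    D3D11_SAMPLER_DESC desc{};
    desc.Filter = filter;                        // point / linear / anisotropic
    desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;  // wrap addressing assumed for illustration
    desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxAnisotropy = 16;                     // only used by the anisotropic filter
    desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    desc.MinLOD = 0.f;
    desc.MaxLOD = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* pSampler{ nullptr };
    pDevice->CreateSamplerState(&desc, &pSampler);
    return pSampler;
}

// Usage: pick a filter and bind it to the pixel shader stage (slot 0 assumed).
//   auto* pPoint  = CreateSampler(pDevice, D3D11_FILTER_MIN_MAG_MIP_POINT);
//   auto* pLinear = CreateSampler(pDevice, D3D11_FILTER_MIN_MAG_MIP_LINEAR);
//   auto* pAniso  = CreateSampler(pDevice, D3D11_FILTER_ANISOTROPIC);
//   pContext->PSSetSamplers(0, 1, &pPoint);
```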
With the DirectX11 API, we are able to render textures that use transparency. When you want to render something transparent, you need to create a separate HLSL shader for it.
There are two types of transparency, namely Transmission and Partial Coverage.
In this project we use Partial Coverage to render the fire behind the vehicle.
When using transparency, we have to specify a blend mode that tells DirectX11 how the transparent pixel should affect the final image. DirectX11 handles this by combining a source color with a destination color: you can interpret the destination color as the background and the source color as the actual texture sample.
Result = Color_src x BlendFactor_src + Color_dst x BlendFactor_dst
The blend factors scale the source and destination colors, and the blend operation determines how the two results are combined.
For the vehicle's fire effect, we use src_alpha as the source blend factor and inv_src_alpha as the destination blend factor, and add the two terms to get the desired result.
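In D3D11 this corresponds to a blend state along the lines of the sketch below. Only the src_alpha / inv_src_alpha / add combination comes from the text above; the alpha-channel blend settings and the helper name are assumptions:

```cpp
#include <d3d11.h>

// Blend state for standard alpha blending:
//   Result = Color_src * SrcAlpha + Color_dst * (1 - SrcAlpha)
ID3D11BlendState* CreateAlphaBlendState(ID3D11Device* pDevice)
{
    D3D11_BLEND_DESC desc{};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;   // add the two terms
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;      // alpha-channel handling assumed
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* pBlendState{ nullptr };
    pDevice->CreateBlendState(&desc, &pBlendState);
    return pBlendState;
}

// Bound before drawing the fire mesh:
//   pContext->OMSetBlendState(pBlendState, nullptr, 0xFFFFFFFF);
```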
Since transparency lets objects remain visible through the transparent surface, the depth buffer becomes a problem: if the transparent surface writes its depth, it would incorrectly occlude the geometry behind it.
We can fix this by specifying the depth-stencil state in the HLSL shader. For our transparent objects, we still want to perform the depth test, but we don't want to write to the depth buffer.
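A sketch of that depth-stencil configuration expressed with the D3D11 structures (whether it is declared in the effect file or created from C++ is an implementation detail; the comparison function and helper name here are assumptions):

```cpp
#include <d3d11.h>

// Depth-stencil state for transparent objects: the depth test stays on,
// but depth writes are disabled so geometry behind the transparent
// surface is not occluded by it.
ID3D11DepthStencilState* CreateTransparentDepthState(ID3D11Device* pDevice)
{
    D3D11_DEPTH_STENCIL_DESC desc{};
    desc.DepthEnable    = TRUE;                         // keep the depth test
    desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // but do not write depth
    desc.DepthFunc      = D3D11_COMPARISON_LESS;
    desc.StencilEnable  = FALSE;                        // stencil unused in this project

    ID3D11DepthStencilState* pState{ nullptr };
    pDevice->CreateDepthStencilState(&desc, &pState);
    return pState;
}

// Bound before drawing the transparent pass:
//   pContext->OMSetDepthStencilState(pState, 0);
```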
A stencil buffer is used to mask pixels in an image to produce special effects; the mask controls whether a pixel is drawn or not. We don't use this functionality in this project.