Volumetric lighting


Showcase of volumetric lighting.

(My Implementation)

How it looks in RDR2.

(Red Dead Redemption 2)

Introduction

In this project we get to choose whatever subject we find interesting and conduct a five-week-long project from start to finish by ourselves, implementing a feature. The feature I chose was volumetric lighting, simply because I think it’s one of the biggest features when it comes to making your lighting look good. The feature is developed using our previously created engine.

Challenges:

  • How does volumetric lighting work?
  • What level of fidelity and which features can I manage to implement in five weeks?
  • How does a compute-shader work?
  • What metrics should I use with regard to performance and looks?

Sources:

I began by gathering materials and reading up on how this feature has been implemented in the past. I also felt the need to explore how newer games accomplish this effect in comparison to older games, since it would not be realistic to try and match my implementation to the latest and greatest.

  1. SIGGRAPH 2014, Volumetric Lighting in Assassin's Creed 4.
  2. SIGGRAPH 2015, Physically-Based & Unified Volumetric Rendering in Frostbite.
  3. Volumetric Lighting - God Rays, Atmospheric/Fog Rendering, & More Explained! (video)
  4. GPU Pro 6, Wolfgang Engel (book)
Example of volumetric lighting.
Another example of volumetric lighting.

Examples 🖼️

These images really showcase the drastic difference between using volumetric lighting and not. It helps give a sense of depth and atmosphere. In real life you can clearly see that objects far away become increasingly hazy, as there’s a lot of air between the viewer and the object. The particles in that air scatter the light that hits them, give off a small bit of light, and slightly color the light that then enters the eye.

Example of how it looks in real life.

IRL Example 🌳

In this picture you can clearly see how the trees far off in the background appear almost white because of fog and the scattering of light.

Scientific showcase of how it works behind the scenes.
Showcase of ray march from camera.

Ray marched volumetric lighting 📸

This is what a classic ray-marched volumetric lighting solution looks like. For each pixel of the camera’s view you “shoot” a ray into the scene. You then have to decide on an arbitrary number and spacing of samples along the ray. Each sample has to check whether there’s fog present at that position and then fetch all the lights in the scene to do the scattering equations.

This approach is typically very expensive, or cheaper at the cost of quality. You can optimize it by reducing the number of samples, or steps, the ray is divided into, meaning you have to do fewer light calculations. You can also reduce the resolution of the viewport and render the volumetrics at half resolution or lower.

Since you oftentimes have to reduce the number of steps you sample at, you’re left with an unstable volumetric solution that, depending on the position of the camera, can miss shadows or objects.
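
To make the idea concrete, here is a minimal sketch of such a ray march in HLSL, assuming hypothetical helpers SampleFogDensity() and EvaluateLight() for the fog lookup and the shadowed light contribution; every name here is illustrative, not my actual shader.

```hlsl
// Minimal sketch of a classic per-pixel ray march (illustrative names).
float3 RayMarchFog(float3 cameraPos, float3 rayDir, float sceneDepth)
{
    const int NUM_STEPS = 64;                 // quality vs. cost knob
    float stepSize = sceneDepth / NUM_STEPS;

    float3 scattered     = 0.0;
    float  transmittance = 1.0;

    for (int i = 0; i < NUM_STEPS; ++i)
    {
        float3 samplePos = cameraPos + rayDir * stepSize * (i + 0.5);
        float  density   = SampleFogDensity(samplePos); // hypothetical helper

        // In-scattered light at this sample, dimmed by the fog in front of it.
        scattered += EvaluateLight(samplePos)           // hypothetical helper
                   * density * stepSize * transmittance;

        // Beer-Lambert absorption over this step.
        transmittance *= exp(-density * stepSize);
    }
    return scattered; // blended over the scene using the final transmittance
}
```

Every step is a density lookup plus a full light evaluation, which is why the step count dominates the cost.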

Showcase of frustum aligned volumetric lighting.

Frustum aligned voxel-based volumetric lighting 🌫️

To find a more consistent and efficient way to do this, the creators of Assassin’s Creed 4 implemented a frustum-aligned grid of voxels: a grid mapped to the camera and stored in a 3D texture. As illustrated in the image, the frustum-based voxels, or “froxels”, become larger the further away from the camera they get. This automatically gives the volumetrics closer to the camera a higher resolution.

The biggest difference is that we can iterate through all of the froxels in a single pass to calculate the lighting conditions, and then separately iterate through the volume to calculate the final light transmission that is then drawn on top of the scene view. In some sense we still “ray march”, but instead of using an actual ray the marching takes place along the z-axis of our froxel grid.
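
For concreteness, here is a rough sketch of how a froxel coordinate can be mapped back to a world position, assuming D3D-style NDC and an exponential depth-slice distribution (a common choice, not necessarily the exact AC4 one); all names are illustrative.

```hlsl
// Sketch: centre of a froxel -> world-space position (illustrative names).
float3 FroxelToWorld(uint3 froxel, uint3 gridSize,
                     float4x4 invViewProj, float nearZ, float farZ)
{
    // Centre of the froxel in normalized grid space [0, 1].
    float3 uvw = (froxel + 0.5) / (float3)gridSize;

    // Exponential slice distribution: more depth resolution near the camera.
    float viewZ = nearZ * pow(farZ / nearZ, uvw.z);

    // Unproject the froxel's screen position at the near and far planes
    // (the mul order depends on your matrix convention).
    float2 ndc    = float2(uvw.x * 2.0 - 1.0, 1.0 - uvw.y * 2.0);
    float4 nearWS = mul(invViewProj, float4(ndc, 0.0, 1.0));
    float4 farWS  = mul(invViewProj, float4(ndc, 1.0, 1.0));
    nearWS /= nearWS.w;
    farWS  /= farWS.w;

    // Both points lie on the same view ray; interpolate to the wanted depth.
    float t = (viewZ - nearZ) / (farZ - nearZ);
    return lerp(nearWS.xyz, farWS.xyz, t);
}
```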

To further develop the concept, the creators of Red Dead Redemption 2 also added classic ray marching beyond the far plane of the froxel grid. This was done so that they could get god rays and other volumetric effects in the far-away background.

Getting Started 🏃‍♂️

Having read some material I began by creating my first compute-shader. To start with I just wanted the compute-shader to “summarize” a 3D texture into a 2D texture by adding all of the layers along the z-axis together. After fiddling around with it and learning how compute-shaders work, the next goal was to couple the shader to the world.
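
As a rough illustration of that first experiment (names are made up, not my actual shader), the “summarize” pass can look like this:

```hlsl
// Sum every slice of a 3D texture along z into a 2D target.
Texture3D<float4>   volumeTex : register(t0);
RWTexture2D<float4> outputTex : register(u0);

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    uint width, height, depth;
    volumeTex.GetDimensions(width, height, depth);

    float4 sum = 0.0;
    for (uint z = 0; z < depth; ++z)
        sum += volumeTex.Load(int4(id.xy, z, 0));

    outputTex[id.xy] = sum;
}
```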

Showcase of how it looks getting started.

I started by defining an area in the world where the fog would be; then, based on the world position of the current froxel, I could calculate a UV-coordinate relative to the fog.

This is what the fog looks like when assigning the color based on the UV-coordinate and then summing the fog on screen. I then draw the fog on top of the rest of the pipeline (a skybox).
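
The mapping itself is simple; a sketch, assuming the fog area is an axis-aligned box (fogMin/fogMax are illustrative names):

```hlsl
// World-space fog bounds (illustrative cbuffer layout).
cbuffer FogBounds : register(b0)
{
    float3 fogMin; float pad0; // minimum corner of the fog volume
    float3 fogMax; float pad1; // maximum corner of the fog volume
};

float3 WorldToFogUVW(float3 worldPos)
{
    // 0..1 inside the box; values outside [0, 1] mean "no fog here".
    return (worldPos - fogMin) / (fogMax - fogMin);
}
```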

Another showcase of me getting started with this project.

I then found a Perlin-noise function online to generate noise in a 3D texture. I pass the texture to the compute-shader and map the sampled value to the alpha-channel of our frustum-texture.

I also added some controls to manipulate the scale and offset of the noise. This can later be used to control the look of the fog and to emulate wind.
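
A sketch of how the scale and offset can be applied when reading the noise (I read the texture directly at this stage rather than through a sampler; all names are illustrative):

```hlsl
// Scale/offset the fog coordinate before the noise lookup; scrolling the
// offset over time emulates wind.
float SampleNoise(Texture3D<float> noiseTex, float3 fogUVW,
                  float3 noiseScale, float3 noiseOffset)
{
    uint w, h, d;
    noiseTex.GetDimensions(w, h, d);

    // Wrap the transformed coordinate back into [0, 1) so the noise tiles.
    float3 uvw   = frac(fogUVW * noiseScale + noiseOffset);
    int3   texel = int3(uvw * float3(w, h, d));
    return noiseTex.Load(int4(texel, 0));
}
```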

Programmatic flowchart when getting started.

This is basically how the flow of the volumetric calculations looks right now. With this in place I feel like the most challenging part of the project is done: I feel comfortable using compute-shaders and unordered access views, and can now iterate upon what I have.

Adding lighting and depth.

Adding Lighting and Depth 💡

I changed from displaying the UV-coordinate as color to just plain white. I then integrated the depth buffer to handle depth and objects being “inside” of the fog.
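
One common way to do this, and roughly what I mean here, is to compare each froxel’s view depth against the linearized scene depth so that froxels behind geometry contribute nothing; the conversion below assumes a standard D3D perspective projection, and all names are illustrative.

```hlsl
// 1 if the froxel lies in front of the scene geometry at this pixel, else 0.
float FogOcclusion(Texture2D<float> depthTex, SamplerState pointClamp,
                   float2 screenUV, float froxelViewZ,
                   float nearZ, float farZ)
{
    float deviceZ = depthTex.SampleLevel(pointClamp, screenUV, 0);

    // Standard D3D perspective depth -> linear view-space depth.
    float sceneViewZ = nearZ * farZ / (farZ - deviceZ * (farZ - nearZ));

    return froxelViewZ <= sceneViewZ ? 1.0 : 0.0;
}
```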

Showcase of red light in a scene.

The next natural step was to add lighting, using the existing PBR-functions I had created for the rest of the rendering pipeline.

Showcase of white light in the scene.

This was easy enough, since the pipeline already supports lights with shadows through deferred rendering. Because of this, most of what I needed was accessible and shadows worked right off the bat.

Showcase of proper scattering.

Proper Scattering ⚖️

My next big problem was the way I summarized the results of the gathered density/light information. My method led to over-exposure and incorrect lighting. As described in the picture, the transmitted light should be derived from scattering and absorption; the correct way to do this is described in the source material mentioned above.
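
In rough terms, following the source material above, each step along the view direction should attenuate the light arriving from behind it and then add its own in-scattered light, instead of everything just being summed up:

$$
L_{\text{out}} = L_{\text{behind}} \cdot e^{-\sigma_t \Delta s} + L_{\text{in}} \cdot \sigma_s \Delta s
$$

where $\sigma_t$ (extinction) controls the Beer-Lambert absorption and $\sigma_s$ (scattering) controls the in-scattered contribution over a step of length $\Delta s$.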

The finished program flowchart.

To solve this problem I created a new stage of the volumetrics pipeline: another compute-shader marches through the frustum volume and recursively calculates the color from back to front according to atmospheric scattering, which gave a much better result.

This shader correctly evaluates the light and outputs the final color to the last slice it processes, at the front of the 3D-texture. The final stage is then simply to copy that front slice of the 3D-texture to a 2D-texture.
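
Sketched below is roughly what such a back-to-front integration pass can look like; the texture layout and every name are my own illustration, not the exact shader.

```hlsl
// One thread per screen-space column of the froxel grid; walk from the far
// slice toward the camera, attenuating what lies behind and adding each
// froxel's own in-scattered light. Assumes hardware support for typed UAV
// loads of float4.
RWTexture3D<float4> froxelGrid : register(u0); // rgb: in-scattered light,
                                               // a:   extinction * thickness

[numthreads(8, 8, 1)]
void IntegrateCS(uint3 id : SV_DispatchThreadID)
{
    uint width, height, depth;
    froxelGrid.GetDimensions(width, height, depth);
    if (id.x >= width || id.y >= height)
        return;

    float3 accumulated   = 0.0;
    float  transmittance = 1.0;

    for (int z = (int)depth - 1; z >= 0; --z)
    {
        uint3  coord  = uint3(id.xy, z);
        float4 froxel = froxelGrid[coord];

        // Beer-Lambert transmittance of this single slice.
        float sliceT = exp(-froxel.a);

        // Dim everything behind this slice, then add its scattered light.
        accumulated    = accumulated * sliceT + froxel.rgb;
        transmittance *= sliceT;

        froxelGrid[coord] = float4(accumulated, transmittance);
    }
}
```

After the loop the front slice holds the full in-scattered color and the total transmittance, which is what gets composited over the scene.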

Showcase of cube with light on it.

In this picture you can clearly see the effect of correctly calculating the light that gets through the fog. Compared to the old method, where everything eventually went white, this looks a lot better!

Ball with light on it inside the scene.

Here you can clearly see the difference. Even though the light looks decent enough in combination with shadows, the image is ruined since everything inevitably becomes over-exposed. This is not the case with the new light calculations.

Some fog catching the light.

Iterating and Improving 🔁

After improving upon the scattering and absorption algorithm you can clearly see that the light is visibly inside of the fog, and the resulting image looks realistic enough, although I still have artifacts and banding issues from my low-resolution textures.

Showcase of two pigs with light shining over them through the fog.

To combat banding and make the fog appear smoother, I introduced some noise when sampling the textures, which I then blurred using a Gaussian-blur pass.

The final improvement was to sample using a trilinear sampler state instead of just using the data at a given position from an unordered access view. This also meant that I had to switch to a shader resource view when sampling the noise for the fog.
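
For illustration, the difference between the two read paths looks roughly like this (names are made up):

```hlsl
Texture3D<float4> fogVolume      : register(t0); // SRV over the froxel grid
SamplerState      trilinearClamp : register(s0);

float4 SampleFogVolume(float3 uvw, uint3 gridSize)
{
    // Old path, raw UAV-style load: nearest froxel only, visible as banding.
    //   float4 raw = fogVolumeUAV[uint3(uvw * gridSize)];

    // New path, SRV + trilinear sampler: the hardware interpolates smoothly
    // between neighbouring froxels.
    return fogVolume.SampleLevel(trilinearClamp, uvw, 0);
}
```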

Another showcase of a pig with light shining down on it.

The fog is really coming along and I can now produce fairly good-looking god rays, although when coming up close to the fog everything just seems to blur into a haze.

Showcase of the final product.

Wrapping up 🏁

To combat the lack of detail in the appearance of the fog I added a secondary, finer noise texture and a weight to blend the two of them together. Along with the finer noise I added wind to slowly scroll the noise texture. I also adapted the noise generation to blend seamlessly from one side to the other, allowing me to make the fog global and tileable. While I was at it I also made the fog height-based, so that it fades the further up it goes.
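
A sketch of what the final density function can look like with these additions; every parameter here is illustrative:

```hlsl
Texture3D<float> noiseTexA  : register(t1); // coarse tiling noise
Texture3D<float> noiseTexB  : register(t2); // fine tiling noise
SamplerState     linearWrap : register(s1); // wrap addressing keeps it tileable

cbuffer FogSettings : register(b2)
{
    float3 windDir;         float baseScale;
    float  fineScale;       float fineWeight;
    float  fogFloor;        float fogHeight;
    float  fogDensityScale; float3 _pad;
};

float FogDensity(float3 worldPos, float time)
{
    // Two tiling noises at different frequencies, both scrolled by the wind.
    float3 baseUVW = worldPos * baseScale + windDir * time;
    float3 fineUVW = worldPos * fineScale + windDir * time * 1.5;

    float base = noiseTexA.SampleLevel(linearWrap, baseUVW, 0);
    float fine = noiseTexB.SampleLevel(linearWrap, fineUVW, 0);

    // Blend coarse and fine detail with a tweakable weight.
    float noise = lerp(base, fine, fineWeight);

    // Height-based fade: full density at fogFloor, fading out fogHeight above.
    float heightFade = saturate(1.0 - (worldPos.y - fogFloor) / fogHeight);

    return noise * heightFade * fogDensityScale;
}
```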

Scene with and without volumetric lighting 🔀

More showcase of the final product.

(With Volumetric Lighting)

Showcase of volumetric lighting off.

(Without Volumetric Lighting)

It looks even better in motion! 😃

More showcase of the final product.
Another gif of the final product.