Real-Time Ray Tracing with Compute Shaders and Denoising

A take on a compute-shader-based ray tracer for CSC 473

Joseph Johnson
Fall 2020

Project Description

Ray tracing is a rendering technique in which rays are cast into a scene and the light gathered as each ray bounces off objects is accumulated. It produces realistic images because it more accurately models how light interacts with objects in the real world.

This is a computationally expensive problem: at least one ray is cast per pixel (over two million primary rays for a 1080p frame), and sequentially computing the light returned for each ray takes a significant amount of time. The cost is amplified when multiple rays must be cast per pixel for more realistic-looking images.

The aim of this project was to build a ray tracer using compute shaders. Each ray is independent of every other ray, so rays can be computed in any order, and in parallel.
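
A minimal sketch of how that parallelism maps onto a compute shader follows; the workgroup size, image binding, and the traceRay helper are illustrative assumptions, not the project's actual code:

```glsl
#version 430
// One invocation per pixel; an 8x8 workgroup is an illustrative choice.
layout(local_size_x = 8, local_size_y = 8) in;
layout(rgba32f, binding = 0) uniform image2D colorImage;

vec3 traceRay(ivec2 pixel); // per-pixel trace, sketched under Details

void main() {
    ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size  = imageSize(colorImage);
    if (any(greaterThanEqual(pixel, size))) return; // guard partial groups

    // Each invocation traces its own ray and writes its own texel;
    // there is no ordering dependence between pixels.
    imageStore(colorImage, pixel, vec4(traceRay(pixel), 1.0));
}
```

On the host side, one workgroup per 8x8 tile would be launched with glDispatchCompute((width + 7) / 8, (height + 7) / 8, 1).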

Details

For each pixel, a shader invocation is dispatched to cast a ray into the scene and calculate that pixel's color. The ray is tested against every object in the scene to find the closest hit along its path. It then bounces off that object and continues bouncing until it hits nothing or reaches a maximum bounce depth (compute shaders cannot recurse, so the bounces are written as a loop). After tracing each pixel, its color, depth, and normal are stored in textures that the various display modes read from.
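
A sketch of that loop follows, assuming a simple scene of spheres; the scene layout, bounce limit, and mirror-style bounce are illustrative stand-ins for the project's actual shading:

```glsl
struct Ray { vec3 origin; vec3 dir; };   // dir is assumed normalized
struct Hit { float t; vec3 pos; vec3 normal; vec3 albedo; };

// Hypothetical scene: spheres packed as (center.xyz, radius).
const int NUM_SPHERES = 3;
uniform vec4 spheres[NUM_SPHERES];
uniform vec3 albedos[NUM_SPHERES];
const int MAX_DEPTH = 4; // illustrative bounce limit

// Test the ray against every object, keeping only the closest hit.
bool hitScene(Ray ray, out Hit hit) {
    hit.t = 1e30;
    for (int i = 0; i < NUM_SPHERES; ++i) {
        vec3  oc   = ray.origin - spheres[i].xyz;
        float b    = dot(oc, ray.dir);
        float disc = b * b - (dot(oc, oc) - spheres[i].w * spheres[i].w);
        if (disc < 0.0) continue;            // ray misses this sphere
        float t = -b - sqrt(disc);
        if (t > 1e-3 && t < hit.t) {         // closer than the best so far
            hit.t      = t;
            hit.pos    = ray.origin + t * ray.dir;
            hit.normal = normalize(hit.pos - spheres[i].xyz);
            hit.albedo = albedos[i];
        }
    }
    return hit.t < 1e30;
}

// The first hit is kept so its depth and normal can be written to the
// G-buffer textures (the caller checks primary.t > 0 before storing).
vec3 trace(Ray ray, out Hit primary) {
    primary.t = -1.0;                 // sentinel: primary ray hit nothing
    vec3 throughput = vec3(1.0);
    for (int depth = 0; depth < MAX_DEPTH; ++depth) {
        Hit hit;
        if (!hitScene(ray, hit))
            return throughput * vec3(0.7, 0.8, 1.0); // escaped: sky color
        if (depth == 0) primary = hit;
        throughput *= hit.albedo;
        ray = Ray(hit.pos + 1e-3 * hit.normal, reflect(ray.dir, hit.normal));
    }
    return vec3(0.0); // bounce budget exhausted
}
```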

Six different modes were implemented: a denoising mode, a low-sample mode, a simple blur mode, a depth mode, a normal mode, and a luminance mode; the m key cycles between them. The primary mode, denoising, computes a weighted average of each pixel with its neighboring pixels, where each neighbor's weight is determined by how close its normal, depth, and luminance are to those of the current pixel (a sketch follows below). With this denoising, cleaner images can be produced from fewer samples. Without the per-pixel normal, depth, and luminance information, the averaging would smear across edges and increase the overall blurriness of the image.
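
A sketch of that weighted average, in the spirit of the edge-stopping weights from the SVGF paper listed under References; the window size, falloff constants, and texture names are illustrative guesses, not the project's actual parameters:

```glsl
layout(rgba32f, binding = 0) uniform image2D colorImage;
layout(rgba32f, binding = 1) uniform image2D normalImage; // .xyz = normal
layout(r32f,    binding = 2) uniform image2D depthImage;

float luminance(vec3 c) { return dot(c, vec3(0.2126, 0.7152, 0.0722)); }

vec3 denoise(ivec2 p) {
    vec3  cColor  = imageLoad(colorImage,  p).rgb;
    vec3  cNormal = imageLoad(normalImage, p).xyz;
    float cDepth  = imageLoad(depthImage,  p).r;
    float cLum    = luminance(cColor);

    vec3  sum  = vec3(0.0);
    float wSum = 0.0;
    for (int dy = -2; dy <= 2; ++dy)
    for (int dx = -2; dx <= 2; ++dx) {
        ivec2 q = clamp(p + ivec2(dx, dy),
                        ivec2(0), imageSize(colorImage) - 1);
        vec3  color  = imageLoad(colorImage,  q).rgb;
        vec3  normal = imageLoad(normalImage, q).xyz;
        float depth  = imageLoad(depthImage,  q).r;

        // Edge-stopping terms: a neighbor whose normal, depth, and
        // luminance match the center pixel gets weight near 1; across
        // an edge the weight falls toward 0, so edges stay sharp.
        float wNormal = pow(max(dot(cNormal, normal), 0.0), 64.0);
        float wDepth  = exp(-abs(cDepth - depth) / 0.05);
        float wLum    = exp(-abs(cLum - luminance(color)) / 0.25);

        float w = wNormal * wDepth * wLum;
        sum  += w * color;
        wSum += w;
    }
    return sum / max(wSum, 1e-6); // center pixel guarantees wSum >= 1
}
```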

The various modes are shown below.

Low Sample Mode

A low sample ray trace of the scene

Simple Blur Mode

Blurred image (edges aren't preserved)

Denoise Mode

Denoised image (as described above)

Depth Mode

Visualization of the depth-buffer

Normal Mode

Visualization of the normal-buffer

Luminance Mode

Visualization of the luminance-buffer

I also implemented navigation of the virtual world using the WASD keys to move forward, left, backward, and right, respectively. A view of the scene after the user has moved from the original start position can be seen below.

Navigation of the scene is possible using the WASD keys

Results

The compute-shader implementation rendered the scene at 25 FPS, while sequential execution took 6 seconds per frame (roughly 0.17 FPS). The compute shaders therefore provided about a 150x speedup!

References

Schied et al., "Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination", HPG 2017. https://research.nvidia.com/publication/2017-07_Spatiotemporal-Variance-Guided-Filtering%3A