Ray tracing is a technique used in computer graphics for generating images. The path of light is traced through pixels in an image plane, and its interactions with objects in the scene are simulated. The technique can produce highly photorealistic images, but at a high computational cost. Ray tracing is therefore best suited to applications where images can be rendered ahead of time, and poorly suited to real-time rendering.
My implementation of a ray tracer includes, but is not limited to, the following techniques:
When a ray hits a reflective surface, it can be reflected back out into the scene to gather additional color. To achieve this, the surface normal is calculated at the point of intersection. With the normal, the reflection direction can be computed and a reflected ray cast back into the scene. Since reflection is recursive and could otherwise bounce forever between mirrored surfaces, the maximum number of reflections per ray is capped at 6.
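A minimal sketch of the reflection step (Vec3, reflect, and MAX_DEPTH are illustrative names, not necessarily those used in the implementation): the reflected direction follows directly from the incident direction and the surface normal.

```cpp
// Small vector type used by the sketches in this section.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Mirror the incoming direction d about the unit surface normal n:
// r = d - 2 (d . n) n
Vec3 reflect(const Vec3& d, const Vec3& n) {
    return d - n * (2.0 * dot(d, n));
}

const int MAX_DEPTH = 6; // stop recursing after 6 reflected bounces
```

The tracing routine would pass depth + 1 when casting the reflected ray and stop gathering reflected color once MAX_DEPTH bounces have been made.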
When a ray hits a refractive surface, it can be refracted through the object to gather additional color. The ray is assumed to be arriving from air, so the object's index of refraction determines, via Snell's law, how sharply the ray bends as it enters the object. When the ray exits the object, it is assumed to be entering air again and is bent once more.
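A sketch of the bending step via Snell's law, reusing the Vec3 helpers from the reflection sketch above; eta is the ratio of the two indices of refraction, so it is roughly 1/1.5 when entering glass from air and 1.5/1 when exiting back into air.

```cpp
#include <cmath>
#include <optional>

// Refract the unit direction d through a surface with unit normal n,
// where eta = n1 / n2. Returns nothing on total internal reflection.
// (Reuses Vec3 and dot from the reflection sketch above.)
std::optional<Vec3> refract(const Vec3& d, const Vec3& n, double eta) {
    double cos_i = -dot(d, n);                          // cosine of incident angle
    double k = 1.0 - eta * eta * (1.0 - cos_i * cos_i);
    if (k < 0.0) return std::nullopt;                   // total internal reflection
    return d * eta + n * (eta * cos_i - std::sqrt(k));
}
```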
One of the fundamental problems in ray tracing is how to sample the image. The simplest approach is to send one ray through the center of each pixel. However, this undersamples the scene and produces hard, jagged edges. Anti-aliasing attempts to solve this problem: multiple rays are sent through each pixel, each with a slightly different direction, and their results are averaged. This gives softer edges around objects and a more photorealistic image.
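A sketch of the per-pixel sampling loop, assuming a hypothetical traceRay helper that casts a ray through image-plane coordinates (u, v) and returns a color (Camera and Scene are placeholders):

```cpp
#include <random>

struct Scene;    // placeholder scene and camera types
struct Camera;
Vec3 traceRay(const Camera& cam, const Scene& scene, double u, double v);

// Average several jittered rays per pixel; the random offsets make each
// ray pass through a slightly different point inside the pixel.
Vec3 samplePixel(int x, int y, int samples,
                 const Camera& cam, const Scene& scene) {
    static std::mt19937 rng{12345};
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    Vec3 sum{0.0, 0.0, 0.0};
    for (int s = 0; s < samples; ++s) {
        double u = x + jitter(rng);   // random point within the pixel
        double v = y + jitter(rng);
        sum = sum + traceRay(cam, scene, u, v);
    }
    return sum * (1.0 / samples);     // averaging softens hard edges
}
```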
In a real camera, the shutter captures light over some period of time, so moving objects appear blurred. In a simple ray tracer, the "shutter" speed is effectively instant and this effect is lost. Motion blur in a ray tracer is achieved by sending several rays for each pixel and assigning each ray a random time in the range from 0 to however long the "shutter" is open. When a ray tests for intersection against an object, the object first adjusts itself according to its velocity and the ray's time.
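A sketch of time-dependent intersection for a sphere moving in a straight line (Ray::time and MovingSphere are illustrative names; this reuses the Vec3 helpers above):

```cpp
struct Ray {
    Vec3 origin, dir;   // dir is assumed to be unit length
    double time;        // drawn uniformly from [0, shutter duration]
};

struct MovingSphere {
    Vec3 center0;    // center at time 0
    Vec3 velocity;   // displacement per unit time
    double radius;

    // The object "adjusts itself": displace the center to where it would
    // be at this ray's time, then run the ordinary sphere test.
    bool intersect(const Ray& ray) const {
        Vec3 center = center0 + velocity * ray.time;
        Vec3 oc = ray.origin - center;
        double b = dot(oc, ray.dir);
        double c = dot(oc, oc) - radius * radius;
        return b * b - c >= 0.0;   // discriminant of the quadratic
    }
};
```

Because each pixel's rays land at different times, samples of a moving object are averaged across its path, which produces the blur.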
Texture mapping is the idea of wrapping a two-dimensional image around a three-dimensional object. When a textured object is hit by a ray, the 3D point of intersection is transformed into a pixel coordinate of the 2D image, and the color of that pixel is used as the color of the object at that point.
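For a sphere, for example, the transform can use spherical coordinates. A sketch assuming a unit sphere centered at the origin (sphereUV is an illustrative name):

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// Map a point p on the unit sphere to (u, v) texture coordinates in [0, 1].
void sphereUV(const Vec3& p, double& u, double& v) {
    double theta = std::acos(-p.y);            // polar angle in [0, pi]
    double phi = std::atan2(-p.z, p.x) + PI;   // azimuth in [0, 2*pi]
    u = phi / (2.0 * PI);
    v = theta / PI;
}

// (u, v) then indexes the image, e.g.:
//   int px = static_cast<int>(u * (imageWidth  - 1));
//   int py = static_cast<int>(v * (imageHeight - 1));
```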
In a naive ray tracer, each ray tests for intersection against every object in the world. This wastes many operations: why should a ray be tested against an object that is nowhere near it? A bounding volume hierarchy (BVH) is an optimization that addresses this issue. Each object in the world is enclosed in a bounding box, and nearby bounding boxes are combined into progressively larger ones. The construction yields a balanced binary tree of objects that can be traversed when testing for intersection, skipping every subtree whose bounding box the ray misses and greatly improving performance.
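A sketch of the traversal side of a BVH (construction, i.e. pairing nearby boxes into parents, is omitted; AABB, BVHNode, and Object are illustrative names):

```cpp
#include <algorithm>

struct AABB {
    double lo[3], hi[3];

    // Slab test: the ray hits the box iff the per-axis entry/exit
    // intervals all overlap.
    bool hit(const double orig[3], const double dir[3]) const {
        double tmin = 0.0, tmax = 1e30;
        for (int a = 0; a < 3; ++a) {
            double inv = 1.0 / dir[a];
            double t0 = (lo[a] - orig[a]) * inv;
            double t1 = (hi[a] - orig[a]) * inv;
            if (inv < 0.0) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmax < tmin) return false;   // intervals do not overlap
        }
        return true;
    }
};

struct Object;   // a scene primitive with its own exact intersection test

struct BVHNode {
    AABB box;
    const BVHNode* left = nullptr;    // children for interior nodes
    const BVHNode* right = nullptr;
    const Object* leaf = nullptr;     // single object for leaf nodes
};

// Skip entire subtrees whose bounding boxes the ray misses; only leaves
// that survive the descent pay for an exact object intersection test.
bool traverse(const BVHNode* node, const double orig[3], const double dir[3]) {
    if (node == nullptr || !node->box.hit(orig, dir)) return false;
    if (node->leaf != nullptr) return true;   // run the exact test here
    bool hitLeft = traverse(node->left, orig, dir);
    bool hitRight = traverse(node->right, orig, dir);
    return hitLeft || hitRight;
}
```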