In real life, light rays are cast from a light source into the world around it. These rays hit various objects and surfaces in the area and continue on as new rays, depending on each object's or surface's properties. The human eye catches whichever of these rays happen to travel toward it, and this is how we perceive the world around us. Reflections, shadows, ambient light, etc., are all created by the casting of these light rays.
In graphics, we want to replicate all of these properties in as much detail as possible. However, it is very inefficient to trace each ray of light from the light source, as most of those rays would never reach the camera (eye). Instead, we perform what is informally called backwards ray tracing. This lets us cast rays from the camera's point of view, hitting only the objects, surfaces, and light sources whose rays would normally reach the eye.
In order to create a scene, objects representing actual shapes, lights, and a camera are needed. In addition, these objects need to come with their own properties, such as location, color, and other specific finish attributes like reflection and ambient coefficients. For these reasons, I use what is called a POV-Ray formatted text file. This type of file allows me to specify, for a given shape, light source, or camera, all of the attributes needed to trace a light ray to and from that object.
As these files hold a multitude of information for a single object, I decided to create a class for each type of object. At the highest level, they are broken into these 4 classes:
GeometricObject is the parent class of several child shape classes such as:
Separating the classes at a higher level allows me to manage the interactions between those objects closely. This works very nicely when calculating the various position and illumination equations. For obvious reasons, each GeometricObject child class must be separate so it can hold its own unique values, such as a radius or a base and height. The GeometricObject class itself serves a great purpose: it holds the general information common to any object, such as location, color, and material attributes. It also declares the intersect() and normal() functions, which are used during the ray tracing algorithm to determine whether a ray has intersected the object. Each child class must implement its own version of these functions, since the required calculations are unique to each shape.
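To make that structure concrete, here is a rough sketch of how the base class might be laid out (the type and member names here, such as Vector3, Color, and Finish, are illustrative assumptions rather than the exact ones in my code):

// Sketch of the GeometricObject hierarchy described above.
struct Ray {
    Vector3 origin;     // where the ray starts
    Vector3 direction;  // where the ray is headed (normalized)
};

class GeometricObject {
public:
    Vector3 location;   // position in world space
    Color   pigment;    // base color from the POV-Ray pigment block
    Finish  finish;     // ambient, diffuse, specular, reflection, ...

    virtual ~GeometricObject() {}

    // Distance along the ray to the nearest hit, or a negative value on a miss.
    virtual double intersect(const Ray& ray) const = 0;

    // Surface normal at a point on the object's surface.
    virtual Vector3 normal(const Vector3& point) const = 0;
};

class Sphere : public GeometricObject {
public:
    double radius;
    double  intersect(const Ray& ray) const override;     // solves the ray-sphere quadratic
    Vector3 normal(const Vector3& point) const override;  // (point - location), normalized
};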
The ray tracing algorithm works as follows:
for each pixel in the image {
    cast a ray through the pixel, find the closest intersection, and color the pixel
}
The first and most important step of this entire algorithm is to create a ray. Each ray is composed of two 3D vectors: one representing the origin of the ray and the other representing its direction. For each pixel of the image, one ray is cast through it. However, referring to the figure below, you will notice that the pixels are represented in 2D space while the rest of the scene is represented in 3D space.
This presents quite a problem when calculating the direction of a ray. The pixel's x and y coordinates must be converted to 3D world coordinates, and the z value of the ray's direction vector must be calculated. Luckily for us, the distance from the camera to the image plane gives us the proper z value where the pixel screen is located, which completes the direction vector for each ray.
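As a sketch of how such a ray could be built for pixel (i, j) of a width-by-height image (the Camera fields mirror the POV-Ray camera block; normalize() and the overloaded vector operators are assumed helpers, and I place the image plane one unit along the viewing direction):

// Build the primary ray that passes through pixel (i, j).
Ray makePrimaryRay(const Camera& cam, int i, int j, int width, int height) {
    // Map the pixel center into [-0.5, 0.5] screen coordinates.
    double u = (i + 0.5) / width  - 0.5;
    double v = (j + 0.5) / height - 0.5;   // may need negating if row 0 is the top of the image

    // The image plane sits one unit along the viewing direction; the right and
    // up vectors (right is 1.33333 long in the example file) fix the aspect ratio.
    Vector3 dir = normalize(cam.look_at - cam.location)
                + cam.right * u
                + cam.up    * v;

    Ray ray;
    ray.origin = cam.location;
    ray.direction = normalize(dir);
    return ray;
}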
Once a ray has been cast, it is tested against each GeometricObject in the scene to determine if there is an intersection. If there is, we must calculate and illuminate the correct RGB values for that pixel. For now, though, it is wise to just copy the exact color from the intersected object to make sure the intersection functions are working correctly. If a ray intersects more than one object, the algorithm must determine which object is closest to the camera and color the pixel with that object's color (see the sketch after the example scene below). Given the following POV-Ray file:
camera {
    location <0, 0, -5>
    up <0, 1, 0>
    right <1.33333, 0, 0>
    look_at <0, 0, 0>
}
light_source {<20, 0, 0> color rgb <1.0, 1.0, 1.0>}
sphere { <0, 0, -1>, 2
    pigment { color rgb <1.0, 0.0, 1.0>}
    finish {ambient 0.3 diffuse 0.4}
}
sphere { <3, 0, 5>, 3
    pigment { color rgb <0.0, 1.0, 0.0>}
    finish {ambient 0.3 diffuse 0.3}
}
The ray casting algorithm will generate the following picture.
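As referenced above, choosing among several intersected objects boils down to keeping the smallest positive intersection distance. A minimal sketch, assuming intersect() returns that distance or a negative value on a miss (names are illustrative):

#include <vector>
#include <limits>

// Find the nearest object hit by the ray, if any.
const GeometricObject* closestHit(const std::vector<GeometricObject*>& objects,
                                  const Ray& ray, double& tNearest) {
    const GeometricObject* hit = nullptr;
    tNearest = std::numeric_limits<double>::max();
    for (const GeometricObject* obj : objects) {
        double t = obj->intersect(ray);
        if (t > 0.0 && t < tNearest) {   // in front of the camera and closer than any hit so far
            tNearest = t;
            hit = obj;
        }
    }
    return hit;   // nullptr means the ray missed everything, so use the background color
}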
Using the Phong model lighting equation, I am able to produce shading on any drawn object. However, the actual calculation used to implement this is slightly different from the equation discussed in class. The equation is still composed of adding the separate components of light: ambient, diffuse, and specular. It is in the calculation of these components that the equation begins to differ. Ambient light is calculated the same way: take the ambient value from the object and multiply it by the color of the light source and the object's color. Diffuse light is calculated in a similar manner: take the diffuse value from the object and multiply it by the color of the light source, the color of the object, and the dot product of the normal at the intersection point and the direction vector toward the light source. Specular light, on the other hand, is calculated very differently. Take the specular value from the object and multiply it by the color of the light. Now here comes the real tricky part: multiply that by the dot product of the surface normal and the vector that bisects the light direction and the camera's eye vector (the half vector), raised to the power 1/roughness, where roughness is also a value taken from the object. With Phong lighting fully implemented, the ray tracer will generate the following picture.
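Written out as code, the per-light calculation described above looks roughly like this (a sketch, assuming Color multiplies component-wise, the vectors N, L, and V are normalized, and the field names mirror the POV-Ray finish and pigment blocks rather than my actual code):

#include <cmath>
#include <algorithm>

// Shading contribution of a single light source at an intersection point.
Color shade(const Finish& finish, const Color& objColor, const Color& lightColor,
            const Vector3& N, const Vector3& L, const Vector3& V) {
    // Ambient: the object's ambient value times the light color and the object color.
    Color color = lightColor * objColor * finish.ambient;

    // Diffuse: scaled by how directly the surface faces the light.
    double nDotL = std::max(0.0, dot(N, L));
    color = color + lightColor * objColor * (finish.diffuse * nDotL);

    // Specular: the half vector bisects the light direction and the eye vector,
    // and its dot product with the normal is raised to 1/roughness.
    Vector3 H = normalize(L + V);
    double nDotH = std::max(0.0, dot(N, H));
    color = color + lightColor * (finish.specular * std::pow(nDotH, 1.0 / finish.roughness));

    return color;
}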
Unfortunately, I was not able to implement this part of the algorithm due to the lack of time needed to fully convert the mathematical algorithms into code. However, I do understand the process and would like to go over it.
After confirming an intersection with an object, cast a separate ray from the intersection point towards the light source. If that ray intersects another object before it reaches the light source, the current object is blocked from this light source by the newly intersected object. As such, no diffuse or specular components should be calculated for this light source. However, it is VERY important that ambient light is still calculated, as ambient light does not come directly from any one light source. It is also important to note that other light sources may exist and would still provide diffuse and specular lighting to this object. This concept can be seen in the second picture down on this webpage.
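A minimal sketch of that shadow test, reusing the helpers from the earlier sketches (length() is another assumed vector helper, and the small offset along the ray is an implementation detail that keeps it from re-intersecting the surface it starts on):

#include <vector>

// Returns true if any object blocks the path from the hit point to the light.
bool inShadow(const std::vector<GeometricObject*>& objects,
              const Vector3& hitPoint, const Vector3& lightPos) {
    Vector3 toLight = lightPos - hitPoint;
    double distToLight = length(toLight);

    Ray shadowRay;
    shadowRay.direction = toLight / distToLight;                 // normalized direction to the light
    shadowRay.origin    = hitPoint + shadowRay.direction * 1e-4; // nudge off the surface

    for (const GeometricObject* obj : objects) {
        double t = obj->intersect(shadowRay);
        if (t > 0.0 && t < distToLight)   // a blocker sits between the point and the light
            return true;                  // skip diffuse and specular for this light
    }
    return false;                         // light is visible; shade normally
}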
After confirming an intersection with an object, cast a new ray from the intersection point with a direction vector that is mirrored across the current object's surface normal. Once the color values are returned from tracing this ray, multiply them by the reflection value of this object and add the result to the overall color of the current pixel.
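The mirrored direction is the standard reflection formula R = D - 2(D·N)N; a sketch of the direction calculation and of how the returned color could be folded in (trace() and the reflection field name are assumptions):

// Reflect the incoming direction D across the surface normal N (both normalized).
Vector3 reflect(const Vector3& D, const Vector3& N) {
    return D - N * (2.0 * dot(D, N));
}

// Inside the shading code, once the local Phong color is known:
//   Ray reflected;
//   reflected.direction = reflect(ray.direction, N);
//   reflected.origin    = hitPoint + reflected.direction * 1e-4;
//   color = color + trace(reflected, depth + 1) * finish.reflection;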
After confirming an intersection with an object, cast a new ray from the intersection point with a direction vector calculated using Snell's law, which requires the index of refraction value from the current object. Tracing this ray, together with another (complicated) equation associated with Snell's law, produces the color values of the transmitted light. This value should be added to the overall color of the current pixel.
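For reference, the transmitted direction from Snell's law can be computed as below (a sketch that assumes the ray is entering the object from air, so the ratio of indices is 1/ior; total internal reflection is signaled by returning false):

#include <cmath>

// Refract incident direction D about normal N, entering a medium with
// index of refraction ior. Returns false on total internal reflection.
bool refract(const Vector3& D, const Vector3& N, double ior, Vector3& T) {
    double eta   = 1.0 / ior;                // coming from air (n = 1) into the object
    double cosI  = -dot(D, N);               // cosine of the incident angle
    double sin2T = eta * eta * (1.0 - cosI * cosI);
    if (sin2T > 1.0)
        return false;                        // total internal reflection: no transmitted ray
    T = D * eta + N * (eta * cosI - std::sqrt(1.0 - sin2T));
    return true;
}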
The following picture shows where each new ray is sent:
NOTE: To avoid infinite recursion, store the depth to which a ray has been traced, and stop recursing at some point.
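In code, that guard is just a depth counter threaded through the recursive trace call; a sketch (the function names, the global scene, the background color, and the cutoff value are all assumptions):

const int MAX_DEPTH = 5;   // arbitrary cutoff; deeper bounces contribute very little

Color trace(const Ray& ray, int depth) {
    if (depth > MAX_DEPTH)
        return Color(0.0, 0.0, 0.0);        // stop recursing

    double t;
    const GeometricObject* hit = closestHit(scene.objects, ray, t);
    if (hit == nullptr)
        return scene.background;            // nothing hit: background color

    // Local Phong shading plus recursive calls such as
    // trace(reflectedRay, depth + 1) and trace(refractedRay, depth + 1).
    return shadePoint(*hit, ray, t, depth);
}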
Ray tracing is a very powerful algorithm that does an excellent job of making a 3D world look life-like. I had an excellent time working on this project and understanding its concepts. I can NOT wait for this type of algorithm to be implemented in real time so it can be used in games. As far as my project is concerned, I intend to continue working on it so I can successfully calculate the intersections of all POV-Ray shapes as well as implement the recursive additions to fully see the glory of ray tracing.
The executable and an example POV-Ray file are available for download. To run the ray tracer:
FinalParser.exe [file name] [width] [height]
A VERY IMPORTANT NOTE: only spheres are fully functional in my implementation. Using any other shape will not crash the ray tracer, but the image will simply be drawn incorrectly.