Photon mapping is a two-pass ray-tracing method that first casts rays, representing photons, from the light sources in the scene, stores how the photons interact with diffuse surfaces, and then casts viewing rays from the camera. Wherever a viewing ray intersects a diffuse surface, the standard direct lighting calculations are made, and the additional indirect radiance is approximated from the nearest photons stored during the photon-tracing pass. To avoid counting direct illumination twice in these radiance approximations, photons in this project are not stored on their first hit in the scene. That is, a photon has to reflect (diffusely or specularly) or refract at least once before it is stored. The color, incoming direction, and position of each photon are stored.
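Below is a minimal sketch of what the photon record and the store-only-after-the-first-bounce rule might look like in C++. The type and function names (Photon, PhotonMap, tracePhoton, intersectScene, scatter) are illustrative assumptions, not the project's actual code.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Photon {
    Vec3 position;   // where the photon landed on a diffuse surface
    Vec3 incoming;   // direction the photon arrived from
    Vec3 power;      // RGB flux ("color") carried by the photon
};

struct PhotonMap {
    std::vector<Photon> photons;
    void store(const Photon& p) { photons.push_back(p); }
};

// Assumed scene interface: a hit record and routines supplied elsewhere.
struct Hit { bool found; bool diffuse; Vec3 point; };
Hit  intersectScene(const Vec3& origin, const Vec3& dir);       // assumed to exist
Vec3 scatter(const Hit& hit, const Vec3& dir, bool& absorbed);  // assumed to exist

void tracePhoton(const Vec3& origin, const Vec3& dir, const Vec3& power,
                 int depth, PhotonMap& map)
{
    if (depth > 8) return;                      // hard recursion cap
    Hit hit = intersectScene(origin, dir);
    if (!hit.found) return;

    // Store only after the photon has reflected or refracted at least once;
    // direct lighting is handled by the ray-tracing pass instead.
    if (hit.diffuse && depth > 0)
        map.store({hit.point, dir, power});

    bool absorbed = false;
    Vec3 newDir = scatter(hit, dir, absorbed);  // diffuse/specular/refractive bounce
    if (!absorbed)
        tracePhoton(hit.point, newDir, power, depth + 1, map);
}
```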
Photons are stored in a data structure called a balanced kd-tree. A kd-tree is a binary tree whose internal nodes each define an axis-aligned splitting plane, which makes it well suited to the nearest-neighbor searches that must be performed for every radiance estimate. Since each radiance estimate actually uses more than one photon, it is more accurate to call it an N-nearest-neighbor search. Kd-trees can be optimized for space by packing them into an array (similar to a heap), but I have elected not to do so in the interest of speeding up the rendering.
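A sketch of the N-nearest-photon query over a pointer-based kd-tree is shown below; it uses a max-heap keyed on squared distance so the current worst candidate can be evicted as closer photons are found. The node layout and names are assumptions for illustration, not the project's actual code.

```cpp
#include <cstddef>
#include <queue>
#include <utility>

struct Vec3 { float x, y, z; };
struct Photon { Vec3 position, incoming, power; };

struct KdNode {
    Photon  photon;
    int     axis;          // 0 = x, 1 = y, 2 = z splitting plane
    KdNode* left  = nullptr;
    KdNode* right = nullptr;
};

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

static float axisDelta(const Vec3& p, const Vec3& q, int axis) {
    return axis == 0 ? p.x - q.x : axis == 1 ? p.y - q.y : p.z - q.z;
}

// Max-heap entry: squared distance to the query point plus the photon itself.
using Candidate = std::pair<float, const Photon*>;

void nearestN(const KdNode* node, const Vec3& query, std::size_t n,
              std::priority_queue<Candidate>& heap)
{
    if (!node) return;

    float d2 = dist2(node->photon.position, query);
    if (heap.size() < n) {
        heap.push({d2, &node->photon});
    } else if (d2 < heap.top().first) {
        heap.pop();                      // evict the current worst candidate
        heap.push({d2, &node->photon});
    }

    float delta = axisDelta(query, node->photon.position, node->axis);
    const KdNode* nearSide = delta < 0 ? node->left  : node->right;
    const KdNode* farSide  = delta < 0 ? node->right : node->left;

    nearestN(nearSide, query, n, heap);
    // Descend into the far subtree only if the splitting plane is closer
    // than the current worst candidate (or the heap is not yet full).
    if (heap.size() < n || delta * delta < heap.top().first)
        nearestN(farSide, query, n, heap);
}
```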
Furthermore, I have elected to use two photon maps: one for general indirect illumination and one for caustics. Caustics are the concentrated light patterns cast by refractive and reflective objects. Separating the maps actually speeds up the nearest-neighbor search by short-circuiting it over most of the scene, because caustics tend to be very localized. It also increases the resolution of the caustics simply because more photons are packed into a smaller area. The only difference between the generation of the caustics map and that of the global map is that caustic photons are cast using importance sampling toward previously identified reflective and refractive geometry in the scene.
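The following sketch illustrates one way such importance sampling could work: each caustic photon is aimed at a random point within the bounding sphere of a previously identified specular (reflective or refractive) object. The SpecularTarget structure and the rejection-sampling strategy are assumptions for illustration, not necessarily how the project implements it.

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(const Vec3& a, float s)       { return {a.x * s, a.y * s, a.z * s}; }

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Bounding sphere of a reflective/refractive object found in a scene pre-pass.
struct SpecularTarget { Vec3 center; float radius; };

// Sample a direction from the light toward a random point inside the target's
// bounding sphere; photons traced this way would be stored in the caustics map.
Vec3 sampleCausticDirection(const Vec3& lightPos, const SpecularTarget& target,
                            std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);
    Vec3 offset;
    do {  // rejection-sample a point in the unit ball
        offset = {uni(rng), uni(rng), uni(rng)};
    } while (offset.x * offset.x + offset.y * offset.y + offset.z * offset.z > 1.0f);

    Vec3 aim = target.center + offset * target.radius;
    return normalize(aim - lightPos);
}
```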
One of the motivations for implementing photon mapping in my ray tracer was to model the caustic patterns of water similar to what is seen at the bottom of a swimming pool. I decided to model the surface of the water with parameterized, randomly placed gravity-capillary waves approximated by triangles. I was able to reach a 217-triangle resolution without a significant increase in the number of intersection tests by using the points where a ray enters and exits the surface's bounding box to define a two-dimensional range of triangles that the ray could possibly intersect. This could be improved further by determining a two-dimensional line of triangles (similar to a line-rasterization algorithm). For the purposes (and time constraints) of this project, however, the first method is fast enough and, in the average case, tests only half of the triangles in the range before an intersection is found.
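The sketch below shows how that two-dimensional range might be computed, assuming the wave triangles are organized in a regular grid over the x-z plane. The grid layout, structure names, and clamping behavior are illustrative assumptions rather than the project's actual code.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

struct WaterSurface {
    float minX, minZ;      // origin of the wave grid in the x-z plane
    float cellSize;        // spacing of the wave grid
    int   cellsX, cellsZ;  // grid resolution
    // triangles[z * cellsX + x] holds the triangles covering cell (x, z)
    std::vector<std::vector<Triangle>> triangles;
};

// Clamp a world-space coordinate to a grid cell index along one axis.
static int toCell(float value, float minValue, float cellSize, int cellCount) {
    int i = static_cast<int>((value - minValue) / cellSize);
    return std::max(0, std::min(cellCount - 1, i));
}

// Gather candidate triangles for a ray that enters the surface's bounding box
// at `entryPoint` and exits it at `exitPoint`.
std::vector<const Triangle*> candidateTriangles(const WaterSurface& s,
                                                const Vec3& entryPoint,
                                                const Vec3& exitPoint)
{
    int x0 = toCell(entryPoint.x, s.minX, s.cellSize, s.cellsX);
    int x1 = toCell(exitPoint.x,  s.minX, s.cellSize, s.cellsX);
    int z0 = toCell(entryPoint.z, s.minZ, s.cellSize, s.cellsZ);
    int z1 = toCell(exitPoint.z,  s.minZ, s.cellSize, s.cellsZ);
    if (x0 > x1) std::swap(x0, x1);
    if (z0 > z1) std::swap(z0, z1);

    // Only triangles inside the rectangle of cells spanned by the entry and
    // exit points need to be intersection-tested.
    std::vector<const Triangle*> candidates;
    for (int z = z0; z <= z1; ++z)
        for (int x = x0; x <= x1; ++x)
            for (const Triangle& t : s.triangles[z * s.cellsX + x])
                candidates.push_back(&t);
    return candidates;
}
```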
As of this writing, the ray tracer can render only simple scenes in a reasonable amount of time when very accurate radiance estimates are used. For example, the top image to the left contains only three boxes, one plane, and two spheres, yet it took just over 65 minutes to render with 400,000 photons in the two photon maps combined and 500 photons per radiance estimate. For more complex scenes, a bounding volume hierarchy would be necessary. One final interesting note on speed in photon mapping is that concave scenes tend to render faster than convex ones; for example, the top scene on the left rendered much faster than the scene below it.