For my project I set out to make a music visualizer using OpenGL, doing all of the analysis of the audio data without help from other libraries. My initial research was into making a spectrum visualizer, i.e. one that can isolate individual frequencies or frequency ranges from within the music files. That approach requires a fairly abstract mathematical tool called the Fourier transform, and I determined that implementing it would take too long for what I wanted to do. Instead, I learned how to read the data in .wav files byte by byte, and used that data to compute average amplitudes with which to modify the objects in my scene.
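As a rough illustration of reading .wav data byte by byte, here is a minimal sketch assuming a canonical 44-byte PCM WAV header with 16-bit little-endian samples (the helper names are mine, not the project's):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Read a little-endian 32-bit value from raw file bytes.
uint32_t readU32(const std::vector<uint8_t>& b, size_t off) {
    return b[off] | (b[off + 1] << 8) | (b[off + 2] << 16)
         | (uint32_t(b[off + 3]) << 24);
}

// In the canonical 44-byte PCM header, the sample rate sits at byte 24.
uint32_t sampleRate(const std::vector<uint8_t>& bytes) {
    return readU32(bytes, 24);
}

// 16-bit PCM samples follow the header as signed little-endian pairs.
int16_t sampleAt(const std::vector<uint8_t>& bytes, size_t index) {
    size_t off = 44 + 2 * index;
    return int16_t(bytes[off] | (bytes[off + 1] << 8));
}
```

Real files can have extra chunks before the data chunk, so a robust reader would walk the RIFF chunk list rather than hard-code offsets.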
In order to get the desired visual effects, I averaged the past (sample rate / frame rate) samples each frame to get an accurate representation of the audio playing during that frame of the animation. I used the resulting averages to scale and elevate the spheres on the sides of the screen. Once I was sure the averages were functioning properly, I moved on to designing my mesh.
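The per-frame averaging can be sketched as follows; with 44100 Hz audio at 60 fps each frame covers 735 samples, and the mean absolute amplitude of that window drives the scene (the function name and signature are my own, not taken from the project):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Average the absolute amplitude of the (sampleRate / frameRate) samples
// that play during the given animation frame.
float frameAverage(const std::vector<int16_t>& samples,
                   size_t frame, size_t sampleRate, size_t frameRate) {
    size_t window = sampleRate / frameRate;  // samples per frame
    size_t begin = frame * window;
    long long sum = 0;
    for (size_t i = begin; i < begin + window && i < samples.size(); ++i)
        sum += std::abs(int(samples[i]));
    return float(sum) / float(window);
}
```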
The mesh has dynamic resolution, which can be changed by editing "#define MESH_DEPTH" in main.cpp. This value controls how many past averages the mesh keeps track of and uses. The general structure of the mesh, triangles arranged into one plane, is stored in a .obj file that is created when the program starts up; the mesh's properties are determined by MESH_DEPTH. My program then adds temporary height modifiers to each of the vertices in the mesh, which are stored in a one-dimensional array containing MESH_DEPTH averages. Each frame the averages are pushed back one spot in the array, making room for the newest average.
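The per-frame shift of the averages array might look like this hypothetical sketch: the oldest value falls off the end, everything slides back one slot, and the newest average goes in front (MESH_DEPTH's value here is a placeholder):

```cpp
#define MESH_DEPTH 64  // placeholder value for the #define described above

// Slide every stored average back one spot and store the newest at the
// front; each value effectively moves back one row of the mesh per frame.
void pushAverage(float heights[MESH_DEPTH], float newest) {
    for (int i = MESH_DEPTH - 1; i > 0; --i)
        heights[i] = heights[i - 1];
    heights[0] = newest;
}
```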
In retrospect, I probably could have done away with creating a .obj mesh and then reading it back in to modify its heights, and instead designed a data structure that didn't require a round trip to the hard disk. I mainly did it this way because I wasn't completely familiar with how to make faces share vertices in a data structure like the ones we've been using in the labs to draw planes. The .obj format provides an easy way to specify that multiple faces use the same vertex, which meant I only needed to store one height value per vertex.
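To illustrate the vertex sharing, here is a minimal made-up .obj fragment (not from the project's generated file): two triangles form a quad, and the shared edge's vertices 2 and 3 are referenced by both `f` lines, so a height modifier applied to vertex 2 moves both faces.

```
v 0 0 0
v 1 0 0
v 0 0 1
v 1 0 1
f 1 2 3
f 2 4 3
```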
I had to do a bit of work to make the changes in average amplitude obvious, given that averaging amplitude over a couple thousand samples yields little change from frame to frame for most songs. I solved this by amplifying the differences between the averages: I subtract a flat amount from all of them, then scale the result up with a multiplier. This yields far more visible differences in the averages.
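The subtract-then-scale step amounts to something like the sketch below; the floor and gain values are placeholders rather than the project's actual constants, and the clamp to zero is my own addition to keep quiet frames from going negative:

```cpp
// Boost contrast between per-frame averages: subtract a flat floor,
// clamp at zero, then scale the remaining difference up.
float amplify(float average, float floorAmount, float gain) {
    float lifted = average - floorAmount;
    if (lifted < 0.0f) lifted = 0.0f;
    return lifted * gain;
}
```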
The mesh uses normal-based shading, while the spheres use Phong shading with their diffuse color based on their normals. Honestly, I don't know why I didn't use Phong lighting for the mesh as well. This can be changed in my DrawGL() function by changing the 0 to a 2 in the line "glUniform1i(h_uShadeM, 0);" before the call to drawMesh(). I think I forgot to change it as the submission deadline was nearing. The effect of Phong lighting isn't very noticeable on the mesh anyway, since it's mostly flat planes. In the above screenshot you can tell that my method of using the .obj file paid off in getting interpolated normals working.
The only non-OpenGL library I used on this project is Windows.h, which let me play the songs easily. Consequently, my project can only run on Windows machines. The synchronization between the audio and the video relies heavily on OpenGL running at 60 fps, so the animation may become desynced on computers with poor processing power.
In the end, this project taught me more about how you can dynamically alter meshes and how most music visualizers work. Even though I decided not to implement some of the more advanced technologies I researched, I did learn more about them.