Sea No Evil is a 3D fight-for-survival game in which the stranded player steers their crew and ship to avoid the monsters and hazards of a turbulent sea.
The player must fight back against the unrelenting waves of the storm in order to make it back to safety at the lighthouse.
All that stands in their way is a swarm of thrashing kraken tentacles and a minefield of sharp, hull-splitting crags.
This perilous experience features a fully fleshed-out level packed to the brim with lethal hazards.
Can the player make it out with their crew alive? We will wait and sea.
Controls
Keyboard
WASD - move and steer ship
R - restart level, respawn ship
Enter - skip intro, start game
Mouse - move camera view up and down
Developer Shortcuts
X - die
C - godmode (no collisions, no timer, higher max speed)
Q - show collision tree bounding boxes
Camera & View
We added a few special features to our game camera at the suggestion of helpful guests during our demos throughout the quarter.
The first is camera interpolation; this is subtle, but it makes the camera feel much smoother. We keep track of where the camera
"should" be (a fixed distance behind the boat) and use glm's mix function, which interpolates between two values; we feed it
the "should be" position and the actual position, using time to continuously interpolate from the actual position toward the
"should be" position. The next camera feature is a small sway that matches the rocking of the boat. To do this, we update the
camera's "up" vector the same way we rock the boat, but scale the sway down by an extra float value to reduce motion sickness
(without this, even we tended to get a little queasy). The boat-rocking method we created samples four adjacent points around
the boat, so we use these points to create two vectors, then perform glm::cross() on them to get the appropriate up vector.
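Below is a minimal sketch of both camera behaviors; the constants (followDistance, smoothSpeed, swayFactor) and the four sampled points are illustrative assumptions, not our exact code:

```cpp
#include <glm/glm.hpp>

// Sketch of the camera update; all names and constants are illustrative.
void updateCamera(glm::vec3& cameraPos, glm::vec3& cameraUp,
                  const glm::vec3& boatPos, const glm::vec3& boatForward,
                  const glm::vec3& north, const glm::vec3& south,
                  const glm::vec3& east,  const glm::vec3& west,
                  float deltaTime) {
    const float followDistance = 10.0f, followHeight = 4.0f;
    const float smoothSpeed = 5.0f, swayFactor = 0.25f;

    // Smoothing: blend the camera from where it is toward where it
    // "should" be (a fixed distance behind and above the boat).
    glm::vec3 target = boatPos - boatForward * followDistance
                     + glm::vec3(0.0f, followHeight, 0.0f);
    float t = glm::clamp(smoothSpeed * deltaTime, 0.0f, 1.0f);
    cameraPos = glm::mix(cameraPos, target, t);

    // Sway: cross two vectors spanning the four points sampled around the
    // boat to get a rocked "up" vector, then damp it toward world-up.
    glm::vec3 rockedUp = glm::normalize(glm::cross(east - west, north - south));
    cameraUp = glm::normalize(glm::mix(glm::vec3(0, 1, 0), rockedUp, swayFactor));
}
```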
Environment
One of the most notable features of our environment is the waves. To accomplish this, I first used an outside reference [4],
but it didn't seem to scale properly with our ocean quad, at least with the way we implemented everything.
The mathematical function we wrote (named getWaterOffset, applied in the ocean's vertex shader and reused for other features)
calculates a height from the vertex's world-space x and z coordinates; held static, it produces almost a grid of small hills and valleys.
I apply a small offset and bring time into the equations to get the ocean moving, and thus we have a wavy ocean controlled by a wave amplitude and a wavelength,
which provide admittedly arbitrary control over the "motion of the ocean."
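A minimal sketch of a getWaterOffset-style height function follows; the exact equation and constants in our shader differ, so treat everything here as illustrative:

```cpp
#include <cmath>

// Illustrative height function: two phase-shifted waves over world-space
// x and z give a grid of hills and valleys; time sets them in motion.
float getWaterOffset(float x, float z, float time,
                     float waveAmplitude, float waveLength) {
    const float k = 2.0f * 3.14159265f / waveLength; // spatial frequency
    return waveAmplitude * 0.5f * (std::sin(k * x + time) + std::cos(k * z + time));
}
```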
Aside from the ocean, we use a basic rock model and a basic tentacle model for the obstacles, animating the tentacles and checking for collisions, both of which are discussed later.
Because the ocean mesh is dynamic, meaning its vertex positions change over time, its vertex normals must be updated in real time as well.
The standard way to calculate the normal of each vertex is to sample its surrounding triangle faces, sum up the normals of those faces, and normalize the result.
Unfortunately, the vertex shader stage does not have access to neighboring vertex attributes. However, since our ocean mesh is simply a big quad where only the y positions
of the vertices change over time, we know the x and z offsets to each neighboring vertex (the x and z coordinates never change). From those offsets, we can compute the
x and z coordinates of the vertices surrounding the current vertex. And since our wave algorithm sets each vertex's y position from a math equation, meaning that for each
x and z pair we get a y value, we can compute the neighboring vertices' y values from those neighboring x and z values.
With all the information we need about the neighboring vertices, we can reconstruct the neighboring faces with cross products and thus their normal vectors as well.
Finally, we sum those normals together and normalize them.
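Here is a host-side sketch of this finite-difference normal reconstruction (in the game it runs in the ocean vertex shader); the eps sampling distance and parameter names are illustrative:

```cpp
#include <glm/glm.hpp>

// Rebuild a vertex normal by sampling the wave height at known x/z offsets
// around the vertex, since the shader cannot read neighboring vertices.
glm::vec3 waveNormal(float x, float z, float time,
                     float amplitude, float waveLength) {
    const float eps = 0.1f; // distance to the sampled neighbors (illustrative)
    auto h = [&](float px, float pz) {
        return getWaterOffset(px, pz, time, amplitude, waveLength);
    };
    glm::vec3 right(x + eps, h(x + eps, z), z);
    glm::vec3 left (x - eps, h(x - eps, z), z);
    glm::vec3 front(x, h(x, z + eps), z + eps);
    glm::vec3 back (x, h(x, z - eps), z - eps);
    // Cross the two tangents spanning the neighbors to get an upward normal.
    return glm::normalize(glm::cross(front - back, right - left));
}
```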
Animation
The most dynamically animated object in our game is probably the player's boat: it follows the tilt and rock of the ocean waves, and is even pushed by the waves a small amount.
This is accomplished with the aforementioned getWaterOffset function; I feed it four points adjacent to the boat: North, East, South, and West.
Once getWaterOffset gives us the ocean heights at those points, comparing North vs. South and East vs. West gives us the slope between each pair of points,
and from each slope we calculate an angle (in degrees, then converted to radians) to rotate the model by. The North/South points drive the x-axis rotation, and the East/West
points drive the z-axis rotation. Combining these gives what appears to be accurate 360-degree "boat rocking," while really all we're tracking is
four adjacent points. The accuracy could theoretically be improved by tracking more points around the boat and combining those axes of rotation, but this seems to do the trick for us.
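A sketch of the rocking rotation, reusing the illustrative getWaterOffset above (this version computes the angles directly in radians via atan2; the sampling distance d is an assumption):

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Tilt the boat's model matrix to match the wave slope at four points
// sampled a distance d to the boat's north/south/east/west.
glm::mat4 rockBoat(const glm::vec3& boatPos, float time,
                   float amplitude, float waveLength) {
    const float d = 1.0f; // sampling distance (illustrative)
    float hN = getWaterOffset(boatPos.x, boatPos.z - d, time, amplitude, waveLength);
    float hS = getWaterOffset(boatPos.x, boatPos.z + d, time, amplitude, waveLength);
    float hE = getWaterOffset(boatPos.x + d, boatPos.z, time, amplitude, waveLength);
    float hW = getWaterOffset(boatPos.x - d, boatPos.z, time, amplitude, waveLength);

    float pitch = std::atan2(hN - hS, 2.0f * d); // North vs. South -> x-axis
    float roll  = std::atan2(hW - hE, 2.0f * d); // East vs. West  -> z-axis

    glm::mat4 model = glm::translate(glm::mat4(1.0f), boatPos);
    model = glm::rotate(model, pitch, glm::vec3(1, 0, 0));
    model = glm::rotate(model, roll,  glm::vec3(0, 0, 1));
    return model;
}
```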
The next animated objects in the scene are the tentacles. We opted for a simple rigid-body animation,
twisting them and bobbing/scaling them up and down to give them a slightly livelier feel.
Collision Detection
We implemented a Bounding Volume Hierarchy (BVH) tree for high-performance collision detection [9][10][11]. Currently, the BVH only stores information for static objects,
namely the rocks. Each rock and tentacle mesh in the world has its axis-aligned bounding box (AABB) calculated, and the rock AABBs are inserted into our BVH tree.
The tree calculates an appropriate combined AABB for each sibling group at each generation of the tree. For example, two rocks that are near each other in the world are determined
to be siblings by the tree, so their individual AABBs are combined at that generation. The last generation, the leaf nodes, stores the AABB of each individual rock.
BVH trees are very popular in ray tracing, where a ray is queried against the tree to see if it collides with any of the objects' bounding volumes
(these could be any kind of bounding volume; we use AABBs). In our case, a tentacle's AABB is queried against the rock BVH tree. If the tree returns a
non-empty list of rock AABBs as potential colliders, we simply invert the direction the queried tentacle is moving in. There is a small immunity window during which
the tentacle's direction can't be changed again, giving it enough time to move out of the rock's AABB. Without this, the tentacle would be continually detected as
colliding with the rock, its movement direction would be inverted non-stop, and it would end up stuck at the rock.
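A hedged sketch of this query-and-bounce logic; the BVH, AABB, and Tentacle types here are illustrative stand-ins for our actual classes:

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct AABB { glm::vec3 min, max; };
struct BVH { std::vector<AABB> query(const AABB& box) const; }; // potential colliders

struct Tentacle {
    glm::vec3 position{0.0f}, direction{1.0f, 0.0f, 0.0f};
    float speed = 1.0f;
    float immunityTimer = 0.0f;
    AABB boundingBox() const;
};

// Flip a tentacle's direction when the rock BVH reports a potential hit,
// with a short immunity window so it can escape the rock's AABB.
void updateTentacle(Tentacle& t, const BVH& rockTree, float dt) {
    const float kImmunityTime = 0.5f; // illustrative grace period
    t.immunityTimer = std::max(0.0f, t.immunityTimer - dt);
    if (t.immunityTimer == 0.0f && !rockTree.query(t.boundingBox()).empty()) {
        t.direction = -t.direction;
        t.immunityTimer = kImmunityTime;
    }
    t.position += t.direction * t.speed * dt;
}
```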
For simpler collisions (boat vs. obstacles), we use a sort of bounding ring around each obstacle and test the distances between objects. For example,
to detect a collision between the boat and a rock, both the boat and the rock are given their own radii. We calculate the distance between the boat and the rock,
then determine whether that distance is less than or equal to the two radii added together. If so, a collision has happened and we take the corresponding action
(game over, turn the tentacle around, etc.).
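A minimal sketch of that radius test; comparing squared distances avoids the square root, and the names are illustrative:

```cpp
#include <glm/glm.hpp>

// True when two "bounding rings" of radii ra and rb overlap.
bool radiiCollide(const glm::vec3& a, float ra, const glm::vec3& b, float rb) {
    glm::vec3 diff = a - b;
    float r = ra + rb;
    return glm::dot(diff, diff) <= r * r;
}
```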
Shadows
Shadow mapping is done with a two-pass rendering technique as presented in lecture [12][13]. The first pass draws the objects that we want to cast shadows
from the light's perspective into a non-default framebuffer; this framebuffer has a texture bound to it as its depth buffer. In the second pass,
we render the objects to the screen normally and test their depths against the depth texture, which we pass into the fragment shader. Textured and material objects are
each sent to their respective shaders, but the main shadow calculations are done in the wave shaders.
This two-pass process generates the shadows in our game.
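A high-level sketch of the two passes in OpenGL; the FBO handles, matrices, and draw helpers (drawSceneDepthOnly, drawScene) are illustrative stand-ins for our actual code:

```cpp
// Assumes an OpenGL loader (e.g., glad); all handles below are illustrative.
void renderWithShadows() {
    // Pass 1: draw the shadow casters from the light's perspective into a
    // non-default framebuffer whose attached texture acts as the depth buffer.
    glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
    glViewport(0, 0, SHADOW_W, SHADOW_H);
    glClear(GL_DEPTH_BUFFER_BIT);
    drawSceneDepthOnly(lightView, lightProj);

    // Pass 2: render normally, binding the depth texture so the fragment
    // shaders can compare each fragment's light-space depth against it.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, screenW, screenH);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, shadowDepthTexture);
    drawScene(cameraView, cameraProj, lightProj * lightView);
}
```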
Culling
View frustum culling is one of the techniques we implemented to improve the performance of our game. It draws only the objects
visible in the direction the camera is facing, which lets us forgo rendering a large number of objects. The culling only applies
to our tentacles and rocks, since they are the main objects in our game. The integration from lab into our game went seamlessly and improved our rendering performance [17].
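For reference, a minimal sphere-vs-frustum visibility test of the kind used in the lab; the Plane and Frustum types are illustrative stand-ins:

```cpp
#include <glm/glm.hpp>

struct Plane { glm::vec3 normal; float d; }; // plane: dot(normal, p) + d = 0
struct Frustum { Plane planes[6]; };         // extracted from view-projection

// An object (bounded by a sphere) is culled if it lies fully outside any plane.
bool isVisible(const Frustum& f, const glm::vec3& center, float radius) {
    for (const Plane& p : f.planes) {
        if (glm::dot(p.normal, center) + p.d < -radius)
            return false; // entirely outside this plane: skip rendering
    }
    return true;
}
```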
Non-Photo Realistic Shaders
Sea No Evil draws heavy inspiration from the game Return of the Obra Dinn. As a result, the game features a black-and-white color palette along with dithered shading.
These combine to create a visual effect similar to that of the original game.
The concept of dithering is fairly similar to cel shading in the sense that both require a process called quantization.
Quantization, also known as discretization, means that for every pixel we find the closest color in the palette and display that color instead,
resulting in flat areas of a single color. Because of this, our dithered scene looks somewhat cel shaded. We implemented a specific type of
dithering called ordered dithering [5]. The first step in ordered dithering is quantization as described; however, instead of finding the one closest color,
we find the two closest. Then, the pixel's screen coordinate (from gl_FragCoord) is compared against an index matrix. Depending on which value in the index matrix the pixel matches,
we choose either the first or the second closest color as the final output color.
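A CPU-side sketch of the ordered-dithering decision (in the game this logic lives in the fragment shader). The 4x4 Bayer matrix is a standard choice of index matrix; with our black-and-white palette, the "two closest colors" collapse to black and white:

```cpp
// Standard 4x4 Bayer threshold matrix, normalized to [0, 1).
const float kBayer4[4][4] = {
    { 0/16.f,  8/16.f,  2/16.f, 10/16.f},
    {12/16.f,  4/16.f, 14/16.f,  6/16.f},
    { 3/16.f, 11/16.f,  1/16.f,  9/16.f},
    {15/16.f,  7/16.f, 13/16.f,  5/16.f},
};

// Choose between the two nearest palette colors (here black and white)
// based on the pixel's screen position, as with gl_FragCoord in the shader.
float orderedDither(float luminance, int fragX, int fragY) {
    float threshold = kBayer4[fragY % 4][fragX % 4];
    return (luminance > threshold) ? 1.0f : 0.0f; // white or black
}
```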
Lightning
This is a fairly simple and rudimentary implementation of an L-system [7]. Basically, we run multiple generations, and in each generation
we do something (according to a set of rules) to the previous generation. In this case, our 0th generation is just a straight
line from the starting point of the lightning bolt to its end point [7][8]. Both points are randomly generated, with the end point
kept relatively close to the starting point in x and z; the starting point is always 100 units up in +y, while the end point is always at y = 0,
so only the x and z values are randomized. In each generation, the midpoint of each line segment is computed and offset by a tiny amount along a plane that
always faces the camera (similar to a billboard). Over many generations, the initial straight line ends up as a zig-zag pattern.
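A sketch of that generation step, assuming an illustrative camera-facing offset direction (camRight) and offset scale:

```cpp
#include <random>
#include <vector>
#include <glm/glm.hpp>

// Repeatedly split each segment at its midpoint and jitter the midpoint
// along a camera-facing direction, turning a straight line into a zig-zag.
std::vector<glm::vec3> subdivideBolt(std::vector<glm::vec3> pts,
                                     const glm::vec3& camRight,
                                     int generations) {
    static std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<float> jitter(-1.0f, 1.0f);
    for (int g = 0; g < generations; ++g) {
        std::vector<glm::vec3> next;
        float scale = 5.0f / float(1 << g); // offsets shrink each generation
        for (size_t i = 0; i + 1 < pts.size(); ++i) {
            next.push_back(pts[i]);
            glm::vec3 mid = 0.5f * (pts[i] + pts[i + 1]);
            next.push_back(mid + camRight * (jitter(rng) * scale));
        }
        next.push_back(pts.back());
        pts = std::move(next);
    }
    return pts;
}
```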
Each lightning bolt has a lifetime attribute that drains a bit every frame, and the remaining lifetime translates to the brightness of the bolt.
About every 2 seconds, 2 lightning bolts are generated; the second bolt is drawn only after the first has been drawn
and faded to zero brightness, giving the illusion of flashing lightning.
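The fade itself is just the lifetime mapped to brightness; a tiny sketch with illustrative names:

```cpp
#include <algorithm>

// Drain a bolt's remaining lifetime each frame and map it to brightness:
// 1 when freshly spawned, fading to 0 as it expires.
float boltBrightness(float& lifetime, float deltaTime, float maxLifetime) {
    lifetime = std::max(0.0f, lifetime - deltaTime);
    return lifetime / maxLifetime;
}
```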
Rain
This is an application of framebuffer objects [14][15]. The entire scene is drawn to a non-default framebuffer object
whose attached texture serves as the color buffer. That texture is then sampled in the fragment shader when the whole scene is drawn to a quad,
which is when the scene is finally rendered to the default framebuffer.
The rain filter was found on Shadertoy [6]; it was a scene of a thunderstorm drawn to a quad. It was quite heavy on math without much explanation;
as one would expect, Shadertoy is for showcasing, not a tutorial site. Still, we were able to modify the fragment shader to fit our scene.
We removed the majority of the original scene (trees, thunder flashes, ground, and moon), leaving only the rain. The fragment shader takes in the
system time to randomly generate raindrops in screen space, and the raindrops move downward naturally over time.
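A sketch of the overall post-process flow; the FBO, texture handle, shader wrapper, and draw helpers are illustrative:

```cpp
// Assumes an OpenGL loader and GLFW; all handles below are illustrative.
void renderWithRain() {
    // Draw the whole scene into an off-screen framebuffer whose color
    // attachment is a texture.
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();

    // Then draw a fullscreen quad to the default framebuffer, sampling that
    // texture and layering the time-driven rain on top of it.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    rainShader.use();
    rainShader.setFloat("uTime", (float)glfwGetTime());
    glBindTexture(GL_TEXTURE_2D, sceneColorTexture);
    drawFullscreenQuad();
}
```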
Object Outlining
This utilizes the power of the stencil buffer [16]. As with the rain technique above, the scene is first drawn to a non-default framebuffer
(technically, the shadow pass comes first); a renderbuffer object is attached to that framebuffer as a combined stencil and depth buffer.
If outline mode is activated (by setting the macro ENABLE_OUTLINE to true), the application enables stencil testing.
The scene is then drawn 3 times (4 if we count the shadow mapping). The first draw is normal, and
we write stencil values into the stencil buffer. Then we draw the scene again with the meshes scaled up,
comparing stencil values: wherever a pixel's stencil value matches one written in the first pass,
we discard the pixel, leaving only the outline (the remains of the scaled-up meshes).
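A sketch of the stencil passes with standard OpenGL calls; drawScene's scale parameter, the 1.05 scale factor, and outlineShader are illustrative:

```cpp
// Assumes an OpenGL loader; the draw helpers and shader are illustrative.
void drawOutlinePasses() {
    glEnable(GL_STENCIL_TEST);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);

    // First draw: render normally, writing 1 into the stencil buffer
    // everywhere an object covers a pixel.
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilMask(0xFF);
    drawScene(/*scale=*/1.0f);

    // Second draw: render the meshes scaled up, but only where the stencil
    // is NOT 1; pixels covered by the first draw are discarded, leaving
    // just the enlarged border as the outline.
    glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
    glStencilMask(0x00);
    outlineShader.use();
    drawScene(/*scale=*/1.05f);
    glStencilMask(0xFF);
}
```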
HUD & Intro
Using the FreeType library, Sea No Evil implements a HUD that shows the player the timer, the current goal,
and the state of the game (such as losing or winning). Additionally, this technology is used at the beginning to introduce
the player to the setting as well as give them a bit of story through a cutscene.