The following is a list of some current and past Master's theses supervised by me. Each topic is usually a link to further information on the project (such as a wiki), although not all of the links still work.
Note: This list is generated from XML and may not display properly in some browsers.
Ryan De Haven, "Applying Modified Genetic Algorithms to the Traveling Salesman Problem"
The focus of this thesis is a new approach to genetic algorithms. The new algorithm will be tested against previous genetic algorithms on the traveling salesman problem. The goal is a genetic algorithm that can modify its own mutation and crossover operators over generations, effectively evolving new genetic algorithms.
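As a rough illustration of the starting point for this work, here is a minimal sketch (not the student's code) of a conventional genetic algorithm for the TSP, where each individual also carries a mutation rate that is itself perturbed each generation, in the spirit of the self-modifying operators described above. The city data and all parameters are illustrative.

```python
import random

# Illustrative city coordinates (hypothetical data).
CITIES = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    """Total Euclidean length of a closed tour."""
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = CITIES[tour[i]]
        x2, y2 = CITIES[tour[(i + 1) % len(tour)]]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def order_crossover(p1, p2):
    """Classic OX: copy a slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def evolve(pop_size=100, generations=500):
    # Each individual carries its own mutation rate, which is itself
    # perturbed each generation -- a simple stand-in for operators that
    # adapt over generations.
    pop = [(random.sample(range(len(CITIES)), len(CITIES)), 0.1)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: tour_length(ind[0]))
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            (p1, r1), (p2, _) = random.sample(survivors, 2)
            child = order_crossover(p1, p2)
            rate = min(1.0, max(0.01, r1 * random.uniform(0.9, 1.1)))
            if random.random() < rate:          # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append((child, rate))
        pop = survivors + children
    return min(pop, key=lambda ind: tour_length(ind[0]))

best_tour, best_rate = evolve()
print(tour_length(best_tour), best_rate)
```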
Christopher Hoover, "Adaptive Route Planning"
In urban areas, commuters often have to deal with heavy rush-hour traffic. Main routes, such as freeways, may be drastically slowed under such conditions. Thus, secondary routes that would ordinarily require longer trips may be faster than main routes during rush hour. However, although rush-hour traffic often occurs predictably at certain times of day, it may also vary significantly from day to day. This can make it difficult for commuters to select the fastest route to their destinations at a given time of day. In such situations, commuters would benefit from a way to quickly predict which routes are likely to have the lowest travel time given current conditions.
Next, consider a hypothetical person who needs to drive to a business conference in an unfamiliar city. With mapping services such as Google Maps, it is easy for him to find a route with step-by-step directions to his destination. However, suppose that one of the roads on his route is closed due to construction, and that traffic is being diverted onto a detour route. Since he does not know the city’s roads, he might very well get lost trying to find his way back to his original route. This could have been avoided if the originally generated route had taken the temporary road closure into account.
These scenarios illustrate two limitations of today's widely available route-generation services. While such services may offer an estimated travel time based on speed limits along a route, they do not adjust these estimates for conditions such as current traffic levels. Nor do they consider conditions that may invalidate otherwise optimal routes, such as the detour described above. Both limitations suggest that current route generators could be improved by extending them to monitor road conditions in real time. My proposed project is to construct such a system.
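A minimal sketch of the core idea, assuming historical travel times indexed by hour of day: a time-dependent variant of Dijkstra's algorithm where the cost of an edge depends on the clock time at which it is entered. The road graph, the per-hour travel times, and all names are hypothetical, not the system actually proposed.

```python
import heapq

# Hypothetical road graph: node -> list of (neighbor, times-by-hour),
# where times[h] is the observed travel time in minutes at hour h.
GRAPH = {
    "home":       [("freeway_on", [5] * 24)],
    "freeway_on": [("office", [10] * 7 + [35] * 3 + [10] * 14),  # slow 7-10am
                   ("back_road", [3] * 24)],
    "back_road":  [("office", [20] * 24)],                       # steady but slower
    "office":     [],
}

def fastest_route(source, dest, depart_hour):
    """Time-dependent Dijkstra: the edge cost depends on the clock
    time at which the edge is entered."""
    queue = [(0.0, source, [source])]   # (elapsed minutes, node, path)
    best = {}
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == dest:
            return elapsed, path
        if best.get(node, float("inf")) <= elapsed:
            continue
        best[node] = elapsed
        hour = int(depart_hour + elapsed // 60) % 24
        for nbr, times in GRAPH[node]:
            heapq.heappush(queue, (elapsed + times[hour], nbr, path + [nbr]))
    return None

print(fastest_route("home", "office", depart_hour=8))   # rush hour: back road wins
print(fastest_route("home", "office", depart_hour=13))  # midday: freeway wins
```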
Shradda Kacha, "Personalized Service via Micro-Checkin through Mobile Devices"

Scott Kuroda, "Agent-Based Security Code Analysis"
As computers become more and more prominent in everyday life, the need for high-quality code has become increasingly important. High-quality, robust code should be able to handle any type of input. If something unexpected occurs, the program must exit gracefully, avoiding unexpected, forced exits. In pursuit of this goal, good programming practices should be instilled in the programmers being trained today.
Working towards establishing good practices during the education process is the high-level topic of my thesis. I will develop an environment that, through the use of intelligent agents, provides feedback on written code. Not only will the agents provide feedback on submitted code, but the system will also leverage currently available static code analysis packages. Though not the main focus of my thesis, these independent code analysis packages will be integrated in such a way that they may be removed, or new analysis tools added, with as few problems as possible.
The agents in the system will complete a variety of tasks, working together to provide a relatively complete set of feedback to a developer. Tasks that agents will perform include, but are not limited to, analysis of information generated by currently available code analysis tools, behavioral analysis of code, report generation, and rule generation to be used in conjunction with the code analysis and behavior analysis tools.
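A minimal sketch of the pluggable-agent idea described above: every agent exposes the same interface, so external analysis tools can be wrapped, added, or removed without changing the coordinator. All class names and the wrapped command are hypothetical, not the thesis's actual design.

```python
class LineLengthAgent:
    """Toy 'static analysis' agent implemented directly in Python."""
    def analyze(self, source):
        return [f"line {i}: longer than 79 chars"
                for i, line in enumerate(source.splitlines(), 1)
                if len(line) > 79]

class ExternalToolAgent:
    """Wraps an external analyzer behind the same interface; the
    command passed in is a placeholder for a real lint tool."""
    def __init__(self, command):
        self.command = command
    def analyze(self, source):
        import subprocess, tempfile
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
        result = subprocess.run([*self.command, f.name],
                                capture_output=True, text=True)
        return result.stdout.splitlines()

class Coordinator:
    """Fans a submission out to every registered agent and merges
    their feedback into one report."""
    def __init__(self):
        self.agents = []
    def register(self, agent):
        self.agents.append(agent)
    def report(self, source):
        return {type(agent).__name__: agent.analyze(source)
                for agent in self.agents}

coordinator = Coordinator()
coordinator.register(LineLengthAgent())
# coordinator.register(ExternalToolAgent(["pylint"]))  # optional external tool
print(coordinator.report("x = 1\ny = " + "'a'" * 40))
```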
Frank Pike, "Wheelchair Control through Brain-Computer Interfaces"

Allen Dunlea, "Automating Usability Evaluations"
For my thesis I propose to test how useful an automated usability critiquing system could be for developers. I plan to develop such a system to the best of my ability and then evaluate how useful it actually is.
My current plan is to develop a system for automatically checking usability requirements on web pages, but in my thesis I plan to discuss how my findings can be applied to other areas where usability matters. I plan on leveraging Selenium IDE (a website testing suite) and implementing some of my own algorithms to test a site.
I do not expect to be able to do a full usability evaluation of a site, but I should be able to check a number of things and show how my project could be extended into a more complete package.
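To make the idea concrete, here is a sketch of two simple automatable usability checks (images without alt text, vague link text) using Selenium's Python WebDriver bindings. The proposal mentions Selenium IDE; this is only an illustration of the same kind of check, and the URL is a placeholder.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()          # requires geckodriver on PATH
driver.get("https://example.com")     # placeholder URL

problems = []
# Accessibility check: every image should have non-empty alt text.
for img in driver.find_elements(By.TAG_NAME, "img"):
    if not (img.get_attribute("alt") or "").strip():
        problems.append(("missing alt text", img.get_attribute("src")))

# Heuristic check: link text should be descriptive, not "click here".
for link in driver.find_elements(By.TAG_NAME, "a"):
    if (link.text or "").strip().lower() in ("click here", "here", "more"):
        problems.append(("vague link text", link.get_attribute("href")))

print(f"{len(problems)} potential usability issues")
for kind, target in problems:
    print(f"  {kind}: {target}")
driver.quit()
```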
Jonathan McElroy, "Automatic Document Filer"
An automatic classifier and document filer.

Greg Flanagan, "Conceptual Requirement Validation for Architecture Design Systems"
Computer-aided architectural design (CAAD) programs represent architectural designs at a low level of spatial abstraction. While this representation model allows CAAD programs to capture the precise spatial characteristics of a design, it means that CAAD programs lack the underlying computational apparatus necessary to reason about a design at a conceptual level.
This thesis is a first step towards building a framework that bridges the gap between the conceptual aspects of a design and its low-level CAAD-based spatial representation. Specifically, it presents a new framework, referred to as the Conceptual Requirements Reasoner (CRR), which gives an architect a means to validate conceptual design requirements. The CRR demonstrates how qualitative spatial representation and reasoning techniques can be used as the link between a design's conceptual requirements and its underlying quantitative spatial representation.
A museum case study is presented to demonstrate the application of the CRR in a real-world design context. It introduces a set of museum design requirements identified in the research and shows how these requirements can be validated using the CRR. The results of the case study show that the CRR is an effective tool for conceptual requirements reasoning.
(thesis at Cal Poly's Digital Commons)
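A hedged sketch of the kind of check such a reasoner might perform: derive a qualitative relation (adjacency) from low-level geometry, then validate a conceptual requirement against it. Rooms are axis-aligned rectangles, and all names and rules are illustrative, not the actual CRR.

```python
# Rooms as axis-aligned rectangles (x1, y1, x2, y2); hypothetical plan.
ROOMS = {
    "lobby":     (0, 0, 10, 10),
    "gallery_a": (10, 0, 25, 10),   # shares an edge with the lobby
    "storage":   (30, 0, 40, 10),   # detached
}

def adjacent(a, b):
    """Qualitative relation: rectangles touch along an edge without overlapping."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    share_x = (ax2 == bx1 or bx2 == ax1) and ay1 < by2 and by1 < ay2
    share_y = (ay2 == by1 or by2 == ay1) and ax1 < bx2 and bx1 < ax2
    return share_x or share_y

def validate(rooms):
    """Conceptual requirement (illustrative): every gallery must be
    adjacent to the lobby. Returns the rooms that violate it."""
    return [name for name, rect in rooms.items()
            if name.startswith("gallery") and not adjacent(rect, rooms["lobby"])]

print(validate(ROOMS))   # [] -> requirement satisfied
```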
S11: Emily Schwarz, "Plant Identification"
A field guide is a tool for identifying an object of natural history. Field guides cover a wide range of topics, from plants to fungi, birds to mammals, and shells to minerals. Traditionally, field guides are books, usually small enough to be carried outdoors. They enjoy wide popularity in modern life; almost every American home and library owns at least one field guide, and the same is true in other areas of the world.
At this time, companies, non-profits, and universities are developing computer technologies to replace printed field guides for identifying plants. This thesis examines the state of the art in field guides for plants. First, a framework is established for evaluating both printed and digital field guides. Second, four print and three digital field guides are evaluated against the criteria. Third, a novel digital field guide is presented and evaluated.
(thesis at Cal Poly's Digital Commons)
S11: Ryan Reck, "Suffix Trees for Document Retrieval"
A look at the suitability of suffix trees for full-text indexing and retrieval.
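As a minimal illustration of why suffix structures suit full-text retrieval, here is a toy suffix array (a compact cousin of the suffix tree) answering substring queries by binary search. The construction and data are illustrative only; real indexes use far faster algorithms.

```python
import bisect   # bisect's key= parameter requires Python 3.10+

def build_suffix_array(text):
    """O(n^2 log n) toy construction; real indexes use linear-time methods."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find(text, sa, pattern):
    """All positions where pattern occurs, via binary search over sorted suffixes."""
    lo = bisect.bisect_left(sa, pattern, key=lambda i: text[i:i + len(pattern)])
    hi = bisect.bisect_right(sa, pattern, key=lambda i: text[i:i + len(pattern)])
    return sorted(sa[lo:hi])

doc = "the quick brown fox jumps over the lazy dog"
sa = build_suffix_array(doc)
print(find(doc, sa, "the"))   # -> [0, 31]
```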
Nate Black, "Head Tracking with View-Dependent Projection"
Implements a head-tracking system using color histograms. The display uses a view-dependent projection to enhance the illusion of depth.
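A standard color-histogram tracking loop in OpenCV (hue back-projection plus CamShift), shown as a sketch of the technique named above; the thesis's actual pipeline may differ, and the initial window is an assumed placeholder.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # default webcam
ok, frame = cap.read()
x, y, w, h = 300, 200, 100, 100                 # assumed initial head window
roi = frame[y:y + h, x:x + w]

# Build a hue histogram of the tracked region (e.g., skin color).
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    box, window = cv2.CamShift(backproj, window, term)   # follow the hue blob
    pts = cv2.boxPoints(box).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:                   # Esc to quit
        break
cap.release()
```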
Evan Hecht, "KnowledgeBase"
A system to classify documents into an intuitive organization. It will involve classification and clustering, and may use an ontology. User input will be critical both for initializing the system and for improving it while it is running. I expect to emphasize the AI component, without writing or researching much about the user interaction.

S10: Brian Blonski, "The Use of Context in an Efficient Finite-State-Machine-Based Head-Gesture Recognition System"
This thesis explores the use of head-gesture recognition as an intuitive interface for computer interaction. The research presents a novel vision-based head-gesture recognition system that utilizes contextual clues to reduce false positives. The system is used as a computer interface for answering dialog boxes. This work seeks to validate similar research, but focuses on more efficient techniques using everyday hardware. A survey of image processing techniques for recognizing and tracking facial features is presented, along with a comparison of several methods for tracking and identifying gestures over time. The design describes an efficient, reusable head-gesture recognition system that uses lightweight algorithms to minimize resource utilization. The research consists of a comparison between the base gesture recognition system and an optimized system that uses contextual clues to reduce false positives. The results confirm that simple contextual clues can lead to a significant reduction in false positives. The head-gesture recognition system achieves an overall accuracy of 96% using contextual clues and significantly reduces false positives. In addition, the results of a usability study are presented, showing that head-gesture recognition is considered an intuitive interface and is preferred over conventional input for answering dialog boxes. By providing the detailed design and architecture of a head-gesture recognition system using efficient techniques and simple hardware, this thesis demonstrates the feasibility of implementing head-gesture recognition as an intuitive form of interaction using preexisting infrastructure, and provides evidence that such a system is desirable.
(thesis at Cal Poly's Digital Commons)
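A sketch of the finite-state-machine idea behind the Blonski thesis: detect a nod as down-then-up motion in a stream of vertical head positions, with a contextual gate (a dialog box being open) suppressing false positives. States, thresholds, and the synthetic track are illustrative, not the thesis's exact design.

```python
IDLE, DOWN = "idle", "down"

class NodFSM:
    def __init__(self, threshold=8):
        self.state = IDLE
        self.threshold = threshold   # pixels of vertical movement per frame
        self.prev_y = None

    def step(self, y, dialog_open):
        """Feed one head y-coordinate; returns True when a nod completes."""
        if not dialog_open:          # context gate: ignore gestures entirely
            self.state, self.prev_y = IDLE, y
            return False
        if self.prev_y is None:
            self.prev_y = y
            return False
        dy, self.prev_y = y - self.prev_y, y
        if self.state == IDLE and dy > self.threshold:
            self.state = DOWN                    # head moved down
        elif self.state == DOWN and dy < -self.threshold:
            self.state = IDLE                    # head came back up: a nod
            return True
        return False

fsm = NodFSM()
ys = [100, 100, 112, 120, 108, 98, 100]          # synthetic head track
print([fsm.step(y, dialog_open=True) for y in ys])
# -> [False, False, False, False, True, False, False]
```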
W10: Matt Colón, "Controlling the Uncontrollable: A New Approach to Digital Storytelling Using Autonomous Virtual Actors and Environmental Manipulation"
In most video games today that focus on a single storyline, scripting languages are used to control the artificial intelligence of the virtual actors. While scripting is a great tool for reliably performing a story, it has many disadvantages; mainly, it can respond only to those situations that were explicitly declared, causing unreliable responses to unknown situations, and the believability of the virtual actor is hindered by possible conflicts between scripted actions and appropriate responses as perceived by the viewer.
This thesis presents a novel method of storytelling that manipulates the environment, whether physically or in the agent's perception of it, around the goals and behaviors of the virtual actor in order to advance the storyline, rather than controlling the virtual actor explicitly. The virtual actor in this method is completely autonomous, and the environment is manipulated by a story manager so that the virtual actor chooses to satisfy its goals in accordance with the direction of the story. Comparisons are made between scripting, traditional autonomy, Lionhead Studio's Black & White, Mateas and Stern's Façade, and autonomy with environmental manipulation in terms of design, performance, believability, and reusability.
It was concluded that molding an environment around a virtual actor with the help of a story manager gives the actor the ability to reliably perform both event-based and emergent storylines while preserving the believability and reusability of the actor and environment. While autonomous actors have traditionally been used solely for emergent storytelling, this new method enables them to be used reliably and efficiently to tell event-based storylines as well, while reaping the benefits of their autonomous nature. In addition, the separation of the virtual actors from the environment and story manager promotes a cleaner, reusable architecture that also allows for independent development and improvement. By modeling artificial intelligence design after Herbert Simon's "artifact," emphasizing the encapsulation of the inner mechanisms of virtual actors, the next era of digital storytelling can be driven by the design and development of reusable storytelling components and the interaction between the virtual actor and its environment.
Keywords: artificial intelligence, intelligent agent, storytelling, digital storytelling, scripting, autonomy, virtual actor, virtual environment, story manager, environmental manipulation
(thesis at Cal Poly's Digital Commons)
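A toy sketch of the environmental-manipulation idea: the actor is fully autonomous (it simply satisfies its strongest need), and the story manager never commands it; it only edits the world so that the action the story needs becomes the actor's own best choice. All names, needs, and numbers are illustrative.

```python
class Actor:
    def __init__(self):
        self.needs = {"hunger": 0.7, "curiosity": 0.4}

    def choose_action(self, world):
        """Autonomous: pick the object whose affordances best satisfy
        the actor's current needs."""
        best, best_score = None, 0.0
        for obj, satisfies in world.items():
            score = sum(self.needs.get(n, 0.0) for n in satisfies)
            if score > best_score:
                best, best_score = obj, score
        return best

class StoryManager:
    """Advances the plot by adding/removing objects, never by scripting
    the actor directly."""
    def nudge(self, world, beat):
        if beat == "lure_to_cellar":
            world.pop("kitchen_food", None)                  # remove distraction
            world["cellar_smell"] = ["hunger", "curiosity"]  # add a lure

world = {"kitchen_food": ["hunger"], "window": ["curiosity"]}
actor, manager = Actor(), StoryManager()
print(actor.choose_action(world))        # -> kitchen_food (actor's own goal)
manager.nudge(world, "lure_to_cellar")
print(actor.choose_action(world))        # -> cellar_smell (story advanced)
```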
W10: Jason Anderson, "Autonomous Satellite Operations for CubeSat Satellites"
In the world of educational satellites, student teams manually conduct operations daily, sending commands and collecting downlinked data. Educational satellites typically travel in a low Earth orbit, allowing line-of-sight communication for approximately thirty minutes each day. This is manageable for student teams, as the required manpower is minimal. The international Global Educational Network for Satellite Operations (GENSO), however, promises satellite contact upwards of sixteen hours per day by connecting earth stations all over the world through the Internet. With this dramatic increase in communication time, it is unreasonable for student teams to conduct manual operations, and alternatives must be explored. This thesis first introduces a framework for developing different artificial intelligences to conduct autonomous satellite operations for CubeSat satellites. Three different implementations are then compared using Cal Poly's CP6 CubeSat and the University of Tokyo's XI-IV CubeSat to determine which method is most effective.
Keywords: autonomous operations, CubeSat, lights-out operations, earth station, validation framework, rule-based system, process extraction
(thesis at Cal Poly's Digital Commons)
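One of the keywords above is "rule-based system"; here is a minimal sketch of that flavor of autonomous operations, where the first matching rule chooses the next uplink command during a pass instead of a student typing it. The state fields, rules, and commands are all hypothetical, not the thesis's implementations.

```python
RULES = [
    # (condition on telemetry/state, command to uplink)
    (lambda s: s["beacon_received"] and not s["time_synced"], "SYNC_CLOCK"),
    (lambda s: s["battery_v"] < 7.0,                          "ENTER_SAFE_MODE"),
    (lambda s: s["payload_data_kb"] > 0,                      "DOWNLINK_PAYLOAD"),
    (lambda s: True,                                          "REQUEST_BEACON"),
]

def next_command(state):
    """First-match rule evaluation: one command per step of the pass."""
    for condition, command in RULES:
        if condition(state):
            return command

state = {"beacon_received": True, "time_synced": False,
         "battery_v": 7.8, "payload_data_kb": 512}
while True:
    cmd = next_command(state)
    print("uplink:", cmd)
    # Crude state updates standing in for the satellite's responses:
    if cmd == "SYNC_CLOCK":
        state["time_synced"] = True
    elif cmd == "DOWNLINK_PAYLOAD":
        state["payload_data_kb"] = 0
    elif cmd == "REQUEST_BEACON":
        break
```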
W10: Kevin McCullough, "Exploring the Relationship of the Closeness of a Genetic Algorithm's Chromosome Encoding to its Problem Space"
For historical reasons, implementers of genetic algorithms often use a haploid binary primitive type for chromosome encoding. I will demonstrate that one can reduce development effort and achieve higher fitness by designing a genetic algorithm with an encoding scheme that closely matches the problem space. I will show that implicit parallelism does not result in binary-encoded chromosomes obtaining higher fitness scores than other encodings. I will also show that Hamming distances should be understood as part of the relationship between the closeness of an encoding and the problem, instead of assuming they should always be held constant. Closeness to the problem includes leveraging structures that are intended to model a specific aspect of the environment. I will show that diploid chromosomes leverage abeyance to benefit their adaptability in dynamic environments. Finally, I will show that if not all parts of the GA are close to the problem, the benefits of the parts that are can be negated by the parts that are not.
(thesis at Cal Poly's Digital Commons)

F09: Cory White, "A Neural Network Approach to Border Gateway Protocol Peer Failure Detection and Prediction"
The size and speed of computer networks continue to expand at a rapid pace, as do the corresponding errors, failures, and faults inherent in such extensive networks. This thesis introduces a novel approach that interfaces Border Gateway Protocol (BGP) computer networks with neural networks to learn the precursor connectivity patterns that emerge prior to a node failure. Details of the design and construction of a framework that utilizes neural networks to learn and monitor BGP connection states, as a means of detecting and predicting BGP peer node failure, are presented.
Moreover, this framework is used to monitor a BGP network, and a suite of tests is conducted to establish that this neural network approach is a viable strategy for predicting BGP peer node failure. In all performed experiments, both of the proposed neural network architectures succeed in memorizing and utilizing the network connectivity patterns. Lastly, a discussion of the framework's generic design acknowledges how other types of networks and alternate machine learning techniques can be accommodated with relative ease.
(thesis at Cal Poly's Digital Commons)
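A sketch of the learning setup described in the White abstract: encode a sliding window of peer connection states as a feature vector and train a model to flag windows that precede a failure. The data here is synthetic, and the model (a hand-rolled logistic regression) is far simpler than the thesis's neural networks; it only illustrates the framing.

```python
import math, random

STATES = {"established": 0.0, "flapping": 0.5, "down": 1.0}
WINDOW = 5

def make_example(failing):
    """Synthetic window: failing peers drift toward flapping first."""
    if failing:
        seq = ["established"] * 2 + ["flapping"] * 3
    else:
        seq = [random.choice(["established", "established", "flapping"])
               for _ in range(WINDOW)]
    return [STATES[s] for s in seq], 1.0 if failing else 0.0

def sigmoid(z):
    z = max(-30.0, min(30.0, z))    # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=200, lr=0.5):
    """Stochastic gradient descent on logistic loss."""
    w, b = [0.0] * WINDOW, 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for i in range(WINDOW):
                w[i] -= lr * (p - y) * x[i]
            b -= lr * (p - y)
    return w, b

data = [make_example(random.random() < 0.5) for _ in range(500)]
w, b = train(data)
x, _ = make_example(failing=True)
print("failure probability:",
      sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b))
```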
F09: Matt Derry, "Evaluating Head Gestures for Panning 2-D Spatial Information"
New, often free, spatial information applications such as mapping tools, topological imaging, and geographic information systems are becoming increasingly available to the average computer user. These systems, once available only to government, scholastic, and corporate institutions with highly skilled operators, are driving a need for new and innovative ways for the average user to navigate and control spatial information intuitively, accurately, and efficiently. Gestures provide a method of control that is well suited to navigating the large datasets often associated with spatial information applications. Several different types of gestures and different applications that navigate spatial data are examined. This leads to the introduction of a system that uses a visual head-tracking scheme for controlling the most common navigation action in the most common type of spatial information application: panning a 2-D map. The proposed head-tracking scheme uses head pointing to control the direction of panning. The head-tracking control is evaluated against the traditional control methods of the mouse and touchpad, showing a significant performance increase over the touchpad and comparable performance to the mouse, despite limited practice with head tracking.
(thesis at Cal Poly's Digital Commons)

S09: Daniel Miller, "A System for Natural Language Unmarked Clausal Transformations in Text-to-Text Applications"
A system is proposed which separates clauses from complex sentences into simpler stand-alone sentences. This is useful as an initial step on raw text, where the resulting processed text may be fed into text-to-text applications such as automatic summarization, question answering, and machine translation, for which complex sentences are difficult to process. Grammatical natural language transformations provide a possible method to simplify complex sentences and enhance the results of text-to-text applications. Using shallow parsing, this system improves on the performance of existing systems at identifying and separating marked and unmarked embedded clauses in complex sentence structures, resulting in syntactically simplified source text for further processing.
(thesis at Cal Poly's Digital Commons)

S09: Brett Bojduj, "Extraction of Causal-Association Networks from Unstructured Text Data"
Causality is an expression of the interactions between variables in a system. Humans often explicitly express causal relations through natural language, so extracting these relations can provide insight into how a system functions. This thesis presents a system that uses a grammar parser to extract causes and effects from unstructured text through a simple, pre-defined grammar pattern. By filtering out non-causal sentences before the extraction process begins, the presented methodology is able to achieve a precision of 85.91% and a recall of 73.99%.
The polarity of the extracted relations is then classified using a Fisher classifier. The result is a set of directed relations of causes and effects, with polarity classified as either increasing or decreasing. These relations can then be used to create networks of causes and effects. This "Causal-Association Network" (CAN) can be used to aid decision-making in complex domains, such as economics or medicine, that rely upon dynamic interactions between many variables.
(thesis at Cal Poly's Digital Commons)
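A toy sketch of the pipeline the Bojduj abstract describes: pre-filter non-causal sentences, match a cause/effect pattern, and assign a polarity. A regular expression stands in for the grammar parser, and a cue-word table stands in for the Fisher classifier; everything here is illustrative.

```python
import re

CUES = {
    "increases": "increasing", "raises": "increasing", "causes": "increasing",
    "decreases": "decreasing", "reduces": "decreasing", "lowers": "decreasing",
}
PATTERN = re.compile(
    r"(?P<cause>[\w ]+?)\s+(?P<cue>" + "|".join(CUES) + r")\s+(?P<effect>[\w ]+)")

def extract(sentences):
    relations = []
    for s in sentences:
        if not any(cue in s for cue in CUES):   # pre-filter non-causal text
            continue
        m = PATTERN.search(s.lower())
        if m:
            relations.append((m.group("cause").strip(),
                              CUES[m.group("cue")],   # polarity from cue word
                              m.group("effect").strip()))
    return relations

text = ["Inflation reduces purchasing power.",
        "Exercise increases bone density.",
        "The meeting was rescheduled."]
print(extract(text))
# [('inflation', 'decreasing', 'purchasing power'),
#  ('exercise', 'increasing', 'bone density')]
```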
W09: Charles C.H. Wei, "Bottom-Up Ontology Creation with a Direct Instance Input Interface"
In general, an ontology is created by following a top-down, so-called genus-species approach, where the species are differentiated from the genus and from each other by means of differentiae [8]. The superconcept is the genus, every subconcept is a species, and the differentiae correspond to roles. To complete the construction of an ontology, a user organizes data into a proper structure, accompanied by the instances in that domain. This mirrors concept learning in a school, for example: students first learn general knowledge and apply it to exercises and homework for practice. Once they are more familiar with the knowledge, they can use what they have learned to solve problems in their daily lives. This deductive learning approach is based on fundamental knowledge that a student has already acquired.
By contrast, a more intuitive way of learning is the bottom-up approach, which is based on atomism. It is also a frequently used way for humans to acquire knowledge. By sensing the world through vision, hearing, and touch, people learn information about actual objects, i.e., instances, in the world. After an instance has been collected, a relationship between it and existing knowledge is created, and an ontology is formed automatically.
The primary goal of this thesis is to build a better instance input interface for the ontology development tool Protégé, to simplify the procedure of ontology construction. The second goal is to show the feasibility of a bottom-up approach to building an ontology. Without setting up the organization of classes and properties (slots) first, a user simply inputs all the information from an instance, and the program forms an ontology automatically. That is, after an instance has been entered, the system finds a proper location inside the ontology to store it.
(thesis at Cal Poly's Digital Commons)
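A toy stand-alone sketch of the bottom-up idea: classes are induced from the property signatures of instances as they arrive, instead of being declared first, and subset relations between signatures suggest the class hierarchy. This is purely illustrative, not the Protégé plug-in itself.

```python
class Ontology:
    def __init__(self):
        self.classes = {}   # frozenset of property names -> class name

    def add_instance(self, name, properties):
        """File an instance; create a class the first time its
        property signature is seen."""
        signature = frozenset(properties)
        if signature not in self.classes:
            self.classes[signature] = f"Class_{len(self.classes) + 1}"
        cls = self.classes[signature]
        print(f"{name} -> {cls} {sorted(signature)}")
        return cls

    def superclasses(self, properties):
        """A class generalizes a signature if its own signature is a
        strict subset -- a simple bottom-up hierarchy criterion."""
        return [c for sig, c in self.classes.items()
                if sig < frozenset(properties)]

onto = Ontology()
onto.add_instance("tweety", {"wings": 2, "sings": True})
onto.add_instance("robin",  {"wings": 2, "sings": True})   # reuses Class_1
onto.add_instance("rex",    {"legs": 4})                   # creates Class_2
print(onto.superclasses({"wings", "sings", "beak_color"})) # -> ['Class_1']
```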
F08: Ngan T. Phan, "An Exploration of Tablet-Based Presentation Systems and Learning Styles"
Learning in the classroom can occur as a combination of students' personal effort to study class material, the instructor's attempt to present class material, and the interaction that takes place between instructor and students. In a more traditional setting, instructors can lecture by writing notes on a chalkboard or a whiteboard. If instructors want to display prepared lecture slides, they can use an overhead projector and write additional notes on the transparencies. With many technological advances, various researchers advocate integrating technology into learning. With the advent of tablet PCs, researchers recognize the potential usefulness of their functions within the classroom. Not only can electronic materials be presented via the computer, but tablet PCs also allow instructors to handwrite notes on top of the slides, mimicking manual devices such as the overhead projector.
Even though the use of tablet PCs can be advantageous to instructors and students, no research found so far has focused on how well tablet PC features address the varying learning styles of students (e.g., visually oriented vs. text-based learning). According to Felder, "understanding learning style differences is thus an important step in designing balanced instruction that is effective for all students" [22]. Hence, this research explores the correlation between tablet-based presentation systems and learning styles by taking two approaches: performing a pilot study and distributing a survey. The results from these approaches are evaluated to yield statistically significant conclusions on how well tablet-based presentation systems accommodate the different learning needs of students.
(thesis at Cal Poly's Digital Commons)