CPE/CSC 480-F09 Artificial Intelligence Assignment 1: Search Methods
In this assignment, you will explore various aspects of search methods. Your task is to program an agent that systematically investigates its environment by using some of the search methods discussed in class.
The Agent Environment
The emphasis in this assignment lies on the search methods, and not on the programming of the environment. You will use the Bot Environment that was also used in previous lab exercises. The "Course Materials" section of Blackboard contains a slightly revised version of the environment for this assignment.
Search Methods
In this assignment, you have to implement the following search methods:
  • uniform-cost (lowest-cost-first) search
  • greedy best-first search
  • A* search
The goal of this lab is to implement an A* search algorithm for the Wumpus agent. Since the A* algorithm is a combination of two more basic ones, uniform-cost and best-first search, you will implement these methods first, and then combine them into the A* method. For all three methods, the agent uses an off-line search: First, it constructs a path to the goal, visualized through the "fairy". Second, the agent carries out this path. Once the agent reaches the goal, print out the search cost and path cost to the logging window.
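One way to see the relationship between the three methods is that they all run the same fringe-based search and differ only in the evaluation function f(n) used to order the fringe. A minimal sketch (the method names here are illustrative, not part of the Bot Environment API):

```java
// Sketch: the three search methods differ only in how they compute f(n)
// from g(n), the path cost so far, and h(n), the heuristic estimate.
class Evaluator {
    // Uniform-cost search ignores the heuristic: f(n) = g(n)
    static double uniformCost(double g, double h) { return g; }

    // Greedy best-first search ignores the path cost so far: f(n) = h(n)
    static double greedyBestFirst(double g, double h) { return h; }

    // A* combines both: f(n) = g(n) + h(n)
    static double aStar(double g, double h) { return g + h; }
}
```

Implementing uniform-cost and greedy best-first search first, then swapping in the combined evaluator, is exactly the progression this assignment asks for.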
The task of the agent is as follows: It starts from its initial position (default is the tile in the upper left corner) and has to reach a certain tile identified as the goal position. On its path to the goal, it may have to clean tiles, and the amount of dirt on a tile affects the path cost of the agent. For some of the search methods, the agent needs additional information to estimate the cost from the current node to the goal node. In this case, the agent can ask the field manager for a hint. The manager may provide some hints, such as:
  • An estimate of the distance between the current and the goal node based on the geometrical properties of the field.
  • The compass direction of the goal relative to the current position (i.e. North, South, East, West).
These hints, however, incur additional costs for the agent. Usually the costs are higher for more valuable hints. Especially for environments with a known geometry (such as a grid), the agent can often also calculate such hints on its own.
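As an example of a hint the agent can calculate on its own in a grid environment: if moves are restricted to the four compass directions, the Manhattan distance to a known goal position never overestimates the remaining cost, so it makes an admissible heuristic. A sketch (the coordinate scheme is an assumption, not the Bot Environment's API):

```java
// Sketch: Manhattan distance as a self-computed heuristic on a grid
// where the agent can only move North, South, East, or West.
class GridHeuristic {
    static int manhattan(int x, int y, int goalX, int goalY) {
        // Sum of horizontal and vertical displacement to the goal.
        return Math.abs(goalX - x) + Math.abs(goalY - y);
    }
}
```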
Off-Line vs. On-Line Search
In an artificial environment, it is frequently easy to perform an off-line search by first examining the search space, determining the best (or a "good enough") solution, and then executing the respective actions.
In a real-world setting, the agent might have to perform an on-line search, interleaving the computations to expand the search space with the execution of the corresponding steps in the real world.
In this assignment, you implement an off-line search. After the agent has identified the goal, it should determine the best possible solution among the ones it explored, perform the actions to achieve the goal, and print the solution to the display and the log file.
This approach has an important consequence for the agent. When an off-line agent encounters a dead end during its investigation (i.e. there are no uninspected reachable squares from the current square), it can continue its investigation with another square on the fringe easily by "mentally" jumping to that square. An on-line agent, however, has to back up and retrace its steps in order to reach the next square on the fringe, thus increasing the path cost considerably. When you calculate the cost, make sure to differentiate between the search cost and the path cost.
Implementation Hints
The agent must keep track of the tiles it has already examined, and for some search methods, it must be able to backtrack (retrace its steps). The agent should build a data structure that tracks the path it has taken so far, allowing it to retrace its steps. The agent can also construct its own internal map, which may make navigation considerably easier. Please note that the agent does not know all relevant aspects of the environment in advance, such as the dimension of the playground, or the placement of obstacles.
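One common way to build such a path-tracking data structure is to give each search node a pointer to its parent; the path is then recovered by walking the parent links back from the goal and reversing. A sketch, assuming a simple node class of our own rather than the environment's Node:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: a search-tree node with a parent pointer for path reconstruction.
// The class and field names are illustrative, not the Bot Environment's.
class PathNode {
    final int x, y;
    final PathNode parent;  // null for the start node
    final double g;         // path cost from the start to this node

    PathNode(int x, int y, PathNode parent, double stepCost) {
        this.x = x;
        this.y = y;
        this.parent = parent;
        this.g = (parent == null) ? 0.0 : parent.g + stepCost;
    }

    // Walk the parent links from the goal back to the start, then reverse,
    // yielding the start-to-goal path the agent should execute.
    static List<PathNode> reconstruct(PathNode goal) {
        List<PathNode> path = new ArrayList<>();
        for (PathNode n = goal; n != null; n = n.parent) {
            path.add(n);
        }
        Collections.reverse(path);
        return path;
    }
}
```

The same structure doubles as a record of accumulated path cost g(n), which you will need for uniform-cost and A* search.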
In this assignment, you will use the BotEnvironment (practically the same as in Lab 3); see also the implementation hints in the Lab 3 write-up. A key difference for this assignment is that now you will have to sort your fringe according to your evaluator function.
To sort your fringe, you can look at all nodes in the fringe using:
LinkedList fringe = getFringe();
Now, you can investigate each node on the fringe. For example, you can get the node at index 4 (i.e. the fifth node on the fringe):
Node nextNode = ((Node)fringe.get(4));
An easy way to store the evaluator function values "f" for these nodes is to use arrays with fixed dimensions, like:

double[][] f = new double[100][100];
double[][] g = new double[100][100];
double[][] h = new double[100][100];
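To pick the next node to expand, you can sort the fringe in ascending order of f and take the front element. A sketch using a comparator (FringeNode here is a stand-in for the lab's Node class; with the real fringe you would compare the stored f values for each node's position instead):

```java
import java.util.Comparator;
import java.util.LinkedList;

// Sketch: ordering a fringe so the node with the lowest evaluator value f
// is expanded first. FringeNode is an illustrative stand-in for Node.
class FringeSort {
    static class FringeNode {
        final double f;
        FringeNode(double f) { this.f = f; }
    }

    // Sort in place, smallest f first.
    static void sortByF(LinkedList<FringeNode> fringe) {
        fringe.sort(Comparator.comparingDouble(n -> n.f));
    }
}
```

After sorting, `fringe.get(0)` is the node your search should expand next.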

Be sure to make your heuristic function admissible. To help, the goal location can be hard-coded at (10, 10) in the constructor of your agent.
Recall that your searchStep() function should implement only the search part of finding a path to the goal. The movementStep() function should carry out this path.
You can test out your path planning and path following using the map CPE480-Lab4-AStarSearch.sbm. Note that the grader will use other maps as well to determine if your agent works.
Your program must provide the following information:
  • the configuration of the field (size and name of the map)
  • search method used
  • number of search steps performed
  • current node in the search tree (position of the "fairy")
  • path cost to the current node g(n)
  • estimated cost from the current node to the goal (heuristic) h(n), if applicable
  • estimated total cost f(n) = g(n) + h(n)
This information should be updated as it changes, and written into a log file for later analysis.
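One simple way to produce such a log is to format one line per search step and append it to a text file. A sketch, assuming an illustrative file name and field layout of our own choosing:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Locale;

// Sketch: writing one line of search statistics per step to a log file.
// The file name "search.log" and the field layout are illustrative choices.
class SearchLogger {
    // Build a single log line from the values the assignment asks for.
    static String formatStep(String map, String method, int step,
                             int x, int y, double g, double h) {
        return String.format(Locale.ROOT,
                "map=%s method=%s step=%d node=(%d,%d) g=%.1f h=%.1f f=%.1f",
                map, method, step, x, y, g, h, g + h);
    }

    // Append the line to the log file (opened in append mode).
    static void log(String line) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter("search.log", true))) {
            out.println(line);
        }
    }
}
```

Logging a line every time one of these values changes gives you the per-step record you need for the later analysis.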
Administrative Aspects
Assignment Submission
This assignment must be submitted electronically via Blackboard through the "Lab and Assignment Submission" menu entry by the deadline specified above. Please submit the following material, preferably in an archive (.zip, .gz, .rar or .tar):
  • [Update: We’re skipping this part of the assignment. observations about the behavior of your agents via a Web form (link will be provided later)]
  • a plain text file (not a MS Word document) named README.txt with your name, a brief explanation of your program, and instructions for running your program
  • the Java source code for your agents
  • screen shots for the "Evaluation Map" and the "Search Map"; the screen shots should show both the fairy and the agent view, and the final score; use names that indicate the respective map (e.g. fkurfess-Challenge-results.png)
  • log files that capture the performance of your agent on the maps provided; please save them as plain text files, and use names that indicate the respective map (e.g. fkurfess-Challenge-results.txt)
  • the Java executable (class file) for your agents
  • optionally additional files that may be relevant for the grading (e.g. make files, log files, or special environments you used)
Naming Conventions
Please use your Cal Poly login as the first part of your agent's name, and an indication of the search method as the second. For example, my agents would be fkurfess-greedy, fkurfess-uniform, and fkurfess-astar. This will allow us to keep all agents in the Agents directory, without having to edit your files or move files back and forth when we do the grading.
Collaboration
This is an individual assignment. If you are reusing code fragments from the related lab exercise (the breadth/depth-first agent done in a team), please indicate this in the README file. It is fine with me to discuss general aspects of this lab with others (e.g. general aspects of the different search strategies).
Questions about the Assignment
If you have general questions or comments concerning the programming aspects of the homework, post them on the Blackboard Discussion Forum for the assignment. The grader and I will check that forum on a regular basis, and try to answer your questions. If you know the answer to a support or clarification question posted by somebody else, feel free to answer it; this will count as extra participation credit.