Lab Exercise 2: Agents
The goal of this lab exercise is to familiarize you with a programming environment for agents. The agents "live" in the "Wumpus World", a simple world that consists of interconnected squares. Some squares are walls and can't be entered by the agent; others are deep pits the agent can fall into. Squares can also contain objects (such as a pile of gold) or other agents (such as the bad "Wumpus" that likes to eat agents). Squares may also have properties that indicate the status of adjacent cells. For example, on the squares next to the Wumpus a smell can be perceived, and there is a breeze next to pits. The agent can perform simple actions like moving to the next square, turning, or picking up items.
Agent Environment
You can download the agent environment from the PolyLearn "Course Materials" section. It is a Java program, and comes with a README file and JavaDocs. You will use it for two or three lab exercises, and also as a programming environment. The environment was developed by Matt Colon, who was also the grader for this class in previous years. While we hope that it will allow you to concentrate on the essential aspects of agents, it is still a prototype and not quite a "production environment." Please let us know via the PolyLearn forum if you have problems with it. The environment comes with several sample maps; see the "testmaps.zip" file under "A2 Test Maps". For this exercise, you should mainly use "MediumMap.sbm"; it may be interesting, however, to also try the other ones.
Tasks
The activities in this lab consist of several smaller programming tasks where you implement different types of simple agents. For each agent type, you also need to analyze its behavior and capabilities:
  • Is this agent capable of reaching the goal square from the start square?
  • Given enough time, is this agent guaranteed to reach the goal?
  • What are limiting circumstances and constraints?
  • How well does your agent perform?
  • What is the level of intelligence of this agent, compared to the other ones in this lab?
You’ll answer these questions in a Web form, and you’ll have to submit the Java code for your agent through PolyLearn.

Task 1: The Dumb Agent
In this part of the lab, you will simply compile and run an existing agent within an environment. First, download the CPE480-Lab2-BaseCode0.zip file from the Course Materials section in PolyLearn. Additional information is under BotEnvironment JavaDocs, BotEnvironment Tutorial, and BotEnvironment FAQ. Extract the .zip file into a directory Wumpus-Lab-1.
In the directory Wumpus-Lab-1/Agents is an agent called
MyDumbAgent.java. Open the file in an editor to look at the code. The most important thing to note is the step() function:
public void step()
{
    // Get Percepts
    Node n = getCurrentNode();
    int x = n.getX();
    int y = n.getY();

    // Carry out actions
    turnRight();
    moveForward();
}
This function is called every time a "step" in the Wumpus World environment is triggered. One can think of it as one iteration of the agent program: it receives a percept and determines the next action.
In this agent program, the agent will find out its current (x,y) location in the environment, then turn 90 degrees to the right and move forward.
To compile this agent in Windows, run the batch file
compileDumbAgent.bat in the Wumpus-Lab-1 directory. This uses javac to compile the agent defined in MyDumbAgent.java, with the classpath set to include WumpusEnvironment.jar. For Linux (and other Unix-based systems like Mac OS X), use the terminal command line:
javac -cp WumpusEnvironment.jar:. ./Agents/MyDumbAgent.java
Now, to run the environment in Windows, run the batch file "run.bat" in the
Wumpus-Lab-1 directory. For Linux, use the terminal command line:
java -classpath .:WumpusEnvironment.jar BotEnvironment.WumpusEnvironment

This will bring up the Wumpus environment. Under the File drop-down menu select "New Wumpus Environment Session…". Load the agent you just compiled.
From the Map drop-down menu, select "Open Map". For this task, select the map "CPE480-Lab2-Dumb.sbm". Two windows will pop up, one behind the other, showing the environment. The agent can be seen in the top left corner. By clicking the STEP button, you can see the actions of the agent as determined by the
step() function of its program. Note that AUTO STEP will repeatedly trigger the agent's step() function, with the delay between steps set using the slider at the top of the dialogue window. Get familiar with how this software runs by stepping the agent manually several times, and by using the AUTO STEP option.
If you have problems with the Auto-Step behavior, try
java -Xint -classpath .;WumpusEnvironment.jar BotEnvironment.WumpusEnvironment
on Windows, or
java -Xint -classpath .:WumpusEnvironment.jar BotEnvironment.WumpusEnvironment
on Linux/Mac (colon instead of semicolon).

Auto-Step seems to work with this flag.
-Xint disables just-in-time compilation to native code, so the JVM runs in interpreted-only mode; refer to the man page for java if you want to know more about what this flag does.
Task 2: The Table Agent
The table agent uses a simple look-up table to determine the agent’s actions. In this task, you will code up a very simple agent program that uses the table to guide the agent to the goal location. Call the agent MyTableAgent_NAME and save it in a file MyTableAgent_NAME.java, where NAME is your Cal Poly user id.
The table itself will be an array of the directions the agent should face in order to proceed towards the goal. For example, create an agent
MyTableAgent.java and declare an integer array directionTable:
int[][] directionTable = new int[5][5];
For example, the table should direct the agent east if it resides in the first grid cell:
directionTable[1][1] = EAST;
Fill in the rest of the table so that the agent will make its way to the goal in the environment
CPE480-Lab2-Table.sbm. Use this table with the functions turnTo(int direction) and moveForward() to get your agent to the goal. Confirm the agent program works.
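For reference, here is a minimal sketch of how such an agent's step() might use the table; it assumes the table is indexed as directionTable[x][y] with the coordinates returned by getCurrentNode(), and that the direction constants (NORTH, EAST, SOUTH, WEST; see Task 3) are available to the agent.
public void step()
{
    // Get percepts: the agent's current grid position
    Node n = getCurrentNode();
    int x = n.getX();
    int y = n.getY();

    // Look up the direction stored for this cell, face that way, and move
    turnTo(directionTable[x][y]);
    moveForward();
}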
Task 3: The Reflex Agent
In this task, you will create a reflex agent that simply moves forward when it can, and turns in a random direction when it can't. It does this until it reaches the goal square. Call the agent MyReflexAgent_NAME and save it in a file MyReflexAgent_NAME.java, where NAME is your Cal Poly user id.
To accomplish this, you can make use of the integer returned by the function
moveForward(). For example, the line below will set the value of success to 0, 1, 2, 3, or 4, where these values correspond to SAFE, HIT_WALL, HURT, DIED, or GOLD_FOUND, respectively.
int success = moveForward();
For generating random numbers, you will probably need to create a random number generator in your agent:
Random generator = new Random();
To create random integers, you can then use:
int randDirection = generator.nextInt(maxNumDirections);
maxNumDirections is the number of directions the agent can turn to (here 4, so nextInt returns values 0 through 3). Don't forget to import java.util.*, where the Random class is defined.
You can also make use of the function
turnTo(int direction), which turns the agent to face one of the four directions NORTH, EAST, SOUTH, and WEST. These directions correspond to the integers 0, 1, 2, and 3, respectively.
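Putting these pieces together, one possible sketch of the reflex agent's step() is shown below; it assumes HIT_WALL is available as a named constant (if it is not, compare against the corresponding integer value 1 instead).
// Field in the agent class (requires import java.util.*)
Random generator = new Random();

public void step()
{
    // Try to move forward and check the result code
    int success = moveForward();

    // If the move was blocked by a wall, turn to a random direction
    if (success == HIT_WALL)
    {
        turnTo(generator.nextInt(4)); // 0..3 = NORTH, EAST, SOUTH, WEST
    }
}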
Once the agent program is coded, run it with AUTO STEP several times and observe how its behavior changes based on the randomness. Does it always find the goal?
Task 4: The Model-Based Agent
In this task, you will create a Model-Based agent that follows the contour of walls until it gets to the goal square. At the start, the agent should just move forward until it hits a wall. Once it hits a wall, it should turn left and start following the contour of the wall until it finds the goal (which in this case is always at the bottom corner). Call the agent MyModelBasedAgent_NAME and save it in a file MyModelBasedAgent_NAME.java, where NAME is your Cal Poly user id.
To accomplish this, the agent will use some memory to store percepts obtained from its environment. In this case, the memory is simply whether or not the agent has hit a wall. This can be determined using the return value of the
moveForward() command, as in Task 3.
So, start by creating the memory bit. For example, you could use:
boolean hitWall;
Initialize this variable to
false.
Now, to code up the agent, you should make it move forward until it hits a wall. Then, the agent should follow the wall to its left using the
turnLeft() and turnRight() functions. This will take some thinking, but very little code is needed. Note that you can assume there are no "islands" of walls in the environment; that is, all walls are connected via other walls to the boundary of the environment.
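As a starting point, the overall structure of the model-based agent's step() might look like the skeleton below. It only shows how the memory bit interacts with moveForward(); the actual wall-following decisions (when to turnLeft() and when to turnRight()) are left for you to work out. As in Task 3, HIT_WALL is assumed to be a named constant.
// Memory: has the agent reached a wall yet?
boolean hitWall = false;

public void step()
{
    if (!hitWall)
    {
        // Phase 1: keep moving forward until a wall is hit
        if (moveForward() == HIT_WALL)
        {
            hitWall = true;
            turnLeft();
        }
    }
    else
    {
        // Phase 2: follow the contour of the wall using turnLeft(),
        // turnRight(), and the return value of moveForward()
    }
}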


Administrative Aspects
Assignment Submission
This assignment must be submitted electronically via PolyLearn through the "Lab 2 Submission" link under Chapter 2 “Intelligent Agents” by the deadline specified in the sidebar. While I may do the grading by having the students demo their work in the lab, the timestamp of the submission to PolyLearn counts for meeting the deadline.
You need to submit the following items:
  • observations about the behavior of your agents via a Web form
  • a plain text file (not an MS Word document) named README.txt with your name, a brief explanation of your program, and instructions for running your program
  • the Java source code for your agents
  • the Java executable (class file) for your agents
Collaboration
This is a "pair" assignment: you can either do it individually or together with one other student. Each pair must figure out and formulate their own answers, and write their own programs. It is fine with me if you discuss general aspects of this lab with others (e.g. how to determine the type and capability of an agent, or the limitations of a random agent). If you work as a pair, enter both names in the Web form.
Questions about the Assignment
If you have general questions or comments concerning the programming aspects of the homework, post them on the PolyLearn Discussion Forum for the assignment. The grader and I will check that forum on a regular basis, and try to answer your questions. If you know the answer to a support or clarification question posted by somebody else, feel free to answer it; this will count as extra participation credit.