FJK Home | CPE/CSC 480 | Syllabus | Schedule | Lecture Notes | Assignments | Labs | Project | Other Links |
Points: 25
Deadline: Thu, Oct. 21, midnight
In the second assignment, you will explore various aspects of search methods. Your task is to program an agent that systematically investigates its environment by using some of the search methods discussed in class.
The emphasis in this assignment lies on the search methods, and less on the programming of the environment. As part of his master's thesis, a graduate student has implemented an environment for the Wumpus World in Java. This environment will also be the basis for the search algorithms. It will be made available via Blackboard, accompanied by instructions on how to use it, and an example of a simple reflex agent that randomly moves around in the environment.
This "playground" is a rectangular array of squares in which the agent can move around. Each square has coordinates and may have properties such as empty, occupied, clean, or dirty.
The agent inspects a square for its properties by querying the environment. Based on that information and possibly knowledge from previous queries, the agent determines the next square to be investigated. The agent is capable of basic movements such as move-forward, turn-right, and turn-left. After the agent has made a decision about an action, it communicates the information to the environment, which is then updated accordingly.
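This query-decide-act cycle can be sketched as follows. All names here (Environment, senseAhead, execute, and so on) are illustrative assumptions, not the actual AIILE API:

```java
// Illustrative sketch of the sense-decide-act cycle; the interface and
// method names are assumptions for illustration, not the real AIILE classes.
import java.util.EnumSet;

public class AgentLoopSketch {

    enum Action { MOVE_FORWARD, TURN_RIGHT, TURN_LEFT }
    enum Property { EMPTY, OCCUPIED, CLEAN, DIRTY }

    /** Stand-in for the environment manager the agent queries. */
    interface Environment {
        EnumSet<Property> senseAhead();   // properties of the square in front
        void execute(Action a);           // report the chosen action
    }

    /** The decision step, separated out so it is easy to test and extend. */
    static Action decide(EnumSet<Property> percept) {
        // Trivial reflex policy: advance unless blocked, otherwise turn.
        return percept.contains(Property.OCCUPIED)
                ? Action.TURN_RIGHT : Action.MOVE_FORWARD;
    }

    /** One full cycle: query the environment, decide, report the action. */
    static Action step(Environment env) {
        Action a = decide(env.senseAhead());
        env.execute(a);
        return a;
    }
}
```

Keeping the decision logic in its own method makes it straightforward to swap in a more sophisticated policy later without touching the environment plumbing.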
The agent is not allowed to "have a peek" at the overall setup in order to optimize its activities; it must rely on the information it receives from the environment, and possibly on conjectures drawn from that information.
Design your system in such a way that it can be easily enhanced. For example, it should be straightforward to deal with extra properties for the field or individual squares, or to change its dimensions. The agent should be able to have different or additional sensors and actuators, perform different actions, use expanded internal representations, perform more complex internal calculations, etc. Ideally, you should be able to modify your agent so that it can be expanded for more complex tasks like retrieving the gold in the Wumpus world.
In a real-world setting, the agent would most likely perform an on-line search, interleaving the computation of the next search step with the execution of the corresponding actions. For this assignment, however, you should implement an off-line search, in which the agent explores the environment "mentally" by asking the environment manager about the properties of the squares it is inspecting, gradually building up a search tree and possibly a map of the environment. After it has identified the goal, it calculates a direct path from the initial position to the goal, and then executes the respective actions to move along that path.
This approach has an important consequence for the agent. When it encounters a dead end during its investigation (i.e., there are no uninspected reachable squares from the current square), it can easily continue its investigation with another square on the fringe by "mentally" jumping to that square. An on-line agent, in contrast, has to back up and retrace its steps in order to reach the next square on the fringe, which increases the path cost considerably. During the execution phase, when your agent actually performs the movement actions, it is restricted to the allowed movements and cannot "jump" to distant squares.
When you calculate the cost, make sure to differentiate between the search cost and the path cost. For the search cost, you need to consider all the possible paths that the agent mentally investigates. For the path cost, only the movements involved in following the chosen path need to be considered.
In class and in the textbook, the different search methods are described as variations of one generic search method, obtained by modifying certain parameters. For your implementation, it is up to you to determine the best strategy for programming the different methods. In the long run, the textbook's parameterized approach is probably better, but it may be more straightforward to implement each algorithm individually.
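One way to follow the textbook's strategy is a single search routine with a pluggable frontier discipline. The sketch below uses an assumed grid encoding ('X' for obstacles) and gets breadth-first or depth-first behavior from the same code by polling opposite ends of a deque; informed methods would substitute a priority queue keyed on a cost estimate. It also counts the nodes expanded, which contributes to the search cost, while the path cost corresponds to the moves along the returned path:

```java
// Generic search over a grid, parameterized by the frontier discipline,
// in the spirit of the textbook's generic algorithm. The grid encoding
// ('X' = obstacle) is an illustrative assumption.
import java.util.*;

public class GenericSearch {

    /** A square plus the path taken to reach it. */
    record Node(int x, int y, List<int[]> path) {}

    /**
     * fifo = true polls the oldest node first (breadth-first);
     * fifo = false polls the newest node first (depth-first).
     * Returns the path from start to goal, or null if unreachable.
     */
    static List<int[]> search(char[][] grid, int sx, int sy,
                              int gx, int gy, boolean fifo) {
        Deque<Node> frontier = new ArrayDeque<>();
        Set<Long> explored = new HashSet<>();
        frontier.add(new Node(sx, sy, List.of(new int[]{sx, sy})));
        int expanded = 0;   // search cost: nodes mentally investigated
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!frontier.isEmpty()) {
            Node n = fifo ? frontier.pollFirst() : frontier.pollLast();
            if (!explored.add(((long) n.x() << 32) | (n.y() & 0xffffffffL)))
                continue;   // skip squares already investigated
            expanded++;
            if (n.x() == gx && n.y() == gy)
                return n.path();   // path cost: path.size() - 1 moves
            for (int[] m : moves) {
                int nx = n.x() + m[0], ny = n.y() + m[1];
                if (ny >= 0 && ny < grid.length && nx >= 0
                        && nx < grid[ny].length && grid[ny][nx] != 'X') {
                    List<int[]> p = new ArrayList<>(n.path());
                    p.add(new int[]{nx, ny});
                    frontier.addLast(new Node(nx, ny, p));
                }
            }
        }
        return null;
    }
}
```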
The agent must keep track of the tiles it has already examined, and for some search methods it must be able to backtrack (retrace its steps). The agent should build a data structure that records the path taken so far, allowing it to retrace its steps. The agent can also construct its own internal map, which may make navigation considerably easier. Again, please note that the agent does not know all relevant aspects of the environment in advance, such as the dimensions of the playground or the placement of obstacles.
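One minimal sketch of such a structure, assuming squares are keyed by their coordinates, is a visited set plus a predecessor map; retracing then just follows the parent links back to the start:

```java
// Visited-set plus predecessor-map bookkeeping; the "x,y" string keys
// are an illustrative assumption, any coordinate encoding would do.
import java.util.*;

public class PathMemory {
    private final Map<String, String> parent = new HashMap<>();
    private final Set<String> visited = new HashSet<>();

    static String key(int x, int y) { return x + "," + y; }

    /** Record that the agent reached (x,y) coming from (px,py). */
    void record(int x, int y, int px, int py) {
        // Only the first visit sets the parent; the start has no parent.
        if (visited.add(key(x, y)) && !key(x, y).equals(key(px, py)))
            parent.put(key(x, y), key(px, py));
    }

    boolean seen(int x, int y) { return visited.contains(key(x, y)); }

    /** Retrace from (x,y) back to the start by following parent links. */
    List<String> retrace(int x, int y) {
        List<String> path = new ArrayList<>();
        for (String k = key(x, y); k != null; k = parent.get(k))
            path.add(k);
        Collections.reverse(path);   // start-to-goal order
        return path;
    }
}
```

The same predecessor map doubles as the basis for the execution phase: once the goal square is known, retrace() yields the path the agent then physically walks.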
As a test case, you must use the field below, with the following meaning of the symbols: the field has dimensions 20x20 and is delimited by dashes (-) and vertical bars (|). The starting position is marked by the letter S, and the goal is in the tile marked by the letter G. Obstacles in the field are indicated by X, and dirt is indicated by a number that reflects the amount of dirt (which is also the number of cleaning operations an agent has to perform). Instead of cleaning a dirty tile, the agent may go around the dirt pile; this, of course, adds to the cost.
The search costs for the agent are:
The path costs for the agent are:
For the evaluation of your program, we will also use other maps. Check the Blackboard Discussion forum for additional samples. The grid below is one example; to use it in AIILE, it has to be modified somewhat (hyphens instead of spaces, no borders, and an initial line with the dimensions). An AIILE-compatible version can be downloaded as the file 20x20-A2.env.
(20x20 test field: the start S is in the top-left corner; walls of X squares cross the upper half of the field; the goal G lies behind a wall of X squares, surrounded by a pyramid of dirt values 1 through 4, near the center of the lower half. See the downloadable file 20x20-A2.env for the exact layout.)
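A loader for a grid in the modified format described above (an initial line with the dimensions, hyphens for empty squares, no borders) might look like the following sketch; the exact 20x20-A2.env format may differ in details such as the order of the dimensions:

```java
// Sketch of a loader for the AIILE-style grid format; the assumed layout
// (width and height on the first line, '-' for empty squares) should be
// checked against the actual file before relying on it.
import java.util.*;

public class EnvFileSketch {

    /** Parses the dimension line plus one line per grid row. */
    static char[][] parse(List<String> lines) {
        Scanner dims = new Scanner(lines.get(0));
        int width = dims.nextInt(), height = dims.nextInt();
        char[][] grid = new char[height][width];
        for (int y = 0; y < height; y++) {
            String row = lines.get(y + 1);
            for (int x = 0; x < width; x++)
                // Pad short rows with empty squares rather than failing.
                grid[y][x] = x < row.length() ? row.charAt(x) : '-';
        }
        return grid;
    }
}
```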
Your program must provide the following information:
For this assignment, you can use an environment implemented by John Clayton as part of his work towards his Master's thesis. Follow this link to the AIILE environment to download it and for further information. I have also set up a discussion forum on the Blackboard "Discussion Board" in case you have questions about this environment. While it is in an early stage, it should save you some work by letting you concentrate on the actual search methods.
You can also implement your own playground for an extra credit of 20% of the total number of points for this assignment. If you decide to do this, you may inspect the sample playground, but not use any of its code.
The provided setup shows the environment only from an omniscient observer's perspective. The agent itself can build up its own internal map as it explores the environment; usually an appropriate data structure such as an array or a graph is used by the agent to represent information about the environment. Displaying the agent's view of the environment can be helpful for analyzing its behavior. Adding this display to the environment can earn you another 10% of extra credit, and you can combine it with the implementation of your own environment for a total of 30% extra credit.
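Such a display can be as simple as printing the agent's partial map with a placeholder for squares it has not yet seen. The sketch below assumes the agent stores known squares in a map keyed by "x,y" strings, which is an illustrative choice rather than anything prescribed by AIILE:

```java
// Renders the agent's partial view of the world: known squares show their
// recorded symbol, unexplored squares show '?'. The "x,y" keys are an
// illustrative assumption.
import java.util.*;

public class MapDisplay {

    static String render(Map<String, Character> known, int width, int height) {
        StringBuilder sb = new StringBuilder();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                Character c = known.get(x + "," + y);
                sb.append(c == null ? '?' : c);   // '?' marks unexplored
            }
            sb.append('\n');
        }
        return sb.toString();
    }
}
```

Printing this after every few steps makes it easy to see how the agent's knowledge grows and where a search method wastes effort.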
This assignment must be submitted electronically via the handin program on hornet. Please make sure that you have a functioning account on hornet. Follow this link for directions.
If you have general questions or comments concerning the programming aspects of the homework, post them on the Blackboard Discussion Forum for the assignment. The grader and I will check that forum on a regular basis and try to answer your questions.
If you don't have programming experience in Java, follow this link to some pointers about Java.
Franz Kurfess |