Collected Feedback on Mid-quarter Project Displays

The feedback collected earlier today during the project displays is now available. I’ve added a PDF attachment to the Wiki page on “Features”, or to the entry page if a team uses only a single page. It includes the comments and the scores. These scores are not very important, however, and may simply reflect the partial completion of the implementation, which is not unexpected at this time of the quarter. I’m not grading the displays separately, although some of this information may make its way into the next couple of grades for project parts (“Features, Requirements and Evaluation Criteria”; “Design and Implementation”).
Overall I’m fairly happy with the status of the projects, although the Wiki documentation for some of them is still a bit weak ;-)

480 Feedback on Mid-quarter Project Displays

The Web form for the feedback is at https://docs.google.com/spreadsheet/viewform?formkey=dHNXbENqUnVydnlHbVgyMlUxTVdxWEE6MA#gid=0 .

Project Documentation

I’m a bit behind with the evaluation of the project documentation, but I’ve just completed my initial scoring of the Project Difficulty, Relevance, and Project Description and Context aspects. The first two are fine: all projects got 9 or 10 out of 10 (I may revise this if the scope or focus of a project changes significantly). The Project Description and Context scores, however, are far weaker, ranging from 0 (nothing there) to 8 out of 10. Since I haven’t discussed this in class in detail, the teams will have an opportunity to revise their Wikis and request a re-grade.
Here are the main parts that I’ll use for grading the projects, although for grading purposes I may split them up into smaller pieces:
  • Project Description and Context
  • Features, Requirements and Evaluation Criteria
  • Design and Implementation
  • Experiments, Evaluation, and Conclusions
  • Team Contributions and Mutual Team Member Feedback
Each of the above parts contributes 20% to the overall project score. Typically the first four will be the same for all team members, unless I am aware of a major disparity in the respective efforts and contributions. The last one will be done on an individual basis, and incorporates feedback from the team mates.
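As a rough illustration of how these weights combine, here is a minimal sketch in Python; the part scores and the function name are hypothetical, and the actual grade calculation may of course differ:

    # Hypothetical sketch of the 20%-per-part weighting described above;
    # the example scores are made up, not actual grades.
    PART_WEIGHT = 0.20  # each of the five parts contributes 20%

    def overall_score(team_part_scores, individual_score):
        """Combine the four team-wide part scores (each out of 10) with the
        individual Team Contributions score into an overall percentage."""
        parts = team_part_scores + [individual_score]
        assert len(parts) == 5, "four team parts plus one individual part"
        return sum(PART_WEIGHT * (s / 10) for s in parts) * 100

    # Example: team part scores of 8, 9, 7, and 9, individual score of 10:
    print(overall_score([8, 9, 7, 9], 10))  # 86.0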
For your reference, here are a few links to good examples of project documents from previous classes. The expectations are pretty much the same for the AI classes (480, 481, 580), and not that much different for the User-Centered Design and Human-Computer Interaction classes (484, 486).

Project Displays

For the project team displays on Thursday, we’ll use the lecture and the lab times, unless there’s a backlog of AI Nuggets that we can catch up on. If that’s the case, we’ll do the AI Nuggets first, and then switch over to the project displays.
Your project displays should include the following (see also the “Project Documentation” entry on this “Class News” page):
  • Project Description and Context
  • Features, Requirements and Evaluation Criteria
  • Some evidence of system design and implementation, ideally in the form of a prototype
This material should be reasonably self-contained, so that somebody who happens to walk by can understand what your project is about. Usually a team member should be available to answer questions and give further explanations. If you want to, you can also do mini-presentations or demos for a small group of visitors.
You should set up your display in such a way that a small group of people standing around your display area is able to read your material. The most common way to achieve this is through poster boards, but you can also use large computer displays for that.
I’m also collecting feedback on other teams. You can either do this individually or coordinate it within your team. Either way, make sure that your team has at least one person giving feedback for all the other teams. It’s also advisable to keep track of which teams have already visited your display so you don’t take it down before everybody (including myself) has had a chance to look at it.

Lab 4 Grading

I’ve just completed the grading of the L4 submissions received so far. This is one of my favorite labs to grade. My impression is that many students really get into it and use the opportunity to apply AI-relevant issues to those entertainment characters. This makes it easy for me to give almost everybody a good grade ;-)
Of course I’m also learning a lot about movies, TV series, and games that I wasn’t aware of. The usual suspects show up a lot (HAL, Sonny, the Terminator, the Davids from the movies “Artificial Intelligence” and “Prometheus”, C-3PO and R2-D2, Bender, GLaDOS, the Matrix), but even for those, there’s usually something interesting in the answers.

Right now, I’m essentially the only one who gets to enjoy all of the interesting write-ups. I’ve thought about turning this into a more interactive and collaborative exercise, but didn’t get far enough to actually put it into practice. PolyLearn can probably do a much better job here than Blackboard, but I think it’s still a bit limited. I tried the Semantic MediaWiki that I’m using for the AI Nuggets, but it quickly became too involved for me to do it all by myself.
Here’s what I tried to set up: a repository of such characters, with basic descriptions, accompanied by potential criteria for evaluation purposes. Each student would have to submit something that involves at least the following:
  • one or more characters
  • mini-discussions for three or more criteria and how they relate to the selected character(s)
  • two or more responses to previous discussions
In principle, this shouldn’t be too difficult with a Semantic MediaWiki, but in practice it quickly becomes a bit tricky. The final stumbling block, where I gave up, was tracking the student responses so that I could grade them.
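To give a sense of what that tracking boils down to, here is a minimal sketch in Python rather than wiki markup; the class names, fields, and sample data are all my own assumptions, not part of the Wiki setup I actually tried:

    # Hypothetical sketch: collect character discussions and responses per
    # student so submissions can be checked and graded. All names are made up.
    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class Submission:
        """One student's contribution to the character repository."""
        student: str
        characters: list                               # one or more characters
        discussions: dict                              # criterion -> mini-discussion
        responses: list = field(default_factory=list)  # replies to other students

    def meets_requirements(sub):
        """Check the minimal requirements from the list above."""
        return (len(sub.characters) >= 1
                and len(sub.discussions) >= 3
                and len(sub.responses) >= 2)

    # Group everything by student for grading:
    submissions = [Submission("alice", ["GLaDOS"],
                              {"autonomy": "...", "learning": "...", "embodiment": "..."},
                              responses=["re: bob on HAL", "re: carol on Bender"])]
    by_student = defaultdict(list)
    for sub in submissions:
        by_student[sub.student].append(sub)
    for student, subs in by_student.items():
        print(student, all(meets_requirements(s) for s in subs))  # alice True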
If anybody’s looking for an independent study or senior project idea, this might be an interesting one. Since there is some overlap with the chat bot lab, it might also be possible to combine the two, or at least use the same infrastructure. Another option is to use an existing system like Quora or Reddit, and make this available on the Web. This, of course, might be a nightmare from a grading perspective …  