CSC 484 Assignment 1
Introduction to HCI Evaluation and Usability Analysis
(NOTE: The PDF version of this document is better formatted.)
DUE: Written report, on or before 11:00AM Monday 14 April, via handin on vogon
Oral presentation in lab, week of 14 April
POINTS POSSIBLE: 100
WEIGHT: 10% of total class grade
READING: Textbook Ch 1; Ch 12, Sec 12.1 - 12.3; Ch 15, Sec 15.1 - 15.2
The purpose of this assignment is to get you started with the analysis of good and bad interaction design, as well as with usability analysis. Working in teams of approximately five people, you will analyze two tools that are designed to perform the same type of tasks. The tools can be desktop and/or web-based applications.
The scope of the tools should be focused on a reasonably specific area of functionality, and should involve some form of data entry and/or editing. Good examples of the type of tools I have in mind are email clients, calendaring tools, or specialized editors of some form.
I am most interested in tools that you have pre-existing strong opinions about, and for which you consider yourself to be reasonably expert users. Also, each member of an evaluation team must have legal user rights to both of the tools the team analyzes. This can be a full-featured temporary evaluation copy, if available.
I would like to form teams around the tools themselves, based on class member interest, expertise, and tool access. We'll see how things go in lab, and if necessary I'll assign tools to specific teams.
There are a number of ways to approach the selection of two specific tools:
To form an organizational basis for this assignment, construct a taxonomic feature list, with the following general structure:
Features                | Tool 1 | Tool 2 | Is Core?
------------------------|--------|--------|---------
Feature Category 1      |        |        |
  Feature Category 1.1  |        |        |
    Feature 1.1.1       | 0-5    | 0-5    | yes/no
    . . .               |        |        |
    Feature 1.1.m       | 0-5    | 0-5    | yes/no
  Feature 1.2           | 0-5    | 0-5    | yes/no
  . . .                 |        |        |
Feature Category 2      |        |        |
  . . .                 |        |        |
Feature Category n      |        |        |
  . . .                 |        |        |
The rightmost entry for each feature is whether it is considered to be core functionality, for the purposes of the usability scenario. The determination of core functionality is based on the collective expertise and judgment of the analysis team. To limit the scope of the assignment, you should limit core features to twelve or fewer. This may leave out some features of larger tools, but the limit is there strictly to keep the assignment manageable.
The study of taxonomy is pursued most notably in the biological sciences, where the goal is to categorize plant and animal life into a logical hierarchy. For example, biologists traditionally start with the largest category, the kingdom, classically divided into plants and animals. From there, the biological taxonomy proceeds through phyla, classes, orders, etc., down to the smallest category of subspecies.
In a software tool analysis, taxonomy can be used to organize the functionality of the tools. For example, we can consider the function categories found typically in the top-level menubar to be primary candidates for the top-level categories of functionality. Each item in a menu is a subcategory, and items in submenus or dialogs will be subsubcategories. The same form of organization is used for features accessible from buttons, tabs, and other top-level forms of UI layout.
Since software tools are not as well organized as the animal kingdom, we will sometimes have to look beyond top-level menubars for feature categories. Indeed, some tools have no menubar at all. Overall, the focus of our categorization should be on functions that are accessible anywhere in the tool's user interface, whether through menus, buttons, typed commands, etc. We specifically do not care about features that are not directly accessible to the user.
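As a concrete illustration (not required by the assignment), a menubar-driven categorization like the one described above can be sketched as a nested structure. The tool and feature names below are purely hypothetical, for an imagined email client:

```python
# Hypothetical fragment of a feature taxonomy for an email client,
# with top-level menubar entries as the top-level categories.
# All names here are illustrative only.
taxonomy = {
    "Message": {                  # menubar category
        "Compose": {              # menu item -> subcategory
            "Attach File": None,  # dialog/submenu item -> leaf feature
            "Save Draft": None,
        },
        "Reply": None,
    },
    "Mailbox": {
        "New Folder": None,
        "Empty Trash": None,
    },
}

def leaf_features(node, prefix=""):
    """Flatten the taxonomy into dotted feature paths, one per leaf."""
    if node is None:
        return [prefix]
    return [path for name, child in node.items()
            for path in leaf_features(child, f"{prefix}.{name}" if prefix else name)]

for feature in leaf_features(taxonomy):
    print(feature)
```

Each leaf path corresponds to one row of the feature table, to which the two 0-5 ratings and the "Is Core?" judgment would then be attached.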
The purpose of a usability scenario is to capture work performed by typical end users of the tools. This should be work that is central to the tools' purpose.
There are two guides for developing the usability scenario for this assignment:
For those of you who have done scenarios or use cases in CSC 308, the scenario here can be written in more general terms, without indicating specific user input/output values. The book and the links on the book's website have examples of various usability scenarios, which you can use for guidance.
As noted above, you will use the ten usability heuristics listed in Section 15.2.1 (pages 686 - 687) of the book. To perform the analysis, you will conduct the scenario for each tool, evaluating the tool as the scenario proceeds. The results of the evaluation are the following two pieces of information for each heuristic, for each tool: a numeric rating, and a written justification for that rating.
Once the individual evaluations are performed, you will average all of the numeric ratings and collate all of the justifications, per the document structure described below.
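The averaging step is simple arithmetic, but as an illustrative sketch it might look like the following; the member names, heuristic names, and ratings here are all hypothetical:

```python
# Hypothetical example: averaging heuristic ratings for one tool.
# Heuristic names and ratings below are illustrative, not from the assignment.
heuristics = ["Visibility of system status",
              "Match between system and real world"]

# One ratings list per team member, ordered to match `heuristics`.
member_ratings = {
    "Member A": [4, 3],
    "Member B": [5, 2],
    "Member C": [3, 4],
}

def average_ratings(ratings_by_member):
    """Return the per-heuristic average across all team members."""
    columns = zip(*ratings_by_member.values())
    return [sum(col) / len(col) for col in columns]

for name, avg in zip(heuristics, average_ratings(member_ratings)):
    print(f"{name}: {avg:.2f}")
```

Each average corresponds to one entry in the rightmost "Average" column of the rating tables described in Section 4 below.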
Describe your overall user experience for the pair of tools you reviewed. You can do this as a side-by-side comparison, or as a separate discussion for each tool. The total length of your experience discussion should be 500 to 1000 words, which is one or two single-spaced pages.
As noted in Section 1.4 of the book, there is no general framework for such descriptions. However, please do address the following specific issues somewhere in your discussion:
The results of the team are to be submitted in a single PDF document, with the following structure:
Title Page, with
names of tools reviewed,
names of team members
1. Introduction
Describe the tool area, the tools reviewed, and how and why they were selected. Mention other tools considered, as appropriate. Summarize the overall findings of the evaluations.
2. Feature Taxonomy
3. The Usability Scenario
4. Heuristic Evaluation
Provide two spreadsheet-like tables, one for each evaluated tool. Each table contains the numeric evaluations from each team member. Table rows are the ten evaluation heuristics. Table columns are the team member names, plus a rightmost column titled "Average". Each cell contains a member's numeric rating for a heuristic, with the row's average rating in the Average column.
4.1 Evaluation Justifications from Team Member A
...
4.N Evaluation Justifications from Team Member N
5. User Experiences
5.1 User Experience of Team Member A
...
5.N User Experience of Team Member N
6. Presentation Materials
Slides and other supporting material used in the presentation.
During week three of lab, your team will present the results of the evaluation. The presentations will last approximately 20 minutes each. You can structure the presentation along the lines of the evaluation document, with emphasis on what you think are the interesting or surprising results.
Select one team member to be the document submitter. That member submits the written document as a single PDF file named "analysis.pdf", using handin on vogon. The specific command is
handin gfisher a1
Your presentation will be given during one of the lab periods in the third week of class. The presentation schedule will be determined by the beginning of the second week.
The following is a general grading breakdown. Team members will be graded individually on their individual contributions in Sections 4 and 5. A more thorough set of grading criteria will be supplied in the online supplemental material for this assignment.
Section | %  | Grading Criteria
--------|----|--------------------------------------------------------------
1       | 5  | Good rationale for chosen tools
2       | 10 | Completeness and consistency of coverage
3       | 25 | Thoroughness of coverage of core features
4       | 40 | Sensibleness of ratings and thoughtfulness of justifications
5       | 10 | Cogency of the user experience discussions
6       | 10 | Organization, clarity, and coverage of key results
In the PDF version of this writeup, lines that have been changed or added are denoted with vertical bars in the right margin, as is this paragraph. The table below describes the specifics of each change or addition.
Date     | Page | Description
---------|------|------------------------------------------------------------
2 April  |      | First distribution, paper copies handed out in class.
4 April  | 1    | Changed incorrect due date from 11 April to 14 April.
10 April | 4    | Clarified description of Section 4 in the analysis document. Specifically, clarified that there are two rating tables to be produced -- one for each evaluated tool. Also, each table has a rightmost column, for the average of team member ratings.