Provide a high-level introduction to ID, the process, and evaluation.
These are covered in Chapters 1, 9, and 12 respectively.
Week 3:
Cover details of the requirements and prototyping process.
Much of this is review material from CSC 308.
This material is in Chapters 10 and 11.
Weeks 4 and 5:
Cover psychological, sociological, and cognitive aspects of ID.
This is material not typically covered in depth in software engineering
courses, at least here at Poly.
It's covered in Chapters 2 through 5.
Weeks 6 and 7:
Summarize general paradigms of ID, i.e., ways of doing business.
Cover statistical aspects of ID, specifically data gathering from users and
analysis of the data.
This is from Chapters 6 through 8.
Weeks 8 and 9:
Cover details of user evaluation.
From Chapters 13 through 15.
Week 10:
Consider what's on the horizon and beyond.
This will touch on topics from the research papers, particularly those from
later in the quarter.
Assignment 2.
The assignment topic is high-level storyboarding, what the book calls a
"low-fidelity prototype" (Chapter 11, Section 11.2.3, Page 531).
Hopefully, it will be the beginning of your term project.
The culmination of the assignment is a two-day poster session in lab, during
week 5.
See the writeup for details.
Introduction to the class project.
You will work in teams of approximately six members each.
The default project theme is productivity software for post-secondary
education.
"Default" means that you may work on a project in an entirely different area.
However, if your team has nothing else in mind, then post-secondary education
is a reasonable application domain.
This is our collective, shared area of expertise, as students and faculty in
this class.
"Productivity" means it improves the educational lives of students and/or
faculty.
We will spend time in lecture on Wednesday discussing your (possibly tentative)
project selection.
Even if you are not sure of a project topic, you need a work area for the
storyboarding task of Assignment 2.
See the Assignment 2 writeup for further details.
Now on to the topic of user evaluation, as presented in Chapter
12.
Introduction to evaluation (Section 12.1).
The purpose of any form of evaluation is to collect information about users'
experience interacting with a product.
There are multiple possible methods to do so.
In Assignment 1, you approached evaluation as a team of expert users,
evaluating fully finished products.
We did this as an immediately accessible form of evaluation, as a "warm-up"
exercise for the class.
Later in the quarter, you will perform a laboratory-based evaluation of actual
end users, who may or may not be experts in the domain of product use.
The "why", "what", "where", and "when" of evaluation (Section
12.2).
Why?
Check that users can do something useful with a product.
Check that they like it.
What?
Evaluate the product itself.
Evaluate domain-specific attributes, including performance, aesthetics, and
physical characteristics.
Where?
Evaluate in a controlled laboratory setting.
Evaluate in natural settings of use.
When?
Evaluate at any appropriate stage of the development process.
Do concept evaluations at the beginning of developing a brand new product.
Evaluate specific new features when a product is being upgraded.
Evaluate a finished product, including for standards compliance.
Evaluation terminology (alphabetically listed in Box 12.1).
Analytic evaluation -- An approach using heuristics, walk-throughs, or
models, without actual end users.
Controlled experiment -- Evaluation of actual users, in a controlled
laboratory setting.
Field study -- Evaluation of actual end users, in their natural
environment.
Formative evaluation -- Done during design, to ensure continued user
satisfaction.
Heuristic evaluation -- Done using well-known guidelines, embodying
knowledge of typical users.
Predictive evaluation -- Done with theoretical models, to (attempt to)
predict user performance.
Summative evaluation -- Done when a design is complete, in particular
to assess standards compliance.
Usability lab -- A facility designed specifically for usability
studies.
User study -- Any evaluation study involving users, at any stage of
development.
Usability testing -- An evaluation study that quantitatively measures
users' performance.
User testing -- Evaluation of users performing specific tasks.
Approaches and methods (Section 12.3).
There are three widely used approaches -- usability testing, field studies, and
analytic evaluation.
They can be used at various stages of product development, separately or in
combination.
Approaches (Section 12.3.1).
Usability testing
Done in a lab or similar setting.
The environment is well controlled by the evaluators.
Test subjects must focus on the tasks at hand and not be interrupted, e.g., by
phone calls or other typical day-to-day activities.
Quantifying user performance is an important aspect of usability testing.
All users are given the same tasks and are measured in specific ways (a small
measurement sketch follows this subsection).
When such tests are conducted over a single product's full life span, this is a
form of regression testing, in the software engineering sense.
I.e., the same tests are used with successive product releases, to ensure that
a core set of tasks can be performed in a new release at least as effectively
as in a preceding release.
This has been called "usability engineering".
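To make these measurements concrete, here is a minimal sketch (not from the
book) that summarizes two common usability-test measures, task success rate
and mean completion time, from hypothetical logged trials. The field names
and data are illustrative assumptions.

    # Minimal sketch: summarizing hypothetical usability-test trials.
    # The trial records and field names are illustrative assumptions.
    from statistics import mean

    # One record per (user, task) trial: time taken and success flag.
    trials = [
        {"user": "u1", "seconds": 42.0, "completed": True},
        {"user": "u2", "seconds": 55.5, "completed": True},
        {"user": "u3", "seconds": 90.0, "completed": False},
    ]

    completed = [t for t in trials if t["completed"]]
    success_rate = len(completed) / len(trials)
    mean_time = mean(t["seconds"] for t in completed)

    print(f"Success rate: {success_rate:.0%}")         # 67%
    print(f"Mean completion time: {mean_time:.1f} s")  # 48.8 s

In the regression-testing sense above, the same summary could be computed
for each successive release and compared against the preceding release's
numbers.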
Field studies
In contrast to usability testing, field studies are done in users' natural
settings.
Subjects are observed unobtrusively, with their activities recorded in
various forms, including audio and video if possible.
Subjects may be asked to fill out questionnaires about their experiences.
Analytic evaluation
This can be done using heuristics, walkthroughs, or models.
This form of testing does not involve actual end users, where "actual" means
the intended user population or its designated representatives.
Rather, it is conducted by product developers, most typically domain experts.
Heuristics are developed to characterize typical user behavior.
They can be based on common-sense knowledge in a particular domain.
They also involve other general or specific guidelines of product usage.
Models are scientific attempts to characterize certain types of measurable user
behavior.
E.g., Fitts' law predicts the time it takes to reach a target using a pointing
device (see the sketch at the end of this subsection).
Cognitive walkthroughs are a form of modeling that involves simulating
a user's problem-solving process, to determine how users will interact with a
product.
Overall, analytic evaluation can be useful, but is never a replacement for
actual end user testing.
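As a concrete example of a predictive model like Fitts' law, here is a small
sketch of its common Shannon formulation, T = a + b * log2(D/W + 1), where D
is the distance to the target and W is its width. The constants a and b are
fit empirically per pointing device; the values below are illustrative
assumptions, not measured data.

    # Minimal sketch of Fitts' law (Shannon formulation):
    #   T = a + b * log2(D / W + 1)
    # T: predicted movement time, D: distance to target, W: target width.
    # The constants a and b below are illustrative assumptions; in
    # practice they are fit to measurements for a particular device.
    from math import log2

    def fitts_time(distance: float, width: float,
                   a: float = 0.2, b: float = 0.1) -> float:
        """Predicted seconds to acquire a target of the given width
        (same units as distance) at the given distance."""
        return a + b * log2(distance / width + 1)

    # A distant, small target takes longer than a near, large one.
    print(f"{fitts_time(800, 20):.2f} s")   # 0.74 s
    print(f"{fitts_time(100, 100):.2f} s")  # 0.30 s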
Methods (Section 12.3.2).
The main methods employed in evaluation are:
Observing users.
In a lab.
In the field.
With direct evaluator contact and/or indirect recording.
Recording in various ways, including audio, video, and product instrumentation
(a logging sketch follows this list).
Asking users their opinions.
Individual in-person interviews, with note taking.
In group meetings and discussion sessions.
Using questionnaires (see the scoring sketch below).
Asking experts their opinions.
Testing users' performance.
Modeling users' performance.
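To illustrate the product-instrumentation form of recording mentioned above,
here is a minimal sketch of an event logger that a product under evaluation
might embed. The event names and log format are illustrative assumptions.

    # Minimal sketch of product instrumentation: the product under
    # evaluation appends timestamped UI events to a log for later
    # analysis. Event names and log format are illustrative assumptions.
    import json
    import time

    def log_event(logfile, event, **details):
        """Append one timestamped UI event as a line of JSON."""
        record = {"time": time.time(), "event": event, **details}
        logfile.write(json.dumps(record) + "\n")

    with open("session.log", "a") as f:
        log_event(f, "menu_opened", menu="File")
        log_event(f, "item_selected", menu="File", item="Save As...")
        log_event(f, "dialog_cancelled", dialog="Save As")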
Table 12.1 on Page 594 is a useful summary of the different evaluation
approaches and methods.
Case studies (Section 12.4).
The book provides six short case studies to illustrate the use of various
evaluation methods.
Here is an overview of the evaluation methods we will be employing in 484:
Heuristic evaluation of an existing project in Assignment 1.
Informal interviews and questionnaires for project ideas in Assignment 2.
Lab-based usability study of an existing or new product in Assignment 3.