CSC 484 Lecture Notes Week 6
Different Types of Interfaces and Interactions
- 
Relevant reading -- Textbook Chapter 6.
 - 
Discussion of project overview and Milestone 2.
- 
See the handout from last week.
 - 
See important revisions posted online, in particular:
- 
Project presentations moved from weeks 7 and 10 to weeks 8 and 11.
 - 
New Section 1.3.3, on "Usability Study Participants".
 - 
References section at the end, containing references to
articles cited in your writeup, particularly Sections 1.3.1 and 1.3.2.
 
 
 - 
Class schedule updates.
- 
See the revised schedule online, at
http://users.csc.calpoly.edu/~gfisher/classes/484/handouts/schedule.html
 - 
Noteworthy updates:
- 
Project presentations moved to weeks 8 and 11.
 - 
Monday finals time used for final presentations, not written exam.
 - 
Weeks 9 and 10 labs devoted to usability studies.
 - 
Quiz 4 in Friday lecture Week 9, worth 6% of class grade.
 
 
 - 
Introduction to Chapter 6 (Section 6.1).
- 
The chapter covers the wide range of different interface types.
- 
WIMP -- windows, icons, menus, and pointing.
 - 
Advanced GUIs -- multi-media, virtual reality.
 - 
Ubiquitous -- wearable, mobile, in the surrounding
environment.
 
 - 
It discusses design issues relevant to the different types.
 - 
It also provides guidance about what type of interface(s) to choose for a
particular application or activity.
 
 - 
Interface paradigms (Section 6.2).
- 
Simply put, a paradigm is a way of doing business.
 - 
From a scientific perspective, a paradigm is a set of commonly agreed practices
that helps define:
- 
what scientific questions to ask,
 - 
what phenomena to observe,
 - 
the kind of experiments to conduct.
 
 - 
In interaction design paradigms, the questions include:
- 
How many people will interact with a designed system?
 - 
Is the system on a desktop, in a web browser, in a ubiquitous environment?
 - 
In what forms does the user present inputs to the system?
 - 
In what forms does the system provide output to the user?
 
 - 
The phenomena to observe are based on the various forms of user behavior
discussed in preceding chapters:
- 
Can people use an interactive system effectively?
 - 
What psychological phenomena are pertinent to users of an interactive system?
 - 
What social phenomena arise as a result of using an interactive system?
 - 
When do users enjoy or not enjoy using an interactive system?
 
 - 
Experimental findings in ID research are generally considered acceptable in one
of these forms:
- 
Qualitative results.
- 
These are based on asking users specific questions about their interactive
experiences.
 - 
Such results typically require statistical analysis to identify significant
trends among the users.
 
 - 
Quantitative results.
- 
These are based on measuring users' performance of specific interactive tasks
(see the sketch at the end of this section).
 - 
Such results typically require a controlled experimental environment in which
tasks are performed.
 
 - 
Theory-based results.
- 
Qualitative or quantitative results can be backed by some proposed theory,
framework, or model of interactivity.
 - 
In such cases, the experiment is based on and derived from the theory.
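
As a toy illustration of the arithmetic behind such quantitative comparisons,
here is a minimal Java sketch (not from the book; the data and names are
invented) that computes a Welch-style t statistic over hypothetical
task-completion times for two interface designs:

    public class CompletionTimeComparison {

        // Sample mean.
        static double mean(double[] xs) {
            double sum = 0;
            for (double x : xs) sum += x;
            return sum / xs.length;
        }

        // Unbiased sample variance.
        static double variance(double[] xs, double m) {
            double sum = 0;
            for (double x : xs) sum += (x - m) * (x - m);
            return sum / (xs.length - 1);
        }

        public static void main(String[] args) {
            // Hypothetical task-completion times (seconds) for two designs.
            double[] designA = { 41.2, 38.5, 44.0, 39.8, 42.1 };
            double[] designB = { 35.0, 33.4, 36.8, 34.1, 35.9 };

            double meanA = mean(designA), meanB = mean(designB);
            double se = Math.sqrt(variance(designA, meanA) / designA.length
                                + variance(designB, meanB) / designB.length);

            // Welch-style t statistic: difference of means over its standard error.
            double t = (meanA - meanB) / se;
            System.out.printf("mean A = %.2f s, mean B = %.2f s, t = %.2f%n",
                              meanA, meanB, t);
            // A large |t|, judged against the t distribution, suggests the
            // difference is unlikely to be due to chance alone.
        }
    }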
 
 
 
 - 
Interface types (Section 6.3).
- 
Interaction styles:
- 
Command-based -- typically via typed text or spoken phrases.
 - 
Graphical -- typically via mouse or pen.
 - 
Multi-media -- add audio/video output and possibly input.
 
 - 
Interactive system properties:
- 
Intelligent -- add some form of AI to an interactive system.
 - 
Adaptive -- the interface changes dynamically, to adapt to users'
changing context and needs.
 - 
Ambient -- the interface extends beyond the users' desktop, into the
surrounding environment.
 - 
Mobile -- an interactive device goes with the user, as the user moves
about.
 
 - 
The book's chronological grouping:
- 
1980s
- 
Command
 - 
GUI
 
 - 
1990s
- 
Advanced GUI (multi-media, VR, visualization)
 - 
Web-based
 - 
Speech
 - 
Pen, gesturing (beyond keyboard/mouse), touch
 - 
Appliance, i.e., device-embedded
 
 - 
2000s
- 
Mobile
 - 
Multi-modal (beyond keyboard and mouse)
 - 
Sharable
 - 
Tangible (sensor-based i/o devices)
 - 
Augmented, virtual, mixed reality
 - 
Wearable
 - 
Robotic
 
 
 - 
This chronology is more aligned with PhD-level research than with actual
real-world usage of technology.
- 
Application of 1990s research is still taking place in commercial systems.
 - 
E.g.,
- 
Google Docs and SketchUp
 - 
Apple Spaces and Expose
 - 
Windows desktop improvements
 - 
MS Office Galleries
 - 
iPod scroll wheel
 
 
 
 - 
1980s UIs (Section 6.3.1).
- 
These are well-known to us all.
 - 
In my opinion, Activity 6.1 and Box 6.1 are not particularly well-chosen or
cogent examples; you can undoubtedly come up with better examples of your own,
in the illustrated areas.
 - 
Research and design issues (many remaining relevant today).
- 
Command vocabulary.
 - 
Mnemonic icon design.
 - 
Window management.
 - 
Menu design and layout.
 - 
Other means to display, navigate, and abstract large amounts of
information.
 
 - 
Menu design issues.
- 
There are many published guidelines; the book cites a number of them.  (A
minimal Swing sketch of some menu basics appears at the end of this section.)
 - 
The book also has a rather curious excerpt of the ISO standards for menu
design, in Figure 6.8.
 
 - 
Icon design issues.
- 
The visual appearance of icons has improved quite a bit.
 - 
However, research suggests that icon recognition may not involve graphics
cognition.
 - 
Hence icons may just be more vocabulary.
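
As mentioned under menu design issues above, here is a minimal Swing sketch
of a few common menu basics (mnemonics, keyboard accelerators, separators for
grouping); it is not from the book or the cited guidelines, and the labels
are invented for illustration:

    import javax.swing.*;
    import java.awt.event.InputEvent;
    import java.awt.event.KeyEvent;

    public class MenuSketch {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Menu design sketch");

                JMenu fileMenu = new JMenu("File");
                fileMenu.setMnemonic(KeyEvent.VK_F);      // Alt+F opens the menu

                JMenuItem open = new JMenuItem("Open...", KeyEvent.VK_O);
                open.setAccelerator(KeyStroke.getKeyStroke(
                        KeyEvent.VK_O, InputEvent.CTRL_DOWN_MASK));
                open.addActionListener(e -> System.out.println("Open selected"));

                JMenuItem exit = new JMenuItem("Exit", KeyEvent.VK_X);
                exit.addActionListener(e -> frame.dispose());

                fileMenu.add(open);
                fileMenu.addSeparator();                  // group related commands
                fileMenu.add(exit);

                JMenuBar bar = new JMenuBar();
                bar.add(fileMenu);
                frame.setJMenuBar(bar);

                frame.setSize(400, 300);
                frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }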
 
 
 - 
Multi-Media (pp. 240-244).
- 
These are interfaces that include a mix of graphics, text, audio, video,
animation, and hyper-links.
 - 
They are intended to encourage interaction and exploration.
 - 
The book notes some significant caveats with respect to multi-media interfaces:
- 
There is a general belief that 'more is more', the implication being that
this may not always be true.
 - 
The 'added value' of multi-media is assumed, often with little or no
empirical evidence to back up the assumption.
 - 
Studies have shown that multi-media UIs may promote fragmented
interactions, in that the flashier aspects of the interface may
distract users from focusing on the task at hand.
 
 - 
The book summarizes published guidelines that recommend the use of multi-media
in the following order:
- 
To start an interactive session, stimulate the user with audio/video.
 - 
Next, to focus on important information structure, present high-level
diagrams.
 - 
Finally, show details in hypertext.
 
 
 - 
Virtual reality and virtual environments (pp. 244-249).
- 
Such interfaces can create an illusion of participation in a seemingly
realistic world.
 - 
They can provide a sense of presence, meaning the user feels as if she
or he is within the virtual environment.
 - 
Physical input/output media include the following:
- 
3D projections or shutter glasses, for visual effects.
 - 
Joystick controls, for 3-space navigation.
 - 
Full headsets or "heads-up" displays, though the book reports that fully
head-enclosing devices can be problematically uncomfortable or constraining
for users.
 
 - 
There are two perspectives a user can assume in a virtual environment:
- 
First-person direct control.
- 
The user acts as herself or himself within the environment, controlling and
navigating directly.
 - 
Flight simulations and other training systems are examples of the first-person
perspective.
 
 - 
The other perspective is third-person indirect control.
- 
The user interacts via an "avatar" or some other agent.
 - 
The avatar interacts in the environment, either under the user's control or
simply observed by the user.
 - 
Interactive games are typically designed with a third-person perspective.
 
 
 - 
The issue of 2D versus 3D space is much debated; questions include:
- 
Does 3D help with productivity?
 - 
Does it help with engagement?
 - 
Is it more fun?
 
 - 
Design issues for virtual reality and environments include:
- 
the degree of realism,
 - 
the types of input/output,
 - 
the types of user cognition involved in navigation
 - 
in general, what it takes for a user to "suspend disbelief" in order to
feel present within a realistic space.
 
 
 - 
Information Visualization (pp. 249-251).
- 
These forms of interface provide visual abstraction for large data sets, e.g.,
geographic data.
 - 
They also provide alternate views of complex data, such as large amounts of
statistical information summarized with varying sizes and colors of geometric
shapes.  (A sketch of this idea appears at the end of this section.)
 - 
Successful application areas include:
- 
geographic data, where users are provided with sophisticated ways to
zoom and pan the data;
 - 
algorithm animation, where aspects of program behavior are shown
visually as a program runs;
 - 
other interesting attempts, such as
- 
Marketmap -- provides a geometric visualization of stock market activity,
infosthetics.com/archives/2005/08/smartmoney_mark.html
 - 
Newsmap -- does a similar form of geometric visualization of world-wide news
stories,
marumushi.com/apps/newsmap/newsmap.cfm
 
 
 - 
R&D issues for data visualizations:
- 
appropriate spatial metaphors,
 - 
2D versus 3D (again).
 - 
Do visualizations really work?  (Check out the preceding links to see what you
think.)
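
To make the "sizes and colors of geometric shapes" idea concrete, here is a
minimal Swing sketch in the spirit of Marketmap; the "stocks" and their
numbers are invented, and a real treemap layout is considerably more
sophisticated:

    import javax.swing.*;
    import java.awt.*;

    // Each hypothetical stock is a rectangle whose width is proportional to
    // its relative value and whose color encodes its daily change.
    public class MarketStrip extends JPanel {
        private final String[] names  = { "AAA", "BBB", "CCC", "DDD", "EEE" };
        private final double[] values = { 50, 30, 80, 20, 40 };            // relative size
        private final double[] change = { +2.1, -0.8, +0.4, -3.2, +1.0 };  // % change

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            double total = 0;
            for (double v : values) total += v;

            int x = 0;
            for (int i = 0; i < values.length; i++) {
                int w = (int) Math.round(getWidth() * values[i] / total);
                g.setColor(change[i] >= 0 ? new Color(0, 160, 0)    // up: green
                                          : new Color(190, 0, 0));  // down: red
                g.fillRect(x, 0, w, getHeight());
                g.setColor(Color.WHITE);
                g.drawString(names[i], x + 4, 20);
                x += w;
            }
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Market strip sketch");
                frame.add(new MarketStrip());
                frame.setSize(600, 120);
                frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }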
 
 
 - 
Web-based UIs (pp. 251-258).
- 
There is on-going debate about whether to have "vanilla" or "multi-flavor" web
UIs.
 - 
Guru Nielsen says vanilla.
 - 
Many others say glitz.
 - 
The world jury is still way out on this.
 - 
As always in any interaction design effort, plead to your own jury.
- 
Know your users.  (Have we mentioned that yet?)
 - 
And know what you want from them.
 
 - 
Regarding all of the text that's out there in webspace -- do people read any of
it?
- 
Recent research says web travelers read around 20% of it.
 - 
See useit.com for a discussion (you can read about 20% of it to get
the idea).
 
 - 
Web design issues.
- 
There are gazillions of guidelines.
 - 
There is also copious research.
 - 
Increasingly, issues of web UI design are much the same as they are for non-web
UIs.
 - 
Given the navigational aspects of web UIs, they may be organized around
the following user questions:
- 
Where am I?
 - 
What's here?
 - 
Where can I go?
 
 - 
However, there are many desktop UIs for which these questions are equally
appropriate, and conversely, there are web-based UIs for which these questions
are not particularly important.  (See Figure 6.21.)
 
 
 - 
Speech (pp. 258-260).
- 
Speech has been used successfully in certain applications.
 - 
IVRs (Interactive Voice-Response systems) are coming along.
 - 
Research and design issues:
- 
Despite the progress, there is much still to do.
 - 
Parsing remains a major problem.
 - 
Genuine two-way conversation is difficult.
 - 
Most speech APIs are quite complicated (see the sketch after this list), e.g.
- 
Sun's FreeTTS synthesizer,
freetts.sourceforge.net/docs/index.php
- 
CMU's Sphinx-4 recognizer,
cmusphinx.sourceforge.net/sphinx4
 - 
CMU's Speech Graffiti
www.cs.cmu.edu/~usi
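
As promised above, here is a minimal sketch of speech output with FreeTTS.
It assumes freetts.jar and its bundled voice data are on the classpath
("kevin16" is one of the sample voices), and it only scratches the surface
of what these APIs involve:

    import com.sun.speech.freetts.Voice;
    import com.sun.speech.freetts.VoiceManager;

    public class SpeakHello {
        public static void main(String[] args) {
            Voice voice = VoiceManager.getInstance().getVoice("kevin16");
            if (voice == null) {
                System.err.println("Voice not found -- check the FreeTTS voice jars.");
                return;
            }
            voice.allocate();                  // load the voice data
            voice.speak("Hello from the lecture notes.");
            voice.deallocate();                // release resources
        }
    }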
  
 
 
 - 
Pen, gesture, touch (pp. 258-260).
- 
Pen-based products started in the 1990s.
 - 
Much R&D continues.
 - 
R&D issues include:
- 
distinguishing among different gestures (see the sketch after this list);
 - 
gesture accuracy and efficiency compared to keyboard and mouse.
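
As a toy version of the gesture-distinguishing problem mentioned above, here
is a sketch that classifies a recorded stroke as a tap or a directional swipe
from its net displacement; real recognizers compare whole stroke shapes, and
the threshold and names here are invented:

    import java.awt.Point;
    import java.util.List;

    public class SwipeClassifier {

        enum Gesture { TAP, SWIPE_LEFT, SWIPE_RIGHT, SWIPE_UP, SWIPE_DOWN }

        static Gesture classify(List<Point> stroke) {
            Point first = stroke.get(0);
            Point last  = stroke.get(stroke.size() - 1);
            int dx = last.x - first.x;
            int dy = last.y - first.y;

            if (Math.abs(dx) < 10 && Math.abs(dy) < 10) {
                return Gesture.TAP;                      // barely moved
            }
            if (Math.abs(dx) >= Math.abs(dy)) {          // mostly horizontal
                return dx > 0 ? Gesture.SWIPE_RIGHT : Gesture.SWIPE_LEFT;
            }
            return dy > 0 ? Gesture.SWIPE_DOWN           // screen y grows downward
                          : Gesture.SWIPE_UP;
        }

        public static void main(String[] args) {
            List<Point> stroke =
                List.of(new Point(10, 50), new Point(60, 48), new Point(120, 55));
            System.out.println(classify(stroke));        // prints SWIPE_RIGHT
        }
    }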
 
 
 - 
Appliance UIs (pp. 264-265).
- 
Your toaster and fridge with brains.
 - 
Design issues:
- 
Keep it simple (really, this time).
 - 
Tradeoffs between hard vs soft UIs, e.g., using knobs and levers to control
your toaster, versus an LCD.
 
 
 - 
21st Century UIs (Section 6.3.3).
- 
We'll cover these later in the quarter, when we discuss the world-enveloping
field of ubiquitous computing.
 - 
We'll also cover a number of the preceding topics in further depth, in
particular visualizations and speech.
 
 