CSC 484 Lecture Notes Week 6
Different Types of Interfaces and Interactions



  1. Relevant reading -- Textbook Chapter 6.

  2. Discussion of project overview and Milestone 2.
    1. See the handout from last week.
    2. See important revisions posted online, in particular:
      1. Project presentation moved from weeks 7 and 10, to weeks 8 and 11.
      2. New Section 1.3.3, on "Usability Study Participants".
      3. References section at the end, containing references to articles cited in your writeup, particularly Sections 1.3.1 and 1.3.2.

  3. Class schedule updates.
    1. See the revised schedule online, at
      
      http://users.csc.calpoly.edu/~gfisher/classes/484/handouts/schedule.html
      
      
    2. Noteworthy updates:
      1. Project presentations moved to weeks 8 and 11.
      2. Monday finals time used for final presentations, not written exam.
      3. Weeks 9 and 10 labs devoted to usability studies.
      4. Quiz 4 in Friday lecture Week 9, worth 6% of class grade.

  4. Introduction to Chapter 6 (Section 6.1).
    1. The chapter covers the wide range of different interface types.
      1. WIMP -- windows, icons, menus, and pointing.
      2. Advanced GUIs -- multi-media, virtual reality.
      3. Ubiquitous -- wearable, mobile, in the surrounding environment.
    2. It discusses design issues relevant to the different types.
    3. It also provides guidance about what type of interface(s) to choose for a particular application or activity.

  5. Interface paradigms (Section 6.2).
    1. Simply put, a paradigm is a way of doing business.
    2. From a scientific perspective, a paradigm is a set of commonly agreed practices that helps define:
      1. what scientific questions to ask,
      2. what phenomena to observe,
      3. the kind of experiments to conduct.
    3. In interaction design paradigms, the questions include:
      1. How many people will interact with a designed system?
      2. Is the system on a desktop, in a web browser, in a ubiquitous environment?
      3. In what forms does the user present inputs to the system?
      4. In what forms does the system provide output to the user?
    4. The phenomena to observe are based on the various forms of user behavior discussed in preceding chapters:
      1. Can people use an interactive system effectively?
      2. What psychological phenomena are pertinent to users of an interactive system?
      3. What social phenomena arise as a result of using an interactive system?
      4. When do users enjoy or not enjoy using an interactive system?
    5. Experimental findings in ID research are generally considered acceptable in one of these forms:
      1. Qualitative results.
        1. These are based on asking users specific questions about their interactive experiences.
        2. Such results typically require statistical analysis to identify significant trends among the users.
      2. Quantitative results.
        1. These are based on measuring users' performance of specific interactive tasks.
        2. Such results typically require a controlled experimental environment in which tasks are performed.
      3. Theory-based results.
        1. Qualitative or quantitative results can be backed by some proposed theory, framework, or model of interactivity.
        2. In such cases, the experiment is based on and derived from the theory.
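As a concrete sketch of the quantitative approach above, the snippet below compares task-completion times for two interface variants and computes a test statistic. The data, variant names, and the welch_t helper are all invented for illustration; they are not from the book or any real study.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical task-completion times (seconds) from a controlled experiment.
menu_ui = [41.2, 39.8, 44.5, 40.1, 43.0, 38.7]
command_ui = [35.4, 33.9, 36.8, 34.2, 37.1, 32.5]

print(f"menu mean    = {mean(menu_ui):.1f}s")
print(f"command mean = {mean(command_ui):.1f}s")
print(f"Welch t      = {welch_t(menu_ui, command_ui):.2f}")
```

A real study would pair the statistic with degrees of freedom and a significance threshold, but the core idea -- measured performance in a controlled setting, then statistical comparison -- is as shown.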

  Interface types (Section 6.3).
    1. Interaction styles:
      1. Command-based -- typically via typed text or spoken phrases.
      2. Graphical -- typically via mouse or pen.
      3. Multi-media -- add audio/video output and possibly input.
    2. Interactive system properties:
      1. Intelligent -- add some form of AI to an interactive system.
      2. Adaptive -- the interface changes dynamically, to adapt to users' changing context and needs.
      3. Ambient -- the interface extends beyond the users' desktop, into the surrounding environment.
      4. Mobile -- an interactive device goes with the user, as the user moves about.
    3. The book's chronological grouping:
      1. 1980s
        1. Command
        2. GUI
      2. 1990s
        1. Advanced GUI (multi-media, VR, visualization)
        2. Web-based
        3. Speech
        4. Pen, gesturing (beyond keyboard/mouse), touch
        5. Appliance, i.e., device-embedded
      3. 2000s
        1. Mobile
        2. Multi-modal (beyond keyboard and mouse)
        3. Sharable
        4. Tangible (sensor-based i/o devices)
        5. Augmented, virtual, mixed reality
        6. Wearable
        7. Robotic
    4. This chronology is more aligned with PhD-level research than with actual real-world usage of technology.
      1. Application of 1990s research is still taking place in commercial systems.
      2. E.g.,
        1. Google Docs and SketchUp
        2. Apple Spaces and Expose
        3. Windows desktop improvements
        4. MS Office Galleries
        5. iPod scroll wheel

  6. 1980s UIs (Section 6.3.1).
    1. These are well-known to us all.
    2. In my opinion, Activity 6.1 and Box 6.1 are not particularly well-chosen or cogent examples; you can undoubtedly come up with better examples of your own, in the illustrated areas.
    3. Research and design issues (many remaining relevant today).
      1. Command vocabulary.
      2. Mnemonic icon design.
      3. Window management.
      4. Menu design and layout.
      5. Other means to display, navigate, and abstract large amounts of information.
    4. Menu design issues.
      1. There are many published guidelines; the book cites a number of them.
      2. The book also has a rather curious excerpt of the ISO standards for menu design, in Figure 6.8.
    5. Icon design issues.
      1. The visual appearance of icons has improved quite a bit.
      2. However, research suggests that icon recognition may not involve graphics cognition.
      3. Hence icons may just be more vocabulary.
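The command-vocabulary issue above includes the question of how typed abbreviations resolve. A minimal sketch of classic prefix-abbreviation matching follows; the command set is invented for illustration.

```python
COMMANDS = ["copy", "close", "cut", "paste", "print", "quit"]  # hypothetical vocabulary

def resolve(prefix):
    """Resolve a typed abbreviation against the command vocabulary.

    Returns the unique command matching the prefix, or None if the
    abbreviation is ambiguous or unknown -- the classic tradeoff
    between terseness and error-proneness in command-based interfaces.
    """
    matches = [c for c in COMMANDS if c.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

print(resolve("pa"))   # unique match -> paste
print(resolve("c"))    # ambiguous (copy/close/cut) -> None
```

Note the design consequence: the shorter the permitted abbreviations, the more collisions the vocabulary designer must engineer around.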

  7. Multi-Media (pp. 240-244).
    1. These are interfaces that include a mix of graphics, text, audio, video, animation, and hyper-links.
    2. They are intended to encourage interaction and exploration.
    3. The book notes some significant caveats with respect to multi-media interfaces:
      1. There is a general belief that 'more is more', though this belief may not always be true.
      2. The 'added value' of multi-media is assumed, often with little or no empirical evidence to back up the assumption.
      3. Studies have shown that multi-media UIs may promote fragmented interactions, in that the flashier aspects of the interface can distract users from focusing on the task at hand.
    4. The book summarizes published guidelines that recommend the use of multi-media in the following order:
      1. To start an interactive session, stimulate the user with audio/video.
      2. Next, to focus on important information structure, present high-level diagrams.
      3. Finally, show details in hypertext.

  8. Virtual reality and virtual environments (pp. 244-249).
    1. Such interfaces can create an illusion of participation in a seemingly realistic world.
    2. They can provide a sense of presence, meaning the user feels as if she or he is within the virtual environment.
    3. Physical input/output media include the following:
      1. 3D projections or shutter glasses, for visual effects.
      2. Joystick controls, for 3-space navigation.
      3. Full headsets or "heads-up" displays, though the book notes that fully head-enclosing devices have been reported as uncomfortable or constraining to users.
    4. There are two perspectives a user can assume in a virtual environment:
      1. First-person direct control.
        1. The user acts as her or himself within the environment, controlling and navigating directly.
        2. Flight simulations and other training systems are examples of the first-person perspective.
      2. The other perspective is third-person indirect control.
        1. The user interacts via an "avatar" or some other agent.
        2. The avatar acts in the environment under the user's control, or is simply observed by the user.
        3. Interactive games are typically designed with a third-person perspective.
    5. The issue of 2D versus 3D space is a much debated topic; questions include:
      1. Does 3D help with productivity?
      2. Does it help with engagement?
      3. Is it more fun?
    6. Design issues for virtual reality and environments include:
      1. the degree of realism,
      2. the types of input/output,
      3. the types of user cognition involved in navigation,
      4. in general, what it takes for a user to "suspend disbelief" in order to feel present within a realistic space.
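The first-person versus third-person distinction above comes down to where the viewpoint sits relative to the controlled entity. A minimal 2D navigation sketch follows; the function names and the trailing-camera offset are invented for illustration.

```python
import math

def move_forward(pos, heading_deg, dist):
    """Advance a position along its heading (first-person navigation:
    the user's viewpoint and the moving entity are one and the same)."""
    r = math.radians(heading_deg)
    return (pos[0] + dist * math.cos(r), pos[1] + dist * math.sin(r))

def chase_camera(avatar_pos, heading_deg, trail=5.0):
    """Third-person view: the camera trails the avatar along its heading,
    so the user watches the avatar rather than seeing through its eyes."""
    r = math.radians(heading_deg)
    return (avatar_pos[0] - trail * math.cos(r),
            avatar_pos[1] - trail * math.sin(r))

avatar = move_forward((0.0, 0.0), 90.0, 2.0)   # avatar walks "north" 2 units
camera = chase_camera(avatar, 90.0)            # camera sits 5 units behind
```

A flight simulator would render from `avatar` directly; a typical game renders from `camera`, which is the entire difference between the two perspectives in this toy model.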

  9. Information Visualization (pp. 249-251).
    1. These forms of interface provide visual abstraction for large data sets, e.g., geographic data.
    2. They also provide alternate views of complex data, such as large amounts of statistical information summarized with varying sizes and colors of geometric shapes.
    3. Successful application areas include:
      1. geographic data, where users are provided with sophisticated ways to zoom and pan the data;
      2. algorithm animation, where aspects of program behavior are shown visually as a program runs;
      3. other interesting attempts, such as
        1. Marketmap -- provides a geometric visualization of stock market activity,
          
          infosthetics.com/archives/2005/08/smartmoney_mark.html
          
          
        2. Newsmap -- does a similar form of geometric visualization of world-wide news stories,
          
          marumushi.com/apps/newsmap/newsmap.cfm
          
          
    4. R&D issues for data visualizations:
      1. appropriate spatial metaphors,
      2. 2D versus 3D (again).
      3. Do visualizations really work? (Check out the preceding links to see what you think.)
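Marketmap-style displays reduce to mapping data values onto rectangle areas. A minimal one-level "slice" treemap sketch follows; the data values are made up for illustration.

```python
def slice_layout(values, width, height):
    """Divide a width x height canvas into vertical slices whose areas
    are proportional to the input values (a one-level treemap)."""
    total = sum(values)
    rects, x = [], 0.0
    for v in values:
        w = width * v / total
        rects.append((x, 0.0, w, height))   # (x, y, width, height)
        x += w
    return rects

# Hypothetical market capitalizations to visualize.
caps = [50.0, 30.0, 20.0]
for r in slice_layout(caps, 100.0, 10.0):
    print(r)
```

Real treemaps like Marketmap recurse this subdivision by sector and then encode a second variable (e.g., price change) as color, but the size-proportional layout is the core idea.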

  10. Web-based UIs (pp. 251-258).
    1. There is on-going debate about whether to have "vanilla" or "multi-flavor" web UIs.
    2. Guru Nielsen says vanilla.
    3. Many others say glitz.
    4. The world jury is way out on this.
    5. As always in any interaction design effort, plead to your own jury.
      1. Know your users. (Have we mentioned that yet?)
      2. And know what you want from them.
    6. Regarding all of the text that's out there in webspace -- do people read any of it?
      1. Recent research says web travelers read around 20% of it.
      2. See useit.com for a discussion (you can read about 20% of it to get the idea).
    7. Web design issues.
      1. There are gazillions of guidelines.
      2. There is also copious research.
      3. Increasingly, issues of web UI design are much the same as they are for non-web UIs.
      4. Given the navigational aspects of web UIs, they may be organized around the following user questions.
        1. Where am I?
        2. What's here?
        3. Where can I go?
      5. However, there are many desktop UIs for which these questions are equally appropriate, and conversely, there are web-based UIs for which these questions are not particularly important. (See Figure 6.21.)
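The "Where am I?" question above is commonly answered with a breadcrumb trail derived from the page's URL path. A minimal sketch follows; the example path is hypothetical.

```python
def breadcrumbs(path):
    """Build (label, link) breadcrumb pairs from a URL path,
    answering the navigational question 'Where am I?'."""
    parts = [p for p in path.strip("/").split("/") if p]
    trail, crumbs = "", [("home", "/")]
    for p in parts:
        trail += "/" + p
        crumbs.append((p, trail))
    return crumbs

print(breadcrumbs("/classes/484/handouts"))
# [('home', '/'), ('classes', '/classes'), ('484', '/classes/484'),
#  ('handouts', '/classes/484/handouts')]
```

Each crumb doubles as an answer to "Where can I go?" -- every prefix of the path is itself a destination.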

  11. Speech (pp. 258-260).
    1. Speech has been used successfully in certain applications.
    2. IVRs (Interactive Voice-Response systems) are coming along.
    3. Research and design issues:
      1. Despite the progress, there is much still to do.
      2. Parsing remains a major problem.
      3. Genuine two-way conversation is difficult.
      4. Most speech APIs are quite complicated, e.g.
        1. Sun's FreeTTS synthesizer, freetts.sourceforge.net/docs/index.php
        2. CMU's Sphinx-4 recognizer, cmusphinx.sourceforge.net/sphinx4
        3. CMU's Speech Graffiti www.cs.cmu.edu/~usi
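One reason parsing remains hard is the gap between free-form speech and the small grammars recognizers handle well; Speech-Graffiti-style systems restrict users to regimented slot-value phrases. The toy sketch below shows that restricted style; the slots and vocabulary are invented for illustration and are not the actual Speech Graffiti grammar.

```python
# Toy slot-value grammar in the spirit of constrained speech interfaces.
SLOTS = {
    "from": {"boston", "chicago", "denver"},
    "to": {"boston", "chicago", "denver"},
    "day": {"monday", "tuesday", "wednesday"},
}

def parse_utterance(words):
    """Fill slots from 'slot value' word pairs; reject anything else.

    Returns a dict of recognized slots, or None on any out-of-grammar
    input -- illustrating why genuine two-way conversation is much
    harder than keyword-style command parsing.
    """
    tokens = words.lower().split()
    if len(tokens) % 2:
        return None
    filled = {}
    for slot, value in zip(tokens[::2], tokens[1::2]):
        if slot not in SLOTS or value not in SLOTS[slot]:
            return None
        filled[slot] = value
    return filled

print(parse_utterance("from boston to denver day monday"))
print(parse_utterance("book me a flight sometime"))   # out of grammar -> None
```

The second utterance is perfectly natural English, which is exactly the point: everything outside the fixed grammar is invisible to the system.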

  12. Pen, gesture, touch (pp. 258-260).
    1. Pen-based products started in 1990s.
    2. Much R&D continues.
    3. R&D issues include:
      1. distinguishing among different gestures;
      2. gesture accuracy and efficiency compared to keyboard and mouse.
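Distinguishing among gestures means mapping a stroke's geometry to a discrete class. A minimal swipe classifier over (x, y) sample points follows; the classification rule and sample strokes are invented for illustration.

```python
def classify_swipe(points):
    """Classify a stroke as a left/right/up/down swipe by its net motion.

    Real gesture recognizers must also handle noise, stroke speed, and
    ambiguity between similar shapes -- the accuracy and efficiency
    issues noted above.
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"   # screen y grows downward

print(classify_swipe([(0, 0), (40, 3), (80, 5)]))    # "right"
print(classify_swipe([(10, 90), (12, 40), (9, 5)]))  # "up"
```

Even this four-way case hints at the research problem: a diagonal stroke sits near a decision boundary, and adding more gesture classes multiplies such ambiguous regions.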

  13. Appliance UIs (pp. 264-265).
    1. Your toaster and fridge with brains.
    2. Design issues:
      1. Keep it simple (really, this time).
      2. Tradeoffs between hard vs soft UIs, e.g., using knobs and levers to control your toaster, versus an LCD.

  14. 21st Century UIs (Section 6.3.3).
    1. We'll cover these later in the quarter, when we discuss the world-enveloping field of ubiquitous computing.
    2. We'll also cover a number of the preceding topics in further depth, in particular visualizations and speech.


