User Experience
Re-Mastered
Mobile Technology for Children: Designing for Interaction and
Learning
Edited by Allison Druin
Effective Prototyping with Excel
Nevin Berger, Michael Arent, Jonathan Arnowitz, and Fred
Sampson
Web Application Design Patterns
Pawan Vora
Evaluating Children’s Interactive Products: Principles and
Practices for Interaction Designers
Panos Markopoulos, Janet Read, Stuart MacFarlane, and
Johanna Höysniemi
HCI Beyond the GUI: Design for Haptic, Speech, Olfactory and
Other Nontraditional Interfaces
Edited by Phil Kortum
Measuring the User Experience: Collecting, Analyzing, and
Presenting Usability Metrics
Tom Tullis and Bill Albert
Moderating Usability Tests: Principles and Practices for
Interacting
Joseph Dumas and Beth Loring
Keeping Found Things Found: The Study and Practice of Personal
Information Management
William Jones
GUI Bloopers 2.0: Common User Interface Design Don’ts and Dos
Jeff Johnson
Visual Thinking for Design
Colin Ware
User-Centered Design Stories: Real-World UCD Case Studies
Carol Righi and Janice James
Sketching User Experiences: Getting the Design Right and the
Right Design
Bill Buxton
Text Entry Systems: Mobility, Accessibility, Universality
I. Scott MacKenzie and Kumiko Tanaka-Ishii
Letting Go of the Words: Writing Web Content that Works
Janice “Ginny” Redish
Personas and User Archetypes: A Field Guide for Interaction
Designers
John Pruitt and Tamara Adlin
Cost-Justifying Usability
Edited by Randolph Bias and Deborah Mayhew
User Interface Design and Evaluation
Debbie Stone, Caroline Jarrett, Mark Woodroffe, and Shailey
Minocha
Rapid Contextual Design
Karen Holtzblatt, Jessamyn Burns Wendell, and Shelley
Wood
Voice Interaction Design: Crafting the New Conversational
Speech Systems
Randy Allen Harris
Understanding Your Users: A Practical Guide to User Requirements
Methods, Tools, and Techniques
Catherine Courage and Kathy Baxter
The Web Application Design Handbook: Best Practices for
Web-Based Software
Susan Fowler and Victor Stanwick

The Mobile Connection: The Cell Phone’s Impact on Society
Richard Ling
Information Visualization: Perception for Design, 2nd Edition
Colin Ware
Interaction Design for Complex Problem Solving: Developing
Useful and Usable Software
Barbara Mirel
The Craft of Information Visualization: Readings and Reflections
Written and edited by Ben Bederson and Ben Shneiderman
HCI Models, Theories, and Frameworks: Towards a
Multidisciplinary Science
Edited by John M. Carroll
Web Bloopers: 60 Common Web Design Mistakes, and How to
Avoid Them
Jeff Johnson
Observing the User Experience: A Practitioner’s Guide to User
Research
Mike Kuniavsky
Paper Prototyping: The Fast and Easy Way to Design and Refine
User Interfaces
Carolyn Snyder
The Morgan Kaufmann Series in Interactive Technologies
Series Editors: Stuart Card, PARC; Jonathan Grudin, Microsoft;
Jakob Nielsen, Nielsen Norman Group
User Experience
Re-Mastered
Your Guide to Getting
the Right Design
Edited by

Chauncey Wilson
AMSTERDAM • BOSTON • HEIDELBERG • LONDON
NEW YORK • OXFORD • PARIS • SAN DIEGO
SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Morgan Kaufmann Publishers is an imprint of Elsevier
Morgan Kaufmann Publishers is an imprint of Elsevier.
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
This book is printed on acid-free paper.
© 2010 by Elsevier Inc. All rights reserved.
Chapter 1 was originally published in Usability Engineering, by Jakob Nielsen (Elsevier Inc. 1993).
Chapter 2 was originally published in Usability for the Web: Designing Web Sites that Work, by Tom Brinck (Elsevier Inc. 2002).
Chapter 3 was originally published in Understanding Your Users: A Practical Guide to User Requirements Methods, Tools, and Techniques, by
Catherine Courage and Kathy Baxter (Elsevier Inc. 2005).
Chapter 5 was originally published in Sketching User Experiences: Getting the Design Right and the Right Design, by Bill Buxton (Elsevier Inc. 2007).
Chapter 6 was originally published in The Persona Lifecycle: Keeping People in Mind Throughout Product Design, by John Pruitt and Tamara Adlin
(Elsevier Inc. 2006).
Chapter 7 was originally published in Effective Prototyping for Software Makers, by Jonathan Arnowitz, Michael Arent, and Nevin Berger (Elsevier
Inc. 2006).
Chapters 8, 9, 11, 12 were originally published in User Interface Design and Evaluation, by Debbie Stone, Caroline Jarrett, Mark Woodroffe, and Shailey
Minocha. Copyright © The Open University 2005.
Chapter 10 was originally published in Observing the User Experience, by Mike Kuniavsky (Elsevier Inc. 2003).
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying,
recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permis-
sion, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance
Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contri-
butions contained in it are protected under copyright by the Publisher (other than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in
research methods, professional practices, or medical treatment may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods,
compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety
of others, including parties for whom they have a professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage
to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products,
instructions, or ideas contained in the material herein.
Library of Congress Cataloging-in-Publication Data
User experience re-mastered: your guide to getting the right design/edited by Chauncey Wilson.
p. cm.
ISBN 978-0-12-375114-0
1. User interfaces (Computer systems)—Design. 2. Human-computer interaction. 3. Web sites—Design. I. Wilson, Chauncey.
QA76.9.U83U833 2009
006.7—dc22
2009028127
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
ISBN: 978-0-12-375114-0
For information on all Morgan Kaufmann publications,
visit our Web site at www.mkp.com or www.elsevierdirect.com
Printed in Canada.
09 10 11 12 13 14 15 16 5 4 3 2 1
Typeset by diacriTech, Chennai, India
Contents
CONTRIBUTORS xiii
PART 1 Defining Usability 1
CHAPTER 1 What Is Usability? (Jakob Nielsen) 3

Usability and Other Considerations 4
Definition of Usability 6
Learnability 7
Efficiency of Use 9
Memorability 9
Few and Noncatastrophic Errors 10
Subjective Satisfaction 11
Example: Measuring the Usability of Icons 14
Usability Trade-Offs 17
Categories of Users and Individual User Differences 18
End Notes 22
CHAPTER 2 User Needs Analysis (Tom Brinck, Darren Gergle, and Scott D. Wood) 23
Introduction 24
The Objectives of User Needs Analysis 24
Setting Your Objectives 25
The Stakeholders 25
Business Goals 28
User Goals 28
Define the Usability Objectives 28
Define the Functional Specifications 30
Background Research 31
Surveys 32
What to Ask About 32
How to Structure the Survey Responses? 33
Sampling 37
Avoiding Bias 41
When to Use Surveys 43
Competitive Analysis 43
Interviews and Focus Groups 46
Conducting the Interview or Focus Group 46
Organizations 49
Preparing for an Interview or Focus Group 49
Focus Groups 51
When to Conduct Interviews and Focus Groups 53
Informed Project Objectives 53
Task Analysis 53
What is Task Analysis? 54
Task Analysis for Web Site Design 56
Use Cases 57
Hierarchical Task Analysis 58
User-Level Goals and Procedures 58
Platform-Level Goals and Procedures 58
Application-Level Goals and Procedures 59
Understanding the Tasks and Their Context 59
Hierarchical Task Analysis for Web Site Design 60
Techniques for Understanding Tasks 60
Training Materials 61
Standard Operating Procedures 61
Observation 61
Interviews and Focus Groups 61
Think-Aloud Protocol 61
Instrumented Browsers 62
Contextual Inquiry 62
How Far Down Should You Decompose a Procedure? 63
A Hybrid Approach to Task Analysis 64

Start with Use Cases 64
Decompose Tasks Hierarchically 65
Determine Appropriate Technologies 66
Performance Improvements 66
Consistency 66
Brevity and Clarity 69
Combined Functionality and Fewer Server Requests 69
Example: Inefficient Tasks 70
Human-Error-Tolerant Design 71
Example: Error Recovery 71
CHAPTER 3 Card Sorting (Catherine Courage and Kathy Baxter) 73
Introduction 73
When Should You Conduct a Card Sort? 74
Things to be Aware of When Conducting a Card Sort 75
Group or Individual Card Sort? 75
Preparing to Conduct a Card Sort 75
Preparation Timeline 76
Identify Objects and Definitions for Sorting 76
Activity Materials 79
Additional Data Collected in a Card Sort 80
Players in Your Activity 82
Preparing to Conduct a Card Sort 82
Inviting Observers 83
Conducting a Card Sort 83
Activity Timeline 84

Welcome the Participants 84
Practice 84
Card Review and Sorting 84
Labeling Groups 86
Data Analysis and Interpretation 86
Suggested Resources for Additional Reading 90
Analysis with a Card Sorting Program 90
Analysis with a Statistics Package 90
Analysis with a Spreadsheet Package 90
Data That Computer Programs Cannot Handle 91
Interpreting the Results 92
Communicate the Findings 93
Preparing to Communicate Your Findings 93
Modifications 94
Limit the Number of Groups 94
Electronic Card Sorting 94
Suggested Resources for Additional Reading 95
Prename the Groups 95
Lessons Learned 96
Pulling It All Together 96
How Card Sorting Changed a Web Site Team’s View of How the Site
Should be Organized 97
Our Approach 97
Planning and Preparing for the Card Sorting 98
The Analysis 101
Main Findings 102
What Happened to the Web site? 103

Acknowledgments 104
PART 2 Generating Ideas 105
CHAPTER 4 Brainstorming (Chauncey Wilson) 107
Introduction 107
When Should You Use Brainstorming? 109
Strengths of Brainstorming 110
Weaknesses of Brainstorming 110
Procedures and Practical Advice on Brainstorming 111
Variations and Extensions to Brainstorming 119
Free Listing 119
Major Issues in the Use of Brainstorming 127
Data Analysis for Brainstorming 131
What Do You Need for Brainstorming? 132
Recommended Readings 134
CHAPTER 5 Sketching: A Key to Good Design (Bill Buxton) 135
The Question of Design 136
We Are Not All Designers 140
The Anatomy of Sketching 140
From Thinking on to Acting on 149
CHAPTER 6 Persona Conception and Gestation (John Pruitt and Tamara Adlin) 155
Setting the Scene: What’s Going on in Your Organization Now? 155
What is Conception and Gestation for Personas? 156
The Six-Step Conception and Gestation Process 156
How Long Does Conception and Gestation Take? 158
How Many Personas Should You Create? 161
Persona Conception: Steps 1, 2, and 3 166
Step 1: Identify Important Categories of Users 166

Step 2: Process the Data 173
Plan Your Assimilation Meeting 177
Describe the Goal and Outcome of the Meeting 177
Identify Key Data Points (Factoids) in the Data Sources 178
Transfer Factoids to Sticky Notes 178
Post User Category Labels Around the Room 179
Assimilate the Factoids 179
Label the Clusters of Factoids 181
Step 3: Identify Subcategories of Users and
Create Skeletons 182
Persona Gestation: Steps 4, 5, and 6 186
Step 4: Prioritize the Skeletons 186
Step 5: Develop Selected Skeletons into Personas 190
Step 6: Validate Your Personas 209
How to Know You are Ready for Birth and Maturation 218
Summary 219
CHAPTER 7 Verify Prototype Assumptions and Requirements (Jonathan Arnowitz, Michael Arent, and Nevin Berger) 221
Introduction 222
Prototyping Requirements are not Software Requirements 222
Transformation of Assumptions to Requirements 224
Step 1: Gather Requirements 225
Step 2: Inventory the Requirements and Assumptions 227
Step 3: Prioritize Requirements and Assumptions 228

Requirements and the Big Picture 229
Iteration 1: From Idea to First Visualization 229
Iteration 2: From Quick Wireframe to Wireframe 232
Iteration 3: From Wireframe to Storyboard 233
Iteration 4: From Storyboard to Paper Prototype 235
Iteration 5: From Paper Prototype to Coded Prototype 236
Iteration 6: From Coded Prototype to Software Requirements 238
Summary 239
PART 3 Designing Your Site 241
CHAPTER 8 Designing for the Web (Debbie Stone, Caroline Jarrett, Mark Woodroffe, and Shailey Minocha) 243
Introduction 244
The Lovely Rooms Hotel Booking Service 245
Domain 245
Users 245
Tasks 245
Environment 246
Technology 246
Conceptual Design 246
Design Principles for Web Sites 246
High-Quality Content 246
Often Updated 248
Minimal Download Time 248
Ease of Use 248

Relevant to User’s Needs 248
Unique to the Online Medium 248
Net-centric Corporate Culture 249
Designing Web Sites 249
Designing the Web Site Structure 249
Helping the Users Know Where They Are 252
Helping the Users Navigate around the Site 252
Navigation Aids 255
Designing Home Pages and Interior Pages 257
Designing the Home Page 258
Designing Interior Pages 258
Design Issues for Web Pages 260
Widgets on Web Pages 260
Scrolling 262
Designing for Different Screens and Platforms 262
Using the Screen Area Effectively 264
Improving the Download Time 264
Using Style Sheets 266
Designing for Accessibility 269
Writing the Content of Web Pages 269
Keep Text to a Minimum 269
Help Users to Scan 270
Dividing Long Blocks of Text into Separate Sections 271
Summary 271
PART 4 Evaluation, Analysis 273
CHAPTER 9 Final Preparations for the Evaluation (Debbie Stone, Caroline Jarrett, Mark Woodroffe, and Shailey Minocha) 275

Introduction 275
Roles for Evaluators 277
Facilitator 277
Notetaker 278
Equipment Operator 278
Observer 278
Meeter and Greeter 279
Recruiter 279
The Lone Evaluator 279
Creating an Evaluation Script 279
An Example of an Evaluation Script 280
Forms to Use When Asking for Permission to Record 282
Nondisclosure Agreements 285
The Pilot Test 286
Participants for Your Pilot Test 286
Design and Assemble the Test Environment 286
Run the Pilot Test 286
Summary 287
CHAPTER 10 Usability Tests (Michael Kuniavsky) 289
Usability Tests 290
When to Test 290
Example of an Iterative Testing Process: Webmonkey 2.0
Global Navigation 291
How to Do it 294
Preparation 294

Conducting the Interview 311
The Physical Layout 311
How to Analyze it 317
Collecting Observations 318
Organizing Observations 320
Extracting Trends 320
Example 321
CHAPTER 11 Analysis and Interpretation of User Observation Evaluation Data (Debbie Stone, Caroline Jarrett, Mark Woodroffe, and Shailey Minocha) 327
Introduction: How to Analyze and Interpret Data from Your
Evaluation 328
Collating the Data 328
Summarizing the Data 330
Reviewing the Data to Identify Usability Problems 330
Working with Quantitative Data 332
Working with Qualitative Data 334
An Example of Data from Global Warming 334
Making Decisions with Qualitative Data 336
Interpretation of User-Observation Data 337
Assigning Severities 337
Recommending Changes 337
Writing the Evaluation Report 339
Should You Describe Your Method? 342
Describing Your Results 343
Summary 344

CHAPTER 12 Inspections of the User Interface (Debbie Stone, Caroline Jarrett, Mark Woodroffe, and Shailey Minocha) 345
Introduction 346
Creating the Evaluation Plan for Heuristic Inspection 346
Choosing the Heuristics 346
The Inspectors 346
Conducting a Heuristic Inspection 350
Task Descriptions 350
The Location of the Evaluation Session 350
Collecting Evaluation Data 351
Analysis of Heuristic Inspection Data 351
Interpretation of Heuristic Inspection Data 352
Benefits and Limitations of Heuristic Evaluations 352
Variations of Usability Inspection 354
Participatory Heuristic Evaluations 354
Guideline Reviews 356
Standards Inspections 356
Cognitive Walkthrough 356
Peer Reviews 357
Summary 357
REFERENCES 359
INDEX 375
Contributors
Jakob Nielsen User Advocate and Principal, Nielsen Norman Group.
Tom Brinck Creative Director, A9.com.
Darren Gergle Assistant Professor, Northwestern University.

Scott D. Wood Soar Technology, Inc.
Kathy Baxter Senior User Experience Researcher, Google.
Catherine Courage Vice President of User Experience, Citrix Systems.
Chauncey Wilson Senior User Researcher, Autodesk.
Bill Buxton Principal Researcher, Microsoft.
John Pruitt Senior Program Manager, Microsoft.
Tamara Adlin Founding Partner, Fell Swoop.
Michael Arent Vice President of User Interface Standards, SAP Labs.
Jonathan Arnowitz User Experience Strategist, Stroomt Interactions.
Nevin Berger Senior Director of User Experience, TechWeb of United Business Media.
Dr. Debbie Stone Project Manager, Infinite Group, and former Lecturer, Open University.
Caroline Jarrett Director, EffortMark.
Shailey Minocha Senior Lecturer of Human–Computer Interaction, Open University.
Mark Woodroffe Deputy Head of the Computing Department, Open University.
Michael Kuniavsky Cofounder and Head of Design, ThingM Corp.
PART 1
Defining Usability
CHAPTER 1
What Is Usability?
Jakob Nielsen
Copyright © 2010 Elsevier, Inc. All rights reserved.
EDITOR’S COMMENTS
Jakob Nielsen has been a leading figure in the usability field since the 1980s, and this chapter from his classic book, Usability Engineering (Nielsen, 1993), highlights the multidimensional nature of usability. To be usable, a product or service must consider, at a minimum, these five basic dimensions:
■ Learnability
■ Efficiency
■ Memorability
■ Error tolerance and prevention
■ Satisfaction
An important point made by Nielsen and other usability experts is that the importance
of these dimensions will differ depending on the particular context and target users. For
something like a bank automated teller machine (ATM) or information kiosk in a museum,
learnability might be the major focus of usability practitioners. For complex systems such
as jet planes, railway systems, and nuclear power plants, the critical dimensions might
be error tolerance and error prevention, followed by memorability and efficiency. If you
can’t remember the proper code to use when an alarm goes off in a nuclear power plant, a
catastrophic event affecting many people over several generations might occur.
In the years since this chapter was published, the phrase “user experience” has emerged as the successor to “usability.” User experience practitioners consider additional dimensions, such as aesthetics, pleasure, and consistency with moral values, as important for the success of many products and services. These user experience dimensions, while important, still depend on a solid usability foundation. You can design an attractive product
Back when computer vendors first started viewing users as more than an inconvenience, the term of choice was “user friendly” systems. This term is not really appropriate, however, for several reasons. First, it is unnecessarily anthropomorphic – users don’t need machines to be friendly to them; they just need machines that will not stand in their way when they try to get their work done. Second, it implies that users’ needs can be described along a single dimension by systems that are more or less friendly. In reality, different users have different needs, and a system that is “friendly” to one may feel very tedious to another.
Because of these problems with the term user friendly, user interface professionals have tended to use other terms in recent years. The field itself is known under names like computer–human interaction (CHI), human–computer interaction (HCI), which is preferred by some who like “putting the human first” even if only done symbolically, user-centered design (UCD), man–machine interface (MMI), human–machine interface (HMI), operator–machine interface (OMI), user interface design (UID), human factors (HF), and ergonomics,1 etc.
I tend to use the term usability to denote the considerations that can be addressed
by the methods covered in this book. As shown in the following section, there
are also broader issues to consider within the overall framework of traditional
“user friendliness.”
USABILITY AND OTHER CONSIDERATIONS
To some extent, usability is a narrow concern compared with the larger issue of system acceptability, which basically is the question of whether the system is good enough to satisfy all the needs and requirements of the users and other potential stakeholders, such as the users’ clients and managers. The overall acceptability of a computer system is again a combination of its social acceptability and its practical acceptability. As an example of social acceptability, consider a system to investigate whether people applying for unemployment benefits are currently gainfully employed and thus have submitted fraudulent applications. The system might do this by asking applicants a number of questions and searching their answers for inconsistencies or profiles that are often indicative of cheaters. Some people may consider such a fraud-preventing system highly socially desirable, but others may find it
EDITOR’S COMMENTS (continued)
that is consistent with your moral values, but sales of that attractive product may suffer if it is hard to learn, not very efficient, and error prone.
Jakob’s chapter describes what is needed to establish a solid foundation for usability – a timeless topic. You can read essays by Jakob on topics related to many aspects of usability and user experience on his Web site.
offensive to subject applicants to this kind of quizzing and socially undesirable to delay benefits for people fitting certain profiles. Notice that people in the latter category may not find the system acceptable even if it got high scores on practical acceptability in terms of identifying many cheaters and was easy to use for the applicants.
EDITOR’S NOTE: SOCIAL NETWORKING AND SOCIAL RESPONSIBILITY
In the years since the publication of this chapter, social networking and other collaboration technologies, such as Facebook, Twitter, and blogs, have become popular with millions of users. These new technologies have great promise for bringing people together but also pose new issues around social acceptability. Take Twitter as an example. Messages about conditions at work or how bad one’s managers are might be considered socially acceptable whining by the originator, but the person’s managers might view the same whining on Twitter as detrimental to the company. Comments on social networking sites can persist for years, and a single photo or comment could harm someone’s chances for a new job or affect revenue for a company. Human resource personnel can Google much personal information when they screen candidates, but is that socially acceptable? Social networking can be a boon or a disaster for individuals and organizations.

Given that a system is socially acceptable, we can further analyze its practical acceptability within various categories, including traditional categories such as cost, support, reliability, and compatibility with existing systems, as well as the category of usefulness. Usefulness is the issue of whether the system can be used to achieve some desired goal. It can again be broken down into the two categories of utility and usability (Grudin, 1992), where utility is the question of whether the functionality of the system in principle can do what is needed, and usability is the question of how well users can use that functionality. Note that the concept of “utility” does not necessarily have to be restricted to the domain of hard work. Educational software (courseware) has high utility if students learn from using it, and an entertainment product has high utility if it is fun to use. Figure 1.1 shows the simple model of system acceptability outlined here. It is clear from the figure that system acceptability has many components and that usability must trade off against many other considerations in a development project.
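The hierarchy in Figure 1.1 can be mirrored as a small nested structure. The sketch below is an illustrative addition, not part of Nielsen's text: the node names follow the figure, while the constant name and traversal helper are invented for this example.

```python
# Nielsen's system-acceptability model (Fig. 1.1) as a nested mapping.
# Inner nodes are categories; empty dicts mark the leaf attributes.
ACCEPTABILITY = {
    "social acceptability": {},
    "practical acceptability": {
        "cost": {},
        "compatibility": {},
        "reliability": {},
        "usefulness": {
            "utility": {},
            "usability": {
                "easy to learn": {},
                "efficient to use": {},
                "easy to remember": {},
                "few errors": {},
                "subjectively pleasing": {},
            },
        },
    },
}


def leaf_attributes(tree):
    """Recursively collect the leaf attributes of the model."""
    leaves = []
    for name, children in tree.items():
        if children:
            leaves.extend(leaf_attributes(children))
        else:
            leaves.append(name)
    return leaves
```

Walking the structure makes the trade-off point concrete: `leaf_attributes(ACCEPTABILITY)` yields ten leaves, of which the five usability attributes form only one branch among many.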

Usability applies to all aspects of a system with which a human might interact, including installation and maintenance procedures. It is very rare to find a computer feature that truly has no user interface components. Even a facility to transfer data between two computers will normally include an interface to troubleshoot the link when something goes wrong (Mulligan, Altom, & Simkin, 1991). As another example, I recently established two electronic mail addresses for a committee I was managing. The two addresses were ic93-papers-administrator and ic93-papers-committee (for e-mail to
my assistant and to the entire membership, respectively). It turned out that several people sent e-mail to the wrong address, not realizing where their mail would go. My mistake was twofold: first, in not realizing that even a pair of e-mail addresses constituted a user interface of sorts, and second, in breaking the well-known usability principle of avoiding easily confused names. A user who was taking a quick look at the “To:” field of an e-mail message might be excused for thinking that the message was going to one address even though it was in fact going to the other.
DEFINITION OF USABILITY
It is important to realize that usability is not a single, one-dimensional property of a user interface. Usability has multiple components and is traditionally associated with these five usability attributes:
■ Learnability: The system should be easy to learn so that the user can rapidly start getting some work done with the system.
■ Efficiency: The system should be efficient to use so that once the user has learned the system, a high level of productivity is possible.
■ Memorability: The system should be easy to remember so that the casual user is able to return to the system after some period of not having used it without having to learn everything all over again.
■ Errors: The system should have a low error rate so that users make few errors during the use of the system, and so that if they do make errors they can easily recover from them. Further, catastrophic errors must not occur.
■ Satisfaction: The system should be pleasant to use so that users are subjectively satisfied when using it; they like it.
FIGURE 1.1
A model of the attributes of system acceptability. [Diagram: system acceptability divides into social acceptability and practical acceptability; practical acceptability covers cost, compatibility, reliability, etc., and usefulness; usefulness divides into utility and usability; usability comprises easy to learn, efficient to use, easy to remember, few errors, and subjectively pleasing.]
Each of these usability attributes will be discussed further in the following sections. Only by defining the abstract concept of “usability” in terms of these more precise and measurable components can we arrive at an engineering discipline where usability is not just argued about but is systematically approached, improved, and evaluated (possibly measured). Even if you do not intend to run formal measurement studies of the usability attributes of your system, it is an illuminating exercise to consider how its usability could be made measurable. Clarifying the measurable aspects of usability is much better than aiming at a warm, fuzzy feeling of “user friendliness” (Shackel, 1991).
Usability is typically measured by having a number of test users (selected to be as representative as possible of the intended users) use the system to perform a prespecified set of tasks, though it can also be measured by having real users in the field perform whatever tasks they are doing anyway. In either case, an important point is that usability is measured relative to certain users and certain tasks. It could well be the case that the same system would be measured as having different usability characteristics if used by different users for different tasks. For example, a user wishing to write a letter may prefer a different word processor than a user wishing to maintain several hundred thousand pages of technical documentation. Usability measurement, therefore, starts with the definition of a representative set of test tasks, relative to which the different usability attributes can be measured.
To determine a system’s overall usability on the basis of a set of usability measures, one normally takes the mean value of each of the attributes that have been measured and checks whether these means are better than some previously specified minimum. Because users are known to be very different, it is probably better to consider the entire distribution of usability measures and not just the mean value. For example, a criterion for subjective satisfaction might be that the mean value should be at least 4 on a 1–5 scale; that at least 50 percent of the users should have given the system the top rating, 5; and that no more than 5 percent of the users gave the system the bottom rating, 1.
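The example distribution-based criterion can be checked mechanically. A minimal sketch, assuming ratings arrive as a plain list of integers on the 1–5 scale; the function name and sample data are invented for illustration, while the three thresholds come from the paragraph above.

```python
def meets_satisfaction_criterion(ratings):
    """Nielsen's example criterion for subjective satisfaction on a 1-5
    scale: mean >= 4, at least 50% top ratings (5), and no more than 5%
    bottom ratings (1)."""
    n = len(ratings)
    mean = sum(ratings) / n
    top_share = ratings.count(5) / n
    bottom_share = ratings.count(1) / n
    return mean >= 4.0 and top_share >= 0.50 and bottom_share <= 0.05


# Illustrative (made-up) data: a product can have a high mean and still
# fail on the shape of the distribution, which is exactly why looking
# beyond the mean matters.
passing = [5, 5, 5, 5, 5, 4, 4, 4, 3, 4]  # mean 4.4, 50% fives, no ones
failing = [5, 5, 5, 5, 5, 5, 5, 5, 1, 4]  # mean 4.5, but 10% ones
```

The second sample passes both the mean and top-rating checks yet fails the bottom-rating cap, so the criterion as a whole rejects it.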
Learnability
Learnability is in some sense the most fundamental usability attribute, because most systems need to be easy to learn and because the first experience most people have with a new system is that of learning to use it. Certainly, there are some systems for which one can afford to train users extensively to overcome a hard-to-learn interface, but in most cases, systems need to be easy to learn.
Ease of learning refers to the novice user’s experience on the initial part of the
learning curve, as shown in Fig. 1.2. Highly learnable systems have a steep incline for the first part of the learning curve and allow users to reach a reasonable level of usage proficiency within a short time. Practically all user interfaces
have learning curves that start out with the user being able to do nothing (have zero efficiency) at time zero (when they first start using it). Exceptions include
the so-called walk-up-and-use systems, such as museum information systems,
that are only intended to be used once and therefore need to have essentially zero learning time, allowing users to be successful from their very first attempt at using them.
The standard learning curve also does not apply to
cases where the users are transferring skills from
previous systems, such as when they upgrade
from a previous release of a word processor to
the new release (Telles, 1990). Assuming that the
new system is reasonably consistent with the old,
users should be able to start a fair bit up on the
learning curve for the new system (Polson,
Muncher, & Engelbeck, 1986).
Initial ease of learning is probably the easiest of the usability attributes to measure, with the possible exception of subjective satisfaction. One simply picks some users who have not used the system before and measures the time it takes them to reach a specified level of proficiency in using it. Of course, the test users should
be representative of the intended users of the system, and there might be a need
to collect separate measurements from complete novices without any prior computer experience and from users with some typical computer experience. In earlier years, learnability studies focused exclusively on users without any computer
experience, but because many people now have used computers, it is becoming more important to include users with prior computer experience in studies of system learnability.
The most common way to express the specified level of proficiency is simply to state that the users have to be able to complete a certain task successfully. Alternatively, one can specify that users need to be able to complete a set of tasks in a certain minimum time before one will consider them as having "learned" the system.
Of course, as shown in Fig. 1.2, the learning curve actually represents a continuous improvement in user performance and not a dichotomous "learned"/"not learned" distinction. It is still common, however, to define a certain level of performance as indicating that the user has passed the learning stage and is able to use the system and to measure the time it takes the user to reach that stage.
When analyzing learnability, one should keep in mind that users normally do
not take the time to learn a complete interface fully before starting to use it.
On the contrary, users often start using a system as soon as they have learned a
part of the interface. For example, a survey of business professionals who were
experienced personal computer users (Nielsen, 1989a) found that four of the
six highest-rated usability characteristics (out of 21 characteristics in the survey)
related to exploratory learning: easy-to-understand error messages, the possibility of doing useful work with the program before having learned all of it, availability of undo, and confirming questions before execution of risky commands. Because of users'
tendency to jump right in and start using a system, one should not just measure
how long it takes users to achieve complete mastery of a system but also how
long it takes to achieve a sufficient level of proficiency to do useful work.
FIGURE 1.2
Learning curves for a hypothetical system that focuses on the novice user, being easy to learn but less efficient to use, as well as one that is hard to learn but highly efficient for expert users. (The figure plots usage proficiency and efficiency against time for two curves, labeled "focus on expert user" and "focus on novice user.")
What Is Usability?
CHAPTER 1
Efficiency of Use
Efficiency refers to the expert user's steady-state level of performance at the time when the learning curve flattens out (Fig. 1.2). Of course, users may not necessarily reach that final level of performance any time soon. For example, some
operating systems are so complex that it takes several years to reach expert-level
performance and the ability to use certain composition operators to combine
commands (Doane, McNamara, Kintsch, Polson, & Clawson, 1992; Doane,
Pellegrino, & Klatzky, 1990). Also, some users will probably continue to learn
indefinitely, though most users seem to plateau once they have learned "enough"
(Carroll & Rosson, 1987; Rosson, 1984). Unfortunately, this steady-state level of performance may not be optimal for the users, who, by learning a few additional advanced features, would sometimes save more time over the course of their use of the system than the time it took to learn them.
To measure efficiency of use for experienced users, one obviously needs access to experienced users. For systems that have been in use for some time, "experience" is often defined somewhat informally, and users are considered experienced
either if they say so themselves or if they have been users for more than a certain
amount of time, such as a year. Experience can also be defined more formally in terms of the number of hours spent using the system, and that definition is often used
in experiments with new systems without an established user base: test users are
brought in and asked to use the system for a certain number of hours, after which
their efficiency is measured. Finally, it is possible to define test users as experienced in terms of the learning curve itself: a user's performance is continuously measured (e.g., in terms of the number of seconds needed to do a specific task), and when the performance has not improved for some time, the user is assumed to have reached his or her steady-state level of performance (Nielsen & Phillips, 1993).
A typical way to measure efficiency of use is thus to decide on some definition of expertise, to get a representative sample of users with that expertise, and to
measure the time it takes these users to perform some typical test tasks.
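The learning-curve definition of expertise described above can be operationalized by watching task times for a plateau. The sketch below is one possible interpretation; the window size and improvement tolerance are arbitrary choices, not values from the text:

```python
def reached_steady_state(task_times, window=5, tolerance=0.10):
    """Return True once the last `window` trial times show no meaningful
    improvement: the best recent time is no more than `tolerance`
    (relative) better than the best time achieved before the window."""
    if len(task_times) <= window:
        return False  # not enough trials to judge a plateau
    prior_best = min(task_times[:-window])
    recent_best = min(task_times[-window:])
    return recent_best >= prior_best * (1 - tolerance)

# Task times (in seconds) that improve quickly and then flatten out;
# the last five trials show no real improvement over the earlier best
times = [120, 95, 80, 70, 62, 60, 59, 60, 58, 59]
print(reached_steady_state(times))
```

A user whose times are still dropping from trial to trial would not be flagged, which matches the intent: expertise is declared only once the curve has flattened for that individual user.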
Memorability
Casual users are the third major category of users, besides novice and expert users. Casual users are people who use a system intermittently rather than with the fairly frequent use assumed for expert users. However, in contrast to
novice users, casual users have used a system before, so they do not need to learn it from scratch; they just need to remember how to use it based on their previous learning. Casual use is typically seen for utility programs that are only used
under exceptional circumstances, for supplementary applications that do not
form part of a user’s primary work but are useful every now and then, as well as
for programs that are inherently only used at long intervals, such as a program
for making a quarterly report.
Having an interface that is easy to remember is also important for users who
return after having been on vacation or who for some other reason have temporarily stopped using a program. To a great extent, improvements in learnability
often also make an interface easy to remember, but in principle, the usability
of returning to a system is different from that of facing it for the first time. For
example, consider the sign "Kiss and Ride" seen outside some Washington, DC, Metro stations. Initially, the meaning of this sign may not be obvious (it has poor learnability without outside assistance), but once you realize that it indicates a drop-off zone for commuters arriving in a car driven by somebody else, the sign becomes sufficiently memorable to allow you to find such zones at other stations (it is easy to remember).
Interface memorability is rarely tested as thoroughly as the other usability attributes, but there are in principle two main ways of measuring it. One is to perform a standard user test with casual users who have been away from the system for a specified amount of time and measure the time they need to perform some typical test tasks. Alternatively, it is possible to conduct a memory test with users after they finish a test session with the system and ask them to explain the effect of various commands or to name the command (or draw the icon) that does a certain thing. The interface's score for memorability is then the number of correct answers given by the users.
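Scoring such a memory test amounts to counting correct answers. A minimal sketch, with hypothetical task and command names chosen for illustration:

```python
def memorability_score(answers, correct):
    """Score a post-session memory test: count how many of the user's
    answers name the command that actually performs each task.
    `correct` maps task -> command name; `answers` maps task -> the
    user's recalled answer. Matching ignores case and surrounding space."""
    return sum(1 for task, cmd in correct.items()
               if answers.get(task, "").strip().lower() == cmd.lower())

correct = {"copy text": "Ctrl+C", "undo": "Ctrl+Z", "save": "Ctrl+S"}
answers = {"copy text": "ctrl+c", "undo": "Ctrl+Y", "save": "Ctrl+S"}
print(memorability_score(answers, correct))  # 2 of 3 recalled correctly
```

In practice one would aggregate such scores across a sample of casual users rather than rely on any single user's recall.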
The performance test with casual users is most representative of the reason we want to measure memorability in the first place. The memory test may be easier
to carry out but does have the problem that many modern user interfaces are
built on the principle of making as much as possible visible to the users. Users
of such systems do not need to be actively able to remember what is available,
since the system will remind them when necessary. In fact, a study of one such
graphical interface showed that users were unable to remember the contents of the menus when they were away from the system, even though they could use the same menus with no problems when they were sitting at the computer (Mayes, Draper, McGregor, & Oatley, 1988).
Few and Noncatastrophic Errors
Users should make as few errors as possible when using a computer system.
Typically, an error is defined as any action that does not accomplish the desired goal, and the system's error rate is measured by counting the number of such actions made by users while performing some specified task. Error
rates can thus be measured as part of an experiment to measure other usability
attributes.
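Counting errors in a logged task session can be sketched as follows; the log format, with each entry recording whether an action accomplished its intended goal, is an assumption for illustration:

```python
def error_rate(action_log):
    """Compute a simple error rate from a logged task session.
    Each entry is a (action, achieved_goal) pair; any action that did
    not accomplish the intended goal counts as an error."""
    errors = sum(1 for _, achieved_goal in action_log if not achieved_goal)
    return errors / len(action_log)

log = [("open file", True), ("select range", False),
       ("select range", True), ("apply style", True)]
print(error_rate(log))  # 1 error in 4 actions -> 0.25
```

As the text goes on to note, a raw count like this treats all errors alike; a fuller analysis would weight catastrophic errors separately from slips the user corrects immediately.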
Simply defining errors as being any incorrect user action does not take the
highly varying impact of different errors into account. Some errors are corrected
immediately by the user and have no other effect than to slow down the user’s
transaction rate somewhat. Such errors need not really be counted separately, as
their effect is included in the efficiency of use if it is measured the normal way in
terms of the user’s transaction time.
Other errors are more catastrophic in nature, either because they are not discovered by the user, leading to a faulty work product, or because they destroy the user's work, making them difficult to recover from. Such catastrophic errors