Monday, August 31, 2009

Aug 31 - Hollingsed & Novick, Usability Inspection Methods after 15 Years of Research and Practice (part 2)

Usability Inspection Methods after 15 Years of Research and Practice.
Tasha Hollingsed. Lockheed Martin, 2540 North Telshor Blvd., Suite C, Las Cruces, NM 88011. +1 505-525-5267 tasha.hollingsed@lmco.com
David G. Novick. Department of Computer Science, The University of Texas at El Paso, El Paso, TX 79968-0518. +1 915-747-5725 novick@utep.edu
SIGDOC’07, October 22–24, 2007, El Paso, Texas, USA.


COGNITIVE WALKTHROUGH

The cognitive walkthrough is a usability inspection method that evaluates the design of a user interface for its ease of exploratory learning, based on a cognitive model of learning and use [53].
Like other inspection methods, a cognitive walkthrough can be performed on an interface at any time during the development process, from the original mock-ups through the final release.

The process of the cognitive walkthrough comprises a preparatory phase and an analysis phase. During the preparatory phase, the experimenters determine the interface to be used, its likely users, the task, and the actions to be taken during the task.
During the analysis phase, the evaluators work through the four steps of human-computer interaction developed by Lewis and Polson [25] (a rough sketch of how these steps might be recorded follows the list):
1. The user sets a goal to be completed within the system.
2. The user determines the currently available actions.
3. The user selects the action that they think will take them closer to their goal.
4. The user performs the action and evaluates the feedback given by the system.
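
To make this concrete, here is a minimal sketch of my own (not from the paper) of how an evaluator might record the four questions for each step of the task; the class, field names, and example task are all hypothetical.

from dataclasses import dataclass

@dataclass
class StepRecord:
    action: str             # the correct action at this step
    right_goal: bool        # 1. will the user form the right goal?
    action_available: bool  # 2. will the user find the action available?
    goal_action_link: bool  # 3. will the user connect the action to the goal?
    feedback_clear: bool    # 4. will the feedback show progress toward the goal?
    notes: str = ""         # evaluator's problem description, if any

def problems(steps):
    """Return the steps where any of the four questions was answered 'no'."""
    return [s for s in steps if not (s.right_goal and s.action_available
                                     and s.goal_action_link and s.feedback_clear)]

# Example: two steps of a hypothetical "save as PDF" task.
task = [
    StepRecord("open the File menu", True, True, True, True),
    StepRecord("choose 'Export...'", True, True, False, True,
               notes="Users expecting 'Save As' may not select 'Export'."),
]
for s in problems(task):
    print(s.action, "->", s.notes)
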

Wharton et al. [53] surveyed the history of the cognitive walkthrough prior to 1994. At the time, the two main limitations of the cognitive walkthrough were the repetitiveness of filling out the forms and the limited range of problems the process found ([53], [51], [19]). A revised version of the cognitive walkthrough addressed these limitations by using small groups instead of individual evaluators and rotating the form filling within the group, by evaluating the simplest tasks first, and by keeping a record of all problems identified during the evaluation, not just those identified by the formal process ([53], [41]).
Other criticisms of the cognitive walkthrough were that it does not provide guidelines about what makes an action clearly available to a user and that it is not known what types of actions are considered by a broad range of users [51].

Since its inception, and with its refinements and extensions, the cognitive walkthrough has been shown to be an effective inspection method that can be applied not just by cognitive scientists and usability specialists but also by novice evaluators. However, the choice of task scenario can be difficult; if the scenario is not adequately described, the evaluation is not as effective.

PLURALISTIC USABILITY WALKTHROUGH

The pluralistic usability walkthrough [3] adapted the traditional usability walkthrough to incorporate representative users, product developers, other members of the product team, and usability experts in the process. It is defined by five characteristics (a sketch of the session flow follows the list):
1. Inclusion of representative users, product developers, and human factors professionals;
2. The application’s screens are presented in the same order as they would appear to the user;
3. All participants are asked to assume the role of the user;
4. Participants write down what actions they, as users, would take for each screen before the group discusses the screens; and
5. When discussing each screen, the representative users speak first.
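
Here is my own sketch (not from Bias [3]) of how the session ordering might look in code; the function, callbacks, and participant names are hypothetical stand-ins. The point is the ordering rules: everyone answers in writing before any discussion, and representative users speak before developers and experts.

def run_walkthrough(screens, users, developers, experts, get_answer, discuss):
    """Walk a group through screens in user-visible order (characteristic 2).
    get_answer and discuss are facilitator-supplied callbacks."""
    for screen in screens:
        everyone = users + developers + experts  # all act as "the user" (1, 3)
        # Written answers are collected before any discussion (4).
        answers = {p: get_answer(p, screen) for p in everyone}
        # Discussion of each screen opens with the representative users (5).
        for p in users + developers + experts:
            discuss(p, screen, answers)

# Stand-in callbacks just to make the sketch runnable:
run_walkthrough(
    screens=["login screen", "search results"],
    users=["user A", "user B"], developers=["developer"], experts=["HF expert"],
    get_answer=lambda p, s: f"{p}'s written action for {s}",
    discuss=lambda p, s, answers: print(f"{s}: {p} would: {answers[p]}"),
)
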


Bias [3] assessed the benefits and limitations of the approach.
On the positive side, this approach offers feedback from users even if the interface is not fully developed, enables rapid iteration of the design cycle, and—because the users are directly involved—can result in “on-the-fly” redesign.
On the negative side, the approach must be limited to representative rather than comprehensive user paths through the interface, and users who, at a particular step, did not choose the path the group will follow must “reset” their interaction with the interface.

My comments: I think the pluralistic walkthrough has another strength: it uses a cross-functional team to evaluate usability.

The pluralistic walkthrough appears to be in active use for assessing usability. It is included in the Usability Professionals Association draft body of knowledge [49].
Available reports indicate that the pluralistic usability walkthrough is used in industry. While some human factors experts continue to conduct usability walkthroughs that do not combine stakeholder perspectives (e.g., [7], [50], [44]), it seems likely that use of the pluralistic usability walkthrough is widespread but that teams do not refer to it as such in published reports.

FORMAL USABILITY INSPECTIONS

Formal usability inspection is a review by the interface designer and his or her peers of users’ potential task performance [22].
Like the pluralistic usability walkthrough, this involves stepping through the user's task. However, because the reviewers are human factors experts, the review can be quicker, more thorough, and more technical than in the pluralistic walkthrough. The goal is to identify the maximum number of interface defects as efficiently as possible. The review process combines task performance models and heuristics, draws on a variety of human-factors expertise, and situates defect detection within the framework of the software development lifecycle.
Like the cognitive walkthrough, formal usability inspections require definitions of user profiles and task scenarios. And, like the cognitive walkthrough, the reviewers use a cognitive model of task performance, which can be extended with a checklist of cognitive steps similar to those invoked by Norman [36] to bridge the “gulf of execution.”

Hewlett-Packard used this method for at least two years before 1995. The inspection team included design engineers, usability engineers, customer support engineers, and, at times, customers. The team inspected fourteen products, finding an average of 76 usability concerns per product, of which an average of 74 percent per product were fixed. While no formal evaluation of the results was done, the engineers proved able to detect many of the usability concerns, enjoyed using the method, and increased their awareness of user needs [15].

Digital Equipment Corporation also conducted a version of formal usability inspections from 1994 to 1995 on ten products. They found an average of 66 usability problems per product and fixed an average of 52 problems per product. Finding even small usability problems proved to be an asset, especially when a number of these problems were easily fixed. As more problems were fixed, the perceived quality of the product improved as well, even if most of these fixes were small [45].
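
As a side note, the arithmetic behind figures like these is simple per-product bookkeeping. Here is a tiny sketch of my own showing how such averages are computed; the product names and counts are made up for illustration, not HP's or DEC's data.

# Per-product tallies of usability concerns found and fixed (made-up counts).
products = {
    "product A": {"found": 80, "fixed": 60},
    "product B": {"found": 72, "fixed": 53},
}

avg_found = sum(p["found"] for p in products.values()) / len(products)
avg_pct_fixed = 100 * sum(p["fixed"] / p["found"]
                          for p in products.values()) / len(products)
print(f"average concerns found per product: {avg_found:.0f}")      # 76
print(f"average percent fixed per product:  {avg_pct_fixed:.0f}%")  # 74
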

Since then, it appears that little research has been conducted on formal usability inspections. The approach now tends to be grouped into the overall class of inspection methods and is overshadowed by the better-known heuristic evaluation when comparisons between inspection and empirical methods are conducted. As a method, formal usability inspection gains speed at the cost of losing the multiple stakeholder perspectives of the pluralistic walkthrough, and its cognitive model can be seen as less comprehensive than that of the cognitive walkthrough.

CONCLUSION

Both empirical usability testing and usability inspection methods appear to be in wide use, with developers choosing the most appropriate method for their purposes and their context.
For example, half of the ten intranets winning a 2005 competition used heuristic evaluation [34]. The same report indicated that empirical usability testing was used by 80 percent of the winning intranets.

The cognitive walkthrough appears to be in continued use, although reports of use are not as frequent.
The pluralistic usability walkthrough remains in the repertoire of usability experts, although some continue to conduct user-only usability walkthroughs.
And formal usability inspection, although shown to be an effective approach for identifying usability problems, appears to be used less now than in the mid-1990s.

Many have claimed that usability inspection methods make for faster and more cost-efficient evaluation of the usability of an interface than empirical user testing.
But while usability inspection methods do identify a number of usability problems faster and more cost-efficiently, the best-performing evaluator and method still found only 44 percent of the usability problems found in a laboratory setting [9].
While the cognitive walkthrough is useful for predicting problems on a given task and heuristic evaluation is useful for predicting problems in the interface, empirical testing provides information across the whole interface and is the benchmark against which all other methods are measured [9].
Indeed, Jeffries et al. [19] noted that evaluators of usability methods may have rated problems found through empirical usability testing as, on average, more severe precisely because the problems were identified empirically rather than analytically. While inspection methods need expert evaluators to be effective, their strengths are that they can be applied in the early stages of the development cycle and provide a forum in which changes to the interface can be discussed.

The research on comparisons of usability assessment methods suggests several lessons for practitioners.
First, while “faster, cheaper” methods such as heuristic evaluation and the pluralistic usability walkthrough can be useful for rapid iteration early in the design cycle, inspection methods cannot fully substitute for the empirical user testing needed before releasing an interface or Web site to the public.
Second, empirical methods can also be used early in the development process, via “low-tech” versions of interfaces.
Third, developers often combine multiple inspection methods, such as heuristic evaluation and the cognitive walkthrough, in the same project to obtain better coverage of usability issues.
And fourth, adding multiple perspectives—along dimensions such as the range of stakeholders or kinds of usability problems—appears to improve the effectiveness of inspection methods.

It remains an open issue why usability professionals, in practice, rely on single-perspective methods, typically involving users or experts but not both. The evidence from reports of recent uses of heuristic evaluation suggests that many usability specialists are missing the benefits of the pluralistic walkthrough and perspective-based evaluation.

At a deeper level, though, a new direction for research should complement these defect-tracking and learning approaches by seeking to understand the root causes of usability problems. The ideal solution would be to know the reasons for usability problems, so that designers can minimize the effort spent on usability inspection and testing.

My Comments: For my PhD research, should I choose only one UEM (usability evaluation method), or should I use a hybrid/combination of UEMs?

References that I may want to read further in future:
[2] Andreasen, M. S., Nielsen, H., Schrøder, S., and Stage, J. (2007). What happened to remote usability testing? An empirical study of three methods, Proceedings of the Conference on Human Factors in Computing Systems (CHI 2007), San Jose, CA, April 28-May 3, 2007, 1405-1414.
[3] Bias, R. G. (1994). The pluralistic usability walkthrough: coordinated empathies. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 63-76.
[4] Blackmon, M., Polson, P., Kitajima, M., and Lewis, C. (2002). Cognitive walkthrough for the Web, Proceedings of the Conference on Human Factors in Computing Systems (CHI 2002), Minneapolis, MN, April 20-25, 2002, 463-470.
[6] Bradford, J. (1994). Evaluating high-level design: synergistic use of inspection and usability methods for evaluating early software designs. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 235-253.
[8] Brooks, P. (1994). Adding value to usability testing. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 255-271.
[9] Desurvire, H. (1994). Faster, cheaper!! Are usability inspection methods as effective as empirical testing? In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 173-202.
[16] Gunn, C. (1995). An example of formal usability inspections in practice at Hewlett-Packard Company, Proceedings of the Conference on Human Factors in Computing Systems (CHI 95), Denver, CO, May 7-11, 1995, 103-104.
[17] Hovater, J., Krot, M., Kiskis, D. L., Holland, H., and Altman, M. (2002). Usability testing of the Virtual Data Center, Workshop on Usability of Digital Libraries, Second ACM-IEEE Joint Conference on Digital Libraries, Portland, OR, July 14-18, 2002, available at http://www.uclic.ucl.ac.uk/annb/DLUsability/Hovater7.pdf, accessed May 26, 2007.
[18] Jeffries, R. (1994). Usability problem reports: helping evaluators communicate effectively with developers. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 273-294.
[19] Jeffries, R., Miller, J., Wharton, C., and Uyeda, K. (1991). User interface evaluation in the real world: a comparison of four techniques, Proceedings of the Conference on Human Factors in Computing Systems (CHI 91), New Orleans, LA, April 27-May 2, 1991, 119-124.
[20] Jeffries, R., and Desurvire, H. (1992). Usability testing vs. heuristic evaluation: was there a contest? ACM SIGCHI Bulletin 24(4), 39-41.
[21] John, B., and Packer, H. (1995). Learning and using the cognitive walkthrough method: A case study approach, Proceedings of the Conference on Human Factors in Computing Systems (CHI 95), Denver, CO, May 7-11, 1995, 429-436.
[22] Kahn, M., and Prail, A. (1994). Formal usability inspections, in Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 141-171.
[23] Karat, C.-M. (1994). A comparison of user interface evaluation methods, in Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 203-233.
[24] Karat, C.-M., Campbell, R., and Fiegel, T. (1992). Comparison of empirical testing and walkthrough methods in user interface evaluation, Proceedings of the Conference on Human Factors in Computing Systems (CHI 92), Monterey, CA, May 3-7, 1992, 397-404.
[25] Lewis, C., and Polson, P. (1991). Cognitive walkthroughs: A method for theory-based evaluation of user interfaces (tutorial), Proceedings of the Conference on Human Factors in Computing Systems (CHI 91), New Orleans, LA, April 27-May 2, 1991.
[26] Mack, R. and Montaniz, F. (1994). Observing, predicting, and analyzing usability problems. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 295-339.
[27] Mack, R., and Nielsen, J. (1993). Usability inspection methods: Report on a workshop held at CHI'92, Monterey, CA, ACM SIGCHI Bulletin 25(1), 28-33.
[29] Muller, M., Dayton, T., and Root, R. (1993). Comparing studies that compare usability assessment methods: an unsuccessful search for stable criteria, INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems, Amsterdam, April 24-29, 1993, 185-186.
[31] Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces, Proceedings of the Conference on Human Factors in Computing Systems (CHI 90), Seattle, WA, April 1-5, 1990, 249-256.
[32] Nielsen, J. (1992). Finding usability problems through heuristic evaluation, Proceedings of the Conference on Human Factors in Computing Systems (CHI 92), Monterey, CA, May 3-7, 1992, 373-380.
[33] Nielsen, J. and Mack, R. (eds.) (1994), Usability inspection methods, John Wiley & Sons, Inc., New York.
[34] Nielsen, J. (2005). Ten best intranets of 2005, Jakob Nielsen's Alertbox, February 28, 2005, available at http://www.useit.com/alertbox/20050228.html, accessed May 26, 2007.
[35] Nielsen, J., and Phillips, V. (1993). Estimating the relative usability of two interfaces: heuristic, formal, and empirical methods compared, Proceedings of the Conference on Human Factors in Computing Systems (CHI 93), Amsterdam, April 24-29, 1993, 214-221.
[37] Novick, D. and Chater, M. (1999). Evaluating the design of human-machine cooperation: The cognitive walkthrough for operating procedures, Proceedings of the Conference on Cognitive Science Approaches to Process Control (CSAPC 99), Villeneuve d'Ascq, FR, September 21-24, 1999.
[38] Novick, D. (2000). Testing documentation with “low-tech” simulation, Proceedings of IPCC/SIGDOC 2000, Cambridge, MA, September 24-27, 2000, 55-68.
[41] Rieman, J., Franzke, M., and Redmiles, D. (1995). Usability evaluation with the cognitive walkthrough, Proceedings of the Conference on Human Factors in Computing Systems (CHI 95), Denver, CO, May 7-11, 1995, 387-388.
[46] Spencer, R. (2000). The streamlined cognitive walkthrough method, Proceedings of the Conference on Human Factors in Computing Systems (CHI 2000), The Hague, The Netherlands, April 1-6, 2000, 353-359.
[47] Tang, Z., Johnson, T., Tindall, R., and Zhang, J. (2006). Applying heuristic evaluation to improve the usability of a telemedicine system, Telemedicine Journal and E-Health 12(1), 24-34.
[49] Usability Professionals Association (undated). Methods: Pluralistic usability walkthrough, available at http://www.usabilitybok.org/methods/p2049, accessed May 26, 2007.
[50] User-Centered Web Effective Business Solutions (2007). Heuristic usability evaluation, available at http://www.ucwebs.com/usability/web-site-usability.htm, accessed May 27, 2007.
[51] Wharton, C., Bradford, J., Jeffries, R., and Franzke, M. (1992). Applying cognitive walkthroughs to more complex user interfaces: Experiences, issues, and recommendations, Proceedings of the Conference on Human Factors in Computing Systems (CHI 92), Monterey, CA, May 3-7, 1992, 381-388.
[52] Wharton, C. and Lewis, C. (1994). The role of psychological theory in usability inspection methods. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 341-350.
[53] Wharton, C., Rieman, J., Lewis, C., and Polson, P. (1994). The Cognitive Walkthrough Method: A Practitioner's Guide. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 105-140.
[55] Zhang, Z., Basili, V., and Shneiderman, B. (1998). An empirical study of perspective-based usability inspection, Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, Santa Monica, CA, October 5-9, 1998, 1346-1350.

My Comments: This is a very rich resource of reference materials. GREAT!! Extremely good for my literature review on the subject of "Usability Evaluation Methods." I should get hold of Nielsen and Mack's book, "Usability Inspection Methods."
