Wednesday, September 2, 2009

Sep 2 - Furniss et al, Usability Evaluation Methods in Practice: Understanding the Context in which they are Embedded.

Usability Evaluation Methods in Practice: Understanding the Context in which they are Embedded.
Dominic Furniss. UCLIC, Remax House, 31/32 Alfred Place, London, WC1E 7DP. d.furniss@ucl.ac.uk
Ann Blandford. UCLIC, Remax House, 31/32 Alfred Place, London, WC1E 7DP. a.blandford@ucl.ac.uk
Paul Curzon. Queen Mary, University of London, Dept. of Computer Science, Mile End, London, E1 4NS. pc@dcs.qmul.ac.uk

Proceedings of the ECCE 2007 Conference, 28-31 August 2007, London, UK

ABSTRACT
Motivation – To address a knowledge gap in why usability evaluation methods (UEMs) are adopted and adapted in professional usability practice.


Usability evaluation methods (UEMs) play a key role in the practice of usability. They help practitioners identify usability problems which can then be addressed through design.
Criticism is levelled at academia both for the way in which it has valued UEMs and for the number of UEMs that are produced but never see the light of day ‘in the real world’.
The first criticism is elaborated by Wixon (2003), who argues that the current literature fails the usability practitioner because it evaluates UEMs in a quasi-scientific way, i.e. by the number of problems they find, which is inappropriate to practice.
The second criticism is supported by the work of Bellotti (1988) and O’Neill (1998), who report that HCI transfer from research to practice is difficult because practitioners do not seem to use the methods that research develops. To illustrate this point, O’Neill (1998, p. 65) refers to “the largely undisturbed arsenal of system development methods” that have been developed by research.
Taken together, these criticisms suggest a need to better understand UEMs from the practitioners’ perspective, so that UEM design and research can improve.

OBJECTIVES
Objectives of the research include:
1) to understand why practitioners use some UEMs and not others;
2) to understand how UEMs are used and adapted; and
3) to understand important constraints and performance shaping factors in different usability contexts.

METHODOLOGY
Grounded theory (Strauss & Corbin, 1998) was selected because the study aims to investigate practice from practitioners’ perspectives in an exploratory manner; data are collected via interviews.
Grounded theory is a qualitative technique which “aims to develop theory from data rather than to gather data in order to test a theory or hypothesis” (Goede & De Villiers, 2003).
The grounded theory process involves gathering data via interviews with practitioners; transcribing these interviews verbatim; breaking the data down into components by coding (open coding); relating these codes to one another to reveal patterns (axial coding); and finally forming and describing general patterns in the data (selective coding). These patterns constitute a theory that explains the data.
The grounded theory is then linked to relevant theoretical frameworks and concepts to provide leverage for its explanation; in this case, distributed cognition and resilience engineering.
Two contrasting contexts of usability practice were selected to sample from: website design and safety-critical system development.

RESULTS AND DISCUSSION

Furniss et al. (2007) highlight five resilience engineering (RE) themes which link to usability consultancy practice:
1. Efficiency-thoroughness trade-offs mean that clients restrict the UEMs that practitioners would otherwise like to use, and that practitioners cannot spend long on the projects they do take on.
2. There is a loose coupling between the labelling and the practice of UEMs, e.g. Heuristic Evaluations are done in different ways. This loose coupling allows novices to cope more easily by protecting them from complexity.
3. UEMs are performed adaptably and flexibly in light of internal and external factors, e.g. practitioner experience, company capability, time, budget, and project need.
4. Survivability should be considered in terms of both practice and product. Usability concerns should also be balanced with safety and business concerns; a focus on any one of these could be to the detriment of the system overall.
5. Local rationality should be considered in terms of building an account of how practitioners make the decisions they do within their own context. Wixon’s (2003) criticism essentially comes from a lack of consideration of the practitioner’s local rationality.


References that I may want to read further in future:
Furniss, D., Blandford, A. & Curzon, P. (2007). Resilience in Usability Consultancy Practice: The Case for a Positive Resonance Model. To appear in Resilience Engineering Workshop, 25-27 June 2007.
Furniss, D., Blandford, A. & Curzon, P. (forthcoming). Usability Work in Professional Website Design: Insights from Practitioners' Perspectives. In Law, E., Hvannberg, E., and Cockton, G. (Eds.), Maturing Usability: Quality in Software, Interaction and Value. Springer.
Goede, R. and De Villiers, C. (2003) The Applicability of Grounded Theory as Research Methodology in studies on the use of Methodologies in IS Practices. In Proceedings of SAICSIT 2003, Pages 208-217.
Nørgaard, M. & Hornbæk, K. (2006). What Do Usability Evaluators Do in Practice? An Explorative Study of Think-Aloud Testing. In ACM Conference on Designing Interactive Systems.
Redish, J., Bias, R. G., Bailey, R., Molich, R., Dumas, J., and Spool, J. M. (2002). Usability in practice: formative usability evaluations - evolution and revolution. In Proc. of CHI '02, 885-890.
Strauss, A. & Corbin, J. (1998). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (2nd ed.). London: Sage Publications.
Wixon, D. (2003). Evaluating Usability Methods: Why the current literature fails the practitioner. Interactions, Vol. 10, Issue 4, pp. 28-34.
