Methods for quantitative usability requirements: a case study on the development of the user interface of a mobile phone.
Timo Jokela · Jussi Koivumaa · Jani Pirkola · Petri Salminen · Niina Kantola
Received: 3 February 2005 / Accepted: 4 May 2005 / Published online: 8 October 2005
© Springer-Verlag London Limited 2005
Pers Ubiquit Comput (2006) 10: 345–355
DOI 10.1007/s00779-005-0050-7
T. Jokela (corresponding author) · N. Kantola, Oulu University, P.O. Box 3000, Oulu, Finland. E-mail: timo.jokela@oulu.fi, niina.kantola@oulu.fi
J. Koivumaa · J. Pirkola, Nokia, P.O. Box 50, 90571 Oulu, Finland. E-mail: jussi.koivumaa@nokia.com, jani.pirkola@nokia.com
P. Salminen, ValueFirst, Luuvantie 28, 02620 Espoo, Finland. E-mail: petri.salminen@valuefirst.fi
Abstract
Quantitative usability requirements are a critical but challenging, and hence often neglected, aspect of a usability engineering process. A case study is described in which quantitative usability requirements played a key role in the development of a new user interface for a mobile phone. Within the practical constraints of the project, existing methods for determining usability requirements, and for evaluating the extent to which these are met, could not be applied as such; tailored methods therefore had to be developed. These methods and their applications are discussed.
Mobile phones have become a natural part of our everyday lives. Their user friendliness, termed usability, is increasingly in demand. Usability brings many benefits: users are able and willing to use the various features of the phone and the services supplied by the operators, the need for customer support decreases, and, above all, user satisfaction increases.
At the same time, design is becoming increasingly challenging as the number of functions grows and the size of the phones shrinks. Another challenge is the ever-shortening life of the phones, which leaves less time for development.
The practice of designing usable products is called usability engineering. The book User-centered system design by Donald Norman and Stephen Draper [1] is a pioneering work. John Gould and his colleagues also worked with usability methodologies in the 1980s [2]. Dennis Wixon and Karen Holtzblatt at Digital Equipment developed Contextual Inquiry and later on Contextual Design [3]; Carroll and Mack [4] were also early contributors. Later, various UCD methodologies were proposed, e.g. in [5–10]. The standard ISO 13407 [11] is a widely used general reference for usability engineering.
The first activity is to identify users. Context of use analysis is about getting to know the users: what their goals are in relation to the product under development, what kinds of tasks they perform, and in which contexts. This user information is the basis for usability requirements, where the target levels for the usability of the product under development are determined. A new product should lead to more efficient user tasks...
An essential part of the usability life-cycle is (quantitative) usability requirements, i.e. measurable usability targets for the interaction design [13–17]. As stated in [13]: "Without measurable usability specifications, there is no way to determine the usability needs of a product, or to measure whether or not the finished product fulfils those needs. If we cannot measure usability, we cannot have usability engineering."
In this article, our aim is to meet the research challenge posed by Wixon: we present the methods that we used in a real development context of a mobile phone UI, for the determination of quantitative usability requirements and the evaluation of the compliance with them.
Methods for quantitative usability requirements
There are two main activities related to quantitative usability requirements.
During the early phases of a development project, the usability requirements are determined (a in Fig. 1), and during the late phases, the usability of the product is evaluated against the requirements (b in Fig. 1).
Determining usability requirements can be further split into two activities: defining the usability attributes, and setting target values for the attributes.
In the evaluation, a measuring instrument is required.
Determining usability attributes
The main reference for usability is probably the definition in ISO 9241-11: "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" [19]. In brief, the definition means that usability requirements are based on measures of users performing tasks with the product to be developed.
– An example of an effectiveness measure is the percentage of users who can successfully complete a task.
– Efficiency can be measured by the mean time needed to successfully complete a task.
– User satisfaction can be measured with a questionnaire.
Usability requirements may include separate definitions of the target level (e.g. 90% of users can successfully complete a task) and the minimum acceptable level (e.g. 80% of users can successfully complete a task) [20].
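To make these measures concrete, here is a minimal sketch (my own illustration, not from the paper) of computing effectiveness and efficiency from usability-test observations and checking them against a target and a minimum acceptable level; all names and numbers are invented:

```python
# Hypothetical sketch: computing effectiveness and efficiency measures
# from usability-test observations and checking them against a target
# and a minimum acceptable level. All data is illustrative.
from statistics import mean

# One record per test participant for a single task:
# (completed_successfully, time_in_seconds)
observations = [
    (True, 42.0), (True, 55.5), (False, 90.0),
    (True, 38.2), (True, 61.3), (True, 47.9),
]

completed_times = [t for ok, t in observations if ok]

effectiveness = len(completed_times) / len(observations)  # completion rate
efficiency = mean(completed_times)                        # mean time of successful runs

TARGET_COMPLETION = 0.90   # e.g. "90% of users complete the task"
MINIMUM_COMPLETION = 0.80  # minimum acceptable level

if effectiveness >= TARGET_COMPLETION:
    verdict = "target level met"
elif effectiveness >= MINIMUM_COMPLETION:
    verdict = "minimum acceptable level met"
else:
    verdict = "requirement not met"

print(f"effectiveness={effectiveness:.0%}, mean time={efficiency:.1f}s -> {verdict}")
```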
Whiteside et al. [21] suggest that quantitative usability requirements be phrased at four levels: worst, planned, best and current.
Questionnaires measuring user satisfaction provide quantitative, though subjective, usability metrics for the related usability attributes.
Methods for determining usability targets
Possibly one of the most detailed guidelines for determining usability requirements is a six-step process by Wixon and Wilson [14].
In their process, relevant usability attributes are determined based on user profile and task analysis. Then the measuring instruments and measures are decided upon, and a performance level is set for each attribute. They agree with Whiteside et al. [21] that four performance levels can be set for each attribute, and that determining the current level lays the foundation for setting the other levels.
If the product is new, measurements for the current level can be obtained, for example, from an existing manual system. Like Hix and Hartson [6], Wixon and Wilson [14] suggest that in the beginning two to three clear goals focusing on important and frequent tasks are enough; later, as the development teams come to accept the value of usability goals, more complex specifications can be generated.
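A small sketch of how such a four-level goal might be recorded and checked, assuming the worst/planned/best/current scheme of Whiteside et al. [21]; the data type and values are my own invention:

```python
# Hypothetical sketch of a four-level usability goal record
# (worst, planned, best, current), following Whiteside et al.'s scheme.
# All values are invented for illustration.
from dataclasses import dataclass

@dataclass
class UsabilityGoal:
    attribute: str   # e.g. "task completion rate for sending an SMS"
    unit: str
    worst: float     # lowest acceptable level
    planned: float   # target level
    best: float      # state-of-the-art / ideal level
    current: float   # measured level of the current (or manual) system

    def status(self, measured: float) -> str:
        """Classify a new measurement against the goal levels."""
        if measured >= self.best:
            return "best level reached"
        if measured >= self.planned:
            return "planned level reached"
        if measured >= self.worst:
            return "above worst acceptable level"
        return "below worst acceptable level"

goal = UsabilityGoal("SMS sending completion rate", "%",
                     worst=70, planned=90, best=98, current=75)
print(goal.status(92))  # -> "planned level reached"
```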
Gould and Lewis [26] state that developing behavioural goals must cover at least three points:
– First, a description of the intended users must be given, and the experimental participants agreed upon.
– Second, the tasks to be performed, and the circumstances under which they are to be performed, must be specified.
– Third, the measurements of interest, such as learning time, must be given, along with the criterion values to be achieved for each.
According to Nielsen [5], usability is associated with five attributes: learnability, efficiency, memorability, errors and satisfaction. In usability goal setting, these attributes must be prioritised based on user and task analysis, and then operationalised and expressed in measurable ways.
Mayhew [7] introduces a nine-step procedure for setting usability goals. In her procedure, qualitative usability goals are first identified and prioritised. Those qualitative goals that are of relatively high priority and seem easily quantifiable should then be formulated as quantified goals.
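As a hypothetical illustration of this operationalisation step (mine, not Nielsen's or Mayhew's), a prioritised qualitative goal might be turned into a quantified one roughly like this:

```python
# Hypothetical mapping from prioritised qualitative goals to quantified
# ones. The attribute names follow Nielsen's list; the measures and
# numbers are invented for illustration.
qualitative_goals = {
    "learnability": "new users can send an SMS without help",
    "efficiency":   "experienced users send an SMS quickly",
}

quantified_goals = {
    "learnability": "90% of first-time users send an SMS within 3 minutes, unaided",
    "efficiency":   "experienced users send a 20-character SMS in under 30 s on average",
}
```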
For example the MUSiC methodology [28] aims to provide a comprehensive approach to the measurement of usability. It includes methods for specifying and measuring usability during design. One of the methods is the performance measurement method, which aims to provide a means of measuring two of the ISO 9241-11 standard usability components, i.e. effectiveness and efficiency.
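A rough reconstruction (mine) of the kind of calculation the MUSiC performance measurement method involves, with task effectiveness derived from the quantity and quality of task output as described in [28, 29]; treat the formulas and numbers as illustrative rather than definitive:

```python
# Illustrative reconstruction of MUSiC-style performance measures [28, 29].
# Quantity = proportion of the task goals attempted, Quality = accuracy of
# the output; both in percent. The numbers below are invented.

def task_effectiveness(quantity_pct: float, quality_pct: float) -> float:
    """Task effectiveness in percent: (quantity * quality) / 100."""
    return quantity_pct * quality_pct / 100.0

def user_efficiency(effectiveness_pct: float, task_time_s: float) -> float:
    """Effectiveness achieved per unit of task time."""
    return effectiveness_pct / task_time_s

te = task_effectiveness(quantity_pct=100, quality_pct=85)    # 85.0 %
ue = user_efficiency(te, task_time_s=120)                    # ~0.71 %/s
expert = user_efficiency(task_effectiveness(100, 100), 60)   # ~1.67 %/s
relative_user_efficiency = 100 * ue / expert                 # ~42.5 %
print(f"TE={te:.1f}%, RUE={relative_user_efficiency:.0f}%")
```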
My Comments: I think the "quantitative usability" concept would be useful for my PhD research. Particularly relevant would be the measurement of effectiveness and efficiency. Satisfaction would be more complex to measure.
Methods for quantitative evaluation of usability
Whether the quantitative requirements have been met can be determined through a usability test.
Wixon and Wilson [14] define "test" broadly, as a term that encompasses any method for assessing whether goals have been achieved, such as a formal laboratory test or the collection of satisfaction data through a survey.
When evaluating usability, ISO 9241-11 [19] notes that it is important that the selected context be representative. Evaluations can be done in the field, in a real work situation, or in a laboratory setting in which the relevant aspects of the context of use are re-created in a representative and controlled way. A method that has representative users perform typical, representative tasks is generally called usability testing.
Tasks that are done in usability testing provide an objective metric for the related usability attribute. Hix and Hartson [6] indicate that tasks must be very specifically worded in order to be the same for each participant. Tasks must also be specific, so that participants do not get sidetracked into irrelevant details during testing.
Wixon and Wilson [14] suggest that during the test, the tester should minimize interaction with the participants. Butler [33] describes his approach where "seven test users were given an introductory level problem to solve, then left alone with a copy of the user's guide and a 3270 terminal logged onto the system."
User preference questionnaires provide a subjective metric for the related usability attribute such as ease of use or usefulness. Questionnaires are commonly built using Likert and semantic differential scales and are intended for use in various circumstances [32].
There are a number of questionnaires available for quantitative usability evaluation, like SUMI [22], QUIS [24] and SUS [23]. Karat [34] states that questionnaires provide an easy and inexpensive method for obtaining measurement data on a system.
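As an example of such a questionnaire metric, here is a small sketch of the standard SUS scoring rule (ten statements answered on a 1–5 scale, with summed item contributions scaled to 0–100); the responses below are invented:

```python
# Sketch: computing a System Usability Scale (SUS) score from one
# participant's responses. SUS has 10 statements on a 1-5 scale;
# odd-numbered (positively worded) items contribute (response - 1) and
# even-numbered (negatively worded) items contribute (5 - response);
# the sum is multiplied by 2.5 to give a 0-100 score.

def sus_score(responses: list[int]) -> float:
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd item)
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 3]))  # -> 80.0
```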
Usability can also be quantitatively evaluated with theory-based approaches such as GOMS and the keystroke-level model (KLM) [35]. With GOMS, for example, total task times can be predicted by associating a time with each operator.
According to John [36], GOMS can also be used to predict how long it will take to learn a certain task. With these quantitative predictions, GOMS can be applied, for example, in a comparison between two systems.
The GOMS model also has its limitations. Preece et al. [37] suggest that GOMS can only really model computer-based tasks that involve a small set of highly routine data-entry type inputs. The model is not appropriate if errors occur.
KLM is a simplified version of GOMS.
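To illustrate, here is a minimal KLM-style prediction sketch using the commonly cited textbook operator times; the operator sequence is my own invented example:

```python
# Hypothetical KLM sketch: predicting task time by summing operator times.
# Operator durations are the commonly cited textbook estimates (seconds);
# the operator sequence below is invented for illustration.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point with a pointing device
    "H": 0.40,  # home hands between keyboard and device
    "M": 1.35,  # mental preparation
}

def predict_time(operators: str, response_time: float = 0.0) -> float:
    """Sum operator times plus any system response time R."""
    return sum(KLM_TIMES[op] for op in operators) + response_time

# e.g. think, home to keypad, type a four-digit number: M H K K K K
print(f"{predict_time('MHKKKK'):.2f} s")  # -> 2.87 s
```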
My Comments: Read this article in detail... it will enhance my understanding of how they did the quantitative usability work.
Discussion
Determining appropriate usability attributes and setting target values for them is a challenging task. Usability requirements should be defined so that they depict a 'usable product' as well as possible. ... It should be understood, however, that the appropriate set of attributes is heavily dependent on the product or application.
The determination of quantitative usability requirements and their evaluation should be distinguished. We propose that it is not necessary to know exactly how the requirements will be measured at the time they are determined. An important role of usability requirements is to give direction and vision to the user interface design.
This experience is shared by Wixon and Wilson [14]: "even if you do not test at all, designing with a clearly stated usability goal is preferable to designing toward a generic goal of 'easy and intuitive'".
We encourage innovativeness in usability methods. It is seldom possible to use usability methods in their ideal form. This article presents our innovations on methods for determining and evaluating usability requirements. ... The project context and the business case always have a major impact on the usability attributes.
Conclusion
We described a case study from a development project where the use of quantitative usability requirements was found useful.
We used methods that do not exactly follow the existing, well-known usability methods. We believe that this is not a unique case: most industrial development projects have specific constraints and limitations, and an ideal use of usability methods is generally not feasible. While we strongly recommend the use of measurable usability requirements, we do not propose our methods as a general solution. Clearly, each project has its specific features, and the usability methods should be selected and tailored based on the specific context of the project.
References that I may want to read further in future:
7. Mayhew DJ (1999) The usability engineering lifecycle. Morgan Kaufmann, San Francisco
14. Wixon D, Wilson C (1997) The usability engineering framework for product design and evaluation. In: Helander M, Landauer T, Prabhu P (eds) Handbook of human–computer interaction. Elsevier, Amsterdam, pp 653–688
18. Wixon D (2003) Evaluating usability methods. Why the current literature fails the practitioner. Interactions 10(4):28–34
20. NIST (2004) Proposed industry format for usability requirements. Draft version 0.62
22. Kirakowski J, Corbett M (1993) SUMI: The software usability measurement inventory. Br J Educ Technol 24(3):210–212
23. Brooke J (1986) SUS: a "quick and dirty" usability scale. Digital Equipment Co. Ltd
24. Chin JP, Diehl VA, Norman KL (1988) Development of an instrument measuring user satisfaction of the human–computer interface. In: Proceedings of SIGCHI ‘88. New York
27. Dumas JS, Redish JC (1993) A practical guide to usability testing. Ablex Publishing Corporation, Norwood
28. Bevan N, Macleod M (1994) Usability measurement in context. Behav Inf Technol 13(1,2):132–145
29. Macleod M, Bowden R, Bevan N, Curson I (1997) The MUSiC performance measurement method. Behav Inf Technol 16(4,5):279–293
30. Maguire M (1998) RESPECT user-centred requirements handbook. Version 3.3. HUSAT Research Institute (now the Ergonomics and Safety Research Institute, ESRI), Loughborough University
31. Bevan N, Claridge N, Athousaki M, Maguire M, Catarci T, Matarazzo G, Raiss G (2002) Guide to specifying and evaluating usability as part of a contract, version 1.0. PRUE project. Serco Usability Services, London, p 47
32. ANSI (2001) Common industry format for usability test reports. NCITS 354–2001