Monday, September 7, 2009

Sep 8 - about Usability Evaluation Method

These are my literature review notes pertaining to Usability Evaluation Methods.

----------------
A Tool to Support Usability Inspection
Carmelo Ardito, Rosa Lanzilotti, Paolo Buono, Antonio Piccinno. Dipartimento di Informatica, Università di Bari, Italy
{ardito, lanzilotti, buono, piccinno}@di.uniba.it

AVI '06, May 23-26, 2006, Venezia, Italy.


Different methods can be used for evaluating the usability of interactive systems. Among them, the most commonly adopted are inspection methods, which involve expert evaluators only, who inspect the application and provide judgments based on their knowledge and expertise [8]. Their main advantage is cost saving: they “save users” and require neither special equipment nor lab facilities [4] [8]. In addition, experts can detect a wide range of problems and possible faults of a complex system in a limited amount of time.
Heuristic evaluation is a popular inspection method in which a few experts inspect the system and evaluate the interface against a list of recognized usability principles: the heuristics. Heuristic evaluation is considered a “discount usability” method; it has been shown to have a high benefit-cost ratio [7]. It is especially valuable when time and resources are short, because skilled evaluators, without the involvement of users, can produce high-quality results in a limited amount of time [5].

It is recommended to have more than one evaluator inspect an application. Each member of the inspection team works individually and produces a problem report. Afterwards, all inspectors in the team meet to discuss the discovered problems and produce a single, consolidated list of problems.


AT (Abstract Task) Inspection

In order to overcome the drawbacks of heuristic evaluation and with the objective of performing a more systematic evaluation, a different inspection technique has been proposed as part of a methodology for usability evaluation called SUE (Systematic Usability Evaluation); as a consequence, this technique has been called SUE inspection [6].
It exploits a set of evaluation patterns, called Abstract Tasks (ATs), which guide the inspector’s activities, precisely describing which objects of the application to look for and which actions to perform during the inspection in order to analyze such objects.

In this way, even less experienced evaluators are able to achieve more complete and precise results. ATs are formulated by means of a template providing a consistent format [6].
During the inspection, evaluators analyze the application by using the available ATs and producing a report in which the discovered problems are described.
A controlled experiment, described in [2], has shown the effectiveness and efficiency of this inspection compared with heuristic evaluation.
The use of ATs is the key novelty of this inspection. Moreover, since it can be used independently of the overall SUE methodology, it is more appropriate to call it AT inspection.
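
To fix the idea of an AT for myself, here is one plausible way to represent such an evaluation pattern and the inspector's report in Python. This is only my own sketch; the field names and example content are assumptions, not the actual template from [6].

from dataclasses import dataclass

@dataclass
class AbstractTask:
    code: str                # short identifier used to cite the AT in a report (hypothetical)
    title: str               # what the evaluation pattern is about
    objects_to_inspect: str  # which objects of the application to look for
    actions: list[str]       # which actions to perform in order to analyse those objects

@dataclass
class ProblemReport:
    at_code: str       # the AT that guided the discovery
    description: str   # the usability problem found
    location: str      # where in the application it was observed

# Invented example content:
at = AbstractTask(
    code="AT-NAV-01",
    title="Reachability of core content",
    objects_to_inspect="Navigation bar and section links",
    actions=["Follow every link in the navigation bar",
             "Check that each target page links back to the home page"],
)
print(ProblemReport(at.code, "No link back to the home page", "Course outline page"))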

-----------------
Towards a Framework for Usability Testing of Interactive e-Learning Applications in Cognitive Domains, Illustrated by a Case Study.
S.S. (Thabo) Masemola and M.R. (Ruth) de Villiers. University of South Africa
S.S. Masemola, School of Computing, University of South Africa, P O Box 392, UNISA, 0003, South Africa; masemss@unisa.ac.za.
M.R. de Villiers, School of Computing, University of South Africa, P O Box 392, UNISA, 0003, South Africa; dvillmr@unisa.ac.za.

Proceedings of SAICSIT 2006, Pages 187–197


What is Usability Testing?

Usability testing is a software evaluation technique that involves measuring the performance of typical end-users as they undertake a defined set of tasks on the system being investigated. It commenced in the early 1980s, as human factors professionals studied subjects using interfaces under real-world or controlled conditions and collected data on problems that arose (‘human factors’ is an early term for the human-computer interaction discipline). It has been shown to be an effective method that rapidly identifies problems and weaknesses, and is particularly used to improve the usability of products [Dumas, 2003; Dumas and Redish, 1999; Jeffries et al, 1991].
Since the early to mid-1990s, such testing has been empirically conducted in specialized controlled environments called usability laboratories, equipped with sophisticated monitoring and recording facilities for formal usability testing, supported by analytical software tools. It is an expensive technique. Participants, who are real end-users, interact with the product, performing specified representative tasks. Their actions can be rigorously monitored and recorded in various ways: on videotape, for subsequent review; by event logging, down to keystroke level; and on audio, to capture verbalization and expressions.
The data, in both quantitative and qualitative forms, is analysed and changes can be recommended. Typical usability metrics include the time taken to complete a task, degree of completion, number of errors, time lost by errors, time to recover from an error, number of subjects who successfully completed a task, and so on [Avouris, 2001; Dix, Finlay, Abowd & Beale, 2004; ISO 9241, 1997; Preece, Rogers & Sharp, 2003; Wesson, 2002].
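
To make the metrics above concrete for myself, here is a small Python sketch (my own illustration, not from the paper) of how they could be aggregated from test-session logs; the record fields and the example data are assumptions.

from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskSession:
    participant: str
    task_id: str
    completed: bool       # did the participant finish the task?
    duration_s: float     # time on task, in seconds
    errors: int           # number of errors observed
    error_time_s: float   # time lost to errors, in seconds

def summarise(sessions, task_id):
    """Aggregate per-task usability metrics across participants."""
    runs = [s for s in sessions if s.task_id == task_id]
    done = [s for s in runs if s.completed]
    return {
        "participants": len(runs),
        "completion_rate": len(done) / len(runs) if runs else 0.0,
        "mean_time_on_task_s": mean(s.duration_s for s in runs) if runs else 0.0,
        "mean_errors": mean(s.errors for s in runs) if runs else 0.0,
        "mean_time_lost_to_errors_s": mean(s.error_time_s for s in runs) if runs else 0.0,
    }

# Example with made-up data for one task:
logs = [
    TaskSession("P1", "T1", True, 95.0, 1, 12.0),
    TaskSession("P2", "T1", False, 180.0, 4, 55.0),
    TaskSession("P3", "T1", True, 110.0, 0, 0.0),
]
print(summarise(logs, "T1"))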

The primary targets of usability testing are the user interface and other interactive aspects. Such testing is used by academics for research and development, and also by usability practitioners in the corporate environment for rapid refinement of interfaces and analysis of system usability.

Usability Testing

Dumas [2003] lists six defining characteristics of usability tests, while a seventh and eighth are obtained from Dumas and Redish [1999]:
1. The focus is usability.
2. Participants are end users or potential end users.
3. There is an artifact to evaluate, which may be a product design, a system or a prototype.
4. The participants think aloud as they progress through tasks.
5. Data is recorded and analysed.
6. The results are communicated to appropriate audiences (often a corporate client).
7. Testing should cover only a few features of the product, incorporated in selected tasks.
8. Each participant should spend approximately an hour doing the stated tasks.

Our methodology and test plan are based on general methodologies for formal usability testing [Pretorius, Calitz & Van Greunen, 2005; Rubin, 1994; Van Greunen & Wesson, 2002], but with some distinguishing features, such as the emphasis on participants thinking aloud and the use of a best case for initial benchmarking... The broad methodology involves the following steps:
─ Set up objectives in line with research questions.
─ Determine the aspects to be measured and their metrics.
─ Formulate documents: Initial test plan, task list, information document for participants, checklist for administrator, and determine a means of investigating satisfaction.
─ Acquire representative participants.
─ Conduct a pilot test.
─ Refine the test plan, task list, and information document for the main usability test in the light of the pilot.
─ Conduct usability test.
─ Determine means of analysis and presentation that address the unique, as well as the usual, aspects.
─ Draw conclusions and make proposals for the way forward.

---------------
A Comparative Study of Two Usability Evaluation Methods Using a Web-Based E-Learning Application.
Samuel Ssemugabi. School of Information Technology, Walter Sisulu University, Private Bag 1421, East London, 5200, South Africa. +27 43 7085407 ssemugab@wsu.ac.za
Ruth de Villiers. School of Computing, University of South Africa, P O Box 392, Unisa, 0003, South Africa +27 12 429 6559 dvillmr@unisa.ac.za

SAICSIT 2007, 2 - 3 October 2007, Fish River Sun, Sunshine Coast, South Africa.


Usability evaluation is concerned with gathering information about the usability or potential usability of a system, in order to assess it or to improve its interface by identifying problems and suggesting improvements [35]. Various usability evaluation methods (UEMs) exist, e.g. analytical, expert heuristic evaluation, survey, observational, and experimental methods [6, 13, 35].
To evaluate the usability of a system and to determine usability problems, it is important to select an appropriate UEM/s [10, 37], taking cognisance of efficiency, time, cost-effectiveness, ease of application, and expertise of evaluators [13, 30].

Evaluation of e-learning should address aspects of pedagogy and learning from educational domains as well as HCI factors such as the effectiveness of interfaces and the quality of usability and interaction.

Heuristic evaluation (HE) is a usability inspection technique originated by Nielsen [26, 27], in which a small set of expert evaluators, guided by a set of usability principles known as heuristics, determine whether a system conforms to these and identify specific usability problems in the system. It is the most widely used UEM for computer system interfaces. It is described as fast, inexpensive, and easy to perform, and can result in major improvements to user interfaces [4, 5, 19, 24].

Usability features are frequently not considered in development, sometimes because instructors and courseware developers are not trained to do so or because they lack the technological skills [41]. However, a main reason for the oversight is that usability evaluation can be difficult, time consuming and expensive [20]. If, in addition to its attributes of low cost and relative simplicity, HE is shown to be effective, efficient, and sufficient to identify the problems that impede learners, it would qualify as an ideal method for evaluating web-based e-learning.

Research Design
The main components of the research design were:
1. Identification of the target application.
2. Generation of a set of criteria/heuristics suitable for usability evaluation of e-learning applications, in particular, WBL environments.
3. End-user evaluations by means of criterion-based learner surveys on the target application using questionnaires and a focus group interview.
4. A heuristic evaluation with a competent and complementary set of experts, using the synthesised set of criteria as heuristics, followed by severity rating on the consolidated set of problems.
5. Data analysis to answer the research question, investigating in particular how the usability problems identified by heuristic evaluation compare with those identified by end users in the learner-surveys.
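
As a note to myself on step 5, the comparison essentially boils down to the overlap between the two problem lists. A minimal Python sketch (my own, with hypothetical problem identifiers, not the authors' analysis):

def compare_problem_sets(he_problems, user_problems):
    """Overlap statistics between problems found by heuristic evaluation (HE)
    and problems reported by end users."""
    he, users = set(he_problems), set(user_problems)
    common = he & users
    union = he | users
    return {
        "found_by_both": sorted(common),
        "he_only": sorted(he - users),
        "users_only": sorted(users - he),
        # share of all known problems that the HE alone uncovered
        "he_coverage": len(he) / len(union) if union else 0.0,
        "overlap": len(common) / len(union) if union else 0.0,
    }

# Hypothetical problem identifiers:
print(compare_problem_sets({"P01", "P02", "P05", "P07"}, {"P02", "P03", "P05"}))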

---------------
Designing for learning or designing for fun? Setting usability guidelines for mobile educational games.
Siobhan Thomas and Gareth Schott. Institute of Education, University of London, 20 Bedford Way, London WC1H OAL, UK. E-mail: four@nucleus.com; s.thomas@ioe.ac.uk, g.schott@ioe.ac.uk
Maria Kambouri. National Research and Development Centre in Adult Literacy and Numeracy, Institute of Education, University of London, 20 Bedford Way, London WC1H OAL, UK. E-mail: m.kambouri@ioe.ac.uk

Learning With Mobile Devices: Research and Development. A book of papers, edited by Jill Attewell and Carol Savill-Smith. Learning and Skills Development Agency, Regent Arcade House, 19–25 Argyll Street, London W1F 7LS. pp. 173–181


Heuristic evaluation – traditionally, evaluation in which a small team of independent evaluators compare user interfaces with a set of usability guidelines, the ‘heuristics’ – has been recognised as an effective method for the formative evaluation of educational software (Quinn 1996; Albion 1999; Squires and Preece 1999).

Heuristic evaluation using six evaluators uncovers 75% of usability problems (Nielsen 1994) and is considered a cost-effective method of evaluation that yields reliable results for minimum investment (Quinn 1996).
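
Figures like the 75% above relate to the Nielsen-Landauer model, in which the expected share of problems found by n independent evaluators is 1 - (1 - L)^n, where L is the probability that a single evaluator spots a given problem. A quick Python sketch of that calculation; the value of L below is my own assumption for illustration, and reported values vary by study.

def share_found(n_evaluators, single_evaluator_rate):
    """Expected proportion of usability problems uncovered by n evaluators."""
    return 1.0 - (1.0 - single_evaluator_rate) ** n_evaluators

L = 0.21  # assumed per-evaluator detection rate (illustrative only)
for n in range(1, 9):
    print(f"{n} evaluator(s): {share_found(n, L):.0%} of problems expected")
# With L = 0.21, six evaluators come out at roughly 75%, in line with the figure cited.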

---------------
20090906-2110
