Saturday, August 29, 2009

Aug 29 - Masemola & de Villiers, Towards a Framework for Usability Testing of Interactive e-Learning Applications in Cognitive Domains...

Towards a Framework for Usability Testing of Interactive e-Learning Applications in Cognitive Domains, Illustrated by a Case Study.
S.S. (THABO) MASEMOLA AND M.R. (RUTH) DE VILLIERS. University of South Africa
S.S. Masemola, School of Computing, University of South Africa, P O Box 392, UNISA, 0003, South Africa; masemss@unisa.ac.za.
M.R. de Villiers, School of Computing, University of South Africa, P O Box 392, UNISA, 0003, South Africa; dvillmr@unisa.ac.za.
Proceedings of SAICSIT 2006, Pages 187–197

Abstract
Testing has been conducted in a controlled usability laboratory on an interactive e-learning application that teaches mathematical skills in a cognitive domain. The study obtained performance measures and identified usability problems, but focused primarily on using the testing technology to investigate distinguishing aspects of such applications, such as time-usage patterns in domains where rapid completion is not necessarily a performance indicator. The paper addresses the issue of what, actually, is meant by ‘usability’ in learning environments. A pilot study identified obstacles and served to enhance the main study. Thinking aloud on the part of participants provided useful data to support analysis of the performance measures, as did benchmarks and best-case measures. Particular attention must be paid to ethical aspects. Features emerging from this study can contribute to a framework for usability testing and usage-pattern analysis of interactive e-learning applications in cognitive domains.

What is Usability Testing?
Usability testing is a software evaluation technique that involves measuring the performance of typical end-users as they undertake a defined set of tasks on the system being investigated. It commenced in the early 1980s, as human-factors professionals studied subjects using interfaces under real-world or controlled conditions and collected data on problems that arose (‘human factors’ is an early term for the human-computer interaction discipline). It has been shown to be an effective method that rapidly identifies problems and weaknesses, and is particularly used to improve the usability of products [Dumas, 2003; Dumas and Redish, 1999; Jeffries et al., 1991].
Since the early to mid-1990s, such testing has been empirically conducted in specialized controlled environments called usability laboratories, equipped with sophisticated monitoring and recording facilities for formal usability testing, supported by analytical software tools. It is an expensive technique. Participants, who are real end-users, interact with the product, performing specified representative tasks. Their actions can be rigorously monitored and recorded in various ways: by videotape, for subsequent re-viewing; by event logging, down to keystroke level; and by audio, to note verbalization and expressions.
The data, in both quantitative and qualitative forms, is analysed and changes can be recommended. Typical usability metrics include the time taken to complete a task, degree of completion, number of errors, time lost by errors, time to recover from an error, number of subjects who successfully completed a task, and so on [Avouris, 2001; Dix, Finlay, Abowd & Beale, 2004; ISO 9241, 1997; Preece, Rogers & Sharp, 2003; Wesson, 2002].
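To make these metrics concrete, here is a minimal Python sketch of how they might be computed from per-task session records. The record format, field names, and example values are my own illustrative assumptions, not anything taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One participant's attempt at one task (hypothetical log format)."""
    duration_s: float    # time taken to complete the task
    completed: bool      # degree of completion, reduced here to done/not done
    errors: int          # number of errors committed
    error_time_s: float  # time lost to errors and recovery

def summarise(records: list) -> dict:
    """Compute the usual usability metrics for one task across participants."""
    n = len(records)
    done = [r for r in records if r.completed]
    return {
        "completion_rate": len(done) / n,
        "mean_time_s": sum(r.duration_s for r in done) / len(done) if done else None,
        "mean_errors": sum(r.errors for r in records) / n,
        "mean_error_time_s": sum(r.error_time_s for r in records) / n,
    }

# Illustrative data: three participants on one task.
session = [TaskRecord(310.0, True, 2, 40.0),
           TaskRecord(455.0, True, 5, 120.0),
           TaskRecord(280.0, False, 1, 15.0)]
print(summarise(session))  # completion_rate ~0.67, mean_time_s 382.5, ...
```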

The primary targets of usability testing are the user interface and other interactive aspects. Such testing is used by academics for research and development, and also by usability practitioners in the corporate environment for rapid refinement of interfaces and analysis of system usability.

What is e-Learning?
Some definitions of e-learning equate it solely with use of the Internet in instruction and learning, but others [CEDEFOP, 2002; Wesson & Cowley, 2003] are broader, including multiple formats and methodologies such as the Internet and Web-based learning (WBL), multimedia CD-ROM, online instruction, educational software/courseware, and traditional computer-assisted learning (CAL). This approach suits the present study, which views e-learning as a broad range of learning technologies encompassing various roles for technology, including interactive educational software, web-based learning, learning management systems, and learners using computers as tools [De Villiers, 2005].

Usability Testing
Dumas [2003] lists six defining characteristics of usability tests, while a seventh and eighth are obtained from Dumas and Redish [1999]:
1. The focus is usability.
2. Participants are end users or potential end users.
3. There is an artifact to evaluate, which may be a product design, a system or a prototype.
4. The participants think aloud as they progress through tasks.
5. Data is recorded and analysed.
6. The results are communicated to appropriate audiences (often a corporate client).
7. Testing should cover only a few features of the product, incorporated in selected tasks.
8. Each participant should spend approximately an hour doing the stated tasks.

Our methodology and test plan are based on general methodologies for formal usability testing [Pretorius, Calitz & Van Greunen, 2005; Rubin, 1994; Van Greunen & Wesson, 2002], but with some distinguishing features, such as the emphasis on participants thinking aloud and the use of a best case for initial benchmarking. The broad methodology involves the following steps (a small sketch of a test-plan structure follows the list):
─ Set up objectives in line with research questions.
─ Determine the aspects to be measured and their metrics.
─ Formulate documents: Initial test plan, task list, information document for participants, checklist for administrator, and determine a means of investigating satisfaction.
─ Acquire representative participants.
─ Conduct a pilot test.
─ Refine the test plan, task list, and information document for the main usability test in the light of the pilot.
─ Conduct usability test.
─ Determine means of analysis and presentation that address the unique, as well as the usual, aspects.
─ Draw conclusions and make proposals for the way forward.
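As a way of making these steps concrete, one might represent the test plan and its artefacts as a small data structure. This is my own hedged sketch; the field names and example values are illustrative assumptions, not the authors' actual instrument:

```python
from dataclasses import dataclass, field

@dataclass
class UsabilityTestPlan:
    """Skeleton of a formal test plan (illustrative field names only)."""
    objectives: list              # tied to the research questions
    metrics: dict                 # aspect measured -> its metric
    task_list: list               # specified representative tasks
    participant_profile: str      # who counts as a representative end-user
    satisfaction_instrument: str  # means of investigating satisfaction
    pilot_findings: list = field(default_factory=list)

plan = UsabilityTestPlan(
    objectives=["Identify usability problems", "Analyse usage patterns"],
    metrics={"effectiveness": "task completion", "efficiency": "time on task"},
    task_list=["Task 1", "Task 2"],
    participant_profile="typical learners who would use the tutorial",
    satisfaction_instrument="post-session questionnaire",
)
# After the pilot test, record findings and refine the plan:
plan.pilot_findings.append("Task wording ambiguous; reworded for main test")
```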


Conclusion

1. In addition to the identification of performance measures and problems, how can testing in a usability laboratory elicit valuable information about cognitive e-learning applications?
We propose that usability testing of e-learning applications should address both the interfaces and the learning content, because usability and functionality are closely related in e-learning. Performance metrics generated during learning activities can be used to measure the effectiveness aspect of usability, because success in the cognitive processing induced by learning functionality is fundamental to both usability and utility. It could be argued that conventional usability testing should be conducted on user interfaces only, to investigate users’ experiences with navigation features and menus. This could lead to improved interaction design, but it would be a paltry use of the capacity of the monitoring and analytical features in usability laboratories.
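ISO 9241-11 characterises effectiveness as the accuracy and completeness with which users achieve specified goals. A minimal sketch of one way to operationalise this for learning tasks follows; the formulation is my own hedged reading, not the paper's method:

```python
def effectiveness(correct: int, attempted: int, total: int) -> float:
    """Accuracy x completeness, in the spirit of ISO 9241-11.

    correct:   learning tasks solved correctly (hypothetical measure)
    attempted: tasks the participant actually attempted
    total:     tasks in the session
    """
    accuracy = correct / attempted if attempted else 0.0
    completeness = attempted / total
    return accuracy * completeness

# e.g. 6 correct out of 8 attempted, from a 10-task session:
print(effectiveness(6, 8, 10))  # 0.75 * 0.8 = 0.6
```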

2. What activities/outputs yield meaningful information about interactive applications in such domains?
Innovative use of the usability lab technology led to usage analysis to identify usage patterns. Data from subjects’ thinking aloud clarified how they used their time, and a clear distinction emerged between time spent navigating and time spent on cognitive activities. The act of configuring an environment or system to one’s own needs is called ‘incorporated subversion’ by Squires [1999]. In the present context, incorporated subversion of the usability testing technology led to added-value use of the laboratory and its software in novel and adaptable ways.
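As a rough illustration of this kind of usage-pattern analysis, the sketch below splits logged time between navigation and cognitive activity. The event categories and log format are hypothetical assumptions, not the paper's actual logging scheme:

```python
# Event types treated as navigation; everything else counts as cognitive work.
NAVIGATION = {"menu_open", "page_forward", "page_back", "scroll"}

def time_split(events: list) -> dict:
    """events: (event_type, seconds spent before the next event)."""
    totals = {"navigation": 0.0, "cognitive": 0.0}
    for kind, dwell in events:
        bucket = "navigation" if kind in NAVIGATION else "cognitive"
        totals[bucket] += dwell
    return totals

log = [("menu_open", 3.0), ("read_problem", 42.0),
       ("solve_step", 95.0), ("page_forward", 2.5)]
print(time_split(log))  # {'navigation': 5.5, 'cognitive': 137.0}
```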

3. What notable features emerge from this study that can contribute specifically to a framework for usability testing of interactive e-learning in cognitive domains?
Generic frameworks and methodologies can be established, then customized for optimal usability testing of different kinds of e-learning applications and environments.
─ Attention to ethical aspects is of vital importance in this close-up recording of personal human activities.
─ A pilot study is essential to support the sensitive evolving plans and critical judgements that must be made.
─ This case study established the value of thinking out loud as a source of data, provided it is preceded by adequate preparation of participants.
─ The more fine-grained the tasks selected for testing, the better the data that is recorded.
─ Regarding the number of subjects, five are sufficient to identify usability problems, but not enough to conduct serious analysis of learning and cognitive patterns. In this study the five participants generated valuable initial data on usage patterns and learning styles. Future in-depth research should be undertaken on low-level, very fine-grained tasks, accompanied by detailed analysis. With more data, realistic averages could be obtained to serve as benchmarks against which to compare future usage studies on small groups or single subjects.
─ For the present, tentative benchmarks and best-case measures were obtained. The best case provides a realistic optimum standard (a small sketch of how these might be computed follows the list).
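A minimal sketch of how a tentative benchmark and a best case might be derived from a handful of completion times; the numbers are illustrative only, not the study's data:

```python
def benchmarks(times_s: list) -> tuple:
    """Return (tentative benchmark, best case) for one task.

    times_s: completion times of participants who finished the task.
    With only five subjects the mean is merely a tentative benchmark;
    the minimum serves as a realistic optimum ('best case').
    """
    benchmark = sum(times_s) / len(times_s)
    best_case = min(times_s)
    return benchmark, best_case

print(benchmarks([310.0, 270.0, 455.0, 390.0, 330.0]))  # (351.0, 270.0)
```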

References that I may want to read further in future:
DE VILLIERS, M.R. 2004. Usability evaluation of an e-Learning tutorial: criteria, questions and a case study. In: G. Marsden, P. Kotzé, & A. Adesina-Ojo (Eds), Fulfilling the promise of ICT. Proceedings of SAICSIT 2004. ACM International Conference Proceedings Series.
DIX, A., FINLAY, J., ABOWD, G.D. and BEALE, R. 2004. Human-Computer Interaction. Pearson Education, Ltd, Harlow.
DUMAS, J.S. 2003. User-based evaluations. In: J.A. Jacko & A. Sears (Eds), The Human-Computer Interaction Handbook. Mahwah: Lawrence Erlbaum Associates.
DUMAS, J.S. and REDISH, J.C. 1999. A practical guide to usability testing. Exeter: Intellect.
FAULKNER, X. 2000. Usability Engineering. Houndmills: Macmillan Press.
ISO 9241. 1997. Draft International Standard: Ergonomic requirements for office work with visual display terminals (VDT). Part 11: Guidance on Usability, ISO.
NIELSEN, J. 2000. Why you only need to test with five users. http://www.useit.com/alertbox/20000319.html. Accessed March 2006.
SQUIRES, D. and PREECE, J. 1999. Predicting quality in educational software: Evaluating for learning, usability and the synergy between them. Interacting with Computers 11(5): 467–483.
VAN GREUNEN, D. and WESSON, J.L. 2002. Formal usability testing of interactive educational software: A case study. World Computer Congress (WCC): Stream 9: Usability. Montreal, Canada, August 2002.
WESSON, J.L. 2002. Usability evaluation of web-based learning: An essential ingredient for success. TelE-Learning 2002: 357–363, Montreal, Canada, August 2002.
WESSON, J.L. and COWLEY, N. L. 2003. The challenge of measuring e-learning quality: Some ideas from HCI. IFIP TC3/WG3.6 Working Conference on Quality Education @ a Distance: 231-238, Geelong, Australia, February 2003.
