Friday, August 28, 2009
Aug 29 - Ssemugabi & de Villiers, A Comparative Study of Two Usability Evaluation Methods Using a Web-Based E-Learning Application
A Comparative Study of Two Usability Evaluation Methods Using a Web-Based E-Learning Application.
Samuel Ssemugabi. School of Information Technology, Walter Sisulu University, Private Bag 1421, East London, 5200, South Africa. +27 43 7085407 ssemugab@wsu.ac.za
Ruth de Villiers. School of Computing, University of South Africa, P O Box 392, Unisa, 0003, South Africa +27 12 429 6559 dvillmr@unisa.ac.za
SAICSIT 2007, 2 - 3 October 2007, Fish River Sun, Sunshine Coast, South Africa.
ABSTRACT
Usability evaluation of e-learning applications is a maturing area, which addresses interfaces, usability and interaction from human-computer interaction (HCI), and pedagogy and learning from education. The selection of usability evaluation methods (UEMs) to determine usability problems is influenced by time, cost, efficiency, effectiveness, and ease of application. Heuristic evaluation (HE) involves evaluation by experts with expertise in the domain area and/or HCI.
This comparative evaluation study investigates the extent to which HE identifies usability problems in a web-based learning application and compares the results with those of survey evaluations among end-users (learners).
Severity rating was conducted on a consolidated set of usability problems, and the major and minor problems were compared further. The results of HE correspond closely with those of the survey. However, the four expert evaluators identified more problems than the 61 learners did, and identified 91% of the learners' problems when only major problems were considered. HE by a competent and balanced set of experts showed itself to be an appropriate, efficient and effective UEM for e-learning applications.
Usability is a key issue in human-computer interaction (HCI) since it is the aspect that commonly refers to quality of the user interface [30].
The International Standards Organisation (ISO) defines usability as [16]: The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context.
Usability evaluation is concerned with gathering information about the usability or potential usability of a system, in order to assess it or to improve its interface by identifying problems and suggesting improvements [35]. Various usability evaluation methods (UEMs) exist, e.g. analytical, expert heuristic evaluation, survey, observational, and experimental methods [6, 13, 35].
To evaluate the usability of a system and to determine usability problems, it is important to select an appropriate UEM or combination of UEMs [10, 37], taking cognisance of efficiency, time, cost-effectiveness, ease of application, and expertise of evaluators [13, 30].
Evaluation of e-learning should address aspects of pedagogy and learning from educational domains as well as HCI factors such as the effectiveness of interfaces and the quality of usability and interaction.
Heuristic evaluation (HE) is a usability inspection technique originated by Nielsen [26, 27], in which a small set of expert evaluators, guided by a set of usability principles known as heuristics, determine whether a system conforms to these and identify specific usability problems in the system. It is the most widely used UEM for computer system interfaces. It is described as fast, inexpensive, and easy to perform, and can result in major improvements to user interfaces [4, 5, 19, 24].
Usability features are frequently not considered in development, sometimes because instructors and courseware developers are not trained to do so or because they lack the technological skills [41]. However, a main reason for the oversight is that usability evaluation can be difficult, time-consuming and expensive [20]. If, in addition to its attributes of low cost and relative simplicity, HE is shown to be effective, efficient, and sufficient to identify the problems that impede learners, it would qualify as an ideal method for evaluating web-based e-learning.
The main components of the research design were:
1. Identification of the target application.
2. Generation of a set of criteria/heuristics suitable for usability evaluation of e-learning applications, in particular, WBL environments.
3. End-user evaluations by means of criterion-based learner surveys on the target application using questionnaires and a focus group interview.
4. An heuristic evaluation with a competent and complementary set of experts, using the synthesised set of criteria as heuristics, followed by severity rating on the consolidated set of problems.
5. Data analysis to answer the research question, investigating in particular how the usability problems identified by heuristic evaluation compare with those identified by end-users in the learner surveys (a simple comparison of this kind is sketched below).
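To make the data-analysis step concrete for myself, here is a minimal sketch (my own, not from the paper) of how two sets of identified problems can be compared; the problem identifiers below are hypothetical placeholders.

# Illustrative sketch only (not from the paper): comparing the set of usability
# problems found by heuristic evaluation (HE) with those found in the learner survey.
# Problem identifiers are hypothetical placeholders.

he_problems = {"P01", "P02", "P03", "P05", "P08"}   # found by the expert evaluators
survey_problems = {"P02", "P03", "P04", "P05"}      # found by the learners

overlap = he_problems & survey_problems
only_he = he_problems - survey_problems
only_survey = survey_problems - he_problems

# Proportion of the learners' problems that HE also detected
coverage = len(overlap) / len(survey_problems)

print(f"Found by both methods: {sorted(overlap)}")
print(f"Found only by HE: {sorted(only_he)}")
print(f"Found only by the survey: {sorted(only_survey)}")
print(f"HE coverage of survey problems: {coverage:.0%}")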
Evaluation criteria/heuristics (terms used interchangeably) for e-learning should address interfaces, usability and interaction from HCI, as well as pedagogy and learning from education.
A study was undertaken by the authors [38] to establish an appropriate set of 20 criteria within three categories for evaluating WBL applications.
This multi-faceted framework (see Table 1) supports Hosie and Schibeci's concept [15] of 'context-bound'/'context-related' evaluation. Nielsen's [27] heuristics form the basis of Category 1, extended and customised for educational purposes by using Squires and Preece's [36] 'learning with software' heuristics. Other sources used in generating the criteria were [1], [18], [19], [23], [34], [35] and [39].
Each criterion has a list of associated sub-criteria or guidelines to help evaluators assess Info3Net. These are also shown in Table 1, but they can be customised to other contexts (one way of holding the whole framework in code is sketched after the table).
Category 1: General interface usability criteria (based on Nielsen’s heuristics, modified for e-learning context)
1 Visibility of system status
2 Match between the system and the real world, i.e. match between the designer model and the user model
3 Learner control and freedom
4 Consistency and adherence to standards
5 Error prevention, in particular, prevention of peripheral usability-related errors [36]
6 Recognition rather than recall
7 Flexibility and efficiency of use
8 Aesthetics and minimalism in design
9 Recognition, diagnosis, and recovery from errors
10 Help and documentation
Category 2: Website-specific criteria for educational websites
11 Simplicity of site navigation, organisation and structure
12 Relevance of site content to the learner and the learning process
Category 3: Learner-centred instructional design, grounded in learning theory, aiming for effective learning
13 Clarity of goals, objectives and outcomes
14 Effectiveness of collaborative learning (where such is available)
15 Level of learner control
16 Support for personally significant approaches to learning
17 Cognitive error recognition, diagnosis and recovery
18 Feedback, guidance and assessment
19 Context meaningful to domain and learner
20 Learner motivation, creativity and active learning
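As a note to myself: one possible way to hold this framework of categories, criteria and sub-criteria in code, e.g. to generate questionnaire statements or an evaluator checklist. This is purely an illustrative sketch; only a few criteria are shown, and the sub-criteria are paraphrased placeholders, not the paper's exact wording.

# Minimal sketch (not from the paper) of the three-category framework as a data
# structure. Sub-criteria here are illustrative placeholders only.

framework = {
    "Category 1: General interface usability": {
        1: ("Visibility of system status",
            ["The system keeps the learner informed about what is happening."]),
        2: ("Match between the system and the real world",
            ["Terminology matches the learner's language and domain."]),
    },
    "Category 2: Website-specific criteria": {
        11: ("Simplicity of site navigation, organisation and structure",
             ["Learners can tell where they are and how to return to the start page."]),
    },
    "Category 3: Learner-centred instructional design": {
        13: ("Clarity of goals, objectives and outcomes",
             ["Learning outcomes are stated for each unit."]),
    },
}

# Generate flat questionnaire statements from the sub-criteria
for category, criteria in framework.items():
    for number, (criterion, sub_criteria) in criteria.items():
        for statement in sub_criteria:
            print(f"[{number}] {criterion}: {statement}")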
SURVEY EVALUATIONS AMONG END-USERS (LEARNERS)
Query techniques, such as questionnaires and interviews, aim to identify usability problems by asking users directly [9]. The questionnaire was designed using Gillham's principles [12]. The first section elicited basic demographic information and details of the respondents' experience. The main section was based on the 20 criteria of Section 5.1. For each criterion, straightforward, focused, single-issue statements, based on the sub-criteria, were generated to expand its meaning. Students rated Info3Net using the criteria, but a main aim was for them to use each criterion to name problems they had experienced in that regard and to write them in the space provided.
The survey used a 5-point Likert scale: Strongly agree, Agree, Maybe, Disagree, Strongly disagree.
Eighty registered students had used Info3Net during the semester, 61 of whom took part in the questionnaire survey, which was conducted on a single day in the usual laboratory setting, but in two separate groups. Sixty-four problems were identified. To eliminate duplicates, a researcher meticulously considered them and combined those that were closely related. During consolidation, some were rephrased to correspond with the terminology used by the expert evaluators (Section 7.4).
For further data, the questionnaire was followed by a focus group interview with eight students, as advocated by [35]. This clarified and elaborated on problems identified by the questionnaire. More problems emerged, resulting in a final list of 55 problems.
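A quick sketch (mine, not the authors') of how the Likert responses for one criterion could be tallied; the numeric mapping of the scale points and the sample responses are assumptions for illustration only.

# Illustrative sketch (not from the paper): tallying learners' Likert responses for
# one criterion. The numeric mapping and the example responses are assumed.

from collections import Counter

scale = {"Strongly agree": 5, "Agree": 4, "Maybe": 3, "Disagree": 2, "Strongly disagree": 1}

# Hypothetical responses for one criterion, e.g. "Visibility of system status"
responses = ["Agree", "Agree", "Maybe", "Strongly agree", "Disagree", "Agree"]

counts = Counter(responses)
mean_score = sum(scale[r] for r in responses) / len(responses)

print("Response counts:", dict(counts))
print(f"Mean rating: {mean_score:.2f} out of 5")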
HEURISTIC EVALUATION BY EXPERTS
Heuristic evaluation (HE) is an inspection technique whereby a set of experts evaluate whether a user interface conforms to defined usability principles, termed heuristics [9, 26, 33]. It is seldom possible for a single evaluator to identify all the usability problems in a system. Nielsen [27] recommends that a set of three to five evaluators be used to identify 65-75% of the usability problems. Our approach is based on Nielsen's, using our custom-designed heuristics for web-based learning. The HE was supplemented by a severity rating process, done by the experts, to rank the final integrated set of problems according to level of seriousness.
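Side note to myself: the 65-75% figure is usually explained with the Nielsen-Landauer problem-discovery model, which the paper itself does not spell out. A small sketch of that model is below, using the commonly quoted average single-evaluator detection rate of about 0.31; real values vary by project.

# Sketch of the Nielsen-Landauer problem-discovery model (not from this paper):
# the expected proportion of usability problems found by n evaluators is
# 1 - (1 - L)^n, where L is the probability that one evaluator finds a given problem.
# L = 0.31 is the often-quoted average; actual values differ between systems.

def proportion_found(n_evaluators: int, single_evaluator_rate: float = 0.31) -> float:
    return 1 - (1 - single_evaluator_rate) ** n_evaluators

for n in range(1, 6):
    print(f"{n} evaluator(s): about {proportion_found(n):.0%} of problems expected")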
Identifying and Defining the Heuristics
The heuristics used were 15 of the 20 used in the end-user survey … and the results of the two evaluations are compared with regard to those 15. Each heuristic was divided into sub-criteria, as shown in Table 1.
Selection of Evaluators
Factors involved in selecting and inviting a balanced set of experts are the number to use and their respective backgrounds.
Nielsen's [27] cost-benefit analysis demonstrated optimal value with three or four evaluators. Despite this, the debate continues. The authors of [3] used eleven experts to assess the usability of a university Web portal. Law and Hvannberg [22] reject the 'magic five assumption' and, in the context of usability testing, used eleven participants to identify 80% of the detectable usability problems. However, [19] argue that two to three evaluators who are experts both in the domain area and in HCI, so-called 'double experts', will point out the same number of usability problems as three to five 'single experts'. Furthermore, in a heuristic evaluation of an educational multimedia application by Albion [1], four evaluators were selected, with expertise in user interface design, instructional/educational design, and teaching.
The same approach was followed in this study. Four expert evaluators with expertise in user interface design, instructional/educational design, and teaching, were invited and agreed to participate. Two are lecturers in the subject-matter domain and are also HCI experts, familiar with HE; these two can be classified as ‘double experts’.
Briefing the Evaluators
Evaluators were briefed in advance about the HE process for the study, the domain of use of the target system, and the task scenarios to work through as advocated by [23] and [27]. In addition to the consent form (see 5.2) and a request to familiarise themselves with the heuristics before doing the actual evaluation, each evaluator was given a set of documents regarding the:
• Phases of the HE
• System and user profile
• Procedure.
Severity Rating
Severity rating, i.e. assigning relative severities to individual problems, can be performed to determine each problem's level of seriousness. It is usually estimated on a 3- or 5-point Likert scale. The experts can either do ratings during their HEs or later, when all the problems have been aggregated.
The latter approach is advantageous [1, 23, 24] and is used in the present study. Table 5 shows the 5-point rating scale [32] used to assess the problems and assign severities. The final option indicates a non-problem.
The scale is similar to that used by Nielsen [27], with an additional rating item, 'Medium', between the 'Major' and 'Minor' problem ratings.
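To see how such ratings might be combined across the four experts, here is an illustrative sketch (not from the paper) that takes the median rating per consolidated problem. The numeric mapping, the scale labels and the example problems are my own assumptions, not the paper's Table 5.

# Illustrative sketch (not from the paper): aggregating severity ratings given by
# four experts to each consolidated problem, using the median. Scale labels, numeric
# mapping and example problems are assumptions for illustration only.

from statistics import median

severity_scale = {4: "Major", 3: "Medium", 2: "Minor", 1: "Cosmetic", 0: "Not a problem"}

# Hypothetical ratings: problem description -> one rating per expert evaluator
ratings = {
    "Broken link on assessment page": [4, 4, 3, 4],
    "Inconsistent menu wording": [2, 3, 2, 2],
    "Font too small in quiz feedback": [1, 2, 1, 0],
}

for problem, expert_ratings in ratings.items():
    level = median(expert_ratings)
    print(f"{problem}: median severity {level:g} ({severity_scale[round(level)]})")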
Conclusion
The conclusion from these findings of a comparative evaluation study is that the results of heuristic evaluation by experts correspond closely with those of survey evaluation among end-users (learners). In fact, the HE results are better than the survey results (see Table 6). They were produced by only four experts, compared with 61 learners, and the experts were experiencing their first exposure to Info3Net, whereas the learners had used it for a semester.
The findings of this study indicate that heuristic evaluation, if conducted by a competent and complementary group of experts, is an appropriate, efficient, and highly effective usability evaluation method in the context of e-learning, as well as relatively easy to conduct and inexpensive. The researchers recommend that HE should, ideally, be supplemented with methods where users themselves identify usability or learning problems. This is in line with proposals that reliable evaluation can be achieved by systematically combining inspection with user-based methods [2, 7]. In cases where only one approach has to be selected, the findings of this research can be used to propose heuristic evaluation as the optimal method.
A valuable secondary benefit of the study is that the problems identified in Info3Net can be addressed in future upgrades.
Another major contribution of this research is the generation of the framework of evaluation criteria, which is transferable to other e-learning contexts, where the sub-criteria can be customised to the particular environment or system.
Recommendations For Further Research
• Re-designing Info3Net in an action research approach, to solve some of the problems identified. The application could be re-evaluated to determine the impact of the changes.
• Applying the evaluation criteria generated in this study to evaluate other web-based e-learning applications.
• Adapting the framework of criteria to customise them for other contexts or other forms of e-learning.
• Determining which criteria are suitable for application by educators only and which by learners only.
References that I may want to read further in the future:
[1] Albion, P.R. 1999. Heuristic Evaluation of Multimedia: From Theory to Practice. http://www.usq.edu.au/users/albion/papers/ascilite99.html
[2] Ardito, C., Costabile, M.F., De Marsico, M., Lanzilotti, R., Levialdi, S., Roselli, T. & Rossano, V. 2006. An Approach to Usability Evaluation of E-Learning Applications. Universal Access in the Information Society, 4(3): 270-283.
[4] Belkhiter, N., Boulet, M., Baffoun, S. & Dupuis, C. 2003. Usability Inspection of the ECONOF System's User Interface Visualization Component. In: C. Ghaoui. (Ed.), Usability Evaluation of Online Learning Programs. Hershey, P.A.: Information Science Publishing.
[7] Costabile, M.F., De Marsico, M., Lanzilotti, R., Plantamura, V.L. & Roselli, T. 2005. On the Usability Evaluation of E-Learning Applications. In: Proceedings of the 38th Hawaii International Conference on System Sciences: 1-10. Washington: IEEE Computer Society.
[8] De Villiers, R. 2006. Multi-Method Evaluations: Case Studies of an Interactive Tutorial and Practice System. In: Proceedings of InSITE 2006 Conference. Manchester, United Kingdom.
[12] Gillham, B. 2000. Developing a Questionnaire. London: Continuum.
[14] Hartson, H.R., Andre, T.S. & Williges, R.C. 2003. Criteria for Evaluating Usability Evaluation Methods. International Journal of Human-Computer Interaction, 15(1): 145-181.
[15] Hosie, P. & Schibeci, R. 2001. Evaluating Courseware: A need for more context bound evaluations? Australian Educational Computing, 16(2):18-26.
[18] Jones, A., Scanlon, E., Tosunoglu, C., Morris, E., Ross, S., Butcher, P. & Greenberg, J. 1999. Contexts for Evaluating Educational Software. Interacting with Computers, 11(5): 499-516.
[19] Karoulis, A. & Pombortsis, A. 2003. Heuristic Evaluation of Web-Based ODL Programs. In: C. Ghaoui. (Ed.), Usability Evaluation of Online Learning Programs. Hershey, P.A.: Information Science Publishing.
[25] Masemola, S.S. & De Villiers, M.R. 2006. Towards a Framework for Usability Testing of Interactive E-Learning Applications in Cognitive Domains, Illustrated by a Case Study. In: J. Bishop & D. Kourie. Service-Oriented Software and Systems. Proceeding of SAICSIT 2006: 187-197. ACM International Conference Proceedings Series.
[26] Nielsen, J. 1992. Finding Usability Problems through Heuristic Evaluation. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: 373-380. Monterey: ACM Press.
[27] Nielsen, J. 1994. Heuristic Evaluation. In: J. Nielsen & R.L. Mack. (Eds), Usability Inspection Methods. New York: John Wiley & Sons.
[28] Nielsen, J. & Molich, R. 1990. Heuristic Evaluation of User Interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People: 249-256. Seattle: ACM Press.
[30] Parlangeli, O., Marchigiani, E. & Bagnara, S. 1999. Multimedia in Distance Education: Effects of Usability on Learning. Interacting with Computers, 12(1): 37-49.
[31] Peng, L.K., Ramaiah, C.K. & Foo, S. 2004. Heuristic-Based User Interface Evaluation at Nanyang Technological University in Singapore. Program: Electronic Library and Information Systems, 38(1): 42-59.
[32] Pierotti, D. 1996. Usability Techniques: Heuristic Evaluation Activities. http://www.stcsig.org/usability/topics/articles/heactivities.html
[36] Squires, D. & Preece, J. 1999. Predicting Quality in Educational Software: Evaluating for Learning, Usability and the Synergy Between them. Interacting with Computers, 11(5): 467-483.
[37] Ssemugabi, S. 2006. Usability Evaluation of a Web-Based E-Learning Application: A Study of Two Evaluation Methods. MSc Dissertation, University of South Africa.
[38] Ssemugabi, S. & De Villiers, M.R. 2007. Usability and Learning: A Framework for Evaluation of Web-Based E-Learning Applications. In: Proceedings of ED-MEDIA 2007 - World Conference on Educational Multimedia, Hypermedia & Telecommunications: 906-913. Vancouver, Canada.
[39] Storey, M.A, Phillips, B., Maczewski, M. & Wang, M. 2002. Evaluating the Usability of Web-Based Learning Tools. Educational Technology & Society, 5(3):91-100.
[40] Van Greunen, D. & Wesson, J.L. 2002. Formal Usability Testing of Interactive Educational Software: A Case Study. World Computer Congress (WCC), Stream 9: Usability. Montreal, Canada, August 2002.
[41] Vrasidas, C. 2004. Issues of Pedagogy and Design in E-Learning Systems. In: ACM Symposium on Online Learning: 911-915. Nicosia: ACM Press.
[42] Wesson, J.L. & Cowley, N. L. 2003. The Challenge of Measuring E-Learning Quality: Some Ideas from HCI. IFIP TC3/WG3.6 Working Conference on Quality Education @ a Distance: 231-238. Geelong, Australia.
[43] Zaharias, P. 2006. A Usability Evaluation Method for ELearning: Focus on Motivation to Learn. In: CHI '06 Extended Abstracts on Human Factors in Computing Systems: 1571-1576. Montreal: ACM Press.
My Comments: Ssemugabi and de Villiers had listed many references that may be useful to my research. Good hints for literature review. :)