Monday, August 31, 2009

Aug 31 - Ardito et al, A Tool to Support Usability Inspection

A Tool to Support Usability Inspection
Carmelo Ardito, Rosa Lanzilotti, Paolo Buono, Antonio Piccinno
Dipartimento di Informatica, Università di Bari, Italy
{ardito, lanzilotti, buono, piccinno}@di.uniba.it

AVI '06, May 23-26, 2006, Venezia, Italy.

ABSTRACT
SUIT (Systematic Usability Inspection Tool) is an Internet-based tool that supports evaluators during the usability inspection of software applications. SUIT makes it possible to reach inspectors everywhere, guiding them in their activities. Unlike other tools proposed in the literature, SUIT not only supports the activities of a single evaluator but also makes it possible to manage a team of evaluators, who can perform peer reviews of each other's inspection work and merge their individual reports into a single document on which they all agree.


Usability is a significant aspect of the overall quality of interactive applications.

Different methods can be used for evaluating the usability of interactive systems. Among them, the most commonly adopted are inspection methods, which involve expert evaluators only: they inspect the application and provide judgments based on their knowledge and expertise [8]. Their main advantage is cost saving: they "save users" and require neither special equipment nor lab facilities [4] [8]. In addition, experts can detect a wide range of problems and possible faults of a complex system in a limited amount of time.
Heuristic evaluation is a popular inspection method in which a few experts inspect the system and evaluate the interface against a list of recognized usability principles: the heuristics. Heuristic evaluation is considered a "discount usability" method; it has been shown to have a high benefit-cost ratio [7]. It is especially valuable when time and resources are short, because skilled evaluators, without the involvement of users, can produce high-quality results in a limited amount of time [5].

It is recommended to have more than one evaluator inspecting an application. Every member of the inspection team works individually and produces a problem report. Afterwards, all inspectors in the team meet to discuss the discovered problems, producing a single problem list.
There are already tools that support the first phase of this process. In general, they offer an online form that helps in conducting an inspection (such as a usability review or heuristic evaluation). By using an online form to identify problems and complete a checklist, inspectors can reduce the amount of editing and re-work required when conducting inspections. In some cases, once inspectors have completed their work, the tools automatically create a report that can be edited and completed in a word processor.
This is the case with Review Assistant [9] and CUE (Custom Usability Evaluation) [1]. MAUVE (Multi-criteria Assessment of Usability for Virtual Environments) is another tool, developed for usability assessment of virtual environments [10]. However, such tools only support the individual work of an inspector; they do not make it possible to manage a team of evaluators, merge their individual reports, or coordinate their activities.
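My note: the report-merging step that these tools lack is easy to prototype for my own work. A minimal sketch, with hypothetical names and fields that are mine rather than the paper's, of how individual inspectors' problem reports might be merged into one consolidated list:

```python
# Hypothetical sketch (names and fields are mine, not from the paper):
# merging the individual inspectors' problem reports into one consolidated
# list, collapsing duplicate descriptions and recording every reporter.
from dataclasses import dataclass

@dataclass
class Problem:
    description: str   # short statement of the usability problem
    severity: int      # e.g. 1 (cosmetic) .. 4 (catastrophic)
    reporters: str     # inspector(s) who found it

def merge_reports(reports: dict[str, list[Problem]]) -> list[Problem]:
    """Combine per-inspector reports into a single deduplicated list."""
    merged: dict[str, Problem] = {}
    for inspector, problems in reports.items():
        for p in problems:
            key = p.description.strip().lower()
            if key in merged:
                # keep the highest severity and note the additional reporter
                merged[key].severity = max(merged[key].severity, p.severity)
                merged[key].reporters += f", {inspector}"
            else:
                merged[key] = Problem(p.description, p.severity, inspector)
    return list(merged.values())
```

In a real tool the final agreement would of course happen in discussion (or, in SUIT's case, in the forum); the merge above only prepares the candidate list.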

In this paper, we present SUIT (Systematic Usability Inspection Tool), an Internet-based tool that supports inspectors performing a particular inspection technique, proposed in [6] to overcome the drawbacks of heuristic evaluation.
SUIT makes it possible to reach inspectors everywhere, guiding them in their activities. Inspectors can perform asynchronous peer reviews of each other's inspection work in a discussion forum. The inspectors are coordinated by an expert inspector who acts as manager of the whole inspection process.
SUIT can also support other inspection techniques, such as heuristic evaluation; only minor adaptations would be required.

AT (Abstract Task) Inspection
In order to overcome such drawbacks, and with the objective of performing a more systematic evaluation, a different inspection technique has been proposed as part of a methodology for usability evaluation called SUE (Systematic Usability Evaluation); consequently, this technique has been called SUE inspection [6]. It exploits a set of evaluation patterns, called Abstract Tasks (ATs), which guide the inspector's activities by precisely describing which objects of the application to look for and which actions to perform during the inspection in order to analyze those objects.
In this way, even less experienced evaluators are able to achieve more complete and precise results. ATs are formulated by means of a template providing a consistent format [6].
During the inspection, evaluators analyze the application by using the available ATs and producing a report in which the discovered problems are described.
A controlled experiment, described in [2], has shown the effectiveness and efficiency of this inspection with respect to the heuristic evaluation.
The use of ATs is the key novelty of this inspection. Moreover, since it can be used independently of the overall SUE methodology, it is more appropriate to call it AT inspection.
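Note to self: the paper says only that ATs follow a consistent template [6] and does not reproduce its fields here. As a rough, hypothetical illustration of how such an evaluation pattern could be captured as a data structure (the field names are my guess based on the description above, not the published template):

```python
# Hypothetical illustration only: field names are inferred from the text
# ("which objects to look for and which actions to perform"), not taken
# from the published AT template in [6].
from dataclasses import dataclass

@dataclass
class AbstractTask:
    title: str               # short name of the evaluation pattern
    objects_to_inspect: str  # application objects the inspector must look for
    actions_to_perform: str  # what to do with those objects during inspection
    expected_output: str     # what the inspector should report

example_at = AbstractTask(
    title="Link labelling",
    objects_to_inspect="All navigation links on a page",
    actions_to_perform="Check that each label predicts its destination",
    expected_output="List of ambiguous or misleading link labels",
)
```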

SUIT ARCHITECTURE
SUIT is an Internet application built with dynamic web pages in PHP, served by Apache, with its data stored in a database. Event notification is handled by automatic email.
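The paper gives no more architectural detail than this. Just to make the "event notification by automatic email" idea concrete for my own tool, a minimal sketch (in Python rather than SUIT's PHP; all addresses and the SMTP host are placeholders):

```python
# Illustrative only: SUIT itself is PHP/Apache; this Python sketch just shows
# the kind of automatic email notification described above. The addresses
# and SMTP host are placeholders, not details from the paper.
import smtplib
from email.message import EmailMessage

def notify_inspectors(event: str, recipients: list[str]) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Inspection update: {event}"
    msg["From"] = "inspection-tool@example.org"   # placeholder sender
    msg["To"] = ", ".join(recipients)
    msg.set_content(f"A new event occurred in the inspection process: {event}")
    with smtplib.SMTP("localhost") as smtp:       # placeholder SMTP server
        smtp.send_message(msg)

# notify_inspectors("Inspector report submitted", ["manager@example.org"])
```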

SUIT FUNCTIONALITIES
In the AT inspection, two phases are required: the preparatory phase, which is performed only once for a class of applications to be evaluated, and the execution phase, which is performed for each application to be evaluated. The execution phase is coordinated by the inspection manager, an expert inspector who decides how many inspectors to involve, selects them, and coordinates their activities, as described later.

My Comments: Since my PhD research involves developing and testing a Usability Evaluation Tool, this article is very relevant to me as a benchmark.

CONCLUSION
In this paper, we have presented SUIT, a web-based tool for supporting usability inspections.
The novelty of SUIT, with respect to other tools that support application inspections, is that it manages a team of geographically distributed evaluators.
Moreover, the team of evaluators can perform peer reviews of their work and discuss among themselves in an Internet-based forum, with no need to meet face-to-face.
The current version of SUIT provides support for both AT inspection and heuristic evaluation. SUIT may be slightly modified to support other inspection techniques, such as cognitive walkthrough or guideline-based inspections.


REFERENCES that I may want to read further in future:
[1] CUE (Custom Usability Evaluation), available through the User Centric Web site. http://www.usercentric.com/, 2005.
[2] De Angeli, A., Matera, M., Costabile, M.F., Garzotto, F., and Paolini, P. On the Advantages of a Systematic Inspection for Evaluating Hypermedia Usability, International Journal of Human-Computer Interaction, Lawrence Erlbaum Associates, Inc, 15(3), 2003, 315-335.
[3] Doubleday, A., Ryan, M., Springett, M., and Sutcliffe A. A Comparison of Usability Techniques for Evaluating Design, In Proceedings of the DIS’97, Amsterdam, NL, August 18-20, 1997, ACM Press, 101-110.
[4] Jeffries, R., Miller, J., Wharton, C., and Uyeda, K. M. User Interface Evaluation in the Real World: a Comparison of Four Techniques, In Proceedings of the ACM CHI 91 Human Factors in Computing Systems, New Orleans, Louisiana, April 27 - May 2, 1991, 119-124.
[5] Kantner, L., and Rosenbaum, S. Usability Studies of WWW Sites: Heuristics Evaluation vs. Laboratory Testing, In Proceedings of the SIGDOC ’97, Salt Lake City, Utah, USA, October 19-22, 1997, ACM Press, 153-160.
[6] Matera, M., Costabile, M.F., Garzotto, F., and Paolini, P. SUE Inspection: an Effective Method for Systematic Usability Evaluation of Hypermedia, IEEE Transactions on Systems, Man and Cybernetics- Part A, 32(1), 2002, 93-103.
[7] Nielsen, J. Usability Engineering, Academic Press. Cambridge, MA, 1993.

Aug 31 - Hollingsed & Novick, Usability Inspection Methods after 15 Years of Research and Practice (part 2)

Usability Inspection Methods after 15 Years of Research and Practice.
Tasha Hollingsed. Lockheed Martin, 2540 North Telshor Blvd., Suite C, Las Cruces, NM 88011. +1 505-525-5267 tasha.hollingsed@lmco.com
David G. Novick. Department of Computer Science, The University of Texas at El Paso, El Paso, TX 79968-0518. +1 915-747-5725 novick@utep.edu
SIGDOC’07, October 22–24, 2007, El Paso, Texas, USA.


COGNITIVE WALKTHROUGH

The cognitive walkthrough is a usability inspection method that evaluates the design of a user interface for its ease of exploratory learning, based on a cognitive model of learning and use [53].
Like other inspection methods, a cognitive walkthrough can be performed on an interface at any time during the development process, from the original mock-ups through the final release.

The process of the cognitive walkthrough comprises a preparatory phase and an analysis phase. During the preparatory phase, the experimenters determine the interface to be used, its likely users, the task, and the actions to be taken during the task.
During the analysis phase, the evaluators work through the four steps of human-computer interaction developed by Lewis and Polson [25]:
1. The user sets a goal to be completed within the system.
2. The user determines the currently available actions.
3. The user selects the action that they think will take them closer to their goal.
4. The user performs the action and evaluates the feedback given by the system.
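To internalise the method, here is my own minimal framing of that analysis loop as a per-action checklist; the four questions paraphrase the four steps above, and the code is my sketch, not anything from the paper:

```python
# My own framing of the cognitive walkthrough loop (not from the paper): for
# each action in the task scenario, the evaluator answers one question per
# step above, and every "no" is recorded as a potential learnability problem.
QUESTIONS = [
    "Will the user form the right goal at this point?",
    "Will the user notice that the correct action is available?",
    "Will the user connect this action with their current goal?",
    "Will the user understand the system's feedback after the action?",
]

def walkthrough(actions, answers):
    """answers maps (action, question_index) -> True/False; default is True."""
    problems = []
    for action in actions:
        for i, question in enumerate(QUESTIONS):
            if not answers.get((action, i), True):
                problems.append(f"{action}: {question}")
    return problems

# Example: the evaluator doubts users will notice the 'Export' menu item.
print(walkthrough(["Export report"], {("Export report", 1): False}))
```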

Wharton et al. [53] surveyed the history of the cognitive walkthrough prior to 1994. At that time, the two main limitations of the cognitive walkthrough were the repetitiveness of filling out the forms and the limited range of problems the process found ([53], [51], [19]). A newer version of the cognitive walkthrough addressed these limitations by using small groups instead of individual evaluators and rotating the form-filling within the group, by evaluating the simple tasks first, and by keeping a record of all problems identified during the evaluation, not just those identified during the walkthrough process itself ([53], [41]).
Other criticisms of the cognitive walkthrough were that it does not provide guidelines about what makes an action clearly available to a user and that it is not known what types of actions are considered by a broad range of users [51].

Since its inception, and with its refinements and extensions, the cognitive walkthrough has been shown to be an effective inspection method that can be applied not just by cognitive scientists and usability specialists but also by novice evaluators. However, the choice of task scenario can be difficult; if the scenario is not adequately described, the evaluation is not as effective.

PLURALISTIC USABILITY WALKTHROUGH

The pluralistic usability walkthrough [3] adapted the traditional usability walkthrough to incorporate representative users, product developers, members of the product team, and usability experts in the process. It is defined by five characteristics:
1. Inclusion of representative users, product developers, and human factors professionals;
2. The application’s screens are presented in the same order as they would appear to the user;
3. All participants are asked to assume the role of the user;
4. Participants write down what actions they, as users, would take for each screen before the group discusses the screens; and
5. When discussing each screen, the representative users speak first.


...benefits and limitations of the approach.
On the positive side, this approach offers feedback from users even if the interface is not fully developed, enables rapid iteration of the design cycle, and—because the users are directly involved—can result in “on-the-fly” redesign.
On the negative side, the approach must be limited to representative rather than comprehensive user paths through the interface, and users who, at a particular step, did not choose the path the group will follow must “reset” their interaction with the interface.

My comments: I think pluralistic walkthrough has another strength of using a cross-functional team to evaluate usability.

The pluralistic walkthrough appears to be in active use for assessing usability. It is included in the Usability Professionals Association draft body of knowledge [49].
Available reports indicate that pluralistic usability walkthrough is used in industry. While some human factors experts continue to conduct usability walkthroughs that do not combine stakeholder perspectives (e.g., [7], [50], [44]), it seems likely that use of the pluralistic usability
walkthrough is widespread but that teams do not refer to it as such in published reports.

FORMAL USABILITY INSPECTIONS

Formal usability inspection is a review by the interface designer and his or her peers of users’ potential task performance [22].
Like the pluralistic usability walkthrough, this involves stepping through the user’s task. However, because the reviewers consist of human factors experts, the review can be quicker, more thorough, and more technical than in the pluralistic walkthrough. The goal is to identify the maximum number of defects in the interface as efficiently as possible. The review process includes task performance models and heuristics, a variety of human-factors expertise, and defect detection within the framework of the software development lifecycle.
Like the cognitive walkthrough, formal usability inspections require definitions of user profiles and task scenarios. And, like the cognitive walkthrough, the reviewers use a cognitive model of task performance, which can be extended with a checklist of cognitive steps similar to those invoked by Norman [36] to bridge the “gulf of execution.”

Hewlett-Packard used this method for at least two years before 1995. The inspection team included design engineers, usability engineers, customer support engineers, and at times customers. The team inspected fourteen products, finding an average of 76 usability concerns per product and fixing an average of 74 percent of those concerns per product. While no formal evaluation of the results was done, the engineers were found to detect several of the usability concerns themselves, and they enjoyed using the method while increasing their awareness of user needs [15].

Digital Equipment Corporation also conducted a version of formal usability inspections from 1994 to 1995 on ten products. They found an average of 66 usability problems per product and fixed an average of 52 problems per product. Finding even small usability problems proved to be an asset, especially when a number of these problems were easily fixed. As more problems were fixed, the perceived quality of the product improved as well, even if most of these fixes were small [45].

Since then, it appears that little research has been conducted on formal usability inspections. This approach now tends to be grouped into the overall class of inspection methods and is overshadowed by the better-known heuristic evaluation when comparisons between inspection and empirical methods are conducted. As a method, formal usability inspection gains speed at the cost of losing the multiple stakeholder perspectives of the pluralistic walkthrough, and its cognitive model can be seen as less comprehensive than that of the cognitive walkthrough.

CONCLUSION

Both empirical usability testing and usability inspection methods appear to be in wide use, with developers choosing the most appropriate method for their purposes and their context.
For example, half of the ten intranets winning a 2005 competition used heuristic evaluation [34]. The same report indicated that empirical usability testing was used by 80 percent of the winning intranets.

The cognitive walkthrough appears to be in continued use, although reports of use are not as frequent.
The pluralistic usability walkthrough remains in the repertoire of usability experts, although
usability experts continue to conduct user-only usability walkthroughs.
And formal usability inspection, although shown to be an effective approach for identifying usability problems, appears to be used less now than in the mid-1990s.

Many have claimed that usability inspection methods make for faster and more cost-efficient evaluation of the usability of an interface than empirical user testing.
But while usability inspection methods do identify a number of usability problems faster and more cost-efficiently, the best-performing evaluator and method still found only 44 percent of the usability problems found in a laboratory setting [9].
While the cognitive walkthrough is useful for predicting problems on a given task and heuristic evaluation is useful for predicting problems in the interface, empirical testing provides rich information throughout the interface and is the benchmark against which all other methods are measured [9].
Indeed, Jeffries et al. [19] noted that evaluators of usability methods may have rated problems found through empirical usability testing as, on average, more severe precisely because the problems were identified empirically rather than analytically. While inspection methods need expert evaluators to be effective, their strengths are that they can be applied in the early stages of the development cycle and provide a forum in which changes to the interface can be discussed.

The research on comparisons of usability assessment methods suggests several lessons for practitioners.
First, while “faster, cheaper” methods such as heuristic evaluation and the pluralistic usability walkthrough can be useful for rapid iteration early in the design cycle, inspection methods cannot fully substitute for the empirical user testing needed before releasing an interface or Web site to the public.
Second, empirical methods can also be used early in the development process, via “low-tech” versions of interfaces.
Third, developers often combine multiple inspection methods— heuristic evaluation and the cognitive walkthrough—in the same project so that they obtain better coverage of usability issues.
And fourth, adding multiple perspectives—along dimensions such as the range of stakeholders or kinds of usability problems—appears to improve the effectiveness of inspection methods.

It remains an open issue why usability professionals, in practice, rely on single-perspective methods, typically involving users or experts but not both. The evidence from reports of recent uses of heuristic evaluation suggests that many usability specialists are missing the benefits of the pluralistic walkthrough and perspective-based evaluation.

At a deeper level, though, a new direction for research should complement these defect-tracking and learning approaches by seeking to understand the root causes of usability problems. The ideal solution would be to know the reasons for usability problems, so that designers can minimize the effort spent on usability inspection and testing.

My Comments: For my PhD research, shall I choose only 1 UEM (usability evaluation method) or shall I use a hybrid/combination of UEM?

References that I may want to read further in future:
[2] Andreasen, M. S., Nielsen, H., Schrøder, S., and Stage, J. 2007. What happened to remote usability testing?: An empirical study of three methods, Proceedings of the Conference on Human Factors in Computing Systems (SIGCHI 2007), San Jose, CA, April 28-May 3, 2007, 1405-1414.
[3] Bias, R. G. 1994. The pluralistic usability walkthrough: coordinated empathies. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 63-76.
[4] Blackmon, M., Polson, P., Kitajima, M., and Lewis, C. (2002). Cognitive walkthrough for the Web, Proceedings of the Conference on Human Factors in Computing Systems (CHI 2002), Minneapolis, MN, April 20-25, 2002, 463-470.
[6] Bradford, J. (1994). Evaluating high-level design: synergistic use of inspection and usability methods for evaluating early software designs. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 235-253.
[8] Brooks, P. (1994). Adding value to usability testing. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 255-271.
[9] Desurvire, H. (1994). Faster, cheaper!! Are usability inspection methods as effective as empirical testing? In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 173-202.
[16] Gunn, C. (1995). An example of formal usability inspections in practice at Hewlett-Packard company, Proceeding of the Conference on Human Factors in Computing System (CHI 95), Denver, Colorado, May 7-11, 1995, 103-104.
[17] Hovater, J., Krot, M., Kiskis, D. L., Holland, H., & Altman, M. (2002). Usability testing of the Virtual Data Center, Workshop on Usability of Digital Libraries, Second ACM- IEEE Joint Conference on Digital Libraries, Portland, OR, July 14-18, 2002, available at http://www.uclic.ucl.ac.uk/annb/DLUsability/Hovater7.pdf, accessed May 26, 2007.
[18] Jeffries, R. (1994). Usability problem reports: helping evaluators communicate effectively with developers. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 273-294.
[19] Jeffries, R., Miller, J., Wharton, C., and Uyeda, K. (1991). User interface evaluation in the real world: a comparison of four techniques, Proceeding of the Conference on Human Factors in Computing System (CHI 91), New Orleans, LA, April 27-May 2, 1991, 119-124.
[20] Jeffries, R., and Desurvire, H. (1992). Usability testing vs. heuristic evaluation: was there a contest? ACM SIGCHI Bulletin 24(4), 39-41.
[21] John B., and Packer, H (1995). Learning and using the cognitive walkthrough method: A case study approach, Proceedings of the Conference on Human Factors in Computing Systems (SIGCHI 95), Denver, Colorado, May 7-11, 1995, 429-436.
[22] Kahn, M., and Prail, A. (1994). Formal usability inspections, in Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 141-171.
[23] Karat, C.-M 1994. A comparison of user interface evaluation methods, in Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 203-233.
[24] Karat, C.-M., Campbell, R. and Fiegel, T. (1992). Comparison of empirical testing and walkthrough methods in user interface evaluation, Proceedings of the Conference in Human Factors in Computing Systems (CHI 92), Monterey, CA, May 3-7, 1992, 397-404.
[25] Lewis, C., and Polson, P. (1991). Cognitive walkthroughs: A method for theory-based evaluation of user interfaces (tutorial), Proceedings of the Conference on Human Factors in Computing Systems (SIGCHI 91), April 27-May 2, 1991, New Orleans, LA.
[26] Mack, R. and Montaniz, F. (1994). Observing, predicting, and analyzing usability problems. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 295-339.
[27] Mack, R., and Nielsen, J. (1993). Usability inspection methods: Report on a workshop held at CHI'92, Monterey
[29] Muller, M., Dayton, T., and Root, R. (1993). Comparing studies that compare usability assessment methods: an unsuccessful search for stable criteria, INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems, Amsterdam, April 24-29, 1993, 185- 186.
[31] Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces, Proceedings of the Conference on Human Factors in Computing Systems (CHI 90), Seattle, WA, April 1-5, 1990, 249-256.
[32] Nielsen, J. (1992). Finding usability problems through heuristic evaluation, Proceedings of the Conference on Human Factors in Computing System (CHI 92), Monterey, CA, May 3-7, 1992, 373-380.
[33] Nielsen, J. and Mack, R. (eds.) (1994), Usability inspection methods, John Wiley & Sons, Inc., New York.
[34] Nielsen, J. (2005). Ten best intranets of 2005, Jakob Nielsen's Alertbox, February 28, 2005, available at http://www.useit.com/alertbox/20050228.html, accessed May 26, 2007.
[35] Nielsen, J., and Phillips, V, (1993). Estimating the relative usability of two interfaces: heuristic, formal, and empirical methods compared, Proceedings of the Conference on Human Factors in Computing System (CHI 93), Amsterdam, April 24-29, 1993, 214-221.
[37] Novick, D. and Chater, M. (1999). Evaluating the design of human-machine cooperation: The cognitive walkthrough for operating procedures, Proceedings of the Conference on
Cognitive Science Approaches to Process Control (CSAPC 99), Villeneuve d'Ascq, FR, September 21-24, 1999.
[38] Novick, D. (2000). Testing documentation with “low-tech” simulation, Proceedings of IPCC/SIGDOC 2000, Cambridge, MA, September 24-27, 2000, 55-68.
[41] Rieman, J., Franzke, M., and Redmiles, D. (1995). Usability evaluation with the cognitive walkthrough, Proceedings of the Conference on Human Factors in Computing Systems (CHI 95), Denver, Colorado, May 7-11, 1995, 387-388.
[46] Spencer, R. (2000). The streamlined cognitive walkthrough method, Proceedings of the Conference on Human Factors in Computing System (CHI 2000), The Hague, The Netherlands, April 1-6, 2000, 353-359.
[47] Tang, Z., Johnson, T., Tindall, R., and Zhang, J. (2006) Applying heuristic evaluation to improve the usability of a telemedicine system, Telemedicine Journal and E-Health 12(1), 24-34.
[49] Usability Professionals Association (undated). Methods: Pluralistic usability walkthrough, available at http://www.usabilitybok.org/methods/p2049, accessed May 26, 2007.
[50] User-Centered Web Effective Business Solutions (2007). Heuristic usability evaluation, available at http://www.ucwebs.com/usability/web-site-usability.htm, accessed May 27, 2007.
[51] Wharton, C., Bradford, J., Jeffries, R., and Franzke, M. (1992). Applying cognitive walkthroughs to more complex user interfaces: Experiences, issues, and recommendations, Proceedings of the Conference on Human Factors in Computing System (CHI 92), Monterey, CA, May 3-7, 1992, 381-388.
[52] Wharton, C. and Lewis, C. 1994. The role of psychological theory in usability inspection methods. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 341-350
[53] Wharton, C., Rieman, J., Lewis, C., and Polson, P. (1994). The Cognitive Walkthrough Method: A Practitioner’s Guide. In Nielsen, J. and Mack, R. (eds.), Usability inspection methods, John Wiley & Sons, Inc., New York, 1994, 105-140.
[55] Zhang, Z., Basili, V., Shneiderman, B. (1998). An empirical study of perspective-based usability inspection, Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, Santa Monica, CA, October 5-9, 1998, 1346-1350.

My Comments: This is a very rich resource of reference materials. GREAT!! Extremely good for my literature review on the subject of "Usability Evaluation Methods." I should get hold of Nielsen and Mack's book, "Usability Inspection Methods."

Saturday, August 29, 2009

Aug 29 - Hollingsed & Novick, Usability Inspection Methods after 15 Years of Research and Practice

Usability Inspection Methods after 15 Years of Research and Practice.
Tasha Hollingsed. Lockheed Martin, 2540 North Telshor Blvd., Suite C, Las Cruces, NM 88011. +1 505-525-5267 tasha.hollingsed@lmco.com
David G. Novick. Department of Computer Science, The University of Texas at El Paso, El Paso, TX 79968-0518. +1 915-747-5725 novick@utep.edu

SIGDOC’07, October 22–24, 2007, El Paso, Texas, USA.

ABSTRACT
Usability inspection methods, such as heuristic evaluation, the cognitive walkthrough, formal usability inspections, and the pluralistic usability walkthrough, were introduced fifteen years ago. Since then, these methods, analyses of their comparative effectiveness, and their use have evolved in different ways. In this paper, we track the fortunes of the methods and analyses, looking at which led to use and to further research, and which led to relative methodological dead ends. Heuristic evaluation and the cognitive walkthrough appear to be the most actively used and researched techniques. The pluralistic walkthrough remains a recognized technique, although not the subject of significant further study. Formal usability inspections appear to have been incorporated into other techniques or largely abandoned in practice. We conclude with lessons for practitioners and suggestions for future research.


By the early 1990s usability had become a key issue, and methods for assuring usability burgeoned. A central argument in the field was the relative effectiveness of empirical usability testing versus other, less costly, methods (see, e.g., [10], [19], [20], [24]). Full-blown usability testing was effective but expensive.
Other methods, generally known under the category of usability inspection methods [33], held the promise of usability results that kept costs low by relying on expert review or analysis of interfaces rather than by observing actual users empirically. Several different approaches to usability inspection were proposed, including heuristic evaluation [32], the cognitive walkthrough [51], the pluralistic walkthrough [3], and formal inspections [22].

To provide perspective on usability inspection methods in light of the 15 years of research and practice since the 1992 workshop, we review subsequently reported work for the four principal approaches described in [33]: heuristic evaluation, the cognitive walkthrough, formal usability inspections, and the pluralistic usability walkthrough.


HEURISTIC EVALUATION

In 1990, Nielsen and Molich introduced a new method for evaluating user interfaces called heuristic evaluation [28], [31]. The method involves having a small group of usability experts evaluate a user interface using a set of guidelines and noting the severity of each usability problem and where it exists. They found that the aggregated results of five to ten evaluators of four interfaces identified 55 to 90 percent of the known usability problems for those interfaces.
They concluded that heuristic evaluation was a cheap and intuitive method for evaluating the user interface early in the design process. This method was proposed as a substitute for
empirical user testing [32], [35].

One study [19] compared the four best-known methods of usability assessment: empirical usability testing, heuristic evaluation, the cognitive walkthrough, and software guidelines. The study reported that heuristic evaluation found more problems than any other evaluation method, while usability testing revealed more severe problems, more recurring problems, and more global problems than heuristic evaluation. Other researchers found that heuristic evaluation found more problems than a cognitive walkthrough, but only for expert evaluators [10]. System designers and non-experts found the same number of problems with both heuristic evaluation and cognitive walkthrough. Another study [24], however, found empirical testing to yield more severe problems than inspection methods and ascribed differences from the results found in [19] to differences in evaluator expertise.

Nielsen examined the role of expertise as a factor in the effectiveness of heuristic evaluation [32]. He compared evaluation results from three distinct groups of evaluators: novices, usability experts, and double experts, who have expertise both in usability and in the particular type of interface being evaluated. As could be expected, the double experts found more problems than the other two groups... Nielsen concluded that if only regular usability experts can be obtained, then a larger number of evaluators (between three and five) are needed than if double experts are available.

Jeffries and Desurvire [20] challenged the idea that heuristic evaluation could substitute for empirical testing by clearly listing the disadvantages of heuristic evaluation found throughout these studies.
A first disadvantage is that the evaluators must be experts, as suggested by Nielsen’s findings earlier in the same year.
A second disadvantage is that several evaluation experts are needed. The authors pointed out that it is difficult for some developers to obtain even one expert, much less several, and that using these experts several times throughout the development cycle, as Nielsen suggests, can become costly.
A third disadvantage is the cost. Most of the issues identified by heuristic evaluation in the studies were minor, and few of the severe issues were identified. Another cost is that some of the issues may be “false alarms,” issues that may never bother users in actual use.

...Nielsen responded to these comparison studies with one of his own [35]. He found that while user testing is more expensive than heuristic evaluation, it also provides a better prediction of the possible issues within the interface.

The consensus view appears to be that empirical testing finds more severe issues that will likely impede the user, but at a greater cost to the developer. Heuristic evaluation finds many of the issues within the interface, but more cheaply and earlier.

Assessment of the effectiveness of heuristic evaluation continues as an active research thread.

(To be continued.....)

Aug 29 - Masemola & de Villiers, Towards a Framework for Usability Testing of Interactive e-Learning Applications in Cognitive Domains...

Towards a Framework for Usability Testing of Interactive e-Learning Applications in Cognitive Domains, Illustrated by a Case Study.
S.S. (THABO) MASEMOLA AND M.R. (RUTH) DE VILLIERS. University of South Africa
S.S. Masemola, School of Computing, University of South Africa, P O Box 392, UNISA, 0003, South Africa; masemss@unisa.ac.za.
M.R. de Villiers, School of Computing, University of South Africa, P O Box 392, UNISA, 0003, South Africa; dvillmr@unisa.ac.za.
Proceedings of SAICSIT 2006, Pages 187 –197

Abstract
Testing has been conducted in a controlled usability laboratory on an interactive e-learning application that teaches mathematical skills in a cognitive domain. The study obtained performance measures and identified usability problems, but was focused primarily on using the testing technology to investigate distinguishing aspects of such applications, such as time usage patterns in domains where rapid completion is not necessarily a performance indicator. The paper addresses the issue of what, actually, is meant by ‘usability’ in learning environments. A pilot study identified obstacles and served to enhance the main study. Thinking-aloud on the part of participants provided useful data to support analysis of the performance measures, as can benchmarks and best case measures. Particular attention must be paid to ethical aspects. Features emerging from this study can contribute to a framework for usability testing and usage pattern analysis of interactive e-learning applications in cognitive domains.

What is Usability Testing?
Usability testing is a software evaluation technique that involves measuring the performance of typical end-users as they undertake a defined set of tasks on the system being investigated. It commenced in the early 1980s, as human-factors professionals studied subjects using interfaces under real-world or controlled conditions and collected data on problems that arose ('human factors' is an early term for the human-computer interaction discipline). It has been shown to be an effective method that rapidly identifies problems and weaknesses, and is particularly used to improve the usability of products [Dumas, 2003; Dumas and Redish, 1999; Jeffries et al, 1991].
Since the early to mid-1990s, such testing has been empirically conducted in specialized controlled environments called usability laboratories, equipped with sophisticated monitoring and recording facilities for formal usability testing, supported by analytical software tools. It is an expensive technique. Participants, who are real end-users, interact with the product, performing specified representative tasks. Their actions can be rigorously monitored and recorded in various ways: by videotape, for subsequent review; by event logging, down to keystroke level; and by audio, to note verbalization and expressions.
The data, in both quantitative and qualitative forms, is analysed and changes can be recommended. Typical usability metrics include the time taken to complete a task, degree of completion, number of errors, time lost by errors, time to recover from an error, number of subjects who successfully completed a task, and so on [Avouris, 2001; Dix, Finlay, Abowd & Beale, 2004; ISO 9241, 1997; Preece, Rogers & Sharp, 2003; Wesson, 2002].
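Since my own study will need these measures, a small sketch of how the metrics listed above could be computed from a logged session; the event-log format is my own invention, not anything from the paper:

```python
# Hypothetical sketch: deriving time-on-task, error count and completion from
# a simple event log. The Event format is my own invention, not the paper's.
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since the session started
    kind: str          # "task_start", "task_end", "error", ...

def task_metrics(log):
    start = next(e.timestamp for e in log if e.kind == "task_start")
    ends = [e.timestamp for e in log if e.kind == "task_end"]
    errors = sum(1 for e in log if e.kind == "error")
    return {
        "completed": bool(ends),
        "time_on_task": (ends[0] - start) if ends else None,
        "errors": errors,
    }

# A participant starts at t=0, makes two errors, finishes at t=95 seconds.
log = [Event(0, "task_start"), Event(30, "error"),
       Event(55, "error"), Event(95, "task_end")]
print(task_metrics(log))   # {'completed': True, 'time_on_task': 95, 'errors': 2}
```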

The primary targets of usability testing are the user interface and other interactive aspects. Such testing is used by academics for research and development, and also by usability practitioners in the corporate environment for rapid refinement of interfaces and analysis of system usability.

What is e-Learning?
Some definitions of e-learning equate it solely with use of the Internet in instruction and learning, but others [CEDEFOP, 2002; Wesson & Cowley, 2003] are broader, including multiple formats and methodologies such as the Internet and Web-based learning (WBL), multimedia CD-ROM, online instruction, educational software/courseware, and traditional computer-assisted learning (CAL). This approach suits the present study, which views e-learning as a
broad range of learning technologies encompassing various roles for technology, including interactive educational software, web-based learning, learning management systems, and learners using computers as tools [De Villiers, 2005].

Usability Testing
Dumas [2003] lists six defining characteristics of usability tests, while a seventh and eighth are obtained from Dumas and Redish [1999]:
1. The focus is usability.
2. Participants are end users or potential end users.
3. There is an artifact to evaluate, which may be a product design, a system or a prototype.
4. The participants think aloud as they progress through tasks.
5. Data is recorded and analysed.
6. The results are communicated to appropriate audiences (often a corporate client).
7. Testing should cover only a few features of the product, incorporated in selected tasks.
8. Each participant should spend approximately an hour doing the stated tasks.

Our methodology and test plan are based on general methodologies for formal usability testing [Pretorius, Calitz & Van Greunen, 2005; Rubin, 1994; Van Greunen & Wesson, 2002] but with some distinguishing features, such as the emphasis on participants thinking aloud and the use of a best case for initial benchmarking... The broad methodology involves the following steps:
─ Set up objectives in line with research questions.
─ Determine the aspects to be measured and their metrics.
─ Formulate documents: Initial test plan, task list, information document for participants, checklist for administrator, and determine a means of investigating satisfaction.
─ Acquire representative participants.
─ Conduct a pilot test.
─ Refine the test plan, task list, and information document for the main usability test in the light of the pilot.
─ Conduct usability test.
─ Determine means of analysis and presentation that address the unique, as well as the usual, aspects.
─ Draw conclusions and make proposals for the way forward.


Conclusion

1. In addition to the identification of performance measures and problems, how can testing in a usability laboratory elicit valuable information about cognitive e-learning applications?
We propose that usability testing of e-learning applications should address both the interfaces and the learning content, because usability and functionality are closely related in e-learning. Performance metrics generated during learning activities can be used towards measuring the effectiveness aspect of usability, because success in the cognitive processing induced by learning functionality is fundamental to both usability and utility. It could be argued that conventional usability testing should be conducted on user interfaces only, to investigate users’ experiences with navigation features and the menus only. This could lead to improved interaction design, but it would be a paltry use of the capacity of the monitoring and analytical features in usability laboratories.

2. What activities/outputs yield meaningful information about interactive applications in such domains?
Innovative use of the usability lab technology led to usage analysis to identify usage patterns. Using data from thinking-aloud by subjects to clarify how they used their time, a clear distinction emerged between time spent navigating and time spent in cognitive activities. The act of configuring an environment or system to one’s own needs is called ‘incorporated subversion’ by Squires [1999]. In the present context, incorporated subversion of the usability testing technology led to added-value use of the laboratory and its software in novel and adaptable ways.

3. What notable features emerge from this study that can contribute specifically to a framework for usability testing of interactive e-learning in cognitive domains?
Generic frameworks and methodologies can be established, then customized for optimal usability testing of different kinds of e-learning applications and environments.
─ Attention to ethical aspects is of vital importance in this close-up recording of personal human activities.
─ A pilot study is essential to support the sensitive evolving plans and critical judgements that must be made.
─ This case study established the value of thinking out loud as a source of data, provided it is preceded by adequate preparation of participants.
─ The more fine-grained the tasks selected for testing, the better the data that is recorded.
─ Regarding the number of subjects, five are sufficient to identify usability problems, but not enough to conduct serious analysis of learning and cognitive patterns. In this study the five participants generated valuable initial data on usage patterns and learning styles. Future in-depth research should be undertaken on low-level, very fine-grained tasks, accompanied by detailed analysis. With more data, realistic averages could be obtained to serve as benchmarks against which to compare future usage studies on small groups or single subjects.
─ For the present, tentative benchmarks and best case measures were obtained. The best case provides a realistic optimum standard.

References that I may want to read further in future:
DE VILLIERS, M.R. 2004. Usability evaluation of an e-Learning tutorial: criteria, questions and a case study. In: G. Marsden, P. Kotzé, & A. Adesina-Ojo (Eds), Fulfilling the promise of ICT. Proceedings of SAICSIT 2004. ACM International Conference Proceedings Series.
DIX, A., FINLAY, J., ABOWD, G.D. and BEALE, R. 2004. Human-Computer Interaction. Pearson Education, Ltd, Harlow.
DUMAS, J.S. 2003. User-based evaluations. In: J.A. Jacko & A. Sears (Eds), The Human-Computer Interaction Handbook. Mahwah: Lawrence Erlbaum Associates.
DUMAS, J.S. and REDISH, J.C. 1999. A practical guide to usability testing. Exeter: Intellect.
FAULKNER, X. 2000. Usability Engineering. Houndsmills: Macmillan Press.
ISO 9241. 1997. Draft International Standard: Ergonomic requirements for office work with visual display terminals (VDT). Part 11: Guidance on Usability, ISO.
NIELSEN, J. 2000. Why you only need to test with five users. http://www.useit.com/alertbox/20000319.html . Accessed March 2006.
SQUIRES, D. and PREECE, J. 1999. Predicting quality in educational software: Evaluating for learning, usability and the synergy between them. Interacting with Computers 11(5): 467–483.
VAN GREUNEN, D. and WESSON, J L. 2002. Formal usability testing of interactive educational software: A case study. World Computer Congress (WCC): Stream 9: Usability. Montreal, Canada, August 2002.
WESSON, J L. 2002. Usability evaluation of web-based learning: An essential ingredient for success, TelE-Learning 2002: 357- 363, Montreal, Canada, August 2002.
WESSON, J.L. and COWLEY, N. L. 2003. The challenge of measuring e-learning quality: Some ideas from HCI. IFIP TC3/WG3.6 Working Conference on Quality Education @ a Distance: 231-238, Geelong, Australia, February 2003.

Friday, August 28, 2009

Aug 29 - Ssemugabi & de Villiers, A Comparative Study of Two Usability Evaluation Methods Using a Web-Based E-Learning Application



A Comparative Study of Two Usability Evaluation Methods Using a Web-Based E-Learning Application.
Samuel Ssemugabi. School of Information Technology, Walter Sisulu University, Private Bag 1421, East London, 5200, South Africa. +27 43 7085407 ssemugab@wsu.ac.za
Ruth de Villiers. School of Computing, University of South Africa, P O Box 392, Unisa, 0003, South Africa +27 12 429 6559 dvillmr@unisa.ac.za
SAICSIT 2007, 2 - 3 October 2007, Fish River Sun, Sunshine Coast, South Africa.

ABSTRACT
Usability evaluation of e-learning applications is a maturing area, which addresses interfaces, usability and interaction from human-computer interaction (HCI) and pedagogy and learning
from education. The selection of usability evaluation methods (UEMs) to determine usability problems is influenced by time, cost, efficiency, effectiveness, and ease of application. Heuristic
evaluation (HE) involves evaluation by experts with expertise in the domain area and/or HCI.
This comparative evaluation study investigates the extent to which HE identifies usability problems in a web-based learning application and compares the results with those of survey evaluations among end-users (learners).
Severity rating was conducted on a consolidated set of usability problems, and a further comparison of findings was done on the major and minor problems. The results of HE correspond closely with those of the survey. However, the four expert evaluators identified more problems than the 61 learners and, when only major problems were considered, identified 91% of the learners' problems. HE by a competent and balanced set of experts showed itself to be an appropriate, efficient and effective UEM for e-learning applications.

Usability is a key issue in human-computer interaction (HCI) since it is the aspect that commonly refers to quality of the user interface [30].
The International Standards Organisation (ISO) defines usability as [16]: The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context.

Usability evaluation is concerned with gathering information about the usability or potential usability of a system, in order to assess it or to improve its interface by identifying problems and
suggesting improvements [35]. Various usability evaluation methods (UEMs) exist, e.g. analytical, expert heuristic evaluation, survey, observational, and experimental methods [6,
13, 35].
To evaluate the usability of a system and to determine usability problems, it is important to select an appropriate UEM/s [10, 37], taking cognisance of efficiency, time, cost-effectiveness, ease of application, and expertise of evaluators [13, 30].

Evaluation of e-learning should address aspects of pedagogy and learning from educational domains as well as HCI factors such as the effectiveness of interfaces and the quality of usability and interaction.


Heuristic evaluation (HE) is a usability inspection technique originated by Nielsen [26, 27], in which a small set of expert evaluators, guided by a set of usability principles known as heuristics, determine whether a system conforms to these and identify specific usability problems in the system. It is the most widely used UEM for computer system interfaces. It is described as fast, inexpensive, and easy to perform, and can result in major improvements to user interfaces [4, 5, 19, 24].

Usability features are frequently not considered in development, sometimes because instructors and courseware developers are not trained to do so or because they lack the technological skills [41]. However, a main reason for the oversight is that usability evaluation can be difficult, time-consuming and expensive [20]. If, in addition to its attributes of low cost and relative simplicity, HE is shown to be effective, efficient, and sufficient to identify the problems that impede learners, it would qualify as an ideal method for evaluating web-based e-learning.

The main components of the research design
were:
1. Identification of the target application.
2. Generation of a set of criteria/heuristics suitable for usability evaluation of e-learning applications, in particular, WBL environments.
3. End-user evaluations by means of criterion-based learner surveys on the target application using questionnaires and a focus group interview.
4. An heuristic evaluation with a competent and complementary set of experts, using the synthesised set of criteria as heuristics, followed by severity rating on the consolidated set of problems.
5. Data analysis to answer the research question, investigating in particular how the usability problems identified by heuristic evaluation compare with those identified by end users in the learner surveys.
Evaluation criteria/heuristics (terms used interchangeably) for e-learning should address interfaces, usability and interaction from HCI, as well as pedagogy and learning from education.
A study was undertaken by the authors [38] to establish an appropriate set of 20 criteria within three categories for evaluating WBL applications.
This multi-faceted framework (see Table 1) supports Hosie and Schibeci's concept [15] of 'context-bound'/'context-related' evaluation. Nielsen's [27] heuristics form the basis of Category 1, extended and customised for educational purposes by using Squires and Preece's [36] 'learning with software' heuristics. Other sources used in generating the criteria were [1], [18], [19], [23], [34], [35] and [39].
Each criterion has a list of associated subcriteria or guidelines to help evaluators assess Info3Net. These are also shown in Table 1, but they can be customised to other contexts.

Category 1: General interface usability criteria (based on Nielsen’s heuristics, modified for e-learning context)
1 Visibility of system status
2 Match between the system and the real world i.e. match between designer model and user model
3 Learner control and freedom
4 Consistency and adherence to standards
5 Error prevention, in particular, prevention of peripheral usability-related errors [36]
6 Recognition rather than recall
7 Flexibility and efficiency of use
8 Aesthetics and minimalism in design
9 Recognition, diagnosis, and recovery from errors
10 Help and documentation

Category 2: Website-specific criteria for educational websites
11 Simplicity of site navigation, organisation and structure
12 Relevance of site content to the learner and the learning process

Category 3: Learner-centred instructional design, grounded in learning theory, aiming for effective learning
13 Clarity of goals, objectives and outcomes
14 Effectiveness of collaborative learning (where such is available)
15 Level of learner control
16 Support for personally significant approaches to learning
17 Cognitive error recognition, diagnosis and recovery
18 Feedback, guidance and assessment
19 Context meaningful to domain and learner
20 Learner motivation, creativity and active learning


SURVEY EVALUATIONS AMONG END-USERS (LEARNERS)
Query techniques, such as questionnaires and interviews, aim to identify usability problems by asking users directly [9]. The questionnaire was designed using Gillham's principles [12]. The first section elicited basic demographic information and details of the respondents' experience. The main section was based on the 20 criteria of Section 5.1. For each criterion, straightforward, focused, single-issue statements, based on the sub-criteria, were generated to expand the meaning. Students rated Info3Net using the criteria, but a main aim was for them to use each criterion to name problems they had experienced in that regard and write them in the space below.
The survey used a 5-point Likert scale: Strongly agree, Agree, Maybe, Disagree, Strongly disagree.
Eighty registered students had used Info3Net during the semester, 61 of whom took part in the questionnaire survey, which was conducted on a single day in the usual laboratory setting, but in two separate groups. Sixty-four problems were identified. To eliminate duplicates, a researcher meticulously considered them and combined those that were closely related. During consolidation, some were rephrased to correspond with the terminology used by the expert evaluators (7.4).
For further data, the questionnaire was followed by a focus group interview with eight students, as advocated by [35]. This clarified and elaborated problems identified by the questionnaire. More emerged, resulting in a final list of 55 problems.
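For my own questionnaire I will need to score such responses. A small sketch of one possible coding of the 5-point scale per criterion; the numeric mapping and the per-criterion averaging are my own assumptions, not described in the paper:

```python
# Hypothetical coding of the 5-point Likert scale used in the survey; the
# numeric mapping and the per-criterion averaging are my own assumptions.
LIKERT = {"Strongly agree": 5, "Agree": 4, "Maybe": 3,
          "Disagree": 2, "Strongly disagree": 1}

def mean_score_per_criterion(responses):
    """responses maps a criterion name to the list of answers it received."""
    return {criterion: sum(LIKERT[a] for a in answers) / len(answers)
            for criterion, answers in responses.items() if answers}

print(mean_score_per_criterion({
    "Visibility of system status": ["Agree", "Strongly agree", "Maybe"],
}))   # {'Visibility of system status': 4.0}
```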

HEURISTIC EVALUATION BY EXPERTS
Heuristic evaluation (HE) is an inspection technique whereby a set of experts evaluate whether a user interface conforms to defined usability principles, termed heuristics [9, 26, 33]. It is seldom possible for a single evaluator to identify all the usability problems in a system. Nielsen [27] recommends that a set of three to five evaluators be used to identify 65-75% of the usability problems. Our approach is based on Nielsen's, using our custom-designed heuristics for web-based learning. The HE was supplemented by a severity rating process in which the experts ranked the final integrated set of problems according to level of seriousness.
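The 65-75% figure is consistent with the commonly cited problem-discovery model, found(n) = 1 - (1 - p)^n with p roughly 0.3 for a single evaluator; the formula and the value of p come from that general literature (Nielsen's work on evaluator numbers), not from this paper. A quick check of the arithmetic:

```python
# Quick arithmetic check of the problem-discovery curve found(n) = 1-(1-p)**n,
# with p ~= 0.3 (average share of problems one evaluator finds). The formula
# and p are commonly cited values, not numbers taken from this paper.
p = 0.3
for n in range(1, 6):
    print(n, round(1 - (1 - p) ** n, 2))
# 1 0.3, 2 0.51, 3 0.66, 4 0.76, 5 0.83: three to five evaluators land
# roughly in (or a little above) the 65-75% range quoted above.
```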

Identifying and Defining the Heuristics
The heuristics used were 15 of the 20 used in the end-user survey … and the results of the two evaluations are compared with regard to those 15. Each heuristic was divided into sub-criteria, as shown in Table 1.

Selection of Evaluators
Factors involved in selecting and inviting a balanced set of experts are the number to use and their respective backgrounds.
Nielsen’s [27] cost-benefit analysis demonstrated optimal value with three or four evaluators. Despite this, the debate continues. [3] used eleven experts to assess usability of a university Web portal. Law and Hvannberg [22] reject the ‘magic five assumption’ and in the context of usability testing, used eleven participants to define 80% of the detectable usability problems. However, [19] argue that two to three evaluators who are experts both in the domain area and in HCI, so-called ‘double experts’, will point out the same number of usability problems as three to five ‘single experts’. Furthermore, in a heuristic evaluation of an educational multimedia application by Albion [1], four evaluators were selected, with expertise in user interface design, instructional/educational
design, and teaching.
The same approach was followed in this study. Four expert evaluators with expertise in user interface design, instructional/educational design, and teaching, were invited and agreed to participate. Two are lecturers in the subject-matter domain and are also HCI experts, familiar with HE; these two can be classified as ‘double experts’.

Briefing the Evaluators
Evaluators were briefed in advance about the HE process for the study, the domain of use of the target system, and the task scenarios to work through as advocated by [23] and [27]. In addition to the consent form (see 5.2) and a request to familiarise themselves with the heuristics before doing the actual evaluation, each evaluator was given a set of documents regarding the:
• Phases of the HE
• System and user profile
• Procedure.

Severity Rating
Severity rating, i.e. assigning relative severities to individual problems, can be performed to determine each problem's level of seriousness. It is usually estimated on a 3- or 5-point Likert
scale. The experts can either do ratings during their HEs or later, when all the problems have been aggregated.
The latter approach is advantageous [1, 23, 24] and is used in the present study. Table 5 shows the 5-point rating scale [32] used to assess the problems and assign severities. The final option indicates a non-problem.
The scale is similar to that used by Nielsen [27], with an additional rating item, 'Medium', between the 'Major' and 'Minor' problem ratings.
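As a note for my own study, a tiny sketch of how the experts' independent severity ratings might be aggregated into one rating per problem once all problems are pooled; rounding the mean is my assumption, since the paper does not state the aggregation rule:

```python
# Hypothetical aggregation of per-expert severity ratings (higher = more
# severe). Taking the rounded mean is my own assumption, not the procedure
# described in the paper.
from statistics import mean

def aggregate_severity(ratings):
    """ratings maps each problem to the list of severities the experts gave."""
    return {problem: round(mean(rs)) for problem, rs in ratings.items() if rs}

print(aggregate_severity({"Ambiguous menu labels": [3, 4, 3, 3]}))
# {'Ambiguous menu labels': 3}
```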

Conclusion
The conclusion from these findings of a comparative evaluation study is that the results of heuristic evaluation by experts correspond closely with those of survey evaluation among end-users (learners). In fact, the HE results are better than the survey results (see Table 6). They were produced by only four experts compared to 61 learners, and the experts were experiencing their first exposure to Info3Net, whereas the learners had used it for a semester.
The findings of this study indicate that heuristic evaluation, if conducted by a competent and complementary group of experts, is an appropriate, efficient, and highly effective usability evaluation method in the context of e-learning, as well as relatively easy to conduct and inexpensive. The researchers recommend that HE should, ideally, be supplemented with methods where users themselves identify usability or learning problems. This is in line with proposals that reliable evaluation can be achieved by systematically combining inspection with
user-based methods [2, 7]. In cases where only one approach has to be selected, the findings of this research can be used to propose heuristic evaluation as the optimal method.
A valuable secondary benefit of the study is that the problems identified in Info3Net can be addressed in future upgrades.
Another major contribution of this research is the generation of the framework of evaluation criteria, which is transferable to other e-learning contexts, where the sub-criteria can be customised to the particular environment or system.

Recommendations For Further Research
• Re-designing Info3Net in an action research approach, to solve some of the problems identified. The application could be re-evaluated to determine the impact of the changes.
• Applying the evaluation criteria generated in this study to evaluate other web-based e-learning applications.
• Adapting the framework of criteria to customise them for other contexts or other forms of e-learning.
• Determining which criteria are suitable for application by educators only and which by learners only.

References that I may want to read further in the future:
[1] Albion, P.R. 1999. Heuristic Evaluation of Multimedia: From Theory to Practice.
http://www.usq.edu.au/users/albion/papers/ascilite99.html
[2] Ardito, C., Costabile, M.F., De Marsico, M., Lanzilotti, R., Levialdi, S., Roselli, T. & Rossano, V. 2006. An Approach to Usability Evaluation of E-Learning Applications. Universal Access in the Information Society, 4(3): 270-283.
[4] Belkhiter, N., Boulet, M., Baffoun, S. & Dupuis, C. 2003. Usability Inspection of the ECONOF System's User Interface Visualization Component. In: C. Ghaoui. (Ed.), Usability Evaluation of Online Learning Programs. Hershey, P.A.: Information Science Publishing.
[7] Costabile, M.F., De Marsico, M., Lanzilotti, R., Plantamura, V.L. & Roselli, T. 2005. On the Usability Evaluation of E-Learning Applications. In: Proceedings of the 38th Hawaii International Conference on System Sciences: 1-10. Washington: IEEE Computer Society.
[8] De Villiers, R. 2006. Multi-Method Evaluations: Case Studies of an Interactive Tutorial and Practice System. In: Proceedings of InSITE 2006 Conference. Manchester, United Kingdom.
[12] Gillham, B. 2000. Developing a Questionnaire. London: Bill Gillham.
[14] Hartson, H.R., Andre, T.S. & Williges, R.C. 2003. Criteria for Evaluating Usability Evaluation Methods. International Journal of Human-Computer Interaction, 15(1): 145-181.
[15] Hosie, P. & Schibeci, R. 2001. Evaluating Courseware: A need for more context bound evaluations? Australian Educational Computing, 16(2):18-26.
[18] Jones, A., Scanlon, E., Tosunoglu, C., Morris, E., Ross, S., Butcher, P. & Greenberg, J. 1999. Contexts for Evaluating Educational Software. Interacting with Computers, 11(5): 499-516.
[19] Karoulis, A. & Pombortsis, A. 2003. Heuristic Evaluation of Web-Based ODL Programs. In: C. Ghaoui. (Ed.), Usability Evaluation of Online Learning Programs. Hershey, P.A.: Information Science Publishing.
[25] Masemola, S.S. & De Villiers, M.R. 2006. Towards a Framework for Usability Testing of Interactive E-Learning Applications in Cognitive Domains, Illustrated by a Case Study. In: J. Bishop & D. Kourie. Service-Oriented Software and Systems. Proceedings of SAICSIT 2006: 187-197. ACM International Conference Proceedings Series.
[26] Nielsen, J. 1992. Finding Usability Problems through Heuristic Evaluation. In: Proceedings of the SIGCHI Conference on Human factors in Computing Systems: 373-380. Monterey: ACM Press.
[27] Nielsen, J. 1994. Heuristic Evaluation. In: J. Nielsen & R.L. Mack. (Eds), Usability Inspection Methods. New York: John Wiley & Sons.
[28] Nielsen, J. & Molich, R. 1990. Heuristic Evaluation of User Interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People: 249-256. Seattle: ACM Press.
[30] Parlangeli, O., Marchingiani, E. & Bagnara, S. 1999. Multimedia in Distance Education: Effects of Usability on Learning. Interacting with Computers, 12(1): 37-49.
[31] Peng, L.K., Ramaiach, C.K, & Foo, S. 2004. Heuristic-Based User Interface Evaluation at Nanyang Technological University in Singapore. Program: Electronic Library and Information Systems, 38 (1): 42-59.
[32] Pierotti, D. 1996. Usability Techniques: Heuristic Evaluation Activities. http://www.stcsig.org/usability/topics/articles/heactivities.html
[36] Squires, D. & Preece, J. 1999. Predicting Quality in Educational Software: Evaluating for Learning, Usability and the Synergy Between them. Interacting with Computers, 11(5): 467-483.
[37] Ssemugabi, S. 2006. Usability Evaluation of a Web-Based E-Learning Application: A Study of Two Evaluation Methods. MSc Dissertation, University of South Africa.
[38] Ssemugabi, S. & De Villiers, M.R. 2007. Usability and Learning: A Framework for Evaluation of Web-Based E-Learning Applications. In: Proceedings of the ED-MEDIA 2007 - World Conference on Educational Multimedia, Hypermedia & Telecommunications: 906-913. Vancouver, Canada.
[39] Storey, M.A, Phillips, B., Maczewski, M. & Wang, M. 2002. Evaluating the Usability of Web-Based Learning Tools. Educational Technology & Society, 5(3):91-100.
[40] Van Greunen, D. & Wesson, J.L. 2002. Formal Usability Testing of Interactive Educational Software: A Case Study. World Computer Congress (WCC): Stream 9: Usability. Montreal, Canada, August 2002.
[41] Vrasidas, C. 2004. Issues of Pedagogy and Design in Elearning System. In: ACM Symposium on Online Learning: 911-915. Nicosia: ACM Press.
[42] Wesson, J.L. & Cowley, N. L. 2003. The Challenge of Measuring E-Learning Quality: Some Ideas from HCI. IFIP TC3/WG3.6 Working Conference on Quality Education @ a Distance: 231-238. Geelong, Australia.
[43] Zaharias, P. 2006. A Usability Evaluation Method for ELearning: Focus on Motivation to Learn. In: CHI '06 Extended Abstracts on Human Factors in Computing Systems: 1571-1576. Montreal: ACM Press.

My Comments: Ssemugabi and de Villiers had listed many references that may be useful to my research. Good hints for literature review. :)

Aug 29 - Jokela et al, ..Standard Definition of Usability: Analyzing ISO 13407 against ISO 9241-11

The Standard of User-Centered Design and the Standard Definition of Usability: Analyzing ISO13407 against ISO9241-11.
Timo Jokela, Netta Iivari. Oulu University, P.O. Box 3000, 90014 Oulu, Finland. +358 8 5531011 {timo.jokela, netta.iivari}@oulu.fi
Juha Matero, Minna Karukka. Nokia, P.O. Box 50, 90571 Oulu, Finland. {juha.p.matero, minna.karukka}@nokia.com


ABSTRACT
ISO 9241-11 and ISO 13407 are two important standards related to usability: the former provides the definition of usability and the latter guidance for designing usability. We carried out an interpretative analysis of ISO 13407 from the viewpoint of the standard definition of usability from ISO 9241-11. The results show that ISO 13407 provides only partial guidance for designing usability as presumed by the definition. Guidance for describing users and environments is provided, but very limited guidance is provided for the descriptions of user goals and usability measures, and generally for the process of producing the various outcomes.


Probably the best known definition of usability is by Nielsen: usability is about learnability, efficiency, memorability, errors, and satisfaction [16].

However, the definition of usability from ISO 9241-11 (Guidance on usability) [11] – “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” - is becoming the main reference of usability.

In addition to being largely recognized in the literature, this ‘standard’ definition of usability is used in the recent Common Industry Format, CIF, for usability testing [1].

To improve the usability of software and information systems, the paradigm of user-centered design (UCD) has been proposed by a number of method and methodology books, starting from Nielsen [16], through ones published in the late 90’s, [8], [4], [5], [15], and ending with a set of very recent ones, [17] and [18].
(UCD is called ‘human-centred design’ in ISO 13407; it is also called ‘usability engineering’.)
My Comments: I think this article is a good resource for "Definition of Usability."

ISO 13407 [9], Human-centred design processes for interactive systems, is a standard that provides guidance for user-centered design. ...it describes usability at a level of principles, planning and activities. A third important aspect is that ISO 13407 explicitly uses the standard definition of usability from ISO 9241-11 as a reference for usability.

Usability is defined in ISO 9241-11 [11] as follows:
Usability: The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
The terms are further defined as follows:
Effectiveness: the accuracy and completeness with which users achieve specified goals
Efficiency: the resources expended in relation to the accuracy and completeness with which users achieve goals
Satisfaction: freedom from discomfort, and positive attitude to the use of the product
Context of use: characteristics of the users, tasks and the organizational and physical environments
Goal: intended outcome
Task: activities required to achieve a goal

Generally, this definition of usability is a ‘broad’ approach to usability [2]: usability is about supporting users in achieving their goals in their work; it is not only a characteristic of a user interface.

..usability is a function of users of a product or a system (specified users). Further, for each user, usability is a function of achieving goals in terms of a set of attributes (i.e. effectiveness, efficiency and satisfaction) and environment of use.
As an example, one usability measure of a bank machine could be:
• 90% of users achieve the goal (Es) in less than 1 minute (Ey) with an average satisfaction rating of ‘6’ (S) when users are novice ones (U), and they want to have a desired sum of cash withdrawn (G) with any bank machine (Et).

The analysis of the definition of usability shows that one needs to determine the following outcomes when the definition is used in a development project:
(1) The users of the system,
(2) Goals of users,
(3) Environments of use
(4) Measures of effectiveness, efficiency and satisfaction.
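Putting the bank-machine example and these four outcomes together, a usability requirement in the ISO 9241-11 sense could be captured as a small structured record. The sketch below is my own illustration; the field names are mine, not from the standard or from Jokela et al.

from dataclasses import dataclass

@dataclass
class UsabilityRequirement:
    users: str           # (1) specified users
    goal: str            # (2) user goal
    environment: str     # (3) environment / context of use
    effectiveness: str   # (4) measure of effectiveness
    efficiency: str      # (4) measure of efficiency
    satisfaction: str    # (4) measure of satisfaction

# The bank-machine measure from above, expressed as structured data.
bank_machine = UsabilityRequirement(
    users="novice users",
    goal="withdraw a desired sum of cash",
    environment="any bank machine",
    effectiveness="90% of users achieve the goal",
    efficiency="in less than 1 minute",
    satisfaction="average satisfaction rating of 6",
)
print(bank_machine)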

ISO 13407 is aimed at providing ‘overview’ guidance for the planning and management of user-centered design, not detailed coverage of the methods and techniques.
ISO 13407 is an international standard established in 1999. The standard “provides guidance on human-centred design activities throughout the life cycle of computer-based interactive systems”. The standard aims at “those managing design processes” and does not provide detailed coverage of methods and techniques.

ISO 13407 describes user-centered design from four different aspects:
• Rationale for UCD
• Planning UCD
• Principles of UCD
• Activities of UCD.

Rationale. The rationale part briefly describes the benefits that usable systems provide, such as reduction of training and support costs, improved user satisfaction and productivity of users.

Principles. The standard identifies four general principles that characterize user-centered design, and that are not bound to any specific phase of the development cycle:
• The active involvement of users and a clear understanding of user and task requirements
• An appropriate allocation of functions between users and technology
• Iteration of design solutions
• Multi-disciplinary design.

Planning. The planning part provides guidance in fitting user-centered design activities into the overall system development process. Among other things, the standard emphasizes that project plans should reserve time and resources for iteration and user feedback. The importance of teamwork and communication is also mentioned.

Activities. The core of the standard – stated explicitly – is the description of user-centered design activities. The standard identifies four main activities of UCD, illustrated in Figure 2:

Understand and Specify Context of Use. Know the user, the environment of use, and the tasks that he or she uses the product for.
Specify the User and Organizational Requirements. Determine the success criteria of usability for the product in terms of user tasks, e.g. how quickly a typical user should be able to complete a task with the product. Determine the design guidelines and constraints.
Produce Design Solutions. Incorporate HCI knowledge (of visual design, interaction design, usability) into design solutions.
Evaluate Designs against Requirements. The usability of designs is evaluated against user tasks.

Usability is one type of quality characteristic of a product [10], among others such as functionality, efficiency, reliability, maintainability and portability. In the requirements phase, when the quality requirements for a product are determined, the usability requirements should also be determined.
While all activities of the life-cycle are relevant to the design of usability, the definition of usability has a critical impact especially in the requirements phase of a development project. The outcomes of these requirements activities (identification of users, goals, environments, usability measures) provide direction for the design phase and a basis for planning evaluations.
In practice, Nielsen’s attributes as such are too ambiguous to be used in determining the usability requirements.
My Comments: This part of the article, particularly on User-Centred Design, is good on the process of applying usability throughout the entire design and development process.

Understand and specify the context of use
The standard describes the activity ‘Understand and specify the context of use’ as follows:
The characteristics of the users, tasks and the organizational and physical environment define the context in which the system is used. It is important to understand and identify the details of this context in order to guide early design decisions, and to provide a basis for evaluation.
Information should be gathered about the context of use of new products and systems. If an existing system is upgraded or enhanced, this information may already be available but should be checked. If there are extensive results from user feedback, help desk reports and other data, these provide a basis for prioritizing user requirements for system modifications and changes.
The context in which the system is to be used should be identified in terms of the following:
a) The characteristics of the intended users: relevant characteristics of the users can include knowledge, skill, experience, education, training, physical attributes, habits, preferences and capabilities. If necessary, define the characteristics of different types of users, for example, with different levels of experience of performing different roles (maintainers, installers, etc).
b) The tasks the users are to perform: the description should include the overall goals of the use of the system. The characteristics of tasks that can influence usability should be described, e.g. the frequency and the duration of performance. Tasks should not be described solely in terms of the functions or features.
c) The environment in which the users are to use the system: the environment includes the hardware, software and materials to be used. Their description can be in terms of a set of products, one or more of which can be the focus of human-centred specification or evaluation, or it can be in terms of a set of attributes or performance characteristics of the hardware, software and other materials. Relevant characteristics of the physical and social environment should also be described. These can include relevant standards, attributes of the wider technical environment, the physical, ambient, legislative and the social and cultural environment.
The output from this activity should be a description of the relevant characteristics of users, tasks and environment, which identifies what aspects have an important impact on the system design. (See ISO 9241-11 for more information about the context of use and a sample report.)
The context of use description should
a) Specify the range of intended users, tasks and environments in sufficient detail to support design activity;
b) Be derived from suitable sources;
c) Be confirmed by users or, if they are not available, by those representing their interests in the process;
d) Be adequately documented;
e) Be made available to the design team at appropriate times and in appropriate forms to support design activities.

Specify the user and organizational requirements
In most design processes, there is a major activity specifying the functional and other requirements for the product or system. For human-centred design, this activity should be extended to create an explicit statement of user and organizational requirements in relation to the context of use description. The following aspects should be considered in order to identify relevant requirements:
a) Required performance of the new system against operational and financial objectives;
b) Relevant statutory or legislative requirements, including safety and health;
c) Co-operation and communication between users and other relevant parties;
d) The users’ jobs (including allocation of tasks, users’ well-being, and motivation);
e) Task performance;
f) Work design and organization;
g) Management of change, including training and personnel to be involved;
h) Feasibility of operation and maintenance;
i) The human-computer interface and workstation design.
User and organizational requirements should be derived and objectives set, with appropriate trade-offs identified between the different requirements.
The specification of user and organizational requirements should:
a) Identify the range of relevant users and other personnel in the design;
b) Provide a clear statement of the human-centred design goals;
c) Set appropriate priorities for the different requirements;
d) Provide measurable criteria against which the emerging design can be tested;
e) Be confirmed by the users or those representing their interests in the process;
f) Include any statutory or legislative requirements;
g) Be adequately documented.

ISO 13407 does not address the general complexity and specific challenges related to the systematic identification of different users, the identification of the different goals that users may have, nor the determination of measures (effectiveness, efficiency, satisfaction) of usability. The determination of environments of use is addressed in the most detailed manner.

DISCUSSION
ISO 9241-11 and ISO 13407 are two important standards related to usability: the former provides the definition of usability and the latter guidance for designing usability. We carried out an interpretive analysis of ISO 13407 from the viewpoint of the standard definition of usability from ISO 9241-11. The results show that ISO 13407 provides only partial guidance for designing usability as presumed by the definition. Guidance for describing users and environments is provided, but very limited guidance is provided for the descriptions of user goals and usability measures, and generally for the process of producing the various outcomes.
References that I may want to read further in future:
1. ANSI. Common Industry Format for Usability Test Reports., NCITS 354-2001, 2001.
3. Bevan, N., Claridge, N., Maguire, M. and Athousaki, M., Specifying and evaluating usability requirements using the Common Industry Format: Four case studies. in IFIP 17th World Computer Conference 2002 - TC 13 Stream on Usability: Gaining a Competitive Edge, (Montreal, Canada, 2002), Kluwer Academic Publishers, 133-148.
8. Hix and Hartson. Developing User Interfaces: Ensuring Usability Through Product & Process. John Wiley & Sons, 1993.
9. ISO/IEC. 13407 Human-Centred Design Processes for Interactive Systems, ISO/IEC 13407: 1999 (E), 1999.
12. Jokela, T., Making User-Centred Design Common Sense: Striving for an Unambiguous and Communicative UCD Process Model. in NordiCHI 2002, (Aarhus, Denmark, 2002), ACM, 19-26.
13. Jokela, T. and Iivari, N., Systematic Determination of Quantitative Usability Requirements. in to be published in the proceedings of HCI International 2003, (Crete, 2003).
15. Mayhew, D.J. The Usability Engineering Lifecycle. Morgan Kaufman, San Francisco, 1999.
16. Nielsen, J. Usability Engineering. Academic Press, Inc., San Diego, 1993.
17. Rosson, M.B. and Carroll, J.M. Usability Engineering. Scenario-Based Development of Human-Computer Interaction. Morgan Kaufmann Publishers, 2002.

Aug 29 - Reading again my References during Master's research

Aug 29 - Searched and found the References/articles that I read when I was doing my Master's Project Dissertation.

I think I would get all stuff on "Definition of Usability" from these articles.

Today I am starting out from "ACM_usability_20080704" folder.
So for next week I will concentrate on articles in this subfolder and other subfolders within the "Usability" folder.

Aug 28 - Definition of Usability & Mobile Learning

Definition of Usability

Evaluation of the technical and pedagogical mobile usability.
Antti Syvänen. antti.syvänen@uta.fi Hypermedia Laboratory, University of Tampere, FIN-33014 Tampereen yliopisto, Finland.
Petri Nokelainen. petri.nokelainen@uta.fi Research Centre for Vocational Education, University of Tampere, FIN–13101 Hämeenlinna, Finland.
Attewell & Cavill-Smith (editor). Mobile Learning Anytime Everywhere. A book of papers from MLearn 2004. pg 191-185

The following components of the technical usability criteria were specified (Nokelainen 2004):
1 accessibility
2 ‘learnability’ and memorability
3 user control
4 help
5 graphical layout
6 reliability
7 consistency
8 efficiency
9 memory load
10 errors.

In addition, the following pedagogical usability components were specified (Nokelainen 2004):
1 learner control
2 learner activity
3 cooperative learning
4 goal orientation
5 applicability
6 effectiveness
7 motivation
8 valuation of previous knowledge
9 flexibility
10 feedback.
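Just to remind myself how these components could feed into a checklist-style evaluation tool: score each component and average per dimension. This is my own toy sketch with made-up 1-5 scores, not Nokelainen's actual instrument.

from statistics import mean

# Made-up scores on an assumed 1-5 scale, one per component, purely for illustration.
technical = {"accessibility": 4, "learnability and memorability": 3, "user control": 4,
             "help": 2, "graphical layout": 5, "reliability": 4, "consistency": 4,
             "efficiency": 3, "memory load": 4, "errors": 3}
pedagogical = {"learner control": 3, "learner activity": 4, "cooperative learning": 2,
               "goal orientation": 4, "applicability": 3, "effectiveness": 4,
               "motivation": 3, "valuation of previous knowledge": 2,
               "flexibility": 3, "feedback": 4}

for name, scores in (("technical", technical), ("pedagogical", pedagogical)):
    print(f"{name} usability ({len(scores)} components): mean score {mean(scores.values()):.1f}")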

Usability Guidelines for Designing Mobile Learning Portals.
Daniel Su Kuen Seong. The University of Nottingham, Malaysia. Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan, Malaysia. +603-89248138 daniel.su@nottingham.edu.my
Mobility 06, Oct. 25–27, 2006, Bangkok, Thailand.
The 3rd International Conference on Mobile Technology, Applications and Systems — Mobility 2006.

The term usability is defined as by The International Organisation for Standardisation (ISO) ISO 9241-11 [16] as ‘the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.’
According to Nielsen [17], usability means the measure of the quality of the users’ experience when interacting with an interface....usability is not a surface gloss which is applied at the last minute or before the release of the system or product; it is deeply affected by every stage of the analysis, design, and development [18].
Usable systems are easy to learn (learnability), efficient to use (efficiency), easy to remember (memorability), not error-prone (errors), and satisfactory in use (subjective satisfaction) [17].
The ultimate goal of usability is meeting users’ needs to their satisfaction [17].
...advantages of usability encompass increased productivity, enhanced quality of work, improved user satisfaction, and reductions in support and training costs [19]. The reduction in costs has attracted many project managers and interface designers to employ the theory of usability when designing interfaces, as reported in [20, 21].

Cat 1: User Analysis
U1: The user/learner

Cat 2: Interaction
U2: Human-mobile interaction
U3: Map between mobile learning portals and the real world
U4: Help users recognise, diagnose, and recover from errors
U5: Visibility of the status
U6: Minimise human cognitive load

Cat 3: Mobile Learning Interface Design
U7: The small screen display
U8: Do not overuse
U9: Navigation
U10: Consistency.

Katja Karevaara, Media Education Centre, University of Helsinki. From educational usability to context-specific teachability: Development and use of the network-based teaching material contents in higher engineering education.

Pedagogical usability often focuses on whether the interface, tools, content, and tasks of e-learning environments support learning in various contexts according to certain pedagogical objectives (Silius et al. 2003, Tervakari et al. 2002).

Research in pedagogical usability has been active recently. For example, Kukulska-Hulme et al. (2004) conducted a project during 2001-2003 at the Open University (UK). During the project it was recognised that "to get to the heart of pedagogical usability, we have to understand more about the impact of requirements in relation to communities, contexts and disciplines." The research group therefore found several layers of usability: context-specific, academic, general and technical. In detail:
• Context specific usability relates to the requirements of particular disciplines and courses.
• Academic usability deals with educational issues, such as pedagogical strategy.
• General usability issues are common to most websites and include aspects such as clear navigation and accessibility for users with special needs.
• Technical usability addresses issues such as broken links and server reliability.


Definition of Mobile Learning

Usability Guidelines for Designing Mobile Learning Portals.
Daniel Su Kuen Seong. The University of Nottingham, Malaysia. Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan, Malaysia. +603-89248138 daniel.su@nottingham.edu.my
Mobility 06, Oct. 25–27, 2006, Bangkok, Thailand.
The 3rd International Conference on Mobile Technology, Applications and Systems — Mobility 2006.

One distinct feature of mobile learning over e-Learning is mobility [6].
Hence, researchers and scholars are enthusiastically coining the term ‘mobile learning’ or ‘m-learning’, for example ‘mobile learning as the point at which mobile computing and e-Learning intersect to produce an anytime, anywhere learning experience’ [7].
According to Nyiri [8], m-learning is fundamentally e-Learning delivered through mobile computational devices such as Palms, Personal Digital Assistants (PDA), Pocket PCs, smart phones, digital cell phones, and any other handheld devices. The use of mobile devices with wireless network technology allows mobile learners to gain the convenience, expediency and immediacy of mobile learning at the appropriate time and to access the appropriate learning contents [7].
Additionally, mobile learning is the next generation of e-Learning and important instrument for lifelong learning [49].

MOBILE LEARNING FRAMEWORK.
Ali Mostakhdemin-Hosseini. Helsinki University of Technology, Konemiehentie 2, Espoo, Finland. Jarno Tuimala. Innoforss Research & Development Center, Wanherinkatu 11 3 krs., 30100 Forssa, Finland.

Mobile learning is not learning through mobile phones or learning over a wireless connection, even though the capability to run multimedia features has increased in recent years. Rather, mobile learning is the evolution of e-learning, which completes the missing component of an e-learning solution. Mobile learning best suits the mobile parties in education institutes, so utilizing mobile devices in education is mainly considered as providing enhanced tools.

Plan for week Aug 31 - Sep 5

For this week of Aug 24-29, I have read many articles. However, my objective of learning more widely and deeply about the definition of usability has not been reached to my satisfaction.

Hence, for the following week of Aug 31-Sep 5, I shall continue to do literature review on "Definition of Usability."

The article which I have enjoyed most was by Su (Daniel Su Kuen Seong, a Malaysian) on Usability Guidelines for Mobile Learning Portals.
Hahaha...the title sounds close to my title...Usability Evaluation Tool for Mobile Learning Applications.

Aug 28 - Su, Usability Guidelines for Designing Mobile Learning Portals

Usability Guidelines for Designing Mobile Learning Portals.
Daniel Su Kuen Seong. The University of Nottingham, Malaysia. Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan, Malaysia. +603-89248138 daniel.su@nottingham.edu.my

Mobility 06, Oct. 25–27, 2006, Bangkok, Thailand.
The 3rd International Conference on Mobile Technology, Applications and Systems — Mobility 2006.

Abstract
Mobile learning presumes the use of mobile Internet technology to facilitate the learning process. The growth and rapid evolution of wireless technology have created new opportunities for the anytime and anywhere learning paradigm. As a result, numerous mobile learning portals have been developed to gain its advantages. Nonetheless, little research and exploration has been initiated in proposing usability guidelines for designing mobile learning portals to achieve efficiency, effectiveness and satisfaction of learning. Thus, this paper seeks to present usability guidelines by grounding the user interface on a usability theoretical framework, possible constraints, and unique properties of mobile computing. Three categories of usability have been formulated: user analysis, interaction and interface design. Ten golden usability guidelines have been suggested, which aim at designing a highly efficacious, user-friendly and usable mobile interface to support the dynamicity of mobile and handheld devices. Moreover, the Mobile Learning Course Manager (MLCM) portal has been developed to demonstrate and exemplify the usability guidelines proposed.

This integration of technologies has altered considerably the instructional strategies in our educational institutions and changed the way teachers teach and students learn [2]. ... Some noticeable examples with the use of technologies in educational context are multimedia learning, World Wide Web or web/Internet-based learning, e-Learning, and in recent years, the mobile learning.

Mobile learning has been perceived by many educationalists to offer flexibility in learning and to present multiple yet unique educational advantages [5]. The rapid evolution of wireless communication and the demand for low-cost mobile devices potentially direct many researchers and educationalists to move from web-based and e-Learning to mobile learning, which promises easy and convenient ways of learning.
One distinct feature of mobile learning over e-Learning is mobility [6]. Hence, researchers and scholars are enthusiastically coining the term ‘mobile learning’ or ‘m-learning’, for example ‘mobile learning as the point at which mobile computing and e-Learning intersect to produce an anytime, anywhere learning experience’ [7].
According to Nyiri [8], m-learning is fundamentally e-Learning delivered through mobile computational devices such as Palms, Personal Digital Assistants (PDA), Pocket PCs, smart phones, digital cell phones, and any other handheld devices. The use of mobile devices with wireless network technology allows mobile learners to gain the convenience, expediency and immediacy of mobile learning at the appropriate time and to access the appropriate learning contents [7].
Additionally, mobile learning is the next generation of e-Learning and important instrument for lifelong learning [49].


The term usability is defined as by The International Organisation for Standardisation (ISO) ISO 9241-11 [16] as ‘the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.’
According to Nielsen [17], usability means the measure of the quality of the users’ experience when interacting with an interface.
...usability is not a surface gloss which is applied at the last minute or before the release of the system or product; it is deeply affected by every stage of the analysis, design, and development [18].
Usable systems are easy to learn (learnability), efficient to use (efficiency), easy to remember (memorability), not error-prone (errors), and satisfactory in use (subjective satisfaction) [17].
The ultimate goal of usability is meeting users’ needs to their satisfaction [17].
..advantages of usability encompass increased productivity, enhanced quality of work, improved user satisfaction, and reductions in support and training costs [19]. The reduction in costs has attracted many project managers and interface designers to employ the theory of usability when designing interfaces, as reported in [20, 21].


Mobile learning appears in any facet of possible variations in context, such as locations, environment, conditions, noisy or quiet, weather and so on. Hence, this dynamicity has triggered additional efforts and investigation in order to design highly usable mobile learning portals that cater for different types of users in the global economy.
Furthermore, the limitation of screen size, the presentation of mobile contents, and adaptation of the information to the sensitivity of context and devices influence the efficiency and effectiveness when learning via mobile devices.
Hayhoe [23] emphasises that the design criteria of this ubiquitous perspective need to be examined thoroughly and focused, so as to eliminate boredom and disorientation among a wide range of users having a variety of types of handheld devices.


It is perceptible that little research and exploration has been initiated to propose usability guidelines for designing mobile learning portals. Moreover, several extensive experiments conducted from 2002 to 2005 discovered a substantial number of severe usability problems pertinent to user behaviour in mobile navigation on mobile Internet portals’ design [24]. Therefore, this has set the directions for this paper to explore usable guidelines for designing mobile learning portals by grounding the user interface on a usability theoretical framework, possible constraints, and unique properties of mobile computing.

The usability guidelines are divided into three categories: user analysis, interaction, and interface design.
• User Analysis: U1
• Interaction: U2, U3, U4, U5 and U6
• Interface design: U7, U8, U9 and U10

Guidelines mean ‘rules or principles for action, encapsulating some combination of practitioner-determined best practices in a domain and research-based insights into factors relevant in that domain’ [25].
The usability guidelines framework is grounded on the usability theoretical framework discussed in [17], constraints proposed by [22], and understanding of a number of unique properties that are intrinsic to mobile computing as highlighted in [52, 53, 54].

3.1.1 User Analysis
U1: The user/learner

3.1.2 Interaction
U2: Human-mobile interaction
U3: Map between mobile learning portals and the real world
U4: Help users recognise, diagnose, and recover from errors
U5: Visibility of the status
U6: Minimise human cognitive load

3.1.3 Mobile Learning Interface Design
U7: The small screen display
U8: Do not overuse
U9: Navigation
U10: Consistency.

My Comments: The majority of Seong's USABILITY GUIDELINES were taken from Nielsen's TEN HEURISTICS or TEN HEURISTIC PRINCIPLES. Nielsen's 10 were among my "36 USABILITY INSPECTION CRITERIA" for e-Learning Portals. Seong also did the smart thing: take the usability criteria from a very well-known source. Most researchers would use some, most or all of Nielsen's 10 usability criteria.


Principally, the MLCM portal’s user interfaces can be perceived as usable because the mobile learners or users are considered the most important entity in our design. Much effort has been devoted to analysing and evaluating the mobile learners’ needs; the design caters for the beginner, is fast-acting for the more expert learner, provides efficacious support to the learners’ working needs, and is pleasant to use.

CONCLUSIONS
Technological changes and evolution significantly amplify the demands on quality and usable user interfaces and offer the potential to further enhance the task, structure and functionality of mobile devices. The rapidly increasing number of mobile learning portals has made usability issues more pressing when designing learning portals. This paper contributes ten usability guidelines which cover the user analysis, interaction and user interface categories. These usability guidelines support the effort of creating a highly usable and impressive user interface which promotes the users’ satisfaction and interactivity while learning via mobile and handheld devices. In addition, the proposed usability guidelines are intended to serve as a benchmark for interface designers to further evaluate mobile user interfaces for conformance to the attributes of usability when designing usable mobile learning portals. Additionally, MLCM has demonstrated and exemplified the use of the proposed usability guidelines, which aim to increase the satisfaction especially of diverse and universal mobile learners who come from different backgrounds, cultures, races, educational levels, learning cognition and styles. The simple design and ease of use, consistency in navigation, structural metaphors, and standardised iconic design have promoted mobile users’ cognitive and learning styles in accessing and comprehending MLCM, maximising their learning experiences and indirectly improving their learning curve.


Future Directions
We suggest the following for future extensions:
• Examine the usability guidelines by conducting usability testing/evaluation to measure and quantify the theory of usability, such as learnability, memorability, efficiency, effectiveness, and subjective satisfaction. The expected results and outcomes are used to prioritise their importance and plot their ranking.
• Further evaluation and experiments will be conducted to assess and determine whether the proposed usability guidelines are appropriate and meet the basis of formal instructional design strategies and mobile curriculum development.
• Enhance the learning portals by extracting the lecture notes and other relevant digital documents, and transmitting them to the respective mobile learners.

My Comments: In my Master's research, I measured the relative importance of all 36 USABILITY INSPECTION CRITERIA (Seong called them Usability Guidelines) and ranked them according to their rating of importance. I hope to repeat this for a larger set (36 plus More) of usability criteria for mobile learning.

References that I may want to read further in future:
[16] ISO/IEC, 9241-11 Ergonomic Requirements for Office Work with Visual Display Terminals (VDT)s – Part 11 Guidance on Usability. 1998: ISO/IEC 9241-11: 1998 (E).
[18] Lee, K. B., and Grice, R. A. Developing a New Usability Testing Method for Mobile Devices. In Proceedings of the 23rd International Performance, Computing, and Communications Conference (IPCCC’04). 14-17 April 2004, Phoenix, Arizona, 115-127.
[19] ISO/IEC, 13407. Human-Centred Design Processes for Interactive Systems. 1999: ISO/IEC 13407: 1999(E).
[22] Mayhew, D. J. The Usability Engineering Lifecycle: A Practitioner’s Handbooks for User Interface Design. Morgan Kaufman Publisher, 1999.
[23] Hayhoe, G. F. From desktop to palmtop: Creating usable online documents for wireless and handheld devices. In Proceedings of the IEEE International Professional Communication Conference. Santa Fe, New Mexico, USA, 24-27 October 2001.
[24] Kaikkonen, A. Usability problems in today’s mobile Internet portals. In Proceedings of the 2nd IEE International Conference on Mobile Technology, Applications and Systems. 15-17 November 2005, Guangzhou, China, 459-464.
[25] Vavoula, G. N., Lefrere, P., O’Malley, C., Sharples, M., and Taylor, J. Producing Guidelines for learning, teaching and tutoring in a mobile environment. In Proceedings of the 2nd IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE’ 04). 23-25 March 2004, Jhongli, Taiwan, ROC, 173-176.
[30] Su, D. K. S. A comparative evaluation and correlation between learning styles and academic achievement on e-Learning. In Kwan, R., and Fong, J. (Eds.), Proceedings of Web-based Learning: Technology and Pedagogy at The 4th International Conference on Web-based Learning (ICWL’05). 31 July-3 August 2005, Hong Kong, China, World Scientific Publishing, Singapore, 193-202
[39] Kaikkonen, A., and Laarni, J. Designing for small display screens. In Proceedings of the 2nd Nordic Conference on Human-Computer Interaction. Aarhus, Denmark, New York: ACM Press, 19-23 October, 2002, 227- 230.
[42] Su, D. K. S., and Chan, F. C. Navigational patterns on usable mobile news portals. Journal of Internet Technology, 7(3), April 2006, forthcoming.