Tuesday, August 18, 2009

Megal-Royo et al. (2007), Evaluation Methods on Usability of M-Learning

Downloaded this article; found it so relevant that I decided to read it immediately.

Critique 1: They explained only 3 usability evaluation methods; there are many more. (What they mentioned were actually the 3 most popular UEMs.) Their classification is also wrong: conventional user testing falls under the category of USABILITY TESTING, not under USABILITY INSPECTION.
Critique 2: They cited only 1 definition of Usability. That paints the wrong picture of Usability. Fact is: there are many varied definitions and concepts of Usability.

Nowadays, m-learning employs the same pedagogical methods as any other conventional learning method, telematic or not. Nevertheless, the real problem is still the efficient and suitable adaptation of the contents to a medium with clear restrictions (Avellis, 2003) [2]. These restrictions are basically visual (reduced screen, colours, etc.), technological (memory, variety and compatibility between models, etc.), and social (SMS sending and receiving costs, acquisition, device access and appropriate use, etc.).

The paper tries to show the possible use of "usability inspection methods" as evaluation techniques for two essential aspects: the contents, based on the type of information managed through the medium, and the interface, as the man-machine communication environment (Sharples, 2005) [1].

There is no real evaluation of the formative effectiveness of the contents, only the traditional quantitative-qualitative evaluation. In the field of usability, this topic is often evaluated in terms of formal aspects rather than content.

It is necessary to bear in mind, from the moment a new m-learning environment (adapted or not) is conceptualized, its navigation characteristics, interactivity orientation and information management and, accordingly, the contents to be taught and learnt.

Currently, the most common usability definition can be found in the international standard ISO/IEC 9126-1, which describes six quality characteristics for the creation of any kind of telematic application, extensible, without a doubt, to other programs and applications developed for mobile devices: functionality, reliability, usability, efficiency, maintainability and portability.

Usability inspection is useful for finding problems in a running system, although it can also be applied during the design and building phases.

The most widely used methods are:
-Heuristic evaluation. A method proposed in the 1990s by Nielsen (Nielsen & Molich, 1990) [7], in which an expert applies a set of usability principles to a given telematic program, tool or environment. The methods and experiences of the following decade produced many categorised criteria, reaching up to 294 typical problems. Currently, the need for faster and more operative strategies has reduced the criteria to 10 heuristics, which are often organised with regard to the platform under evaluation (see the sketch after this list).
-Cognitive walkthrough evaluation. A method developed by Lewis (Lewis et al., 1990) [8], in which user problems are simulated in detail, step by step, analysing each task from a cognitive point of view. Expert users' profiles are employed, with special care in the first formalization stages of the prototype telematic program, tool or environment.
-Conventional user testing, following the diverse analysis methodologies of telematic platforms created or expressly adapted to what needs to be assessed (functionality, visual ergonomics, etc.). The range of suitable user profiles is quite broad, and sometimes previous knowledge is required.
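Side note (mine, not the authors'): to make the heuristic evaluation method concrete, here is a minimal Python sketch of how an evaluator might record findings against Nielsen's 10 heuristics. Only the heuristic list and the 0-4 severity scale come from Nielsen; the Finding record and the summarise() helper are my own illustration.

from dataclasses import dataclass
from collections import Counter

# Nielsen's 10 usability heuristics (Nielsen & Molich, 1990; refined later).
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class Finding:
    # One usability problem noted by an expert evaluator (my own structure).
    heuristic: int    # index 0-9 into NIELSEN_HEURISTICS
    description: str  # what the evaluator observed
    severity: int     # Nielsen's 0 (no problem) .. 4 (usability catastrophe)

def summarise(findings):
    # Count problems per heuristic, most-violated first.
    counts = Counter(NIELSEN_HEURISTICS[f.heuristic] for f in findings)
    return counts.most_common()

# Hypothetical findings from an m-learning screen:
findings = [
    Finding(0, "No progress indicator while a lesson downloads", 3),
    Finding(5, "Course codes must be remembered between screens", 2),
]
print(summarise(findings))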

The validation of an m-learning environment needs the combined use of usability techniques.

Usability tests can be carried out in artificial laboratories or in real scenarios. ...
Lab tests need equipment and staff able to take on specialised tasks and trials, executed under human and technical supervision. This method seems to be the most viable for usability evaluation, since the impediments related to user location and to the different interface models of wireless devices are avoided.

Remote usability testing is described as "usability evaluation wherein the evaluator, performing observation and analysis, is separated in space and/or time from the user" (Hartson et al., 1996) [10].
Most of the current remote applications focus on program testing through the extraction of perceptive (visual, tactile, audible) or computational (website accesses, website types, etc.) data. These systems hold accurate data on users' movements in a remote environment. Programs such as Noldus, Uzilla, WebRemUsine or WebQuilt are good examples, although it must be considered that some additional software is normally required to process and control the remote data.
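To picture what such remote loggers capture, here is a minimal sketch of the client side, again my own illustration and not the actual format used by Noldus, Uzilla, WebRemUsine or WebQuilt: each interaction becomes a timestamped event appended to a log that the evaluator, separated in space and/or time, analyses afterwards.

import json, time

# All field names here are my own assumptions, not any tool's real format.
def log_event(log_path, user_id, event_type, target):
    event = {
        "ts": time.time(),     # when the interaction happened
        "user": user_id,       # which remote participant
        "event": event_type,   # e.g. "tap", "page_view", "scroll"
        "target": target,      # the page or widget involved
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")

log_event("session.jsonl", "u01", "page_view", "/course/intro")
log_event("session.jsonl", "u01", "tap", "next-lesson-button")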

Freeware such as WebQuilt, used in web environments, has been adapted to wireless environments to analyse webpage access (Hong et al.) [12] and information management frequency (Tara Matthews, 2001) [13].
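A toy analysis in the same spirit, assuming the hypothetical JSON-lines log format from my previous sketch. Real tools like WebQuilt reconstruct full navigation paths; this only computes access frequency.

import json
from collections import Counter

def page_frequencies(log_path):
    # Count how often each page was viewed, most-visited first.
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if event["event"] == "page_view":
                counts[event["target"]] += 1
    return counts.most_common()

print(page_frequencies("session.jsonl"))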

Conclusion
Starting from the initial conditions of the medium, m-learning environments offer a wide field of possibilities for the development of specific methods that will allow functionality validation on a platform of such characteristics. Traditional techniques are combined with innovative ones, aimed at "in situ" evaluation in real contexts.
A starting point for assessing one technique against the rest is based on two factors: first, the capability of the method to facilitate access to the contents, fostering their learning in these environments; second, the assessment of the platform's functionality through its navigability and user orientation.
