Thursday, August 27, 2009

Aug 28 - Vavoula & Sharples, Challenges in Evaluating Mobile Learning

Challenges in Evaluating Mobile Learning.
Giasemi N. Vavoula. University of Leicester, 105 Princess Road East, Leicester, LE1 7LG, UK. gv18@leicester.ac.uk Tel: +44 (0) 116 2523966
Mike Sharples. LSRI, University of Nottingham, Exchange Building, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB, UK. mike.sharples@nottingham.ac.uk Tel: +44 (0) 115 9513716

Published in the Proceedings of mLearn 2008.


ABSTRACT
We propose six challenges in evaluating mobile learning: capturing and analysing learning in context and across contexts, measuring mobile learning processes and outcomes, respecting learner/participant privacy, assessing mobile technology utility and usability, considering the wider organisational and socio-cultural context of learning, and assessing in/formality. A three-level framework for evaluating mobile learning is presented, comprising a micro level concerned with usability, a meso level concerned with the learning experience, and a macro level concerned with integration within existing educational and organisational contexts. The paper concludes with a discussion of how the framework meets the evaluation challenges and with suggestions for further extensions.


We now appreciate mobile learning not just as learning that is facilitated by mobile technology, but also as the processes of coming to know through conversations and explorations across multiple contexts amongst people and personal interactive technologies (Sharples et al. 2007b). Such evolving conceptions introduce numerous challenges to all aspects of mobile learning research, including evaluation. As the field matures, our frameworks and tools need to respond to these challenges.

CHALLENGE 1: CAPTURING LEARNING CONTEXT AND LEARNING ACROSS CONTEXTS
A major task for educational evaluation is to capture and analyse learning in context.
To appreciate the challenge, let us compare mobile learning contexts with traditional classroom contexts from the researcher’s perspective.
In order to establish and document the learning context, the researcher needs to know:
• the location of learning and the layout of the space (where)
• the social setting (who, with whom, from whom)
• the learning objectives and outcomes (why and what)
• the learning method(s) and activities (how)
• the learning tools (how)
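
To make these dimensions concrete, the sketch below (my own illustration in Python, not from the paper) shows how an evaluator might log each observation as a structured record while following a learner across contexts; all field names and sample values are assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class ContextRecord:
    """One observed learning episode, along the dimensions listed above."""
    where: str               # location of learning and layout of the space
    who: List[str]           # social setting: with whom, from whom
    why_what: List[str]      # learning objectives and outcomes
    how_methods: List[str]   # learning method(s) and activities
    how_tools: List[str]     # learning tools
    when: str = ""           # timestamp, for tracing learning across contexts

# A trail of such records, one per observation, supports analysis of
# learning across contexts rather than within a single setting.
observation = ContextRecord(
    where="museum gallery",
    who=["pupil", "two classmates"],
    why_what=["gather material for a class project"],
    how_methods=["collecting exhibits", "note taking"],
    how_tools=["mobile phone"],
    when="2006-05-12T10:30",
)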

CHALLENGE 2: HAS ANYONE LEARNED ANYTHING?
A second challenge that faces mobile learning evaluation is the assessment of learning processes and outcomes.
In traditional learning settings such as the classroom there are well-established and accepted methods for the assessment of learning activities, such as essay writing, multiple choice tests, open-book exams, and unseen examinations.
Distinctions have been made between formative assessment (aiming to provide students with feedback regarding their progress) and summative assessment (aiming to judge and sum up the students’ achievements) (Scriven 1967)...

CHALLENGE 3: AN ETHICAL QUESTION
Mobile learning can involve the use of mobile technology, which may also be personal technology. Tapping into a person’s mobile phone to find out how they have been using it to learn might mean invading that person’s privacy.
Although research ethics frameworks are implemented by most research institutions and organisations, mobile learning research raises profound ethical issues.

CHALLENGE 4: LET’S NOT FORGET THE TECHNOLOGY
Evaluations of mobile learning often reference inherent limitations of mobile devices, such as their small screens, short battery lives, intermittent connectivity, and associated human factors, all of which affect their usability. As the focus of research shifts from the mobility of the technology to the mobility of the learner, additional issues arise as learners move across multiple devices, possibly over short time periods in multiple locations. Assessing the usability of the mobile technology and the effectiveness of its integration with the mobile learning practice remains a high priority for evaluation.

CHALLENGE 5: SEEING THE BIGGER PICTURE
Oliver and Harvey (2002) suggest four different kinds of impact of educational technologies in Higher Education (HE): impact on students’ learning, impact on individual academics’ practice, impact on the institution, and national impact.
In the same context of HE Price and Oliver (2007) identify three types of impact studies: anticipatory, ongoing and achieved.
Anticipatory studies relate to pre-intervention intentions, opinions and attitudes; ongoing studies focus on analysing processes of integration; and achieved studies are summative studies of technology that is no longer ‘novel’.
Riley (2007) extends this impact framework by distinguishing between minor modifications and culturally significant changes in practice, and suggesting that different kinds of change will emerge over different timescales.
Although not exclusively linked with HE contexts, mobile learning evaluation has similar questions to answer regarding impact.

CHALLENGE 6: FORMAL OR INFORMAL?
Mobile learning is often defined in terms of the technology that mediates the learning experience: if the technology is mobile, so is the learning. Mobility, however, is not an exclusive property of the technology; it also resides in the lifestyle of the learner, who in the course of everyday life moves from one context to another, switching locations, social groups, technologies and topics. Learning often takes place inconspicuously, or is crammed into the short gaps between these transitions. Although this view of learning is inclusive of formal education contexts, it is particularly pertinent to everyday, informal learning.
Nevertheless, characterising a learning experience as formal or informal can be complicated. For example, is the learning of pupils visiting a museum (largely considered an informal learning setting) with their school (an irrefutably formal learning setting) a case of formal or informal learning?

REQUIREMENTS FOR MOBILE LEARNING EVALUATION
The challenges discussed in the previous sections indicate that mobile learning evaluation needs to:
• Capture and analyse learning in context with consideration of learner privacy
• Assess the usability of the technology and how it affects the learning experience
• Look beyond measurable cognitive gains into changes in the learning process and practice
• Consider organisational issues in the adoption of mobile learning practice and its integration with existing practices, and understand how this integration affects attributes of in/formality
• Span the lifecycle of the mobile learning innovation that is evaluated, from conception to full deployment and beyond.


The evaluation framework of Myartspace consisted of three levels:
1. Micro level: the micro level examines the individual activities of the technology users and assesses usability and utility. In the case of Myartspace the activities included collecting objects through exhibit codes, making notes, contacting people who have collected a particular item, recording audio, and taking pictures.
2. Meso level: the meso level examines the learning experience as a whole, to identify learning breakthroughs and breakdowns; it also examines how well the learning experience integrates with other related learning experiences. In the case of Myartspace, evaluation at this level involved exploring whether there was a successful connection between learning in the museum and in the classroom as well as identifying critical incidents that reveal new patterns and forms of learning or where learning activity is impeded.
3. Macro level: the macro level examines the longer term impact of the new technology on established educational and learning practice. For Myartspace this related to the organisation of school museum visits. The evaluation at this level looked, for example, at the appropriation of the new technology by teachers, the emergence of new museum practices in supporting school visits, and how they related to the original project visions.
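
Read as a planning instrument, the three levels can be laid out as a checklist. The sketch below (Python, my own arrangement) keeps the level names and foci from the paper; the example questions are my paraphrases of the Myartspace examples, not an official instrument.

EVALUATION_LEVELS = {
    "micro": {
        "focus": "usability and utility of individual activities",
        "example_questions": [
            "Can users collect objects, make notes, record audio, take pictures?",
            "Where does the interface cause errors or hesitation?",
        ],
    },
    "meso": {
        "focus": "the learning experience as a whole",
        "example_questions": [
            "Does learning in the museum connect with learning in the classroom?",
            "Which critical incidents show breakthroughs or breakdowns?",
        ],
    },
    "macro": {
        "focus": "longer-term impact on educational and organisational practice",
        "example_questions": [
            "How do teachers appropriate the technology?",
            "What new museum practices emerge around school visits?",
        ],
    },
}

# Print a one-line summary per level when planning an evaluation.
for level, spec in EVALUATION_LEVELS.items():
    print(f"{level}: {spec['focus']}")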

For Myartspace, the development comprised four phases:
(1) Requirements analysis, to establish the requirements for the socio-technical system (people and their interactions with technology) and specify how it would work, through consultation with the different stakeholder groups;
(2) Design of the user experience and interface;
(3) Implementation of the service; and
(4) Deployment of the service.

To establish the value of the service at the three levels, the evaluation framework explores the gap between expectations and reality, as well as unforeseen processes and outcomes.
It does so in three stages:
1. Stage 1: explores what is supposed to happen at each level. User expectations can be captured through interviews with users (e.g. teachers, students, museum staff) and by analysing user documentation, training sessions and materials.
2. Stage 2: observes what actually happens at each level. The user experience is documented to establish the reality of technology use for the different users.
3. Stage 3: examines the gaps between user expectations and reality through a combination of reflective interviews with users and critical analysis of the findings from stages 1 and 2 by the evaluators.
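
In effect, stage 3 contrasts the stage-1 and stage-2 findings per level; unmet expectations and unforeseen outcomes are the two interesting differences. Here is a minimal Python sketch of that comparison (the data shape and the toy outcomes are my assumptions, not the paper's):

def gap_analysis(expected, observed):
    """Contrast what was supposed to happen (stage 1) with what was
    actually observed (stage 2) at each evaluation level.
    Both arguments map a level name to a collection of outcome
    descriptions; this data shape is an assumption for illustration."""
    gaps = {}
    for level in ("micro", "meso", "macro"):
        exp = set(expected.get(level, ()))
        obs = set(observed.get(level, ()))
        gaps[level] = {
            "unmet_expectations": exp - obs,   # expected but not observed
            "unforeseen_outcomes": obs - exp,  # observed but not anticipated
        }
    return gaps

# Toy example at the meso level:
expected = {"meso": {"museum collections are reviewed back in class"}}
observed = {"meso": {"museum collections are reviewed back in class",
                     "pupils share their collections with parents at home"}}
print(gap_analysis(expected, observed)["meso"]["unforeseen_outcomes"])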

References that I may want to read further in the future:
Kjeldskov, J. & J. Stage (2004). "New Techniques for Usability Evaluation of Mobile Systems." International Journal of Human-Computer Studies 60: 599-620.
Meek, J. (2006). Adopting a Lifecycle Approach to the Evaluation of Computers and Information Technology. Unpublished PhD thesis, School of Electronic, Electrical and Computer Engineering, The University of Birmingham.
Vavoula, G., J. Meek, M. Sharples, P. Lonsdale & P. Rudman (2006a). A lifecycle approach to evaluating MyArtSpace. 4th International Workshop on Wireless, Mobile and Ubiquitous Technologies in Education (WMUTE 2006), Athens, Greece: IEEE Computer Society.

So now...I have finished reading about "Usability" from the articles in the UsabilityMLearnELearn_20090821 folder.
