Saturday, October 31, 2009

Oct 31 - Ling, Advances in Heuristic Usability Evaluation Method (PhD dissertation)

Heuristic evaluation is one of the most widely used usability evaluation methods in both industry and academia (Ling, 2005; Rosenbaum et al., 2000).
In a major survey conducted by Rosenbaum, Rohn and Hamburg (2000), heuristic evaluation was noted as the most used usability evaluation method.

Using the newly developed e-commerce heuristic set (of usability criteria) resulted in finding a larger number of real usability problems than using Nielsen’s (1994d) heuristic set (Ling, 2005).

Sweeney, Maguire and Shackel (1993) used the concept of “approach to evaluation” to classify usability methods: user-based, theory-based and expert-based. Their framework reflects different data sources that form the basis of evaluation.

Mack and Nielsen (1994) stated four ways to evaluate user interfaces:
Automatically
Empirically
Formally
Informally.
The informal evaluation methods are also referred to as usability inspection methods.

Ling (2005) categorized usability evaluation methods into seven types.
1) Analytic theory-based methods – include: (a) cognitive task analysis; (b) goal, operator, method and selection (GOMS) models; (c) user action notation (UAN).
GOMS is used to estimate an expert’s task performance time.
2) Expert evaluation methods – include formal usability inspections, cognitive walkthroughs, pluralistic walkthroughs, guideline reviews, heuristic evaluation, claim analysis.
Guideline methods apply a set of established guidelines and rules to the target system; they differ from heuristic evaluation in that they use a larger number of guidelines than the number of heuristics used in heuristic evaluation. Hence, guideline methods require a lower level of expertise from the evaluators.
Comments: Maybe I could combine the concepts of the heuristic evaluation and guideline review methods to form a new method whereby the evaluators are guided by a large set of guidelines for each heuristic.
3) Observational evaluation methods – include direct observations, videos, computer logging, verbal protocols, cooperative observations, critical incident reports and ethnographic studies.
Collect data on what users do when they interact with an interface.
4) Survey evaluation methods – include questionnaires, interviews, focus group and user feedback.
Evaluators ask users for their subjective views of a system’s user interface.
5) Experimental evaluation methods – include beta testing, think aloud method, constructive interactions, retrospective testing, coaching methods and performance measures.
Involve laboratory experiments to analyze users’ interaction with the system.
Think aloud method asks users to verbalize their thinking throughout the test.
6) Psycho-physiological measures of satisfaction or workload – collect physiological data (e.g. electrical activity in the brain, heart rate, blood pressure, pupil dilation, skin conductivity, level of adrenaline in the blood) to gauge the mental state of users in terms of satisfaction and mental workload.
Used to predict users’ workload and satisfaction levels.
7) Automatic testing – use programs to automatically capture critical measures of an interface, e.g. response time and broken HTML links.
Generate a list of low-level interface problems.
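As a side note on item 1 above, the GOMS family’s simplest member, the Keystroke-Level Model (KLM) of Card, Moran and Newell, estimates an expert’s task time by summing standard operator times over the sequence of actions. A minimal sketch, using the commonly cited KLM operator averages (a real analysis would calibrate them, and the example action sequence is purely hypothetical):

```python
# Commonly cited KLM operator times in seconds (Card, Moran & Newell).
KLM_OPERATORS = {
    "K": 0.2,   # keystroke (average skilled typist)
    "P": 1.1,   # point at a target with the mouse
    "B": 0.1,   # mouse button press or release
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def estimate_task_time(sequence):
    """Estimated expert task time (seconds) for a list of KLM operator codes."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical task: mentally prepare, point at a field, click (press +
# release), then type a 4-character word.
t = estimate_task_time(["M", "P", "B", "B", "K", "K", "K", "K"])
print(round(t, 2))  # 3.45
```

This kind of back-of-the-envelope sum is how analytic theory-based methods predict performance without observing actual users.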

Ling (2005) described Nielsen’s heuristics and the heuristic evaluation method thoroughly.

Muller et al. (1995, 1998) added four more “participatory” heuristics to Nielsen’s ten and validated the new set, which aims to assess how well an interactive system meets the needs of its users and their work environment. They also added users as inspectors alongside the expert inspectors traditionally used in heuristic evaluation. The four additional “participatory” heuristics are:
a) Respect the user and his/her skills
b) Promote a pleasurable experience with the system
c) Support quality work
d) Protect privacy.

Ling (2005) explained four drawbacks of the heuristic evaluation method.

Ling (2005) explained the factors affecting the results of heuristic evaluation:
a) Individual differences
b) Expertise level
c) Task scenario
d) Observation during evaluation
e) Heuristic set
f) Individual or group evaluations.

Domain Specific Heuristic Set:
The heuristic evaluation method was developed and applied mainly to single-user, productivity-oriented desktop programs, which were the major computer applications in the early 1990s.
But with computer technologies getting more integrated into everyday life and new types of HCI emerging, Nielsen’s ten heuristics may not be able to cover usability issues in new computing systems.
For example, mobile systems need to address issues of changing context of use (Vetere et al, 2003).
Because domain-specific heuristics can be developed to supplement existing heuristics (Molich & Nielsen, 1990; Nielsen, 1993; Nielsen & Mack, 1994), researchers have derived many adapted heuristic sets to address the typical requirements and problems in different kinds of application domain.

Ling (2005) discussed how other researchers (including Baker et al, 2001; Baker et al, 2002) came up with modified/customised heuristic sets.

Ling (2005) defined usability problems, the severity of usability problems, and the usability problem report.

Ling (2005) seems to focus on the usability of e-commerce websites.

Questionnaires are the most widely used method to identify an individual's attitudes and feelings toward a software system (Kirakowski & Corbett, 1990).

What have I read:
Chapter 1 Objective and Significance
Chapter 2 Background Literature
Chapter 3 Conceptual Model and Hypotheses
Chapter 9 Conclusions and Recommendations

Ling (2005) concluded:
Based on the experimental results, the following guidelines can be derived on how to perform heuristic evaluation.
* Find evaluators with a field-independent cognitive style
* Use a domain-specific heuristic set to guide the evaluation
* Have evaluators conduct the evaluation in pairs to reduce variability in results.


Source:
Ling, Chen. Advances in Heuristic Usability Evaluation Method. PhD dissertation, Purdue University, West Lafayette, Indiana, Dec 2005.
