ABSTRACT
Evaluation is the key to effective interface design. It becomes even more important when the interfaces are for cutting-edge technology, in new application areas with little prior design knowledge. Knowing how to evaluate new interfaces can decrease development effort and increase the returns on resources spent on formative evaluation. The problem is that there are few, if any, readily available evaluation tools for these new interfaces.
This work focuses on the creation and testing of a new set of heuristics tailored to the large screen information exhibit (LSIE) system class. This new set is created through a structured process that relies upon critical parameters associated with the notification systems design space. By inspecting example systems, performing claims analysis, categorizing claims, extracting design knowledge, and finally synthesizing heuristics, we have created a usable set of heuristics that is better equipped to support formative evaluation.
Contributions of this work include: a structured heuristic creation process based on critical parameters, a new set of heuristics tailored to the LSIE system class, reusable design knowledge in the form of claims and high level design issues, and a new usability evaluation method comparison test. These contributions result from the creation of the heuristics and two studies that illustrate the usability and utility of the new heuristics.
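To make the claims-analysis and categorization steps in the abstract concrete, the sketch below shows one plausible way to represent claims (design features with upside and downside tradeoffs, in the usual claims-analysis form) and to group them into categories from which design knowledge can be extracted. The field names, the example claim, and the category label are illustrative assumptions, not taken from the dissertation.

# A minimal sketch of representing and categorizing claims, assuming the
# usual claims-analysis form: a design feature with upside (+) and
# downside (-) tradeoffs.  The example claim and category label are
# illustrative, not taken from the dissertation.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Claim:
    feature: str                              # design feature under analysis
    upsides: list = field(default_factory=list)
    downsides: list = field(default_factory=list)
    category: str = "uncategorized"           # grouping used when extracting
                                              # design knowledge

def group_claims(claims):
    """Collect claims into categories for extracting design issues."""
    groups = defaultdict(list)
    for claim in claims:
        groups[claim.category].append(claim)
    return dict(groups)

ticker = Claim(
    feature="scrolling text ticker on a large shared display",
    upsides=["+ draws attention to newly posted items"],
    downsides=["- movement can distract from primary tasks"],
    category="attention and interruption",
)
print(group_claims([ticker]))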
1 Introduction
But how would we go about creating an evaluation tool that applies to this type of system? Would we want to create a tool dedicated to this single system, or would a more generic, system-class-level tool be a better investment of our time? Evidence from preliminary work suggests that system-class-level evaluation tools hold the most promise for long-term performance benchmarking and system comparison, compared to more generic tools or even tools tailored to an individual system [85, 56, 5]. A system-class-level tool sits toward the specific end of the generality/specificity scale, yet it is still generic enough to apply to many different systems within a class. So, again, how would we go about creating a new tool for this type of system? The key to successful evaluation tool creation is focusing on the user goals associated with the target system class. This requires an understanding of the system class in terms of these critical user goals.
What is a UEM?
Usability evaluation methods (UEMs) are tools or techniques used by usability engineers to discover problems in the design of software systems, typically measuring performance against some usability metric (ease of use, learnability, etc.).
What is Heuristic Evaluation?
Heuristic evaluation is a specific type of UEM in which expert usability professionals inspect a system according to a set of guidelines. This method is analytic in nature: the experts review a system (through prototypes or screenshots) and try to discover usability issues through inspection and reflection upon the guidelines.
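As a concrete illustration of this method, the sketch below shows one plausible way to record findings from such an inspection and aggregate them across evaluators. The data structure, the example heuristic name, and the severity scale are illustrative assumptions rather than part of the dissertation's procedure.

# A minimal sketch of recording and aggregating heuristic evaluation
# findings.  The heuristic name, severity scale, and aggregation logic are
# illustrative assumptions, not taken from the dissertation.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    evaluator: str      # who reported the issue
    heuristic: str      # which guideline it violates
    description: str    # what the problem is
    severity: int       # e.g. 1 (cosmetic) through 4 (catastrophic)

def aggregate(findings):
    """Group reported issues by heuristic; return count and mean severity."""
    by_heuristic = defaultdict(list)
    for f in findings:
        by_heuristic[f.heuristic].append(f)
    return {h: (len(fs), sum(f.severity for f in fs) / len(fs))
            for h, fs in by_heuristic.items()}

findings = [
    Finding("evaluator-1", "visibility of system status",
            "no indication that the display content has refreshed", 3),
    Finding("evaluator-2", "visibility of system status",
            "stale data shown without a timestamp", 2),
]
print(aggregate(findings))   # {'visibility of system status': (2, 2.5)}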
We need a specific tool, like heuristics, that can support formative evaluation of these displays.
Heuristics have been used throughout the HCI community for quick, efficient usability evaluation [66, 70, 69, 48, 40, 32, 56, 21].
Contributions of this work include:
• Critical parameter based creation of system class heuristics
We develop and use a new heuristic creation process that leverages critical parameters from the target system class. Researchers can now focus UEM development effort on a structured process that yields usable heuristics.
• Heuristics tailored to the LSIE system class
LSIE researchers and developers now have a new tool in their arsenal of evaluation methods. These heuristics focus on the unique user goals associated with the LSIE system class.
• LSIE system design guidance
In addition to the heuristics, we produced a significant number of design tradeoffs (claims) from system inspection. These claims are useful to other system developers because they can be reused in disparate projects.
• UEM comparison tool
Through our efforts to compare the new heuristics to existing alternatives, we developed a new comparison technique that relies upon expert inspection to provide a simplified method for calculating UEM comparison metrics (a sketch of such metrics follows this list).
• Deeper understanding of the generality vs. specificity tradeoff
Finally, we provide more insight into how specific a UEM should be for a given system, and we find further support for system-class-specific UEMs, as other work has indicated.
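To make the comparison metrics behind the UEM comparison tool contribution concrete, here is a minimal sketch of thoroughness, validity, and effectiveness measures as they are commonly defined in the UEM comparison literature. The problem sets are made up for illustration; in the dissertation, expert inspection provides the simplified route to the "real" problem set that these calculations require.

# A minimal sketch of standard UEM comparison metrics (thoroughness,
# validity, effectiveness) as commonly defined in the UEM literature.
# The problem IDs below are hypothetical, used only for illustration.
def thoroughness(found, real):
    """Fraction of the real problems that the UEM uncovered."""
    return len(found & real) / len(real)

def validity(found, real):
    """Fraction of the UEM's reported problems that are real."""
    return len(found & real) / len(found)

def effectiveness(found, real):
    """Combined measure: thoroughness times validity."""
    return thoroughness(found, real) * validity(found, real)

# Hypothetical problem IDs reported by two UEMs, plus the agreed real set.
real_problems = {"P1", "P2", "P3", "P4", "P5"}
uem_a = {"P1", "P2", "P3", "P7"}   # three real hits, one false alarm
uem_b = {"P1", "P4"}               # two real hits, no false alarms

for name, found in [("UEM A", uem_a), ("UEM B", uem_b)]:
    print(name,
          round(thoroughness(found, real_problems), 2),
          round(validity(found, real_problems), 2),
          round(effectiveness(found, real_problems), 2))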
The remainder of this document is organized as follows:
• Chapter 2 discusses relevant literature and related work, situating our critical-parameter-based approach and providing motivation;
• Chapter 3 provides details on early studies that illustrate the need for an effective UEM creation method and the utility of claims analysis for uncovering problem sets;
• Chapter 4 describes the UEM creation process, including descriptions of the five LSIE systems (phase 1);
• Chapter 5 describes the comparison experiment, including discussion (phase 2);
• Chapter 6 describes three efforts to show that the heuristic set produced in Chapter 4 is indeed useful and usable (phase 3);
• Chapter 7 provides a discussion of the implications of this work;
• and Chapter 8 provides detailed descriptions of the contributions and information on future work directions.
Source:
Somervell, Jacob. Developing Heuristic Evaluation Methods for Large Screen Information Exhibits Based on Critical Parameters. PhD dissertation, Computer Science and Applications, Virginia Polytechnic Institute and State University, June 22, 2004.