Friday, September 4, 2009

Sep 5 - Hoegh, Usability Problems: Do Software Developers Already Know?

Usability Problems: Do Software Developers Already Know?
Rune Thaarup Høegh. Aalborg University, Department of Computer Science, Aalborg East, DK-9220, Denmark. runethh@cs.aau.dk

OZCHI 2006, November 20-24, 2006, Sydney, Australia.
OZCHI 2006 Proceedings ISBN: 1-59593-545-2

Abstract
The results of usability evaluations are often highlighted as a distinctive input for developers to improve the usability of a software system. On the other hand, developers say that many of the results from usability evaluations are issues already known to them. This paper presents a study of usability problems as developers perceive them in their own emerging software, in relation to usability problems experienced by users in a usability evaluation. The results indicate that having developers explicate their expectations about emerging software can provide a low-cost identification of problem areas, whereas a full-scale usability evaluation provides specific knowledge of usability problems and their severity.


When developing interactive software systems, a key quality factor is the usability of the software for its prospective users.
Usability relates to the extent to which a user can achieve specified goals with effectiveness, efficiency and satisfaction (ISO, 1998).
Usability evaluations are applied to assess the quality of a user interaction design and to establish a basis for improving it (Rubin, 1994). This is accomplished by identifying specific parts of a system that do not properly support the users in carrying out their work. Thus usability evaluations and the related activities can help developers make better decisions, and thereby allow them to do their jobs more effectively (Radle & Young, 2001).
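
My note: to make the ISO definition concrete, here is a minimal Python sketch (my own illustration, not from the paper) computing the three measures over a set of test sessions. The session fields and the 1-5 satisfaction scale are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One user attempting one task (fields assumed for illustration)."""
    completed: bool       # did the user achieve the specified goal?
    time_seconds: float   # time spent on the task
    satisfaction: float   # post-task rating, assumed 1 (worst) to 5 (best)

def usability_measures(sessions: list[Session]) -> dict[str, float]:
    """Compute the three ISO 9241-11 style measures over all sessions."""
    done = [s for s in sessions if s.completed]
    return {
        # effectiveness: share of tasks completed
        "effectiveness": len(done) / len(sessions),
        # efficiency: mean time per completed task
        "efficiency": sum(s.time_seconds for s in done) / len(done) if done else float("nan"),
        # satisfaction: mean post-task rating
        "satisfaction": sum(s.satisfaction for s in sessions) / len(sessions),
    }

print(usability_measures([
    Session(True, 120.0, 4.0),
    Session(False, 300.0, 2.0),
    Session(True, 90.0, 5.0),
]))
```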

Usability reports are suggested as the means for communicating the results of a usability evaluation (Dumas & Redish, 1993; Rubin, 1994; Molich, 2000).
The number and severity of identified usability problems are the typically communicated measurements (Karat et al., 1992; Nielsen, 1993; Preece et al., 2002).
The traditional usability report typically includes a description of the evaluation method and settings, demographic data about the test subjects, a list of usability problems sorted by severity, detailed descriptions of the usability problems, and log files with transcripts of the individual usability tests.
The list of usability problems sorted by severity is probably the most essential information in the usability report.
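
My note: the report structure described above maps naturally onto a small data model. The sketch below is my own illustration; the field names and the numeric severity scale are assumptions, not the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class UsabilityProblem:
    title: str
    description: str
    severity: int  # 1 = most severe; the numeric scale is assumed for illustration

@dataclass
class UsabilityReport:
    method: str                    # evaluation method and settings
    participants: list[str]        # demographic notes on the test subjects
    problems: list[UsabilityProblem] = field(default_factory=list)
    transcripts: list[str] = field(default_factory=list)  # session log files

    def problems_by_severity(self) -> list[UsabilityProblem]:
        """The most essential part of the report: problems sorted worst-first."""
        return sorted(self.problems, key=lambda p: p.severity)
```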

The study examines the number and nature of usability problems developers are aware of prior to a usability evaluation. The developers from the company in the study had previously claimed that many of the problems identified in usability evaluations were redundant with the knowledge they already had about the software from working with it on a daily basis; hence
they did not find the identified usability problems in the report to be a useful tool for improving the software.

Assumptions about usability and developers not recognizing usability problems are two aspects of system development that can lead to software that is hard to use for the end-users.
Experience from projects developed with the UCD approach shows that developers are often surprised to see that users cannot use the software the way it has been designed. The developers misjudge the usability of the software they have developed.

Method
Developers from two software projects were asked to write down all known usability problems in the software they developed.
Afterwards the two software systems were usability-tested with users, the results from the evaluations were analyzed, and these results were compared to the usability problems described by the developers.

Procedure
The procedure used for this study involved three overall steps: preparing the usability evaluation, interviewing the developers, and conducting the usability evaluation.
The first step was the creation of the tasks to be used in the usability test. The usability test was to be conducted as a validation test following the think-aloud protocol (Rubin, 1994). The company's human factors experts and a domain expert created a range of tasks designed to evaluate the general usability of the software. The tasks were used both to show the developers which parts of the software would be tested, and for the usability test itself.
The second step was a workshop with the GUI developers of each project. The developers were introduced to the study and then presented with the tasks that would be used in the evaluation. They were then asked to individually write down a list of the usability problems they knew existed in the parts of the software that would be tested. Afterwards the developers were asked to merge these into a common list. The developers presented the usability problems one by one, and a facilitator asked the others whether they had written down the same problems or variations of them. The usability problems were all written down on a whiteboard by the facilitator. This procedure went on until all the individual lists were exhausted. Finally the developers together rated the severity of the usability problems. The problems were rated according to the three categories suggested by Molich (2000). A rough sketch of this merging step follows below.
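
My note: as a rough model of the merging step, the sketch below (my own, with made-up problem wordings) folds the developers' individual lists into a deduplicated common list, encoding the facilitator's duplicate decisions as a mapping from each raw wording to a canonical problem, and then attaches the jointly agreed severity. Molich (2000) uses three severity categories; the exact labels below are my assumption.

```python
# Individual lists: each developer's problems, in their own words (made up).
individual_lists = {
    "dev_a": ["Save button hard to find", "Error messages unclear"],
    "dev_b": ["Cannot locate save", "Search results unsorted"],
}

# Facilitator's duplicate decisions: raw wording -> canonical problem title.
canonical = {
    "Save button hard to find": "Save function is hard to locate",
    "Cannot locate save": "Save function is hard to locate",
    "Error messages unclear": "Error messages are unclear",
    "Search results unsorted": "Search results are unsorted",
}

# Jointly agreed severity per canonical problem (three categories after
# Molich 2000; the labels minor/serious/critical are assumed).
severity = {
    "Save function is hard to locate": "serious",
    "Error messages are unclear": "minor",
    "Search results are unsorted": "minor",
}

def merge_lists(lists: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Fold individual lists into a deduplicated common list with severities."""
    common: list[str] = []
    for problems in lists.values():
        for raw in problems:
            title = canonical[raw]
            if title not in common:  # same problem, or a variation of it: skip
                common.append(title)
    return [(title, severity[title]) for title in common]

for title, sev in merge_lists(individual_lists):
    print(f"[{sev}] {title}")
```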
The third step of the study was the usability evaluation. Five individual users participated in the evaluation of the software projects. All evaluations were done within one day. The evaluation was performed using the think-aloud protocol, and each evaluation session took one hour. All of the sessions were recorded with the video equipment in the laboratory.

Data Analysis
A detailed video-based analysis (VBA) was conducted, and a list of usability problems for each system was generated based on the video recordings. The identification of usability problems included marking on the video when the problem occurred, along with a detailed description of the problem and a severity rating. After all the video had been analyzed, the identified usability problems were compared, and the complete list of problems was compiled. During the study, the time spent on the various tasks was recorded in a diary.
The usability problems described on the three lists were compared, and similarities and differences between the lists were noted, taking into account both the number of identified problems and the types of problems. Finally the time spent on creating the three lists was compared. Additionally, the results from projects A and B were examined for differences and similarities.
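
My note: once the developers' list has been manually matched against the VBA list (the matching itself is human judgment, not computable), the overlap figure is simple set arithmetic. A minimal sketch with made-up problem IDs; the example numbers happen to roughly reproduce the paper's 38 percent figure.

```python
# Problems found by video-based analysis (VBA), by made-up IDs.
vba_problems = {"P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"}

# VBA problems that were manually matched to an entry on the
# developers' common list.
known_to_developers = {"P02", "P05", "P07"}

known = vba_problems & known_to_developers
unknown = vba_problems - known_to_developers

print(f"Known beforehand: {len(known)}/{len(vba_problems)} "
      f"({len(known) / len(vba_problems):.0%})")
print(f"New from evaluation: {len(unknown)}/{len(vba_problems)} "
      f"({len(unknown) / len(vba_problems):.0%})")
```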

My Comments: The three lists were (a) the individual lists, (b) the common list, and (c) the VBA lists.

CONCLUSION
The developers had knowledge of 38 percent of the usability problems prior to the evaluations. The developers were, however, mostly only able to describe the usability problems in more general terms than those identified during the usability evaluation.
Two-thirds of the usability problems identified by the usability evaluations were not known beforehand. These were problems that the developers did not expect or acknowledge. If they had an idea about them beforehand, it was not obvious enough to them to put them on any of their lists. Hence the usability evaluations have the potential to provide the developers with new knowledge of their software.
The descriptions of the usability problems written by the developers were less specific than the problem descriptions produced by the VBA. The resources spent on obtaining the developers' lists of usability problems were, however, significantly lower.
The process of writing down the usability problems and merging the lists might have influenced the developers' knowledge of the software. Prior to the process the developers might have had only implicit ideas about the usability problems in the software, but asking them to write these down forced the developers to make their assumptions explicit.
On the one hand this might be considered a flaw in the study; on the other hand, having developers make their knowledge of the software's usability explicit might prove a valuable and cost-effective tool for improving the usability of the software.

References that I may want to read further in the future:
Dumas, J. S. and Redish, J. C. (1993). A practical guide to usability testing. Norwood, NJ: Ablex Publishing.
Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. New York, NY: John Wiley & Sons.
