Measuring Usability with the USE Questionnaire
By Arnold M. Lund
Source: http://www.stcsig.org/usability/newsletter/0110_measuring_with_use.html

Over the years I have worked with colleagues at Ameritech (where the work began), U.S. WEST Advanced Technologies, and most recently Sapient to create a tool that has helped in dealing with some of these questions.
The tool that we developed is called the USE Questionnaire. USE stands for Usefulness, Satisfaction, and Ease of Use. These are the three dimensions that emerged most strongly in the early development of the USE Questionnaire. For many applications, Usability appears to consist of Usefulness and Ease of Use, and the two are correlated.
Each factor in turn drives user satisfaction and frequency of use. Users appear to have a good sense of what is usable and what is not, and can apply their internal metrics across domains.
USE Questionnaire
Usefulness
It helps me be more effective.
It helps me be more productive.
It is useful.
It gives me more control over the activities in my life.
It makes the things I want to accomplish easier to get done.
It saves me time when I use it.
It meets my needs.
It does everything I would expect it to do.
Ease of Use
It is easy to use.
It is simple to use.
It is user friendly.
It requires the fewest steps possible to accomplish what I want to do with it.
It is flexible.
Using it is effortless.
I can use it without written instructions.
I don't notice any inconsistencies as I use it.
Both occasional and regular users would like it.
I can recover from mistakes quickly and easily.
I can use it successfully every time.
Ease of Learning
I learned to use it quickly.
I easily remember how to use it.
It is easy to learn to use it.
I quickly became skillful with it.
Satisfaction
I am satisfied with it.
I would recommend it to a friend.
It is fun to use.
It works the way I want it to work.
It is wonderful.
I feel I need to have it.
It is pleasant to use.
Work to refine the items and the scales continues. There is some evidence that for web sites and certain consumer products there is an additional dimension of fun or aesthetics associated with making a product compelling.
General Background
Subjective reactions to the usability of a product or application tend to be neglected in favor of performance measures, and yet these subjective metrics often capture the aspects of the user experience that are most closely tied to user behavior and purchase decisions.
While some tools exist for assessing software usability, they are typically proprietary (and may only be available for a fee). More importantly, they do not do a good job of assessing usability across domains.
When re-engineering began at Ameritech, it became important to be able to set benchmarks for product usability and to be able to measure progress against those benchmarks.
It also was critical to ensure that resources were being used as efficiently as possible, so tools for selecting the most cost-effective methodology, and for prioritizing the design problems developers should fix, became important.
Finally, it became clear that we could eliminate all the design problems and still end up with a product that would fail in the marketplace.
It was with this environment as a background that a series of studies began at Ameritech. The first was headed by Amy Schwartz and was a collaboration among human factors, market researchers in our largest marketing organization, and a researcher from the University of Michigan business school.
Building on that research, I decided to develop a short questionnaire that could be used to measure the most important dimensions of usability for users, and to measure those dimensions across domains. Ideally it should work for software, hardware, services, and user support materials. It should allow meaningful comparisons of products in different domains, even though testing of the products happened at different times and perhaps under different circumstances. In the best of all worlds, the items would have a certain amount of face validity for both users and practitioners, and it would be possible to imagine the aspects of the design that might influence ratings of the items.
It would not be intended to be a diagnostic tool, but rather would treat the dimensions of usability as dependent variables.
Subsequent research would assess how various aspects of a given category of design would impact usability ratings.
The early studies at Ameritech suggested that a viable questionnaire could be created. Interestingly, the results of those early studies were consistent with studies conducted in the MIS and technology diffusion areas, which also had identified the importance of and the relationships among Usefulness, Satisfaction, and Ease of Use.
How It Developed
The first step in identifying potential items for the questionnaire was to collect a large pool of items to test. The items were collected from previous internal studies, from the literature, and from brainstorming. The list was then massaged to eliminate or reword items that could not be applied across the hardware, software, documentation, and service domains. One goal was to make the items as simply worded as possible, and as general as possible.
As rounds of testing progressed, standard psychometric techniques were used to weed out items that appeared too idiosyncratic and to improve others through ongoing tweaking of the wording. In general, the items contributing to each scale were of approximately equal weight, the Cronbach's alphas were very high, and for the most part the items appeared to tap slightly different aspects of the dimensions being measured.
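For readers who want to run the same internal-consistency check on their own data, Cronbach's alpha is a standard statistic and straightforward to compute. Here is a minimal sketch in Python; the ratings below are made up for illustration and are not data from the original studies:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative data: five respondents rating the four Ease of Learning items
ratings = [[6, 7, 6, 6],
           [5, 5, 6, 5],
           [7, 7, 7, 6],
           [4, 5, 4, 4],
           [6, 6, 7, 6]]
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```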
The questionnaires were constructed as seven-point Likert rating scales. Users were asked to rate their agreement with the statements, ranging from strongly disagree to strongly agree.
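The article does not prescribe a scoring rule, but a common convention with Likert items is to average the ratings within each scale. A minimal sketch assuming that convention, with hypothetical ratings from a single respondent:

```python
import numpy as np

# Hypothetical ratings from one respondent, 1 (strongly disagree) to
# 7 (strongly agree), in the order the items appear above.
responses = {
    "Usefulness":       [6, 5, 7, 4, 6, 6, 5, 5],           # 8 items
    "Ease of Use":      [6, 6, 7, 5, 5, 6, 7, 5, 6, 6, 5],  # 11 items
    "Ease of Learning": [7, 6, 7, 6],                        # 4 items
    "Satisfaction":     [6, 6, 5, 5, 6, 4, 6],               # 7 items
}

# Report the mean rating for each dimension.
for scale, ratings in responses.items():
    print(f"{scale:>16}: {np.mean(ratings):.2f}")
```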
Various forms of the questionnaires were used to evaluate user attitudes towards a variety of consumer products.
Factor analyses following each study suggested that users were evaluating the products primarily along three dimensions: Usefulness, Satisfaction, and Ease of Use. Evidence of other dimensions was found, but these three served to most effectively discriminate between interfaces.
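For readers who want to try this kind of analysis themselves, the sketch below shows one way to extract a three-factor solution with scikit-learn. The random ratings matrix is only a stand-in for real questionnaire data, so it will not reproduce the structure reported here:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Stand-in data: in practice, ratings would be an (n_users, n_items)
# matrix of 1-7 agreement scores collected across products.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(200, 30)).astype(float)

# Extract three factors, mirroring the Usefulness / Satisfaction /
# Ease of Use structure described above; varimax rotation makes the
# item-factor loadings easier to interpret.
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(ratings)
loadings = fa.components_.T   # (n_items, 3): each item's loading per factor
print(loadings.round(2))
```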
Partial correlations calculated using scales derived for these dimensions suggested that Ease of Use and Usefulness influence one another, such that improvements in Ease of Use improve ratings of Usefulness and vice versa.
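A partial correlation of this sort can be computed directly from the pairwise correlations. A minimal sketch with simulated scale scores (the relationships are built into the fake data below, not taken from the studies' results):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y with the influence of z partialled out."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Simulated per-user scale scores (item means for each dimension)
rng = np.random.default_rng(1)
usefulness = rng.uniform(1, 7, 100)
ease_of_use = np.clip(usefulness + rng.normal(0, 1, 100), 1, 7)
satisfaction = np.clip((usefulness + ease_of_use) / 2
                       + rng.normal(0, 1, 100), 1, 7)

print(f"r(EoU, Usefulness | Satisfaction) = "
      f"{partial_corr(ease_of_use, usefulness, satisfaction):.2f}")
```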
While both drive Satisfaction, Usefulness is relatively less important when the systems are internal systems that users are required to use. Users are more variable in their Usefulness ratings when they have had only limited exposure to a product.
As expected from the literature, Satisfaction was strongly related to the usage (actual or predicted).
For internal systems, the items that contributed to Ease of Use for other products could actually be separated into two factors, Ease of Learning and Ease of Use (which were obviously highly correlated).
Conclusion
While the questionnaire has been used successfully by many companies around the world, and as part of several dissertation projects, its development is still not over. For the reasons cited above, it is an excellent starting place. The norms I have developed over the years have been useful in determining when I have achieved sufficient usability to enable success in the market. To truly develop a standardized instrument, however, the items should be taken through a complete psychometric instrument development process.
A study I have been hoping to run is one that simultaneously uses the USE Questionnaire and other questionnaires like SUMI or QUIS to evaluate applications. Once a publicly available (i.e., free) standardized questionnaire is available that applies across domains, a variety of interesting lines of research are possible. The USE Questionnaire should continue to be useful as it stands, but I hope the best is yet to come.