Questionnaires in Usability Engineering
Index of questions on this page
What is a questionnaire?
Are there different kinds of questions?
What are the advantages of using questionnaires in usability research?
What are the disadvantages?
How do questionnaires fit in with other HCI evaluation methods?
What is meant by reliability?
What is meant by validity?
Should I develop my own questionnaire?
What's wrong with putting a quick-and-dirty questionnaire together?
Factual-type questionnaires are easy to do, though, aren't they?
What's the difference between a questionnaire which gives you numbers and one that gives you free text comments?
Can you mix factual and opinion questions, closed and open ended questions?
How do you analyse open-ended questionnaires?
What is a Likert-style questionnaire? One with five response choices to each statement, right?
How can I tell if a question belongs to a Likert scale or not?
How many response options should there be in a numeric questionnaire?
How many anchors should a questionnaire have?
My respondents are continually complaining about my questionnaire items. What can I do?
What other kinds of questionnaires are there?
Should favourable responses always be checked on the left (or right) hand side of the scale?
Is a long questionnaire better than a short one? How short can a questionnaire be?
Is high statistical reliability the 'gold standard' to aim for?
What's the minimum and maximum figure for reliability?
Can you tell if a respondent is lying?
Why do some questionnaires have sub-scales?
How do you go about identifying component sub-scales?
How much can I change the wording in a standardised opinion questionnaire?
What's the difference between a questionnaire and a checklist?
Where can I find out more about questionnaires?
Are there different kinds of questions?
There are three basic types of questions:
Factual-type questions
Such questions ask about public, observable information that would be tedious or inconvenient to get any other way: for instance, the number of years a respondent has been working with computers, what kind of education the respondent received, how many times the computer broke down in a two-hour session, or how quickly a user completed a certain task. If you are going to include such questions, you should spend time and effort ensuring that the information you are collecting is accurate, or at least determining the amount of bias in the answers you are getting.
Opinion-type questions
These ask the respondent what they think about something or someone. There is no right or wrong answer; all the respondent has to do is express the strength of their feeling: do they like it or not, which do they prefer, will they vote for Mr A or Mr B? An opinion survey does not concern itself with subtleties of thought in the respondent; it is concerned with finding out how popular someone or something is. Opinion questions direct the thought of the respondent outwards, towards people or artefacts in the world out there. Responses to opinion questions can usually be checked against people's actual behaviour, in retrospect ('Wow! It turned out that those soft, flexible keyboards were a lot less popular than we imagined they would be!')
Attitude questions
Attitude questions focus the respondent's attention inwards, on their internal responses to events and situations in their lives. There are many questionnaires consisting of attitude questions about experiences with Information Technology, the Internet, multimedia and so on. These tend to be of interest to the student of social science. Of more use to the HCI practitioner are questionnaires that ask respondents about their attitudes to working with a particular product they have had some experience of. These are generally called satisfaction questionnaires.
In our research, we have found that users' attitudes to working with a particular computer system can be divided into attitudes concerning:
* The user's feeling of being efficient
* The degree to which the user likes the system
* How helpful the user feels the system is
* To what extent the user feels in control of the interactions
* The extent to which the user feels they can learn more about the system by using it
We can't directly cross-check attitude results against behaviours in the way we can with factual and opinion type questions. However, we can check whether attitude results are internally consistent and this is an important consideration when developing attitude questionnaires.
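One common measure of internal consistency is Cronbach's alpha, which rises when responses to the individual attitude items move together across respondents. The following is a minimal sketch, with invented 5-point response data, of how it can be computed by hand:

```python
# Hypothetical illustration: internal consistency of a set of attitude items
# measured with Cronbach's alpha. The response data below are invented.

def cronbach_alpha(responses):
    """responses: one list of item scores per respondent."""
    n_items = len(responses[0])

    def variance(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each item across respondents
    item_vars = [variance([r[i] for r in responses]) for i in range(n_items)]
    # Variance of each respondent's total score
    total_var = variance([sum(r) for r in responses])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Five respondents answering three 5-point attitude items
data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
]
print(round(cronbach_alpha(data), 2))  # prints 0.9
```

Values towards 1 indicate that the items are measuring a common underlying attitude; low or negative values suggest the items do not belong together.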
What are the advantages of using questionnaires in usability research?
The biggest single advantage is that a usability questionnaire gives you feedback from the point of view of the user. If the questionnaire is reliable, and you have used it according to the instructions, then this feedback is a trustworthy sample of what you (will) get from your whole user population.
Another big advantage is that measures gained from a questionnaire are, to a large extent, independent of the system, users, or tasks to which the questionnaire was applied. You could therefore compare
* the perceived usability of a word processor with an electronic mailing system,
* the ease of use of a database as seen by a novice and an expert user,
* the ease with which you can do graphs and statistical computations on a spreadsheet.
Additional advantages are that questionnaires are usually quick and therefore cost effective to administer and to score and that you can gather a lot of data using questionnaires as surveys. And of course, questionnaire data can be used as a reliable basis for comparison or for demonstrating that quantitative targets in usability have been met.
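As a sketch of that last point, here is one way (with invented scores and an assumed target value) to check a quantitative usability target against questionnaire results, allowing for sampling error:

```python
# Minimal sketch: has a quantitative satisfaction target been met?
# The scores and the target figure below are invented for illustration.

import statistics

scores = [68, 72, 75, 61, 80, 70, 66, 74]  # overall satisfaction, 0-100 scale
target = 65                                 # assumed project target

mean = statistics.mean(scores)
# Standard error of the mean: sample spread divided by sqrt(sample size)
sem = statistics.stdev(scores) / len(scores) ** 0.5

# A conservative check: the mean must clear the target by two standard errors
met = mean - 2 * sem > target
print(f"mean={mean:.2f}, SEM={sem:.2f}, target met: {met}")
```

The two-standard-error margin is a rough rule of thumb; a proper one-sample significance test would be used in a formal report.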
What are the disadvantages?
The biggest single disadvantage is that a questionnaire tells you only the user's reaction as the user perceives the situation. Thus some kinds of questions, for instance, to do with time measurement or frequency of event occurrence, are not usually reliably answered in questionnaires. On the whole it is useful to distinguish between subjective measures (which is what questionnaires are good for) and performance measures (which are publicly-observable facts and are more reliably gathered using direct event and time recording techniques).
There is an additional smaller disadvantage. A questionnaire is usually designed to fit a number of different situations (because of the costs involved). Thus a questionnaire cannot tell you in detail what is going right or wrong with the application you are testing. But a well-designed questionnaire can get you near to the issues, and an open-ended questionnaire can be designed to deliver specific information if properly worded.
How do questionnaires fit in with other HCI evaluation methods?
The ISO 9241 standard, part 11, defines usability in terms of effectiveness, efficiency, and satisfaction. If you are going to do a usability laboratory type of study, then you will most probably be recording user behaviour on a video or at least timing and counting events such as errors. This is known as performance or efficiency analysis.
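Timing and counting events of this kind usually reduces to simple arithmetic over a session log. A minimal sketch, with an invented log format and event names, might look like this:

```python
# Hypothetical sketch of performance (efficiency) analysis from a usability
# session log. The (timestamp, event) records below are invented.

log = [
    (0.0, "task_start"),
    (12.4, "error"),
    (20.1, "error"),
    (95.8, "task_end"),
]

# Task completion time: last timestamp minus first
task_time = log[-1][0] - log[0][0]
# Error count: number of 'error' events in the session
errors = sum(1 for _, event in log if event == "error")

print(f"completion time: {task_time:.1f}s, errors: {errors}")
```

In practice such figures come from video annotation or automatic logging tools rather than hand-written lists, but the analysis is the same.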
You will also most probably be assessing the quality of the outputs that the end user generates with the aid of the system you are evaluating. Although this is harder to do, and more subjective, this is known as effectiveness analysis.
But these two together don't add up to a complete picture of usability. You want to know what the user feels about the way they interacted with the software. In many situations, this may be the single most important item arising from an evaluation! Enter the user satisfaction questionnaire.
It is important to remember that these three items (effectiveness, efficiency, and satisfaction) don't always give the same answers: a system may be effective and efficient to use, but users may hate it. Or the other way round.
Questionnaires of a factual variety are also used very frequently in evaluation work to keep track of data about users such as their age, experience, and what their expectations are about the system that will be evaluated.
Questionnaires in Usability Engineering
A List of Frequently Asked Questions (3rd Ed.)
Compiled by: Jurek Kirakowski, Human Factors Research Group, Cork, Ireland. This edition: 2nd June, 2000.
Source: http://www.ucc.ie/hfrg/resources/qfaq1.html