Saturday, September 5, 2009

Sep 6 - Definition/Concept of Usability

The Standard of User-Centered Design and the Standard Definition of Usability: Analyzing ISO 13407 against ISO 9241-11.
Timo Jokela, Netta Iivari. Oulu University, P.O. Box 3000, 90014 Oulu, Finland. +358 8 5531011 {timo.jokela, netta.iivari}@oulu.fi
Juha Matero, Minna Karukka. Nokia, P.O. Box 50, 90571 Oulu, Finland. {juha.p.matero, minna.karukka}@nokia.com

Probably the best known definition of usability is by Nielsen: usability is about learnability, efficiency, memorability, errors, and satisfaction [16].

However, the definition of usability from ISO 9241-11 (Guidance on usability) [11] – “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” - is becoming the main reference of usability.

In addition to being widely recognized in the literature, this ‘standard’ definition of usability is used in the recent Common Industry Format (CIF) for usability testing [1].

ISO 13407 [9], Human-centred design processes for interactive systems, is a standard that provides guidance for user-centered design. ...it describes usability at a level of principles, planning and activities. A third important aspect is that ISO 13407 explicitly uses the standard definition of usability from ISO 9241-11 as a reference for usability.

Usability is defined in ISO 9241-11 [11] as follows:
Usability: The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
The terms are further defined as follows:
Effectiveness: the accuracy and completeness with which users achieve specified goals
Efficiency: the resources expended in relation to the accuracy and completeness with which users achieve goals
Satisfaction: freedom from discomfort, and positive attitude to the use of the product
Context of use: characteristics of the users, tasks and the organizational and physical environments
Goal: intended outcome
Task: activities required to achieve a goal

Generally, this definition of usability is a ‘broad’ approach to usability [2]: usability is about supporting users in achieving their goals in their work, it is not only a characteristic of a user interface.

..usability is a function of users of a product or a system (specified users). Further, for each user, usability is a function of achieving goals in terms of a set of attributes (i.e. effectiveness, efficiency and satisfaction) and environment of use.
As an example, one usability measure of a bank machine could be: 90% of users achieve the goal (Es) in less than 1 minute (Ey) with an average satisfaction rating of ‘6’ (S) when users are novices (U) and they want to have a desired sum of cash withdrawn (G) with any bank machine (Et).

The analysis of the definition of usability shows that one needs to determine the following outcomes when the definition is used in a development project:
(1) The users of the system,
(2) Goals of users,
(3) Environments of use
(4) Measures of effectiveness, efficiency and satisfaction.
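To make this concrete, here is a minimal sketch (Python; the field names are my own choices, not the standard's) of how those four outcomes could be recorded as a structured usability requirement, restating the bank machine example above:

from dataclasses import dataclass

@dataclass
class UsabilityRequirement:
    """One measurable usability requirement in ISO 9241-11 terms.
    Field names are illustrative, not taken from the standard itself."""
    users: str                 # specified users, e.g. "novice users"
    goal: str                  # intended outcome
    environment: str           # context of use / equipment
    effectiveness_target: str  # accuracy and completeness criterion
    efficiency_target: str     # resources (here: time) criterion
    satisfaction_target: str   # attitude / comfort criterion

bank_machine = UsabilityRequirement(
    users="novice users",
    goal="withdraw a desired sum of cash",
    environment="any bank machine",
    effectiveness_target="90% of users achieve the goal",
    efficiency_target="goal achieved in less than 1 minute",
    satisfaction_target="average satisfaction rating of 6",
)
print(bank_machine)

Writing the requirement down as data like this makes the four outcomes explicit and easy to check against later evaluation results.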

ISO 13407 is an international standard established in 1999. It “provides guidance on human-centred design activities throughout the life cycle of computer-based interactive systems” and is aimed at “those managing design processes”. It is intended to provide ‘overview’ guidance for the planning and management of user-centered design, not detailed coverage of methods and techniques.

ISO 13407 describes user-centered design from four different aspects:
• Rationale for UCD
• Planning UCD
• Principles of UCD
• Activities of UCD.

Usability is one quality characteristic of a product [10] among others, such as functionality, efficiency, reliability, maintainability and portability. In the requirements phase, when the quality requirements for a product are determined, the usability requirements should be determined as well.
While all life-cycle activities are relevant to the design of usability, the definition of usability has a critical impact especially in the requirements phase of a development project. The outcomes of these requirements activities (identification of users, goals, environments, usability measures) provide direction for the design phase and a basis for planning evaluations.
In practice, Nielsen’s attributes as such are too ambiguous to be used in determining the usability requirements.
My Comments: This part of the article, particularly on User-Centred Design, is good on the process of applying usability throughout the entire design and development process.

DISCUSSION
ISO 9241-11 and ISO 13407 are two important standards related to usability: the former provides the definition of usability and the latter guidance for designing usability. We carried out an interpretive analysis of ISO 13407 from the viewpoint of the standard definition of usability from ISO 9241-11. The results show that ISO 13407 provides only partial guidance for designing usability as presumed by the definition. Guidance for describing users and environments is provided, but very limited guidance is given for describing user goals and usability measures, and generally for the process of producing the various outcomes.
----------
A Comparative Study of Two Usability Evaluation Methods Using a Web-Based E-Learning Application.
Samuel Ssemugabi. School of Information Technology, Walter Sisulu University, Private Bag 1421, East London, 5200, South Africa. +27 43 7085407 ssemugab@wsu.ac.za
Ruth de Villiers. School of Computing, University of South Africa, P O Box 392, Unisa, 0003, South Africa +27 12 429 6559 dvillmr@unisa.ac.za

SAICSIT 2007, 2 - 3 October 2007, Fish River Sun, Sunshine Coast, South Africa.

Usability is a key issue in human-computer interaction (HCI) since it is the aspect that commonly refers to quality of the user interface [30].
The International Standards Organisation (ISO) defines usability as [16]: The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context.

Usability evaluation is concerned with gathering information about the usability or potential usability of a system, in order to assess it or to improve its interface by identifying problems and suggesting improvements [35]. Various usability evaluation methods (UEMs) exist, e.g. analytical, expert heuristic evaluation, survey, observational, and experimental methods [6, 13, 35].
To evaluate the usability of a system and to determine usability problems, it is important to select an appropriate UEM/s [10, 37], taking cognisance of efficiency, time, cost-effectiveness, ease of application, and expertise of evaluators [13, 30].

Evaluation of e-learning should address aspects of pedagogy and learning from educational domains as well as HCI factors such as the effectiveness of interfaces and the quality of usability and interaction.

Usability Category and Usability Criteria used by Ssemugabi & de Villiers in their research:
Category 1: General interface usability criteria (based on Nielsen’s heuristics, modified for e-learning context)
1 Visibility of system status
2 Match between the system and the real world i.e. match between designer model and user model
3 Learner control and freedom
4 Consistency and adherence to standards
5 Error prevention, in particular, prevention of peripheral usability-related errors [36]
6 Recognition rather than recall
7 Flexibility and efficiency of use
8 Aesthetics and minimalism in design
9 Recognition, diagnosis, and recovery from errors
10 Help and documentation
Category 2: Website-specific criteria for educational websites
11 Simplicity of site navigation, organisation and structure
12 Relevance of site content to the learner and the learning process
Category 3: Learner-centred instructional design, grounded in learning theory, aiming for effective learning
13 Clarity of goals, objectives and outcomes
14 Effectiveness of collaborative learning (where such is available)
15 Level of learner control
16 Support for personally significant approaches to learning
17 Cognitive error recognition, diagnosis and recovery
18 Feedback, guidance and assessment
19 Context meaningful to domain and learner
20 Learner motivation, creativity and active learning
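As a rough illustration of how such a criteria set could be used during a heuristic evaluation, the Python sketch below (category names shortened; the severity scale and example findings are hypothetical, not from the paper) tallies evaluator-reported problems per category:

from collections import defaultdict

# Criterion numbers per category, as listed above (names shortened).
CRITERIA = {
    "General interface usability": range(1, 11),            # criteria 1-10
    "Website-specific (educational)": range(11, 13),        # criteria 11-12
    "Learner-centred instructional design": range(13, 21),  # criteria 13-20
}

def problems_per_category(problems):
    """problems: list of (criterion_number, severity) tuples reported by
    evaluators; returns the number of problems found per category."""
    counts = defaultdict(int)
    for criterion, _severity in problems:
        for category, numbers in CRITERIA.items():
            if criterion in numbers:
                counts[category] += 1
    return dict(counts)

# Three hypothetical findings: criterion 3 (severity 2), 11 (3), 18 (1).
print(problems_per_category([(3, 2), (11, 3), (18, 1)]))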

------------
A Tool to Support Usability Inspection
Carmelo Ardito, Rosa Lanzilotti, Paolo Buono, Antonio Piccinno
Dipartimento di Informatica, Università di Bari, Italy
{ardito, lanzilotti, buono, piccinno}@di.uniba.it

AVI '06, May 23-26, 2006, Venezia, Italy.

Usability is a significant aspect of the overall quality of interactive applications.

-----------
Methods for quantitative usability requirements: a case study on the development of the user interface of a mobile phone.
Timo Jokela, Jussi Koivumaa, Jani Pirkola, Petri Salminen, Niina Kantola

Pers Ubiquit Comput (2006) 10: 345–355
DOI 10.1007/s00779-005-0050-7

T. Jokela, N. Kantola. Oulu University, P.O. Box 3000, Oulu, Finland. E-mail: timo.jokela@oulu.fi, niina.kantola@oulu.fi
J. Koivumaa, J. Pirkola. Nokia, P.O. Box 50, 90571 Oulu, Finland. E-mail: jussi.koivumaa@nokia.com, jani.pirkola@nokia.com
P. Salminen. ValueFirst, Luuvantie 28, 02620 Espoo, Finland. E-mail: petri.salminen@valuefirst.fi

Mobile phones have become a natural part of our everyday lives. Their user friendliness, termed usability, is increasingly in demand. Usability brings many benefits: users are able and willing to use the various features of the phone and the services supplied by the operators, the need for customer support decreases, and, above all, user satisfaction increases.
At the same time, design is becoming increasingly challenging as the number of functions grows and the size of the phones shrinks. Another challenge is the ever-shortening life cycle of the phones, which leaves less time for development.

The practice of designing usable products is called usability engineering. The book User-centered system design by Donald Norman and Stephen Draper [1] is a pioneering work. John Gould and his colleagues also worked with usability methodologies in the 1980s [2]. Dennis Wixon and Karen Holtzblatt at Digital Equipment developed Contextual Inquiry and later on Contextual Design [3]; Carroll and Mack [4] were also early contributors. Later, various UCD methodologies were proposed e.g. by [5–10]. The standard ISO 13407 [11] is a widely used general reference for usability engineering.

The first activity is to identify users. Context of use analysis is about getting to know users: what the users’ goals are in relation to the product under development, what kind of tasks they do and in which contexts. User information is the basis for usability requirements where the target levels of the usability of the product under development are determined. A new product should lead to more efficient user tasks...

An essential part of the usability life-cycle is (quantitative) usability requirements, i.e. measurable usability targets for the interaction design [13–17]. As stated in [13]: “Without measurable usability specifications, there is no way to determine the usability needs of a product, or to measure whether or not the finished product fulfils those needs. If we cannot measure usability, we cannot have usability engineering”.

Methods for quantitative usability requirements
There are two main activities related to quantitative usability requirements.
During the early phases of a development project, the usability requirements are determined (a in Fig. 1), and during the late phases, the usability of the product is evaluated against the requirements (b in Fig. 1).
Determining usability requirements can be further split into two activities: defining the usability attributes, and setting target values for the attributes.
In the evaluation, a measuring instrument is required.

Determining usability attributes
The main reference of usability is probably the definition of usability in ISO 9241-11: “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” [19]. In brief, the definition means that usability requirements are based on measures of users performing tasks with the product to be developed.
– An example of an effectiveness measure is the percentage of users who can successfully complete a task.
– Efficiency can be measured by the mean time needed to successfully complete a task.
– User satisfaction can be measured with a questionnaire.

Usability requirements may include separate definitions of the target level (e.g. 90% of users can successfully complete a task) and the minimum acceptable level (e.g. 80% of users can successfully complete a task) [20].
Whiteside et al. [21] suggest that quantitative usability requirements be phrased at four levels: worst, planned, best and current.
Questionnaires measuring user satisfaction provide quantitative, though subjective usability metrics for related usability attributes.
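A minimal sketch (Python; the 90% target and 80% minimum acceptable levels are the example figures quoted above, while the attribute name and function are hypothetical) of how measured test results could be checked against such requirement levels:

# Hypothetical attribute name; the 90% target and 80% minimum acceptable
# level are the example figures quoted above.
REQUIREMENTS = {
    "task_completion_rate": {"minimum": 0.80, "target": 0.90},
}

def assess(measured):
    """Classify each measured attribute against its requirement levels."""
    verdicts = {}
    for attribute, levels in REQUIREMENTS.items():
        value = measured[attribute]
        if value >= levels["target"]:
            verdicts[attribute] = "target met"
        elif value >= levels["minimum"]:
            verdicts[attribute] = "acceptable"
        else:
            verdicts[attribute] = "below minimum"
    return verdicts

# Example: 17 of 20 test participants completed the task.
print(assess({"task_completion_rate": 17 / 20}))  # -> 'acceptable'

The same scheme could be extended with the worst/planned/best/current levels suggested by Whiteside et al. by adding further thresholds per attribute.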

According to Nielsen [5], usability is associated with five attributes: learnability, efficiency, memorability, errors and satisfaction. In usability goal setting, these attributes must be prioritised based on user and task analysis, and then operationalised and expressed in measurable ways.

-----------
Conceptual Framework and Models for Identifying and Organizing Usability Impact Factors of Mobile Phones.
Dong-Han Ham1, Jeongyun Heo2, Peter Fossick3, William Wong1, Sanghyun Park2, Chiwon Song2, Mike Bradley3
1School of Computing Science, Middlesex University, The Burroughs London, NW4 4BT UK. {d.ham, w.wong}@mdx.ac.uk
2MC R&D Centre, LG Electronics, Seoul Korea. {jy_heo, sanghyun, chiwon79}@lge.com
3Product Design Engineering, Middlesex University, Bramley Road London, N14 4YZ UK. {p.fossick, m.d.bradley}@mdx.ac.uk

OZCHI 2006, November 20-24, 2006, Sydney, Australia.

Usability has been regarded as a critical factor affecting the quality of mobile phones.

It has been reported that usability is one of the most important attributes affecting the quality of mobile phones and thus users’ satisfaction (Ketola and Röykkee, 2001).
Usability has been defined in various ways, but the concept of usability defined in ISO/IEC 9126 (1998) is widely accepted (Hornbaek, 2006). According to the definition, usability refers to ‘the capability of the (software) product to be understood, learned, used and be attractive to the user, when used under specified conditions.’
Although the definition focuses on software systems, it can be applied to mobile phones when features specific to mobile phones are taken into consideration.

Usability can be considered both from a design and evaluation perspective (Folmer et al., 2003).
Usability should be properly specified during requirements analysis and designed during the architectural and implementation design phases. Conversely, usability is the concept that needs to be evaluated from a user-centric point of view.
User perception of usability is influenced by many design factors including visual appeal, hedonic qualities, logical task sequences, and pleasure in use, as well as contextual factors including the users’ environment (i.e. context of use).

To evaluate usability in a more systematic way, many studies examined factors or dimensions constituting usability (Bevan, 1999).
For example, ISO/IEC 9241 (1998) defines three dimensions: effectiveness, efficiency, and satisfaction. Another example is the set of attributes described in Nielsen (1993): learnability, efficiency of use, memorability, errors, and satisfaction.
These dimensions can be classified into two main groups: objective and subjective dimensions.
An objective dimension generally measures how well users’ tasks are supported, by applying task performance measures such as task completion time and the number of errors. However, objective dimensions do not always predict the user’s assessment of usability because they do not reflect users’ feelings or satisfaction.
Subjective dimensions therefore also need to be assessed to provide a holistic and complete usability measurement.

Usability can be measured in various ways; however, these can be categorized into three classes of methods: usability testing, usability inquiry, and usability inspection (Zhang, 2003).
It cannot be said that one method is the best in all situations. Hence it is necessary to choose an appropriate method, taking into consideration evaluation purposes, available time, measures to be collected and so on.

To examine the usability of mobile phones, it would be useful to understand the user interface of mobile phones, tasks to be completed, and the context of use of mobile phones.

My Comments: Usability problems of mobile phones are:
A key feature and constraint of the user interface is that mobile phones have too little screen space to display much information at the same time; therefore, information organization and navigation are critical usability issues.
The second significant feature is that a physical button or key generally has more than one control function, so the functions of a single key depend on the current mode.
The third is that processing power and available memory are limited.

Three views of the framework indicate that usability impact factors can be categorized into three groups.
The first group is human perceived usability (user view) where typical examples include effectiveness, efficiency, and memorability.
The second is properties exhibited by mobile phones (product view); examples are reliability, durability, performance, and aesthetics.
The third is performance on tasks (interaction view); examples are task supportability and error prevention.

Usability can be interpreted by using usability indicators. Usability indicators can be obtained by the functional combination of usability criteria. Usability criteria can be measured by applying measurement methods to the properties of mobile phones. These properties have specific value as usability data. Figure 4 explains five abstraction levels of usability impact factors.
1 Usability (Quality in Use)
2 Usability Indicator
3 Usability Criteria
4 Usability Property (Metric)
5 Usability Data.

Usability Indicator
While usability cannot be evaluated accurately and completely by any single means, it can be estimated or evaluated through usability indicators, which provide a basis for decision making. On the basis of the literature we reviewed, we concluded that usability of mobile phones can be estimated by five indicators, which are further divided into two dimensions. The five indicators are:
1 effectiveness,
2 efficiency,
3 learnability,
4 satisfaction, and
5 customization.
Of these, the first three indicators are related to task performance dimensions and are easy to quantify, but the latter two, related to emotional human factors, are not easy to quantify.

----------
Evaluating Web Usability Using Small Display Devices.
Carlos J. Costa. DCTI – ISCTE, Adetti/ISCTE, Lisboa, Portugal. carlos.costa@iscte.pt
José P. Novais Silva. Portugal Telecom, Lisboa, Portugal. jose.p.silva@telecom.pt
Manuela Aparício. ITML, Lisboa, Portugal. manuela@design.itml.org

SIGDOC’07, October 22–24, 2007, El Paso, Texas, USA.

Mobile devices enable users to access information and web-based services from any location, either by PC or by small display devices. However, those mobile devices are restricted by small screen sizes, which limit the amount of information that can be displayed at the same time. Therefore, it has become increasingly important to learn how to evaluate their use and how to design mobile device functionality. It has already been shown that an attractive interface is not necessarily a usable one; nevertheless, “a good graphic design and attractive displays can increase users’ satisfaction and thus improve productivity” [10].

Starting in the mid-1980s and gaining strength in the 1990s, the interface development community employed usability engineering methods to design and test software systems for ease of use, ease of learning, memorability, lack of errors, and satisfaction [17].
Usability practitioners of the 1990s considered two factors as measures of usability [3]: the ease of learning and the ease of use.
Learnability, flexibility and robustness are pointed out as three principles that support usability [10].
In this study, learnability and flexibility were taken into account. As defined in [10], learnability is “the ease with which new users can begin effective interaction and achieve maximal performance. Flexibility is the multiplicity of ways in which the user and system exchange information.”

What criteria should be taken into account when analysing usability? The question is addressed by various authors [2][12][17][20].
Focusing on the user, traditional usability can be characterized by the following, [17]: Learnability, Memorability, Efficiency, Errors and Satisfaction.
However, Seffah and colleagues [20] present another concept of usability. Combining various standards and models (such as Nielsen’s usability characteristics), they unified them into a single model of usability measurement. This model, called Quality in Use Integrated Measurement (QUIM), includes 10 usability factors:
1. Efficiency;
2. Effectiveness;
3. Productivity;
4. Satisfaction;
5. Learnability;
6. Safety;
7. Trustfulness;
8. Accessibility;
9. Universality;
10. Usefulness.

-----------
A Method to Standardize Usability Metrics Into a Single Score.
Jeff Sauro. PeopleSoft, Inc., Denver, Colorado USA. Jeff_Sauro@peoplesoft.com
Erika Kindlund. Intuit, Inc., Mountain View, California USA. Erika_Kindlund@intuit.com

CHI 2005, April 2–7, 2005, Portland, Oregon, USA

In a summative usability evaluation, several metrics are available to the analyst for benchmarking the usability of a product. There is general agreement from the standards boards ANSI 2001[2] and ISO 9241 pt.11[18] as to what the dimensions of usability are (effectiveness, efficiency & satisfaction) and to a lesser extent which metrics are most commonly used to quantify those dimensions.
Effectiveness includes measures for completion rates and errors, efficiency is measured from time on task and satisfaction is summarized using any of a number of standardized satisfaction questionnaires (either collected on a task-by-task basis or at the end of a test session) [2],[18].

There have been attempts to derive a single measure for the construct of usability.
Babiker et al [3] derived a single metric for usability in hypertext systems using objective performance measures only.
Questionnaires such as the SUMI [22,23], PSSUQ[27], QUIS[7] and SUS[5] have users provide a subjective assessment of recently completed tasks or specific product issues and claim to derive a reliable and low-cost standardized measure of the overall usability or quality of use of a system.

Four summative usability tests were conducted to collect the common metrics as described above (task completion, error counts, task times and satisfaction scores) as well as several other metrics as suggested in Dumas and Redish [11], and Nielsen [39].
For measuring satisfaction we created a questionnaire containing semantic distance scales with five points, similar to the ASQ created by Lewis [26] (see Table 5 below). The questionnaire included questions on task experience, ease of task, time on task, and overall task satisfaction.

Creating a Single, Standardized and Summated Usability Metric: SUM
We created a single, standardized and summated usability metric for each task by averaging together the four standardized values (task completion, error rates, satisfaction scores, task times) based on the equal weighting of the coefficients from the Principal Components Analysis.
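The published SUM procedure standardizes each metric against specification limits before averaging; the Python sketch below only illustrates the simpler core idea of z-standardizing four per-participant metrics and averaging them with equal weights (all data, names and the standardization choice are my assumptions, not the paper's exact formula):

import statistics

def standardize(values):
    """z-standardize one metric across test participants."""
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical raw data for one task and five participants.
# Time and error counts are negated so that larger always means better.
completion   = [1, 1, 0, 1, 1]            # task completed (1) or not (0)
neg_time     = [-42, -55, -90, -48, -60]  # task time in seconds, negated
neg_errors   = [0, -1, -3, 0, -1]         # error counts, negated
satisfaction = [4.5, 4.0, 2.5, 4.2, 3.8]  # questionnaire rating

# Equal-weight average of the four standardized metrics per participant.
standardized = [standardize(m) for m in (completion, neg_time, neg_errors, satisfaction)]
per_participant_score = [statistics.mean(vals) for vals in zip(*standardized)]
print([round(s, 2) for s in per_participant_score])

Because the sketch standardizes against the sample mean, scores here only show participants above or below the sample average; standardizing against specification limits, as in the paper, yields scores relative to the requirement instead.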

----------
Usability Problems: Do Software Developers Already Know?
Rune Thaarup Høegh. Aalborg University, Department of Computer Science, Aalborg East, DK-9220, Denmark. runethh@cs.aau.dk

OZCHI 2006, November 20-24, 2006, Sydney, Australia.
OZCHI 2006 Proceedings ISBN: 1-59593-545-2

When developing interactive software systems, a key quality factor is usability of the software for its prospective users.
Usability relates to the extent a user can achieve specified goals with effectiveness, efficiency and satisfaction (ISO 1998).

Usability evaluations are applied to assess the quality of a user interaction design and establish a basis for improving it (Rubin, 1994). This is accomplished by identifying specific parts of a system that do not properly support the users in carrying out their work. Thus usability evaluations and the related activities can help developers make better decisions, and thereby allow them to do their jobs more effectively (Radle & Young, 2001).

Usability reports are suggested as the means for communicating the results of a usability evaluation (Dumas and Redish, 1993; Rubin, 1994; Molich, 2000).
The number and severity of identified usability problems are the measurements typically communicated (Karat et al., 1992; Nielsen, 1993; Preece et al., 2002).
The list of usability problems sorted by severity is probably the most essential information in the usability report.

-----------
Evaluating Web Page and Web Site Usability.
Christopher C. Whitehead. Columbus State University, 4225 University Avenue, Columbus, GA 31907. 1-706-565-3527 whitehead_christopher@colstate.edu

ACM SE’06, March, 10-12, 2006, Melbourne, Florida, USA

Why is usability important?
As Nielsen states, "Usability rules the Web. Simply stated, if the customer can't find a product, then he or she will not buy it" [5].
In this sense, usability is an extremely important aspect of individual Web page and overall Web site design, particularly for the business-oriented Web site.
In discussing e-commerce Web sites, Shacklett asserts that, "Twenty-eight percent of Web site transactions result in consumer failure and frustration....Six percent of users who leave a Web site in frustration say they won't return to the site or patronize the company” [7].
If usability is left unconsidered, then a business is likely to lose customers and miss out on profit opportunities--negatively impacting the cornerstones of any successful business.

WHAT IS WEB PAGE/SITE USABILITY?

Based on the International Standards Organization (ISO) definition of usability, Powell defines Web site usability as "the extent to which a site can be used by a specified group of users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" [6].
Note that this definition applies equally well to Web page usability. It also implies that usability is user and task dependent, as well as being related to how well the user is able to accomplish what they set out to do, how efficiently the user can do this, and how satisfied the user is during and after the process.
..Jakob Nielsen's usability guidelines for determining the usability of a Web site [6]:
• Learnability--How easy is it to learn to use?
• Rememberability--How easy is it to remember how to use?
• Efficiency of use--How much work does it require the user to do?
• Reliability in use--Does it work correctly and does it help users perform tasks correctly?
• User Satisfaction--Is the user generally satisfied as a result of using the site?

McLaughin and Skinner break usability down into six related but distinct components [4]:
• Checkability: The system has or allows checks that ensure the correct information is going in and going out of it.
• Confidence: Users have confidence both in their capability to use the system and in the system itself.
• Control: Users have control over the operation of the system, particularly of the information fed into and out of the system.
• Ease of Use: The system is easy to use.
• Speed: The system can be used quickly.
• Understanding: The system and its outputs are understandable.

In general then, usability is potentially complex and wide ranging, but clearly "user-centered." In evaluating usability, it may be possible to measure each of these components separately or in combination using some form of metric or measure.

-----------
The Usability Perspective Framework.
Tobias Uldall-Espersen. University of Copenhagen, Department of Computing, Njalsgade 8, DK-2300 Copenhagen S. tobiasue@diku.dk

CHI 2008, April 5–10, 2008, Florence, Italy.

Five Usability Perspectives:
1. The interaction object usability perspective
2. The task usability perspective
3. The product usability perspective
4. The context of use usability perspective
5. The enterprise usability perspective.
Figure 1 - The five usability perspectives.

Three Cross-Perspective Themes:
1. Consistency
2. Coherence
3. Fitness.

18 Key Questions:
Figure 2 - Key questions regarding Consistency.
Figure 3 - Key questions regarding Coherence.
Figure 4 - Key questions regarding Fitness.

....Currently the framework is composed of 18 key questions related to five usability perspectives and three themes.

-----------
NIELSEN VERSUS NIELSEN: A USABILITY ANALYSIS OF TELEVISION HOMEPAGES.
AMANDA MCDERMAND, B.A.
A THESIS IN MASS COMMUNICATIONS
Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for the Degree of MASTER OF ARTS.
August, 2006

Usability, as defined by the U.S. Department of Health and Human Services (USDHHS, n.d.), is the quality measurement of a user’s experience when interacting with a user-operated device, including a Web site.
Usability is an issue for all Web sites, and the U.S. Department of Health and Human Services maintains a site dedicated to helping designers to create more user-friendly sites. Many of the recommendations are based on the department’s 2003 study, which resulted in published Web design and usability guidelines for government site designers (USDHHS, 2003).

Jakob Nielsen’s work, which is cited many times in that report, will be used as a guideline for determining usability scores in this study.
Jakob Nielsen is known as the father of Web usability (MacGregor, et al., 2002). ...Critics say his approach to usability is too formulaic and sacrifices aesthetic appeal.
Cloninger (2000) mediated in this debate of usability versus design and explained that usability experts were rational, left-brained, science-oriented, and involved in doing whereas designers were intuitive, right-brained, artsy, and emotional.
These differences naturally placed designers and usability experts on opposite ends of the Web design debate, but both were important in creating a masterful Web site.
Engholm (2002) thoroughly discussed this dispute in relation to design theory, focusing first on the ideals of designers, then the purposes of usability experts. Designers argued for increased focus on graphics, aesthetics, and the entertainment capabilities of the Web while usability experts focused on function and content, with decreased priority given to graphic features. The author stated that both positions were important, since each have made an impression on the design history of the Web. Finally, Engholm negotiated a mediated position advocating a balance between form and function, and used this approach to criticize the development of digital style design.

This research focuses on how homepages score with each other based on established usability guidelines, or heuristics. Heuristic evaluation involves judging an interface, here a homepage, with a recognized usability principle (Head, 1999).

Harpel-Burke (2005) used this method to compare usability scores for library Web sites with corporate site usability statistics presented by Nielsen and Tahir (2002). ...In a similar fashion, these heuristics created by Nielsen and Tahir (2002) will be used to calculate usability scores for the homepages in this study.

----------
USABILITY ANALYSIS OF THE USDA-ARS OGALLALA INITIATIVE WEB SITE.
SHELBY L. AXTELL, B.S.
A THESIS IN AGRICULTURAL EDUCATION
Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE.
May, 2006

The Ogallala Initiative Web site will serve as the database for information pertaining to several audiences affected by the aquifer. ...Since several audiences reference the Initiative Web site, it is important the Web site meet their needs, since “usability rules the Web,” (Nielsen, 2000, p. 9). A site must be usable for Web goers to return.
Nielsen (2000) believes if Web goers cannot maneuver through a Web site quickly, they will leave the site in less than a minute. Design of a Web site is key for people to find information quickly. Nielsen (2000) suggests when selecting design, do not ask which design you like best, but ask which design allows users to obtain information the quickest.

Usability – “that can be used, or fit for use; convenient to use” (Editors of the American Heritage® Dictionaries, 2000); “the degree to which something, i.e. software, hardware or anything else, is easy to use and a good fit for the people who use it; a quality or characteristic of a product; it is whether a product is efficient, effective, and satisfying for those who use it” (Usability Professionals’ Association, 2005).

User preferences – What the people, who use the Initiative Web site, want to see on the Web site, or how they prefer it to be set up, i.e. is the Web site user friendly or not.

Utility – “quality or condition of being useful; usefulness,” (Editors of the American Heritage® Dictionaries, 2000).

Jakob Nielsen is quoted many times throughout this literature review, because he is considered an expert on Web site usability. He has been called “the reigning guru of Web usability,” by Fortune magazine, and “perhaps the best-known design and usability guru on the Internet,” by Financial Times, (“Nielsen: It’s Time for Redesign,” 2004, p. 1).

According to Nielsen (2000), “usability rules the Web,” (p. 9).
Usability is defined in the dictionary as something “that can be used, or fit for use; convenient to use” (Editors of the American Heritage® Dictionaries, 2000, p. 1).
Usability Professionals’ Association (2005) define usability as “the degree to which something, i.e. software, hardware or anything else, is easy to use and a good fit for the people who use it; a quality or characteristic of a product; it is whether a product is efficient, effective, and satisfying for those who use it” (p. 1).

Krug (2000) points out that there is no such thing as an ‘average user.’ Each person viewing a Web site will like or dislike something different; it basically comes down to a matter of personal preference (Krug, 2000). However, usability takes into account what the majority of users like and dislike (Krug, 2000).
Users should be able to leave a site satisfied that they were able to obtain the information they were searching for. According to Krug (2000), a Web site is usable if it doesn’t make you think. Krug (2000) says when he evaluates Web sites, if he has to think, then the site is not easy to use. To Krug (2000) a Web page should be “self-evident, obvious, and self-explanatory” (p. 11). For example, a user should not spend even a millisecond of thought wondering if something on a site is clickable (Krug, 2000).

The dictionary defines efficiency as “the quality or degree of being efficient” (Woolf et al., 1973, p. 362). A Web site must be efficient by serving its purpose to the user, in order for them to return.

Web site usability can be tested through the following categories: general appearance, navigation, efficiency, and content. A Web site is usable if it doesn’t make users think while they are retrieving their information (Krug, 2000).
The main reason users go to a Web site is its content; however, the site must also be aesthetically pleasing and easy to navigate in order to keep return users. In addition, the site must load quickly in the browser and deliver content, graphics, and photos quickly, or users will leave the site.

“Usability is a reality check” (Nielsen, 2005, p. 2). A Web site usability test, according to Nielsen (2005), simply determines what does and does not work for the humans using a site.

----------
USABILITY EVALUATION OF AN ONLINE COTTON MEDIA RESOURCE GUIDE
KIMBERLY MACHELLE COOPER, B.S.
A THESIS IN AGRICULTURAL EDUCATION
Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE.
December, 2006

Effectiveness – The user effort required to achieve the user and domain goal (Cato, 2001).
Efficiency – The accuracy and completeness the user achieves with respect to the goals (Cato, 2001).
Heuristic Evaluation – An evaluation based on a collection of guidelines, principles or rules of thumb (Cato, 2001).
Interface – To act together or affect each other or to make things or people interact.
Satisfaction – the measure of user satisfaction on a number of attributes (Cato, 2001).
Usability – "the degree to which something, i.e. software, hardware, or anything else, is easy to use and a good fit for the people who use it; a quality or characteristic of a product; it is whether a product is efficient, effective, and satisfying for those who use it" (Usability Professionals’ Association, 2005). Measures of efficiency, effectiveness, satisfaction and usefulness (Cato, 2001).
Usefulness – The measure of the value the user places on the product (Cato, 2001).

While it is no mystery that good usability is critical to the success of any Web site, determining what usability is and how to evaluate it in Web sites has been more of a mystery.
Nielsen (1993) defines usability as a combination of the following attributes: satisfaction, errors, memorability, efficiency, and learnability.
Then in 1999, Lee paraphrased the International Standards Organization (ISO) definition by stating, "Web usability is the efficient, effective and satisfying completion of a specified task by any given web user" (p. 38).

For the Web owner to be successful and users to be satisfied, Web sites need to consider usability and other design criteria such as user-centered navigability (Palmer, 2002; Nielsen, 2000).
Nielsen (2000) has extended basic usability principles into the Web environment to include navigation, response time, credibility, and content.
"Usability includes consistency and the ease of getting the Web site to do what the user intends it to do, clarity of interaction, ease of reading, arrangement of information, speed, and layout," (Palmer, 2002, p. 153).

Palmer’s (2002) research applied media richness, design, and usability principles to identify key site characteristics, performance, and usability metrics used to discover elements of successful Web site design. His results suggested focusing on download delay, organization, and navigation, which heavily reflected earlier work in usability (Palmer, 2002; Nielsen, 2000). Other factors identified relevant to successful sites included: (1) sequencing, layout, and arrangement affects navigability; (2) content includes the amount and variety of information; (3) customization and interactivity for the site user; (4) provide opportunity for feedback (Palmer, 2002).

McMahon (2005) claimed "good usability can both save money and make money for a business" (p. 23) by affecting the customer’s intention to buy, boosting sales conversion, and increasing customer loyalty. ...Good usability would affect the customer’s intention to buy into what is being said; visitors would turn into customers who leave the site with their needs met; and a loyal return user would be established.

Errors, satisfaction, memorability, efficiency, and learnability were depicted as contributing factors to general usability in Nielsen’s 1993 model.
In reference to Web sites, errors are most typically associated with links: bad links, broken links, misleading links, disguised links, or disconnected links. However, errors with inconsistency and insecurity also create a loss of trust.
Web site memorability deals directly with the user’s previous experiences with the individual site. It should be easy to remember so that the casual user does not have to relearn the site upon reentrance after a period of time. The site should be "self-evident, obvious, and self-explanatory" (Krug, 2000, p. 11).
Consistency is the key to successful web design and usability. Consistent design and navigation assist the user in learning the site efficiently by making it easy for the user to remember.
Learnability in Web sites is quite different from that in other forms of communication. Krug (2000) asserts his first law of usability is not to make the user think; therefore, learnability should be kept to a minimum. Unless the intended goal of the site is learning the content, the user does not need to learn and retain the information (Zibell, 2000).

----------
Development of Usability Questionnaires for Electronic Mobile Products and Decision Making Methods.
Young Sam Ryu.
Dissertation Submitted to the Faculty of Virginia Polytechnic Institute and State University in Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy in Industrial and Systems Engineering

July 2005
Blacksburg, Virginia

Usability has been defined by many researchers in many ways.
One of the first definitions of usability was “the quality of interaction which takes place” (Bennett, 1979, p. 8).

Shackel (1991) proposed an approach to define usability by focusing on the perception of the product and regarding acceptance of the product as the highest level of the usability concept. Considering usability in the context of acceptance, Shackel provides a definition stating that “usability of a system or equipment is the capability in human functional terms to be used easily and effectively by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the specified range of environmental scenarios”
(Shackel, 1991, p. 24). ...He also proposed a set of usability criteria:
• Effectiveness: level of interaction in terms of speed and errors;
• Learnability: level of learning needed to accomplish a task;
• Flexibility: level of adaptation to various tasks; and
• Attitude: level of user satisfaction with the system.

Shackel also collaborated on a later definition, stating that usability derives from “the extent to which an interface affords an effective and satisfying interaction to the intended users, performing the intended tasks within the intended environment at an acceptable cost” (Sweeney, Maguire, & Shackel, 1993, p. 690).

Another well-accepted definition of usability which received attention from the Human Computer Interaction (HCI) community was offered by Nielsen (1993). He also considers factors which may influence product acceptance. Nielsen does not provide any descriptive definition of usability; however, he provides the operational criteria to define clearly the concept of usability:
• Learnability: ability to reach a reasonable level of performance
• Memorability: ability to remember how to use a product
• Efficiency: trained users’ level of performance
• Satisfaction: subjective assessment of how pleasurable it is to use
• Errors: number of errors, ability to recover from errors, existence of serious errors

Finally, attempts to establish standards on usability have been made by the International Organization for Standardization (ISO). ISO 9241-11 (1998) is an international standard for the ergonomic requirements for office work with visual display terminals and defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use” (p. 2). Additionally, ISO 9241-11 classifies the dimensions of usability to account for the definition:
• Effectiveness: the accuracy and completeness with which users achieve goals
• Efficiency: the resources expended in relation to the accuracy and completeness
• Satisfaction: the comfort and acceptability of use

ISO/IEC 9126 elaborates on three different ways to assess usability. Part 1 (ISO/IEC 9126-1, 2001) provides the definition of usability which distinguishes clearly between the interface and task performance by designating usability as “the capability of the software to be understood, learned, used and liked by the user, when used under specified conditions” (p. 9).
The definition of ISO/IEC 9126-1 presents usability as quality-in-use. With the perception of usability as the product quality, the dimensions of usability indicated in ISO/IEC 9126-1 became
• Understandability,
• Learnability,
• Operability, and
• Attractiveness.
Part 2 (ISO/IEC 9126-2, 2003) includes external metrics using empirical research. Part 3 (ISO/IEC 9126-3, 2003) describes internal metrics which measure interface properties.

Given the descriptive definition of usability, new usability dimensions suggested by recent studies (e.g., aesthetic appeals and emotional dimensions) were blended in as the research progressed to develop the usability questionnaire for mobile products. For example, aesthetic appeal can be considered a sub-dimension of satisfaction, which is one of the main dimensions of ISO 9241-11.

Keinonen (1998) categorized different approaches to defining usability, including usability as a design process and usability as product attributes, which contribute to the establishment of design guidelines.
From the perspective of usability as a design process, usability engineering (UE) and user-centered design (UCD) have been defined and recognized as a process whereby the usability of a product is specified quantitatively (Tyldesley, 1988).

Numerous sets of usability principles and guidelines have been developed by the HCI community, including computer companies, standards organizations, and well-known researchers. Some well-known principles and guidelines they have developed include Shneiderman’s (1986) eight golden rules of dialogue design,
Norman’s (1988) seven principles of making tasks easy,
human interface guidelines by Apple Computer (1987),
usability heuristics by Nielsen (1993),
ISO 9241-10 (1996) for dialogue principles, and
the evaluation check list by Ravden and Johnson (1989).
These references cover many major dimensions of usability including consistency, user control, appropriate presentation, error handling, memory load, task matching, flexibility, and guidance (Keinonen, 1998).

Many research frameworks have been introduced as measures of usability at an operational level according to various usability dimensions (Nielsen, 1993; Rubin, 1994).

There are three different categories of methods to obtain measurements known as the usability inspection method, the testing method, and the inquiry method (Avouris, 2001).

Table 3. Example measures of usability (ISO 9241-11, 1998)
Effectiveness:
- Percentage of goal achieved
- Percentage of tasks completed
- Accuracy of completed task
Efficiency:
- Time to complete a task
- Monetary cost of performing the task
Satisfaction:
- Rating scale for satisfaction
- Frequency of discretionary use
- Frequency of complaints
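For illustration, the Python sketch below (with entirely hypothetical observations) computes one example measure from each row of Table 3 out of raw task data:

from statistics import mean

# Each record: (task_completed, seconds_on_task, satisfaction_rating_1_to_7)
observations = [
    (True, 48, 6), (True, 62, 5), (False, 120, 3), (True, 55, 6), (True, 70, 4),
]

completed = [o for o in observations if o[0]]
effectiveness = len(completed) / len(observations)            # percentage of tasks completed
efficiency = mean(sec for _, sec, _ in completed)             # time to complete a task
satisfaction = mean(rating for _, _, rating in observations)  # rating scale for satisfaction

print(f"Tasks completed: {effectiveness:.0%}")
print(f"Mean time to complete: {efficiency:.0f} s")
print(f"Mean satisfaction rating: {satisfaction:.1f} / 7")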

Subjective usability measurements focus on an individual’s personal experience with a product or system.
Several usability questionnaires were developed by the HCI community, such as the Software Usability Measurement Inventory (SUMI) (Kirakowski, 1996; Kirakowski & Corbett, 1993; Porteous, Kirakowski, & Corbett, 1993),
the Questionnaire for User Interaction Satisfaction (QUIS) (Chin, Diehl, & Norman, 1988; Harper & Norman, 1993; Shneiderman, 1986), and
the Post-Study System Usability Questionnaire (PSSUQ) (Lewis, 1995).

SUMI is the best-known usability questionnaire to measure user satisfaction and assess user-perceived software quality. SUMI is a 50-item questionnaire, each item of which is answered with “agree”, “undecided”, or “disagree”, and is available in various languages to provide an international standard.
Based on the answers collected, scores are calculated and analyzed into five subscales (Kirakowski & Corbett, 1993):
• Affect: degree to which the product engages the user’s emotional responses;
• Control: degree to which the user sets the pace;
• Efficiency: degree to which the user can achieve the goals of interaction with the product;
• Learnability: degree to which the user can easily initiate operations and learn new features; and
• Helpfulness: extent to which user obtains assistance from the product.
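SUMI's actual items and scoring are standardized against a reference database and are not reproduced here; the Python sketch below only illustrates, with invented item-to-subscale assignments and an invented point scheme, how agree/undecided/disagree answers might be tallied into the five subscales:

# Invented item-to-subscale assignment and scoring; SUMI's real items and
# norms are not reproduced here.
SCORES = {"agree": 2, "undecided": 1, "disagree": 0}

SUBSCALE_ITEMS = {
    "Affect": [1, 6], "Control": [2, 7], "Efficiency": [3, 8],
    "Learnability": [4, 9], "Helpfulness": [5, 10],
}

def subscale_totals(answers):
    """answers: dict mapping item number -> 'agree' | 'undecided' | 'disagree'."""
    return {
        name: sum(SCORES[answers[i]] for i in items if i in answers)
        for name, items in SUBSCALE_ITEMS.items()
    }

example_answers = {1: "agree", 2: "undecided", 3: "agree", 4: "disagree", 5: "agree"}
print(subscale_totals(example_answers))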

QUIS was developed at the Human Computer Interaction Laboratory at the University of Maryland, College Park (Chin et al., 1988; Harper & Norman, 1993) based on the scale for “User evaluation of interactive computer systems” introduced by Shneiderman (1986). The most recent publication of QUIS, version 7, incorporates ten different dimensions of usability:
• Overall user reactions,
• Screen factors,
• Terminology and system information,
• Learning factors,
• System capabilities,
• Technical manuals and online help,
• Online tutorials,
• Multimedia,
• Teleconferencing, and
• Software installation.

----------

Michael Yeap
6/9/2009
