Evaluation Research: An Overview
Powell, Ronald R.
Library Trends, June 22, 2006
Comments: This is a preview of the paper.

ABSTRACT

Evaluation research can be defined as a type of study that uses standard social research methods for evaluative purposes, as a specific research methodology, and as an assessment process that employs special techniques unique to the evaluation of social programs. After the reasons for conducting evaluation research are discussed, the general principles and types are reviewed. Several evaluation methods are then presented, including input measurement, output/performance measurement, impact/outcomes assessment, service quality assessment, process evaluation, benchmarking, standards, quantitative methods, qualitative methods, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. Other aspects of evaluation research considered are the steps of planning and conducting an evaluation study and the measurement process, including the gathering of statistics and the use of data collection techniques. The process of data analysis and the evaluation report are also given attention. It is concluded that evaluation research should be a rigorous, systematic process that involves collecting data about organizations, processes, programs, services, and/or resources. Evaluation research should enhance knowledge and decision making and lead to practical applications.

WHAT IS EVALUATION RESEARCH?

Evaluation research is not easily defined. There is not even unanimity regarding its name; it is referred to both as evaluation research and as evaluative research. Some individuals consider evaluation research to be a specific research method; others focus on special techniques unique, more often than not, to program evaluation; and yet others view it as a research activity that employs standard research methods for evaluative purposes. Consistent with the last perspective, Childers concludes, "The differences between evaluative research and other research center on the orientation of the research and not on the methods employed" (1989, p. 251). When evaluation research is treated as a research method, it is likely to be seen as a type of applied or action research, not as basic or theoretical research.

Weiss, in her standard textbook, defines evaluation as "the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy" (1998, p. 4; emphasis in original). While certainly not incorrect, this definition, at least within a library and information science (LIS) context, is too narrow or limited. Wallace and Van Fleet, for example, point out that "evaluation has to do with understanding library systems" (2001, p. 1). As will be noted later in this article, evaluative methods are used for everything from evaluating library collections to reference transactions.

WHY EVALUATE?

Before examining the specific techniques and methods used in LIS evaluation research, let us first briefly consider why evaluation is important and then identify the desirable characteristics of evaluation, the steps involved in planning an evaluation study, and the general approaches to evaluation. With regard to the initial question, Wallace and Van Fleet (2001, pp. xx-xxi) and others have noted that there are a growing number of reasons why it is important for librarians and other information professionals to evaluate their organizations' operations, resources, and services. Among those reasons are the need for organizations to

1. account for how they use their limited resources
2. explain what they do
3. enhance their visibility
4. describe their impact
5. increase efficiency
6. avoid errors
7. support planning activities
8. express concern for their public
9. support decision making
10. strengthen their political position.

In addition to some of the reasons listed above, Weiss (1998, pp. 20-28) identifies several other purposes for evaluating programs and policies. They include the following:

1. Determining how clients are faring
2. Providing legitimacy for decisions
3. Fulfilling grant requirements
4. Making midcourse corrections in programs
5. Making decisions to continue or discontinue programs
6. Testing new ideas
7. Choosing the best alternatives
8. Recording program history
9. Providing feedback to staff
10. Highlighting goals

"Over the past decade, both academics and practitioners in the field of library and information science (LIS) have increasingly recognized the significance of assessing library services" (Shi & Levy, 2005, p. 266). In August 2004 the National Commission on Libraries and Information Science announced three strategic goals to guide its work in the immediate future. Among those three goals was the appraising and assessing of library and information services.

CHARACTERISTICS AND PRINCIPLES OF EVALUATION

Childers (1989, p. 250), in an article emphasizing the evaluation of programs, notes that evaluation research [...]. Wallace and Van Fleet (2001) comment that evaluation should be carefully planned, not occur by accident; have a purpose that is usually goal oriented; focus on determining the quality of a product or service; go beyond measurement; not be any larger than necessary; and reflect the situation in which it will occur. Similarly, evaluation should contribute to an organization's planning efforts; be built into existing programs; provide useful, systematically collected data; employ an outside evaluator/consultant when possible; involve the staff; not be any fancier than necessary; and target multiple audiences and purposes (Some Practical Lessons on Evaluation, 2000).

In an earlier work on the evaluation of special libraries, Griffiths and King (1991, p. 3) identify some principles for good evaluation that still bear repeating:

1. Evaluation must have a purpose; it must not be an end in itself.
2. Without the potential for some action, there is no need to evaluate.
3. Evaluation must be more than descriptive; it must take into account relationships among operational performance, users, and organizations.
4. Evaluation should be a communication tool involving staff and users.
5. Evaluation should not be sporadic but ongoing.
6. Ongoing evaluation should provide a means for continual monitoring, diagnosis, and change.
7. Ongoing evaluation should be dynamic in nature, reflecting new knowledge and changes in the environment.

As has been implied, but not explicitly stated above, evaluation often attempts to assess the effectiveness of a program or service. On a more specific level, evaluation can be used to support accreditation reviews, needs assessments, new projects, personnel reviews, conflict resolution, and professional compliance reports.
Source: http://www.accessmylibrary.com/article-1G1-151440807/evaluation-research-overview.html