Saturday, November 27, 2010

20101127 - Evaluation research/method (Patton)

Evaluation Research: an interview with Dr Patton

Interviewed by Lisa Waldick


Comments: Dr Michael Quinn Patton is well known for his evaluation research methods, which is what drew my interest to this interview.


Many development programs are evaluated to determine how effective and useful they are. But how effective and useful are the evaluations themselves? Internationally renowned evaluator Michael Quinn Patton recently came to IDRC (the International Development Research Centre) to discuss his approach to making sure evaluations are useful for decision-makers. Dr Patton is the head of an organizational development consulting business, Utilization-Focused Information and Training. Known for five influential books on evaluation, including Qualitative Evaluation and Research Methods, he was the 1984 recipient of the Alva and Gunnar Myrdal Award from the Evaluation Research Society for "outstanding contributions to evaluation use and practice".



Why is it important to evaluate development programs? Don't people on the ground just intuitively understand what is going on in programs?


Our very processes of taking in information distort reality — all the evidence of social science indicates this. We have selective perception — some of us have rose-coloured glasses, some of us are gloom-and-doomers. We are not neutral; there is an emotional content to information. We need disciplined techniques to be able to stand back from that day-to-day world and really be able to see what is going on. We need approaches to help us stand back from our tendency to have biases, prejudices, and preconceptions.



Can you give an example of how a preconception can influence a project?


There are examples in development that are legendary. One agriculture project grew a bean that cooked faster so it would use less fuel — and therefore partly reduce deforestation. But there was a lot of resistance to adopting it. Part of the resistance was that one of the few times women in this culture were able to socialize with each other was when they were cooking. They didn't want a fast-cooking bean. Those are the kinds of things evaluators see when they go in — they see the things that people can't see because they are too close to it. Evaluation is about standing back and being able to see things through somebody else's eyes.



What distinguishes your approach to evaluation?


One of the ways you can distinguish different evaluation approaches is by what they take as their bottom line for the evaluation. For me, it is the pragmatic use of evaluation findings and the evaluation process. In other words, the evaluation is designed and implemented in a way that really makes a difference to improving programs and improving decisions about programs. So the bottom line in my approach is use — that's the reason my approach is called utilization-focused evaluation.



How do you think the usefulness of evaluations can be increased?


In the timing of the evaluation, for example, it means that you time the findings to match when decisions are really going to be made. A lot of evaluations take place at the end of a project. An evaluation report gets written and it's a very good piece of work. But all the decisions have already been made about the future of the project by the time the evaluation gets done. On paper, it appears to make sense to do an evaluation right at the end of the project to try to capture everything that's gone on. But it turns out not to be useful to do that. Everything that is going to be decided about the future of a program gets decided before the end of the program.


Comments: Evaluation at the end of a program is summative evaluation. Formative evaluation should also be done so that improvements can be made while the program is running and the findings can be used to support decision-making about the program's future.



How can evaluations inform decision-making?


By knowing what the questions are that the decision-makers bring to a project. So, for example, you have to know: is consideration being given to expanding the project from one part of the world to another — to adapt the intervention to a new ecosystem or a new group of people? Or do decision-makers already know that resources are declining and the real question is: Can we do more with less? Knowing what the decision context is lets you gather data that is relevant. A lot of evaluations get designed generically. When decision-makers get them, the response is: "Well, that's interesting, but it doesn't help me with my decision. It doesn't answer my question."



What do you see as the difference between research and evaluation?


There's a whole continuum of different kinds of evaluation and different kinds of research. However, on the whole, the purpose of evaluation is to produce useful information for program improvements and decision making. And the purpose of research is to produce knowledge about how the world works.

Because research is driven by the agenda of knowledge production, the standards for evidence are higher, and the time lines for generating knowledge can be longer. In evaluation, there are very concrete deadlines for when decisions have to get made, for when program action has to be taken. It often means that the levels of evidence involve less certainty than they would under a research approach and that the time lines are much shorter.



If you don't have the highest possible levels of evidence in the evaluation, isn't there a risk of making bad decisions?


In the real world, you don't have perfect knowledge and decisions are going to get made anyway. When a program is coming to an end and a decision has to get made about it, the decision is going to get made whether or not you have perfect knowledge. If you are saying: "No, don't decide now. Wait until I have perfect knowledge", the train is going to pass. The reality is that it's better to have some information in a timely fashion than to have perfect information too late to get used.



What is participatory evaluation? What are its advantages?


Participatory evaluation means involving people in the evaluation — not only to make the findings more relevant and more meaningful to them through their participation, but also to build their capacity for engaging in future evaluations and to deepen their capacity for evaluative thinking.

So, let's say you want to do a serious evaluation and you are trying to decide whether or not to have an external person do it or to do it internally. The external person may do a very good job of generating findings for you. But all the things that they learn about how to do evaluation, they take away with them. If you do the evaluation with counterparts in the countries where you are working, then they get the opportunity not only to generate findings and know where those findings come from, but also to learn about evaluative thinking.



What is evaluative thinking?


Evaluative thinking includes a willingness to do reality testing, to ask the question: How do we know what we think we know? It means using data to inform decisions — not making data the only basis of decisions, but bringing data to bear on them. Evaluative thinking is not limited to evaluation projects; it's not even limited to formal evaluation. It's an analytical way of thinking that infuses everything that goes on.



What is the hardest thing to teach about evaluation?


The hardest thing I find to teach is how to go from data to recommendations. When you are doing an evaluation, you are looking at what has gone on — a history. But when you write recommendations, you are a futurist. Evaluations can help you make forecasts, but future decisions are not just a function of data. Making good, contextually grounded, politically savvy and do-able recommendations is a sophisticated skill. A great evaluator can really show the strengths and weaknesses in a program and can gather good, credible data about what is working and not working. But that doesn't mean that they know how to turn that information into recommendations.

I actually prefer to involve the primary decision-makers who are going to use the evaluation in generating their own recommendations through a process of facilitation and collaboration. I encourage them to look at the data, consider the options, and then come up with their own recommendations in a context that includes their values, experience, and resources.



When is it good not to evaluate a project or program?


You can overdo evaluation to the point where people simply get sick of it. There are also times of crisis when you need to take action rather than study the questions. I've seen projects go down the tubes while people were studying and evaluating when in fact they needed to take action.



What do you think is the most important key to evaluation?


It is being serious, diligent, and disciplined about asking the questions, over and over: "What are we really going to do with this? Why are we doing it? What purpose is it going to serve? How are we going to use this information?" This typically gets answered casually: "We are going to use the evaluation to improve the program" — without asking the more detailed questions: "What do we mean by improving the program? What aspects of the program are we trying to improve?" So a focus develops, driven by use.



Source: http://www.evaluationwiki.org/index.php/Michael_Quinn_Patton

1 comment:

  1. He is the author of six evaluation books, including a 4th edition of "Utilization-Focused Evaluation" (Sage, 2008) and "Qualitative Research and Evaluation Methods" (3rd edition, 2002). Previous editions of these books have been used in over 300 universities worldwide. He is also the author of "Creative Evaluation" (2nd ed., Sage, 1987), "Practical Evaluation" (Sage, 1982), and "Culture and Evaluation" (editor, Jossey-Bass, 1985). He is co-author of "Getting to Maybe: How the World Is Changed" (Random House Canada, 2007), which applies complexity theory to social innovation and presents developmental evaluation for evaluating innovations.
