The Hewlett Foundation Blog
August 8, 2014 — By Ruth Levine
It’s easy to come up with reasons not to evaluate the impact of a program: We don’t have enough money. There’s no time to collect baseline data because we’ve got to get the program up and running. The pathways from what we’re doing to impact are so complex we can’t trace causation. It’s unethical to withhold this program from people, even if by doing so we’d be able to compare the experience of participants with non-participants and estimate the program’s net impact.
Over the years, I’ve tried, along with others, to overcome these excuses because they stand in the way of learning how to use precious dollars to do the greatest good.
But while there are many bad reasons to avoid an evaluation, there is one very good reason, and it’s not recognized often enough: If the findings from the evaluation won’t make a difference, don’t do it.
Here’s the set of questions that together can help us all figure out if an evaluation might make a difference:
What are the decisions that the findings from the evaluation could inform? Sometimes these have to do with renewing or increasing funding for the program, and sometimes they have to do with program design or organization. These future decisions should drive the evaluation questions. If no one can articulate what decisions will be made, it’s time to question whether an evaluation is worth it.
Are those decisions going to be based on evidence about program effectiveness? Funders have diverse motivations. Not all of those have to do with whether a program is achieving its stated goals. If a program is going to be expanded or killed regardless of what the data say, then think hard about the value of doing an evaluation at all—or at least make sure it’s done so transparently that decision makers have a hard time ignoring the facts.
When are those decisions going to be made? Far too often, the evaluation is started so late or will take so long that the findings will be available only after funding and other decisions have been made. The evaluation report ends up as an awkward afterthought.
Can the evaluation change anyone’s mind? Both funders and implementers usually have strong views about the effectiveness of a program based on experience, observation, or gut feelings. Sometimes the label “success” or “failure” has already been applied. If the data and methods for the evaluation are not strong enough to potentially change the minds of decision makers, what’s the point? Either invest in more robust methods or drop the ceremony of evaluation altogether.
If these questions were applied systematically and early in program design and implementation, we’d have more good and useful evaluations—ones that are well-timed and use appropriate methods. We’d have better clarity about the purpose of the evaluations we conduct. The timing and methods would match the needs of the decision makers, and greater transparency could guard against political influences. At the same time, we’d end up with fewer evaluations that are purely symbolic.
Can evaluations lead to better use of money? Yes. Will they if the conditions are not right? Absolutely not.