A researcher conducts an interview with a woman in Bamako, Mali, as part of an Innovations for Poverty Action baseline health survey
for an evaluation report on the role of user fees and information in health care decisions. Photo credit: Nicolo Tomasell

We’ve been looking back at the Hewlett Foundation’s contributions to the field of impact evaluation in global development, and we’ve been looking ahead to a new approach to funding it. In looking back, we’ve tried to understand how some early investments yielded big things. In looking ahead, we’ve been exploring how this foundation can continue to be on the leading edge of a field that has grown so rapidly.

For us, the impact evaluation story starts in the early 2000s. Around that time, people working in international development were waking up to the potential for rigorous evaluation to shed new light on some persistent questions about what strategies and interventions could reduce disease, improve education, and change lives in other measurable ways.

In Mexico, for example, Santiago Levy and the late Jose Gomez de Leon, working alongside Paul Gertler and other U.S. academics, conducted an impact evaluation of a massive social program, PROGRESA. That evaluation showed the world that scientific evaluation of the impact of an important national policy intervention was possible, and that robust evaluation findings were hard for politicians to ignore.

The pioneering work of academics like Esther Duflo, Michael Kremer, and Ted Miguel, and of development economists at the World Bank, demonstrated repeatedly that randomized controlled trials not only were feasible in difficult field settings like those in Kenya and India, but also yielded insights that no other methods could. At the same time, there was growing frustration with the limitations of standard evaluation practices within development agencies—practices that focused far more on how much was spent and what activities were implemented than on what good was done.

These factors motivated the Center for Global Development’s report of the Evaluation Gap Working Group, When Will We Ever Learn? Improving Lives through Impact Evaluation, which was co-authored by Ruth when she was at CGD. That report argued that organizations seeking to use development dollars effectively should invest in impact evaluation, and that those investments should be pooled to yield the greatest value.

Long before either of us was here, the Hewlett Foundation backed the emerging field of impact evaluation. For instance, in our Mexico portfolio the foundation built on the PROGRESA experience by investing significantly in organizations in Mexico promoting stronger evaluation of social programs. The foundation also was an early funder of the home base for many of the leading academics working in the field, MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL). Along with the Gates Foundation, the Hewlett Foundation supported the Evaluation Gap Working Group hosted by the Center for Global Development, and the then-director of the Global Development Program, Smita Singh, was an active member of that group. Again with our partners at the Gates Foundation, we were a founding and steady supporter of the International Initiative for Impact Evaluation (3ie), established to provide pooled funding for impact evaluations, to encourage use of evaluation findings, and to promote standards for evaluation quality and transparency.

These investments paid off. The 3ie Impact Evaluation Repository now holds 2,648 evaluations, and the number of impact evaluations conducted each year has skyrocketed. As much as $50 million is spent each year on impact evaluations, though this is still a small amount compared to the total spent on aid-funded development projects and minuscule compared to domestic budgets. Countries such as Mexico, Colombia, Benin, South Africa and Uganda have created governmental units dedicated to evaluating public programs, either by commissioning evaluations or by conducting them in-house.

Having seen the fruits of earlier investments, we’re both gratified and unsatisfied. We know we cannot just keep doing more of the same if we’re going to play the role that foundations should: pushing the boundaries of what’s possible.

To understand the current needs and opportunities in the field of impact evaluation, Kristen has spent the last two years talking with grantees, experts in the field and evaluation users; reading the literature; and commissioning three papers on the future of impact evaluation. (These papers were presented by 3ie at a workshop in April, and will be published as working papers in October.)

Our conclusion is that three big challenges remain for the field, and we have an obligation to try to tackle them. The challenges we see are:

  • Lack of bodies of evidence. It doesn’t make sense to base decisions on single studies, but our two-year examination makes clear that there are still too few attempts to test whether programs that are successful in one setting will be equally successful elsewhere; and systematic reviews are rare. As a result, decisionmakers rarely have the full body of evidence they require to make a sound decision about whether to adopt a particular approach.
  • Few individuals have the skills to conduct high-quality impact evaluations. The skills for rigorous evaluation are particularly limited in many developing countries. In practice, those with the strongest evaluation skills may have some knowledge of the local context, but they are not primarily rooted in that context, which constrains their ability to build and sustain relationships with decisionmakers. This limits both the relevance of the evaluation and the ability of the researchers to put research findings into the service of better decisions at the program and policy levels.
  • The incentives of academic researchers rarely match the needs of decisionmakers. While academics conduct studies that yield publishable insights and use the most cutting-edge methods, their interests, timing and means of communication do not always correspond to what’s needed for real-world decisionmaking.

Given this assessment, our future funding for the field of impact evaluation will seek to achieve the following outcomes: First, that impact evaluation practices are responsive to the needs of policymakers and program implementers. Second, that decisions are made based on bodies of knowledge that are designed to inform decision making. Third, that local researchers conduct high-quality, policy- and program-relevant impact evaluations.

As our grantmaking unfolds over the coming months and years, we may explore new ways for researchers and decisionmakers to collaborate. We may test new ways to conduct impact evaluations that are more responsive to the types of questions decisionmakers ask and the timelines they work under. We will likely support new ways to review, summarize and present bodies of evidence to provide more useful information for policymakers. And we may invest in new ways to build evaluation capacities that are responsive to the context in which the evaluations are done.

These new directions represent a re-commitment to the field of impact evaluation—and a re-commitment to making sure that the Hewlett Foundation supports those who are exploring the field’s frontiers. As we go, we will keep watching, keep learning and, yes, keep evaluating.