August 14, 2015 — By Kristen Stelljes and Ruth Levine
A researcher conducts an interview with a woman in Bamako, Mali, as part of an Innovations for Poverty Action baseline health survey for an evaluation report on the role of user fees and information in health care decisions. Photo Credit: Nicolo Tomasell
We’ve been looking back at the Hewlett Foundation’s contributions to the field of impact evaluation in global development, and we’ve been looking ahead to a new approach to funding it. In looking back, we’ve tried to understand how some early investments yielded big things. In looking ahead, we’ve been exploring how this foundation can continue to be on the leading edge of a field that has grown so rapidly.
For us, the impact evaluation story starts in the early 2000s. Around that time, people working in international development were waking up to the potential for rigorous evaluation to shed new light on some persistent questions about what strategies and interventions could reduce disease, improve education, and change lives in other measurable ways.
In Mexico, for example, Santiago Levy and the late Jose Gomez de Leon, working alongside Paul Gertler and other U.S. academics, conducted an impact evaluation of a massive social program, PROGRESA. That evaluation showed the world that scientific evaluation of the impact of an important national policy intervention was possible, and that robust evaluation findings were hard for politicians to ignore.
The pioneering work of academics like Esther Duflo, Michael Kremer, and Ted Miguel, and of development economists at the World Bank, demonstrated repeatedly that randomized controlled trials not only were feasible in difficult field settings like those in Kenya and India, but also yielded insights that no other methods could. At the same time, there was growing frustration with the limitations of standard evaluation practices within development agencies—practices that focused far more on how much was spent and what activities were implemented than on what good was done.
These factors motivated the Center for Global Development’s report of the Evaluation Gap Working Group, When Will We Ever Learn? Improving Lives through Impact Evaluation, which was co-authored by Ruth when she was at CGD. That report argued that organizations seeking to use development dollars effectively should invest in impact evaluation, and that those investments should be pooled to yield the greatest value.
Long before either of us was here, the Hewlett Foundation backed the emerging field of impact evaluation. For instance, in our Mexico portfolio the foundation built on the PROGRESA experience by investing significantly in Mexican organizations promoting stronger evaluation of social programs. The foundation was also an early funder of the home base for many of the field’s leading academics, MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL). Along with the Gates Foundation, the Hewlett Foundation supported the Evaluation Gap Working Group hosted by the Center for Global Development, and the then-director of the Global Development Program, Smita Singh, was an active member of that group. Again with our partners at the Gates Foundation, we were a founding and steady supporter of the International Initiative for Impact Evaluation (3ie), established to provide pooled funding for impact evaluations, to encourage use of evaluation findings, and to promote standards for evaluation quality and transparency.
These investments paid off. The 3ie Impact Evaluation Repository now holds 2,648 evaluations, and the number of impact evaluations conducted each year has skyrocketed. As much as $50 million is spent each year on impact evaluations, though this is still a small amount compared to total spending on aid-funded development projects and minuscule compared to domestic budgets. Countries such as Mexico, Colombia, Benin, South Africa and Uganda have created governmental units dedicated to evaluating public programs by commissioning or conducting evaluations themselves.
Having seen the fruits of earlier investments, we’re both gratified and unsatisfied. We know we cannot just keep doing more of the same if we’re going to play the role that foundations should: pushing the boundaries of what’s possible.
To understand the current needs and opportunities in the field of impact evaluation, Kristen has spent the last two years talking with grantees, experts in the field and evaluation users; reading the literature; and commissioning three papers on the future of impact evaluation. (These papers were presented by 3ie at a workshop in April, and will be published as working papers in October.)
Our conclusion is that three big challenges remain for the field, and we have an obligation to try to tackle them. The challenges we see are:
Lack of bodies of evidence. It doesn’t make sense to base decisions on single studies, but our two-year examination makes clear that there are still too few attempts to test whether programs that are successful in one setting will be equally successful elsewhere; and systematic reviews are rare. As a result, decision-makers rarely have the full body of evidence they require to make a sound decision about whether to adopt a particular approach.
Few individuals have the skills to conduct high-quality impact evaluations. Skills for rigorous evaluation are particularly scarce in many developing countries. As a result, the researchers with the strongest skills may know the local context but are not rooted in it, and their ability to build and sustain relationships with decisionmakers is limited. This limits both the relevance of the evaluation and the ability of the researchers to put findings into the service of better decisions at the program and policy levels.
The incentives of academic researchers rarely match the needs of decisionmakers. While academics conduct studies that yield publishable insights and use the most cutting-edge methods, their interests, timing and means of communication do not always correspond to what’s needed for real-world decisionmaking.
Given this assessment, our future funding for the field of impact evaluation will seek to achieve the following outcomes: First, that impact evaluation practices are responsive to the needs of policymakers and program implementers. Second, that decisions are made based on bodies of knowledge that are designed to inform decisionmaking. Third, that local researchers conduct high-quality, policy- and program-relevant impact evaluations.
As our grantmaking unfolds over the coming months and years, we may explore new ways for researchers and decisionmakers to collaborate. We may test new ways to conduct impact evaluations that are more responsive to the types of questions decision makers ask and the timelines they work under. We will likely support new ways to review, summarize and present bodies of evidence to provide more useful information for policymakers. And we may invest in new ways to build evaluation capacities that are responsive to the context in which the evaluations are done.
These new directions represent a re-commitment to the field of impact evaluation—and a re-commitment to making sure that the Hewlett Foundation supports those who are exploring the field’s frontiers. As we go, we will keep watching, keep learning and, yes, keep evaluating.
As the dust settles from the United Nations Third International Conference on Financing for Development, it’s time for development professionals to celebrate the good things that were announced -- for example, the announcement that country, civil society and private foundation champions have committed to forming a Global Partnership for Sustainable Development Data -- and get back to the hard work of figuring out how to turn the “announceables” into tangibles. Fortunately, my fellow data revolutionaries at Open Data Watch have produced FIVE new resources to help us out:
The Data Impacts project. This is a must-read for anyone interested in how data actually gets used to improve our lives. The site has 16 case studies of times when data -- traditional and new -- have been used to change policy and practice, and one cautionary case where the data was available but was not used, with tragic results. Not only are the case studies great, but the report Data Revolutions: How Information Turns into Impact pulls together valuable lessons for data revolutionaries thinking about how to make data more usable.
Data Revolution FAQs. As a data revolutionary myself, I often find myself trying to explain what the data revolution is to people. Is it about big data? Official statistics? Open data? All of the above? How can I get involved? Now we have a handy guide -- the Data Revolution FAQs -- that explains not only what the data revolution is, but also what role we can all play in making it happen.
Partnerships and Financing for Statistics. As we consider how best to support statistical capacity building, it makes sense to learn from the lessons of the past. In Partnerships and Financing for Statistics: Lessons Learned from Recent Evaluations, Open Data Watch summarizes the lessons from 27 evaluations of statistical capacity programs from 2000-2014, looking at national, regional and international programs. The lessons and recommendations are not likely to be new ideas, but they are things we need to be reminded of constantly and figure out how to do better, such as:
Ensuring country ownership and aligning with country priorities;
Meaningfully involving developing-country stakeholders in the design of international programs; and
Coordinating support from donors and international organizations, and making that support predictable.
Notable for data revolutionaries are the report’s lessons on the importance of focusing on data users and their needs, so that efforts to improve data lead to more informed, hopefully better, decisions. The report also finds greater impact when the introduction of new methods or technology is paired with technical support and advice to those countries that want to adopt them. Open Data Watch tops it off with good recommendations for designing programs to strengthen national statistics systems and build statistical capacity, including building evaluations in from the beginning.
Aid for Statistics. A great companion piece to the lessons learned is ODW’s inventory of what funds are already out there in Aid for Statistics: An Inventory of Financial Instruments. The inventory estimates that $143 million per year in aid goes to support statistics through over 30 instruments. Despite the number of vehicles and the size of the funding, the report identifies several gaps that need to be filled, including the need to collect more gender-disaggregated data, increase data access and use, improve data literacy, and support more data innovation. The World Bank announced at the Financing for Development conference that it will launch a new trust fund for data innovation to fill this last gap. The Aid for Statistics report recommends that donors pool funds to provide more stable and predictable support for statistics, and that the funds themselves work to harmonize their efforts and encourage learning. There is also a need for donors to make the data on the aid they give for statistics more open.
At the Hewlett Foundation, we believe that the data revolution should be a big tent where data experts and users of all stripes come together to swap ideas and form new and interesting partnerships. And there’s one group in particular that I think could have a surprisingly important role to play within this big tent: demographers.
One reason why the data revolution is so “revolutionary” is that it gives different groups opportunities to work together, often for the first time. It’s a chance for citizens and civil society to talk to experts in health and education about what sort of data to collect, how to use what they already have, and how best to share it so they can monitor service quality. It’s a chance for data advocates to discuss with multilaterals what the norms and standards for sharing data should be. It’s a chance for holders of data about our cell phone calls to talk with statisticians in national statistics offices about how “big data” can complement official data. And it’s a chance for all of these groups to benefit from a demographer’s perspective and expertise.
Luckily, demographers have already come into the data revolution’s big tent. The first sign of this was the release of the International Union for the Scientific Study of Population’s (IUSSP) Defining and Successfully Accomplishing the Data Revolution: The perspective of demographers (Ruth Levine blogged about it last October). And in the last couple of weeks, I’ve seen how the demography community has been busy putting its ideas into action. At the Cartagena Data Festival, IUSSP teamed up with the UN Sustainable Development Solutions Network for a side event looking at the design and monitoring of the Sustainable Development Goal (SDG) indicators. Demographers shared the stage with experts from Open Data Watch, the UN Foundation, and PARIS21, among others. The panelists discussed what it would take to ensure the SDGs are measurable, valid, and useful, including building capacity over time. You can read more about monitoring demographic indicators for the SDGs in Stephane Helleringer’s recent report.
The very next week at the 2015 Population Association of America (PAA) annual conference, IUSSP brought the Data Revolution conversation to the broader demography community in a brainstorming session and a panel presentation. The session connected demographers and other population experts—from academia to a national statistics office to the UN Population Fund—to talk about what demographers can contribute to the data revolution. It turns out it’s quite a lot. Tom Moultrie from IUSSP and the University of Cape Town laid out three main areas where demographers can contribute: data quality, access and availability, and developing and enhancing institutional capacity.
Data quality: In one of the meetings at PAA, someone said you can’t fool a demographer (well, that’s not quite what he said, but this is a family-friendly blog). Demographers know how to interrogate data and get it to spill its secrets. This skill is going to be very important as we start using new kinds of data, like big data and citizen-generated data, and merging it with more traditional kinds like survey and census data. It’s also going to be important to know what the data is telling us (and what it isn’t) as we use more of it in decision making.
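To make "interrogating data" concrete: one classic demographic data-quality check, which the post does not mention by name but which illustrates the point, is Whipple's index of age heaping. It detects the common tendency of survey respondents to round their reported ages to numbers ending in 0 or 5. A minimal sketch, with made-up counts:

```python
def whipples_index(age_counts):
    """Whipple's index of age heaping over the conventional 23-62 range.

    A value near 100 means no preference for ages ending in 0 or 5;
    values approaching 500 mean nearly all reported ages end in 0 or 5.
    """
    in_range = {age: n for age, n in age_counts.items() if 23 <= age <= 62}
    total = sum(in_range.values())
    heaped = sum(n for age, n in in_range.items() if age % 10 in (0, 5))
    return 100 * 5 * heaped / total

# A perfectly uniform age distribution shows no heaping.
uniform = {age: 10 for age in range(23, 63)}
print(whipples_index(uniform))  # → 100.0

# Everyone reporting a round age is the extreme case.
print(whipples_index({30: 100}))  # → 500.0
```

Checks like this matter even more as traditional survey and census data get merged with big data and citizen-generated data, where error patterns are less well understood.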
Access and availability: Opening up data so it can be used and understood by more people is an important part of the data revolution. That’s easier said than done, of course. We need people who know how to curate large amounts of data and get it in shape to be used responsibly by others, including proper levels of anonymization and metadata.
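The "proper levels of anonymization" mentioned above can also be made concrete. One standard disclosure-control measure, offered here as an illustration rather than anything the post prescribes, is k-anonymity: every combination of quasi-identifying attributes in a released dataset should be shared by at least k records. The column names below are hypothetical:

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Smallest number of records sharing any one combination of the
    quasi-identifier values; a table is k-anonymous when this is >= k."""
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(counts.values())

# Hypothetical microdata: one person is unique on (age_band, district),
# so releasing both columns as-is risks re-identification.
rows = [
    {"age_band": "20-29", "district": "A"},
    {"age_band": "20-29", "district": "A"},
    {"age_band": "30-39", "district": "B"},
]
print(k_anonymity(rows, ["age_band", "district"]))  # → 1
```

A curator would respond to a low k by coarsening or suppressing quasi-identifiers before release, which is exactly the kind of judgment call that requires people who know the data well.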
Developing and enhancing institutional capacity: Making good on the data revolution is going to involve a lot of capacity building—from measuring progress against the SDGs to rolling out civil registration and vital statistics systems to strengthening national statistical systems. This will not be done overnight and will require not only increasing the number of people across disciplines, but also ensuring that people working on these issues have a keen understanding of data and aren’t fooled by misleading information. It will also mean helping people to expand their repertoire to include new types of data.
As the data revolution moves along and more and more people come into the big tent, the chances grow for unexpected relationships to develop and for us to learn from each other. For me, it would be great if through this process we all get the phone number of a demographer or two we can call with questions—and a big data expert, and a statistical capacity-building expert, and an open data expert, and on and on. So, at the next data revolution meeting, be on the lookout for a demographer to befriend. I expect we’ll be seeing more of them.
The Data Revolution for Sustainable Development has brought all kinds of data nerds out of the woodwork. Enthusiasts are talking about the need for better data, using data to make better decisions, and how best to take advantage of all types of data, from big to small, global to hyperlocal. The Cartagena Data Festival, which took place last month, was a chance for these so-called “revolutionaries” to stop talking about what the Data Revolution could be and start planning for how to get there. Plenaries and panels brought together the best thinkers from the private sector, civil society, and government, often on the same stage, and dynamic formats such as fishbowls and ignite presentations kept things moving, while bringing in many different perspectives. (You can decide for yourself whether it lived up to Sarah Lucas’ description of it as ‘best conference of a lifetime’ by checking out the archived videos of some sessions).
Though you could have made a fortune selling "I <3 Data" shirts outside the venue, the passionate participants were also frank about the limitations of data. Many talked about how to use data responsibly and focused on how to improve usability. As Kate Higgins from CIVICUS’ DataShift project said, “Data doesn’t change the world. People change the world.”
Luckily, the Data Revolution has both data and people. And in Cartagena, it had people so serious about using data to change the world that a group of them gave up their free time to stay for an informal brainstorming session. The vision for the data ecosystem in the short and medium term that came out of that discussion included these themes:
Take collective action. There are some things we can only do together, like developing global norms, standards, and principles—to make sure data is used responsibly and that it can be open and more easily shared among the private sector, government, civil society, and researchers. Collective action means jointly identifying real world problems that data can help solve and bringing together multi-sectoral teams to solve them. It also means understanding how data ecosystems at the local, national, regional, and global levels interact and who the players are at each level.
Demonstrate impact. We need to find data champions to make the case for investing in data and for using it for decision making—and we need to build on successes in addressing problems as the glue that will hold the diverse players in the ecosystem together.
Increase access and equity. We need to increase access to data and ensure equitable access. This will require developing tools to support data use, especially for those who are not data experts.
Build trust. To be successful, the Data Revolution needs to bring together data producers and users who don’t typically work together. This requires building trust between and among these various groups: national statistics offices, big data producers, citizens and others. Importantly, trust also requires allowing for experimentation and failure to encourage learning and sharing.
The Cartagena Data Festival was a great place to get started on turning this vision into a reality, but it was only a one-off event. So how do you continue to bring this brilliant, diverse data community together? One idea, originally suggested in the UN Data Revolution Group’s A World that Counts report, is a World Forum on Sustainable Development Data—and Cartagena could be a model for it. But a World Forum should also go beyond Cartagena to:
Expand the tent even further. In addition to the amazing community that gathered in Cartagena, more people from the private sector, government policymakers, and Asian data communities should participate in future events.
Tackle real challenges. The Cartagena Data Festival started to address this with the Data Capsule session, where people rolled up their sleeves and used data to look at how public security could be enhanced in Colombia. A World Forum could be a place where data communities across geographies, sectors, and data types could come together to work on problems that we can only solve together—things like how to collect accurate survey data more quickly and cheaply; how to integrate new and traditional sources of data to get a more complete and timely understanding of development challenges; and how to build data literacy from citizens to decision makers.
Make commitments. The Data Revolution needs all of us to stand up and say how we can move it forward in our roles as individuals and institutions. This calls for tangible commitments about what data will be released, how it will be used, how privacy and other protections will be managed, and what resources will be committed.
Celebrate achievements. It’s not a choice between investing in data and investing in development. We need to invest in data for development to make better decisions and reap potential cost savings. The International Center for Tropical Agriculture and the Colombian Rice Growers Federation created a computer model using data on crop growth and weather patterns to advise farmers which crops to plant. In 2013 it saved farmers from wasting US$3.8 million on seeds and agricultural inputs during a drought. The Forum needs to bring this story, and many others like it, to front page news.
Stay connected. Events like the Cartagena Data Festival are always energizing. You can go for weeks afterwards on the high of meeting new people and getting new ideas. But we need to find ways to maintain the momentum between opportunities for face-to-face meetings. Whether in smaller groups that work together on focused tasks or by taking advantage of technology to keep in touch, we need to go into the first World Forum with a plan for staying connected.
There are already plans underway to hold the first World Forum for Sustainable Development Data in Spring 2016. Call me a data nerd, but I, for one, cannot wait.
It’s an exciting time to be a data lover. As Rachel’s post from last week on post-2015 and the data revolution describes, if the revolution becomes a reality we will have more and better data about what’s happening in developing countries. Even better, that data will increasingly be made available to citizens so they can hold their governments accountable for delivering on promises of development. This data won’t just be a single number for the whole nation. Ideally it will be disaggregated by gender, geography, and socio-economic status so countries can better understand who is receiving services, know who’s benefiting from development, and make sure that no one is left behind. The idea that the post-2015 framework will be universal is also making people think about measurement in new ways. Countries like the United States will be asked to report on their progress towards the sustainable development goals, just like developing countries.
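What "disaggregated by gender, geography, and socio-economic status" means in practice can be sketched in a few lines. The records, field names, and service indicator below are all hypothetical, invented purely to show how a single national figure can hide differences between groups:

```python
from collections import defaultdict

def disaggregate(records, dimension, indicator):
    """Report the average value of an indicator within each group
    defined by a dimension, instead of one national figure."""
    groups = defaultdict(list)
    for r in records:
        groups[r[dimension]].append(r[indicator])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Hypothetical survey records: does the household have access to a service?
records = [
    {"gender": "F", "region": "north", "has_access": 1},
    {"gender": "F", "region": "south", "has_access": 0},
    {"gender": "M", "region": "north", "has_access": 1},
    {"gender": "M", "region": "south", "has_access": 1},
]

national = sum(r["has_access"] for r in records) / len(records)
print(national)  # → 0.75
print(disaggregate(records, "gender", "has_access"))  # → {'F': 0.5, 'M': 1.0}
```

The national average of 75% looks respectable, but the breakdown shows one group at only 50% coverage, which is precisely the "who is being left behind" question that disaggregated data lets countries answer.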
Local think tanks and research institutions in Bangladesh, Canada, Ghana, Peru, Senegal, Sierra Leone, Tanzania, and Turkey are participating in the study to see what data is available and examine its quality. They will talk with representatives from government, civil society, academia and the media to find out what improvements need to be made in the accessibility and transparency of data, as well as the potential for technology-enabled and non-traditional modes of data collection. The teams will be testing the feasibility and relevance of potential ‘zero’ targets (eliminating extreme poverty) or ‘global minimum standard’ targets (providing free and universal legal identifiers, such as birth registration). They will be examining the challenges of measuring (and implementing) a universal but country-relevant post-2015 framework for data that covers the following goal areas: Poverty; Employment and Inclusive Growth; Governance and Human Rights; Environmental Sustainability and Disaster Resilience; Global Partnership for Sustainable Development; Energy and Infrastructure; and Education.
IPAR (Initiative Prospective Agricole et Rurale) has already started this process in Senegal. Launches of the Data Test have also been held in Bangladesh by CPD and in Canada by NSI, as well as in other countries. The country teams met in Nairobi at the end of last month to share what they have learned so far.
Country teams will be collecting data through July and providing updates about their findings; we will be linking to their posts as they come out. Once the teams are done with their research, they will come together to share what they learned and draw lessons that can help inform the selection of targets for the post-2015 framework. You (and data lovers everywhere) will be hearing a lot from them—in blog posts, In-Progress Notes, via reports and at meetings—over the course of the next year.