In a world where both resources for policy research and the attention spans to take it in are finite—that is, in the real world—less can definitely be more. Less research can mean that there’s more money and time for the activities other than data collection, analysis, and synthesis—more time and money, that is, for the activities that can make the difference between the proverbial “report on a shelf” and the study that informs better policy decisions in a meaningful way. Less sophistication can mean more understanding: Asking simple, straightforward questions and using descriptive data can provide just the type of information that decision makers understand and value. Less verbiage can mean more reading: writing shorter documents focused on questions the intended audience actually wants the answers to can dramatically increase the likelihood that they will be read.
Within the Global Development and Population Program, we support economists and other social scientists at universities and think tanks who are pursuing important research questions: Which economic growth path will disproportionately benefit the poor, women, and those in the informal economy? What types of health service delivery can most effectively reach young people with vital reproductive health care? How can countries avoid the “resource curse” to obtain broad benefits from extractive industries? As the research is designed and conducted, whether the studies are one-country case studies, multi-country data analysis, or experimental impact evaluations, the researchers make thousands of decisions about the scope, methods, and means of communicating results. In each grant proposal, and in our ongoing relationships with researchers, we try to see whether those decisions are made with an eye toward getting the greatest possible value out of the investment in research.
The tradeoff we see most often is between doing more research or spending more time on engagement with journalists, advocates, policymakers, and others who might interpret and use the findings. In general, researchers maximize budgets and schedules for the research itself, and shortchange the activities that help ensure their research is relevant to current policy debates and shared in the many venues and formats needed to achieve real impact beyond any research publication. As funders, we frequently ask how members of the policy community will be engaged from the outset and how the work will be disseminated in a way and at a time that corresponds to the intended audiences’ needs. Too often, the answers are vague, with little evidence that the proposed budget could accommodate the significant amounts of labor, travel, coffee-and-sandwiches, and other quotidian, essential costs for policy outreach.
We also often see researchers reaching to explore ever more nuanced policy questions and applying sophisticated econometric and other abstruse techniques. It’s impressive, and may be just the ticket to get the resulting paper into a prestigious journal (or at least into a years-long cycle of revising-and-resubmitting). But more often than not the analyses that serve policy audiences are those that simply and compellingly bring to light facts about the conditions of people’s lives, the quality of public services, and the potential costs or savings from a particular government program. That is, the studies that present descriptive and basic analytic results in straightforward ways that connect to specific policy domains and decisions—the kind that a technocrat in the Ministry of Health, Education, Planning, or Finance might need to come up with a better program design and stronger budget request.
The final way that less can be more is in the presentation of findings. For their academic and think tank peers, policy researchers feel compelled to “show their work,” sharing all the details of study design, conceptual framework, analytic approach and—where they sometimes lose even me—the multiple specifications of the multivariate models they tried before landing on the “right” one. This adds up to far too much information for policy audiences, who are likely to tune out at the first mention of “sample size” and “statistical power.” What most people in the advocacy and policy community want to know is: Why is this important? What did you find? How does this fit into what we know from other sources (i.e., does this challenge or confirm conventional wisdom)? And, crucially, so what? What should we do differently now that we have these results? Researchers who can answer those questions succinctly and precisely are able to attract and sustain attention—and win our admiration for their communication skills.
As I’ve written before, we believe in the power of evidence to improve people’s lives, and we commit millions of dollars to specific studies and to policy research institutions. We want those dollars, in aggregate, to have the greatest impact they can—but that won’t happen until the researchers themselves do less to do more.
The Open Government Partnership is building a new way for citizens and the people who represent them to work together to solve real-world problems—and what I like best about it is that just about everyone who’s involved is simultaneously a believer and a skeptic. Their eyes are open and they’re learning as they go.
But I’m getting ahead of myself. The Open Government Partnership (OGP), which doesn’t get as much attention as many other international initiatives—or as much as it deserves—will hold its global summit next week in Mexico City. Launched in September 2011 by the presidents of the United States and Brazil, the OGP describes itself as “an international platform for domestic reformers committed to making their governments more open, accountable, and responsive to citizens.” Governments and civil society organizations in member countries—membership has grown from 8 countries at its founding to 66 today—work together to develop, and then implement and report on, joint action plans to increase public sector openness. This takes many forms, depending on the starting point and a particular country’s circumstances: in many countries, action plans include adopting and implementing high standards for fiscal transparency; in some, they focus on increasing the ability of citizens to participate in decision making about how governments allocate resources.
The OGP has three features that are core to how it seeks to nudge the world to a better place: First, it adheres to strict eligibility criteria for membership, such as the existence of an access to information law. In concept, the lure of joining a good governance club creates an incentive for greater transparency. Second, recognizing that transparency and participation are challenges around the world, OGP seeks to include and be relevant to all countries, not only those in the Global South. This leads to lots of mutual support and learning, and obviates the risk that the partnership will look like old-fashioned “we judge you / we fix you” donor-designed initiatives. Third, and perhaps most importantly, it is one of the only vehicles for civil society and governments to negotiate, plan, and report on progress together, primarily within national borders but also at the international level. This creates the conditions for a collaborative, rather than solely adversarial, relationship between at least some civil society actors and representatives of the public sector. And though the OGP was conceived as a platform for national policy formulation and implementation, cities now are getting into the act: leading municipal governments are developing their own action plans to solicit and embrace greater civic participation at the most local levels.
The Open Government Partnership is based on the idea that civil society and governments can, with the right incentives and processes in place, work together constructively to both accelerate openness and, ultimately, solve thorny governance challenges. This is appealing on paper, but not a simple concept to bring to life. Governments have vastly greater resources than civil society organizations, and can—if they wish—use the power of the state to restrict the ability of citizens to act collectively. Even when both parties act in good faith, the work requires new skills and tools. Civil society leaders and public officials have to figure out each other’s vocabulary, rhythm of work, and decision-making processes. They also have to develop mutual respect and trust, a willingness to not just say the word “partnership,” but practice it. Particularly for civil society organizations, this means a redefinition of tactics and measures of success—from argument to conversation, but never to capitulation.
Everyone involved understands that this is a big, bold experiment, based on a set of assumptions that may turn out to be right—or wrong. And they are observing as they participate, encouraging each other to share what they’re learning along the way.
Among the many civil society organizations whose engagement in the Open Government Partnership we and other foundations support, we see full and enthusiastic participation in this experiment. Those who are working with government are often acting in creative ways to further the OGP ideals—and, at the same time, are tracking what’s working and whether the process of collaboration is genuine or a sham. OGP participants on the civil society side are always on the alert for signs that their government partners are not acting in good faith. They worry about being co-opted, that the often time-consuming engagement in the OGP processes might be seen by some governments as a way to keep civil society distracted and busy so they won’t be able to draw attention to bad actors acting badly in the public sector. They scrutinize the implementation of action plans, highlighting instances when the words on paper aren’t turning into reality. In short: they trust, but verify. Again, this takes time and new skills and tactics, all in the service of giving this new model a try.
Will OGP meet its ambition of being a new model for solving governance problems around the world? Maybe. Will it add to what we know about whether and how civil society organizations can work with the public sector? Yes, definitely. It already is. And we, and our partners around the world, are eager to learn right along with them.
I usually sleep soundly, but there are nights when work worries keep me up. The number one cause of a restless night: wondering whether a grant we made with the best of intentions is, in fact, making a partner organization worse off than it would have been without it.
Funders can do harm with their generosity in three ways, which sometimes combine to toxic effect: dependence, distortion, and delaying the inevitable.
We foster unhealthy dependence when we provide grants that are large, relative to the size of an organization’s annual budget. Caution lights start flashing for me when grant size reaches 20 percent of the total, but there are no hard-and-fast rules. In cases where we are a large funder—and particularly when we are the single-largest funder—changes in our strategy or in the allocation of our grantmaking budget can wreak havoc on a grantee’s organizational health, leaving a resource hole that cannot be filled quickly, if at all. We try to mitigate those risks to grantees with lots of advance notice about a change in funding levels, but the damage to an unprepared organization—not to mention the stress on leadership and staff—can be substantial.
We distort organizational priorities when we offer resources for work that is not squarely aligned with an institution’s mission, and requires hiring new staff or otherwise extending the grantee into new commitments. This is an easy trap to fall into, for both parties. For us, it’s easier to work with known partners than to establish new relationships, and we may tell ourselves we’re encouraging the organizations to boldly pursue a promising new direction. For the grantee organization, taking on new work—even when it’s only a second cousin to their current concerns—permits them to maintain (and maybe even deepen) a relationship with a long-time funder. While sometimes this works out splendidly, and both we and the grantee make the most of connections between longstanding efforts and a bright shiny new initiative, in many cases the organization’s mission just gets fuzzier, its story harder to tell. And when the project is over, leaders—having compromised their principles—end up having to make unwelcome decisions about whether to sustain the new work or let it end, with all the consequences for both staff and institution that entails.
We delay the inevitable when we make grants to keep an organization financially viable—at least in the short run—when we know the prospects for long-term sustainability are slim. It’s incredibly painful when the withdrawal of our support sounds the death knell to a non-profit organization; I admit that more than once I have been persuaded to approve a grant because failing to do so would have meant that a once-valued partner would close its doors and good people would be without a job. But life support for a grantee is no favor at all in the end, either to that organization or to our shared mission. It is better to face the music, and—even for the staff—to find another place to dance.
Sure, in all these cases, it’s not just the foundation that is responsible for the damage. It is incumbent on the grantee’s leadership team to make sure they are not jeopardizing their organizational health. They have to continuously work on diversifying their funding base, ensure that their commitments are in line with their raison d'être, and have a clear-eyed view of their own institutional viability. But the funder-grantee relationship is inherently unequal, and I’m very much aware of the responsibility those of us who are making the funding decisions bear—decisions that, in the end, determine how much money is available for rent, payroll, equipment, travel, and everything else that makes nonprofits’ work possible. Day in and day out, when we think about writing the checks, we absolutely, positively have to check our own practices, and make sure that first, we are doing no harm.
The Industrial Revolution. The Digital Revolution. The French Revolution. The American Revolution. Whether economic or technological in nature, social or political, they all have one thing in common: what came after was fundamentally different than what came before, and the changes wrought could not be ignored.
What about the Data Revolution? What kind of revolution will it be?
Will the Data Revolution, which will be ushered in alongside the agreement on the 2030 Sustainable Development Goals, change the way we work? Will it change the way we think? And can it fundamentally alter the relationship between the powerless and the powerful? I’m hoping for all three.
In the long lead-up to the formal launch next week of the Global Partnership for Sustainable Development Data—an institutional embodiment of the hoped-for Data Revolution—much of the focus has been on how innovations in data collection and analysis have the potential to accelerate progress toward key development outcomes, from reducing food waste to increasing immunization coverage to improving the energy efficiency of cities. (For a running summary of Data Revolution commentary, check out this terrific resource at Open Data Watch.) We’ve all drawn inspiration from real-world examples in which census data are linked to feeds from social media and satellite imaging to yield insights that are rich, granular, and instantaneous—and potentially invaluable for real-time planning and implementation of government programs. As several data impacts case studies have shown, better and more varied data sources and tools are already transforming how we all work, and the Global Partnership can help focus attention and resources on making those innovations work harder for development outcomes.
But realizing the full potential of the Data Revolution will require setting even more ambitious expectations, and taking greater risks. It will require confronting the link between our data and ourselves.
The way we collect, protect, and use information is a reflection of a society’s values. That means data-as-usual reinforces current patterns of inequality and marginalization; and it means progressive social change can be propelled and reinforced when the way we work with data is consistent with the better world we’re seeking.
Just think, for a moment, about gender. Assumptions about gender permeate data collection, from the now-contested presumption that gender is a binary variable to the many manifestations of gender roles that are embedded in standard household surveys. If a family includes a wife and a husband, the husband is listed as the “head of household.” Questionnaires that ask only about “primary occupation” often miss income-earning activities that women engage in, if the interviewees self-identify as “housewives,” as they often do in cultures where men are seen as producers and women as reproducers. Similarly, male respondents are rarely if ever asked about care-giving. Even data collection that is explicitly intended to understand gender-specific experience cannot break loose of social values: to measure violence against women, to take one example, those who are collecting and analyzing data have to confront questions about behavioral boundaries within intimate relationships. In most societies, the question of whether marital rape exists is far from settled, so how can information about that kind of violence be obtained if it’s not even conceived of as possible?
And data related to gender is certainly not the only type of information that implicitly perpetuates the status quo. Everything from the categories we use for race and ethnicity to the way we measure income and consumption to the geographic boundaries we use—“rural?” “urban?”—carries with it the assumptions and values of the society in which people create those survey questions and analyze those variables. That means those who want to challenge assumptions and shape social values have a huge responsibility and an equally large opportunity: We have to make sure that the work we do to improve and intensify the collection and use of data helps propel change, not stifle it.
Practically, what does this mean? First, it means embracing the interaction between the political and the technical, and challenging the view that data about people and our lives exists outside of social constructs. It means experimenting with new ways to conceptualize and capture information: using ethnographic methods to arrive at different ways of thinking about work and relationships, for instance, and inviting into the conversation people supposedly represented in the numbers. And it means fully taking advantage of data that are a byproduct of actions and transactions, rather than solely depending on data constructed and collected for the purpose of answering pre-set questions. Yes, here I’m talking about some parts of the big data universe, like social media feeds, data from remote sensing, and transactional information showing how people work, consume, and move. I’m sure those who think more deeply about these issues than I do can expand on (or challenge) this list, and I surely hope that happens as the Data Revolution unfolds.
At the Hewlett Foundation, we are pleased to be anchor partners within the Global Partnership for Sustainable Development Data, working with remarkably committed and future-oriented individuals and organizations. At the very least, we expect this Partnership to be a leading force in helping apply more and better data to overcome development challenges. But even beyond that, we expect this Partnership to marshal the energy, resources, brainpower, and passion to revolutionize the way we think about the power of data itself.
When the Sustainable Development Goal Declaration is agreed by heads of state at the United Nations next week, there will be cheers—and jeers. The jeers will come from people who see a long list of lofty goals and no practical path to achieving them; and from those who see little connection between the pontificating of, well, the pontiff and other eminences at global gatherings and people struggling with the harsh realities of daily life around the world.
But I, for one, will be cheering. Here’s why.
The seventeen Sustainable Development Goals are messy and complicated, and they clearly embody the many compromises needed to reach agreement. They’re also pretty darn inspirational. If we push past cynicism, doesn’t it seem like a pretty good idea to work collectively for the end of hunger and extreme poverty, for equality and empowerment, for kids learning in school and living until a ripe old age, for decent work, peace, and a real balance between human consumption and environmental protection? And isn’t it an even better idea for the global community to work simultaneously on multiple, intertwined problems than on a few narrowly focused ones, and to intentionally integrate economic, environmental, and social concerns? Sure, it’s messy. Life is messy.
Another reason I’ll be cheering the goals is that they aspire to be universal, to apply in some form as much to the future of my children as to the future of children in every country around the world. While the development community has often dominated the discussion, the goals haven’t lost the critical thread of “this applies to all of us.” That thread is something that can serve as the point of departure for contemporary models of social change, which are not about people from wealthy countries who have the know-how and the resources to help those from poorer ones, but rather about people facing common problems with shared purpose, together.
The goals have merit because the process that generated them was more inclusive by orders of magnitude than similar exercises in the past. The national and thematic consultations, opinion surveys, opportunities for feedback, and many other efforts amplified the voices of people who have rarely been heard before in the halls of the United Nations. And because of this, everyone is accountable. No one can say “these aren’t our goals.”
The goals also focus attention in a useful way on how to conceptualize and measure complex concepts: quality education and learning, sustainable cities, justice and strong institutions, among others. And there’s a broadly shared expectation that monitoring of progress across and within countries will be just part of a larger effort in which people with a broad set of skills and passions join forces to use the power of data—a mash-up of traditional and novel sources—to accelerate progress.
All of these reasons make the goals worth cheering. But I’ll also be cheering because once they’re agreed, we can put the goal-setting process behind us, and get to work.
Not long ago, I wrote to my contact at a grantee organization, pleading for him to send in long overdue final reports that were required by the original terms of the grant. The work we’d funded had gone well—by all measures a success—but once the money was spent we had heard nothing, despite multiple reminders to submit the final narrative and financial reports. In my note, I simply let him know that the delayed reporting was causing extra work for my colleagues and me, and blemishing the organization’s otherwise perfect reputation. He wrote back within hours, as if he were surprised that anyone cared, promising (and then soon after delivering) precisely the reports we had been seeking.
In that instance, simply providing information about the importance of complying with the grant terms, and the consequences of not doing so, seemed to make a difference. So in the spirit of trying to make our relationships with grantees as good as they can be, let me share more broadly why you need to get your reports in on time.
First, you need to get your reports in on time because you promised you would. Each grant agreement letter spells out when interim and final reports are due. When a representative of a grantee organization signs and returns that grant agreement letter, we comply with our part of the deal—we cut a check—and we expect that the organization will take seriously all of its commitments, too.
Second, you need to get your reports in on time because failing to do so causes extra work for people you probably want to keep on your good side. Program officers, program associates, program directors and even the foundation president get periodic (and frequent) tallies of how many reports are overdue and from whom. For each person down the chain of command, if the number of overdue reports goes above a minimum threshold, the “gentle reminder” and “just wondering about your reports” emails start increasing in frequency. Do you really want to be the reason for your program officer’s bad day?
Third, you need to get your reports in on time because it is one of the few things that people working in non-profit organizations can do perfectly and with complete success. We’re all in the business of making big promises about what can be achieved to make the world a better place, working on shoestring budgets. And most of us routinely fall short of our goals. But writing a short report describing work we have done (and should be proud of) and presenting information about money spent is a lot easier than making governments work better, improving education, or protecting threatened ecosystems. And a special note to my friends from the academy: We do not grade, peer review, or publish the reports. They do not have to represent the most cutting-edge thought leadership—so they don’t require the intensity of effort (and accompanying procrastination) that your other work does. Just take an hour or two, write the report, send it in, and congratulate yourself on a job well done.
Is writing grant reports the most rewarding task for someone working diligently for social change or contributing to global knowledge? No. And neither is reviewing them the task my colleagues and I look forward to most. But both the writing and reviewing are parts of the process that allows us to keep the grant dollars flowing so you can keep changing the world.
If you’re in the business of running a nonprofit, there are bad grants and there are good grants. And then, once in a while, there are great grants.
A bad grant is one that is underfunded, relative to what needs to be accomplished; or one that commits an organization to activities that distract from its institutional mission; or one that locks in a relationship with an overly demanding funder. A good grant is one that provides adequate resources to carry out a project that is central to the institutional mission with a minimum of muss and fuss. And a great grant? Well, that’s general operating support—unrestricted funds that can be used in more or less any way, and reported on with materials the organization already has, such as its audited financials and annual report.
General operating support is precious to organizations whose work is dependent on project-by-project outside funding. It’s what gives them the ability to pursue activities they know are important, but no donor has yet seen as a priority, providing a measure of independence. Unrestricted money helps organizations retain staff during lean periods, and make investments that help to take institutional performance to the next level: a new website, increased capacity to implement a communications strategy, professional development for staff, remodeling an office, trips by the executive for fundraising, and a thousand other things that benefit the entire organization.
General operating support is particularly precious because it’s so hard to get. These days, many foundations are reluctant to offer unrestricted support and, despite some positive signals (including recently by the president of the Ford Foundation), I have my doubts about whether we’ll see widespread changes in the philanthropic sector anytime soon.
At the Hewlett Foundation, we’ve always been able to offer general operating support to a significant share of grantees, and we know from their responses that this is both deeply appreciated and rare among their funders. But it’s never enough.
If what you need the most is the hardest to get, you have to start getting creative. And organizations we work with are getting creative in lots of admirable ways—some that expand their sources of support, and others that reconceptualize what they’re trying to get funding for.
One way organizations try to expand support when they have a friendly institutional funder is to ask for a matching grant as the basis for a fundraising campaign. While creating some risk for the grantee, matching is a tried-and-true way to mobilize support from individuals, jumpstarting a base of support from people who might be persuaded to continue contributing even without an ongoing match to motivate them. In this way, even a small general support grant multiplies itself into a larger unrestricted pool.
When it comes to reconceptualizing the ways to use funding, clever organizations transform mundane institutional needs into initiatives that can attract funders who prefer discrete projects. With a little creativity, many institutional priorities can be “projectized.”
Imagine, for instance, that a nonprofit knows it needs to undertake strategic planning, upgrade staff skills, and revamp the website. Rather than dipping into reserves to pay for these necessities, a smart fundraiser could create a project called “Impact 2020” and pull together a narrative that explains how these activities fit together and, in combination, will prepare the organization for the future. With a realistic timetable and budget, that package has a chance of attracting funding from a supportive donor that likes specific, time-limited projects. Sure, it’s not as efficient or flexible as general support, but organizations may be able to get funding that otherwise wouldn’t be available, and perhaps even accelerate the sorts of institutional changes that tend to languish when there’s no external mandate to make progress.
Separate from conceptualizing institutional investments as projects, smart nonprofits make sure that projects serve institutional needs by creating budgets that reflect the true cost of the work, clearly and accurately. When the resources required to maintain or strengthen institutional capacities are lumped into something called “overhead,” funders balk. But most funders know that project costs need to include support for, say, computers, the financial management team’s time, and even some support for professional development for staff. Some (not all) funders will permit items like that to be listed as direct costs rather than aggregated into overhead. But that will only happen if organizations accurately estimate the full institutional costs as they prepare project budgets, and think about how to present the information clearly and in ways that demonstrate how each line item contributes to the project aims.
None of this magically solves the problem of too many bad grants and too little unrestricted support for the nonprofit sector. But for individual organizations applying some strategic thinking in their funding relationships, it can help get the most out of the great grants, and make the good ones even better.
August 14, 2015 — By Kristen Stelljes and Ruth Levine
A researcher conducts an interview with a woman in Bamako, Mali, as part of an Innovations for Poverty Action baseline health survey for an evaluation report on the role of user fees and information in health care decisions. Photo Credit: Nicolo Tomasell
We’ve been looking back at the Hewlett Foundation’s contributions to the field of impact evaluation in global development, and we’ve been looking ahead to a new approach to funding it. In looking back, we’ve tried to understand how some early investments yielded big things. In looking ahead, we’ve been exploring how this foundation can continue to be on the leading edge of a field that has grown so rapidly.
For us, the impact evaluation story starts in the early 2000s. Around that time, people working in international development were waking up to the potential for rigorous evaluation to shed new light on some persistent questions about what strategies and interventions could reduce disease, improve education, and change lives in other measurable ways.
In Mexico, for example, Santiago Levy and the late Jose Gomez de Leon, working alongside Paul Gertler and other U.S. academics, conducted an impact evaluation of a massive social program, PROGRESA. That evaluation showed the world that scientific evaluation of the impact of an important national policy intervention was possible, and that robust evaluation findings were hard for politicians to ignore.
The pioneering work of academics like Esther Duflo, Michael Kremer, and Ted Miguel, and of development economists at the World Bank, demonstrated repeatedly that randomized controlled trials not only were feasible in difficult field settings like those in Kenya and India, but also yielded insights that no other methods could. At the same time, there was growing frustration with the limitations of standard evaluation practices within development agencies—practices that focused far more on how much was spent and what activities were implemented than on what good was done.
These factors motivated the Center for Global Development’s report of the Evaluation Gap Working Group, When Will We Ever Learn? Improving Lives through Impact Evaluation, which was co-authored by Ruth when she was at CGD. That report argued that organizations seeking to use development dollars effectively should invest in impact evaluation, and that those investments should be pooled to yield the greatest value.
Long before either of us was here, the Hewlett Foundation backed the emerging field of impact evaluation. For instance, in our Mexico portfolio the foundation built on the PROGRESA experience by investing significantly in Mexican organizations promoting stronger evaluation of social programs. The foundation also was an early funder of the home base for many of the leading academics working in the field, MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL). Along with the Gates Foundation, the Hewlett Foundation supported the Evaluation Gap Working Group hosted by the Center for Global Development, and the then-director of the Global Development Program, Smita Singh, was an active member of that group. Again with our partners at the Gates Foundation, we were a founding and steady supporter of the International Initiative for Impact Evaluation (3ie), established to provide pooled funding for impact evaluations, to encourage use of evaluation findings, and to promote standards for evaluation quality and transparency.
These investments paid off. The 3ie Impact Evaluation Repository now holds 2,648 evaluations, and the number of impact evaluations completed each year has skyrocketed. As much as $50 million is spent each year on impact evaluations, though this is still a small amount compared to total spending on aid-funded development projects, and minuscule compared to domestic budgets. Countries such as Mexico, Colombia, Benin, South Africa and Uganda have created governmental units dedicated to evaluating public programs, whether by commissioning evaluations or conducting them in-house.
Having seen the fruits of earlier investments, we’re both gratified and unsatisfied. We know we cannot just keep doing more of the same if we’re going to play the role that foundations should: pushing the boundaries of what’s possible.
To understand the current needs and opportunities in the field of impact evaluation, Kristen has spent the last two years talking with grantees, experts in the field and evaluation users; reading the literature; and commissioning three papers on the future of impact evaluation. (These papers were presented by 3ie at a workshop in April, and will be published as working papers in October.)
Our conclusion is that three big challenges remain for the field, and we have an obligation to try to tackle them. The challenges we see are:
Lack of bodies of evidence. It doesn’t make sense to base decisions on single studies, but our two-year examination makes clear that there are still too few attempts to test whether programs that are successful in one setting will be equally successful elsewhere; and systematic reviews are rare. As a result, decision-makers rarely have the full body of evidence they require to make a sound decision about whether to adopt a particular approach.
Few individuals have the skills to conduct high-quality impact evaluations. The skills for rigorous evaluation are particularly scarce in many developing countries. As a result, those with the greatest technical skills may have some knowledge of the local context but are not primarily rooted in it, and they are limited in their ability to build and sustain relationships with decisionmakers. That limits both the relevance of the evaluations and the researchers’ ability to put findings into the service of better decisions at the program and policy levels.
The incentives of academic researchers rarely match the needs of decisionmakers. While academics conduct studies that yield publishable insights and use the most cutting-edge methods, their interests, timing and means of communication do not always correspond to what’s needed for real-world decisionmaking.
Given this assessment, our future funding for the field of impact evaluation will seek to achieve the following outcomes: First, that impact evaluation practices are responsive to the needs of policymakers and program implementers. Second, that decisions are made based on bodies of knowledge that are designed to inform decisionmaking. Third, that local researchers conduct high-quality, policy- and program-relevant impact evaluations.
As our grantmaking unfolds over the coming months and years, we may explore new ways for researchers and decisionmakers to collaborate. We may test new ways to conduct impact evaluations that are more responsive to the types of questions decisionmakers ask and the timelines they work under. We will likely support new ways to review, summarize and present bodies of evidence to provide more useful information for policymakers. And we may invest in new ways to build evaluation capacities that are responsive to the context in which the evaluations are done.
These new directions represent a re-commitment to the field of impact evaluation—and a re-commitment to making sure that the Hewlett Foundation supports those who are exploring the field’s frontiers. As we go, we will keep watching, keep learning and, yes, keep evaluating.
It’s surprisingly hard to be a by-the-book grantmaker because being an imperfect person gets in the way. When I depend too much on my own gut instincts, give in to obligations of friendship, guilt, or a sense of compassion, or feel sure that success is just a matter of working harder and harder, I’m unlikely to make the best decisions about strategies or grants. But those are, of course, very common sentiments among humans. Which is why it sometimes feels like the more human I am, the less good at grantmaking I can be.
I know that sounds harsh. Let me explain.
At the Hewlett Foundation, we design and implement philanthropic strategies that are intended to make the greatest positive difference for each dollar granted—a high bar. We value long-term relationships with high-performing grantees insofar as their work is aligned with our strategies. But we’re supposed to make clear-eyed, rational choices about changing or ending relationships if the evidence tells us a different approach is likely to produce better outcomes. We build strategies around logic; portfolios around grantee organizations’ capacities. The calculus of grantmaking, at least as practiced here, is mostly a left-brain exercise.
But, being people, we naturally have a hard time paying more attention to objective information, accumulated evidence, and coherent theory than to our hunches and hopes. It’s hard to say no to friends, and sometimes it’s equally hard to feel good about recommending a grant to an organization led by someone we don’t much care for—even if the organization itself has much to offer. We may even find ourselves a little bored, sometimes, with the long-term, steady partners and have more fun thinking about all the new projects we could support if those same resources were uncommitted. And maybe most challenging of all, as people who in our past lives have worked intensely and dug deep into details, it’s disconcerting to realize that as funders we often do more for grantees by getting out of the way. It’s tempting to be part of the action, not just watching from the sidelines as the real work gets done.
To help us stay upright in the tug-of-war between heart and head, we have some tools and techniques. These include a strategy process, designed by our Effective Philanthropy Group, that asks fundamental questions and requires us to articulate the assumptions underlying our theories of change. It also establishes expectations about external evaluations that will give us information about shortcomings we might otherwise be blind to. For grant-by-grant decision-making, we have a practice of sharing proposals with colleagues for an independent view, and of having open discussions about whether a new idea genuinely adds value or is just novelty for its own sake. Feedback from our biannual grantee perception survey, administered by the Center for Effective Philanthropy, helps remind us how much the organizations we support appreciate the latitude we try to give them to carry out their work as they see fit. And, quite frankly, a heavy load of grantmaking provides its own form of discipline: we just don’t have time to get overly involved in the day-to-day work of grantees.
Our practices are designed to help us fight off favoritism, faddishness and (one hopes) foolish infatuation with our own ideas. Fortunately, though, they don’t squeeze out all the space for being a human. What I see in my colleagues, in fact, is that some of their very best grant strategies leave room for inspiration, creativity, and serendipity; their most fruitful relationships with grantees are built on trust; and they do their work each day with passion and humor. As it turns out, the best grantmakers are people, with all the imperfections and contradictions that implies. And I wouldn’t have it any other way.
You’d be hard pressed to find it in most grant proposals, but it’s there. It’s just that the first year of a new program or organization is often disguised as a typical period of work: a full complement of staff working at full tilt, delivering services, producing studies, or promoting policy change. Grantseekers usually project that they’ll spend quickly from Day 1, and they promise funders that we’ll see lots of productivity and impact. Eager for a new idea to hatch quickly and grow, we buy into the fantasy.
Then, somewhere around month nine we start hearing (occasionally sheepish) explanations for slow spending and a lack of results—often accompanied by a question about whether it’s possible to get a no-cost extension on the grant. Although we almost always agree, it’s with a twinge of disappointment and a sense that we should have known better. We’ve seen this movie before. Eventually, when the evaluation report is written years later, we’ll see it in black and white: “Due to significant delays in initiating activities, the entire timeline of the work program lagged.”
You know, we’d all be a lot happier—and look a lot smarter—if we just acknowledged reality and embraced the Zero Year. It’s not a goose egg; it’s an important period of early development.
During the first months of any new effort, leaders are dedicated to under-the-radar and relatively low-cost activities like recruiting, planning, setting up in-office systems and policies, establishing new partnerships, and going through the sometimes painful and always time-consuming process of getting activities off the ground. That’s a necessary precondition to successful work later on, but it’s not the work itself. The rate of spending during the first year, and particularly during the first six months, is usually far less than half that of any later period. And, quite naturally, there are few visible products, and no impact. That’s not failure; that’s just life.
While it’s tempting to jump over the start-up phase in any grant proposal, it would be far better to describe it in detail and with realism—to build a Zero Year into both budgets and timelines. Grantees and funders could then have a conversation about what it really takes to put together the right team, to set up a website, and to do all the other vital, unglamorous things needed to initiate “real work.” Together we could figure out what a sensible spending trajectory looks like, and when we might reasonably expect to see results. We could redefine success for the Zero Year as where an organization needs to be by its end in order to thrive in every subsequent year. And I wouldn’t be a bit surprised if, in the discussion of the Zero Year, we figure out that it makes more sense to make a longer grant, giving the grantee greater financial security, a more realistic timeline for seeing results, and all the benefits that go along with it.
Best of all, by calling out the Zero Year as a specific and necessary step along a multi-year journey, we’d be honest not just with each other but also with ourselves.