You are (not) here: something we try not to lose sight of. (Image Credit: Hewlett Foundation, adapted from Wikimedia Commons, licensed under CC BY-SA 3.0)
In the halls of the Hewlett Foundation, you hear people talking about places: how hard it is for artists to find affordable performance space in the Bay Area; the importance of protecting wilderness in the Western United States from commercial development. You hear people talking about policy: debates about the causes and consequences of climate change; why there's reason to hope that Congress will become genuinely functional again one day. And you hear an awful lot about the intersection between place and policy: the ways each state is implementing (or backing away from) the Common Core education standards; the state-level restrictions on the provision of abortion services. In other words, you get a flavor of the complex work my colleagues do every day to improve life in the United States, at this particular moment in the country’s economic, political, and social history. It’s both inspiring and informative to those of us who spend our time working on problems 10,000 miles away from home.
Before joining the Hewlett Foundation in 2011, I spent my professional life working in organizations with "development" in their names (or at least their missions): the U.S. Agency for International Development, the Center for Global Development, the Inter-American Development Bank, the World Bank. That’s the case for most of the program officers in the Global Development and Population Program.
In those workplaces, the air is thick with the peculiar jargon of international development: community capacity building, demand-driven, on-the-ground, low-hanging fruit, tackling the root causes of poverty (especially for women). Professionals progress through careers as they master a technocratic agenda and tally up the countries they’ve visited. With sweeping statements, people compare the social and economic trajectories of countries with peers at similar income levels—“the Tanzanias of the world”—rather than invoking each country’s unique national history. Experts can rattle off average development indicators for rural and urban households, but cannot name the three largest cities in the last country they worked in. Policy reforms are measured in five-year increments—the lifecycle of a project—with results observed at the national level, rather than in election cycles where gains are manifested unevenly across populations—region by region, district by district.
This is a caricature, of course, and there are impressive counter-examples. But there’s far more truth to it than fiction. And while international institutions and non-governmental organizations have facilitated many gains in social and economic well-being, they are profoundly limited by their outsider status. Fundamentally, they are poorly positioned to directly undertake the hard, long struggle of advancing political and social reforms in countries around the world. That’s the hard work of local reformers in civil society and public service—people with a stake in their own countries' futures.
So the trick is to figure out how we, as outsiders with a global perspective, can effectively support that hard work. We’ve found some ways. We can, for example, work toward the establishment of global norms for public sector transparency; we can support data collection and research that helps inform advocates and activists about where their country stands relative to others; we can foster exchange of tactical knowledge across borders. Importantly, we can provide funds directly or through intermediaries to organizations where people are working for social change in their own countries.
Those are a few of the things we can do. What we cannot do is define the right in-country policy agendas—or know the most effective, appropriate ways to pursue them. And if we ever start to forget that, we’ll watch our colleagues work on the many gnarly challenges we face here at home in the United States, and we’ll be reminded of how differently we have to do our jobs.
We fund a lot of research in the Global Development and Population Program. So we want to be sure that the researchers we’re supporting are using sound methods and reporting accurate findings. We want to contribute to the global public good of new knowledge, not the global public bad of weak science. Reaching that aspiration can be a challenge.
We are not the largest funder of research on issues in global development, but we are among the larger ones. Of the roughly $25 million in grants approved for our program at our Board meeting earlier this week, about a third of the dollars went to institutions that primarily engage in research; that proportion is characteristic of our whole grants portfolio. If you count up all our active research- and evaluation-oriented grants, the total comes to about $100 million.
We occasionally fund specific studies that have narrow research questions, but more often we support research programs in think tanks, universities, and similar organizations. Topics covered range from those in political science about how citizens interact with local governments to public health investigations to estimate the incidence of unsafe abortion. The proposals aren’t like the thick protocols submitted to the National Academy of Sciences, the Wellcome Trust, or the National Institutes of Health, and when we review them we don’t anonymize, or blind, them as some of those institutions do. But—just like the funds from public research funders—the dollars we provide to researchers are used to design and field surveys, run field trials, analyze large data sets, and run policy simulations.
Eventually, some of the research will appear as published papers in political science, economics, demography, and public health; those work products will be subject to journals’ peer review, and investigators will struggle through the “revise and resubmit” obstacle course. Many of the studies, though, are not headed for publication in professional journals. Rather, findings are shared through institutional websites with a range of audiences in the form of working papers, reports, and policy briefs. All of it—we dearly hope—will help to increase the chances that policymakers will have (and use!) more and better information to make key decisions.
Which brings me to today’s conundrum: We are not staffed like a research funding institution, and we cannot count on journals’ quality assurance processes to vet all of the final products. So we have to figure out how to judge research quality, from proposal to finished product. That’s not so easy.
Research quality is a concept with many dimensions: Is it relevant? Are the choices about how to collect and analyze data appropriate, and are the methods applied correctly? Are the findings communicated in ways that work for technical and for policy audiences?
As grant makers with deep knowledge of the fields in which we work, we’re in a pretty good position to assess the relevance of the questions and the accessibility of the findings. It’s far harder, though, to figure out whether the sampling design is sound, or whether the statistical methods are the right ones, correctly applied. We don’t have time to read every research paper our grantees produce, and I’m pretty sure they don’t want their program officer asking them a lot of questions about statistical power, endogenous variables, and fixed-effects modeling. But we do have to find ways to assess the soundness of the research.
Here are a few ways we do it, and I freely admit that none of them is perfect:
The most common, and my least favorite by a country mile, is reputation. We assume the quality of the research is high when we’re working with researchers and institutions that have an established reputation for quality. This is self-evidently a risky strategy, but I’m pretty sure we are not alone among funders in using it. This isn’t blind review; it’s blinded-by-star-power review. And it’s one I’d like us to depend on a lot less.
We often ask about an institution’s own systems for quality assurance. Many think tanks, for instance, have peer review arrangements that include both in-house and external reviewers. We’ll ask questions about how they select the reviewers, what they do with comments, and whether they’ve ever had to retract a paper. We applaud grantees’ efforts to adhere to high levels of scientific transparency, including putting out original data sets to permit reanalysis.
We sometimes suggest ways to reinforce an organization’s own quality assurance processes, and may even provide extra resources for this purpose—for example, to recruit an advisory board that includes members with specialized knowledge, depending on them to vet the technical details. This can have a lot of benefits, including strengthening institutions beyond the one-time research effort.
We occasionally commission a quality assessment in which an outside expert audits a sample of the grantees’ work products, and reports findings to us. While not a full institutional evaluation, this can give us valuable information about strengths and weaknesses we might not otherwise have detected.
Knowing our own limitations, we occasionally bundle research funding into a regranting arrangement administered by a group that does have research skills in-house. This is the case, for example, with the International Initiative for Impact Evaluation and the International Development Research Centre, both of which are partners in large regranting efforts.
We invest in field-wide efforts to foster greater quality, such as impact evaluation registries and replication studies.
We believe in the value of research to refine concepts, develop coherent theory, and create a strong empirical basis for decision-making. That’s why, year after year, we recommend to our Board that they dedicate significant funds to individual studies and to research-based organizations. But with every grant recommendation we feel a heavy sense of responsibility: that research had better be good research. We know that’s the real test of a good research funder.
A mother and her daughter in Keur Alpha, Senegal read a coloring book created by Tostan, a grantee of our Global Development and Population Program. (Photo Credit: Jonathan Torgovnik/Reportage by Getty Images, licensed under CC BY-NC 4.0)
Policy matters. If I had to name the one assumption on which most of our grantmaking hangs—at least in the Global Development and Population Program—that would be it. We support organizations that study the impact of social and economic policies, and we support groups that advocate policy change that favors marginalized populations in countries around the world. We believe that when responsive, technically sound policies are adopted, people will be better off: women will have access to more and better health care; schools will do a better job of teaching kids to read, write, and do arithmetic; good jobs will be more plentiful.
In some domains—particularly around expanding reproductive and economic choices for women—we have a point of view about the policy agenda, and it’s a perspective strongly held by the foundation for decades. But we have a larger agenda: trying to increase the chances that those who are making policies have and use the best available evidence, and citizens who are affected have crucial information and channels of influence so that they can be heard at high levels.
That’s the story behind grants that are awarded to think tanks, universities, large policy-oriented international NGOs and even small advocacy groups in Mexico, as well as in countries in East and West Africa. It’s why, to put it plainly, a grantmaking program aiming to benefit the poor sends checks to organizations that are led and staffed by people who are, at the very least, not poor.
We could go about things a completely different way, of course. We could (and occasionally do) just pay for health care, or teachers’ salaries, or books. This is what many aid programs, both public and private, have done for decades. We could even recommend to our president and Board that we distribute money directly to poor people, as our friends at GiveDirectly do so well. If we took that route, we could stop sending those checks to the elite researchers, advocates, and institutions; we could skip policy and go to people. We could be certain that we were contributing to improvements in living conditions among known communities, in ways we could see and touch.
But the direct-to-people model of grantmaking, appealing as it is, doesn’t fulfill the responsibility we’re entrusted with by our Board. We’re asked to make as big a positive difference as possible with the resources available, and the way to do that is by acting on the levers through which a few dollars can affect many people over a long time: policy, markets, or technology. Most of the time, we choose to work on policy.
I think it’s the right choice, but I also think we have to be wary of believing our own stories too much. Knowing that the words on paper can be a lot prettier than the reality, we have to pay as much attention to the implementation of “good” policies as to their adoption. We also have to place priority on research and advocacy that reflect real experiences and foster greater empowerment. To do that, we can learn from those rare organizations, like WIEGO and Twaweza, that are based on principles of deep and genuine participation of people who otherwise would have no path to political power. We are watching with interest the experimental approaches that Feedback Labs and others are taking to connect people-level information to program- and policy-level decisions. And we are continuing to make grants to support groups like Marie Stopes International as they provide services that help people in very direct and immediate ways. Although we’re a U.S. foundation with no in-country presence, we’re trying to keep our feet on the ground.
Policy matters, yes. But what really matters, after all is said and done, is people.
Behold the IMF: an organization that has become a champion of women’s economic empowerment in a big way.
The International Monetary Fund, that citadel of macroeconomic orthodoxy, has stated loudly and clearly that countries’ long-term prosperity depends on making more room in the labor force for women. In a speech last September in Japan, IMF Chief Christine Lagarde made the point elegantly: “We will need all the economic growth, dynamism, and ingenuity we can get in the years ahead. Thankfully, a key part of the solution is staring us right in the face—unleashing the economic power of women. Bringing the world’s largest excluded group into the fold.”
Looking across countries, the IMF has estimated that expanding economic opportunities for women can increase national income by 15 to 30 percent. And the institution, known in the past for harsh macroeconomic prescriptions, now offers up recommendations focused on economic inclusion.
In a study released just this past Monday, IMF researchers found that women’s participation in the job market depends on national policies that give women an equal chance at inheriting property, owning land, opening bank accounts, and pursuing an occupation of their choosing. Many countries still have gender-biased policies on the books, and under the current rules of the game women simply cannot reach their full economic productivity.
Their conclusion is straightforward: Want to grow your economy? Fix the gender-based discrimination that’s codified in laws and regulations. As Lagarde wrote about the study on her blog, “By helping women reach their full economic potential, we can also help boost growth, prosperity and stability for the whole world. In a world in search of growth, women will help find it, if they face a level playing field instead of an insidious conspiracy!”
This is precisely what the world needs to hear from the IMF—but we need to hear more.
We need to hear that the IMF sees and understands the informal economy, which is where the majority of female workers in many countries can be found. Yes, it’s great to pass laws that permit women to inherit property and pursue careers in occupations like teaching, banking, and hospitality. It would be even better to provide legal protections for domestic workers, women who sew garments in their homes, food vendors at construction sites, and people who sort out recyclable and reusable items from vast mountains of waste. Those workers are contributing to the economy, too.
We need to hear that the IMF recognizes the unpaid labor that keeps households running, farms producing, and family businesses afloat. National income measures don’t include the value of childcare, eldercare, cooking, cleaning, and many other services when they are provided outside of the context of a market transaction. As a consequence, productivity is vastly underestimated—and the contribution that women make is too often invisible to economic policymakers. Wages for housework? Maybe not. But recognition of the economic value of care and other unpaid work? A fitting task for a global financial institution.
Think tanks have tremendous potential to strengthen economic and social policy around the world, using data and analysis to answer questions about how to grow economies, share prosperity, and protect the environment. It is within think tanks that skilled analysts pull apart the most pressing policy problems, examine the impacts of policies, and translate the best available evidence from around the world into a local context. But to fulfill their promise, think tanks need well-qualified staff with the wherewithal to build long-term research programs, and they need to be able to respond with information and advice when unexpected policy opportunities arise. In other words, to fulfill all that potential, they need core organizational support and cannot just live project-to-project.
This week, my colleague Sarah Lucas and I have been in snowy Istanbul at the Think Tank Exchange, a gathering of 40-plus think tank leaders convened by the Think Tank Initiative (TTI). Along with the British and Norwegian governments, the Bill & Melinda Gates Foundation, and Canada’s International Development Research Centre, we support TTI. Through it, we and co-funders provide core support to think tanks in Latin America, South Asia, and Africa. This conference was a chance to see what our grant dollars are paying for—and we like what we’re seeing.
Talking to the extraordinary men and women who run these organizations, I heard time and again that core support is the lifeblood of think tanks, whose success depends on credibility and relevance. When an organization has resources that are not all tied up in specific, short-term projects, it can recruit the best researchers and give them the independence to pursue research on policy questions that no particular funder has yet prioritized. With core support, a think tank can ensure continuity within a research program, consolidating the organization’s reputation for being a “go to” source on a topic. It can invest in basic research infrastructure—library services, development of an economic or environmental model for policy simulations, statistical support—that yields benefits for multiple research projects. And the organization can marshal a response when, with the election of new political leadership or a boom in government revenues, a window opens up for new ideas.
The range of research conducted by think tanks supported by the Think Tank Initiative. Click the image to view full-sized. (Image Credit: Think Tank Initiative)
Funding think tanks is quiet and wonky, and there are days when it feels like an agonizing blend of academia and politics, a crazy mash-up of research methodologies, regulatory minutiae, and ministry personalities. There are no ribbons to cut, or announcements about how many microloans were offered or lives saved. Even when think tanks have major accomplishments to their name, they resist bragging because maintaining a relationship of trust with policymakers often requires letting others take credit. For that reason, it is hard for think tanks to be recognized by funders for the value they bring.
But like us, our fellow Think Tank Initiative funders do know there’s a very high pay-off when people who care about getting the facts are able to influence political decision makers. When think tanks find their voice—and that’s happening more and more, as the appetite for evidence and technical advice increases in many countries—there’s no question that they are key contributors to national policy debates. Think tanks deserve the level and type of funding that will let them do their best work. They deserve core support.
In his most recent essay, “Can Rich Countries be Reliable Partners for National Development?” Lant Pritchett brilliantly deconstructs the existential crisis that faces traditional development agencies. The paper is a “must-read-now” for those who work in or around any bilateral agency or multilateral development bank because it lays bare the fundamental disconnect between the current missions of major development institutions and the actual wants and needs of both citizens in developing countries and their governments.
“Across the board, rich countries are backing away from the national development goals of poor countries, such as broad-based prosperity and effective government . . . towards a narrow agenda of low-bar goals, such as reducing ‘dollar-a-day’ poverty; ‘completing primary schooling’ (with no mention of quality of learning or education beyond primary); accessing basic water and sanitation; or focusing less on health and more on specific diseases.”
Several forces are pushing development agencies in this direction. Developing countries have alternatives to aid, particularly with resources from their own economic growth. And aid supporters in rich countries are increasingly “post-materialist,” which creates a divergence between what they want to provide and what people in developing countries want.
Lant’s analysis helps explain the poor track record of attempts to promote “country ownership.” To date, no development agency—with the possible exception of the U.S. Millennium Challenge Corporation—has done even a C+ job of aligning agency priorities with those established within partner countries, or using development dollars to strengthen in-country institutions and markets. Rich-country taxpayers who support aid tend to want faster progress toward pro-poor health, education and other social outcomes rather than economic growth alone. Even more daunting for the “country ownership” agenda is the awkward reality that shifting the locus of power to the national level wins few friends on the home front, particularly among those who are losing out on potential revenues from the business of aid.
Lant’s essay lays out a conundrum that seems irreconcilable. Given their constituencies, aid agencies cannot finance the things that address economic development writ large – such as energy projects, infrastructure, urban construction, universities, and transportation. So what’s a development agency to do?
The answer, I think, also comes from Lant—but from earlier times.
In 1999, Lant and his colleague Deon Filmer figured out a way to take household survey data, like the Demographic and Health Surveys, and estimate a robust wealth index so that we could compare and contrast the experiences of the wealthiest with the poorest households. Their paper, “Estimating Wealth Effects without Expenditure Data—or Tears: An Application to Educational Enrollments in States of India,” led directly to an explosion of analyses of equity in health, education, family planning, and other service utilization. For the first time, we could see, across dozens of countries, that progress was extraordinarily uneven, and that the benefits of spending on supposedly “pro-poor” interventions like childhood immunization were largely captured by better-off families. We had a new and crucially important way to assess the performance of a wide spectrum of government and donor programs, and to improve their design. Information and analysis shed new light, and continues to do so to this day.
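For readers curious about the mechanics, the core of the Filmer-Pritchett approach can be sketched in a few lines: score each household on the first principal component of its asset-ownership indicators, then compare outcomes across the resulting wealth quintiles. This is a minimal illustration with made-up data—the asset list, sample size, and numbers are invented for the sketch, not drawn from the paper or from any real survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy survey: each row is a household, each column an asset indicator
# (radio, bicycle, TV, ...); 1 = owns the asset, 0 = does not.
assets = rng.integers(0, 2, size=(500, 6)).astype(float)

# Standardize each indicator, then take the first principal component
# of the correlation structure across assets.
z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
first_pc = eigvecs[:, -1]                # component with largest eigenvalue

# Each household's wealth index is its score on that component.
# (The sign of the component is arbitrary; with real data you orient it
# so that owning more assets raises the score.)
wealth_index = z @ first_pc

# Assign wealth quintiles: 0 = poorest fifth, 4 = richest fifth.
boundaries = np.quantile(wealth_index, [0.2, 0.4, 0.6, 0.8])
quintile = np.searchsorted(boundaries, wealth_index)
```

With real survey data, the final step is to tabulate an outcome—school enrollment, immunization coverage—by quintile, which is exactly the kind of comparison that revealed how unevenly "pro-poor" spending was actually distributed.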
Then, in 2002, Lant wrote “It Pays to be Ignorant: A Simple Political Economy of Rigorous Program Evaluation.” In it, he explained—using equations!—why and how individual development agencies and governments are unlikely to invest in learning from experience, even though it would permit them to optimize the impact of their spending. That was one of several seminal contributions that led to the movement for more and better evaluation through collective action.
And in 2004, Lant and his colleague Michael Woolcock wrote in “Solutions when the Solution is the Problem: Arraying the Disarray in Development” about intrinsic limitations of development interventions that are based on a technocratic analysis of “need”: When the problem is defined from 30,000 feet as a need for more clean water, for example, the solution starts with pipes and flow rates. If the people actually affected, within a local context, were given the opportunity to define a problem for themselves, and propose its solution, chances are they would pursue a path quite different both from the technocrats’ and from the paths chosen in other communities. It’s a complex paper, but at least one of its core messages is that information about people’s views, preferences, and experiences must be taken into account—and given real weight alongside external technical expertise.
These ideas can help development agencies escape their conundrum. Aid agencies may be unable to finance the hydroelectric dams that countries want, but they can finance the information, research, piloting, and adaptation that make a difference in the ultimate success or failure of governments’ own investments. What Lant’s work has demonstrated over many years is the crucial role that information, analysis, and evaluation can play in shaping better outcomes for real people in real places. As I argued in an earlier blog post, aid agencies—particularly if they are willing and able to work collectively—have a comparative advantage in investing in gathering and revealing facts on the ground. Part of that agenda is pure data, part is research, part is evaluation and other forms of feedback. But it’s all important, and it’s all a big part of what needs to happen next if development aid agencies are ever to find the sweet spot between what they should do and what they can do.
It’s the time for our annual performance reviews at the Hewlett Foundation, so I’ve been spending more than the usual amount of time reflecting on skills that my colleagues demonstrate at work each day. And that’s made me think about the special and somewhat peculiar qualities that go into being a good grantmaker.
The program officer’s job is a tricky one. This is the person who establishes a relationship with a prospective grantee and figures out where the organization and the project idea fit into the overall field that we support. The program officer has to gather enough information to have confidence recommending we fund the work, and weigh various factors to sort out both the level and the duration of our support. He or she vets proposals, monitors progress, connects grantees to each other and to other funders. It’s also the program officer who figures out when and how to evaluate grants or strategies, and when to recommend expanding a commitment or pulling the plug. Program officers are also members of a larger team and have management responsibilities, as well as the obligation of being a good citizen of a dynamic organization.
So when I think about the essential qualities of a program officer, demonstrated by my colleagues, I see stars. Specifically, I see the five-pointed star shown here, with pairs of attributes in balance.
Program officers need to know a lot about their subject matter. In the Global Development and Population Program, for example, program officers have deep in-country experience and strong knowledge of the fields of reproductive health, education, governance, transparency and accountability, and gender. But sometimes they have to tell their inner expert to take the day off so they can question fundamental and often strongly-held assumptions within a field, and appreciate what people from other disciplines have to offer.
Program officers need to be well organized, able to keep to deadlines and tick tasks off a list. There are a surprisingly large number of tasks involved in grant making: the due diligence, the proposal review, the presentation to the Board, the report review, and so on (and on). If you looked at a program officer’s calendar here, you’d see at least 50 separate deadlines each year. (That’s about one a week, on average, and it often feels like one a day.) All too often, those fall when program officers are in a distant time zone, far from a reliable internet connection.
The pace feels relentless and the stakes can be high: missing a deadline almost always inconveniences many colleagues, and could even jeopardize our ability to award a grant.
At the same time, program officers cannot be so mired in planning and meeting deadlines that they are unable to respond when opportunities arise, there’s a need to act quickly to meet a grantee’s urgent needs, or a colleague needs a helping hand. And, while ticking the tasks off day-to-day, they still have to follow a separate, and very different, rhythm—that of keeping up in the field.
Program officers need to be passionate about impact, willing and even eager to ask hard questions of prospective grantees about how they are going to turn money into good things in the world. But sometimes—maybe even most of the time—they need to temper that passion with a willingness to let go, giving grantees latitude to do things their way. It is, after all, the grantees who are ultimately accountable for the success or failure of each grant. And people rarely do their best work when they feel that someone else is calling the shots.
Program officers are supposed to be “strategic thinkers,” able to set priorities for their time, grant and consulting budgets, and institutional voice. At the same time, they have to keep their minds open to opportunities that we might not have envisioned at the outset of the strategy—as well as to the possibility that things just aren’t working and it’s time to change course. Again, it’s a balancing act.
Finally, program officers have to tread a fine line in relating to people who benefit from a grant. Good grantmaking requires being able to establish a relationship of trust, one that entails empathy and connection. At the very same time, good grantmaking requires maintaining enough objectivity so that the funding relationship is sustained for the results, not for the relationship alone. That can be tough.
Five sets of skills, five ways to balance proactivity with restraint. Thinking in this way about how we’re all doing our jobs helps me understand what it really means to be a good grantmaker. And it’s permitting me to see many stars.
In founding the National Campaign to Prevent Teen and Unplanned Pregnancy, Sarah created an organization that has been inventive, strategic, and just plain successful—whether you’re talking about its policy work, media partnerships, or direct outreach to young people.
The media partnerships, which stretch across the TV dial and many online outlets, famously include 16 and Pregnant and Teen Mom. The National Campaign has used partnerships with these shows to bring responsible parenting messages to MTV’s big audiences—messages with so much punch that they’ve been cited as major factors in the dramatic decline in teen pregnancy following the shows’ introduction.
The Campaign also has been pioneering in its creation of Bedsider.org, a made-for-twenty-somethings web portal with a growing reach that provides information about birth control along with a dose of fun. Studies of Bedsider.org’s effects on users’ knowledge and use of clinical services hint at its tremendous promise as a way to engage and educate young people.
All of these complementary efforts, and the other work the National Campaign has done over the years, have contributed to what President Obama was referring to in his State of the Union Address earlier this week: “We still may not agree on a woman’s right to choose, but surely, we can agree it’s a good thing that teen pregnancies and abortions are nearing all-time lows, and that every woman should have access to the health care that she needs.” Those are precisely the goals the National Campaign has worked so hard to achieve, and they have managed to do it—even in this day and age—with bipartisan support.
Sarah is one of this country’s most ardent champions of making birth control accessible to young people, but what I admire most is that she is simultaneously committed to a cause and to evidence. That’s a rare thing. Sarah’s not in it for the passionate speechifying or the ideological fight. She knows how to look at facts, analyze current conditions, and focus energies where they will make the greatest difference. She has built a team and an organization that’s in it for the results: the brightest future for the next generation.
The bell has rung. Civil society leaders from India, Pakistan, Senegal, Mali, Nigeria, Mexico, Uganda, Kenya, and Tanzania have come together, with optimism and conviction, asking others to join a movement to ensure that all children learn the fundamental, life-changing skills of reading and arithmetic.
But will anyone answer?
That’s the question I found myself asking just a couple of days ago, as I listened to Dr. Rukmini Banerji, head of the ASER Centre in India, speak on behalf of organizations from diverse countries that administer a simple reading and math test to school-age children each year, and publicly report the results—results that often reveal the dire state of education in their countries and motivate parents and policymakers to act. Before a large international gathering in Delhi, she said that we now know it is both possible and powerful to conduct citizen-led learning assessments. She called on civil society leaders around the world who are committed to better education to let parents, communities, and government officials know the cold, hard facts about whether or not children are learning.
The idea of citizen-led learning assessments is simple, and simply disruptive: rather than depending solely on enrollment statistics reported by government, assessments conducted by volunteers going door-to-door generate new and independent information about what matters—what children are learning. And that information is widely shared so that parents and others see for themselves whether children, in or out of school, have achieved the basic reading and arithmetic skills that can reasonably be expected of any 7- or 8-year-old.
For 10 years, the ASER Centre has conducted a national assessment in India, with data that are representative at the sub-state level—a feat of heroic proportions. Year after year the findings have been shocking. This year ASER reports that while almost all children are enrolled in a government or private school, only one third grader out of four can read a second-grade text. Almost one in five second graders cannot recognize the numbers zero through nine. And as the children progress through the grades, many fall further and further behind.
With the feasibility of this home-grown approach demonstrated in India, civil society organizations in East and West Africa and in Pakistan have adopted and adapted it in recent years. Now more than a million children are tested annually, and similar efforts are starting up in Mexico and Nigeria. Technologists, education experts, and leaders in governance and accountability are all participating in the expansion and evolution of this approach.
The work of groups undertaking citizen-led assessments has paid off. At the national level, the releases of each year’s findings set off a chain reaction of responses and accelerate reforms. Most importantly, they force public conversations across societies, and particularly by politicians, bureaucrats, and education experts: How can we improve education for all children?
At the global level, findings about how few children can do the basics, even after attending school every day, have paved the way for an “access plus learning” objective in the Sustainable Development Goals, expected to be agreed upon by UN member states in September. And the development of that goal has pushed the education community to focus on the vexing question of how to measure learning outcomes affordably, in ways that match in-country technical capacity.
The expansion of citizen-led learning assessments has also created some remarkable international bonds. The Indian team is working closely with counterparts in Pakistan, forging strong ties as they deal with common challenges; the Francophone countries of Senegal and Mali are collaborating on the challenges of translation into French and mother-tongue languages; and Uganda, Tanzania, and Kenya are ensuring that data are collected in ways that permit comparison and encourage friendly competition in the East African community.
So now, as we stand on the cusp of a global agreement about the importance of learning, these civil society organizations from the Global South are calling on their counterparts in other countries to join the movement. Already they’ve seen expressions of interest and have been fielding inquiries from every region of the world. As these groups organize, learn, and plan, however, they will need technical and financial resources. With our support for its start-up, the ASER Centre has established an International Unit to help share tools and know-how, including an app for handhelds that saves time and money in data entry and analysis. So the groundwork is being laid.
But as this movement to measure children’s learning in an independent and inclusive way takes off, members of the donor community who care so much about better education—who have been so effective in helping governments increase school enrollments—owe it to themselves to take a close look at what citizen-led assessments can do to create an enabling environment for big changes. Not only are they a proven and relatively low-cost way to measure learning outcomes, but they also serve as a crucial external source of data, which will continue to be needed even as government-run assessments proliferate.
I know that we will do what we can to respond to the call. Will others join us?
So let’s say you were trying to make a big dent in the problems facing people in poor countries, like inequality, poor health and nutrition, illiteracy, and corruption. Let’s say you had big ideas, plenty of access to powerful decision makers and the smartest analysts, but relatively little money compared to the scale of the problems. Let’s say you wanted to make a lasting contribution, but were subject to priorities shaped by politicians with short time horizons. Let’s say you knew that real solutions need to be appropriate to local conditions and “owned” by affected communities—but you’re an outsider. Let’s say, in other words, that you were an aid agency. How could you spend your money and use your connections to make the biggest difference?
One way—and I’ll even go out on a limb here and say it’s the best way—is to invest in learning about the effectiveness of policies and programs designed to solve the major social and economic problems you’re trying to tackle. You could evaluate programs funded by aid agencies, national governments, and even the private sector—and use your experience and networks to ensure sound methodologies, relevant evaluation questions, and full use of the findings. You could foster good governance by helping countries develop and refine the best interventions for particular contexts, and by encouraging them to be transparent about whether publicly funded actions were succeeding or not. If you did that, you’d be leveraging your limited funds to reinforce country-owned, evidence-based policies and programs. You’d be fulfilling your mission.
You could get even more bang for your buck by funding evaluations collaboratively, benefiting from economies of scale and scope that come with pooled money and pooled thinking: a shared evaluation agenda; opportunities for replication in varied settings to figure out what’s generalizable and what’s not; and standard practices for quality assurance and dissemination of findings.
And the good news is that you could start intensifying your evaluation efforts today, because all the pieces are in place. The evaluation field has developed a wide range of methods, and leading academics around the world are focused on impact measurement. In fact, program evaluation is seen as one of the most appealing specialties by bright young development professionals. Several organizations, particularly the International Initiative for Impact Evaluation, have well-developed procedures for soliciting, vetting, and funding evaluations. Importantly, operational non-governmental organizations and developing country governments alike are increasingly recognizing the value of evaluation and using new findings. Just next week, for example, as part of the International Year of Evaluation, the Government of India is dedicating five days to high-level discussions about how evaluation can contribute to better health, education, and social protection policies. And guess what? Better evaluation is a priority shared by people of (almost) all political stripes. Win. Win. Win.