The bell has rung. Civil society leaders from India, Pakistan, Senegal, Mali, Nigeria, Mexico, Uganda, Kenya, and Tanzania have come together, with optimism and conviction, asking others to join a movement to ensure that all children learn the fundamental, life-changing skills of reading and arithmetic.
But will anyone answer?
That’s the question I found myself asking just a couple of days ago, as I listened to Dr. Rukmini Banerji, head of the ASER Centre in India, speak on behalf of organizations from diverse countries that administer a simple reading and math test to school-age children each year, and publicly report the results—results that often reveal the dire state of education in their countries and motivate parents and policymakers to act. Before a large international gathering in Delhi, she said that we now know it is both possible and powerful to conduct citizen-led learning assessments. She called on civil society leaders around the world who are committed to better education to let parents, communities, and government officials know the cold, hard facts about whether or not children are learning.
The idea of citizen-led learning assessments is simple, and simply disruptive: rather than depending solely on enrollment statistics reported by government, assessments conducted by volunteers going door-to-door generate new and independent information about what matters—what children are learning. And that information is widely shared so that parents and others see for themselves whether children, in or out of school, have achieved the basic reading and arithmetic skills that can reasonably be expected of any 7- or 8-year-old.
For 10 years, the ASER Centre has conducted a national assessment in India, with data that are representative at the sub-state level—a feat of heroic proportions. Year after year the findings have been shocking. This year ASER reports that while almost all children are enrolled in a government or private school, only one third grader out of four can read a second-grade text. Almost one in five second graders cannot recognize the numbers zero through nine. And as the children progress through the grades, many fall further and further behind.
With the feasibility of this home-grown approach demonstrated in India, civil society organizations in East and West Africa and in Pakistan have adopted and adapted it in recent years. Now more than a million children are tested annually. And similar efforts are now starting up in Mexico and Nigeria. Technologists, education experts, and leaders in governance and accountability are all participating in the expansion and evolution of this approach.
The work of groups undertaking citizen-led assessments has paid off. At the national level, the releases of each year’s findings set off a chain reaction of responses and accelerate reforms. Most importantly, they force public conversations across societies, and particularly by politicians, bureaucrats, and education experts: How can we improve education for all children?
At the global level, findings about how few children can do the basics, even after attending school every day, have paved the way for an “access plus learning” objective in the Sustainable Development Goals, expected to be agreed by UN member states in September. And the development of that goal has pushed the education community to focus on the vexing question of how to measure learning outcomes affordably, in ways that match in-country technical capacity.
The expansion of citizen-led learning assessments has also created some remarkable international bonds. The Indian team is working closely with counterparts in Pakistan, building strong bonds as they deal with common challenges; the Francophone countries of Senegal and Mali are collaborating on the challenges of translation to French and mother-tongue languages; and Uganda, Tanzania, and Kenya are ensuring that the data are collected in ways that permit comparison and encourage a friendly competition in the East African community.
So now, as we sit on the edge of a global agreement about the importance of learning, these civil society organizations from the Global South are calling on their counterparts in other countries to join the movement. Already they’ve seen expressions of interest, and have been fielding inquiries from every region of the world. As these groups organize, learn, and plan, however, they will need technical and financial resources. With our support for its start-up, the ASER Centre has established an International Unit to help share tools and know-how, including an app for handhelds that saves time and money in data entry and analysis. So the groundwork is being done.
But as this movement to measure children’s learning in an independent and inclusive way takes off, members of the donor community who care so much about better education—who have been so effective in helping governments increase school enrollments—owe it to themselves to take a close look at what citizen-led assessments can do to create an enabling environment for big changes. Not only are they a proven and relatively low-cost way to measure learning outcomes, but they also serve as a crucial external source of data, which will continue to be needed even as government-run assessments proliferate.
I know that we will do what we can to respond to the call. Will others join us?
So let’s say you were trying to make a big dent in problems facing people in poor countries, like inequality, poor health and nutrition, illiteracy, and corruption. Let’s say you had big ideas, lots of access to powerful decision makers and the smartest analysts, but relatively little money compared to the scale of the problems. Let’s say you wanted to make a lasting contribution, but were subject to priorities that were shaped by politicians with short time horizons. Let’s say you know that real solutions need to be appropriate to local conditions and “owned” by affected communities—but you’re an outsider. Let’s say, in other words, that you were an aid agency. How could you spend your money and use your connections to make the biggest difference?
One way—and I’ll even go out on a limb here and say it’s the best way—is to invest in learning about the effectiveness of policies and programs designed to solve those major social and economic problems you’re trying to tackle. You could evaluate programs funded by aid agencies, national governments, and even the private sector—and use your experience and networks to ensure sound methodologies, the relevance of the evaluation questions, and full use of the findings. You could help foster good governance by helping countries develop and refine the best interventions in particular contexts, and by encouraging them to be transparent about whether publicly-funded actions were succeeding or not. If you did that, you’d be leveraging your limited funds to reinforce country-owned, evidence-based policies and programs. You’d be fulfilling your mission.
You could get even more bang for your buck by funding evaluations collaboratively, benefiting from economies of scale and scope that come with pooled money and pooled thinking: a shared evaluation agenda; opportunities for replication in varied settings to figure out what’s generalizable and what’s not; and standard practices for quality assurance and dissemination of findings.
And the good news is that you could start intensifying your evaluation efforts today, because all the pieces are in place. The evaluation field has developed a wide range of methods and leading academics around the world are focused on impact measurement. In fact, program evaluation is seen as one of the most appealing specialties by bright young development professionals. Several organizations, particularly the International Initiative for Impact Evaluation, have well-developed procedures for soliciting, vetting, and funding evaluations. Importantly, operational non-governmental organizations and developing country governments are both increasingly recognizing the value of evaluation and using new findings. Just next week, for example, as part of the International Year of Evaluation, the Government of India is dedicating five days to high-level discussions about how evaluation can contribute to better health, education and social protection policies. And guess what? Better evaluation is a priority that is shared by people of (almost) all political stripes. Win. Win. Win.
Last year around this time, I used this space to offer some predictions for 2014. Every single one of them came true!
Now I’m emboldened to go beyond predictions and share my resolutions for 2015—and leave it to you to hold me to them.
I resolve to do a better job of ensuring that, even when supporting work on long-term problems, we’re attentive to near-term progress. Across each of the domains in which we work, we’re seeking large-scale social change that requires overcoming longstanding barriers, from gender-based discrimination to the over-medicalization of basic health services to poor (and often corrupt) public sector performance. We are patient, and don’t fool ourselves into believing that dramatic progress is likely in a few months or even a few years. The path of least resistance is always to continue the status quo—the same sensible approach, the same good partners—and to look for confirmation that incremental progress is being made. But patience and respect for past decisions have to coexist with being alert to the possibility that what we’re doing isn’t working, or isn’t working as fast as it should. That’s why I’m going to push myself this coming year to figure out the right near-term measures to help us understand (and accelerate) the progress we’re making.
I resolve to spend more time getting feedback. It goes almost without saying that people who work in foundations face a serious feedback problem. Not only are we at least one step removed from anything that could be called “on the ground,” but we also spend most of our time with organizations that receive or want funding—and they, of course, have few incentives to tell us that our thinking is simple-minded, or our proposed solutions are unlikely to work. These are problems our colleagues in the Effective Philanthropy Group here at the Foundation are tackling with other funders through the Fund for Shared Insight, and it will take these efforts and many more to find the best ways to elicit, hear, and use feedback. For my part, I need to place more priority on consultation and on setting aside time for site visits and other opportunities to observe and listen.
I resolve to be completely honest and upfront about the prospects for funding. Telling someone “no” is neither pleasant nor the route to popularity, and it’s far too easy for funders to rely on vague statements. “Not now.” “We’ll have to see.” “After our strategy refresh.” But I know from being on the grant-seeking side of the table for many years that the second-best response to a funding pitch is not “maybe”; it’s “no.” A clear “no” means those who are seeking funds can move on to look for more appropriate funders, instead of trying to figure out, often fruitlessly, how to turn a “maybe” into “yes.” In 2015 and the years after that, I’ll do my best to be clear and candid in response to requests, even at the risk of being seen as undiplomatic.
I’m hoping these resolutions are within my grasp and that I’ll be able to report back with some sense of accomplishment this time next year. In the meantime, if you have ideas for other resolutions we should be making—or for how to achieve these—do not hesitate to speak up.
Like a whole lot of other people, I’ve become engrossed by Serial. (If you’ve somehow managed to avoid its seeming cultural omnipresence: Serial is a weekly podcast in which a reporter sifts through clues and patches together fragments of people’s memories to try to solve a long-ago real-life murder mystery—and possibly exonerate the man convicted of the crime.) It’s a messy story, complete with ambiguous evidence, false leads, and contradictory points of view. And that’s what’s so captivating. Listeners must puzzle it out along with the reporter, applying powers of observation, inquiry, and judgment to understand both what happened and what it means. The untidiness of the reporter’s investigative process as we follow along with her creates a kind of audience engagement that no polished three-act structure, complete with a tidy ending, ever could. For a taste of it, just tune in to the “meta-conversation” on Reddit, in Slate, or around the water cooler at many offices (including our own).
So why am I writing about this when I’m supposed to be focusing on global development policy and programs? Because the phenomenon of Serial, and the popularity of detective stories in general, gives us a clue about one of the great unsolved mysteries in global development: Why don’t people who want to get the most good out of every development dollar spent pay enough attention to what works and why? Why don’t more people truly learn from program evaluations?
I think one of the reasons that the findings from program evaluations don’t gain traction—either in the minds of individual development practitioners or with the institutions whose programs might be improved—is because of the way they’re presented: dull, predictable, and unengaging, tied up in a too-neat bow. They are not only easy to ignore; they’re actually hard to pay attention to.
But what if program evaluations were constructed as detective stories? There are lots of similarities, in fact, between a good mystery and the assessment of a particular program’s effectiveness. Maybe there are no dead bodies or court records in program evaluations, but there are lots and lots of questions we really need the answer to—lots of questions we should be just plain curious about. What happened, and why? Does everyone agree or are there differences of opinion? What were the motivations for decisions, and the impact of them? Inquiring minds want to know!
And, like true crime dramas, evaluations are stories necessarily built on incomplete facts and conflicting opinions that have to be poked, prodded, and sorted. An open-minded, curious evaluator has to take it in and make enough sense of all the information to know what conclusions she is confident about, and what remains unknown. She not only has to sort fact from fiction, but also figure out what the findings mean: Yes, the implementation was slower than expected, but how does that (or does it even) affect what was actually accomplished? Was the loss of three key staff members during year two a result of disastrous management or just bad luck? And then there’s the big one: even if all the outcome indicators are trending in the right direction, can we say with confidence that it’s due to the program itself?
I know it’s a stretch to imagine that evaluation reports, which are notoriously banal, could be as compelling as Serial. But surely when we’re designing evaluations we could try to tap into our natural human curiosity. We could ask ourselves the question, “What are we genuinely interested in knowing?” This is a question that might lead us in a novel direction—not the well-trod paths of bureaucratic exercises to tick the “evaluated” box or the careful marshaling of evidence to confirm what we already believe. Then, in the course of the evaluation, the evaluator could present the information not as a neat set of conclusions backed up by selected and sanitized facts, but as a puzzle for those who care to join in the solving of it.
In the end, I’d bet that the findings from an evaluation that engaging would be remembered for a long time, and might even have a chance of influencing decisions far into the future.
“At the sound of the tone, please state your name and press the pound or hash key . . .” That phrase fills me with dread.
Working on large collaborative projects with people located in many different parts of the world, I spend a lot of time on conference calls with 5-10 other participants. Sometimes very, very early in the morning. As a consequence, I also spend a lot of time wishing those conference calls were more productive and satisfying as a means of communication. Too often the organizers and participants seem at sea, not knowing when to speak, or who’s talking, or where the conversation is going. Without the cues of an in-person meeting, it’s easy to lose focus or miss an opportunity to make a key point. But it doesn’t have to be that way. Here are some tips harvested from some of the best conference calls I’ve been on.
If you are leading the call:
Send the call-in information with the invitation.
For high-stakes decision making, do pre-calls with key participants to understand where agreement may be difficult. Shape the agenda to permit time for discussion where it’s required. Minimize presentation time and maximize discussion time.
Send the agenda at least two working days before the call, and include the call-in information along with instructions on how to access background materials. Make sure the agenda is clear about the objectives, the time for each item, and any decision points. The fact that this meeting is taking place on the phone rather than in person makes it more important that the reason for having it is clear to everyone, not less.
Open the call a couple of minutes early so you are on when people join.
Formally start no more than five minutes late, even if some participants have not yet joined. Take attendance by stating who you believe is on the line and asking for a quick response. Then ask anyone not named to introduce themselves.
Start every call with the following statement: “If you are not speaking, please put your phone on mute. At each point where comments or questions are requested, I will ask each person (or location) in turn, in the following order: [state the order, starting with the person who, because of time zones, is most inconvenienced by the time of the call]. If you have no comments, please feel free to simply pass to the next person. At the conclusion of a round of comments, I may open it up again. At that point, speak up if you have additional points to make. Please state your name each time you speak.” This system may seem overly formal, but it will reduce the problems of dead air/everyone speaking/people turning to their email instead of listening, and it will give everyone a chance to participate if they wish.
Invite any additions or changes to the agenda; note if there is an “any other business” section at the end. Reassure participants that they can contact you off-line to suggest topics that need to be covered in a future call or through a round of emails.
Have someone else take notes so you can attend to the difficult business of moderating the discussion and managing the time with a firm hand. Your note-taker can also quickly email the agenda and background materials to any participants who do not have them handy.
Make sure stacked-up comments get addressed, and stay attuned to emerging points of consensus or discontent; circle back to them as needed before the section ends to solicit additional thoughts.
Clearly state at the end what the conclusions are, including any decisions or next steps. Also indicate when notes will be circulated and if you will be asking for comments on them.
Plan to end a little early because people often drop off before the scheduled end.
If you are a participant on the call:
Ask in advance for an agenda with the relevant information if it hasn’t arrived two working days before the call. Test any links in advance so you don’t waste everyone’s time fumbling with technology.
Prepare for the call with talking points you think you will want to raise for each agenda item so you make the most of the time and don’t ramble. You may want to use these as the basis for a follow-up note, to make sure your points were understood.
Join promptly. If you must join late, wait until a pause to quickly introduce yourself and don’t expect anyone to recap the conversation for you.
Mute yourself when you are not speaking.
Introduce yourself each time you speak and indicate when you have finished your intervention. (“I’ll turn it back to you now.”)
Respect the agenda and order of discussion topics—limit your comments and questions to the points currently under discussion. If there are additional topics you think need to be covered, consider suggesting a follow-up call rather than trying to cram everything into a single call.
Assume there are other people listening who have not introduced themselves.
Be careful about sounding negative. As in email, without the cues of an in-person interaction, negative comments are often over-interpreted.
Stick around until the end of the call.
After the call, review any call notes promptly.
If this sounds like a lot of work, let me reassure you it’s less work than struggling through unproductive calls and then trying to make sense of the result. Let’s make the world a better place – one conference call at a time.
Are big ideas “destroying international development” as Michael Hobbes (or at least his provocative headline writer at The New Republic) would have us believe? Casting a wide net of blame for “save the world” failures, he comes up with some big fish: Playpump, the once-heralded invention that was supposed to harness the power of children’s play to pump water in Zambia; “Deworm the World,” an initiative to rid millions of children of intestinal worms; and the Millennium Villages, an effort to combine a comprehensive set of interventions to help poor communities escape the “poverty trap.” While Hobbes uses these as examples, his real target is what he calls “the paradigm of the Big Idea—that once we identify the correct one, we can simply unfurl it on the entire developing world like a picnic blanket.” Beyond highlighting the sin of “solutioneering,” he comes close to rejecting the idea of generalizable knowledge altogether:
“The repeated ‘success, scale, fail’ experience of the last 20 years of development practice suggests something super boring: Development projects thrive or tank according to the specific dynamics of the place in which they’re applied. It’s not that you test something in one place, then scale it up to 50. It’s that you test it in one place, then test it in another, then another. No one will ever be invited to explain that in a TED talk.”
A lot of ink has been spilled on the follies of international development over the decades, of course. On just my own shelf I have dozens of representatives of the genre, from Hirschman’s Development Projects Observed (1967) to Easterly’s The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor (2014), and a lot in between. In his own recent contributions, Hobbes argues for far less hubris and more humility, patience, and learning-as-you-go.
I’m all for that.
But I’m reluctant to give up on Big Ideas – if they’re the right Big Ideas, pursued by people who balance aspiration with realism. (Interestingly, in another recent piece, Hobbes highlights a couple of initiatives that meet his own test for the right Big Ideas.)
So here are four Big Ideas that the Global Development and Population Program spends time (and grant dollars) on:
The Big Idea of facts. The deficit of core economic, demographic, health, education, crime, and other statistics in low-income countries profoundly handicaps policymakers’ and citizens’ ability to take action, at every level, from national to local. This is a problem that has to be solved country-by-country, sector-by-sector, but it doesn’t have to be a process of continual micro-level reinvention. Many data collection methods are replicable and the results are often more useful when the information is standardized and valid comparisons can be made across countries and regions.
When we support the collection of information about whether teachers are present in schools and kids have textbooks, and about whether health centers have the medicines and vaccines they are supposed to, we’re investing in the Big Idea of facts. (That’s the Service Delivery Indicators project.) When we provide funding for groups trying to measure informal employment and compare the experience of men and women, we’re investing in the Big Idea of facts. (That’s Data2X.) And when we help build an ever-expanding base of compare-and-contrast impact evaluations around the world, and help expose decision makers to the type of information that can help them make the best-educated guesses about what’s going to work—that, too, is an investment in the Big Idea of facts. (That’s the International Initiative for Impact Evaluation and many other organizations.)
The Big Idea of local. It’s not easy for a foundation sitting in Menlo Park, California, to know what’s going on anywhere else, let alone in Dar es Salaam, Accra, Ouagadougou, Mumbai, or Lima. What we do know, though, is that individuals embedded in their own contexts, engaged in debates that affect them, their children, their fellow citizens, and their countries’ futures, are often in the best positions to figure out how to define and advance social and economic objectives. (That’s the Think Tank Initiative, Amplify Change, Fundar, Twaweza, Pratham, the INDEPTH Network, and many other organizations around the world.)
The Big Idea of open. Despite their proximity to the equator, many of the countries in the Global South have suffered from a severe lack of sunshine. Until recently, citizens have been in the dark about who was buying up the rights to exploit natural resources and how much they were paying, what governments were spending on priority programs and sectors, and whether government-run services were of adequate quality or not. Underperforming governments have been able to act with impunity.
But expectations have changed and, with remarkable speed over the past 10 years, key information about both government and corporate practices is available to watchdog groups and average citizens. (That’s the International Budget Partnership, the Global Initiative for Fiscal Transparency, the Natural Resource Governance Institute, Publish What You Fund, Creative Commons, the Transparency and Accountability Initiative and many others.)
The Big Idea of global. Increasingly, the distinction between the “developed” and “developing” world is falling away, eroded by the threat of collective problems like climate disruption and Ebola, and by the convergence around common problems like the rising tide of chronic disease everywhere. Many wicked problems affect people in all countries, and their solutions will require shared values, know-how, and action. (That’s the common agenda we have to promote reproductive health and rights at home and around the world. It’s the Open Government Partnership. And it’s our many conversations with our colleagues in the Environment, Education, Performing Arts, and Philanthropy Programs, as well as the Madison and Cyber Initiatives.)
Give up on Big Ideas? Not me. Find the right ones? We’re trying.
The headlines from India are chilling, and as if from a time long gone: botched sterilizations in a government program to curb population growth. Twelve women dead, more than 60 hospitalized, community members outraged.
This had trouble written all over it from the get-go: poor, illiterate women in Chhattisgarh, one of the most impoverished states in India, were given incentive payments of more than a week’s wages to undergo tubal ligation at a mobile health clinic. The health workers, too, received an incentive based on the number of procedures performed, possibly to reach a monthly target. And the conditions were nothing short of appalling: no alternative birth control methods, rusty instruments, poor infection control, tainted medicines (which it now appears may have caused the women’s deaths), and a lack of follow-up care once the surgery was done.
While this terrible incident has come to international attention because of its sensational elements, it fits a troubling, longstanding pattern. For many decades India has pursued an aggressive population control policy that seems far more focused on the interests of the state than those of the women and their families. Incentive payments that are hard for poor women to refuse, combined with numerical targets and an emphasis on permanent contraception, have added up to a family planning program that’s at odds with global norms.
The numbers tell the tale: More than 4.6 million women are sterilized in India every year, and sterilization accounts for more than 80 percent of contraceptive use in the country. Female sterilization is more than twice as prevalent in India as in places where the Pill, injectables, implants, IUDs, and other methods are offered, like China and Latin America. Many of the procedures are done in temporary camps, set up for quantity rather than quality. The doctor performing surgery on these women, for instance, had been honored by the state government earlier this year for performing more than 50,000 tubal ligations.
Surely, it’s time for this to stop.
Even the Indian government has said so. I was in the audience, along with Melinda Gates, Dr. Raj Shah from USAID, and many others, at the London Family Planning Summit in July 2012 when a representative of the Indian Government promised that, “Through the largest public health program in the world, the National Rural Health Mission and the upcoming National Urban Health Mission, addressing equity, ensuring quality, including adolescents and integration into the continuum of care are slated to be the cornerstones of the new strategy. The centerpiece of its strategy on family planning will be a shift from limiting to spacing methods, and an expansion of choice of methods, especially IUDs.” Those are all the right words. Then, just two weeks ago when the FP2020 2013-2014 Progress Report was released, I was impressed to read that “Family planning in India has undergone a paradigm shift, away from the old sterilization-centric emphasis on population control.”
If the paradigm truly has shifted, some people haven’t gotten the memo.
Despite all that good thinking, though, the job is clearly not done—and maybe it’s barely begun. I’m sure there are health authorities in India, as elsewhere, who are horrified by the current situation and would like nothing better than to promote change. But those who are trying to improve conditions in tough environments have to figure out how to put into operation the complex concepts promoted in those documents—things like “ensur[ing] that respecting and protecting rights is built into performance monitoring and accountability, indicators, procedures and practices.” That’s going to take hard work at all levels of government, paired with organized actions by citizens to hold government to account. Along the way, the international community—funders, technical partners and everyone else—should offer support and celebrate progress.
But more than that, we should all be brave enough at times like this to publicly condemn repressive policies and call for an immediate halt to lethal practices.
By policy work I mean the construction of public policy positions—more spending, less spending, better spending, better targeting, smarter monitoring, smarter regulation, deregulation, whatever—and then, mindful of (and well within) the limits placed on nonprofits and their funders, the mobilization of support from both elite and grassroots groups.
The potential pay-off is huge. When you have relatively few dollars and you want to solve big problems, policy work is one of very few possible pathways. A change in public policy can instantly scale a good idea, and can unleash vastly more resources than any private source could muster. If, for example, you’re trying to improve educational outcomes for kids in villages and cities throughout Uganda, you need the Ugandan government itself to take the lead, perhaps persuaded by a combination of evidence and advocacy from those outside government.
But far from being immune to the vicissitudes of politics, policy work is almost entirely subject to the rise and fall of particular people, parties, and platforms. No matter how rigorous the analysis you use to develop a policy position, and no matter how many problems it might elegantly solve, its fate lies in the hands of politicians, and voters—blue, red, green, rainbow-hued. That’s a high-risk place to be. One step forward? At the next election, you could well be three steps back.
If every election is a roll of the political dice that puts at risk years of investment in public policy research and advocacy, how can funders contribute to real and sustained progress? I don’t have any silver bullets, but can at least offer a few ideas that we’re trying within the Global Development and Population Program.
Movements matter. While voters and the politicians they elect can shift preferences and positions at the drop of a hat, in almost any issue area it’s possible to find and strengthen communities of people with a deep and abiding commitment. These are the people who identify themselves with the issue, who don’t think of it as a job but as a cause that affects them in personal ways, and to which they enthusiastically dedicate their brainpower, energy, and time. Movements need patient funding, investment in leadership, and encouragement to refresh themselves as context changes. Such patience, investment, and encouragement are vital, because without movements, advocacy efforts are ephemeral.
Knowledge lives on. We’re investing in institutions that are not captured by a single advocacy agenda, but rather by a mission of support for evidence-informed policy. These include but are not limited to think tanks, which conduct and communicate economic and social research across a range of policy domains, from labor to taxes to social safety nets. As research institutions, they contribute to cumulative bodies of evidence—for example, about the relationship between investments in opportunities for women and economic outcomes at the society-wide level. As institutions that work on multiple topics, they are also able to respond to the questions and opportunities of the day.
Support people who are doing real things, regardless of the political climate. The drive toward large-scale impact almost inevitably points in the direction of policy work and away from support to front-line service delivery. That was certainly my own inclination coming into the Foundation three and a half years ago. But I’ve come to appreciate very much the value of also supporting organizations whose work is far from the low-oxygen policy zone, close to the ground where people need help. In our case this includes, for instance, the organizations that are expanding the provision of family planning services to women around the world, and citizen groups working in their own communities to solve problems. That’s not most of what we do, but it has intrinsic value and serves as a continuing reminder that while our scale may be small relative to the need, we do have the ability to get something accomplished without depending on favorable political winds.
These directions point away from short-term, specific, headline-grabbing policy asks. They point toward work that takes a long time and can rarely claim big wins. It might be harder to love, but it won’t break your heart.
It’s as unfortunate as it is unsurprising: young, low-income women—the very people whose lives are often turned upside-down if they become pregnant—are less likely to use birth control if they have to pay for it. And high up-front costs are a big part of the reason they’re far less likely to use the most effective methods, like IUDs. Time and again, research has demonstrated that if contraception is offered for free, more women use it.
That’s why advocates who want all women to have access to the full range of contraceptive methods were united in their support of the Affordable Care Act (ACA). That’s why they used every tactic—from mass mobilization to litigation to political horse-trading—to press their case. When the rules were written, they got much of what they wanted: legislation that expands health insurance to those previously excluded from the system, covering every contraceptive method with no co-pay. In a country with a fragmented health care system, fractious politics, and a deep ambivalence about the role of government, it was a major victory.
Among the many hopes for the Affordable Care Act, certainly one has been that the extension of health insurance will help women control whether and when they become pregnant. In theory, a lower price to consumers, particularly low-income consumers, should mean greater demand for birth control, including methods that protect from pregnancy for months or years. And more use of the most effective methods should mean greater control for women over their futures, fewer unintended pregnancies, and all the benefits that go along with those things.
That’s the theory. The reality is that to get to these outcomes that are so good for women, families, and society requires that a whole bunch of things work well. People who otherwise would not be able to afford preventive services like contraception counseling and care have to sign up for the newly available insurance. Private as well as public insurers have to follow the letter and the spirit of the law: all methods, no co-pay, no funny business. Providers have to be willing and able to provide quality care, including long-acting methods that they are less familiar with than the easy-to-dispense Pill. Women have to make the effort to get and use the birth control method that’s right for them. Oh, and the whole system has to remain solvent.
Whether the reality bears out the theory will be revealed in the data, if we watch carefully. Now that we’re close to a year into implementation, we’re starting to see that some of the links in the chain are strong while others need reinforcement.
Oddly, sometimes when we talk about data, we end up ignoring the real world we’re so eager to measure. In the “data for development” crowd—and quite a crowd it is these days—we talk about the merits of indicators without thinking about whether the data exist to, you know, measure them—things like maternal mortality rates within small districts. We get excited about monitoring annual changes in birth and death rates, despite the fact that the surveys that yield the numerators are only done every five years and the census data that we depend on for denominators show up once a decade. We have an unlimited appetite for more and more granular information—down to the facility, down to the household, down to the smallest child in the most remote village—with little awareness of the potential costs of collecting, cleaning, and analyzing it.
Enter the practical voice of the original data scientists: demographers. From the time of John Graunt in the 1600s, demographers have developed efficient ways to collect information about population size and change. Equally importantly, they have figured out ways to assess the quality of that information, estimate the measurement errors, and compensate for incomplete and imprecise data. That’s not a sideline for demographers. It’s their job.
The special skill of demographers lies in their ability to understand the systemic linkages between human population stocks and flows across space and time. This deep understanding puts demographers in a strong position to evaluate what is feasible and realistic with data collected on human populations, the limitations of those data, and the validity of the results. The assessment and evaluation of data quality, the capacity to link and process data from a multiplicity of disparate sources, and the ability to see the data as part of a larger systemic framework are central aspects of this skill.
In addition to offering a technical word of warning to those planning a data revolution, the demographers provide scientifically-grounded suggestions for addressing indicator design, data quality, and interoperability. If you’re constructing indicators that will actually be useful, or investing in new data collection, or trying to figure out how to triangulate information from multiple sources, you want demographers on your team. Luckily, with this statement they’re saying that they are now ready, willing, and able to enlist as data revolutionaries.