The Industrial Revolution. The Digital Revolution. The French Revolution. The American Revolution. Whether economic or technological in nature, social or political, they all have one thing in common: what came after was fundamentally different than what came before, and the changes wrought could not be ignored.
What about the Data Revolution? What kind of revolution will it be?
Will the Data Revolution, which will be ushered in alongside agreement on the 2030 Sustainable Development Goals, change the way we work? Will it change the way we think? And can it fundamentally alter the relationship between the powerless and the powerful? I’m hoping for all three.
In the long lead-up to the formal launch next week of the Global Partnership for Sustainable Development Data—an institutional embodiment of the hoped-for Data Revolution—much of the focus has been on how innovations in data collection and analysis have the potential to accelerate progress toward key development outcomes, from reducing food waste to increasing immunization coverage to improving the energy efficiency of cities. (For a running summary of Data Revolution commentary, check out this terrific resource at Open Data Watch.) We’ve all drawn inspiration from real-world examples in which census data are linked to feeds from social media and satellite imaging to yield insights that are rich, granular, and instantaneous—and potentially invaluable for real-time planning and implementation of government programs. As several data impact case studies have shown, better and more varied data sources and tools are already transforming how we all work, and the Global Partnership can help focus attention and resources on making those innovations work harder for development outcomes.
But realizing the full potential of the Data Revolution will require setting even more ambitious expectations, and taking greater risks. It will require confronting the link between our data and ourselves.
The way a society collects, protects, and uses information is a reflection of its values. That means data-as-usual reinforces current patterns of inequality and marginalization; and it means progressive social change can be propelled and reinforced when the way we work with data is consistent with the better world we’re seeking.
Just think, for a moment, about gender. Assumptions about gender permeate data collection, from the now-contested presumption that gender is a binary variable to the many manifestations of gender roles that are embedded in standard household surveys. If a family includes a wife and a husband, the husband is listed as the “head of household.” Questionnaires that ask only about “primary occupation” often miss income-earning activities that women engage in, if the interviewees self-identify as “housewives,” as they often do in cultures where men are seen as producers and women as reproducers. Similarly, male respondents are rarely if ever asked about care-giving. Even data collection that is explicitly intended to understand gender-specific experience cannot break loose of social values: to measure violence against women, to take one example, those who are collecting and analyzing data have to confront questions about behavioral boundaries within intimate relationships. In most societies, the question of whether marital rape exists is far from settled, so how can information about that kind of violence be obtained if it’s not even conceived of as possible?
And data related to gender is certainly not the only type of information that implicitly perpetuates the status quo. Everything from the categories we use for race and ethnicity to the way we measure income and consumption to the geographic boundaries we use—“rural?” “urban?”—carries with it the assumptions and values of the society in which people create those survey questions and analyze those variables. That means those who want to challenge assumptions and shape social values have a huge responsibility and an equally large opportunity: We have to make sure that the work we do to improve and intensify the collection and use of data helps propel change, not stifle it.
Practically, what does this mean? First, it means embracing the interaction between the political and the technical, and challenging the view that data about people and our lives exists outside of social constructs. It means experimenting with new ways to conceptualize and capture information: using ethnographic methods to arrive at different ways of thinking about work and relationships, for instance, and inviting into the conversation people supposedly represented in the numbers. And it means fully taking advantage of data that are a byproduct of actions and transactions, rather than solely depending on data constructed and collected for the purpose of answering pre-set questions. Yes, here I’m talking about some parts of the big data universe, like social media feeds, data from remote sensing, and transactional information showing how people work, consume, and move. I’m sure those who think more deeply about these issues than I do can expand on (or challenge) this list, and I surely hope that happens as the Data Revolution unfolds.
At the Hewlett Foundation, we are pleased to be anchor partners within the Global Partnership for Sustainable Development Data, working with remarkably committed and future-oriented individuals and organizations. At the very least, we expect this Partnership to be a leading force in helping apply more and better data to overcome development challenges. But even beyond that, we expect this Partnership to marshal the energy, resources, brainpower, and passion to revolutionize the way we think about the power of data itself.
When the Sustainable Development Goal Declaration is agreed by heads of state at the United Nations next week, there will be cheers—and jeers. The jeers will come from people who see a long list of lofty goals and no practical path to achieving them; and from those who see little connection between the pontificating of, well, the pontiff and other eminences at global gatherings and people struggling with the harsh realities of daily life around the world.
But I, for one, will be cheering. Here’s why.
The seventeen Sustainable Development Goals are messy and complicated, and they clearly embody the many compromises needed to reach agreement. They’re also pretty darn inspirational. If we push past cynicism, doesn’t it seem like a pretty good idea to work collectively for the end of hunger and extreme poverty, for equality and empowerment, for kids learning in school and living until a ripe old age, for decent work, peace, and a real balance between human consumption and environmental protection? And isn’t it an even better idea for the global community to work simultaneously on multiple, intertwined problems than on a few narrowly focused ones, and to intentionally integrate economic, environmental, and social concerns? Sure, it’s messy. Life is messy.
Another reason I’ll be cheering the goals is that they aspire to be universal, to apply in some form as much to the future of my children as to the future of children in every country around the world. While the development community has often dominated the discussion, the goals haven’t lost the critical thread of “this applies to all of us.” That thread is something that can serve as the point of departure for contemporary models of social change, which are not about people from wealthy countries who have the know-how and the resources to help those from poorer ones, but rather about people facing common problems with shared purpose, together.
The goals have merit because the process that generated them was more inclusive by orders of magnitude than similar exercises in the past. The national and thematic consultations, opinion surveys, opportunities for feedback, and many other efforts amplified the voices of people who have rarely been heard before in the halls of the United Nations. And because of this, everyone is accountable. No one can say “these aren’t our goals.”
The goals also focus attention in a useful way on how to conceptualize and measure complex concepts: quality education and learning, sustainable cities, justice and strong institutions, among others. And there’s a broadly shared expectation that monitoring of progress across and within countries will be just part of a larger effort in which people with a broad set of skills and passions join forces to use the power of data—a mash-up of traditional and novel sources—to accelerate progress.
All of these reasons make the goals worth cheering. But I’ll also be cheering because once they’re agreed, we can put the goal-setting process behind us, and get to work.
Not long ago, I wrote to my contact at a grantee organization, pleading with him to send in long-overdue final reports that were required by the original terms of the grant. The work we’d funded had gone well—by all measures a success—but once the money was spent we had heard nothing, despite multiple reminders to submit the final narrative and financial reports. In my note, I simply let him know that the delayed reporting was causing extra work for my colleagues and me, and blemishing the organization’s otherwise perfect reputation. He wrote back within hours, as if he were surprised that anyone cared, promising (and then soon after delivering) precisely the reports we had been seeking.
In that instance, simply providing information about the importance of complying with the grant terms, and the consequences of not doing so, seemed to make a difference. So in the spirit of trying to make our relationships with grantees as good as they can be, let me share more broadly why you need to get your reports in on time.
First, you need to get your reports in on time because you promised you would. Each grant agreement letter spells out when interim and final reports are due. When a representative of a grantee organization signs and returns that grant agreement letter, we comply with our part of the deal—we cut a check—and we expect that the organization will take seriously all of its commitments, too.
Second, you need to get your reports in on time because failing to do so causes extra work for people you probably want to keep on your good side. Program officers, program associates, program directors and even the foundation president get periodic (and frequent) tallies of how many reports are overdue and from whom. For each person down the chain of command, if the number of overdue reports goes above a minimum threshold, the “gentle reminder” and “just wondering about your reports” emails start increasing in frequency. Do you really want to be the reason for your program officer’s bad day?
Third, you need to get your reports in on time because it is one of the few things that people working in non-profit organizations can do perfectly and with complete success. We’re all in the business of making big promises about what can be achieved to make the world a better place, working on shoestring budgets. And most of us routinely fall short of our goals. But writing a short report describing work we have done (and should be proud of) and presenting information about money spent is a lot easier than making governments work better, improving education, or protecting threatened ecosystems. And a special note to my friends from the academy: We do not grade, peer review, or publish the reports. They do not have to represent the most cutting-edge thought leadership—so they don’t require the intensity of effort (and accompanying procrastination) that your other work does. Just take an hour or two, write the report, send it in, and congratulate yourself on a job well done.
Is writing grant reports the most rewarding task for someone working diligently for social change or contributing to global knowledge? No. And neither is reviewing them the task my colleagues and I look forward to most. But both the writing and reviewing are parts of the process that allows us to keep the grant dollars flowing so you can keep changing the world.
If you’re in the business of running a nonprofit, there are bad grants and there are good grants. And then, once in a while, there are great grants.
A bad grant is one that is underfunded, relative to what needs to be accomplished; or one that commits an organization to activities that distract from its institutional mission; or one that locks in a relationship with an overly demanding funder. A good grant is one that provides adequate resources to carry out a project that is central to the institutional mission with a minimum of muss and fuss. And a great grant? Well, that’s general operating support—unrestricted funds that can be used in more or less any way, and reported on with materials the organization already has, such as its audited financials and annual report.
General operating support is precious to organizations whose work is dependent on project-by-project outside funding. It’s what gives them the ability to pursue activities they know are important, but no donor has yet seen as a priority, providing a measure of independence. Unrestricted money helps organizations retain staff during lean periods, and make investments that help to take institutional performance to the next level: a new website, increased capacity to implement a communications strategy, professional development for staff, remodeling an office, trips by the executive for fundraising, and a thousand other things that benefit the entire organization.
General operating support is particularly precious because it’s so hard to get. These days, many foundations are reluctant to offer unrestricted support and, despite some positive signals (including recently by the president of the Ford Foundation), I have my doubts about whether we’ll see widespread changes in the philanthropic sector anytime soon.
At the Hewlett Foundation, we’ve always been able to offer general operating support to a significant share of grantees, and we know from their responses that this is both deeply appreciated and rare among their funders. But it’s never enough.
If what you need the most is the hardest to get, you have to start getting creative. And organizations we work with are getting creative in lots of admirable ways—some that expand their sources of support, and others that reconceptualize what they’re trying to get funding for.
One way organizations try to expand support when they have a friendly institutional funder is to ask for a matching grant as the basis for a fundraising campaign. While creating some risk for the grantee, matching is a tried-and-true way to mobilize support from individuals, jumpstarting a base of support from people who might be persuaded to continue contributing even without an ongoing match to motivate them. In this way, even a small general support grant multiplies itself into a larger unrestricted pool.
When it comes to reconceptualizing the ways to use funding, clever organizations transform mundane institutional needs into initiatives that can attract funders who prefer discrete projects. With a little creativity, many institutional priorities can be “projectized.”
Imagine, for instance, that a nonprofit knows it needs to undertake strategic planning, upgrade staff skills, and revamp the website. Rather than dipping into reserves to pay for these necessities, a smart fundraiser could create a project called “Impact 2020” and pull together a narrative that explains how these activities fit together and, in combination, will prepare the organization for the future. With a realistic timetable and budget, that package has a chance of attracting funding from a supportive donor that likes specific, time-limited projects. Sure, it’s not as efficient or flexible as general support, but organizations may be able to get funding that otherwise wouldn’t be available, and perhaps even accelerate the sorts of institutional changes that tend to languish when there’s no external mandate to make progress.
Separate from conceptualizing institutional investments as projects, smart nonprofits make sure that projects serve institutional needs by creating budgets that reflect the true cost of the work, clearly and accurately. When the resources required to maintain or strengthen institutional capacities are lumped into something called “overhead,” funders balk. But most funders know that project costs need to include support for, say, computers, the financial management team’s time, and even some support for professional development for staff. Some (not all) funders will permit items like that to be listed as direct costs rather than aggregated into overhead. But that will only happen if organizations accurately estimate the full institutional costs as they prepare project budgets, and think about how to present the information clearly and in ways that demonstrate how each line item contributes to the project aims.
None of this magically solves the problem of too many bad grants and too little unrestricted support for the nonprofit sector. But for individual organizations applying some strategic thinking in their funding relationships, it can help get the most out of the great grants, and make the good ones even better.
August 14, 2015 — By Kristen Stelljes and Ruth Levine
A researcher conducts an interview with a woman in Bamako, Mali, as part of an Innovations for Poverty Action baseline health survey for an evaluation report on the role of user fees and information in health care decisions. Photo Credit: Nicolo Tomasell
We’ve been looking back at the Hewlett Foundation’s contributions to the field of impact evaluation in global development, and we’ve been looking ahead to a new approach to funding it. In looking back, we’ve tried to understand how some early investments yielded big things. In looking ahead, we’ve been exploring how this foundation can continue to be on the leading edge of a field that has grown so rapidly.
For us, the impact evaluation story starts in the early 2000s. Around that time, people working in international development were waking up to the potential for rigorous evaluation to shed new light on some persistent questions about what strategies and interventions could reduce disease, improve education, and change lives in other measurable ways.
In Mexico, for example, Santiago Levy and the late Jose Gomez de Leon, working alongside Paul Gertler and other U.S. academics, conducted an impact evaluation of a massive social program, PROGRESA. That evaluation showed the world that scientific evaluation of the impact of an important national policy intervention was possible, and that robust evaluation findings were hard for politicians to ignore.
The pioneering work of academics like Esther Duflo, Michael Kremer, and Ted Miguel, and development economists at the World Bank, demonstrated repeatedly that randomized controlled trials not only were feasible in difficult field settings like those in Kenya and India, but also yielded insights that no other methods could. At the same time, there was growing frustration with the limitations of standard evaluation practices within development agencies—practices that focused far more on how much was spent and what activities were implemented than on what good was done.
These factors motivated the Center for Global Development’s report of the Evaluation Gap Working Group, When Will We Ever Learn? Improving Lives through Impact Evaluation, which was co-authored by Ruth when she was at CGD. That report argued that organizations seeking to use development dollars effectively should invest in impact evaluation, and that those investments should be pooled to yield the greatest value.
Long before either of us was here, the Hewlett Foundation backed the emerging field of impact evaluation. For instance, in our Mexico portfolio the foundation built on the PROGRESA experience by investing significantly in organizations in Mexico promoting stronger evaluation of social programs. The foundation also was an early funder of the home base for many of the leading academics working in the field, MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL). Along with the Gates Foundation, the Hewlett Foundation supported the Evaluation Gap Working Group hosted by the Center for Global Development, and the then-director of the Global Development Program, Smita Singh, was an active member of that group. Again with our partners at the Gates Foundation, we were a founding and steady supporter of the International Initiative for Impact Evaluation (3ie), established to provide pooled funding for impact evaluations, to encourage use of evaluation findings, and to promote standards for evaluation quality and transparency.
These investments paid off. The 3ie Impact Evaluation Repository now holds 2,648 evaluations, and the number of impact evaluations conducted each year has skyrocketed. As much as $50 million is spent each year on impact evaluations, though this is still a small amount compared to total spending on aid-funded development projects, and minuscule compared to domestic budgets. Countries such as Mexico, Colombia, Benin, South Africa, and Uganda have created governmental units dedicated to evaluating public programs, whether by commissioning evaluations or conducting them themselves.
Having seen the fruits of earlier investments, we’re both gratified and unsatisfied. We know we cannot just keep doing more of the same if we’re going to play the role that foundations should: pushing the boundaries of what’s possible.
To understand the current needs and opportunities in the field of impact evaluation, Kristen has spent the last two years talking with grantees, experts in the field and evaluation users; reading the literature; and commissioning three papers on the future of impact evaluation. (These papers were presented by 3ie at a workshop in April, and will be published as working papers in October.)
Our conclusion is that three big challenges remain for the field, and we have an obligation to try to tackle them. The challenges we see are:
Lack of bodies of evidence. It doesn’t make sense to base decisions on single studies, but our two-year examination makes clear that there are still too few attempts to test whether programs that are successful in one setting will be equally successful elsewhere; and systematic reviews are rare. As a result, decision-makers rarely have the full body of evidence they require to make a sound decision about whether to adopt a particular approach.
Few individuals have the skills to conduct high-quality impact evaluations. The skills for rigorous evaluation are particularly scarce in many developing countries. As a result, those with the greatest technical skills may have some knowledge of the local context, but they are not rooted in that context, and they struggle to build and sustain relationships with decisionmakers. This limits both the relevance of the evaluations and the ability of researchers to put their findings into the service of better decisions at the program and policy levels.
The incentives of academic researchers rarely match the needs of decisionmakers. While academics conduct studies that yield publishable insights and use the most cutting-edge methods, their interests, timing and means of communication do not always correspond to what’s needed for real-world decisionmaking.
Given this assessment, our future funding for the field of impact evaluation will seek to achieve the following outcomes: First, that impact evaluation practices are responsive to the needs of policymakers and program implementers. Second, that decisions are made based on bodies of knowledge that are designed to inform decisionmaking. Third, that local researchers conduct high-quality, policy- and program-relevant impact evaluations.
As our grantmaking unfolds over the coming months and years, we may explore new ways for researchers and decisionmakers to collaborate. We may test new ways to conduct impact evaluations that are more responsive to the types of questions decision makers ask and the timelines they work under. We will likely support new ways to review, summarize and present bodies of evidence to provide more useful information for policymakers. And we may invest in new ways to build evaluation capacities that are responsive to the context in which the evaluations are done.
These new directions represent a re-commitment to the field of impact evaluation—and a re-commitment to making sure that the Hewlett Foundation supports those who are exploring the field’s frontiers. As we go, we will keep watching, keep learning and, yes, keep evaluating.
It’s surprisingly hard to be a by-the-book grantmaker because being an imperfect person gets in the way. When I depend too much on my own gut instincts, give in to obligations of friendship, guilt, or a sense of compassion, or feel sure that success is just a matter of working harder and harder, I’m unlikely to make the best decisions about strategies or grants. But those are, of course, very common sentiments among humans. Which is why it sometimes feels like the more human I am, the less good at grantmaking I can be.
I know that sounds harsh. Let me explain.
At the Hewlett Foundation, we design and implement philanthropic strategies that are intended to make the greatest positive difference for each dollar granted—a high bar. We value long-term relationships with high-performing grantees insofar as their work is aligned with our strategies. But we’re supposed to make clear-eyed, rational choices about changing or ending relationships if the evidence tells us a different approach is likely to produce better outcomes. We build strategies around logic; portfolios around grantee organizations’ capacities. The calculus of grantmaking, at least as practiced here, is mostly a left-brain exercise.
But, being people, we naturally have a hard time paying more attention to objective information, accumulated evidence, and coherent theory than to our hunches and hopes. It’s hard to say no to friends, and sometimes it’s equally hard to feel good about recommending a grant to an organization led by someone we don’t much care for—even if the organization itself has much to offer. We may even find ourselves a little bored, sometimes, with the long-term, steady partners and have more fun thinking about all the new projects we could support if those same resources were uncommitted. And maybe most challenging of all, as people who in our past lives have worked intensely and dug deep into details, it’s disconcerting to realize that as funders we often do more for grantees by getting out of the way. It’s tempting to be part of the action, not just watching from the sidelines as the real work gets done.
To help us stay upright in the tug-of-war between heart and head, we have some tools and techniques. These include a strategy process, designed by our Effective Philanthropy Group, that asks fundamental questions and requires us to articulate assumptions underlying our theories of change. It also establishes expectations about external evaluations that will give us information about shortcomings we might otherwise be blind to. For grant-by-grant decision-making, we have a practice of sharing proposals with colleagues for an independent view, and of having open discussions about whether a new idea genuinely adds value or is just novelty for its own sake. Feedback from our biannual grantee perception survey, administered by the Center for Effective Philanthropy, helps remind us how much the organizations we support appreciate the latitude we try to give them to carry out their work as they see fit. And, quite frankly, a heavy load of grantmaking provides its own form of discipline: we just don’t have time to get overly involved in the day-to-day work of grantees.
Our practices are designed to help us fight off favoritism, faddishness and (one hopes) foolish infatuation with our own ideas. Fortunately, though, they don’t squeeze out all the space for being a human. What I see in my colleagues, in fact, is that some of their very best grant strategies leave room for inspiration, creativity, and serendipity; their most fruitful relationships with grantees are built on trust; and they do their work each day with passion and humor. As it turns out, the best grantmakers are people, with all the imperfections and contradictions that implies. And I wouldn’t have it any other way.
You’d be hard pressed to find it in most grant proposals, but it’s there. It’s just that the first year of a new program or organization is often disguised as a typical period of work: a full complement of staff working at full tilt, delivering services, producing studies, or promoting policy change. Grantseekers usually project that they’ll spend quickly from Day 1, and they promise funders that we’ll see lots of productivity and impact. Eager for a new idea to hatch quickly and grow, we buy into the fantasy.
Then, somewhere around month nine we start hearing (occasionally sheepish) explanations for slow spending and a lack of results—often accompanied by a question about whether it’s possible to get a no-cost extension on the grant. Although we almost always agree, it’s with a twinge of disappointment and a sense that we should have known better. We’ve seen this movie before. Eventually, when the evaluation report is written years later, we’ll see it in black and white: “Due to significant delays in initiating activities, the entire timeline of the work program lagged.”
You know, we’d all be a lot happier—and look a lot smarter—if we just acknowledged reality and embraced the Zero Year. It’s not a goose egg; it’s an important period of early development.
During the first months of any new effort, leaders are dedicated to under-the-radar and relatively low-cost activities like recruiting, planning, setting up in-office systems and policies, establishing new partnerships, and going through the sometimes painful and always time-consuming process of getting activities off the ground. That’s a necessary precondition to successful work later on, but it’s not the work itself. And the rate of spending during the first year, and particularly during the first six months, is usually far less than half of any later period. And, quite naturally, there are few visible products, and no impact. That’s not failure; that’s just life.
While it’s tempting to jump over the start-up phase in any grant proposal, it would be far better to describe it in detail and with realism—to build a Zero Year into both budgets and timelines. Grantees and funders could then have a conversation about what it really takes to put together the right team, to set up a website, and to do all the other vital, unglamorous things needed to initiate “real work.” Together we could figure out what a sensible spending trajectory looks like, and when we might reasonably expect to see results. We could redefine success for the Zero Year as where an organization needs to be by its end in order to thrive in every subsequent year. And I wouldn’t be a bit surprised if, in the discussion of the Zero Year, we figure out that it makes more sense to make a longer grant, giving the grantee greater financial security, a more realistic timeline for seeing results, and all the benefits that go along with it.
Best of all, by calling out the Zero Year as a specific and necessary step along a multi-year journey, we’d be honest not just with each other but also with ourselves.
If I told you that a half-day training program creates life-changing benefits for women and their families, I’d understand if you were skeptical. Yet, implausible as that sounds, researchers at the University of California San Francisco Bixby Center have shown precisely how it can happen. And it’s a story that deserves to be shared.
Let’s start with the world as we know it. When a woman in the U.S. asks her doctor about contraception, she’ll usually get a prescription for birth control pills or a recommendation that her partner use condoms. This has been the practice for many decades, despite the development of far more effective and convenient forms of birth control—the IUD and the contraceptive implant.
Although long-acting reversible methods of contraception, or LARCs, have been proven to be highly effective and safe for women of all ages, whether they have had children or not, most women don’t hear about them from their doctors because of outmoded beliefs that these methods are appropriate only for older married moms. Providers shy away from routinely offering LARCs because they may lack confidence in their ability to insert the devices correctly, and because clinic routines may not accommodate slightly longer visits. Even unfamiliarity with the right billing codes can prevent a woman from getting the most effective form of birth control—a particularly crucial issue now that the Affordable Care Act covers all methods.
These barriers to good care have bad consequences: Only 9 percent of women in the U.S. use LARCs, even though these methods are far more popular in other industrialized countries. And because women who depend on pills and condoms are much more likely to get pregnant by accident than women who use long-acting methods, low use of the most effective birth control is partially responsible for the fact that half—yes, you read that right: half—of all pregnancies in this country are unintended.
Now: let’s create a better world. A few years ago, a team of researchers at the University of California San Francisco, led by Cynthia Harper, decided to do something about this problem. The UCSF team developed a half-day training program to update physician knowledge and to train all clinic staff about the relative benefits and risks of different contraceptives, with a focus on LARCs. The aim was to raise the knowledge level about the most effective birth control methods—and the hope was that this would lead not only to more women using these methods, but eventually to fewer women being surprised by a pregnancy they had been trying to avoid. The training was distinguished by its use of up-to-date medical evidence, and by a comprehensive clinic-wide approach: everyone from schedulers to billing clerks to the doctors themselves learned about how to integrate underused contraceptives into their practice.
Then, in partnership with Planned Parenthood, the UCSF team tested the training program in the most rigorous way possible—through a randomized trial in which 40 clinics around the country were assigned by lot to either get the training or not. The researchers then measured changes in use of LARCs by patients coming in for family planning visits in all the clinics, and followed up with the patients to find out whether they became pregnant over the following year. (Both the development of the training program and the research were supported by grants from the Hewlett Foundation’s Global Development and Population Program.)
The results, appearing in the Lancet this week, are impressive. They show for the first time how a clinic-based intervention can decrease unintended pregnancy. The researchers found, first, that in clinics where the training had been conducted, 71 percent of women were informed about LARCs, compared to 39 percent in the business-as-usual clinics. Second, in clinics where personnel had been trained, 28 percent of women coming in for a family planning visit chose an IUD or an implant, compared to 17 percent of comparable women in other clinics. Third, and most remarkably, the training halved the number of unintended pregnancies.
Bottom line: a short and relatively inexpensive training of health care providers dramatically reduces the emotional, physical, and societal costs of unintended pregnancy.
As they tossed their graduation caps in the air a few days ago, my daughter and about 500 classmates took a big step toward adulthood. Some are off to four-year colleges, a few are taking time to test the job market or explore the world on their own, and the rest—a good number—are headed to community college for an associate’s degree and future opportunities for work or further education. These bright and energetic young people are starting down a path toward learning and earning. Cheering for them at this turning point made me hopeful about the country’s future.
For many graduates, attending one of the country’s 1,000-plus community colleges is an attractive and affordable option—and, for some, a crucial stepping stone to a set of opportunities that their parents never had. The 12 million community college students in the U.S. disproportionately come from lower-income families that have historically been underrepresented in higher education: almost 40 percent are African American, Latino, or Asian American and Pacific Islander, and more than half are women. Community colleges are so important to social mobility in the U.S., in fact, that earlier this year the White House proposed the America’s College Promise initiative, which anticipates supporting two tuition-free years at a community college for eligible students.
But to succeed, these students have to finish their course of study and make it to the next graduation ceremony. That doesn’t happen by accident.
A key strategy to make sure community college students have the best chance at success is to help them protect themselves against unintended pregnancy. While the teen birth rate has dropped dramatically since the early 1990s, there hasn’t been nearly as much progress made in reducing unplanned pregnancies among twenty-somethings—even those young people who have big plans for their futures. And a pregnancy can be devastating to their education. Unplanned births account for almost one in ten drop-outs among female students at community colleges, and male students also have difficulty completing a course of study if they have to deal with the responsibilities of becoming a father.
That’s why the National Campaign to Prevent Teen and Unplanned Pregnancy has focused attention on improving information, support, and access to health services for community college students. For instance, the Campaign developed free online lessons for college students about how to prevent pregnancy, using information from the outstanding Bedsider.org birth control support network site. A recent evaluation of the online lessons showed positive changes in knowledge, attitudes, and behavioral intentions around preventing unplanned pregnancy.
But the opportunity to help community college students goes way beyond creating educational content. The National Campaign has also worked at the state level to encourage community colleges to help students prevent unplanned pregnancy. Just last year, Mississippi passed a law—the first of its kind in the country—directing community college and higher education leaders to develop a plan to address unplanned pregnancy. In March, the legislature appropriated funds to each of the state’s community colleges to take action, and the National Campaign is providing technical assistance as the schools prepare to start up activities later this year.
Mississippi’s actions inspired legislators in Arkansas, who enacted similar legislation in March 2015. The National Campaign is now working closely with higher education groups and others as they begin to tackle the challenge of unplanned pregnancy at Arkansas community colleges and public universities.
As young people strive for the opportunities that education makes possible, they need the information, support, and services to keep them in school, and to make sure they become parents only when they’re ready for that responsibility. Community colleges can be—and should be—a crucial ally in that effort.
There are a lot of things they don’t teach you in graduate school: how to prepare a budget, how to woo a donor, how to ease an unproductive colleague off the payroll without incurring his or her wrath, and a whole lot of other skills required to do a good job as the leader of a think tank or other research institution. There are, in fact, few opportunities to gain that know-how except through trial and error.
When people who have had distinguished careers as researchers move into executive roles – a path that’s common in think tanks – they’re often entering the foreign territory of management, organizational finance and operations. Deliberation and on-the-one-hand-on-the-other-hand analysis have to give way to making decisions and sticking with them. A love of ideas and discourse has to take a back seat to practical matters: Who gets which office? Should we take government contracts? How do we set and defend an indirect cost rate? How do we maintain an independent research agenda if most of our money is tied to funders’ specific interests? These may not be as fun to puzzle through as questions about the effects of a new tax regime on unemployment, but they’re essential to the health and sustainability of an organization.
Fortunately, think tank leaders have a new resource to help them learn the ropes, building on others’ hard-won experience. They can find valuable advice and tools in Improving Think Tank Management: Practical Guidance for Think Tanks, Research Advocacy Organizations and Their Funders, just launched by Results for Development. With our support, the author, Ray Struyk, dug into his decades of experience helping to set up think tanks, and created a go-to source of guidance on key issues related to staffing, funding and running a policy research organization. He also drew on information and case studies from the Global Development Network, the Think Tank Fund and the Think Tank Initiative, making the book relevant to organizations in every region. The freely available checklists, sample by-laws and policies, and other ancillary information that go with each chapter help translate the ideas into action. As a bonus, the book includes specific guidance for think tank funders about how to assess and support policy research groups. (In my copy, those pages are already dog-eared.)
Improving Think Tank Management, written for a niche market, isn’t likely to make it to the best seller list. But for the men and women who bear the weight of their organization’s success on their shoulders, I guarantee it will be a page-turner – and will make it just that much more likely that they’ll be able to build the great think tanks that the world needs.