Yesterday, when we closed the doors of the Hewlett Foundation’s Mexico City office for the last time, it was with a sense of admiration for the work done by the talented staff and grantees over the past dozen or so years – and a sense of promise about the work yet to come. Let me use this space to explain why we made this change and what it means for the community of grantees we support.
The Hewlett Foundation office in Mexico City—our only office aside from our home base of Menlo Park, California—was set up in 2001 to support the U.S. Latin American Relations Program (USLAR Program). Over four years, the USLAR Program invested about $21 million in work to “strengthen and foster cooperation among institutions in the Americas in order to address common hemispheric challenges.” This included grant making in environment (mostly fresh water management), democratic governance, migration studies, economic research, and studies of justice reform and rule of law in Brazil, Chile, Argentina, and Mexico.
The USLAR Program achieved a great deal. To give just a few examples: Together with the Ford and MacArthur Foundations we funded a fellowship program that supported hundreds of Mexicans in graduate programs in the U.S. Technical information produced by Hewlett Foundation grantees on rule of law was adopted for the major reform to Mexico’s justice system in 2008. And the Emmy Award-winning documentary Presumed Guilty crystallized several years of research and data collection by our grantees.
Over time, the USLAR Program evolved into lines of work under the Environment Program, including significant efforts around public transportation and air quality, and the Global Development (now Global Development and Population, or GD&P) Program.
The Environment Program has had some great successes. If you’re one of the many people who use Mexico City’s Ecobici bike share, you’ve experienced the end results of their work first-hand. In addition, grantee organizations helped to develop the Metro Bus system in Mexico City. The first corridor opened in 2005, and the system now has five more corridors with over 100 km of dedicated, efficient bus lanes, moving 700,000 passengers a day. Grantees also helped to create the conditions for approval of the first fuel economy standard for new passenger cars in Latin America. The standard will come fully into force in 2016 and will reduce not only emissions but also costs for Mexican consumers.
In Global Development and Population, our major emphasis has been to support greater access to information about public laws, budgets, spending, and service delivery quality, and to fund efforts by citizen groups to use that information to hold governments at both federal and state level accountable for living up to their many commitments.
The successes of our grantees are too many to mention, but include contributions to:
Laying the groundwork for passage of Mexico’s federal Freedom of Information Act.
Achievements in public sector transparency, including availability of better budget information, improved rules governing the use of public funds (such as performance-based budgeting systems), and a pioneering framework for impact evaluation of social programs.
Projects like Mejora Tu Escuela, which use publicly available information about schools to help citizens make decisions and take action.
Bringing together diverse organizations into a community of practice, in which new and exciting collaborations have resulted from on-going information-sharing and trust-building.
It was, in fact, these sorts of successes that inspired the Hewlett Foundation to engage at a much larger scale in the field of transparency and accountability, now with a geographic focus on East and West Africa.
So, why leave? Well, first of all, we are not leaving altogether. We intend to continue to make grants in Mexico around challenges in transparency, accountability and civic participation. Mexico has consistently demonstrated that it is an incubator for innovative thinking, with the potential not only to make significant advances for the Mexican population but also to influence the field far beyond its borders. Just this year, for example, with Mexico serving as co-chair of the Open Government Partnership, there are many opportunities to demonstrate new international leadership. We see that, we appreciate that, and we intend to continue to provide support to a thriving community of organizations working on these tough issues. In addition, the Environment Program intends to continue making grants in Latin America on climate change mitigation via partner organizations and re-granters.
But we are ending our physical presence there because, like everyone else, we have to make tradeoffs in how we deploy staff and other resources. We don’t have the option of expanding the number of staff, and we see great value in having more Program Officer time and attention dedicated to grants in other regions. We believe the strong relationships we have in Mexico—and the relative proximity that permits frequent travel—will keep us in touch with what’s going on, while freeing up some resources to pursue our ambitious work on transparency, accountability, and participation globally and in Africa. Moreover, by having staff work simultaneously on a portfolio in Mexico and in other parts of the world, we can be more efficient in transmitting lessons and ideas back and forth.
I’ve been tremendously impressed with the understanding and cooperative spirit of our grantee community as we’ve consulted over the past year about this change. I have no doubt that the excellent work and the open, dynamic collaboration we’ve enjoyed until now will continue—and deepen—as we move to this new way of working together.
In the small and overheated world of impact evaluation we have a serious “baby and bathwater” problem. It’s not that we may be tossing out the baby with the bathwater; it’s that we risk throwing out the baby and drowning in the bathwater.
The baby? It’s the value of measuring in a valid and reliable way whether something that we intentionally do to change people’s lives—improve their health, advance their educational opportunities, increase their income—in fact does that thing. Across the ideological and methodological divides, I’m guessing there’s quite a bit of agreement about value in knowing whether X leads to Y—whether, for example, introducing single-sex classrooms leads to girls being more likely to complete primary school. We may resist reducing complex social and economic systems to linear causation, but “if / then” thinking is a core feature of most public policy design. Assessing whether those causal relationships bear out in the real world has the potential to make policy and program decisions better than they otherwise would be.
The bathwater? It’s the circular and often misinformed debates about methodological superiority. Polarizing positions have been taken between those who favor one way to measure the differences with and without a particular intervention—namely, comparing randomly selected “treatment” and “control” groups—and those who are deeply skeptical of applying the scientific method to context-specific, nuanced, and dynamic interactions between people and their environment. Observers of these debates and participants themselves must by now be quite weary of the conversation. We have been listening for a long while to characterizations of the “randomistas” and arguments about how random assignment evaluations stack up against many alternatives—from quasi-experimental modeling methods to before-after observational studies—in cost, difficulty, and rigor. Some of the arguments are ancient, while others are newer—or at least being joined with new passion. If you want a refresher on the state of play, you can find some useful resources in two posts on the Center for Global Development’s blog (including the comments) and this recent paper by Howard White.
In the persistent back-and-forth, in the taking of sides, I fear we are at risk of losing the focus on impact—which is, after all, the main value proposition of impact evaluation. The most important contribution impact evaluation can make is to challenge the practice of measuring only what we spend and what we do, and then confidently assuming good things will result in equal measure. If all impact evaluation does is direct our attention to real-world changes in place of self-promotional storytelling, it will have made a contribution. Undertaking an impact evaluation—regardless of methodology—makes us state, for the record, that we think a causal pathway exists between a particular X and Y. And impact evaluation makes us ask the toughest question: “Will our actions truly do more good than harm?” Far from being an expression of dogmatism, impact evaluations start by saying that we aren’t so sure about the effects of our actions, and that we’re open to surprise and to learning. Whatever methods we may like or loathe, we have to protect that baby.
Are women part of the information equation? This is a question we’re starting to ask across the investments we make in transparency and accountability. So far, the answer is mostly, “We don’t know.” That’s not good enough.
Here’s a quick run-down of the scope of our involvement in the field of transparency and accountability: We support many groups that strive to make information available so that citizens can know what their governments take in and what they spend it on. For instance, we provide funding for organizations like the Revenue Watch Institute that press for the release of information about how much money governments obtain for access to oil, gas, minerals and other natural resources. Some of this is through advocacy around global standards for information disclosure, and some is through support to on-the-ground work at the national level in several countries. We help citizen groups like those affiliated with the International Budget Partnership press for public access to budget information, and we contribute to generating and using information about the quality of government-run health, education, and other services. We help groups working on behalf of those without a strong political voice to analyze that information and provide feedback to public officials about citizens’ needs and their assessments of the quality of services. All told, it’s something on the order of $20-plus million a year in grants.
It’s a big agenda—but maybe it’s not quite big enough.
What if women were being systematically excluded from this agenda? This could happen, for example, if women had less access to information than men, particularly through the types of technology-enabled platforms that are increasingly being tested. Given that in many low-income countries women are about 20 percent less likely than men to own a mobile phone, it would actually be a surprise if there were not a large gender differential in information access. (The Millennium Campaign has a useful blog post on this point.)
Exclusion could also happen if the questions women have about what their government is up to were different than those of men – or were different than the questions we might think women would be interested in. Tools of transparency and accountability have sometimes been used to examine how much governments and donors spend on maternal health and family planning – we’ve sponsored some of that work ourselves – but surely questions of relevance to women go far beyond the reproductive agenda.
And exclusion could happen if women were less able than men to participate in the citizen groups that exploit the benefits of open data. Participation, after all, is rarely gender-neutral.
Yes, there are some bright spots, where those in the transparency and accountability world have paid attention to gender as an important social dimension. For instance, the International Budget Partnership has a long tradition of working on gender budgeting. And recently Publish What You Pay has set out to integrate the dimension of gender into its work, summarized here.
But the unexplored territory is far greater than what’s already underway. Overall, we’ve seen relatively little consideration of gender differences—or of mitigating the risk of gender bias—within the community of organizations focusing on transparency and accountability. In parallel, we’ve seen few organizations that are dedicated to advancing gender equality jumping into the arena of social accountability, despite the opportunities to, for example, take a gender lens to all those interesting Open Government Partnership action plans.
So we will continue to ask these irksome questions. More than asking questions, we’ll start exploring what we might do to understand—and to eliminate—gender-based differences in the lively and creative field of transparency and accountability.
We’ve been doing a little spring cleaning around here, hauling some old descriptions of our work to the curb, hanging some new pictures and shining up descriptions of what we do and how we do it. Meaning: we’ve fixed up the web pages for the Global Development and Population Program. Take a look when you get a few minutes, and remember to use the left-hand navigation to click through to expanded descriptions of each component. Over time we’ll add some strategy and evaluation documents to make it a richer resource for the fields in which we work.
We’re hopeful that the website is self-explanatory and we don’t need to fill up a lot of blog space repeating ourselves. But one part merits a few additional words.
A few people have asked why we (okay, I) named one of our program sub-components “Evidence-informed Policymaking.” I know, it’s awkward. More appealing, more natural would be the common phrase “Evidence-based policymaking.” But, like a few others, such as the UK Department for International Development, we’re sticking with the clumsy name, and I’ll tell you why.
“Evidence-based policymaking” is a phrase borrowed from “evidence-based medicine,” an approach to making decisions about patient care that takes advantage of the best available scientific evidence, incorporating an assessment of the quality of the underlying study designs from which that evidence was gleaned. Although increasingly accepted within the medical field, it’s still a challenge to trump physician judgment and expert views—“eminence-based medicine”—with the body of scientific findings about what treatments are effective.
The jump from “evidence-based medicine” to “evidence-based (social) policymaking” is not a straightforward one. It’s the jump into politics, and the jump from hard science to difficult science.
We’d probably all happily go to a doctor who depends only on the best biomedical science, and who ignores political interests and values—at least values beyond respect for basic human rights. But we’d be unhappy with a policymaker who mechanistically focuses only on what can be studied in grand experiments, or who sets priorities and pursues programs through some sort of numbers-driven optimization calculus. Not only would we be unhappy, but that person would likely be thrown out of office in short order. Instead, we’d want a policymaker who shares our values and goals, understands the political chess game, and then—perhaps only then—turns to the always imperfect findings from social science research to help figure out the best choices among politically feasible options. At that point, we’d want the policymaker and those influencing her or him to grasp the best available knowledge, and to know the difference between a shoddy study and a good one. In short, we’d want the policies to be informed by good evidence, not based on it.
In addition to being a shade closer to what we expect of the real world, the advantage of having this slightly out-of-tune moniker is that it catches people’s attention. They expect you to say “evidence-based” and when you say “evidence-informed,” it starts a conversation—often quite an interesting one. Try it and see.
We like facts. Anyone who follows our work in the Global Development and Population Program knows that many of our grant portfolios are heavily weighted toward collecting, analyzing, and using information. Our largest grants, just as examples, are to support policy research organizations in South Asia, Latin America, and Africa, and to fund the collection and sharing of information about the quality and impact of education and health services.
We like facts because we think they can help people who are in decision-making positions—government officials, politicians, donors—understand the nature and magnitude of social and economic problems that are beyond what they can observe first-hand. And we like evidence about how governments are responding to those problems because we think citizens have a right to be informed so that they can take corrective action, while those in government use the evidence to improve public programs and services. We also like facts because we like to be surprised, to have our assumptions and beliefs challenged. And the “what the world really looks like” information is often quite surprising indeed.
Liking facts and supporting organizations that share this focus does not mean we (or those organizations) are naïve. We know that facts are constructed within a social context; we know, for example, that it takes a concerted effort to overcome gender and other types of bias that are embedded in how data are collected. We also know the challenges of connecting data and evidence to policy debates, whether in politically charged environments like the United States or in countries where “science” is abused as a tool to promote wrongheaded policies. (Look no further than Ugandan President Museveni’s remarks upon signing the terrible anti-gay bill into law.) Like many others in the “research to policy” arena, we understand that chances for policies to be affected by science are few and far between. Those with cold, hard facts have to invite themselves to the party rather than waiting for politicians to seek their wisdom.
One way to find those opportunities to affect policy is to deconstruct the levels of “policy” and find the levels in any particular context where even a tiny little opening exists. That opening may already exist within the public sector, or it may be created through citizen awareness and action.
At the top level, we can think about “big P” policies like the creation of a new cash transfer program, the removal of school fees, the commitment to disclose information about public budgets and contracts, and other actions that might appear in a newspaper headline. Others might argue, but I think the main role facts play at this level is as part of the advocacy artillery, selectively used to back up a pre-established pro or con position.
Below that level, however, is a cascade of implementation decisions, taken variously by Ministers, technical personnel in ministries, operational managers, and front-line workers. Regardless of whether governments function in a centralized, decentralized, or chaotic way, choices made at each of these levels—“small p” policies and practices—can be better when facts are marshalled to support decision-making. They can also be better when citizens become more knowledgeable about what decisions have been taken and what their impact is. For a cash transfer program, for example, the nitty-gritty design and implementation choices—for whom, how often, with what conditions (if any), through what type of banking system, and so on—are amenable to the knowledge accrued from similar programs elsewhere, as well as from tailored information-gathering, studies, and periodic evaluation. Information about how the targeting was supposed to occur, how efficiently the public sector is delivering on its promises, and who is responsible for taking action, when made available to citizen and civil society organizations, can play an important role in fostering greater accountability.
While some combination of social values and raw political ambition may drive virtually all “big P” policies, the levels underneath have a lot to do with what happens after the headlines fade. And those levels are often far more open to being influenced, directly and indirectly, by empirical information.
If we were only reading the headlines, it would be easy to become cynical about the notion that research is ever used in making policy. But when we see how many of the organizations we support are successfully finding a low profile but high impact route, we are happy to be able to be part of the story. And that’s a fact.
Every day I’m tempted to apply lessons from global health successes to other domains of development, like education and governance. It’s a temptation I’ve got to fight.
You can see the appeal. The world has seen many great successes in public health in poor countries, from eradicating smallpox to reducing child mortality to extending the lives of people with HIV and AIDS – and those successes have been due in large measure to a combination of good science and generous aid. The global health sector, for all its flaws, has demonstrated an ability to set shared goals, build and sustain organizations like GAVI and the Global Fund to Fight AIDS, Tuberculosis and Malaria, use reasonably good data to make decisions, and transfer knowledge about what works in one country to inform practice in others. Most other pieces of the development puzzle have had considerably less success than global health as they’ve tried to raise money, build institutions, and create a body of evidence at the global level to serve local decision makers.
Why can’t global health successes go viral? I think it’s because global health is simply more, well, global than most other domains of development. It’s more global because infectious diseases don’t respect borders, and when one country is healthier so are its neighbors. It’s more global because health products and health workers, in different ways, trade on international markets, and when vaccines, drugs, and interventions are tested in one country chances are pretty good they will work in others.
The “globalness” of global health has contributed in fundamental ways to the successes in the field. It’s what’s made it possible, for example, for medical doctors and nurses from the U.S., Canada, Western Europe, and Cuba to help so much and so quickly in both humanitarian crises and in the ongoing development of health systems that provide Western-style care. It’s what’s permitted huge volumes of drugs and vaccines to be purchased at relatively affordable prices, so they can then be made available to every country.
In contrast, look at education: Resources can be pooled and coordinated at the international level, as they are by the Global Partnership for Education, and it’s possible to create standards for collection of data. But the “globalness” story in education is quite thin. Teachers and schoolbooks from one country are rarely useful in others. And the jury is still out on whether teaching-learning tools developed in one setting will work as well in others.
Or look at governance: International norms around public financial management and transparency are useful to raise the bar, but governance and public sector management problems have to be solved in creative, often unpredictable, ways within the historical and cultural contexts in which they occur. They may be able to draw ideas from other settings, but do not benefit from a “corruption vaccine” with the power to stamp out graft around the world.
Pick a sector and try to figure out the “globalness” story. I’ll bet there are very few, if any, that have as much that’s global as global health.
What global health does offer to other development challenges, I think, is the inspiration that comes with making a real, measurable difference in people’s lives. We can and should seek that sort of positive impact regardless of the sector in which we’re working. But increasingly I’m seeing that to make that difference means adopting different models than what’s worked in global health.
I’m the sort of person who looks on the bright side, who sees the glass half full. But I cannot yet see the bright side of how governments in many African countries provide services to their citizens.
Here are some cold, hard facts: In Uganda, four out of every five grade school teachers are unable to pass a test of basic reading and math. In Nigeria, two out of every three doctors cannot diagnose simple ailments. If you go into a government health facility in Kenya, you’ve got only a 50 percent chance of finding essential medicines that haven’t passed their expiration date. With each school day, each clinic visit, a fundamental contract between governments and their citizens is broken.
It is not just for lack of money. These depressing figures stand in stark contrast to economic gains and greater spending on health and education. Nigeria’s GDP has grown annually in real terms by about seven percent in recent years, while Uganda’s economy has grown by about six percent and Kenya’s by four percent. Government revenues in many African countries are on the upswing, and so are education and health budgets—and yet the quality of social service delivery remains abysmal.
There is, however, a bit of good news in this otherwise grim story: thanks to some ambitious and creative efforts, both governments and citizens now know more about the quality of health and education than they’ve ever known before, and they’ll be able to track progress—or lack of progress—in the years to come. This knowledge comes from efforts like the Service Delivery Indicators (SDI) project, which collects and shares facility-level information about what teachers and clinicians know, what they do, and what inputs they have to work with. By focusing on provider knowledge and behaviors, rather than solely on inputs, SDI sheds light on service delivery breakdowns that result from dysfunctional governance at multiple levels and the lack of incentives for good performance.
Using a common methodology across countries to permit comparisons, the results of the SDI survey are a wake-up call, drawing both government and public attention to the quality crisis. When the Uganda results were released last November, for example, the government committed to establishing a task force to spearhead new public service reforms. In Kenya, a report of survey findings was launched in the midst of a teachers’ strike, sparking a flurry in the media about why Kenyan schools are failing. Basic descriptive information sends a powerful message.
Beyond what it measures, SDI is special because of its commitment to get the data—both in summary form and in full datasets for researchers’ use—into the public domain. Some 11,000 people have looked at the microdata, and should find it to be useful as a complement to other datasets that have information on health and education expenditures and outcomes, as well as demographic, policy and other variables. Citizen groups in Kenya are also using SDI data, along with other information, to raise awareness about the quality of services.
Regardless of the specific findings from the data analysis, just opening up the information contributes to better policy. When government officials know that citizens are aware of how bad things are across the country, there’s a better chance they’ll feel like they have to do something about it.
No, SDI alone can’t make clinics or schools work better. But it sheds light on the sheer scale of the problem, and serves as a spur toward the reform that is so obviously needed.
Much more on the quality of health and education services is being discussed today and tomorrow at the “Making Services Work for Poor People” conference organized by the World Bank and the Overseas Development Institute. Several sessions will be live-streamed here.
There we were: An American team, a British team, a Canadian team and a Norwegian team, far from home and eager to make the most of years of work. Were we skiing, ice dancing or hurtling down the luge track? No, we were not. (Surprise!) We were just a bunch of individuals representing organizations that have pooled funds to support think tanks around the world, and we were meeting to wrap up the first phase of the project and gear up for the second. While we didn’t come home with medals, I, at least, came home with a renewed appreciation for the value of collaboration among funders.
A little background: The Think Tank Initiative is a ten-year program designed to support and strengthen about 50 think tanks in 22 countries in Latin America, South Asia, East Africa and West Africa. It grew out of a $50 million commitment made in 2006 by the Hewlett Foundation board, which believes strongly in the value of evidence-informed policy making. The foundation board recognized the tremendous contributions that can – under the right conditions – be made by reputable, independent organizations that generate solid analyses of policy options and are able to communicate their findings at the right time and place. The Hewlett board also recognized that all too often the only funding available to think tanks in low- and middle-income countries is for projects that donors define. Many of these organizations have few opportunities to pursue independent research that responds to national priorities. Thus was born the idea that we might be able to enhance the quality of policy making over the long term by selecting promising think tanks to receive flexible funding – “core support” – combined with opportunities for the think tank directors to gain and apply new knowledge about management, governance, research quality assurance, communications and fundraising.
Others shared that vision. First, the International Development Research Centre in Canada joined as a co-funder. With regional offices and a track record of research capacity building, IDRC also took on the truly challenging job of implementing the global program. The Bill & Melinda Gates Foundation, the UK Department for International Development, the Government of the Netherlands and recently the Government of Norway all joined at significant funding levels. Eventually, the Think Tank Initiative mobilized about $100 million for the first five-year phase and, after an external evaluation, close to that level for the second. It’s a tiny fraction of the funding for, say, the $50 billion Olympics, but it’s a big boost in unrestricted dollars for think tanks.
The collaboration has been fascinating to watch and to participate in – a sort of petri dish of collaboration among funders. It has not always been easy. Three different types of organizations – bilateral agencies, foundations and a Crown corporation (please don’t ask me to define exactly what that is) – have had to line up their budgeting and reporting systems to avoid administrative madness. And we have each had to figure out how a big bet on think tanks fits into our distinct institutional strategies, which themselves have been moving targets. More importantly, each of the funders has had to simultaneously wear an institutional hat and make decisions that serve the initiative as a whole rather than the piece we most care about. Meaning: letting partnership trump ownership.
Working together is always more complicated than going it alone. Sometimes dealing with those complications isn’t worth it. Compromises made for the sake of partnership can compromise a project’s effectiveness. But when we get collaboration right, working with partners who deserve points for both skill and style, we can get bigger wins than any of us could expect working alone. In this case, it means that think tanks across the developing world will have a better shot at making a real difference in their countries’ future.
Ed, as Callahan’s article points out, is not the richest guy from the Valley, or the best known. But he surely has placed some of the smartest philanthropic bets anyone could imagine, with a focus on achieving big positive change through policy research plus advocacy. He’s built at least four excellent organizations from the ground up, and knows that the best work comes when people are pushed to achieve impact but given the flexibility to do it their way, responding to opportunities in real time.
My own understanding of what philanthropic dollars can do comes directly from working at the Center for Global Development, an organization Ed initially bankrolled and where he served as Chair of the Board of Directors (a position he continues to hold today). He invested in CGD not to keep a bunch of policy wonks employed, but to change the world, first by making the case for debt relief and over time through purposeful, focused, high-impact work on aid, trade, migration, climate and the many other ways in which the U.S. and other rich countries shape the prospects of poor countries.
In one of our earliest conversations, now a dozen years ago, Ed terrified me by asking how my analyses of global health policies were going to prevent babies from dying in the next year. Though I’d worked at the World Bank and the Inter-American Development Bank, it was the first time anyone had asked me flat out how I was solving urgent real-world problems. Let me tell you, that question deserved a really good answer – and seeking that answer fundamentally changed me from a “write a good report” person to a “try to make a big difference” person. I’m guessing the same is true for many of my former colleagues.
Remarkably, Ed’s high expectations about impact, which CGD has lived up to under the leadership of Nancy Birdsall, were paired with a healthy separation from the day-to-day decisions. Ed is one of those rare individuals who can place a bet and wait to see whether it pays off. Far more than many others seem to be able to do, he lets go – and as he does, the organizations he supports go further and faster than if he were holding on tight.
People aren’t just data. But if people aren’t in the data, too often they escape the notice of those who make decisions that affect their lives.
I’m thinking, for example, about a woman I met last year on a visit to Korogocho, one of Nairobi’s sprawling slums. More than any earlier experience, my conversation with her, in the 8-by-10 corrugated shelter she shares with seven children and a sometime husband, helped me understand the relationship between being counted and being seen.
In an isolated and poor community, virtually unserved by any level of government and sustained by catch-as-catch-can jobs, this woman would be all but invisible to many who make decisions that affect her life. She would be unseen when far off government planners and international donors figure out where to locate health services and schools. She would be unrecognized as a citizen who needs protection from violence and human rights abuses. And she would be unknown as a contributor to the Kenyan economy.
She would be, that is, if not for the fact that the African Population and Health Research Center operates a demographic surveillance site in Korogocho. The site is one of several large communities where data collectors periodically visit selected households – by chance, including hers – and researchers use the information they collect to make visible the basic facts and experiences of people living in Nairobi’s urban slums. Information about this woman, aggregated and analyzed with information about many others, has helped the policy community understand the needs of the urban poor, and has helped to track progress (or lack of it) over more than a decade.
It’s this connection between being counted and being visible that motivates our work with the United Nations Foundation, the US Department of State and many leading experts in a project called Data 2X. Data 2X, launched by Secretary Hillary Clinton in 2012, starts from a belief that essential features of women’s lives are undercounted or entirely ignored. Consequently, policies – even those that may be intended to improve equity and living conditions – miss the mark.
In phase one of the project, just now coming to a conclusion, the project team led by Mayra Buvinic looked at many aspects of life – freedom from violence, security, livelihoods, health and education – and created an inventory of the main sources of cross-national data through which we understand the conditions women face and the contributions they make. Along the way, they obtained terrific input from a large number of leading experts on gender data in academia, UN agencies, governments and civil society groups focused on gender-sensitive research and policy.
Sadly, it is an inventory that shows many gaps. Routinely collected, high-quality data has focused principally on women as mothers. We know more from the Demographic and Health Surveys about women’s childbearing and use of contraception, for instance, than we know from any source about how women make a living. In contrast, we know more about the work patterns of men than we do about their health or whether they want to father children. This is not just a reflection of how societies think about gender roles; the availability (and scarcity) of information itself reinforces gender roles and handicaps policymaking.
These data gaps are big. Despite some important surveys, for instance, we have very little information about the prevalence of domestic and other types of violence against women, or about its health and other costs. Without such information, advocates struggle to make the case for better enforcement of the many existing laws, or to monitor progress when more intense efforts are made to protect those who are vulnerable.
The situation is even more acute in measuring the level and type of economic activity. Women (and particularly poor women) do work that is rarely captured in official statistics, whether it is tending goats, selling fruit by the side of the road, making and selling food for construction crews or doing cleaning and childcare in the homes of better-off families. In most African countries, more than half of all economic activity occurs in the informal sector, and a disproportionate amount of that work is done by women. But national income estimates, upon which we base so much of our understanding of the world, don’t capture it. Collectively, women make an enormous economic contribution, but they are systematically excluded from being counted as workers in the information that governments, and the international organizations they work with, use to shape economic strategies.
Documenting the data gaps is relatively straightforward. In the next phase of the project, we’ll take on the more challenging task of helping to accelerate progress toward filling those gaps. Fortunately, we are building on many related efforts – we are certainly not the first to recognize the problem or to work hard to help solve it. From UN Women to the UN Statistical Commission to the World Bank and USAID, many organizations are motivated to find better ways to capture data about women and have initiatives to do so. We will be looking for opportunities to provide technical support for those efforts when that is a missing piece, and to bring attention to the importance of the work when what’s missing are high-level champions. And we are starting to think about whether there is a role for non-traditional data to complement standard surveys, censuses and administrative sources, like school records. (More about that on future Fridays.)
Everyone associated with Data 2X knows that better, more complete data collection will not automatically lead to policies that treat women more fairly. But we also all believe that a full and accurate picture of women’s lives around the world will boost the chances that they will be seen from far away, in the ministries, parliaments and donor agencies where many decisions are made that shape the prospects for this generation and the next.