Every day I’m tempted to apply lessons from global health successes to other domains of development, like education and governance. It’s a temptation I’ve got to fight.
You can see the appeal. The world has seen many great successes in public health in poor countries, from eradicating smallpox to reducing child mortality to extending the lives of people with HIV and AIDS – and those successes have been due in large measure to a combination of good science and generous aid. The global health sector, for all its flaws, has demonstrated an ability to set shared goals, build and sustain organizations like GAVI and the Global Fund to Fight AIDS, Tuberculosis and Malaria, use reasonably good data to make decisions, and transfer knowledge about what works in one country to inform practice in others. Most other pieces of the development puzzle have had considerably less success than global health as they’ve tried to raise money, build institutions, and create a body of evidence at the global level to serve local decision makers.
Why can’t global health successes go viral? I think it’s because global health is simply more, well, global than most other domains of development. It’s more global because infectious diseases don’t respect borders, and when one country is healthier so are its neighbors. It’s more global because health products and health workers, in different ways, trade on international markets, and when vaccines, drugs, and interventions are tested in one country chances are pretty good they will work in others.
The “globalness” of global health has contributed in fundamental ways to the successes in the field. It’s what’s made it possible, for example, for medical doctors and nurses from the U.S., Canada, Western Europe, and Cuba to help so much and so quickly in both humanitarian crises and in the ongoing development of health systems that provide Western-style care. It’s what’s permitted huge volumes of drugs and vaccines to be purchased at relatively affordable prices, so they can then be made available to every country.
In contrast, look at education: Resources can be pooled and coordinated at the international level, as they are by the Global Partnership for Education, and it’s possible to create standards for collection of data. But the “globalness” story in education is quite thin. Teachers and schoolbooks from one country are rarely useful in others. And the jury is still out on whether teaching-learning tools developed in one setting will work as well in others.
Or look at governance: International norms around public financial management and transparency are useful to raise the bar, but governance and public sector management problems have to be solved in creative, often unpredictable, ways within the historical and cultural contexts in which they occur. Reformers may be able to draw ideas from other settings, but they do not benefit from a “corruption vaccine” with the power to stamp out graft around the world.
Pick a sector and try to figure out the “globalness” story. I’ll bet there are very few, if any, that have as much that’s global as global health.
What global health does offer to other development challenges, I think, is the inspiration that comes with making a real, measurable difference in people’s lives. We can and should seek that sort of positive impact regardless of the sector in which we’re working. But increasingly I’m seeing that to make that difference means adopting different models than what’s worked in global health.
I’m the sort of person who looks on the bright side, who sees the glass half full. But I cannot yet see the bright side of how governments in many African countries provide services to their citizens.
Here are some cold, hard facts: In Uganda, four out of every five grade school teachers are unable to pass a test of basic reading and math. In Nigeria, two out of every three doctors cannot diagnose simple ailments. If you go into a government health facility in Kenya, you’ve got only a 50 percent chance of finding essential medicines that haven’t passed their expiration date. With each school day, each clinic visit, a fundamental contract between governments and their citizens is broken.
It is not just for lack of money. These depressing figures stand in stark contrast to economic gains and greater spending on health and education. Nigeria’s GDP has grown annually in real terms by about seven percent in recent years, while Uganda’s economy has grown by about six percent and Kenya’s by four percent. Government revenues in many African countries are on the upswing, and so are education and health budgets—and yet the quality of social service delivery remains abysmal.
There is, however, a bit of good news in this otherwise grim story: thanks to some ambitious and creative efforts, both governments and citizens now know more about the quality of health and education than they’ve ever known before, and they’ll be able to track progress—or lack of progress—in the years to come. This knowledge comes from efforts like the Service Delivery Indicators (SDI) project, which collects and shares facility-level information about what teachers and clinicians know, what they do, and what inputs they have to work with. By focusing on provider knowledge and behaviors, rather than solely on inputs, SDI sheds light on service delivery breakdowns that result from dysfunctional governance at multiple levels and the lack of incentives for good performance.
Because the SDI survey uses a common methodology across countries to permit comparisons, its results are a wake-up call, drawing both government and public attention to the quality crisis. When the Uganda results were released last November, for example, the government committed to establishing a task force to spearhead new public service reforms. In Kenya, a report of survey findings was launched in the midst of a teachers’ strike, sparking a flurry in the media about why Kenyan schools are failing. Basic descriptive information sends a powerful message.
Beyond what it measures, SDI is special because of its commitment to get the data—both in summary form and in full datasets for researchers’ use—into the public domain. Some 11,000 people have looked at the microdata, and should find it to be useful as a complement to other datasets that have information on health and education expenditures and outcomes, as well as demographic, policy and other variables. Citizen groups in Kenya are also using SDI data, along with other information, to raise awareness about the quality of services.
Regardless of the specific findings from the data analysis, just opening up the information contributes to better policy. When government officials know that citizens are aware of how bad things are across the country, there’s a better chance they’ll feel like they have to do something about it.
No, SDI alone can’t make clinics or schools work better. But it sheds light on the sheer scale of the problem, and serves as a spur toward the reform that is so obviously needed.
Much more on the quality of health and education services is being discussed today and tomorrow at the “Making Services Work for Poor People” conference organized by the World Bank and the Overseas Development Institute. Several sessions will be live-streamed here.
There we were: An American team, a British team, a Canadian team and a Norwegian team, far from home and eager to make the most of years of work. Were we skiing, ice dancing or hurtling down the luge track? No, we were not. (Surprise!) We were just a bunch of individuals representing organizations that have pooled funds to support think tanks around the world, and we were meeting to wrap up the first phase of the project and gear up for the second. While we didn’t come home with medals, I, at least, came home with a renewed appreciation for the value of collaboration among funders.
A little background: The Think Tank Initiative is a ten-year program designed to support and strengthen about 50 think tanks in 22 countries in Latin America, South Asia, East Africa and West Africa. It grew out of a $50 million commitment made in 2006 by the Hewlett Foundation board, which believes strongly in the value of evidence-informed policy making. The foundation board recognized the tremendous contributions that can – under the right conditions – be made by reputable, independent organizations that generate solid analyses of policy options and are able to communicate their findings at the right time and place. The Hewlett board also recognized that all too often the only funding available to think tanks in low- and middle-income countries is for projects that donors define. Many of these organizations have few opportunities to pursue independent research that responds to national priorities. Thus was born the idea that we might be able to enhance the quality of policy making over the long term by selecting promising think tanks to receive flexible funding – “core support” – combined with opportunities for the think tank directors to gain and apply new knowledge about management, governance, research quality assurance, communications and fundraising.
Others shared that vision. First, the International Development Research Centre in Canada joined as a co-funder. With regional offices and a track record of research capacity building, IDRC also took on the truly challenging job of implementing the global program. The Bill & Melinda Gates Foundation, the UK Department for International Development, the Government of the Netherlands and recently the Government of Norway all joined at significant funding levels. Eventually, the Think Tank Initiative mobilized about $100 million for the first five-year phase and, after an external evaluation, close to that level for the second. It’s a tiny fraction of the funding for, say, the $50 billion Olympics, but it’s a big boost in unrestricted dollars for think tanks.
The collaboration has been fascinating to watch and to participate in – a sort of petri dish of collaboration among funders. It has not always been easy. Three different types of organizations – bilateral agencies, foundations and a Crown corporation (please don’t ask me to define exactly what that is) – have had to line up their budgeting and reporting systems to avoid administrative madness. And we have each had to figure out how a big bet on think tanks fits into our distinct institutional strategies, which themselves have been moving targets. More importantly, each of the funders has had to simultaneously wear an institutional hat and make decisions that serve the initiative as a whole rather than the piece we most care about. Meaning: letting partnership trump ownership.
Working together is always more complicated than going it alone. Sometimes dealing with those complications isn’t worth it. Compromises made for the sake of partnership can compromise a project’s effectiveness. But when we get collaboration right, working with partners who deserve points for both skill and style, we can get bigger wins than any of us could expect working alone. In this case, it means that think tanks across the developing world will have a better shot at making a real difference in their countries’ future.
Ed, as Callahan’s article points out, is not the richest guy from the Valley, or the best known. But he surely has placed some of the smartest philanthropic bets anyone could imagine, with a focus on achieving big positive change through policy research plus advocacy. He’s built at least four excellent organizations from the ground up, and knows that the best work comes when people are pushed to achieve impact but given the flexibility to do it their way, responding to opportunities in real time.
My own understanding of what philanthropic dollars can do comes directly from working at the Center for Global Development, an organization Ed initially bankrolled and where he served as Chair of the Board of Directors (a position he continues to hold today). He invested in CGD not to keep a bunch of policy wonks employed, but to change the world, first by making the case for debt relief and over time through purposeful, focused, high-impact work on aid, trade, migration, climate and the many other ways in which the U.S. and other rich countries shape the prospects of poor countries.
In one of our earliest conversations, now a dozen years ago, Ed terrified me by asking how my analyses of global health policies were going to prevent babies from dying in the next year. Though I’d worked at the World Bank and the Inter-American Development Bank, it was the first time anyone had asked me flat out how I was solving urgent real-world problems. Let me tell you, that question deserved a really good answer – and seeking that answer fundamentally changed me from a “write a good report” person to a “try to make a big difference” person. I’m guessing the same is true for many of my former colleagues.
Remarkably, Ed’s high expectations about impact, which CGD has lived up to under the leadership of Nancy Birdsall, were paired with a healthy separation from the day-to-day decisions. Ed is one of those rare individuals who can place a bet and wait to see whether it pays off. Far more than many others seem to be able to do, he lets go – and as he does, the organizations he supports go further and faster than if he were holding on tight.
People aren’t just data. But if people aren’t in the data, too often they escape the notice of those who make decisions that affect their lives.
I’m thinking, for example, about a woman I met last year on a visit to Korogocho, one of Nairobi’s sprawling slums. More than any earlier experience, my conversation with her, in the 8-by-10 corrugated shelter she shares with seven children and a sometime husband, helped me understand the relationship between being counted and being seen.
In an isolated and poor community, virtually unserved by any level of government and sustained by catch-as-catch-can jobs, this woman would be all but invisible to many who make decisions that affect her life. She would be unseen when far off government planners and international donors figure out where to locate health services and schools. She would be unrecognized as a citizen who needs protection from violence and human rights abuses. And she would be unknown as a contributor to the Kenyan economy.
She would be, that is, if not for the fact that the African Population and Health Research Center operates a demographic surveillance site in Korogocho. The site is one of several large communities where data collectors periodically visit selected households – by chance, including hers – and researchers use the information they collect to make visible the basic facts and experiences of people living in Nairobi’s urban slums. Information about this woman, aggregated and analyzed with information about many others, has helped make the policy community understand the needs of the urban poor, and has helped to track progress (or lack of it) over more than a decade.
It’s this connection between being counted and being visible that motivates our work with the United Nations Foundation, the US Department of State and many leading experts in a project called Data 2X. Data 2X, launched by Secretary Hillary Clinton in 2012, starts from a belief that essential features of women’s lives are undercounted or entirely ignored. Consequently, policies – even those that may be intended to improve equity and living conditions – miss the mark.
In phase one of the project, just now coming to a conclusion, the project team led by Mayra Buvinic looked at many aspects of life – freedom from violence, security, livelihoods, health and education – and created an inventory of the main sources of cross-national data through which we understand the conditions women face and the contributions they make. They obtained terrific input along the way from a large number of leading experts on gender data, from academia, UN agencies, government and civil society groups focused on gender-sensitive research and policy.
Sadly, it is an inventory that shows many gaps. High-quality data that is routinely collected has focused principally on women as mothers. We know more from the Demographic and Health Surveys about women’s childbearing and use of contraception, for instance, than we know from any source about how women make a living. In contrast, we know more about the work patterns of men than we do about their health or whether they want to father children. This is not just a reflection of how societies think about gender roles; the availability (and scarcity) of information itself reinforces gender roles and handicaps policymaking.
These data gaps are big. Despite some important surveys, for instance, we have very little information about the prevalence of domestic and other types of violence against women, or about the health and other costs it imposes. Without such information, advocates struggle to make the case for better enforcement of the many existing laws, or to monitor progress when more intense efforts are made to protect those who are vulnerable.
The situation is even more acute in measuring the level and type of economic activity. Women (and particularly poor women) do work that is rarely captured in official statistics, whether it is tending goats, selling fruit by the side of the road, making and selling food for construction crews or doing cleaning and childcare in the homes of better off families. In most African countries, more than half of all economic activity occurs in the informal sector, and a disproportionate amount of that work is done by women. But national income estimates, upon which we base so much of our understanding of the world, don’t capture it. Collectively, women make an enormous economic contribution, but they are systematically excluded from being counted as workers in the statistics that governments and international organizations use to shape economic strategies.
Documenting the data gaps is relatively straightforward. In the next phase of the project, we’ll take on the more challenging task of helping to accelerate progress toward filling those gaps. Fortunately, we are building on many related efforts – we are certainly not the first to recognize the problem or to work hard to help solve it. From UN Women to the UN Statistical Commission to the World Bank and USAID, many organizations are motivated to find better ways to capture data about women and have initiatives to do so. We will be looking for opportunities to provide technical support for those efforts when that is a missing piece, and to bring attention to the importance of the work when what’s missing are high-level champions. And we are starting to think about whether there is a role for non-traditional data to complement standard surveys, censuses and administrative sources, like school records. (More about that on future Fridays.)
Everyone associated with Data 2X knows that better, more complete data collection will not automatically lead to policies that treat women more fairly. But we also all believe that having a full and accurate picture of women’s lives around the world will boost the chances that they will be able to be seen from far away, in the ministries, parliaments and donor agencies where many decisions are made that shape the prospects for this generation and the next.
Whoever invented the “report launch” and the “policy brief” has had a lot of impact in the world – and, by my measure, not all of it good. At some point in the remote past, someone noticed that people who create and implement policies do not read book-length technical treatises produced by think tanks and other research-based organizations. Many of those people are too busy pursuing an ideological agenda or negotiating a quid pro quo; facts and nuanced analyses don’t matter. Some of those people, however, do have an interest in grounding their policy work in facts and good analysis. They just don’t have enough time, attention span or ability to understand technical terms of art to read a whole book or report. Thus was invented the report launch, to draw attention to a researcher’s new tome, and the policy brief, a summary of findings, aimed at an interested but non-technical audience.
From that point forward, virtually every policy-oriented book or report has been accompanied by – drum roll here – a launch and a policy brief. We see this all the time. In fact, a common response when I ask grantees about the policy impact of a research report is, “We prepared a policy brief for the report launch.”
Perhaps the report launch and the policy brief (L/PB) mode of research-to-policy translation has hampered the search for other and possibly better ways to solve the original problem. Maybe there’s some room for improvement here.
Let’s just say there were a new law barring policy research organizations from using the L/PB. (This would, of course, be an evidence-based policy, founded on an observation that launches rarely coincide with moments when the analysis is needed by the policy community, and policy briefs often are as unread as the reports on which they are based.) What would motivated researchers do to translate their work into changes in the real world?
They might get creative. They might think to themselves, “who are we trying to get to do what, and what might influence them to look at the evidence we’ve generated?” Midway through a research project they might ask, “who might value the findings and when, and how can we keep them up-to-date about what we’re learning?” Even at the start of a research project they might ask, “what problems are facing the policy community, who really cares about them, and how can they help us define our research questions and keep us focused as our work proceeds?”
The answers to those questions might lead to things like alliances between research and advocacy organizations around specific action or awareness campaigns. Or the establishment of a “policy roundtable,” where invited participants from political and academic circles meet in private regularly for a combination of socializing and exchange of ideas. Or a working group, task force, commission, study group or other collective means to define a question and conduct research. Or even a report, but timed to be issued just at the moment, like a G20 meeting, when politicians are faced with pressure to deliver a policy solution. Or something else entirely, appropriate to the context.
Each one of those approaches would require more time, more work and often more money than the old L/PB tactic. But they would potentially be many-fold more effective in getting the information researchers have into the hands, minds and hearts of those who can use it.
I’ve become a fan of Shark Tank. If you haven’t seen it, here’s the premise of this reality TV show: five or six venture capital types (the sharks) listen to pitches from a series of entrepreneurs, and respond with critiques (sometimes bordering on ridicule), compliments and an occasional offer of money in exchange for a stake in the budding business. It’s a caricature of the venture capital world and it’s pretty darn entertaining.
It turns out we are not so different from the sharks. The questions that the investors ask of the pitchman (or pitchwoman) are very much like the ones we do, or should, ask of our grantees, although within our non-profit aquarium and using our own language. Here are the types of questions they ask:
How original is the idea, and/or how unique is the style and quality of the product?
How big is the market? Is this responding to existing demand, or is it such a new idea that it’s something no one thought they needed before?
Is the inventor able to sell an idea and execute on it, or is he or she just the “ideas person”? Is there a track record of earlier success?
What are the financial and other risks, and how are those risks going to be distributed among the parties?
What are the distributors through which the product gets to the market, and what do those distributors want, in terms of a cut of the profit or a chance to put their own brand on the product?
Many of the non-profits we work with would cringe at questions like these, but I’m guessing that more attention to cost structure, risk allocation, entrepreneurship and the nature of demand for goods and services would sharpen all our thinking.
Most interesting to me about Shark Tank is how the investors figure out whether they are a good match for the project, regardless of its intrinsic merits. They consider what they can bring through their own knowledge of the business sector, sales ability and professional network. When they invest and take a stake in the business, they are offering much more than money – often the money is relatively small potatoes – and they take seriously the time and other commitments required to give the enterprise a fighting chance. “Beyond the grant dollars” in a big way. So you hear them saying things like, “I could work with you to make this thing succeed, but I’m working on too many other high-effort projects right now, so I’m out.” Or, “I think you’re onto something, but I won’t have anything to bring to the table in your line of business.” Also fascinating is the interplay among the potential investors, which sometimes results in a multi-party deal, with each bringing distinctive ingredients and demands into the mix. All of these are analogous to our world of grant making.
I realize that the concept of venture philanthropy is not exactly new, and many venture capitalists are taking their bank accounts and know-how to the philanthropic enterprise. I also realize that we have more nuanced and hard-to-measure strategies than the sharks’ strategy, which presumably is “make the most possible money.” But Shark Tank’s amped-up version of the investor-entrepreneur dealmaking makes me wish for just one episode where the ideas are focused on social rather than commercial value, and the panel is a mix of venture capitalists and, well, us.
What will it take to get the value out of evaluation? That question was in the thought bubble above my head for most of a two-day meeting last week organized by the Institute of Medicine and hosted by the Wellcome Trust in London. The meeting, with the written-by-committee title of “Evaluation Methods for Large-Scale, Complex, Multi-National Global Health Initiatives,” had a pretty straightforward aim: to look at several recent evaluations and figure out how to better measure the difference big donor-funded programs are making in people’s lives.
The evaluations we heard about covered programs that account for billions of dollars of donor spending, including the Global Fund to Fight AIDS, TB and Malaria, the President’s Emergency Plan for AIDS Relief, the President’s Malaria Initiative and the Affordable Medicine Facility for Malaria. These programs represent some of the largest and most ambitious global health initiatives; they also are among the most successful, inspirational and innovative efforts ever launched by international donors.
Big-deal programs deserve big-deal evaluations, and these programs got them. All the evaluations combined interview data from thousands of sources, and crunched endless amounts of monitoring and budget information. They tried to filter out bias, while at the same time recognizing that those who know most about the performance of programs are often those doing the work (and those living off funding from donors). The evaluators – some of the best in the business – tried heroically to distinguish health improvements that could be legitimately credited to the program from those that might have happened anyway. For each of these evaluations, the political stakes were high and the methodological challenges enormous.
Assuming you don’t want to watch two days of webcast content (to be posted here), here are my quick and partial take-aways. I hope others who were there use the comment feature to offer their own observations.
These evaluation experiences provide hints about ways to create greater political space for serious evaluation; improve the relevance and technical quality of the evaluations; and intensify the use of evaluation findings.
Having the political space to be able to conduct a good evaluation of a high-stakes program requires a sort of “open to learning” stance on the part of program leadership and/or a mandate from on high. It also requires that the evaluation is separated enough from the program to be genuinely unbiased, and welcomed by the advocacy community, even if the news is not always positive.
Better technical quality – and this will not be news – requires being able to articulate a theory of change and identify the fundamental assumptions that the evaluation should interrogate. Given the zillions of possible “interesting” questions one might ask, it also requires take-no-prisoners priority-setting about what the most important evaluation questions are. This priority-setting needs to take into consideration whether the questions can be answered in a way that is sufficiently persuasive to change the minds of those who are in a position to take decisions.
Better technical quality in these sorts of programs (and many others) also requires thinking about the evaluation from the outset – something not done in any of these initiatives, remarkably – and embedding impact evaluation during implementation. And let’s not forget the value of putting both the methods and the data itself out for public comment and reanalysis. That helps keep everyone’s game up.
Evaluation findings are used most effectively when there has been ongoing, meaningful engagement of both implementers and partner countries, and when those stakeholders have a commitment to learning and adaptation. Use of evaluation findings also benefits from a system for regular public follow-up of the recommendations. Also important and often neglected: adequate planning, skills and budget for fit-for-purpose dissemination. Incredibly, in several of these evaluations the dissemination budget was – wait for it – $0.00.
These experiences, informative as they are, don’t help prepare us fully to do things well in the future. Most if not all of these evaluations were very much in the old-school “donor-recipient” model, which simply will not fly in the future (thank goodness). As Ian Goldman, head of evaluation and research in the South African government and Board member of the International Initiative for Impact Evaluation, gently but firmly put it in his remarks (paraphrasing here), “You are operating under an outdated partnership model. You have to change.” These evaluations also did not find ways to take advantage of any non-traditional data sources, like on-the-spot client feedback. Surely this is part of the emerging set of opportunities for evaluators.
This meeting alone did not change the world, or ensure that we will get more value out of future evaluations. But it sure made me think about what we at the Hewlett Foundation, long-standing supporters of some of the most pioneering evaluation work in international development, can do to advance the state of this imperfect art.
One of the most enduring beliefs in development is the notion that poor countries are harmed by the out-migration of highly skilled professionals; one of the most oft-repeated policy prescriptions is to create barriers to reduce "brain drain." Take, for instance, the recent New York Times op-ed, Migration Hurts the Homeland, by Paul Collier, one of the world’s most influential development economists. In it he writes:
But many poor countries have too much emigration. I do not mean that they would be better with none, but they would be better with less.
. . . [S]mall developing countries have high emigration rates, even if their economies are doing well: Ghana, for instance, has a rate of skilled emigration 12 times that of China. If, in addition, their economies are in trouble, they suffer an educational hemorrhage. The top rankings for skilled emigration are a roll call of the bottom billion. Haiti loses around 85 percent of its educated youth, a rate that is debilitating.
This view has intuitive appeal. When skilled people leave their home countries, how could that not be a loss? In some static way, perhaps it is. But migration is a dynamic and complex process, in which causation, people, and money move in more than one direction. As my former colleague at the Center for Global Development, Michael Clemens, has articulately argued many times:
Opportunities for overseas employment create incentives for people to invest in higher education and acquiring skills. If young people knew they were restricted to jobs at home, far fewer would seek to better themselves through education. (Michael has demonstrated this with creative and pathbreaking research on African doctors and nurses.)
No serious research has ever demonstrated that economic and social development was accelerated when borders were closed.
Remittances are a major source of financial flows from wealthy to poor countries – estimated at $410 billion in 2014 – that simply would not exist in the absence of migration.
Out-migration and return migration are crucial pathways through which ideas and knowledge spread around the world.
Increasing opportunities for migration from poor countries is one of the fastest and most powerful tools to raise living standards, both of migrants and of those they left at home. (See this article for more.)
While restrictions on immigration may be motivated by good intentions – stopping the so-called “brain drain” – they are fundamentally at odds with the notion that people should be able to choose where to live and how they use their skills and talents.
In the end, I have no doubt that for poor people and poor countries migration is a good thing. And I also have no doubt that many leading thinkers in the research and policy community hold precisely the opposite view.
A TEDx talk by one of my favorites, Dan Ariely, elegantly captures what makes us work, beyond the paycheck. He starts with a pretty basic conundrum: Why do people willingly, even eagerly, engage in really hard work? Why do people climb mountains?
In the first part of the entertaining twenty minutes, he talks about how much people’s will to work is shaped by a sense of purpose. He describes a set of behavioral experiments that quantify how much less work people do when they realize their efforts are unappreciated or even futile. As intuition would suggest, if you ask people to do something meaningless—to create something that is never used, for example, or that is shredded before their very eyes—their enthusiasm plummets. It turns out that people work a lot harder for the same amount of money when they feel challenged, and much less when the sense of meaning evaporates.
In the last part of the talk, Ariely looks at the puzzle differently. He shows experiments in which subjects are asked to take on complex tasks, and then to rate the quality of the end product and to guess what others will think. The harder people work to create something, the more beautiful and valuable they think it is; they also expect others to appreciate the beauty and value. This is the case even when, objectively, what they have created is not beautiful and they could have made something more perfect had they chosen to work on a simpler task.
If you do take time to watch the talk, I think you’ll get something out of it. What I got was a nudge to do a better job of providing feedback on the hard work that everyone in our Program does: appreciation but also meaningful responses. I could surely do more to communicate what I think when I read reports and proposals, or see how the various parts of grant farming are done, from seeds to harvest. I also got a better understanding of why we sometimes become strongly attached to the most complicated parts of the portfolio (or theory of change). We talk about going for the “low-hanging fruit,” but nine times out of ten I think we (and the organizations with which we work) actually are reaching for the upper branches and find it difficult and demotivating to abandon those in favor of challenges that are more likely to succeed. Like the research subjects in the video, the harder we work on something the more enchanting we think it is. In some cases, we may be listening to the siren song of spinning wheels.