Behold the IMF: an organization that has become, in a big way, a champion of women’s economic empowerment.
The International Monetary Fund, that citadel of macroeconomic orthodoxy, has stated loudly and clearly that countries’ long-term prosperity depends on making more room in the labor force for women. In a speech last September in Japan, IMF Managing Director Christine Lagarde made the point elegantly: “We will need all the economic growth, dynamism, and ingenuity we can get in the years ahead. Thankfully, a key part of the solution is staring us right in the face—unleashing the economic power of women. Bringing the world’s largest excluded group into the fold.”
Looking across countries, the IMF has estimated that expanding economic opportunities for women can increase national income by 15 to 30 percent. And the institution, known in the past for harsh macroeconomic prescriptions, now offers up recommendations focused on economic inclusion.
In a study released just this past Monday, IMF researchers found that women’s participation in the job market depends on national policies that give women an equal chance at inheriting property, owning land, opening bank accounts, and pursuing an occupation of their choosing. Many countries still have gender-biased policies on the books, and under the current rules of the game, women’s full economic potential simply cannot be realized.
Their conclusion is straightforward: Want to grow your economy? Fix the gender-based discrimination that’s codified in laws and regulations. As Lagarde wrote about the study on her blog, “By helping women reach their full economic potential, we can also help boost growth, prosperity and stability for the whole world. In a world in search of growth, women will help find it, if they face a level playing field instead of an insidious conspiracy!”
This is precisely what the world needs to hear from the IMF—but we need to hear more.
We need to hear that the IMF sees and understands the informal economy, which is where the majority of female workers in many countries can be found. Yes, it’s great to pass laws that permit women to inherit property and pursue careers in occupations like teaching, banking, and hospitality. It would be even better to provide legal protections for domestic workers, women who sew garments in their homes, food vendors at construction sites, and people who sort out recyclable and reusable items from vast mountains of waste. Those workers are contributing to the economy, too.
We need to hear that the IMF recognizes the unpaid labor that keeps households running, farms producing, and family businesses afloat. National income measures don’t include the value of childcare, eldercare, cooking, cleaning, and many other services when they are provided outside of the context of a market transaction. As a consequence, productivity is vastly underestimated—and the contribution that women make is too often invisible to economic policymakers. Wages for housework? Maybe not. But recognition of the economic value of care and other unpaid work? A fitting task for a global financial institution.
Think tanks have tremendous potential to strengthen economic and social policy around the world, using data and analysis to answer questions about how to grow economies, share prosperity, and protect the environment. It is within think tanks that skilled analysts pull apart the most pressing policy problems, examine the impacts of policies, and translate the best available evidence from around the world into a local context. But to fulfill their promise, think tanks need well-qualified staff with the wherewithal to build long-term research programs, and they need to be able to respond with information and advice when unexpected policy opportunities arise. In other words, to fulfill all that potential, they need core organizational support and cannot just live project-to-project.
This week, my colleague Sarah Lucas and I have been in snowy Istanbul at the Think Tank Exchange, a gathering of 40-plus think tank leaders convened by the Think Tank Initiative (TTI). Along with the British and Norwegian governments, the Bill & Melinda Gates Foundation, and Canada’s International Development Research Centre, we support TTI. Through it, we and our co-funders provide core support to think tanks in Latin America, South Asia, and Africa. This conference was a chance to see what our grant dollars are paying for—and we like what we’re seeing.
Talking to the extraordinary men and women who run these organizations, I heard time and again that core support is the lifeblood of think tanks, whose success depends on credibility and relevance. When an organization has resources that are not all tied up in specific, short-term projects, it can recruit the best researchers and give them the independence to pursue research on policy questions that no particular funder has yet prioritized. With core support, a think tank can ensure continuity within a research program, consolidating its reputation as the “go-to” source on a topic. It can invest in basic research infrastructure—library services, development of an economic or environmental model for policy simulations, statistical support—that yields benefits across multiple research projects. And the organization can marshal a response when, with the election of new political leadership or a boom in government revenues, a window opens up for new ideas.
(Image: The range of research conducted by think tanks supported by the Think Tank Initiative. Credit: Think Tank Initiative)
Funding think tanks is quiet, wonky work, and there are days when it feels like an agonizing blend of academia and politics, a crazy mash-up of research methodologies, regulatory minutiae, and ministry personalities. There are no ribbons to cut, no announcements about how many microloans were offered or lives saved. Even when think tanks have major accomplishments to their name, they resist bragging, because maintaining a relationship of trust with policymakers often requires letting others take credit. For that reason, it is hard for funders to recognize think tanks for the value they bring.
But like us, our fellow Think Tank Initiative funders do know there’s a very high pay-off when people who care about getting the facts are able to influence political decision makers. When think tanks find their voice—and that’s happening more and more, as the appetite for evidence and technical advice increases in many countries—there’s no question that they are key contributors to national policy debates. Think tanks deserve the level and type of funding that will let them do their best work. They deserve core support.
In his most recent essay, “Can Rich Countries Be Reliable Partners for National Development?” Lant Pritchett brilliantly deconstructs the existential crisis facing traditional development agencies. The paper is a “must-read-now” for anyone who works in or around a bilateral agency or multilateral development bank, because it lays bare the fundamental disconnect between the current missions of major development institutions and the actual wants and needs of both citizens in developing countries and their governments.
“Across the board, rich countries are backing away from the national development goals of poor countries, such as broad-based prosperity and effective government . . . towards a narrow agenda of low-bar goals, such as reducing ‘dollar-a-day’ poverty; ‘completing primary schooling’ (with no mention of quality of learning or education beyond primary); accessing basic water and sanitation; or focusing less on health and more on specific diseases.”
Several forces are pushing development agencies in this direction. Developing countries have alternatives to aid, particularly with resources from their own economic growth. And aid supporters in rich countries are increasingly “post-materialist,” which creates a divergence between what they want to provide and what people in developing countries want.
Lant’s analysis helps explain the poor track record of attempts to promote “country ownership.” To date, no development agency—with the possible exception of the U.S. Millennium Challenge Corporation—has done even a C+ job of aligning agency priorities with those established within partner countries, or of using development dollars to strengthen in-country institutions and markets. Rich-country taxpayers who support aid tend to want faster progress toward pro-poor health, education, and other social outcomes rather than economic growth alone. Even more daunting for the “country ownership” agenda is the awkward reality that shifting the locus of power to the national level wins few friends on the home front, particularly among those who are losing out on potential revenues from the business of aid.
Lant’s essay lays out a conundrum that seems intractable. Given their constituencies, aid agencies cannot finance the things that address economic development writ large—such as energy projects, infrastructure, urban construction, universities, and transportation. So what’s a development agency to do?
The answer, I think, also comes from Lant—but from earlier times.
In 1999, Lant and his colleague Deon Filmer figured out a way to take household survey data, like the Demographic and Health Surveys, and estimate a robust wealth index, so that we could compare and contrast the experiences of the wealthiest and the poorest households. Their paper, “Estimating Wealth Effects without Expenditure Data—or Tears: An Application to Educational Enrollments in States of India,” led directly to an explosion of analyses of equity in health, education, family planning, and other service utilization. For the first time, we could see, across dozens of countries, that progress was extraordinarily uneven, and that the benefits of spending on supposedly “pro-poor” interventions like childhood immunization were largely captured by better-off families. We had a new and crucially important way to assess the performance of a wide spectrum of government and donor programs, and to improve their design. Information and analysis shed new light, and continue to do so to this day.
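For readers curious about the mechanics, here is a minimal sketch of the Filmer-Pritchett approach in Python: score each household on the first principal component of its asset indicators, then group households into quintiles for rich-versus-poor comparisons. The column names and toy data below are hypothetical, and a real DHS-based index involves many more indicators and careful recoding.

```python
# A minimal sketch of an asset-based wealth index, in the spirit of
# Filmer and Pritchett (1999). The asset columns and households here
# are hypothetical; real analyses use dozens of indicators.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def wealth_index(assets: pd.DataFrame) -> pd.Series:
    """Score each household on the first principal component of its assets."""
    standardized = StandardScaler().fit_transform(assets)
    scores = PCA(n_components=1).fit_transform(standardized).ravel()
    return pd.Series(scores, index=assets.index, name="wealth_index")

# Toy example: five households, four binary asset indicators.
households = pd.DataFrame({
    "has_radio":      [1, 0, 1, 1, 0],
    "has_bicycle":    [1, 0, 0, 1, 0],
    "has_tv":         [1, 0, 0, 1, 1],
    "finished_floor": [1, 0, 0, 1, 0],
})

index = wealth_index(households)
# Cut the ranked scores into quintiles. Note that the sign of a principal
# component is arbitrary, so check its orientation against the raw assets
# before labeling one end "poorest" and the other "richest".
quintiles = pd.qcut(index.rank(method="first"), 5, labels=[1, 2, 3, 4, 5])
print(pd.concat([index, quintiles.rename("quintile")], axis=1))
```

The design choice that made the original paper so useful is that nothing here requires expenditure or income data: the index is built entirely from easy-to-collect yes/no questions about assets, which is why it could be applied across dozens of existing household surveys.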
Then, in 2002, Lant wrote “It Pays to Be Ignorant: A Simple Political Economy of Rigorous Program Evaluation.” In it, he explained—using equations!—why and how individual development agencies and governments are unlikely to invest in learning from experience, even though it would permit them to optimize the impact of their spending. That was one of several seminal contributions that led to the movement for more and better evaluation through collective action.
And in 2004, Lant and his colleague Michael Woolcock wrote in “Solutions when the Solution is the Problem: Arraying the Disarray in Development” about the intrinsic limitations of development interventions based on a technocratic analysis of “need”: When the problem is defined from 30,000 feet as a need for more clean water, for example, the solution starts with pipes and flow rates. If the people actually affected, within a local context, were given the opportunity to define the problem for themselves and propose its solution, chances are they would pursue a path far different from the technocrats’, and from the paths chosen in other communities. It’s a complex paper, but at least one of its core messages is that information about people’s views, preferences, and experiences must be taken into account—and given real weight alongside external technical expertise.
These ideas can help development agencies escape their conundrum. Aid agencies may be unable to finance the hydroelectric dams that countries want, but they can finance the information, research, piloting, and adaptation that make a difference in the ultimate success or failure of governments’ own investments. What Lant’s work has demonstrated over many years is the crucial role that information, analysis, and evaluation can play in shaping better outcomes for real people in real places. As I argued in an earlier blog post, aid agencies—particularly if they are willing and able to work collectively—have a comparative advantage in investing in gathering and revealing facts on the ground. Part of that agenda is pure data, part is research, part is evaluation and other forms of feedback. But it’s all important, and it’s all a big part of what needs to happen next if development aid agencies are ever to find the sweet spot between what they should do and what they can do.
It’s the time for our annual performance reviews at the Hewlett Foundation, so I’ve been spending more than the usual amount of time reflecting on skills that my colleagues demonstrate at work each day. And that’s made me think about the special and somewhat peculiar qualities that go into being a good grantmaker.
The program officer’s job is a tricky one. This is the person who establishes a relationship with a prospective grantee and figures out where the organization and the project idea fit into the overall field that we support. The program officer has to gather enough information to have confidence recommending that we fund the work, and weigh various factors to sort out both the level and the duration of our support. He or she vets proposals, monitors progress, and connects grantees to each other and to other funders. It’s also the program officer who figures out when and how to evaluate grants or strategies, and when to recommend expanding a commitment or pulling the plug. Program officers are also members of a larger team and have management responsibilities, as well as the obligation to be a good citizen of a dynamic organization.
So when I think about the essential qualities of a program officer, demonstrated by my colleagues, I see stars. Specifically, I see a five-pointed star, with a pair of attributes in balance at each point.
Program officers need to know a lot about their subject matter. In the Global Development and Population Program, for example, program officers have deep in-country experience and strong knowledge of the fields of reproductive health, education, governance, transparency and accountability, and gender. But sometimes they have to tell their inner expert to take the day off so they can question fundamental and often strongly held assumptions within a field, and appreciate what people from other disciplines have to offer.
Program officers need to be well organized, able to keep to deadlines and tick tasks off a list. There are a surprisingly large number of tasks involved in grantmaking: the due diligence, the proposal review, the presentation to the Board, the report review, and so on (and on). If you looked at a program officer’s calendar here, you’d see at least 50 separate deadlines each year. (That’s about one a week, on average, and it often feels like one a day.) All too often, those deadlines fall when program officers are in a distant time zone, far from a reliable internet connection.
The pace feels relentless and the stakes can be high: missing a deadline almost always inconveniences many colleagues, and could even jeopardize our ability to award a grant.
At the same time, program officers cannot be so mired in planning and meeting deadlines that they are unable to respond when opportunities arise, when a grantee has an urgent need, or when a colleague needs a helping hand. And, while ticking tasks off day-to-day, they still have to follow a separate, and very different, rhythm—that of keeping up with the field.
Program officers need to be passionate about impact, willing and even eager to ask hard questions of prospective grantees about how they are going to turn money into good things in the world. But sometimes—maybe even most of the time—they need to temper that passion with a willingness to let go, giving grantees latitude to do things their way. It is, after all, the grantees who are ultimately accountable for the success or failure of each grant. And people rarely do their best work when they feel that someone else is calling the shots.
Program officers are supposed to be “strategic thinkers,” able to set priorities for their time, grant and consulting budgets, and institutional voice. At the same time, they have to keep their minds open to opportunities that we might not have envisioned at the outset of the strategy—as well as to the possibility that things just aren’t working and it’s time to change course. Again, it’s a balancing act.
Finally, program officers have to tread a fine line in relating to people who benefit from a grant. Good grantmaking requires being able to establish a relationship of trust, one that entails empathy and connection. At the very same time, good grantmaking requires maintaining enough objectivity so that the funding relationship is sustained for the results, not for the relationship alone. That can be tough.
Five sets of skills, five ways to balance proactivity with restraint. Thinking in this way about how we’re all doing our jobs helps me understand what it really means to be a good grantmaker. And it’s permitting me to see many stars.
In founding the National Campaign, Sarah Brown created an organization that has been inventive, strategic, and just plain successful—whether you’re talking about its policy work, media partnerships, or direct outreach to young people.
The media partnerships, which stretch across the TV dial and many online outlets, famously include 16 and Pregnant and Teen Mom. The National Campaign has used partnerships with these shows to bring responsible parenting messages to MTV’s big audiences—messages with so much punch that they’ve been cited as major factors in the dramatic decline in teen pregnancy after the shows’ introduction.
The Campaign has also been pioneering in its creation of Bedsider.org, a made-for-twenty-somethings web portal, with a growing reach, that provides information about birth control along with a dose of fun. Studies of Bedsider.org’s effects on users’ knowledge and use of clinical services hint at its tremendous promise as a way to engage and educate young people.
All of these complementary efforts, and the other work the National Campaign has done over the years, have contributed to what President Obama was referring to in his State of the Union Address earlier this week: “We still may not agree on a woman’s right to choose, but surely, we can agree it’s a good thing that teen pregnancies and abortions are nearing all-time lows, and that every woman should have access to the health care that she needs.” Those are precisely the goals the National Campaign has worked so hard to achieve, and it has managed to do so—even in this day and age—with bipartisan support.
Sarah is one of this country’s most ardent champions of making birth control accessible to young people, but what I admire most is that she is simultaneously committed to a cause and to evidence. That’s a rare thing. Sarah’s not in it for the passionate speechifying or the ideological fight. She knows how to look at facts, analyze current conditions, and focus energies where they will make the greatest difference. She has built a team and an organization that’s in it for the results: the brightest future for the next generation.
The bell has rung. Civil society leaders from India, Pakistan, Senegal, Mali, Nigeria, Mexico, Uganda, Kenya, and Tanzania have come together, with optimism and conviction, asking others to join a movement to ensure that all children learn the fundamental, life-changing skills of reading and arithmetic.
But will anyone answer?
That’s the question I found myself asking just a couple of days ago, as I listened to Dr. Rukmini Banerji, head of the ASER Centre in India, speak on behalf of organizations from diverse countries that administer a simple reading and math test to school-age children each year, and publicly report the results—results that often reveal the dire state of education in their countries and motivate parents and policymakers to act. Before a large international gathering in Delhi, she said that we now know it is both possible and powerful to conduct citizen-led learning assessments. She called on civil society leaders around the world who are committed to better education to let parents, communities, and government officials know the cold, hard facts about whether or not children are learning.
The idea of citizen-led learning assessments is simple, and simply disruptive: rather than depending solely on enrollment statistics reported by government, assessments conducted by volunteers going door-to-door generate new and independent information about what matters—what children are learning. And that information is widely shared so that parents and others see for themselves whether children, in or out of school, have achieved the basic reading and arithmetic skills that can reasonably be expected of any 7- or 8-year-old.
For 10 years, the ASER Centre has conducted a national assessment in India, with data that are representative at the sub-state level—a feat of heroic proportions. Year after year, the findings have been shocking. This year, ASER reports that while almost all children are enrolled in a government or private school, only one in four third graders can read a second-grade text. Almost one in five second graders cannot recognize the numbers zero through nine. And as children progress through the grades, many fall further and further behind.
With the feasibility of this home-grown approach demonstrated in India, civil society organizations in East and West Africa and in Pakistan have adopted and adapted it in recent years. Now more than a million children are tested annually, and similar efforts are starting up in Mexico and Nigeria. Technologists, education experts, and leaders in governance and accountability are all participating in the expansion and evolution of this approach.
The work of groups undertaking citizen-led assessments has paid off. At the national level, the release of each year’s findings sets off a chain reaction of responses and accelerates reforms. Most importantly, the findings force public conversations across societies, and particularly among politicians, bureaucrats, and education experts: How can we improve education for all children?
At the global level, findings about how few children can do the basics, even after attending school every day, have paved the way for an “access plus learning” objective in the Sustainable Development Goals, expected to be agreed upon by UN member states in September. And the development of that goal has pushed the education community to focus on the vexing question of how to measure learning outcomes affordably, in ways that match in-country technical capacity.
The expansion of citizen-led learning assessments has also created some remarkable international bonds. The Indian team is working closely with counterparts in Pakistan, forging strong ties as they tackle common challenges; the Francophone countries of Senegal and Mali are collaborating on the challenges of translation into French and mother-tongue languages; and Uganda, Tanzania, and Kenya are ensuring that data are collected in ways that permit comparison and encourage friendly competition within the East African community.
So now, as we sit on the edge of a global agreement about the importance of learning, these civil society organizations from the Global South are calling on their counterparts in other countries to join the movement. Already they’ve seen expressions of interest, and have been fielding inquiries from every region of the world. As these groups organize, learn, and plan, however, they will need technical and financial resources. With our support for its start-up, the ASER Centre has established an International Unit to help share tools and know-how, including an app for handhelds that saves time and money in data entry and analysis. So the groundwork is being done.
But as this movement to measure children’s learning in an independent and inclusive way takes off, members of the donor community who care so much about better education—who have been so effective in helping governments increase school enrollments—owe it to themselves to take a close look at what citizen-led assessments can do to create an enabling environment for big changes. Not only are they a proven and relatively low-cost way to measure learning outcomes, but they also serve as a crucial external source of data, which will continue to be needed even as government-run assessments proliferate.
I know that we will do what we can to respond to the call. Will others join us?
So let’s say you were trying to make a big dent in the problems facing people in poor countries, like inequality, poor health and nutrition, illiteracy, and corruption. Let’s say you had big ideas, plenty of access to powerful decision makers and the smartest analysts, but relatively little money compared to the scale of the problems. Let’s say you wanted to make a lasting contribution, but were subject to priorities shaped by politicians with short time horizons. Let’s say you knew that real solutions need to be appropriate to local conditions and “owned” by affected communities—but you were an outsider. Let’s say, in other words, that you were an aid agency. How could you spend your money and use your connections to make the biggest difference?
One way—and I’ll even go out on a limb here and say it’s the best way—is to invest in learning about the effectiveness of the policies and programs designed to solve the major social and economic problems you’re trying to tackle. You could evaluate programs funded by aid agencies, national governments, and even the private sector—and use your experience and networks to ensure sound methodologies, relevant evaluation questions, and full use of the findings. You could foster good governance by helping countries develop and refine the best interventions for particular contexts, and by encouraging them to be transparent about whether publicly funded actions were succeeding or not. If you did that, you’d be leveraging your limited funds to reinforce country-owned, evidence-based policies and programs. You’d be fulfilling your mission.
You could get even more bang for your buck by funding evaluations collaboratively, benefiting from economies of scale and scope that come with pooled money and pooled thinking: a shared evaluation agenda; opportunities for replication in varied settings to figure out what’s generalizable and what’s not; and standard practices for quality assurance and dissemination of findings.
And the good news is that you could start intensifying your evaluation efforts today, because all the pieces are in place. The evaluation field has developed a wide range of methods, and leading academics around the world are focused on impact measurement. In fact, program evaluation is seen as one of the most appealing specialties by bright young development professionals. Several organizations, particularly the International Initiative for Impact Evaluation, have well-developed procedures for soliciting, vetting, and funding evaluations. Importantly, operational non-governmental organizations and developing country governments are both increasingly recognizing the value of evaluation and using new findings. Just next week, for example, as part of the International Year of Evaluation, the Government of India is dedicating five days to high-level discussions about how evaluation can contribute to better health, education, and social protection policies. And guess what? Better evaluation is a priority shared by people of (almost) all political stripes. Win. Win. Win.
Last year around this time, I used this space to offer some predictions for 2014. Every single one of them came true!
Now I’m emboldened to go beyond predictions and share my resolutions for 2015—and leave it to you to hold me to them.
I resolve to do a better job of ensuring that, even when supporting work on long-term problems, we’re attentive to near-term progress. Across each of the domains in which we work, we’re seeking large-scale social change that requires overcoming longstanding barriers, from gender-based discrimination to the over-medicalization of basic health services to poor (and often corrupt) public sector performance. We are patient, and don’t fool ourselves into believing that dramatic progress is likely in a few months or even a few years. The path of least resistance is always to continue the status quo—the same sensible approach, the same good partners—and to look for confirmation that incremental progress is being made. But patience and respect for past decisions have to coexist with alertness to the possibility that what we’re doing isn’t working, or isn’t working as fast as it should. That’s why I’m going to push myself this coming year to figure out the right near-term measures to help us understand (and accelerate) the progress we’re making.
I resolve to spend more time getting feedback. It goes almost without saying that people who work in foundations face a serious feedback problem. Not only are we at least one step removed from anything that could be called “on the ground,” but we also spend most of our time with organizations that receive or want funding—and they, of course, have few incentives to tell us that our thinking is simple-minded, or our proposed solutions are unlikely to work. These are problems our colleagues in the Effective Philanthropy Group here at the Foundation are tackling with other funders through the Fund for Shared Insight, and it will take these efforts and many more to find the best ways to elicit, hear, and use feedback. For my part, I need to place more priority on consultation and on setting aside time for site visits and other opportunities to observe and listen.
I resolve to be completely honest and upfront about the prospects for funding. Telling someone “no” is neither pleasant nor the route to popularity, and it’s far too easy for funders to rely on vague statements. “Not now.” “We’ll have to see.” “After our strategy refresh.” But I know from being on the grant-seeking side of the table for many years that the second-best response to a funding pitch is not “maybe”; it’s “no.” A clear “no” means those who are seeking funds can move on to look for more appropriate funders, instead of trying to figure out, often fruitlessly, how to turn a “maybe” into “yes.” In 2015 and the years after that, I’ll do my best to be clear and candid in response to requests, even at the risk of being seen as undiplomatic.
I’m hoping these resolutions are within my grasp and that I’ll be able to report back with some sense of accomplishment this time next year. In the meantime, if you have ideas for other resolutions we should be making—or for how to achieve these—do not hesitate to speak up.
Like a whole lot of other people, I’ve become engrossed by Serial. (If you’ve somehow managed to avoid its seeming cultural omnipresence: Serial is a weekly podcast in which a reporter sifts through clues and patches together fragments of people’s memories to try to solve a long-ago real-life murder mystery—and possibly exonerate the man convicted of the crime.) It’s a messy story, complete with ambiguous evidence, false leads, and contradictory points of view. And that’s what’s so captivating. Listeners must puzzle it out along with the reporter, applying powers of observation, inquiry, and judgment to understand both what happened and what it means. The untidiness of the reporter’s investigative process, as we follow along with her, creates a kind of audience engagement that no polished three-act structure, complete with a tidy ending, ever could. For a taste of it, just tune in to the “meta-conversation” on Reddit, in Slate, or around the water cooler at many offices (including our own).
So why am I writing about this when I’m supposed to be focusing on global development policy and programs? Because the phenomenon of Serial, and the popularity of detective stories in general, gives us a clue about one of the great unsolved mysteries in global development: Why don’t people who want to get the most good out of every development dollar spent pay enough attention to what works and why? Why don’t more people truly learn from program evaluations?
I think one of the reasons that findings from program evaluations don’t gain traction—either in the minds of individual development practitioners or with the institutions whose programs might be improved—is the way they’re presented: dull, predictable, and unengaging, tied up in a too-neat bow. They are not only easy to ignore; they’re actually hard to pay attention to.
But what if program evaluations were constructed as detective stories? There are, in fact, lots of similarities between a good mystery and the assessment of a particular program’s effectiveness. There may be no dead bodies or court records in program evaluations, but there are lots and lots of questions we really need answered—questions we should be just plain curious about. What happened, and why? Does everyone agree, or are there differences of opinion? What were the motivations for decisions, and what were their impacts? Inquiring minds want to know!
And, like true crime dramas, evaluations are stories necessarily built on incomplete facts and conflicting opinions that have to be poked, prodded, and sorted. An open-minded, curious evaluator has to take it all in and make enough sense of the information to know which conclusions she is confident about, and what remains unknown. She not only has to sort fact from fiction, but also figure out what the findings mean: Yes, the implementation was slower than expected, but how does that (or does it even) affect what was actually accomplished? Was the loss of three key staff members during year two a result of disastrous management or just bad luck? And then there’s the big one: Even if all the outcome indicators are trending in the right direction, can we say with confidence that it’s due to the program itself?
I know it’s a stretch to imagine that evaluation reports, which are notoriously banal, could be as compelling as Serial. But surely when we’re designing evaluations we could try to tap into our natural human curiosity. We could ask ourselves the question, “What are we genuinely interested in knowing?” This is a question that might lead us in a novel direction—not the well-trod paths of bureaucratic exercises to tick the “evaluated” box or the careful marshaling of evidence to confirm what we already believe. Then, in the course of the evaluation, the evaluator could present the information not as a neat set of conclusions backed up by selected and sanitized facts, but as a puzzle for those who care to join in the solving of it.
In the end, I’d bet that the findings from an evaluation that engaging would be remembered for a long time, and might even have a chance of influencing decisions far into the future.
“At the sound of the tone, please state your name and press the pound or hash key . . .” That phrase fills me with dread.
Working on large collaborative projects with people in many different parts of the world, I spend a lot of time on conference calls with five to ten other participants. Sometimes very, very early in the morning. As a consequence, I also spend a lot of time wishing those conference calls were more productive and satisfying as a means of communication. Too often the organizers and participants seem at sea, not knowing when to speak, who’s talking, or where the conversation is going. Without the cues of an in-person meeting, it’s easy to lose focus or miss an opportunity to make a key point. But it doesn’t have to be that way. Here are some tips, harvested from the best conference calls I’ve been on.
If you are leading the call:
Send the call-in information with the invitation.
For high-stakes decision making, do pre-calls with key participants to understand where agreement may be difficult. Shape the agenda to permit time for discussion where it’s required. Minimize presentation time and maximize discussion time.
Send the agenda at least two working days before the call, and include the call-in information along with instructions on how to access background materials. Make sure the agenda is clear about the objectives, the time for each item, and any decision points. The fact that this meeting is taking place on the phone rather than in person makes it more important that the reason for having it is clear to everyone, not less.
Open the call a couple of minutes early so you are on when people join.
Formally start no more than five minutes late, even if some participants have not yet joined. Take attendance by stating who you believe is on the line and asking for a quick response. Then ask anyone not named to introduce themselves.
Start every call with the following statement: “If you are not speaking, please put your phone on mute. At each point where comments or questions are requested, I will ask each person (or location) in turn, in the following order: [state the order, starting with the person who, because of time zones, is most inconvenienced by the time of the call]. If you have no comments, please feel free to simply pass to the next person. At the conclusion of a round of comments, I may open it up again. At that point, speak up if you have additional points to make. Please state your name each time you speak.” This system may seem overly formal, but it will reduce the problems of dead air/everyone speaking/people turning to their email instead of listening, and it will give everyone a chance to participate if they wish.
Invite any additions or changes to the agenda; note if there is an “any other business” section at the end. Reassure participants that they can contact you off-line to suggest topics that need to be covered in a future call or through a round of emails.
Have someone else take notes so you can attend to the difficult business of moderating the discussion and managing the time with a firm hand. Your note-taker can also quickly email the agenda and background materials to any participants who do not have them handy.
Make sure stacked-up comments get addressed, and stay attuned to emerging points of consensus or discontent; circle back to them as needed before the section ends to solicit additional thoughts.
Clearly state at the end what the conclusions are, including any decisions or next steps. Also indicate when notes will be circulated and if you will be asking for comments on them.
Plan to end a little early because people often drop off before the scheduled end.
If you are a participant on the call:
Ask in advance for an agenda with the relevant information if it hasn’t been sent around two days before the call. Test any links in advance so you don’t waste everyone’s time fumbling with technology.
Prepare for the call with talking points you think you will want to raise for each agenda item so you make the most of the time and don’t ramble. You may want to use these as the basis for a follow-up note, to make sure your points were understood.
Join promptly. If you must join late, wait until a pause to quickly introduce yourself and don’t expect anyone to recap the conversation for you.
Mute yourself when you are not speaking.
Introduce yourself each time you speak and indicate when you have finished your intervention. (“I’ll turn it back to you now.”)
Respect the agenda and the order of discussion topics—limit your comments and questions to the points currently under discussion. If there are additional topics you think need to be covered, consider suggesting a follow-up call rather than trying to crowd them into a single call.
Assume there are other people listening who have not introduced themselves.
Be careful about sounding negative. As in email, without the cues of an in-person interaction, negative comments are often over-interpreted.
Stick around until the end of the call.
After the call, review any call notes promptly.
If this sounds like a lot of work, let me reassure you: it’s less work than struggling through unproductive calls and then trying to make sense of the results. Let’s make the world a better place—one conference call at a time.