Millions Learning starts with a single proposition: the 100-year gap in educational outcomes between developed and developing countries cannot be closed with a business-as-usual approach. Co-authors Jenny Perlman Robinson and Rebecca Winthrop recommend five actions to expand quality education to millions more children in developing countries. Three of them stand out:
Fund the middle phase
Measure and learn what works through better data on learning and scaling
Share new ideas through a network of Idea Hubs
I’m hopeful that donors and governments will heed this advice and find new ways to provide flexible funding so that learning innovators have time to refine, adapt and sort out who is responsible for what to deliver on the promise of improved learning for more children.
Pairing this flexible funding with new investments to produce timely data and information about what it takes to scale up learning innovations could also advance the “science of delivery.” And a network of Idea Hubs could offer real-time advice and connections with others tackling similar challenges – a concept that the team at Brookings Center for Universal Education calls “real-time scaling labs.”
Reading Millions Learning reminded me of a story from our own grantmaking. In December 2013, I received a call from Mamadou Ly, executive director of the Senegalese NGO Associates in Research and Education for Development. ARED specializes in developing African-language instructional and reading materials that are culturally relevant, low-cost yet high-quality, and designed to meet community needs. The Hewlett Foundation funded the pilot phase of ARED’s instructional approach for teaching reading, math and social studies in grades 1-3 beginning in 2011, and in November 2013 approved a three-year renewal grant to give ARED time to further test and solidify its approach and plan for expansion.
Ly reported that the minister of education had taken notice of ARED’s positive evaluation results and invited them to submit a plan for scaling up their model to 500 schools (a five-fold expansion). This was good news because ARED’s goal was to demonstrate that mother-tongue instruction was feasible, and would enable Senegalese children to stay in and succeed in school.
Yet, Ly had questions he wanted to tackle before he presented a plan for scaling up the approach to the minister of education: What other government institutions would need to be involved and what would it take to get them on board? Who would be responsible for training and supporting teachers? What would ARED need to do, and who would pay for the 500-school expansion? How would ARED work with many more communities to ensure the programs were designed for their needs?
At the end of the call, we agreed to help ARED find and fund a scaling-up advisor who could work with them and the ministry of education – someone with experience navigating the technocratic and political enabling conditions for scaling up, who could coach ARED through the process.
ARED is making good progress today, and Dubai Cares has joined us to fund their work. Government officials, National Parent-Teacher Association representatives and others who participated in planning workshops with ARED have become “learning champions.” And the minister for education recently endorsed ARED’s plan to scale up.
ARED still has some way to go to put mother-tongue instruction and reading materials into the hands of more of Senegal’s teachers and students. Success will depend on the ability of ARED and the ministry of education to consolidate their education alliance, mobilize longer-term funding, continuously track and learn from their progress in scaling up, and measure and communicate children’s learning outcomes. Perhaps they will be among the first clients of real-time scaling labs!
All of which leaves me wondering: How many other potential real-time scaling lab clients are out there? And how might their actions improve your work? Share your thoughts with us on Twitter @Hewlett_Found or with me @pscheid6.
It’s that time of year again—glittery solicitations from many worthy (and some not so worthy) nonprofits are piling up like snowdrifts in my mailboxes, both physical and electronic. Like it or not, fundraising for charitable organizations is a competitive sport, and the end of the year can feel like the Winter Olympics! Meanwhile, here at the Hewlett Foundation program staff await end-of-year financial reports from our current grantees, and the suspense around whose report will be ‘naughty’ and whose will be ‘nice’ is palpable.
Believe it or not, we do review these financial reports. Carefully. What are we looking for? Put simply, we want to be reassured that the grant dollars we are providing are being put to good use. This means we are looking to see whether grantees are spending the money on the staff, activities, and other material support proposed in their grant budgets, and whether the pace of spending is more or less consistent with the timeline they laid out for themselves. Of course, we understand that even the best-laid plans sometimes need to change as organizations get to work implementing their programs. Lots can, and does, happen, and we find that organizations able to anticipate, adapt, and change course in response to challenges are often the most effective at achieving their goals. On the other hand, when we see significant underspending or redirection of the grant budget, it’s sometimes an early signal that an organization misdiagnosed the problem or solution, or that its capacity to deliver is low.
Throughout the year, and now especially, we get questions from our grantees about financial reporting requirements, formats, and how to (successfully) request a no-cost extension. As a program officer, I am often surprised how long our grantees wait to communicate with us about problems they are encountering, and more importantly, how few are proactive about proposing course corrections (and where necessary, requesting a budget modification to allow them to redirect their resources and energies).
So here, in the spirit of this gift-giving season, is a bit of advice for better managing your communications about budgets and financial reporting with us, and probably most of your funders.
What determines whether educational innovations conceived on the ground ultimately result in better learning outcomes for students, and how can we take the most successful innovations to scale? A recent evaluation of part of our Quality Education in Developing Countries Initiative suggests some answers. In 2011, we awarded two grants to support grassroots innovations in improving children’s learning. These grants were given to two intermediaries, who used our funds to support organizations working to improve children’s early learning and highlight their approaches. The assumption behind the grants was that other donors and national Ministries of Education might invest in and expand promising approaches, if only they were aware of them and had proof that they worked. The intermediaries, Firelight Foundation and TrustAfrica, took two different approaches:
Firelight Foundation chose to make micro-grants in a single country (Tanzania) to community-based organizations mostly focused on early childhood education and development (pre-primary), both within and outside the formal education system. Firelight Foundation sought to:
take advantage of the close proximity of its partners to one another to share experiences, ideas, problems, and solutions;
leverage stories, shared by word of mouth between communities, about how children were doing, to facilitate the spread of innovations from one community-based organization to others;
provide a wider menu of capacity-building support to community-based organizations based on their own priorities.
TrustAfrica chose to make larger (though still relatively small) grants in four countries (Kenya, Uganda, Senegal and Mali) focusing on the development and use of mother-tongue teaching and learning materials and improved teaching practices for reading instruction during children’s first few years of primary school. TrustAfrica sought to:
leverage its Pan-African identity and its experience convening decision-makers and others concerned about education to share results and advocate for change;
create a network of grantees across countries for peer learning and to collectively share their results with donors who would be willing to support expansion of their work;
provide more targeted capacity building support to local NGOs to strengthen their monitoring and evaluation systems.
A recent formative evaluation, carried out by Management Systems International, examined some of the underlying assumptions behind the grants, what could be learned from these two “Innovation Funds,” and what recommendations could be offered to private foundations and other donors interested in scaling learning. While the two intermediaries took very different approaches, the summary report describes key findings, lessons learned, and recommendations that apply to funders more broadly. Here are a few I think are worth elevating.
Good work and experimentation is abundant, but much of it still goes largely unnoticed and unmeasured. Both Firelight Foundation and TrustAfrica found a number of promising innovations and provided them with support to further test and demonstrate results.
Both provided their grantees with critical support to strengthen their capacity to better measure the impact of their interventions on learning. Firelight Foundation’s more extensive capacity building also helped strengthen small community-based organizations in other important ways.
Both received high marks for their flexibility, adaptability, and responsiveness to grantees’ needs.
The most promising innovations are still not getting the broader exposure and longer-term support they need to expand. The Hewlett Foundation, Firelight Foundation, and TrustAfrica could have been more proactive in understanding education system dynamics, in creating more concrete plans for the innovations’ uptake, and in supporting communication with the right decision-makers (at the community or district level in the case of Firelight Foundation, or at the district or national level in the case of TrustAfrica).
The Hewlett Foundation could have more clearly communicated its expectation for monitoring and evaluation of these innovations.
What do we hope other foundations and donors will learn from our experience? Here are three key lessons:
1. Be realistic about timing. Donors and grant recipients often underestimate the time and effort required for program start-up. (For more on this subject, read Ruth Levine’s blog about the zero year.) It is also essential that they build a deeper understanding and test assumptions about the program life-cycle along the way; how to expand a promising innovation is not always evident at the start, or could change with time. The most effective organizations will need time to test, learn, and adapt.
2. Be realistic about what to evaluate, how and when. Communicate early, often, and consistently with grantees about your expectations for evaluation of innovative programs. Share resources, examples, and learning from other evaluation work with grantees to help them avoid reinvention or missteps. Many innovators will need to build capacity and get systems in place (some from the ground up) for monitoring and evaluation before they are ready for impact evaluations. Before attempting an impact evaluation, invest in other ground work, such as monitoring and evaluation systems audits (one was done during this formative evaluation), qualitative evaluations, and documenting the intervention components more specifically, including what has been learned from execution, how the approach has been adapted over time, and its costs.
3. Be realistic about the time required and the political dynamics necessary for translating grassroots innovations into something larger. Encourage grantees to understand how their approach aligns with (or not) national-level and local priorities (including parental demands); provide additional time and support that helps them develop strategies early on for sharing their results and why these matter with decision-makers at the appropriate level (again, including parents); and finally, help them assess and clarify which avenues for expansion are most viable – expansion of their own service model, government uptake, adoption by other private sector service providers or proliferation of the innovation’s principles and practices across a variety of service providers.
In 2014/2015, the Hewlett Foundation renewed its support for both Firelight Foundation and TrustAfrica to allow them to continue to work with the most promising of these innovations for improving learning for young children. Over the next year, they will measure and gather additional data to communicate how their work is affecting children’s learning and intensify efforts to connect with other educational service organizations, government officials and donors to identify opportunities for expansion.
You can also read TrustAfrica and Firelight Foundation’s own perspectives about what they learned from this evaluation process and their future plans for the program at their site. And for more about how innovations in learning get started and expand or scale up, you might also want to check out the series of case studies from the Center for Universal Education at Brookings’ Millions Learning project, and watch for more from them in 2016!
I was born January 1, 1961 to a working class family in a small town in the U.S. Midwest—a New Year’s baby. My parents weren’t concerned about whether I would finish high school, let alone the risk that I would graduate unable to read or do math well enough to participate fully in society. But UNESCO’s 2015 EFA Global Monitoring Report tells a different story for many children and their parents, even today. At the current rate of progress, it will be 2072 before all young people in lower- and middle-income countries achieve literacy.
That’s why, like so many others, I’m applauding the adoption of the Sustainable Development Goals, and especially, the ambitious fourth goal: “Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all.” The goals’ adoption this week makes me wonder: what will the prospects be for girls and boys born in 2016? How many will have all of the advantages of literacy come 2031? What’s clear is that a real, sustained commitment will be needed to achieve this ambitious goal, and early assessment of learning can play a critical role in ensuring success.
If governments, schools, and communities can deliver on the promise of learning for all children at all stages of their education over the next 15 years, a much higher proportion of these children should enter their adolescent and adult years as readers and critical thinkers. They’ll be capable of joining the workforce of the future, and of accessing and making sense of a variety of news, public health, economic, and scientific information to benefit their families, communities, and societies, or simply reading for leisure. Furthermore, as demonstrated through compelling evidence in UNESCO’s 2014 report “Sustainable Development Begins with Education,” improving learning at all stages of life will contribute to achieving the other sixteen Sustainable Development Goals. To take just two examples: having an educated mother saves young children’s lives, and education is, without question, a real poverty-buster.
If this goal is to be achieved, we’ll also be measuring the progress of the Sustainable Development Goals not by how many kids have attended or completed school, but rather, by how many are truly learning—and measurement of learning will start early and continue often. This means that parents, teachers, and decision-makers who are responsible for ensuring schools meet this challenge will know early on whether a child is acquiring the basic reading and math skills that serve as the foundation for all learning. And they’ll be able to provide further support to children who need it much sooner.
The People’s Action for Learning (PAL) Network—a growing movement of civil society organizations who are implementing citizen-led assessments of learning—has proven that it can be done. Their recent call for an early grade reading indicator as part of the toolkit for measuring progress against the equitable learning goal is critical to ensuring that five or ten years from now, children born in 2016 are not left behind. The citizen-led assessments have devised a methodology that is low-cost, low-stakes, and inclusive. The results can be easily understood and used by teachers, parents, communities, and local leaders to identify learning gaps early so that they can all work together to remedy them.
There are also emerging opportunities for others to learn from the PAL Network’s experience. Already, UNICEF’s Multi-Indicator Cluster Surveys (MICS) is tapping into the experience of the citizen-led assessments to develop new household survey modules that governments could adopt for measuring children’s early learning, as well as capturing more data about the actions that families are taking to support learning in their homes and communities.
If I don’t make it to 2031 to see what happens, perhaps a 2016 New Year’s baby will open this time capsule and judge whether we’ve all measured up to our promises to her – but hopefully she’ll be too busy reading something more fun or useful to her day-to-day life, fully confident that her future children, regardless of their socio-economic background, will have the same opportunity.
There’s no question that the potential of the ‘data revolution’ first described in the U.N.’s “A World that Counts” report has captured the imagination of the international development community, especially data-wonks and donors concerned with how the Post-2015 Sustainable Development Goals will be measured. In the many discussions that have ensued, a consistent theme has been an aspiration to realize the potential of ‘non-traditional data sources.’ Non-traditional, for these revolutionaries, means not just the oft-cited Big Data, but forms of citizen-generated information that can shed light on real-world living conditions and public opinion.
Case in point—Senegal. In February, a team at Université Cheikh Anta Diop’s Laboratoire de Recherche sur les Transformations Économique et Sociales in Dakar revealed the results of their 2014 national assessment of children’s learning known as Jàngandoo (which means “learn together”)—part of a growing movement of civil society organizations carrying out independent, citizen-led efforts to measure basic learning of children ages six to fourteen in reading and math in nine countries across East and West Africa, South Asia and Mexico. (Results available in French.)
Jàngandoo has only been up and running for three years, but policymakers in Senegal have taken notice. M. Serigne Mbaye Thiam, Senegal’s Minister of Education and no stranger to open data efforts, opened the national dissemination event, noting Jàngandoo’s contribution in providing policymakers and ordinary citizens with timely data on the status of children’s learning that is easy to understand and interpret, so that they can plan actions together to improve the quality of education. Since then, Jàngandoo has been rolling out regional dissemination events that have opened up a dialogue between citizens’ groups and local education officials about why children aren’t learning and what can be done about it.
Like other citizen-led assessments of learning, Jàngandoo takes place in homes rather than at school, and therefore captures data on children who do not attend regularly, have dropped out or never attended school, measuring learning earlier and more broadly than official national or regional assessments, which typically take place later in the primary cycle. Jàngandoo’s learning assessments are administered in multiple languages—French, Wolof, Pulaar and Arabic, depending on the child’s home language. According to UNESCO’s EFA Global Monitoring Report 2015, Jàngandoo is the only national-level assessment that has measured Senegalese children’s learning levels below Grade 9 in almost 10 years. The most recent PASEC, which measured children’s performance at the end of class 2 and class 5 in eight Francophone countries, including Senegal, is from 2007.
Unlike other citizen-led assessments, Jàngandoo also measures children’s knowledge of their own culture and environment, including certain aspects of sustainable development. Such locally-driven experimentation can also contribute to how UNESCO and others think about localizing how education for sustainable development gets measured in West Africa, a region where the effects of climate change and drought are an ever-present reality in nearly every child’s life.
Jàngandoo is an initiative of the Université Cheikh Anta Diop’s Laboratoire de Recherche sur les Transformations Économique et Sociales (Laboratory for Research on Economic and Social Transformation—LARTES). LARTES works with a network of twelve civil society organizations and a local technology firm that deploy a cadre of trained data gatherers armed with PDAs to reach all 45 departments of Senegal—all of whom make up what is known as Jàngandoo.
What’s striking about LARTES is that its staff is composed mainly of young men and women representing a new breed of independent researchers in West Africa. If there is a data revolution, these are the revolutionaries. What’s their latest innovation? Ask Professor Abdou Fall, Jàngandoo’s leader, about his vision for on-demand, à la carte data services for local governments that aspire to develop data-informed plans for improving their schools. Their first client: the Mayor of Dakar.
All this takes coordination and careful oversight, but also a particular vision for how data can be used to solve real development challenges. The sort of vision set out by the Africa Data Consensus for bringing different data communities together to generate “data for public good and inclusive development”.
While the global-level conversations about the data revolution are creating needed energy and networks, a cadre of data revolutionaries in civil society and tucked away in government agencies is also hard at work. Their progress often depends upon external funding that runs out before they can institutionalize their efforts. Perhaps what is needed then is a revolution in financing that will nurture the work of country-level data heroes like the LARTES team at Université Cheikh Anta Diop in Senegal. People who think differently about “data for whom,” “data for what” and the speed at which data can be made useable to change people’s lives for the better.
Despite the growing number of children in developing countries who are attending school, many are not reaching the levels of numeracy and literacy that they should. Some have called this a “learning crisis.” According to Rakesh Rajani, Head of Twaweza, previously “it [was] very hard to make that case and get policymakers to pay attention because we didn’t have data, we didn’t have evidence.” But now we do.
Now, civil society organizations across the world from East Africa to Southeast Asia are empowering communities to collect and use this data to demand better results for their children through citizen-led assessments. In eight countries, these groups are conducting household surveys to better measure all children’s learning, regardless of their school status.
The video below, produced by Made in Africa TV in Tanzania, describes the program and illustrates both the need for collecting household-level data on children’s learning, as well as the opportunities for better policies and action presented by this innovative approach.
The Hewlett Foundation announced a pledge of $18.5 million to the Global Partnership for Education’s 2015-18 replenishment today. The pledge, which will be fulfilled through the Foundation’s grantmaking and technical support to civil society organizations and other key actors in global education, will support capacity building for better systems for learning assessment and the use of assessment data to inform planning and improve learning outcomes. It will also support civil society organizations that are using citizen-led, household assessments of learning to raise awareness about children’s learning status and to motivate action at the national, sub-national and community levels for improved learning.
Last month, my colleague Dana Schmidt wrote a blog post about what the Hewlett Foundation and its grantees have learned about improving children’s early learning from the Quality Education in Developing Countries Initiative. Under this Initiative, our grantees implemented a variety of instructional models both within the school day and after school hours, with children enrolled and with those who were not. Many of these were evaluated using randomized control studies or other quasi-experimental evaluation designs to determine the impact of these interventions on children’s learning.
At the Hewlett Foundation, we encourage our grantees and also try ourselves to talk about and learn from failure, or at least from those things that don’t always go according to plan! As the (relatively) new kid on the block with the Global Development & Population team, I thought I would offer a few observations about what we learned about the evaluation process itself, recognizing that hindsight is 20-20 (or some approximation thereof). Most of this wisdom comes directly from conversations with our grantees and colleagues since I’ve joined the Foundation.
We underestimated the time that was needed for some of the instructional models to be more fully developed, and in hindsight, should have allowed our grantees more time to work the kinks out before carrying out some randomized control evaluations. In some cases, randomized control study designs were just not possible, and so we had to be flexible about evaluation methods. One of the smartest things we did was to encourage grantees and evaluators to work closely together—so evaluators could better understand the instructional model and the context, and grantees had input in framing the questions and helping determine how best to measure learning outcomes.
While the randomized control studies that we commissioned were able to measure the impacts of these instructional models on learning, they were not able to sufficiently unpack the effects of each model’s most essential elements. Practically speaking, this has made it difficult to tell which elements contribute most to learning improvements, and thus are the highest priority for scaling up, and which could be dropped or de-emphasized, depending on resource availability. Despite these challenges, the meta-analysis by Patrick McEwan of Wellesley College goes a long way toward unpacking what is known about improving learning, based on a review of dozens of randomized control studies conducted over the past 20 years. We think it is essential reading. We will continue to work with our grantees to help them better unpack the elements of their instructional models wherever possible; we encourage others to take this essential step before initiating randomized control studies in the future.
We also underestimated the challenges associated with completing cost analyses of these instructional models. But where we did succeed in doing so, it has produced valuable information that our grantees and policy makers can use for identifying possible areas for streamlining or improving efficiencies that will enable scale-up (e.g. rethinking or restructuring teacher training, mentoring and support, and options for getting instructional and reading materials into the hands of teachers and children most affordably).
Finally, we learned that it is especially important to spend more time and energy figuring out possible delivery channels and constraints to scale-up from the outset. We assumed that we and our grantees would be able to build ownership and political will for scaling up based on evaluation results coupled with brokering new partnerships and financing relationships. We did not fully appreciate all of the systemic barriers to change in environments where incentives and accountability are not currently structured around learning outcomes. We did not sufficiently plan how to manage the discontinuity in political will when reform-minded leaders and other allies left office.
We also could have done a better job of structuring our support to some grantees to enable them to work with policymakers and other key stakeholders to answer vital questions about scaling up, such as: When to scale, and whether that means deepening services and impact in existing locations, viral spread of innovations, or scaling programs vertically through government or other key providers to reach more children with a basic package of improvements? How to do this without losing the basic integrity of the instructional models? Who would be responsible for quality assurance? Who pays for what, and how to achieve the necessary commitment and clarify the roles of the various key actors?
So what does this mean for the remaining nine months or so of this time-bound initiative that we started several years ago? The Hewlett Foundation is not funding the full scale-up costs of these programs—we don’t see that as our role in getting sustainable solutions in place, and simply don’t have the resources to do so. Rather, we are supporting our grantees to consolidate and expand the most promising instructional models, and also to use the results of evaluations and cost analyses to their best advantage. We are providing more intensive technical support and capacity building to a few of our existing grantees to assist them in pursuing opportunities that have emerged for scaling up. We also hope to document and share some of our experiences through our participation in and support of the Brookings Institution Center for Universal Education’s “Millions Learning” work. Finally, we will be intensifying our support for household-based assessments of student learning, like ASER in India, Uwezo in East Africa, and Beekunko and Jàngandoo in Mali and Senegal, in order to better capture data on all children’s learning and inspire both communities and policy-makers to take notice and take action to improve children’s learning.