Are you a researcher or organizational designer interested in how governments can better serve citizens and how behavioral science can nudge governments to respond to citizen feedback?

The William and Flora Hewlett Foundation’s Global Development and Population Program seeks expressions of interest (EOIs) for a six-month consulting engagement to help design a potential longer-term research collaborative to study the application of behavioral insights to civic monitoring platforms. Interested candidates should answer the following four questions in no more than three pages using this template. Complete EOIs are due by December 15. Based on the interest we receive, we will request full proposals from one or more candidates.

  1. What is your organization’s experience applying behavioral insights to improve governance? Please also briefly describe your recent experience running experiments with government officials and civil society organizations.
  2. What is your organization’s technological experience and capacity to run experiments on digital and mobile platforms, including any use of A/B testing, user surveys, and social network analysis? (A minimal sketch of the kind of A/B test we have in mind appears after this list.) If you don’t have this capacity in-house, please describe any relevant partnerships.
  3. What is your international experience, especially in Latin America, South Asia, and sub-Saharan Africa?
  4. Please describe your organization’s recent experience delivering rigorous, concise, and timely results that informed strategic decisions.
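
To illustrate what we mean by A/B testing in question 2, here is a minimal, purely hypothetical sketch in Python: two reminder-message variants are sent to public officials, and the resolution rates of citizen reports are compared with a standard two-proportion z-test. The message variants, sample sizes, and outcome counts are invented for illustration, and this is one common analysis approach, not a required one.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical A/B test: two reminder-message variants sent to public
# officials; the outcome is whether the cited citizen report was resolved
# within 30 days. All numbers below are made up for illustration.
resolved_a, sent_a = 118, 400   # variant A: plain reminder
resolved_b, sent_b = 152, 400   # variant B: reminder citing neighborhood data

p_a = resolved_a / sent_a
p_b = resolved_b / sent_b

# Pooled two-proportion z-test for a difference in resolution rates.
p_pool = (resolved_a + resolved_b) / (sent_a + sent_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))   # two-sided p-value

print(f"Resolution rate A: {p_a:.1%}, B: {p_b:.1%}")
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```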

Additional Context:

Throughout this year I’ve spoken with dozens of government officials, researchers, technologists, and civil society organizations that have experimented with online and offline reporting mechanisms to collect citizen feedback about the performance of public services. (Some of those platforms were featured in a 2016 evidence review by Tiago Peixoto and Jonathan Fox.) Three common challenges surfaced throughout my discussions. First, the vast majority of platforms haven’t managed to attract a critical mass of users. Second, the majority of users report only once without engaging further. Third, most active users come from the wealthiest neighborhoods, leading to concerns that such platforms drive public resources away from poorer neighborhoods.

These three challenges, common across more than a dozen platforms, prompted Flora Family Foundation Summer Intern Maya Delaney and me to wonder whether some fixes might be found in behavioral research. We wrote a blog post in September seeking input on whether we should fund a research initiative to apply behavioral insights to attract more active and diverse engagement on civic monitoring platforms. The rich feedback we received challenged our initial assumptions and ultimately convinced us that considerable additional work is required to design a research initiative that will provide useful knowledge to decision makers hungry for better evidence.

Three pieces of constructive criticism stood out:

  1. The research should focus more on optimizing the responsiveness of public officials than on encouraging greater uptake by citizens. We don’t have evidence that increased citizen participation leads to greater government responsiveness, so the behavioral experiments should prioritize the behaviors of public officials first and citizens second. How can we get better at using feedback and technology to nudge public service providers to be more responsive?
  2. In our original blog post, we conflated two different but related feedback pathways. In the first, a citizen describes her experience with a service to inform the decisions of other citizens (similar to a restaurant review website). In the second, a citizen registers a complaint with the expectation that the relevant authority will resolve it. We need to unpack the assumptions behind each pathway (and how they relate) to design experiments that yield useful insights.
  3. We should consider the tradeoffs between research that applies a number of behavioral experiments to a single platform and comparative research that explores different approaches across different types of platforms. If we study fewer platforms, the findings will likely be easier to apply but less generalizable: very useful to a few stakeholders, less useful to the broader field.

This helpful criticism convinced us that we should first commission a design phase to consult with relevant stakeholders in technology companies, governments, research institutions, and civil society organizations to inform the design and scope of such a research initiative. We need some smart people to help us think through the following questions, and surely many more:

  • Should the research initiative focus on just one platform where government officials are eager to learn how they can test behavioral insights to increase responsiveness to citizen reports? Or should the initiative compare insights across many platforms? Should those platforms all have similar or different approaches?
  • How do the operators of citizen feedback platforms currently commission research and impact evaluation? What are their priority learning questions? This should include, at a minimum, input from administrators of relevant citizen feedback initiatives in Mexico City, Cape Town, Sao Paulo, Buenos Aires, Montevideo, Los Angeles, New Delhi, Austin, Madrid, and Palmares, as well as Kenya’s Huduma Program, Indonesia’s LAPOR, UNICEF’s U-Report, and Maji Voice.
  • How do technology providers of citizen feedback platforms test and experiment to optimize for responsiveness, user satisfaction, and veracity of reports? What are the priority learning questions of citizen feedback technology providers, including FixMyStreet, Accenture, Nyaruka, SAP, SeeClickFix, SalesForce, PublicStuff (Accela), everythingcivic, Microsoft, and CitySourced? How have these technology solutions integrated social media as a feedback channel, and what research methods does such integration enable? Which of these companies are most interested and best positioned to be involved in such a research initiative?
  • What can be learned from private sector technology companies that have already studied and applied behavioral insights to increase responsiveness and engagement on their platforms? This includes but is not limited to Yelp, TripAdvisor, Foursquare, SurveyMonkey, Facebook, Google Maps, and Uber. Are researchers from any of these companies interested in participating in the research initiative in some capacity?
  • What are the priority learning questions from academic researchers who study citizen feedback platforms? Which researchers are interested in and best positioned to contribute to the research initiative?
  • How are existing networks currently positioned to facilitate peer learning among researchers, technologists, and public officials interested in citizen feedback platforms? This should include but not be limited to the Open Government Partnership’s Subnational Program, Feedback Labs, What Works Cities, and WeGO.
  • Finally, given our interest in supporting research that could inform the decisions of platform administrators, technology providers, public officials, and policymakers, what organizations might be well positioned to host and coordinate such a research initiative? What would a sensible size and duration be? What are the risks and challenges, and how can they be mitigated?
Whew, that is a lot of work for a six-month contract! We’re looking for a group that can prioritize quickly, move fast, and communicate clearly. The findings of this work will be made public and will, we expect, inform the terms of reference for the resulting research initiative, assuming it still seems like a good idea at the end of the design phase. Please direct any questions to dsasaki@hewlett.org.