But are those stories having any effect? Three new studies just came out — overseen by API, and supported by the Hewlett Foundation, along with our colleagues at the Democracy Fund and the Rita Allen Foundation. One of these, Estimating Fact-checking’s Effects—from Brendan Nyhan of Dartmouth College and Jason Reifler of the University of Exeter—gets precisely at this question of impact. Here are some of the study’s most interesting findings:
First, fact-checking’s “favorability” ratings differ by political party affiliation. While the views of “low-knowledge” respondents don’t differ much depending on whether they are Republicans or Democrats (29% vs. 36% view fact-checking favorably, respectively), party makes a real difference for people with high levels of political knowledge: “just 34% of Republicans … have very favorable views of fact-checkers compared with 59% of high-knowledge Democrats.” Nyhan and Reifler hypothesize that this may be due to Republicans’ and conservatives’ tendency to “hold less favorable opinions of the news media” overall, coupled with a greater likelihood of believing that the media have a liberal bias.
Then they looked at impact. Here there were (at least) two big questions:
Belief Accuracy: One risk inherent in fact-checking stories is that “exposing people to false claims in an effort to debunk them” can lead to a situation where readers recall the misinformation more clearly than they remember the intended correction. Here the question is: Does exposure to fact-checking content increase “belief accuracy”? Nyhan and Reifler found, through post-exposure surveys, that “the rate of correct answers increased from 12% to 19% among people with low political knowledge,” and was even more effective among people with “high political knowledge” (from 22% to 32%).
Motivated Reasoning: Many experimental studies in psychology and political science have found that new factual information doesn’t necessarily change erroneous, pre-existing beliefs. In fact, it can actually backfire for some groups (e.g., depending on the issue, context, and messenger, counterfactual information can cause partisans to more deeply entrench in their pre-existing beliefs). Thus Nyhan and Reifler expected partisans to be more likely to learn/recall “belief-consistent” facts.
True to their hypothesis, the researchers found that corrections of inaccurate statements are more persuasive when the reader and the politician belong to the same political party. “Readers tend to think the opposing party politician’s statement was false, even before they read the correction.” This suggests that fact-checking may be particularly important during primary contests (though fact-checking is currently more common during general elections).
Contrary to their expectations, they found that “correct answers increased somewhat more for belief-inconsistent facts (from 9% to 20%) than for belief-consistent facts (from 14% to 22%).”
“Republican knowledge of belief-inconsistent facts increased by five percentage points and by ten percentage points for belief-consistent ones. The pattern for Democrats is the opposite, however — knowledge increased by 15 percentage points for belief-inconsistent facts compared with eight percentage points for belief-consistent facts.”
It’s worth noting that the public at large is not the only potential audience for fact-checking, nor necessarily even the most important one. In a prior article, Nyhan also explored the impact of increased fact-checking on politicians’ behaviors—that is, testing whether there’s a deterrent effect.
In a 2014 study of 1,200 legislators in nine states, Nyhan and Reifler sent candidates and policymakers reminders about “the risks to their reputation and electoral security if they are caught making questionable statements.” The result? A 55% reduction in the likelihood of receiving a negative PolitiFact rating, or having the accuracy of their statements questioned publicly, in comparison to legislators who were not sent reminders. That said, state legislators are very seldom fact-checked, anyway, and it’s not clear whether the same effect would hold for Congress.
Nevertheless, fact-checking seems to me a promising development in the journalistic field. For it to succeed, newsrooms (and, when relevant, funders) still need to wrestle with questions like when and what to fact-check in order to maintain both relevance and bipartisan credibility, how to scale the reach of existing efforts, and whether and how to expand beyond fact-checking politicians and pundits to other purveyors of misinformation. But those are topics for another day. For now, I’ll just say I’m grateful to API and all the researchers we’re supporting—and excited for the next round of research releases!
State-of-the-art "Like" Button, Circa 1952. (Photo Credit: Flickr User Mpls55408, licensed under CC BY NC 2.0)
Spoiler alert: The campaigns of the future are not going to look like the campaigns of the past. Just as the introduction of television changed the door-to-door game decades ago, so too is the rise of the Internet and the ad-targeting available through services like TiVo changing the TV ad game that has dominated in recent years. And with fewer people watching commercials, the importance of social media companies will only expand.
That was my biggest takeaway from a discussion I attended late last month exploring the effect of changing technology on political campaigns. Sponsored by the Democracy Fund and hosted at Stanford University by Law Professor Nate Persily, alongside Heather Smith, the former Rock the Vote President, and Ben Ginsberg, former national counsel to both the Republican Governors Association and multiple presidential campaigns, it was a veritable who’s who of the political campaign world. Participants included Ann Ravel, Chair of the Federal Election Commission, as well as several other commissioners from the FEC and FCC; the founders of both Change.org and Brigade Media; representatives from Google, Facebook, Twitter, and Tumblr; the head of Catalist, the self-styled “data utility” for progressive organizations; representatives from the Republican National Committee and the New Organizing Institute; and academics from Harvard, Stanford, UNC Chapel Hill, and elsewhere. And of course some of us funders.
The day was divided into four sessions, looking at:
1. Campaign Communications
2. Voter Engagement and Mobilization
3. Campaign Fundraising
4. Campaign Finance Policy
The first session on “The Future of Campaign Communications” explored how new technologies will impact the prevalence of television ads, and the costs of campaigns. The group’s best guess was that, today, only about 3% of campaign ad dollars are spent on digital – maybe more in presidential elections. The bulk is still spent on TV (though of course TV doesn’t play a big role in less competitive races, and is not a factor at all downballot).
As outreach moves to the internet, how will new microtargeting capabilities affect campaigns? The biggest questions were around truthiness and tone.
While broadcast communications make it harder to lie, microtargeting makes misinformation much easier to propagate. But of course both journalists and political opponents are highly incentivized to detect others’ mistruths. And Facebook coders are working on algorithms to validate statements’ authenticity. The key may lie in whether or not outsiders can see what’s going on. The room was less worried about tone on platforms like Facebook that don’t allow for anonymity. And incentives for positive play also exist at the campaign level—a main driver for placing ads on Facebook is the “propensity to share” (PTS), and PTS has proven to be better when ads are positive.
The second session explored “The Future of Voter Mobilization and Engagement.” Much here has also changed, though in ways I think most of us are familiar with. Centralization of data services within groups like Catalist has had a big impact—no more mom-and-pop data vendors. Today’s door-to-door canvassers can access a wealth of data about their targets, and input new information as they go.
One thing here remained clear: Campaigns will continue to target the most likely, most partisan voters. Participants agreed that “foundations must do the harder work” of encouraging less partisan, less engaged citizens (no matter their voting preferences) to take part in elections.
The third session moved on to explore “The Future of Campaign Fundraising,” looking at questions around how tech advances will alter the methods, tone, and level of participation in campaign financing.
New tech has surely made it easier to give, leading to a higher percentage of people contributing to campaigns. That said, it remains a small group—a universe of only about ten million donors. And a higher and higher percentage of the overall total is coming from the wealthiest 0.01%. In the early 1980s, that group accounted for 15% of the total. Now it’s closer to 40%.
The day concluded with a panel on “Public Policy and the Campaign of the Future” looking at campaign regulations going forward. Here the lawyers in the room conceded that the “rules have become so complicated that even the top lawyers wouldn’t get all the rules right on a test.” Which didn’t make me feel any better.
From this session, two core questions arose:
1. How powerful will the parties be? They were already weakened by McCain-Feingold, and now more and more groups like Catalist are taking over traditional party roles. That said, parties remain the gatekeepers of information and relationships that may be hard to replicate.
2. In the future, could the norms and algorithms that Google and Facebook come up with be more important than the rules from the FEC?
It was a fascinating day. The one thing that became clear to me is that our campaigns are undergoing a generational transition, and no one person or group has a monopoly on understanding what the future will look like.
Even before voter turnout hit a 70-year low in 2014, many of us in the democracy reform field were in conversation about whether and how we might improve voter turnout, and civic engagement writ large. This, in part, is why the Hewlett Foundation recently announced that we will be partnering with Knight Foundation, the Democracy Fund, and the Rita Allen Foundation on a $3 million challenge to identify how we can better inform voters and increase civic participation before, during, and after elections.
A recent blog post from Knight Foundation elaborates on the goals of this election-focused Knight News Challenge: “We are looking for innovative ideas ranging from new ways that news media and others can inform voters, to making voting easy, efficient and fair, to converting election participation into longer-term civic engagement, on the local, state or national level.”
The challenge opened today, but we have been focusing on these issues for some time. Our Madison Initiative seeks to create opportunities for deliberation, negotiation, and compromise in the US Congress. But it’s clear that civic engagement is an essential component of strengthening representative institutions. Indeed, many Americans opt out of the civic process. As the Pew Research Center reported last year, “[M]any of those in the center remain on the edges of the political playing field, relatively distant and disengaged, while the most ideologically oriented and politically rancorous Americans make their voices heard through greater participation in every stage of the political process.”
The Madison Initiative is strictly nonpartisan and agnostic about specific policy outcomes. But we think that representative institutions can be, well, more representative when more of the electorate participates in the process.
For the Hewlett Foundation, this partnership represents a unique opportunity to work with leading peer funders to support new and promising ideas from people across the media, technology, and election administration fields. We hope to see ideas and collaborations from civic technologists, academics, students, startups, nonprofits, governments, and individuals.
Finally, a bit of history and a few fun facts:
This is Knight’s 13th News Challenge, and they often collaborate with different partners.
The platform they use for submissions and feedback, available at newschallenge.org, is tailor-made to create a more collaborative, open discussion and process; it’s built on OpenIDEO’s OI Engine.
Knight News Challenges attract hundreds of entries but usually result in fewer than 25 winners.
Winners receive funding of anywhere from $35,000 to several hundred thousand dollars.
Past winners have included Internet entrepreneurs, emerging media innovators, libraries, and legacy newsrooms. For example, DocumentCloud, which helps journalists analyze, annotate, and publish original source documents, is being used by more than 200 newsrooms nationwide. Ushahidi has helped map information in crises from Haiti to Japan. hNews, a project by Web inventor Tim Berners-Lee and the Media Standards Trust, allows readers to see the source of information in online articles. It is used by more than 200 news organizations, including the Associated Press.
The submission process is intended to attract a diverse range of ideas from many different people and organizations. The application questions are simple:
In one sentence, please describe your idea as simply as possible.
Briefly describe the need that you’re trying to address.
What progress have you made so far?
What would be a successful outcome for your project?
Please list your team members and their relevant experiences/skills.
See, it’s pretty straightforward.
The News Challenge opened on February 25, 2015, and closes on March 19. More information is available on the News Challenge’s website. I have no doubt that the ideas that arise will engage and inspire us all!
To submit an entry for the Knight News Challenge or provide feedback on other submissions, visit newschallenge.org. Knight News Challenge: Elections closes on March 19 at 5 p.m. (EST).
As part of the Hewlett Foundation’s Madison Initiative—aimed at improving opportunities for better dialogue, deliberation, and compromise in Congress—I’m spearheading our work around civic engagement and voting. Two questions I’ve been wrestling with lately are whether, and to what end, philanthropy might help to improve voter turnout.
Many US foundations and nonprofits operate in this space. In 2011, US funders granted more than $31M directly in support of issues around voting. In 2012, an election year, this figure was closer to $60M, alongside at least another $300M towards civic education and civic engagement efforts focused on specific issues. (These figures pale in comparison, of course, to the more than $6B spent by partisan actors seeking to drive turnout in the 2012 elections.)
There appear to be (at least) three possible goals that the nonprofit sector has pursued around voter turnout:
1. Quantity. Increasing American voter turnout beyond the averages the US has experienced since the 1940s: 55-63% in presidential elections, dropping to 34-42% in mid-terms, and reaching the lowest level in more than seven decades in 2014 (as shown in the New York Times chart at right). This is where the majority of voter engagement efforts appear to be focused, or at least where much of the dialogue is focused.
2. Quality. Improving the understanding and thoughtfulness of voting decisions made by existing voters (i.e., without necessarily expanding the electorate). There is, frankly, much room for improvement here—for example, several recent state-based studies have found that 75% to 83% of voters find ballot measures too complicated and confusing to understand (yet 66% have cast ballots on measures with which they are unfamiliar). Knowledge about candidates—both their stances on policy issues and their approaches to the policymaking process—is similarly lacking.
3. Representativeness. Changing the composition of the electorate so that the voting population more closely mirrors the socio-demographic profile of America at large. On average, US voters are consistently older, whiter, wealthier, and more educated than the broader US population, running the risk that public policies are misaligned with the interests of the average American. Presumably actors with this goal believe (likely rightly) that the policy preferences of a more representative electorate would differ, and that policymakers’ decisions would change to reflect them.
These three goals—quantity, quality, and representativeness—are each fairly distinct. And different organizations often have very different goals in mind, even when we’re all talking about “improving voter turnout.” But the extent to which we are pursuing one, two, or all three of these goals isn’t always a focal point in our conversations. Instead, the conversation often jumps quickly to strategies, tactics, and the grantees that are executing them.
Strategic differences may not matter where there is agreement on tactics—similar tactics can advance different strategic ends. But it is nevertheless important for us to be aware of them, if and when the time comes when strategy suggests a change in tactics, and different strategies may suggest different changes. I’ll quote Hewlett Foundation President Larry Kramer here: “Madison and Hamilton had very different goals in mind in the 1780s, but both saw their goals advanced by a stronger federal government, so both pushed for the Constitution. Once it was up and running, the different goals emerged starkly, so they split over what the national government should do.” At least understanding when fellow funders are pursuing different goals, strategies, and tactics can help clarify what can otherwise be a murky conversation.
I continue to be fascinated by the robust and complex field of academic research surrounding questions of voting—a field that started in the 1920s but has grown rapidly in recent years. And I continue to appreciate opportunities to get more specific with colleagues in the democracy reform community about what goals we are pursuing in the voting space, where and how we are pursuing them, and why.
According to many newsrooms, fact-checking articles are among the most requested, and most read, content they produce. In recent years groups like FactCheck.org, FlackCheck.org, the Washington Post’s Fact Checker (home of the “Pinocchios”), PolitiFact, and PunditFact have emerged to address growing questions of truth in politics.
Fact-checking is done with at least three possible goals in mind, each with distinct audiences associated:
For the broader public, to improve understanding of candidates and issues by countering “misinformation.”
For other journalists, to help shift the culture away from he said/she said coverage towards greater fact-checking.
For politicians, political pundits, or (more recently) news networks, to hold them accountable and deter erroneous statements.
Last year, the Madison Initiative helped to support a project by the American Press Institute (API) to research practices in this space. In December, I attended an API conference on the topic in Arlington, Virginia. More than 50 people turned out—academics from journalism and communications schools across the country, journalists from a variety of outlets, political ad companies and consulting firms, and of course a small handful of funders like me. The event covered a range of topics including recent research, journalists’ experiences from the 2014 mid-terms, and new tools and formats under development. Several highlights emerged from these discussions.
Perhaps the most fascinating research looked at which erroneous, politically important opinions are most commonly held by the public. To take one example, a plurality of Americans believes that China holds more than half of the US debt. How much does it actually hold? Eight percent.
I was also interested to learn that in the 2014 midterms, ads from outside groups (e.g., Super PACs) were more prone to misinformation than those from the candidates themselves, presumably due to a lack of accountability. Not surprisingly, closer races appear to inspire more negative (and more distorted) ads. And fact-checks are increasingly being used as “weapons” by opponents on the campaign trail.
It was likewise impressive to learn that the Washington Post’s Truth Teller has built structured data into its fact-checking, in an effort to address the fact that the same mistruths are often repeated over and over again. Their prototype algorithm tests statements against a database of thousands of prior fact-checks, helping reduce the burden on human fact-checkers. Even though this is a relatively young tool, it appears to be surprisingly accurate, identifying erroneous claims 75-80% of the time.
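The general idea behind such a tool, matching a newly heard claim against an archive of already-checked claims, can be illustrated with a toy similarity matcher. To be clear, this is a hypothetical sketch, not Truth Teller's actual pipeline (which involves speech-to-text and far more sophisticated language matching); all data and names below are invented for illustration.

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    """Overlap between two token sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match_claim(statement, archive, threshold=0.5):
    """Return the closest prior fact-check for a statement, or None."""
    best, best_score = None, 0.0
    for check in archive:
        score = jaccard(tokens(statement), tokens(check["claim"]))
        if score > best_score:
            best, best_score = check, score
    return best if best_score >= threshold else None

# Invented mini-archive of prior fact-checks
archive = [
    {"claim": "China holds more than half of US debt", "verdict": "False"},
    {"claim": "Unemployment doubled last year", "verdict": "False"},
]

# A restated version of an already-checked claim matches its archived verdict
result = match_claim("China now holds more than half of the US debt", archive)
```

A real system would use richer similarity measures, but even this sketch shows why structured data matters: once a claim is checked, every later paraphrase of it can be flagged automatically.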
The day concluded by breaking into groups to brainstorm opportunities for research, distribution, and new tech tools. Overall, the discussions raised a few questions for me:
How can the fact-checking industry possibly keep up? One North Carolina reporter noted that, after catching a particularly egregious lie, the perpetrator (evidently a campaign manager) joked that “if you didn’t give us a red light, we wouldn’t have been doing our job.” Even assuming that most campaigns are honest, reporters are still clearly out-gunned by the campaign industry.
How can fact-checkers manage across distribution channels? Current fact-checking focuses primarily on TV ads and shows and online news content, and is not equipped to cover print, mailers, or radio as robustly. National, TV-based campaigns are clearly easiest to check—but with the rise of TiVo and micro-targeting, future political communications are unlikely to stick to these channels.
Most importantly, what difference does it make? Behavioral scientists speak to the complexity of correcting misperceptions once people have already latched onto them—in some cases finding that attempts to correct the erroneous beliefs might instead just serve to more deeply cement falsehoods. When it comes to elections, how often do people actually change their vote based on new information about candidates’ truths or mistruths, rather than just voting their party line? And how often are fact-checkers simply “preaching to the choir,” providing ammunition to further inflame party loyalists about how dishonest the other side is, rather than creating room for real learning opportunities across party lines?
On the one hand, I would be hard-pressed to just “give up on the importance of facts.” On the other, it remains unclear whether and when facts actually do matter in people’s decision-making, and what might make them matter more. Scholars and fact-checkers alike are now trying to find clearer answers to those questions.
Science plays a pivotal role in informing the work of the social sector—not to mention business, government, and society at large. (Fun fact: A quick scan of our own grants suggests that since 2001 the Hewlett Foundation has supported almost 1,000 research projects with over $390M in funding—and these are just the grants with “research” in the title!)
Scientific facts are solid, trustworthy, reproducible. That idea, that the same experiments should always get the same results regardless of who performs them, is in fact a cornerstone of science’s claim to “objective truth.” But as Mark Twain so eloquently put it, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
A growing chorus of voices is alleging that Twain’s jibe can be applied to a great many scientific findings, both in the natural (biological and physical) and social sciences. The journal Nature has developed an archive of reports and analyses called Challenges in Irreproducible Research. Articles from The Atlantic, The Economist, and The New York Times offer a few recent examples of the purported problem in the popular dialogue. There’s even a forthcoming Sony/Participant Media film, Merchants of Doubt, related to the topic.
Glenn Begley, chief scientific officer at TetraLogic Pharmaceuticals, described an experience he had while at the drug company Amgen: He and his colleagues were unable to replicate 47 out of 53 “landmark” papers about cancer—all of which had appeared in top journals. Evidently some of the results could not be reproduced “even with the help of the original scientists working in their own labs.”
While much has been made of the potential problems with scientific integrity, we need to hesitate before jumping to any firm conclusions. If studies cannot be replicated, it could be for two very different reasons:
First, dishonesty is, of course, one possibility. Certainly there are incentives for researchers to discover “surprising” results. Academics need to get their work published in well-regarded journals in order to secure funding or tenure. Top journals’ rejection rates can exceed 90 percent and, unsurprisingly, studies with surprising findings are more likely to make the cut. Alternatively, according to the aforementioned article in The Atlantic, “maybe sometimes it’s the questions that are biased, not the answers”—as with medical research studies comparing a company’s new drugs “against those already known to be inferior to others on the market.”
Second, and perhaps less obvious, even when researchers act with complete integrity, conventional practices common in their fields may produce findings that are misleading or invalid. According to The Atlantic, these challenges could range “from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.”
These kinds of concerns have inspired a number of responses, including:
The National Academy of Sciences has convened an expert panel to address the issue;
The National Science Foundation’s Social, Behavioral, and Economic Sciences directorate has formed a task force on the topic.
At the Hewlett Foundation, we’re supporting scientists at Rutgers and Stanford Universities, in collaboration with Stanford’s Center for Advanced Study in the Behavioral Sciences, to assess the prevalence of these problematic practices, alongside the feasibility and potential impact of proposed interventions to improve “scientific integrity.”
In the meantime, many scientific journals have recently adopted policies designed to elevate the validity and integrity of the research they publish. According to Stanford and Rutgers, these policies include:
Requiring minimum sample sizes;
Providing additional space for methods sections in order to increase transparency;
Inviting authors to complete checklists of high-integrity practices (e.g., reporting all variables, all conditions, etc.);
Requiring authors to post their data online for others to access and review;
Stamping articles with badges for high-integrity practices (transparency, pre-registering hypotheses and methods to distinguish theory-testing from exploratory analyses, making data available online);
Setting aside space to publish attempted replications.
It remains unclear how big the problem really is. We hope the research we are supporting will help to … um… clarify whether the research our researchers are researching is flawed. In the meantime I’m looking forward to Merchants of Doubt — a comedy, I think?
I recently attended a conference in Chicago on how to improve statehouse reporting—looking not just at the reporters physically in our nation’s 50 state capitols, but also those covering the effects of Statehouse policies on agencies, departments, and the executive branch.
Why is this important? (At least) two reasons: First, fully half of the state legislatures that met in 2012 passed more bills in one year than Congress passed in two years. That’s a lot of bills.
Second, just as with the broader field of journalism, statehouse reporting has experienced dramatic reductions in coverage in recent years. Pew Research Center estimates a 35% decline in statehouse newspaper reporters since 2003, an even steeper decline than that in newsrooms overall during the same period. That leaves fewer than 1,600 journalists in America’s capitols—and only 47% of those are full-time. That averages out to 15 full-time reporters per state, but the actual numbers vary widely, from a high of 53 in Texas to just two in South Dakota. According to Pew:
“Less than a third of U.S. newspapers assign any kind of reporter—full-time or part-time—to the statehouse.”
“Fully 86% of local TV news stations do not assign even one reporter—full-time or part-time—to the statehouse.” This is especially problematic in that most Americans (particularly less ideological ones) still get the majority of their news from local TV.
“Students account for 14% (223 in all) of the overall statehouse reporting corps.” In fact four states—Missouri, Nevada, Kansas, and Arizona—have more students than FTEs.
We discussed three primary goals that a healthy Statehouse reporting field would help to support:
Holding officials accountable (and thereby helping to deter bad behavior)
Engaging the public with relevant, useful information
Leveling the playing field between special interests and the public
We then discussed a LOT of barriers to these goals—the discussion filled up four or five flipcharts (in pretty small print). These fell into a couple of loose, often overlapping categories which I’ll attempt to summarize here:
Lack of sustainable business models. This was the background to all of our discussion.
Public trust in information. Including discussion of coverage that is more “horse-racy” than substantive, polarization of the media, lack of journalistic standards, and the increasing influence of special interests.
Public engagement with the news. Including news’ lack of nuance / personalization, and the perceived lack of understanding (on the part of news producers) of “consumers’” preferences. All driving low public interest in political news.
News quality. Including concerns that news has become more “reactive than proactive” and that so little is translated into Spanish or other languages. This also touched on the lack of “data or journalistic standards” and of “data interoperability” (e.g., overarching taxonomies) that would allow data to speak to each other across geographies or organizational silos, which would allow for better trend interpretation.
Reporter efficiency. Including the lack of training and institutional knowledge, declining reporter relationships (e.g., access to tips, time to conduct interviews, etc.). This discussion also touched on reporters’ limited access to (and ability to interpret) public records.
Government accountability. Including the lack of government transparency, as well as the perceived shift of government resources away from governing towards communications/PR.
The (increasingly?) negative tone of news. Including whether / how this serves to undermine other democratic goals of civic engagement.
Undaunted (well, maybe a little daunted), we then broke into groups to brainstorm solutions. Ideas included:
Creating either a national hub or regional hubs of accountability journalism, making data and research available for easy customization by local reporters.
Creating a mobile, time-limited (3-year), highly publicized team that would focus on improving state media in specific states. (The benefit of being time-limited and highly publicized is that the public might feel a heightened responsibility to support the effort and focus attention on longer-run sustainability.)
Auditing—and creating a central database of—existing journalistic data resources, collaborations, players, etc., with a goal of helping to inform others about what is available and identifying the gaps.
Improving data standardization across states to provide reporters with, for example, standardized data on bills’ statuses, so that journalists have tools as good as the lobbyists’.
But while the discussion of problems facing statehouse journalism was rich and the ideas generated interesting, none are silver bullets. Addressing the issues plaguing statehouse reporting will require time and effort from many actors across the news media. The question remains which, if any, of these ideas could be implemented, iterated, and scaled—and which might make a demonstrable difference.
My husband (a former venture capitalist) and I recently had a fascinating conversation about mergers and acquisitions in our respective sectors. If you’re the leader of a for-profit company who manages to pull off a merger, it’s a coup worth celebrating, with real financial benefits to the parties involved. Even board members get a cut. But if you’re in the nonprofit sector and announce a merger or acquisition, the most likely response is “Oh, I’m so sorry” (and you can almost see the thought bubble: “Ouch, they must’ve run out of cash”).
This poses a problem. There are over 1.8 million active nonprofit organizations in the US, and almost 80,000 foundations. Just looking at the campaign finance reform field, there are at least 100 organizations competing for an estimated $14M in annual foundation grants (according to the Foundation Center). For those of us who haven’t had enough coffee yet today, that works out to $140,000 per organization per year (not counting c4 and other non-foundation contributions). Needless to say, if we were starting from scratch, we probably wouldn’t design a field this fragmented.
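For anyone who wants to double-check the coffee-deprived math above, the per-organization figure is just the estimated grant pool divided by the number of competing groups (both figures from the Foundation Center estimate cited here; the variable names are mine):

```python
# Back-of-the-envelope check of the campaign finance reform funding math.
total_annual_grants = 14_000_000  # ~$14M in annual foundation grants (estimate)
num_organizations = 100           # at least 100 competing organizations

per_org = total_annual_grants / num_organizations
print(f"${per_org:,.0f} per organization per year")  # prints "$140,000 per organization per year"
```

And remember that $140,000 is an average ceiling, not a floor: it excludes c4 and other non-foundation contributions, and the real distribution across 100+ organizations is surely far from even.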
Donors are no doubt part of the problem – both by funding a proliferation of organizations and by not properly facilitating later collaborations. That’s why the Hewlett Foundation recently co-sponsored a convening with the Bechtel Foundation on Bold Strategies for Accelerating Impact: Nonprofit Mergers and Foundation Spendouts. I’ll focus here on the former topic, which (given that we are not a spend-down foundation) was our primary area of interest.
The upshot of the convening—which was sold out, with almost double the number of attendees originally expected—is that social change efforts need (obviously) to grow their impact if they’re going to succeed. And there are two primary ways to grow: organically (expanding one’s own scope of work or reach) or inorganically (acquiring or merging with another organization with a broader scope, audience, or geographic reach than one’s own). Some argue that the latter may be easier than the former.
It’s interesting, then, to examine why a nonprofit (or any organization) would consider merging. There are at least three primary reasons:
Scope: to expand an organization’s goals, competencies, etc.
Scale: to expand nationally or internationally
Streamline: as a cost play, to free up more resources for impact. (Though a Chronicle of Philanthropy article notes that this may be less realistic for nonprofits, where transaction costs for mergers tend to be higher—“most nonprofit groups are too small to realize the kinds of cost savings that often make for-profit mergers attractive.”)
And of course not all “mergers” are created equal. The nonprofit consultancy Bridgespan lays out a helpful spectrum of collaboration, the appropriateness of which may vary depending on the goal.
There are obvious benefits to mergers (and to other, less complete forms of collaboration). So why aren’t there more of them? I’ve heard it suggested that, for mergers in the nonprofit sector, ego is the primary barrier. I suspect there’s something to that view. But maybe it’s closer to “mission attachment.” Unlike for-profit companies, nonprofits are launched with social impact, not money, in mind. But social missions can be clouded, understandably and even unconsciously, by leaders’ attachments to their own organizations. Mergers present leaders with (real or perceived) threats to their own job security and that of their staff. But it is no doubt more complicated than that.
In addition, it’s easy to forget that most for-profit mergers (more than 80% by some recent measures) fail to achieve their goals. For nonprofits the risks of merger are essentially the same as in business—but the rewards “are much smaller and less tangible,” yielding a less favorable calculus overall. As in business, if nonprofits are going to create value through greater collaboration, a host of questions must first be addressed:
Which organization(s) are sufficiently similar in purpose and goals, yet have different but complementary skill sets—“something to offer you, and something you need”?
How can alignment between the boards be established?
What branding is optimal? For example, rather than taking one brand or the other, nonprofit brands could be arrayed on the website as business units.
What steps will incentivize leaders to give up place or power? (A good severance package probably doesn’t hurt!)
How will new roles be defined for senior staff? The risks (and pushback) here may depend on the motivating rationale for the merger (increasing scope isn’t a threat, streamlining probably is).
And those are just a few of the questions that need answers. A number of tools have been developed to help. Bridgespan has a whole series of papers and blogs dedicated to the topic. Other resources include the Foundation Center’s Nonprofit Collaboration Database, which provides detailed information on more than 650 collaborations nominated for the Lodestar Foundation Collaboration Prize.
But perhaps the most important challenge in the nonprofit sector (or at least the most relevant to my work) is funding: for good due diligence beforehand, to proactively identify likely problems; and for successful implementation—keeping in mind that merged organizations may need more (not less) funding to increase impact.
Given all of the promise (and challenge) that nonprofit mergers present, I was pleased to hear that two colleagues of ours in the campaign finance reform space, Fund for the Republic (a Hewlett Foundation grantee) and Americans for Campaign Reform, just joined forces to become Issue One. This is a great example of something I hope we see more of, if and when it makes sense.
Given the potential benefits and the clear pitfalls to avoid, I’m left wondering: When and how should funders encourage mergers among our grantees?
One of our grantees, the Pew Research Center, just released another great report in its series exploring political polarization in the United States, and how that relates to government, society, and people’s personal lives.
This latest report, Political Polarization and Media Habits, explores what sources people turn to for news, which ones they trust (and which they don’t), who they tend to talk to about politics, and how all of this varies by ideology. The short answer is that ideology predicts a lot.
Several of Pew’s findings were not entirely surprising. For example:
Consistent conservatives are “tightly clustered around a single news source, far more than any other group in the survey, with 47% citing Fox News as their main source for news about government and politics.” They also “express greater distrust than trust of 24 of the 36 news sources measured in the survey. At the same time, fully 88% of consistent conservatives trust Fox News.”
Consistent liberals are “less unified in their media loyalty; they rely on a greater range of news outlets [and] … express more trust than distrust of 28 of the 36 news outlets in the survey. NPR, PBS and the BBC are the most trusted news sources for consistent liberals.”
But a few things did surprise me:
That the Wall Street Journal is the only source more trusted than distrusted by all five of Pew’s ideological groupings.
That audiences for NPR (National Public Radio), PBS, the New York Times, and BBC are as liberal as Pew finds. I would’ve assumed liberal, but not that liberal.
That 45% of people hadn’t even heard of NPR.
That so many more people get news from Facebook than from Twitter. I had assumed Facebook was maybe 2x more common, but it’s more like 5x (48% vs 9%).
That consistent liberals are “more likely than those in other ideological groups to block or ‘defriend’ someone on a social network—as well as to end a personal friendship—because of politics.” Forty-four percent of consistent liberals who use Facebook “say they have blocked someone on a social networking site because of a political post” (versus 31% of consistent conservatives). I would’ve assumed the practice was equally common (and much rarer).
That consistent conservatives report both talking about (68%) and “enjoying talking about” (81%) politics much more than consistent liberals (of whom only 57% talk about politics “at least a few times a week”, with only 69% reporting they enjoy it).
In short, the most ideologically extreme Americans have very different news consumption and discussion habits, many of which tend to reinforce their own belief systems. And of course those on the farther left and right ends of the spectrum, who together comprise about 20% of the public, have a greater impact on the political process—they vote much more, and are more likely to be driving the political conversation amongst their friends. “Nearly four in ten consistent conservatives (39%) and 30% of consistent liberals tend to drive political discussions. In other words, they talk about politics often, say others tend to turn to them for information rather than the reverse, and describe themselves as leaders rather than listeners in these kinds of conversations.”
The silver lining in Pew’s findings is that most Americans still do rely on an “array of outlets—with varying audience profiles—for political news. And many consistent conservatives and liberals hear dissenting political views in their everyday lives.” Figuring out how to encourage more of this could go a long way toward fixing our dysfunctional politics.
This week a new nonprofit news site is launching in the U.S. Given the thousands of online news sites that already exist (an estimated 170+ of them nonprofit), you might be questioning why you should care. But you should, because this one is a bit different.
The Conversation is an independent news and commentary site produced by a team of professional journalists who work closely with academic authors to leverage the academics’ expertise, apply it to important issues, and make it available to the public.
We recently made a grant to The Conversation through Special Projects, because their work struck us as potentially illuminating on a couple of dimensions. As pretty much everyone who reads news knows, since the advent of the Internet, journalism around the world has been struggling. Two big challenges dominate, at least for those who care about democracy:
The old journalism business model no longer works. Many newsrooms have had to reduce staff or close entirely. According to a recent report on the State of the News Media in 2013, employment of full-time editorial staff in the U.S., which peaked at 57,000 in 1989, had fallen by 29% by the end of 2011. The same report found that the number of U.S. newspapers with daily circulation has fallen by about 14% since 1990.
What remains is much more fragmented, and often much more ideological. As far as television goes, much has been made of the ideological extremity of Fox News (“regularly watched” by only 20% of Americans) and MSNBC (regularly watched by only 11%). They are pretty different: a recent Pew study, which the Hewlett Foundation helped support, found that “consistent conservatives” expressed overwhelmingly positive views of Fox (74% favorable), while 73% of “consistent liberals” held an unfavorable view of the network. Talk radio is notoriously ideological, and much more popular among conservatives than liberals: seven of the top 10 outlets are considered conservative, the rest independent or moderate—and the field has grown from 400M in the 1990s to 1.4B in recent years. Given the breadth of online news sites, it is essentially impossible to catalog their ideological tones, but suffice it to say that many occupy quite specific ideological niches. In short, people have a very hard time agreeing on what the facts are, much less on what to do about them.
The Conversation touches upon both of these problems.
On the business model side, the plurality of The Conversation’s funding comes from universities, at least in the U.K. and Australia (the two countries where it has been active to date). Academics benefit from an increased audience for their research, and the universities themselves benefit from increased visibility. Free to read, share, and republish, The Conversation makes its articles available to other news outlets for distribution (see recent publications by the Washington Post here, here, here, and here). In short, the financial side appears promising.
On the ideology side, public trust in institutions is falling across the board. There is some hope that university researchers will be viewed as credible sources of information, at least by some subset of the population. A recent U.K. study found that “90% said they trusted scientists working for universities.” But U.K. scientists may fare better on the trust dimension than American ones. The Huffington Post recently found that “only 36 percent of Americans reported having ‘a lot’ of trust that information they get from scientists is accurate and reliable. Fifty-one percent said they trust that information only a little, and another 6 percent said they don’t trust it at all.”
Another source for the same data actually gets at the point of The Conversation: a members-only American Sociological Review study, Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010. The full study has a handy chart showing the changes in trust of science by ideological affiliation, but unfortunately it’s only accessible if you pay for it (and figuring out how to show the chart here in a legally permissible way would take a few days to sort out). On top of that, the whole study is a bit hard to read for a layperson, and it’s very long—all challenges with the accessibility of academic knowledge that The Conversation is designed to help overcome.
In addition to the challenges of presenting complex information in a way that a lay audience can understand it, The Conversation also recognizes the challenge of being pigeonholed ideologically—and thus either being ignored entirely, or helping to further the growing problem of U.S. political polarization. Academic authors must “abide by protocols that help rebuild trust in journalism: they sign on to an editorial charter; disclose funding and conflicts; abide by community standards; and write in areas in which they have demonstrated expertise.”
Given all of this, I’m cautiously optimistic that The Conversation can find a good foothold here in the U.S. at a time when our public, and our policymakers, badly need to rediscover sources of agreed-upon facts and expert knowledge. I would welcome your thoughts!