According to many newsrooms, fact-checking articles are among the most requested, and most read, content they produce. In recent years, groups like FactCheck.org, FlackCheck.org, the Washington Post’s Fact Checker (home of the “Pinocchios”), PolitiFact, and PunditFact have emerged to address growing questions of truth in politics.
Fact-checking is done with at least three possible goals in mind, each aimed at a distinct audience:
For the broader public, to improve understanding of candidates and issues by countering “misinformation.”
For other journalists, to help shift the culture away from he said/she said coverage towards greater fact-checking.
For politicians, political pundits, or (more recently) news networks, to hold them accountable and deter erroneous statements.
Last year, the Madison Initiative helped to support a project by the American Press Institute (API) to research practices in this space. In December, I attended an API conference on the topic in Arlington, Virginia. More than 50 people turned out—academics from journalism and communications schools across the country, journalists from a variety of outlets, political ad companies and consulting firms, and of course a small handful of funders like me. The event covered a range of topics including recent research, journalists’ experiences from the 2014 midterms, and new tools and formats under development. Several highlights emerged from these discussions.
Perhaps the most fascinating research looked at which politically important misperceptions are most widely held by the public. To take one example, a plurality of Americans believes that China holds more than half of the US debt. How much does it actually hold? Eight percent.
I was also interested to learn that in the 2014 midterms, ads from outside groups (e.g., Super PACs) were more prone to misinformation than those from the candidates themselves, presumably due to a lack of accountability. Not surprisingly, closer races appear to inspire more negative (and more distorted) ads. And fact-checks are increasingly being used as “weapons” by opponents on the campaign trail.
It was likewise impressive to learn that the Washington Post’s Truth Teller has built structured data into its fact-checking, in an effort to address the fact that the same mistruths are often repeated over and over again. Its prototype algorithm tests statements against a database of thousands of prior fact-checks, helping reduce the burden on human fact-checkers. Even though this is a relatively young tool, it appears to be surprisingly accurate, identifying erroneous claims 75-80% of the time.
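Truth Teller’s internals aren’t described in this post, but the core idea of matching a new statement against a database of previously checked claims can be sketched simply. Below is a minimal, illustrative Python example using fuzzy string matching; the sample records and the similarity threshold are hypothetical stand-ins, not the Post’s actual method.

```python
import difflib

# Hypothetical mini-database of prior fact-checks: (claim, verdict)
FACT_CHECKS = [
    ("China holds more than half of all US debt", "False: China holds about 8%"),
    ("Unemployment has doubled under this administration", "False"),
    ("The bill cuts education funding in half", "Misleading"),
]

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so near-identical phrasings compare well."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace())

def match_claim(statement: str, threshold: float = 0.75):
    """Return the best-matching prior fact-check, or None if nothing is close."""
    best_score, best_record = 0.0, None
    for claim, verdict in FACT_CHECKS:
        score = difflib.SequenceMatcher(
            None, normalize(statement), normalize(claim)
        ).ratio()
        if score > best_score:
            best_score, best_record = score, (claim, verdict)
    return best_record if best_score >= threshold else None

print(match_claim("China now holds more than half of the US debt"))
# -> ('China holds more than half of all US debt', 'False: China holds about 8%')
```

A real system would of course need far more sophisticated claim extraction and semantic matching, but even simple retrieval against prior checks shows how repeated mistruths could be flagged automatically.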
The day concluded by breaking into groups to brainstorm opportunities for research, distribution, and new tech tools. Overall, the discussions raised a few questions for me:
How can the fact-checking industry possibly keep up? One North Carolina reporter noted that, after catching a particularly egregious lie, the perpetrator (evidently a campaign manager) joked that “if you didn’t give us a red light, we wouldn’t have been doing our job.” Even assuming that most campaigns are honest, reporters are still clearly out-gunned by the campaign industry.
How can fact-checkers manage across distribution channels? Current fact-checking focuses primarily on TV ads, TV shows, and online news content, and is not equipped to cover print, mailers, or radio as robustly. National, TV-based campaigns are clearly easiest to check—but with the rise of TiVo and micro-targeting, future political communications are unlikely to stick to these channels.
Most importantly, what difference does it make? Behavioral scientists speak to the complexity of correcting misperceptions once people have already latched onto them—in some cases finding that attempts to correct the erroneous beliefs might instead just serve to more deeply cement falsehoods. When it comes to elections, how often do people actually change their vote based on new information about candidates’ truths or mistruths, rather than just voting their party line? And how often are fact-checkers simply “preaching to the choir,” providing ammunition to further inflame party loyalists about how dishonest the other side is, rather than creating room for real learning opportunities across party lines?
On the one hand, I would be hard-pressed to just “give up on the importance of facts.” On the other, it remains unclear whether and when facts actually do matter in people’s decision-making, and what might make them matter more. Scholars and fact-checkers alike are now trying to find clearer answers to those questions.
Science plays a pivotal role in informing the work of the social sector—not to mention business, government, and society at large. (Fun fact: A quick scan of our own grants suggests that since 2001 the Hewlett Foundation has supported almost 1,000 research projects with over $390M in funding—and these are just the grants with “research” in the title!)
Scientific facts are solid, trustworthy, reproducible. That idea, that the same experiments should always get the same results, regardless of who performs them, is in fact a cornerstone of science’s claim to “objective truth.” But as Mark Twain so eloquently put it, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
A growing chorus of voices is alleging that Twain’s jibe can be applied to a great many scientific findings, both in the natural (biological and physical) and social sciences. The journal Nature has developed an archive of reports and analyses called Challenges in Irreproducible Research. Articles from The Atlantic, The Economist, and The New York Times offer a few recent examples of the purported problem in the popular dialogue. There’s even a forthcoming Sony/Participant Media film, Merchants of Doubt, related to the topic.
Glenn Begley, chief scientific officer at TetraLogic Pharmaceuticals, described an experience he had while at the drug company Amgen: He and his colleagues were unable to replicate 47 out of 53 “landmark” papers about cancer—all of which had appeared in top journals. Evidently some of the results could not be reproduced “even with the help of the original scientists working in their own labs.”
While much has been made of the potential problems with scientific integrity, we need to hesitate before jumping to any firm conclusions. If studies cannot be replicated, it could be for two very different reasons:
Dishonesty is, of course, one possibility. Certainly there are incentives for researchers to discover “surprising” results. Academics need to get their work published in well-regarded journals in order to secure funding or tenure. Top journals’ rejection rates can exceed 90 percent and, unsurprisingly, studies with surprising findings are more likely to make the cut. Alternatively, according to the aforementioned article in The Atlantic, “maybe sometimes it’s the questions that are biased, not the answers”—as with medical research studies comparing a company’s new drugs “against those already known to be inferior to others on the market.”
Second, and perhaps less obvious, even when researchers act with complete integrity, conventional practices common in their fields may produce findings that are misleading or invalid. According to The Atlantic, these challenges could range “from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.”
These kinds of concerns have inspired a number of responses, including:
The National Academy of Sciences, which has convened an expert panel to address the issue;
The National Science Foundation’s Social, Behavioral, and Economic Sciences directorate, which has formed a task force on the topic.
At the Hewlett Foundation, we’re supporting scientists at Rutgers and Stanford Universities, in collaboration with Stanford’s Center for Advanced Study in the Behavioral Sciences, to assess the prevalence of these problematic practices, alongside the feasibility and potential impact of proposed interventions to improve “scientific integrity.”
In the meantime, many scientific journals have recently adopted policies designed to elevate the validity and integrity of the research they publish. According to the Stanford and Rutgers researchers, these policies include:
Requiring minimum sample sizes;
Providing additional space for methods sections in order to increase transparency;
Inviting authors to complete checklists of high-integrity practices (e.g., reporting all variables, all conditions, etc.);
Requiring authors to post their data online for others to access and review;
Stamping articles with badges for high-integrity practices (transparency, pre-registering hypotheses and methods to distinguish theory-testing from exploratory analyses, making data available online);
And setting aside space to publish attempted replications.
It remains unclear how big the problem really is. We hope the research we are supporting will help to … um… clarify whether the research our researchers are researching is flawed. In the meantime I’m looking forward to Merchants of Doubt — a comedy, I think?
I recently attended a conference in Chicago on how to improve statehouse reporting—looking not just at the reporters physically present in our nation’s 50 state capitols, but also at those covering the effects of statehouse policies on agencies, departments, and the executive branch.
Why is this important? (At least) two reasons: First, fully half of the state legislatures that met in 2012 passed more bills in one year than Congress passed in two. That’s a lot of bills.
Second, just as with the broader field of journalism, statehouse reporting has experienced dramatic reductions in coverage in recent years. Pew Research Center estimates a 35% decline in statehouse newspaper reporters since 2003, an even steeper decline than that in newsrooms overall during the same period. That leaves fewer than 1,600 journalists in America’s capitols—and only 47% of those are full-time. That averages out to 15 full-time reporters per state, but the actual numbers vary widely, from a high of 53 in Texas to just two in South Dakota. According to Pew:
“Less than a third of U.S. newspapers assign any kind of reporter—full-time or part-time—to the statehouse.”
“Fully 86% of local TV news stations do not assign even one reporter—full-time or part-time—to the statehouse.” This is especially problematic in that most Americans (particularly less ideological ones) still get the majority of their news from local TV.
“Students account for 14% (223 in all) of the overall statehouse reporting corps.” In fact four states—Missouri, Nevada, Kansas, and Arizona—have more students than FTEs.
We discussed three primary goals that a healthy statehouse reporting field would help to support:
Holding officials accountable (and thereby helping to deter bad behavior)
Engaging the public with relevant, useful information
Leveling the playing field between special interests and the public
We then discussed a LOT of barriers to these goals—the discussion filled up four or five flipcharts (in pretty small print). These fell into several loose, often overlapping categories, which I’ll attempt to summarize here:
Lack of sustainable business models. This was the background to all of our discussion.
Public trust in information. Including discussion of coverage that is more “horse-racy” than substantive, polarization of the media, lack of journalistic standards, and the increasing influence of special interests.
Public engagement with the news. Including news’ lack of nuance / personalization, and the perceived lack of understanding (on the part of news producers) of “consumers’” preferences. All driving low public interest in political news.
News quality. Including concerns that news has become more “reactive than proactive” and that so little is translated into Spanish or other languages. This also touched on the lack of “data or journalistic standards” and of “data interoperability” (e.g., overarching taxonomies) that would let data speak to each other across geographies and organizational silos, enabling better trend interpretation.
Reporter efficiency. Including the lack of training and institutional knowledge, declining reporter relationships (e.g., access to tips, time to conduct interviews, etc.). This discussion also touched on reporters’ limited access to (and ability to interpret) public records.
Government accountability. Including the lack of government transparency, as well as the perceived shift of government resources away from governing towards communications/PR.
The (increasingly?) negative tone of news. Including whether / how this serves to undermine other democratic goals of civic engagement.
Undaunted (well, maybe a little daunted), we then broke into groups to brainstorm solutions. Ideas included:
Creating either a national hub or regional hubs of accountability journalism, making data and research available for easy customization by local reporters.
Creating a mobile, time-limited (3-year), highly publicized team that would focus on improving state media in specific states. (The benefit of a time-limited, highly publicized effort is that the public might feel a heightened responsibility to support it and to focus attention on longer-run sustainability.)
Auditing—and creating a central database of—existing journalistic data resources, collaborations, players, etc., with a goal of helping to inform others about what is available and identifying the gaps.
Improving data standardization across states to provide reporters with, for example, standardized data on bills’ statuses, so that journalists have tools as good as the lobbyists’ (see the sketch below for what such a shared record might look like).
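To make the standardization idea concrete, here is a minimal Python sketch of a shared bill-status record. The field names, status values, and sample records are invented for illustration; they are not an existing standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BillStatus:
    """A hypothetical common record format for bill data across statehouses."""
    state: str                # two-letter code, e.g. "NC"
    bill_id: str              # chamber identifier, e.g. "HB 101"
    title: str
    status: str               # e.g. "introduced", "in_committee", "passed", "signed"
    last_action_date: date
    sponsors: list[str] = field(default_factory=list)

# With a shared schema, a reporter could query activity across many states at
# once (the records below are invented placeholders):
bills = [
    BillStatus("NC", "HB 101", "Example Education Act", "in_committee", date(2014, 3, 4)),
    BillStatus("TX", "SB 202", "Example Transportation Act", "signed", date(2014, 5, 30)),
]
print([b.bill_id for b in bills if b.status == "signed"])  # -> ['SB 202']
```

The point is less the particular fields than the interoperability: if every state exposed the same structure, cross-state comparisons that currently require bespoke scraping would become one-line queries.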
But while the discussion of problems facing statehouse journalism was rich and the ideas generated interesting, none are silver bullets. Addressing the issues plaguing statehouse reporting will require time and effort from many actors across the news media. The question remains which, if any, of these ideas could be implemented, iterated, and scaled—and which might make a demonstrable difference.
My husband (a former venture capitalist) and I recently had a fascinating conversation about mergers and acquisitions in our respective sectors. If you’re the leader of a for-profit company who manages to pull off a merger, it’s a coup worth celebrating, with real financial benefits to the parties involved. Even board members get a cut. But if you’re in the nonprofit sector and announce a merger or acquisition, the most likely response is “Oh, I’m so sorry” (and you can almost see the thought bubble: “Ouch, they must’ve run out of cash”).
This poses a problem. There are over 1.8 million active nonprofit organizations in the US, and almost 80,000 foundations. Just looking at the campaign finance reform field, there are at least 100 organizations competing for an estimated $14M in annual foundation grants (according to the Foundation Center). For those of us who haven’t had enough coffee yet today, that works out to $140,000 per organization per year (not counting 501(c)(4) and other non-foundation contributions). Needless to say, if we were starting from scratch, we probably wouldn’t design a field this fragmented.
Donors are no doubt part of the problem—both by funding a proliferation of organizations and by not properly facilitating later collaborations. That’s why the Hewlett Foundation recently co-sponsored a convening with the Bechtel Foundation on Bold Strategies for Accelerating Impact: Nonprofit Mergers and Foundation Spendouts. I’ll focus here on the former topic, which (given that we are not a spend-down foundation) was our primary area of interest.
The upshot of the convening—which was sold out, with almost double the number of attendees originally expected—is that social change efforts need (obviously) to grow their impact if they’re going to succeed. And there are only two primary ways to grow: organic (expanding one’s own scope of work or reach) or inorganic (acquiring or merging with another organization with a broader scope or audience/geographic reach than one’s own). Some argue that the latter may be easier than the former.
It’s interesting then to examine why a nonprofit (or any organization) would consider merging. There are at least three primary reasons:
Scope: to expand an organization’s goals, competencies, etc.
Scale: to expand nationally or internationally
Streamline: as a cost play, to free up more resources for impact. (Though a Chronicle of Philanthropy article notes that this may be less realistic for nonprofits, where transaction costs for mergers tend to be higher—“most nonprofit groups are too small to realize the kinds of cost savings that often make for-profit mergers attractive.”)
And of course not all “mergers” are created equal. The nonprofit consultancy Bridgespan lays out a helpful spectrum of collaboration options, the appropriateness of which may vary depending on the goal.
There are obvious benefits to mergers (or other less complete forms of collaboration). So why don’t we see more of them? I’ve heard it suggested that, for mergers in the nonprofit sector, ego is the primary barrier. I suspect there’s something to that view. But maybe it’s closer to “mission attachment.” Unlike for-profit companies, nonprofits are launched with social impact, not money, in mind. But social missions can be clouded, understandably and even unconsciously, by leaders’ attachments to their own organizations. Mergers present leaders with (real or perceived) threats to their own job security and that of their staff. But it is no doubt more complicated than that.
In addition, it’s easy to forget that most for-profit mergers (more than 80% by some recent measures) fail to achieve their goals. For nonprofits, the risks of merger are essentially the same as in business—but the rewards “are much smaller and less tangible,” yielding a less favorable calculus overall. As with business, if nonprofits are going to create value through greater collaboration, a host of questions must first be addressed:
Which organization(s) are sufficiently similar in purpose and goals, yet have different but complementary skill sets—“something to offer you, and something you need”?
How can alignment between the boards be established?
What branding is optimal? For example, rather than adopting one brand or the other, the merged organization could array both brands on its website as business units.
What steps will incentivize leaders to give up place or power? (A good severance package probably doesn’t hurt!)
How will new roles be defined for senior staff? The risks (and pushback) here may depend on the motivating rationale for the merger (increasing scope isn’t a threat, streamlining probably is).
And those are just a few of the questions that need answers. A number of tools have been developed to help. Bridgespan has a whole series of papers and blogs dedicated to the topic. Other resources include the Foundation Center’s Nonprofit Collaboration Database, which provides detailed information on more than 650 collaborations nominated for the Lodestar Foundation Collaboration Prize.
But perhaps the most important challenge in the nonprofit sector (or at least the most relevant to my work) is funding: for good due diligence beforehand to proactively identify likely problems, and for successful implementation—keeping in mind that merged organizations may need more (not less) funding to increase impact.
Given all of the promise (and challenge) that nonprofit mergers present, I was pleased to hear that two colleagues of ours in the campaign finance reform space, Fund for the Republic (a Hewlett Foundation grantee) and Americans for Campaign Reform, just joined forces to become Issue One. This is a great example of something I hope we see more of, if and when it makes sense.
Given the potential benefits and the clear pitfalls to avoid, I’m left wondering: When and how should funders encourage mergers among our grantees?
One of our grantees, the Pew Research Center, just released another great report in its series exploring political polarization in the United States, and how that relates to government, society, and people’s personal lives.
This latest report, Political Polarization and Media Habits, explores what sources people turn to for news, which ones they trust (and which they don’t), who they tend to talk to about politics, and how all of this varies by ideology. The short answer is that ideology predicts a lot.
Several of Pew’s findings were not entirely surprising. For example:
Consistent conservatives are “tightly clustered around a single news source, far more than any other group in the survey, with 47% citing Fox News as their main source for news about government and politics.” They also “express greater distrust than trust of 24 of the 36 news sources measured in the survey. At the same time, fully 88% of consistent conservatives trust Fox News.”
Consistent liberals are “less unified in their media loyalty; they rely on a greater range of news outlets [and] … express more trust than distrust of 28 of the 36 news outlets in the survey. NPR, PBS and the BBC are the most trusted news sources for consistent liberals.”
But a few things did surprise me:
That the Wall Street Journal is the only source more trusted than distrusted by all five of Pew’s ideological groupings.
That audiences for NPR (National Public Radio), PBS, the New York Times, and BBC are as liberal as Pew finds. I would’ve assumed liberal, but not that liberal.
That 45% of people hadn’t even heard of NPR.
That so many more people get news from Facebook than from Twitter. I had assumed Facebook was maybe 2x more common, but it’s more like 5x (48% vs. 9%).
That consistent liberals are “more likely than those in other ideological groups to block or ‘defriend’ someone on a social network—as well as to end a personal friendship—because of politics.” Forty-four percent of consistent liberals who use Facebook “say they have blocked someone on a social networking site because of a political post” (versus 31% of consistent conservatives). I would’ve assumed the practice was equally common (and much more rare).
That consistent conservatives report both talking about (68%) and “enjoying talking about” (81%) politics much more than consistent liberals (of whom only 57% talk about politics “at least a few times a week”, with only 69% reporting they enjoy it).
In short, the most ideologically extreme Americans have very different news consumption and discussion habits, many of which tend to reinforce their own belief systems. And of course those on the farther left and right ends of the spectrum, who together comprise about 20% of the public, have a greater impact on the political process—they vote much more, and are more likely to be driving the political conversation amongst their friends. “Nearly four in ten consistent conservatives (39%) and 30% of consistent liberals tend to drive political discussions. In other words, they talk about politics often, say others tend to turn to them for information rather than the reverse, and describe themselves as leaders rather than listeners in these kinds of conversations.”
The silver lining in Pew’s findings is that most Americans still do rely on an “array of outlets—with varying audience profiles—for political news. And many consistent conservatives and liberals hear dissenting political views in their everyday lives.” Figuring out how to encourage more of this could go a long way toward fixing our dysfunctional politics.
This week a new nonprofit news site is launching in the U.S. Given the thousands of online news sites that already exist (an estimated 170+ of them nonprofit), you might be questioning why you should care. But you should, because this one is a bit different.
The Conversation is an independent news and commentary site produced by a team of professional journalists who work closely with academic authors to leverage the academics’ expertise, apply it to important issues, and make it available to the public.
We recently made a grant to The Conversation through Special Projects, because their work struck us as potentially illuminating on a couple of dimensions. As pretty much everyone who reads news knows, since the advent of the Internet, journalism around the world has been struggling. Two big challenges dominate, at least for those who care about democracy:
The old journalism business model no longer works. Many newsrooms have had to reduce staff or close entirely. According to a recent report on the State of the News Media in 2013, employment of full-time editorial staff in the U.S., which peaked at 57,000 in 1989, had fallen by 29% by the end of 2011. The same report found that the number of U.S. newspapers with daily circulation has fallen by about 14% since 1990.
What remains is much more fragmented, and often much more ideological. As far as television goes, much has been made of the ideological extremity of Fox News (“regularly watched” by only 20% of Americans) and MSNBC (regularly watched by only 11%). The two are pretty different: a recent Pew study, which the Hewlett Foundation helped support, found that “consistent conservatives” expressed overwhelmingly positive views of Fox (74% favorable), while 73% of “consistent liberals” held an unfavorable view of the network. Talk radio is notoriously ideological, though much more popular amongst conservatives than liberals: seven of the top 10 outlets are considered conservative, the rest independent or moderate—and the number of talk stations has grown from about 400 in the 1990s to more than 1,400 in recent years. Given the breadth of online news sites it is essentially impossible to catalogue their ideological leanings, but suffice it to say that many occupy a quite specific ideological niche. In short, people have a very hard time agreeing on what the facts are, much less on what to do about them.
The Conversation touches upon both of these problems.
On the business model side, the plurality of The Conversation’s funding comes from universities, at least in the U.K. and Australia (the two countries where it has been active to date). Academics benefit from an increased audience for their research, and the universities themselves benefit from increased visibility. Free to read, share, and republish, The Conversation makes its articles available to other news outlets for distribution (see recent publications by the Washington Post here, here, here, and here). In short, the financial side appears promising.
On the ideology side, public trust in institutions is falling across the board. There is some hope that university researchers will be viewed as credible sources of information, at least by some subset of the population. A recent U.K. study found that “90% said they trusted scientists working for universities.” But U.K. scientists may fare better on the trust dimension than American ones. The Huffington Post recently found that “only 36 percent of Americans reported having ‘a lot’ of trust that information they get from scientists is accurate and reliable. Fifty-one percent said they trust that information only a little, and another 6 percent said they don’t trust it at all.”
Another source for the same data actually gets to the point of The Conversation: a members-only American Sociological Review study, Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010. The full study has a handy chart showing the changes in trust of science by ideological affiliation, but unfortunately it’s only accessible if you pay for it (and figuring out how to show the chart here, in a legally permissible way, would take a few days to sort out). On top of that, the whole study is a bit hard for a layperson to read, and it’s very long—all challenges with the accessibility of academic knowledge that The Conversation is designed to help overcome.
In addition to the challenges of presenting complex information in a way that a lay audience can understand it, The Conversation also recognizes the challenge of being pigeonholed ideologically—and thus either being ignored entirely, or helping to further the growing problem of U.S. political polarization. Academic authors must “abide by protocols that help rebuild trust in journalism: they sign on to an editorial charter; disclose funding and conflicts; abide by community standards; and write in areas in which they have demonstrated expertise.”
Given all of this, I’m cautiously optimistic that The Conversation can find a good foothold here in the U.S. at a time when our public, and our policymakers, badly need to rediscover shared sources of agreed-upon facts and expert knowledge. I would welcome your thoughts!
October 16, 2014 — By Kelly Born, Tom Glaisyer, and Jonathan Kartt
Kelly Born is a Program Officer at the Hewlett Foundation, where she works on both Special Projects and our Madison Initiative. Tom Glaisyer is the program director of the Democracy Fund and Jonathan Kartt works in Programs & Evaluation for the Rita Allen Foundation. This is the second of two posts on voter information platforms.
We partnered to explore dozens of these platforms, and quickly realized that we weren’t sure how best to support the field, or which groups to partner with. So the Hewlett Foundation and the Rita Allen Foundation crafted an RFP to solicit proposals from a handful of potential nonprofit partners, with the goal of funding them in a rapid-cycle innovation project. We were open to all kinds of ideas, and suggested a few possibilities:
Consulting Support: Because the ultimate success of any voter information platform depends on the quality of its design and resultant resonance with users, we suggested potential projects aimed at supporting design iteration and experimentation.
Implementation Support: These needed to be projects that were essentially shovel-ready, capable of being fielded before (and tested during) the 2014 election cycle.
Learning Support: There is much to be learned during this election cycle that might help inform later work in 2016. So we were open to jointly establishing a learning agenda for 2014 and then pairing nonprofit partners with researchers to test the effectiveness of different innovations.
Ultimately the proposals we received included some combination of all of these options.
Independently, the three foundations reviewed and assessed the pros and cons of all of the proposals, and between us we are now funding three public charities that responded to the RFP:
The Healthy Democracy Fund, to pilot its deliberative ballot decision-making approaches in Arizona and Colorado, and to conduct communications research around the efforts to understand what kind of messaging works with voters.
MapLight, to further develop its Voter’s Edge tool so that it can be more easily embedded in other platforms (e.g., news sites, civic organizations).
Seattle City Club’s Living Voters Guide, to further develop the site and to expand it to encompass not just ballot information but candidate data, including information from Voter’s Edge.
All of these projects include a research component to help understand what nonpartisan information resonates with voters, in hopes that we can learn and improve in future election cycles.
We are optimistic about the possibilities of these charitable projects, and about innovations in the sector more broadly, both for-profit and nonprofit. These efforts offer hope that in future cycles citizens will have access to—and use—a wealth of information for even down-ticket races.
But we also have (lots of) questions:
When do people search for this information? How do they find it?
How do you expand the audience beyond political junkies to reach a broader population?
How useful do voters find this information? When and how does it actually influence decision-making?
What formats do voters prefer?
Do the platforms increase public trust in the political process or might some, particularly those that offer candidate matching, increase polarization?
How can the platforms be sustained?
Are the approaches scalable? What level of data standardization is desirable or feasible? For example, it is currently easy to get information on Congressional candidates, but much harder to digitally aggregate even the names of candidates for down-ballot races, let alone any meaningful information about them.
We are wrestling with these questions, supporting some research with these partners to test aspects of them, and exploring more broadly how we can aid the emerging community of practice that exists around this next generation of nonpartisan voter information tools. As always, we welcome your comments.
October 8, 2014 — By Kelly Born, Tom Glaisyer, and Jonathan Kartt
Kelly Born is a Program Officer at the Hewlett Foundation, where she works on both Special Projects and our Madison Initiative. Tom Glaisyer is the program director of the Democracy Fund and Jonathan Kartt works in Programs & Evaluation for the Rita Allen Foundation.
How will voters find information in 2014? For those who care about US democracy, this question is front and center in a world where both the structure of the news media and the channels through which voters get information are in flux. In the not too distant past, voters received most of their information about candidates and ballot measures through mass market dailies and TV or radio—places where the message was mediated by gatekeepers. The only opportunity to directly communicate with voters was through paid advertising or in-person contact. Nowadays, candidates have limitless options to directly reach voters—even television, when delivered via satellite, permits hyper-targeting of political advertising messages.
But it’s not just campaigns that are exploiting these new digital opportunities—a host of (mostly new) organizations, non-profit and for-profit, are seeking not to win a vote, but to inform voters about their options.
It’s an exciting time for the field. Abroad, websites that match voters to policy positions held by parties, so-called voting advice applications, have seen significant adoption. In Germany, for example, Wahl-o-Mat was queried 13.2M times in 2013—not bad when you consider there are only 80M people in the country. In the US, we have encountered dozens of similar sites such as Vote411, ethePeople, and Project VoteSmart.
The digitization of data permits an increasing amount of contextual information to be added to what was previously just a thumbnail sketch of a candidate or issue. For example, information on candidates or ballot initiatives can now be combined with “rules of the road” on where and when to vote, and what materials to bring. This digital “plumbing” is often under-appreciated—Google’s Civic Information API provides a way to look up polling places for 2014 and lists the candidates on the ballot. It builds on data from the Pew Charitable Trusts’ Voting Information Project and augments a recently developed iOS app.
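As a concrete illustration, here is a minimal Python sketch of querying the Civic Information API for polling places and ballot contests. It assumes you have obtained an API key; the endpoint and field names reflect the v2 API documentation as of this writing and should be treated as illustrative rather than definitive.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: request a real key via the Google APIs console

resp = requests.get(
    "https://www.googleapis.com/civicinfo/v2/voterinfo",
    params={
        "key": API_KEY,
        "address": "1263 Pacific Ave, Kansas City, KS",  # a voter's registered address
        "electionId": 2000,  # the API's test election; omit to use the next real election
    },
)
resp.raise_for_status()
info = resp.json()

# Where to vote:
for loc in info.get("pollingLocations", []):
    print("Polling place:", loc["address"].get("locationName", ""))

# Who (and what) is on the ballot:
for contest in info.get("contests", []):
    names = [c["name"] for c in contest.get("candidates", [])]
    print(contest.get("office") or contest.get("referendumTitle", "?"), "->", names)
```

A news site or civic app can layer its own presentation on top of exactly this kind of lookup, which is why shared plumbing like the Voting Information Project matters so much.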
Recognizing the possibilities in this emerging ecosystem of voter information, the Hewlett Foundation, the Rita Allen Foundation and the Democracy Fund partnered to explore the dozens of voter information websites that have developed in the last few years. We examined a number of dimensions:
Geographic Scope: Many provide national coverage, whereas others, like ethePeople, partner with media and civics groups in specific states or localities. MapLight’s Voter’s Edge covers national races while also offering some down-ballot coverage in particular states (in this case, California).
Audience: Some, like Ballotpedia, provide detailed information that might appeal more to policy wonks like ourselves, whereas Voter’s Edge or Who’s On The Ballot seek to serve those who prefer a less detailed view.
Approach: Sites like Voter’s Edge provide “just the facts” (on a lot of dimensions, including candidates’ prior jobs, campaign funding sources, etc.). Others, like the newly launched Crowdpac, use campaign funding sources to predict candidates’ positions, in an attempt to address the challenge of comparing a 30-year incumbent’s record to that of a first-time challenger who has never held office. ISideWith uses matching algorithms—it has now paired more than 11 million users with their “ideal” candidates based on answers to basic philosophical and political questions (e.g., “What is your stance on taxation?”); a toy sketch of this kind of matching appears after this list. Still others actually involve citizens in the deliberative process: Healthy Democracy in Oregon convenes a representative panel of dozens of citizens for a week to evaluate the pros and cons of a particular ballot initiative through its Citizens’ Initiative Review (CIR), and the results are then shared with voters in the official voting guide. Research has shown how valued that information has been—a majority of Oregonians were aware of the tool, and roughly two-thirds of those who read the CIR statements found them helpful when deciding how to vote. In Washington State, the Living Voters Guide has utilized a deliberative platform to allow voters to share why they are in favor of or opposed to a particular initiative.
Business Models: Half of what we found are for-profit operations like Crowdpac and Poll Vault. The other half (most of what we’ve discussed herein) are nonprofit. So we spoke with venture capitalists who had invested in several of the for-profit competitors to understand their reasons for doing so, and to ensure that we felt there was a good rationale for philanthropic investment in this space.
Operating and Partnership Approaches: Some, like Project VoteSmart, rely on teams of dedicated interns, while others are striving towards more automated systems. We also looked at organizations’ partnerships—many like ethePeople are collaborating extensively as part of their model, others are closer to independent operators.
Use: Finally, we looked at use. Not much is known about the end-users of these types of voting information services beyond broad demographic statistics. In terms of numbers, some platforms have received a fair amount of uptake, whereas others are so new that no usage data is even available yet—however, no site appears to have come close to Wahl-o-Mat’s success in Germany.
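ISideWith has not published its matching algorithm, so the sketch below is only a toy illustration of how stance matching can work in principle: candidates are scored by weighted agreement with a voter’s answers. The issues, stances, and weighting scheme are invented for the example.

```python
# Stances are coded -1 (oppose), 0 (neutral/unknown), +1 (support); each issue
# also carries a weight reflecting how much the voter says it matters to them.
voter = {"taxation": (+1, 2.0), "immigration": (-1, 1.0), "environment": (+1, 3.0)}

candidates = {
    "Candidate A": {"taxation": +1, "immigration": +1, "environment": +1},
    "Candidate B": {"taxation": -1, "immigration": -1, "environment": 0},
}

def match_score(voter, stances):
    """Weighted agreement in [0, 1]: 1 = perfect alignment, 0 = total opposition."""
    total_weight = sum(weight for _, weight in voter.values())
    agreement = sum(
        weight * (1 + voter_stance * stances.get(issue, 0)) / 2
        for issue, (voter_stance, weight) in voter.items()
    )
    return agreement / total_weight

ranked = sorted(candidates, key=lambda name: match_score(voter, candidates[name]), reverse=True)
for name in ranked:
    print(f"{name}: {match_score(voter, candidates[name]):.0%} match")
# -> Candidate A: 83% match, then Candidate B: 42% match
```

Real systems face much harder problems, such as eliciting honest answers and inferring stances for candidates who have never voted on an issue (Crowdpac’s funding-based predictions are one response), but the basic arithmetic can be this simple.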
This wide variety of activity left us with lots of questions: whether and how to support this field, whom to partner with, and on what kinds of projects. We have begun to explore these questions, and will discuss our early work on this topic in a follow-up post next week.
The Pew Research Center released one of the largest political studies in its history last week, which we were pleased to help support. Its findings, when coupled with longitudinal data from Pew surveys over the last two decades, reveal a complex picture of what is, and what is not, changing in the American electorate.
Some context: since 1987, Pew has conducted a periodic “Political Typology” survey exploring the causes, consequences, and extent of variations in American political values. Historically, each wave of this study has surveyed the political beliefs and values of about 3,000 Americans, allowing for very useful insights about the country at large, but a less nuanced understanding of how these values vary by age, geography, race or ethnicity, level of political engagement, or socio-demographic background. With this latest poll, Pew expanded the survey to include 10,000 respondents—enough to help us understand the many, many different flavors of American political opinion. It reveals an American public that is far more nuanced than our party structure allows for: some are very liberal, some are very conservative, but the vast majority (79%) doesn’t consistently fall within either of these camps.
Let me share a few of the insights I have gleaned from the expanded typology data thus far:
“Overall, 21% of Americans today are consistently liberal or conservative in their political views, up from 10% in 1994.”
The partisan gap in opinions on more than 40 separate political topics (including the environment, immigration, and the role of government) has doubled over the past quarter century.
People don’t just disagree more, they also like each other less. “Levels of partisan animosity—holding a very unfavorable view of the other political party—have more than doubled over the past two decades….Today 38% of Democrats hold a very unfavorable view of the Republican Party, and 43% of Republicans say the same about the Democratic Party. Those figures were 16% and 17%, respectively, in 1994…”
And it is those who disagree more (and like the other side less) who increasingly dominate our politics: Of the most consistently conservative Americans, 80% say they “always vote.” Of the most consistently liberal, 61% say the same. Contrast this with moderate turnout of around 40%, and we have a problem…
I love data. Even somewhat depressing data like these. And the data don’t stop here. The findings are so extensive that they cannot all be discussed in one release (much less one blog post). So Pew will be releasing findings on a rolling basis over the next six months, and I imagine that we at Hewlett will continue to blog about them. The series will soon look more closely at the relationship between polarization and media consumption, social networks, exposure to political messaging, geographic location, and economic status, amongst other things. This has given us much to think about. I hope that others are likewise inclined to wrestle with these data, their implications for our democracy, and what we can do to improve the American political landscape moving forward.
When the Hewlett Foundation first began considering work in the democracy reform space in 2012, we had the same question that every new funder has upon entering a field: “Who is funding whom, to do what?”
We wanted to ensure we had a good feel for the landscape before we got too far out into it—both to avoid duplicating efforts and to enable more effective teamwork with other foundations. Not knowing the answers to these basic funding questions makes coordination, let alone collaboration, unnecessarily difficult. It can take dozens of calls with other funders to begin even to get the lay of the land. Details learned are often forgotten, so every time there is a funder convening, or every time a new funder comes along, the landscape analysis must be redone. It wouldn’t surprise me if, when we were first entering the field, there was a collective groan from longstanding democracy funders—yet another freshman to educate!—who nonetheless graciously helped us get up to speed.
With all this in mind, in 2013 the Hewlett Foundation and seven other democracy funders joined together, under the guidance (read: leadership, execution, and pretty much everything else) of the Foundation Center, to begin to answer these questions—and to incorporate the answers in a form that would be readily accessible and continually updated. Our foundation partners were the Carnegie Corporation of New York, The JPB Foundation, the John D. and Catherine T. MacArthur Foundation, the Omidyar Network’s Democracy Fund, Open Society Foundations, the Rita Allen Foundation, and the Rockefeller Brothers Fund.
The first step was to craft a taxonomy to help structure and organize the funding data. As you can imagine, there were a lot of vexing questions and perspectives here: from what was in or out of scope and what the main categories should be all the way down to the specific wording of each primary, secondary, and tertiary category. We laughed. We cried. We iterated—and then we repeated the process. Along the way we received enormously helpful input from dozens of experts and observers in different parts of the field, on both the funder and grantee side. In short, a crazy amount of collaboration happened.
The result is an online, interactive data visualization tool: Foundation Funding for U.S. Democracy. Using the Foundation Center’s data set for 2011, along with partial but growing sets of data for subsequent years, the tool allows users to map, chart, and filter data to gain a deeper understanding of the democracy funding landscape. While it remains a work in progress—necessarily, but also by design—it already offers a powerful new vantage point on the resources flowing from foundations to nonprofits working to strengthen democracy in the U.S.
Foundation Center President Brad Smith has a blog post summarizing his observations about the process and emerging patterns in the data. For my part, I will simply encourage you to start tooling around the site. Try out a few searches and get a feel for what it has to offer.
We will be helping all of our grantees familiarize themselves with this tool. Indeed, as useful as this data set and visualization will be for foundations already working in the field or new funders considering entering it, the real killer app for this work will be helping grant seekers. They can use it to answer questions like: Who is funding the work we do, in the way and/or in the places that we do it? What are the areas of focus, key grantees, and grant sizes of [insert your favorite] foundation? Which foundations are we best positioned to approach for support? What other nonprofits out there share our goals and could be good partners?
The tool provides data that is very valuable to foundations. But it offers something even more important, and indeed urgent, to nonprofits working to secure and expand the resources they need to advance their missions and improve the health of our democracy. And that, ultimately, is the whole point!