December 16, 2015 — By Kelly Born and Jean Parvin Bordewich
Our work at the Madison Initiative remains squarely focused on improving conditions for dialogue, negotiation, and compromise in Congress. But one question we continue to wrestle with is this: How do we know how Congress is doing?
Some Congressional observers have looked at metrics around “legislative productivity,” bemoaning recent Congresses as the “least productive” ever. But others have rightfully argued that the number of bills passed, alone, doesn’t mean much. Some bills are truly substantive, while some are clearly ceremonial. To complicate matters, still others believe that not passing legislation is sometimes the best reflection of the voters’ interests on an issue.
What else then, might we measure to get a true picture of Congress’ health and effectiveness? Several Madison Initiative grantees have recently taken up this challenge, and we wanted to highlight their efforts with this post.
Earlier this year, the Bipartisan Policy Center launched its Healthy Congress Index. The index tracks Congress overall, rather than individual members, and includes factors like the number of working days, the use of the filibuster, cloture filings, and Senate amendments considered, amongst other things, all with the goal of understanding how Congress is governing. It is part of BPC’s Commission on Political Reform, and represents “a first-of-its-kind effort to track, encourage and reward Congressional competence.” The Index tracks back to the 104th Congress, and shows declining performance across most metrics, with 2011’s 112th Congress faring particularly poorly (see, for example, the chart below showing Senate amendments considered).
Sarah Binder of the Brookings Institution is taking a slightly different approach to measuring the overall health of Congress. She has created a measure of legislative gridlock that looks at the number of bills passed that address the most salient issues facing the country, with salience determined by things like the frequency of an issue’s appearance in the New York Times’ editorial section.
The Lugar Center and Georgetown University’s McCourt School of Public Policy recently launched a new Bipartisan Index, intended to develop an objective measure of how well members of opposite parties work with one another. The index measures the frequency with which a Member co-sponsors a bill introduced by the opposite party, and the frequency with which a Member’s own bills attract co-sponsors from the opposite party. It includes data going back to the 103rd Congress and, while the overall trends aren’t looking good (see below), some Members are doing much better than others.
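To make the mechanics concrete, here is a toy sketch in Python of a co-sponsorship-based score in the spirit of the Bipartisan Index. The function name and the simple averaging are my own illustrative assumptions, not the Lugar Center’s actual formula, which is more sophisticated (it adjusts scores against historical baselines, among other things).

```python
def bipartisan_score(cosponsored_other_party, total_cosponsored,
                     own_bills_with_other_party_cosponsor, total_own_bills):
    """Toy bipartisanship score: the average of two frequencies.

    1. How often the member co-sponsors bills introduced by the
       opposite party.
    2. How often the member's own bills attract co-sponsors from
       the opposite party.

    Illustrative only; NOT the Lugar Center's published methodology.
    """
    if total_cosponsored == 0 or total_own_bills == 0:
        return 0.0
    cosponsor_rate = cosponsored_other_party / total_cosponsored
    attract_rate = own_bills_with_other_party_cosponsor / total_own_bills
    return (cosponsor_rate + attract_rate) / 2

# Example: a member who co-sponsors 30 of 100 bills across the aisle
# and attracts cross-party co-sponsors on 10 of her 40 own bills.
print(bipartisan_score(30, 100, 10, 40))  # 0.275
```

Even a crude measure like this makes the index’s core idea visible: bipartisanship is observable behavior (co-sponsorship patterns), not self-reported sentiment.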
Most recently, the Legislative Effectiveness Project was launched by political scientists Craig Volden at the University of Virginia and Alan Wiseman at Vanderbilt. They have developed a very thoughtful methodology that considers the number of bills each member of the House of Representatives sponsored and the number of those bills that received any action in committee or beyond committee on the floor of the House. For those bills that make it out of committee, they look at which subsequently passed the House, and which ultimately became law. The methodology also distinguishes whether bills are commemorative, substantive, or both substantive and significant. In calculating productivity they control for the Member’s seniority and whether she was a member of the majority party, amongst other things.
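As a rough illustration of this kind of stage-weighted scoring, here is a hypothetical sketch in Python. The stage and significance weights below are invented for illustration; Volden and Wiseman’s actual Legislative Effectiveness Score uses its own weighting, normalization, and controls.

```python
# Hypothetical weights, chosen only to illustrate the idea that a bill
# earns more credit the further it advances, and that substantive or
# significant bills count for more than commemorative ones.
STAGE_WEIGHTS = {
    "introduced": 1,
    "in_committee": 2,       # received action in committee
    "out_of_committee": 3,   # received action beyond committee
    "passed_house": 4,
    "law": 5,
}
SIGNIFICANCE_WEIGHTS = {
    "commemorative": 0.5,
    "substantive": 1.0,
    "significant": 2.0,
}

def effectiveness_score(bills):
    """bills: list of (furthest_stage, significance) tuples for one
    member's sponsored bills. Returns a single weighted sum."""
    return sum(STAGE_WEIGHTS[stage] * SIGNIFICANCE_WEIGHTS[sig]
               for stage, sig in bills)

member_bills = [
    ("law", "substantive"),            # 5 * 1.0 = 5.0
    ("in_committee", "commemorative"), # 2 * 0.5 = 1.0
    ("passed_house", "significant"),   # 4 * 2.0 = 8.0
]
print(effectiveness_score(member_bills))  # 14.0
```

The real methodology then compares each member’s score against expectations given seniority, majority-party status, and other factors, which is what makes it a measure of effectiveness rather than just volume.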
There are, of course, many different ways to get at the health, productivity, and effectiveness of Congress. Each touches on different dimensions of individual and institutional efficacy, and all have their blind spots. Ultimately we’re convinced that a combination of multiple measures will be necessary to truly understand the workings of such a complex institution. We think those highlighted above are pointed in the right direction—and we would welcome your thoughts on these and other attempts to understand Congress’ effectiveness.
In March of 2014, the William and Flora Hewlett Foundation launched a new initiative focused on US democracy reform, The Madison Initiative. The overarching goal is to “help create the conditions in which Congress and its members can deliberate, negotiate, and compromise in ways that work for more Americans.”
Our mandate is for a 3-year, exploratory initiative to assess whether and how the Foundation might be able to make a difference here. During this period, we are focused on three central questions:
Are there solutions and approaches that are worth pursuing?
Is there ample grantee capacity to pursue these ideas (or can we help build it)?
Are there funding partners we can work with to make it happen?
In exploring this problem of congressional dysfunction we realized early on that, unfortunately, there don’t appear to be any silver bullets – it’s not as if campaign finance reform, nonpartisan redistricting, or increased voter turnout, taken on their own, would resolve our current democratic ills (even setting aside for the moment how hard it would be to actually achieve these changes!).
Regrettably, there is no clear consensus on what to do to improve the system, much less on how to do it. This may be, in part, why Inside Philanthropy awarded The Madison Initiative 2014’s Big Foundation Bet Most Likely to Fail. Given this, our view has been that current congressional dysfunction is occurring in a system of systems (and sub-systems) that are interacting in complicated ways.
Early on we decided to develop a systems map rather than a theory of change to guide our work (working in close partnership with the Center for Evaluation Innovation and Kumu, collaborations we’ve written a bit about here). Theories of change typically outline desired (social or environmental) outcomes and then map backwards, linearly, to the activities and inputs necessary to achieve those outcomes. Systems maps are perhaps better suited for more complex, uncertain environments like democracy reform, where cause-and-effect relationships can be entangled and mutually reinforcing, rather than unidirectional.
Version 1.0 of our map includes more than 35 variables we believe are contributing to the problem, distributed across three key domains: Congress, Campaigns and Elections, and Citizens. In light of this complexity, rather than making an initial set of big bets on a few key variables, we have instead spread a series of smaller bets within these systems to see where grantees might gain traction, and what this reveals about the system’s more confounding parts.
The benefits of this approach are many – in fact, I cannot imagine effectively tackling this particular problem any other way. But employing this spread betting approach also involves a few challenges for us at Hewlett, and for our partners and grantees. The trade-offs are worth considering:
We are acknowledging and respecting complexity, but this can sow seeds of confusion for our partners. Our approach has the essential benefit of taking into account the systemic complexity and interdependency of what we are trying to help change. We are avoiding over-simplifying and thereby misconstruing our reality (a good thing). But we are exploring more than 35 variables (ranging from deteriorating bipartisan relationships to the proliferation of partisan news media), with more than 60 active grantees. This approach can be hard to manage, and harder still to convey to others – especially anyone accustomed to a more linear and readily understandable theory of change.
Our course correcting helps us learn, but has a real impact on partners. As we diversify our investments to learn more about what works, we will continue to learn more about which efforts are having the most impact on congressional dysfunction, and which are less germane to the problem. As we do, we will necessarily converge (and double down) on a few core interventions, while discontinuing others. This will mean disappointing organizations that we respect and had supported at the outset – an inevitable byproduct of this approach, but unpleasant for all involved.
Our evidence-based approach risks coming off as overly academic. We are determined to avoid investing in solutions where there is not solid evidence to support their viability vis-à-vis our goals. This helps us avoid squandering funds on interventions that won’t, ultimately, work. But this approach also runs the risk of coming across as standoffish, academic, and idiosyncratic in the eyes of a practitioner-driven field that in some instances may be pursuing work that is harder to (or has yet to be) substantiated by solid research.
We’ve certainly got our work cut out for us. But we deeply believe that the social sector shouldn’t shy away from complex problems. We also believe that the benefits of this approach far outweigh the costs. It enables broad-based learning, and truly forces us to constantly re-think the grants we are making. Building in these tough choices, rather than forging ahead with a pre-defined strategy, requires that we not just learn, but that we act on what we discover. And fast.
In short, while beset by a few real challenges, we’re convinced that an emergent path is the best path forward. Surely we will place some wrong bets along the way. But, as a favorite colleague of mine often says, “it’s not like we’re selling cigarettes to children.” All of our grantees are doing great work – ultimately it will (not so simply) be a question of which of these lines of work is most likely to improve Congress.
In 2017, we will go back to our Board of Directors to discuss whether and how The Madison Initiative’s work will continue. In the meantime, we would love to hear how other funders have approached emergent problems like this – and how nonprofits might advise that we manage these inherent challenges as we progress?
Social media has become Millennials’ top source for politics. A great new study by the Pew Research Center, released this week, finds that a solid 61% of Millennials say they get political news from Facebook, versus just 39% of Baby Boomers. Gen Xers like me bridge the gap at 51%.
“Millennials and Political News” is based on data collected in 2014 through the Center’s American Trends Panel which, full disclosure, the Hewlett Foundation’s Madison Initiative helped support. The study offers a range of interesting findings:
Facebook vs Twitter: For all generations, Facebook beats Twitter as a political news source – only 14% of online Millennials got political news on Twitter, though that still tops both Gen Xers (9%) and Baby Boomers (5%). But Twitter is definitely used by those who proactively seek out political information; depending on the generation, on average 30% of Twitter users report following “Political Parties, Candidates, or Elected Officials” (vs only 23% for Facebook).
Trust in news: When it comes to trust, each generation trusts roughly 40% of the sources they are familiar with and distrusts about 20%. The rest are a mixed bag. Yikes. But the generations view individual news sources differently: While the majority of Millennials report trusting The Daily Show and The Colbert Report, the majority of older generations indicate distrust. Meanwhile the majority of Boomers trust Fox News; Millennials and Gen Xers express about equal trust and distrust.
The study’s findings regarding generational differences in social media use, news and politics raised important questions for me, and are relevant to those who are analyzing and working to improve civic information and engagement:
Millennials’ reliance on Facebook for political news has clear implications for political candidates – and for campaign spending. As I’ve noted before, experts estimate that only about 3% of campaign ad dollars are spent on digital – maybe more in presidential elections. In the future, this is obviously bound to grow – but how rapidly, and by how much? Millennials still punch below their weight in terms of voter turnout, but as they age (and, presumably, vote more) this trend towards social media will have an impact.
We should expect to see even more micro-targeting. During the 2012 elections, the New York Times noted how “in recent primaries, two kinds of Republican voters have been seeing two different Mitt Romney video ads pop up on local and national news Web sites.” A former campaign manager for President Obama recently noted of his experience with U.K. elections – where social media are already dominant given much tighter regulations on campaign finance and ad buys – that “Facebook and other social media, was the most effective because it was often a message shared by their friends or others they trusted versus politicians and the media that they don’t trust. We found that the undecided voters were moving our way as a result.”
What does the trend toward social media mean for the finances of already beleaguered media platforms? And for campaign regulation? Local TV has been one of the few journalism platforms to see increasing ad revenues in recent years, while newspaper and magazine revenues have cratered. This is in large part because of political advertising, which tends to happen on news-producing local TV stations. What will happen to the sustainability of local TV news as campaign advertising and communication are increasingly transferred to social media intermediaries rather than news content producers? And, with television traditionally positioned as the primary channel for campaign communications, current campaign finance laws and regulations are heavily oriented around broadcasting. The FEC has struggled with how to regulate this shift toward digital campaigning.
Millennials may be living in less of an echo chamber – for now. The jury is still out on what role social media plays in determining who sees what news (even if unconsciously). In the Pew Research survey, only 18% of Millennials who pay attention to political posts say the posts they see on Facebook are “mostly or always in line with their own views” (versus 21% of Gen Xers and 31% of Baby Boomers). Twitter and Facebook algorithms remain proprietary. Recent and hotly contested research by Facebook’s data scientists finds that the biggest determinant of what news you see is who your friends are, rather than a social media algorithm, and for most people only about 20% of “friends” are from an opposing party. Earlier 2010 research (below) makes similar suggestions regarding the role of the internet, versus other media and interactions (e.g., with personal acquaintances), in influencing “ideological segregation.” That said, if the aforementioned Facebook study is right, algorithms may cause as much as a 4% to 10% decrease in ideologically “cross-cutting” exposure on that site. While not a huge amount, that means Facebook – if willing – could make itself part of the solution to echo chambers, rather than part of the problem.
What does the rise of social media mean in the long term for an informed and engaged citizenry? The folks at Pew Research Center note how hard it is to understand: “The [social media] experience is individualized through one’s own choices, through the friends in one’s network and their proclivities, and through algorithms – all of which can change over time.” As they report, we are just starting to understand these interactions, and the implications they will have for American politics.
But are those stories having any effect? Three new studies just came out — overseen by API, and supported by the Hewlett Foundation, along with our colleagues at the Democracy Fund and the Rita Allen Foundation. One of these, Estimating Fact-checking’s Effects—from Brendan Nyhan of Dartmouth College and Jason Reifler of the University of Exeter—gets precisely at this question of impact. Here’s what I found to be some of the study’s most interesting findings:
First, views on fact-checking’s “favorability” ratings differ depending on political party affiliation. While the views of “low-knowledge” respondents don’t differ much depending on whether they are Republicans or Democrats (29% vs 36% view fact-checking favorably, respectively), it makes a real difference for people with high levels of political knowledge: “just 34% of Republicans … have very favorable views of fact-checkers compared with 59% of high-knowledge Democrats.” Nyhan and Reifler hypothesize that this may be due to Republicans’ and conservatives’ tendency to “hold less favorable opinions of the news media” overall, coupled with a greater likelihood to believe that there is a liberal media bias.
Then they looked at impact. Here there were (at least) two big questions:
Belief Accuracy: One risk inherent in fact-checking stories is that “exposing people to false claims in an effort to debunk them” can lead to a situation where readers recall the misinformation more clearly than they remember the intended correction. Here the question is: Does exposure to fact-checking content increase “belief accuracy?” Nyhan and Reifler found, through post-exposure surveys, that “the rate of correct answers increased from 12% to 19% among people with low political knowledge,” and was even more effective among people with “high political knowledge” (from 22% to 32%).
Motivated Reasoning: Many experimental studies in psychology and political science have found that new factual information doesn’t necessarily change erroneous, pre-existing beliefs. In fact, it can actually backfire for some groups (e.g., depending on the issue, context, and messenger, counterfactual information can cause partisans to more deeply entrench in their pre-existing beliefs). Thus Nyhan and Reifler expected partisans to be more likely to learn/recall “belief-consistent” facts.
True to their hypothesis, the researchers found that corrections of inaccurate statements are more persuasive when the reader and politician belong to the same political party. “Readers tend to think the opposing party politician’s statement was false, even before they read the correction.” This suggests that fact-checking may be particularly important during primary contests (though fact-checking is currently more common during general elections).
Contrary to their expectations, they found that “correct answers increased somewhat more for belief-inconsistent facts (from 9% to 20%) than for belief-consistent facts (from 14% to 22%).”
“Republican knowledge of belief-inconsistent facts increased by five percentage points and by ten percentage points for belief-consistent ones. The pattern for Democrats is the opposite, however — knowledge increased by 15 percentage points for belief-inconsistent facts compared with eight percentage points for belief-consistent facts.”
It’s worth noting that the public at large is not the only potential audience for fact-checking, nor necessarily even the most important one. In a prior article, Nyhan also explored the impact of increased fact-checking on politicians’ behaviors—that is, testing whether there’s a deterrent effect.
In a 2014 study of 1,200 legislators in nine states, Nyhan and Reifler sent candidates and policymakers reminders about “the risks to their reputation and electoral security if they are caught making questionable statements.” The result? A 55% reduction in the likelihood of receiving a negative PolitiFact rating, or having the accuracy of their statements questioned publicly, in comparison to legislators who were not sent reminders. That said, state legislators are very seldom fact-checked, anyway, and it’s not clear whether the same effect would hold for Congress.
Nevertheless, fact-checking seems to me a promising development in the journalistic field. For it to succeed, newsrooms (and, when relevant, funders) still need to wrestle with questions like when and what to fact-check in order to maintain both relevance and bipartisan credibility, how to scale the reach of existing efforts, and whether and how to expand beyond fact-checking politicians and pundits to other purveyors of misinformation. But those are topics for another day. For now, I’ll just say I’m grateful to API and all the researchers we’re supporting—and excited for the next round of research releases!
State-of-the-art "Like" Button, Circa 1952. (Photo Credit: Flickr User Mpls55408, licensed under CC BY NC 2.0)
Spoiler alert: The campaigns of the future are not going to look like the campaigns of the past. Just as the introduction of television changed the door-to-door game decades ago, so too is the rise of the Internet and the ad-targeting available through services like TiVo changing the TV ad game that has dominated in recent years. And with fewer people watching commercials, the importance of social media companies will only expand.
That was my biggest takeaway from a discussion I attended late last month exploring the effect of changing technology on political campaigns. Sponsored by the Democracy Fund and hosted at Stanford University by Law Professor Nate Persily, alongside Heather Smith, the former Rock the Vote President, and Ben Ginsberg, former national counsel to both the Republican Governors Association and multiple presidential campaigns, it was a veritable who’s who of the political campaign world. Participants included Ann Ravel, Chair of the Federal Elections Commission, as well as several other commissioners from the FEC and FCC; the founders of both Change.org and Brigade Media; representatives from Google, Facebook, Twitter, and Tumblr; the head of Catalist, the self-styled “data utility” for progressive organizations; representatives from the Republican National Committee and the New Organizing Institute; and academics from Harvard, Stanford, UNC Chapel Hill, and others. And of course some of us funders.
The day was divided into four sessions, looking at:
1. Campaign Communications
2. Voter Engagement and Mobilization
3. Campaign Fundraising
4. Campaign Finance Policy
The first session on “The Future of Campaign Communications” explored how new technologies will impact the prevalence of television ads, and the costs of campaigns. The group’s best guess was that, today, only about 3% of campaign ad dollars are spent on digital – maybe more in presidential elections. The bulk is still spent on TV (though of course TV doesn’t play a big role in less competitive races, and is not a factor at all downballot).
As outreach moves to the internet, how will new microtargeting capabilities affect campaigns? The biggest questions were around truthiness and tone.
While broadcast communications make it harder to lie, microtargeting makes misinformation much easier to propagate. But of course both journalists and political opponents are highly incentivized to detect others’ mistruths. And Facebook coders are working on algorithms to validate statements’ authenticity. The key may lie in whether or not outsiders can see what’s going on. The room was less worried about tone on platforms like Facebook that don’t allow for anonymity. And incentives for positive play also exist at the campaign level—a main driver for placing ads on Facebook is the “propensity to share” (PTS), and PTS has proven to be better when ads are positive.
The second session explored “The Future of Voter Mobilization and Engagement.” Much here has also changed, though in ways I think most of us are familiar with. Centralization of data services within groups like Catalist has had a big impact—no more mom-and-pop data vendors. Today’s door-to-door canvassers can access a wealth of data about their targets, and input new information as they go.
One thing here remained clear: Campaigns will continue to target the most likely, most partisan voters. Participants agreed that “foundations must do the harder work” of encouraging less partisan, less engaged citizens (no matter their voting preferences) to take part in elections.
The third session moved on to explore “The Future of Campaign Fundraising,” looking at questions around how tech advances will alter the methods, tone, and level of participation in campaign financing.
New tech has surely made it easier to give, leading to a higher percentage of people contributing to campaigns. That said, it remains a small group—a universe of only about ten million donors. And a higher and higher percentage of the overall total is coming from the wealthiest 0.01%. In the early 1980s, that group accounted for 15% of the total. Now it’s closer to 40%.
The day concluded with a panel on “Public Policy and the Campaign of the Future” looking at campaign regulations going forward. Here the lawyers in the room conceded that the “rules have become so complicated that even the top lawyers wouldn’t get all the rules right on a test.” Which didn’t make me feel any better.
From this session, two core questions arose:
1. How powerful will the parties be? They were already weakened by McCain-Feingold, and now more and more groups like Catalist are taking over traditional party roles. That said, parties remain the gatekeepers of information and relationships that may be hard to replicate.
2. In the future, could the norms and algorithms that Google and Facebook come up with be more important than the rules from the FEC?
It was a fascinating day. The one thing that became clear to me is that our campaigns are undergoing a generational transition, and no one person or group has a monopoly on understanding what the future will look like.
Even before voter turnout hit a 70-year low in 2014, many of us in the democracy reform field were in conversation about whether and how we might improve voter turnout, and civic engagement writ large. This, in part, is why the Hewlett Foundation recently announced that we will be partnering with Knight Foundation, the Democracy Fund, and the Rita Allen Foundation on a $3 million challenge to identify how we can better inform voters and increase civic participation before, during, and after elections.
A recent blog post from Knight Foundation elaborates on the goals of this election-focused Knight News Challenge: “We are looking for innovative ideas ranging from new ways that news media and others can inform voters, to making voting easy, efficient and fair, to converting election participation into longer-term civic engagement, on the local, state or national level.”
The challenge opened today, but we have been focusing on these issues for some time. Our Madison Initiative seeks to create opportunities for deliberation, negotiation, and compromise in the US Congress. But it’s clear that civic engagement is an essential component of strengthening representative institutions. Indeed, many Americans opt out of the civic process. As the Pew Research Center reported last year, “[M]any of those in the center remain on the edges of the political playing field, relatively distant and disengaged, while the most ideologically oriented and politically rancorous Americans make their voices heard through greater participation in every stage of the political process.”
The Madison Initiative is strictly nonpartisan and agnostic about specific policy outcomes. But we think that representative institutions can be, well, more representative when more of the electorate participates in the process.
For the Hewlett Foundation, this partnership represents a unique opportunity to work with leading peer funders to support new and promising ideas from people across the media, technology, and election administration fields. We hope to see ideas and collaborations from civic technologists, academics, students, startups, nonprofits, governments, and individuals.
Finally, a bit of history and a few fun facts:
This is Knight’s 13th News Challenge, and they often collaborate with different partners.
The platform they use for submissions and feedback, available at newschallenge.org, is tailor-made to create a more collaborative, open discussion and process; it’s built on an Open IDEO OI Engine.
Knight News Challenges attract hundreds of entries but usually result in fewer than 25 winners.
Winners receive funding of anywhere from $35,000 to several hundred thousand dollars.
Past winners have included Internet entrepreneurs, emerging media innovators, libraries, and legacy newsrooms. For example DocumentCloud, which helps journalists analyze, annotate and publish original source documents, is being used by more than 200 newsrooms nationwide. Ushahidi has helped map information in crises from Haiti to Japan. hNews, a project by Web inventor Tim Berners-Lee and the Media Standards Trust, allows readers to see the source of information in online articles. It is used by more than 200 news organizations such as the Associated Press.
The submission process is intended to attract a diverse range of ideas from many different people and organizations. The application questions are simple:
In one sentence, please describe your idea as simply as possible.
Briefly describe the need that you’re trying to address.
What progress have you made so far?
What would be a successful outcome for your project?
Please list your team members and their relevant experiences/skills.
See, it’s pretty straightforward.
The News Challenge will open on February 25, 2015, and close on March 19. More information is available on the News Challenge’s website. I have no doubt that the ideas that arise will engage and inspire us all!
To submit an entry for the Knight News Challenge or provide feedback on other submissions, visit newschallenge.org. Knight News Challenge: Elections closes on March 19 at 5 p.m. (EST).
As part of the Hewlett Foundation’s Madison Initiative—aimed at improving opportunities for better dialogue, deliberation, and compromise in Congress—I’m spearheading our work around civic engagement and voting. Two questions I’ve been wrestling with lately are whether, and to what end, philanthropy might help to improve voter turnout.
Many US foundations and nonprofits operate in this space. In 2011, US funders granted more than $31M directly in support of issues around voting. In 2012, an election year, this figure was closer to $60M, alongside at least another $300M towards civic education and civic engagement efforts focused on specific issues. (These figures pale in comparison, of course, to the more than $6B spent by partisan actors seeking to drive turnout in the 2012 elections.)
There appear to be (at least) three possible goals that the nonprofit sector has pursued around voter turnout:
1. Quantity. Increasing American voter turnout beyond the averages the US has experienced since the 1940s: 55-63% in presidential elections, dropping to 34-42% in mid-terms, and reaching the lowest level in more than seven decades in 2014 (as shown in the New York Times chart at right). This is where the majority of voter engagement efforts appear to be focused, or at least where much of the dialogue is focused.
2. Quality. Improving the understanding and thoughtfulness of voting decisions made by existing voters (i.e., without necessarily expanding the electorate). There is, frankly, much room for improvement here—for example, several recent state-based studies have found that 75% to 83% of voters find ballot measures too complicated and confusing to understand (yet 66% have cast ballots on measures with which they are unfamiliar). Knowledge about candidates, both their stances on policy issues and their approaches to the policymaking process, is similarly lacking.
3. Representativeness. Changing the composition of the electorate to ensure that the voting population more closely mirrors the socio-demographic profile of America at large. On average, US voters tend to be consistently older, whiter, wealthier, and more educated than the broader US population, running the risk that public policies are misaligned with the interests of the average American. Presumably actors with this goal believe (likely rightly) that a more representative electorate would have different policy preferences, and that policymakers’ decisions would change to reflect them.
These three goals—quantity, quality, and representativeness—are each fairly distinct. And different organizations often have very different goals in mind, even when we’re all talking about “improving voter turnout.” But the extent to which we are pursuing one, two, or all three of these goals isn’t always a focal point in our conversations. Instead, the conversation often jumps quickly to strategies, tactics, and the grantees that are executing them.
Strategic differences may not matter where there is agreement on tactics, since similar tactics can advance different strategic ends. But it is nevertheless important for us to be aware of them, because the time may come when one strategy suggests a change in tactics and another does not. I’ll quote Hewlett Foundation President Larry Kramer here: “Madison and Hamilton had very different goals in mind in the 1780s, but both saw their goals advanced by a stronger federal government, so both pushed for the Constitution. Once it was up and running, the different goals emerged starkly, so they split over what the national government should do.” Simply understanding when fellow funders are pursuing different goals, strategies, and tactics can help clarify what can otherwise be a murky conversation.
I continue to be fascinated by the robust and complex field of academic research surrounding questions of voting—a field that started in the 1920s but has grown rapidly in recent years. And I continue to appreciate opportunities to get more specific with colleagues in the democracy reform community about what goals we are pursuing in the voting space, where and how we are pursuing them, and why.
According to many newsrooms, fact-checking articles are among the most requested, and most read, content they produce. In recent years groups like FactCheck.org, FlackCheck.org, the Washington Post’s Fact Checker (home of the “Pinocchios”), PolitiFact, and PunditFact have emerged to address growing questions of truth in politics.
Fact-checking is done with at least three possible goals in mind, each associated with a distinct audience:
For the broader public, to improve understanding of candidates and issues by countering “misinformation.”
For other journalists, to help shift the culture away from he said/she said coverage towards greater fact-checking.
For politicians, political pundits, or (more recently) news networks, to hold them accountable and deter erroneous statements.
Last year, the Madison Initiative helped to support a project by the American Press Institute (API) to research practices in this space. In December, I attended an API conference on the topic in Arlington, Virginia. More than 50 people turned out—academics from journalism and communications schools across the country, journalists from a variety of outlets, political ad companies and consulting firms, and of course a small handful of funders like me. The event covered a range of topics including recent research, journalists’ experiences from the 2014 mid-terms, and new tools and formats under development. Several highlights emerged from these discussions.
Perhaps the most fascinating research looked at which erroneous, politically important opinions are most commonly held by the public. To take one example, a plurality of Americans believes that China holds more than half of the US debt. How much does it actually hold? Eight percent.
I was also interested to learn that in the 2014 midterms, ads from outside groups (e.g., Super PACs) were more prone to misinformation than those from the candidates themselves, presumably due to a lack of accountability. Not surprisingly, closer races appear to inspire more negative (and more distorted) ads. And fact-checks are increasingly being used as “weapons” by opponents on the campaign trail.
It was likewise impressive to learn that the Washington Post’s Truth Teller has built structured data into its fact-checking, in an effort to address the fact that the same mistruths are often repeated over and over again. Its prototype algorithm tests statements against a database of thousands of prior fact-checks, helping reduce the burden on human fact-checkers. Even though this is a relatively young tool, it appears to be surprisingly accurate, identifying erroneous claims 75-80% of the time.
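The details of Truth Teller’s matching algorithm aren’t public, but the basic idea of checking a new statement against a database of prior fact-checks can be sketched with simple fuzzy string matching. This is a toy stand-in, not the Post’s actual system; the claims, verdicts, function names, and similarity threshold below are all illustrative:

```python
import difflib

# A toy database of previously fact-checked claims (hypothetical entries).
FACT_CHECKS = {
    "china holds more than half of the us debt": "False: China holds about 8%.",
    "turnout in 2014 was the lowest in seven decades": "True.",
}

def closest_fact_check(statement, threshold=0.8):
    """Return the verdict for the prior fact-check most similar to
    `statement`, or None if nothing clears the similarity threshold."""
    statement = statement.lower().strip()
    best_claim, best_score = None, 0.0
    for claim in FACT_CHECKS:
        score = difflib.SequenceMatcher(None, statement, claim).ratio()
        if score > best_score:
            best_claim, best_score = claim, score
    return FACT_CHECKS[best_claim] if best_score >= threshold else None
```

A production system would need far more robust matching (paraphrase detection, speech-to-text for video), but even this crude similarity test shows how repeat falsehoods could be flagged automatically rather than re-checked by hand.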
The day concluded by breaking into groups to brainstorm opportunities for research, distribution, and new tech tools. Overall, the discussions raised a few questions for me:
How can the fact-checking industry possibly keep up? One North Carolina reporter noted that, after catching a particularly egregious lie, the perpetrator (evidently a campaign manager) joked that “if you didn’t give us a red light, we wouldn’t have been doing our job.” Even assuming that most campaigns are honest, reporters are still clearly out-gunned by the campaign industry.
How can fact-checkers manage across distribution channels? Current fact-checking focuses primarily on TV ads and shows and online news content, and is not equipped to cover print, mailers, or radio as robustly. National, TV-based campaigns are clearly easiest to check—but with the rise of TiVo and micro-targeting, future political communications are unlikely to stick to these channels.
Most importantly, what difference does it make? Behavioral scientists speak to the complexity of correcting misperceptions once people have already latched onto them—in some cases finding that attempts to correct the erroneous beliefs might instead just serve to more deeply cement falsehoods. When it comes to elections, how often do people actually change their vote based on new information about candidates’ truths or mistruths, rather than just voting their party line? And how often are fact-checkers simply “preaching to the choir,” providing ammunition to further inflame party loyalists about how dishonest the other side is, rather than creating room for real learning opportunities across party lines?
On the one hand, I would be hard-pressed to just “give up on the importance of facts.” On the other, it remains unclear whether and when facts actually do matter in people’s decision-making, and what might make them matter more. Scholars and fact-checkers alike are now trying to find clearer answers to those questions.
Science plays a pivotal role in informing the work of the social sector—not to mention business, government, and society at large. (Fun fact: A quick scan of our own grants suggests that since 2001 the Hewlett Foundation has supported almost 1,000 research projects with over $390M in funding—and these are just the grants with “research” in the title!)
Scientific facts are solid, trustworthy, reproducible. That idea, that the same experiments should always get the same results regardless of who performs them, is in fact a cornerstone of science’s claim to “objective truth.” But as Mark Twain so eloquently put it, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
A growing chorus of voices is alleging that Twain’s jibe can be applied to a great many scientific findings, both in the natural (biological and physical) and social sciences. The journal Nature has developed an archive of reports and analyses called Challenges in Irreproducible Research. Articles from The Atlantic, The Economist, and The New York Times offer a few recent examples of the purported problem in the popular dialogue. There’s even a forthcoming Sony/Participant Media film, Merchants of Doubt, related to the topic.
Glenn Begley, chief scientific officer at TetraLogic Pharmaceuticals, described an experience he had while at the drug company Amgen: He and his colleagues were unable to replicate 47 out of 53 “landmark” papers about cancer—all of which had appeared in top journals. Evidently some of the results could not be reproduced “even with the help of the original scientists working in their own labs.”
While much has been made of the potential problems with scientific integrity, we need to hesitate before jumping to any firm conclusions. If studies cannot be replicated, it could be for two very different reasons:
Dishonesty is, of course, one possibility. Certainly there are incentives for researchers to discover “surprising” results. Academics need to get their work published in well-regarded journals in order to secure funding or tenure. Top journals’ rejection rates can exceed 90 percent and, unsurprisingly, studies with surprising findings are more likely to make the cut. Alternatively, according to the aforementioned article in The Atlantic, “maybe sometimes it’s the questions that are biased, not the answers”—as with medical research studies comparing a company’s new drugs “against those already known to be inferior to others on the market.”
Second, and perhaps less obviously, even when researchers act with complete integrity, conventional practices common in their fields may produce findings that are misleading or invalid. According to The Atlantic, these challenges could range “from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.”
These kinds of concerns have inspired a number of responses, including:
The National Academy of Sciences has convened an expert panel to address the issue;
The National Science Foundation’s Social, Behavioral, and Economic Sciences directorate has formed a task force on the topic.
At the Hewlett Foundation, we’re supporting scientists at Rutgers and Stanford Universities, in collaboration with Stanford’s Center for Advanced Study in the Behavioral Sciences, to assess the prevalence of these problematic practices, alongside the feasibility and potential impact of proposed interventions to improve “scientific integrity.”
In the meantime, many scientific journals have recently adopted policies designed to elevate the validity and integrity of the research they publish. According to Stanford and Rutgers, these policies include:
Requiring minimum sample sizes;
Providing additional space for methods sections in order to increase transparency;
Inviting authors to complete checklists of high-integrity practices (e.g., reporting all variables, all conditions, etc.);
Requiring authors to post their data online for others to access and review;
Stamping articles with badges for high-integrity practices (transparency, pre-registering hypotheses and methods to distinguish theory-testing from exploratory analyses, making data available online);
Setting aside space to publish attempted replications.
It remains unclear how big the problem really is. We hope the research we are supporting will help to … um… clarify whether the research our researchers are researching is flawed. In the meantime I’m looking forward to Merchants of Doubt — a comedy, I think?
I recently attended a conference in Chicago on how to improve statehouse reporting—looking not just at the reporters physically present in our nation’s 50 state capitols, but also at those covering the effects of statehouse policies on agencies, departments, and the executive branch.
Why is this important? (At least) two reasons: First, fully half of the state legislatures that met in 2012 passed more bills in one year than Congress passed in two. That’s a lot of bills.
Second, just as with the broader field of journalism, statehouse reporting has experienced dramatic reductions in coverage in recent years. Pew Research Center estimates a 35% decline in statehouse newspaper reporters since 2003, an even steeper decline than that in newsrooms overall during the same period. That leaves fewer than 1,600 journalists in America’s capitols—and only 47% of those are full-time. That averages out to 15 full-time reporters per state, but the actual numbers vary widely, from a high of 53 in Texas to just two in South Dakota. According to Pew:
“Less than a third of U.S. newspapers assign any kind of reporter—full-time or part-time—to the statehouse.”
“Fully 86% of local TV news stations do not assign even one reporter—full-time or part-time—to the statehouse.” This is especially problematic in that most Americans (particularly less ideological ones) still get the majority of their news from local TV.
“Students account for 14% (223 in all) of the overall statehouse reporting corps.” In fact four states—Missouri, Nevada, Kansas, and Arizona—have more students than FTEs.
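The per-state average cited above follows directly from the Pew figures; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the Pew figures cited above.
total_reporters = 1600   # fewer than 1,600 statehouse journalists nationwide
full_time_share = 0.47   # only 47% cover the statehouse full-time
states = 50

full_time_total = total_reporters * full_time_share  # ~752 full-time reporters
per_state = full_time_total / states                 # ~15 per state, on average
print(round(per_state))  # 15
```

Of course, as the spread between Texas (53) and South Dakota (2) shows, the average hides enormous variation.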
We discussed three primary goals that a healthy statehouse reporting field would help to support:
Holding officials accountable (and thereby helping to deter bad behavior)
Engaging the public with relevant, useful information
Leveling the playing field between special interests and the public
We then discussed a LOT of barriers to these goals—the discussion filled four or five flipcharts (in pretty small print). The barriers fell into several loose, often overlapping categories, which I’ll attempt to summarize here:
Lack of sustainable business models. This was the background to all of our discussion.
Public trust in information. Including discussion of coverage that is more “horse-racy” than substantive, polarization of the media, lack of journalistic standards, and the increasing influence of special interests.
Public engagement with the news. Including news outlets’ lack of nuance and personalization, and their perceived lack of understanding of “consumers’” preferences, all of which drive low public interest in political news.
News quality. Including concerns that news has become more “reactive than proactive” and that so little is translated into Spanish or other languages. This also touched on the lack of “data or journalistic standards” and of “data interoperability” (e.g., overarching taxonomies) that would let datasets speak to each other across geographies or organizational silos, enabling better trend interpretation.
Reporter efficiency. Including the lack of training and institutional knowledge, declining reporter relationships (e.g., access to tips, time to conduct interviews, etc.). This discussion also touched on reporters’ limited access to (and ability to interpret) public records.
Government accountability. Including the lack of government transparency, as well as the perceived shift of government resources away from governing towards communications/PR.
The (increasingly?) negative tone of news. Including whether / how this serves to undermine other democratic goals of civic engagement.
Undaunted (well, maybe a little daunted), we then broke into groups to brainstorm solutions. Ideas included:
Creating either a national hub or regional hubs of accountability journalism, making data and research available for easy customization by local reporters.
Creating a mobile, time-limited (three-year), highly publicized team that would focus on improving state media in specific states. (Because the effort would be time-limited and highly publicized, the public might feel a heightened responsibility to support it and to focus attention on longer-run sustainability.)
Auditing—and creating a central database of—existing journalistic data resources, collaborations, players, etc., with a goal of helping to inform others about what is available and identifying the gaps.
Improving data standardization across states to provide reporters with, for example, standardized data on bills’ statuses, so that journalists have tools as good as the lobbyists’.
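To make the standardization idea concrete, here is one hypothetical shape such a shared bill-status record might take. The field names and status vocabulary are purely illustrative, not an existing standard:

```python
from dataclasses import dataclass, asdict

# A hypothetical minimal schema for cross-state bill-status data.
# Shared field names and a controlled status vocabulary are what would
# let reporters compare bills across states; nothing here reflects a
# real standard.
@dataclass
class BillStatus:
    state: str             # two-letter state code, e.g. "TX"
    bill_id: str           # chamber + number, e.g. "HB 1234"
    title: str             # short descriptive title
    status: str            # e.g. "introduced", "in_committee",
                           # "passed", "signed", "vetoed"
    last_action_date: str  # ISO 8601 date

record = BillStatus("TX", "HB 1234", "Open records reform",
                    "in_committee", "2015-03-02")
```

With a common schema like this, a tool could aggregate feeds from all 50 states and alert a reporter whenever a tracked bill changes status—exactly the kind of infrastructure lobbyists already pay for.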
But while the discussion of the problems facing statehouse journalism was rich and the ideas generated were interesting, none is a silver bullet. Addressing the issues plaguing statehouse reporting will require time and effort from many actors across the news media. The question remains which, if any, of these ideas could be implemented, iterated, and scaled—and which might make a demonstrable difference.