Evaluation – the independent, systematic investigation of how, why, and to what extent our objectives and goals are being achieved – is a critical part of outcome-focused philanthropy at the Hewlett Foundation. Independent evaluations help us learn and adapt throughout the strategy lifecycle.
But are the evaluations we’ve conducted meeting our expectations for quality? And how well are we living up to our own principles for good evaluation practice?
To answer those questions, we looked in depth at nearly 50 evaluations commissioned by our program staff between 2009 and 2016 to inform grantmaking practice. We found that:
Evaluation quality has improved over time. We define quality based on our evaluation principles and practices; it includes clarity of purpose, rigorous and relevant methodology, engagement of grantees, and several other criteria.
Our evaluations are valuable and used in multiple ways. More than half (52%) inform grant- and grantee-level decisions.
Spending on evaluation has nearly doubled, to 1.3% of grantmaking, but remains below our 2% target. The foundation set that benchmark in 2013 with the goal of increasing the utility and value of evaluations for more effective grantmaking.
Higher-quality evaluations cost more, but other factors also contribute to better evaluations. For example, when staff invest sufficient time at key points – structuring the evaluation plan, discussing the data, and adapting reports for specific audiences – the investment pays off in a better overall evaluation experience and more useful results.
We have increased grantee engagement since 2013, but we can do better. Currently, only one in four evaluations engages grantees in all three phases of an evaluation: planning, implementing, and interpreting results.
Evaluations are being shared more than in the past, but there’s room to improve. While about three out of four evaluations were shared with targeted external audiences – for example, through webinars or workshops with selected grantees and other funders – broader public sharing happens less often. About 45% of the foundation’s evaluations conducted from 2013 to 2016 are publicly available, up from 29% in the prior three-year period.