Transparency Talk


The Edna McConnell Clark Foundation’s Kelly Fitzsimmons Discusses a New Blueprint for Evaluation Plans
October 15, 2014

The Social Innovation Fund (SIF), a White House initiative and program of the Corporation for National and Community Service, has recently created a new document designed to help organizations develop a comprehensive evaluation design. The Social Innovation Fund Evaluation Plan Guidance aims to share best practices that benefit and strengthen the sector as a whole.

Recently, Transparency Talk conducted an online interview with Kelly Fitzsimmons of The Edna McConnell Clark Foundation (EMCF) and with Michael Smith, Director of the Social Innovation Fund, to learn how the new framework provided by SIF can be adapted for use in assessing foundation program impact.


1. Please tell us a little bit about the Edna McConnell Clark Foundation and why you became involved in the Social Innovation Fund.

EMCF: The Edna McConnell Clark Foundation seeks to transform the life trajectories of vulnerable and economically disadvantaged youth by making multi-million-dollar, multi-year investments in nonprofits with the potential for growth and compelling evidence that they can help more young people become successful, productive adults.

We agreed to become a SIF intermediary based on our belief that the Social Innovation Fund could become a catalyst for scaling “what works” by encouraging the public and private sectors to direct more resources to the most effective solutions to some of our nation’s seemingly intractable social problems.

As a SIF intermediary, we have helped mobilize a total of $120 million for 12 promising, evidence-based organizations: $30 million in federal funds from the SIF, $30 million from our own resources and, through the True North Fund, the $60 million in matching funds we helped our grantees secure. Our SIF grants are designed to build the evidence base and organizational capacity of this portfolio of nonprofits so they can, within three years:

  • Significantly increase the numbers of youth served by effective programs, and
  • Substantially advance the evidence of their effectiveness with rigorous, independent program evaluations.

2. What do you see as the value of program evaluations in your field?

EMCF: I think there are some misconceptions about what you “do” with program evaluation. For us at the Edna McConnell Clark Foundation, one of the biggest positives is that evaluation can expand what you know about what “works” as well as about what doesn’t work. It’s our belief that program evaluations are a key driver of innovation.

Whether a study shows positive, mixed or disappointing results, if carefully designed it almost always unearths information that can be used to innovate and improve how a program is delivered to boost quality and impact.  To get the most out of an evaluation, we believe it is critical during the planning stage for organizations and their evaluators to ask themselves not only what impacts they are looking to test, but also what they’d like to learn about the program’s implementation.  For example, answering questions such as: “How closely is the program run compared to the intended model?” or “To what extent does this or that program component contribute to impact?” can yield important insights into how well a program is implemented across different sites (or cities or regions) or reveal differences in impacts depending on the population served or environmental factors.

3. How does having evaluation plans, like the Social Innovation Fund’s Evaluation Plan Guidance, help nonprofits become more effective?

EMCF: The Social Innovation Fund’s tool is a useful resource for organizations interested in building their evidence base and planning thoughtfully for evaluation. It offers practical takeaways, from how to structure an evaluation plan, to what elements an evaluation should include, to ways of assessing the feasibility of undertaking one. A thoughtful evaluation plan can also inform an organization’s larger plans. For example, if an evaluation requires that *X* number of kids must participate in order for a program to be assessed, does your organization need to grow or adapt in order to meet that threshold? If so, how will the organization get there while maintaining program quality?

In essence, a strong, multi-year evaluation plan is much like a strong business plan—it helps you think about the resources you need, identify your interim and ultimate goals, and even decide what to do and how to communicate if your plan goes off-track. 

4. How do EMCF and your grantees use the data you’ve collected from evaluations?

EMCF: We like to approach evidence building from the premise that we’re seeking to understand *how* a program works, not just *if* a program works. From this perspective, whether the findings are positive, mixed or null, evaluating programs over time can yield insights that inform practice, drive innovation and ultimately ensure the best possible outcomes for youth and families. 

For example, take Reading Partners, which connects students who are half a grade to 2 ½ grades behind in reading with trained volunteers who use a specialized curriculum. A recently released MDRC evaluation found that these students made greater gains in literacy (1.5 to two months) than their peers after an average of 28 hours of Reading Partners’ instruction. During the evaluation, MDRC was able to corroborate that local sites were implementing the program with a high degree of fidelity, including providing appropriate support and training to volunteer tutors. The data collected also indicated the program was effective across different subsets of students: across grade levels (2nd through 5th), across varying baseline reading achievement levels, for both girls and boys, and even for non-native English speakers. This knowledge is now helping Reading Partners think more strategically about how and where it expands to impact more kids.

We worked with Reading Partners as we do with other EMCF grantees: bringing in experts to help develop a high-quality evaluation plan, connecting the organization to other experts, and funding its evaluation. We help grantees identify key evaluation questions at the outset, then work together to monitor progress toward evaluation goals, revise plans when circumstances change or new information arises, and communicate results when they become available. Evidence building is a continuous, dynamic process that informs how EMCF as well as our grantees set and reach our growth, learning and impact goals.

We also use quality and impact data to measure and track, quarterly and annually, the performance of each grantee and our entire portfolio, including whether our investment strategy is having its intended effect of helping our grantees meet the yearly and end-of-investment milestones and evidence-building goals on which we have mutually agreed.

5. Many funders say they are using such evaluations as learning tools. But a fear factor comes in when grantees have specific benchmarks to meet: if they fail to meet them, they worry they will not receive renewed support. That speaks to the tension among risk, innovation, and accountability. How do you navigate those tensions so that the assessment process doesn't actually stand in the way of risk and innovation?

SIF: Our grantees have expressed this concern too, and the way we have answered it is this: evaluation should be about proving and improving. Evaluation results should represent the beginning of a process in which all stakeholders use what is learned to enhance and even overhaul programs.

The Social Innovation Fund is, at its core, a grand experiment. As part of this experiment, we are here to learn together. The investment we are making in our grantees’ and subgrantees’ innovations is a risk, but it’s a measured, calculated risk. Their evaluations will help us understand what works.

If a program comes back with, say, null results but some really valuable information about how the program was implemented, or about a specific population that needs a different approach to achieve impact, we will not write that off as a failure. But we will demand that our grantees use that information to get positive results next time. And we expect that they will share this information with their peers so that other programs can learn from and build on these lessons.

There is a lot of work to be done in the field to make sure evaluation information isn't treated as binary: it works or it doesn't. All of the evaluation reports we've seen to date fall more in the gray area, even those showing truly positive impact; there is always some element of a program that doesn't work as anticipated. We know that most folks don't dig in to find those details, so we have committed to working with our grantees to start conversations that will help others use the evaluation information coming out of the SIF, so that results aren't seen as an up-or-down vote but as rich sources of information that can be truly useful.

-- Kelly Fitzsimmons and Michael Smith

