Transparency Talk

Category: "#OpenForGood" (27 posts)

Getting Practical About Open Licensing
January 11, 2018

Kristy Tsadick is Deputy General Counsel and Heath Wickline is a Communications Officer at the William and Flora Hewlett Foundation, where they created an Open Licensing Toolkit for the foundation’s staff and its grantees in 2015. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.


Some of the biggest barriers to open licensing—an alternative to traditional copyright that encourages sharing of intellectual property with few or no restrictions—are practical ones. What rights are authors really giving others when they openly license their work? How do authors decide on the right Creative Commons license for their work? And having decided to openly license what they’ve created, how do authors actually let others know about their decision?

The Hewlett Foundation, where we both work, has a long history of supporting openness and transparency, and when Larry Kramer joined the foundation as president in 2012, he decided to make a renewal of that commitment a key part of his tenure. In 2015, that renewed commitment resulted in a decision to extend our support for open licensing to require it on works created using grant funds, underlining our belief that if grants are made to support the public good then the knowledge they generate should also be considered a public good.

To successfully implement this idea, we knew we would have to offer some concrete guidance to our program staff and grantees on both what we were asking of them and how to do it. We also knew we wanted to create a policy that would offer our grantees flexibility to comply with it in ways that made sense for their organizations. Both ideas are embodied in the Open Licensing Toolkit for Staff that we developed.

The kit is structured to help the foundation’s program staff decide to which grants the new rule applies, introduce open licensing to grantees, and help clarify what an open license on written works will mean for them. It uses FAQs, a “decision tree,” template emails and other documents to walk through the process. There is even a guide to marking works with a Creative Commons license to make clear what information is needed along with the copyright notice. And while the kit was designed with Hewlett Foundation staff in mind, we also wanted it to be useful for grantees and others interested in expanding their understanding and use of open licenses—so, of course, the toolkit itself carries a broad Creative Commons license.

In thinking about which of our grants would be in scope for open licensing, we realized early on that general operating support is incompatible with the policy because those funds are given "with no strings attached." Beyond even this broad exemption, we wanted to allow plenty of space for grantees to select licenses or request an exemption where they felt open licenses could harm them financially. It's been gratifying to see how grantees have recognized the spirit of the new policy, and how infrequently they've requested exemptions—so much so that we stopped tracking those requests about a year after instituting the new policy. In one area where we did often see requests for exemptions—in grants to performing arts organizations, where the "work" is often a performance and selling tickets to it or recordings of it is central to a grantee's business model—we recently decided to change our standard grant agreements to recognize the need for this exemption.

Our goal in adopting the new policy was to show others what open licensing could mean for them—the way it can help spread knowledge and increase the impact of philanthropic resources. In that, we’ve been extremely successful, as other organizations have built on our toolkit, and our policy, to encourage open licensing in their own work. The Children’s Investment Fund Foundation (CIFF), for example, based its implementation guide for its own transparency policy on our toolkit, and the U.S. Department of State included a link to it in its Federal Open Licensing Playbook to encourage open licensing across all federal agencies. And because we included a Creative Commons license on the kit to be #OpenForGood, other organizations—including yours—are free to use and build on our work, too.

Hardly anyone would argue against getting more impact for the same dollars or having their ideas adopted and shared by more people. But real-world implementation details can get in the way. Our experience with our Open Licensing Toolkit shows that a practical, flexible approach to open licensing has helped extend our impact in ways we never could have imagined.

--Kristy Tsadick and Heath Wickline

In the Know: #OpenForGood Staff Pick December 2017
December 20, 2017

Gabriela Fitz is director of knowledge management initiatives at Foundation Center.

This post is part of the Glasspockets #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.


As the #OpenForGood campaign builds steam, and we continue to add to our IssueLab Results repository of more than 400 documents containing lessons learned and evaluative data, our team will regularly shine the spotlight on new and noteworthy examples of the knowledge that is available to help us work smarter, together. This current pick comes to us from the Conrad N. Hilton Foundation. Read last month's staff pick here.


Staff Pick: Conrad N. Hilton Foundation

Evaluation of the Conrad N. Hilton Foundation Chronic Homelessness Initiative: 2016 Evaluation Report, Phase I

Download the Report

Quick Summary

 

In 2011, the Conrad N. Hilton Foundation partnered with Abt Associates Inc. to conduct an evaluation of the Hilton Foundation’s Chronic Homelessness Initiative, with the goal of answering an overarching question: Is the Chronic Homelessness Initiative an effective strategy to end and prevent chronic homelessness in Los Angeles County?

Answering that question has not been easy. And it bears mentioning that this is not one of those reports that strives to prove a certain model is working; instead, it provides a suitably complicated picture of an issue that will be an ongoing, multi-agency struggle. A combination of economic conditions, insufficient and shrinking availability of affordable housing, and an unmet need for mental health and supportive services actually resulted in an increase in the number of people experiencing homelessness in Los Angeles County during the period under study. The numbers even suggest that Los Angeles was further from ending chronic homelessness than ever before. But the story is a bit more complicated than that.

In this final evaluation report on the community's progress over five years (January 2011 through December 2015), Abt Associates Inc. found that the collaborative system developed during the first phase of the initiative represented a turning point in the County's capacity to address chronic homelessness, a capacity that was needed more than ever by the end of 2015.

Field of Practice

  • Housing and Homelessness

What kinds of knowledge does this report open up?

This report goes beyond evaluating a single effort or initiative to look at the larger collaborative system of funding bodies and stakeholders involved in solving a problem like chronic homelessness. We often hear that no foundation can solve problems single-handedly, so it's refreshing to see a report framework that takes this reality into account by not just attempting to isolate the foundation-funded part of the work. The initiative's strategy focused on a systemic approach that included goals such as the leveraging of public funds, demonstrated action by elected and public officials, and increased capacity among developers and providers to deliver permanent supportive housing effectively, alongside the actual construction of thousands of housing units. By adopting this same systemic lens, the evaluation itself provides valuable insight not just into the issue of chronic homelessness in Los Angeles County, but also into how we might think about and evaluate programs and initiatives that are similarly collaborative or interdependent by design.

What makes it stand out?

This report is notable for two reasons. First is the evaluators' willingness and ability to genuinely grapple with the discouraging fact that homelessness had gone up during the initiative, along with the foundation's willingness to make that knowledge public by publishing the report. All too often, reports that don't cast foundation strategies in the best possible light don't see the light of day at all. Sadly, it is that kind of "sweeping under the rug" of knowledge that keeps us all in the dark. The second notable thing about this report is its design. The combination of a summary "dashboard" with easily digestible infographics about both the process of the evaluation and its findings, and a clear summary analysis for each strategic goal, makes this evaluation stand out from the crowd.

Key Quote

“From our vantage point, the Foundation’s investment in Systems Change was its most important contribution to the community’s effort to end chronic homelessness during Phase I of the Initiative. But that does not mean the Foundation’s investments in programs and knowledge dissemination did not make significant contributions. We believe it is the interplay of the three that yielded the greatest dividend.”

--Gabriela Fitz

In the Know: #OpenForGood Staff Pick
November 1, 2017

Gabriela Fitz is director of knowledge management initiatives at Foundation Center.

This post is part of the Glasspockets #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.


As the #OpenForGood campaign builds steam, and we continue to add to our IssueLab Results repository of more than 400 documents containing lessons learned and evaluative data, our team will regularly shine the spotlight on new and noteworthy examples of the knowledge that is available to help us work smarter, together. This current pick comes to us from the Native Arts & Cultures Foundation.


Staff Pick: Native Arts & Cultures Foundation

Progressing Issues of Social Importance Through the Work of Indigenous Artists: A Social Impact Evaluation of the Native Arts and Cultures Foundation's Pilot Community Inspiration Program

Download the Report

Quick Summary


Impact measurement is a challenge for all kinds of organizations, and arts and culture organizations in particular often struggle with how to quantify the impact they are making. How does one measure the social impact of an epic spoken word poem, or of a large-scale, temporary art installation, or of performance art? The same is true of measuring the impact of social change efforts: how can these be measured in the short term, given the usual pace of change? This report provides a good example of how to overcome both of these struggles.

In 2014, the Native Arts & Cultures Foundation (NACF) launched a new initiative, the Community Inspiration Program (CIP), which is rooted in the understanding that arts and cultures projects have an important role to play in motivating community engagement and supporting social change.

This 2017 report considers the social impacts of the 2014 CIP projects—what effects did they have on communities and on the issues, conversations, and connections that are critical in those communities? Its secondary purpose is to provide the NACF with ideas for how to improve its grantmaking in support of arts for community change.

Field(s) of Practice

  • Arts and Culture
  • Native and Indigenous Communities
  • Social Change
  • Community Engagement

What kinds of knowledge does this report open up?

This report opens up knowledge about the pilot phases of a new initiative whose intended impacts, community inspiration and social change, are vital but difficult concepts to operationalize and measure. The evaluation provides valuable insight into how foundations can encourage the inclusion of indigenous perspectives and truths not only in the design of their programs but also in the evaluation of those same programs.

What makes it stand out?

Several key aspects make this report noteworthy. First, the evaluation combines more traditional methods and data with what the authors call an "aesthetic-appreciative" evaluation lens, which accounts for a set of dimensions associated with aesthetic projects such as "disruption," "stickiness," and "communal meaning," providing a more holistic analysis of the projects. Further, because the evaluation focused on Native-artist-led projects, it relied on the guidance of indigenous research strategies. This intentionality around developing strategies and principles for stakeholder inclusion makes it a noteworthy and useful framework for others, regardless of whether Native communities are the focus of your evaluation.

Key Quote

"Even a multiplicity of evaluation measures may not 'truly' tell the story of social impact if, for evaluators, effects are unobservable (for example, they occur at a point in the future that is beyond the evaluation's timeframe), unpredictable (so that evaluators don't know where to look for impact), or illegible (evaluators cannot understand that they are seeing the effects of a project)."

--Gabriela Fitz

Open Access to Foundation Knowledge
October 25, 2017

This post is part of the Glasspockets #OpenForGood series in partnership with the Fund for Shared Insight. This post also appears in Medium. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Lisa Brooks

Foundations have a lot of reasons to share knowledge. They produce knowledge themselves. They hire others to research and author works that inform internal strategy development and the evaluation of strategies, programs, and projects. And they make grants that help others gain insight into social issues — be it through original research, evaluation work, or other work aimed at creating a better understanding of issues so that we can all pursue better solutions to social problems. In almost all aspects of foundation work, knowledge is an outcome.

While open sharing of this knowledge remains uneven across the social sector, we do see more and more foundations starting to explore open access to the knowledge assets they make possible. Many foundations are sharing more intentionally through their websites, external clearinghouses, and other online destinations. And more foundations are suggesting — sometimes requiring — that their grantees openly share knowledge that was produced with grant dollars.

Lacey Althouse

Some foundations are even becoming open access champions. For example, the Hewlett Foundation has authored a terrifically helpful free resource, the "Hewlett Foundation Open Licensing Toolkit for Staff," that provides an in-depth how-to aimed at moving foundation and grantee intellectual property licensing practices away from "all rights reserved" copyrights and toward "some rights reserved" open licenses. (Full disclosure: IssueLab is included in the toolkit as one solution for long-term knowledge preservation and sharing.)

For those who are already 100% open, it's easy to forget that, when first starting out, learning about open access can be daunting. For those who are trying to open up, getting there, like most things, is a series of steps. One step is understanding how licensing can work for, or against, openness. Hewlett's toolkit is a wonderful primer for understanding this. IssueLab also offers some ways to dig into other areas of openness. Check out Share the Wealth for tips.


However it is that foundations find their way to providing open access to the knowledge they make possible, we applaud and support it! In the spirit of International Open Access Week's theme, "Open in order to…," here's what a few leading foundations have to say about the topic of openness in the social sector.

James Irvine Foundation 
Find on IssueLab.

“We have a responsibility to share our knowledge. There’s been a lot of money that gets put into capturing and generating knowledge and we shouldn’t keep it to ourselves.”

-Kim Ammann Howard, Director of Impact Assessment and Learning

Hewlett Foundation
Find on IssueLab.

“Our purpose for existing is to help make the world a better place. One way we can do that is to try things, learn, and then share what we have learned. That seems obvious. What is not obvious is the opposite: not sharing. So the question shouldn’t be why share; it should be why not share.”

-Larry Kramer, President

Hawaii Community Foundation
Find on IssueLab.

"Openness and transparency is one element of holding ourselves accountable to the public — to the communities we're either in or serving. To me, it's a necessary part of our accountability and I don't think it should necessarily be an option."

-Tom Kelly, Vice President of Knowledge, Evaluation and Learning

The David and Lucile Packard Foundation
Find on IssueLab.

“Why do we want to share these things? …One, because it’s great to share what we’re learning, what’s worked, what hasn’t, what impact has been made so that others can learn from the work that our grantees are doing so that they can either not reinvent the wheel, gain insights from it or learn from where we’ve gone wrong… I think it helps to build the field overall since we’re sharing what we’re learning.”

-Bernadette Sangalang, Program Officer

The Rockefeller Foundation
Find on IssueLab.

“To ensure that we hold ourselves to this high bar, The Rockefeller Foundation pre-commits itself to sharing the results of its evaluations — well before the results are even known.”

-Veronica Olazabal, Shawna Hoffman, and Nadia Asgaraly
(Read more on why the Rockefeller Foundation is open for good.)

If you are a foundation ready to make open access the norm as part of your impact operations, here’s how you can become an open knowledge organization today.

IssueLab believes that social sector knowledge is a public good that is meant to be freely accessible to all. We collect and share the sector’s knowledge assets and we support the social sector’s adoption of open knowledge practices. Visit our collection of ~23,000 open access resources. While you’re there, add your knowledge — it takes minutes and costs nothing. Find out what we’re open in order to do here. IssueLab is a service of Foundation Center.

--Lisa Brooks and Lacey Althouse

How "Going Public" Improves Evaluations
October 17, 2017

Edward Pauly is director of research and evaluation at The Wallace Foundation. This post is part of the Glasspockets #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

As foundations strive to be #OpenForGood and share key lessons from their grantees' work, a frequent question that arises is how foundations can balance the value of openness with concerns about potential risks.

Concerns about risk are particularly charged when it comes to evaluations. Those concerns include: possible reputational damage to grantees from a critical or less-than-positive evaluation; internal foundation staff disagreements with evaluators about the accomplishments and challenges of grantees they know well; and evaluators’ delays and complicated interpretations.

It therefore may seem counterintuitive to embrace – as The Wallace Foundation has – the idea of making evaluations public and distributing them widely. And one of the key reasons may be surprising: To get better and more useful evaluations.

The Wallace Foundation has found that high-quality evaluations – by which we mean independent, commissioned research that tackles questions that are important to the field – are often a powerful tool for improving policy and practice. We have also found that evaluations are notably improved in quality and utility by being publicly distributed.

Incentives for High Quality

A key reason is that the incentives of a public report for the author are aligned with quality in several ways:

  • Evaluation research teams know that when their reports are public and widely distributed, they will be closely scrutinized and their reputation is on the line. Therefore, they do their highest quality work when it’s public.  In our experience, non-public reports are more likely than public reports to be weak in data use, loose in their analysis, and even a bit sloppy in their writing.  It is also noteworthy that some of the best evaluation teams insist on publishing their reports.
  • Evaluators also recognize that they benefit from the visibility of their public reports because visibility brings them more research opportunities – but only if their work is excellent, accessible and useful.
  • We see evaluators perk up when they focus on the audience their reports will reach. Gathering data and writing for a broad audience of practitioners and policymakers incentivizes evaluators to seek out and carefully consider the concerns of the audience: What information does the audience need in order to judge the value of the project being evaluated? What evidence will the intended audience find useful? How should the evaluation report be written so it will be accessible to the audience?

Making evaluations public is a classic case of a virtuous circle: public scrutiny creates incentives for high quality, accessibility and utility; high quality reports lead to expanded, engaged audiences – and the circle turns again, as large audiences use evaluation lessons to strengthen their own work, and demand more high-quality evaluations. To achieve these benefits, it's obviously essential for grantmakers to communicate upfront and thoroughly with grantees about the goals of a public evaluation report – goals of sharing lessons that can benefit the entire field, presented in a way that avoids any hint of punitive or harsh messaging.

“What is it that you don’t know, that if you knew it, would enable you to make important progress in your own work?”

Asking the Right Questions

A key difference between evaluations commissioned for internal use and evaluations designed to produce public reports for a broad audience lies in the questions they ask. Of course, for any evaluation or applied research project, a crucial precursor to success is getting the questions right. In many cases, internally-focused evaluations quite reasonably ask questions about the lessons for the foundation as a grantmaker. Evaluations for a broad audience of practitioners and policymakers, including the grantees themselves, typically ask a broader set of questions, often emphasizing lessons for the field on how an innovative program can be successfully implemented, what outcomes are likely, and what policies are likely to be supportive.

In shaping these efforts at Wallace as part of the overall design of initiatives, we have found that one of the most valuable initial steps is to ask field leaders: What is it that you don’t know, that if you knew it, would enable you to make important progress in your own work? This kind of listening can help a foundation get the questions right for an evaluation whose findings will be valued, and used, by field leaders and practitioners.

Knowledge at Work

For example, school district leaders interested in Wallace-supported "principal pipelines," which could help ensure a reliable supply of effective principals, wanted to know the costs of starting such pipelines and maintaining them over time. The result was a widely used RAND report that we commissioned, "What It Takes to Operate and Maintain Principal Pipelines: Costs and Other Resources." RAND found that costs are less than one half of 1% of districts' expenditures; the report also explained what drives costs and provided a very practical checklist of the components of a pipeline that readers can customize and adapt to meet their local needs.

Other examples that show how high-quality public evaluations can help grantees and the field include:

Being #OpenForGood does not happen overnight, and managing an evaluation planned for wide public distribution isn't easy. The challenges start with getting the question right – and then selecting a high-performing evaluation team; allocating adequate resources for the evaluation; connecting the evaluators with grantees and obtaining relevant data; managing the inevitable and unpredictable bumps in the road; reviewing the draft report for accuracy and tone; allowing time for grantees to fact-check it; and preparing with grantees and the research team for the public release. Difficulties, like rocks on a path, crop up at each stage of the journey. Wallace has encountered all of these difficulties, and we don't always navigate them successfully. (Delays are a persistent issue for us.)

Since we believe that the knowledge we produce is a public good, it follows that the payoff of publishing useful evaluation reports is worth the effort. Interest from the field is evidenced by 750,000 downloads last year from www.wallacefoundation.org, and by a highly engaged public discourse about what works, what doesn't, why, and how – rather than the silence that often greets internally focused evaluations.

--Edward Pauly

Opening Up the Good and Bad Leads to Stronger Communities and Better Grantmaking
September 28, 2017

Hanh Cao Yu is Chief Learning Officer at The California Endowment. She has been a researcher and evaluator of equity and philanthropy for more than two decades.

This post is part of the Glasspockets #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

More than a year ago, when I began my tenure at The California Endowment (TCE), I reflected deeply on the opportunities and challenges ahead as the new Chief Learning Officer. We were six years into a complex, 10-year policy/systems change initiative called Building Healthy Communities (BHC). This initiative was launched in 2010 to advance statewide policy, change the narrative, and transform 14 of California's communities most devastated by health inequities into places where all people—particularly our youth—have an opportunity to thrive. At $1 billion, this is the boldest bet in the foundation's history, and the stakes are high. It is not surprising, then, that despite the emphasis on learning, the evaluation of BHC is seen as a winning or losing proposition.

“By acknowledging our mistakes, our focus has sharpened and our dual roles as changemakers and grantmakers have continued to evolve.”

As I thought about the role of learning and evaluation in deepening BHC’s impact, I became inspired by the words of Nelson Mandela: “I never lose.  I either win or I learn.”  His encouragement to shift our mindset from “Win/Lose” to “Win/Learn” is crucial to continuous improvement and success.  

I also drew from the insights of Amy Edmondson, who reminds us that not all failures are bad. According to Edmondson, mistakes can be preventable, unavoidable due to complexity, or even intelligent failures. So, despite careful planning and learning from decades of research on comprehensive community initiatives and bold systems change efforts, in an initiative as complex as BHC, mistakes can and will occur. By spurring change at community, regional, and state levels, and by linking community mobilization with sophisticated policy advocacy, TCE was truly venturing into new territory when we launched BHC.

BHC's Big Wins and Lessons 

At the mid-point of BHC, TCE staff and Board paused to assess where we had been successful and where we could do better in improving the conditions under which young people could be healthy and thrive in our underserved communities. The results were widely shared in the 2016 report, A New Power Grid: Building Healthy Communities at Year 5.

As a result of taking the time to assess overall progress, we identified some of BHC's biggest impacts to date. In the first five years, TCE and partners contributed to significant policy/system wins:

  • Improved health coverage for the underserved;
  • Strengthened health coverage policy for the undocumented;
  • Improved school climate, wellness and equity;
  • Prevention and reform within the justice system;
  • Public-private investment and policy changes on behalf of boys and young men of color; and
  • Local & regional progress in adoption of “Health In All Policies,” a collaborative approach incorporating health considerations into decision-making across all policy areas

Our Board and team are very pleased with the results and impact of BHC to date, but we have been committed to learning from our share of mistakes. 

Along with the victories, we acknowledged in the report some hard lessons. Most notable among them was the need to pay more attention to:

  • Putting Community in "Community-Driven" Change.  Armed with lessons about the need for clarity on results in order to achieve results, we overthought the early process. This resulted in a prescriptiveness in the planning phase that was not only unnecessary, but also harmful. We entered the community planning process with multiple outcomes frameworks and a planning process that struck many partners as philanthropic arrogance. The smarter move would have been to engage community leaders with the clarity of a shared vision and operating principles, and to create the space for community leaders and residents to incubate goals, results, and strategy. Fortunately, we course corrected, and our partners were patient while we did so.
  • Revisiting assumptions about local realities and systems dynamics.  In the report, we discussed our assumption about creating a single locus of inside-out, outside-in activity where community residents, leaders and systems leaders could collaborate on defined goals. It was readily apparent that community leaders distrusted many “systems” insiders, and systems leaders viewed outsider/activists as unreasonable. We underestimated the importance of the roles of historical structural inequalities, context, and dynamics of relationships at the local level.  Local collaboratives or “hubs” were reorganized and customized to meet local realities, and we threw the concept of a single model of collaboration across all the sites out the window.

Some course corrections we made included adjusting and sharpening our underlying assumptions and theory of change and taking on new community-driven priorities that we never anticipated early on; examples include school discipline reform, dismantling the prison pipeline in communities of color through prevention, and work that is taking place in TCE’s Boys & Young Men of Color portfolio.  By acknowledging our mistakes, our focus has sharpened and our dual roles as changemakers and grantmakers have continued to evolve. 

“Some partner feedback was difficult to hear, but all of it was useful and is making our work with partners stronger.”

Further, significant developments have occurred since the report:

Positioning “Power Building” as central to improving complex systems and policies.  In defining key performance indicators, we know the policy milestones achieved thus far represent only surface manifestations of the ultimate success we are seeking.  We had a breakthrough when we positioned “building the power and voice” of the adults and youth in our communities and “health equity” at the center of our BHC North Star Goals and Indicators.  Ultimately, we’ll know we are successful when the power dynamics in our partner communities have shifted so that adult and youth residents know how to hold local officials accountable for full, ongoing implementation of these policies.

Continuing to listen to our partners.  In addition to clarifying our North Stars, we sought further mid-point advice from our partners, reaching out to 175 stakeholders, including 68 youth and adult residents of BHC communities, for feedback to shape the remainder of BHC’s implementation and to inform our transition planning for the next decade.  Some of what our partners told us was difficult to hear, but all of it was useful and is making our work with partners stronger.    

From these lessons, I challenge our philanthropic colleagues to consider:

  • How can we learn to detect complex failures early to help us go beyond superficial lessons? As Amy Edmondson states, "The job of leaders is to see that their organizations don't just move on after a failure but stop to dig in and discover the wisdom contained in it."
  • In complex initiatives and complex organizations, what does it take to design a learning culture that capitalizes successfully on mistakes? How do we truly engage in "trial and error" and stay open to experimentation and midcourse corrections? How can we focus internally on our own operations and ways of working, while also being willing to change our strategies and relationships with external partners? Further, how do we, as grantmakers responsible for serving the public good, take responsibility for making these lessons #OpenForGood so others can learn from them as well?

It is worth noting that a key action TCE took at the board level as we embarked on BHC was to dissolve the Board Program Committee and replace it with a Learning and Performance Committee. This structure created a consistent opportunity for the Board, the CEO, and the management team to learn together from evaluation reports and to share our learnings publicly to build the philanthropic field. Now, even as we enter the final phase of BHC, we continue to look for ways to structure opportunities to learn, and I can say, "We are well into a journey to learn intelligently from our successes as well as our mistakes to make meaningful, positive impacts."

--Hanh Cao Yu

Championing Transparency: The Rockefeller Foundation Is First to Share All Evaluations As Part of #OpenForGood
September 26, 2017

The Rockefeller Foundation staff who authored this post are Veronica Olazabal, Director of Measurement, Evaluation, and Organizational Performance; Shawna Hoffman, Measurement, Evaluation, and Organizational Performance Specialist; and Nadia Asgaraly, Measurement and Evaluation Intern.

This post is part of the Glasspockets #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.


Today, aligned with The Rockefeller Foundation's commitments to sharing and accountability, we are proud to be the first foundation to accept the challenge and proactively make all of our evaluation reports publicly available as part of Foundation Center's #OpenForGood campaign.

A History of Transparency and Sharing

Since its founding more than 100 years ago, The Rockefeller Foundation's mission has remained unchanged: to promote the well-being of humanity throughout the world. To this end, the Foundation seeks to catalyze and scale transformative innovation across sectors and geographies, and take risks where others cannot, or will not. While working in innovative spaces, the Foundation has always recognized that the full impact of its programs and investments can only be realized if it measures - and shares - what it is learning. Knowledge and evidence sharing is core to the organization's DNA dating back to its founder John D. Rockefeller Sr., who espoused the virtues of learning from and with others—positing that this was the key to "enlarging the boundaries of human knowledge."

“To ensure that we hold ourselves to this high bar, The Rockefeller Foundation pre-commits itself to sharing the results of its evaluations - well before the results are even known.”

Evaluation for the Public Good

Building the evidence base for the areas in which we work is the cornerstone of The Rockefeller Foundation's approach to measurement and evaluation. By systematically tracking progress toward implementation and outcomes of our programs, and by testing, validating, and assessing our assumptions and hypotheses, we believe that we can manage and optimize our impact. By documenting what works, for whom, and under what conditions, we can amplify our impact by crowding in other funders around promising solutions and by keeping resources from being wasted on approaches that prove ineffectual.

But living out transparency as a core value is not without its challenges. A commitment to the principle of transparency alone is insufficient; organizations, especially foundations, must walk the talk. Sharing evidence requires the political will and human resources to do so and, more importantly, a willingness to communicate not only one's successes, but also one's challenges and failures. For this reason, to ensure that we hold ourselves to this high bar, The Rockefeller Foundation pre-commits itself to sharing the results of its evaluations - well before the results are even known. Then, once evaluation reports are finalized, they are posted to the Foundation website, available to the public free of charge.

#OpenForGood Project

The Foundation Center's #OpenForGood project, and IssueLab's related Results platform, help take the Foundation's commitment to sharing and strengthening the evidence base to the next level. By building a repository where everyone can identify others working on similar topics, search for answers to specific questions, and quickly identify where knowledge gaps exist, they are leading the charge on knowledge sharing.

The Rockefeller Foundation is proud to support this significant effort by being the first to contribute its evaluation evidence base to IssueLab: Results as part of the #OpenForGood movement, with the hope of encouraging others to do the same.

-- Veronica Olazabal, Shawna Hoffman, and Nadia Asgaraly

How To Keep Me Scrolling Through What You Are Sharing
August 10, 2017

Tom Kelly is Vice President of Knowledge, Evaluation & Learning at the Hawai‘i Community Foundation. He has been learning and evaluating in philanthropy since the beginning of the century. @TomEval  TomEval.com

This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Hello, my name is Tom and I am a Subscriber. And a Tweeter, Follower, Forwarder (FYI!), Google Searcher, and DropBox Hoarder. I subscribe to blogs, feeds, e-newsletters, and email updates. My professional title includes the word "Knowledge," so I feel compelled to make sure I am keeping track of the high volume of data, information, reports, and ideas flowing throughout the nonprofit and foundation worlds (yes, it is a bit of a compulsion…and I am not even including my favorite travel, shopping and coupon alerts).

It is a lot and I confess I do not read all of it. It is a form of meditation for me to scroll through emails and Twitter feeds while waiting in line at Aloha Salads. I skim, I save, I forward, I retweet – I copy and save for later reading (later when?). In fact, no one can be expected to keep up, so how does anyone make sense of it all, or even find what we need when we need it? Everyone being #OpenForGood and sharing everything is great, but who is reading it all? And how do we make what we are opening for good actually good?

Making Knowledge Usable

We have all experienced at some point "drowning in information, starving for knowledge" (John Naisbitt's Megatrends…I prefer E.O. Wilson's "starving for wisdom" theory). The information may be out there, but rarely in a form that is easily found, read, understood, and, most importantly, used. Foundation Center and IssueLab have made it easier for people in the sector to know what is being funded, where new ideas are being tested, and what evidence and lessons are available. But nonprofits and foundations still have to upload and share many more of their documents than they do now. And we need to make sure that the information we share is readable, usable, and ready to be applied.


DataViz guru Stephanie Evergreen recently taught me a new hashtag: #TLDR – “Too Long, Didn’t Read.”

She now proposes that every published report be available in three formats – a one-page handout with key messages, a three-page executive summary, and a 25-page report (plus appendices). In this way the "scanners," "skimmers," and "deep divers" can access the information in the form they prefer and in the time they have. It also requires writing (and formatting) differently for each of these sets of eyes. (By the way, do you know which one you are?)

From Information to Influence

But it is not enough to make your reports accessible, searchable, and easily readable in short and long forms; you also need to include the information people need to make decisions and act. It means deciding in advance who you want to inform and influence and what you want people to do with the information. You need to be clear about your purpose for sharing information, and you need to give people the right kinds of information if you expect them to read it, learn from it, and apply it.

“Give people the right kinds of information if you expect them to read it, learn from it, and apply it.”

Too many times I have read reports with promising findings or interesting lessons, and then I race through all the footnotes and the appendices at the back of the report looking for resources that could point me to the details of evidence and data or implementation guidance. I usually wind up trying to track down the authors by email or phone to follow up.

A 2005 study of more than 1,000 published human services evaluations found only 22 well-designed and well-documented reports that shared any analysis of implementation factors – what lessons people learned about how best to put the program or services in place. We cannot expect other people and organizations to share knowledge and learn if they cannot access information from others that helps them use the knowledge and apply it in their own programs and organizations. YES, I want to hear about your lessons and "a-ha's," but I also want to see data and analysis of the common challenges that all nonprofits and foundations face:

  • How to apply and adapt program and practice models in different contexts
  • How to sustain effective practices
  • How to scale successful efforts to more people and communities

This means making sure that your evaluations and your reports open up the challenges of implementation – the same challenges others are likely to face. It also means placing your findings in the context of existing learning while also using similar definitions so that we can build on each other's knowledge. For example, in our recent middle school connectedness initiative, our evaluator, Learning for Action, reviewed the literature first to determine specific components and best practices of youth mentoring so that we could build the evaluation on what had come before, and then report clearly on what we learned about in-school mentoring, opening up useful and comparable knowledge to the field.

So please plan ahead, define your knowledge-sharing and influence agenda up front, and consider the following guidelines:

  • Who needs to read your report?
  • What information does your report need to share to be useful and used?
  • Read and review similar studies and reports and determine in advance what additional knowledge is needed and what you will document and evaluate.
  • Use common definitions and program model frameworks so we are able to continually build on field knowledge and not start anew each time.
  • Pay attention to and evaluate implementation, replication and the management challenges (staffing, training, communication, adaptation) that others will face.
  • And disseminate widely and share at conferences, in journals, in your sector networks, and in IssueLab’s open repository.

And I will be very happy to read through your implementation lessons in your report’s footnotes and appendices next time I am in line for a salad.

--Tom Kelly

Crafting A Better Tap of Knowledge
August 9, 2017

Gabriela Fitz is director of knowledge management initiatives at Foundation Center. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

This past weekend, I went to visit an old meat packing plant in Chicago's Back of the Yards neighborhood. The plant, renamed "Plant Chicago," serves as a workshop and food production space, playing host to a number of micro-enterprises including a brewery and bakery. But it wasn't the beer or even the pies that drew me there. It was their tagline, "Closed Loop, Open Source."

If you know me (or the work of IssueLab at all), you know why I couldn’t resist. The closed loop approach is all about a circular economy, where we take “waste from one process and re-purpose it as inputs for another, creating a circular, closed-loop model of material reuse.” It’s a simple principle and one that I imagine most of us would get behind.

But what’s not so simple is building and maintaining those closed loop systems so that people begin to see (and taste) the benefits. Standing in the lobby of Plant Chicago it was painfully clear to me that circular economies, whether they are in food production or in knowledge production, require more than just good intentions.

Plant Chicago, a workshop and food production space, hosts micro-enterprises, including a brewery and bakery. Credit: Gabriela Fitz

Just as I started to feel the familiar weight of trying to execute systems change, I spotted a small sketch of a pyramid amongst a number of technical diagrams and development plans covering a large wall. This simple sketch was similar to a model many of you are probably familiar with, but it is still worth describing. In the sketch, the base of the pyramid was labeled "beliefs and values." The next level up was "practices and actions." The top of the pyramid was "results."

When it comes to the closed loop we care so much about at IssueLab, we keep seeing organizations try to skip from beliefs to results. The social sector wants shared learning without sharing; collective impact without collectivized intelligence. But open knowledge - like any sector-wide or organizational change - has to include a change in practices, not just beliefs. If we don't adopt open knowledge practices, we can't expect to see the results we hope for: improved program design and delivery at the community level, or less repetition of avoidable mistakes. If we truly want to live up to the #OpenForGood ideal, our beliefs and values are simply not sufficient. (Note that the definition of closed loop I quote above is not about beliefs; it's about actions, relying on verbs like "take," "re-purpose," and "create.")

The good news is that we have the infrastructure to make a circular knowledge economy possible. We’ve built the plant. Tools like open licenses and open repositories were designed to facilitate and support change in knowledge sharing practices, making it easier for foundations to move up the levels of the pyramid.

Now, we just need to start taking a couple of simple actions that reflect our beliefs and move us toward the results we want to see. If you believe in the #OpenForGood principle that social sector knowledge is a public good from which nonprofits and foundations can benefit, your foundation can: 1) use open licensing for your knowledge products, and 2) earn an #OpenForGood badge by sharing your knowledge products, like evaluations, through IssueLab's open repository. Once those practices are as much a part of the normal way of doing foundation business as cutting checks and producing summary reports, we can all sit back and enjoy that beer, together.

--Gabriela Fitz

How Improved Evaluation Sharing Has the Potential to Strengthen a Foundation’s Work
July 27, 2017

Jennifer Glickman is manager of the research team at the Center for Effective Philanthropy. This post is part of the Glasspockets' #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Philanthropy is a complex, demanding field, and many foundations are limited in the amount of resources they can dedicate to obtaining and sharing knowledge about their practices. It is necessary to consider, then: in what areas should foundations focus their learning and sharing efforts to be #OpenForGood?

Last year, the Center for Effective Philanthropy (CEP) released two research reports exploring this question. The first, Sharing What Matters: Foundation Transparency, looks at foundation CEOs’ perspectives on what it means to be transparent, who the primary audiences are for foundations’ transparency efforts, and what is most important for foundations to share.

The second report, Benchmarking Foundation Evaluation Practices, presents benchmarking data collected from senior foundation staff with evaluation responsibilities on topics such as evaluation staffing and structures, investment in evaluation work, and the usefulness of evaluation information. Together, these reports provide meaningful insights into how foundations can learn and share knowledge most effectively.

CEP’s research found that there are specific topics about which foundation CEOs believe being transparent could potentially increase their foundation’s ability to be effective. These areas include the foundation’s grantmaking processes, its goals and strategies, how it assesses its performance, and the foundation’s experiences with what has and has not worked in its efforts to achieve its programmatic goals. While foundation CEOs believe their foundations are doing well in sharing information about their grantmaking, goals, and strategies, they say their foundations are much less transparent about the lessons they learn through their work.


For example, nearly 70 percent of the CEOs CEP surveyed say being transparent about their foundation’s experiences with what has worked in its efforts to achieve its programmatic goals could increase effectiveness to a significant extent. In contrast, only 46 percent say their foundations are very or extremely transparent about these experiences. Even fewer, 31 percent, say their foundations are very or extremely transparent about what has not worked in their programmatic efforts, despite 60 percent believing that being transparent about this topic could potentially increase their effectiveness to a significant extent.

And yet, foundations want this information about lessons learned and think it is important. Three-quarters of foundation CEOs say they often seek out opportunities to learn from other foundations' work, and one benefit they see in being transparent is that it enables others to learn from foundation work more generally.

How is knowledge being shared then? According to our evaluation research, foundations are mostly sharing their programmatic knowledge internally. Over three-quarters of the evaluation staff who responded to our survey say evaluation findings are shared quite a bit or a lot with the foundation’s CEO, and 66 percent say findings are shared quite a bit or a lot with foundation staff. In comparison:

  • Only 28 percent of respondents say evaluation findings are shared quite a bit or a lot with the foundation’s grantees;
  • 17 percent say findings are shared quite a bit or a lot with other foundations; and
  • Only 14 percent say findings are shared quite a bit or a lot with the general public.


In fact, less than 10 percent of respondents say that disseminating evaluation findings externally is a top priority for their role.

But respondents do not think these numbers are adequate. Nearly three-quarters of respondents say their foundation invests too little in disseminating evaluation findings externally. Moreover, when CEP asked respondents what they hope will have changed for foundations in the collection and/or use of evaluation information in five years, one of the top three changes mentioned was that foundations would be more transparent about their evaluations and share what they are learning externally.

So, if foundation CEOs believe that being transparent about what their foundation is learning could increase its effectiveness, and foundation evaluation staff believe that foundations should be investing more in disseminating findings externally, what is holding foundations back from embracing an #OpenForGood approach?

CEP has a research study underway looking more deeply into what foundations know about what is and isn't working in their practices, and with whom they share that information; it will yield new data to enrich the current conversations on transparency and evaluation in early 2018. In the meantime, take a moment to stop and consider what you might #OpenForGood.

--Jennifer Glickman
