Transparency Talk

Category: "Research" (38 posts)

The Rockefeller Brothers Fund is #OpenForGood
January 31, 2018

Hope Lyons is the director of program management at the Rockefeller Brothers Fund, and Ari Klickstein is the communications associate/digital specialist at RBF. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.


As a private foundation, the Rockefeller Brothers Fund advances a just, peaceful, and sustainable world through grantmaking and related activities. We believe that discerning and communicating the impact of our grantmaking and other programmatic contributions is essential to fulfilling the Fund’s mission, as is a commitment to stewardship, transparency, and accountability. Philanthropy exists to serve the public good. By opening up what we are learning, we believe that we are honoring the public’s trust in our activities as a private foundation.

As part of our commitment to serving the public good, we are proud to be among the first foundations to join the new #OpenForGood campaign by sharing published reports on our grantmaking through Foundation Center’s open repository, IssueLab, and its new special collection of evaluations, Find Results, while continuing to make them available on our own website. These reports and impact assessments are authored by third-party assessment teams, and sometimes by our own program leadership, and they join the published research papers and studies by grantees already on IssueLab.

We feel strongly that we have a responsibility to our grantees, trustees, partners, and the wider public to periodically evaluate our grantmaking, to use the findings to inform our strategy and practice, and to be transparent about what we are learning. In terms of our sector, this knowledge can go a long way in advancing fields of practice by identifying effective approaches. The Fund has a long history of sharing our findings with the public, stretching as far back as 1961, when the results of the Fund’s Special Studies Project were published as the bestselling volume Prospect for America. The book featured expert analysis on key issues of the era including international relations, economic and societal challenges, and democratic practices, topics which remain central to our grantmaking work.

We view our grantmaking as an investment in the public good and place a great deal of importance on accountability. Through surveys conducted by the Center for Effective Philanthropy in 2016, our grantees and prospective grantees told us that they wanted to hear more about what we have learned, including what the Fund has tried in its past grantmaking that proved less successful. Regular assessments by CEP and third-party issue-area experts help keep us accountable and identify blind spots in our strategies. Our evaluations have long been posted online, and we have reorganized our website to make them easier to find; going forward, we have also committed to sharing additional reflections on what we are learning and to distributing these reports more proactively. We are grateful to Foundation Center for creating and maintaining IssueLab as a sharing platform and learning hub where the public, practitioners, and peers alike can locate resources and benefit from the research that the philanthropic sector undertakes.

--Hope Lyons and Ari Klickstein

Open Solutions: MacArthur Foundation Opens Up Knowledge from Its $100 Million Competition
December 22, 2017

MacArthur Foundation is opening up its work, its grantmaking process, and perhaps most importantly — its submissions — through the 100&Change competition.

The 100&Change competition funds a single proposal that “promises real and measurable progress in solving a critical problem of our time.” MacArthur welcomed proposals from any field or problem area.

Throughout this competition, MacArthur committed to being open and transparent about its grantmaking process.

Earlier this week, these processes culminated with MacArthur Foundation’s announcement that Sesame Workshop and the International Rescue Committee (IRC) are joint winners of the $100 million grant. The other three finalists each received a $15 million grant.

The two organizations will work collaboratively to implement an early childhood development intervention “designed to address the ‘toxic stress’ experienced by children in the Syrian response region—Jordan, Lebanon, Iraq, and Syria,” the foundation said in a statement. “The project will improve children's learning outcomes today and their intellectual and emotional development over the long term.” 

The foundation felt compelled to support what will be the “largest early childhood prevention program ever created in a humanitarian setting.” Given the project’s scale, it has the potential to improve how refugee children are treated and cared for globally. Additionally, project leaders hope the program will encourage a redirection of existing humanitarian aid and provide a working model for local government support.

In terms of scale, the media component, which pairs customized educational content with a new local version of Sesame Street delivered via television, mobile phones, and digital platforms, is expected to reach an estimated 9.4 million young children. The project will also reinforce home visits with digital content and connect trained local outreach and community health workers with 800,000 caregivers; an estimated 1.5 million children will receive direct services in homes and child development centers.

The 100&Change competition also served as a force for innovation in MacArthur’s grantmaking practices and processes; one MacArthur program officer said it helped the foundation evaluate and reflect on its own processes. For example, the foundation acknowledged that the eight semi-finalists and their proposals were atypical grant applications that would not normally be funded through its committed funding areas: over-incarceration, global climate change, nuclear risk, increasing financial capital for the social sector, supporting journalism, and funding proposals in its headquarters city of Chicago.

The competition, launched in 2016, marks another step in MacArthur’s commitment to opening up its work in the field of philanthropy. Through a partnership with Foundation Center, more than 1,900 grant applications for the 100&Change competition will be available through a portal, 100&Change Solutions Bank.

The solutions bank encourages organizations and funders to learn from one another and promotes the production and sharing of knowledge. Aware that the competition generated numerous worthwhile solutions to global issues, MacArthur hoped that publicly sharing the nearly 2,000 proposal submissions would benefit other funders interested in exploring and funding worthy proposals. It could also reduce the time applicants spend cultivating new donors and tailoring proposals to prospective funders.

A common criticism of competition philanthropy is that it demands a great deal of work from thousands of applicants when only one, or a handful, will win a prize. MacArthur’s solutions bank approach has the potential to make this effort worthwhile, since many can learn from the proposed solutions and potentially find new collaborative partners, funders, and donors.

Similarly, MacArthur’s commitment to Glasspockets’ transparency principles and, more recently, its decision to join the #OpenForGood campaign to affirm its ongoing commitment to openly sharing knowledge are among the ways the foundation is working to go beyond the transaction and maximize all of its assets.

--Melissa Moy

Transparency and Philanthropy - An Oxymoron in India? Not Anymore.
December 13, 2017

Sumitra Mishra is the executive director of Mobile Creches, a leading organization in India that works for the right to early childhood development for marginalized children. Its work spans from grassroots interventions to policy advocacy at the national level. She serves on the management team of Philanthropy for Social Justice and Peace (PSJP). Chandrika Sahai is the coordinator of PSJP.

India has traditionally been a philanthropic culture, with giving ingrained in all of its major religions as a part of everyday life. However, both formal and informal giving in India have mainly been private matters; the choice of cause and the method of giving have mostly been motivated by the givers’ desire to do good and feel good. Often, past giving was opaque in its reasons and strategies. NGOs and activism in India have traditionally been viewed with distrust by a skeptical general public, and giving for social change has been marginal. While the latest report, Philanthropy in India (published by Philanthropy for Social Justice and Peace in association with Alliance, WINGS and the Centre for Social Impact and Philanthropy, and Ashoka University), validates this picture, it also points to new trends that hold the promise of reversing it. These trends make a case for openness and greater public engagement as key ingredients to finding solutions to the complex social problems that continue to plague India.

Retail Giving

First, there is the rise of “retail giving,” or individual giving by ordinary citizens, which is bringing middle-class individuals, especially young people, into the fold of philanthropy because of their desire to be a part of the solution. They give not because they have excess wealth to distribute; rather, they are driven to do something that can make a change. This trend is supported by technology platforms that make it easier for givers and their circles of friends to get closer to change on the ground. More and more people from diverse backgrounds are engaging in the process, which leads to greater impact than fundraising alone.

Last month, to mark India’s Children’s Day, Child Rights and You (CRY) ran a #happychildhood campaign on social media with videos of CRY donors and supporters sharing their favorite childhood memories. The campaign was not a direct call for donations. Instead, it tapped the innate empathy in people – the desire to recreate similar experiences for others, motivating them to give because they care. Another example is DaanUtsav, which started in 2009 as Joy of Giving Week and has become a tremendous success, engaging 6 to 7 million people today in the act of giving. These examples show how retail giving is democratizing the process of giving, opening up avenues for raising awareness and leveraging the power of these large, networked platforms to mobilize and scale individual agency for social change.

The Rise of Progressive Philanthropists

Second, the report points to bold steps in giving by progressive individual philanthropists investing large sums of money in structural reforms in the areas of health, education, water, and sanitation. Most significantly, there is now a consortium of philanthropists visibly supportive of independent media. This comes at a time when independent media is under attack in the country, indicated not least by the recent murder of journalist Gauri Lankesh. By publicly investing in independent media, philanthropists with voices of influence such as Azim Premji and Rohini Nilekani are giving not just their dollars but adding their power and influence to the cause as well, demonstrating the important role transparency has to play in making a difference. “In India a few people are emerging who are willing to put their money into such things – but it’s a slow burn,” says Rohini Nilekani, who along with her husband recently signed the Giving Pledge, committing to give away the majority of their wealth, at least $1.7 billion, to philanthropy.

Furthermore, the report cites the emergence of a number of agencies in India, such as GuideStar India, Credibility Alliance, CAF India, and GiveIndia, that are leading the NGO accreditation process to bridge the gap between NGOs and philanthropists, whether individuals, corporations, HNIs, or foundations. What is most interesting about this push for transparency is that it is based on a model in which NGOs push for accountability from within, by voluntarily seeking accreditation.

Citizen-Led Movements

Third, until now, citizen-philanthropy-led social movements have gone unrecognized in their push to keep social change movements open, democratic, accountable, and issue-based. The report draws attention to self-funded activist movements, notably the Right to Information campaign and the Right to Work movement, that succeeded on the strength of public support rather than institutional philanthropy. This trend signals that philanthropy is least effective in aiding social change when it plays into unequal power relationships between givers and receivers. It is most effective when it is like a baton passed to wider communities, who take center stage in showing how giving, motivating, and direct action can push systemic changes. Despite increasing pressure on civil society, which is shrinking the space for communicating dissent against inequities and injustice, the report notes that many civil society organizations in every district and town of the country “have been able to mobilize and support citizens to claim access to their rights and to organize self-help efforts.”

These developments in India give a new meaning to transparency in philanthropy. They shift the focus away from compliance and toward the role of philanthropy and the methods it uses, and they place the agency and power of the people center stage in this conversation. While the report points to this culture shift, it also points to areas for improvement, particularly the need for donor education. Perhaps the agenda for donor education in India is best summed up by Pushpa Sundar in her book published earlier this year, Giving with a Thousand Hands: The Changing Face of Indian Philanthropy. She writes, “Philanthropy orientation has to change from ‘giving back’ to solving social problems.”

People are giving because they want to solve social problems through their own participation. It is time for them to get their due and for the field of institutional philanthropy to recognize that the real drivers of change are people.

--Sumitra Mishra and Chandrika Sahai

How "Going Public" Improves Evaluations
October 17, 2017

Edward Pauly is director of research and evaluation at The Wallace Foundation. This post is part of the Glasspockets #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

As foundations strive to be #OpenForGood and share key lessons from their grantees' work, a frequent question that arises is how foundations can balance the value of openness with concerns about potential risks.

Concerns about risk are particularly charged when it comes to evaluations. Those concerns include: possible reputational damage to grantees from a critical or less-than-positive evaluation; internal foundation staff disagreements with evaluators about the accomplishments and challenges of grantees they know well; and evaluators’ delays and complicated interpretations.

It therefore may seem counterintuitive to embrace – as The Wallace Foundation has – the idea of making evaluations public and distributing them widely. And one of the key reasons may be surprising: To get better and more useful evaluations.

The Wallace Foundation has found that high-quality evaluations – by which we mean independent, commissioned research that tackles questions that are important to the field – are often a powerful tool for improving policy and practice. We have also found that evaluations are notably improved in quality and utility by being publicly distributed.

Incentives for High Quality

A key reason is that the incentives of a public report for the author are aligned with quality in several ways:

  • Evaluation research teams know that when their reports are public and widely distributed, they will be closely scrutinized and their reputation is on the line. Therefore, they do their highest quality work when it’s public.  In our experience, non-public reports are more likely than public reports to be weak in data use, loose in their analysis, and even a bit sloppy in their writing.  It is also noteworthy that some of the best evaluation teams insist on publishing their reports.
  • Evaluators also recognize that they benefit from the visibility of their public reports because visibility brings them more research opportunities – but only if their work is excellent, accessible and useful.
  • We see evaluators perk up when they focus on the audience their reports will reach. Gathering data and writing for a broad audience of practitioners and policymakers incentivizes evaluators to seek out and carefully consider the concerns of the audience: What information does the audience need in order to judge the value of the project being evaluated? What evidence will the intended audience find useful? How should the evaluation report be written so it will be accessible to the audience?

Making evaluations public is a classic case of a virtuous circle: public scrutiny creates incentives for high quality, accessibility, and utility; high-quality reports lead to expanded, engaged audiences; and the circle turns again, as large audiences use evaluation lessons to strengthen their own work and demand more high-quality evaluations. To achieve these benefits, it is essential for grantmakers to communicate upfront and thoroughly with grantees about the goals of a public evaluation report: sharing lessons that can benefit the entire field, presented in a way that avoids any hint of punitive or harsh messaging.

“What is it that you don’t know, that if you knew it, would enable you to make important progress in your own work?”

Asking the Right Questions

A key difference between evaluations commissioned for internal use and evaluations designed to produce public reports for a broad audience lies in the questions they ask. Of course, for any evaluation or applied research project, a crucial precursor to success is getting the questions right. In many cases, internally-focused evaluations quite reasonably ask questions about the lessons for the foundation as a grantmaker. Evaluations for a broad audience of practitioners and policymakers, including the grantees themselves, typically ask a broader set of questions, often emphasizing lessons for the field on how an innovative program can be successfully implemented, what outcomes are likely, and what policies are likely to be supportive.

In shaping these efforts at Wallace as part of the overall design of initiatives, we have found that one of the most valuable initial steps is to ask field leaders: What is it that you don’t know, that if you knew it, would enable you to make important progress in your own work? This kind of listening can help a foundation get the questions right for an evaluation whose findings will be valued, and used, by field leaders and practitioners.

Knowledge at Work

For example, school district leaders interested in Wallace-supported “principal pipelines” that could help ensure a reliable supply of effective principals, wanted to know the costs of starting such pipelines and maintaining them over time. The result was a widely-used RAND report that we commissioned, “What It Takes to Operate and Maintain Principal Pipelines: Costs and Other Resources.” RAND found that costs are less than one half of 1% of districts’ expenditures; the report also explained what drives costs, and provided a very practical checklist of the components of a pipeline that readers can customize and adapt to meet their local needs.

Other high-quality public evaluations we have commissioned have similarly helped grantees and the field.

Being #OpenForGood does not happen overnight, and managing an evaluation planned for wide public distribution isn’t easy. The challenges start with getting the question right – and then selecting a high-performing evaluation team; allocating adequate resources for the evaluation; connecting the evaluators with grantees and obtaining relevant data; managing the inevitable and unpredictable bumps in the road; reviewing the draft report for accuracy and tone; allowing time for grantees to fact-check it; and preparing with grantees and the research team for the public release. Difficulties, like rocks on a path, crop up in each stage in the journey. Wallace has encountered all of these difficulties, and we don’t always navigate them successfully. (Delays are a persistent issue for us.)

Since we believe that the knowledge we produce is a public good, it follows that the payoff of publishing useful evaluation reports is worth it. Interest from the field is evidenced by 750,000 downloads last year and by a highly engaged public discourse about what works, what doesn’t, why, and how, rather than the silence that often greets internally focused evaluations.

--Edward Pauly

How To Keep Me Scrolling Through What You Are Sharing
August 10, 2017

Tom Kelly is Vice President of Knowledge, Evaluation & Learning at the Hawai‘i Community Foundation. He has been learning and evaluating in philanthropy since the beginning of the century. @TomEval

This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Hello, my name is Tom and I am a Subscriber. And a Tweeter, Follower, Forwarder (FYI!), Google Searcher, and DropBox Hoarder. I subscribe to blogs, feeds, e-newsletters, and email updates. My professional title includes the word “Knowledge,” so I feel compelled to make sure I am keeping track of the high volume of data, information, reports, and ideas flowing throughout the nonprofit and foundation worlds (yes, it is a bit of a compulsion…and I am not even including my favorite travel, shopping and coupon alerts).

It is a lot and I confess I do not read all of it. It is a form of meditation for me to scroll through emails and Twitter feeds while waiting in line at Aloha Salads. I skim, I save, I forward, I retweet – I copy and save for later reading (later when?). In fact, no one can be expected to keep up, so how does anyone make sense of it all, or even find what we need when we need it? Everyone being #OpenForGood and sharing everything is great, but who is reading it all? And how do we make what we are opening for good actually good?

Making Knowledge Usable

We have all experienced at some point Drowning in Information-Starving for Knowledge (John Naisbitt’s Megatrends…I prefer E.O. Wilson’s “starving for wisdom” theory). The information may be out there but rarely in a form that is easily found, read, understood, and most importantly used. Foundation Center and IssueLab have made it easier for people in the sector to know what is being funded, where new ideas are being tested, and what evidence and lessons are available. But nonprofits and foundations still have to upload and share many more of their documents than they do now. And we need to make sure that the information we share is readable, usable, and ready to be applied.


DataViz guru Stephanie Evergreen recently taught me a new hashtag: #TLDR – “Too Long, Didn’t Read.”

She now proposes that every published report be available in three formats – a one-page handout with key messages, a 3-page executive summary, and a 25-page report (plus appendices). In this way the “scanners,” “skimmers” and “deep divers” can access the information in the form they prefer and in the time they have. It also requires writing (and formatting) differently for each of these sets of eyes. (By the way, do you know which one you are?)

From Information to Influence

But it is not enough to make your reports accessible, searchable, and also easily readable in short and long forms; you also need to include the information people need to make decisions and act. It means deciding in advance who you want to inform and influence and what you want people to do with the information. You need to be clear about your purpose for sharing information, and you need to give people the right kinds of information if you expect them to read it, learn from it, and apply it.

“Give people the right kinds of information if you expect them to read it, learn from it, and apply it.”

Too many times I have read reports with promising findings or interesting lessons, and then I race through all the footnotes and the appendices at the back of the report looking for resources that could point me to the details of evidence and data or implementation guidance. I usually wind up trying to track down the authors by email or phone to follow up.

A 2005 study of more than 1,000 published human services evaluations found only 22 well-designed and well-documented reports that shared any analysis of implementation factors – the lessons people learned about how best to put the program or services in place. We cannot expect other people and organizations to share knowledge and learn if they cannot access information from others that helps them use that knowledge and apply it in their own programs and organizations. YES, I want to hear about your lessons and “a-ha’s,” but I also want to see data and analysis of the common challenges that all nonprofits and foundations face:

  • How to apply and adapt program and practice models in different contexts
  • How to sustain effective practices
  • How to scale successful efforts to more people and communities

This means making sure that your evaluations and your reports include opening up the challenges of implementation – the same challenges others are likely to face. It also means placing your findings in the context of existing learning while also using similar definitions so that we can build on each other’s knowledge. For example, in our recent middle school connectedness initiative, our evaluator Learning for Action reviewed the literature first to determine specific components and best practices of youth mentoring so that we could build the evaluation on what had come before, and then report clearly about what we learned about in-school mentoring and open up  useful and comparable knowledge to the field. 

So please plan ahead and define your knowledge sharing and influence agenda up front and consider the following guidelines:

  • Who needs to read your report?
  • What information does your report need to share to be useful and used?
  • Read and review similar studies and reports and determine in advance what additional knowledge is needed and what you will document and evaluate.
  • Use common definitions and program model frameworks so we are able to continually build on field knowledge and not create anew each time.
  • Pay attention to and evaluate implementation, replication and the management challenges (staffing, training, communication, adaptation) that others will face.
  • And disseminate widely and share at conferences, in journals, in your sector networks, and in IssueLab’s open repository.

And I will be very happy to read through your implementation lessons in your report’s footnotes and appendices next time I am in line for a salad.

--Tom Kelly

Foundations and Endowments: Smart People, Dumb Choices
August 3, 2017

(Marc Gunther writes about nonprofits, foundations, business, and sustainability. A version of this post also appears in Nonprofit Chronicles.)

This post is part of a Transparency Talk series, presented in partnership with the Conrad N. Hilton Foundation, examining the importance of the 990-PF, the informational tax form that foundations must annually file. The series will explore the implications of the open 990; how journalists and researchers use the 990-PF to understand philanthropy; and its role, limitations, and potential as a communications tool.

America’s foundations spend many millions of dollars every year on investment advice. In return, they get sub-par performance.

You read that right: Money that could be spent on charitable programs — to alleviate global poverty, help cure disease, improve education, support research or promote the arts — instead flows into the pockets of well-to-do investment advisors and asset managers who, as a group, generate returns on their endowment investments that are below average.

This is redistribution in the wrong direction, on a grand scale: foundation endowments hold about $800 billion in investments. The problem hasn’t attracted a lot of attention, but that could change as foundations’ IRS tax filings become open, digital, and searchable. That should create competitive pressure on foundation investment officers to do better, and lead foundation executives and trustees to rethink business-as-usual investing.

The latest evidence that they aren’t doing very well arrived recently with the news that two energy funds managed by EnerVest, a Houston-based private equity firm, are on the verge of going bust. Once worth $2 billion, the funds will leave investors “with, at most, pennies for every dollar they invested,” the Wall Street Journal reports. To add insult to injury, the funds in question were invested in oil and natural gas during 2012 and 2013, just as Bill McKibben and a handful of allies were urging institutional investors to divest from fossil fuels.

Foundations that invested in the failing EnerVest funds include the J. Paul Getty Trust, the John D. and Catherine T. MacArthur Foundation, and the California-based Fletcher Jones Foundation, according to their most recent IRS filings. Stranded assets, anyone?

“Endowed private foundations are unaccountable to anyone other than their own trustees.”

Of course, no investment strategy can prevent losses. But the collapse of the EnerVest funds points to a broader and deeper problem: most foundations entrust their endowments to investment offices and/or outside portfolio managers who pursue active, expensive investment strategies that, as a group, have underperformed the broader markets.

How costly has this underperformance been? That’s impossible to know, because most foundations do not disclose their investment returns. This, by itself, is troubling; it’s a reminder that endowed private foundations are unaccountable to anyone other than their own trustees.

On disclosure, there are signs of progress. The Ford Foundation says it intends to release its investment returns for the first time. A startup company called Foundation Financial Research is compiling data on endowments as well, which it intends to make available to foundation trustees and sell to asset managers.

What’s more, as the IRS Form 990s filed by foundations become machine readable, it will become easier for analysts, activists, journalists and other foundations to see exactly how billions of dollars of foundation assets are deployed, and how they are performing. Advocates for mission-based investment, or for hiring more women and people of color to manage foundation assets, are likely to shine a light on foundations whose endowments are underperforming.
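To see why machine-readable filings matter, consider what becomes possible once a filing is structured data rather than a scanned PDF: extracting investment figures across thousands of foundations is a few lines of scripting. Here is a minimal sketch in Python; note that the element names below are invented for illustration and do not match the actual IRS 990-PF e-file schema.

```python
import xml.etree.ElementTree as ET

# Illustrative filing snippet; a real 990-PF e-file uses a different,
# much larger XML schema.
FILING = """
<Return>
  <FoundationName>Example Foundation</FoundationName>
  <TotalInvestments>800000000</TotalInvestments>
  <NetInvestmentIncome>37600000</NetInvestmentIncome>
</Return>
"""

root = ET.fromstring(FILING)
assets = float(root.findtext("TotalInvestments"))
income = float(root.findtext("NetInvestmentIncome"))

# With structured data, a return figure is one division away --
# and trivially comparable across every filer in the dataset.
print(f"{root.findtext('FoundationName')}: {income / assets:.1%} net investment income")
```

Run over a full corpus of filings, the same loop would rank every foundation's investment performance side by side, which is exactly the kind of scrutiny the paragraph above anticipates.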

Unhappily, all indications are that most foundations are underperforming because they pursue costly, active investment strategies. This month, what is believed to be the most comprehensive annual survey of foundation endowment performance once again delivered discouraging news for the sector.

The 2016 Council on Foundations–Commonfund Study of Investment of Endowments for Private and Community Foundations® reported one-year, five-year, and 10-year returns for private foundations, and once again they trailed passive benchmarks.

The 10-year annual average return for private foundations was 4.7 percent, the study found. The five-year return was 7.6 percent. Those returns are net of fees — meaning that outside investment fees are taken into account — but they do not take into account the considerable salaries of investment officers at staffed foundations.

By comparison, Vanguard, the pioneering giant of passive investing, says a simple mix of index funds with 70 percent in stocks and 30 percent in fixed-income assets delivered an annualized return of 5.4 percent over the past 10 years. The five-year return was 9.1 percent.

These differences add up in a hurry.
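Just how much they add up can be shown with a back-of-the-envelope compounding sketch in Python. The $1 billion starting endowment is illustrative; the two return figures are the 10-year averages cited above.

```python
def grow(principal, annual_return, years):
    """Compound a starting value at a fixed annual return."""
    return principal * (1 + annual_return) ** years

endowment = 1_000_000_000           # hypothetical $1 billion endowment
active = grow(endowment, 0.047, 10)   # private-foundation 10-year average
passive = grow(endowment, 0.054, 10)  # Vanguard 70/30 index-fund mix

# A 0.7-point annual gap compounds into nine figures over a decade.
print(f"10-year shortfall: ${passive - active:,.0f}")
```

On these assumptions the passive portfolio ends the decade more than $100 million ahead, money that would otherwise have been available for grantmaking.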

Warnings, Ignored

The underperformance of foundation endowments is not a surprise. In a Financial Times essay called The end of active investing? that should be read by every foundation trustee, Charles D. Ellis, who formerly chaired the investment committee at Yale, wrote:

“Over 10 years, 83 per cent of active funds in the US fail to match their chosen benchmarks; 40 per cent stumble so badly that they are terminated before the 10-year period is completed and 64 per cent of funds drift away from their originally declared style of investing. These seriously disappointing records would not be at all acceptable if produced by any other industry.”

The performance of hedge funds, private-equity funds and venture capital has trended downwards as institutional investors flocked into those markets, chasing returns. Notable investors including Warren Buffett, Jack Bogle (who as Vanguard’s founder has a vested interest in passive investing), David Swensen, Yale’s longtime chief investment officer, and Charles Ellis have all argued for years that most investors–even institutional investors–should simply diversify their portfolios, pursue passive strategies and keep their investing costs low.

In his most recent letter to investors in Berkshire Hathaway, Buffett wrote:

“When trillions of dollars are managed by Wall Streeters charging high fees, it will usually be the managers who reap outsized profits, not the clients. Both large and small investors should stick with low-cost index funds.”

For more from Buffett about why passive investing makes sense, see my March blog post, Warren Buffett has some excellent advice for foundations that they probably won’t take. Recently, Freakonomics released an excellent podcast on the topic, titled The Stupidest Thing You Can Do With Your Money.

That said, the debate between active and passive asset managers remains unsettled. While index funds have outperformed actively managed portfolios over the last decade, Cambridge Associates, a big investment firm that builds customized portfolios for institutional investors and private clients, published a study last spring arguing that this past decade is an anomaly. Cambridge Associates found that since 1990, fully diversified (i.e., actively managed) portfolios have underperformed a simple 70/30 stock/bond portfolio in only two periods: 1995–99 and 2009–2016. To no one’s surprise, Cambridge says: “We continue to find investments in private equity and hedge funds that we believe have an ability to add value to portfolios over the long term.” Portfolio managers are also sure to argue that their expertise and connections enable them to beat market indices.

But where is the evidence? To the best of my knowledge, seven of the U.S.’s 10 biggest foundations decline to disclose their investment returns. I emailed or called the Getty, MacArthur and Fletcher Jones foundations to ask about their investments in EnerVest; all three declined to discuss individual investments.

To its credit, MacArthur does disclose the investment performance of its $6.3 billion endowment. On the other hand, MacArthur has an extensive grantmaking program supporting “conservation and sustainable development.” Why is it financing oil and gas assets?

Ultimately, foundation boards are responsible for overseeing the investment of their endowments. Why don’t they do a better job of it? Maybe it’s because many foundation trustees — particularly those who oversee the investment committees — come out of Wall Street, private equity funds, hedge funds and venture capital. They are the so-called experts, and they have built successful careers by managing other people’s money. It’s not easy for the other board members, who may be academics, activists, lawyers or politicians, to question their expertise. But that’s what they need to do.

And, at the very least, foundations ought to be open about how their endowments are performing so those who manage their billions of dollars can be held accountable.

--Marc Gunther

What Will You #OpenForGood?
July 13, 2017

Janet Camarena is director of transparency initiatives at Foundation Center.  This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Janet Camarena Photo

This week, Foundation Center is launching our new #OpenForGood campaign, designed to encourage better knowledge sharing practices among foundations. Three Foundation Center services—Glasspockets, IssueLab, and GrantCraft—are leveraging their platforms to advance the idea that philanthropy can best live up to its promise of serving the public good by openly and consistently sharing what it’s learning from its work. Glasspockets is featuring advice and insights from “knowledge sharing champions” in philanthropy in an ongoing #OpenForGood blog series; IssueLab has launched a special Results platform allowing users to learn from a collective knowledge base of foundation evaluations; and a GrantCraft guide on open knowledge practices is in development.

Although this campaign is focused on helping and inspiring foundations to use new and emerging technologies to learn collectively, it is also, in some ways, rooted in Foundation Center’s own origin story.


A Short History

Sixty years ago, Foundation Center was established to provide transparency for a field in jeopardy of losing its philanthropic freedom: McCarthy Era accusations had gained traction in the absence of any openness whatsoever about foundation priorities, activities, or processes. Not one but two congressional commissions were formed to investigate foundations for alleged “un-American activities.” These congressional inquiries, which spanned several years during the 1950s, led to the creation of Foundation Center to bring transparency to a field that had nearly lost everything due to its opacity.

“The solution and call to action here is actually a simple one – if you learn something, share something.”

I know our Transparency Talk audience is most likely familiar with this story since the Glasspockets name stems from this history when Carnegie Corporation Chair Russell Leffingwell said, “The foundation should have glass pockets…” during his congressional testimony, describing a vision for a field that would be so open as to allow anyone to have a look inside the workings and activities of philanthropy.  But it seems important to repeat that story now in the context of new technologies that can facilitate greater openness.

Working Collectively Smarter

Now that we live in a time when most of us walk around with literal glass in our pockets, using these devices to connect to the outside world, it is surprising that only 10% of foundations have a website, which means 90% of the field can’t even be discovered by the outside world. But having websites would really just bring foundations into the latter days of the 20th century; #OpenForGood aims to bring them into the present day by encouraging foundations to openly share their knowledge in the name of working collectively smarter.

What if you could know what others know, rather than constantly replicating experiments and pilots that have already been tried and tested elsewhere? Sadly, the common practice of foundations keeping knowledge in file cabinets or on hard drives only a few can access means that there are no such shortcuts. The solution and call to action here is actually a simple one—if you learn something, share something.

In foundations, learning typically takes the form of evaluation and monitoring, so we are specifically asking foundations to upload all of your published reports from 2015 and 2016 to the new IssueLab: Results platform, so that anyone can build on the lessons you’ve learned, whether inside or outside of your networks. Foundations that upload their published evaluations will receive an #OpenForGood badge to demonstrate their commitment to creating a community of shared learning.

Calls to Action

But #OpenForGood foundations don’t just share evaluations, they also:

  • Open themselves to ideas and lessons learned by others, searching shared repositories like those at IssueLab as part of their own research process;
  • Use Glasspockets to compare their foundation's transparency practices to their peers', add their profile, and help encourage openness by sharing their experiences and experiments with transparency here on Transparency Talk; and
  • Use GrantCraft to hear what their colleagues have to say, then add their voice to the conversation. If they have an insight, they share it!

Share Your Photos

“#OpenForGood foundations share their images with us so we can show the collective power of philanthropic openness, not just in words, but images. ”

And finally, #OpenForGood foundations share their images with us so we can show the collective power of philanthropic openness, not just in words, but images.  We would like to evolve the #OpenForGood campaign over time to become a powerful and meaningful way for foundations to open up your work and impact a broader audience than you could reach on your own. Any campaign about openness and transparency should, after all, use real images rather than staged or stock photography. 

So, we invite you to share any high resolution photographs that feature the various dimensions of your foundation's work.  Ideally, we would like to capture images of the good you are doing out in the world, outside of the four walls of your foundation, and of course, we would give appropriate credit to participating foundations and your photographers.  The kinds of images we are seeking include people collaborating in teams, open landscapes, and images that convey the story of your work and who benefits. Let us know if you have images to share that may now benefit from this extended reach and openness framing by contacting

What will you #OpenForGood?

--Janet Camarena

Why Evaluations Are Worth Reading – or Not
June 14, 2017

Rebekah Levin is the Director of Evaluation and Learning for the Robert R. McCormick Foundation, guiding the Foundation in evaluating the impact of its philanthropic giving and its involvement in community issues. She is working both with the Foundation’s grantmaking programs, and also with the parks, gardens, and museums at Cantigny Park. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

Truth in lending statement: I am an evaluator. I believe strongly in the power of excellent evaluations to inform, guide, support, and assess programs, strategies, initiatives, organizations, and movements. I have directed programs that were redesigned, based on evaluation data, to increase their effectiveness, cultural appropriateness, and impact; helped design and implement evaluation initiatives here at the Foundation that changed the way we understand and do our work; and worked with many foundation colleagues and nonprofits to find ways to make evaluation serve their needs for understanding and improvement.

“I believe strongly in the power of excellent evaluations."

One of the strongest examples I’ve seen of excellent evaluation within philanthropy came with a child abuse prevention and treatment project. Our foundation funded almost 30 organizations that were using 37 different tools to measure treatment impact, many of which were culturally inappropriate, designed only for initial screenings, or unsuitable for a host of other reasons, and staff from organizations running similar programs had conflicting views about the tools. Foundation program staff wanted to be able to compare program outcomes using uniform evaluation tools and to use that data to make funding, policy, and program recommendations, but they were at a loss as to how to do so in a way that honored the grantees’ knowledge and experience. A new evaluation initiative was funded, creating a "community of practice" for the nonprofits and the foundation together to:

  • create a unified set of reporting tools;
  • learn together from the data about how to improve program design and implementation, and the systematic use of data to support staff/program effectiveness;
  • develop a new rubric which the foundation would use to assess programs and proposals; and
  • provide evaluation coaching for all organizations participating in the initiative.

The evaluation initiative was so successful that the participating nonprofits decided to continue their work together beyond the initial scope of the project to improve their own programs and better support the children and families they serve. This “Unified Project Outcomes” article describes the project and the processes it established in far greater detail.

But I have also seen and been a part of evaluations where:

  • the methodology was flawed or weak;
  • the input data were inaccurate and full of gaps;
  • there was limited understanding of the context of the organization;
  • there was no input from relevant participants; and
  • there was no thought to the use of the data/analysis;

so that little to no value came out of them, and the learning that took place as a result was equally inconsequential.

So now to those evaluation reports that often come at the end of a project or foundation initiative, and sometimes in interim and smaller versions throughout its life span. Except for a program officer who has to report to their director about how a contract or foundation strategy was implemented, the changes from the plan that occurred, and the value or impact of an investment or initiative, should anyone bother reading them? From my perch, the answer is a big “Maybe.” What does it take for an evaluation report to be worth my time to read, given the stack of other things sitting here on my desk that I am trying to carve out time for? A lot.

  1. It has to be an evaluation and not a PR piece. Too often, "evaluation" reports provide a cleaned up version of what really occurred in a program, with none of the information about how and why an initiative or organization functioned as it did, and the data all point to its success.  This is not to say that initiatives/organizations can’t be successful.  But no project or organization works perfectly, and if I don’t see critical concerns/problems/caveats identified, my guess is that I’m not getting the whole story, and its value to me drops precipitously.
  2. It has to provide relevant context. Reading an evaluation of a multi-organizational collaboration in Illinois without placing its fiscal challenges within the context of our state’s ongoing budget crisis, or of a university-sponsored community-based educational program without knowing the long history of mistrust between the school and the community, or without any of the other relevant and critical contextual pieces that affect a program, initiative, or organization, makes that evaluation of little value. Placing findings within a nuanced set of circumstances significantly improves the possibility that the knowledge is transferable to other settings.
  3. It has to be clear and as detailed as possible about the populations that it is serving. Too often, I read evaluations that leave out critical information about who they were targeting and who participated or was served. 
  4. The evaluation’s methodology must be described with sufficient detail so that I have confidence that it used an appropriate and skillful approach to its design and implementation as well as the analysis of the data. I also pay great attention to what extent those who were the focus of the evaluation participated in the evaluation’s design, the questions being addressed, the methodology being used, and the analysis of the data.
  5. And finally, in order to get read, the evaluation has to be something I know exists, or something I can easily find. If it exists in a repository like IssueLab, my chances of finding it increase significantly.  After all, even if it’s good, it is even better if it is #OpenForGood for others, like me, to learn from it.

When these conditions are met, the answer to the question, “Are evaluations worth reading?” is an unequivocal “YES!,” if you value learning from others’ experiences and using that knowledge to inform and guide your own work.

--Rebekah Levin

The Real World is Messy. How Do You Know Your Foundation Is Making an Impact?
June 7, 2017

Aaron Lester is an experienced writer and editor in the nonprofit space. In his role as content marketing manager at Fluxx, Aaron’s goal is to collect and share meaningful stories from the world of philanthropy. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

In a perfect world, foundations could learn from every mistake, build on every new piece of knowledge, and know with certainty what impact every effort has made.

Of course, we’re not in that world. We’re in the real, fast-paced world of nonprofits where messy human needs and unpredictable natural and political forces necessitate a more flexible course. In that world, it’s more challenging to measure the effects of our grantmaking efforts and learn from them. It turns out knowledge sharing is a tough nut to crack.

And without meaningful knowledge sharing, we’re left struggling to understand the philanthropic sector’s true impact — positive or negative — within a single organization or across many. The solution is a more transparent sector that is willing to share data — quantitative as well as qualitative — that tells stories of wins and losses, successes and failures—in other words, a sector that is #OpenForGood. But, of course, this is much easier said than done.

My role at Fluxx creates many opportunities for me to talk with others in the field and share stories the philanthropic sector can learn from. I recently had the chance to speak with grantmakers on this very issue.

Measuring Whose Success?

Even within a foundation, it can be difficult to truly understand the impact of a grant or other social investment.

“Lose the mindset defined by a fear of failure; instead, embrace one that drives you to search for opportunity.”

As Adriana Jiménez, director of grants management at the ASPCA and former grants manager at the Surdna Foundation, explains, it’s difficult for foundations to prove conclusively that it’s their slice of the grantmaking that has made a meaningful difference in the community. “When you collect grant-by-grant data, it doesn’t always roll up to your foundation’s goals or even your grant’s goals.”

The issue is that there’s no standardized way to measure grantmaking data, and it’s an inherently difficult task because there are different levels of assessment (grant, cluster, program, foundation, etc.), there is similar work being done in different contexts, and a lot of data is only available in narrative form.

One way to combat these challenges is to make sure your foundation is transparent and in agreement around shared goals with grantees from the start of the relationship. Being too prescriptive or attempting to standardize the way your grantees work will never create the results you’re after. Part of this early alignment includes developing clear, measurable goals together and addressing how the knowledge you’re gaining can and should translate into improvements in performance.

A grantee should never have to alter their goals or objectives just to receive funding. That sends the wrong message, and it provides the wrong incentive for grantees to participate in knowledge-sharing activities. But when you work as partners from the start and provide space for grantees to collaborate on strategy, a stronger partnership will form, and the stories your data tells will begin to be much more meaningful.

The Many Languages of Human Kindness

If sharing knowledge is difficult within one organization, it’s even more challenging across organizations.

Jiménez points out that a major challenge is the complexity of foundations, as they rely on different taxonomies and technologies and discuss similar issues using different language. Every foundation’s uniqueness is, in its day-to-day work, its strength, but in terms of big-picture learning across organizations, it’s a hurdle.

Producing cohesive, comprehensive data out of diverse, fragmented information across multiple organizations is a huge challenge. Mining the information and tracking it in an ongoing way is another obstacle made more difficult because the results are often more anecdotal than they are purely quantitative. And when this information is spread out over so many regions and focus areas, the types of interventions vary so widely that meaningful knowledge sharing becomes untenable.

Gwyneth Tripp, grants manager at Blue Shield of California Foundation, also cites a capacity issue. Most foundations don’t have designated roles for gathering, tracking, organizing, and exchanging shareable data, so they resort to asking staff who already have their own sizable to-do lists. Tripp says:

“They have an interest and a desire [in knowledge sharing], but also a real challenge of balancing the everyday needs, the strategic goals, the relationships with grantees, and then adding that layer of ‘let’s learn and think about it all’ is really tough to get in.

“Also, becoming more transparent about the way you work, including sharing successes as well as failures, can open your foundation up to scrutiny. This can be uncomfortable. But it’s important to delineate between ‘failure’ and ‘opportunity to learn and improve.’”

Sparking Change

But foundations know (possibly better than anyone else) that obstacles don’t make accomplishing a goal impossible.

And this goal’s rewards are great: When foundations can achieve effective knowledge sharing, they’ll have better insights into what other funding is available for the grantees within the issues they are tackling, who is being supported, which experiments are worth replicating, and where there are both gaps and opportunities. And with those insights, foundations gain the ability to iterate and improve upon their operations, even leading to stronger, more strategic collaborations and partnerships.

Creating and promoting this kind of accessible, useful knowledge sharing starts with a few steps:

  1. Begin from within. Tracking the impact of your grantmaking efforts and sharing those findings with the rest of the sector requires organizations to look internally first. Start by building a knowledge management implementation plan that involves every stakeholder, from internal teams to grantee partners to board executives.
  2. Determine and prioritize technology needs. Improvements in technology — specifically cloud-based technology — are part of what’s driving the demand for data on philanthropic impact in the first place. Your grants management system needs to provide integrated efficiency and accessibility if you want to motivate staff participation and generate usable insights from the data you’re collecting. Is your software streamlining your efforts, or is it only complicating them?
  3. Change your mindset. Knowledge sharing can be intimidating, but it doesn’t have to be. Lose the mindset defined by a fear of failure; instead, embrace one that drives you to search for opportunity. Promote a stronger culture of knowledge sharing across the sector by sharing your organizational practices and lessons learned. Uncover opportunities to collect data and share information across organizations.

There’s no denying that knowledge sharing benefits foundations everywhere, along with the programs they fund. Don’t let the challenges hold you back from aiming for educational, shareable data — you have too much to gain not to pursue that goal.  What will you #OpenForGood?

--Aaron Lester 

Transparency Talk Welcomes Arcus Foundation to Glasspockets
March 29, 2017

(Melissa Moy is special projects associate for Glasspockets.) 

We are pleased to welcome the Arcus Foundation to our community of foundations that have publicly committed to working transparently. By taking and sharing the “Who Has Glass Pockets?” (WHGP) self-assessment, Arcus is contributing to a growing collection of profiles that serve as a knowledge bank and transparency benchmarking mechanism.

Arcus, with offices in New York and Cambridge, United Kingdom, advocates for global human rights and conservation movements: “Together, we learn from each other and take bold risks on groundbreaking ideas that drive progress toward a future of respect and dignity for all.”

“We strive to apply a high level of transparency in our operations and in our relationships with grantees, partners and other stakeholders.”

This month, Arcus became the 87th foundation to join WHGP.  As a way of welcoming Arcus to the Glasspockets community, we’d like to highlight some of the ways in which this foundation openly shares its environmental and social justice work.

First, Arcus has pledged a rare commitment to openness in its transparency statement that is part of the website’s introduction to Arcus’ work.

The foundation uses its website to explain its grantmaking process, share expectations for grantees, and offer a searchable grantee map and database. A short video invites and informs prospective grant applicants.

Arcus also lives up to its transparency statement by opening up its knowledge via grantee impact stories, reports, and a foundation blog. Additionally, the foundation discloses more than a decade of its financial information.

Enjoy exploring the work that Arcus is doing for social justice and the environment.  Perhaps it will inspire your foundation to become #88!  Does your foundation have glass pockets?  Find out

 --Melissa Moy

About Transparency Talk

  • Transparency Talk, the Glasspockets blog, is a platform for candid and constructive conversation about foundation transparency and accountability. In this space, Foundation Center highlights strategies, findings, and best practices on the web and in foundations–illuminating the importance of having "glass pockets."

    The views expressed in this blog do not necessarily reflect the views of the Foundation Center.

    Questions and comments may be directed to:

    Janet Camarena
    Director, Transparency Initiatives
    Foundation Center
