Transparency Talk


How To Keep Me Scrolling Through What You Are Sharing
August 10, 2017

Tom Kelly is Vice President of Knowledge, Evaluation & Learning at the Hawai‘i Community Foundation. He has been learning and evaluating in philanthropy since the beginning of the century. @TomEval  TomEval.com

This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Hello, my name is Tom and I am a Subscriber. And a Tweeter, Follower, Forwarder (FYI!), Google Searcher, and DropBox Hoarder. I subscribe to blogs, feeds, e-newsletters, and email updates. My professional title includes the word “Knowledge,” so I feel compelled to make sure I am keeping track of the high volume of data, information, reports, and ideas flowing throughout the nonprofit and foundation worlds (yes, it is a bit of a compulsion…and I am not even including my favorite travel, shopping and coupon alerts).

It is a lot and I confess I do not read all of it. It is a form of meditation for me to scroll through emails and Twitter feeds while waiting in line at Aloha Salads. I skim, I save, I forward, I retweet – I copy and save for later reading (later when?). In fact, no one can be expected to keep up, so how does anyone make sense of it all, or even find what we need when we need it? Everyone being #OpenForGood and sharing everything is great, but who is reading it all? And how do we make what we are opening for good actually good?

Making Knowledge Usable

We have all experienced at some point Drowning in Information-Starving for Knowledge (John Naisbitt’s Megatrends…I prefer E.O. Wilson’s “starving for wisdom” theory). The information may be out there but rarely in a form that is easily found, read, understood, and most importantly used. Foundation Center and IssueLab have made it easier for people in the sector to know what is being funded, where new ideas are being tested, and what evidence and lessons are available. But nonprofits and foundations still have to upload and share many more of their documents than they do now. And we need to make sure that the information we share is readable, usable, and ready to be applied.

Hawaii Community Foundation Graphic

DataViz guru Stephanie Evergreen recently taught me a new hashtag: #TLDR – “Too Long, Didn’t Read.”

She now proposes that every published report be available in three formats – a one-page handout with key messages, a 3-page executive summary, and a 25-page report (plus appendices). In this way the “scanners,” “skimmers” and “deep divers” can access the information in the form they prefer and in the time they have. It also requires writing (and formatting) differently for each of these sets of eyes. (By the way, do you know which one you are?)

From Information to Influence

But it is not enough to make your reports accessible, searchable, and also easily readable in short and long forms; you also need to include the information people need to make decisions and act. It means deciding in advance who you want to inform and influence and what you want people to do with the information. You need to be clear about your purpose for sharing information, and you need to give people the right kinds of information if you expect them to read it, learn from it, and apply it.

“Give people the right kinds of information if you expect them to read it, learn from it, and apply it.”

Too many times I have read reports with promising findings or interesting lessons, and then I race through all the footnotes and the appendices at the back of the report looking for resources that could point me to the details of evidence and data or implementation guidance. I usually wind up trying to track down the authors by email or phone to follow up.

A 2005 study of more than 1,000 evaluations published in human services found only 22 well-designed and well-documented reports that shared any analysis of implementation factors – what lessons people learned about how best to put the program or services in place. We cannot expect other people and organizations to share knowledge and learn if they cannot access information from others that helps them use the knowledge and apply it in their own programs and organizations. YES, I want to hear about your lessons and “a-ha’s,” but I also want to see data and analysis of the common challenges that all nonprofits and foundations face:

  • How to apply and adapt program and practice models in different contexts
  • How to sustain effective practices
  • How to scale successful efforts to more people and communities

This means making sure that your evaluations and your reports include opening up the challenges of implementation – the same challenges others are likely to face. It also means placing your findings in the context of existing learning while also using similar definitions so that we can build on each other’s knowledge. For example, in our recent middle school connectedness initiative, our evaluator Learning for Action reviewed the literature first to determine specific components and best practices of youth mentoring so that we could build the evaluation on what had come before, and then report clearly about what we learned about in-school mentoring and open up useful and comparable knowledge to the field.

So please plan ahead and define your knowledge sharing and influence agenda up front and consider the following guidelines:

  • Who needs to read your report?
  • What information does your report need to share to be useful and used?
  • Read and review similar studies and reports and determine in advance what additional knowledge is needed and what you will document and evaluate.
  • Use common definitions and program model frameworks so we are able to continually build on field knowledge and not create anew each time.
  • Pay attention to and evaluate implementation, replication and the management challenges (staffing, training, communication, adaptation) that others will face.
  • And disseminate widely and share at conferences, in journals, in your sector networks, and in IssueLab’s open repository.

And I will be very happy to read through your implementation lessons in your report’s footnotes and appendices next time I am in line for a salad.

--Tom Kelly

Foundations and Endowments: Smart People, Dumb Choices
August 3, 2017

(Marc Gunther writes about nonprofits, foundations, business and sustainability. A version of this post also appears on his blog, Nonprofit Chronicles (NonprofitChronicles.com).)

This post is part of a new Transparency Talk series devoted to putting the spotlight on the importance of the 990-PF, the informational tax form that foundations must annually file.  The series will explore the implications of the open 990; how journalists and researchers use the 990-PF to understand philanthropy; and its role, limitations, and potential as a communications tool.

America’s foundations spend many millions of dollars every year on investment advice. In return, they get sub-par performance.

You read that right: Money that could be spent on charitable programs — to alleviate global poverty, help cure disease, improve education, support research or promote the arts — instead flows into the pockets of well-to-do investment advisors and asset managers who, as a group, generate returns on their endowment investments that are below average.

This is redistribution in the wrong direction, on a grand scale: Foundation endowments hold about $800 billion in investments. It hasn’t attracted a lot of attention, but that could change as foundations make their IRS tax filings open, digital and searchable. That should create competitive pressure on foundation investment officers to do better, and on foundation executives and trustees to rethink business-as-usual investing.

The latest evidence that they aren’t doing very well arrived recently with the news that two energy funds managed by a Houston-based private equity firm called EnerVest are on the verge of going bust. Once worth $2 billion, the funds will leave investors “with, at most, pennies for every dollar they invested,” the Wall Street Journal reports. To add insult to injury, the funds in question were invested in oil and natural gas during 2012 and 2013, just as Bill McKibben, 350.org and a handful of their allies were urging institutional investors to divest from fossil fuels.

Foundations that invested in the failing EnerVest funds include the J. Paul Getty Trust, the John D. and Catherine T. MacArthur Foundation and the California-based Fletcher Jones Foundation, according to their most recent IRS filings. Stranded assets, anyone?

“Endowed private foundations are unaccountable to anyone other than their own trustees.”

Of course, no investment strategy can prevent losses. But the collapse of the EnerVest funds points to a broader and deeper problem: most foundations entrust their endowments to investment offices and/or outside portfolio managers who pursue active and expensive investment strategies that, as a group, have underperformed the broader markets.

How costly has this underperformance been? That’s impossible to know because most foundations do not disclose their investment returns. This, by itself, is troubling; it’s a reminder that endowed private foundations are unaccountable to anyone other than their own trustees.

On disclosure, there are signs of progress. The Ford Foundation says it intends to release its investment returns for the first time. A startup company called Foundation Financial Research is compiling data on endowments as well, which it intends to make available to foundation trustees and sell to asset managers.

What’s more, as the IRS Form 990s filed by foundations become machine readable, it will become easier for analysts, activists, journalists and other foundations to see exactly how billions of dollars of foundations assets are deployed, and how they are performing. Advocates for mission-based investment, or for hiring more women and people of color to manage foundation assets are likely to shine a light on foundations whose endowments that are underperforming.

Unhappily, all indications are that most foundations are underperforming because they pursue costly, active investment strategies. This month, what is believed to be the most comprehensive annual survey of foundation endowment performance once again delivered discouraging news for the sector.

The 2016 Council on Foundations–Commonfund Study of Investment of Endowments for Private and Community Foundations® reported on one-year, five-year and 10-year returns for private foundations, which again trailed passive benchmarks.

The 10-year annual average return for private foundations was 4.7 percent, the study found. The five-year return was 7.6 percent. Those returns are net of fees — meaning that outside investment fees are taken into account — but they do not take into account the considerable salaries of investment officers at staffed foundations.

By comparison, Vanguard, the pioneering giant of passive investing, says a simple mix of index funds with 70 percent in stocks and 30 percent in fixed-income assets delivered an annualized return of 5.4 percent over the past 10 years. The five-year return was 9.1 percent.

These differences add up in a hurry.
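To see just how much, here is a quick back-of-the-envelope calculation in Python. It is purely illustrative: the $100 million starting balance is hypothetical, and it assumes the quoted 10-year average returns (4.7 percent versus 5.4 percent) were earned steadily every year.

```python
# Illustrative compounding of the two 10-year average annual returns quoted
# above. The $100M starting balance and the steady-rate assumption are
# simplifications for the sake of the example.

def grow(principal: float, annual_return: float, years: int) -> float:
    """Compound a starting balance at a constant annual rate."""
    return principal * (1 + annual_return) ** years

principal = 100_000_000  # hypothetical $100M endowment

foundation = grow(principal, 0.047, 10)  # average private foundation, net of fees
passive = grow(principal, 0.054, 10)     # simple 70/30 passive index mix

print(f"Active (4.7%):  ${foundation:,.0f}")
print(f"Passive (5.4%): ${passive:,.0f}")
print(f"Gap:            ${passive - foundation:,.0f}")
```

On those assumptions, the gap works out to roughly $11 million per $100 million of endowment over a decade, and that is before counting the salaries of in-house investment staff that the survey's figures exclude.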

Warnings, Ignored

The underperformance of foundation endowments is not a surprise. In a Financial Times essay called “The end of active investing?” that should be read by every foundation trustee, Charles D. Ellis, who formerly chaired the investment committee at Yale, wrote:

“Over 10 years, 83 per cent of active funds in the US fail to match their chosen benchmarks; 40 per cent stumble so badly that they are terminated before the 10-year period is completed and 64 per cent of funds drift away from their originally declared style of investing. These seriously disappointing records would not be at all acceptable if produced by any other industry.”

The performance of hedge funds, private-equity funds and venture capital has trended downwards as institutional investors flocked into those markets, chasing returns. Notable investors including Warren Buffett, Jack Bogle (who as Vanguard’s founder has a vested interest in passive investing), David Swensen, Yale’s longtime chief investment officer, and Charles Ellis have all argued for years that most investors–even institutional investors–should simply diversify their portfolios, pursue passive strategies and keep their investing costs low.

In his most recent letter to investors in Berkshire Hathaway, Buffett wrote:

“When trillions of dollars are managed by Wall Streeters charging high fees, it will usually be the managers who reap outsized profits, not the clients. Both large and small investors should stick with low-cost index funds.”

For more from Buffett about why passive investing makes sense, see my March blogpost, Warren Buffett has some excellent advice for foundations that they probably won’t take. Recently, Freakonomics did an excellent podcast on the topic, titled The Stupidest Thing You Can Do With Your Money.

That said, the debate between active and passive asset managers remains unsettled. While index funds have outperformed actively-managed portfolios over the last decade, Cambridge Associates, a big investment firm that builds customized portfolios for institutional investors and private clients, published a study last spring saying that this past decade is an anomaly. Cambridge Associates found that since 1990, fully diversified (i.e., actively managed) portfolios have underperformed a simple 70/30 stock/bond portfolio in only two periods: 1995–99 and 2009–2016. To no one’s surprise, Cambridge says: “We continue to find investments in private equity and hedge funds that we believe have an ability to add value to portfolios over the long term.” Portfolio managers are also sure to argue that their expertise and connections enable them to beat market indices.

But where is the evidence? To the best of my knowledge, seven of the U.S.’s 10 biggest foundations decline to disclose their investment returns. I emailed or called the Getty, MacArthur and Fletcher Jones foundations to ask about their investments in EnerVest and was told that they do not discuss individual investments.

To its credit, MacArthur does disclose the investment performance of its $6.3 billion endowment. On the other hand, MacArthur has an extensive grantmaking program supporting “conservation and sustainable development.” Why is it financing oil and gas assets?

Ultimately, foundation boards are responsible for overseeing the investment of their endowments. Why don’t they do a better job of it? Maybe it’s because many foundation trustees — particularly those who oversee the investment committees — come out of Wall Street, private equity funds, hedge funds and venture capital. They are the so-called experts, and they have built successful careers by managing other people’s money. It’s not easy for the other board members, who may be academics, activists, lawyers or politicians, to question their expertise. But that’s what they need to do.

And, at the very least, foundations ought to be open about how their endowments are performing so those who manage their billions of dollars can be held accountable.

--Marc Gunther

How Improved Evaluation Sharing Has the Potential to Strengthen a Foundation’s Work
July 27, 2017

Jennifer Glickman is manager, research team, at the Center for Effective Philanthropy. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Philanthropy is a complex, demanding field, and many foundations are limited in the amount of resources they can dedicate to obtaining and sharing knowledge about their practices. In what areas, then, should foundations focus their learning and sharing efforts to be #OpenForGood?

Last year, the Center for Effective Philanthropy (CEP) released two research reports exploring this question. The first, Sharing What Matters: Foundation Transparency, looks at foundation CEOs’ perspectives on what it means to be transparent, who the primary audiences are for foundations’ transparency efforts, and what is most important for foundations to share.

The second report, Benchmarking Foundation Evaluation Practices, presents benchmarking data collected from senior foundation staff with evaluation responsibilities on topics such as evaluation staffing and structures, investment in evaluation work, and the usefulness of evaluation information. Together, these reports provide meaningful insights into how foundations can learn and share knowledge most effectively.

CEP’s research found that there are specific topics about which foundation CEOs believe being transparent could potentially increase their foundation’s ability to be effective. These areas include the foundation’s grantmaking processes, its goals and strategies, how it assesses its performance, and the foundation’s experiences with what has and has not worked in its efforts to achieve its programmatic goals. While foundation CEOs believe their foundations are doing well in sharing information about their grantmaking, goals, and strategies, they say their foundations are much less transparent about the lessons they learn through their work.

CEP Transparency Graphic

For example, nearly 70 percent of the CEOs CEP surveyed say being transparent about their foundation’s experiences with what has worked in its efforts to achieve its programmatic goals could increase effectiveness to a significant extent. In contrast, only 46 percent say their foundations are very or extremely transparent about these experiences. Even fewer, 31 percent, say their foundations are very or extremely transparent about what has not worked in their programmatic efforts, despite 60 percent believing that being transparent about this topic could potentially increase their effectiveness to a significant extent.

And yet, foundations want this information about lessons learned and think it is important. Three-quarters of foundation CEOs say they often seek out opportunities to learn from other foundations’ work, and they believe that sharing openly is what enables others to learn from foundation work more generally.

How is knowledge being shared then? According to our evaluation research, foundations are mostly sharing their programmatic knowledge internally. Over three-quarters of the evaluation staff who responded to our survey say evaluation findings are shared quite a bit or a lot with the foundation’s CEO, and 66 percent say findings are shared quite a bit or a lot with foundation staff. In comparison:

  • Only 28 percent of respondents say evaluation findings are shared quite a bit or a lot with the foundation’s grantees;
  • 17 percent say findings are shared quite a bit or a lot with other foundations; and
  • Only 14 percent say findings are shared quite a bit or a lot with the general public.

CEP Evaluation Survey Graphic

In fact, less than 10 percent of respondents say that disseminating evaluation findings externally is a top priority for their role.

But respondents do not think these numbers are adequate. Nearly three-quarters of respondents say their foundation invests too little in disseminating evaluation findings externally. Moreover, when CEP asked respondents what they hope will have changed for foundations in the collection and/or use of evaluation information in five years, one of the top three changes mentioned was that foundations will be more transparent about their evaluations and share what they are learning externally.

So, if foundation CEOs believe that being transparent about what their foundation is learning could increase its effectiveness, and foundation evaluation staff believe that foundations should be investing more in disseminating findings externally, what is holding foundations back from embracing an #OpenForGood approach?

CEP has a research study underway looking more deeply into what foundations know about what is and isn’t working in their practices and with whom they share that information, and will have new data to enrich the current conversations on transparency and evaluation in early 2018. In the meantime, take a moment to stop and consider what you might #OpenForGood.

--Jennifer Glickman

How to Make Grantee Reports #OpenForGood
July 20, 2017

Mandy Ellerton and Molly Matheson Gruen joined the [Archibald] Bush Foundation in 2011, where they created and now direct the Foundation's Community Innovation programs. The programs allow communities to develop and test new solutions to community challenges, using approaches that are collaborative and inclusive of people who are most directly affected by the problem. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Mandy Ellerton

When we started working at the Bush Foundation in 2011, we encountered a machine we’d never seen before: the Lektriever. It’s a giant machine that moves files around, kind of like a dry cleaner’s clothes rack, and allows you to seriously pack in the paper. It’s how the Bush Foundation, as a responsible grantmaker, had meticulously tracked and stored its files for posterity - in particular, grantee reports - for decades.

In 2013, the Bush Foundation had the privilege of moving to a new office. Mere days before we were to move into the new space, we got a frantic call from the new building’s management. It turned out that the Lektrievers (we actually had multiple giant filing machines!) were too heavy for the floor of the new building, which had to be reinforced with a number of steel plates to sustain their weight.

Molly Matheson Gruen

Even with all this extra engineering, we would still have to say goodbye to one of the machines altogether for the entire system to be structurally sound. We had decades of grantee stories, experiences and learning trapped in a huge machine in the inner sanctum of our office, up on the 25th floor.

The Lektrievers symbolized our opportunity to become more transparent and move beyond simply preserving our records, instead seeing them as relevant learning tools for current audiences. It was time to lighten the load and share this valuable information with the world.

Learning Logs Emerge

We developed our grantee learning log concept in the Community Innovation Programs as one way to increase the Foundation’s transparency. At the heart of it, our learning logs are a very simple concept: they are grantee reports, shared online. But, like many things that appear simple, once you pull on the string of change – the complexity reveals itself.

“Every Community Innovation project is an opportunity for others to learn and the learning logs are a platform to share this learning.”

Before we could save the reports from a life of oblivion in the Lektriever, build out the technology and slap the reports online, we needed to entirely rethink our approach to grantee reporting to create a process that was more mutually beneficial. First, we streamlined our grant accountability measures (assessing whether the grantees did what they said they’d do) by structuring them into a conversation with grantees, rather than as a part of the written reports. We’ve found that conducting these assessments in a conversation takes the pressure off and creates a space where grantees can be more candid, leading to increased trust and a stronger partnership.

Second, our grantee reports now focus on what grantees are learning in their grant-funded project. What’s working? What’s not? What would you do differently if you had it to do all over again? This new process resulted in reports that were more concise and to the point.

Finally, we redesigned our website to create a searchable mechanism for sharing these reports online. This involved linking our grant management system directly with our website so that when a grantee submits a report, we do a quick review and then the report automatically populates our website. We’ve also designed a way for grantees to be able to designate select answers as private when they want to share sensitive information with us, yet not make it entirely public. We leave it up to grantee discretion and those selected answers do not appear on the website. Grantees designate their answers to be private for a number of reasons, most often because they discuss sensitive situations having to do with specific people or partners – like when someone drops out of the project or when a disagreement with a partner holds up progress. And while we’ve been pleased at the candor of most of our grantees, some are still understandably reluctant to be publicly candid about failures or mistakes.
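For readers curious about the mechanics, here is a minimal sketch of that publish step. It assumes a report is stored as a list of question-and-answer pairs carrying a grantee-set privacy flag; the field names and the publish hook are hypothetical, since the Foundation’s actual integration between its grants management system and website is not spelled out here.

```python
# A minimal, hypothetical sketch of publishing a grantee report while
# honoring answers the grantee marked private. Field names and the
# publish_to_website callable are assumptions, not the Bush Foundation's
# actual implementation.

from typing import Callable, Dict, List

Report = List[Dict[str, object]]

def public_view(report: Report) -> Report:
    """Keep only the answers the grantee did not flag as private."""
    return [answer for answer in report if not answer.get("private", False)]

def publish_report(report: Report, publish_to_website: Callable[[Report], None]) -> None:
    """After staff review, push the public portion of the report to the website."""
    publish_to_website(public_view(report))

# Example: the first answer appears online; the second stays internal.
sample_report: Report = [
    {"question": "What's working?", "answer": "Our coalition doubled in size.", "private": False},
    {"question": "What's not?", "answer": "A key partner withdrew mid-project.", "private": True},
]
publish_report(sample_report, publish_to_website=print)
```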

But why does this new approach to grantee reporting matter, besides making sure the floor doesn’t collapse beneath our Lektrievers?

The Lektriever is a giant machine that moves files around, kind of like a dry cleaner’s clothes rack. The Bush Foundation had meticulously tracked and stored its files for posterity - in particular, grantee reports - for decades. Credit: Bush Foundation

Learning Sees the Light of Day

Learning logs help bring grantee learning into the light of day, instead of hiding in the Lektrievers, so that more people can learn about what it really takes to solve problems. Our Community Innovation programs at the Bush Foundation fund and reward the process of innovation–the process of solving problems. Our grantees are addressing wildly different issues: from water quality to historical trauma, from economic development to prison reform. But, when you talk to our grantees, you see that they actually have a lot in common and a lot to learn from one another about effective problem-solving. And beyond our grantee pool, there are countless other organizations that want to engage their communities and work collaboratively to solve problems.  Every Community Innovation project is an opportunity for others to learn and the learning logs are a platform to share this learning, making it #OpenForGood.

We also want to honor our grantees’ time. Grantees spend a lot of time preparing grant reports for funders. And, in a best case scenario, a program officer reads the report and sends the grantee a response of some kind before the report is filed away. But, let’s be honest – sometimes even that doesn’t happen. The report process can be a burden on nonprofits and the only party to benefit is the funder. We hope that the learning logs help affirm to our grantees that they’re part of something bigger than themselves - that what they share matters to others who are doing similar work.

We also hear from our grantees that the reports provide a helpful, reflective process, especially when they fill it out together with collaborating partners. One grantee even said she’d like to fill out the report more often than we require to have regular reflection moments with her team!

Learning from the Learning Logs

We only launched the learning logs last year, but we’ve already received some positive feedback. We’ve heard from both funded and non-funded organizations that the learning logs provide inspiration and practical advice so that they can pursue similar projects. A grantee recently shared a current challenge in their work that directly connected to work we knew another grantee had done and written about in their learning log. Because that knowledge was now out in the open, we could simply point them to the learning log, expanding one grantee’s impact beyond their local community while helping advance another grantee’s work.

Take, for example, some of the following quotes from some of our grantee reports:

  • The Minnesota Brain Injury Alliance's project worked on finding ways to better serve homeless people with brain injuries.  They reflected that, "Taking the opportunity for reflection at various points in the process was very important in working toward innovation.  Without reflection, we might not have been open to revising our plan and implementing new possibilities."
  • GROW South Dakota addressed a number of challenges facing rural South Dakota communities. They shared that, “Getting to conversations that matter requires careful preparation in terms of finding good questions and setting good ground rules for how the conversations will take place—making sure all voices are heard, and that people are listening for understanding and not involved in a debate.”
  • The People's Press Project engaged communities of color and disenfranchised communities to create a non-commercial, community-owned, low-powered radio station serving the Fargo-Moorhead area of North Dakota. They learned “quickly that simply inviting community members to a meeting or a training was not a type of outreach that was effective.”

Like many foundations, we decline far more applications than we fund, and our limited funding can only help communities tackle so many problems. Our learning logs are one way to try to squeeze more impact from those direct investments. By reading grantee learning logs, hopefully more people will be inspired to effectively solve problems in their communities.

We’re not planning to get rid of the Lektrievers anytime soon – they’re pretty retro cool and efficient. They contain important historical records and are incredibly useful for other kinds of record keeping, beyond grantee documentation. Plus, the floor hasn’t fallen in yet. But, as Bush Foundation Communications Director Dominick Washington put it, now we’re unleashing the knowledge, “getting it out of those cabinets, and to people who can use it.”

--Mandy Ellerton and Molly Matheson Gruen

What Will You #OpenForGood?
July 13, 2017

Janet Camarena is director of transparency initiatives at Foundation Center.  This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.


This week, Foundation Center is launching our new #OpenForGood campaign, designed to encourage better knowledge sharing practices among foundations.  Three Foundation Center services—Glasspockets, IssueLab, and GrantCraft—are leveraging their platforms to advance the idea that philanthropy can best live up to its promise of serving the public good by openly and consistently sharing what it’s learning from its work.  Glasspockets is featuring advice and insights from “knowledge sharing champions” in philanthropy on an ongoing #OpenForGood blog series; IssueLab has launched a special Results platform allowing users to learn from a collective knowledge base of foundation evaluations; and a forthcoming GrantCraft Guide on open knowledge practices is in development.

Although this campaign is focused on helping and inspiring foundations to use new and emerging technologies to better collectively learn, it is also in some ways rooted in the history that is Foundation Center’s origin story.


A Short History

Sixty years ago, Foundation Center was established to provide transparency for a field in jeopardy of losing its philanthropic freedom due to McCarthy Era accusations that gained traction in the absence of any openness whatsoever about foundation priorities, activities, or processes.  Not one, but two congressional commissions were formed to investigate foundations for alleged “un-American activities.”  As a result of these congressional inquiries, which spanned several years during the 1950s, Foundation Center was created to bring transparency to a field that had nearly lost everything due to its opacity.

“The solution and call to action here is actually a simple one – if you learn something, share something.”

I know our Transparency Talk audience is most likely familiar with this story since the Glasspockets name stems from this history when Carnegie Corporation Chair Russell Leffingwell said, “The foundation should have glass pockets…” during his congressional testimony, describing a vision for a field that would be so open as to allow anyone to have a look inside the workings and activities of philanthropy.  But it seems important to repeat that story now in the context of new technologies that can facilitate greater openness.

Working Collectively Smarter

Now that we live in a time when most of us walk around with literal glass in our pockets, and use these devices to connect us to the outside world, it is surprising that only 10% of foundations have a website, which means 90% of the field is missing out on discovery by the outside world.  But having websites would really just bring foundations into the latter days of the 20th century; #OpenForGood aims to bring them into the present day by encouraging foundations to openly share their knowledge in the name of working collectively smarter.

What if you could know what others know, rather than constantly replicating experiments and pilots that have already been tried and tested elsewhere?  Sadly, the common practice of foundations keeping knowledge in large file cabinets or hard drives only a few can access means that there are no such shortcuts. The solution and call to action here is actually a simple one—if you learn something, share something.

In foundations, learning typically takes the form of evaluation and monitoring, so we are specifically asking foundations to upload all of their published reports from 2015 and 2016 to the new IssueLab: Results platform, so that anyone can build on the lessons they’ve learned, whether inside or outside of their networks. Foundations that upload their published evaluations will receive an #OpenForGood badge to demonstrate their commitment to creating a community of shared learning.

Calls to Action

But #OpenForGood foundations don’t just share evaluations, they also:

  • Open themselves to ideas and lessons learned by others by searching shared repositories, like those at IssueLab, as part of their own research process;
  • Use Glasspockets to compare their foundation's transparency practices to their peers, add their profile, and help encourage openness by sharing their experiences and experiments with transparency here on Transparency Talk; and
  • Use GrantCraft to hear what their colleagues have to say, then add their voice to the conversation. If they have an insight, they share it!

Share Your Photos

“#OpenForGood foundations share their images with us so we can show the collective power of philanthropic openness, not just in words, but images. ”

And finally, #OpenForGood foundations share their images with us so we can show the collective power of philanthropic openness, not just in words, but images.  We would like to evolve the #OpenForGood campaign over time to become a powerful and meaningful way for foundations to open up their work and impact a broader audience than they could reach on their own. Any campaign about openness and transparency should, after all, use real images rather than staged or stock photography.

So, we invite you to share any high resolution photographs that feature the various dimensions of your foundation's work.  Ideally, we would like to capture images of the good you are doing out in the world, outside of the four walls of your foundation, and of course, we would give appropriate credit to participating foundations and your photographers.  The kinds of images we are seeking include people collaborating in teams, open landscapes, and images that convey the story of your work and who benefits. Let us know if you have images to share that may now benefit from this extended reach and openness framing by contacting openforgood@foundationcenter.org.

What will you #OpenForGood?

--Janet Camarena

GrantAdvisor: A TripAdvisor for Funder Feedback
July 6, 2017

Michelle Greanias is executive director of PEAK Grantmaking. Follow her on Twitter @mgreanias. This post also appears in PEAK Grantmaking’s blog.

For funders, hearing honest input from grantseekers about what they think about a foundation’s practices and getting insights from their experiences working as a grantee partner are critical components of effective grantmaking. Up until now, funders have needed to initiate the request for feedback via surveys, conversations, and third-party evaluators.  Now, a collaboration of funders, nonprofits, and others interested in improving philanthropy is exploring a new approach—GrantAdvisor, which recently launched in California and Minnesota with a goal of eventually reaching the entire country.

GrantAdvisor is like TripAdvisor—it’s a website that allows individuals (in this case, grant applicants, grantees, and others) to share their first-hand experiences with funding organizations, and gives funders the opportunity to respond publicly.  The idea is that just as a traveler would check TripAdvisor when planning a trip, a nonprofit would check GrantAdvisor before applying to a funder. And, just as a hotel monitors TripAdvisor to see what its customers like best and least about it, funders can see how grantees and colleagues experience working with them.

“Listening to unfettered feedback from grantees can help funders build more efficient processes and more effective partnerships, which ultimately increases impact.”

It works by collecting anonymous feedback from grantseekers and grantees. A funder profile needs at least five reviews before it becomes public; until then, the unpublished results are sent to the funder, giving it an opportunity to respond. Once the first five reviews are published, subsequent reviews are posted as they arrive, and the funder can respond at any time. Funders are encouraged to register with GrantAdvisor to receive automatic notices when reviews are posted about their organizations and to post responses when new reviews are submitted.
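As a rough sketch of that publication rule (not GrantAdvisor’s actual code, and with a hypothetical class name), the five-review threshold might look something like this:

```python
# Hypothetical model of the review-publication rule described above: reviews
# stay visible only to the funder until five have been submitted, after which
# the profile (and all subsequent reviews) becomes public.

PUBLICATION_THRESHOLD = 5

class FunderProfile:
    def __init__(self, name: str):
        self.name = name
        self.reviews = []  # anonymous review texts, in submission order

    def add_review(self, text: str) -> None:
        self.reviews.append(text)

    @property
    def is_public(self) -> bool:
        return len(self.reviews) >= PUBLICATION_THRESHOLD

    def visible_reviews(self, viewer_is_funder: bool = False) -> list:
        """The funder always sees reviews so it can respond; everyone else
        sees them only once the profile crosses the threshold."""
        if viewer_is_funder or self.is_public:
            return list(self.reviews)
        return []

profile = FunderProfile("Example Foundation")
for i in range(4):
    profile.add_review(f"Review {i + 1}")
print(profile.is_public)               # False: still below five reviews
profile.add_review("Review 5")
print(profile.is_public)               # True: the profile is now public
print(len(profile.visible_reviews()))  # 5
```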

As a grants manager, I found this concept a little scary at first—what if the feedback isn’t all positive?  How would it affect an organization’s reputation?  But the reality is that an organization’s reputation is already affected if grantseekers are having poor experiences with a funder. I want to know about any issues so I can address them, and I believe most grants managers would agree, especially since the alternative is allowing problems to build and multiply as bad practice impacts more and more grantees.

I also considered this transparent move through another critical lens—aligning values with practices.  In PEAK Grantmaking’s recent research, the top three common values held by grantmakers were collaboration, respect, and integrity.  Being open to feedback, even difficult feedback, is a concrete way to show that grantmakers are “walking the talk” by bringing those values to life through our practices.

Jessamyn Shams-Lau, executive director of Peery Foundation, and Maya Winkelstein, executive director of the Open Road Alliance, both support this work and see four reasons that GrantAdvisor.org is useful to funders:

  1. Feedback: Listening to unfettered feedback from grantees can help funders build more efficient processes and more effective partnerships, which ultimately increases impact.
  2. Benchmarking: With a common set of questions for every foundation, funders can benchmark the effectiveness of their grantmaking practices from the perspective of the grantee experience.
  3. Honest and Accurate Data: When foundations directly solicit feedback (even anonymously), respondents give different answers. Since GrantAdvisor.org collects reviews with or without funder prompting, this unsolicited feedback is the most honest feedback, and honest reviews mean accurate data.
  4. Saving Time: Over time, the hope is that the sharing of information via GrantAdvisor.org will help potential grantees better self-select which foundations to approach and which are not well aligned. This will result in a higher-quality pipeline for foundations, which saves everyone time and gets funders closer to impact faster.

Given the promise and potential of this new feedback mechanism to strengthen grantmaking practice, I am honored to serve on the GrantAdvisor national advisory committee. I will share more information about this effort as it progresses and look forward to hearing from the profession about this tool, particularly those in California and Minnesota, where GrantAdvisor will be initially active.

--Michelle Greanias

Bringing Knowledge Full Circle: Giving Circles Shape Accessible and Meaningful Philanthropy
June 21, 2017

Laura Arrillaga-Andreessen is a Lecturer in Business Strategy at the Stanford Graduate School of Business, Founder and President of the Laura Arrillaga-Andreessen Foundation, Founder and Board Chairman of Stanford Center on Philanthropy and Civil Society and Founder and Chairman Emeritus of the Silicon Valley Social Venture Fund. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

Nathalie Morton, a resident of Katy, TX, was passionate about giving back to her suburban Houston community. However, she felt her lack of philanthropic experience might hinder her effectiveness.

After initial conversations with her friends and neighbors, she discovered that they shared her desire to give locally and, like herself, lacked the financial ability to make the large contributions that they associated with high-impact philanthropy. Through online research, Nathalie learned that a giving circle is a collaborative form of giving that allows individuals to pool their resources, knowledge and ideas to develop their philanthropic strategy and scale their impact. Nathalie then discovered the Laura Arrillaga-Andreessen Foundation’s (LAAF.org) Giving Circles Fund (GCF) initiative, an innovative online platform that provides an accessible and empowering experience for a diverse group of philanthropists to practice, grow and scale their philanthropy by giving collaboratively.

“Philanthropists have an imperative to share the research and rationale behind their philanthropic decisions for the greater good.”

With LAAF support, Nathalie was inspired to create the Cinco Ranch Giving Circle to pool her community members’ resources for the greater good. In its first year, this circle of over 30 families has come together to invest thousands of dollars in local nonprofits — all through donations as modest as $10 per month. Every member found that sharing time, values, wisdom and dollars not only deepened their relationships with one another but also that the measurable impact they could have together far exceeded that which they could achieve alone. This experience empowered Nathalie and her fellow giving circle participants to see themselves as philanthropists and develop their practice in a collaborative environment.

Nathalie’s story is just one of myriad ways that the giving circles model has made strategic philanthropy more accessible. Two years ago, I wrote a post on this same blog about how funders should have not only glass pockets but also “glass skulls,” underscoring that philanthropists have an imperative to share the research and rationale behind their philanthropic decisions for the greater good of all who are connected to the issue.  Or put another way, giving circles can help donors of all sizes become #OpenForGood. GCF allows philanthropists, like Nathalie, to do just that — by empowering givers at any level to make their thinking and decisions about social impact more open and collaborative.

A lack of financial, intellectual and evaluation resources is a barrier to entry for many people who want to give in a way that matters more. That’s why I’ve committed the past two decades to not only redefining philanthropy — I believe that anyone, regardless of age, background or experience, can be a strategic philanthropist — but also to providing the highest quality, free educational resources (MOOCs, teaching materials, case studies, giving guides) to empower anyone to make the most of whatever it is they have to give. Although most GCF individual monthly contributions are in the double digits, the impact of our giving circles is increasingly significant — our circles have given over $550,000 in general operating support grants to nonprofits nationally. By design, giving circles amplify individual giving by providing built-in mechanisms for more strategic philanthropy, including increasing:

  • Transparency: Giving circles are effective because they are radically transparent about their operations, selection processes, meeting etiquette, voting rules, etc. We have found that giving circles grow and flourish when members understand exactly how the circle works and their role in its success. In addition, all of our circles publish their grants on their GCF pages, so that current and prospective members have insight into each circle’s history, portfolio and impact.
  • Democracy: GCF giving circles have a flat structure, in which everyone has an equal vote — regardless of their respective donations’ size. With LAAF support and a comprehensive portfolio of resources, group leaders facilitate meetings — ranging from casual meetups to knowledge sharing and issue ecosystem mapping gatherings to nonprofit nomination and voting sessions. Even in multigenerational giving circles where members are able to give at different levels, all of their members’ voices, perspectives and opinions hold equal weight.
  • Accessibility: Giving circles require a lower level of financial capital than other philanthropic models. A 2014 study has shown a higher rate of participation in giving circles for Millennials, women and communities of color — reflecting the spectacular pluralism that makes philanthropy beautiful. [1] On our GCF platform, we host multiple college and high school circles that have started teaching their members to carve out philanthropic dollars even on a minimal budget. Additionally, most of our circles are open to the public, and anyone can join and actively participate (yes, that includes you!).
  • Risk-tolerance: With more diverse participants and lower amounts of capital, GCF giving circles are more likely to give to community-based or smaller organizations that typically struggle to secure capital from more established philanthropies, thus meeting a critical social capital market need.

The power of collectively pooling ideas, experiences and resources, as well as sharing decision-making, inspired me to found Silicon Valley Social Venture Fund (SV2) in 1998. What began as a small, local giving circle has grown into the second largest venture philanthropy partnership in the world. More importantly, its experiential education model — grounded in the principles listed above — has influenced the philanthropic practice of hundreds of now highly strategic philanthropists who collectively have invested hundreds of millions of dollars globally.  To this day, being a partner-member of the SV2 giving circle continues to inform how I give and evolve my own philanthropic impact.  Now, powered by the GCF platform, technology gives all of us the ability to scale our own giving by partnering with like-minded givers locally, nationally and globally so we can all move toward an #OpenForGood ideal. The mobilization of givers of all levels harnesses the power of the collective and demonstrates that the sum of even the smallest contributions can lead to deeply meaningful social change.

--Laura Arrillaga-Andreessen

____________________________________________________________________________________

[1] https://www.philanthropy.com/article/Giving-Circles-Popular-With/150525

Why Evaluations Are Worth Reading – or Not
June 14, 2017

Rebekah Levin is the Director of Evaluation and Learning for the Robert R. McCormick Foundation, guiding the Foundation in evaluating the impact of its philanthropic giving and its involvement in community issues. She works both with the Foundation’s grantmaking programs and with the parks, gardens, and museums at Cantigny Park. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

Truth in lending statement:  I am an evaluator.  I believe strongly in the power of excellent evaluations to inform, guide, support and assess programs, strategies, initiatives, organizations and movements.  I have directed programs that were redesigned to increase their effectiveness, their cultural appropriateness and their impact based on evaluation data, helped to design and implement evaluation initiatives here at the foundation that changed the way that we understand and do our work, and have worked with many foundation colleagues and nonprofits to find ways to make evaluation serve their needs for understanding and improvement.

“I believe strongly in the power of excellent evaluations."

One of the strongest examples that I’ve seen of excellent evaluation within philanthropy came with a child abuse prevention and treatment project.  Our foundation funded almost 30 organizations that were using 37 tools to measure the impact of treatment, many of which were culturally inappropriate, designed for initial screenings, or unsuitable for a host of other reasons, and staff from these organizations running similar programs had conflicting views about the tools.  Foundation program staff wanted to be able to compare program outcomes using uniform evaluation tools and to use that data to make funding, policy, and program recommendations, but they were at a loss as to how to do so in a way that honored the grantees’ knowledge and experience.  A new evaluation initiative was funded, creating a "community of practice" that brought the nonprofits and the foundation together to:

  • create a unified set of reporting tools;
  • learn together from the data about how to improve program design and implementation, and the systematic use of data to support staff/program effectiveness;
  • develop a new rubric which the foundation would use to assess programs and proposals; and
  • provide evaluation coaching for all organizations participating in the initiative.

The evaluation initiative was so successful that the nonprofits participating decided to continue their work together beyond the initial scope of the project to improve their own programs and better support the children and families that they are serving. This “Unified Project Outcomes” article describes the project and established processes in far greater detail.

But I have also seen and been a part of evaluations where:

  • the methodology was flawed or weak;
  • the input data were inaccurate and full of gaps;
  • there was limited understanding of the context of the organization;
  • there was no input from relevant participants; and
  • there was no thought to the use of the data/analysis;

so that little to no value came out of them, and the learning that took place as a result was equally inconsequential.

So now to those evaluation reports that often come at the end of a project or foundation initiative, and sometimes have interim and smaller versions throughout their life span.  Except to a program officer who has to report to their director about how a contract or foundation strategy was implemented, the changes from the plan that occurred, and the value or impact of an investment or initiative, should anyone bother reading them?  From my perch, the answer is a big “Maybe.”  What does it take for an evaluation report to be worth my time to read, given the stack of other things sitting here on my desk that I am trying to carve out time to read?  A lot.

  1. It has to be an evaluation and not a PR piece. Too often, "evaluation" reports provide a cleaned up version of what really occurred in a program, with none of the information about how and why an initiative or organization functioned as it did, and the data all point to its success.  This is not to say that initiatives/organizations can’t be successful.  But no project or organization works perfectly, and if I don’t see critical concerns/problems/caveats identified, my guess is that I’m not getting the whole story, and its value to me drops precipitously.
  2. It has to provide relevant context. To read an evaluation of a multi-organizational collaboration in Illinois without placing its fiscal challenges within the context of our state’s ongoing budget crisis, or to read about a university-sponsored community-based educational program without knowing the long history of mistrust between the school and the community, or without any of the other relevant and critical contextual pieces that affect a program, initiative or organization, makes that evaluation of little value.  Placing findings within a nuanced set of circumstances significantly improves the possibility that the knowledge is transferable to other settings.
  3. It has to be clear and as detailed as possible about the populations that it is serving. Too often, I read evaluations that leave out critical information about who they were targeting and who participated or was served. 
  4. The evaluation’s methodology must be described with sufficient detail so that I have confidence that it used an appropriate and skillful approach to its design and implementation as well as the analysis of the data. I also pay great attention to what extent those who were the focus of the evaluation participated in the evaluation’s design, the questions being addressed, the methodology being used, and the analysis of the data.
  5. And finally, in order to get read, the evaluation has to be something I know exists, or something I can easily find. If it exists in a repository like IssueLab, my chances of finding it increase significantly.  After all, even if it’s good, it is even better if it is #OpenForGood for others, like me, to learn from it.

When these conditions are met, the answer to the question, “Are evaluations worth reading?” is an unequivocal “YES!,” if you value learning from others’ experiences and using that knowledge to inform and guide your own work.

--Rebekah Levin

The Real World is Messy. How Do You Know Your Foundation Is Making an Impact?
June 7, 2017

Aaron Lester is an experienced writer and editor in the nonprofit space. In his role as content marketing manager at Fluxx, Aaron’s goal is to collect and share meaningful stories from the world of philanthropy. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

In a perfect world, foundations could learn from every mistake, build on every new piece of knowledge, and know with certainty what impact every effort has made.

Of course, we’re not in that world. We’re in the real, fast-paced world of nonprofits where messy human needs and unpredictable natural and political forces necessitate a more flexible course. In that world, it’s more challenging to measure the effects of our grantmaking efforts and learn from them. It turns out knowledge sharing is a tough nut to crack.

And without meaningful knowledge sharing, we’re left struggling to understand the philanthropic sector’s true impact — positive or negative — within a single organization or across many. The solution is a more transparent sector that is willing to share data — quantitative as well as qualitative — that tells stories of wins and losses, successes and failures—in other words, a sector that is #OpenForGood. But, of course, this is much easier said than done.

My role at Fluxx creates many opportunities for me to talk with others in the field and share stories the philanthropic sector can learn from. I recently had the chance to speak with grantmakers on this very issue.

Measuring Whose Success?

Even within a foundation, it can be difficult to truly understand the impact of a grant or other social investment.

“Lose the mindset defined by a fear of failure; instead, embrace one that drives you to search for opportunity.”

As Adriana Jiménez, director of grants management at the ASPCA and former grants manager at the Surdna Foundation, explains, it’s difficult for foundations to prove conclusively that it’s their slice of the grantmaking that has made a meaningful difference in the community. “When you collect grant-by-grant data, it doesn’t always roll up to your foundation’s goals or even your grant’s goals.”

The issue is that there’s no standardized way to measure grantmaking data, and it’s an inherently difficult task because there are different levels of assessment (grant, cluster, program, foundation, etc.), there is similar work being done in different contexts, and a lot of data is only available in narrative form.

One way to combat these challenges is to make sure your foundation is transparent and in agreement around shared goals with grantees from the start of the relationship. Being too prescriptive or attempting to standardize the way your grantees work will never create the results you’re after. Part of this early alignment includes developing clear, measurable goals together and addressing how the knowledge you’re gaining can and should translate into improvements in performance.

A grantee should never have to alter their goals or objectives just to receive funding. That sends the wrong message, and it provides the wrong incentive for grantees to participate in knowledge-sharing activities. But when you work as partners from the start and provide space for grantees to collaborate on strategy, a stronger partnership will form, and the stories your data tells will begin to be much more meaningful.

The Many Languages of Human Kindness

If sharing knowledge is difficult within one organization, it’s even more challenging across organizations.

Jiménez points out that a major challenge is the complexity of foundations, as they rely on different taxonomies and technologies and discuss similar issues using different language. Every foundation’s uniqueness is, in its day-to-day work, its strength, but in terms of big-picture learning across organizations, it’s a hurdle.

Producing cohesive, comprehensive data out of diverse, fragmented information across multiple organizations is a huge challenge. Mining the information and tracking it in an ongoing way is another obstacle made more difficult because the results are often more anecdotal than they are purely quantitative. And when this information is spread out over so many regions and focus areas, the types of interventions vary so widely that meaningful knowledge sharing becomes untenable.

Gwyneth Tripp, grants manager at Blue Shield of California Foundation, also cites a capacity issue. Most foundations don’t have designated roles for gathering, tracking, organizing, and exchanging shareable data, so they resort to asking staff who already have their own sizable to-do lists. Tripp says:

“They have an interest and a desire [in knowledge sharing], but also a real challenge of balancing the everyday needs, the strategic goals, the relationships with grantees, and then adding that layer of ‘let’s learn and think about it all’ is really tough to get in.

“Also, becoming more transparent about the way you work, including sharing successes as well as failures, can open your foundation up to scrutiny. This can be uncomfortable. But it’s important to delineate between ‘failure’ and ‘opportunity to learn and improve.’”

Sparking Change

But foundations know (possibly better than anyone else) that obstacles don’t make accomplishing a goal impossible.

And this goal’s rewards are great: When foundations can achieve effective knowledge sharing, they’ll have better insights into what other funding is available for the grantees within the issues they are tackling, who is being supported, which experiments are worth replicating, and where there are both gaps and opportunities. And with those insights, foundations gain the ability to iterate and improve upon their operations, even leading to stronger, more strategic collaborations and partnerships.

Creating and promoting this kind of accessible, useful knowledge sharing starts with a few steps:

  1. Begin from within. Tracking the impact of your grantmaking efforts and sharing those findings with the rest of the sector requires organizations to look internally first. Start by building a knowledge management implementation plan that involves every stakeholder, from internal teams to grantee partners to board executives.
  2. Determine and prioritize technology needs. Improvements in technology — specifically cloud-based technology — are part of what’s driving the demand for data on philanthropic impact in the first place. Your grants management system needs to provide integrated efficiency and accessibility if you want to motivate staff participation and generate usable insights from the data you’re collecting. Is your software streamlining your efforts, or is it only complicating them?
  3. Change your mindset. Knowledge sharing can be intimidating, but it doesn’t have to be. Lose the mindset defined by a fear of failure; instead, embrace one that drives you to search for opportunity. Promote a stronger culture of knowledge sharing across the sector by sharing your organizational practices and lessons learned. Uncover opportunities to collect data and share information across organizations.

There’s no denying that knowledge sharing benefits foundations everywhere, along with the programs they fund. Don’t let the challenges hold you back from aiming for educational, shareable data — you have too much to gain not to pursue that goal.  What will you #OpenForGood?

--Aaron Lester 

Because What You Know Shouldn’t Just Be About Who You Know
June 1, 2017

Janet Camarena is director of transparency initiatives for Foundation Center.  This post is part of the Glasspockets’ #OpenforGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

"Knowledge is obsolete." As a librarian, my ears perked up at a TEDx talk making this claim and at the articles buzzing about it in the education field. It seems plausible. Why memorize facts when anything one wants to know can be readily looked up, on the go, via a smart phone? As a mom, I imagine my kids sitting down to prepare for rich, thought-provoking classroom discussions instead of laboring over endless multiple-choice tests. What an exciting time to be alive — a time when all of humanity’s knowledge is at our fingertips, leading experts are just a swipe away, the answer always literally close at hand, and we’ve been released from the drudgery of memorization and graduated to a life of active, informed debate! And how lucky are we to be working in philanthropy and able to leverage all this knowledge for good, right?

Though the active debate part may sound familiar, sadly, for those of us working in philanthropy, the ubiquity of knowledge remains more sci-fi mirage than a TED Talk rendering of our present-day reality. As Glasspockets reported in “The Foundation Transparency Challenge” infographic, released last November, still only 10% of foundations even have a website, so even a smart phone is not smart enough to connect you to the 90% that don’t.

The Foundation Transparency Challenge also reveals other areas of potential improvement for institutional philanthropy, including a number of transparency practices not widely embraced by the majority of funders. Indeed, the data we’ve collected demonstrates that philanthropy is weakest when it comes to creating communities of shared learning, with fewer than half of the foundations with a Glasspockets profile using their websites to share what they are learning, only 22 percent sharing how they assess their own performance, and only 12 percent revealing details about their strategic plan.

Foundation Center data also tells us that foundations annually make an average of $5.4 billion in grants for knowledge-production activities, such as evaluations, white papers, and case studies. Yet only a small fraction of foundations actively share the knowledge assets that result from those grants -- and far fewer share them under an open license or through an open repository. For a field that is focused on investing in ideas -- and not shy about asking grantees to report on the progress of these ideas -- there is much potential here to open up our knowledge to peers and practitioners who, like so many of us, are looking for new ideas and new approaches to urgent, persistent problems.

“Sadly, for those of us working in philanthropy, the ubiquity of knowledge remains more sci-fi mirage than a TED Talk rendering of our present-day reality.”

As for having a universe of experts a swipe away to help inform our philanthropic strategies, the reality is that the body of knowledge related to philanthropic work is scattered across the thousands of institutional foundation websites that do exist. But who has time for the Sisyphean task of filtering through it all?

No coincidence, perhaps, that a main finding of a recent report commissioned by the William and Flora Hewlett Foundation was that foundation professionals looking to gain and share knowledge tend to prefer to confer with trusted foundation peers and colleagues. At the same time, the field is doing a lot of soul searching related to diversity, equity, and inclusion -- and what it can do to improve its performance in those areas. But if practitioners in the field are only sourcing knowledge from their peers, doesn’t that suggest their knowledge networks may be unintentionally insular and lacking in, well…diversity of opinion and perspective? And might there be a way to connect the dots and improve the effectiveness, efficiency, and inclusivity of our networks by changing the way we source, find, and share lessons learned?

In other words, shouldn’t what we know not just be about who we know?

#OpenForGood

The good news is that as more foundations professionalize their staffs and develop in-house expertise in learning, monitoring, and evaluation (as well as in grants management and communications), there are a number of developing practices out there worth highlighting. At the same time, a number of technology platforms and tools have emerged that make it easy for us to improve the way we search for and find answers to complex questions. Here at Foundation Center, for example, we are using this post to kick off a new #OpenForGood series featuring the voices of “knowledge sharing champions” from the philanthropic and social sectors. Some of these experts will be sharing their perspectives on opening up knowledge at their own foundations, while others will clue us in to tools and platforms that can improve the way philanthropy leverages the knowledge it generates (and pays for), as well as discovers new sources of knowledge.

But before we get there, you might be wondering: What does it mean to be a social sector organization that is #OpenForGood? And how does my organization become one? Not to worry. The following suggestions are intended to help organizations demonstrate they are moving in the direction of greater openness:

  1. Grantmakers can start by assessing their own foundation’s openness by taking and sharing the “Who Has Glass Pockets?” transparency self-assessment survey.
  2. Funders and nonprofits alike can openly share what they are learning with the rest of the field. If your organization invested in monitoring and evaluating results in 2015 or 2016, make the effort to share those evaluations in our new IssueLab: Results. In exchange for sharing your recent evaluations, you will receive an #OpenForGood badge to display on your website to signal your commitment to creating a community of shared learning.
  3. If you have lessons to share but not a formal evaluation process, share them in blog format here on Transparency Talk, on PhilanTopic, or on GrantCraft, so others can still benefit from your experience.
  4. Adopt an open licensing policy so that others can more easily build on your work.

The #OpenForGood series is timed to align with the launch of a new Foundation Center platform designed to help philanthropy learn from all the collective knowledge at its disposal. Developed by the team at IssueLab, whose collection already includes more than 22,000 reports from thousands of nonprofits and foundations, IssueLab Results is dedicated in particular to the collection and sharing of evaluations.

IssueLab Topic Graphic

IssueLab Results supplies easy, open access to the lessons foundations are learning about what is and isn’t working. The site includes a growing curated collection of evaluations and a special collection containing guidance on the practice of evaluation. And it’s easy to share your knowledge through the site – just look for the orange “Upload” button. 

The basic idea here is to scale social sector knowledge so that everyone benefits and the field, collectively, grows smarter rather than more fragmented. On a very practical level, it means that a researcher need only visit one website rather than thousands to learn what is known about the issue s/he is researching. But the only way the idea can scale is if foundations and nonprofits help us grow the collection by adding their knowledge here. If they do – if you do – it also means that philanthropy will have a more inclusive and systematic way to source intelligence beyond the “phone a friend” approach.

The bottom line is that in philanthropy today, knowledge isn’t obsolete; it’s obscured. Won’t you join us in helping make it #OpenForGood?

If you have a case study related to knowledge sharing and management and/or the benefits of transparency and openness, let us know in the comments below, or find us on Twitter @glasspockets.

--Janet Camarena
