Transparency Talk

How To Keep Me Scrolling Through What You Are Sharing
August 10, 2017

Tom Kelly is Vice President of Knowledge, Evaluation & Learning at the Hawai‘i Community Foundation. He has been learning and evaluating in philanthropy since the beginning of the century. @TomEval  TomEval.com

This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Hello, my name is Tom and I am a Subscriber. And a Tweeter, Follower, Forwarder (FYI!), Google Searcher, and DropBox Hoarder. I subscribe to blogs, feeds, e-newsletters, and email updates. My professional title includes the word “Knowledge,” so I feel compelled to make sure I am keeping track of the high volume of data, information, reports, and ideas flowing throughout the nonprofit and foundation worlds (yes, it is a bit of a compulsion…and I am not even including my favorite travel, shopping and coupon alerts).

It is a lot and I confess I do not read all of it. It is a form of meditation for me to scroll through emails and Twitter feeds while waiting in line at Aloha Salads. I skim, I save, I forward, I retweet – I copy and save for later reading (later when?). In fact, no one can be expected to keep up, so how does anyone make sense of it all, or even find what we need when we need it? Everyone being #OpenForGood and sharing everything is great, but who is reading it all? And how do we make what we are opening for good actually good?

Making Knowledge Usable

We have all experienced at some point Drowning in Information, Starving for Knowledge (John Naisbitt’s Megatrends…I prefer E.O. Wilson’s “starving for wisdom” theory). The information may be out there but rarely in a form that is easily found, read, understood, and most importantly used. Foundation Center and IssueLab have made it easier for people in the sector to know what is being funded, where new ideas are being tested, and what evidence and lessons are available. But nonprofits and foundations still have to upload and share many more of their documents than they do now. And we need to make sure that the information we share is readable, usable, and ready to be applied.

Hawaii Community Foundation Graphic

DataViz guru Stephanie Evergreen recently taught me a new hashtag: #TLDR – “Too Long, Didn’t Read.”

She now proposes that every published report be available in three formats – a one-page handout with key messages, a 3-page executive summary, and a 25-page report (plus appendices). In this way the “scanners,” “skimmers” and “deep divers” can access the information in the form they prefer and in the time they have. It also requires writing (and formatting) differently for each of these sets of eyes. (By the way, do you know which one you are?)

From Information to Influence

But it is not enough to make your reports accessible, searchable, and also easily readable in short and long forms; you also need to include the information people need to make decisions and act. It means deciding in advance who you want to inform and influence and what you want people to do with the information. You need to be clear about your purpose for sharing information, and you need to give people the right kinds of information if you expect them to read it, learn from it, and apply it.

“Give people the right kinds of information if you expect them to read it, learn from it, and apply it.”

Too many times I have read reports with promising findings or interesting lessons, and then I race through all the footnotes and the appendices at the back of the report looking for resources that could point me to the details of evidence and data or implementation guidance. I usually wind up trying to track down the authors by email or phone to follow up.

A 2005 study of more than 1,000 published human services evaluations found only 22 well-designed and well-documented reports that shared any analysis of implementation factors – what lessons people learned about how best to put the program or services in place. We cannot expect other people and organizations to share knowledge and learn if they cannot access information from others that helps them use the knowledge and apply it in their own programs and organizations. YES, I want to hear about your lessons and “a-ha’s,” but I also want to see data and analysis of the common challenges that all nonprofits and foundations face:

  • How to apply and adapt program and practice models in different contexts
  • How to sustain effective practices
  • How to scale successful efforts to more people and communities

This means making sure that your evaluations and your reports include opening up the challenges of implementation – the same challenges others are likely to face. It also means placing your findings in the context of existing learning while also using similar definitions so that we can build on each other’s knowledge. For example, in our recent middle school connectedness initiative, our evaluator Learning for Action reviewed the literature first to determine specific components and best practices of youth mentoring so that we could build the evaluation on what had come before, and then report clearly on what we learned about in-school mentoring and open up useful and comparable knowledge to the field.

So please plan ahead and define your knowledge sharing and influence agenda up front and consider the following guidelines:

  • Who needs to read your report?
  • What information does your report need to share to be useful and used?
  • Read and review similar studies and reports and determine in advance what additional knowledge is needed and what you will document and evaluate.
  • Use common definitions and program model frameworks so we are able to continually build on field knowledge and not create anew each time.
  • Pay attention to and evaluate implementation, replication and the management challenges (staffing, training, communication, adaptation) that others will face.
  • And disseminate widely and share at conferences, in journals, in your sector networks, and in IssueLab’s open repository.

And I will be very happy to read through your implementation lessons in your report’s footnotes and appendices next time I am in line for a salad.

--Tom Kelly

Crafting A Better Tap of Knowledge
August 9, 2017

Gabriela Fitz is director of knowledge management initiatives at Foundation Center. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

This past weekend, I went to visit an old meat packing plant in Chicago’s Back of the Yards neighborhood. The plant, renamed “Plant Chicago,” serves as a workshop and food production space, playing host to a number of micro-enterprises including a brewery and bakery. But it wasn’t the beer or even the pies that drew me there. It was their tagline, “Closed Loop, Open Source.”

If you know me (or the work of IssueLab at all), you know why I couldn’t resist. The closed loop approach is all about a circular economy, where we take “waste from one process and re-purpose it as inputs for another, creating a circular, closed-loop model of material reuse.” It’s a simple principle and one that I imagine most of us would get behind.

But what’s not so simple is building and maintaining those closed loop systems so that people begin to see (and taste) the benefits. Standing in the lobby of Plant Chicago it was painfully clear to me that circular economies, whether they are in food production or in knowledge production, require more than just good intentions.

Plant Chicago
Plant Chicago, a workshop and food production space, hosts micro-enterprises, including a brewery and bakery. Credit: Gabriela Fitz

Just as I started to feel the familiar weight of trying to execute systems change, I spotted a small sketch of a pyramid amongst a number of technical diagrams and development plans covering a large wall. This simple sketch was similar to a model many of you are probably familiar with, but it is still worth describing. In the sketch, the base of the pyramid was labeled “beliefs and values.” The next level up was “practices and actions.” The top of the pyramid was “results.”

When it comes to the closed loop we care so much about at IssueLab, we keep seeing organizations try to skip from beliefs to results. The social sector wants shared learning without sharing; collective impact without collectivized intelligence. But open knowledge - like any sector-wide or organizational change - has to include a change in practices, not just beliefs. If we don’t adopt open knowledge practices, we can’t expect to see the results we hope for: improved program design and delivery at the community level or less duplication of avoidable mistakes. If we truly want to live up to the #OpenForGood ideal, our beliefs and values are simply not sufficient. (Note that the definition of closed loop I quote above is not about beliefs, it’s about actions, relying on verbs like “take,” “re-purpose,” and “create.”)

The good news is that we have the infrastructure to make a circular knowledge economy possible. We’ve built the plant. Tools like open licenses and open repositories were designed to facilitate and support change in knowledge sharing practices, making it easier for foundations to move up the levels of the pyramid.

Now, we just need to start taking a couple simple actions that reflect our beliefs and move us towards the results we want to see. If you believe in the #OpenForGood principle that social sector knowledge is a public good from which nonprofits and foundations can benefit, your foundation can: 1) use open licensing for your knowledge products, and 2) earn an #OpenForGood badge by sharing your knowledge products, like evaluations, through IssueLab’s open repository. Once those practices are as much part of the normal way of doing foundation business as cutting checks and producing summary reports are, we can all sit back and enjoy that beer, together.

--Gabriela Fitz

Foundations and Endowments: Smart People, Dumb Choices
August 3, 2017

(Marc Gunther writes about nonprofits, foundations, business and sustainability. He also writes for NonprofitChronicles.com. A version of this post also appears in Nonprofit Chronicles.)

This post is part of a new Transparency Talk series devoted to putting the spotlight on the importance of the 990PF, the informational tax form that foundations must annually file.  The series will explore the implications of the open 990; how journalists and researchers use the 990PF to understand philanthropy; and its role, limitations, and potential as a communications tool. 

America’s foundations spend many millions of dollars every year on investment advice. In return, they get sub-par performance.

You read that right: Money that could be spent on charitable programs — to alleviate global poverty, help cure disease, improve education, support research or promote the arts — instead flows into the pockets of well-to-do investment advisors and asset managers who, as a group, generate returns on their endowment investments that are below average.

This is redistribution in the wrong direction, on a grand scale: Foundation endowments hold about $800 billion in investments. It hasn’t attracted a lot of attention, but that could change as foundations make their IRS tax filings open, digital and searchable. That should create competitive pressure on foundation investment officers to do better, and on foundation executives and trustees to rethink business-as-usual investing.

The latest evidence that they aren’t doing very well arrived recently with the news that two energy funds managed by a Houston-based private equity firm called EnerVest are on the verge of going bust. Once worth $2 billion, the funds will leave investors “with, at most, pennies for every dollar they invested,” the Wall Street Journal reports. To add insult to injury, the funds in question were invested in oil and natural gas during 2012 and 2013, just as Bill McKibben, 350.org and a handful of their allies were urging institutional investors to divest from fossil fuels.

Foundations that invested in the failing EnerVest funds include the J. Paul Getty Trust, the John D. and Catherine T. MacArthur Foundation and the California-based Fletcher Jones Foundation, according to their most recent IRS filings. Stranded assets, anyone?

“Endowed private foundations are unaccountable to anyone other than their own trustees.”

Of course, no investment strategy can prevent losses. But the collapse of the EnerVest funds points to a broader and deeper problem–the fact that most foundations trust their endowments to investment offices and/or outside portfolio managers who pursue active and expensive investment strategies that, as a group, have underperformed the broader markets.

How costly has this underperformance been? That’s impossible to know because most foundations do not disclose their investment returns. This, by itself, is troubling; it’s a reminder that endowed private foundations are unaccountable to anyone other than their own trustees.

On disclosure, there are signs of progress. The Ford Foundation says it intends to release its investment returns for the first time. A startup company called Foundation Financial Research is compiling data on endowments as well, which it intends to make available to foundation trustees and sell to asset managers.

What’s more, as the IRS Form 990s filed by foundations become machine readable, it will become easier for analysts, activists, journalists and other foundations to see exactly how billions of dollars of foundation assets are deployed, and how they are performing. Advocates for mission-based investment, or for hiring more women and people of color to manage foundation assets, are likely to shine a light on foundations whose endowments are underperforming.

Unhappily, all indications are that most foundations are underperforming because they pursue costly, active investment strategies. This month, what is believed to be the most comprehensive annual survey of foundation endowment performance once again delivered discouraging news for the sector.

The 2016 Council on Foundations–Commonfund Study of Investment of Endowments for Private and Community Foundations® reported on one-year, five-year and 10-year returns for private foundations, and they again trail passive benchmarks.

The 10-year annual average return for private foundations was 4.7 percent, the study found. The five-year return was 7.6 percent. Those returns are net of fees — meaning that outside investment fees are taken into account — but they do not take into account the considerable salaries of investment officers at staffed foundations.

By comparison, Vanguard, the pioneering giant of passive investing, says a simple mix of index funds with 70 percent in stocks and 30 percent in fixed-income assets delivered an annualized return of 5.4 percent over the past 10 years. The five-year return was 9.1 percent.

These differences add up in a hurry.
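To see just how quickly, here is a back-of-the-envelope compounding calculation using the ten-year averages cited above, applied to a hypothetical $100 million endowment (an illustration only, not an analysis of any actual foundation’s portfolio):

```python
# Back-of-the-envelope compounding: the study's 4.7% average annual
# return for private foundations vs. 5.4% for the Vanguard 70/30 mix.

def grow(principal, annual_return, years):
    """Compound a starting balance at a fixed annual return."""
    return principal * (1 + annual_return) ** years

endowment = 100_000_000  # hypothetical $100 million endowment

active = grow(endowment, 0.047, 10)   # private foundation average
passive = grow(endowment, 0.054, 10)  # passive 70/30 benchmark
gap = passive - active

# The 0.7-point annual difference compounds to roughly $11 million
# over the decade - money that could have gone to grantmaking.
```

On these assumed figures, the seemingly modest 0.7-point annual gap costs about eleven cents on every endowment dollar over ten years.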

Warnings, Ignored

The underperformance of foundation endowments is not a surprise. In a Financial Times essay called “The end of active investing?” that should be read by every foundation trustee, Charles D. Ellis, who formerly chaired the investment committee at Yale, wrote:

“Over 10 years, 83 per cent of active funds in the US fail to match their chosen benchmarks; 40 per cent stumble so badly that they are terminated before the 10-year period is completed and 64 per cent of funds drift away from their originally declared style of investing. These seriously disappointing records would not be at all acceptable if produced by any other industry.”

The performance of hedge funds, private-equity funds and venture capital has trended downwards as institutional investors flocked into those markets, chasing returns. Notable investors including Warren Buffett, Jack Bogle (who as Vanguard’s founder has a vested interest in passive investing), David Swensen, Yale’s longtime chief investment officer, and Charles Ellis have all argued for years that most investors–even institutional investors–should simply diversify their portfolios, pursue passive strategies and keep their investing costs low.

In his most recent letter to investors in Berkshire Hathaway, Buffett wrote:

“When trillions of dollars are managed by Wall Streeters charging high fees, it will usually be the managers who reap outsized profits, not the clients. Both large and small investors should stick with low-cost index funds.”

For more from Buffett about why passive investing makes sense, see my March blog post, Warren Buffett has some excellent advice for foundations that they probably won’t take. Recently, Freakonomics did an excellent podcast on the topic, titled The Stupidest Thing You Can Do With Your Money.

That said, the debate between active and passive asset managers remains unsettled. While index funds have outperformed actively managed portfolios over the last decade, Cambridge Associates, a big investment firm that builds customized portfolios for institutional investors and private clients, published a study last spring saying that this past decade is an anomaly. Cambridge Associates found that since 1990, fully diversified (i.e., actively managed) portfolios have underperformed a simple 70/30 stock/bond portfolio in only two periods: 1995–99 and 2009–2016. To no one’s surprise, Cambridge says: “We continue to find investments in private equity and hedge funds that we believe have an ability to add value to portfolios over the long term.” Portfolio managers are also sure to argue that their expertise and connections enable them to beat market indices.

But where is the evidence? To the best of my knowledge, seven of the U.S.’s 10 biggest foundations decline to disclose their investment returns. I emailed or called the Getty, MacArthur and Fletcher Jones foundations to ask about their investments in EnerVest and was told that they do not discuss individual investments. They declined comment.

To its credit, MacArthur does disclose the investment performance of its $6.3 billion endowment. On the other hand, MacArthur has an extensive grantmaking program supporting “conservation and sustainable development.” Why is it financing oil and gas assets?

Ultimately, foundation boards are responsible for overseeing the investment of their endowments. Why don’t they do a better job of it? Maybe it’s because many foundation trustees — particularly those who oversee the investment committees — come out of Wall Street, private equity funds, hedge funds and venture capital. They are the so-called experts, and they have built successful careers by managing other people’s money. It’s not easy for the other board members, who may be academics, activists, lawyers or politicians, to question their expertise. But that’s what they need to do.

And, at the very least, foundations ought to be open about how their endowments are performing so those who manage their billions of dollars can be held accountable.

--Marc Gunther

How Improved Evaluation Sharing Has the Potential to Strengthen a Foundation’s Work
July 27, 2017

Jennifer Glickman is manager, research team, at the Center for Effective Philanthropy. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Philanthropy is a complex, demanding field, and many foundations are limited in the resources they can dedicate to obtaining and sharing knowledge about their practices. It is worth asking, then: in what areas should foundations focus their learning and sharing efforts to be #OpenForGood?

Last year, the Center for Effective Philanthropy (CEP) released two research reports exploring this question. The first, Sharing What Matters: Foundation Transparency, looks at foundation CEOs’ perspectives on what it means to be transparent, who the primary audiences are for foundations’ transparency efforts, and what is most important for foundations to share.

The second report, Benchmarking Foundation Evaluation Practices, presents benchmarking data collected from senior foundation staff with evaluation responsibilities on topics such as evaluation staffing and structures, investment in evaluation work, and the usefulness of evaluation information. Together, these reports provide meaningful insights into how foundations can learn and share knowledge most effectively.

CEP’s research found that there are specific topics about which foundation CEOs believe being transparent could potentially increase their foundation’s ability to be effective. These areas include the foundation’s grantmaking processes, its goals and strategies, how it assesses its performance, and the foundation’s experiences with what has and has not worked in its efforts to achieve its programmatic goals. While foundation CEOs believe their foundations are doing well in sharing information about their grantmaking, goals, and strategies, they say their foundations are much less transparent about the lessons they learn through their work.

CEP Transparency Graphic

For example, nearly 70 percent of the CEOs CEP surveyed say being transparent about their foundation’s experiences with what has worked in its efforts to achieve its programmatic goals could increase effectiveness to a significant extent. In contrast, only 46 percent say their foundations are very or extremely transparent about these experiences. Even fewer, 31 percent, say their foundations are very or extremely transparent about what has not worked in their programmatic efforts, despite 60 percent believing that being transparent about this topic could potentially increase their effectiveness to a significant extent.

And yet, foundations want this information about lessons learned and think it is important. Three-quarters of foundation CEOs say they often seek out opportunities to learn from other foundations’ work, and that sharing it openly enables others to learn from foundation work more generally.

How is knowledge being shared then? According to our evaluation research, foundations are mostly sharing their programmatic knowledge internally. Over three-quarters of the evaluation staff who responded to our survey say evaluation findings are shared quite a bit or a lot with the foundation’s CEO, and 66 percent say findings are shared quite a bit or a lot with foundation staff. In comparison:

  • Only 28 percent of respondents say evaluation findings are shared quite a bit or a lot with the foundation’s grantees;
  • 17 percent say findings are shared quite a bit or a lot with other foundations; and
  • Only 14 percent say findings are shared quite a bit or a lot with the general public.

CEP Evaluation Survey Graphic

In fact, less than 10 percent of respondents say that disseminating evaluation findings externally is a top priority for their role.

But respondents do not think these numbers are adequate. Nearly three-quarters of respondents say their foundation invests too little in disseminating evaluation findings externally. Moreover, when CEP asked respondents what they hope will have changed for foundations in the collection and/or use of evaluation information in five years, one of the top three changes mentioned was that foundations will be more transparent about their evaluations and share what they are learning externally.

So, if foundation CEOs believe that being transparent about what their foundation is learning could increase its effectiveness, and foundation evaluation staff believe that foundations should be investing more in disseminating findings externally, what is holding foundations back from embracing an #OpenForGood approach?

CEP has a research study underway looking more deeply into what foundations know about what is and isn’t working in their practices and with whom they share that information, and will have new data to enrich the current conversations on transparency and evaluation in early 2018. In the meantime, take a moment to stop and consider what you might #OpenForGood.

--Jennifer Glickman

How to Make Grantee Reports #OpenForGood
July 20, 2017

Mandy Ellerton and Molly Matheson Gruen joined the [Archibald] Bush Foundation in 2011, where they created and now direct the Foundation's Community Innovation programs. The programs allow communities to develop and test new solutions to community challenges, using approaches that are collaborative and inclusive of people who are most directly affected by the problem. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Ellertonmandy20152
Mandy Ellerton

When we started working at the Bush Foundation in 2011, we encountered a machine we’d never seen before: the Lektriever. It’s a giant machine that moves files around, kind of like a dry cleaner’s clothes rack, and allows you to seriously pack in the paper. As a responsible grantmaker, it’s how the Bush Foundation had meticulously tracked and stored its files for posterity - in particular, grantee reports - for decades.

In 2013, the Bush Foundation had the privilege of moving to a new office. Mere days before we were to move into the new space, we got a frantic call from the new building’s management. It turned out that the Lektrievers (we actually had multiple giant filing machines!) were too heavy for the floor of the new building, which had to be reinforced with a number of steel plates to sustain their weight.

MMG 2015 Headshot1
Molly Matheson Gruen

Even with all this extra engineering, we would still have to say goodbye to one of the machines altogether for the entire system to be structurally sound. We had decades of grantee stories, experiences and learning trapped in a huge machine in the inner sanctum of our office, up on the 25th floor.

The Lektrievers symbolized our opportunity to become more transparent and move beyond simply preserving our records, instead seeing them as relevant learning tools for current audiences. It was time to lighten the load and share this valuable information with the world.

Learning Logs Emerge

We developed our grantee learning log concept in the Community Innovation Programs as one way to increase the Foundation’s transparency. At the heart of it, our learning logs are a very simple concept: they are grantee reports, shared online. But, like many things that appear simple, once you pull on the string of change, the complexity reveals itself.

“Every Community Innovation project is an opportunity for others to learn and the learning logs are a platform to share this learning.”

Before we could save the reports from a life of oblivion in the Lektriever, build out the technology and slap the reports online, we needed to entirely rethink our approach to grantee reporting to create a process that was more mutually beneficial. First, we streamlined our grant accountability measures (assessing whether the grantees did what they said they’d do) by structuring them into a conversation with grantees, rather than as a part of the written reports. We’ve found that conducting these assessments in a conversation takes the pressure off and creates a space where grantees can be more candid, leading to increased trust and a stronger partnership.

Second, our grantee reports now focus on what grantees are learning in their grant-funded project. What’s working? What’s not? What would you do differently if you had it to do all over again? This new process resulted in reports that were more concise and to the point.

Finally, we redesigned our website to create a searchable mechanism for sharing these reports online. This involved linking our grant management system directly with our website so that when a grantee submits a report, we do a quick review and then the report automatically populates our website. We’ve also designed a way for grantees to designate select answers as private when they want to share sensitive information with us, yet not make it entirely public. We leave it up to grantee discretion, and those selected answers do not appear on the website. Grantees designate their answers to be private for a number of reasons, most often because they discuss sensitive situations having to do with specific people or partners – like when someone drops out of the project or when a disagreement with a partner holds up progress. And while we’ve been pleased at the candor of most of our grantees, some are still understandably reluctant to be publicly candid about failures or mistakes.
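The privacy flag is essentially a filter applied at publish time. Here is a minimal sketch of that step (the field names and structure are hypothetical; the Foundation’s actual grant-management integration is not described in detail):

```python
# Hypothetical sketch of the publish step: a grantee submits a report,
# and any answers flagged private are held back from the public website.

def publishable_answers(report):
    """Return only the answers the grantee chose to make public."""
    return [a for a in report["answers"] if not a.get("private", False)]

report = {
    "grantee": "Example Org",
    "answers": [
        {"question": "What's working?",
         "text": "Convening partners early built trust.",
         "private": False},
        {"question": "What's not?",
         "text": "A dispute with a partner slowed progress.",
         "private": True},
    ],
}

public = publishable_answers(report)
# Only the first answer would populate the website; the flagged
# answer stays visible to Foundation staff alone.
```

The design choice worth noting is that privacy is opt-in per answer rather than per report, so one sensitive disclosure does not keep an entire learning log offline.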

But why does this new approach to grantee reporting matter, besides making sure the floor doesn’t collapse beneath our Lektrievers?

Bushfoundation-Lektriever photo
The Lektriever is a giant machine that moves files around, kind of like a dry cleaner’s clothes rack. The Bush Foundation had meticulously tracked and stored its files for posterity - in particular, grantee reports - for decades. Credit: Bush Foundation

Learning Sees the Light of Day

Learning logs help bring grantee learning into the light of day, instead of leaving it hidden in the Lektrievers, so that more people can learn about what it really takes to solve problems. Our Community Innovation programs at the Bush Foundation fund and reward the process of innovation–the process of solving problems. Our grantees are addressing wildly different issues: from water quality to historical trauma, from economic development to prison reform. But, when you talk to our grantees, you see that they actually have a lot in common and a lot to learn from one another about effective problem-solving. And beyond our grantee pool, there are countless other organizations that want to engage their communities and work collaboratively to solve problems. Every Community Innovation project is an opportunity for others to learn and the learning logs are a platform to share this learning, making it #OpenForGood.

We also want to honor our grantees’ time. Grantees spend a lot of time preparing grant reports for funders. And, in a best case scenario, a program officer reads the report and sends the grantee a response of some kind before the report is filed away. But, let’s be honest – sometimes even that doesn’t happen. The report process can be a burden on nonprofits and the only party to benefit is the funder. We hope that the learning logs help affirm to our grantees that they’re part of something bigger than themselves - that what they share matters to others who are doing similar work.

We also hear from our grantees that the reports provide a helpful, reflective process, especially when they fill it out together with collaborating partners. One grantee even said she’d like to fill out the report more often than we require to have regular reflection moments with her team!

Learning from the Learning Logs

We only launched the learning logs last year, but we’ve already received some positive feedback. We’ve heard from both funded and non-funded organizations that the learning logs provide inspiration and practical advice for pursuing similar projects. One grantee recently shared a current challenge in their work that directly connected to work we knew another grantee had done and written about in their learning log. Because that knowledge was now out in the open, we were able to point the first grantee to the log, helping advance their work while extending the other grantee’s impact beyond their local community.

Take, for example, these quotes from our grantee reports:

  • The Minnesota Brain Injury Alliance's project worked on finding ways to better serve homeless people with brain injuries. They reflected that, "Taking the opportunity for reflection at various points in the process was very important in working toward innovation. Without reflection, we might not have been open to revising our plan and implementing new possibilities."
  • GROW South Dakota addressed a number of challenges facing rural South Dakota communities. They shared that, “Getting to conversations that matter requires careful preparation in terms of finding good questions and setting good ground rules for how the conversations will take place—making sure all voices are heard, and that people are listening for understanding and not involved in a debate.”
  • The People's Press Project engaged communities of color and disenfranchised communities to create a non-commercial, community-owned, low-powered radio station serving the Fargo-Moorhead area. They learned “quickly that simply inviting community members to a meeting or a training was not a type of outreach that was effective.”

Like many foundations, we decline far more applications than we fund, and our limited funding can only help communities tackle so many problems. Our learning logs are one way to squeeze more impact out of those direct investments. By reading grantee learning logs, we hope, more people will be inspired to solve problems effectively in their communities.

We’re not planning to get rid of the Lektrievers anytime soon – they’re pretty retro cool and efficient. They contain important historical records and are incredibly useful for other kinds of record keeping, beyond grantee documentation. Plus, the floor hasn’t fallen in yet. But, as Bush Foundation Communications Director Dominick Washington put it, now we’re unleashing the knowledge, “getting it out of those cabinets, and to people who can use it.”

--Mandy Ellerton and Molly Matheson Gruen

What Will You #OpenForGood?
July 13, 2017

Janet Camarena is director of transparency initiatives at Foundation Center.  This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.


This week, Foundation Center is launching our new #OpenForGood campaign, designed to encourage better knowledge sharing practices among foundations. Three Foundation Center services (Glasspockets, IssueLab, and GrantCraft) are leveraging their platforms to advance the idea that philanthropy can best live up to its promise of serving the public good by openly and consistently sharing what it’s learning from its work. Glasspockets is featuring advice and insights from “knowledge sharing champions” in philanthropy in an ongoing #OpenForGood blog series; IssueLab has launched a special Results platform allowing users to learn from a collective knowledge base of foundation evaluations; and a GrantCraft Guide on open knowledge practices is in development.

Although this campaign is focused on helping and inspiring foundations to use new and emerging technologies to better collectively learn, it is also in some ways rooted in the history that is Foundation Center’s origin story.


A Short History

Sixty years ago, Foundation Center was established to provide transparency for a field in jeopardy of losing its philanthropic freedom due to McCarthy Era accusations, accusations that gained traction in the absence of any openness whatsoever about foundation priorities, activities, or processes. Not one but two congressional commissions were formed to investigate foundations for alleged “un-American activities.” Out of these congressional inquiries, which spanned several years during the 1950s, came Foundation Center, created to open up a field that had nearly lost everything to its opacity.

“The solution and call to action here is actually a simple one – if you learn something, share something.”

I know our Transparency Talk audience is most likely familiar with this story, since the Glasspockets name stems from it: during his congressional testimony, Carnegie Corporation Chair Russell Leffingwell said, “The foundation should have glass pockets…,” describing a vision for a field so open as to allow anyone to have a look inside the workings and activities of philanthropy. But it seems important to repeat that story now, in the context of new technologies that can facilitate greater openness.

Working Collectively Smarter

Now that we live in a time when most of us walk around with literal glass in our pockets, and use these devices to connect us to the outside world, it is surprising that only 10% of foundations have a website, which means 90% of the field cannot be discovered by that outside world. But having websites would really just bring foundations into the late 20th century; #OpenForGood aims to bring them into the present day by encouraging foundations to openly share their knowledge in the name of working collectively smarter.

What if you could know what others know, rather than constantly replicating experiments and pilots that have already been tried and tested elsewhere? Sadly, the common practice of foundations keeping knowledge in large file cabinets or hard drives only a few can access means that there are no such shortcuts. The solution and call to action here is actually a simple one—if you learn something, share something.

In foundations, learning typically takes the form of evaluation and monitoring, so we are specifically asking foundations to upload all of their published reports from 2015 and 2016 to the new IssueLab: Results platform, so that anyone can build on the lessons they’ve learned, whether inside or outside of their networks. Foundations that upload their published evaluations will receive an #OpenForGood badge to demonstrate their commitment to creating a community of shared learning.

Calls to Action

But #OpenForGood foundations don’t just share evaluations, they also:

  • Open themselves to ideas and lessons learned by others, searching shared repositories, like those at IssueLab, as part of their own research process;
  • Use Glasspockets to compare their foundation's transparency practices to their peers', add their profile, and help encourage openness by sharing their experiences and experiments with transparency here on Transparency Talk; and
  • Use GrantCraft to hear what their colleagues have to say, then add their voice to the conversation. If they have an insight, they share it!

Share Your Photos

“#OpenForGood foundations share their images with us so we can show the collective power of philanthropic openness, not just in words, but images.”

And finally, #OpenForGood foundations share their images with us so we can show the collective power of philanthropic openness, not just in words, but images. We would like to evolve the #OpenForGood campaign over time into a powerful and meaningful way for foundations to open up their work and reach a broader audience than they could on their own. Any campaign about openness and transparency should, after all, use real images rather than staged or stock photography.

So, we invite you to share any high resolution photographs that feature the various dimensions of your foundation's work.  Ideally, we would like to capture images of the good you are doing out in the world, outside of the four walls of your foundation, and of course, we would give appropriate credit to participating foundations and your photographers.  The kinds of images we are seeking include people collaborating in teams, open landscapes, and images that convey the story of your work and who benefits. Let us know if you have images to share that may now benefit from this extended reach and openness framing by contacting openforgood@foundationcenter.org.

What will you #OpenForGood?

--Janet Camarena

GrantAdvisor: A TripAdvisor for Funder Feedback
July 6, 2017

Michelle Greanias is executive director of PEAK Grantmaking. Follow her on Twitter @mgreanias. This post also appears in PEAK Grantmaking’s blog.

For funders, hearing honest input from grantseekers about a foundation’s practices, and getting insights from their experiences as grantee partners, is a critical component of effective grantmaking. Until now, funders have had to initiate the request for feedback via surveys, conversations, and third-party evaluators. Now, a collaboration of funders, nonprofits, and others interested in improving philanthropy is exploring a new approach—GrantAdvisor, which recently launched in California and Minnesota with a goal of eventually reaching the entire country.

GrantAdvisor is like TripAdvisor—it’s a website that allows individuals (in this case, grant applicants, grantees, and others) to share their first-hand experiences with funding organizations, and for funders to have the opportunity to respond publicly. The idea is that just as a traveler would check TripAdvisor when planning a trip, a nonprofit would check GrantAdvisor before applying to a funder. And, just as a hotel monitors TripAdvisor to see what its customers like best and least about it, funders can see how grantees and colleagues are experiencing working with them.

“Listening to unfettered feedback from grantees can help funders build more efficient processes and more effective partnerships, which ultimately increases impact.”

It works by collecting anonymous feedback from grantseekers and grantees. A funder profile needs at least five reviews before it becomes public; until then, the unpublished results are sent to the funder, giving it an opportunity to respond. Once the first five reviews are published, subsequent reviews are posted as they arrive, and the funder can respond at any time. Funders are encouraged to register with GrantAdvisor to receive automatic notices when reviews are posted about their organizations and to respond when new reviews are submitted.
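For readers who think in code, the publishing rule described above amounts to a simple threshold policy. The sketch below is purely illustrative (the class and method names are invented, not GrantAdvisor's actual implementation): reviews accumulate privately, the funder is notified of each one, and the profile flips to public once the fifth review arrives.

```python
# Illustrative sketch of the five-review publishing rule described above.
# All names (FunderProfile, add_review, notify_funder) are hypothetical.

PUBLISH_THRESHOLD = 5  # reviews required before a profile goes public


class FunderProfile:
    def __init__(self, name):
        self.name = name
        self.reviews = []

    @property
    def is_public(self):
        # The profile (and all its reviews) becomes public only once
        # the threshold is reached.
        return len(self.reviews) >= PUBLISH_THRESHOLD

    def add_review(self, text):
        self.reviews.append(text)
        # Even unpublished reviews are shared with the funder, so it can
        # prepare a response before anything becomes visible publicly.
        self.notify_funder(text, public=self.is_public)
        return self.is_public

    def notify_funder(self, text, public):
        pass  # e.g., email the registered funder contact


profile = FunderProfile("Example Foundation")
for i in range(5):
    published = profile.add_review(f"review {i + 1}")
print(profile.is_public)  # the fifth review tips the profile public
```

The point of the threshold is anonymity: with fewer than five reviews, a single reviewer could be too easy to identify, so nothing is published until the pool is large enough.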

As a grants manager, I found this concept a little scary at first—what if the feedback isn’t all positive? How would it affect an organization’s reputation? But the reality is that an organization’s reputation is already affected if grantseekers are having poor experiences with a funder. I want to know about any issues and be able to address them, and I believe most grants managers would agree. Especially since the alternative is allowing problems to build and multiply as bad practice affects more and more grantees.

I also considered this transparent move through another critical lens—aligning values with practices.  In PEAK Grantmaking’s recent research, the top three common values held by grantmakers were collaboration, respect, and integrity.  Being open to feedback, even difficult feedback, is a concrete way to show that grantmakers are “walking the talk” by bringing those values to life through our practices.

Jessamyn Shams-Lau, executive director of Peery Foundation, and Maya Winkelstein, executive director of the Open Road Alliance, both support this work and see four reasons that GrantAdvisor.org is useful to funders:

  1. Feedback: Listening to unfettered feedback from grantees can help funders build more efficient processes and more effective partnerships, which ultimately increases impact.
  2. Benchmarking: With a common set of questions for every foundation, funders can benchmark the effectiveness of their grantmaking practices from the perspective of the grantee experience.
  3. Honest and Accurate Data: When foundations directly solicit feedback (even anonymously), respondents give different answers. Because GrantAdvisor.org collects reviews with or without funder prompting, this unsolicited feedback is the most honest, and honest reviews mean accurate data.
  4. Saving Time: Over time, the hope is that the information shared via GrantAdvisor.org will help potential grantees better self-select which foundations to approach and which are not well aligned. This will result in a higher-quality pipeline for foundations, which saves everyone time and gets funders closer to impact faster.

Given the promise and potential of this new feedback mechanism to strengthen grantmaking practice, I am honored to serve on the GrantAdvisor national advisory committee. I will share more information about this effort as it progresses and look forward to hearing from the profession about this tool, particularly those in California and Minnesota, where GrantAdvisor will be initially active.

--Michelle Greanias

Transparency and the Art of Storytelling
June 28, 2017

Mandy Flores-Witte is Senior Communications Officer for the Kenneth Rainin Foundation. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

Foundations are uniquely poised to support higher-risk projects, and as a result, failures can happen. Recently, I was searching online for examples on how to share the story about a grant that had some unexpected outcomes and found that, while the field strives to be transparent, it can still be a challenge to learn about initiatives that didn’t go as planned.

Communicating about a project doesn’t always have to happen in a scholarly report or detailed analysis, or by hiring experts to produce an evaluation. Sharing what you learned can be as simple as telling a story.

Embracing the Facts and Checking Our Ego

"Sharing stories can help you reach people in a way that statistics cannot."

When the Rainin Foundation funded our first public art installation in San Francisco’s Central Market, a busy neighborhood undergoing a significant economic transformation, we knew it was an experiment with risks. The art installation’s large platform, swing, and seesaw were designed to get neighborhood residents, tech workers, customers of local businesses, and visitors—people spanning the economic spectrum—to interact. There’s no doubt that the project succeeded at bringing people together. But after seven months, it was relocated to a different part of the city because of complaints and safety concerns about the types of people and activities it attracted.

These issues were addressed at several community meetings—meetings that helped build stronger relationships among project stakeholders such as city departments, businesses, artists, local nonprofits, and neighbors. We were disappointed that the project did not go as planned, but we were amazed to see how one public art installation could spark so many conversations and also be a platform for exposing the city’s social issues. We knew we had to share what we learned. Or put another way, we saw an opportunity to be #OpenForGood.

Selecting a Medium for Sharing

The Kenneth Rainin Foundation hosts "Block by Block," a public music and dancing event. Credit: Darryl Smith, Luggage Store Gallery

We considered a formal assessment to communicate our findings, but the format didn’t feel right. We wanted to preserve the stories and the voices of the people involved — whether it was the job fair hosted by a nearby business to help drug dealers get out of the "game," the woman who sought refuge at the installation from domestic violence, or the nonprofit that hosted performances at the site. These stories demonstrated the value of public art.

We decided the most engaging approach would be to have our partners talk candidly about the experience. We selected Medium, an online storytelling platform, to host the series of "as told to" narratives, which we believed would be the most authentic way to hear from our partners. Our intention was to use the series as a tool to start a conversation. And it worked.

Taking Risks is Uncomfortable

The Rainin Foundation intentionally supported art in the public realm — knowing the risks involved — and we thought the discussion of what happened should be public, too. It was uncomfortable to share our missteps publicly, and it made us and our partners vulnerable. In fact, just weeks before publishing the stories, we were cautioned by a trusted colleague about going forward with the piece. The colleague expressed concern it could stir up negative feelings and backfire, harming the reputation of the foundation and our partners.

We took this advice to heart, and we also considered who we are as a foundation. We support cutting-edge ideas to accelerate change. This requires us to test new approaches, challenge the status quo, and be open to failure in both our grantmaking and communications. Taking risks is part of who we are, so we published the series.

Jennifer Rainin, CEO of the Kenneth Rainin Foundation, shares the year's pivotal moments in Turning Points: 2015.

We’ve applied a transparent approach to knowledge-sharing in other ways as well. To accompany one of our annual reports, the foundation created a video with Jen Rainin, our chief executive officer, talking about the foundation’s pivotal moments. Jen read some heartfelt personal letters from the parents of children suffering from Inflammatory Bowel Disease, explaining how their children were benefitting from a diet created by a researcher we support. Talking about scientific research can be challenging and complex, but sharing the letters in this way and capturing Jen’s reaction to them enabled us to humanize our work. The video was widely viewed (it got more hits than the written report), and has inspired us to continue experimenting with how we share our work.

Start Talking About Impact

I encourage foundations to look beyond formal evaluations and data for creative ways to be #OpenForGood and talk about their impact. While reports are important to growth and development, sharing stories can help you reach people in a way that statistics cannot. Explore new channels, platforms and content formats. Keep in mind that videos don’t have to be Oscar-worthy productions, and content doesn’t have to be polished to perfection. There’s something to be gained by encouraging those involved in your funded projects to speak directly and honestly. It creates intimacy and fosters human connections. And it’s hard to elicit those kinds of feelings with newsletters or reports.

What are your stories from the times you’ve tried, failed, and learned?

-- Mandy Flores-Witte

Bringing Knowledge Full Circle: Giving Circles Shape Accessible and Meaningful Philanthropy
June 21, 2017

Laura Arrillaga-Andreessen is a Lecturer in Business Strategy at the Stanford Graduate School of Business, Founder and President of the Laura Arrillaga-Andreessen Foundation, Founder and Board Chairman of Stanford Center on Philanthropy and Civil Society and Founder and Chairman Emeritus of the Silicon Valley Social Venture Fund. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

Nathalie Morton, a resident of Katy, TX, was passionate about giving back to her suburban Houston community. However, she felt her lack of philanthropic experience might hinder her effectiveness.

After initial conversations with her friends and neighbors, she discovered that they shared her desire to give locally and, like her, lacked the financial ability to make the large contributions they associated with high-impact philanthropy. Through online research, Nathalie learned that a giving circle is a collaborative form of giving that allows individuals to pool their resources, knowledge, and ideas to develop a philanthropic strategy and scale their impact. She then discovered the Laura Arrillaga-Andreessen Foundation’s (LAAF.org) Giving Circles Fund (GCF) initiative, an innovative online platform that provides an accessible and empowering experience for a diverse group of philanthropists to practice, grow, and scale their philanthropy by giving collaboratively.

“Philanthropists have an imperative to share the research and rationale behind their philanthropic decisions for the greater good.”

With LAAF support, Nathalie was inspired to create the Cinco Ranch Giving Circle to pool her community members’ resources for the greater good. In its first year, this circle of over 30 families has come together to invest thousands of dollars in local nonprofits — all through donations as modest as $10 per month. Every member found that sharing time, values, wisdom and dollars not only deepened their relationships with one another but also that the measurable impact they could have together far exceeded that which they could achieve alone. This experience empowered Nathalie and her fellow giving circle participants to see themselves as philanthropists and develop their practice in a collaborative environment.

Nathalie’s story is just one of myriad ways that the giving circles model has made strategic philanthropy more accessible. Two years ago, I wrote a post on this same blog about how funders should have not only glass pockets but also “glass skulls,” underscoring that philanthropists have an imperative to share the research and rationale behind their philanthropic decisions for the greater good of all who are connected to the issue.  Or put another way, giving circles can help donors of all sizes become #OpenForGood. GCF allows philanthropists, like Nathalie, to do just that — by empowering givers at any level to make their thinking and decisions about social impact more open and collaborative.

A lack of financial, intellectual, and evaluation resources is a barrier to entry for many people who want to give in a way that matters more. That’s why I’ve committed the past two decades not only to redefining philanthropy — I believe that anyone, regardless of age, background or experience, can be a strategic philanthropist — but also to providing the highest quality, free educational resources (MOOCs, teaching materials, case studies, giving guides) to empower anyone to make the most of whatever it is they have to give. Although most GCF individual monthly contributions are in the double digits, the impact of our giving circles is increasingly significant — our circles have given over $550,000 in general operating support grants to nonprofits nationally. By design, giving circles amplify individual giving by providing built-in mechanisms for more strategic philanthropy, including increasing:

  • Transparency: Giving circles are effective because they are radically transparent about their operations, selection processes, meeting etiquette, voting rules, etc. We have found that giving circles grow and flourish when members understand exactly how the circle works and their role in its success. In addition, all of our circles publish their grants on their GCF pages, so that current and prospective members have insight into each circle’s history, portfolio and impact.
  • Democracy: GCF giving circles have a flat structure, in which everyone has an equal vote — regardless of their respective donations’ size. With LAAF support and a comprehensive portfolio of resources, group leaders facilitate meetings — ranging from casual meetups to knowledge sharing and issue ecosystem mapping gatherings to nonprofit nomination and voting sessions. Even in multigenerational giving circles where members are able to give at different levels, all of their members’ voices, perspectives and opinions hold equal weight.
  • Accessibility: Giving circles require a lower level of financial capital than other philanthropic models. A 2014 study has shown a higher rate of participation in giving circles for Millennials, women and communities of color — reflecting the spectacular pluralism that makes philanthropy beautiful. [1] On our GCF platform, we host multiple college and high school circles that have started teaching their members to carve out philanthropic dollars even on a minimal budget. Additionally, most of our circles are open to the public, and anyone can join and actively participate (yes, that includes you!).
  • Risk-tolerance: With more diverse participants and lower amounts of capital, GCF giving circles are more likely to give to community-based or smaller organizations that typically struggle to secure capital from more established philanthropies, thus meeting a critical social capital market need.

The power of collectively pooling ideas, experiences, and resources, as well as sharing decision-making, inspired me to found the Silicon Valley Social Venture Fund (SV2) in 1998. What began as a small, local giving circle has grown into the second largest venture philanthropy partnership in the world. More importantly, its experiential education model — grounded in the principles listed above — has influenced the philanthropic practice of hundreds of now highly strategic philanthropists who together have invested hundreds of millions of dollars globally. To this day, being a partner-member of the SV2 giving circle continues to inform how I give and evolve my own philanthropic impact. Now, powered by the GCF platform, technology gives all of us the ability to scale our own giving by partnering with like-minded givers locally, nationally, and globally so we can all move toward an #OpenForGood ideal. The mobilization of givers at all levels harnesses the power of the collective and demonstrates that the sum of even the smallest contributions can lead to deeply meaningful social change.

--Laura Arrillaga-Andreessen

____________________________________________________________________________________

[1] https://www.philanthropy.com/article/Giving-Circles-Popular-With/150525

Why Evaluations Are Worth Reading – or Not
June 14, 2017

Rebekah Levin is the Director of Evaluation and Learning for the Robert R. McCormick Foundation, guiding the Foundation in evaluating the impact of its philanthropic giving and its involvement in community issues. She is working both with the Foundation’s grantmaking programs, and also with the parks, gardens, and museums at Cantigny Park. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

Truth in lending statement: I am an evaluator. I believe strongly in the power of excellent evaluations to inform, guide, support, and assess programs, strategies, initiatives, organizations, and movements. I have directed programs that were redesigned to increase their effectiveness, cultural appropriateness, and impact based on evaluation data; helped to design and implement evaluation initiatives here at the foundation that changed the way we understand and do our work; and worked with many foundation colleagues and nonprofits to find ways to make evaluation serve their needs for understanding and improvement.

“I believe strongly in the power of excellent evaluations."

One of the strongest examples that I’ve seen of excellent evaluation within philanthropy came with a child abuse prevention and treatment project. Our foundation funded almost 30 organizations that were using 37 different tools to measure treatment impact, many of which were culturally inappropriate, designed only for initial screenings, or unsuitable for a host of other reasons, and staff from organizations running similar programs had conflicting views about the tools. Foundation program staff wanted to be able to compare program outcomes using uniform evaluation tools and to use that data to make funding, policy, and program recommendations, but they were at a loss as to how to do so in a way that honored the grantees’ knowledge and experience. A new evaluation initiative was funded, combining the development of a "community of practice" for the nonprofits and foundation together to:

  • create a unified set of reporting tools;
  • learn together from the data about how to improve program design and implementation, and the systematic use of data to support staff/program effectiveness;
  • develop a new rubric which the foundation would use to assess programs and proposals; and
  • provide evaluation coaching for all organizations participating in the initiative.

The evaluation initiative was so successful that the participating nonprofits decided to continue their work together beyond the initial scope of the project to improve their own programs and better support the children and families they serve. This “Unified Project Outcomes” article describes the project and the processes it established in far greater detail.

But I have also seen and been a part of evaluations where:

  • the methodology was flawed or weak;
  • the input data were inaccurate and full of gaps;
  • there was limited understanding of the context of the organization;
  • there was no input from relevant participants; and
  • there was no thought to the use of the data/analysis;

so that little to no value came out of them, and the learning that took place as a result was equally inconsequential.

So now to those evaluation reports that often come at the end of a project or foundation initiative, and sometimes have interim and smaller versions throughout their life span. Except to a program officer who has to report to their director about how a contract or foundation strategy was implemented, the changes from the plan that occurred, and the value or impact of an investment or initiative, should anyone bother reading them? From my perch, the answer is a big “Maybe.” What does it take for an evaluation report to be worth my time to read, given the stack of other things sitting here on my desk that I am trying to carve out time to read? A lot.

  1. It has to be an evaluation and not a PR piece. Too often, "evaluation" reports provide a cleaned-up version of what really occurred in a program, with none of the information about how and why an initiative or organization functioned as it did, and with data that all point to its success. This is not to say that initiatives and organizations can’t be successful. But no project or organization works perfectly, and if I don’t see critical concerns, problems, or caveats identified, my guess is that I’m not getting the whole story, and the report’s value to me drops precipitously.
  2. It has to provide relevant context. An evaluation of a multi-organizational collaboration in Illinois that does not place its fiscal challenges within the context of our state's ongoing budget crisis, or of a university-sponsored community-based educational program that ignores the long history of mistrust between the school and the community, or that omits any of the other relevant and critical contextual factors that affect a program, initiative, or organization, is of little value.  Placing an evaluation within a nuanced set of circumstances significantly improves the possibility that its knowledge is transferable to other settings.
  3. It has to be clear and as detailed as possible about the populations being served. Too often, I read evaluations that leave out critical information about who was targeted and who participated or was served.
  4. The evaluation’s methodology must be described in sufficient detail to give me confidence that its design, implementation, and data analysis were appropriate and skillful. I also pay close attention to the extent to which those who were the focus of the evaluation participated in its design, the questions being addressed, the methodology used, and the analysis of the data.
  5. And finally, in order to get read, the evaluation has to be something I know exists, or something I can easily find. If it exists in a repository like IssueLab, my chances of finding it increase significantly.  After all, even if it’s good, it is even better if it is #OpenForGood for others, like me, to learn from it.

When these conditions are met, the answer to the question, “Are evaluations worth reading?” is an unequivocal “YES!” if you value learning from others’ experiences and using that knowledge to inform and guide your own work.

--Rebekah Levin
