Transparency Talk

Category: "Evaluation" (73 posts)

What Will You #OpenForGood?
July 13, 2017

Janet Camarena is director of transparency initiatives at Foundation Center. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up what they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

This week, Foundation Center is launching our new #OpenForGood campaign, designed to encourage better knowledge sharing practices among foundations. Three Foundation Center services—Glasspockets, IssueLab, and GrantCraft—are leveraging their platforms to advance the idea that philanthropy can best live up to its promise of serving the public good by openly and consistently sharing what it’s learning from its work. Glasspockets is featuring advice and insights from “knowledge sharing champions” in philanthropy in an ongoing #OpenForGood blog series; IssueLab has launched a special Results platform allowing users to learn from a collective knowledge base of foundation evaluations; and a forthcoming GrantCraft Guide on open knowledge practices is in development.

Although this campaign is focused on helping and inspiring foundations to use new and emerging technologies to learn collectively, it is also in some ways rooted in Foundation Center’s own origin story.

A Short History

Sixty years ago, Foundation Center was established to provide transparency for a field in jeopardy of losing its philanthropic freedom due to McCarthy Era accusations, accusations that gained traction in the absence of any openness whatsoever about foundation priorities, activities, or processes. Not one, but two congressional commissions were formed to investigate foundations for alleged “un-American activities.” These inquiries, which spanned several years during the 1950s, gave rise to Foundation Center, created to bring openness to a field that had nearly lost everything to its opacity.

“The solution and call to action here is actually a simple one – if you learn something, share something.”

I know our Transparency Talk audience is most likely familiar with this story, since the Glasspockets name stems from this history: during his congressional testimony, Carnegie Corporation Chair Russell Leffingwell said, “The foundation should have glass pockets…,” describing a vision for a field so open that anyone could look inside the workings and activities of philanthropy. But it seems important to repeat that story now, in the context of new technologies that can facilitate greater openness.

Working Collectively Smarter

Now that we live in a time when most of us walk around with literal glass in our pockets, using these devices to connect to the outside world, it is surprising that only 10% of foundations have a website, which means 90% of the field is invisible to anyone looking in from the outside. But having websites would really just bring foundations into the latter days of the 20th century. #OpenForGood aims to bring them into the present day by encouraging foundations to openly share their knowledge in the name of working collectively smarter.

What if you could know what others know, rather than constantly replicating experiments and pilots that have already been tried and tested elsewhere? Sadly, the common practice of foundations keeping knowledge in file cabinets or on hard drives only a few can access means that there are no such shortcuts. The solution and call to action here is actually a simple one—if you learn something, share something.

In foundations, learning typically takes the form of evaluation and monitoring, so we are specifically asking you to upload all of your foundation’s published reports from 2015 and 2016 to the new IssueLab: Results platform, so that anyone can build on the lessons you’ve learned, whether inside or outside of your networks. Foundations that upload their published evaluations will receive an #OpenForGood badge to demonstrate their commitment to creating a community of shared learning.

Calls to Action

But #OpenForGood foundations don’t just share evaluations; they also:

  • Open themselves to ideas and lessons learned by others by searching shared repositories, like those at IssueLab, as part of their own research process;
  • Use Glasspockets to compare their foundation's transparency practices to those of their peers, add their profile, and help encourage openness by sharing their experiences and experiments with transparency here on Transparency Talk; and
  • Use GrantCraft to hear what their colleagues have to say, then add their voice to the conversation. If they have an insight, they share it!

Share Your Photos

“#OpenForGood foundations share their images with us so we can show the collective power of philanthropic openness, not just in words, but images.”

And finally, #OpenForGood foundations share their images with us so we can show the collective power of philanthropic openness, not just in words, but in images. We would like to evolve the #OpenForGood campaign over time into a powerful and meaningful way for foundations to open up your work to a broader audience than you could reach on your own. Any campaign about openness and transparency should, after all, use real images rather than staged or stock photography.

So, we invite you to share any high resolution photographs that feature the various dimensions of your foundation's work.  Ideally, we would like to capture images of the good you are doing out in the world, outside of the four walls of your foundation, and of course, we would give appropriate credit to participating foundations and your photographers.  The kinds of images we are seeking include people collaborating in teams, open landscapes, and images that convey the story of your work and who benefits. Let us know if you have images to share that may now benefit from this extended reach and openness framing by contacting openforgood@foundationcenter.org.

What will you #OpenForGood?

--Janet Camarena

Transparency and the Art of Storytelling
June 28, 2017

Mandy Flores-Witte is Senior Communications Officer for the Kenneth Rainin Foundation. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up what they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

Foundations are uniquely poised to support higher-risk projects, and as a result, failures can happen. Recently, I was searching online for examples on how to share the story about a grant that had some unexpected outcomes and found that, while the field strives to be transparent, it can still be a challenge to learn about initiatives that didn’t go as planned.

Communicating about a project doesn’t always have to happen in a scholarly report or detailed analysis, or by hiring experts to produce an evaluation. Sharing what you learned can be as simple as telling a story.

Embracing the Facts and Checking Our Ego

"Sharing stories can help you reach people in a way that statistics cannot."

When the Rainin Foundation funded our first public art installation in San Francisco’s Central Market, a busy neighborhood undergoing a significant economic transformation, we knew it was an experiment with risks. The art installation’s large platform, swing, and seesaw were designed to get neighborhood residents, tech workers, customers of local businesses, and visitors—people spanning the economic spectrum—to interact. There’s no doubt that the project succeeded at bringing people together. But after seven months, it was relocated to a different part of the city because of complaints and safety concerns about the types of people and activities it attracted.

These issues were addressed at several community meetings—meetings that helped build stronger relationships among project stakeholders such as city departments, businesses, artists, local nonprofits, and neighbors. We were disappointed that the project did not go as planned, but we were amazed to see how one public art installation could spark so many conversations and also be a platform for exposing the city’s social issues. We knew we had to share what we learned. Or put another way, we saw an opportunity to be #OpenForGood.

Selecting a Medium for Sharing

The Kenneth Rainin Foundation hosts "Block by Block," a public music and dancing event. Credit: Darryl Smith, Luggage Store Gallery

We considered a formal assessment to communicate our findings, but the format didn’t feel right. We wanted to preserve the stories and the voices of the people involved — whether it was the job fair hosted by a nearby business to help drug dealers get out of the "game," the woman who sought refuge at the installation from domestic violence, or the nonprofit that hosted performances at the site. These stories demonstrated the value of public art.

We decided the most engaging approach would be to have our partners talk candidly about the experience. We selected Medium, an online storytelling platform, to host the series of "as told to" narratives, which we believed would be the most authentic way to hear from our partners. Our intention was to use the series as a tool to start a conversation. And it worked.

Taking Risks is Uncomfortable

The Rainin Foundation intentionally supported art in the public realm — knowing the risks involved — and we thought the discussion of what happened should be public, too. It was uncomfortable to share our missteps publicly, and it made us and our partners vulnerable. In fact, just weeks before publishing the stories, we were cautioned by a trusted colleague about going forward with the piece. The colleague expressed concern it could stir up negative feelings and backfire, harming the reputation of the foundation and our partners.

We took this advice to heart, and we also considered who we are as a foundation. We support cutting-edge ideas to accelerate change. This requires us to test new approaches, challenge the status quo, and be open to failure in both our grantmaking and communications. Taking risks is part of who we are, so we published the series.

Jennifer Rainin, CEO of the Kenneth Rainin Foundation, shares the year's pivotal moments in Turning Points: 2015.

We’ve applied a transparent approach to knowledge-sharing in other ways as well. To accompany one of our annual reports, the foundation created a video with Jen Rainin, our chief executive officer, talking about the foundation’s pivotal moments. Jen read some heartfelt personal letters from the parents of children suffering from Inflammatory Bowel Disease, explaining how their children were benefitting from a diet created by a researcher we support. Talking about scientific research can be challenging and complex, but sharing the letters in this way and capturing Jen’s reaction to them enabled us to humanize our work. The video was widely viewed (it got more hits than the written report), and has inspired us to continue experimenting with how we share our work.

Start Talking About Impact

I encourage foundations to look beyond formal evaluations and data for creative ways to be #OpenForGood and talk about their impact. While reports are important to growth and development, sharing stories can help you reach people in a way that statistics cannot. Explore new channels, platforms and content formats. Keep in mind that videos don’t have to be Oscar-worthy productions, and content doesn’t have to be polished to perfection. There’s something to be gained by encouraging those involved in your funded projects to speak directly and honestly. It creates intimacy and fosters human connections. And it’s hard to elicit those kinds of feelings with newsletters or reports.

What are your stories from the times you’ve tried, failed, and learned?

-- Mandy Flores-Witte

Why Evaluations Are Worth Reading – or Not
June 14, 2017

Rebekah Levin is the Director of Evaluation and Learning for the Robert R. McCormick Foundation, guiding the Foundation in evaluating the impact of its philanthropic giving and its involvement in community issues. She works both with the Foundation’s grantmaking programs and with the parks, gardens, and museums at Cantigny Park. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up what they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

Truth in lending statement: I am an evaluator. I believe strongly in the power of excellent evaluations to inform, guide, support and assess programs, strategies, initiatives, organizations and movements. I have directed programs that were redesigned to increase their effectiveness, their cultural appropriateness and their impact based on evaluation data, helped to design and implement evaluation initiatives here at the foundation that changed the way that we understand and do our work, and have worked with many foundation colleagues and nonprofits to find ways to make evaluation serve their needs for understanding and improvement.

“I believe strongly in the power of excellent evaluations."

One of the strongest examples that I’ve seen of excellent evaluation within philanthropy came with a child abuse prevention and treatment project. Our foundation funded almost 30 organizations that were using 37 tools to measure the impact of treatment, many of which were culturally inappropriate, designed for initial screenings, or unsuitable for a host of other reasons, and staff from these organizations running similar programs had conflicting views about the tools. Foundation program staff wanted to be able to compare program outcomes using uniform evaluation tools and to use that data to make funding, policy, and program recommendations, but they were at a loss as to how to do so in a way that honored the grantees’ knowledge and experience. A new evaluation initiative was funded, combining the development of a "community of practice" for the nonprofits and foundation together to:

  • create a unified set of reporting tools;
  • learn together from the data about how to improve program design and implementation, and the systematic use of data to support staff/program effectiveness;
  • develop a new rubric which the foundation would use to assess programs and proposals; and
  • provide evaluation coaching for all organizations participating in the initiative.

The evaluation initiative was so successful that the nonprofits participating decided to continue their work together beyond the initial scope of the project to improve their own programs and better support the children and families that they are serving. This “Unified Project Outcomes” article describes the project and established processes in far greater detail.

But I have also seen and been a part of evaluations where:

  • the methodology was flawed or weak;
  • the input data were inaccurate and full of gaps;
  • there was limited understanding of the context of the organization;
  • there was no input from relevant participants; and
  • there was no thought to the use of the data/analysis;

so that little to no value came out of them, and the learning that took place as a result was equally inconsequential.

So now to those evaluation reports that often come at the end of a project or foundation initiative, and sometimes have interim and smaller versions throughout their life span.  Except to a program officer who has to report to their director about how a contract or foundation strategy was implemented, the changes from the plan that occurred, and the value or impact of an investment or initiative, should anyone bother reading them?  From my perch, the answer is a big “Maybe.”  What does it take for an evaluation report to be worth my time to read, given the stack of other things sitting here on my desk that I am trying to carve out time to read?  A lot.

  1. It has to be an evaluation and not a PR piece. Too often, "evaluation" reports provide a cleaned up version of what really occurred in a program, with none of the information about how and why an initiative or organization functioned as it did, and the data all point to its success.  This is not to say that initiatives/organizations can’t be successful.  But no project or organization works perfectly, and if I don’t see critical concerns/problems/caveats identified, my guess is that I’m not getting the whole story, and its value to me drops precipitously.
  2. It has to provide relevant context. To read an evaluation of a multi-organizational collaboration in Illinois without placing its fiscal challenges within the context of our state’s ongoing budget crisis, or to read about a university-sponsored community-based educational program without knowing the long history of mistrust between the school and the community, or without any of the other relevant and critical contextual pieces that affect a program, initiative, or organization, makes that evaluation of little value.  Placing findings within a nuanced set of circumstances significantly improves the possibility that the knowledge is transferable to other settings.
  3. It has to be clear and as detailed as possible about the populations that it is serving. Too often, I read evaluations that leave out critical information about who they were targeting and who participated or was served. 
  4. The evaluation’s methodology must be described with sufficient detail so that I have confidence that it used an appropriate and skillful approach to its design and implementation as well as the analysis of the data. I also pay great attention to what extent those who were the focus of the evaluation participated in the evaluation’s design, the questions being addressed, the methodology being used, and the analysis of the data.
  5. And finally, in order to get read, the evaluation has to be something I know exists, or something I can easily find. If it exists in a repository like IssueLab, my chances of finding it increase significantly.  After all, even if it’s good, it is even better if it is #OpenForGood for others, like me, to learn from it.

When these conditions are met, the answer to the question, “Are evaluations worth reading?” is an unequivocal “YES!,” if you value learning from others’ experiences and using that knowledge to inform and guide your own work.

--Rebekah Levin

The Real World is Messy. How Do You Know Your Foundation Is Making an Impact?
June 7, 2017

Aaron Lester is an experienced writer and editor in the nonprofit space. In his role as content marketing manager at Fluxx, Aaron’s goal is to collect and share meaningful stories from the world of philanthropy. This post is part of the Glasspockets’ #OpenForGood series done in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up what they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood. View more posts in the series.

In a perfect world, foundations could learn from every mistake, build on every new piece of knowledge, and know with certainty what impact every effort has made.

Of course, we’re not in that world. We’re in the real, fast-paced world of nonprofits where messy human needs and unpredictable natural and political forces necessitate a more flexible course. In that world, it’s more challenging to measure the effects of our grantmaking efforts and learn from them. It turns out knowledge sharing is a tough nut to crack.

And without meaningful knowledge sharing, we’re left struggling to understand the philanthropic sector’s true impact — positive or negative — within a single organization or across many. The solution is a more transparent sector that is willing to share data — quantitative as well as qualitative — that tells stories of wins and losses, successes and failures—in other words, a sector that is #OpenForGood. But, of course, this is much easier said than done.

My role at Fluxx creates many opportunities for me to talk with others in the field and share stories the philanthropic sector can learn from. I recently had the chance to speak with grantmakers on this very issue.

Measuring Whose Success?

Even within a foundation, it can be difficult to truly understand the impact of a grant or other social investment.

“Lose the mindset defined by a fear of failure; instead, embrace one that drives you to search for opportunity.”

As Adriana Jiménez, director of grants management at the ASPCA and former grants manager at the Surdna Foundation, explains, it’s difficult for foundations to prove conclusively that it’s their slice of the grantmaking that has made a meaningful difference in the community. “When you collect grant-by-grant data, it doesn’t always roll up to your foundation’s goals or even your grant’s goals.”

The issue is that there’s no standardized way to measure grantmaking data, and it’s an inherently difficult task because there are different levels of assessment (grant, cluster, program, foundation, etc.), there is similar work being done in different contexts, and a lot of data is only available in narrative form.

One way to combat these challenges is to make sure your foundation is transparent and in agreement around shared goals with grantees from the start of the relationship. Being too prescriptive or attempting to standardize the way your grantees work will never create the results you’re after. Part of this early alignment includes developing clear, measurable goals together and addressing how the knowledge you’re gaining can and should translate into improvements in performance.

A grantee should never have to alter their goals or objectives just to receive funding. That sends the wrong message, and it provides the wrong incentive for grantees to participate in knowledge-sharing activities. But when you work as partners from the start and provide space for grantees to collaborate on strategy, a stronger partnership will form, and the stories your data tells will begin to be much more meaningful.

The Many Languages of Human Kindness

If sharing knowledge is difficult within one organization, it’s even more challenging across organizations.

Jiménez points out that a major challenge is the complexity of foundations, as they rely on different taxonomies and technologies and discuss similar issues using different language. Every foundation’s uniqueness is, in its day-to-day work, its strength, but in terms of big-picture learning across organizations, it’s a hurdle.

Producing cohesive, comprehensive data out of diverse, fragmented information across multiple organizations is a huge challenge. Mining the information and tracking it in an ongoing way is another obstacle made more difficult because the results are often more anecdotal than they are purely quantitative. And when this information is spread out over so many regions and focus areas, the types of interventions vary so widely that meaningful knowledge sharing becomes untenable.

Gwyneth Tripp, grants manager at Blue Shield of California Foundation, also cites a capacity issue. Most foundations don’t have designated roles for gathering, tracking, organizing, and exchanging shareable data, so they resort to asking staff who already have their own sizable to-do lists. Tripp says:

“They have an interest and a desire [in knowledge sharing], but also a real challenge of balancing the everyday needs, the strategic goals, the relationships with grantees, and then adding that layer of ‘let’s learn and think about it all’ is really tough to get in.

“Also, becoming more transparent about the way you work, including sharing successes as well as failures, can open your foundation up to scrutiny. This can be uncomfortable. But it’s important to delineate between ‘failure’ and ‘opportunity to learn and improve.’”

Sparking Change

But foundations know (possibly better than anyone else) that obstacles don’t make accomplishing a goal impossible.

And this goal’s rewards are great: When foundations can achieve effective knowledge sharing, they’ll have better insights into what other funding is available for the grantees within the issues they are tackling, who is being supported, which experiments are worth replicating, and where there are both gaps and opportunities. And with those insights, foundations gain the ability to iterate and improve upon their operations, even leading to stronger, more strategic collaborations and partnerships.

Creating and promoting this kind of accessible, useful knowledge sharing starts with a few steps:

  1. Begin from within. Tracking the impact of your grantmaking efforts and sharing those findings with the rest of the sector requires organizations to look internally first. Start by building a knowledge management implementation plan that involves every stakeholder, from internal teams to grantee partners to board executives.
  2. Determine and prioritize technology needs. Improvements in technology — specifically cloud-based technology — are part of what’s driving the demand for data on philanthropic impact in the first place. Your grants management system needs to provide integrated efficiency and accessibility if you want to motivate staff participation and generate usable insights from the data you’re collecting. Is your software streamlining your efforts, or is it only complicating them?
  3. Change your mindset. Knowledge sharing can be intimidating, but it doesn’t have to be. Lose the mindset defined by a fear of failure; instead, embrace one that drives you to search for opportunity. Promote a stronger culture of knowledge sharing across the sector by sharing your organizational practices and lessons learned. Uncover opportunities to collect data and share information across organizations.

There’s no denying that knowledge sharing benefits foundations everywhere, along with the programs they fund. Don’t let the challenges hold you back from aiming for educational, shareable data — you have too much to gain not to pursue that goal.  What will you #OpenForGood?

--Aaron Lester 

Warren Buffett Has Some Excellent Advice for Foundations That They Probably Won't Take
March 16, 2017

(Marc Gunther writes about nonprofits, foundations, business and sustainability. He also writes for NonprofitChronicles.com. This post also appears in Nonprofit Chronicles.)

This post is part of a new Transparency Talk series devoted to putting the spotlight on the importance of the 990-PF, the informational tax form that foundations must annually file. The series will explore the implications of the open 990; how journalists and researchers use the 990-PF to understand philanthropy; and its role, limitations, and potential as a communications tool.

With a collective $800 billion in assets under management, America’s big foundations spend vast sums of money to buy investment advice. They’re getting little, if anything, of value in return.

All evidence indicates that, taken together, their own investment offices and the Wall Street banks, hedge funds, private equity firms, and consultants they hire deliver investment returns that lag behind market indexes.

These foundations would do better to call an 800 number at Vanguard or Schwab and buy a diversified set of low-cost index funds.

So, at least, argues Warren Buffett, one of the great investors of our time. In his latest letter to investors in Berkshire Hathaway, Buffett writes:

When trillions of dollars are managed by Wall Streeters charging high fees, it will usually be the managers who reap outsized profits, not the clients. Both large and small investors should stick with low-cost index funds.

The limited data available about foundation endowments bears him out.

It’s not possible to prove that Buffett’s advice would enable foundations to improve their returns–and thus have more money to devote to their grant-making. Most foundations don’t disclose the financial performance of their endowments.

Of the 10 largest grant-making foundations in the US, only two — the MacArthur Foundation and the W.K. Kellogg Foundation — publish investment returns on their websites. MacArthur’s disclosure is exemplary. (So is its performance, perhaps not coincidentally.) I emailed all ten and got nowhere with the rest.

The best evidence about how foundations are managing their endowments comes from an annual study published by the Council on Foundations and Commonfund, a nonprofit asset management fund that serves foundations, colleges and nonprofits. Their most recent survey, which covers the 10-year period from 2006 through 2015, found that returns averaged 5.5 percent per year for 130 private foundations and 5.2 percent per year for 98 community foundations.

Further insight can be gleaned from Cambridge Associates, an investment firm whose clients include foundations, universities and wealthy families. Cambridge tracked the performance of 445 of its endowment and foundation clients and found they generated average annualized returns of 4.97 percent for the 10-year period ending June 30, 2016. (These returns should not be considered Cambridge’s performance track record, a spokesman told me.)

High pay for money managers does not necessarily translate into superior returns for foundations.

By contrast, Vanguard’s model portfolio for institutional investors, a mix of passively invested index funds, with 70 percent invested in stocks and the rest in fixed income securities, delivered 5.81 percent over the 10-year period through 2015, and 6.1 percent for the 10-year period ending on June 30, 2016, according to Chris Philips, head of institutional advisory services at Vanguard. (All figures for investment returns are net of fees, meaning fees are taken into account.)

That may appear to be a small edge for Vanguard. But when institutions are investing hundreds of millions, or billions of dollars, small gains compounded over time add up to big money. Money, again, that could be better spent on programs.
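To put rough numbers on that compounding claim, here is a back-of-the-envelope sketch using the two net-of-fees returns cited above (the survey average of 5.5 percent and Vanguard's 5.81 percent); the $1 billion starting endowment is a hypothetical chosen purely for illustration:

```python
# Back-of-the-envelope: compound a hypothetical $1B endowment for 10 years
# at the two net-of-fees annual returns cited in the surrounding text.
endowment = 1_000_000_000  # hypothetical starting value, in dollars

avg_foundation = endowment * (1 + 0.055) ** 10   # 5.5%: COF/Commonfund survey average
vanguard_model = endowment * (1 + 0.0581) ** 10  # 5.81%: Vanguard model portfolio

gap = vanguard_model - avg_foundation
print(f"Extra money after a decade: ${gap:,.0f}")  # roughly $50 million
```

A 0.31-point annual edge looks trivial, but compounded over a decade on a billion-dollar endowment it amounts to roughly $50 million that could have gone to grantmaking instead.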

Actually, it’s worse, because the figures reported by the Council on Foundations and Commonfund do not include the salaries that foundations pay to their in-house investment offices. The chief investment officers are often the highest-paid executives at foundations, and their deputies do well, too.

Why, then, do foundations continue to pay high salaries and high fees in the pursuit of market-beating returns, when so many fail?

They should know better. It’s no secret that passive approaches to investing outperform most active money managers, once fees and trading costs are taken into account. In 2005, Buffett wrote that “active investment management by professionals – in aggregate – would over a period of years underperform the returns achieved by rank amateurs who simply sat still.”

Taking aim at hedge funds, with their high expenses, Buffett then offered to bet $500,000 that no investment professional “could select a set of at least five hedge funds – wildly-popular and high-fee investing vehicles – that would over an extended period match the performance of an unmanaged S&P-500 index fund charging only token fees.”

Only one — one! — investment pro took the bet. Not surprisingly, Buffett will win the bet, by a very comfortable margin. And yet foundations and those who advise them are pouring more, not less, money into hedge funds.

Everyone Wants to Be Special

Buffett has a theory about why those in charge of foundations entrust their endowments to active money managers and hedge funds:

The wealthy are accustomed to feeling that it is their lot in life to get the best food, schooling, entertainment, housing, plastic surgery, sports ticket, you name it. Their money, they feel, should buy them something superior compared to what the masses receive.

In many aspects of life, indeed, wealth does command top-grade products or services. For that reason, the financial “elites” – wealthy individuals, pension funds, college endowments and the like – have great trouble meekly signing up for a financial product or service that is available as well to people investing only a few thousand dollars.

Vanguard’s Chris Philips has a similar theory:

There is this perception that by going index you are ceding that you do not have any skill and you are going to be average in the marketplace. That doesn’t feel good. As humans, we want to be good. We don’t want to be average.

Foundation executives may be especially prone to believe that they deserve better than “average” investment advice. By dint of their position, they are often told that they are wiser, funnier and better-looking than average.

Jeffrey Hooke, a senior lecturer at the Johns Hopkins Carey Business School and a former investment banker, says the trustees of foundations who serve on their investment committees are likely to favor active asset management.

The people on the boards tend to be in the business. They’re private equity executives, they’re stockbrokers or they’re in hedge funds. They’re totally biased in favor of active managing because that’s how they’ve made their living.

Hooke has researched public pension funds and found that they, too, underperform the markets by choosing active managers. Investment officers don’t want to talk themselves out of a job, he says:

They are never going to walk into the boardroom and say, ‘Hey, it just isn’t working.’ They’ve got wives, they’ve got mortgages, they’ve got kids.

These investment officers aspire to be the rare bird who can consistently outperform the market, like David Swensen, the storied portfolio manager at Yale. (I profiled Swensen in 2005 for the Yale Alumni Magazine.) But Swensen, like Buffett, says that identifying the best asset managers is exceedingly difficult. In a 2009 interview, Swensen told me that investors who rely on “low-cost, passively managed index funds” and rebalance regularly will “end up beating the overwhelming majority of participants in the financial markets.” Buffett has said that in the course of his lifetime he has identified only about 10 investment professionals who can beat the markets over time; there are about 87,000 foundations in the US.

Pay for Performance?

In fairness, the foundation trustees and investment officers labor under a peculiar burden. They are obligated by law to give away five percent of their assets every year. So if they want to exist in perpetuity, they must earn in excess of five percent on their investments, which is a tall order. Of course, no foundation is entitled to live forever. If some spend down their assets, well, new foundations come along all the time.
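To see why “in excess of five percent” is a tall order, consider a rough simulation of an endowment that distributes 5 percent of assets each year (the 2 percent inflation rate and the candidate return figures are illustrative assumptions, not numbers from this article):

```python
# Simulate an endowment that distributes 5% of assets every year, then
# deflate by an assumed 2% inflation rate to get real purchasing power.
# All rates here are illustrative assumptions.
def real_value_after(years: int, net_return: float,
                     payout: float = 0.05, inflation: float = 0.02) -> float:
    """Real endowment value per $1 of starting assets after `years`."""
    nominal = 1.0
    for _ in range(years):
        nominal -= payout * nominal     # the 5% distribution requirement
        nominal *= 1 + net_return       # investment growth on what remains
    return nominal / (1 + inflation) ** years

for r in (0.05, 0.06, 0.075):
    print(f"net return {r:.1%}: "
          f"{real_value_after(25, r):.2f}x real value after 25 years")
```

Under these assumptions, a 5 percent net return leaves the endowment worth only about 57 cents on the dollar in real terms after 25 years; something closer to 7.5 percent is needed just to tread water.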

Most foundations, though, aim to survive in perpetuity, and chase superior returns, at a cost. Consider, for example, the Ford Foundation, which, with assets of $12.2 billion (as of 12-31-2015), is the second-biggest foundation in the US, behind the behemoth Bill & Melinda Gates Foundation.

In 2015, the Ford Foundation’s highest-paid employee was vice president and chief investment officer Eric Doppstadt, who was paid $2.1 million. He was followed by director of public investment Michael Walden at $1,017,061, director of private equity Sherif Nahas at $972,362, and director of hedge funds William Artemenko at $955,479. All were paid more than Darren Walker, Ford’s president, whose compensation was $788,542, according to Ford’s Form 990-PF filing.

Then there were Ford’s outside asset managers. In 2015, they included Silchester International Equity Management, which was paid $2.2 million; Wellington Energy Investment Advisor, which collected just under $2 million; and Eagle Capital Management, which got $1 million.

How did they perform? “Sharing the investment returns is outside of our policy,” says Joshua Cinelli, Ford’s chief of media relations, by e-mail.

In this, Ford is typical. At the David and Lucile Packard Foundation, chief investment officer John Moehling was paid $2.3 million, and three other investment professionals earned more than $1 million. All were better paid than Packard’s chief executive, Carol Larson. Packard, too, will not disclose its returns.

The Robert Wood Johnson Foundation, William and Flora Hewlett Foundation, Gordon and Betty Moore Foundation and MacArthur Foundation all pay their chief investment officer more than their top executives. The argument for doing so, presumably, is that these investment professionals could make as much money or more in the private sector.

But, again, with the exception of MacArthur and Kellogg, the foundations won’t say whether their investment officers and their outside asset managers are delivering market-beating performance.

What we do know is that high pay for money managers does not necessarily translate into superior returns. Interestingly, when pension-fund critic Jeff Hooke analyzed data from 33 state pension systems, he found that the 10 states with the highest fee ratios achieved lower return rates than those that spent the least.

Transparency and Accountability

Foundation endowment returns could probably be calculated by going through years of IRS filings. Unfortunately, the Form 990-PF for foundations is “seriously flawed,” “unwieldy” and “unintelligible to the many lay readers, including trustees and journalists,” according to longtime foundation executive John Craig.

In a 2011 blog post for the Foundation Center, Craig lamented the fact that investment performance is not solicited on the Form 990:

Since their endowments are the only source of income for most foundations and effective endowment management is a challenge for many foundations, this is an egregious omission—equivalent to not requiring for-profit corporations to report their earnings on tax returns and financial statements.

I asked Brad Smith, president of the Foundation Center, which promotes transparency through its laudable Glasspockets initiative, why foundations won’t disclose their investment returns. “They don’t report it because it’s not required,” he said, “to state the obvious.”

Smith went on to say that foundations may be “worried about perverse incentives that could be created by a ranking.” If foundations compete to generate the best investment returns, he explained, they could feel pressured to take on risky investments. During the Great Recession, some foundations that pursued aggressive investment strategies had to sell highly leveraged, illiquid investments at a loss.

Still, I wonder if there’s a simpler explanation for the lack of disclosure: Foundation staff and trustees don’t want to be held accountable for mediocre results.

If MacArthur and Kellogg are exemplary in their disclosure — Kellogg kindly arranged a phone interview with Joel Wittenberg, its chief investment officer —  the Gates and Bloomberg foundations are unusually opaque. Gates Foundation money is housed in a separate trust and is reportedly managed by Cascade Investments, which also manages Gates’ personal fortune. (Buffett is a trustee of the Gates Foundation, and presumably keeps an eye on the endowment.) Bloomberg’s philanthropic and personal wealth are reported to be managed by Willett Advisors. Cascade and Willett have access to some of the world’s top money managers, and may have a shot at outperforming the averages.

This isn’t a new issue. Testifying before Congress in 1952, Russell Leffingwell, the chairman of the board of the Carnegie Corporation, famously said:

We publish our investments. We have to be very careful about our investments because we know that others, some others, take investment advice from our list of investments. Well, that is all right. We think the foundation should have glass pockets.

The bottom line: America’s foundations, as a group, are taking money that could be devoted to their programs – to alleviate global poverty, to improve education, to support medical research or promote the arts — and transferring it to wealthy asset managers. They should know better, and they do.

--Marc Gunther

From Good Idea to Problem Solved: Funding the Innovation Means Funding the Process
February 8, 2017

(Mandy Ellerton and Molly Matheson Gruen joined the [Archibald] Bush Foundation in 2011, where they created and now direct the Foundation's Community Innovation programs. The programs allow communities to develop and test new solutions to community challenges, using approaches that are collaborative and inclusive of people who are most directly affected by the problem.)

This post is part of the Funding Innovation series, produced by Foundation Center's Glasspockets and GrantCraft, and underwritten by the Vodafone Americas Foundation. The series explores funding practices and trends at the intersection of problem-solving, technology, and design. Please contribute your comments on each post and share the series using #fundinginnovation. View more posts in the series.

Mandy Ellerton

Molly Matheson Gruen

Good ideas for solving our toughest social problems come from a variety of places. But, we need more than just good ideas – we need transparent and thoughtful ways to get community buy-in and a wide variety of perspectives to make those ideas a reality.

For a cautionary case in point, take the origin story (later chronicled in the book The Prize) of the ill-fated attempt to transform the failing Newark public schools. A prominent governor, mayor, and, later, an ultra-wealthy tech mogul hatched the idea to radically transform the schools in the back of a chauffeured S.U.V. Commentary suggests that these leaders did not consult community stakeholders about the plan, only half-heartedly seeking community input much later in the process. As one community member put it to these leaders, "You have forced your plans on the Newark community, without the measure of stakeholder input that anyone, lay or professional, would consider adequate or respectful." To some observers, it's no surprise that, without initial community buy-in or a transparent process, and over $100 million later, the plan ultimately crashed and burned.

But, let's not throw stones at glass houses. The Newark example is indicative of a larger pattern especially familiar to those of us in the field of philanthropy. We've learned that lesson the hard way, too. Many of us have been involved in (well-intentioned) backroom and ivory tower deals with prominent community leaders to magically fix community problems with some "good ideas." Sometimes, those ideas work. But a lot of times, they don't. And unfortunately, we often chalk these failures up to innovation simply being a risky endeavor, comparing our social innovation failure rates to the oft-discussed (maybe even enshrined?) business or entrepreneurship failure rates. What's more, we almost never actively, sincerely discuss and learn from these failed endeavors.

But social innovation failure often comes at a cost, leaving behind disillusioned community members, bad outcomes for some of our most vulnerable, and lots and lots of wasted dollars that could have gone to something better. Take the Newark example: the failed attempt to transform the schools created massive civic disruption, re-awakened historic hurts and injustice and will likely leave community members even more skeptical of any future efforts to improve the schools.

Through our work at the Bush Foundation, we've learned that truly good ideas–those that will really have a sustainable impact–are often created in deep partnership and trust between organizations, leaders, and–most critically–the people most affected by a problem.

But, that kind of deep community partnership and transparency takes a lot of work, time, and attention. And, most everything that takes a lot of work takes some funding.

Community-innovation

That's why we created our Community Innovation programs at the Bush Foundation in 2013: to fund and reward the process of innovation–the process of solving problems. While the emphasis in innovation funding is often on "early stage" organizations or projects, we joke that we are a "pre-early" funder or that we fund "civic R & D." We provide funding for organizations to figure out what problem to address in the first place, to get a better understanding of the problem, to generate ideas to solve the problem, and then, after all that work (and maybe having to revisit some of the earlier stages along the way), the organization might be ready to test or implement a good idea. See how we depict that "pre-early" problem solving process here.

Most importantly, throughout the innovation or problem-solving process, we also look for particular values to drive the organization's approach: Is the organization genuinely and deeply engaging the people most affected by the problem? Is the organization working in deep partnership with other organizations and leaders? Is the organization making the most of existing resources?

Let's bring it to life. Here are three examples of the 150+ organizations we've funded to engage in a process to solve problems in their communities:

  • World Wildlife Fund's Northern Great Plains initiative is bringing ranchers, conservationists, oil business developers, and government officials together to create a vision for the future of North Dakota's badlands and a shared energy development plan that protects this important landscape.
  • PACT for Families Collaborative engaged truant youth, their parents, education staff, and service providers to understand barriers to school attendance and redesign services and test strategies for positive, sustainable solutions to truancy in western Minnesota.
  • Pillsbury United Communities is using human-centered design processes to engage North Minneapolis residents in addressing their neighborhood's food desert and creating North Market: a new grocery store, managed in partnership with a local health clinic, that will also house a clinic, pharmacy, and wellness education center.

"We've learned that truly good ideas–those that will really have a sustainable impact–are often created in deep partnership and trust between organizations, leaders, and...the people most affected by a problem."

Our grantees and partners are teaching us a lot about what it takes for communities to solve problems. One of the biggest things we've learned is that collaborative projects often take far more time than anyone initially expects, for a variety of reasons. Over the past few years nearly a third of our grantees have requested more time to complete their grants, which we have readily agreed to.

For example, the Northfield Promise Initiative is a highly-collaborative, cross-sector, community-wide effort to address education disparities in Northfield, Minnesota. The initiative utilizes action teams composed of diverse stakeholders to drive its work. Early on in the project they decided to stagger the rollout of the teams rather than launch them all at once. That allowed them to take more care in composing and launching each team and allowed interested stakeholders to engage in multiple teams. In addition, later teams could learn from the successes and challenges of the earlier ones. As the grantee put it, "Partners felt strongly that it is important to give the process this extra time to ensure that all the different community voices and insights have been included (thereby maintaining this as a community-owned initiative)." We gladly extended their grant term from two years to four years so that they could spend the time they believed necessary to lead the problem-solving effort thoughtfully and inclusively.

For more helpful examples, here are a couple of resources to explore:

  • One of our innovation programs is an award for organizations that have a track record of solving problems with their communities, called the Bush Prize for Community Innovation. Together with our evaluation partner Wilder Research, we created a report about some of our Bush Prize winners that digs into specific conditions, methods and techniques that appear to help organizations innovate.
  • We believe storytelling and transparency inspire innovation. Our grantees openly share what they're learning as they pursue solutions to community problems in grantee learning logs. The learning logs also include references to specific techniques and methods the organizations use to pursue innovation.

As funders, we also have a role in the innovation process that goes beyond writing the check. By virtue of our relationships and portfolios, we have a bird's eye view of the field. By opening up what we are learning, we hope to build trust with our stakeholders and help others build on our work, hopefully leading to more and better future innovations.

-- Mandy Ellerton and Molly Matheson Gruen

Building Communities of Practice in Crop Research
November 22, 2016

(Jane Maland Cady is International Program Director at The McKnight Foundation. This post first ran on The McKnight Foundation's blog.)

To spur change at the systems level, it is critical to involve many individuals and institutions that work within that system, facilitating the sharing of information and knowledge. This has been a core belief of McKnight’s Collaborative Crop Research Program (CCRP) for many years. Our assessment, however, is that cross-sector collaboration, learning, and networking have historically been sorely lacking in agriculture research and development systems across the world.

Testing a New Model

Twelve years ago, CCRP sought to change this by testing out a community of practice (CoP) model in the Andes region of South America. Community of practice, a term that has come into fashion over the last few years, refers to a group of people with a common concern or passion who interact regularly to improve their work. In the case of CCRP, the cohort of Andes grantees was united by geographic region and common interest and experience in addressing the stark hunger and poverty issues in their communities. As the model began to prove effective in strengthening capacity at regional, institutional, project, and individual levels, CCRP expanded the model to our other regions.

Today, all four CCRP regions exchange ideas within their communities of practice and with each other, working to spark new thinking and innovation in agriculture research and development. Over time, the communities have grown their skills and approaches, particularly around farmer-centered research and agroecological intensification (AEI) — or, finding food solutions that balance the needs of the earth and its people.

Kandela, the president of a women’s group belonging to the farmer federation FUMA Gaskiya (Niger) is marking her preferred pearl millet panicles during participatory pearl millet selection. (Photo credit: Bettina Haussmann).


Ways to Improve Networking, Learning, and Collaboration

With the success of The McKnight Foundation's four communities of practice, the foundation has identified several methods that help achieve success in networking, learning, and collective action. First, each community of practice is anchored by a regional team that manages CCRP’s grantmaking processes and facilitates ongoing support and feedback loops. These include reviewing concept notes and proposals, planning inception meetings, cross-project meetings, and exchanges, initiating mid-year reviews, and providing feedback on annual reports and project progress. It is a resource-intensive model, to be sure. But the foundation hears consistently from grantees that this structure of regular interactions builds skills and relationships with project teams and other partners, serving to strengthen the capacity of the larger CoP.

Another important way that CCRP builds an effective community of practice is by tailoring its priorities and activities based on each region’s context. A combination of efforts help promote a CoP’s vibrancy within the crop program, including:

  • A grantmaking portfolio driven by regional needs and opportunities
  • In-person and virtual trainings and workshops to explore particular thematic areas, strengthen research methods, and build particular sets of skills
  • Annual facilitated CoP convenings that typically involve scientific presentations, interactive or modeling exercises, peer exchange and critical feedback, collective reflection / idea generation, and immersive field visits
  • Targeted technical assistance based on emergent needs, both grantee-led and initiated by the regional team, as well as linking with program-wide technical expertise and support
  • Cultivating an evaluative culture that supports 1) integrated monitoring, evaluation, and planning; 2) learning regarding developmental-evaluation and adaptive action approaches; 3) using and incorporating foundational principles that guide the work and program as a whole; and 4) building participatory evaluation skills
  • Other resources and tools such as handbooks, guides, videos, checklists and templates, sensors, database access, and GIS technology provision
  • Ongoing formal and informal peer learning
  • Support and collaboration in the CoP for leadership development, mentorships, conference planning, peer review for publications, and other kinds of professional and academic development


The foundation's crop research program first implemented the community of practice model in the Andes 12 years ago and in Africa 10 years ago. Today, these seasoned CoPs continue to lead to new innovations and inspiration. The foundation is excited and proud to celebrate the 10th anniversaries of both the Southern Africa and West Africa communities of practice this year. On the occasion of these anniversaries, each CoP recently produced collections of research and insights gathered from their respective areas of work. We invite you to review them and learn more.

--Jane Maland Cady

If An Evaluation Was Commissioned But Never Shared, Did It Really Exist?
November 15, 2016

(Fay Twersky is director of the Effective Philanthropy Group at The William and Flora Hewlett Foundation. Follow her on Twitter at @FayDTwersky. This post first ran on Center for Effective Philanthropy's blog.)

There are a lot of interesting data in the recent Benchmarking Foundation Evaluation Practices report, co-authored by the Center for Effective Philanthropy and the Center for Evaluation Innovation. There is useful, practical information on how foundations structure their evaluation operations, how much they spend on evaluation, the kinds of evaluations they commission, and so forth. Great stuff.

But some findings give me pause. Perhaps the most sobering statistic in the report is that very few foundations consistently share their evaluations with their grantees, other foundations, or the public. Only 28 percent share their evaluations “quite a bit or a lot” with their grantees.  And that drops to 17 percent for sharing with other foundations, and only 14 percent for sharing with the general public.

“We have a moral imperative to share what we are learning from the evaluations we commission so that others may learn from our successes and mistakes.”

Really? Why are we not sharing the lessons from the evaluations we commission?

It feels wrong.

It seems to me that we have a moral imperative to share what we are learning from the evaluations we commission so that others may learn — both from our successes and mistakes. 

After all, why would we not share?

Are we worried about our stock price falling? No. We don’t have a stock price.

Are we worried about causing undue harm to specific organizations? There are ways to share key lessons from evaluations without naming specific organizations.

Do we believe that others don’t care about our evaluations or our findings? Time and again, foundation leaders list assessment and evaluation as high on the list of things they need to get better at.

Are reports too technical? That can be a challenge, but again, there are ways to share an executive summary — or commission an easy-to-read summary — that is not a heavy, overly technical report.

So, the main question is, why commission an evaluation if you are going to keep the lessons all to yourself? Is that charitable?

--Fay Twersky 

The Foundation Transparency Challenge
November 2, 2016

I often get asked which foundations are the most transparent, closely followed by the more skeptical line of questioning about whether the field of philanthropy is actually becoming more transparent, or just talking more about it. When Glasspockets launched six years ago, a little less than 7 percent of foundations had a web presence; today that has grown to a still underwhelming 10 percent. So, the reality is that transparency remains a challenge for the majority of foundations, but some are making it a priority to open up their work.

Our new Foundation Transparency Challenge infographic is designed to help foundations tackle the transparency challenge. It provides an at-a-glance overview of how and why foundations are prioritizing transparency, inventories common strengths and pain points across the field, and highlights good examples that can serve as inspiration for others in areas that represent particular challenges to the field. 


Using data gathered from the 81 foundations that have taken and shared the “Who Has Glass Pockets?” transparency assessment, we identified transparency trends and then displayed these trends by the benefits to philanthropy, demonstrating the field's strengths and weaknesses when it comes to working more openly.

Transparency Comfort Zone

Despite the uniqueness of each philanthropic institution, looking at the data this way does seem to reveal that the majority of foundations consider a few elements as natural starting points in their journey to transparency.  As we look across the infographic, this foundation transparency comfort zone could be identified by those elements that are shared by almost all participating foundations:

  • Contact Information
  • Mission Statement
  • Grantmaking Priorities
  • Grantmaking Process
  • Key Staff List

Transparency Pain Points

On the flip side, the infographic also reveals the toughest transparency challenges for philanthropy, those elements that are shared by the fewest participating funders:

  • Assessments of Overall Foundation Performance
  • Diversity Data
  • Executive Compensation Process
  • Grantee Feedback
  • Open Licensing Policies
  • Strategic Plans

What’s In It for Me?

Once we start talking about the pain points, we often get questions about why foundations should share certain elements, so the infographic identifies the primary benefit for each transparency element. Some elements could fit in multiple categories, but for each element, we tried to identify the primary benefit as a way to assess where there is currently the most attention, and where there is room for improvement. When viewed this way, participating foundations show strength, or at least balance between strengths and weaknesses, when it comes to opening up elements that build credibility and public trust, and those that serve to strengthen grantee relationship-building. The infographic also illustrates that philanthropic transparency is at its weakest when it comes to opening up knowledge to build a community of shared learning. For a field like philanthropy that is built not just on good deeds but on the experimentation of good ideas, prioritizing knowledge sharing may well be the area in which philanthropy has the most to gain by improving openness.

“The reality is that transparency remains a challenge for the majority of foundations, but some are making it a priority to open up their work.”

And speaking of shared learning, there is much to be learned from the foundation examples that exist by virtue of participating in the “Who Has Glass Pockets?” assessment process. Our transparency team often receives requests for good examples of how other foundations are sharing information regarding diversity, codes of conduct, or knowledge sharing just to name a few, so based on the most frequently requested samples, the infographic links to actual foundation web pages that can serve as a model to others.

Don’t know what a good Code of Conduct looks like?  No problem, check out the samples we link to from The Commonwealth Fund and the Alfred P. Sloan Foundation. Don’t know how to tackle sharing your foundation’s diversity data?  Don’t reinvent the wheel, check out the good examples we flagged from The California Endowment, The Rockefeller Foundation, and Rockefeller Brothers Fund. A total of 19 peer examples, across seven challenging transparency indicators are offered up to help your foundation address common transparency pain points.

Why did we pick these particular examples, you might ask?  Watch this space for a follow-up blog that dives into what makes these good examples in each category.

#GlasspocketsChallenge

And more importantly, do you have good examples to share from your foundation’s transparency efforts? Add your content to our growing Glasspockets community by completing our transparency self-assessment form or by sharing your ideas with us on Twitter @glasspockets with #GlasspocketsChallenge and you might be among those featured next time!

--Janet Camarena


The Annual Report is Dead. Long Live the Annual Report!
October 13, 2016

(Neal Myrick is Director of Social Impact at Tableau Software and Director of Tableau Foundation, which encourages the use of facts and analytical reasoning to solve the world’s problems. Neal has served in both private and nonprofit senior leadership positions at the intersection of information technology and social change.)

Maybe it is the headlines from the campaign trail, but I’ve spent a lot of time lately thinking about philanthropy, impact, and accountability.

As the head of Tableau Foundation, I’m responsible for ensuring that we embody the values our employees have entrusted us to uphold. My team and I are accountable to the thousands of people who make up Tableau, and to the tens of thousands of Tableau customers and partners who are passionate about using data to drive change.

The question I’ve been wrestling with is not if we should tell our story, but how. How can we share what’s been accomplished in a way that is both timely and true without taking credit for someone else’s work? Moreover, how can we do all of this while still being a good steward of the company’s resources?

That’s why I’m pleased to share the Tableau Foundation’s brand new Living Annual Report. We’ve ditched the traditional, glossy printed annual report for a live report so anyone can get near real-time information on what we’re doing around the globe.

The Living Annual Report gives our stakeholders better, more timely information while reducing the investments of staff time and resources of a traditional printed report. It pulls information from the same data sources we use every day. The report updates weekly, and most pages have interactive capabilities that allow anyone to explore the data.

The Report doesn’t just look back at what we’ve done, either. It is also helping us chart the course ahead.

Earlier this year we adopted the UN’s Sustainable Development Goals (SDGs) as a framework for setting our priorities and measuring progress. While the 17 Goals themselves are expansive, the 230 underlying indicators help us organize our activities and approach partnerships with a clear sense of what we’re trying to achieve.

[Chart: SDG breakdown. Page 3 of the report shows the latest breakdown of Tableau Foundation grants by goal.]
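A goal-by-goal breakdown like the one on page 3 is, at its core, a grouped aggregation over grant records. As a minimal sketch of the idea (the grant records, field names, and dollar figures below are invented for illustration, not Tableau Foundation data):

```python
from collections import defaultdict

# Hypothetical grant records; partners, goals, and amounts are invented.
grants = [
    {"partner": "Org A", "sdg_goal": "3: Good Health and Well-Being", "amount": 50_000},
    {"partner": "Org B", "sdg_goal": "4: Quality Education", "amount": 40_000},
    {"partner": "Org C", "sdg_goal": "3: Good Health and Well-Being", "amount": 25_000},
]

def breakdown_by_goal(grants):
    """Total grant dollars per SDG goal, largest first."""
    totals = defaultdict(int)
    for g in grants:
        totals[g["sdg_goal"]] += g["amount"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for goal, total in breakdown_by_goal(grants):
    print(f"{goal}: ${total:,}")
```

Because the breakdown is computed from the live grant records each time, the published chart stays current as new grants are added, which is the essential difference from a figure frozen in a printed report.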

We recognize that we’re capacity builders, and that the issues we’re trying to address require much larger collaborative efforts. After all, the problems we’re trying to solve are multidimensional, so why should the solutions be any different?

Almost immediately, real-time transparency around priorities led to more relevant and constructive conversations with potential partners. We are finding more opportunities to deploy our two most valuable resources, our products and our people, to help people around the globe use facts and data to solve some of the world’s toughest challenges.

And somewhere in putting the report together, it became about something bigger. We started to see the Report as a model that shows foundations and nonprofits that they don’t have to spend substantial resources printing reports that are outdated the moment they are printed.

The purpose of a foundation or nonprofit’s annual report is to persuade decision-makers – funders, board members, partners, lawmakers – to take action. But if the information in the report is outdated, how can those people make choices that lead to real impact?

“We’ve ditched the traditional, glossy printed annual report for a live report with near real-time information on what we’re doing around the globe.”

This is not to say we should sacrifice storytelling. On the contrary, interactive charts and graphs sitting seamlessly alongside photos, videos, testimonials, and one-click calls-to-action can create a holistic engagement experience far beyond what a static printout might do. 

My real hope is that our report will inspire others to ditch the glossy paper and to get on board with the real purpose of the report – sharing actionable, up-to-date information with those in a position to take action. Some already have. Heron Foundation has been reporting on their portfolio through data visualizations for several years now. The Foundation Center’s Glasspockets transparency assessment tools and Foundation Maps are bringing sector-wide insights to grantmaking. And after seeing our Living Annual Report, others tell me they’re not far behind.

Imagine talking to a Development Director, for example, and being able to explore an interactive, near-real-time annual report to understand how your investment in the organization is having impact. Not “as of last May,” when a traditional annual report would have been printed, but as of last week. As funders, we can and should lead by example.

Which brings me back around to the idea of impact and accountability. To do our work well, we have to share timely information. This means sharing what we are doing, showing how our resources are being spent, and being responsible for the progress… or possibly lack thereof.

This level of accountability can be uncomfortable at times, but it is necessary to build more constructive partnerships based on trust, to set ourselves up to learn from the data, and ultimately to do more impactful work.

As the work grows and changes, this report will change with it. We’re continually making improvements, and all suggestions are welcome; feel free to email us anytime at foundation@tableau.com with any feedback.

--Neal Myrick

About Transparency Talk

  • Transparency Talk, the Glasspockets blog, is a platform for candid and constructive conversation about foundation transparency and accountability. In this space, Foundation Center highlights strategies, findings, and best practices on the web and in foundations–illuminating the importance of having "glass pockets."

    The views expressed in this blog do not necessarily reflect the views of the Foundation Center.

    Questions and comments may be directed to:

    Janet Camarena
    Director, Transparency Initiatives
    Foundation Center

    If you are interested in being a guest contributor, contact: glasspockets@foundationcenter.org
