Transparency Talk


No pain, no gain.
February 15, 2012

(Larry McGill is vice president for research at the Foundation Center.)

Transparency can be painful. Trust us, we know.

The Foundation Center is the primary data collection, analysis and reporting agency for the field of U.S. institutional philanthropy. Each year we analyze more than 150,000 grants awarded by about 1,500 of the country's largest and most influential foundations, and load them into our master database that now comprises more than 3 million grants awarded over the past 20 years.

Every year, our database is accessed by thousands of grant seekers looking for funding to do their work. It also underlies all of the research reports written by the Foundation Center, tracking trends in the field over time.

But here's the thing: our data aren't perfect. And we want you to know that.

Moreover, despite the limitations of our data, we fully intend to keep publishing reports documenting and explaining the work of U.S. foundations. Even if what we produce sometimes comes back to bite us.

Case in point: we have published a number of reports in recent years on issues related to diversity in philanthropy. Not everyone is satisfied with the findings we report, regardless of the caveats we issue about the limitations of available information on the populations that benefit from grantmaking. But we issue the reports anyway, because there is burgeoning demand for this type of information.

Why would we issue reports based on imperfect data? Because it is the only way the data will get better.

To build our grants database, we have relied for most of our 55-year existence on the publicly filed IRS Forms 990 and 990-PF. We transcribe verbatim the information provided by foundations on these forms that describes the purpose of each grant awarded during a given year. Sometimes this information is richly descriptive, sometimes it's sketchy, often it's nonexistent.

In recent years, we have developed a platform that allows foundations to send their grants information directly to the Center through an electronic reporting system. With more than 700 foundations participating, this has significantly improved both the range and depth of information available for analysis. But the quality of information we receive still varies a great deal from foundation to foundation.

As we confront the limitations of the information available to us, we have to make a choice about how best to spend our resources to build a database that describes the work of U.S. foundations. We can accept the limitations of the existing information and try to collect data on the work of as many foundations as possible each year. Or we can drastically limit the total number of foundations and grants we analyze and focus instead on trying to obtain as much additional information as we can about each grant awarded by those foundations (e.g., about beneficiary populations, geographic area served, etc.). The former strategy, the one we've chosen, allows us to add more than 150,000 grants to our database each year. The latter would allow us to add only about one-tenth of that number. We believe we owe it to the hundreds of thousands of individuals who use our grants database to make it as comprehensive as possible, so they can maximize their ability to find support for their good work.

Adopting that strategy means we have to live with some data limitations when doing research based on the information we have. But what is not generally understood or appreciated is that this is simply a fact of life regarding all research, at all times and in all places. Any research study that does not come with caveats, or explicitly stated limitations, is not an honest piece of research.

In our reports, we use colors, italics, boldface letters, boxes, sidebars, methodology sections, and other strategies to make people aware of both the findings and the limitations of our research. Of course, it's never enough. Findings still have a way of disconnecting themselves from the methodologies used to generate them.

But that's fine, as long as it leads to good faith conversations about what we think we know, what we don't know, and what we need to know. As the primary data collection agency for the field, the Foundation Center is committed to doing the best it can to answer the questions that people are asking about institutional philanthropy. The need to know will not go away, and we, all of us who care about philanthropy, must do whatever we can to ensure that we have the kinds of data that will allow us to meet this need.

Do you have thoughts about how we can collectively improve the quality of data available to the field? Let us know. Like all fields of endeavor in the 21st century, to be effective, philanthropy must operate from a solid base of knowledge that can only be built from reliable data on the issues it is addressing and the approaches it is taking to make a difference.

-- Larry McGill




It makes good sense that you want the grants data to be inclusive and to represent the full range of grants, even if it means there are some cases where accompanying data is not up to standard.

Re: your question about improving the quality of data, I have a couple of thoughts. Apologies if these are things you already do.

The first would be to set tighter criteria for data quality on certain fields you enter for grants. If data don't meet these criteria, then the entry is coded "missing" or "incomplete" for that data field. This would still allow you to include grants in the database, while selectively pruning out bad data within those grants.

To encourage foundations to provide better data, you could analyze the grants by foundation to identify the biggest offenders: those with the highest percentage of grants with missing or incomplete data. What if you published an annual "best of" report that ranked or rated the foundations on submission quality? The resulting peer pressure might encourage folks to get into line.
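The two ideas above can be sketched together: flag each grant record whose fields fail basic quality criteria, then rank foundations by their share of flagged grants. This is a minimal illustration, not the Foundation Center's actual pipeline; the field names (`foundation`, `recipient`, `amount`, `purpose`) and the "missing" criteria are hypothetical.

```python
# Hypothetical sketch of the commenter's proposal: code incomplete
# fields as "missing", then rank foundations by percent of flagged grants.

REQUIRED_FIELDS = ["foundation", "recipient", "amount", "purpose"]

def flag_record(grant: dict) -> dict:
    """Mark each required field as 'ok' or 'missing' (illustrative criteria)."""
    return {
        field: "ok" if grant.get(field) not in (None, "", "n/a") else "missing"
        for field in REQUIRED_FIELDS
    }

def rank_by_quality(grants: list[dict]) -> list[tuple[str, float]]:
    """Sort foundations by percentage of grants with any missing field, worst first."""
    totals: dict[str, list[int]] = {}  # foundation -> [flagged_count, total_count]
    for grant in grants:
        name = grant.get("foundation", "(unknown)")
        flagged = any(v == "missing" for v in flag_record(grant).values())
        stats = totals.setdefault(name, [0, 0])
        stats[0] += int(flagged)
        stats[1] += 1
    return sorted(
        ((name, 100.0 * f / t) for name, (f, t) in totals.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Toy data: Foundation A has one grant with an empty purpose field.
grants = [
    {"foundation": "Foundation A", "recipient": "Org X", "amount": 5000, "purpose": "literacy"},
    {"foundation": "Foundation A", "recipient": "Org Y", "amount": 2500, "purpose": ""},
    {"foundation": "Foundation B", "recipient": "Org Z", "amount": 1000, "purpose": "arts"},
]
print(rank_by_quality(grants))  # [('Foundation A', 50.0), ('Foundation B', 0.0)]
```

In practice the quality criteria would need to be richer than "non-empty" (e.g., checking that a purpose description is substantive), but even a coarse flag like this is enough to compute the per-foundation rankings the comment proposes.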

A bigger challenge is to validate whether reported grants data that appear complete are also accurate. Might there be a way to encourage grantees to visit the database and double-check the accuracy of the data that foundations have reported for them? Incentivizing them to do this will also be tough. Maybe communicating the value of cleaner data to the community would help. This could be an idea worth kicking around at a future foundation brainstorming session to see where it leads.

Cheers, josh


About Transparency Talk

  • Transparency Talk, the GlassPockets blog, is a platform for candid and constructive conversation about foundation transparency and accountability. In this space, Candid highlights strategies, findings, and best practices on the web and in foundations, illuminating the importance of having "glass pockets."

    The views expressed in this blog do not necessarily reflect the views of Candid.

    Questions, comments, and inquiries relating to guest blog posts may be
    directed to:

    Janet Camarena
    Senior Director of Candid Learning