Transparency Talk


May 2013 (8 posts)

Advancing Social Media Measurement for Foundations: A Re-Cap (Part Two)
May 30, 2013

(Beth Kanter is a Master Trainer and the author of Beth's Blog, one of the longest running and most popular blogs for nonprofits, and co-author of the highly acclaimed book, The Networked Nonprofit, published by J. Wiley in 2010, and its new follow-up, Measuring the Networked Nonprofit, published in 2013 by J. Wiley.)

In my post yesterday, we discussed what working transparently means and its benefits, and I left you with a cliff-hanger on how to measure your return on investment. Transparency is like any other measurement challenge; you first need to be clear about what you are measuring. There are different ways to measure transparency. The first is assessing the impact of a change in transparency on your foundation by measuring the change in the benefits of that transparency – in this case, efficiency or trust. The second is a self-evaluation to determine how transparent your organization is.

The transition to transparency may be discomforting, but the benefits of inviting people in and sharing in a foundation’s strategy development far outweigh the potential downsides.

How to Measure Improvements in Efficiency
To measure efficiency, you need to have a chat with your accounting and operations departments to figure out what your leadership team is already tracking for efficiency metrics. If they aren’t tracking these already, chances are that someone in one of those departments knows how and can help you. Typical efficiency metrics include:

  1. Percent reduction in response time from inquiry to satisfied resolution
  2. Percent reduction in staff hours responding to queries
  3. Percent increase in satisfaction and knowledge of employees

The benefits of increased transparency can also be quantified by a relationship survey (see Chapter 7).
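For foundations that want to track metrics like these over time, the underlying arithmetic is simple percent-change math against a baseline. Here is a minimal sketch in Python; the metric names and baseline figures are hypothetical placeholders, not numbers from any actual foundation:

```python
# Minimal sketch: percent-change arithmetic for common efficiency metrics.
# All metric names and numbers below are hypothetical examples.

def percent_change(before: float, after: float) -> float:
    """Return the percent change from a baseline value to a current value."""
    if before == 0:
        raise ValueError("Baseline value must be non-zero.")
    return (after - before) / before * 100

# Hypothetical before/after measurements for one reporting period.
metrics = {
    "avg_days_from_inquiry_to_resolution": (10.0, 6.5),        # hoping for a reduction
    "staff_hours_answering_queries_per_month": (120.0, 84.0),  # hoping for a reduction
    "employee_satisfaction_score": (3.4, 3.9),                  # hoping for an increase
}

for name, (before, after) in metrics.items():
    change = percent_change(before, after)
    direction = "increase" if change > 0 else "reduction"
    print(f"{name}: {abs(change):.1f}% {direction}")
```

For the made-up numbers above, the sketch reports a 35.0% reduction in response time, a 30.0% reduction in staff hours, and a 14.7% increase in satisfaction.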

How to Measure Improvements in Trust
Several studies have shown that the more transparent people perceive an organization to be, the more likely they are to trust that organization. The more the organization provides honest, open, and occasionally vulnerable communications, the more people trust the institution. Amazingly, the ability to be open and transparent was found to be more influential than competence in terms of willingness to trust. In other words, people care more about your willingness to be open and transparent than whether you are competent enough to do what you say you are going to do!

How to Measure Your Own Transparency
While the field of transparency measurement is relatively nascent, thanks to the work of Rawlins and others there are established techniques to quantify it in your own organization. There are two elements to measuring transparency. The first is “How do I know just how transparent we are?” The second is: “Do our stakeholders perceive us as transparent? And, consequently, do they trust us?”

Measurement of transparency examines four separate but equal components:

  1. Participation – The organization asks for feedback, involves others, takes the time to listen, and is prompt in responding to requests for information.
  2. Substantial information – The organization provides information that is truthful, complete, easy to understand, and reliable.
  3. Accountability – The organization is forthcoming with bad news, admits mistakes, and provides both sides of a controversy.
  4. Absence of secrecy – The organization doesn’t leave out important but potentially damaging details, doesn’t obscure its data with jargon or confusing language, and isn’t slow to provide data or inclined to disclose it only when required.

In Measuring the Networked Nonprofit, we recommended using survey or focus group questions based on the work of Brad Rawlins or the Who Has Glass Pockets? assessment, which your foundation can use to audit its own transparency. Typically a survey like this is administered either as a group discussion or as a written survey followed by a group discussion of the results. Some of the areas of discussion that should emerge from such an exercise are outlined below; a minimal scoring sketch follows the question lists.

Is your organization participative?

  1. Do we involve stakeholders to help identify the information they need?
  2. Do we ask the opinions of stakeholders before making decisions?
  3. Do we take the time with stakeholders to understand who they are and what they need?

Do we provide substantial information?

  1. Do we provide detailed information to stakeholders?
  2. Do we make it easy to find the information stakeholders need?
  3. Are we prompt when responding to requests for information from stakeholders?
  4. Are we forthcoming with information that might be damaging to the organization?
  5. Do we provide information that can be compared to industry standards?
  6. Do we present more than one side of controversial issues?

Are we accountable?

  1. Do we provide information in a timely fashion to stakeholders?
  2. Do we provide information that is relevant to stakeholders?
  3. Do we provide information that could be verified by an outside source?
  4. Do we provide information that can be compared to previous performance?
  5. Do we provide information that is complete?
  6. Do we provide information that is easy for stakeholders to understand?
  7. Do we provide accurate information to stakeholders?
  8. Do we provide information that is reliable?
  9. Do we present information in language that is clear?
  10. Are we open to criticism?
  11. Do we freely admit when we make mistakes?

Are we secretive?

  1. Do we provide only part of the story to stakeholders?
  2. Do we leave out important details in the information we provide to stakeholders?
  3. Do we provide information that is full of jargon and technical language that is confusing to people?
  4. Do we blame outside factors that may have contributed to the outcome when reporting bad news?
  5. Do we provide information that is intentionally written in a way to make it difficult to understand?
  6. Are we slow to provide information to stakeholders?
  7. Do we only disclose info when it is required?
  8. Do we only disclose “good” news?
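If your foundation administers these questions as a written survey (for example, on a 1–5 agree/disagree scale), the responses can be rolled up into a simple average score per component. The sketch below shows one hypothetical way to do that in Python; the scale, the reverse-scoring of the negatively worded secrecy items, and the sample responses are illustrative assumptions rather than an official rubric from Rawlins or Glasspockets:

```python
# Minimal sketch: averaging 1-5 survey responses into a score per component.
# The component names mirror the question groups above; the 1-5 scale, the
# reverse-scoring of the negatively worded secrecy items, and the sample
# responses are illustrative assumptions, not an official rubric.

from statistics import mean

SCALE_MAX = 5  # assumes a 1 (strongly disagree) to 5 (strongly agree) scale

# Hypothetical responses from one staff self-assessment, keyed by component.
responses = {
    "participation": [4, 3, 4],
    "substantial":   [5, 4, 3, 4, 4, 3],
    "accountable":   [4, 4, 3, 4, 5, 4, 4, 4, 4, 3, 4],
    "secrecy":       [2, 2, 3, 2, 1, 2, 3, 2],  # negatively worded items
}

def component_score(scores, reverse=False):
    """Average one component's items; reverse-score negatively worded items."""
    if reverse:
        scores = [SCALE_MAX + 1 - s for s in scores]
    return mean(scores)

for component, scores in responses.items():
    score = component_score(scores, reverse=(component == "secrecy"))
    print(f"{component:>13}: {score:.2f} out of {SCALE_MAX}")
```

Higher averages on the first three components, and a higher reverse-scored average on secrecy, would suggest greater transparency; the point of the exercise is to prompt the group discussion described above, not to produce a precise number.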

Measuring Stakeholder Perceptions
The second part of transparency measurement is assessing whether your stakeholders perceive you as transparent. To measure this, ask your supporters and stakeholders whether they agree or disagree with the following statements. These questions could be added to grantee perception reports or other surveys that foundations routinely conduct.

  1. The organization wants to understand how its decisions affect people like me.
  2. The organization provides information that is useful to people like me for making informed decisions.
  3. I think it is important to watch this organization closely so that it does not take advantage of people like me.
  4. The organization wants to be accountable to people like me for its actions.
  5. The organization wants people like me to know what it is doing and why it is doing it.
  6. This organization asks for feedback from people like me about the quality of its information.
  7. This organization involves people like me to help identify the information I need.
  8. The organization provides detailed information to people like me.
  9. The organization makes it easy to find the information people like me need.
  10. The organization asks the opinions of people like me before making decisions.

The transition to transparency may be discomforting, but the benefits of inviting people in and sharing in a foundation’s strategy development far outweigh the potential downsides. Imagine how much stronger your network's reactions, input, and suggestions will make your organization—and how exciting it will feel to share your great work with more people. And, with a measurement strategy in place, you can know for sure that your effort has paid off, and that your organization is changing from the inside out.

--Beth Kanter

Advancing Social Media Measurement for Foundations: A Re-Cap (Part One)
May 29, 2013

(Beth Kanter is a Master Trainer and the author of Beth's Blog, one of the longest running and most popular blogs for nonprofits, and co-author of the highly acclaimed book, The Networked Nonprofit, published by J. Wiley in 2010, and its new follow-up, Measuring the Networked Nonprofit, published in 2013 by J. Wiley.)

Last month, I was invited to participate in a meeting organized by the Robert Wood Johnson Foundation called Advancing Social Media Measurement for Foundations, where I presented on the State of Nonprofit Social Media Measurement. The participants were a cross-disciplinary group: people who work at foundations in evaluation, communications, social media, and programs, as well as nonprofit staff and consultants working in evaluation, social media, network analysis, and data science, among others.

To be transparent means that a foundation is open, accountable, and honest with its stakeholders and the public. Transparency exists to a lesser or greater extent in all organizations. Greater transparency is a good thing, not just because it is morally correct, but because it can provide measurable benefits.

We had two working sessions where we focused on defining outcomes, strategies, key performance metrics, and measurement methods for five outcome areas that may be common to many foundations’ communications strategies, including transparency – a topic to which KD Paine and I devoted an entire chapter in our book, Measuring the Networked Nonprofit.

Transparency is a developing practice for nonprofits and their funders, and the field of measurement of transparency for foundations and nonprofits is embryonic. To be transparent means that a foundation is open, accountable, and honest with its stakeholders and the public. Transparency exists to a lesser or greater extent in all organizations. Greater transparency is a good thing, not just because it is morally correct, but because it can provide measurable benefits.

Measurable Benefits of Transparency
Transparency helps an organization by engaging its audiences and by speeding the processes of learning and growing. Transparency helps foundation programs improve in ways they might not otherwise. Two of transparency’s readily measurable benefits are increased efficiency and increased stakeholder perceptions of trust, commitment, and satisfaction.

Increased efficiency: Transparency makes organizations more efficient because it removes the gatekeeping function, which not only takes extra time but can be an exhausting way to work. When foundations are working transparently, problems are easier to solve, questions are easier to answer, and stakeholders’ needs are met more quickly.

Increased trust, satisfaction, and commitment: Dr. Brad Rawlins’ research has demonstrated that increased organizational transparency is directly tied to increases in trust, credibility, and satisfaction among the organization’s stakeholders. He sees a key benefit of transparency as “enhancing the ethical nature of organizations in two ways: first, it holds organizations accountable for their actions and policies; and second, it respects the autonomy and reasoning ability of individuals who deserve to have access to information that might affect their position.” (Rawlins is the Dean of the Communications Department at Arkansas State University.)

Now that we have defined what working transparently means and the benefits, what is the method for measuring it? 

In the second part of my post tomorrow, I will outline the best approaches to measuring the value of transparency to your foundation.

--Beth Kanter

Transparency for Impact and Learning
May 21, 2013

(Hallie Preskill, Managing Director at FSG Philanthropy Advisors, recently wrote a transparency-focused call to action in “State of the Work,” a just-released report from the D5 coalition, a five-year effort to increase diversity in philanthropy. Given the focus on transparency, we are cross-posting her thoughts here. The new report focuses on how the philanthropic field can increase its diversity, advance equity, and improve its inclusiveness.)

As a longtime evaluator who has been deeply committed to using evaluation findings, I am excited when I hear that foundations are looking to be more “transparent” in the ways they do their work, the decisions they make, and what they learn from their evaluation efforts.

Yet, I don’t see much evidence that many are truly embracing this idea of transparency when it comes to sharing evaluation findings and other types of grantmaking data.

While there are many reasons organizations may be hesitant about sharing evaluation results, a true learning organization will understand that with any good evaluation, there are important insights and lessons that deserve to be shared both internally and externally. A learning organization also knows that a good evaluation must start with sound data on who the organization is trying to impact and the contexts in which they operate, including data related to demographics. 

This doesn’t mean foundations have to publicize mean scores, quotes from those interviewed, or volumes of evaluation findings. Instead, it means being committed to collecting relevant, credible, and useful information that is strategically informative; being open to sharing what was learned from engaging in the evaluation process in ways that help others think about their own work more critically; growing and adapting their practices to be more effective; and finding ways to achieve greater social impact. When evaluation and research activities and findings are made transparent, they can be a powerful catalyst for facilitating individual, group, organization, community, and field learning.

--Hallie Preskill

The 2013 “State of the Work” profiles the many leaders across the country who are taking important steps toward diversity and inclusion, and outlines the lessons these leaders have learned. The report offers suggestions for determining how diversity, equity, and inclusion can help increase effectiveness—and provides concrete ideas for how to translate those values into action. Featuring insights from executives of the American Express Foundation, the Baltimore Community Foundation, Access Strategies Fund, the Silicon Valley Community Foundation, Lloyd A. Fry Foundation, Capek Consulting, Russell Family Foundation, and FSG, the report lays the groundwork for a more diverse sector going forward. The complete report can be found at http://www.d5coalition.org/tools/state-of-the-work-2013/.


 

The Unbearable Lightness of Being Transparent
May 20, 2013

Do you think it’s important for foundations and nonprofits alike to be transparent? We do, and we would like to make our case and share lessons learned from the field at the annual Communications Network conference. Help the Communications Network prioritize its conference planning by voting on your top picks for breakout sessions. You don’t need to be a Communications Network member to vote, and there’s plenty to choose from, including our own session, The Incredible Lightness of Being Transparent, featuring Ellie Buteau, vice president, Research, Center for Effective Philanthropy; and Gabi Fitz, co-founder, Issue Lab; moderated by Janet Camarena, project lead, Glasspockets.org. Voting is live on the Communications Network web site until May 22.

Help us send Glasspockets to New Orleans this fall to illuminate the importance of foundation transparency. Vote for the Incredible Lightness of Being Transparent session today!

Foundation Transparency: The More Things Change, The More They Stay The Same
May 17, 2013

(Aaron Dorfman is executive director of the National Committee for Responsive Philanthropy (NCRP). He frequently blogs about the role of philanthropy in society. Follow NCRP on Twitter @ncrp.)

As I reviewed “Foundation Transparency: What Nonprofits Want,” the latest publication from the Center for Effective Philanthropy (CEP), I had an overwhelming sense of déjà vu. So I dug deep into the archives to find reports on the subject produced by the organization I now lead, the National Committee for Responsive Philanthropy (NCRP).

What I’m left with is a sense that, on the issue of transparency, the more things change, the more they stay the same.

In May 1980, NCRP released Foundations & Public Information: Sunshine or Shadow? It was a scathing report that took foundations to task for their reticent approach to sharing information, and it launched a decades-long commitment by NCRP to promote increased transparency. The report explored why foundations should be accountable and transparent, and also the inadequate government requirements at that time. It ranked and scored 208 of the largest philanthropies using a rigorous methodology and found that 60 percent of foundations in the sample did not meet an acceptable standard of transparency. Just 4 percent were found to be “excellent.”

The methodology included a heavily weighted assessment of whether foundations provided the kinds of information that nonprofits most desired, including information about grantmaking interests and policies, and how grant applications were evaluated and decisions made about which organizations to fund.

I see many parallel findings between that report and CEP’s excellent new report. A full 33 years later, nonprofits are still clamoring for more information about how foundations make funding decisions and they want clear and open communication about priorities. They want to know whether it’s worth their time to cultivate a relationship and pursue funding. And despite an explosion of glossy annual reports and fancy websites, leaders of grant-seeking organizations are still highly frustrated by the lack of clear communication about a central element of foundation activity, namely how foundations decide which organizations to fund.

Foundation Transparency surveyed 138 nonprofit leaders, and I was unsurprised to see many of the respondents reference a desire to know how foundations assess their own performance and the impact they have. It only seems fair that, since foundations require this of grantees, they be willing to be accountable for articulating impact, too.

Some of the findings suggest to me that nonprofits really want foundations to function as true partners. For example, the fact that an overwhelming majority of respondents wanted to know more about what foundations are learning indicates that grantees want learning to go both ways.

The CEP report doesn’t explore the regulatory framework for foundation transparency, nor does it explore in depth the arguments for why greater transparency may be warranted. But another report released this year does revisit those questions. In March 2013, the Philanthropy Roundtable published Transparency in Philanthropy: An Analysis of Accountability, Fallacy, and Volunteerism.

As I reviewed Foundations & Public Information in light of the Roundtable’s current offering, I was struck by how little the arguments in favor of greater foundation transparency have changed since 1980. The original NCRP report looks at the partially-public nature of philanthropy, which is revisited by the Roundtable (though our organizations obviously come down on different sides). The partially-public dollars argument asserts that because of the preferential tax treatment afforded to foundations, a high level of transparency and accountability is owed to the public and grantees. NCRP repeated and expanded on this argument in our 2009 publication Criteria for Philanthropy at Its Best: Benchmarks to Assess and Enhance Grantmaker Impact.

In 1980, NCRP devoted some attention to why greater transparency is in the self-interest of foundations and how it might improve their effectiveness. This topic is explored robustly in the Roundtable’s new report, revisited in NCRP’s Criteria, and touched on in the CEP report. Because I see additional regulation as unlikely in the near future, the link between effectiveness and voluntary transparency merits further exploration.

Speaking of regulation, there has been some increase in activity around this in recent years, though nothing has actually changed for more than 20 years in terms of mandated disclosures. Most philanthropy insiders are familiar with efforts by the Greenlining Institute to pass AB624, which would have required new disclosures for the largest foundations in California. Fewer are aware of quieter efforts by the Philanthropy Roundtable to pass legislation in several states banning efforts similar to AB624.

The last substantive change to the information disclosures required on IRS Form 990-PF can be traced to NCRP’s work with Senator Dave Durenberger (R-Minn.) to persuade the IRS to change what it required in the form. Those changes helped the Foundation Center produce the best data available about the sector. An abbreviated version of how NCRP’s efforts on transparency evolved, including the Durenberger intervention with the IRS, can be found on page 10 of this look back at NCRP’s history.

What I’m left with is a sense that, on the issue of transparency, the more things change, the more they stay the same.

Coincidentally, around the same time as I was reviewing the new CEP publication and beginning to think about crafting this blog post, Bob Bothwell invited me to join him on a Friday evening for a baseball game at Nationals Park. Bothwell was NCRP’s executive director from its inception in 1976 until 1998. I am reminded again of how important it is for those of us from a new generation who are leading nonprofits and foundations to intentionally nurture connections to our history, even while we attempt to take our organizations in new directions.

And in case you’re wondering, the Washington Nationals beat the Cincinnati Reds 1-0, and Jordan Zimmermann pitched a one-hitter.

--Aaron Dorfman

How Data Can Help Create Better Communities: A Re-Cap
May 16, 2013

(Natasha Isajlovic-Terry is the Reference Librarian at the Foundation Center-San Francisco)

Data is everywhere these days, spilling out over the sides of its containers and busting out at every seam. The world is teeming with it. At the BayNet Libraries Annual Meeting, we learned from Dr. Jonathan Reichental why this is: “We are grappling with the volume of data in the world because we now collect the same amount of data every three days as we did throughout the entire year in 2003.” This was just one thing I learned from Dr. Reichental’s talk “How Data Can Help Create Better Communities.” Reichental is no stranger to data: he currently serves as the Chief Information Officer (CIO) for the city of Palo Alto. It sounds like they’re doing some pretty nifty tricks with big, open data down there. If you’re interested, it’s happening up here too, as San Francisco just hired its first CIO.

Data yields positive effects when it is shared, or, to put it in more familiar terms, when we are transparent with it.

Dr. Reichental’s talk focused on how government data can be used to improve communities. Data mined from government sources is often mashed up with data from other free sources, such as Google, to strengthen its quality. For example, Palo Alto mashed its data on street quality ratings with Google Street View to create Palo Alto StreetViewer, a tool used to visualize those ratings and inform decisions about infrastructure improvements.

Data is used in many different ways in the social sector. We know that nonprofits collect and analyze their data to measure the effectiveness of their services, and that strategic nonprofits use open data to better position their outreach and services. The same is true for foundations, but these applications are often conducted within the silos of the organizations. Data yields positive effects when it is shared, or, to put it in more familiar terms, when we are transparent with it.

Reichental mentioned the following six things about government use of open data (outlined in a summary by Sarah Rich):

  • It is the liberation of people’s data
  • To be useful, data needs to be consumable by machines
  • Data has a derivative value
  • Data eliminates the middleman
  • Data creates deeper accountability
  • Open data builds trust

Three of these things stood out to me in a major way as beneficial for foundations too: derivative value, accountability, and trust.

When data is made available to the public, other organizations can use the same data in interesting and powerful ways. Think about all those fantastic mashups they do on Glee, but with data sets! An example Dr. Reichental shared is the use of public health ratings in Yelp reviews to strengthen the overall value of reviews. I don’t need to repeat the fact that foundations sit on a “treasure trove” of information as they require nonprofits to report all sorts of useful data. Can you imagine the derivative benefits if they shared this information with the world?

When government shares data publicly it creates deeper accountability. Dr. Reichental used the example of how the government is sharing their data through USAspending.gov, which in turn creates greater accountability as the public can now see where and how their money is spent. The same is true for foundations. This is why we have form 990/990-PF. Some foundations are now going beyond the 990-PF and opening up their grants data via the new Glasspockets Reporting Commitment.

The last thing Dr. Reichental mentioned about data, the fact that it builds trust, is the most compelling thing for foundation transparency, and it goes hand-in-hand with accountability. Being transparent means you have nothing to hide; conversely, when we aren’t transparent, the public assumes that we do have something to hide. The trust component is perhaps the biggest reason why the government decided to share data publicly. The government was collecting it all along, but it wasn’t until recently that it decided to free it. Now, five years into the Obama administration, we have over 400,000 data sets available via data.gov.

Data isn’t going anywhere except up and out. We are heading in a direction where sharing data publicly will be expected and touted as part of the common good. In the case of foundations, sharing data may actually increase the value of the work.

--Natasha Isajlovic-Terry

The Giving Pledge Keeps Growing
May 15, 2013


A press release on May 7, 2013, announced that nine additional participants have joined the Giving Pledge, the effort launched by Warren Buffett and Bill and Melinda Gates in 2010 to encourage the world's wealthiest to commit the majority of their assets to philanthropic causes. The new list includes American as well as international pledgers and the first female individual billionaire, bringing the total number of signatories to 114 individuals, spouses, and their families. Profiles for the new signatories are now available.

Since August 2012, Glasspockets has been keeping an Eye on the Giving Pledge, providing an in-depth picture of the participants and their publicly known charitable activities.

Explore the Eye on the Giving Pledge»

-- Daniel Matz

Valuing Beneficiary Feedback: Promoting Foundation Accountability and Programmatic Outcomes by Incorporating Recipient Assessments into Decision-Making
May 9, 2013

(Emily Keller is an editorial associate in the Corporate Philanthropy department at the Foundation Center.)

Foundation leaders who want to increase the accountability of their work should consider supporting efforts to solicit feedback from beneficiaries, say three experts in the field of conducting recipient assessments.

To succeed, the feedback must be representative, actionable, systematic, and comparable, said Fay Twersky, Phil Buchanan, and Valerie Threlfall in a webinar presented last month by the Stanford Social Innovation Review (SSIR). The webinar was based on the article, "Listening to Those Who Matter Most, the Beneficiaries," written by the webinar speakers and published in the spring 2013 issue of SSIR.

Despite its inherent difficulties, beneficiary feedback is poised for growth as a method for measuring performance and accountability within the social sector movement toward "big data."

Twersky, who is the lead author of the article and the director of the Effective Philanthropy Group at the William and Flora Hewlett Foundation, said gaining knowledge from beneficiaries, in addition to experts and crowdsourcing, is a moral issue and a smart thing to do to achieve effective program results. "I think if this is important to practitioners to listen systematically and to do it well, it will be important to funders who are responsive to their grantees...I don't think we have done a good job as funders of listening to those voices. I think we can do a lot better," she said.

A 2011 survey of CEOs by the Center for Effective Philanthropy (CEP) showed that just 19 percent of responding foundations use beneficiary focus groups or convenings to assess the effectiveness of their foundation's programmatic work, and 16 percent use beneficiary surveys to do so. Those who collect this information reported having "a better understanding of the progress their foundation is making against its strategies" and "a more accurate understanding of the impact the foundation is having on the communities and fields in which it works."

So why aren't more foundations supporting programs to solicit beneficiary feedback? The webinar speakers examined the issue by discussing benefits and success stories, challenges and criticisms, and best practices for establishing a feedback system.

Benefits and Success Stories
Twersky, Buchanan, and Threlfall drew on their experiences as co-founders of YouthTruth, a nonprofit organization that administers online surveys to high school students across the country, to exhibit an effective system for gathering beneficiary feedback. YouthTruth was created in 2008 by CEP, in collaboration with the Bill & Melinda Gates Foundation, to leverage student opinions into schools' decision-making processes. The questions focus on student engagement, school culture, student-teacher relationships, rigor of classes and instruction, and preparedness for the future. Two hundred thousand students from 275 schools have answered the survey, and 85 percent of participating administrators say they have used the data to make policy or programmatic decisions.

"In the case of YouthTruth, we have seen real benefits — courageous students and schools that have participated in the process — in that it has really opened up new areas for discourse and I think changed both adults' and students' expectations about their involvement in decision-making," said Threlfall, a senior advisor at YouthTruth.

In the healthcare sector, the push for using beneficiary feedback to improve outcomes has focused around patient-centered and accountable care, as measured through initiatives such as the Hospital Care Quality Information from the Consumer Perspective (HCAHPS) survey. The publication of "Crossing the Quality Chasm: A New Health System for the 21st Century" by the Institute of Medicine in 2001 spurred an increase in the collection and use of recipient feedback for this purpose, the authors explained. The Patient Protection and Affordable Care Act continues in this direction with goals for measuring patient experiences during hospital stays and incorporating consumer assessments in determining insurance reimbursements. "When patients have better communication with providers, and when they understand treatment options and feel that they have some say in their own care, they are more likely to follow a treatment regimen and improve their health," the authors wrote in the SSIR article.

They pointed to work by the Cleveland Clinic to increase nurse check-ins as a result of patient feedback, as part of a successful quest for a 90 percent satisfaction rate using HCAHPS; and an initiative by the California HealthCare Foundation to gather feedback from patients outside the commercially insured population that has been the traditional focus of data collection.

The education and healthcare sectors provide unique opportunities for collecting feedback, as the service structure enables providers to track recipient populations and compare information across institutions, the speakers said.

Criticism and Challenges
Despite positive outcomes, experts say that beneficiary feedback is under-utilized by philanthropic organizations for many reasons. According to the CEP study, 73 percent of the responding foundations provided some funding to assist grantees in understanding the effectiveness of their programs, but only 9 percent did so for all of their grantees.

The speakers identified a range of challenges that may explain why beneficiary data is not more commonly used. According to Buchanan, who is president of CEP, feedback can be expensive and difficult to gather, particularly from populations that are hard to reach, and power dynamics between recipients and service providers can stand in the way of candid responses and high response rates. Twersky cited low literacy rates, trust issues among vulnerable populations, and access to technology as other potential barriers. Allocating funding for surveys may be viewed negatively as an administrative cost that takes resources away from the direct provision of services. Another criticism is that placing an increased focus on outcome metrics can impede innovation and risk-taking. And although recipients may be viewed as customers of a business, they are not the ones paying for the services they receive, the speakers explained, which makes it easier for service providers to overlook their opinions.

Best Practices for Gathering and Incorporating Beneficiary Feedback
Twersky, Buchanan, and Threlfall offered a series of recommendations for collecting and utilizing beneficiary feedback effectively, including the following:

  • Seek Feedback When it Matters

The speakers recommended initiating the survey process before or during a program rather than only after it ends, enabling the data to have the most impact. Twersky compared this to leading indicators used in business.

  • Design Surveys for Impact

The speakers recommended developing a process to integrate feedback early on and to consider the use of focus groups before or after administering surveys, as well as establishing requirements for a high response rate. Buchanan stressed the importance of detailed survey design and methodology and suggested working with consultants or providers if necessary.

  • Strive for Candid, Representative Responses

Threlfall recommended checking for non-responder bias “to make sure certain populations aren’t left out.” This requires cultural awareness of the population being surveyed. For example, with smallholder farmers, “Men tend to have disproportionately more access to mobile phones than women, so whose voices are we hearing?” Twersky noted. A minimal sketch of such a check appears after this list of practices.

  • Prepare for Negative Results

In a YouthTruth video shown in the webinar, Dr. Brennon Sapp, principal of Scott High School in Taylor Mill, KY, described receiving difficult feedback. "One question that will haunt me to my grave is the question that was ‘do your teachers care about you?' We rated bottom 1% in the nation on that one specific question and it really hit hard. It's hard to swallow, hard to take, hard for my teachers to take," he said in the video. Following the survey, Sapp said he worked with staff to change policies and shift the culture of the school, which led to a decline in the failure rate and increased teacher intervention.

  • Collaborate with Other Organizations

Working with other groups enables providers to generate comparative data. In the CEP survey, 26 percent of respondents said they were currently using coordinated measurement systems with other funders in the same issue area, and 23 percent were considering doing so. According to Twersky, developing benchmarks through collaboration will help organizations to interpret the data.
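To make the non-responder check mentioned under "Strive for Candid, Representative Responses" concrete, one simple approach is to compare response rates across groups in the invited population. The sketch below does that in Python for a single hypothetical demographic field; all names and counts are made up, and a real check would look at several characteristics (gender, region, program area, organization size):

```python
# Minimal sketch: a simple non-responder bias check comparing response rates
# across one demographic field. All names and counts are hypothetical.

from collections import Counter

# Everyone who was invited to take the survey, and the subset who responded.
invited = ["men"] * 180 + ["women"] * 220
responded = ["men"] * 120 + ["women"] * 60

invited_counts = Counter(invited)
responded_counts = Counter(responded)

for group, total in invited_counts.items():
    answered = responded_counts.get(group, 0)
    rate = answered / total * 100
    print(f"{group}: {rate:.0f}% response rate ({answered} of {total})")

# A large gap between groups (here roughly 67% vs. 27%) suggests the results
# over-represent one group and should be weighted, or followed up on, before
# drawing conclusions.
```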

Beneficiary Assessments in the "Big Data" Movement
Despite its inherent difficulties, beneficiary feedback is poised for growth as a method for measuring performance and accountability within the social sector movement toward "big data." The push for evidence-based social programs utilizing impact evaluations was echoed in the Obama Administration's launch of the Social Innovation Fund (SIF) in 2009 to provide million-dollar matching funds to nonprofit organizations chosen by grantmakers that are working in the areas of economic opportunity, healthy futures, and youth development. The SIF's "key characteristics" include: "Emphasis on rigorous evaluations of program results not only to improve accountability but also to build a stronger marketplace of organizations with evidence of impact."

While some challenges will remain insurmountable — as Twersky pointed out during the webinar, "When the intended beneficiary is the earth, how do we listen to the earth?" — there are more than enough resources available to start speaking to more of its inhabitants today.

What are the biggest challenges your organization has faced in collecting and incorporating beneficiary feedback into decision-making? What role should recipient assessments play in the "big data" movement? How can foundations and nonprofits use beneficiary feedback to enable greater accountability and effectiveness? Please provide your comments below.

-- Emily Keller


About Transparency Talk

  • Transparency Talk, the Glasspockets blog, is a platform for candid and constructive conversation about foundation transparency and accountability. In this space, Foundation Center highlights strategies, findings, and best practices on the web and in foundations, illuminating the importance of having "glass pockets."

    The views expressed in this blog do not necessarily reflect the views of the Foundation Center.

    Questions and comments may be directed to:

    Janet Camarena
    Director, Transparency Initiatives
    Foundation Center

    If you are interested in being a guest contributor, contact: glasspockets@foundationcenter.org
