Transparency Talk

Category: "Efficiency" (17 posts)

Grantmakers, Go On—Ask!
November 25, 2013

(Jessica Bearman is lead consultant to the Grants Managers Network’s Project Streamline, an initiative to help funders understand and minimize the burden of grantmaking. She blogs as Dr. Streamline at http://www.projstreamline.org/.)

Improving foundation transparency and accountability can strengthen relations with grantees and prospective grantees, especially around the application process. Have you ever wondered:

  • Are your grant application requirements sensible and comprehensible to applicants?
  • Does your online application system work well or waste grantseekers’ time?
  • What does your application process cost nonprofits (unfunded and funded) in time and financial resources?

If you don’t know the answers to these questions, you’re not alone. But there is no reason to be afraid to ask.

Project Streamline, an initiative of the Grants Managers Network, recently reported that many funders—more than half in our survey sample—don’t seek feedback on their practices from nonprofit grantseekers. Grantseekers reported that, on average, fewer than 15% of their funders had ever asked for input.

This means that funders don’t know the answers to the questions posed above.

Nonprofit grantseekers have learned all too well the perils of unsolicited candor. They often don’t believe that funders want to hear anything critical about their practices. As one grantseeker put it, “As far as negative feedback, I don't give it unless they ask for it, or I do not plan to ever approach them again.” Meanwhile, funders seem oddly unwilling to invite constructive critique that would both demonstrate good partnership and improve their systems.

But funders have so much to gain from inviting feedback from grantseekers and grantees; it’s hard to imagine a significant downside to increased transparency. So, go on and ask!

Ask by surveying. I believe that an anonymous survey—either your own or one administered by a third party—is a good place to begin your inquiry. Although you may have strong relationships with your grantees, they’re not likely to tell you the whole truth unless you ask specific questions in a format that allows them to comment anonymously. For example, I worked with a foundation to ask very detailed questions about their budget forms. Grantee comments pointed out confusing sections, which the foundation was then able to clarify. They also decided to stop using budget forms for general operating support grants after receiving consistent feedback that the forms didn’t work for organizations’ budgets.

Ask in conversation. You can also get great insight through focus groups or individual conversations by asking directly for feedback after you’ve granted funding, or when there’s no funding on the table. Again, specificity is critical. “How was the process?” will not get the same type of useful response as, “Did you run into any issues using our online system? How would you suggest we improve it?”

One experienced grantseeker was asked by a funder to review their application, question by question, in a private conference call. She reported that her feedback made a difference; the funder later modified or eliminated requirements that were particularly difficult to manage.

Ask as part of ongoing learning and improvement. Grantseekers will appreciate and respect your interest in improving the grantmaking process, and they will be more inclined to be honest if they know you have a plan to use their constructive comments.

No matter how you do it, asking for input on these practices shows nonprofits that you recognize that applying for and reporting on grants carries an administrative burden. It tells them that you’re serious about minimizing unnecessarily labor-intensive tasks. It also tells them that you’re conscientious about your own learning process and improvement as a funder.

You may need to ask, and ask again, but eventually your nonprofit partners will understand that you truly want to know what they think—especially if you post your questions and share their input publicly. And they will thank you for it.

Tell us about your experiences seeking feedback from applicants and grantees. What worked well for you, and what did you learn?

-- Jessica Bearman

Make Success Open Source, and Bring on the Competition!
October 29, 2013

(Eric Stowe is the founder and director of Splash, an international nonprofit working on smart solutions to the water crisis in developing countries.)

Greater transparency and open source sharing could accelerate the pace of social sector change, but few organizations are able to take this thinking forward. I recently wrote a piece for the Stanford Social Innovation Review wherein I suggested that successful organizations in the social sector could finally start to see real traction and systems change if, and when, we open up our internal business strategies to competitors.

Help Entrepreneurs Use Proven Models

The belief behind this is that no one organization, or even a handful, can solve the massive problems we are fighting. If we open-sourced our work and enabled some of the brilliant entrepreneurs out there to start at step 20 instead of step 1, we would ultimately advance our causes for the better.

If the end goal is true scale toward a solution, not an organization’s scale toward perpetuity, then we need to get a fraction of the growing pool of amazing social innovators away from focusing on the newest unproven solutions—continually building new starting lines toward untested finish lines.

Instead, we should encourage the sector as a whole to make success open source and scale what works. Why? Because no single group has effectively taken proven solutions to global scale to eradicate the very problem(s) they started out to conquer. I believe this trend will continue unless we methodically and systematically promote theft of our proven and successful models.

Lest it be seen simply as NGO naiveté, I am actually a fan of the market side of the equation. My argument ultimately advocates for more solid competition in the sector, not less. If someone can best us at our game (which they most assuredly can), and force us to either step it up or be put out of business (which they absolutely might), that is a net gain.

Use Risk Capital to Make Success Open Source

How do we take this further, from dialogue to action?

In funding terms, when we collectively talk about “scaling impact,” it usually means “scaling an organization’s footprint.” To funders, I say that lone organizations in any sector simply don’t have, nor have they ever had, the resources to pull it off. Funders should no longer expect single implementers to carry the burden alone, nor should they accept the claims of organizations that say they can.

In my conversations with successful implementing organizations, most have stated they would be willing to promote imitation of their models by competent third parties. But funding it is incredibly tricky, if not outright impossible, in a field where most grants go the traditional route of project-by-project funding—which leaves little or no room in the budget to strategically document our respective paths to success and, more importantly, promote their imitation by separate organizations.

If willing organizations could marry their openness to this concept with donors willing to bundle a bit of risk capital in their larger grants, it would open up the space to try this out and catalyze second-mover advantage, rather than hinder it.

Invest in Imitation and Move Toward Real Scale

But what happens if the imitation of a successful model is weak? The donors who will fund global solutions at scale can sniff out the difference between a weak approximation of the gold standard and the real thing. This should mitigate risk for donors at every level within the funding spectrum and ensure that the overall drag from anemic imposters doesn’t result in a net decrease in efficiency, reach, or quality. It will certainly take time to standardize and evaluate the growth of imitators—with all sorts of speed bumps along the way. Sadly, time is on our side, since we aren’t currently solving any singular problem on our own.

From an implementer’s perspective, this is scary ground to cover, because it has the potential to put our brand, our reputation, and our hard-won success at risk. Yet it is terribly exciting when we think of finally scaling our solutions rather than continually locking them down and walling them off.

For my part, it would be easy to preach this indefinitely without ever acting on it. To avoid that, in the coming years I will try to push my own organization, Splash, to invest up to 5% of our annual funding to nurture second-mover advantage. This will include rigorously documenting and standardizing our strategies, as well as bringing “competitors” into our office to learn our work from front to back—with the intention that we will start to see real growth beyond my own organization's abilities and reach. I believe that committing 5% toward nurturing imitation will go much further toward real scale than devoting that same amount to our own program growth ever could.

If we get throttled and crushed in the process by a group that is quicker, smarter, and sharper than us—so be it. To them I say, “Bring on the competition!”

-- Eric Stowe

Advancing Social Media Measurement for Foundations: A Re-Cap (Part One)
May 29, 2013

(Beth Kanter is a Master Trainer and the author of Beth's Blog, one of the longest-running and most popular blogs for nonprofits, and co-author of the highly acclaimed book The Networked Nonprofit, published by J. Wiley in 2010, and its new follow-up, Measuring the Networked Nonprofit, published in 2013 by J. Wiley.)

Last month, I was invited to participate in a meeting organized by the Robert Wood Johnson Foundation called Advancing Social Media Measurement for Foundations, where I presented on the State of Nonprofit Social Media Measurement. The participants were a cross-disciplinary group: foundation staff working in evaluation, communication, social media, and programs, as well as nonprofit staff and consultants specializing in evaluation, social media, network analysis, and data science.

We had two working sessions where we focused on defining outcomes, strategies, key performance metrics, and measurement methods for five outcome areas that may be common to many foundations’ communications strategies, including transparency, a topic to which KD Paine and I devoted an entire chapter in our book, Measuring the Networked Nonprofit.

Transparency is a developing practice for nonprofits and their funders, and the field of measurement of transparency for foundations and nonprofits is embryonic. To be transparent means that a foundation is open, accountable, and honest with its stakeholders and the public. Transparency exists to a lesser or greater extent in all organizations. Greater transparency is a good thing, not just because it is morally correct, but because it can provide measurable benefits.

Measurable Benefits of Transparency
Transparency helps an organization by engaging its audiences and by speeding the processes of learning and growing. Transparency helps foundation programs improve in ways they might not otherwise. Two of transparency’s readily measurable benefits are increased efficiency and increased stakeholder trust, commitment, and satisfaction.

Increased efficiency: Transparency makes organizations more efficient because it removes the gatekeeping function, which not only takes extra time but can be an exhausting way to work. When foundations are working transparently, problems are easier to solve, questions are easier to answer, and stakeholders’ needs are met more quickly.

Increased trust, satisfaction, and commitment: Dr. Brad Rawlins’ research has demonstrated that increased organizational transparency is directly tied to increases in trust, credibility, and satisfaction among the organization’s stakeholders. He sees a key benefit of transparency as “enhancing the ethical nature of organizations in two ways: first, it holds organizations accountable for their actions and policies; and second, it respects the autonomy and reasoning ability of individuals who deserve to have access to information that might affect their position.” (Rawlins is the Dean of the Communications Department at Arkansas State University.)

Now that we have defined what working transparently means and what its benefits are, what is the method for measuring it?

In the second part of my post tomorrow, I will outline the best approaches to measuring the value of transparency to your foundation.

--Beth Kanter

Valuing Beneficiary Feedback: Promoting Foundation Accountability and Programmatic Outcomes by Incorporating Recipient Assessments into Decision-Making
May 9, 2013

(Emily Keller is an editorial associate in the Corporate Philanthropy department at the Foundation Center.)

Foundation leaders who want to increase the accountability of their work should consider supporting efforts to solicit feedback from beneficiaries, say three experts in the field of conducting recipient assessments.

To succeed, the feedback must be representative, actionable, systematic, and comparable, said Fay Twersky, Phil Buchanan, and Valerie Threlfall in a webinar presented last month by the Stanford Social Innovation Review (SSIR). The webinar was based on the article, "Listening to Those Who Matter Most, the Beneficiaries," written by the webinar speakers and published in the spring 2013 issue of SSIR.

Twersky, who is the lead author of the article and the director of the Effective Philanthropy Group at the William and Flora Hewlett Foundation, said gaining knowledge from beneficiaries, in addition to experts and crowdsourcing, is a moral issue and a smart thing to do to achieve effective program results. "I think if this is important to practitioners to listen systematically and to do it well, it will be important to funders who are responsive to their grantees...I don't think we have done a good job as funders of listening to those voices. I think we can do a lot better," she said.

A 2011 survey of CEOs by the Center for Effective Philanthropy (CEP) showed that just 19 percent of responding foundations use beneficiary focus groups or convenings to assess the effectiveness of their foundation's programmatic work, and 16 percent use beneficiary surveys to do so. Those who collect this information reported having "a better understanding of the progress their foundation is making against its strategies" and "a more accurate understanding of the impact the foundation is having on the communities and fields in which it works."

So why aren't more foundations supporting programs to solicit beneficiary feedback? The webinar speakers examined the issue by discussing benefits and success stories, challenges and criticisms, and best practices for establishing a feedback system.

Benefits and Success Stories
Twersky, Buchanan, and Threlfall drew on their experiences as co-founders of YouthTruth, a nonprofit organization that administers online surveys to high school students across the country, to illustrate an effective system for gathering beneficiary feedback. YouthTruth was created in 2008 by CEP, in collaboration with the Bill & Melinda Gates Foundation, to bring student opinions into schools' decision-making processes. The questions focus on student engagement, school culture, student-teacher relationships, rigor of classes and instruction, and preparedness for the future. Two hundred thousand students from 275 schools have answered the survey, and 85 percent of participating administrators say they have used the data to make policy or programmatic decisions.

"In the case of YouthTruth, we have seen real benefits — courageous students and schools that have participated in the process — in that it has really opened up new areas for discourse and I think changed both adults' and students' expectations about their involvement in decision-making," said Threlfall, a senior advisor at YouthTruth.

In the healthcare sector, the push for using beneficiary feedback to improve outcomes has focused on patient-centered and accountable care, as measured through initiatives such as the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The publication of "Crossing the Quality Chasm: A New Health System for the 21st Century" by the Institute of Medicine in 2001 spurred an increase in the collection and use of recipient feedback for this purpose, the authors explained. The Patient Protection and Affordable Care Act continues in this direction with goals for measuring patient experiences during hospital stays and incorporating consumer assessments in determining insurance reimbursements. "When patients have better communication with providers, and when they understand treatment options and feel that they have some say in their own care, they are more likely to follow a treatment regimen and improve their health," the authors wrote in the SSIR article.

They pointed to work by the Cleveland Clinic to increase nurse check-ins as a result of patient feedback, as part of a successful quest for a 90 percent satisfaction rate using HCAHPS; and an initiative by the California HealthCare Foundation to gather feedback from patients outside the commercially insured population that has been the traditional focus of data collection.

The education and healthcare sectors provide unique opportunities for collecting feedback, as the service structure enables providers to track recipient populations and compare information across institutions, the speakers said.

Criticism and Challenges
Despite positive outcomes, experts say that beneficiary feedback is under-utilized by philanthropic organizations for many reasons. According to the CEP study, 73 percent of the responding foundations provided some funding to assist grantees in understanding the effectiveness of their programs, but only 9 percent did so for all of their grantees.

The speakers identified a range of challenges that may explain why beneficiary data is not more commonly used. According to Buchanan, who is president of CEP, feedback can be expensive and difficult to gather, particularly from hard-to-reach populations, and power dynamics between recipients and service providers can be a barrier to candid responses and high response rates. Twersky cited low literacy rates, trust issues among vulnerable populations, and limited access to technology as other potential barriers. Allocating funding for surveys may be viewed negatively as an administrative cost that takes resources away from the direct provision of services. Another criticism is that placing an increased focus on outcome metrics can impede innovation and risk-taking. And although recipients may be viewed as customers of a business, they are not the ones paying for the services they receive, the speakers explained, which makes it easier for service providers to overlook their opinions.

Best Practices for Gathering and Incorporating Beneficiary Feedback
Twersky, Buchanan, and Threlfall offered a series of recommendations for collecting and utilizing beneficiary feedback effectively, including the following:

  • Seek Feedback When it Matters

The speakers recommended initiating the survey process before or during a program rather than only after it ends, enabling the data to have the most impact. Twersky compared this to leading indicators used in business.

  • Design Surveys for Impact

The speakers recommended developing a process to integrate feedback early on and to consider the use of focus groups before or after administering surveys, as well as establishing requirements for a high response rate. Buchanan stressed the importance of detailed survey design and methodology and suggested working with consultants or providers if necessary.

  • Strive for Candid, Representative Responses

Threlfall made the recommendation: "Check for non-responder bias to make sure certain populations aren't left out." This requires cultural awareness of the population being surveyed. For example, with smallholder farmers, "Men tend to have disproportionately more access to mobile phones than women, so whose voices are we hearing?" Twersky noted. (A minimal sketch of such a check follows this list.)

  • Prepare for Negative Results

In a YouthTruth video shown in the webinar, Dr. Brennon Sapp, principal of Scott High School in Taylor Mill, KY, described receiving difficult feedback. "One question that will haunt me to my grave is the question that was ‘do your teachers care about you?' We rated bottom 1% in the nation on that one specific question and it really hit hard. It's hard to swallow, hard to take, hard for my teachers to take," he said in the video. Following the survey, Sapp said he worked with staff to change policies and shift the culture of the school, which led to a decline in the failure rate and increased teacher intervention.

  • Collaborate with Other Organizations

Working with other groups enables providers to generate comparative data. In the CEP survey, 26 percent of respondents said they were currently using coordinated measurement systems with other funders in the same issue area, and 23 percent were considering doing so. According to Twersky, developing benchmarks through collaboration will help organizations to interpret the data.
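
The "check for non-responder bias" recommendation above can be made concrete with a few lines of analysis. The sketch below is illustrative only and is not drawn from the webinar or from YouthTruth's tooling; the column names ("gender", "responded") and the values are hypothetical stand-ins for whatever fields a real outreach list contains. The idea is simply to compare who was invited with who actually answered.

```python
# Hypothetical non-responder bias check: compare the demographic mix of
# survey respondents with the mix of everyone who was invited.
import pandas as pd

# One row per person invited to take the survey (illustrative data only).
invited = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "M", "F", "M", "M"],
    "responded": [True, True, False, True, False, False, True, True],
})

# Response rate by subgroup: large gaps suggest some voices are missing.
print(invited.groupby("gender")["responded"].mean())

# Share of each subgroup among invitees vs. among respondents.
invited_mix = invited["gender"].value_counts(normalize=True)
respondent_mix = invited.loc[invited["responded"], "gender"].value_counts(normalize=True)
print(pd.DataFrame({"invited": invited_mix, "respondents": respondent_mix}))
```

If the respondent mix diverges sharply from the invited mix, weighting the results or following up through a different channel (for example, in person rather than by mobile survey) can help restore representativeness.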

Beneficiary Assessments in the "Big Data" Movement
Despite its inherent difficulties, beneficiary feedback is poised for growth as a method for measuring performance and accountability within the social sector movement toward "big data." The push for evidence-based social programs utilizing impact evaluations was echoed in the Obama Administration's launch of the Social Innovation Fund (SIF) in 2009 to provide million-dollar matching funds to nonprofit organizations chosen by grantmakers that are working in the areas of economic opportunity, healthy futures, and youth development. The SIF's "key characteristics" include: "Emphasis on rigorous evaluations of program results not only to improve accountability but also to build a stronger marketplace of organizations with evidence of impact."

While some challenges will remain insurmountable — as Twersky pointed out during the webinar, "When the intended beneficiary is the earth, how do we listen to the earth?" — there are more than enough resources available to start speaking to more of its inhabitants today.

What are the biggest challenges your organization has faced in collecting and incorporating beneficiary feedback into decision-making? What role should recipient assessments play in the "big data" movement? How can foundations and nonprofits use beneficiary feedback to enable greater accountability and effectiveness? Please provide your comments below.

-- Emily Keller

Social Media, So What? RWJF Tackles How to Answer the Social Media, So What Question
April 17, 2013

(Debra Joy Perez (@djoyperez) is currently serving as Interim Vice President of Research and Evaluation at the Robert Wood Johnson Foundation.)

Last year, after Steve Downs shared an overview of the Robert Wood Johnson Foundation’s (RWJF) social media strategy, we hosted a series of interviews with RWJF staff members about how social media, and more broadly the transparency and participation they offer, are adding new and critical dimensions to their work. The first case study, on social networking as a learning tool, is available here. The second, on experimenting with different social media channels as catalysts for collaboration, is available here. The third, on leveraging social media to expand networks, is available here.

The latest post offers perspective on how the use of these tools—which have become essential to our communication efforts—can be measured to reflect the impact of our work, and how that measurement can be rooted in the context of our social change goals.

Q: Let’s start with a glimpse into a day in the life of your work at the Foundation in light of all these new technologies. Why are metrics important to RWJF?
A: RWJF has a 40-year history of developing evidence-based programming. We are known nationally and internationally for our research and evaluation work. Yet, as new ways to advance our goals in health and health care become more reliant on technology, we struggle with measuring success and accountability.

Since 2009, RWJF has been incorporating Web 2.0 technology into our everyday work. Visitors to our website have seen this since our September redesign, which added more social sharing tools across the site. We also invite conversation about how to advance health and health care on Twitter and Facebook, and we produce content that serves the needs of various online communities.

We can clearly see and have made projections about the future value of social media. Social media can help us create social change and build movements around the causes that we care deeply about. We have learned many key lessons from initiating this work guided by our principles of openness, participation, and decentralization. Specific lessons include:

  • Personal outreach matters;
  • Responsiveness to requests for engagement is important;
  • Criticism can lead to healthy dialogue;
  • Make engagement easy and simple; and
  • Engagement takes work and dedicated resources.

These takeaways suggest that each of these principles requires concerted effort and conversations about policies and processes for achieving the intended goals. With each social media campaign, we must be explicit about expectations. Social media metrics are an essential part of our efforts at RWJF. We need measurement to help us meet those expectations. Measurement also helps us continually improve our use of social media to achieve our broader social change goals.

Q: What does an effective and efficient social media campaign look like?
A: So where do you start? You might begin by acknowledging what you are already doing in social media and celebrating that. Do you have a Facebook page, an organizational presence on Twitter, an account on Tumblr? Conduct an inventory of what you are doing as an organization, as well as of the engagement by individual staff. Do staff use social media in their jobs? Have they been able to extend their reach? Does your organization regularly appear on relevant blogs?

As you do this, you might start to recognize how much you don’t know. BUT don’t let the “not-knowing” stop you.

  • Have an explicit dialogue about your goals: what are you trying to accomplish with your social media efforts? For example, what is the purpose of tweeting something, and what action do you want an individual to take? Although click-through is not itself an outcome, in my view it is a process measure.
  • Identify your networks. You probably already have more of a network than you recognize (see The Networked Nonprofit  by Kanter).
  • Schedule a formal in-house discussion about the value proposition. Talk to those who use social media now and those who don’t. Don’t expect everyone to tweet; some are better long-form writers and therefore might be better suited to blogging.
  • Establish data points for measuring the impact of what you do.
  • Provide unique URLs for product releases and then test URL placements against each other (A/B testing) to see which one is more effective (see the sketch just after this list).
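
To illustrate the A/B comparison in the last bullet, here is a minimal sketch (not RWJF's actual tooling): two placements of a uniquely tracked URL are compared on click-through rate with a two-proportion z-test. The click and impression counts are invented for the example.

```python
# Hypothetical A/B test: did placement B earn a meaningfully higher
# click-through rate (CTR) than placement A?
from math import sqrt
from statistics import NormalDist

clicks_a, impressions_a = 120, 4000   # unique URL in placement A
clicks_b, impressions_b = 165, 4100   # unique URL in placement B

rate_a = clicks_a / impressions_a
rate_b = clicks_b / impressions_b

# Pooled CTR under the null hypothesis that the placements perform equally.
pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
z = (rate_b - rate_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"CTR A = {rate_a:.2%}, CTR B = {rate_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
```

The same arithmetic works for any pair of variants (tweet copy, subject lines, landing pages); the important part is deciding up front what counts as the desired action and how large a sample is needed before reading anything into the difference.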

Ultimately, discuss to what end you are using social media. Social media is another tool to achieve larger goals. While it can be a very powerful tool, it should not be mistaken for an end in itself.

Q: What is the expected ROI for social media?
A: We believe social media can have a profound effect on expanding our reach, as more people are building trusted networks of individuals and organizations and engaging online. Appropriate use of social media channels helps us put the right information and the right tools into the hands of our health and health care advocates (also known as message evangelists). You then start to see how making data accessible in new ways, such as interactives, data visualizations, and infographics, enables us to illustrate key points for case-making and building awareness.

Because social media is a vehicle through which ideas can be generated, tested, built upon, and spread, we believe it is worth measuring. However, while a plethora of ready-to-use analytical tools is crowding the market, the challenge will be to avoid the “low-hanging fruit” trap of measuring activity over action. If we do our job correctly, we will be able to use social media metrics to say what works and what doesn’t, and to distinguish online from offline impact.

Q: What is the current state of the field for measuring social media? Where do we go from here?
A: The potential power of social media is undeniable, and we are looking for ways to continue to test our assumptions about what we are producing. For example, by watching others comment on Twitter about our work, we not only gain a better sense of how we are being understood; the comments also serve as a kind of content analysis of the impact we are having. By monitoring a recurring Twitter chat, we can hear whether our meaning and intention are influencing the discussion in the way we desire.

As the unit responsible for measuring the impact of our work, we regularly ask ourselves: What are we using social media for? Who are our target audiences (segmented, as well as aggregated)? (The ability to diversify our networks is a huge value to RWJF; developing metrics that include the demographics of our audiences is an important part of the measurement effort.) What is the expected action or behavior we wish to see? How do we measure behavior change? How can we best go beyond measuring online activity (page views, unique visitors, tweets, and re-tweets) to measuring offline action and policy change?

This is the key challenge for philanthropy today: assessing an effective and efficient social media campaign. As a foundation accountable to our Board and the public, we must have standards for our investments in social media just as we do for our programmatic investments. We ought to be able to answer the so-what question for investing staff time and talent in social media campaigning. As a sector, we are becoming much more sophisticated in our use of communications to advance our work, and looking at ways to measure social media should fit within the framework of measuring communications broadly. Even as the task of identifying communications indicators is challenging, social media lends itself well to being more precise and thus measurable.

In order to engage the field in a dialogue on social media measurement, RWJF is hosting a national convening of experts in three domains: evaluation, communications, and social media. The April convening will produce a set of indicators on five Foundation-focused outcomes:

1. Our foundation is viewed as a valuable information source.

2. Our foundation is viewed as transparent.

3. Lessons are disseminated, multiplying impact beyond our foundation’s reach.

4. Public knowledge, advocacy, influence, and action are increased in strategic areas.

5. Our networks strengthen and diversify.

We invite you to help us advance the field of social media measurement. Please follow the hashtag #SM_RE on Twitter for conversations stemming from the social media measurement meeting this month, including a live Twitter chat on April 18 at 3 p.m. EDT, as we continue to move the field forward in using data to evaluate and assess the impact of our work.

-- Debra Joy Perez

Small World, Big Data: 2012 Aid Transparency Index Released
October 26, 2012

(David Hall-Matthews is the Director of Publish What You Fund, the global campaign for aid transparency. He was previously a Senior Lecturer (Associate Professor) in International Development at the University of Leeds, where he focused on governance and accountability, including food security, democracy, corruption, colonial administration and the global political economy of development.)

I am pleased to have the chance to write for the Glasspockets blog, as I believe it is a powerful online resource that can help increase the understanding of best practices in foundation transparency and accountability. Its products, such as the Transparency Heat Map and Eye on the Giving Pledge, demonstrate the different areas of philanthropic giving that need to be made more transparent.

Our recently released 2012 Aid Transparency Index shows a slow but steady improvement in global aid transparency, but it also finds that most aid information is still not published. For those not familiar with the Aid Transparency Index, it is a tool used to monitor the transparency of aid donors across 43 different indicators, to track progress and encourage further transparency.

Produced annually, this year’s Transparency Index ranks a total of 72 donors – a combination of bilateral and multilateral agencies, climate finance funds, humanitarian agencies, development finance institutions and philanthropic foundations. This year’s average transparency score rises to 41 per cent – a modest rise of seven percentage points from 2011.
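
For readers curious about how an index like this boils down to a single percentage per donor, the sketch below shows the general arithmetic: score each indicator, average a donor's indicators into a donor score, then average donor scores across the sample. It is an equal-weight simplification for illustration only; the published Index groups and weights its 43 indicators, and the donor names and values here are invented.

```python
# Illustrative (equal-weight) transparency scoring; not the official
# Aid Transparency Index methodology, which weights and groups indicators.
indicator_scores = {
    # donor -> per-indicator scores in [0, 1] (1 = information fully published)
    "Donor A": [1, 1, 0.5, 0, 1],
    "Donor B": [0, 0.5, 0, 0, 1],
}

donor_scores = {
    donor: 100 * sum(scores) / len(scores)
    for donor, scores in indicator_scores.items()
}
average = sum(donor_scores.values()) / len(donor_scores)

for donor, score in sorted(donor_scores.items(), key=lambda kv: -kv[1]):
    print(f"{donor}: {score:.0f}%")
print(f"Average transparency score across donors: {average:.0f}%")
```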

The UK Department for International Development and the World Bank became the first two organisations to ever receive a ‘good’ rating. Six organisations – including the African Development Bank – also rose in 2012 to join nine others in the ‘fair’ category.

Unfortunately, the ‘poor’ and ‘very poor’ groups still contain nearly half of all organisations surveyed – including some of the world’s largest donors, such as France.

In addition to ranking donors, the Index urges all donors to sign and implement the International Aid Transparency Initiative (IATI), which offers a global common standard for publishing aid information. Foreign assistance published to this standard is shared openly in a timely, comprehensive, comparable and accessible way.

As mentioned above, this year we included private foundations in our Index, to recognise the role that foundations play in aid and development. We decided to include the William and Flora Hewlett Foundation and the Bill and Melinda Gates Foundation in the ranking for specific reasons. The Hewlett Foundation is a prime mover behind IATI, as one of the original signatories and the second organisation to publish to the IATI Registry. The Gates Foundation was included because of its size and impact.

The Hewlett Foundation performed moderately well, increasing its score by 7 percentage points from 2011 to hold 31st place. Hewlett performed well at the activity level, tying for 18th overall, due to its regular publication of project-level information to IATI.

The Gates Foundation likewise performed moderately, ranking 33rd and scoring above the average for all donors. It performed consistently across all indicators, posting above average scores on the country and activity level indicators. Most information is found in a searchable, comprehensive grants database that could be converted to the IATI format – although it has yet to sign IATI.

While we understand the specific challenges that foundations face, particularly around safety for the NGOs they fund, they must publish to the IATI Registry as with any other development assistance donor. It is important to note that smaller foundations, such as the Indigo Trust, are already publishing to IATI.

Nine of the top 16 ranking donors in our Index have begun publishing to IATI, significantly improving the availability of timely and comparable information. Some of the biggest increases in rankings can be attributed to donors, such as Australia, publishing information via IATI.

But as the Index shows, there continues to be too little readily available information about aid, undermining the efforts of those who both give and receive it. All donors should publish ambitious implementation schedules, in line with the commitments made at the 4th High Level Forum on Aid Effectiveness in Busan, to start publishing to the IATI registry in 2013. This timeline is essential if donors are to deliver on their Busan commitment of full implementation by December 2015.

The work of organisations like the Foundation Center is crucial to improving the transparency of foreign assistance and to increasing understanding of the role of foundations in international development.

The Gates Foundation has been reviewing what approach to take in its own transparency policies, and we hope it will decide to publish to IATI as well, so that its current activities can be seen alongside the work of other foundations, NGOs and official donors. When all donors publish consistently to a common standard, it will help to improve – and demonstrate – the value of their aid. It will also help to encourage other, newer donors to improve their transparency.

For aid to be fully transparent, donors must publish information to IATI. Only then can aid and related development activities be made truly effective, efficient and accountable.

--David Hall-Matthews

What is Effectiveness in Foundation Work?
December 14, 2011

(Bill Somerville is Executive Director of the Philanthropic Ventures Foundation. This post is a response to a session on Foundation Transparency and Effectiveness, held in San Francisco, December 6, 2011, by the Center for Effective Philanthropy and the Foundation Center.)

Foundation critics say it isn't enough to have passion and caring about your work. You need to be effective. Maybe the retort is that you can't be effective unless you have passion and caring in your work. Nonetheless, what does effectiveness mean?

At PVF, effective means getting out of the office and finding people doing outstanding work -- and funding them. It means trusting these people and giving them money to spend at their discretion, without requiring them to spend 25+ hours applying for funds, regardless of whether there is a common application form, as was advocated at the session. It means not holding foundation processes sacred, getting money to people when they need it, and not making them wait months for a decision.

Do transparency and glass pockets help effectiveness? I don't know. What difference does it make for people to know foundation salaries? If it does make a difference, then we are talking about accountability, not effectiveness. Is the foundation accountable in being efficient, frugal, responsible, responsive, and productive?

Foundations have a special place in the community in that they are answerable to themselves. They are independent and have maximum latitude to do their work. They have a unique asset in that their money is not political, is not in competition with anything or anyone, and comes with no ax to grind. So, what are the factors of excellence in the exercise of philanthropy? It is a question foundation personnel should ask themselves every day.

One is leadership. Foundations should exercise leadership in their willingness to venture where others haven't gone, to take risks, to think into the future rather than indulge themselves in endless paper. A leader is one who brings out the best in others. Isn't this what foundations should be doing?

Another factor of excellence is modesty. Money is the tool of philanthropy and money is power. Foundation personnel must understand that it is not their money nor is it their power. Foundations are investing funds in people and programs worthy of the investment. They are not "giving money away."

This commentary is meant to create a dialogue and stimulate other people to add their thoughts on what makes for effectiveness.

-- Bill Somerville

About Transparency Talk

  • Transparency Talk, the Glasspockets blog, is a platform for candid and constructive conversation about foundation transparency and accountability. In this space, Foundation Center highlights strategies, findings, and best practices on the web and in foundations, illuminating the importance of having "glass pockets."

    The views expressed in this blog do not necessarily reflect the views of the Foundation Center.

    Questions and comments may be directed to:

    Janet Camarena
    Director, Transparency Initiatives
    Foundation Center

    If you are interested in being a guest contributor, contact: glasspockets@foundationcenter.org
