Transparency Talk

Category: "Sharing" (62 posts)

Evolving Towards Equity, Getting Beyond Semantics
December 17, 2018

Mona Jhawar serves as learning and evaluation manager for The California Endowment.

In my previous post, I reflected on The California Endowment’s practice of conducting a Diversity, Equity, and Inclusion (DEI) Audit and how it helps us to stay accountable to intentionally integrating and advancing these values across the foundation.

We started this practice with a “Diversity and Inclusion” Audit in 2008 and as part of our third audit in 2013, The California Endowment (TCE) adjusted the framing to a “Diversity, Equity, and Inclusion” Audit. This allowed us to better connect the audit with how the foundation viewed the goals of our strategy and broadened the lens used through the audit process.

While this could be viewed as a merely semantic update prompted by changes in the nonprofit and philanthropic sectors, by 2016 our audit results reflected how TCE describes both our core values, which lead with principles of DEI, and the ultimate outcome of our work, which points toward health equity and justice for all. And although we didn’t make a corresponding change to what the audit specifically assesses, select findings from our most recent audit highlight how not only diversity but also equity is being operationalized within the foundation.

Getting beyond the numbers

In some ways, the most straightforward entry point for DEI discussions is to first examine diversity by assessing quantitative representation within the foundation at the board and staff levels and among our partners, contractors, vendors, and investment managers. Though it’s a necessary beginning, reporting and reflection cannot stop with counting heads. While our audit may have started as a way to gauge inclusion through the lens of diversity, it’s become clear that collecting and examining demographic data sets the stage for critical conversations to follow.

Part of the inherent value of reflecting on diversity and representation is in service of getting beyond the numbers to discover what questions the numbers inspire. Questions such as:

  • Who’s missing or overrepresented and why?
  • What implications could the gaps in lived experiences have on the foundation, the strategies used and how our work is conducted?
  • What are the underlying structures and systems that shape the demographics of the foundation and of the organizations with which we partner?

It’s these types of questions about our demographics and diversity that help move us beyond discussions about representation into deeper discussions about equity.

The audit has been a valuable point of reflection and action planning over the past several years. It’s a comprehensive process, conducted in partnership with the evaluation firm SPR, that spans an extensive number of sources.

Towards Equity and Inclusion

As TCE pursues our health equity goals, we’ve been able to define and distinguish key differences between diversity, equity, and inclusion. While diversity examines representation, we define equity as promoting fair conditions, opportunities, and outcomes. We also define inclusion as valuing and elevating the perspectives and voices of diverse communities so they are considered where decisions are being made. For future audits, we’re looking to refine our DEI audit goals to focus more explicitly on equity and inclusion in our grantmaking and to examine our internal policies, practices, and operations even more deeply. In the meantime, here are a few examples from our latest audit that highlight how equity and inclusion currently show up across the foundation outside of our grantmaking.

Equity in hiring

  • Recognizing the impact of structural racism and mass incarceration, TCE followed the lead of partners working to “ban the box” and the Executives’ Alliance for Boys and Men of Color to change hiring practices. TCE now utilizes a Fair Chance Hiring Policy that opens the door for hiring qualified applicants with a conviction or an arrest and shares open positions with anti-recidivism organizations.

Inclusion and equity in investments

  • In the spirit of inclusion, the criteria for our Program Related Investments (PRIs) integrate whether the PRI will engage the community it is intended to benefit as well as whether the investment will address a known health inequity or social determinant of health.
  • In recognition of structural racism leading to higher rates of incarceration within communities of color, in 2015 TCE announced that we will no longer invest in companies profiting from for-profit prisons, jails, or detention centers.

Equity in vendor selection

  • Operationalizing equity also requires considering how facility operations align with organizational values. In line with our divestment from for-profit prisons, an RFP process identified a vendor-nonprofit team that encouraged hiring formerly incarcerated and homeless community members within our onsite café. We remain committed to this approach.

The Work Ahead

We’ve accomplished a great deal. At the same time, as we evolve towards becoming an equity organization, there are areas that need more of our attention.

To move beyond articulating values and toward deeper staff engagement, audit feedback suggests that more resources are needed to connect individual functions and roles to our DEI values, particularly among non-program staff, including through our performance review process.

Connected to developing a broader vision regardless of department affiliation, we will soon engage staff across the entire organization to develop a more deeply shared racial equity analysis of our work. As part of this effort, our board is participating in racial equity trainings and has adopted a resolution to use a racial equity lens as the foundation develops our next strategic plan. Building on what we’re learning through our audits, in 2019 we’ll launch this effort to become a racially equitable health foundation that intentionally brings racial equity to the center of our work and how we operate.

Finally, as we continue to partner with and support community in the fight for equity, there are several unanswered, pressing questions we’ll need to tackle within the walls of the foundation:

  • How do we hold ourselves to the same equity and inclusion principles that our partners demand of system leaders?
  • How do we confront the contradictions of operating as an organization rooted in a corporate, hierarchical design while seeking to share power with staff regardless of position, increase decision-making transparency, and include those affected by pending decisions in the same way we ask our systems leaders to include and respond to community?
  • With an interest in greater accountability to equity and inclusion, how do we not only tend to power dynamics but consider greater power sharing through foundation structures and current decision-making bodies both internally and externally?

Herein lies our next evolutionary moment.

--Mona Jhawar

Putting a Stop to Recreating the Wheel: Strengthening the Field of Philanthropic Evaluation
December 13, 2018

Clare Nolan is Co-Founder of Engage R+D, which works with nonprofits, foundations, and public agencies to measure their impact, bring together stakeholders, and foster learning and innovation.

Meg Long is President of Equal Measure, a Philadelphia-based professional services nonprofit focused on helping its clients—foundations, nonprofit organizations, and public entities—deepen and accelerate social change.

In 2017, Engage R+D and Equal Measure, with support from the Gordon and Betty Moore Foundation, launched an exploratory dialogue among funders and evaluators to discuss the current state of evaluation and learning in philanthropy, explore barriers to greater collaboration and impact, and identify approaches and strategies to build the collective capacity of small and mid-sized evaluation firms. Our goal was to test whether there was interest in our sector in building an affinity network of evaluation leaders working with and within philanthropy. Since our initial meeting with a few dozen colleagues in 2017, our affinity network has grown to 250 individuals nationally, and there is growing momentum for finding ways funders and evaluators can work together differently to deepen the impact of evaluation and learning on philanthropic practice.

At the recent 2018 American Evaluation Association (AEA) conference in Cleveland, Ohio, nearly 100 funders and evaluators gathered to discuss four action areas that generated the most “buzz” at our previous network convening at the Grantmakers for Effective Organizations (GEO) conference and in our subsequent network survey:

1. Improving the application of evaluation in philanthropic strategy and practice.

2. Supporting the sharing and adaptation of evaluation learning for multiple users.

3. Supporting formal partnerships and collaborations across evaluators and evaluation firms.

4. Strengthening and diversifying the pipeline of evaluators working with and within philanthropy.

We asked participants to choose one of these action areas and join the corresponding table discussion to reflect on what they had learned about the topic and identify how the affinity network can contribute to advancing the field. Through crowd-sourcing, participants identified some key ways in which the action teams that will launch in early 2019 can add value to the field.

1. What will it take to more tightly connect evaluation with strategy and decision-making? Provide more guidance on what evaluation should look like in philanthropy.

Are there common principles, trainings, articles, case studies, guides, etc. that an action team could identify and develop? Could the affinity network be a space to convene funders and evaluators that work in similar fields to share evaluation results and lessons learned?

2. What will it take to broaden the audience for evaluations beyond individual organizations? Create a “market place” for knowledge sharing and incentivize participation.

As readers of this blog will know from Foundation Center’s #OpenForGood efforts, there is general agreement around the need to do better at sharing knowledge, building evidence, and being willing to share what foundations are learning – both successes and failures. How can an action team support the creation of a culture of knowledge sharing through existing venues and mechanisms (e.g., IssueLab, Evaluation Roundtable)? How could incentives be built in to support transparency and accountability?

3. How can the field create spaces that support greater collaboration and knowledge sharing among funders and evaluators? Identify promising evaluator partnership models that resulted in collaboration and not competition.

Partnerships have worked well where there are established relationships and trust and when power dynamics are minimized. How can an action team identify promising models and practices for successful collaborations where collaboration is not the main goal? How can they establish shared values, goals, etc. to further collaboration?

4. What will it take to create the conditions necessary to attract, support, and retain new talent? Build upon existing models to support emerging evaluators of color and identify practices for ongoing guidance and mentorship.

Recruiting, hiring, and retaining talent to fit evaluation and learning needs in philanthropy is challenging, given the limits of current education and training programs as well as changing expectations in the field. How can we leverage and build on existing programs (e.g., AEA Graduate Education Diversity Internship, Leaders in Equitable Evaluation and Diversity, etc.) to increase the pipeline, and support ongoing retention and professional development?

Overall, we are delighted to see that there is much enthusiasm in our field to do more work on these issues. We look forward to launching action teams in early 2019 to further flesh out the ideas shared above in addition to others generated over the past year.

If you are interested in learning more about this effort, please contact Pilar Mendoza. If you would like to join the network and receive updates about this work, please contact Christine Kemler.

--Clare Nolan and Meg Long

Living Our Values: Gauging a Foundation’s Commitment to Diversity, Equity, and Inclusion
November 29, 2018

Mona Jhawar serves as learning and evaluation manager for The California Endowment.

The California Endowment (TCE) recently wrapped up our 2016 Diversity, Equity, and Inclusion (DEI) Audit, our fourth since 2008. The audit was initially developed at a time when community advocates were pushing the foundation to address issues of structural racism and inequity. As TCE’s grantmaking responded, staff and our CEO were also interested in promoting DEI values across the entire foundation beyond programmatic spaces. Over time, these values became increasingly engrained in TCE’s ethos and the foundation committed to conducting a regular audit as a vehicle with which to determine if and how our DEI values were guiding organizational practice.

Sharing information about our DEI Audit often raises questions about how to launch such an effort. Some colleagues are in the early stages of considering whether they want to carry out an audit of their own. Are we ready? What do we need to have in place to even begin to broach this possibility? Others are interested to hear about how we use the findings from such an assessment. To help answer these questions, this is the first of a two-part blog series to share the lessons we’re learning by using a DEI audit to hold ourselves accountable to our values.

While the audit provides a frame to identify if our DEI values are being expressed throughout the foundation, it also fosters learning. Findings are reviewed and discussed with executive leadership, board, and staff. Reviews provide venues to involve both programmatic and non-programmatic staff in DEI discussions. An audit workgroup typically considers how to take action on findings so that the foundation can continuously improve and also considers how to revise audit goals to ensure forward movement. By sharing findings publicly, we hope our experience and lessons can help to support the field more broadly.

It is, however, no small feat. The audit is a comprehensive process that includes a demographic survey of staff and board, a staff and board survey of DEI attitudes and beliefs, interviews with key foundation leaders, an examination of available demographic data from grantee partners, and a review of DEI-related documents gathered between audits. Having dedicated resources to engage a neutral outsider to carry out the audit in partnership with the foundation is also important to this process. We’ve found it particularly helpful to engage a consistent, trusted partner, Social Policy Research Associates, across each of our audits to capture and candidly reflect where we’re making progress and where we need to work harder to create change.

As your foundation considers its readiness to engage in such an audit process, we offer the following factors that have facilitated a productive, learning-oriented DEI audit effort at TCE:

1. Clarity about the fundamental importance of Diversity, Equity, and Inclusion to the Foundation

The expression of our DEI values has evolved over time. When the audit started, several program staff members who focused on DEI and cultural competency developed a guiding statement on Diversity and Inclusiveness. Located within our audit report, it focused heavily on diversity although tweaks were made to the statement over time. A significant shift occurred several years ago when our executive team articulated a comprehensive set of core values that undergirds all our work and leads with a commitment to diversity, equity, and inclusion.

2. Interest in reflection and adaptation

The audit is a tool for organizational learning that facilitates continuous improvement. The process relies on having both a growth mindset and clear goals for what we hope to accomplish. Our 13 goals range from board engagement to utilizing accessibility best practices. In addition to examining our own goals, the audit shares how we’re doing with respect to a framework of institutional supports required to build a culture of equity. By comparing the foundation to itself over time we can determine if and where change is occurring. It also allows us to revise goals so that we can continue to push ourselves forward as we improve, or to course correct if we’re not on track. We anticipate updating our goals before our next audit to reflect where we are currently in our DEI journey.

3. Engagement of key leaders, including staff

Our CEO is vocal and clear about the importance of DEI internally and externally, as well as about the significance of conducting the audit itself. Our executive team, board, and CEO all contribute to the audit process and are actively interested in reviewing and discussing its findings.

Staff engagement is critical throughout audit implementation, reflection on findings, and action planning as well. It’s notable that the vast majority of staff at all levels feel comfortable pushing the foundation to stay accountable to DEI internally. However, a small but growing percentage of staff (23%) report feeling uncomfortable raising DEI concerns in the workplace, suggesting an area for greater attention.

4. Capacity to respond to any findings

Findings are not always going to be comfortable. Identifying areas for improvement may put the organization and our leaders in tough places. TCE has historically convened a cross departmental workgroup to consider audit findings and tackle action planning. We considered co-locating the audit workgroup within our executive leadership team to increase the group’s capacity to address audit findings. However, now we are considering whether it would be best situated and aligned within an emerging body that will be specifically focused on bringing racial equity to the center of all our work.

5. Courage and will to repeat

In a sector with limited accountability, choosing to voluntarily and publicly examine foundation practices takes real commitment and courage. It’s always great to hear where we’re doing well, but committing to a process that also surfaces multiple areas needing more attention requires deep will to repeat on a regular basis. And we do so in recognition that this is long-term, ongoing work that, in the absence of a real finish line, requires us to continuously adapt as our communities evolve.

Conducting our DEI audit regularly has strengthened our sense of where our practice excels—for example, in our grantmaking, in a strong vision and authorizing environment, and in diversity among staff and board. It has also strengthened our sense of the ways we want to improve, such as developing a more widely shared DEI analysis and trainings for all staff, as well as continuing to strengthen data collection among our partners. The value of our DEI audit lies equally in considering findings and in serving as a springboard for prioritizing action. TCE has been on this road a long time and we’ll keep at it for the foreseeable future. As our understanding of what it takes to pursue diversity, equity, and inclusion internally and externally sharpens, so will the demands on our practice. Our DEI audit will continue to ensure that we hold ourselves to these demands. In my next post, we’ll take a closer look at what we’re learning about operationalizing equity within the foundation.

--Mona Jhawar

What Does It Take to Shift to a Learning Culture in Philanthropy?
November 20, 2018

Janet Camarena is director of transparency initiatives at Foundation Center.

This post also appears in the Center for Effective Philanthropy blog.

If there was ever any doubt that greater openness and transparency could benefit organized philanthropy, a new report from the Center for Effective Philanthropy (CEP) about knowledge-sharing practices puts it to rest. Besides making a case for the need for greater transparency in the field, the report also provides some hopeful signs that, among foundation leaders, there is growing recognition of the value of shifting to a culture of learning to improve foundations’ efforts.

Understanding & Sharing What Works: The State of Foundation Practice reveals how well foundation leaders understand what is and isn’t working in their foundation’s programs, how they figure this out, and what, if anything, they share with others about what they’ve learned. These trends are explored through 119 survey responses from, and 41 in-depth interviews with, foundation CEOs. A companion series of profiles tells the story of these practices in the context of four foundations that have committed to working more openly.

Since Foundation Center’s launch of GlassPockets in 2010, we have tracked transparency around planning and performance measurement within the “Who Has Glass Pockets?” self-assessment. Currently, of the nearly 100 foundations that have participated in GlassPockets, only 27 percent publicly share any information about how they measure their progress toward institutional goals. Given this lack of knowledge sharing, we undertook a new #OpenForGood campaign to encourage foundations to publicly share published evaluations through the IssueLab open archive.

As someone who has spent the last decade examining foundation transparency practices (or the lack thereof) and championing greater openness, I read CEP’s findings with an eye for elements that might help us better understand the barriers and catalysts to this kind of culture shift in the field. Here’s what I took away from the report.

Performance Anxiety

While two-thirds of foundation CEOs in CEP’s study report having a strong sense of what is working programmatically within their foundations, nearly 60 percent report having a weaker grasp on what is not working. This raises the question: if you don’t know something is broken, then how do you fix it? Since we know foundations tend to be success-oriented, this by itself wasn’t surprising. But it’s a helpful metric that underscores how investing in evaluation, learning, and sharing can only lead to wiser use of precious resources for the field as a whole.

The report also reveals that many CEOs who have learned what is not working well at their foundations are unlikely to share that knowledge, as more than one-third of respondents cite hesitancy around disclosing missteps and failures. The interviews and profiles point to what can best be described as performance anxiety. CEOs cite the need for professionals to show what went well, fear of losing the trust of stakeholders, and a desire to impress their boards as motivations for concealing struggles. Of these motivations, board leadership seems particularly influential for setting the culture when it comes to transparency and failure.

In the profiles, Rockefeller Brothers Fund (RBF) President Stephen Heintz discusses both the importance of his board and his background in government as factors that have informed RBF’s willingness to share the kinds of information many foundations won’t. RBF was an early participant in GlassPockets, and now is an early adopter of the #OpenForGood movement to openly share knowledge. As a result, RBF has been one of the examples we often point to for the more challenging aspects of transparency such as frameworks for diversity data, knowledge sharing, and investment practices.

An important takeaway of the RBF profile is the Fund’s emphasis on the way in which a board can help ease performance anxiety by simply giving leadership permission to talk about pain points and missteps. Yet one-third of CEOs specifically mention that their foundation faces pressure from its board to withhold information about failures. This sparks my interest in seeing a similar survey asking foundation trustees about their perspectives in this area.

Utility or Futility?

Anyone who works inside a foundation — or anyone who has ever applied for a grant from a foundation — will tell you they are buried in the kind of paperwork load that often feels futile (which actually spawned a whole other worthy movement led by PEAK Grantmaking called Project Streamline). In the CEP study, the majority of foundation CEOs report finding most of the standard sources of knowledge that they require not very useful to them. Site visits were most consistently ranked highly, with the majority of CEOs (56 percent) pointing to them as one of the most useful sources for learning about what is and isn’t working. Grantee focus groups and convenings came in a distant second, with only 38 percent of CEOs reporting these as a most useful source. And despite the labor involved on both sides of the table, final grant reports were ranked as a most useful source for learning by only 31 percent of CEOs.

“Thanks to CEP’s research, we have evidence of real demand for a greater supply of programmatic knowledge.”

If most foundations find greater value in higher touch methods of learning, such as meeting face-to-face or hosting grantee gatherings, then perhaps this is a reminder that if foundations reduce the burdens of their own bureaucracies and streamline application and reporting processes, there will be more time for learning from community and stakeholder engagement.

The companion profile of the Weingart Foundation, another longtime GlassPockets participant, shows the benefits of funders making more time for grantee engagement, and provides a number of methods for doing so. Weingart co-creates its learning and assessment frameworks with grantees, routinely shares all the grantee feedback it receives from its Grantee Perception Report (GPR), regularly makes time to convene grantees for shared learning, and also pays grantees for their time in helping to inform Weingart’s trustees about the problems it seeks to solve.

Supply and Demand

One of the questions we get the most about #OpenForGood’s efforts to build an open, collective knowledge base for the field is whether anyone will actually use this content. This concern also surfaces in CEP’s interviews, with a number of CEOs citing the difficulty of knowing what is useful to share as an impediment to openness. A big source of optimism here is learning that a majority of CEOs report that their decisions are often informed by what other foundations are learning, meaning foundations can rest assured that if they supply knowledge about what is and isn’t working, the demand is there for that knowledge to make a larger impact beyond their own foundation. Think of all that untapped potential!

Of course, given the current state of knowledge sharing in the field, only 19 percent of CEOs surveyed report having quite a bit of knowledge about what’s working at peer foundations, and just 6 percent report having quite a bit of knowledge about what’s not working among their programmatic peers. Despite this dearth of knowledge, fully three-quarters of foundation CEOs report that they use what they can access from peers to inform strategy and direction within their own foundations.

Thanks to CEP’s research, we have evidence of real demand for a greater supply of programmatic knowledge. Now there is every reason for knowledge sharing to become the norm rather than the exception.

--Janet Camarena

Creating a Culture of Learning: An Interview with Yvonne Belanger, Director of Evaluation & Learning, Barr Foundation
November 8, 2018

Yvonne Belanger is the director of learning & evaluation at the Barr Foundation and leads Barr's efforts to gauge its impact and support ongoing learning among staff, grantees, and the fields in which they work.

Recently, Janet Camarena, director of transparency initiatives for Foundation Center, interviewed Belanger about how creating a culture of learning and openness can improve philanthropy. This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.


GlassPockets: More and more foundations seem to be hiring staff with titles having to do with evaluation and learning. You’ve been in this role at the Barr Foundation for just about a year, having come over from a similar role at the Bill & Melinda Gates Foundation. Why do you think roles like this are on the rise in philanthropy, and what are your aspirations for how greater capacity for evaluation and learning can benefit the field?

Yvonne Belanger: I think the spread of these roles in strategic philanthropy comes from increasing recognition that building a stronger learning function is a strategic investment, and it requires dedicated expertise and leadership. My hope is that strong evaluation and learning capacity at Barr (and across the philanthropic sector generally) will enable better decisions and accelerate the pace of social change to make the world more equitable and just.

GP: What have been your priorities in this first year and what is your approach to learning? More specifically, what is Barr’s learning process like, what sources do you learn from, how do you use the learnings to inform your work?

YB: At Barr, we are committed to learning from our efforts and continuously improving. Our programmatic work benefits from many sources of knowledge to inform strategy including landscape scans, academic research, ongoing conversations with grantees and formal site visits, and program evaluations to name a few. During this first year, I have been working with Barr’s program teams to assess their needs, to sketch out a trajectory for the next few years, and to launch evaluation projects across our strategies to enhance our strategic learning. Learning is not limited to evaluating the work of our programs, but also includes getting feedback from our partners. Recently, we were fortunate to hear from grantees via our Grantee Perception Report survey, including specific feedback on our learning and evaluation practices. As we reflected on their responses in relation to Barr’s values and examples of strong practice among our peers, we saw several ways we could improve.

GP: What kinds of improvements are you making as a result of feedback you received?

YB: We identified three opportunities for improvement: to make evaluation more useful, to be clearer about how Barr defines success and measures progress, and to be more transparent with our learning.

  • Make evaluations more collaborative and beneficial to our partners. We heard from our grantees that participating in evaluations funded by Barr hasn’t always felt useful or applicable to their work. We are adopting approaches to evaluation that prioritize grantee input and benefit. For example, in our Creative Commonwealth Initiative, a partnership with five community foundations to strengthen arts and creativity across Massachusetts, we included the grantees early in the evaluation design phase. With their input, we modified and prioritized evaluation questions and incorporated flexible technical assistance to build their capacity for data and measurement. In our Education Program, the early phase of our Engage New England evaluation is focused on sharing learning with grantees and the partners supporting their work to make implementation of these new school models stronger.
  • Be clearer about how we measure outcomes. Our grantees want to understand how Barr assesses progress. In September, we published a grantee guide to outputs and outcomes to clarify what we are looking for from grantees and to support them in developing a strong proposal. Currently, our program teams are clarifying progress measures for our strategies, and we plan to make that information more accessible to our grantees.
  • Share what we learn. To quote your recent GrantCraft Open for Good report, “Knowledge has the power to spark change, but only if it is shared.” To maximize Barr’s impact, we aim to be #OpenForGood and produce and share insights that help our grantees, practitioners, policymakers, and others. To this end, we are proactively sharing information about evaluation work in progress, such as the evaluation questions we are exploring, and when the field can expect results. Our Barr Fellows program evaluation is one example of this practice. We are also building a new knowledge center for Barr to highlight and share research and reports from our partners, and make these reports easier for practitioners and policymakers to find and re-share.

GP: Clearly all of this takes time and resources to do well. What benefits can you point to of investing in learning and knowledge sharing?

YB: Our new Impact & Learning page reflects our aspiration that, by sharing work in progress and lessons learned, we can influence nonprofits and other funders, advance field knowledge, inform policy, and elevate community expertise. When you are working on changing complex systems, there are almost never silver bullets. To make headway on difficult social problems, we need to view them from multiple perspectives and build learning over time by analyzing the successes – and the failures – of many different efforts and approaches.

GP: Barr’s president, Jim Canales, is featured in a video clip on the Impact & Learning page talking about the important role philanthropy plays as a source of “risk capital” to test emerging and untested solutions, some of which may not work, and that the field should see these as learning opportunities. And, of course, these struggles and failures could be great lessons for philanthropy as a whole. How do you balance this tension at Barr, between the desire to provide “risk capital,” the desire to open up what you are learning, and reputational concerns about sharing evaluations of initiatives that didn’t produce the desired results?

YB: It’s unusual for foundations to be open about how they define success, and admissions of failure are notably rare. I think foundations are often just as concerned about their grantees’ reputation and credibility as their own. At Barr we do aspire to be more transparent, including when things haven’t worked or our efforts have fallen short of our goals. To paraphrase Jim Canales, risk isn’t an end in itself, but a foundation should be willing to take risks in order to see impact. Factors that influence impact or the pace of change are often ones that funders have control over, such as the amount of risk we were willing to take or the conceptualization and design of an initiative. When a funder can reflect openly on these issues, that reflection usually generates valuable lessons for philanthropy and reflects the kind of risks we should be able to take more often.

GP: Now that you are entering your second year in this role, where are the next directions you hope to take Barr’s evaluation and learning efforts?

YB: In addition to continuing and sustaining robust evaluation for major initiatives across our program areas, and sharing what we’re learning as we go, we have two new areas of focus in 2019 – people and practices. We will have an internal staff development series to cultivate mindsets, skills, and shared habits that support learning, and we will also be working to strengthen our practices around strategy measurement so that we can be clearer both internally and externally about how we measure progress and impact. Ultimately, we believe these efforts will make our strategies stronger, will improve our ability to learn with and from our grantees, and will lead to greater impact.

 

Data Fix: Do's & Don'ts for Reporting Geographic Area Served
November 1, 2018

Kati Neiheisel is the eReporting liaison at Foundation Center. eReporting allows funders to quickly and easily tell their stories and improve philanthropy by sharing grants data.

This is the second post in a series intended to improve the data available for and about philanthropy.

The first post in our Data Fix series focused on areas that may seem straightforward but often cause confusion, including recipient location data. But don’t confuse recipient location (where the check was sent) with Geographic Area Served (the area meant to benefit from the funding). Data on recipient location, one of our required fields, allows us to match data to the correct organization in our database, ensuring accuracy for analyses or data visualizations. In contrast, Geographic Area Served, one of our highest priority fields, helps us tell the real story about where your funding is making an impact.

How to Report Geographic Area Served

We recognize that providing data on Geographic Area Served can be challenging. Many funders may not track this information, and those who do may depend on grantees or program staff to provide the details. It’s important to keep in mind that sharing some information is better than no information, as funders are currently the only source of this data.

DO

  • Do include details for locations beyond the country level. For example, for U.S. locations, specify a state along with the city or county being served. For non-U.S. locations, include the country name when funding a specific city, province, state, or region.
  • Do use commas to indicate hierarchy and semicolons to separate multiple areas served (see the short sketch after this list). For example:
      • Topeka, Kansas (comma used to indicate hierarchy)
      • Hitchcock County, Nebraska; Lisbon, Portugal; Asia (semicolons used to list and separate multiple locations)
  • Do define regions. If you are reporting geographic area served at the regional level (e.g., East Africa), please provide a list of the countries included in your organization's definition of that region, since your definition may differ from Foundation Center's. Similarly, if your foundation defines its own regions (e.g., Southwestern Ohio), consider including the counties comprising that region.

DON'T

  • Don't be too broad in scope. "Global Programs" may not be accurate if your work is focused on specific countries. Similarly, listing the geographic area served as "Canada" is misleading if the work serves the province of Quebec rather than the entire country.
  • Don't use negatives or catch-all terms. "Not California," "Other," "Statewide," or "International" may be meaningful within your organization, but these terms cannot be interpreted for mapping. Instead of "Statewide," use the name of the state. Instead of "International," use "Global Programs" or list the countries, regions, or continents being served.
  • Don't forget to include the term "County" when reporting on U.S. counties. This ensures a grant serving an entire county isn't assigned to the city of the same name (e.g., Los Angeles County, California, rather than Los Angeles, California).
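
To make the comma/semicolon convention concrete, here is a minimal sketch in Python of how a funder's data team might assemble a Geographic Area Served value from structured location data. The function name and the data layout are our own illustration, not part of Foundation Center's eReporting templates or any official API.

```python
# Minimal sketch of the comma/semicolon convention described above.
# The function name and the data layout are illustrative only; they are not
# part of any Foundation Center eReporting format or API.

def format_area_served(areas):
    """Join each location's parts with commas (hierarchy) and the
    locations themselves with semicolons (multiple areas served)."""
    return "; ".join(", ".join(part for part in area if part) for area in areas)

# Each tuple runs from the most specific place name up to the broader unit,
# e.g. (city, state) for U.S. locations or (city, country) elsewhere.
areas_served = [
    ("Topeka", "Kansas"),
    ("Hitchcock County", "Nebraska"),
    ("Lisbon", "Portugal"),
    ("Asia",),
]

print(format_area_served(areas_served))
# Topeka, Kansas; Hitchcock County, Nebraska; Lisbon, Portugal; Asia
```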

Geographic Area Served in Foundation Center Platforms

Data provided (in a loadable format) will appear in “Grant Details” in Foundation Directory Online (FDO) and Foundation Maps. Foundation Maps, including the complimentary eReporter map showing your own foundation’s data, also displays an Area Served mapping view.

If data is not provided, Foundation Center will do one of the following:

  • Default to the location of the recipient organization
  • Add geo area served based on text in the grant description
  • Add geo area served based on where the recipient organization works, as listed on their website or in their mission statement, if this information is available in our database

Responsibly Sharing Geographic Area Served


Although our mission is to encourage transparency through the sharing of grants data, we acknowledge there are contexts in which sharing this data may be cause for concern. If the publishing of this data increases risks to the population meant to benefit from the funding, the grantee/recipient, or your own organization, you can either omit Geographic Area Served information entirely or report it at a higher, less sensitive level (e.g. country vs. province or city). For more information on this topic, please see Are You Over or Under-Protecting Your Grants Data? 5 Ways to Balance Transparency and Data Protection in Sensitive Contexts and Sharing Data Responsibly: A Conversation Guide for Funders.

More Tips to Come!

I hope you have a better understanding of how to report Geographic Area Served through eReporting. Without this data, valuable information about where funding is making a difference may be lost! Moving forward, we’ll explore the required fields of Recipient Name and Grant Description. If you have any questions, please feel free to contact me.

-- Kati Neiheisel

Data Fix: Do's and Don'ts for Data Mapping & More!
October 3, 2018

Kati Neiheisel is the eReporting liaison at Foundation Center. eReporting allows funders to quickly and easily tell their stories and improve philanthropy by sharing grants data.

This post is part of a series intended to improve the data available for and about philanthropy.

As many of you know, Foundation Center was established to provide transparency for the field of philanthropy. A key part of this mission is collecting, indexing, and aggregating millions of grants each year. In recent years this laborious process has become more streamlined thanks to technology and auto-coding, and to those of you who directly report your grants data to us. Your participation also increases the timeliness and accuracy of the data.

Today, over 1,300 funders worldwide share grants data directly with Foundation Center. Over the 20 years we’ve been collecting this data, we’ve encountered some recurring issues with the basic required fields. To make sharing data even quicker and easier, we’ve put together some do’s and don’ts focusing on three areas that may seem straightforward but often cause confusion.

Location Data for Accurate Mapping and Matching

Quite simply, to map your grants data we need location information! And we need location information for more than mapping. We also use this information to ensure we are matching data to the correct organizations in our database. To help us do this even more accurately, we encourage you to provide as much location data as possible. This also helps you by increasing the usability of your own data when running your own analyses or data visualizations.

DO

  • Do supply Recipient City for U.S. and non-U.S. Recipients.
  • Do supply Recipient State for U.S. Recipients.
  • Do supply Recipient Country for non-U.S. Recipients.

DON'T

  • Don't forget to supply Recipient Address and Recipient Postal Code, if possible.
  • Don't supply a post office box in place of a street address for Recipient Address, if it can be avoided.
  • Don't confuse Recipient location (where the check was sent) with Geographic Area Served (where the service will be provided).

What's Your Type? Authorized or Paid?

Two types of grant amounts can be reported: Authorized amounts (new grants authorized in a given fiscal year, including the full amount of grants that may be paid over multiple years) or Paid amounts (as grants would appear in your IRS tax form). You can report on either one of these types of amounts – we just need to know which one you are using: Authorized or Paid.

DO

  • Do indicate if you are reporting on Authorized or Paid amounts.
  • Do remain consistent from year to year with sending either Authorized amounts or Paid amounts to prevent duplication of grants.
  • Do report the type of Currency of the amount listed, if not US Dollars.

DON'T

  • Don't send more than one column of Amounts in your report – either Authorized or Paid for the entire list.
  • Don't forget to include Grant Duration (in months) or Grant Start Date and Grant End Date, if possible.
  • Don't include more than one amount per grant.

The Essential Fiscal Year

An accurate Fiscal Year is essential since we publish grants data by fiscal year in our data-driven tools and content-rich platforms such as those developed by Foundation Landscapes, including Funding the Ocean, SDG Funders, Equal Footing, and Youth Giving. Fiscal Year can be reported as a year (2018) or a date range (07/01/2017-06/30/2018), but both formats will appear in published products as YEAR AWARDED: 2018.

DO

  • Do include the Fiscal Year in which the grants were either Authorized or Paid by you, the funder.
  • Do format your Fiscal Year as a year (2018) or a date range (07/01/2017-06/30/2018).

DON'T

  • Don't provide the Fiscal Year of the Recipient organization.
  • Don't forget that for off-calendar fiscal years, the last year of the date range is the Fiscal Year: 07/01/2017-06/30/2018 = 2018 (a small sketch of this rule follows below).
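
As a concrete illustration of that last rule, here is a minimal sketch of how a reported Fiscal Year value maps to the published YEAR AWARDED. The function name is our own and is not part of Foundation Center's eReporting tools.

```python
# Illustrative sketch of the fiscal-year rule above: a plain year is used
# as-is, and an off-calendar date range maps to the year in which the range
# ends. The function name is ours, not part of Foundation Center eReporting.
from datetime import datetime

def year_awarded(fiscal_year_field: str) -> int:
    """Return the fiscal year as it would appear as YEAR AWARDED."""
    field = fiscal_year_field.strip()
    if "-" in field:  # a date range such as "07/01/2017-06/30/2018"
        _, end_date = field.split("-", 1)
        return datetime.strptime(end_date.strip(), "%m/%d/%Y").year
    return int(field)  # a plain year such as "2018"

print(year_awarded("2018"))                   # 2018
print(year_awarded("07/01/2017-06/30/2018"))  # 2018
```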

More Tips to Come!

I hope you have a better understanding of these three areas of data to be shared through Foundation Center eReporting. Moving forward, we'll explore the required fields of Recipient Name and Grant Description, as well as high priority fields such as Geographic Area Served. If you have any questions, please feel free to contact me. Thank you! And don't forget, the data you share IS making a difference!

-- Kati Neiheisel

Staff Pick: Foundation Funded Research Explores How to Improve the Voter Experience
August 9, 2018

Becca Leviss is a Knowledge Services Fellow at Foundation Center.

This post is part of the GlassPockets’ Democracy Funding series, designed to spotlight knowledge about ways in which philanthropy is working to strengthen American democracy.

Voting is central to our democracy, providing citizens from all communities a direct way to influence the future by conveying their beliefs through civic participation. Though foundations by law must be non-partisan, they can and do support democracy in a variety of ways, and we are tracking these activities in our publicly available Foundation Funding for U.S. Democracy web portal.
 
From this data we can see that encouraging broad civic participation is one of the most popular ways in which institutional philanthropy supports our democracy. Specific strategies under civic participation include issue-based participation, civic education and leadership, naturalization and immigrant civic integration, and public participation. So, what have foundations learned from these efforts about how to strengthen our democracy? Today we will zoom in to learn from a foundation-funded report that is openly available, containing findings from data collection on elections and voting patterns, including how well the process is working and who is included or excluded.
 
Our latest “Staff Pick” from IssueLab’s Democracy Special Collection, which comprises foundation-funded research on the topic, explores an aspect of the voter experience in America that could be improved. With fewer than 90 days to go before the midterm elections, we’re pleased to offer this deep dive into an important piece of voting-related research.
 
Research in the social sector can sometimes feel inaccessible or artificial—based on complex theories, mathematical models, and highly controlled situations. This report, however, presents its research methodology and results in a clear, understandable manner that invites the reader to build on its work of understanding how polling sites can use their resources to both investigate and improve the voter experience.

STAFF PICK

Improving the Voter Experience: Reducing Polling Place Wait Times by Measuring Lines and Managing Polling Place Resources, by Charles Stewart III; John C. Fortier; Matthew Weil; Tim Harper; Stephen Pettigrew 

Download the Report

Publisher

Bipartisan Policy Center

Funders

Ford Foundation; The Democracy Fund

Quick Summary

Voting is the cornerstone of civic engagement in American democracy, but long wait times and inefficient organization at polling places can undermine the voting process and even discourage citizens from voting altogether. In 2013, President Barack Obama launched the bipartisan Presidential Commission on Election Administration (PCEA) to initiate studies and collaborative research on polling place wait times. The PCEA’s work revealed that while wait times and poll lines are a serious issue in the United States, they are also reflective of deeper, more complex problems within the election administration system. This report by the Bipartisan Policy Center summarizes the PCEA’s efforts and highlights how the knowledge gained can produce action and improvement at polling sites. Ultimately, the report emphasizes the need for continued research and innovation in approaching common issues in the voter experience.

Field of Practice

Government Reform

What makes it stand out?

“Long lines may be a canary in the coal mine,” begins the report, “indicating problems beyond a simple mismatch between the number of voting machines and voters, such as voter rules that are inaccurate or onerous.” Quantitative and qualitative data have shown that long lines at the polls carry wide-reaching economic costs of over half a billion dollars in a presidential election, as well as the immeasurable cost of voter discouragement due to polling place problems. These issues are exacerbated at polling sites that are urban, dense, and home to large minority populations, where lack of resources and access can disenfranchise voters.

While the dilemma of election administration is complex, the report describes a rather straightforward series of projects by the Massachusetts Institute of Technology and the Bipartisan Policy Center. MIT and BPC collaborated to create a system of data collection on polling lines and polling place efficiency that would be simple and easily implemented by poll workers. The program utilized basic queuing theory: estimating the average wait time of a voter by dividing the average line length by the average arrival rate. For fellow (and potential future) researchers, the report spends a meaningful portion of time explaining the significance of each variable, how it is calculated, and how its fluctuation affects the overall results of the investigation. We are given examples of several successful iterations of the study and their evaluations, as well as insight into certain research choices.
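
That queuing relationship is essentially Little's Law: average wait time equals average line length divided by average arrival rate. The sketch below illustrates the arithmetic with made-up hourly observations; it is our own illustration, not code or data from the MIT/BPC toolkit.

```python
# A minimal illustration of the queuing arithmetic described above
# (Little's Law: average wait = average line length / average arrival rate).
# This is our own sketch with made-up numbers, not code or data from the
# MIT/BPC polling-place toolkit.

def average_wait_minutes(line_lengths, arrivals_per_hour):
    """Estimate average voter wait time in minutes from hourly observations
    of line length (voters in line) and arrivals (voters per hour)."""
    avg_line = sum(line_lengths) / len(line_lengths)
    avg_rate = sum(arrivals_per_hour) / len(arrivals_per_hour)
    return 60 * avg_line / avg_rate

# Hypothetical hourly observations at one precinct on Election Day.
line_lengths = [40, 25, 12, 8, 6, 5]            # voters counted in line each hour
arrivals_per_hour = [120, 110, 90, 80, 70, 75]  # voters arriving each hour

print(round(average_wait_minutes(line_lengths, arrivals_per_hour), 1))
# roughly 10.6 minutes under these made-up numbers
```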

MIT/BPC’s work has found that an overwhelming majority of Election Day polling sites—82 percent—experienced their longest line when the doors first opened. In all, 90 percent of Election Day polling sites had their longest lines within the first two hourly samples (Hour 0 and Hour 1), with lines declining, on average, after that. Similarly, voters experienced the longest wait times when the lines were at their longest. This pattern is vastly different from that of early voting sites, where wait time is relatively constant; however, even these sites most commonly experience their longest lines at the beginning of the day (25 percent of the studied population).

The research emphasizes the importance of adequately preparing for the longest line. The report suggests that if polling sites adjust worker shifts to accommodate strong early-morning voter turnout on Election Day, they can easily clear the lines within the first few hours of voting, thus saving money and better serving their voters. The report also recognizes the range of its results: in other words, individual precincts have individual needs. Without meaningful research, however, we cannot know how to meet those needs and improve the voter experience. Therefore, as readers (and hopefully fellow voters), we are encouraged by MIT/BPC’s work to take clear and simple action to improve our own polling sites through continued research and investigation. This report exemplifies the importance of making the research and data process transparent and accessible so that we can not only understand its significance but actively contribute to its efforts. There are many processes that could benefit from this kind of data analysis to improve the user experience. What if foundations analyzed their grant processes in this way? I can’t help but think there is much philanthropy can learn from government through reports like this, which show how institutions are opening up data collection to improve the user experience for their stakeholders.

Key Quote

“Precincts with large numbers of registered voters often have too few check-in stations or voting booths to handle the volume of voters assigned to the precinct, even under the best of circumstances. Precincts that are unable to clear the lines from the first three hours of voting are virtually guaranteed to have long lines throughout the day. Polling places in urban areas often face design challenges—small, inconvenient spaces—that undermine many election officials’ best efforts to provide adequate resources to these locations.”

--Becca Leviss

What Philanthropy Can Learn from Open Government Data Efforts
July 5, 2018

Daniela Pineda, Ph.D., is vice president of integration and learning at First 5 LA, an independent public agency created by voters to advocate for programs and polices benefiting young children. A version of this post also appears in the GOVERNING blog.

Statistics-packed spreadsheets and lengthy, jargon-filled reports can be enough to make anybody feel dizzy. It's natural. That makes it the responsibility of those of us involved in government and its related institutions to find more creative ways to share the breadth of information we have with those who can benefit from it.

Government agencies, foundations and nonprofits can find ways to make data, outcomes and reports more user-friendly and accessible. In meeting the goal of transparency, we must go beyond inviting people to wade through dense piles of data and instead make them feel welcome using it, so they gain insights and understanding.

How can this be done? We need to make our data less wonky, if you will.

This might sound silly, and being transparent might sound as easy as simply releasing documents. But while leaders of public agencies and officeholders are compelled to comply with requests under freedom-of-information and public-records laws, genuine transparency requires a commitment to making the information being shared easy to understand and useful.

“…genuine transparency requires a commitment to making the information being shared easy to understand and useful.”

Things to consider include how your intended audience prefers to access and consume information. For instance, there are generational differences in how people access information on tablets and mobile devices as opposed to traditional websites. Consider all the platforms your audience uses to view information, such as smartphone apps, news websites and social media, and constantly evolve based on their feedback.

Spreadsheets just won't work here. You need to invest in data visualization techniques and content writing to explain data, no matter how it is accessed.

The second annual Equipt to Innovate survey, published by Governing in partnership with Living Cities, found several cities not only using data consistently to drive decision-making but also embracing ways to make data digestible for the publics they serve.

Los Angeles' DataLA portal, for example, offers more than 1,000 data sets for all to use, along with trainings and tutorials on how to make charts, maps and other visualizations. The portal's blog offers a robust discussion of the issues and challenges of using existing data to meet common requests. Louisville, Ky., went the proverbial extra mile, putting a lot of thought into what data would be of interest to residents and sharing the best examples of free online services built using the metro government's open data.

Louisville's efforts point up the seemingly obvious but critical strategy of making sure you know what information your target audience actually needs. Have you asked? Perhaps not. The answers should guide you, but also remember to be flexible about what you are asking. For example, the Los Angeles Unified School District is set to launch a new portal later this summer to provide parents with data, and is still learning how to supply information that parents find useful. District officials are listening to feedback throughout the process, and they are willing to adjust. One important strategy for this is to make your audience—or a sampling of them—part of your beta testing. Ask what information they found useful and what else would have been helpful.

“When you share, you are inviting others to engage with you about how to improve your work.”

Remember, the first time you allow a glimpse into your data and processes, it's inevitable your information will have gaps and kinks you can't foresee. And if you are lucky enough to get feedback about what didn't work so well, it may even seem harsh. Don't take it personally. It's an opportunity to ask your audience what could be done better and commit to doing so. It may take weeks, months or maybe longer to package information for release, making it usable and accessible, but this is an investment worth making. You might miss the mark the first time, but make a commitment to keep trying.

And don't be daunted by the reality that anytime you share information you expose yourself to criticism. Sharing with the public that a project didn't meet expectations or failed completely is a challenge no matter how you look at it. But sharing, even when it is sharing your weaknesses, is a strength your organization can use to build its reputation and gain influence in the long term.

When you share, you are inviting others to engage with you about how to improve your work. You also are modeling the importance of being open about failure. This openness is what helps others feel like partners in the work, and they will feel more comfortable opening up about their own struggles. You might be surprised at who will reach out and what type of partnerships can come from sharing.

Through this process, you will build your reputation and credibility, helping your organization advance its goals. Ultimately, it's about helping those you serve by giving them the opportunity to help you.

--Daniela Pineda

Nominations for Foundation Center’s #OpenForGood Award Now Open
June 13, 2018

Sarina Dayal is the knowledge services associate at Foundation Center.

To encourage funders to be more transparent, Foundation Center has launched the inaugural #OpenForGood Award. This award will recognize foundations that display a strong commitment to transparency and knowledge sharing.

Last year, we started #OpenForGood, a campaign to encourage foundations to openly share what they learn so we can all get collectively smarter. Now, we’re launching this award to bring due recognition and visibility to foundations that openly share challenges, successes, and failures to strengthen how we think and act as a sector. The winning foundations will demonstrate an active commitment to open knowledge and share their evaluations through IssueLab, an open repository that is free, searchable, and accessible to all. We’re looking for the best examples of smart, creative, strategic, and consistent knowledge sharing in the field, across all geographic and issue contexts.

What’s In It for You?

Winners will receive technical support to create a custom Knowledge Center for their foundation or for a grantee organization, as well as promotional support in the form of social media and newsletter space. What is a Knowledge Center and why would you want one? It is a service of IssueLab that provides organizations with a simple way to manage and share knowledge on their own websites. By leveraging this tool, you can showcase your insight, promote analysis on your grantees, and feature learnings from network members. All documents that are uploaded to an IssueLab Knowledge Center are also made searchable and discoverable via systems like WorldCat, which serves more than 2,000 libraries worldwide, ensuring your knowledge can be found by researchers, regardless of their familiarity with your organization.

Why Choose Openness?

The #OpenForGood award is focused on inspiring foundations to use existing and emerging technologies to collectively improve the sector. Today, we live in a time when most people expect to find the information they need on the go, via tablets, laptops, and mobile phones, just a swipe or click away. Despite this digital-era reality, only 13 percent of foundations have websites, and even fewer share their reports publicly, indicating that the field has a long way to go to create a culture of shared learning. With this award, we hope to change these practices. Rather than reinvent the wheel, this award and campaign encourage the sector to make it a priority to learn from one another and share content with a global audience, so that we can build smartly on one another’s work and accelerate the change we want to see in the world. The more you share your foundation’s work, the greater the opportunities to make all our efforts more effective and farther reaching.

Who Is Eligible for the Award?

  • Any foundation anywhere in the world (self-nominations welcome)
  • Must share its collection of published evaluations publicly through IssueLab
  • Must demonstrate active commitment to open knowledge
  • Preference will be given to foundations that integrate creativity, field leadership, openness, and community insight into their knowledge sharing work
  • Bonus points for use of other open knowledge elements such as open licensing, digital object identifiers (DOIs), or an institutional repository

Anyone is welcome to nominate any foundation through September 30, 2018. Winners will be selected in the Fall through a review process and notified in January. The award will officially be presented at next year’s annual GEO Conference. If you have any questions, please email openforgood@foundationcenter.org. Click here to nominate a foundation today!

Who will you nominate as being #OpenForGood?

--Sarina Dayal

About Transparency Talk

  • Transparency Talk, the Glasspockets blog, is a platform for candid and constructive conversation about foundation transparency and accountability. In this space, Foundation Center highlights strategies, findings, and best practices on the web and in foundations–illuminating the importance of having "glass pockets."

    The views expressed in this blog do not necessarily reflect the views of the Foundation Center.

    Questions and comments may be
    directed to:

    Janet Camarena
    Director, Transparency Initiatives
    Foundation Center

    If you are interested in being a
    guest contributor, contact:
    glasspockets@foundationcenter.org
