Transparency Talk

Category: "Privacy" (3 posts)

New Guide Helps Human Rights Funders Balance Tension between Risk & Transparency
October 25, 2018

Julie Broome is the Director of Ariadne, a network of European donors that support social change and human rights.  

Tom Walker is the Research Manager at The Engine Room, an international organisation that helps activists and organisations use data and technology effectively and responsibly.

Julie Broome

Foundations find themselves in a challenging situation when it comes to making decisions about how much data to share about their grantmaking. On the one hand, in recognition of the public benefit function of philanthropy, there is a demand for greater transparency on the part of funders and a push to be open about how much they are giving and who they are giving it to. These demands sometimes come from states, increasingly from philanthropy professionals themselves, and also from critics who believe that philanthropy has been too opaque for too long and raise questions about fairness and access. 

At the same time, donors who work in human rights and on politically charged issues are increasingly aware of the risks to grantees if sensitive information ends up in the public domain. As a result, some funders have moved towards sharing little to no information. However, this can have negative consequences for our collective ability to map different fields, making it harder for us all to develop a sense of the funding landscape in different areas. It can also serve to keep certain groups “underground,” when in reality they might benefit from the credibility that foundation funding can bestow.

Tom Walker

As the European partner in the Advancing Human Rights project, led by the Human Rights Funders Network and Foundation Center, Ariadne collects grantmaking data from our members that feeds into this larger effort to understand where human rights funding is going and how it is shifting over time. Unlike in the United States, where the IRS Form 990-PF eventually provides transparency about grantee transactions, Europe has no equivalent data source. Yet many donors find grant activity information useful for finding peer funders and identifying gaps in the funding landscape where their own funds could make a difference. We frequently receive requests from donors who want to use these datasets to drill down into specific areas of interest and map out different funding fields. But these data sources will become less valuable over time if donors move away from voluntarily sharing information about their grantmaking.

Nonetheless, the risks to grantees if donors share information irresponsibly are very real, especially at a time when civil society is increasingly under threat from both state and non-state actors. It was the desire to balance these two aims – maintaining enough data to analyse trends in philanthropy while protecting grantees – that led Ariadne to partner with The Engine Room on a guide to help funders navigate these tricky questions.

After looking at why and how funders share data and the challenges of doing so responsibly, The Engine Room interviewed 8 people and surveyed 32 others working in foundations that fund human rights organisations, asking how they shared data about their grants and highlighting any risks they might see.

Funders told us that they felt treating data responsibly was important, but that implementing it in their day-to-day work was often difficult. It involved balancing competing priorities: between transparency and data protection legislation; between protecting grantees’ data and reporting requirements; and between protecting grantees from unwanted attention, and publicising stories to highlight the benefits of the grantee’s work.

The funders we heard from said they found it particularly difficult to predict how risks might change over time, and how to manage data that had already been shared and published. The most common concerns were:

  • ensuring that data that had already been published remained up to date;
  • de-identifying data before it was published (a minimal sketch follows this list); and
  • working with third parties to share grantee data responsibly – for example, donors who fund through intermediaries may request information about the intermediaries’ grantees.
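
To make the second concern concrete, here is a minimal sketch of what de-identifying grant data before publication might look like, using pandas; the file names, column names and categories are hypothetical, not drawn from any funder’s actual dataset:

```python
import pandas as pd

# Hypothetical internal grants file; all column names here are illustrative only.
grants = pd.read_csv("grants_internal.csv")

# Drop direct identifiers that should never leave the foundation.
public = grants.drop(columns=["contact_name", "contact_email", "street_address"])

# For grants in sensitive areas, withhold the grantee's name entirely so the work
# can still be counted in field-level analysis without exposing the organisation.
sensitive = public["issue_area"].isin(["human rights defenders", "lgbtqi rights"])
public.loc[sensitive, "grantee_name"] = "Name withheld for security reasons"

# Round amounts to reduce the chance of re-identification by matching exact figures.
public["amount_usd"] = (public["amount_usd"] / 1000).round() * 1000

public.to_csv("grants_public.csv", index=False)
```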

Although the funders we interviewed differed in their mission, size, geographical spread and focus area, they all stressed the importance of respecting the autonomy of their grantees. Practically, this meant that additional security or privacy measures were often introduced only when the grantee raised a concern. The people we spoke with were often aware that this reactive approach puts the burden of assessing data-related risks onto grantees, and suggested that they most needed support when it came to talking with grantees and other funders in an open, informed way about the opportunities and risks associated with sharing grantee data.

These conversations can be difficult ones to have. So, we tried a new approach: a guide to help funders have better conversations about responsible data.

It’s aimed at funders or grantmakers who want to treat their grantees’ data responsibly, but don’t always know how. It lists common questions that grantees and funders might ask, combined with advice and resources to help answer them, and tips for structuring a proactive conversation with grantees.

“There are no shortcuts to handling data responsibly, but we believe this guide can facilitate a better process.”

There are no shortcuts to handling data responsibly, but we believe this guide can facilitate a better process. It offers prompts that are designed to help you talk more openly with grantees or other funders about data-related risks and ways of dealing with them. The guide is organised around three elements of the grantmaking lifecycle: data collection, data storage, and data sharing.

Because contexts and grantmaking systems vary dramatically and change constantly, a one-size-fits-all solution is impossible. Instead, we decided to offer guidance on processes and questions that many funders share – from deciding whether to publish a case study to having conversations about security with grantees. For example, one tip that would benefit many grantmakers is to ensure that grant agreements include specifics about how the funder will use any data collected as a result of the grant, based on a discussion that helps the grantee to understand how their data will be managed and make decisions accordingly.

This guide aims to give practical advice that helps funders strengthen their relationships with grantees – thereby leading to more effective grantmaking. Download the guide, and let us know what you think!

--Julie Broome and Tom Walker

Big Ideas That Matter for 2015: Are Philanthropic Organizations Ready?
January 12, 2015

(Sara Davis is the Director of Grants Management at The William and Flora Hewlett Foundation in Menlo Park, California. She can be followed on Twitter @SaraLeeeDeee or reached via e-mail at sdavis@hewlett.org. This post was originally featured on the Grant Craft blog.)

One way I mark the passage of another year is the welcome arrival of the latest Blueprint — the annual industry forecast report written by Lucy Bernholz and published by GrantCraft, a service of Foundation Center. This year’s report, Philanthropy and the Social Economy: Blueprint 2015, provides us once again with a rich opportunity to look back at the past year and to ponder what’s to come in the year ahead. The Blueprint is a great marker of time and creates a moment to pause for reflection. As I read this year’s report, I found much to digest, understand, and learn. Like the five previous editions, Blueprint 2015 is provocative, and — as I settled in to read — I was humbled to discover that it brought up many more questions than answers. The report piqued my curiosity about the state of the social economy and more explicitly about organized philanthropy and how we do our work. Specifically:

Are we agile and flexible enough? Are our philanthropic organizations ready?

The words “dynamic” and “dynamism” show up throughout the Blueprint 2015, and the pervasive thought I had while reading was that this is an exciting, creative, and expansive time for the social economy. Given this, I couldn’t help but wonder if philanthropic organizations are ready — will we be able to flex, bend, and adapt at the same pace as the change around us? Our ecosystem is evolving, moving, and reorganizing. In this time of globalization, disruptive technology, digital activism, new organizational forms, and even new language, are philanthropic organizations keeping pace? Do we have a picture of what “keeping pace” would really mean?

In this time of globalization, disruptive technology, digital activism, new organizational forms, and even new language, are philanthropic organizations keeping pace? Do we have a picture of what “keeping pace” would really mean?

My experience is that folks doing the work of philanthropy take their role very seriously. It’s a tremendous responsibility to be entrusted with private resources in order to create public benefit. That we take that trust seriously is a good thing. In practice, this means that we tend to be careful, we analyze everything thoroughly, and we remain deliberate, trying hard not to make mistakes. This subtle — or not so subtle — perfectionism creates a tension against our desire to also be nimble, innovative, creative, and dynamic. I wonder: how can we talk about and manage that tension? Are there times we should be using philanthropy as true risk capital, maybe leaping more and looking less? Can we be nimble enough to fail, learn, and course-correct quickly, and have that process be okay, even celebrated? It’s clear that many of the newer entrants in the social economy are working from this spirit of moment-to-moment dynamism. How can we collaborate with openness, adaptability, and readiness for change? Are we learning how to be more agile and flexible along the way?

Are the right people/skills at the table?

The other thing that struck me as I read the report is the variety of new skills and voices needed to work well within the changing social economy. We know, for example, that new technologies and digital data are emerging as important sources and byproducts for learning, innovation, and achieving results. It follows, then, that we need to make sure technology and data capacity are being fostered, used, and advanced within philanthropic organizations and across the sector. Together, we need to gain expertise as we take on challenging topics like intellectual property, open licensing, transparency, and privacy. Further, working in a digital world during this time of rapid change requires operational savvy. We need to build and maintain necessary infrastructure to execute well today, while also forging the space so we can adapt and shift easily in the future. Collectively, this is a tall order. Are we listening to the right experts to make this happen? Are we building the necessary capacity and knowledge?

We need to make sure technology and data capacity are being fostered, used, and advanced within philanthropic organizations and across the sector. Together, we need to gain expertise as we take on challenging topics like intellectual property, open licensing, transparency, and privacy.

As “pervasive digitization” has become the new normal, have we changed the way we think about technology and data expertise in our grantmaking? It doesn’t seem reasonable that all program officers now also need to be technology experts (though some are). How do we make sure the technologists are being included at the right times? How can our daily work be informed by data expertise and digital best practices, and how do we successfully integrate these into our grantmaking? Bernholz notes that “technologists are becoming part of the sectors that they serve” and imagines a future where “data analysis and sensemaking skills” are integrated into strategy and grantmaking. What new understandings do we need in order to know how we will do this? And who do we need to include in the conversation to live this out fully?

The 2015 Blueprint marks a time that is vibrant, rich, and exciting for us to be working in this sector. It also invites us to adapt, flex, and change — more than ever before. It’s not a perfect metaphor, but sometimes I find myself thinking about the proverb of the shoemaker whose children have no shoes. Those of us who work in philanthropy understand that our grantees need to adapt within changing circumstances and must constantly evolve. We know that executing well is the challenging standard we place upon grantees as we give them resources. I’m not sure we always hold ourselves to the same standard, or that we take the time to know what executing well might mean within our own changing context. Just as we offer capacity building support and technical assistance to the organizations we fund, it’s also important that we do our own capacity building work, making the necessary changes within our organizations to be effective, real-time participants in the social economy. Are we checking ourselves to make sure we have the skills, roles, knowledge, and processes needed to do that?

Our changing ecosystem will certainly require that we become comfortable with the continued blurring of lines and re-imagining of everything around us. As we strive to achieve impact and social benefit, it may mean we need to bring new people to the table, while developing new skills and new ways of working ourselves. My hope is that all of our good intentions and hard work continue to fuel the adaptability, learning, and dynamism that Bernholz points to so brilliantly.

--Sara Davis

Beyond Alphabet Soup: 5 Guidelines For Data Sharing
August 29, 2013

(Andy Isaacson is Forward Deployed Engineer at Palantir Technologies. This blog is re-posted from the Markets for Good blog. Please see the accompanying reference document: Open Data Done Right: Five Guidelines – available for download and for you to add your own thoughts and comments.)

The Batcomputer was ingenious. In the 1960s Batman television series, the machine took any input, digested it instantly, and automagically spat out a profound insight or prescient answer – always in the nick of time (watch what happens when Batman feeds it alphabet soup). Sadly, of course, it was fictional. So why do we still cling to the notion that we can feed in just any kind of data and expect revelatory output? As the saying goes, garbage in yields garbage out; so, if we want quality results, we need to begin with high quality input. Open Data initiatives promise just such a rich foundation.

High quality, freely available data means hackers everywhere, from Haiti to Hurricane Sandy, are now building the kinds of analytical tools we need to solve the world’s hardest problems.

Presented with a thorny problem, any single data source is a great start – it gives you one facet of the challenge ahead. However, to paint a rich analytical picture with data, to solve a truly testing problem, you need as many other facets as you can muster. You can often get these by taking openly available data sets and integrating them with your original source. This is why the Open Data movement is so exciting. It fills in the blanks that lead us to critical insights: informing disaster relief efforts with up-to-the-minute weather data, augmenting agricultural surveys with soil sample data, or predicting the best locations for Internally Displaced Persons camps using rainfall data.

High quality, freely available data means hackers everywhere, from Haiti to Hurricane Sandy, are now building the kinds of analytical tools we need to solve the world’s hardest problems. But great tools and widely released data aren’t the end of the story.

At Palantir, we believe that with great data comes great responsibility, both to make the information usable, and also to protect the privacy and civil liberties of the people involved. Too often, we are confronted with data that’s been released in a haphazard way, making it nearly impossible to work with. Thankfully, I’ve got one of the best engineering teams in the world backing me up – there’s almost nothing we can’t handle. But Palantir engineers are data integration and analysis pros – and Open Data isn’t about catering to us.

It is, or should be, about the democratization of data, allowing anybody on the web to extract, synthesize, and build from raw materials – and effect change. In a recent talk to a G-8 Summit on Open Data for Agriculture, I outlined the ways we can help make this happen:

#1 – Release structured raw data others can use

#2 – Make your data machine-readable

#3 – Make your data human-readable

#4 – Use an open-data format

#5 – Release responsibly and plan ahead

Abbreviated explanations below. Download the full version here: Open Data, Done Right: Five Guidelines.

#1 – Release structured raw data others can use

One of the most productive side effects of data collection is being able to re-purpose a set collected for one goal and use it towards a new end. This solution-focused effort is at the heart of Open Data. One person solves one problem; someone else takes the exact same dataset and re-aggregates, re-correlates, and remixes it into novel and more powerful work. When data is captured thoroughly and published well, it can be used and re-used in the future too; it will have staying power.

Release data in a raw, structured way – think a table of individual values rather than words – to enable its best use, and re-use.
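
As a hypothetical illustration (the file name and field names are ours, not from any particular release), the difference between narrative reporting and structured raw data might look like this:

```python
import csv

# The same information as prose – "In March we made three water grants in Kenya
# totalling $45,000" – is readable, but a machine cannot re-aggregate or remix it.
# Released as raw, structured rows, every individual value is available for re-use.
rows = [
    {"date": "2013-03-04", "country": "Kenya", "sector": "water", "amount_usd": 20000},
    {"date": "2013-03-11", "country": "Kenya", "sector": "water", "amount_usd": 15000},
    {"date": "2013-03-22", "country": "Kenya", "sector": "water", "amount_usd": 10000},
]

with open("grants_2013_03.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "country", "sector", "amount_usd"])
    writer.writeheader()
    writer.writerows(rows)
```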

#2 – Make your data machine-readable.

Once structured raw data is integrated into an analysis tool (like one of the Palantir platforms), a machine needs to know how to pick apart the individual pieces.

Even if the data is structured and machine readable, building tools to extract the relevant bits takes time, so another aspect of this rule is that a dataset’s structure should be consistent from one release to the next. Unless there’s a really good reason to change it, next month’s data should be in the exact same format as this month’s, so that the same extraction tools can be used again and again.

Use machine-readable, structured formats like CSV, XML, or JSON to allow the computer to easily parse the structure of data, now and in future.
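
A brief sketch of what that consistency buys you: because the columns stay the same from release to release, the same few lines of extraction code keep working month after month (the schema and file names below are assumed for illustration):

```python
import csv

EXPECTED_COLUMNS = ["date", "country", "sector", "amount_usd"]

def load_release(path):
    """Parse one release; fail loudly if the schema has drifted since last month."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames != EXPECTED_COLUMNS:
            raise ValueError(f"{path}: unexpected columns {reader.fieldnames}")
        return [{**row, "amount_usd": float(row["amount_usd"])} for row in reader]

# Because the structure stays the same, one loader handles every monthly file.
march = load_release("grants_2013_03.csv")
```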

#3 – Make your data human-readable.

Now that the data can be fed into an analysis tool, it is vital for humans, as well as machines, to understand what it actually means. This is where PDFs come in handy. They are an awful format for a data release as they can be baffling for automatic extraction programs. But, as documentation, they can explain the data clearly to those who are using it.

Assume nothing – document and explain your data as if the reader has no context.
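
One lightweight way to do this is to publish a small data dictionary alongside each release, spelling out what every column means and how it was collected. The sketch below is illustrative rather than a prescribed format:

```python
import json

# A small data dictionary published alongside grants_2013_03.csv, so a reader with
# no context can still understand each column. The fields here are illustrative.
data_dictionary = {
    "dataset": "grants_2013_03.csv",
    "description": "One row per grant approved in March 2013.",
    "columns": {
        "date": "Approval date in ISO 8601 format (YYYY-MM-DD).",
        "country": "Country where the funded work takes place, not the grantee's HQ.",
        "sector": "Programme area label; the full list ships with the release.",
        "amount_usd": "Total committed amount in US dollars, not funds disbursed to date.",
    },
    "contact": "data@example.org",
}

with open("grants_2013_03.README.json", "w") as f:
    json.dump(data_dictionary, f, indent=2)
```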

#4 – Use an open-data format.

Proprietary data formats are fine for internal use, but don’t force them on the world. Prefer CSV files to Excel, KMLs to SHPs, and XML or JSON to database dumps. It might sound overly simplistic, but you never know what programming ecosystem your data consumers will favor, so plainness and openness are key.

Choose to make data as simple and available as possible: When releasing it to the world, use an open data format.
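
In practice this can be a single conversion step before publishing. A sketch using pandas (our choice of tool – any library that reads your internal format would do):

```python
import pandas as pd

# One-off conversion from an internal, proprietary spreadsheet to open formats.
# (pandas.read_excel needs the openpyxl package for .xlsx files.)
df = pd.read_excel("grants_internal.xlsx")

df.to_csv("grants_public.csv", index=False)                    # simplest, widest reach
df.to_json("grants_public.json", orient="records", indent=2)   # for programmatic consumers
```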

#5 – Release responsibly and plan ahead

Now that the data is structured, documented, and open, it needs to be released to the world. Simply posting files on a website is a good start, but we can do better, like using a REST API.

Measures that protect privacy and civil liberties are hugely important in any release of data. Beyond simply keeping things up-to-date, programmatic API access to your data allows you to go to the next level of data responsibility. By knowing who is requesting the data, you can implement audit logging and access controls, understanding what was accessed when and by whom, and limiting exposure of any possibly sensitive information to just the select few that need to see it.

Allow API access to data, to responsibly provide consumers the latest information – perpetually.
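
As a rough sketch of what programmatic access with audit logging and access controls could look like, here is a minimal API using Flask; the keys, roles and column names are hypothetical, not a prescribed design:

```python
import csv
import logging
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
logging.basicConfig(filename="access_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Hypothetical API keys mapped to roles; a real deployment would manage these securely.
API_KEYS = {"k-public-123": "public", "k-partner-456": "partner"}

def load_grants():
    with open("grants_public.csv", newline="") as f:
        return list(csv.DictReader(f))

@app.route("/grants")
def grants():
    # Access control: every request must identify itself with a known key.
    role = API_KEYS.get(request.headers.get("X-Api-Key", ""))
    if role is None:
        abort(401)
    rows = load_grants()
    # Limit exposure: only partner-level keys see the grantee name column.
    if role != "partner":
        rows = [{k: v for k, v in r.items() if k != "grantee_name"} for r in rows]
    # Audit log: record who accessed what, and when.
    logging.info("role=%s ip=%s rows=%d", role, request.remote_addr, len(rows))
    return jsonify(rows)

if __name__ == "__main__":
    app.run()
```

With a log like this, a publisher can later see who accessed potentially sensitive fields and revoke a key if needed.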

...

These guidelines seem simple, almost too simple. You might wonder why in this high tech world we need to keep things so basic when we have an abundance of technological solutions to overcome data complexity.

Sure, it’s all theoretically possible. However, in practice, anybody working with these technologies knows that they can be brittle, inaccurate, and labor intensive. Batman’s engineers can pull off extracting data from pasta, but for the rest of us, relying on heroic efforts means a massive, unnecessary time commitment – time taken away from achieving the fundamental goal: rapid, actionable insight to solve the problem.

There’s no magic wand here, but there are some simple steps to make sure we can share data easily, safely and effectively. As a community of data consumers and providers, together we can make the decisions that will make Open Data work.

-- Andy Isaacson
