Transparency Talk

Category: "Privacy" (2 posts)

Big Ideas That Matter for 2015: Are Philanthropic Organizations Ready?
January 12, 2015

(Sara Davis is the Director of Grants Management at The William and Flora Hewlett Foundation in Menlo Park, California. She can be followed on Twitter @SaraLeeeDeee or reached via e-mail at sdavis@hewlett.org. This post was originally featured on the Grant Craft blog.)

One way I mark the passage of another year is the welcome arrival of the latest Blueprint — the annual industry forecast report written by Lucy Bernholz and published by GrantCraft, a service of Foundation Center. This year’s report, Philanthropy and the Social Economy: Blueprint 2015, provides us once again with a rich opportunity to look back at the past year and to ponder what’s to come in the year ahead. The Blueprint is a great marker of time and creates a moment to pause for reflection. As I read this year’s report, I found much to digest, understand, and learn. Like the five previous editions, Blueprint 2015 is provocative, and — as I settled in to read — I was humbled to discover that it brought up many more questions than answers. The report piqued my curiosity about the state of the social economy and more explicitly about organized philanthropy and how we do our work. Specifically:

Are we agile and flexible enough? Are our philanthropic organizations ready?

The words “dynamic” and “dynamism” show up throughout the Blueprint 2015, and the pervasive thought I had while reading was that this is an exciting, creative, and expansive time for the social economy. Given this, I couldn’t help but wonder if philanthropic organizations are ready — will we be able to flex, bend, and adapt at the same pace as the change around us? Our ecosystem is evolving, moving, and reorganizing. In this time of globalization, disruptive technology, digital activism, new organizational forms, and even new language, are philanthropic organizations keeping pace? Do we have a picture of what “keeping pace” would really mean?

My experience is that folks doing the work of philanthropy take their role very seriously. It’s a tremendous responsibility to be entrusted with private resources in order to create public benefit. That we take that trust seriously is a good thing. In practice, this means that we tend to be careful, we analyze everything thoroughly, and we remain deliberate, trying hard not to make mistakes. This subtle — or not so subtle — perfectionism creates a tension against our desire to also be nimble, innovative, creative, and dynamic. I wonder: how can we talk about and manage that tension? Are there times we should be using philanthropy as true risk capital, maybe leaping more and looking less? Can we be nimble enough to fail, learn, and course-correct quickly, and have that process be okay, even celebrated? It’s clear that many of the newer entrants in the social economy are working from this spirit of moment-to-moment dynamism. How can we collaborate with openness, adaptability, and readiness for change? Are we learning how to be more agile and flexible along the way?

Are the right people/skills at the table?

The other thing that struck me as I read the report is the variety of new skills and voices needed to work well within the changing social economy. We know, for example, that new technologies and digital data are emerging as important sources and byproducts for learning, innovation, and achieving results. It follows, then, that we need to make sure technology and data capacity are being fostered, used, and advanced within philanthropic organizations and across the sector. Together, we need to gain expertise as we take on challenging topics like intellectual property, open licensing, transparency, and privacy. Further, working in a digital world during this time of rapid change requires operational savvy. We need to build and maintain necessary infrastructure to execute well today, while also forging the space so we can adapt and shift easily in the future. Collectively, this is a tall order. Are we listening to the right experts to make this happen? Are we building the necessary capacity and knowledge?

As “pervasive digitization” has become the new normal, have we changed the way we think about technology and data expertise in our grantmaking? It doesn’t seem reasonable that all program officers now also need to be technology experts (though some are). How do we make sure the technologists are being included at the right times? How can our daily work be informed by data expertise and digital best practices, and how do we successfully integrate these into our grantmaking? Bernholz notes that “technologists are becoming part of the sectors that they serve” and imagines a future where “data analysis and sensemaking skills” are integrated into strategy and grantmaking. What new understandings do we need in order to know how we will do this? And, who do we need to include in the conversation to live this out fully?

The 2015 Blueprint marks a time that is vibrant, rich, and exciting for us to be working in this sector. It also invites us to adapt, flex, and change — more than ever before. It’s not a perfect metaphor, but sometimes I find myself thinking about the proverb of the shoemaker whose children have no shoes. Those of us who work in philanthropy understand that our grantees need to adapt within changing circumstances and must constantly evolve. We know that executing well is the challenging standard we place upon grantees as we give them resources. I’m not sure we always hold ourselves to the same standard, or that we take the time to know what executing well might mean within our own changing context. Just as we offer capacity building support and technical assistance to the organizations we fund, it’s also important that we do our own capacity building work, making the necessary changes within our organizations to be effective, real-time participants in the social economy. Are we checking ourselves to make sure we have the skills, roles, knowledge, and processes needed to do that?

Our changing ecosystem will certainly require that we become comfortable with the continued blurring of lines and re-imagining of everything around us. As we strive to achieve impact and social benefit, it may mean we need to bring new people to the table, while developing new skills and new ways of working ourselves. My hope is that all of our good intentions and hard work continue to fuel the adaptability, learning, and dynamism that Bernholz points to so brilliantly.

--Sara Davis

Beyond Alphabet Soup: 5 Guidelines For Data Sharing
August 29, 2013

(Andy Isaacson is a Forward Deployed Engineer at Palantir Technologies. This post is re-posted from the Markets for Good blog. Please see the accompanying reference document: Open Data Done Right: Five Guidelines – available for download and for you to add your own thoughts and comments.)

The Batcomputer was ingenious. In the 1960s Batman television series, the machine took any input, digested it instantly, and automagically spat out a profound insight or prescient answer – always in the nick of time (watch what happens when Batman feeds it alphabet soup). Sadly, of course, it was fictional. So why do we still cling to the notion that we can feed in just any kind of data and expect revelatory output? As the saying goes, garbage in yields garbage out; so, if we want quality results, we need to begin with high quality input. Open Data initiatives promise just such a rich foundation.

Presented with a thorny problem, any single data source is a great start – it gives you one facet of the challenge ahead. However, to paint a rich analytical picture with data, to solve a truly testing problem, you need as many other facets as you can muster. You can often get these by taking openly available data sets and integrating them with your original source. This is why the Open Data movement is so exciting. It fills in the blanks that lead us to critical insights: informing disaster relief efforts with up-to-the-minute weather data, augmenting agricultural surveys with soil sample data, or predicting the best locations for Internally Displaced Persons camps using rainfall data.
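
To make that kind of integration concrete, here is a minimal sketch of joining an original survey to an openly published rainfall dataset on a shared region code. The file names and column names are invented for illustration, and it assumes pandas is available.

```python
import pandas as pd

# Hypothetical original dataset: one row per surveyed farm.
survey = pd.read_csv("crop_survey.csv")       # columns: farm_id, region_code, yield_kg
# Hypothetical open dataset: annual rainfall by region.
rainfall = pd.read_csv("open_rainfall.csv")   # columns: region_code, rainfall_mm

# Join the two sources on the shared region code, so every survey row
# picks up the rainfall context it was missing.
enriched = survey.merge(rainfall, on="region_code", how="left")

# One simple question the combined data can now answer:
# how does yield vary with rainfall?
print(enriched[["rainfall_mm", "yield_kg"]].corr())
```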

High quality, freely available data means hackers everywhere, from Haiti to Hurricane Sandy, are now building the kinds of analytical tools we need to solve the world’s hardest problems. But great tools and widely-released data aren’t the end of the story.

At Palantir, we believe that with great data comes great responsibility, both to make the information usable, and also to protect the privacy and civil liberties of the people involved. Too often, we are confronted with data that’s been released in a haphazard way, making it nearly impossible to work with. Thankfully, I’ve got one of the best engineering teams in the world backing me up – there’s almost nothing we can’t handle. But Palantir engineers are data integration and analysis pros – and Open Data isn’t about catering to us.

It is, or should be, about the democratization of data, allowing anybody on the web to extract, synthesize, and build from raw materials – and effect change. In a recent talk to a G-8 Summit on Open Data for Agriculture, I outlined the ways we can help make this happen:

#1 – Release structured raw data others can use

#2 – Make your data machine-readable

#3 – Make your data human-readable

#4 – Use an open-data format

#5 – Release responsibly and plan ahead

Abbreviated explanations below. Download the full version here: Open Data, Done Right: Five Guidelines.

#1 – Release structured raw data others can use

One of the most productive side effects of data collection is being able to re-purpose a set collected for one goal and use it towards a new end. This solution-focused effort is at the heart of Open Data. One person solves one problem; someone else takes the exact same dataset and re-aggregates, re-correlates, and remixes it into novel and more powerful work. When data is captured thoroughly and published well, it can be used and re-used in the future too; it will have staying power.

Release data in a raw, structured way – think a table of individual values rather than words – to enable its best use, and re-use.
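
As a small illustration of publishing individual values rather than words, the sketch below (in Python, with invented fields and figures) writes record-level observations to a CSV file so that others can re-aggregate and remix them later.

```python
import csv

# Raw, record-level observations (invented example data): one row per measurement,
# not a pre-digested summary like "rainfall was above average in the north".
observations = [
    {"station_id": "N-014", "date": "2013-06-01", "rainfall_mm": 12.4},
    {"station_id": "N-014", "date": "2013-06-02", "rainfall_mm": 0.0},
    {"station_id": "S-221", "date": "2013-06-01", "rainfall_mm": 3.8},
]

# Publish the individual values; consumers can aggregate, correlate, and remix later.
with open("rainfall_raw.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["station_id", "date", "rainfall_mm"])
    writer.writeheader()
    writer.writerows(observations)
```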

#2 – Make your data machine-readable.

Once structured raw data is integrated into an analysis tool (like one of the Palantir platforms), a machine needs to know how to pick apart the individual pieces.

Even if the data is structured and machine readable, building tools to extract the relevant bits takes time, so another aspect of this rule is that a dataset’s structure should be consistent from one release to the next. Unless there’s a really good reason to change it, next month’s data should be in the exact same format as this month’s, so that the same extraction tools can be used again and again.

Use machine-readable, structured formats like CSV, XML, or JSON to allow the computer to easily parse the structure of data, now and in future.
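
Here is a rough sketch of what a stable, machine-readable structure buys the consumer: assuming the columns stay the same from one release to the next (the file and column names here are hypothetical), the same few lines of extraction code can parse every monthly file without modification.

```python
import csv

def load_release(path):
    """Parse one monthly release. Works unchanged as long as the columns stay stable."""
    with open(path, newline="") as f:
        return [
            {
                "station_id": row["station_id"],
                "date": row["date"],
                "rainfall_mm": float(row["rainfall_mm"]),
            }
            for row in csv.DictReader(f)
        ]

# The identical extraction code handles every release in the series.
january = load_release("rainfall_2013_01.csv")
february = load_release("rainfall_2013_02.csv")
```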

#3 – Make your data human-readable.

Now that the data can be fed into an analysis tool, it is vital for humans, as well as machines, to understand what it actually means. This is where PDFs come in handy. They are an awful format for the data release itself, as they can be baffling for automatic extraction programs, but as accompanying documentation they can explain the data clearly to those who are using it.

Assume nothing – document and explain your data as if the reader has no context.

#4 – Use an open-data format.

Proprietary data formats are fine for internal use, but don’t force them on the world. Prefer CSV files to Excel, KMLs to SHPs, and XML or JSON to database dumps. It might sound overly simplistic, but you never know what programming ecosystem your data consumers will favor, so plainness and openness are key.

Choose to make data as simple and available as possible: When releasing it to the world, use an open data format.
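
As a hedged example of preferring open formats, the snippet below converts a hypothetical internal spreadsheet into CSV and JSON before release. It assumes pandas (with an Excel reader such as openpyxl) is installed; the file names are invented.

```python
import pandas as pd

# Internal, proprietary-format working file (hypothetical name).
df = pd.read_excel("soil_samples_internal.xlsx")

# Publish in open formats that any programming ecosystem can parse.
df.to_csv("soil_samples.csv", index=False)
df.to_json("soil_samples.json", orient="records", indent=2)
```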

#5 – Release responsibly and plan ahead

Now that the data is structured, documented, and open, it needs to be released to the world. Simply posting files on a website is a good start, but we can do better, like using a REST API.

Measures that protect privacy and civil liberties are hugely important in any release of data. Beyond simply keeping things up-to-date, programmatic API access to your data allows you to go to the next level of data responsibility. By knowing who is requesting the data, you can implement audit logging and access controls, understanding what was accessed when and by whom, and limiting exposure of any possibly sensitive information to just the select few that need to see it.

Allow API access to data, to responsibly provide consumers the latest information – perpetually.
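
The sketch below illustrates what programmatic access with audit logging and access controls could look like. It uses Flask purely as an example framework; the endpoint, API keys, consumers, and dataset are all hypothetical.

```python
import logging
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
logging.basicConfig(filename="access_audit.log", level=logging.INFO)

# Hypothetical key registry: who may call the API, and who may see sensitive fields.
API_KEYS = {"key-abc123": "relief_agency", "key-def456": "research_partner"}
SENSITIVE_FIELDS = {"household_names"}

DATASET = [
    {"camp_id": "C-01", "population": 1200, "household_names": ["..."]},
]

@app.route("/v1/camps")
def camps():
    key = request.headers.get("X-API-Key", "")
    consumer = API_KEYS.get(key)
    if consumer is None:
        abort(401)  # access control: unknown callers get nothing
    # Audit log: record who accessed what (the logger adds the timestamp).
    logging.info("consumer=%s endpoint=/v1/camps", consumer)
    # Limit exposure: strip sensitive fields except for the few who need them.
    records = [
        {k: v for k, v in row.items()
         if k not in SENSITIVE_FIELDS or consumer == "relief_agency"}
        for row in DATASET
    ]
    return jsonify(records)

if __name__ == "__main__":
    app.run()
```

Because every request passes through the same endpoint, the provider always serves the latest data and keeps a record of who saw what, and when.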

...

These guidelines seem simple, almost too simple. You might wonder why in this high tech world we need to keep things so basic when we have an abundance of technological solutions to overcome data complexity.

Sure, it’s all theoretically possible. However, in practice, anybody working with these technologies knows that they can be brittle, inaccurate, and labor intensive. Batman’s engineers can pull off extracting data from pasta, but for the rest of us, relying on heroic efforts means a massive, unnecessary time commitment – time taken away from achieving the fundamental goal: rapid, actionable insight to solve the problem.

There’s no magic wand here, but there are some simple steps to make sure we can share data easily, safely and effectively. As a community of data consumers and providers, together we can make the decisions that will make Open Data work.

-- Andy Isaacson

About Transparency Talk

  • Transparency Talk, the Glasspockets blog, is a platform for candid and constructive conversation about foundation transparency and accountability. In this space, Foundation Center highlights strategies, findings, and best practices on the web and in foundations–illuminating the importance of having "glass pockets."

    The views expressed in this blog do not necessarily reflect the views of the Foundation Center.

    Questions and comments may be
    directed to:

    Janet Camarena
    Director, Transparency Initiatives
    Foundation Center

    If you are interested in being a
    guest contributor, contact:
    glasspockets@foundationcenter.org
