(Paul Connolly is Senior Vice President of TCC Group, a management consulting firm that serves nonprofit organizations, foundations, and corporate community involvement programs.)
Typically, when a foundation hires an evaluator to assess a program, that evaluator collects lots of information from a range of stakeholders, analyzes the data, writes a report, and discusses it with the funder. Then, an abridged final report may be shared with the field. The Packard Foundation has pursued a much more transparent and interactive approach for the current review of its Organizational Effectiveness program, an approach which the foundation staff likens to having "a glass filing cabinet."
For over two decades, Packard has been making grants to support such efforts as strategic planning, board development, succession planning, and website upgrades to strengthen the organizational capacity of its nonprofit grantees. Packard retained TCC Group several months ago to help retrospectively assess 1,300 of these grants made during the past ten years and ascertain what constitutes a successful organizational effectiveness project. Packard is grappling with questions like: What is the sustained impact of the grants we make? How and to what extent can we quantify that impact? What makes the relationship between a grantee and its consultant succeed? What are the factors that contribute to a successful project?
Packard began by compiling a huge data set based on grantee records and survey research and then asked TCC to help with the analysis. Rather than scrutinizing Packard's data on our own behind closed office doors, we are facilitating a "learning in public" process through which we are sharing early research findings widely and encouraging input. Leveraging Packard's organizational effectiveness wiki site, we have set up a section of the wiki where grantees, consultants, funders, and other interested parties can review preliminary findings and provide feedback (we invite yours, too!). And conversations have been emerging on Twitter, blogs, and other social media venues.
What have we discovered so far about this networked approach to collective learning?
- The Packard Foundation has been praised at several recent philanthropy conferences (such as the June 6-7 Grantmakers for Effective Organizations learning conference) for its open approach, so there seems to be some support in the field for this type of inclusive evaluation process.
- There has been some engagement on the wiki, but not very much. We recognized that the wiki was not as technologically accessible as we had wished and are working on improving that. We are also realizing that asking a broad array of people to sift through and comment on a lot of "semi-baked" data is, well, asking a lot. (A few consultants even went so far as to say, justifiably, that they would only do so if they were paid for their time.)
- We have learned to cull the findings and extract a few noteworthy nuggets that we then highlight and ask for feedback on, so it is more like drinking water from a cup rather than from a fire hose.
- We are also creating more opportunities for select constituents to participate in "old-fashioned" in-person discussion groups and teleconference webinars, during which we can "think out loud" with them. We have found that this live interaction engages people and makes them more motivated to contribute their ideas online, too, as part of an ongoing conversation.
Ralph Waldo Emerson observed that "there are many things of which a wise man may wish to be ignorant." And New York University new media professor Clay Shirky points out that our society does not have a problem with information overload, but filter failure.
What are other foundations finding out about seeking broad input through two-way social media exchanges? How can philanthropies create better filters for seeking commentary when most people actually might not be that interested in poring through all of the information in those glass filing cabinets? At what point can a funder "overshare" and ask constituents to review and comment on "too much information"? When is the best time to seek feedback from various types of stakeholders on slightly baked, half-baked, or fully baked findings? When soliciting experts' opinions, where exactly is that fine line between a foundation being open and receptive, and being presumptuous and insensitive? What are the best ways to blend online and offline input to maximize collective intelligence?
These are questions we are mulling over. We would like to hear what you think. And we would be glad to share more of our experience and insights as this public learning process evolves.