A recent article about social impact bonds written by Caroline Fiennes of Giving Evidence and published in the Stanford Social Innovation Review (April 3, 2013) makes several interesting points about the Peterborough SIB. Caroline writes that the success of a SIB can be evaluated on three levels – whether investors should be repaid according to the selected intervention and evaluation, whether the intervention works, and whether the bond structure works. She argues that the structure of the Peterborough SIB can help assess the project using the first measure of success, but not using the latter two measures. This is, she says, because the evaluation of the intervention is insufficiently rigorous.
Caroline’s thinking follows a clear logic of looking at a process and asking how we should evaluate the process and how we should evaluate the outcomes of the process. (An excellent process may produce poor outcomes, and a faulty process may nevertheless give rise to outstanding results.) The application of this logic to social impact bonds is complicated, however, by the fact that the process and outcomes at hand are innovations. As such, both the process and the outcomes may change drastically from the SIB’s first application in Peterborough to subsequent applications. And difficulties specific to producing public policy innovation create additional costs and risks for the first application of a process that may be replicated at lower costs elsewhere. Therefore, I believe that findings of fault in a process or an outcome for the first application of an innovation are not indicative of future success of either the process or the outcome – let alone the innovation. These questions, perhaps, should be asked and answered across a set of applications.
Caroline notes these concerns when she says that “The best cannot be the enemy of the good.” But I believe that sentence understates the particular difficulties of a public policy innovation, and I expand on that statement here.
On the one hand, despite Caroline’s thorough measures of success, she is actually asking too little of the SIB program. SIBs are being piloted in the context of a larger, global conversation around how to create a social finance marketplace. Participants in this conversation are asking questions in addition to the ones Caroline proffered. Can we create a financial mechanism that attracts profit-seeking investors and directs money toward socially desirable outcomes? Can we create sustainable intermediary organizations for this marketplace? And, broader still, is the social finance marketplace a viable goal, and what components of that marketplace are necessary? A theoretically ideal design of a pilot project should be aware of all these questions, as they determine actions taken by policymakers, investors, intermediaries, nonprofits, and so on.
On the other hand, Caroline is asking too much of this first pilot project. The first measure of success that Caroline outlines, whether investors should be (and implicitly, are) repaid, is insufficiently tested by one project. Is the Peterborough repayment structure appropriate for similar projects? Is the contract, as written, appropriate? Will investors pull out of the project part-way if it becomes clear that service providers are failing? Will investors sue if disagreements over outcomes arise? These issues are best answered across multiple contracts, of which Peterborough’s is the first.
The second measure, whether the intervention works, also cannot be sufficiently assessed by the Peterborough SIB alone. Even if the project relied on a randomized controlled trial, the result would not guarantee that the program outcomes could be replicated in other prisons. Good social science, and good policy, certainly do not rely on a single randomized controlled trial. Further, the expense of a randomized controlled trial might itself have answered in the negative the question of whether a cost-effective SIB program can be created.
The third measure should similarly be expanded beyond “does this structure work.” What structure is most appropriate? Are different structures appropriate for different governments or different topic areas? Answers to these questions are informed by the process evaluation conducted by RAND, and are decided by the structure’s replicability.
The reality is that innovation in public policy is difficult. It faces many hurdles, may fail for countless reasons, and produces successes that often pale in comparison to the outsize efforts required to achieve them. Creating the first social impact bond, as Social Finance has done, required years of discussion. The second social impact bond has been easier. Subsequent ones should get easier still. So perhaps the questions that Caroline raises, and the ones that are raised by the impact investment community worldwide, can best be answered across a series of SIB programs.
