Alisa Helbitz and Emily Bolton of Social Finance UK reply to Caroline Fiennes's earlier Stanford Social Innovation Review article on the first social impact bond. Their article is a passionate defense of the Peterborough SIB: they describe its benefits, the challenges and value of creating the first SIB, and note that the performance management system they created has already shown promising results.
One thing the article omits is the rationale for choosing propensity score matching as the evaluation method, and why, as they write, "[r]andomised controlled trials were not an option." This question matters for two reasons. First, several SIBs being designed elsewhere are attempting to use randomized controlled trials as the basis of evaluation, so the challenges encountered in Peterborough may hold useful lessons for those designing these other SIBs.
Second, a randomized controlled trial reduces the likelihood that the outcomes we observe in a program happened by chance. Very often, programs that appear to have delivered significant results among one set of individuals turn out to produce results similar to those of a comparison group of individuals who were not part of the program. The difference between RCTs and alternative evaluation methods has become so consequential that the U.S. government has begun directing substantially more funding toward RCT-based evaluation. An understanding of when an RCT is the appropriate tool for a SIB would therefore be a useful addition to the ongoing conversation about this promising innovation.
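The point about chance findings can be illustrated with a small, purely hypothetical simulation (the population, sample size, and reoffending rate below are invented for illustration, not drawn from Peterborough data). Two groups are sampled from the same population with no program effect at all; judged against a historical baseline alone, the "program" group can look effective, but the comparison group reveals the same apparent improvement:

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only: a true reoffending
# probability of 0.4, and a stale historical baseline of 0.45.
TRUE_RATE = 0.4
HISTORICAL_BASELINE = 0.45

def reoffending_rate(n, p=TRUE_RATE):
    """Simulate n individuals; return the fraction who reoffend."""
    return sum(random.random() < p for _ in range(n)) / n

# Both groups come from the same population: the program has no effect.
program_group = reoffending_rate(500)
comparison_group = reoffending_rate(500)

print(f"program group reoffending:    {program_group:.3f}")
print(f"comparison group reoffending: {comparison_group:.3f}")
print(f"historical baseline:          {HISTORICAL_BASELINE:.3f}")

# Against the baseline alone, the program looks like it cut reoffending;
# the concurrent comparison group shows the "effect" exists without any
# program. Randomized assignment is what makes the two groups comparable.
```

Nothing in this sketch depends on the specific numbers; it simply shows why an untreated comparison group, ideally formed by randomization, guards against crediting a program for differences that arise by chance or from a shifting baseline.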
