Could paying for results (finally) help us learn?

[Image: monitoring waterfall]

Source: Scott Chaplowe, International Federation of Red Cross and Red Crescent Societies

I recently came across this really interesting blog post by a disgruntled M&E consultant (shout-out to Instiglian Michael Eddy for sharing).

S/he asks a simple question: why are NGOs so slow to use monitoring data that could improve the effectiveness of their work?

The answers were spot-on and made me wonder whether – and to what extent – paying for results (often referred to as “results-based financing”) helps address some (if any) of these challenges.

I’ll just go down the list:

Reason 1: Most of the data that NGOs collect is useless, usually geared towards measuring a narrow set of quantitative indicators that have little to do with the actual program.

As someone who believes in and promotes RBF, I tend to be a big fan of outcome metrics, indicators and measurement. Having said that, I completely agree that the types of metrics and indicators used by most NGOs have little to do with their actual programs and theories of change. But I think it’s important to understand the underlying reason for that: most NGOs’ M&E frameworks (and therefore the data they collect) are driven by donors’ agendas. Indeed, rare is the donor who disburses “unrestricted” funds; instead, most donors fund a specific project that advances their narrow strategic priorities and impose their own methods and indicators for measuring success. So, dozens of donors –> dozens of M&E frameworks –> useless data for the NGO –> few opportunities (or incentives) for actual learning.

In an RBF contract, the donor is not paying for projects per se. Instead, s/he is purchasing an outcome (e.g. a 10% reduction in the incidence of malaria in a given community), regardless of which programs – or combination of programs – it took to get there. This should, in theory, promote the use of metrics and indicators more closely aligned with NGOs’ theories of change. A major assumption, however, is that donors are both willing and able to wash their hands clean of any programmatic decision-making once an outcome has been agreed. This is where third-party intermediary organizations can add value in terms of negotiating contracts and setting the record straight.

Reason 2: Pressure to spend quickly means there is little incentive (or need) for learning.

The pervasive disconnect between funding disbursements and programmatic realities on the ground is a huge problem in development…and it usually comes from the top (i.e. the donors). Again, there’s a reason for that: if U.S. government agencies don’t spend all the money they’re allocated in a given year, they’ll get less money the following year. For foundations, there are serious tax/legal ramifications for not disbursing a certain percentage of their endowment each year. Yes, the system’s messed up. Unfortunately, RBF alone won’t provide a solution. While paying for results in one lump sum at the end should, in theory, remove the perverse incentive to spend, spend, spend, RBF won’t stand a chance if donors are unable (for legal reasons or otherwise) to set aside a pool of money that just sits there for a couple of years and does nothing. Donors need to figure out a way around this ASAP.

Reason 3: Short-term projects often leave little time for staff to really learn from M&E. 

It’s true that most donors fund projects on a short-term basis…too short to enable any real focus on outcomes or learning. In an RBF contract, because outcomes are precisely what donors are paying for, they should be committing funds over longer time horizons. However, this assumes that the donor has the legal authority and political will to actually commit – and hold on to – funding over multiple years in the first place (see previous grumblings).

Reason 4: Learning from monitoring data requires some kind of process for feeding that learning back into performance. Donors don’t typically make this easy (ahem, DFID), imposing rigid logframes that are impossible – or at least take forever – to amend.

Don’t get me started on logframes. Suffice it to say that in an RBF contract, the idea is that once an outcome has been agreed and the donor has committed to paying a specified amount for that outcome, the donor washes its hands completely of any programmatic decision-making. That includes changes to our beloved logframes.

Reason 5: NGOs genuinely don’t have a clue about how to actually improve programs. That’s because development is complex and programs typically aim to do ridiculously ambitious things.

I agree that development programs are complex, which is why a hands-off, more flexible approach like RBF might be preferable to a rigid, linear funding model. I also agree that many NGO programs aim to do ridiculously ambitious things. Because RBF requires both donors and NGOs to focus on things they can actually measure (ironically, a common criticism of RBF), perhaps it can help bring things down a notch…and in a good way. Start small, think big.


Let’s talk about MeE!

 

In an attempt to downsize a shameless backlog of books, papers and articles I’ve been meaning to read over the last year, I thought I’d start with Lant Pritchett, Salimah Samji & Jeffrey Hammer’s groundbreaking paper on making impact evaluation a useful learning tool for tackling complex problems, “It’s All About MeE: Using Structured Experiential Learning (“e”) to Crawl the Design Space.”

It just so happens that this paper crept into my conversations twice last week (inner nerd emerging from post-holiday stupor): once over coffee with the brilliant and indefatigable Jeff Hammer himself (while I was trying to explain to him the pitfalls of “friending” his former students on Facebook, which he recently joined), and again over nachos and beer with some former grad school classmates at the 14th Street corridor’s latest hotbed of culinary innovation, Tortilla Coast.

Needless to say, this paper’s been on my mind!

The paper’s main thrust is that while we’ve come a long way in how rigorously we evaluate development programs (hats off to the “randomistas” for ushering in a new age of ex post evaluation with a counterfactual), much of current evaluation practice is still embedded in top-down strategies for implementation and learning. In other words, we still think we know what the solution is before we’ve attempted to solve the problem, usually in the form of elaborate “theories of change” and rigidly defined “logical frameworks.”

[Image: “It’s not that simple” (credit Flickr user futureatlas.com)]

The problem is that development doesn’t work that way. It’s messy and non-linear because development involves people, who are complex, embedded in organizations, which are complex, embedded in institutions, cultures, norms, etc., which are…you guessed it, complex, making development = complexity^∞. (If you haven’t already seen Owen Barder’s presentation and podcast on complexity, you should.)

Thus, small changes in project design can yield huge impacts on outcomes. And given that there are seemingly limitless ways to “tweak” programs, what works in one context in one particular instance can’t reasonably be expected to produce the same (or even similar) results in a different context. That is the primary reason why randomized controlled trials (RCTs) – the latest fad in international development – are limited in their ability to teach us anything useful about “what works” (more RCT-bashing from Lant Pritchett here).

So, faced with this reality, the best way to achieve results is through an approach the authors call structured experiential learning, or little “e” (as opposed to the “big” E of traditional M&E).

The approach adapts principles of evolution – namely, nature’s tendency to mutate at random and replicate the stuff that works (also known as natural selection) – to development.

The application of evolutionary principles outside biology is not new. The authors point to the example of Steve Jones, an evolutionary biologist who helped Unilever create a better nozzle for soap production by making 10 copies of the nozzle with slight random distortions and testing them all, then taking the best-performing nozzle, making another 10 slightly different copies, and repeating the process. After 45 such iterations, he ended up with a complex and unexpected shape, far better than the original and probably better than anything he could have designed from scratch.
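For the code-minded, here’s a minimal sketch of that mutate-test-select loop in Python. Everything specific in it – the four-parameter “design,” the made-up scoring function, the mutation size – is a stand-in for illustration; only the 10-copies-per-round, 45-iteration structure comes from the nozzle story above.

```python
import random

def performance(design):
    """Stand-in objective: score a candidate design (higher is better).
    In the nozzle story, this step was a physical test of each prototype."""
    ideal = [0.3, -1.2, 2.5, 0.0]  # hypothetical "best" parameters, unknown in practice
    return -sum((d - t) ** 2 for d, t in zip(design, ideal))

def mutate(design, step=0.1):
    """Copy the current design with small random distortions."""
    return [d + random.gauss(0, step) for d in design]

# Start from a naive initial design and run 45 "generations".
best = [0.0, 0.0, 0.0, 0.0]
for generation in range(45):
    candidates = [mutate(best) for _ in range(10)]  # 10 distorted copies
    candidates.append(best)                         # keep the incumbent
    best = max(candidates, key=performance)         # select the best performer

print("Best design after 45 generations:", best)
```

No single mutation is dramatic, but selection compounds: after 45 rounds the search has crawled a long way across the design space without anyone ever specifying the final shape up front.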

Structured experiential learning is the equivalent of this approach in development. It requires mapping the universe of possibilities, or the “design space,” of a particular project; admitting we don’t know what will work but making educated guesses about key parameters; testing different variations of key parameters sequentially; measuring outcomes meticulously and making necessary adjustments along the way until we get the desired result. It’s a 7-step process, as illustrated in the graph below.

[Figure: the paper’s seven-step structured experiential learning process]

The thing I find most promising about this approach is that it offers an opportunity to learn about the thing we fail most at in development but spend the least amount of time trying to understand: implementation.

Impact evaluation involves two distinct causal models – one moving from inputs to outputs (which is internal to the implementing agency) and one moving from outputs to outcomes (which is external to it).

Most projects fail in the first stage – that is, the project design fails to produce the intended outputs due to implementation challenges. However, most “rigorous” impact evaluations (e.g. RCTs) focus exclusively on “impact,” thus missing out on a whole lot of learning. The little “e” offers an interesting solution.

The approach is not without problems, however. While it’s nice to talk about big donors loosening their ties and “going with the flow,” it’s hard to imagine how this could work in practice while donors remain in the business of providing money upfront.

Namely, when governments/donors provide money upfront, that money is gone regardless of whether or not “outcomes” are achieved; thus, prescribing solutions and focusing on inputs and processes becomes the only tangible way for donors to mitigate the risk that programs don’t achieve their objectives (as far as taxpayers are concerned, at least).

It’s also just easier.

At the end of the day, it’s easier for donors to tell their grantees how to spend their money, and then focus on making sure it was spent accordingly, than to worry about the “impact” of programs (which is notoriously difficult, time-consuming and expensive to measure), particularly if the former lets donors check off the accountability box and get away with it. Donors are, after all, in the business of doling out money. The incentives just aren’t aligned, in my view.

One way around this is to take governments completely out of paying for programs upfront – and have them pay ex post for results. This gives implementing agencies the flexibility to tailor programs and focus exclusively on achieving the desired outcome, while shifting risk away from donors. The problem is, who pays for the programs upfront?!

Under current approaches, the burden usually falls on developing-country governments and/or nonprofits, which are often ill-equipped to assume full implementation risk. In my (perhaps biased) view, Development Impact Bonds (DIBs), a new financing instrument I’ve been working on for the past year with colleagues at the Center for Global Development, offer the best solution for little “e.”

In a DIB, private investors provide money upfront to roll out interventions and donors pay them back (principal plus a return, commensurate with success) if – and only if – outcomes are achieved. If outcomes are not achieved, investors lose all or part of their investment.
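To make the mechanics concrete, here is a minimal sketch of that repayment logic in Python. The linear payout schedule, the 8% maximum return and the `dib_payout` function are all hypothetical illustrations; real DIB contracts define their own outcome metrics, verification rules and payment schedules.

```python
def dib_payout(principal, max_return_rate, achievement):
    """Hypothetical DIB repayment rule (illustration only).

    principal        -- upfront investment provided by private investors
    max_return_rate  -- return on top of principal at full success (e.g. 0.08)
    achievement      -- share of the agreed outcome target achieved, 0.0 to 1.0

    Assumes a simple linear schedule: the outcome payer (donor) repays
    principal plus return in proportion to verified achievement.
    """
    achievement = max(0.0, min(1.0, achievement))
    return principal * (1 + max_return_rate) * achievement

# Investors put in $1m; outcomes fully achieved -> donor repays $1.08m.
print(dib_payout(1_000_000, 0.08, 1.0))  # 1080000.0
# Outcomes only 40% achieved -> investors recover $432k and absorb the rest.
print(dib_payout(1_000_000, 0.08, 0.4))  # 432000.0
```

The point of the structure is who bears which risk: the donor pays only for verified outcomes, while investors – not the implementing agency or the developing-country government – absorb the downside if the program underperforms.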

What are other ways to make little “e” work? How do donors see this playing out in their respective agencies? (I’m less interested in fringe movements and more interested in how we mainstream this kind of approach.) I’d love to hear from you.


“Poverty Action Lab @10,” J-PAL, Cambridge, MA

Join the Abdul Latif Jameel Poverty Action Lab in celebrating its 10th Anniversary. Over the past decade we have grown from a single office to a global network of researchers whose 400+ projects span 55 countries. We are marking the occasion with a day-long series of short talks, videos, and panel discussions from researchers, policymakers, staff, and partners.

We will hear of surprising results discovered, key lessons learned, some incredible adventures from the field, and the hard road to evidence-based policy in development. Speakers will include Abhijit Banerjee, Rukmini Banerji, Esther Duflo, Amy Finkelstein, Michael Greenstone, Martin Hirsch, Dean Karlan, Felipe Kast, Michael Kremer, Alan Krueger, Trevor Manuel, Benjamin Olken, Rafael Reif, and many others. More details here.
