Let’s talk about MeE!


In an attempt to downsize a shameful backlog of books, papers, and articles I’ve been meaning to read over the last year, I thought I’d start with Lant Pritchett, Salimah Samji & Jeffrey Hammer’s groundbreaking paper on making impact evaluation a useful learning tool for tackling complex problems, “It’s All About MeE: Using Structured Experiential Learning (“e”) to Crawl the Design Space.”

It just so happens that this paper crept into my conversations twice last week (inner nerd emerging from post-holiday stupor): once over coffee with the brilliant and indefatigable Jeff Hammer himself (while I tried to explain to him the pitfalls of “friending” his former students on Facebook, which he recently joined), and again over nachos and beer with some former grad school classmates at the 14th Street corridor’s latest hotbed of culinary innovation, Tortilla Coast.

Needless to say, this paper’s been on my mind!

The paper’s main thrust is that while we’ve come a long way in how rigorously we evaluate development programs (hats off to the “randomistas” for ushering in a new age of ex post evaluation with a counterfactual), current practice is still too often embedded in top-down strategies for implementation and learning. In other words, we still think we know what the solution is before we’ve attempted to solve the problem, usually in the form of elaborate “theories of change” and rigidly defined “logical frameworks.”

It's not that simple (credit Flickr user futureatlas.com)

The problem is that development doesn’t work that way. It’s messy and non-linear because development involves people, who are complex, embedded in organizations, which are complex, embedded in institutions, cultures, norms, etc., which are…you guessed it, complex, making development = complexity^∞. (If you haven’t already seen Owen Barder’s presentation and podcast on complexity, you should.)

Thus, small changes in project design can yield huge differences in outcomes. And given that there are seemingly limitless ways to “tweak” programs, what works in one context in one particular instance can’t reasonably be expected to produce the same (or even similar) results in a different context. This is the primary reason why randomized controlled trials (RCTs) – the latest fad in international development – are limited in their ability to teach us anything useful about “what works” (more RCT-bashing from Lant Pritchett here).

So, faced with this reality, the best way to achieve results is through an approach the authors call structured experiential learning, or little “e” (as opposed to the “big” E of traditional M&E).

The approach adapts a principle of evolution – random mutation followed by selective replication of the variants that work best (natural selection) – to development.

The application of evolutionary biology to non-scientific endeavors is not new. The authors point to the example of Steve Jones, an evolutionary biologist who helped Unilever design a better nozzle for producing detergent by making 10 copies of the nozzle with slight random distortions and testing them all, then taking the best-performing nozzle, making another 10 slightly different copies, and repeating the process. After 45 such iterations, he ended up with a complex and unexpected shape – far better than the original, and probably better than anything he could have designed from scratch himself.
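
For the algorithmically inclined, the nozzle procedure is a simple evolutionary search. Here’s a minimal Python sketch of the idea – the three-number “design” and the fitness function are toy stand-ins (Jones was testing physical nozzles, not code):

```python
import random

def evolve(initial, fitness, n_copies=10, n_generations=45, scale=0.1):
    """Mutate the current best design at random, keep the
    best-performing variant, and repeat (natural selection)."""
    best = initial
    for _ in range(n_generations):
        # Make n slightly distorted copies of the current best design.
        variants = [[x + random.gauss(0, scale) for x in best]
                    for _ in range(n_copies)]
        # Keep whichever candidate performs best.
        best = max(variants + [best], key=fitness)
    return best

# Toy stand-in for "nozzle performance": closer to some unknown
# optimal shape is better. In reality this was a physical test.
optimum = [0.3, -1.2, 0.8]
fitness = lambda design: -sum((a - b) ** 2 for a, b in zip(design, optimum))

print(evolve([0.0, 0.0, 0.0], fitness))  # converges toward the optimum
```

Note that nobody needs to know why the winning shape works at any step – only that it performs better than its siblings.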

Structured experiential learning is the equivalent of this approach in development. It requires mapping the universe of possibilities – the “design space” – of a particular project; admitting we don’t know what will work, but making educated guesses about key parameters; testing different variations of those parameters sequentially; and measuring outcomes meticulously, making the necessary adjustments along the way until we get the desired result. It’s a seven-step process, as illustrated in the figure below.

[Figure: the seven steps of structured experiential learning]

The thing I find most promising about this approach is that it offers an opportunity to learn about the thing we fail most at in development but spend the least amount of time trying to understand: implementation.
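
To make the crawl concrete, here’s a rough Python sketch of this kind of loop applied to a hypothetical project. The parameter names, their values, and the toy outcome function are all invented for illustration – in practice, “measuring” a variant means real fieldwork, not a function call:

```python
import itertools

# Hypothetical design space for, say, a cash-transfer pilot.
design_space = {
    "transfer_size": [10, 20, 40],              # dollars per month
    "recipient": ["mother", "household"],
    "conditionality": ["none", "school_attendance"],
}

def crawl(design_space, measure_outcome, target):
    """Test design variants sequentially, record each result, and
    stop as soon as a variant clears the target outcome."""
    keys = list(design_space)
    results = []
    for values in itertools.product(*design_space.values()):
        design = dict(zip(keys, values))
        outcome = measure_outcome(design)   # meticulous measurement
        results.append((outcome, design))
        if outcome >= target:               # good enough: stop crawling
            return design, results
    # No variant hit the target: return the best design found so far.
    return max(results, key=lambda r: r[0])[1], results

# Toy stand-in for a real field measurement of the outcome.
def toy_outcome(design):
    bonus = 1.5 if design["conditionality"] != "none" else 1.0
    return design["transfer_size"] * bonus

best, tried = crawl(design_space, toy_outcome, target=50)
print(best, "after", len(tried), "variants")
```

A real crawl would be adaptive rather than exhaustive – earlier results should reshape which variants get tried next – but the loop structure is the same.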

Impact evaluation involves two distinct causal models – one moving from inputs to outputs (which is internal to the implementing agency) and one moving from outputs to outcomes (which is external to it). Think of a textbook program: budgeted funds (inputs) become textbooks delivered to classrooms (outputs), which in turn – we hope – become children learning to read (outcomes).

Most projects fail in the first stage – that is, the project design fails to produce the intended outputs due to implementation challenges. Yet most “rigorous” impact evaluations (e.g. RCTs) focus exclusively on the second stage – “impact” – thus missing out on a whole lot of learning. The little “e” offers an interesting solution.

The approach is not without problems, however. While it’s nice to talk about big donors loosening their ties and “going with the flow,” it’s hard to imagine how this could work in practice so long as donors remain in the business of providing money upfront.

Namely, when governments/donors provide money upfront, that money is gone regardless of whether “outcomes” are achieved; prescribing solutions and focusing on inputs and processes thus becomes the only tangible way for donors to mitigate the risk that programs don’t achieve their objectives (at least as far as taxpayers are concerned).

It’s also just easier.

At the end of the day, it’s easier for donors to tell their grantees how to spend their money, and then focus on making sure it was spent accordingly, than to worry about the “impact” of programs (which is notoriously difficult, time-consuming, and expensive to measure) – particularly if the former lets donors check the accountability box and get away with it. Donors are, after all, in the business of doling out money. In my view, the incentives just aren’t aligned.

One way around this is to take governments completely out of paying for programs upfront – and have them pay ex post for results. This gives implementing agencies the flexibility to tailor programs and focus exclusively on achieving the desired outcome, while shifting risk away from donors. The problem is, who pays for the programs upfront?!

Under current approaches, the burden usually falls on developing country governments and/or nonprofits, which are often ill-equipped to assume full implementation risk. In my (perhaps biased) view, Development Impact Bonds (DIBs), a new financing instrument I’ve been working on for the past year with colleagues at the Center for Global Development, offer the best solution for little “e.”

In a DIB, private investors provide money upfront to roll out interventions and donors pay them back (principal plus a return, commensurate with success) if – and only if – outcomes are achieved. If outcomes are not achieved, investors lose all or part of their investment.
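
In stylized form – all numbers and the partial-recovery clause below are invented for illustration, not drawn from any actual DIB contract – the payout logic looks like this:

```python
def dib_payout(principal, success_return, outcomes_achieved, recovery=0.0):
    """Stylized DIB payout: if independently verified outcomes are
    achieved, the outcome funder repays investors principal plus an
    agreed return; if not, investors lose all (or, with a partial-
    recovery clause, part) of their stake."""
    if outcomes_achieved:
        return principal * (1 + success_return)
    return principal * recovery  # all or part of the principal is lost

# Invented numbers: $1m upfront, 8% return on success, 25% floor on failure.
print(dib_payout(1_000_000, 0.08, True))                  # repaid with a return
print(dib_payout(1_000_000, 0.08, False, recovery=0.25))  # partial loss
```

The point of the structure is that implementation risk – the first causal stage above – sits with investors rather than with taxpayers, freeing implementers to crawl the design space however they see fit.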

What are other ways to make little “e” work? How do donors see this playing out in their respective agencies? (I’m less interested in fringe movements and more interested in how we mainstream this kind of approach.) I’d love to hear from you.
