Waylaid Dialectic

September 11, 2013

Where the Devil Isn’t

Filed under: Research for Development — terence @ 1:07 pm

Reading this very interesting DFID paper on impact evaluations and causality, it strikes me that the beauty of Randomised Control Trials in development isn't so much to do with their internal validity, or with philosophical arguments about causality, but simply that for RCTs the devil isn't in the details.

With multivariate regressions there's always somewhere for the bad stuff to hide: weak instrumental variables, Granger causality, poor-quality data, the absence or presence of particular control variables, running so many regressions as to eventually get one of those magic stars…
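That last trap is easy to demonstrate. A toy simulation (entirely my own, not from the DFID paper): regress pure noise on pure noise twenty times and, on average, one of the twenty comes back "significant" at the 5% level.

    import numpy as np

    rng = np.random.default_rng(0)
    n, runs = 100, 20
    y = rng.normal(size=n)  # the outcome: pure noise

    stars = 0
    for _ in range(runs):
        x = rng.normal(size=n)  # a candidate regressor: also pure noise
        # OLS slope, residual standard error, and t-statistic
        # for a one-regressor model with an intercept
        x_c, y_c = x - x.mean(), y - y.mean()
        beta = (x_c @ y_c) / (x_c @ x_c)
        resid = y_c - beta * x_c
        se = np.sqrt((resid @ resid) / (n - 2) / (x_c @ x_c))
        if abs(beta / se) > 1.96:  # roughly p < 0.05
            stars += 1

    print(f"{stars} of {runs} pure-noise regressions earned a 'magic star'")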

Likewise with qualitative work: as a reader you can never be wholly confident that the author hasn't prioritised some evidence over other evidence, or heard some voices and not others. Or that they're not extrapolating too far from a small, non-representative sample.

With an RCT, on the other hand, things are pretty simple. There are external validity issues beyond your population or context of interest, sure. But run enough RCTs in enough places and you start to overcome this. And, crucially, what you do run is (usually) a simple comparison of treatment versus control. Fairly simple maths, and an obvious effect, or not.
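How simple? For a basic two-arm trial, the headline analysis really is just a difference in means and a t-test. A minimal sketch with made-up data (the outcome means of 0.50 and 0.55, and the sample sizes, are purely illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # hypothetical outcomes for a control arm and a treatment arm
    control = rng.normal(loc=0.50, scale=0.20, size=400)
    treated = rng.normal(loc=0.55, scale=0.20, size=400)

    effect = treated.mean() - control.mean()
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"estimated effect: {effect:.3f}, p-value: {p_value:.4f}")

Nowhere in that for a weak instrument or a cherry-picked control variable to hide.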

In the tangled world of development research that, I think, is the humble RCT’s most persuasive selling point.

Not the be-all and end-all, but nice, because they leave a lot less space for hiding things.

 

June 16, 2012

Significant?

Filed under: Random Musings — terence @ 8:08 am

Reviewing a review of the Randomistas, Berk Ozler points to something that troubled me when I read the book: many of the interventions touted for delivering statistically significant improvements vis-à-vis control groups in Randomised Control Trials haven't actually had that significant an impact in the everyday sense of the term:

Suppose that you’re told that a program reduced the rate of dropping out of school among 15 year-olds by 17% and this reduction was statistically significant. You are also told that the same figure among 12 year-olds is 38%. You would likely take note. Suppose now you’re told that these are the effects of a conditional cash transfer program, where the dropout rate among the control group is 37.7% and 16.8%, respectively for ages 15 and 12, thus the absolute effect sizes are 6.4 percentage points in each case.

When you hear the first sentence, you are likely to miss two things that you might have thought about had you been given the latter facts about the same program effects – simply stated differently. First, in the latter case, you might say: “31.3% of 15 year-olds dropped out of school even though the government offered their families a considerable sum?” Second, you notice the fact that the program averted just six dropouts for every 100 transfers. Would you be surprised if you were then told that these are the effect sizes of the much-heralded PROGRESA? If you are, feel free to examine Table 1 in this paper (gated, sorry no WP versions around) by Behrman, Sengupta, and Todd (2005). [Emphasis mine.]
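The arithmetic behind those numbers is worth making explicit. A quick sketch, using only the figures from Berk's quote (the rounding is mine):

    # The same 6.4-percentage-point absolute effect sounds very different
    # as a relative reduction, depending on the baseline dropout rate.
    baselines = {"15 year-olds": 37.7, "12 year-olds": 16.8}   # control-group dropout, %
    reductions = {"15 year-olds": 0.17, "12 year-olds": 0.38}  # relative reduction

    for group, base in baselines.items():
        absolute = base * reductions[group]  # percentage points averted
        print(f"{group}: {reductions[group]:.0%} of {base}% = {absolute:.1f} pp; "
              f"treated dropout rate = {base - absolute:.1f}%")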

I think you can still defend PROGRESA and similar Conditional Cash Transfer (CCT) programmes in light of this, mounting a defence along the lines of: (a) in the difficult world of development interventions some improvement is better than none, which is often all we get for our money; and (b) small things add up to big things over time.

But at the same time, Berk's example strikes me as an incredibly important reminder of the perils of being blinded by the 'I have a result!' effect of statistical significance.

Also, in light of the actual effect magnitude of PROGRESA, it's hard not to think that the most appropriate response to its findings might not actually have been for us all to race off touting the wonders of CCTs and trying to cultivate them everywhere, but rather to dwell on the fact that, for the poor, whether they send their children to school or not is, for the most part, something that happens regardless of short-run financial incentives (only up to a point, of course).

In terms of future research on education in development, quite possibly the most significant finding of those first PROGRESA studies should have been that about a third of parents weren't able to get their kids to school, or didn't think it worth it, even in light of the money bonus on offer. Or that a significant majority of poor parents send their kids to school regardless of the financial costs that PROGRESA was designed to offset. These would seem to be the real stories of the PROGRESA results, even if we were to conclude that the programme's small improvements were still significant enough to make it worth it.
