Reading this very interesting DFID paper on impact evaluations and causality, it strikes me that the beauty of Randomised Controlled Trials in development doesn’t lie so much in their internal validity, or in philosophical arguments about causality, but rather, simply, in the fact that for RCTs the devil isn’t in the details.
With multivariate regressions there’s always somewhere for the bad stuff to hide: weak instrumental variables, spurious Granger causality, poor-quality data, the absence or presence of particular control variables, running so many regressions that you eventually get one of those magic stars…
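That last point is easy to demonstrate. Here is a minimal sketch (hypothetical data, numpy assumed available) of the “magic stars” problem: regress pure noise on pure noise a hundred times, and some specifications will look statistically significant by chance alone.

```python
import numpy as np

# Hypothetical illustration: 100 regressions of noise on noise.
# There is NO true relationship, yet roughly 5% of the runs will
# clear the conventional 5% significance bar anyway.
rng = np.random.default_rng(0)
n, trials = 50, 100
false_positives = 0
for _ in range(trials):
    x = rng.normal(size=n)
    y = rng.normal(size=n)  # no true relationship at all
    # OLS slope and its standard error, computed by hand
    slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    intercept = y.mean() - slope * x.mean()
    resid = y - slope * x - intercept
    sxx = (x - x.mean()) @ (x - x.mean())
    se = np.sqrt(resid @ resid / (n - 2) / sxx)
    if abs(slope / se) > 2.01:  # ~5% two-sided critical t, df = 48
        false_positives += 1
print(false_positives)  # expect roughly 5 of 100 by chance alone
```

Run enough specifications and report only the starred one, and the reader has no way to see the ninety-odd that failed.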
Likewise, with qualitative work, as a reader you can never be wholly confident that the author hasn’t prioritised some evidence over the rest, or heard some voices and not others, or extrapolated too far from a small, non-representative sample.
With an RCT, on the other hand, things are pretty simple. There are external validity issues beyond your population or context of interest, sure. But run enough RCTs in enough places and you start to overcome that. And, crucially, what you actually run is simple: (usually) treatment versus control. Fairly simple maths and an obvious effect, or not.
In the tangled world of development research that, I think, is the humble RCT’s most persuasive selling point.
Not the be-all and end-all, but nice, because they leave a lot less space for hiding things.