Although I’m no fan of William Easterly’s polemics, I definitely pay attention to and appreciate his academic papers. And I can see the potential merit in what he and his co-authors have tried to do in their various aid agency ranking exercises, but at the end of the day I think the effort is mostly wasted, at least in those exercises emanating from the Easterly stable. My main problems with the latest effort [pdf] are pretty much the same ones I had with Easterly and Pfutze’s 2008 paper.
1. While the authors note that the optimal overheads-to-aid-spend ratio for an aid agency isn’t zero, they rank their aid agencies as if it is. In reality, really low overheads mean worse aid, not better, but this point (something that deserves to be a development truism) doesn’t factor into their calculations at all. They also fail to adequately address (they note the point but effectively ignore it) the problem that what aid agencies report as overheads varies dramatically depending on reporting practices. The New Zealand aid agency, for example, reports high overheads not because it is particularly bloated, but rather because it is particularly diligent in calling an overhead an overhead. Finally, they fail to note that different agencies have different jobs, some of which might actually warrant higher overheads. I doubt this fact fully excuses the UNDP, but once you consider that it performs a lot of work (monitoring MDGs, producing the HDR, and running things like the UNDP IPC) that is both useful and not actual aid disbursement, its 129% overheads figure doesn’t seem quite as egregious.
2. Their country fragmentation index scores country fragmentation the same for a given donor regardless of the aid modalities used. (In other words, a donor that runs duplicative projects in two countries will score the same as if it had contributed the same amount to two SWAps in those countries.)
3. Their governance scores (measuring the extent to which donors target aid to countries that are democratic and well governed) also ignore modalities. So giving aid to the government of a corrupt military dictatorship scores the same as giving the same amount of aid solely to the local branch of Transparency International. This limitation probably explains why some of the Scandinavian donors score so poorly.
I can see the appeal and utility of such indices, and the longitudinal data in this one are interesting, but I still think the limitations outweigh the merits, at least in the way they’re used here. It’s an interesting paper, but ultimately one that generates more heat than light.