Wednesday, October 5, 2011

What Do Recent Insights From Development Economics Tell Us About Foreign Aid Policy?

Author: Anders Olofsgård, SITE.

The short answer is: quite a lot, but different parts of the literature offer different recommendations. The problem is that these recommendations are partly in conflict, and that political and bureaucratic incentives may reinforce these frictions when aid policy is put into practice. It follows that reforms aiming to improve aid effectiveness have to find a way to deal with this conflict, and also to balance the tendency toward institutional sclerosis within bureaucratic agencies against the short-sighted incentives of politicians.


The currently predominant field of development economics focuses on impact evaluation of different economic and social interventions. These studies are all micro-oriented, looking at impacts at the level of the individual or household rather than the nation as a whole. One example is evaluations of the effects of different interventions on school participation, such as conditional cash transfers, free school meals, provision of uniforms and textbooks, and de-worming. Other well-known studies have looked at educational output, moral hazard versus adverse selection in financial markets, how best to allocate bed-nets to prevent malaria, and the role of information in public goods provision and health outcomes.

What has sparked the academic interest in these types of impact evaluations is the application of a methodology well known from clinical trials and first introduced into economics by labor economists: randomized field experiments. The purpose of impact evaluation is to establish the causal effect of the program at hand. Strictly speaking, this requires an answer to a counterfactual question: what difference does it make for the average individual whether he is part of the program or not? Since an individual cannot be both part of, and not part of, the program at the same time, an exact answer to that question cannot be reached. Instead, evaluators must rely on a comparison between individuals participating in the program and those who do not, or on a before-and-after comparison of program participants. The challenge when doing this is to avoid the comparison being contaminated by unobservable confounding factors and selection issues. For instance, maybe only the most school-motivated households are willing to sign up for conditional cash transfer programs, so a positive correlation between program involvement and school participation may be entirely due to selection bias (these households would have sent their children to school anyway). In this case participation is what economists refer to as “endogenous”: individual characteristics that may affect the outcome variable may also drive participation in the program.
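To make the selection problem concrete, here is a minimal simulation (an illustrative sketch added here, not from the brief; all numbers are made up). Unobserved household “motivation” drives both program take-up and schooling, so the naive participant/non-participant comparison overstates a program whose true effect is known by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 0.05  # the program's true effect on school participation

# Unobserved motivation drives BOTH program take-up and schooling.
motivation = rng.normal(size=n)
takeup = (motivation + rng.normal(size=n)) > 0.5           # self-selection
p_school = np.clip(0.6 + 0.1 * motivation + true_effect * takeup, 0, 1)
school = rng.uniform(size=n) < p_school

naive = school[takeup].mean() - school[~takeup].mean()
print(f"true effect: {true_effect:.3f}, naive estimate: {naive:.3f}")
# The naive estimate comes out well above 0.05: motivated households
# would have sent their children to school anyway (selection bias).
```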

To get around this problem, the evaluator would want strictly “exogenous” variation in participation in the program, i.e. individuals should not get an opportunity to self-select into participation. The solution is to select a group of similar individuals/households/villages and then randomize participation across these units. This creates a group of participants in the program (the “treated”, using the language of clinical studies) and a group of non-participants (the “control group”) who are not only similar in all observable aspects thought to possibly affect the outcome, but who are also not given the opportunity to self-select into the program based on unobservable characteristics. Based on this methodology, the evaluator can then estimate the causal effect of the program. Exactly how that is done varies, but in the cleanest cases it is simply by comparing the average outcome in the group of treated with that in the control group.
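Continuing the same hypothetical simulation, randomizing assignment breaks the link between motivation and participation, so the simple difference in means between treated and controls recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_effect = 0.05

motivation = rng.normal(size=n)
treated = rng.uniform(size=n) < 0.5   # randomized: independent of motivation
p_school = np.clip(0.6 + 0.1 * motivation + true_effect * treated, 0, 1)
school = rng.uniform(size=n) < p_school

# With randomization, the difference in means is an unbiased
# estimate of the average treatment effect.
ate = school[treated].mean() - school[~treated].mean()
se = np.sqrt(school[treated].var() / treated.sum()
             + school[~treated].var() / (~treated).sum())
print(f"difference in means: {ate:.3f} (s.e. {se:.3f})")  # close to 0.050
```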

So what has this got to do with aid policy? A significant part of aid financing goes, of course, to projects to increase school participation, give the poor access to financial markets, eradicate infectious diseases, etc. Both the programs evaluated by randomization and the randomized evaluations themselves are often financed by aid money. The promise of the randomization literature is thus that it offers a more precise instrument for evaluating the effectiveness and efficiency of aid-financed projects, and also helps aid agencies in their choice of new projects by creating a more accurate knowledge bank of what constitutes current best practice. This can be particularly helpful since aid agencies are often under fire for not being able to show what results their often generous expenditures generate. Anyone who has followed the recent aid debate in Sweden is familiar with this critique, and the methodology of randomization is often brought forward as a useful tool to help estimate and make public the impact of aid-financed development projects.

Limits to Randomization

Taken to the extreme, the “randomization revolution” suggests that to maximize aid effectiveness all aid should be allocated to clearly defined projects, and only to those projects that have been shown through randomization to have a cost-effective causal effect on some outcome included in the aid donors' objectives (such as the Millennium Development Goals). Yet most aid practitioners would be reluctant to subscribe to such a statement. Why is that? Well, as is typically the case, there are many potential answers. The cynic would argue that proponents of aid are worried that a true revelation of its dismal effects would decrease its political support, and that aid agencies want to keep their relative independence to favor their own pet projects. Better evaluation techniques make it easier for politicians and taxpayers to hold aid agencies accountable for their actions, and principal-agent theory suggests that governments should then put more pressure on agencies to produce verifiable results.

There are other, more benevolent reasons to be skeptical of this approach, though, and these reasons find support in the more macro-oriented part of the literature. In recent papers studying cross-national differences in economic growth and development, almost all focus is on the role of economic and political institutions. The term “institutions” has become a bit of a catch-all phrase, and it sometimes means quite different things in different papers. Typically, though, the focus lies on formal institutions or societal norms that support a competitive and open market economy and a political system with limited corruption, predictability and public legitimacy. Critical components include protection of property rights, democracy, honest and competent courts, and competition policy, but the list can be made much longer. Here too, the recent academic interest has been spurred by methodological developments that have permitted researchers to better establish a causal effect from institutions to economic development. Cleanly estimating the effect of institutions on the level or growth rate of GDP is complicated since causality is likely to run in both directions, and other variables, such as education, may cause both. What scholars have done is to identify historical data that correlate strongly with historic institutions, and then correlate the variation in current institutions that can be explained by these historical data with current-day income levels. If cross-national variation in current institutions maps closely to cross-national variation in historical institutions (“institutional stickiness”), and if current-day income levels, or education rates, do not cause historical institutions (which seems reasonable), then the historical data can be used as a so-called “instrument” to produce a cleaner estimate of the causal effect of institutions.
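The logic can be sketched with simulated data (a stylized illustration; the “historical instrument” here is purely hypothetical, in the spirit of the settler-mortality-style instruments in the literature). Ordinary least squares is biased by the confounder, while two-stage least squares using only the history-driven variation in institutions recovers the true coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Simulated cross-country data: historical conditions shape current
# institutions ("stickiness") but affect income only through them.
history = rng.normal(size=n)            # the instrument
omitted = rng.normal(size=n)            # unobserved confounder (e.g. education)
institutions = 0.8 * history + omitted + rng.normal(size=n)
log_gdp = 0.5 * institutions + 2.0 * omitted + rng.normal(size=n)

def slope(y, x):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"OLS  (biased by the confounder): {slope(log_gdp, institutions):.2f}")

# 2SLS: first regress institutions on the instrument, then regress
# income on the fitted, history-driven part of institutions.
fitted = slope(institutions, history) * history
print(f"2SLS (history as instrument):    {slope(log_gdp, fitted):.2f}")  # ~0.5
```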

Note that randomization and instrumentation are trying to solve the same empirical challenge. When randomization is possible it will be superior if implemented correctly (because perfect instruments only exist in theory), but there is of course a fairly limited range of questions for which randomized experiments can be designed. In other cases scholars will have to make do with instrumentation, or with other alternatives such as matching, regression discontinuity or difference-in-differences estimation, to better estimate a causal effect; a sketch of the last of these follows below.
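For instance, difference-in-differences compares the before/after change in a group exposed to a reform with the change in an unexposed group, netting out both the fixed gap between groups and the common time trend. A minimal sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
true_effect = 0.3

exposed = rng.integers(0, 2, size=n).astype(bool)   # group hit by the reform
post = rng.integers(0, 2, size=n).astype(bool)      # observed after the reform
# Outcome with a fixed group gap, a common time trend, and the true
# effect only for the exposed group after the reform.
y = (1.0 * exposed + 0.5 * post + true_effect * (exposed & post)
     + rng.normal(size=n))

def cell_mean(g, p):
    return y[(exposed == g) & (post == p)].mean()

# Two differences net out the fixed gap and the common trend.
did = ((cell_mean(True, True) - cell_mean(True, False))
       - (cell_mean(False, True) - cell_mean(False, False)))
print(f"difference-in-differences estimate: {did:.2f}")  # close to 0.30
```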

A second insight from this literature is that what constitutes successful institutions is context-specific. Certain economic principles may be universal: incentives work, competition fosters efficiency, and property rights are crucial for investment. However, as the example of China shows, which institutions are most likely to guarantee property rights, competition and the right incentives may vary depending on norms and historical experiences, among other things. Successful institutional reform therefore requires a certain degree of experimentation for policy makers to find out what works in the context at hand. Simply implementing blueprints of institutions that have worked elsewhere typically doesn't work. In other words, institutions must be legitimate in the society at hand to have the desired effect on individual behavior.

Coming back to aid policy, the lesson from this part of the literature is that for aid to contribute to economic and social development, the focus should be on helping partner-country governments and civil society to develop strong economic and political institutions. And since blueprints don't work, it is crucial that this process involves domestic involvement and leadership, in order to guarantee that the institutions put in place are adapted to the context of the partner country at hand and have legitimacy in the eyes of both citizens and decision makers. Indeed, institution building is also a central part of aid policy. This sometimes takes an explicit form, such as financing western consultants with expertise in, say, central banking reform or how to set up a well-functioning court system. But many times it is also implicit in the way the money is disbursed: through program support rather than project support (where the former is more open for the partner country to use according to its own priorities), through the partner country's financial management systems, and recorded in the recipient country's budget. There is an element of institution building in the implementation of projects as well: by establishing projects within partner government agencies and actively involving their employees, learning and experience will contribute to institutional development.

Actual aid policy often falls short of these ambitions, though. Nancy Birdsall has referred to impatience with institution building as one of the donors' “seven deadly sins”. The impatience to produce results leads to insufficient resources for the challenging and long-term work of creating institutions in weak states, and the search for success leads to the creation of development management structures (project implementation units) outside partner-country agencies. The latter not only generate no positive spillovers of knowledge within government agencies, but can often have the opposite effect when donors eager to succeed lure scarce talent away from government agencies. The aid community is aware of these problems and has committed to improving its practices in the Paris Declaration and the Accra Agenda, but so far progress has been deemed slow.

Micro or Macro?

So, I started out saying that there is a risk that these two lessons from the literature may be in conflict if put into practice in actual aid policy. Why is that? At a trivial level, there is of course a conflict over the allocation of aid resources if we interpret the lessons as saying that the sole focus should be on either institutional development or best-practice social projects, respectively. However, most people would probably agree that there is merit to both. In theory it is possible to conceive of an optimal allocation of aid across institutional support and social project support, in which the share of resources going to project support is allocated across projects based on best practices learned from randomized impact evaluations. In practice, however, it's important to consider why these lessons from the literature haven't been implemented to a greater extent already. After all, these are not completely new insights. Political economy and the logic of large bureaucratic organizations may be part of the answer. Once these factors are considered, a less trivial conflict becomes apparent, showing the need to think carefully about how best to proceed with improving the practices of aid agencies.

As mentioned above, one line of criticism against aid agencies is that they have had such a hard time showing results from their activities. This is partly due to the complicated nature of aid itself, but critics also argue that it is greatly driven by the current practices of aid agencies. First of all, there is a lack of transparency; information about what decisions are made (and why), and where the money is going, is often insufficient. This problem sometimes becomes acute, when corruption scandals reveal the lack of proper oversight. Secondly, money is often spent on projects and programs for which objectives are unclear, targets are unspecified, and the final impact of the intervention on the identified beneficiaries simply can't be quantified. This of course limits the ability to hold agencies accountable for their actions, so the focus instead tends to fall on output targets (has all the money been disbursed, have all the schools been built) rather than the actual effects of the spending. So why is this? According to critics, one reason for this lack of transparency and accountability is that it yields the agencies more discretion in how to spend the money. Agencies are accused of institutional inertia: programs and projects keep getting financed despite doubts about their effectiveness, because agency staff and aid contractors are financially and emotionally attached to them.

In this context, more focus on long-run, hard-to-evaluate institutional development may be taken as an excuse for continuing business as usual. Patience, a long-run perspective and partner-country ownership are necessary, but they cannot be taken as an excuse for not clearly specifying verifiable objectives and targets, and for not engaging in impact evaluation. It is also important to note that a long-term commitment does not have to imply an unwillingness to abandon a program if it doesn't generate the anticipated results. It is of course typically much harder to design randomized experiments to evaluate institutional development than, say, the effect of free distribution of bed-nets. But it doesn't follow that it is always impossible, and, more importantly, it doesn't preclude other well-founded methods of impact evaluation. The concern here is thus that too much emphasis on the role of institutional development is used as an excuse for not incorporating the main lesson from the “randomization revolution”, namely the importance of the best possible impact evaluation, because actual randomization is not feasible.

The concern discussed above rests on the implicit argument that aid agencies, due to the logic of incentives and interests within bureaucratic institutions, may not always do what is in their power to promote development, and that this is made possible by a lack of transparency and accountability. The solution would in that case seem to be to increase the accountability of aid agencies towards their politicians, the representatives of the taxpayers financing the aid budget. That is, greater political control of aid policy would improve the situation.

Unfortunately, things aren't quite that easy, which brings us to the concern with letting the ability to evaluate projects with randomized experiments become a prerequisite for aid financing. We have already touched upon the problem that programs for institutional development are hard to design as randomized experiments. It follows that important programs may not be implemented at all, and that aid allocation becomes driven by what is feasible to evaluate rather than by what is important for long-run development. But there is also an additional concern that has to do with the political incentives behind aid. The impatience with institution building is often blamed on political incentives to generate verifiable success stories. This is driven by the need to justify aid, and government policies more generally, in the eyes of the voters. It follows that politicians in power often have a rather short time horizon, which doesn't square well with the tedious and long-run process of institution building. Putting aid agencies under the tighter control of elected politicians may therefore possibly solve the problem outlined above, but it may also introduce, or reinforce, another problem: the impatience with institution building.

Unfortunately, the perception that randomization makes it possible to define more exactly what works and what doesn't may have further unintended consequences if politicians care more about short-term success than long-term development. We know from principal-agent theory that the optimal contract gives the agent stronger incentives to take actions that contribute to a project if it becomes easier to evaluate whether the project has been successful or not. Think now of the government as the principal and the aid agency as the agent, and consider the case where the government has a bias towards generating short-run success stories. In this case, the introduction of a new technology that makes it easier to evaluate social projects (i.e. randomization) will make the government put stronger incentives on the aid agency to redirect resources towards social projects and away from institutional development. This would not be a problem if the government had development as its only objective, because then the negative consequences for effort at institution building would be internalized in the incentive structure. But in a second-best world where politics trumps policy, the improved technology may have perverse and unintended consequences: greater political control will lead to less focus on institutional development than is desired from a development perspective. A very benevolent (naïve?) interpretation of the motivation behind aid agencies' tendency to design social projects such that their effects are hard to quantify could thus be that it decreases the political pressure to ignore institutional development. A stylized formalization of this logic is sketched below.
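Here is a minimal multitask moral-hazard sketch in the spirit of Holmström and Milgrom (an illustration added here, not a model from the brief). The agency divides effort between measurable social projects and institution building, only the former generates a verifiable signal, and with a linear contract the textbook result is that sharper measurement raises the optimal bonus on the measured task:

```latex
% e_1 = effort on measurable social projects, e_2 = institution building.
% Only e_1 yields a verifiable signal x = e_1 + \varepsilon with
% Var(\varepsilon) = \sigma^2; randomization lowers \sigma^2.
% With CARA risk aversion r, a linear contract w = \alpha + \beta x,
% and effort-cost curvature C'', the textbook optimal bonus is
\[
  \beta^{\ast} \;=\; \frac{1}{1 + r\,\sigma^{2}\,C''},
\]
% which rises as \sigma^2 falls. If the two efforts are substitutes in
% the agency's cost function, the stronger incentive pulls effort
% toward the measured task and away from the unmeasured one:
\[
  \sigma^{2}\downarrow \;\Longrightarrow\; \beta^{\ast}\uparrow
  \;\Longrightarrow\; e_{1}\uparrow,\quad e_{2}\downarrow .
\]
```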

Concluding Remarks

Once political economy and bureaucratic incentives are considered, the challenge of heeding the two lessons from the literature thus goes beyond the mere conflict over whether to allocate resources to institutional development or to best-practice social projects. Improved agency accountability may be necessary to avoid “institutional sclerosis” in the name of institution building and to make sure that best practices are followed, but too much political meddling may lead to short-sightedness and a hunt for marketable success stories. It is even possible that the “randomization revolution” may make matters worse, if it becomes an excuse for neglecting the tedious and long-term process of institution building and reinforces the political pressure for short-term verifiable results.

What, then, is the best hope for avoiding this conflict of interest? That is far from a trivial question, but maybe the best way to make sure that agency accountability towards political principals doesn't lead to impatience with institution building is to form a broad-based political consensus around the objectives, means and expectations of development aid. The pedagogical challenge of convincing taxpayers that aid helps and that they need to be patient remains, but at least the political temptation to accuse opponents of squandering taxpayers' money without proven effects, and to pretend to have the final solution for how to make aid work, should be mitigated. Until then, the best bet is probably to stay skeptical of anyone claiming to have the final cure for aid inefficiency, and to allow some trust in the ability of experienced practitioners to do the right thing.


Recommended Further Reading

  • Acemoglu, D., S. Johnson and J.A. Robinson (2001), “The Colonial Origins of Comparative Development: An Empirical Investigation”, American Economic Review 91(5), 1369-1401.
  • Banerjee, A. (Ed.) (2007), “Making Aid Work”, MIT Press.
  • Banerjee, A. and E. Duflo (2008), “The Experimental Approach to Development Economics”, NBER Working Paper 14467.
  • Birdsall, N. (2005), “Seven Deadly Sins: Reflections on Donor Failings”, CGD Working Paper 50.
  • Birdsall, N. and H. Kharas (2010), “Quality of Official Development Assistance Assessment”, Working Paper, Brookings and CGD.
  • Duflo, E., R. Glennerster and M. Kremer (2007), “Using Randomization in Development Economics Research: A Toolkit”, CEPR Discussion Paper 6059.
  • Easterly, W. (2002), “The Cartel of Good Intentions: The Problem of Bureaucracy in Foreign Aid”, Journal of Economic Policy Reform, 5, 223-50.
  • Easterly, W. and T. Pfutze (2008), “Where Does the Money Go? Best and Worst Practices in Foreign Aid”, Journal of Economic Perspectives, 22, 29-52.
  • Knack, S. and A. Rahman (2007), “Donor Fragmentation and Bureaucratic Quality in Aid Recipients”, Journal of Development Economics, 83(1), 176-97.
  • Rodrik, D. (2008), “The New Development Economics: We Shall Experiment, but How Shall We Learn?”, JFK School of Government Working Paper 55.


Source: http://freepolicybriefs.org/2011/10/03/what-do-recent-insights-from-development-economics-tell-us-about-foreign-aid-policy/

