
David Hall-Matthews


Historians tend not to stick their necks out. They spend their days striving to root out empirical evidence from the past; then when it comes to interpreting it, they celebrate uncertainty. Each effect had multiple causes; every event could have been different had it not been for its contingent concatenation of contexts.

History is drawn upon every day to justify policy, but most historians would sooner deconstruct the origins of binding myths than help to construct more useful ones. If this generalization is true, historians and development policy-makers are uneasy bedfellows. Despite having to plan for the unknowable future, development strategists must make concrete recommendations. Usually they are aware of the limitations of drawing conclusions from linear models. Much less often, as in this volume, they ask historians for advice. But how can historians rise to the challenge, when confident predictions about the future are antithetical to their raison d’être?

For a start, it can only help policy-makers if specialists in analysing long-term trends are enlisted to point out the risks or errors in their assumptions, modify the tendency to isolate single causes (or effects) and emphasise the importance of political, social and cultural contexts. By embracing complexity, historians may be able to foresee possible ranges of outcomes or, with a little more certainty, warn of likely adverse reactions to policies that have sat badly with local populations in the past. That is useful, but not constructive. And social scientists can make similar claims too. Indeed many university departments of Development Studies see their role as precisely to critique the approaches of donor nations and institutions, using nuanced local case studies to demonstrate failures of sensitivity. Moreover, they are more willing to draw comparisons. All development students are encouraged to consider the transferability of their findings, specifying, as far as possible, which are locally significant and which might be applied elsewhere. Historians of particular regions are frequently reluctant to do this.

Nonetheless, there are several ways in which planners could and should seek out positive, practically useful lessons from good historical research. History can highlight previously successful strategies; aid reflection on the policy-making process itself; and expose the origins of current ideas.

Development policy-making, in the broadest sense, is as old as society. At the simplest level, historians can reveal which brilliant new programs have actually been tried before, then buried – as well as what has worked in the past. The reasons will sometimes be specific and context remains important, but previous unintended consequences ought never to be unanticipated in the future.

Historians, uniquely, can examine circumstances before, during and long after particular interventions, and thus assess their multiple impacts over a far greater time period and in a more nuanced way than is possible for contemporary programs. Development is necessarily a slow process and history can sometimes show where and in what forms long-term persistence may be needed. If, for example, the Indian anti-malaria campaign faltered in the 1960s because it was not followed through, as Sunil Amrith shows, it is straightforward to predict that recent successes in East Africa will also be reversed if local health services and political will are not bolstered over the long term.

Development policy-makers, however, are ill-disposed to learn from the mistakes and successes of the past, for several reasons. Many, by instinct, are forward-looking modernizers and prone to believe the teleological fallacy that things can only get better. At its worst, there is a tendency to invert E.H. Carr and see developing countries as representing the past, from which, ergo, there is nothing to learn (Carr 1961). Agenda-setters rarely, in any context, trumpet their own past failings, but those in the development field also have particularly short institutional memories on which to draw. Development is widely perceived to have been invented in the 1940s, with Bretton Woods and President Truman’s much-misquoted inaugural address, in which he coined the term under-development (Rist 2002: 69–79) – as are Development Studies, as if Adam Smith, Thomas Malthus and Karl Marx had written about something else entirely.

Given the direct similarities between the goals, methods, assumptions and even language of development agents today (including many NGOs) and those of colonial administrations, this is wilful – and harmful.

There is an enormous amount to be gleaned from colonial records. Not only are they open to scrutiny – unlike those of UN organizations, for example – they are exceptionally full. Though they rarely took place in the public domain at the time, it is possible to trace every last scribble in the margin of colonial debates. Thus history enables us not only to see what policies have been tried before, but what happened during often long-drawn-out policy-making processes. We can examine what alternatives were considered, who proposed them and why they were passed over. What made people angry, and what was generally assumed. What was prioritised and by what criteria success was measured. How unpopular measures were presented – and thus how some still extant development paradigms came to be.

Here is one example. Sir Richard Temple, Lieutenant-Governor of Bengal in 1874, ran a famine relief campaign so exemplary that nobody died, only to be lambasted for his excessive expenditure. Three years later, having been transferred to Bombay, he was asked to explain why he had spent so little in another famine in which many perished. He declared that his aim had been to prevent people from becoming dependent on relief, admitting only privately that they had not become so in his earlier campaign (Hall-Matthews 1996). The fear of aid dependence is still routinely used to justify limiting interventions that would more likely foster independence.

Analysing different types and aspects of policy-making processes can only benefit those involved in them today. When does strong individual leadership help and when does it stifle creativity? What are the pros and cons of extensive consultation? What affects the relationship between written guarantees and material outcomes? Again, such questions can by no means be answered only by historians, though they inevitably have a wider stock of divergent cases on which to draw – and perhaps a broader-minded sense of how strengths and weaknesses may be judged. Policy-making involves engagement with multiple individual, institutional and civil society stakeholders – but more importantly with a range of ideas. Some are embedded in particular communities, some are imported. Some are tested scientifically, some contested politically. Whether economic, cultural, ideological, legal or moral, all ideas have specific origins and reflect people’s sense of their own history. All policy-making – even that proposing radical departures from the past – depends on hindsight.

That is why doing history properly is so important. Poorly informed or applied historical analogy has too often been used to support poor policy – and sometimes even been the origin of it (Kornprobst 2007). The 2003 invasion of Iraq was publicly contrasted with the appeasement of Hitler in the 1930s, but never compared with the ruinously bloody and expensive British mandate of Mesopotamia in the 1920s. Applying lessons from one country too readily to others is particularly dangerous. Environmental concerns generated by the American dustbowl in the 1930s resulted in decades of draconian policies to prevent soil degradation in Africa for which there was little evidence (Anderson 1984). Knowing where ideas and concerns come from – and how they have changed over time – is therefore essential to sound policy and likely to throw up new insights.

History can be used, then, to reflect on current approaches, by revealing their origins and by carefully comparing them with similar strategies in the past. It can help us to see clearly how contemporary perceptions converge with or diverge from those of previous development policy-makers. Why do we make the assumptions we do? When and why did our predecessors come up with them? Do we need to reconsider them, or are we standing on the shoulders of giants? Where issues that concern us have already been debated for decades or centuries without resolution, at the very least we can cut to the chase by getting to know those debates. Then it can be judged whether to build from where they left off or discard them as futile – or worse.

A paradigm shift comes not with a new answer to an age-old question, but with a new question. The long, ongoing debate discussed by Stephen Kunitz, for example – between those favoring assimilation and those favoring self-determination for American Indians – has served to distract attention from how best to meet their welfare needs. Indeed the relationship between the ideological battle and the level of resources allocated might best be seen as a downward spiral. It is worth remembering the reason why things can not only get better: some people have an interest in others’ arrested development. Good development policy needs to take account of such interests and countermand their impacts – but they are fiendishly difficult to discern without deep knowledge of their origins and trajectories.

It is possible, then, for history to assist, positively, in the development of better policy, precisely by showing, negatively, where the obstacles have been to desirable outcomes – whether within policy-making processes themselves or in reactions to them. Public health policy is particularly useful to consider in this respect, because its goals are uncontroversial. It is also relatively easy to measure not only its impact but its scale. To a large extent, in a development context, good public health policy simply means sufficient resource allocation to the health sector to ensure that it has the capacity to function effectively and can be accessed by the entire population. This has rarely been achieved, for two main reasons.

First, the development of grassroots healthcare – as distinct from intervening in response to spectacular outbreaks of famine or epidemic disease – is dull, slow to show results and difficult to politicise. Second, governments everywhere are prone to social discrimination. But can they be incentivized to recognize their responsibility?

History may not be able to provide many examples of prior success in this area, but historical research gives clear pointers nonetheless. The cases in this volume from India and America show that it is not enough to have large, well-funded, federal states – or even written commitments. Development aid earmarked for healthcare can be undermined by fungibility and, besides, no conditionality is permanent. Rather, Sunil Amrith argues that the respectable health outcomes in Tamil Nadu and Kerala reflect a long-standing political culture that manifests itself in the ability of poor populations to demand effective service delivery. That is consistent with the relatively worse health provision in less politicized Indian states, or African countries – or among geographically dispersed American Indians – though it still leaves an enormous problem to solve. It is not easy to stimulate a political culture from scratch.

Stephen Kunitz shows that identity-based assertion is not enough to ensure the fulfilment of material rights – and for some minorities, it is a dangerous course. It could take decades to reach a point where people feel able to hold governments to account in specific policy areas, instead of acting as a malleable vote bank. Even Amartya Sen’s assertion that functioning democracies prevent famines can only be taken as a policy prescription in the extremely long term (Sen 1999: 160–88).

This further suggests the need for caution in trying to cut development’s Gordian knots. Frustrated by weak and reluctant governments – and perhaps also with passive populations – Jeffrey Sachs, former director of the UN Millennium Project, recommends bypassing state healthcare agencies and delivering basic needs like mosquito bed nets directly to populations via NGOs (Sachs 2005) – a view echoed in much of the HIV/AIDS prevention work supported by the Bill and Melinda Gates Foundation, and in general by recent USAID policy. This has had some spectacular results in reducing the incidence of critical diseases – such as malaria in Zanzibar – but has done little to increase states’ capacity to deliver public healthcare, or people’s say over its delivery. Health services provided in the context of global philanthropy – whether by nineteenth-century missionaries in Kerala or Bill Gates today – can embed top-down paternalistic approaches and reduce states’ sense of responsibility for provision, as much as they can induce a culture in which people feel able to demand it. Where international efforts are directed through accountable states, their benefits are unmistakeable; where they sidestep ‘bad’ governments, they can further undermine poor people’s capacity to demand health provision (Helleiner 2000).

In such contexts, lively historical debates over what has generated political assertiveness in some places but not others – and why – are useful, and can contribute directly to development discourses around the relative merits of participatory approaches, empowerment, promotion of civil society and good governance. Historians routinely disagree with each other over the main causes and components of political culture, and over whether it is beneficial or even important. When trying to address messy, intractable development problems, however, a bit of messy uncertainty is a good thing. Trying to understand the complex processes of policy-making is hard enough at the institutional level, let alone trying to analyze the motivations, preferences and fears that go into the everyday decisions of entire populations. Development policy-makers can learn a great deal from historians, but perhaps the most important lesson is the willingness to admit that they don’t have all the answers.

References

Anderson, David (1984). ‘Depression, dustbowl, demography, and drought: the colonial state and soil conservation in East Africa in the 1930s’, African Affairs 83(332): 321–44

Carr, Edward H. (1961). What is History?, London: Macmillan

Hall-Matthews, David (1996). ‘Historical roots of famine relief paradigms: ideas on dependency and free trade in India in the 1870s’, Disasters 20(3): 216–30

Helleiner, Gerry (2000). ‘Towards balance in aid relationships’, Cooperation South 2: 21–35

Kornprobst, Markus (2007). ‘Comparing apples and oranges? Leading and misleading uses of historical analogies’, Millennium: Journal of International Studies 36(1): 29–49

Rist, Gilbert (2002). The History of Development: From Western Origins to Global Faith, 2nd edition, London: Zed Books


Sachs, Jeffrey (2005). The End of Poverty: How We Can Make it Happen in Our Lifetime, London: Penguin

Sen, Amartya (1999). Development as Freedom, Oxford: Oxford University Press

Notes

1 These comments are inspired by Chapters 5 (Amrith) and 6 (Kunitz).

