
C 3 Risk characterization

In document: Download: Full Version (pages 76–80)

C 3.1

Certainty of assessment

Risks are classically defined by two factors: the probability and the magnitude of damage (Hauptmanns et al., 1987). The assessment of these two factors depends upon the quantity and quality of respective data permitting a valid prediction of relative frequencies. This is where the concept of ‘certainty of assessment’ comes into play. Ideally, certainty of assessment can be expressed by statistical ranges of the probability and magnitude of damage.

By the term ‘certainty of assessment’, the Council understands the degree of reliability with which a statement can be made as to the probability of damaging events. Risk analyses normally place the two variables ‘magnitude of damage’ (e.g. 1–10,000 persons injured) and ‘probability of occurrence’ of each specific magnitude (from extremely low probabilities to almost one for an almost certain event) in relation to each other. We thereby receive a function from which we can read the probability of each magnitude of damage. However, there is usually a lack of clear and unambiguous information on the probabilities associated with specific magnitudes. If only limited data is available from observation of past events, then the tools of inductive statistics can be used to state a range of values within which – for instance with a 95% or 99% second-order probability – the true value of the probability associated with a certain magnitude of damage must lie.
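The inductive step described above can be sketched numerically. A minimal illustration, assuming the limited data take the form of a count of damaging events among comparable observed cases, is the Wilson score interval for a binomial proportion (the function name and the example numbers are our own, not the report's):

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval for a binomial proportion.

    Given `successes` damaging events observed in `trials` comparable
    situations, return a range within which the true probability of
    occurrence lies with roughly 95% (z = 1.96) second-order probability.
    """
    p_hat = successes / trials
    denom = 1 + z ** 2 / trials
    center = (p_hat + z ** 2 / (2 * trials)) / denom
    halfwidth = (z / denom) * math.sqrt(
        p_hat * (1 - p_hat) / trials + z ** 2 / (4 * trials ** 2)
    )
    return center - halfwidth, center + halfwidth

# Illustrative: 3 damaging events observed in 200 comparable cases.
low, high = wilson_interval(3, 200)
```

The width of the resulting interval is exactly the "error bar" discussed below: more observations shrink it, few observations leave it wide.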

Often not even samples or observed data are available. In these situations, expert judgments must additionally be used as substitutes for empirical data records based on historical observation. Here two avenues can be pursued. The first method is to ask a large number of experts to estimate the range, and then to aggregate the various ranges stated by the individual experts into one interval. The second is to ask the experts to deliver a discrete estimate that is as precise as possible, and then, after statistical processing, to take the dispersion among the experts as the range. In both cases we receive a probability-magnitude function showing for each manifestation of damage a mean value (the point on the curve) and a range (error bar). It may also be purposeful to take the probability as reference and to organize the range around the magnitude of damage. The question then becomes: what is the range of magnitude associated with an x% probability of occurrence? The findings of the analysis of ranges can be visualized by superimposing the error bars (of either probability or magnitude of damage) over the probability-magnitude function. Fig. C 3.1-1 shows an ideal-type curve of such a function.

The smaller the error bar, the higher the certainty of assessment. In order to further standardize this criterion, it has become common practice to state certainty of assessment as a numerical value ranging between 1 (high certainty, or no error bar) and 0 (low certainty, or an error bar from 0 to almost infinity). If the value is close to 1, it can be stated with certainty that a damaging event with a probability of x is to be expected. This value x is itself of course a limit value of a frequency distribution (and thus not a forecast of a specific event), but all experts agree that it accurately reflects the real-world situation. If the value is close to 0, all experts are evidently in disagreement with each other, or the observed data are so dispersed that, while it is possible to form a mean value, the dispersion around this mean is substantial. If values approach 0, the boundary to indeterminacy or ignorance is crossed, as the data or estimates then evidently vary so greatly that one cannot speak of any reliable assessment.
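One simple way to turn an error bar into such a 0-to-1 number is to let certainty shrink as the relative width of the bar grows. The report does not prescribe a formula; the mapping below is purely an illustrative sketch with the required limiting behavior:

```python
def certainty_of_assessment(low, high, mean):
    """Map an error bar to a certainty score in (0, 1]:
    1 for a zero-width bar, approaching 0 as the bar's width grows
    without bound relative to the mean value.
    Illustrative formula; not prescribed by the report."""
    relative_width = (high - low) / mean
    return 1.0 / (1.0 + relative_width)

# A tight error bar around a mean probability of 0.02 scores near 1 ...
tight = certainty_of_assessment(0.019, 0.021, 0.02)
# ... a bar spanning several orders of magnitude scores near 0.
wide = certainty_of_assessment(0.0001, 0.5, 0.02)
```

Any monotone mapping with these two limits would serve the same standardizing purpose.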

The concept of ‘certainty of assessment’ can be illustrated by the example of a lottery with black and white balls. If certainty of assessment is 1 (high certainty), then one knows exactly the number of black and white balls. The probability with which a black or a white ball will be drawn can therefore be stated exactly. If certainty of assessment is low (large error bar), then the number of black and white balls (or their ratio in the urn) is unknown. It must be deduced indirectly from a number of draws or from the estimates of a number of experts who have been able to glance into the urn. Using classic statistics (in the case of a number of draws) or Bayesian statistics (in the case of expert judgments), an approximation of the ratio of black to white balls can be formulated, for which in turn a (second-order) probability can be stated. It is then possible to predict, for a long series of draws, with a probability of, for instance, 95%, the maximum number of white balls without knowing the precise expected value for drawing a white ball.
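The Bayesian variant of the urn example can be sketched with a conjugate model: under a uniform prior on the unknown share of white balls, the posterior after the observed draws is a Beta distribution, and the 95% bound on future white balls falls out of the posterior predictive. The prior choice, sample sizes and numbers below are our own illustrative assumptions:

```python
import random

def predict_max_white(white_seen, black_seen, future_draws,
                      quantile=0.95, samples=4000, seed=1):
    """Bayesian sketch of the urn example.

    With a uniform prior on the unknown share of white balls, the
    posterior after observing `white_seen` white and `black_seen`
    black draws is Beta(white_seen + 1, black_seen + 1). Monte Carlo
    sampling from the posterior predictive then states, with e.g. 95%
    second-order probability, the maximum number of white balls to
    expect in `future_draws` future draws.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(samples):
        p = rng.betavariate(white_seen + 1, black_seen + 1)  # posterior draw
        counts.append(sum(rng.random() < p for _ in range(future_draws)))
    counts.sort()
    return counts[int(quantile * samples)]

# After seeing 4 white and 16 black balls, bound the number of
# white balls among the next 100 draws at the 95% level.
bound = predict_max_white(4, 16, 100)
```

The bound is stated without ever knowing the precise expected value for drawing a white ball, exactly as in the text.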

When considering global risks, the certainty of assessment is crucial. For even if the statistical mean for global damage is relatively low, the error bar can be large, i.e. there can still be great uncertainty as to whether the probability of global damage is not considerably larger or smaller than the mean value suggests. Two events with the same mean value in the probability-magnitude function must therefore be viewed very differently depending upon the certainty of assessment. If it is high (close to 1), then limit values and technical standards will usually suffice to place the risk in the normal area. If, however, it is low (close to 0), then precautionary measures need to be taken in order to be reasonably prepared for the event that the upper margin of the error bar proves to have been realistic.

It can be assumed that the certainty of assessment is relatively high if large quantities of data with low levels of variance are available, if there have been long observation periods with short intervals between causes and effects and with a high constancy, and if possible intervening variables are robust. In these cases the Council speaks of low uncertainty, although singular events can still not be predicted.

Gaps in knowledge concerning the probability and magnitude of damage associated with uncertain events are a result of either information deficits (which can essentially be remedied), a lack of experiential knowledge (due to singular events or extremely long cycles), difficulties in understanding the systematic causal chain (because of an impenetrable maze of intervening variables) or inadequate significance of the damage against the background noise of chance events. For indeterminate risks only the probability of occurrence or the extent of damage is known, but for ignorance both components are unknown. Such risks need to be tackled by means of anticipatory strategies of risk avoidance and social system strengthening (Collingridge, 1996). These two types of risk are discussed in detail in Section G. A low certainty of assessment is indicative of an inadequate data base or of events having a large component of chance.

It is expedient to distinguish between indeterminacy (probability or extent unknown) and ignorance (both components unknown). For instance, insurance companies can cope quite well with risks that have a low certainty of assessment on the probability side, as long as the certainty of assessment is high on the damage magnitude side (Kleindorfer and Kunreuther, 1987). If, however, the magnitude is also highly uncertain, it is almost impossible for insurance companies to assess a loss-covering premium. In such cases, private or public liability funds may step in (Section F 3).

The Council therefore notes that the choice of risk management tools depends not only upon the probability and magnitude of damage, but also upon the certainty of assessment of each of these components.

[Figure: Dose-response function with error corridors. Two dose-response curves are plotted as probability against dose (low to high): curve a, without a threshold value, and curve b, with a threshold value; each curve is shown with its error corridor, and the threshold value of curve b is marked on the dose axis. Source: WBGU]

54 C Risk: Concepts and implications

C 3.2

Further differentiation of evaluation criteria

In addition to the two classic components of risk – probability and magnitude – further evaluation elements should be included in risk characterization (Kates and Kasperson, 1983; California Environmental Protection Agency, 1994; Haller, 1990). These evaluation elements can be derived from risk perception research. They have already been proposed as criteria for risk evaluation procedures in a number of countries (such as Denmark, the Netherlands and Switzerland). The following are particularly important:

• Ubiquity. Spatial distribution of damage or of damage potential (intragenerational equity)

• Persistency. Temporal scope of damage or damage potential (intergenerational equity)

• Irreversibility. Non-restorability of the state that prevailed prior to occurrence of damage. In the environmental context, this is primarily a matter of the restorability of processes of dynamic change (such as reforestation or water treatment), not of the individual restoration of an original state (such as preserving an individual tree or extirpating non-native plant and animal species).

• Delay effect. The possibility that there is large latency between the cause and its consequential damage. Latency can be of physical (low reaction speed), chemical or biological nature (such as in many forms of cancer or mutagenic changes). It can also result from a long chain of variables (such as cessation of the Gulf Stream due to climatic changes).

• Mobilization potential (refusal of acceptance). The violation of individual, social or cultural interests and values that leads to a corresponding reaction on the part of those affected. Such reactions can include open protest, the withdrawal of trust in decision makers, covert acts of sabotage or other forms of resistance. Psychosomatic consequences can also be included in this category.
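For working with the full set of criteria programmatically, a risk can be sketched as a simple record holding one value per criterion. The field names, the 0-to-1 normalization and the example values are our own illustrative assumptions, not the report's:

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    """One risk, characterized by the evaluation criteria of this
    section. Field names and 0-1 scales are illustrative."""
    probability: float      # probability of occurrence P, 0..1
    magnitude: float        # extent of damage E, >= 0
    certainty_p: float      # certainty of assessment of P, 0..1
    certainty_e: float      # certainty of assessment of E, 0..1
    ubiquity: float         # local (0) to global (1)
    persistency: float      # short (0) to very long (1) removal period
    irreversibility: float  # reversible (0) to irreversible (1)
    delay: float            # short (0) to very long (1) time lag
    mobilization: float     # no (0) to high (1) political relevance

# Hypothetical example for illustration only:
chemical_spill = RiskProfile(
    probability=0.01, magnitude=1e3, certainty_p=0.8, certainty_e=0.9,
    ubiquity=0.1, persistency=0.3, irreversibility=0.2,
    delay=0.1, mobilization=0.4,
)
```

Such a record mirrors one row of the characterization tables used later in the report.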

The criteria that have been identified by perceptual research are fully or sufficiently covered by the criteria chosen here. A review of the relevant studies of risk perception shows that most people connect to risks questions of (individual and institutional) controllability, voluntariness, habituation to the source of risk and an equitable risk-benefit distribution (Jungermann and Slovic, 1993b). The evaluation of controllability is covered in its physical aspect by the criteria of ubiquity and persistency, and in its social aspect by the criterion of mobilization. From a collective perspective, the criterion of voluntariness can scarcely be used as an evaluation criterion for societal risks, because the risks which interest us here are those which affect many at the same time and have asymmetrical distribution patterns. The protest potential associated with imposed risks is contained in the criterion of mobilization. Habituation to a source of risk is not in itself a normatively purposeful evaluation criterion, as it is possible to become accustomed to large and possibly unacceptable risks (e.g. road accidents).

The desire to evaluate accustomed risks more positively than novel ones is, however, an expression of the justified concern that the degree of uncertainty of a risk cannot yet be estimated with sufficient accuracy and that one should therefore proceed with caution. This aspect is covered in our catalog of criteria by ‘certainty of assessment’.

Criteria relating to distributional equity are harder to address, as there is a lack of intersubjectively valid measures of equity and inequity. The question of whether the usufructuaries of an activity and the people who are affected by a risk are identical is unproblematic to answer.

If they are identical, an individual regulation of risk appears expedient, as already set out above. If not, then collective regulation mechanisms need to be employed. These can be commitments under liability law (and thus renewed individualization), rights of risk-bearers to participate in decisions, or licensing regulations. However, to what extent asymmetries are felt to be inequitable, and whether monetary or non-material compensation is viewed to be adequate, depends upon the values prevailing in the cultural system concerned.

Usually it is necessary to examine effects on a case-by-case basis in order to substantiate intersubjectively a violation of the equity postulate. Ubiquity and persistency provide an indication of the possibility of an inequitable distribution of burdens. A risk with global effects generally affects intragenerational equity, while a persistent damage potential affects future generations. Where extreme values are found for these two indicators, there are grounds to suspect an inequitable distribution. But only the analysis of the specific case can definitively reveal whether certain equity postulates are met or violated.

The analytical and philosophical literature on risks also contains proposals for multi-dimensional evaluation (Hohenemser et al., 1983; Akademie der Wissenschaften zu Berlin, 1992; Shrader-Frechette, 1985; Gethmann, 1993; Femers and Jungermann, 1991). These proposals suggest partially similar and partially slightly divergent evaluation criteria. Multi-dimensional evaluation procedures have until now been included explicitly in the national legislation of Denmark and the Netherlands. In other countries, above all the USA, advisory bodies conduct such multi-criteria evaluations as a part of the standard-setting process (Hattis and Minkowitz, 1997; Beroggi et al., 1997; Petringa, 1997; Löfstedt, 1997). The Council recommends such an approach for Germany, too, particularly where global risks are concerned.

The criteria recommended by the Council are summarized in Table C 3.2-1. This table serves in the further course of this report as a basis for characterizing the various individual risks and for formulating risk priorities. The criteria are further used to construct classes of risk (Section C 4).

C 3.3

Risk evaluation in the context of the Council’s guard rail concept

What role do these criteria play in risk evaluation? In its previous reports the Council has developed a ‘guard rail concept’ (WBGU, 1996). This concept derives from the idea that certain prospects of damage entail such far-reaching losses of substance that they cannot be justified by the associated gains. When certain levels of damage are overstepped, then so many or such severe negative consequential effects are to be expected that even large former gains cannot compensate for these effects. The Council has taken this phenomenon into account by defining ecological and social ‘guard rails’. Certain ecological functions must not be endangered and certain economic and social attainments must not be jeopardized in order to achieve short-term economic gain or to enforce certain environmental protection measures.

To evaluate risks, this guard rail concept needs to be extended. As damage can only occur with a certain probability, an unambiguous guard rail can no longer be defined. Apart from cases in which major damage can occur with sufficiently large probability, it is hardly possible to define a clear-cut guard rail that might permit a definite ban or abstention, thus relieving us of the necessity to balance costs and benefits. Instead, the Council proposes a ‘guard rail corridor’. This serves to signify that particular care is required in controlling and regulating a particular risk.

The concept indicates the necessity of institutional regulations in order to arrive at an adequate evaluation and regulation. Risks that fall in the guard rail corridor are located in the transitional area set out above, or may be in the prohibited area.

The eight evaluation criteria can now be used to differentiate more clearly between the normal and transitional areas, and to assign risks in an understandable manner to the one or the other area. Risks reach the transitional area, i.e. the guard rail corridor, if individual criteria of risk characterization have extreme values. If several extreme values are found for one and the same source of risk, then this risk will generally be in the prohibited area.

For instance, the probability of occurrence can approach 1, or the extent of damage can tend towards infinity. A guard rail corridor is also entered if the certainty of assessment is infinitely small, or if the consequences are irreversible, non-compensable and simultaneously highly persistent and ubiquitous, even if little is yet known about the magnitude of possible damage. The next section constructs prototypical classes of risks that reach one or several extreme values.
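The assignment rule described above can be sketched as a small classifier: count how many criteria take extreme values, and map zero, one, or several extremes to the normal, transitional and prohibited areas. The 0-to-1 normalization, the threshold of 0.9 and the example values are illustrative assumptions, not the Council's:

```python
def classify_risk(criteria, extreme_threshold=0.9):
    """Assign a risk to the normal, transitional (guard rail corridor)
    or prohibited area by counting extreme criterion values.

    `criteria` maps criterion name -> value normalized to 0..1, where
    1 is the extreme end (probability near 1, globally ubiquitous,
    irreversible, ...). Certainty of assessment is entered inverted,
    so that low certainty counts as extreme.
    Thresholds are illustrative, not taken from the report."""
    extremes = sum(v >= extreme_threshold for v in criteria.values())
    if extremes == 0:
        return "normal area"
    if extremes == 1:
        return "transitional area (guard rail corridor)"
    return "prohibited area"

# One extreme value (irreversibility) puts the risk in the corridor:
verdict = classify_risk({
    "probability": 0.2, "magnitude": 0.5,
    "uncertainty_p": 0.3, "uncertainty_e": 0.4,
    "ubiquity": 0.6, "persistency": 0.7,
    "irreversibility": 0.95, "delay": 0.3, "mobilization": 0.5,
})
```

A second extreme value in the same profile would move the verdict to the prohibited area, matching the rule stated in the text.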

Table C 3.2-1: Ranges of the criteria.
• Probability of occurrence P: 0 to approaching 1
• Certainty of assessment of P: low to high certainty of assessment of the probability of occurrence
• Extent of damage E: 0 to approaching infinity
• Certainty of assessment of E: low to high certainty of assessment of the extent of damage
• Ubiquity: local to global
• Persistency: short to very long removal period
• Irreversibility: damage not reversible to damage reversible
• Delay effect: short to very long time lag between triggering event and damage
• Mobilization potential: no political relevance to high political relevance
Source: WBGU
