

In the document MAKING SENSE OF SCIENCE (pages 31-36)


2.3.2 Scientific uncertainty

English dictionary definitions of ‘uncertainty’ are generally framed rather broadly, e.g. ‘the state of being uncertain’ where ‘uncertain’ is defined as ‘not able to be relied on; not known or definite’ (“Oxford Dictionaries,” 2018). Many technical definitions are also broadly framed; e.g. the US National Research Council’s Committee on Improving Risk Analysis Approaches defined uncertainty as ‘lack or incompleteness of information’ (National Research Council, 2009).

In a much earlier definition, the economist Frank Knight restricted the term 'uncertainty' to unquantifiable uncertainty, distinguishing it from 'risk', which he used for quantifiable uncertainty, where the distribution of outcomes is known from a priori calculations or statistical data (Knight, 1921). This 'Knightian' view of uncertainty is common in economics (e.g. N. Stern, 2008) and in some perspectives on approaches to scientific advice (e.g. Stirling, 2010). However, as Cooke (2015) points out, Knight recognised that 'we can also employ the terms ‘objective’ and ‘subjective’ probability to designate the risk and uncertainty respectively' (Knight, 1921, part III, chapter VIII).

Knight also recognised that, in practice, people often use probability to express the types of uncertainty he would regard as unquantifiable, and he went so far as to state that forming good judgements of this type is the principal skill that makes a person 'serviceable' in business (Knight, 1921, part III, chapter VII). Similarly, it may be assumed that advice is requested from scientists because they are thought capable of providing useful judgements in their fields of expertise.

The work of Ramsey (1926), de Finetti (1937), Savage (1954) and others has since established an operational and theoretical framework for subjective probability, which justifies its use in probability calculations on the same basis as 'objective' probabilities derived from frequency data. Within that framework, all forms of uncertainty may be quantified using subjective probability, even for complex problems, provided that the question under assessment is well-defined (i.e. refers to the occurrence or non-occurrence of an unambiguously specified event). Both the Knightian and Bayesian perspectives are, for example, taken into account in the European Food Safety Authority's guidance on uncertainty analysis in scientific assessment, which defines uncertainty as 'a general term referring to all types of limitations in available knowledge that affect the range and probability of possible answers to an assessment question', but also emphasises the need to identify situations where probabilities cannot be given (European Food Safety Authority, 2018b, 2018c).
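The coherence between subjective and frequency-based probability in this framework can be illustrated with a small Bayesian update. The following Python sketch is a hypothetical illustration, not a method from the text: a subjective prior belief about a well-defined event, encoded as a Beta distribution, is revised by frequency data, so that both kinds of probability enter the same calculation. All numbers are invented.

```python
# Hypothetical sketch: revising a subjective prior probability with
# frequency data via Bayes' rule, using a conjugate Beta prior over the
# chance of a well-defined event. Numbers are illustrative only.

def beta_posterior_mean(prior_a: float, prior_b: float,
                        successes: int, trials: int) -> float:
    """Posterior mean of a Beta(prior_a, prior_b) belief after observing
    `successes` occurrences of the event in `trials` independent trials."""
    post_a = prior_a + successes
    post_b = prior_b + (trials - successes)
    return post_a / (post_a + post_b)

# An expert's subjective prior: the event is judged to occur ~30% of the
# time, held with the weight of ten notional observations -> Beta(3, 7).
prior_mean = 3 / (3 + 7)

# New frequency data arrive: the event occurred 12 times in 20 trials.
posterior_mean = beta_posterior_mean(3, 7, successes=12, trials=20)

print(f"prior mean:     {prior_mean:.3f}")      # 0.300
print(f"posterior mean: {posterior_mean:.3f}")  # (3 + 12) / (10 + 20) = 0.500
```

The Beta prior is chosen here purely for convenience: it is conjugate to the binomial likelihood, so the update reduces to adding counts, which keeps the sketch self-contained.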

Scientific uncertainty relates to the limitedness or even absence of scientific knowledge (data, information) that makes it difficult to assess exactly the probability or likelihood and the range and intensity of possible outcomes (cf. Filar & Haurie, 2010). Uncertainty most often results from an incomplete or inadequate reduction of complexity in modelling cause-effect chains (cf. Marti, Ermoliev, & Makowski, 2010). Whether the world is inherently uncertain is a philosophical question that is not pursued here (Aven & Renn, 2009). It is essential to acknowledge (cf. Chapter 3) that human knowledge is always incomplete and selective, and thus contingent upon uncertain assumptions, assertions and predictions (Funtowicz & Ravetz, 1992; Laudan, 1996; Renn, 2008, p. 75). It is obvious that the modelled probability distributions within a numerical relational system can only represent an approximation of the empirical relational system that helps elucidate and predict uncertain events. It therefore seems prudent to include additional aspects of uncertainty when making claims of causal relationships (van Asselt, 2000, pp. 93-138; 2005). Although there is no consensus in the literature on the best means of disaggregating uncertainties, the following categories appear to be an appropriate means of distinguishing between the key components of uncertainty (Renn, Klinke, & Van Asselt, 2011):

• Variability refers to the differing vulnerability of targets, i.e. the divergence of individual responses to identical stimuli within a relevant population (humans, animals, plants, landscapes, etc.);

• Inferential effects relate to systematic and random errors in modelling, including problems of extrapolating or deducing inferences from small statistical samples, using analogies to frame research questions and data acquisition, and using scientific conventions to determine what is regarded as sufficient proof (such as the 95% interval of a normal distribution). Some of these uncertainties can be expressed through statistical confidence intervals, while others rely on expert judgements;

• Indeterminacy results from a genuinely stochastic relationship between causes and effects, apparently non-causal or non-cyclical random events, or poorly understood non-linear, chaotic relationships;

• System boundaries allude to uncertainties stemming from restricted models and the need to focus on a limited number of variables and parameters;

• Ignorance means the lack of knowledge about the probability of occurrence of a damaging event and about its possible consequences.
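The conventional 95% interval mentioned under inferential effects can be sketched concretely. In the following Python example the sample values are invented, and the normal approximation (z = 1.96) is used rather than a t-interval, purely to keep the illustration minimal; neither choice comes from the source text.

```python
# Sketch of the 'inferential effects' component: a conventional 95%
# confidence interval for a mean, computed from a small hypothetical
# sample with the normal approximation. Values are illustrative only.
import statistics
from math import sqrt

sample = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1]  # invented measurements

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / sqrt(len(sample))  # standard error of the mean

# 1.96 is the z-value enclosing 95% of a normal distribution; for n = 8
# a t-value (~2.36) would widen the interval, showing how small samples
# inflate inferential uncertainty.
z = 1.96
ci = (mean - z * sem, mean + z * sem)
print(f"mean = {mean:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The point of the sketch is that the 95% figure is a convention, not a property of the data: widening or narrowing it changes what counts as 'sufficient proof'.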

The first two components of uncertainty qualify as statistically quantifiable uncertainty and, therefore, can be reduced by improving existing knowledge, applying standard statistical instruments such as Monte Carlo simulation and estimating random errors within an empirically proven distribution. The last three components represent genuine uncertainty and can be characterised, to some extent, by using scientific approaches, but cannot be completely resolved. The validity of the end results is questionable and, for making prudent policy decisions, additional information is needed, such as subjective probabilities and/or confidence levels for scientific conclusions or estimates, potential alternative pathways of cause-effect relationships, ranges of reasonable estimates, maximum loss scenarios and others (e.g. Stirling, 2008). Furthermore, beyond the five components, representing absence or lack of knowledge as well as natural variability and indeterminacy (often under the heading of epistemic and aleatory uncertainties), other policy-related uncertainty types need to be considered (International Risk Governance Council, 2015). These include:

• Expert subjectivity, which may be due to philosophical or professional orientation, and conflict of interest (or even fraud) (see also Section 4.3);

• Communication uncertainty, which may be associated with ambiguity (lack of clarity about the intended meaning of a term or a concept), context dependence (failure to specify the context), under-specificity (overly-general statements), and vagueness;

• Under-determination of theory by data, where evidence available to scientists is not sufficient for forming a coherent theory or choosing between alternative theories supported by the data.
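The Monte Carlo simulation mentioned above as a standard instrument for the statistically quantifiable components can be sketched in a few lines of Python. The model and both input distributions (a contaminant concentration and a daily intake) are invented for illustration and do not come from the source text.

```python
# Sketch of Monte Carlo uncertainty propagation: variability in two
# hypothetical model inputs is pushed through a simple exposure model,
# dose = concentration * intake. Distributions are illustrative only.
import random

random.seed(42)  # fixed seed for reproducible draws

N = 100_000
doses = []
for _ in range(N):
    concentration = random.lognormvariate(mu=0.0, sigma=0.5)  # mg/L
    intake = random.gauss(mu=2.0, sigma=0.3)                  # L/day
    doses.append(concentration * intake)                      # mg/day

# Summarise the resulting output distribution by its quantiles.
doses.sort()
median = doses[N // 2]
p95 = doses[int(N * 0.95)]
print(f"median dose = {median:.2f} mg/day, 95th percentile = {p95:.2f} mg/day")
```

Because both inputs have empirically estimable distributions, the spread of the output can in principle be narrowed by better data, which is exactly what distinguishes this quantifiable uncertainty from the genuine uncertainty of the last three components.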

Examples of high uncertainty include:

• Many natural disasters, such as earthquakes;

• Environmental impacts, such as the cumulative effects of various environmental hazards below the threshold of statistical significance, or gradual degradation of eco-services due to the loss of biological diversity;

• Socio-political consequences, such as the persistence of political stability;

• Economic reactions to policy changes, for example, on the stock market;

• Regional impacts due to global climate change;

• The occurrence of pandemics (such as SARS or avian flu), caused by viruses characterised by a rapid rate of mutation.

There are other ways to classify and categorise uncertainty. Funtowicz and Ravetz (1990) distinguished between technical (inexactness), methodological (unreliability) and epistemological (ignorance) dimensions of uncertainty. Building on that typology, van der Sluijs (2017) proposed adding a societal dimension (limited social robustness). Table 1 below gives examples of the various sources of uncertainty for each dimension.

| Dimension | Type | Can stem from or can be produced by |
|---|---|---|
| Technical | Inexactness | Intrinsic uncertainty: variability; stochasticity; heterogeneity. Technical limitations: error bars, ranges, variance; resolution error (spatial, temporal); aggregation error; linguistic imprecision, unclear definitions |
| Methodological | Unreliability | Limited internal strength of the knowledge base in: use of proxies; empirical basis; theoretical understanding; methodological rigour (including management of anomalies); validation |
| Epistemological | Ignorance | Limited theoretical understanding. System indeterminacy: open-endedness of system under study; chaotic behaviour. Intrinsic unknowability with active ignorance: model fixes for reasons understood; limited domain of validity of assumptions; limited domains of applicability of functional relations; numerical error; surprises type A (some awareness of possibility exists). Intrinsic unknowability with passive ignorance: bugs (software error, hardware error, typos); model fixes for reasons not understood; surprises type B (no awareness of possibility) |
| Societal | Limited social robustness | Limited external strength of the knowledge base in: completeness of set of relevant aspects; exploration of rival problem framings; management of dissent; extended peer acceptance/stakeholder involvement; transparency; accessibility. Bias/value ladenness: value-laden assumptions; motivational bias (interests, incentives); disciplinary bias; cultural bias; choice of (modelling) approach (e.g. bottom-up, top-down); subjective judgement |

Table 1. Dimensions of uncertainty (van der Sluijs, 2017).

According to Maxim and van der Sluijs (2011), uncertainty sources affecting knowledge production processes can be further distinguished according to location, type and position within the knowledge generation cycle:

• ‘Content uncertainty’ is related to data selection and curation, model construction and quality assurance, and statistical procedures. It also includes conceptual uncertainty, understood as ignorance about qualitative relationships between phenomena;

• ‘Context uncertainty’ relates to the socio-economic and political factors influencing the knowledge production process. Context means identifying the boundaries of the real world to be modelled at the moment that the problem is framed;

• ‘Procedural uncertainty’ relates to the procedural quality of the process of knowledge construction. Under this domain fall considerations of completeness, credibility, transparency, saliency, legitimacy, and fairness.

The relative impact of content, contextual and procedural uncertainty is highly dependent on the location in the knowledge cycle: ‘problem framing’, ‘knowledge production’, and ‘knowledge communication and use’ all have their distinctive characteristics. The concept of quality as applied to scientific evidence is not the same as quality in the deployment of the evidence for policy. For example, evidence of excellent scientific quality may simply escape notice, be ignored, or be miscommunicated accidentally or instrumentally, resulting in a poor policy decision, while, at the opposite extreme, modest scientific quality may prove sufficient for reaching a desirable policy compromise (Wynne & Dressel, 2001). The interplay between content, procedural, and contextual uncertainties may produce unforeseen effects (Maxim & van der Sluijs, 2011). For example, in a regulatory context, regulators may impose risk assessment methods that are inadequate for the nature of the risk, or the experts involved may lack the relevant competence or sufficient time to critically review the knowledge.
