

3.7 CHALLENGES: DISSENT, IGNORANCE AND UNCERTAINTY

Scientific advice can play different roles in different policy and decision contexts.

Evidence and advice can crucially influence the decision-making process and compel action in a context where there is broad agreement on underlying values and problem definitions, and the level of uncertainty is low (Pielke, 2007). In many cases, however, decisions on important issues must be made under conditions where 'facts are uncertain, values in dispute, stakes high and decisions urgent' (Funtowicz & Ravetz, 1993).

In general, knowledge claims by scientists undergo peer review, for instance through ad-hoc review panels or by letting opinions circulate within the scientific community and, where time allows, waiting for reactions. However, the recognised competence of a panel of advisers is not in itself a guarantee of reliable advice. For questions that are not dramatically urgent, the broader opinion of the various scientific communities involved in tailoring the answer can be a useful filter; for instance, through the publication of a discussion document or the organisation of ad-hoc meetings.

In the simple ‘linear’ or ‘deficit’ model, it is assumed that when all available facts and information are communicated to the public and policymakers, an agreement on scientific evidence leads to policy consensus and determines the decision to be taken (Hilgartner, 2000; Alan Irwin & Wynne, 1996; Jasanoff, 1991). In reality, different explanations can account for the available evidence, and there is no unique path to a specific decision. This is especially true when participants in a decision-making process do not share common objectives, or there are conflicting commitments based on different values (Pielke, 2007). In such cases, decision alternatives cannot be contrasted solely on the basis of scientific evidence. Rather, value conflicts and competing problem framings have to be resolved or taken into account in reaching a decision (Grove-White, Macnaghten, Mayer, & Wynne, 1997).

The articulation of values and alternative perspectives guides the selection of evidence and helps identify decision alternatives. Resolving, or at least clarifying, value conflicts improves communication and interaction between stakeholders.

Practical initiatives in this direction, which aim both to open up ethical matters and take account of different forms of evidence and understanding, have already been taken and deserve greater discussion (e.g. Lindner et al., 2016; Rip, Misa, & Schot, 1995; Stilgoe, Owen, & Macnaghten, 2013).

The process looks unproblematic when there is general consensus in the scientific community, the question for consultation is restricted and the range of estimated uncertainties is small. It becomes harder whenever disagreement among scientists grows strong. The question then is whether scientists should limit their role to faithfully communicating the uncertain state of the art and let politicians handle the technical matter (on which they conceivably have little or no competence), or whether scientists should take responsibility for distilling the pros and cons of each of the competing positions.

The matter is even harder when the question posed to science is either too vague or requires answers that go beyond the scientific domain in which the question is asked (trans-scientific), as tailoring an answer may require competences which do not fully overlap. Addressing this delicate case requires one of the following options:

1. When the question posed is vague, to consult with those requesting the advice and explore whether the question can be reframed in a more defined and scientifically-specific form.

2. For either vague or trans-scientific questions, to formulate one or more well-defined interpretations of the question for which evidence is available and provide advice on each of them: an open assembly of opinions which stresses the composite nature of the final advice.

3. Whichever option is chosen, the advice given to policymakers should clearly distinguish which answers can be derived from the evidence and which represent the prudent judgements of the advisers.

In summary, science has the responsibility to provide frank and evidence-informed answers to policy questions, stressing:

1. Whether these questions are well-posed or not;

2. Whether there is only one correct answer to the question or many potentially correct answers (as science is not exact);

3. How the level of scientific reliability is characterised;

4. What the pros and cons are of each of the possible solutions;

5. The robustness of the evidence presented;

6. What assumptions and values have been included in the analysis and how sensitive the results are when assumptions and default values are changed.

Science-related public controversies can have different origins but they arise especially when evidence is conflicting or ambiguous, decision stakes are high, and uncertainty is large (Epstein, 1996; Nelkin, 1992). However, in many cases, public disagreement on specific issues involves conflicting values, understandings, beliefs and interests, rather than scientific information alone. Conflicts may also be driven by deliberate or non-deliberate manipulation of the scientific data.

The values, beliefs and interests that affect how evidence is perceived and interpreted can be political, economic, religious or cultural, or a combination thereof. In addition to advice-givers and decision-makers, many stakeholders and laypeople participate in discussions on contentious issues, thus at least potentially contributing to the complexity of perception of scientific evidence. In particular, 'the voices of organised interests and influential individuals are amplified in public discourse' (National Academies of Sciences Engineering and Medicine, 2017).

Emerging issues in the science advisory process include the growing interest of civil society in scientific advice (Organisation for Economic Co-operation and Development, 2015). Specifically, citizen science has emerged as a significant phenomenon, often crossing the established boundaries between science and democratic engagement (Hecker, Haklay, Bowser, Makuch, & Vogel, 2018). Whilst citizen science can be defined in different ways (Bonney, Cooper, & Ballard, 2016), in one form it involves the ‘engagement of non-scientists in true decision-making about policy issues that have technical or scientific components’ (Lewenstein, 2004).

Modern science has become more socially embedded, and both the public and policymakers focus on the accountability of scientific research and its applications (Jasanoff, 2012). This is especially true in the case of 'issue-driven' science, characterised by highly uncertain and/or contested evidence and high decision stakes (Funtowicz & Ravetz, 1993). The involvement and participation of people outside established scientific institutions are valuable in gathering and evaluating evidence, assessing costs and benefits, and estimating risks. As part of policy implementation, public engagement could be particularly important for trans-disciplinary applications, in which evidence and uncertainties are integrated from different scientific fields (Alan Irwin, 1995). Geographical distribution, different policy application areas, the level of engagement and the type of activity all influence the involvement of citizen science in policymaking (Haklay, 2015).

Citizen science can provide high-quality evidence relevant for decision-making, but these facts and information are, in most cases, obtained and produced by methods that differ from those employed by scientists and established research organisations (Hecker et al., 2018). To ensure sustained influence on the decision-making process it is, therefore, important to develop durable and methodologically reliable procedures for verifying the quality of evidence obtained by citizen science and to provide incentives for collaboration between scientists and people from outside scientific institutions (Aisling Irwin, 2018). However, this must not prevent citizen science from introducing 'novel viewpoints, radical critiques, or considerations lying outside the taken-for-granted framing of the problem' (Jasanoff, 2012).

The implication from much of the growing literature on citizen science is that it can play a role within the scientific advisory process, not least by expanding the range of evidential sources and providing a partial re-framing of policy-related questions (Aisling Irwin, 2018). However, there are also many critical voices about the reliability and function of citizen science (Guerrini, Majumder, Lewellyn, & McGuire, 2018). Concerns are raised about the integrity of science (retaining the methodological rigour that is needed for conducting science), the issue of intellectual property (who owns the knowledge generated by citizens) and the image of science that is being conveyed to citizens (citizens collect data, professional scientists make the interpretations).

