

6.3 Review of Relevant Concepts and Methods

6.3.4 Multi-Criteria Analysis

amenity so that responses can be evaluated in a manner similar to behavior observed in markets. The basic architecture of a contingent valuation survey is: (a) a description of the service/amenity to be valued and the conditions under which the policy change is being suggested, (b) a set of choice questions that ask the respondent to place a value on the service/amenity, and (c) a set of questions assessing the socioeconomic characteristics of the respondent that will help with determining what factors may shift that value (Mendelsohn and Olmstead 2009).

In early surveys, researchers simply asked people open-ended questions, such as how much they were willing to pay for each amenity. However, such open-ended questions are limited in their ability to provide accurate results. Closed-ended, discrete choice questions are questions in which respondents give a “yes” or “no” response when offered one or more specified prices for a good. A possible problem with stated preference surveys is that the responses to willingness-to-accept questions have generally been many times greater than the responses to willingness-to-pay questions. This is especially true for non-use values. The factors that cause these large differences are still an active topic of research. Mendelsohn and Olmstead (2009) suggest that these differences are measurement problems, whereas Flachaire et al. (2013) find that they can be due to so-called “protest behavior”, for example, when many respondents refuse to pay at all.

to incorporate the value of the flexibility of adaptation options, the social costs induced by the distributional effects of adaptation and the aesthetic impact of adaptation strategies. MCA is a commonly used analytical tool and can help with integrating all of these aspects in a single decision-making framework in a meaningful way (Lawrence et al. 2019). MCA has the advantage of offering decision-makers a direct way of incorporating qualitative and quantitative information into their decision processes (Preston et al. 2013). However, there exists a broad variety of approaches to MCA with different degrees of complexity. A useful starting point for a detailed general introduction and review of MCA methods in natural resource management and climate planning can be found in multiple sources (de Bruin et al. 2009; Ellen et al. 2016; Greco et al. 2016; Mendoza and Martins 2006). Moreover, the UK government provides a widely used manual for MCA techniques (Department for Communities and Local Government 2009).

6.3.4.1 General Framework and Applications of MCA

Many MCA methods have been proposed in the literature, several of which may be quite complex and can be considered as a “black box” by decision-makers. Various MCA methods have been reviewed by Govindan and Jepsen (2016). In conducting an MCA to evaluate dairy effluent management options in Australia, Hajkowicz and Wheeler (2008) explicitly avoided “black box” methods and instead used the weighted summation with linear transformation MCA method. This method is also called the linear utility MCA method as proposed by Prato (2003).

Hajkowicz and Wheeler (2008) also carried out the analysis with another method known as PROMETHEE II, which is an outranking method, to check the robustness of the results. Additionally, an overview and discussion of various other multi-criteria decision-making methods for the case of flood risk management can be found in de Brito and Evers (2016).

Prato (2003) has used the weighted summation approach to rank five water management alternatives for the Missouri River system. However, instead of using standardized scores (as in the case of Hajkowicz and Wheeler (2008)), Prato (2003) used relative scores, where an alternative (usually the current management scheme) is selected as a base alternative, and the performance of other alternatives is evaluated relative to the base alternative. Relative scores provide a sense of how different alternatives perform compared with a system that may be familiar to the decision-maker (e.g., a system that is currently implemented), and thus relative scores may be more intuitive than standardized scores. Using relative scores, alternatives that have an overall performance score of 0 are considered to be as desirable as the base alternative, whereas those with positive scores are more desirable. An advantage of relative scores is that the ranking of alternatives does not change when additional alternatives are considered.
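The weighted summation with relative scores can be sketched as follows; the criteria, weights, and performance scores below are illustrative assumptions, not data from Prato (2003):

```python
# Sketch of weighted summation MCA with relative scores, in the spirit of
# Prato (2003). Criteria names, weights, and raw scores are hypothetical.

criteria = ["flood_protection", "habitat", "cost"]
weights = {"flood_protection": 0.5, "habitat": 0.3, "cost": 0.2}

# Performance of each alternative on each criterion (illustrative data).
raw_scores = {
    "current_management": {"flood_protection": 60, "habitat": 40, "cost": 50},
    "alternative_A":      {"flood_protection": 75, "habitat": 35, "cost": 55},
    "alternative_B":      {"flood_protection": 55, "habitat": 60, "cost": 45},
}

base = "current_management"  # the base alternative receives a score of 0

def overall_score(alt):
    """Weighted sum of each criterion's score relative to the base alternative."""
    return sum(
        weights[c] * (raw_scores[alt][c] - raw_scores[base][c])
        for c in criteria
    )

# Rank alternatives; positive scores are more desirable than the base.
ranking = sorted(raw_scores, key=overall_score, reverse=True)
for alt in ranking:
    print(alt, round(overall_score(alt), 2))
```

Because each alternative is scored only against the fixed base, adding a new alternative to `raw_scores` changes no existing score, which is the ranking-stability property noted above.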

Prato (2003) also outlines a non-linear utility function using the square root functional form to model diminishing marginal utility. However, because the relative scores can be negative, in which case the square root utility does not exist, the author did not use the non-linear utility function in his empirical evaluation.
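In symbols (the notation here is assumed for illustration rather than taken from Prato (2003): s_ij is the relative score of alternative j on criterion i, and w_i the criterion weight), the square-root utility would read:

```latex
U_j = \sum_i w_i \sqrt{s_{ij}}
```

which is undefined over the reals whenever some s_ij is negative, that is, whenever alternative j underperforms the base alternative on criterion i.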

MCA has been successfully applied in various contexts of assessing the vulnerability of coastal infrastructure and the project evaluation of coastal adaptation. Preston et al. (2011) provide a comprehensive overview of the applications and challenges of MCA methods for coastal adaptation options. Various case studies are carried out in conjunction with local governments in three regions in Australia to prioritize coastal adaptation and development options. A. Johnston et al. (2014) use a simple MCA method for ranking potential consequences of infrastructure loss through flooding in Scarborough, Maine (USA). The authors build a Flood Consequence Score using a four-tier scoring approach based on economic impacts, social impacts, health and safety impacts, and environmental impacts. Rizzi et al. (2016) developed a regional risk assessment for the Tunisian coastal zone of the Gulf of Gabes. This approach is based on MCA and on geographical information to prioritize adaptation strategies. Local experts are asked to assign relative scores based on the four susceptibility factors identified in the vulnerability matrix: vegetation cover, coastal slope (°), wetland extension (in km²), and percentage of urbanization.
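A four-tier scoring approach of the kind used for the Flood Consequence Score can be sketched as below; the category names follow the text, but the tier scale and the additive aggregation are illustrative assumptions, not the scheme of A. Johnston et al. (2014):

```python
# Hypothetical sketch of a four-tier consequence score. The four impact
# categories follow the text; tier bounds and aggregation are assumptions.

CATEGORIES = ["economic", "social", "health_safety", "environmental"]

def flood_consequence_score(tiers):
    """Sum per-category tiers (1 = minor ... 4 = severe) into a single score."""
    for cat in CATEGORIES:
        if not 1 <= tiers[cat] <= 4:
            raise ValueError(f"tier for {cat} must be between 1 and 4")
    return sum(tiers[cat] for cat in CATEGORIES)

# Example asset: severe economic impact, moderate impacts elsewhere.
score = flood_consequence_score(
    {"economic": 4, "social": 2, "health_safety": 2, "environmental": 3}
)
print(score)  # the score ranges from 4 (all minor) to 16 (all severe)
```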

Lawrence et al. (2019) use an MCA combined with dynamic adaptive pathway planning using real options analysis (ROA) to develop a 100-year coastal adaptation strategy in Hawke's Bay, New Zealand.

6.3.4.2 Criteria Weight Selection and Elicitation

The selection of a set of criteria and key variables is an important factor for implementing an MCA. The criteria should be selected specifically for the implementation and require a substantial literature review to identify the most important factors. For example, when it comes to coastal disaster risk reduction, multiple factors, such as social acceptance, political will, the availability of financial resources and technological know-how, can play a major role in investment decisions (Barquet and Cumiskey 2018; I. Davis et al. 2015). In the empirical MCA framework suggested in the Resilience-Increasing Strategies for Coasts – toolKIT (van Dongeren et al. 2014), an EU-funded project with the aim of developing risk management tools, feasibility, acceptability, and sustainability were selected as the three main categories of criteria (Barquet and Cumiskey 2018). Alternatively, Preston et al. (2013) provide a guideline for prioritizing coastal adaptation and development options, and they categorize the criteria into four groups, namely governance, financial, social and environmental. Rouillard et al. (2016) provide an overview of non-monetary criteria based on a literature review of 40 publications in various policy areas, including water management and coastal protection. No regret, urgency, climate mitigation potential, extreme events, robustness, flexibility and the level of autonomy have been identified as additional indicators used for adaptation processes. Regarding criteria selection for flood management, a summary of criteria used in ranking flood management alternatives in previous studies is provided by Chitsaz and Banihabib (2015). They indicated that the expected annual damage is the most common criterion, followed by the protection of wildlife habitats, the expected average number of casualties per year, and technical feasibility and construction speed.
Additionally, a review of MCA applications for the case of flood risk management can be found in de Brito and Evers (2016).

In conducting an MCA, criteria weights play an important role, and it is important to obtain an accurate evaluation of these weights. In environmental economics studies, criteria weights are typically obtained by asking decision-makers direct questions about these weights (Prato 2003). However, it is often quite difficult for decision-makers to come up with criteria weights in that context. As suggested by Xia and Wu (2007), the weights obtained using this approach are often biased, and the MCA results may be considered to be unreliable. Alternatively, Félix et al. (2012) discuss the usage of a stochastic multi-criteria acceptability approach to take into account the uncertainty of decision-makers’ preferences.

De Almeida et al. (2016) present two methods for eliciting the criteria weights. The first method, called “exact weight”, involves comparing an alternative with known performance scores in all criteria with another alternative that has the performance score in one criterion left unspecified. The decision-maker is then asked to specify the missing performance score for the second alternative so that they are indifferent between the two alternatives. This information is then used to calculate the weights.
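For two criteria with weights normalized to sum to one, the indifference statement above pins the weights down exactly; the sketch below illustrates this, with all numbers (and the two-criterion simplification) being assumptions for exposition rather than an example from de Almeida et al. (2016):

```python
# Illustrative two-criterion sketch of the "exact weight" elicitation idea:
# the decision-maker states the missing score that makes two alternatives
# equally attractive, and the weights are solved from the indifference.

def exact_weights(a, b_known, b_elicited):
    """Infer (w1, w2) with w1 + w2 = 1 from an indifference statement.

    a          : (a1, a2) scores of the fully specified alternative
    b_known    : b1, the known score of the second alternative on criterion 1
    b_elicited : b2, the score on criterion 2 that the decision-maker says
                 makes the two alternatives equally attractive
    Indifference means w1*a1 + w2*a2 = w1*b1 + w2*b2.
    """
    a1, a2 = a
    # Rearranged with w2 = 1 - w1:
    #   w1 * (a1 - b_known) = (1 - w1) * (b_elicited - a2)
    w1 = (b_elicited - a2) / ((a1 - b_known) + (b_elicited - a2))
    return w1, 1 - w1

# The decision-maker says alternative B with scores (40, 80) feels exactly
# as good as alternative A with scores (60, 50).
w1, w2 = exact_weights(a=(60, 50), b_known=40, b_elicited=80)
print(round(w1, 3), round(w2, 3))
```

With more criteria, repeating the elicitation criterion by criterion yields one equation per indifference statement, from which all weights can be solved in the same way.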

The second method is called “flexible weight”. In this method, the decision-maker is presented with two hypothetical alternatives whose performance scores are all specified. The decision-maker is then asked to select the preferred alternative. The observed decision is used in a linear programming problem to infer the criteria weights. Moreover, the weights could also be derived using consistency or consensus weights. In the former case, the weights are determined based on how consistent scores are between multiple rounds of ranking or scoring analysis (Beroggi and Wallace 2000; Tsiporkova and Boeva 2006). Thus, experts will be considered to be more reliable if they do not change their scores drastically between rounds and are ultimately given more weight in the aggregation. In the latter case, each expert’s score is compared with the dominant score and then weighted based on its proximity to that central score (Mathew 2012).
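The inference step of the flexible-weight method can be illustrated in the two-criterion case, where each observed choice “prefer P over Q” implies w·P ≥ w·Q and, with w1 + w2 = 1, bounds w1 to an interval; the choice data below are assumptions, and a general multi-criterion version would use a full linear programming solver instead:

```python
# Two-criterion sketch of inferring weights from observed pairwise choices.
# Each choice implies w1*p1 + (1-w1)*p2 >= w1*q1 + (1-w1)*q2, i.e.
# w1 * ((p1 - q1) - (p2 - q2)) >= q2 - p2, a linear constraint on w1.

def feasible_w1_interval(choices):
    """Intersect the constraints implied by observed pairwise preferences.

    choices: list of (preferred, other) pairs, each a (score1, score2) tuple.
    Returns (lo, hi) for w1, or None if the choices are inconsistent.
    """
    lo, hi = 0.0, 1.0
    for (p1, p2), (q1, q2) in choices:
        coef = (p1 - q1) - (p2 - q2)
        rhs = q2 - p2
        if coef > 0:
            lo = max(lo, rhs / coef)
        elif coef < 0:
            hi = min(hi, rhs / coef)
        elif rhs > 0:
            return None  # constraint 0 >= rhs cannot hold
    return (lo, hi) if lo <= hi else None

# The decision-maker preferred (70, 40) over (50, 55), and (45, 60) over (65, 30).
interval = feasible_w1_interval([((70, 40), (50, 55)), ((45, 60), (65, 30))])
print(interval)
```

Each additional observed choice adds one constraint and can only narrow the interval, which is how repeated questioning sharpens the inferred weights.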