
5 Causal Bayes Nets as Models of Causal Cognition

5.4 Causal Reasoning with Observations and Interventions

5.4.5 Experiments 1 to 4: Summary and Discussion

Taken together, the results of Experiments 1 to 4 provide clear evidence that learners successfully distinguished between merely observed states of variables and the very same states generated by interventions. The capacity to distinguish seeing from doing was demonstrated both for simple diagnostic judgments involving a single causal relation and for more complex predictive inferences that required taking into account multiple variables and confounding pathways. In all studies, participants proved able to derive interventional predictions from causal models parameterized by passively observed events.

The results of Experiment 1 show that participants understood that intervening on an effect renders it independent of its actual causes; that is, learners performed graph surgery. However, when a different model was suggested in which the variable targeted by the intervention was the cause of the queried variable, learners correctly understood that in this case there is no difference between seeing and doing. In addition, participants successfully differentiated hypothetical from counterfactual interventions. Even though all subjects received identical learning input, learners' predictions for the consequences of actual and counterfactual interventions differed depending on minimal variations of the causal model assumed to underlie the data.
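To make the normative contrast concrete, the following sketch works through the simplest case of a single causal relation C → E. The structure and the numerical parameters are illustrative assumptions, not the materials used in Experiment 1: under observation, the state of E is diagnostic for C, whereas an intervention that sets E removes the arrow from C, so the probability of C stays at its base rate.

```python
# Illustrative sketch (hypothetical parameters): a single causal relation C -> E.
# Observing E=1 is diagnostic for C; setting E=1 by intervention is not,
# because graph surgery removes the arrow C -> E.

P_C = 0.3             # base rate of the cause (assumed value)
P_E_GIVEN_C = 0.9     # probability of the effect when the cause is present
P_E_GIVEN_NOT_C = 0.1 # probability of the effect otherwise (e.g., hidden causes)

# Seeing: Bayes' rule, P(C=1 | E=1)
P_E = P_C * P_E_GIVEN_C + (1 - P_C) * P_E_GIVEN_NOT_C
p_seeing = P_C * P_E_GIVEN_C / P_E

# Doing: after do(E=1) the state of E no longer carries information about C,
# so the probability of C remains at its base rate.
p_doing = P_C

print(f"P(C=1 | E=1 observed) = {p_seeing:.2f}")  # ~0.79
print(f"P(C=1 | do(E=1))      = {p_doing:.2f}")   # 0.30
```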

Experiment 2 extends these results by demonstrating that learners also have the capacity to differentiate between observations and interventions when a confounding backdoor path has to be taken into account. The analyses of the response patterns not only provide convincing evidence that learners correctly distinguished between observations and interventions but also demonstrate a surprising grasp of the implications of confounding pathways in a complex causal model. Participants correctly understood that interventions and observations differ with respect to the way the second confounding pathway needs to be taken into account. In line with causal Bayes nets theory and the results of the first study, assumptions about causal structure strongly influenced learners' causal inferences. Participants correctly recognized that the potentially diverging consequences of observations and interventions crucially depend on the underlying causal model. However, while the results of Experiment 1 demonstrate that participants correctly distinguished hypothetical from counterfactual interventions, this capacity was impaired for the more complex inference tasks of Experiment 2. Even though the descriptive data indicate that participants drew some distinction between hypothetical and counterfactual actions, they had difficulty differentiating reliably between the two types of interventions. In general, the results of Experiments 1 and 2 emphasize the role of top-down influences in causal cognition and challenge purely bottom-up approaches to causal induction.
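The following sketch illustrates the kind of computation such judgments require. The diamond structure A → B, A → C, B → D, C → D and all parameters are illustrative assumptions rather than the exact materials of Experiment 2; the sketch contrasts P(D | C) with P(D | do(C)) by enumerating the states of A and B.

```python
# Illustrative diamond model (assumed structure and parameters):
# A -> B, A -> C, B -> D, C -> D.
# Observing C is diagnostic for A (and hence for B), whereas do(C) cuts the
# arrow A -> C, so only the backdoor path A -> B -> D remains informative about D.
from itertools import product

P_A = 0.5                        # base rate of the initial event A
P_B_GIVEN_A = {1: 0.8, 0: 0.1}   # link A -> B
P_C_GIVEN_A = {1: 0.8, 0: 0.1}   # link A -> C

def p_d(b, c):
    """P(D=1 | B=b, C=c): noisy-OR combination of the two causes of D."""
    return 1 - (1 - 0.8 * b) * (1 - 0.8 * c)

def prob_d(c_value, intervene):
    """P(D=1 | C=c_value) when observing, or P(D=1 | do(C=c_value)) when intervening."""
    num = den = 0.0
    for a, b in product((0, 1), repeat=2):
        w = P_A if a else 1 - P_A
        w *= P_B_GIVEN_A[a] if b else 1 - P_B_GIVEN_A[a]
        if not intervene:        # seeing: the state of C updates beliefs about A
            w *= P_C_GIVEN_A[a] if c_value else 1 - P_C_GIVEN_A[a]
        num += w * p_d(b, c_value)
        den += w
    return num / den

for c in (1, 0):
    print(f"P(D=1 | C={c})     = {prob_d(c, intervene=False):.2f}")
    print(f"P(D=1 | do(C={c})) = {prob_d(c, intervene=True):.2f}")
```

With these assumed parameters the most telling contrast is the preventive case: observing C = 0 also makes A, and hence B, less likely, whereas preventing C by intervention leaves A at its base rate, so D remains more probable under doing than under seeing.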

Experiments 3 and 4 further investigated the role of the learning data. The findings of the two studies demonstrate that learners integrate a causal model's parameters into their causal inferences. Since identical causal models led to very different causal judgments depending on the learning input, these results refute the explanation that learners' causal inferences were mainly driven by the causal models suggested to them prior to observational learning. Experiment 3 investigated participants' sensitivity to base rate information in causal reasoning. The results show that learners' responses were clearly affected by variations of the base rate of events A and C. For example, when the initial event A was frequent, participants correctly understood that in case of a preventive intervention on C the backdoor path must be taken into account. Conversely, when the instantiation of the backdoor path was rather unlikely because the initial event was rare, learners' responses reflected that there was only a slight difference between seeing and doing. However, the obtained response patterns also showed some deviations from the normative values. One potential factor contributing to this problem is that some learners seem to have misunderstood certain aspects of the cover story (e.g., they assumed the hidden causes generating A and C not to be independent). In addition, a number of studies on judgment and decision making have shown that people often tend to neglect base rate information, a phenomenon known as base rate neglect or the base rate fallacy (e.g., Eddy, 1982; Kahneman & Tversky, 1982). This may also have contributed to the deviations from the normative values.
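A brief numerical sketch, using the same illustrative diamond structure and assumed parameters as above, shows why the base rate of A modulates the gap between seeing and doing when C is absent:

```python
# Illustrative sketch (assumed diamond model A -> B, A -> C, B -> D, C -> D):
# the gap between observing C=0 and preventing C by intervention depends on
# how likely the initial event A is.
def p_d_given_c0(p_a, intervene, s=0.8, base=0.1):
    """P(D=1) when C is absent: either observed as absent (seeing) or prevented (doing)."""
    num = den = 0.0
    for a in (0, 1):
        p_b1 = s if a else base                    # P(B=1 | A=a)
        for b in (0, 1):
            w = (p_a if a else 1 - p_a) * (p_b1 if b else 1 - p_b1)
            if not intervene:
                w *= (1 - s) if a else (1 - base)  # P(C=0 | A=a)
            num += w * s * b                       # with C absent, D depends only on B
            den += w
    return num / den

for p_a in (0.8, 0.2):                             # frequent vs. rare initial event
    seeing = p_d_given_c0(p_a, intervene=False)
    doing = p_d_given_c0(p_a, intervene=True)
    print(f"P(A)={p_a}: seeing {seeing:.2f}   doing {doing:.2f}   difference {doing - seeing:.2f}")
```

Under these assumptions the difference between seeing and doing is sizeable when A is frequent and shrinks when A is rare, mirroring the qualitative pattern described above.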

Finally, Experiment 4 illustrates how the strength of the causal relations affects causal inferences. As in Experiment 3, participants were presented with identical causal models, but the learning input entailed different parameterizations of these models.

Again, participants' causal inferences varied systematically with the manipulations of the learning input. For example, learners understood that a confounding backdoor path consisting of strong causal relations exerts a larger influence on the probability of the final effect than the same pathway consisting of weak causal relations. Thus, participants correctly recognized that the strength of the causal mechanisms connecting the model's variables crucially influences the consequences of interventions.
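The following sketch, again based on the illustrative diamond structure with assumed parameters, shows how the strength of the backdoor links changes the predicted consequences of an intervention on C:

```python
# Illustrative sketch (assumed diamond model): the probability of the final
# effect D after an intervention on C depends on the strength of the backdoor
# path A -> B -> D, because that path is unaffected by graph surgery on C.
def p_d_do_c(c, backdoor, p_a=0.5, s_c=0.8, base=0.1):
    """P(D=1 | do(C=c)) when the backdoor links A->B and B->D have strength `backdoor`."""
    total = 0.0
    for a in (0, 1):
        p_b1 = backdoor if a else base      # P(B=1 | A=a)
        for b in (0, 1):
            w = (p_a if a else 1 - p_a) * (p_b1 if b else 1 - p_b1)
            total += w * (1 - (1 - backdoor * b) * (1 - s_c * c))  # noisy-OR of B and C
    return total

for backdoor in (0.9, 0.3):                 # strong vs. weak backdoor path
    print(f"backdoor strength {backdoor}: "
          f"P(D=1 | do(C=1)) = {p_d_do_c(1, backdoor):.2f}, "
          f"P(D=1 | do(C=0)) = {p_d_do_c(0, backdoor):.2f}")
```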

Taken together, the findings of the four experiments illustrate how reasoners integrate both qualitative knowledge (i.e., causal models) and quantitative knowledge (i.e., parameters) into their causal judgments. These studies corroborate the assumption that causal reasoning is neither purely data-driven nor completely determined by prior knowledge. Instead, top-down and bottom-up processes interact in causal reasoning in a fashion anticipated by causal Bayes nets theory. Alternative accounts of causal cognition (e.g., contingency models) cannot explain the fact that learners' causal inferences differed depending on whether the state of a variable was merely observed or generated by means of intervention. These accounts do not represent the crucial differences between observations and interventions and their diverging implications.

The results are also at variance with the predictions of associative accounts. Even though learners never experienced the consequences of interventions (i.e., instrumental actions), they showed a remarkable competence in inferring the consequences of hypothetical actions from their observational knowledge.

However, the responses to the counterfactual intervention questions also indicate that participants had problems distinguishing between hypothetical and counterfactual interventions, especially when making judgments that required taking into account confounding backdoor paths. Whereas the estimates of the counterfactual probabilities conformed to the normatively predicted response patterns for simple diagnostic judgments, this competence was impaired for the more complex predictive inferences. This is probably due to the complexity of counterfactual inferences, which require an updating of the model's probabilities prior to the stage of model manipulation.
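The extra step can be made explicit with a small functional model in the spirit of the standard three-step account of counterfactuals (abduction, action, prediction). The structure assumed here, a single relation C → E with strength 0.8 plus an independent background cause U, and all parameters are purely illustrative: a counterfactual query first updates the background variables on what was actually observed, then performs graph surgery, and only then predicts the effect, whereas a hypothetical query skips the abduction step.

```python
# Illustrative sketch (assumed functional model, not the original materials):
# E occurs if C succeeds in producing it (strength 0.8, i.e. noise N=0)
# or if an independent background cause U is present.

P_U = 0.2  # base rate of the background cause of E
P_N = 0.2  # probability that C's causal mechanism fails (so the strength of C -> E is 0.8)

def e_value(c, u, n):
    """Deterministic functional equation for E."""
    return 1 if ((c and not n) or u) else 0

def counterfactual_e(observed_c, observed_e, do_c):
    """P(E would be 1 had C been set to do_c), given we actually saw C=observed_c, E=observed_e."""
    weighted = total = 0.0
    for u in (0, 1):
        for n in (0, 1):
            p = (P_U if u else 1 - P_U) * (P_N if n else 1 - P_N)
            if e_value(observed_c, u, n) == observed_e:  # step 1: abduction
                total += p
                weighted += p * e_value(do_c, u, n)      # steps 2+3: surgery + prediction
    return weighted / total

# Hypothetical query: P(E=1 | do(C=0)) simply equals the base rate of U.
print(f"hypothetical   P(E=1 | do(C=0))             = {P_U:.3f}")
# Counterfactual query after having seen C=1 and E=1: the abduction step raises
# the belief in U, so the counterfactual probability exceeds the hypothetical one.
print(f"counterfactual P(E=1 | C=1, E=1, do(C:=0))  = {counterfactual_e(1, 1, 0):.3f}")  # ~0.238
```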

To sum up, the studies provide convincing evidence that learners were able to derive interventional predictions from observational data after a trial-by-trial learning phase. The results show that the capacity to predict the consequences of interventions from causal models parameterized by passively observed events is not limited to tasks in which learners are provided with lists of aggregated data (Waldmann & Hagmayer, 2005) or descriptions of causal situations (Sloman & Lagnado, 2005). The findings weaken associative theories of causal cognition and are at variance with the claim that trial-by-trial learning operates through different learning mechanisms than causal reasoning with aggregated data. Instead, the results suggest that there are different modes by which identical causal knowledge is accessed and integrated to derive observational and interventional predictions (cf. Waldmann & Hagmayer, 2005).

These results support causal Bayes nets theory, which captures the distinction between seeing and doing and provides the computational machinery to derive interventional predictions from observational data. Theories of causal cognition that lack the representational power to express the difference between observations and interventions, and that do not take causal structure into account, fail to explain the empirical findings.