The Problem of Inconsistent Models

Collin Rice

2 The Problem of Inconsistent Models

Idealized models are widespread in science, which raises a philosophical puzzle: how can we extract reliable information from representations we know to contain false assumptions, which often distort relevant features of the target system? An additional layer is added when we consider the fact that, in many instances, multiple idealized models with conflicting assumptions are used to study the same phenomenon (Longino 2013; Morrison 2015; Mitchell 2009; Weisberg 2007, 2013). Rather than simply modeling different features (or aspects) of the target system in complementary ways, genuinely inconsistent models often make contradictory assumptions about the target system (e.g., the nucleus), yield incompatible causal claims, and represent the system's basic ontology in fundamentally inconsistent ways (Morrison 2011, 2015). Indeed, Michael Weisberg characterizes the common practice of multiple model idealization as "the practice of building multiple related but incompatible models, each of which makes distinct claims about the nature and causal structure giving rise to a phenomenon" (Weisberg 2007, 645). While Weisberg focuses on the need for these different models to satisfy various competing representational goals, the most pressing challenge for the realist comes from attempts to use multiple inconsistent models to explain and understand the same phenomenon, because both explanation and understanding are typically thought to be factive—that is, they are both thought to have truth or accuracy requirements.

2.1 The Problem of Inconsistent Models and Accurate Representation Requirements

Among philosophers of science, it is widely accepted that a necessary condition for something to explain is that it be, at least in some sense, true (Hempel 1965). This intuition is based on examples such as explaining the weather by citing Greek gods. Intuitively, although this would explain if it were true, there is a sense in which it fails to provide a satisfactory explanation of the weather. Hempel's original Deductive-Nomological (DN) account builds this requirement in, since a sound (or cogent) argument must have all true premises. Hempel distinguished between genuine explanations, where the explanans is true, and potential explanations, which would be adequate if they were true. In modeling terms, this truth requirement claims that in order for a model to explain, it must accurately represent the explanatorily relevant features of the target system(s).1

Many contemporary accounts of how models explain make this truth requirement explicit. For example, mechanistic accounts of explanation typically include particularly strong accurate representation requirements in order for a mechanistic model to explain, as is illustrated by David Kaplan and Carl Craver’s model-to-mechanism-mapping (3M) requirement:

(3M) A model of a target phenomenon explains that phenomenon to the extent that (a) the variables in the model correspond to identifiable components, activities, and organizational features of the target mechanism that produces, maintains, or underlies the phenomenon, and (b) the (perhaps mathematical) dependencies posited among these (perhaps mathematical) variables in the model correspond to causal relations among the components of the target mechanism.

(Kaplan 2011, 347)

The 3M requirement involves various kinds of "correspondence" between the model and the actual causal mechanisms in the target system. Kaplan adds that "3M aligns with the highly plausible assumption that the more accurate and detailed the model is for a target system or phenomenon the better it explains that phenomenon" (Kaplan 2011, 347). In general, mechanistic accounts typically require models to provide an accurate representation of the relevant components and interactions involved in the causal mechanisms that actually produced the explanandum (Craver 2006; Kaplan and Craver 2011).

Accurate representation is also built into most causal accounts of explanation. As Michael Strevens puts it, "no causal account of explanation—certainly not the kairetic account—allows non-veridical models to explain" (Strevens 2008, 297). On Strevens's account, "a standalone explanation of an event e is a causal model for e containing only difference-makers for e" in which "the derivation of e mirrors a part of the causal process by which e was produced" (Strevens 2008, 71–72). While Strevens's account does allow some idealized models to explain when they distort only causal factors that do not make a difference, accurate representation continues to play a key role, since "the overlap between an idealized model and reality . . . is a standalone set of difference-makers for the target" (Strevens 2008, 318). In addition, James Woodward suggests that causal models explain when they "correctly describe," "trace or mirror," or are "true or approximately so" with respect to the difference-making causal relations that hold between the explanans and the explanandum (Woodward 2003, 201–203). More generally, for causal accounts, in order for a model to explain, it must provide an accurate representation of the difference-making causal relationships (or causal processes) within the model's target system(s). Indeed, Michael Weisberg describes a wide range of accounts of minimalist idealization, which claim that the models that explain are those that "accurately capture the core causal factors," since "[t]he key to explanation is a special set of explanatorily privileged causal factors. Minimalist idealization is what isolates these causes and thus plays a crucial role for explanation" (Weisberg 2007, 643–645).

Some causal accounts involve less demanding accurate representation requirements. For example, Angela Potochnik's (2015, 2017) recent causal account allows idealized models that explain to distort some causal difference makers. However, while Potochnik's view does allow for some causal difference makers to be left out or idealized, her account still requires models that explain to accurately represent the causal factors that had a significant impact on the causal pattern of interest to the current research program. She argues that "posits central to representing a focal causal pattern in some phenomenon must accurately represent the causal factors contributing to this pattern. . . . Idealizations, in contrast, must . . . represent as-if [such that . . .] none of its neglected features interferes dramatically with that pattern" (Potochnik 2017, 157). Therefore, while the set of causal factors is delimited somewhat differently, there is still a specific set of significant causal factors that needs to be accurately represented in order for an idealized model to explain. Moreover, similar to Strevens's view, idealizations are justified in these explanations when they distort features that are irrelevant to the causal pattern of interest.

In addition to these accurate representation requirements for models to explain, philosophers of science have long recognized a strong connection between explanation and understanding (de Regt 2009; Grimm 2008; Strevens 2013). For example, Wesley Salmon writes: "understanding results from our ability to fashion scientific explanations" (Salmon 1984, 259). An even stronger position is adopted by J. D. Trout, who claims that "scientific understanding is the state produced, and only produced, by grasping a true explanation" (Trout 2007, 585–586, emphasis added). Michael Strevens also argues that an individual has scientific understanding of a phenomenon only if they grasp the correct scientific explanation of that phenomenon (Strevens 2008, 2013). The link between explanation and understanding is also echoed by epistemologists: "understanding why some fact obtains . . . seems to us to be knowing propositions that state an explanation of the fact" (Conee and Feldman 2011, 316). While I think there are good reasons for believing that understanding is possible without having an explanation (Lipton 2009; Rohwer and Rice 2013, 2016), the fact that much of our scientific understanding comes from providing explanations that are thought to be accurate representations of relevant features lends support to the idea that scientific understanding is typically thought to be factive in some sense as well (at least in cases of understanding why). Indeed, like the intuition that knowledge of a proposition requires that the proposition be true, most epistemological accounts of understanding maintain that genuine understanding must be factive in some way (Grimm 2006; Khalifa 2012, 2013; Mizrahi 2012; Kvanvig 2003, 2009; Rice 2016; Strevens 2013).2

In general, most philosophical accounts claim that for idealized models to provide explanations and understanding, the accurate representation relation must hold between the idealized model and the important, significant, or difference-making causes (or mechanisms) that actually produced the explanandum. Given these factive requirements for explanation and understanding, many philosophers have noted that the use of multiple inconsistent models for the same phenomenon raises serious challenges for scientific realism. Indeed, if the above accounts are right, then when multiple conflicting models are used to explain and understand the same phenomenon, they ought to be interpreted as making conflicting claims about the causal interactions among the relevant features of the models' target system(s). In other words, multiple conflicting models that are used to explain and understand the same phenomenon ought to be interpreted as each aiming to provide an accurate representation of the difference-making causes (or mechanisms) for the explanandum. This is problematic given that the idealized models used in science often make conflicting assumptions (Morrison 2011, 2015), represent incompatible causal structures (Longino 2013), and distort difference-making causes in a variety of ways (Batterman and Rice 2014; Rice 2018, 2019). The problem of inconsistent models, then, is to show how such a situation can result in genuine explanations and understanding of the target phenomenon, despite the inconsistency of the representations appealed to by scientific modelers. In light of the above discussion, I argue that the problem of inconsistent models has largely resulted from the fact that most accounts of how to justify the use of idealized models to explain and understand are exclusively focused on accurate representation of relevant features of the target system.

2.2 The Problem of Inconsistent Models and Perspectivalism

One possible response to the challenge of inconsistent models comes from perspectivalism (Giere 2006; Massimi 2018). Giere's original perspectivalism suggests that we should not directly interpret models as making claims about their target systems. Instead, we should interpret models as contextually constructed representations of a system from the perspective of a particular theory. Models constructed within different theoretical perspectives will thus make different and sometimes contradictory claims about the same target system(s). In short, perspectivalism suggests that we need to recast claims about how models represent in the following form: from the perspective of theory T, model M represents system S in a particular way (Giere 2006).

While this perspectival move is somewhat helpful in showing how multiple models might represent the system in different context-sensitive ways, I agree with Morrison (2011, 2015) and Massimi (2018) that perspectivalism on its own does little to solve the problem of inconsistent models for realism. For one thing, even if we can only interpret how (or what) models represent from within the perspective of a particular theory or context, this doesn't tell us how sets of inconsistent models from different perspectives are able to yield genuine explanations and understanding of the same phenomenon. Several epistemological and methodological questions remain about how to interpret the inconsistent claims made by multiple perspectival models. But more than just failing to provide a clear solution, I contend that perspectivalism (as well as appeals to partial structures or structural realism) will continue to fall prey to the problem of inconsistent models as long as philosophical accounts of modeling continue to conflate the following two questions:

1. How does the idealized model allow scientists to explain and understand the phenomenon?

2. Which of the relevant features (e.g., difference-making causes) for the occurrence of the phenomenon are accurately represented by the idealized model?

In short, the problem lies with equating explanation and understanding with accurate representation of relevant features, due to the truth and accuracy requirements maintained by most accounts of explanation and understanding outlined above. As long as these accurate representation requirements are central to how philosophers conceive of explanation and understanding, appealing to only a limited set of difference makers (Potochnik 2017), partially accurate representations (Wimsatt 2007; Worrall 1989), or perspectivalism (Giere 2006) will not be able to offer a solution to the problem of inconsistent models for realism. After all, if the models are still intended to accurately represent (or map on to) relevant features of the target system, multiple conflicting idealized models will lead to inconsistent metaphysical claims about the components and interactions of the target system.

In support of this suggestion, Michela Massimi (2018) has shown how the main argument used to conclude that the use of multiple inconsistent perspectival models is incompatible with realism depends on what she calls the "representationalist assumption," namely that scientific models (partially) represent relevant aspects of a given target system. Furthermore, this representationalist assumption implicitly depends on the idea that "representation means to establish a one-to-one mapping between relevant (partial) features of the model and relevant (partial)—actual or fictional—states of affairs about the target system" (Massimi 2018, 342).

This kind of accurate representation (or mapping) assumption results in inconsistent models making inconsistent metaphysical claims about the target system even if we allow that models represent the target system from different perspectives. In short, the reason perspectival modeling is similarly challenged by the problem of inconsistent models is that it is typically taken to include the same kinds of assumptions regarding accurate representation of (or mapping on to) relevant features of the target system. This again shows that the problem of inconsistent models derives largely from the assumption that models that are used to explain and understand a phenomenon must accurately represent the relevant features of the target system.3

As additional motivation for moving away from accurate representation assumptions, Morrison notes that, in cases of inconsistent models, "we usually have no way to determine which of the many contradictory models is the more faithful representation, especially if each is able to generate accurate predictions for a certain class of phenomena" (Morrison 2011, 343). Therefore, not only is accurate representation the source of the challenge from inconsistent models, but in many cases we simply cannot answer the question of which of the plethora of idealized models accurately represents the relevant features of the models' target system(s). As a result, I contend that we ought to consider alternative ways of justifying the use of multiple inconsistent models to explain and understand the same phenomenon that do not depend on the accurate representation relations that are central to most accounts of how models provide explanations and understanding.

3 Using Universality Classes to Justify the Use