Using Universality Classes to Justify the Use of Idealized Models

Collin Rice

3 Using Universality Classes to Justify the Use of Idealized Models

Instead of focusing on accurate representation relations, I argue that a better way to justify the use of (multiple conflicting) idealized models to explain and understand is to appeal to universality classes (Batterman 2000, 2002; Batterman and Rice 2014; Rice 2018, 2019). The term ‘universality’ comes from mathematical physics, but in its most general form it is just an expression of the fact that many systems that are perhaps extremely heterogeneous in their physical features will nonetheless display similar patterns of behavior (typically at some macroscale) (Batterman 2000, 2002; Kadanoff 2013). The systems that display similar behaviors despite differences in their physical features are said to be in the same universality class (Kadanoff 2013). While most instances of universality have focused on patterns that are stable across real systems—for example, the universality of critical exponents across a wide range of fluids—a more general conception of universality can be used to find classes of real, possible, and model systems that display similar behaviors despite differences in their features.4 I contend that this link of being in the same universality class can justify the use of idealized models to explain and understand the behaviors of their target systems even when the models fail to accurately represent the relevant features of those systems.
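Although the chapter's own physics examples (critical exponents across fluids) require renormalization machinery to work out, the general idea of a universality class can be illustrated with a simpler, often-cited case: the central limit theorem. The Python sketch below is my own illustration, not from the text, and all names in it are hypothetical. Three "microscopically" very different systems (coin flips, uniform noise, and skewed exponential waiting times) display the same macroscale pattern once their sums are standardized, despite sharing none of their underlying features.

```python
# Illustrative sketch (not from the text): the central limit theorem viewed
# as a universality class. Heterogeneous "microphysics" -- coin flips,
# uniform noise, exponential waiting times -- all display the same
# macroscale Gaussian pattern of standardized sums.
import math
import random
import statistics

random.seed(0)

def standardized_sum(draw, mean, sd, n=400):
    """Sum n i.i.d. draws, then center and scale by the theoretical mean/sd."""
    total = sum(draw() for _ in range(n))
    return (total - n * mean) / (sd * math.sqrt(n))

# Three very different micro-level systems, each with known mean and sd.
systems = {
    "coin":        (lambda: random.randint(0, 1),    0.5, 0.5),
    "uniform":     (lambda: random.random(),         0.5, 1 / math.sqrt(12)),
    "exponential": (lambda: random.expovariate(1.0), 1.0, 1.0),
}

for name, (draw, mean, sd) in systems.items():
    samples = [standardized_sum(draw, mean, sd) for _ in range(2000)]
    within_1sd = sum(abs(x) < 1 for x in samples) / len(samples)
    print(f"{name:12s} mean={statistics.mean(samples):+.2f} "
          f"sd={statistics.stdev(samples):.2f} P(|x|<1)={within_1sd:.2f}")
```

Each system prints approximately the same macroscale statistics (mean near 0, standard deviation near 1, about 68% of mass within one standard deviation), even though no micro-level feature is shared across the three: exactly the stability across perturbations of a system's features that the universality account appeals to.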

Given that many idealized model systems are in the same universality class as their target systems, they will display similar patterns of behavior despite the fact that the model may drastically distort the causes, mechanisms, or other features responsible for the explanandum in any real-world system. Indeed, just like cases of universality that show stability across perturbations of the features of extremely different real systems (e.g., fluids and magnets), universality can also link stable behaviors across perturbations of the features of different model systems and their real (or possible) target systems. According to what I will call the ‘universality account,’ it is precisely this stability of various behaviors across perturbations of most of the system’s features that can enable scientists to justifiably use idealized models that drastically distort difference-making features to explain and understand the behaviors of their target system(s) (Batterman and Rice 2014; Rice 2018, 2019).

There are several important things to note about this universality account.

First, idealized models that accurately represent difference-making features can clearly be justifiably used to explain and understand on the universality account. After all, if those difference-making features are sufficient to produce the phenomenon in some real-world systems, then those features will likely be sufficient for producing the phenomenon in the model system as well (although a few additional assumptions may be required).

Thus, those models will be in the same universality class as their target system(s). Consequently, the universality account can easily accommodate the cases used to motivate accounts that focus on accurate representation relations. Importantly, however, the universality account shows that accurate representation is not what is doing the important justificatory work in using these idealized models to explain and understand—rather, it is the fact that the idealized model produces the universal patterns of behavior that are of interest to the modeler and are stable across a class of real, possible, and model systems. Indeed, the universality account allows scientists to justifiably use idealized models to explain and understand even in cases where the same universal patterns of behavior are produced by extremely different sets of features across the real, possible, and model systems within the universality class. This allows the account to justify using idealized models to explain and understand a phenomenon even when we cannot identify which parts of the model are accurate and which parts are distorted representations of the target system. This is quite useful since in many cases the accurate parts of the model cannot be isolated from the idealizing assumptions used in constructing the model; that is, it is often difficult (or impossible) to decompose scientific models into their accurate and inaccurate parts (Rice 2019). The universality account also provides a way around the problem posed by Morrison, that “we usually have no way to determine which of the many contradictory models is the more faithful representation” (Morrison 2011, 343).

Since idealized models within the same universality class as their target system(s) need not accurately represent the features of their target system, scientists can still be justified in using those models to explain and understand even when they are unable to determine which model is the most accurate or which features are being accurately represented by individual models. In short, universality classes provide a way of linking the behaviors of idealized models with the behaviors of their target systems without relying on the accurate representation relations required by most accounts of how idealized models can be used to explain and understand.

3.1 Modal Information, Explanation, and Understanding

Before going further, it is important to say a bit about why the information scientists obtain from idealized models within a universality class can be used to explain and understand the behaviors of real systems. Indeed, simply “reproducing a pattern of behavior” might sound like these are merely phenomenological models that produce the desired results but fail to explain why the phenomenon occurs. In what follows, I focus on the explanations and understanding produced by providing true modal information about the phenomenon. I argue that idealized models can be used to provide explanations and understanding because they not only produce the behaviors of interest but also enable scientists to identify which features are important for the occurrence of the explanandum, and how various changes in those relevant (and irrelevant) features would result in changes in the explanandum (or not). That is, these idealized models provide extensive modal information about the phenomenon of interest concerning counterfactual dependencies and independencies that hold in the model’s target system(s). For example, an optimization model that is within the same universality class as a real population can show biologists how changes in the tradeoffs between various fitness-enhancing features within a system will result in changes in the expected equilibrium outcome—even if the model represents those features and the relationships between them and the explanandum in a highly idealized (i.e., distorted) way (Rice 2013). Moreover, by showing that a class of real, possible, and model systems all display similar patterns of behavior despite differences in their features, scientists can come to understand that many of the features of the system are counterfactually irrelevant to the explanandum. For example, by investigating an optimization model a biologist can determine that the equilibrium point of the population is independent of the starting point, trajectory, or method of inheritance in the population (Potochnik 2007, 2017; Rice 2013, 2016).
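The optimization-model point can be made concrete with a toy sketch. The Python example below is my own hypothetical illustration, not a model from the literature: a single trait t in (0, 1) trades off two fitness-enhancing features, with fitness w(t) = t^a (1 − t)^b and optimum t* = a/(a + b). Changing the tradeoff weights (a, b) shifts the predicted equilibrium, while the starting point and the choice of update dynamics (the "trajectory") are counterfactually irrelevant to it.

```python
# Hypothetical toy optimization model (my illustration, not Rice's example):
# a trait t trades off two fitness-enhancing features. The equilibrium
# depends on the tradeoff weights (a, b), but not on the starting point
# or on which dynamics carries the population there.

def fitness(t, a, b):
    """Multiplicative tradeoff: increasing t helps one feature, hurts the other."""
    return t**a * (1 - t)**b

def hill_climb(t, a, b, step=1e-3, iters=20000):
    """One 'dynamics': a crude adaptive walk that moves uphill in small steps."""
    for _ in range(iters):
        up = min(t + step, 1 - 1e-9)
        down = max(t - step, 1e-9)
        t = up if fitness(up, a, b) >= fitness(down, a, b) else down
    return t

def grid_best(a, b, n=10001):
    """A very different 'dynamics': exhaustive search over a trait grid."""
    ts = [i / (n - 1) for i in range(1, n - 1)]
    return max(ts, key=lambda t: fitness(t, a, b))

a, b = 2, 3  # hypothetical tradeoff weights; analytic optimum a/(a+b) = 0.4
for start in (0.05, 0.5, 0.95):
    print(round(hill_climb(start, a, b), 2))  # near 0.4 from every start
print(round(grid_best(a, b), 2))              # near 0.4 under other dynamics
```

Every starting point and both dynamics land near the same equilibrium, while rerunning with, say, a = b = 1 moves the equilibrium to 0.5: the modal pattern (what the equilibrium depends on, and what it does not) is stable even though the update rules differ completely.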

Accordingly, on the account I present here, idealized models within the same universality class as their target system(s) can be used to provide explanations by revealing a plethora of modal information about the counterfactual dependencies (and independencies) between features of real system(s) and the explanandum.5 My appeal to modal information is not meant to provide a complete or exhaustive account of explanation—that is, there are certainly additional criteria required of explanations, and there may be explanations that do not involve modal information—but it is worth noting that modal information is central to many causal and non-causal accounts of explanation (Batterman and Rice 2014; Bokulich 2011, 2012; Kim 1994; Rice 2013; Woodward 2003). Indeed, many accounts of explanation agree with Woodward that “[an] explanation must enable us to see what sort of difference it would have made for the explanandum if the factors cited in the explanans had been different in various possible ways” (Woodward 2003, 11). That is, explanations are provided by giving counterfactual (i.e., modal) information about how changes in certain features in the explanans result in changes in the explanandum. This is precisely the kind of information that scientists are able to extract from model systems within the same universality class as their target system(s). The key feature of models that can be used to develop explanations is that they provide a set of modal information that captures how various changes in the explanatorily relevant features of the target system would result in changes in the explanandum.

In addition, I argue that understanding is also provided by grasping modal information. For example, understanding can be produced by idealized models that investigate other (perhaps very distant) scenarios in the network of possibilities, that is, the range of possible states of the system. By providing modal information about possible systems, models that fail to explain may still be able to produce understanding of a phenomenon (Lipton 2009; Rohwer and Rice 2013; Rice 2016). This approach builds on a proposal by Robert Nozick:

I am tempted to say that explanation locates something in actuality, showing its actual connections with other things, while understanding locates it in a network of possibility showing the connections it would have to other nonfactual things or processes. (Explanation increases understanding too, since the actual connections it exhibits are also possible.)

(Nozick 1981, 12)

Nozick’s suggestion nicely links the understanding produced by explanations with the understanding produced by models that fail to explain (Lipton 2009; Rice 2016). In both cases, understanding is produced by providing true modal information about the phenomenon of interest—in Nozick’s terminology, it “locates it in a network of possibility.” However, I argue that in the case of understanding produced by an explanation, a particular and more expansive set of modal information about the explanandum must be provided that identifies a set of relevant features responsible for the explanandum and how changes in those features would result in changes in the explanandum (Bokulich 2011, 2012; Rice 2013; Woodward 2003). This modal information improves understanding by telling scientists about the range of possible states of the system that would (and would not) produce the explanandum. Still, modal information not included in an explanation can also improve understanding of the possible states of the system(s) of interest, for example, by identifying which features are not necessary for the phenomenon to occur (Rohwer and Rice 2013).

What is crucial to notice is that the modal information involved in explanation and understanding can be provided in ways other than accurately representing relevant features. An idealized model can tell a scientist quite a lot about how things would be in various counterfactual scenarios without having to accurately represent the features of any actual system(s). In other words, idealized models can provide modal information about changes in features of the system even when they fail to accurately represent those features or the actual processes that link those features to the target explanandum. I argue that in many cases this is possible because universality guarantees that the model system’s patterns of counterfactual dependence and independence will be similar to those of the target system(s), even if the entities, causes, and processes of those systems are extremely different. Therefore, even if the model drastically distorts the relevant features of its target system(s), it can still be used to explain or understand the phenomenon because many of the modal patterns that hold in the idealized model system will be similar to those of the real-world system(s). For example, in the case of critical behaviors in physics, investigation of the universality class reveals that the explanandum counterfactually depends on the dimensionality of the system and the symmetry of the order parameter (Batterman forthcoming). Furthermore, by using renormalization techniques to explicitly delimit the universality class, scientists can demonstrate that most of the other physical features of the systems in the class are counterfactually irrelevant to the explanandum (Batterman 2002). Generalizing the concept of universality allows us to capture this stability of various patterns of behavior that are largely independent of the physical components, interactions, and features of a heterogeneous class of real, possible, and model systems. Moreover, by focusing on modal information we can see how highly idealized models (and various modeling techniques) can allow scientists to extract a plethora of modal information that can be used to explain and understand a phenomenon.

In summary, suppose that we could show that:

1. Certain (modal) patterns of behavior are universal across classes of real, possible, and model systems.

2. The idealized models used by scientists are within the same universality classes as the real systems whose behaviors they want to explain and understand.

If this were the case, we could provide an epistemic justification for why those idealized models can be used to generate explanations and understanding of phenomena in real-world systems, despite providing drastically distorted representations of their target system(s).6 Importantly, this justification does not require the accurate representation relations involved in most other accounts of how to justify the use of idealized models to explain and understand.