4.2 Knowledge and understanding

4.2.2 Understanding

It is in this way that, to my mind, computer simulations establish the necessary metaphysical foundations of scientific explanation. That is, explanation by computer simulations is possible because a relation of representational dependence between the results and the empirical phenomenon has been established. It follows that explaining the former is epistemically equivalent to explaining the latter.

Before I proceed to address the notion of understanding and how it relates to scientific explanation, allow me to introduce a new term. For the remainder of this work, I shall refer to the results of a computer simulation that genuinely represent an empirical phenomenon as the simulated phenomenon. In this vein, to say that we want to explain the simulated spikes in Figure 3.3 is to say that we want to explain similar spikes found in the world. Let it be noted that such spikes might or might not have been observed or measured empirically. This means that a genuine representation of the empirical target system presupposes a genuine representation of each empirical phenomenon belonging to that target system. For instance, to genuinely represent planetary movement by a simulation that implements a Newtonian model is also to genuinely represent each phenomenon that falls within that model.

Let me now turn to the notion of understanding and how we can relate it to the concept of explanation, making the latter epistemic.

We say, for instance, that we ‘understand’ why the Earth revolves around the Sun, or that the velocity of a car can be measured by differentiating the position of the body with respect to time.

Typically, epistemologists and philosophers of mind are better trained than other philosophers at working out what exactly understanding amounts to. Philosophers of science, instead, seem content with assimilating the role that understanding plays in the construction of a scientific corpus of beliefs and of general truths about the world. A first rough conceptualization, then, takes scientific understanding to correspond to a coherent corpus of beliefs about the world. These beliefs are mostly true (or approximately true) in the sense that our models, theories, and statements about the world represent the way it really is (or approximately is).26 Naturally, not everything that has been scientifically represented is strictly true.

We do not have a perfect understanding of how our scientific theories and models work, nor a complete grasp of why the world is the way it is. For these reasons, a notion of understanding must also allow for some falsehoods. Catherine Elgin has coined an apt term for these cases: she calls them ‘felicitous falsehoods’, a way of highlighting the positive contribution that elements of a theory which are not strictly true can make. Such felicitous falsehoods are the idealizations and abstractions that theories and models employ.

For instance, scientists know very well that no actual gas behaves in the way that the kinetic theory of gases describes. Nevertheless, the ideal gas law accounts for the behavior of gases by describing their movement, properties, and relations. There is no such gas, yet scientists purport to understand the behavior of actual gases by reference to the ideal gas law (i.e., by reference to a coherent corpus of true beliefs).27 Let it be noted, however, that although a scientific corpus of beliefs is riddled with these felicitous falsehoods, this does not entail that the totality of our corpus of beliefs is false. A coherent body of predominantly false and unfounded beliefs, such as alchemy or creationism, does not constitute understanding of chemistry or of the origins of living beings, and it certainly does not constitute a coherent corpus of true beliefs. In this vein, the first demand for having understanding of the world is that our corpus of beliefs be mostly populated with true (or approximately true) beliefs.

Taken in this way, it is paramount to account for the mechanisms by which new beliefs are incorporated into the general corpus of true beliefs; that is, how the corpus is populated.

Gerhard Schurz and Karel Lambert, for instance, assert that “to understand a phenomenon P is to know how P fits into one’s background knowledge” (Schurz and Lambert, 1994, 66).28 There are several operations that allow scientists to do this.

For instance, a mathematical or logical derivation from a set of axioms incorporates new well-founded beliefs into the corpus of arithmetic or logic, making it more coherent and integrated. There is also a pragmatic dimension, which considers that we incorporate new beliefs when we are capable of using our corpus of beliefs for some specific epistemic activity, such as reasoning, working with hypotheses, and the like.

Elgin, for instance, indicates that understanding geometry entails that one must be able to reason geometrically about new problems, to apply geometrical insight in different areas, to assess the limits of geometrical reasoning for the task at hand, and so forth.29

Here I am interested in one particular way of incorporating new beliefs, namely, by explaining new phenomena. For this I need to show how a theory of scientific explanation works as a coherence-making process capable of incorporating new beliefs into a corpus of beliefs. This is the core epistemic role for scientific explanation, largely admitted in the philosophical literature. Jaegwon Kim, for instance, considers that “the idea of explaining something is inseparable from the idea of making it more intelligible; to seek an explanation of something is to seek to understand it, to render it intelligible” (Kim, 1994, 54). Stephen Grimm makes the same point in fewer words: “understanding is the goal of explanation” (Grimm, 2010). Explanation, then, is a driving force for scientific understanding: we can understand more about the world because we can explain why it works the way it does. A successful account of computer simulations as explanatory devices must therefore show how, by simulating a piece of the world, simulations yield understanding of it. By accomplishing this aim, I will have successfully conferred epistemic power on computer simulations.

Now, there are two ways to approach the explanatory role of computer simulations: either computer simulations provide a new account of scientific explanation, independent of current theories, or they are subsumed under the conceptual framework of current theories of explanation. In Section 4.3 I outline some of the reasons that make the first option unviable. I then make my case for the second option by analyzing how different theories of explanation provide the necessary conceptual framework for computer simulations. Let me finish this section by briefly addressing a few technical aspects of scientific explanation.

Scientific explanation is concerned with questions that are relevant for scientific purposes.30 As tautological as this might sound to many, it introduces the first important delimitation of the notion of explanation. For instance, scientific explanation is interested in answering why puerperal fever is contracted or why water boils at 100°C. Non-scientific explanations include why the ink was spilled all over the floor31 or why the dog is outside.32 Similarly, the researcher working with computer simulations wants to explain why a satellite under tidal stress forms the spikes shown in Figure 3.3 rather than, say, why the simulation raised an error.

The twentieth century’s most influential model of scientific explanation is the covering law model as elaborated by Carl Hempel and Paul Oppenheim.33 Several other models followed, leading to a number of alternative theories: the statistical relevance model, the ontic approach, the pragmatic approach, and the unificationist approach.34 We must also include on this list current efforts on ‘model explanation’ accounts, such as (Bokulich, 2011) and (Craver, 2006), and on mathematical explanation of physical phenomena, such as (Baker, 2009) and (Pincock, 2010).

The logic of scientific explanation typically divides theories of explanation into two classes. The first belongs to explanatory externalism,35 which “assert[s] that every explanation ‘tracks’ an objective relation between the events described by the explanans and the explanandum” (Kim, 1994, 273). Causal theories of explanation are forms of this view (e.g., (Salmon, 1984)). The second class belongs to explanatory internalism, which “see[s] explanation as something that can be accounted for in a purely epistemic way, ignoring any relation in the world that may ground the relation between explanans and explanandum” (Kim, 1994, 273). For instance, Hempel’s covering law model and the unificationist views are theories of this latter class (e.g., (Friedman, 1974) and (Kitcher, 1981, 1989)).

This distinction is useful for our purposes because it narrows down the class of theories of explanation suitable for computer simulations. In Section 3.2.1 I argued that the results of a computer simulation do not depend on any external causal relation but instead are the byproduct of abstract calculus (i.e., reckoning the simulation model). Given this fact about computers, theories of explanation classified as explanatory internalist are more suitable for explaining simulated phenomena than explanatory externalist accounts. The reason for excluding explanatory externalist accounts is very simple: we cannot trace back the causal history belonging to the physical states of the computer that produced the results of the simulation.

With this firmly in mind, the search for a suitable theory of explanation is reduced to four internalist accounts provided by the philosophical literature. These are: the Hempelian Deductive-Nomological Model, mathematical explanation of physical phenomena, ‘model explanation’ accounts, and the unificationist account.

Equally important for organizing our analysis are the two questions that Kim poses for any theory of explanation, namely:

The Metaphysical Question: When G is an explanans for E, in virtue of what relation between g and e, the events represented by G and E respectively, is G an explanans for E? What is the objective relation connecting events, g and e, that grounds the explanatory relation between their descriptions, G and E? (Kim, 1994, 56)

The Epistemological Question: What is it that we know (that is, what exactly is our epistemic gain) when we have an explanation of p? (Kim, 1994, 54)

The metaphysical question is interested in making visible the relation between explanans and explanandum that renders the latter explained. The epistemological question, instead, asks about the kind of epistemic insight that one obtains by carrying out an explanation. In this vein, it is intimately related to the notion of understanding as discussed above.36

The next sections are organized as follows. I first argue against the possibility of computer simulations introducing a new account of scientific explanation. Then, I discuss the first three internalist accounts of explanation listed above: the Hempelian Deductive-Nomological model, mathematical explanation of physical phenomena, and ‘model explanation’ accounts. I use this discussion to show that none of these theories can account for computer simulations as explanatory devices. These discussions are organized around Kim’s metaphysical and epistemological questions just presented. The fourth internalist account, the unificationist theory of explanation, will be analyzed in full detail in the next chapter.