4.2 Knowledge and understanding

4.2.1 Knowledge

Any theory of scientific explanation aims at answering the question ‘why q?’, where q is the explanandum to be explained by the explanans. The nature of the explanans and the explanandum varies depending on the theory of explanation: they might be an ‘argument,’ a ‘list or assemblage of statistically relevant factors,’ or a ‘sentence,’5 among others. The nature of the explanatory relation also varies, for it can be ‘deductive,’ ‘causal,’ or an ‘answer to a why-question.’6 Different features follow from these different natures. For instance, argument theories can be deductive as a way of giving substance to ideas of certainty (e.g., Hempel and Kitcher), and causal theories can be probabilistic, thereby conferring epistemic probability on the explanation (e.g., Salmon).

In addition, theories of scientific explanation purport to explain a particular,7 which might take several forms: it might be a ‘fact,’ a ‘phenomenon,’ a ‘concrete event,’ a ‘property,’ etc. The relatum of q (i.e., the referent of q) is therefore something out there, an “individual chunk of reality” as David-Hillel Ruben graphically calls it, for which the explanatory relation must account.8 “The basis of explanation is in metaphysics,” Ruben says, “objects or events in the world must really stand in some appropriate ‘structural’ relation before explanation is possible. Explanations work, when they do, only in virtue of underlying determinative or dependency structural relations in the world.” (Ruben, 1992, 210. Emphasis original.) For metaphysical economy, then, I will consider the relatum of q as a natural phenomenon broadly construed: from an object such as pollen, to vibration over water. The notion of phenomenon, then, includes entities, events, processes, and regularities. Examples of these theoretical assumptions abound in the philosophical literature on explanation.

Take for instance Hempel’s pioneering explanation of why the mercury column in a closed vessel grew steadily shorter during the ascent to the top of the Puy-de-Dôme (see Section 4.3.2).

Metaphysical assumptions such as these raise the question of scientific realism, on which every theory of explanation has a view. Since I take the unificationist account as the most suitable theory of explanation for computer simulations, I will also adopt its realist point of view on explanation. In particular, I subscribe to what Philip Kitcher calls modest unificationism, which adopts a naturalist view of realism (see my discussion in Section 5.2). I have little doubt that a realist theory of explanation is the most suitable for accounting for the aims and achievements of the scientific enterprise. However, I will not discuss this point any further here. The philosophical accounts of realism are not the principal problem; rather, I am interested in showing how to accommodate computer simulations into a realist theory of explanation.

Now, the central issue raised by the metaphysics of explanation in computer simulations is this: how can we know that the explanation of the results of a computer simulation also applies to an empirical phenomenon? In other words, what kind of epistemic guarantees could be provided in order to ensure that an explanation of the results of a simulation is epistemically equivalent to the explanation of the empirical phenomenon? From this question it becomes clear that the problem is that the relatum of the explanandum is no longer an empirical phenomenon, as the metaphysics of scientific explanation assumes, but rather the results of the computer simulation (i.e., data obtained by computing the simulated model). The first step towards answering the above question is to justify our belief that the results of the simulation genuinely represent the empirical phenomenon intended to be simulated.

In Section 3.2.1 I used the image of a ‘world of their own’ as a metaphor for describing simulations as systems whose results are directly related to the simulation model, but only indirectly related (via representation) to the target system. Such a metaphor depicted computer simulations as systems that might be detached from reality, stemming from the imagination of scientists. For present purposes, my account of computer simulations as explanatory devices must be restricted to those simulations that represent an empirical target system.9 In order to illustrate the problem at hand, and to give grounds for this restriction on computer simulations, let me briefly discuss cases where a computer simulation could not be explanatory:

Case a): A simulation model that does not represent an empirical target system (e.g., a simulation of the Ptolemaic movement of planets). We cannot legitimately claim an explanation of planetary movement using a false model.10

Case b): A computer simulation that is purely heuristic in nature. Although it might hold some interest as an exploratory simulation, its target system is known to be nonexistent (e.g., the Oregonator is a simulation of the Belousov-Zhabotinsky chemical reaction whose system of equations is stiff and might lead to qualitatively erroneous results; the stiffness point is sketched in the equations following this list).11 Although a genuine simulation, it cannot be expected to offer a successful explanation.12

Case c): A computer simulation that miscalculates, leading to erroneous data. Such data does not represent any possible state of the target system, regardless of the representational goodness of the simulation model. Therefore an explanation, although possible, would not yield understanding of the world.
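To make the stiffness remark in case b) concrete, the Oregonator in its standard three-variable Field-Noyes scaling can be sketched as follows (the scaling and parameter values below are the textbook ones, not drawn from this chapter):

\[
\begin{aligned}
\dot{x} &= s\,(y - xy + x - qx^{2}),\\
\dot{y} &= (fz - y - xy)/s,\\
\dot{z} &= w\,(x - z),
\end{aligned}
\qquad s \approx 77.27,\; q \approx 8.375\times10^{-6},\; w \approx 0.161,\; f = 1.
\]

Because $s$ and $q$ set timescales several orders of magnitude apart, the system is stiff: an explicit integrator run with loose tolerances can return oscillations that are qualitatively wrong, which is the failure mode case b) alludes to.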

The first two cases are legitimate computer simulations, for there is nothing preventing the scientist from carrying out hypothetical or heuristic research. The principle is that in computer simulations there are no limits to the imagination. However, neither would lead to a successful explanation and ulterior understanding of the world. Let us note that case b) has some kinship with cases where the simulation model represents the target system and the results of the simulation are not the byproduct of miscalculations, but where the initial and boundary conditions are not realistically construed (e.g., a simulation that speculates how the tides would be if the gravitational force were weaker than it actually is). This is a special case for computer simulations that I will deal with in Chapter 5. Finally, case c) is less likely to be considered a legitimate simulation, and it certainly does not yield understanding of the empirical world. However, it is a probable scenario and as such must be considered for analysis.

What is left after this list? My interest here lies in what constitutes a successful explanation, that is, in what it would be rational to take as the structure of the phenomenon to be explained.13 Successful explanations, then, demand that a computer simulation yield insight into the way the world is. Thus understood, it is important to find the conditions under which one knows that the results of the computer simulation share some sort of structure with the empirical phenomenon. What are these conditions?

Typically, epistemologists analyze the notion of knowledge in the following way: “to know something is to have justified true belief about that something.”14 According to this assertion, then, knowledge is justified true belief; that is, ‘S knows that p’ if and only if:

(i) p is true,

(ii) S believes that p,

(iii) S is justified in believing that p.
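For compactness, the tripartite analysis can be written in standard epistemic notation (a notational convenience on my part, not a formula from the literature under discussion):

\[
K_{S}\,p \;\Longleftrightarrow\; p \,\wedge\, B_{S}\,p \,\wedge\, J_{S}\,p
\]

where $K_{S}$, $B_{S}$, and $J_{S}$ abbreviate ‘S knows that,’ ‘S believes that,’ and ‘S is justified in believing that,’ respectively.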

For us, p might be interpreted as the proposition ‘the results of the computer simulation genuinely represent the empirical phenomenon,’ for this is precisely the belief we need to justify. Consider now p as a proposition for our example: “the spikes produced after simulating the orbital movement of a satellite under tidal stress genuinely represent the spikes that a real satellite under real tidal stress would produce.” Following the schema above, we know that the results of the simulation represent an empirical phenomenon because we are justified in believing that p is true.

Epistemologists are aware that taking knowledge as justified true belief leads to problems. Most prominent are the so-called ‘Gettier problems,’ which describe situations showing that the three conditions above, even when jointly met, are not sufficient for knowledge.15 In other words, the justification condition (i.e., (iii) above), which was set for preventing lucky guesses from counting as knowledge, proved to be ineffective against Gettier’s objection, which exhibits the possibility of justified true belief that falls short of knowledge. The solution is to settle for a bit less, that is, for a belief-forming process that, most of the time, produces justified beliefs. One such candidate is reliabilism as elaborated by Alvin Goldman.16

In its simplest form, reliabilism says that a belief is justified just in case it is produced by a reliable process.17 In our own terms, we know that we are justified in believing that the results of the simulation represent an empirical phenomenon because there is a reliable process (i.e., the computer simulation) that, most of the time, yields good results that represent the empirical phenomenon. The challenge is now reduced to showing the conditions under which a computer simulation is a reliable process. Let me first say something more about reliabilism.18

Goldman explains that the notion of reliability “consists in the tendency of a process to produce beliefs that are true rather than false” (Goldman, 1979, 10).

His proposal highlights the place that a belief-forming process has in the steps towards knowledge. In this vein, one knows p because there is a reliable process that produces the belief that p is the case. In other words, such a belief-forming process yields beliefs that are, most of the time, true rather than false. Consider, for instance, knowledge acquired by a ‘reasoning process,’ such as doing basic arithmetic operations. Reasoning processes are, under normal circumstances and within a limited set of operations, highly reliable. Indeed, there is nothing accidental about the truth of a belief that 2 + 2 = 4, or that the tree in front of my window was there yesterday and, unless something extraordinary happens, it will be in the same place tomorrow.19 Thus, according to the reliabilist, a belief produced by a reasoning process qualifies, most of the time, as an instance of knowledge.20

The question now turns to what it means for a process to be reliable and, specific to my interests, what this means for the analysis of computer simulations. Let us illustrate the answer to the first issue with an example from Goldman:

If a good cup of espresso is produced by a reliable espresso machine, and this machine remains at one’s disposal, then the probability that one’s next cup of espresso will be good is greater than the probability that the next cup of espresso will be good given that the first good cup was just luckily produced by an unreliable machine. If a reliable coffee machine produces good espresso for you today and remains at your disposal, it can normally produce a good espresso for you tomorrow. The reliable production of one good cup of espresso may or may not stand in the singular-causation relation to any subsequent good cup of espresso. But the reliable production of a good cup of espresso does raise or enhance the probability of a subsequent good cup of espresso. This probability enhancement is a valuable property to have. (Goldman, 2009, 28. Emphasis mine)

The probability here is interpreted objectively, that is, as the tendency of a process to produce beliefs that are true rather than false. The core idea is that if a given process is reliable in one situation, then it is very likely that, all things being equal, the same process will be reliable in a similar situation. Let it be noted that Goldman is very cautious not to demand infallibility or absolute certainty for the reliabilist account. Rather, a long-run frequency or propensity account of probability furnishes the idea of a reliable production of coffee that increases the probability of a subsequent good cup of espresso.21
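Goldman’s ‘probability enhancement’ can be rendered explicitly as follows, writing $G_{n}$ for ‘the $n$-th cup is good’ and $R$ for ‘the machine is reliable’ (the notation is mine, not Goldman’s):

\[
P(G_{n+1} \mid R,\, G_{n}) \;>\; P(G_{n+1} \mid \lnot R,\, G_{n})
\]

with $P$ read along long-run frequency or propensity lines, as in the passage above. Nothing here requires $P(G_{n+1} \mid R, G_{n}) = 1$: reliability enhances, but does not guarantee, the next good outcome.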

Although the reliabilist account has been at the center of much criticism,22 here I am more interested in its virtues. In particular, the reliabilist analysis of knowledge facilitates the claim that we are justified in believing that computer simulations are reliable processes if the following two conditions are met:

(a) The simulation model is a good representation of the empirical target system;23 and

(b) The reckoning process does not introduce relevant distortions, miscalculations, or any kind of mathematical artifact.24

Both conditions must be fulfilled in order to have a reliable computer simulation. Let me illustrate what would happen if one of the conditions above were not met. As argued before, suppose that condition (a) is not met, as is the case when using the Ptolemaic model of planetary movement. In such a case, the computer simulation does not represent any empirical planetary system, and therefore its results could not be considered as genuinely standing for an empirical phenomenon. It follows that there are no grounds for claiming a successful explanation of the movement of a real planet. Suppose now, instead, that condition (b) is not met. This means that during the calculation of the satellite’s orbit the simulation produced an artifact of some sort (most likely, a mathematical artifact). In this case, although the simulation model represents an empirical target system, the results of the simulation fail to represent the empirical phenomenon. The reason is that miscalculations directly affect and diminish the degree of representativeness of the results. It follows that one is in no position to claim a successful explanation when miscalculations are present. In simpler words, without genuine representation there cannot be successful explanation either.
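As an illustration of how condition (b) might be monitored in practice, the following minimal sketch (my own construction, not the author’s) integrates a normalized two-body orbit with two integration schemes and tracks the relative energy drift, a standard indicator of the kind of mathematical artifact just described:

```python
import numpy as np

# Minimal sketch of a condition (b) check: integrate a normalized
# two-body orbit (GM = 1) and monitor relative energy drift, a common
# indicator of numerical artifacts in the reckoning process.
def simulate(steps=20000, dt=1e-3, symplectic=True):
    r = np.array([1.0, 0.0])  # initial position
    v = np.array([0.0, 1.0])  # initial velocity (circular orbit)
    for _ in range(steps):
        a = -r / np.linalg.norm(r) ** 3  # gravitational acceleration
        if symplectic:
            # semi-implicit Euler: velocity first, then position (bounded error)
            v = v + dt * a
            r = r + dt * v
        else:
            # explicit Euler: both updates from the old state (energy drifts)
            r, v = r + dt * v, v + dt * a
    return r, v

def energy(r, v):
    # total specific energy: kinetic plus gravitational potential
    return 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(r)

e0 = energy(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
for flag in (True, False):
    r, v = simulate(symplectic=flag)
    drift = abs((energy(r, v) - e0) / e0)
    print(f"symplectic={flag}: relative energy drift = {drift:.2e}")
```

Under explicit Euler the energy drifts steadily and the simulated orbit spirals outwards, so its results would fail condition (b); the semi-implicit variant keeps the drift bounded. The check is only a proxy, of course: a bounded energy error is a necessary but not a sufficient indicator of artifact-free results.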

In Chapter 2, I described in some detail the three levels of computer software, namely, the specification, the algorithm, and the computer process. I also claimed that all three levels make use of techniques of construction, languages, and formal methods that make the relations among them trustworthy: there are well-established techniques of construction based on common languages and formal methods that relate the specification with the algorithm, and that allow the implementation of the latter on the digital computer. These relations, taken together, are precisely what make the computer simulation a reliable process. In other words, the three levels of software are intimately related to the two conditions above: the design of the specification and the algorithm fulfills condition (a), whereas the running computer process fulfills condition (b). It follows that a computer simulation is a reliable process because its constituents (i.e., the specification, the algorithm, and the computer process) and the process of constructing and running a simulation are based, individually and jointly, on trustworthy methods. Finally, from establishing the reliability of a computer simulation it follows that we are justified in believing (i.e., we know) that the results of the simulation genuinely represent the empirical phenomenon.

We can now assimilate Goldman’s example of the espresso machine into our case: if a computer simulation produces good results, then the probability that the next result is good is greater than the probability that the next result is good given that the first results were just luckily produced by an unreliable process. By entrenching computer simulations as reliable processes, we are grounding representational compatibility between the results obtained and the empirical phenomenon. It is in this way that, to my mind, computer simulations establish the necessary metaphysical foundations of scientific explanation. That is, explanation by computer simulations is possible because representational dependence between results and the empirical phenomenon has been established. It follows that explaining the former is epistemically equivalent to explaining the latter.

Before I proceed to addressing the notion of understanding and how it is related to scientific explanation, allow me to introduce a new term. For the remainder of this work, I shall refer to results of a computer simulation genuinely representing an empirical phenomenon as the simulated phenomenon. In this vein, to say that we want to explain the simulated spikes in Figure 3.3 is to say that we want to explain similar spikes found in the world. Let it be noted that such spikes might or might not have been observed or measured empirically. This means that a genuine representation of the empirical target system presupposes a genuine representation of each empirical phenomenon belonging to that target system. For instance, to genuinely represent planetary movement by a simulation that implements a Newtonian model also means to genuinely represent each phenomenon that falls within that model.

Let me now turn to the notion of understanding and how we can relate it to the concept of explanation, making the latter epistemic.