
A similar argument applies to the objection that computer simulations are not experiments because they do not warrant observational statements. Such an objection fails to distinguish carefully between ‘direct’ observations and ‘indirect’ or ‘oblique’ observations. As I argued before, computer simulations are able to simulate phenomena that are otherwise unobservable by traditional means (e.g., most of the research on quantum mechanics today depends on computer simulations). Such observational statements are warranted by the correct implementation of scientific models and the correct computation of those models. The lesson here is also a sober one: although the notion of a crucial experiment does not apply directly to computer simulations as it does to scientific experimentation, an oblique approach still seems possible.

This section has addressed a few core problems in the epistemology of experiment and how they can be understood in the context of computer simulations. The general lesson is that there is a host of specific issues in the epistemology of experimentation that might not be related to computer simulations. Although it is possible to spot some symmetries between computer simulations and experiments, the general conviction is that, as philosophers, we must address the epistemology of computer simulations at face value. The next section addresses the question of what is novel and philosophically relevant about computer simulations.

It remains an open question, however, whether Frigg and Reiss have correctly interpreted the authors they criticize.

Now, Frigg and Reiss’ aim goes well beyond setting limits to ungrounded philosophical claims; they assert that studies on computer simulations do not raise any new philosophical questions. Indeed, to their mind philosophical studies on computer simulations can be assimilated within existing philosophies, such as a philosophy of models or a philosophy of experimentation. It is important to indicate that they are not objecting to the novelty of computer simulations in scientific practice, nor to their importance in the advancement of science, but rather claiming that simulations raise few, if any, new philosophical questions.62 As the authors like to put it, “we see this literature as contributing to existing debates about, among others, scientific modelling, idealisation or external validity, rather than as exploring completely new and uncharted territory” (Frigg and Reiss, 2009, 595).

At this point it is important to ask whether there is an alternative to such a divided scenario. One way of approaching it is by rejecting the claim that computer simulations represent a radically new way of doing philosophy of science, while accepting that computer simulations are philosophically interesting methods that do raise new questions for the philosophy of science. Such is the viewpoint that this work takes. But before addressing my own conceptualization of computer simulations in more detail,63 allow me first to discuss Frigg and Reiss’ work and its impact on the philosophical community.

The article effectively divided opinions, forcing philosophers to define their position on the matter. Paul Humphreys was the only philosopher who directly engaged in the discussion and gave an answer to the authors. According to Humphreys, Frigg and Reiss’ assertions obscure the challenges that computer simulations pose for the philosophy of science. The core of Humphreys’ reply is the recognition that the question about the novelty of computer simulations has two sides: one which focuses on how traditional philosophy illuminates the study of computer simulations (e.g., through a philosophy of models, or a philosophy of experiment), and the other which focuses exclusively on aspects of computer simulations that represent, in and by themselves, genuine challenges for the philosophy of science. It is this second way of looking at the problem that grants philosophical importance to computer simulations.

One of Humphreys’ main claims is that computer simulations spell out inferential relations and solve otherwise intractable models, amplifying our cognitive abilities, whether logico-inferential or mathematical. Humphreys called such an amplification the anthropocentric predicament, as a way to illustrate current trends in the science of computer simulations moving humans away from the center of the epistemic activity.64 According to him, a brief overview of the philosophy of science shows that humans have always been at the center of knowledge. This conclusion includes the period of logical positivism and empiricism, when psychologism was removed from the philosophical arena. As Humphreys points out, “the overwhelming majority of the literature in the logical empiricist tradition took the human senses as the ultimate authority” (Humphreys, 2009, 616). A similar conclusion follows from the analysis of alternatives to empiricism, such as Quine’s and Kuhn’s epistemologies. The central point, according to Humphreys, is that there is an empiricist component in the philosophy of science that has prevented a complete separation between humans and their capacity to evaluate scientific activity. The anthropocentric predicament, then, comes to highlight precisely this separation: it is the claim that humans have lost their privileged cognitive position in the practice of science to computer simulations.65 This claim finds its final support in the view that scientific practice only progresses because new methods are available for handling large amounts of information. Handling information, according to Humphreys, is the key to the progress of science today, which is only attainable if humans are moved from the center of the epistemic activity.66

The anthropocentric predicament, philosophically relevant as it is in itself, also brings about four further themes unanalyzed by the philosophy of science; namely, ‘epistemic opacity,’ ‘computational representation and applications,’ ‘the temporal dynamics of simulations,’ and ‘the in practice/in principle’ distinction.67 All four are novel philosophical issues raised by computer simulations; all four have no answer in traditional accounts of models or experimentation; and all four represent a challenge for the philosophy of science.

As I have mentioned before, I do believe that computer simulations raise novel questions for the philosophy of science. One simple way of supporting this claim is by showing what distinguishes them from other units of analysis in the philosophy of science. Humphreys began this list of peculiarities by pointing out the five issues above. Allow me to briefly discuss four more issues that have escaped Humphreys’ list.

1.4.1 Recasting models

The first overlooked feature of computer simulations is their capacity to recast a host of scientific models: phenomenological models, fictional models, data models, and theoretical models all need to be readjusted in order to be implemented on the computer. This process of ‘readjusting’ or ‘recasting’ a host of models is part of the methodology of computer simulations, and has no counterpart in the philosophy of models.

Indeed, a central aspect of any philosophy of modeling is to establish categories for the classification of scientific models. Such categories are at the heart of the differences among scientific models: phenomenological models are not theoretical models, which in turn differ from data models, and so forth (see my classification in Section 1.2.2.2). A computer simulation, however, cannot directly implement a scientific model; rather, the latter must be transformed into a ‘simulation model’ (see Chapter 2 for a more detailed discussion of how this is done). Such transformation procedures facilitate the implementation of a host of diverse scientific models, as disparate as theoretical models and phenomenological models. In this way, a simulation model has the capacity to recast and implement a theoretical model in the same way as a fictional model. Let me illustrate this claim with a simple example: a phenomenological model, such as Tycho’s model of the motion of the planets, can be used for a computer simulation in the same sense as a Newtonian model.

In this sense, a simulation of Tycho’s model would produce new data, predict the next position of the planetary system, and support epistemic activities similar to those of Newton’s model. From a purely methodological point of view, the process of adjusting Tycho’s model into a simulation model is no different from what the researcher would do with a Newtonian model. Note that I am not saying that implementing Tycho’s model provides the same knowledge as Newton’s model. Instead, I am calling attention to the fact that simulation models put a host of scientific models on an equal footing, as they act as a ‘super class’ of models. The criteria typically used for classifying scientific models, then, do not apply to a simulation model.68
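To make the idea of a ‘super class’ of models more tangible, consider the following minimal sketch in Python. It is purely illustrative and not drawn from the text: the class names, the circular-orbit simplification, and the numerical values are my own assumptions. The point is only that, once recast, a phenomenological model and a theoretical model expose the very same interface to the simulation.

```python
import math
from abc import ABC, abstractmethod

class SimulationModel(ABC):
    """The 'super class' of models: whatever its scientific pedigree,
    a recast model only needs to advance a state in time."""
    @abstractmethod
    def position(self, t: float) -> tuple:
        """Return the (x, y) position of a planet at time t (days)."""

class TychoLikeModel(SimulationModel):
    """Phenomenological: a kinematic description fitted to observed
    positions, with no underlying dynamics (a hypothetical stand-in
    for Tycho's model)."""
    def __init__(self, radius: float, period: float):
        self.radius, self.period = radius, period
    def position(self, t):
        angle = 2 * math.pi * t / self.period
        return (self.radius * math.cos(angle), self.radius * math.sin(angle))

class NewtonianModel(SimulationModel):
    """Theoretical: the period is derived from a gravitational
    parameter mu via Kepler's third law (circular-orbit case)."""
    def __init__(self, radius: float, mu: float):
        self.radius = radius
        self.period = 2 * math.pi * math.sqrt(radius**3 / mu)
    def position(self, t):
        angle = 2 * math.pi * t / self.period
        return (self.radius * math.cos(angle), self.radius * math.sin(angle))

# From the simulation's point of view the two models are interchangeable:
for model in (TychoLikeModel(1.0, 365.25), NewtonianModel(1.0, 2.96e-4)):
    print(type(model).__name__, model.position(100.0))
```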

1.4.2 Changes in ontology

Recasting scientific models is simply the tip of the iceberg. There are changes of ontology (i.e., continuous models implemented as discrete models69) that take place when recasting a scientific model into a simulation model. Despite the fact that there are scientific models that are fundamentally discrete,70 in the natural sciences it is more common to find continuous models representing the diversity of empirical systems, such as ordinary differential equations or partial differential equations. Since computers are discrete systems, mathematical techniques must be in place to make the transformation from continuous equations to discrete ones possible.

A good discretization method, therefore, has to balance the loss of information intrinsic to these kinds of transformations against the reliability of the results.71 Such a balance is not always easy to obtain, and is sometimes achieved only ad hoc.
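A worked toy case may help to fix ideas. The following sketch, entirely of my own making, recasts the continuous model dy/dt = −y (with exact solution y(t) = e^−t) into a discrete forward-Euler update rule; the step size h makes the trade-off between information loss and computational cost explicit.

```python
import math

def euler_discretization(y0: float, h: float, n_steps: int) -> float:
    """Discrete counterpart of dy/dt = -y: the update rule
    y_{n+1} = y_n + h * (-y_n), iterated n_steps times."""
    y = y0
    for _ in range(n_steps):
        y = y + h * (-y)
    return y

exact = math.exp(-1.0)  # the continuous solution at t = 1
for h, n in ((0.5, 2), (0.1, 10), (0.01, 100)):
    approx = euler_discretization(1.0, h, n)
    print(f"h = {h:<5} discrete y(1) = {approx:.6f}  error = {abs(approx - exact):.6f}")
```

Shrinking h recovers more of the continuous model’s information, but at greater computational cost; the ‘balance’ mentioned above is precisely the choice of where to stop.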

It is worth noting that, technical limitations aside, virtually every scientific model can be implemented on a computer.72

1.4.3 Representational languages

Another interesting problem that computer simulations pose for the philosopher of science (and for the philosopher of language) is the host of representational languages (i.e., programming languages73) available. Raymond Turner makes an interesting claim about programming languages and their relation to representational theories. He says: “via its data types, every programming language outlines a theory of representation that could be made flesh via an axiomatic account of its underlying types and operators” (Turner, 2007a, 283). By contrast, in the philosophy of models there is little room for the study of language, for it is mostly reduced to mathematics or physics.
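Turner’s point can be glossed with a small, hedged illustration (my own, not Turner’s): the data types a language offers already commit the modeler to a theory of representation, each with its own axioms for equality and arithmetic. Here the same quantity, one third, is represented under three type regimes available in Python’s standard library.

```python
from fractions import Fraction
from decimal import Decimal

as_float    = 1 / 3                    # reals approximated in binary floating point
as_fraction = Fraction(1, 3)           # exact rational arithmetic
as_decimal  = Decimal(1) / Decimal(3)  # base-10 arithmetic with fixed precision

# The 'same' operation is licensed differently under each representation:
print(as_float * 3 == 1.0)   # True, but only through a fortunate rounding
print(as_fraction * 3 == 1)  # True by the axioms of rational arithmetic
print(as_decimal * 3 == 1)   # False: 0.9999... is not 1 in this arithmetic
```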

1.4.4 Abstraction in computer simulations

The introduction of new modes of abstraction in computer simulations is also of interest. Besides the traditional idealizations, fictionalizations, and the like found in the philosophy of models (see Section 1.2.2.1), computer simulations introduce a hierarchical notion of abstraction, which includes information hiding and information neglecting. Simply put, the different levels at which computer software is programmed require that information sometimes be hidden from the user. As Colburn and Shute point out:

[there are] details that are essential in a lower-level processing context but inessential in a software design and programming context. The reason they can be essential in one context and inessential in another is because of abstraction and information hiding tools that have evolved over the history of software development (Colburn and Shute, 2007, 176).

The problem of information hiding and information neglecting is typically overcome by the selection of an appropriate computer specification language (see Chapter 2).74
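The following minimal sketch, with hypothetical names of my own choosing, illustrates the hierarchical abstraction Colburn and Shute describe: byte-level details (framing, encoding) are essential at the lower level but hidden, and thus safely neglected, at the design level above it.

```python
class _ByteLevelStore:
    """Lower level: details essential here (encoding, length prefixes)."""
    def __init__(self):
        self._buffer = bytearray()
    def write_record(self, payload: bytes):
        # Each record is framed by a 4-byte big-endian length prefix.
        self._buffer += len(payload).to_bytes(4, "big") + payload
    def read_records(self):
        i = 0
        while i < len(self._buffer):
            n = int.from_bytes(self._buffer[i:i+4], "big")
            yield bytes(self._buffer[i+4:i+4+n])
            i += 4 + n

class DocumentStore:
    """Higher level: the same facts, with the byte-level detail hidden."""
    def __init__(self):
        self._low = _ByteLevelStore()
    def store(self, text: str):
        self._low.write_record(text.encode("utf-8"))
    def load(self):
        return [b.decode("utf-8") for b in self._low.read_records()]

docs = DocumentStore()
docs.store("simulation run 1")
docs.store("simulation run 2")
print(docs.load())  # the user never sees length prefixes or encodings
```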

1.4.5 Reliability of results

On the justificatory side, verification and validation are methodologies for establishing the reliability of the simulation model. William Oberkampf and Christopher Roy explain that in order to measure the correctness of a model, one must have accurate benchmarks or reference values against which to compare the model. Benchmarking is a technique for measuring the performance of a system, frequently by running a number of standard tests and trials against it. In verification, the primary benchmarks are highly accurate solutions to specific, although limited, mathematical models. In validation, on the other hand, the benchmarks are high-quality experimental measurements of the system response quantities of interest.75 In verification, then, the relationship of the simulation to the empirical world is not necessarily at stake; whereas in validation, the relationship between computation and the empirical world, that is, the experimental data obtained by measuring and observing methods, is the issue.76 These methods are novel insofar as they require specific elements of computer science (for a more detailed discussion, see Section 3.3.2).
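As a schematic illustration of the distinction (my own toy example, not Oberkampf and Roy’s), consider a crude simulation of free fall. Verification compares its output against a highly accurate mathematical benchmark, the analytic solution; validation compares it against a (here invented) experimental measurement with its uncertainty.

```python
def simulate_fall(t_end: float, n_steps: int, g: float = 9.81) -> float:
    """Discrete simulation of the distance fallen from rest (Euler scheme)."""
    dt, v, s = t_end / n_steps, 0.0, 0.0
    for _ in range(n_steps):
        v += g * dt
        s += v * dt
    return s

# Verification: the benchmark is the exact solution s = g * t^2 / 2.
analytic = 0.5 * 9.81 * 2.0**2
simulated = simulate_fall(2.0, 10_000)
print(f"verification error vs analytic benchmark: {abs(simulated - analytic):.4f} m")

# Validation: the benchmark is a (hypothetical) measurement with uncertainty.
measured, uncertainty = 19.57, 0.15  # illustrative values, not real data
print(f"validated: {abs(simulated - measured) <= uncertainty}")
```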

The motivation here was only to supplement Humphreys’ list on the philosophical importance of the study of computer simulations, not to engage the topic at length. Indeed, this work is not about analyzing these motivations in any more depth, but rather about studying what it is that makes computer simulations epistemically powerful for scientific practice.

There is great interest in understanding computer simulations, fostered by their central role in scientific practice. Philosophers have addressed this interest via studies in the traditional philosophy of models and scientific explanation. I have argued here that, although not misleading, these approaches fail to capture the rich universe that computer simulations have to offer. Motivated by these results, I turn to my central aims, which are to address computer simulations at face value and to show their importance as epistemic devices for the scientific enterprise.