
3.2 On the notion of simulation

3.2.1 The analog and the digital

The distinction between analog and digital is the first step in clarifying the notion of simulation. There are different views on how this distinction should be drawn.

Let me begin with Nelson Goodman’s notion of representation of numbers.6 Goodman considers analog representation of numbers as dense. That is, for any two marks that are not copies, no matter how nearly indistinguishable they are, there could be a mark intermediate between them which is a copy of neither.7 A digital representation, on the other hand, is ‘differentiated’ in the sense that, given a number-representing mark (for instance, an inscription, a vocal utterance, a pointer position, an electrical pulse), it is theoretically possible to determine exactly which other marks are copies of that mark, and to determine exactly which numbers that mark and its copies represent.8 The general objection is that Goodman’s representational distinction does not coincide with the analog-digital distinction made in ordinary technological language. In other words, Goodman’s distinction between analog and digital fails to account for the notion of analog or digital simulation.
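
Put schematically, and only as a rough formalization of the passage above (the copy relation $C$ and the betweenness relation $\mathrm{Btw}$ are my own shorthand, not Goodman’s notation), a representational scheme is dense when

\[ \forall x\,\forall y\,\big(\neg C(x,y) \rightarrow \exists z\,(\mathrm{Btw}(z,x,y) \wedge \neg C(z,x) \wedge \neg C(z,y))\big), \]

that is, between any two marks that are not copies of one another there could always be a third mark that is a copy of neither; it is differentiated when, for any mark $x$, it is theoretically possible to decide which marks are copies of $x$ and exactly which number $x$ represents.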

David Lewis, the main proponent of this objection, believes that “what distinguishes digital representation [from analog representation], properly so-called, is not merely the use of differentiated unidigital magnitudes; it is the use of the many combinations of values of a few few-valued unidigital magnitudes” (Lewis, 1971, 326). Thus understood, analog representation is redefined as the representation of numbers by physical primitive magnitudes, where a primitive magnitude is defined as any physical magnitude expressed by a primitive term in the language of physics.9 On the other hand, a digital representation is defined as the representation of numbers by differentiated multidigital magnitudes.10 The problem with Lewis’ definition turns out to be the same as with Goodman’s, namely, that the representational viewpoint fails to explain the distinction between analog and digital computations.

It is Zenon Pylyshyn who shifts the focus from types of representations to types of processes. The motivation is that the notion of a process is better suited to drawing the distinction between analog and digital than the mere notion of representation is.

Pylyshyn objects that Lewis’ criterion allows magnitudes to be represented in an analog manner without the process itself qualifying as an analog process. He proposes the following example to illustrate his point:

Consider a digital computer that (perhaps by using a digital-to-analogue converter to convert each newly computed number to a voltage) represents all its intermediate results in the form of voltages and displays them on a voltmeter. Although this computer represents values, or numbers, analogically, clearly it operates digitally (Pylyshyn, 1984, 202).

The example shows that Lewis’ criterion counts the modern computer as an analog process, a clearly misleading conceptualization. The basic problem is, again, treating the distinction between analog and digital as a matter of types of representations. Pylyshyn’s shift turns out to be an essential move, not only for a more understandable distinction between analog and digital, but also for a more comprehensive definition of a computer process. Allow me now to briefly discuss Pylyshyn’s distinction between analog and digital processes.

The first thing to notice is that we can reconstruct Pylyshyn’s notion of analog process using the notion of laboratory experiment discussed earlier. According to the author, “for [a] process, as opposed to the representation alone, to qualify as analogue, the value of the property doing the representing must play the right causal role in the functioning of the system” (Pylyshyn, 1984, 202). This notion of analog process is at the heart of the ‘new experimentalism,’ as I discussed in Section 1.3: in laboratory experimentation a single independent variable is manipulated in order to investigate the possible causal relationship with the target system. Experimentation, then, is about causal processes manipulating the empirical world, intervening on it, ‘twisting the lion’s tail.’11 With this out of the way, allow me now to discuss Pylyshyn’s notion of computational process (or digital process, as referred to previously).

A computational process is conceptualized as involving two levels of description: a symbolic level that refers to the algorithm and data structures, and a description of the physical manipulation process (i.e., the physical states of the machine). Pylyshyn carefully distinguishes the symbolic level, which involves the abstract and formal aspects of the computational process, from the physical manipulation process, which includes the physical states into which the computer enters when running the algorithm. In plain words, the first part of the computational process is entirely symbolic, whereas the second is physical and mechanical.12 This distinction is extremely useful for conceptualizing computer simulations, for it accounts for the lack of causal relationships acting in the simulation (i.e., a computer process cannot be described in terms of causal relationships), and it also depicts the existing internal mappings between the symbolic level and the physical manipulation process (i.e., the computer process is an internal affair between the representation - by means of algorithms and data structures - and the physical states of the machine). Unfortunately, Pylyshyn does not delve into these notions, nor into the way these two levels interact. Russell Trenholme, however, takes a view similar to Pylyshyn’s.

Trenholme distinguishes between analog simulations, characterized by parallel causal structures isomorphic to the phenomenon simulated,13 and symbolic simulations, that is, symbolic processes that adequately describe some aspect of the world.14 An analog simulation, then, is “defined as a single mapping from causal relations among elements of the simulation to causal relations among elements of the simulated phenomenon” (Trenholme, 1994, 119). According to this conception, analog simulations provide causal information about the represented aspects of the physical processes being simulated. As Trenholme puts it, “[the] internal processes possess a causal structure isomorphic to that of the phenomena simulated, and their role as simulators may be described without bringing in intentional concepts” (Trenholme, 1994, 118). This lack of intentional concepts means that an analog simulation does not require an epistemic agent conceptualizing the fundamental structures of the phenomena, as a symbolic simulation does.15

The notion of symbolic simulation, on the other hand, is a bit more complex, for it requires a further constituent, namely, the notion of symbolic process. A symbolic process is defined as a mapping from syntactic relations among symbols of the simulated model onto causal relations of elements of the hardware.16 A symbolic simulation, therefore, is defined as “a two-stage affair: first, the mapping of inference structure of the theory onto hardware states which defines symbolic [process]; second, the mapping of inference structure of the theory onto extra computational phenomena” (Trenholme, 1994, 119).17 In simpler words, in a symbolic simulation there are two acting mappings: one at the level of the symbolic process, which maps the simulation model onto the physical states of the computer, and another that maps the simulation model onto the representation of an exogenous computational phenomenon. The central point here is that the results of computing a simulation model are dissociated from the target system insofar as the former depend only on the theory or the model implemented.

This conclusion proves to be very important for my definition of computer simulation. In particular, Trenholme’s studies on symbolic simulation make possible the idea of a computer simulation as a ‘world of its own’; that is, as a system whose results are directly related to the simulation model and the computational process,18 and only indirectly to the target system. This is simply a metaphor, comparable to the mathematician who manipulates mathematical entities regardless of their ontological reality in the empirical world. However, it should be clear that I am not suggesting that computer simulations are unrelated systems, for many simulations are of an empirical system, and it is therefore a desirable feature to be able to relate the results of the simulation to the target system. Instead, the metaphor aims to highlight the fact that, in the context of computer simulations, total independence from the represented target system can be imposed by means of assumptions. Of course, it is also possible to implement some parts of the simulation as having genuine realistic content, but this need not be the case. What ultimately determines these proportions is our epistemic commitment to the target system and the way we represent it in the simulation model.19

With these ideas firmly in mind, let me now turn to discuss notions of computer simulation in current literature.20

In 1990, Humphreys published his first work on computer simulations. There, he maintains that scientific progress needs, and is driven by, tractable mathematics.21 By mathematical tractability, Humphreys refers to pen-and-paper mathematics, that is, the kind of calculation that an average person is able to carry out without the aid of any computing device.22 In simpler words, mathematical tractability here requires a scientific model ‘to be analytically solvable.’23 In this context, computer simulations become central contributors to the (new) history of scientific progress, for they “turn analytically intractable problems into ones that are computationally tractable” (Humphreys, 1991, 501). Computer simulations, then, accomplish what analytic methods could not, that is, they find (approximate) solutions to equations by means of reliable (and fast) calculation. It is in this context that Humphreys offers the following working definition:

Working Definition: A computer simulation is any computer-implemented method for exploring the properties of mathematical models where analytic methods are unavailable (Humphreys, 1991, 501).

There are two ideas in this working definition worth underlining. The first one has already been discussed, namely, that computer simulations provide solutions to mathematical models where analytic methods are unsuccessful. A follow-up comment is that Humphreys is careful to make clear that his working definition should not be identified with numerical methods: whereas both computer simulations and numerical methods are interested in finding approximate solutions to equations, only the latter is related to numerical analysis.24

The second idea stemming from the working definition is the ‘exploratory’ capacity of computer simulations for finding the set of solutions of the mathematical model. This is certainly a major feature of computer simulations, as well as a source of epistemic and methodological discussion. Computer simulations are powerful not only for computing intractable mathematics, but also for exploring the mathematical limits of the models they implement. Unfortunately, and to the best of my knowledge, there is little analysis of this topic.25

Despite having presented only a working definition, Humphreys received vigorous objections that virtually forced him to change his original viewpoint. One of the main objections came from Stephan Hartmann, who rightly pointed out that Humphreys’ definition missed the dynamic nature of computer simulations. Hartmann, instead, offered his own definition:

Simulations are closely related to dynamic models. More concretely, a simulation results when the equations of the underlying dynamic model are solved. This model is designed to imitate the time-evolution of a real system. To put it another way, a simulation imitates one process by another process. In this definition, the term “process” refers solely to some object or system whose state changes in time. If the simulation is run on a computer, it is called a computer simulation (Hartmann, 1996, 83).

Philosophers have warmly welcomed Hartmann’s definition. Recently, Wendy Parker has made explicit reference to it: “I characterize a simulation as a time-ordered sequence of states that serves as a representation of some other time-ordered sequence of states” (Parker, 2009, 486). Francesco Guala also follows Hartmann in distinguishing between static and dynamic models, the time-evolution of a system, and the use of simulations for mathematically solving the implemented model.26

In spite of this broad acceptance, Hartmann’s definition presents a few issues of its own. The overall assumption is that a computer simulation is the result of the direct implementation of a dynamic model on the digital computer, as follows from the first, second, and last sentences of the previous quotation. To Hartmann’s mind, therefore, there is no conceptual difference between solving a dynamic model and running a computer simulation, for the latter is simply the implementation of the former on the digital computer. However, the diversity of methods and processes involved in the implementation of a dynamic model on the digital computer exceeds any interpretation of ‘direct implementation.’ There is general agreement among philosophers regarding the importance of analyzing the methodology of computer simulations for their conceptualization.27 A simple example will illustrate this point: consider the implementation of the Lotka-Volterra predator-prey model as a computer simulation. Since this mathematical model consists of a set of continuous equations that represent the dynamics of the predator and prey populations, it must first be discretized via the many formal and non-formal techniques discussed in Chapter 2 in order to be implemented as a computer simulation. But this is not the only modification that a scientific model typically undergoes. Considerations of the computability of the model in terms of time and computational power (e.g., memory available, accuracy of results, etc.) are also a major source of changes to the original scientific model. For instance, a change from the Cartesian to the polar coordinate system already suggests that there is no ‘direct implementation’ of the kind Hartmann indicates. In addition, a simulation is defined in terms of a dynamic model which, when implemented on the digital computer, is conceived as a computer simulation. To put the same idea in a slightly different way: a dynamic model (implemented on the computer) becomes a computer simulation, while a computer simulation is the dynamic model (when implemented on the computer). The notion of computer simulation, then, is simplified by defining it in terms of a dynamic model running on a special digital device.
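
To make the discretization step concrete, the following minimal sketch shows one way the continuous Lotka-Volterra equations could be turned into a stepwise update rule. The forward-Euler scheme, the parameter values, and the function name are illustrative assumptions of my own, not the specific techniques of Chapter 2 or anything prescribed by Hartmann’s account.

# Minimal sketch: forward-Euler discretization of the Lotka-Volterra
# predator-prey model (dx/dt = a*x - b*x*y, dy/dt = d*x*y - g*y).
# Scheme, parameter values, and step size are illustrative assumptions.
def lotka_volterra_euler(x0, y0, a, b, d, g, dt, steps):
    x, y = x0, y0                       # initial prey and predator populations
    trajectory = [(0.0, x, y)]
    for n in range(1, steps + 1):
        dx = (a * x - b * x * y) * dt   # discrete prey update
        dy = (d * x * y - g * y) * dt   # discrete predator update
        x, y = x + dx, y + dy
        trajectory.append((n * dt, x, y))
    return trajectory

# Illustrative run: 10 prey units, 5 predator units, step size 0.01.
for t, prey, pred in lotka_volterra_euler(10.0, 5.0, 1.1, 0.4, 0.1, 0.4, 0.01, 3):
    print(f"t={t:.2f}  prey={prey:.3f}  predators={pred:.3f}")

Even in this toy case, the choice of step size and the handling of accumulated numerical error are decisions that the continuous dynamic model does not dictate, which is precisely why ‘direct implementation’ is too strong a description.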

Overall, Hartmann believes that computer simulations play a central function in the context of discovery. In this context, simulations are used as a technique, as heuristic tools, as substitutes for an experiment, and as tools for experimentalists.28 Is it possible for computer simulations to also play a central function in the context of justification? I believe this is not only a possibility, but also a highly desirable aim for studies on the epistemology of computer simulations.

After Hartmann’s initial objections, Humphreys offered a new definition, this time based on the notion of computational template.29 Briefly, a computational template is the result of a computationally tractable theoretical template. A theoretical template, in turn, is the kind of very general mathematical description that can be found in scientific work. This includes partial differential equations, such as elliptic (e.g., Laplace’s equation), parabolic (e.g., the diffusion equation), and hyperbolic (e.g., the wave equation) equations, as well as ordinary differential equations, among others.30

An illuminating example of a theoretical template is Newton’s Second Law: “[it] describes a very general constraint on the relationship between any force, mass, and acceleration” (Humphreys, 2004, 60). These theoretical templates need to be specified in some particular respects, for instance, in the force function: it could be a gravitational force, an electrostatic force, a magnetic force, or any other variety of force. Finally, “if the resulting, more specific, equation form is computationally tractable, then we have arrived at a computational template” (Humphreys, 2004, 60-61). Arguably, this is less a definition than a characterization of the notion of computer simulation. In any case, together with Hartmann’s, it is the most widely accepted conceptualization in the current philosophical literature.
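
To fix ideas, the specification step Humphreys describes can be rendered schematically; the one-dimensional gravitational case below is my own illustration of the move, not Humphreys’ own worked example.

\[ F = m\,\frac{d^{2}x}{dt^{2}} \qquad \text{(theoretical template: Newton’s Second Law)} \]
\[ F = -\,\frac{G\,m_{1}m_{2}}{x^{2}} \qquad \text{(force function specified as gravitational)} \]
\[ m_{1}\,\frac{d^{2}x}{dt^{2}} = -\,\frac{G\,m_{1}m_{2}}{x^{2}} \qquad \text{(the resulting, more specific equation form)} \]

If this more specific equation form turns out to be computationally tractable, it counts as a computational template in Humphreys’ sense.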

Let me finish this section with an illuminating classification of the term computer simulation as elaborated by Roman Frigg and Julian Reiss. According to the authors, there are two senses in which the notion of computer simulation is defined in current literature:

In the narrow sense, ‘simulation’ refers to the use of a computer to solve an equation that we cannot solve analytically, or more generally to explore mathematical properties of equations where analytical methods fail (e.g., Humphreys 1991, p. 501; 2004, p. 49; Winsberg 1999, p. 275; 2001, p. 444).

In the broad sense, ‘simulation’ refers to the entire process of constructing, using, and justifying a model that involves analytically intractable mathematics (e.g., Winsberg 2001, p. 443; 2003, p. 105; Humphreys 1991, p. 501; 2004, p. 107). Following Humphreys (2004, p. 102-104), we call such a model a ‘computational model’. (Frigg and Reiss, 2009, 596)

Both categories are meritorious and illuminating, and together they capture the two senses in which philosophers define the notion of computer simulation. While the narrow sense focuses on the heuristic capacity of computer simulations, the broad sense emphasizes their methodological, epistemological, and pragmatic aspects.

The challenge now is to find a way to address the philosophical study of computer simulations as a whole, that is, as the combination of the narrow and the broad sense given above. Such a study would emphasize, on the one hand, the representational capacity of computer simulations and their power to enhance our cognitive abilities, while on the other it would provide an epistemic assessment of their centrality (and uniqueness) in the scientific enterprise. In order to achieve this, I must first set apart the class of computer simulations that are of no interest from those relevant for this study. Let me now begin with this task.

3.2.2 Cellular automata, agent-based simulations, and