is a technique used for measuring the performance of a system, frequently by running a number of standard tests and trials against it. In verification, the primary benchmarks are highly accurate solutions to specific, although limited, mathematical models. On the other hand, in validation, the benchmarks are high-quality experimental measurements of systems responsible for the quantities of interest.75 In verification, then, the relationship of the simulation to the empirical world is not necessarily at stake; whereas in validation, the relationship between computation and the empirical world, that is, the experimental data obtained by measuring and observing methods, is the issue.76 These methods are novel insofar as they require specific elements of computer science (for a more detailed discussion, see Section 3.3.2).
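The contrast can be made concrete with a minimal sketch of a verification benchmark (my own illustration; the decay model, step size, and error tolerance are assumptions chosen only for exposition): a numerical integrator is run on a model whose exact solution is known, and the computed result is checked against that highly accurate solution.

```python
import math

def simulate_decay(y0, k, dt, steps):
    """Explicit Euler integration of dy/dt = -k*y,
    a model whose exact solution is known analytically."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-k * y)
    return y

# Verification: compare the computed result against the exact
# (benchmark) solution y(t) = y0 * exp(-k*t).
y0, k, dt, steps = 1.0, 0.5, 0.001, 2000
t = dt * steps
numeric = simulate_decay(y0, k, dt, steps)
exact = y0 * math.exp(-k * t)
error = abs(numeric - exact)
print(error < 1e-3)  # the discretization error is small but nonzero
```

A validation test, by contrast, would replace the exact solution with high-quality experimental measurements of the target system.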

The motivation here was only to supplement Humphreys’ list on the philosophical importance of the study of computer simulations, not to engage with the topic.

Indeed, this work is not about analyzing these motivations in any greater depth, but rather about studying what it is that makes computer simulations epistemically powerful for scientific practice.

There is great interest in understanding computer simulations, fostered by their central role in scientific practice. Philosophers have addressed this interest through studies in the traditional philosophy of models and scientific explanation. I have argued here that, although not misleading, these approaches fail to capture the rich universe that computer simulations have to offer. Motivated by these results, I turn to my central aims, which are to address computer simulations at face value and to show their importance as epistemic devices for the scientific enterprise.

first distinction from computer simulations. The discussion on the ‘epistemology of experiment’ is also useful for delimiting the scope of computer simulations as devices used for accessing some aspects of the world, as well as for discouraging certain philosophical practices (I discuss the ontology of computer simulations in Chapter 2 and its methodology in Chapter 3). Finally, the discussion in Section 1.4 shows, unequivocally to my mind, the importance and novelty of computer simulations for philosophical inquiry.

By means of these analyses, I organized the terminology that will be used in the following chapters as I encourage a change of angle in the analysis of the epistemic power of computer simulations. The following chapter will begin with these changes by analyzing the notion of ‘computer simulation.’ This time, however, I will make use of a more appropriate philosophy, one that provides the right conceptual tools:

the philosophy of computer science.

Notes

1Cf. (Rohrlich, 1990, 507)

2For further reading, see (Coffa, 1991).

3A shortlist of eminent logical empiricists includes (Ayer, 1946; Carnap, 1937, 1932; Hempel, 1935; Nagel, 1945).

4For a thorough study on these issues, see (Suppe, 1977, 1989).

5Cf. (Suppe, 1977, 45).

6Cf. (Harré, 1981, 175).

7For further reading, see (Bridgman, 1959, 1927; Hempel and Oppenheim, 1945).

8(Quine, 1951).

9(Popper, 1935).

10On scientific progress, see for instance (Kuhn, 1962). For a classic work of Feyerabend, see (Feyerabend, 1975).

11See (Lakatos et al., 1980), (Leplin, 1982), and (Hempel, 1965) respectively.

12Cf. (Humphreys, 2004, 96-97).

13Of special interest are (Suppe, 1977, 1989; van Fraassen, 1980).

14See (Cartwright, 1983; Morgan and Morrison, 1999).

15See (Cartwright et al., 1995).

16See (Morgan and Morrison, 1999), (Suárez, 2003), (Hartmann, 1999), (Giere, 2004), and (Keller, 2000) among others.

17Cf. (Müller and Müller, 2003, 1-31).

18This fact is especially true when there are scale differences between the model and the empirical system.

19See (Frigg, 2009a) and (Weisberg, 2013), respectively.

20Cf. (Hartmann, 1999, 344).

21See (Brown, 1991), (Tymoczko, 1979; Swart, 1980), and (Girle, 2003) respectively.

22For representational isomorphism, see (van Fraassen, 1980; Suppes, 2002). For partial isomorphism, see (Mundy, 1986; Díez, 1998). For structural representation, see (Swoyer, 1991). For homomorphism, see (Lloyd, 1988; Krantz et al., 1971). For similarity, see (Giere, 1988; Aronson et al., 1993). For the DDI account, see (Hughes, 1997). For a more detailed discussion on each account, see the Lexicon.

23Cf. (Frigg, 2009b, 51).

24Cf. (Weisberg, 2013, 27).

25In all fairness, the author identifies his account only provisionally with isomorphism (cf. Swoyer, 1991, 457). For the sake of the argument, however, such a provisional identification is unimportant.

26See, for instance, (Jones and Cartwright, 2005; Liu, 2004b,a; Suárez, 2009; Weisberg, 2007).

27For an analysis on reliability, see Section 4.2.1.

28See (Morgan and Morrison, 1999).

29Cf. (Hartmann, 1999, 327).

30For further reading, see (Redhead, 1980; McMullin, 1968; Frigg and Hartmann, 2006).

31Cf. (Da Costa and French, 2000, 121).

32For instance, see (Galison, 1997; Staley, 2004).

33Cf. (Frigg and Hartmann, 2006, 743).

34For further reading, see (Harris, 2003).

35Among international groups working on observatory data models, there is the ‘International Virtual Observatory Alliance,’ http://www.ivoa.net/pub/info/; and the ‘Analysis of the Interstellar Medium of Isolated GAlaxies’ (AMIGA), http://amiga.iaa.es/p/1-homepage.htm.

36The list of philosophers of experimentation is long, but it would certainly include (Hacking, 1988, 1983; Franklin, 1986; Galison, 1997; Woodward, 2003; Brown, 1994, 1991).

37(Hanson, 1958, 1963).

38See (Franklin, 1990).

39See (Harré, 1981) and (Aristotle, 1965).

40See (Quine, 1951; Duhem, 1954).

41(Ackermann, 1989; Mayo, 1994).

42For further reading, see (Kraftmakher, 2007; Radder, 2003).

43Cf. (Nitske, 1971).

44For a study on counterfactual theories, see (Lewis, 1973). For probabilistic causation, see (Weber, 2009). For causal calculus, see (Hida, 1980).

45(Salmon, 1984).

46I have simplified Dowe’s definition of causal interaction. A more complete discussion must include the notions of (and distinctions between) ‘causal process’ and ‘causal interaction’ (see Dowe, 2000, 89).

47(Kosso, 2010, 1989).

48For a distinction between these two terms, see the Lexicon.

49Cf. (Franklin, 1990, 104ff).

50By ‘success’ here I simply mean that the experiment provides the correct results.

51Cf. (Franklin, 1989, 441).

52Cf. (Hacking, 1983, 194-197).

53(Penzias and Wilson, 1965a,b).

54See, for instance, (Dicke et al., 1965).

55(Franklin, 1981; Friedman and Telegdi, 1957; Wu et al., 1957).

56For instance, (Swart, 1980; Tymoczko, 1979, 1980).

57Cf. (Franklin, 1981, 368).

58See Section 3.2 for further discussion on this point.

59See (Holton, 1999).

60See (Cirpka et al., 2004).

61Cf. (Frigg and Reiss, 2009, 594-595).

62It can be read: “[t]he philosophical problems that do come up in connection with simulations are not specific to simulations and most of them are variants of problems that have been discussed in other contexts before. This is not to say that simulations do not raise new problems of their own. These specific problems are, however, mostly of a mathematical or psychological, not philosophical nature” (Frigg and Reiss, 2009, 595).

63Such a conceptualization is fully developed in Chapter 2 and Chapter 3.

64Cf. (Humphreys, 2009, 617).

65Humphreys makes the distinction between scientific practice completely carried out by computers (one that he calls the automated scenario) and one in which computers only partially fulfill scientific activity (the hybrid scenario). He restricts his analysis, however, to the hybrid scenario (cf. Humphreys, 2009, 616-617).

66Cf. (Humphreys, 2004, 8).

67See (Humphreys, 2009, 618ff). In (Durán, Unpublished) I claim that the anthropocentric predicament is silent on whether we are justified in believing that the results of a computer simulation are, as assumed, reliable. Here I also raise some objections to ‘epistemic opacity’ and the ‘in practice/in principle’ distinction (see Section 2.2 and Section 3.3.2).

68See, for instance (Morgan and Morrison, 1999, 10ff).

69For further reading, see (Atkinson, 1964).

70For instance (Young, 1963).

71For further reading, see (Kotsiantis and Kanellopoulos, 2006; Boulle, 2004).

72Computer models are subject to the Church-Turing thesis, that is, limited by functions whose values are ‘effectively calculable’ or algorithmically computable. See (Church, 1936; Turing, 1936).

73For instance (Butterfield, 2008; Turner, 2007b).

74For further reading, see (Ainsworth et al., 1996; Guttag et al., 1978; Spivey, 2001).

75See (Oberkampf and Roy, 2010).

76See (Oberkampf et al., 2003).

Chapter 2

A survey of the foundations of computer science

2.1 Introduction

Philosophers dealing with computer simulations have used the very notion of computer simulation in a rather loose way. Most prominent is the conception that a computer simulation is a dynamic system implemented on a physical computer.1 But what exactly constitutes the notion of computer simulation? In a broad context, computer simulations are just another type of computer software, like the web application of a bank or the database at the doctor’s office. I begin the study of computer simulations by analyzing them as general purpose computer software. More specifically, I propose to divide the study of general purpose computer software into three main units of analysis, namely, the specification, the algorithm, and the computer process. Thus understood, I approach this study from the perspective of the philosophy of computer science, rather than from the philosophy of science, as it has typically been addressed.

With these interests in mind, I have divided this chapter in the following way:

Section 2.2 defines the notion of computer specification and deals with its methodological and epistemological uses. On the methodological side, specifications are taken as a blueprint for the design, programming, and implementation of algorithms; on the epistemic side, however, they bind the scientists’ knowledge about the target system (i.e., an empirical model, a theoretical model, a phenomenological model, etc.) together with the scientists’ knowledge about the computer system (architecture, OS, programming languages, etc.).

Section 2.3 discusses the notion of algorithm, as well as the formal and non-formal methods involved in its interpretation, design, and confirmation. I also discuss some properties of algorithms that make them interesting for scientific representation, such as syntax manipulability, which is associated with the multiple (and equivalent) ways of representing the same target system (e.g., Lagrangian or Hamiltonian representations). The other important property of an algorithm is syntax transference, that is, the simple idea that, with just a few changes, the algorithm can be used in different representational contexts.
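Syntax transference can be made concrete with a small sketch (the code and names below are my own hypothetical illustration, not drawn from the text): a single step-wise algorithm is redirected to a different representational context by changing only the function that encodes the target system.

```python
def euler_step(state, derivative, dt):
    """One step of a generic step-wise integration algorithm.
    The algorithm itself is indifferent to what it represents."""
    x, v = state
    dx, dv = derivative(x, v)
    return (x + dt * dx, v + dt * dv)

# Representational context 1: a harmonic oscillator (spring).
def spring(x, v, k=1.0):
    return (v, -k * x)

# Representational context 2: a body whose velocity relaxes toward
# rest. Only the derivative function changes; the stepping algorithm
# is reused as-is.
def damped(x, v, c=0.2):
    return (v, -c * v)

state = (1.0, 0.0)
for _ in range(10):
    state = euler_step(state, spring, 0.01)
print(round(state[0], 4))
```

The same `euler_step` call, applied to `damped` instead of `spring`, simulates an entirely different target system: the transference lies in the few local syntactic changes, not in the algorithm.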

Finally, Section 2.4 addresses the notion of computer process as the set of logical gates (activated by the algorithm) working at the hardware level. I then argue that both the algorithm and the computer process are step-wise processes. From here two conclusions follow: first, the results of the computation fall within the algorithm’s parameter domain (or, more specifically, within the specification and algorithm’s space of solutions); and second, the algorithm and the computer process are epistemically equivalent (albeit ontologically dissimilar). These conclusions are of central importance for my conceptualization of computer simulations.

This chapter, then, has two aims in mind: first, to discuss the ontology of computer software along with the epistemic and methodological characteristics that it offers; and second, to prepare the terminology used in the remainder of this dissertation.