
and the algorithm? The credibility of computer software comes not only from the credentials supplied by the theoretical model used for constructing the simulation, but also (and probably more fundamentally) from the antecedently established credentials of the model-building techniques employed in its construction. A history of prior success and accomplishments is the sum and substance of scientific progress.

The techniques, assumptions, languages, and methods successfully employed for interpreting the specification into an algorithm contribute to this history. They fit well into our web of accepted knowledge and understanding of a computer system, and therefore are part of the success of science. A history of successful interpretation of the specification into the algorithm by the means described is enough credential for their reliability. Of course, better techniques will supersede current ones, but that is also part of this scientific progress, part of a history of successful methods that connect specifications and algorithms.

It is time to address the final component in computer software, namely, the computer process. Just one last terminological clarification: I reserve the term ‘computer model’ (or ‘computational model’) for referring to the pair specification-algorithm. The reason for this is that both are essential in the representation of the target system.

Let me now turn to the study of the computer process.

To illustrate the difference between the computer process and the specification, take for example the description of a milk-delivery system: “make one milk delivery at each store, driving the shortest possible distance in total” (Cantwell Smith, 1985, 22). As Cantwell Smith explains, this is a description of what has to happen, but not how it will happen. The program is in charge of this last part, namely, it shows how the milk delivery takes place: “drive four blocks north, turn right, stop at Gregory’s Grocery Store on the corner, drop off the milk, then drive 17 blocks north-east, [...]” (Cantwell Smith, 1985, 22).
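To make the contrast concrete, here is a minimal sketch in C (my own illustration; the route and the loop structure are hypothetical, borrowed from Cantwell Smith’s example) in which the ‘how’ is spelled out as an explicit step-wise procedure:

#include <stdio.h>

/* The specification states WHAT must happen: one delivery per store, shortest
 * total distance. The procedure below states HOW: an ordered, step-wise list
 * of driving instructions (a hypothetical route, following Cantwell Smith). */
int main(void)
{
    const char *route[] = {
        "drive four blocks north",
        "turn right",
        "stop at Gregory's Grocery Store on the corner",
        "drop off the milk",
        "drive 17 blocks north-east"
        /* ... and so on for the remaining stores ... */
    };
    for (size_t i = 0; i < sizeof route / sizeof route[0]; i++)
        printf("step %zu: %s\n", i + 1, route[i]);
    return 0;
}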

Now, what Cantwell Smith calls here ‘the program’ must be divided into two distinct components, namely, the algorithm and the computer process. This distinction is important because Cantwell Smith’s notion of program fails to differentiate a step-wise procedure understood as syntactic formulae from a step-wise procedure that puts the physical machine into causal states. In other words, there is an ontological difference that cannot be captured by his notion of program, and it is crucial for a complete understanding of the nature of computer software. Let me begin by discussing this point.

The physical computer is structurally made to follow certain sets of rules built into its own hardware. Following the standard von Neumann architecture,19 the arithmetic unit of the microprocessor can compute because it is constructed using logic gates. Logic gates are the physical implementation of the logic operators ‘and,’ ‘or,’ ‘not,’ and so forth, also present in the algorithm. Moreover, these logic gates are present not only in the arithmetic unit, but throughout the physical machine: memory, computer bus, I/O devices, etc. There, specific languages (so-called ‘hardware description languages’) such as VHDL or Verilog are used for building these logic gates into all the physical micro-circuits.20 Such hardware description languages are, essentially, programming languages for the hardware architecture. It follows, then, that the lowest physical layer of the computer implements the algorithm because there is a common language that facilitates such implementation.
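The point can be sketched in a few lines of C (a software model of gates, not hardware description code such as VHDL or Verilog): the very operators that appear in an algorithm have direct counterparts at the gate level.

/* A software model of elementary logic gates operating on single bits (0 or 1).
 * The same operators appear in algorithms and, as physical gates, throughout
 * the hardware: arithmetic unit, memory, bus, and I/O devices. */
unsigned and_gate(unsigned a, unsigned b) { return a & b; }
unsigned or_gate(unsigned a, unsigned b)  { return a | b; }
unsigned not_gate(unsigned a)             { return a ^ 1u; }

For instance, and_gate(1, 1) yields 1, just as the corresponding physical gate drives its output line high when both inputs are high.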

The implementation of the algorithm on the digital computer is what I call the computer process. A computer process, therefore, can be understood as the set of logic gates activated by the algorithm and working at the hardware level. This result has two important consequences: first, an algorithm and a computer process are both step-wise processes in the sense that they both depend on a set of well-defined rules; second, the computer process is the physical realization of the algorithm, that is, the computer process physically implements the instructions set out by the algorithm.

The notion of ‘implementation’ here is taken in a semantic sense, that is, in the sense that a syntactic structure (i.e., the algorithm) is interpreted on a semantic domain (i.e., the physical computer). William Rapaport explains this in the following way: “terms get interpreted by, or mapped into, elements of the interpreting domain, and predicates (operations) are mapped into predicates of the interpreting domain” (Rapaport, 2005a, 388). Of course, the interpreting domain is, in this context, the physical states of the computer. In other words, semantic implementation is the correct terminology for the idea that the computer process is the physical concretization of the algorithm on the computer.21 To complete the idea, it must be noted that such semantic implementation is carried out by the ‘compiler,’ that is, another computer program capable of mapping the elements of the domain of the algorithm into a computer language (see Figure 2.2). C. A. R. Hoare puts the same idea in the following way:

A program is a detailed specification of the behavior of a computer executing that program. Consequently, a program can be identified abstractly with a predicate describing all relevant observations that may be made of this behavior. This identification assigns a meaning to the program (Floyd, 1967), and a semantics to the programming language in which it is expressed (Hoare and Jones, 1989, 335).

It must finally be noted that if the computer architecture changes, say from an 8-bit to a 256-bit computer, then the compiler bears the responsibility of making this architectural change transparent to the scientist by providing a new mapping. For our purposes here, it is sufficient to characterize a computer process as the algorithm running on the physical machine.
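A toy illustration of this mapping, written in C rather than in the intermediate language of any real compiler (the table, the function names, and the operators covered are all hypothetical), might look as follows: each symbolic operator of the algorithm is interpreted by, or mapped into, an operation of the interpreting domain.

#include <stdio.h>

/* A toy 'semantic implementation': each symbolic operator of the algorithm
 * is mapped onto a concrete operation in the interpreting domain
 * (here simulated by ordinary C functions standing in for machine behavior). */
typedef unsigned (*machine_op)(unsigned, unsigned);

static unsigned machine_add(unsigned a, unsigned b) { return a + b; }
static unsigned machine_mul(unsigned a, unsigned b) { return a * b; }

struct mapping { char symbol; machine_op op; };

static const struct mapping table[] = {
    { '+', machine_add },
    { '*', machine_mul },
};

/* interpret: map a symbolic operator onto its machine-level counterpart */
static unsigned interpret(char symbol, unsigned a, unsigned b)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].symbol == symbol)
            return table[i].op(a, b);
    return 0; /* unknown operator: no interpretation available */
}

int main(void)
{
    printf("2 + 2 is interpreted as %u\n", interpret('+', 2, 2));
    return 0;
}

If the architecture changed, only the right-hand side of the table (the machine-level operations) would need to be replaced; the symbols of the algorithm remain untouched, which is what makes the change transparent to the scientist.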

Now, from a philosophical point of view, there are two outcomes of importance.

The first is that the results of the computation fall within the algorithm’s parameter domain (or, more specifically, within the space of solutions of the specification and the algorithm). The second is that algorithm and computer process are epistemically equivalent, albeit ontologically dissimilar. Let me briefly address these two outcomes.

According to my description of the implementation, it follows that the algorithm ‘tells’ the computer how to behave. Thus interpreted, the computer process neither includes nor excludes any information that was not previously programmed in the algorithm (unless, of course, there is a miscalculation or an error of some kind22).

Now, this is a controversial point among philosophers. The most tendentious position is held by James Fetzer in the context of the verification debate.23 Throughout his work, Fetzer holds that there are no reasons for believing that the computer process will not somehow influence the results of the calculation. That is, since the computer process is a causal process subject to all kinds of physical conditions (e.g., changes of temperature, hardware failure, etc.), it is to be expected that there will be differences between the pen-and-paper results of calculating the algorithm and the results of the computer process. Fetzer’s position has been strongly criticized on the grounds that he misrepresents the practice of computer science.24 The most pervasive objection to his viewpoint is that there are no grounds for claiming that the computer process introduces unexpected modifications of results. This objection is appealing, for computer processes are, most of the time, reliable processes.25 Now, Fetzer’s position could be attacked from different angles. The easiest way to deal with these issues, however, is to reply that scientists replicate results and use statistical methods that provide sufficient guarantees that the results fall within a given probability distribution.
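That practice can be sketched very simply in C (the computation, the number of runs, and the tolerance are hypothetical stand-ins; a real workflow would use proper statistical tests): a result is recomputed several times and accepted only if the runs agree within a stated tolerance.

#include <math.h>
#include <stdio.h>

/* A minimal sketch of replication: run the same computation several times
 * and check that all results agree within a tolerance. This only
 * illustrates the idea of guarding against unreliable processes. */
static double computation(void)
{
    return sqrt(2.0) * sqrt(2.0);  /* stand-in for a simulation run */
}

int main(void)
{
    const double tolerance = 1e-12;
    double reference = computation();
    int consistent = 1;

    for (int run = 1; run < 10; run++)
        if (fabs(computation() - reference) > tolerance)
            consistent = 0;

    printf(consistent ? "runs agree within tolerance\n"
                      : "discrepancy detected\n");
    return 0;
}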

For our present purposes it is enough to presuppose that the computer process introduces no miscalculations or mathematical artifacts of any kind into the results. This presupposition is harmless philosophically speaking, and achievable technically speaking. It follows, then, that the equations coded in the algorithm are more or less straightforwardly solved by the computer process. Given that no information is added or subtracted, those results must belong in the space of possible results, i.e., the algorithm’s space of solutions. Allow me to illustrate this presupposition with an analogy. Suppose that any given person carries out a simple calculation in her head: unless her brain works in unexpected ways, in principle there are no reasons for thinking that the result will be wrong. If I am correct, then the consequence is that the results of the computation were already contained in the algorithm.26

The second outcome is that an algorithm and a computer process are epistemically equivalent (albeit ontologically dissimilar). This is actually a consequence of semantically implementing the algorithm, as discussed above. Since the algorithm and the computer process are linked by this semantic implementation, and since the algorithm is straightforwardly solved by the computer process, both must be epistemically related. To be epistemically related means that the computer process contains the same information as the algorithm, albeit in a different form. It is in fact due to this epistemic equivalence that researchers are allowed to rely on the results of a computation. If they were epistemically different, there would be no guarantees that the results of a computation are reliable in any way. To illustrate this last point, take the simple mathematical operation 2 + 2 as an example. A possible algorithm written in language C may look like the following:

Algorithm 2.7 A simple algorithm for the operation 2 + 2 written in language C

int main(void)
{
    return 2 + 2;   /* the result, 4, becomes the program's exit status */
}

Now, in binary code, each operand 2 is represented as 00000010, and the arithmetic operation ‘+’ is carried out by an adder circuit built from logic gates such as ‘xor’ and ‘and.’

As I mentioned before, the compiler ‘translates’ the algorithm into machine-readable language (in this case, it translates ‘+’ into the machine’s addition instruction, which the hardware realizes through those gates). Once the computer process is executed on the machine, it returns 4, for this is the result of running the addition through the gates, i.e., 00000100 = 4. The conclusion is that the knowledge included in the algorithm in the form of instructions is processed by the computer, rendering reliable results. Epistemically speaking, then, they are equivalent.
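How such an addition is traced through logic gates can be sketched in C (a textbook ripple-carry scheme built from ‘xor,’ ‘and,’ and ‘or’ gates, not the circuitry of any particular processor); it takes 00000010 and 00000010 and yields 00000100:

#include <stdio.h>

/* Ripple-carry addition of two 8-bit values built from 'xor', 'and', and 'or'
 * gates: at each bit position, sum = a xor b xor carry, and the new carry is
 * produced from the 'and' and 'or' of the inputs. */
static unsigned char add8(unsigned char a, unsigned char b)
{
    unsigned char result = 0, carry = 0;
    for (int i = 0; i < 8; i++) {
        unsigned char abit = (a >> i) & 1u;
        unsigned char bbit = (b >> i) & 1u;
        unsigned char sum  = abit ^ bbit ^ carry;          /* xor gates  */
        carry = (abit & bbit) | (carry & (abit ^ bbit));   /* and/or     */
        result |= (unsigned char)(sum << i);
    }
    return result;
}

int main(void)
{
    printf("%u\n", add8(2, 2));  /* prints 4, i.e., 00000100 */
    return 0;
}

Running the sketch prints 4: the instructions of the algorithm and the gate-level operations of the machine carry the same information.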

Admittedly, the examples used here are rather simple, whereas running computer software is a much more complex process in which a large number of instructions are executed at the same time. However, they illustrate quite accurately the basic operations of the computer, which was the purpose of this chapter.

Figure 2.2: A general schema of the process of creating computer software.

Figure 2.2 summarizes the three units of computer software and their connections. At the top level there is the specification, where decisions for the computer software are made and integrated. The algorithm is the set of instructions that interprets the specification and prescribes how the machine should behave. Finally, there is the computer process, i.e., the semantic implementation of the algorithm that delivers the results.

The purpose of this chapter was to analyze the notion of computer software as composed of three units: the specification, the algorithm, and the computer process. Equally important was to understand their philosophical features, both individually and as a working system. These results provide ontological and epistemic grounds for our study of computer simulations. In the following chapters, I restrict my view to one special kind of computer software, namely, computer simulations. I will study them from a methodological and epistemic point of view.