5. Whatever the input data, the execution of the algorithm will terminate after a finite number of steps;

6. The behavior of the algorithm is physically instantiated during the implementation on the computer machine. (Chabert, 1994, 455)

Many structures in computer science fulfill these features for an algorithm: the Hoare triple is a good example of this (e.g., Algorithm 2.1). Pseudo-codes, that is, descriptions of algorithms intended for human reading rather than machine reading, are another good example of algorithms (e.g., Algorithm 2.2).10 Of course, the notion of algorithm is not limited to abstract representations, for an algorithm could also be written in a programming language. The universe of programming languages is vast, but one example of each kind will suffice: C, Java, Perl, and Haskell. The first is an example of a low-level programming language (e.g., Algorithm 2.3); the second is an object-oriented programming language; Perl is a good example of a scripting language; and Haskell is the exemplar of functional programming.11

Algorithm 2.1 Hoare triple

A Hoare triple is a theoretical reconstruction of an algorithm. It consists of a formal system (initial, intermediate, and final states) that strictly obeys a set of logical rules. It has been developed for formal validation of the correctness of computer programs and it cannot be implemented on the computer.

The Hoare triple has the form: {P}C{Q}

where P and Q are assertions and C is a command. P is named the precondition and Q the postcondition. When the precondition is met, the command establishes the postcondition. Assertions are formulae in predicate logic.

Rules:

Empty statement axiom schema: {P} Skip {P}
Assignment axiom schema: {P[E/x]} x := E {P}
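Although a Hoare triple is a proof-theoretic object rather than executable code, its content can be glossed informally. The following is a minimal, hypothetical sketch of my own (not part of Hoare's presentation) that reads the instance {x + 1 > 0} x := x + 1 {x > 0} of the assignment axiom schema as run-time assertions in C:

#include <assert.h>

/* Hypothetical instance of the assignment axiom {P[E/x]} x := E {P},
   with P: x > 0 and E: x + 1, i.e. {x + 1 > 0} x := x + 1 {x > 0}. */
void increment(int x) {
    assert(x + 1 > 0);  /* precondition P[E/x] */
    x = x + 1;          /* command C */
    assert(x > 0);      /* postcondition P */
}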

From an ontological point of view, an algorithm is a syntactic structure encoding the information set out in the specification. Let us note that studies in computer science make use of the notions of syntax and semantics in a rather different way than linguistics. For computer science, syntax is the study of symbols and their relationships within a formal system; it typically includes a grammar (i.e., which sequences of symbols count as well-formed formulas) and a proof-theory (i.e., which sequences of well-formed formulas are considered theorems). On the other hand, semantics is the study of the relationship between a formal system, which is syntactically specified, and a

Algorithm 2.2 Pseudo-code

Pseudocode is a non-formal, high-level description of a computer algorithm. It is intended to focus on the operational behavior of the algorithm rather than on a particular syntax. In this sense, it uses language similar to that of a programming language, but in a very loose way, typically omitting details that are not essential for understanding the algorithm.

FOR all numbers in the array
    SET Temp equal to the addition of each number
    IF Temp > 9 THEN
        SET the remainder of Temp divided by 10 to that index and carry the "1"
    Decrement one
Do it again for numbers before the decimal

semantic domain, which can be specified by a domain taken to provide interpretation to the symbols in the syntactic domain. In the case of the implementation of the algorithm on the digital computer, the semantic domains are the physical states of the computer when running the instructions coded into the algorithm. I refer to these physical states as the computer process, which will be discussed in the next section.

Two other ontological properties of interest are that the algorithm is abstract and formal. It is abstract because it consists of a string of symbols with no physical causal relations acting upon them: just like a logico-mathematical structure, an algorithm is causally inert and disconnected from space-time. It is formal because it follows the laws of logic that indicate how to systematically manipulate symbols.12

These ontological features have an epistemic counterpart. Any abstract and formal system allows its variables to be syntactically transformed into logico-mathematical equivalents. Such a capacity comes in two forms: syntax manipulability and syntax transference.13 Allow me to explain.

Consider a mathematical model. Such a model can be mathematically transformed by changing one formula for another formula isomorphic to it. For instance, there is a mathematical isomorphism between 2a and a + a, for both belong to the same algebraic group. Thus understood, syntax manipulability is a feature that also applies to computer simulations. Consider the following Algorithm 2.4 and its equivalent Algorithm 2.5. Both are logically isomorphic, and the proof of this is given in Table 2.1.

Paul Humphreys warns that syntax manipulability is not about mere mathematical

Algorithm 2.3 Imperative programming language C using local variables

Finally, an algorithm written in a programming language describes the specification in such a way that it can be implemented on the computer. In this work, when I refer to an algorithm, I will be referring to an algorithm written in a specific programming language (the particularities of the programming language are not important here).

Function mean finds the mean between two integers:

#include <stdio.h>

/* mean: returns the mean of two integers as a floating-point value */
float mean(int num1, int num2) {
    float p;
    p = (num1 + num2) / 2.0;
    return p;
}

int main(void) {
    int a = 7, b = 10;   /* local variables */
    float result;
    result = mean(a, b);
    printf("Mean=%f\n", result);
    return 0;
}

Algorithm 2.4

if (A) then {a} else {b}

power, but it also “makes the difference between a computationally intractable and a computationally tractable representation” (Humphreys, 2004, 97). For instance, Lagrangian and Hamiltonian sets of equations both represent dynamical systems. And despite the fact that both are equivalent in terms of their mathematics, the Hamiltonian representation has proven to be more suitable for some systems than the Lagrangian one (e.g., in quantum mechanics).14 There is, therefore, a clear superiority of the Hamiltonian representation over the Lagrangian. Now, representational superiority must be accompanied by algorithmic adequacy, that is, the ability of an algorithm to be computationally more adequate than another. The issue is that, just as syntax manipulability does not warrant representational superiority, the latter does not warrant algorithmic adequacy. The reason for this is that algorithmic adequacy depends on the capacity of a computer system to compute the mathematics, and not necessarily on its representational capacities.

Indeed, an algorithm that codes Hamiltonian equations might not necessarily be algorithmically more adequate than one coding Lagrangian equations. Moreover, these two algorithms might not necessarily be isomorphic to each other. These facts nicely show that mathematical power and representational superiority are not positive features necessarily inherited by the algorithm.

Algorithm 2.5

if (not-A) then {b} else {a}

Algorithm 2.4:
A       {a}    {b}
T        T      Ø
F        Ø      T

Algorithm 2.5:
not-A   {a}    {b}
F        T      Ø
T        Ø      T

Table 2.1: Equivalency truth-table
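The same equivalence can be glossed at the level of code. The following is a minimal, hypothetical C sketch (the condition A and the branch bodies {a} and {b} are stand-ins of my own) showing that the two forms of Algorithm 2.4 and Algorithm 2.5 select the same branch for every input:

#include <stdio.h>

/* Hypothetical stand-ins: returning 1 marks branch {a}, returning 2 marks branch {b}. */
int algorithm_2_4(int A) { if (A)  { return 1; /* {a} */ } else { return 2; /* {b} */ } }
int algorithm_2_5(int A) { if (!A) { return 2; /* {b} */ } else { return 1; /* {a} */ } }

int main(void) {
    /* Both truth values of A lead to the same selected branch in each algorithm. */
    for (int A = 0; A <= 1; A++)
        printf("A=%d: Algorithm 2.4 -> %d, Algorithm 2.5 -> %d\n",
               A, algorithm_2_4(A), algorithm_2_5(A));
    return 0;
}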

The other interesting feature of algorithms is syntax transference. It is characterized by the simple idea that, by adding just a few changes, the algorithm can be reused in different representational contexts. As an example, consider Algorithm 2.6, which is a modification of Algorithm 2.3 replacing the two local integer variables ‘int a=7, b=10’ with two global integer variables. Syntax transference, then, allows scientists to reuse existing code and to generalize it, thereby broadening the scope of the algorithm. This feature has been acknowledged by Humphreys in the following words: “[a]nother advantage of such syntactic transformations is that a successful model can sometimes be easily extended from an initial application to other, similar systems” (Humphreys, 2004, 97).

Algorithm 2.6 Imperative programming language C using global variables

#include <stdio.h>

int a, b;    /* global integer variables replacing the locals of Algorithm 2.3 */

Function mean finds the mean between two integers:

/* mean: returns the mean of two integers as a floating-point value */
float mean(int num1, int num2) {
    float p;
    p = (num1 + num2) / 2.0;
    return p;
}

int main(void) {
    float result;
    a = 7;               /* the globals are set here instead of being local to main */
    b = 10;
    result = mean(a, b);
    printf("Mean=%f\n", result);
    return 0;
}

Admittedly, syntax manipulability and syntax transference raise the question of equivalency for a class of algorithms. The problem is the following. Syntax manipulability and syntax transference presuppose modifications of the original algorithm that lead to a new algorithm. Since algorithms are, as argued, syntactic

entities, such modifications might entail an entirely new algorithm, one that significantly diverges from the original. Is Algorithm 2.3 equivalent to Algorithm 2.6? Equivalent in what sense? Is there a way to entrench equivalency for a class of algorithms? These are time-honored questions in the philosophy of computer science, and by no means will I provide a definite answer. However, two approaches have been agreed upon among computer scientists and philosophers alike: either two algorithms are ‘logically equivalent,’ that is, the two algorithms are formally isomorphic;15 or they are ‘behaviorally equivalent,’ that is, the two algorithms behave in a similar fashion.

Let me briefly discuss these two approaches. Logical equivalency can be easily illustrated by using Algorithm 2.4 and Algorithm 2.5, since both are formally isomorphic and the proof of this is shown in Table 2.1. Although formal procedures of any kind warrant isomorphism between two structures, such procedures are not always attainable due to practical as well as theoretical constraints. Examples of the former include formal procedures that are humanly impossible to carry out, or too costly in time and money. Examples of theoretical constraints include programming languages that tend to have more expressive power than formal languages, making the latter less capable of being used in a formal procedure.

Behavioral equivalency, on the other hand, is easier to achieve than logical equivalency, since it depends on the general behavior of the algorithm. It is exposed, however, to one concern and one objection. The concern is that behavioral equivalency is grounded in inductive principles. Indeed, behavioral equivalency can only be warranted for time t, when the observer corroborates the same behavior between the algorithms, but not for t + 1, which is the next unobserved state of the algorithm. In plain words, two algorithms could be behaviorally diverse in t + 1 while equivalent in t. Full behavioral equivalency, therefore, is only warranted, if at all, by the time the two algorithms halt. The question remains whether an ex post facto comparison between algorithms is of any interest to the computer scientist.
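A minimal, hypothetical C sketch of the concern (the functions f and g are my own stand-ins for two algorithms): they agree on every observed step up to some t and diverge only afterwards, so corroborated behavior up to t warrants nothing about t + 1.

#include <stdio.h>

/* Two stand-in algorithms that agree for the first 1000 steps and then diverge. */
long f(long t) { return t % 7; }
long g(long t) { return (t < 1000) ? t % 7 : -1; }

int main(void) {
    for (long t = 0; t < 1200; t++) {
        if (f(t) != g(t)) {
            printf("first divergence at t=%ld\n", t);  /* prints t=1000 */
            return 0;
        }
    }
    printf("no divergence observed\n");
    return 0;
}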

The objection stems from the fact that behavioral equivalency could hide logical equivalency. If this is the case, then two algorithms could diverge behaviorally while being logically equivalent. An example of this is one algorithm that implements Cartesian coordinates whereas another implements polar coordinates. Both algorithms are isomorphic, but behaviorally dissimilar.
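A minimal, hypothetical C sketch of the objection (the particular functions are my own choice): the two representations of a point are interchangeable, and computing the distance to the origin yields the same value in both, yet the instructions actually executed differ.

#include <stdio.h>
#include <math.h>

/* Two isomorphic representations of the same point. */
typedef struct { double x, y; } Cartesian;
typedef struct { double r, theta; } Polar;

/* Translating one representation into the other exhibits the isomorphism. */
Polar to_polar(Cartesian c) {
    Polar p = { hypot(c.x, c.y), atan2(c.y, c.x) };
    return p;
}

/* Distance to the origin: a sum of squares and a square root in one case ... */
double dist_cartesian(Cartesian c) { return sqrt(c.x * c.x + c.y * c.y); }
/* ... and a plain field access in the other: same value, different behavior. */
double dist_polar(Polar p) { return p.r; }

int main(void) {
    Cartesian c = { 3.0, 4.0 };
    printf("%f %f\n", dist_cartesian(c), dist_polar(to_polar(c)));
    return 0;
}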

The general problem with behavioral equivalency is that it cannot ensure that two algorithms are the same, despite both behaving similarly. My concern suggests that two algorithms can be behaviorally equivalent up to a certain number of steps and then diverge from each other. My objection indicates that two algorithms might be logically equivalent, although behavioral equivalency hides it.

The lesson here is that syntax manipulability and syntax transference come at a price. Both are, however, highly desirable features for computer software. There are clear epistemic advantages in being able to implement (with a few changes) the Lotka-Volterra set of equations for a biological system as well as for an economic system.
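A minimal, hypothetical C sketch of such a transference (the Euler step, the parameter values, and the economic relabeling are my own assumptions): the same update rule is reused unchanged, and only the interpretation of the state variables changes.

#include <stdio.h>

/* One Euler step of the Lotka-Volterra equations:
   dx/dt = alpha*x - beta*x*y,  dy/dt = delta*x*y - gamma*y. */
void lotka_volterra_step(double *x, double *y, double alpha, double beta,
                         double delta, double gamma, double dt) {
    double dx = (alpha * *x - beta * *x * *y) * dt;
    double dy = (delta * *x * *y - gamma * *y) * dt;
    *x += dx;
    *y += dy;
}

int main(void) {
    /* Biological reading: x = prey, y = predators. */
    double prey = 10.0, predators = 5.0;
    /* Economic reading of the very same code: x and y relabeled, say, as two
       interacting market quantities. */
    double market_x = 10.0, market_y = 5.0;
    for (int i = 0; i < 1000; i++) {
        lotka_volterra_step(&prey, &predators, 1.1, 0.4, 0.1, 0.4, 0.01);
        lotka_volterra_step(&market_x, &market_y, 1.1, 0.4, 0.1, 0.4, 0.01);
    }
    printf("prey=%.2f predators=%.2f\n", prey, predators);
    return 0;
}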

Until now, I have discussed the conceptualization of an algorithm, along with its philosophical consequences as a syntactic formula. There is still one more issue to be discussed; namely, the existing link between the specification and the algorithm.

Ideally, the specification and the algorithm should be closely related; that is, the specification should be entirely interpreted as an algorithmic structure. To this end, there is a host of specialized languages that ‘mediate’ between the specification and the algorithm, such as the Common Algebraic Specification Language (CASL), the Vienna Development Method (VDM), or the Z notation,16 just to mention a few specification languages. Model checking was conceived as the branch of computer science that automatically tests whether an algorithm meets the required specification, and in that sense it is also helpful in the interpretation of the specification into an algorithm.17 In more general terms, the implementation of a specification into an algorithm has a long tradition in mathematics, logic, and computer science, and it does not represent a conceptual problem here.
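In the spirit of such automated checks, the following is a minimal, hypothetical C sketch of my own (not CASL, VDM, Z, or a genuine model checker): it exhaustively tests, over a small finite domain, whether the mean of Algorithm 2.3 meets the informal specification ‘the result lies between the two inputs.’

#include <stdio.h>

float mean(int num1, int num2) { return (num1 + num2) / 2.0; }

int main(void) {
    int violations = 0;
    /* Exhaustively check the specification over a small finite input domain. */
    for (int a = -50; a <= 50; a++) {
        for (int b = -50; b <= 50; b++) {
            float m = mean(a, b);
            int lo = (a < b) ? a : b;
            int hi = (a < b) ? b : a;
            if (m < lo || m > hi)
                violations++;
        }
    }
    printf("violations of the specification: %d\n", violations);
    return 0;
}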

I also mentioned that the specification includes non-formal elements, such as expert knowledge or design decisions that cannot be formally interpreted. These non-formal elements must also be interpreted in terms of the algorithm, otherwise they will not be part of the computer software. Take as an example the specification of an algorithm for computing statistics. Say that the team of statisticians decides to be very careful about the samples they collect, and passes on this concern to the group of computer scientists. Yet, the latter group fails to correctly specify some sensitive data, say the economic background of some minority included in the samples. If this specification is not corrected before the algorithm is implemented, then the results of computing the statistics will be biased.

This example shows that an algorithm must also be capable of interpreting non-formal elements included in the specification. There are no formal methods for interpreting non-formal elements of the specification. In situations like this, past experiences, expert knowledge, and ‘know-how’ become crucial in the interpretation and implementation of non-formal elements of the specification.

Where does this leave the question about the interpretation of the specification and the algorithm? The credibility of computer software comes not only from the credentials supplied by the theoretical model used for constructing the simulation, but also (and probably more fundamentally) from the antecedently established credentials of the model-building techniques employed in its construction. A history of prior success and accomplishments is the sum and substance of scientific progress.

The set of techniques, assumptions, languages, and methods successfully employed for interpreting the specification into an algorithm contributes to this history. They fit well into our web of accepted knowledge and understanding of a computer system, and are therefore part of the success of science. A history of successful interpretation of the specification into the algorithm by the means described is credential enough for their reliability. Of course, better techniques will supersede current ones, but that is also part of this scientific progress, part of a history of successful methods that connect specifications and algorithms.

It is time to address the final component in computer software, namely, the computer process. Just one last terminological clarification. I reserve the terms computer model or computational model for referring to the specification-algorithm pair. The reason for this is that both are essential in the representation of the target system.

Let me now turn to the study of the computer process.