
6.2 Future challenges

6.2.1 For the unificationist

In this work I have defended the unificationist account of explanation as a successful theory for explaining simulated phenomena. I have also shown that, through the epistemic goal that this account of explanation carries with it, it is possible to confer epistemic power on computer simulations. However, there is a handful of unsolved issues related to the unificationist account, and inherited by computer simulations, that have not been addressed in this work. In this section I briefly outline them and suggest possible solutions. Let me note that most of them do not represent a threat to my view of the explanation of computer simulation results, insofar as they are not a threat to the unificationist account. Naturally, an exhaustive analysis should not be expected; rather, these issues are presented as future challenges that the unificationist account poses for computer simulations.

Allow me to begin with a claim already mentioned in Section 5.2 about the world as a disunified place. John Dupré, for instance, has written extensively on this topic, denying the view that science constitutes a single, unified project. According to him, "[t]he metaphysics of modern science, as also of much of modern Western philosophy, has generally been taken to posit a deterministic, fully law-governed, and potentially fully intelligible structure that pervades the material universe" (Dupré, 1993, 2). At the beginning of that section I argued that complete unity in science was a program that proved to be wrong. Instead, the image of the unificationist that Kitcher projects, and that I support, is one of a 'modest unificationist'; that is, a philosopher who, while committed to unity, also leaves room for the belief that the world may be a disordered place. As argued, the modest unificationist believes that in many cases scientific disciplines share regular methodologies, common ontologies, and comparable epistemic results, all of which increase our understanding of the world, advance science, and strengthen our corpus of belief.

The question is in what sense computer simulations are better off adopting the modest unificationist stance rather than following Dupré's strategy. Here I must make a further distinction in the notion of 'unification.' Margaret Morrison distinguishes two different types of unification: "reductive unity, where two phenomena are identified as being of the same kind (electromagnetic and optical processes), and synthetic unity, which involves the integration of two separate processes or phenomena under one theory (the unification of electromagnetism and the weak force)" (Morrison, 2000, 5. Emphasis mine.). With this distinction in mind, computer simulations fulfill both types of unification indicated above: by writing an algorithm that describes Maxwell's equations one would be perfectly capable of explaining electromagnetic as well as optical phenomena; and by making use of different libraries and alternative software to reproduce a larger number of phenomena under the same simulation, the researcher captures the second type of unification. Computer simulations are well known for their capacity to bring a host of diverse phenomena under the same umbrella: simulations in economics are used in psychology and vice versa, and biology and evolutionary sociology are also a close match. Sometimes, small changes in the mathematics yield equations that represent a completely different phenomenon. This has been my way of understanding the unificationist power of computer simulations. Dupré's objection to the unified world does not reach the modest unificationist in the physical sciences, nor does it apply to the modest unificationist who runs a computer simulation. Moreover, his objection does not even reach computer simulations applied to the biological sciences, the field that Dupré uses as a counterexample to a unified world.
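To make the first, reductive sense concrete, the following sketch (an illustration of my own, not an example drawn from the literature discussed here) shows how a single discretized update loop for Maxwell's equations in one dimension propagates a field pulse; whether the pulse is read as a radio wave or as visible light, the same algorithm, and hence the same argument pattern, does the explanatory work. The grid size, time steps, source shape, and normalized units are assumptions made only for the sake of the example.

```python
# A minimal 1D finite-difference time-domain (FDTD) sketch of Maxwell's
# equations in normalized units (Courant number 0.5). All parameters are
# illustrative assumptions; the point is only that one update loop covers
# electromagnetic and optical phenomena alike.
import numpy as np

nz, nt = 200, 400                  # spatial cells and time steps (assumed)
ez = np.zeros(nz)                  # electric field E_z on the grid
hy = np.zeros(nz)                  # magnetic field H_y on the grid
source = nz // 2                   # position of the injected pulse

for t in range(nt):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])            # update H from the spatial derivative of E
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])             # update E from the spatial derivative of H
    ez[source] += np.exp(-((t - 30) / 10.0) ** 2)  # inject a Gaussian source pulse

print("peak |E_z| after propagation:", float(np.abs(ez).max()))
```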

Another challenge for computer simulations is the issue of comparative explanations. It is well known that the world of simulations changes constantly. Moore's law, for instance, indicates that the power of computers doubles every 18 months. Such changes have several sources: either because the mathematical machinery used for computers changes over time or, more likely, because technology advances rapidly, providing new architectural support for faster and more powerful computers, along with a number of flourishing programming languages.

Such scientific and technological development provokes a flux of changes in computer simulations as well. An interesting question is how those changes affect explanation in computer simulations. In other words, if two or more simulations are competing to explain a similar phenomenon, are there any available criteria for deciding which simulation explains better? At this point it must be restated that my work here has been on one computer simulation producing one set of results that represents one phenomenon. The problem that I am posing here adds one dimension; namely, a question about multiple simulations of the same target system competing for explanation. In terms of the unificationist account, this means that two or more explanatory stores are competing for explanation.

To the modest unificationist, the choice of one argument pattern over another lies in a comparison of their unifying power. This unifying power depends, in turn, on the paucity of patterns used, the number of descriptions derived, and the stringency of the patterns.5 In previous chapters, characterizing explanatory unification in terms of the fewest patterns used and the number of descriptions derived was paramount for a successful account of explanation. The process of deciding between rival explanations now requires the notion of stringency of patterns. Indeed, in the unificationist account, competing explanations are possible because it is presupposed that two or more derivations are similar in either of these ways: they are similar in terms of their logical structures (i.e., in the statements used for constructing the argument pattern), or they are similar in terms of their non-logical vocabulary. The problem, then, is how to interpret the notion of similarity in such a way that it is not reduced to a matter of degree.

The solution proposed by Kitcher is to recognize that one explanatory store can demand more or less from its instantiations than another explanatory store.

Kitcher then introduces the notion of stringency as a way to decide between those competing patterns. The basic idea is that a pattern is more stringent when it imposes conditions that are more difficult to satisfy than those of other patterns.

Kitcher continues describing the notion of stringency in the following way:

the stringency of an argument pattern is determined in part by the classification, which identifies a logical structure that instantiations must exhibit, and in part by the nature of the schematic sentence and the filling instructions, which jointly demand that instantiations should have common nonlogical vocabulary at certain places (Kitcher, 1989, 433).

In a similar vein, a computer simulation that has more unifying power would be preferable to one with less unifying power. But what exactly does this mean?

In the domain of computer simulations, it is not enough to produce more successful results with the use of fewer patterns, as Kitcher argues for science; more accurate simulations are also preferred due to their capacity to produce more representative results. To illuminate this point, consider any model with at least one ordinary differential equation. In order to implement this model as a computer simulation, it first needs to be discretized. For the sake of the argument, let us consider only the Euler and Runge-Kutta methods. While the Euler method is subject to large truncation errors, the Runge-Kutta method is much more accurate in its results. Given that the accuracy of results bears on how they represent the target system, the more accurate a result is, the more representative of the target system it becomes (and vice versa). To the unificationist, accuracy is interpreted in terms of the nature of the schematic sentences and the filling instructions. A computer simulation using the Runge-Kutta method satisfies more demanding schematic sentences because its results are more representative of the target system; this is also true in terms of the filling instructions, because the simulation provides more detailed results. Accuracy and representativeness are coextensive in computer simulations, and this is reflected in the stringency of the patterns of behavior. It follows that the patterns of behavior of the simulation implementing the Runge-Kutta method can better explain the results than those of the simulation implementing the Euler method and, as such, they unify more.
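As a minimal illustration of this accuracy claim (not an example taken from the chapters above), the sketch below compares the forward Euler method with the classical fourth-order Runge-Kutta method on the simple equation dy/dt = -y, whose exact solution exp(-t) is known; the step size and integration interval are assumptions chosen only to make the difference in truncation error visible.

```python
# Minimal sketch: forward Euler vs. classical Runge-Kutta (RK4) on
# dy/dt = -y with y(0) = 1, exact solution y(t) = exp(-t).
# Step size h and end time are illustrative assumptions.
import math

def f(t, y):
    return -y

def euler_step(t, y, h):
    return y + h * f(t, y)

def rk4_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, t_end = 0.1, 5.0
y_euler, y_rk4, t = 1.0, 1.0, 0.0
while t < t_end - 1e-12:
    y_euler = euler_step(t, y_euler, h)
    y_rk4 = rk4_step(t, y_rk4, h)
    t += h

exact = math.exp(-t_end)
print(f"Euler truncation error: {abs(y_euler - exact):.1e}")  # on the order of 1e-3
print(f"RK4 truncation error:   {abs(y_rk4 - exact):.1e}")    # several orders smaller
```

The specific numbers are beside the point; what matters for the argument is that, for the same model, the more stringent discretization yields results that stand closer to the target system.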

Of course, a more complete analysis of the criteria for choosing one explanation over another in the context of computer simulations requires further study. Here I am only outlining a challenge that could be of great interest to discussions of a unificationist view of computer simulations.

The problem of competing explanations has some kinship with changes in the corpus of beliefs K. My account has been based on a fixed corpus of beliefs at one moment in time. But, as I mentioned before, mathematics changes, and so does technology. Therefore, it is to be expected that our corpus of beliefs also changes.

Recall that in Section 5.2 I briefly discussed how to seek the best E(K), the explanatory store. But the analysis I propose here has less to do with finding the best explanatory store than with changes in the corpus of beliefs K. Let me recall that I reconstruct E(K) from the specification (and the algorithm), where all the information about the simulation lies. If the specification (and therefore the algorithm) changes, this must be interpreted as a change of the explanatory store E(K).

Now, such a change in the explanatory store has nothing to do with finding the best E(K), for this was assumed as a condition, but with the change in the corpus of beliefs K. In plain words, different computer simulations presuppose a change in the specification and in the algorithm, which presupposes in turn a change in the corpus of beliefs K. A theory of explanation for computer simulations must also account for changes in the specification and in the algorithm. This is a very difficult and complex issue at the heart of the unificationist account. Kitcher discusses it to some extent, but it requires a discussion in itself due to the number of technical terms that need to be introduced.6 This problem will also be left as a challenge for future studies on explanatory unification for computer simulations.

The last challenge lies in my own selection of computer simulations, which has two parts. The first part goes back to Chapter 3 where I narrowed down the class of computer simulations for this study. This selection explicitly excluded cellular automata, agent-based simulations, and complex systems. Narrowing down the class of computer simulations was the proper course of action for my purposes.

But in any future study, these classes of simulations must also be incorporated under the unificationist account of explanation (or at least analyzed with a view to such an incorporation). As indicated in Section 3.2.2, this move would require a better analysis of these classes of computer simulations, one that may diverge from what I argued in Chapter 2 and Chapter 3.

The second part of this selection has to do with the simulations that I excluded in Section 4.2.1. There I argued that the metaphysics of scientific explanation demanded that we be very careful in guaranteeing that the results of a simulation actually represented the phenomenon to be explained. This demand was based on epistemological grounds: without the proper representation we cannot claim that an explanation yields understanding of that piece of the world just simulated. Computer simulations, however, are rich heuristic systems, understood as mechanisms for exploring the properties of models.7 Krohs poses another interesting example: the Oregonator, a simulation of the Belousov-Zhabotinsky chemical reaction.8 The problem with this simulation was that the system of equations was stiff and led to qualitatively erroneous results.9 It would be interesting to develop a theory of explanation that includes these simulations as genuine producers of explanation despite the fact that they do not represent. This last point, I suspect, has direct connections with considering computer simulations as a priori experiments, where the reality of the world is subsumed under the rationality of representing it. The intricacy of the problem speaks for itself.
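Purely as an illustration of how stiffness yields qualitatively erroneous results (and not as a reconstruction of the Oregonator itself), the following sketch integrates a standard stiff test equation with the explicit Euler method; above the method's stability threshold the numerical solution explodes even though the true solution varies slowly. The rate constant and step sizes are assumptions made for the example.

```python
# Minimal sketch of stiffness (not the Oregonator): dy/dt = -1000 * (y - cos(t)),
# integrated with explicit Euler. The stability limit is roughly h < 2/1000;
# above it the numerical solution diverges even though the true solution
# stays close to cos(t). All constants are illustrative assumptions.
import math

def integrate_euler(h, t_end=1.0):
    y, t = 0.0, 0.0
    while t < t_end - 1e-12:
        y += h * (-1000.0 * (y - math.cos(t)))
        t += h
    return y

for h in (0.005, 0.001):             # one step size above, one below the limit
    print(f"h = {h}: y(1.0) = {integrate_euler(h):.3e}")
# The larger step yields an astronomically large (qualitatively wrong) value;
# the smaller step stays near the slowly varying solution y(t) ~ cos(t).
```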