Arguments against the ability of IBE to link empirical success with truthlikeness

Recall that Putnam’s ‘no miracle argument’ invokes the approximate truth of scientific theories as the best explanation for their empirical success. One of the most adamant critics of this argument is Larry Laudan (1981; 1984; 1996).

In his 1981 article, “A Confutation of Convergent Realism”, Laudan criticizes the following two theses, which he ascribes to realism:

(T1) If a theory is approximately true, then it will be explanatorily successful.

(T2) If a theory is explanatorily successful, then it is probably approximately true. (Laudan 1981: 30)

(T1) asserts that there is what Laudan calls a ‘downward path’ from approximate truth to empirical success, whereas (T2) asserts the existence of an ‘upward path’. Laudan criticizes each of these two theses. Yet, we shall immediately see that his criticism is not justified.

2.5.1 The downward path

In arguing against the downward path, Laudan relies on the conviction that there is no acceptable notion of ‘approximate truth’.

Virtually all the proponents of epistemic realism take it as unproblematic that if a theory were approximately true, it would deductively follow that the theory would be a relatively successful predictor and explainer of observable phenomena. Unfortunately, few of the writers of whom I am aware have defined what it means for a statement or theory to be ‘approximately true’. Accordingly, it is impossible to say whether the alleged entailment is genuine. This reservation is more than perfunctory. Indeed, on the best account of what it means for a theory to be approximately true, it does not follow that an approximately true theory will be explanatorily successful. (Laudan 1981: 30–31)

By the ‘best account of what it means for a theory to be approximately true’, Laudan means Popper’s account of verisimilitude, which he subsequently rejects.13 Laudan further criticizes Newton-Smith’s view that the concept of approximate truth can be legitimately invoked even in the absence of a ‘philosophically satisfactory analysis’. According to Laudan, the problem is that the intuitive notion of approximate truth lacks the minimal clarity needed to ensure that it would explain science’s empirical success.

Finally, Laudan contends that even in the presence of an articulated semantic account of truthlikeness, the realist would have no epistemic access to it. He wouldn’t know whether his theory is actually truthlike or not. Nonetheless, on the basis of Niiniluoto’s account of truthlikeness (see A.2), the answer to this point is straightforward: properly construed, truthlikeness has both a semantic and an epistemic component.

2.5.2 The upward path

Laudan declares himself ready to assume, for the sake of argument, the truth of (T1). He maintains that the truth of (T2) does not follow, i.e. that the explanatory success of a theory cannot be taken as a rational warrant for its approximate truth. To this purpose, he lists an impressive number of past theories which, although empirically successful, are nowadays known to be not approximately true:

13The same is done in the Appendix to this book, yet I also show there that there are clearly better accounts of verisimilitude than the Popperian one.

- the crystalline spheres of ancient and medieval astronomy;

- the humoral theory of medicine;

- the effluvial theory of static electricity;

- “catastrophist” geology, with its commitment to a universal (Noachian) deluge;

- the phlogiston theory of chemistry;

- the caloric theory of heat;

- the vibratory theory of heat;

- the vital force theories of physiology;

- the electromagnetic ether;

- the theory of circular inertia;

- theories of spontaneous generation.

The list, which could be extended ad nauseam, involves in every case a theory that was once successful and well confirmed, but which contained central terms that (we now believe) were nonreferring. (Laudan 1981: 33)

This list has provided a lot of work for realist philosophers. One of the replies has come from McAllister (1993). He argues that many theories of the past that were highly valued were neither approximately true nor empirically successful.

McAllister denies that the past theories cited by Laudan had high degrees of empirical success. He presents his argument in terms of properties of theories (such as the “property of being mathematical, the property of according with the data of a certain set, and the property of being susceptible of concise formulation” (1993: 208)), and properties of properties of theories (“for instance, it may be a property of one possible property that different theories can possess it to different degrees, or that its presence in a theory is difficult to ascertain, or that it reveals itself in a theory only once the theory has been applied in the design of experiments” (1993: 208)). In particular, McAllister is interested in the properties diagnostic of high measures of a theory’s empirical success, among which he situates consistency with known data, explanatory power, the ability to generate novel predictions, simplicity, etc. The discovery of the relevant properties of empirical success is itself a task gradually achieved by science, greatly relying on empirical research. Accordingly, science has ceased to value several properties of theories, such as consistency with the Bible, with vitalism, with energetism and the like. Closer to contemporary science, the advent of quantum mechanics showed that determinism – a theoretical property intrinsic to the Newtonian paradigm – is not necessarily required as a property which theories must have in order to get closer to the truth.

In the light of this, McAllister concludes that

...the judgments made in the remote past about the [empirical success] measures of theories are in general not as reliable as those which take account of the later discoveries about the properties of theories. ...Therefore, the theories deemed successful in the history of science were deemed to be so on the basis only of a set of criteria constructed in the light of imperfect knowledge about the properties of the properties of theories. (McAllister 1993: 211–2)

I agree that McAllister’s argument is able to block Laudan’s claim concerning a number of theories from his list. However, the argument in itself is problematic. First, it relies on a notion of empirical success (his term is ‘observational success’) which he defines as synonymous with ‘empirical adequacy’. But as is well known, empirical adequacy consists in the truth of the empirical consequences derivable from a theory. Strict truth is certainly too strong a demand for empirical success. Second, the higher standards that contemporary science has imposed on the relevant properties of empirical success can be, if only incidentally, satisfied by some theories of the past. Thus, McAllister cannot exclude the possibility that some of the theories on Laudan’s list are in fact successful even by today’s lights. To conclude, I believe that we should admit the possibility that some of Laudan’s theories survive McAllister’s argument.

A different response to Laudan’s argument is given by Philip Kitcher (1993). He protests that Laudan’s argument “depends on painting with a very broad brush” (1993: 142), in the sense that Laudan’s examples, though admittedly empirically successful theories, are shown not to be approximately true by appeal to their idle or inessential parts. Here is Kitcher’s diagnosis of the examples from Laudan’s list:

Either the analysis is not sufficiently fine-grained to see that the sources of error are not involved in the apparent successes of past science or there are flawed views about reference; in some instances both errors combine. (Kitcher 1993: 143)

To illustrate this, Kitcher focuses on a central example in Laudan’s list, namely the electromagnetic ether in nineteenth century optics. Laudan insists on the crucial role that ether played in explaining the phenomena of reflection, refraction, interference, double refraction, diffraction, and polarization, as well as in making predictions as surprising as the bright spot at the center of the shadow of a circular disc, in Fresnel’s approach (Laudan 1981: 27). This is precisely the point that Kitcher contests: ether did not play a crucial role in nineteenth century electromagnetic theories.

Kitcher argues convincingly that in Fresnel’s theory ether was nothing but an idle presupposition of a successful problem-solving schema employed for optical phenomena. In particular, Fresnel’s problem-solving schema concerned questions of the form “What is the intensity of light received at point P?”, whose answer involves Huygens’s conception of the wavefront as a source of secondary propagation, and the method of integration over the entire length of the wavefront. This is a mathematical technique still employed by contemporary physics. By contrast, Fresnel’s considerations about the constitution of transversal electromagnetic waves played practically no role in the success of his theory. In the terminology Kitcher proposes, ether is a presuppositional posit within scientific practice, i.e. an entity that apparently has to exist if the instances of the schemata are to be true.14 This is to be contrasted with the working posits, i.e. with the putative referents of terms that occur in problem-solving schemata (Kitcher 1993: 149).

The ether is a prime example of a presuppositional posit, rarely employed in explanation or prediction, never subjected to empirical measurement (until, late in the century, A. A. Michelson devised his famous experiment to measure the velocity of the earth relative to the ether), yet seemingly required to exist if the claims about electromagnetic and light waves were to be true. The moral of Laudan’s story is not that theoretical positing in general is untrustworthy, but that presuppositional posits are suspect.

(Kitcher 1993: 149)

Therefore, as Kitcher concludes, a finer-grained approach to Laudan’s list indicates that the theoretical terms essential to successful problem-solving schemata prove to be referential. More generally, as Kitcher states, an empirically successful theory is indeed approximately true, provided that its theoretical postulates are indispensable to the derivation of the empirical consequences.

I take it that McAllister’s and Kitcher’s arguments – along with others, more or less successful15 – succeed in eliminating most of the theories on Laudan’s list. With respect to the few of them which possibly survive these arguments, recall that a reasonable realist does not assume that none of the well-established theories has ever been later refuted. As already mentioned, modern scientific realism ought to be a selective doctrine, capable of coping with the fact that not all scientific theories are to be taken realistically (see chapter 7 for more considerations on a selective scientific realism).

14Kitcher’s notion of a presuppositional posit is thus clearly reminiscent of Vaihinger’s ‘useful fictions’.

15Worth noticing are Harding and Rosenberg’s (1981) and Psillos’s (1999) arguments to the effect that non-referring theories can, nevertheless, be approximately true.

In light of all this, I conclude that Laudan’s objections do not succeed in showing that it is illicit to associate approximate truth with empirical success. Accordingly, they cannot speak against the realist’s right to rely on IBE in accounting for the empirical success of science.