
The lesson of the SMT case study is that both instrumentalism and a moderate social constructivism are present in science, at least in certain of its episodes.

It was argued in the preceding chapter that social factors have a limited influence in science, and that the thesis of social determination of scientific beliefs is self-undermining. Besides, given the compelling nature of deductive and mathematical reasoning, social explanation is not required in mathematics. However, SMT highlights the role of scientists as creative agents who at critical junctures make theoretical choices that are decisive for the development of whole research programs. Such choices are not entirely constrained by causal relations, but they take place in a tolerance space. Divergent research paths may open up, and the choice among them is physically underdetermined. As Cushing (1982; 1990) has repeatedly emphasized, the structure of the community of scientists becomes relevant in this respect. At least in HEP's case, as already mentioned, this structure has a marked pyramidal form. The major research options are presented to the community by relatively few dominant epistemic authorities situated at the apex of the pyramid. In SMT and QFT, these epistemic authorities were Chew, Goldberger, Gell-Mann, Low, Mandelstam, and a few others. Meanwhile, the majority of scientists 'play the game' as already outlined. This explains in part the stability and convergence of opinions in the research programs.

It is a general fact about science that few people are in a position to change scientific theories, which is why the history of science is characterized by stability rather than by turning points. The point is aptly phrased by Cushing:

of course, such a theory is stable (for a longer or shorter period of time), until some more clever person finds the crucial chink. This stability is related to the fact that very few people have the ability and good fortune to create a theory that can cover a set of data and keep them nailed down while the theory is adjusted to cover some new phenomenon. (Cushing 1990: 245)

Apart from that, once a theory has established itself in a field, it is defended against newcomers. This is another important stability factor. First, simple empirical adequacy will not do for the candidate theory;1 since the old one must also be empirically adequate, the new one has to do better than this in order to occupy the throne. Second, as the choice of the relevant class of scattering events in HEP shows, the relevant questions are tailored to fit the interests and competence of the old theory's supporters. This diminishes the challenger's chances of gaining acceptance.

1This seems to have happened to Bohm's deterministic version of quantum mechanics.

Another important question is how constraining the physical phenomena are on the process of theorizing. Are these constraints sufficient to determine a scientific theory uniquely? The SMT case study shows that accommodating the empirical phenomena was the central concern. But no claim of more than empirical adequacy is warranted. We have already illustrated the pragmatism of the participants in the SMT program. The ease with which some of them switched their theoretical frameworks displays little ontological commitment. Both SMT and QFT were used instrumentally, and some theoreticians openly expressed their theoretical opportunism: they used whichever approach promised to solve the immediate problems.

Once the ontological anchor comes loose, the uniqueness of physical theories becomes less probable. Both QFT and SMT were empirically successful theories in circumscribed domains, but the choice between them was, in the domain of overlap, underdetermined by the empirical data. At different moments epistemic considerations – simplicity (especially calculational), predictive power, theoretical potential – recommended one theory over the other. However, we saw that in both QFT and SMT any conceptual concession was made for the sake of empirical success.

Using the terminology employed in chapter 5, the kind of underdetermination of the choice between SMT and QFT stems from EE2, the thesis that a given theory T has empirical rivals such that when T, and respectively its rivals, are conjoined with the set At of auxiliary hypotheses acceptable at time t, they entail the same observational sentences. It was argued that this sort of empirical equivalence does not entail a version of underdetermination problematic to scientific realism. In our case, the temporal index t traversed a period of almost three decades, after which SMT was abandoned without direct empirical refutation. However, it cannot be excluded that novel empirical evidence might have made a crucial experiment feasible. In any event, QFT developed thereafter without a contender of SMT's caliber. Gauge QFT became the 'standard model', offering us the most detailed and accurate picture of the internal constitution of matter by means of the electro-weak theory and of quantum chromodynamics.

A final issue to be discussed addresses the inevitability of constructed theories. In line with Cushing (1990: 266), we can ask whether theoretical physics would have arrived at one of its most promising constructs, superstring models, if it had not been for the existence, nearly four decades earlier, of Heisenberg's S-matrix theory, which ultimately proved in its own right to be a dead end.

It is impossible to prove that we would not have arrived at the quantized string model by another route. No doubt we might have, though I believe this eventuality is rather implausible. Nonetheless, among the most creative and influential high-energy physicists there is a firm belief in a cognitive necessity dictating the normal succession of scientific discoveries. Richard Feynman, for example, said that if Heisenberg had not done it, someone else soon would have, as it became useful or necessary. Richard Eden, one of the early contributors to SMT, takes a similar position:

a general view that I would support [is] that most people’s research would have been discovered by someone else soon afterwards, if they had not done it. I think that this would have been true for Heisenberg’s S-matrix work. (Eden: correspondence with Cushing (1990: 267))

This strikes me as unjustified optimism. What kind of necessity can guarantee that a theory created by a scientist would, under different circumstances, have been created by another?2 I shall not press this issue, since even if we grant that most discoveries would have been made by someone else, the question remains open as to whether, at critical junctures, the same creative moves would have been made. In other words, while it is indeed plausible that quantum theory would have been created even without Niels Bohr, would it necessarily have looked like the Copenhagen version? To answer in the affirmative would be a risky induction. Even the most important theoretical decisions involve a degree of historical contingency. It is not unimaginable that SMT could have been preferred over QFT.

2I have in mind situations which precede the constitution of well-established theories. Given quantum mechanics, it is highly probable that Heisenberg's uncertainty relations would have been discovered by someone else too; the internal logic of the discipline would have imposed it. However, given classical physics, no one need have discovered quantum mechanics. (I am grateful to J. R. Brown for drawing this point to my attention.)

These considerations do not in any respect disconfirm the accuracy of the scientific realist account of science. They only raise doubts about a frequent interpretation of scientific realism as an overarching doctrine. A scientific realism more true to scientific practice ought to be selective, i.e. it ought to tolerate at its side episodes in which instrumentalism and moderate social constructivism are present. Certainly, it would be nice to have a clear-cut definition of such a selective scientific realism, but that would demand a principled distinction between theories urging a realist interpretation and theories urging an instrumentalist/constructivist interpretation. We cannot provide that, and it is doubtful that it can be provided.

For reasons set out in 2.1.3, scientific realism accounts for the majority of scientific theories. The identification of those theories where instrumentalism and constructivism hold sway demands empirical investigation, that is, a case-by-case analysis. Nonetheless, we have hinted at the main suspects: the abstract model theories with complex mathematical formalisms and loose causal connections to the physical world.

Chapter 8

Appendix: Truthlikeness

For multiple reasons, strict truth is not to be had in science. Scientific theories almost without exception involve idealizations, approximations, simplifications, and ceteris paribus clauses. Scientific predictions can only be verified within the limits of experimental error, stemming both from calculation and from unremovable 'bugs' and 'noise' in the experimental apparatus. Therefore, to impose standards on science so high as to accept only true sentences would mean expelling most of the scientific corpus as we know it.

For the antirealist, this is already a reason for scepticism. He explains the obvious empirical success of science as a matter of selection, through trial and error, of the lucky scientific theories – yet we have seen that this is not, properly speaking, an explanation. The realist is an epistemic optimist. Aware that we cannot reach the exact truth, he resolutely maintains that we can well live with a more modest, fallibilist conception of science, according to which our best theories are truthlike, that is, ascertainably close to the truth. For many practical purposes, we have a fairly dependable intuitive notion of closeness to the truth.1 As Devitt (2002) indicates, "a's being approximately spherical explains why it rolls." However, against the advocates of an intuitive approach to truthlikeness (like Psillos 2000), it will be argued (A.2) that many problems related to the dynamics of scientific theories (e.g. unification, reduction, theory replacement) demand fairly accurate measurements of the distance to the truth. In fact, we have an acceptable quantitative theory of this kind in Niiniluoto's (1978; 1999) account. In A.3 we shall critically consider the position of Ronald Giere (1988, 1999), an opponent of the notion of truthlikeness.

Let us now proceed with Karl Popper's pioneering approach to verisimilitude.

1The notions of approximate truth, truthlikeness, and verisimilitude are used interchangeably by some authors. Still, though closely related, they should, for reasons to be set out in A.2, be kept apart.

A.1 Popper’s theory of verisimilitude

According to Popper's (1963; 1972) falsificationist view of science, theoretical hypotheses are conjectured and tested through the observable consequences derived from them. If a hypothesis passes the test, then it is 'corroborated'. Being corroborated really means nothing more than being unrefuted by experimental testing. In particular, corroboration is not to be taken as an indicator of evidential support or confirmation for the hypothesis – Popper wanted his view to be thoroughly deductivist, excluding notions such as induction or confirmation.

Popper suggested, however, that corroboration is a fallible indicator of verisimilitude, that is, of likeness to the truth. As Niiniluoto (1999: 65) notes, although the concept of probability is derived from the Latin verisimilitudo, Popper distinguished verisimilitude from probability. His insight was to define verisimilitude in purely logical terms. He took theories to be sets of sentences closed under deduction, and defined verisimilitude by means of relations between their truth-content and falsity-content:

Theory A is less verisimilar than theory B if and only if (a) their truth-contents are comparable, and (b) either the truth-content of A is less than the truth-content of B and the falsity-content of B is less than or equal to the falsity-content of A; or the truth-content of A is less than or equal to the truth-content of B and the falsity-content of B is less than the falsity-content of A. The truth-content of a theory T is the class of all its true consequences; the falsity-content of T is the class of all its false consequences. (Popper 1972: 52)

Expressed formally, theory B is more truthlike than theory A if and only if either

A ∩ T ⊂ B ∩ T and B ∩ F ⊆ A ∩ F,   (1)

or

A ∩ T ⊆ B ∩ T and B ∩ F ⊂ A ∩ F,   (2)

where T and F are the sets of true and false sentences, respectively, ⊂ denotes proper set-theoretic inclusion, and ⊆ inclusion. However, Pavel Tichý (1974) and David Miller (1974) proved that Popper's definition is defective. They demonstrated that, whenever B is a false theory, the two conditions in (1) cannot both be satisfied, and the same goes for (2).
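To make the definition concrete, here is a minimal sketch in Python, under illustrative assumptions of my own: a toy propositional language with three atoms, sentences identified up to logical equivalence with the sets of possible worlds at which they are true, and an arbitrarily designated actual world playing the role of 'the truth'. The names (popper_more_truthlike, ACTUAL, the atoms) are not from the source; the function simply encodes conditions (1) and (2), and on two false theories it returns False in both directions, in line with the Tichý-Miller result proved below.

```python
from itertools import product, combinations

ATOMS = ("hot", "rainy", "windy")                        # hypothetical toy atoms
WORLDS = list(product([True, False], repeat=len(ATOMS)))
ACTUAL = (True, True, True)                              # designated 'truth' (an assumption)

def propositions():
    """Every sentence of the toy language, identified up to logical
    equivalence with the set of worlds at which it is true."""
    for r in range(len(WORLDS) + 1):
        for combo in combinations(WORLDS, r):
            yield frozenset(combo)

def contents(models):
    """Truth-content and falsity-content of a theory given by its models:
    the true and the false members of its consequence class."""
    models = frozenset(models)
    cons = [s for s in propositions() if models <= s]    # consequences: true in every model
    truth = {s for s in cons if ACTUAL in s}
    falsity = {s for s in cons if ACTUAL not in s}
    return truth, falsity

def popper_more_truthlike(b_models, a_models):
    """B is more truthlike than A iff condition (1) or condition (2) holds."""
    a_t, a_f = contents(a_models)
    b_t, b_f = contents(b_models)
    cond1 = a_t < b_t and b_f <= a_f                     # (1): strict on truth-contents
    cond2 = a_t <= b_t and b_f < a_f                     # (2): strict on falsity-contents
    return cond1 or cond2

# Two false theories, each pinned down to a single world:
A = [(False, False, False)]    # wrong about every atom
B = [(True, True, False)]      # wrong about one atom only
print(popper_more_truthlike(B, A))   # False
print(popper_more_truthlike(A, B))   # False: the two false theories are incomparable
```

The semantic shortcut of representing 'all consequences of a theory' by the propositions true in all of its models merely keeps the consequence classes finite; nothing in the comparison hinges on it.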

To see why, assume that (1) is the case and that B is a false theory. Then B has at least one true consequence which A lacks; let us label this sentence q. There are also false sentences common to A and B: B, being false, has at least one false consequence, which by (1) is a false consequence of A as well. Let us take one of them and label it p.

It follows that

p & q ∈ B ∩ F and p & q ∉ A ∩ F,

since B entails both conjuncts, p & q is false because p is, and A cannot entail the conjunction without entailing q. It turns out that, contrary to our initial assumption, there is at least one false consequence of B which is not a false consequence of A.

Assume now that (2) is the case and that B is again false. Then A has at least one false consequence which B lacks; let us label this sentence r. Take any false consequence common to A and B, say k (such a sentence exists, since B's false consequences are, by (2), false consequences of A as well).

It follows then that

k → r ∈ A ∩ T and k → r ∉ B ∩ T:

the conditional k → r is true, since its antecedent k is false; it is a consequence of A, since A entails r; and it is not a consequence of B, for otherwise B, which entails k, would also entail r, whereas r ∉ B. So, contrary to the initial assumption, there is at least one true consequence of A which is not a true consequence of B. Therefore, whenever B is false, neither (1) nor (2) can hold.
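The two constructions can also be checked mechanically. The following sketch, again with hypothetical atoms and an arbitrarily chosen actual world of my own devising (the helpers entails and is_true are illustrative names, not anything from the source), represents sentences as functions from worlds to truth values and verifies by brute force that p & q is a falsity gained by adding a truth to a false theory, and that k → r is a truth lost by removing a falsity from one.

```python
from itertools import product

# Toy semantics: a world assigns truth values to two hypothetical atoms.
WORLDS = list(product([True, False], repeat=2))   # (hot, rainy)
ACTUAL = (True, True)                             # arbitrarily designated 'truth'

# Sentences as functions from worlds to truth values.
rainy = lambda w: w[1]
not_hot = lambda w: not w[0]
not_rainy = lambda w: not w[1]

def entails(theory, sentence):
    """A theory (a list of sentences) entails a sentence iff the sentence
    holds at every world satisfying all members of the theory."""
    return all(sentence(w) for w in WORLDS if all(ax(w) for ax in theory))

def is_true(sentence):
    return sentence(ACTUAL)

# Case (1): adding the truth q = rainy to the false theory A = {not_hot}
# brings in the new falsity p & q.
A = [not_hot]
B = [not_hot, rainy]
p_and_q = lambda w: not_hot(w) and rainy(w)
print(entails(B, p_and_q), entails(A, p_and_q), is_true(p_and_q))
# -> True False False: a false consequence of B which A lacks

# Case (2): dropping the falsity r = not_hot from A2 = {not_hot, not_rainy}
# loses the truth k -> r, with k = not_rainy.
A2 = [not_hot, not_rainy]
B2 = [not_rainy]
k_implies_r = lambda w: (not not_rainy(w)) or not_hot(w)
print(entails(A2, k_implies_r), entails(B2, k_implies_r), is_true(k_implies_r))
# -> True False True: a true consequence of A2 which B2 lacks
```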

The vulnerable point in Popper's account is that the verisimilitude of false theories cannot be compared. Suppose that, starting from the false theory A, we try to obtain a more verisimilar theory B by adding true statements to A. The problem is that we thereby add to B falsities which are not consequences of A. The situation is symmetrical when we try to improve on A's verisimilitude by taking away some of its falsities: we thereby also give up true consequences of A which are no longer consequences of B.

In spite of the failure of Popper's definition, the basic line of his approach to verisimilitude has been appropriated and integrated into more complex theories. According to Niiniluoto, what was missing from Popper's approach was a notion of similarity or likeness:

truthlikeness = truth + similarity.

This approach was first proposed by Hilpinen and Tichý (1974), and thereafter developed by Niiniluoto, Tuomela and Oddie.