Part II Towards a Flexible Bayesian Logic of Testing Descriptive Rules

3 Philosophical Considerations: The Problem of Induction and a Knowledge-Based Bayesian

3.1 The Fundamental Problem of Induction

Is the common-sense belief that the sun will rise tomorrow justifiable at least to some probabilistic degree, and is this belief more justified than the opposed assumption that the sun will not rise tomorrow? Hume’s formulation of the fundamental problem of induction made it apparent that all such inferences from known facts to general theories (and then again to specific predictions) may be no more than arbitrary and irrational habits. Before discussing this ‘scandal of modern philosophy’ and some possibilities for escaping this problem, we first explicate the notion of ‘induction’.

The Notion of Induction

‘Induction’ (lat.: inductio) is derived from Cicero’s translation of the Aristotelian term ‘ἐπαγωγή’ (epagoge). Despite changes in the meaning of epagoge or induction throughout the history of thought, these concepts were always used to describe the (rational) inference from the particular to the general (for a historical overview, see Lumer, 1990, 660 f.).

Besides this characterisation of induction as an inference from the particular to the general, induction is often characterised in opposition to deduction as an insecure generalisation (excluding what has been called exhaustive or mathematical induction6). Consistent with this usage, we are here only concerned with insecure empirical induction. Empirical induction is concerned with the derivation, confirmation or disconfirmation of general hypotheses or theories based directly on perceptions, sense data or empirical protocol sentences.

Philosophically, the notion of induction has a connotation of pure empiricism, although moderate rationalists, of course, have also used induction. For our discussion of the problems of induction and my later proposal of the term ‘synduction’, it is helpful to spell out this connotation in more detail. In the wake of empiricism and logical positivism, induction has often been seen, firstly, as exclusively a bottom-up process leading from the empirical to the theoretical. Secondly, it has been seen as what one may call an ‘external-to-internal process’ leading from external facts to internal theory. Extreme empiricism can be understood as the position that the mind is originally a blank tablet (tabula rasa) and that any knowledge it possesses is imprinted on it directly by the senses. According to empiricism, theories are to be tested against reality itself, and this testing should ideally not be affected by any prior knowledge. In this perspective, induction has often been seen as a neutral and passive process based on ‘given’ sense data. Knowledge should be acquired, but any use of knowledge to mould (or improve) the inductive process of acquiring new knowledge is generally seen as a distortion of a neutral, dispassionate and context-free process of passively receiving information from the senses.

Hume’s Fundamental Problem of Induction

Tentatively, we can characterise the fundamental problem of induction by the argument that any general hypothesis, theory or law, which transcends past experience and makes future predictions, can never be logically derived from observations – exactly because it transcends past experience. Therefore, any justification of an inductive or generalising inference may appear problematic.

6 Historically, an insecure generalisation has been called inductio probabilis (Albertus Magnus), as opposed to the complete and necessary generalisation called inductio perfecta. Today inductio perfecta is often seen as a particular case of deduction, a secure inference based on the exhaustive observation of all cases or on mathematical (complete) induction.

The fundamental problem of induction is sometimes attributed to the 18th century philosopher David Hume. However, the problem is as old as the reflection on the notion of induction itself. The problem of induction took centre stage in many classical treatments of epistemology, methodology and metaphysics, starting with Aristotle’s Organon (about 350 BC, e.g., Topica, I, pp. 8, 103, Analytica Posteriora, I, pp. 18, 31) and it provided a recurrent central theme in the history of Western philosophy.7

However, the most influential formulation of the problem of induction in modern philosophy is indeed found in David Hume’s A Treatise of Human Nature (1739, Book I, Part III). The original goal of the empiricist Hume was to show that knowledge about causal laws – which are understood as necessary and deterministic connections – could be rationally based on empirical grounds of sensory experiences. Nonetheless, his problem of induction made him prominent as a gravedigger of a rational justification of empiricism.

According to Hume, any universal causal law of nature can only be based on the constant conjunction in all past impressions. Yet we can never justify the assumption of a resulting necessary connection of ideas: “From the mere repetition of any past impression, even to infinity, there never will arise any new original idea, such as that of a necessary connexion; and the number of impressions has in this case no more effect than if we confin’d ourselves to one only” (Part III, Section 6, p. 88).

Hume’s original statement of the problem of induction actually affects not only the induction of a general law but also the induction of each causal observation connecting an entity A and an entity B, since each such observation involves the inference that these entities, and no other entities, were causally connected. To assess whether an observed entity A really caused entity B, we need to know, for instance, that the effect has not coincidentally been produced by some hidden cause. No causal relation can be observed directly. We only have impressions of object A being located to the left or right of B, but we never have a direct impression of A causing B. To infer that there was a causal connection between cause and effect, we need to know which entities are candidates for such a connection in the first place.

7 Also, for instance, pre-Humean rationalism depended on the fundamental problem of induction. Leibniz (c. 1704) wrote: "The senses, although they are necessary for all our actual knowledge, are not sufficient to give us the whole of it, since the senses never give anything but instances, that is to say particular or individual truths. Now all the instances which confirm a general truth, however numerous they may be, are not sufficient to establish the universal necessity of this same truth, for it does not follow that what happened before will happen in the same way again. […] From which it appears that necessary truths, such as we find in pure mathematics, and particularly in arithmetic and geometry, must have principles whose proof does not depend on instances, nor consequently on the testimony of the senses, although without the senses it would never have occurred to us to think of them […]" (Preface, pp. 150-151).

But to limit our observations to particular causally relevant cases begs the question of what is causally relevant (Part III, Section 3, cf. Section 6) and leads back to the problem of generalisation.

To solve the problem of generalisation one would have to beg the question by illegitimately assuming a constancy of nature: “there can be no demonstrative arguments to prove that those instances, of which we have had no experience, resemble those, of which we have had experiences. We can at least conceive a change in the course of nature […]” (Part III, Section 6, p. 89, cf. p. 91).

Hume considered the objection that a necessary (deterministic) conjunction of objects might be too strict a criterion for induction. Consequently, Hume also discussed probabilistic cause and effect relations and confirming or disconfirming evidence (Part III, Section 12, cf. Sections 6 and 11). For probabilistic laws, Hume likewise stressed that any calculation of probabilities is “founded on the transferring of past to future” (p. 137), and that “the supposition, that the future resembles the past, is not founded on arguments of any kind, but is deriv’d entirely from habit […]” (p. 134; see also p. 139).

Finally, Hume argued that any inductive justification of induction has to be excluded because this would necessarily be circular (Part III, Section 6, pp. 90-91; we will come back to this later).

Based on this fundamental problem, the empiricist Hume was led to take a sceptical stance on empirical knowledge. Although induction is ubiquitously used in human reasoning and science, Hume regarded induction as nothing but an irrational though useful habit, which in his view can neither be justified nor overcome.

Hence, Hume’s position was alleged to promote scepticism and irrationalism. For instance, Russell (1991/1961, 645) wrote that Hume’s philosophy represents “the bankruptcy of eighteenth-century reasonableness”. If there were no resolution of the problem of induction, “there is no intellectual difference between sanity and insanity. The lunatic who believes that he is a poached egg is to be condemned solely on the ground that he is in a minority […]” (646). Hume’s formulation of the fundamental problem of induction had an influence on modern philosophy which cannot be overestimated. Even Kant (1781/87), who did not aim to abandon the enlightenment project, wrote that Hume’s scepticism awoke him from his ‘dogmatic slumbers’.

However, Kant, Mill, Cassirer, Russell, Carnap, Reichenbach and Popper all developed positions to overcome this problem and to justify induction. Since I cannot provide a full history of the fundamental problem of induction here, I will directly discuss Popper’s falsificationism, which – as we have seen – has also been crucial for the psychology of hypothesis testing.

Falsificationism as a Proposal to Solve the Problem of Induction

Popper’s (1934, 1972, 1974, 1996) falsificationist methodology directly builds on Hume’s fundamental problem of induction (e.g., Popper, 1934, Chapter 1, cf. Chapter 10; 1972, Chapters 1 and 2). Hume showed that induction is not rationally justified, but he still regarded it as practically indispensable. Popper follows Hume in the assessment that induction is irrational, but he consistently discards induction and replaces it with a negative methodology based exclusively on falsification (1934, 14, 226; 1972, Chapter 1; cf. Quine, 1974).

Popper reformulated Hume’s problem of induction based on formal logic (1972, 7, 12, 20; 1934, 45). Popper connects Hume’s problem of induction to the argument that there is a logical asymmetry of verification (total confirmation) and falsification (total disconfirmation) of single universal statements (hypotheses, laws or theories). A universal hypothesis can be falsified by observations, but it can never be verified.

A universal deterministic hypothesis H can logically be falsified by one single observed counterexample (¬O), which by modus tollens proves the hypothesis false: (H → O) ∧ ¬O ⇒ ¬H. In contrast, even a large number of ‘confirming’ observations never allows a verification of a hypothesis. It always remains possible that future evidence will falsify a highly ‘confirmed’ hypothesis. Popper argued that the only methodologically correct way out of the problem of induction is to abandon induction. According to Popper, one should only be concerned with the falsification of theories, never with their confirmation.
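
To make the asymmetry concrete, the following minimal sketch in Python may be helpful (the data and function names are purely hypothetical illustrations, not part of Popper’s own apparatus): a single counterexample suffices to return ‘falsified’, whereas no finite stream of positive instances can ever return ‘verified’:

    # Minimal sketch of the falsificationist asymmetry for a universal,
    # deterministic hypothesis of the form 'for all x: O(x)'.
    def test_universal(hypothesis, observations):
        """Return 'falsified' on the first counterexample; otherwise only
        'not yet falsified' -- 'verified' is never a possible result,
        because future observations always remain open."""
        for obs in observations:
            if not hypothesis(obs):
                return "falsified"      # modus tollens: (H -> O) and not-O, hence not-H
        return "not yet falsified"      # many confirmations, but no verification

    # Hypothetical example: 'all ravens are black'
    ravens = [{"colour": "black"}] * 1000 + [{"colour": "red"}]
    print(test_universal(lambda r: r["colour"] == "black", ravens))  # -> 'falsified'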

In Objective Knowledge: An Evolutionary Approach (1972), Popper additionally links his falsificationism with a Darwinian metaphysics. On his account, any process of knowledge acquisition, from the amoeba to Einstein, is always concerned with blind conjectures and external refutations, similar to mutation and selection in biology and to trial and error in the psychology of learning (for a detailed discussion of this interesting position, see v. Sydow, 2001).

The resulting falsificationist methodology of hypothesis testing became the normative standard in the psychology of reasoning, particularly in the research on the WST (cf. Chapters 1 and 2).

In the next section, falsificationism will be criticised as not resolving Hume’s actual fundamental problem but as falling prey to the problem itself. Subsequently, and as a way out, I try to expound an alternative knowledge-based solution to the problem of induction.

Falsificationism Falsified? – The Fundamental Problem of Falsification

The negative ‘solution’ of the problem of induction by falsificationism, which completely discards the concept of induction by confirmation, can be criticised. Firstly, I will illustrate how the logical asymmetry giving rise to falsificationism is restricted to a certain domain of assertions, and, secondly, that falsificationism, in my opinion, does not resolve the fundamental problem of induction, since it limits itself to what one may call ‘disconfirmatory induction’.

Criticism of Falsificationism Restricting its Domain of Applicability

One objection to falsificationism as a general methodology is that the logical asymmetry of falsification and verification is not valid for all kinds of logical propositions:

Firstly, existential statements, like ‘genetic engineering has produced a red raven’, can never be falsified but only verified (Quine, 1974; cf. Popper, 1934, 40). Moreover, sentences like ‘at least n objects of p are q’ can only be verified and not falsified.

Secondly, Quine (1974) conceded that the asymmetry is valid for single universal statements with a single quantification, but he objected that it does not equally hold for more complex statements. Universal statements that are quantified in multiple ways cannot be falsified unambiguously. Moreover, scientific theories normally have a complex structure and are quantified in multiple ways. For these logical reasons, Quine (1974) concluded that Popper’s negative methodology cannot be a general methodology of science, but only a methodology for single universal statements (with one quantification). This is connected to the next point.

Thirdly, according to the Quine-Duhem problem of falsification (and confirmation), an empirical hypothesis (HT) is never tested alone; it is always tested together with some auxiliary hypothesis (HA), for instance concerning the process of measurement. Hence, an observed falsification (¬O) need not be attributed to the falsity of the theory – it may equally be attributed to the falsity of the auxiliary hypothesis alone: (HT ∧ HA → O) ∧ ¬O ⇒ ¬HT ∨ ¬HA (cf., e.g., Chalmers, 2001, 74 f.).
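
A small sketch in Python (a hypothetical illustration, not drawn from Quine or Duhem) makes the ambiguity explicit: enumerating all truth assignments consistent with (HT ∧ HA → O) ∧ ¬O leaves three candidates, so the failed test alone cannot decide between blaming the theory and blaming the auxiliary hypothesis:

    from itertools import product

    # Enumerate the truth assignments of (HT, HA) consistent with the premises
    # (HT and HA -> O) and not-O; implies(a, b) encodes material implication.
    def implies(a, b):
        return (not a) or b

    O = False                                  # the predicted observation failed
    for HT, HA in product([True, False], repeat=2):
        if implies(HT and HA, O):              # is this assignment consistent?
            print(f"HT={HT}, HA={HA} is consistent with the failed test")
    # Three assignments survive -- exactly those with not-HT or not-HA -- so
    # the falsification does not tell us whether HT or HA (or both) is false.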

Fourthly, the asymmetry is not valid for probabilistic laws, for which we predict only a certain proportion of correct observations. Hence, falsificationism cannot be applied in a straightforward way to probabilistic hypotheses, and it should be noted that almost all psychological hypotheses belong to this class. Likewise, if one aims to test a deterministic functional law, one generally makes use of a region of tolerance, which renders a strict application of a falsificationist methodology impossible.8
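
The point can be illustrated with a short calculation (a sketch with made-up numbers): a probabilistic law assigns a nonzero probability to any finite pattern of outcomes, so no finite observation logically contradicts it:

    from math import comb

    # Probability of k 'confirming' cases in n trials under a probabilistic
    # law claiming a success rate p (binomial probability mass function).
    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # Under the hypothetical law 'q follows p in 90% of cases', even 10
    # exceptions in 100 trials are quite probable -- and even 100 exceptions
    # retain a probability greater than zero.
    print(binom_pmf(90, 100, 0.9))   # about 0.13: no contradiction at all
    print(binom_pmf(0, 100, 0.9))    # about 1e-100: tiny, but not zero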

Fifthly, Putnam (1974) and Lakatos (1974) have pointed out that in the history of science theories were almost never discarded because of a single recalcitrant falsification. Moreover, it would always have been disadvantageous to discard them on this basis, since almost all valid theories would have been prematurely discarded (cf. also Chalmers, 2001, 76). The Göttingen scientist and aphorist Georg Lichtenberg (1742-1799) had wittily made the same point much earlier: “One should not take note of contradictory experiences until there are enough of them to make constructing a new system worthwhile.”

For these reasons, falsificationism cannot count as a methodology for science with general applicability. But this does not entitle us to conclude that falsification is not the correct norm of hypothesis testing if we have to test a simple universal and deterministic hypothesis and if the auxiliary hypotheses can be neglected. Because we will be concerned with such hypotheses, I outline an even more fundamental criticism in order to discard falsificationism.

The Fundamental Problem of Falsification

In my opinion, falsificationism does not solve Hume’s problem of how to transfer past knowledge to the future, and hence falsificationism itself falls prey to this problem. First the transferability of falsifications from past to future will be discussed, before the focus is switched to Popper’s theory of corroboration.

8 The theory of statistical tests by Sir Ronald A. Fisher (1890-1962) is sometimes interpreted as a falsificationist approach in the broad sense, since it is concerned only with the refutation of a null hypothesis. One may argue that this refined ‘falsificationism’ can obviously handle probabilistic hypotheses. However, one may alternatively argue that Fisher’s theory is fundamentally one-sided and that the theory of Jerzy Neyman (1894-1981) and Egon Pearson (1895-1980) provides a preferable and more balanced approach, considering both kinds of possible decision errors (cf. Hager, 2004).

(a) Opposing Popper, one may apply Hume’s sceptical argument not only to confirmations but also to falsifications of a universal hypothesis: a falsification implies nothing about whether the hypothesis will be false in respect of future instances or not. Hume himself (1739, Book I, Part III, Section 12, pp. 134 f.) discussed evidence which is contrary to a tested hypothesis. To use disconfirming and falsifying evidence for future predictions again makes the illegitimate assumption that “instances, of which we have no experience, must necessarily resemble those of which we have” (p. 135). Popper seems to ignore this point, but the problem of applying past knowledge to the future is equally valid for falsifications.

For example, let us assume that two million years ago a hypothetical scientist put forward the optimistic hypothesis that “all adult and living members of the genus Homo have a brain size of over 1000 cm3”. Our hypothetical scientist would have had no problem falsifying this hypothesis, since it is generally accepted today that presumably every member of the genus Homo at that time would have falsified the claim. Today it is assumed that Homo habilis had a brain size of about 650 cm3 (varying between 500 and 800 cm3). Nonetheless, the claim has become true today: Homo sapiens sapiens has a brain size of about 1350 cm3 (varying between 1300 and 1500 cm3). One may object that using a different hypothesis, one without temporal restriction, would allow us to argue that logically the hypothesis has nonetheless been falsified and remains false. This is correct. However, would we have learned anything from a falsification used only in this way? On this usage, falsifications would merely restate a particular past (negative) observation, without teaching us anything for predicting or explaining the future. In this sense falsificationism does not solve Hume’s problem of transferring past experiences to the future in a rational way.

(b) Falsificationism – like inductionism – obviously needs to provide some measure of which of two or more competing theories (none of which has been falsified) should be preferred. Without such a measure, falsificationism would not be able to distinguish well-established theories (say, the theory of quantum mechanics) from other theories that – ceteris paribus – have not been tested at all. In my view, two opposed positions on this matter can be found in Popper’s writings, both of which lead to fundamental problems.

On the one hand, Popper (1934, 1972) advocated that one should prefer the theory that has been corroborated more frequently and that has a higher resulting ‘verisimilitude’. A theory is corroborated if it has been tested and has not been falsified in this test. Putnam (1974, 222-223) and Lakatos (1974, 256, 261) have argued that Popper’s theory of corroboration is a pseudo-theory of confirmation: it still transfers degrees of corroboration from the past to the future. But since Popper claims to follow Hume in not allowing for such an inference, his approach is inconsistent.

On the other hand, in other passages Popper (1972, 18-19; cf. 22) makes clear that different degrees of corroboration do not provide a basis upon which to make predictions either. The degree of corroboration would be a “report of past performance only”, saying “nothing whatever about future performance, or about ‘reliability’ of a theory” (p. 18). But why then should we use Popper’s degree of corroboration to assess competing theories? If the degree of corroboration says nothing about the probability of a theory being corroborated in future events, it would be absurd to use it as a criterion to distinguish between theories. Putnam (1974) criticised this clearly: if there “were no suggestion at all that a law which has withstood severe tests is likely to withstand further tests”, no theory would be more confirmed than any other, and “science would be a wholly unimportant activity”. A Popperian may perhaps object that past corroborations are still relevant to describe our past experience.
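
Before turning to the reply to this objection, the contrast Putnam presupposes can be made concrete with a small, hedged illustration (Laplace’s classical rule of succession, a simple Bayesian formula that is of course no part of Popper’s account): on such an account, past corroborations do alter the probability assigned to the next instance – precisely the inference Popper disallows:

    from fractions import Fraction

    # Laplace's rule of succession: after s successes in n trials (under a
    # uniform prior), the probability that the next trial succeeds is
    # (s + 1) / (n + 2), so past 'corroborations' bear on future instances.
    def rule_of_succession(successes, trials):
        return Fraction(successes + 1, trials + 2)

    print(rule_of_succession(0, 0))      # 1/2: an untested hypothesis
    print(rule_of_succession(100, 100))  # 101/102: a heavily corroborated one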

Nevertheless, according to Popper’s radical Humean account the description of the past is completely useless for most purposes, since it conveys no knowledge at all to access the future. With regard to the future there would be no reason at all to keep track of past corroborations and falsifications. Hence, the application of Hume’s problem to Popper’s position renders past knowledge – even knowledge about
