

5.1. Byrne’s Suppression Task

We have briefly presented the layout of the task in the introduction; it will now be discussed in detail. Byrne’s suppression task consists of two parts: the results of the first part are obtained by forward reasoning, whereas the results of the second part are obtained by backward reasoning. Sections 5.1.1 and 5.1.2 explain Stenning and van Lambalgen’s two-step approach and their techniques by means of the first part of Byrne’s suppression task. Furthermore, we show the cases where their approach fails and why it requires the Weak Completion Semantics. These techniques are analogously applied to the second part of Byrne’s suppression task, which is presented in Section 5.1.3.

5.1.1 Representation as Logic Programs

Stenning and van Lambalgen’s first step of formalizing human reasoning, reasoning towards an appropriate representation, deals with conceptual cognitive adequacy, which has already been discussed in the introduction. In particular, Stenning and van Lambalgen argue that conditionals shall not be encoded by inferences straight away, but rather by licenses for inference. Consider again the simple conditional from the introduction: ‘if she has an essay to write (e), then she will study late in the library (l)’ should be encoded by the clause l ← e ∧ ¬ab1, where ab1 is an abnormality predicate and true if something

1 Section 5.1.4 has been published in (Dietz, Hölldobler and Wernhard 2014). Section 5.3 has been published in (Dietz, Hölldobler and Ragni 2012a). The original idea of Section 5.2 was first published in (Dietz, Hölldobler and Ragni 2012b) and an improved version has been published in (Dietz, Hölldobler and Ragni 2013).

              Conditionals                         Assumptions           Facts/Assumptions

Pe      = {l ← e ∧ ¬ab1,                       ab1 ← ⊥,              e ← ⊤}
Pe+Alt  = {l ← e ∧ ¬ab1,  l ← t ∧ ¬ab2,        ab1 ← ⊥,  ab2 ← ⊥,    e ← ⊤}
Pe+Add  = {l ← e ∧ ¬ab1,  l ← o ∧ ¬ab3,        ab1 ← ¬o, ab3 ← ¬e,   e ← ⊤}
P¬e     = {l ← e ∧ ¬ab1,                       ab1 ← ⊥,              e ← ⊥}
P¬e+Alt = {l ← e ∧ ¬ab1,  l ← t ∧ ¬ab2,        ab1 ← ⊥,  ab2 ← ⊥,    e ← ⊥}
P¬e+Add = {l ← e ∧ ¬ab1,  l ← o ∧ ¬ab3,        ab1 ← ¬o, ab3 ← ¬e,   e ← ⊥}

Table 5.1.: Representational form of the first part of the suppression task according to Stenning and Lambalgen (2008).

abnormal is known. In other words, l holds if e is true and nothing abnormal is known (¬ab1), i.e. everything abnormal is false.

Table 5.1 shows the representational form of the first part of the suppression task as modeled by Stenning and van Lambalgen. In the first three cases, in addition to the conditionals, the participants had to draw conclusions based on the fact that ‘she has an essay to write’ (e ← ⊤). In the last three cases, they had to draw conclusions based on the assumption that ‘she does not have an essay to write’ (e ← ⊥). The predicates ab1, ab2 and ab3 represent abnormalities with respect to the different conditionals, respectively.

The programs Pe+Alt and Pe+Add contain two clauses with the conclusion l. They differ in that the premise of the second clause in Pe+Alt is an alternative to the first clause, whereas in Pe+Add the premise of the second clause is an addition to the first clause. The second clause in Pe+Add (l ← o ∧ ¬ab3) takes effect as an additional precondition for l. This is represented by the clause stating that ab1 is true when ‘the library does not stay open’ (ab1 ← ¬o) and the clause stating that ab3 is true when ‘she does not have an essay to write’ (ab3 ← ¬e).
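The licenses-for-inference reading is easy to make concrete in code. The following is a minimal sketch in Python; the clause encoding and all names are my own, not part of the thesis. It represents Pe+Add from Table 5.1 as a set of clauses and computes its undefined atoms, which become relevant for abduction in Section 5.1.3:

```python
# Minimal clause encoding for the programs of Table 5.1 (my own
# representation): a clause is a (head, body) pair, a body is a list of
# literals (atom, True) for the atom itself and (atom, False) for its
# negation; TOP and BOT stand for the truth constants in facts
# (A <- TOP) and assumptions (A <- BOT).

TOP, BOT = 'TOP', 'BOT'

P_E_ADD = [
    ('l',   [('e', True), ('ab1', False)]),  # l <- e and not ab1
    ('l',   [('o', True), ('ab3', False)]),  # l <- o and not ab3
    ('ab1', [('o', False)]),                 # ab1 <- not o
    ('ab3', [('e', False)]),                 # ab3 <- not e
    ('e',   [TOP]),                          # fact: e <- TOP
]

def atoms(program):
    """All atoms occurring in a program."""
    result = set()
    for head, body in program:
        result.add(head)
        for lit in body:
            if lit not in (TOP, BOT):
                result.add(lit[0])
    return result

def undefined(program):
    """Atoms that are head of no clause; their facts and assumptions
    serve as abducibles later on."""
    return atoms(program) - {head for head, _ in program}

print(undefined(P_E_ADD))  # only o is undefined in P_{e+Add}
```

Note that the abnormality predicates are defined (they have clauses), so only o remains undefined here.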

5.1.2 Forward Reasoning

Adopting the programs obtained by Stenning and van Lambalgen as the result of the first step, reasoning towards an appropriate representation, we now focus on the second step, the inferential aspect, which corresponds to inferential cognitive adequacy.

The second column of Table 5.2 shows the weak completion of the programs encoding the first six examples of the suppression task. As already discussed in Chapter 2.3, the weak completion of all programs admits the model intersection property; therefore, we can reason with respect to their least models. The third column of Table 5.2 depicts the corresponding least models. As the last column of Table 5.2 shows, our approach coincides with the seemingly favored results of the suppression task and thus appears to be inferentially adequate.

The weak completion of P                                    lm wcP                            Byrne

wcPe      = {l ↔ e ∧ ¬ab1, ab1 ↔ ⊥, e ↔ ⊤}                 ⟨{e, l}, {ab1}⟩       |=wcs l      96% L
wcPe+Alt  = {l ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2),
             ab1 ↔ ⊥, ab2 ↔ ⊥, e ↔ ⊤}                      ⟨{e, l}, {ab1, ab2}⟩  |=wcs l      96% L
wcPe+Add  = {l ↔ (e ∧ ¬ab1) ∨ (o ∧ ¬ab3),
             ab1 ↔ ¬o, ab3 ↔ ¬e, e ↔ ⊤}                    ⟨{e}, {ab3}⟩          ⊭wcs l ∨ ¬l  38% L
wcP¬e     = {l ↔ e ∧ ¬ab1, ab1 ↔ ⊥, e ↔ ⊥}                 ⟨∅, {e, l, ab1}⟩      |=wcs ¬l     46% ¬L
wcP¬e+Alt = {l ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2),
             ab1 ↔ ⊥, ab2 ↔ ⊥, e ↔ ⊥}                      ⟨∅, {e, ab1, ab2}⟩    ⊭wcs l ∨ ¬l  4% ¬L
wcP¬e+Add = {l ↔ (e ∧ ¬ab1) ∨ (o ∧ ¬ab3),
             ab1 ↔ ¬o, ab3 ↔ ¬e, e ↔ ⊥}                    ⟨{ab3}, {e, l}⟩       |=wcs ¬l     63% ¬L

Table 5.2.: The weak completion and the least models of the corresponding programs and the experimental results. The fourth column shows whether l or ¬l follows from the least models. The information in the last column refers to the experimental results of Byrne (1989).

Consider in Table 5.2, for example, Pe+Add and its weak completion. The interpretations ⟨{e, l, o}, {ab1, ab3}⟩ and ⟨{e}, {ab3}⟩ are both models of wcPe+Add. Given that I0 = ⟨∅, ∅⟩, ΦPe+Add is computed as follows:

I1 = ΦPe+Add(I0) = ⟨{e}, ∅⟩,
I2 = ΦPe+Add(I1) = ⟨{e}, {ab3}⟩ = ΦPe+Add(I2).
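This fixed-point computation can be reproduced with a small interpreter for the ΦP operator. The sketch below is my own encoding in Python (names and data layout are assumptions, not the thesis’ notation): bodies are evaluated under the three-valued Łukasiewicz reading, ΦP is applied once per iteration, and the loop stops at the least fixed point.

```python
# A small interpreter for the Phi operator: clauses are (head, body)
# pairs, bodies are lists of literals (atom, True) / (atom, False),
# with TOP and BOT for the truth constants.

TOP, BOT = 'TOP', 'BOT'

def eval_body(body, true, false):
    """Three-valued value ('t'/'f'/'u') of a conjunction of literals."""
    vals = []
    for lit in body:
        if lit in (TOP, BOT):
            vals.append('t' if lit == TOP else 'f')
        else:
            atom, pos = lit
            if atom in true:
                vals.append('t' if pos else 'f')
            elif atom in false:
                vals.append('f' if pos else 't')
            else:
                vals.append('u')
    if all(v == 't' for v in vals):
        return 't'
    return 'f' if 'f' in vals else 'u'

def phi(program, true, false):
    """One application of Phi_P to the interpretation <true, false>."""
    nt, nf = set(), set()
    for head in {h for h, _ in program}:
        rs = [eval_body(b, true, false) for h, b in program if h == head]
        if 't' in rs:
            nt.add(head)                 # some body is true
        elif all(r == 'f' for r in rs):
            nf.add(head)                 # all bodies are false
    return nt, nf

def least_fixed_point(program):
    true, false = set(), set()
    while True:
        nt, nf = phi(program, true, false)
        if (nt, nf) == (true, false):
            return true, false
        true, false = nt, nf

# P_{e+Add} from Table 5.1
p_e_add = [
    ('l',   [('e', True), ('ab1', False)]),
    ('l',   [('o', True), ('ab3', False)]),
    ('ab1', [('o', False)]),
    ('ab3', [('e', False)]),
    ('e',   [TOP]),
]
print(least_fixed_point(p_e_add))  # -> ({'e'}, {'ab3'})
```

The iteration visits exactly the interpretations I1 and I2 shown above before stabilizing.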

As shown by Hölldobler and Kencana Ramli (2009b) and Hölldobler and Kencana Ramli (2009c), ⟨{e}, {ab3}⟩ is not a model of Pe+Add under SvL-semantics because the clause l ← o ∧ ¬ab3 ∈ Pe+Add is mapped to U under SvL-semantics and not to ⊤ as under Łukasiewicz semantics. This is a counterexample to Lemma 4 (1.) in (Stenning and Lambalgen 2008, p. 194f), which states that the least fixed point of the ΦP operator under SvL-semantics is the (knowledge-) least model of P. Furthermore, Stenning and Lambalgen (2008) claim in Lemma 4 (3.) that all models of the completion of P are fixed points of ΦP and that every fixed point is a model. The following example shows that neither claim is true. Consider the completion of P¬e+Alt, i.e.

{l ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2), ab1 ↔ ⊥, ab2 ↔ ⊥, e ↔ ⊥, t ↔ ⊥}.

Both t and e are mapped to ⊥ and, consequently, l is mapped to ⊥ as well. However, as pointed out by Hölldobler and Kencana Ramli (2009b) and Hölldobler and Kencana Ramli (2009c), the least fixed point of ΦP¬e+Alt is ⟨∅, {e, ab1, ab2}⟩, where t and l are unknown, and this is not a model of the completion of P¬e+Alt. This example also shows that reasoning under SvL-semantics with respect to the completion of a program is not adequate, as only 4% of the subjects conclude ¬l in this case.

       Conditionals                                          O     Explanations

P    = {l ← e ∧ ¬ab1, ab1 ← ⊥}                               {l}   {e ← ⊤}
PAlt = {l ← e ∧ ¬ab1, l ← t ∧ ¬ab2, ab1 ← ⊥, ab2 ← ⊥}        {l}   {e ← ⊤}, {t ← ⊤}
PAdd = {l ← e ∧ ¬ab1, l ← o ∧ ¬ab3, ab1 ← ¬o, ab3 ← ¬e}      {l}   {e ← ⊤, o ← ⊤}
P    = {l ← e ∧ ¬ab1, ab1 ← ⊥}                               {¬l}  {e ← ⊥}
PAlt = {l ← e ∧ ¬ab1, l ← t ∧ ¬ab2, ab1 ← ⊥, ab2 ← ⊥}        {¬l}  {e ← ⊥, t ← ⊥}
PAdd = {l ← e ∧ ¬ab1, l ← o ∧ ¬ab3, ab1 ← ¬o, ab3 ← ¬e}      {¬l}  {e ← ⊥}, {o ← ⊥}

Table 5.3.: Representational form of the second part of the suppression task according to Stenning and Lambalgen (2008) and Hölldobler, Philipp and Wernhard (2011).

5.1.3 Backward Reasoning

The second part of the suppression task can best be described as abductive, that is, a plausible explanation is computed given some observation. This notion of abductive consequence with respect to least models of the weak completion has been elaborated by Hölldobler, Philipp and Wernhard (2011) to model the backward-reasoning cases of the suppression task. Table 5.3 shows the representational form of these instances, including the observations and their respective explanations. In the first three cases, in addition to the conditionals, the participants had to draw conclusions based on the fact that ‘she goes to the library.’ In the last three cases, they had to draw conclusions based on the assumption that ‘she does not go to the library.’ We consider an abductive framework as introduced in Chapter 2.5, consisting of a program P as knowledge base, a set A of abducibles consisting of the facts and assumptions for each undefined atom in P,2 and the logical consequence relation |=wcs. As observations we consider sets of literals. For instance, consider the following two programs under skeptical reasoning:

1. PAlt where O = {l}:

A = {e ← ⊤, e ← ⊥, t ← ⊤, t ← ⊥} and lm wcPAlt = ⟨∅, {ab1, ab2}⟩.

There are two explanations, {e ← ⊤} and {t ← ⊤}. Accordingly, we cannot conclude that ‘she has an essay to finish.’

2. PAdd where O = {l}:

A = {e ← ⊤, e ← ⊥, o ← ⊤, o ← ⊥} and lm wcPAdd = ⟨∅, ∅⟩.

2 Recall that A is undefined in P iff P does not contain a clause of the form A ← body.

O    The least model of the weak completion of P ∪ E                               Byrne

l    lm wc(P ∪ {e ← ⊤})           = ⟨{e, l}, {ab1}⟩             |=ˢwcs e           53% E
l    lm wc(PAlt ∪ {e ← ⊤})        = ⟨{e, l}, {ab1, ab2}⟩        ⊭ˢwcs e ∨ ¬e       16% E
     lm wc(PAlt ∪ {t ← ⊤})        = ⟨{l, t}, {ab1, ab2}⟩
l    lm wc(PAdd ∪ {e ← ⊤, o ← ⊤}) = ⟨{e, l, o}, {ab1, ab3}⟩     |=ˢwcs e           55% E
¬l   lm wc(P ∪ {e ← ⊥})           = ⟨∅, {e, l, ab1}⟩            |=ˢwcs ¬e          69% ¬E
¬l   lm wc(PAlt ∪ {e ← ⊥, t ← ⊥}) = ⟨∅, {e, l, t, ab1, ab2}⟩    |=ˢwcs ¬e          69% ¬E
¬l   lm wc(PAdd ∪ {e ← ⊥})        = ⟨{ab3}, {e, l}⟩             ⊭ˢwcs e ∨ ¬e       44% ¬E
     lm wc(PAdd ∪ {o ← ⊥})        = ⟨{ab1}, {l, o}⟩

Table 5.4.: The least models of the weak completion of the corresponding programs together with the explanations for O and the experimental results. The fourth column shows whether e or ¬e follows from the least models. The cases P = PAlt, O = {l} and P = PAdd, O = {¬l} have two explanations. The last column shows the experimental results of Byrne (1989).

There is only one explanation, {e ← ⊤, o ← ⊤}. Accordingly, we can conclude that ‘she has an essay to finish.’

Table 5.4 depicts the results of the second part of the suppression task, which are adequate answers when compared to the seemingly favored results of the suppression task. One should observe that, credulously, we would conclude e from P = PAlt and O = {l}, which according to Byrne only 16% of the subjects did.
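The skeptical abductive reading can be sketched end to end in a few lines. The following is my own self-contained Python sketch (the encoding, function names, and the enumeration strategy are assumptions), relying on the fact that the least model of the weak completion equals the least fixed point of ΦP. It enumerates subset-minimal explanations for an observation and then checks a literal against every explanation:

```python
# Skeptical abduction sketch: find minimal sets E of abducibles with
# lm wc(P u E) satisfying the observation O, then check a query literal
# in the least model obtained from every explanation.

from itertools import combinations

TOP, BOT = 'TOP', 'BOT'

def eval_body(body, true, false):
    """Three-valued evaluation of a conjunction of literals."""
    vals = []
    for lit in body:
        if lit in (TOP, BOT):
            vals.append('t' if lit == TOP else 'f')
        else:
            atom, pos = lit
            if atom in true:
                vals.append('t' if pos else 'f')
            elif atom in false:
                vals.append('f' if pos else 't')
            else:
                vals.append('u')
    if all(v == 't' for v in vals):
        return 't'
    return 'f' if 'f' in vals else 'u'

def lm_wc(program):
    """Least fixed point of Phi_P, i.e. the least model of wc P."""
    true, false = set(), set()
    while True:
        nt, nf = set(), set()
        for head in {h for h, _ in program}:
            rs = [eval_body(b, true, false) for h, b in program if h == head]
            if 't' in rs:
                nt.add(head)
            elif all(r == 'f' for r in rs):
                nf.add(head)
        if (nt, nf) == (true, false):
            return true, false
        true, false = nt, nf

def satisfies(model, literals):
    true, false = model
    return all((a in true) if pos else (a in false) for a, pos in literals)

def explanations(program, abducibles, observation):
    """Subset-minimal E (by increasing size) explaining the observation."""
    found = []
    for size in range(len(abducibles) + 1):
        for cand in combinations(abducibles, size):
            if any(set(e) <= set(cand) for e in found):
                continue  # a smaller explanation is contained: not minimal
            if satisfies(lm_wc(program + list(cand)), observation):
                found.append(cand)
    return found

# P_Alt with observation O = {l}, the first example of Section 5.1.3
p_alt = [
    ('l',   (('e', True), ('ab1', False))),
    ('l',   (('t', True), ('ab2', False))),
    ('ab1', (BOT,)),
    ('ab2', (BOT,)),
]
A = [('e', (TOP,)), ('e', (BOT,)), ('t', (TOP,)), ('t', (BOT,))]
exps = explanations(p_alt, A, [('l', True)])
skeptical_e = all(satisfies(lm_wc(p_alt + list(e)), [('e', True)]) for e in exps)
print(len(exps), skeptical_e)  # two explanations; e does not follow skeptically
```

The run mirrors the first example above: the two minimal explanations are {e ← ⊤} and {t ← ⊤}, and since e is unknown under the latter, e does not follow skeptically.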

5.1.4 Well-founded Semantics Revisited

We show the results obtained with the Weak Completion Semantics and the Well-founded Semantics for the program representations in Table 5.1 and Table 5.3. We define At0 = {e, l, o, t, ab1, ab2, ab3}, and for the well-founded models of P+ we assume the models with respect to At = At0. For the least models of the weak completion of P and the well-founded models of Pmod we assume the models with respect to At = At0 ∪ {A′ | A ∈ undef(P)}. Table 5.5 shows the least models of the weak completion and the well-founded models for the first part of the suppression task. Note that for the well-founded model only normal logic programs (P+) are considered. Obviously, there are differences between the two semantics with respect to the least models. For instance, for Pe+Add and P¬e+Alt, under the Weak Completion Semantics l is neither in I⊤ nor in I⊥, whereas in the well-founded model l ∈ I⊥ for both Pe+Add+ and P¬e+Alt+. This is due to the fact that undefined atoms such as o in Pe+Add+ and t in P¬e+Alt+ are mapped to false in the well-founded model. Considering Byrne’s results, the Well-founded Semantics does not represent the participants’ conclusions of suppressing information, whereas the Weak Completion Semantics does.

P         lm wcP / wfm Pmod       wfm P+                             Byrne

Pe        ⟨{e, l}, {ab1}⟩         ⟨{e, l}, {o, t, ab1, ab2, ab3}⟩    96% L
Pe+Alt    ⟨{e, l}, {ab1, ab2}⟩    ⟨{e, l}, {o, t, ab1, ab2, ab3}⟩    96% L
Pe+Add    ⟨{e}, {ab3}⟩            ⟨{e, ab1}, {l, o, t, ab2, ab3}⟩    38% L
P¬e       ⟨∅, {e, l, ab1}⟩        ⟨∅, {e, l, o, t, ab1, ab2, ab3}⟩   46% ¬L
P¬e+Alt   ⟨∅, {e, ab1, ab2}⟩      ⟨∅, {e, l, o, t, ab1, ab2, ab3}⟩   4% ¬L
P¬e+Add   ⟨{ab3}, {e, l}⟩         ⟨{ab3}, {e, l, o, t, ab1, ab2}⟩    63% ¬L

Table 5.5.: The results of the first part of the suppression task. The highlighted atoms show the differences between the least models of the weak completion and the well-founded models.

P        O    E             lm wc(P ∪ E) / wfm((P ∪ E)mod)   wfm((P ∪ E)+)                      Byrne

Pl       l    e ← ⊤         ⟨{e, l}, {ab1}⟩                  ⟨{e, l}, {o, t, ab1, ab2, ab3}⟩    53% E
Pl+Alt   l    e ← ⊤         ⟨{e, l}, {ab1, ab2}⟩             ⟨{e, l}, {o, t, ab1, ab2, ab3}⟩    16% E
              t ← ⊤         ⟨{l, t}, {ab1, ab2}⟩             ⟨{l, t}, {e, o, ab1, ab2, ab3}⟩
Pl+Add   l    e ← ⊤, o ← ⊤  ⟨{e, l, o}, {ab1, ab3}⟩          ⟨{e, l, o}, {t, ab1, ab2, ab3}⟩    55% E
P¬l      ¬l   e ← ⊥         ⟨∅, {e, l, ab1}⟩                 ⟨∅, {e, l, o, t, ab1, ab2, ab3}⟩   69% ¬E
P¬l+Alt  ¬l   e ← ⊥, t ← ⊥  ⟨∅, {e, l, t, ab1, ab2}⟩         ⟨∅, {e, l, o, t, ab1, ab2, ab3}⟩   69% ¬E
P¬l+Add  ¬l   e ← ⊥         ⟨{ab3}, {e, l}⟩                  ⟨{ab1, ab3}, {e, l, o, t, ab2}⟩    44% ¬E
              o ← ⊥         ⟨{ab1}, {l, o}⟩                  ⟨{ab1, ab3}, {e, l, o, t, ab2}⟩

Table 5.6.: The results of the second part of the suppression task. The highlighted atoms show the differences between the least models of the weak completion and the well-founded models.

Table 5.6 shows the results of the second part of the suppression task, where abduction is required. In the first three cases, both semantics yield the same conclusions about e.

In the case of Pl+Alt two explanations are possible, e ← ⊤ or t ← ⊤, with two different least models. With skeptical reasoning nothing can be concluded about e, which seems to adequately represent Byrne’s findings. Similarly, for P¬l+Add with skeptical reasoning nothing can be concluded about e under the Weak Completion Semantics, whereas ¬e follows under the Well-founded Semantics. Considering Byrne’s result that 44% of the participants concluded ¬e, it is arguable which semantics adequately represents these results.
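As a cross-check on the well-founded models reported in Table 5.5, the standard alternating-fixpoint construction of the Well-founded Semantics can be sketched in a few lines. The encoding below is my own Python sketch, assuming the usual construction where Γ(S) is the least model of the reduct of P with respect to S, the true atoms are the least fixed point of Γ ∘ Γ, and every atom outside Γ of that fixed point is false:

```python
# Alternating-fixpoint sketch of the well-founded model, applied to the
# normal-program variant P+_{e+Add} over At0 (encoding is my own).

def gamma(program, s):
    """Least model of the reduct P^S: drop rules with a negative
    literal 'not a' where a is in S, then drop remaining negations."""
    reduct = []
    for head, body in program:
        neg = [a for a, pos in body if not pos]
        if not any(a in s for a in neg):
            reduct.append((head, [a for a, pos in body if pos]))
    model, changed = set(), True
    while changed:  # least model of the resulting definite program
        changed = False
        for head, pos in reduct:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def well_founded(program, all_atoms):
    """True atoms = lfp of Gamma o Gamma; atoms outside Gamma(true)
    are false; anything left over stays undefined."""
    t = set()
    while True:
        nt = gamma(program, gamma(program, t))
        if nt == t:
            break
        t = nt
    return t, all_atoms - gamma(program, t)

AT0 = {'e', 'l', 'o', 't', 'ab1', 'ab2', 'ab3'}
p_e_add_plus = [
    ('l',   [('e', True), ('ab1', False)]),
    ('l',   [('o', True), ('ab3', False)]),
    ('ab1', [('o', False)]),
    ('ab3', [('e', False)]),
    ('e',   []),  # the fact e
]
true, false = well_founded(p_e_add_plus, AT0)
print(sorted(true), sorted(false))  # -> ['ab1', 'e'] ['ab2', 'ab3', 'l', 'o', 't']
```

Since the undefined atom o is false in the well-founded model, ab1 becomes true and l false, reproducing the entry ⟨{e, ab1}, {l, o, t, ab2, ab3}⟩ for Pe+Add+ in Table 5.5.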