
Logic II

Helmut Schwichtenberg

Mathematisches Institut der LMU, Sommersemester 2021


Preface

Chapter 1. Logic
1.1. Natural deduction
1.2. Embedding intuitionistic and classical logic
1.3. The Curry-Howard correspondence

Chapter 2. Computability
2.1. Abstract computability via information systems
2.2. A term language for computable functionals
2.3. Denotational semantics

Chapter 3. A theory TCF of computable functionals
3.1. Formulas and their computational content
3.2. Axioms of TCF
3.3. Equality and extensionality w.r.t. cotypes

Chapter 4. Computational content of proofs
4.1. Realizers
4.2. Extracted terms, soundness
4.3. Extensionality of extracted terms

Chapter 5. Applications
5.1. Exact real arithmetic
5.2. Algorithms on stream-represented real numbers

Bibliography

Index


This is a script for the course “Logic II” at the Mathematics Institute of LMU, Sommersemester 2021. It is mainly based on the textbook “Proofs and Computations” in the references. I thank Nils Köpp and Philippe Vollmuth for their help with running the course and for many discussions concerning its content.

München, 13 July 2021
Helmut Schwichtenberg


Logic

The main subject of Mathematical Logic is mathematical proof. In this chapter we deal with the basics of formalizing such proofs and analysing their structure. The system we pick for the representation of proofs is natural deduction as in Gentzen (1935). Our reasons for this choice are twofold.

First, as the name says this is a natural notion of formal proof, which means that the way proofs are represented corresponds very much to the way a careful mathematician writing out all details of an argument would go anyway. Second, formal proofs in natural deduction are closely related (via the Curry-Howard correspondence) to terms in typed lambda calculus. This provides us not only with a compact notation for logical derivations (which otherwise tend to become somewhat unmanageable tree-like structures), but also opens up a route to applying the computational techniques which underpin lambda calculus.

An essential point for Mathematical Logic is to fix the formal language to be used. We take implication → and the universal quantifier ∀ as basic. Then the logic rules correspond precisely to lambda calculus. The additional connectives (i.e., the existential quantifier ∃, disjunction ∨ and conjunction ∧) will be added via axioms. Later we will see that these axioms are determined by particular inductive definitions. In addition to the use of inductive definitions as a unifying concept, another reason for that change of emphasis will be that it fits more readily with the more computational viewpoint adopted there.

This chapter does not simply introduce basic proof theory, but in addition there is an underlying theme: to bring out the constructive content of logic, particularly in regard to the relationship between minimal and classical logic. It seems that the latter is most appropriately viewed as a subsystem of the former.

1.1. Natural deduction

Rules come in pairs: we have an introduction and an elimination rule for each of the logical connectives. The resulting system is called minimal logic; it was introduced by Kolmogorov (1932), Gentzen (1935) and Johansson (1937). First we only consider implication → and universal quantification ∀. Note that no negation is yet present. If we go on and require ex-falso-quodlibet for a distinguished propositional variable ⊥ (“falsum”) we can embed intuitionistic logic with negation ¬A defined as A → ⊥. To embed classical logic, we need to go further and add as an axiom schema the principle of indirect proof, also called stability (∀~x(¬¬R~x → R~x) for relation symbols R), but then it is appropriate to restrict to the language based on →, ∀, ⊥ and ∧. The reason for this restriction is that we can neither prove ¬¬∃xA → ∃xA nor ¬¬(A ∨ B) → A ∨ B (the former one for decidable A is Markov’s scheme). However, we can prove them for the classical existential quantifier and disjunction defined by ¬∀x¬A and ¬A → ¬B → ⊥. Thus we need to make a distinction between two kinds of “exists” and two kinds of “or”: the classical ones are “weak” and the non-classical ones “strong” since they have constructive content. We mark the distinction by writing a tilde above the weak disjunction and existence symbols thus ∨̃, ∃̃.

1.1.1. Examples of derivations. To motivate the rules for natural deduction, let us start with informal proofs of some simple logical facts.

(A → B → C) → (A → B) → A → C.

Informal proof. Assume A → B → C. To show: (A → B) → A → C. So assume A → B. To show: A → C. So finally assume A. To show: C. Using the third assumption twice we have B → C by the first assumption, and B by the second assumption. From B → C and B we then obtain C. Then A → C, cancelling the assumption on A; (A → B) → A → C cancelling the second assumption; and the result follows by cancelling the first assumption.
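Read through the Curry-Howard lens discussed later in Section 1.3, this informal proof is already a program: each “assume” becomes a lambda abstraction and each use of modus ponens a function application. A Python sketch (the function names and the concrete instance are ours, not part of the script):

```python
# The proof of (A -> B -> C) -> (A -> B) -> A -> C read as a function.
# The third assumption w is used twice, exactly as in the informal proof.

def s_combinator(u):            # u : A -> B -> C   (first assumption)
    def with_v(v):              # v : A -> B        (second assumption)
        def with_w(w):          # w : A             (third assumption)
            return u(w)(v(w))   # (B -> C) applied to B gives C
        return with_w
    return with_v

# A concrete instance with A = B = C = int (our choice for illustration):
add = lambda x: lambda y: x + y     # plays the role of A -> B -> C
succ = lambda x: x + 1              # plays the role of A -> B
print(s_combinator(add)(succ)(3))   # 3 + (3 + 1) = 7
```

The type of this function is exactly the formula proved, which is the content of the Curry-Howard correspondence developed in Section 1.3.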

∀x(A → B) → A → ∀xB,  if x ∉ FV(A).

Informal proof. Assume ∀x(A → B). To show: A → ∀xB. So assume A. To show: ∀xB. Let x be arbitrary; note that we have not made any assumptions on x. To show: B. We have A → B by the first assumption. Hence also B by the second assumption. Hence ∀xB. Hence A → ∀xB, cancelling the second assumption. Hence the result, cancelling the first assumption.

A characteristic feature of these proofs is that assumptions are introduced and eliminated again. At any point in time during the proof the free or “open” assumptions are known, but as the proof progresses, free assumptions may become cancelled or “closed” because of the implies-introduction rule.

We reserve the word proof for the informal level; a formal representation of a proof will be called a derivation.


An intuitive way to communicate derivations is to view them as labelled trees each node of which denotes a rule application. The labels of the inner nodes are the formulas derived as conclusions at those points, and the labels of the leaves are formulas or terms. The labels of the nodes immediately above a node k are the premises of the rule application. At the root of the tree we have the conclusion (or end formula) of the whole derivation.

In natural deduction systems one works with assumptions at leaves of the tree; they can be either open or closed (cancelled). Any of these assumptions carries a marker. As markers we use assumption variables denoted u, v, w, u₀, u₁, …. The variables of the language previously introduced will now often be called object variables, to distinguish them from assumption variables. If at a node below an assumption the dependency on this assumption is removed (it becomes closed) we record this by writing down the assumption variable. Since the same assumption may be used more than once (this was the case in the first example above), the assumption marked with u (written u: A) may appear many times. Of course we insist that distinct assumption formulas must have distinct markers. An inner node of the tree is understood as the result of passing from premises to the conclusion of a given rule. The label of the node then contains, in addition to the conclusion, also the name of the rule. In some cases the rule binds or closes or cancels an assumption variable u (and hence removes the dependency of all assumptions u: A thus marked). An application of the ∀-introduction rule similarly binds an object variable x (and hence removes the dependency on x). In both cases the bound assumption or object variable is added to the label of the node.

Definition. A formula A is called derivable (in minimal logic), written ⊢ A, if there is a derivation of A (without free assumptions) using the natural deduction rules. A formula B is called derivable from assumptions A₁, …, Aₙ, if there is a derivation of B with free assumptions among A₁, …, Aₙ. Let Γ be a (finite or infinite) set of formulas. We write Γ ⊢ B if the formula B is derivable from finitely many assumptions A₁, …, Aₙ ∈ Γ.

We now formulate the rules of natural deduction.

1.1.2. Introduction and elimination rules for → and ∀. First we have an assumption rule, allowing us to write down an arbitrary formula A together with a marker u:

u:A assumption.

The other rules of natural deduction split into introduction rules (I-rules for short) and elimination rules (E-rules) for the logical connectives which, for the time being, are just → and ∀. For implication → there is an introduction rule →⁺ and an elimination rule →⁻, also called modus ponens. The left premise A → B in →⁻ is called the major (or main) premise, and the right premise A the minor (or side) premise. Note that with an application of the →⁺-rule all assumptions above it marked with u: A are cancelled (which is denoted by putting square brackets around these assumptions), and the u then gets written alongside. There may of course be other uncancelled assumptions v: A of the same formula A, which may get cancelled at a later stage.

   [u: A]
     |M
     B
   ------- →⁺u
   A → B

     |M        |N
   A → B       A
   --------------- →⁻
         B

For the universal quantifier ∀ there is an introduction rule ∀⁺ (again marked, but now with the bound variable x) and an elimination rule ∀⁻ whose right premise is the term t to be substituted. The rule ∀⁺x with conclusion ∀xA is subject to the following (eigen-)variable condition: the derivation M of the premise A should not contain any open assumption having x as a free variable.

     |M
     A
   ------ ∀⁺x (var.cond.)
    ∀xA

       |M
   ∀xA(x)    t
   ------------- ∀⁻
       A(t)

We now give derivations of the two example formulas treated informally above. Since in many cases the rule used is determined by the conclusion, we suppress in such cases the name of the rule.

u: A→B→C    w: A           v: A→B    w: A
----------------- →⁻       -------------- →⁻
      B→C                        B
      ---------------------------- →⁻
                   C
                ------- →⁺w
                A → C
      ------------------------- →⁺v
      (A→B) → A → C
-------------------------------- →⁺u
(A→B→C) → (A→B) → A → C

For the second example we obtain

u: ∀x(A→Px)    x
----------------- ∀⁻
     A → Px         v: A
     -------------------- →⁻
             Px
           ------ ∀⁺x
           ∀xPx
       ------------- →⁺v
       A → ∀xPx
------------------------- →⁺u
∀x(A→Px) → A → ∀xPx

Note that the variable condition is satisfied: x is not free in A (and also not free in ∀x(A→Px)).


1.1.3. Negation, disjunction, conjunction and existence. Recall that negation is defined by ¬A := (A → ⊥). The following can easily be derived.

A → ¬¬A,
¬¬¬A → ¬A.

However, ¬¬A → A is in general not derivable (without stability – we will come back to this later on). The derivation of ¬¬¬A → ¬A is

                          w: A→⊥    v: A
                          --------------- →⁻
                                ⊥
                          ------------ →⁺w
u: ((A→⊥)→⊥)→⊥            (A→⊥) → ⊥
----------------------------------- →⁻
                 ⊥
              -------- →⁺v
              A → ⊥
------------------------------ →⁺u
(((A→⊥)→⊥)→⊥) → A → ⊥

Derivations for the following formulas are left as exercises.

(A → B) → ¬B → ¬A,
¬(A → B) → ¬B,
¬¬(A → B) → ¬¬A → ¬¬B,
(⊥ → B) → (¬¬A → ¬¬B) → ¬¬(A → B),
¬¬∀xA → ∀x¬¬A.
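Under the Curry-Howard correspondence (Section 1.3) the derivable facts A → ¬¬A and ¬¬¬A → ¬A above are small functional programs: reading ¬A as a function type A → ⊥, double-negation introduction is continuation passing. A Python sketch, with all names ours; since ⊥ is just a distinguished propositional variable, for testing we may instantiate it with any type:

```python
def dn_intro(a):
    # A -> ¬¬A: given a, refute the refutation k : A -> ⊥ by feeding it a.
    return lambda k: k(a)

def triple_neg_elim(t):
    # ¬¬¬A -> ¬A: given t : ¬¬A -> ⊥ and a : A, apply t to ¬¬A built from a.
    return lambda a: t(dn_intro(a))

# Illustration, instantiating ⊥ with int (our choice):
print(dn_intro(3)(lambda a: a * 2))                       # 6
print(triple_neg_elim(lambda nn: nn(lambda a: a + 1))(4)) # 5
```

That ¬¬A → A has no such program (without stability) is exactly the non-derivability noted above.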

For disjunction the introduction and elimination axioms are

∨⁺₀ : A → A ∨ B,
∨⁺₁ : B → A ∨ B,
∨⁻ : A ∨ B → (A → C) → (B → C) → C.

For conjunction we have

∧⁺ : A → B → A ∧ B,    ∧⁻ : A ∧ B → (A → B → C) → C

and for the existential quantifier

∃⁺ : A → ∃xA,    ∃⁻ : ∃xA → ∀x(A → B) → B   (x ∉ FV(B)).
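These axioms have familiar computational readings: under the Curry-Howard correspondence (Section 1.3), the ∧ axioms are the Church encodings of pairs and the ∨ axioms those of tagged sums (∃ is similar, pairing a witness with a proof). A Python sketch with names of our choosing:

```python
# Conjunction: ^+ is pairing, ^- consumes the pair with a curried function.
conj_intro = lambda a: lambda b: (lambda f: f(a)(b))  # A -> B -> A^B
conj_elim  = lambda p: lambda f: p(f)                 # A^B -> (A->B->C) -> C

# Disjunction: v+0 / v+1 tag the value, v- performs case analysis.
disj_intro0 = lambda a: lambda f: lambda g: f(a)      # A -> AvB
disj_intro1 = lambda b: lambda f: lambda g: g(b)      # B -> AvB
disj_elim   = lambda d: lambda f: lambda g: d(f)(g)   # AvB -> (A->C) -> (B->C) -> C

# Illustration: projecting and case-splitting on concrete values (our choice).
print(conj_elim(conj_intro(1)(2))(lambda a: lambda b: a + b))   # 3
print(disj_elim(disj_intro0(5))(lambda a: a + 1)(lambda b: b - 1))  # 6
```

The typing of each lambda is exactly the corresponding axiom, which is why the axioms and the rules below are interderivable.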

Remark. All these axioms can be seen as special cases of a general schema, that of an inductively defined predicate, which is defined by some introduction rules and one elimination rule. Later we will study this kind of definition in full generality.

(12)

It is easy to see that for each of the connectives ∨, ∧, ∃ the axioms and the following rules are equivalent over minimal logic; this is left as an exercise. For disjunction the introduction and elimination rules are

  |M
  A
------- ∨⁺₀
A ∨ B

  |M
  B
------- ∨⁺₁
A ∨ B

            [u: A]    [v: B]
  |M          |N        |K
A ∨ B         C         C
---------------------------- ∨⁻ u,v
             C

For conjunction we have

 |M      |N
 A       B
----------- ∧⁺
  A ∧ B

            [u: A]  [v: B]
  |M             |N
A ∧ B            C
-------------------- ∧⁻ u,v
         C

and for the existential quantifier

        |M
 t     A(t)
------------ ∃⁺
   ∃xA(x)

           [u: A]
  |M         |N
 ∃xA         B
----------------- ∃⁻x,u (var.cond.)
        B

Similar to ∀⁺x, the rule ∃⁻x,u is subject to an (eigen-)variable condition: in the derivation N the variable x (i) should not occur free in the formula of any open assumption other than u: A, and (ii) should not occur free in B.

We collect some easy facts about derivability; B ← A means A → B.

Lemma 1.1.1. The following are derivable.

(A ∧ B → C) ↔ (A → B → C),
(A → B ∧ C) ↔ (A → B) ∧ (A → C),
(A ∨ B → C) ↔ (A → C) ∧ (B → C),
(A → B ∨ C) ← (A → B) ∨ (A → C),
(∀xA → B) ← ∃x(A → B)   if x ∉ FV(B),
(A → ∀xB) ↔ ∀x(A → B)   if x ∉ FV(A),
(∃xA → B) ↔ ∀x(A → B)   if x ∉ FV(B),
(A → ∃xB) ← ∃x(A → B)   if x ∉ FV(A).


Proof. A derivation of the final formula is

                   w: A→B    v: A
                   --------------- →⁻
u: ∃x(A→B)              B
                     -------- ∃⁺
                       ∃xB
---------------------------------- ∃⁻x,w
             ∃xB
          ---------- →⁺v
          A → ∃xB
--------------------------- →⁺u
∃x(A→B) → A → ∃xB

The variable condition for ∃⁻ is satisfied since the variable x (i) is not free in the formula A of the open assumption v: A, and (ii) is not free in ∃xB. The rest of the proof is left as an exercise.

1.2. Embedding intuitionistic and classical logic

As already mentioned, we distinguish two kinds of “or” and “exists”: the “weak” or classical ones and the “strong” or constructive ones. In the present context both kinds occur together and hence we must mark the distinction; we shall do this by writing a tilde above the weak disjunction and existence symbols thus

A ∨̃ B := ¬A → ¬B → ⊥,    ∃̃xA := ¬∀x¬A.

These weak variants of disjunction and the existential quantifier are no stronger than the proper ones (in fact, they are weaker):

A ∨ B → A ∨̃ B,    ∃xA → ∃̃xA.

This can be seen easily by putting C := ⊥ in ∨⁻ and B := ⊥ in ∃⁻.

Remark. Since ∃̃x∃̃yA unfolds into a rather awkward formula we extend the ∃̃-terminology to lists of variables:

∃̃x₁,…,xₙ A := ∀x₁,…,xₙ(A → ⊥) → ⊥.

Nothing is lost here, since the omitted double negations could be eliminated. Moreover let

∃̃x₁,…,xₙ(A₁ ∧̃ … ∧̃ Aₘ) := ∀x₁,…,xₙ(A₁ → ⋯ → Aₘ → ⊥) → ⊥.

This allows us to stay in the →, ∀ part of the language. Notice that ∧̃ only makes sense in this context, i.e., in connection with ∃̃.
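The observation that A ∨ B → A ∨̃ B is obtained by putting C := ⊥ in ∨⁻ can itself be run as a program: with the Church encoding of strong disjunction (as in Section 1.1.3), A ∨̃ B = ¬A → ¬B → ⊥ is literally case analysis specialized to C := ⊥. A Python sketch (encodings and names ours):

```python
# Church-encoded strong disjunction: d takes the two case branches.
inl = lambda a: lambda f: lambda g: f(a)      # A -> AvB
inr = lambda b: lambda f: lambda g: g(b)      # B -> AvB

# A v B -> A ṽ B: feed the two refutations as the case branches (C := ⊥).
weak_from_strong = lambda d: lambda not_a: lambda not_b: d(not_a)(not_b)

# Illustration with ⊥ instantiated by a result value (our choice):
print(weak_from_strong(inl(7))(lambda a: ('left', a))(lambda b: ('right', b)))
```

No extra work is needed: the eliminator already has the shape of the weak disjunction.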

1.2.1. Intuitionistic and classical derivability. In the definition of derivability falsity ⊥ plays no role. We may change this and require ex-falso-quodlibet axioms, of the form

∀~x(⊥ → R~x)

with R a relation symbol distinct from ⊥. Let Efq denote the set of all such axioms. A formula A is called intuitionistically derivable, written ⊢i A, if Efq ⊢ A. We write Γ ⊢i B for Γ ∪ Efq ⊢ B.

We may even go further and require stability axioms, of the form

∀~x(¬¬R~x → R~x)

with R again a relation symbol distinct from ⊥. Let Stab denote the set of all these axioms. A formula A is called classically derivable, written ⊢c A, if Stab ⊢ A. We write Γ ⊢c B for Γ ∪ Stab ⊢ B.

It is easy to see that intuitionistically (i.e., from Efq) we can derive ⊥ → A for an arbitrary formula A, using the introduction rules for the connectives. A similar generalization of the stability axioms is only possible for formulas in the language not involving ∨, ∃. However, it is still possible to use the substitutes ∨̃ and ∃̃.

Theorem 1.2.1 (Stability, or principle of indirect proof).
(a) ⊢ (¬¬A → A) → (¬¬B → B) → ¬¬(A ∧ B) → A ∧ B.
(b) ⊢ (¬¬B → B) → ¬¬(A → B) → A → B.
(c) ⊢ (¬¬A → A) → ¬¬∀xA → A.
(d) ⊢c ¬¬A → A for every formula A without ∨, ∃.

Proof. (a) is left as an exercise.

(b) For simplicity, in the derivation to be constructed we leave out applications of →⁺ at the end.

                           u₂: A→B    w: A
                           --------------- →⁻
              u₁: ¬B             B
              ------------------- →⁻
                       ⊥
                   ---------- →⁺u₂
v: ¬¬(A→B)          ¬(A→B)
--------------------------- →⁻
            ⊥
         ------- →⁺u₁
u: ¬¬B→B   ¬¬B
---------------- →⁻
        B

(c)

                          u₂: ∀xA    x
                          ------------ ∀⁻
              u₁: ¬A           A
              ------------------ →⁻
                       ⊥
                   -------- →⁺u₂
v: ¬¬∀xA            ¬∀xA
------------------------- →⁻
            ⊥
         ------- →⁺u₁
u: ¬¬A→A   ¬¬A
---------------- →⁻
        A


(d) Induction on A. The case R~t with R distinct from ⊥ is given by Stab. In the case ⊥ the desired derivation is

                   v: ⊥
                  ------- →⁺v
u: (⊥→⊥)→⊥        ⊥ → ⊥
------------------------ →⁻
            ⊥

In the cases A ∧ B, A → B and ∀xA use (a), (b) and (c), respectively.

Using stability we can prove some well-known facts about the interaction of weak disjunction and the weak existential quantifier with implication. We first prove a more refined claim, stating to what extent we need to go beyond minimal logic.

Lemma 1.2.2. The following are derivable.

(∃̃xA → B) → ∀x(A → B)   if x ∉ FV(B),   (1)
(¬¬B → B) → ∀x(A → B) → ∃̃xA → B   if x ∉ FV(B),   (2)
(⊥ → B[x:=c]) → (A → ∃̃xB) → ∃̃x(A → B)   if x ∉ FV(A),   (3)
∃̃x(A → B) → A → ∃̃xB   if x ∉ FV(A).   (4)

The last two items can also be seen as simplifying a weakly existentially quantified implication whose premise does not contain the quantified variable. In case the conclusion does not contain the quantified variable we have

(¬¬B → B) → ∃̃x(A → B) → ∀xA → B   if x ∉ FV(B),   (5)
∀x(¬¬A → A) → (∀xA → B) → ∃̃x(A → B)   if x ∉ FV(B).   (6)

Proof. (1)

              u₁: ∀x¬A    x
              ------------- ∀⁻
                  ¬A          A
                  ------------- →⁻
                       ⊥
                   -------- →⁺u₁
∃̃xA→B             ¬∀x¬A
------------------------ →⁻
           B

(2)

                           ∀x(A→B)    x
                           ------------ ∀⁻
                             A→B    u₁: A
                             ------------- →⁻
                u₂: ¬B            B
                ------------------- →⁻
                         ⊥
                     ------ →⁺u₁
                       ¬A
                     ------ ∀⁺x
¬∀x¬A                ∀x¬A
------------------------- →⁻
            ⊥
        -------- →⁺u₂
¬¬B→B     ¬¬B
--------------- →⁻
        B


(3) Writing B′ for B[x:=c] we have

                                               u₁: B
                                              ------- →⁺
                     ∀x¬(A→B)    x             A→B
                     ------------- ∀⁻
                       ¬(A→B)
                       ----------------------------- →⁻
                                    ⊥
                                -------- →⁺u₁
   A→∃̃xB    u₂: A                 ¬B
   -------------- →⁻            ------- ∀⁺x
        ∃̃xB                     ∀x¬B
        ------------------------------ →⁻
                      ⊥
 ∀x¬(A→B)  c   ⊥→B′  ---
 ----------- ∀⁻ ----------- →⁻
   ¬(A→B′)         B′
                -------- →⁺u₂
                  A→B′
   ---------------------- →⁻
             ⊥

(4)

                   ∀x¬B    x
                   ---------- ∀⁻
                      ¬B        u₁: A→B    A
                                ------------ →⁻
                                     B
                   ------------------- →⁻
                             ⊥
                         -------- →⁺u₁
                          ¬(A→B)
                         --------- ∀⁺x
∃̃x(A→B)                ∀x¬(A→B)
-------------------------------- →⁻
               ⊥

(5)

                              ∀xA    x
                              --------- ∀⁻
                   u₁: A→B        A
                   ---------------- →⁻
           u₂: ¬B        B
           --------------- →⁻
                  ⊥
              -------- →⁺u₁
               ¬(A→B)
              --------- ∀⁺x
∃̃x(A→B)     ∀x¬(A→B)
---------------------- →⁻
           ⊥
       -------- →⁺u₂
¬¬B→B    ¬¬B
-------------- →⁻
       B

(6) We derive ∀x(⊥ → A) → (∀xA → B) → ∀x¬(A → B) → ¬¬A. Writing Ax, Ay for A(x), A(y) we have

                                   u₁: ¬Ax    u₂: Ax
                                   ----------------- →⁻
                   ∀y(⊥→Ay)    y          ⊥
                   ----------- ∀⁻
                     ⊥ → Ay
                     -------------------- →⁻
                             Ay
                           ------ ∀⁺y
              ∀xA→B        ∀yAy
              ------------------ →⁻
                      B
                  --------- →⁺u₂
∀x¬(Ax→B)   x      Ax→B
------------- ∀⁻
  ¬(Ax→B)
  ------------------------ →⁻
             ⊥
         --------- →⁺u₁
           ¬¬Ax

Using this derivation M we obtain

                  ∀x(¬¬Ax→Ax)    x
                  ---------------- ∀⁻       |M
                     ¬¬Ax→Ax              ¬¬Ax
                     -------------------------- →⁻
                                Ax
                             ------- ∀⁺x
               ∀xA→B         ∀xAx
               ------------------- →⁻
                        B
                    --------- →⁺
∀x¬(Ax→B)   x        Ax→B
------------- ∀⁻
  ¬(Ax→B)
  ----------------------- →⁻
             ⊥

Since clearly ⊢ (¬¬A → A) → ⊥ → A the claim follows.

Remark. An immediate consequence of (6) is the classical derivability of the “drinker formula” ∃̃x(Px → ∀xPx), to be read “in every non-empty bar there is a person such that, if this person drinks, then everybody drinks”. To see this let A := Px and B := ∀xPx in (6).

Corollary 1.2.3.

⊢c (∃̃xA → B) ↔ ∀x(A → B)   if x ∉ FV(B) and B without ∨, ∃,
⊢i (A → ∃̃xB) ↔ ∃̃x(A → B)   if x ∉ FV(A),
⊢c ∃̃x(A → B) ↔ (∀xA → B)   if x ∉ FV(B) and A, B without ∨, ∃.

There is a similar lemma on weak disjunction:

Lemma 1.2.4. The following are derivable.

(A ∨̃ B → C) → (A → C) ∧ (B → C),
(¬¬C → C) → (A → C) → (B → C) → A ∨̃ B → C,
(⊥ → B) → (A → B ∨̃ C) → (A → B) ∨̃ (A → C),
(A → B) ∨̃ (A → C) → A → B ∨̃ C,
(¬¬C → C) → (A → C) ∨̃ (B → C) → A → B → C,
(⊥ → C) → (A → B → C) → (A → C) ∨̃ (B → C).

Proof. The derivation of the final formula is

                          A→B→C    u₁: A
                          --------------- →⁻
                              B→C          u₂: B
                              ------------------ →⁻
               ¬(A→C)               C
                                -------- →⁺u₁
                                  A→C
               ------------------------ →⁻
                        ⊥
              ⊥→C   ------
              ------------- →⁻
                    C
                -------- →⁺u₂
                  B→C
¬(B→C)   -------------
---------------------- →⁻
          ⊥

The other derivations are similar to the ones above, if one views ∃̃ as an infinitary version of ∨̃.


Corollary 1.2.5.

⊢c (A ∨̃ B → C) ↔ (A → C) ∧ (B → C)   for C without ∨, ∃,
⊢i (A → B ∨̃ C) ↔ (A → B) ∨̃ (A → C),
⊢c (A → C) ∨̃ (B → C) ↔ (A → B → C)   for C without ∨, ∃.

It is easy to see that weak disjunction and the weak existential quantifier satisfy the same axioms as the strong variants, if one restricts the conclusion of the elimination axioms to formulas without ∨,∃. In fact, we have

Lemma 1.2.6.

⊢ A → ∃̃xA,
⊢c ∃̃xA → ∀x(A → B) → B   (x ∉ FV(B), B without ∨, ∃),
⊢ A → A ∨̃ B,    ⊢ B → A ∨̃ B,
⊢c A ∨̃ B → (A → C) → (B → C) → C   (C without ∨, ∃).

Proof. The derivations of the second and the fourth formula are

                           ∀x(A→B)    x
                           ------------ ∀⁻
                             A→B    u₂: A
                             ------------- →⁻
                u₁: ¬B            B
                ------------------- →⁻
                         ⊥
                     ------ →⁺u₂
                       ¬A
                     ------ ∀⁺x
¬∀x¬A                ∀x¬A
------------------------- →⁻
            ⊥
        -------- →⁺u₁
¬¬B→B     ¬¬B
--------------- →⁻
        B

and

                    A→C    u₂: A
                    ------------- →⁻
         u₁: ¬C          C
         ----------------- →⁻
                 ⊥
             ------ →⁺u₂                       B→C    u₃: B
               ¬A                              ------------- →⁻
¬A→¬B→⊥    -------                  u₁: ¬C          C
-------------------- →⁻             ----------------- →⁻
      ¬B→⊥                                  ⊥
                                        ------ →⁺u₃
                                          ¬B
      ---------------------------------------- →⁻
                         ⊥
                     -------- →⁺u₁
¬¬C→C                  ¬¬C
--------------------------- →⁻
             C

1.2.2. Gentzen translation. Classical derivability Γ ⊢c B was defined in Section 1.2.1 by Γ ∪ Stab ⊢ B. This embedding of classical logic into minimal logic can be expressed in a somewhat different and very explicit form, namely as a syntactic translation A ↦ Aᵍ of formulas such that A is derivable in classical logic if and only if its translation Aᵍ is derivable in minimal logic.


Definition (Gentzen translation Aᵍ).

(R~t)ᵍ := ¬¬R~t   for R distinct from ⊥,
⊥ᵍ := ⊥,
(A ∨ B)ᵍ := Aᵍ ∨̃ Bᵍ,
(∃xA)ᵍ := ∃̃xAᵍ,
(A ∘ B)ᵍ := Aᵍ ∘ Bᵍ   for ∘ = →, ∧,
(∀xA)ᵍ := ∀xAᵍ.

Lemma 1.2.7. ⊢ ¬¬Aᵍ → Aᵍ.

Proof. Induction on A.

Case R~t with R distinct from ⊥. We must show ¬¬¬¬R~t → ¬¬R~t, which is a special case of ⊢ ¬¬¬B → ¬B.

Case ⊥. Use ⊢ ¬¬⊥ → ⊥.

Case A ∨ B. We must show ⊢ ¬¬(Aᵍ ∨̃ Bᵍ) → Aᵍ ∨̃ Bᵍ, which is a special case of ⊢ ¬¬(¬C → ¬D → ⊥) → ¬C → ¬D → ⊥:

                    u₁: ¬C→¬D→⊥    ¬C
                    ------------------ →⁻
                         ¬D→⊥            ¬D
                         -------------------- →⁻
                                  ⊥
                            ------------ →⁺u₁
¬¬(¬C→¬D→⊥)                ¬(¬C→¬D→⊥)
---------------------------------------- →⁻
                   ⊥

Case ∃xA. In this case we must show ⊢ ¬¬∃̃xAᵍ → ∃̃xAᵍ, but this is a special case of ⊢ ¬¬¬B → ¬B, because ∃̃xAᵍ is the negation ¬∀x¬Aᵍ.

Case A ∧ B. We must show ⊢ ¬¬(Aᵍ ∧ Bᵍ) → Aᵍ ∧ Bᵍ. By induction hypothesis ⊢ ¬¬Aᵍ → Aᵍ and ⊢ ¬¬Bᵍ → Bᵍ. Now use part (a) of Theorem 1.2.1 (on stability).

The cases A → B and ∀xA are similar, using parts (b) and (c) of Theorem 1.2.1 instead.

Theorem 1.2.8. (a) Γ ⊢c A implies Γᵍ ⊢ Aᵍ.
(b) Γᵍ ⊢ Aᵍ implies Γ ⊢c A for Γ, A without ∨, ∃.

Proof. (a) Use induction on Γ ⊢c A. For a stability axiom ∀~x(¬¬R~x → R~x) we must derive ∀~x(¬¬¬¬R~x → ¬¬R~x), which is easy (as above). For the rules →⁺, →⁻, ∀⁺, ∀⁻, ∧⁺ and ∧⁻ the claim follows immediately from the induction hypothesis, using the same rule again. This works because the Gentzen translation acts as a homomorphism for these connectives. For the rules ∨⁺ᵢ, ∨⁻, ∃⁺ and ∃⁻ the claim follows from the induction hypothesis and Lemma 1.2.6. For example, in case ∃⁻ the induction hypothesis gives

   |M            u: Aᵍ
 ∃̃xAᵍ   and      |N
                  Bᵍ

with x ∉ FV(Bᵍ). Now use ⊢ (¬¬Bᵍ → Bᵍ) → ∃̃xAᵍ → ∀x(Aᵍ → Bᵍ) → Bᵍ. Its premise ¬¬Bᵍ → Bᵍ is derivable by Lemma 1.2.7.

(b) First note that ⊢c (B ↔ Bᵍ) for B without ∨, ∃ (induction on B). From Γᵍ ⊢ Aᵍ we obtain Γ ⊢c A as follows. We argue informally. Assume Γ. Then Γᵍ by the note, hence Aᵍ because of Γᵍ ⊢ Aᵍ, hence A again by the note.

1.3. The Curry-Howard correspondence

Clearly the tree structure of logical derivations of any complexity at all can be quite cumbersome, and the availability of some alternative representation therefore becomes increasingly important, especially when we wish to operate on derivations. The Curry-Howard correspondence provides a neat, computationally inspired alternative. The underlying idea is that if we have a derivation M(x) of A(x) then any means of (universally) binding the x should then represent a derivation of ∀xA(x). The notation chosen for binding the x is λxM(x), denoting the function x ↦ M(x). On the side of → a derivation M of B from some assumptions A, each of which must now in addition have a label u, is then represented as λuM(u), denoting the function u ↦ M(u). This requires the labelling of assumptions so that all assumptions discharged by an application of →⁺ must have the same label.

More precisely, we represent natural deduction derivations as typed “derivation terms”, where the derived formula is the “type” of the term (and displayed as a superscript). This representation goes under the name of Curry-Howard correspondence. It dates back to Curry (1930) and somewhat later Howard, published only in (1980), who noted that the types of the combinators used in combinatory logic are exactly the Hilbert style axioms for minimal propositional logic. Subsequently Martin-Löf (1984) transferred these ideas to a natural deduction setting where natural deduction proofs of formulas A now correspond exactly to lambda terms with type A. This representation of natural deduction proofs will henceforth be used consistently.

We give an inductive definition of such derivation terms for the →, ∀-rules in Table 1 where for clarity we have written the corresponding derivations to the left. One can also define derivation terms covering the rules for ∨, ∧ and ∃, but we shall not do so here.

To see the usefulness of derivation terms consider the problem of eliminating “detours” in logical derivations. Such a detour occurs if the main premise of an elimination rule (for → or ∀) is derived by an introduction

(21)

Derivation                                     Term

u: A                                           uᴬ

[u: A]
  |M
  B
------- →⁺u                                    (λuᴬ Mᴮ)^(A→B)
A → B

  |M        |N
A → B       A
--------------- →⁻                             (M^(A→B) Nᴬ)ᴮ
      B

  |M
  A
------ ∀⁺x (with var.cond.)                    (λx Mᴬ)^(∀xA) (with var.cond.)
 ∀xA

    |M
∀xA(x)    t
------------- ∀⁻                               (M^(∀xA(x)) t)^(A(t))
    A(t)

Table 1. Derivation terms for → and ∀

rule. One can then eliminate this detour by a “conversion”. We write them in tree notation and also as derivation terms.

→-conversion.

[u: A]
  |M
  B
------- →⁺u     |N
A → B           A                     |N
------------------ →⁻     ↦β          A
        B                             |M
                                      B

or written as derivation terms

(λu M(uᴬ)ᴮ)^(A→B) Nᴬ  ↦β  M(Nᴬ)ᴮ.


The reader familiar with λ-calculus should note that this is nothing other than β-conversion.

∀-conversion.

 |M(x)
 A(x)
-------- ∀⁺x
∀xA(x)      t               |M(t)
--------------- ∀⁻   ↦β     A(t)
    A(t)

or written as derivation terms

(λx M(x)^(A(x)))^(∀xA(x)) t  ↦β  M(t)^(A(t)).

Every derivation term carries a formula as its type. However, we shall usually leave these formulas implicit and write derivation terms without them. The two β-conversions above then appear as

(λu M(u))N ↦β M(N),    (λx M(x))t ↦β M(t).

Remark (Normalization). One can show that every reduction sequence given by internal β-conversions terminates after finitely many steps, and that the resulting “normal form” is uniquely determined. For time reasons we refer to the literature for proofs of these facts.
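The two β-conversions can be turned into a tiny normalizer on untyped term trees. The following Python sketch (representation and names ours) ignores variable capture, which suffices as long as bound names are chosen distinct from free ones:

```python
# Terms as tuples: ('var', x), ('lam', x, M), ('app', M, N).

def subst(m, x, n):
    # Substitute n for the free variable x in m (no capture avoidance).
    tag = m[0]
    if tag == 'var':
        return n if m[1] == x else m
    if tag == 'lam':
        return m if m[1] == x else ('lam', m[1], subst(m[2], x, n))
    return ('app', subst(m[1], x, n), subst(m[2], x, n))

def normalize(t):
    # Repeatedly apply the beta-conversion (λx M) N |->β M[x := N].
    if t[0] == 'app':
        f, a = normalize(t[1]), normalize(t[2])
        if f[0] == 'lam':
            return normalize(subst(f[2], f[1], a))
        return ('app', f, a)
    if t[0] == 'lam':
        return ('lam', t[1], normalize(t[2]))
    return t

# (λu λv. u) a b normalizes to a:
K = ('lam', 'u', ('lam', 'v', ('var', 'u')))
print(normalize(('app', ('app', K, ('var', 'a')), ('var', 'b'))))  # ('var', 'a')
```

For the typed derivation terms of Table 1 this process terminates and yields a unique normal form, as the remark above states.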


Computability

At this point we leave the general setting of logic and aim to get closer to mathematics. We introduce free algebras (for example, the natural numbers) as basic data structures and consider function spaces based on them. The functional objects are viewed as limits of their finite approximations. We call a functional computable if it is the limit of a recursively enumerable set of finite approximations. To work with such objects in a formal theory, we need to have a language to denote them. Again lambda calculus is the appropriate tool, this time extended by constants for particular functionals defined by equations.

It is a fundamental property of computation that evaluation must be finite. So in any evaluation of Φ(ϕ) the argument ϕ can be called upon only finitely many times, and hence the value – if defined – must be determined by some finite subfunction of ϕ. This is the principle of finite support.

Let us carry this discussion somewhat further and look at the situation one type higher up. Let H be a partial functional of type-3, mapping type-2 functionals Φ to natural numbers. Suppose Φ is given and H(Φ) evaluates to a defined value. Again, evaluation must be finite. Hence the argument Φ can only be called on finitely many functions ϕ. Furthermore each such ϕ must be presented to Φ in a finite form (explicitly say, as a set of ordered pairs). In other words, H and also any type-2 argument Φ supplied to it must satisfy the finite support principle, and this must continue to apply as we move up through the types.

To describe this principle more precisely, we need to introduce the notion of a “finite approximation” Φ₀ of a functional Φ. By this we mean a finite set X of pairs (ϕ₀, n) such that (i) ϕ₀ is a finite function, (ii) Φ(ϕ₀) is defined with value n, and (iii) if (ϕ₀, n) and (ϕ₀′, n′) belong to X where ϕ₀ and ϕ₀′ are “consistent”, then n = n′. The essential idea here is that Φ should be viewed as the union of all its finite approximations. Using this notion of a finite approximation we can now formulate the

Principle of finite support. If H(Φ) is defined with value n, then there is a finite approximation Φ₀ of Φ such that H(Φ₀) is defined with value n.
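The one-type-lower analogue is easy to see concretely: a type-2 functional queries its argument at finitely many points, so any finite subfunction agreeing at those points determines the value. A Python sketch, with an example functional of our own choosing:

```python
# A type-2 functional that queries its argument phi at 0 and at phi(0).
def Phi(phi):
    return phi(0) + phi(phi(0))

full = lambda n: n + 1        # a total function on the naturals
print(Phi(full))              # full(0) + full(1) = 1 + 2 = 3

# The finite subfunction {0 |-> 1, 1 |-> 2} carries the same information
# at the queried points, so it yields the same value:
finite = {0: 1, 1: 2}
print(Phi(lambda n: finite[n]))   # 3
```

The pair ({0: 1, 1: 2}, 3) is then a token of a finite approximation of Phi in the sense defined above.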


The monotonicity principle formalizes the simple idea that once H(Φ) is evaluated, then the same value will be obtained no matter how the argument Φ is extended. This requires the notion of “extension”. Φ′ extends Φ if for any piece of data (ϕ₀, n) in Φ there exists another (ϕ₀′, n) in Φ′ such that ϕ₀ extends ϕ₀′ (note the contravariance!). The second basic principle is then

Monotonicity principle. If H(Φ) is defined with value n and Φ′ extends Φ, then H(Φ′) is defined with value n.

An immediate consequence of finite support and monotonicity is that the behaviour of any functional is indeed determined by its set of finite approximations. For if Φ, Φ′ have the same finite approximations and H(Φ) is defined with value n, then by finite support H(Φ₀) is defined with value n for some finite approximation Φ₀ of Φ; since Φ′ extends Φ₀, by monotonicity H(Φ′) is defined with value n. Thus H(Φ) = H(Φ′), for all H.

This observation now allows us to formulate a notion of abstract com- putability:

Effectivity principle. An object is computable just in case its set of finite approximations is (primitive) recursively enumerable (or equivalently, Σ⁰₁-definable).

The general theory of computability concerns partial functions and par- tial operations on them. However, we might be interested in particular objects only, so once the theory of general (partial) objects is developed, we can look for ways to restrict attention to the particular ones. Examples for interesting sets of objects are (i) the total functions on natural numbers, (ii) so-called cototal objects like “streams” of signed digits {−1,0,1} (useful to represent real numbers), and (iii) extensional functionals mapping functions on natural numbers to natural numbers.

2.1. Abstract computability via information systems

We need to define appropriate domains for our to-be-defined computable functionals, viewed as limits of their finite approximations. Information systems are a convenient setting to introduce and study the latter.

2.1.1. Information systems. The basic idea of information systems is to provide an axiomatic setting to describe approximations of abstract objects (like functions or functionals) by concrete, finite ones. We do not attempt to analyze the notion of “concreteness” or finiteness here, but rather take an arbitrary countable set A of “bits of data” or “tokens” as a basic notion to be explained axiomatically. In order to use such data to build approximations of abstract objects, we need a notion of “consistency”, which determines when the elements of a finite set of tokens are consistent with each other. We also need an “entailment relation” between consistent sets


U of data and single tokens a, which intuitively expresses the fact that the information contained in U is sufficient to compute the bit of information a.

The axioms below are a minor modification of Scott’s (1982), due to Larsen and Winskel (1991).

Definition. An information system is a structure (A, Con, ⊢) where A is an at most countable non-empty set (the tokens), Con is a set of finite subsets of A (the consistent sets) and ⊢ is a subset of Con × A (the entailment relation), which satisfy

U ⊆ V ∈ Con → U ∈ Con,
{a} ∈ Con,
U ⊢ a → U ∪ {a} ∈ Con,
a ∈ U ∈ Con → U ⊢ a,
U ∈ Con → ∀a∈V (U ⊢ a) → V ⊢ b → U ⊢ b.

The elements of Con are called formal neighborhoods. We use U, V, W to denote finite sets, and write

U ⊢ V   for U ∈ Con ∧ ∀a∈V (U ⊢ a),
a ↑ b   for {a, b} ∈ Con (a, b are consistent),
U ↑ V   for ∀a∈U, b∈V (a ↑ b).

Definition. The ideals (also called objects) of an information system A = (A, Con, ⊢) are defined to be those subsets x of A which satisfy

U ⊆ x → U ∈ Con   (x is consistent),
U ⊢ a → U ⊆ x → a ∈ x   (x is deductively closed).

For example the deductive closure U̅ := {a ∈ A | U ⊢ a} of U ∈ Con is an ideal. The set of all ideals of A is denoted by |A|.
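For a finite system with decidable entailment, the deductive closure can be computed by a fixpoint iteration. A Python sketch on a toy system of our own (here entailment is given as a function on arbitrary finite sets, extending the relation on Con):

```python
def closure(u, tokens, entails):
    # Smallest superset of u closed under entailment: add every token
    # entailed by the set built so far, until nothing changes.
    x = set(u)
    changed = True
    while changed:
        changed = False
        for a in tokens:
            if a not in x and entails(x, a):
                x.add(a)
                changed = True
    return x

# Toy system on tokens {1, 2, 3}: a set entails its own members, and any
# set containing {1, 2} additionally entails 3.
ent = lambda u, a: a in u or (a == 3 and {1, 2} <= u)
print(closure({1, 2}, {1, 2, 3}, ent))   # {1, 2, 3}
print(closure({1}, {1, 2, 3}, ent))      # {1}
```

For infinite systems the closure is in general only enumerable, in line with the effectivity principle above.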

Examples. Every countable set A can be turned into a “flat” information system by letting the set of tokens be A, Con := {∅} ∪ {{a} | a ∈ A} and U ⊢ a mean a ∈ U. In this case the ideals are just the elements of Con. For A = N the Con-sets are ∅ and the pairwise inconsistent singletons {0}, {1}, {2}, …, forming a flat domain.

A rather important example is the following, which concerns approximations of functions from a countable set A into a countable set B. The tokens are the pairs (a, b) with a ∈ A and b ∈ B, and

Con := {{(aᵢ, bᵢ) | i < k} | ∀i,j<k (aᵢ = aⱼ → bᵢ = bⱼ)},
U ⊢ (a, b) := (a, b) ∈ U.

It is easy to verify that this defines an information system whose ideals are (the graphs of) all partial functions from A to B.
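The consistency condition here is simply single-valuedness of the token set, which is easy to check; a Python sketch (representation ours):

```python
def consistent(u):
    # A finite set of (argument, value) tokens is consistent iff it assigns
    # at most one value to each argument, i.e. it is the graph of a
    # partial function.
    seen = {}
    for a, b in u:
        if seen.setdefault(a, b) != b:
            return False
    return True

print(consistent({(0, 1), (1, 2)}))   # True: a partial function
print(consistent({(0, 1), (0, 2)}))   # False: two values at argument 0
```

An ideal is then any (possibly infinite) single-valued, pairwise consistent set of such tokens, i.e. the graph of a partial function from A to B.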

Remark. One can show that for an arbitrary information system A = (A, Con, ⊢) the structure (|A|, ⊆, ∅) is a “domain” (also called Scott-Ershov domain, or “bounded complete algebraic cpo”), whose set of “compact elements” can be represented as |A|_c = {U̅ | U ∈ Con}. The converse holds as well: every countable domain can be represented as an information system. We will not need this relation to standard (non-constructive) domain theory, and hence do not even define these notions here.

2.1.2. Function spaces. We define the “function space” A → B between two information systems A and B.

Definition. Let A = (A, Con_A, ⊢_A) and B = (B, Con_B, ⊢_B) be information systems. Define A → B = (C, Con, ⊢) by

C := Con_A × B,
{(Uᵢ, bᵢ) | i ∈ I} ∈ Con  :=  ∀J⊆I (⋃_{j∈J} Uⱼ ∈ Con_A → {bⱼ | j ∈ J} ∈ Con_B).

For the definition of the entailment relation ⊢ it is helpful to first define the notion of an application of W := {(Uᵢ, bᵢ) | i ∈ I} ∈ Con to U ∈ Con_A:

{(Uᵢ, bᵢ) | i ∈ I}U := {bᵢ | U ⊢_A Uᵢ}.

From the definition of Con we know that this set is in Con_B. Now define W ⊢ (U, b) by WU ⊢_B b.
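The application WU is a one-line comprehension once entailment in A is given as a function. A Python sketch (representation ours), using the flat system's entailment extended to finite sets, where U ⊢ V holds iff V ⊆ U:

```python
def apply_w(w, u, entails_a):
    # {(U_i, b_i) | i in I} applied to U: collect b_i whenever U entails U_i.
    return {b for (ui, b) in w if entails_a(u, ui)}

# Entailment of the flat system, extended to finite sets: U |- V iff V ⊆ U.
flat = lambda u, v: v <= u

# A token set W in Con(A -> B); U_i are frozensets so the tokens are hashable.
W = {(frozenset({1}), 'x'), (frozenset({2}), 'y'), (frozenset(), 'z')}
print(apply_w(W, frozenset({1}), flat))   # {'x', 'z'}
```

Note that the empty neighborhood is entailed by every U, so its value 'z' appears in every application, matching the monotonicity remark that follows.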

Remark. Clearly application is monotone in the second argument, in the sense that U `A U′ implies (W U′ ⊆ W U, hence also) W U `B W U′. In fact, application is also monotone in the first argument, i.e.,

W ` W′ implies W U `B W′U.

To see this let W = {(Ui, bi) | i ∈ I} and W′ = {(U′j, b′j) | j ∈ J}. By definition W′U = {b′j | U `A U′j}. Now fix j such that U `A U′j; we must show W U `B b′j. By assumption W ` (U′j, b′j), hence W U′j `B b′j. Because of W U ⊇ W U′j the claim follows.
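Application W U and the induced entailment in A → B can likewise be sketched in Python. This is a hedged illustration: `entails_A` and `entails_B` stand for the relations `A and `B (with a set of tokens on the right), they must be supplied for the concrete systems, and all names are ours.

```python
# W U := {b_i | U |-_A U_i} for W = {(U_i, b_i) | i in I} in Con(A -> B);
# each U_i is a frozenset of A-tokens, and U likewise.
def apply_w(w, u, entails_A):
    return {b for (ui, b) in w if entails_A(u, ui)}

# W |- (U, b) is defined as W U |-_B b.
def entails_fun(w, token, entails_A, entails_B):
    u, b = token
    return entails_B(apply_w(w, u, entails_A), {b})
```

In a flat system U ` a just means a ∈ U, so set inclusion serves as entailment there.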

Lemma 2.1.1. If A and B are information systems, then so is A→B defined as above.


Proof. Let A = (A, ConA, `A) and B = (B, ConB, `B). The first, second and fourth property of the definition are clearly satisfied. For the third, suppose

{(U1, b1), . . . , (Un, bn)} ` (U, b), i.e., {bj | U `A Uj} `B b.

We have to show that {(U1, b1), . . . , (Un, bn), (U, b)} ∈ Con. So let I ⊆ {1, . . . , n} and suppose

U ∪ ⋃i∈I Ui ∈ ConA.

We must show that {b} ∪ {bi | i ∈ I} ∈ ConB. Let J ⊆ {1, . . . , n} consist of those j with U `A Uj. Then also

U ∪ ⋃i∈I Ui ∪ ⋃j∈J Uj ∈ ConA.

Since

⋃i∈I Ui ∪ ⋃j∈J Uj ∈ ConA,

from the consistency of {(U1, b1), . . . , (Un, bn)} we can conclude that

{bi | i ∈ I} ∪ {bj | j ∈ J} ∈ ConB.

But {bj | j ∈ J} `B b by assumption. Hence

{bi | i ∈ I} ∪ {bj | j ∈ J} ∪ {b} ∈ ConB.

For the final property, suppose

W ` W′ and W′ ` (U, b).

We have to show W ` (U, b), i.e., W U `B b. We obtain W U `B W′U by monotonicity in the first argument, and W′U `B b by definition.

We shall now give an alternative characterization of the ideals in A → B, as “approximable maps”. The basic idea for approximable maps is the desire to study “information respecting” maps from A into B. Such a map is given by a relation r between ConA and B, where (U, b) ∈ r intuitively means that whenever we are given the information U ∈ ConA, then we know that at least the token b appears in the value.

Definition. Let A = (A, ConA, `A) and B = (B, ConB, `B) be information systems. A relation r ⊆ ConA × B is an approximable map if it satisfies the following:

(a) if (U, b1), . . . , (U, bn) ∈ r, then {b1, . . . , bn} ∈ ConB;

(b) if (U, b1), . . . , (U, bn) ∈ r and {b1, . . . , bn} `B b, then (U, b) ∈ r;

(c) if (U′, b) ∈ r and U `A U′, then (U, b) ∈ r.
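For finite data the three axioms can be tested directly. The following Python sketch is our own helper, not from the text; `con_B`, `entails_B` and `entails_A` are parameters standing for the respective relations, and all candidate witnesses are drawn from r itself, which suffices for finite examples.

```python
def is_approximable(r, con_B, entails_B, entails_A):
    """Check axioms (a)-(c) for a finite relation r, given as a set of
    (U, b) pairs with U a frozenset of A-tokens."""
    us = {u for (u, _) in r}
    bs = {b for (_, b) in r}
    for u in us:
        have = {b for (v, b) in r if v == u}
        # (a): the tokens promised for the same U must be consistent in B
        if not con_B(have):
            return False
        # (b): closure under B-entailment, tested against tokens in r
        for b in bs:
            if entails_B(have, {b}) and (u, b) not in r:
                return False
        # (c): U |-_A U' and (U', b) in r must imply (U, b) in r
        for (u2, b) in r:
            if entails_A(u, u2) and (u, b) not in r:
                return False
    return True
```

With a flat B (consistent sets have at most one token) the relation {({1}, x), ({1, 2}, x)} passes, while promising two different values for the same U fails axiom (a).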


Theorem 2.1.2. Let A and B be information systems. Then the ideals of A→B are exactly the approximable maps from A to B.

Proof. Let A = (A, ConA, `A) and B = (B, ConB, `B). If r ∈ |A → B| then r ⊆ ConA × B is consistent and deductively closed. We have to show that r satisfies the axioms for approximable maps.

(a) Let (U, b1), . . . , (U, bn) ∈ r. We must show that {b1, . . . , bn} ∈ ConB. But this clearly follows from the consistency of r.

(b) Let (U, b1), . . . , (U, bn) ∈ r and {b1, . . . , bn} `B b. We must show that (U, b) ∈ r. But

{(U, b1), . . . , (U, bn)} ` (U, b)

by the definition of the entailment relation ` in A → B, hence (U, b) ∈ r since r is deductively closed.

(c) Let U `A U′ and (U′, b) ∈ r. We must show that (U, b) ∈ r. But

{(U′, b)} ` (U, b)

since {(U′, b)}U = {b} (which follows from U `A U′), hence (U, b) ∈ r, again since r is deductively closed.

For the other direction suppose that r ⊆ ConA × B is an approximable map. We must show that r ∈ |A → B|.

Consistency of r. Suppose (U1, b1), . . . , (Un, bn) ∈ r and U = ⋃{Ui | i ∈ I} ∈ ConA for some I ⊆ {1, . . . , n}. We must show that {bi | i ∈ I} ∈ ConB. Now from (Ui, bi) ∈ r and U `A Ui we obtain (U, bi) ∈ r by axiom (c) for all i ∈ I, and hence {bi | i ∈ I} ∈ ConB by axiom (a).

Deductive closure of r. Suppose (U1, b1), . . . , (Un, bn) ∈ r and

W := {(U1, b1), . . . , (Un, bn)} ` (U, b).

We must show (U, b) ∈ r. By definition of ` for A → B we have W U `B b, which is {bi | U `A Ui} `B b. Further by our assumption (Ui, bi) ∈ r we know (U, bi) ∈ r by axiom (c) for all i with U `A Ui. Hence (U, b) ∈ r by axiom (b).

2.1.3. Continuous functions. We can also characterize approximable maps in a different way, which is closer to the usual characterizations of continuity¹:

Lemma 2.1.3. Let A and B be information systems and f: |A| → |B| monotone (i.e., x ⊆ y implies f(x) ⊆ f(y)). Then the following are equivalent.

¹In fact, approximable maps are exactly the continuous functions w.r.t. the so-called Scott topology. However, we will not enter this subject here.


(a) f satisfies the “principle of finite support” PFS: If b ∈ f(x), then b ∈ f(U̅) for some U ⊆ x.

(b) f commutes with directed unions: for every directed D ⊆ |A| (i.e., for any x, y ∈ D there is a z ∈ D such that x, y ⊆ z)

f(⋃x∈D x) = ⋃x∈D f(x).

Note that in (b) the set {f(x) | x ∈ D} is directed by monotonicity of f; hence its union is indeed an ideal in |B|. Note also that from PFS and monotonicity of f it follows immediately that if V ⊆ f(x), then V ⊆ f(U̅) for some U ⊆ x.

Proof. Let f satisfy PFS, and D ⊆ |A| be directed. f(⋃x∈D x) ⊇ ⋃x∈D f(x) follows from monotonicity. For the reverse inclusion let b ∈ f(⋃x∈D x). Then by PFS b ∈ f(U̅) for some U ⊆ ⋃x∈D x. From the directedness and the fact that U is finite we obtain U ⊆ z for some z ∈ D. From b ∈ f(U̅) and monotonicity infer b ∈ f(z). Conversely, let f commute with directed unions, and assume b ∈ f(x). Then

b ∈ f(x) = f(⋃U⊆x U̅) = ⋃U⊆x f(U̅),

hence b ∈ f(U̅) for some U ⊆ x.

We call a function f: |A| → |B| continuous if it satisfies the conditions in Lemma 2.1.3. Hence continuous maps f: |A| → |B| are those that can be completely described from the point of view of finite approximations of the abstract objects x ∈ |A| and f(x) ∈ |B|: whenever we are given a finite approximation V to the value f(x), then there is a finite approximation U to the argument x such that already f(U̅) contains the information in V; note that by monotonicity f(U̅) ⊆ f(x).

Clearly the identity and constant functions are continuous, and also the composition g ◦ f of continuous functions f: |A| → |B| and g: |B| → |C|.

Theorem 2.1.4. Let A = (A, ConA, `A), B = (B, ConB, `B) be information systems. Then the ideals of A → B are in a natural bijective correspondence with the continuous functions from |A| to |B|, as follows.

(a) With any approximable map r ⊆ ConA × B we can associate a continuous function |r|: |A| → |B| by

|r|(z) := {b ∈ B | (U, b) ∈ r for some U ⊆ z}.

We call |r|(z) the application of r to z.


(b) Conversely, with any continuous function f: |A| → |B| we can associate an approximable map fˆ ⊆ ConA × B by

fˆ := {(U, b) | b ∈ f(U̅)}.

These assignments are inverse to each other, i.e., f = |fˆ| and r = (|r|)ˆ.

Proof. Let r be an ideal of A → B; then by Theorem 2.1.2 we know that r is an approximable map. We first show that |r| is well-defined. So let z ∈ |A|.

|r|(z) is consistent: let b1, . . . , bn ∈ |r|(z). Then there are U1, . . . , Un ⊆ z such that (Ui, bi) ∈ r. Hence U := U1 ∪ · · · ∪ Un ⊆ z and (U, bi) ∈ r by axiom (c) of approximable maps. Now from axiom (a) we can conclude that {b1, . . . , bn} ∈ ConB.

|r|(z) is deductively closed: let b1, . . . , bn ∈ |r|(z) and {b1, . . . , bn} `B b. We must show b ∈ |r|(z). As before we find U ⊆ z such that (U, bi) ∈ r. Now from axiom (b) we can conclude (U, b) ∈ r and hence b ∈ |r|(z).

Continuity of |r| follows immediately from part (a) of Lemma 2.1.3 above, since by definition |r| is monotone and satisfies PFS.

Now let f: |A| → |B| be continuous. It is easy to verify that fˆ is indeed an approximable map. Furthermore

b ∈ |fˆ|(z) ↔ (U, b) ∈ fˆ for some U ⊆ z

↔ b ∈ f(U̅) for some U ⊆ z

↔ b ∈ f(z) by monotonicity and PFS.

Finally, for any approximable map r ⊆ ConA × B we have

(U, b) ∈ r ↔ ∃V⊆U (V, b) ∈ r by axiom (c) for approximable maps

↔ b ∈ |r|(U̅)

↔ (U, b) ∈ (|r|)ˆ,

hence r = (|r|)ˆ.

Consequently we can (and will) view approximable maps r ⊆ ConA × B as continuous functions from |A| to |B|.

Equality of two subsets r, s ⊆ ConA × B means that they consist of the same tokens (U, b). We can characterize equality r = s by extensional equality of the associated functions |r|, |s|. It even suffices that |r| and |s| coincide on all compact elements U̅ for U ∈ ConA.

Lemma 2.1.5 (Extensionality). Assume that A = (A, ConA, `A) and B = (B, ConB, `B) are information systems and r, s ⊆ ConA × B approximable maps. Then the following are equivalent.


(a) r = s,
(b) |r|(z) = |s|(z) for all z ∈ |A|,
(c) |r|(U̅) = |s|(U̅) for all U ∈ ConA.

Proof. It suffices to prove (c) → (a). As above this follows from

(U, b) ∈ r ↔ ∃V⊆U (V, b) ∈ r by axiom (c) for approximable maps

↔ b ∈ |r|(U̅).

Moreover, one can easily check that

s ◦ r := {(U, c) | ∃V ((V, c) ∈ s ∧ (U, V) ⊆ r)}

is an approximable map (where (U, V) := {(U, b) | b ∈ V}), and

|s ◦ r| = |s| ◦ |r|,    (g ◦ f)ˆ = gˆ ◦ fˆ.

We usually write r(z) for |r|(z), and similarly (U, b) ∈ f for (U, b) ∈ fˆ. It should always be clear from the context where the hats and |·|'s should be inserted.
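Application |r|(z) and composition s ◦ r are directly computable for finite data. A hedged Python sketch (names are ours; candidate sets U for the composition are drawn from the first components of r, which suffices for finite examples):

```python
# |r|(z) := {b | (U, b) in r for some U subseteq z}: application of an
# approximable map r (a set of (U, b) pairs, U a frozenset) to an ideal z.
def apply_map(r, z):
    return {b for (u, b) in r if u <= z}

# s o r := {(U, c) | exists V: (V, c) in s and (U, V) subseteq r}, where
# (U, V) := {(U, b) | b in V}.  Candidates for U are taken from r.
def compose(s, r):
    return {(u, c)
            for (v, c) in s
            for u in {u2 for (u2, _) in r}
            if all((u, b) in r for b in v)}
```
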

2.1.4. Algebras and types. We now consider concrete information systems, our basis for continuous functionals.

Types will be built from base types by the formation of function types, τ → σ. As domains for the base types we choose non-flat free algebras, given by their constructors. The reason for taking non-flat base domains is that we want the constructors to be injective and with disjoint ranges. This generally is not the case for flat domains. Let α, β, ξ denote type variables.

Definition (Constructor types and algebra forms). Constructor types κ have the form

~α → (ξ)i<n → ξ

with all type variables αi distinct from each other and from ξ. Iterated arrows are understood as associated to the right. An argument type of a constructor type is called a parameter argument type if it is different from ξ, and a recursive argument type otherwise. A constructor type κ is nullary if it has no recursive argument types. We call

ι := µξ ~κ

with ~κ not empty an algebra form. An algebra form is non-recursive (or explicit) if it does not have recursive argument types.


Examples. We list some algebra forms without parameters, with standard names for the constructors added to each constructor type.

U := µξ(Dummy : ξ) (unit),
B := µξ(tt : ξ, ff : ξ) (booleans),
N := µξ(0 : ξ, S : ξ → ξ) (natural numbers, unary),
P := µξ(1 : ξ, S0 : ξ → ξ, S1 : ξ → ξ) (positive numbers, binary),
Y := µξ(− : ξ, Branch : ξ → ξ → ξ) (binary trees, or derivations).

Algebra forms with type parameters are

I(α) := µξ(Id : α → ξ) (identity),
L(α) := µξ(Nil : ξ, Cons : α → ξ → ξ) (lists),
S(α) := µξ(SCons : α → ξ → ξ) (streams),
α × β := µξ(Pair : α → β → ξ) (product),
α + β := µξ(InL : α → ξ, InR : β → ξ) (sum),
uysum(α) := µξ(DummyL : ξ, Inr : α → ξ) (for U + α),
ysumu(α) := µξ(Inl : α → ξ, DummyR : ξ) (for α + U).

The default name for the i-th constructor of an algebra form is Ci.

Definition (Type).

ρ, σ, τ ::= α | ι(~ρ) | τ → σ,

where ι is an algebra form with ~α its parameter type variables, and ι(~ρ) the result of substituting the (already generated) types ~ρ for ~α. Types of the form ι(~ρ) are called algebras. An algebra is closed if it has no type variables.

The level of a type is defined by

lev(α) := 0,
lev(ι(~ρ)) := max(lev(~ρ)),
lev(τ → σ) := max(lev(σ), 1 + lev(τ)).

Base types are types of level 0, and a higher type has level at least 1.
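The level function is easy to render as code. A small Python sketch over an ad-hoc representation of types (tuples tagged 'var', 'alg', 'arrow'; this encoding is ours, not the text's):

```python
# Types as nested tuples: ('var', name), ('alg', name, [rho, ...]),
# ('arrow', tau, sigma).  lev follows the clauses of the definition above.
def lev(t):
    tag = t[0]
    if tag == 'var':
        return 0
    if tag == 'alg':
        return max((lev(r) for r in t[2]), default=0)
    if tag == 'arrow':
        _, tau, sigma = t
        return max(lev(sigma), 1 + lev(tau))
    raise ValueError(tag)
```

The level rises exactly when a type occurs to the left of an arrow, so N → N has level 1 and (N → N) → N has level 2.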

Examples. 1. L(α), L(L(α)), α × β are algebras.

2. L(L(N)), N + B, Z := P + U + P, Q := Z × P are closed base types.

3. R := (N → Q) × (P → N) is a closed algebra of level 1.

There can be many equivalent ways to define a particular type. For instance, we could take U + U to be the type of booleans, L(U) to be the type of natural numbers, and L(B) to be the type of positive binary numbers.


2.1.5. Partial continuous functionals. For every closed type τ we define an information system Cτ = (Cτ,Conτ,`τ). The definition is by induction on τ, and in case of an algebraι(~ρ) by a side inductive definition.

Definition (Information system of type τ). Case ι(ρ). For simplicity assume that there is only one parameter type ρ.

(a) Tokens a ∈ Cι(ρ) are the type correct constructor expressions CV a1 . . . an where ai is an extended token, i.e., a token or the special symbol ∗ which carries no information, and V is a consistent set of tokens in Cρ.

(b) A finite set U of tokens in Cι(ρ) is consistent (i.e., ∈ Conι(ρ)) if all its elements start with the same constructor C, say of arity ρ → ι(ρ) → . . . → ι(ρ) → ι(ρ). Let U = {CV1a11 . . . a1n, . . . , CVmam1 . . . amn}. Then we require that (i) V1 ∪ · · · ∪ Vm is consistent (i.e., ∈ Conρ) and (ii) the sets Ui consisting of all (proper) tokens at the i-th argument position of some token in U are consistent (i.e., ∈ Conι(ρ)).

(c) {CV1a11 . . . a1n, . . . , CVmam1 . . . amn} `ι(ρ) CV a1 . . . an if and only if (i) V1 ∪ · · · ∪ Vm `ρ V and (ii) for each set Ui as in (b) above we have Ui `ι(ρ) ai (where Ui ` ∗ is taken to be true).

Case τ → σ. Tokens, consistency and entailment for Cτ→σ are defined as done generally in Section 2.1.2 (on function spaces). In more detail:

(a) Tokens in Cτ→σ are pairs (U, b) with U ∈ Conτ and b ∈ Cσ.

(b) {(Ui, bi) | i ∈ I} ∈ Conτ→σ is defined to mean

∀J⊆I (⋃j∈J Uj ∈ Conτ → {bj | j ∈ J} ∈ Conσ).

(c) W `τ→σ (U, b) is defined to mean W U `σ b.

Lemma 2.1.6. (Cτ,Conτ,`τ) is an information system.

Proof. Case ι(~ρ). For every constructor, for instance C : ρ → ι(ρ) → ι(ρ), we need to prove the axioms of information systems. We only give details for the last one, transitivity. By definition we can assume

{CU1a1, . . . , CUnan} `ι(ρ) CVjbj   and   {CV1b1, . . . , CVmbm} `ι(ρ) CW c.

By definition we have

U1 ∪ · · · ∪ Un `ρ Vj,   {a1, . . . , an} `ι(ρ) bj   and   V1 ∪ · · · ∪ Vm `ρ W,   {b1, . . . , bm} `ι(ρ) c.

By the (main and side) induction hypotheses we obtain

U1 ∪ · · · ∪ Un `ρ W,   {a1, . . . , an} `ι(ρ) c.

Hence the goal {CU1a1, . . . , CUnan} `ι(ρ) CW c follows by definition.


[Diagram: the tokens 0 and S∗ at the top, below them S0 and SS∗, then SS0 and SSS∗, then SSS0, . . . ; edges connect each token to the tokens it entails above it, e.g. S0 and SS∗ both entail S∗.]

Figure 1. Tokens and entailment for N

Case τ →σ. As in Section 2.1.2.

Observe that all the notions involved are computable: a ∈ Cτ, U ∈ Conτ and U `τ a.

Definition (Partial continuous functionals). The ideals x ∈ |Cτ| are called partial continuous functionals of type τ. Since Cτ→σ = Cτ → Cσ, the partial continuous functionals of type τ → σ correspond to the continuous functions from |Cτ| to |Cσ|. A partial continuous functional x ∈ |Cτ| is computable if it is recursively enumerable when viewed as a set of tokens.

Definition (Cototal and total ideals of closed base type). Let ι(~ρ) be a closed base type. Its tokens can be seen as constructor trees with some recursive argument positions occupied by ∗. An ideal x in Cι(~ρ) is cototal if for each of its tokens P(∗) with a distinguished occurrence of ∗ there is another token of the form P(C~∅~∗) in x. We call x total if it is cototal and finite.

2.1.6. Examples. The tokens for the algebra N are shown in Figure 1. For tokens a, b we have {a} ` b if and only if there is a path from a (up) to b (down).
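The entailment relation of Figure 1 can be written out explicitly. In the Python sketch below a token Sⁿ0 or Sⁿ∗ is encoded as a pair (n, c); the encoding and the function name are ours, not the text's:

```python
# Tokens of the algebra N: S^n 0 (complete information) and S^n * with
# n >= 1 (only "at least n" is known).  A token is a pair (n, c) with
# c in {'0', '*'}; e.g. (2, '0') is SS0 and (1, '*') is S*.

def entails_N(a, b):
    """{a} |- b for single tokens of N, unfolding the constructor S."""
    m, ca = a
    n, cb = b
    if cb == '*':
        # peeling n-1 times S reduces the goal to ... |- S*, which holds
        # whenever a still has an S at that depth, i.e. n <= m
        return n <= m
    # a token S^n 0 is entailed only by itself
    return ca == '0' and n == m
```

So SS0 entails SS∗, S∗ and itself, while S∗ entails nothing above itself, matching the paths in Figure 1.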

Dyadic rational numbers in the interval (−1, 1) are those of the form

∑n<m kn/2^(n+1)   with kn ∈ {−1, 1}.

A pictorial representation is in Figure 2. Irrational real numbers like ½√2 then can be seen as infinite paths (or “streams”). Both of these objects appear in our present setting as total or cototal ideals of certain closed base types. This connection will make it possible to extract algorithms on stream-represented real numbers from proofs talking about ordinary reals given by Cauchy sequences with moduli.
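Evaluating a finite list of signed digits to the dyadic rational it denotes is a one-liner; a Python sketch (the function name is ours):

```python
from fractions import Fraction

# A finite signed-digit approximation k_0, ..., k_{m-1} with k_n in {-1, 1}
# denotes the dyadic rational  sum_{n<m} k_n / 2^(n+1)  in (-1, 1).
def dyadic(digits):
    return sum(Fraction(k, 2 ** (n + 1)) for n, k in enumerate(digits))
```

Extending the digit list refines the approximation: every finite prefix of an infinite signed-digit stream pins its value down to an interval of length 2^(−m).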
