
Mathematical Logic

Helmut Schwichtenberg

Mathematisches Institut der Universität München, Wintersemester 2009/2010


Chapter 1. Logic
1.1. Natural Deduction
1.2. Normalization
1.3. Soundness and Completeness for Tree Models
1.4. Soundness and Completeness of the Classical Fragment

Chapter 2. Model Theory
2.1. Ultraproducts
2.2. Complete Theories and Elementary Equivalence
2.3. Applications

Chapter 3. Recursion Theory
3.1. Register Machines
3.2. Elementary Functions
3.3. The Normal Form Theorem

Chapter 4. Gödel’s Theorems
4.1. Gödel Numbers
4.2. The Notion of Truth in Formal Theories
4.3. Undecidability and Incompleteness

Bibliography

Index


Chapter 1. Logic

The main subject of Mathematical Logic is mathematical proof. In this introductory chapter we deal with the basics of formalizing such proofs and, via normalization, analysing their structure. The system we pick for the representation of proofs is Gentzen’s natural deduction (1934). Our reasons for this choice are twofold. First, as the name says, this is a natural notion of formal proof, which means that the way proofs are represented corresponds very closely to the way a careful mathematician writing out all details of an argument would proceed anyway. Second, formal proofs in natural deduction are closely related (via the so-called Curry-Howard correspondence) to terms in typed lambda calculus. This provides us not only with a compact notation for logical derivations (which otherwise tend to become somewhat unmanageable tree-like structures), but also opens up a route to applying (in part 3) the computational techniques which underpin lambda calculus.

Apart from classical logic we will also deal with more constructive logics: minimal and intuitionistic logic. This will reveal some interesting aspects of proofs, e.g., that it is possible and useful to distinguish between existential proofs that actually construct witnessing objects, and others that don’t.

An essential point for Mathematical Logic is to fix a formal language to be used. We take implication → and the universal quantifier ∀ as basic.

Then the logic rules correspond precisely to lambda calculus. The additional connectives: the existential quantifier ∃, disjunction ∨ and conjunction ∧, can then be added either as rules or axiom schemes. It is “natural” to treat them as rules, and that is what we do here.

An underlying theme of this chapter is to bring out the constructive content of logic, particularly in regard to the relationship between minimal and classical logic. For us the latter is most appropriately viewed as a subsystem of the former.

1.1. Natural Deduction

Rules come in pairs: we have an introduction and an elimination rule for each of the logical connectives. The resulting system is called minimal logic;


it was introduced by Kolmogorov (1932), Gentzen (1934) and Johansson (1937). Notice that no negation is yet present. If we go on and require ex-falso-quodlibet for the nullary propositional symbol ⊥ (“falsum”) we can embed intuitionistic logic with negation as A → ⊥. To embed classical logic, we need to go further and add as an axiom schema the principle of indirect proof, also called stability (∀~x(¬¬R~x → R~x) for relation symbols R), but then it is appropriate to restrict to the language based on →, ∀, ⊥ and ∧.

The reason for this restriction is that we can neither prove ¬¬∃xA → ∃xA nor ¬¬(A∨B) → A∨B, for there are countermodels to both (the former is Markov’s scheme). However, we can prove them for the classical existential quantifier and disjunction defined by ¬∀x¬A and ¬A → ¬B → ⊥. Thus we need to make a distinction between two kinds of “exists” and two kinds of “or”: the classical ones are “weak” and the non-classical ones “strong”, since they have constructive content. In situations where both kinds occur together we must mark the distinction, and we shall do this by writing a tilde above the weak disjunction and existence symbols, thus ˜∨, ˜∃. Of course, in a classical context this distinction does not arise and the tilde is not necessary.

1.1.1. Terms and formulas. Let a countably infinite set {v_i | i ∈ N} of variables be given; they will be denoted by x, y, z. A first order language L then is determined by its signature, which is to mean the following.

(i) For every natural number n ≥ 0 a (possibly empty) set of n-ary relation symbols (or predicate symbols). 0-ary relation symbols are called propositional symbols. ⊥ (read “falsum”) is required as a fixed propositional symbol. The language will not, unless stated otherwise, contain = as a primitive. Binary relation symbols can be marked as infix.

(ii) For every natural number n ≥ 0 a (possibly empty) set of n-ary function symbols. 0-ary function symbols are called constants. Binary function symbols can also be marked as infix.

We assume that all these sets of variables, relation and function symbols are disjoint. L is kept fixed and will only be mentioned when necessary.

Terms are inductively defined as follows.

(i) Every variable is a term.

(ii) Every constant is a term.

(iii) If t1, . . . , tn are terms and f is an n-ary function symbol with n ≥ 1, then f(t1, . . . , tn) is a term. (If r, s are terms and ◦ is a binary function symbol, then (r ◦ s) is a term.)

From terms one constructs prime formulas, also called atomic formulas or just atoms: If t1, . . . , tn are terms and R is an n-ary relation symbol, then R(t1, . . . , tn) is a prime formula. (If r, s are terms and ∼ is a binary relation symbol, then (r ∼ s) is a prime formula.)


Formulas are inductively defined from prime formulas by

(i) Every prime formula is a formula.

(ii) If A and B are formulas, then so are (A → B) (“if A then B”), (A ∧ B) (“A and B”) and (A ∨ B) (“A or B”).

(iii) If A is a formula and x is a variable, then ∀xA (“A holds for all x”) and ∃xA (“there is an x such that A”) are formulas.

Negation is defined by

¬A:= (A→ ⊥).

Notation. In writing formulas we save on parentheses by assuming that ∀, ∃, ¬ bind more strongly than ∧, ∨, and that in turn ∧, ∨ bind more strongly than →, ↔ (where A ↔ B abbreviates (A → B) ∧ (B → A)). Outermost parentheses can be dropped. Thus A ∧ ¬B → C is read as ((A ∧ (¬B)) → C). In the case of iterated implications we use the short notation

A1 → A2 → · · · → An−1 → An   for   A1 → (A2 → · · · → (An−1 → An) . . .).

We also occasionally save on parentheses by writing for instance Rxyz, Rt0t1t2 instead of R(x, y, z), R(t0, t1, t2), where R is some predicate symbol. Similarly for a unary function symbol with a (typographically) simple argument, so fx for f(x), etc. In this case no confusion will arise. But readability requires that we write in full R(fx, gy, hz) instead of Rfxgyhz.

We shall often need to do induction on the height, denoted |A|, of formulas A. This is defined as follows: |P| = 0 for atoms P, |A ◦ B| = max(|A|, |B|) + 1 for binary operators ◦ (i.e., →, ∧, ∨) and |◦A| = |A| + 1 for unary operators ◦ (i.e., ∀x, ∃x).

1.1.2. Substitution, free and bound variables. Expressions E, E′ which differ only in the names of bound variables will be regarded as identical. This is sometimes expressed by saying that E and E′ are α-equivalent.

In other words, we are only interested in expressions “modulo renaming of bound variables”. There are methods of finding unique representatives for such expressions, for example the name-free terms of de Bruijn (1972). For the human reader such representations are less convenient, so we shall stick to the use of bound variables.

In the definition of “substitution of expression E′ for variable x in expression E”, either one requires that no variable free in E′ becomes bound by a variable-binding operator in E when the free occurrences of x are replaced by E′ (also expressed by saying that there must be no “clashes of variables”, or that “E′ is free for x in E”), or the substitution operation is taken to involve a systematic renaming operation for the bound variables, avoiding clashes. Having stated that we are only interested in expressions modulo renaming of bound variables, we can without loss of generality assume that substitution is always possible.

Also, it is never a real restriction to assume that distinct quantifier occurrences are followed by distinct variables, and that the sets of bound and free variables of a formula are disjoint.

Notation. “FV” is used for the (set of) free variables of an expression; so FV(r) is the set of variables free in the term r, FV(A) the set of variables free in the formula A, etc. A formula A is said to be closed if FV(A) = ∅.

E[x := r] denotes the result of substituting the term r for the variable x in the expression E. Similarly, E[~x := ~r] is the result of simultaneously substituting the terms ~r = r1, . . . , rn for the variables ~x = x1, . . . , xn, respectively.

In a given context we shall adopt the following convention. Once a formula has been introduced as A(x), i.e., A with a designated variable x, we write A(r) for A[x:=r], and similarly with more variables.

1.1.3. Subformulas. Unless stated otherwise, the notion of subformula will be that defined by Gentzen.

Definition. (Gentzen) Subformulas of A are defined by

(a) A is a subformula of A;

(b) if B ◦ C is a subformula of A then so are B, C, for ◦ = →, ∧, ∨;

(c) if ∀xB(x) or ∃xB(x) is a subformula of A, then so is B(r).

Definition. The notions of positive, negative, strictly positive subformula are defined in a similar style:

(a) A is a positive and a strictly positive subformula of itself;

(b) if B ∧ C or B ∨ C is a positive (negative, strictly positive) subformula of A, then so are B, C;

(c) if ∀xB(x) or ∃xB(x) is a positive (negative, strictly positive) subformula of A, then so is B(r);

(d) if B → C is a positive (negative) subformula of A, then B is a negative (positive) subformula of A, and C is a positive (negative) subformula of A;

(e) if B → C is a strictly positive subformula of A, then so is C.

A strictly positive subformula of A is also called a strictly positive part (s.p.p.) of A. Note that the set of subformulas of A is the union of the positive and negative subformulas of A.

Example. (P → Q) → R ∧ ∀xS(x) has as s.p.p.’s the whole formula, R ∧ ∀xS(x), R, ∀xS(x), S(r). The positive subformulas are the s.p.p.’s and in addition P; the negative subformulas are P → Q, Q.


1.1.4. Examples of derivations. To motivate the rules for natural deduction, let us start with informal proofs of some simple logical facts.

(A→B →C)→(A→B)→A→C.

Informal proof. Assume A → B → C. To show: (A → B) → A → C. So assume A → B. To show: A → C. So finally assume A. To show: C. Using the third assumption twice we have B → C by the first assumption, and B by the second assumption. From B → C and B we then obtain C. Then A → C, cancelling the assumption on A; (A → B) → A → C, cancelling the second assumption; and the result follows by cancelling the first assumption.

∀x(A → B) → A → ∀xB,   if x /∈ FV(A).

Informal proof. Assume ∀x(A → B). To show: A → ∀xB. So assume A. To show: ∀xB. Let x be arbitrary; note that we have not made any assumptions on x. To show: B. We have A → B by the first assumption. Hence also B by the second assumption. Hence ∀xB. Hence A → ∀xB, cancelling the second assumption. Hence the result, cancelling the first assumption.

A characteristic feature of these proofs is that assumptions are introduced and eliminated again. At any point in time during the proof the free or “open” assumptions are known, but as the proof progresses, free assumptions may become cancelled or “closed” because of the implies-introduction rule.

We reserve the word proof for the informal level; a formal representation of a proof will be called a derivation.

An intuitive way to communicate derivations is to view them as labelled trees, each node of which denotes a rule application. The labels of the inner nodes are the formulas derived as conclusions at those points, and the labels of the leaves are formulas or terms. The labels of the nodes immediately above a node k are the premises of the rule application. At the root of the tree we have the conclusion (or end formula) of the whole derivation.

In natural deduction systems one works with assumptions at leaves of the tree; they can be either open or closed (cancelled). Any of these assumptions carries a marker. As markers we use assumption variables denoted u, v, w, u0, u1, . . . . The variables of the language previously introduced will now often be called object variables, to distinguish them from assumption variables. If at a node below an assumption the dependency on this assumption is removed (it becomes closed) we record this by writing down the assumption variable. Since the same assumption may be used more than once (this was the case in the first example above), the assumption marked with u (written u: A) may appear many times. Of course we insist that distinct assumption formulas must have distinct markers. An inner node of the tree is understood as the result of passing from premises to the conclusion of a given rule. The label of the node then contains, in addition to the conclusion, also the name of the rule. In some cases the rule binds or closes or cancels an assumption variable u (and hence removes the dependency of all assumptions u: A thus marked). An application of the ∀-introduction rule similarly binds an object variable x (and hence removes the dependency on x). In both cases the bound assumption or object variable is added to the label of the node.

Definition. A formula A is called derivable (in minimal logic), written ⊢ A, if there is a derivation of A (without free assumptions) using the natural deduction rules. A formula B is called derivable from assumptions A1, . . . , An if there is a derivation of B with free assumptions among A1, . . . , An. Let Γ be a (finite or infinite) set of formulas. We write Γ ⊢ B if the formula B is derivable from finitely many assumptions A1, . . . , An ∈ Γ.

We now formulate the rules of natural deduction.

1.1.5. Introduction and elimination rules for → and ∀. First we have an assumption rule, which allows us to write down an arbitrary formula A together with a marker u:

u:A assumption.

The other rules of natural deduction split into introduction rules (I-rules for short) and elimination rules (E-rules) for the logical connectives which, for the time being, are just → and ∀. For implication → there is an introduction rule →+ and an elimination rule →−, also called modus ponens. The left premise A → B in →− is called the major (or main) premise, and the right premise A the minor (or side) premise. Note that with an application of the →+-rule all assumptions above it marked with u: A are cancelled (which is denoted by putting square brackets around these assumptions), and the u then gets written alongside. There may of course be other uncancelled assumptions v: A of the same formula A, which may get cancelled at a later stage.

[u: A]
 | M
  B
------- →+ u
 A→B

 | M       | N
A→B         A
--------------- →−
       B

For the universal quantifier ∀ there is an introduction rule ∀+ (again marked, but now with the bound variable x) and an elimination rule ∀− whose right premise is the term r to be substituted. The rule ∀+x with conclusion ∀xA is subject to the following (eigen)variable condition: the derivation M of the premise A should not contain any open assumption having x as a free variable.

 | M
  A
------ ∀+ x
 ∀xA

   | M
∀xA(x)     r
-------------- ∀−
     A(r)

We now give derivations of the two example formulas treated informally above. Since in many cases the rule used is determined by the conclusion, we suppress in such cases the name of the rule.

u: A→B→C   w: A        v: A→B   w: A
---------------- →−     ------------ →−
      B→C                    B
------------------------------------- →−
                 C
              -------- →+ w
                A→C
       ------------------- →+ v
        (A→B)→A→C
----------------------------- →+ u
(A→B→C)→(A→B)→A→C

u: ∀x(A→B)    x
----------------- ∀−
      A→B            v: A
-------------------------- →−
            B
         -------- ∀+ x
           ∀xB
      ------------ →+ v
        A→∀xB
-------------------------- →+ u
∀x(A→B)→A→∀xB

Note that the variable condition is satisfied: x is not free in A (and also not free in ∀x(A → B)).

1.1.6. Properties of negation. Recall that negation is defined by ¬A := (A → ⊥). The following can easily be derived:

A → ¬¬A,
¬¬¬A → ¬A.

However, ¬¬A → A is in general not derivable (without stability; we will come back to this later on).

Lemma. The following are derivable.

(A → B) → ¬B → ¬A,
¬(A → B) → ¬B,
¬¬(A → B) → ¬¬A → ¬¬B,
(⊥ → B) → (¬¬A → ¬¬B) → ¬¬(A → B),
¬¬∀xA → ∀x¬¬A.

Derivations are left as an exercise.


1.1.7. Introduction and elimination rules for disjunction ∨, conjunction ∧ and existence ∃. For disjunction the introduction and elimination rules are

 | M           | M
  A             B
------ ∨+0   ------ ∨+1
 A∨B          A∨B

            [u: A]   [v: B]
 | M         | N       | K
A∨B          C         C
----------------------------- ∨− u, v
             C

For conjunction we have

 | M    | N                 [u: A] [v: B]
  A      B         | M          | N
---------- ∧+     A∧B           C
   A∧B            ------------------- ∧− u, v
                           C

and for the existential quantifier

      | M                       [u: A]
 r   A(r)             | M        | N
---------- ∃+        ∃xA         B
 ∃xA(x)              ---------------- ∃− x, u (var.cond.)
                             B

Similar to ∀+x, the rule ∃− x, u is subject to an (eigen)variable condition: in the derivation N the variable x (i) should not occur free in the formula of any open assumption other than u: A, and (ii) should not occur free in B.

Again, in each of the elimination rules ∨−, ∧− and ∃− the left premise is called the major (or main) premise, and the right premise is called the minor (or side) premise.

It is easy to see that for each of the connectives ∨, ∧, ∃ the rules and the following axioms are equivalent over minimal logic; this is left as an exercise.

For disjunction the introduction and elimination axioms are

∨+0 : A → A∨B,
∨+1 : B → A∨B,
∨− : A∨B → (A→C) → (B→C) → C.

For conjunction we have

∧+ : A → B → A∧B,    ∧− : A∧B → (A→B→C) → C

and for the existential quantifier

∃+ : A → ∃xA,    ∃− : ∃xA → ∀x(A→B) → B    (x /∈ FV(B)).


Remark. All these axioms can be seen as special cases of a general schema, that of an inductively defined predicate, which is defined by some introduction rules and one elimination rule. Later we will study this kind of definition in full generality.

We collect some easy facts about derivability; B ← A means A → B.

Lemma. The following are derivable.

(A∧B → C) ↔ (A → B → C),
(A → B∧C) ↔ (A → B) ∧ (A → C),
(A∨B → C) ↔ (A → C) ∧ (B → C),
(A → B∨C) ← (A → B) ∨ (A → C),
(∀xA → B) ← ∃x(A → B)   if x /∈ FV(B),
(A → ∀xB) ↔ ∀x(A → B)   if x /∈ FV(A),
(∃xA → B) ↔ ∀x(A → B)   if x /∈ FV(B),
(A → ∃xB) ← ∃x(A → B)   if x /∈ FV(A).

Proof. A derivation of the final formula is

                 w: A→B    v: A
                 --------------- →−
                        B
                    --------- ∃+
u: ∃x(A→B)           ∃xB
--------------------------------- ∃− x, w
              ∃xB
          ---------- →+ v
           A→∃xB
---------------------------- →+ u
∃x(A→B)→A→∃xB

The variable condition for ∃− is satisfied since the variable x (i) is not free in the formula A of the open assumption v: A, and (ii) is not free in ∃xB.

The rest of the proof is left as an exercise.

As already mentioned, we distinguish between two kinds of “exists” and two kinds of “or”: the “weak” or classical ones and the “strong” or non-classical ones, with constructive content. In the present context both kinds occur together and hence we must mark the distinction; we shall do this by writing a tilde above the weak disjunction and existence symbols, thus

A ˜∨ B := ¬A → ¬B → ⊥,    ˜∃xA := ¬∀x¬A.

One can show easily that these weak variants of disjunction and the existential quantifier are no stronger than the proper ones (in fact, they are weaker):

A∨B → A ˜∨ B,    ∃xA → ˜∃xA.

This can be seen easily by putting C := ⊥ in ∨− and B := ⊥ in ∃−.
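The weak connectives are classically indistinguishable from the strong ones: under a two-valued reading of → and ⊥, the formula A ˜∨ B has the same truth table as A ∨ B, so the difference between them is visible only proof-theoretically, not semantically. A quick Python check of this (our own encoding, not from the text):

```python
from itertools import product

def imp(p: bool, q: bool) -> bool:
    return (not p) or q          # classical reading of ->

def weak_or(a: bool, b: bool) -> bool:
    # A tilde-or B := not-A -> not-B -> bot, read classically (bot = False)
    return imp(not a, imp(not b, False))

# Under two-valued semantics the weak and strong disjunctions coincide;
# the difference between them is visible only proof-theoretically.
agree = all(weak_or(a, b) == (a or b) for a, b in product([False, True], repeat=2))
```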


1.1.8. Intuitionistic and classical derivability. In the definition of derivability in 1.1.4 falsity ⊥ plays no role. We may change this and require ex-falso-quodlibet axioms, of the form

∀~x(⊥ → R~x)

with R a relation symbol distinct from ⊥. Let Efq denote the set of all such axioms. A formula A is called intuitionistically derivable, written ⊢i A, if Efq ⊢ A. We write Γ ⊢i B for Γ ∪ Efq ⊢ B.

We may even go further and require stability axioms, of the form

∀~x(¬¬R~x → R~x)

with R again a relation symbol distinct from ⊥. Let Stab denote the set of all these axioms. A formula A is called classically derivable, written ⊢c A, if Stab ⊢ A. We write Γ ⊢c B for Γ ∪ Stab ⊢ B.

It is easy to see that intuitionistically (i.e., from Efq) we can derive ⊥ → A for an arbitrary formula A, using the introduction rules for the connectives. A similar generalization of the stability axioms is only possible for formulas in the language not involving ∨, ∃. However, it is still possible to use the substitutes ˜∨ and ˜∃.

Theorem (Stability, or principle of indirect proof).

(a) ⊢ (¬¬A → A) → (¬¬B → B) → ¬¬(A∧B) → A∧B.
(b) ⊢ (¬¬B → B) → ¬¬(A→B) → A → B.
(c) ⊢ (¬¬A → A) → ¬¬∀xA → A.
(d) ⊢c ¬¬A → A   for every formula A without ∨, ∃.

Proof. (a) is left as an exercise. (b). For simplicity, in the derivation to be constructed we leave out applications of →+ at the end.

                        u2: A→B   w: A
                        --------------- →−
             u1: ¬B           B
             -------------------- →−
                      ⊥
                  --------- →+ u2
v: ¬¬(A→B)         ¬(A→B)
--------------------------- →−
             ⊥
          ------ →+ u1
u: ¬¬B→B    ¬¬B
----------------- →−
        B

(c).

                u2: ∀xA    x
                ------------- ∀−
    u1: ¬A           A
    ------------------- →−
             ⊥
          -------- →+ u2
v: ¬¬∀xA    ¬∀xA
------------------ →−
          ⊥
       ------ →+ u1
u: ¬¬A→A    ¬¬A
----------------- →−
        A


(d). Induction on A. The case R~t with R distinct from ⊥ is given by Stab. In the case ⊥ the desired derivation is

                   u: ⊥
                  ------- →+ u
v: (⊥→⊥)→⊥       ⊥→⊥
---------------------- →−
           ⊥

In the cases A∧B, A→B and ∀xA use (a), (b) and (c), respectively.

Using stability we can prove some well-known facts about the interaction of weak disjunction and the weak existential quantifier with implication. We first prove a more refined claim, stating to what extent we need to go beyond minimal logic.

Lemma. The following are derivable.

(˜∃xA → B) → ∀x(A → B)                      if x /∈ FV(B),   (1.1)
(¬¬B → B) → ∀x(A → B) → ˜∃xA → B            if x /∈ FV(B),   (1.2)
(⊥ → B[x:=c]) → (A → ˜∃xB) → ˜∃x(A → B)     if x /∈ FV(A),   (1.3)
˜∃x(A → B) → A → ˜∃xB                       if x /∈ FV(A).   (1.4)

The last two items can also be seen as simplifying a weakly existentially quantified implication whose premise does not contain the quantified variable. In case the conclusion does not contain the quantified variable we have

(¬¬B → B) → ˜∃x(A → B) → ∀xA → B            if x /∈ FV(B),   (1.5)
∀x(¬¬A → A) → (∀xA → B) → ˜∃x(A → B)        if x /∈ FV(B).   (1.6)

Proof. (1.1)

              u1: ∀x¬A    x
              -------------- ∀−
                   ¬A          A
              ------------------- →−
                       ⊥
˜∃xA→B          --------- →+ u1
                  ¬∀x¬A
-------------------------- →−
            B

(1.2)

                      ∀x(A→B)   x
                      ------------ ∀−
                          A→B         u1: A
                      ---------------------- →−
           u2: ¬B             B
           --------------------- →−
                     ⊥
                  ------ →+ u1
                    ¬A
                  -------- ∀+ x
¬∀x¬A              ∀x¬A
------------------------- →−
             ⊥
         ------- →+ u2
¬¬B→B     ¬¬B
---------------- →−
        B


(1.3) Writing B0 for B[x:=c], first derive ∀x¬B from the assumption ∀x¬(A→B):

                           u1: B
                          ------- →+
∀x¬(A→B)    x             A→B
-------------- ∀−
   ¬(A→B)
--------------------------- →−
             ⊥
          ------ →+ u1
            ¬B
          ------- ∀+ x
           ∀x¬B

Call this derivation N. Then we have

A→˜∃xB   u2: A
---------------- →−
     ˜∃xB             | N
                     ∀x¬B
-------------------------- →−
            ⊥
⊥→B0    ------
----------------- →−
        B0                    ∀x¬(A→B)   c
     --------- →+ u2          -------------- ∀−
      A→B0                      ¬(A→B0)
------------------------------------------- →−
                     ⊥

(1.4)

              ∀x¬B    x
              ---------- ∀−       u1: A→B    A
                 ¬B               ------------- →−
                                        B
              ---------------------------- →−
                          ⊥
                     --------- →+ u1
                      ¬(A→B)
                     ---------- ∀+ x
˜∃x(A→B)            ∀x¬(A→B)
------------------------------ →−
              ⊥

(1.5)

                               ∀xA    x
                               -------- ∀−
                 u1: A→B          A
                 ------------------- →−
      u2: ¬B             B
      --------------------- →−
               ⊥
           --------- →+ u1
            ¬(A→B)
           ---------- ∀+ x
˜∃x(A→B)   ∀x¬(A→B)
---------------------- →−
           ⊥
        ------ →+ u2
¬¬B→B    ¬¬B
--------------- →−
        B

(1.6) We derive ∀x(⊥→A) → (∀xA→B) → ∀x¬(A→B) → ¬¬A. Writing Ax, Ay for A(x), A(y) we have

                         ∀y(⊥→Ay)   y
                         ------------ ∀−       u1: ¬Ax   u2: Ax
                            ⊥→Ay               ----------------- →−
                                                       ⊥
                         -------------------------------- →−
                                       Ay
                                     ------- ∀+ y
                                      ∀yAy
                  ∀xAx→B   ---------------
                  ------------------------- →−
∀x¬(Ax→B)   x               B
-------------- ∀−        ------- →+ u2
   ¬(Ax→B)               Ax→B
---------------------------------- →−
                ⊥
             ------- →+ u1
              ¬¬Ax


Using this derivation M we obtain

                 ∀x(¬¬Ax→Ax)   x
                 ----------------- ∀−       | M
                     ¬¬Ax→Ax              ¬¬Ax
                 ------------------------------ →−
                              Ax
                            ------ ∀+ x
                             ∀xAx
               ∀xAx→B   ----------
               ------------------- →−
∀x¬(Ax→B)   x          B
------------- ∀−     -------- →+
  ¬(Ax→B)            Ax→B
----------------------------- →−
              ⊥

Since clearly ⊢ (¬¬A→A) → ⊥ → A, the claim follows.

Remark. An immediate consequence of (1.6) is the classical derivability of the “drinker formula” ˜∃x(Px → ∀xPx), to be read “in every non-empty bar there is a person such that, if this person drinks, then everybody drinks”. To see this let A := Px and B := ∀xPx in (1.6).

Corollary.

⊢c (˜∃xA → B) ↔ ∀x(A → B)      if x /∈ FV(B) and B without ∨, ∃,
⊢i (A → ˜∃xB) ↔ ˜∃x(A → B)     if x /∈ FV(A),
⊢c ˜∃x(A → B) ↔ (∀xA → B)      if x /∈ FV(B) and A, B without ∨, ∃.

There is a similar lemma on weak disjunction:

Lemma. The following are derivable.

(A ˜∨ B → C) → (A → C) ∧ (B → C),
(¬¬C → C) → (A → C) → (B → C) → A ˜∨ B → C,
(⊥ → B) → (A → B ˜∨ C) → (A → B) ˜∨ (A → C),
(A → B) ˜∨ (A → C) → A → B ˜∨ C,
(¬¬C → C) → (A → C) ˜∨ (B → C) → A → B → C,
(⊥ → C) → (A → B → C) → (A → C) ˜∨ (B → C).

Proof. The derivation of the final formula is

                       A→B→C   u1: A
                       --------------- →−
                           B→C           u2: B
                       ------------------------ →−
            ¬(A→C)               C
                              -------- →+ u1
                                A→C
            ------------------------ →−
                      ⊥
⊥→C               ---------
--------------------------- →−
             C
          -------- →+ u2
¬(B→C)      B→C
------------------- →−
         ⊥


The other derivations are similar to the ones above, if one views ˜∃ as an infinitary version of ˜∨.

Corollary.

⊢c (A ˜∨ B → C) ↔ (A → C) ∧ (B → C)      for C without ∨, ∃,
⊢i (A → B ˜∨ C) ↔ (A → B) ˜∨ (A → C),
⊢c (A → C) ˜∨ (B → C) ↔ (A → B → C)      for C without ∨, ∃.

Remark. It is easy to see that weak disjunction and the weak existential quantifier satisfy the same axioms as the strong variants, if one restricts the conclusion of the elimination axioms to formulas without ∨, ∃. In fact, we have

⊢ A → A ˜∨ B,    ⊢ B → A ˜∨ B,
⊢c A ˜∨ B → (A → C) → (B → C) → C      (C without ∨, ∃),
⊢ A → ˜∃xA,
⊢c ˜∃xA → ∀x(A → B) → B      (x /∈ FV(B), B without ∨, ∃).

The derivations are left as exercises.

1.1.9. Gödel-Gentzen translation. Classical derivability Γ ⊢c B was defined in 1.1.8 by Γ ∪ Stab ⊢ B. This embedding of classical logic into minimal logic can be expressed in a somewhat different and very explicit form, namely as a syntactic translation A ↦ Ag of formulas such that A is derivable in classical logic if and only if its translation Ag is derivable in minimal logic.

Definition (Gödel-Gentzen translation Ag).

(R~t)g := ¬¬R~t   for R distinct from ⊥,
⊥g := ⊥,
(A ∨ B)g := Ag ˜∨ Bg,
(∃xA)g := ˜∃xAg,
(A ◦ B)g := Ag ◦ Bg   for ◦ = →, ∧,
(∀xA)g := ∀xAg.

Lemma. ⊢ ¬¬Ag → Ag.

Proof. Induction on A.

Case R~t with R distinct from ⊥. We must show ¬¬¬¬R~t → ¬¬R~t, which is a special case of ⊢ ¬¬¬B → ¬B.

Case ⊥. Use ⊢ ¬¬⊥ → ⊥.


Case A∨B. We must show ⊢ ¬¬(Ag ˜∨ Bg) → Ag ˜∨ Bg, which is a special case of ⊢ ¬¬(¬C → ¬D → ⊥) → ¬C → ¬D → ⊥:

                       u1: ¬C→¬D→⊥    ¬C
                       -------------------- →−
                             ¬D→⊥            ¬D
                       -------------------------- →−
                                    ⊥
                             ----------- →+ u1
¬¬(¬C→¬D→⊥)            ¬(¬C→¬D→⊥)
------------------------------------ →−
                 ⊥

Case ∃xA. In this case we must show ⊢ ¬¬˜∃xAg → ˜∃xAg, but this is a special case of ⊢ ¬¬¬B → ¬B, because ˜∃xAg is the negation ¬∀x¬Ag.

Case A∧B. We must show ⊢ ¬¬(Ag ∧ Bg) → Ag ∧ Bg. By induction hypothesis ⊢ ¬¬Ag → Ag and ⊢ ¬¬Bg → Bg. Now use part (a) of the stability theorem in 1.1.8.

The cases A → B and ∀xA are similar, using parts (b) and (c) of the stability theorem instead.

Theorem. (a) Γ ⊢c A implies Γg ⊢ Ag. (b) Γg ⊢ Ag implies Γ ⊢c A for Γ, A without ∨, ∃.

Proof. (a). We use induction on Γ ⊢c A. For a stability axiom ∀~x(¬¬R~x → R~x) we must derive ∀~x(¬¬¬¬R~x → ¬¬R~x), which is easy (as above). For the rules →+, →−, ∀+, ∀−, ∧+ and ∧− the claim follows immediately from the induction hypothesis, using the same rule again. This works because the Gödel-Gentzen translation acts as a homomorphism for these connectives. For the rules ∨+i, ∨−, ∃+ and ∃− the claim follows from the induction hypothesis and the remark at the end of 1.1.8. For example, in case ∃− the induction hypothesis gives

  | M               u: Ag
˜∃xAg      and       | N
                     Bg

with x /∈ FV(Bg). Now use ⊢ (¬¬Bg → Bg) → ˜∃xAg → ∀x(Ag → Bg) → Bg. Its premise ¬¬Bg → Bg is derivable by the lemma above.

(b). First note that ⊢c (B ↔ Bg) if B is without ∨, ∃. Now assume that Γ, A are without ∨, ∃. From Γg ⊢ Ag we obtain Γ ⊢c A as follows. We argue informally. Assume Γ. Then Γg by the note, hence Ag because of Γg ⊢ Ag, hence A again by the note.

1.2. Normalization

A derivation in normal form does not make “detours”, or more precisely, it cannot occur that an elimination rule immediately follows an introduction rule. We will use “conversions” to remove such “local maxima” of complexity, thus reducing any given derivation to normal form. However, there is a difficulty when we consider an elimination rule for ∨, ∧ or ∃. An introduced formula may be used as a minor premise of an application of ∨−, ∧− or ∃−, then stay the same throughout a sequence of applications of these rules, being eliminated at the end. This also constitutes a local maximum, which we should like to eliminate; permutative conversions are designed for exactly this situation. In a permutative conversion we permute an E-rule upwards over the minor premises of ∨−, ∧− or ∃−.

We analyse the shape of derivations in normal form, and then prove the (crucial) subformula property, which says that every formula in a normal derivation is a subformula of the end-formula or else of an assumption.

It will be convenient to represent derivations as typed “derivation terms”, where the derived formula is seen as the “type” of the term (and displayed as a superscript). This representation is known under the name Curry-Howard correspondence. We give an inductive definition of such derivation terms for the →, ∀-rules in Table 1, where for clarity we have written the corresponding derivations to the left. In Table 2 this is extended to the rules for ∨, ∧ and ∃.

1.2.1. Conversions. A conversion eliminates a detour in a derivation, i.e., an elimination immediately following an introduction. We now spell out in detail which conversions we shall allow. This is done for derivations written in tree notation and also as derivation terms.

→-conversion.

[u: A]
 | M
  B
------- →+ u      | N
 A→B               A              | N
---------------------- →−   ↦     A
          B                       | M
                                   B

or written as derivation terms

(λu M(u^A)^B)^{A→B} N^A ↦ M(N^A)^B.

The reader familiar with λ-calculus should note that this is nothing other than β-conversion.

∀-conversion.

  | M
 A(x)
-------- ∀+ x
∀xA(x)      r        ↦      | M′
--------------- ∀−          A(r)
     A(r)

or written as derivation terms (λx M(x)^{A(x)})^{∀xA(x)} r ↦ M(r).


Derivation                              Term

u: A                                    u^A

[u: A]
 | M
  B
------- →+ u                            (λu^A M^B)^{A→B}
 A→B

 | M      | N
A→B        A
-------------- →−                       (M^{A→B} N^A)^B
      B

 | M
  A
------ ∀+ x (with var.cond.)            (λx M^A)^{∀xA} (with var.cond.)
 ∀xA

   | M
∀xA(x)    r
------------- ∀−                        (M^{∀xA(x)} r)^{A(r)}
    A(r)

Table 1. Derivation terms for → and ∀

∨-conversion.

 | M           [u: A]   [v: B]
  A             | N       | K                 | M
------ ∨+0      C         C                    A
 A∨B                                    ↦     | N
------------------------------ ∨− u, v         C
              C

or as derivation terms

(∨+0,B M^A)^{A∨B} (u^A.N(u)^C, v^B.K(v)^C) ↦ N(M^A)^C,

and similarly for ∨+1 with K instead of N.


Derivation                              Term

 | M              | M
  A                B
------ ∨+0      ------ ∨+1              (∨+0,B M^A)^{A∨B},  (∨+1,A M^B)^{A∨B}
 A∨B             A∨B

            [u: A]   [v: B]
 | M         | N       | K
A∨B          C         C
--------------------------- ∨− u, v     (M^{A∨B}(u^A.N^C, v^B.K^C))^C
             C

 | M    | N
  A      B
----------- ∧+                          ⟨M^A, N^B⟩^{A∧B}
   A∧B

            [u: A] [v: B]
 | M            | N
A∧B             C
------------------- ∧− u, v             (M^{A∧B}(u^A, v^B.N^C))^C
         C

      | M
 r   A(r)
---------- ∃+                           (∃+_{x,A} r M^{A(r)})^{∃xA(x)}
 ∃xA(x)

           [u: A]
 | M        | N
∃xA         B
---------------- ∃− x, u (var.cond.)    (M^{∃xA}(u^A.N^B))^B (var.cond.)
        B

Table 2. Derivation terms for ∨, ∧ and ∃


∧-conversion.

 | M    | N
  A      B       [u: A] [v: B]
---------- ∧+        | K                  | M    | N
   A∧B               C              ↦      A      B
------------------------- ∧− u, v         | K
          C                                 C

or ⟨M^A, N^B⟩^{A∧B}(u^A, v^B.K(u,v)^C) ↦ K(M^A, N^B)^C.

∃-conversion.

      | M         [u: A(x)]
 r   A(r)           | N                   | M
---------- ∃+       B               ↦    A(r)
 ∃xA(x)                                   | N′
------------------------- ∃− x, u          B
           B

or (∃+_{x,A} r M^{A(r)})^{∃xA(x)} (u^{A(x)}.N(x,u)^B) ↦ N(r, M^{A(r)})^B.

1.2.2. Permutative conversions.

∨-permutative conversion.

          [u: A]  [v: B]
 | M       | N     | K
A∨B        C       C
----------------------- ∨− u, v    | L
          C                        C′
------------------------------------------ E-rule
                    D

↦

          [u: A]              [v: B]
           | N     | L         | K     | L
 | M       C       C′          C       C′
A∨B       ------------ E-rule ------------ E-rule
               D                   D
---------------------------------------------- ∨− u, v
                      D

or with for instance →− as E-rule:

(M^{A∨B}(u^A.N^{C→D}, v^B.K^{C→D}))^{C→D} L^C ↦ (M^{A∨B}(u^A.(N^{C→D} L^C)^D, v^B.(K^{C→D} L^C)^D))^D.

∧-permutative conversion.

          [u: A] [v: B]
 | M         | N
A∧B          C
------------------ ∧− u, v    | K
         C                    C′
--------------------------------------- E-rule
               D

↦

          [u: A] [v: B]
             | N     | K
 | M         C       C′
A∧B         ------------ E-rule
                 D
---------------------------- ∧− u, v
               D

or (M^{A∧B}(u^A, v^B.N^{C→D}))^{C→D} K^C ↦ (M^{A∧B}(u^A, v^B.(N^{C→D} K^C)^D))^D.


∃-permutative conversion.

         [u: A]
 | M       | N
∃xA        B
---------------- ∃− x, u    | K
        B                   C
------------------------------------ E-rule
              D

↦

         [u: A]
           | N    | K
 | M       B      C
∃xA       ----------- E-rule
               D
------------------------- ∃− x, u
              D

or (M^{∃xA}(u^A.N^{C→D}))^{C→D} K^C ↦ (M^{∃xA}(u^A.(N^{C→D} K^C)^D))^D.

1.2.3. Simplification conversions. These are somewhat trivial conversions, which remove unnecessary applications of the elimination rules for ∨, ∧ and ∃. For ∨ we have

          [u: A]   [v: B]
 | M       | N      | K
A∨B        C        C
------------------------- ∨− u, v    ↦     | N
            C                               C

if u: A is not free in N, or (M^{A∨B}(u^A.N^C, v^B.K^C))^C ↦ N^C; similarly for the second component. For ∧ there is the conversion

         [u: A] [v: B]
 | M        | N
A∧B         C
------------------ ∧− u, v    ↦     | N
         C                           C

if neither u: A nor v: B is free in N, or (M^{A∧B}(u^A, v^B.N^C))^C ↦ N^C. For ∃ the simplification conversion is

          [u: A]
 | M       | N
∃xA        B
--------------- ∃− x, u    ↦     | N
       B                          B

if again u: A is not free in N, or (M^{∃xA}(u^A.N^B))^B ↦ N^B.

1.2.4. Strong normalization. We now show that no matter in which order we apply the conversion rules, they will always terminate and produce a derivation in “normal form”, where no further conversions can be applied.

We shall write derivation terms without formula super- or subscripts. For instance, we write ∃+ instead of ∃+_{x,A}. Hence we consider derivation terms M, N, K now of the forms

u | λvM | λyM | ∨+0 M | ∨+1 M | ⟨M, N⟩ | ∃+ rM | MN | Mr | M(v0.N0, v1.N1) | M(v, w.N) | M(v.N)

where, in these expressions, the variables v, y, v0, v1, w are bound.

To simplify the technicalities, we restrict our treatment to the rules for → and ∃. The argument easily extends to the full set of rules. Hence we consider

u | λvM | ∃+ rM | MN | M(v.N).

The strategy for strong normalization is set out below, but a word about notation is crucial here. Whenever we write an applicative term as M ~N := MN1 . . . Nk the convention is that bracketing operates to the left. That is, M ~N = (. . . (MN1) . . . Nk).

We reserve the letters E, F, G for eliminations, i.e., expressions of the form (v.N), and R, S, T for both terms and eliminations. Using this notation we obtain a second (and clearly equivalent) inductive definition of terms:

u ~M | u ~M E | λvM | ∃+ rM | (λvM)N ~R | ∃+ rM(v.N) ~R | u ~M E R ~S.

Here only the final three forms are not normal: (λvM)N ~R and ∃+ rM(v.N) ~R both are β-redexes, and u ~M E R ~S is a permutative redex. The conversion rules for them are

(λv M(v))N ↦β M(N)                      β-conversion,
∃+_{x,A} r M (v.N(x, v)) ↦β N(r, M)     β-conversion,
M(v.N)R ↦π M(v.N R)                     permutative conversion.

In addition we also allow

M(v.N) ↦σ N   if v: A is not free in N.

The latter is called a simplification conversion, and M(v.N) a simplification redex.

The closure of these conversions is defined by

(a) If M ↦ξ M′ for ξ = β, π, σ, then M → M′.

(b) If M → M′, then MR → M′R, NM → NM′, N(v.M) → N(v.M′), λvM → λvM′, ∃⁺rM → ∃⁺rM′ (inner reductions).

So M → N means that M reduces in one step to N, i.e., N is obtained from M by replacement of (an occurrence of) a redex M′ of M by a conversum M″ of M′, i.e., by a single conversion. The relation →⁺ ("properly reduces to") is the transitive closure of →, and →* ("reduces to") is the reflexive transitive closure of →. A term M is in normal form (or simply normal) if M does not contain a redex. M has a normal form if there is a normal N such that M →* N. A reduction sequence is a (finite or infinite) sequence M₀, M₁, M₂, … such that Mᵢ → Mᵢ₊₁, for all i.
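The conversion rules above can be animated concretely. Below is a minimal Python sketch (an illustration, not part of the notes): derivation terms of the →/∃ fragment are nested tuples, `root` tries the β-, β∃-, π- and σ-conversions at the root, and `normalize` repeatedly contracts the leftmost redex. Object variables and formula annotations are ignored, and all bound variable names are assumed distinct, so substitution need not rename.

```python
# Derivation terms, as nested tuples:
#   ('var', u)          assumption variable u
#   ('lam', v, M)       lambda v. M
#   ('exi', r, M)       exists+ r M   (witness r kept as a plain string)
#   ('app', M, N)       application M N
#   ('exe', M, v, N)    M(v.N), existence elimination

def subst(t, v, s):
    """Substitute s for the assumption variable v in t.
    Bound names are assumed distinct from v (no renaming needed)."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == v else t
    if tag in ('lam', 'exi'):
        return (tag, t[1], subst(t[2], v, s))
    if tag == 'app':
        return ('app', subst(t[1], v, s), subst(t[2], v, s))
    return ('exe', subst(t[1], v, s), t[2], subst(t[3], v, s))

def free(t):
    """Free assumption variables of t."""
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return free(t[2]) - {t[1]}
    if tag == 'exi':
        return free(t[2])
    if tag == 'app':
        return free(t[1]) | free(t[2])
    return free(t[1]) | (free(t[3]) - {t[2]})

def root(t):
    """Try a conversion at the root; None if t is not a redex."""
    if t[0] == 'app' and t[1][0] == 'lam':     # beta: (lam v.M) N -> M[v:=N]
        return subst(t[1][2], t[1][1], t[2])
    if t[0] == 'exe' and t[1][0] == 'exi':     # beta-exists: (exi r M)(v.N) -> N[v:=M]
        return subst(t[3], t[2], t[1][2])
    if t[0] == 'app' and t[1][0] == 'exe':     # permutative: (M(v.N)) R -> M(v.N R)
        _, M, v, N = t[1]
        return ('exe', M, v, ('app', N, t[2]))
    if t[0] == 'exe' and t[1][0] == 'exe':     # permutative, R an elimination (w.K)
        _, M, v, N = t[1]
        return ('exe', M, v, ('exe', N, t[2], t[3]))
    if t[0] == 'exe' and t[2] not in free(t[3]):   # simplification: M(v.N) -> N
        return t[3]
    return None

def step(t):
    """One leftmost reduction step; None if t is normal."""
    r = root(t)
    if r is not None:
        return r
    for i in range(1, len(t)):
        if isinstance(t[i], tuple):
            s = step(t[i])
            if s is not None:
                return t[:i] + (s,) + t[i + 1:]
    return None

def normalize(t):
    while (s := step(t)) is not None:
        t = s
    return t

# (exi r a)(v.v) applied to b: one permutative and one beta-exists step.
t = ('app', ('exe', ('exi', 'r', ('var', 'a')), 'v', ('var', 'v')), ('var', 'b'))
print(normalize(t))   # ('app', ('var', 'a'), ('var', 'b'))
```

On the permutative redex above the sketch first moves the argument b inside the elimination and then performs the β∃-step, reaching the normal form a b.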

We inductively define a set SN of derivation terms. In doing so we take care that for a given M there is exactly one rule applicable to generate M ∈ SN. This will be crucial to make the later proofs work.

Definition (SN).

  M⃗ ∈ SN ⟹ uM⃗ ∈ SN                                        (Var0)
  M ∈ SN ⟹ λvM ∈ SN                                        (λ)
  M ∈ SN ⟹ ∃⁺rM ∈ SN                                       (∃)
  M⃗, N ∈ SN ⟹ uM⃗(v.N) ∈ SN                                (Var)
  uM⃗(v.NR)S⃗ ∈ SN ⟹ uM⃗(v.N)R S⃗ ∈ SN                       (Varπ)
  M(N)R⃗ ∈ SN and N ∈ SN ⟹ (λvM(v))N R⃗ ∈ SN               (β→)
  N(r, M)R⃗ ∈ SN and M ∈ SN ⟹ ∃⁺x,A rM(v.N(x, v))R⃗ ∈ SN   (β∃)

In (Varπ) we require that x (from ∃xA) and v are not free in R.

It is easy to see that SN is closed under substitution for object variables: if M(x) ∈ SN, then M(r) ∈ SN. The proof is by induction on M ∈ SN, applying the induction hypothesis first to the premise(s) and then reapplying the same rule.

We write M↓ to mean that M is strongly normalizing, i.e., that every reduction sequence starting from M terminates. By analysing the possible reduction steps we now show that the set {M | M↓} has the closure properties of the definition of SN above, and hence SN ⊆ {M | M↓}.

Lemma. Every term in SN is strongly normalizing.

Proof. We distinguish cases according to the generation rule of SN applied last. The following rules deserve special attention.

Case (Varπ). We prove, as an auxiliary lemma, that

  uM⃗(v.NR)S⃗↓ implies uM⃗(v.N)R S⃗↓.

As a typical case consider

  uM⃗(v.N(v′.N′))T S⃗↓ implies uM⃗(v.N)(v′.N′)T S⃗↓.

However, it is easy to see that any infinite reduction sequence of the latter would give rise to an infinite reduction sequence of the former.

Case (β→). We show that M(N)R⃗↓ and N↓ imply (λvM(v))N R⃗↓. This is done by induction on N↓, with a side induction on M(N)R⃗↓. We need to consider all possible reducts of (λvM(v))N R⃗. In case of an outer β-reduction use the assumption. If N is reduced, use the induction hypothesis. Reductions in M and in R⃗ as well as permutative reductions within R⃗ are taken care of by the side induction hypothesis.

Case (β∃). We show that

  N(r, M)R⃗↓ and M↓ together imply ∃⁺rM(v.N(x, v))R⃗↓.

This is done by a threefold induction: first on M↓, second on N(r, M)R⃗↓ and third on the length of R⃗. We need to consider all possible reducts of ∃⁺rM(v.N(x, v))R⃗. In case of an outer β-reduction it must reduce to N(r, M)R⃗, hence the result by assumption. If M is reduced, use the first induction hypothesis. Reductions in N(x, v) and in R⃗ as well as permutative reductions within R⃗ are taken care of by the second induction hypothesis. The only remaining case is when R⃗ = S S⃗ and (v.N(x, v)) is permuted with S, to yield ∃⁺rM(v.N(x, v)S)S⃗, in which case the third induction hypothesis applies.
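Since every derivation term turns out to be strongly normalizing, one can, for small examples, compute the length of a longest reduction sequence by exhaustively exploring all one-step reducts. The following self-contained Python sketch (an illustration, not part of the notes; object variables are ignored and bound names are assumed distinct, so substitution need not rename) does exactly this; on a term that is not strongly normalizing the recursion in `longest` would fail to terminate.

```python
def subst(t, v, s):
    """Substitute s for v in t; bound names assumed distinct from v."""
    if t[0] == 'var':
        return s if t[1] == v else t
    if t[0] in ('lam', 'exi'):
        return (t[0], t[1], subst(t[2], v, s))
    if t[0] == 'app':
        return ('app', subst(t[1], v, s), subst(t[2], v, s))
    return ('exe', subst(t[1], v, s), t[2], subst(t[3], v, s))

def free(t):
    """Free assumption variables of t."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free(t[2]) - {t[1]}
    if t[0] == 'exi':
        return free(t[2])
    if t[0] == 'app':
        return free(t[1]) | free(t[2])
    return free(t[1]) | (free(t[3]) - {t[2]})

def reducts(t):
    """All one-step reducts of t: conversions at the root plus inner reductions."""
    out = []
    if t[0] == 'app' and t[1][0] == 'lam':         # beta
        out.append(subst(t[1][2], t[1][1], t[2]))
    if t[0] == 'exe' and t[1][0] == 'exi':         # beta-exists
        out.append(subst(t[3], t[2], t[1][2]))
    if t[0] == 'app' and t[1][0] == 'exe':         # permutative
        _, M, v, N = t[1]
        out.append(('exe', M, v, ('app', N, t[2])))
    if t[0] == 'exe' and t[1][0] == 'exe':         # permutative, R an elimination
        _, M, v, N = t[1]
        out.append(('exe', M, v, ('exe', N, t[2], t[3])))
    if t[0] == 'exe' and t[2] not in free(t[3]):   # simplification
        out.append(t[3])
    for i in range(1, len(t)):                     # inner reductions
        if isinstance(t[i], tuple):
            out.extend(t[:i] + (s,) + t[i + 1:] for s in reducts(t[i]))
    return out

def longest(t):
    # Length of a longest reduction sequence from t; the recursion
    # terminates precisely because t is strongly normalizing.
    return max((1 + longest(s) for s in reducts(t)), default=0)

t = ('app', ('exe', ('exi', 'r', ('var', 'a')), 'v', ('var', 'v')), ('var', 'b'))
print(longest(t))   # 2: the worst case permutes first, then does the beta-exists step
```

Exploring all reducts rather than one chosen strategy is exactly what "strongly" normalizing demands: every reduction sequence, not just some, must terminate.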

For later use we prove a slightly generalized form of the rule (Varπ):

Proposition. If M(v.NR)S⃗ ∈ SN, then M(v.N)R S⃗ ∈ SN.

Proof. Induction on the generation of M(v.NR)S⃗ ∈ SN. We distinguish cases according to the form of M.

Case uT⃗(v.NR)S⃗ ∈ SN. If T⃗ = M⃗ (i.e., T⃗ consists of derivation terms only), use (Varπ). Else we have uM⃗(v′.N′)R⃗(v.NR)S⃗ ∈ SN. This must be generated by repeated applications of (Varπ) from uM⃗(v′.N′R⃗(v.NR)S⃗) ∈ SN, and finally by (Var) from M⃗ ∈ SN and N′R⃗(v.NR)S⃗ ∈ SN. The induction hypothesis for the latter fact yields N′R⃗(v.N)R S⃗ ∈ SN, hence uM⃗(v′.N′R⃗(v.N)R S⃗) ∈ SN by (Var) and finally uM⃗(v′.N′)R⃗(v.N)R S⃗ ∈ SN by (Varπ).

Case ∃⁺rM T⃗(v.N(x, v)R)S⃗ ∈ SN. Similar, with (β∃) instead of (Varπ). In detail: if T⃗ is empty, by (β∃) this came from N(r, M)R S⃗ ∈ SN and M ∈ SN, hence ∃⁺rM(v.N(x, v))R S⃗ ∈ SN again by (β∃). Otherwise we have ∃⁺rM(v′.N′(x′, v′))T⃗(v.NR)S⃗ ∈ SN. This must be generated by (β∃) from N′(r, M)T⃗(v.NR)S⃗ ∈ SN. The induction hypothesis yields N′(r, M)T⃗(v.N)R S⃗ ∈ SN, hence ∃⁺rM(v′.N′(x′, v′))T⃗(v.N)R S⃗ ∈ SN by (β∃).


Case (λvM(v))N′R⃗(w.NR)S⃗ ∈ SN. By (β→) this came from N′ ∈ SN and M(N′)R⃗(w.NR)S⃗ ∈ SN. But the induction hypothesis yields M(N′)R⃗(w.N)R S⃗ ∈ SN, hence (λvM(v))N′R⃗(w.N)R S⃗ ∈ SN by (β→).

We show, finally, that every term is in SN and hence is strongly normalizing. Given the definition of SN we only have to show that SN is closed under the elimination rules →⁻ and ∃⁻. But in order to prove this we must prove simultaneously the closure of SN under substitution.

Theorem (Properties of SN). For all formulasA,

(a) for all M ∈ SN, if M proves A = A₀ → A₁ and N ∈ SN, then MN ∈ SN,
(b) for all M ∈ SN, if M proves A = ∃x B and N ∈ SN, then M(v.N) ∈ SN,
(c) for all M(v) ∈ SN, if N^A ∈ SN, then M(N) ∈ SN.

Proof. Induction on |A|. We prove (a) and (b) before (c), and hence have (a) and (b) available for the proof of (c). More formally, by induction on A we simultaneously prove that (a) holds, that (b) holds and that (a), (b) together imply (c).

(a). By side induction on M ∈ SN. Let M ∈ SN and assume that M proves A = A₀ → A₁ and N ∈ SN. We distinguish cases according to how M ∈ SN was generated. For (Var0), (Varπ), (β→) and (β∃) use the same rule again.

Case uM⃗(v.N′) ∈ SN by (Var) from M⃗, N′ ∈ SN. Then N′N ∈ SN by the side induction hypothesis for N′, hence uM⃗(v.N′N) ∈ SN by (Var), hence uM⃗(v.N′)N ∈ SN by (Varπ).

Case (λvM(v))^{A₀→A₁} ∈ SN by (λ) from M(v) ∈ SN. Use (β→); for this we need to know M(N) ∈ SN. But this follows from induction hypothesis (c) for M(v), since N derives A₀.

(b). By side induction on M ∈ SN. Let M ∈ SN and assume that M proves A = ∃x B and N ∈ SN. The goal is M(v.N) ∈ SN. We distinguish cases according to how M ∈ SN was generated. For (Varπ), (β→) and (β∃) use the same rule again.

Case uM⃗ ∈ SN by (Var0) from M⃗ ∈ SN. Use (Var).

Case (∃⁺rM)^{∃xA} ∈ SN by (∃) from M ∈ SN. We must show that ∃⁺rM(v.N(x, v)) ∈ SN. Use (β∃); for this we need to know N(r, M) ∈ SN. But this follows from induction hypothesis (c) for N(r, v) (which is in SN by the remark above), since M derives A(r).

Case uM⃗(v′.N′) ∈ SN by (Var) from M⃗, N′ ∈ SN. Then N′(v.N) ∈ SN by the side induction hypothesis for N′, hence uM⃗(v′.N′(v.N)) ∈ SN by (Var) and therefore uM⃗(v′.N′)(v.N) ∈ SN by (Varπ).

(c). By side induction on M(v) ∈ SN. Let N^A ∈ SN; the goal is M(N) ∈ SN. We distinguish cases according to how M(v) ∈ SN was generated. For (λ), (∃), (β→) and (β∃) use the same rule again, after applying the induction hypothesis to the premise(s).

Case uM⃗(v) ∈ SN by (Var0) from M⃗(v) ∈ SN. Then M⃗(N) ∈ SN by side induction hypothesis (c). If u ≠ v, use (Var0) again. If u = v, we must show N M⃗(N) ∈ SN. Note that N proves A; hence the claim follows from M⃗(N) ∈ SN by (a) with M = N.

Case uM⃗(v)(v′.N′(v)) ∈ SN by (Var) from M⃗(v), N′(v) ∈ SN. If u ≠ v, use (Var) again. If u = v, we must show N M⃗(N)(v′.N′(N)) ∈ SN. Note that N proves A; hence in case M⃗(v) is empty the claim follows from (b) with M = N, and otherwise from (a), (b) and the induction hypothesis.

Case uM⃗(v)(v′.N′(v))R(v)S⃗(v) ∈ SN has been obtained by (Varπ) from uM⃗(v)(v′.N′(v)R(v))S⃗(v) ∈ SN. If u ≠ v, use (Varπ) again. If u = v, from the side induction hypothesis we obtain N M⃗(N)(v′.N′(N)R(N))S⃗(N) ∈ SN. Now use the proposition above with M := N M⃗(N).

Corollary. Every derivation term is in SN and therefore strongly normalizing.

Proof. Induction on the (first) inductive definition of derivation terms. In cases u, λvM and ∃⁺rM the claim follows from the definition of SN, and in cases MN and M(v.N) from parts (a), (b) of the previous theorem.

1.2.5. On disjunction. Incorporating the full set of rules adds no other technical complications but merely increases the length. For the energetic reader, however, we include here the details necessary for disjunction. The conjunction case is entirely straightforward.

We have additional β-conversions

  ∨⁺ᵢM(v₀.N₀, v₁.N₁) ↦β Nᵢ[vᵢ := M]   βᵢ-conversion.
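The βᵢ-conversion and the corresponding permutative conversion for ∨ can be sketched in the same illustrative style as before (a hypothetical Python fragment, not part of the notes, with `inl`/`inr` standing for ∨⁺₀/∨⁺₁ and `case` for the ∨-elimination; bound names are assumed distinct, so substitution need not rename):

```python
# Terms for the disjunction fragment, as nested tuples:
#   ('var', u) | ('inl', M) | ('inr', M)     inl/inr stand for v+_0 / v+_1
#   ('app', M, N)
#   ('case', M, v0, N0, v1, N1)              M(v0.N0, v1.N1), or-elimination

def subst(t, v, s):
    """Substitute s for v in t; bound names assumed distinct from v."""
    if t[0] == 'var':
        return s if t[1] == v else t
    if t[0] in ('inl', 'inr'):
        return (t[0], subst(t[1], v, s))
    if t[0] == 'app':
        return ('app', subst(t[1], v, s), subst(t[2], v, s))
    return ('case', subst(t[1], v, s), t[2], subst(t[3], v, s),
            t[4], subst(t[5], v, s))

def root(t):
    """Try a beta_i- or permutative conversion at the root; None otherwise."""
    if t[0] == 'case' and t[1][0] in ('inl', 'inr'):
        # beta_i: (inl M)(v0.N0, v1.N1) -> N0[v0:=M], symmetrically for inr
        if t[1][0] == 'inl':
            return subst(t[3], t[2], t[1][1])
        return subst(t[5], t[4], t[1][1])
    if t[0] == 'app' and t[1][0] == 'case':
        # permutative: (M(v0.N0, v1.N1)) R -> M(v0.N0 R, v1.N1 R)
        _, M, v0, N0, v1, N1 = t[1]
        return ('case', M, v0, ('app', N0, t[2]), v1, ('app', N1, t[2]))
    return None

# (inl a)(v0.v0, v1.c) applied to b: permute, then contract the beta_0-redex.
t = ('app',
     ('case', ('inl', ('var', 'a')), 'v0', ('var', 'v0'), 'v1', ('var', 'c')),
     ('var', 'b'))
t = root(t)          # permutative step
print(root(t))       # ('app', ('var', 'a'), ('var', 'b'))
```

Note that the permutative step copies the argument R into both branches of the case, which is exactly why the strong normalization argument for ∨ below needs an extra induction on the untouched branch N₁₋ᵢ.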

The definition of SN needs to be extended by

  M ∈ SN ⟹ ∨⁺ᵢM ∈ SN                                                    (∨ᵢ)
  M⃗, N₀, N₁ ∈ SN ⟹ uM⃗(v₀.N₀, v₁.N₁) ∈ SN                               (Var∨)
  uM⃗(v₀.N₀R, v₁.N₁R)S⃗ ∈ SN ⟹ uM⃗(v₀.N₀, v₁.N₁)R S⃗ ∈ SN                 (Var∨,π)
  Nᵢ[vᵢ := M]R⃗ ∈ SN, N₁₋ᵢR⃗ ∈ SN and M ∈ SN ⟹ ∨⁺ᵢM(v₀.N₀, v₁.N₁)R⃗ ∈ SN  (βᵢ)

The former rules (Var), (Varπ) should then be renamed into (Var∃), (Var∃,π).


The lemma above stating that every term in SN is strongly normalizing needs to be extended by an additional clause:

Case (βᵢ). We show that Nᵢ[vᵢ := M]R⃗↓, N₁₋ᵢR⃗↓ and M↓ together imply ∨⁺ᵢM(v₀.N₀, v₁.N₁)R⃗↓. This is done by a fourfold induction: first on M↓, second on Nᵢ[vᵢ := M]R⃗↓, third on N₁₋ᵢR⃗↓ and fourth on the length of R⃗. We need to consider all possible reducts of ∨⁺ᵢM(v₀.N₀, v₁.N₁)R⃗. In case of an outer β-reduction use the assumption. If M is reduced, use the first induction hypothesis. Reductions in Nᵢ and in R⃗ as well as permutative reductions within R⃗ are taken care of by the second induction hypothesis. Reductions in N₁₋ᵢ are taken care of by the third induction hypothesis. The only remaining case is when R⃗ = S S⃗ and (v₀.N₀, v₁.N₁) is permuted with S, to yield (v₀.N₀S, v₁.N₁S). Apply the fourth induction hypothesis, since (NᵢS)[vᵢ := M]S⃗ = Nᵢ[vᵢ := M]S S⃗.

Finally the theorem above stating properties of SN needs an additional clause:

(b′) for all M ∈ SN, if M proves A = A₀ ∨ A₁ and N₀, N₁ ∈ SN, then M(v₀.N₀, v₁.N₁) ∈ SN.

Proof. The new clause is proved by induction on M ∈ SN. Let M ∈ SN and assume that M proves A = A₀ ∨ A₁ and N₀, N₁ ∈ SN. The goal is M(v₀.N₀, v₁.N₁) ∈ SN. We distinguish cases according to how M ∈ SN was generated. For (Var∃,π), (Var∨,π), (β→), (β∃) and (βᵢ) use the same rule again.

Case uM⃗ ∈ SN by (Var0) from M⃗ ∈ SN. Use (Var∨).

Case (∨⁺ᵢM)^{A₀∨A₁} ∈ SN by (∨ᵢ) from M ∈ SN. Use (βᵢ); for this we need to know Nᵢ[vᵢ := M] ∈ SN and N₁₋ᵢ ∈ SN. The latter is assumed, and the former follows from the main induction hypothesis (with Nᵢ) for the substitution clause of the theorem, since M derives Aᵢ.

Case uM⃗(v′.N′) ∈ SN by (Var∃) from M⃗, N′ ∈ SN. For brevity let E := (v₀.N₀, v₁.N₁). Then N′E ∈ SN by the side induction hypothesis for N′, so uM⃗(v′.N′E) ∈ SN by (Var∃) and therefore uM⃗(v′.N′)E ∈ SN by (Var∃,π).

Case uM⃗(v₀′.N₀′, v₁′.N₁′) ∈ SN by (Var∨) from M⃗, N₀′, N₁′ ∈ SN. Let E := (v₀.N₀, v₁.N₁). Then Nᵢ′E ∈ SN by the side induction hypothesis for Nᵢ′, so uM⃗(v₀′.N₀′E, v₁′.N₁′E) ∈ SN by (Var∨) and therefore uM⃗(v₀′.N₀′, v₁′.N₁′)E ∈ SN by (Var∨,π).

Clause (c) now needs additional cases, e.g.,

Case uM⃗(v₀.N₀, v₁.N₁) ∈ SN by (Var∨) from M⃗, N₀, N₁ ∈ SN. If u ≠ v, use (Var∨). If u = v, we show N M⃗[v := N](v₀.N₀[v := N], v₁.N₁[v := N]) ∈
