In the main text we presented our propositions in an order which seemed natural in view of their interpretation and/or application. The order in which these results are naturally deducible is rather different. Therefore we make a fresh start. The propositions of the main text should be seen primarily as but a convenient summary of the results from the arguments below.

Convention: Whenever we refer to r or R0 we implicitly restrict ourselves to community dynamical scenarios for which Eattr(X) is time-constant for all relevant X.

Otherwise we only require E to be ergodic (and realisable as Eattr(X) for some X).

The virgin environment will be denoted as EV.

The following four theorems and corollaries are trivial. The crux lies in the questions that follow them.

Theorem 1 If there exist functions ψ of X, and α of ψ and E, to the real numbers, with α increasing in ψ, such that

sign α(ψ(X), E) = sign ρ(X, E)

then evolution maximises ψ(X) (or equivalently α(ψ(X), E) for any fixed E).

Theorem 2 (universal Verelendungs principle) If there exist functions φ of E, and β of X and φ, to the real numbers, with β increasing in φ, such that

sign β(X, φ(E)) = sign ρ(X, E)

then evolution minimises φ(Eattr(X)).
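
As a purely illustrative example (this specific model is not part of the argument): suppose the environmental feedback runs through a single resource with density S(E), and a strategy X can invade exactly when the resource exceeds its break-even level S*(X), i.e. sign ρ(X, E) = sign(S(E) − S*(X)). With φ(E) := S(E) and β(X, φ) := φ − S*(X), which is increasing in φ, theorem 2 says that evolution minimises S(Eattr(X)): the eventual winner is the strategy that depresses the resource to the lowest level at which it itself can still just break even.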

Corollary 3 If we can write r(X, E) in the form

r(X, E) = α(ψ(X), E),

with α increasing in ψ, then evolution maximises r(X, EV) (and, more generally, r(X, E0) for any fixed E0).
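
For instance (a hypothetical special case, not taken from the text): if r(X, E) = b(X) − d(E) for some trait-dependent birth rate b and environment-dependent death rate d, then ψ(X) = b(X) and α(ψ, E) = ψ − d(E), and corollary 3 says that evolution maximises r(X, EV) = b(X) − d(EV), i.e. simply b(X).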

Corollary 4 If we can write R0(X, E) in the form

R0(X, E) = exp(α(ψ(X), E)),

with α increasing in ψ, then evolution maximises R0(X, EV) (and, more generally, R0(X, E0) for any fixed E0).
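
Similarly, and again purely for illustration: if R0(X, E) = b(X)S(E) with b, S > 0, then R0(X, E) = exp(ln b(X) + ln S(E)), so ψ(X) = ln b(X) and α(ψ, E) = ψ + ln S(E), and corollary 4 says that evolution maximises R0(X, EV) = b(X)S(EV), i.e. again simply b(X).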

Questions

1. Is there any relation between theorems 1 and 2?

2. Can theorems 1 and 2 be made into “if and only if” statements, e.g. by requiring that the extremisation principle should hold independent of the particular choice we may still make for a constraint on X?

3. Is this also possible for the corollaries?

Theorem 5 (answer to question 1) The assumptions of both theorems 1 and 2 are equivalent to: There exist functions φ of E, and ψ of X, to the real numbers, such that

sign(ψ(X) + φ(E)) = sign ρ(X, E). (65)

Proof: Theorem 1: Define the function φ of E to the real numbers by α(−φ(E), E) = 0. Then, since α is increasing in its first argument,

sign(ψ(X) + φ(E)) = sign α(ψ(X), E) = sign ρ(X, E).

Therefore the assumption of theorem 1 implies the assumption made above. The converse implication is obvious.

Theorem 2: Let ψ(X) := −φ(Eattr(X)). As β(X, φ(Eattr(X))) = 0 and β is increasing in φ,

sign(φ(E) + ψ(X)) = sign(φ(E) − φ(Eattr(X))) = sign β(X, φ(E)) = sign ρ(X, E).

Therefore the assumption of theorem 2 implies the assumption made above. The converse implication is obvious.

Apparently we may without loss of essential information replace α(ψ, E) by ψ + φ(E) respectively β(X, φ) by ψ(X) + φ, with φ respectively ψ defined above.
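
To see the construction at work in a hypothetical special case (not from the text): if α(ψ, E) = g(E)ψ − h(E) with g(E) > 0, then α(−φ(E), E) = 0 gives φ(E) = −h(E)/g(E), and indeed sign(ψ(X) + φ(E)) = sign(g(E)ψ(X) − h(E)) = sign α(ψ(X), E), since multiplying by g(E) > 0 does not change the sign.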

Remark 1.1 The reasoning underlying theorem 5 does not extend to corollaries 3 and 4: From r(X, E) = α(ψ(X), E) we cannot even conclude that there exist functions φ0 of E and ψ0 of X such that r(X, E) = ψ0(X) + φ0(E). Neither can we conclude from R0(X, E) = exp(α(ψ(X), E)) that there exist functions φ0 of E and ψ0 of X such that R0(X, E) = exp(ψ0(X) + φ0(E)).
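
A hypothetical counterexample (not given in the original) makes the point: take r(X, E) = ψ(X)g(E) with g(E) > 0, which is of the form α(ψ(X), E) with α increasing in ψ. Any additive representation ψ0(X) + φ0(E) satisfies r(X1, E1) − r(X1, E2) − r(X2, E1) + r(X2, E2) = 0, whereas for ψ(X)g(E) this combination equals (ψ(X1) − ψ(X2))(g(E1) − g(E2)), which in general differs from zero. Hence no such representation exists.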

The next theorem is again trivial. However, it forms a natural introduction to the somewhat unexpected, though on second thought equally trivial, theorem 7.

Theorem 6 (first part of the answer to question 2)

(1) If we require that we can determine the ESS under any possible constraint by maximising a function ψ of X, then this function is uniquely determined up to an increasing transformation.

(2) If we require that we can determine the ESS under any possible constraint by minimising a function φ of E ∈ Eattr(X), then this function is uniquely determined up to an increasing transformation.

Theorem 7 (second part of the answer to question 2)

(1) If there exists a function ψ of X to the real numbers such that we can determine the ESS value of X by maximising ψ, independent of any choice that we may still make for a constraint on X, then there exists a function φ of E such that (65) applies.

(2) If there exists a function φ of E to the real numbers such that we can determine the ESS value of X by minimising φ(Eattr(X)), independent of any choice that we may still make for a constraint on X, then there exists a function ψ of X such that (65) applies.

(3) The functions φ respectively ψ are uniquely determined by their counterparts.

Proof: In case (1) we define φ by φ(Eattr(X)) := −ψ(X). In case (2) we define ψ(X) := −φ(Eattr(X)). (65) is derived by considering all possible constraints of the type X ∈ {X1, X2}. Maximising ψ(X) or minimising φ(Eattr(X)) will only predict the right ESS for this constraint if

sign(ψ(Xi) + φ(Eattr(Xj))) = sign ρ(Xi, Eattr(Xj))

for all values of i and j. Uniqueness of φ respectively ψ follows from the fact that sign(ψ(X) + φ(Eattr(X))) should be 0.

Apparently any optimisation principle ψ automatically carries a pessimisation principle φ in its wake, and vice versa.
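
This duality is easy to check mechanically in a toy setting. The sketch below is purely illustrative: the trait values, the function psi and the assumption that ρ has exactly the separable form (65) are hypothetical choices, not part of the argument above.

```python
# Minimal numerical sketch (hypothetical model): invasion fitness of the
# separable form (65), rho(X, E) = psi(X) + phi(E), with the resident's
# attractor fixed by rho(X, Eattr(X)) = 0, i.e. phi(Eattr(X)) = -psi(X).
import itertools

def psi(X):
    # hypothetical trait-dependent part of fitness
    return -(X - 0.7) ** 2

def phi_attr(X):
    # value of phi in the environment set by resident X, forced by rho(X, Eattr(X)) = 0
    return -psi(X)

def sign_rho(invader, resident):
    # invasion fitness of 'invader' in the environment set by 'resident'
    return psi(invader) + phi_attr(resident)

strategies = [0.2, 0.4, 0.7, 0.9]

# Under every pairwise constraint {X1, X2} the invasion winner coincides with
# both the psi-maximiser (optimisation) and the phi(Eattr)-minimiser (pessimisation).
for X1, X2 in itertools.combinations(strategies, 2):
    winner_by_invasion = X1 if sign_rho(X1, X2) > 0 else X2
    winner_by_psi = max((X1, X2), key=psi)
    winner_by_phi = min((X1, X2), key=phi_attr)
    assert winner_by_invasion == winner_by_psi == winner_by_phi
print("maximising psi and minimising phi(Eattr) agree on every pairwise constraint")
```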

Corollary 8 (last part of the answer to question 2) We may replace the opening “if”s of theorems 1 and 2 by “iff”s.

Corollary 9 (first part of the answer to question 3)

(1) If we can determine the ESS value of X by maximising r(X, E0) for some special value E0 of E, independent of any choice that we may still make for a constraint on X, then there exists a function φ of E such that

sign(r(X, E0) + φ(E)) = sign r(X, E).

(2) If we can determine the ESS value of X by maximising R0(X, E0) for some special value E0 of E, independent of any choice that we may still make for a constraint on X, then there exists a function φ of E such that

sign(ln(R0(X, E0)) + φ(E)) = sign ln(R0(X, E)).

It is not possible to get any representation of r(X, E) or R0(X, E) under the weak condition that there is at least one E0 such that evolution maximises r(X, E0) respectively R0(X, E0). We need to make a stronger assumption about the sense in which evolution maximises r respectively R0:

Theorem 10 (last part of the answer to question 3)

(1) If the maximisation principle from corollary 9 (1) holds good for all possible choices of E0, then it is possible to write

r(X, E) = α(ψ(X), E),

with α increasing in its first argument and ψ(X) = r(X, E0) for some, arbitrary but fixed, E0.

(2) If the maximisation principle from corollary 9 (2) holds good for all possible choices of E0, then it is possible to write

R0(X, E) = exp(β(ψ(X), E)),

with β increasing in its first argument and ψ(X) = ln(R0(X, E0)) for some, arbitrary but fixed, E0.

Proof: The maximisation of, say, γ(X, E), E fixed, can only single out the same value of X as the maximisation of γ(X, E0), under all possible constraints, if γ(X, E0) and γ(X, E), considered as functions of X, are related by an increasing function: γ(X, E) = f(γ(X, E0), E, γ), where the last argument is at this stage only notional. For any given E (and γ) this function is necessarily unique. In cases (1) and (2) we define α(ψ, E) := f(ψ, E, r) respectively β(ψ, E) := ln(f(exp(ψ), E, R0)), the latter so that exp(β(ψ(X), E)) = f(R0(X, E0), E, R0) = R0(X, E) with ψ(X) = ln(R0(X, E0)).
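
As a hypothetical illustration of case (1) (this special form is not assumed anywhere in the text): suppose r(X, E) = g(E)b(X) − d(E) with g(E) > 0. Then maximising r(X, E0) ranks strategies by b(X) for every choice of E0, and with ψ(X) := r(X, E0) we can indeed write r(X, E) = α(ψ(X), E) with α(ψ, E) = (g(E)/g(E0))(ψ + d(E0)) − d(E), which is increasing in ψ.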