

Figure 4.4. A nondecreasing permutation of sites. The original labels of the sites, $1 \leq i \leq n$, are at the top; below each site with label $i$, we have noted the corresponding $k$ for which $i_k = i$.

individuals in the case that $i \notin A$. It is indeed a common pitfall to assume that Theorem 4.3 holds for arbitrary $A$. This is also implicit in [BB03]; see the corresponding erratum.

4.4 Recursive solution of the selection-recombination equation

The first main result in this chapter will be a recursive solution of the SRE. The recursion will start at $i$ and work along the site indices in agreement with the partial order introduced in Definition 4.1. If the original indices are used, the recursion must be formulated individually for every choice of $i$; in particular, it looks quite different depending on whether $i$ is at one of the ends or in the interior of the sequence. To establish the recursion in a unified framework, we introduce a relabelling; let us fix a nondecreasing (in the sense of the partial order from Definition 4.1) permutation $(i_k)_{0 \leq k \leq n-1}$ of $S$ (compare Fig. 4.4) and denote the corresponding heads and tails by upper indices, that is, $C^{(k)} := C_{i_k}$ and $D^{(k)} := D_{i_k}$ (compare Figure 4.1).

Note that $i_0 = i$, $D^{(0)} = S$ and $C^{(0)} = \varnothing$, and also that this choice of permutation implies that for all $\ell > k$, one has either $D^{(\ell)} \subseteq D^{(k)}$ (if $i_\ell$ and $i_k$ are comparable) or $D^{(\ell)} \subseteq C^{(k)}$ (if $i_\ell$ and $i_k$ are incomparable). Furthermore, we define $\varrho^{(k)} := \varrho_{i_k}$ and $R^{(k)} := R_{i_k}$ for $k > 0$.
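For concreteness, here is a minimal sketch (assuming the sites are the positions $1, \ldots, n$ on a line and using the distance from the selected site as one possible linear extension of the partial order; the function names are ad hoc, and the boundary convention for heads and tails should be read off Figure 4.1) that lists one such enumeration together with the associated heads and tails.

```python
# Sketch: one possible nondecreasing enumeration of linearly arranged sites
# S = {1, ..., n} around a selected site i, obtained by sorting the remaining
# sites by their distance to i (any other linear extension of the partial
# order would do).  The tail D_j is taken to be the piece on the far side of
# j (including j), the head C_j its complement; for j = i we use D_i = S.

def nondecreasing_enumeration(n, i):
    """Return (i_0, ..., i_{n-1}) with i_0 = i."""
    rest = sorted((j for j in range(1, n + 1) if j != i), key=lambda j: abs(j - i))
    return [i] + rest

def head_and_tail(n, i, j):
    """Head C_j and tail D_j of site j, relative to the selected site i."""
    S = set(range(1, n + 1))
    if j == i:
        return set(), S                      # C^(0) = empty set, D^(0) = S
    D = {l for l in S if l >= j} if j > i else {l for l in S if l <= j}
    return S - D, D

n, i = 10, 5
order = nondecreasing_enumeration(n, i)
for k, ik in enumerate(order):
    C, D = head_and_tail(n, i, ik)
    print(f"k={k:2d}  i_k={ik:2d}  C^({k})={sorted(C)}  D^({k})={sorted(D)}")
```

One can read off from the output that, for $\ell > k$, the tail $D^{(\ell)}$ is contained either in $D^{(k)}$ or in $C^{(k)}$, as stated above.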

We now proceed as follows. First, we recapitulate the solution of the pure selection equation, that is, we solve (4.9) in the special case that all recombination rates vanish. Then, in accordance with the labelling given by $(i_k)_{1 \leq k \leq n-1}$, we will successively add sites at which we allow recombination. This can be formalised as follows.

Definition 4.4. For $\varrho^{(1)}, \ldots, \varrho^{(n-1)}$ as above and every $k \in [0:n-1]$, we set
$$\Psi^{(k)}_{\mathrm{rec}} := \sum_{\ell=1}^{k} \varrho^{(\ell)} \big(R^{(\ell)} - \mathrm{id}\big), \qquad \Psi^{(k)} := \Psi_{\mathrm{sel}} + \Psi^{(k)}_{\mathrm{rec}}$$
(with the usual convention that the empty sum is $0$). We then define the SRE truncated at $k$ as the differential equation
$$\dot{\omega}^{(k)}_t = \Psi^{(k)}\big(\omega^{(k)}_t\big).$$

Furthermore, we understand $(\omega^{(k)})_{0 \leq k \leq n-1}$ as the family of the corresponding solutions, all with the same initial condition $\omega_0$. In particular, $\omega^{(0)}$ is the solution of the pure selection equation
$$\dot{\omega}^{(0)}_t = \Psi_{\mathrm{sel}}\big(\omega^{(0)}_t\big) = s\, f\big(\omega^{(0)}_t\big)\,\big(b(\omega^{(0)}_t) - \omega^{(0)}_t\big). \tag{4.21}$$
We also define $\psi^{(k)} = (\psi^{(k)}_t)_{t \geq 0}$ as the flow semigroup associated to the differential equation defined via $\Psi^{(k)}$. In line with (4.9), we have $\omega = \omega^{(n-1)}$ (which is to say $\omega_t = \omega^{(n-1)}_t$ for all $t \geq 0$) and $\Psi = \Psi^{(n-1)}$, and we likewise set $\psi = \psi^{(n-1)}$. We will also write $\varphi$ instead of $\psi^{(0)}$ for the (pure) selection semigroup. ♦

Proposition 4.5. The solution of the pure selection equation (4.21) with initial condition $\omega_0 \in \mathcal{P}(X)$ is given by
$$\omega^{(0)}_t = \varphi_t(\omega_0) = \frac{e^{st} F(\omega_0) + (1 - F)(\omega_0)}{e^{st} f(\omega_0) + 1 - f(\omega_0)}, \qquad t \geq 0, \tag{4.22}$$
with $f$ and $F$ as given in (4.2) and (4.3). In particular,
$$f\big(\omega^{(0)}_t\big) = \frac{e^{st} f(\omega_0)}{e^{st} f(\omega_0) + 1 - f(\omega_0)} \tag{4.23}$$
is increasing in time, and $\omega^{(0)}_t = \varphi_t(\omega_0)$ is a convex combination of the initial type distributions of the fit (that is, beneficial) and unfit (that is, deleterious) subpopulations introduced in Eqs. (4.5) and (4.6), namely,
$$\omega^{(0)}_t = f\big(\omega^{(0)}_t\big)\, b(\omega_0) + \big(1 - f(\omega^{(0)}_t)\big)\, d(\omega_0).$$
This in particular implies
$$b\big(\varphi_t(\omega_0)\big) = b(\omega_0) \quad\text{and}\quad d\big(\varphi_t(\omega_0)\big) = d(\omega_0). \tag{4.24}$$

Proof. By straightforward verification. To see Eq. (4.24), recall that the fitness operator $F$ is a projection and $b(\omega)$ is in the image of $F$, while $d(\omega)$ is in the image of $1 - F$ for any $\omega \in \mathcal{P}(X)$.

Remark 4.8. Eq. (4.23) generalises the well-known solution of the selection equation for a single site, which is simply a logistic equation; compare [Dur08, p. 198]. Eq. (4.24) reflects the plausible fact that, while the proportion of fit individuals increases at the cost of the unfit ones (as quantified in Eq. (4.22)), the type composition within the set of fit types remains unchanged, and likewise for the set of unfit types. ♦
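As an illustration, the following minimal sketch (assuming a toy type space $X = \{0,1\}^2$ with the selected site in the first position and allele $0$ taken as fit; the step size and selection intensity are arbitrary, and the concrete realisations of $b$ and $d$ are in the spirit of (4.5) and (4.6)) compares the closed form (4.22) with a crude Euler integration of (4.21) and checks (4.23) and (4.24) numerically.

```python
# Sketch: numerical check of (4.22)-(4.24) on the toy type space X = {0,1}^2.
# omega is a 2x2 array of type frequencies indexed by (x_0, x_1); site 0 is
# the selected site and allele 0 is taken as the fit one.

import numpy as np

s = 1.3                                    # selection intensity as in (4.21)
omega0 = np.array([[0.1, 0.2],
                   [0.3, 0.4]])

def F(w):                                  # restriction to the fit types (x_0 = 0)
    out = np.zeros_like(w); out[0] = w[0]; return out

f = lambda w: F(w).sum()                   # proportion of fit individuals
b = lambda w: F(w) / f(w)                  # fit subpopulation, cf. (4.5)
d = lambda w: (w - F(w)) / (1 - f(w))      # unfit subpopulation, cf. (4.6)

def phi(t, w0):
    """Closed-form selection flow (4.22)."""
    return (np.exp(s * t) * F(w0) + (w0 - F(w0))) / (np.exp(s * t) * f(w0) + 1 - f(w0))

t_end, dt = 2.0, 1e-4
w = omega0.copy()
for _ in range(int(round(t_end / dt))):    # Euler steps for (4.21)
    w = w + dt * s * f(w) * (b(w) - w)

logistic = np.exp(s * t_end) * f(omega0) / (np.exp(s * t_end) * f(omega0) + 1 - f(omega0))
print("max |Euler - (4.22)|      :", np.abs(w - phi(t_end, omega0)).max())
print("f(omega_t) vs (4.23)      :", f(phi(t_end, omega0)), logistic)
print("b, d preserved, cf. (4.24):",
      np.allclose(b(phi(t_end, omega0)), b(omega0)),
      np.allclose(d(phi(t_end, omega0)), d(omega0)))
```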

The main result in this section is the following recursion formula for the family of solutions of the (truncated) SREs.

Theorem 4.6. The family of solutions $(\omega^{(k)})_{1 \leq k \leq n-1}$ of Definition 4.4 satisfies the recursion
$$\omega^{(k)}_t = e^{-\varrho^{(k)} t}\, \omega^{(k-1)}_t + \pi_{C^{(k)}}.\omega^{(k-1)}_t \otimes \pi_{D^{(k)}}.\!\int_0^t \varrho^{(k)} e^{-\varrho^{(k)} \tau}\, \omega^{(k-1)}_\tau \,\mathrm{d}\tau$$


for $1 \leq k \leq n-1$ and $t \geq 0$, where $\omega^{(0)}$ is the solution of the pure selection equation given in Proposition 4.5.

We will first give an analytic proof. Then, in the next section, we will give a genealogical proof of the recursion by means of the ancestral selection-recombination graph (ASRG), which will provide additional insight.
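Before the proof, the recursion can be illustrated on the smallest nontrivial example (a minimal sketch, assuming two diallelic sites with the selected site in position $0$ and a single recombination site with rate $\varrho$, hence $C^{(1)} = \{0\}$ and $D^{(1)} = \{1\}$; rates and step sizes are arbitrary): the solution $\omega^{(1)}$ of the full equation is compared with the right-hand side of the recursion evaluated along the pure selection solution $\omega^{(0)}$.

```python
# Sketch: check of the recursion of Theorem 4.6 for two diallelic sites.
# omega is a 2x2 array indexed by (x_0, x_1); site 0 is the selected site
# (allele 0 fit), site 1 the recombination site with rate rho, so that
# C^(1) = {0} and D^(1) = {1}.

import numpy as np

s, rho = 1.0, 0.7
omega0 = np.array([[0.1, 0.2],
                   [0.3, 0.4]])

def F(w):                                  # restriction to the fit types (x_0 = 0)
    out = np.zeros_like(w); out[0] = w[0]; return out

f = lambda w: F(w).sum()
Psi_sel = lambda w: s * (F(w) - f(w) * w)                  # as in (4.21)
R1 = lambda w: np.outer(w.sum(axis=1), w.sum(axis=0))      # pi_C.w  tensor  pi_D.w

def euler_path(rhs, w0, t_end, dt):
    """Euler integration, returning the whole path on the time grid."""
    w, path = w0.copy(), [w0.copy()]
    for _ in range(int(round(t_end / dt))):
        w = w + dt * rhs(w)
        path.append(w.copy())
    return np.array(path)

t_end, dt = 2.0, 1e-4
taus = np.arange(int(round(t_end / dt)) + 1) * dt

path1 = euler_path(lambda w: Psi_sel(w) + rho * (R1(w) - w), omega0, t_end, dt)  # omega^(1)
path0 = euler_path(Psi_sel, omega0, t_end, dt)                                   # omega^(0)

# right-hand side of the recursion:
#   e^{-rho t} omega^(0)_t + pi_C.omega^(0)_t  tensor  int_0^t rho e^{-rho tau} pi_D.omega^(0)_tau dtau
integrand = (rho * np.exp(-rho * taus))[:, None] * path0.sum(axis=1)   # pi_D = sum over x_0
integral_D = (0.5 * (integrand[:-1] + integrand[1:]) * dt).sum(axis=0)
recursion = np.exp(-rho * t_end) * path0[-1] + np.outer(path0[-1].sum(axis=1), integral_D)

print("max |omega^(1)_t - recursion|:", np.abs(path1[-1] - recursion).max())
```

Refining the step size makes the printed discrepancy shrink, in line with the exact identity of Theorem 4.6.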

To deal with the nonlinearity of recombination and to exploit the underlying linear structure (see [BB16]) more efficiently, we now introduce a variant of the product of two measures that are defined on $X_A$ and $X_B$, where $A$ and $B$ need not be disjoint. Namely, given a subset $U$ of $S$, sets $I, J \subseteq U$, and signed measures $\nu_I$, $\nu_J$ on $X_I$ and $X_J$, respectively, we define
$$\nu_I \boxtimes \nu_J := (\pi_{I \setminus J}.\nu_I) \otimes \nu_J,$$
which is a signed measure on $X_{I \cup J}$ (recall that $\pi_\varnothing.\nu = \nu(X_I)$ for all signed measures $\nu$ on $X_I$, $I \subseteq S$, in line with Remark 2.2). Note that we use $\nu_I$ here to mean any signed measure on $X_I$, whereas we otherwise abbreviate by $\nu_I$ the specific signed measure on $X_I$ that is obtained from $\nu$ on $X$ via $\nu_I = \pi_I.\nu$.

Proposition 4.7. Let $U \subseteq S$. For $I, J, K \subseteq U$ and signed measures $\nu_I$, $\nu_J$, $\nu_K$ on $X_I$, $X_J$, and $X_K$, respectively, the operation $\boxtimes$ has the following properties.

(i) $(\nu_I \boxtimes \nu_J) \boxtimes \nu_K = \nu_I \boxtimes (\nu_J \boxtimes \nu_K)$ (associativity).

(ii) If $I \cap J = \varnothing$, we have $\nu_I \boxtimes \nu_J = \nu_I \otimes \nu_J = \nu_J \boxtimes \nu_I$ (reduction to tensor product and commutativity).

(iii) If $I \subseteq J$, then $\nu_I \boxtimes \nu_J = \nu_I(X_I)\, \nu_J$ (cancellation property).

Proof. For associativity, note that
$$(\nu_I \boxtimes \nu_J) \boxtimes \nu_K = \big((\pi_{I\setminus J}.\nu_I) \otimes \nu_J\big) \boxtimes \nu_K = \pi_{(I\cup J)\setminus K}.\big((\pi_{I\setminus J}.\nu_I) \otimes \nu_J\big) \otimes \nu_K$$
$$= (\pi_{I\setminus(J\cup K)}.\nu_I) \otimes (\pi_{J\setminus K}.\nu_J) \otimes \nu_K = (\pi_{I\setminus(J\cup K)}.\nu_I) \otimes (\nu_J \boxtimes \nu_K) = \nu_I \boxtimes (\nu_J \boxtimes \nu_K),$$
where we have used in the third step that $\big((I\cup J)\setminus K\big) \cap (I\setminus J) = I\setminus(J\cup K)$.

When $I \cap J = \varnothing$, one has
$$\nu_I \boxtimes \nu_J = (\pi_{I\setminus J}.\nu_I) \otimes \nu_J = (\pi_I.\nu_I) \otimes \nu_J = \nu_I \otimes \nu_J = \nu_J \boxtimes \nu_I,$$
which implies the claimed reduction to $\otimes$ and thus commutativity. Finally, for $I \subseteq J$,
$$\nu_I \boxtimes \nu_J = (\pi_{I\setminus J}.\nu_I) \otimes \nu_J = (\pi_\varnothing.\nu_I) \otimes \nu_J = \nu_I(X_I)\, \nu_J$$
establishes the cancellation property.
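The properties just proved can also be checked mechanically on finite product spaces. The following minimal sketch (assuming $q$ letters per site and an ad-hoc representation of a signed measure on $X_I$ as a pair of site labels and an array; all names are illustrative) implements $\boxtimes$ and verifies (i)-(iii) on random examples.

```python
# Sketch: the product defined above for signed measures on finite product
# spaces with q letters per site.  A measure nu_I on X_I is stored as a pair
# (I, a), where I is a tuple of site labels and a has one axis per site.

import numpy as np

q = 2

def marginal(I, a, A):
    """pi_A.nu_I: sum out the sites of I that are not in A."""
    keep = tuple(j for j in I if j in A)
    drop = tuple(k for k, j in enumerate(I) if j not in A)
    return keep, (a.sum(axis=drop) if drop else a.copy())

def boxtimes(mI, mJ):
    """nu_I boxtimes nu_J := (pi_{I\\J}.nu_I) tensor nu_J, a measure on X_{I u J}."""
    (I, a), (J, b) = mI, mJ
    K, c = marginal(I, a, set(I) - set(J))
    return K + J, np.multiply.outer(c, b)

def same(m1, m2):
    """Equality of two labelled measures up to reordering of their sites."""
    (I, a), (J, b) = m1, m2
    return set(I) == set(J) and np.allclose(np.transpose(a, [I.index(j) for j in J]), b)

rng = np.random.default_rng(0)
nu = lambda I: (I, rng.normal(size=(q,) * len(I)))         # random signed measure on X_I

nu1, nu2, nu3 = nu((1, 2)), nu((2, 3)), nu((3, 4))
print("(i)  ", same(boxtimes(boxtimes(nu1, nu2), nu3), boxtimes(nu1, boxtimes(nu2, nu3))))
print("(ii) ", same(boxtimes(nu1, nu3), boxtimes(nu3, nu1)))
nuI, nuJ = nu((2,)), nu((1, 2, 3))
print("(iii)", same(boxtimes(nuI, nuJ), (nuJ[0], nuI[1].sum() * nuJ[1])))
```

Since the left factor is marginalised over the overlap, the product is not commutative in general; (ii) and (iii) describe exactly when it collapses to the tensor product or to a scalar multiple.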

Under the conditions of Proposition 4.7, we now denote by $\nu_J \oplus \nu_K$ the formal sum of $\nu_J$ and $\nu_K$ (and use $\ominus$ for the corresponding formal difference). Note that the formal sum turns into a proper sum (and hence $\oplus$ reduces to $+$) when $J = K$. Furthermore, we define
$$\nu_I \boxtimes (\nu_J \oplus \nu_K) := (\nu_I \boxtimes \nu_J) \oplus (\nu_I \boxtimes \nu_K). \tag{4.25}$$
Clearly, the right-hand side reduces to a proper sum when $I \cup J = I \cup K$.

Generalising the formal sum above, we define $\mathcal{A}(X_U)$ to be the real vector space of formal sums
$$\nu := \lambda_1 \nu_{U_1} \oplus \ldots \oplus \lambda_q \nu_{U_q},$$
where $q \in \mathbb{N}$, $\lambda_1, \ldots, \lambda_q \in \mathbb{R}$, $U_1, \ldots, U_q \subseteq U \subseteq S$, and $\nu_{U_1}, \ldots, \nu_{U_q}$ are signed measures on $X_{U_1}, \ldots, X_{U_q}$, respectively. We also write $\nu(X_U) := \sum_{i=1}^{q} \lambda_i\, \nu_{U_i}(X_{U_i})$.

Remark 4.9. If one extends the definition of $\boxtimes$ canonically to all of $\mathcal{A}(X_U)$ (recalling that the projections are linear), $\big(\mathcal{A}(X_U), \boxtimes\big)$ becomes an associative, unital algebra with neutral element $\mathbb{1}$, the measure with weight $1$ on $X_\varnothing$. Note that, when multiplying $\nu \in \mathcal{A}(X_I)$ and $\mu \in \mathcal{A}(X_J)$ for disjoint $I$ and $J$, the multiplication $\boxtimes$ introduced above agrees with the measure product. ♦

Now, we can rewrite $\Psi^{(k)}_{\mathrm{rec}}$ of Definition 4.4 as
$$\Psi^{(k)}_{\mathrm{rec}}\big(\omega^{(k)}_t\big) = \omega^{(k)}_t \boxtimes \sum_{\ell=1}^{k} \varrho^{(\ell)} \big(\pi_{D^{(\ell)}}.\omega^{(k)}_t \ominus \mathbb{1}\big); \tag{4.26}$$
note that the right-hand side indeed reduces to a proper (rather than a formal) sum of measures via (4.25), because $\omega^{(k)}_t$ lives on $X_S$ and $D^{(\ell)} \subseteq S$ for $1 \leq \ell \leq k$, so that each term is a measure on $X_S$.
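As a sanity check of this bookkeeping (a minimal sketch, assuming three diallelic sites with selected site $0$, so that $D^{(1)} = \{1,2\}$ and $D^{(2)} = \{2\}$, and arbitrary rates), one can evaluate the right-hand side of (4.26) term by term and compare it with the recombinator form of Definition 4.4; each summand is indeed a proper signed measure on $X_S$ with total mass $0$, and the agreement is of course immediate from the definitions.

```python
# Sketch: Eq. (4.26) evaluated term by term for three diallelic sites
# S = {0, 1, 2} with selected site 0, so that D^(1) = {1, 2} and D^(2) = {2}.

import numpy as np

rng = np.random.default_rng(2)
omega = rng.random((2, 2, 2)); omega /= omega.sum()      # probability measure on {0,1}^3
rho = {1: 0.4, 2: 0.9}                                   # assumed recombination rates
D = {1: (1, 2), 2: (2,)}                                 # tails; heads are C = S \ D

def marg(w, A):
    """pi_A.w, keeping the axes listed in A."""
    drop = tuple(j for j in range(w.ndim) if j not in A)
    return w.sum(axis=drop) if drop else w

def box_with(w, nu_A, A):
    """w boxtimes nu_A = (pi_{S\\A}.w) tensor nu_A, with axes put back in order."""
    C = tuple(j for j in range(w.ndim) if j not in A)
    return np.moveaxis(np.multiply.outer(marg(w, C), nu_A), tuple(range(w.ndim)), C + A)

psi_box = np.zeros_like(omega)
for ell, A in D.items():
    # omega boxtimes (pi_D.omega (-) 1) becomes a proper measure on X_S:
    term = rho[ell] * (box_with(omega, marg(omega, A), A) - box_with(omega, np.ones(()), ()))
    print(f"summand ell={ell}: shape {term.shape}, total mass {term.sum():+.2e}")
    psi_box += term

# independent evaluation via the recombinators R^(1), R^(2)
R1 = np.einsum('a,bc->abc', marg(omega, (0,)), marg(omega, (1, 2)))
R2 = np.einsum('ab,c->abc', marg(omega, (0, 1)), marg(omega, (2,)))
psi_rec = rho[1] * (R1 - omega) + rho[2] * (R2 - omega)
print("agrees with the recombinator form:", np.allclose(psi_box, psi_rec))
```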

We shall see later that, when combined with selection, this representation has an advantage over the use of recombinators because it nicely brings out the recursive structure; this will streamline calculations and connect to the graphical construction in a natural way. The fact that the head alone determines the fitness of an individual manifests itself in the right-multiplicativity of $\Psi_{\mathrm{sel}}$ and its associated flow $\varphi$ (compare Definition 4.4) as follows.

Lemma 4.8. For all $\mu \in \mathcal{P}(X)$ and all $\nu \in \mathcal{A}(X_S)$,
$$F(\mu \boxtimes \nu) = F(\mu) \boxtimes \nu.$$
If, in addition, $\nu(X_S) = 1$, one has
$$\Psi_{\mathrm{sel}}(\mu \boxtimes \nu) = \Psi_{\mathrm{sel}}(\mu) \boxtimes \nu \quad\text{and therefore}\quad \varphi_t(\mu \boxtimes \nu) = \varphi_t(\mu) \boxtimes \nu$$


for every $t \geq 0$.

Proof. To keep the notation simple, we assume $U_1, U_2 \subseteq S$ and $\nu = \nu_{U_1} \oplus \nu_{U_2}$ with signed measures $\nu_{U_1}$ and $\nu_{U_2}$ on $X_{U_1}$ and $X_{U_2}$, respectively. By the tensor product representation of $F$ from (4.4), we have
$$F(\mu \boxtimes \nu_{U_1} + \mu \boxtimes \nu_{U_2}) = F(\mu \boxtimes \nu_{U_1}) + F(\mu \boxtimes \nu_{U_2}) = F\big((\pi_{S\setminus U_1}.\mu) \otimes \nu_{U_1}\big) + F\big((\pi_{S\setminus U_2}.\mu) \otimes \nu_{U_2}\big)$$
$$= \big(P_i \otimes \mathrm{id}_{(S\setminus U_1)\setminus i}\big)(\pi_{S\setminus U_1}.\mu) \otimes \nu_{U_1} + \big(P_i \otimes \mathrm{id}_{(S\setminus U_2)\setminus i}\big)(\pi_{S\setminus U_2}.\mu) \otimes \nu_{U_2}$$
$$= \pi_{S\setminus U_1}.\big(P_i \otimes \mathrm{id}_{S\setminus i}\big)(\mu) \otimes \nu_{U_1} + \pi_{S\setminus U_2}.\big(P_i \otimes \mathrm{id}_{S\setminus i}\big)(\mu) \otimes \nu_{U_2} = F(\mu) \boxtimes \nu_{U_1} + F(\mu) \boxtimes \nu_{U_2},$$
which gives the first claim. Taking the first claim together with the fact that $f(\mu \boxtimes \nu) = f(\mu)$ if $\nu(X_S) = 1$, we get the second and the third claim.
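The two claims of Lemma 4.8 can be checked on a toy example as well (a minimal sketch, assuming two diallelic sites with the selected site in position $0$ and a signed measure $\nu$ carried by the other site, which is the situation in which the lemma is applied below).

```python
# Sketch: Lemma 4.8 on X = {0,1}^2 with selected site 0 (allele 0 fit) and a
# signed measure nu on X_{{1}}, i.e. on the site not under selection.  Then
# mu boxtimes nu = (pi_{{0}}.mu) tensor nu.

import numpy as np

rng = np.random.default_rng(3)
mu = rng.random((2, 2)); mu /= mu.sum()          # probability measure on X_S
nu = rng.normal(size=2)                          # signed measure on X_{{1}}

def F(w):                                        # restriction to the fit types (x_0 = 0)
    out = np.zeros_like(w); out[0] = w[0]; return out

f = lambda w: F(w).sum()
box = lambda w, v: np.outer(w.sum(axis=1), v)    # w boxtimes v for v on X_{{1}}

# first claim: F(mu boxtimes nu) = F(mu) boxtimes nu
print("F-claim  :", np.allclose(F(box(mu, nu)), box(F(mu), nu)))

# second claim: Psi_sel(mu boxtimes nu) = Psi_sel(mu) boxtimes nu whenever nu has mass 1
s = 1.0
Psi_sel = lambda w: s * (F(w) - f(w) * w)        # = s f(w)(b(w) - w), cf. (4.21)
nu1 = np.array([0.7, 0.3])                       # nu1(X_{{1}}) = 1
print("Psi-claim:", np.allclose(Psi_sel(box(mu, nu1)), box(Psi_sel(mu), nu1)))
```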

Now, the proof of Theorem 4.6 is straightforward.

Proof of Theorem 4.6. Let $\Psi^{(k)}$ be as in Definition 4.4. With the shorthand
$$\nu^{(k-1)}_t := \pi_{D^{(k)}}.\!\int_0^t \varrho^{(k)} e^{-\varrho^{(k)} \tau}\, \omega^{(k-1)}_\tau \,\mathrm{d}\tau,$$
one has $\nu^{(k-1)}_t(X_{D^{(k)}}) = 1 - e^{-\varrho^{(k)} t}$, and the right-hand side of the recursion formula from Theorem 4.6 can be expressed as

$$\mu^{(k)}_t := \omega^{(k-1)}_t \boxtimes \big(e^{-\varrho^{(k)} t}\,\mathbb{1} \oplus \nu^{(k-1)}_t\big). \tag{4.27}$$
First, we show that
$$\mu^{(k)}_t \boxtimes \pi_{D^{(\ell)}}.\mu^{(k)}_t = \omega^{(k-1)}_t \boxtimes \pi_{D^{(\ell)}}.\omega^{(k-1)}_t \boxtimes \big(e^{-\varrho^{(k)} t}\,\mathbb{1} \oplus \nu^{(k-1)}_t\big) \tag{4.28}$$
for all $1 \leq \ell \leq k$. To see this, write the left-hand side as $\omega^{(k-1)}_t \boxtimes A \boxtimes B$, where
$$A := e^{-\varrho^{(k)} t}\,\mathbb{1} \oplus \nu^{(k-1)}_t \quad\text{and}\quad B := \pi_{D^{(\ell)}}.\Big(\omega^{(k-1)}_t \boxtimes \big(e^{-\varrho^{(k)} t}\,\mathbb{1} \oplus \nu^{(k-1)}_t\big)\Big) = \pi_{D^{(\ell)}}.\mu^{(k)}_t.$$
Recall that, by our monotonicity assumption on the permutation of sites, we have either $D^{(k)} \subseteq D^{(\ell)}$ or $D^{(k)} \cap D^{(\ell)} = \varnothing$. In the first case, (4.28) follows by cancelling $A$ using Proposition 4.7 (note that $A(X_{D^{(k)}}) = 1$). In the second case, $B$ is just $\pi_{D^{(\ell)}}.\omega^{(k-1)}_t$, and so $A \boxtimes B = B \boxtimes A$, again by Proposition 4.7. Now we compute, using (4.26) and (4.27) in the first step, (4.28) and Lemma 4.8 in the second, Definition 4.4 in the third, and Proposition 4.7

in the last:

$$\Psi^{(k)}\big(\mu^{(k)}_t\big) = \Psi_{\mathrm{sel}}\Big(\omega^{(k-1)}_t \boxtimes \big(e^{-\varrho^{(k)} t}\,\mathbb{1} \oplus \nu^{(k-1)}_t\big)\Big) + \sum_{\ell=1}^{k} \varrho^{(\ell)}\, \mu^{(k)}_t \boxtimes \big(\pi_{D^{(\ell)}}.\mu^{(k)}_t \ominus \mathbb{1}\big)$$
$$= \Big(\Psi_{\mathrm{sel}}\big(\omega^{(k-1)}_t\big) + \sum_{\ell=1}^{k} \varrho^{(\ell)}\, \omega^{(k-1)}_t \boxtimes \big(\pi_{D^{(\ell)}}.\omega^{(k-1)}_t \ominus \mathbb{1}\big)\Big) \boxtimes \big(e^{-\varrho^{(k)} t}\,\mathbb{1} \oplus \nu^{(k-1)}_t\big)$$
$$= \Big(\Psi^{(k-1)}\big(\omega^{(k-1)}_t\big) + \varrho^{(k)}\, \omega^{(k-1)}_t \boxtimes \big(\pi_{D^{(k)}}.\omega^{(k-1)}_t \ominus \mathbb{1}\big)\Big) \boxtimes \big(e^{-\varrho^{(k)} t}\,\mathbb{1} \oplus \nu^{(k-1)}_t\big)$$
$$= \dot{\omega}^{(k-1)}_t \boxtimes \big(e^{-\varrho^{(k)} t}\,\mathbb{1} \oplus \nu^{(k-1)}_t\big) + \omega^{(k-1)}_t \boxtimes \big(\varrho^{(k)} e^{-\varrho^{(k)} t}\, \pi_{D^{(k)}}.\omega^{(k-1)}_t \ominus \varrho^{(k)} e^{-\varrho^{(k)} t}\,\mathbb{1}\big).$$
Identifying $\varrho^{(k)} e^{-\varrho^{(k)} t}\, \pi_{D^{(k)}}.\omega^{(k-1)}_t$ with $\dot{\nu}^{(k-1)}_t$, we see that the last line is just the time derivative of $\mu^{(k)}_t$ of (4.27).

Remark 4.10. We could have proved Theorem 4.6 also without the help of formal sums and the new operations $\boxtimes$, $\oplus$, $\ominus$. However, we decided on the current presentation in order to familiarise the reader with this (admittedly somewhat abstract) formalism, as it is the key to stating the duality result in Section 4.7 in closed form. It will also allow us later to state the solution itself in closed form; see Corollary 4.26. ♦

Remark 4.11. Note that the only property of the selection operator that entered the proof of Theorem 4.6 is the second property in Lemma 4.8, namely, $\Psi_{\mathrm{sel}}(\omega \boxtimes \nu) = \Psi_{\mathrm{sel}}(\omega) \boxtimes \nu$ for all $\nu \in \mathcal{A}(X_S)$ with $\nu(X_S) = 1$. Therefore, the result remains true if $\Psi_{\mathrm{sel}}$ is replaced by a more general operator with this property. In particular, Theorem 4.6 remains true when frequency-dependent selection and/or mutation at the selected site is included. ♦

Remark 4.12. Applying Theorem 4.3 to $A = \{i\}$ shows that the marginal type frequency at the selected site is unaffected by recombination. More generally, consider the set

$$L^{(k)} := \{i_0 = i, i_1, \ldots, i_k\}$$

and note that $L^{(k)} \setminus \{i\}$ is exactly the set of recombination sites that are considered up to and including the $k$-th iteration. Obviously, marginalisation consistency holds for $L^{(k)}$ for all $0 \leq k \leq n-1$. Since $\varrho^{L^{(k)}}_j = \varrho_j$ for $j \in L^{(k)} \setminus \{i\}$, Remark 4.6 and Eq. (4.18) together with Definition 4.4 give
$$\pi_{L^{(k)}}.\dot{\omega}_t = \pi_{L^{(k)}}.\Psi_{\mathrm{sel}}(\omega_t) + \pi_{L^{(k)}}.\!\!\sum_{j \in L^{(k)} \setminus \{i\}}\!\! \varrho_j\, (R_j \omega_t - \omega_t) = \pi_{L^{(k)}}.\Psi^{(k)}(\omega_t) = \pi_{L^{(k)}}.\dot{\omega}^{(k)}_t,$$
and so $\pi_{L^{(k)}}.\omega^{(k)}_t = \pi_{L^{(k)}}.\omega_t$. This implies that if one is only interested in the marginal with respect to $L^{(k)}$, then one may stop the iteration after the $k$-th step. ♦

An important application of Theorem 4.6 is the following recursion for the first-order correlation functions $\omega^{(k)}_t - R^{(k)} \omega^{(k)}_t$ between the type frequencies at the sites contained in $C^{(k)}$ and

and soπL(k)t(k)=πL(k)t. This implies that if one is only interested in the marginal with respect toL(k), then one may stop the iteration after the k-th step. ♦ An important application of Theorem 4.6 is the following recursion for the first-order correla-tion funccorrela-tionsωt(k)R(k)ω(k)t between the type frequencies at the sites contained inC(k) and