3.2 Examples of Models Meeting the Assumptions

In this section we present examples for which Assumptions 3.6 hold.

First, we note that the reproduction mechanism defines our time rescaling, but that the migration and coalescence mechanisms can otherwise be considered separately.

As already noted in Example 1.6, one way of modeling migration of individuals in the population model is to consider independent random walks on $G$. For the backwards-in-time spatial coalescent process we can simply reverse those random walks to obtain new random walks which describe the migration of individuals backwards in time (a special case of Remark 1.9).

Thus, for $N \in \mathbb{N}$ we consider a family of i.i.d. random walks $(X^{i,N})_{i \in [N]}$ on $G$ with transition matrix $P_N = (P_N(x,y))_{x,y \in G}$. We assume that the random walks are irreducible and positive recurrent with stationary distribution $\pi^N = (\pi^N_x)_{x \in G}$. Furthermore, we assume that the random walks are in equilibrium.

In this setting, $M^N$ and $N^N_0$ are given in the following way:
\[ N^N_{0;x} := \sum_{i \in [N]} 1\{X^{i,N}_0 = x\} \]
and, for $x, y \in G$ with $x \neq y$ and $k \in \mathbb{N}_0$,
\[ M^N_{k;x,y} := \sum_{i \in [N]} 1\{X^{i,N}_k = x,\ X^{i,N}_{k+1} = y\}. \]
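The counting variables above can be illustrated with a short simulation. The following is a hypothetical sketch: the two-site graph, the stochastic matrix (standing in for $P_N$), and the walker count $N$ are illustrative choices, not part of the model specification.

```python
import random

# N independent walkers on G = {0, 1}, started from the stationary
# distribution, i.e. "in equilibrium". Matrix, N, and k_max are
# hypothetical choices for illustration only.
random.seed(1)

G = [0, 1]
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # stochastic matrix
pi = {0: 2 / 3, 1: 1 / 3}                       # stationary: pi P = pi

N, k_max = 1000, 5
walks = []                                      # walks[i] = path of walker i
for _ in range(N):
    x = random.choices(G, weights=[pi[g] for g in G])[0]
    path = [x]
    for _ in range(k_max):
        path.append(random.choices(G, weights=[P[path[-1]][g] for g in G])[0])
    walks.append(path)

# N^N_{0;x}: number of walkers sitting at site x at time 0
N0 = {x: sum(1 for w in walks if w[0] == x) for x in G}

# M^N_{k;x,y}: number of walkers at x at time k that are at y at time k + 1
def M(k, x, y):
    return sum(1 for w in walks if w[k] == x and w[k + 1] == y)

print(N0, M(0, 0, 1))  # N0[x] / N is close to pi[x] for large N
```

By stationarity, $N^N_{0;x}/N$ concentrates around $\pi_x$, which is the content of the convergence $R^N_{t;x} \to \pi_x$ below.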

Proposition 3.8 (Migration Via Independent Random Walkers).

Set $c_N := N^{-\alpha}$ and $k^N_t := \lfloor t/c_N \rfloor$. Let
\[ \frac{P_N - I}{c_N} \to Q, \qquad \pi^N \to \pi \tag{3.8} \]
in the sense of entrywise convergence of real numbers, where $Q = (Q(x,y))_{x,y \in G}$ is a stable (i.e., finite diagonal entries), conservative (i.e., row sums are zero) generator matrix (i.e., nonnegative off-diagonal entries) with $\sum_{x \in G} |Q(x,x)| \pi_x < \infty$, and where $\pi$ is a distribution on $G$ (represented as a row vector) with $\pi Q = 0$.

Then we have for all $t \in \mathbb{R}_+$ and $x, y \in G$:
\[ R^N_{t;x} \to \pi_x \quad \text{and} \quad F^N_{k^N_t;x,y} \to t\, \pi_x Q(x,y) \tag{3.9} \]
almost surely as $N \to \infty$. Define $R_{t;x} := \pi_x$ and $F_{t;x,y} := t\, \pi_x Q(x,y)$ for all $t \in \mathbb{R}_+$ and $x \neq y \in G$. Then 1. and 3. to 7. of Assumptions 3.6 are fulfilled. We even achieve almost sure pointwise convergence instead of just convergence of the finite-dimensional distributions of the considered processes.
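The scaling in the proposition can be checked numerically. The following sketch uses illustrative choices for $Q$, $\pi$, $\alpha$, and $t$; it takes $P_N = I + c_N Q$, so that $(P_N - I)/c_N = Q$ holds exactly, and shows $k^N_t\, \pi_x P_N(x,y) \to t\, \pi_x Q(x,y)$.

```python
import math

# Numeric illustration of the scaling in Proposition 3.8. The generator Q,
# its stationary distribution pi, alpha, and t are illustrative choices.
alpha, t = 1.0, 2.0
Q = [[-1.0, 1.0], [0.5, -0.5]]   # conservative, stable generator matrix
pi = [1 / 3, 2 / 3]              # row vector with pi Q = 0

for N in (10, 100, 1000, 10000):
    c_N = N ** (-alpha)
    k_t = math.floor(t * N ** alpha)   # k^N_t = floor(t / c_N)
    P_xy = c_N * Q[0][1]               # off-diagonal entry of P_N = I + c_N Q
    approx = k_t * pi[0] * P_xy        # ~ t * pi_0 * Q(0, 1)
    print(N, approx)

print(t * pi[0] * Q[0][1])             # limiting value t * pi_0 * Q(0, 1)
```

Note that $k^N_t$ grows like $N^\alpha$ while $P_N(x,y)$ shrinks like $N^{-\alpha}$, so the product stabilizes; this is the balance the proof below exploits.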

Proof. The random variables $1\{X^{i,N}_{k^N_t} = x\}$ are mutually independent $\mathrm{Bin}(1, \pi^N_x)$-distributed for $i \in [N]$, since the random walks are in equilibrium. We consider the array $(E_{N,i})_{N \in \mathbb{N},\, i \in [N]}$ given by
\[ E_{N,i} := 1\{X^{i,N}_{k^N_t} = x\} - \pi^N_x. \]
Then the array is row-wise independent with $\mathbb{E}(E_{N,i}) = 0$ and $|E_{N,i}| \leq 1$.

Thus, by a strong law of large numbers for such arrays (see Theorem 4 in [28]), we have
\[ \frac{1}{N} \sum_{i \in [N]} E_{N,i} \to 0 \]
almost surely as $N \to \infty$, and together with $\pi^N_x \to \pi_x$ from (3.8) this yields $R^N_{t;x} \to \pi_x$ almost surely.

Next we show the convergence of $F^N_{k^N_t;x,y}$ for $x, y \in G$ and $t \in \mathbb{R}_+$. We have
\[ F^N_{k^N_t;x,y} = \frac{1}{N} \sum_{i \in [N]} \sum_{j=1}^{k^N_t} 1\{X^{i,N}_{j-1} = x,\ X^{i,N}_j = y\}. \]

It is noteworthy that, heuristically, due to the ergodic theorem for Markov chains, the inner sum will behave asymptotically like $k^N_t\, \pi^N_x P_N(x,y)$, which would then converge to $t\, \pi_x Q(x,y)$ for $N \to \infty$. The issue with this approach is that we cannot apply the ergodic theorem, since the transition matrix $P_N$ and the stationary distribution $\pi^N$ depend on $N$. Even worse, for large $N$ the Markov chain $X^{i,N}$ moves more slowly, and therefore we cannot expect that the ergodic theorem could be applied uniformly in $N$. Alternatively, we could try to apply the strong law for row-wise independent arrays again. Consider the array $(Z_{N,i})_{N \in \mathbb{N},\, i \in [N]}$ given by
\[ Z_{N,i} := \sum_{j=1}^{k^N_t} \left( 1\{X^{i,N}_{j-1} = x,\ X^{i,N}_j = y\} - \pi^N_x P_N(x,y) \right). \]

Due to the independence of the $X^{i,N}$, the rows of $(Z_{N,i})$ are again independent, and since the chains are in equilibrium,
\[ \mathbb{P}\big( X^{i,N}_{j-1} = x,\ X^{i,N}_j = y \big) = \pi^N_x P_N(x,y) \tag{3.10} \]
and thus $\mathbb{E}(Z_{N,i}) = 0$ for all $N \in \mathbb{N}$, $i \in [N]$. But we can no longer guarantee the uniform stochastic boundedness of the $Z_{N,i}$, which was necessary to apply Theorem 4 in [28].

Thus, we have to estimate the probabilities explicitly and show convergence from scratch. To ease notation we define for $j \in \{1, \ldots, k^N_t\}$:
\[ A^{i,N}_{j;x,y} := \big\{ X^{i,N}_{j-1} = x,\ X^{i,N}_j = y \big\}. \]

Note that if $j_1, \ldots, j_m \in \{1, \ldots, k^N_t\}$ are mutually different, the event $\bigcap_{l=1}^m A^{i,N}_{j_l;x,y}$ entails that the Markov chain $X^{i,N}$ jumps from $x$ to $y$ on at least $m$ different occasions, and thus
\[ \mathbb{P}\Big( \bigcap_{l=1}^m A^{i,N}_{j_l;x,y} \Big) \leq P_N(x,y)^m. \]

Moreover, for $j_1, \ldots, j_m \in \{1, \ldots, k^N_t\}$ consider
\[ Y^{i,N}_{j_1,\ldots,j_m;x,y} := \prod_{l=1}^m \left( 1\{A^{i,N}_{j_l;x,y}\} - \pi^N_x P_N(x,y) \right). \]
If we expand this product, we get $2^m$ summands of the form
\[ \pm \big( \pi^N_x P_N(x,y) \big)^{m-|I|} \prod_{l \in I} 1\{A^{i,N}_{j_l;x,y}\}, \qquad I \subseteq \{1, \ldots, m\}. \]
Assume first that the $j_l$ are mutually different. Taking expectations and applying the above estimate yields that each summand is bounded in absolute value by $P_N(x,y)^{m-|I|} P_N(x,y)^{|I|} = P_N(x,y)^m$. Since there are $2^m$ summands, we get
\[ \mathbb{E}\big( |Y^{i,N}_{j_1,\ldots,j_m;x,y}| \big) \leq 2^m P_N(x,y)^m. \]

In the case that the $j_l$ are not mutually different, we assume $j_1 = j_m$ without loss of generality. Then, since
\[ \big( 1\{A^{i,N}_{j_1;x,y}\} - \pi^N_x P_N(x,y) \big)^2 \leq \big| 1\{A^{i,N}_{j_1;x,y}\} - \pi^N_x P_N(x,y) \big| \]
(note that the absolute value of the expression inside the square is a random variable taking values in $[0,1]$), we have
\[ |Y^{i,N}_{j_1,\ldots,j_m;x,y}| \leq |Y^{i,N}_{j_1,\ldots,j_{m-1};x,y}| \]
and thus $\mathbb{E}\big( |Y^{i,N}_{j_1,\ldots,j_m;x,y}| \big) \leq 2^{m-1} P_N(x,y)^{m-1}$. Applying this consecutively until only mutually different indices remain, we get
\[ \mathbb{E}\big( |Y^{i,N}_{j_1,\ldots,j_m;x,y}| \big) \leq 2^a P_N(x,y)^a, \]
where $a \in [m]$ is the number of mutually different $j_l$.
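The moment bound above can be sanity-checked by simulation. The following is an illustrative sketch, not part of the proof: a small two-state chain in equilibrium (the matrix, indices $j_l$, and run count are hypothetical choices), for which a Monte Carlo estimate of $\mathbb{E}|Y_{j_1,\ldots,j_m}|$ is compared with the bound $2^a P(x,y)^a$.

```python
import random

# Monte Carlo check of E|Y_{j_1,...,j_m}| <= 2^a * P(x,y)^a, where a is the
# number of distinct j_l. Chain, indices, and run count are illustrative.
random.seed(2)

P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
pi = {0: 2 / 3, 1: 1 / 3}
x, y = 0, 1
p = pi[x] * P[x][y]          # P(X_{j-1} = x, X_j = y) in equilibrium (3.10)

def path(k):
    out = [random.choices([0, 1], weights=[pi[0], pi[1]])[0]]
    for _ in range(k):
        out.append(random.choices([0, 1], weights=[P[out[-1]][0], P[out[-1]][1]])[0])
    return out

js = (1, 3, 3)               # m = 3 indices, a = 2 distinct values
a = len(set(js))
runs = 20000
acc = 0.0
for _ in range(runs):
    w = path(max(js))
    prod = 1.0
    for j in js:
        prod *= (1.0 if (w[j - 1] == x and w[j] == y) else 0.0) - p
    acc += abs(prod)

estimate = acc / runs
bound = 2 ** a * P[x][y] ** a
print(estimate, bound)       # the estimate stays below the bound
```

The bound is far from tight here; it only needs to control the growth of the moments in $N$, which is what the next step uses.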

We will now estimate the $L^4$ distance between $F^N_{k^N_t;x,y}$ and $k^N_t\, \pi^N_x P_N(x,y)$, using that the $Z_{N,i}$ are i.i.d. in $i \in [N]$ and have mean zero. Expanding the fourth moment yields
\[ \mathbb{E}\Big( \big( F^N_{k^N_t;x,y} - k^N_t\, \pi^N_x P_N(x,y) \big)^4 \Big) = \mathbb{E}\Big( \Big( \frac{1}{N} \sum_{i \in [N]} Z_{N,i} \Big)^4 \Big) = \frac{1}{N^4} \Big( N\, \mathbb{E}\big( Z_{N,1}^4 \big) + 3N(N-1)\, \mathbb{E}\big( Z_{N,1}^2 \big)^2 \Big). \]

Our goal is to show that the expectations above are bounded for $N \to \infty$, which then shows that the fourth moment goes to $0$ for $N \to \infty$ with order $1/N^2$.

We have for the first expectation, decomposing according to the number $a$ of mutually different indices in $[k^N_t]$ (we use multinomial coefficients to count the ways the four indices can coincide),
\[ \mathbb{E}\big( Z_{N,1}^4 \big) \leq \sum_{a=1}^4 C_a \big( k^N_t \big)^a\, 2^a P_N(x,y)^a = \sum_{a=1}^4 C_a \big( 2\, k^N_t\, P_N(x,y) \big)^a \]
with combinatorial constants $C_a$ that do not depend on $N$. The right-hand side of this inequality is bounded in $N \in \mathbb{N}$ due to $c_N \to 0$ and (3.8), since $k^N_t\, P_N(x,y) \to t\, Q(x,y)$.

Next, we have to estimate the second moment. Similarly to the fourth moment (again the sums are taken over mutually different indices), we get
\[ \mathbb{E}\big( Z_{N,1}^2 \big) \leq \sum_{a=1}^2 C'_a \big( 2\, k^N_t\, P_N(x,y) \big)^a \]
with combinatorial constants $C'_a$ that do not depend on $N$. Again, the right-hand side is bounded in $N \in \mathbb{N}$. Thus there is a constant $D \geq 0$ that does not depend on $N$ such that
\[ \mathbb{E}\Big( \big( F^N_{k^N_t;x,y} - k^N_t\, \pi^N_x P_N(x,y) \big)^4 \Big) \leq \frac{D}{N^2}. \]
The Markov inequality yields for $\varepsilon > 0$:
\[ \mathbb{P}\big( |F^N_{k^N_t;x,y} - k^N_t\, \pi^N_x P_N(x,y)| > \varepsilon \big) \leq \frac{D}{N^2 \varepsilon^4}. \]

Since these probabilities are summable in $N$, the Borel–Cantelli lemma yields for $N \to \infty$:
\[ |F^N_{k^N_t;x,y} - k^N_t\, \pi^N_x P_N(x,y)| \to 0 \quad \text{almost surely.} \]
Since $k^N_t\, \pi^N_x P_N(x,y) \to t\, \pi_x Q(x,y)$ by (3.8), this proves the almost sure convergence $F^N_{k^N_t;x,y} \to t\, \pi_x Q(x,y)$ claimed in (3.9).

We now show that the Assumptions 3.6 are fulfilled as claimed. 1. of Assumptions 3.6 holds trivially. The functions $t \mapsto F_{t;x,y} = t\, \pi_x Q(x,y)$ are linear in $t$ and thus absolutely continuous with derivative $F'_{t;x,y} = \pi_x Q(x,y)$. Together with (3.9) we get 3. of Assumptions 3.6 with almost sure pointwise convergence of processes. Note that we have for all $t \geq 0$:
\[ \sum_{\substack{x,y \in G \\ x \neq y}} F'_{t;x,y} = \sum_{x \in G} \pi_x \sum_{y \neq x} Q(x,y) = \sum_{x \in G} \pi_x\, |Q(x,x)| < \infty, \]
where we used that $Q$ is conservative.


Thus 4. of Assumptions 3.6 is fulfilled. Since $R_{0;x} = \pi_x$ is a distribution on $G$, we have $\sum_{x \in G} R_{0;x} = 1$ and thus 5. of Assumptions 3.6 is fulfilled. Let $x \in G$ and $t \geq 0$. Next, we show the almost sure convergence of the sum
\[ \sum_{y \in G \setminus \{x\}} F^N_{t;x,y}. \]

We can argue completely analogously to the proof of the almost sure convergence $F^N_{t;x,y} \to t\, \pi_x Q(x,y)$, by simply replacing $P_N(x,y)$ with $1 - P_N(x,x)$ and replacing "hitting $y$" with "avoiding $x$" in all events, to get $D \geq 0$ such that for all $\varepsilon > 0$:
\[ \mathbb{P}\Big( \Big| \sum_{y \in G \setminus \{x\}} F^N_{t;x,y} - k^N_t\, \pi^N_x \big( 1 - P_N(x,x) \big) \Big| > \varepsilon \Big) \leq \frac{D}{N^2 \varepsilon^4}. \]
And again, by Borel–Cantelli and due to
\[ k^N_t\, \pi^N_x \big( 1 - P_N(x,x) \big) \to -t\, \pi_x Q(x,x), \]
we obtain $\sum_{y \in G \setminus \{x\}} F^N_{t;x,y} \to -t\, \pi_x Q(x,x)$ almost surely. Moreover, up to time $k^N_t$ each walker enters $x$ exactly as often as it leaves $x$, up to the boundary terms at times $0$ and $k^N_t$, so that
\[ \sum_{y \in G \setminus \{x\}} F^N_{t;y,x} - \sum_{y \in G \setminus \{x\}} F^N_{t;x,y} = R^N_{t;x} - R^N_{0;x}, \]
and the almost sure convergence of $R^N_{t;x}$ now implies for $N \to \infty$:
\[ \sum_{y \in G \setminus \{x\}} F^N_{t;y,x} - \sum_{y \in G \setminus \{x\}} F^N_{t;x,y} \to \pi_x - \pi_x = 0, \]

and thus we have almost surely for $N \to \infty$:
\[ \sum_{y \in G \setminus \{x\}} F^N_{t;x,y} \to -t\, \pi_x Q(x,x) = \sum_{y \in G \setminus \{x\}} F_{t;x,y} \]
and
\[ \sum_{y \in G \setminus \{x\}} F^N_{t;y,x} \to -t\, \pi_x Q(x,x) = \sum_{y \in G \setminus \{x\}} F_{t;y,x}, \]
where the last equality uses $\pi Q = 0$.

Thus 6. of Assumptions 3.6 holds in terms of almost sure pointwise convergence. Due to $R_{t;x} = \pi_x > 0$ for all $x \in G$ and $t \in \mathbb{R}_+$, 7. of Assumptions 3.6 is also fulfilled. $\square$

One way to ensure that the requirements of Proposition 3.8 are met, especially if we want to obtain a specific random walk in the limit, is to use lazy walkers.

Example 3.9 (Independent Lazy Walkers). Let $Q = (Q(x,y))_{x,y \in G}$ be a conservative, stable generator matrix. More precisely, $Q$ is a matrix with nonnegative off-diagonal entries (interpreted as rates), row sums equal to $0$ (conservative) and finite, nonpositive diagonal entries (stable). Let $\pi$ be a distribution on $G$ (represented as a row vector) with $\pi Q = 0$ and $\sup_{x \in G} |Q(x,x)| < \infty$. Let $\alpha > 0$. For large $N \in \mathbb{N}$ we can define a stochastic matrix (a matrix with nonnegative entries and row sums equal to $1$) $P_N = (P_N(x,y))_{x,y \in G}$ via
\[ P_N(x,y) = \frac{Q(x,y)}{N^\alpha} \leq 1 \quad \text{for } x \neq y \]
and
\[ P_N(x,x) = 1 - \sum_{y \neq x} P_N(x,y) = 1 + \frac{Q(x,x)}{N^\alpha} \geq 0. \]

The stochastic matrix $P_N$ describes a discrete-time Markov chain and thus a random walk on $G$. We call this random walk lazy, since the probability $P_N(x,x)$ of remaining at a given state $x$ converges to $1$ as $N \to \infty$. In words: for large $N$ the random walk stays in its current state for long periods of time. Furthermore, the random walks for different $N$ essentially differ only in this holding probability, in the sense that the conditional probability of moving from $x$ to $y$, given that the walker does not stay in $x$, is the same for all $N$. This Markov chain has equilibrium distribution $\pi$, since we have
\[ \pi P_N = \pi \Big( I + \frac{Q}{N^\alpha} \Big) = \pi + \frac{\pi Q}{N^\alpha} = \pi. \]
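The lazy-walker construction can be sketched directly for a finite state space. In the following, the particular $Q$, $\pi$, and $\alpha$ are illustrative choices; the code builds $P_N$, checks that it is stochastic for large $N$, and checks that $\pi$ is stationary for every such $N$.

```python
# Sketch of the lazy-walker construction of Example 3.9 on a two-site
# state space. Q, pi, and alpha are illustrative choices.
alpha = 1.0
Q = [[-2.0, 2.0], [1.0, -1.0]]   # conservative, stable generator matrix
pi = [1 / 3, 2 / 3]              # row vector with pi Q = 0

def lazy_matrix(N):
    # P_N(x, y) = Q(x, y)/N^alpha off-diagonal, P_N(x, x) = 1 + Q(x, x)/N^alpha
    n = len(Q)
    return [[(1.0 if x == y else 0.0) + Q[x][y] / N ** alpha
             for y in range(n)] for x in range(n)]

P = lazy_matrix(100)

# P_N is stochastic for large N: nonnegative entries, rows sum to 1.
assert all(p >= 0.0 for row in P for p in row)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

# pi is stationary for every such N: pi P_N = pi + (pi Q)/N^alpha = pi.
piP = [sum(pi[x] * P[x][y] for x in range(len(Q))) for y in range(len(Q))]
assert all(abs(piP[y] - pi[y]) < 1e-12 for y in range(len(Q)))

print(P[0][0])  # holding probability 1 + Q(0, 0)/N^alpha, close to 1
```

The holding probability $P_N(x,x) \to 1$ as $N$ grows, while the jump chain (the conditional law of the next state given a jump) does not depend on $N$, exactly as described above.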