
10.5 Bounds for the domain of non-vanishing eigenvalue density

10.5.1 Bounds for large t

We first fix some $N \ge 1$, take some fixed $\epsilon^2 \ll 1$, and study what happens as $n \to \infty$.

In terms of our true interest, this means that we are trying to understand the asymptotic behavior of $\gamma(t)$ for $t = n\epsilon^2$ going to infinity. If we take $n \to \infty$ at fixed $\epsilon^2 > 0$, the theorems of Fürstenberg and Oseledec apply, which provide a generalization of the familiar law of large numbers to the case of independent and identically distributed random matrices (see Refs. [59, 63] and original references therein). For our case, this implies that for a generic matrix norm $\|\cdot\|$,

$$\lambda_1 = \lim_{n\to\infty} \frac{1}{n}\,\log\|W_n\| \tag{10.27}$$

exists with probability 1 and is a non-random quantity ($\lambda_1$ is called the maximum characteristic Lyapunov exponent). Furthermore, the matrix

$$\lim_{n\to\infty} \left(W_n W_n^\dagger\right)^{\frac{1}{2n}} \tag{10.28}$$

exists almost surely, too, and has non-random eigenvalues $e^{\lambda_i}$ ($\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N$ are referred to as the characteristic Lyapunov exponents).
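As an aside, these statements are easy to illustrate numerically. The following Python sketch (our illustration, not part of the original analysis; the parameter values are arbitrary) estimates the Lyapunov spectrum of a product of matrices $M = e^{\epsilon C}$, with $C$ drawn as in Eq. (10.38) below, using the standard QR re-orthogonalization technique:

```python
import numpy as np
from scipy.linalg import expm

def lyapunov_spectrum(n_steps, N, eps, rng):
    """Estimate the characteristic Lyapunov exponents of W_n = M_1 ... M_n
    via QR re-orthogonalization (the diagonal of R tracks the growth rates)."""
    Q = np.eye(N, dtype=complex)
    log_r = np.zeros(N)
    for _ in range(n_steps):
        # C has density ~ exp(-N Tr C^dagger C), i.e. <|C_ij|^2> = 1/N
        C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        Q, R = np.linalg.qr(expm(eps * C) @ Q)
        log_r += np.log(np.abs(np.diag(R)))
    return np.sort(log_r / n_steps)[::-1]

rng = np.random.default_rng(0)
print(lyapunov_spectrum(n_steps=20000, N=4, eps=0.1, rng=rng))
# the first entry approximates the maximal exponent lambda_1 of Eq. (10.27)
```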

The following discussion is based on the textbooks [59, 63] and Refs. [64, 65]. We start by defining the norm of a vector $v \in \mathbb{C}^N$ by

$$\|v\| = \sqrt{\sum_{a=1}^{N} |v_a|^2} \tag{10.29}$$

and the matrix norm by$^{17}$

$$\|W_n\| = \sup_{\|v\|=1} \|v\,W_n\|, \tag{10.30}$$

identifying it with the square root of the largest eigenvalue of $W_n W_n^\dagger$. For a fixed $W_n$, we have in general

$$\|W_n\| \ge e^{\gamma_w} \tag{10.31}$$

$^{17}$In Dirac notation, $vW_n \to \langle v|W_n$ and $\|vW_n\|^2 \to \langle v|W_n W_n^\dagger|v\rangle$.


with $e^{\gamma_w}$ being the spectral radius of $W_n$, i.e., $|z| \le e^{\gamma_w}$ for all $z \in \mathrm{spectrum}(W_n)$ (by definition, we have $e^{\gamma_w} \le e^{\gamma(t)}$ with probability 1, cf. Eq. (10.26)). This is a consequence of the submultiplicativity of the matrix norm [66],

$$\|UW\| = \sup \frac{\|vUW\|}{\|v\|} = \sup \frac{\|vUW\|}{\|vU\|}\,\frac{\|vU\|}{\|v\|} \le \sup_u \frac{\|uW\|}{\|u\|}\; \sup_v \frac{\|vU\|}{\|v\|} = \|U\|\,\|W\|, \tag{10.32}$$

where the supremum is taken only over those $v \neq 0$ for which $vU \neq 0$. Let now $v_\gamma$ be an eigenvector of $W_n$ with eigenvalue $z$, $z$ being the eigenvalue of $W_n$ that has maximum absolute value and defines $\gamma_w$, i.e., $W_n v_\gamma = z v_\gamma$ with $|z| = e^{\gamma_w}$. Denoting by $V_\gamma$ the matrix with all the columns given by the eigenvector $v_\gamma$, we have $W_n V_\gamma = z V_\gamma$ and consequently

$$|z|\,\|V_\gamma\| = \|W_n V_\gamma\| \le \|W_n\|\,\|V_\gamma\|, \tag{10.33}$$

which proves Eq. (10.31).

The above inequality may be strict because $z$ could in general be associated to an eigenvector that is very different from the maximum eigenvector of $W_n W_n^\dagger$. However, we suppose that our case is sufficiently generic for a conjecture of Ref. [67] (where Lyapunov exponents are analyzed for a broad class of dynamical models) to apply, which would allow us to replace the inequality sign above by an asymptotic equality in the infinite-$n$ limit, resulting in

$$\|W_n\| = e^{\gamma(t)} \quad \text{for} \quad t \to \infty. \tag{10.34}$$

This assumption is non-trivial: For example, in the Ginibre ensemble [68], where $W_n$ is not given by a product, but is just a complex matrix $C$ distributed according to the Gaussian distribution $\exp[-N\,\mathrm{Tr}(C^\dagger C)]$, and no non-commutative matrix products are involved, the left-hand side of Eq. (10.31) equals twice the right-hand side (in the limit $N \to \infty$). On the other hand, if the Gaussian distribution of $C$ is replaced by a distribution where each element is real, non-negative, and uniformly drawn from the segment $[0,1]$ (in which case theorems due to Perron and Frobenius apply, cf. Ref. [66]), Eq. (10.31) does become an equality for large $N$. A single matrix with Gaussian-distributed elements is of course very different from $W_n$ for large $t$; however, it is intuitively close to the situation for small $t$, the case addressed in the next subsection. There, our estimate for $\gamma(t)$ will be more direct, without involving the norm $\|W_n\|$. In any case, that Eq. (10.31) becomes an equality as $t \to \infty$ will be confirmed by both the analytical and numerical results presented below.
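Both reference cases are easy to check numerically. The following sketch (our illustration; matrix size and seed are arbitrary) compares the operator norm to the spectral radius for a Ginibre matrix and for a matrix with i.i.d. entries uniform on $[0,1]$:

```python
import numpy as np

def norm_to_spectral_radius(A):
    """Ratio of operator norm (largest singular value) to spectral radius."""
    return np.linalg.norm(A, 2) / np.max(np.abs(np.linalg.eigvals(A)))

rng = np.random.default_rng(1)
N = 800

# Ginibre: density ~ exp(-N Tr C^dagger C); the ratio approaches 2 as N grows
C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
print("Ginibre:", norm_to_spectral_radius(C))

# non-negative entries, uniform on [0,1]: Perron-Frobenius; the ratio approaches 1
A = rng.uniform(0.0, 1.0, size=(N, N))
print("uniform [0,1]:", norm_to_spectral_radius(A))
```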

To study $\|W_n\|$, we need to know what happens to $vW_n$ for an arbitrary $v$ with $\|v\| \neq 0$.

We define $v_i$, $i = 1, \dots, n$, by

$$v_i = v \prod_{j=1}^{i} M_j \tag{10.35}$$

and $v_0 = v$. We are only interested in the ray specified by $v$. Let

$$S_v \equiv \frac{\|vM_1\|}{\|v\|}. \tag{10.36}$$

Because of the invariance under conjugation by U(N) elements, the distribution of $S_v$ induced by that of $M_1$ is independent of $v$ (see the numerical sketch below for an illustration).
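This invariance can be checked directly; in the following Python sketch (our addition, with arbitrary parameter values) the empirical distribution of $S_v$ is compared for the unit vector in the 1-direction and for a generic direction:

```python
import numpy as np
from scipy.linalg import expm

def S(v, M):
    """S_v = ||v M|| / ||v|| for a row vector v."""
    return np.linalg.norm(v @ M) / np.linalg.norm(v)

rng = np.random.default_rng(2)
N, eps, samples = 4, 0.1, 20000

e1 = np.zeros(N); e1[0] = 1.0                              # v along the 1-direction
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # a generic direction

s_e1, s_v = [], []
for _ in range(samples):
    C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    M = expm(eps * C)
    s_e1.append(S(e1, M)); s_v.append(S(v, M))

# the two empirical distributions should agree within Monte Carlo errors
print(np.mean(s_e1), np.mean(s_v))
print(np.var(s_e1), np.var(s_v))
```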

We now write

$$\log\|v_n\|^2 - \log\|v\|^2 = \sum_{i=1}^{n} \log\frac{\|v_i\|^2}{\|v_{i-1}\|^2}\,, \tag{10.37}$$

which is a trick used in Ref. [64]. The terms in the sum on the RHS of Eq. (10.37) are i.i.d. real numbers for any fixed values of $n$, $\epsilon^2$, $N$ by the same argument as below Eq. (10.36). Therefore, we can calculate the probability distribution of the LHS by calculating the characteristic function $F(k)$ (see Eq. (10.40) below) of one of the terms on the RHS, taking the power $n$, and taking the inverse Fourier transform of that.
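To make the logic of this step explicit (a worked special case we add here, anticipating the Gaussian form found in Eq. (10.51) below): if $F(k) = e^{-ak^2 + ibk}$ with $a > 0$, then

$$[F(k)]^n = e^{-nak^2 + inbk}, \qquad \int \frac{dk}{2\pi}\, e^{-iky}\, [F(k)]^n = \frac{1}{\sqrt{4\pi n a}}\, \exp\left[-\frac{(y - nb)^2}{4na}\right],$$

i.e., a Gaussian of mean $nb$ and variance $2na$: the mean of $y/n$ is fixed, while its fluctuations shrink like $1/\sqrt{n}$, which is precisely the law-of-large-numbers behavior invoked above.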

For convenience, we reproduce the equation describing the source of randomness (ignoring the zero trace condition):

$$M = e^{\epsilon C}, \qquad P(C) = \mathcal{N}\, e^{-N\,\mathrm{Tr}\,C^\dagger C}. \tag{10.38}$$

The random variables on the RHS of (10.37) are denoted by $x$,

$$x = \log\frac{\|vM\|^2}{\|v\|^2}\,, \tag{10.39}$$

and the characteristic function of the identical distributions is

$$F(k) = \langle e^{ikx} \rangle_{P(C)}. \tag{10.40}$$

Denoting the random variable on the LHS of Eq. (10.37) by $y$,

$$y = \log\frac{\|v_n\|^2}{\|v\|^2} = \log\frac{\|vW_n\|^2}{\|v\|^2}\,, \tag{10.41}$$

its probability density is given by

$$P(y) = \int \frac{dk}{2\pi}\, e^{-iky}\, [F(k)]^n. \tag{10.42}$$

We now expand $x$ in $\epsilon$ to order $\epsilon^2$ in the calculation of $F(k)$ and assume that the expansion in $\epsilon$ can be freely interchanged with various integrals. An expansion to order $\epsilon^2$ is assumed to be all that is needed since an alternative treatment of the ensemble, based on the Fokker-Planck equation, would also need only expansions to order $\epsilon^2$ (cf. Sec. 10.4.1 above).

Because of the U(N) invariance, we can rotate the vector $v$ to point in the 1-direction,

$$x = \log \sum_{j=1}^{N} |M_{1j}|^2. \tag{10.43}$$

Thus, the vector $v$ has dropped out completely (we will reuse the symbol $v$ below). Up to order $\epsilon^2$, we have

$$x = \log\left[1 + \epsilon\left(C_{11} + C_{11}^*\right) + \frac{\epsilon^2}{2}\,(C^2)_{11} + \frac{\epsilon^2}{2}\,(C^2)_{11}^* + \epsilon^2 \sum_{j=1}^{N} C_{1j}\, C_{1j}^*\right]. \tag{10.44}$$

. (10.44) We now introduce some extra notation,

ReC11=u, Cj1=vj, C1j =wj for j= 2, . . . , N , (10.45) where v andw are (N−1)-dimensional complex column vectors. This leads to

$$x = 2\epsilon u + \frac{\epsilon^2}{2}\left(v^T w + v^\dagger w^*\right) + \epsilon^2\, w^\dagger w + O(\epsilon^3). \tag{10.46}$$
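The cancellation leading from Eqs. (10.44) and (10.45) to Eq. (10.46) is worth spelling out (a step we make explicit here). Writing $C_{11} = u + is$ and using $(C^2)_{11} = C_{11}^2 + w^T v$,

$$\frac{1}{2}\left[(C^2)_{11} + (C^2)_{11}^*\right] = u^2 - s^2 + \frac{1}{2}\left(v^T w + v^\dagger w^*\right),$$

while the $j=1$ term of the sum in Eq. (10.44) gives $\epsilon^2 |C_{11}|^2 = \epsilon^2(u^2 + s^2)$ and the expansion $\log(1+a) = a - a^2/2 + \dots$ contributes $-2\epsilon^2 u^2$. The quadratic $C_{11}$ pieces thus add up to $\epsilon^2(u^2 - s^2 + u^2 + s^2 - 2u^2) = 0$, leaving exactly Eq. (10.46).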


To calculate $F(k)$, it is sufficient to know the distribution of $u$, $v$, $w$,

$$P(u, v, w) = \mathcal{N}' e^{-N\left(u^2 + v^\dagger v + w^\dagger w\right)}, \tag{10.47}$$

with $\mathcal{N}'$ the corresponding normalization constant. The integral giving $F(k)$ is Gaussian and can be easily done: the integral over $u$ contributes a factor $e^{-\epsilon^2 k^2/N}$, while the integral over the complex vectors $v$ and $w$ contributes a factor $e^{i\epsilon^2 k(N-1)/N}$ at order $\epsilon^2$ (the $v^T w + v^\dagger w^*$ term averages to zero at this order, and $\langle w^\dagger w \rangle = (N-1)/N$). To the level of accuracy in $\epsilon^2$ at which we are working, we can write

$$F(k) = e^{-\epsilon^2 k^2/N}\, e^{i\epsilon^2 k (N-1)/N}. \tag{10.51}$$
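Equation (10.51) can be checked directly by Monte Carlo (a sketch we add here; sample sizes are arbitrary), sampling $x$ from Eq. (10.43) with $M = e^{\epsilon C}$ and comparing $\langle e^{ikx}\rangle$ to the order-$\epsilon^2$ analytic form:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
N, eps, samples = 4, 0.1, 40000
ks = np.array([0.5, 1.0, 2.0])

# Monte Carlo estimate of F(k) = <exp(i k x)>, with x = log sum_j |M_{1j}|^2
F_mc = np.zeros(len(ks), dtype=complex)
for _ in range(samples):
    C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    M = expm(eps * C)
    x = np.log(np.sum(np.abs(M[0, :]) ** 2))
    F_mc += np.exp(1j * ks * x)
F_mc /= samples

# order-eps^2 analytic form, Eq. (10.51); agreement is up to O(eps^4) terms
F_th = np.exp(-eps**2 * ks**2 / N) * np.exp(1j * eps**2 * ks * (N - 1) / N)
print(F_mc)
print(F_th)
```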

The characteristic function of $y$ is therefore given by

$$\langle e^{iky} \rangle = [F(k)]^n = e^{-n\epsilon^2 k^2/N}\, e^{in\epsilon^2 k(N-1)/N}; \tag{10.52}$$

the inverse Fourier transform leads to the probability distribution of $\hat{y} \equiv y/n$,

$$P(\hat{y}) = n \int \frac{dk}{2\pi}\, e^{-ikn\hat{y}}\, [F(k)]^n = \sqrt{\frac{nN}{4\pi\epsilon^2}}\, \exp\left[-\frac{nN}{4\epsilon^2}\left(\hat{y} - \epsilon^2\,\frac{N-1}{N}\right)^2\right], \tag{10.53}$$

and the Fürstenberg theorems now tell us that almost surely

$$\lim_{n\to\infty} \hat{y} = \epsilon^2\left(1 - \frac{1}{N}\right). \tag{10.54}$$

So far, $\epsilon^2$ and $N$ have been kept fixed. Since a generic direction $v$ grows at the maximal rate, $\log\|W_n\| \approx y/2$, and we therefore conclude that for large enough $n$,

$$\|W_n\| \approx e^{\frac{n\epsilon^2}{2}\left(1 - \frac{1}{N}\right)}. \tag{10.57}$$
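The estimate (10.57) is straightforward to test numerically; the following sketch (our illustration, with arbitrary parameter values) accumulates $\log\|W_n\|$ with intermediate rescaling to avoid overflow:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
N, eps, n = 4, 0.1, 4000   # t = n * eps^2 = 40

log_norms = []
for _ in range(20):        # a few independent products W_n = M_1 ... M_n
    W = np.eye(N, dtype=complex)
    scale = 0.0
    for _ in range(n):
        C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        W = W @ expm(eps * C)
        s = np.linalg.norm(W, 2)
        W /= s             # rescale and track the log of the factor,
        scale += np.log(s) # so that scale = log ||M_1 ... M_k|| exactly
    log_norms.append(scale)

t = n * eps**2
print(np.mean(log_norms))          # Monte Carlo estimate of log ||W_n||
print(0.5 * t * (1.0 - 1.0 / N))   # prediction (n/2) eps^2 (1 - 1/N), Eq. (10.57)
```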

We now simply replace $n\epsilon^2$ by the large number $t$ and take $N \to \infty$, which is a relatively harmless limit. Assuming that

$$\|W_n\| = e^{\gamma(t)} \tag{10.58}$$

holds asymptotically, we conclude that

$$\gamma(t) \approx \frac{t}{2} \quad \text{for} \quad t \to \infty. \tag{10.59}$$

This asymptotic behavior will be confirmed below by the exact result for γ(t) that we obtain from the saddle-point analysis.
