
Proof of main results

4.2 Moderate, large, and superlarge deviations

In order to study the distribution of the largest eigenvalue $\lambda_{\max}$ of unitary ensembles, we recall the definition of the outer tails $\mathcal O_{N,V}$ resp. $\widetilde{\mathcal O}_{N,V}$ in global resp. local variables (see (1.12), (1.13)):
\[
\mathcal O_{N,V}(t) = \mathbb P_{N,V}\bigl(\lambda_{\max} > t\bigr), \quad t > b_V, \qquad
\widetilde{\mathcal O}_{N,V}(s) = \mathbb P_{N,V}\!\left(\lambda_{\max} > b_V + \frac{s}{\gamma_V N^{2/3}}\right), \quad s \ge 1,
\]
for a function $V$ that satisfies (GA). Having (1.15) and (1.16) in mind, one has to consider the integrals

\[
\int_t^{L_+} \!\!\cdots \int_t^{L_+} \det\bigl( (K_{N,V}(x_i,x_j))_{1 \le i,j \le k} \bigr)\, dx_1 \cdots dx_k
\]

for $k = 1, \dots, N$. The representations of $K_{N,V}(x,x)$ in Corollary 4.5 lead to the analysis of integrals with integrand
\[
\frac{e^{-N \eta_V(x)}}{(x-b_V)(x-a_V)}
\]
in Proposition 4.7 and Lemma 4.8, which is the leading order of $K_{N,V}(x,x)$ up to the factor $\frac{b_V-a_V}{8\pi}$. The distinction between the intervals $\bigl(b_V + \frac{1}{\gamma_V N^{2/3}},\, b_V+\delta\bigr)$ and $[b_V+\delta, L_+)$, where $\delta$ is chosen from a compact subset of $(0, \tfrac12\sigma_V']$ (with $\sigma_V'$ as in Lemma 3.13), has its counterpart in Theorem 4.10. We then show how to derive the main results (Theorems 1.1–1.3 and Theorem 4.11) from that theorem. As in the previous section we suppress any $V$-dependence in all proofs.

Proposition 4.7. Assume that $V$ satisfies (GA) and let $c > 0$. Then for any $t > b_V$ with $t + c \le L_+$ we have
\[
\int_{t+c}^{L_+} \frac{e^{-N \eta_V(x)}}{(x-b_V)(x-a_V)}\,dx = \frac{e^{-N \eta_V(t)}}{N(t-b_V)(t-a_V)\,\eta_V'(t)}\,\mathcal O\!\left(\frac1N\right).
\]
The error bound is uniform if $t$ is chosen from a bounded subset of $(b_V, L_+)$ with $t + c \le L_+$.

Proof. We first use the estimate
\[
\int_{t+c}^{L_+} \frac{e^{-N \eta(x)}}{(x-b)(x-a)}\,dx \le \frac{e^{-N \eta(t)}}{(t-b)(t-a)\,\eta'(t)} \int_{t+c}^{L_+} \frac{\eta'(t)}{\eta'(x)}\,\eta'(x)\,e^{-N(\eta(x)-\eta(t))}\,dx.
\]
Due to Corollary 2.17 we have $\frac{\eta'(t)}{\eta'(x)} = \mathcal O(1)$ and furthermore
\[
\int_{t+c}^{L_+} \eta'(x)\,e^{-N(\eta(x)-\eta(t))}\,dx \le \frac1N\,e^{-N(\eta(t+c)-\eta(t))}.
\]
Since $\eta(t+c) - \eta(t)$ is bounded below by a positive constant for $t$ in bounded subsets, the claim follows.
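The following numerical sketch illustrates the exponential smallness behind this bound for the GUE data of Example 4.13 below ($a_V = -2$, $b_V = 2$, $\eta_V(x) = \int_2^x\sqrt{u^2-4}\,du$, $L_+ = \infty$); the values of $N$, $t$, $c$ and the finite cutoff are illustrative choices only, not taken from the text.

\begin{verbatim}
# Numerical illustration of Proposition 4.7 for the GUE data of Example 4.13:
# a = -2, b = 2, eta(x) = int_2^x sqrt(u^2 - 4) du, L_+ = infinity (cut off at X).
# N, t, c and the cutoff X are illustrative choices.
import numpy as np
from scipy.integrate import quad

a, b = -2.0, 2.0
eta = lambda x: quad(lambda u: np.sqrt(u * u - 4.0), 2.0, x)[0]
eta_prime = lambda x: np.sqrt(x * x - 4.0)

N, t, c, X = 50, 2.5, 0.5, 20.0

# left-hand side: integral of e^{-N eta(x)} / ((x-b)(x-a)) over [t+c, X]
lhs = quad(lambda x: np.exp(-N * eta(x)) / ((x - b) * (x - a)), t + c, X, limit=200)[0]

# reference scale on the right-hand side: e^{-N eta(t)} / (N (t-b)(t-a) eta'(t))
scale = np.exp(-N * eta(t)) / (N * (t - b) * (t - a) * eta_prime(t))

# the ratio is far smaller than 1/N, reflecting the (exponentially small) O(1/N) factor
print(f"lhs/scale = {lhs / scale:.3e},   1/N = {1.0 / N:.3e}")
\end{verbatim}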

Lemma 4.8. Assume that $V$ satisfies (GA) and let $\eta_V$ be given as in (2.30).

(i) For $y \in (b_V, L_+] \cap \mathbb R$ and $t \in (b_V, y]$ we have
\[
\int_t^y \frac{e^{-N \eta_V(x)}}{(x-b_V)(x-a_V)}\,dx
= \frac{e^{-N \eta_V(t)}}{N(t-b_V)(t-a_V)\,\eta_V'(t)} \left[ \bigl(1 - e^{-N z_V(t)}\bigr) + \bigl(1 - (N z_V(t)+1)\,e^{-N z_V(t)}\bigr)\left( \mathcal O\!\left(\frac{1}{N(t-b_V)\,\eta_V'(t)}\right) + \mathcal O\!\left(\frac{1}{N \eta_V'(t)}\right) \right)\right]
\]
with $z_V(t) := \eta_V(y) - \eta_V(t)$.

The error bounds are uniform for $y$ in bounded subsets of $(b_V, L_+] \cap \mathbb R$.

(ii) Assume that $L_+ = \infty$ and let $V$ satisfy $\frac{V''(x)}{V'(x)^2} = \mathcal O(1)$ for $x \to \infty$. For $t \in (b_V, \infty)$ we have
\[
\int_t^{\infty} \frac{e^{-N \eta_V(x)}}{(x-b_V)(x-a_V)}\,dx = \frac{e^{-N \eta_V(t)}}{N(t-b_V)(t-a_V)\,\eta_V'(t)}\left(1 + \mathcal O\!\left(\frac1N\right)\right).
\]
The error bound is uniform for $t$ in subsets of $(b_V, \infty)$ that have a positive distance from $b_V$.

Proof. The procedure of the proof is similar for the cases (i) and (ii) and we treat them simultaneously as far as possible. Consider
\[
\int_t^{d_j} \frac{e^{-N \eta(x)}}{(x-b)(x-a)}\,dx \qquad\text{with}\qquad d_j = \begin{cases} y, & \text{if } j = 1,\\ \infty, & \text{if } j = 2,\end{cases}
\]
related to the cases (i) and (ii) with the corresponding requirements on $y$ and $t$. Substituting $u := \eta(x) - \eta(t)$ we obtain
\[
\int_t^{d_j} \frac{e^{-N \eta(x)}}{(x-b)(x-a)}\,dx = e^{-N \eta(t)} \int_0^{\eta(d_j)-\eta(t)} \frac{1}{(x(u)-b)(x(u)-a)\,\eta'(x(u))}\,e^{-N u}\,du \tag{4.21}
\]
with $x(u) := \eta^{-1}(u + \eta(t))$. Observe that $\eta$ is strictly monotone and hence invertible and that $\eta(d_2) - \eta(t) = \infty$ (see (2.33)). Our approach is the following: For $j = 1, 2$ we define the auxiliary functions
\[
k_j : [0, \eta(d_j)-\eta(t)] \cap \mathbb R \to \mathbb R, \qquad k_j(u) := \frac{1}{(x(u)-b)(x(u)-a)\,\eta'(x(u))}.
\]
Together with (4.21) we obtain
\[
\int_t^{d_j} \frac{e^{-N \eta(x)}}{(x-b)(x-a)}\,dx = e^{-N \eta(t)} \int_0^{\eta(d_j)-\eta(t)} k_j(u)\,e^{-N u}\,du \tag{4.22}
\]
for $j = 1, 2$. Expressing $k_j(u) = k_j(0) + k_j'(\zeta_u)\,u$ for some $\zeta_u \in [0, \eta(d_j)-\eta(t)] \cap \mathbb R$, we need an estimate on $k_j'$. Using $x'(u) = \frac{1}{\eta'(x(u))}$ and the definition of $\eta$ via $G$ (see (2.30)) we obtain two representations for the derivative:

\[
k_j'(u) = -\frac{1}{(x(u)-b)(x(u)-a)\,\eta'(x(u))^2}\left[\frac{1}{x(u)-b} + \frac{1}{x(u)-a} + \frac{\eta''(x(u))}{\eta'(x(u))}\right] \tag{4.23}
\]
\[
\phantom{k_j'(u)} = -\frac{1}{(x(u)-b)(x(u)-a)\,\eta'(x(u))^2}\left[\frac{3}{2}\left(\frac{1}{x(u)-b} + \frac{1}{x(u)-a}\right) + \frac{G'(x(u))}{G(x(u))}\right]. \tag{4.24}
\]
It will turn out that one needs (4.24) to prove (i) resp. (4.23) to show (ii).

Since $x(u) \in (t, d_j)$ for $u \in (0, \eta(d_j)-\eta(t))$, we have $\frac{1}{x(u)-b} = \mathcal O\!\left(\frac{1}{t-b}\right)$ and $\frac{1}{x(u)-a} = \mathcal O\!\left(\frac{1}{t-a}\right)$. Furthermore, due to Corollary 2.17, one obtains $\frac{1}{\eta'(x(u))} = \mathcal O\!\left(\frac{1}{\eta'(t)}\right)$.

We now restrict our attention to (i) (i.e. $j = 1$), where we only need to consider $t$ from bounded sets and the interval $(t, d_1) = (t, y)$ is contained in a fixed bounded subset of $(b_V, L_+] \cap \mathbb R$. The application of Lemma 2.15 (i) resp. Remark 2.16 (i) yields $\frac{G'(x(u))}{G(x(u))} = \mathcal O(1)$ for $x(u) \in (t, y)$ and hence (see (4.24)),
\[
k_1'(u) = \frac{1}{(t-b)(t-a)\,\eta'(t)^2}\left(\mathcal O\!\left(\frac{1}{t-b}\right) + \mathcal O(1)\right) \quad\text{for } u \in (0, \eta(y)-\eta(t)).
\]

Next we consider the fraction $\frac{\eta''}{(\eta')^2}$ for the case $j = 2$, which corresponds to claim (ii). Due to the asymptotic behavior of $\eta'$ and $\eta''$ given in (2.32) and (2.34), Remark 2.16 (i), the strict increase of $V'$, and the boundedness of $\frac{V''(x)}{V'(x)^2}$ for $x \to \infty$ in the assumption of (ii) we have
\[
\frac{\eta''(x)}{\eta'(x)^2} = \mathcal O(1) \quad\text{for } x \in (t, \infty),
\]
uniformly for $t$ in subsets of $(b, \infty)$ that have a positive distance from $b$. This implies (see (4.23))
\[
k_2'(u) = \frac{1}{(t-b)(t-a)\,\eta'(t)^2}\,\mathcal O(1) \quad\text{for } u \in (0, \infty)
\]
with the required uniformity of the error bound.

Using $k_j(0) = \frac{1}{(t-b)(t-a)\,\eta'(t)}$ for $j = 1, 2$, the Mean Value Theorem representation $k_j(u) = k_j(0) + k_j'(\zeta_u)\,u$, and the elementary integrals
\[
\int_0^{Z} e^{-N u}\,du = \frac1N\bigl(1 - e^{-N Z}\bigr), \qquad \int_0^{Z} u\,e^{-N u}\,du = \frac{1}{N^2}\bigl(1 - (N Z + 1)e^{-N Z}\bigr), \qquad Z := \eta(d_j) - \eta(t),
\]
in (4.22), we obviously obtain the desired results for both cases.
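As a purely numerical plausibility check of the Laplace-type asymptotics in Lemma 4.8 (ii), one may again use the GUE data of Example 4.13 below ($a = -2$, $b = 2$, $\eta'(x) = \sqrt{x^2-4}$); the values of $N$, $t$ and the truncation of the infinite upper limit are arbitrary illustrations.

\begin{verbatim}
# Numerical check of Lemma 4.8 (ii) for GUE data: a = -2, b = 2,
# eta(x) = int_2^x sqrt(u^2 - 4) du, eta'(x) = sqrt(x^2 - 4).
# N, t and the truncation of the infinite upper limit are illustrative only.
import numpy as np
from scipy.integrate import quad

a, b, t = -2.0, 2.0, 2.5
eta = lambda x: quad(lambda u: np.sqrt(u * u - 4.0), 2.0, x)[0]
eta_prime = lambda x: np.sqrt(x * x - 4.0)

for N in (20, 80, 320):
    # left-hand side with e^{-N eta(t)} divided out (the tail beyond x = 4 is negligible)
    lhs = quad(lambda x: np.exp(-N * (eta(x) - eta(t))) / ((x - b) * (x - a)),
               t, 4.0, limit=200)[0]
    # leading term of the right-hand side, also without the factor e^{-N eta(t)}
    rhs = 1.0 / (N * (t - b) * (t - a) * eta_prime(t))
    print(f"N = {N:4d}:  lhs/rhs = {lhs / rhs:.4f}")   # approaches 1 as 1 + O(1/N)
\end{verbatim}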

Recall the representation of the outer tail $\mathcal O_{N,V}$ in terms of the kernel $K_{N,V}$. The next crucial observation is that the first summand determines the leading order of the outer tail. Proposition 4.9 provides an estimate in this direction.

Proposition 4.9. Assume that $V$ satisfies (GA) and let $\mathcal O_{N,V}$ and $K_{N,V}$ be given as in (1.12) and (1.18). Then the estimate (4.26) holds.

Proof. For fixed $x_1, \dots, x_k$ the matrix $(K_{N,V}(x_i,x_j))_{1 \le i,j \le k}$ is symmetric (see (1.18)) and positive semidefinite, since for all $w \in \mathbb R^k$ we obtain $\sum_{i,j=1}^{k} w_i w_j K_{N,V}(x_i,x_j) \ge 0$. The determinant appearing in (4.26) can then be estimated by using Hadamard's inequality, $\det A \le \prod_i A_{ii}$ for positive semidefinite $A$. The claim is now obvious by Fubini's Theorem.
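Hadamard's inequality used above states that $\det A \le \prod_i A_{ii}$ for every positive semidefinite matrix $A$; the following small sketch checks it numerically on random Gram matrices, an illustrative stand-in for the kernel matrices $(K_{N,V}(x_i,x_j))_{i,j}$.

\begin{verbatim}
# Hadamard's inequality for positive semidefinite matrices: det(A) <= prod(diag(A)).
# Random Gram matrices A = B B^T serve as generic PSD test cases (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
for k in (2, 3, 5, 8):
    B = rng.standard_normal((k, k + 2))
    A = B @ B.T                      # symmetric positive semidefinite
    det_A = np.linalg.det(A)
    diag_prod = np.prod(np.diag(A))
    assert det_A <= diag_prod + 1e-9 * diag_prod
    print(f"k = {k}:  det(A) = {det_A:.3e}  <=  prod(diag) = {diag_prod:.3e}")
\end{verbatim}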

We are now able to present the first main theorem of this section, namely the leading order behavior of the outer tail $\mathcal O_{N,V}$ on all of $\bigl(b_V + \frac{1}{\gamma_V N^{2/3}}, L_+\bigr] \cap \mathbb R$.

Theorem 4.10. Assume that $V$ satisfies (GA) and let $\mathcal O_{N,V}$, $\eta_V$, $\gamma_V$, and $\sigma_V'$ be given as in (1.12), (2.30), Lemma 3.10, and Lemma 3.13. Choose $\delta$ from an arbitrary but fixed compact subset of $(0, \tfrac12\sigma_V']$ and set
\[
\delta' := \tfrac12\,\delta. \tag{4.27}
\]

Then, the following holds:

(i) For $t \in \bigl(b_V + \frac{1}{\gamma_V N^{2/3}},\, b_V + \delta'\bigr)$ we have
\[
\mathcal O_{N,V}(t) = \frac{b_V - a_V}{8\pi}\cdot\frac{e^{-N \eta_V(t)}}{N(t-b_V)(t-a_V)\,\eta_V'(t)}\left(1 + \mathcal O\!\left(\frac{1}{N(t-b_V)^{3/2}}\right)\right).
\]
The error bound is uniform for all $t \in \bigl(b_V + \frac{1}{\gamma_V N^{2/3}},\, b_V + \delta'\bigr)$.

(ii) For $t \in [b_V + \delta', L_+] \cap \mathbb R$

(a) and $L_+ < \infty$ we have
\[
\mathcal O_{N,V}(t) = \frac{b_V - a_V}{8\pi}\cdot\frac{e^{-N \eta_V(t)}}{N(t-b_V)(t-a_V)\,\eta_V'(t)}\cdot\Bigl[\bigl(1 - e^{-N z_V(t)}\bigr) + \bigl(1 - (N z_V(t)+1)\,e^{-N z_V(t)}\bigr)\cdot\mathcal O\!\left(\tfrac1N\right)\Bigr]\cdot\Bigl(1 + \mathcal O\!\left(\tfrac1N\right)\Bigr) \tag{4.28}
\]
with $z_V(t) := \eta_V(L_+) - \eta_V(t)$.

The error bounds are uniform for all $t \in [b_V + \delta', L_+]$.

(b) and $L_+ = \infty$ we have
\[
\mathcal O_{N,V}(t) = \frac{b_V - a_V}{8\pi}\cdot\frac{e^{-N \eta_V(t)}}{N(t-b_V)(t-a_V)\,\eta_V'(t)}\left(1 + \mathcal O\!\left(\frac1N\right)\right).
\]
The error bound is uniform for $t$ in bounded subsets of $[b_V + \delta', \infty)$.

(iii) Assume that $V$ satisfies (GA)$_{\mathrm{SLD}}$. For $t \in [b_V + \delta', \infty)$ we have
\[
\mathcal O_{N,V}(t) = \frac{b_V - a_V}{8\pi}\cdot\frac{e^{-N \eta_V(t)}}{N(t-b_V)(t-a_V)\,\eta_V'(t)}\left(1 + \mathcal O\!\left(\frac1N\right)\right).
\]
The error bound is uniform for all $t \in [b_V + \delta', \infty)$.

Proof. Recalling (4.25) we structure the proof as follows: First, we consider the integral $\int_t^{L_+} K_N(x,x)\,dx$ in all cases (i)–(iii) under their respective assumptions. Then, we deal with the remaining series starting from $k = 2$ in (4.25) by using Proposition 4.9.

Let us start with claim (i), i.e. in particular $t < b + \delta'$, by dividing $\int_t^{L_+} K_N(x,x)\,dx$ into (recall (4.27))
\[
\int_t^{L_+} K_N(x,x)\,dx = \int_t^{b+\delta} K_N(x,x)\,dx + \int_{b+\delta}^{L_+} K_N(x,x)\,dx. \tag{4.29}
\]
Using in addition Corollary 4.5 (i), we obtain
\[
\int_t^{b+\delta} K_N(x,x)\,dx = \frac{b-a}{8\pi} \int_t^{b+\delta} \frac{e^{-N \eta(x)}}{(x-b)(x-a)}\,dx \cdot \left(1 + \mathcal O\!\left(\frac{1}{N(t-b)^{3/2}}\right)\right). \tag{4.30}
\]
Since $b+\delta$ is bounded by construction, we can apply Lemma 4.8 (i) to the right hand side of (4.30). Due to $t < b + \delta'$ and (4.27) we have $b + \delta - t > \delta' > 0$ and hence $\eta(b+\delta) - \eta(t) \ge c$ for a constant $c > 0$. Using $\frac{1}{\eta'(t)} = \mathcal O\!\left(\frac{1}{(t-b)^{1/2}}\right)$ we obtain
\[
\int_t^{b+\delta} \frac{e^{-N \eta(x)}}{(x-b)(x-a)}\,dx = \frac{e^{-N \eta(t)}}{N(t-b)(t-a)\,\eta'(t)}\left(1 + \mathcal O\!\left(\frac{1}{N(t-b)^{3/2}}\right)\right).
\]
We now turn to the second summand of (4.29).

In the case $L_+ < \infty$, Corollary 4.5 (ii) and Proposition 4.7, using $b + \delta > t + \delta'$, give
\[
\int_{b+\delta}^{L_+} K_N(x,x)\,dx = \frac{b-a}{8\pi}\cdot\frac{e^{-N \eta(t)}}{N(t-b)(t-a)\,\eta'(t)}\cdot\mathcal O\!\left(\frac1N\right).
\]
Combining this with (4.30), we obtain
\[
\int_t^{L_+} K_N(x,x)\,dx = \frac{b-a}{8\pi}\cdot\frac{e^{-N \eta(t)}}{N(t-b)(t-a)\,\eta'(t)}\left(1 + \mathcal O\!\left(\frac{1}{N(t-b)^{3/2}}\right)\right) \tag{4.31}
\]
for $L_+ < \infty$. If $L_+ = \infty$, the argument just given still yields
\[
\int_{b+\delta}^{M} K_N(x,x)\,dx = \frac{b-a}{8\pi}\cdot\frac{e^{-N \eta(t)}}{N(t-b)(t-a)\,\eta'(t)}\cdot\mathcal O\!\left(\frac1N\right)
\]
for any fixed $M > b + \delta$, where the error bound may depend on $M$. However, using Lemma 4.6, we can determine such a number $M$ with $M > X_0$ and
\[
\int_M^{\infty} K_N(x,x)\,dx = \frac{b-a}{8\pi}\cdot\frac{e^{-N \eta(t)}}{N(t-b)(t-a)\,\eta'(t)}\cdot\mathcal O\!\left(\frac1N\right)
\]
uniformly for $t \in \bigl(b + \frac{1}{\gamma N^{2/3}},\, b + \delta'\bigr)$. Hence, (4.31) also holds for $L_+ = \infty$, which together with Proposition 4.9 completes the proof of (i).

In order to show claim (ii) (a), we apply Corollary 4.5 (ii) with $\delta$ replaced by $\delta'$. For claim (ii) (b), let $t$ be from some fixed bounded subset of $[b+\delta', \infty)$. We denote this bounded subset by $I$ and set $S := \sup(I)$. Corollary 4.5 (ii) and Lemma 4.8 (i) then yield the desired representation, which proves claim (ii) (b).

If $t \in [b+\delta', \infty)$ is arbitrary and $V$ satisfies (GA)$_{\mathrm{SLD}}$, we have the analogous representation due to Corollary 4.5 (iii), since the assumption of Lemma 4.8 (ii) is satisfied in this case. It remains to consider the sum starting from $k = 2$ in (4.26) for (ii) (a), (b), and (iii). Let us start with (ii) (b) and (iii), where $\int_t^{\infty} K_N(x,x)\,dx$ is given by (4.33) in both cases. Having Proposition 4.9 in mind, we consider the resulting bound, and by Proposition 4.9 we obtain the claim.

Now, we prove Theorems 1.1–1.3 by applying the results of Theorem 4.10.

Proof of Theorems 1.1–1.3

Theorem 1.1 is immediate from Theorem 4.10 (i), (ii) (b), and (iii), and from the boundedness of $N \eta(t)$, $N(t-b)(t-a)\,\eta'(t)$, $N(t-b)^{3/2}$, and $\mathcal O_N(t)$ for $t \in \bigl(b,\, b + \frac{1}{\gamma N^{2/3}}\bigr]$.

The statement of Theorem 1.2 can be obtained from Theorem 4.10 in the same way since $V$ is required to satisfy (GA).

In order to prove Theorem 1.3 we have to study the representation of $\mathcal O_{N,V}$ in (4.28), especially $z(t)$. Obviously, there exists $\xi \in [t, L_+]$ with
\[
z(t) = \eta(L_+) - \eta(t) = \eta'(\xi)\,(L_+ - t). \tag{4.34}
\]
Consider the case $t < L_+ - \frac{(\log N)^{\alpha}}{N}$ for some $\alpha > 1$. Using (4.34), (2.30), and Lemma 2.15 (i), there exists $c > 0$ such that $N z(t) > c(\log N)^{\alpha}$. Hence, $e^{-N z(t)} = \mathcal O\!\left(\frac1N\right)$, which proves claim (i) of Theorem 1.3.

Let now $L_+ - t = o\!\left(\frac1N\right)$. Due to (4.34) and $\eta'(\xi) = \eta'(L_+) + \mathcal O(L_+ - t)$ we obtain
\[
z(t) = \eta'(L_+)\,(L_+ - t)\,\bigl(1 + \mathcal O(L_+ - t)\bigr) \tag{4.35}
\]
and hence $w_N(t) := N z(t) = o(1)$ for $N \to \infty$. Since $e^{-w_N(t)} = 1 - w_N(t) + \mathcal O(w_N(t)^2)$, we have
\[
\bigl(1 - e^{-w_N(t)}\bigr) + \bigl(1 - (w_N(t)+1)\,e^{-w_N(t)}\bigr)\,\mathcal O\!\left(\frac1N\right) = w_N(t)\,\bigl(1 + \mathcal O(w_N(t))\bigr).
\]
Applying in addition (4.28), $\eta'(t) = \eta'(L_+)\bigl(1 + o\!\left(\frac1N\right)\bigr)$, and (4.35) we obtain
\[
\mathcal O_N(t) = \frac{b-a}{8\pi}\cdot\frac{e^{-N \eta(t)}}{N(t-b)(t-a)\,\eta'(L_+)}\,w_N(t)\,(1 + o(1))
= \frac{b-a}{8\pi}\cdot\frac{e^{-N \eta(t)}}{(t-b)(t-a)}\,(L_+ - t)\,(1 + o(1)).
\]
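The cancellation of the factors $N$ and $\eta'(L_+)$ in this last step is elementary and can be confirmed symbolically; the sketch below (with hypothetical symbol names) checks only this algebra, not the error terms.

\begin{verbatim}
# Symbolic check of the prefactor algebra above: with w_N(t) = N z(t) and the leading
# term z(t) = eta'(L_+) (L_+ - t) from (4.35), the factors N and eta'(L_+) cancel.
import sympy as sp

N, Lp, t, b, a = sp.symbols("N L_plus t b a", positive=True)
eta_prime_Lp = sp.Symbol("eta_prime_Lp", positive=True)   # stands for eta'(L_+)

z = eta_prime_Lp * (Lp - t)   # leading term of z(t)
w = N * z                     # w_N(t) = N z(t)

lhs = (b - a) / (8 * sp.pi) / (N * (t - b) * (t - a) * eta_prime_Lp) * w
rhs = (b - a) / (8 * sp.pi) * (Lp - t) / ((t - b) * (t - a))

print(sp.simplify(lhs - rhs))   # prints 0
\end{verbatim}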

We now analyze the outer tail $\widetilde{\mathcal O}_{N,V}$ (see (1.13)) in the regime of moderate deviations. Due to the relation
\[
t = b_V + \frac{s}{\gamma_V N^{2/3}} \tag{4.36}
\]
between the global variable $t$ and the locally rescaled variable $s$, which implies $\widetilde{\mathcal O}_{N,V}(s) = \mathcal O_{N,V}(t)$, we can obtain from Theorem 4.10 (i):

Theorem 4.11. Assume that $V$ satisfies (GA). Let $\widetilde{\mathcal O}_{N,V}$ be given as in (1.13) and choose $(s, N)$ from the regime of moderate deviations (see page 6). Then,
\[
\frac{\log \widetilde{\mathcal O}_{N,V}(s)}{s^{3/2}} = -\frac43 - \frac{\log\bigl(16\pi s^{3/2}\bigr)}{s^{3/2}} + \mathcal O\!\left(\frac{s}{N^{2/3}}\right) + \mathcal O\!\left(\frac{1}{s^{3}}\right).
\]
Under the additional requirement that $\frac{q_N}{N^{4/15}} \to 0$ for $N \to \infty$ (c.f. page 6) we have
\[
\widetilde{\mathcal O}_{N,V}(s) = \frac{1}{16\pi s^{3/2}}\,e^{-\frac43 s^{3/2}}\left(1 + \mathcal O\!\left(\frac{s^{5/2}}{N^{2/3}}\right) + \mathcal O\!\left(\frac{1}{s^{3/2}}\right)\right). \tag{4.37}
\]

Proof. It suffices to consider $N \ge N_0$ for a suitably chosen $N_0 > 0$. We may therefore use the asymptotic behavior of $\mathcal O_N$ provided by Theorem 4.10 (i) and the representation of $\eta$ given in Lemma 3.10 (i). This implies together with (4.36)
\[
\eta(t) = \frac43\,\gamma^{3/2}(t-b)^{3/2}\hat f(t)^{3/2} = \frac43\cdot\frac{s^{3/2}}{N}\,\hat f\Bigl(b + \frac{s}{\gamma N^{2/3}}\Bigr)^{3/2}, \tag{4.38}
\]
\[
\eta'(t) = 2\gamma^{3/2}(t-b)^{1/2}\hat f(t)^{1/2}\Bigl(\hat f(t) + (t-b)\,\hat f'(t)\Bigr)
= 2\gamma\,\frac{s^{1/2}}{N^{1/3}}\,\hat f\Bigl(b + \frac{s}{\gamma N^{2/3}}\Bigr)^{1/2}\Bigl(\hat f\Bigl(b + \frac{s}{\gamma N^{2/3}}\Bigr) + \frac{s}{\gamma N^{2/3}}\,\hat f'\Bigl(b + \frac{s}{\gamma N^{2/3}}\Bigr)\Bigr).
\]
Using $\hat f(b) = 1$ (see Lemma 3.10 (iii)) and $\hat f'\bigl(b + \frac{s}{\gamma N^{2/3}}\bigr) = \mathcal O(1)$ we obtain
\[
\eta(t) = \frac43\cdot\frac{s^{3/2}}{N}\left(1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right)\right), \qquad
\eta'(t) = 2\gamma\,\frac{s^{1/2}}{N^{1/3}}\left(1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right)\right).
\]
This yields (see Theorem 4.10 (i))
\[
\widetilde{\mathcal O}_N(s) = \mathcal O_N(t) = \frac{b-a}{8\pi}\cdot\frac{e^{-\frac43 s^{3/2}\bigl(1 + \mathcal O(s N^{-2/3})\bigr)}}{2 s^{3/2}\bigl(b - a + \frac{s}{\gamma N^{2/3}}\bigr)}\left(1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right)\right)\left(1 + \mathcal O\!\left(\frac{1}{s^{3/2}}\right)\right)
= \frac{1}{16\pi s^{3/2}}\,e^{-\frac43 s^{3/2}\bigl(1 + \mathcal O(s N^{-2/3})\bigr)}\left(1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right) + \mathcal O\!\left(\frac{1}{s^{3/2}}\right)\right). \tag{4.39}
\]
Hence,
\[
\log \widetilde{\mathcal O}_N(s) = -\frac43\,s^{3/2}\left(1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right)\right) - \log\bigl(16\pi s^{3/2}\bigr) + \log\left(1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right)\right) + \mathcal O\!\left(\frac{1}{s^{3/2}}\right)
= -\frac43\,s^{3/2} - \log\bigl(16\pi s^{3/2}\bigr) + \mathcal O\!\left(\frac{s^{5/2}}{N^{2/3}}\right) + \mathcal O\!\left(\frac{1}{s^{3/2}}\right),
\]
which proves the first claim.

In the case that $s$ grows up to order $o(N^{4/15})$ we have $\frac{s^{5/2}}{N^{2/3}} = o(1)$ and therefore
\[
e^{-\frac43 s^{3/2}\bigl(1 + \mathcal O(s N^{-2/3})\bigr)} = e^{-\frac43 s^{3/2}}\left(1 + \mathcal O\!\left(\frac{s^{5/2}}{N^{2/3}}\right)\right).
\]
Together with (4.39) we obtain the second statement.
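The simplification of the denominator in (4.39) rests on the identity $N(t-b)(t-a)\,\eta'(t) = 2 s^{3/2}\bigl(b-a+\frac{s}{\gamma N^{2/3}}\bigr)$ when the error term in $\eta'(t)$ is dropped; this exact piece of algebra can be verified symbolically (the symbol names below are hypothetical).

\begin{verbatim}
# Symbolic check of the prefactor simplification in (4.39): with t = b + s/(gamma N^{2/3})
# and the leading term eta'(t) = 2 gamma sqrt(s) / N^{1/3}, one has
# N (t - b)(t - a) eta'(t) = 2 s^{3/2} (b - a + s / (gamma N^{2/3})).
import sympy as sp

N, s, gamma, a, b = sp.symbols("N s gamma a b", positive=True)

t = b + s / (gamma * N**sp.Rational(2, 3))
eta_prime_leading = 2 * gamma * sp.sqrt(s) / N**sp.Rational(1, 3)

lhs = N * (t - b) * (t - a) * eta_prime_leading
rhs = 2 * s**sp.Rational(3, 2) * (b - a + s / (gamma * N**sp.Rational(2, 3)))

print(sp.simplify(lhs - rhs))   # prints 0
\end{verbatim}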

Remark 4.12. Observe that the definition of the outer tail $\widetilde{\mathcal O}_{N,V}$ depends on the two $V$-dependent numbers $b_V$ and $\gamma_V$. Hence, the universality result in (4.37) holds for $s = o(N^{4/15})$ up to the rescaling. The reason why one cannot expect (4.37) to hold in general for values of $s$ that grow larger than $N^{4/15}$ lies in the representation of $\eta_V$ in (4.38). Since $\hat f_V$ is analytic in a neighborhood of $b_V$ (see Lemma 3.10), we can expand $\hat f_V\bigl(b_V + \frac{s}{\gamma_V N^{2/3}}\bigr)$ as a Taylor series at $b_V$:
\[
\hat f_V\Bigl(b_V + \frac{s}{\gamma_V N^{2/3}}\Bigr) = \hat f_V(b_V) + \hat f_V'(b_V)\,\frac{s}{\gamma_V N^{2/3}} + \frac12\,\hat f_V''(b_V)\Bigl(\frac{s}{\gamma_V N^{2/3}}\Bigr)^{2} + \dots
\]
We know that $\hat f_V(b_V) = 1$ for any admissible function $V$, but we cannot state any general information about $\hat f_V'(b_V)$. For $\hat f_V'(b_V) \neq 0$ one cannot improve on the error bound $\hat f_V\bigl(b_V + \frac{s}{\gamma_V N^{2/3}}\bigr) = 1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right)$ and one needs the assumption $s = o(N^{4/15})$ to deduce
\[
e^{\mathcal O(s^{5/2} N^{-2/3})} = 1 + \mathcal O\!\left(\frac{s^{5/2}}{N^{2/3}}\right),
\]
which implies
\[
e^{-N \eta_V(t)} = e^{-\frac43 s^{3/2}}\left(1 + \mathcal O\!\left(\frac{s^{5/2}}{N^{2/3}}\right)\right).
\]
However, if there exists a function $\tilde V$ satisfying (GA) with $\hat f_{\tilde V}'(b_{\tilde V}) = 0$, we would obtain $\hat f_{\tilde V}\bigl(b_{\tilde V} + \frac{s}{\gamma_{\tilde V} N^{2/3}}\bigr) = 1 + \mathcal O\!\left(\frac{s^{2}}{N^{4/3}}\right)$, and for $s = o(N^{8/21})$,
\[
e^{-N \eta_{\tilde V}(t)} = e^{-\frac43 s^{3/2}}\left(1 + \mathcal O\!\left(\frac{s^{7/2}}{N^{4/3}}\right)\right),
\]
and therefore
\[
\widetilde{\mathcal O}_{N,\tilde V}(s) = \frac{1}{16\pi s^{3/2}}\,e^{-\frac43 s^{3/2}}\left(1 + \mathcal O\!\left(\frac{s^{7/2}}{N^{4/3}}\right) + \mathcal O\!\left(\frac{1}{s^{3/2}}\right)\right)
\]
(c.f. proof of Theorem 4.11). Hence, the assumption $\hat f_{\tilde V}'(b_{\tilde V}) = 0$ would lead to the enlargement of the range of applicability of (4.37) from $o(N^{4/15})$ to $o(N^{8/21})$. Similarly, the requirement $\hat f_{\tilde V}^{(1)}(b_{\tilde V}) = \dots = \hat f_{\tilde V}^{(k-1)}(b_{\tilde V}) = 0$ would enlarge the range of applicability of (4.37) to $s = o\bigl(N^{\frac{4k}{9+6k}}\bigr)$.

Another way to formulate this is the following: In the region $N^{\frac{4k}{9+6k}} \ll s \ll N^{\frac{4(k+1)}{9+6(k+1)}}$ the leading order behavior of the tail probability $\widetilde{\mathcal O}_{N,V}(s)$ depends on all $k$ values $\hat f_V^{(1)}(b_V), \dots, \hat f_V^{(k)}(b_V)$. This can still be viewed as a weaker form of universality.
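The exponent $\frac{4k}{9+6k}$ arises from requiring that the error term $s^{3/2}\bigl(\frac{s}{N^{2/3}}\bigr)^{k}$ in $N\eta_V(t)$ tends to zero; this exponent arithmetic can be checked symbolically as follows (purely illustrative).

\begin{verbatim}
# Exponent arithmetic behind Remark 4.12: s^{3/2} (s / N^{2/3})^k = o(1) exactly for
# s = o(N^{4k/(9+6k)}).  Substituting s = N^alpha and solving for the critical alpha:
import sympy as sp

k, alpha = sp.symbols("k alpha", positive=True)

exponent_of_N = alpha * (k + sp.Rational(3, 2)) - sp.Rational(2, 3) * k
alpha_crit = sp.solve(sp.Eq(exponent_of_N, 0), alpha)[0]

print(sp.simplify(alpha_crit))               # 4*k/(6*k + 9)
for kk in (1, 2, 3):
    print(kk, alpha_crit.subs(k, kk))        # 4/15, 8/21, 4/9
\end{verbatim}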

Finally, we study the Gaussian Unitary Ensembles, which are of special interest (c.f. Introduction).

Example 4.13. Considering the function $V_0 : \mathbb R \to \mathbb R$, $x \mapsto \frac12 x^2$ (c.f. (1.3)) we go through the Chapters 2–4 and determine all relevant functions and numbers explicitly. The related MRS-numbers $a_{V_0}$, $b_{V_0}$ can be obtained by solving (2.4) and (2.5), which yields $-a_{V_0} = b_{V_0} = 2$. Together with $G_{V_0}(t) = 1$ for all $t \in \mathbb R$ (see (2.15)), we obtain $\gamma_{V_0} = 1$ and
\[
\eta_{V_0}(t) = \int_2^t \sqrt{u^2 - 4}\,du \quad\text{for } t \ge 2
\]
(see (3.29) and (2.30)).

Since $L_+ = \infty$, we have to verify (GA)$_{\mathrm{SLD}}$ for the study of the superlarge deviations regime. Obviously, $V_0$ can be extended analytically to the whole complex plane, and in particular to
\[
U(1,4) = \Bigl\{z \in \mathbb C \;\Big|\; \operatorname{Re}(z) \ge 4,\ |\operatorname{Im}(z)| \le \frac{1}{\operatorname{Re}(z) - 3}\Bigr\}
\]
(c.f. (3.79)). For all $z = x + iy \in U(1,4)$ we have $x \ge 4$, $|y| \le 1$, and
\[
\operatorname{Re}(V_0(z)) = \tfrac12\bigl(x^2 - y^2\bigr) \ge \tfrac12\bigl(x^2 - 1\bigr) > x - 1,
\]
which shows that (GA) is satisfied. Due to the boundedness of $\frac{V_0''(x)}{V_0'(x)^2} = \frac{1}{x^2}$ for all $x \ge 2$, the assumption (GA)$_{\mathrm{SLD}}$ is ensured and we can apply Theorem 4.10, obtaining
\[
\mathcal O_{N,V_0}(t) = \frac{1}{2\pi}\cdot\frac{e^{-N \int_2^t \sqrt{u^2-4}\,du}}{N\bigl(t^2-4\bigr)^{3/2}}\left(1 + \mathcal O\!\left(\frac{1}{N(t-2)^{3/2}}\right)\right), \quad\text{if } t \in \Bigl(2 + \frac{1}{N^{2/3}},\, 2 + \delta'\Bigr),
\]
\[
\mathcal O_{N,V_0}(t) = \frac{1}{2\pi}\cdot\frac{e^{-N \int_2^t \sqrt{u^2-4}\,du}}{N\bigl(t^2-4\bigr)^{3/2}}\left(1 + \mathcal O\!\left(\frac1N\right)\right), \quad\text{if } t \ge 2 + \delta'.
\]

It is remarkable that the additional requirement for (4.37) in Theorem 4.11 is also necessary in the Gaussian case. As described above, a closer look at the representation of $\eta$ is needed. With $t = 2 + \frac{s}{N^{2/3}}$ and $\sqrt{u+2} = 2 + \frac14(u-2) + \mathcal O\bigl((u-2)^2\bigr)$ for $u \ge 2$ we obtain
\[
N \eta_{V_0}(t) = N \int_2^t \sqrt{u-2}\,\sqrt{u+2}\,du = \frac43\,s^{3/2} + \frac{1}{10}\cdot\frac{s^{5/2}}{N^{2/3}}\left(1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right)\right).
\]
Hence,
\[
e^{-N \eta_{V_0}(t)} = \exp\Bigl(-\frac43\,s^{3/2}\Bigr)\cdot\exp\left(-\frac{s^{5/2}}{10 N^{2/3}}\left(1 + \mathcal O\!\left(\frac{s}{N^{2/3}}\right)\right)\right)
\]
can be written in the form
\[
e^{-N \eta_{V_0}(t)} = e^{-\frac43 s^{3/2}}\left(1 + \mathcal O\!\left(\frac{s^{5/2}}{N^{2/3}}\right)\right)
\]
if and only if $s = o(N^{4/15})$. This shows that one needs the same restriction on $(s, N)$ for the GUE to obtain the result (4.37) in Theorem 4.11 as for general functions $V$ satisfying (GA).
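The expansion of $N\eta_{V_0}\bigl(2+\frac{s}{N^{2/3}}\bigr)$ used here is easy to confirm numerically; the following sketch compares the exact integral with $\frac43 s^{3/2} + \frac{1}{10}\frac{s^{5/2}}{N^{2/3}}$ for a few illustrative values of $s$ and $N$ (these values are chosen for illustration only).

\begin{verbatim}
# Numerical check of N * eta_{V_0}(2 + s N^{-2/3})
#   = (4/3) s^{3/2} + (1/10) s^{5/2} N^{-2/3} (1 + ...)
# for the GUE potential V_0(x) = x^2/2; the chosen N and s are illustrative only.
import numpy as np
from scipy.integrate import quad

def N_eta(N, s):
    t = 2.0 + s / N ** (2.0 / 3.0)
    val, _ = quad(lambda u: np.sqrt(u * u - 4.0), 2.0, t)
    return N * val

for N in (10**3, 10**6):
    for s in (1.0, 4.0, 16.0):
        exact = N_eta(N, s)
        approx = 4.0 / 3.0 * s**1.5 + 0.1 * s**2.5 / N ** (2.0 / 3.0)
        print(f"N = {N:7d}, s = {s:5.1f}:  exact = {exact:.6f},  approx = {approx:.6f}")
\end{verbatim}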

