Gaussian polytopes: variances and limit theorems

Daniel Hug and Matthias Reitzner

October 29, 2004

Abstract

The convex hull of $n$ independent random points in $\mathbb{R}^d$ chosen according to the normal distribution is called a Gaussian polytope. Estimates for the variance of the number of $i$-faces and for the variance of the $i$-th intrinsic volume of a Gaussian polytope in $\mathbb{R}^d$, $d\in\mathbb{N}$, are established by means of the Efron-Stein jackknife inequality and a new formula of Blaschke-Petkantschin type. These estimates imply laws of large numbers for the number of $i$-faces and for the $i$-th intrinsic volume of a Gaussian polytope as $n\to\infty$.

1 Introduction and statements of results

Let $X_1,\dots,X_n$ be a Gaussian sample in $\mathbb{R}^d$ ($d\in\mathbb{N}$), i.e., independent random points chosen according to the $d$-dimensional standard normal distribution with mean zero and covariance matrix $\frac{1}{2}I_d$. Denote by $P_n=[X_1,\dots,X_n]$ the convex hull of these random points, and call $P_n$ a Gaussian polytope. We are interested in geometric functionals such as volume, intrinsic volumes, and the number of $i$-dimensional faces of Gaussian polytopes. Most of the previous investigations were concerned with expectations of such functionals. The starting point of this line of research is marked by a classical paper by Rényi and Sulanke [21] in which the asymptotic behaviour of the expected number of vertices, $\mathbf{E}f_0(P_n)$, of $P_n$ is determined in the plane, and thus also the expected number of edges, $\mathbf{E}f_1(P_n)$, as $n$ tends to infinity. This result was generalized by Raynaud [20], who investigated the asymptotic behaviour of the mean number of facets, $\mathbf{E}f_{d-1}(P_n)$, for arbitrary dimensions. Both results are only particular cases of the formula

$$\mathbf{E}f_i(P_n)=\frac{2^d}{\sqrt{d}}\binom{d}{i+1}\beta_{i,d-1}(\pi\ln n)^{\frac{d-1}{2}}(1+o(1)), \qquad (1.1)$$

where $i\in\{0,\dots,d-1\}$ and $d\in\mathbb{N}$, as $n\to\infty$. This follows, in arbitrary dimensions, from work of Affentranger and Schneider [2] and Baryshnikov and Vitale [3]. Here $f_i(P_n)$ denotes the number of $i$-faces of $P_n$, and $\beta_{i,d-1}$ is the internal angle of a regular $(d-1)$-simplex at one of its $i$-dimensional faces. Recently, a more direct proof of (1.1) and some additional relations, which cannot be derived from [2] and [3], were given in [12]. However, it turned out to be difficult to extend these results to higher moments of $f_i(P_n)$, and thus to prove limit theorems. An exception is the particular case $i=0$, where Hueter [10], [11] states a Central Limit Theorem,

$$\frac{f_0(P_n)-\mathbf{E}f_0(P_n)}{\sqrt{\operatorname{Var}f_0(P_n)}}\ \xrightarrow{\,D\,}\ N(0,1) \qquad (1.2)$$

AMS 2000 subject classifications. Primary 52A22, 60D05; secondary 60C05, 62H10.

Key words and phrases. Random points, convex hull, $f$-vector, intrinsic volumes, geometric probability, normal distribution, Gaussian sample, stochastic geometry, variance, law of large numbers, limit theorems.


as $n$ tends to infinity; here $\xrightarrow{\,D\,}$ denotes convergence in distribution and $N(0,1)$ is the (one-dimensional) normal distribution. The asymptotic behaviour of the variance is asserted to be of the form

$$\operatorname{Var}f_0(P_n)=\bar c_d(\ln n)^{\frac{d-1}{2}}(1+o(1)),$$

as $n\to\infty$. Most probably it is difficult to establish such a precise limit relation for all $f_i(P_n)$, $i\in\{1,\dots,d-1\}$. Our first result provides an upper bound for the order of the variance of $f_i(P_n)$, for all $i\in\{0,\dots,d-1\}$, which is of the same order.

Theorem 1.1. Let $f_i(P_n)$ be the number of $i$-dimensional faces of a $d$-dimensional Gaussian polytope $P_n$, $d\in\mathbb{N}$. Then there exists a positive constant $c_d$, depending only on the dimension, such that

$$\operatorname{Var}f_i(P_n)\le c_d(\ln n)^{\frac{d-1}{2}} \qquad (1.3)$$

for all $i\in\{0,\dots,d-1\}$.

Combining Chebyshev's inequality and (1.3), we obtain

$$\mathbf{P}\left(|f_i(P_n)-\mathbf{E}f_i(P_n)|\,(\ln n)^{-\frac{d-1}{2}}\ge\varepsilon\right)\le\varepsilon^{-2}(\ln n)^{-(d-1)}\operatorname{Var}f_i(P_n)\le\varepsilon^{-2}c_d(\ln n)^{-\frac{d-1}{2}},$$

and thus the random variable $f_i(P_n)$ satisfies a (weak) law of large numbers for all $d\in\mathbb{N}$ (the case $d=1$ is trivial). In fact, the law of $f_i(P_n)(\ln n)^{-(d-1)/2}$ converges in probability to the law concentrated at a constant.

Corollary 1.2. For $d\in\mathbb{N}$ and $i\in\{0,\dots,d-1\}$, the number of $i$-dimensional faces, $f_i(P_n)$, of a Gaussian polytope $P_n$ in $\mathbb{R}^d$ satisfies

$$f_i(P_n)(\ln n)^{-\frac{d-1}{2}}\ \longrightarrow\ \frac{2^d}{\sqrt{d}}\binom{d}{i+1}\beta_{i,d-1}\,\pi^{\frac{d-1}{2}}$$

in probability as $n\to\infty$.

Massé [16] deduces a corresponding weak law of large numbers for $d=2$ and $i=0$ from Hueter's central limit theorem (1.2).
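Corollary 1.2 can be illustrated numerically in the plane ($d=2$, $i=0$), where $f_0(P_n)$ is the number of hull vertices and the limit constant evaluates to $2\sqrt{2\pi}\approx 5.01$. The Monte Carlo sketch below is not part of the paper; it uses a standard monotone-chain convex hull and the paper's normalization $N(0,\frac{1}{2}I_2)$ (the vertex count is scale invariant in any case). Since the convergence is logarithmic, the printed ratios drift only very slowly toward the constant.

```python
import math
import random

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mean_f0(n, trials, rng):
    """Monte Carlo estimate of E f_0(P_n) for a planar Gaussian sample N(0, I/2)."""
    total = 0
    for _ in range(trials):
        pts = [(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
               for _ in range(n)]
        total += len(convex_hull(pts))
    return total / trials

rng = random.Random(0)
for n in (100, 1000, 10000):
    est = mean_f0(n, 200, rng)
    # ratio should slowly approach 2*sqrt(2*pi) ~ 5.01
    print(n, est, est / math.sqrt(math.log(n)))
```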

Our method of proof also works for the volume and, more generally, the intrinsic volumes. Denote by $V_i(P_n)$ the $i$-th intrinsic volume of the Gaussian polytope $P_n$; hence, for instance, $V_d(P_n)$ is the volume, $2V_{d-1}(P_n)$ is the surface area and $V_1(P_n)$ is a multiple of the mean width of $P_n$. The expected values of the $i$-th intrinsic volumes were investigated by Affentranger [1] who proved that

$$\mathbf{E}V_i(P_n)=\binom{d}{i}\frac{\kappa_d}{\kappa_{d-i}}(\ln n)^{\frac{i}{2}}(1+o(1)), \qquad (1.4)$$

for $i\in\{1,\dots,d\}$, as $n$ tends to infinity, where $\kappa_j$ denotes the volume of the $j$-dimensional unit ball.

The case $d=1$, which has been excluded in [1], can be checked directly. Relation (1.4) was expected to hold, since a result of Geffroy [6] implies that the Hausdorff distance between $P_n$ and the $d$-dimensional ball of radius $(\ln n)^{1/2}$ centred at the origin converges almost surely to zero. But it seems that (1.4) cannot be deduced directly from Geffroy's result.

In the planar case, Hueter also states Central Limit Theorems for $V_1(P_n)$ and $V_2(P_n)$,

$$\frac{V_1(P_n)-\mathbf{E}V_1(P_n)}{\sqrt{\operatorname{Var}V_1(P_n)}}\ \xrightarrow{\,D\,}\ N(0,1) \qquad\text{and}\qquad \frac{V_2(P_n)-\mathbf{E}V_2(P_n)}{\sqrt{\operatorname{Var}V_2(P_n)}}\ \xrightarrow{\,D\,}\ N(0,1).$$


The variances suggested by Hueter are of the form $\operatorname{Var}V_i(P_n)=\frac{1}{2}\pi^{3/2}(\ln n)^{i}(1+o(1))$. That her result cannot be correct can be seen from the following: if the stated asymptotic behaviour of the variances were true, this would immediately imply $\mathbf{P}(V_1(P_n)\le 0)\to\Phi(-\sqrt[4]{4\pi})$ and $\mathbf{P}(V_2(P_n)\le 0)\to\Phi(-\sqrt[4]{4\pi})$. But then the stated probabilities would be positive for large $n$, which obviously cannot hold.

In the next theorem we give an upper bound for the variances, for all $i=1,\dots,d$ and $d\in\mathbb{N}$.

Theorem 1.3. Let $V_i(P_n)$ be the $i$-th intrinsic volume of a Gaussian polytope $P_n$ in $\mathbb{R}^d$, $d\in\mathbb{N}$. Then there exists a positive constant $c_d$, depending only on the dimension, such that

$$\operatorname{Var}V_i(P_n)\le c_d(\ln n)^{\frac{i-3}{2}} \qquad (1.5)$$

for all $i\in\{1,\dots,d\}$.

Let $X_1,X_2,\dots$ be a sequence of independent random points which are identically distributed according to the $d$-dimensional normal distribution, and let $P_n=[X_1,\dots,X_n]$. For $d=1$, the quantity $V_1(P_n)$ is the sample range of $X_1,\dots,X_n$. Although its distribution and moments can be expressed as multiple integrals (see [19, Chapter 8], [13, Chapter 14] and [15]), explicit values are not available in general. The asymptotic behaviour of $\operatorname{Var}V_1(P_n)$ for $d=1$ is deduced in Chapter 14 (see equation (14.100)) of [13], which yields (with the present normalization) that $\operatorname{Var}V_1(P_n)=\frac{\pi^2}{12}(\ln n)^{-1}(1+o(1))$ as $n\to\infty$. A different extension of the univariate sample range to higher dimensions is given by the largest interpoint distance of the given random points. Limiting distributions have been considered in [17] and in a more general framework in [8]. Still another extension is discussed in [9].

From (1.4) and (1.5) we obtain an additive weak law of large numbers for $i\in\{1,2\}$, i.e.

$$V_i(P_n)-\binom{d}{i}\frac{\kappa_d}{\kappa_{d-i}}(\ln n)^{\frac{i}{2}}\ \to\ 0$$

in probability as $n\to\infty$. In order to derive a (multiplicative) strong law of large numbers for $i\in\{1,\dots,d\}$, we put $n_k=2^k$. From the upper bound for the variance and Chebyshev's inequality we deduce that

$$\mathbf{P}\left(|V_i(P_{n_k})-\mathbf{E}V_i(P_{n_k})|\,(\ln n_k)^{-\frac{i}{2}}\ge\varepsilon\right)\le\varepsilon^{-2}c_d(\ln n_k)^{-\frac{i+3}{2}}.$$

Since

$$\sum_{k\ge 1}(\ln n_k)^{-(i+3)/2}=(\ln 2)^{-(i+3)/2}\sum_{k\ge 1}k^{-(i+3)/2}<\infty,$$

(1.4) and the Borel-Cantelli Lemma imply that

$$V_i(P_{n_k})(\ln n_k)^{-\frac{i}{2}}\ \to\ \binom{d}{i}\frac{\kappa_d}{\kappa_{d-i}} \qquad (1.6)$$

with probability one as $k$ tends to infinity. Moreover, since $n\mapsto V_i(P_n)$ is increasing,

$$V_i(P_{n_{k-1}})(\ln n_k)^{-\frac{i}{2}}\le V_i(P_n)(\ln n)^{-\frac{i}{2}}\le V_i(P_{n_k})(\ln n_{k-1})^{-\frac{i}{2}}$$

for $n_{k-1}\le n\le n_k$, where by definition $(\ln n_{k+1})/(\ln n_k)\to 1$. Thus (1.6) implies a strong law of large numbers.

Corollary 1.4. Let $V_i(P_n)$ be the $i$-th intrinsic volume of a Gaussian polytope $P_n$ in $\mathbb{R}^d$, $d\in\mathbb{N}$. Then, for $i\in\{1,\dots,d\}$,

$$V_i(P_n)(\ln n)^{-\frac{i}{2}}\ \to\ \binom{d}{i}\frac{\kappa_d}{\kappa_{d-i}}$$

with probability one as $n\to\infty$.


This law of large numbers can also be deduced from a result of Geffroy in [6].

The estimates for the variances obtained in Theorems 1.1 and 1.3 are based on the solution of another problem which is of independent interest. Consider the random polytope $P_n$ and choose another independent random point $X$ according to the normal distribution. The question we are interested in is the following: if $X\notin P_n$, how many facets of $P_n$ can be seen from $X$? We will determine the asymptotic behaviour of the expectation of the corresponding random variable, as $n\to\infty$, and we will provide upper and lower bounds for its second moment.

In the following, let $F_n(X)$ be the number of facets of $P_n$ which can be seen from $X$, i.e., which are, up to $(d-2)$-dimensional faces, contained in the interior of the convex hull of $P_n$ and $X$. Note that $F_n(X)=0$ if $X$ is contained in $P_n$.
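For intuition in the plane, where the facets of $P_n$ are the edges of a convex polygon, $F_n(X)$ can be computed by an orientation test: an edge is seen from $x$ exactly when $x$ lies strictly on its outer side. The sketch below is illustrative only (not from the paper) and assumes the polygon's vertices are listed in counter-clockwise order.

```python
def visible_edges(hull, x):
    """Count edges of a CCW-ordered convex polygon strictly visible from x.

    An edge [a, b] is visible from x iff x lies strictly on its outer
    (right-hand, for CCW orientation) side; for x inside the polygon the
    count is 0, matching F_n(X) = 0 for X contained in P_n."""
    m = len(hull)
    count = 0
    for i in range(m):
        a, b = hull[i], hull[(i + 1) % m]
        cross = (b[0] - a[0]) * (x[1] - a[1]) - (b[1] - a[1]) * (x[0] - a[0])
        if cross < 0:   # x strictly on the outer side of the edge's line
            count += 1
    return count

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # CCW unit square
print(visible_edges(square, (0.5, 0.5)))    # interior point -> 0
print(visible_edges(square, (0.5, -1.0)))   # facing one edge -> 1
print(visible_edges(square, (2.0, -1.0)))   # near a corner -> 2
```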

Theorem 1.5. Let $X,X_1,\dots,X_n$ be independent random points in $\mathbb{R}^d$, $d\in\mathbb{N}$, which are identically distributed according to the $d$-dimensional normal distribution. Let $P_d^{(d-1)}$ denote a Gaussian polytope in $\mathbb{R}^{d-1}$. Then

$$\lim_{n\to\infty}\mathbf{E}F_n(X)\,n\,(\ln n)^{-\frac{d-1}{2}}=2^{d-1}\kappa_d\,\Gamma(d+1)\,\mathbf{E}V_{d-1}(P_d^{(d-1)}). \qquad (1.7)$$

Further, there is a positive constant $c_d$, depending only on the dimension, such that

$$c_d^{-1}\,n^{-1}(\ln n)^{\frac{d-1}{2}}\le\mathbf{E}F_n(X)^2\le c_d\,n^{-1}(\ln n)^{\frac{d-1}{2}}. \qquad (1.8)$$

For more information on random polytopes, we refer to the recent survey article by Schneider [24].

2 Projections of high-dimensional simplices

We want to give two interpretations of our results. The first one uses the fact that any orthogonal projection of a Gaussian sample again is a Gaussian sample. So we make our notation more precise by writing $P_n^{(d)}$ for a Gaussian polytope in $\mathbb{R}^d$ which is generated as the convex hull of $n$ normally distributed random points in $\mathbb{R}^d$. Let $\Pi_i:\mathbb{R}^d\to\mathbb{R}^i$ be the projection to the first $i$ components ($i<d$). For an arbitrary $i$-dimensional subspace of $\mathbb{R}^d$, which we identify with $\mathbb{R}^i$, we then obtain

$$\varphi(\Pi_i P_n^{(d)})\ \stackrel{d}{=}\ \varphi(P_n^{(i)}), \qquad (2.1)$$

where $\stackrel{d}{=}$ means equality in distribution and $\varphi$ is any (measurable) functional on the convex polytopes.

Now let $P_{n+1}^{(n)}$ be a Gaussian simplex in $\mathbb{R}^n$. As a consequence of Corollary 1.4 and (2.1), we obtain a law of large numbers for projections of high-dimensional random simplices: for a fixed integer $i\ge 1$,

$$V_i(\Pi_i P_{n+1}^{(n)})\,(\ln n)^{-\frac{i}{2}}\ \to\ \kappa_i$$

in probability as $n\to\infty$. Moreover, for a fixed integer $i\ge 1$, (1.4) implies that

$$\mathbf{E}V_i(\Pi_i P_{n+1}^{(n)})=\mathbf{E}V_i(P_{n+1}^{(i)})=\kappa_i(\ln n)^{\frac{i}{2}}(1+o(1)),$$

as $n\to\infty$. An estimate of the variance can be deduced from Theorem 1.3. Thus, for $i\ge 1$, we have

$$\operatorname{Var}V_i(\Pi_i P_{n+1}^{(n)})\le c_i(\ln n)^{\frac{i-3}{2}}.$$

Finally, Kubota's theorem (see [25, (4.6)], [23, (5.3.27)]), Hölder's inequality, and Theorem 1.3 yield the following asymptotic result for the $i$-th intrinsic volume of a high-dimensional Gaussian simplex (cf. the proof of Theorem 1.3 in Section 7 for a similar argument).


Corollary 2.1. Let $V_i(P_{n+1}^{(n)})$ be the $i$-th intrinsic volume of a Gaussian simplex in $\mathbb{R}^n$. Then, for any fixed integer $i\ge 1$,

$$V_i(P_{n+1}^{(n)})\,c_{n,i}^{-1}\,(\ln n)^{-\frac{i}{2}}\ \to\ \kappa_i$$

in probability as $n\to\infty$, where $c_{n,i}=\binom{n}{i}\kappa_n/(\kappa_i\kappa_{n-i})$.

Another method of generating $n+1$ random points in $\mathbb{R}^d$ goes back to a suggestion of Goodman and Pollack. Let $R$ denote a random rotation of $\mathbb{R}^n$, put $\bar\Pi_d:=\Pi_d\circ R$ (recall that $\Pi_d$ denotes the projection onto $\mathbb{R}^d$), and let $T^{(n)}$ be a regular simplex in $\mathbb{R}^n$. Then $\bar\Pi_d(T^{(n)})$ is a random polytope in $\mathbb{R}^d$ in the Goodman-Pollack model. It was proved by Baryshnikov and Vitale [3] that

$$\varphi(\bar\Pi_d T^{(n)})\ \stackrel{d}{=}\ \varphi(P_{n+1}^{(d)}), \qquad (2.2)$$

for any affine invariant (measurable) functional $\varphi$ on the convex polytopes. Thus, if $f_i$ denotes the number of $i$-faces, (1.1) is equivalent to

$$\mathbf{E}f_i(\bar\Pi_d T^{(n)})=\frac{2^d}{\sqrt{d}}\binom{d}{i+1}\beta_{i,d-1}(\pi\ln n)^{\frac{d-1}{2}}(1+o(1))$$

as $n\to\infty$, which is what was actually proved by Affentranger and Schneider [2]. By (2.2), the bound for the variance in Theorem 1.1 and the law of large numbers in Corollary 1.2 now give bounds and a law of large numbers, respectively, for $\bar\Pi_d T^{(n)}$, i.e.

$$\operatorname{Var}f_i(\bar\Pi_d T^{(n)})\le c_d(\ln n)^{\frac{d-1}{2}}$$

and, for $d\in\mathbb{N}$,

$$f_i(\bar\Pi_d T^{(n)})(\ln n)^{-\frac{d-1}{2}}\ \longrightarrow\ \frac{2^d}{\sqrt{d}}\binom{d}{i+1}\beta_{i,d-1}\,\pi^{\frac{d-1}{2}}$$

in probability, as $n$ tends to infinity.

For further information on the ‘Goodman-Pollack model’ and related work of Vershik and Sporyshev [27], we refer to [2].

3 A new formula of Blaschke-Petkantschin type

We work in the $d$-dimensional Euclidean space $\mathbb{R}^d$ with scalar product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. The $d$-dimensional Lebesgue measure in $\mathbb{R}^d$ will be denoted by $\lambda_d$. We write $S^{d-1}$ for the Euclidean unit sphere and $\sigma$ for spherical Lebesgue measure (the dimension will be clear from the context). Recall that for points $x_1,\dots,x_m\in\mathbb{R}^d$, the convex hull of these points is denoted by $[x_1,\dots,x_m]$. If $P\subset\mathbb{R}^d$ is a (convex) polytope, then we write $\mathcal{F}_k(P)$ for the set of its $k$-dimensional faces and $f_k(P)$ for the number of these $k$-faces, where $k\in\{0,\dots,d\}$. The $k$-dimensional Lebesgue measure in a $k$-dimensional flat $E\subset\mathbb{R}^d$ is denoted by $\lambda_E$. Subspaces are endowed with the induced scalar product and norm. Finally, we write $\Gamma(\cdot)$ for the Gamma function; in particular, $\Gamma(n+1)=n!$ for $n\in\mathbb{N}$.

An important tool in our investigations will be a new formula of Blaschke-Petkantschin type. The classical affine Blaschke-Petkantschin formula (see [22], II.12.3, [25], §6.1) states that

$$\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}f(x_1,\dots,x_d)\prod_{j=1}^{d}d\lambda_d(x_j)=\Gamma(d)\int_{\mathcal{H}^d_{d-1}}\int_{H}\cdots\int_{H}f(x_1,\dots,x_d)\,\lambda_H([x_1,\dots,x_d])\prod_{j=1}^{d}d\lambda_H(x_j)\,d\bar\mu(H)$$

for any nonnegative measurable function $f:(\mathbb{R}^d)^d\to\mathbb{R}$. Here $\bar\mu$ is the motion invariant Haar measure on the affine Grassmannian $\mathcal{H}^d_{d-1}$ of hyperplanes in $\mathbb{R}^d$, normalized such that the measure of all hyperplanes hitting the Euclidean unit ball is equal to $d\kappa_d$. Any hyperplane $H$ with $0\notin H$ can be parameterized (uniquely) by one of its unit normal vectors $u\in S^{d-1}$ and its distance $t\ge 0$ to the origin such that $H=\{y\in\mathbb{R}^d:\langle y,u\rangle=t\}$. Then we have

$$\bar\mu(\cdot)=\int_{S^{d-1}}\int_{0}^{\infty}\mathbf{1}\{tu+u^{\perp}\in\cdot\}\,dt\,d\sigma(u),$$

where $u^{\perp}$ denotes the $(d-1)$-dimensional subspace of $\mathbb{R}^d$ totally orthogonal to $u$.

The affine Blaschke-Petkantschin formula relates the $d$-dimensional volume elements $d\lambda_d(x_j)$ of points $x_1,\dots,x_d$ to the differential $d\bar\mu(H)$ of a hyperplane $H$ and the $(d-1)$-dimensional volume elements $d\lambda_H(x_j)$ of points $x_j\in H$, $j=1,\dots,d$. Intuitively speaking, instead of choosing $d$ random points in $\mathbb{R}^d$, we first choose a random hyperplane and then, in a second step, we choose $d$ random points in this hyperplane. More precisely, the corresponding transformation involves a Jacobian of the form $\lambda_H([x_1,\dots,x_d])$.
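The formula can be sanity-checked numerically in the plane; this check is not part of the paper. Take $d=2$ and $f$ the indicator that both points lie in the unit disk, so the left-hand side is $(\text{area})^2=\pi^2$. On the right-hand side, a line at distance $t\le 1$ from the origin meets the disk in a chord of half-length $r=\sqrt{1-t^2}$, and the two integrals along the line evaluate in closed form to $\iint_{[-r,r]^2}|s_1-s_2|\,ds_1\,ds_2=8r^3/3$; only the $t$-integral is done numerically below.

```python
import math

def bp_rhs(steps=20000):
    """Right-hand side of the planar affine Blaschke-Petkantschin formula for
    f = 1{both points in the unit disk}:
        Gamma(2) * sigma(S^1) * int_0^1 (8/3) (1 - t^2)^{3/2} dt,
    with the two integrals along the line done in closed form."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):               # midpoint rule in t over [0, 1]
        t = (i + 0.5) * h
        r = math.sqrt(1.0 - t * t)       # half chord length
        total += (8.0 / 3.0) * r ** 3 * h
    return 1 * 2 * math.pi * total       # Gamma(2) = 1, sigma(S^1) = 2*pi

lhs = math.pi ** 2                       # (area of unit disk)^2
rhs = bp_rhs()
print(lhs, rhs)                          # the two sides agree
```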

In this paper we need an analogous formula for two sets of points. The points $x_1,\dots,x_{2d-k}$ determine two hyperplanes $H_1,H_2$, which are the affine spans of $x_1,\dots,x_d$ and $x_{d-k+1},\dots,x_{2d-k}$, respectively. The following formula of Blaschke-Petkantschin type relates the $d$-dimensional volume elements $d\lambda_d(x_j)$, $j=1,\dots,2d-k$, to the differentials $d\bar\mu(H_1)$, $d\bar\mu(H_2)$ of the hyperplanes $H_1,H_2$, to the $(d-1)$-dimensional volume elements $d\lambda_{H_1}(x_j)$ and $d\lambda_{H_2}(x_l)$ of points $x_j$, $j=1,\dots,d-k$, and $x_l$, $l=d+1,\dots,2d-k$, which are contained in exactly one hyperplane, and to the $(d-2)$-dimensional volume elements $d\lambda_{H_1\cap H_2}(x_j)$ of points $x_j$, $j=d-k+1,\dots,d$, which are contained in both hyperplanes. Again such a transformation involves a Jacobian which takes into account the angle between the hyperplanes. This angle is defined as the angle between the normal vectors of the hyperplanes. Since we only consider the sine of this angle, the choice of the orientation of the normal vectors need not be specified.

Lemma 3.1. Let $0\le k\le d-1$, and let $g:(\mathbb{R}^d)^{2d-k}\to\mathbb{R}$ be a nonnegative measurable function. Then

$$\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}g(x_1,\dots,x_{2d-k})\prod_{j=1}^{2d-k}d\lambda_d(x_j) \qquad (3.1)$$
$$=\Gamma(d)^2\int_{\mathcal{H}^d_{d-1}}\int_{\mathcal{H}^d_{d-1}}\int_{H_1}\cdots\int_{H_1}\int_{H_1\cap H_2}\cdots\int_{H_1\cap H_2}\int_{H_2}\cdots\int_{H_2}g(x_1,\dots,x_{2d-k})$$
$$\times\;\lambda_{H_1}([x_1,\dots,x_d])\,\lambda_{H_2}([x_{d-k+1},\dots,x_{2d-k}])\,(\sin\varphi)^{-k}$$
$$\times\prod_{j=d+1}^{2d-k}d\lambda_{H_2}(x_j)\prod_{j=d-k+1}^{d}d\lambda_{H_1\cap H_2}(x_j)\prod_{j=1}^{d-k}d\lambda_{H_1}(x_j)\,d\bar\mu(H_1)\,d\bar\mu(H_2),$$

where $\sin\varphi$ denotes the sine of the angle between $H_1$ and $H_2$.


Proof. The Blaschke-Petkantschin formula applied to $x_1,\dots,x_d$ and Fubini's theorem show that

$$\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}g(x_1,\dots,x_{2d-k})\prod_{j=1}^{2d-k}d\lambda_d(x_j)$$
$$=\Gamma(d)\int_{\mathcal{H}^d_{d-1}}\int_{H_1}\cdots\int_{H_1}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}g(x_1,\dots,x_{2d-k})\,\lambda_{H_1}([x_1,\dots,x_d])$$
$$\times\prod_{j=d+1}^{2d-k}d\lambda_d(x_j)\prod_{j=1}^{d}d\lambda_{H_1}(x_j)\,d\bar\mu(H_1).$$

We fix $H_1$ and set

$$I(f)=\int_{H_1}\cdots\int_{H_1}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}f(x_{d-k+1},\dots,x_{2d-k})\prod_{j=d+1}^{2d-k}d\lambda_d(x_j)\prod_{j=d-k+1}^{d}d\lambda_{H_1}(x_j)$$

for nonnegative measurable functions $f:(\mathbb{R}^d)^d\to\mathbb{R}$.

An essential ingredient of our proof is a special case of a generalized linear Blaschke-Petkantschin formula due to Vedel Jensen and Kiêu [14] (see also [26, Theorem 5.6, p. 135]). For all nonnegative measurable functions $h$ and for a (fixed) hyperplane $H$, we thus have

$$\int_{H}\cdots\int_{H}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}h(y_1,\dots,y_{d-1})\prod_{j=k+1}^{d-1}d\lambda_d(y_j)\prod_{j=1}^{k}d\lambda_H(y_j)$$
$$=\Gamma(d)\int_{\mathcal{L}^d_{d-1}}\int_{H\cap L}\cdots\int_{H\cap L}\int_{L}\cdots\int_{L}h(y_1,\dots,y_{d-1})$$
$$\times\;\lambda_L([0,y_1,\dots,y_{d-1}])\,(\sin\varphi)^{-k}\prod_{j=k+1}^{d-1}d\lambda_L(y_j)\prod_{j=1}^{k}d\lambda_{H\cap L}(y_j)\,d\bar\nu(L),$$

where $\varphi$ denotes the angle between $H$ and $L$, and $\bar\nu$ is the rotation invariant Haar measure on the Grassmannian $\mathcal{L}^d_{d-1}$ of $(d-1)$-dimensional linear subspaces of $\mathbb{R}^d$ with total measure $d\kappa_d/2$.

Using a standard argument (cf., e.g., Schneider and Weil [25], §6.1) this immediately gives a generalized affine Blaschke-Petkantschin formula for $I(f)$. Put $H=H_1-x_{2d-k}$ and $x_{d-k+j}-x_{2d-k}=y_j$; then

$$I(f)=\int_{\mathbb{R}^d}\int_{H}\cdots\int_{H}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}f(y_1+x_{2d-k},\dots,y_{d-1}+x_{2d-k},x_{2d-k})$$
$$\times\prod_{j=k+1}^{d-1}d\lambda_d(y_j)\prod_{j=1}^{k}d\lambda_H(y_j)\,d\lambda_d(x_{2d-k})$$
$$=\Gamma(d)\int_{\mathbb{R}^d}\int_{\mathcal{L}^d_{d-1}}\int_{H\cap L}\cdots\int_{H\cap L}\int_{L}\cdots\int_{L}f(y_1+x_{2d-k},\dots,y_{d-1}+x_{2d-k},x_{2d-k})$$
$$\times\;\lambda_L([0,y_1,\dots,y_{d-1}])\,(\sin\varphi)^{-k}\prod_{j=k+1}^{d-1}d\lambda_L(y_j)\prod_{j=1}^{k}d\lambda_{H\cap L}(y_j)\,d\bar\nu(L)\,d\lambda_d(x_{2d-k})$$
$$=\Gamma(d)\int_{\mathcal{H}^d_{d-1}}\int_{H_1\cap H_2}\cdots\int_{H_1\cap H_2}\int_{H_2}\cdots\int_{H_2}f(x_{d-k+1},\dots,x_{2d-k})$$
$$\times\;\lambda_{H_2}([x_{d-k+1},\dots,x_{2d-k}])\,(\sin\varphi)^{-k}\prod_{j=d+1}^{2d-k}d\lambda_{H_2}(x_j)\prod_{j=d-k+1}^{d}d\lambda_{H_1\cap H_2}(x_j)\,d\bar\mu(H_2).$$

Setting $f(x_{d-k+1},\dots,x_{2d-k})=\Gamma(d)\,g(x_1,\dots,x_{2d-k})\,\lambda_{H_1}([x_1,\dots,x_d])$ for fixed $x_1,\dots,x_{d-k}$, one can easily complete the proof of the lemma.

4 Some auxiliary estimates

A random point $X$ in $\mathbb{R}^d$ is said to be normally distributed with positive definite $d\times d$ covariance matrix $\Sigma$ and mean $0$, i.e., $X\sim N(0,\Sigma)$, if it is chosen according to the density

$$f_\Sigma(x)=\frac{1}{\sqrt{(2\pi)^d\det\Sigma}}\,e^{-\frac{1}{2}x^{T}\Sigma^{-1}x},\qquad x\in\mathbb{R}^d.$$

For simplicity, we will exclusively consider the case $\Sigma=\frac{1}{2}I_d$. In this case we put $f_\Sigma(x)=\phi_d(x)$, or simply $f_\Sigma(x)=\phi(x)$ if $d=1$. The one-dimensional normal distribution $N(0,\frac{1}{2})$ is given by

$$\Phi(z):=\int_{-\infty}^{z}\phi(x)\,dx=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{z}e^{-x^2}\,dx,\qquad z\in\mathbb{R}.$$

The corresponding measures, having these functions as densities with respect to the appropriate Lebesgue measure, will be denoted by $d\phi_d(x)$ instead of $\phi_d(x)\,dx$, etc.
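In code, these definitions amount to the following direct restatement via the error function; with this normalization, $\Phi(z)=(1+\operatorname{erf}(z))/2$:

```python
import math

def phi(x):
    """Density of N(0, 1/2): phi(x) = exp(-x^2) / sqrt(pi)."""
    return math.exp(-x * x) / math.sqrt(math.pi)

def Phi(z):
    """Distribution function: Phi(z) = (1/sqrt(pi)) * int_{-inf}^z exp(-x^2) dx
    = (1 + erf(z)) / 2."""
    return 0.5 * (1.0 + math.erf(z))

print(Phi(0.0))    # 0.5 by symmetry
print(Phi(3.0))    # close to 1
```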

We will repeatedly use the following well known asymptotic expansions concerning the density of the normal distribution:

Lemma 4.1. Let $j\ge 0$ and $\gamma>0$. Then, as $h_1\to\infty$,

$$\int_{h_1}^{\infty}(h_2-h_1)^j\phi(h_2)^{\gamma}\,dh_2=\frac{\Gamma(j+1)}{(2\gamma)^{j+1}}\,h_1^{-(j+1)}\phi(h_1)^{\gamma}\left(1+O\!\left(h_1^{-2}\right)\right).$$

Lemma 4.2. For $n\in\mathbb{N}$, $\alpha,\beta\in\mathbb{R}$ and $\gamma>0$,

$$\int_{1}^{\infty}\Phi(h_1)^{n-\alpha}h_1^{\beta}\phi(h_1)^{\gamma}\,dh_1=\Gamma(\gamma)\,2^{\gamma-1}n^{-\gamma}(\ln n)^{\frac{\beta+\gamma-1}{2}}(1+o(1))$$

as $n\to\infty$.

The proof of Lemma 4.1 is immediate by the substitution $t=2\gamma h_1(h_2-h_1)$. The proof of Lemma 4.2 is a direct generalization of an argument given by Affentranger [1].
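As a numerical sanity check (not part of the paper), the ratio of the integral in Lemma 4.1 to its leading term should approach $1$ as $h_1$ grows, with relative error of order $h_1^{-2}$. The sketch below tests the case $j=1$, $\gamma=2$ by a midpoint rule; the truncation point of the improper integral is chosen so that the discarded tail is negligible.

```python
import math

def phi(x):
    """Density of N(0, 1/2)."""
    return math.exp(-x * x) / math.sqrt(math.pi)

def lhs(h1, j, gamma, steps=100000, width=20.0):
    """Midpoint rule for int_{h1}^{inf} (h2 - h1)^j phi(h2)^gamma dh2.

    The integrand decays like exp(-2*gamma*h1*(h2 - h1)), so it is
    negligible beyond h2 = h1 + width/h1."""
    a, b = h1, h1 + width / h1
    h = (b - a) / steps
    s = 0.0
    for i in range(steps):
        h2 = a + (i + 0.5) * h
        s += (h2 - h1) ** j * phi(h2) ** gamma * h
    return s

def leading(h1, j, gamma):
    """Leading term Gamma(j+1) (2 gamma)^{-(j+1)} h1^{-(j+1)} phi(h1)^gamma."""
    return (math.gamma(j + 1) / (2 * gamma) ** (j + 1)
            * h1 ** (-(j + 1)) * phi(h1) ** gamma)

for h1 in (2.0, 5.0, 10.0):
    print(h1, lhs(h1, 1, 2) / leading(h1, 1, 2))   # ratios approach 1 from below
```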

We provide two useful estimates which will be needed later.


Lemma 4.3. Let $j,l\ge 0$ and $\gamma>0$. Then there exists a constant $c>0$ depending only on $j,l,\gamma$ such that, for $h_1\ge 1$,

$$\int_{h_1}^{\infty}\int_{0}^{\pi}(h_2-h_1)^j\phi(h_2)^{\gamma}\,\phi\!\left(\frac{h_1\sin\varphi}{2}\right)(\sin\varphi)^{l}\,d\varphi\,dh_2\le c\,h_1^{-(j+l+2)}\phi(h_1)^{\gamma}.$$

Proof. Since $\sin$ is symmetric with respect to $\frac{\pi}{2}$, by Fubini's theorem and Lemma 4.1 we get

$$\int_{h_1}^{\infty}\int_{0}^{\pi}(h_2-h_1)^j\phi(h_2)^{\gamma}\,\phi\!\left(\frac{h_1\sin\varphi}{2}\right)(\sin\varphi)^{l}\,d\varphi\,dh_2$$
$$\le 2\int_{h_1}^{\infty}(h_2-h_1)^j\phi(h_2)^{\gamma}\,dh_2\int_{0}^{\frac{\pi}{2}}\phi\!\left(\frac{h_1\sin\varphi}{2}\right)(\sin\varphi)^{l}\,d\varphi$$
$$\le 2c_1h_1^{-(j+1)}\phi(h_1)^{\gamma}\int_{0}^{\frac{\pi}{2}}\phi\!\left(\frac{h_1\varphi}{\pi}\right)\varphi^{l}\,d\varphi,$$

where $\frac{2}{\pi}\varphi\le\sin\varphi\le\varphi$ for $0\le\varphi\le\frac{\pi}{2}$ was used in the last step. Here the constant $c_1$ depends only on $j,\gamma$. Substituting $h_1\varphi=t$, we obtain

$$\int_{0}^{\frac{\pi}{2}}\phi\!\left(\frac{h_1\varphi}{\pi}\right)\varphi^{l}\,d\varphi\le h_1^{-(l+1)}\int_{0}^{\infty}\phi\!\left(\frac{t}{\pi}\right)t^{l}\,dt\le c_2h_1^{-(l+1)},$$

where $c_2$ is a constant depending only on $l$. Thus the assertion follows.

Lemma 4.4. Let $j,l\ge 0$ and $\gamma>0$. Then there exists a constant $c>0$ depending only on $j,l,\gamma$ such that, for $h_1\ge 1$,

$$\int_{h_1}^{\infty}\int_{0}^{\pi}(h_2-h_1)^j\phi(h_2)^{\gamma}\,\phi\!\left(\frac{h_1-h_2\cos\varphi}{2\sin\varphi}\right)(\sin\varphi)^{l-1}\,d\varphi\,dh_2\le c\,h_1^{-(j+l+1)}\phi(h_1)^{\gamma}.$$

Proof. We will use Fubini's theorem repeatedly and let $c_1,c_2,\dots$ denote constants depending only on $j,l,\gamma$. Then, first substituting (for fixed $h_2$)

$$u=\frac{h_1-h_2\cos\varphi}{\sin\varphi},\qquad du=\frac{h_2-h_1\cos\varphi}{\sin^2\varphi}\,d\varphi=\sqrt{u^2+h_2^2-h_1^2}\,\sin^{-1}\varphi\,d\varphi,$$

with

$$\sin\varphi=\frac{1}{u^2+h_2^2}\left(h_1u+h_2\sqrt{u^2+h_2^2-h_1^2}\right),$$

and then $h_2=h_1+\frac{s^2}{2h_1}$, we get

$$\int_{h_1}^{\infty}\int_{0}^{\pi}(h_2-h_1)^j\phi(h_2)^{\gamma}\,\phi\!\left(\frac{h_1-h_2\cos\varphi}{2\sin\varphi}\right)(\sin\varphi)^{l-1}\,d\varphi\,dh_2$$
$$=\int_{h_1}^{\infty}\int_{-\infty}^{\infty}(h_2-h_1)^j\phi(h_2)^{\gamma}\,\phi\!\left(\frac{u}{2}\right)\frac{\left(h_1u+h_2\sqrt{u^2+h_2^2-h_1^2}\right)^{l}}{\sqrt{u^2+h_2^2-h_1^2}\,(u^2+h_2^2)^{l}}\,du\,dh_2$$
$$\le c_1h_1^{-j}\phi(h_1)^{\gamma}\int_{-\infty}^{\infty}\int_{0}^{\infty}\phi(s)^{\gamma}\,\phi\!\left(\frac{s^2}{2h_1}\right)^{\gamma}\phi\!\left(\frac{u}{2}\right)s^{2j}h_1^{-2l}\,\frac{\left(h_1u+\left(h_1+\frac{s^2}{2h_1}\right)\sqrt{u^2+s^2+\frac{s^4}{4h_1^2}}\right)^{l}}{\sqrt{u^2+s^2+\frac{s^4}{4h_1^2}}}\,\frac{s}{h_1}\,ds\,du$$
$$\le c_2h_1^{-(j+l+1)}\phi(h_1)^{\gamma}\int_{-\infty}^{\infty}\int_{0}^{\infty}\phi(s)^{\gamma}\,\phi\!\left(\frac{u}{2}\right)s^{2j}\,\frac{s}{\sqrt{u^2+s^2+\frac{s^4}{4h_1^2}}}\left(|u|+\left(1+\frac{s^2}{2h_1^2}\right)\sqrt{u^2+s^2+\frac{s^4}{4h_1^2}}\right)^{l}\,ds\,du$$
$$\le c_2h_1^{-(j+l+1)}\phi(h_1)^{\gamma}\int_{-\infty}^{\infty}\int_{0}^{\infty}\phi(s)^{\gamma}\,\phi\!\left(\frac{u}{2}\right)\left[s^{2j}\left(|u|+(1+s^2)(|u|+s+s^2)\right)^{l}\right]ds\,du$$
$$\le c\,h_1^{-(j+l+1)}\phi(h_1)^{\gamma},$$

since the last double integral is finite.

5 Reduction of Theorem 1.1 to Theorem 1.5

The essential tool for estimating the variance of functionals of random polytopes is the Efron-Stein jackknife inequality [5], see also Efron [4] and Hall [7].

If $S=S(Y_1,\dots,Y_n)$ is any real symmetric function of the independent identically distributed random vectors $Y_j$, $1\le j<\infty$, we set $S_i=S(Y_1,\dots,Y_{i-1},Y_{i+1},\dots,Y_{n+1})$ and $S_{(\cdot)}=\frac{1}{n+1}\sum_{i=1}^{n+1}S_i$. The Efron-Stein jackknife inequality then says

$$\operatorname{Var}S\le\mathbf{E}\sum_{i=1}^{n+1}\left(S_i-S_{(\cdot)}\right)^2=(n+1)\,\mathbf{E}\left(S_{n+1}-S_{(\cdot)}\right)^2. \qquad (5.1)$$

Note that the right-hand side is not decreased if $S_{(\cdot)}$ is replaced by any other function of $Y_1,\dots,Y_{n+1}$.
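As a quick numerical illustration of (5.1) (not part of the paper), take the symmetric statistic $S=\max(Y_1,\dots,Y_n)$ of uniform random variables and replace $S_{(\cdot)}$ by the maximum of all $n+1$ variables, which by the remark above can only increase the right-hand side; both sides are then easy to simulate.

```python
import random

def efron_stein_demo(n=10, trials=50000, seed=0):
    """Monte Carlo check of Var S <= (n+1) E (S - S')^2 for S = max of n
    uniforms, with S_(.) replaced by S' = max of all n+1 uniforms (this
    replacement can only increase the right-hand side)."""
    rng = random.Random(seed)
    s_vals, sq_diffs = [], []
    for _ in range(trials):
        ys = [rng.random() for _ in range(n + 1)]
        s = max(ys[:n])          # the statistic on the first n variables
        s_all = max(ys)          # the statistic on all n+1 variables
        s_vals.append(s)
        sq_diffs.append((s - s_all) ** 2)
    mean = sum(s_vals) / trials
    var = sum((v - mean) ** 2 for v in s_vals) / trials
    rhs = (n + 1) * sum(sq_diffs) / trials
    return var, rhs

var, rhs = efron_stein_demo()
print(var, rhs)   # Var S stays below the Efron-Stein bound
# Exact values for comparison: Var S = n/((n+1)^2 (n+2)), bound = 2/((n+2)(n+3))
```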

We apply this inequality to the random variable $f(P_n)$, where $f(\cdot)$ is a measurable function of convex polytopes. Then $S=f([X_1,\dots,X_n])=f(P_n)$, and we replace $S_{(\cdot)}$ by $f(P_{n+1})$, which is a function of the convex hull of $P_n$ and a further random point $X_{n+1}$. The Efron-Stein jackknife inequality then yields that

$$\operatorname{Var}f(P_n)\le(n+1)\,\mathbf{E}\left(f(P_n)-f(P_{n+1})\right)^2. \qquad (5.2)$$

In the case that $f(\cdot)$ is the number of $i$-faces of $P_n$, we obtain

$$\operatorname{Var}f_i(P_n)\le(n+1)\,\mathbf{E}\left(f_i(P_{n+1})-f_i(P_n)\right)^2.$$

Let $P_n$ be fixed and choose the additional random point $X_{n+1}$. If the point $X_{n+1}$ is contained in $P_n$, the random variable $f_i(P_{n+1})-f_i(P_n)$ equals $0$. If $X_{n+1}\notin P_n$, the relative interiors of some of the $i$-dimensional faces of $P_n$ are contained in the interior of $[P_n,X_{n+1}]$; let $f_i^{-}(X_{n+1})$ be the number of these faces. Also, some of the $i$-dimensional faces of $[P_n,X_{n+1}]$ are not contained in $P_n$; let $f_i^{+}(X_{n+1})$ be the number of those faces. Then we have

$$|f_i([P_n,X_{n+1}])-f_i(P_n)|=\left|f_i^{+}(X_{n+1})-f_i^{-}(X_{n+1})\right|\le f_i^{+}(X_{n+1})+f_i^{-}(X_{n+1}).$$


Since $P_n$ is simplicial with probability one, this number can easily be estimated in terms of the number $F_n(X_{n+1})$ of facets of $P_n$ which can be seen from $X_{n+1}$. Here $F_n(X_{n+1})=0$ if $X_{n+1}$ is contained in $P_n$, and if $X_{n+1}\notin P_n$ then $F_n(X_{n+1})>0$ is the number of facets of $P_n$ which are, up to $(d-2)$-dimensional faces, contained in the interior of the convex hull of $P_n$ and $X_{n+1}$. Now each $i$-dimensional "new" face of $[P_n,X_{n+1}]$ not contained in $P_n$ is the convex hull of $X_{n+1}$ and an $(i-1)$-dimensional face of $P_n$. Since this $(i-1)$-dimensional face is also a face of a facet of $P_n$ which can be seen from $X_{n+1}$, and each facet is a simplex, we obtain

$$f_i^{+}(X_{n+1})\le\binom{d}{i}F_n(X_{n+1}).$$

On the other hand, each $i$-dimensional face of $P_n$ which is, up to $(i-1)$-dimensional faces, contained in the interior of $[P_n,X_{n+1}]$ is also a face of a facet contained in the interior of $[P_n,X_{n+1}]$. Hence

$$f_i^{-}(X_{n+1})\le\binom{d}{i+1}F_n(X_{n+1}),$$

and combining these estimates proves

$$\mathbf{E}\left(f_i(P_{n+1})-f_i(P_n)\right)^2\le\binom{d+1}{i+1}^2\mathbf{E}F_n(X_{n+1})^2. \qquad (5.3)$$

Thus each estimate for the second moment of $F_n(X_{n+1})$ yields an estimate for $\operatorname{Var}f_i(P_n)$, and hence Theorem 1.1 follows from Theorem 1.5.

6 Proof of Theorem 1.5

6.1 Asymptotic expansion of the expectation

We start with the proof of the first part of Theorem 1.5. The case $d=1$ is included by proper interpretation of the subsequent arguments. Choose $n+1$ independent normally distributed random points $X_1,\dots,X_n,X$ in $\mathbb{R}^d$. The convex hull of the first $n$ points is a Gaussian polytope $P_n$, and with probability one $P_n$ is simplicial. For $I\subset\{1,\dots,n\}$ with $|I|=d$, denote by $F_I$ the convex hull of $\{X_i:i\in I\}$, which is a $(d-1)$-dimensional simplex. The affine hull of $F_I$ is denoted by $H(F_I)$. With probability one, this affine hull is a hyperplane which dissects $\mathbb{R}^d$ into two (closed) halfspaces. The halfspace which contains the origin will be denoted by $H^0(F_I)$, the other by $H^+(F_I)$. The origin is contained in exactly one halfspace with probability one. In the following, we want to assume that $P_n$ contains the origin. This happens with high probability, since by Wendel's theorem [28]

$$\mathbf{P}(0\notin P_n)=O(n^d2^{-n}),$$

and thus the condition that $P_n$ contains the origin can be inserted by adding a suitable error term. Moreover, we can also assume that the points $X_1,\dots,X_n,X$ are in general relative position (i.e. any subset of at most $d+1$ of these random points is affinely independent).
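For distributions symmetric about the origin, such as the Gaussian sample considered here, Wendel's theorem in fact gives the probability in closed form, $\mathbf{P}(0\notin P_n)=2^{-(n-1)}\sum_{k=0}^{d-1}\binom{n-1}{k}$, which is consistent with the $O(n^d2^{-n})$ bound above. The snippet below (illustrative only, with a hypothetical helper name) evaluates the formula.

```python
from math import comb

def p_origin_not_in_hull(n, d):
    """Wendel's theorem: for n points drawn i.i.d. from any distribution
    symmetric about 0, P(0 not in P_n) = 2^{-(n-1)} * sum_{k=0}^{d-1} C(n-1, k)."""
    return sum(comb(n - 1, k) for k in range(d)) / 2 ** (n - 1)

print(p_origin_not_in_hull(3, 2))        # classical planar value -> 0.75
for n in (10, 20, 40):
    p = p_origin_not_in_hull(n, 2)
    print(n, p, p / (n ** 2 * 2 ** -n))  # bounded ratio: consistent with O(n^d 2^{-n})
```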

We are interested in the number of facets of $P_n$ which can be seen from the additional random point $X\notin P_n$. Denote the set of these facets by $\mathcal{F}_n(X)$, i.e.,

$$\mathcal{F}_n(X)=\mathcal{F}(X_1,\dots,X_n;X)=\left\{F_I:P_n\subset H^0(F_I),\,X\in H^+(F_I),\,I\subset\{1,\dots,n\},\,|I|=d\right\}.$$

Here we can define $\mathcal{F}_n(X)$ as the empty set if the origin is not contained in the interior of $P_n$ or if the random points are not in general relative position. Similar definitions can be given for deterministic points $x_1,\dots,x_n$ and $x$ in general relative position such that the convex hull of $x_1,\dots,x_n$ contains the origin. When applying Wendel's theorem in considering $\mathbf{E}F_n(X)$, the error term is of the order $n^{2d}2^{-n}=O(c^{-n})$ with a suitable constant $c>1$, since $F_n(X)$ is bounded by $\binom{n}{d}$. Using this we have

$$\mathbf{E}F_n(X)=\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\sum_{I}\mathbf{1}\{F_I\in\mathcal{F}_n(x)\}\prod_{j=1}^{n}d\phi_d(x_j)\,d\phi_d(x)+O(c^{-n}), \qquad (6.1)$$

where the summation extends over all subsets $I\subset\{1,\dots,n\}$ with $|I|=d$. Denote by $F_1$ the convex hull of $x_1,\dots,x_d$. Then

$$\mathbf{E}F_n(X)=\binom{n}{d}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\mathbf{1}\{F_1\in\mathcal{F}_n(x)\}\prod_{j=1}^{n}d\phi_d(x_j)\,d\phi_d(x)+O(c^{-n}).$$

The probability content of the halfspace $H^+(F_1)$ is

$$\int_{H^+(F_1)}d\phi_d(x)=1-\Phi(h_1),$$

where $h_1$ is the distance of $H(F_1)$ to the origin. If $F_1\in\mathcal{F}_n(X)$, then $X$ is contained in the halfspace $H^+(F_1)$ with probability content $1-\Phi(h_1)$, and the random points $X_j$, $j\in\{d+1,\dots,n\}$, are contained in the halfspace $H^0(F_1)$ with probability content $\Phi(h_1)$. Hence we obtain

$$\mathbf{E}F_n(X)=\binom{n}{d}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\Phi(h_1)^{n-d}(1-\Phi(h_1))\prod_{j=1}^{d}d\phi_d(x_j)+O(c^{-n}).$$

Parameterizing the hyperplane $H(F_1)=:H_1$ by its distance $h_1\ge 0$ from the origin and its unit normal vector $u_1$ in the form $H_1=H(u_1,h_1)$, and using the affine Blaschke-Petkantschin formula, we find that

$$\mathbf{E}F_n(X)=\Gamma(d)\binom{n}{d}\int_{S^{d-1}}\int_{0}^{\infty}\Phi(h_1)^{n-d}(1-\Phi(h_1))$$
$$\times\left(\int_{H_1}\cdots\int_{H_1}\lambda_{H_1}([x_1,\dots,x_d])\prod_{j=1}^{d}\phi_d(x_j)\,d\lambda_{H_1}(x_j)\right)dh_1\,d\sigma(u_1)+O(c^{-n}).$$

The inner integral (in brackets) is the expected volume $\mathbf{E}V_{d-1}(P_d^{(d-1)})$ of a random $(d-1)$-dimensional Gaussian simplex in $\mathbb{R}^{d-1}$ times $\phi(h_1)^d$, which gives

$$\mathbf{E}F_n(X)=\Gamma(d)\binom{n}{d}\mathbf{E}V_{d-1}(P_d^{(d-1)})\int_{S^{d-1}}\int_{0}^{\infty}\Phi(h_1)^{n-d}(1-\Phi(h_1))\,\phi(h_1)^{d}\,dh_1\,d\sigma(u_1)+O(c^{-n}).$$

The expected volume of a random Gaussian simplex was computed explicitly by Miles [18]. It now follows from Lemma 4.1 and Lemma 4.2 that

$$\mathbf{E}F_n(X)=2^{d-1}\kappa_d\,\Gamma(d+1)\,\mathbf{E}V_{d-1}(P_d^{(d-1)})\,n^{-1}(\ln n)^{\frac{d-1}{2}}(1+o(1)). \qquad (6.2)$$

This proves the first part of Theorem 1.5.


6.2 Estimate of the variance

The main part of the proof is devoted to estimating the second moment of $F_n(X)$. In the case $d=1$, we have $F_n(X)=F_n(X)^2$, hence the assertion follows. Now let $d\ge 2$.

As in (6.1) we have

$$\mathbf{E}F_n(X)^2=\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\left(\sum_{I}\mathbf{1}\{F_I\in\mathcal{F}_n(x)\}\right)^2\prod_{j=1}^{n}d\phi_d(x_j)\,d\phi_d(x)+O(c^{-n})$$

with some $c>1$. The summation extends over all subsets $I\subset\{1,\dots,n\}$ with $|I|=d$. We expand the integrand and get

$$\mathbf{E}F_n(X)^2=\sum_{I}\sum_{J}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\mathbf{1}\{F_I,F_J\in\mathcal{F}_n(x)\}\prod_{j=1}^{n}d\phi_d(x_j)\,d\phi_d(x)+O(c^{-n}), \qquad (6.3)$$

where the summation extends over all subsets $I,J\subset\{1,\dots,n\}$ with $|I|=|J|=d$. If we fix the number $k=|I\cap J|\in\{0,\dots,d\}$, then the corresponding term in (6.3) depends only on $k$ and not on the particular choice of $I$ and $J$. For given $k\in\{0,\dots,d\}$, we put $F_1=[X_1,\dots,X_d]$ and $F_2^{(k)}=[X_{d-k+1},\dots,X_{2d-k}]$. Note that for $k=d$ we have $F_1=F_2^{(d)}$. Hence $\mathbf{E}F_n(X)^2$ can be rewritten as

$$\mathbf{E}F_n(X)^2=\sum_{k=0}^{d}\binom{n}{d}\binom{d}{k}\binom{n-d}{d-k}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\mathbf{1}\{F_1,F_2^{(k)}\in\mathcal{F}_n(x)\}\prod_{j=1}^{n}d\phi_d(x_j)\,d\phi_d(x)+O(c^{-n}).$$

The summand corresponding to $k=d$ is just $\mathbf{E}F_n(X)$, and thus (6.2) yields

$$\mathbf{E}F_n(X)^2\le c_1\sum_{k=0}^{d-1}n^{2d-k}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\mathbf{1}\{F_1,F_2^{(k)}\in\mathcal{F}_n(x)\}\prod_{j=1}^{n}d\phi_d(x_j)\,d\phi_d(x)+O\!\left(n^{-1}(\ln n)^{\frac{d-1}{2}}\right);$$

here and in the following $c_1,c_2,\dots$ denote constants which are independent of $n$. The summand corresponding to $k=d$ also yields the asserted lower bound for $\mathbf{E}F_n(X)^2$.

Let $h_1,h_2$ be the distances to the origin and $u_1,u_2$ the unit normal vectors of $H(F_1)$, $H(F_2^{(k)})$, respectively, such that $H(F_1)=H(u_1,h_1)$ and $H(F_2^{(k)})=H(u_2,h_2)$. Since the integrand is symmetric in $F_1$ and $F_2^{(k)}$, we restrict our integration to $h_1\le h_2$. Thus we get

$$\mathbf{E}F_n(X)^2\le c_2\sum_{k=0}^{d-1}n^{2d-k}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\mathbf{1}\{h_1\le h_2\}\,\mathbf{1}\{F_1,F_2^{(k)}\in\mathcal{F}_n(x)\}\prod_{j=1}^{n}d\phi_d(x_j)\,d\phi_d(x)+O\!\left(n^{-1}(\ln n)^{\frac{d-1}{2}}\right).$$

If $F_1,F_2^{(k)}\in\mathcal{F}_n(x)$, then the points $x_{2d-k+1},\dots,x_n$ are contained in $H^0(F_1)\cap H^0(F_2^{(k)})$, and the corresponding measure of the set of these points is at most $\Phi(h_1)^{n-2d+k}$. Moreover, $F_1,F_2^{(k)}\in\mathcal{F}_n(x)$ implies that $x$ is contained in $H^+(F_1)\cap H^+(F_2^{(k)})$. Denote the distance of $H^+(F_1)\cap H^+(F_2^{(k)})$ to the origin by $h_{12}$. Then the corresponding measure is at most $1-\Phi(h_{12})$. This yields

$$\mathbf{E}F_n(X)^2\le c_2\sum_{k=0}^{d-1}n^{2d-k}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}\mathbf{1}\{h_1\le h_2\}\,\Phi(h_1)^{n-2d+k}(1-\Phi(h_{12}))\prod_{j=1}^{2d-k}d\phi_d(x_j)+O\!\left(n^{-1}(\ln n)^{\frac{d-1}{2}}\right).$$
