
https://doi.org/10.1007/s10959-020-01017-w

A General Version of Price’s Theorem

A Tool for Bounding the Expectation of Nonlinear Functions of Gaussian Random Vectors

Felix Voigtlaender1,2

Received: 29 September 2019 / Revised: 13 May 2020 / Published online: 26 June 2020

© The Author(s) 2020

Abstract

Assume that $X_\Sigma \in \mathbb{R}^n$ is a centered random vector following a multivariate normal distribution with positive definite covariance matrix $\Sigma$. Let $g : \mathbb{R}^n \to \mathbb{C}$ be measurable and of moderate growth, say $|g(x)| \lesssim (1 + |x|)^N$. We show that the map $\Sigma \mapsto \mathbb{E}[g(X_\Sigma)]$ is smooth, and we derive convenient expressions for its partial derivatives, in terms of certain expectations $\mathbb{E}[(\partial^\alpha g)(X_\Sigma)]$ of partial (distributional) derivatives of $g$.

As we discuss, this result can be used to derive bounds for the expectation $\mathbb{E}[g(X_\Sigma)]$ of a nonlinear function $g(X_\Sigma)$ of a Gaussian random vector $X_\Sigma$ with possibly correlated entries. For the case when $g(x) = g_1(x_1) \cdots g_n(x_n)$ has tensor-product structure, the above result is known in the engineering literature as Price's theorem, originally published in 1958. For dimension $n = 2$, it was generalized in 1964 by McMahon to the general case $g : \mathbb{R}^2 \to \mathbb{C}$. Our contribution is to unify these results, and to give a mathematically fully rigorous proof. Precisely, we consider a normally distributed random vector $X_\Sigma \in \mathbb{R}^n$ of arbitrary dimension $n \in \mathbb{N}$, and we allow the nonlinearity $g$ to be a general tempered distribution. To this end, we replace the expectation $\mathbb{E}[g(X_\Sigma)]$ by the dual pairing $\langle g, \phi_\Sigma \rangle_{\mathcal{S}', \mathcal{S}}$, where $\phi_\Sigma$ denotes the probability density function of $X_\Sigma$.

Keywords Normal distribution · Gaussian random variables · Nonlinear functions of Gaussian random vectors · Expectation · Price's theorem

AMS subject classification 60G15 · 62H20


Felix Voigtlaender felix@voigtlaender.xyz

1 Katholische Universität Eichstätt-Ingolstadt, Lehrstuhl Wissenschaftliches Rechnen, Ostenstraße 26, 85072 Eichstätt, Germany

2 Technische Universität Berlin, Institut für Mathematik, Straße des 17. Juni 136, 10623 Berlin, Germany


1 Introduction

In this introduction, we first present a precise formulation of our version of Price's theorem, the proof of which we defer to Sect. 4. We then briefly discuss the relevance of this theorem: in a nutshell, it is a useful tool for estimating the expectation of a nonlinear function $g(X_\Sigma)$ of a Gaussian random vector $X_\Sigma \in \mathbb{R}^n$ with possibly correlated entries. In Sect. 3, we consider a specific example application which illustrates this.

The relation of our result to the classical versions [6,8] of Price's theorem is discussed in Sect. 2.

1.1 Our Version of Price's Theorem

Let us denote by
$$\mathrm{Sym}_n := \big\{ A \in \mathbb{R}^{n \times n} : A^T = A \big\}$$
the set of symmetric matrices, and by
$$\mathrm{Sym}_n^+ := \big\{ A \in \mathrm{Sym}_n : \forall x \in \mathbb{R}^n \setminus \{0\} : \langle x, A x \rangle > 0 \big\}$$
the set of (symmetric) positive definite matrices, where we write $\langle x, y \rangle := x^T y$ for the standard scalar product of $x, y \in \mathbb{R}^n$ and $|x| := \sqrt{\langle x, x \rangle}$ for the usual Euclidean norm. For $\Sigma \in \mathrm{Sym}_n^+$, let
$$\phi_\Sigma : \mathbb{R}^n \to (0, \infty), \quad x \mapsto \big( (2\pi)^n \cdot \det \Sigma \big)^{-1/2} \cdot e^{-\frac{1}{2} \langle x, \Sigma^{-1} x \rangle}, \tag{1.1}$$
and note that $\phi_\Sigma$ is the density function of a centered random vector $X_\Sigma \in \mathbb{R}^n$ which follows a joint normal distribution with covariance matrix $\Sigma$, that is, $X_\Sigma \sim N(0, \Sigma)$; see for instance [5, Chapter 5, Theorem 5.1].
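For readers who want to experiment numerically, the following Python sketch (my own illustration, not part of the paper; the covariance matrix and the evaluation point are arbitrary choices) evaluates the density from Equation (1.1) directly and compares it with scipy.stats.multivariate_normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

def phi_sigma(x, Sigma):
    """Evaluate the Gaussian density of Eq. (1.1) at x for covariance Sigma."""
    n = Sigma.shape[0]
    quad = x @ np.linalg.solve(Sigma, x)                 # <x, Sigma^{-1} x>
    norm_const = np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])               # arbitrary positive definite matrix
x = np.array([0.3, -1.2])
print(phi_sigma(x, Sigma))                               # direct evaluation of (1.1)
print(multivariate_normal(mean=np.zeros(2), cov=Sigma).pdf(x))   # should agree
```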

Let us briefly recall the notion of Schwartz functions and tempered distributions, which will play an important role in what follows. First, with $\mathbb{N} = \{1, 2, \dots\}$ and $\mathbb{N}_0 = \{0\} \cup \mathbb{N}$, any $\alpha \in \mathbb{N}_0^n$ will be called a multiindex, and we write $|\alpha| = \alpha_1 + \cdots + \alpha_n$ as well as $\partial^\alpha = \partial_{x_1}^{\alpha_1} \cdots \partial_{x_n}^{\alpha_n}$, and $z^\alpha = z_1^{\alpha_1} \cdots z_n^{\alpha_n}$ for $z \in \mathbb{C}^n$. Finally, given $\alpha, \beta \in \mathbb{N}_0^n$, we write $\beta \le \alpha$ if $\beta_j \le \alpha_j$ for all $j \in \{1, \dots, n\}$. With this notation, it is not hard to see that the density function $\phi_\Sigma$ from above belongs to the Schwartz class
$$\mathcal{S}(\mathbb{R}^n) = \big\{ g \in C^\infty(\mathbb{R}^n; \mathbb{C}) : \forall \alpha \in \mathbb{N}_0^n \ \forall N \in \mathbb{N} \ \exists C > 0 \ \forall x \in \mathbb{R}^n : |\partial^\alpha g(x)| \le C \cdot (1 + |x|)^{-N} \big\}$$
of smooth, rapidly decaying functions; see for instance [3, Chapter 8] for more details on this space. In fact, $\phi_\Sigma(x) = c \cdot e^{-\frac{1}{2} \langle \Sigma^{-1/2} x, \Sigma^{-1/2} x \rangle} = c \cdot \theta(\Sigma^{-1/2} x)$ for a suitable constant $c > 0$, where $\theta$ is the usual Gaussian function $\theta(x) = e^{-\frac{1}{2}|x|^2}$, which is well-known to belong to $\mathcal{S}(\mathbb{R}^n)$.

The space $\mathcal{S}'(\mathbb{R}^n)$ of tempered distributions consists of all linear functionals $g : \mathcal{S}(\mathbb{R}^n) \to \mathbb{C}$ which are continuous with respect to the usual topology on $\mathcal{S}(\mathbb{R}^n)$; see [3, Sections 8.1 and 9.2] for the details. Since $\phi_\Sigma \in \mathcal{S}(\mathbb{R}^n)$, given any tempered distribution $g \in \mathcal{S}'(\mathbb{R}^n)$, the function
$$\Phi_g : \mathrm{Sym}_n^+ \to \mathbb{C}, \quad \Sigma \mapsto \langle g, \phi_\Sigma \rangle_{\mathcal{S}', \mathcal{S}} \tag{1.2}$$
is well-defined, where $\langle \cdot, \cdot \rangle_{\mathcal{S}', \mathcal{S}}$ denotes the (bilinear) dual pairing between $\mathcal{S}'(\mathbb{R}^n)$ and $\mathcal{S}(\mathbb{R}^n)$. As an important special case, note that if $g : \mathbb{R}^n \to \mathbb{C}$ is measurable and of moderate growth, in the sense that $x \mapsto (1 + |x|)^{-N} \cdot g(x) \in L^1(\mathbb{R}^n)$ for some $N \in \mathbb{N}$, then
$$\Phi_g(\Sigma) = \mathbb{E}\big[ g(X_\Sigma) \big] \tag{1.3}$$
is just the expectation of $g(X_\Sigma)$, where $X_\Sigma \sim N(0, \Sigma)$. Here, we identify as usual the function $g$ with the tempered distribution $\mathcal{S}(\mathbb{R}^n) \to \mathbb{C}, \ \varphi \mapsto \int g(x) \varphi(x) \, dx$.
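To make Equation (1.3) concrete, the following numerical sketch (my own illustration, not part of the paper; the covariance matrix, the function $g$, and the sample size are arbitrary choices) treats the moderate-growth function $g(x) = |x|^2$ in dimension $n = 2$: the dual pairing is then just the integral $\int g(x) \phi_\Sigma(x)\,dx$, which equals $\mathbb{E}[g(X_\Sigma)] = \operatorname{tr}(\Sigma)$, and both a numerical quadrature and a Monte Carlo mean reproduce this value.

```python
import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal

Sigma = np.array([[1.0, 0.7], [0.7, 2.0]])        # arbitrary positive definite covariance
rv = multivariate_normal(mean=np.zeros(2), cov=Sigma)

def g(x1, x2):
    return x1**2 + x2**2                          # moderate growth: g(x) = |x|^2

# <g, phi_Sigma> as an integral, cf. Eq. (1.2)/(1.3)
pairing, _ = integrate.dblquad(lambda x2, x1: g(x1, x2) * rv.pdf([x1, x2]),
                               -np.inf, np.inf, -np.inf, np.inf)

# E[g(X_Sigma)] by Monte Carlo, and the exact value tr(Sigma)
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)
mc = np.mean(np.sum(samples**2, axis=1))

print(pairing, mc, np.trace(Sigma))               # all approximately equal
```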

The main goal of this note is to show for each $g \in \mathcal{S}'(\mathbb{R}^n)$ that the function $\Phi_g$ is smooth, and to derive an explicit formula for its partial derivatives. Thus, at least in the case of Equation (1.3), our goal is to calculate the partial derivatives of the expectation of a nonlinear function $g$ of a Gaussian random vector $X_\Sigma \sim N(0, \Sigma)$, as a function of the covariance matrix $\Sigma$ of the vector $X_\Sigma$.

In order to achieve a convenient statement of this result, we first introduce a bit more notation: Write $\underline{n} := \{1, \dots, n\}$, and let
$$I := \big\{ (i,j) \in \underline{n} \times \underline{n} : i \le j \big\}, \qquad I_= := \big\{ (i,i) : i \in \underline{n} \big\}, \qquad I_< := \big\{ (i,j) \in \underline{n} \times \underline{n} : i < j \big\}, \tag{1.4}$$

so that $I = I_= \cup I_<$. Since for $n > 1$ the sets $\mathrm{Sym}_n$ and $\mathrm{Sym}_n^+$ have empty interior in $\mathbb{R}^{n \times n}$ (because they only consist of symmetric matrices), it does not make sense to talk about partial derivatives of a function $\Phi : \mathrm{Sym}_n^+ \to \mathbb{C}$, unless one interprets $\mathrm{Sym}_n^+$ as an open subset of the vector space $\mathrm{Sym}_n$, rather than of $\mathbb{R}^{n \times n}$. As a means of fixing a coordinate system on $\mathrm{Sym}_n$, we therefore parameterize the set of symmetric matrices by their "upper half"; precisely, we consider the following isomorphism $\Gamma$ between $\mathbb{R}^I$ and $\mathrm{Sym}_n$:
$$\Gamma : \mathbb{R}^I \to \mathrm{Sym}_n, \quad \big( A_{i,j} \big)_{1 \le i \le j \le n} \mapsto \sum_{i \le j} A_{i,j} E_{i,j} + \sum_{i > j} A_{j,i} E_{i,j}. \tag{1.5}$$
Here, we denote by $(E_{i,j})_{i,j \in \underline{n}}$ the standard basis of $\mathbb{R}^{n \times n}$, meaning that $(E_{i,j})_{k,\ell} = \delta_{i,k} \cdot \delta_{j,\ell}$ with the usual Kronecker delta $\delta_{i,k}$. Below, instead of calculating the partial derivatives of $\Phi_g$, we will consider the function $\Phi_g \circ \Gamma|_U$, where $U := \Gamma^{-1}\big( \mathrm{Sym}_n^+ \big) \subset \mathbb{R}^I$ is open.

In order to achieve a concise formulation of our version of Price's theorem, we need two non-standard notions regarding multiindices $\beta = \big( \beta_{(i,j)} \big)_{(i,j) \in I} \in \mathbb{N}_0^I$. Namely, we define the flattened version of $\beta$ as
$$\beta^\flat := \sum_{(i,j) \in I} \beta_{(i,j)} \, (e_i + e_j) \in \mathbb{N}_0^n \quad \text{with the standard basis } (e_1, \dots, e_n) \text{ of } \mathbb{R}^n, \tag{1.6}$$
and in addition to $|\beta| = \sum_{(i,j) \in I} \beta_{(i,j)}$, we will also use
$$|\beta|_= := \sum_{(i,j) \in I_=} \beta_{(i,j)} = \sum_{i \in \underline{n}} \beta_{(i,i)}. \tag{1.7}$$

With this notation, our main result reads as follows:

Theorem 1 (Generalized version of Price's theorem) Let $g \in \mathcal{S}'(\mathbb{R}^n)$ be arbitrary. Then the function $\Phi_g \circ \Gamma|_U : U \to \mathbb{C}$ is smooth, and its partial derivatives are given by
$$\partial^\beta \big( \Phi_g \circ \Gamma \big)(A) = (1/2)^{|\beta|_=} \cdot \big\langle \partial^{\beta^\flat} g, \, \phi_{\Gamma(A)} \big\rangle_{\mathcal{S}', \mathcal{S}} \qquad \forall A \in U = \Gamma^{-1}\big( \mathrm{Sym}_n^+ \big) \ \forall \beta \in \mathbb{N}_0^I. \tag{1.8}$$
Here $\partial^{\beta^\flat} g$ denotes the usual distributional derivative of $g$.

Remark 1 Note that even if one is in the setting of Equation (1.3), where $g : \mathbb{R}^n \to \mathbb{C}$ is of moderate growth, so that $\Phi_g(\Sigma) = \mathbb{E}[g(X_\Sigma)]$ is a "classical" expectation, it need not be the case that the derivative $\partial^{\beta^\flat} g$ is given by a function, let alone one of moderate growth. Therefore, it really is useful to consider the formalism of (tempered) distributions.
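To illustrate Equation (1.8) numerically (this check is my own addition and not part of the paper), take $n = 2$ and the moderate-growth nonlinearity $g(x) = x_1^3 x_2$, for which Isserlis' theorem gives $\mathbb{E}[g(X_\Sigma)] = 3 \, \Sigma_{1,1} \Sigma_{1,2}$. For the multiindex $\beta \in \mathbb{N}_0^I$ with $\beta_{(1,2)} = 1$ and all other entries zero, the prefactor $(1/2)^{|\beta|_=}$ equals $1$, so Theorem 1 predicts that the derivative of $\mathbb{E}[g(X_\Sigma)]$ with respect to the off-diagonal entry $\Sigma_{1,2}$ equals $\mathbb{E}[(\partial_1 \partial_2 g)(X_\Sigma)] = \mathbb{E}[3 X_1^2] = 3 \, \Sigma_{1,1}$. A finite-difference check of a Monte Carlo estimate (with common random numbers, and arbitrary sample size and covariance matrix) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((1_000_000, 2))           # common random numbers

def expect_g(Sigma):
    """Monte Carlo estimate of E[g(X_Sigma)] for g(x) = x1^3 * x2."""
    L = np.linalg.cholesky(Sigma)
    X = Z @ L.T                                   # X ~ N(0, Sigma)
    return np.mean(X[:, 0] ** 3 * X[:, 1])

Sigma = np.array([[1.5, 0.4], [0.4, 1.0]])        # arbitrary positive definite matrix
h = 1e-3
Sp = Sigma.copy(); Sp[0, 1] += h; Sp[1, 0] += h   # perturb the off-diagonal entry
Sm = Sigma.copy(); Sm[0, 1] -= h; Sm[1, 0] -= h

lhs = (expect_g(Sp) - expect_g(Sm)) / (2 * h)     # central difference in Sigma_{1,2}
rhs = 3 * Sigma[0, 0]                             # E[(d1 d2 g)(X_Sigma)] = 3 * Sigma_{1,1}
print(lhs, rhs)                                   # both close to 4.5
```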

1.2 Relevance of Price’s Theorem

An important application of Price's theorem is as follows: For certain values of the covariance matrix $\Sigma$, it is usually easy to precisely calculate the expectation $\mathbb{E}[g(X_\Sigma)]$, for example if $\Sigma$ is a diagonal matrix, in which case the entries of $X_\Sigma$ are independent. As a complement to such special cases where explicit calculations are possible, Price's theorem can be used to obtain (bounds for) the partial derivatives of the map $\Sigma \mapsto \mathbb{E}[g(X_\Sigma)]$. In combination with standard results from multivariable calculus (sketched below), one can then obtain bounds for $\mathbb{E}[g(X_\Sigma)]$ for general covariance matrices. Thus, Price's theorem is a tool for estimating the expectation of a nonlinear function $g(X_\Sigma)$ of a Gaussian random vector $X_\Sigma$, even if the entries of $X_\Sigma$ are correlated.
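One way to make this calculus step explicit is the following sketch (my own formulation, not spelled out in this form in the paper; it assumes $g$ of moderate growth so that $\Phi_g(\Sigma) = \mathbb{E}[g(X_\Sigma)]$, and uses that $\mathrm{Sym}_n^+$ is convex, so the segment between two covariance matrices stays in $\mathrm{Sym}_n^+$). Writing $A_t := \Gamma^{-1}\big( \Sigma_0 + t (\Sigma_1 - \Sigma_0) \big)$ for $t \in [0,1]$, the fundamental theorem of calculus and the chain rule give
$$\mathbb{E}[g(X_{\Sigma_1})] - \mathbb{E}[g(X_{\Sigma_0})] = \int_0^1 \frac{d}{dt} \big( \Phi_g \circ \Gamma \big)(A_t) \, dt = \sum_{(i,j) \in I} \big( (\Sigma_1)_{i,j} - (\Sigma_0)_{i,j} \big) \int_0^1 \partial_{A_{i,j}} \big( \Phi_g \circ \Gamma \big)(A_t) \, dt.$$
Hence, if Theorem 1 yields a uniform bound $\big| \partial_{A_{i,j}} \big( \Phi_g \circ \Gamma \big)(A_t) \big| \le C$ along the segment, then $\big| \mathbb{E}[g(X_{\Sigma_1})] - \mathbb{E}[g(X_{\Sigma_0})] \big| \le C \cdot \sum_{(i,j) \in I} \big| (\Sigma_1)_{i,j} - (\Sigma_0)_{i,j} \big|$.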

An example of this type of reasoning will be given in Sect. 3. There, we apply our version of Price's theorem to show that if $f(x) = f_\tau(x)$ "clips" $x \in \mathbb{R}$ to the interval $[-\tau, \tau]$ and if $(X_\alpha, Y_\alpha) \sim N(0, \Sigma_\alpha)$ for $\Sigma_\alpha = \begin{pmatrix} 1 & \alpha \\ \alpha & 1 \end{pmatrix}$, then the map $F_\tau : [0,1] \to \mathbb{R}, \ \alpha \mapsto \mathbb{E}[f_\tau(X_\alpha) f_\tau(Y_\alpha)]$ is convex and satisfies $F_\tau(0) = 0$. Thus, $F_\tau(\alpha) \le \alpha \cdot F_\tau(1)$, where $F_\tau(1)$ is easy to bound since $X_1 = Y_1$ almost surely. These facts constitute important ingredients in [4]; see Theorem A.4 and the proof of Lemma A.3 in that paper.

2 Comparison with the Classical Results

The original form of Price's theorem as stated in [8] only concerns the case when the nonlinearity $g(x) = g_1(x_1) \cdots g_n(x_n)$ has a tensor-product structure. In this special case, the formula derived in [8] is identical to the one given by Theorem 1, up to notational differences.

This tensor-product structure assumption concerning $g$ was removed by McMahon [6] and Papoulis [7] in the case of Gaussian random vectors of dimension $n = 2$ with covariance matrix of the form $\Sigma = \Sigma_\alpha = \begin{pmatrix} 1 & \alpha \\ \alpha & 1 \end{pmatrix}$ with $\alpha \in (-1, 1)$. Precisely, if $X_\alpha \sim N(0, \Sigma_\alpha)$, then [6] states for $g : \mathbb{R}^2 \to \mathbb{C}$ that
$$\Psi_g : (-1, 1) \to \mathbb{C}, \quad \alpha \mapsto \mathbb{E}[g(X_\alpha)] \quad \text{is smooth with} \quad \Psi_g^{(n)}(\alpha) = \mathbb{E}\Big[ \frac{\partial^{2n} g}{\partial x_1^n \, \partial x_2^n}(X_\alpha) \Big]. \tag{2.1}$$
Based on the work by Papoulis, Brown [1] showed that Price's theorem holds for Gaussian random vectors $X_\Sigma$ of general dimensionality and unit variances $\Sigma_{i,i} = \mathbb{E}[(X_\Sigma)_i^2] = 1$, if one takes derivatives with respect to the covariances $\Sigma_{i,j} = \mathbb{E}[(X_\Sigma)_i (X_\Sigma)_j]$ where $i \ne j$. In this setting, Brown also showed that Price's theorem characterizes the normal distribution; more precisely, if $(X_\Sigma)_\Sigma$ is a (sufficiently nice) family of random vectors with $\mathrm{Cov}(X_\Sigma) = \Sigma$ which satisfies the conclusion of Price's theorem, then $X_\Sigma \sim N(0, \Sigma)$ is necessarily normally distributed. This extends and corrects the original work of Price [8], where a similar claim was made.

Finally, we mention the article [9] in which a quantum-mechanical version of Price's theorem is established. In Sect. 2 of that paper, the author reviews the "classical" case of Price's theorem, and essentially derives the same formulas as in Theorem 1.

Despite their great utility, the existing versions of Price’s theorem have some shortcomings—at least from a mathematical perspective:

• In [1,6,8], the assumptions regarding the functions $g_1, \dots, g_n$ or $g$ are never made explicit. In particular, it is assumed in [6,8] without justification that $g_1, \dots, g_n$ or $g$ can be represented as the sum of certain Laplace transforms. Likewise, Papoulis [7] assumes that $g$ satisfies the decay condition $|g(x, y)| \lesssim e^{|(x,y)|^\beta}$ for some $\beta < 2$, but does not impose any restrictions on the regularity of $g$. Finally, [1] is mainly concerned with showing that Price's theorem only holds for normally distributed random vectors, and simply refers to [7] for the proof that Price's theorem does indeed hold for normal random vectors. None of the papers [1,6–8] explains the nature of the derivative of $g$ (classical, distributional, etc.) which appears in the derived formula.

• In contrast, for calculating the $k$-th order derivatives of $\Sigma \mapsto \mathbb{E}[g(X_\Sigma)]$, it is assumed in [9] that the nonlinearity $g$ is $C^{2k}$, with a certain decay condition concerning the derivatives. This classical smoothness of $g$, however, does not hold in many applications; see Sect. 3.

Differently from [1,6–9], our version of Price's theorem imposes precise, rather mild assumptions concerning the nonlinearity $g$ (namely $g \in \mathcal{S}'(\mathbb{R}^n)$) and precisely explains the nature of the derivative $\partial^{\beta^\flat} g$ that appears in the theorem statement: this is just a distributional derivative.

Furthermore, maybe as a consequence of the preceding points, it seems that Price's theorem is not as well-known in the mathematical community as it deserves to be. It is my hope that the present paper may promote this result.


Before closing this section, we prove that, assuming $g$ to be a tempered distribution, the result of [6,7] is indeed a special case of Theorem 1. With similar arguments, one can show that the forms of Price's theorem considered in [1,8,9] are covered by Theorem 1 as well.

Corollary 1 Let $g \in \mathcal{S}'(\mathbb{R}^2)$. For $\alpha \in (-1, 1)$, let $\Sigma_\alpha := \begin{pmatrix} 1 & \alpha \\ \alpha & 1 \end{pmatrix}$. Let
$$\Psi_g : (-1, 1) \to \mathbb{C}, \quad \alpha \mapsto \langle g, \phi_{\Sigma_\alpha} \rangle_{\mathcal{S}', \mathcal{S}},$$
where $\phi_{\Sigma_\alpha} : \mathbb{R}^2 \to (0, \infty)$ denotes the probability density function of $X_\alpha \sim N(0, \Sigma_\alpha)$. Then $\Psi_g$ is smooth with $n$-th derivative
$$\Psi_g^{(n)}(\alpha) = \Big\langle \frac{\partial^{2n} g}{\partial x_1^n \, \partial x_2^n}, \, \phi_{\Sigma_\alpha} \Big\rangle_{\mathcal{S}', \mathcal{S}} \qquad \text{for } \alpha \in (-1, 1).$$

Remark 2 In particular, if both $g$ and the (distributional) derivative $\frac{\partial^{2n} g}{\partial x_1^n \partial x_2^n}$ are given by functions of moderate growth, then Equation (2.1) holds, i.e.,
$$\frac{d^n}{d\alpha^n} \, \mathbb{E}\big[ g(X_\alpha) \big] = \mathbb{E}\Big[ \frac{\partial^{2n} g}{\partial x_1^n \, \partial x_2^n}(X_\alpha) \Big].$$
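As a simple consistency check of Equation (2.1) (this example is my own illustration and is not taken from [6,7]), consider $n = 1$ and the smooth, bounded nonlinearity $g(x_1, x_2) = \sin(x_1)\sin(x_2)$. Writing $(X, Y) := X_\alpha$, we have $X - Y \sim N(0, 2 - 2\alpha)$ and $X + Y \sim N(0, 2 + 2\alpha)$, so that
$$\mathbb{E}[\sin(X)\sin(Y)] = \tfrac{1}{2}\big( \mathbb{E}[\cos(X - Y)] - \mathbb{E}[\cos(X + Y)] \big) = \tfrac{1}{2}\big( e^{-(1-\alpha)} - e^{-(1+\alpha)} \big) = e^{-1} \sinh(\alpha).$$
Differentiating with respect to $\alpha$ gives $e^{-1} \cosh(\alpha)$, which indeed coincides with
$$\mathbb{E}\Big[ \frac{\partial^2 g}{\partial x_1 \, \partial x_2}(X_\alpha) \Big] = \mathbb{E}[\cos(X)\cos(Y)] = \tfrac{1}{2}\big( e^{-(1-\alpha)} + e^{-(1+\alpha)} \big) = e^{-1} \cosh(\alpha),$$
where we used $\mathbb{E}[\cos(W)] = e^{-\mathrm{Var}(W)/2}$ for a centered Gaussian random variable $W$.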

Proof of Corollary 1 In the notation of Theorem 1, we have
$$\Psi_g(\alpha) = \big( \Phi_g \circ \Gamma \big)\big( A(\alpha) \big) \quad \text{with} \quad A(\alpha)_{i,j} = \begin{cases} 1, & \text{if } i = j, \\ \alpha, & \text{if } i \ne j \end{cases} \quad \text{for } (i,j) \in I = \{(1,1), (1,2), (2,2)\}.$$
Since $\Gamma(A(\alpha)) = \Sigma_\alpha$ is easily seen to be positive definite, we have $A(\alpha) \in U$. Now, setting $\beta := n \cdot e_{(1,2)} \in \mathbb{N}_0^I$ (with the standard basis $e_{(1,1)}, e_{(1,2)}, e_{(2,2)}$ of $\mathbb{R}^I$), the flattened version $\beta^\flat$ of $\beta$ satisfies $\beta^\flat = n e_1 + n e_2 = (n, n)$. Thus, Theorem 1 and the chain rule show that $\Psi_g$ is smooth, with
$$\Psi_g^{(n)}(\alpha) = \frac{d^n}{d\alpha^n} \big( \Phi_g \circ \Gamma \big)\big( A(\alpha) \big) = \partial^\beta \big( \Phi_g \circ \Gamma \big)\big( A(\alpha) \big) = \big\langle \partial^{\beta^\flat} g, \, \phi_{\Gamma(A(\alpha))} \big\rangle_{\mathcal{S}', \mathcal{S}} = \Big\langle \frac{\partial^{2n} g}{\partial x_1^n \, \partial x_2^n}, \, \phi_{\Sigma_\alpha} \Big\rangle_{\mathcal{S}', \mathcal{S}}. \qquad \square$$

3 An Example of an Application of Price’s Theorem

In this section, we derive bounds for the expectation $\mathbb{E}[f_\tau(X_\alpha) f_\tau(Y_\alpha)]$, where $X_\alpha, Y_\alpha$ follow a joint normal distribution with covariance matrix $\Sigma_\alpha = \begin{pmatrix} 1 & \alpha \\ \alpha & 1 \end{pmatrix}$, and where the nonlinearity $f_\tau$ is just a truncation (or clipping) to the interval $[-\tau, \tau]$. We remark that this example has already been considered by Price [8] himself, but that his arguments are not completely mathematically rigorous, as explained in Sect. 2. Precisely, we obtain the following result:

Lemma 1 Let $\tau > 0$ be arbitrary, and define
$$f_\tau : \mathbb{R} \to \mathbb{R}, \quad x \mapsto \begin{cases} \tau, & \text{if } x \ge \tau, \\ x, & \text{if } x \in [-\tau, \tau], \\ -\tau, & \text{if } x \le -\tau. \end{cases}$$
For $\alpha \in [-1, 1]$, set $\Sigma_\alpha := \begin{pmatrix} 1 & \alpha \\ \alpha & 1 \end{pmatrix}$, and let $(X_\alpha, Y_\alpha) \sim N(0, \Sigma_\alpha)$. Finally, define
$$F_\tau : [-1, 1] \to \mathbb{R}, \quad \alpha \mapsto \mathbb{E}\big[ f_\tau(X_\alpha) \cdot f_\tau(Y_\alpha) \big].$$
Then $F_\tau$ is continuous and $F_\tau|_{[0,1]}$ is convex with $F_\tau(0) = 0$. In particular, $F_\tau(\alpha) \le \alpha \cdot F_\tau(1)$ for all $\alpha \in [0, 1]$.

Proof It is easy to see that $f_\tau$ is bounded and Lipschitz continuous, so that $f_\tau \in W^{1,\infty}(\mathbb{R})$ with weak derivative $f_\tau' = \mathbb{1}_{(-\tau,\tau)}$. Therefore, using the notation $(g \otimes h)(x, y) = g(x) h(y)$, we see that $g_\tau := f_\tau \otimes f_\tau \in W^{1,\infty}(\mathbb{R}^2) \subset \mathcal{S}'(\mathbb{R}^2)$, with weak derivative $\frac{\partial^2 g_\tau}{\partial x_1 \partial x_2} = \mathbb{1}_{(-\tau,\tau)} \otimes \mathbb{1}_{(-\tau,\tau)} = \mathbb{1}_{(-\tau,\tau)^2}$. Directly from the definition of the weak derivative, in combination with Fubini's theorem and the fundamental theorem of calculus, we thus see for each $\phi \in \mathcal{S}(\mathbb{R}^2)$ that
$$\begin{aligned}
\Big\langle \frac{\partial^4 g_\tau}{\partial x_1^2 \, \partial x_2^2}, \, \phi \Big\rangle_{\mathcal{S}', \mathcal{S}}
&= \Big\langle \frac{\partial^2 g_\tau}{\partial x_1 \, \partial x_2}, \, \frac{\partial^2 \phi}{\partial x_1 \, \partial x_2} \Big\rangle_{\mathcal{S}', \mathcal{S}}
= \int_{-\tau}^{\tau} \int_{-\tau}^{\tau} \frac{\partial^2 \phi}{\partial x_1 \, \partial x_2}(t_1, t_2) \, dt_1 \, dt_2 \\
&= \int_{-\tau}^{\tau} \Big( \frac{\partial \phi}{\partial x_2}(\tau, t_2) - \frac{\partial \phi}{\partial x_2}(-\tau, t_2) \Big) \, dt_2 \\
&= \phi(\tau, \tau) - \phi(-\tau, \tau) - \phi(\tau, -\tau) + \phi(-\tau, -\tau).
\end{aligned}$$

Now, Corollary 1 shows that $F_\tau|_{(-1,1)} = \Psi_{g_\tau}$ is smooth with
$$F_\tau''(\alpha) = \Big\langle \frac{\partial^4 g_\tau}{\partial x_1^2 \, \partial x_2^2}, \, \phi_{\Sigma_\alpha} \Big\rangle_{\mathcal{S}', \mathcal{S}} = \phi_{\Sigma_\alpha}(\tau, \tau) - \phi_{\Sigma_\alpha}(-\tau, \tau) - \phi_{\Sigma_\alpha}(\tau, -\tau) + \phi_{\Sigma_\alpha}(-\tau, -\tau)$$
for $\alpha \in (-1, 1)$. We want to show $F_\tau''(\alpha) \ge 0$ for $\alpha \in [0, 1)$. Since $\phi_{\Sigma_\alpha}$ is symmetric, it suffices to show $\phi_{\Sigma_\alpha}(\tau, \tau) - \phi_{\Sigma_\alpha}(-\tau, \tau) \ge 0$, which is easily seen to be equivalent to
$$\exp\Big( - \frac{2\tau^2 - 2\alpha\tau^2}{2(1 - \alpha^2)} \Big) \overset{!}{\ge} \exp\Big( - \frac{2\tau^2 + 2\alpha\tau^2}{2(1 - \alpha^2)} \Big) \iff 2\tau^2 + 2\alpha\tau^2 \overset{!}{\ge} 2\tau^2 - 2\alpha\tau^2 \iff 4\alpha\tau^2 \overset{!}{\ge} 0,$$
which clearly holds for $\alpha \in [0, 1)$.

To finish the proof, we only need to show that $F_\tau$ is continuous with $F_\tau(0) = 0$. To see this, let $(X, Z) \sim N(0, I_2)$, with the 2-dimensional identity matrix $I_2$. For $\alpha \in [-1, 1]$, it is then not hard to see that $Y_\alpha := \alpha X + \sqrt{1 - \alpha^2}\, Z$ satisfies $(X, Y_\alpha) \sim N(0, \Sigma_\alpha)$. Therefore, we see for $\alpha, \beta \in [-1, 1]$ that
$$\begin{aligned}
|F_\tau(\alpha) - F_\tau(\beta)|
&= \big| \mathbb{E}[g_\tau(X, Y_\alpha)] - \mathbb{E}[g_\tau(X, Y_\beta)] \big|
= \big| \mathbb{E}\big[ f_\tau(X) \cdot \big( f_\tau(Y_\alpha) - f_\tau(Y_\beta) \big) \big] \big| \\
(\text{since } |f_\tau(X)| \le \tau) \quad
&\le \tau \cdot \mathbb{E}\big| f_\tau(Y_\alpha) - f_\tau(Y_\beta) \big| \\
(\text{since } f_\tau \text{ is 1-Lipschitz}) \quad
&\le \tau \cdot \mathbb{E}\big| Y_\alpha - Y_\beta \big|
\le \tau \cdot |\alpha - \beta| \cdot \mathbb{E}|X| + \tau \cdot \big| \sqrt{1 - \alpha^2} - \sqrt{1 - \beta^2} \big| \cdot \mathbb{E}|Z| \xrightarrow[\beta \to \alpha]{} 0,
\end{aligned}$$
which shows that $F_\tau$ is indeed continuous. Furthermore, we see by independence of $X, Z$ that
$$F_\tau(0) = \mathbb{E}[f_\tau(X) \cdot f_\tau(Z)] = \mathbb{E}[f_\tau(X)] \cdot \mathbb{E}[f_\tau(Z)] = 0,$$
since $\mathbb{E}[f_\tau(X)] = -\mathbb{E}[f_\tau(-X)] = -\mathbb{E}[f_\tau(X)]$, because of $X \sim -X$ and $f_\tau(x) = -f_\tau(-x)$ for $x \in \mathbb{R}$. $\square$
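The statement of Lemma 1 is easy to probe numerically. The following sketch (my own illustration with arbitrary choices of $\tau$ and sample size, not part of the paper) estimates $F_\tau(\alpha)$ by Monte Carlo, using the representation $Y_\alpha = \alpha X + \sqrt{1 - \alpha^2}\, Z$ from the proof, and checks the bound $F_\tau(\alpha) \le \alpha \cdot F_\tau(1)$:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal(1_000_000)
Z = rng.standard_normal(1_000_000)
tau = 0.8                                          # arbitrary clipping threshold

def F(alpha):
    """Monte Carlo estimate of F_tau(alpha) = E[f_tau(X_alpha) * f_tau(Y_alpha)]."""
    Y = alpha * X + np.sqrt(1 - alpha**2) * Z      # (X, Y) ~ N(0, Sigma_alpha)
    return np.mean(np.clip(X, -tau, tau) * np.clip(Y, -tau, tau))

F1 = F(1.0)
for alpha in np.linspace(0.0, 1.0, 6):
    print(f"alpha={alpha:.1f}  F={F(alpha):+.4f}  alpha*F(1)={alpha * F1:+.4f}")
```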

4 The Proof of Theorem 1

The main idea of the proof is to use Fourier analysis, since the Fourier transform of the density function $\phi_\Sigma$ will turn out to be much easier to handle than $\phi_\Sigma$ itself. This is similar to the approach in [1,7], but slightly different from the approach in [6,8], where the Laplace transform is used instead.

For the Fourier transform, we will use the normalization
$$\mathcal{F}\varphi(\xi) := \widehat{\varphi}(\xi) := \int_{\mathbb{R}^n} \varphi(x) \cdot e^{-i \langle x, \xi \rangle} \, dx \qquad \text{for } \xi \in \mathbb{R}^n \text{ and } \varphi \in L^1(\mathbb{R}^n).$$
It is well-known that the restriction $\mathcal{F} : \mathcal{S}(\mathbb{R}^n) \to \mathcal{S}(\mathbb{R}^n)$ of $\mathcal{F}$ is a well-defined homeomorphism, with inverse $\mathcal{F}^{-1} : \mathcal{S}(\mathbb{R}^n) \to \mathcal{S}(\mathbb{R}^n)$, where $\mathcal{F}^{-1}\varphi(x) = (2\pi)^{-n} \cdot \mathcal{F}\varphi(-x)$. By duality, the Fourier transform also extends to a bijection $\mathcal{F} : \mathcal{S}'(\mathbb{R}^n) \to \mathcal{S}'(\mathbb{R}^n)$, defined¹ by $\langle \mathcal{F} g, \varphi \rangle_{\mathcal{S}', \mathcal{S}} := \langle g, \mathcal{F}\varphi \rangle_{\mathcal{S}', \mathcal{S}}$ for $g \in \mathcal{S}'(\mathbb{R}^n)$ and $\varphi \in \mathcal{S}(\mathbb{R}^n)$. Further, it is well-known for the distributional derivatives $\partial^\alpha g$ of $g \in \mathcal{S}'(\mathbb{R}^n)$, defined by $\langle \partial^\alpha g, \varphi \rangle_{\mathcal{S}', \mathcal{S}} = (-1)^{|\alpha|} \cdot \langle g, \partial^\alpha \varphi \rangle_{\mathcal{S}', \mathcal{S}}$, that if we set
$$X^\alpha \cdot \varphi : \mathbb{R}^n \to \mathbb{C}, \ x \mapsto x^\alpha \cdot \varphi(x) \qquad \text{and} \qquad \langle X^\alpha \cdot g, \varphi \rangle_{\mathcal{S}', \mathcal{S}} = \langle g, X^\alpha \cdot \varphi \rangle_{\mathcal{S}', \mathcal{S}}$$
for $g \in \mathcal{S}'(\mathbb{R}^n)$ and $\varphi \in \mathcal{S}(\mathbb{R}^n)$, then we have
$$\mathcal{F}\big( \partial^\alpha g \big) = i^{|\alpha|} \cdot X^\alpha \cdot \mathcal{F} g \qquad \forall g \in \mathcal{S}'(\mathbb{R}^n), \ \alpha \in \mathbb{N}_0^n. \tag{4.1}$$
These results can be found, e.g., in [2, Chapter 14], or (with a slightly different normalization of the Fourier transform) in [3, Sections 8.3 and 9.2].

¹ This definition is motivated by the identity $\int \widehat{f}(x) \cdot g(x) \, dx = \int\!\int f(\xi) \, e^{-i \langle x, \xi \rangle} g(x) \, d\xi \, dx = \int f(\xi) \, \widehat{g}(\xi) \, d\xi$, which is valid for $f, g \in L^1(\mathbb{R}^n)$ thanks to Fubini's theorem.

Finally, we will use the formula
$$(2\pi)^n \cdot \mathcal{F}^{-1}\phi_\Sigma(\xi) = \int_{\mathbb{R}^n} e^{i \langle x, \xi \rangle} \, \phi_\Sigma(x) \, dx = \mathbb{E}\big[ e^{i \langle \xi, X_\Sigma \rangle} \big] = e^{-\frac{1}{2} \langle \xi, \Sigma \xi \rangle} =: \psi_\Sigma(\xi) \qquad \text{for } \xi \in \mathbb{R}^n, \tag{4.2}$$
which is proved in [5, Chapter 5, Theorem 4.1]; in probabilistic terms, this is a statement about the characteristic function of the random vector $X_\Sigma \sim N(0, \Sigma)$.
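The characteristic-function identity (4.2) is also easy to check numerically. The following sketch (my own illustration; the covariance matrix, the frequency vector $\xi$, and the sample size are arbitrary choices) compares a Monte Carlo estimate of $\mathbb{E}[e^{i \langle \xi, X_\Sigma \rangle}]$ with $e^{-\frac{1}{2} \langle \xi, \Sigma \xi \rangle}$:

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])          # arbitrary positive definite covariance
xi = np.array([0.7, -1.1])                          # arbitrary frequency vector

X = rng.multivariate_normal(np.zeros(2), Sigma, size=500_000)
mc = np.mean(np.exp(1j * X @ xi))                   # Monte Carlo estimate of E[exp(i <xi, X>)]
exact = np.exp(-0.5 * xi @ Sigma @ xi)              # psi_Sigma(xi) from Eq. (4.2)

print(mc, exact)                                    # real parts agree; imaginary part is ~ 0
```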

Next, by the assumption of Theorem 1, we have $g \in \mathcal{S}'(\mathbb{R}^n)$ and hence $\mathcal{F} g \in \mathcal{S}'(\mathbb{R}^n)$. Thus, by the structure theorem for tempered distributions (see for instance [2, Theorem 17.10]), there are $L \in \mathbb{N}$, certain $\alpha_1, \dots, \alpha_L \in \mathbb{N}_0^n$ and certain polynomially bounded, continuous functions $f_1, \dots, f_L : \mathbb{R}^n \to \mathbb{C}$ satisfying $\mathcal{F} g = \sum_{\ell=1}^L \partial^{\alpha_\ell} f_\ell$, i.e., $g = \sum_{\ell=1}^L \mathcal{F}^{-1}\big( \partial^{\alpha_\ell} f_\ell \big)$. Since both sides of the target identity (1.8) are linear with respect to $g$, we can thus assume without loss of generality that $g = \mathcal{F}^{-1}(\partial^\alpha f)$ for some $\alpha \in \mathbb{N}_0^n$ and some continuous $f : \mathbb{R}^n \to \mathbb{C}$ which is polynomially bounded, say $|f(\xi)| \le C \cdot (1 + |\xi|)^N$ for all $\xi \in \mathbb{R}^n$ and certain $C > 0$, $N \in \mathbb{N}_0$. We thus have
$$\begin{aligned}
\Phi_g(\Sigma) = \langle g, \phi_\Sigma \rangle_{\mathcal{S}', \mathcal{S}}
&= \big\langle g, \, \mathcal{F} \mathcal{F}^{-1} \phi_\Sigma \big\rangle_{\mathcal{S}', \mathcal{S}}
= \big\langle \mathcal{F} g, \, \mathcal{F}^{-1} \phi_\Sigma \big\rangle_{\mathcal{S}', \mathcal{S}} \\
(\text{Eq. (4.2)}) \quad
&= (2\pi)^{-n} \cdot \big\langle \partial^\alpha f, \, \psi_\Sigma \big\rangle_{\mathcal{S}', \mathcal{S}}
= (-1)^{|\alpha|} \cdot (2\pi)^{-n} \cdot \big\langle f, \, \partial^\alpha \psi_\Sigma \big\rangle_{\mathcal{S}', \mathcal{S}} \\
&= (-1)^{|\alpha|} \cdot (2\pi)^{-n} \cdot \int_{\mathbb{R}^n} f(\xi) \cdot \big( \partial^\alpha \psi_\Sigma \big)(\xi) \, d\xi
\qquad \text{for all } \Sigma \in \mathrm{Sym}_n^+. \tag{4.3}
\end{aligned}$$
Our first goal in the remainder of the proof is to show that one can justify "differentiation under the integral" with respect to $A_{i,j}$, with $\Sigma = \Gamma(A)$, in the last integral in Equation (4.3).

It is easy to see that $A \mapsto \psi_{\Gamma(A)}(\xi)$ is smooth, with partial derivative
$$\frac{\partial}{\partial A_{i,j}} \psi_{\Gamma(A)}(\xi)
= \frac{\partial}{\partial A_{i,j}} \exp\Big( -\frac{1}{2} \sum_{k,\ell=1}^n \big( \Gamma(A) \big)_{k,\ell} \, \xi_k \xi_\ell \Big)
= \begin{cases} -\frac{1}{2} \cdot \xi_i \xi_j \cdot \psi_{\Gamma(A)}(\xi), & \text{if } i = j, \\ -\xi_i \xi_j \cdot \psi_{\Gamma(A)}(\xi), & \text{if } i < j, \end{cases}$$
for all $\xi \in \mathbb{R}^n$ and arbitrary $(i,j) \in I$ and $A \in U$. Given $\beta \in \mathbb{N}_0^I$, let us write $\partial_A^\beta$ for the partial derivative of order $\beta$ with respect to $A \in \mathbb{R}^I$. Then, a straightforward induction using the preceding identity shows (with $|\beta|_=$ and $\beta^\flat$ as in (1.7) and (1.6)) that
$$\partial_A^\beta \psi_{\Gamma(A)}(\xi) = (-1)^{|\beta|} \cdot \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot \xi^{\beta^\flat} \cdot \psi_{\Gamma(A)}(\xi) \qquad \forall \beta \in \mathbb{N}_0^I, \ \xi \in \mathbb{R}^n, \ A \in U. \tag{4.4}$$

Next, we show for arbitrary $\gamma \in \mathbb{N}_0^n$ that there is a polynomial $p_{\alpha,\gamma} = p_{\alpha,\gamma}(\zeta, B)$ in the variables $\zeta \in \mathbb{R}^n$ and $B \in \mathbb{R}^{n \times n}$ that satisfies
$$\partial_\xi^\alpha \big( \xi^\gamma \cdot \psi_\Sigma(\xi) \big) = p_{\alpha,\gamma}(\xi, \Sigma) \cdot \psi_\Sigma(\xi) \qquad \forall \Sigma \in \mathrm{Sym}_n^+, \ \xi \in \mathbb{R}^n. \tag{4.5}$$
To see this, we first note that a direct computation using the identity $\partial_{\xi_i}(\xi_k \xi_\ell) = \delta_{i,k} \xi_\ell + \delta_{i,\ell} \xi_k$ and the symmetry of $\Sigma$ shows that $\partial_{\xi_i} \psi_\Sigma(\xi) = -(\Sigma \xi)_i \cdot \psi_\Sigma(\xi)$. By induction, and since $(\Sigma \xi)_i$ is a polynomial in $\xi, \Sigma$, we therefore see that for each $\beta \in \mathbb{N}_0^n$ there is a polynomial $p_\beta = p_\beta(\zeta, B)$ in the variables $\zeta \in \mathbb{R}^n$ and $B \in \mathbb{R}^{n \times n}$ satisfying $\partial_\xi^\beta \psi_\Sigma(\xi) = \psi_\Sigma(\xi) \cdot p_\beta(\xi, \Sigma)$. Therefore, the Leibniz rule shows
$$\partial_\xi^\alpha \big( \xi^\gamma \, \psi_\Sigma(\xi) \big)
= \sum_{\beta \in \mathbb{N}_0^n, \ \beta \le \alpha} \binom{\alpha}{\beta} \, \partial_\xi^\beta \psi_\Sigma(\xi) \cdot \partial_\xi^{\alpha - \beta} \xi^\gamma
= \psi_\Sigma(\xi) \sum_{\beta \in \mathbb{N}_0^n, \ \beta \le \alpha} \binom{\alpha}{\beta} \, p_\beta(\xi, \Sigma) \, \partial_\xi^{\alpha - \beta} \xi^\gamma,$$
which proves Equation (4.5).

Now we are ready to justify differentiation under the integral (as in [3, Theorem 2.27]) for the last integral appearing in Equation (4.3), with $\Sigma = \Gamma(A)$, that is, for the function
$$U \to \mathbb{C}, \quad A \mapsto \int_{\mathbb{R}^n} f(\xi) \cdot \big( \partial_\xi^\alpha \psi_{\Gamma(A)} \big)(\xi) \, d\xi.$$
Indeed, let $A_0 \in U$ be arbitrary. Since $U$ is open, there is some $\varepsilon > 0$ satisfying $\overline{B_\varepsilon}(A_0) \subset U$, for the closed ball $\overline{B_\varepsilon}(A_0) = \big\{ A \in \mathbb{R}^I : |A - A_0| \le \varepsilon \big\}$, with the Euclidean norm $|\cdot|$ on $\mathbb{R}^I$. The open ball $B_\varepsilon(A_0)$ is defined similarly.

Now, with
$$\sigma_{\min}(A) := \inf_{x \in \mathbb{R}^n, \, |x| = 1} \langle x, A x \rangle \qquad \text{for } A \in \mathbb{R}^{n \times n},$$
we have for $A, B \in \mathbb{R}^{n \times n}$ and arbitrary $x \in \mathbb{R}^n$ with $|x| = 1$ that
$$\sigma_{\min}(A) \le \langle x, A x \rangle = \langle x, B x \rangle + \langle x, (A - B) x \rangle \le \langle x, B x \rangle + \|A - B\|.$$
Since this holds for all $|x| = 1$, we get $\sigma_{\min}(A) \le \sigma_{\min}(B) + \|A - B\|$, and by symmetry $|\sigma_{\min}(A) - \sigma_{\min}(B)| \le \|A - B\|$. Therefore, the continuous function $A \mapsto \sigma_{\min}\big( \Gamma(A) \big)$ has a positive(!) minimum on the compact set $\overline{B_\varepsilon}(A_0)$, so that $\langle \xi, \Gamma(A) \xi \rangle \ge c \cdot |\xi|^2$ for all $\xi \in \mathbb{R}^n$ and $A \in \overline{B_\varepsilon}(A_0)$, for some constant $c > 0$. Furthermore, there is some $K = K(A_0) > 0$ with $\|\Gamma(A)\| \le K$ for all $A \in \overline{B_\varepsilon}(A_0)$.

Now, since the map $U \times \mathbb{R}^n \ni (A, \xi) \mapsto \psi_{\Gamma(A)}(\xi) \in \mathbb{C}$ is smooth, we have (in view of Equations (4.4) and (4.5)) for arbitrary $\beta \in \mathbb{N}_0^I$, $A \in U$ and $\xi \in \mathbb{R}^n$ that
$$\begin{aligned}
\partial_A^\beta \Big( f(\xi) \cdot \big( \partial_\xi^\alpha \psi_{\Gamma(A)} \big)(\xi) \Big)
&= f(\xi) \cdot \partial_\xi^\alpha \big( \partial_A^\beta \psi_{\Gamma(A)}(\xi) \big) \\
(\text{Eq. (4.4)}) \quad
&= f(\xi) \cdot (-1)^{|\beta|} \cdot \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot \partial_\xi^\alpha \big( \xi^{\beta^\flat} \cdot \psi_{\Gamma(A)}(\xi) \big) \\
(\text{Eq. (4.5)}) \quad
&= f(\xi) \cdot (-1)^{|\beta|} \cdot \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot p_{\alpha, \beta^\flat}\big( \xi, \Gamma(A) \big) \cdot \psi_{\Gamma(A)}(\xi).
\end{aligned} \tag{4.6}$$

Using the polynomial growth restriction concerning $f$, we thus see that there is a constant $C_{\alpha,\beta} > 0$ and some $M_{\alpha,\beta} \in \mathbb{N}$ with
$$\begin{aligned}
\Big| \partial_A^\beta \Big( f(\xi) \cdot \big( \partial_\xi^\alpha \psi_{\Gamma(A)} \big)(\xi) \Big) \Big|
&= \Big| f(\xi) \cdot \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot p_{\alpha, \beta^\flat}\big( \xi, \Gamma(A) \big) \cdot \psi_{\Gamma(A)}(\xi) \Big| \\
&\le C \cdot (1 + |\xi|)^N \cdot C_{\alpha,\beta} \cdot \big( 1 + |\xi| + \|\Gamma(A)\| \big)^{M_{\alpha,\beta}} \cdot e^{-\frac{1}{2} \langle \xi, \Gamma(A) \xi \rangle} \\
&\le C_{\alpha,\beta} \, C \cdot (1 + |\xi|)^N \cdot (1 + |\xi| + K)^{M_{\alpha,\beta}} \cdot e^{-\frac{c}{2} |\xi|^2}
=: h_{\alpha, \beta, A_0, f}(\xi),
\end{aligned}$$
for all $\xi \in \mathbb{R}^n$ and all $A \in \overline{B_\varepsilon}(A_0)$. Since $h_{\alpha, \beta, A_0, f}$ is independent of $A \in \overline{B_\varepsilon}(A_0)$ and since we clearly have $h_{\alpha, \beta, A_0, f} \in L^1(\mathbb{R}^n)$, [3, Theorem 2.27] and Equation (4.3) show that the function
$$B_\varepsilon(A_0) \to \mathbb{C}, \quad A \mapsto (-1)^{|\alpha|} \cdot (2\pi)^n \cdot \big( \Phi_g \circ \Gamma \big)(A) = \int_{\mathbb{R}^n} f(\xi) \cdot \big( \partial_\xi^\alpha \psi_{\Gamma(A)} \big)(\xi) \, d\xi$$
is smooth, with partial derivative of order $\beta \in \mathbb{N}_0^I$ given by

$$\begin{aligned}
\partial_A^\beta \Big( (-1)^{|\alpha|} \cdot (2\pi)^n \cdot \big( \Phi_g \circ \Gamma \big)(A) \Big)
&= \int_{\mathbb{R}^n} \partial_A^\beta \Big( f(\xi) \cdot \big( \partial_\xi^\alpha \psi_{\Gamma(A)} \big)(\xi) \Big) \, d\xi \\
(\text{Eq. (4.6)}) \quad
&= (-1)^{|\beta|} \cdot \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot \int_{\mathbb{R}^n} f(\xi) \cdot \partial_\xi^\alpha \big( \xi^{\beta^\flat} \cdot \psi_{\Gamma(A)}(\xi) \big) \, d\xi \\
&= (-1)^{|\beta|} \cdot \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot \big\langle f, \, \partial^\alpha \big( X^{\beta^\flat} \cdot \psi_{\Gamma(A)} \big) \big\rangle_{\mathcal{S}', \mathcal{S}} \\
(\text{Eq. (4.2)}) \quad
&= (-1)^{|\beta| + |\alpha|} \cdot \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot (2\pi)^n \cdot \big\langle X^{\beta^\flat} \cdot \partial^\alpha f, \, \mathcal{F}^{-1} \phi_{\Gamma(A)} \big\rangle_{\mathcal{S}', \mathcal{S}} \\
\big( g = \mathcal{F}^{-1}(\partial^\alpha f), \text{ Eq. (4.1)}, \text{ and } (-1)^{|\beta|} = i^{|\beta^\flat|} \text{ since } |\beta^\flat| = 2 |\beta| \big) \quad
&= \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot (2\pi)^n \cdot (-1)^{|\alpha|} \cdot \big\langle \mathcal{F}\big[ \partial^{\beta^\flat} g \big], \, \mathcal{F}^{-1} \phi_{\Gamma(A)} \big\rangle_{\mathcal{S}', \mathcal{S}}.
\end{aligned}$$

In combination, this shows that $\Phi_g \circ \Gamma$ is smooth on $B_\varepsilon(A_0)$, with partial derivatives given by
$$\partial^\beta \big( \Phi_g \circ \Gamma \big)(A)
= \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot \big\langle \mathcal{F}\big[ \partial^{\beta^\flat} g \big], \, \mathcal{F}^{-1} \phi_{\Gamma(A)} \big\rangle_{\mathcal{S}', \mathcal{S}}
= \Big( \frac{1}{2} \Big)^{|\beta|_=} \cdot \big\langle \partial^{\beta^\flat} g, \, \phi_{\Gamma(A)} \big\rangle_{\mathcal{S}', \mathcal{S}},$$
as claimed. Since $A_0 \in U$ was arbitrary, the proof is complete. $\square$

Acknowledgements Open Access funding provided by Projekt DEAL. The author acknowledges support by the European Commission-Project DEDALE (contract no. 665044) within the H2020 Framework. The author is grateful to Martin Genzel for bringing up the topic discussed in this paper, and to Ali Hashemi for pointing out the original paper by Price. Last but not least, the author would like to thank the anonymous referees for valuable suggestions that led to an improved presentation and for suggesting the references [1,7].

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

1. Brown, J.: Generalized form of Price's theorem and its converse. IEEE Trans. Inform. Theory 13(1), 27–30 (1967)
2. Duistermaat, J.J., Kolk, J.A.C.: Distributions. Birkhäuser Boston Inc., Boston, MA (2010)
3. Folland, G.B.: Real Analysis: Modern Techniques and Their Applications. Pure and Applied Mathematics, 2nd edn. Wiley, New York (1999)
4. Genzel, M., Kutyniok, G., März, M.: ℓ1-analysis minimization and generalized (co-)sparsity: when does recovery succeed? Appl. Comput. Harmon. Anal. (2020). https://doi.org/10.1016/j.acha.2020.01.002
5. Gut, A.: An Intermediate Course in Probability. Springer Texts in Statistics, 2nd edn. Springer, New York (2009)
6. McMahon, E.: An extension of Price's theorem (corresp.). IEEE Trans. Inform. Theory 10(2), 168–168 (1964)
7. Papoulis, A.: Comments on 'An extension of Price's theorem' by E. L. McMahon. IEEE Trans. Inform. Theory 11(1), 154–154 (1965)
8. Price, R.: A useful theorem for nonlinear devices having Gaussian inputs. IRE Trans. 4, 69–72 (1958)
9. Vladimirov, I.G.: A quantum mechanical version of Price's theorem for Gaussian states. In: Control Conference (AUCC), 2014 4th Australian, pp. 118–123. IEEE (2014)

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
