Green's-function solution for a special class of master equations

Otto J. Eder and Thomas Lackner
Austrian Research Center Seibersdorf, A-2444 Seibersdorf, Austria

Maximilian Posch
International Institute for Applied Systems Analysis, A-2361 Laxenburg, Austria

(Received 1 August 1984)
We consider a one-dimensional stochastic process described by a master equation and calculate the time-dependent distribution function. The time evolution of the system is given by the conditional probability $h(x,t\mid x_0)$, where $x$ and $x_0$ are continuous variables. Assuming a quite general dependence of the transition probability $W_\Omega(x\to x')$ on a parameter $\Omega$, we show that the backward form of the master equation can be used to calculate arbitrary conditional averages $\langle f(x)\mid x_0\rangle_t$ up to any given power in $\Omega^{-1}$. This general expansion procedure will be used to construct $h(x,t\mid x_0)$ itself. We show that, introducing a new stochastic variable $y$, the conditional probability $h(x,t\mid x_0)$ can be expanded into a series of Hermite functions. The coefficients of this expansion, $b_n(t)$, which depend on the expansion parameter $\Omega$, can be uniquely determined via a recursion relation. We show that in the limit $\Omega\to\infty$ all coefficients $b_n(t)$ vanish except $b_0$, which is time independent. In this limit a Gaussian distribution for the conditional probability is obtained, which is in agreement with the so-called linear noise approximation.

I. INTRODUCTION
A common formulation of stochastic processes is given in terms of a hierarchy of various probabilities, i.e., singlet, joint, and conditional probabilities. One can either derive exact equations for the singlet probability involving also the conditional probabilities, or one can derive an open hierarchy for the conditional probabilities alone. Although these equations look formally very simple, they cannot be solved in general, since the equation for the lower-order conditional probability requires the knowledge of a higher-order probability. In order to avoid these difficulties one assumes that the higher-order conditional probabilities are identical to the lowest-order probability, which we will call $h(x,t\mid x_0)$. A process based on these assumptions is called a Markov process, and the time evolution of the conditional probability $h(x,t\mid x_0)$ can then be described by an integro-differential equation, which is commonly called the Markov or master equation. The central quantity entering the master equation is the transition probability $W_\Omega(x\to x')$, which in general also depends on time. However, in many physical applications the transition probability can be considered as time independent, leading to a Markov process homogeneous in time. It is exactly this process which will be considered in this paper for the case of a one-dimensional, continuous, unbounded stochastic variable $x$.
A variety of physical problems can be formulated in terms of a master equation. They are found in transport problems, the study of gas-phase relaxation, reaction kinetics, birth and death processes, random walks, etc. In each particular application physical intuition and/or first-principles calculations have to be used in order to arrive at an explicit expression for the transition probability $W_\Omega(x\to x')$, which determines the entire process. It turns out that only in very few examples does the transition probability allow an analytic solution of the master equation. Furthermore, except in the case of linear one-step processes, no rigorous technique exists for solving master equations. This difficulty is the reason why, instead of looking for an analytic solution, one tries to find approximations to the original master equation. The well-known Kramers-Moyal expansion, e.g., transforms the master equation into a partial differential equation of infinite order, which involves the jump moments $\alpha_n(x)$ of the transition probability. If one then assumes that the moments $\alpha_n$ with $n\ge 3$ are small compared to $\alpha_1$ and $\alpha_2$ and neglects them, one arrives at the nonlinear Fokker-Planck equation. However, if one tries to find higher-order approximations including $\alpha_3,\alpha_4,\ldots$, one runs into a logical inconsistency, as pointed out by Pawula. The first rigorous transformation of the master equation to a Fokker-Planck-type equation was presented by van Kampen. He assumes a quite general dependence of the transition probability on a physical parameter $\Omega$ and, after introducing a new stochastic variable $y$, he was able to show that in the limit $\Omega\to\infty$ the original master equation can be replaced by a Fokker-Planck equation with time-dependent coefficients. Although this limit is exact, no general procedure or approximation scheme is available for the case where $\Omega$ is finite. A third method, which is closely related to van Kampen's method, was given by Kubo et al. These authors assume an exponential form $\exp[\sum_l \Omega^{1-l} q_l(\xi,x_0,t)]$ for the Fourier transform of $h(x,t\mid x_0)$ and show that for $\Omega\to\infty$ a Gaussian distribution is obtained. In this limit the result is identical to that obtained by van Kampen. This method has the disadvantage that higher approximations for the distribution itself are difficult to find, since the inverse transform of $\exp[\sum_l \Omega^{1-l} q_l(\xi,x_0,t)]$ is not known in general.

31 366 ©1985 The American Physical Society
In a previous paper¹⁰ the present authors have shown that the transport equation, which is the average of the backward master equation, can be used to calculate an arbitrary conditional average $\langle f(x)\mid x_0\rangle_t$ systematically as a power series in $\Omega^{-1}$. The method was also used to find an approximation to the distribution function $h(x,t\mid x_0)$ itself, which turned out to be identical with the linear noise approximation obtained originally by van Kampen and Kubo et al. However, from a mathematical point of view this approximation is unsatisfactory, since it results from neglecting certain terms in our expansion rather than from a limiting process in a mathematical sense.

The purpose of this paper is to go beyond the linear noise approximation and to present the complete solution to the master equation. From this general solution one can then easily find the mathematical limitations under which the Gaussian distribution is obtained. Introducing a new stochastic variable $y$ (see Sec. V) we will show that the distribution function $h(x,t\mid x_0)$ can be expanded into a series of Hermite functions, $\sum_{n=0}^{\infty} b_n(t)\,\Psi_0(y)\Psi_n(y)$. It will turn out that the coefficients $b_n(t)$, which depend on the parameter $\Omega$, are uniquely determined by recursion relations. In this sense our solution is not a formal solution to the problem, and successive approximations can actually be computed by single integrations. Replacing the infinite sum of Hermite functions by a finite sum, $\sum_{n=0}^{k} b_n(t)\,\Psi_0(y)\Psi_n(y)$, we define the $k$th approximation to $h(x,t\mid x_0)$. Although we are not able to answer questions concerning the positivity of the $k$th approximation, one can easily show that the $k$th approximation reproduces the exact moments $\langle y^l\rangle$ for $0\le l\le k$. Furthermore, in the limit $\Omega\to\infty$ all coefficients $b_n$ ($n\ge 1$) vanish, while $b_0$ is a constant. The distribution is then given by the zeroth approximation $\Psi_0^2(y)/\sqrt{\pi}$, which is identical with the linear noise approximation.
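To make the size-parameter scaling concrete before the formalism is developed, here is a minimal simulation sketch (not taken from the paper) of a jump process in which jumps of size $1/\Omega$ occur at total rate $\Omega$. The chosen jump moments $\alpha_1(x)=-x$ and $\alpha_2(x)=1$ are illustrative assumptions; for this model the linear noise approximation predicts mean $x_0e^{-t}$ and variance $(1-e^{-2t})/(2\Omega)$, which the sample statistics should approach.

```python
# Illustrative jump-process simulation (not from the paper): jumps of size
# +-1/Omega occur at total rate Omega; P(+1/Omega) = (1 - x)/2 gives jump
# moments alpha_1(x) = -x, alpha_2(x) = 1 for |x| < 1 (assumed model).
import math
import random

def simulate(omega, t_final, x0, n_traj, seed=0):
    """Sample mean and variance of x(t_final) over n_traj trajectories."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n_traj):
        x, t = x0, 0.0
        while True:
            t += rng.expovariate(omega)   # waiting time; total rate = Omega
            if t > t_final:
                break
            if rng.random() < 0.5 * (1.0 - x):
                x += 1.0 / omega
            else:
                x -= 1.0 / omega
        xs.append(x)
    mean = sum(xs) / n_traj
    var = sum((v - mean) ** 2 for v in xs) / (n_traj - 1)
    return mean, var

mean, var = simulate(200, 1.0, 0.5, 2000)
```

For $\Omega=200$ the sample mean should lie close to $0.5\,e^{-1}$ and the sample variance close to $(1-e^{-2})/(2\Omega)$, illustrating that fluctuations are of order $\Omega^{-1}$.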
II. METHOD FOR SOLVING TRANSPORT EQUATIONS

Although a detailed description of the method for solving transport equations can be found in our previous paper,¹⁰ we want to repeat the main results for completeness. Let us consider a stationary Markov process for a one-dimensional, continuous stochastic variable $x(t)$ with $-\infty < x(t) < \infty$. The time distribution function (TDF) $h(x,t\mid x_0)$, which is the conditional probability for finding the stochastic variable $x(t)$ between $x$ and $x+dx$ at time $t$, given that at time $t=0$ it had the value $x_0$, can be described equivalently by the forward master equation

$$\partial_t h_f(x,t\mid x_0) = -P_\Omega(x)\,h_f(x,t\mid x_0) + \int W_\Omega(x_1\to x)\,h_f(x_1,t\mid x_0)\,dx_1 \qquad (1a)$$

or the backward master equation

$$\partial_t h_b(x,t\mid x_0) = -P_\Omega(x_0)\,h_b(x,t\mid x_0) + \int W_\Omega(x_0\to x_1)\,h_b(x,t\mid x_1)\,dx_1, \qquad (1b)$$

with the initial conditions

$$h_f(x,t=0\mid x_0) = h_b(x,t=0\mid x_0) = \delta(x-x_0). \qquad (1c)$$

Here $W_\Omega(x\to x_1)\,dt\,dx_1$ has to be considered as a known function describing the probability for a transition from $x$ to $x_1$ in a small time interval $dt$, and $P_\Omega(x)$ is given by

$$P_\Omega(x) := \int W_\Omega(x\to x_1)\,dx_1. \qquad (2)$$

The subscript $\Omega$ refers to an explicit dependence of the transition probability on a given parameter $\Omega$. An important quantity, related to the distribution function, is the conditional average of an arbitrary function $f(x)$, defined by

$$\langle f(x)\mid x_0\rangle_t = X(x_0,t) := \int f(x)\,h(x,t\mid x_0)\,dx. \qquad (3)$$

Comparing Eqs. (1a) and (1b) we see that only the backward form of the master equation provides us with the following integro-differential equation for $X(x_0,t)$ (see Refs. 10 and 11):

$$\partial_t X(x_0,t) + P_\Omega(x_0)\,X(x_0,t) = \int W_\Omega(x_0\to x_1)\,X(x_1,t)\,dx_1, \qquad (4a)$$

which has to be solved subject to the initial condition

$$X(x_0,t=0) = f(x_0). \qquad (4b)$$

Equation (4a), which is called the transport equation, is the starting point for our considerations. Let us assume that the transition probability $W_\Omega(x_0\to x_1)$ can be written in the following form:

$$W_\Omega(x_0\to x_1) = F(\Omega)\,W(x_0,\Omega(x_1-x_0)), \qquad (5)$$

where on the right-hand side (rhs) of Eq. (5) the dependence on $\Omega$ appears explicitly and $F(\Omega)$ is a given function. Although the precise definition of the parameter $\Omega$ depends on the system considered, we assume that in our system there exists a size parameter $\Omega$ which scales the jump length. As has been shown previously,¹⁰ this special dependence of the transition probability on the parameter $\Omega$ can be used for separating the arbitrary conditional average $X$ into a nonfluctuating part $X_0$ and fluctuating parts $X_l$ ($l\ge 1$). Expanding $X$ in a power series in $\Omega^{-1}$,

$$X(x_0,\tau) = \sum_{l=0}^{\infty} \Omega^{-l}\,X_l(x_0,\tau), \qquad (6)$$

we have shown that the nonfluctuating part $X_0$ is the solution of a homogeneous first-order partial differential equation,

$$X_0(x_0,\tau) = f(\bar x), \qquad (7)$$

where $\bar x$ is the solution of the macroscopic equation

$$\frac{d\bar x}{d\tau} = \alpha_1(\bar x), \quad \bar x(0) = x_0. \qquad (8)$$

The fluctuating contributions $X_l$ ($l\ge 1$) obey inhomogeneous first-order partial differential equations, the solutions of which are given by

$$X_l(x_0,\tau) = X_l[x_0,\bar x] = \int_{x_0}^{\bar x} \frac{H_l[y,\bar x]}{\alpha_1(y)}\,dy \qquad (9a)$$

with

$$H_l(x_0,\tau) = H_l[x_0,\bar x] = \sum_{s=2}^{l+1} \frac{\alpha_s(x_0)}{s!}\,\frac{\partial^s}{\partial x_0^s}\,X_{l+1-s}[x_0,\bar x], \quad l\ge 1. \qquad (9b)$$

In deriving the above equations we introduced a new time scale $\tau = F(\Omega)t/\Omega^2$ and the jump moments of the transition probability

$$\alpha_n(x_0) = \int W(x_0,y)\,y^n\,dy, \qquad (10)$$

which are independent of $\Omega$. Furthermore it turns out that the time dependence of $X_l(x_0,\tau)$ is given by the solution of the macroscopic equation alone; this is indicated by using the square brackets in Eqs. (9a) and (9b).

From Eqs. (7)–(9) we see that the functions $X_l$ can be determined successively, the calculation of $X_l$ involving only differentiations of the previous functions $X_k$ ($k\le l-1$) with respect to $x_0$. This differentiation can be carried out explicitly, since the dependence of $\bar x$ on the initial condition $x_0$ is given by

$$\frac{d\bar x}{dx_0} = \frac{\alpha_1(\bar x)}{\alpha_1(x_0)}, \qquad (12)$$

which implies, for example, for the rhs of Eq. (9b)

$$\frac{d}{dx_0}X_l[x_0,\bar x] = \frac{\partial}{\partial x_0}X_l[x_0,\bar x] + \frac{\alpha_1(\bar x)}{\alpha_1(x_0)}\frac{\partial}{\partial\bar x}X_l[x_0,\bar x]. \qquad (13)$$

In our previous paper we have also shown that the dependence of the quantities $X_l$ on the arbitrary function $f$ can be made explicit, yielding

$$X_l[x_0,\bar x] = \sum_{k=0}^{2l-1} f^{(2l-k)}(\bar x)\,X_{l,2l-k}[x_0,\bar x], \quad l\ge 1. \qquad (14)$$

In Eq. (14), $f^{(k)}(\bar x)$ denotes the $k$th derivative of $f$ with respect to the argument, and the $X_{l,2l-k}$ are functions independent of $f$, containing the moments $\alpha_n(x_0)$ only. The importance of Eq. (14) becomes clearer if we combine it with Eq. (6) to obtain

$$\langle f(x)\mid x_0\rangle_t = X(x_0,\tau) = X[x_0,\bar x] = f(\bar x) + \frac{1}{\Omega}\bigl[f^{(2)}(\bar x)X_{1,2} + f^{(1)}(\bar x)X_{1,1}\bigr] + \frac{1}{\Omega^2}\bigl[f^{(4)}(\bar x)X_{2,4} + f^{(3)}(\bar x)X_{2,3} + f^{(2)}(\bar x)X_{2,2} + f^{(1)}(\bar x)X_{2,1}\bigr] + O(\Omega^{-3}) \qquad (15a)$$

or, in general,

$$\langle f(x)\mid x_0\rangle_t = f(\bar x) + \sum_{l=1}^{\infty}\Omega^{-l}\sum_{k=0}^{2l-1} f^{(2l-k)}(\bar x)\,X_{l,2l-k}(x_0,\tau). \qquad (15b)$$

We see that an arbitrary conditional average can be calculated up to any order in $\Omega^{-1}$, once the functions $X_{l,2l-k}$ are known. Therefore the functions $X_{l,2l-k}$ will play an essential role in our further considerations. For example, the mean-square deviation of an arbitrary quantity $f$ can be obtained directly from Eqs. (15a) and (15b), yielding up to order $\Omega^{-1}$

$$\langle f^2(x)\mid x_0\rangle_t - \langle f(x)\mid x_0\rangle_t^2 = \frac{2}{\Omega}\,X_{1,2}\,[f^{(1)}(\bar x)]^2. \qquad (16)$$

This shows that fluctuations are of order $\Omega^{-1}$, and $X_{1,2}$ measures the size of these fluctuations. (For an explicit expression for $X_{1,2}$ see Sec. III.)

In the next section we will focus our attention on the general structure of $X_{l,2l-k}$ and establish a recursion relation for these functions. It will turn out that for a fixed value of $k$ the quantities $X_{l,2l-k}$, which belong to the $(k+1)$th column of Eq. (15a), are polynomials in $X_{1,2}$, and the dependence on the index $l$ is explicitly given by the powers of $X_{1,2}$. In particular, for $k=0$ we have already shown that the terms in the first column of Eq. (15a) can be expressed by

$$X_{l,2l} = \frac{X_{1,2}^{\,l}}{l!}, \qquad (17)$$

which, neglecting all other terms, gives rise to a Gaussian distribution function with time-dependent mean $\bar x$ and variance $2X_{1,2}/\Omega$. This approximation is the well-known linear noise approximation. However, from a mathematical point of view there is no a priori reason for neglecting all other terms in Eq. (15b), and the question arises: what are the contributions of these terms? In the following sections we will address this question and show that each column in Eq. (15b) gives rise to a Hermite function in the TDF, meaning that $h(x,t\mid x_0)$ can be expanded in a series of Hermite functions, where the coefficients of the series can be determined successively.
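As a numerical cross-check of the macroscopic equation and of the explicit quadrature expression for $X_{1,2}$ (given in Sec. III), note that differentiating that expression along the macroscopic flow yields the equivalent variance-type ODE $dX_{1,2}/d\tau = 2\alpha_1'(\bar x)X_{1,2} + \tfrac12\alpha_2(\bar x)$, $X_{1,2}(0)=0$. The sketch below integrates both forms; the moments $\alpha_1(x)=-x-x^3$ and $\alpha_2(x)=1/(1+x^2)$ are illustrative assumptions, not a model from the paper.

```python
# Cross-check: quadrature form of X_12 vs. the equivalent ODE
#   dX/dtau = 2 a1'(xbar) X + a2(xbar)/2   along   dxbar/dtau = a1(xbar).
# The moments a1, a2 below are illustrative assumptions.
import math

def a1(x):  return -x - x ** 3
def da1(x): return -1.0 - 3.0 * x ** 2
def a2(x):  return 1.0 / (1.0 + x * x)

def rk4(x0, tau, n=4000):
    """Integrate xbar' = a1(xbar), X' = 2 a1'(xbar) X + a2(xbar)/2 by RK4."""
    h = tau / n
    x, X = x0, 0.0
    f = lambda xb, Xv: (a1(xb), 2.0 * da1(xb) * Xv + 0.5 * a2(xb))
    for _ in range(n):
        k1 = f(x, X)
        k2 = f(x + 0.5 * h * k1[0], X + 0.5 * h * k1[1])
        k3 = f(x + 0.5 * h * k2[0], X + 0.5 * h * k2[1])
        k4 = f(x + h * k3[0], X + h * k3[1])
        x += h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        X += h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, X

def x12_quadrature(x0, xbar, n=20000):
    """X_12 = (a1(xbar)**2 / 2) * int_{x0}^{xbar} a2(y)/a1(y)**3 dy (trapezoid)."""
    h = (xbar - x0) / n
    s = 0.5 * (a2(x0) / a1(x0) ** 3 + a2(xbar) / a1(xbar) ** 3)
    for i in range(1, n):
        y = x0 + i * h
        s += a2(y) / a1(y) ** 3
    return 0.5 * a1(xbar) ** 2 * s * h

xbar, X_ode = rk4(1.0, 0.8)
X_quad = x12_quadrature(1.0, xbar)
```

The two results should agree to several digits, confirming that $X_{1,2}$ (and hence the variance $2X_{1,2}/\Omega$ of Eq. (16)) is fixed by the macroscopic flow alone.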
III. RECURSION RELATION FOR $X_{l,2l-k}$

In this section we want to give both a recursion relation for the functions $X_{l,2l-k}$ and their explicit dependence on the index $l$ for a fixed value of $k$. We start by combining Eqs. (9a) and (9b) to obtain

$$X_l[x_0,\bar x] = \sum_{s=2}^{l+1}\frac{1}{s!}\int_{x_0}^{\bar x}\frac{\alpha_s(y)}{\alpha_1(y)}\,\frac{d^s}{dy^s}X_{l+1-s}[y,\bar x]\,dy, \quad l\ge 1. \qquad (18)$$

Since the "initial condition" $X_0 = f(\bar x)$ is known, we can use Eq. (18) to determine the $X_l$'s successively. An explicit expression for $X_1$ has been given previously¹⁰ and shows agreement with the general structure of Eq. (14). Note that the differentiation on the rhs of Eq. (18) has to be understood according to Eq. (13), since $\bar x$ depends on $x_0$.

Let us first simplify the notation. Suppose $B[x_0,\bar x]$ is an arbitrary function of the two independent variables $x_0$ and $\bar x$. In order to denote the $k$th total derivative with respect to $x_0$ we use a superscript in square brackets and suppress the argument,

$$\frac{d^k}{dx_0^k}B[x_0,\bar x] = B^{[k]}. \qquad (19)$$

By $f^{(n)}(\bar x)$ we denote the $n$th derivative of $f$ with respect to the argument, which therefore does not depend explicitly on $x_0$. Furthermore we suppress the integration variable $y$. With this simplified notation Eq. (18) reads

$$X_l = \sum_{s=2}^{l+1}\frac{1}{s!}\int_{x_0}^{\bar x}\frac{\alpha_s}{\alpha_1}\,X^{[s]}_{l+1-s}\,dy, \quad l\ge 1. \qquad (20)$$

In order to find a recursion relation for the $X_{l,2l-k}$ [see Eqs. (15a) and (15b)] we first prove that the general structure of $X_l$ is given by

$$X_l[x_0,\bar x] = \sum_{k=0}^{2l-1}f^{(2l-k)}(\bar x)\,X_{l,2l-k}[x_0,\bar x] \qquad (21)$$

with

$$X_{l,2l-k} = 0 \quad\text{for } l<0,\ k<0,\ \text{or } k\ge 2l, \quad\text{but } X_{0,0}=1, \qquad (22)$$

where the $X_{l,2l-k}$'s no longer depend on $f$. We know already that $X_0$ and $X_1$ have the form of Eq. (21), and for general $l\ge 1$ we will prove this relation by induction. We start by assuming that Eq. (21) is valid for all $X_L$ with $0\le L\le l-1$. Since on the rhs of Eq. (20) we have only derivatives of $X_L$ with $0\le L\le l-1$, we can insert our assumption [Eq. (21)] into Eq. (20) in order to prove that the left-hand side (lhs) of Eq. (20) also has the assumed form. In addition this procedure will provide us with a recursion relation for $X_{l,2l-k}$. First we have to differentiate Eq. (21) with respect to $x_0$,

$$X^{[s]}_L = \sum_{k=0}^{2L-1}\sum_{m=0}^{s}\binom{s}{m}\left[\frac{d^m}{dx_0^m}f^{(2L-k)}(\bar x)\right]X^{[s-m]}_{L,2L-k} = \sum_{k=0}^{2L-1}\sum_{m=0}^{s}\sum_{j=0}^{m}\binom{s}{m}f^{(2L-k+j)}(\bar x)\,p_{m,j}\,X^{[s-m]}_{L,2L-k}, \qquad (23)$$

where we have used Leibnitz' rule in the first step and the chain rule for higher derivatives in the second step (see Appendix A). Note that in Eq. (23) the dependence on $f$ is explicit and $p_{m,j} = p_{m,j}[x_0,\bar x]$. If we now rearrange the sums in Eq. (23) with respect to the derivatives of $f$ we obtain (see Appendix C)

$$X^{[s]}_L = \sum_{k=0}^{2L+s-1}f^{(2L+s-k)}(\bar x)\sum_{j=0}^{s}\sum_{m=j}^{s}\binom{s}{m}\,p_{m,j}\,X^{[s-m]}_{L,2L+s-j-k}, \qquad (24)$$

where for the $X$'s the constraints, Eq. (22), have to be used. Putting $L=l+1-s$ and inserting this expression into the rhs of Eq. (20) we obtain, again after rearranging the series with respect to the derivatives of $f$ (see Appendix C),

$$X_l = \sum_{k=0}^{2l-1}f^{(2l-k)}(\bar x)\sum_{j=2}^{l+1}\frac{1}{j!}\sum_{n=0}^{j}\sum_{m=0}^{n}\binom{j}{n}\int_{x_0}^{\bar x}\frac{\alpha_j}{\alpha_1}\,p_{n,m}\,X^{[j-n]}_{l+1-j,\,2l-k-m}\,dy, \qquad (25)$$

which proves Eq. (21), since Eq. (25) has the same structure. In addition, comparing Eq. (25) with Eq. (21), we immediately obtain the following recursion relation for $X_{l,2l-k}$:

$$X_{l,2l-k} = \sum_{j=2}^{l+1}\frac{1}{j!}\sum_{n=0}^{j}\sum_{m=0}^{n}\binom{j}{n}\int_{x_0}^{\bar x}\frac{\alpha_j}{\alpha_1}\,p_{n,m}\,X^{[j-n]}_{l+1-j,\,2l-k-m}\,dy, \qquad (26)$$

which enables us to calculate the functions $X_{l,2l-k}$ successively. In order to see this more clearly we change the indices $(j,m,n)$ to new indices $(L,K,J)$ via the relations

$$L = l+1-j, \quad K = k+m+2-2j, \quad J = j-n. \qquad (27)$$

Equation (26) then assumes the form

$$X_{l,2l-k} = \sum_{(L,K,J)\in W_l}\int_{x_0}^{\bar x}a_{L,K,J}\,X^{[J]}_{L,2L-K}\,dy. \qquad (28)$$

The functions $a_{L,K,J}$, which depend on $x_0$ and $\bar x$, are given by

$$a_{L,K,J} = \frac{1}{(l+1-L)!}\binom{l+1-L}{J}\frac{\alpha_{l+1-L}}{\alpha_1}\,p_{\,l+1-L-J,\;2l-k-2L+K}, \qquad (29)$$

and the summation runs over all integer triples $(L,K,J)$ of the set $W_l$, which is defined by

$$W_l = \bigl\{(L,K,J)\in\mathbb{Z}^3 \mid\ l-k-1\le L\le l-1,\ \ k+2L-2l\le K\le k-l+L+1,\ \ 0\le J\le L-K-l+k+1\bigr\}. \qquad (30a\text{–}30c)$$

The inequalities, Eqs. (30a)–(30c), are immediate consequences of the index transformation, Eq. (27). We want to mention once again that in evaluating the summation on the rhs of Eq. (28) the constraints given in Eq. (22) have to be taken into account.

Next we want to show that the sum on the rhs of Eq. (28) contains only one term with $K=k$. From Eqs. (30a) and (30b) we obtain $K\le k-l+L+1\le k$, where the equal sign holds only for $L=l-1$. That is to say, for $L<l-1$ we cannot find a value of $K$ with $K=k$. Inserting now $(L,K)=(l-1,k)$ into Eq. (30c) yields $J=0$. We conclude that $(L,K,J)=(l-1,k,0)$ is the only triple for which $K=k$. The corresponding value for $a_{L,K,J}$ is easily found to be

$$a_{l-1,k,0} = \frac{\alpha_2}{2\alpha_1}\,(\bar x^{[1]})^2. \qquad (31)$$

We can now split up the sum in Eq. (28) into two terms, one with $K=k$ and the other with $K<k$, yielding

$$X_{l,2l-k} = \int_{x_0}^{\bar x}\frac{\alpha_2}{2\alpha_1}\,(\bar x^{[1]})^2\,X_{l-1,\,2(l-1)-k}\,dy + \sum_{(L,K,J)\in W'_l}\int_{x_0}^{\bar x}a_{L,K,J}\,X^{[J]}_{L,2L-K}\,dy, \qquad (32)$$

where $W'_l$ is the region $W_l$ minus the point $(L,K,J)=(l-1,k,0)$, i.e., $W'_l = W_l\setminus\{(l-1,k,0)\}$.

We now evaluate the recursion relation, Eq. (32), for $k=0$. The sum in Eq. (32) is zero, since for $k=0$ the set $W'_l$ is empty. Therefore we get

$$X_{l,2l} = \int_{x_0}^{\bar x}\frac{\alpha_2}{2\alpha_1}\,(\bar x^{[1]})^2\,X_{l-1,2l-2}\,dy. \qquad (33)$$

This recursion relation can be solved explicitly, since the initial value $X_{0,0}=1$ is known. For $l=1$ we get, with the aid of Eq. (12),

$$X_{1,2}[x_0,\bar x] = \frac12\int_{x_0}^{\bar x}\frac{\alpha_2(y)}{\alpha_1(y)}\left(\frac{\alpha_1(\bar x)}{\alpha_1(y)}\right)^2 dy = \frac{\alpha_1^2(\bar x)}{2}\int_{x_0}^{\bar x}\frac{\alpha_2(y)}{\alpha_1^3(y)}\,dy. \qquad (34)$$

Introducing the abbreviation

$$\tilde X_{1,2}[y,\bar x] := -\frac{\partial}{\partial y}X_{1,2}[y,\bar x] = \frac{\alpha_2(y)}{2\alpha_1(y)}\left(\frac{\alpha_1(\bar x)}{\alpha_1(y)}\right)^2, \qquad (35)$$

we can rewrite Eq. (33) in the form

$$X_{l,2l} = \int_{x_0}^{\bar x}\tilde X_{1,2}\,X_{l-1,2l-2}\,dy \qquad (36)$$

or, explicitly,

$$X_{l,2l} = \frac{X_{1,2}^{\,l}}{l!}. \qquad (37)$$

Equation (37) can easily be proved by induction and was obtained previously.¹⁰ For convenience we define $X_{1,2}^{\,l}/l! := 0$ for $l<0$ in order to declare Eq. (37) valid for all $l$. This is in agreement with Eq. (22) and shall be tacitly assumed in the sequel.

Next let us turn to the case $k=1$. From Eqs. (30a) and (30b) we get for the region $W'_l$ the value $K=0$; therefore the sum on the rhs of Eq. (32) contains only terms of the form $X_{L,2L}$, the explicit structure of which is already known for all $L$ due to Eq. (37). In particular, region $W'_l$ consists of the following set of triples:

$$W'_l = \{(l-1,0,1),\,(l-1,0,0),\,(l-2,0,0)\}, \qquad (38)$$

and the corresponding values for $a_{L,K,J}$ are given by [see Eq. (29) and Appendix A]

$$a_{l-1,0,1} = \frac{\alpha_2}{\alpha_1}\,\bar x^{[1]}, \qquad (39a)$$

$$a_{l-1,0,0} = \frac{\alpha_2}{2\alpha_1}\,\bar x^{[2]}, \qquad (39b)$$

$$a_{l-2,0,0} = \frac{\alpha_3}{6\alpha_1}\,(\bar x^{[1]})^3. \qquad (39c)$$

Therefore Eq. (32) reduces for $k=1$ to

$$X_{l,2l-1} = \int_{x_0}^{\bar x}\tilde X_{1,2}\,X_{l-1,2l-3}\,dy + \int_{x_0}^{\bar x}\frac{\alpha_2}{\alpha_1}\,\bar x^{[1]}\,X^{[1]}_{l-1,2l-2}\,dy + \int_{x_0}^{\bar x}\frac{\alpha_2}{2\alpha_1}\,\bar x^{[2]}\,X_{l-1,2l-2}\,dy + \int_{x_0}^{\bar x}\frac{\alpha_3}{6\alpha_1}\,(\bar x^{[1]})^3\,X_{l-2,2l-4}\,dy. \qquad (40)$$

Inserting now the explicit expression for $X_{l,2l}$ given by Eq. (37) yields

$$X_{l,2l-1} = \int_{x_0}^{\bar x}\tilde X_{1,2}\,X_{l-1,2l-3}\,dy + \int_{x_0}^{\bar x}\left[\frac{\alpha_2}{2\alpha_1}\,\bar x^{[2]}\,\frac{X_{1,2}^{\,l-1}}{(l-1)!} + \left(\frac{\alpha_2}{\alpha_1}\,\bar x^{[1]}\,X^{[1]}_{1,2} + \frac{\alpha_3}{6\alpha_1}\,(\bar x^{[1]})^3\right)\frac{X_{1,2}^{\,l-2}}{(l-2)!}\right]dy. \qquad (41)$$

Equation (41) is now a recursion relation in $l$ alone and can be solved explicitly using Eq. (B2) of Appendix B. The result is

$$X_{l,2l-1} = \frac{X_{1,2}^{\,l-1}}{(l-1)!}\,X_{1,1} + \frac{X_{1,2}^{\,l-2}}{(l-2)!}\,X_{2,3} \qquad (42)$$

with

$$X_{1,1} = \frac12\int_{x_0}^{\bar x}\frac{\alpha_2}{\alpha_1}\,\bar x^{[2]}\,dy \qquad (43)$$

and

$$X_{2,3} = \int_{x_0}^{\bar x}\frac{\alpha_2}{\alpha_1}\,\bar x^{[1]}\,X^{[1]}_{1,2}\,dy + \frac16\int_{x_0}^{\bar x}\frac{\alpha_3}{\alpha_1}\,(\bar x^{[1]})^3\,dy. \qquad (44)$$

Again we have to keep in mind that by definition $X_{1,2}^{\,l}=0$ for $l<0$. Equation (42) implies a considerable simplification of our scheme, Eq. (15b), since once the functions $X_{1,1}$ and $X_{2,3}$ are calculated, we know all terms in the second column of Eq. (15b).

It is now obvious how the functions $X_{l,2l-2}$ can be calculated. For region $W'_l$ we now have $K=0,1$. This implies that on the rhs of Eq. (32) we have only terms of the form $X_{L,2L}$ and $X_{L,2L-1}$, the structure of which is already known for all $L$ [see Eqs. (37) and (42)]. Since the considerations are identical to those for $X_{l,2l-1}$, we only present the result. The functions $X_{l,2l-2}$ are again polynomials in $X_{1,2}$ and have the form

$$X_{l,2l-2} = \frac{X_{1,2}^{\,l-2}}{(l-2)!}\left(\tfrac12 X_{1,1}^2 + X_{2,2}\right) + \frac{X_{1,2}^{\,l-3}}{(l-3)!}\left(X_{1,1}X_{2,3} + X_{3,4}\right) + \frac{X_{1,2}^{\,l-4}}{(l-4)!}\,\tfrac12 X_{2,3}^2, \qquad (45)$$

where the new coefficients $X_{2,2}$ and $X_{3,4}$ are given by

$$X_{2,2} = \int_{x_0}^{\bar x}\frac{\alpha_2}{\alpha_1}\,\bar x^{[1]}\,X^{[1]}_{1,1}\,dy + \frac12\int_{x_0}^{\bar x}\frac{\alpha_2}{\alpha_1}\,\bar x^{[2]}\,X_{1,1}\,dy + \frac12\int_{x_0}^{\bar x}\frac{\alpha_2}{\alpha_1}\,X^{[2]}_{1,2}\,dy + \frac12\int_{x_0}^{\bar x}\frac{\alpha_3}{\alpha_1}\,\bar x^{[1]}\bar x^{[2]}\,dy \qquad (46)$$

and

$$X_{3,4} = \frac12\int_{x_0}^{\bar x}\frac{\alpha_2}{\alpha_1}\,(X^{[1]}_{1,2})^2\,dy + \int_{x_0}^{\bar x}\frac{\alpha_2}{\alpha_1}\,\bar x^{[1]}\,X^{[1]}_{2,3}\,dy + \frac12\int_{x_0}^{\bar x}\frac{\alpha_3}{\alpha_1}\,(\bar x^{[1]})^2\,X^{[1]}_{1,2}\,dy + \frac1{24}\int_{x_0}^{\bar x}\frac{\alpha_4}{\alpha_1}\,(\bar x^{[1]})^4\,dy. \qquad (47)$$

Although the coefficients $X_{2,2}$ and $X_{3,4}$ are rather involved, we see that the structure of $X_{l,2l-2}$ remains simple. Equation (45) represents an analytic expression for the coefficients in the third column of Eq. (15a). In principle no difficulty arises when calculating the terms $X_{l,2l-k}$ with $k\ge 3$. However, rather than going beyond our present explicit calculations, we want to focus our attention on the general structure of $X_{l,2l-k}$. In the next section we shall prove that for all $k$ the dependence of $X_{l,2l-k}$ on the index $l$ is given by a polynomial in $X_{1,2}$; furthermore, the upper and lower index of this polynomial are given. It is exactly this explicit structure of $X_{l,2l-k}$ which will enable us to reconstruct the form of the distribution function itself. This will be done in Sec. V.
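The $k=0$ recursion above can be checked numerically. Under the illustrative model assumption $\alpha_1(y)=-y$, $\alpha_2(y)=1$ (not a model from the paper), one has $\tilde X_{1,2}[y,\bar x]=-\bar x^2/(2y^3)$ and, analytically, $X_{1,2}[x_0,\bar x]=\tfrac14(1-\bar x^2/x_0^2)$; iterating the integral recursion of Eq. (36) on a grid should then reproduce the closed form $X_{l,2l}=X_{1,2}^{\,l}/l!$ of Eq. (37).

```python
# Numerical check of the k = 0 recursion, Eqs. (33)/(36), against the closed
# form X_{l,2l} = X_12**l / l!, Eq. (37).  Model assumption for this sketch:
# alpha_1(y) = -y, alpha_2(y) = 1, so Xt_12[y, xbar] = -xbar**2 / (2 y**3)
# and analytically X_12[x0, xbar] = (1 - xbar**2/x0**2) / 4.
x0, xbar, n = 1.0, 0.3, 20001
ys = [xbar + i * (x0 - xbar) / (n - 1) for i in range(n)]   # ascending grid
g = [-xbar * xbar / (2.0 * y ** 3) for y in ys]             # Xt_12 on the grid

def integrate_to_xbar(vals):
    """F(y) = int_y^xbar f(s) ds for f sampled on ys (trapezoidal rule)."""
    h = ys[1] - ys[0]
    out, acc = [0.0], 0.0
    for i in range(1, n):
        acc += 0.5 * (vals[i - 1] + vals[i]) * h   # running int_xbar^y f
        out.append(-acc)                           # int_y^xbar f = -int_xbar^y f
    return out

X, levels = [1.0] * n, []        # start from X_{0,0} = 1
for l in (1, 2, 3):
    X = integrate_to_xbar([g[i] * X[i] for i in range(n)])
    levels.append(X[-1])         # X_{l,2l}[x0, xbar], evaluated at y = x0
X12 = levels[0]
```

With these parameters `X12` should equal $\tfrac14(1-0.09)=0.2275$, and `levels[1]`, `levels[2]` should match $X_{1,2}^2/2!$ and $X_{1,2}^3/3!$ to grid accuracy.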
IV. GENERAL STRUCTURE OF $X_{l,2l-k}$

According to the results of the previous section it seems quite obvious that the dependence of $X_{l,2l-k}$ on the index $l$ is given by a polynomial in $X_{1,2}$. However, the determination of the upper and lower index of this polynomial is not evident. Therefore we want to prove in this section that the general structure of $X_{L,2L-K}$ is given by

$$X_{L,2L-K} = \sum_{I=[K/2]+1}^{2K}\frac{X_{1,2}^{\,L-I}}{(L-I)!}\,c_{K,I}, \qquad (48)$$

where the $c_{K,I}$ are functions of $x_0$ and $\bar x$ but independent of $L$. They can be determined uniquely by solving the recursion relation, Eq. (28). In particular, by comparing Eq. (48) with Eqs. (42) and (45) we get for the first few $c_{K,I}$

$$c_{1,1} = X_{1,1}, \quad c_{1,2} = X_{2,3}, \qquad (49)$$

$$c_{2,2} = \tfrac12 X_{1,1}^2 + X_{2,2}, \quad c_{2,3} = X_{1,1}X_{2,3} + X_{3,4}, \quad c_{2,4} = \tfrac12 X_{2,3}^2. \qquad (50)$$

In Eq. (48) the symbol $[K/2]$ denotes the largest integer less than or equal to $K/2$, and $X_{1,2}^{\,L}/L! = 0$ for $L<0$, as defined in Sec. III. It should be stressed that Eq. (48) holds for $K\ge 1$ only; in the case $K=0$ we have to use Eq. (37). Taking now the $J$th total derivative with respect to $x_0$ of both Eq. (48) and Eq. (37) we get

$$X^{[J]}_{L,2L-K} = \sum_{I=[K/2]+1}^{2K+J}\frac{X_{1,2}^{\,L-I}}{(L-I)!}\,c^{(J)}_{K,I} \quad\text{for } K,J=0,1,2,\ldots\ \text{but } (K,J)\neq(0,0), \qquad (51a)$$

but

$$X^{[0]}_{L,2L} = \frac{X_{1,2}^{\,L}}{L!} \quad\text{for } (K,J)=(0,0), \qquad (51b)$$

and the coefficients $c^{(J)}_{K,I}$ are uniquely determined by this differentiation.

According to Eqs. (51a) and (51b) let us next split up the sum in Eq. (32) into two terms, one with $(K,J)=(0,0)$ and the other with $(K,J)\neq(0,0)$:

$$X_{l,2l-k} = \int_{x_0}^{\bar x}\tilde X_{1,2}\,X_{l-1,\,2(l-1)-k}\,dy + \sum_{\substack{(L,K,J)\in W'_l\\ (K,J)=(0,0)}}\int_{x_0}^{\bar x}a_{L,0,0}\,X_{L,2L}\,dy + \sum_{\substack{(L,K,J)\in W'_l\\ (K,J)\neq(0,0)}}\int_{x_0}^{\bar x}a_{L,K,J}\,X^{[J]}_{L,2L-K}\,dy. \qquad (52)$$

Note that for $k=0$ both sums are empty. In order to prove Eq. (48) we assume that Eq. (48) [and Eqs. (51a) and (51b), which are a consequence of Eq. (48)] is valid for all $K\le k-1$ and for all $l$. In the case of $K=0,1,2$ this has been shown explicitly in Sec. III. Since on the rhs of Eq. (52) we only have terms $X^{[J]}_{L,2L-K}$ with $K\le k-1$, we can insert our assumption, Eqs. (51a) and (51b), in order to prove that $X_{l,2l-k}$ has again the structure given by Eq. (48). In the following we consider each sum in Eq. (52) separately.

(i) Inserting Eq. (51b) into Eq. (52) we obtain for the first sum

$$\sum_{\substack{(L,K,J)\in W'_l\\ (K,J)=(0,0)}}\int_{x_0}^{\bar x}a_{L,0,0}\,\frac{X_{1,2}^{\,L}}{L!}\,dy, \qquad (53)$$

where the sum has to be taken over the subregion of $W'_l$ defined by $(K,J)=(0,0)$. Putting $(K,J)=(0,0)$ in Eqs. (30a) and (30b) we find the following inequalities for $L$:

$$l-k-1\le L\le l-1, \qquad (54a)$$

$$2L\le 2l-k. \qquad (54b)$$

The coefficients $a_{L,0,0}$ are given by [see Eq. (29)]

$$a_{L,0,0} = \frac{1}{(l+1-L)!}\,\frac{\alpha_{l+1-L}}{\alpha_1}\,p_{\,l+1-L,\;2l-k-2L}. \qquad (55)$$

If we now keep in mind that $p_{n,0}=\delta_{n,0}$ (see Appendix A), we see that, if $2L=2l-k$ [which is possible due to Eq. (54b)], the corresponding coefficient $a_{L,0,0}$ is proportional to $p_{l+1-L,0}$, which is zero unless $L=l+1$. But $L=l+1$ is in contradiction to Eq. (54a). We can therefore write instead of Eq. (54b) the inequality $2L<2l-k$, which is equivalent to $L\le l-[k/2]-1$, since $L$ has to be an integer. This latter inequality is for $k>0$ always stronger than Eq. (54a), and so we can replace Eqs. (54a) and (54b) by

$$l-k-1\le L\le l-[k/2]-1. \qquad (56)$$

Performing the index transformation $L=l-i$ in Eq. (53) we get

$$\sum_{i=[k/2]+1}^{k+1}\int_{x_0}^{\bar x}a_{l-i,0,0}\,\frac{X_{1,2}^{\,l-i}}{(l-i)!}\,dy. \qquad (57)$$

According to the remark following Eq. (52) the above expression is only valid for $k\ge 1$. Defining now coefficients $A_{i,k}$ as follows:

$$A_{i,k} = \frac{1}{(i+1)!}\,\frac{\alpha_{i+1}}{\alpha_1}\,p_{\,i+1,\;2i-k} \quad\text{for } i\le k+1, \qquad (58a)$$

$$A_{i,k} = 0 \quad\text{for } i>k+1, \qquad (58b)$$

we can extend the upper index of the sum in Eq. (57) to $2k$, yielding

$$\sum_{i=[k/2]+1}^{2k}\int_{x_0}^{\bar x}A_{i,k}\,\frac{X_{1,2}^{\,l-i}}{(l-i)!}\,dy, \qquad (59)$$

where it should be noted that according to Eqs. (58a) and (58b) the coefficients $A_{i,k}$ are independent of $l$.

(ii) Inserting Eq. (51a) into the second sum of Eq. (52) we obtain

$$\sum_{\substack{(L,K,J)\in W'_l\\ (K,J)\neq(0,0)}}\ \sum_{I=[K/2]+1}^{2K+J}\int_{x_0}^{\bar x}a_{L,K,J}\,c^{(J)}_{K,I}\,\frac{X_{1,2}^{\,L-I}}{(L-I)!}\,dy, \qquad (60)$$

where the summation runs over all triples $(L,K,J)$ from region $W'_l$ excluding the points with $(K,J)=(0,0)$. Performing an index transformation $(L,K,J,I)\to(l-L,K,J,I)$ we obtain

$$\sum_{(L,K,J,I)\in W''}\int_{x_0}^{\bar x}a_{l-L,K,J}\,c^{(J)}_{K,I}\,\frac{X_{1,2}^{\,l-L-I}}{(l-L-I)!}\,dy, \qquad (61)$$

where the sum has to be taken over all integer quadruples $(L,K,J,I)\in W''$, given by

$$W'' = \bigl\{(L,K,J,I)\in\mathbb{Z}^4 \mid\ (L,K,J)\neq(1,k,0);\ \ 1\le L\le k+1,\ \ k-2L\le K\le k-L+1,\ \ 0\le J\le k+1-K-L,\ \ [K/2]+1\le I\le 2K+J\bigr\}. \qquad (62a\text{–}62d)$$

The coefficients $a_{l-L,K,J}$ are given by [see Eq. (29)]

$$a_{l-L,K,J} = \frac{1}{(L+1)!}\binom{L+1}{J}\frac{\alpha_{L+1}}{\alpha_1}\,p_{\,L+1-J,\;K-k+2L} \qquad (63)$$

and are independent of the index $l$, since neither Eqs. (62a)–(62d) nor the rhs of Eq. (63) depend on $l$. Next we want to prove that Eq. (61) can be written in the same form as Eq. (59), namely

$$\sum_{i=[k/2]+1}^{2k}\int_{x_0}^{\bar x}B_{i,k}\,\frac{X_{1,2}^{\,l-i}}{(l-i)!}\,dy. \qquad (64)$$

The proof will be completed if we can show that $I+L$ in Eq. (61) satisfies the inequality

$$[k/2]+1\le I+L\le 2k, \qquad (65)$$

because then we can rearrange Eq. (61) according to powers of $X_{1,2}$ and arrive at Eq. (64). To this end we first insert the lhs of Eq. (62b) into the lhs of Eq. (62d) and get

$$I \ge \frac{K}{2}+1 \ge \frac{k-2L}{2}+1 = \frac{k}{2}+1-L,$$

or $I+L\ge k/2+1$, which we wanted to prove. Next we note that Eqs. (62c) and (62d) imply $I\le k+1+K-L$, or $I+L\le k+1+K$. For $L=1$ we have $K\le k-1$ because of $(L,K,J)\neq(1,k,0)$ and therefore $I+L\le 2k$; for $L\ge 2$ we have $K\le k-1$ from Eq. (62b) and therefore again $I+L\le 2k$. So we have shown that Eq. (65) is valid and therefore the expression given in Eq. (64) exists.

Inserting now Eqs. (64) and (59) into Eq. (52) we obtain the recursion relation

$$X_{l,2l-k} = \int_{x_0}^{\bar x}\tilde X_{1,2}\,X_{l-1,\,2(l-1)-k}\,dy + \sum_{i=[k/2]+1}^{2k}\int_{x_0}^{\bar x}C_{i,k}\,\frac{X_{1,2}^{\,l-i}}{(l-i)!}\,dy \qquad (66)$$

with $C_{i,k} := A_{i,k}+B_{i,k}$. Note that Eq. (66) is a recursion relation in $l$ alone with a fixed index $k$, since it has been derived under the assumption that $X_{L,2L-K}$ has the form of Eq. (48) for $K\le k-1$ and all $L$. The explicit solution of Eq. (66) can be found easily if we make use of lemma 1 given in Appendix B. We obtain the relation

$$X_{l,2l-k} = \sum_{i=[k/2]+1}^{2k}\frac{X_{1,2}^{\,l-i}}{(l-i)!}\int_{x_0}^{\bar x}C_{i,k}\,dy, \qquad (67)$$

which is exactly the form of Eq. (48) and completes our proof.

We briefly want to recall the significance of the result derived in this section. We have shown that the dependence of the functions $X_{l,2l-k}$ on the index $l$ is explicitly given by a polynomial in $X_{1,2}$. The coefficients $c_{K,I}$ of this polynomial can be determined successively by solving Eq. (32). We therefore have found analytic expressions for the coefficients in each column of Eq. (15a). In the next section we will see how this information can be used to derive the distribution function itself.

V. EXPLICIT FORM OF THE DISTRIBUTION FUNCTION
We have already shown in a previous paper' that the terms in the first column
of Eq.
(15a)give rise toaGauss- ian distribution function. This approximation was called linear noise approximation.If
this approximation is used for calculating conditional averages X(xp,r)
we see fromEq.
(15b) that the linear noise approximation does not provide a systematic expansionof g
in powersof 0
To
see this more clearly, let us consider the special condi- tional average(x ~xp), .
From Eqs. (15b) and (48) we find immediately1 oo cz&
—
1I(x ~xp}, =x+
1=1g, + +I) — — x+
1=1g
(68)x'
&f( )~ &,
=
I=og ' f'"'(
) lfQo 1 21—1
+ g g f'"-"'(x)x»
k.
l=1 k=1
whereas the linear noise approximation yields only the first term on the rhs
of Eq.
(68). One can argue that in many examples the parameter0
is very large and contri- butionsof
order0
' can be neglected.To
be consistent, however, one should then neglect all termsof
order Q inEq.
(15b), which implies for the distribution functionh(x,
t~xp)=5(x — x).
This, however, describes only thedeterministic (macroscopic) motion
of
the system and gives no information about fluctuations, which areof
or- der0
' [seeEq. (16)).
Another argument for neglecting the sum on the rhs
of Eq.
(68) could be the following. Suppose that the func- tions XI1 are small compared to the macroscopic motionx of
the system. Then we could neglect them even for a finite valueof II '.
But this assumption, namely+I1~&x,
is in general not true. However, in contrast toour arguments it is known that the linear noise approxi- mation provides a satisfactory description
of
the stochas-tic
process,if
the parameter0
is large The purp. oseof
this section is to resolve this "discrepancy" and to give a mathematical limit for which the linear noise approxima- tion holds. Furthermore we want topresent corrections to the linear noise approximation, which are necessary,if
the parameter0
in the physical system under considerations is not "infinitely large."
I.
et us start by combiningEq.
(6)withEq.
(14). We ob- tain for the conditional averageof
an arbitrary quantityf
[see also
Eq. (151)]
Next we rearrange the second sum on the rhs
of Eq.
(69) performing first the summation in each columnof Eqs.
(15a) and (15b) and afterwards the summation with respect tothe rows. One easily finds
(f(x) ~xp), =
h=og '
If' "(x) x'
2k k=1i=[k/2]+1
=X X
k=1i=[k/2]+1 QO
(l — i)!
f
(2!+2i—k)(x)!
~!2 (71)QO +1,2
k,i l~
In the first expression we made use
of
our definitiong) 2=0
for k&0
leading togi,
. in the second one we performed an index transformation l— i~l.
Now the summation over l is independentof
all other indices and we can write1 &12 k'
f
(21+2i—k)(x)QI
ni
1=0~ ' k=1i=[@/2]+1
(72) Finally we rearrange the term in the large parentheses with respect to derivatives
of f
2k QQ
~'
f(2!+2i
—k)(—)
g f
(2!+s)(—)dk=1i=[k/2]+1 $=1
(73) The coefficients d, can be determined with the method described in Appendix
C.
One getsd. = ', ",
d().— —
1i=[(2$ +2)/3]
(74) and therefore
A=
l=Og
&&.
' s=1g f' +'(x)d
(75)Inserting this result into
Eq.
(70) and using the explicit formof
Xi2I [seeEq.
(37)]yields(f(x)
ix, }, — = f f
(x)h(x,
t ix())dx=X-
OOI=0 lt
g
QO df' '+"(x),
(76) s=0where we used the definition dp.
— — 1.
The special form«
the sum in
Eq.
(76)allows us to apply lemma 2,AppendixB, Eq.
(B3),and gives an explicit expression for the distri- bution function itself+
kg
=1I= g
1f
(2l—
k)(—)y (70)fk/2]+1
In what follows we consider only the second sum onthe rhs
of Eq.
(70). Denoting this expression with the letter A we obtain after inserting our explicit expression for~I,2l—k
[cf.
Eq (48))h(x,
t~xp)=
n=0g
4mX1 2 1/2exp[—, Q(x — x)
/4g& z]n/2
Xdn
He„(x
+1,2 1/2— x
)V:= 0
(x— x)
4&&2 (78)
The representation Eq. (77) takes a simpler form in terms of y. The distribution function h(y,τ) of the new variable is defined via

  h(x,t|x_0) dx =: h(y,τ) dy .   (79)

Furthermore, by expressing Eq. (77) in terms of the Hermite functions

  Ψ_n(y) = e^{-y²/2} He_n(y)   (80)

we get

  h(y,τ) = Σ_{n=0}^∞ b_n Ψ_0(y) Ψ_n(y) ,   (81a)

  b_0 := 1/√π ,   b_n := (1/√π) [Ω/(4χ_{1,2})]^{n/2} Σ_{i=[(2n+2)/3]}^∞ Ω^{-i} c_{2i-n,i} ,   n ≥ 1 .   (81b)

Equation (81a) is the central result of this paper and represents the Green's-function solution of the original master equation [Eq. (1a)]. It is not a formal expansion of the distribution function h(y,τ) in Hermite functions, since the coefficients b_n are determined by recursion relations [see Eqs. (28) and (48)] and their dependence on Ω appears explicitly. In general the coefficients b_n (n ≥ 1) depend on x_0 and on the scaled time τ via the solution of the macroscopic equation [Eq. (8)]. From the definition of the b_n's [Eq. (81b)] we find

  lim_{Ω→∞} b_n = b_0 δ_{n,0} ,   (82)

which yields for the distribution function

  lim_{Ω→∞} h(y,τ) = (1/√π) Ψ_0²(y) .   (83)

But this is exactly the linear noise approximation, which, in the new stochastic variable y, becomes time independent. We want to mention that Eq. (78) can be used as a starting point in order to transform the original master equation [Eq. (1a)] into a partial differential equation of infinite order, where the dependence on Ω is shown explicitly. This has been done in the work of van Kampen. In the limit Ω→∞ one obtains a Fokker-Planck equation with time-dependent mean and variance, the solution of which is identical to Eq. (83). However, our solution, Eq. (81), treats the full problem without any approximation and provides systematic corrections to the linear noise approximation. Corrections become especially necessary if, due to the physical problem, the parameter Ω cannot be considered infinitely large. To give an example we refer to the Rayleigh piston problem.¹

In concrete examples it is unlikely that one finds a closed expression replacing the infinite sum in Eq. (81a). Therefore one is led to replace the infinite sum by a finite one. In doing this we can define the kth approximation to the distribution function,

  h_k(y,τ) := Σ_{n=0}^k b_n Ψ_0(y) Ψ_n(y) .   (84)

Although it is in general difficult to answer questions concerning the positivity of the kth approximation, one interesting conclusion can be drawn. From the orthogonality relations for the Hermite functions one easily finds that the kth approximation h_k(y,τ) gives the exact result for all moments ⟨y^l⟩ with l = 0,1,...,k. In particular one finds for the first moment

  ⟨y⟩ = ∫ y h(y,τ) dy = √π b_1 = ∫ y h_1(y,τ) dy ,   (85)

where we used

  ∫ y Ψ_0(y) Ψ_n(y) dy = √π δ_{n,1} .   (86)

One can easily verify that writing Eq. (85) in the original variable x reproduces Eq. (68), which serves as a check for our general expression for the distribution function given by Eq. (81a).
VI. EXAMPLE

For a simple check of our results presented in the previous sections let us consider the example of the Rayleigh piston.¹ In this case the equilibrium distribution function h^eq(x) is known to be a Gaussian distribution depending explicitly on the parameter Ω,

  h^eq(x) = [(Ω-1)/π]^{1/2} exp[-x²(Ω-1)] .   (87)

In what follows we want to show that our series expansion [Eq. (81a)] yields the correct equilibrium distribution function for all Ω, whereas the linear noise approximation is valid only for Ω→∞.

According to (81a) and (81b) we need the coefficients c_{k,i}(x_0,τ) for τ→∞ in order to get the equilibrium distribution function. However, due to the rather involved structure of the recursion relation [Eq. (28)], it seems unlikely that one can find in this particular case a general expression for the time-dependent functions χ_{l,2l-k} from which the coefficients c_{k,i} can then be derived [cf. Eq. (48)]. In order to avoid this difficulty we restrict ourselves to the following exercise: Rather than calculating the equilibrium distribution directly from the long-time limit of the χ_{l,2l-k}'s and the corresponding c_{k,i}'s, we assume the equilibrium distribution h^eq, given by Eq. (87), to be valid. We then use this known h^eq for calculating equilibrium averages via the relation

  lim_{τ→∞} (f(x)|x_0)_τ = lim_{τ→∞} ∫ f(x) h(x,τ|x_0) dx = ∫ f(x) h^eq(x) dx .   (88)

31 GREEN'S-FUNCTION SOLUTION FOR A SPECIAL CLASS . . . 375

Expanding the rhs of Eq. (88) into a power series in Ω^{-1} we can compare it with the lhs of Eq. (88) or, equivalently, with the long-time limit of Eq. (15b). In this way we can determine the χ_{l,2l-k}(x_0,τ) for τ→∞ and, using Eq. (48), the coefficients c_{k,i}(x_0,τ) for τ→∞ are obtained. As expected, it will turn out that these coefficients determine a distribution function [see Eqs. (81a) and (81b)] which is the expansion of h^eq(x) into Hermite functions.

Let us first calculate the equilibrium average as given on the rhs of Eq. (88). Expanding f(x) into a Taylor series at x = x̄ = 0 and using Eq. (87) we obtain

  ∫ f(x) h^eq(x) dx = f(0) + Σ_{l=1}^∞ Ω^{-l} Σ_{k=0}^{l-1} (l-1 choose k) f^{(2l-2k)}(0) / [(l-k)! 4^{l-k}] .   (89)

This expression can be compared with the long-time limit of our nonequilibrium equation [Eq. (15b)], yielding

  lim_{τ→∞} χ_{l,2l-2k}(x_0,τ) = χ̄_{l,2l-2k} = (l-1 choose k) / [(l-k)! 4^{l-k}] ,   (90a)

with χ̄_{1,2} = 1/4
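As a spot check of the expansion Eq. (89), the following is our own numerical exercise (the value of Ω is arbitrary, not from the paper): take f(x) = x⁴, so that only the derivatives with 2l - 2k = 4 survive, f⁗(0) = 24, and the exact Gaussian average against h^eq of Eq. (87) is 3/[4(Ω-1)²], which the Ω^{-1} series must reproduce.

```python
import math

# Spot check of Eq. (89) for f(x) = x^4 and an arbitrary Omega > 1.
# Exact Gaussian average against h_eq of Eq. (87): <x^4> = 3/(4 (Omega-1)^2).
Omega = 3.0
exact = 3.0 / (4.0 * (Omega - 1.0) ** 2)

# Series (89) restricted to l - k = 2 (the only surviving derivative order),
# i.e. terms Omega^(-l) * C(l-1, l-2) * f''''(0) / (2! * 4^2) with f''''(0)=24.
series = sum(Omega ** (-l) * math.comb(l - 1, l - 2)
             * 24.0 / (math.factorial(2) * 4 ** 2)
             for l in range(2, 200))

print(exact, series)  # both 3/16 for Omega = 3
```

The truncation at l = 200 is far beyond what is needed; the terms decay like Ω^{-l}.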
and

  lim_{τ→∞} χ_{l,2l-2k-1}(x_0,τ) = 0 ,  l ≥ 1 .   (90b)

In order to obtain the coefficients c_{k,i}(x_0,τ) for τ→∞ we have to cast Eqs. (90a) and (90b) into the following form [cf. Eq. (48)]:

  χ̄_{l,2l-k} = Σ_{i=[k/2]+1}^{2k} c̄_{k,i} χ̄_{1,2}^{l-i} / (l-i)! ,   (91)

with l-independent coefficients c̄_{k,i}. Comparing Eq. (91) with Eq. (90b) we find that c̄_{k,i} = 0 for k odd. Next, using the formula

  (l-1 choose k) = Σ_{i=k+1}^{2k} (k-1 choose 2k-i) (l-k choose i-k) ,   (92)

one can easily verify that Eq. (90a) can be expressed by

  χ̄_{l,2l-2k} = Σ_{i=k+1}^{2k} (k-1 choose 2k-i) [1/((i-k)! 4^{i-k})] χ̄_{1,2}^{l-i} / (l-i)! .   (93)

If we now compare Eq. (93) with Eq. (91) we find

  c̄_{2k,i} = (k-1 choose 2k-i) / [(i-k)! 4^{i-k}] ,  for k+1 ≤ i ≤ 2k ,   (94a)

and

  c̄_{2k,i} = 0 ,  for i > 2k .   (94b)

We want to mention that only due to the requirement that c̄_{k,i} has to be independent of l does one obtain a unique representation of χ̄_{l,2l-k} in the form of Eq. (91) and Eq. (93), respectively.

Finally, inserting Eqs. (94a) and (94b) into Eq. (81b) yields the coefficients b_n(x_0,τ) for τ→∞,

  lim_{τ→∞} b_{2n}(x_0,τ) = (1/√π) 1/[n! 4^n (Ω-1)^n] ,   (95a)

  lim_{τ→∞} b_{2n+1}(x_0,τ) = 0 .   (95b)

Combining Eqs. (95a) and (95b) and Eq. (81a) one can easily verify that for τ→∞ the Gaussian distribution

  lim_{τ→∞} h(y,τ) = [(Ω-1)/(πΩ)]^{1/2} exp[-y²(Ω-1)/Ω]   (96)

is obtained, which, restoring the original variable x = y/√Ω [see Eq. (78) for τ→∞], is identical to the equilibrium distribution function given by Eq. (87).

ACKNOWLEDGMENTS

One of us (T.L.) wants to acknowledge a grant of the Max-Kade Foundation for his stay at the Massachusetts Institute of Technology.

APPENDIX A

In this Appendix we will show that the derivatives of a composite function f(g(x)) are closely related to the cumulant expansion. A relation

  exp( Σ_{l=1}^∞ (x^l/l!) q_l ) = Σ_{l=0}^∞ (x^l/l!) p_l ,  p_0 = 1 ,   (A1)

is called the cumulant expansion of the power series on the rhs of Eq. (A1), and the coefficients q_l are called cumulants. They can be uniquely expressed by the p_l's and vice versa. In particular for p_1, p_2, p_3, and p_4 we get

  p_1 = q_1 ,   (A2a)
  p_2 = q_2 + q_1² ,   (A2b)
  p_3 = q_3 + 3 q_2 q_1 + q_1³ ,   (A2c)
  p_4 = q_4 + 4 q_3 q_1 + 3 q_2² + 6 q_2 q_1² + q_1⁴ ,   (A2d)

and for more details we refer to Refs. 1 and 2. If we replace in Eq. (A1) the q_l's by q_l t we get

  exp( Σ_{l=1}^∞ (x^l/l!) q_l t ) = Σ_{l=0}^∞ (x^l/l!) p_l(t) ,  p_0(t) = 1 ,   (A3)

where p_l(t) is a polynomial in t of order l. So we can write

  p_l(t) = Σ_{k=1}^l p_{lk} t^k ,  l ≥ 1 ,   (A4)

with t-independent coefficients p_{lk}.
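The moment-cumulant relations (A2a)-(A2d) can be checked mechanically. A standard consequence of differentiating Eq. (A1) once and comparing coefficients is the recursion p_{n+1} = Σ_{k=0}^{n} C(n,k) q_{k+1} p_{n-k} with p_0 = 1; this recursion is our addition (it is not stated in the appendix), and the sample cumulant values below are arbitrary. The sketch generates p_1,...,p_4 from it and compares with the closed forms.

```python
from fractions import Fraction
from math import comb

# Sample cumulants q_1..q_4 (arbitrary rational values, not from the paper).
q = [None, Fraction(2), Fraction(-3), Fraction(5, 2), Fraction(7)]

# Recursion from differentiating Eq. (A1):
#   p_{n+1} = sum_{k=0}^{n} C(n,k) * q_{k+1} * p_{n-k},   p_0 = 1.
p = [Fraction(1)]
for n in range(4):
    p.append(sum(comb(n, k) * q[k + 1] * p[n - k] for k in range(n + 1)))

q1, q2, q3, q4 = q[1], q[2], q[3], q[4]
print(p[1:])  # should match the closed forms (A2a)-(A2d)
```

Exact rational arithmetic (Fraction) makes the comparison free of rounding issues.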