https://doi.org/10.1007/s11009-020-09794-x

Ornstein-Uhlenbeck Processes of Bounded Variation

Nikita Ratanov1,2

Received: 26 December 2019 / Revised: 24 March 2020 / Accepted: 7 May 2020

©Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

An Ornstein-Uhlenbeck process of bounded variation is introduced as the solution of an analogue of the Langevin equation with an integrated telegraph process replacing the Brownian motion.

There is an interval $I$ such that the process starting from an internal point of $I$ always remains within $I$. Starting outside, this process a.s. reaches this interval in finite time. The distribution of the time at which the process falls into this interval is obtained explicitly.

Formulae for the mean and the variance of this process are obtained on the basis of the joint distribution of the telegraph process and its integrated copy. Under Kac's rescaling, the limit process is identified as the classical Ornstein-Uhlenbeck process.

Keywords Ornstein-Uhlenbeck process · Langevin equation · Telegraph process · Kac's scaling

Mathematics Subject Classification (2010) 60J75 · 60J27 · 60K99

1 Introduction

For a long time, and for various reasons, finite-velocity diffusion models have been studied as a substitute for classical diffusion, which is described by a parabolic equation with infinitely fast propagation. The basic model represents motions performed by a particle moving along a line at a finite velocity and changing direction after exponentially distributed holding times, see Cattaneo (1958). The corresponding random process of the particle's position is called an (integrated) telegraph process. The distribution of this process is described by the damped wave equation (hyperbolic diffusion equation, the so-called telegraph equation).

The one-dimensional version of the telegraph process $T(t)$, $t\ge0$, with two alternating regimes is well studied, starting with the seminal works of M. Kac, see Kac (1974). This theory has a huge literature, see, for example, the surveys in Kolesnik and Ratanov (2013) and

Nikita Ratanov
nikita.ratanov@urosario.edu.co; nikita.ratanov@csu.ru

1 Universidad del Rosario, Cl. 12c, No. 4-69, Bogotá, Colombia

2 Chelyabinsk State University, 129, Br. Kashirinykh, Chelyabinsk, Russia

Published online: 1 June 2020


Zacks (2017). A short text recalling the basic properties of telegraph processes is also added to this paper, see the Appendix.

To introduce the telegraph process, we consider a two-state Markov process $\varepsilon=\varepsilon(t)\in\{0,1\}$, defined on the complete filtered probability space $(\Omega,\mathcal F,\mathcal F_t,\mathsf P)$. The process $\varepsilon$ is determined by two positive switching parameters $\lambda_0,\lambda_1$:
$$\mathsf P\{\varepsilon(t+\mathrm dt)=i\mid\varepsilon(t)=i\}=1-\lambda_i\,\mathrm dt+o(\mathrm dt),\qquad \mathrm dt\to0,\ i\in\{0,1\}.$$
We define the (integrated) telegraph process by
$$T(t)=\int_0^t a_{\varepsilon(s)}\,\mathrm ds,$$
where $a_0,a_1$ are constants; $T(t)$ is the position of a particle moving on a line with velocities $a_0$ and $a_1$, alternating at random times. Since $\varepsilon$ is a time-homogeneous Markov process, the (conditional) distributions of $T(t)-T(s)=\int_s^t a_{\varepsilon(u)}\,\mathrm du$ and $T(t-s)=\int_0^{t-s}a_{\varepsilon(u)}\,\mathrm du$ are identical for any $s,t$, $0\le s<t$. Precisely, the following identity in law holds:
$$\Big[T(t)-T(s)=\int_s^t a_{\varepsilon(u)}\,\mathrm du\ \Big|\ \mathcal F_s\Big]\stackrel{D}{=}\Big[T(t-s)=\int_0^{t-s}a_{\varepsilon(u)}\,\mathrm du\ \Big|\ \varepsilon(0)\Big],\qquad(1.1)$$
see e.g. Kolesnik and Ratanov (2013).

The Gaussian Ornstein-Uhlenbeck process $X^{\mathrm{OU}}$ is another class of processes we are interested in. This process can be defined as the solution of the stochastic differential equation
$$\mathrm dX^{\mathrm{OU}}(t)=-\gamma X^{\mathrm{OU}}(t)\,\mathrm dt+\sigma\,\mathrm dW(t),\qquad t>0,\qquad(1.2)$$
where $W=W(t)$ is the standard Brownian motion and $\gamma>0$, Coffey et al. (2004). This model is used in various application areas as an alternative to Brownian motion with a mean-reverting tendency, see Coffey et al. (2004) and Maller et al. (2009). Let me mention only two of these application areas here. The Vašíček interest rate model, Vasicek (1977), is the most famous financial application of this process. The same processes are also widely used for neuronal modelling, see e.g. Buonocore et al. (2015). Telegraph processes have similar application areas: for financial applications see e.g. Di Crescenzo et al. (2014), Kolesnik and Ratanov (2013), and Ratanov (2007); the first steps in neuronal modelling based on the telegraph process are presented by Ratanov (2019) and Genadot (2019, Section 2.3.2).

In this paper, we study the Ornstein-Uhlenbeck process of bounded variation, which is determined by the version of the Langevin equation (1.2) in which the Brownian motion $W$ is replaced by the telegraph process $T$. More precisely, let $X=X(t)$ be a stochastic process defined by the equation
$$\mathrm dX(t)=-\gamma_{\varepsilon(t)}X(t)\,\mathrm dt+\mathrm dT(t),\qquad t>0,$$
where $T(t)$ is the telegraph process based on the Markov process $\varepsilon$. Since Kac's telegraph process $T$ is used instead of the Wiener process in the usual Langevin equation, this equation can be called the Kac-Langevin equation. The latter stochastic equation is equivalent to an integral equation of the following form,
$$X(t)=x-\int_0^t\gamma_{\varepsilon(s)}X(s)\,\mathrm ds+T(t),\qquad t>0.\qquad(1.3)$$
Here $x=X(0)$ is the starting point of the process $X$. To the best of my knowledge, such a modification of the Langevin equation has not been studied before.

The detailed problem settings are presented in Section 2. Not surprisingly, the analysis of the distribution of $X(t)$ is not as simple as for the Gaussian process $X^{\mathrm{OU}}=X^{\mathrm{OU}}(t)$. The first peculiarity is the following. If the starting point $x$ is in the interval $I=[a_1/\gamma_1,\,a_0/\gamma_0]$, $x\in I$, the process $X(t)$ remains inside the band, that is $X(t)\in[a_1/\gamma_1,\,a_0/\gamma_0]$, $t\ge0$. On the contrary, if the process $X$ starts outside of $I$, $x\notin[a_1/\gamma_1,\,a_0/\gamma_0]$, the process reaches $I$ a.s. in finite time (here, we assume that $a_1/\gamma_1<a_0/\gamma_0$).

Let $\varphi=\varphi(t)$, $t\ge0$, be a continuous $\mathcal F_t$-measurable random function. To analyse the Ornstein-Uhlenbeck process of bounded variation, Eq. 1.3, we need to study the properties of the stochastic integral
$$\mathcal I(t)=\int_0^t\varphi(s)\,\mathrm dT(s)=\int_0^t\varphi(s)a_{\varepsilon(s)}\,\mathrm ds,\qquad t>0.\qquad(1.4)$$
Since $\varphi(\cdot)a_{\varepsilon(\cdot)}$ a.s. has a finite number of discontinuities on $[0,t]$, the integral in Eq. 1.4 can be considered as a pathwise Riemann integral. The stochastic process $\mathcal I(t)$, $t>0$, can be considered as a generalised telegraph process with two time-varying velocity patterns, $a_0\varphi(t)$ and $a_1\varphi(t)$, alternating after exponentially distributed holding times. The rectifiable version of such a process (with a deterministic function $\varphi$) has been studied in detail by Ratanov et al. (2019), but $\mathcal I=\mathcal I(t)$, defined by Eq. 1.4, does not belong to this class.

The main goal of this paper (see Section 3) is to study the distribution of the time over which the process $X=X(t)$, starting from outside the interval $I$, falls into $I$. This problem is associated with the first passage time of the telegraph process, which has been intensively studied recently, see, e.g. Di Crescenzo et al. (2018), Ratanov (2020), Ratanov (2019), Zacks (2004), and Zacks (2017).

The distribution of $X(t)$ looks much more sophisticated than the Gaussian distribution of the Ornstein-Uhlenbeck process $X^{\mathrm{OU}}(t)$. Sections 4 and 5 take only a few simple first steps in this analysis.

For completeness, in the Appendix we recall some modern results on telegraph processes, including explicit formulas for the joint distribution of $X(t)$ and $\varepsilon(t)$, which have never been published before.

2 The Problem Setting

We study the path-continuous random process $X=X(t)$, $t\ge0$, satisfying the stochastic equation (1.3), that is
$$\mathrm dX(t)=\big(a_{\varepsilon(t)}-\gamma_{\varepsilon(t)}X(t)\big)\,\mathrm dt,\qquad t>0,\qquad(2.1)$$
with the initial condition $X(0)=x$. Recall that $\varepsilon=\varepsilon(t)$ is the two-state Markov process, $a_0,a_1\in(-\infty,\infty)$ and $\gamma_0,\gamma_1>0$, $a_0/\gamma_0>a_1/\gamma_1$.

After applying the usual integrating factor technique, one can see that Eq. 2.1 is equivalent to
$$\mathrm d\big(e^{\Gamma(t)}X(t)\big)=e^{\Gamma(t)}a_{\varepsilon(t)}\,\mathrm dt,$$
where $\Gamma(t)=\int_0^t\gamma_{\varepsilon(s)}\,\mathrm ds$ is the integrated telegraph process based on the same underlying process $\varepsilon$ as the telegraph process $T(t)=\int_0^ta_{\varepsilon(s)}\,\mathrm ds$. This yields the formula for the solution of Eq. 2.1:
$$X(t)=e^{-\Gamma(t)}\Big(x+\int_0^te^{\Gamma(s)}a_{\varepsilon(s)}\,\mathrm ds\Big)=e^{-\Gamma(t)}\Big(x+\int_0^te^{\Gamma(s)}\,\mathrm dT(s)\Big),\qquad t\ge0.\qquad(2.2)$$


The process $X=X(t)$ can be considered as a piecewise deterministic path-continuous process of bounded variation which follows the two patterns,
$$\phi_0(x,t)=e^{-\gamma_0t}\Big(x+a_0\int_0^te^{\gamma_0s}\,\mathrm ds\Big)=\frac{a_0}{\gamma_0}+\Big(x-\frac{a_0}{\gamma_0}\Big)e^{-\gamma_0t},\qquad(2.3)$$
$$\phi_1(x,t)=\frac{a_1}{\gamma_1}+\Big(x-\frac{a_1}{\gamma_1}\Big)e^{-\gamma_1t},\qquad t\ge0,\qquad(2.4)$$
alternating at Poisson times. Similarly defined generalisations of the telegraph process were recently studied by Ratanov et al. (2019); however, here the process $X$ does not satisfy the homogeneity property with a common rectifying mapping, see Ratanov et al. (2019, (2.13)), which creates new difficulties.
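Since between switches the path follows $\phi_0$ or $\phi_1$ exactly, sample paths of $X$ can be generated without any time discretisation. A hedged Python sketch (parameter values are illustrative, not taken from the paper):

```python
import math
import random

A = (2.0, -1.0)     # velocities a_0, a_1 (illustrative)
G = (1.0, 1.0)      # gamma_0, gamma_1
LAM = (1.5, 1.5)    # switching intensities lambda_0, lambda_1

def phi(i, x, t):
    """Patterns (2.3)-(2.4): deterministic motion between switches."""
    return A[i] / G[i] + (x - A[i] / G[i]) * math.exp(-G[i] * t)

def simulate_X(x, t, state=0, rng=random):
    """Follow phi_0 / phi_1, alternating at Exp(lambda_i) holding times."""
    now = 0.0
    while True:
        hold = rng.expovariate(LAM[state])
        if now + hold >= t:
            return phi(state, x, t - now)
        x = phi(state, x, hold)
        now += hold
        state = 1 - state

random.seed(2)
# Starting inside I = (a_1/gamma_1, a_0/gamma_0) = (-1, 2),
# the path can never leave the band.
values = [simulate_X(0.5, 3.0, state=k % 2) for k in range(2000)]
print(min(values), max(values))
```

Both patterns are contractions toward their fixed points $a_i/\gamma_i$, which is exactly why a start inside $I$ stays inside $I$.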

Let $\tau=\tau^{(0)}$ (respectively, $\tau=\tau^{(1)}$) be the first switching time if $\varepsilon(0)=0$ ($\varepsilon(0)=1$). The following identities of conditional distributions can be proved by conditioning on the first switch,
$$[X(t)\mid\varepsilon(0)=0,\,X(0)=x]\stackrel{D}{=}[X(t-\tau)\mid\varepsilon(0)=1,\,X(0)=\phi_0(x,\tau)],\qquad(2.5)$$
$$[X(t)\mid\varepsilon(0)=1,\,X(0)=x]\stackrel{D}{=}[X(t-\tau)\mid\varepsilon(0)=0,\,X(0)=\phi_1(x,\tau)].\qquad(2.6)$$
If there is no switching up to the time horizon $t$, that is $\tau>t$, we have
$$[X(t)\mid\varepsilon(0)=0,\,X(0)=x]=\phi_0(x,t),\qquad[X(t)\mid\varepsilon(0)=1,\,X(0)=x]=\phi_1(x,t),\qquad\text{a.s.}$$
Note that the mappings $t\mapsto\phi_0(x,t)$ and $t\mapsto\phi_1(x,t)$ satisfy the semigroup property, and
$$\lim_{t\to\infty}\phi_0(x,t)=\frac{a_0}{\gamma_0},\qquad\lim_{t\to\infty}\phi_1(x,t)=\frac{a_1}{\gamma_1}.$$
A sample path is shown in Fig. 1.

It follows from the definition that if the starting point $x=X(0)$ is in the interval, $x\in I=(a_1/\gamma_1,\,a_0/\gamma_0)$, then
$$a_1/\gamma_1<\phi_0(x,t),\ \phi_1(x,t)<a_0/\gamma_0,\qquad\forall t\ge0.$$
Further, if the starting point $x=X(0)$ is outside the interval $I$, the trajectory $X=X(t)$ a.s. falls into $I$ and remains there for all subsequent $t$.

The distribution of this falling time is studied in the next section.

Fig. 1 The sample path of $X=X(t)$


3 The Falling Time into the Interval $I=(a_1/\gamma_1,\,a_0/\gamma_0)$

Let $x>a_0/\gamma_0$, and let $T(x)$ be the time of the first passage through the level $a_0/\gamma_0$ by the process $X(t)$, which starts at the point $x$, $x>a_0/\gamma_0$,
$$T(x):=\inf\{t:\ X(t)<a_0/\gamma_0\mid X(0)=x>a_0/\gamma_0\},\qquad(3.1)$$
see Fig. 1.

We denote by $t(x)$ the shortest time for crossing the level $a_0/\gamma_0$ by the process $X=X(t)$, which starts at $x$, $x>a_0/\gamma_0$. This corresponds to movement only along the pattern $\phi_1(x,t)$, without switching. Thus, $t(x)$ is determined by the formula
$$t(x)=\frac1{\gamma_1}\log\frac{x-a_1/\gamma_1}{a_0/\gamma_0-a_1/\gamma_1}>0,\qquad(3.2)$$
which is the root of the equation $\phi_1(x,t)=a_0/\gamma_0$, Eq. 2.4. A motion only with the pattern $\phi_0(x,t)$, $x>a_0/\gamma_0$, Eq. 2.3 (without switching), never crosses the level $a_0/\gamma_0$.
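Formula (3.2) can be checked directly: composing the pattern $\phi_1$ with the crossing time $t(x)$ must return the level $a_0/\gamma_0$ exactly. A small Python check under illustrative parameter values:

```python
import math

a0, a1, g0, g1 = 2.0, -1.0, 1.0, 1.0    # illustrative parameters
lo, hi = a1 / g1, a0 / g0               # interval I = (a_1/g_1, a_0/g_0) = (-1, 2)

def t_cross(x):
    """Shortest crossing time (3.2) for x > a0/g0, moving along phi_1."""
    return math.log((x - lo) / (hi - lo)) / g1

def phi1(x, t):
    """Pattern (2.4)."""
    return lo + (x - lo) * math.exp(-g1 * t)

x = 5.0
print(t_cross(x), phi1(x, t_cross(x)))  # phi_1(x, t(x)) = a0/g0 = 2
```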

The distribution of $T(x)$ is supported on $\{t:\ t\ge t(x)\}$ and can be found in the form of the (generalised) density functions $Q_0(t,x)$ and $Q_1(t,x)$,
$$\mathsf P\{T(x)\in\mathrm dt\mid\varepsilon(0)=0\}=Q_0(t,x)\,\mathrm dt,\qquad\mathsf P\{T(x)\in\mathrm dt\mid\varepsilon(0)=1\}=Q_1(t,x)\,\mathrm dt,$$
assuming that
$$Q_0(t,x)\big|_{t<t(x)}=0,\qquad Q_1(t,x)\big|_{t<t(x)}=0.$$

Due to identities (2.5)–(2.6), the functions $Q_0(t,x)$ and $Q_1(t,x)$ follow the system of integral equations,
$$\begin{cases}Q_0(t,x)=\displaystyle\int_0^t\lambda_0e^{-\lambda_0\tau}Q_1(t-\tau,\phi_0(x,\tau))\,\mathrm d\tau,\\[6pt]Q_1(t,x)=e^{-\lambda_1t(x)}\delta(t-t(x))+\displaystyle\int_0^{t(x)}\lambda_1e^{-\lambda_1\tau}Q_0(t-\tau,\phi_1(x,\tau))\,\mathrm d\tau,\end{cases}\qquad(3.3)$$
$t>t(x)$, $x>a_0/\gamma_0$. Here, $\delta=\delta(\cdot)$ is Dirac's delta-function.

By the definition of $T(x)$, Eq. 3.1, Eqs. 3.3 must be supplied with the boundary conditions
$$\lim_{x\downarrow a_0/\gamma_0}Q_0(t,x)=\lambda_0e^{-\lambda_0t},\qquad\lim_{x\downarrow a_0/\gamma_0}Q_1(t,x)=\delta(t).\qquad(3.4)$$
Since $\lim_{x\downarrow a_0/\gamma_0}t(x)=0$ (see Eq. 3.2) and $\lim_{x\downarrow a_0/\gamma_0}\phi_0(x,\tau)\equiv a_0/\gamma_0$ (see Eq. 2.3), the same follows from Eqs. 3.3 themselves.

Let
$$\mathcal L_0:=\frac{\partial}{\partial t}+(\gamma_0x-a_0)\frac{\partial}{\partial x},\qquad\mathcal L_1:=\frac{\partial}{\partial t}+(\gamma_1x-a_1)\frac{\partial}{\partial x}.$$
Since
$$\mathcal L_0\big[(\gamma_0x-a_0)e^{-\gamma_0t}\big]=0,\qquad\mathcal L_1\big[(\gamma_1x-a_1)e^{-\gamma_1t}\big]=0,$$
and $(\gamma_1x-a_1)\dfrac{\mathrm d}{\mathrm dx}[t(x)]=1$, we have the following identities:
$$\mathcal L_0[\phi_0(x,t)]=0,\qquad\mathcal L_1[\phi_1(x,t)]=0,\qquad\mathcal L_1[t(x)]=1,$$


see Eqs. 2.3–2.4 and 3.2. By applying $\mathcal L_0$ and $\mathcal L_1$ to Eq. 3.3 we obtain
$$\begin{cases}\mathcal L_0[Q_0(t,x)]=\lambda_0e^{-\lambda_0t}Q_1(0,\phi_0(x,t))-\displaystyle\int_0^t\lambda_0e^{-\lambda_0\tau}\frac{\mathrm d}{\mathrm d\tau}\big[Q_1(t-\tau,\phi_0(x,\tau))\big]\,\mathrm d\tau,\\[6pt]\mathcal L_1[Q_1(t,x)]=-\lambda_1e^{-\lambda_1t(x)}\delta(t-t(x))+\lambda_1e^{-\lambda_1t(x)}Q_0(t-t(x),a_0/\gamma_0)-\displaystyle\int_0^{t(x)}\lambda_1e^{-\lambda_1\tau}\frac{\mathrm d}{\mathrm d\tau}\big[Q_0(t-\tau,\phi_1(x,\tau))\big]\,\mathrm d\tau.\end{cases}$$
Integrating by parts, one can see that system (3.3) of integral equations is equivalent to the system of partial differential equations,
$$\begin{cases}\mathcal L_0[Q_0(t,x)]=-\lambda_0Q_0(t,x)+\lambda_0Q_1(t,x),\\\mathcal L_1[Q_1(t,x)]=\lambda_1Q_0(t,x)-\lambda_1Q_1(t,x),\end{cases}\qquad(3.5)$$
$t>t(x)$, $x>a_0/\gamma_0$, with the boundary conditions (3.4).

Consider the Laplace transforms
$$Q_0(q,x)=\mathsf E_0[\exp(-qT(x))]=\int_0^\infty e^{-qt}Q_0(t,x)\,\mathrm dt,\qquad Q_1(q,x)=\mathsf E_1[\exp(-qT(x))]=\int_0^\infty e^{-qt}Q_1(t,x)\,\mathrm dt,\qquad q>0.\qquad(3.6)$$
Note that $0\le Q_i(q,x)\le1$, $i\in\{0,1\}$. The functions $Q_0(q,x)$ and $Q_1(q,x)$ have the sense of the complementary cumulative distribution function of $X^*_{e_q}=\max_{0\le t\le e_q}X(t)$, where $e_q$ is an exponentially distributed random variable, $\mathrm{Exp}(q)$, independent of $X$. Indeed, integrating by parts in Eq. 3.6, one can see
$$Q_i(q,x)=\int_0^\infty e^{-qt}\,\mathrm d\big[\mathsf P\{T(x)<t\mid\varepsilon(0)=i\}\big]=\int_0^\infty qe^{-qt}\,\mathsf P\{T(x)<t\mid\varepsilon(0)=i\}\,\mathrm dt$$
$$=\mathsf P\{T(x)<e_q\mid\varepsilon(0)=i\}=\mathsf P\{X^*_{e_q}>x\mid\varepsilon(0)=i\}.$$
System (3.5) corresponds to the system of ordinary differential equations,

$$\begin{cases}(x-a_0/\gamma_0)\dfrac{\mathrm dQ_0}{\mathrm dx}(q,x)=-\beta_0(q)Q_0(q,x)+\beta_0(0)Q_1(q,x),\\[6pt](x-a_1/\gamma_1)\dfrac{\mathrm dQ_1}{\mathrm dx}(q,x)=\beta_1(0)Q_0(q,x)-\beta_1(q)Q_1(q,x),\end{cases}\qquad x>a_0/\gamma_0,\qquad(3.7)$$
where
$$\beta_0(q)=\frac{\lambda_0+q}{\gamma_0},\qquad\beta_1(q)=\frac{\lambda_1+q}{\gamma_1}.\qquad(3.8)$$
Due to Eq. 3.4, system (3.7) is supplied with the boundary conditions
$$Q_0(q,a_0/\gamma_0+)=\lambda_0/(\lambda_0+q),\qquad Q_1(q,a_0/\gamma_0+)=1.$$
Consider the series representations:
$$Q_0(q,x)=\sum_{n=0}^\infty A_n(x-a_0/\gamma_0)^n,\qquad Q_1(q,x)=\sum_{n=0}^\infty B_n(x-a_0/\gamma_0)^n,\qquad x>a_0/\gamma_0.$$


The boundary conditions give $A_0=\lambda_0/(\lambda_0+q)$, $B_0=1$, and by the system (3.7) we have the sequence of coupled equations for the coefficients $A_n$ and $B_n$, $n\ge0$:
$$nA_n=-\beta_0(q)A_n+\beta_0(0)B_n,\qquad nB_n+(n+1)\Big(\frac{a_0}{\gamma_0}-\frac{a_1}{\gamma_1}\Big)B_{n+1}=\beta_1(0)A_n-\beta_1(q)B_n,$$
which is equivalent to
$$\begin{cases}A_n=\dfrac{\beta_0(0)}{\beta_0(q)+n}B_n,\\[6pt]B_{n+1}=\dfrac{\beta_1(0)A_n-(\beta_1(q)+n)B_n}{(n+1)(a_0/\gamma_0-a_1/\gamma_1)}=\dfrac{\beta_0(0)\beta_1(0)-(\beta_0(q)+n)(\beta_1(q)+n)}{(n+1)(a_0/\gamma_0-a_1/\gamma_1)(\beta_0(q)+n)}B_n.\end{cases}$$
The second equation can be rewritten as
$$B_{n+1}=-\frac{(b_0(q)+n)(b_1(q)+n)}{\beta_0(q)+n}\cdot\frac{B_n}{(n+1)(a_0/\gamma_0-a_1/\gamma_1)},$$
where
$$b_{0,1}=\frac12\Big[\beta_0(q)+\beta_1(q)\mp\sqrt{(\beta_0(q)-\beta_1(q))^2+4\beta_0(0)\beta_1(0)}\Big].\qquad(3.9)$$

Due to the boundary conditions, $B_0=1$; further,
$$B_1=\frac{b_0b_1}{\beta_0\cdot1!}\cdot\frac{-1}{a_0/\gamma_0-a_1/\gamma_1},\qquad B_2=\frac{b_0(b_0+1)\cdot b_1(b_1+1)}{\beta_0(\beta_0+1)\cdot2!}\cdot\Big(\frac{-1}{a_0/\gamma_0-a_1/\gamma_1}\Big)^2,\ \dots,$$
$$B_n=\frac{(b_0)_n(b_1)_n}{(\beta_0)_n\cdot n!}\cdot\Big(\frac{-1}{a_0/\gamma_0-a_1/\gamma_1}\Big)^n,$$
and, by the first equation,
$$A_n=\frac{\beta_0(0)}{\beta_0(q)+n}\cdot\frac{(b_0)_n(b_1)_n}{(\beta_0(q))_n\cdot n!}\Big(\frac{-1}{a_0/\gamma_0-a_1/\gamma_1}\Big)^n=\frac{\lambda_0}{\lambda_0+q}\cdot\frac{(b_0)_n(b_1)_n}{(\beta_0+1)_n\cdot n!}\Big(\frac{-1}{a_0/\gamma_0-a_1/\gamma_1}\Big)^n,\qquad n\ge0,$$
where $\beta_0=\beta_0(q)$, $\beta_1=\beta_1(q)$ and $b_0=b_0(q)$, $b_1=b_1(q)$ are defined by Eqs. 3.8 and 3.9; $(b)_n=\Gamma(b+n)/\Gamma(b)=b(b+1)\cdots(b+n-1)$ is the Pochhammer symbol.

As a result, the functions $Q_0$ and $Q_1$ are expressed by
$$Q_0(q,x)=\frac{\lambda_0}{\lambda_0+q}F\Big(b_0(q),b_1(q);\beta_0(q)+1;\frac{a_0/\gamma_0-x}{a_0/\gamma_0-a_1/\gamma_1}\Big),\qquad Q_1(q,x)=F\Big(b_0(q),b_1(q);\beta_0(q);\frac{a_0/\gamma_0-x}{a_0/\gamma_0-a_1/\gamma_1}\Big).\qquad(3.10)$$
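Formulae (3.10) are easy to evaluate numerically. The sketch below (illustrative parameters; SciPy's `hyp2f1` plays the role of $F$) checks the boundary values $Q_0(q,a_0/\gamma_0+)=\lambda_0/(\lambda_0+q)$ and $Q_1(q,a_0/\gamma_0+)=1$, which hold because $z=0$ at $x=a_0/\gamma_0$, as well as $Q_i(0,x)\equiv1$ (finiteness of $T(x)$, proved below):

```python
import math
from scipy.special import hyp2f1

a0, a1, g0, g1 = 2.0, -1.0, 1.0, 1.0   # illustrative parameters
lam0, lam1 = 1.5, 0.7
hi, lo = a0 / g0, a1 / g1

def laplace_Q(q, x):
    """Laplace transforms (3.10) of the falling-time distribution."""
    b0q, b1q = (lam0 + q) / g0, (lam1 + q) / g1          # beta_0(q), beta_1(q)
    disc = math.sqrt((b0q - b1q) ** 2 + 4 * (lam0 / g0) * (lam1 / g1))
    b_minus = 0.5 * (b0q + b1q - disc)                   # minor root b_0(q), (3.9)
    b_plus = 0.5 * (b0q + b1q + disc)                    # major root b_1(q)
    z = (hi - x) / (hi - lo)
    Q0 = lam0 / (lam0 + q) * hyp2f1(b_minus, b_plus, b0q + 1.0, z)
    Q1 = hyp2f1(b_minus, b_plus, b0q, z)
    return Q0, Q1

print(laplace_Q(0.5, hi))    # boundary conditions: (0.75, 1.0)
print(laplace_Q(0.0, 3.0))   # q = 0: both transforms equal 1
```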

Here $F$ is the Gaussian hypergeometric function, defined by the series
$$F(b_0,b_1;\beta;z)=1+\sum_{n=1}^\infty\frac{(b_0)_n(b_1)_n}{(\beta)_n\cdot n!}z^n,\qquad(3.11)$$
if one of the following conditions holds:

1. $|z|<1$;
2. $|z|=1$ and $\beta-b_0-b_1>0$;
3. $|z|=1$, $z\ne1$, and $-1<\beta-b_0-b_1\le0$.

The function $F$ is also defined by analytic continuation everywhere in $z$, $z\le-1$, see Gradshteyn and Ryzhik (1994, Chap. 9.1) and Andrews et al. (1999).

Therefore, the functions $Q_0$ and $Q_1$ are defined by formulae (3.10) and by the series (3.11) if the starting point $x$ satisfies $a_0/\gamma_0\le x<2a_0/\gamma_0-a_1/\gamma_1$. If $x$ is far from $a_0/\gamma_0$, analytic continuation is applied.

Theorem 1 If $\lambda_0>0$, then a.s.
$$T(x)<\infty,\qquad x>a_0/\gamma_0.$$

Proof Since $b_0(0)=0$ and $b_1(0)=\lambda_0/\gamma_0+\lambda_1/\gamma_1$, $\beta_0(0)=\lambda_0/\gamma_0$, we have
$$\mathsf P\{T(x)<\infty\mid\varepsilon(0)=0\}=Q_0(0,x)=F(0,b_1(0);\beta_0(0)+1;z)\equiv1,$$
$$\mathsf P\{T(x)<\infty\mid\varepsilon(0)=1\}=Q_1(0,x)=F(0,b_1(0);\beta_0(0);z)\equiv1,$$
which gives the proof.

From Eq. 3.10 one can obtain the moments of the falling time $T(x)$. For simplicity, we give explicit formulae for the mean values of $T(x)$ when the initial point $x$ is not too far from the attracting band.

Theorem 2 Let $T(x)$, $x\ge a_0/\gamma_0$, be defined by Eq. 3.1. In the following two cases,

1. $a_0/\gamma_0\le x<2a_0/\gamma_0-a_1/\gamma_1$;
2. $x=2a_0/\gamma_0-a_1/\gamma_1$ and $\lambda_1<\gamma_1$;

the mean values of $T(x)$ are given by the series
$$\mathsf E\{T(x)\mid\varepsilon(0)=0\}=-b_0'(0)\sum_{n=1}^\infty\frac{(\lambda_0/\gamma_0+\lambda_1/\gamma_1)_n}{(1+\lambda_0/\gamma_0)_n}\cdot\frac{z^n}{n}+\lambda_0^{-1}<\infty,\qquad(3.12)$$
$$\mathsf E\{T(x)\mid\varepsilon(0)=1\}=-b_0'(0)\sum_{n=1}^\infty\frac{(\lambda_0/\gamma_0+\lambda_1/\gamma_1)_n}{(\lambda_0/\gamma_0)_n}\cdot\frac{z^n}{n}<\infty.\qquad(3.13)$$
Here
$$b_0'(0)=\frac{\lambda_0+\lambda_1}{\lambda_0\gamma_1+\lambda_1\gamma_0}>0$$
is the derivative at $0$ of the minor root, $b_0(q)$, Eq. 3.9, and $z=z(x)=\dfrac{a_0/\gamma_0-x}{a_0/\gamma_0-a_1/\gamma_1}$.

If $x=2a_0/\gamma_0-a_1/\gamma_1$ and $\gamma_1/\lambda_1<1$, then only the series (3.12) for $\mathsf E\{T(x)\mid\varepsilon(0)=0\}$ is finite.

In all other cases, the expectations $\mathsf E\{T(x)\mid\varepsilon(0)=0\}$ and $\mathsf E\{T(x)\mid\varepsilon(0)=1\}$ follow after analytic continuation of Eq. 3.10.

Proof Since
$$b_0(0)=\frac12\Big[\beta_0(0)+\beta_1(0)-\sqrt{(\beta_0(0)-\beta_1(0))^2+4\beta_0(0)\beta_1(0)}\Big]=0,$$
$$b_1(0)=\frac12\Big[\beta_0(0)+\beta_1(0)+\sqrt{(\beta_0(0)-\beta_1(0))^2+4\beta_0(0)\beta_1(0)}\Big]=\frac{\lambda_0}{\gamma_0}+\frac{\lambda_1}{\gamma_1},$$
and, by Eq. 3.11,
$$F_{b_1}(b_0,b_1;\beta_0+1;z)\big|_{q=0}=0,\qquad F_{\beta_0}(b_0,b_1;\beta_0+1;z)\big|_{q=0}=0,$$
we have
$$\mathsf E[T(x)\mid\varepsilon(0)=0]=-\frac{\partial}{\partial q}Q_0(q,x)\Big|_{q=0}$$
$$=\frac{\lambda_0}{(\lambda_0+q)^2}\Big|_{q=0}-\frac{\lambda_0}{\lambda_0+q}\cdot\Big[b_0'(0)F_{b_0}+b_1'(0)F_{b_1}+\beta_0'(0)F_{\beta_0}\Big](b_0,b_1;1+\beta_0;z)\Big|_{q=0}$$
$$=\frac1{\lambda_0}-b_0'(0)F_{b_0}(0,\lambda_0/\gamma_0+\lambda_1/\gamma_1;1+\lambda_0/\gamma_0;z)$$
with $z=z(x)=\dfrac{a_0/\gamma_0-x}{a_0/\gamma_0-a_1/\gamma_1}$. Further,
$$F_{b_0}(b_0,b_1;1+\beta_0;z)\big|_{q=0}=\sum_{n=1}^\infty\frac{(n-1)!\,(b_1(0))_n}{(1+\beta_0(0))_n}\cdot\frac{z^n}{n!}=\sum_{n=1}^\infty\frac{(\lambda_0/\gamma_0+\lambda_1/\gamma_1)_n}{(1+\lambda_0/\gamma_0)_n}\cdot\frac{z^n}{n},$$
if the series converges.

Formula (3.12) follows from
$$b_0'(q)\big|_{q=0}=\frac12\Big[\frac1{\gamma_0}+\frac1{\gamma_1}-\frac{(\lambda_0/\gamma_0-\lambda_1/\gamma_1)(1/\gamma_0-1/\gamma_1)}{\lambda_0/\gamma_0+\lambda_1/\gamma_1}\Big]=\frac12\cdot\frac{2(\lambda_0+\lambda_1)/(\gamma_0\gamma_1)}{\lambda_0/\gamma_0+\lambda_1/\gamma_1}=\frac{\lambda_0+\lambda_1}{\gamma_1\lambda_0+\gamma_0\lambda_1}.$$
Similarly, one can obtain (3.13).

The subsequent moments, $\mathsf E[T(x)^n\mid\varepsilon(0)=i]$, $n\ge2$, $i\in\{0,1\}$, can be obtained by sequential differentiation.

Some plots are presented in Fig. 2.
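The series (3.13) can also be cross-checked by a direct Monte Carlo simulation of the falling time. In the fully symmetric illustrative case $a_0=-a_1=\gamma_0=\gamma_1=\lambda_0=\lambda_1=1$, one has $b_0'(0)=1$ and $(\lambda_0/\gamma_0+\lambda_1/\gamma_1)_n/(\lambda_0/\gamma_0)_n=(2)_n/(1)_n=n+1$, so (3.13) reduces to $-\sum_{n\ge1}(n+1)z^n/n$. A hedged sketch:

```python
import math
import random

def falling_time(x, state, rng):
    """Simulate X from x > 1 until it first hits the level a0/g0 = 1
    (symmetric case a0 = -a1 = g0 = g1 = lam0 = lam1 = 1)."""
    t = 0.0
    while True:
        hold = rng.expovariate(1.0)                 # Exp(1) holding time
        if state == 1:
            hit = math.log((x + 1.0) / 2.0)         # phi_1 reaches level 1 here
            if hit <= hold:
                return t + hit
            x = -1.0 + (x + 1.0) * math.exp(-hold)  # follow phi_1
        else:
            x = 1.0 + (x - 1.0) * math.exp(-hold)   # follow phi_0: no crossing
        t += hold
        state = 1 - state

x = 1.5
z = (1.0 - x) / 2.0                                 # z(x) = -0.25
series = -sum((n + 1) * z ** n / n for n in range(1, 200))   # series (3.13)
rng = random.Random(3)
mc = sum(falling_time(x, 1, rng) for _ in range(40000)) / 40000
print(series, mc)
```

Here the series even has a closed form, $-\big(z/(1-z)-\log(1-z)\big)\approx0.4231$, and the Monte Carlo estimate should agree with it up to sampling error.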

Remark 1 Let $X=X(t)$ start from $x=X(0)$, $x<a_1/\gamma_1$, and let
$$\widetilde T(x)=\inf\{t:\ X(t)>a_1/\gamma_1\},\qquad x<a_1/\gamma_1,$$
be the first passage time through the level $x=a_1/\gamma_1$. The formulae for the expectations of $\widetilde T(x)$ can easily be written by symmetry.

Remark 2 Formulae (3.10) are consistent with some simple reasonable results.

Let $\lambda_0=0$.

If $\varepsilon(0)=0$, then $X(t)=\phi_0(x,t)$, $t>0$, a.s., and hence the process $X$ never crosses the level $a_0/\gamma_0$. That is, $T(x)=+\infty$.

If $\varepsilon(0)=1$, then the process $X=X(t)$ passes through $a_0/\gamma_0$ if and only if there is no switching up to the time $t(x)$, Eq. 3.2. Therefore, conditionally (under $\varepsilon(0)=1$),
$$T(x)=\begin{cases}t(x),&\text{with probability }e^{-\lambda_1t(x)},\\+\infty,&\text{with probability }1-e^{-\lambda_1t(x)}.\end{cases}$$
Therefore, by definition, for $q>0$, we have
$$Q_0(q,x)\equiv0,\qquad Q_1(q,x)=\exp(-qt(x))\cdot e^{-\lambda_1t(x)}=\Big(\frac{x-a_1/\gamma_1}{a_0/\gamma_0-a_1/\gamma_1}\Big)^{-(\lambda_1+q)/\gamma_1}.\qquad(3.14)$$


Fig. 2 The expectation $E_1=\mathsf E[T(x)\mid\varepsilon(0)=1]$ in the case $a_0=-a_1=a$ and $\gamma_0=\gamma_1=\gamma$, as a function of (a) $x$, $1\le x\le3$, with $\lambda_0=\lambda_1=1$, for $a=\gamma=1;\ 2.5;\ 5$ (from top to bottom); (b) $a$, $1\le a\le10$, with $\gamma=a$, $\lambda_0=\lambda_1=1$, for $x=1.5;\ 2;\ 2.5$ (from top to bottom); (c) $\lambda_0$, $0.2\le\lambda_0\le2.5$, with $x=2$, $a=\gamma=1$, for $\lambda_1=0.1;\ 0.25;\ 0.5$ (from bottom to top); (d) $\lambda_1$, $0\le\lambda_1\le2.5$, with $x=2$, $a=\gamma=1$, for $\lambda_0=0.1;\ 0.25;\ 0.5$ (from top to bottom)

The same result is given by Eq. 3.10: if $\lambda_0=0$, then, due to Eq. 3.10, we have $Q_0(q,x)\equiv0$, and $b_0(q),b_1(q)$ coincide with $\beta_0=q/\gamma_0$, $\beta_1=(\lambda_1+q)/\gamma_1$. Hence,
$$Q_1(q,x)=1+\sum_{n=1}^\infty\frac{(\beta_1(q))_n}{n!}z(x)^n=(1-z(x))^{-\frac{\lambda_1+q}{\gamma_1}}=\Big(\frac{x-a_1/\gamma_1}{a_0/\gamma_0-a_1/\gamma_1}\Big)^{-(\lambda_1+q)/\gamma_1},$$
which coincides with Eq. 3.14.

Let $\lambda_1=0$.

If the particle begins to move from the point $x$, $x>a_0/\gamma_0$, according to the pattern $\phi_1(x,t)$, Eq. 2.4, then it arrives, without switching, at $a_0/\gamma_0$ at time $t(x)$. It means that
$$Q_1(t,x)=\delta(t-t(x)).$$
Thus, by Eqs. 3.6 and 3.2,
$$Q_1(q,x)=e^{-qt(x)}=\Big(\frac{x-a_1/\gamma_1}{a_0/\gamma_0-a_1/\gamma_1}\Big)^{-q/\gamma_1}=(1-z(x))^{-q/\gamma_1}=1+\sum_{n=1}^\infty\frac{(b_1)_n}{n!}z(x)^n,$$
with $b_1=b_1(q)=q/\gamma_1$.


This is reproduced by Eq. 3.10 with $b_1=\beta_1=q/\gamma_1$ and $b_0=\beta_0=(\lambda_0+q)/\gamma_0$. On the other hand, if the particle begins with the pattern $\phi_0(x,t)$, Eq. 2.3, then it falls into $a_0/\gamma_0$ after a single switch (at time $\tau$) to the pattern $\phi_1$. This means that
$$Q_0(q,x)=\mathsf E\big[\exp\big(-q(\tau+t(\phi_0(x,\tau)))\big)\big].\qquad(3.15)$$
Since
$$t(\phi_0(x,\tau))=\frac1{\gamma_1}\log\frac{\phi_0(x,\tau)-a_1/\gamma_1}{a_0/\gamma_0-a_1/\gamma_1}=\frac1{\gamma_1}\log\frac{a_0/\gamma_0-a_1/\gamma_1+(x-a_0/\gamma_0)e^{-\gamma_0\tau}}{a_0/\gamma_0-a_1/\gamma_1}=\frac1{\gamma_1}\log\big(1-z(x)e^{-\gamma_0\tau}\big),$$
$$z=z(x)=\frac{a_0/\gamma_0-x}{a_0/\gamma_0-a_1/\gamma_1}<0,$$
Equation 3.15 becomes
$$Q_0(q,x)=\int_0^\infty\lambda_0e^{-(\lambda_0+q)\tau}\big(1-z(x)e^{-\gamma_0\tau}\big)^{-q/\gamma_1}\,\mathrm d\tau=\frac{\lambda_0}{\gamma_0}\int_0^1y^{-1+(\lambda_0+q)/\gamma_0}(1-z(x)y)^{-q/\gamma_1}\,\mathrm dy.$$
Due to the integral representation of the Gaussian hypergeometric function (Gradshteyn and Ryzhik 1994, formula 9.111),
$$Q_0(q,x)=\frac{\lambda_0}{\lambda_0+q}F\big(q/\gamma_1,(\lambda_0+q)/\gamma_0;1+(\lambda_0+q)/\gamma_0;z(x)\big),$$
which coincides with the first equation of Eq. 3.10 (with $\lambda_1=0$).

4 The Mean and Variance of $X(t)$

The marginal distribution of $X(t)$, Eq. 2.2, cannot be written as easily as the distribution of the Gaussian Ornstein-Uhlenbeck process. In this section we give only a few hints on this matter.

Let $0=\tau_0<\tau_1<\dots<\tau_n<\dots$ be the sequence of switching times of the underlying Markov process $\varepsilon$. Let $N(t)$ denote the number of switchings up to time $t$, $t>0$:
$$N(t)=n,\qquad\text{if }\ \tau_n\le t<\tau_{n+1}.$$

Recalling the distribution of the inhomogeneous Poisson process $N(t)$, see López and Ratanov (2014, Theorem 2.1), we have
$$\begin{aligned}\pi_{00}(s)&=\mathsf P_0\{N(s)\ \text{is even}\}=e^{-\lambda_0s}\big[1+\Psi_0(s,(\lambda_0-\lambda_1)s)\big],\\\pi_{11}(s)&=\mathsf P_1\{N(s)\ \text{is even}\}=e^{-\lambda_1s}\big[1+\Psi_0(s,(\lambda_1-\lambda_0)s)\big],\\\pi_{01}(s)&=\mathsf P_0\{N(s)\ \text{is odd}\}=\lambda_0e^{-\lambda_0s}\Psi_1(s,(\lambda_0-\lambda_1)s),\\\pi_{10}(s)&=\mathsf P_1\{N(s)\ \text{is odd}\}=\lambda_1e^{-\lambda_1s}\Psi_1(s,(\lambda_1-\lambda_0)s),\end{aligned}\qquad(4.1)$$
where
$$\Psi_0(t,z)=\sum_{n=1}^\infty\frac{\lambda_0^n\lambda_1^n}{(2n)!}t^{2n}\Phi(n,2n+1;z),\qquad\Psi_1(t,z)=\sum_{n=1}^\infty\frac{\lambda_0^{n-1}\lambda_1^{n-1}}{(2n-1)!}t^{2n-1}\Phi(n,2n;z);\qquad(4.2)$$
$\Phi(\cdot,\cdot;\cdot)$ is the confluent hypergeometric function, Andrews et al. (1999).


Due to the representation (2.2), the mean of $X(t)$ is given by
$$\mathsf E_0[X(t)]=\mathsf E_0\Big[e^{-\Gamma(t)}\Big(x+\int_0^te^{\Gamma(s)}a_{\varepsilon(s)}\,\mathrm ds\Big)\Big]$$
$$=x\psi_0^\Gamma(t)+\int_0^t\Big[a_0\pi_{00}(s)\,\mathsf E\big(e^{-(\Gamma(t)-\Gamma(s))}\mid\varepsilon(s)=0\big)+a_1\pi_{01}(s)\,\mathsf E\big(e^{-(\Gamma(t)-\Gamma(s))}\mid\varepsilon(s)=1\big)\Big]\mathrm ds$$
$$=x\psi_0^\Gamma(t)+a_0\int_0^t\pi_{00}(s)\psi_0^\Gamma(t-s)\,\mathrm ds+a_1\int_0^t\pi_{01}(s)\psi_1^\Gamma(t-s)\,\mathrm ds.\qquad(4.3)$$
Similarly,
$$\mathsf E_1[X(t)]=x\psi_1^\Gamma(t)+a_0\int_0^t\pi_{10}(s)\psi_0^\Gamma(t-s)\,\mathrm ds+a_1\int_0^t\pi_{11}(s)\psi_1^\Gamma(t-s)\,\mathrm ds.\qquad(4.4)$$
Here $\pi_{ik}(\cdot)$ are determined by Eqs. 4.1–4.2, and the moment generating functions,
$$\psi_k^\Gamma(t)=\mathsf E_k\Big[\exp\Big(-\int_0^t\gamma_{\varepsilon(s)}\,\mathrm ds\Big)\Big],\qquad k\in\{0,1\},$$
of the telegraph process $\Gamma(t)$ are also known,
$$\psi_0^\Gamma(t)=e^{-(\lambda_0+\gamma_0)t}\big[1+\Psi_0(t,(\lambda_0-\lambda_1+\gamma_0-\gamma_1)t)+\lambda_0\Psi_1(t,(\lambda_0-\lambda_1+\gamma_0-\gamma_1)t)\big],$$
$$\psi_1^\Gamma(t)=e^{-(\lambda_1+\gamma_1)t}\big[1+\Psi_0(t,(\lambda_1-\lambda_0+\gamma_1-\gamma_0)t)+\lambda_1\Psi_1(t,(\lambda_1-\lambda_0+\gamma_1-\gamma_0)t)\big],$$
see e.g. López and Ratanov (2014, (2.21)).

Remark 3 In the symmetric case, $\lambda_0=\lambda_1=\lambda$, $\gamma_0=\gamma_1=\gamma$ and $a_0=-a_1=a$, formulae (4.3)–(4.4) can be simplified.

Since $\psi_0^\Gamma(t)=\psi_1^\Gamma(t)=e^{-\gamma t}$ and $\pi_{00}(s)=\pi_{11}(s)=(1+e^{-2\lambda s})/2$, $\pi_{01}(s)=\pi_{10}(s)=(1-e^{-2\lambda s})/2$, by Eqs. 4.3–4.4 we have
$$\mathsf E_0[X(t)]=xe^{-\gamma t}+a\,m(t)\qquad\text{and}\qquad\mathsf E_1[X(t)]=xe^{-\gamma t}-a\,m(t),$$
where
$$m(t)=\begin{cases}\dfrac{e^{-2\lambda t}-e^{-\gamma t}}{\gamma-2\lambda},&\text{if }\gamma\ne2\lambda,\\[6pt]te^{-\gamma t},&\text{if }\gamma=2\lambda.\end{cases}$$
Further, notice that in the symmetric case,
$$\mathsf E_0[a_{\varepsilon(s_1)}a_{\varepsilon(s_2)}]=\mathsf E_1[a_{\varepsilon(s_1)}a_{\varepsilon(s_2)}]=a^2\,\frac{1+e^{-2\lambda|s_1-s_2|}}2-a^2\,\frac{1-e^{-2\lambda|s_1-s_2|}}2=a^2\exp(-2\lambda|s_1-s_2|).$$


Hence,
$$\mathsf E\Big[\Big(\int_0^te^{-\gamma(t-s)}\,\mathrm dT(s)\Big)^2\Big]=a^2e^{-2\gamma t}\int_0^t\!\int_0^t\exp\big(\gamma(s_1+s_2)-2\lambda|s_1-s_2|\big)\,\mathrm ds_1\mathrm ds_2$$
$$=\frac{a^2}{\gamma+2\lambda}\cdot\begin{cases}\dfrac1\gamma-\dfrac2{\gamma-2\lambda}e^{-(\gamma+2\lambda)t}+\dfrac{\gamma+2\lambda}{\gamma(\gamma-2\lambda)}e^{-2\gamma t},&\text{if }\gamma\ne2\lambda,\\[6pt]\dfrac{1-e^{-2\gamma t}-2\gamma te^{-2\gamma t}}\gamma,&\text{if }\gamma=2\lambda,\end{cases}$$
which gives the expression for the variance of $X(t)$,
$$\mathrm{Var}[X(t)]=\mathsf E\Big[\Big(\int_0^te^{-\gamma(t-s)}\,\mathrm dT(s)\Big)^2\Big]-\Big(\mathsf E\Big[\int_0^te^{-\gamma(t-s)}\,\mathrm dT(s)\Big]\Big)^2$$
$$=a^2\cdot\begin{cases}\dfrac1{\gamma(\gamma+2\lambda)}-\dfrac{e^{-2\gamma t}}{(\gamma-2\lambda)^2}\Big[e^{2(\gamma-2\lambda)t}-\dfrac{8\lambda}{\gamma+2\lambda}e^{(\gamma-2\lambda)t}+\dfrac{2\lambda}\gamma\Big],&\text{if }\gamma\ne2\lambda,\\[6pt]\dfrac1{2\gamma^2}\Big[1-e^{-2\gamma t}\big(1+2\gamma t+2\gamma^2t^2\big)\Big],&\text{if }\gamma=2\lambda.\end{cases}$$

The limiting behaviour of $X(t)$ is consistent with known results.

As $t\to\infty$, the limits are given by
$$\lim_{t\to\infty}\mathsf E_0[X(t)]=\lim_{t\to\infty}\mathsf E_1[X(t)]=0,\qquad\lim_{t\to\infty}\mathrm{Var}[X(t)]=\frac{a^2}{\gamma(\gamma+2\lambda)}.$$
On the other hand, under Kac's scaling, $a,\lambda\to\infty$, $a^2/\lambda\to\sigma^2$, the limits of the expectation,
$$\lim\mathsf E[X(t)]=xe^{-\gamma t}\pm\lim a\cdot\frac{e^{-2\lambda t}-e^{-\gamma t}}{\gamma-2\lambda}=xe^{-\gamma t},\qquad(4.5)$$
see Eqs. 4.3–4.4, and of the variance,
$$\lim\mathrm{Var}[X(t)]=\lim a^2\Big[\frac1{\gamma(\gamma+2\lambda)}-\frac{e^{-2\gamma t}}{(\gamma-2\lambda)^2}\Big(e^{2(\gamma-2\lambda)t}-\frac{8\lambda}{\gamma+2\lambda}e^{(\gamma-2\lambda)t}+\frac{2\lambda}\gamma\Big)\Big]$$
$$=\lim\frac{a^2}{\gamma(\gamma+2\lambda)}-\frac{e^{-2\gamma t}}\gamma\lim\frac{2a^2\lambda}{(\gamma-2\lambda)^2}=\frac{\sigma^2}{2\gamma}\big(1-e^{-2\gamma t}\big),\qquad(4.6)$$
are obtained. Formulae (4.5)–(4.6) coincide with the known results for the classical Ornstein-Uhlenbeck process, see e.g. Maller et al. (2009, (4)-(5)).

5 On the Joint Distribution of $X(t)$ and $N(t)$

Due to technical difficulties, the distribution of the Ornstein-Uhlenbeck process of bounded variation cannot be presented explicitly. However, let us sketch it out.

Consider the Ornstein-Uhlenbeck process of bounded variation $X=X(t)$ based on the completely symmetric telegraph process $T$: the velocities are $\pm a$, the switching intensities are identical, $\lambda_0=\lambda_1=\lambda$, and $\gamma_0=\gamma_1=\gamma$. Let $f_i(y,t;n\mid x)$, $n\ge0$, $i\in\{0,1\}$, be the density functions characterising the joint distribution of the particle position $X(t)$ and the number of pattern switchings $N(t)$,
$$f_i(y,t;n\mid x)=\mathsf P\{X(t)\in\mathrm dy,\ N(t)=n\mid X(0)=x,\ \varepsilon(0)=i\}/\mathrm dy.$$


By definition, we have
$$f_0(y,t;0\mid x)=e^{-\lambda t}\delta(y-\phi_0(x,t)),\qquad f_1(y,t;0\mid x)=e^{-\lambda t}\delta(y-\phi_1(x,t)),\qquad(5.1)$$
$$\phi_0(x,t)=a/\gamma+(x-a/\gamma)e^{-\gamma t},\qquad\phi_1(x,t)=-a/\gamma+(x+a/\gamma)e^{-\gamma t}.$$
Further, by virtue of Eqs. 2.5–2.6, the functions $f_0(y,t;n\mid x)$ and $f_1(y,t;n\mid x)$ satisfy the sequence of coupled integral equations, $n\ge1$,
$$f_0(y,t;n\mid x)=\lambda\int_0^te^{-\lambda\tau}f_1(y,t-\tau;n-1\mid\phi_0(x,\tau))\,\mathrm d\tau,\qquad(5.2)$$
$$f_1(y,t;n\mid x)=\lambda\int_0^te^{-\lambda\tau}f_0(y,t-\tau;n-1\mid\phi_1(x,\tau))\,\mathrm d\tau.\qquad(5.3)$$
Due to the total symmetry of the underlying process $T$, we have the identity in law:
$$[X(t)\mid\varepsilon(0)=0,\ X(0)=x]\stackrel{D}{=}[-X(t)\mid\varepsilon(0)=1,\ X(0)=-x],\qquad t>0.$$
Moreover, by induction, one can verify the following identities: for all $n$, $n\ge0$,
$$f_0(y,t;n\mid x)\equiv f_1(-y,t;n\mid-x),\qquad t>0.\qquad(5.4)$$
Since $\phi_1(-x,t)\equiv-\phi_0(x,t)$, for $n=0$ Eq. 5.4 follows by definition, see Eq. 5.1. Let Eq. 5.4 be proved for $n-1$. Equations 5.2–5.3 give
$$f_1(-y,t;n\mid-x)=\lambda\int_0^te^{-\lambda\tau}f_0(-y,t-\tau;n-1\mid\phi_1(-x,\tau))\,\mathrm d\tau=\lambda\int_0^te^{-\lambda\tau}f_0(-y,t-\tau;n-1\mid-\phi_0(x,\tau))\,\mathrm d\tau$$
$$=\lambda\int_0^te^{-\lambda\tau}f_1(y,t-\tau;n-1\mid\phi_0(x,\tau))\,\mathrm d\tau=f_0(y,t;n\mid x),$$
which proves the result (5.4).

In order to determine the explicit expressions of the density functions $f_0(y,t;n\mid x)$ and $f_1(y,t;n\mid x)$, consider first (5.2)–(5.3) with $n=1$. By Eq. 5.1 we have
$$f_0(y,t;1\mid x)=\lambda e^{-\lambda t}\int_0^t\delta\big(y-\phi_1(\phi_0(x,\tau),t-\tau)\big)\,\mathrm d\tau,\qquad(5.5)$$
$$f_1(y,t;1\mid x)=\lambda e^{-\lambda t}\int_0^t\delta\big(y-\phi_0(\phi_1(x,\tau),t-\tau)\big)\,\mathrm d\tau.\qquad(5.6)$$
Notice that the equations
$$y-\phi_1(\phi_0(x,\tau),t-\tau)=0,\qquad(5.7)$$
$$y-\phi_0(\phi_1(x,\tau),t-\tau)=0,\qquad(5.8)$$
have solutions $\tau$, $0\le\tau\le t$, if and only if $y\in I(x,t):=[\phi_1(x,t),\phi_0(x,t)]$, that is
$$-\frac a\gamma+\Big(x+\frac a\gamma\Big)e^{-\gamma t}=\phi_1(x,t)\le y\le\phi_0(x,t)=\frac a\gamma+\Big(x-\frac a\gamma\Big)e^{-\gamma t}.\qquad(5.9)$$
Since
$$\phi_1(\phi_0(x,\tau),t-\tau)\equiv-\frac a\gamma+\frac{2a}\gamma e^{-\gamma(t-\tau)}+\Big(x-\frac a\gamma\Big)e^{-\gamma t},\qquad(5.10)$$
$$\phi_0(\phi_1(x,\tau),t-\tau)\equiv\frac a\gamma-\frac{2a}\gamma e^{-\gamma(t-\tau)}+\Big(x+\frac a\gamma\Big)e^{-\gamma t},\qquad(5.11)$$
see Eqs. 2.3–2.4, the solution of Eq. 5.7, $\tau=\tau_0(y,t\mid x)$, is given by
$$\tau=\tau_0(y,t\mid x)=t+\frac1\gamma\log\frac{a+\gamma y+(a-\gamma x)e^{-\gamma t}}{2a}.\qquad(5.12)$$
