
Figure 13.3: Convergence of the error function. The error function for the two terms in equations (13.4) (light gray) and (13.7) (dark gray) as a function of the time $t$. $b = 2.41$ and $\nu_0 = 0.72$ are chosen according to the estimated parameters of subject B in Table 12.1 and are the same as used for the first column in Figure 13.2. Moreover, we set $y := 2$. Larger absolute values of $y$ slow the convergence toward $\pm 1$, whereas smaller values speed it up.

In Section 13.2.5, the marginal densities of the perception and the background process are computed. Recall that the key results are Corollary 13.11 about the steady-state distribution and Proposition 13.13 on the asymptotic rate of changes in the HBMi. Readers more interested in the application may skip all other parts of this section.

13.2.1 HBMi as semi-Markov process

The HBMi can be linked to the concept of semi-Markov processes (recall the introduction in Section 8.2.3.3), which is of importance in the theory of stochastic processes. The process $(\tilde Y_t)_{t\ge 0}$ denoting the state of the HBMi at time $t \ge 0$ constitutes a semi-Markov process (Definition 8.15, assuming known hidden states), and $(Y_i, t_i)_{i\ge 1}$ is a Markov renewal process, where $(t_i)_{i\ge 1}$ are the perceptual reversal times and $Y_i$ is the $i$-th hidden state. Here, we derive the key variables of this semi-Markov process. The entries of the semi-Markov kernel $Q(t)$ are given by

$$Q_{jk}(t) = P(D_{i+1} \le t,\, Y_{i+1} = k \mid Y_i = j) = P(Y_{i+1} = k \mid Y_i = j)\, P(D_{i+1} \le t \mid Y_{i+1} = k,\, Y_i = j) = p_{jk} \int_0^t f_{D_{jk}}(s)\, ds, \qquad (13.8)$$

where $f_{D_{jk}}(s)$ are the densities of the dominance times conditioned on the next state as derived in Lemma 11.3. In the context of semi-Markov processes, the $(D_{jk})$ are the conditional sojourn times. In addition, $(Y_i)_{i\ge 1}$ is the embedded Markov chain.
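Equation (13.8) is straightforward to evaluate numerically. The following sketch is not code from this thesis: the parameter values and the choice of one common inverse Gaussian sojourn distribution for all state pairs are illustrative assumptions.

```python
# Sketch: evaluating the semi-Markov kernel entries
# Q_jk(t) = p_jk * \int_0^t f_{D_jk}(s) ds  of equation (13.8),
# assuming the conditional dominance-time densities f_{D_jk} are
# inverse Gaussian. All parameter values are invented for illustration.
from scipy.stats import invgauss

def ig_cdf(t, mean, shape):
    """CDF of an IG(mean, shape) distribution in scipy's (mu, scale) form."""
    return invgauss.cdf(t, mu=mean / shape, scale=shape)

def semi_markov_kernel(t, p, ig_params):
    """Q(t) as a dict {(j, k): Q_jk(t)} for states j, k in {'S', 'U'}.

    p         -- transition probabilities p_jk of the embedded chain
    ig_params -- (mean, shape) of the conditional sojourn time D_jk
    """
    return {jk: p[jk] * ig_cdf(t, *ig_params[jk]) for jk in p}

p = {('S', 'S'): 0.8, ('S', 'U'): 0.2, ('U', 'S'): 0.5, ('U', 'U'): 0.5}
ig_params = {jk: (3.0, 6.0) for jk in p}    # same IG for all pairs, for brevity
Q = semi_markov_kernel(50.0, p, ig_params)  # for large t, Q_jk(t) -> p_jk
```

For large $t$ the integral approaches one, so the kernel row sums approach the row sums of the embedded transition matrix.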

13. The HBM: Theoretical properties

13.2.2 First passage times, steady-state distributions and renewal results

In this section we first focus on the first passage times of the HBMi. These are defined as in Definition 10.4 with HMMi replaced by HBMi, $\Xi_{\mathrm{HMMi}}$ exchanged by $\Xi_{\mathrm{HBMi}}$ and $T^*_{ij}, \tilde T^*_{ij}$ instead of $T_{ij}, \tilde T_{ij}$, i.e.,

$$T^*_{ij} := \inf\{m \ge 2 : Y_m = j \mid Y_1 = i\} - 1, \qquad \tilde T^*_{ij} := \inf\{t > 0 : \tilde Y_t = j \mid \tilde Y_0 = i\}.$$

Then, we turn to the steady-state distributions. The steady-state distributions are of practical relevance, e.g., for the investigation of group differences between the control and the patient group in the data set Schmack et al. (2015). Note again that the following results are approximate due to the non-constant drifts of the perception process and the background process; we omit approximation signs here for convenience.

Proposition 13.8. First passage and first recurrence times of the HBMi: Expectation

Let $p_{SS}, p_{UU}$ be defined as in equation (11.7) and $\mu_S, \mu_U$ as in equation (11.4). Moreover, $\mu_S^+, \mu_S^-, \mu_U^+$ and $\mu_U^-$ are defined as in Lemma 11.2.

Assume the HBMi starts with a perceptual reversal in the stable state $S$ and $p_{SS} < 1$. The expected first passage times of the unstable state $U$ are approximately given by

$$E[T^*_{SU}] = \frac{1}{1-p_{SS}} \quad\text{and}\quad E[\tilde T^*_{SU}] = \frac{p_{SS}}{1-p_{SS}}\,\mu_S^+ + \mu_S^- = \frac{\mu_S}{1-p_{SS}}.$$

Similar results hold for the first passage time of the stable state when starting with a perceptual change in the unstable state and $p_{UU} < 1$. If $p_{SS} = 1$ or $p_{UU} = 1$, the corresponding expected first passage times are $\infty$.

If $\max(p_{SS}, p_{UU}) < 1$, the expected first recurrence time to the initial state is independent of the starting state and approximately given by

$$E[T^*_{SS}] = E[T^*_{UU}] = \frac{1}{1-p_{SS}} + \frac{1}{1-p_{UU}}$$

and

$$E[\tilde T^*_{SS}] = E[\tilde T^*_{UU}] = \frac{p_{SS}}{1-p_{SS}}\,\mu_S^+ + \mu_S^- + \frac{p_{UU}}{1-p_{UU}}\,\mu_U^+ + \mu_U^- = \frac{\mu_S}{1-p_{SS}} + \frac{\mu_U}{1-p_{UU}}.$$

If $\max(p_{SS}, p_{UU}) = 1$, the recurrence time is $\infty$.

Proof: The assertions for $\max(p_{SS}, p_{UU}) = 1$ are trivial. Therefore, we assume $\max(p_{SS}, p_{UU}) < 1$. As the lengths of stable dominance times are independent and the background process starts at zero at the beginning of each new dominance time, the number of visits to the stable state when starting in this stable state is a geometrically distributed random variable with success probability $1-p_{SS}$. Thus, its expectation is given by $E[T^*_{SU}] = \frac{1}{1-p_{SS}}$. The length of a stable phase $\tilde T^*_{SU}$ is a random variable distributed like $\sum_{i=1}^{T^*_{SU}} D_i^S$, where $D_1^S, \ldots, D_{T^*_{SU}}^S$ denote the random stable dominance times. Given $T^*_{SU}$, we use linearity of expectation to compute the conditional expectation

$$E\Big[\sum_{i=1}^{T^*_{SU}} D_i^S \,\Big|\, T^*_{SU}\Big] = (T^*_{SU}-1)\,\mu_S^+ + \mu_S^-.$$

Now we take the expectation over $T^*_{SU}$ and use Corollary 11.4 to find

$$E[\tilde T^*_{SU}] = E\Big[\sum_{i=1}^{T^*_{SU}} D_i^S\Big] = \frac{p_{SS}}{1-p_{SS}}\,\mu_S^+ + \mu_S^- = \frac{\mu_S}{1-p_{SS}}.$$

The assertions for $T^*_{US}$ and $\tilde T^*_{US}$ can be shown using the same arguments. By linearity of expectation the claims about the expected first recurrence times follow.
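As a quick numerical companion to Proposition 13.8, the expected values can be sketched as follows; the parameter values are made up for illustration, not estimates from the data.

```python
# Expected first passage and recurrence times of Proposition 13.8
# (a sketch with invented parameters, valid for max(p_SS, p_UU) < 1).

def expected_passage_times(p_SS, p_UU, mu_S, mu_U):
    """Return (E[T*_SU], E[T~*_SU], E[T*_SS], E[T~*_SS])."""
    E_T_SU = 1.0 / (1.0 - p_SS)                          # dominance times until U
    E_Tt_SU = mu_S / (1.0 - p_SS)                        # length of a stable phase
    E_T_SS = 1.0 / (1.0 - p_SS) + 1.0 / (1.0 - p_UU)     # changes per full cycle
    E_Tt_SS = mu_S / (1.0 - p_SS) + mu_U / (1.0 - p_UU)  # expected cycle length
    return E_T_SU, E_Tt_SU, E_T_SS, E_Tt_SS

vals = expected_passage_times(p_SS=0.8, p_UU=0.5, mu_S=3.0, mu_U=2.0)
# 5 dominance times until U, stable phase of 15 s,
# 7 dominance times per cycle, cycle length 19 s
```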

We derive the distribution of the first passage times as an additional result. For $\tilde T^*_{kj}$ we therefore sum over $f^i_{\tilde T^*_{kj}}(x)$, which are the densities of observing a phase of state $k$ with $i$ dominance times and length $x$ followed by state $j$. Applying the law of total probability, the summation over $i$ of $f^i_{\tilde T^*_{kj}}(x)$ gives the probability of a phase of state $k$ with length $x$ followed by a phase of state $j$.

Proposition 13.9. First passage and first recurrence times of the HBMi: Distribution

a) Assume the HBMi starts with a perceptual reversal in the stable state $S$ and $p_{SS} < 1$. The weights $w_{T^*_{SU}}$ of the first passage time of $U$ are the weights of a Geometric distribution with parameter $p = 1-p_{SS}$. The density $f_{\tilde T^*_{SU}}$ of the first passage time of the point process $\Xi_{\mathrm{HBMi}}$ is expressed using convolutions,

$$f_{\tilde T^*_{SU}}(x) = \sum_{i=1}^{\infty} f^i_{\tilde T^*_{SU}}(x) = \big(1-\tilde p_{SS}(x)\big)\, f_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}(x) + \sum_{i=2}^{\infty} \Big[\big(1-\tilde p_{SS}\big)\, f_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}\Big] \ast \Big[\tilde p_{SS}\, f_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}\Big]^{\ast(i-1)}(x), \qquad (13.9)$$

if $x \ge 0$, and $0$ otherwise. There, $\tilde p_{SS}(x)$ is given by (11.5).

Similar results hold for the first passage time of the stable state when starting with a perceptual change in the unstable state and $p_{UU} < 1$.

If $\max(p_{SS}, p_{UU}) = 1$, all finite weights vanish.

b) If $\max(p_{SS}, p_{UU}) < 1$, the weights $w_r(i)$ of the first recurrence time $T^*_{SS}$ are $w_r(i) = 0$ if $i \le 1$ and

$$w_r(i) = \sum_{j=1}^{i-1} w_{T^*_{US}}(j)\, w_{T^*_{SU}}(i-j)$$

else. The density $f_r(x)$ of the first recurrence time $\tilde T^*_{SS}$ is $f_r(x) = 0$ if $x < 0$ and

$$f_r(x) = \int_0^x f_{\tilde T^*_{US}}(y)\, f_{\tilde T^*_{SU}}(x-y)\, dy$$

else. If $\max(p_{SS}, p_{UU}) = 1$, the recurrence times are $\infty$.

Proof: First, we show a). The assertions for $\max(p_{SS}, p_{UU}) = 1$ are trivial. Therefore, we assume $\max(p_{SS}, p_{UU}) < 1$. As explained in the proof of Proposition 13.8, $T^*_{SU}$ is a geometrically distributed random variable with success probability $1-p_{SS}$, which yields the assertion for $w_{T^*_{SU}}$.

We turn to the first passage time distribution of the unstable state by the point process $\Xi_{\mathrm{HBMi}}$. To show the assertion, a case-by-case analysis of the (random) number of dominance times in the stable state, $T^*_{SU}$, is necessary. We explain the structure of $f^i_{\tilde T^*_{SU}}(x) := P(\tilde T^*_{SU} = x, T^*_{SU} = i)$ for $i = 1$ and $i = 2$ in detail and then generalize to an arbitrary $i > 2$. In the last step of the proof we use the law of total probability $f_{\tilde T^*_{SU}}(x) = \sum_{i \ge 1} f^i_{\tilde T^*_{SU}}(x)$.

The stable phase ends after one dominance time ($T^*_{SU} = 1$) if at the time $x$ of a border crossing by the perception process the background process is below $\tilde b_S$. Due to the independence of the two processes during one dominance time, we derive

$$f^1_{\tilde T^*_{SU}}(x) = \big(1-\tilde p_{SS}(x)\big)\, f_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}(x).$$

Now, assume that the stable phase of length $x$ ends after two dominance times, i.e., $T^*_{SU} = 2$. This implies that at the end of the first dominance time with length $x_1$ ($0 < x_1 < x$) the background process is above the border $\tilde b_S$ (event $A$) and, in contrast, at the end of the second dominance time with length $x - x_1$ the background process is below $\tilde b_S$ (event $B$). Again, we make use of the independence of the two processes during one dominance time and the independence of two stable dominance times and need the convolution of the densities of events $A$ and $B$:

$$f^2_{\tilde T^*_{SU}}(x) = \int_0^x \tilde p_{SS}(x_1)\, f_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}(x_1)\, \big(1-\tilde p_{SS}(x-x_1)\big)\, f_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}(x-x_1)\, dx_1.$$

For $T^*_{SU} > 2$ we need $(T^*_{SU}-1)$-fold convolutions (see Figure 13.4), following the same idea as explained for $T^*_{SU} = 2$, i.e., $T^*_{SU}-1$ dominance times where the background process has crossed $\tilde b_S$ and a last dominance time where the background process is below its border (equation (13.9)).

Figure 13.4: A stable phase of length $x > 0$ consisting of $T^*_{SU} = 4$ dominance times. The perceptual changes occur at points in time $0 < x_1 < x_2 < x_3 < x$, which may vary as indicated by arrows.

The assertion follows by the law of total probability as explained above.

Now, we turn to b). By simple convolution arguments the claims about the first recurrence times follow.
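The convolution in b) can be approximated on a grid. The sketch below assumes, purely for illustration, plain inverse Gaussian stand-ins for the two first-passage densities instead of the mixtures of equation (13.9); mass and mean of the resulting recurrence density behave as expected under convolution.

```python
# Numerical convolution f_r = f_{T~*_US} * f_{T~*_SU} of Proposition 13.9 b),
# with invented IG stand-ins for the two first-passage densities.
import numpy as np
from scipy.stats import invgauss

def ig_pdf(x, mean, shape):
    # IG(mean, shape) in scipy's (mu, scale) parameterization
    return invgauss.pdf(x, mu=mean / shape, scale=shape)

dx = 0.02
x = np.arange(dx, 200.0, dx)
f_US = ig_pdf(x, mean=4.0, shape=8.0)    # stand-in for f_{T~*_US}
f_SU = ig_pdf(x, mean=15.0, shape=30.0)  # stand-in for f_{T~*_SU}

# f_r(x) = int_0^x f_{T~*_US}(y) f_{T~*_SU}(x - y) dy, discretized:
f_r = np.convolve(f_US, f_SU)[: x.size] * dx

mass = f_r.sum() * dx        # close to 1: f_r is again a density
mean = (x * f_r).sum() * dx  # close to 4 + 15 = 19: means add under convolution
```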

The first recurrence times can also be interpreted as life time distributions of a renewal process.

We illustrate Proposition 13.9 by comparing the theoretical density of the first passage time of the unstable state with the empirical density based on simulations. To this end, we made use of the estimated HBMi parameters of subject C in Table 12.2 and simulated a HBMi with 10000 stable phases separated by unstable phases. For each stable phase we derived its length. The corresponding histogram is plotted in Figure 13.5 together with the theoretical density given in Proposition 13.9. We note that the theoretical and the empirical density fit closely.

Figure 13.5: Comparison of the theoretical density $f_{\tilde T^*_{SU}}(x)$ (blue line, Proposition 13.9) of the length of a stable phase with a simulated histogram for parameter combination C (see Figure 9.1 [page 70] and Table 12.2).

Again, there exists a renewal equation for the distribution function of the first passage time of the process $\tilde Y_t$. Proposition 13.9 gives a solution of this equation.

Proposition 13.10. Renewal equation for the first passage time of the HBMi

Regard the process $\tilde Y_t$. Let $G_{jk}(t) := P(\tilde T^*_{jk} \le t \mid \tilde Y_0 = j)$ describe the distribution function of the first passage time of state $k$ starting in state $j$. For $j, k \in \{S, U\}$, the semi-Markov kernel $Q(t)$ (equation (13.8)) and $t \ge 0$ it holds

$$G_{jk}(t) = Q_{jk}(t) + \int_0^t G_{jk}(t-x)\, dQ_{jj}(x).$$

Proof: This is Theorem 7.5 in Medhi (2009).
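Because the renewal equation is a Volterra equation of the second kind, it can be solved numerically by forward substitution on a time grid. The sketch below uses invented parameters and a single IG sojourn density standing in for both $f_{D_{SS}}$ and $f_{D_{SU}}$.

```python
# Solving G_SU(t) = Q_SU(t) + \int_0^t G_SU(t - x) dQ_SS(x) on a grid
# (forward substitution; invented parameters, one common IG sojourn density).
import numpy as np
from scipy.stats import invgauss

p_SS, p_SU = 0.8, 0.2
mean, shape = 3.0, 6.0                   # IG(mean, shape) sojourn distribution
dt = 0.01
t = np.arange(dt, 120.0, dt)
f = invgauss.pdf(t, mu=mean / shape, scale=shape)
F = invgauss.cdf(t, mu=mean / shape, scale=shape)

Q_SU = p_SU * F                          # kernel entry Q_SU(t)
dQ_SS = p_SS * f * dt                    # increment of the kernel entry Q_SS

G = np.empty_like(t)
for n in range(t.size):
    # G(t_n) = Q_SU(t_n) + sum over the grid of G(t_n - x) dQ_SS(x)
    G[n] = Q_SU[n] + np.dot(G[:n][::-1], dQ_SS[:n])

# G_SU is a distribution function and tends to 1: U is reached eventually.
```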

Next, we analyze the stationary and the steady-state distribution. Recall that the steady-state distribution $\tilde\pi = (\tilde\pi_S, \tilde\pi_U)$ denotes the probabilities of being in a given state at time $t$ for $t\to\infty$, i.e., $\tilde\pi_S := \lim_{t\to\infty} P(\tilde Y_t = S)$, as well as the long-term relative time spent in the two states (equation (8.3)). Hence, $\tilde\pi_S$ is an important quantity to analyze group differences in the data set Schmack et al. (2015), and Corollary 13.11 is one of the key results stated in this chapter.

To distinguish between the HBM and the HMM we now use stars as superscripts for $\pi$ and $\tilde\pi$.


Corollary 13.11. Stationary and steady-state distribution of the HBMi

Let $(\mu_S, \sigma_S, \mu_U, \sigma_U, p_{SS}, p_{UU})$ be the derived parameter set of a HBMi. Assume $\max(p_{SS}, p_{UU}) < 1$. The stationary distribution $\pi^* = (\pi^*_S, \pi^*_U)$ of the underlying hidden state process is given by

$$\pi^* = \left(\frac{p_{UU}-1}{p_{SS}+p_{UU}-2},\; \frac{p_{SS}-1}{p_{SS}+p_{UU}-2}\right).$$

The steady-state distribution $\tilde\pi^* = (\tilde\pi^*_S, \tilde\pi^*_U)$ with $\tilde\pi^*_S := \lim_{t\to\infty} P(\tilde Y_t = S)$, $\tilde\pi^*_U := \lim_{t\to\infty} P(\tilde Y_t = U)$ of the resulting point process $\Xi_{\mathrm{HBMi}}$ derives as

$$\tilde\pi^* = \left(\frac{\frac{\mu_S}{1-p_{SS}}}{\frac{\mu_S}{1-p_{SS}} + \frac{\mu_U}{1-p_{UU}}},\; \frac{\frac{\mu_U}{1-p_{UU}}}{\frac{\mu_S}{1-p_{SS}} + \frac{\mu_U}{1-p_{UU}}}\right). \qquad (13.10)$$

If $p_{SS} = 1$, we have $\pi^* = \tilde\pi^* = (1,0)$, and if $p_{UU} = 1$, it holds $\pi^* = \tilde\pi^* = (0,1)$.

Proof: Assume $\max(p_{SS}, p_{UU}) < 1$. Note that the mean probability that the perception process remains in the stable state (i.e., is stable during the $(i+1)$-st dominance time given it is stable in the $i$-th dominance time) is given by $p_{SS}$ if the $i$-th dominance time is unknown. Analogously, $p_{UU}$ is the mean probability of remaining in the unstable state in case of an unknown $i$-th dominance time. These probabilities only depend on the current state. Therefore, we can interpret the hidden process $(Y_i)_{i\ge 1}$ as an embedded Markov chain (compare also Section 13.2.1 about the interpretation as semi-Markov process). Assuming that $P$ is the transition matrix of this Markov chain (with entries given by $p_{jk}$ for $j,k \in \{S,U\}$), we know that $\pi^*$ satisfies $\pi^* P = \pi^*$ (Section 9.1). By standard calculations it follows

$$\pi^* = \left(\frac{p_{UU}-1}{p_{SS}+p_{UU}-2},\; \frac{p_{SS}-1}{p_{SS}+p_{UU}-2}\right).$$

To derive the steady-state distribution of $\Xi_{\mathrm{HBMi}}$, we interpret $\Xi_{\mathrm{HBMi}}$ as a regenerative process (compare Section 8.2.3.2). The life times $(\tilde D_i)_{i\ge 1}$ of the corresponding renewal process are given by the first recurrence times in the HBMi. The amount of time spent in state $S$ during a life time then is just the first passage time of the state $U$ when starting in $S$. Using equation (8.2) and Proposition 13.8, it follows directly

$$\tilde\pi^*_S = \frac{E[\text{amount of time in state } S \text{ during } \tilde D_1]}{E[\tilde D_1]} = \frac{E[\tilde T^*_{SU}]}{E[\tilde T^*_{SS}]} = \frac{\frac{\mu_S}{1-p_{SS}}}{\frac{\mu_S}{1-p_{SS}} + \frac{\mu_U}{1-p_{UU}}},$$

and analogously for $\tilde\pi^*_U$.

The assertions for $\max(p_{SS}, p_{UU}) = 1$ are obvious.
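In code, Corollary 13.11 amounts to a few lines; the parameter values below are invented for illustration.

```python
# Stationary distribution pi* of the embedded chain and steady-state
# distribution pi~* of the point process, as in Corollary 13.11 (sketch).

def hbmi_distributions(p_SS, p_UU, mu_S, mu_U):
    """Return (pi*, pi~*) for max(p_SS, p_UU) < 1."""
    pi_S = (p_UU - 1.0) / (p_SS + p_UU - 2.0)
    w_S, w_U = mu_S / (1.0 - p_SS), mu_U / (1.0 - p_UU)  # expected phase lengths
    return (pi_S, 1.0 - pi_S), (w_S / (w_S + w_U), w_U / (w_S + w_U))

pi, pi_tilde = hbmi_distributions(p_SS=0.8, p_UU=0.5, mu_S=3.0, mu_U=2.0)
# pi       = (5/7, 2/7)
# pi_tilde = (15/19, 4/19)
```

The steady-state weights are simply the expected phase lengths of Proposition 13.8, normalized.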

By equation (8.3) we know that the steady-state distribution also describes the long-term relative time spent in the two states. Therefore, we now compare the theoretical with the empirical relative time spent in the stable state. 1000 response patterns for the four subjects given in Table 12.2 are simulated for $T \in \{1200, 3600\}$. The empirical value $\tilde\pi_S^{\mathrm{emp}}$ is estimated for each response pattern as

$$\tilde\pi_S^{\mathrm{emp}} := \frac{\sum_{i:\, Y_i = S} d_i}{T}. \qquad (13.11)$$


Note that the hidden states $Y_i$ are known as we simulate the response patterns. In Table 13.1 the mean and the median of $\tilde\pi_S^{\mathrm{emp}}$ for the two time horizons are compared with the theoretical values $\tilde\pi^*_S$ given by Corollary 13.11. We note that already for the comparatively small recording length in the data, $T = 1200$, theoretical and empirical values agree quite closely.

                                T = 1200                      T = 3600
subject   $\tilde\pi^*_S$   mean $\tilde\pi_S^{\mathrm{emp}}$   med $\tilde\pi_S^{\mathrm{emp}}$   mean $\tilde\pi_S^{\mathrm{emp}}$   med $\tilde\pi_S^{\mathrm{emp}}$
C         0.785             0.761                               0.781                              0.787                               0.786
D         0.872             0.871                               0.873                              0.871                               0.872
E         1.00              1.00                                1.00                               1.00                                1.00
F         0.155             0.216                               0.209                              0.177                               0.176

Table 13.1: Comparison of empirical and theoretical relative times spent in the stable state by the HBMi. For the empirical steady-state distribution 1000 response patterns were simulated with the parameters given in Table 12.2 (page 162). Mean and median values of (13.11) are used for the comparison. The theoretical values are derived using (13.10).

The empirical validity of the stationary distribution $\pi^*$ of the underlying Markov chain is comparable (data not shown).
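A stripped-down simulation illustrates the estimator (13.11). Note that this sketch draws transitions independently with fixed stay probabilities (HMMi-style), ignoring the HBMi coupling between dominance time and transition, so it only approximates the behavior reported in Table 13.1; all parameter values are invented.

```python
# Minimal simulation sketch for the empirical estimate (13.11):
# two-state semi-Markov chain with IG dominance times and fixed stay
# probabilities (an HMMi-style simplification of the HBMi).
import numpy as np

rng = np.random.default_rng(1)
p = {'S': 0.8, 'U': 0.5}                 # stay probabilities (invented)
ig = {'S': (3.0, 1.0), 'U': (2.0, 1.0)}  # (mean mu, sd sigma) of IG times

def simulate_pi_S_emp(T):
    state, time_in_S, t = 'S', 0.0, 0.0
    while t < T:
        mu, sigma = ig[state]
        d = rng.wald(mu, mu**3 / sigma**2)  # IG(mu, lambda = mu^3 / sigma^2)
        d = min(d, T - t)                   # clip the last dominance time at T
        if state == 'S':
            time_in_S += d
        t += d
        if rng.random() > p[state]:         # leave the current state
            state = 'U' if state == 'S' else 'S'
    return time_in_S / T

est = np.mean([simulate_pi_S_emp(T=1200.0) for _ in range(200)])
# theoretical value (13.10): (3/0.2) / (3/0.2 + 2/0.5) = 15/19
```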

Remark 13.12. Renewal results for the HBMi

The results about the expected number of cycles in the HBMi, the renewal equations for $Y_i$ and $\tilde Y_t$, the exact probability of being in the stable state and the probability generating function of the HBMi translate directly from the HMMi (Lemma 10.9, Proposition 10.10, Corollary 10.11 and Lemma 10.13). The transition probabilities $p_{SS}$ and $p_{UU}$ of the HMMi have to be exchanged by the corresponding HBMi probabilities from equation (11.7), respectively.

13.2.3 Number of changes

We focus on the rate of perceptual changes. The following Proposition 13.13 is one key result of this chapter. It is, for instance, required to derive the asymptotic expected number of perceptual reversals in the HBMi when changing the length of the blank display. With a different length of the blank display, $(\mu_S, \sigma_S, \mu_U, \sigma_U, p_{SS}, p_{UU})$ change (as the mean drifts $\nu_S, \nu_U, \nu_B$ change). Proposition 13.13 enables us to derive the asymptotic expected rate of perceptual reversals dependent on the new parameter set.

Proposition 13.13. Rate of changes in the HBMi

Let $(\mu_S, \sigma_S, \mu_U, \sigma_U, p_{SS}, p_{UU})$ be the derived parameter set of a HBMi and $\Delta > 0$ be the length of an interval. If $\max(p_{SS}, p_{UU}) < 1$, it holds for the expected number of changes $N(\Delta)$ in this interval

$$\frac{E[N(\Delta)]}{\Delta} \xrightarrow{\Delta\to\infty} \frac{2 - p_{UU} - p_{SS}}{(1-p_{UU})\,\mu_S + (1-p_{SS})\,\mu_U}. \qquad (13.12)$$

If $p_{SS} = 1$, it holds

$$\frac{E[N(\Delta)]}{\Delta} \xrightarrow{\Delta\to\infty} \frac{1}{\mu_S},$$

and analogously for $p_{UU} = 1$. We define $\rho_{\mathrm{HBMi}} := \lim_{\Delta\to\infty} \frac{E[N(\Delta)]}{\Delta}$ as the (asymptotic) rate of changes.


Proof: Using Corollary 13.11 about the steady-state distribution, the proof is analogous to the proof of Proposition 10.14.
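The limit (13.12) and the renewal identity behind it can be checked numerically; the parameters below are invented.

```python
# Equation (13.12) as a one-liner (sketch), checked against the renewal
# identity rho = E[T*_SS] / E[T~*_SS]: expected number of changes per
# recurrence cycle divided by the expected cycle length.

def rate_of_changes(p_SS, p_UU, mu_S, mu_U):
    return (2.0 - p_UU - p_SS) / ((1.0 - p_UU) * mu_S + (1.0 - p_SS) * mu_U)

rho = rate_of_changes(p_SS=0.8, p_UU=0.5, mu_S=3.0, mu_U=2.0)
per_cycle = (1.0 / 0.2 + 1.0 / 0.5) / (3.0 / 0.2 + 2.0 / 0.5)
# both equal 7/19 changes per second
```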

The asymptotic rate of changes in the HBMi and the rate of changes in the data set Schmack et al. (2015) are compared in Figure 13.6 A. As for the HMM (compare Figure 10.5 on page 120), these rates are close together (mostly not more than 10% apart from each other). The same explanations as in the HMM case apply to the differences in the rates.

Panel B compares the rate of changes in the data with the simulated rate of changes in the HBMi, conditioning on the same (Viterbi-estimated) starting state and the same recording length as in the original data. Because, in contrast to the HMM, the transition probabilities are not maximized directly, the model fit is slightly worse than for the HMM.

Figure 13.6: Comparison of the rate of changes between the HBMi and the data set Schmack et al. (2015). In A the theoretical (asymptotic) rate $\rho_{\mathrm{HBMi}}$ (Proposition 13.13) is contrasted with the rate of changes in the data, and in B the mean rate of changes in 1000 simulations started in the same (Viterbi-estimated, Section 12.5) state as the original data is used ($T = 1200$).

Additionally, we state results about the distribution of the number of perceptual changes in the HBMi. For technical reasons, we first assume in Proposition 13.14 that the interval of length $\Delta$ starts with a perceptual change. In Proposition 13.15 we relax this assumption.

Proposition 13.14. Number of changes in the HBMi: Distribution I

Let $\Theta_{\mathrm{HBMi}} = (b_S, \nu_S, b_U, \nu_U, \tilde b_S, \tilde b_U, \nu_B, \pi_{\mathrm{start},S})$ be the parameter set of a HBMi, $\pi^*$ be its stationary distribution and $p_{SS}, p_{UU}$ be defined as in equation (11.7).

Moreover, let $I(\Delta) := [t, t+\Delta)$ be an interval of length $\Delta \ge 0$ starting with a perceptual change and $Y$ be a sequence of hidden states. The probability of a realization $y$ covering $I(\Delta)$ with $|y| \ge 2$ states in $I(\Delta)$ is for $t\to\infty$ given by

$$P(Y = y) = \pi^*_{y_1} \int_0^{\Delta} \big(g_{y_1} \ast \cdots \ast g_{y_{|y|-1}}\big)(s) \left(1 - F_{IG\left(\frac{2b_{y_{|y|}}}{\nu_{y_{|y|}}}, \sqrt{\frac{2b_{y_{|y|}}}{\nu_{y_{|y|}}^3}}\right)}(\Delta - s)\right) ds, \qquad (13.13)$$

where the following function is folded:

$$g_{y_i}(s) := f_{IG\left(\frac{2b_{y_i}}{\nu_{y_i}}, \sqrt{\frac{2b_{y_i}}{\nu_{y_i}^3}}\right)}(s) \cdot \begin{cases} \Phi_{s\nu_B,\, \sqrt{s}}(\tilde b_{y_i}), & \text{if } y_{i+1}=U,\ y_i=S, \\ \Phi_{-s\nu_B,\, \sqrt{s}}(\tilde b_{y_i}), & \text{if } y_{i+1}=U,\ y_i=U, \\ 1-\Phi_{s\nu_B,\, \sqrt{s}}(\tilde b_{y_i}), & \text{if } y_{i+1}=S,\ y_i=S, \\ 1-\Phi_{-s\nu_B,\, \sqrt{s}}(\tilde b_{y_i}), & \text{if } y_{i+1}=S,\ y_i=U. \end{cases} \qquad (13.14)$$

If $|y| = 1$, we have

$$P(Y = y) = \pi^*_{y_1}\left(1 - F_{IG\left(\frac{2b_{y_1}}{\nu_{y_1}}, \sqrt{\frac{2b_{y_1}}{\nu_{y_1}^3}}\right)}(\Delta)\right). \qquad (13.15)$$

The probability weights of the number of changes $N_c(\Delta)$ of $\Xi_{\mathrm{HBMi}}$ in the interval $I(\Delta)$ (starting with a perceptual change) are then given by

$$P(N_c(\Delta) = j) = \sum_{y \in Y(\Delta):\, |y| = j} P(Y = y), \qquad (13.16)$$

where $Y(\Delta)$ denotes all hidden state sequences starting with a perceptual change whose resulting point processes cover the interval $I(\Delta)$.

Proof: In the first step of the proof, we take the probabilities $P(Y = y)$ as given to show assertion (13.16). In the second step of the proof, an explanation for the correctness of the expressions for $P(Y = y)$ is provided.

Given a perceptual change at $t$, $j$ changes in the interval $[t, t+\Delta)$ imply that the hidden state sequence $y$ leading to the response pattern has a length of $j$ (recall also Figure 10.6 on page 121 for an example). To compute the probability of $j$ changes in the interval $I(\Delta)$, we therefore have to sum over all possible hidden state paths of length $j$, i.e.,

$$P(N_c(\Delta) = j) = \sum_{y \in Y(\Delta):\, |y| = j} P(Y = y).$$

Now, we focus on the derivation of $P(Y = y)$. For $t\to\infty$ the probability weights of the first state of the interval are given by the stationary distribution $\pi^*$. Thus, the first factor of $P(Y = y)$ is the stationary weight $\pi^*_{y_1}$ of the first state $y_1$.

$|y| = 1$ implies that no change occurs in the interval $[t, t+\Delta)$. This means that the perception process has not crossed the border $b_S$ or $b_U$ (depending on the hidden state) in this interval, i.e., (13.15) holds true.

For $j = |y| \ge 2$, we assume the last perceptual reversal occurs at time $t+s < t+\Delta$ (compare Figure 13.7). Then, no changes occur in $(t+s, t+\Delta)$. Thus, the perception process has not crossed its current border $b_S$ or $b_U$ in an interval of length $\Delta - s$. This event has the probability $1 - F_{IG\left(\frac{2b_{y_{|y|}}}{\nu_{y_{|y|}}}, \sqrt{\frac{2b_{y_{|y|}}}{\nu_{y_{|y|}}^3}}\right)}(\Delta - s)$. As the last perceptual change can occur at an arbitrary $t+s < t+\Delta$, we need the convolution defined in (13.13). The other $j-1$ changes occur in the interval $[t, t+s)$. For all these perceptual changes it holds: the state remains stable or changes to stable if the current border $\tilde b_S$ or $\tilde b_U$ of the background process is crossed at the end of a dominance time (third and fourth case in (13.14)). If the background process border is not crossed, the state remains or changes to unstable (first and second case in (13.14)). The function $g_{y_i}(s)$ in (13.14) accounts for the length of the dominance time and the hidden states for each pair $(y_i, y_{i+1})$. Folding it implies that we allow for arbitrary perceptual reversal times $t+s_1, t+s_2, \ldots, t+s_{j-2}, t+s$ with $t < t+s_1 < \ldots < t+s_{j-2} < t+s < t+\Delta$ (compare the arrows in Figure 13.7).

Figure 13.7: Visualization of the number of changes in the HBMi. In this example, $j = 4$ changes (marked by vertical bars) occur in the interval $I(\Delta)$ at times $t < t+s_1 < t+s_2 < t+s$, which are not fixed as indicated by arrows. There is no perceptual change in the interval $(t+s, t+\Delta)$.
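The case distinction in (13.14) translates directly into code. The sketch below uses invented parameter values and treats the background process as a unit-variance Brownian motion with drift $\pm\nu_B$, as in (13.14); for a fixed current state the two cases sum to the plain IG dominance-time density.

```python
# g_{y_i}(s) of equation (13.14) (sketch with invented parameters):
# IG dominance-time density times the probability that the background
# process ends the dominance time below (-> next state U) or above
# (-> next state S) its border b~_{y_i}.
import numpy as np
from scipy.stats import invgauss, norm

nu = {'S': 0.5, 'U': 0.9}      # perception drifts nu_S, nu_U (invented)
b = {'S': 2.0, 'U': 1.5}       # perception borders b_S, b_U (invented)
b_bg = {'S': -1.0, 'U': -1.0}  # background borders b~_S, b~_U (invented)
nu_B = 0.3                     # background drift (invented)

def g(s, y_i, y_next):
    mean = 2.0 * b[y_i] / nu[y_i]   # IG mean 2b/nu
    lam = 4.0 * b[y_i] ** 2         # IG shape implied by sd sqrt(2b/nu^3)
    dens = invgauss.pdf(s, mu=mean / lam, scale=lam)
    drift = nu_B if y_i == 'S' else -nu_B
    below = norm.cdf(b_bg[y_i], loc=s * drift, scale=np.sqrt(s))
    return dens * (below if y_next == 'U' else 1.0 - below)

# sanity check: for fixed y_i the two cases sum to the plain IG density
total = g(2.5, 'S', 'U') + g(2.5, 'S', 'S')
```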

Next, we relax the assumption that the interval I(∆) has to start with a perceptual change.

Proposition 13.15. Number of changes in the HBMi: Distribution II

Let $I(\Delta) := [t, t+\Delta)$ be an interval of length $\Delta \ge 0$. The probability weights of the number of changes $N(\Delta)$ of $\Xi_{\mathrm{HBMi}}$ in the interval $I(\Delta)$ are then for $t\to\infty$ and $j \ge 1$ given by

$$\begin{aligned} P(N(\Delta) = j) = {} & \tilde\pi^*_S \int_0^{\Delta} \Big[ f_{R_S}(\Delta_S)\,\tilde p_{SS}(\Delta_S)\, P(N_c^{*,S}(\Delta-\Delta_S) = j)/\pi^*_S \\ & \qquad\quad + f_{R_S}(\Delta_S)\big(1-\tilde p_{SS}(\Delta_S)\big)\, P(N_c^{*,U}(\Delta-\Delta_S) = j)/\pi^*_U \Big]\, d\Delta_S \\ & + \tilde\pi^*_U \int_0^{\Delta} \Big[ f_{R_U}(\Delta_U)\big(1-\tilde p_{UU}(\Delta_U)\big)\, P(N_c^{*,S}(\Delta-\Delta_U) = j)/\pi^*_S \\ & \qquad\quad + f_{R_U}(\Delta_U)\,\tilde p_{UU}(\Delta_U)\, P(N_c^{*,U}(\Delta-\Delta_U) = j)/\pi^*_U \Big]\, d\Delta_U, \end{aligned}$$

where $N_c^{*,S}(\Delta)$ and $N_c^{*,U}(\Delta)$ denote the number of changes in an interval $I(\Delta)$ starting with a perceptual change in the stable or the unstable state, respectively (equation (13.16)), and $\tilde\pi^*_j$ denotes the steady-state probability of state $j$ (Corollary 13.11). $R_S$ and $R_U$ are the asymptotic residual times in the stable and the unstable state as given in Proposition 8.12 with life time distributions $IG(\mu_S, \sigma_S)$ and $IG(\mu_U, \sigma_U)$.

$\tilde p_{SS}(\Delta_S)$ is the probability of remaining stable given the residual time $\Delta_S$ and derives as

$$\tilde p_{SS}(\Delta_S) := P(Y_2 = S \mid Y_1 = S, R_S = \Delta_S) = \int_{\Delta_S}^{\infty} \big(1 - \Phi_{\tilde d\nu_B,\, \sqrt{\tilde d}}(\tilde b_S)\big)\, \frac{f_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}(\tilde d)}{1 - F_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}(\Delta_S)}\, d\tilde d. \qquad (13.17)$$

$\tilde p_{UU}(\Delta_U)$ is defined similarly. For $j = 0$ we have

$$P(N(\Delta) = 0) = \tilde\pi^*_S\, P(R_S > \Delta) + \tilde\pi^*_U\, P(R_U > \Delta).$$

The point process $\Xi_{\mathrm{HBMi}}$ is asymptotically crudely stationary.

Proof: The proof mainly follows the same ideas as the proof of Proposition 10.16. Therefore, we only show that the expression for $\tilde p_{SS}(\Delta_S)$ in equation (13.17) holds true. First, we derive the density of the stable dominance time $D_1^S$ given its residual time $\Delta_S$:

$$f_{D_1^S \mid R_S = \Delta_S}(\tilde d) = \frac{f_{A_S, R_S}(\tilde d - \Delta_S, \Delta_S)}{f_{R_S}(\Delta_S)} = \frac{f_{D_1^S}(\tilde d)}{f_{R_S}(\Delta_S)\, E[D_1^S]},$$

where $A_S$ denotes the age (Definition 8.11), and the second equality follows by known results (e.g., Lemma 2.1 in Haviv, 2013).

Given this conditional dominance time distribution, we are able to derive the probability that the subject remains stable, i.e., that the background process is above $\tilde b_S$ at the end of the dominance time $\tilde d$. We integrate

$$\tilde p_{SS}(\Delta_S) = \int_{\Delta_S}^{\infty} P(B_{D_1^S} > \tilde b_S \mid D_1^S = \tilde d, R_S = \Delta_S)\, f_{D_1^S \mid R_S = \Delta_S}(\tilde d)\, d\tilde d = \int_{\Delta_S}^{\infty} \big(1 - \Phi_{\tilde d\nu_B,\, \sqrt{\tilde d}}(\tilde b_S)\big)\, \frac{f_{D_1^S}(\tilde d)}{f_{R_S}(\Delta_S)\, E[D_1^S]}\, d\tilde d = \int_{\Delta_S}^{\infty} \big(1 - \Phi_{\tilde d\nu_B,\, \sqrt{\tilde d}}(\tilde b_S)\big)\, \frac{f_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}(\tilde d)}{1 - F_{IG\left(\frac{2b_S}{\nu_S}, \sqrt{\frac{2b_S}{\nu_S^3}}\right)}(\Delta_S)}\, d\tilde d,$$

where in the last line the density of $R_S$ (Proposition 8.12) was plugged in. The same ideas apply to $\tilde p_{UU}(\Delta_U)$.

The asymptotic crude stationarity follows directly from the independence of the probability weights $P(N(\Delta) = j)$ of $t$ for $t\to\infty$.
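Equation (13.17) can be evaluated with standard quadrature. The sketch below uses invented parameters and the dominance-time distribution $IG(2b_S/\nu_S, \sqrt{2b_S/\nu_S^3})$.

```python
# Numerical evaluation of p~_SS(Delta_S) of equation (13.17)
# (sketch with invented parameters).
import numpy as np
from scipy import integrate
from scipy.stats import invgauss, norm

b_S, nu_S = 2.0, 0.5      # perception border and drift (invented)
b_bg_S, nu_B = -1.0, 0.3  # background border b~_S and drift nu_B (invented)
mean, lam = 2.0 * b_S / nu_S, 4.0 * b_S ** 2  # IG(2b_S/nu_S, sqrt(2b_S/nu_S^3))

def p_SS_tilde(delta_S):
    """Probability of remaining stable given the residual time delta_S."""
    surv = invgauss.sf(delta_S, mu=mean / lam, scale=lam)
    def integrand(d):
        # background above its border at the end of the dominance time d
        stay = 1.0 - norm.cdf(b_bg_S, loc=d * nu_B, scale=np.sqrt(d))
        return stay * invgauss.pdf(d, mu=mean / lam, scale=lam) / surv
    val, _ = integrate.quad(integrand, delta_S, np.inf)
    return val

p0 = p_SS_tilde(0.0)   # with residual time 0 this is the unconditional p_SS
p5 = p_SS_tilde(5.0)   # conditioning on a longer residual time shifts the mass
```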

13.2.4 Residual time

Here, we derive the density and the expected value of the residual time, i.e., the time span until the next perceptual reversal in the HBMi.


Corollary 13.16. Residual time in the HBMi

Let $\Xi_{\mathrm{HBMi}}$ be the point process of a HBMi with parameter set $\Theta_{\mathrm{HBMi}} = (b_S, \nu_S, b_U, \nu_U, \tilde b_S, \tilde b_U, \nu_B, \pi_{\mathrm{start},S})$ and asymptotic residual time $R_{in}$. Its expectation is given by $E[R_{in}] = \tilde\pi^*_U$

and the distribution has the density $f_{R_{in}}(x) = \tilde\pi^*_U$

Proof: By substituting the mean and standard deviation parameters by the border and drift