

3.1.1 Stochastic processes and Markov chains

A Markov process is a special type of stochastic process [121]. A stochastic process is a collection of random variables {X(t) | t ∈ T}, defined on a probability space and indexed by a parameter t (usually assumed to be time) which can take values in a set T [55]. T is called the index set or parameter space. If the index set is discrete, the process is called a discrete-time stochastic process; otherwise, if T is continuous, the process is a continuous-time stochastic process.

The values assumed by the random variables X(t) are called states. The set of all possible states forms the state space of the process, which may be discrete or continuous. If the state space is discrete, the process is referred to as a chain and the states are usually identified with the set, or a subset, of the natural numbers. Without loss of generality, we assume the state space can be denoted I = {0, 1, 2, ...}.

A Markov chain is a Markov process with a discrete state space. A Markov process is a stochastic process whose conditional probability distribution function satisfies the following property: given a stochastic process {X(t) | t ∈ T}, for any t_0 < ... < t_n < t_{n+1} the distribution of X(t_{n+1}) depends only on X(t_n), not on the values X(t_0), ..., X(t_{n−1}), i.e.,

P r{X(tn+1)xn+1 |X(tn) =xn, ..., X(t0) =x0}=P r{X(tn+1)xn+1 |X(tn) =xn}. (3.1) As can be observed, the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it. Eq. 3.1 is generally denoted as theMarkovormemorylessproperty. Then we give the definitions ofdiscrete-time Markov chainsandcontinuous-time Markov chains.

Definition 3.1.1 (Discrete-time Markov chain). A discrete-time Markov chain (DTMC) is a discrete-time process {X_n | n = 0, 1, 2, ...} that satisfies the Markov property: for all natural numbers n and all states x_n,

Pr{X_{n+1} = x_{n+1} | X_n = x_n, ..., X_0 = x_0} = Pr{X_{n+1} = x_{n+1} | X_n = x_n}.

Thus, the fact that the system is in state x_0 at time step 0, in state x_1 at time step 1, and so on, up to the fact that it is in state x_{n−1} at time step n − 1, is completely irrelevant for the next step. The state in which the system finds itself at time step n + 1 depends only on where it is at time step n.

To simplify the notation, rather than using x_i to represent the states of a Markov chain, henceforth we shall use single letters such as i, j, and k.

The conditional probabilities Pr{X_{n+1} = j | X_n = i} are called transition probabilities and denoted by p_{ij}(n). Then we have the transition probability matrix

P(n) = \begin{pmatrix}
p_{00}(n) & p_{01}(n) & p_{02}(n) & \cdots & p_{0j}(n) & \cdots \\
p_{10}(n) & p_{11}(n) & p_{12}(n) & \cdots & p_{1j}(n) & \cdots \\
p_{20}(n) & p_{21}(n) & p_{22}(n) & \cdots & p_{2j}(n) & \cdots \\
\vdots & \vdots & \vdots & & \vdots & \\
p_{i0}(n) & p_{i1}(n) & p_{i2}(n) & \cdots & p_{ij}(n) & \cdots \\
\vdots & \vdots & \vdots & & \vdots &
\end{pmatrix} (3.2)

P(n) is a stochastic matrix, i.e., for all states i and j,

p_{ij}(n) ≥ 0,   Σ_j p_{ij}(n) = 1.
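These stochastic-matrix conditions, and the simulation of a DTMC they enable, can be illustrated numerically. The following Python sketch uses a hypothetical 3-state transition matrix whose entries are invented for the example:

```python
import numpy as np

# Hypothetical 3-state homogeneous DTMC; the entries are illustrative.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Stochastic-matrix properties: non-negative entries, rows summing to 1.
assert (P >= 0).all()
assert np.allclose(P.sum(axis=1), 1.0)

def simulate_dtmc(P, start, n_steps, rng):
    """Sample a trajectory X_0, ..., X_n using the Markov property:
    the next state depends only on the current one."""
    states = [start]
    for _ in range(n_steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

rng = np.random.default_rng(0)
trajectory = simulate_dtmc(P, start=0, n_steps=10, rng=rng)
print(trajectory)
```

Note that only the current state is consulted at each step; the earlier history plays no role, exactly as the Markov property requires.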

A Markov chain is said to be homogeneous if for all states i and j

Pr{X_{n+1} = j | X_n = i} = Pr{X_{n+m+1} = j | X_{n+m} = i}   for n = 0, 1, 2, ... and m ≥ 0.

If the parameter space T is continuous, a Markov process is called a continuous-time Markov chain (CTMC). The formal definition of a CTMC is:

Definition 3.1.2 (Continuous-time Markov chain). We say that a stochastic process {X(t) | t ≥ 0} is a continuous-time Markov chain if for all integers (states) n, and for any sequence t_0, t_1, ..., t_n, t_{n+1} such that t_0 < t_1 < ... < t_n < t_{n+1},

Pr{X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, ..., X(t_0) = x_0} = Pr{X(t_{n+1}) = x_{n+1} | X(t_n) = x_n}.

The state residence times in CTMCs are exponentially distributed [55]. Thus, we can associate with every state i in the CTMC a parameter µ_i describing the rate of the exponential distribution; that is, the residence time distribution in state i is

F_i(t) = 1 − e^{−µ_i t},   t ≥ 0. (3.3)

Thus the vector µ = (µ_1, ..., µ_i, ...) describes the state residence time distributions in the CTMC.
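The exponential residence-time distribution of Eq. 3.3 can be sampled directly. The sketch below uses illustrative rates µ_i; with rate µ_0 = 2 the mean residence time is 1/µ_0 = 0.5:

```python
import numpy as np

# Illustrative residence-time rates µ_i for three states (assumed values).
mu = np.array([2.0, 0.5, 1.0])

rng = np.random.default_rng(1)

# Residence time in state i follows F_i(t) = 1 - exp(-µ_i t),
# i.e. an exponential distribution with mean 1/µ_i.
samples = rng.exponential(scale=1.0 / mu[0], size=100_000)

print(samples.mean())  # close to 1/µ_0 = 0.5
```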

The interactions in a continuous-time Markov chain are usually specified in terms of the rates at which transitions occur. A continuous-time Markov chain in some state i at time t will move to some other state j at rate q_{ij}(t) per unit time. The matrix Q(t), formed by placing q_{ij}(t) in row i and column j, for all i and j, is called the transition-rate matrix. We have

Q(t) = \begin{pmatrix}
q_{00}(t) & q_{01}(t) & q_{02}(t) & \cdots & q_{0j}(t) & \cdots \\
q_{10}(t) & q_{11}(t) & q_{12}(t) & \cdots & q_{1j}(t) & \cdots \\
q_{20}(t) & q_{21}(t) & q_{22}(t) & \cdots & q_{2j}(t) & \cdots \\
\vdots & \vdots & \vdots & & \vdots & \\
q_{i0}(t) & q_{i1}(t) & q_{i2}(t) & \cdots & q_{ij}(t) & \cdots \\
\vdots & \vdots & \vdots & & \vdots &
\end{pmatrix} (3.4)

Notice that the elements of the matrix Q(t) satisfy the following properties:

q_{ij}(t) ≥ 0,   i ≠ j,   and   q_{ii}(t) = −Σ_{j≠i} q_{ij}(t) = −µ_i.

In this manner, a continuous-time Markov chain is represented by its matrix of transition rates, Q(t), at time t.
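As a concrete illustration of a transition-rate matrix, the sketch below builds the generator of a small birth-death chain (an M/M/1 queue truncated to four states; the rates lam and mu are assumed values) and checks the sign and row-sum properties of its elements:

```python
import numpy as np

# Truncated birth-death chain (an M/M/1 queue capped at 3 customers);
# lam (arrival rate) and mu (service rate) are assumed illustrative values.
lam, mu = 1.0, 2.0
n = 4  # states 0..3

Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = lam   # arrival: i -> i+1
    Q[i + 1, i] = mu    # service completion: i+1 -> i
np.fill_diagonal(Q, -Q.sum(axis=1))  # q_ii = -sum_{j != i} q_ij

# Off-diagonal rates are non-negative and every row sums to zero.
assert (Q - np.diag(np.diag(Q)) >= 0).all()
assert np.allclose(Q.sum(axis=1), 0.0)
print(Q)
```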

Transient distribution analysis of a CTMC. Let the probability that the system is in state i at time t be π_i(t), i.e.,

π_i(t) = Pr{X(t) = i}, (3.5)

and π(t) = {π_i(t), i ∈ I}. The transient distribution of a CTMC can be computed by solving

dπ(t)/dt = π(t)Q(t). (3.6)
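When Q is time-homogeneous, Eq. 3.6 has the closed-form solution π(t) = π(0)e^{Qt}, which can be evaluated with a matrix exponential. The 2-state generator below is invented for the example:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative time-homogeneous 2-state generator (assumed rates).
Q = np.array([
    [-1.0,  1.0],
    [ 2.0, -2.0],
])

pi0 = np.array([1.0, 0.0])  # start in state 0 with probability 1

# Transient solution of d(pi)/dt = pi Q:  pi(t) = pi(0) expm(Q t).
t = 5.0
pi_t = pi0 @ expm(Q * t)

print(pi_t)  # for large t this approaches the steady state (2/3, 1/3)
```

The probabilities remain normalized for every t, since the zero row sums of Q preserve the total mass.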

Steady-state distribution analysis of a CTMC. If the CTMC arrives at a point in time at which the rate of change of the probability distribution vector π(t) is zero, then the left-hand side of Eq. 3.6 is identically equal to zero. In this case, we say the system is in steady state. The steady-state distribution is written simply as π = {π_i, i ∈ I} in order to show that it no longer depends on time t.

π can be computed by solving the system of linear equations

πQ = 0, (3.7)

together with the normalization condition Σ_i π_i = 1, which excludes the trivial solution.
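One common way to solve Eq. 3.7 numerically is to replace one row of the singular system πQ = 0 with the normalization condition Σ_i π_i = 1. A sketch with an illustrative 2-state generator:

```python
import numpy as np

# Illustrative 2-state generator (assumed rates).
Q = np.array([
    [-1.0,  1.0],
    [ 2.0, -2.0],
])

n = Q.shape[0]
A = Q.T.copy()        # pi Q = 0  is equivalent to  Q^T pi^T = 0
A[-1, :] = 1.0        # replace the last equation by sum_i pi_i = 1
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

print(pi)  # ≈ [2/3, 1/3] for these rates
```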

Semi-Markov processes (SMPs) are generalizations of Markov chains in that the future evolution of the process is independent of the sequence of states visited prior to the current state and independent of the time spent in each of the previously visited states, as is the case for discrete- and continuous-time Markov chains. Semi-Markov processes differ from Markov chains in that the probability distribution of the remaining time in any state can depend on the length of time the process has already spent in that state.

A semi-Markov process consists of two components: (i) a discrete-time Markov chain {X_n | n = 0, 1, 2, ...} with transition probability matrix P, which describes the sequence of states visited by the process, and (ii) H_{ij}(t), the conditional distribution function of a random variable T_{ij} which describes the time spent in state i from the moment the process last entered that state:

H_{ij}(t) = Pr{T_{ij} ≤ t} = Pr{τ_{n+1} − τ_n ≤ t | X_{n+1} = j, X_n = i},   t ≥ 0.

The random variable T_{ij} is the sojourn time in state i per visit to state i prior to jumping to state j.

The evolution of a semi-Markov process is as follows:

1) The moment the semi-Markov process enters any state i, it randomly selects the next state j to visit according to P, its transition probability matrix.

2) If state j is selected, then the time the process remains in state i before moving to state j is a random variable T_{ij} with probability distribution H_{ij}(t).
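The two-step evolution above can be sketched as a simulation. The embedded matrix P and the sojourn-time distributions H_{ij} below are invented for illustration (lognormal sojourns whose parameters depend on both source and destination states):

```python
import numpy as np

# Illustrative embedded DTMC of an SMP (zero diagonal: no self-jumps).
P = np.array([
    [0.0, 0.5, 0.5],
    [1.0, 0.0, 0.0],
    [0.6, 0.4, 0.0],
])

rng = np.random.default_rng(2)

def sample_sojourn(i, j, rng):
    # Hypothetical H_ij: sojourn time depends on source i and destination j.
    return rng.lognormal(mean=0.1 * i + 0.2 * j, sigma=0.5)

def simulate_smp(P, start, n_jumps, rng):
    state, t = start, 0.0
    path = [(t, state)]
    for _ in range(n_jumps):
        j = rng.choice(len(P), p=P[state])   # step 1: pick next state via P
        t += sample_sojourn(state, j, rng)   # step 2: sample T_ij ~ H_ij
        state = j
        path.append((t, state))
    return path

path = simulate_smp(P, start=0, n_jumps=5, rng=rng)
print(path)
```

Because the destination is drawn before the sojourn time, the holding time can legitimately depend on where the process is going next, which a CTMC cannot express.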

Thus the next state to visit is chosen first and the time to be spent in state i is chosen second, which allows the sojourn time per visit to depend on the destination state as well as the source state. The steady-state probabilities {π_i, i ∈ I} of the SMP states can be computed in terms of the embedded DTMC steady-state probabilities v_i and the mean sojourn times h_i [124]:

π_i = v_i h_i / Σ_j v_j h_j,   i, j ∈ X_s. (3.8)
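Eq. 3.8 is a simple weighted normalization: each embedded-chain probability is weighted by the mean time spent per visit. The sketch below evaluates it for invented values of v_i and h_i:

```python
import numpy as np

# Illustrative inputs (assumed values):
v = np.array([0.5, 0.3, 0.2])  # embedded DTMC steady-state probabilities
h = np.array([2.0, 1.0, 4.0])  # mean sojourn times per visit

# Eq. 3.8: pi_i = v_i h_i / sum_j v_j h_j
pi = v * h / np.dot(v, h)

print(pi)  # states with long mean sojourns gain probability mass
```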

Figure 3.1: (a) The M/M/1 queue and (b) its state transition diagram.