
Random Sequences

In the document Signal Processing (pages 196–200)

12.1 Chapter Summary . . . . 169

12.2 What Is a Random Variable? . . . . 170

12.3 The Coin-Flip Random Sequence . . . . 171

12.4 Correlation . . . . 172

12.5 Filtering Random Sequences . . . . 173

12.6 An Example . . . . 174

12.7 Correlation Functions and Power Spectra . . . . 174

12.8 The Dirac Delta in Frequency Space . . . . 176

12.9 Random Sinusoidal Sequences . . . . 176

12.10 Random Noise Sequences . . . . 177

12.11 Increasing the SNR . . . . 178

12.12 Colored Noise . . . . 178

12.13 Spread-Spectrum Communication . . . . 178

12.14 Stochastic Difference Equations . . . . 179

12.15 Random Vectors and Correlation Matrices . . . . 181

12.16 The Prediction Problem . . . . 182

12.17 Prediction Through Interpolation . . . . 182

12.18 Divided Differences . . . . 183

12.19 Linear Predictive Coding . . . . 185

12.20 Discrete Random Processes . . . . 187

12.20.1 Wide-Sense Stationary Processes . . . . 187

12.20.2 Autoregressive Processes . . . . 188

12.20.3 Linear Systems with Random Input . . . . 189

12.21 Stochastic Prediction . . . . 190

12.21.1 Prediction for an Autoregressive Process . . . . 190

12.1 Chapter Summary

When we sample a function f(x) we usually make some error, and the data we get is not precisely f(nΔ), but contains additive noise; that is, our data value is really f(nΔ) + noise. Noise is best viewed as random, so it becomes necessary to treat random sequences f = {f_n} in which each f_n is a random variable. The random variables f_n and f_m may or may not be statistically independent. In this chapter we survey several topics from probability and stochastic processes that are particularly important in signal processing.

12.2 What Is a Random Variable?

The simplest answer to the question “What is a random variable?” is

“A random variable is a mathematical model.” Imagine that we repeatedly drop a baseball from eye level to the floor. Each time, the baseball behaves the same. If we were asked to describe this behavior with a mathematical model, we probably would choose to use a differential equation as our model. Ignoring everything except the force of gravity, we would write

h''(t) = −32

as the equation describing the downward acceleration due to gravity. Integrating, we have

h'(t) = −32t + h'(0)

as the velocity of the baseball at time t ≥ 0, and, integrating once more,

h(t) = −16t^2 + h'(0)t + h(0)

as the equation of position of the baseball at time t ≥ 0, up to the moment when it hits the floor. Knowing h(0), the distance from eye level to the floor, and knowing that, since we dropped the ball, h'(0) = 0, we can determine how long it will take the baseball to hit the floor, and the speed with which it will hit. This analysis will apply every time we drop the baseball. There will, of course, be slight differences from one drop to the next, depending, perhaps, on how the ball was held, but these will be so small as to be insignificant.
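As a quick numerical check of the model above, the following sketch computes the fall time and impact speed; the drop height h(0) = 64 feet is an assumption made up for illustration:

```python
import math

# Free-fall model from the text: h''(t) = -32 (ft/s^2), h'(0) = 0,
# so h(t) = -16 t^2 + h(0).  The ball hits the floor when h(t) = 0.
h0 = 64.0                      # hypothetical drop height in feet
t_hit = math.sqrt(h0 / 16.0)   # solve -16 t^2 + h0 = 0 for t > 0
v_hit = -32.0 * t_hit          # h'(t) = -32 t, the velocity at impact

print(t_hit)  # 2.0 seconds
print(v_hit)  # -64.0 ft/s (downward)
```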

Now imagine that, instead of a baseball, we drop a feather. A few repetitions are all that is necessary to convince us that the model used for the baseball no longer suffices. The factors that we safely ignored with regard to the baseball, such as air resistance, air currents, and how the object was held, now become important. The feather does not always land in the same place, it doesn’t always take the same amount of time to reach the floor, and doesn’t always land with the same velocity. It doesn’t even fall in a straight vertical line. How can we possibly model such behavior? Must we try to describe accurately the air resistance encountered by the feather?

The answer is that we use random variables as our model.

While we cannot predict exactly the place where the feather will land, and, of course, we must be careful to specify how we are to determine “the place” where it does land, we can learn, from a number of trials, where it tends to land, and we can postulate the probability that it will land within any given region of the floor. In this way, the place where the feather will land becomes a random variable with associated probability density function. Similarly, we can postulate the probability that the time for the fall will lie within any interval of elapsed time, making the elapsed time a random variable. Finally, we can postulate the probability that its velocity vector upon hitting the ground will lie within any given set of three-dimensional vectors, making the velocity a random vector. On the basis of these probabilistic models we can proceed to predict the outcome of the next drop.

It is important to remember that the random variable is the model that we set up prior to the dropping of the feather, not the outcome of any particular drop.

12.3 The Coin-Flip Random Sequence

The simplest example of a random sequence is the coin-flip sequence, which we denote by c = {c_n}_{n=−∞}^{∞}. We imagine that, at each “time” n, a coin is flipped, and c_n = 1 if the coin shows heads, and c_n = −1 if the coin shows tails. When we speak of this coin-flip sequence, we refer to this random model, not to any specific sequence of ones and minus ones; the random coin-flip sequence is not, therefore, a particular sequence, just as a random variable is not actually a specific number. Any particular sequence of ones and minus ones can be thought of as having resulted from such an infinite number of flips of the coin, and is called a realization of the random coin-flip sequence.

It will be convenient to allow for the coin to be biased, that is, for the probabilities of heads and tails to be unequal. We denote by p the probability that heads occurs and by 1 − p the probability of tails; the coin is called unbiased or fair if p = 1/2. To find the expected value of c_n, written E(c_n), we multiply each possible value of c_n by its probability and sum; that is,

E(c_n) = (+1)p + (−1)(1 − p) = 2p − 1.

If the coin is fair then E(c_n) = 0. The variance of the random variable c_n, measuring its tendency to deviate from its expected value, is var(c_n) = E([c_n − E(c_n)]^2). We have

var(c_n) = [+1 − (2p − 1)]^2 p + [−1 − (2p − 1)]^2 (1 − p) = 4p − 4p^2.

If the coin is fair then var(c_n) = 1. It is important to note that we do not change the coin at any time during the generation of a realization of the random sequence c; in particular, the p does not depend on n.
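A short simulation can check the formulas E(c_n) = 2p − 1 and var(c_n) = 4p − 4p^2; the bias p = 0.7 and the sample size below are arbitrary choices for illustration:

```python
import random

# Empirically check E(c_n) = 2p - 1 and var(c_n) = 4p - 4p^2
# for a biased coin; p = 0.7 is an arbitrary choice.
random.seed(0)
p = 0.7
flips = [1 if random.random() < p else -1 for _ in range(100_000)]

mean = sum(flips) / len(flips)
var = sum((c - mean) ** 2 for c in flips) / len(flips)

print(mean)  # close to 2*0.7 - 1 = 0.4
print(var)   # close to 4*0.7 - 4*0.7**2 = 0.84
```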

Also, we assume that the random variables c_n are statistically independent. This means that, for any N, any choice of “times” n_1, ..., n_N, and any values m_1, ..., m_N in the set {−1, 1}, the probability that c_{n_1} = m_1, c_{n_2} = m_2, ..., c_{n_N} = m_N is the product of the individual probabilities. For example, the probability that c_1 = −1, c_2 = +1, and c_4 = +1 is (1 − p)p^2.
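The independence assumption can also be checked empirically; this sketch estimates the probability that c_1 = −1, c_2 = +1, and c_4 = +1 over many independent realizations, using a fair coin so the target value is (1 − p)p^2 = 0.125:

```python
import random

# Estimate P(c_1 = -1, c_2 = +1, c_4 = +1) by Monte Carlo;
# for p = 0.5 the product rule gives (1 - p) * p**2 = 0.125.
random.seed(1)
p = 0.5
trials = 200_000
hits = 0
for _ in range(trials):
    c = [1 if random.random() < p else -1 for _ in range(4)]  # c_1, ..., c_4
    if c[0] == -1 and c[1] == 1 and c[3] == 1:
        hits += 1

estimate = hits / trials
print(estimate)  # close to 0.125
```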

12.4 Correlation

Let u and v be (possibly complex-valued) random variables with expected values E(u) and E(v), respectively. The covariance between u and v is defined to be

cov(u, v) = E((u − E(u))(v − E(v))*),

and the cross-correlation between u and v is

corr(u, v) = E(uv*),

where * denotes complex conjugation. It is easily shown that cov(u, v) = corr(u, v) − E(u)E(v)*. When u = v we get cov(u, u) = var(u) and corr(u, u) = E(|u|^2). If E(u) = E(v) = 0 then cov(u, v) = corr(u, v). In statistics the “correlation coefficient” is the quantity cov(u, v) divided by the product of the standard deviations of u and v.
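The identity cov(u, v) = corr(u, v) − E(u)E(v) can be verified numerically; the sketch below uses real-valued samples (so the conjugation is trivial), with distribution parameters chosen arbitrarily:

```python
import random

# Verify cov(u, v) = corr(u, v) - E(u) E(v) on sample moments.
# The Gaussian parameters below are arbitrary; v is built from u
# so that the pair is correlated.
random.seed(2)
n = 100_000
u = [random.gauss(1.0, 1.0) for _ in range(n)]
v = [ui + random.gauss(-0.5, 1.0) for ui in u]

Eu = sum(u) / n
Ev = sum(v) / n
corr_uv = sum(a * b for a, b in zip(u, v)) / n             # E(uv)
cov_uv = sum((a - Eu) * (b - Ev) for a, b in zip(u, v)) / n

print(abs(cov_uv - (corr_uv - Eu * Ev)))  # essentially zero
```

The identity holds exactly for sample moments computed with the same divisor n, which is why the difference printed is only floating-point noise.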

When u and v are independent, we have E(uv*) = E(u)E(v)*, and

E((u − E(u))(v − E(v))*) = E(u − E(u)) E((v − E(v))*) = 0.

To illustrate, let u = c_n and v = c_{n−m}. Then, if the coin is fair, E(c_n) = E(c_{n−m}) = 0 and

cov(c_n, c_{n−m}) = corr(c_n, c_{n−m}) = E(c_n c_{n−m}).

Because the c_n are independent, E(c_n c_{n−m}) = 0 for m not equal to 0, and E(|c_n|^2) = var(c_n) = 1. Therefore

cov(c_n, c_{n−m}) = corr(c_n, c_{n−m}) = 0, for m ≠ 0,

and

cov(c_n, c_n) = corr(c_n, c_n) = 1.
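The two equations above can be checked from a single long realization of the fair coin-flip sequence:

```python
import random

# Estimate corr(c_n, c_{n-m}) from one realization of the fair
# coin-flip sequence: it should be 1 at lag m = 0 and near 0 otherwise.
random.seed(3)
N = 100_000
c = [random.choice((-1, 1)) for _ in range(N)]

def corr(m):
    """Sample estimate of E(c_n * c_{n-m})."""
    return sum(c[n] * c[n - m] for n in range(m, N)) / (N - m)

print(corr(0))  # exactly 1.0, since c_n**2 = 1
print(corr(1))  # close to 0
print(corr(5))  # close to 0
```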

In the next section we shall use the random coin-flip sequence to generate a wide class of random sequences, obtained by viewing c = {c_n} as the input into a shift-invariant discrete linear filter.

12.5 Filtering Random Sequences

Suppose, once again, that T is a shift-invariant discrete linear filter with impulse-response sequence g. Now let us take as input, not a particular sequence, but the random coin-flip sequence c, with p = 0.5. The output will therefore not be a particular sequence either, but will be another random sequence, say d. Then, for each n, the random variable d_n is

d_n = Σ_{m=−∞}^{∞} c_m g_{n−m} = Σ_{m=−∞}^{∞} g_m c_{n−m}.   (12.1)

We compute the correlation corr(d_n, d_{n−m}) = E(d_n (d_{n−m})*). Using the convolution formula in Equation (12.1), we find that

corr(d_n, d_{n−m}) = Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} g_k (g_j)* corr(c_{n−k}, c_{n−m−j}).

Since

corr(c_{n−k}, c_{n−m−j}) = 0, for k ≠ m + j,

we have

corr(d_n, d_{n−m}) = Σ_{k=−∞}^{∞} g_k (g_{k−m})*.   (12.2)

The expression on the right side of Equation (12.2) is the definition of the autocorrelation of the non-random sequence g, denoted ρ_g = {ρ_g(m)}; that is,

ρ_g(m) = Σ_{k=−∞}^{∞} g_k (g_{k−m})*.   (12.3)

It is important to note that the expected value of d_n is

E(d_n) = Σ_{k=−∞}^{∞} g_k E(c_{n−k}) = 0.
