
Random Communication Networks (SS 2020)

Lecture summaries, exercises and exam questions

Prof. Dr. Hannes Frey, (last edited: July 14, 2020)

I. INTRODUCTION

In this lecture we are concerned with random communication networks. Network nodes are randomly placed in a given plane or space and are connected according to some deterministic or stochastic rule. The resulting construct is an undirected graph.

Definition 1 (Undirected graph). An undirected graph G = (V, E) consists of a finite or countably infinite set V of vertices and a set E of edges. Each edge is represented by a set with two different elements from V. We use the shorthand notation uv to represent the edge {u, v} between u and v. We also use the notions node and link as synonyms for vertex and edge, respectively.

Definition 2 (Basic graph terminology). Consider an undirected graph G = (V, E). For each u ∈ V we term all vertices v ∈ V with uv ∈ E a neighbor of u. The total number of neighbors of u is called the degree of u.

A sequence of edges v1v2, v2v3, . . . , vn−1vn ∈ E is called a path between v1 and vn. We write p = v1v2v3 . . . vn to denote such a path. Given a path p = v1v2v3 . . . vn, we say vivi+1 . . . vj (for 1 ≤ i < j ≤ n) is a cycle if vi = vj. If a path contains no cycle, we say the path is cycle-free.

We say two vertices u and v are connected if there exists at least one path p between u and v. We then say that u and v are connected by that path. A set of vertices W ⊆ V is called a connected component if each pair of nodes u, v in W is connected and none of the other vertices in V \ W is connected to a vertex in W. Each node is obviously contained in exactly one connected component (it can be just {v} if v is isolated from all other nodes). We define C(v) as the connected component which contains node v.

Definition 3 (Specific graphs). We term a graph G = (V, E) complete if E contains all possible edges between vertices in V. A graph G = (V, E) is a tree if for each pair of vertices in V there exists exactly one cycle-free path connecting these two vertices. We term a graph a Euclidean graph if each vertex has an assigned location in a d-dimensional Euclidean space. We denote the Euclidean distance between vertices u and v by ‖uv‖.

Informally, a random graph refers to a graph resulting from some probabilistic rule. We can distinguish two principal directions: random graphs where each vertex has a location and the existence or non-existence of an edge depends on the vertex positions, and random graphs where the existence or non-existence of an edge does not depend on vertex positions.

When the vertex locations have no influence on edge creation we speak in general of the Erdős–Rényi random graph model.

Exam question: How is the Erdős–Rényi random graph model related to the general random graph model?

Exercise 1 (Erdős–Rényi and Gilbert model). Consider the following two closely related variants of the Erdős–Rényi random graph model:

Erdős–Rényi model: Given is a set V with n > 0 vertices.

Let 0 ≤ m ≤ n(n−1)/2 and consider all possible undirected graphs with vertex set V and m edges. In the Erdős–Rényi model one of these possible graphs is chosen randomly, where each graph has the same probability to be chosen.

Gilbert model: Given is a set V with n > 0 vertices. Let 0 ≤ p ≤ 1. We generate a graph over V as follows.

For each possible vertex pair u, v ∈ V we create an edge between u and v with probability p, where the probability of connecting them is independent of the other connections.

What is the probability to choose a specific graph G out of the set of all possible graphs with vertex set V and m edges in the Erdős–Rényi and the Gilbert model, respectively?

Exam question: Explain the difference between the Erdős–Rényi and the Gilbert model of the previous exercise.
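As an illustration of the Gilbert model, the following is a minimal Python sketch that samples one graph over n vertices with edge probability p (function and variable names are illustrative, not part of the lecture):

import random

def gilbert_graph(n, p, seed=None):
    """Sample a graph in the Gilbert model: each of the n*(n-1)/2
    possible edges is included independently with probability p."""
    rng = random.Random(seed)
    vertices = list(range(n))
    edges = [(u, v) for u in vertices for v in vertices
             if u < v and rng.random() < p]
    return vertices, edges

V, E = gilbert_graph(n=10, p=0.3, seed=1)
print(len(E), "edges")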

When vertex positions are considered, two possible cases can be distinguished, the discrete and the continuous case. In the discrete case, possible node locations form a countable set, e.g. a 2D grid. In this setting the theory of discrete percolation studies connectivity in such a random grid. For example, for an n×n grid, percolation theory answers the question about the probability to find at least one path from a node at the top to a node at the bottom of the grid.

In the continuous case node locations can be any point of a continuous set, e.g. all points in R2. Such graphs are called random geometric graphs. The study of connectivity in such a continuous setting is then called continuum percolation.

We will study how far changes of local properties (e.g. the average degree) may yield a macroscopic change of the whole graph (e.g. connectivity). We will observe for large networks that with small changes of local properties macroscopic changes can happen surprisingly fast. Such a phenomenon is called phase transition.

The following aspects will be covered in this lecture:

required probability basics

branching processes


discrete percolation

Poisson point processes

continuum percolation

advanced tools for percolation theory

connectivity and coverage in finite graphs

algorithmic aspects of path discovery and navigation

II. PROBABILITY ESSENTIALS

A. Events and probability

In this lecture we consider probability expressions of the form P[A], where A represents an event of a random experiment and P[·] is a probability measure which assigns an event A a probability value P[A] in [0, 1] that this event occurs.

Consider for example the random experiment of rolling a fair six-sided die. We can consider the event A that the die shows an even number. The probability P[A] of that event is obviously 1/2. If we consider the event B that the die shows number one, the probability of that event will be P[B] = 1/6.

Technically speaking, a random experiment generates instances of a sample space Ω. For example the sample space of the before mentioned random experiment is Ω = {1, 2, 3, 4, 5, 6}. Events are subsets of the sample space Ω. The before mentioned event A that the die shows an even number is A = {2, 4, 6}. The event B that the die shows number one is B = {1}.

Probability P is thus a mapping which assigns each event a probability value. In the considered specific case of rolling the fair six-sided die the mapping is reasonably defined as

P[A] = |A| / |Ω| = |A| / 6

However, the mapping is not required to be exactly like that. Consider a biased die for example with P[{1}] = 1/2, P[{2}] = 1/4, P[{3}] = 1/8, P[{4}] = 1/16, and P[{5}] = P[{6}] = 1/32. In this case P[A] = P[{2, 4, 6}] = 11/32 and P[B] = P[{1}] = 1/2, for example.
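As a small illustration, the biased die can be encoded directly as a probability mass on the samples; the probability of an event is then the sum over its elements (a minimal sketch, names are illustrative):

from fractions import Fraction as F

# Probability measure of the biased die from the text, assigned per sample.
p = {1: F(1, 2), 2: F(1, 4), 3: F(1, 8), 4: F(1, 16), 5: F(1, 32), 6: F(1, 32)}

def prob(event):
    """P[A] for an event A given as a subset of the sample space."""
    return sum(p[w] for w in event)

print(prob({2, 4, 6}))  # 11/32, the die shows an even number
print(prob({1}))        # 1/2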

In summary, calculating with probabilities is always subject to a probability space with sample space Ω, events A ⊆ Ω, and a probability measure P which assigns each event A a probability value P[A].

Describing events as subsets of a sample space implies that set calculus can be used to operate on events. In particular, two events A and B can be combined to

A ∩ B, which means that A and B occur

A ∪ B, which means that A or B or both occur

A \ B, which means that A but not B occurs

Ā (the complement of A), which means that A does not occur

Set calculus on events naturally transfers to a calculus on probabilities. If two events are mutually exclusive (e.g. the event A that the die shows an even number and at the same time the event B that it shows an odd number), the probability that both events occur at the same time should be 0. Technically, for two disjoint events A and B, which means A ∩ B = ∅, their probability P[A ∩ B] = P[∅] is always 0.

Consider on the other hand the event that the random experiment creates any sample of the sample space (e.g. the event that the die of the previously considered random experiment shows any of the possible six numbers). This event should have probability one, i.e. P[Ω] = 1.

Further properties which are always satisfied by a probability measure include the following. Whenever two events A and B are disjoint (i.e. A ∩ B = ∅) we have

P[A ∪ B] = P[A] + P[B]

This property also extends to a collection of countably infinitely many (and thus also finitely many) events Ai which are mutually disjoint, i.e. Ai ∩ Aj = ∅ for i ≠ j. We have

P[⋃_{i=1}^∞ Ai] = Σ_{i=1}^∞ P[Ai]

Provided the before mentioned properties hold (as axioms), further properties follow immediately:

A ⊆ B always implies P[B \ A] = P[B] − P[A]

P[Ā] = 1 − P[A]

P[A ∪ B] = P[A] + P[B] − P[A ∩ B]

Sometimes, when an exact solution is hard to compute or not required, it is useful to estimate a union of events from above. This is always possible by the following union bound.

Let A1, A2, . . . be events which are not necessarily disjoint. It always holds:

P[⋃_{i=1}^∞ Ai] ≤ Σ_{i=1}^∞ P[Ai]

When describing the sample space to define a random experiment we must assure that the sample space includes exactly all possible outcomes of the random experiment. To clarify, consider two other examples of a random experiment involving a six sided die. The sample space of the random experiment of rolling a die twice is

Ω ={(1,1),(1,2), . . . ,(1,6), (2,1),(2,2), . . . ,(2,6),

. . .

(6,1),(6,2), . . . ,(6,6)}

The sample space of rolling a die as long as it shows a number less than six would be

Ω = {(x1, . . . , xk, 6) : k ≥ 0 and xi ∈ {1, . . . , 5}}

Exercise 2 (Tossing a coin). Consider tossing a coin exactly three times. With each toss the coin shows either head (H) or tail (T).

1) Describe the sample spaceΩof this experiment as a set.

2) Describe the following events as subsets ofΩ:

the coin never shows head

the coin shows head exactly once

the coin shows head at least once

3) Let p be the probability that the coin shows head and 1 −p that it shows tail. Assume each coin toss is independent of the other two. Describe an appropriate function which represents the probability measure Pof this random experiment.

(3)

4) Compute the probabilities of the events you described in the previous part of the exercise.

When the sample space is clear from the context, a pedantic treatment of the sample space as a set and events as subsets of a sample space is not necessary. Instead, events are typically just described textually or in terms of a formula (involving random variables, which are discussed next). Thus events are expressed in terms of Boolean predicates. In this interpretation we use Boolean calculus instead of set calculus to operate on events.

As an aside: for the random experiments considered so far, each possible subset of Ω is obviously a reasonable event. This is in fact true since the sample space is finite. Even with countably infinite sample spaces this remains true. However, for uncountably infinite sample spaces (for example all real numbers), not all possible subsets can be allowed as valid events. In such a case a probability measure with the properties discussed before does not exist. The set of possible events has to be a proper subset of the set of all possible subsets of Ω. This leads us to the measure-theoretic concept of sigma algebras and ways to construct sigma algebras which allow for valid probability measures. Although we are implicitly dealing with this fact in the context of this lecture, we do not need to be aware of this concept here and do not need to investigate it further.

B. Independence and conditional probability

Consider rolling a fair die two times. Assume the outcome of the first roll does not influence the outcome of the second and vice versa. Let A be the event that the die shows an even number the first time it is rolled and B be the event that it shows an even number as well when rolled the second time.

Obviously, P[A] = P[B] = 1/2 and

P[A ∩ B] = 1/2 · 1/2 = 1/4

This is exactly the defining property of two independent events. We say in general two events A and B are stochastically independent iff

P[A ∩ B] = P[A] · P[B]

Consider rolling a fair die once. What is the probability for event A = {2, 3, 4} (i.e. the die showing number two, three or four) if we already know that the outcome of the experiment is an even number? Let B = {2, 4, 6} describe the event that the die shows an even number. Given B, the possible outcome can only be a sample in A ∩ B, which is {2, 4} in this case. To compute the probability of event A given B, we have to “normalize” the probability of A ∩ B such that the probability of the sure event B under condition B (which is {2, 4, 6} here) is one. This means dividing the probability P[A ∩ B] by P[B]. In our example this yields P[A ∩ B]/P[B] = (1/3)/(1/2) = 2/3.

We can describe this principle in general. Given that the outcome is already determined by event B, we have to relate the event A ∩ B with the probability that event B occurs. In short, assuming P[B] > 0, the probability of A conditioned on B is defined as

P[A | B] = P[A ∩ B] / P[B]

Obviously, when AandB are independent events then P[A|B] = P[A]

Theorem 1 (Law of total probability). Consider a probability space with sample space Ω and probability measure P. Let Bi be disjoint events with non-zero probability each. Assume that the events Bi are a partition of Ω, i.e., Ω = ⋃_i Bi. For each event A we have

P[A] = Σ_i P[A | Bi] · P[Bi]

C. Random variables

A random variable is a mathematical concept to assign values to outcomes of random experiments. Consider for example tossing a coin twice which shows either head H or tail T. Thus, we have the sample space Ω = {HH, HT, TH, TT}.

Assume you earn 1 Euro when the coin shows head and 0 Euro when it shows tail. The pay-out of the first and the second toss can be described by random variables X1(ω) and X2(ω), i.e.,

X1(ω) = 1 if ω = HT or ω = HH, and 0 else

and

X2(ω) = 1 if ω = TH or ω = HH, and 0 else

We can use standard calculus on random variables. For example, for the previous random experiment we can express the total pay-off S(ω) of both tosses by

S(ω) = X1(ω) + X2(ω)
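A minimal sketch of this random experiment in Python, assuming a fair coin (names are illustrative):

import random

rng = random.Random(0)

def toss():
    return "H" if rng.random() < 0.5 else "T"  # fair coin assumed

# One repetition of the experiment: two tosses, pay-out 1 Euro per head.
omega = toss() + toss()
X1 = 1 if omega[0] == "H" else 0
X2 = 1 if omega[1] == "H" else 0
S = X1 + X2
print(omega, S)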

In this lecture we follow the usual convention and represent random variables with upper case letters in order to distinguish them from regular deterministic variables.

Though by this motivation the purpose of a random variable is clear, a complete mathematical definition of the general concept of random variables requires concepts from measure theory. For the random variables applied in this lecture this is however not necessary. It is sufficient to treat random variables as just characterized by their distribution, as discussed next.

A random variable X randomly takes a value in R. We write P[X ∈ A] to denote the probability that the variable takes a value from the set A ⊆ R. For intervals like (a, b] we typically use the shorthand notation P[a < X ≤ b] to express the probability P[X ∈ (a, b]]. In particular, we write P[X ≤ x] for P[X ∈ (−∞, x]].

The randomness of X is completely characterized by its cumulative distribution function (cdf) FX(x), which is defined as

FX(x) = P[X ≤ x]

Analogously, the random variable is also completely characterized by its complementary cumulative distribution function (ccdf) F̄X(x), defined as

F̄X(x) = 1 − FX(x) = P[X > x]

(4)

With a random variable's cdf (or ccdf) also the probability of more complicated expressions can be determined. For example,

P[a < X ≤ b] = P[X ≤ b] − P[X ≤ a] = FX(b) − FX(a)

P[X < b] = P[X ≤ b] − P[X = b] = FX(b) − P[X = b]

P[X ≥ a] = P[X > a] + P[X = a] = F̄X(a) + P[X = a]

Moreover, for disjoint intervals I1 and I2 we have

P[X ∈ I1 ∪ I2] = P[X ∈ I1] + P[X ∈ I2]

We say that two random variables X and Y are independent if

P[X ≤ x and Y ≤ y] = P[X ≤ x] · P[Y ≤ y]

We distinguish two types of random variables. A discrete random variable takes only values from a countable set (e.g. {0, 1}, N, or {1/2, 5, 1000}). Otherwise, we speak of a continuous random variable. (Remark: we ignore here that there also exist so-called mixed random variables which are neither discrete nor continuous, but a mixture of both.)

D. Continuous random variables

The continuous random variables we encounter are all “benign” ones, having a differentiable cdf. If the cdf FX(x) of a random variable X is differentiable we can define its probability density function (pdf):

fX(x) = d/dx FX(x)

where d/dx FX(x) stands for the first-order derivative of the function FX(x) with respect to the variable x (we also use the shorthand notation f′(x) for the first-order derivative of a function f(x)).

Knowing the pdf fX(x) we can also determine the cdf FX(x) by integration:

FX(s) = ∫_{−∞}^{s} fX(x) dx

Also, if FX(x) is differentiable, it is easy to see that P[X = a] = 0. In words, the probability that X takes a specific value a is always 0.

Exercise 3 (Neglecting zero probability events). Let X be a continuous random variable with differentiable cdf. Show P[X < a] = FX(a) and P[X ≥ b] = F̄X(b).

A specific type of continuous random variable which we encounter for a specific wireless channel model in this lecture is the Gaussian distributed random variable X. Such a variable is characterized by two parameters µ (mean) and σ (standard deviation). The pdf is given by

fX(x) = (1/(√(2π)·σ)) · e^{−(x−µ)²/(2σ²)}

We use the notation X ∼ N(µ, σ) to express that the random variable X is distributed according to that pdf.

A closed-form expression of the cdf does not exist in this case. However, the cdf is typically expressed via the following function symbol:

FX(x) = 1 − Q((x − µ)/σ)

where the Q-function Q(x) is defined as

Q(x) = ∫_{x}^{∞} (1/√(2π)) · e^{−y²/2} dy

You will also find the error function in connection with Gaussian distributed random variables. The function is defined by

erf(x) = (1/√π) ∫_{−x}^{x} e^{−t²} dt

It is easy to see that the probability P[X ≤ x] for X ∼ N(µ, σ) can also be expressed by

P[X ≤ x] = 1/2 + (1/2) · erf((x − µ)/(√2 σ))
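A small numerical check of this identity, assuming SciPy is available for the reference cdf (parameter values are arbitrary):

import math
from scipy.stats import norm  # assumed available in the environment

mu, sigma, x = 1.0, 2.0, 2.5
via_erf = 0.5 + 0.5 * math.erf((x - mu) / (math.sqrt(2.0) * sigma))
via_cdf = norm.cdf(x, loc=mu, scale=sigma)  # F_X(x) = 1 - Q((x - mu)/sigma)
print(via_erf, via_cdf)  # both approximately 0.7734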

Another type of continuous random variable we encounter in this lecture is the uniformly distributed random variable X. Such a variable is characterized by two parameters a and b, which define the interval [a, b] in which X takes its values.

Its pdf is given as

fX(x) = 1/(b − a) for x ∈ [a, b], and 0 otherwise

Its cdf is given by

FX(x) = 0 for x < a, (x − a)/(b − a) for x ∈ [a, b), and 1 for x ≥ b

We write X ∼ U(a, b).

E. Discrete random variables

As opposed to the continuous case, where P[X = x] = 0 for all x, a discrete random variable X typically satisfies P[X = x] > 0 if x is a possible outcome of X. Here we can define the probability mass function (pmf) as

pX(x) = P[X = x]

The distribution of a discrete random variable is completely determined by its probability mass function. Moreover, we can determine the cdf FX(x) by summing up P[X = y] for all y ≤ x. In this lecture we consider only discrete random variables taking values in N0. Thus we can express the cdf as follows:

FX(n) = Σ_{i=0}^{n} pX(i)

Three specific distributions of discrete random variables are of interest in this lecture: the Bernoulli, the Binomial, and the Poisson distribution.


A Bernoulli distributed random variable X takes only values from {0, 1}. It is characterized by one parameter p. Its pmf is

pX(n) = p for n = 1, 1 − p for n = 0, and 0 else

We write X ∼ Ber(p).

A Binomially distributed random variable X is the result of summing up n independent identically Bernoulli distributed random variables. It is thus characterized by two parameters n (the number of summed Bernoulli variables) and p (the parameter of the Bernoulli variables). Its pmf is

pX(k) = (n choose k) · p^k (1 − p)^{n−k}, where (n choose k) = n! / (k!(n − k)!)

We write X ∼ Bin(n, p).

A Poisson distributed random variable X describes the number of events in a time interval, assuming that the events occur randomly at a constant rate and independently of each other. Its pmf is

pX(k) = λ^k e^{−λ} / k!

We write X ∼ Poiss(λ).

Exercise 4 (Sum of two independent Poisson random variables). Show that the sum of two independent Poisson random variables with rate λ is a Poisson distributed random variable with rate 2λ. [Hint: use the law of total probability.]

F. Expectation and Variance

The expectation or mean E[X] of a random variable X is defined for the continuous case as

E[X] = ∫_{−∞}^{∞} x · fX(x) dx

and for the discrete case as

E[X] = Σ_{i=0}^{∞} i · pX(i)

It is easy to see that the expectation is a linear operator, i.e. for two random variables X and Y

E[aX + bY] = a·E[X] + b·E[Y]

The expectation can also be used when a function is applied to a random variable. It holds for the continuous and discrete case, respectively:

E[g(X)] = ∫_{−∞}^{∞} g(x) · fX(x) dx

E[g(X)] = Σ_{i=0}^{∞} g(i) · pX(i)

In particular, this concept applied to the function g(x) = x^n (i.e. E[X^n]) for n ∈ N0 yields what we term the nth moment of X. For n > 1 we also speak of the higher moments of X.

An example application of higher moments is to express how the values generated by a random variable X vary around its mean. This can be expressed by the variance, defined as

Var[X] = E[(X − E[X])²]

By linearity of E[·] this expression equates to

Var[X] = E[X²] − (E[X])²

The standard deviation is defined by √Var[X].

When two random variables X and Y are independent, the expectation of their product is the product of their expectations, i.e.

E[X · Y] = E[X] · E[Y]

Let X and Y be independent and f(x) and g(x) two functions. Since f(X) and g(Y) are independent as well, we have

E[f(X) · g(Y)] = E[f(X)] · E[g(Y)]

Exercise 5 (Expectation of a fraction of two random variables). Let X and Y be two independent random variables. Argue that E[X/Y] = E[X]/E[Y] is not correct in general. What is the right expression to calculate E[X/Y]?

Exercise 6 (A simple estimate of the expected value). Consider a discrete random variable N ∈ N0. Prove the following estimate:

E[N] ≥ P[N ≥ 1]

For the example distributions we have:

Distribution    E[X]         Var[X]
N(µ, σ²)        µ            σ²
U(a, b)         (a + b)/2    (b − a)²/12
Ber(p)          p            p(1 − p)
Bin(n, p)       np           np(1 − p)
Poiss(λ)        λ            λ
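These table entries can be checked empirically, e.g. with a short NumPy simulation (a sketch; parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, p, lam, N = 10, 0.3, 4.0, 200_000

binom = rng.binomial(n, p, size=N)
poiss = rng.poisson(lam, size=N)

print(binom.mean(), binom.var())   # close to n*p = 3.0 and n*p*(1-p) = 2.1
print(poiss.mean(), poiss.var())   # both close to lam = 4.0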

G. Central limit theorem

A reason why Gaussian distributions often occur in communication system models is the central limit theorem. The theorem describes the limiting distribution of the sum over a large number of independent identically distributed random variables X1, X2, . . .. Let

Yn = Σ_{i=1}^{n} Xi

and

Zn = (Yn − µn)/σn

where µn and σn are the expected value and standard deviation of Yn. The central limit theorem states that in the limiting case (i.e. n → ∞) the distribution of Zn is N(0, 1).

Thus, any random variable resulting from the sum of a large number of independent identically distributed random components has approximately a Gaussian distribution.


For example, noise at the receiver resulting from many random components, like the agitation of electrons due to thermal noise, is accurately modeled by a Gaussian distribution.
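A minimal simulation sketch of the central limit theorem, using sums of U(0, 1) variables (so µn = n/2 and σn = √(n/12); parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 50_000

# Sum of n i.i.d. U(0,1) variables, standardized with its mean and std.
X = rng.random((trials, n))
Y = X.sum(axis=1)
mu_n, sigma_n = n * 0.5, np.sqrt(n / 12.0)
Z = (Y - mu_n) / sigma_n

print(Z.mean(), Z.std())  # approximately 0 and 1, histogram is close to N(0, 1)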

H. Conditional probability

Let A be an expression over random variables (for example 1 ≤ (X + Y) · Z ≤ 5). We will also consider the probability that this expression is satisfied conditioned on one variable, say X, already being determined to take the value x. We write

P[A | X = x]

We can thus replace any occurrence of X in A with the value x (i.e. 1 ≤ (x + Y) · Z ≤ 5 in the above example). We denote this by A(X = x). Thus,

P[A | X = x] = P[A(X = x) | X = x]

In particular, if all other variables in A(X = x) are independent of X we have

P[A | X = x] = P[A(X = x)]

For calculating the probability of a given expression A we can apply the law of total probability as follows.

If X is a discrete variable we have

P[A] = Σ_i P[A | X = i] · P[X = i]

If X is a continuous variable we have

P[A] = ∫_{−∞}^{∞} P[A | X = x] · fX(x) dx

The law of total probability is also applicable when calculating the expectation of a given expression A:

If X is a discrete variable we have

E[A] = Σ_i E[A | X = i] · P[X = i]

If X is a continuous variable we have

E[A] = ∫_{−∞}^{∞} E[A | X = x] · fX(x) dx

III. BRANCHING PROCESSES

A. Motivation and basic definitions

Definition 4 (k-regular tree). A k-regular tree is a tree of infinite size where all nodes have degree k, for some k > 0.

Definition 5 (k-branching tree). A k-branching tree, denoted by Tk, is an infinite tree with one root node having degree k and all other nodes having degree k + 1, for some k > 0.

Consider the following percolation question. Let Tk be a k-branching tree. Assume for each edge that we keep that edge with a probability p for some p > 0, independently of the choice for all other edges.

Exercise 7 (Distribution of the number of child nodes). Argue that the number of child nodes is binomially distributed Bin(k, p) in the previously defined random tree.

We are interested in the following two questions:

Let C be the connected component which contains the root node. What is the probability that this component has infinitely many nodes?

What is the probability to have such an infinite connected component somewhere in the constructed random graph?

We will answer the first question by considering the follo- wing more general model.

B. Galton-Watson branching process

Definition 6 (Galton-Watson branching process and extinction probability). Let X_i^{(j)} be independent and identically distributed random variables. We denote their common distribution as the offspring distribution. Define the random variables Zi as follows. Let Z0 = 1 be the population size at generation 0. Define recursively the population size at generation i by

Z_i = X_1^{(i)} + · · · + X_{Z_{i−1}}^{(i)}

The extinction probability is the probability that this tree does not grow forever (formally lim_{n→∞} P[Zn = 0]).

Exam question: Explain the Galton-Watson branching process.

Obviously, the branching tree we considered in the motivation, obtained by independently erasing links from Tk, is a special case of a Galton-Watson branching process. In this special case the offspring distribution is Bin(k, p), as we discussed in the previous exercise.
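A minimal simulation sketch of such a Galton-Watson process with Bin(k, p) offspring distribution (names and parameters are illustrative):

import numpy as np

def galton_watson(k, p, generations, seed=None):
    """Simulate generation sizes Z_0, ..., Z_n of a Galton-Watson process
    with Bin(k, p) offspring distribution (the pruned k-branching tree)."""
    rng = np.random.default_rng(seed)
    sizes = [1]
    for _ in range(generations):
        z = sizes[-1]
        if z == 0:
            sizes.append(0)
            continue
        sizes.append(int(rng.binomial(k, p, size=z).sum()))
    return sizes

print(galton_watson(k=3, p=0.5, generations=15, seed=2))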

Theorem 2 (Growth of branching processes). Let µ = E[X_i^{(j)}] be the mean number of children, where the X_i^{(j)} are independent and identically distributed like a generic random variable X. If µ ≤ 1, the branching process does not grow forever with probability 1, except when P[X = 1] = 1. If µ > 1 then there exists a probability greater than 0 that the process grows forever.

Exam question: Explain intuitively under which condition a Galton-Watson branching process may grow to infinity and when it definitely does not.

The theorem can be proven in an elegant way using the concept of generating functions which are typically applied when dealing with stochastic sums as we encounter here.

Definition 7 (Generating function). Let X ∈ N0 be a discrete random variable. The generating function of X is defined as:

G_X(s) = E[s^X] = Σ_{i=0}^{∞} P[X = i] · s^i

Lemma 1 (Properties of generating functions). It holds:

1) G_X(0) = P[X = 0] and G_X(1) = 1

2) G_X(s) is non-decreasing in s

3) E[X] = G′_X(1), where f′(1) is the shorthand notation for the first derivative of f(x) evaluated at 1, i.e. df(x)/dx at x = 1

4) If X and Y are independent then G_{X+Y}(s) = G_X(s) · G_Y(s)

5) Let Xi ∈ N0 be independent and identically distributed random variables. Let X be a generic random variable of that distribution, i.e. Xi ∼ X. Let N ∈ N0 be a discrete random variable which is independent of all Xi. Then the generating function of the stochastic sum S = X1 + X2 + · · · + XN is given by

G_S(s) = G_N(G_X(s))

Exam question: Explain how to determine the generating function of a stochastic sum of independent identically dis- tributed random variables by using the generating functions of the summed random variables and the generating function of the maximum index.

Exercise 8 (Properties of generating functions). Prove all properties of generating functions listed in the previous lemma.

[Hint: (3) here we have d/dx Σ_{i=0}^{∞} f_i(x) = Σ_{i=0}^{∞} d f_i(x)/dx due to absolute convergence. (4) if X and Y are independent then also s^X and s^Y are independent. (5) apply the law of total probability for the expectation.]

The main theorem on growth of a branching process is based on the following two lemmas which can be proved with the concept of generating functions:

Lemma 2 (Mean growth). Consider a branching process. Let µ = E[X_i^{(j)}]. The expected population size of the nth generation is given by

E[Zn] = µ^n

Exam question: Explain how expected population size of one generation and the expected number of offspring of one node are related.

The lemma reveals that the expected size of a population grows or decays exponentially fast depending on the expected number of children per node.

Lemma 3 (Extinction probability). The probability η that Zn = 0 for some n ∈ N equals the smallest non-negative solution of the equation G(s) = s, where G is the generating function of the offspring distribution.

Exam question: Explain how to determine the probability that the branching process dies out.

Exercise 9 (Broadcast propagation). Assume an arbitrarily large network. A broadcast transmission is initiated at a starting node which sends the broadcast message to all its neighbors. A node receiving the broadcast message for the first time will send it to all its neighbors. Subsequent message receptions are ignored by that node. Assume the following simplified stochastic propagation model. Assume that the initial message transmission and all broadcast retransmissions are stochastically independent. For the initial transmission or a retransmission, assume that the number of nodes X receiving the broadcast message for the first time with this transmission is either 0, 1 or 2 with the following probabilities:

P[X = 0] = 1/8, P[X = 1] = 1/2, P[X = 2] = 3/8

1) Consider one of the broadcast message transmissions. What is the expected number of neighbors receiving the broadcast message for the first time?

2) Argue that this broadcast process does not die out with certainty.

3) What is the probability that this broadcast process reaches an infinite number of nodes? [Hint: instead of solving by hand you can also try solve with Wolfram Alpha.]

4) What is the expected number of nodes reached by the first n broadcast retransmissions? [Hint: sum expressions can be handled symbolically by sum with Wolfram Alpha.]

C. Percolation probability for the k-branching tree

Now let's return to the construction we considered in the motivation, which is the special case of a Galton-Watson branching process with a binomial offspring distribution. The following corollary answers the first question we started with in the motivation.

Corollary 1 (Extinction probability for binomial offspring distribution). Consider a Galton-Watson branching process with binomial offspring distribution, i.e. X ∼ Bin(k, p).

When p > 1/k, there exists a positive probability that the tree grows to infinity.

In this case the probability is determined by solving for the smallest solution s ≥ 0 of the equation

(1 − p + ps)^k = s

The expected number of nodes at tree level n is (pk)^n.

As we have seen, we can compute the probability that the root node is located in an infinite connected component. Actually, it was not required to consider the root node. We could have started at any node in Tk and asked if, starting from there, the node is connected in an infinite component with its descendants in Tk. It would be the same probability.

The probability just depends on the parameterp (which is a probability). Moreover, as we will see in the following section, we can extend this concept to more general graph structures, for example an infinite 2D grid. We summarize the general concept of percolation in the following definition.

Exam question: Consider percolation on a k-branching tree. Explain whether it matters if we consider the root node or any other node when asking for the probability that this node is in a finite or infinite connected component.

Definition 8 (Percolation probability). Let G(p) be a random graph model depending on the parameter p, where p is a probability value. The percolation probability θ(p) is the probability that a given node is located in an infinite component.

Exam question: What does percolation probability mean?

As we have seen, the percolation probability θ(p) becomes greater than 0 when p exceeds a certain value (here when p > 1/k) and it is 0 when p falls below. It turns out that this behavior is not limited to the tree structure we studied here but holds for many other cases. The following definition summarizes this observation.


Definition 9 (Critical probability). Let G(p) be a random graph model depending on the parameter p, which is a probability value. The value pc is termed critical probability if θ(p) = 0 for p < pc and θ(p) > 0 for p > pc.

Exam question: What does critical probability mean?

In terms of these definitions we can summarize the results of this section as follows:

Theorem 3 (Percolation threshold for k-branching trees). The critical probability of the k-branching tree is pc = 1/k. For p > pc the percolation probability is θ(p) = 1 − η, where η is the smallest s ≥ 0 solving the equation (1 − p + ps)^k = s. For p ≤ pc we have θ(p) = 0.
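Numerically, the smallest non-negative solution of (1 − p + ps)^k = s can be obtained by fixed-point iteration starting at s = 0, since the iterates increase monotonically towards the smallest fixed point of the generating function. A minimal sketch (names and parameters are illustrative):

def extinction_probability(k, p, iterations=10_000):
    """Smallest s >= 0 with (1 - p + p*s)**k = s, found by fixed-point
    iteration starting at 0 (the iterates increase towards eta)."""
    s = 0.0
    for _ in range(iterations):
        s = (1.0 - p + p * s) ** k
    return s

k, p = 2, 0.7
eta = extinction_probability(k, p)
print(eta, 1.0 - eta)  # eta ~ 0.184, percolation probability theta(p) ~ 0.816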

The second question which we started this section with is still not answered: what is the probability, say ψ(p), that there exists an infinite connected component somewhere in the constructed random graph? If θ(p) = 0, then for each node v in Tk the probability that C(v) is infinite is 0. Thus, nowhere in the network can such an infinite component occur with probability greater than 0.

Assume now θ(p) > 0. Consider any node v0. If C(v0) is finite, there exists a node v1 in Tk which is not in C(v0) (since Tk has infinitely many nodes). Again, if C(v1) is finite, there exists a node v2 in Tk which is not in C(v1) and C(v0). This can be continued arbitrarily. Consequently, as long as we select a node located in a finite component we still have the opportunity to try a next node. For each node we choose, there is a fixed probability θ(p) greater than 0 that this node is in an infinite component. Since we have infinitely many trials, we will thus eventually choose a node which is in an infinite component. This observation can be summarized as follows:

Lemma 4 (Existence of an infinite component). Consider the k-branching tree. If p > pc then there exists an infinite connected component with probability 1. If p ≤ pc the probability for an infinite connected component is0.

Exam question: What is the difference between the probability of existence of an infinite connected component and the percolation probability?

Exam question: Based on the branching tree example, explain why the probability of existence of an infinite connected component is either 0 or 1.

Exam question: How is the probability of existence of an infinite connected component related to the critical probability for percolation in a k-branching tree?

Exercise 10 (Evaluation for the 3-branching tree). Consider the k-branching tree for k = 3.

1) What is the critical probability pc?

2) Determine the percolation probability θ(p) as a function of p and plot that function for p from 0 to 1. [Hint: You can try for example solve, reduce, fullsimplify and plot with Wolfram Alpha.]

3) For which values of p is the probability to have an infinite connected component 1 and for which values of p is it 0?

D. Mean cluster sizes for the k-branching tree

Finally, we want to study the expected size of a connected component. Again, we will discuss this topic in view of branching processes, but in general this concept can also be studied for any percolation model. We define:

Definition 10 (Expected component size). Let x be a vertex in a random graph model (e.g. the k-branching tree) with parameter p, where p is a probability value. Consider the connected component C(x) in which x is located. We denote the expected cluster size of the component of x by

χ_x(p) = E[|C(x)|]

where |C(x)| denotes the number of elements in C(x).

We define the critical probability for an expected infinite component size p′c as follows:

p′c = sup{p : χ_x(p) < ∞} = inf{p : χ_x(p) = ∞}

Exam question: What is the difference between the critical probability for an expected infinite component size and the critical probability for percolation probability?

It is easy to see that p > pc always implies χ_x(p) = ∞. However, is the converse, that p < pc implies χ_x(p) < ∞, also always true, i.e. is pc = p′c always satisfied? In the case of k-branching trees this is easy to prove:

Lemma 5 (Critical probability for expected mean cluster size of the k-branching tree). The k-branching tree satisfies pc = p′c.

Exam question: How are the critical probability for an expected infinite component size and the critical probability for percolation probability related for percolation in a k-branching tree?

Especially for the binary branching tree (i.e. k = 2) even the distribution of the cluster size can be derived.

Lemma 6 (Cluster size distribution of the binary branching tree). Consider the binary branching tree and the edge probability p. The probability that a connected component C has size n is

P[|C| = n] = (1/n) · (2n choose n−1) · p^{n−1} (1 − p)^{n+1}

IV. BOND AND SITE PERCOLATION

A. Motivation and basic definitions

A basic structure to study percolation is an infinite 2D grid. We define L2 as the infinite graph with vertices V = Z² (where Z is the set of integers) and edges E = {{x, y} : x, y ∈ V, ‖x − y‖ = 1}. We define two types of random graphs on this grid:

Definition 11 (Bond percolation and site percolation models). Let 0 ≤ p ≤ 1. In the bond percolation model (also termed edge percolation) each edge uv of L2 is removed with probability 1 − p and kept otherwise. In the site percolation model (also termed node percolation) each node in L2 (and its four incident edges) is removed with probability 1 − p and kept otherwise. In both models, the random removal of nodes/edges is independent of all other removals.

Exam question: Explain the bond and site percolation model.

As we have already studied for k-branching trees, we will study in this context again the percolation probability θ(p), the critical probability pc for a positive percolation probability, and the probability that there exists an infinite component somewhere in the considered random graph. Again, we also ask about the expected size E[|C(o)|] of the component containing the origin and the critical probability p′c for this expected value to be ∞.

Exam question: What do percolation probability, critical probability and the probability for an infinite cluster mean in the bond and site percolation models?

Note, bond and site percolation can be extended to any dimension d. Instead of L2 we can consider Ld with node set Z^d and all edges e with Euclidean length ‖e‖ = 1.

Exam question: How can the bond and site percolation models be extended to any dimension d?

Bond and site percolation originate from the study of porous materials. In this context, the connection between two nodes, i.e. the existence of an edge, is termed open while a missing edge between two nodes is termed closed.

B. General observations for bond percolation

We can already observe some basic properties of the defined percolation models for any dimension d. When increasing p, more edges appear. Thus, θ(p) should be non-decreasing. Formally, this statement can be proved by the technique of coupling. This proof technique means that we consider simultaneously two instances of a random network, one n1 for p1 and one n2 for p2. The construction must assure that each ni (i = 1, 2) by itself is an unbiased random instance of the percolation process for pi. Both instances are, however, coupled such that if the property holds in n1 then it also holds in n2. Let's illustrate this with the following theorem and its proof.

Exam question: Explain the proof technique of coupling.

Theorem 4 (Percolation probability is non-decreasing in p). Consider any dimension d. Let 0 ≤ p1 ≤ p2 ≤ 1. We have θ(p1) ≤ θ(p2).

Exam question: Argue why the percolation probability is non-decreasing in p (where p is the probability that an edge is kept).
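The coupling idea behind this theorem can be mimicked in a small simulation: draw one uniform value per edge and keep the edge in the instance for pi exactly if the value is below pi. A sketch on a finite grid window (names are illustrative):

import random

def coupled_bond_instances(edges, p1, p2, seed=None):
    """Couple two bond percolation instances with p1 <= p2: each edge e
    draws one uniform U(e); e is kept in instance i iff U(e) < p_i.
    Every edge kept for p1 is then also kept for p2."""
    rng = random.Random(seed)
    kept1, kept2 = set(), set()
    for e in edges:
        u = rng.random()
        if u < p1:
            kept1.add(e)
        if u < p2:
            kept2.add(e)
    return kept1, kept2

edges = [((x, y), (x + 1, y)) for x in range(3) for y in range(4)] + \
        [((x, y), (x, y + 1)) for x in range(4) for y in range(3)]
k1, k2 = coupled_bond_instances(edges, 0.3, 0.6, seed=0)
print(k1 <= k2)  # True: the p1 instance is a subgraph of the p2 instance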

The technique requires that the property remains valid when adding edges; for example, an infinite component remains infinite when adding edges. Formally, coupling is applicable for events or random variables which satisfy the following property:

Definition 12. An event A is increasing when for each instance ω of the random graph where this event occurs (i.e. ω ∈ A), adding edges to that instance leads to an instance ω′ which remains in A. Similarly, a random variable X is increasing if for each ω its value X(ω) does not decrease whenever edges are added to ω, i.e. X(ω) ≤ X(ω′), where ω′ is ω plus additionally added edges. Conversely, an event is decreasing when its complement is increasing. A random variable is decreasing when its negation is increasing. An event or random variable which is increasing or decreasing is called monotone.

Exam question: What does it mean that an event or a random variable is increasing/decreasing/monotone?

Exam question: Explain why the technique of coupling requires such a monotone event or random variable.

Exercise 11 (Connectivity in a region is an increasing event). Consider bond percolation on L2. Consider a box B around the origin o = (0, 0). Define the event A = {all vertices in B are connected by at least one path in B}. Let ξ(p) be the probability of that event.

1) Argue that A is an increasing event.

2) Prove that 0 < p1 < p2 < 1 implies ξ(p1) ≤ ξ(p2) in the same way we proved the previous theorem in the lecture.

The percolation probability obviously satisfies θ(0) = 0 (i.e. the grid without edges) and θ(1) = 1 (i.e. the grid with all edges). Since θ is non-decreasing, there has to be a critical probability pc such that

θ(p) = 0 for p < pc, and θ(p) > 0 for p > pc

where pc = sup{p : θ(p) = 0}.

Let pc(d) denote the critical probability for the percolation model in d dimensions. Obviously, the critical probability for d = 1 (i.e. an infinite chain of nodes) is pc(1) = 1.

Moreover, a connected component in Ld is also a connected component in Ld+1. Thus, pc(d + 1) ≤ pc(d).

Exam question: Explain the relationship of the critical probabilities of bond percolation depending on the dimension of the grid.

As we already observed for branching trees, also for bond and site percolation the probability ψ(p) to have an infinite cluster somewhere in the random graph is either 0 or 1.

Theorem 5 (Existence of an infinite connected component). Let ψ(p) be the probability for the existence of an infinite component somewhere in the random grid. Then

ψ(p) = 0 if θ(p) = 0, and ψ(p) = 1 if θ(p) > 0

Exam question: Explain the relation between the percolation probability and the existence probability of an infinite component.

Thus, given a critical probability pc, we have a phase transition at pc. When p > pc the probability to have an infinite component is one (we also say an infinite component exists almost surely). In this case we say that the network percolates, or equivalently that the percolation model is supercritical. Otherwise, if p < pc the probability for an infinite component is zero (or conversely the graph almost surely consists only of finite connected components). In this case we say the percolation model is subcritical.

Exam question: Explain the terms percolation, phase transition, subcritical and supercritical.

C. Simple bounds for bond percolation on the square lattice

To develop our understanding of percolation theory and its proof arguments we start with bond percolation on the square lattice. We now prove a coarse estimate on its critical probability. The proof we study is basically a proof pattern which can also be applied to estimate the critical probability for other percolation models (see the next exercise). Note, for percolation on the square lattice an exact solution for pc exists. We will see this later after we have learned more techniques which yield sharper results.

Theorem 6 (Estimate on the critical probability of bond percolation on the square lattice). The critical probability pc of bond percolation on the square lattice is bounded by 1/3≤pc ≤2/3.

The proof for pc ≥ 1/3 is based on a coarse estimate on the number of paths σ(n) of length n from the origin o and letting n go to infinity.

Exercise 12 (Lower bound on the critical probability for the triangular lattice). Consider instead of the square lattice L2 a triangular lattice T on the plane, i.e. we have an infinite grid of equilateral triangles instead of squares. Consider the percolation model, where each edge of the triangular lattice is independently erased with probability p. Let θ(p) be the percolation probability for this percolation model. Follow the scheme of the proof for the previous theorem to estimate a lower bound on the critical probabilitypc for this percolation model.

The proof of pc ≤ 2/3 is based on a counting argument called Peierls' argument. Consider a planar drawing of a graph G. Place a vertex in each face of that drawing and connect two such vertices if their faces share a boundary edge. We term this graph the dual graph of G. In particular, for the square lattice we consider the dual graph resulting from placing a vertex in the center of each square. The result in this case is obviously a square grid, the original grid shifted by (1/2, 1/2) (which we also call the dual lattice).

Exam question: Explain the concept of a dual graph and in particular the dual lattice.

Analogously, we can construct a dual for the random lattice. Consider the square lattice and its dual. Keep an edge of the dual lattice if the edge which it crosses in the original lattice is not present in the random graph. Otherwise, delete that edge.

Exam question: Draw an example of a random instance of the bond 2D percolation model and its dual graph.

A circuit is a path where start and end vertex are the same and each vertex of the path is visited only once. We observe that any finite component in the random grid is surrounded by a circuit in the dual random grid.

Thus, the event |C(o)| < ∞ in the original graph, i.e. the origin o being located in a finite connected component, is equivalent to o lying inside a circuit of the dual graph.

Exam question: How can you characterize by its dual graph, that a node is in a finite component in the graph of 2D bond percolation?

Exam question: How is the probability of an edge in the dual graph related to the probability of an edge in the original graph?

We consider the set C of all circuits in which o is located. Consider the following deterministic quantities first. Let B(k) = [−k, k]² ∩ Z² be the box of side length 2k centered at o. Define the set Ck ⊆ C of all circuits surrounding B(k). Let ρ(n) be the number of circuits of length n which surround the origin o. This quantity can be estimated from above by ρ(n) ≤ n·σ(n − 1).

Now consider the random grid. We derive in the lecture that θ(p) > 0 follows when the expression

Σ_{n=8k}^{∞} n·σ(n − 1)·q^n < 1

holds for sufficiently large k. This is equivalent to finding q such that the sum converges for fixed k. The upper bound for the critical probability is then given by pc ≤ 1 − q. For the case of the square grid this yields pc ≤ 2/3.


Exercise 13 (Upper bound on the critical probability for the triangular lattice). Consider percolation on a triangular lattice T as defined in the previous exercise. Derive, with the following steps and based on the scheme of the proof in the lecture, an estimate on an upper bound on the critical probability pc for this percolation model.

1) What is the dual grid for T?

2) Draw an example for the dual of a random grid over T.

3) Is a finite connected component in the random grid always contained in a circuit of the dual of the random grid?

4) Derive an upper bound on the number σd(n) of cycle-free paths of length n in the dual grid of T which start from an origin vertex o.

5) Derive an upper bound on the number ρd(n) of circuits of length n in the dual of T which contain the origin o = (0, 0) of T.

6) Check that for this exercise hexagons Hk of circumference 6k are appropriate “boxes” to follow the construction of the proof of the lecture.


7) Use the results of the previous steps to estimate the critical probability from above in the same way as we did in the proof in the lecture.

8) Which upper bound do you get for pc?

9) Compare the results of this and the previous exercise with the bounds we derived for L2 in the lecture. Have a guess about the difference between the critical probabilities for L2 and T and justify your conjecture.

D. Site percolation

The results we derived so far also hold for site percolation. Likewise, for site percolation the percolation probability θ(p) is non-decreasing in p, and in case of θ(p) > 0 there is almost surely an infinite component somewhere in the random network. We can also estimate the critical probability pc for site percolation from below and above in a similar way as we did before (though we have to use a different construction than dual graphs). Moreover, the following relationship between the critical probabilities for bond and site percolation holds.

Theorem 7 (Relation between critical probability for bond and site percolation). The critical probabilities p_c^site(G) and p_c^bond(G) for bond and site percolation over the same infinite connected graph G always satisfy

p_c^site(G) ≥ p_c^bond(G)

Exam question: Explain how and why the critical probability of bond and site percolation on arbitrary infinite graphs are related.

The theorem can be proven formally by the technique of dynamic coupling, which is an extension of the idea of coupling which we already discussed. However, here we consider an algorithm constructing two coupled instances of both percolation models. Coupling means (again) that a path in one instance always implies a path in the other instance. The coupling has to be such that both constructed instances by themselves stand for two independently generated random instances of the respective percolation models.

Exam question: Explain the proof principle of dynamic coupling.

For proving the theorem we consider any vertex x0 ∈ G and construct a random spanning tree as follows. For each neighbor y of x0 we delete the edge x0y with probability 1 − p. If the edge is deleted we also delete y. Otherwise we keep y. Every node we touch (by either deleting or keeping it) is marked as visited. In all subsequent steps we proceed as long as possible by selecting a not erased node y reachable from x0 and we repeat the before mentioned step for all neighbors of y which have not been visited so far. In this way we get a (possibly infinite) spanning tree.

Obviously, this method generates an instance of a component C(x0) of site percolation. Moreover, the tree constructed this way is a subgraph of the component C(x0) in bond percolation. Consequently, an infinite component C(x0) in site percolation is always coupled with an infinite component C(x0) in bond percolation. Thus, when G percolates with p for site percolation it also percolates with p for bond percolation, which yields p_c^site(G) ≥ p_c^bond(G).
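A minimal sketch of this exploration for site percolation on L2, restricted to a finite window so that it terminates even in the supercritical case (names are illustrative; the plotting asked for in the following exercise is omitted):

import random

def explore_component(p, k, seed=None):
    """Grow the cluster C(o) of site percolation on the 2D grid,
    restricted to the window [-k, k]^2, starting from the origin.
    Each site is decided on first contact and kept with probability p."""
    rng = random.Random(seed)
    status = {(0, 0): True}           # the origin is taken as open here
    cluster, frontier = {(0, 0)}, [(0, 0)]
    while frontier:
        x, y = frontier.pop()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if max(abs(nb[0]), abs(nb[1])) > k or nb in status:
                continue
            status[nb] = rng.random() < p   # decide the site on first contact
            if status[nb]:
                cluster.add(nb)
                frontier.append(nb)
    return cluster

print(len(explore_component(p=0.55, k=100, seed=3)))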

Exercise 14 (Simulating a random connected component in site percolation). Implement a program which generates an instance of C(o) (the component which contains the origin (0, 0)) in the site percolation model over L2 based on the previously described dynamic coupling approach.

1) Describe your program idea with pseudo code.

2) Write a program in Python (using a Jupyter notebook in CoCalc) which creates an instance of C(o) for a given probability p. To avoid an infinite loop for p in the supercritical range, limit your simulation to an observation window [−k, k]², i.e. every node we reach outside this window is just ignored. Use matplotlib.pyplot to show a 2D plot of the generated instance.

3) Use your code to plot one instance each in the observation window [−500, 500]² for the values p = 0.56, 0.57, 0.58, 0.59, 0.60, 0.61. Where could the critical probability be approximately?

4) Use your code to plot one instance each in the observation window [−500, 500]² for the subcritical values p = 0.570, 0.571, 0.572, 0.573, 0.574, 0.575, 0.576, 0.577, 0.578, 0.579. What do you observe?

Exam question: Explain how an instance of the component of any origin x can efficiently be generated for site percolation by a computer simulation using the algorithm described for dynamic coupling.

Site percolation can be seen as a more general concept than bond percolation. A bond percolation model can be translated into a site percolation model but not vice versa. Given a graph G, define a covering graph Gc by placing a node in the middle of each edge of G. Connect all those nodes of Gc whose corresponding edges in G have a node in common. For bond percolation on G with probability p we can also consider site percolation on Gc with the same probability p. Every path in G corresponds to a path in Gc and vice versa. Thus,

p_c^bond(G) = p_c^site(Gc)

Exam question: Explain the concept of a covering graph by an example.

Exam question: Argue how bond percolation on a graph and site percolation on a covering graph are related.

When comparing bond and site percolation, a special case arises when the underlying graph is a tree. Interpreting an edge removal as a removal of the child node reached by that edge along the tree, we see that bond percolation is the same as site percolation in this specific case.

Exam question: Explain why bond and site percolation are the same when the underlying graph is a tree.

E. Further results

We state a few further results on bond and site percolation without elaborating the details. However, some of the sharp results listed here will be discussed in more detail once we have learned the required set of tools to prove them.

The estimate 1/3 ≤ pc ≤ 2/3 for bond percolation on the square lattice L2 also follows from a more general result. Define λ(2) for dimension 2 by

λ(2) = lim_{n→∞} σ(n)^{1/n}

where σ(n) represents the number of possible paths as defined in the previous proof of the estimate. The limit exists when σ(n + m) ≤ σ(n)·σ(m) (Fekete's Lemma). This is obviously satisfied by σ(n). The same proof technique we used to show 1/3 ≤ pc ≤ 2/3 yields

1/λ(2) ≤ pc(2) ≤ 1 − 1/λ(2)

where pc(2) is the critical probability for the square grid. Our estimate σ(n) ≤ 4·3^{n−1} yields λ(2) ≤ 3 and establishes the result 1/3 ≤ pc ≤ 2/3 we already know. However, a better estimate on σ(n) immediately yields better lower and upper bound estimates for pc(2). The best known estimate for λ(2) is 2.6256 ≤ λ(2) ≤ 2.6792.

Exam question: Argue why a better estimate on the number of acyclic paths of length n and the same origin yields a better estimate on the lower and upper bound of the critical probability for bond percolation on the grid.

The same technique applies to higher dimensions to derive a lower bound (but no upper bound):

1/λ(d) ≤ pc(d)

where d is the dimension. Note, with each additional dimension two more outgoing edges are added to each node of the grid. With σ(n) ≤ 2d·(2d − 1)^{n−1} we get the lower bound

pc(d) ≥ 1/(2d − 1)

Exercise 15 (A lower bound on the critical probability for bond percolation in higher dimensional grids). Follow the arguments discussed before to show that pc(d), the critical probability for bond percolation in a d-dimensional grid, is lower bounded by

pc(d) ≥ 1/(2d − 1)

When increasing the dimension we have

pc(d) ∼ (2d)^{−1}

Thus, when d is large, percolation on the grid behaves like percolation on a 2d-branching tree.

Exam question: Explain intuitively why with increasing dimension d bond percolation on a d-dimensional grid behaves like percolation on a 2d-branching tree.

Exact results for the critical probability are only known for some cases. Later in this lecture we will develop tools which enable us to prove that the critical probability of bond percolation on L2 is exactly 1/2. The same critical probability 1/2 also holds for site percolation on the triangular lattice T. In contrast, an exact derivation of the critical probability of site percolation on the grid L2 does not exist. Computer simulations suggest that the critical probability is approximately 0.59275. Also a general estimate for the critical probability for bond and site percolation on arbitrary infinite connected graphs G with a maximum degree ∆ exists:

1/(∆ − 1) ≤ p_c^bond(G) ≤ p_c^site(G) ≤ 1 − (1 − p_c^bond(G))^{∆−1}

One question we have left out so far is how many infinite clusters can exist in the supercritical regime. The answer depends on the model. For bond and site percolation on the grid of any dimension there exists only one infinite cluster with probability one in the supercritical regime.

Exercise 16 (A model with infinitely many infinite components). Provide and discuss a bond and/or site percolation model where in the supercritical regime infinitely many infinite components exist with probability one.

V. MODELS FOR WIRELESS COMMUNICATION

Percolation theory has an application spectrum across natural sciences and engineering disciplines. When studying percolation theory on its own, of course there is no need to discuss specific modeling details of the application domain. However, here we also look at this mathematical tool from the perspective of wireless communication network modeling. Thus, introducing some of the standard wireless communication channel models, which are discussed in much more detail in the lecture “Wireless Communication (DraKo)”, is appropriate. This section is a short summary of some of the main results from that lecture.

A. Average path loss for an isotropic radiator

Definition 13 (Average path loss for the isotropic radiator). Let d be the distance between transmitter and receiver, α ≥ 1 be the path loss coefficient and pTX be the power radiated by the transmitter antenna. We define the following models (APL stands for Average Path Loss model) for the average received power pRX at the receiver antenna:

APL1: pRX = pTX · d^{−α}

APL2: pRX = pTX · min{1, d^{−α}}

APL3: pRX = pTX · (1 + d)^{−α}

Exam question: Explain the model for average path loss for an isotropic radiator.
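A minimal sketch of the three average path loss models as Python functions (names are illustrative); it also illustrates why APL1 is problematic for distances below 1, where the “received” power would exceed the transmit power:

def apl1(p_tx, d, alpha):
    return p_tx * d ** (-alpha)

def apl2(p_tx, d, alpha):
    return p_tx * min(1.0, d ** (-alpha))

def apl3(p_tx, d, alpha):
    return p_tx * (1.0 + d) ** (-alpha)

# Example: transmit power 1.0, path loss coefficient alpha = 3.
# For d = 0.5, APL1 yields 8.0 > p_tx, while APL2 and APL3 stay bounded.
for d in (0.5, 1.0, 2.0, 10.0):
    print(d, apl1(1.0, d, 3), apl2(1.0, d, 3), apl3(1.0, d, 3))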

Definition 14 (Log-distance path loss model). The log-distance path loss model expresses the received power PRX in dBm at a given communication distance d for suitable constants c, α and d0 as follows:

PRX(d) = c − 10·α·log10(d/d0)

where α is the path loss coefficient, d0 is the reference distance and c is the average received power in dBm at the reference distance d0.

Definition 15 (Log-Normal Shadowing Model). The Log-Normal Shadowing Model expresses the received power PRX in dBm at a given communication distance d for suitable constants c, α and d0 and a Gaussian distributed random variable X ∼ N(0, σ²) as follows:

PRX(d) = c − 10·α·log10(d/d0) + X

where α, d0 and c have the same meaning as in the previous definition. The random variable X models the fluctuations of the average signal strength due to objects in the environment.

Exam question: Explain the log-normal shadowing model.

B. Flat fast fading

Definition 16 (Flat fast fading model). Let pRX be the average received power at a receiver antenna. Let X be a non-negative random variable with E[X] = 1, here referred to as fading coefficient. The instantaneous received power PRX is defined as PRX = pRX · X.

Exam question: Explain the fast fading model.

Definition 17 (Rayleigh fading). Rayleigh fading describes a channel consisting of several indirect communication paths without a dominant communication path. The fading coefficient satisfies X ∼ Exp(1) in this model.

Exam question: Describe the scenario for Rayleigh fading.

Definition 18 (Rice fading). Rice fading describes a channel consisting of several indirect communication paths and one additional dominant communication path (usually the so-called direct line-of-sight or LOS path). One considers the ratio K between the average power P1 received via the dominant path and the average power P2 received in total via the indirect paths, i.e. K = P1/P2 (the total power received is P1 + P2). The fading coefficient X satisfies in this model X = Y/(2(K + 1)) with Y following a non-central chi-squared distribution with 2 degrees of freedom and non-centrality parameter 2K.

Exam question: Describe the scenario for Rice fading.

Definition 19 (Nakagami fading). With Nakagami fading the communication channel is considered to be the sum of several independent (approximately) identically distributed Rayleigh fading channels. In this model, the fading coefficient X satisfies X ∼ Γ(m, m) for a suitably selected model parameter m ≥ 0.

Exam question: Describe the scenario for Nakagami fading.

Exam question: Describe the relationship between the Rayleigh, Rice and Nakagami fading models informally.

Corollary 2 (Distribution and density of a fading model). Let Y = pRX · X be the random variable considered in a fading model. From the distribution function FX(x) and density fX(x), the distribution function and density of Y can be determined as follows:

FY(x) = FX(x / pRX)

fY(x) = (1 / pRX) · fX(x / pRX)

C. SINR-, SIR- and SNR model

Definition 20 (SINR, SIR and SNR model). Given is a network node t which receives a transmission from node s with power S. Let V be the set of all further nodes sending simultaneously and Sv for each v ∈ V be the power with which node t receives the transmission of v. We refer to the transmissions of the nodes from V as interference. In addition, N is the interference-independent power of the noise at the receiver. The power quantities can be modeled as deterministic values or as random variables. The SINR (Signal-to-Interference-and-Noise Ratio) is defined as

SINR := S / (N + I), where I = Σ_{v∈V} Sv

If the noise is neglected (i.e. N = 0), this is called SIR (Signal-to-Interference Ratio). If, on the other hand, no interference is assumed (i.e. I = 0), this is called SNR (Signal-to-Noise Ratio).

Exam question: Describe the SINR, SIR and SNR models.

Definition 21 (Probability of transmission success, probability of outage). We refer to ps(θ) = P[SINR > θ] as the probability of successful transmission. The opposite probability 1 − ps(θ) is referred to as outage probability. Analogously, we define these terms for SIR and SNR.

Exam question: Explain the terms transmission success and outage probability.
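As a small illustration, for the SNR model with Rayleigh fading (X ∼ Exp(1)) the success probability has the closed form P[SNR > θ] = exp(−θ·N/pRX), which can be checked by Monte Carlo simulation (a sketch; parameter values are arbitrary):

import math
import numpy as np

rng = np.random.default_rng(4)
p_rx, noise, theta, N = 1.0, 0.1, 5.0, 200_000

# Rayleigh fading: instantaneous power P_RX = p_rx * X with X ~ Exp(1).
X = rng.exponential(1.0, size=N)
snr = p_rx * X / noise
p_success_mc = float(np.mean(snr > theta))

# For the SNR model with Exp(1) fading, P[SNR > theta] = exp(-theta*noise/p_rx).
p_success_exact = math.exp(-theta * noise / p_rx)
print(p_success_mc, p_success_exact)  # both approximately 0.6065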

VI. POISSON POINT PROCESS

So far we considered percolation on a given fixed infinite graph structure (e.g. a square grid). Moreover, we considered only one single probability p to keep a vertex or edge. We want to relax both constraints.

Let's consider the first constraint first. Assume now that the vertex positions themselves are random. This leads us to the theory of stochastic point processes. Informally, a point process generates a countable set of points (the network nodes we consider here) in a given space, which is R2 in this lecture (we cover point processes in much more detail in the lecture on “Performance Evaluation of Wireless Networks”). In the context of this lecture we consider essentially only one specific type of point process, which is motivated by the following desirable properties:

Stationarity: the distribution of the number of nodes falling in a region should be independent of any translation of that region.

Independence: disjoint regions should be independent, i.e. the number of points falling in one region should not depend on the number of nodes falling in the other one.

No accumulations: only a finite number of nodes should fall into a bounded area. The number should only depend on the size of the area.

A heuristic derivation of a point process with such properties yields that the number N of points falling in a region A of
