

In the document Theory and Applications of the Laplacian (pages 101-116)

Together with k ≥ √n this yields q ≤ 1/4.

The bounds for k can be improved if, instead of using the least upper bound of the inverse mapping of L − λ̂0 I, we assume that the right hand sides are independent random vectors and consider expected values. The lower bound for k then becomes proportional to the cube root of n. However, since many vectors may cancel out during the summation in (4.3), Theorem 4.4.1 is not tight. In fact, as can be seen in Fig. 4.2, λ2(t) is a smooth function also in some range outside these bounds.

Figures 4.2 to 4.4 also show that the smoothness property carries over to small worlds with p ≠ 0 to some extent. While it still holds quite well for small worlds up to p = 0.05, it is clearly no longer fulfilled for p = 0.4.

Lemma 4.4.2 For the sequence C0 := 1 and Cj+1 := Σ_{ℓ=0}^{j} Cℓ Cj−ℓ, j ≥ 0, it holds that Cj ≤ 4^j for all j ≥ 0.

Proof: The sequence Cj is well known as the Catalan numbers, whose generating function x ↦ (1 − √(1 − 4x))/(2x) delivers Cj = (1/(j+1)) · (2j choose j).

Now C0 = 1 ≤ 4^0 and Cj+1/Cj = 4(j + 1/2)/(j + 2) < 4 prove the lemma.
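The recurrence and the bound of Lemma 4.4.2 are easy to check numerically. A minimal sketch (the function name `catalan_numbers` is ours, not from the text):

```python
def catalan_numbers(m):
    """Return C_0, ..., C_m via the convolution recurrence
    C_{j+1} = sum_{l=0}^{j} C_l * C_{j-l}, starting from C_0 = 1."""
    c = [1]
    for j in range(m):
        c.append(sum(c[l] * c[j - l] for l in range(j + 1)))
    return c

cs = catalan_numbers(20)
# the bound of Lemma 4.4.2: C_j <= 4^j
assert all(cs[j] <= 4 ** j for j in range(21))
print(cs[:6])  # [1, 1, 2, 5, 14, 42]
```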

4.5 Application to Small Worlds

Watts and Strogatz [122] introduced a random graph model that captures some often-observed features of empirical graphs simultaneously: sparseness, local clustering, and small average distances. This is achieved by starting


Figure 4.2: Maximum, minimum and average norm of the coefficients of u2(t) of 100 samples of a small world with n = 100, p = 0 and (from upper left to upper right to lower left to lower right) k = 4, 8, 16, 32

from a cycle and connecting each vertex with its 2k nearest neighbors for some small, fixed k. The resulting graph is sparse and has a high clustering coefficient (average density of vertex neighborhoods), but also high (linear) average distance.

The average distance drops quickly when only a few edges are rewired randomly. If each edge is rewired independently with some probability p, there is a large interval of p in which the average distance is already logarithmic while the clustering coefficient is still reasonably high.
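The construction just described can be sketched in a few lines; `small_world` and its parameters are our naming, and the details of the rewiring step (e.g., how self-loops and duplicate edges are avoided) vary between implementations:

```python
import random

def small_world(n, k, p, seed=0):
    """Watts-Strogatz-style graph: a cycle where each vertex is joined
    to its 2k nearest neighbours, then every edge is independently
    rewired with probability p to a random new endpoint."""
    rng = random.Random(seed)
    edges = {(v, (v + j) % n) for v in range(n) for j in range(1, k + 1)}
    result = set()
    for (u, v) in sorted(edges):
        if rng.random() < p:
            w = rng.randrange(n)
            # avoid self-loops and edges already created
            while w == u or (u, w) in result or (w, u) in result:
                w = rng.randrange(n)
            result.add((u, w))
        else:
            result.add((u, v))
    return result
```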

4.5.1 Dynamic Laplacian layout

Interestingly, spectral layouts highlight the construction underlying the above model and thus point to the artificiality of generated graphs. This is due to the fact that spectral layouts of regular structures display their symmetry very well, and are usually only moderately disturbed by small perturbations in the graph (mirroring the argument for their use in dynamic layout). The initial ring structure of the small world in Fig. 4.11 is therefore still apparent, even though a significant number of chords have been introduced by random rewiring. In fact, the layout conveys very well which parts of the ring have been brought together by short-cut edges.


Figure 4.3: Maximum, minimum and average norm of the coefficients of u2(t) of 100 samples of a small world with n = 100, p = 0.05 and (from upper left to upper right to lower left to lower right) k = 4, 8, 16, 32

Figures 4.5 and 4.6 point out differences between the two approaches using intermediate layouts obtained from the power iteration and from matrix interpolation. It can be seen that the power iteration first acts locally around the changes. This stems from the fact that in the first multiplication only the neighborhood of the change, i.e., the two incident vertices of an edge with changed weight or the neighbors of a deleted or inserted vertex, is affected.

The next step also affects vertices at distance 2, and so on. Hence, the change spreads like a wavefront. The matrix interpolation approach, in contrast, acts globally at every step. Interpolating the Laplacian matrices corresponds to gradually changing edge weights. The animation is therefore much smoother.
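The matrix interpolation update can be sketched as follows, assuming the common shift gI − L (with g an upper bound on the spectrum) so that power iteration converges to the Fiedler vector; `layout_axis` and `interpolated_updates` are our illustrative names, not from the text:

```python
import numpy as np

def layout_axis(L, x0, g=None, iters=200):
    """Approximate the Fiedler vector u_2 of Laplacian L by power
    iteration on gI - L, re-orthogonalizing against the all-ones
    vector in every step."""
    n = L.shape[0]
    if g is None:
        g = 2 * L.diagonal().max()  # Gershgorin bound on the spectrum
    M = g * np.eye(n) - L
    one = np.ones(n)
    x = x0.copy()
    for _ in range(iters):
        x = M @ x
        x -= (x @ one) / n * one   # keep x orthogonal to 1
        x /= np.linalg.norm(x)
    return x

def interpolated_updates(L0, L1, x0, steps=5):
    """Physical update by matrix interpolation: iterate on
    (1 - t) L0 + t L1 for t = 1/steps, ..., 1, warm-starting each
    stage with the previous layout."""
    x, frames = x0, []
    for s in range(1, steps + 1):
        t = s / steps
        x = layout_axis((1 - t) * L0 + t * L1, x, iters=50)
        frames.append(x.copy())
    return frames
```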

Figures 4.7 and 4.8 show differences between simple linear interpolation of the positions and matrix interpolation. In Fig. 4.7 it can be seen that the symmetry of the graph about its vertical axis is not preserved during the animation, whereas in Fig. 4.8 each intermediate layout preserves this symmetry.

Figure 4.11 finally shows some snapshots of a small world evolving from the initial ring structure. The layouts were obtained by using matrix interpolation (one intermediate step per change shown). Note that deletion and insertion of vertices requires some extra effort, in particular if the deletion


Figure 4.4: Maximum, minimum and average norm of the coefficients of u2(t) of 100 samples of a small world with n = 100, k = 16 and (from upper left to upper right to lower left to lower right) p = 0, 0.05, 0.1, 0.4

of a vertex disconnects the graph.

4.5.2 Deletion and insertion of vertices

Consider deletion of a single vertex v that does not disconnect the graph. Matrix L(G(1)) is then expanded by one row and column of zeros corresponding to vertex v, such that L(G(0)) and L(G(1)) have the same dimension. This derived matrix has a double eigenvalue 0. A new corresponding eigenvector is, e.g., (0, …, 0, 1, 0, …, 0)^T, where the 1 is at the position corresponding to v.

This eigenvector will cause vertex v to drift away during power iteration, while all other vertices stick together. This can be prevented by defining Lv,v = g in matrix L(G(1)), leading to a movement of v towards 0. In practice, however, the following method proved successful: after every matrix multiplication, reset the position of v to the barycenter of its neighbors. This prevents both the drifting away and the absorption towards 0, which would otherwise be hard to manage. Apart from using matrix (1 − t)L(G(0)) + tL(G(1)) for the power iteration, orthogonalization and normalization also have to be adapted. For time t = 1 we only need x ⊥ (1, …, 1, 0, 1, …, 1) instead of x ⊥ 1, and only the restriction to the elements not corresponding to v has to be normalized. Both can be done by linear interpolation of these operations.
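The two adaptations just described, the barycenter reset and the modified orthogonalization/normalization at t = 1, might look as follows; the function names and exact masking are our sketch:

```python
import numpy as np

def reset_to_barycenter(x, v, neighbors):
    """After each matrix multiplication, put the deleted vertex v at
    the barycenter of its former neighbours, so it neither drifts away
    along the spurious eigenvector nor gets absorbed at 0."""
    x = x.copy()
    x[v] = x[neighbors].mean()
    return x

def orthonormalize_excluding(x, v):
    """At time t = 1: orthogonalize against (1, ..., 1, 0, 1, ..., 1)
    (zero at position v) and normalize only over the entries != v."""
    mask = np.ones(len(x))
    mask[v] = 0.0
    x = x - (x @ mask) / mask.sum() * mask
    return x / np.linalg.norm(x * mask)
```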

Figure 4.5: Update by iteration (read top left to top right to bottom right to bottom left). Note the spread of change along the graph structure

Insertion of a vertex v is treated analogously. Expand matrix L(G(0)) by one row and column of zeros as above. Orthogonalization and normalization again have to be adapted.

4.5.3 Disconnected graphs

The deletion of a cut vertex (or a bridge) disconnects G(1) into q ≥ 2 components G1, …, Gq. Each component is drawn separately by spectral methods, and afterwards these layouts are merged into a layout for G(1). Basically, there are three parameters for each component that have to be determined after a layout pj for each Gj has been computed. The first one is a suitable rotation angle ψj for each component. The second one determines the size of each component, i.e., find a constant sj that scales pj to sj pj. The third one determines the barycenter cj of each component.

The removal of a cut vertex (or a bridge) yields a matrix L(G(1)) that, after rearranging, consists of q blocks L1, …, Lq, which are Laplacian matrices of lower dimensions:

L(G(1)) = diag(L1, L2, …, Lq).

Figure 4.6: Update by interpolation. Layout anomalies are restricted to modified part of graph

Each of the components is now drawn separately, simply by the common power iteration of the whole matrix L(G(1)), where only normalization and orthogonalization have to be modified appropriately. The barycenter cj of each component thus is 0, but will be reset later. The goal of the rotation of a component is to minimize the difference between its old layout p = (x, y) and its new rotated layout p̄ = (x̄, ȳ), i.e., to find an angle ψj that minimizes |x − x̄|² + |y − ȳ|². Such problems are solved as part of a Procrustes analysis, where a singular value decomposition yields the optimal angle. Here, however, the angle can also simply be determined directly. The scaling factor is set to

sj := √νj · |Gj| / Σ_{v∈Gj} √(x(v)² + y(v)²),   νj := |Gj| / |G(1)|,

where |Gj| denotes the number of vertices of Gj. This entails that the average distance to the barycenter in component j is proportional to √νj. A circle around the barycenter whose radius is the scaled average distance then has area proportional to νj. This method works well in practice since components often are round-shaped. The main idea for arranging the components in the 2-dimensional plane is to place the barycenters on a circle around the origin with radius r. A sector is assigned to each component with angle proportional to its number of vertices, analogous to the scaling factor. For notational purposes identify the plane with complex numbers and reset the

Figure 4.7: Update by simple linear interpolation. Intermediate layouts are less symmetric

barycenters to

cj := r · exp( (2πi/ν) ( −νj/2 + Σ_{ℓ=1}^{j} νℓ ) ),   ν := Σ_{ℓ=1}^{q} νℓ.

The radius r is chosen as

r := max_{1≤j≤q}  max_{v∈Gj} d(v, cj) / sin(πνj/ν),

where d(v, cj) is the distance from v to the barycenter cj. This guarantees that overlapping is prevented.

Altogether, when removing a cut vertex, power iteration with modified orthogonalization/normalization is applied for the chosen breakpoints, and the components are rotated, scaled and moved linearly to their new barycenters. Further splitting and merging of connected components is handled analogously; see Fig. 4.9 for an example. Note that the outcome of a splitting/merging process is assigned the same amount of area and sector as before (under the assumption that there is only a slight change in the number of vertices). The rotation also helps in recognizing the structures, since unmodified components maintain their shape, size and sector.
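The placement step, sector angles proportional to νj, barycenters on a circle, and the radius chosen so that sectors contain their components, can be sketched as follows; `place_components` and its signature are our naming, with `radii[j]` standing in for max_{v∈Gj} d(v, cj):

```python
import cmath
import math

def place_components(sizes, radii):
    """Arrange q components around the origin: component j gets a
    sector proportional to nu_j = |G_j| / |G(1)| and its barycenter
    c_j = r * exp((2*pi*i/nu) * (-nu_j/2 + sum_{l<=j} nu_l)) on a
    circle whose radius r = max_j radii[j] / sin(pi*nu_j/nu) prevents
    overlapping sectors."""
    total = sum(sizes)
    nus = [s / total for s in sizes]
    nu = sum(nus)  # equals 1 here; kept to mirror the formula
    r = max(radii[j] / math.sin(math.pi * nus[j] / nu)
            for j in range(len(sizes)))
    centers, acc = [], 0.0
    for nj in nus:
        acc += nj  # running sum over nu_1, ..., nu_j
        angle = 2 * math.pi / nu * (acc - nj / 2)
        centers.append(r * cmath.exp(1j * angle))
    return r, centers
```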

Figure 4.8: Interpolation updates maintain symmetry

4.6 Conclusion

We have proposed a scheme for dynamic spectral layouts and applied it to changing small-world graphs. While there is no need to make special provisions for logical updates, it turns out that matrix interpolation is the method of choice for the physical update. Despite its simplicity, the scheme achieves both static layout quality and mental-map preservation, because it utilizes stability inherent in spectral layout methods. Much of the scheme directly applies to force-directed methods as well, and is in fact driven by common practices [40].

For both spectral and force-directed layout, update computations are rather efficient, since the preceding layouts are usually very good initializations for iterative methods. For large graphs, it will be interesting to generalize the approach to multilevel methods, possibly by maintaining (at least part of) the coarsening hierarchy and reusing level layouts for initialization.

In general, spectral layouts are not suitable for graphs with low connectivity, even in the static case. However, our dynamic approach is likely to work with any improved methods for static spectral layout as well.

Finally, Fig. 4.10 shows a screenshot of an interactive, stereoscopic 3D demo visualizing our methods. All involved computations can easily be handled, such that we obtained smooth animations even though the demo is written as an SVG file with embedded JavaScript, where 3D is only simulated, and viewed in a web browser.

Figure 4.9: Drawing connected components

Figure 4.10: Screenshot of a stereoscopic 3D demo for dynamic graph drawing

Figure 4.11: Evolution of a small world (read top to bottom, left to right)

Routing in Wireless Networks

We consider routing methods for networks in which geographic positions are available. Instead of using the original geographic coordinates, however, we precompute virtual coordinates using a barycentric embedding. Combined with simple geometric routing rules, this greatly reduces the lengths of routes and outperforms algorithms running on the original coordinates. Along with experimental results, we show guaranteed message delivery and give time bounds for the precomputation. Finally, a method with a less costly precomputation is introduced. Our methods apply to networks where short routes are of great importance and the one-time precomputation is affordable.

5.1 Introduction

Routing in a communication network, an undirected, finite, simple graph G = (V, E), denotes the task of sending a message from a source s ∈ V to a target t ∈ V. When no direct connection is available, this means forwarding the message on a path from s to t using intermediate vertices. While in wired networks the specific path is usually determined by routers, in wireless networks each vertex has to decide how to forward the message. Specific to geographic routing is the existence of geographic positions of the vertices.

It is assumed that each vertex v knows the position of t, its own position and the positions of all its neighbors. This information can be used for the search of an st-path. Among the simplest routing algorithms are, e.g., Greedy Routing (see, e.g., [111]) and Compass Routing [83]. Greedy Routing always forwards the message from a vertex v to its neighbor w closest to t.

Since w must be strictly closer to t than v (by definition), the method can get stuck in a deadlock. Compass Routing chooses the neighbor with least

(absolute) deviation angle, but again message delivery to t is not guaranteed. The first geographic routing algorithm with guaranteed delivery was Face Routing (originally called Compass Routing II [83]). Currently, the most efficient algorithm is GOAFR+ [85], which we therefore use as a reference. Our methods employ a mixture of Greedy and Compass Routing running not on the original geographic coordinates, but on precomputed virtual coordinates. This greatly reduces the lengths of routes and guarantees message delivery. Our choice of virtual coordinates is motivated by the theorem of W. T. Tutte [113] on barycentric embeddings that we already presented in Sect. 2.5. The improvement of route lengths is mainly due to the following observation: the number of deadlocks is reduced significantly in planar barycentric embeddings, since each face is convex. Hence, Greedy Routing, which heuristically delivers short routes, leads to t more often (see, e.g., [103]). In [100] Papadimitriou and Ratajczak conjecture that every 3-connected planar graph can be embedded in the Euclidean plane such that Greedy Routing is even always successful.

Surveys about geographic routing and virtual coordinates are given in corresponding chapters [126] and [49]. Our routing methods – presented in detail in Sect. 5.3 – with a brief summary of their properties are:

• BR (Barycentric Routing): very simple routing rules, good in practice for a certain density range of networks, guaranteed delivery,

• GBR (Greedy BR): very short routes for all densities, outperforming routing methods on geographic coordinates, guaranteed delivery,

• AGBR (Adaptive GBR): fixes worst-cases of GBR, guaranteed delivery,

• GBFR (GB Face Routing): less precomputation, guaranteed delivery.
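The two basic rules from the introduction, forwarding to the neighbor closest to t and forwarding to the neighbor with least absolute deviation angle, can be sketched as follows; the function names are ours, and the deadlock test is the "strictly closer" condition stated above:

```python
import math

def greedy_step(pos, nbrs, v, t):
    """Greedy Routing: forward to the neighbour closest to target t;
    return None when no neighbour is strictly closer (deadlock)."""
    d = lambda a, b: math.dist(pos[a], pos[b])
    best = min(nbrs[v], key=lambda w: d(w, t))
    return best if d(best, t) < d(v, t) else None

def compass_step(pos, nbrs, v, t):
    """Compass Routing: forward to the neighbour with least absolute
    deviation angle from the direction v -> t."""
    def deviation(w):
        a_t = math.atan2(pos[t][1] - pos[v][1], pos[t][0] - pos[v][0])
        a_w = math.atan2(pos[w][1] - pos[v][1], pos[w][0] - pos[v][0])
        # signed angle difference, wrapped into ]-pi, pi]
        return abs((a_w - a_t + math.pi) % (2 * math.pi) - math.pi)
    return min(nbrs[v], key=deviation)
```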

5.2 Preliminaries

We consider wireless networks modeled as unit disk graphs G = (V, E) whose vertices are embedded in the Euclidean plane R², where two vertices are adjacent iff their distance is not greater than 1. We assume that G is connected and that no two vertices are at the very same position. The number of vertices is denoted by n. Geographic routing uses these positions to route a message from a source vertex s to a target vertex t under the assumption that the coordinates of t, the coordinates of all its neighbors and its own position are known to each vertex v. The path along which a message is routed is called the message path. A mapping p: V → R² is called an embedding.
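The unit disk graph model is direct to construct from a point set; a minimal sketch (the function name is ours):

```python
import math

def unit_disk_graph(points):
    """Unit disk graph on the given plane points: vertices i, j are
    adjacent iff their Euclidean distance is at most 1."""
    n = len(points)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) <= 1.0]

pts = [(0, 0), (0.5, 0), (2, 0)]
print(unit_disk_graph(pts))  # [(0, 1)]
```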

Let R² be equipped with the usual topology T induced by the Euclidean norm. Each embedding p of a graph G naturally corresponds to a closed set G ⊆ R² (we will not distinguish between G and G), where the edges of G are drawn as straight lines between their incident vertices. We call an embedding p planar if its straight-line drawing G is planar. Instead of using the original geographic coordinates, which will be denoted x̂(v), ŷ(v), we compute virtual coordinates x(v), y(v) and z(v) during a precomputation phase of our routing methods. The Euclidean distance of two vertices v, w in original coordinates is d̂(v, w), in virtual coordinates d(v, w). (For distances in virtual coordinates the z-coordinate is always neglected.) The angle ϕt(v) is called the direction angle to the target vertex t, where ϕt(s) is defined to be 0 and for each further vertex v on a message path ϕt(v) is increased or decreased according to the angle at t to the predecessor on the path. Thus, ϕt(v) is an arbitrary real value and could even take a different value later (adding multiples of 2π) if the message path surrounds t and hits v again. The angle ψt(v, w) is called the deviation angle; it denotes the angle at v from t to w and is defined to take values in ]−π, π]. Note that all vertices w on the (open) right hand side of the (infinite) directed line →vt have negative deviation angle. During the precomputation phase, the restricted Gabriel graph GGG will be used, i.e., if the enclosing circle of an edge contains a vertex different from its two end vertices, the edge is removed from G. This condition can be checked locally and yields the planar embedded graph GGG, which is connected if G is connected.
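The local Gabriel test, dropping an edge when the circle having it as diameter encloses another vertex, can be sketched as follows; `gabriel_edges` is our naming, and for brevity it checks all vertices rather than only local candidates:

```python
import math

def gabriel_edges(points, edges):
    """Restricted Gabriel graph: keep an edge (u, v) only if no other
    vertex lies strictly inside the disk having the edge as diameter
    (the enclosing-circle condition, checkable locally)."""
    kept = []
    for (u, v) in edges:
        cx = (points[u][0] + points[v][0]) / 2
        cy = (points[u][1] + points[v][1]) / 2
        r = math.dist(points[u], points[v]) / 2
        if all(math.dist(points[w], (cx, cy)) >= r
               for w in range(len(points)) if w not in (u, v)):
            kept.append((u, v))
    return kept
```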

Each vertex v ∈ GGG maintains an ordered list (counter-clockwise) of its neighbors. To pass a message by the right hand rule denotes passing the message to the next neighbor in this list after the sender, if a sender exists. Otherwise, we will give some direction: to pass a message by the right hand rule directed to →vw denotes taking the next neighbor after →vw. Sometimes, virtual edges are added to GGG. These are edges that possibly do not exist in G and have to be realized as paths. However, by construction such virtual neighbors can easily be reached by sending the message by the right hand rule or left hand rule, which has to be specified for each virtual edge. When adding a virtual edge, the two incident vertices simply update their lists. Since virtual edges are always inserted within a face of GGG, planarity of GGG is never destroyed (although the straight-line embedding of GGG may then contain a crossing).

For simplicity, we call a graph that is a subdivision (replacing an edge by edge-vertex-edge) of a 3-connected graph also 3-connected. Moreover, we even allow cut pairs on the polygon C (see Theorem 5.3.1 for the polygon C).

The theorem of Tutte also holds for this class of graphs.
