Theory and Applications of the Laplacian


Theory and Applications of the Laplacian

Dissertation submitted for the academic degree Doktor der Naturwissenschaften (Doctor of Natural Sciences) at the Universität Konstanz, Fachbereich Informatik und Informationswissenschaft (Department of Computer and Information Science)

Submitted by Daniel Fleischer, September 2007

Date of the oral examination: 23 November 2007
1. Referee: Prof. Dr. U. Brandes, Universität Konstanz
2. Referee: Prof. Dr. D. Wagner, Universität Karlsruhe

Konstanzer Online-Publikations-System (KOPS)
URL: http://www.ub.uni-konstanz.de/kops/volltexte/2008/4625/


One of the fundamental notions of graph theory is an undirected, finite, simple graph G = (V, E). The elements of V = {v_1, ..., v_n} are called vertices, and E ⊆ {{v, w} : v, w ∈ V and v ≠ w} is a set of edges. The Laplacian matrix L = L(G) is an n×n matrix whose entries are defined by

    L_{j,k} := #{v_ℓ ∈ V : {v_j, v_ℓ} ∈ E}   if j = k ,
               −1                            if j ≠ k and {v_j, v_k} ∈ E ,
               0                             otherwise.

Like other matrices that can be associated with a given graph, the Laplacian matrix enables a fruitful interplay between graph theory and linear algebra: many properties of the graph are reflected in algebraic properties of its matrices. In Chapter 2 we review some well-known applications of the Laplacian matrix and provide theoretical foundations. In Chapters 3 to 6 we present four new applications. In these chapters, the Laplacian matrix L enters in the following three forms.

First, L appears in systems of linear equations of the form

    Lx = 0   (1)

with x ∈ R^n. We also consider Lx = ρ, the potential equation, for a given ρ ∈ R^n, or restrict ourselves to some rows of the system; see Sections 2.4 and 2.5 and Chapters 3 and 5.

Section 2.2 and Chapters 4 and 6 deal with the spectrum of L, which is given by eigenvalues λ_j and corresponding eigenvectors u_j ∈ R^n, where

    0 = λ_1 ≤ λ_2 ≤ ... ≤ λ_n ,   u_1 := (1/√n, ..., 1/√n)^T ,   and u_1, u_2, ..., u_n form an orthonormal basis of R^n.   (2)

Finally, in Section 2.5 and Chapters 4 and 5 we use the quadratic form

    x^T L x = (1/2) Σ_{{v_j, v_k} ∈ E} (x_j − x_k)² .   (3)


Similar equations exist for the Laplacian operator Δ := ∂²_{x_1} + ... + ∂²_{x_n}, which acts, e.g., on twice continuously differentiable functions f: R^n → R. It can be regarded as a continuous version of the Laplacian matrix and is introduced in Chapter 1. Many concepts, perhaps most importantly the Dirichlet problem, carry over from the continuous case to the finite case on graphs, which allows a more general view in both cases.

Contribution. The main contribution of this thesis consists of four new applications of the Laplacian matrix. They are presented in Chapters 3 to 6 and briefly summarized in the following paragraphs. The first two chapters present well-known properties and applications of the Laplacian operator and the Laplacian matrix. While the literature usually concentrates on either the continuous or the finite case, we systematically contrast the two and address the questions that are relevant for the chapters of Part II of this thesis. In addition, Section 2.3 presents some new results on the vertex bisection problem.

Contents of the individual chapters. The rough structure of this thesis is already indicated by its title. Nevertheless, Part I contains some applications, and Part II provides additional theoretical foundations. The topics treated come from many different areas, e.g., graph theory, algebra, partial differential equations, functional analysis, algorithmics, numerical analysis, elementary physics, and social networks. Each of these areas brings its own notation, and unfortunately the uses of many symbols overlap. On page 170 we explain our notation and give a list of the most frequently used symbols. Frequently used theorems, tools, and algorithms are collected in Appendix A. Parts of this thesis have already been published in [15, 13, 16, 17, 14].

Chapter 1: The Laplacian operator. In this chapter we introduce the Laplacian operator Δ and consider two partial differential equations: the potential equation and the wave equation, the former being more important for our applications. For both there exists a corresponding equation on graphs. This chapter lays the groundwork for the many comparisons between the continuous and the finite case in the next chapter.


Chapter 2: The Laplacian matrix. Many properties and concepts defined for the continuous case in the previous chapter, e.g., the Dirichlet problem, the mean value property, the maximum principle, fundamental solutions, Perron's method, Dirichlet's principle, and spectra, carry over to graphs, where the Laplacian matrix takes the role of the Laplacian operator. We compare the two cases and show, for example, how estimates for the quality of approximations can be derived from this comparison. Finally, we briefly mention further well-known applications.

Chapter 3: Network analysis. The first application in Part II of this thesis comes from the area of network analysis. Two well-known measures for a vertex v of a graph G are closeness c_C(v) and betweenness c_B(v). For these, variants current-flow closeness c_CC(v) and current-flow betweenness c_CB(v) can be defined that regard G as an electrical network, i.e., the edges are wires with a certain resistance or conductance. To compute the two measures one needs the effective resistance R_eff(s, t) between vertices s and t, as well as the throughput τ_{s,t}(v), i.e., the current that flows through v while a current of strength 1 flows from s to t. We improved the computation of c_CB from O(mn²) to O(M(n−1) + mn log n), where M(n−1) denotes the time for inverting an (n−1)×(n−1) matrix. The matrix L is sparse whenever G is sparse, so the conjugate gradient method lends itself to the matrix inversion; in practice it can be improved further by preconditioning.

Chapter 4: Dynamic graph drawing. A further application is the (dynamic) drawing of graphs. Using a vector x ∈ R^n that minimizes (3) as (one-dimensional) vertex positions corresponds to minimizing the sum of squared edge lengths, which is often used as a quality criterion for graph drawings. The minimization can be solved by an eigendecomposition of L. The constant eigenvector u_1 for the eigenvalue 0 is of no practical interest, since it places all vertices at the same position. Therefore, for R³, say, the next three eigenvectors u_2, u_3, u_4 (pairwise orthonormal) are used. For dynamically changing graphs, e.g., when G_2 arises from G_1 by inserting an additional edge, it is reasonable to use the corresponding eigenvectors u_2, u_3, u_4 of linearly interpolated matrices L(t) (where L(0), L(1) are the Laplacian matrices of the graphs G_1, G_2) to obtain a smooth animation. This holds in particular for the random graph model of small worlds, since there u_2(t), u_3(t), u_4(t) are given by power series whose higher coefficients decay quickly.


Chapter 5: Geographic routing in wireless networks. A third application is geographic routing in wireless networks, i.e., in a network of, say, minicomputers (vertices) with geographic positions in the plane, messages are to be sent from a vertex s to a vertex t. With the help of the Laplacian matrix, virtual positions can be computed: boundary vertices are first fixed at certain positions, and the remaining positions are then determined so that every inner vertex lies at the barycenter of its neighbors. These virtual positions can be computed iteratively and in a distributed manner, which corresponds exactly to a Jacobi iteration for solving systems of linear equations. On the virtual positions, very simple forwarding rules can then be given that guarantee message delivery and yield very short routes in practice.

Chapter 6: Graph coloring. Finally, the Laplacian matrix can also be used to color graphs, or structures from which a graph is easily derived, such as countries (vertices) on a map where neighboring countries are joined by an edge. Here, however, we do not consider the well-known combinatorial optimization problem, but concentrate on the two requirements

• to color adjacent vertices as differently as possible, and

• to use colors that are pairwise well distinguishable.

This problem can be viewed as a graph drawing problem into a color space. In contrast to the usual quality criteria, the two requirements here demand that edges be as long as possible and that all pairs of vertices keep a certain minimum distance from each other.


One of the fundamental ingredients in graph theory is an undirected, finite, simple graph G = (V, E), where the elements of V = {v_1, ..., v_n} are called vertices, and the elements of E ⊆ {{v, w} : v, w ∈ V and v ≠ w} are called edges. The Laplacian matrix L = L(G) is the n×n matrix defined by

    L_{j,k} := #{v_ℓ ∈ V : {v_j, v_ℓ} ∈ E}   if j = k ,
               −1                            if j ≠ k and {v_j, v_k} ∈ E ,
               0                             otherwise.
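To make the definition concrete, here is a minimal sketch (in Python with NumPy; the vertex labels 0, ..., n−1 and the helper name `laplacian` are our own) that builds L from an edge list:

```python
import numpy as np

def laplacian(n, edges):
    """Laplacian matrix of a simple undirected graph on vertices 0..n-1."""
    L = np.zeros((n, n))
    for j, k in edges:
        L[j, j] += 1    # diagonal entry: degree of v_j grows by one
        L[k, k] += 1    # diagonal entry: degree of v_k grows by one
        L[j, k] -= 1    # off-diagonal: -1 for the edge {v_j, v_k}
        L[k, j] -= 1
    return L

# Path graph 0 - 1 - 2: degrees are 1, 2, 1
L = laplacian(3, [(0, 1), (1, 2)])
```

By construction L is symmetric and every row sums to zero, which is why the constant vector lies in its kernel.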

Like other matrices associated with graphs, it enables a fruitful interplay between graph theory and linear algebra, where graph-theoretic properties of G are reflected by algebraic properties of one of its matrices. Well-known applications of the Laplacian matrix are reviewed in Chap. 2, and theoretical foundations are given. In Chaps. 3 to 6 we present four new applications of the Laplacian matrix. There, it is involved in the following three ways.

First, L appears in linear equations given by

    Lx = 0   (1)

for x ∈ R^n, or Lx = ρ, the potential equation, for some ρ ∈ R^n; or we consider only some rows of the equations, see Sects. 2.4 and 2.5 and Chaps. 3 and 5.

Section 2.2 and Chaps. 4 and 6 focus on the spectrum of L, given by eigenvalues λ_j and corresponding eigenvectors u_j ∈ R^n, such that

    0 = λ_1 ≤ λ_2 ≤ ... ≤ λ_n ,   u_1 := (1/√n, ..., 1/√n)^T ,   and u_1, u_2, ..., u_n is an orthonormal basis of R^n.   (2)

Finally, Sect. 2.5 and Chaps. 4 and 5 make use of the quadratic form

    x^T L x = (1/2) Σ_{{v_j, v_k} ∈ E} (x_j − x_k)² .   (3)
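A quick numerical check of (3) (a sketch; the graph and variable names are illustrative, and we read the sum over both orientations of each edge so that the factor 1/2 applies):

```python
import numpy as np

# Build L for a small graph, then compare x^T L x with
# (1/2) * sum over both orientations of each edge of (x_j - x_k)^2.
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
L = np.zeros((n, n))
for j, k in edges:
    L[j, j] += 1; L[k, k] += 1
    L[j, k] -= 1; L[k, j] -= 1

x = np.array([0.3, -1.2, 2.0, 0.5])
lhs = x @ L @ x
rhs = 0.5 * sum((x[j] - x[k])**2 + (x[k] - x[j])**2 for j, k in edges)
```

Both sides agree up to rounding, for any choice of x.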


Similar equations exist for the Laplacian operator Δ := ∂²_{x_1} + ... + ∂²_{x_n} acting on, e.g., twice continuously differentiable functions f: R^n → R. The Laplacian operator can be considered as a continuous version of the Laplacian matrix and is presented in Chap. 1. Many concepts, maybe most importantly the Dirichlet problem, carry over from the continuous case to the finite case on graphs. This offers a more general view in both cases.

Contribution. The main contribution of this thesis is four new applications involving the Laplacian matrix; they are given in Chaps. 3 to 6 of Part II and are briefly summarized below. Furthermore, the two chapters of Part I present well-known properties and applications of the Laplacian operator and the Laplacian matrix. While the literature mostly focuses either on the continuous or on the finite case, we contrast them subject to our needs for Part II, pointing out several similarities and essential differences. Furthermore, Sect. 2.3 presents some new results about vertex bisection.

Content of the chapters. The basic structure of this thesis is already given by its title. Nevertheless, there are several applications in Part I, while Part II also provides some additional theory. The topics we consider come from many different areas, e.g., graph theory, algebra, partial differential equations, functional analysis, algorithmics, numerical analysis, elementary physics, and social networks. All these areas come with their own standard notation, and unfortunately the use of symbols is not disjoint. So we had to choose between using symbols with multiple meanings and redefining traditionally reserved symbols (to avoid running out of letters). We decided in favor of the latter. See page 170 for some comments about our notation and a list of the most frequently used symbols. Basic theorems, tools, and algorithms that we apply frequently are given in Appendix A. Parts of this thesis have already been published in [15, 13, 16, 17, 14].

Chapter 1: The Laplacian Operator. This chapter introduces the Laplacian operator Δ and focuses on two partial differential equations: the potential equation and the wave equation, of which the first is more important for our purposes. Both have finite counterparts on graphs, and many results carry over to the finite case. With this chapter we provide the basis for many comparisons between the continuous and the finite case in the next chapter.


Chapter 2: The Laplacian Matrix. Many properties and concepts, e.g., the Dirichlet problem, the mean value property, the maximum principle, fundamental solutions, Perron's method, Dirichlet's principle, and spectra, that were defined for the continuous case in the last chapter carry over to the finite case on graphs, where the Laplacian matrix plays the role of the Laplacian operator. We compare the two cases and show how they can benefit from each other. Finally, we briefly mention some more applications.

Chapter 3: Network Analysis. The first application in Part II is from the area of network analysis. Two common measures for vertices v of a graph G are (shortest-path) closeness c_C(v) and betweenness c_B(v), for which there exist two variants, current-flow closeness c_CC(v) and current-flow betweenness c_CB(v), that consider G as an electrical network, i.e., edges are wires with a given resistance or conductance. These are defined by means of the effective resistance R_eff(s, t) between vertices s and t, and the throughput τ_{s,t}(v), i.e., the current that flows through v when a current of size 1 flows from s to t. We improved the computation of c_CB from O(mn²) to O(M(n−1) + mn log n), where M(n−1) is the time for inverting an (n−1)×(n−1) matrix. Since sparseness of G implies sparseness of L, methods like the conjugate gradient method (CGM) may be used for matrix inversion in practice, which leads to a running time of O(mn√κ + mn log n), where κ is the condition number of a submatrix of L, which may also be decreased by the use of a preconditioner.
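As an illustrative sketch (not the thesis' algorithm), the effective resistance can also be obtained from the standard identity R_eff(s, t) = (e_s − e_t)^T L^+ (e_s − e_t), where L^+ is the Moore–Penrose pseudoinverse of L; the graph and helper names below are our own:

```python
import numpy as np

# Triangle 0-1-2 with a pendant edge (2,3); unit conductances.
n, edges = 4, [(0, 1), (1, 2), (0, 2), (2, 3)]
L = np.zeros((n, n))
for j, k in edges:
    L[j, j] += 1; L[k, k] += 1
    L[j, k] -= 1; L[k, j] -= 1

Lplus = np.linalg.pinv(L)   # pseudoinverse; solving with CGM also works

def eff_resistance(s, t):
    e = np.zeros(n)
    e[s], e[t] = 1.0, -1.0
    return float(e @ Lplus @ e)
```

For the bridge (2, 3) the effective resistance is exactly 1; inside the triangle, the direct edge (1 Ω) in parallel with the two-edge path (2 Ω) gives R_eff(0, 1) = 2/3.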

Chapter 4: Dynamic Graph Drawing. Another application of the Laplacian is (dynamic) graph drawing. Using a vector x ∈ R^n that minimizes the left-hand side of (3) as one-dimensional vertex positions corresponds to minimizing the sum of squared edge lengths. The latter is often used as a quality criterion for drawings of G, and its minimization can be done by a spectral decomposition of L. Since the constant eigenvector u_1 corresponding to eigenvalue 0 is not of interest, because it would send all vertices to the same position, a good drawing usually uses a set of pairwise orthonormal eigenvectors, e.g., u_2, u_3, u_4 corresponding to the next smallest eigenvalues for a 3-dimensional drawing. For dynamically evolving graphs, e.g., when G_2 results from G_1 by inserting an additional edge, it is reasonable to use the corresponding eigenvectors u_2, u_3, u_4 of linearly interpolated matrices from L(G_1) to L(G_2) for a smooth animation sequence. This holds in particular for a random graph model called small worlds, where u_2, u_3, u_4 can be expressed by power series with small higher coefficients.
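The static layout step can be sketched in a few lines (the graph and variable names are illustrative): take the eigenvectors for the two smallest nonzero eigenvalues as 2-dimensional coordinates.

```python
import numpy as np

# Laplacian of a 6-cycle.
n = 6
edges = [(v, (v + 1) % n) for v in range(n)]
L = np.zeros((n, n))
for j, k in edges:
    L[j, j] += 1; L[k, k] += 1
    L[j, k] -= 1; L[k, j] -= 1

lam, U = np.linalg.eigh(L)   # eigenvalues ascending, columns orthonormal
pos = U[:, 1:3]              # skip the constant eigenvector u_1
```

For the cycle, the resulting drawing places all vertices on a common circle, as one would hope for a symmetric graph.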


Chapter 5: Geographic Routing in Wireless Networks. A third application is geographic routing in wireless networks, i.e., a set of small minicomputers (vertices) with geographic positions in the plane that can communicate with each other (edges) iff their distance is not greater than a given maximum transmission range. By means of the Laplacian matrix we can compute virtual positions, where perimeter vertices are fixed onto a strictly convex polygon, and inner vertices are set to the barycenter of their neighbors. These positions can be determined in an iterative and distributed manner that corresponds to a Jacobi iteration for solving linear equations. On these virtual positions, a simple set of routing rules guarantees message delivery and generates short routes in practice.
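The barycentric idea can be sketched on a toy instance (all names hypothetical; perimeter vertices pinned to a unit square, inner vertices updated simultaneously from the old positions, which is exactly a Jacobi sweep):

```python
import numpy as np

# Four pinned perimeter vertices and two inner vertices; each inner
# vertex is repeatedly moved to the barycenter of its neighbours.
pos = {0: np.array([0., 0.]), 1: np.array([1., 0.]),
       2: np.array([1., 1.]), 3: np.array([0., 1.]),
       4: np.array([0., 0.]), 5: np.array([0., 0.])}   # inner, arbitrary start
neighbours = {4: [0, 1, 5], 5: [2, 3, 4]}

for _ in range(200):   # Jacobi sweeps: all updates use the old positions
    new = {v: np.mean([pos[w] for w in nbrs], axis=0)
           for v, nbrs in neighbours.items()}
    pos.update(new)
```

The iteration converges to the unique solution of the pinned barycentric equations; here vertex 4 ends at (0.5, 0.25) and vertex 5 at (0.5, 0.75).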

Chapter 6: Coloring Graph-Like Data. Finally, the Laplacian matrix has an application in coloring graphs, or data that basically corresponds to a graph, e.g., countries (vertices) on a map, where adjacent countries are joined by an edge. However, we do not consider the usual combinatorial optimization problem, but instead focus on the two objectives that

• adjacent vertices should be colored as different as possible, and

• the used colors can be distinguished well.

This translates into a graph drawing problem in a color space with the unusual objective that edges should be as long as possible, while non-adjacent vertices should not be too close to each other.


Deutsche Zusammenfassung

Introduction

I Theory of the Laplacian

1 The Laplacian Operator ∆
1.1 Potential Equation
1.2 The Classical Dirichlet Problem
1.3 Fundamental Solutions
1.4 Dirichlet's Principle
1.5 Perron's Method
1.6 Iterative Methods
1.7 Wave Equation

2 The Laplacian Matrix L
2.1 Graphs and Matrices
2.2 Spectrum of the Laplacian Matrix L
2.3 Excursus: Edge and Vertex Bisection
2.4 Electrical Networks
2.5 The Dirichlet Problem on Graphs
2.6 The Wave Equation on Graphs
2.7 Finite Element Methods
2.8 Conclusion

II Applications of the Laplacian

3 Network Analysis
3.1 Introduction
3.2 Preliminaries
3.3 Current-Flow
3.4 Exact Computation
3.5 Approximation
3.6 Experimental Results
3.7 Conclusion

4 Dynamic Graph Drawing
4.1 Introduction
4.2 Preliminaries
4.3 Dynamic Layout
4.4 Updates
4.5 Application to Small Worlds
4.6 Conclusion

5 Routing in Wireless Networks
5.1 Introduction
5.2 Preliminaries
5.3 Barycentric Routing
5.4 Theoretical Results
5.5 Experimental Results
5.6 Conclusion

6 Color Assignment of Vertices
6.1 Introduction
6.2 Preliminaries
6.3 Initialization
6.4 Refinement
6.5 Applications
6.6 Conclusion

7 Summary

Bibliography

A Appendix
A.1 Basic Theorems
A.2 Multidimensional Scaling
A.3 Jacobi and Gauß-Seidel Iteration
A.4 Conjugate Gradient Method
A.5 Continued Fractions

Glossary of Symbols

Index


Part I: Theory of the Laplacian


Chapter 1: The Laplacian Operator ∆

In this chapter we give a survey of the continuous version of the Laplacian (matrix), the Laplacian operator Δ. Our main goal is to introduce results that carry over to the finite case of the Laplacian matrix L, which will be introduced in Chap. 2. Whenever possible without too much effort, we give proofs or proof sketches on our way to these results. This chapter is not necessary for the understanding of the chapters of Part II, but it allows a more general view of the results there.

First, we present classical results from potential theory, i.e., the study of functions f: Ω ⊆ R^n → R satisfying the partial differential equation (PDE)

    −Δf = ρ   (potential equation),   (1.1)

for some ρ: Ω → R, where, often, some boundary conditions on ∂Ω are present. Secondly, we present classical results concerning the wave equation, i.e., the study of functions f: R × (Ω ⊆ R^n) → R, (t, x) ↦ f(t, x), satisfying the PDE

    Δf = ∂²_t f   (wave equation),   (1.2)

where boundary conditions f|_∂Ω = g may also be present. For both PDEs, a finite analog is given in the following Chap. 2. The canonical example for the wave equation is a thin (2-dimensional) vibrating membrane that is fixed to zero on the boundary ∂Ω, i.e., g = 0. This example can be solved with a spectral decomposition of the Laplacian operator Δ, which is the main reason why we consider this equation. The two equations (1.1) and (1.2) are connected by the fact that the inhomogeneous wave equation, i.e., with boundary conditions g ≠ 0, can be solved by means of the homogeneous wave equation and the corresponding potential equation with boundary condition g, see Sect. 1.7. The next section starts directly with an introductory example for the potential equation without spending much time on definitions; these will be given in full detail from Sect. 1.2 on.


1.1 Potential Equation

The potential equation arises naturally in physics, as in the following introductory example from [86]. Consider two point masses (i.e., with volume 0) or two point electric charges at positions x, 0 ∈ R³ with x ≠ 0. Neglecting all other relevant effects (of relativistic or quantum nature), there is either a gravitational or an electrical force

    F(x) = c_1 · x / |x|³ ,   (1.3)

for some constant c_1 depending on the two masses or the two electric charges, respectively. For masses, (1.3) is known as Newton's law of gravitation, and for electric charges, (1.3) is known as Coulomb's law. The partial derivatives of F are

    ∂_j F_k = c_1 ( δ_{jk} / |x|³ − 3 x_j x_k / |x|⁵ ) ,

where δ_{jk} is the Kronecker symbol. Consider the set Ω := R³ \ {0}. On Ω,

    div F = 0   and   rot F = 0 .   (1.4)

If we now move the particle at x along a continuously differentiable curve γ: [0, 1] → Ω, the energy needed for this movement within the vector field F is given by

    ∫₀¹ (−F(γ(t)))^T γ′(t) dt ,

where γ′(t) is the tangent vector of γ at γ(t). Now, because Ω is simply connected and rot F = 0, the energy needed for a movement along a closed curve γ is zero (see also Theorem A.1.6). Hence, the energy depends only on the end points of γ, and for a fixed point x_0 ≠ 0 we obtain a well-defined potential U (which, for now, is just a function whose differences express an energy) on Ω by

    U(x_0) := c_1 / |x_0|   and   U(x) := U(x_0) + ∫₀¹ (−F(γ_x(t)))^T γ_x′(t) dt = c_1 / |x| ,   (1.5)

where γ_x is an arbitrary continuously differentiable curve in Ω with end points γ_x(0) = x_0 and γ_x(1) = x. Thus, the energy needed along a curve in Ω from some


point x_1 to a point x_2 can simply be expressed as U(x_2) − U(x_1). Independent of the choice of x_0 ∈ Ω = R³ \ {0}, we obtain from (1.4) on Ω

    −ΔU = −div ∇U = div F = 0 .   (1.6)

A function U that fulfills this special type of potential equation (1.6) on Ω is called a harmonic function. For the given force F, it is unique up to an additive constant. It becomes unique if we fix this additive constant, e.g., by defining U(x_0) as done above. This can already be seen as a boundary condition (at x_0), but this is not the common use of boundary conditions.
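As a quick numerical sanity check (a sketch; the step size and test point are arbitrary choices of ours), one can verify ΔU = 0 for U(x) = c_1/|x| away from the origin using second-order central differences:

```python
import numpy as np

c1 = 1.0
U = lambda x: c1 / np.linalg.norm(x)

def laplacian_fd(f, x, h=1e-3):
    """Central-difference approximation of (Delta f)(x)."""
    total = 0.0
    for j in range(len(x)):
        e = np.zeros(len(x)); e[j] = h
        total += (f(x + e) - 2.0 * f(x) + f(x - e)) / h**2
    return total

x = np.array([0.7, -0.3, 0.5])   # any point of R^3 away from 0
val = laplacian_fd(U, x)         # should be close to 0
```

The residual is of order h², consistent with U being harmonic on R³ \ {0}.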

In general, let Ω be a domain in R^n, and let g: ∂Ω → R be a continuous function on the boundary of Ω (we will later discuss details about the shape of ∂Ω, which is essential for solvability). Now determine a function f: Ω̄ → R such that

    Δf = 0 on Ω   and   f = g on ∂Ω .

If we furthermore require that f ∈ C²(Ω) ∩ C⁰(Ω̄), where C^k(Ω) is the set of k-times continuously differentiable functions on Ω, this is called the classical Dirichlet problem (with Dirichlet boundary conditions¹), which will be discussed in the next section. To turn our example into a classical Dirichlet problem we can proceed as follows. Let Ω := R^n \ B(0, ε), where B(0, ε) is the ball with radius ε > 0 and center 0, and let g := c_1/ε on ∂Ω. The function U given above, restricted to Ω, solves this classical Dirichlet problem. But it is not unique: for example, the constant function f = c_1/ε is also a solution. This is typical for unbounded domains Ω, where the behavior of the solution for |x| → ∞ is not determined. We will later see that this cannot occur for bounded domains, since uniqueness immediately follows from the maximum principle, see Lemma 1.2.9. We very briefly sketch how to obtain uniqueness in our example as well. Afterwards, having in mind that we will later carry over some results to finite graphs, we restrict ourselves to bounded domains Ω. The Kelvin transform K_r for r > 0 in R^n is given by

    (K_r f)(x) := |x|^{2−n} f((r/|x|)² · x)   for x ≠ 0   (Kelvin transform),   (1.7)

where the mapping x ↦ (r/|x|)² · x represents a reflection at the boundary of the ball B(0, r). The Kelvin transform can be used to transform a harmonic function onto another domain as follows. Let f_in be the unique solution of the classical Dirichlet problem on Ω_in := B(0, ε). One can show

¹ Boundary conditions may also be formulated in terms of given normal derivatives of f on the boundary. These are called Neumann boundary conditions.


that the Kelvin transform f := K_ε f_in is a (unique) solution of the classical Dirichlet problem on Ω. Because f_in(0) is defined, f(x) ∈ O(|x|^{−1}) and ∇f(x) ∈ O(|x|^{−2}). The other direction also holds, i.e., if we assume the latter two asymptotic behaviors, then f is unique. Finally, to conclude our introductory example and to return to the general potential equation (1.1), which is actually the reason why we call U a potential (see Def. 1.2.3), we state that in the sense of distributions (see Def. 1.3.5)

    −ΔU = c_2 δ   (1.8)

for a constant c_2, where δ is the Dirac distribution, see Def. 1.3.6. The general form of (1.8) is one of Maxwell's equations (in isotropic media without electric polarization), known as Gauß' law: for some c_3 ∈ R,

    div E = c_3 ρ   (Gauß' law),   (1.9)

where E denotes the electric field (corresponding to a constant times F) and ρ denotes an electric charge density (here, ρ is a constant times δ).

1.2 The Classical Dirichlet Problem

We consider R^n with its canonical topology T induced by the Euclidean norm |·|. Expressions like neighborhood, connected, boundary, etc., are all used with respect to T. In the following, let Ω ⊆ R^n always denote a domain, i.e., an open and connected set, and let Ω̄, ∂Ω denote the closure and boundary of Ω. We write f|_∂Ω = g if f = g on ∂Ω. Let B(x_0, ε) := {x ∈ R^n : |x − x_0| < ε} and B(x_0, r_1, r_2) := {x ∈ R^n : r_1 < |x − x_0| < r_2}. For details exceeding this survey, see some standard references, e.g., [61, 86].

Definition 1.2.1 Let C(Ω) denote the continuous, real functions on Ω. Analogously, let C^k(Ω) denote the class of k-times continuously differentiable, real functions, where C⁰(Ω) := C(Ω). For f(x_1, ..., x_n) ∈ C²(Ω) let

    ∇f := (∂_1 f, ..., ∂_n f)^T := (∂_{x_1} f, ..., ∂_{x_n} f)^T := (∂f/∂x_1, ..., ∂f/∂x_n)^T

and

    Δf := ∂_1² f + ... + ∂_n² f .

We write, e.g., f ∈ C²(Ω) ∩ C(Ω̄) to denote that f: Ω̄ → R is continuous on Ω̄ and twice continuously differentiable on Ω.


Definition 1.2.2 A function f ∈ C²(Ω) is called harmonic if Δf = 0 on Ω. If Δf = 0 holds only on some subset M of its domain, we call f harmonic on M. Closely related is the term potential (function).

Definition 1.2.3 A function U ∈ C²(Ω) (or a distribution) is called a potential for a density ρ, i.e., a function ρ ∈ C(Ω) (or a distribution), if the potential equation −ΔU = ρ holds (in the sense of distributions, see Def. 1.3.5).

Hence, any harmonic function is a potential for the density ρ = 0, and any potential U is a harmonic function on the set where ΔU vanishes, e.g., (1.8).

We need one more function space to specify a type of boundary of Ω. First, call β := (β_1, β_2, ..., β_n) ∈ N^n a multi-index. For higher derivatives we write ∂^β := ∂_1^{β_1} ∂_2^{β_2} ··· ∂_n^{β_n} and |β| := Σ_{j=1}^n β_j.

Definition 1.2.4 For f: Ω → R and 0 < α ≤ 1, let

    C^{k,α}(Ω) := {f ∈ C^k(Ω) : höl_α(∂^β f, Ω) < ∞ for all |β| = k} ,   where   höl_α(f, Ω) := sup_{x,y ∈ Ω} |f(x) − f(y)| / |x − y|^α .

To define smoothly bounded domains, we write ∂Ω ∈ C^{2,α} if there exist 0 < α ≤ 1 and, for each x ∈ ∂Ω, a ball B := B(x, ε) and a one-to-one map ψ: B → M ⊆ R^n such that ψ(B ∩ Ω) ⊆ R^n_+, ψ(B ∩ ∂Ω) ⊆ ∂R^n_+, and ψ ∈ C^{2,α}(B), ψ^{−1} ∈ C^{2,α}(M), where ψ ∈ C^{2,α}(B) means that all n components of ψ are in C^{2,α}(B), see [61]. In the following, we use the Lebesgue integral.

Theorem 1.2.5 (Integration by parts) Let Ω ⊆ R^n be a bounded domain with sufficiently smooth boundary, e.g., ∂Ω ∈ C^{2,α}. Let f ∈ C¹(Ω̄) and let F: Ω̄ → R^n be continuously differentiable. Then

    ∫_Ω f(x) div F(x) dx = ∫_∂Ω f(x) (F(x))^T n(x) dx − ∫_Ω (∇f(x))^T F(x) dx ,

where n(x) is the unit normal vector on the boundary ∂Ω pointing outwards.

Note that integration by parts will often be applied to functions for which the integral over ∂Ω vanishes. If f = 1, we obtain the well-known Gauß integration theorem as a special case. In the following, let ∂_n f := (∇f)^T n denote the derivative with respect to the unit normal vector n, i.e., the normal derivative.

Theorem 1.2.6 (Green's second identity, special case) For a domain Ω := B(0, r_1, r_2) and f_1, f_2 ∈ C²(Ω) ∩ C(Ω̄),

    ∫_{r_1 < |x| < r_2} (f_1 Δf_2 − f_2 Δf_1) dx = ∫_{|x| = r_1, r_2} (f_1 ∂_n f_2 − f_2 ∂_n f_1) dx .


For every harmonic function f, taking f_2 = 1 and r_1 = 0 yields

    ∫_{|x|=r} ∂_n f dx = 0 .   (1.10)

Proof:

    ∫_{|x|=r_1,r_2} (f_1 ∂_n f_2 − f_2 ∂_n f_1) dx = ∫_{r_1<|x|<r_2} (div f_1 ∇f_2 − div f_2 ∇f_1) dx
                                                  = ∫_{r_1<|x|<r_2} (f_1 Δf_2 − f_2 Δf_1) dx .

Lemma 1.2.7 (Mean value property) If f is harmonic on Ω ⊇ B(x_0, r), then

    f(x_0) = 1/(r^{n−1} ω_n) ∫_{|x−x_0|=r} f(x) dx ,

where ω_n := 2π^{n/2}/Γ(n/2) (ω_n = 2, 2π, 4π, 2π², ... for n = 1, 2, ...) is the area of the unit sphere surface in R^n (Γ is Euler's Gamma function).

Proof: W.l.o.g. let x_0 = 0 and r = 1. First, consider n ≥ 3, and let 0 < ε < 1. The function f_2(x) := |x|^{2−n} is harmonic on B(0, ε, 1), since for 1 ≤ j ≤ n

    ∂_j² |x|^{2−n} = (2−n) ( 1/|x|^n − n x_j²/|x|^{n+2} ) .   (1.11)

Hence, with f_1 = f, Green's second identity yields

    0 = ∫_{|x|=ε,1} ( f(x) ∂_n |x|^{2−n} − |x|^{2−n} ∂_n f(x) ) dx
      = (2−n) ∫_{|x|=1} f(x) dx − (2−n) ε^{1−n} ∫_{|x|=ε} f(x) dx − ∫_{|x|=1} ∂_n f(x) dx − ε^{2−n} ∫_{|x|=ε} ∂_n f(x) dx .

Since f is harmonic, by (1.10) the last two terms are zero. Thus,

    ∫_{|x|=1} f(x) dx = ε^{1−n} ∫_{|x|=ε} f(x) dx .


Since f is continuous at 0, the right-hand side tends to ω_n f(0) for ε → 0, which proves the lemma for n ≥ 3. For n = 2, we use the same method with f_2(x) := ln |x|; the case n = 1 is trivial.
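A numerical illustration of the mean value property (a sketch with an arbitrarily chosen harmonic function and circle): for f(x, y) = x² − y², which is harmonic in R², the average over any circle equals the value at its center.

```python
import numpy as np

f = lambda x, y: x**2 - y**2   # harmonic: f_xx + f_yy = 2 - 2 = 0
x0, y0, r = 0.4, -0.2, 0.9

# Average f over an equispaced sampling of the circle of radius r.
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
avg = np.mean(f(x0 + r * np.cos(theta), y0 + r * np.sin(theta)))
```

For this polynomial, the discrete average even agrees with f(x_0, y_0) up to rounding, since the trigonometric sums cancel exactly.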

In the last section, we already mentioned the classical Dirichlet problem.

Problem 1.2.8 (Classical Dirichlet problem) Let Ω ⊆ R^n be a domain and g ∈ C(∂Ω). Determine a harmonic function f ∈ C²(Ω) ∩ C(Ω̄) with f|_∂Ω = g.

If Ω is bounded (i.e., Ω ⊆ B(0, r) for some r > 0), then a solution f of the classical Dirichlet problem is unique. To prove this, we need the maximum principle. We say that a function f: Ω → R has a local maximum at x_0 ∈ Ω if there exists ε > 0 such that f(x) ≤ f(x_0) for all x ∈ B(x_0, ε) ⊆ Ω.

Lemma 1.2.9 (Maximum principle) Let Ω ⊆ R^n be a domain and f ∈ C²(Ω) harmonic. If f has a local maximum, then f is constant. If Ω is bounded and f can be extended continuously onto ∂Ω, then the maximum is attained on ∂Ω.

The lemma also holds with maximum replaced by minimum, yielding the minimum principle, which becomes obvious from the proof.

Proof: Let x_0 be a local maximum of f. Define

    M := {x ∈ Ω : f(x) = f(x_0)} .

First, we prove that M is an open set. Let x_1 ∈ M and B(x_1, ε) ⊆ Ω. Assume there exists an x_2 ∈ B(x_1, ε) with f(x_2) < f(x_0), and let r := |x_2 − x_1|. Then, because f is continuous,

    f(x_1) = 1/(r^{n−1} ω_n) ∫_{|x−x_1|=r} f(x) dx < f(x_1) ,

a contradiction. Thus, B(x_1, ε) ⊆ M, and M is open. Secondly, M is closed (with respect to Ω), because for each sequence in M that converges in Ω, the limit also lies in M by the continuity of f. Since Ω is connected, M = Ω. If Ω is bounded, the maximum is attained on ∂Ω because Ω̄ is compact, so f ∈ C(Ω̄) has a maximum on Ω̄. If f is not constant, the maximum is attained only on ∂Ω, and not on Ω.

Corollary 1.2.10 If Ω is bounded, a solution of a classical Dirichlet problem is unique.

Proof: Let f_1 and f_2 be two solutions of a given classical Dirichlet problem (Ω, g), with f_1|_∂Ω = f_2|_∂Ω = g. By the linearity of Δ, the differences f_1 − f_2 and f_2 − f_1 are both solutions of the classical Dirichlet problem (Ω, 0), and they vanish on ∂Ω. Thus, f_1(x) − f_2(x) = f_2(x) − f_1(x) = 0 for all x ∈ Ω by the maximum principle.

From now on, we will restrict ourselves to bounded domains Ω.


1.3 Fundamental Solutions

In this section we want to construct solutions of the classical Dirichlet problem by methods similar to those known from complex analysis, where the value f(z) of a given holomorphic function f is already determined by its values on the boundary of an arbitrary disk. More precisely:

Theorem 1.3.1 (Cauchy's integral formula) Let Ω ⊆ C be a (complex) domain and f: Ω −→ C be holomorphic. If B(z0, r) ⊆ Ω, then

    ∀z ∈ B(z0, r)   f(z) = 1/(2πi) ∮_{|ζ−z0|=r} f(ζ)/(ζ−z) dζ ,

where the boundary is parameterized counter-clockwise.

Note that the value f(z) is given by integration over the fundamental function ζ ↦ (ζ−z)^{−1}. Before we can apply this concept to harmonic functions, we need some more basics; in particular, we have to define distributions.

Definition 1.3.2 Let C∞(Ω) denote the functions in ⋂_{k∈N} Ck(Ω). Furthermore,

    supp f := closure of {x ∈ Ω : f(x) ≠ 0}   (support of f),
    M ⋐ Ω  :⇐⇒  M ⊆ Ω and M̄ is compact,
    C0^k(Ω) := {f ∈ Ck(Ω) : supp f ⋐ Ω}   for k ∈ N ∪ {∞} .

Functions in C0∞(Ω) are called test functions. A member ϕ ∈ C0∞(Ω) for B(0, 1) ⊆ Ω is, e.g.,

    ϕ(x) := exp(−1/(1 − |x|²))   if |x| < 1 ,
            0                    otherwise.

By defining ϕε(x) := ϕ(x/ε) · c/ε^n, where c is chosen such that ∫_{Rn} ϕε(x) dx = 1, we obtain Friedrichs' mollifier. Let f: M ⊆ Rn −→ R be Lebesgue measurable.
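As a quick numerical illustration (not part of the original text), the following Python sketch builds the one-dimensional mollifier ϕε from the test function above and checks that ∫ f·ϕε tends to f(0) as ε → 0, foreshadowing the Dirac distribution defined below. The names `phi`, `integrate`, and `phi_eps` are our own choices.

```python
import math

def phi(x):
    # test function from Def. 1.3.2 for n = 1
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def integrate(f, a, b, m=20000):
    # simple midpoint rule
    h = (b - a) / m
    return sum(f(a + (k + 0.5) * h) for k in range(m)) * h

c = 1.0 / integrate(phi, -1.0, 1.0)   # normalization: each phi_eps integrates to 1

def phi_eps(x, eps):
    # Friedrichs' mollifier in dimension n = 1
    return phi(x / eps) * c / eps

f = math.cos                          # any continuous f; here f(0) = 1
for eps in (0.5, 0.1, 0.02):
    val = integrate(lambda x: f(x) * phi_eps(x, eps), -eps, eps)
    print(eps, val)                   # tends to f(0) = 1 as eps shrinks
```

The printed values approach f(0) = 1, which is exactly the behavior that makes the mollifier useful in the non-regularity proof for δ below.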

Definition 1.3.3 Lebesgue spaces Lp(M) for 1 ≤ p < ∞ are given for functions f: M −→ R by means of the norm

    |f|_{Lp(M)} := ( ∫_M |f(x)|^p dx )^{1/p} ,

and then

    Lp(M) := {f: M −→ R : |f|_{Lp(M)} < ∞} ,
    Lp_loc(M) := {f: M −→ R : ∀M′ ⋐ M   f ∈ Lp(M′)} .


Actually, we consider, as usual, quotient spaces of these Lebesgue spaces, i.e., modulo functions that differ only on a set of Lebesgue measure zero (but we use the same symbols). If an equivalence class contains, e.g., a continuous function, we may also implicitly identify the class with this function. Note that L2(M) is a Hilbert space with respect to the inner product

    (f, g)_{L2(M)} := ∫_M f(x)g(x) dx ,

while for 1 ≤ p < ∞ all Lp(M) are Banach spaces with respect to | · |_{Lp(M)}. Also note that C0∞(Ω) is dense in L2(Ω) with respect to the L2-norm. Let D(Ω) := C0∞(Ω) be endowed with the following concept of convergence.

Definition 1.3.4 A sequence ϕj ∈ C0∞(Ω) is called a D-zero sequence if there is M ⋐ Ω with supp ϕj ⊆ M for all j ∈ N and, for every β ∈ N^n, sup_{x∈Ω} |∂^β ϕj(x)| −→ 0 as j → ∞.

Definition 1.3.5 A linear functional D(Ω) −→ R is called D-continuous if D-zero sequences are mapped to R-zero sequences. Let D′(Ω) be the space of D-continuous linear functionals. Its elements are called distributions.

Each distribution T ∈ D′(Ω) for which there exists a function f ∈ L1_loc(Ω) with T(ϕ) = ∫_Ω f(x)ϕ(x) dx for all test functions ϕ ∈ C0∞(Ω) is called regular. We may identify a function f ∈ L1_loc(Ω) with the corresponding distribution. A non-regular distribution is, e.g., the Dirac distribution δ.

Definition 1.3.6 Let 0 ∈ Ω. The Dirac distribution δ: C0∞(Ω) −→ R is defined by δ(ϕ) := ϕ(0).

We briefly show that δ is not regular. Hence, we cannot avoid the use of distributions for Def. 1.3.7.

Proof: Assume there exists f ∈ L1_loc(Ω) with

    ∀ϕ ∈ C0∞(Ω)   δ(ϕ) = ∫_{x∈Ω} f(x)ϕ(x) dx .

Let ε > 0 with ∫_{|x|<ε} |f(x)| dx < 1 (and B(0, ε) ⊆ Ω), and let ϕε ∈ C0∞(Ω) be Friedrichs' mollifier. Then we obtain the contradiction

    ϕε(0) = ∫_{x∈Ω} f(x)ϕε(x) dx ≤ ϕε(0) ∫_{|x|<ε} |f(x)| dx < ϕε(0) .

Now we can define fundamental solutions.


Definition 1.3.7 A function f ∈ L1_loc(Rn) is called a fundamental solution in Rn if −∆f = δ in the sense of distributions.

In fact, we have already implicitly used fundamental solutions in the proof of the mean value property, Lemma 1.2.7, and in Sect. 1.1 we already saw a fundamental solution for R3, a constant times |x|^{−1}.

Theorem 1.3.8 The function

    hn(x) := (1/ωn) · { −ln|x|               if n = 2 ,
                        (1/(n−2)) |x|^{2−n}  if n ≠ 2

is a fundamental solution in Rn.

Proof: Clearly, hn ∈ L1_loc(Rn). Because we will apply the Green identity, Theorem 1.2.6, note that hn is harmonic on Rn \ {0}; we already showed this for n ≥ 3 in (1.11), and it also holds for n = 1, 2 with the same right hand side of (1.11). Consider the distribution corresponding to ∆hn. For ϕ ∈ C0∞(Rn) and ε > 0,

    ∆hn(ϕ) = ∫_{Rn} hn(x)∆ϕ(x) dx = ∫_{|x|>ε} hn(x)∆ϕ(x) dx + ∫_{|x|<ε} hn(x)∆ϕ(x) dx
    (∗)= ∫_{|x|=ε} ( hn(x)∂nϕ(x) − ϕ(x)∂nhn(x) ) dx + O(ε)
       = O(ε) + ∫_{|x|=ε} −ϕ(x)∂nhn(x) dx + O(ε) ,

where (∗) is an application of the Green identity on the annulus B(0, ε, r), with r chosen such that the support of ϕ is contained. The notation O(ε) here denotes the asymptotic behavior for ε → 0. In the last integral, the term ∂nhn is equal to ε^{1−n}/ωn. Hence, we obtain

    ∆hn(ϕ) = lim_{ε→0} ∫_{|x|=ε} −(ε^{1−n}/ωn) · ϕ(x) dx = −ϕ(0) = −δ(ϕ) .
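The harmonicity of hn away from the origin can also be observed numerically. The following sketch (our illustration; the helper names are invented) applies a central-difference Laplacian to h3(x) = 1/(4π|x|) at a point x ≠ 0 and obtains a value close to zero.

```python
import math

def h3(x, y, z):
    # fundamental solution in R^3: (1/((n-2) omega_n)) |x|^{2-n} = 1/(4 pi |x|)
    return 1.0 / (4.0 * math.pi * math.sqrt(x * x + y * y + z * z))

def laplacian(f, p, d=1e-3):
    # 7-point central-difference approximation of Delta f at p
    x, y, z = p
    return (f(x + d, y, z) + f(x - d, y, z)
            + f(x, y + d, z) + f(x, y - d, z)
            + f(x, y, z + d) + f(x, y, z - d) - 6.0 * f(x, y, z)) / (d * d)

lap = laplacian(h3, (0.7, -0.4, 0.5))
print(lap)   # ≈ 0: h3 is harmonic away from the origin
```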

Now we can state the first main theorem of this section, using the fundamental solution hn.

Theorem 1.3.9 Let Ω ⊆ M ⊆ Rn for a domain M, let ∂Ω ∈ C2,α, and let f ∈ C2(M) be harmonic. Then

    ∀x ∈ Ω   f(x) = ∫_{∂Ω} ( hn(ξ−x)∂nf(ξ) − f(ξ)∂nξhn(ξ−x) ) dξ ,

where ∂nξ is the normal derivative with respect to ξ.


Proof: Let ϕ ∈ C0∞(Ω) with ϕ = 1 in a neighborhood of x ∈ Ω. Since ∆hn(· − x) vanishes where ϕ ≠ 1, integration by parts yields

    0 = ∫ (ϕ(ξ) − 1) f(ξ) · ∆hn(ξ−x) dξ
      = − ∫ ∇( (ϕ(ξ) − 1) f(ξ) )^T ∇hn(ξ−x) dξ + ∫_{∂Ω} (ϕ(ξ) − 1) f(ξ) · ∂nξhn(ξ−x) dξ
      = ∫ ∆( (ϕ(ξ) − 1) f(ξ) ) · hn(ξ−x) dξ + ∫_{∂Ω} ( −f(ξ) · ∂nξhn(ξ−x) − hn(ξ−x) · ∂n( (ϕ(ξ) − 1) f(ξ) ) ) dξ
      = ∫ ϕ(ξ)f(ξ) · ∆hn(ξ−x) dξ + ∫_{∂Ω} ( hn(ξ−x) · ∂nf(ξ) − f(ξ) · ∂nξhn(ξ−x) ) dξ ,

using ϕ = 0 near ∂Ω and ∆f = 0 in the last step. There, the first integral is to be read in the sense of distributions and equals −ϕ(x)f(x) = −f(x), since −∆hn(· − x) = δ(· − x); this proves the claim.

Unfortunately, apart from the values of f on ∂Ω, its normal derivatives are involved as well. Thus, in general, i.e., for arbitrary ∂Ω ∈ C2,α, the classical Dirichlet problem (where only Dirichlet boundary conditions are given, and normal derivatives of f on the boundary are unknown) is not yet solved by Theorem 1.3.9. (Anyhow, in the next chapter we will see how the solution can be approximated by means of the Laplacian matrix.) However, for special shapes of Ω, the solution can be given explicitly by means of Green's functions.

Definition 1.3.10 A function Γ: Ω̄ × Ω −→ R is called Green's function if

• ∀x ∈ Ω   Γ(·, x) ∈ C(Ω̄ \ {x}) ∩ C2(Ω \ {x}) and −∆Γ(·, x) = δ(· − x) ,

• ∀x ∈ Ω  ∀ξ ∈ ∂Ω   Γ(ξ, x) = 0 .

A Green's function Γ(·, x) is to replace the fundamental solution hn(· − x) in the formula of Theorem 1.3.9. The first requirement on Γ states that this is possible within Ω. The second requirement ensures that Γ(·, x) vanishes on the boundary ∂Ω. Hence, in the formula of Theorem 1.3.9, the normal derivatives of the solution f are no longer needed, and the solution can be given just in terms of its values on the boundary ∂Ω. For special shapes of Ω, a Green's function can be given directly. Let Ω := B(0, r) and x ∈ Ω. Then

    Γn(ξ, x) := { hn(ξ−x) − (Kr hn)(ξ−x)             if n ≠ 2 ,
                  (1/2) · ( hn(ξ−x) − (Kr hn)(ξ−x) )  if n = 2 ,


where Kr denotes the Kelvin transform (see (1.7)), defines a Green's function. The idea is to use the difference of the fundamental solution hn inside Ω and the Kelvin transform of hn outside Ω. If we apply this to the formula of Theorem 1.3.9, we obtain the celebrated Poisson formula, which completely solves the classical Dirichlet problem on balls. This is the second main theorem of this section.

Theorem 1.3.11 Let Ω = B(0, r) ⊆ Rn and g ∈ C(∂Ω). The function

    f(x) := { (r² − |x|²)/(r ωn) ∫_{|ξ|=r} g(ξ)/|ξ−x|^n dξ   if x ∈ Ω   (Poisson formula),
              g(x)                                            if x ∈ ∂Ω

is the solution of the classical Dirichlet problem (Ω, g).

The proof consists of determining the normal derivatives of Γn(·, x) on the boundary ∂Ω. While for other special shapes of Ω, too, a Green's function can be given explicitly, e.g., by using conformal mappings² to map harmonic functions onto other domains, in general a direct computation is not easy.
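To make the Poisson formula concrete, here is a small numerical sketch for the disk in R2 (our illustration; the function name and the discretization are not from the text). With ω2 = 2π, the boundary integral is approximated by a midpoint sum over m boundary points; the trace g(ξ) = ξ1 of the harmonic function f(x) = x1 is then reproduced in the interior.

```python
import math

def poisson_disk(g, x, r=1.0, m=2000):
    # f(x) = (r^2 - |x|^2)/(r * omega_2) * integral over |xi| = r of g(xi)/|xi - x|^2 d(xi)
    x1, x2 = x
    total = 0.0
    for k in range(m):
        t = 2.0 * math.pi * (k + 0.5) / m
        xi1, xi2 = r * math.cos(t), r * math.sin(t)
        total += g(xi1, xi2) / ((xi1 - x1) ** 2 + (xi2 - x2) ** 2)
    total *= 2.0 * math.pi * r / m                      # arc-length element
    return (r * r - x1 * x1 - x2 * x2) / (r * 2.0 * math.pi) * total

print(poisson_disk(lambda a, b: a, (0.3, 0.2)))    # ≈ 0.3, harmonic extension of g(xi) = xi_1
print(poisson_disk(lambda a, b: 1.0, (0.3, 0.2)))  # ≈ 1, the constant (mean value) case
```

The second call checks that the Poisson kernel integrates to 1, i.e., constant boundary data yields the constant solution.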

Finally, we discuss the question for which shapes of Ω the solution of the classical Dirichlet problem exists. In this section we just state the result; it will be proved in Sect. 1.5. Recall the mean value property for harmonic functions. We now define subharmonic (superharmonic) functions.

Definition 1.3.12 A function f ∈ C(Ω) is called subharmonic (superharmonic) if for all x ∈ Ω and all r > 0 with B(x, r) ⊆ Ω

    1/(r^{n−1}ωn) ∫_{|ξ−x|=r} f(ξ) dξ ≥ f(x)   (≤ f(x)) .

For functions f ∈ C2(Ω) ∩ C(Ω), this simplifies to ∆f ≥ 0 (≤ 0) on Ω.

Definition 1.3.13 A function b(·, x) ∈ C(Ω̄) is called a barrier function for x ∈ ∂Ω if b(·, x) is superharmonic, b(x, x) = 0, and b(·, x) is positive everywhere else.

A point x ∈ ∂Ω is called regular if there exists a barrier function for x.

A sufficient condition for regularity of a point x∈∂Ω is as follows.

²The Kelvin transform in Rn is a conformal mapping. In R2, where potential theory is particularly well developed because many results from complex analysis can be used (e.g., the real and imaginary parts of a holomorphic function are harmonic), every holomorphic bijection C ∪ {∞} −→ C ∪ {∞} yields a conformal mapping.


Lemma 1.3.14 The point x ∈ ∂Ω is regular if there exist ε > 0 and x0 ∈ Rn such that B(x0, ε) ∩ Ω = ∅ and B̄(x0, ε) ∩ Ω̄ = {x}.

Proof: For n ≥ 3, the function b(·, x) := ε^{2−n} − | · − x0|^{2−n} is a barrier function. For n = 2, use b(·, x) := ln| · − x0| − ln ε, and for n = 1, use b(·, x) := | · − x0| − ε as a barrier function.

One can prove the same result for an open cone instead of a ball.

The following concluding main theorem of this section characterizes the shapes of Ω for which the classical Dirichlet problem is solvable. A proof is given in Sect. 1.5.

Theorem 1.3.15 A classical Dirichlet problem (Ω, g)is solvable if and only if each x∈∂Ω is regular.

1.4 Dirichlet’s Principle

In this section we present the Dirichlet principle, which offers another view on the classical Dirichlet problem, but can also be used to generalize it. We present it because there is a finite analog that will be used extensively in Part II.

First, again consider the function space F := C2(Ω)∩ C(Ω) for a bounded domain Ω, and a given function g ∈ C(∂Ω).

Theorem 1.4.1 (Dirichlet's principle) Let f ∈ F minimize the Dirichlet integral

    D(h) := ∫_{x∈Ω} |∇h(x)|² dx = ∫_{x∈Ω} (∇h(x))^T ∇h(x) dx

among all functions in F that coincide with g on the boundary ∂Ω. Then f is the solution of the classical Dirichlet problem (Ω, g).

Proof: Let f ∈ F with f|∂Ω = g minimize D. Thus, D(f + εϕ) ≥ D(f) for all ε > 0 and all ϕ ∈ C0∞(Ω). Because f is a minimizer and

    D(f + εϕ) = ∫ |∇f|² dx + 2ε ∫ (∇f)^T ∇ϕ dx + ε² ∫ |∇ϕ|² dx ,

dividing by ε, letting ε → 0, and applying this to ±ϕ, it follows that

    0 = ∫ (∇f)^T ∇ϕ dx = −∫ (∆f)ϕ dx ,    (1.12)

where the last step is integration by parts. Since ∆f is continuous by assumption, the theorem follows directly. More generally, since C0∞(Ω) is dense in L2(Ω) (and the inner product of L2(Ω) is continuous), (1.12) also holds for all ϕ ∈ L2(Ω). Hence, ∆f = 0 (in L2(Ω)).
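The finite analog used later can already be glimpsed on a path graph: among all vectors with fixed boundary values, the discrete Dirichlet energy is minimized by the discretely harmonic (here: linear) one. A minimal sketch, with names of our own choosing:

```python
def dirichlet_energy(h):
    # discrete analog of D(h): sum of squared differences along the path
    return sum((h[i + 1] - h[i]) ** 2 for i in range(len(h) - 1))

n = 10
linear = [i / n for i in range(n + 1)]   # discretely harmonic: each interior value
                                         # is the mean of its two neighbors
perturbed = linear[:]
perturbed[5] += 0.1                      # change a single interior value

print(dirichlet_energy(linear), dirichlet_energy(perturbed))  # 0.1 vs 0.12
```

Any interior perturbation strictly increases the energy, mirroring the variational argument in the proof above.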

Historically, the Dirichlet principle was formulated without taking into account that the minimum might not exist; see [95] for an interesting survey. Anyhow, since a minimum in a suitably chosen function space may be found, the Dirichlet principle can be used to generalize the classical Dirichlet problem (actually, to modify it, since there is no inclusion in either direction); a solution is then a minimizer of the Dirichlet integral. We call this the generalized Dirichlet problem. Interestingly, there exist solvable classical Dirichlet problems where the minimum of the Dirichlet integral does not exist, see [66]. On the other hand, consider the following (slightly modified) example from [125].

Example 1.4.2 Let Ω := B(0, 0, 1) ⊆ R2, F := W1,2(Ω) ∩ C(B(0, ε, 1)) for some 1 > ε > 0, and let g(x) = 0 for |x| = 1 and g(0) = 1. The solution of the generalized Dirichlet problem, f(x) = 0 in Ω and f(x) = g(x) on ∂Ω, does not solve the classical Dirichlet problem.

In this example we used the Sobolev space W1,2(Ω), defined as follows³.

Definition 1.4.3 Let 1 ≤ p < ∞. A function f ∈ Lp(Ω) is called k-times weakly differentiable if for all |β| ≤ k there exists h ∈ Lp(Ω) with

    ∀ϕ ∈ C0∞(Ω)   ∂^β f(ϕ) := (−1)^{|β|} ∫ f(x)∂^β ϕ(x) dx = ∫ h(x)ϕ(x) dx ,

i.e., ∂^β f = h in the sense of distributions. The Sobolev spaces Wk,p(Ω) are defined by

    Wk,p(Ω) := {f ∈ Lp(Ω) : f is k-times weakly differentiable} .

Note that with respect to the inner product

    (f, g)_{Wk,2(Ω)} := Σ_{|β|≤k} (∂^β f, ∂^β g)_{L2(Ω)} ,

Wk,2(Ω) is a Hilbert space for all k. Also note that for 1 ≤ p < ∞ and all k, the Sobolev space Wk,p(Ω) is equal to the closure of Ck(Ω) ∩ Wk,p(Ω) with respect to the Wk,p(Ω)-norm. This can be used for a definition based on approximating functions, where Wk,p is then denoted by Hk,p. Anyhow, we will not use this fact, but only wanted to mention the alternative notation.

³The use of F might be more intuitive than W0^{1,2}(B(0,1)), see Sect. 2.7.


Proof (of Example 1.4.2): Before we start to analyze Example 1.4.2, first consider the modified problem given by Ω := B(0, ε, 1) for 0 < ε < 1, g(x) = 0 for |x| = 1, and g(x) = 1 for |x| = ε (see Fig. 1.1)⁴. We immediately recognize the fundamental solution for R2 in its classical solution

    f(x) = ln|x| / ln ε   for x ∈ Ω .

This illustrates the difficulties for ε → 0. Clearly, the function f = 0 in Ω and f = g on ∂Ω solves the generalized Dirichlet problem of Example 1.4.2. On the other hand, each continuous function on Ω̄ with f|∂Ω = g has a positive Dirichlet integral.

Hence, solutions of the generalized Dirichlet problem might not solve the classical Dirichlet problem, and vice versa. The generalized Dirichlet problem has also been studied intensively and is responsible for many developments in mathematics in general, and in functional analysis in particular. Anyhow, we return to the classical Dirichlet problem and consider some more solution methods that have a finite analog on graphs.

1.5 Perron’s Method

In Sect. 1.3 we defined sub- and superharmonic functions, see Def. 1.3.12.

Considering the proof of the maximum principle, Lemma 1.2.9, we can state the following lemma without proof.

Lemma 1.5.1 Subharmonic functions satisfy the maximum principle. Su- perharmonic functions satisfy the minimum principle.

Corollary 1.5.2 Let f1 be a subharmonic function andf2 be superharmonic with f1 ≤f2 on ∂Ω. Then either f1 < f2 or f1 =f2 on Ω.

Proof: Clearly, f1 − f2 is a subharmonic function with non-positive values on ∂Ω, hence f1 ≤ f2 by the maximum principle. If there exists an x ∈ Ω with f1(x) = f2(x), then f1 = f2 by the maximum principle; otherwise f1 < f2 on Ω.

The following theorem is known as Perron's method to solve (not yet constructively) the classical Dirichlet problem.

Theorem 1.5.3 (Perron's method) Let Ω ⊆ Rn be a bounded domain and g ∈ C(∂Ω). Let F := {f ∈ C(Ω̄) : f is subharmonic and f|∂Ω ≤ g}. The (pointwise) supremum f(x) := sup{h(x) : h ∈ F} is harmonic on Ω.

4Rendered with POV-Ray, www.povray.org


Figure 1.1: The Dirichlet problem given in the proof of Example 1.4.2


A proof of this theorem is by far too tedious for our survey and is omitted here. Note that if a solution exists, the supremum can, of course, be replaced by the maximum. Note also that F is not empty, since it contains the function that is constant with value inf_{x∈∂Ω} g(x). Furthermore, the supremum f exists because each function in F is bounded by the constant function with value sup_{x∈∂Ω} g(x), by the maximum principle. But we did not yet mention the behavior of the supremum f on the boundary ∂Ω. Whether f coincides with g on the boundary is already answered by Theorem 1.3.15, which we can now prove by means of Perron's method.
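A discrete glimpse of why the construction works (our illustration, not from the text): on a path, "subharmonic" means each interior value is at most the mean of its two neighbors, and the pointwise supremum of two such sequences is again subharmonic, just as the class F is closed under taking pointwise maxima.

```python
def is_subharmonic(f):
    # discrete analog on a path: value at most the mean of its two neighbors
    return all(f[i] <= (f[i - 1] + f[i + 1]) / 2 + 1e-12
               for i in range(1, len(f) - 1))

f1 = [x * x for x in range(8)]              # convex, hence subharmonic
f2 = [20.0 - 3 * x for x in range(8)]       # affine, hence subharmonic
sup = [max(a, b) for a, b in zip(f1, f2)]   # pointwise supremum, as in F

print(is_subharmonic(f1), is_subharmonic(f2), is_subharmonic(sup))  # True True True
```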

Proof (of Theorem 1.3.15): Let each x ∈ ∂Ω be regular. Then each barrier function b(·, x) ∈ F. Hence, f from Perron's method coincides with g on ∂Ω and solves the classical Dirichlet problem. On the other hand, let Ω be the bounded domain of a solvable classical Dirichlet problem, and let x ∈ ∂Ω be fixed. Let h(·) := | · − x|. Then h is a subharmonic function. Let b(·, x) be the solution of the classical Dirichlet problem (Ω, h(·)|∂Ω). Clearly, b(·, x) is superharmonic, and by Cor. 1.5.2 we know that b(·, x) ≥ h(·) = | · − x|. Hence, b(·, x) is a barrier function for x.

1.6 Iterative Methods

In this section we discuss iterative methods that also allow us to approximate the solution of a given solvable classical Dirichlet problem (Ω, g). One is called the Beer–Neumann method (considering the iterative aspect), but is better known as the integral equation method. Since questions about its convergence dip deeply into areas of functional analysis such as Riesz–Schauder theory, we only very briefly sketch the method in R3. From the introductory example of this chapter we have seen that the potential U(x) = c1/|x| for a point electric charge at 0 defines a harmonic function on R3 \ {0}. Analogously, for continuous electric charge densities ρ in R3, or, in the case that we consider now, a density on the surface ∂Ω of a bounded domain Ω, the potential U is given by

    U(x) := (1/4π) ∫_{∂Ω} ρ(ξ) · 1/|ξ−x| dξ .    (1.13)

One can prove that U is continuous on R3 and harmonic on Ω. The relation to the classical Dirichlet problem is the following: given a function g ∈ C(∂Ω), determine ρ such that U = g on ∂Ω. Clearly, then f := U is its solution. Unfortunately, the determination of ρ from a given function g has turned out to be a very challenging task. A better way proved to be the following. Apart from the one-layer electric charge density from (1.13) one can also define a


two-layer electric charge density, a dipole density ν, where normal derivatives are given (think, e.g., of a ball with a very thin boundary, where the electric charge is only on the outer side). The corresponding potential is given by

    U(x) := (1/4π) ∫_{∂Ω} ν(ξ) ∂nξ (1/|ξ−x|) dξ .    (1.14)

The potential U is continuous and harmonic in Ω and in Ωout := R3 \ Ω̄. It is continuously extendable from inside and from outside, but jumps on ∂Ω. More precisely:

Theorem 1.6.1 For x ∈ ∂Ω,

    U(x) = lim_{Ω∋ξ→x} U(ξ) + ν(x)/2 ,
    U(x) = lim_{Ωout∋ξ→x} U(ξ) − ν(x)/2 .

The relation to the classical Dirichlet problem is now similar to before. Given a function g ∈ C(∂Ω), determine ν such that the continuous extension of U in Ω to the boundary ∂Ω coincides with g. By Theorem 1.6.1 this can be rewritten as

    g(x) = −ν(x)/2 + (1/4π) ∫_{∂Ω} ν(ξ) ∂nξ (1/|ξ−x|) dξ =: −ν(x)/2 + K(ν)(x)/2

for x ∈ ∂Ω. Thus, the original problem has been transformed into the integral equation 2g = −ν + K(ν). The study of the operator K then allowed to define its solution ν := −Σ_{j=0}^∞ K^j(2g). Hence, the classical Dirichlet problem can iteratively be solved by

    ν0 := −2g   and for j > 0   νj := ν0 + K(νj−1) ,

defining ν as the limit, such that the solution f is given by

    f(x) := { (1/4π) ∫_{∂Ω} ν(ξ) ∂nξ (1/|ξ−x|) dξ   if x ∈ Ω ,
              g(x)                                    if x ∈ ∂Ω ,

while approximate solutions would be obtained from any νj.
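The Neumann-series iteration νj := ν0 + K(νj−1) has an immediate finite analog: replace K by a matrix of spectral radius below 1. A hypothetical 2×2 sketch (all values invented purely for illustration):

```python
# finite analog of 2g = -nu + K(nu): iterate nu_j = nu_0 + K nu_{j-1}, nu_0 = -2g
K = [[0.2, 0.1],
     [0.0, 0.3]]          # "operator" with spectral radius < 1
g = [1.0, -1.0]

def apply_K(v):
    return [sum(K[i][j] * v[j] for j in range(2)) for i in range(2)]

nu0 = [-2.0 * x for x in g]
nu = nu0[:]
for _ in range(200):
    Kv = apply_K(nu)
    nu = [nu0[i] + Kv[i] for i in range(2)]

residual = [(-nu[i] + apply_K(nu)[i]) - 2.0 * g[i] for i in range(2)]
print(residual)           # ≈ [0.0, 0.0]: nu solves 2g = -nu + K(nu)
```

Because the spectral radius is below 1, the iterates are exactly the partial sums of −Σ K^j(2g) and converge geometrically.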

A more obvious connection to the iteration in the finite case would be the following method, which we now construct, modeled on the Gauß–Seidel iteration for the finite case (see the paragraph following Def. 2.5.4). Anyhow, for the continuous case it is not clear whether convergence holds.
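For the finite case, the Gauß–Seidel iteration just referred to is easy to state: on a grid discretization of Ω, sweep over the interior points and replace each value in place by the mean of its four neighbors, keeping the boundary values g fixed. A minimal sketch (grid size and function names are our own choices):

```python
def gauss_seidel_dirichlet(g, n=20, sweeps=2000):
    # (n+1) x (n+1) grid on [0,1]^2; boundary fixed to g, interior starts at 0
    h = 1.0 / n
    u = [[g(i * h, j * h) for j in range(n + 1)] for i in range(n + 1)]
    for i in range(1, n):
        for j in range(1, n):
            u[i][j] = 0.0
    for _ in range(sweeps):
        for i in range(1, n):             # in-place sweeps: Gauss-Seidel
            for j in range(1, n):
                u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                  + u[i][j - 1] + u[i][j + 1])
    return u

# g(x, y) = x is harmonic, so the iteration should reproduce it in the interior
u = gauss_seidel_dirichlet(lambda x, y: x)
print(u[10][10])   # ≈ 0.5
```

The fixed point of the sweep is exactly the discretely harmonic extension of the boundary data, i.e., the solution of the Laplacian linear system treated in Part II.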
