
The ideal mathematical model for the source location problem is geometrically very simple: find the unique common point of a collection of spheres,

\[
\text{find } \bar{x} \in \bigcap_{j=1}^{m} S_j, \qquad (5.1)
\]

where $S_j$ ($j = 1, 2, \ldots, m$) is the sphere in $\mathbb{R}^n$ centered at $a_j$ with radius $r_j > 0$.

The simplicity of (5.1) provides useful intuition for the rather technical regularity notions involved in the convergence theory of Chapter 3.

Let us consider (5.1) in $\mathbb{R}^3$ and make the following natural assumption on the sensors.

The treatment of the problem in the $n$-dimensional case is analogous.
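To make the geometry concrete, the following minimal Python sketch implements the metric projection onto a single sphere, the basic building block of every projection method discussed below. The data are hypothetical, and the closed form $P_{S_j}(x) = a_j + r_j (x - a_j)/\|x - a_j\|$, valid for $x \neq a_j$, is the standard one.

```python
import numpy as np

def proj_sphere(x, a, r):
    """Project x onto the sphere S = {y : ||y - a|| = r}.

    Uses the closed form a + r*(x - a)/||x - a||, valid for x != a;
    at x == a the projection is not single-valued.
    """
    d = x - a
    nd = np.linalg.norm(d)
    if nd == 0.0:
        raise ValueError("the projection onto a sphere is not unique at its center")
    return a + r * d / nd

# Hypothetical example data (not from the text): one sensor location and range.
a = np.array([1.0, -2.0, 0.5])
r = 3.0
x = np.array([4.0, 0.0, 1.0])
p = proj_sphere(x, a, r)
print(np.linalg.norm(p - a))  # equals r up to rounding
```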

Assumption 5.1.1. There are always three sensors $\{a_{j_1}, a_{j_2}, a_{j_3}\}$ that, together with the true source $\bar{x}$, are affinely independent.

The following facts follow from the prox-regularity of the spheres and Assumption 5.1.1.

Fact 5.1.2 (prox-regularity of spheres). Each $S_j$ ($j = 1, 2, \ldots, m$) is prox-regular at $\bar{x}$, i.e., for any given $\varepsilon \in (0,1)$ it holds that
\[
\langle x - P_{S_j}x,\ \bar{x} - P_{S_j}x \rangle \;\le\; \varepsilon\, \|x - P_{S_j}x\|\, \|\bar{x} - P_{S_j}x\| \quad \forall x \in \mathbb{B}_{\delta}(\bar{x}), \qquad (5.2)
\]
where $\delta := 2 r_j \varepsilon \sqrt{1 - \varepsilon^2} > 0$.


For all $\delta > 0$ sufficiently small, the constant $\varepsilon$ in (5.2) can be represented as a function of $\delta$:
\[
\varepsilon = f(\delta) := \frac{1}{\sqrt{2}} \left( 1 - \sqrt{1 - \frac{\delta^2}{r_j^2}} \right)^{1/2} \in \bigl(0, 1/\sqrt{2}\bigr). \qquad (5.3)
\]

This function will be needed for estimating the radius of linear convergence of the algorithms.

It is important to note that $f(\delta) \downarrow 0$ as $\delta \downarrow 0$.
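As a quick sanity check of (5.3), the following sketch evaluates $f(\delta)$ for a hypothetical radius $r_j$ and illustrates numerically that $f(\delta) \downarrow 0$ as $\delta \downarrow 0$; the helper name f_delta is ours.

```python
import numpy as np

def f_delta(delta, r_j):
    """Evaluate f(delta) from (5.3); requires 0 < delta <= r_j."""
    return np.sqrt(0.5 * (1.0 - np.sqrt(1.0 - (delta / r_j) ** 2)))

r_j = 3.0  # hypothetical sphere radius
for delta in [1.0, 0.1, 0.01, 0.001]:
    print(delta, f_delta(delta, r_j))  # values decrease monotonically towards 0
```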

Fact 5.1.3 (strong subtransversality). Assumption 5.1.1 implies that $\{S_{j_1}, S_{j_2}, S_{j_3}\}$ is strongly subtransversal at $\bar{x}$, that is, there exist $\kappa, \Delta > 0$ such that $\bigcap_{i=1}^{3} S_{j_i} \cap \mathbb{B}_{2\Delta}(\bar{x}) = \{\bar{x}\}$ and
\[
\|x - \bar{x}\| = \operatorname{dist}\Bigl(x, \bigcap_{i=1}^{3} S_{j_i}\Bigr) \;\le\; \kappa \max_{i=1,2,3} \operatorname{dist}(x, S_{j_i}) \quad \forall x \in \mathbb{B}_{\Delta}(\bar{x}).
\]

Let us denote
\[
r := \min\{r_j > 0 : 1 \le j \le m\} > 0.
\]

5.1.1 Cyclic and averaged projections

The following theorem guarantees local linear convergence of TCP for solving (5.1) under Assumption 5.1.1.

Theorem 5.1.4 (linear convergence for TCP). Let $\delta \in (0, \min\{r, \Delta\})$ satisfy
\[
f(\delta) < \frac{1}{2m(\kappa + 1)},
\]
where $f(\delta)$ is given by (5.3). Then for any starting point in $\mathbb{B}_{\delta}(\bar{x})$, the method TCP for solving (5.1) converges linearly to $\bar{x}$ with rate at most
\[
c = \left( 1 + \frac{2 f(\delta)(\kappa + 1)}{\kappa^2} - \frac{1}{2} \right)^{1/2} \in (0,1).
\]

Proof. Assumption 5.1.1 implies the strong subtransversality of $\{S_{j_1}, S_{j_2}, S_{j_3}\}$ at $\bar{x}$ by Fact 5.1.3. The latter in turn implies the strong subtransversality of the collection $\{S_j : 1 \le j \le m\}$ at $\bar{x}$. The statement now follows from Theorem 3.2.13 in view of Fact 5.1.2.
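For orientation, the following minimal Python sketch implements one common reading of cyclic projections for (5.1), namely the fixed-point iteration $x^{k+1} = P_{S_m} \cdots P_{S_2} P_{S_1} x^k$. Identifying TCP with exactly this operator is our assumption, and the instance below is hypothetical.

```python
import numpy as np

def proj_sphere(x, a, r):
    d = x - a
    return a + r * d / np.linalg.norm(d)  # assumes x != a

def cyclic_projections(x0, centers, radii, tol=1e-10, max_iter=10000):
    """Apply the sphere projections in a fixed cyclic order until the change is small."""
    x = x0.copy()
    for _ in range(max_iter):
        x_new = x.copy()
        for a, r in zip(centers, radii):
            x_new = proj_sphere(x_new, a, r)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical consistent instance: three sensors whose ranges are exact.
rng = np.random.default_rng(1)
x_bar = rng.uniform(-10.0, 10.0, 3)                    # "true" source
centers = [rng.uniform(-10.0, 10.0, 3) for _ in range(3)]
radii = [np.linalg.norm(x_bar - a) for a in centers]
x0 = x_bar + 0.5 * rng.standard_normal(3)              # start near x_bar, as the local theory requires
x = cyclic_projections(x0, centers, radii)
print(np.linalg.norm(x - x_bar))                       # small if the iteration recovered x_bar
```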

The following theorem guarantees local linear convergence of TAP for solving (5.1) under Assumption 5.1.1.

Theorem 5.1.5 (linear convergence for TAP). Let $\delta \in (0, \min\{r, \Delta\})$ satisfy
\[
f(\delta) + f(\delta)^2 + \frac{f(\delta)}{2(1 - f(\delta))} \;\le\; \frac{1}{m\kappa^2},
\]
where $f(\delta)$ is given by (5.3). Then for any starting point in $\mathbb{B}_{\delta}(\bar{x})$, the method TAP for solving (5.1) converges linearly to $\bar{x}$ with rate at most
\[
c = \left( 1 + f(\delta) + f(\delta)^2 + \frac{f(\delta)}{2(1 - f(\delta))} - \frac{1}{m\kappa^2} \right)^{1/2}.
\]

Proof. Assumption 5.1.1 implies the strong subtransversality of $\{S_{j_1}, S_{j_2}, S_{j_3}\}$ at $\bar{x}$ by Fact 5.1.3. The latter in turn implies the strong subtransversality of the collection $\{S_j : 1 \le j \le m\}$ at $\bar{x}$. The statement now follows from Theorem 3.3.9 in view of Fact 5.1.2.
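Analogously, here is a minimal sketch of averaged projections, read as $x^{k+1} = \frac{1}{m}\sum_{j=1}^{m} P_{S_j} x^k$; again, identifying TAP with exactly this averaging operator is our assumption. It can be run on the same hypothetical three-sphere instance as the cyclic-projection sketch above.

```python
import numpy as np

def proj_sphere(x, a, r):
    d = x - a
    return a + r * d / np.linalg.norm(d)  # assumes x != a

def averaged_projections(x0, centers, radii, tol=1e-10, max_iter=100000):
    """Each step replaces x by the average of its projections onto all spheres."""
    x = x0.copy()
    for _ in range(max_iter):
        x_new = np.mean([proj_sphere(x, a, r) for a, r in zip(centers, radii)], axis=0)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```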

Remark 5.1.6. Since $f(\delta) \downarrow 0$ as $\delta \downarrow 0$, the rate $c$ estimated in Theorems 5.1.4 and 5.1.5 must be strictly smaller than 1 when the starting point is sufficiently close to $\bar{x}$. This convergence rate improves (becomes smaller) as the iterates get closer to $\bar{x}$ (i.e., as $\delta$ decreases).

5.1.2 Forward–backward algorithm and variants of the DR method

We discuss the source location problem with three sensors in the product space:
\[
\text{find } \bar{u} \in \Lambda \cap S,
\]
where $\Lambda$ is the diagonal and $S := \prod_{j=1}^{3} S_j$ in $\mathbb{R}^{3 \times 3}$.
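For concreteness, a small sketch of the two projectors that the product-space methods combine, assuming $\Lambda = \{(z,z,z) : z \in \mathbb{R}^3\}$ and $S = S_1 \times S_2 \times S_3$: the projection onto the diagonal averages the three blocks, and the projection onto $S$ acts blockwise by the sphere projector. This is a generic building-block sketch, not the thesis's specific FB, RAAR, or DRAP implementations.

```python
import numpy as np

def proj_sphere(x, a, r):
    d = x - a
    return a + r * d / np.linalg.norm(d)  # assumes x != a

def proj_diagonal(U):
    """Project U (3 x 3, block u_j in row j) onto the diagonal Lambda = {(z, z, z)}."""
    z = U.mean(axis=0)
    return np.tile(z, (U.shape[0], 1))

def proj_product(U, centers, radii):
    """Project U blockwise onto S = S_1 x S_2 x S_3."""
    return np.array([proj_sphere(u, a, r) for u, a, r in zip(U, centers, radii)])
```

Methods such as FB, RAAR, and DRAP are built from these two projectors (and the corresponding reflectors); their precise update rules are those of Chapter 3 and are not repeated here.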

The following lemma ensures that $\{\Lambda, S\}$ is transversal at the solution under Assumption 5.1.1.

Lemma 5.1.7. If the three sensors satisfy Assumption 5.1.1, then $\{\Lambda, S\}$ is transversal at the solution.

Proof. The statement follows from the linear independence of the three nonzero normal vectors of the spheres at the solution, which is guaranteed by the affine independence required in Assumption 5.1.1.

In turn, the transversality of $\{\Lambda, S\}$ implies the metric subregularity condition imposed in Corollary 3.1.4 for these algorithms, and as a consequence these algorithms are locally linearly convergent.

5.1.3 ADMM algorithm

Let us consider the source location problem with noise, that is, to find an appropriate approximation of the true source $\bar{x}$ described by the following system of equations:
\[
r_j = \|\bar{x} - a_j\| + \varepsilon_j \quad (j = 1, 2, \ldots, m), \qquad (5.4)
\]
where $\varepsilon_j$ is the $j$-th unknown noise. The parameters $a = (a_1, a_2, \ldots, a_m) \in \mathbb{R}^{nm}$ and $r = (r_1, r_2, \ldots, r_m) \in \mathbb{R}^m_+$ are the receiver locations and the distances of the receivers to the unknown sender $\bar{x}$, respectively.

One strategy proposed in [99] to address (5.4) is to find a solution to the following minimization problem:
\[
\min_{(x,u) \in \mathbb{R}^n \times \mathbb{R}^{nm}} \; f(x, u) := \sum_{j=1}^{m} \left( \frac{1}{2}\|x - a_j\|^2 - r_j \langle u_j, x - a_j \rangle + \iota_{\mathbb{B}}(u_j) \right). \qquad (5.5)
\]

Let us denote
\[
A := \{x \in \mathbb{R}^{nm} \mid \exists\, z \in \mathbb{R}^n \text{ such that } x + a = (z, z, \ldots, z)\}, \qquad B := (r_1\mathbb{B}) \times \cdots \times (r_m\mathbb{B}),
\]
and let $E := (\mathbb{R}^n)^m$ be the product space endowed with the 2-norm.

Problem (5.5) then takes the form of (3.45), and hence Algorithm 3.6.1 applied to this problem converges globally thanks to Theorem 3.6.2.
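Although the precise update steps of Algorithm 3.6.1 are those of Chapter 3 and are not reproduced here, the two projections that any splitting method for this formulation needs follow directly from the definitions of $A$ and $B$ above; the sketch below uses hypothetical data shapes (blocks stored as rows of an $m \times n$ array).

```python
import numpy as np

def proj_A(Y, a):
    """Project Y (m x n, block y_j in row j) onto A = {x : x + a = (z, ..., z)}.

    Writing x_j = z - a_j, the nearest point minimizes sum_j ||z - a_j - y_j||^2
    over z, so z is the average of the blocks a_j + y_j.
    """
    z = (a + Y).mean(axis=0)
    return z - a                      # row j equals z - a_j

def proj_B(Y, radii):
    """Project Y blockwise onto B = (r_1 B) x ... x (r_m B) (Euclidean balls)."""
    radii = np.asarray(radii, dtype=float)
    norms = np.linalg.norm(Y, axis=1, keepdims=True)
    scale = np.minimum(1.0, radii[:, None] / np.maximum(norms, np.finfo(float).tiny))
    return Y * scale
```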

5.1.4 Numerical simulation

We run a simulation for the source location problem in $\mathbb{R}^3$ with $m = 20$ sensors.

• Randomly generate the sensor locations $a_j$, $j = 1, 2, \ldots, m$, and the true source location $\bar{x}$ from a uniform distribution over the box $[-10000, 10000]^3$.

• Compute the ranges $r_j$, $j = 1, 2, \ldots, m$, using the relation $r_j = \|\bar{x} - a_j\| + \varepsilon_j$, where the $\varepsilon_j$ are noise.

• Generate a random starting point for all methods, again from the uniform distribution over the box.

The stopping criterion $\|x - x^+\| < 10^{-10}$ is used. We run all of the above algorithms in Matlab and observe that their convergence behavior appears to be consistent with the convergence theory discussed above. For each method the parameter with seemingly best performance is chosen: $\lambda = .15$ for FB, $\beta = .8$ for RAAR, $\lambda = .15$ for DRAP, and $\rho = 1.15$ for ADMM.
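The experiments were run in Matlab; purely as an illustration, the following Python sketch reproduces the data-generation step and the stopping criterion for the cyclic-projection iteration. The random seed and the choice of cyclic projections as the driver are ours, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(0)        # the seed is our choice
m, box = 20, 10000.0

# Sensor locations and true source, uniform over the box [-10000, 10000]^3.
a = rng.uniform(-box, box, size=(m, 3))
x_bar = rng.uniform(-box, box, size=3)

# Ranges; noise drawn from N(0, 20^2) as in the noisy simulation (set sigma = 0.0 for no noise).
sigma = 20.0
r = np.linalg.norm(x_bar - a, axis=1) + sigma * rng.standard_normal(m)

def proj_sphere(x, center, radius):
    d = x - center
    return center + radius * d / np.linalg.norm(d)   # assumes x != center

# Cyclic projections with the stopping criterion ||x - x+|| < 1e-10.
# With noise the spheres need not have a common point, so the change may stall
# above the tolerance; the iteration cap then ends the loop.
x = rng.uniform(-box, box, size=3)                   # random starting point
for _ in range(10000):
    x_plus = x.copy()
    for j in range(m):
        x_plus = proj_sphere(x_plus, a[j], r[j])
    done = np.linalg.norm(x - x_plus) < 1e-10
    x = x_plus
    if done:
        break

print("estimate:", x, "distance to the true source:", np.linalg.norm(x - x_bar))
```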

The change of the distance between two consecutive iterates is of interest. When linear convergence appears to occur, this observable quantity may provide an estimate of the rate of convergence. Under the assumption that the iterates remain in the region of convergence, one can obtain a practically useful error bound for the distance from an iterate to a solution.
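One standard way to make this precise (a routine estimate under the stated assumption, not a statement taken from the text): if the iterates satisfy $\|x^{k+1} - \bar{x}\| \le c\,\|x^{k} - \bar{x}\|$ with some $c \in (0,1)$, then
\[
\|x^{k} - \bar{x}\|
\;\le\; \|x^{k} - x^{k+1}\| + \|x^{k+1} - \bar{x}\|
\;\le\; \|x^{k+1} - x^{k}\| + c\,\|x^{k} - \bar{x}\|
\quad\Longrightarrow\quad
\|x^{k} - \bar{x}\| \;\le\; \frac{1}{1-c}\,\|x^{k+1} - x^{k}\|,
\]
while $c$ itself can be estimated from the observed ratios $\|x^{k+1} - x^{k}\| / \|x^{k} - x^{k-1}\|$.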

We also pay attention to the iterate gap, which in a sense measures the infeasibility at the iterates. If we think of feasibility as the problem of minimizing the function that is the sum of (the squares of) the distance functions to the sets, then the iterate gaps are simply the values of that function evaluated at the iterates. For the RAAR and DRAP algorithms, the iterates themselves are not informative, but their shadows are, by which we mean the projections of the iterates onto one of the sets. Hence, the iterate gap corresponding to these methods is calculated for the shadow iterates instead of the iterates themselves.

Figure 5.1: Source location problem without noise: the change in iterates (left) and the gap in iterates (right), for CP, AP, FB, RAAR, DRAP, and ADMM.

Figures 5.1 and 5.2 present the changes and the gaps of the algorithms for solving the source location problem without noise and with noise, respectively. In the simulation with noise, the noise terms $\varepsilon_j$ ($j = 1, 2, \ldots, m$) are generated from a normal distribution with zero mean and standard deviation 20.


Figure 5.2: Source location problem with noise: the change in iterates (left) and the gap in iterates (right).