
Approximate Projected Natural Level Function

4.1 Basic Approximation Idea

We consider a function $F$ which fulfills Assumption 2.1 and an $x \in \mathcal{D}$ with $F(x) \neq 0$ and $F'(x)$ nonsingular. Let $H \in \mathbb{R}^{n \times n}$ be an approximation for $F'(x)^{-1}$ such that

$$\delta x := -H F(x) \tag{4.1}$$

is available. Also, assume that we can compute

$$\delta x^T H \quad\text{and}\quad H F'(x)\,\delta x. \tag{4.2}$$

In the following we will define an approximation to the PNLF in terms of such a matrix $H$ and provide a corresponding direction of descent $\overline{\delta x}$. As stated in the introduction of this chapter, we will call this new level function the \emph{approximate projected natural level function}, which we will abbreviate by the term APNLF.

We will see that the approximation quality of the APNLF and of $\overline{\delta x}$ w.r.t. the PNLF and the Newton correction $\Delta x$ at $x$, respectively, can be monitored by means of the angle between $\overline{\delta x}$ and the transposed negative gradient of the APNLF at $x$, and the angle between $\overline{\delta x}$ and $\Delta x$.

Without knowledge of $\Delta x$, the angle between $\overline{\delta x}$ and $\Delta x$ is usually unknown; we will therefore provide an estimate for it.

Two concepts are of crucial importance for our approach to be viable and computationally efficient.

Projection:

Dropping the indices, recall from (3.70) that for the relative change of the PNLF we have
$$\frac{T(x+\lambda\Delta x \,|\, P^N F'(x)^{-1})}{T(x \,|\, P^N F'(x)^{-1})} = \bigl(1-\lambda+\mu(\lambda)\bigr)^2 \tag{4.3}$$
with
$$\mu(\lambda) = -\frac{\Delta x^T}{\|\Delta x\|_2^2}\, F'(x)^{-1}\bigl(F(x+\lambda\Delta x) - F(x) - \lambda F'(x)\Delta x\bigr). \tag{4.4}$$

One will see from Theorem 4.5 below that the relative change of the APNLF in the direction of $\overline{\delta x}$ is a \emph{structural} analogy to (4.3). This analogy will turn out to be fundamental for justifying the aforementioned angles as an approximation monitor; see Theorem 4.7 and Corollary 4.8.

Automatic Differentiation (AD):

As we will see in the course of providing the above-mentioned angles or estimates of angles, respectively, we will need \emph{adjoint} and \emph{direct tangent} evaluations, i.e.,
$$w^T F'(x) \quad\text{and}\quad F'(x)\,d, \qquad w, d \in \mathbb{R}^n.$$

Applying AD techniques is an efficient way to obtain these values without calculating the whole Jacobian. Adjoint terms are computable via the \emph{reverse mode} of AD and direct tangents via the \emph{forward mode} of AD, [15]. So we assume in the following that these modes of AD are available and hence the above products.
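For illustration only, the following is a minimal sketch of both products with JAX standing in for a generic AD tool; the function $F$ below is a made-up two-dimensional example, not one from this work:

```python
import jax
import jax.numpy as jnp

def F(x):
    # Made-up test function F: R^2 -> R^2, smooth as required.
    return jnp.array([x[0] ** 2 + x[1] - 3.0, x[0] + jnp.sin(x[1])])

x = jnp.array([1.0, 2.0])
d = jnp.array([0.5, -1.0])  # direction for the direct tangent
w = jnp.array([2.0, 1.0])   # weight vector for the adjoint

# Forward mode: the direct tangent F'(x) d, without forming the Jacobian.
_, Jd = jax.jvp(F, (x,), (d,))

# Reverse mode: the adjoint w^T F'(x), again without forming the Jacobian.
_, vjp_fun = jax.vjp(F, x)
(wTJ,) = vjp_fun(w)
```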

Remark 4.1 We would like to emphasize that the concept of the APNLF with angle monitors can be combined with any technique which aims to provide a better approximation $H$ to $F'(x)^{-1}$, as long as adjoint and direct tangent evaluations as well as the products (4.1) and (4.2) are available. It is not mandatory to employ quasi-Newton techniques, as we do in this work, to provide better approximations to the Jacobian. □

In terms of the approximation $H$ to the inverse of the Jacobian $F'(x)$ and in terms of $\delta x$ from (4.1), we define the APNLF in the following way.

Definition 4.2 (Approximate projected natural level function) Suppose $F$ fulfills Assumption 2.1. Let $x_l \in \mathcal{D}$ with $F(x_l) \neq 0$. Furthermore, let $H_l \in \mathbb{R}^{n\times n}$ and assume that $\delta x_l$ defined according to (4.1) is nonzero. Then for
$$P_l := \frac{\delta x_l\,\delta x_l^T}{\delta x_l^T\,\delta x_l}$$
we call
$$T(x \,|\, P_lH_l) = \tfrac{1}{2}\,\|P_lH_lF(x)\|_2^2$$
the \emph{approximate projected natural level function} (at $x_l$), or abbreviated APNLF.

According to (2.12) we know that the Newton correction at $x_l$ is a direction of descent for the APNLF. More precisely,
$$\frac{d}{d\lambda}\, T(x_l+\lambda\Delta x_l \,|\, P_lH_l)\Big|_{\lambda=0} = -2\,T(x_l \,|\, P_lH_l). \tag{4.5}$$
This behavior is not necessarily mimicked by the basic approximation $\delta x_l$. In fact, it is not even guaranteed that $\delta x_l$ is a direction of descent for the APNLF at $x_l$. To overcome this nuisance we define the \emph{descent approximation}

$$\overline{\delta x}_l := \frac{1}{1-\alpha_l}\,\delta x_l \quad\text{where}\quad \alpha_l := \frac{\delta x_l^T\bigl(I - H_lF'(x_l)\bigr)\delta x_l}{\delta x_l^T\,\delta x_l} \neq 1 \tag{4.6}$$
is assumed. Since the descent approximation is based on $\delta x_l$, we denote the correction $\delta x_l$ by the term \emph{basic approximation}.

Remark 4.3 The scalar $\alpha_l$ may be determined either by employing the direct tangent evaluation $F'(x_l)\,\delta x_l$ (forward mode of AD) or the adjoint evaluation $\delta x_l^T H_lF'(x_l)$ (reverse mode of AD). Though the second one is slightly more expensive than the first one in terms of floating point operations, see [15], we opt for the second evaluation. From a short calculation or (4.16) it is seen that the adjoint evaluation equals the negative gradient of the APNLF at $x$. Since one of our monitors is given by the angle between $\overline{\delta x}$ and the transposed negative gradient of the APNLF at $x$, we need this evaluation anyway. □
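To make the quantities concrete, here is a small sketch reusing the made-up $F$ and the AD products from the snippet above; the matrix H is an arbitrary stand-in for an approximation of $F'(x)^{-1}$, not one produced by any method of this work:

```python
import jax
import jax.numpy as jnp

# F and x as in the previous snippet; H is a made-up approximation to F'(x)^{-1}.
H = jnp.array([[0.4, -0.1], [0.1, 0.9]])

Fx = F(x)
dx = -H @ Fx                      # basic approximation, cf. (4.1)

# Adjoint evaluation (dx^T H) F'(x) via reverse mode; by (4.16) this
# equals the negative gradient of the APNLF at x.
_, vjp_fun = jax.vjp(F, x)
(g,) = vjp_fun(H.T @ dx)          # row vector dx^T H F'(x)

alpha = ((dx @ dx) - (g @ dx)) / (dx @ dx)  # cf. (4.6); g @ dx = dx^T H F'(x) dx
dx_bar = dx / (1.0 - alpha)                 # descent approximation, cf. (4.6)

T_x = 0.5 * (dx @ dx)             # APNLF at x: since P H F(x) = -dx,
                                  # T(x|PH) = 0.5 * ||dx||_2^2
```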

The notation \emph{descent approximation} for $\overline{\delta x}$ is indeed justifiable, as the following proposition shows.

[Figure 4.1: Visualization of the descent property of $\overline{\delta x}$; the Newton correction is denoted by $\Delta x$, and $-\mathrm{grad}\,T(x\,|\,PH)^T$, $\delta x$ and $\overline{\delta x}$ are shown in both panels. Panel (a): descent ensured and good approximation to $\Delta x$. Panel (b): though descent holds, the approximation quality of $\overline{\delta x}$ is bad.]

Proposition 4.4 (Descent property) Let the descent approximation (4.6) be well defined and nonzero. Furthermore, let $\Delta x_l$ be the Newton correction at $x_l$. Then the descent property
$$\frac{d}{d\lambda}\, T(x_l+\lambda\overline{\delta x}_l \,|\, P_lH_l)\Big|_{\lambda=0} = -2\,T(x_l \,|\, P_lH_l) = \frac{d}{d\lambda}\, T(x_l+\lambda\Delta x_l \,|\, P_lH_l)\Big|_{\lambda=0}$$
holds.

Proof. Differentiation yields
$$\frac{d}{d\lambda}\, T(x_l+\lambda\overline{\delta x}_l \,|\, P_lH_l)\Big|_{\lambda=0} = \bigl(P_lH_lF(x_l)\bigr)^T P_lH_lF'(x_l)\,\overline{\delta x}_l = \bigl(H_lF(x_l)\bigr)^T H_lF'(x_l)\,\overline{\delta x}_l.$$
Since
$$\delta x_l = -H_lF(x_l), \qquad \frac{1}{1-\alpha_l} = \frac{\delta x_l^T\,\delta x_l}{\delta x_l^T H_lF'(x_l)\,\delta x_l} \qquad\text{and}\qquad \delta x_l^T\,\delta x_l = 2\,T(x_l \,|\, P_lH_l),$$
we obtain
$$\bigl(H_lF(x_l)\bigr)^T H_lF'(x_l)\,\overline{\delta x}_l = -\delta x_l^T H_lF'(x_l)\,\frac{1}{1-\alpha_l}\,\delta x_l = -\delta x_l^T\,\delta x_l = -2\,T(x_l \,|\, P_lH_l).$$
The relation w.r.t. the Newton direction is just (4.5), which follows from (2.12). ■

For the following discussion we will drop the iteration index and abbreviate $J := F'(x)$.
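In the index-free notation just introduced, a quick numerical sanity check of Proposition 4.4 is possible with the made-up data from the previous sketches: a finite-difference slope of the APNLF along $\overline{\delta x}$ should reproduce $-2\,T(x\,|\,PH)$ up to discretization error.

```python
import jax.numpy as jnp

# Continues the previous sketches: F, x, H, dx, dx_bar, T_x assumed defined.
def T_apnlf(y, dx, H):
    # T(y|PH) = 0.5 * ||P H F(y)||^2 with P the orthogonal projector onto dx.
    HFy = H @ F(y)
    proj = dx * (dx @ HFy) / (dx @ dx)
    return 0.5 * (proj @ proj)

eps = 1e-4  # fairly large step, since JAX defaults to float32
slope = (T_apnlf(x + eps * dx_bar, dx, H) - T_apnlf(x, dx, H)) / eps
print(float(slope), float(-2.0 * T_x))  # should agree up to O(eps)
```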

The update of $\delta x$ to obtain $\overline{\delta x}$ has a nice geometric interpretation: the above descent property of $\overline{\delta x}$ is accomplished by ensuring that the orthogonal projections of $\overline{\delta x}$ and of the Newton correction $\Delta x$ onto the transposed gradient of the APNLF at the current iterate coincide; see also Figure 4.1. Certainly, such a property cannot hold if the basic approximation $\delta x$ and $-\mathrm{grad}\,T(x\,|\,PH)^T = \bigl(\delta x^T HJ\bigr)^T$ are perpendicular, since the factor $1/(1-\alpha)$ is only capable of changing the length and orientation of a vector. This is directly reflected by the value of $\alpha$, since $\alpha = 1$ if and only if $\delta x$ and $-\mathrm{grad}\,T(x\,|\,PH)^T = \bigl(\delta x^T HJ\bigr)^T$ are perpendicular. If this is the case, a better approximation to the inverse of $J$ is required. In the next section we will provide techniques to obtain such better approximations, which can also be used to avoid a situation like the one depicted in Figure 4.1(b).

For $\overline{\delta x}$ from (4.6) the following result about the relative change of the APNLF is true.

Theorem 4.5 Let $F$ fulfill Assumption 2.1 and let $x \in \mathcal{D}$ such that $F(x) \neq 0$. Let $J := F'(x)$. For given $H \in \mathbb{R}^{n\times n}$ suppose that $\delta x$ from (4.1) is nonzero. Furthermore, let $\overline{\delta x}$ be given according to (4.6). Then for all $\lambda \in \Lambda$ with
$$\Lambda := \{\lambda \in (0,1] \,|\, x+\lambda\overline{\delta x} \in \mathcal{D}\} \tag{4.7}$$
we obtain for the APNLF
$$\frac{T(x+\lambda\overline{\delta x} \,|\, PH)}{T(x \,|\, PH)} = \bigl(1-\lambda+\bar\mu(\lambda)\bigr)^2 \tag{4.8}$$
with $\bar\mu(\lambda)$ given as
$$\bar\mu(\lambda) := -\frac{\delta x^T}{\delta x^T\delta x}\, H\bigl(F(x+\lambda\overline{\delta x}) - F(x) - \lambda J\overline{\delta x}\bigr) = \frac{\delta x^T\delta x_+}{\delta x^T\delta x} - (1-\lambda) \tag{4.9}$$
where $\delta x_+ := -HF(x+\lambda\overline{\delta x})$.

Proof. We abbreviate $F := F(x)$. It holds that
$$PHF(x+\lambda\overline{\delta x}) = PHF + \lambda PHJ\overline{\delta x} + P\chi(\lambda) \quad\text{where}\quad \chi(\lambda) := H\bigl(F(x+\lambda\overline{\delta x}) - F - \lambda J\overline{\delta x}\bigr).$$
Since
$$\frac{1}{1-\alpha} = \frac{\delta x^T\delta x}{\delta x^T HJ\,\delta x} \tag{4.10}$$
and due to the definition of $\overline{\delta x}$ we have
$$\frac{\delta x^T HJ\,\overline{\delta x}}{\delta x^T\delta x} = 1. \tag{4.11}$$
Hence,
$$PHJ\overline{\delta x} = \delta x = -HF = -PHF.$$
By the definition of $\bar\mu(\lambda)$ it is clear that
$$P\chi(\lambda) = \bar\mu(\lambda)\, HF$$
and therefore $P\chi(\lambda) = \bar\mu(\lambda)\, PHF$. Putting things together, we obtain
$$PHF(x+\lambda\overline{\delta x}) = \bigl(1-\lambda+\bar\mu(\lambda)\bigr)\, PHF,$$
which implies (4.8). From the definition of $\delta x_+$ and by means of (4.11) it directly follows that
$$\bar\mu(\lambda) = \frac{\delta x^T\delta x_+}{\delta x^T\delta x} - (1-\lambda). \qquad\blacksquare$$

Remark 4.6 The relation (4.8) provides the basis for the globalization approach based on quasi-Newton updates which we will discuss in Section 4.4. Due to the structural analogy of $\bar\mu(\lambda)$ to $\mu(\lambda)$ from (4.4), it is straightforward to adapt the step size strategies from Section 3.4; see Subsection 4.4.6 for details. □
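Since (4.8) is an exact identity, it can be checked numerically; here is a sketch with the made-up data from the earlier snippets (enabling `jax_enable_x64` before building the arrays is advisable so rounding does not obscure the comparison):

```python
import jax.numpy as jnp

# Continues the sketches above: F, x, H, dx, dx_bar, T_apnlf assumed defined.
lam = 0.5
x_plus = x + lam * dx_bar

lhs = T_apnlf(x_plus, dx, H) / T_apnlf(x, dx, H)   # left-hand side of (4.8)

dx_plus = -H @ F(x_plus)                            # cf. Theorem 4.5
mu_bar = (dx @ dx_plus) / (dx @ dx) - (1.0 - lam)   # second expression in (4.9)

rhs = (1.0 - lam + mu_bar) ** 2
print(float(lhs), float(rhs))  # identical up to rounding
```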

Due to the projectional aspect of the APNLF and PNLF and the descent property of $\overline{\delta x}$, we obtain by means of (4.8) an expression for the relative change of the APNLF which is structurally analogous to the one for the PNLF, cf. (4.3). However, this \emph{does not} necessarily imply equal behavior: due to a poor approximation quality of $H$, the quantity $\bar\mu(\lambda)$ may produce values which are far off from the ones given by $\mu(\lambda)$ of (4.4). We ask for sufficient conditions such that $\bar\mu(\lambda) = \mu(\lambda)$ for all $\lambda \in \Lambda$. An answer is provided by the next theorem.

Theorem 4.7 Let $F$ fulfill Assumption 2.1 and let $x \in \mathcal{D}$ such that $F(x) \neq 0$. Let $J := F'(x)$ be nonsingular and $\Delta x$ be the Newton correction at $x$. Define for given $H \in \mathbb{R}^{n\times n}$ the basic approximation $\delta x$ via (4.1) and let $\overline{\delta x}$ be given according to (4.6). Furthermore, let $\bar\mu(\lambda)$ be as in (4.9) and $\mu(\lambda)$ defined via (4.4). Then,
$$\delta x = \Delta x \quad\text{and}\quad \delta x^T HJ = \delta x^T \tag{4.12a}$$
implies that
$$\overline{\delta x} = \Delta x \quad\text{and}\quad \frac{\delta x^T HJ}{\delta x^T\delta x} = \frac{\overline{\delta x}^T}{\overline{\delta x}^T\overline{\delta x}}, \tag{4.12b}$$
which in turn implies that
$$\bar\mu(\lambda) = \mu(\lambda) \quad \forall\,\lambda \in \Lambda \tag{4.12c}$$
with $\Lambda$ from (4.7).

Proof. We abbreviate $F := F(x)$. If $\delta x = \Delta x$ then $\delta x \neq 0$ since $\Delta x \neq 0$. Also,
$$HJ\,\delta x = \delta x,$$
which implies that
$$\alpha = \frac{\delta x^T(I - HJ)\,\delta x}{\delta x^T\delta x} = 0$$
and therefore $\overline{\delta x} = \delta x$, which by the assumptions means that the first part of (4.12b), i.e. $\overline{\delta x} = \Delta x$, is true. From $\overline{\delta x} = \delta x$ and $\delta x^T HJ = \delta x^T$ it directly follows that the second part of (4.12b) holds.

Now assume (4.12b) to be valid. Since $\overline{\delta x}$ is well defined we have $\delta x \neq 0$. Employing the definition of $\delta x_+$ and (4.9) we obtain
$$\begin{aligned}
\bar\mu(\lambda) &= -\frac{\delta x^T HJ\,J^{-1}F(x+\lambda\overline{\delta x})}{\delta x^T\delta x} - (1-\lambda) = -\frac{\overline{\delta x}^T J^{-1}F(x+\lambda\overline{\delta x})}{\overline{\delta x}^T\overline{\delta x}} - (1-\lambda)\\
&= -\frac{\Delta x^T J^{-1}F(x+\lambda\Delta x)}{\Delta x^T\Delta x} - (1-\lambda)\\
&= -\frac{\Delta x^T}{\Delta x^T\Delta x}\, J^{-1}\bigl(F(x+\lambda\Delta x) - F - \lambda J\Delta x\bigr) = \mu(\lambda).
\end{aligned} \tag{4.13}$$
This concludes the proof. ■

The condition (4.12b) has a nice geometrical interpretation. We define for $y, z \in \mathbb{R}^n \setminus \{0\}$ the angle between these vectors via
$$\angle(y, z) := \arccos\left[\frac{y^Tz}{\|y\|_2\cdot\|z\|_2}\right] \in [0, \pi].$$

With this definition at hand we can state

Corollary 4.8 Let the assumptions and definitions from Theorem 4.7 be given and additionally $P$ according to Definition 4.2. Then the condition (4.12b) is equivalent to
$$\angle(\overline{\delta x}, \Delta x) = 0 \quad\text{and}\quad \angle\bigl(\overline{\delta x}, -\mathrm{grad}\,T(x\,|\,PH)^T\bigr) = 0. \tag{4.14}$$

Proof. We abbreviate again $F := F(x)$.

First, let (4.12b) be true. From $\overline{\delta x} = \Delta x$ it directly follows that $\angle(\overline{\delta x}, \Delta x) = 0$. Furthermore,
$$\frac{\overline{\delta x}^T}{\overline{\delta x}^T\overline{\delta x}} = (1-\alpha)^2\,\frac{\overline{\delta x}^T}{\delta x^T\delta x}. \tag{4.15}$$
Hence, the second part of (4.12b) implies that
$$\frac{1}{(1-\alpha)^2}\,\delta x^T HJ = \overline{\delta x}^T.$$
With
$$-\mathrm{grad}\,T(x\,|\,PH) = \delta x^T HJ \tag{4.16}$$
and the fact that $(1-\alpha)^2 > 0$ we verify that the second part of (4.14) holds true.

Now suppose that the condition (4.14) is valid. Then, from $\angle(\overline{\delta x}, \Delta x) = 0$ it follows that $\overline{\delta x} = \beta\Delta x$ for some $\beta > 0$. By means of Proposition 4.4 we obtain
$$\mathrm{grad}\,T(x\,|\,PH)\,\overline{\delta x} = \mathrm{grad}\,T(x\,|\,PH)\,\Delta x.$$
Hence, the scalar $\beta$ must be equal to one, which simply means that $\overline{\delta x} = \Delta x$. By (4.16) and the second part of (4.14) we know that there is a $\gamma > 0$ such that
$$\overline{\delta x}^T = \gamma\,\delta x^T HJ.$$
Multiplying by $\delta x$ from the right and exploiting (4.10) yields
$$\gamma = \frac{\overline{\delta x}^T\delta x}{\delta x^T HJ\,\delta x} = \frac{1}{(1-\alpha)^2}.$$
So it holds that
$$\frac{\delta x^T HJ}{\delta x^T\delta x} = \frac{\overline{\delta x}^T}{\overline{\delta x}^T\overline{\delta x}}.$$
This is just the second part of (4.12b). ■
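For intuition, here is a sketch of the two angles in (4.14) for the made-up data; the exact Newton correction is formed with the full Jacobian purely as a reference, which is precisely what one wants to avoid in practice:

```python
import jax
import jax.numpy as jnp

# Continues the sketches above: F, x, H, dx, dx_bar and g = dx^T H F'(x).
def angle(y, z):
    # The angle between two nonzero vectors, cf. the definition above.
    c = (y @ z) / (jnp.linalg.norm(y) * jnp.linalg.norm(z))
    return jnp.arccos(jnp.clip(c, -1.0, 1.0))

J = jax.jacfwd(F)(x)                      # full Jacobian, reference only
newton = -jnp.linalg.solve(J, F(x))       # exact Newton correction

beta = angle(dx_bar, g)                   # angle to -grad T(x|PH)^T
print(float(beta), float(angle(dx_bar, newton)))  # both zero iff (4.12b) holds
```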

If $F(x) = 0$ is a scalar problem, i.e. $n = 1$, then according to (4.10) the factor $1/(1-\alpha)$ simplifies to $(HJ)^{-1}$, which by the definition of $\delta x$ implies that $\overline{\delta x} = \Delta x$. Keeping that in mind, a short calculation shows that also $\angle\bigl(\overline{\delta x}, -\mathrm{grad}\,T(x\,|\,PH)^T\bigr) = 0$ holds. However, for $n \geq 2$ the descent property of $\overline{\delta x}$ does not necessarily imply (4.14). Since $\angle(\overline{\delta x}, \Delta x)$ is practically not available, we will provide a computable estimate for it. This estimate will be derived from the following upper bound, which takes the descent property of $\overline{\delta x}$ into account.

Theorem 4.9 For $n \geq 2$ suppose that $F$ fulfills Assumption 2.1 and let $x \in \mathcal{D}$ such that $F(x) \neq 0$ and $F'(x)$ is nonsingular. The Newton correction at $x$ is denoted by $\Delta x$. For given $H \in \mathbb{R}^{n\times n}$ let $\delta x$ be defined via (4.1) and different from zero. Assume that $\overline{\delta x}$ is given according to (4.6). Let
$$\beta := \angle\bigl(\overline{\delta x}, -\mathrm{grad}\,T(x\,|\,PH)^T\bigr) \tag{4.17}$$
and let $r \geq 0$ be an upper bound for $\|\overline{\delta x} - \Delta x\|_2$ such that
$$r_{\mathrm{rel}} := \frac{r}{\|\overline{\delta x}\|_2} < 1. \tag{4.18}$$
Define
$$\widehat{\angle}(\overline{\delta x}, \Delta x) := \begin{cases} \arcsin(r_{\mathrm{rel}}) & \text{if } n \geq 3,\ \beta > 0 \text{ and } r_{\mathrm{rel}}/\sin(\beta) \leq 1,\\[1ex] \arccos\left[\bigl(1 - r_{\mathrm{rel}}\sin(\beta)\bigr)\cdot\left\|\begin{pmatrix} r_{\mathrm{rel}}\sin(\beta) - 1\\ r_{\mathrm{rel}}\cos(\beta) \end{pmatrix}\right\|_2^{-1}\right] & \text{otherwise.} \end{cases} \tag{4.19}$$
Then
$$\angle(\overline{\delta x}, \Delta x) \leq \widehat{\angle}(\overline{\delta x}, \Delta x). \tag{4.20}$$

Proof. If $r = 0$ then $\overline{\delta x} = \Delta x$, hence $\angle(\overline{\delta x}, \Delta x) = 0$ and (4.20) is valid in this case. For the remainder of the proof let $r > 0$.

The following lines of the proof are based on geometrical arguments. Therefore, we will provide several figures as part of the proof. For the sake of convenience we identify the symbols for given points in a figure like, say, $p_1$ and $p_2$ with their respective position vectors. Hence, $\|p_1\|_2$, $\|p_1 - p_2\|_2$ and $\angle(p_1, p_2)$ are valid vector operations.

We split the remainder of the proof into two parts. First the case $n = 2$ is considered, then the case $n \geq 3$. Note that in any case it holds by the descent property of $\overline{\delta x}$ that
$$0 \leq \beta < \tfrac{\pi}{2}. \tag{4.21}$$

$n = 2$: W.l.o.g. we may choose an orthonormal basis $\{v_1, v_2\}$ of $\mathbb{R}^2$ such that with $V = (v_1\ v_2)$ we can write
$$-\mathrm{grad}\,T(x\,|\,PH)^T = Vg \quad\text{where}\quad g := \bigl(\|\mathrm{grad}\,T(x\,|\,PH)\|_2,\, 0\bigr)^T,$$
$$\overline{\delta x} = Vm \quad\text{where}\quad m := \bigl(m^{(1)}, m^{(2)}\bigr)^T \text{ with } m^{(1)} > 0 \text{ and } m^{(2)} \geq 0.$$
The scalar $m^{(1)}$ is positive due to the descent property of $\overline{\delta x}$.

Note that for arbitrary $w \in \mathbb{R}^2$ there is a unique $\xi_w \in \mathbb{R}^2$ such that
$$w = V\xi_w, \qquad \|w\|_2 = \|\xi_w\|_2. \tag{4.22a}$$
Also, with an additional $z \in \mathbb{R}^2$ it holds that
$$\angle(w, z) = \angle(\xi_w, \xi_z). \tag{4.22b}$$

The descent property of $\overline{\delta x}$ implies that $\Delta x = m^{(1)}v_1 + \tau v_2$ for some $\tau \in \mathbb{R}$. Hence,
$$\Delta x \in \ell \cap B$$
where
$$\ell := \{y \in \mathbb{R}^2 \,|\, y = m + s\cdot(0,1)^T,\ s \in \mathbb{R}\} \quad\text{and}\quad B := \{y \in \mathbb{R}^2 \,|\, \|y - m\|_2 \leq r\}.$$
In order to obtain an upper bound for $\angle(\overline{\delta x}, \Delta x)$ we consider $\ell \cap \partial B$, where $\partial B$ denotes the boundary of $B$. This intersection contains only two points $p_1$, $p_2$, see Figure 4.2. Clearly,

$$\angle(\overline{\delta x}, \Delta x) \leq \max_{i=1,2}\ \angle(m, p_i).$$
Let $\underline{\alpha} := \angle(m, p_1)$ and $\overline{\alpha} := \angle(m, p_2)$, and note that
$$r/\|m\|_2 = r_{\mathrm{rel}}. \tag{4.23}$$

[Figure 4.2: The intersection of $\partial B$ and $\ell$ consists of the two points $p_1$ and $p_2$.]

In coordinates we have $m = \|m\|_2\,(\cos(\beta), \sin(\beta))^T$ and $p_{1,2} = m \pm r\,(0,1)^T$. Comparing $\underline{\alpha}$ and $\overline{\alpha}$ by means of the law of sines, with the auxiliary point $p_3 \in \mathrm{span}(g)$ given as in Figure 4.3, shows that $\underline{\alpha} \leq \overline{\alpha}$ and that
$$\|p_2\|_2 = \|m\|_2\cdot\left\|\begin{pmatrix} r_{\mathrm{rel}}\sin(\beta) - 1\\ r_{\mathrm{rel}}\cos(\beta) \end{pmatrix}\right\|_2. \tag{4.24}$$
To compute $\overline{\alpha}$ we exploit that $\|m\|_2$ can be written as
$$\|p_2\|_2\cos(\overline{\alpha}) + r\cos(\gamma) = \|m\|_2.$$
Also, $\gamma = \pi/2 - \beta$, which implies that $\cos(\gamma) = \sin(\beta)$ and $\sin(\gamma) = \cos(\beta)$. Hence, by means of (4.23) and (4.24) we obtain
$$\overline{\alpha} = \arccos\left[\bigl(1 - r_{\mathrm{rel}}\sin(\beta)\bigr)\cdot\left\|\begin{pmatrix} r_{\mathrm{rel}}\sin(\beta) - 1\\ r_{\mathrm{rel}}\cos(\beta) \end{pmatrix}\right\|_2^{-1}\right].$$
This is just what is stated in (4.19).

$n \geq 3$: To obtain an upper bound for $\angle(\overline{\delta x}, \Delta x)$ which takes the descent property of $\overline{\delta x}$ into account, only a three-dimensional (sub-)space of $\mathbb{R}^n$ which contains $\mathrm{grad}\,T(x\,|\,PH)^T$, $\overline{\delta x}$ and $\Delta x$ needs to be considered. In analogy to the case $n = 2$ we may w.l.o.g. choose a basis $\{v_1, v_2, v_3\}$ of the above (sub-)space such that with $V = (v_1\ v_2\ v_3)$ it holds that $V^TV = I$ and
$$-\mathrm{grad}\,T(x\,|\,PH)^T = Vg \quad\text{where}\quad g := \bigl(\|\mathrm{grad}\,T(x\,|\,PH)\|_2,\, 0,\, 0\bigr)^T,$$
$$\overline{\delta x} = Vm \quad\text{where}\quad m := \bigl(m^{(1)}, m^{(2)}, 0\bigr)^T \text{ with } m^{(1)} > 0 \text{ and } m^{(2)} \geq 0,$$
$$\Delta x = V\nu \quad\text{for some}\quad \nu \in \mathbb{R}^3.$$
Certainly, properties like (4.22) are also true. Let
$$B := \{y \in \mathbb{R}^3 \,|\, \|y - m\|_2 \leq r\}$$
and let $\partial B$ be the boundary of $B$. Furthermore, let
$$C := \{t \in \mathbb{R}^3 \,|\, \exists!\,\sigma > 0 \text{ s.t. } \sigma\cdot t \in \partial B\}$$
be the cone of tangent vectors w.r.t. $B$. From geometrical evidence and $r_{\mathrm{rel}} < 1$, cf. Figure 4.4, it follows that
$$\angle(t, m) = \mathrm{const.} =: \alpha_t < \tfrac{\pi}{2} \quad \forall\, t \in C$$
and
$$\angle(m, y) < \alpha_t \quad \forall\, y \in B \setminus (C \cap \partial B). \tag{4.25}$$
Thus,
$$\angle(m, \nu) \leq \alpha_t.$$
Since $\angle(\overline{\delta x}, \Delta x) = \angle(m, \nu)$, a first upper bound is provided by $\alpha_t$. By means of Figure 4.4 it is easy to see that $\alpha_t$ can be calculated via
$$\alpha_t = \arcsin\left(\frac{r}{\|m\|_2}\right) = \arcsin(r_{\mathrm{rel}}). \tag{4.26}$$

In the following we will derive to what extent a refinement of the bound $\alpha_t$ is possible by taking the descent property of $\overline{\delta x}$ into account.

The intersection $C \cap \partial B$ uniquely defines a plane $E_t$ such that $C \cap \partial B \subset E_t$. Furthermore, let
$$E_d := \{y \in \mathbb{R}^3 \,|\, g^Ty = 2\,T(x\,|\,PH)\}.$$

[Figure 4.4: $\angle(t, m) =: \alpha_t$ provides an upper bound for $\angle(\overline{\delta x}, \Delta x)$; shown are $m$, $u$, $t$, $r$ and $\partial B$.]

It holds that $m, \nu \in E_d$. By the definition of $E_t$ and $E_d$, the intersection $E_t \cap E_d =: \ell_{td}$ is a straight line in case $\beta > 0$, or the empty set if $\beta = 0$. In addition, with $E_{12} := \mathrm{span}\bigl((1,0,0)^T, (0,1,0)^T\bigr)$, the intersection $\ell_{td} \cap E_{12} =: A$ either consists of a single point $a$ if $\beta > 0$, or is empty in case of $\beta = 0$, cf. Figure 4.5. Let
$$\mathrm{diff}_{a,m} := \begin{cases} \|a - m\|_2 & \text{if } \beta > 0,\\ \infty & \text{if } \beta = 0. \end{cases}$$
Then,
$$\exists\,\hat y \in (C \cap \partial B \cap E_d) \;\Leftrightarrow\; \ell_{td} \cap \partial B \neq \emptyset \;\Leftrightarrow\; \emptyset \neq A \subset (B \cap E_{12}) \;\Leftrightarrow\; \mathrm{diff}_{a,m} \leq r. \tag{4.27}$$

Refer again to Figure 4.5 for aid to verify this. So if $\mathrm{diff}_{a,m} \leq r$ we cannot exclude that $\hat y = \nu$. Hence, $\widehat{\angle}(\overline{\delta x}, \Delta x)$ shall be equal to $\alpha_t$ in this case. We show now that this is true. From $\mathrm{diff}_{a,m} \leq r$ it follows that $\beta > 0$. By means of Figure 4.5 we obtain
$$\mathrm{diff}_{a,m} = \|a - m\|_2 = \frac{\|m\|_2 - \|u\|_2}{\sin(\beta)}.$$
From Figure 4.4 it follows that $\|u\|_2 = \|m\|_2\cos^2(\alpha_t)$ and $r = \|m\|_2\sin(\alpha_t)$. Hence,
$$\mathrm{diff}_{a,m} = \frac{\|m\|_2\bigl(1 - \cos^2(\alpha_t)\bigr)}{\sin(\beta)} = \frac{r\,\sin(\alpha_t)}{\sin(\beta)},$$
which means that
$$\mathrm{diff}_{a,m} \leq r \;\Leftrightarrow\; \frac{\sin(\alpha_t)}{\sin(\beta)} \leq 1 \;\overset{(4.26)}{\Leftrightarrow}\; \frac{r_{\mathrm{rel}}}{\sin(\beta)} \leq 1.$$
Therefore, by the definition of $\widehat{\angle}(\overline{\delta x}, \Delta x)$ in (4.19) it indeed holds that $\widehat{\angle}(\overline{\delta x}, \Delta x) = \alpha_t$ in this case.

Now let $\mathrm{diff}_{a,m} > r$, i.e., one of the cases depicted in Figure 4.5(c) or 4.5(d), respectively, is true. Then $\nu \notin E_t$ but still $\nu \in (E_d \cap B)$. Hence,
$$\angle(m, \nu) \leq \max_{y \in (E_d \cap \partial B)}\ \angle(m, y). \tag{4.28}$$

In order to determine the maximum in (4.28) we exploit that
$$y \in (E_d \cap \partial B) \;\Leftrightarrow\; \exists\,\varphi \in [0, 2\pi) \text{ s.t. } y = y(\varphi) := m + r\bigl(\cos(\varphi)\,w_1 + \sin(\varphi)\,w_2\bigr),$$
where $\{w_1, w_2\}$ is an orthonormal basis of the plane through the origin parallel to $E_d$. Consider $\psi(\varphi) := \angle(m, y(\varphi))$ and let first $\beta > 0$. By means of a computer algebra system it is easily seen that
$$\psi'(\varphi) = r^2\cdot\sin(\varphi)\cdot m^{(2)}\cdot\bigl(m^{(2)}\cos(\varphi) + r\bigr)\cdot\tilde\psi(\varphi) \quad\text{where}\quad \tilde\psi(\varphi) \geq 0 \ \ \forall\,\varphi \in [0, 2\pi).$$
The assumption $\mathrm{diff}_{a,m} > r$ is equivalent to $m^{(2)} < r$, see Figure 4.5 for evidence. Thus, extrema of $\psi$ may only be attained for $\varphi = 0$ and $\varphi = \pi$. A discussion of $\psi''$ shows that $\varphi = \pi$ is the unique maximizer of $\psi$. Therefore,
$$\psi(\pi) = \max_{y \in (E_d \cap \partial B)}\ \angle(m, y).$$
Thus, the point $y(\pi)$ can be identified with $p_2$ from the discussion of the case $n = 2$, leading to the same expression for $\widehat{\angle}(\overline{\delta x}, \Delta x)$ we already established there. If $\beta = 0$ then $\psi(\varphi) = \mathrm{const.}$ and any value of $\varphi$ will yield the maximum in (4.28), especially $\varphi = \pi$, leading to the same result as for $\beta > 0$. ■

Recall that we assume the product $\delta x^T H$ and adjoint evaluations $w^TJ$, $w \in \mathbb{R}^n$, to be available. Therefore, $\beta$ from (4.17) is computable. Hence, to provide an estimate for $\widehat{\angle}(\overline{\delta x}, \Delta x)$ we have to provide an estimate for $r_{\mathrm{rel}}$ from (4.18). To do so, the following auxiliary result is given.

Proposition 4.10 Suppose that $F$ fulfills Assumption 2.1 and for $x \in \mathcal{D}$ let $F(x) \neq 0$. Furthermore, let the associated Newton correction $\Delta x$ be well defined, i.e., assume that $J := F'(x)$ is nonsingular. Define for $H \in \mathbb{R}^{n\times n}$ the basic approximation $\delta x$ according to (4.1). Consider $\alpha$ and $\overline{\delta x}$ defined according to (4.6). Furthermore, suppose that
$$\Bigl\|I - \tfrac{1}{1-\alpha}\,HJ\Bigr\|_2 < 1. \tag{4.29}$$
Then $H$ is nonsingular and one has
$$\frac{\|\overline{\delta x} - \Delta x\|_2}{\|\overline{\delta x}\|_2} \leq \frac{1}{1 - \bigl\|I - \tfrac{1}{1-\alpha}HJ\bigr\|_2}\cdot\frac{\bigl\|\bigl(I - \tfrac{1}{1-\alpha}HJ\bigr)\,\overline{\delta x}\bigr\|_2}{\|\overline{\delta x}\|_2}. \tag{4.30}$$
The above bound is tight in the sense that $\overline{\delta x} = \Delta x$ implies that the bound is equal to zero.

Proof. We abbreviate $F := F(x)$ and $\overline{H} := \tfrac{1}{1-\alpha}H$. From the Perturbation Lemma and (4.29) it follows that $\overline{H}J$ is nonsingular. Hence, $H$ is nonsingular and we obtain
$$\overline{\delta x} - \Delta x = J^{-1}F - \overline{H}F = (\overline{H}J)^{-1}\overline{H}F - \overline{H}F = -(\overline{H}J)^{-1}\bigl(I - \overline{H}J\bigr)\,\overline{\delta x}.$$
The Perturbation Lemma also yields $\|(\overline{H}J)^{-1}\|_2 \leq \bigl(1 - \|I - \overline{H}J\|_2\bigr)^{-1}$, so that
$$\|\overline{\delta x} - \Delta x\|_2 \leq \frac{\bigl\|\bigl(I - \overline{H}J\bigr)\,\overline{\delta x}\bigr\|_2}{1 - \|I - \overline{H}J\|_2}; \tag{4.31}$$
dividing (4.31) by $\|\overline{\delta x}\|_2$ yields (4.30). If $\overline{\delta x} = \Delta x$ then
$$\overline{H}J\,\overline{\delta x} = \overline{H}J\,\Delta x = -\overline{H}F = \overline{\delta x}.$$
Thus, the bound in (4.30) becomes zero. ■
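A quick numerical check of (4.30) on the made-up data from the earlier sketches is possible; the full Jacobian and the operator norm are formed here for verification only, which is exactly what the estimates below avoid:

```python
import jax
import jax.numpy as jnp

# Continues the sketches above: F, x, H, dx_bar, alpha, newton assumed defined.
J = jax.jacfwd(F)(x)
M = jnp.eye(2) - (H @ J) / (1.0 - alpha)      # I - (1/(1-alpha)) H J
opn = jnp.linalg.norm(M, ord=2)               # exact operator norm, cf. (4.29)

lhs = jnp.linalg.norm(dx_bar - newton) / jnp.linalg.norm(dx_bar)
rhs = jnp.linalg.norm(M @ dx_bar) / ((1.0 - opn) * jnp.linalg.norm(dx_bar))
print(float(lhs), float(rhs))                  # lhs <= rhs provided opn < 1
```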

For the bound (4.30) we need to compute $HJ\,\delta x$. This is one of the products from (4.2), which is assumed to be available. Note that $J\delta x$ is efficiently computable via the forward mode of AD.

Unfortunately, the bound (4.30) also depends on an operator norm, namely $\bigl\|I - \tfrac{1}{1-\alpha}HJ\bigr\|_2$, which is usually computationally not available. So we will estimate it, and such an estimate should introduce only minor computational effort. Recall from Remark 4.3 that the product $\delta x^T HJ$ was used to determine $\alpha$ and therefore is available. Furthermore, the bound (4.30) itself involves the quantity $\bigl\|\bigl(I - \tfrac{1}{1-\alpha}HJ\bigr)\delta x\bigr\|_2$. We therefore define
$$\mathrm{opn}_{\mathrm{est}} := \max\left\{\frac{\bigl\|\bigl(I - \tfrac{1}{1-\alpha}HJ\bigr)\,\delta x\bigr\|_2}{\|\delta x\|_2},\ \frac{\bigl\|\delta x^T\bigl(I - \tfrac{1}{1-\alpha}HJ\bigr)\bigr\|_2}{\|\delta x\|_2}\right\} \tag{4.32}$$
as an estimate for $\bigl\|I - \tfrac{1}{1-\alpha}HJ\bigr\|_2$. To determine $\mathrm{opn}_{\mathrm{est}}$, only a computational effort of $O(n)$ floating point operations arises if we do not take the costs for $HJ\delta x$ and $\delta x^T HJ$ into account. Assume that $\mathrm{opn}_{\mathrm{est}} < 1$. Then according to the above proposition we define our estimate $r_{\mathrm{rel}}^{\mathrm{est}}$ for $r_{\mathrm{rel}}$ from (4.18) via
$$r_{\mathrm{rel}}^{\mathrm{est}} := \frac{1}{1 - \mathrm{opn}_{\mathrm{est}}}\cdot\frac{\bigl\|\bigl(I - \tfrac{1}{1-\alpha}HJ\bigr)\,\delta x\bigr\|_2}{\|\delta x\|_2}. \tag{4.33}$$
Note that, analogously to the bound (4.30), the estimate $r_{\mathrm{rel}}^{\mathrm{est}}$ is tight in the sense that $\overline{\delta x} = \Delta x$ implies that $r_{\mathrm{rel}}^{\mathrm{est}} = 0$. If $r_{\mathrm{rel}}^{\mathrm{est}} < 1$ we substitute it for $r_{\mathrm{rel}}$ in (4.19) to finally obtain our estimate $\angle^{\mathrm{est}}(\overline{\delta x}, \Delta x)$ for $\angle(\overline{\delta x}, \Delta x)$. For an algorithmic summary of computing $r_{\mathrm{rel}}^{\mathrm{est}}$ and $\angle^{\mathrm{est}}(\overline{\delta x}, \Delta x)$ refer to Algorithms 4.1 and 4.2.

Motivated by Theorem 4.7 and Corollary 4.8, we use the quantities $\beta = \angle\bigl(\overline{\delta x}, -\mathrm{grad}\,T(x\,|\,PH)^T\bigr)$ and $\angle^{\mathrm{est}}(\overline{\delta x}, \Delta x)$ to monitor the approximation quality of $\overline{\delta x}$ and the APNLF. Basically, we accept the descent approximation and the APNLF as a substitute for $\Delta x$ and the PNLF, respectively, if
$$\beta \leq \varphi \quad\text{and}\quad \angle^{\mathrm{est}}(\overline{\delta x}, \Delta x) \leq \psi \tag{4.34}$$
for predefined $0 \leq \varphi, \psi < \tfrac{\pi}{2}$. One of these checks may fail due to a bad approximation quality of $H$. Even worse, such an insufficiency of $H$ may lead to $\mathrm{opn}_{\mathrm{est}} \geq 1$ or $r_{\mathrm{rel}}^{\mathrm{est}} \geq 1$, making our estimate $\angle^{\mathrm{est}}(\overline{\delta x}, \Delta x)$ unavailable. Therefore, we need a concept to improve the quality of $H$. Our approach is based on quasi-Newton techniques, which we will discuss in the next section.

Algorithm 4.1 (Calculating $r_{\mathrm{rel}}^{\mathrm{est}}$)

1: given: $H$, $F := F(x) \neq 0$, $J := F'(x)$ nonsingular
2: determine $\delta x = -HF$ ⊲ cf. (4.1)
3: if $\delta x = 0$ then
4:   increase approximation quality of $H$ and reinvoke this algorithm
5: end if
6: determine $\hat g = \delta x^T H$
7: determine $g = \hat g J$ ⊲ i.e. $-\mathrm{grad}\,T(x\,|\,PH)$, done by reverse mode of AD
8: determine $s = \frac{\delta x^T\delta x}{\delta x^T HJ\,\delta x}$ ⊲ i.e. $\frac{1}{1-\alpha}$, cf. (4.10)
9: determine $u = J\delta x$ ⊲ done by forward mode of AD
10: determine $\tilde r = \delta x - s\cdot Hu$ ⊲ i.e. $\bigl(I - \tfrac{1}{1-\alpha}HJ\bigr)\delta x$
11: set $\mathrm{opn}_{\mathrm{est}} = \max\bigl(\|\tilde r\|_2/\|\delta x\|_2,\ \|\delta x^T - s\cdot g\|_2/\|\delta x\|_2\bigr)$ ⊲ cf. (4.32)
12: if $\mathrm{opn}_{\mathrm{est}} \geq 1$ then
13:   increase approximation quality of $H$ and reinvoke this algorithm
14: end if
15: set $r_{\mathrm{rel}}^{\mathrm{est}} = \frac{1}{1-\mathrm{opn}_{\mathrm{est}}}\cdot\frac{\|\tilde r\|_2}{\|\delta x\|_2}$ ⊲ cf. (4.33)
16: if $r_{\mathrm{rel}}^{\mathrm{est}} \geq 1$ then
17:   increase approximation quality of $H$ and reinvoke this algorithm
18: end if
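The following is a runnable sketch of Algorithm 4.1, again with JAX standing in for the two AD modes; the "increase approximation quality of H" branches are not implemented, the function simply signals failure instead:

```python
import jax
import jax.numpy as jnp

def calc_r_rel_est(F, x, H):
    """Sketch of Algorithm 4.1; returns (r_rel_est, g, dx, s) or None on failure."""
    Fx = F(x)
    dx = -H @ Fx                              # step 2, cf. (4.1)
    if float(dx @ dx) == 0.0:                 # steps 3-5: H must be improved
        return None
    _, vjp_fun = jax.vjp(F, x)
    (g,) = vjp_fun(H.T @ dx)                  # steps 6-7: g = (dx^T H) F'(x), reverse mode
    s = float((dx @ dx) / (g @ dx))           # step 8: s = 1/(1 - alpha), cf. (4.10)
    _, u = jax.jvp(F, (x,), (dx,))            # step 9: u = F'(x) dx, forward mode
    r_tilde = dx - s * (H @ u)                # step 10: (I - s*H*F'(x)) dx
    nrm = jnp.linalg.norm
    opn_est = max(float(nrm(r_tilde) / nrm(dx)),
                  float(nrm(dx - s * g) / nrm(dx)))  # step 11, cf. (4.32)
    if opn_est >= 1.0:                        # steps 12-14
        return None
    r_rel_est = float(nrm(r_tilde) / nrm(dx)) / (1.0 - opn_est)  # step 15, cf. (4.33)
    if r_rel_est >= 1.0:                      # steps 16-18
        return None
    return r_rel_est, g, dx, s
```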

Algorithm 4.2 (Calculating∠est(δx,∆x))

1:given:g,δxandrestrel<1 from Algorithm 4.1 2:determineβ= arccosh g·δx

kgk2·kδxk2

i ⊲cf. (4.17)

3:ifn= 2 || β= 0 || ¡

β >0 &&restrel/sin(β)>1¢

then ⊲cf. (4.19)

4: set∠est(δx,∆x) = arccos

¡

1−restrel·sin(β)¢

·

°°

°°

°

Ãrestrel·sin(β)−1 restrel·cos(β)

!°°°°°

−1

2

5:else

6: set∠est(δx,∆x) = arcsin(restrel) 7:end if