
We have also seen that the augmented optimization problem is harder to solve, as it becomes less sparse and more memory consuming. As a consequence, if we implemented these models within the core estimator, we would lose its block-tridiagonal structure and the associated advantages. Moreover, we have derived the AR(1) NLLSQ estimation procedure, but we do not yet have a graph-based representation for it. We are generally interested in such a representation because it helps us reason more intuitively about the underlying problem structure. For this reason we have already derived in Section 5.3 a prior node to represent the effect of marginalization on our sliding window pose graph. In the same spirit, we derive graph elements to represent constraints with autocorrelated noise in the following.
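To make the sparsity argument concrete, the following minimal numpy sketch illustrates the fill-in: the chain length, block indices, and random matrices are arbitrary toy choices and not taken from the estimator developed in this thesis. An ordinary odometry chain only populates the block tridiagonal of the information matrix, whereas a constraint coupling two consecutive measurements (and thus three poses) creates blocks outside that band.

\begin{verbatim}
import numpy as np

n_poses, d = 5, 3                      # toy chain of 5 poses, 3 DoF each
H = np.zeros((n_poses * d, n_poses * d))

def block(a, b):
    return np.s_[a * d:(a + 1) * d, b * d:(b + 1) * d]

# Ordinary odometry chain: contributions only on the block tridiagonal.
for i in range(n_poses - 1):
    J = np.random.randn(d, 2 * d)      # stand-in Jacobian of a binary edge
    Hij = J.T @ J
    H[block(i, i)]         += Hij[:d, :d]
    H[block(i, i + 1)]     += Hij[:d, d:]
    H[block(i + 1, i)]     += Hij[d:, :d]
    H[block(i + 1, i + 1)] += Hij[d:, d:]

# Autocorrelated noise couples two consecutive measurements, i.e. three
# poses (i, i+1, i+2): the (i, i+2) blocks fall outside the tridiagonal band.
i = 1
Jc = np.random.randn(d, 3 * d)
Hc = Jc.T @ Jc
H[block(i, i + 2)] += Hc[:d, 2 * d:]
H[block(i + 2, i)] += Hc[2 * d:, :d]

print((np.abs(H) > 0).astype(int))     # fill-in visible off the band
\end{verbatim}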

6.4.2. Constraints with autocorrelated noise can be understood with graph elements


[Figure 6.10: hidden nodes $x_i$, $x_j$; observed nodes $x^w_k$, $x^w_l$; edges labeled $(e^w_k, \Omega^{w,\mathrm{AR}}_k)$ and $(e^w_l, \Omega^{w,\mathrm{AR}}_l)$; hyperedge labeled $(e^{w,\mathrm{AR}}_m, \Omega^{w,\mathrm{AR}}_m)$.]

Figure 6.10.: The same part of a pose graph as in Figure 6.8, but this time we assume the noise of the observed nodes to be modeled as AR(1). We need an additional constraint to model the influence of the autocorrelated noise.

Figure 6.10 shows the same part of the graph as Figure 6.8, but this time we assume the noise of the observed nodes to be described as AR(1). The most striking difference compared to the graph in Figure 6.8 is the introduction of the hyperedge $z^{w,\mathrm{AR}}_m$ that connects both observed nodes and both hidden nodes simultaneously. Looking more closely, we also see that the edges from the observed to the hidden nodes have undergone a correction of their associated information matrices. We derive in the following that these three edges have the same influence on the underlying NLLSQ formulation as directly augmenting the stochastic model in the optimization problem, as we have done in Section 6.4.1. We conclude that these edges are therefore the graph-based representation of constraints from measurements with autocorrelated noise.
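As an illustration of what such a hyperedge could look like in software, the following minimal Python sketch models an edge that connects an arbitrary number of nodes and evaluates the summed error of (6.56) and the squared error of (6.61). All class and function names are hypothetical and only serve to make the graph elements of Figure 6.10 tangible; they are not part of the estimator developed in this thesis.

\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, Sequence
import numpy as np

@dataclass
class HyperEdge:
    """Constraint connecting an arbitrary number of graph nodes."""
    node_ids: Sequence[int]                  # e.g. hidden nodes (i, j)
    error_fn: Callable[..., np.ndarray]      # e(x_i, x_j, z_k, z_l)
    information: np.ndarray                  # stand-in for Lambda^{w,AR}_m

# Unary error functions of the two ordinary observation edges (placeholders).
def e_k(x_i, z_k):
    return x_i - z_k

def e_l(x_j, z_l):
    return x_j - z_l

# The hyperedge of Figure 6.10 simply sums the two errors, cf. (6.56).
edge_m = HyperEdge(
    node_ids=(0, 1),
    error_fn=lambda x_i, x_j, z_k, z_l: e_k(x_i, z_k) + e_l(x_j, z_l),
    information=np.eye(3),
)

x_i, x_j = np.zeros(3), np.ones(3)
z_k, z_l = np.full(3, 0.1), np.full(3, 0.9)
e = edge_m.error_fn(x_i, x_j, z_k, z_l)
chi2 = float(e @ edge_m.information @ e)     # squared error, cf. (6.61)
print(chi2)
\end{verbatim}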

The new hyperedge $z^{w,\mathrm{AR}}_m$ connects $x^w_k$, $x^w_l$, $x_i$, and $x_j$. It depends on a new error function $e^{w,\mathrm{AR}}_m$ with

\[
e^{w,\mathrm{AR}}_m(x_i, x_j, z^w_k, z^w_l) = e^w_k(x_i, z^w_k) + e^w_l(x_j, z^w_l). \qquad (6.56)
\]

Its partial derivatives are

\[
\frac{\partial e^{w,\mathrm{AR}}_m(x_i, x_j, z^w_k, z^w_l)}{\partial x_i}
= \frac{\partial e^w_k(x_i, z^w_k)}{\partial x_i}
= \breve{J}^{w,\mathrm{AR}}_{m,i} = \breve{J}^w_{k,i}, \qquad (6.57)
\]
\[
\frac{\partial e^{w,\mathrm{AR}}_m(x_i, x_j, z^w_k, z^w_l)}{\partial x_j}
= \frac{\partial e^w_l(x_j, z^w_l)}{\partial x_j}
= \breve{J}^{w,\mathrm{AR}}_{m,j} = \breve{J}^w_{l,j}. \qquad (6.58)
\]

Its Jacobian is therefore

\[
\breve{J}^{w,\mathrm{AR}}_m =
\begin{bmatrix}
\cdots & 0 &
\left.\dfrac{\partial e^{w,\mathrm{AR}}_m(x_i, x_j, z^w_k, z^w_l)}{\partial x_i}\right|_{x=\breve{x}}
& 0 & \cdots & 0 &
\left.\dfrac{\partial e^{w,\mathrm{AR}}_m(x_i, x_j, z^w_k, z^w_l)}{\partial x_j}\right|_{x=\breve{x}}
& 0 & \cdots
\end{bmatrix} \qquad (6.59)
\]
\[
=
\begin{bmatrix}
\cdots & 0 & \overset{i}{\breve{J}^w_{k,i}} & 0 & \cdots & 0 & \overset{j}{\breve{J}^w_{l,j}} & 0 & \cdots
\end{bmatrix}. \qquad (6.60)
\]
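As a small illustration of the block-sparse structure in (6.60), the following numpy sketch places the two non-zero Jacobian blocks into the state columns that belong to $x_i$ and $x_j$; the dimensions and node ordering are hypothetical toy choices, not values from this thesis.

\begin{verbatim}
import numpy as np

d_e, d_x, n_nodes = 3, 3, 6                  # toy dimensions
i, j = 1, 4                                  # column indices of x_i and x_j

J_ki = np.random.randn(d_e, d_x)             # stand-in for J^w_{k,i}
J_lj = np.random.randn(d_e, d_x)             # stand-in for J^w_{l,j}

# Row block of the full Jacobian for the hyperedge: zero everywhere except
# in the column blocks of x_i and x_j, cf. (6.60).
J_m = np.zeros((d_e, n_nodes * d_x))
J_m[:, i * d_x:(i + 1) * d_x] = J_ki
J_m[:, j * d_x:(j + 1) * d_x] = J_lj
print(np.nonzero(J_m.any(axis=0))[0])        # occupied state columns
\end{verbatim}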

The corresponding constraint $z^{w,\mathrm{AR}}_m$ takes into account this error function and the information matrix $\Lambda^{w,\mathrm{AR}}_m = \Lambda^{w,\mathrm{AR}}_{kl}$ such that the squared error becomes

\[
e^{w,\mathrm{AR}}_m = e^{w,\mathrm{AR}}_m(x_i, x_j, z^w_k, z^w_l)^\top \, \Lambda^{w,\mathrm{AR}}_m \, e^{w,\mathrm{AR}}_m(x_i, x_j, z^w_k, z^w_l). \qquad (6.61)
\]

This constraint leads to $H^{w,\mathrm{AR}}_m$ with

\[
H^{w,\mathrm{AR}}_m =
\begin{bmatrix}
\ddots & & & \\
& (\breve{J}^w_{k,i})^\top \Lambda^{w,\mathrm{AR}}_m \breve{J}^w_{k,i} & \cdots & (\breve{J}^w_{k,i})^\top \Lambda^{w,\mathrm{AR}}_m \breve{J}^w_{l,j} \\
& \vdots & \ddots & \vdots \\
& (\breve{J}^w_{l,j})^\top \Lambda^{w,\mathrm{AR}}_m \breve{J}^w_{k,i} & \cdots & (\breve{J}^w_{l,j})^\top \Lambda^{w,\mathrm{AR}}_m \breve{J}^w_{l,j} \\
& & & \ddots
\end{bmatrix}, \qquad (6.62)
\]

and $b^{w,\mathrm{AR}}_m$ with

\[
b^{w,\mathrm{AR}}_m =
\begin{bmatrix}
\vdots \\
(\breve{J}^w_{k,i})^\top \Lambda^{w,\mathrm{AR}}_m \breve{e}^{w,\mathrm{AR}}_m \\
\vdots \\
(\breve{J}^w_{l,j})^\top \Lambda^{w,\mathrm{AR}}_m \breve{e}^{w,\mathrm{AR}}_m \\
\vdots
\end{bmatrix}. \qquad (6.63)
\]

This is a nice intermediate result, as it leads to the correct addends in $H^{w,\mathrm{AR}}_{ij}$ and $H^{w,\mathrm{AR}}_{ji}$, but because of the information matrix $\Lambda^{w,\mathrm{AR}}_m$ it leads to slightly wrong addends in $H^{w,\mathrm{AR}}_{ii}$, $H^{w,\mathrm{AR}}_{jj}$, $b^{w,\mathrm{AR}}_i$, and $b^{w,\mathrm{AR}}_j$. We correct for this by considering the other two mentioned edges $z^w_k$ and $z^w_l$. In terms of the error functions, we use them as ordinary edges between observed and hidden nodes, just as we would model them when assuming AWGN. However, we adapt the associated information matrices to

\[
\Lambda^w_k = \Lambda^{w,\mathrm{AR}}_k - \Lambda^{w,\mathrm{AR}}_m, \qquad (6.64)
\]
\[
\Lambda^w_l = \Lambda^{w,\mathrm{AR}}_l - \Lambda^{w,\mathrm{AR}}_m. \qquad (6.65)
\]

With these corrected information matrices, the two constraints $z^w_k$ and $z^w_l$ lead to $H^w_k$ and $H^w_l$ with

\[
H^w_k =
\begin{bmatrix}
\ddots & & \\
& (\breve{J}^w_{k,i})^\top \Lambda^w_k \breve{J}^w_{k,i} & \\
& & \ddots
\end{bmatrix}, \qquad (6.66)
\]
\[
H^w_l =
\begin{bmatrix}
\ddots & & \\
& (\breve{J}^w_{l,j})^\top \Lambda^w_l \breve{J}^w_{l,j} & \\
& & \ddots
\end{bmatrix}. \qquad (6.67)
\]

Equivalently, we find that

\[
b^w_k =
\begin{bmatrix}
\vdots \\
(\breve{J}^w_{k,i})^\top \Lambda^w_k \breve{e}^w_k \\
\vdots
\end{bmatrix}, \qquad (6.68)
\]
\[
b^w_l =
\begin{bmatrix}
\vdots \\
(\breve{J}^w_{l,j})^\top \Lambda^w_l \breve{e}^w_l \\
\vdots
\end{bmatrix}. \qquad (6.69)
\]

These three constraints together allow us to compute $H^{w,\mathrm{AR}}$ and $b^{w,\mathrm{AR}}$ for the graph depicted in Figure 6.10:

\[
H^{w,\mathrm{AR}} = H^w_k + H^w_l + H^{w,\mathrm{AR}}_m, \qquad (6.70)
\]
\[
b^{w,\mathrm{AR}} = b^w_k + b^w_l + b^{w,\mathrm{AR}}_m. \qquad (6.71)
\]
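The effect of the corrected information matrices (6.64) and (6.65) can also be checked numerically. The following numpy sketch uses random stand-in matrices and arbitrary toy dimensions (not values from this thesis) to assemble the $x_i$, $x_j$ blocks of $H^{w,\mathrm{AR}}$ from the three edges; it confirms that the corrections cancel on the diagonal blocks, leaving $(\breve{J}^w_{k,i})^\top \Lambda^{w,\mathrm{AR}}_k \breve{J}^w_{k,i}$ there while the hyperedge contributes the cross term $(\breve{J}^w_{k,i})^\top \Lambda^{w,\mathrm{AR}}_m \breve{J}^w_{l,j}$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d_e, d_x = 3, 3                      # error and state dimensions (toy values)

J_ki = rng.normal(size=(d_e, d_x))   # Jacobian of e^w_k w.r.t. x_i
J_lj = rng.normal(size=(d_e, d_x))   # Jacobian of e^w_l w.r.t. x_j

def spd(n):                          # random symmetric positive definite matrix
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

Lam_k_AR, Lam_l_AR, Lam_m = spd(d_e), spd(d_e), spd(d_e)

# Corrected information matrices of the ordinary edges, cf. (6.64)/(6.65).
Lam_k = Lam_k_AR - Lam_m
Lam_l = Lam_l_AR - Lam_m

# Diagonal blocks: ordinary edge plus hyperedge contribution, cf. (6.62)/(6.66).
H_ii = J_ki.T @ Lam_k @ J_ki + J_ki.T @ Lam_m @ J_ki
H_jj = J_lj.T @ Lam_l @ J_lj + J_lj.T @ Lam_m @ J_lj
# Off-diagonal block comes from the hyperedge alone.
H_ij = J_ki.T @ Lam_m @ J_lj

# The corrections cancel: the diagonal blocks carry the full AR information.
assert np.allclose(H_ii, J_ki.T @ Lam_k_AR @ J_ki)
assert np.allclose(H_jj, J_lj.T @ Lam_l_AR @ J_lj)
print("diagonal corrections cancel as expected")
\end{verbatim}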

[Figure 6.11: hidden nodes $x_i$, $x_j$, $x_k$; odometry edges labeled $(e^v_l, \Omega^{v,\mathrm{AR}}_l)$ and $(e^v_m, \Omega^{v,\mathrm{AR}}_m)$; hyperedge labeled $(e^{v,\mathrm{AR}}_n, \Omega^{v,\mathrm{AR}}_n)$.]

Figure 6.11.: The same part of a pose graph as in Figure 6.9, but this time we assume the noise of the odometry edges to be modeled as AR(1). We need an additional constraint to model the influence of the noise of the AR(1) process.

It is straightforward to verify that these results are equal to (6.34) and (6.35). Therefore, we have defined a graph-based representation for modeling observed nodes with autocorrelated noise that is equivalent to directly modeling it as an AR(1) NLLSQ problem. A key piece that is still missing is what a graph-based representation of odometry edges with autocorrelated noise looks like. We approach this question in the following.

Odometry edges. With a similar train of thought as in the derivation of a graph-based representation for observed nodes with autocorrelated noise, our goal is now to derive the same for odometry edges. To this end, we begin by stating that the graph-based representation in Figure 6.11 solves this problem. Following a similar approach as for the observed nodes, we first define the new hyperedge $z^{v,\mathrm{AR}}_n$ and then show that we need to adapt the information matrices of the usual constraints $z^v_l$ and $z^v_m$ to achieve the desired result.

The constraint $z^{v,\mathrm{AR}}_n$ of the corresponding hyperedge is associated with the error function

\[
e^{v,\mathrm{AR}}_n(x_i, x_j, x_k, z^v_l, z^v_m) = e^v_l(x_i, x_j, z^v_l) + e^v_m(x_j, x_k, z^v_m). \qquad (6.73)
\]

Its partial derivatives with respect to the state variables are

\[
\frac{\partial e^{v,\mathrm{AR}}_n(x_i, x_j, x_k, z^v_l, z^v_m)}{\partial x_i}
= \frac{\partial e^v_l(x_i, x_j, z^v_l)}{\partial x_i}, \qquad (6.74)
\]
\[
\frac{\partial e^{v,\mathrm{AR}}_n(x_i, x_j, x_k, z^v_l, z^v_m)}{\partial x_j}
= \frac{\partial e^v_l(x_i, x_j, z^v_l)}{\partial x_j}
+ \frac{\partial e^v_m(x_j, x_k, z^v_m)}{\partial x_j}, \qquad (6.75)
\]
\[
\frac{\partial e^{v,\mathrm{AR}}_n(x_i, x_j, x_k, z^v_l, z^v_m)}{\partial x_k}
= \frac{\partial e^v_m(x_j, x_k, z^v_m)}{\partial x_k}. \qquad (6.76)
\]

With the usual abbreviations, the partial derivatives evaluated at the linearization point are

\[
\breve{J}^{v,\mathrm{AR}}_{n,i} = \breve{J}^v_{l,i}, \qquad (6.77)
\]
\[
\breve{J}^{v,\mathrm{AR}}_{n,j} = \breve{J}^v_{l,j} + \breve{J}^v_{m,j}, \qquad (6.78)
\]
\[
\breve{J}^{v,\mathrm{AR}}_{n,k} = \breve{J}^v_{m,k}. \qquad (6.79)
\]

The Jacobian of the constraint $z^{v,\mathrm{AR}}_n$ is therefore

\[
\breve{J}^{v,\mathrm{AR}}_n =
\begin{bmatrix}
\cdots & 0 & \overset{i}{\breve{J}^{v,\mathrm{AR}}_{n,i}} & 0 & \cdots & 0 & \overset{j}{\breve{J}^{v,\mathrm{AR}}_{n,j}} & 0 & \cdots & 0 & \overset{k}{\breve{J}^{v,\mathrm{AR}}_{n,k}} & 0 & \cdots
\end{bmatrix}. \qquad (6.80)
\]

The constraint $z^{v,\mathrm{AR}}_n$ leads to $H^{v,\mathrm{AR}}_n$ with

\[
H^{v,\mathrm{AR}}_n = (\breve{J}^{v,\mathrm{AR}}_n)^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_n \qquad (6.81)
\]
\[
=
\begin{bmatrix}
\ddots & & & & & \\
& (\breve{J}^{v,\mathrm{AR}}_{n,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,i} & \cdots & (\breve{J}^{v,\mathrm{AR}}_{n,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,j} & \cdots & (\breve{J}^{v,\mathrm{AR}}_{n,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,k} \\
& \vdots & \ddots & \vdots & & \vdots \\
& (\breve{J}^{v,\mathrm{AR}}_{n,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,i} & \cdots & (\breve{J}^{v,\mathrm{AR}}_{n,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,j} & \cdots & (\breve{J}^{v,\mathrm{AR}}_{n,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,k} \\
& \vdots & & \vdots & \ddots & \vdots \\
& (\breve{J}^{v,\mathrm{AR}}_{n,k})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,i} & \cdots & (\breve{J}^{v,\mathrm{AR}}_{n,k})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,j} & \cdots & (\breve{J}^{v,\mathrm{AR}}_{n,k})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^{v,\mathrm{AR}}_{n,k} \\
& & & & & \ddots
\end{bmatrix} \qquad (6.82)
\]

and $b^{v,\mathrm{AR}}_n$ with

\[
b^{v,\mathrm{AR}}_n =
\begin{bmatrix}
\vdots \\
(\breve{J}^{v,\mathrm{AR}}_{n,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^{v,\mathrm{AR}}_n \\
\vdots \\
(\breve{J}^{v,\mathrm{AR}}_{n,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^{v,\mathrm{AR}}_n \\
\vdots \\
(\breve{J}^{v,\mathrm{AR}}_{n,k})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^{v,\mathrm{AR}}_n \\
\vdots
\end{bmatrix}. \qquad (6.83)
\]

Using the definition of the Jacobians, we develop (6.82) to $H^{v,\mathrm{AR}}_n$ with the entries

\begin{align*}
H^{v,\mathrm{AR}}_{n,ii} &= (\breve{J}^v_{l,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{l,i}, && (6.84)\\
H^{v,\mathrm{AR}}_{n,ij} &= (\breve{J}^v_{l,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{l,j} + (\breve{J}^v_{l,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{m,j}, && (6.85)\\
H^{v,\mathrm{AR}}_{n,ik} &= (\breve{J}^v_{l,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{m,k}, && (6.86)\\
H^{v,\mathrm{AR}}_{n,ji} &= (H^{v,\mathrm{AR}}_{n,ij})^\top, && (6.87)\\
H^{v,\mathrm{AR}}_{n,jj} &= (\breve{J}^v_{l,j})^\top (\Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{l,j} + \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{m,j}) + (\breve{J}^v_{m,j})^\top (\Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{m,j} + \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{l,j}), && (6.88)\\
H^{v,\mathrm{AR}}_{n,jk} &= (\breve{J}^v_{m,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{m,k} + (\breve{J}^v_{l,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{m,k}, && (6.89)\\
H^{v,\mathrm{AR}}_{n,ki} &= (H^{v,\mathrm{AR}}_{n,ik})^\top, && (6.90)\\
H^{v,\mathrm{AR}}_{n,kj} &= (H^{v,\mathrm{AR}}_{n,jk})^\top, && (6.91)\\
H^{v,\mathrm{AR}}_{n,kk} &= (\breve{J}^v_{m,k})^\top \Lambda^{v,\mathrm{AR}}_n \breve{J}^v_{m,k} && (6.92)
\end{align*}

and similarly, (6.83) to

\[
b^{v,\mathrm{AR}}_n =
\begin{bmatrix}
\vdots \\
(\breve{J}^v_{l,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^v_l + (\breve{J}^v_{l,i})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^v_m \\
\vdots \\
(\breve{J}^v_{l,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^v_l + (\breve{J}^v_{l,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^v_m + (\breve{J}^v_{m,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^v_m + (\breve{J}^v_{m,j})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^v_l \\
\vdots \\
(\breve{J}^v_{m,k})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^v_m + (\breve{J}^v_{m,k})^\top \Lambda^{v,\mathrm{AR}}_n \breve{e}^v_l \\
\vdots
\end{bmatrix}. \qquad (6.93)
\]

As in the derivation of the graph-based representation of observed nodes with autocorrelated noise, we notice that some of these matrix entries are already correct, while others need some modification. To be more precise, the entries underlined in blue are not yet identical when comparing them to (6.45) and (6.55), which is our goal. This is caused by the deviation of the information matrices. We correct for this by adapting the information matrices of the constraints $z^v_l$ and $z^v_m$ to

\[
\Lambda^v_l = \Lambda^{v,\mathrm{AR}}_l - \Lambda^{v,\mathrm{AR}}_n, \qquad (6.94)
\]
\[
\Lambda^v_m = \Lambda^{v,\mathrm{AR}}_m - \Lambda^{v,\mathrm{AR}}_n. \qquad (6.95)
\]

These two constraints then lead to $H^v_l$ and $H^v_m$ with

\[
H^v_l =
\begin{bmatrix}
\ddots & & & \\
& (\breve{J}^v_{l,i})^\top \Lambda^v_l \breve{J}^v_{l,i} & \cdots & (\breve{J}^v_{l,i})^\top \Lambda^v_l \breve{J}^v_{l,j} \\
& \vdots & \ddots & \vdots \\
& (\breve{J}^v_{l,j})^\top \Lambda^v_l \breve{J}^v_{l,i} & \cdots & (\breve{J}^v_{l,j})^\top \Lambda^v_l \breve{J}^v_{l,j} \\
& & & \ddots
\end{bmatrix} \qquad (6.96)
\]

and

\[
H^v_m =
\begin{bmatrix}
\ddots & & & \\
& (\breve{J}^v_{m,j})^\top \Lambda^v_m \breve{J}^v_{m,j} & \cdots & (\breve{J}^v_{m,j})^\top \Lambda^v_m \breve{J}^v_{m,k} \\
& \vdots & \ddots & \vdots \\
& (\breve{J}^v_{m,k})^\top \Lambda^v_m \breve{J}^v_{m,j} & \cdots & (\breve{J}^v_{m,k})^\top \Lambda^v_m \breve{J}^v_{m,k} \\
& & & \ddots
\end{bmatrix}. \qquad (6.97)
\]

The corresponding right-hand side vectors are $b^v_l$ and $b^v_m$ with

\[
b^v_l =
\begin{bmatrix}
\vdots \\
(\breve{J}^v_{l,i})^\top \Lambda^v_l \breve{e}^v_l \\
\vdots \\
(\breve{J}^v_{l,j})^\top \Lambda^v_l \breve{e}^v_l \\
\vdots
\end{bmatrix}, \qquad (6.98)
\]
\[
b^v_m =
\begin{bmatrix}
\vdots \\
(\breve{J}^v_{m,j})^\top \Lambda^v_m \breve{e}^v_m \\
\vdots \\
(\breve{J}^v_{m,k})^\top \Lambda^v_m \breve{e}^v_m \\
\vdots
\end{bmatrix}. \qquad (6.99)
\]

In total, the three constraints $z^v_l$, $z^v_m$, and $z^{v,\mathrm{AR}}_n$ allow us to compute $H^{v,\mathrm{AR}}$ and $b^{v,\mathrm{AR}}$ for the graph depicted in Figure 6.11 as

\[
H^{v,\mathrm{AR}} = H^v_l + H^v_m + H^{v,\mathrm{AR}}_n, \qquad (6.100)
\]
\[
b^{v,\mathrm{AR}} = b^v_l + b^v_m + b^{v,\mathrm{AR}}_n. \qquad (6.101)
\]

We compare the result with (6.45) and (6.55) and verify that they are indeed identical. We conclude that the given three constraints are a graph-based representation for odometry edges with autocorrelated noise. Together with the graph-based representation for observed nodes with autocorrelated noise, we are now able to understand the influence of this kind of noise on the graph-based representation. This by itself is a big step forward. However, as these new constraints lead to exactly the same entries as AR(1) NLLSQ, we still suffer from the increased computational demand. In the next section we therefore tackle this challenge.
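In the same spirit as the sketch after (6.71), the following numpy snippet (again with random stand-in matrices and hypothetical dimensions, not values from this thesis) assembles the $x_j$ diagonal block of $H^{v,\mathrm{AR}}$ from the two corrected odometry edges and the hyperedge. It checks that the $\Lambda^{v,\mathrm{AR}}_n$ corrections (6.94) and (6.95) cancel on the pure $l$-$l$ and $m$-$m$ terms, so that these carry $\Lambda^{v,\mathrm{AR}}_l$ and $\Lambda^{v,\mathrm{AR}}_m$, while the cross terms between the two consecutive odometry errors keep $\Lambda^{v,\mathrm{AR}}_n$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 3                                      # toy error/state dimension

J_lj = rng.normal(size=(d, d))             # Jacobian of e^v_l w.r.t. x_j
J_mj = rng.normal(size=(d, d))             # Jacobian of e^v_m w.r.t. x_j

def spd(n):                                # random symmetric positive definite matrix
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

Lam_l_AR, Lam_m_AR, Lam_n = spd(d), spd(d), spd(d)

# Corrected information matrices of the odometry edges, cf. (6.94)/(6.95).
Lam_l = Lam_l_AR - Lam_n
Lam_m = Lam_m_AR - Lam_n

# x_j diagonal block assembled from the three constraints,
# cf. (6.88), (6.96), (6.97), and the sum (6.100).
H_jj = (J_lj.T @ Lam_l @ J_lj
        + J_mj.T @ Lam_m @ J_mj
        + J_lj.T @ (Lam_n @ J_lj + Lam_n @ J_mj)
        + J_mj.T @ (Lam_n @ J_mj + Lam_n @ J_lj))

# The Lambda_n corrections cancel on the pure l-l and m-m terms; only the
# cross terms between the two consecutive odometry errors keep Lambda_n.
H_jj_expected = (J_lj.T @ Lam_l_AR @ J_lj + J_mj.T @ Lam_m_AR @ J_mj
                 + J_lj.T @ Lam_n @ J_mj + J_mj.T @ Lam_n @ J_lj)

assert np.allclose(H_jj, H_jj_expected)
print("x_j block matches the expected structure")
\end{verbatim}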


6.4.3. AR(1) scaling: efficient implementation by scaling