
7.5 Multi-Frame Estimation

7.5.2 Integration of Multiple-Frames

In this and the following sections, the term multi-step estimation will be used to indicate that the estimation of motion between two frames is actually obtained as a function of information gathered at multiple time steps. Conversely, single-step estimation will denote the estimation of absolute orientation using the frames obtained at only two time instances. The term Multi-Frame Estimation (MFE) will denote the method that performs a multi-step estimation, while the term Two-Frame Estimation (TFE) will denote the method that performs a single-step estimation.

Let us suppose that the tracking algorithm is able to track at least n features over m frames. At each time step, the list of tracked features and the corresponding 3D points are stored. Let us define the set of tracked world points for time t_l as X_l. At the current time t_k, the motion M^0_k of the vehicle between times t_{k-1} and t_k is required.
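The bookkeeping described above can be sketched as a bounded buffer of point clouds, one per time step. This is a minimal Python sketch, not the thesis implementation; the buffer length m, the feature count n, and the random point clouds are illustrative assumptions:

```python
from collections import deque
import numpy as np

m = 4                        # number of frames kept (assumed value)
n = 10                       # number of tracked features (assumed value)
history = deque(maxlen=m)    # history[-1] is X_k, history[-2] is X_{k-1}, ...

rng = np.random.default_rng(0)
for t in range(6):                        # simulate six time steps
    X_t = rng.standard_normal((n, 3))     # n tracked 3D points at time t
    history.append(X_t)                   # oldest cloud is dropped automatically

# Only the last m point clouds survive, ready for multi-step estimation.
```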

Two-Frame Estimation implies computing a single-step estimation by finding the absolute orientation between the data sets obtained at both times, i.e. X_k and X_{k-1}. The resulting motion is the observed inverse rigid motion of the static scene, i.e. the ego-motion of the camera. Multi-Frame Estimation, instead, iteratively computes the absolute orientation between the clouds X_k and X_{k-i} for i = 1, ..., m-1. At the end of each iteration, the current motion estimate M^0_k is refined by integration with the multi-step result (i.e. the absolute orientation between the clouds obtained at times t_{k-i} and t_k, denoted as M̃^+_{k-i,k}).
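The decomposition that MFE relies on — a multi-step motion expressed as a chain of per-step motions, with earlier links multiplied from the left — can be illustrated with homogeneous 4×4 matrices. A numpy sketch with made-up motion values:

```python
import numpy as np

def rigid_z(theta, t):
    """Homogeneous 4x4 rigid motion: rotation by theta about z, translation t."""
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    M[:3, 3] = t
    return M

# Made-up per-step motions between consecutive frames (t6 -> t7 and t7 -> t8).
M_67 = rigid_z(0.04, [0.9, 0.1, 0.0])
M_78 = rigid_z(0.05, [1.0, 0.0, 0.0])

# Multi-step motion t6 -> t8 as the chain of the two links.
M_68 = M_67 @ M_78

# Knowing the earlier link, the current-step part can be recovered -- this is
# the decomposition MFE uses to refine the t7 -> t8 estimate.
M_78_rec = np.linalg.inv(M_67) @ M_68
```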

Each iteration carries out four main steps:

(a) As a first step, the absolute orientation between the current and the previous frame is computed, obtaining a first motion estimate.

(b) The motion estimate of step (a) (shown in grey) is used as the predicted motion for applying the SMC. The absolute orientation between times 6 and 8 is computed (unfilled black circle) and the motion is decomposed into two parts: the first part from state 6 to state 7, and the second part from state 7 to state 8. The first part was computed before and is known. The second part is interpolated with the current motion, obtaining a new motion estimate (diagonal hatched circle).

(c) The same procedure as in the previous step is applied here. The resulting absolute orientation between times 5 and 8 is now decomposed as the motion chain (5→7, 7→8). The last link is integrated with the current motion, obtaining a new motion estimate.

(d) The same procedure as in steps (b) and (c) is applied here, but between times 5 and 8.

(e) The resulting motion step is the interpolation of all multi-step estimates.

Figure 7.12: Example of Multi-Frame Estimation.

• Motion Prediction: at iteration i, a motion prediction between time t_{k-i} and time t_k is required before applying the SMC. In the first iteration (i.e. i = 1) the motion prediction is taken from the initialization, which provides exactly a first motion estimate between the previous and the current time. For the second and following iterations the motion prediction is computed as³

    M̃_{k-i,k} = M^{-1}_{k-i} M_{k-1} M^0_k    (7.15)

i.e. the observed motion between times t_{k-i} and t_{k-1}, updated with the current motion estimate of the current time.

• Smoothness Motion Constraint: the prediction obtained in the previous step is required in order to apply the SMC. The rejection of outliers, as well as the assignment of weights to each 3D point pair, is more precise in the iteration phase than in the initialization phase, because the prediction is much more accurate. The application of the SMC for multiple steps follows exactly the same procedure as explained in Section 7.4, i.e. Equations 7.9 and 7.8 for the WLS and Equations 7.11 to 7.14 for the TLS are still valid. The only change is that the prediction M^0_{k-1} used in those equations must be replaced with the motion prediction matrix M̃_{k-i,k} obtained from Equation 7.15.

³ Equation 7.15 is also valid for the first iteration, since when i = 1 then M^{-1}_{k-1} M_{k-1} = I_{4×4} and M̃_{k-1,k} = M^0_k.

• Computation of Absolute Orientation: the absolute orientation between the sets X_{k-i} and X_k is computed, resulting in the multi-step motion M̃^+_{k-i,k}.

• Motion Integration: once the camera motion matrix M̃^+_{k-i,k} between times t_{k-i} and t_k is obtained, it must be integrated with the current step estimation M^0_k. This is performed by interpolation. The interpolation of matrices only makes sense if they are estimates of the same motion. This is not directly the case here, since the current step motion matrix M^0_k is an estimate of the motion between times t_{k-1} and t_k (a one-step motion), while the multi-step motion matrix M̃^+_{k-i,k} is an estimate of the motion over i frames. If i = 1, the interpolation is straightforward.

If i > 1, the multi-step motion matrix must be decomposed into two motion matrices: the motion matrix from time t_{k-i} to t_{k-1}, and the motion matrix from time t_{k-1} to time t_k. The latter is the motion matrix to be interpolated. Thus, the multi-step matrix obtained in the previous step is expressed as the product of the two matrices

    M̃^+_{k-i,k} = M_{k-i,k-1} M̃^+_{k-1,k}    (7.16)

where the first matrix on the right-hand side is obtained from the total motion matrices as

    M_{k-i,k-1} = M^{-1}_{k-i} M_{k-1}.    (7.17)

Replacing Equation 7.17 in Equation 7.16 and solving for M̃^+_{k-1,k}, we obtain

    M̃^+_{k-1,k} = M^{-1}_{k-1} M_{k-i} M̃^+_{k-i,k}.    (7.18)

The rotation matrices of M^0_k and M̃^+_{k-1,k} are converted to quaternions in order to apply a spherical linear interpolation (see Appendix A for details). The translation vectors are linearly interpolated. The interpolation factors are obtained in the following way. Let us define f_{k-i,k} as the inverse of the square of the sum of Mahalanobis distances (or residuum) of Equation 5.13, obtained when computing the absolute orientation between the data sets X_{k-i} and X_k. The interpolation factors are obtained from the values

• f_{k-i,k} for the multi-step motion matrix; and

• Σ^{i-1}_{l=1} f_{k-l,k} for the integrated motion matrix,

which corresponds to the weighted average of all multi-step motions. This means that every additional step has less and less impact on the final result.
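The Motion Integration step can be sketched as follows: a hand-rolled slerp for the rotations and a linear interpolation for the translations, in Python with numpy. The weight normalization u = f_multi / (f_int + f_multi) is one plausible reading of the interpolation factors above, and the quaternion conversion assumes rotations far from 180°:

```python
import numpy as np

def mat_to_quat(R):
    # Rotation matrix -> quaternion (w, x, y, z); assumes rotation far from 180 deg.
    w = np.sqrt(1.0 + np.trace(R)) / 2.0
    return np.array([w,
                     (R[2, 1] - R[1, 2]) / (4 * w),
                     (R[0, 2] - R[2, 0]) / (4 * w),
                     (R[1, 0] - R[0, 1]) / (4 * w)])

def quat_to_mat(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def slerp(q0, q1, u):
    # Spherical linear interpolation between unit quaternions.
    d = np.dot(q0, q1)
    if d < 0.0:                      # take the shorter arc
        q1, d = -q1, -d
    if d > 0.9995:                   # nearly parallel: fall back to linear blend
        q = (1 - u) * q0 + u * q1
    else:
        th = np.arccos(d)
        q = (np.sin((1 - u) * th) * q0 + np.sin(u * th) * q1) / np.sin(th)
    return q / np.linalg.norm(q)

def integrate_motions(M_int, M_multi, f_int, f_multi):
    """Blend two 4x4 rigid motions: slerp the rotations, lerp the translations."""
    u = f_multi / (f_int + f_multi)  # weight of the multi-step estimate
    M = np.eye(4)
    M[:3, :3] = quat_to_mat(slerp(mat_to_quat(M_int[:3, :3]),
                                  mat_to_quat(M_multi[:3, :3]), u))
    M[:3, 3] = (1 - u) * M_int[:3, 3] + u * M_multi[:3, 3]
    return M

# Example: two z-rotations with equal confidence weights blend to the midpoint.
def rz(th, t):
    c, s = np.cos(th), np.sin(th)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    M[:3, 3] = t
    return M

M_mid = integrate_motions(rz(0.1, [1, 0, 0]), rz(0.3, [2, 0, 0]), 1.0, 1.0)
```

With equal weights the blended motion rotates by the mean angle and translates to the mean position, as expected of an interpolated estimate.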

Figures 7.12(a) to 7.12(e) exemplify one iteration of the Multi-Frame Estimation.

With this algorithm, the estimation improves because of the integration of more measurements. Also, the accumulation of errors is reduced considerably, as shown in the next section.
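Putting the four steps together, the iteration loop can be sketched end-to-end in a noiseless setting. The total motion matrices M_j, the stubbed absolute orientation, and the step values are illustrative assumptions; with exact measurements the refined estimate must reproduce the true last step:

```python
import numpy as np

def rz(theta, t):
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    M[:3, 3] = t
    return M

# Ground-truth step motions t5->t6, t6->t7, t7->t8 (illustrative values).
steps = [rz(0.02, [1.0, 0, 0]), rz(0.03, [1.1, 0, 0]), rz(0.04, [0.9, 0.1, 0])]

# Total motion matrices: M_tot[j] is the cumulative motion t5 -> t_{5+j}.
M_tot = [np.eye(4)]
for S in steps:
    M_tot.append(M_tot[-1] @ S)

k = 3                           # index of the current time t8
M0_k = steps[-1].copy()         # i = 1 result: current single-step estimate

for i in range(2, k + 1):       # remaining multi-step refinements
    # Eq. 7.15: motion prediction between t_{k-i} and t_k (used by the SMC).
    M_pred = np.linalg.inv(M_tot[k - i]) @ M_tot[k - 1] @ M0_k
    # Absolute-orientation stub: in this noiseless sketch it returns the
    # exact multi-step motion, so the prediction already matches it.
    M_plus = np.linalg.inv(M_tot[k - i]) @ M_tot[k]
    assert np.allclose(M_pred, M_plus)
    # Eq. 7.18: extract the t_{k-1} -> t_k part of the multi-step motion.
    M_last = np.linalg.inv(M_tot[k - 1]) @ M_tot[k - i] @ M_plus
    # Motion Integration (weights omitted: all estimates agree exactly here).
    M0_k = M_last
```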
