
4.2. Input, pose fusion, and application layer

[Figure 4.1: (a) tight coupling: the GPS receiver and the IMU feed the pose fusion directly; (b) loose coupling: the GPS receiver feeds a pose solver, the IMU feeds an INS, and both feed the pose fusion.]

Figure 4.1.: Comparison of tightly and loosely coupled architectures for sensor fusion of data from a GPS receiver and an IMU.

While some studies find that tightly and loosely coupled approaches achieve similar performance (Schwarz et al., 1994), others show advantages for tightly coupled approaches (Scherzinger, 2000).

The architectural choice between these two patterns is straightforward for generic pose fusion. This is because its main challenge is that it cannot make specific assumptions about the sensors, concepts, or implementations of the underlying pose sources. Tightly coupled architectures, however, are tailored to specific sensors and their measurements. Therefore, we build our generic pose fusion as a loosely coupled approach. This choice promotes the modularity, flexibility, and extensibility that we want to gain from a generic pose fusion.

We approach the design of a loosely coupled system by proposing a layered fusion architecture, as shown in Figure 4.2. Layering is the organization of a system into separate functional components that interact in a hierarchical way. These functional components can be grouped to form a sublayer. Usually, each (sub)layer only has an interface to the layers directly below and above it. Thus, layering is our main tool for reducing and managing complexity.


[Figure 4.2: input layer (sensors 0 … n feeding sensor-processing components and pose sources 0 … m); pose fusion layer (online preprocessing sublayer with bias estimation, outlier handling, cross-correlated noise, and auto-correlated noise modules, plus the core estimator); application layer (applications 0 … k).]

Figure 4.2.: Layer architecture of the pose fusion.

Input layer A pose source builds upon a single sensor or multiple sensors. Conversely, a sensor can feed into a single pose source or into multiple pose sources. From the point of view of the pose fusion layer, the input layer serves to abstract from the specific sensor setup.

The interface between the input layer and the pose fusion layer is defined for information in the global or the vehicle reference frame. The i-th global pose estimate is given by the triple $(\mathbf{z}_i^w, \Sigma_i^w, t_i)$ where $\mathbf{z}_i^w = [x_i^w, y_i^w, \theta_i^w]^\top$. $\Sigma_i^w$ is the pose estimate's covariance matrix. The time $t_i$ specifies the time for which the pose estimate is valid.

The i-th local pose estimate is similarly defined. It is given by the triple $(\mathbf{z}_i^v, \Sigma_i^v, t_i)$ where $\mathbf{z}_i^v = [\Delta x_i^v, \Delta y_i^v, \Delta\theta_i^v]^\top$. The difference to global pose estimates is that local pose estimates encode a change of the current pose compared to the last pose rather than the pose itself. Again, $\Sigma_i^v$ is the covariance matrix of the pose estimate and $t_i$ is its time.

Generally, we refer to pose estimates transferred from the input layer to the pose fusion layer as input pose estimates.
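For illustration, the two triples defined above could be represented by types like the following. This is a minimal C++ sketch using Eigen; the struct and field names are our own and do not claim to match the thesis's actual implementation.

```cpp
#include <Eigen/Dense>

// Hypothetical representation of the input-layer interface.
struct GlobalPoseEstimate {
    Eigen::Vector3d z;      // [x^w, y^w, theta^w]^T in the global frame
    Eigen::Matrix3d sigma;  // covariance matrix of z
    double t;               // validity time t_i in seconds
};

struct LocalPoseEstimate {
    Eigen::Vector3d dz;     // [dx^v, dy^v, dtheta^v]^T, pose change in the vehicle frame
    Eigen::Matrix3d sigma;  // covariance matrix of dz
    double t;               // validity time t_i in seconds
};
```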

Pose fusion layer The pose fusion layer receives pose estimates from the input layer. We subdivide this pose fusion layer into two modules: a sublayer of preprocessing techniques and the core estimator. The latter is the workhorse for the online state estimation. Its main purpose is to take a set of pose estimates, fuse them, and output an estimate of the current vehicle's pose. To this end, it has to make certain assumptions about the nature of the pose estimates' errors. In essence, it assumes that the error characteristics of the pose estimates can be modeled as AWGN. Even though this is a common assumption, we must account for the fact that it is not always met. Therefore, we need a set of preprocessing techniques.
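To make the AWGN assumption concrete: under it, fusing two independent Gaussian estimates of the same pose reduces to textbook information-form averaging. The sketch below illustrates only this assumption; it is not the PoseGraphFusion core estimator, and the function name is ours.

```cpp
#include <Eigen/Dense>
#include <utility>

// Information-form fusion of two independent Gaussian pose estimates
// (mean, covariance) of the same quantity. Angle wrap-around of the
// heading component is ignored in this sketch.
std::pair<Eigen::Vector3d, Eigen::Matrix3d> fuseIndependent(
    const Eigen::Vector3d& x1, const Eigen::Matrix3d& s1,
    const Eigen::Vector3d& x2, const Eigen::Matrix3d& s2) {
    const Eigen::Matrix3d i1 = s1.inverse();  // information matrices
    const Eigen::Matrix3d i2 = s2.inverse();
    const Eigen::Matrix3d sigma = (i1 + i2).inverse();      // fused covariance
    const Eigen::Vector3d x = sigma * (i1 * x1 + i2 * x2);  // fused mean
    return {x, sigma};
}
```

If the errors are in fact correlated or non-Gaussian, this formula yields overconfident results, which is precisely why the preprocessing sublayer exists.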

We design the preprocessing sublayer such that we achieve modularity, flexibility, and extensibility. For this, we create the preprocessing techniques such that we can compose them in multiple ways; a sketch of such a composition follows below. The best ordering of these techniques depends on the specific set of pose sources, as not every pose source requires every preprocessing technique. This sublayer is also extensible in case we want to preprocess pose sources in a different way. The set of modules in this thesis provides a rich set of preprocessing techniques that covers all relevant aspects of the pose sources that were available during the development of this thesis.
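The composability described above could be realized with a simple stage interface, as in the following sketch. It reuses the hypothetical GlobalPoseEstimate type from above; the interface names are ours, not the ADTF filter code.

```cpp
#include <memory>
#include <vector>

// Hypothetical composable preprocessing pipeline. Each stage rewrites a
// pose estimate (e.g., removes a bias or inflates a covariance); the
// order of stages is chosen per pose source.
struct PreprocessingStage {
    virtual ~PreprocessingStage() = default;
    virtual GlobalPoseEstimate process(GlobalPoseEstimate in) = 0;
};

struct PreprocessingPipeline {
    std::vector<std::unique_ptr<PreprocessingStage>> stages;

    GlobalPoseEstimate run(GlobalPoseEstimate e) {
        for (auto& s : stages) e = s->process(e);  // stages applied in order
        return e;
    }
};
```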

The preprocessing techniques manipulate the pose estimates such that they better fit the assumptions of the core estimator. In the spirit of generic pose fusion, we design these preprocessing techniques to be as universally applicable as possible. For this, we classify the pose sources into sets that exhibit certain error characteristics. A preprocessing technique for one of these classes is then supposed to reduce this error characteristic as much as possible without making assumptions about the internal workings of the pose sources. To clarify this structure, let us consider an example of an error characteristic.

This could be a bias in the pose estimate of a global pose source. We group all pose sources that exhibit this characteristic into one class. It is hard to imagine a generic core estimator that produces a bias-free estimate if it relies on biased data. Therefore, one of our preprocessing techniques is to eliminate biases in global pose estimates as much as possible. We describe this approach in Section 6.1.

Another preprocessing technique allows us to better deal with outliers. We propose in Section 6.2 a method to adapt the covariance matrices of pose estimates based on prior knowledge. This prior knowledge is a precise map of the road. The effect is that the influence of certain pose estimates is downscaled.
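To illustrate this downscaling of influence: inflating a covariance makes the core estimator trust an estimate less. The threshold, scale factor, and distance input below are purely illustrative assumptions, not the method of Section 6.2.

```cpp
#include <Eigen/Dense>

// Hypothetical covariance adaptation: if a global pose estimate lies
// implausibly far from the mapped road, inflate its covariance so the
// core estimator downweights it. Threshold and scaling are illustrative.
GlobalPoseEstimate adaptCovariance(GlobalPoseEstimate e,
                                   double distanceToRoadMeters) {
    constexpr double kPlausibleDistance = 2.0;  // assumed lane-level bound
    if (distanceToRoadMeters > kPlausibleDistance) {
        const double scale = distanceToRoadMeters / kPlausibleDistance;
        e.sigma *= scale * scale;  // larger covariance => smaller influence
    }
    return e;
}
```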

The treatment of correlated errors between pose sources requires another preprocessing technique. We stated that a sensor can feed into a single or multiple pose sources.

The latter case might lead to difficulties for the core estimator because it assumes that the errors of different pose sources are uncorrelated. This assumption might not hold if, for example, two pose sources rely on the same sensor. Generally, both pose sources will then be affected by the same sensor noise. Formally speaking, this leads to errors that are correlated between multiple pose sources. In Section 6.3 we will derive a method to treat this kind of difficulty.
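A standard, well-known tool for fusing two estimates whose cross-correlation is unknown is covariance intersection. We sketch it here only to make the problem class concrete; we do not claim it is the method derived in Section 6.3.

```cpp
#include <Eigen/Dense>
#include <utility>

// Covariance intersection: a consistent fusion of two estimates with
// unknown cross-correlation. omega in [0,1] weights the two sources and
// is often chosen to minimize the trace or determinant of the result.
std::pair<Eigen::Vector3d, Eigen::Matrix3d> covarianceIntersection(
    const Eigen::Vector3d& x1, const Eigen::Matrix3d& s1,
    const Eigen::Vector3d& x2, const Eigen::Matrix3d& s2,
    double omega) {
    const Eigen::Matrix3d i1 = s1.inverse();
    const Eigen::Matrix3d i2 = s2.inverse();
    const Eigen::Matrix3d sigma =
        (omega * i1 + (1.0 - omega) * i2).inverse();       // fused covariance
    const Eigen::Vector3d x =
        sigma * (omega * i1 * x1 + (1.0 - omega) * i2 * x2);  // fused mean
    return {x, sigma};
}
```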

Another challenge for the core estimator is pose estimates with autocorrelated errors. Suppose we have a pose source based on processing satellite signals. Errors due to multipath effects typically do not only affect a single pose estimate but rather a series of pose estimates. In this case, the core estimator's assumption of independently distributed pose estimates does not hold. Instead, it has to deal with autocorrelated errors.

Therefore, we propose a preprocessing technique in Section 6.4 to reduce this effect.
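To see what autocorrelation means in this setting: if the error sequence behaved like a first-order autoregressive process, measurement differencing could whiten it. The AR(1) model, the coefficient phi, and the class below are assumptions for illustration only, not the technique of Section 6.4.

```cpp
#include <Eigen/Dense>
#include <optional>

// Illustrative measurement differencing for an AR(1) error model
// e_i = phi * e_{i-1} + w_i with white w_i: the differenced measurement
// z_i - phi * z_{i-1} carries the white error w_i, but then constrains
// the combination x_i - phi * x_{i-1} rather than x_i alone.
class Ar1Whitener {
public:
    explicit Ar1Whitener(double phi) : phi_(phi) {}

    // Returns a differenced measurement once a predecessor is available.
    std::optional<Eigen::Vector3d> whiten(const Eigen::Vector3d& z) {
        std::optional<Eigen::Vector3d> out;
        if (prev_) out = Eigen::Vector3d(z - phi_ * (*prev_));
        prev_ = z;
        return out;
    }

private:
    double phi_;  // assumed, pre-identified autocorrelation coefficient
    std::optional<Eigen::Vector3d> prev_;
};
```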

The core estimator is at the heart of the pose fusion. Its main purpose is to take a set of pose estimates, fuse them, and output an estimate of the current vehicle’s pose.

Additionally, it provides the estimated uncertainty of this pose estimate. Formally, the i-th output of the core estimator (and with that, the output of the pose fusion layer) is the Gaussian distribution at time $t_i$ with mean $\mathbf{x}_i$ and associated covariance matrix $\Sigma_i$.

It is thus defined as the triple $(\mathbf{x}_i, \Sigma_i, t_i)$. We call this the output or fused pose estimate.

Application layer The output of the pose fusion layer is passed to the application layer. Depending on the vehicle setup, there might be multiple applications that require a global pose estimate, e.g., path planning, routing, or collision avoidance. From the point of view of the application layer, the pose fusion layer acts like a centralized, virtual pose source. The pose sources are encapsulated behind the pose fusion layer. The applications do not directly interact with them and do not need to adjust for their different interfaces or behaviors. Instead, they receive a single unified pose estimate with its estimated uncertainty.

Software architecture We implement the pose fusion layer in the Automotive Data and Time-Triggered Framework (ADTF); see Appendix A for a brief introduction to this framework. Each module of the preprocessing sublayer corresponds to an ADTF filter. These filters are implemented in C++. The connections between the filters are easily adapted for a specific set of pose sources. The core estimator is implemented in a filter called PoseGraphFusion (PGF).