
3.2 Distributed Sensor Fusion

3.2.2 Application Architecture

In the following, we describe how the merged representations are processed so that they appear to have been observed by a single virtual sensor.

Points. To resemble the output of a real laser scanner, the merged set of points must be sorted by increasing angle (with respect to the abscissa). However, this alone is not sufficient.

The sequence of points is sorted in several runs to ensure that points belonging to the same geometrical primitive are neighbors. This is not necessarily true if the output of several sensors is merged, because points belonging to different arcs or edges may be interleaved if one of these primitives is located behind the other in the common coordinate frame.
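The two sorting steps described above could be sketched as follows. This is a minimal illustration, not the thesis's implementation: the point format with an attached primitive identifier and the group-by-smallest-angle strategy are assumptions made here for concreteness.

```python
import math

def order_points(points):
    """Order merged scan points so they resemble a single scan.

    `points` is a list of (x, y, primitive_id) tuples; the primitive
    identifier (hypothetical here) tags which arc or edge a point
    belongs to.
    """
    # First run: sort by increasing angle with respect to the abscissa.
    by_angle = sorted(points, key=lambda p: math.atan2(p[1], p[0]))
    # Points of different primitives may still be interleaved if one
    # primitive lies behind the other; regroup so that points of the
    # same primitive are neighbors, keeping groups in angular order.
    groups = {}
    for p in by_angle:
        groups.setdefault(p[2], []).append(p)
    ordered_groups = sorted(groups.values(),
                            key=lambda g: math.atan2(g[0][1], g[0][0]))
    return [p for g in ordered_groups for p in g]
```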

Contours. For the merged set of contours, it is checked whether several primitives can be combined and represented by a single primitive. Two arcs are combined if the distance between their centers and the difference between their radii are sufficiently small. Two edges are combined if they are sufficiently parallel (the angle between the vectors indicating their directions is less than a threshold) and sufficiently close. The two edges are replaced by a new edge having the two end points with the longest distance as its end points.
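The edge-combination rule could be sketched as follows. The endpoint-pair representation and the threshold values are illustrative assumptions; the thesis gives only the criteria, not concrete parameters.

```python
import math

def try_combine_edges(e1, e2, angle_tol=0.05, dist_tol=0.1):
    """Combine two edges if they are sufficiently parallel and close.

    Edges are ((x1, y1), (x2, y2)) endpoint pairs; thresholds are
    hypothetical. Returns the merged edge (the two endpoints farthest
    apart) or None if the edges cannot be combined.
    """
    def direction(e):
        (x1, y1), (x2, y2) = e
        return math.atan2(y2 - y1, x2 - x1)

    # Sufficiently parallel: angle between direction vectors below threshold.
    da = abs(direction(e1) - direction(e2)) % math.pi
    if min(da, math.pi - da) > angle_tol:
        return None
    # Sufficiently close: smallest endpoint-to-endpoint gap below threshold.
    gap = min(math.dist(p, q) for p in e1 for q in e2)
    if gap > dist_tol:
        return None
    # New edge: the pair of endpoints with the longest distance.
    pts = list(e1) + list(e2)
    return max(((p, q) for p in pts for q in pts),
               key=lambda pq: math.dist(*pq))
```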

Objects. Similar to the fusion of contours, in the object fusion it is checked whether several objects in the merged object set can be replaced by a single one. For circles this is achieved in the same way as for the fusion of arcs. Two polylines are combined if their bounding boxes touch or overlap. Basically, this is achieved by concatenating the vertex lists of both polylines. There are, however, cases that are more complicated but have been omitted here for space limitation reasons.
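The simple case of polyline fusion could be sketched as follows; only the bounding-box test and vertex-list concatenation described in the text are shown, and the more complicated cases the text omits are skipped here as well.

```python
def bbox(poly):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of a polyline."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_touch(a, b):
    # Two axis-aligned boxes touch or overlap iff they are not
    # separated along either axis.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def try_combine_polylines(p1, p2):
    """Merge two polylines by concatenating their vertex lists when
    their bounding boxes touch or overlap; otherwise return None."""
    if boxes_touch(bbox(p1), bbox(p2)):
        return p1 + p2
    return None
```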

Elements. For the fusion of elements no geometrical computations are necessary. Only redundant elements, i.e. objects of the same type at about the same place, are eliminated from the merged element set.
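The elimination of redundant elements could be sketched as follows. The element format (a type tag plus a position) and the distance tolerance are assumptions made for illustration; the text only states that objects of the same type at about the same place are removed.

```python
import math

def eliminate_redundant(elements, dist_tol=0.5):
    """Drop redundant elements: same type at about the same place.

    `elements` is a list of (type, x, y) tuples; the format and
    `dist_tol` are hypothetical.
    """
    kept = []
    for t, x, y in elements:
        if any(t == kt and math.dist((x, y), (kx, ky)) <= dist_tol
               for kt, kx, ky in kept):
            continue  # redundant: already represented in the merged set
        kept.append((t, x, y))
    return kept
```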

protocols and the precedence constraints of the task pairs, as will be explained in sections 5.4 and 5.5 respectively.

[Figure 3-5 appears here. It shows the application architecture on two systems, each with filter, fusion, and task execution stages, built on the middleware services Dynamic Network Scheduling, Reliable Multicast, Atomic Multicast, and Clock Synchronization; the numbered steps (1) execute, (2) timestamp, (3) multicast, and (4) deliver mark the path of the data.]

Figure 3-5. Architecture of the sensor fusion (system layer omitted)

The arrows in Figure 3-5 depict the path of the data through the application architecture.

Each set of raw data provided by a scanner is assigned a timestamp from the global clock (Figure 3-5 (2)). After preprocessing, the data are exchanged within the group using the multicast communication services of the middleware (Figure 3-5 (3)). The timeliness of the communication services ensures that the data are delivered with a bounded delay (Figure 3-5 (4)). Each system fuses the data it receives and possibly feeds them into some further filtering stages (called post-processing). Since there are no complex interactions between the systems in the application layer, the timeliness of the task execution service and the timeliness of the communication services allow achieving a predictable timing behavior for the distributed sensor fusion.
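The per-scan data path just described could be sketched as follows. The `clock` and `multicast` interfaces and the stage functions stand in for the middleware services and application tasks; their names and signatures are hypothetical, chosen only to mirror the numbered steps of Figure 3-5.

```python
def process_scan(raw_scan, clock, multicast, preprocess, fuse, postprocess):
    """One pass of the distributed fusion data path (sketch).

    The interfaces are hypothetical stand-ins for the middleware:
    `clock.now()` reads the global clock, `multicast.send()` and
    `multicast.receive_all()` exchange data within the group.
    """
    stamped = (clock.now(), preprocess(raw_scan))  # (2) timestamp, then filter
    multicast.send(stamped)                        # (3) exchange within the group
    merged = fuse(multicast.receive_all())         # (4) bounded-delay delivery
    return postprocess(merged)                     # further filtering stages
```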

There are two possibilities to multicast the data within the group. The first, which is depicted in Figure 3-5, is using the atomic multicast service. This ensures that all mobile systems fuse the same data and allows providing them with a common external worldview. According to our approach to the coordination of mobile systems, the mobile systems can make local decisions based on such a common worldview and still achieve a coordinated behavior. However, task abortions in the fusion or post-processing stages can still lead to the mobile systems having different worldviews. For example, if one of the systems fuses all input data while another one only manages to process part of it, say, representing only the left half of the scene, they obviously arrive at different worldviews. This means that the more important common worldviews are for the application, the lower the probability of task abortions in the fusion or post-processing stages must be. Thus, if common views are required, it seems best to perform fusion on the highest layer of abstraction. For one thing, the mobile systems then only have to perform the fusion and no post-processing after receiving the data. Moreover, fusion on the highest layer has a smaller execution time than on any layer below.

Another possibility is multicasting data with the reliable multicast service. This should be done if achieving common worldviews is not required. In this case, the overhead and the increased delay of the atomic multicast protocol can be avoided (the delays of both services are compared in Section 6.3). Thus, if short delays are more important than providing a common view to the group members, another protocol stack can be configured to match this requirement.

The global time base provided by the clock synchronization protocol is required to achieve time coherence for the data to be fused. Two alternative approaches have been implemented in the prototype (Section 6.2): The first approach does not require synchronizing the observations of the systems. It uses a Kalman filter (Bar-Shalom and Fortmann 1988) to fuse scans observed at different points in time. The global clock is used to timestamp the data when they are observed. In the second approach, the clock synchronization is used to synchronize the times at which the systems observe the environment; that is, the laser scanners are triggered at common instants on the global time base. This ensures that the observed data are time coherent, so that this approach can be applied when fusion takes place on lower levels of abstraction, where the Kalman filter cannot be used.
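The role of the timestamps in the first approach can be illustrated with a one-dimensional sketch: the filter predicts the state forward to each observation's timestamp before updating, so scans taken at different times can still be fused. The scalar state, the process-noise model, and all values are hypothetical simplifications; see Bar-Shalom and Fortmann (1988) for the full treatment.

```python
def kalman_fuse(observations, q=0.01):
    """Fuse timestamped scalar observations with a 1-D Kalman filter.

    `observations` is a list of (timestamp, value, variance) tuples,
    sorted by timestamp; `q` is a hypothetical process-noise rate.
    Returns the fused estimate and its variance.
    """
    t, x, p = observations[0]
    for t2, z, r in observations[1:]:
        p += q * (t2 - t)   # predict: uncertainty grows with the time gap
        k = p / (p + r)     # Kalman gain
        x += k * (z - x)    # update with the new observation
        p *= (1 - k)
        t = t2
    return x, p
```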

This sensor fusion scenario illustrates our idea of a configurable middleware. First, compared to the previous scenario, the Event Service was omitted. Whereas the Event Service provides a common view on the global state of the controlled system, in the scenario at hand observations of the external environment of the controlled systems have to be fused.

Second, as explained above, the atomic multicast service may or may not be used in the architecture, depending on whether or not a common view and coordinated behavior are requirements. Finally, dynamic network scheduling may also be omitted if fusion takes place in a static group. This shows how the middleware can be adapted to the scenario at hand with its different possible requirements.
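The configuration decisions just listed could be summarized in a small sketch. The list-based configuration interface is hypothetical; only the service names are taken from the text.

```python
def configure_stack(common_view_required, dynamic_group):
    """Sketch of middleware configuration for the fusion scenario.

    The returned list of service names is an illustrative stand-in for
    whatever configuration mechanism the middleware actually uses.
    """
    stack = ["IEEE 802.11", "reliable multicast", "clock synchronization"]
    if common_view_required:
        stack.append("atomic multicast")           # same data fused everywhere
    if dynamic_group:
        stack.append("dynamic network scheduling")  # omitted for static groups
    return stack
```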

The distributed sensor fusion scenario comprises application tasks with large and environment-dependent execution times that communicate using a reduced set of communication services. As the communication services of the middleware are considered in the context of the shared spatial resources scenario, the one at hand is mainly used in presenting and evaluating the task execution service. Besides its load characteristics, it is useful for that purpose because it was designed to include several kinds of application-inherent redundancy. In combination with the first scenario, it illustrates how the middleware can be adapted to fit the needs of different applications.

4 Communication in Cooperative Mobile Systems

Communication is essential for the cooperation of mobile embedded systems, particularly to achieve a coordinated behavior. In this chapter, we present the communication part of the middleware, which consists of a modular and configurable family of communication services. The architecture of the communication part, which extends over four layers, reflects the three main problems we address as well as our approaches to tackling them. The two bottom layers achieve reliable communication with predictable timing over a wireless LAN characterized by a widely varying number of message losses. The two top layers provide common views to the application; while the lower of them achieves consensus about general aspects of the distributed control system, the Event Service on the highest layer turns to the more application-specific common views on aspects of the controlled system.

The three bottom layers constitute the communication hardcore, which provides reliable and timely services with strong, application-independent consensus semantics and serves as the basis to realize services directly matching the semantics of the application on top of it. We call the family of protocols that implements the group communication services of the hardcore the Real-Time Group Communication Protocols (RGCP).

While Chapter 2 gave an architectural overview of the communication services, this chapter presents the communication protocols realizing these services. Our presentation also comprises a description of the polling mechanism of the IEEE 802.11 Standard, so as to describe the basic structure of this mechanism, which is common to both the original 802.11 Standard and the upcoming supplement 802.11e, and to specify those aspects which the standard leaves open but which are important for our design. Apart from that, we focus our presentation on those protocols developed as part of this thesis and refer to existing publications regarding the clock synchronization protocol (Mock et al. 2000b, Mock et al. 2000a).

The design of the protocols is based on a formal system model describing a synchronous system with omission failures. To match the characteristics of wireless media, this model makes no a priori assumptions about the quality of the communication links; that is to say, about the number of omission failures they experience. Rather, it uses time-dependent link quality predicates to model the dynamically changing link quality. Based on such a model, the protocols are designed to accomplish safe operation under these weak a priori assumptions, namely without assuming a fixed bound on the number of omission failures.

Furthermore, they achieve progress for all those stations and times for which the link quality is sufficiently good. The atomic multicast and the membership service are fail-aware; that is, they indicate to their users whether or not they are able to guarantee progress.

This chapter is structured as follows. First, we lay some foundations in Sections 4.1 and 4.2. Section 4.1 introduces some preliminaries that help in understanding the remainder of the chapter. It explains some basic concepts of layered architectures and system modeling, and gives a summary of the IEEE 802.11 Standard, which constitutes the basic layer of the communication part and is one of the starting points of the design. Section 4.2 presents the system model underlying the design of the protocols. Section 4.3 first gives an overview of the protocol stack as a whole, before it explains the protocols composing the stack. The description starts with the lowest layer, where the polling mechanism resides, and then moves up the stack to the highest layer, where the Event Service is located. Finally, in Section 4.4, we discuss how related work has addressed the problems this chapter deals with.