6.1.1 Analogue Multi-Robot Mission

The analogue multi-robot mission was performed on 16 November 2016 and used a semi-autonomous control approach, with the mission control centre located in Bremen, Germany. The motivation for performing the mission was twofold: firstly, to demonstrate the feasibility of the remote control approach for a multi-robot space mission; secondly, to evaluate the in-field performance of the multi-robot team consisting of the robots SherpaTT and Coyote III. The operation infrastructure presented in Chapter 5 has been used for the operation of the robots and in parts for the inter-site communication. The following sections present the experimental setup and an evaluation which focuses on communication characteristics.

Operations

Figure 6.2 depicts the setup for operations: the local control centre in Utah, USA, acted as communication proxy between the deployed robots and the mission control centre. Human operators in the local control centre prepared the robots for the start of the mission; during the mission, the local operators were not involved in commanding the robots. Each robot received target waypoints from the mission control centre in Bremen, relayed through the local control centre. Target waypoints were identified by the operators in the mission control centre based on environment maps generated from sensor data of the robots. Planthaber, Mallwitz, and E. A. Kirchner (2018) set up the communication between Utah and Bremen based on a satellite link. To establish this communication link, the core elements of the distributed communication infrastructure described in Chapter 5 have been used as the basis for the data transport.

Figure 6.3 illustrates the data processing chain: the data transfer was based on the UDT protocol. Since a large delay had to be accounted for when establishing a connection, the internally set default timeout for establishing a UDT connection had to be raised to 20 s. The local control station and the robots used an IEEE 802.11n wireless network with a central access point. Note that a communication infrastructure based on a central access point was selected for the analogue mission (instead of a mesh-based setup): the satellite modem required a very particular network setup which conflicted with using mesh-based communication. Post-mission analysis identified a configuration error in the mesh setup which could lead to looping network packets.
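To make the transport setup concrete, the following is a minimal sketch of a UDT client connection, assuming the UDT4 C++ API; the proxy address and port are hypothetical placeholders. Note that in UDT4 the connection-establishment timeout is an internal library default rather than a socket option, so raising it to 20 s, as described above, amounts to adjusting the library sources and rebuilding.

    // Minimal sketch of a UDT client connection (UDT4 C++ API assumed).
    #include <udt.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <iostream>

    int main()
    {
        UDT::startup();

        UDTSOCKET sock = UDT::socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in proxy{};
        proxy.sin_family = AF_INET;
        proxy.sin_port = htons(9000);                          // hypothetical proxy port
        inet_pton(AF_INET, "198.51.100.10", &proxy.sin_addr);  // hypothetical proxy address

        // The timeout for establishing the connection is an internal UDT
        // default (not a socket option); for the high-latency satellite
        // link it was raised to 20 s in the library sources.
        if (UDT::ERROR == UDT::connect(sock, reinterpret_cast<sockaddr*>(&proxy), sizeof(proxy)))
        {
            std::cerr << "connect: " << UDT::getlasterror().getErrorMessage() << std::endl;
            return 1;
        }

        const char sample[] = "telemetry sample";
        UDT::send(sock, sample, sizeof(sample), 0);

        UDT::close(sock);
        UDT::cleanup();
        return 0;
    }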

Figure 6.1: Left hand side: general overview of the testing site in Utah with the respective main landmarks and approximate distances (Source: manually annotated Imagery ©2018 Google, Map data ©2018 Google); right hand side: view onto the site from additional perspectives.

Figure 6.2: The general mission control setup, including a local control centre which redirects the data streams between mission control and the robots.

Figure 6.3: Reuse and setup of the demultiplexing and multiplexing infrastructure components for the inter-site communication.

Robot autonomy

The analogue mission involved the robots SherpaTT and Coyote III. The mission used neither payloads nor a coalition structure change. Each robot is able to build its own environment map using a distributed SLAM approach and can navigate autonomously towards a given waypoint.

Hence, mission operators could request an environment map from the robots as well as images of the environment, so that suitable waypoints could be identified. A set of approximate target science points had been communicated to the mission operators by the local team prior to the actual mission. This approach is similar to a science team communicating a desired target. The exact waypoints, however, were defined by and known only to the mission operators until communicated to the robots.

Mission performance

The mission was performed between 10:30 am and 11:30 am local time (Mountain Standard Time) in Utah; all data presented in the following refers to the absolute starting time of 10:30 am. Figure 6.4 illustrates the robot poses throughout the mission. No ground truth data was available; hence, the plot reflects the positions assumed by each robot according to the distributed SLAM approach. An assumed position can change abruptly as the result of pose corrections triggered by new sensor input. Figure 6.4 shows larger corrections for Coyote III compared to SherpaTT. This behaviour can be attributed to the different sensor equipment and parametrisation in combination with the traverse through feature-poor areas:

Coyote III used a maximum sensing distance of 15 m, while SherpaTT used a maximum sensing distance of 40 m. Figure 6.5 illustrates the relative distances between the operating robots.

Based on the robots' assumed poses, the maximum distance between both robots did not exceed 30 m. The robots constantly operated in line of sight of each other, so that direct robot-to-robot communication was possible at all times during the mission. Figure 6.5, as well as several following illustrations, visualises the sequentially performed mission: firstly, SherpaTT is commanded to its destination using three intermediate waypoints; subsequently, starting at approximately 1400 s into the mission, Coyote III is directed towards a rendezvous location to meet SherpaTT. Plateau phases in Figure 6.5 indicate periods in which a robot does not change its pose. Such plateaus can result from idle times due to preparation activities of the human operators in the mission control centre.

Figure 6.4: The assumed positions of the robots. Progression of time is encoded in greyscale. Approximate location coordinates: local control centre at (0,-35), Landmark 1 at (30,15), Landmark 2 at (20,-15).

However, plateau phases can also indicate robot activities such as manipulation: between 2500 s and 3000 s, SherpaTT's manipulator was controlled from the mission control centre. This activity is also visible in the correspondingly higher communication rate of (arm) joint data in Figure 6.7.

According to Figure 6.5, the combined travelled distance of both robots exceeds 100 m within one hour of operation. An even higher rate could have been achieved by operating both systems in parallel. For comparison, the anticipated speed of the ExoMars Rover in full autonomy mode is 14 m/h according to Winter et al. (2015), i.e., the two-robot team covered roughly seven times that distance in the same time.

Communication between mission control and local control centre

The local control centre proxies the data between the mission control centre and the team of robots. Figure 6.6 shows a low frequency of control commands to the robots for defining waypoints. A high outgoing and incoming data rate can be observed between 2500 s and 3000 s, which corresponds to the remote control of SherpaTT's manipulator. Figure 6.6 shows that control commands typically consume little data volume, in contrast to the transfer of images and maps as seen in Figure 6.7. For that reason, the setup of the mission control station by Planthaber, Mallwitz, and E. A. Kirchner (2018) also allowed the operators to control the sending frequency of telemetry data. Both figures again show the sequential flow of the operation, and the data transfer changes correspondingly, since it was adapted by the operator in the mission control centre.
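As an illustration of such operator-adjustable sending frequencies, the following sketch shows one possible design, assuming per-stream periods that mission control can change at runtime; it is not the implementation by Planthaber, Mallwitz, and E. A. Kirchner (2018), and the stream name used in the usage comment is hypothetical.

    // Sketch: per-stream telemetry rate limiting with runtime-adjustable periods.
    #include <chrono>
    #include <map>
    #include <string>

    class TelemetryScheduler {
    public:
        using Clock = std::chrono::steady_clock;

        // Operator command, e.g. setPeriod("arm_joint_state", std::chrono::milliseconds(100)).
        void setPeriod(const std::string& stream, Clock::duration period) {
            streams_[stream].period = period;
        }

        // Called for every new sample; forwards it only if the stream's
        // period has elapsed since the last transmission.
        bool shouldSend(const std::string& stream) {
            Stream& s = streams_[stream];
            const Clock::time_point now = Clock::now();
            if (now - s.lastSent < s.period)
                return false;   // skip the sample to respect the configured rate
            s.lastSent = now;
            return true;
        }

    private:
        struct Stream {
            Clock::duration period = std::chrono::seconds(1);  // conservative default
            Clock::time_point lastSent{};
        };
        std::map<std::string, Stream> streams_;
    };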

(a) Distance between SherpaTT and Coyote III.

(b) Travelled distance per robot.

Figure 6.5: Inter-robot and travelled distance for the analogue mission.

Figure 6.6: Communication from mission control (Bremen) to local control centre (Utah).

Figure 6.7: Communication from local control centre (Utah) to mission control (Bremen).

Communication between robots

The direct communication between SherpaTT and Coyote III serves mainly the distributed SLAM approach. This distributed SLAM approach has been developed by Sebastian Kasperski and is based on previous work by Eich et al. (2014). The distributed SLAM serves as an example client application for the distributed communication architecture and acts as a placeholder for any solution which enables the identification of robot poses relative to each other.

The approach relies on a graph-based SLAM formulation where so-called localised pointclouds are stored with spatial constraints. The localised pointclouds represent nodes in the graph and each edge corresponds to a spatial constraint. To avoid sharing the complete map, only individual localised pointclouds and spatial constraints are exchanged between robots. Localised pointclouds and spatial constraints are globally identifiable, and they can also be directly requested from any robot by referring to this ID. Each localised pointcloud is a sub-sampled pointcloud of the robot's currently sensed environment. Localised pointclouds are sent in combination with spatial constraints, but they are only sent when a robot changes its position or when an update is explicitly requested, e.g., by a human operator or the robot's supervision. The exchange of the localised pointclouds is based on the FIPA-based distributed communication infrastructure.
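The following sketch illustrates this map representation as described above; it is an illustrative reconstruction, not Kasperski's implementation, and all type and field names are assumptions.

    // Sketch of the shared map representation: localised pointclouds as
    // graph nodes, spatial constraints as edges, both addressable by a
    // global ID so that missing elements can be requested individually.
    #include <array>
    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    struct GlobalId {
        std::string robot;   // originating robot, e.g. "sherpa_tt"
        uint64_t sequence;   // per-robot counter; the pair is globally unique
        bool operator<(const GlobalId& o) const {
            return robot != o.robot ? robot < o.robot : sequence < o.sequence;
        }
    };

    struct LocalisedPointcloud {                    // graph node
        GlobalId id;
        std::vector<std::array<float, 3>> points;  // sub-sampled local scan
    };

    struct SpatialConstraint {                      // graph edge
        GlobalId from;
        GlobalId to;
        std::array<double, 6> relativePose;         // x, y, z, roll, pitch, yaw
        std::array<double, 36> covariance;          // row-major 6x6 uncertainty
    };

    struct PoseGraph {
        std::map<GlobalId, LocalisedPointcloud> nodes;
        std::vector<SpatialConstraint> edges;

        // A robot receiving a constraint that refers to an unknown node can
        // request exactly that pointcloud instead of the complete map.
        bool hasNode(const GlobalId& id) const { return nodes.count(id) > 0; }
    };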

Figure 6.8 illustrates the sending of FIPA letters for each robot, including the size of a letter and the corresponding multiplexed samples that are communicated. The sending of data is triggered by a robot's location change, so that Figure 6.8 can also be interpreted as an activity graph for both robots.
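A minimal sketch of such a location-change trigger is given below, under the assumption of simple translation and rotation thresholds; the threshold semantics and the 2D pose type are illustrative, not taken from the actual system.

    // Sketch: broadcast a new localised pointcloud only when the robot has
    // moved or rotated beyond a threshold (or when explicitly requested).
    #include <cmath>

    struct Pose2D { double x, y, theta; };

    class UpdateTrigger {
    public:
        UpdateTrigger(double minTranslation, double minRotation)
            : minTrans_(minTranslation), minRot_(minRotation) {}

        // Returns true if the pose change since the last transmitted update
        // warrants sending a new localised pointcloud.
        bool poseChanged(const Pose2D& current) {
            const double kTwoPi = 6.283185307179586;
            const double dx = current.x - last_.x;
            const double dy = current.y - last_.y;
            const double dTheta =
                std::fabs(std::remainder(current.theta - last_.theta, kTwoPi));
            if (std::hypot(dx, dy) < minTrans_ && dTheta < minRot_)
                return false;  // robot (nearly) stationary: suppress the update
            last_ = current;
            return true;
        }

    private:
        double minTrans_;
        double minRot_;
        Pose2D last_{0.0, 0.0, 0.0};
    };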

The overall communication volume for inter-site and inter-robot data is depicted in Figure 6.9.

The total communication volume for inter-site and inter-robot data has the same magnitude, but the inter-robot communication channels have more remaining bandwidth. As already mentioned, the inter-site communication has been actively managed in order to reduce the overall communication volume. The inter-robot communication is not explicitly restricted, although several characteristics of the distributed SLAM approach - such as sending data only

Figure 6.8: Communication between the participating robots using the distributed communication infrastructure.

when the robot moves - aim at reducing the communication volume.