

6.2.3 Mitigating Errors Caused by Inconsistency between Lasers and Light Rays

The lasers of real-life LiDAR sensors rotate or oscillate around an axis, as demonstrated in Figure 6.9. As a result, the lasers that are not perpendicular to the rotation axis trace out conical surfaces, as shown in Figure 6.10(a). In comparison, the light rays used to simulate the lasers in the viewing frustum, i.e., the virtual rays that pass through the centres of the pixels on the near plane, sweep out a planar surface (cf. Figure 6.10(b)). As the HFOV becomes larger, this mismatch becomes more pronounced and can no longer be ignored. Figure 6.11 gives an impression of the discrepancy. Recall that the lasers covering the overall HFOV are simulated at the same time. Consequently, simulating the lasers with a single default camera node would cause intolerable errors.
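To see why the mismatch grows with the HFOV, consider a simplified geometric sketch. Assume a vertical rotation axis, a laser with fixed elevation angle $\varphi$ and azimuth $\theta$, and a near plane at distance $n$ in front of the sensor; these symbols are introduced here purely for illustration and do not appear in the original derivation. The laser direction is $(\cos\varphi\sin\theta,\ \sin\varphi,\ \cos\varphi\cos\theta)$, so it intersects the near plane at

$$x = n\tan\theta, \qquad y = \frac{n\tan\varphi}{\cos\theta}.$$

The rays through a single pixel row all share the same height $y$ on the near plane, whereas the intersection height of the actual laser grows by a factor of $1/\cos\theta$ towards the edges of the HFOV; at $\theta = 45^{\circ}$ it is already about 41% larger than at the image centre.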

To address this problem, a macro-micro approach is proposed as follows:

• Macro-scale approximation: approximate the laser beams with the light rays passing through the pixels that are nearest to the intersections of the laser beams and the near plane. The camera node used to simulate one LiDAR sensor has a much finer resolution than the sensor itself. For example, the IBEO sensor, which has 4×220 laser beams, is simulated with one camera node that renders an image of 200×400 pixels. The FOVs of the IBEO sensor and of the viewing frustum defined by the camera node are the same. The real-life laser beams are then intersected with the near plane of the camera node, and the coordinates of the intersections are calculated. For each intersection, the pixel that is nearest to it on the near plane is located and recorded. The final result of the macro-scale approximation is a texture LASER-TEXTURE containing 200×400 texels. Each texel is related to a unique pixel of the image that will be rendered by the camera node. If a pixel turns out to be nearest to the intersection of a laser beam and the near plane, its corresponding texel records the coordinates of the intersection.

Figure 6.12 displays this approximation scheme.

• Micro-scale correction: further rectify the target positions captured by the light rays chosen in the macro-scale approximation stage so that they more closely approximate the information returned by the actual lasers. In the OpenGL rendering pipeline, the camera node renders the image onto the window screen. The window-relative coordinates of the image pixels (i.e. the fragments) can be obtained in the fragment shader via the built-in variable gl_FragCoord of the OpenGL shading language. The window-relative coordinates of the intersections of the actual lasers and the near plane can also be computed by scaling their coordinates on the near plane by the scale factor between the near plane and the window screen. As mentioned above, the coordinates of the intersections on the near plane are already precomputed and stored in the texture LASER-TEXTURE. This texture is attached to the camera node as a uniform variable which can be accessed by the fragment shader. The partial derivatives of the data contained in a fragment (in this simulation context, the positions of the points on the reflective targets rasterized into the fragment) with respect to the window-relative coordinates of that fragment can be obtained via the built-in functions dFdx() and dFdy(). Consequently, the data pertaining to the actual lasers can be approximately computed as:

$$\mathit{pos}_{\text{actual}} = \mathit{pos}_{\text{pixel}} + \mathrm{dFdx}(\mathit{pos})\,(x_{\text{actual}} - \texttt{gl\_FragCoord}.x) + \mathrm{dFdy}(\mathit{pos})\,(y_{\text{actual}} - \texttt{gl\_FragCoord}.y)$$

where $\mathit{pos}_{\text{actual}}$ refers to the approximate position of the point on the reflector targeted by the actual laser beam that passes through the intersection, and $x_{\text{actual}}$ and $y_{\text{actual}}$ are the window-relative coordinates of the intersection. The positions rectified in this way are much closer to the target positions captured by the real-life lasers than those obtained from the default light rays. Figure 6.13 shows the effect of this correction strategy. A code sketch of both stages is given below.
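The following C++ sketch illustrates how the two stages fit together. It is a minimal, hypothetical reconstruction rather than the actual implementation: the names (LaserTexel, precomputeLaserTexture, correctPosition), the coordinate conventions and the resolution constants are assumptions, and in the real pipeline the correction runs in a GLSL fragment shader, where the derivatives come from dFdx()/dFdy() and the fragment coordinates from gl_FragCoord.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch, not the thesis implementation.
// Resolution of the image rendered by the camera node of one IBEO sensor.
constexpr int kWidth  = 400;
constexpr int kHeight = 200;

struct Vec2 { float x = 0.0f, y = 0.0f; };
struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

// One texel of LASER-TEXTURE: window-relative coordinates of the laser/near-plane
// intersection assigned to this pixel, plus a flag telling whether a laser maps here.
struct LaserTexel {
    bool hit = false;
    Vec2 windowCoord;
};

// Macro-scale approximation: intersect each real laser beam with the near plane,
// find the nearest pixel, and record the intersection in that pixel's texel.
// laserDirections: unit beam directions in camera space (z points into the frustum);
// nearDist, halfWidth, halfHeight: distance and half extents of the near plane.
std::vector<LaserTexel> precomputeLaserTexture(const std::vector<Vec3>& laserDirections,
                                               float nearDist, float halfWidth,
                                               float halfHeight) {
    std::vector<LaserTexel> texture(kWidth * kHeight);
    for (const Vec3& d : laserDirections) {
        if (d.z <= 0.0f) continue;                      // beam does not cross the near plane
        const float t  = nearDist / d.z;                // scale factor so that z == nearDist
        const float xn = t * d.x;                       // intersection on the near plane
        const float yn = t * d.y;
        // Map near-plane coordinates to window-relative (pixel) coordinates.
        const float px = (xn + halfWidth)  / (2.0f * halfWidth)  * kWidth;
        const float py = (yn + halfHeight) / (2.0f * halfHeight) * kHeight;
        const int ix = std::clamp(static_cast<int>(px), 0, kWidth  - 1);
        const int iy = std::clamp(static_cast<int>(py), 0, kHeight - 1);
        LaserTexel& texel = texture[iy * kWidth + ix];  // texel of the nearest pixel
        texel.hit = true;
        texel.windowCoord = {px, py};
    }
    return texture;
}

// Micro-scale correction: shift the position captured by the chosen pixel towards
// the position the real laser would have captured, using the screen-space partial
// derivatives of the position. In the fragment shader this corresponds to
//   pos + dFdx(pos) * (x_actual - gl_FragCoord.x) + dFdy(pos) * (y_actual - gl_FragCoord.y).
Vec3 correctPosition(const Vec3& posPixel, const Vec3& dPosDx, const Vec3& dPosDy,
                     const Vec2& fragCoord, const Vec2& actualCoord) {
    const float dx = actualCoord.x - fragCoord.x;
    const float dy = actualCoord.y - fragCoord.y;
    return {posPixel.x + dPosDx.x * dx + dPosDy.x * dy,
            posPixel.y + dPosDx.y * dx + dPosDy.y * dy,
            posPixel.z + dPosDx.z * dx + dPosDy.z * dy};
}
```

The derivative-based correction is effectively a first-order Taylor expansion in window coordinates; since the chosen pixel lies within a pixel of the true intersection, the extrapolation distance stays at the sub-pixel level, which keeps the correction error small.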

There is still another problem regarding the simulation of the Velodyne sensor. Although the lasers of the Velodyne sensor turn around the same axis, they are emitted from locations on the sensor with irregular horizontal and vertical offsets between them. For example, two laser transmitters can have a horizontal offset of 17 cm. Such a laser layout cannot be efficiently modelled by camera nodes, as the lasers simulated by the same camera node must share a common origin. In the current application, it is therefore assumed that all the lasers sent out from the Velodyne sensor have the same origin.

In this way, the lasers can be simulated with as few camera nodes as possible. The configuration file of the Velodyne sensor, which stores the values of the original offsets, is also modified according to that assumption, so that the Velodyne sensor data processing module can interpret the simulated sensor data correctly.
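As a minimal sketch of the configuration change implied by this assumption, the per-laser positional offsets could be zeroed while the beam angles are kept. The structure VelodyneLaserCalib and its field names below are hypothetical and do not reflect the actual Velodyne configuration file format.

```cpp
#include <vector>

// Hypothetical per-laser calibration entry; field names are illustrative only.
struct VelodyneLaserCalib {
    double rotationalAngleDeg;  // azimuthal correction of the beam
    double verticalAngleDeg;    // elevation of the beam
    double horizontalOffsetM;   // transmitter offset orthogonal to the beam
    double verticalOffsetM;     // transmitter offset along the rotation axis
};

// Enforce the common-origin assumption used by the simulation: all beams are treated
// as if emitted from the sensor centre, so the positional offsets are zeroed while
// the beam angles remain unchanged.
void applyCommonOriginAssumption(std::vector<VelodyneLaserCalib>& calibration) {
    for (VelodyneLaserCalib& laser : calibration) {
        laser.horizontalOffsetM = 0.0;
        laser.verticalOffsetM   = 0.0;
    }
}
```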

6.3 Performance Evaluation

This section evaluates the performance of the proposed scanning sensor simulation strategy. The GPU and CPU installed in the computer used for the evaluation are an Nvidia Quadro 2000 with 192 parallel processing cores and an Intel Xeon E3-1245.

The 3D models that compose the traffic environment consist of 16 houses, 1 road, 7 trees, 25 cars driving randomly on the road, 1 pedestrian and 6 traffic lights.


Figure 6.9: Four laser beams turning around an axis.

(a) Conical surface traced out by the actual lasers.

(b) Planar surface traced out by the light rays passing through the centres of the pixels of the rendered image.

Figure 6.10: Surfaces rendered by the actual lasers and the light rays within the viewing frustum.

The average size of a 3D model file is 2 MB. The simulated sensors include 6 IBEO LiDAR sensors (each simulated with 1 camera node), 1 Velodyne sensor (simulated with 3 camera nodes) and 1 radar sensor (simulated with 1 camera node). For the corresponding videos, please refer to Appendix 1.

A quantitative comparison of the simulated sensor data with real sensor data has not yet been conducted and is left for future work. At the moment, the accuracy of the simulated sensor data is only checked by visually inspecting the simulated sensor data and the objects identified by the obstacle detection module based on those data.

The scanning results of the simulated IBEO, Velodyne and radar sensors are displayed in Figure 6.14, Figure 6.15 and Figure 6.16, respectively.

The computation time consumed by rendering the simulated IBEO sensors is also examined. The frame rates achieved by the GPU when rendering simulated sensors with different configurations are reported in Table 6.1.

(a) Scanning pattern of the actual lasers.

(b) Scanning pattern of the light rays passing through the centres of the pixels of the to-be-rendered image.

Figure 6.11: Scanning patterns of the actual lasers and the light rays within the viewing frustum. VFOV = 20°.

As can be concluded from Table 6.1, the frame rate declines when the number of objects in the scene increases. Furthermore, the frame rate also drops drastically when the number of pixels used to simulate the sensor grows. Note that a simulation with more pixels should generate more accurate sensor data. For the simulation of the IBEO LiDAR sensor, the image resolution is set to 200×400 pixels, which generates sufficiently accurate sensor data (from the perspective of approximating the lasers with the virtual light rays). The corresponding computational cost is tolerable, as can be seen from Table 6.1. Nonetheless, it is still necessary to further decrease the rendering time in future work.
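A rough fragment count makes the sensitivity to resolution plausible (this back-of-the-envelope figure ignores geometry processing and fixed per-frame costs): with six IBEO sensors at 200×400 pixels each, the GPU shades $6 \times 200 \times 400 = 480{,}000$ fragments per frame, whereas at 200×2000 pixels per sensor it shades $6 \times 200 \times 2000 = 2{,}400{,}000$, i.e. five times as many.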

6.4 Summary

This chapter presented the simulation models of the scanning sensors and the GPU-based implementation of the simulation. The performance evaluation in terms of accuracy and computational cost was reported. It should be pointed out that the proposed simulation of scanning sensors and the related simulation framework are designed based on the work presented in [74].

(a) The zoomed-out view.

(b) The zoomed-in view.

Figure 6.12: Using the pixels of the default to-be-rendered image to approximate the intersections of the actual laser beams and the near plane. The green points represent the pixels, while the red points display the intersections. The black points are the pixels that are nearest to the intersections and are thus chosen as the approximations of the intersections. Note that the vertical interval between two neighbouring green points is so small that they are not distinguishable in the picture.

Number of scene objects    Frame rate with 6 IBEO sensors    Frame rate with 6 IBEO sensors
                           (200×400 pixels per sensor)       (200×2000 pixels per sensor)
0                          34                                30
1                          33                                24
2                          31                                18
3                          25                                13
4                          20                                11

Table 6.1: Computational costs for simulating IBEO sensors with different configurations. The rendering frame rate of the simulation environment without any simulated sensor is 34.

(a) The simulation result by directly using the pixels that are nearest to the intersections.

(b) The simulation result after the micro-scale correction is applied.

Figure 6.13: The effect of the micro-scale correction on the sensing result of two layers of laser beams that are reflected by the ground.

The simulation approach employed in this thesis is similar in spirit to those demonstrated in [68] and [69]. The proposed radar sensor simulation can generate the velocities of other traffic participants relative to the ego-vehicle, which are an important output of the radar sensors mounted on autonomous vehicles. Regarding LiDAR, a macro-micro method was proposed to approximate the actual scanning pattern of the sensors and improve the accuracy of the simulated sensor data. Future work includes a validation of the simulated sensor data against the actual sensor data, enriching the physical phenomena considered in the simulation, adding proper noise models to the sensor simulation and decreasing the computational complexity of the rendering of the sensors.

Figure 6.14: Simulation result for the IBEO sensors. The detection range of the sensor is set to 200 m.

Figure 6.15: Simulation result for the Velodyne sensor. The white points forming concentric circles are the simulated sensor data interpreted by the obstacle detection module. The green boxes are the detection results of the obstacle detection and tracking modules. The pink lines demonstrate the rotating lasers.

Figure 6.16: Simulation result for the radar sensor. The green balls are the detected reflectors. The HFOV of the simulated radar sensor is larger than that of its real-life counterpart for demonstration purposes.


7 Motion Planner Evaluation

In this chapter, the proposed motion planner is first examined according to the evaluation criteria illustrated in Chapter 1. After that, the capability of the proposed motion planner in handling time-critical traffic scenarios is reported. Then, the performance of the planner in driving the vehicle around road networks is displayed. Lastly, a comparison of the proposed planner with the planners presented in other works is given. It is noteworthy that all the presented experiments are carried out in a simulation environment, where the vehicle dynamics, the sensors and the traffic scenarios are all simulated. The software modules involved in the experiments, such as those for detecting and tracking obstacles, are directly adopted from the autonomous driving system of MIG. Figure 7.1 shows the software framework of the experimental platform. The hardware adopted here is the same as that employed for the evaluation of the simulated scanning sensors (cf. Section 6.3). The configuration of the planner applied in the experiments follows Table 5.1 in general. Alterations of specific parameters required for the purposes of individual experiments will be highlighted where necessary. As mentioned earlier, there is no scenario reasoning module in the current implementation of the proposed planner.

Correspondingly, the configuration parameters of the planner are adjusted manually on the fly when necessary. Besides, some scenario-related targets, such as the target speed and location, are also set manually at runtime. For the corresponding videos, please refer to Appendix 1.