
Fig. 6.9 summarizes the results. The dashed and dotted lines (c/d) represent the errors in the sequences when their own model was used for prediction, i.e. they represent the residuals after minimization of the calibration model. They asymptotically approach the accuracy of the blob detection method^7 of Sec. 6.4.1, and naturally they should not go below it. As for the cross-validation (solid lines), the error remains at a significantly higher level in both situations. It even increases for polynomial degrees of four and five when the calibration model of the shorter sequence is applied to the longer sequence (Fig. 6.9(a)), which might seem disappointing at first glance. However, a further analysis (see Fig. 6.10) shows that the error increase occurs mostly at locations that are underrepresented in the sequence used for calibration. These are also locations to which a driver would usually not move his head (e.g. directly in front of the steering wheel or behind his own seat). At locations in between, along the calibration camera path, the model performs almost equally well.
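The following sketch illustrates this kind of cross-validation between two recorded sequences. The exact form of the calibration model is not spelled out here, so the sketch assumes a least-squares polynomial fit from the 3D camera position to a 2D measurement, with synthetic data standing in for the two sequences; all names and values are illustrative.

```python
# Sketch: self-residual vs. cross-validation error for polynomial degrees 1-5.
# The model form (3D position -> 2D target) and the data are assumptions.
import numpy as np

def design_matrix(P, degree):
    """Monomial basis up to total degree `degree` in the 3D camera position."""
    x, y, z = P.T
    cols = [x**i * y**j * z**k
            for i in range(degree + 1)
            for j in range(degree + 1 - i)
            for k in range(degree + 1 - i - j)]
    return np.stack(cols, axis=1)

def fit(P, T, degree):
    """Least-squares fit of the polynomial calibration model."""
    C, *_ = np.linalg.lstsq(design_matrix(P, degree), T, rcond=None)
    return C

def rmse(P, T, C, degree):
    """Root-mean-square measurement-to-model error (e.g. in mm)."""
    E = design_matrix(P, degree) @ C - T
    return np.sqrt((E ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)

def make_seq(n, spread):
    """Synthetic stand-in for a recorded sequence of n frames."""
    P = rng.uniform(-spread, spread, (n, 3))
    T = np.column_stack([np.sin(P @ [1.0, 2.0, 0.5]),
                         np.cos(P @ [0.5, 1.0, 2.0])])
    return P, T + rng.normal(0, 0.01, T.shape)

P_a, T_a = make_seq(1193, 0.2)   # shorter sequence (used for calibration)
P_b, T_b = make_seq(2251, 0.3)   # longer sequence (cross-validation)

for d in range(1, 6):
    C = fit(P_a, T_a, d)
    print(d, rmse(P_a, T_a, C, d), rmse(P_b, T_b, C, d))
```

As in the measured curves, the self-residual keeps shrinking with the degree, while the cross-validation error stagnates or grows where the calibration sequence does not cover the evaluation positions.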

Figure 6.10.: Camera paths of both image sequences. The camera path whose calibration model was used (1193 images) is colored in solid dark blue. The coloring of the other path (2251 images) represents the average measurement-to-model error in millimeters.

6.6.2. Qualitative Validation

Furthermore, we evaluated the view-dependent dynamic rendering of 3D objects on the real HUD display qualitatively. This was realized by using the camera tracking again as a simulation unit for a head tracker. The camera position was sent as 'head position' to the rendering system of the HUD by means of a wireless connection. The rendering of the HUD then adapted the perspective rendering and the pre-warping of the image to the updated parameters.
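A minimal sketch of this pose streaming is given below. The wire format (UDP with a JSON payload), the address, and the port are assumptions; the text only states that the camera position is sent wirelessly to the HUD rendering system.

```python
# Sketch: stream the tracked camera position as the simulated head position.
# Transport and payload format are assumptions, not the system's actual protocol.
import json
import socket

def send_head_position(position_mm, sock, addr=("192.168.0.10", 5005)):
    """Send one head-position update to the HUD rendering system."""
    payload = json.dumps({"head_position_mm": list(position_mm)})
    sock.sendto(payload.encode("utf-8"), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_head_position([120.0, -35.5, 640.2], sock)  # one update per tracked frame
```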

At the same time, the camera image offered a means for qualitative inspection. Having the camera pose, we also rendered the same 3D object in the camera image, so that in the ideal case both virtual objects should overlap. Although there is a permanent lag of one frame (the same camera image is used for tracking as well as for inspection), the calibration model and the rendering can be validated qualitatively when moving the camera very slowly. Fig. 6.11 shows the results for a wireframe model of a 60-meter-long tunnel. As can be seen, both models are well aligned.
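The overlay rendering into the camera image amounts to a standard pinhole projection with the tracked camera pose. The sketch below illustrates this for a wireframe tunnel; the intrinsics, the pose, and the tunnel geometry (cross-section size and ring spacing) are illustrative values, not those of the actual setup.

```python
# Sketch: project a wireframe tunnel into the camera image using the tracked
# pose, for visual comparison against the HUD rendering. Values are illustrative.
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points into the camera image."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

# 60 m tunnel: rectangular cross sections placed every 10 m along z.
depths = np.arange(0.0, 61.0, 10.0)
ring = np.array([[-2.0, -1.0], [2.0, -1.0], [2.0, 2.0], [-2.0, 2.0]])
tunnel = np.array([[x, y, z] for z in depths for x, y in ring])

K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])   # camera 5 m before the entrance
print(project(tunnel, K, R, t)[:4])            # image points of the first ring
```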

6.7. Extension for a User-Friendly HMD Calibration

We have extended our HUD calibration to optical see-through HMDs and presented it as a demonstrator at the ISMAR conference [WEK14]. The purpose of this demonstrator was twofold. First, we wanted to show that OST-HMD and HUD calibration are in principle very similar and can be realized with common techniques. Second, we demonstrated that the idea of separating user- and hardware-related calibration parameters can be exploited in order to realize a very simple user adaptation compared to the classical approaches [TN00, GTKN01, GTN02]. At the exhibit, visitors could quickly adjust the calibration to their own conditions and then immediately convince themselves of the compelling registration accuracy by means of a small AR demo.

^7 An error of two millimeters on the virtual plane approximately corresponds to an error of 0.4 pixel units in the image of the measuring camera.

Figure 6.11.: Qualitative evaluation using the camera position as the simulated head position of a driver, sent to the rendering system on the vehicle. A 3D grid tunnel is visualized both in the camera image (red) and on the HUD (white glow).


We propose to divide the calibration into two separate stages. In an offline stage, all user-independent parameters are retrieved, in particular the exact orientation and scaling of the two virtual display planes (one per eye) with respect to the coordinate frame of the OST-HMD built-in camera. In an online stage, the calibration is fine-tuned to user-specific parameters, such as the individual eye-separation distance and height. This leads to a simple and user-friendly procedure, where the user can interactively adjust the remaining parameters until virtual objects (such as marker contours) are well aligned with their physical counterparts.
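The separation of parameters can be made explicit as two disjoint parameter sets, combined only at rendering time. The sketch below is one way to express this split; the field names, types, and defaults are illustrative assumptions, since the text only fixes which quantities belong to which stage.

```python
# Sketch: the two-stage parameter split. Names and defaults are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class OfflineParameters:
    """User-independent, estimated once per device in the offline stage."""
    # Orientation and scale of each eye's virtual display plane, expressed
    # in the coordinate frame of the HMD built-in camera.
    plane_rotation_left: tuple
    plane_rotation_right: tuple
    plane_scale_left: float
    plane_scale_right: float

@dataclass
class OnlineParameters:
    """User-specific, fine-tuned interactively in the online stage."""
    eye_separation_mm: float = 64.0   # adjusted per user
    eye_height_mm: float = 0.0        # relative to a device default
```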

This main idea was later explored in more depth and further simplified by one of our students [BEK17]. His results are not part of this work.

Preparative offline stage The main objective of the preparative offline stage is to retrieve as many calibration parameters of the HMD as possible, so that the user-specific fine-tuning in the online stage can be simplified.

Thus, our HMD calibration is similar in spirit to our approach to head-up display calibration. Again, we use a moving camera that observes a known reference pattern displayed on the HMD from various positions (see Fig. 6.12 and Fig. 6.13). This camera also observes surrounding parts of the scene, where a well-textured known calibration rig is placed. The rig serves as tracking target to determine the camera position for each frame. The same calibration rig is also observed by the HMD built-in camera. With this setup we can estimate all calibration parameters in the coordinate frame of the built-in camera.
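Since both cameras observe the same rig, the moving-camera pose can be chained into the built-in camera frame. The sketch below shows this chaining with 4x4 homogeneous transforms; the variable names and the convention (T_ab maps points from frame b to frame a) are assumptions, as the text only describes the frames involved.

```python
# Sketch: express the moving camera in the built-in camera frame via the rig.
import numpy as np

def inv(T):
    """Invert a rigid 4x4 transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Per frame: moving-camera pose from rig tracking, and the rig pose as seen
# by the HMD built-in camera (identity placeholders stand in for real poses).
T_cam_rig = np.eye(4)
T_builtin_rig = np.eye(4)

# Moving camera in the built-in camera frame, i.e. the frame in which all
# calibration parameters are estimated.
T_builtin_cam = T_builtin_rig @ inv(T_cam_rig)
```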

Figure 6.12.: Setup of the offline stage of the OST-HMD calibration.

Having the whole path of the moving camera in the coordinate system of the built-in camera, we can estimate the exact position, orientation, and scaling of the virtual display planes of each eye. However, for the specific HMD that we used (Epson Moverio 200), it turned out that the virtual planes are located at a very far distance, so that for rendering they can safely be assumed to be at infinity. This further simplifies the calibration because fewer parameters need to be estimated.
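A short sketch of why the infinity assumption removes parameters (the notation is ours, not taken from the text): a display pixel then corresponds to a point at infinity, i.e. a pure viewing direction, and the translation part of the pose drops out of the projection.

```latex
% For a point at infinity with direction d, written homogeneously as (d, 0):
\[
  u \simeq K \, [\,R \mid t\,] \begin{pmatrix} d \\ 0 \end{pmatrix} = K R\, d ,
\]
% so only the orientation of the virtual plane and its intrinsic scaling K
% remain to be estimated; the plane's distance no longer appears.
```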

Online stage Once the parameters from the offline stage are retrieved, it only remains to determine the position of the user's eyes relative to the built-in camera. We propose a very simple procedure, where the user interactively adjusts the position of the eyes by means of a slider-based interface. The user is asked to perform the adjustment until the virtual contours of a marker are well aligned with the borders of the real marker, which is seen by the user and tracked with the built-in camera. During the calibration of one eye, the user should close the other one. Once the displayed virtual and real objects match for one particular view, they will also match for all other viewpoints.

However, the calibration becomes more accurate if the marker is held at a close distance. As the position of the eyes along the z-axis (viewing direction) is less important, it can be fixed to a default value, so the user only needs to adjust four parameters, which makes the calibration simple and fast.
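The sketch below illustrates one possible parameterization of this online stage. That the four adjustable parameters are the x/y offsets of each eye (with z fixed to a default) is an assumption matching the count given above; the interface and all values are illustrative.

```python
# Sketch: slider-based online adjustment of the eye positions.
# Parameter choice (x/y per eye, fixed z) and defaults are assumptions.
from dataclasses import dataclass

@dataclass
class EyePositions:
    """Eye positions relative to the HMD built-in camera, in millimeters."""
    left_x: float = -32.0   # default half of a ~64 mm eye separation
    left_y: float = 0.0
    right_x: float = 32.0
    right_y: float = 0.0
    z: float = 20.0         # fixed default along the viewing direction

def apply_slider(eyes: EyePositions, name: str, value: float) -> EyePositions:
    """Update one of the four adjustable parameters; z stays fixed."""
    assert name in ("left_x", "left_y", "right_x", "right_y")
    setattr(eyes, name, value)
    return eyes

# The user closes one eye, moves the sliders until the rendered marker
# contour overlaps the real marker, then repeats for the other eye.
eyes = apply_slider(EyePositions(), "left_y", 3.5)
```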

Figure 6.13.: Target pattern used for HMD calibration. (a) Original pattern. (b) Target pattern as seen from the moving camera with yellow-colored point detections, and the highly textured calibration rig in the background. The relevant viewing area is covered with a black square to simplify the target pattern detection.