
5. Reconstruction and Alignment of Feature Maps for Ready-to-Use Natural Feature Tracking

5.10. Evaluation on Public Benchmarks

Table 5.3.: Major contest events at which our approach has been used and compared with the solutions of other competitors.

| event name                     | year | location  | description                                                                        | score reached / score max  | numb. of teams | place         |
|--------------------------------|------|-----------|------------------------------------------------------------------------------------|----------------------------|----------------|---------------|
| ISMAR Tracking Competition     | 2009 | Orlando   | point picking, mark points on a transparent board                                  | 6/16                       | 4              | 1st           |
| AVLUS Tracking Contest         | 2010 | Ottobrunn | various practical scenarios                                                        | qualitative comparison     | 3              | report [SK11] |
| ISMAR Tracking Competition     | 2010 | Seoul     | point picking                                                                      | no official announcement^a | 5              | 1st (tied)    |
| ISMAR Tracking Contest         | 2011 | Basel     | point picking, scene manipulation; weighted score, negative points for false picks | 8/26                       | 5              | 2nd^b         |
| ISMAR Tracking Competition     | 2012 | Atlanta   | object picking and placing                                                         | 18/18                      | 3              | 1st           |
| VW Tracking Challenge at ISMAR | 2014 | Munich    |                                                                                    |                            | 6              |               |
|   Scenario 1                   |      |           | "Tracking of rotating vehicle", model-based, point picking                         | 30/39                      |                | 2nd           |
|   Scenario 2                   |      |           | "Tracking and learning on different vehicles", point picking                       | 33/41                      |                | 1st           |
|   Scenario 3                   |      |           | tracking with high accuracy, point marker placement                                | placement error 2.16 mm    |                | 1st (tied)^c  |
|   Scenario 4                   |      |           | tracking inside unknown area, point picking, SLAM tracking                         | 0/12                       |                | (no winner)   |

^a In 2010 (Seoul), inaccuracies in the organizers' measurement of the competition site led one of the participants to dispute the results, so that officially no winner was announced.

^b At ISMAR 2011 (Basel) we participated with a remote tracking solution: a mobile phone captured the images with its camera and displayed the augmented image, while the actual computations, notably the tracking and the augmentation, were performed on a server (laptop). The images were transmitted back and forth via Wi-Fi. This became problematic as soon as the room filled with many spectators, whose own cell phones saturated the transmission link. As a result of the slow transmission rate we could not accomplish all tasks in time.

^c In Scenario 3 in 2014 (Munich) our approach achieved the highest accuracy. However, a careless mistake of our own forced us to redo parts of the preparation during the contest run. Since the time needed to fully accomplish the tasks was also a scoring criterion according to the official rules, the jury decided to split the first place between the two participants with the highest accuracy.


Figure 5.10.: ISMAR Tracking Competition 2009 at Orlando. The yellow crosses correctly indicate the location of some of the objects that needed to be identified with AR: individual books in a bookshelf, Christmas balls on a blinking tree, screws of an industrial object, parts inside the engine compartment of a car. Red crosses mark the position of reference points provided by the organizer for registration (not used during the competition run). Images captured during the preparation phase are shown. The first image depicts the mobile configuration (laptop and webcam) of our system.

Figure 5.11.: ISMAR Tracking Competition 2011 at Basel. From top to bottom: (i) our remote, server-based tracking with visualization on a mobile phone, some general impressions of the contest, and the reconstructed and globally aligned feature map, (ii) augmented views for the scenarios: table with candies, mirror with transparent marbles, tunnel with a grid of target points inside, transparent keyboard with a grid of rings, (iii) Lego towers to be built at the correct position and height, target points inside and outside a vehicle.

Figure 5.12.: ISMAR Tracking Competition 2012 at Atlanta. Several objects distributed across the competition site had to be picked (top row) and placed at designated spots on a table in the center of the room (bottom row). The red and blue arrows indicate the directions to the source and target locations, respectively, whenever these were outside the view. The types of objects included (from left to right): glasses with target locations on a paper sheet (possible destinations had been marked by the organizer with pencil outlines), toy screws and other parts that had to be mounted on a construction, glasses in a barely textured area, and industrial parts in a bin.
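The direction of such a guidance arrow can be derived from the projected position of the (possibly off-screen) target. The following is a minimal sketch assuming a pinhole camera model with intrinsic matrix K; the function name and interface are illustrative, not taken from our actual implementation:

```python
import numpy as np

def offscreen_arrow(p_cam, K, width, height):
    """Unit 2D direction from the image center toward a 3D target given in
    camera coordinates, or None if the target is already visible on screen.
    p_cam: 3-vector in the camera frame, K: 3x3 pinhole intrinsics."""
    center = np.array([width / 2.0, height / 2.0])
    uvw = K @ np.asarray(p_cam, dtype=float)
    uv = uvw[:2] / uvw[2]              # perspective projection
    if uvw[2] > 0 and 0 <= uv[0] < width and 0 <= uv[1] < height:
        return None                    # target in view: no arrow needed
    d = uv - center
    if uvw[2] < 0:                     # behind the camera: projection is
        d = -d                         # point-mirrored, so flip the direction
    return d / np.linalg.norm(d)
```

A renderer would then clamp the arrow to the screen border along the returned direction, e.g. red for the source and blue for the target location.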

Figure 5.13.: Volkswagen Tracking Challenge 2014 at Munich, third of a total of four scenarios (accuracy contest). The task was to place small point markers at target locations as accurately as possible.

After the contest run the organizers measured the deviations from the ground-truth positions with an optical coordinate measuring machine. In the center, within a demarcated area, competitors were allowed to place their own markers or even 3D objects to support the tracking: we used a highly and irregularly textured planar pattern. Top row: general task description, the real setup of the task with our pattern, the reconstructed feature map, and a live view during the run with the highlighted target position. Bottom row: live recording of the run while placing the four markers.


Figure 5.14.: Volkswagen Tracking Challenge 2014 at Munich, second of a total of four scenarios. From top to bottom: (i) general impressions and relevant tracking areas, (ii) images and feature tracks during the preparation phase, (iii) reconstructed feature maps, (iv) live view of the contest run, and (v) comparison of the correct scene points with the points selected during the run.

The difficulty of the different tasks was varied by the following means:

• sparsely textured scenes or objects, giving only a few features for tracking,

• transparent objects or parts, such as glasses (ISMAR 2012) or tiny and hardly visible elements on transparent boards (ISMAR 2011),

• variations in lighting conditions, such as using a projector to illuminate the scene with random patterns (VW Tracking Challenge 2014), or blinking fairy lights on a tree (ISMAR 2009),

• highly reflective and specular objects, such as a mirror with a grid of transparent marbles (ISMAR 2011),

• substantial scene changes between preparation and contest run, e.g. the removal of the cover of the spare wheel in the trunk of a car (VW Tracking Challenge 2014),

• scene parts with repetitive patterns,

• small separation distance between target objects or points, such as individual letters and digits on a receipt (VW Tracking Challenge 2014), or

• large distance to the reference points for spatial registration to the target coordinate system (VW Tracking Challenge 2014).

Table 5.3 provides an overview of the most important tracking contests in which the approach of this chapter has been used and summarizes the achieved results. We always scored one of the top two places; in the majority of the competitions we even emerged as the winner.

As it turned out, a sophisticated method for spatial registration of the tracking system to the target coordinate frame was a crucial element for accurate absolute localization. Its value was particularly evident whenever high accuracy was required, for example when one of several points only a few millimeters apart had to be identified (such as individual digits on a receipt). The procedure for spatial registration was also the part in which our system differed most from those of the other competitors.
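At its core, such a registration amounts to estimating a similarity transform (scale, rotation, translation) that maps the reconstructed reference points onto their surveyed coordinates in the target frame. As a generic illustration, not our exact procedure, the standard closed-form SVD-based (Umeyama-style) estimate can be sketched as follows:

```python
import numpy as np

def align_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping reconstructed reference points `src` onto their surveyed target
    coordinates `dst`, via the classic SVD-based closed form.
    Both arrays are Nx3 with N >= 3 non-collinear corresponding points.
    Returns (s, R, t) such that dst_i ~ s * R @ src_i + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d                # centered point sets
    U, S, Vt = np.linalg.svd(Y.T @ X)            # cross-covariance (sum form)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (X ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

The scale component matters here because a monocular reconstruction is only determined up to scale; with calibrated metric input one would fix s = 1 and estimate the rigid part alone.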