


Register situations were caused by different problems: the robot lost the users because they moved too fast or too far away from the robot, because they turned too quickly, or because the lighting changed from one room to the next. With the help of SInA, the problems that occurred during the guiding task were analyzed in depth (see Section 5.3.1).

On the positive side, it can be noted that the problems were hardly ever directly followed by the same or further problems. This shows that the users were actually able to solve them on the interaction level. The transition matrix also displays the structure of the task that the participants were asked to complete. Greeting and introduction are likely to be followed by the guiding task (.42 and .60, respectively). As mentioned above, guiding is then succeeded by teaching the room (.21), and teaching the room is followed by teaching an object (.30) or by another guiding sequence (.25). Teaching objects is also frequently followed by guiding (.27). The main problems that interrupted the teaching tasks differed depending on what was taught. When it was taught rooms, the robot frequently lost the user (.20) because it turned to improve its representation of the room and could not focus on the human the whole time. This case is described in more depth in Section 5.3.2. The main problem that occurred when teaching objects was that a previous task, i.e., following, needed to be completed first; afterwards the teaching was resumed (.31).
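For illustration, the reported values can be read as selected cells of a first-order transition matrix over the tasks. The following Python sketch uses hypothetical shorthand task labels and includes only the cells quoted above:

# Selected task-transition probabilities as reported above; keys are
# shorthand task labels (hypothetical), values the probability that
# the row task is followed by the column task. All other cells omitted.
transitions = {
    "greeting":           {"guiding": 0.42},
    "introduction":       {"guiding": 0.60},
    "guiding":            {"teach_room": 0.21},
    "teach_room":         {"teach_object": 0.30, "guiding": 0.25, "user_lost": 0.20},
    "teach_object":       {"guiding": 0.27},
    "complete_following": {"teach_object": 0.31},  # teaching resumed after interruption
}

def most_likely_successor(task):
    """Most probable next task among the quoted transitions."""
    return max(transitions[task], key=transitions[task].get)

print(most_likely_successor("teach_room"))  # -> teach_object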

Conclusion

To conclude, ten tasks were identified and ordered in three groups. The social tasks frame the interaction and are closely connected to expectations from HHI, where they occur very frequently; the functional tasks display the goal of the interaction, to guide the robot and to teach it objects and rooms; and the problem-related tasks interrupt the social and functional tasks. In connection to the model, the problem-related tasks are disconfirmations of the users' expectations.

This task description is only the first step of the analysis of the home tour data. In the following, a closer look is taken at the interaction as a whole and at the tasks with respect to the modalities gesture, body orientation, and gaze.

5.1.2   Analysis of gestures in the home tour

5.1.2.1   Pointing gestures

The pointing gestures in the home tour were annotated with the following coding scheme:

1. movement of whole arm, pointing with finger at one spot of the object
2. movement of whole arm, pointing with finger at large parts of the object
3. movement of whole arm, pointing with open hand, palm up at one spot of the object
4. movement of whole arm, pointing with open hand, palm down at one spot of the object
5. movement of whole arm, pointing with open hand, palm up at large parts of the object
6. movement of forearm, pointing with finger at one spot of the object
7. movement of forearm, pointing with open hand, palm up at one spot of the object
8. sparse movement of hand towards the object (hand is not raised above the hip)
9. beat movement during speaking
10. touch the object with one hand
11. move the object to another location
12. other

This coding scheme does not consider the oblique orientation of the palm as suggested by Kendon (2004) (see Section 3.2.2) because the quality of the videos did not allow for such a detailed differentiation between the angles.

To make sure that the scheme was reliable, interrater agreement was calculated for four users (30.76% of the sample). In 97.37% of the cases the raters agreed that there was a gesture. However, the codes agreed in only 57.89% of all cases, which led to a Kappa of .4676, which is very low. When looking at the data, it became obvious that most often one rater classified a movement as a movement of the whole arm, pointing with a finger at one spot of the object, while the other rater classified it as a movement of the forearm, pointing with a finger at one spot of the object.
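With two raters, the reported Kappa is presumably Cohen's kappa, which corrects the raw agreement for the agreement expected by chance; this is why it is substantially lower than the 57.89% raw agreement. A minimal Python sketch of the computation (the rater code lists passed in are placeholders, not the thesis data):

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items."""
    n = len(rater_a)
    # Observed agreement: share of items with identical codes.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability that both raters pick the same
    # code, given each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)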

Obviously, the raters could not clearly tell forearm and full arm apart. This points to the fact that even when something was shown with the whole arm, the arm was hardly ever fully extended, and the differences in the angle between upper arm and upper body are rather small between the two behaviors. Hence, the differences between the gestures did not mainly depend on the usage of single joints but on the intensity of the gesture, i.e., the degree to which all joints together were moved. Therefore, the behaviors were grouped and the coding scheme was adapted accordingly (a sketch of the resulting recoding follows the list):

1. movement of arm, pointing with finger at one spot of the object
2. movement of arm, pointing with finger at large parts of the object
3. movement of arm, pointing with open hand, palm up at one spot of the object
4. movement of arm, pointing with open hand, palm down at one spot of the object
5. movement of arm, pointing with open hand, palm up at large parts of the object
6. sparse movement of hand towards the object (hand is not raised above the hip)
7. beat movement during speaking
8. touch the object with one hand
9. move the object to another location
10. other
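The grouping is effectively a recoding of the original twelve behaviors onto the adapted ten codes: the whole-arm and forearm variants merge (1/6 and 3/7), and the remaining codes shift accordingly. A small Python sketch of this mapping:

# Recode annotations from the original 12-code scheme to the adapted
# 10-code scheme: whole-arm and forearm variants are merged
# (1/6 -> 1, 3/7 -> 3); all following codes shift accordingly.
RECODE = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 1,
          7: 3, 8: 6, 9: 7, 10: 8, 11: 9, 12: 10}

def adapt(codes):
    """Map a rater's original annotations to the adapted scheme."""
    return [RECODE[c] for c in codes]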

This adaptation increased the agreement of the ratings to 83.78% and Kappa to .6929. This value is acceptable for such a low number of cases (38) because every single difference weighs heavily on the overall result. Again it has to be kept in mind that the movement of the arm, pointing with one finger at one spot of the object (behavior 1), occurred much more often than the other behaviors.

Figure 5-1. Pointing gestures in the home tour. From left to right: movement of arm, pointing with finger at one spot of the object (behavior 1); movement of arm, pointing with open palm at one spot of the object (behavior 3); touch the object with one hand (behavior 8).

Results

Altogether, 113 gestures were annotated for the object-teaching tasks (1.7 per annotated object-teaching task) and 18 for the room-teaching tasks (0.45 per room-teaching task). Pointing at rooms was found to be much less common because the rooms surrounded both the robot and the human, which made pointing gestures largely redundant.

Similar to the SALEM for the object-teaching study, the gestures were grouped according to their type (see Table 5-3).

Table 5-3. Overview of gesture types in the home tour

behaviors   gesture type
1-6         deictic
8, 9        deictic/manipulative
7           beat

Again no iconic gestures were identified. The gestures will not be analyzed with respect to the types because the large majority of the gestures were deictic (see Table 5-4). As has been mentioned above, most common for the object-teaching tasks were movements of the arm, pointing with the finger at one spot of the object. This result is in line with Kendon's (2004) finding that the extended index finger is used to single out a particular object (see Section 3.2.2). In his data, the open hand was used if the object was not itself the primary topic of the discourse but was linked to the topic as a location of some activity. In the present data, the open palm was often found when the users pointed out a room. Therefore, it seems that open palms are connected to locations, which usually differ from objects in that they are more extended in space. Very frequently the participants touched the objects (behavior 8). Touch establishes a close connection between the user and the objects in the participation framework. Hence, it can be assumed that touch is used to enable the robot to more easily recognize the object. Therefore, touch is categorized here as a deictic/manipulative gesture that facilitates the interaction.

Table 5-4. Pointing gestures in the object-teaching and room-teaching tasks

gesture   object-teaching count   object-teaching (%)   room-teaching count   room-teaching (%)
1         65                      57.52                 6                     33.33
2         4                       3.54                  0                     0.00
3         6                       5.31                  7                     38.89
4         2                       1.77                  1                     5.56
5         2                       1.77                  0                     0.00
6         2                       1.77                  0                     0.00
7         8                       7.08                  4                     22.22
8         21                      18.58                 0                     0.00
9         3                       2.65                  0                     0.00
sum       113                     100                   18                    100
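The percentage columns follow directly from the raw counts; a short Python sketch reproducing them from the counts in Table 5-4:

# Raw gesture counts from Table 5-4, keyed by behavior code.
object_counts = {1: 65, 2: 4, 3: 6, 4: 2, 5: 2, 6: 2, 7: 8, 8: 21, 9: 3}
room_counts   = {1: 6, 2: 0, 3: 7, 4: 1, 5: 0, 6: 0, 7: 4, 8: 0, 9: 0}

def percentages(counts):
    """Share of each behavior in all gestures of a task, in percent."""
    total = sum(counts.values())  # 113 object-teaching, 18 room-teaching
    return {code: round(100 * n / total, 2) for code, n in counts.items()}

print(percentages(object_counts)[1])  # -> 57.52, as in the table
print(percentages(room_counts)[3])    # -> 38.89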