

4.7 KT Emotional Interaction Corpus

4.7.2 Data Collection

Both our scenarios, the human-human and the human-robot, had the same basic setup: two subjects are seated across from each other at a table and talk about a given topic.

Before the experiment started, we explained to both subjects that they would take part in an improvised dialogue and would be given a topic to discuss.

We did not let the subjects know that our goal was to analyze their emotional reactions. The data collection proceeded in two steps: an instruction step and a recording step.

Figure 4.15: Picture of the half-circle environment used for the data recording.

Figure 4.16: Picture of one example of the recordings, showing one subject and the microphone position.

Instruction

In the instruction step, each subject received a consent form, which can be found in Appendix A. The consent form informed the subject about which kinds of data would be collected, namely audio and video, and stated that the data would not be linked to his or her personal identity. The subject was also asked to indicate whether the collected data could be used in future experiments and whether it could be published. Figure 4.17 shows a picture of the instruction step, where the instructor hands out the consent form and gives directions about the scenario to two participants.

Figure 4.17: Picture of the instruction step of the data collection, where the subjects were informed about the scenarios and the consent form was presented and signed.

During this step, we also collected some demographic information about each subject. Each subject was asked to place his or her age into one of the groups 0-12 years, 13-18 years, 19-49 years, or 50+ years, and to state his or her gender, mother tongue, and city/country of birth. This information helps us to distinguish the data and to identify whether gender, age, or place of birth influenced the interactions.
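The demographic record described above can be sketched as a small helper. This is a hypothetical illustration of the bookkeeping; the thesis does not specify how the metadata was actually stored, so all names and types here are assumptions:

```python
# Hypothetical sketch of the per-subject demographic record; only the
# age groups and attributes are taken from the text, the rest is illustrative.
from dataclasses import dataclass

AGE_GROUPS = ["0-12", "13-18", "19-49", "50+"]

def age_group(age: int) -> str:
    """Map an exact age to one of the four group labels used in the corpus."""
    if age <= 12:
        return "0-12"
    if age <= 18:
        return "13-18"
    if age <= 49:
        return "19-49"
    return "50+"

@dataclass
class Demographics:
    age_group: str     # one of AGE_GROUPS
    gender: str
    mother_tongue: str
    birth_place: str   # city/country of birth

subject = Demographics(age_group(23), "female", "German", "Hamburg, Germany")
print(subject.age_group)  # -> 19-49
```

Binning the age rather than storing the exact value matches the anonymization promise made in the consent form.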

All instructions, recordings, and data labeling were conducted in English, although people from different countries participated in all of these steps.

Recording

In both scenarios, the Human-Human Interaction (HHI) and the Human-Robot Interaction (HRI), two subjects interacted with each other; however, in the HHI scenario both are humans, while in the HRI scenario one human is replaced by a robot.

We invited students from the informatics campus of the University of Hamburg through an open call distributed via email to several student mailing lists.


For both scenarios, we defined two roles: an active and a passive subject. Before each dialogue session, we gave the active subject a topic, which he or she was to introduce during the dialogue. The passive subject was not aware of the topic, and both subjects had to improvise. The subjects were free to conduct the dialogue as they wished, with the only restriction that they could not stand up or change places. The following topics were available:

• Lottery: Tell the other person he or she won the lottery.

• Food: Introduce to the other person a very disgusting food.

• School: Tell the other person that his/her school records are gone.

• Pet: Tell the other person that his/her pet died.

• Family: Tell the other person that a family member of him/her is in the hospital.

These topics were selected so that each would provoke interactions related to at least one of the universal expressions “Happiness”, “Disgust”, “Anger”, “Fear”, and “Sadness”. None of the subjects was given any information about the nature of the analyses, so as not to bias their expressions.

In the HHI scenario, both participants were humans. One of them was chosen as the first active subject, and the topic was presented only to him or her. The topics were printed on paper and shown to the active person, so that all active persons received the same instructions. Figure 4.18 shows the topic assignment for an active participant.

After each dialogue session, the active role was handed over to the previously passive subject and a new topic was assigned. For each pair of participants, a total of five dialogue sessions was recorded, one per topic, each lasting between 30 seconds and 2 minutes. Although the topics were chosen to provoke different expressions, in some dialogues none of the previously mentioned emotional concepts was expressed.
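The alternating role assignment over the five topics can be sketched as follows. This is an illustrative reconstruction of the protocol as described, not the actual experiment software:

```python
# Illustrative sketch of the HHI session schedule: for each pair of
# participants, five dialogue sessions are recorded, one per topic, with
# the active role alternating between the two subjects after every session.
TOPICS = ["Lottery", "Food", "School", "Pet", "Family"]

def session_schedule(subject_a: str, subject_b: str):
    """Return (session_no, topic, active, passive) tuples for one pair."""
    pair = (subject_a, subject_b)
    schedule = []
    for i, topic in enumerate(TOPICS):
        active = pair[i % 2]            # roles alternate each session
        passive = pair[(i + 1) % 2]
        schedule.append((i + 1, topic, active, passive))
    return schedule

for no, topic, active, passive in session_schedule("S1", "S2"):
    print(no, topic, "active:", active, "passive:", passive)
```

Note that the order in which the five topics were presented is an assumption here; the text only states that every pair went through all five topics.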

The HRI scenario followed the same pattern as the HHI scenario but replaced the active subject with the head of a humanoid iCub robot [210]. In this scenario, the robot was always the active subject. As in the HHI scenario, the passive subject was not informed of the topic of the dialogue and had to improvise a discussion.

The iCub head has three degrees of freedom for head movement and 21 different eyebrow and mouth positions for facial expressions. Figure 4.19 illustrates the iCub robot used in our experiments. We captured the pictures from the iCub's perspective by placing a camera just in front of it. The iCub has a built-in camera; however, its resolution and frame rate are too low for our scenario.

Figure 4.18: Picture of the topic assignment. One of the participants is chosen as the active subject, and one of the five topics is given to him/her.

Figure 4.19: Picture of the iCub robot used in our experiments.

The robot was remote-controlled, which means that the movements, facial expressions, and what the robot said were typed by an instructor. The participants were not informed about this, and the instructor controlled the robot from another room. This way, only the robot and the participant were in the room during the recordings. Each dialogue session took no more than 4 minutes, longer than the sessions in the HHI scenario, mostly due to the delay in the robot's responses. Figure 4.20 illustrates the HRI scenario.
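The Wizard-of-Oz control described above, with the instructor typing speech and expressions, could be represented by a simple command message. This is a purely hypothetical sketch; the actual control interface of the iCub setup is not documented here, and all names and fields are assumptions:

```python
# Hypothetical sketch of a teleoperation command for the remote-controlled
# robot: the instructor types head movement, facial expression, and speech.
# The field names and the 'say:...|expr:...' syntax are illustrative only,
# not the actual iCub interface.
from dataclasses import dataclass

@dataclass
class RobotCommand:
    speech: str = ""                     # text the robot should speak
    expression: str = "neutral"          # one of the 21 eyebrow/mouth presets
    head_pose: tuple = (0.0, 0.0, 0.0)   # pan, tilt, roll in degrees (3 DoF)

def parse_command(line: str) -> RobotCommand:
    """Parse an instructor line like 'say:Hello | expr:happy'."""
    cmd = RobotCommand()
    for part in line.split("|"):
        key, _, value = part.strip().partition(":")
        if key == "say":
            cmd.speech = value
        elif key == "expr":
            cmd.expression = value
    return cmd

cmd = parse_command("say:I have some news for you | expr:sad")
print(cmd.speech, cmd.expression)
```

Typing each response by hand would also explain the response delay that made HRI sessions longer than HHI ones.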


Figure 4.20: Picture of the HRI scenario. A person is seated in front of the iCub robot, which is the active subject.