7.1.2.1 Visual Stimuli

A set of six video sequences (trials) was created, each displaying an event in which a live actor performed a simple action (e.g., a man waving a balloon) with either a neutral or a negative (i.e., angry) facial expression (see Appendix A for a complete description of the action scenes presented in the trials). The gender of the actor and the grammatical gender of the involved inanimate objects were counterbalanced. This was necessary because grammatical gender in German is prominently marked on the definite determiner accompanying the noun. To this end, (a) the live actors were counterbalanced by biological sex (three men vs. three women), and (b) every male actor acted on an object with masculine grammatical gender, and every female actor on an object with feminine grammatical gender.

All video sequences were recorded at the same location in front of a white wall and subsequently edited for size and resolution (720 x 1280 pixels). Each trial lasted approximately 56 sec and was presented to each child against a black background on either a 107 cm (42") flat-screen or the 17" monitor of a TOBII 1750 eye-tracker. Although the flat-screen and the eye-tracker monitor differed in size, this difference had no effect on children's learning and memory performance, as verified by analyzing the looking behavior of six children before the eye-tracking experiments started. Moreover, the difference in monitor size was compensated for by the viewing distance: children tested with the flat-screen were seated 160 cm in front of the screen, those tested with the eye-tracker 60 cm.
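As a rough check that the two setups yielded comparable retinal image sizes, the visual angle subtended by each screen can be computed. The screen widths used below are assumptions, as the text reports only diagonals and viewing distances: a 42" 16:9 panel is about 93 cm wide, and the TOBII 1750's built-in 17" 5:4 display about 34 cm wide.

```python
import math

def visual_angle_deg(width_cm: float, distance_cm: float) -> float:
    """Horizontal visual angle subtended by a screen of the given width
    viewed from the given distance (standard 2 * atan(w / 2d) formula)."""
    return math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))

# Assumed widths (not reported in the text): 42" 16:9 panel ~93 cm,
# TOBII 1750 17" 5:4 display ~34 cm.
flat_screen = visual_angle_deg(93.0, 160.0)  # flat-screen at 160 cm
eye_tracker = visual_angle_deg(34.0, 60.0)   # eye-tracker at 60 cm

print(f"flat-screen: {flat_screen:.1f} deg, eye-tracker: {eye_tracker:.1f} deg")
```

Under these assumptions, the two setups subtend visual angles within about one degree of each other, consistent with the claim that viewing distance offset the size difference.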

7.1.2.1.1 Rating of the visual stimuli by adults

A rating experiment with 22 students (12 female, mean age: 24.9 years) from Leibniz Universität Hannover, Germany, was conducted to ensure the reliability of the actors' emotional facial expressions displayed in the video sequences. Each adult received five euros for participating and was asked to rate the actor's facial expression as positive, negative, or neutral on a Likert scale ranging from '+3' (positive) to '-3' (negative), with '0' indicating neutral.

The video sequences were randomized and presented to each rater individually in a single room. The video events depicted the same visual information as the children watched in the subsequent experiments, but with the auditory stimuli turned off. As shown in Table 7-1, adults rated the presented negative facial expressions as more negative (M = -1.47, SD = 0.52) than the actors' neutral facial expressions (M = 0.07, SD = 0.42). The two mean scores differed significantly from each other, t(21) = 14.87, p < .001.

Tab. 7-1: Adults’ ratings of the actors’ facial expressions

item               neutral (M)    SD    negative (M)    SD
waving balloon        -0.23      0.87      -1.68       0.72
washing cup           -0.09      0.81      -1.64       0.85
twirling umbrella      0.27      0.70      -1.27       0.77
pushing chair          0.41      0.59      -1.00       0.93
pulling box           -0.18      0.66      -1.64       0.79
shaking blanket        0.23      0.61      -1.59       0.66
mean                   0.07      0.42      -1.47       0.52
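The reported paired t-test, t(21) = 14.87, was computed over the 22 raters, whose individual scores are not reproduced here. As an illustrative consistency check only, the same formula can be applied to the six item means from Table 7-1; this yields a by-items statistic with df = 5, which will not equal the by-subjects value, but the column means can be verified exactly.

```python
from statistics import mean, stdev
from math import sqrt

# Item-level rating means from Tab. 7-1
neutral  = [-0.23, -0.09, 0.27, 0.41, -0.18, 0.23]
negative = [-1.68, -1.64, -1.27, -1.00, -1.64, -1.59]

def paired_t(x, y):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n))."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

print(round(mean(neutral), 2), round(mean(negative), 2))  # reproduces 0.07, -1.47
print(round(paired_t(neutral, negative), 2))              # by-items t, df = 5
```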

7.1.2.1.2 Rating of the visual stimuli by children

Further, a rating experiment with 30 children between four and six years of age (11 girls, mean age: 5.0 years) was carried out to explore whether children recognize the actors' emotional facial expressions presented in the video sequences in the same way adults do. In a pointing task, children were asked to match the facial expression of the video actor presented at the top of a PC screen with one of three different facial expressions presented as matching choices below (the design and procedure were similar to those used in Herba, Landau, Russell, Ecker, & Phillips, 2006; Szekely et al., 2011). The actors' facial expressions were extracted from the video sequences as still faces. In each trial, the three matching choices were drawn from a set of six photographs of facial expressions displaying five different emotions (joy, sadness, fear, disgust, anger) and the neutral category, taken from the Ekman and Friesen corpus (1976). The faces were masked, forcing children to focus on the facial features rather than, e.g., hair color when matching the facial expressions.

Each actor’s negative (i.e., angry) facial expression was presented with one positive (e.g., happy) and two negative facial expressions (e.g., angry and disgusted), such that children could either match the target facial expression correctly (i.e., angry with angry) or incorrectly (i.e., choosing happy or disgusted). When children selected an incorrect match, it was coded whether the mismatch was of the same (i.e., disgust) or a different emotional valence (i.e., happy). This served to verify previous findings suggesting that children start out with broad categories like negative and positive, which they gradually differentiate across development (Widen & Russell, 2003, 2008b; see Chapter 2.1.2). Further, each actor’s neutral facial expression was presented with a neutral, a positive, and a negative matching item, so that children could match the target correctly or incorrectly by choosing the positive or negative item. To prevent children from developing a bias for the neutral and angry expressions, three different distractor facial expressions displaying sad, fearful, and happy emotions were included; as with the angry and neutral facial expressions, children were asked to match these with one of three choices. Overall, each child rated nine different facial expressions, i.e., three distractor, three neutral, and three angry facial expressions. To familiarize the children with the matching task, every rating session started by asking the child to match a triangle, rectangle, circle, and cross presented at the top of the screen with the corresponding one of three geometric choices at the bottom. No child had any difficulty accomplishing this task.

The descriptive results revealed that children rated the negative facial expressions correctly more frequently (M = 66.15%, SD = 16.37) than the neutral ones (M = 54.91%, SD = 12.39; see Table 7-2). Moreover, in almost 72% of cases children matched the target angry expressions with items of the same emotional valence, which corroborates Widen and Russell's assumption of broad emotional categories in infancy and the early pre-school years. Subsequent statistical analyses revealed that children did not perform significantly differently on the negative and neutral items, t(29) = 1.54, ns. Further, children's ratings were compared against a level of 33.33% (with three choices, the probability of selecting the correct match by chance was 1 in 3) to examine whether they matched the neutral and negative facial expressions correctly more frequently than expected by chance. Their performance with both neutral and negative facial expressions differed significantly from chance level, both t's ≥ 4.08, both p's < .001. Thus, children were able to assign the actors' facial expressions to the corresponding (emotional) category, although the neutral category may have caused more difficulties than the negative one, as the descriptive results suggest. These findings were considered in the subsequent analyses and discussion of the data.
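The comparison against chance follows the standard one-sample formula t = (M - chance) / (SD / sqrt(n)). The sketch below plugs in the reported group means and SDs; since the per-child data are not given (and the tabulated SDs may be item-level rather than subject-level), the resulting values are illustrative only and need not match the reported t's.

```python
from math import sqrt

def one_sample_t(m, mu0, sd, n):
    """One-sample t statistic: t = (M - mu0) / (SD / sqrt(n))."""
    return (m - mu0) / (sd / sqrt(n))

chance = 100 / 3  # 33.33% correct expected with three matching choices

# Reported summary statistics for the 30 child raters (illustrative only)
t_negative = one_sample_t(66.15, chance, 16.37, 30)
t_neutral  = one_sample_t(54.91, chance, 12.39, 30)

print(f"negative vs. chance: t = {t_negative:.2f}")
print(f"neutral  vs. chance: t = {t_neutral:.2f}")
```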

Tab. 7-2: Children’s ratings of the actors’ facial expressions (in percent)

item                 neutral as   neutral as   angry as     angry as
                     neutral      negative     angry        negative
                     (correct)                 (correct)
waving balloon         62.50        31.25        64.29        6.22
washing cup            57.14        35.71        62.50        6.40
twirling umbrella      43.75        50.00        42.86        9.33
pushing chair          57.14        28.57        62.50        6.40
pulling box            37.50        43.75        71.43        5.60
shaking blanket        71.43        21.43        93.33        1.07
mean                   54.91        35.12        66.15        5.84

7.1.2.2 Auditory Stimuli

The six pseudo-verbs were created in compliance with the canonical morphological structure of German verbs, i.e., verb stem + inflectional suffix '-en'. Every pseudo-verb was monosyllabic and was inflected with the morpheme '-t' in the third person singular.

The auditory stimuli were recorded by a female German native speaker who was instructed to pronounce the sentences in infant-directed speech. Her utterances were recorded in a sound-attenuated booth and edited to control amplitude, timing, pitch peaks, etc. Subsequently, the auditory stimuli were synchronized with the visual stimuli and presented during testing via a hidden loudspeaker placed beside the screen. The software Adobe Premiere CS5 was used for video editing and audio-video synchronization. For a full description of the auditory stimuli, see Appendix B; an exemplary description of one trial for learning and memory is shown in Tables 7-3 and 7-4, respectively.