
Hemispheric contributions to the processing of emotion in chimeric faces

- behavioural and electrophysiological evidence

Diploma thesis (Diplomarbeit)

submitted for the degree of Diplom-Psychologin

at the

Universität Konstanz

Faculty of Mathematics and Natural Sciences, Department of Psychology

Submitted by: Anja Geiger

Matriculation no.: 01/458212

Allmannsdorferstr. 6, 78464 Konstanz, Tel.: (07531) 362889, Anja_Geiger@web.de

First examiner: Privatdozent Dr. A. Keil

Second examiner: Prof. Dr. S. Schweinberger

Konstanz, 05.07.2005


Acknowledgements

The experimental work of this study was conducted at the University of Glasgow, UK.

I am very grateful to Professor Stefan R. Schweinberger and Dr. Jürgen M. Kaufmann for providing resources, concept, and support during the whole process of planning and conducting this study, as well as for assisting with the data analysis. In addition, I am very grateful to Professor Stefan R. Schweinberger for supervising my thesis.

At the University of Constance, I am very grateful to PD Dr. Andreas Keil for supervising my thesis, assisting with the data analysis and commenting on the draft version.

I also want to thank Cornelia Czacz for proofreading and valuable comments on the manuscript, and Bernhard Thurner for a final check-up and a hand with formatting.

Last but not least, I want to express my thanks to everybody else who provided social support!


Table of contents

0 Abstract
1 Introduction
1.1 Facial expressions in the context of emotion
1.2 Relevance and features of facial expressions
1.3 Perceptual basis
1.4 Neuronal processing features
1.5 Laterality
1.6 Hypothesis
1.6.1 Behavioural data
1.6.1.1 Intensity task
1.6.1.2 Efficiency task
1.6.2 Electrophysiological data
2 Methods
2.1 Participants
2.2 Stimuli and Apparatus
2.3 Pilot study
2.4 Procedure
2.4.1 Intensity task
2.4.2 Efficiency task
2.5 Performance
2.6 Event-related potentials
2.7 Data analysis
2.7.1 Intensity task
2.7.2 Efficiency task
2.7.3 Event-related potentials
3 Results
3.1 Intensity task
3.2 Efficiency task
3.2.1 Correct classifications
3.2.2 Reaction times
3.2.3 Event-related potentials
3.2.3.1 Global analysis
3.2.3.2 Topographic analysis
4 Discussion
4.1 Intensity task
4.2 Efficiency task
4.3 Event-related potentials
4.3.1 General assumptions and limitations
4.3.2 Interpreting current ERP results
5 Conclusion
6 References
Appendix I: List of tables and figures
Appendix II: Examples of stimuli for experimental trials


0 Abstract

The face in general and facial expressions in particular have long been a focus of sociological, biological and psychological interest, with numerous scientific approaches and objectives, ranging from the expression and perception of facial affect to the social interaction of transmitting facial information between poser and perceiver.

This study deals with two different aspects of facial expression, namely the intensity and the efficiency of facial expression, and focuses on hemispheric contributions to the expression and perception of facial affect. The stimuli, based on the FEEST set (Young, Perrett, Calder, Sprengelmeyer, & Ekman, 2002), differed in the type of expression (anger, disgust, fear, happiness, sadness, surprise), the level of intensity (100% vs. 50%) and the type of chimeric (consisting of either left (LL) or right (RR) hemifaces).

Using six basic expressions, the first experiment of the present study (intensity task) investigated whether the left or the right side of an expressive face is perceived as more intense, following a classical study by Sackeim et al. (1978) that claimed a dominant role for the right hemisphere/left hemiface. In a 2-AFC intensity judgement, subjects were simultaneously presented with two chimerics created from corresponding left (LL) and right (RR) hemifaces and had to select the more expressive one. Contrary to earlier findings, regardless of the level of intensity and regardless of the type of expression, participants judged the RR-composites as more expressive.

A second experiment (efficiency task) addressed the efficiency of emotional expression and tested the claim by Indersmitten and Gur (2003) of a dissociation between the intensity and the efficiency of emotional expression, with a dominant role of the left hemisphere/right hemiface for efficiency. To test this hypothesis, a subset of the chimeric stimuli from the first experiment was used (anger, fear, happiness, sadness). Subjects gave 4-AFC responses, classifying faces (either LL- or RR-chimerics) as quickly and as accurately as possible according to the type of facial expression. There was indeed an RR-composite advantage, but it was modulated by emotion: it was present for anger and fear, but not for happiness, whereas sadness even yielded an LL-composite advantage. During the second experiment, ERPs were recorded, supplying additional information and allowing further differentiation and interpretation of the processing of different basic emotions, hemifaces and intensity levels.


1 Introduction

1.1 Facial expressions in the context of emotion

Emotions can be described as action dispositions that are elicited by a concrete stimulus and comprise at least three components: first, a physiological component, i.e. activity of the central and autonomic nervous system with the resulting neurohormonal changes in heart rate, blood pressure and other physiological parameters; second, a cognitive component; and third, a behavioural component referring to posture, prosody and facial expression (see Carlson, 2001). Although these components can be considered separately, they strongly interact and influence each other, for example by providing sensory feedback from facial muscles about the actual emotional state, known as the facial feedback hypothesis (Laird, 1974; Soussignan, 2002; Tomkins, 1984; Zajonc, 1985), or by providing cognitive explanations of physiological changes. However, it is still controversial how and to what extent these different emotional components actually interact and affect each other, since correlations between simultaneous measurements and comparisons of different indicators of the same component, e.g. different physiological indicators, are not very satisfying. Apart from this methodological problem, behavioural components, most importantly facial expressions, nevertheless transmit important information about emotional states to the outside, allowing inferences about emotional states and thus representing a very important nonverbal means of communication. Therefore, even though emotional states are subjective and in some cases not even conscious to the individual, facial expressions represent a very efficient way of transmitting information.

1.2 Relevance and features of facial expressions

Human beings are real experts at recognizing facial expressions, which are very important for nonverbal communication and the regulation of social behaviour. Even newborns already show a genuine interest in faces (Goren, Sarty, & Wu, 1975; Johnson, 1991) and, moreover, are able to distinguish between different facial expressions. This has been demonstrated experimentally by confronting them with different types of expressions (happiness, sadness and surprise) and registering the fixation time or adaptation, respectively (Field, Woodson, Greenberg, & Cohen, 1982).

Due to individual differences in size, shape and unique features, facial expressions look slightly different on each individual face, which requires abstracting constant information about facial expression from irrelevant information. Apart from this complicating factor of individuality, the timing of emotional expression also emphasizes the extremely sophisticated human perceptual abilities: facial expressions are very transient cues, available for analysis for only one-half to four seconds (Ekman, Friesen, & Ellsworth, 1982).

An even more striking feature of the human ability of fast and accurate emotion processing is the fact that there seems to be a rather limited set of universal basic emotions. These six basic emotions, often described as anger, disgust, fear, happiness, sadness and surprise, are reliably recognized across different cultures with very little variation, even for posers and perceivers belonging to different cultures (Ekman, 1992, 1994; Ekman & Friesen, 1971). Evidence of constants in facial behaviour can even be demonstrated in blind children (Goodenough, 1932); thus facial expression does not seem to depend on learning, but rather on evolutionary mechanisms, innate neural programs for non-verbal communication.

A formal coding system for measuring facial behaviour and identifying emotional expressions has, among others, been established by Ekman and Friesen (1978). FACS, the Facial Action Coding System (Ekman & Friesen, 1978), describes facial behaviour in terms of its underlying anatomical muscle activity (coded as so-called "action units"). This technique avoids biases originating from experimenters' or naive observers' judgements of the emotional content of facial behaviour and thus enables objective scoring of facial expression.

Besides the issue of objectively measuring facial behaviour, another important methodological issue when discussing relevant features of facial expressions is the distinction between different forms of facial expressive behaviour: spontaneous or unconscious vs. posed or voluntary emotional facial expression. A number of neurological studies, such as data from patients suffering from Parkinson's disease, suggest a dissociation between spontaneous and voluntary facial expression, supporting the hypothesis that different neuronal substrates mediate these two kinds of facial behaviour. With spontaneous facial expression being primarily under bilateral extrapyramidal influence and voluntary facial expression being primarily mediated by direct, contralaterally organized corticobulbar paths (see Pizzamiglio, Caltagirone, & Zoccolotti, 1989), this differentiation between spontaneous and voluntary expression is clearly relevant. Thus, even though a number of studies failed to distinguish between different forms of facial expression, it remains questionable to what extent naturally occurring spontaneous facial expression can actually be compared to artificially posed facial expression. Furthermore, when considering experimental data of non-clinical subjects, it has to be kept in mind that in everyday social interactions people use internal display rules and thus voluntarily modulate their spontaneous facial expression, which probably results in a blending of spontaneous and voluntary facial expression and argues against the concept of two distinct and non-overlapping types of facial expression. The most prominent example demonstrating the difference between a posed and a spontaneous expression is the smile: when people voluntarily smile by convention in social interactions, the smile looks authentic, but lacks a tiny detail that makes it distinguishable from a spontaneous smile, namely the contraction of a muscle that raises the cheeks around the eyes (orbicularis oculi, pars lateralis) in addition to the contraction of the zygomaticus major. In contrast to the social smile, the spontaneous smile, known as the Duchenne smile, is characterized by the additional contraction of the orbicularis oculi and considered to be qualitatively different from a non-Duchenne or social smile (Ekman, Davidson, & Friesen, 1990). Thus, when interpreting studies dealing with the generation or perception of facial expressions, the crucial part to be considered is always the mechanism through which a specific facial expression has been elicited: were subjects verbally asked to generate a certain facial expression, were they unconsciously monitored while watching expression-eliciting pictures, or were they asked to mimic a demonstrated example?

Moreover, when discussing elicitation conditions of stimuli, there is not only the important distinction between spontaneous vs. posed expressions; in the case of posed conditions, the type of instruction may also influence the outcome: verbally or nonverbally instructing subjects to generate a facial expression might make a crucial difference for the resulting expression.
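The Duchenne/social distinction described above lends itself to a small illustration. The sketch below uses the standard FACS action-unit codes for the two muscles named in the text (AU12, lip corner puller, for the zygomaticus major; AU6, cheek raiser, for the orbicularis oculi); the function itself is a hypothetical toy classifier, not part of FACS or of this study.

```python
# Toy sketch: classifying a smile as Duchenne vs. social from a set of FACS
# action units. AU12 (lip corner puller) reflects zygomaticus major activity;
# AU6 (cheek raiser) reflects orbicularis oculi, pars lateralis.

def classify_smile(action_units):
    """Return 'Duchenne', 'social', or 'no smile' for a set of AU codes."""
    if 12 not in action_units:
        return "no smile"   # no zygomaticus major contraction at all
    if 6 in action_units:
        return "Duchenne"   # cheek raiser present -> spontaneous smile
    return "social"         # zygomaticus major alone -> posed smile

print(classify_smile({6, 12}))  # Duchenne
print(classify_smile({12}))     # social
```

This kind of rule-based scoring is exactly what makes FACS coding objective: the label follows from the observed muscle activity, not from the coder's impression of the emotion.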

1.3 Perceptual basis

Although evolutionary aspects of the facial expression of emotions have been investigated since the 19th century (see Darwin, 1872), the perceptual basis of the representation of different expressions is still debated. The fundamental question is: are there actually distinct categories for the assumed six basic emotions, as suggested for example by Ekman (1972, 1982), or do they rather vary along continuous dimensions?

Favouring a dimensional approach, Woodworth and Schlosberg (1954) proposed a model that implies gradual transitions between different expressions, which are thought to be arranged circularly according to how readily they are confused, with two orthogonal dimensions, one being pleasant vs. unpleasant and the other attention vs. rejection.

According to such dimensional models, like the one proposed by Woodworth and Schlosberg (1954) or the comparable, more recent circumplex model of affect by Russell (1980), which also assumes two orthogonal dimensions, pleasantness and arousal, gradual changes from one expression to another would also include regions of expressions that are rather indifferent or neutral.

In contrast to dimensional models, category-based models such as the one proposed by Ekman et al. (1972) and Ekman (1982) predict rather abrupt transition points between different expressions, with almost no regions of neutral or indifferent expressions in between. Testing the predictions of this model, it turned out that linear changes in the physical characteristics of a stimulus (obtained by morphing two pictures of different prototypic basic expressions) produced non-linear perceptual effects (Calder, Young, Perrett, Etcoff, & Rowland, 1996). Expanding this model, a recent ERP study even demonstrated an electrophysiological correlate of the behavioural data, indicating categorical processing of facial expression (Campanella, Quinet, Bruyer, Crommelinck, & Guerit, 2002). In this study subjects performed a same-different matching task with three different kinds of stimulus pairs. The first category consisted of stimulus pairs of morphed faces of happy and fearful expressions of a single poser that were perceived as displaying the same emotional expression (within pairs). The second category again consisted of morphed stimulus pairs of a single poser, but in this case they were perceived as displaying different expressions (between pairs). The only difference between stimuli of the first vs. the second category was the relative percentage of "happy" and "fearful" expression in these morphs. The third category consisted of pairs of identical faces (same pairs). The applied paradigm, a delayed same-different matching task, revealed amplitudes in response to the second stimulus of a given pair that were reduced for within and same pairs relative to between pairs for the occipito-temporal N170, whereas the P300 amplitude for the between pairs was higher than for the within and same pairs. These results have been interpreted as support for the hypothesis of categorical processing of facial expressions.


To sum up, most studies testing dimensional vs. category-based models rather support the idea of discrete categories of emotions, whereby some remaining inconsistencies argue for a moderate version of the categorical approach, with categories being arranged around prototypes (Young et al., 1997).

1.4 Neuronal processing features

In terms of the localization of neuronal structures involved in the processing of emotions, several brain regions have been identified as playing an important role: the amygdala in the processing of aversive stimuli (Angrilli et al., 1996), and the insula as well as the basal ganglia in the processing of disgust (Phillips et al., 1997). Several other distinct and non-overlapping neuronal substrates have been associated with the recognition and processing of specific basic emotions (Batty & Taylor, 2003; Sprengelmeyer, Rausch, Eysel, & Przuntek, 1998).

However, even though neuronal structures dealing with the processing of emotions have been identified more or less consistently across a number of studies with different experimental paradigms and methods, the timing at which identification of facial expression actually happens is still debated. According to the model by Bruce and Young (1986), facial affect recognition is mediated by a neuronal circuit that is partly independent from the analysis or extraction of other facial information such as facial identity, sex or age, with facial expression being decoded at an early stage of facial decoding. The hypothesis of early decoding of facial expression is quite consistent with behavioural data from studies investigating the speed and accuracy of facial expression recognition, whereas data from electrophysiological studies dealing with the exact time course of facial expression recognition are far less convergent. Whether facial expression evokes a distinct neuronal correlate and is identified at a very early stage, as stated for example by Eimer et al. (2002), at an intermediate stage (Batty & Taylor, 2003), or at a late stage of face processing (Krolak-Salmon, Fischer, Vighetto, & Mauguière, 2001) is still debated.

Furthermore, identification time and neuronal correlates vary considerably with stimulus and task factors. Beyond the issue of the timing at which expressions are generally identified, there is also the question of the timing at which different expressions are not only recognizable as facial expressions per se, but also distinguishable from each other. Are there in fact different neuronal correlates and patterns of processing for different types of expressions and, if so, at which stage of processing are they actually distinguishable from each other?

1.5 Laterality

Although the human brain works as a functionally and hierarchically organized unity, the two cerebral hemispheres still differ from each other; they are, at least up to a certain degree, anatomically and functionally asymmetric, which relates to the concept of laterality.

Anatomically, a number of brain regions are more pronounced or more differentiated within one hemisphere as compared to the other, such as Heschl's gyrus (von Economo, 1930), Broca's area (Falzi, Perrone, & Vignolo, 1982) and the planum temporale (Geschwind & Levitsky, 1968). Furthermore, several functional differences between corresponding areas within the left vs. the right hemisphere have been identified, such as the classical finding of relative left hemispheric dominance for processing information related to language (see Penfield et al., 1959; Binder et al., 1997). Despite these asymmetries or hemispheric differences, which are affected and tuned by environmental and genetic factors, it is important to stress that laterality is relative rather than absolute, with both hemispheres being able to carry out a given task and most functions being accomplished by both hemispheres in cooperation. Research on laterality includes behavioural experiments using techniques such as tachistoscopic stimulus presentation or dichotic listening paradigms, invasive methods such as the injection of sodium amobarbital into the carotid artery and brain stimulation, as well as various neuroimaging methods such as PET or fMRI. Finally, there are also case studies on neurological patients suffering from brain lesions or commissurotomy. Despite the fact that laterality has been examined for a long time, there is unfortunately still the problem that behaviourally measured laterality does not necessarily mirror or correspond to anatomical or physiological asymmetry.

Regarding the question of hemispheric specialization for emotion processing, a general assumption has often been made based on behavioural data of neurological patients (Adolphs, Damasio, Tranel, & Damasio, 1996; Borod et al., 1998; Borod, Santschi, & Koff, 1997), pointing towards a special role of the right hemisphere, which seems to have a superior processing capacity for emotional stimuli, be they auditory (Erhan, Borod, Tenke, & Bruder, 1998) or visual, such as facial expressions (Schweinberger, Baird, Bluemler, Kaufmann, & Mohr, 2003). Further support for a special role of the right hemisphere comes from electrophysiological and functional brain imaging studies, revealing greater overall activity of the right hemisphere in response to emotional stimuli (Smith, Meyers, Kline, & Bozman, 1987; Troisi et al., 1999). Specific activation patterns in specific time segments were demonstrated in different brain regions by asking subjects to generate or mimic basic expressions that had been presented to them (Esslen, Pascual-Marqui, Hell, Kochi, & Lehmann, 2004). Therefore the general right hemispheric superiority seems to exist not only for the perception of emotional stimuli, but also for their generation.

Three somewhat different hypotheses concerning the degree to which emotion processing might be lateralized have been proposed. According to the right-hemisphere hypothesis, the right hemisphere generally plays a dominant role in emotion processing (Borod, Kent, Koff, & Mertin, 1989; Hertje, 2001). The valence hypothesis, however, suggests a dissociation between positive and negative emotions, with only negative emotions being mediated by the right hemisphere (Lee et al., 2004; Reuter-Lorenz & Davidson, 1981; Schiff & Lamon, 1989). Favouring the valence-dependent approach, Pizzagalli, Regard, and Lehmann (1999) even identified significantly modulated ERP responses of the left vs. the right hemisphere, depending on the valence of the stimulus presented. These differences related to stimulus valence could already be demonstrated at a very early stage of processing, and even 10-month-old infants already showed an asymmetric brain response towards positive and negative affective stimuli (Davidson & Fox, 1982). A third hypothesis, the approach-withdrawal hypothesis, which is related to the valence hypothesis, states that emotional responses associated with approach behaviour are mediated by the left hemisphere, whereas emotional responses associated with withdrawal behaviour are mediated by the right hemisphere (Canli, Desmond, Zhao, Glover, & Gabrieli, 1998; Hamon & Allen, 1998).

Contradictory results of studies dealing with hemispheric contributions to emotional processing are partly due to the fact that differences in methodological approaches, stimulus and task factors, as well as variables such as sex, handedness and age of subjects, influence the outcome of a given study to a large extent. Nevertheless, most researchers still concede an important role to the right hemisphere in emotion processing, even though the individual proportions or contributions of the two cerebral hemispheres are still debated.

Assuming a general dominance of the right hemisphere in emotion processing, and given the fact that the innervation of facial muscles is mainly contralateral (Liscic & Zidar, 1998), the left hemiface, which is controlled by the right hemisphere, should consequently be more expressive. This has indeed been demonstrated, for example by Dimberg and Petterson (2000), who recorded significantly larger muscle activity of the zygomaticus major and the corrugator supercilii on the left side of the face in subjects who had been exposed to emotionally expressive faces. Moreover, not only the innervation of facial muscles, but also the morphological characteristics of the human face are asymmetric up to a certain degree (Vannuci, Zoccolotti, & Mammucari, 1989), which in turn influences the perception of faces in general as well as the perception of facial expression. It could, for example, be demonstrated that pictures artificially composed of the right halves of human faces were rated as more similar to the original face than composites made of left hemifaces (Gilbert & Bakan, 1973).

Facial asymmetry has usually been taken as a behavioural index of hemispheric specialization for facial expression. Based on the observation of these facial asymmetries, Sackeim, Gur and Saucy (1978) demonstrated that the right hemisphere plays an important role not only in the reaction to emotional stimuli, but also in the generation of facial expressions: chimeric faces consisting only of left hemifaces were judged as more emotionally expressive than chimerics consisting only of right hemifaces (Sackeim, Gur, & Saucy, 1978). This result has often been taken as support for the hypothesis that the facial expression of emotion is mainly controlled by the right hemisphere.

Yet a striking and still unresolved implication of this generally assumed right hemispheric dominance for the processing of emotion is the fact that in face-to-face situations the left hemiface of a person displaying a facial emotion, which is thought to be more expressive, falls into the perceiver's right visual field. Due to the neuronal wiring, the right visual field of a perceiver projects to his or her left hemisphere, so the richer information coming from the poser ends up in the perceiver's hemisphere that is thought to be less specialized for emotion processing. In terms of evolutionary principles or biological signification, this loss of information transmitted between poser and perceiver appears quite paradoxical. On the other hand, a situation in which the inferior hemisphere is supplied with stronger information in order to compensate for its relative weakness would actually make sense.

Proceeding on the assumption that the hemispheric substrates of the perception of emotional stimuli and those associated with the generation of emotional responses are dissociable (Davidson & Tomarken, 1989), it might be very useful to have a closer look at the general right hemisphere hypothesis and to identify the different components and factors influencing the assumed right hemispheric superiority for the processing of facial expressions.

In search of a more differentiated approach to the laterality of facial expressions, Indersmitten and Gur (2003) suggested that the intensity and the efficiency of emotional expression might be dissociable from each other, with a left hemiface advantage for the expression of emotional intensity and a right hemiface advantage for the efficiency of emotional expression.

Presenting happy, angry, sad and fearful chimeric faces, consisting of either left or right hemifaces, subjects had to provide intensity judgments on a scale from 0 to 100 in intervals of 10. Overall, there was an intensity advantage for left composites. In a second task, subjects had to classify these chimerics as quickly and as accurately as possible with respect to the type of expression, revealing an overall efficiency advantage for right composites. Indersmitten and Gur concluded that the partial double dissociation between the laterality of expression intensity and the laterality of expression efficiency emerging from their experiments supports the idea of two different processes with two different neural substrates (Indersmitten & Gur, 2003).

If, however, the two cerebral hemispheres do indeed play different roles in emotion processing, this could actually be a hint towards solving the puzzling poser-perceiver paradox mentioned above.

1.6 Hypothesis

1.6.1 Behavioural data

The present study aimed to replicate the classical findings by Sackeim et al. (1978), suggesting that the left hemiface is more emotionally expressive than the right hemiface. Additionally, it aimed to check whether the intensity and the efficiency of emotional expression can actually be dissociated, as stated by Indersmitten and Gur (2003). To investigate these two aspects of emotion processing, two different tasks were implemented, one dealing with the intensity of emotional expression (intensity task) and the other with the efficiency of emotional expression (efficiency task).

Although thematically related to the above-cited studies, the present study used a stimulus set and an experimental paradigm that differed from these previous studies, trying to overcome some of their inherent methodological weaknesses.

The stimuli used for the present study consisted of a set of chimerics (composed of either left hemifaces (LL) or right hemifaces (RR)) that were created from original pictures of faces. The original face stimuli all displayed one of the basic emotions (anger, disgust, fear, happiness, sadness or surprise) at two different levels of intensity: faces at the 100% intensity level (standard emotionally expressive faces) and faces at the 50% intensity level, which had been created by morphing a neutral face and an expressive face. Both tasks basically used the same stimuli, the stimuli for the efficiency task being a subset of the stimulus set for the intensity task. This reduction in the number of basic emotions investigated in the efficiency task had to be made due to experimental constraints.
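The two stimulus operations described above (mirroring one hemiface to build an LL or RR composite, and blending a neutral with an expressive face for the 50% level) can be sketched at the pixel level. This is only a minimal illustration under simplifying assumptions: it treats the face as a perfectly aligned greyscale array with the midline at the centre column, whereas real morphing software warps facial landmarks rather than blending raw pixels.

```python
import numpy as np

def make_chimeric(face, side="LL"):
    """Mirror one hemiface of a 2-D greyscale array into a full composite.

    Assumes the vertical facial midline lies at the centre column.
    """
    h, w = face.shape
    if side == "LL":
        half = face[:, : w // 2]                       # left hemiface
        return np.concatenate([half, half[:, ::-1]], axis=1)
    half = face[:, w - w // 2 :]                       # right hemiface
    return np.concatenate([half[:, ::-1], half], axis=1)

def blend(neutral, expressive, level=0.5):
    """Crude stand-in for landmark morphing: pixelwise blend of a neutral
    and an expressive version of the same face (level=0.5 -> 50% intensity)."""
    return (1 - level) * neutral + level * expressive

face = np.arange(16.0).reshape(4, 4)   # toy "face" image
ll = make_chimeric(face, "LL")
assert np.allclose(ll, ll[:, ::-1])    # a composite is mirror-symmetric
```

The symmetry check at the end captures the defining property of a chimeric stimulus: both halves carry information from only one hemiface of the original poser.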

1.6.1.1 Intensity task

Regarding the intensity task, it was expected that the type of chimeric (either LL-composite or RR-composite) would influence subjects' choice when asked to pick the more expressive one from a pair of corresponding LL- and RR-composites (the stimulus pair of a given trial differed only in the type of chimeric). Furthermore, there was the hypothesis that the type of expression might also influence subjects' choice as to which of two corresponding chimerics looked more intense. Finally, two different levels of stimulus intensity were applied in order to check the possibility that the perceived intensity of a given chimeric pair, and hence the choice, also depends on the level of stimulus intensity.

Thus the intensity task focused on three variables, namely the type of chimeric (left (LL) vs. right (RR) composites), the type of expression (anger, disgust, fear, happiness, sadness, surprise) and the level of intensity (100% vs. 50%). These variables might influence the perceived intensity of expressive faces and consequently serve as evidence for one of the models of hemispheric contributions to the processing of facial expression outlined above.
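The dependent measure that follows from this 2 (chimeric) x 6 (expression) x 2 (intensity) design is the proportion of trials on which the RR-composite was chosen as more intense, broken down by condition. The sketch below illustrates that tally with invented placeholder trials; the data structure and numbers are hypothetical and do not reflect the study's actual results.

```python
from collections import defaultdict

# Hypothetical trial records: (expression, intensity level, chosen composite).
trials = [
    ("anger", 100, "RR"), ("anger", 100, "RR"), ("anger", 100, "LL"),
    ("happiness", 50, "RR"), ("happiness", 50, "LL"), ("happiness", 50, "RR"),
]

counts = defaultdict(lambda: [0, 0])      # (expression, intensity) -> [RR, total]
for expression, intensity, choice in trials:
    key = (expression, intensity)
    counts[key][1] += 1
    if choice == "RR":
        counts[key][0] += 1

for (expression, intensity), (rr, n) in sorted(counts.items()):
    print(f"{expression:<10} {intensity:>3}%  RR chosen on {rr}/{n} trials")
```

A proportion reliably above 0.5 in such a tally would indicate an RR-composite advantage for that condition, as reported in the abstract for the present data.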

1.6.1.2 Efficiency task

Regarding the efficiency task, the aim was to test the hypothesis of Indersmitten et al. (2003) that emotion is expressed more efficiently in the right than in the left hemiface. It was therefore assumed that classification performance might differ for RR- vs. LL-composites. Apart from this expected difference depending on the type of chimeric, the type of expression as well as the level of intensity might also affect or modulate the efficiency with which stimuli are classified.

Investigating the assumed dissociation between intensity and efficiency of emotional expression, this task was designed to complement the intensity task and to offer further insight into the processing of facial expression.


1.6.2 Electrophysiological data

Apart from studying behavioural effects of the perception of facial expressions, the present study also aimed to identify neuronal correlates and timing characteristics of different expressions by recording event-related potentials (ERPs) during the classification (efficiency) task.

ERPs, which are time-locked changes in the electrical activity of the brain, vary with stimulus and response characteristics and are defined by typical components, i.e. a specific pattern of amplitude/polarity, latency and topography (see Rugg, 1995). Since the temporal resolution of ERPs is extremely high and thus matches the time scale of the emotional phenomena under investigation, they are an optimal non-invasive electrophysiological method for making inferences about the cortical processes underlying the perception of facial expressions. Even though the spatial resolution of the EEG is rather poor compared to alternative functional brain imaging methods, conclusions about local activation and topographic characteristics of ERP components can still be drawn, albeit with limited spatial accuracy. With a 32-channel set-up, this methodological drawback of the EEG remains at an acceptable level, the primary focus being on the temporal characteristics of emotion processing anyway.

By recording ERPs in addition to collecting behavioural data, the present study aimed to identify neuronal correlates and timing characteristics of responses to different facial expressions, to compare the two levels of intensity used, and finally to compare brain responses to the two types of composites, consisting of either left or right hemifaces. Eventually, the ERP data might also help to resolve the poser-perceiver paradox described earlier, since the additional electrophysiological data obtained in the efficiency task relate directly to the perceiver.


2 Methods

2.1 Participants

Twenty-four participants (6 male, 18 female), aged 18 to 32 (m = 21.3 years), contributed data.

All reported normal or corrected-to-normal visual acuity and were strongly right-handed, with a mean score of 92% as assessed by the Edinburgh handedness questionnaire (Oldfield, 1971). Twelve participants contributed behavioural data only, whereas the other twelve contributed both behavioural and EEG data. One additional participant was excluded from further data analysis due to an excessive number of artefacts in the EEG data.

Each participant performed two tasks, the first a two-alternative forced-choice paradigm and the second a classification task, for which additional EEG data were obtained from twelve participants.

2.2 Stimuli and Apparatus

The stimuli were created from the FEEST stimulus set (Young et al., 2002), a validated set of facial expressions derived from the Ekman and Friesen series of Pictures of Facial Affect (1976). Six posers (three male and three female) were chosen, each posing for six basic emotions (anger, disgust, fear, happiness, sadness and surprise) at two levels of intensity (100%, and 50%, the latter being a morph of a neutral and a 100% expressive face). The choice of posers was based on quality in terms of symmetry, tilt, contrast and brightness, which the pilot study determined to be highest for posers no. 03, 04, 06, 08, 09 and 10.

From these 72 original pictures, composites were made by cutting each picture vertically along the midline and mirror-reversing each hemiface, thus creating composites of left hemifaces (LL-composites) and right hemifaces (RR-composites). Composites were edited using Adobe Photoshop™, converted to greyscale and framed within an area of 200 x 257 pixels, corresponding to 5.28 cm x 6.8 cm, with all background removed. The total set of stimuli consisted of 144 pictures.
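The construction of LL- and RR-composites described above can be sketched as follows. This is a minimal illustration, not the original Photoshop workflow: a numpy array stands in for a greyscale face image, and the function name is a placeholder.

```python
import numpy as np

def make_composites(face):
    """Build LL- and RR-composites from a greyscale face image.

    face: 2D array (height x width); an even width is assumed so the
    vertical midline falls exactly between two pixel columns.
    """
    h, w = face.shape
    left = face[:, : w // 2]            # left hemiface
    right = face[:, w // 2 :]           # right hemiface
    # Mirror each hemiface about the midline and join it to the original half
    ll = np.hstack([left, left[:, ::-1]])    # LL-composite
    rr = np.hstack([right[:, ::-1], right])  # RR-composite
    return ll, rr
```

With the 200 x 257 pixel stimuli used here the width (200 pixels) is even, so the cut along the midline is exact and both composites keep the original stimulus size.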


All stimuli were presented on a 19’’ computer monitor. A chin rest provided a fixed viewing distance of 80 cm. An ERTS™ keypad was used to collect participants’ responses.

2.3 Pilot study

A pilot study with five participants, who did not subsequently contribute data to the main study, revealed that the composites differed in perceived intensity. Using a forced-choice paradigm, participants performed an intensity judgement on composites: they had to indicate which of the two composite versions of an original picture, presented simultaneously, displayed the stronger emotional expression. A two-way repeated-measures ANOVA with the factors poser (no. 01 to 10) and type of composite (LL- vs. RR-composite) revealed a significant difference between LL- and RR-composites, F(1,4)=33.36, p<.01, a significant interaction between type of composite and poser, F(9,36)=7.17, p<.001, but no significant difference between posers, F(9,36)<1, p<.65, n.s.

In order to make sure that the experimental stimuli did not systematically differ in size (assessed by measuring maximum width, maximum height, and area, defined as the product of maximum width and maximum height of the faces), a one-way ANOVA with repeated measures on type of composite (LL- vs. RR-composite) was conducted. Neither width, F(1,70)=1.72, p<.194, n.s., nor height, F(1,70)=.01, p<.906, n.s., nor area, F(1,70)<1, p<.346, n.s., differed significantly between the composites made from the original pictures of the six posers (no. 03, 04, 06, 08, 09, 10) chosen as experimental stimuli.

2.4 Procedure

The study was conducted in a dimly lit chamber and controlled using ERTS™ software (Beringer, 1995) on an IBM-compatible PC.


2.4.1 Intensity task

For the first task, a forced-choice intensity judgement, participants were instructed that they would always be presented with two faces, one above the other, displaying the same expression. Participants had to decide which of the two faces looked more emotionally expressive, i.e. more intense, and to indicate their choice by pressing the corresponding key on a keypad using both hands. Key assignment was counterbalanced across participants in order to avoid lateral biases in motor response. Participants were encouraged to scan the images thoroughly and were given a maximum of 10 s for each decision.

To facilitate proper fixation of the subsequent stimuli, each trial started with a central fixation cross presented for 500 ms. Then a stimulus pair, consisting of an LL- and an RR-composite created from the same original picture, was presented for a maximum of 10 s, followed by a blank screen for 500 ms before the next trial. Thus, the SOA was 11 s, yet participants were allowed to make their decision earlier (after a minimum of 500 ms) and thus to move on at their own pace.

Stimulus pairs were always presented twice in randomised order, with LL- and RR-composites once being the upper and once being the lower picture, resulting in a total of 144 experimental trials (six posers posing for six basic emotions at two intensity levels, with each composite arrangement shown once). A break was allowed after 72 trials.

To allow participants to adapt to the procedure, 12 practice trials were implemented, using stimuli that were not subsequently shown in the experimental trials (a single poser displaying all six emotions at 100% intensity).

After completing the first task, blink trials and eye movements to the left and to the right were recorded for those participants contributing both behavioural and EEG data. Participants contributing only behavioural data moved straight on to the second task.

2.4.2 Efficiency task

For the efficiency task, participants were instructed that they would be presented with one face at a time, displaying either an angry, a fearful, a happy or a sad emotional expression. They were asked to classify these pictures as quickly and as accurately as possible, using the middle and index fingers of both hands to press four different keys on an ERTS™ keypad. Key assignment was kept constant because the primary interest was the efficiency of classification for LL- vs. RR-composites rather than the absolute efficiency of classifying different expressions.

Each trial started with a central fixation cross presented for 500 ms. Then a stimulus was presented in the centre of the screen for 2500 ms, followed by a blank screen for 500 ms; the SOA was thus 3500 ms.

The stimuli were exactly the same as in the first task, except that only four expressions (angry, fearful, happy and sad) were used instead of six, resulting in a total of 96 experimental trials (six posers posing for four expressions at two intensity levels, with each face presented once as LL- and once as RR-composite). The number of basic emotions investigated in the classification task was reduced in order to keep the task reasonably feasible in terms of the key assignment participants had to learn.

Five blocks of 96 experimental trials each were shown in completely randomised order, separated by breaks. The total number of experimental trials for the classification task was thus 480.
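The trial counts given above follow directly from crossing the design factors; this can be sketched as follows (variable names are illustrative, not taken from the original ERTS scripts):

```python
import itertools
import random

POSERS = ["03", "04", "06", "08", "09", "10"]
EXPRESSIONS = ["anger", "fear", "happiness", "sadness"]
INTENSITIES = [100, 50]
COMPOSITES = ["LL", "RR"]

# One block: the full factorial crossing of the design factors
block = list(itertools.product(POSERS, EXPRESSIONS, INTENSITIES, COMPOSITES))
assert len(block) == 96  # 6 posers x 4 expressions x 2 intensities x 2 composites

# Five blocks, each shuffled independently
experiment = []
for _ in range(5):
    trials = block[:]
    random.shuffle(trials)
    experiment.append(trials)

total = sum(len(b) for b in experiment)  # 480 experimental trials in total
```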

During breaks, feedback on error percentage and mean reaction time for the preceding block was displayed on the screen, and participants were encouraged to relax and blink as much as they wanted (especially participants in the EEG group).

In order to ensure that participants were familiar with the key assignment, and to avoid biases in the classification results due to erroneous key presses, a practice phase was implemented. During this phase, participants had to reach a criterion of at least 29 out of 32 consecutive trials classified correctly, corresponding to approximately 90% correct. As soon as they reached the criterion, the actual experiment started. Most participants (19 out of 24) reached the criterion within 72 trials. The practice block, comprising 16 practice trials shown twice, consisted of stimuli that were not subsequently shown in the experimental trials (two composite versions of two posers displaying anger, fear, happiness and sadness at 100% intensity only).

The experimental sequence (the intensity judgement always first, the classification task second) was kept constant in order to avoid possible learning effects that might have occurred with the reversed order, since six basic emotions were used in the first task but only four in the second.

2.5 Performance

For the first task (intensity judgement), the percentages of LL- and RR-composites chosen as more expressive within the 10 s presentation time were calculated.

Reaction times were not analysed further, because participants had been encouraged to scan the pictures thoroughly, without any emphasis on response speed.

For the second task (classification of expressions), responses were scored as correct if the correct key was pressed within a time window from 150 to 2500 ms after stimulus onset. Mean reaction times were calculated for correct responses only.
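The scoring rule above can be sketched as a small function; the trial records and field names here are hypothetical, chosen only to illustrate the logic:

```python
def score_trials(trials):
    """Score classification trials.

    Each trial is a dict with keys 'response' and 'target' (expression
    labels) and 'rt' (ms from stimulus onset).  A response counts as
    correct only if the right key was pressed within 150-2500 ms.
    """
    correct = [t for t in trials
               if t["response"] == t["target"] and 150 <= t["rt"] <= 2500]
    accuracy = 100.0 * len(correct) / len(trials)
    # Mean reaction time is computed over correct responses only
    mean_rt = sum(t["rt"] for t in correct) / len(correct)
    return accuracy, mean_rt
```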

2.6 Event-related potentials

The electroencephalogram (EEG) was recorded with sintered Ag/AgCl electrodes mounted in an electrode cap (Easy-Cap™) at the scalp positions Fz, Cz, Pz, Iz, Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T7, T8, P7, P8, FT9, FT10, P9, P10, PO9, PO10, F9, F10, F9’, F10’, TP9 and TP10. Note that the T7, T8, P7 and P8 locations are equivalent to T3, T4, T5 and T6 in the old nomenclature. The F9’ electrode was positioned 2 cm anterior to F9 at the outer canthus of the left eye, and the F10’ electrode was positioned 2 cm anterior to F10 at the outer canthus of the right eye. The positions TP9 and TP10 refer to inferior temporal locations over the left and right mastoids, respectively. TP10 (right upper mastoid) served as initial common reference, and a forehead electrode (AFz) served as ground. Impedances were kept below 10 kΩ and were typically below 5 kΩ. The horizontal electrooculogram (EOG) was recorded from F9’ and F10’ at the outer canthi of both eyes.

The vertical EOG was monitored from an electrode above the right eye against an electrode below the right eye. All signals were recorded with AC coupling (0.05 Hz high pass, 40 Hz low pass, −6 dB attenuation, 12 dB/octave) and sampled at a rate of 250 Hz.

Offline, epochs lasting 2600 ms were generated, starting 200 ms before the onset of a face stimulus. Automatic artifact detection software was run for an initial sorting of trials, and all trials were then visually inspected for artefacts of ocular (e.g. blinks, saccades) and non-ocular origin (e.g. channel blocking or drifts). Trials with non-ocular artefacts, trials with saccades, and trials with incorrect behavioural responses were discarded. For all remaining trials, ocular blink contributions to the EEG were corrected. ERPs were averaged separately for each channel and for each experimental condition and block. Each averaged ERP was low-pass filtered at 10 Hz with a zero-phase-shift digital filter and recalculated to average reference, excluding the vertical EOG channel.
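The averaging, baseline and re-referencing steps described above can be sketched with numpy. This is a minimal sketch on synthetic data, assuming the 250 Hz sampling rate and 200-ms baseline from the text; the actual analysis used dedicated EEG software.

```python
import numpy as np

def average_erp(epochs, keep, fs=250, baseline_ms=200):
    """Average artefact-free epochs and re-reference to the common average.

    epochs: array (n_trials, n_channels, n_samples), single-reference data,
            with each epoch starting baseline_ms before stimulus onset.
    keep:   boolean array marking trials without artefacts or errors.
    """
    erp = epochs[keep].mean(axis=0)              # average over clean trials
    # Subtract the mean of the 200-ms pre-stimulus baseline per channel
    n_base = int(baseline_ms / 1000 * fs)
    erp = erp - erp[:, :n_base].mean(axis=1, keepdims=True)
    # Recalculate to average reference: subtract the instantaneous mean
    # across channels from every channel at each sample
    erp = erp - erp.mean(axis=0, keepdims=True)
    return erp
```

A 2600-ms epoch at 250 Hz corresponds to 650 samples, the first 50 of which form the baseline.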

2.7 Data analysis

2.7.1 Intensity task

The percentages of LL- and RR-composites chosen were submitted to an analysis of variance (ANOVA) with repeated measures on type of composite (LL- vs. RR-composite), type of expression (anger vs. disgust vs. fear vs. happiness vs. sadness vs. surprise) and level of intensity (100% vs. 50%). Where appropriate, epsilon corrections for heterogeneity of covariance with the Huynh-Feldt method (Huynh & Feldt, 1976) were performed throughout.
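The Huynh-Feldt correction adjusts the ANOVA degrees of freedom by an epsilon estimated from the covariance matrix of the repeated measures. A sketch of the estimate for one within-subject factor, computed via the Greenhouse-Geisser epsilon (a minimal illustration of the standard formulas, not the code actually used):

```python
import numpy as np

def hf_epsilon(data):
    """Huynh-Feldt epsilon for one within-subject factor.

    data: array (n_subjects, k_levels) of condition means per subject.
    """
    n, k = data.shape
    s = np.cov(data, rowvar=False)            # k x k covariance matrix
    c = np.eye(k) - np.ones((k, k)) / k       # centring matrix
    sc = c @ s @ c                            # double-centred covariance
    # Greenhouse-Geisser epsilon
    gg = np.trace(sc) ** 2 / ((k - 1) * np.trace(sc @ sc))
    # Huynh-Feldt adjustment (single group), capped at 1
    hf = (n * (k - 1) * gg - 2) / ((k - 1) * (n - 1 - (k - 1) * gg))
    return min(hf, 1.0)
```

For factors with only two levels (type of composite, level of intensity) sphericity holds trivially and epsilon equals 1; the correction only matters for the six-level expression factor and for the electrode-site factor.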

2.7.2 Efficiency task

The percentage of correctly classified faces was submitted to an analysis of variance (ANOVA) with repeated measures on type of composite (LL- vs. RR-composite), type of expression (anger vs. fear vs. happiness vs. sadness) and level of intensity (100% vs. 50%).

Where appropriate, epsilon corrections for heterogeneity of covariance with the Huynh-Feldt method (Huynh & Feldt, 1976) were performed throughout.

2.7.3 Event-related potentials

ERPs to the different conditions were quantified by mean amplitude measures in the time segments 110-140, 170-200, 200-300, 300-500 and 500-800 ms. The first segment was chosen to correspond to the occipital P1, the second segment corresponded to the occipito-temporal N170 peak in the waveforms, and the 300-500 ms segment roughly corresponded to the central-parietal P3. The other segments were chosen upon visual inspection. All amplitude measures were taken relative to a 200-ms baseline preceding the target stimulus and recalculated to average reference.
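Extracting mean amplitudes per segment amounts to averaging the samples within each window relative to stimulus onset. A numpy sketch, assuming the 250 Hz sampling rate and 200-ms pre-stimulus interval from the text (function name illustrative):

```python
import numpy as np

# Analysis windows in ms relative to stimulus onset, from the text
SEGMENTS_MS = [(110, 140), (170, 200), (200, 300), (300, 500), (500, 800)]

def mean_amplitudes(erp, fs=250, pre_ms=200):
    """Mean amplitude per channel for each time segment.

    erp: baseline-corrected average (n_channels, n_samples), with the
    epoch starting pre_ms before stimulus onset.
    """
    out = {}
    for start, stop in SEGMENTS_MS:
        a = int((pre_ms + start) / 1000 * fs)   # sample index of window start
        b = int((pre_ms + stop) / 1000 * fs)    # sample index of window end
        out[(start, stop)] = erp[:, a:b].mean(axis=1)
    return out
```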

For every time segment, ANOVAs were then performed with repeated measures on type of composite (LL- vs. RR-composite), type of expression (anger vs. fear vs. happiness vs. sadness), level of intensity (100% vs. 50%) and the additional repeated-measures factor electrode site (32 levels). Where appropriate, epsilon corrections for heterogeneity of covariance with the Huynh-Feldt method (Huynh & Feldt, 1976) were performed throughout.

Further analysis was carried out for effects that turned out significant in the global analysis, applying ANOVAs for selected time segments in different regions of interest: prefrontal/lateral frontal (FP1, FP2, F7, F8, FT9, FT10, F9, F10, F9’, F10’), frontal (F3, F4, Fz), central-parietal (C3, C4, P3, P4, Cz, Pz), temporal (T7, T8, P7, P8, P9, P10, TP9, TP10) and occipital (O1, O2, Iz, PO9, PO10), with repeated measures on type of composite (LL- vs. RR-composite), type of expression (anger vs. fear vs. happiness vs. sadness), level of intensity (100% vs. 50%) and, for regions of interest that did not include a midline electrode (the prefrontal/lateral frontal and temporal regions), the additional factor hemisphere (left vs. right). Regions of interest were defined by pooling electrodes that showed a roughly similar pattern of ERP response, irrespective of the experimental condition being investigated.
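Pooling electrodes into regions of interest reduces the multi-channel data to one waveform per region. A sketch with an illustrative subset of the montage (channel order and ROI dictionary are examples, not the full recording set-up):

```python
import numpy as np

# Illustrative subset of the recorded channel order
CHANNELS = ["Fz", "Cz", "Pz", "Iz", "Fp1", "Fp2", "F3", "F4", "C3", "C4",
            "P3", "P4", "O1", "O2", "F7", "F8", "T7", "T8", "P7", "P8"]

ROIS = {
    "frontal": ["F3", "F4", "Fz"],
    "central-parietal": ["C3", "C4", "P3", "P4", "Cz", "Pz"],
}

def roi_means(erp, channels=CHANNELS, rois=ROIS):
    """Average the ERP over the electrodes of each region of interest.

    erp: array (n_channels, n_samples) in the order given by `channels`.
    """
    idx = {name: i for i, name in enumerate(channels)}
    return {roi: erp[[idx[ch] for ch in chans]].mean(axis=0)
            for roi, chans in rois.items()}
```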

Individual electrodes within the above-described regions of interest were not included as a separate factor in the ANOVAs, since the regions of interest were considered homogeneous. The defined regions of interest were kept consistent across all experimental conditions. Where appropriate, epsilon corrections for heterogeneity of covariance with the Huynh-Feldt method (Huynh & Feldt, 1976) were performed throughout.

Interactions with the factor type of expression that turned out significant within the defined regions of interest for a specific time segment were analysed further. Post-hoc comparisons using Tukey HSD tests were made to determine the significance of differences.


3 Results

3.1 Intensity task

- Percentage of LL- vs. RR-composites being chosen

The percentages of LL- and RR-composites chosen were submitted to an analysis of variance (ANOVA) with repeated measures on type of composite (LL- vs. RR-composite), type of expression (anger vs. disgust vs. fear vs. happiness vs. sadness vs. surprise) and level of intensity (100% vs. 50%). Where appropriate, epsilon corrections for heterogeneity of covariance with the Huynh-Feldt method (Huynh & Feldt, 1976) were performed throughout.

The ANOVA revealed a significant effect of type of composite, F(1,23)=1.5, p<.001, with RR-composites overall being chosen as more expressive in about 65.3% of trials (see Table 1). In addition to this main effect of composite, there was a significant interaction between expression and composite, F(5,115)=4.4, p<.01. Thus the contrast in perceived intensity between LL- and RR-composites differed between individual expressions, although it was present for each expression (see Fig. 1). Furthermore, there was a significant three-way interaction between composite, expression and intensity, F(5,115)=5.0, p<.001 (see Fig. 2).

Level of intensity   overall     anger       disgust     fear        happiness   sadness     surprise
overall              34.7/65.3   35.1/64.9   34.0/66.0   28.6/71.4   33.7/66.3   41.3/58.7   35.3/64.7
100% only            35.6/64.4   39.4/60.6   39.2/60.8   26.1/73.9   33.0/67.0   44.3/55.7   31.7/68.3
50% only             33.7/66.3   30.8/69.2   28.9/71.1   31.0/69.0   34.3/65.7   38.4/61.6   38.9/61.1

Table 1. Percentage of LL- vs. RR-composites chosen (LL/RR); overall and separately per level of intensity; overall and separately for each individual expression


Figure 1. Percentage of LL- vs. RR-composites being chosen; overall and separately for each individual expression.

Examining individual expressions, an analysis of variance (ANOVA) with repeated measures on type of composite (LL- vs. RR-composite) and level of intensity (50% vs. 100%) revealed a significant main effect of type of composite for anger, F(1,23)=33.8, p<.001, with RR-composites being perceived as more intense in about 64.9% of trials. Moreover, there was a significant interaction between type of composite and level of intensity, F(1,23)=7.7, p<.05, indicating that the difference in perceived intensity between LL- and RR-composites varied with the level of intensity. Post-hoc testing with Bonferroni-corrected pairwise comparisons revealed that the advantage for RR-composites was significant at each level of intensity (LL 100% vs. RR 100%, F(1,23)=15.36, p<.001; LL 50% vs. RR 50%, F(1,23)=34.83, p<.001; α=.025).

RR-composites of disgust were overall perceived as more intense in about 66.0% of trials, as reflected by a significant main effect of type of composite in an ANOVA with repeated measures on type of composite and level of intensity, F(1,23)=42.5, p<.001. Furthermore, there was a significant interaction between type of composite and level of intensity, F(1,23)=8.7, p<.01, indicating that the difference in perceived intensity between LL- and RR-composites varied with the level of intensity. Post-hoc testing with Bonferroni-corrected pairwise comparisons revealed that the advantage for RR-composites was significant at each level of intensity (LL 100% vs. RR 100%, F(1,23)=8.94, p<.01; LL 50% vs. RR 50%, F(1,23)=89.81, p<.001; α=.025).

RR-composites of fear were perceived as more intense in 71.4% of trials, as reflected by a significant main effect of type of composite in an ANOVA with repeated measures on type of composite and level of intensity, F(1,23)=72.1, p<.001.

RR-composites of happiness were perceived as more intense in 66.3% of trials, as reflected by a significant main effect of type of composite in an ANOVA with repeated measures on type of composite and level of intensity, F(1,23)=58.8, p<.001.

RR-composites of sadness were perceived as more intense in 58.7% of trials, as reflected by a significant main effect of type of composite in an ANOVA with repeated measures on type of composite and level of intensity, F(1,23)=10.8, p<.01. There was a trend towards an interaction between composite and level of intensity, F(1,23)=3.15, p<.09, indicating that the difference in perceived intensity between LL- and RR-composites varied with the level of intensity. Post-hoc testing with Bonferroni-corrected pairwise comparisons revealed that the difference in perceived intensity between LL- and RR-composites was significant only at 50% intensity (LL 100% vs. RR 100%, F(1,23)=2.88, p<.10; LL 50% vs. RR 50%, F(1,23)=16.8, p<.001; α=.025).

RR-composites of surprise were perceived as more intense in 64.7% of trials, as reflected by a significant main effect of type of composite in an ANOVA with repeated measures on type of composite and level of intensity, F(1,23)=52.5, p<.001. There was a trend towards an interaction between composite and level of intensity, F(1,23)=3.3, p<.082, indicating that the difference in perceived intensity between LL- and RR-composites varied with the level of intensity. Post-hoc testing with Bonferroni-corrected pairwise comparisons revealed that the difference in intensity between LL- and RR-composites was significant at each level of intensity (LL 100% vs. RR 100%, F(1,23)=47.98, p<.001; LL 50% vs. RR 50%, F(1,23)=13.49, p<.001; α=.025).



Figure 2. Percentage of LL- vs. RR-composites being chosen per level of intensity and separately for each individual expression

3.2 Efficiency task

3.2.1 Correct classifications

The percentage of correctly classified faces was submitted to an analysis of variance (ANOVA) with repeated measures on type of composite (LL- vs. RR-composite), type of expression (anger vs. fear vs. happiness vs. sadness) and level of intensity (100% vs. 50%).

Where appropriate, epsilon corrections for heterogeneity of covariance with the Huynh-Feldt method (Huynh & Feldt, 1976) were performed throughout.

Overall, there was a significant effect of type of expression, F(3,69)=34.0, p<.001, and a significant effect of type of composite, F(1,23)=7.6, p<.05, with RR-composites being classified correctly in 86.9% and LL-composites in 84.8% of trials (see Table 2). There was also a significant effect of level of intensity, F(1,23)=328.2, p<.001, with faces being classified more accurately when presented at 100% intensity. Furthermore, there was a significant interaction between type of composite and type of expression, F(3,69)=9.7, p<.001 (see Fig. 3), a significant interaction between type of expression and level of intensity, F(3,69)=32.7, p<.001, and a trend towards an interaction between type of composite and level of intensity, F(1,23)=3.3, p<.09. The three-way interaction between type of composite, type of expression and level of intensity was also significant, F(3,69)=9.3, p<.001 (see Fig. 4).

Level of intensity   overall     anger       fear        happiness   sadness
overall              84.8/86.9   78.5/84.7   77.6/83.3   97.6/97.2   85.5/82.4
100% only            93.2/94.1   94.3/96.4   91.7/93.1   99.9/100    86.8/87.0
50% only             76.5/79.8   62.8/73.0   63.5/73.6   95.4/94.5   84.3/77.9

Table 2. Percentage of correct classifications (LL/RR); overall and separately per level of intensity; overall and separately for each individual expression

Figure 3. Percentage of correctly classified composites; overall and separately for each individual expression.



Further analysis of individual expressions revealed a significant main effect of type of composite for anger, F(1,23)=30.3, p<.001, with RR-composites classified more accurately, a significant main effect of level of intensity, F(1,23)=150.1, p<.001, with faces shown at 100% intensity classified more accurately, and a significant interaction between type of composite and level of intensity, F(1,23)=15.2, p<.001. Post-hoc testing with Bonferroni-corrected pairwise comparisons revealed that the advantage for RR-composites was significant at each level of intensity (LL 100% vs. RR 100%, F(1,23)=116.06, p<.001; LL 50% vs. RR 50%, F(1,23)=148.47, p<.001; α=.025).

For fear, the ANOVA revealed a significant main effect of type of composite, F(1,23)=8.0, p<.05, with RR-composites classified more accurately, a significant main effect of level of intensity, F(1,23)=103.4, p<.001, with faces shown at 100% intensity classified more accurately, and a significant interaction between type of composite and level of intensity, F(1,23)=7.2, p<.05. Post-hoc testing with Bonferroni-corrected pairwise comparisons revealed that the advantage for RR-composites was significant at each level of intensity (LL 100% vs. RR 100%, F(1,23)=69.23, p<.001; LL 50% vs. RR 50%, F(1,23)=78.54, p<.001; α=.025).

Happy faces were classified more accurately when presented at 100% intensity, as reflected by a significant main effect of level of intensity, F(1,23)=20.1, p<.001. There was no significant effect of type of composite, F(1,23)=.7, p<.4235, n.s.

Sad faces were classified more accurately when presented at 100% intensity, as reflected by a significant main effect of level of intensity, F(1,23)=8.4, p<.01. There was a trend towards an effect of type of composite, F(1,23)=3.0, p<.10, with LL-composites classified more accurately, and a significant interaction between type of composite and level of intensity, F(1,23)=5.1, p<.05. Post-hoc testing with Bonferroni-corrected pairwise comparisons revealed that LL- and RR-composites did not differ significantly at 100% intensity, whereas LL-composites were classified more accurately than RR-composites at 50% intensity (LL 100% vs. RR 100%, F(1,23)=2.12, p<.1593, n.s.; LL 50% vs. RR 50%, F(1,23)=9.02, p<.01; α=.025).


Figure 4. Percentage of correctly classified LL- and RR-composites per level of intensity, separately for each individual expression

3.2.2 Reaction times

Reaction times for correct responses were submitted to an analysis of variance (ANOVA) with repeated measures on type of composite (LL- vs. RR-composite), type of expression (anger vs. fear vs. happiness vs. sadness) and level of intensity (100% vs. 50%). Where appropriate, epsilon corrections for heterogeneity of covariance with the Huynh-Feldt method (Huynh & Feldt, 1976) were performed throughout.

Overall, there was a trend towards an effect of type of composite, F(1,23)=3.9, p<.062 (see Fig. 5), a significant main effect of type of expression, F(3,69)=7.5, p<.001, and a significant main effect of level of intensity, F(1,23)=328.9, p<.001, with faces classified more quickly when shown in the 100% intensity version. Furthermore, there was a significant interaction between type of composite and type of expression, F(3,69)=8.0, p<.001, and a significant interaction between type of expression and level of intensity, F(3,69)=12.9, p<.001.


Level of intensity   overall     anger       fear        happiness   sadness
overall              1146/1131   1187/1128   1285/1257   910/905     1202/1235
100% only            1051/1047   1077/1034   1149/1157   815/810     1162/1188
50% only             1241/1215   1297/1222   1421/1357   1005/1001   1242/1281

Table 3. Reaction times in ms for correct classifications (LL/RR); overall and separately per level of intensity; overall and separately for each individual expression

Figure 5. Reaction times in ms for correct classifications; overall and separately for each individual expression

