4.3.2 Interpreting current ERP results

ERP data has only been obtained for the second part of the experiment, the efficiency task, in which subjects were presented centrally with one stimulus at a time, either a left composite (LL) or a right composite (RR). These stimuli had to be classified as quickly and as accurately as possible according to the corresponding type of expression (anger, fear, happiness, sadness).
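
To make the trial structure of this task concrete, the following Python sketch illustrates one possible implementation of the randomized trial list and response scoring. It is only an illustration: the number of repetitions per condition, the variable names and the scoring function are assumptions and do not reproduce the original experiment software.

```python
import random

# Hypothetical reconstruction of the efficiency-task trial list:
# each centrally presented stimulus is a left (LL) or right (RR) composite
# showing one of the four expressions used in the study.
EXPRESSIONS = ["anger", "fear", "happiness", "sadness"]
COMPOSITES = ["LL", "RR"]
REPETITIONS_PER_CELL = 10  # assumed number of trials per condition


def build_trial_list(seed=0):
    """Return a randomized list of (composite, expression) trials."""
    trials = [(comp, expr)
              for comp in COMPOSITES
              for expr in EXPRESSIONS
              for _ in range(REPETITIONS_PER_CELL)]
    random.Random(seed).shuffle(trials)
    return trials


def score_response(trial, response, reaction_time_ms):
    """A response is correct when it names the expression of the composite."""
    composite, expression = trial
    return {"composite": composite,
            "expression": expression,
            "correct": response == expression,
            "rt_ms": reaction_time_ms}


if __name__ == "__main__":
    trials = build_trial_list()
    # Example: the subject answers "happiness" to the first trial after 612 ms.
    print(score_response(trials[0], "happiness", 612))
```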

Type of expression effect

In the present study, the brain response towards expressive faces varied with the type of expression, but only within the late time segment from 500-800ms and only at specific electrode sites, as reflected by a significant main effect of type of expression at temporal electrode sites. The effect resulted from a noticeably different pattern of activation after the presentation of happy faces compared to angry, fearful and sad expressions, which all elicited a very similar brain response. This pop-out effect for happy expressions fits the behavioural data nicely, since happiness also stood apart from every other expression in terms of the extremely high performance it yielded in the classification task. Thus, as reported earlier by several studies, there seems to be something very special about the recognition of happy expressions, not only on the behavioural, but also on the neuronal level (Kerstenbaum & Nelson, 1989; Kirouac & Dore, 1983, 1984; Orozco & Ehlers, 1998). Amplitudes elicited by happy expressions were significantly lower than amplitudes elicited by any other expression. Therefore, as suggested by Adolphs et al. (1996), it might be that the processing of happy faces involves neuronal networks that are diffusely scattered across several brain areas and less localized than the neuronal networks for other facial expressions, and that these networks consequently have a weaker effect on a given surface electrode within a defined region of interest, in this case at temporal regions.

An alternative explanation for this type of expression effect is offered, for example, by Schupp et al. (2003), who suggested that differences in the onset of the brain response towards different types of expressions depend on evolutionary mechanisms, with attentional resources focused on cues that are potentially dangerous, such as anger (Sato, Kochiyama, Yoshikawa, & Matsumura, 2001; Schupp, Junghöfer, Weike, & Hamm, 2003). In the light of the present study, this explanation indeed makes sense, since the least dangerous type of expression, namely happiness, clearly popped out of the general response pattern.

The fact that differences in the processing of different types of expressions only become visible at such a late stage is contrary to several recent studies reporting a modulation of responses towards different expressions as early as about 100ms after presentation of the stimulus (Eimer & Holmes, 2002; Pizzagalli et al., 1999). This discrepancy might, however, be the result of methodological differences between the studies. Eimer and Holmes used fearful or neutral expressive faces presented either in upright or inverted orientation, with houses as distracting stimuli, so their results could also be due to the fact that their stimuli did not only differ in terms of expressiveness, but also in terms of biological significance, arousal level and the level of attention they attracted. Given that they reported a frontocentral positivity after the presentation of upright fearful faces, this alternative explanation would also make sense. Moreover, Eimer and Holmes were only comparing ERPs towards fearful vs. neutral faces, whereas the present study aimed at identifying a correlate that distinguishes among different types of expressions.

Another study reporting early emotional face processing at about 100ms post stimulus presentation (Pizzagalli et al., 1999) is also considerably different from the present study: presenting either liked or disliked faces in a passive observation task, Pizzagalli et al. took a more valence-dependent approach to the processing of facial expressions, whereas the present study did not primarily focus on any potential difference between pleasant and unpleasant facial expressions. However, assuming that happy expressions in the present study would qualify for the category ‘pleasant stimuli’ and every other expression would rather qualify for the category ‘unpleasant stimuli’, the present study and its distinct effect for happiness could in fact be taken as partial support for the results of Pizzagalli et al. Yet further limiting the degree to which these two studies are actually comparable is the fact that Pizzagalli et al. were primarily focusing on differences in local brain activation towards liked and disliked faces and not especially on differences in processing capacity or the active classification of different facial expressions.

Furthermore, Pizzagalli et al. reported a valence-dependent modulation with a trend for higher amplitudes being associated with liked faces, whereas the present study points in the opposite direction - reduced amplitudes towards happy or, in other words, ‘pleasant stimuli’. These contradictory results might also be due to the fact that the arousal level evoked by the stimuli of Pizzagalli et al. might have been quite high, at least for the disliked stimuli, since they were taken from psychiatric patients (Szondi pictures), whereas the arousal level evoked by the stimuli used in the present study is probably lower overall and probably also more homogeneous across the different stimulus categories.

Thus, the present study is more in line with studies that report ERPs discriminating between different facial expressions at later stages of processing. ERP modulations towards expressive faces have been found for the P200 component (Ashley, Vuilleumier, & Swick, 2004), for the time segment of 200-400ms (Marinkovic & Halgren, 1998), at around 450ms (Münte et al., 1998), and for the time segment of 500-800ms post-stimulus onset (Krolak-Salmon et al., 2001). When discussing ERP studies dealing with facial expressions, regardless of the study in question, it is however crucial to clarify whether the discrimination between emotionally expressive and neutral stimuli or the discrimination among different emotionally expressive stimuli is addressed. It might well be the case that the differentiation between emotionally expressive and neutral faces occurs much earlier than the differentiation among different facial expressions (Krolak-Salmon et al., 2001). Furthermore, as proposed by Marinkovic and Halgren (1998), the most salient aspects of face recognition, such as facial expression, are probably processed not just once, but multiple times, depending on the nature of a given task or on a given context. This hypothesis might help to explain the conflicting outcomes of different studies reporting distinct correlates towards expressive faces, ranging from very early to very late modulations of the evoked cortical response. In sum, the emergence of an expression-specific correlate largely depends on the stimulus material, the underlying paradigm and experimental procedures, as well as on the focus of the individual study under inspection.

Yet, consistent with most previous studies, the present study did not reveal any expression-specific modulation of the N170. This component, which has long been interpreted as a face-specific component (Eimer & McCarthy, 1999; Rossion et al., 1999), probably rather reflects the structural encoding of facial stimuli and is not selective for any specific facial expression (Ashley et al., 2004; Eimer & Holmes, 2002), even though contradictory results have also been published. For example, Batty and Taylor (2003), using more naturalistic pictures without removing background and hairstyle, reported specific effects of emotional expressions on the amplitude and latency of the N170 in an implicit emotional task.

Again, as for the behavioural results, differences between the outcomes of the present study and those of related studies can be explained by differences in task and stimulus factors.

Not only the nature of the task and physical differences between the stimuli, but also differences in pleasantness, in the attention attracted and the arousal evoked by the stimuli, as well as the gender (Orozco & Ehlers, 1998) and handedness of posers and perceivers, influence the outcome of a given study.

Despite the discrepancies between all these studies in terms of the emergence of an expression-specific modulation, there is at least some convergence in terms of the localization of expression-specific activation: the temporal region seems to be one of the primary candidates, in particular right temporal regions (Bentin, Allison, Puce, Perez, & McCarthy, 1996; Kanwisher, McDermott, & Chun, 1997; Puce, Allison, Asgari, Gore, & McCarthy, 1996; Sergent, Ohta, & MacDonald, 1992). Even though the present study could indeed demonstrate a distinct temporal activation differentiating between types of expressions, a specifically right-lateralized temporal activation did not emerge, which might also be due to reduced statistical power resulting from the small number of electrodes as well as to limitations in terms of spatial resolution.

Finally, there is also the possibility that differences in task influence the proportion of uni- vs. bilateral activation. Comparing active recognition and passive perception of facial expressions, Mikhailova and Bogomolova (2000) reported lateralized activation of the right hemisphere for passive perception, whereas active recognition of facial expressions was associated with activity in both hemispheres. Thus, lateralized brain activity seems to differ between implicit recognition tasks and active recognition tasks.

Level of intensity effect

Amplitudes towards the stimuli were strongly affected by the modulation of the intensity level.

The effect started within the time segment of 170-200ms (N170) at occipital and central-parietal electrode sites, spread to temporal and frontal electrode sites within the following time segment of 200-300ms, and finally peaked within the time segment of 300-500ms (P3), in which all electrode sites showed a significantly different activation pattern for faces presented at the 100% intensity level compared to faces presented at the 50% intensity level. Central-parietal electrode sites showed a significant effect of intensity starting within the time segment of 170-200ms and continuing into the last time segment analysed (500-800ms). At nearly all electrode sites, the amplitude towards faces presented at the 100% intensity level was significantly larger than towards those presented at the lower intensity level (50%); thus, higher intensity levels resulted in stronger brain responses. There were only two exceptions to this general pattern of results:

At occipital electrode sites, the brain response towards faces presented at the low intensity level (50%) was significantly stronger than towards those presented at the high intensity level (100%), regardless of the time segment under inspection, be it the segment of 170-200ms (N170), of 200-300ms or of 300-500ms (P3). The second exception from the general pattern occurred at the prefrontal/lateral frontal region of interest, where faces presented at the low intensity level (50%) elicited a significantly larger amplitude than faces presented at the higher intensity level (100%) within the time segment of 300-500ms (P3).
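
The intensity comparisons reported here rest on mean amplitudes within the stated time segments for each region of interest. The following Python sketch shows how such window means could be computed from an array of baseline-corrected epochs; the sampling rate and the channel-to-region mapping are illustrative assumptions, not the parameters of the actual recording.

```python
import numpy as np

# Assumed layout: epochs array of shape (n_trials, n_channels, n_samples),
# baseline-corrected, with epoch time starting at 0 ms.
SFREQ = 500.0  # Hz, assumed sampling rate
TIME_WINDOWS_MS = {"N170": (170, 200),
                   "200-300": (200, 300),
                   "P3": (300, 500),
                   "late": (500, 800)}
# Hypothetical channel-index mapping onto regions of interest named in the text.
ROIS = {"occipital": [0, 1], "central-parietal": [2, 3], "temporal": [4, 5]}


def window_mean_amplitudes(epochs, sfreq=SFREQ):
    """Mean amplitude per (time window, ROI), averaged over trials and channels."""
    results = {}
    for win_name, (t0, t1) in TIME_WINDOWS_MS.items():
        i0, i1 = int(t0 / 1000 * sfreq), int(t1 / 1000 * sfreq)
        for roi_name, channels in ROIS.items():
            # trials x channels x samples -> a single mean value for this cell
            results[(win_name, roi_name)] = epochs[:, channels, i0:i1].mean()
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 40 simulated trials, 6 channels, 800 ms of data at the assumed sampling rate.
    fake_epochs = rng.normal(size=(40, 6, int(0.8 * SFREQ)))
    for key, value in window_mean_amplitudes(fake_epochs).items():
        print(key, round(float(value), 3))
```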

Relating the present ERP data to the behavioural data of this study, a generally stronger brain response towards stimuli presented at the higher intensity level (100%) seems to coincide with a better performance, as determined by accuracies and reaction times.

When comparing this intensity effect to other studies dealing with ERPs towards different levels of stimulus intensity, it first has to be defined what is actually meant by the term intensity level. In the present study, two different levels of intensity were used: one was an original picture of a given facial expression, and the other was obtained by morphing the original expressive picture with a neutral face of the same poser. The morphed versions are artificial faces resulting from the interpolation of a set of anatomical feature points of an expressive face with the corresponding reference points on the original neutral face.

In this way, the stimuli presented were at either the 50% or the 100% intensity level in terms of the intensity of emotional expression, with luminance and contrast of all stimuli being approximately the same, although this was not tested systematically. However, it has to be taken into consideration that the stimuli used might also differ in the level of arousal and the level of attention they attract, so the question arises whether and how the stimuli differ from each other in these respects. If the stimuli of the present study actually differ in more than just the experimentally manipulated and intended intensity level as defined here, there might also be other systematic influences on the ERP results, i.e. they might be biased by confounding variables. Thus, even though there are a number of apparent similarities, the different concepts of ‘intensity’ have to be taken into account when making comparisons between the current and previous studies.
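
Geometrically, the 50% morphs described above amount to a linear interpolation between corresponding landmark coordinates of the expressive and the neutral photograph of the same poser. The following Python sketch illustrates only this landmark interpolation step; the landmark coordinates are invented for illustration, and the texture warping and blending performed by actual morphing software are omitted.

```python
import numpy as np


def interpolate_landmarks(expressive_pts, neutral_pts, intensity=0.5):
    """Linearly interpolate matched anatomical feature points.

    intensity = 1.0 reproduces the expressive geometry (100% condition),
    intensity = 0.5 yields the half-way geometry used for the 50% condition.
    Texture warping/blending of the photographs is not covered by this sketch.
    """
    expressive_pts = np.asarray(expressive_pts, dtype=float)
    neutral_pts = np.asarray(neutral_pts, dtype=float)
    if expressive_pts.shape != neutral_pts.shape:
        raise ValueError("landmark sets must contain the same points")
    return neutral_pts + intensity * (expressive_pts - neutral_pts)


if __name__ == "__main__":
    # Two hypothetical (x, y) landmarks, e.g. mouth corners, in pixel coordinates.
    neutral = [(120.0, 200.0), (180.0, 200.0)]
    happy = [(112.0, 188.0), (188.0, 188.0)]   # corners raised and widened
    print(interpolate_landmarks(happy, neutral, intensity=0.5))
```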

Generally, studies reporting effects of stimulus intensity on ERP components such as the N100, P200, N200 and P300 are numerous (Covington & Polich, 1996; Picton, Hillyard, Krausz, & Galambos, 1974; Rugg, 1995). However, these studies all used very simple visual or auditory stimuli, such as black and white checkerboards or standard tones, for which intensity as a physical dimension can easily be manipulated, whereas the level of intensity in the present study is defined on a rather psychological dimension.

Thus, it is questionable whether the results are actually comparable. Nevertheless, the results of studies investigating effects of stimulus intensity on ERPs converge surprisingly well with the data from the present study: generally, the higher the stimulus intensity, the larger the N100, P200, N200 and P300 amplitudes. In this respect, the present data fit this general pattern quite well for most of the regions of interest: higher stimulus intensity (in this case, 100% expressive faces) elicited stronger brain responses.

However, one could also interpret the current data in terms of attentional effects, since it is well known that most ERP components are affected by attention. It has, for example, been demonstrated that the amplitude of the P300 component is larger and its latency shorter for stimuli that attract more attention (Polich & Kok, 1995), and it might thus be the case that faces presented at the high intensity level (100%) simply attract more attention than faces presented at the low intensity level (50%). However, an important distinction has to be made between studies investigating these attentional effects on the resulting ERPs and the present study, since the paradigms used to examine attentional influences usually implemented an oddball technique, whereas in the present study all stimulus types were equally likely to occur. Therefore it is questionable whether the current study can be compared with studies implementing the classical oddball paradigm.

Additional alternative interpretations arise from studies that deal with more natural stimulus material, such as facial stimuli or natural scenes like the International Affective Picture System (Lang, Bradley, & Cuthbert, 1999). These studies indicate that stimulus valence (Balconi & Pozzoli, 2003) as well as level of arousal (Mini, Palomba, Angrilli, & Bravi, 1996) influence ERPs and contribute to augmented ERP components. Assuming that the concept of intensity level in the present study is somehow comparable to the concepts of intensity applied in previous studies, with levels of arousal or attention being comparable as well, the ERP results for the intensity effect are quite consistent.

Type of composite effect

Brain response towards composites consisting of either left or right hemifaces was significantly different for the N170 as well as for the P3 component. At temporal electrode sites, the amplitude of the N170 was significantly larger towards right composites than towards left composites. It has repeatedly been reported that attended stimuli are associated with a stronger negative amplitude of the N170 component (Hillyard, Hink, Schwent, & Picton, 1973; Naatanen, 1982), which offers quite a good explanation for the current data.

Given the fact that right composites yielded better performance in the classification of expressions, the higher amplitudes towards right composites, which presumably carry more relevant information than left composites, seem to mirror this effect on the neuronal level. At temporal electrode sites as well as at prefrontal/lateral frontal electrode sites, there was also an effect of type of composite on the P3 component. Again, at temporal electrode sites, right composites elicited higher amplitudes than left composites. Stimuli that are relevant to the individual are known to elicit a larger P3 amplitude (Duncan-Johnson & Donchin, 1977, 1979), which once more argues for a greater importance of right hemifacial information.

However, at prefrontal/lateral frontal electrode sites, the P3 amplitude was larger for left composites. Thus, different brain areas seem to react differently towards left vs. right composites, and there is probably no uniform way of processing the two hemifaces. In any case, this main effect of type of composite at temporal electrode sites for the N170 component, as well as the P3 effect at temporal and prefrontal/lateral frontal electrode sites, has to be interpreted in the light of the interaction between type of expression and type of composite.

Interaction between type of expression and type of composite

Compared to the effect of type of expression, where the ERP data matched the behavioural data, the interaction between type of expression and type of composite yielded somewhat different results for the ERPs compared to the behavioural data. Accuracy and reaction times showed a general right composite advantage in the classification task that was modulated by type of expression, with left composites of sad expressions yielding a better performance than right composites of sad expressions. However, the ERP data failed to reveal such an effect for sad expressions. Only for the time segment of 170-200ms (N170) and only at occipital electrode sites did sad expressions actually modulate the general pattern of brain responses towards left vs. right composites, with a resulting amplitude pattern that was qualitatively different from that of every other expression, though not statistically significant. Interestingly, however, the ERP data revealed that the interaction between type of expression and type of composite was mainly driven by fearful expressions. Fearful expressions elicited a distinct pattern of brain response towards left vs. right composites that differed from every other type of expression. This pattern was already present in the early time segment of 110-140ms and was sustained for the following time segments of 170-200ms and 200-300ms.

For the early time segment of 110-140ms (P1), temporal electrode sites as well as frontal electrode sites showed a modulation of the brain response towards left vs. right composites of fearful expressions, with left composites eliciting a significantly larger amplitude than right composites, whereas no other expression elicited any differential reaction towards left vs. right composites. However, for the following time segment of 170-200ms (N170), the pattern of response towards left vs. right composites of fearful expressions at temporal and frontal electrode sites was reversed – a significantly larger amplitude resulted from the presentation of right compared to left composites of fearful expressions. Still, there was no difference between left and right composites for any other type of expression. This newly emerging right composite advantage for fearful expressions also persisted through the time segment of 200-300ms at temporal electrode sites and additionally emerged at prefrontal/lateral frontal electrode sites. Thus, there seems to be a crucial difference between early and late stages in the processing of fearful expressions that turned up exactly at those regions of interest that are specialized for face processing in general and emotional stimuli in particular (Streit et al.,