
5 Conclusion

Summing up the outcomes of the present study, it becomes obvious that both aspects of emotional processing investigated here, namely the intensity and the efficiency of emotional expression, are not yet fully understood. Numerous contradictions remain between this study and the thematically related studies cited above, so further research is required.

There is also a need for further investigation of the precise nature of the information transfer that takes place between a poser, who encodes and sends facial information, and a perceiver, who receives and decodes it.

Facing the discrepancies between the present study and the studies by Sackeim et al., by Indersmitten et al., and several other behavioural and ERP studies mentioned above, it has to be kept in mind that the stimuli used, as well as the tasks implemented, differed considerably from each other.

In terms of stimulus material, one of the most crucial factors to consider when discussing these conflicting results is the elicitation condition of the stimuli: verbally instructing actors to generate a certain facial expression probably influences the relative contributions of the left and the right hemisphere. Moreover, there is an important distinction to be made between spontaneous and voluntary (posed) expression of facial emotions. In a review article by Pizzamiglio et al. (1989), spontaneous facial expressions in normal control subjects appeared to be symmetrical or biased either to the left or the right hemiface, as concluded from studies videotaping subjects' facial reactions towards movies and comparing the relative movement of the left vs. the right hemiface (Ekman, Hager, & Friesen, 1981; Lynn & Lynn, 1943), whereas voluntary facial expressions appeared to be more asymmetric, with the left hemiface moving more strongly than the right hemiface (Moscovitch & Olds, 1984).

As already pointed out in the introduction, the difference between intuitive and posed facial expressions, its most prominent example being a Duchenne smile compared to the so-called social smile, exists not only in terms of movements but also in terms of differences in muscle tone. Besides data from normal controls, the clinical literature, too, suggests two separate operating subsystems, one dealing with spontaneous and the other with voluntary facial expression, a conclusion drawn from a number of clear dissociations in patients suffering from Parkinson's disease or from lesions to certain brain areas (see Rinn, 1984). Nevertheless, while describing and discussing the naturalness of facial expressions with its two assumed distinct types, spontaneous vs. posed, it has to be emphasized that everyday life situations are much less clear-cut in terms of evoking either spontaneous or posed facial expressions, with display rules filling the interface between the two extremes.

Thus, although the stimuli in the present study can relatively easily be taken as examples of posed voluntary facial expressions, it still remains questionable to what extent they are actually comparable to naturally occurring facial expressions and to what extent conclusions can be drawn about the expected amount of hemispheric contribution during the generation of these expressions.

Apart from the vagueness of the elicitation condition, another limiting factor for the interpretation of the current study in comparison with related studies is the lack of information concerning the handedness of the posers used for generating the stimulus material. Since handedness is a relatively good indicator of brain lateralization and thus strongly influences the outcome of studies dealing with hemispheric differences, it might be crucial to assess the handedness of posers as well as of perceivers, which unfortunately most studies have not done, or at least have not reported. Along the same line, it is not only important to consider the acquisition of the stimuli per se, but also to keep in mind that, when dealing with chimeric faces, there is always a chance of errors in the procedure of cutting, transposing, and recomposing hemifaces or flipping slides. Further limiting the comparability of different studies is the fact that almost none of the studies dealing with expressive faces, including the efficiency task of the present study, applied the complete set of basic expressions. Some studies did not even investigate expressions belonging to the more or less generally accepted classical categories of basic emotions, but used stimuli of complex facial emotions based on dimensional approaches. As the stimulus sets also differed in terms of arousal level, pleasantness, naturalness, and the attention directed towards them, comparisons between studies are quite limited.
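The hemiface-composite construction mentioned above (cutting a face at the vertical midline and recomposing one hemiface with its mirror image) can be sketched as follows. This is a minimal illustration on a toy grayscale array; the function name and the toy data are our own, not from the study, and real stimulus preparation would additionally require midline alignment and smoothing of the seam.

```python
import numpy as np

def make_composite(face, side="left"):
    """Build a chimeric composite by mirroring one hemiface.

    `face` is a 2-D (height x width) grayscale array with an even width;
    the vertical midline splits it into left and right hemifaces.
    An LL-composite pairs the left hemiface with its mirror image,
    an RR-composite does the same with the right hemiface.
    Illustrative sketch only; real stimuli also need alignment/smoothing.
    """
    h, w = face.shape
    half = w // 2
    if side == "left":
        hemi = face[:, :half]                    # left hemiface
        return np.hstack([hemi, hemi[:, ::-1]])  # original + mirror
    else:
        hemi = face[:, half:]                    # right hemiface
        return np.hstack([hemi[:, ::-1], hemi])  # mirror + original

# Toy asymmetric "face": left half darker than right half.
face = np.hstack([np.full((4, 3), 10), np.full((4, 3), 200)])
ll = make_composite(face, "left")   # built entirely from the left hemiface
rr = make_composite(face, "right")  # built entirely from the right hemiface
```

By construction, both composites are perfectly mirror-symmetric about the midline, which is exactly the property that lets LL- vs. RR-composite comparisons isolate hemiface information.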

In terms of task factors determining, or at least influencing, the outcome of the respective studies (an especially important issue when discussing and comparing ERP results), it has to be mentioned that the implemented tasks were not very alike either: some of the studies cited above used a passive perceptual task or an implicit oddball paradigm, whereas in other studies subjects actively classified facial expressions. Since attentional and motivational effects strongly modulate certain ERP components (see Rugg, 1995), studies using different experimental paradigms are quite limited in terms of their transferability and generality. Even the results of the behavioural part are quite difficult to compare, due to these large differences in experimental design between studies.

Thus, referring to the intensity task, it would be quite helpful to conduct further experiments using, for example, a forced-choice paradigm in combination with the original stimuli used by Sackeim et al. (1978), or the present stimuli in combination with a paradigm involving a judgement of expressiveness on a rating scale, such as the procedure used by Sackeim et al. This way, it might be possible to separate effects resulting from differences in tasks from those resulting from differences in stimulus material. Referring to the efficiency task, there are even more possibilities for follow-up experiments that would make sense and probably help clarify the discrepancies mentioned above. These will be discussed further below.

Another methodological issue arises from the fact that most studies either investigate the perceptual effects of facial expressions by implementing half-field paradigms, or investigate facial expressions with the focus set on posing effects. Thus, for most studies it is only possible to draw conclusions about one part of expression processing, with encoding and decoding being treated as unrelated. In addition, studies implementing a task with lateral stimulus presentation also entail the possibility that spatial attention effects influence or distort the results. It has repeatedly been reported (Rhodes, 1985; Yovel, Levy, Grabowecky, & Paller, 2003) that there is a bias towards the left visual field of a perceiver due to right hemispheric superiority for spatial tasks, with better representation of information from the left visual field and consequently (at least in normal face-to-face situations) more attention being attracted towards the right hemiface of a poser, which lies in the left visual field of a perceiver. These spatial attention effects are quite robust and can even be established on the neuronal level (Yovel et al., 2003). This spatial attention effect might actually serve as an explanation for the relatively consistent RR-composite advantage in the present study: it might be the case that the mechanism for the perception of facial expressions is somehow tuned to prefer information from the right hemiface of a poser, since this is the part of a face that normally falls into the left, and thus preferred, visual field of a perceiver.

Consequently, an initial preference for a certain visual field, namely the left one, might gradually have evolved into a preference for a certain type of facial information, namely that coming from the right hemiface. Even though at first sight spatial attention effects should not matter at all, since stimulus presentation was kept central and any possible spatial attention effects biased LL- and RR-composites equally, there might still be this kind of adaptation, or secondary influence, of spatial preference on the perception of composites. This explanation for the RR-composite advantage would also partially match the EEG data, where RR-composites elicited larger N170 and P3 amplitudes at temporal electrode sites. In any case, further research would be necessary to test and clarify this hypothesis.

Comparing the different models dealing with hemispheric specialization of emotion processing, the results of the present study do not support any of the initially proposed models: neither the idea of a globally dominant role of the right hemisphere for emotion processing, nor a valence- or approach-withdrawal-based hypothesis. Contrary to the general consensus, theoretical assumptions, and experimental data from most studies investigating hemispheric contributions to emotion processing, the outcomes of the present study point towards a dominant role of the left hemisphere for the expression of emotional intensity and a relatively dominant role of the left hemisphere for the efficiency of facial expression, modulated by the type of expression. Apparently, the left hemisphere, at least in the generation of facial expressions, plays an important role that has largely been neglected, or at least underestimated, so far.

Thus, even though none of the proposed models is helpful for the interpretation of the current data, it has become obvious that there is a need for further differentiation between different aspects of emotional expression, taking into account various stimulus factors such as the type of hemiface, the level of intensity, the type of expression, and several other factors that have not been investigated in the present study. Hence, for the evaluation of different models of hemispheric specialization, further research should also include the two basic emotions (namely disgust and surprise) that were excluded from the present study due to experimental constraints.

First, it might be helpful to replicate the present study using “surprise” as a stimulus category, since surprise, like happiness, has been classified and described by a number of authors as a positive type of expression and has also been assigned a specific hemispheric activation pattern (Batty & Taylor, 2003). In line with the valence hypothesis, with its assumed left hemispheric superiority for positive expressions, one would expect an RR-composite advantage for the efficiency of surprised facial expressions, whereas the right hemisphere hypothesis, assuming a general right hemispheric superiority, would predict an LL-composite advantage. However, in line with the present study, one could even think of equal performance for LL- and RR-composites, if the expression of surprise is somehow related to the expression of happiness.

Second, the use of “disgust” as a stimulus category would probably be rewarding too, as a distinct neuronal correlate has been reported in a recent ERP study (Ashley et al., 2004), and clinical data point towards specific activation circuits (Sprengelmeyer et al., 1996; Sprengelmeyer et al., 1997). Thus, searching for a behavioural correlate of these reported distinct features of disgust might be quite promising and might even provide a hint supporting one or the other model of the hemispheric specialization of facial expression.

Finally, the present study should also be replicated including a “neutral” stimulus category.

By doing so, it would be possible to test the hypothesis that the differentiation between neutral and expressive faces occurs earlier than the differentiation among different emotional expressions (Batty & Taylor, 2003; Krolak-Salmon et al., 2001), whereas the present study only allows comparisons among different emotional expressions. An early differentiation between emotional and non-emotional faces would also be consistent with the Bruce and Young model, which assumes partially independent parallel processing of facial expression and facial identity (Bruce & Young, 1986). It would also be interesting to compare specific ERP components towards neutral and expressive faces, such as the P300 component, which has been reported to be sensitive to stimuli rated pleasant or unpleasant compared to neutral stimuli (Johnston, Miller, & Burleson, 1987; Lang, Nelson, & Collins, 1990). Thus, using neutral stimuli could probably be helpful for evaluating dimensional compared to categorical approaches and theoretical assumptions about the perceptual basis of facial expression. Finally, it would also be interesting to investigate whether there are differences in ERP components and behavioural performance resulting from the presentation of LL- vs. RR-composites of neutral faces, as is the case for fearful faces.

Referring to the issue of fitting data to a model dealing with hemispheric contributions to the processing of facial expressions, the first step always involves clarifying the following question: does the assumed right hemispheric superiority for the generation of facial emotion that has been reported by most studies refer exclusively to emotional expression, or do these studies rather report a general effect for facial movements?

In fact, the left hemiface has been reported to show stronger movements than the right hemiface regardless of the type of facial behaviour (Borod & Koff, 1983; Koff, Borod, & White, 1981). Moreover, facial movements without any emotional value have been reported to be equally affected by this left hemiface bias (Campbell, 1986). Thus, it seems to be crucial to keep in mind the precise nature of the stimuli used, as well as the conclusions drawn from the implemented experimental paradigms. Along the same line, the question arises whether the often assumed right hemispheric specialization for the perception of facial expressions is actually specific to facial expressions, or rather a specialization for faces in general, or, even more generally, a simple specialization for complex visual stimuli.


Another issue that should be addressed by future research is the need for a more fine-grained differentiation between different aspects of emotion processing, in order to separate effects stemming from the expression of facial emotions from those stemming from their perception, e.g. by conducting experiments with lateral stimulus presentation in combination with chimeric stimuli. When discussing natural face-to-face situations, several other influences besides the poser-perceiver paradox have to be considered as well, for example the effects of posing in a natural environment. It has, for instance, been demonstrated that posing and tilt effects by the poser largely influence the perception of facial expression (Nicholls, Wolfgang, Clode, & Lindell, 2002), and thus an attempt to separate these different components and influences would be well worthwhile. In sum, there are influences of the poser, such as handedness and facial symmetry; variables of the experimental set-up, such as stimulus presentation time (Naumann, Becker, Maier, Diedrich, & Bartussek, 1997) and the implemented paradigms; as well as factors depending on the perceiver, such as handedness, gender, age, and even personality traits (Canli et al., 2001; Hugdahl, Iversen, & Johnsen, 1993), let alone complex interactions between all these factors.

Apart from the issue of multiple influences on the outcome of a given study dealing with the perception or expression of facial emotions, it has also become obvious that, when working with concepts such as accuracy, efficiency, performance, dominance, and intensity, precise terminology is essential.

As a final remark, the present results challenge the traditional idea of a general right hemispheric dominance for emotion processing, and they stress the importance of a differentiated approach that takes into account various stimulus and task factors and clearly defines the underlying concepts. Although not large in number, there are nevertheless other studies claiming an important role of the left hemisphere for the processing of emotions, be it for perception or for expression. For example, in one of these studies, where subjects had to imagine scenes with elated, depressed, and neutral content, significantly stronger zygomatic muscle activity was found for the right side of subjects’ faces (Sirota & Schwartz, 1982). Another study, where subjects were asked to classify laterally presented emotional face stimuli as showing either positive or negative expressions, also reported a left hemispheric superiority, or a right visual field advantage respectively, for the recognition of emotional faces, regardless of the type of expression (Stalans & Wedding, 1985). These results have been interpreted as support for a left hemispheric superiority in emotion processing and as a consequence of a rather analytical task, since subjects were asked to classify positive (happiness and surprise) and negative (anger and disgust) expressions according to their valence as fast and as accurately as possible. However, not only is the task itself considerably different from the present study, but comparisons are also limited by the fact that only reaction times were analysed and, unfortunately, no accuracy rates. In another study, testing a split-brain patient on the ability of his individual hemispheres to discriminate between two types of expressions, Stone et al. (1996) reported that hemispheric performance strongly depended on the instruction, suggesting that there are not only right hemispheric templates of facial expression, but also left hemispheric contributions (Stone, Nisenson, Eliassen, & Gazzaniga, 1996).

Therefore, as pointed out earlier, it is also essential to direct one’s attention to the instructions, be it for the poser or be it for the perceiver.

In sum, according to the present data, expressing a facial emotion seems to be quite lateralized, with a relatively dominant role for the left hemisphere, whereas the perception of facial expressions seems to be less lateralized, as concluded from the EEG data. Since the factor “hemisphere” only marginally influenced the ERP results, the perception of facial expressions seems to be more localized to specific brain regions: not side per se, but rather site seems to make the crucial difference, although conclusions about the spatial characteristics of neuronal correlates can only be drawn with very limited reliability, due to methodological constraints and drawbacks of the EEG. As proposed earlier, e.g. by Borod et al. (1986), encoding and decoding of facial expressions might be largely independent (Borod, Koff, Lorch, & Nicholas, 1986; Pizzamiglio et al., 1989), although, at least in the present study, there has clearly been an influence of differences in expression (LL- vs. RR-composites) on differences in perception, as revealed by the ERP data as well as by the analysis of behavioural performance.


6 References

Adolphs, R., Damasio, H., Tranel, D., & Damasio, A. R. (1996). Cortical systems for the recognition of emotion in facial expression. Journal of Neuroscience, 16, 7678-7687.

Angrilli, A., Mauri, A., Palomba, D., Flor, H., Birbaumer, N., & Sartori, G. (1996). Startle reflex and emotion modulation impairment after right amygdala lesion. Brain, 119, 1991-2000.

Ashley, V., Vuilleumier, P., & Swick, D. (2004). Time course and specificity of event-related potentials to emotional expressions. NeuroReport, 15(1), 211-216.

Balconi, M., & Pozzoli, U. (2003). Face-selective processing and the effect of pleasant and unpleasant expressions on ERP correlates. International Journal of Psychophysiology, 49, 67-74.

Batty, M., & Taylor, M. J. (2003). Early processing of the six basic facial emotional expressions. Cognitive Brain Research, 17, 613-620.

Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551-565.

Beringer, J. (1995). Experimental Run Time System (Version 3.32c). Frankfurt: BeriSoft Cooperation.

Borod, J. C., Cicero, B., Obler, L. K., Welkowitz, J., Erhan, H. M., & Santschi, C. (1998). Right hemisphere emotional perception: Evidence across multiple channels. Neuropsychology, 12, 446-458.

Borod, J. C., Kent, J., Koff, E., & Mertin, C. (1989). Facial asymmetry while posing positive and negative emotions: Support for the right hemisphere hypothesis. Neuropsychologia, 26, 759-764.


Borod, J. C., & Koff, E. (1983). Hemiface mobility and facial expression asymmetry. Cortex,