
4.2 The ERP experiment

4.2.1 Passive condition: pre-attentive stimulus processing

In this part of the experiment, the high temporal resolution of ERPs was used to investigate pre-attentive processing of emotional prosody and its dimensions of arousal and valence.

A series of semantically neutral words (e.g., “Bahnhof” and “Treppe”) spoken in a distinct emotional intonation (for example: happy; 80%) was presented and interrupted by words spoken in a different emotional tone (for example: angry; 20%).
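Such an 80/20 sequence is typically constructed so that deviants never occur back to back and the block opens with standards, letting the regularity build up before it is violated. The following sketch is purely illustrative; the trial count, the no-adjacent-deviants constraint, and the opening-standard constraint are assumptions, not details of the actual presentation script used in this study:

```python
import random

def oddball_sequence(n_trials=200, p_deviant=0.2, seed=1):
    """Build an oddball trial sequence: ~80% standards, ~20% deviants,
    no two deviants in a row, and the block opens with a standard."""
    rng = random.Random(seed)
    n_dev = round(n_trials * p_deviant)
    n_std = n_trials - n_dev
    # choose distinct "gaps" between standards; slot 0 (before the first
    # standard) is excluded so the regularity is established first
    gaps = set(rng.sample(range(1, n_std + 1), n_dev))
    seq = []
    for slot in range(n_std + 1):
        if slot in gaps:
            seq.append("deviant")
        if slot < n_std:
            seq.append("standard")
    return seq

seq = oddball_sequence()
print(seq.count("standard"), seq.count("deviant"))  # 160 40
```

Placing each deviant into a distinct "gap" between standards guarantees the no-repetition constraint by construction, avoiding the rejection sampling that a naive shuffle would require.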

The hypothesis was that, if a standard emotional intonation could be established with the presented words, then any word differing in emotional tone, namely every deviant, would elicit the so-called mismatch negativity (MMN). However, the MMN, the marker of pre-attentive change detection, was not found in the obtained results. Additionally, no significant differences were found when the same category of stimuli (for example, all happily intonated words) was compared in its role as standard versus its role as deviant. The MMN is an ERP component evoked approximately 100–200 ms after stimulus onset when acoustically deviant stimuli are presented within a series of homogeneous stimuli. It is elicited by any discriminable change in some repetitive aspect of the auditory stimulation (Näätänen and Alho, 1995; Tervaniemi et al., 1999; Näätänen, 2001). Even though standard and deviant stimuli in the present experiment differed in more than one physical attribute (happily spoken words are louder and have a higher frequency than sadly spoken words), no MMN component was found. The question is how to explain its absence. Previous studies have shown that emotionally intonated words in a series of neutrally spoken words can provoke an MMN (Schirmer et al., 2002; Schröder et al., 2006). The same holds true for different emotional timbres in music (Goydke et al., 2004). In each of these studies, a distinct emotional tone was represented by one specific stimulus.
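Operationally, the MMN is usually quantified as a difference wave, the averaged deviant ERP minus the averaged standard ERP, whose mean amplitude is then taken in the 100–200 ms window. The sketch below illustrates that computation on simulated single-channel data; the sampling rate, epoch limits, and effect size are arbitrary illustrations, not values from this study:

```python
import numpy as np

def mmn_amplitude(standard_epochs, deviant_epochs, times, window=(0.100, 0.200)):
    """Difference wave (deviant average minus standard average) and its
    mean amplitude in the MMN window; epochs are (n_trials, n_samples)."""
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return diff, diff[mask].mean()

# toy single-channel data: 500 Hz sampling, epoch from -0.1 to 0.5 s
times = np.arange(-0.1, 0.5, 1 / 500.0)
rng = np.random.default_rng(0)
standards = rng.normal(0.0, 1e-6, (100, times.size))
deviants = rng.normal(0.0, 1e-6, (20, times.size))
# inject a 2 microvolt negativity into the deviant epochs at 100-200 ms
deviants[:, (times >= 0.1) & (times <= 0.2)] -= 2e-6
diff, amp = mmn_amplitude(standards, deviants, times)
```

With this convention, a present MMN shows up as a clearly negative `amp`; the null result reported above corresponds to `amp` not differing reliably from zero.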

A typical example is the word “Anna” spoken in certain emotional intonations (Schröder et al., 2006) or the syllables “dada” (Schirmer et al., 2002). Thus, the observed results could be explained either by dissimilarities in emotional processing or simply by differences among the physical attributes of the stimuli used. To exclude the possibility that stimuli differed only in their physical attributes, irrespective of emotional tone, a set of words spoken by different speakers was employed in this experiment. In this way, each presented word differed from every other. In order to detect the deviant, the physical attributes crucial for emotional prosody had to be processed. Nevertheless, with this approach it is more difficult to establish a series of standards that can be violated by a deviant, since the standards also differ from each other. Such an approach relies on the more complex acoustical structure of the deviant stimuli.

The results of this experiment can be interpreted as a consequence of a different position of the neural generators of the MMN when simple tones versus more complex stimuli are used. There is evidence that the neural generators of the MMN evoked by complex acoustic signals are located more medially than the generators for simple tones. When the MMN is elicited by pure tones, the corresponding neural generators are located more laterally, at approximately 5.6 cm from the midline axis (Alho et al., 1996; Tiitinen et al., 1993). The generators of the MMN evoked by the contrasts /ba/ vs. /da/ and /da/ vs. /ga/ are located more medially, at approximately 4.5 cm from the midline of an axis drawn between the preauricular points (Diesch and Luce, 1997). In the case of complex stimuli, the neural generators of the MMN might be placed even more medially than for these speech sounds, with a greater distance to the scalp electrodes, thus rendering the signal more difficult to observe. Another explanation might lie in different types of auditory cortex activation and lateral inhibition when simple tones, in contrast to words, are used. May et al. (1999) investigated the stimulation of the auditory cortex with pure tones in an oddball paradigm and with pure tones alone. According to their study, stimulation with pure tones in an oddball paradigm resulted in specific adaptation and lateral inhibition of the auditory cortex. In addition, they argue that the MMN to a frequency change in the oddball paradigm can be seen as the result of the activity of auditory neurons responding to the deviant tone under the inhibitory influence exerted by the standards. As suggested by O’Leary et al. (1996), auditory perception of spectrally complex stimuli yields a broad activation of the auditory cortex. Electrophysiological studies in non-human primates demonstrated that complex sounds evoke an extended activation in the auditory cortex (Rauschecker et al., 1995). Also, when the perception of acoustically controlled complex speech (sine-wave analogues) was compared to complex non-speech stimuli, larger portions of brain areas were activated (Vouloumanos et al., 2001).

According to the studies reviewed above, stimulation with spectrally complex stimuli, such as words, may result in a more widespread activation of the auditory cortex. Presentation of the standard stimuli creates a broad pattern of adaptation and lateral inhibition that affects mostly the areas subsequently activated by a deviant sound.

Under these conditions, the overlap between these neuronal populations would be substantial, and although the deviant stimulus provokes neuronal activity, that activity might not differ sufficiently from the response to the standard to evoke a signal recordable at the scalp.


4.2.2 Active condition: attentive stimulus processing

The main objective of this part of the experiment was to delineate whether attentive processing of emotional prosody differs along the dimensions of valence and arousal. In a so-called oddball paradigm, frequent standard words spoken in a specific emotional prosody (for example, angry) were interrupted by infrequent deviants (20%). The deviants differed with respect to valence in the valence condition (standard: angry; deviant: happy) and with respect to arousal in the arousal condition (standard: angry; deviant: sad). Participants had to classify deviant stimuli as positive or negative in the valence condition, and as high or low in arousal in the arousal condition. As expected, deviant stimuli elicited a P3b component with a parieto-occipital maximum, which is usually correlated with stimulus evaluation (Shahin et al., 2006). Interestingly, the results showed significant differences when the data of the arousal task and of the valence task were compared.
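A P3b effect of this kind is typically quantified as the mean amplitude over parietal sites in a post-stimulus window, computed separately per emotional category. The sketch below illustrates that measurement on simulated data; the 300–600 ms window, trial counts, and amplitudes are illustrative assumptions only, not parameters of this study:

```python
import numpy as np

def p3b_mean_amplitude(epochs_by_emotion, times, window=(0.300, 0.600)):
    """Mean amplitude in a P3b window for each emotional category;
    epochs_by_emotion maps emotion -> (n_trials, n_samples) array."""
    sel = (times >= window[0]) & (times <= window[1])
    return {emo: ep.mean(axis=0)[sel].mean()
            for emo, ep in epochs_by_emotion.items()}

# toy parietal-channel data with a larger P3b for high-arousal voices
times = np.arange(-0.2, 0.8, 1 / 500.0)
rng = np.random.default_rng(1)
window = (times >= 0.3) & (times <= 0.6)
epochs = {}
for emotion, p3b in [("angry", 8e-6), ("happy", 7e-6),
                     ("relaxed", 3e-6), ("sad", 3e-6)]:
    ep = rng.normal(0.0, 1e-6, (40, times.size))
    ep[:, window] += p3b  # simulated P3b deflection
    epochs[emotion] = ep
means = p3b_mean_amplitude(epochs, times)
```

In this simulation, as in the pattern reported below, the high-arousal categories (angry, happy) yield the larger mean amplitudes.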

Only in the valence task did the P3b waveforms, and also the behavioral data, differ between the four emotional tones of voice. The highest mean amplitudes were elicited when angry and happily spoken words had to be evaluated. This finding corresponds with the behavioral data, which show better categorization of angry and happy intonations compared to relaxed and sad intonations, whose classification rates were overall very low. Hence, high-arousal stimuli yielded a higher mean P3b amplitude together with a higher rate of correct classification, irrespective of their prosodic valence.

It is presumed that the P3b component is produced by a neuronal network associated with attention and memory operations (Polich, 2007). It is also known that trait and arousal influence the operability of attentional resources and the P300 measures observed at the scalp (Kok, 2001; Polich and Kok, 1995). So far, neuroscientific research in the field of emotion and attention has mostly focused on the perception of emotional faces and facial expressions. Although the brain networks controlling the perception of auditory and visual stimuli differ, some similarities can still be found. The areas that control voice and face perception might share similar principles of brain organization (Belin et al., 2004). For example, activity in the voice-sensitive brain areas is modulated by a higher threat value of the voice (Grandjean et al., 2005). Such patterns of brain activation, depending on the interaction of emotion and attention, have also been found in the fusiform gyrus during the perception of visual stimuli (Vuilleumier et al., 2001).

The ERP results concerning the influence of valence and arousal on attentive processing in the visual domain are not consistent. Experimental data indicate that the valence of the perceived stimulus is evaluated at different time points in the information-processing stream (Delplanque et al., 2004; Smith et al., 2003) and that the valence effect can be reflected by amplitude modulation and by the slow positive wave (Smith et al., 2003). Exclusive arousal effects (Carretié and Iglesias, 1995; Cuthbert et al., 2000) and effects of both dimensions (Delplanque et al., 2005, 2006) have also been demonstrated. In general, stronger emotional effects have commonly been reported when more arousing, fearful, and happy pictures or facial expressions were compared to neutral ones (Vuilleumier et al., 2002; Schupp et al., 2004; Eimer et al., 2003). Previous studies also reported that the P3 component elicited by rare target stimuli is sensitive to the arousal dimension, with higher amplitudes obtained for highly arousing target stimuli than for less arousing ones (Diedrich et al., 1997).

Research in the auditory domain has shown that emotional stimuli, when compared to neutral ones (Ethofer et al., 2006; Beaucousin et al., 2007; Grandjean et al., 2005), increase the activity in the voice-sensitive brain regions, i.e., the superior temporal gyrus (STG) (Belin et al., 2002). These responses were found in different conditions, when subjects were instructed to attend either to emotional prosody (Ethofer et al., 2006; Beaucousin et al., 2007) or to other speech features, such as emotional semantics (Ethofer et al., 2006) or the speaker’s gender. Recently, using a regression analysis between the increased hemodynamic response and the acoustic parameters of speech (Wiethoff et al., 2008), stronger hemodynamic responses for emotional compared to neutral prosody were found, suggesting a strong modulating effect of arousal.

Lateralization effects

The reported P3b for angry intonation was significantly higher over the left than over the right hemisphere. This lateralization result is opposite to the theory of emotional valence, which predicts right-hemisphere dominance for negative emotional content and lateralization toward the left hemisphere for positive content (Davidson, 1992; Canli et al., 1998). However, such hemispheric asymmetries are not always reliably observed in healthy subjects. Studies that analyzed higher-frequency EEG components found higher activity for positive valence at right-hemisphere electrodes (Aftanas et al., 1998; Müller et al., 1999). A role of the left hemisphere in the control of attention toward negative stimuli has also been reported (Anderson and Phelps, 2001; Phelps et al., 2001). High arousal of emotional stimuli also induces higher theta activity over the left hemisphere (Aftanas et al., 2002). The variability of the fundamental frequency correlates with activity in the left primary auditory cortex and the left angular gyrus (Wiethoff et al., 2008). The left auditory cortex is also involved in the detection of acoustically deviant word stimuli (Tervaniemi et al., 2006). The lateralization toward the left side for angry stimuli might thus be explained by the influence of the fundamental frequency, the physical property that conveys most of the emotional prosody information and correlates with the elevated arousal of the speaker’s voice.

Summary

The ERP study presented here consisted of two parts, in which pre-attentive and attentive processing of emotional prosody with different degrees of arousal and valence were tested separately. The first part of the experiment demonstrated that prosodic intonations of different arousal and valence were not able to elicit the marker of pre-attentive auditory processing (the MMN). The absence of this marker is likely a consequence of the complexity of the speech sounds used. Moreover, the broad activation of the auditory cortex during the processing of such complex stimuli may have masked the MMN.

In the second part of the experiment, the oddball paradigm and explicit evaluation of the target stimuli were employed. Here, a modulation of the P3b component by the emotional content of the stimuli was shown. The observed modulation (and the allocation of resources) was influenced by the arousal dimension, but not by the valence of the voice intonations. The asymmetry and lateralization effect toward the left hemisphere might likewise reflect the influence of high arousal and, additionally, the acoustic properties of an angry intonation.

Bibliography

L. Aftanas, N. Lotova, V. Koshkarov, and S. Popov. Non-linear dynamical coupling between different brain areas during evoked emotions: An EEG investigation. Biological Psychology, 48(2):121–138, 1998.

L. Aftanas, A. Varlamov, S. Pavlov, V. Makhnev, and N. Reva. Time-dependent cortical asymmetries induced by emotional arousal: EEG analysis of event-related synchronization and desynchronization in individually defined frequency bands. International Journal of Psychophysiology, 44(1):67–82, 2002.

K. Alho, M. Tervaniemi, M. Huotilainen, J. Lavikainen, H. Tiitinen, R. Ilmoniemi, J. Knuutila, and R. Näätänen. Processing of complex sounds in the human auditory cortex as revealed by magnetic brain responses. Psychophysiology, 33(4):369–375, 1996.

M. Alpert, E. Pouget, and R. Silva. Reflections of depression in acoustic measures of the patient’s speech. Journal of Affective Disorders, 66(1):59–69, 2001.

K. Alter, E. Rank, S. Kotz, U. Toepel, M. Besson, A. Schirmer, and A. Friederici. Affective encoding in the speech signal and in event-related brain potentials. Speech Communication, 40(1-2):61–70, 2003.

V. Amassian, R. Cracco, P. Maccabee, J. Cracco, A. Rudell, and L. Eberle. Transcranial Magnetic Stimulation in Study of the Visual Pathway. Journal of Clinical Neurophysiology, 15(4):288, 1998.

A. Anderson and E. Phelps. Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature, 411(6835):305–309, 2001.


C. Baeken, L. Leyman, R. De Raedt, M. Vanderhasselt, and H. D’haenen. Lack of impact of repetitive High Frequency Transcranial Magnetic Stimulation on mood in healthy female subjects. Journal of Affective Disorders, 90(1):63–66, 2006.

C. Baeken, L. Leyman, R. De Raedt, M. Vanderhasselt, and H. D’haenen. Left and right High Frequency repetitive Transcranial Magnetic Stimulation of the dorsolateral prefrontal cortex does not affect mood in female volunteers. Clin Neurophysiol, 2007.

R. Baken and R. Orlikoff. Clinical Measurement of Speech and Voice. Singular Pub Group, 2000.

R. Banse and K. Scherer. Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70(3):614–636, 1996.

P. Bard. A diencephalic mechanism for the expression of rage with special reference to the central nervous system. Am. J. Physiol., 84:490–513, 1928.

A. Barker, R. Jalinous, and I. Freeston. Non-invasive magnetic stimulation of human motor cortex. Lancet, 1(8437):1106–7, 1985.

J. Barrett, V. Della-Maggiore, P. Chouinard, and T. Paus. Mechanisms of action underlying the effect of repetitive transcranial magnetic stimulation on mood: behavioral and brain imaging studies. Neuropsychopharmacology, 29(6):1172–89, 2004a.

J. Barrett, G. Pike, and T. Paus. The role of the anterior cingulate cortex in pitch variation during sad affect. European Journal of Neuroscience, 19(2):458–464, 2004b.

V. Beaucousin, A. Lacheret, M. Turbelin, M. Morel, B. Mazoyer, and N. Tzourio-Mazoyer. FMRI Study of Emotional Speech Comprehension. Cerebral Cortex, 17(2):339, 2007.

A. Beck, R. Steer, and G. Brown. Beck Depression Inventory (BDI-II). San Antonio (TX): Psychological Corporation, 1996.

S. Behrens. Characterizing sentence intonation in a right hemisphere-damaged population. Brain Lang, 37(2):181–200, 1989.


P. Belin, R. Zatorre, P. Lafaille, P. Ahad, and B. Pike. Voice-selective areas in human auditory cortex. Foundations in Social Neuroscience, 2002.

P. Belin, S. Fecteau, and C. Bédard. Thinking the voice: neural correlates of voice perception. Trends in Cognitive Sciences, 8(3):129–135, 2004.

R. Belmaker. Antibipolar potential for transcranial magnetic stimulation. Bipolar Disorders, 1(2):71–72, 1999.

S. Bestmann, J. Baudewig, H. Siebner, J. Rothwell, and J. Frahm. Functional MRI of the immediate impact of transcranial magnetic stimulation on cortical and subcortical motor circuits. European Journal of Neuroscience, 19(7):1950–1962, 2004.

K. Bühler. Sprachtheorie. Gustav Fischer, Jena, 1934.

P. Boersma and D. Weenink. PRAAT, a system for doing phonetics by computer, version 3.4. Institute of Phonetic Sciences of the University of Amsterdam, Report, 132:182, 1996.

V. Bostanov and B. Kotchoubey. Recognition of affective prosody: Continuous wavelet measures of event-related brain potentials to emotional exclamations. Psychophysiology, 41(2):259–268, 2004.

T. Buchanan, K. Lutz, S. Mirzazade, K. Specht, N. Shah, K. Zilles, and L. Jäncke. Recognition of emotional prosody and verbal components of spoken language: an fMRI study. Cognitive Brain Research, 9(3):227–238, 2000.

A. Cancelliere and A. Kertesz. Lesion localization in acquired deficits of emotional expression and comprehension. Brain Cogn, 13(2):133–47, 1990.

T. Canli, J. Desmond, Z. Zhao, G. Glover, and J. Gabrieli. Hemispheric asymmetry for emotional stimuli detected with fMRI. NeuroReport, 9(14):3233, 1998.

W. Cannon. The James-Lange theory of emotions: a critical examination and an alternative theory. American Journal of Psychology, 39:106–124, 1927.


L. Carretié and J. Iglesias. An ERP study on the specificity of facial expression processing. International Journal of Psychophysiology, 19(3):183–192, 1995.

R. Chen, C. Gerloff, J. Classen, E. Wassermann, M. Hallett, and L. G. Cohen. Safety of different inter-train intervals for repetitive transcranial magnetic stimulation and recommendations for safe ranges of stimulation parameters. Electroencephalography and Clinical Neurophysiology/Electromyography and Motor Control, 105(6):415–421, 1997.

R. Cowie and E. Douglas-Cowie. Automatic statistical analysis of the signal and prosodic signs of emotion in speech. In Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP 96), vol. 3, 1996.

B. Cuthbert, H. Schupp, M. Bradley, N. Birbaumer, and P. Lang. Brain potentials in affective picture processing: covariation with autonomic arousal and affective report. Biological Psychology, 52(2):95–111, 2000.

C. Darwin. The Expression of the Emotions in Man and Animals. 1872 (reprinted 1965).

R. Davidson. Emotion and affective style: Hemispheric Substrates. Psychological Science, 3(1):39–43, 1992.

R. Davidson, A. Shackman, and J. Maxwell. Asymmetries in face and brain related to emotion. Trends in Cognitive Sciences, 8(9):389–391, 2004.

J. Dearing, M. George, B. Greenberg, E. Wassermann, T. Schlaepfer, D. Murphy, M. Hallett, and R. Post. Mood effects of prefrontal repetitive high frequency transcranial magnetic stimulation (rTMS) in healthy volunteers. CNS Spectrums, 2:53–68, 1997.

S. Delplanque, M. Lavoie, P. Hot, L. Silvert, and H. Sequeira. Modulation of cognitive processing by emotional valence studied through event-related potentials in humans. Neuroscience Letters, 356(1):1–4, 2004.

S. Delplanque, L. Silvert, P. Hot, and H. Sequeira. Event-related P3a and P3b in response to unpredictable emotional stimuli. Biological Psychology, 68(2):107–120, 2005.


S. Delplanque, L. Silvert, P. Hot, S. Rigoulot, and H. Sequeira. Arousal and valence effects on event-related P3a and P3b during emotional categorization. International Journal of Psychophysiology, 60(3):315–322, 2006.

O. Diedrich, E. Naumann, S. Maier, G. Becker, and D. Bartussek. A frontal positive slow wave in the ERP associated with emotional slides. Journal of Psychophysiology, 11(1):71–84, 1997.

E. Diesch and T. Luce. Magnetic mismatch fields elicited by vowels and consonants. Experimental Brain Research, 116(1):139–152, 1997.

G. Dogil, H. Ackermann, W. Grodd, H. Haider, H. Kamp, J. Mayer, A. Riecker, and D. Wildgruber. The speaking brain: a tutorial introduction to fMRI experiments in the production of speech, prosody and syntax. Journal of Neurolinguistics, 15(1):59–90, 2002.

E. Donchin. Surprise!...Surprise? Psychophysiology, 18(5):493–513, 1981.

W. Drevets. Neuroimaging and neuropathological studies of depression: implications for the cognitive-emotional features of mood disorders. Current Opinion in Neurobiology, 11(2):240–249, 2001.

W. Drevets, T. Videen, J. Price, S. Preskorn, S. Carmichael, and M. Raichle. A functional anatomical study of unipolar depression. Journal of Neuroscience, 12(9):3628, 1992.

G. Duncan, J. Laver, and M. Jack. A psycho-acoustic interpretation of variations in divers’ voice fundamental frequency in a pressured helium–oxygen environment. Work in Progress, 16:9–16, 1983.

M. Eimer, A. Holmes, and F. McGlone. The role of spatial attention in the processing of facial expression: An ERP study of rapid brain responses to six basic emotions. Cognitive, Affective, & Behavioral Neuroscience, 3(2):97–110, 2003.

P. Ekman, W. Friesen, and P. Ellsworth. Emotion in the Human Face: Guidelines for Research and an Integration of Findings. Pergamon Press, 1972.


C. Epstein, K. Meador, D. Loring, R. Wright, J. Weissman, S. Sheppard, J. Lah, F. Puhalovich, L. Gaitan, and K. Davey. Localization and characterization of speech arrest during transcranial magnetic stimulation. Clinical Neurophysiology, 110(6):1073–1079, 1999.

T. Ethofer, S. Anders, M. Erb, C. Herbert, S. Wiethoff, J. Kissler, W. Grodd, and D. Wildgruber. Cerebral pathways in processing of affective prosody: A dynamic causal modeling study. Neuroimage, 30(2):580–587, 2006.

P. Fitzgerald, S. Fountain, and Z. Daskalakis. A comprehensive review of the effects of rTMS on motor cortical excitability and inhibition. Clinical Neurophysiology, 117(12):2584–2596, 2006.

M. George. Changes in mood and hormone levels after rapid-rate transcranial magnetic stimulation (rTMS) of the prefrontal cortex, 1996.

M. George, E. Wassermann, W. Williams, A. Callahan, T. Ketter, P. Basser, M. Hallett, and R. Post. Daily repetitive transcranial magnetic stimulation (rTMS) improves mood in depression. Neuroreport, 6(14):1853–1856, 1995.

M. George, P. Parekh, N. Rosinsky, T. Ketter, T. Kimbrell, K. Heilman, P. Herscovitch, and R. Post. Understanding emotional prosody activates right hemisphere regions. Archives of Neurology, 53(7):665–670, 1996.

K. Goydke, E. Altenmüller, J. Möller, and T. Münte. Changes in emotional tone and instrumental timbre are reflected by the mismatch negativity. Cognitive Brain Research, 21(3):351–359, 2004.

D. Grandjean, D. Sander, G. Pourtois, S. Schwartz, M. Seghier, K. Scherer, and P. Vuilleumier. The voices of wrath: brain responses to angry prosody in meaningless speech. Nature Neuroscience, 8(2):145–146, 2005.

M. Gross, L. Nakamura, A. Pascual-Leone, and F. Fregni. Has repetitive transcranial magnetic stimulation (rTMS) treatment for depression improved? A systematic review and meta-analysis comparing the recent vs. the earlier rTMS studies. Acta Psychiatrica Scandinavica, 116(3):165–173, 2007.

K. Hammerschmidt and U. Jürgens. Acoustical Correlates of Affective Prosody. Journal of Voice, 21(5):531–540, 2007.

J. Hansen. Event-related potential software system. La Jolla: Event-Related Potential Laboratory, University of California San Diego, 1993.

R. Hoffman, N. Boutros, S. Hu, R. Berman, J. Krystal, and D. Charney. Transcranial magnetic stimulation and auditory hallucinations in schizophrenia. The Lancet, 355(9209):1073–1075, 2000.

C. Izard. Innate and universal facial expressions: evidence from developmental and cross-cultural research. Psychological Bulletin, 115(2):288–299, 1994.